Diphtheria, Tetanus, and Pertussis Immunity in Indian Adults and Immunogenicity of Td Vaccine

The rise of diphtheria cases in adults is a cause of concern worldwide. Pertussis is now also affecting adults. We assessed serum levels of tetanus, diphtheria, and pertussis antibodies in 62 adults in Pune, India, who had missed their primary immunization. All adults were then given three doses of tetanus-diphtheria (Td) vaccine at 0, 1, and 6 months. All adults were immune to tetanus, but only 78% had long-term protection. For diphtheria, 88% were protected, but only 9% had long-term immunity. Only 60% were immune to pertussis. After three doses of the vaccine, long-term immunity to diphtheria and tetanus increased to 87% and 97%, respectively (P < 0.05). Geometric mean titres (GMTs) of both antibodies also increased significantly. The vaccine caused minor local reactions and mild fever in a few subjects. There is a need for three doses of Td vaccine in Indian adults who missed their primary immunization. Susceptibility to pertussis also needs to be explored further.

Introduction

In the 1990s, a large epidemic of diphtheria began in Russia and subsequently spread to the Newly Independent States (NIS) of the former Soviet Union. About two-thirds of the reported cases occurred among persons ≥15 years of age. In Ukraine too, at the peak of the epidemic in 1995, more than 80% of cases were reported in the same age group [1-4]. In fact, serologic studies in the 1980s from these countries had suggested that >50% of adults were susceptible to diphtheria [5, 6]. Since then, diphtheria immunity among adults has been an important issue. Tetanus too remains an important public health problem in many parts of the world, particularly in tropical developing countries. In 2008, the total number of deaths caused by tetanus worldwide was estimated at more than 61,000 [7].
In India, the DTP vaccine was introduced into routine immunization in 1978, resulting in a substantial decline in incidence in the pediatric population. The effect was a shift of the infection to older age groups: in 1998, around 65% of cases occurred above 3 years of age. This age shift justified the need for booster diphtheria immunization [8]. The World Health Organization (WHO) therefore recommends three doses of a diphtheria toxoid-containing vaccine for adults who have not previously been primed, either by natural infection or by vaccination [9]. Pertussis is generally considered a childhood disease but was well documented in adults during the twentieth century [10-12]. Recently, in the United States, there has been an increase in pertussis among adolescents and adults [13, 14]. In India, there are no reports of pertussis in adults yet, but chances are that such cases go undetected; the susceptibility is also not known. In the present study, we assessed diphtheria, pertussis, and tetanus immunity in adult individuals who had missed primary DTP immunization. We also assessed the effect of a three-dose schedule of a tetanus-diphtheria (Td) vaccine in this population. Td vaccine is not combined with whole-cell pertussis because of higher reaction rates in this age group [15]. The Td vaccine manufactured by Serum Institute of India Ltd (SIIL) is licensed in India. It has also been prequalified by WHO for sale to United Nations agencies since 1995. The vaccine is safe and immunogenic [16], and millions of doses have been used worldwide.

Setting. The study was conducted at the clinic of Serum Institute of India Research Foundation (SIIRF), Pune, after Ethics Committee approval. Adult employees of the Poonawalla group of companies were enrolled after written informed consent was obtained. The study was conducted between May and November 2007.

Study Procedures. On day 0, subjects were screened for eligibility and then enrolled in the study.
Blood samples were collected for baseline serological status. Three doses of the Td vaccine were given at 0, 1, and 6 months to all subjects, who were asked to record adverse events in diary cards. On each visit, a medical history was taken for adverse events and concomitant medications, and a physical examination was performed. One month after the third dose, a second blood sample was taken for serology.

Study Population. The Expanded Programme on Immunization (EPI), which included the DTP and DT vaccines, was initiated in India in 1978. Hence, healthy adults aged 30 to 65 years (born before 1978) who gave consent and who had not received DTP or DT vaccines in the past were selected. Subjects who were pregnant or lactating, or who had any medical disorder or allergy, were excluded. A contraindication for subsequent doses was any serious adverse event (SAE) following the previous dose.

Safety Assessment. The subjects were closely monitored for 15 minutes following each dose. They recorded all adverse events in a diary card. A medical history was taken and a physical examination was performed at each visit.

Statistics. Age was expressed as mean, standard deviation (SD), and median. Gender was expressed in percentages. Proportions of subjects with seroprotection against diphtheria and tetanus before and after vaccination were calculated and compared by the McNemar test. The percentage of subjects with baseline seronegative anti-pertussis IgG antibody titres was calculated. GMTs of anti-D and anti-T antibodies were calculated, and the paired t-test was used to compare pre- and postvaccination GMTs. The incidence of adverse reactions was expressed in percentages.

Results

A total of 62 subjects were screened and enrolled, and baseline blood samples were collected from all of them. Three subjects were lost to follow-up, while one missed one of the doses; immunogenicity was therefore assessed in 58 subjects. All subjects were male. The mean age was 45 years (SD 7.7 years) and the median was 43.5 years.
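The two summary statistics named under Statistics, geometric mean titres and the McNemar comparison of paired seroprotection proportions, can be sketched as follows. This is a minimal illustration, not the study's analysis code; the titre values and discordant-pair counts are hypothetical.

```python
import math

def gmt(titres):
    """Geometric mean titre: exponential of the mean of log titres."""
    return math.exp(sum(math.log(t) for t in titres) / len(titres))

def mcnemar_statistic(b, c):
    """Continuity-corrected McNemar chi-square for paired proportions.

    b = subjects seroprotected only before vaccination,
    c = subjects seroprotected only after vaccination.
    """
    return (abs(b - c) - 1) ** 2 / (b + c)

# Hypothetical paired titres (IU/mL), for illustration only.
pre = [0.02, 0.05, 0.10, 0.20, 0.04]
post = [1.2, 2.5, 0.9, 3.1, 1.8]
print(gmt(pre), gmt(post))

# With 0 subjects losing and 45 gaining long-term protection, the
# statistic far exceeds the 3.84 threshold (chi-square, 1 df, P < 0.05).
print(mcnemar_statistic(0, 45))
```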
After three doses of Td vaccine, the proportions with seroprotection changed significantly. All subjects achieved seroprotection against diphtheria, while 87% reached long-term protection (P < 0.02). For tetanus, 97% of subjects attained long-term seroprotection; this change was also significant. GMTs of both antibodies likewise increased significantly (Table 1).

Discussion

In our study, 88% of the adult population was adequately protected against diphtheria, but only 9% had long-term protection. Short-term protection against tetanus was 100%, most probably because of frequent TT boosters; but here too, only 74% had long-term protection. A study in Delhi among a random sample of healthy adults reported that 53% of adults were unprotected against diphtheria; 22% had only basic protection against it; 25% were protected against both diseases; and 47% were susceptible to tetanus [17]. Both studies clearly demonstrate the need for Td vaccination in Indian adults, especially those who were never immunized. A total of 40% of subjects were not adequately protected against pertussis, and more than half of them had no seroprotection at all. This is a cause for concern and needs to be confirmed in larger studies. Developed countries have already seen a rise in cases in the adolescent and adult age groups [18, 19], and outbreaks have been reported among children and adults in countries such as Afghanistan, Israel, and Taiwan (Taipei) [20-22]. Though data from India are not available, it is quite likely that pertussis in adults is a problem there as well. There is a definite need for larger serosurveys among adults, as well as studies defining the disease burden. The study also demonstrated that three doses of Td vaccine induce an adequate immune response against diphtheria and tetanus in unvaccinated adults, and that the vaccine is safe and well tolerated. These results are in line with other studies of Td vaccine [16].
Despite certain limitations (the study was not community based, women were not represented, and the sample size was small), the study indicates a definite need for Td vaccination in the adult Indian population that did not receive primary immunisation with three doses of diphtheria-containing vaccines. The Td vaccine of SIIL, when given in three doses to adults 30-65 years of age, is immunogenic and safe. Susceptibility to diphtheria increases with age. Larger studies should be undertaken in the Indian adult population to determine the prevalence of pertussis.
Border Collision Bifurcations in Two Dimensional Piecewise Smooth Maps

Recent investigations of the bifurcations in switching circuits have shown that many atypical bifurcations can occur in piecewise smooth maps which cannot be classified among the generic cases, like the saddle-node, pitchfork, or Hopf bifurcations occurring in smooth maps. In this paper we first present experimental results to establish the need for the development of a theoretical framework and classification of the bifurcations resulting from border collision. We then present a systematic analysis of such bifurcations by deriving a normal form: the piecewise linear approximation in the neighborhood of the border. We show that there can be eleven qualitatively different types of border collision bifurcations depending on the parameters of the normal form, and these are classified under six cases. We present a partitioning of the parameter space of the normal form showing the regions where the different types of bifurcations occur. This theoretical framework will help in explaining bifurcations in all systems which can be represented by two dimensional piecewise smooth maps.

Introduction

Most studies in bifurcation theory have been carried out on smooth dynamical systems like the Hénon map, the Ikeda map, and the pendulum equation. In the class of nonsmooth systems, maps with a square root singularity have been studied extensively [1-4] because of their application in impact oscillators and other impacting mechanical systems. On the other hand, piecewise smooth maps with finite one-sided partial derivatives at the discontinuity have attracted relatively little attention. Though the possibility of strange bifurcations, like period-2 to period-3 or period-2 to an 18-piece chaotic attractor, has been reported [5], no systematic study has been made to categorize the possible bifurcations in piecewise smooth maps.
Such maps were considered to be just a mathematical possibility, as no physical system with these characteristics was known. However, in recent years it has been discovered that a large class of engineering systems, particularly the switching circuits used in power electronics, yield piecewise smooth maps under discrete modeling, and border collision bifurcations are quite common in such systems [6,7]. This has provided the motivation for the present study, whose objective is to systematically analyze all the different kinds of bifurcations that can occur in two dimensional piecewise smooth maps.

We consider a general two-dimensional piecewise smooth map g(x̂, ŷ; ρ) which depends on a single parameter ρ. Let Γ_ρ, given by x̂ = h(ŷ, ρ), denote a smooth curve that divides the phase plane into two regions R_A and R_B. The map is given by

g(x̂, ŷ; ρ) = g_1(x̂, ŷ; ρ) for (x̂, ŷ) ∈ R_A,
             g_2(x̂, ŷ; ρ) for (x̂, ŷ) ∈ R_B.

It is assumed that the functions g_1 and g_2 are both continuous and have continuous derivatives. The map g is continuous, but its derivative is discontinuous at the curve Γ_ρ, called the "border". It is further assumed that the one-sided partial derivatives at the border are finite. We study the bifurcations of this system as the parameter ρ is varied. If a bifurcation occurs while the fixed point of the map is in one of the smooth regions R_A or R_B, it is of one of the generic types, namely period doubling, saddle-node, or Hopf bifurcation. But if a fixed point collides with the borderline, there is a discontinuous jump in the eigenvalues of the Jacobian matrix. In such a case an eigenvalue may not "cross" the unit circle in a smooth way, but rather "jumps" over it as a parameter is varied continuously. One therefore cannot classify the bifurcations arising from such border collisions as those occurring in smooth systems, where the eigenvalues cross the unit circle smoothly. In this paper we develop a new classification for border collision bifurcations.
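The eigenvalue "jump" described above can be sketched numerically. The trace and determinant values below are illustrative, not taken from any system in the paper; the point is only that the eigenvalues on the two sides of the border differ discontinuously, so an eigenvalue can pass from inside to outside the unit circle without ever crossing it.

```python
def eigenvalues(tau, delta):
    """Eigenvalues of a 2x2 Jacobian from its trace and determinant."""
    disc = complex(tau * tau - 4 * delta)
    return (tau + disc ** 0.5) / 2, (tau - disc ** 0.5) / 2

# Illustrative Jacobian data on the two sides of the border:
before = eigenvalues(0.8, 0.15)    # fixed point in R_A: both inside unit circle
after = eigenvalues(-2.0, 0.15)    # after crossing into R_B: one outside
print([abs(l) for l in before])    # both moduli < 1
print([abs(l) for l in after])     # one modulus > 1: the eigenvalue "jumped"
```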
The paper is organized as follows. In Sec. 2, we illustrate the problem with the help of an example of a switching circuit. In Sec. 3, the normal form is derived. In Sec. 4, we analyse the border collision bifurcations occurring in piecewise smooth maps and present a partitioning of the parameter space of the normal form exhibiting the various kinds of border collision bifurcations. We conclude in Sec. 5.

Examples of border collision bifurcations in a power electronic circuit

The subject of power electronics is concerned with the high-efficiency conversion of electric power, from the form available at the power source to the form required by the specific appliance or load. Power electronic technology is increasingly finding application in the home and workplace: familiar examples are domestic light dimmers, fluorescent lamp ballasts, battery chargers, and the switch-mode power supplies of all electronic appliances, including the personal computer. In contrast with mainstream electronics, power electronics is characterized by the use of electronic switches which operate in an "on" or "off" state. Since electrical power supplies can be either dc or ac, there are four basic types of power converters: ac-dc, dc-ac, dc-dc, and ac-ac. Here we will consider one of the simplest but most useful power converters, the dc-dc buck converter, which is used to convert a dc input to a dc output at a lower voltage.

Figure 1: (a) The buck converter with duty cycle controlled by voltage feedback; (b) the three ways the state can move from one sampling instant to the next.

The circuit diagram of the buck converter is shown in Fig.1(a). The controlled switch S (generally realized by a MOSFET) opens and closes in succession, thus "chopping" the dc input into a square wave that alternates between the input voltage V_in and zero.
The pulsed waveform is then low-pass filtered by a simple LC network, removing most of the switching ripple and delivering a relatively smooth dc output voltage v to the load resistance R. The diode D provides a path for the continuation of the inductor current during the off period. The dc output voltage can easily be varied by changing the duty ratio, i.e., the fraction of time that the switch is closed in each cycle. In practice it is necessary to regulate v against changes in the input voltage and the load current. For example, if a buck converter is used to convert the standard 5 V dc supply used in computers to the 3.3 V needed for the Pentium CPU chip, it would be necessary to regulate the average output voltage at 3.3 V in spite of the varying power demand of the chip. This can be achieved by controlling the switch S by voltage feedback, as shown in Fig.1. In this simple proportional controller, a constant reference voltage V_ref is subtracted from the output voltage and the error is amplified with gain A to form a control signal v_con. The switching signal is generated by comparing the control signal with a periodic sawtooth (ramp) waveform: S turns on whenever v_con goes below v_ramp, and a latch allows it to switch off only at the end of the ramp cycle. Though this circuit or its variants are used in a large number of practical applications requiring a regulated dc power supply, it has been demonstrated [8,9,10] that the system can exhibit bifurcations and chaos over a large portion of the parameter space. To investigate the dynamics analytically, we obtain a two dimensional Poincaré map by sampling the inductor current and capacitor voltage at the end of each ramp cycle. Because of the transcendental form of the equations, the map cannot be determined in closed form; in simulation, it has to be obtained numerically. It is, however, possible to infer the form of the map.
There are three ways in which the system can move from one observation point to the next: (a) the control voltage is above the ramp waveform throughout and the switch remains off, (b) the cycle involves an off period and an on period, (c) the control voltage is below the ramp waveform throughout and the switch remains on. The three cases are shown in Fig.1(b) and are represented by three different expressions of the map. The borderlines are given by the conditions where the control voltage grazes the top and the bottom of the ramp waveform. Therefore there are three compartments in the phase space, separated by two borderlines, and we have a piecewise smooth map.

We present experimentally obtained bifurcation diagrams for this system for different sets of parameter values. One such diagram is shown in Fig.2(a). Here we find two parameter values (shown with arrows) at which a periodic orbit directly bifurcates into a chaotic orbit. Such bifurcations have been reported earlier in [8,11,12,13]. The slight expansion of the attractor at the bifurcation point is due to system noise and can be ignored in theoretical studies. In Fig.2(b) we present the continuous time plots of v_con and the triangular wave voltage at the bifurcation point marked by the second arrow, where a period-3 orbit bifurcates into a 3-piece chaotic orbit. It is seen that the v_con waveform grazes the top of the triangular wave, which means that a border collision bifurcation has occurred. The distinguishing feature of this chaotic attractor is that there is no periodic window over a large range of the parameter value, and we find from simulation that there are no coexisting attractors in this range. We say a chaotic attractor is robust if, for its parameter values, there exists a neighborhood in the parameter space with no periodic attractor and the chaotic attractor is unique in that neighborhood [14]. The chaotic attractor resulting from this border collision is therefore robust.
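The stroboscopic sampling described above can be sketched numerically. The following is a simplified simulation, not the paper's experimental setup: the parameter values are assumed ones typical of the buck-converter literature, the latch is ignored (the switch state simply follows the comparator within each cycle), and a forward Euler integrator stands in for the exact piecewise-linear solution.

```python
# Illustrative stroboscopic (Poincare-like) map of the voltage-mode
# buck converter: integrate one ramp cycle, sample (i, v) at its end.
T = 400e-6            # ramp period (s); all values below are assumed
VL, VU = 3.8, 8.2     # ramp lower/upper voltages (V)
Vin, Vref = 24.0, 11.3
A = 8.4               # error-amplifier gain
L, C, R = 20e-3, 47e-6, 22.0

def cycle(i, v, steps=400):
    """Forward-Euler integration over one ramp cycle; returns the state
    at the next sampling instant (end of ramp)."""
    dt = T / steps
    for k in range(steps):
        vramp = VL + (VU - VL) * (k / steps)
        vcon = A * (v - Vref)          # control signal
        s_on = vcon < vramp            # switch S closed (latch ignored)
        di = ((Vin if s_on else 0.0) - v) / L
        dv = (i - v / R) / C
        i, v = max(i + di * dt, 0.0), v + dv * dt  # diode blocks i < 0
    return i, v

i, v = 0.6, 11.0
for _ in range(200):                   # iterate the sampled 2-D map
    i, v = cycle(i, v)
print(i, v)                            # orbit stays bounded
```

Plotting the sampled v against a slowly swept parameter (e.g., Vin or A) is how bifurcation diagrams like Fig.2(a) are produced numerically.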
The question is: under what condition does robust chaos occur? Another experimental bifurcation diagram for this system is shown in Fig.3(a). The arrow shows a period doubling bifurcation, but the two bifurcated orbits do not diverge perpendicularly from the path of the fixed point before the critical parameter value. This is therefore not a standard pitchfork bifurcation. This kind of bifurcation has also been reported in [15,16]. Fig.3(b) gives the continuous time plots of v_con and the triangular wave voltage just after the bifurcation and shows that the period doubling occurred at a border collision. Again the question is: under what condition does this special type of period doubling occur? It has been reported earlier [17] that this system has coexisting attractors for some ranges of parameter values. Since multiple attractors cannot be seen in experimental bifurcation diagrams, we present a numerically obtained bifurcation diagram in Fig.4 showing the evolution of the main attractor and a coexisting attractor. It is found that the chaotic attractor comes into existence "out of nothing" at a particular parameter value. Under what condition can such strange bifurcations occur? In the following sections we develop a complete theory of bifurcations in piecewise smooth maps, from which the answers to the above questions can be derived.

The normal form

Since the local structure of border collision bifurcations depends only on the local properties of the map in the neighborhood of the border, we study them with the help of "normal forms": the piecewise affine approximations of g in the neighborhood of the border. First make the change of variables x̃ = x̂ − h(ŷ, ρ), ỹ = ŷ; this ρ-dependent change of variables moves the border to the ỹ axis. The map g(x̂, ŷ; ρ) can then be written as a map f(x̃, ỹ; ρ) whose border is x̃ = 0. Suppose that when ρ = ρ_0 the map f(x̃, ỹ; ρ) has a fixed point P_0 on the border. Let e_1 be a tangent vector in the ỹ direction; the vector e_1 maps to a vector e_2.
We assume e_2 is not parallel to e_1. Define local coordinates as follows (cf. Fig. 5). Choose the point P_0 as the new origin, with e_1 along the ȳ direction and e_2 along the x̄ direction. In these x̄-ȳ coordinates, the fixed point P_0 is given by (0, 0), and the border Γ_ρ is given by x̄ = 0. We define the new parameter µ = ρ − ρ_0, so that the border collision occurs at µ = 0. Choose the scales such that at µ = 0 a unit vector along the ȳ-axis maps to a unit vector along the x̄-axis. The phase space is now divided into the two halves L and R, and the map f(x̃, ỹ; ρ) can be written as F(x̄, ȳ; µ), with F(0, 0; 0) = (0, 0). Linearizing F in the neighborhood of (0, 0; 0) in the side L, we have

F(x̄, ȳ; µ) ≈ (J_11 x̄ + J_12 ȳ + µ v_x, J_21 x̄ + J_22 ȳ + µ v_y).   (2)

Since a unit vector along the ȳ-axis maps to a unit vector along the x̄-axis, this particular choice of coordinates makes J_12 = 1 and J_22 = 0 in (2). Further, we note that J_11 is the trace (denoted τ_L) and J_21 is the negative of the determinant (denoted −δ_L) of the Jacobian matrix. Thus (2) becomes

F(x̄, ȳ; µ) ≈ (τ_L x̄ + ȳ + µ v_x, −δ_L x̄ + µ v_y) for x̄ ≤ 0.

Similarly, for the side R we obtain

F(x̄, ȳ; µ) ≈ (τ_R x̄ + ȳ + µ v_x, −δ_R x̄ + µ v_y) for x̄ ≥ 0,

where the corresponding quantities in R are defined in a similar way. We now make another change of variables so that the choice of axes is independent of the parameter. The coordinate transformation x = x̄, y = ȳ − µ v_y, together with rescaling the parameter by the factor (v_x + v_y), yields

G_2(x, y; µ) = (τ_L x + y + µ, −δ_L x) for x ≤ 0,
              (τ_R x + y + µ, −δ_R x) for x ≥ 0,   (5)

which is the desired 2-D normal form. Note that if (v_x + v_y) = 0, then the fixed point moves along the border as µ varies. Hence we assume the genericity condition (v_x + v_y) ≠ 0 to ensure that a border collision occurs at µ = 0. It is interesting to note that τ_L and δ_L are simply the trace and the determinant of the Jacobian matrix of the fixed point P_0 on the R_A side of the border Γ. Let P_ρ denote a fixed point of g(x̂, ŷ; ρ) defined on ρ_0 − ǫ < ρ < ρ_0 + ǫ for some small ǫ > 0; then P_ρ depends continuously on ρ. Assume that P_ρ is in region R_A when ρ < ρ_0, in region R_B when ρ > ρ_0, and on Γ when ρ = ρ_0.
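A minimal sketch of the normal form (5): the map is linear on each side of the border x = 0, continuous across it (at x = 0 both branches give (y + µ, 0)), and parameterized by the trace and determinant of the Jacobian on each side. The parameter values below are illustrative.

```python
# 2-D normal form: piecewise linear, continuous across the border x = 0.
def normal_form(x, y, mu, tau_L, delta_L, tau_R, delta_R):
    tau, delta = (tau_L, delta_L) if x <= 0 else (tau_R, delta_R)
    return tau * x + y + mu, -delta * x

# Illustrative parameters; the fixed point on side L is
# x* = mu / (1 - tau_L + delta_L), y* = -delta_L * x*.
tau_L, delta_L, tau_R, delta_R = 0.5, 0.3, -1.2, 0.3
mu = -0.1
xs = mu / (1 - tau_L + delta_L)   # = -0.125, which indeed lies in L
ys = -delta_L * xs
print(normal_form(xs, ys, mu, tau_L, delta_L, tau_R, delta_R))  # (xs, ys)
```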
For ρ < ρ_0, the eigenvalues of the Jacobian matrix of the fixed point P_ρ are denoted λ_1 and λ_2. Since the trace and the determinant of the Jacobian are invariant under a transformation of coordinates, we can obtain the values of τ_L and δ_L as

τ_L = λ_1 + λ_2,   δ_L = λ_1 λ_2.

The values of τ_R and δ_R can be calculated in a similar way for ρ > ρ_0. This property is very important in numerical computations. For a border-crossing periodic orbit of higher period, we examine the pth iterate of the map (if the period is p); the matrices in (5) then correspond to the pth iterate rather than the first iterate. When δ_L and δ_R are zero, the system becomes one dimensional and the normal form reduces to

x ↦ a x + µ for x ≤ 0,   x ↦ b x + µ for x ≥ 0,

where a and b are the slopes of the graph on the two sides of the border x = 0.

Classification of border collision bifurcations

Various combinations of the values of τ_L, τ_R, δ_L, and δ_R exhibit different kinds of bifurcation behavior as µ is varied through zero. To present a complete picture, we break up the four dimensional parameter space into regions with the same qualitative bifurcation phenomena. If the parameter combination is inside a region, then g and G_2 have the same types of bifurcations; if it is on a boundary, higher order terms are needed to determine the bifurcations of g. The fixed points of the system on the two sides of the boundary are given by

L* = (µ/(1 − τ_L + δ_L), −δ_L µ/(1 − τ_L + δ_L)),   R* = (µ/(1 − τ_R + δ_R), −δ_R µ/(1 − τ_R + δ_R)),

and the stability of each of them is determined by the eigenvalues λ_{1,2} = (τ ± √(τ² − 4δ))/2. If the eigenvalues are real, the slopes of the corresponding eigenvectors are given by −(δ/λ_1) and −(δ/λ_2), respectively. Since we consider only dissipative systems, we assume |δ_L| < 1 and |δ_R| < 1. Under this condition there can be four types of fixed points.

1. When δ > τ²/4, both eigenvalues of the Jacobian are complex, indicating that the fixed point is spirally attracting. If τ > 0 it is a clockwise spiral, and if τ < 0 the spiralling motion is counter-clockwise.
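The eigenvalue formula above determines the fixed-point type from (τ, δ) alone. The following sketch encodes that classification for the positive-determinant dissipative case (0 < δ < 1), using the text's terminology; the boundary values are the ones quoted in the surrounding discussion.

```python
# Classify a fixed point of the normal form from trace tau and
# determinant delta (sketch for the dissipative case 0 < delta < 1).
def classify(tau, delta):
    if tau * tau < 4 * delta:              # complex eigenvalue pair
        sense = "clockwise" if tau > 0 else "counter-clockwise"
        return f"spiral attractor ({sense})"
    if tau > 1 + delta:
        return "regular saddle"            # lambda_1 > 1 > lambda_2 > 0
    if tau < -(1 + delta):
        return "flip saddle"               # lambda_2 < -1 < lambda_1 < 0
    # remaining real-eigenvalue cases lie inside the unit circle
    return "regular attractor" if tau > 0 else "flip attractor"

print(classify(0.5, 0.3))    # spiral attractor (clockwise)
print(classify(1.6, 0.3))    # regular saddle
print(classify(-1.2, 0.3))   # flip attractor
```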
If the determinant is negative, there can be only two types of fixed points:

1. For −(1 + δ) < τ < (1 + δ), one eigenvalue is positive and the other negative, which means that the fixed point is a flip attractor.

When referring to sides L and R, these quantities carry the appropriate subscripts, i.e., λ_1L, λ_2L are the eigenvalues in side L and λ_1R, λ_2R are the eigenvalues in side R. As a fixed point collides with the border, its character can change from any one of the above types to any other. This provides a way of classifying border collision bifurcations. It may be noted that in some portions of the parameter space there may be no fixed point in one half of the phase space. For example, the location of L* calculated by the above formula may turn out to lie in side R. In such cases, the dynamics in L is determined by the character of the "virtual" fixed point. We denote such virtual fixed points with an overbar, as L̄* and R̄*. If the eigenvalues are real, the invariant manifolds of these virtual fixed points still exist and play an important role in determining the system dynamics. It should also be noted that if a certain kind of bifurcation occurs when µ is increased through zero, the same kind of bifurcation also occurs when µ is decreased through zero if the parameters in L and R are interchanged. There is therefore a symmetry in the parameter space, and in the following discussion it suffices to describe the bifurcations in half the parameter space. Moreover, we first consider the case of positive determinant, which constitutes a large class of physical systems; we take up the special features of systems with negative determinant at a later stage. A special feature of the normal form (5) is that the unstable manifolds fold at every intersection with the x-axis, and the image of every fold point is a fold point. The stable manifolds fold at every intersection with the y-axis, and the pre-image of every fold point is a fold point.
The argument is as follows. Forward iterates of points on the unstable manifold remain on the same manifold. In the normal form, points on the y-axis map to points on the x-axis. As an unstable manifold crosses the y-axis, one linear map changes to another linear map. Therefore the slope of the unstable manifold on the two sides of the x-axis cannot be the same unless the parameters of the normal form on the two sides of the border are the same (implying a smooth map). In the case of the stable manifold, the same argument applies to the inverse map: under the action of G_2, the line x = 0 maps to the line y = 0, so under the action of G_2^{-1}, points on the x-axis map to points on the y-axis, and hence the stable manifold must have different slopes on the two sides of the y-axis.

We now present the partitioning of the parameter space, as shown in Fig.6. The system behavior in the various regions of the parameter space is taken up in the following subsections.

Border collision pair bifurcation

If

τ_L > (1 + δ_L) and τ_R < (1 + δ_R),   (8)

then there is no fixed point for µ < 0 and there are two fixed points, one each in L and R, for µ > 0. The two fixed points are born on the border at µ = 0. We call this a border collision pair bifurcation. An analogous situation occurs if τ_L < (1 + δ_L) and τ_R > (1 + δ_R) as µ is reduced through zero. Owing to the symmetry of the two cases, we consider only the parameter region (8). There can be three types of border collision pair bifurcations depending on the character of the orbits for µ > 0.

Case 1(a): If (1 + δ_R) > τ_R > −(1 + δ_R), then R* is stable. This is therefore like a saddle-node bifurcation, where a periodic attractor appears at µ = 0. There are two special features of this saddle-node bifurcation: first, the fixed points are born on the border and move away from it as µ is increased; second, there is no intermittency associated with this bifurcation.

Case 1(b): If τ_R < −(1 + δ_R) and the stability condition (10) given below is satisfied, there is a bifurcation from no attractor to a chaotic attractor.
The chaotic attractor for µ > 0 is robust [14].

Case 1(c): If τ_L > (1 + δ_L) and τ_R < −(1 + δ_R) but the stability condition (10) is violated, then there is an unstable chaotic orbit for µ > 0.

Under condition (9), L* is a regular saddle and R* is a flip saddle. Let U_L and S_L be the unstable and stable manifolds of L*, and U_R and S_R the unstable and stable manifolds of R*, respectively. As shown earlier, U_L and U_R fold along the x-axis, and all images of fold points are fold points; S_L and S_R fold along the y-axis, and all pre-images of fold points are fold points. For condition (9), λ_1L > λ_2L > 0 and 0 > λ_1R > λ_2R. The stable eigenvector at R* has slope m_1 = −δ_R/λ_1R and the unstable eigenvector has slope m_2 = −δ_R/λ_2R. Since points on an eigenvector map to points on the same eigenvector, and since points on the y-axis map to the x-axis, we conclude that points of U_R to the left of the y-axis map to points above the x-axis. From this we find that U_R has slope m_3 = (δ_L λ_2R)/(δ_R − τ_L λ_2R) after the first fold. Under condition (9) we have m_1 > m_2 > 0 and m_3 < 0. Therefore there must be a transverse homoclinic intersection in R. This implies an infinity of homoclinic intersections and the existence of a chaotic orbit.

We now investigate the stability of this orbit. The basin boundary is formed by S_L, which folds at the y-axis and intersects the x-axis at a point C. The portion of U_L to the left of L* goes to infinity, and the portion to the right of L* leads to the chaotic orbit. U_L meets the x-axis at a point D, and then undergoes repeated foldings leading to the intricately folded compact structure shown in Fig.7. The unstable eigenvector at L* has a negative slope, given by −δ_L/λ_1L. Therefore it must have a heteroclinic intersection with S_R. Since both U_L and U_R have transverse intersections with S_R, by the Lambda Lemma [18] we conclude that for each point q on U_R and for each ǫ-neighborhood N_ǫ(q), there exist points of U_L in N_ǫ(q).
Since U_L comes arbitrarily close to U_R, the attractor must span U_L on one side of the heteroclinic point. Since all initial conditions in L converge on U_L and all initial conditions in R converge on U_R, and since there are points of U_L in every neighborhood of U_R, we conclude that the attractor is unique. This chaotic attractor cannot be destroyed by small changes in the parameters: small changes in the parameters can only cause small changes in the Lyapunov exponents, so where the chaotic attractor is stable, it is also robust. It is clear from this geometrical structure that no point of the attractor can lie to the right of point D. If D lies to the left of C, the chaotic orbit is stable; if D falls outside the basin of attraction, it is an unstable chaotic orbit, or chaotic saddle. From this, the condition (10) for stability of the chaotic attractor is obtained. If δ_L = δ_R = δ, this condition reduces to τ_R λ_1L − λ_1L λ_2L + τ_L − τ_R − δ > 0.

Border crossing bifurcations

In all regions of the parameter space except (8), a fixed point crosses the border as µ is varied through zero. The resulting bifurcations are called border crossing bifurcations. In the following discussion we consider the bifurcations as µ varies from a negative value to a positive value.

Case 2: Linear attractor to flip saddle. This occurs if 2√δ_L < τ_L < (1 + δ_L) and τ_R < −(1 + δ_R). There is a bifurcation from a period-1 attractor to a chaotic attractor as µ is increased through zero; this chaotic attractor is robust. For µ < 0, L* is a linear attractor while R̄* is a flip saddle. All initial conditions in L converge on L*, while initial conditions in R converge on U_R. Since U_R must have a heteroclinic intersection with one of the stable manifolds of L, all initial conditions in R also converge on L*. For µ > 0, R* is a flip saddle.
As shown in the discussion for Case 1(b), there is a homoclinic intersection in R implying the existence of a chaotic orbit. As L * is in R, its stable manifolds point towards R. Since there is an intersection of S R with the invariant manifold associated with λ 1L , all initial conditions converge on U R , making the chaotic attractor unique. Case 3: There is a unique period-1 attractor for both positive and negative values of µ in the following cases. At border collision, only the path of the fixed point changes. For µ < 0, all initial conditions in R are attracted to R * , which is in L. All initial conditions in L converge on L * . Therefore the fixed point is the unique attractor. For µ > 0, all initial conditions in L move linearly towards L * , which is in R, and all points in R spiral towards R * . Therefore R * is the unique attractor. If the spiralling orbits in L and R have the same sense, there is an overall spiralling orbit converging on the fixed point. Therefore there is a unique period-1 attractor for both µ < 0 and µ > 0. Regular attractor to regular attractor, flip attractor to flip attractor, and regular attractor to flip attractor: in these three cases, for µ < 0, initial conditions in R move linearly to R * . Since there must be a heteroclinic intersection of the stable manifolds, all initial conditions converge on L * . The situation for µ > 0 is similar. Case 4: In the following cases there can be bifurcation from multiple attractors to multiple attractors. There are general mechanisms for the occurrence of coexisting attractors. There can be multiple attractors on both sides of µ = 0, one of which is a fixed point. Case 5: In the parameter space region τ R < −(1 + δ R ) and τ L < 0, initial conditions in L move to R and vice versa. Therefore the dynamics is governed by the stability of the second iterate with one point in L and the other in R.
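This second iterate can be checked numerically. A minimal sketch, assuming the normal-form Jacobians J_s = [[τ_s, 1], [−δ_s, 0]], so that the second iterate J_R J_L has trace τ_L τ_R − δ_L − δ_R and determinant δ_L δ_R (the parameter values are illustrative, not taken from the paper):

```python
import numpy as np

# One-sided Jacobians of the normal form (assumed form:
# x' = tau*x + y + mu, y' = -delta*x on each side of the border).
def J(tau, delta):
    return np.array([[tau, 1.0], [-delta, 0.0]])

# Illustrative Case 5 parameters: tau_R < -(1 + delta_R) and tau_L < 0.
tau_L, delta_L = -0.5, 0.3
tau_R, delta_R = -2.5, 0.3

# Second iterate with one point in L and the other in R.
J2 = J(tau_R, delta_R) @ J(tau_L, delta_L)

trace = tau_L * tau_R - delta_L - delta_R   # expected trace of J2
det = delta_L * delta_R                     # expected determinant of J2
print(np.isclose(np.trace(J2), trace), np.isclose(np.linalg.det(J2), det))

# Stability of the period-2 orbit: eigenvalues of J2 inside the unit circle.
# The boundary lambda = -1 corresponds to 1 + trace + det = 0, i.e.
# 1 + tau_L*tau_R - delta_L - delta_R + delta_L*delta_R = 0.
eigs = np.linalg.eigvals(J2)
stable = bool(np.all(np.abs(eigs) < 1))
print(stable, 1 + trace + det > 0)
```

For these parameters both eigenvalues lie inside the unit circle, consistent with the stability boundary quoted later in the text.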
The eigenvalues of the second iterate are λ 1,2 = [(τ L τ R − δ L − δ R ) ± √((τ L τ R − δ L − δ R )² − 4 δ L δ R )]/2. From this, the condition of stability of the period-2 orbit, for λ 2 > −1, is obtained as 1 + τ L τ R − δ L − δ R + δ L δ R > 0. (12) There are four sub-cases. Case 5(a): there is a unique period-1 attractor for µ < 0 and a unique period-2 attractor for µ > 0. For µ < 0, L * is a flip attractor and R * is a flip saddle. All initial conditions in L converge on L * , and all initial conditions in R go to L in the first iteration and then converge on L * . For µ > 0, condition (11) ensures the stability of the period-2 orbit. The existence of a heteroclinic intersection makes the attractor unique. This is like a period doubling bifurcation occurring on the borderline. In contrast with the standard period doubling bifurcation, the distinctive feature of border collision period doubling is that as µ is varied through zero, the bifurcated orbit does not emerge orthogonally from the orbit before the bifurcation. Case 5(b): for µ < 0 there can be multiple attractors, one of which is a period-1 fixed point. For µ > 0, the period-2 orbit involving both L and R is stable. Therefore there is a unique period-2 attractor. Case 5(c): there is a period-1 attractor for µ < 0. For −(1 + δ L ) < τ L < −2 √ δ L , the eigenvalues of L * are real and coexisting attractors cannot occur. Otherwise multiple attractors may exist. For µ > 0, since (11) is not satisfied, the fixed point of the twice-iterated map is unstable. The eigenvalues are real and initial conditions diverge away from it along the unstable eigenvector. Therefore there can be no attractor for µ > 0. Case 5(d): there is no attractor for both positive and negative values of µ, since all the fixed points of the first and second iterates are unstable. Case 6: Spiral attractor to flip saddle. This occurs if 0 < τ L < 2 √ δ L and τ R < −(1 + δ R ). For µ < 0, there can be multiple attractors, one of which is a period-1 fixed point.
The asymptotic behavior for µ > 0 may be a periodic attractor (of periodicity greater than unity) or a chaotic attractor. As τ L is increased, periodic windows of successively higher periodicities (2, 3, 4, ...) occur, and there are windows of chaos between two such periodic windows. The period-n attractor comes into existence through a border collision pair bifurcation in the nth iterate and goes out of existence when the period-n fixed point becomes unstable. From (12), the stability boundary of the period-2 attractor is given by 1 + τ L τ R − δ L − δ R + δ L δ R = 0. For higher iterates such analytical expressions for the boundaries of the periodic windows become involved and are not presented here. There is no mechanism to prevent the occurrence of multiple attractors. This gives a complete description of the bifurcations that can occur at various regions of the parameter space of the normal form (5). Representative bifurcation diagrams of the cases (where attractors exist) are shown in Fig. 8.
The case of negative determinant
If the determinant is negative, one has to find out which type of fixed point changes to which type as it moves across the border. Depending on the type of the fixed point at the two sides of the border, the bifurcations will be of the same kind as discussed in the previous section. For example, if δ L , δ R < 0 then the eigenvalues are real for all values of τ L and τ R . Therefore there can be no coexisting attractors anywhere in the parameter space. The region of stability of the period-2 attractor, given by conditions (12) and (13), is much larger. Moreover, there is a region of parameter space where a border collision pair bifurcation results in the creation of a period-2 attractor since condition (13) is satisfied. The partitioning of the parameter space for negative determinants is given in Fig. 9. There is, however, a difference in the equation for the boundary crisis in border collision pair bifurcation.
For −1 < δ R < 0, we have 1 > λ 1R > 0, λ 2R < −1, and R * is located above the x-axis. A positive value of λ 1R implies that U L converges on U R from one side. If (13) is not satisfied, the intersection of U R with the x-axis becomes the rightmost point of the attractor, and the condition of existence of the chaotic attractor changes accordingly. For δ L < 0 and δ R < 0, L * is below the x-axis and the same logic as above applies. But if δ L < 0 and δ R > 0, the stable manifold of R * has a negative eigenvalue and hence U L does not approach U R from one side. Therefore, if (13) is not satisfied, there is no analytic condition for the boundary crisis; it has to be determined numerically.
Conclusions
In this paper we have investigated the various types of border collision bifurcations that can occur in piecewise smooth maps by deriving a piecewise affine approximation of the map in the neighborhood of the border. We have shown that there can be basically eleven different types of border collision bifurcations, classified under six "cases". We have presented a partitioning of the parameter space into regions where qualitatively different bifurcations occur. This body of knowledge helps us in explaining the bifurcations observed in experimental and numerical investigations of switching circuits, some of which have been presented in Sec. 2. For example, the experimental bifurcations of the type seen in Fig. 2 can occur in Case 2 and a part of Case 6. A period doubling bifurcation of the type shown in Fig. 3 can occur in the second iterate of the map if the parameters fall under Cases 5(a), 5(b) and a part of Case 6 (coexisting attractors cannot be observed in experimental bifurcation diagrams). The sudden appearance of a chaotic attractor as in Fig. 4 can occur in a border collision pair bifurcation and can be categorized under Case 1(b).
Note that this bifurcation occurs in the third iterate while the period-1 attractor is present, and therefore the resulting chaotic attractor is not robust. The theoretical problem dealt with in this paper was posed by recent investigations in switching electrical circuits. But we believe that such atypical bifurcations will be observed in other nonsmooth physical systems also, and the theory developed in this paper will help in understanding the nonlinear phenomena and bifurcations in such systems.
Figure 9: Schematic diagram of the parameter space partitioning for −1 < δ L < 0 and −1 < δ R < 0 into regions with the same qualitative bifurcation phenomena. (1) No fixed point to period-1; (2) No fixed point to period-2; (3) No fixed point to chaos; (4) No fixed point to unstable chaotic orbit, no attractor; (5) Period-1 to period-2; (6) Period-1 to chaos; (7) Period-1 to period-1; (8) Period-1 to no attractor; (9) No attractor to no attractor. The regions shown in primed numbers have the same bifurcation behavior as the unprimed ones when µ is varied in the opposite direction.
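The border collision bifurcations classified above can be explored by direct iteration. A minimal sketch, assuming the normal form (5) takes the standard piecewise linear form x' = τ x + y + µ, y' = −δ x (with τ, δ switching at the border x = 0), using illustrative Case 2 parameters (period-1 attractor for µ < 0, robust chaos for µ > 0):

```python
import numpy as np

# Iterate the assumed 2D border collision normal form:
#   x' = tau_L*x + y + mu, y' = -delta_L*x   if x <= 0
#   x' = tau_R*x + y + mu, y' = -delta_R*x   if x > 0
def iterate(mu, tau_L, tau_R, delta_L, delta_R, n=5000, x0=0.01, y0=0.0):
    x, y = x0, y0
    tail = []
    for i in range(n):
        if x <= 0:
            x, y = tau_L * x + y + mu, -delta_L * x
        else:
            x, y = tau_R * x + y + mu, -delta_R * x
        if i >= n - 200:          # keep post-transient x values
            tail.append(x)
    return np.array(tail)

# Case 2: 2*sqrt(delta_L) < tau_L < 1 + delta_L and tau_R < -(1 + delta_R).
tau_L, tau_R, delta_L, delta_R = 1.2, -1.5, 0.3, 0.3

tail_neg = iterate(-0.1, tau_L, tau_R, delta_L, delta_R)  # mu < 0: period-1
tail_pos = iterate(+0.1, tau_L, tau_R, delta_L, delta_R)  # mu > 0: chaos

print(np.ptp(tail_neg) < 1e-6)   # tail collapses onto the fixed point
print(np.ptp(tail_pos) > 1e-3)   # tail stays spread over an interval
```

For µ < 0 the orbit settles on the L-side fixed point; for µ > 0 it remains bounded but never settles, the signature of the robust chaotic attractor described in Case 2.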
Evaporative Drying of Low-Rank Coal
Low-rank coals, including the brown and subbituminous coals, are commonly known to contain high moisture content (up to 65%, wet basis), which limits their utilization around the world in spite of their low cost. Today, most drying technologies are based on the evaporation of the water from the moist product. In this chapter, the most effective parameters in the evaporative coal-drying process are investigated with the data in the recent literature. The effective parameters are evaluated in three categories as follows: (1) the parameters of the drying media (the type of the media, the temperature, the pressure, the velocity and the relative humidity), (2) the coal parameters (the type of the coal and the size) and (3) the drying method.
Introduction
Today, lignite is one of the cheapest energy sources [1,2]. The lignite reserves constitute about 45% of the total coal reserves and are distributed throughout the world [3]. The low-rank coals (LRCs), including the brown and the subbituminous coals, which are known to contain high moisture content (up to 65%, wet basis), are very important for the LRC-fired power plants, gasification and liquefaction [4]. The high moisture content of the LRC limits its availability in spite of its low cost [5]. The moisture in the coal causes problems in handling, storage, transportation, milling and combustion [4,6]. In coal combustion, an important part of the energy is consumed to evaporate the moisture inside the coal [5][6][7]. The combustion of high moisture content coal creates problems such as additional energy consumption for moisture evaporation, insufficient combustion and additional exhaust discharge [8].
The LRC should be dried to the required moisture level to decrease the energy losses and the transportation costs, and to increase the quality of the products [9,10]. The drying of the LRC may be divided into evaporative drying or non-evaporative dewatering [11]. In this study, only the evaporative drying of the LRC is considered. The drying of the LRC decreases the problems caused by the high moisture content. In a coal-fired power plant with coal drying, the heat lost with the flue gas, the water consumption in the cooling tower and the energy consumption in the mill decrease [12]. The efficiency of the coal-drying process for a coal-fired power plant mainly depends on the source of the drying energy. A low-quality heat source for the drying process can enhance the efficiency of a coal-fired power plant [13]. In the drying process, both the heat and the mass transfer mechanisms are active. In the evaporative drying of the coal, heat is provided to remove the water from the coal particle. In references [5,14,15], it is stated that the effective parameters in the drying of lignite are the temperature, the drying media flow rate, the sample thickness and the particle size. Many studies have been conducted on lignite drying. In the literature, there are some attempts to review the studies about coal drying, such as references [11,[16][17][18][19][20][21][22]. The estimation of the exit coal moisture content of the dryer is an important research topic; however, there are not many studies on this issue. Thin-layer drying models and neural network methods have been applied to estimate the drying curve [23][24][25][26][27][28][29]. The performance of the models and methods used seems satisfactory. There are various studies on evaporative coal drying. In this study, the most effective parameters in the evaporative coal-drying process are investigated with the data in the recent literature available to the authors.
The effective parameters are evaluated in three categories as follows: (1) the parameters of the drying media, (2) the coal parameters and (3) the drying method. The effective parameters of the drying media are the type of the media, the temperature, the pressure, the velocity and the relative humidity. Different coals in varying sizes are investigated in the section on the coal parameters. Finally, the drying methods used in the literature are studied. The main aims of this study are to summarize the recent studies on LRC drying and to investigate the most effective parameters in the drying.
Parameters about drying media
In this section, the most effective parameters of the drying media are examined. These parameters are as follows: the type of the drying media, the temperature, the pressure, the velocity and the relative humidity. All of these parameters should be defined before the design of the dryer.
Type of drying media
In the coal-drying literature, four drying media (air, steam, exhaust gases and nitrogen) are used in the studies. A summary of the types of drying media used in the coal-drying studies is presented in Table 1.
Drying media: References
Air: [3, 4, 6, 8, 9, 23, ...]
Steam: [8, 9, 25, 27, 37, 47-52, 54, 55]
Exhaust gases: [7, 54]
Nitrogen: [25, 31, 33, 53, 56, 57]
High-temperature (700-900°C) air or exhaust gases are used in the conventional evaporative dryers [18]. In the power plants, the exhaust gases can be used in the drying process, so the overall efficiency of the plant can be increased [36]. Akkoyunlu et al. [58] studied the economic upper limit of a possible dryer for the coal-fired power plants without considering the method, the conditions, the source of energy, etc. However, in coal drying, the air and the exhaust gases may cause some problems. Air and exhaust gases at high temperatures are not applicable because of the spontaneous combustion of the coal and the loss of the volatiles [11,59].
Using superheated steam as the drying media has many advantages over the air and the exhaust gases [18,60,61]. The energy consumption in air drying is higher than in superheated steam drying [25]. In superheated steam drying, the risks of oxidation and fire are highly unlikely due to the oxygen-free atmosphere [60,61]. Therefore, the drying temperature can be raised and higher drying rates can be achieved [18]. The exhaust of the superheated steam drying is pure steam, and so its latent heat can be recovered by condensation [8,49,53]. Moreover, using superheated steam for coal drying at high capacities in the power plants seems more effective than the other media [6]. Using nitrogen as the drying media is not practical; however, the results of these studies can be evaluated in conjunction with the exhaust gases, since a significant proportion of the exhaust gases is nitrogen. Drying with air and with steam are the most important topics in the coal-drying literature, and the pros and cons of both are presented in many papers. In Figure 1, the drying rate curves for the lignite in hot air and in superheated steam are shown. For the same drying temperatures (120, 140 and 160°C), the final moisture content in air drying is nearly zero. However, in superheated steam drying, the final moisture content is about 0.7 kg/(kg db). The drying rate increases as the temperature increases. At the temperature of 120°C, air drying is faster, but at the temperatures of 140 and 160°C, steam drying is faster. The term inversion temperature is used in the comparison of air and steam drying. It denotes the temperature above which the drying rate in steam is greater than that in air. In reference [13], the inversion temperature was found to be in the range from 120 to 140°C.
Temperature
The drying temperature is one of the most important parameters affecting the drying rate and time.
Using a high-temperature drying media requires a short drying time. However, high temperature values are not applicable for coal drying due to the spontaneous ignition and the loss of volatiles [59]. The drying temperature levels used in the literature are categorized in two classes (below and above the boiling temperature), and they are presented in Table 2. The LRC is liable to spontaneous combustion because of its reactive nature [62]. A high-temperature media containing oxygen may result in combustion of the coal. When air or exhaust gases (containing an uncontrolled rate of oxygen) are used as the drying media, they may cause spontaneous combustion of the coal even at low temperatures. In some applications, the rate of oxygen in the exhaust gases is regulated, so the risk of fire is controlled; however, some risk of fire remains. In addition, at high temperatures the coal loses its volatiles, which in turn decreases its calorific value [59]. Moreover, the volatiles increase the risk of fire. The effects of the drying temperature on the coal weight loss and the drying rate are shown in Figure 2. As can be seen, a higher temperature provides faster drying and a shorter drying time.
Pressure
The pressure of the drying media also affects the drying of the LRC. An increase in the pressure improves the overall heat transfer coefficient [49]. However, higher pressure values result in a higher equilibrium moisture content. The effect of the pressure can be investigated in three categories: the atmospheric, the vacuum and the high pressure. The drying pressure levels used in the literature are presented in Table 3.
Pressure: References
Atmospheric: [3-6, 8, 9, ...]
The effect of the pressure on the heat transfer coefficient is shown in Figure 3. The percent increase in the heat transfer is calculated relative to the pressure at 1.1 bar. The pressure seems significantly effective according to reference [49].
Moreover, according to reference [63], higher pressure values result in faster drying. However, according to reference [48], the pressure does not affect the drying rate. The effect of the pressure on the drying should therefore be established clearly.
Velocity
The velocity of the drying media is effective in LRC drying, and different velocity values are studied in the literature. In the fluid bed coal-drying studies, the fluidization velocity is also studied. For the case of fluid bed drying, the level of the drying media's velocity relative to the minimum fluidization velocity is very important. The effect of the drying media velocity is investigated in reference [39] (Figure 4). A higher velocity value provides a faster drying rate. The velocity does not affect the drying rate significantly in the last part of the drying. For the fluidized bed dryers, higher fluid velocities increase the heat transfer rate and the solid mixing [33].
Relative humidity
The relative humidity of the drying media affects the drying of the LRC. As can be seen from Figure 5, a lower relative humidity means a higher drying rate. At the surface of the coal particles, the evaporation rate depends on the water vapour pressure difference between the coal surface and the drying media. The water vapour pressure of the drying media increases with the increase in humidity, and thus the drying rate decreases with the increase in humidity. In addition, the equilibrium moisture content of the coal particles increases with the increase in the relative humidity.
Parameters about coal
The characteristics and the particle size of the coal have an important effect on the drying. All types of coals have different characteristics, such as the initial moisture content, the porosity, the equilibrium moisture content, the volatile matter, the grindability, the ash content and the heating value.
The effect of the moisture content on the coal heating value for different coal types is shown in Figure 6. The moisture in the coal can be categorized in three groups: the surface moisture (the free water), the physically bound moisture and the chemically bound moisture [32,64]. In evaporative drying, heat is provided to the coal particle for heating the particle, for evaporating the water, and for overcoming the binding forces (both physical and chemical) between the coal and the water [32,49]. The surface water is easily removed by evaporation, but the other types of moisture require more energy to be removed. As can be seen from Figure 7, the heat of desorption of the water from the Yallourn brown coal increases with the decrease in the moisture content below a critical moisture value, which marks the end of the surface water and the start of the domination of the internal mass transfer mechanisms. It is important to understand the types of the water in the coal for it to be removed effectively. For different coal types, the binding forces change, and the binding enthalpy increases with decreasing moisture content (Figure 8) [66][67][68]. The greater part of the water in the lignite is in the pores [69]. Therefore, the number, the size, the distribution and the shape of the pores in the LRC have important effects on the drying. The water in the smaller pores is more difficult to remove. The importance of the effects of the coal parameters on the evaporative drying makes clear that each type of coal should be studied separately to obtain the drying characteristics.
Type of coal
In the literature, there are many studies ([5,7,30,32,35,36,45,50,51,57,70], etc.) which investigated the effects of the coal type on the drying. In Figure 9, the drying curves of the North Dakota lignite and the subbituminous coal from the Powder River Basin (PRB) are shown. Different types of coals show different drying characteristics.
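Drying curves like those in Figure 9 are often summarized with the thin-layer models mentioned in the introduction. A minimal sketch, assuming the simple Lewis (Newton) model MR = exp(−kt); the two coals and their drying constants below are illustrative assumptions, not measured data:

```python
import math

# Lewis (Newton) thin-layer drying model: the moisture ratio
# MR = (M - M_eq)/(M_0 - M_eq) decays as exp(-k*t).
def moisture_content(t_min, m0, m_eq, k):
    """Moisture content (kg water / kg dry basis) after t_min minutes."""
    mr = math.exp(-k * t_min)          # dimensionless moisture ratio
    return m_eq + (m0 - m_eq) * mr

# Two hypothetical coals with different initial moistures and
# drying constants k (1/min) -- purely illustrative values.
lignite = dict(m0=0.65, m_eq=0.05, k=0.030)
subbit = dict(m0=0.30, m_eq=0.04, k=0.020)

for name, coal in (("lignite", lignite), ("subbituminous", subbit)):
    curve = [moisture_content(t, **coal) for t in (0, 30, 60, 120)]
    print(name, [round(m, 3) for m in curve])
```

The model reproduces the qualitative behavior discussed in this section: the curve decays exponentially from the initial moisture toward the equilibrium moisture content, faster for larger k (e.g. higher temperature or smaller particles).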
Particle size
The particle size is highly important in the drying process. In addition, the particle size is a very important parameter in the fluidization of the fluidized bed. Moreover, the size of the lignite particles has an important effect on the heat transfer coefficient inside the superheated steam fluidized bed dryer [49]. The sizes of the coal particles used in the literature are presented in Table 4.
Particle size (mm): References
...: [5-8, 24, 25, 27, 33, 34, 39-41, 47, 53, 57]
<5: [3, 4, 9, 26, 31, 32, 35-37, 44, 49-52, 56]
>5: [23, 26, 28-30, 35-37, 43, 45, 46, 48, 49, 55]
The effect of the particle size on the drying rate is presented in Figure 10. The drying rate increases as the coal particle size decreases. The smaller particle fractions have a larger surface area, and thus they dry faster [71]. In addition, the moisture transport distance inside the particle decreases as the particle size decreases.
Drying method
In the literature, many different coal-drying technologies are seen. However, the common conventional drying systems are the fluidized bed dryer, the rotary dryer, the shaft dryer, the pneumatic dryer, the fixed bed, etc. [72]. The experimental drying methods used in the literature are presented in Table 5. Some of the experimental drying methods used in the literature (the TGA (thermogravimetric analysis), the oven and the others) are just for investigation, analysis and modelling; therefore, they are not examined for applicability. The results of these studies are important to understand the drying characteristics of the coals, to evaluate the effective parameters in the drying and to be able to model the drying of the LRC in a convenient drying technology. The low-temperature fluidized bed drying method was developed in the United States [32,74]. Low-grade waste heat is used in this process. This method decreases the risks of oxidation and fire due to the low-temperature air.
In-bed heat exchangers are used to increase the temperature of the air and its moisture-carrying capacity. However, there is still a risk of spontaneous combustion in low-temperature air drying. The superheated steam fluidized bed drying technology is a promising one for coal drying, especially for high capacities such as the coal-fired power plants. For the power plants, the necessary steam for the drying process can be supplied from the turbine. In-bed heat exchangers are also used in this method. The heat is supplied to the exchanger tubes by the steam in the lignite drying process [49]. The generated steam can be used in the process by increasing its temperature with a vapour compressor [49]. In addition, the condensate generated in the in-bed heat exchangers can be used for preheating [49]. Using the steam as both the drying and the heating medium may increase the efficiency of the process considerably. Microwave drying is used in a few coal-drying studies [5,53]. It has some advantages, such as higher heating rates compared to conventional heating and a more uniform heat supply [5,60]. Microwave drying directly uses electricity as the energy source, so it seems too expensive for LRC drying. However, it can be used by integration with a conventional drying system due to the advantages of microwave drying in removing the water inside the coal, which is difficult to evaporate with the other drying technologies (Figure 11) [75]. Figure 11. Effect of microwave drying on normal drying curve [75]. The microwave power level has effects on the drying with the microwave. The weight loss increases with increasing microwave power (Figure 12). In addition, the coal type affects the weight loss in microwave drying. Flash drying is one of the most widely used technologies in drying, and it is also known as pneumatic drying [72]. As the drying medium, the steam, the air and the exhaust gases can be used.
It requires high drying medium velocities to transport the particles. The particle size range for flash drying is usually 0.01-0.5 mm [72]; it is not applicable for larger particle sizes. The packed moving bed dryer was developed for large capacities, and it uses 150°C exhaust gases with a controlled oxygen content [76]. Using the exhaust gases at this temperature level provides waste heat recovery potential. In addition, the controlled oxygen content decreases the spontaneous combustion risk. This methodology seems applicable to the power plants with high capacities and heat recovery systems. The fixed-bed drying technology was used to dry the coarse lignite particles [46]. However, there are not many studies on fixed-bed drying. This methodology is a promising one for the drying of lignite particles greater than 10 mm.
Conclusions
Coal is one of the most important energy sources in the world, and the drying of the LRC is essential to utilize it efficiently. Because the coal has a reactive and combustible nature, the drying technology and the drying media should be determined carefully. Moreover, drying is an energy-intensive process; thus, the energy source for the drying process should be determined with care. There is not only one correct method to dry a product; there are numerous methods in the drying literature. Therefore, every drying process should be studied separately. The drying of the LRCs is still a contemporary topic. There are many studies on LRC drying, but the current studies are not enough. According to the authors, some further steps should be taken, as stated below:
• There is not any detailed study which examines the effect of porosity on the drying characteristics of the LRCs and links up the drying characteristics, the coal type and the porosity.
• The single-particle drying characteristics of different coals should be studied in detail, and all the effective parameters (particularly the pressure) on the drying should be presented clearly.
• One of the most important application areas for the LRCs is the power plants. More elaborate studies should be conducted on the use of the LRC in the power plants, such as:
○ All the stages (from the mine to the boiler) of the coal combustion in the power plant should be presented clearly, and the problems caused by the moisture content should be presented.
○ All the possible drying technologies should be presented.
○ All the possible energy sources should be presented, and especially the waste heat recovery sources should be determined.
○ For the technologies using the superheated steam, the possibilities of using the process steam should be evaluated, and the optimization studies should be conducted.
• Innovative coal-drying systems should be created, and hybrid and integrated coal-drying systems should be examined.
• There are many mathematical, numerical and theoretical models for the drying of moist solids. Simple models should be developed for coal drying.
Saccharomyces cerevisiae YOR071C Encodes the High Affinity Nicotinamide Riboside Transporter Nrt1*
NAD+ is an essential coenzyme for hydride transfer enzymes and a substrate of sirtuins and other NAD+-consuming enzymes. Nicotinamide riboside is a recently discovered eukaryotic NAD+ precursor converted to NAD+ via the nicotinamide riboside kinase pathway and by nucleosidase activity and nicotinamide salvage. Nicotinamide riboside supplementation of yeast extends replicative life span on high glucose medium. The molecular basis for nicotinamide riboside uptake was unknown in any eukaryote. Here, we show that deletion of a single gene, YOR071C, abrogates nicotinamide riboside uptake without altering nicotinic acid or nicotinamide import. The gene, which is negatively regulated by Sum1, Hst1, and Rfm1, fully restores nicotinamide riboside import and utilization when resupplied to mutant yeast cells. The encoded polypeptide, Nrt1, is a predicted deca-spanning membrane protein related to the thiamine transporter, which functions as a pH-dependent facilitator with a Km for nicotinamide riboside of 22 μM. Nrt1-related molecules are conserved in particular fungi, suggesting a similar basis for nicotinamide riboside uptake.
The gene, which is negatively regulated by Sum1, Hst1, and Rfm1, fully restores nicotinamide riboside import and utilization when resupplied to mutant yeast cells. The encoded polypeptide, Nrt1, is a predicted deca-spanning membrane protein related to the thiamine transporter, which functions as a pH-dependent facilitator with a K m for nicotinamide riboside of 22 M. Nrt1-related molecules are conserved in particular fungi, suggesting a similar basis for nicotinamide riboside uptake. NAD ϩ and the phosphorylated and reduced derivatives, NADP ϩ , NADH, and NADPH, are essential coenzymes for hydride transfer enzymes and essential substrates of NAD ϩ -consuming enzymes, including sirtuins, ADP-ribose transferases, poly(ADP-ribose) polymerases, and cyclic ADP-ribose synthases (1). In most fungi and vertebrates, de novo NAD ϩ synthesis is derived from tryptophan utilization. De novo synthesis is sufficient for viability and supplies a yeast cell with ϳ0.8 mM intracellular NAD ϩ . However, supplementation of yeast with nicotinamide riboside (NR), 4 a newly identified eukaryotic NAD ϩ precursor (2), increases intracellular NAD ϩ levels and Sir2 activity and extends replicative life span (3). Nicotinic acid (NA) was discovered as a vitamin in 1938 (4), and the enzymology of NA utilization was sketched out in 1958 as a pathway common to yeast and vertebrates (5). Nicotinamide (Nam), also a classically identified NAD ϩ precursor vitamin (4), is utilized in yeast via NA salvage after nicotinamidase activity (6 -8). Nam is used in vertebrate cells via conversion to NMN (9). NR utilization by yeast cells, first demonstrated in 2004, depends on eukaryotic NR kinase Nrk1 or Nrk2 (2). A second pathway for NR utilization is initiated by Urh1 and Pnp1, which split NR into Nam, followed by Nam salvage (3,10,11). NA riboside is yet another salvageable precursor converted to NAD ϩ in pathways initiated by Nrk1, Urh1, and Pnp1 (12). 
In yeast, submicromolar NA is imported by the high affinity Tna1 permease, which is transcriptionally up-regulated by low NA (13). The molecular basis for import of Nam, NR, and NA riboside is not known in any eukaryotic system. Although NR is found in milk (2), can protect transected dorsal root ganglion neurons from degeneration (14), and extends yeast life span on high glucose medium (3), it is not known how widespread the compound is in nature or whether there is a specific transport system. Here, we discovered that Nrt1, a predicted deca-spanning membrane protein encoded by the YOR071C gene and described previously as the low affinity thiamine transporter Thi71 (15), is responsible for high affinity NR uptake and is both necessary and sufficient for NR utilization. Gene expression of Nrt1 and the acid-dependent nature of NR import establish the specificity and regulation of the first step in NR utilization in the yeast system.

EXPERIMENTAL PROCEDURES

Saccharomyces cerevisiae Strains, Plasmids, and Medium-Yeast strains were derivatives of the laboratory wild-type strain BY4742. Single deletion strains have been described (16), and additional strains were generated by one-step gene disruption (17). Plasmid pNRT1, carrying NRT1 under the control of its own promoter, was created by amplifying the gene from BY4742 DNA using primers 14112 and 14113. After digestion with SacI and HindIII, the product was inserted into pRS317. NA-free synthetic glucose complete medium (3) and NR (2) were prepared as described. [3H]NMN was purchased from Moravek Biochemicals (Brea, CA). Growth curves were measured from overnight cultures diluted to A600 = 0.2. A yeast strain list and primer sequences are provided in the supplemental table.

Intracellular NAD+ and NR Transport-Intracellular NAD+ was calculated for cells grown to A600 = 1.0 as described (3).
For measurement of NR uptake, cells were grown in NA-free medium with rigorous aeration to A600 = 1.0, at which time 1 ml of cell culture was combined with the appropriate amount of [3H]NR and incubated for the specified times. Triplicate cell samples were collected using a Millipore 1225 sampling vacuum manifold on premoistened 1.2-μm nitrocellulose filters and washed twice with 5 ml of potassium-buffered saline prior to scintillation counting. Kinetic parameters were calculated from initial rates. For pH-dependent transport assays, the culture medium was buffered to the indicated pH values by addition of a 1% volume of sodium citrate (pH 3.5-5.5) or Tris-Cl (pH 6.5-9.5).

Footnotes: * The costs of publication of this article were defrayed in part by the payment of page charges. This article must therefore be hereby marked "advertisement" in accordance with 18 U.S.C. Section 1734 solely to indicate this fact. 1 Fellowship from the Dartmouth Graduate Office. 2 Supported by a presidential fellowship from Dartmouth College. 3 To whom correspondence should be addressed. E-mail: charles.brenner@dartmouth.edu. 4 The abbreviations used are: NR, nicotinamide riboside; NA, nicotinic acid; Nam, nicotinamide.

RESULTS AND DISCUSSION

Nrt1 Is Necessary and Sufficient for Normal NR Incorporation into NAD+-The NR transporter in Haemophilus influenzae is encoded by the pnuC gene (18), which has homologs in eubacteria and bacteriophage. There are no reports of a eukaryotic NR transporter and no pnuC-homologous eukaryotic sequences. Overexpression studies of human nucleoside transporters in frog oocytes indicated that high expression of the equilibrative transporters ENT1 and ENT2 and the concentrative transporter CNT3 increased the import of the NR analogs tiazofurin and benzamide riboside (19). Human ENT1 and ENT2 have sequence similarity to Fun26, a broad specificity nucleoside transporter expressed predominantly in intracellular membranes (20).
On the basis of the dearth of homology-based candidates for a yeast NR transporter, we assembled a set of 14 single mutants in putative transporter genes, including fun26 and fui1, a reported uridine transporter (20); tna1, the NA transporter (13); dal4, a reported allantoin/uracil permease (21); dal5, a reported ureidosuccinate/allantoate permease (22); fcy2, fcy21, and fcy22, reported purine/cytosine permeases (23); fur4, a uracil permease (24); thi7, YOR071C, and thi72, the thiamine transporter and related molecules (15,25); and thi73 and yil166c, which resemble additional uncharacterized transporters. Wild-type yeast cells grown on NA-free medium have an intracellular NAD+ concentration of ~0.8 mM, which is elevated by ~1 mM when supplemented with 10 μM NR (3). Yeast strains deleted for each of the 14 candidate transporters were assayed for diminution or loss of this effect. As shown in Fig. 1A, all but two candidates were as sensitive to NR as the wild type. Deletions in nrt1 (previously termed YOR071C/THI71) and fun26 decreased incorporation of NR into NAD+ by 83 and 36%, respectively. As shown in Fig. 1B, the double nrt1 fun26 mutant exhibited a 93% reduction in NR utilization, rendering the NR-dependent increase in NAD+ concentration statistically insignificant. These data suggest that Nrt1 is responsible for the majority of NR import, with a potential minor role for Fun26. To test the hypothesis that NR utilization depends on Nrt1, the gene was cloned into the single copy vector pRS317 under the control of its native promoter. As shown in Fig. 1B, upon introduction of this plasmid into the double mutant background, there was a 102% restoration of the NR-dependent increase in NAD+. To test whether Nrt1 and Fun26 might have roles in assimilation of other NAD+ precursors, we grew wild-type and nrt1 fun26 mutant strains in NA-free medium and in medium supplemented with 10 μM NR, NA, or Nam.
Whereas the wild-type strain incorporated each vitamin into an additional 1 mM intracellular NAD+, the nrt1 fun26 strain obtained the full NAD+ benefit from NA and Nam, but not from NR (Fig. 1C). Thus, neither Nrt1 nor Fun26 appears to play any role in import of NA or Nam.

Nrt1 Is Necessary and Sufficient for NR-dependent Cell Growth-To set up a system in which deletions of putative vitamin transporter genes could be assayed for effects on cell growth, we deleted the gene encoding 3-hydroxyanthranilic acid dioxygenase, which performs an essential step in the de novo biosynthesis of NAD+ from tryptophan. As shown in Fig. 2A, the bna1 mutant was incapable of growth in NA-free medium but grew well when supplemented with either 10 μM NA or NR. In contrast, as shown in Fig. 2 (B and C), deletion of the NRT1 gene with or without deletion of FUN26 permitted NA-dependent growth but abolished normal growth with NR supplementation. When the NRT1 gene was added back to the bna1 nrt1 fun26 mutant (Fig. 2D), NR-dependent growth was fully restored. Indeed, the Nrt1 dependence of NR utilization is stronger than the Tna1 dependence of NA utilization because bna1 tna1 mutants fail to grow at 40 nM NA, grow slightly at 400 nM NA, and grow normally at 4 μM NA (13). In contrast, examination of the growth curves in Fig. 2 (B and C) indicates that growth of bna1 nrt1 mutants was slight at 10 μM NR.

Nrt1 Is Required for High Affinity, pH-dependent, Specific Import of NR-To determine whether NR is incorporated into yeast cells in an Nrt1-dependent manner, we prepared [3H]NR from [3H]NMN and measured incorporation of radioactivity into yeast cells exposed to 50 μM NR. The wild-type strain exhibited a robust linear import activity for the entirety of the 70-min assay, whereas the nrt1 disruption strain exhibited no detectable import (Fig. 3A).
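The percent reductions quoted in the deletion screen above (83%, 36%, and 93%) are reductions in the NR-dependent NAD+ increase relative to wild type. A minimal sketch of that arithmetic; the wild-type increase (~1 mM) is from the text, while the mutant increases below are hypothetical values chosen only to reproduce the published percentages:

```python
# Sketch: how the percent reductions in NR utilization are computed.
# delta values are the NR-dependent increases in intracellular NAD+ (mM);
# the mutant deltas here are illustrative, not measured values.

def percent_reduction(delta_wt_mM, delta_mutant_mM):
    """Reduction in the NR-dependent NAD+ increase relative to wild type (%)."""
    return 100.0 * (1.0 - delta_mutant_mM / delta_wt_mM)

delta_wt = 1.0  # ~1 mM increase upon 10 uM NR supplementation (from the text)
for name, delta in [("nrt1", 0.17), ("fun26", 0.64), ("nrt1 fun26", 0.07)]:
    print(name, round(percent_reduction(delta_wt, delta)))  # 83, 36, 93
```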
Nrt1 is highly similar in sequence to Thi7, a concentrative thiamine transporter and member of the major facilitator superfamily (15) with an ability to concentrate thiamine 1000-fold inside the cell (26). Thiamine import is pH-dependent, with strong activity between pH 4 and 5 and declining activity at pH 6 and 7 (26). Additionally, Nrt1 is related in sequence to Fur4, which transports uracil via a proton symport mechanism (27). Accordingly, we tested the hypothesis that Nrt1 transport of NR is pH-dependent. Although initial rates of 25 μM NR import were essentially unchanged from pH 3.5 to 6.5, the import of NR was reduced to 5% at pH 7.5 and not distinguishable from the background at pH 8 and above (Fig. 3B). Having shown that all detectable NR import is Nrt1-dependent, we were able to characterize the kinetic parameters of NR transport by Nrt1 in wild-type cells over a range of NR concentrations. Transport was saturable with a Km of 21 ± 3.6 μM and a maximal rate of 20.4 ± 0.9 pmol/min/10^7 cells (Fig. 3C). Taking the intracellular volume of a haploid cell to be 70 fl (28), the rate of maximal import would produce ~29 μM NR inside the cell per minute. In addition to the pH dependence of NR transport, we considered that NR transport may be under the regulation of a set of transcriptional repressors that control expression of multiple components of the NAD+ biosynthetic machinery. Examination of the microarray data from cells deleted for the transcriptional repressor Sum1, the NAD+-dependent protein lysine deacetylase Hst1, or Rfm1, which forms a protein complex with Sum1 and Hst1, indicates that NRT1 is one of 55 genes derepressed by deletion of each factor (29). These data suggest that NR transport capacity may be derepressed at the transcriptional level under conditions of declining NAD+, when Hst1 enzyme activity becomes limited (30). Once NR enters the cell, it is converted to NMN (2) or Nam (3).
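The unit conversion behind the ~29 μM/min figure, and the saturable Michaelis-Menten behavior reported above, can be checked with a short script. This is a sketch: the Km, Vmax, and 70-fl cell volume are taken from the text, not re-derived from raw data:

```python
# Sketch: reproduce the arithmetic behind the reported Nrt1 transport numbers.

def mm_rate(s_uM, km_uM=21.0, vmax=20.4):
    """Michaelis-Menten initial rate (pmol/min/1e7 cells) at substrate s_uM."""
    return vmax * s_uM / (km_uM + s_uM)

def max_internal_accumulation_uM_per_min(vmax_pmol_min_1e7=20.4, cell_vol_fl=70.0):
    """Convert the maximal import rate to an intracellular concentration rate."""
    mol_per_cell_per_min = vmax_pmol_min_1e7 * 1e-12 / 1e7   # pmol -> mol, per cell
    litres_per_cell = cell_vol_fl * 1e-15                    # fl -> L
    return mol_per_cell_per_min / litres_per_cell * 1e6      # mol/L/min -> uM/min

print(round(max_internal_accumulation_uM_per_min(), 1))  # ~29 uM NR per minute
print(round(mm_rate(21.0), 1))  # at S = Km the rate is half of Vmax
```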
To test whether metabolites related to NR inhibit NR transport, non-labeled Nam and NMN were added to NR import assays. At high micromolar concentrations, no inhibition was detected. However, at Nam and NMN concentrations of 10 mM, competitive inhibition was demonstrated (Fig. 3D), albeit with Ki values >2 mM.

Sequence Analysis of Nrt1-By hidden Markov modeling and topology prediction (31), Nrt1 is a deca-spanning membrane protein with both termini and a 74-amino acid loop between helices 6 and 7 projecting into the cytoplasm (Fig. 4). This topology is the same as the 28% sequence identical Fur4 uracil-proton symporter, the internalization of which is mediated by cytoplasmic phosphorylation and ubiquitylation (32). The closest sequence homologs of Nrt1 are Thi7 (84%), the thiamine transporter, and Thi72 (81%), which, like Nrt1, has been considered a low affinity thiamine transporter (15). These data suggest recent gene duplication events and, considering the absence of NR transport in nrt1 mutants, functional divergence. In the pathogenic yeast Candida glabrata, NR availability appears to play a critical role in host infection (11). Thus, it is interesting to note that the sequences most similar to Nrt1 outside of S. cerevisiae Thi7 and Thi72 are C. glabrata sequences XP_446731 (68%) and XP_449349.1 (65%) and the Vanderwaltozyma polyspora sequence XP_001645454.1 (67%). Identification of Nrt1 as the high affinity NR transporter in S. cerevisiae reinforces the need to identify environmental sources of NR. Current data indicate that NR utilization is limited by alkaline conditions and by Nrt1 expression, which is repressed by Sum1, Hst1, and Rfm1 (29). Additional work will be needed to define the effect of cell aging and nutritional conditions on NR transport, the influence of cellular metabolism on Nrt1 internalization, and the identity of NR transporters in other organisms.
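The inhibition result above (no detectable inhibition at high micromolar Nam/NMN, clear inhibition at 10 mM, Ki > 2 mM) is consistent with simple competitive-inhibition kinetics. A sketch under stated assumptions: Ki is set to the 2 mM lower bound quoted in the text, and 50 μM NR matches the import assays; the exact inhibited rates were not published:

```python
# Sketch: competitive inhibition of Michaelis-Menten transport,
# v = Vmax*S / (Km*(1 + I/Ki) + S). Values in micromolar.

def mm_competitive(s_uM, i_uM, km_uM=22.0, ki_uM=2000.0, vmax=1.0):
    """Relative transport rate in the presence of a competitive inhibitor."""
    return vmax * s_uM / (km_uM * (1.0 + i_uM / ki_uM) + s_uM)

v0 = mm_competitive(50.0, 0.0)         # uninhibited rate
v_hi_uM = mm_competitive(50.0, 500.0)  # "high micromolar" inhibitor
v_10mM = mm_competitive(50.0, 10000.0) # 10 mM inhibitor

print(round(1 - v_hi_uM / v0, 2))  # only a few percent inhibition
print(round(1 - v_10mM / v0, 2))   # substantial inhibition at 10 mM
```

With Ki at the 2 mM bound, 500 μM inhibitor suppresses transport by only ~7% (within assay noise), while 10 mM suppresses it by ~60%, matching the qualitative observation.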
2018-04-03T03:10:05.609Z
2008-03-28T00:00:00.000
{ "year": 2008, "sha1": "a532c76250e0f7315fc913012149e51cd7f23bd5", "oa_license": "CCBY", "oa_url": "http://www.jbc.org/content/283/13/8075.full.pdf", "oa_status": "HYBRID", "pdf_src": "Highwire", "pdf_hash": "6e8db3a58162f8871ff286abf9561140bf2dd6c4", "s2fieldsofstudy": [ "Biology", "Chemistry" ], "extfieldsofstudy": [ "Medicine", "Chemistry", "Biology" ] }
199478531
pes2o/s2orc
v3-fos-license
Goaf Compaction Evaluation Based on Integrity and Permeability Measurement of the Overburden Strata Groundwater outbursts from coal mine goafs, which are widely distributed in China with the increase in the number of abandoned coal mines, demonstrate an inseparable relationship with the permeability of goaf overburden strata. Research on the rock mass integrity and permeability characteristics of goaf overburden strata is necessary to assess the water-inrush risk induced by goaf water. Two goafs in two adjacent coal mines located at the Huaibei mining area in the Anhui Province of China were considered as the research targets. A ground exploration hole was drilled in each goaf. The rock mass integrity and fracture development of goaf overburden strata were determined on the basis of rock quality designation (RQD) and borehole television. The permeability of goaf overburden strata was measured by performing a field packer test. Results are as follows: (1) The fracture development of the borehole wall of the production mine is favorable, and a water drenching phenomenon occurs. The RQD is approximately 20.8–35.6%, which indicates that the integrity of the goaf overburden strata of the production mine is poor. The unit water inflow (q) is 0.133 L/s.m, and the average permeability coefficient is 0.26 m/d, which reveals that the goaf overburden strata of the production mine exhibit medium water abundance and permeable strata. (2) The overburden strata parameters of the abandoned mine goaf are as follows: the q is 0.00576 L/s.m, and the average permeability coefficient is 0.012–0.014 m/d, which suggests that the goaf overburden strata exhibit poor water abundance and low permeable strata. The results demonstrate that the overburden strata compaction of the two goafs is different, and they can provide a reference for adjacent mines to assess the water-inrush risk induced by the goaf water of abandoned coal mines.
INTRODUCTION

Mine water disaster has long been a major hazard threatening the safety of coal mine production in China [1÷3]. Coal mining in China has gone through a rapid development stage. The number of abandoned coal mines is increasing with the adjustment of the energy structure and the implementation of national capacity-reduction policies. The demands for safe and efficient exploitation of coal resources are also increasing. China has been faced with a severe problem of coal resource exhaustion. According to the statistics, nearly one-third of large- and medium-scale coal mines reach or are near their design service life, and the proportion accounts for 56.52% [4]. Groundwater can no longer be pumped after the coal mines are closed, thereby causing serious security threats to adjacent production mines. Many serious and severe water inrush accidents have occurred in recent years [5÷7]. Thus, the integrity and permeability of goaf overburden strata are crucial for the water resistance of adjacent abandoned coal mines. In addition, integrity and permeability are common scientific problems for coal pillar retention and gas extraction [8÷10]. However, the existing research results on goaf water hazards induced by abandoned coal mines have mainly focused on the dynamic recovery process of the goaf water level [11÷13], environmental pollution induced by groundwater recharge [14] and the stability evaluation of coal pillars [15]. The integrity and permeability change induced by goaf overburden strata compaction have not been considered. The goaf can be re-compacted under the geostatic pressure of the overlying strata with time. The direct threat to the adjacent production mines is the rising of the goaf water level. The key to determining the rising rate of the goaf water level and the regenerative water-resisting ability of the fillings in the goaf is the integrity and permeability of the overburden strata.
Thus, it is necessary to study the integrity and permeability of goaf overburden strata. To scientifically evaluate the threat of goaf water to adjacent production mines, the goaf overburden strata compaction degree on both sides of the mining boundary must be evaluated. Considering the limitations of simple theoretical calculation and simulation, a field test method is proposed in this study to evaluate the integrity and permeability of the goaf overburden strata, and the compaction of the goaf overburden strata is studied.

STATE OF THE ART

Scholars all over the world have conducted many relevant studies on the integrity and permeability of goaf overburden strata. Brett et al. [16] estimated the permeability change of overburden strata during longwall mining with PFC2D software. The permeability coefficient and porosity metrics were calculated, and the height of the enhanced-permeability fractured zone above a longwall goaf was identified. The results showed that the permeability coefficient increased approximately eight orders of magnitude in the caved zone and one to two orders of magnitude in the strata above the fractured zone. Bai et al. [17] described the stress-strain relationship of caved rock, thus verifying the compaction theory of goaf on the basis of the FLAC3D software, and this theory was successfully applied in practice. Meng et al. [18] discussed the relationship between permeability and the stress of goaf rock mass based on the deformation and failure characteristics of goaf overburden strata and achieved the transverse and longitudinal zoning of goaf. Adhikary and Guo [19] investigated the strata permeability change induced by longwall mining at a mine site in New South Wales on the basis of an underground packer test and numerical simulation.
These authors concluded that the permeability of goaf overburden strata increased more than 1000-fold and that the measured permeability coefficient varied widely and remarkably at different positions. Schatzel et al. [20] studied the permeability change in goaf overburden strata induced by coal mining by a field measurement method. These researchers concluded that the permeability of the overburden strata increased by hundreds to thousands of times, and the permeability continuously changed 7 months after coal mining. Qureshi et al. [21] calculated the rock quality designation (RQD) based on core drilling. The empirical relationship between the permeability coefficient and RQD of unconsolidated sedimentary rocks in the Oman area was established. The results were consistent with those obtained by a field packer test, thus revealing the relationship between the RQD and permeability coefficient of the rock mass. Song et al. [22] established a permeability coefficient calculation model for fractured rock mass based on the RQD, and the distribution characteristics of the permeability coefficient of water-sealed underground storage caverns in Qingdao were obtained. Vincenzo et al. [23] conducted a field packer test in hard rocks (mainly andesites and metamorphites of western Turkey), and the relationship between the permeability coefficient and depth of hard rocks was obtained. The results showed that the permeability coefficient decreased with the increase of depth and there was no obvious relationship between RQD and rock burial depth. Xue et al. [24] established a fracture distribution model from the gas extraction point of view by a similar material simulation test, and the conditions for the rapid change of gas migration were obtained. Lu et al. [25] established a mining model above a confined aquifer using a numerical simulation method. The permeability change law of the coal floor during coal mining was achieved based on the fluid-solid coupling theory.
They concluded that a higher homogeneity index of floor strata could result in a sudden formation of water inrush. To evaluate the influence of coal seam mining on the surface water system, Khanal et al. [26] investigated the change in permeability of the overburden strata due to longwall mining by a numerical simulation method. These researchers concluded that the permeability of the overburden strata increased in excess of six orders of magnitude and that the permeability change varied with different mining methods. Holla et al. [27] investigated coal roof cracking and surface deformation during mining in a shallow longwall working face. Meanwhile, the permeability of the overburden strata was measured by the packer test. The results showed that the height of the fracture zone in a shallow working face was 9 times the thickness of the coal seam, and the RQD had a good correlation with permeability. Guo et al. [28] used the numerical simulation method to comprehensively study the law of surface movement, fracture development and gas migration during mining in a deep longwall working face in Anhui Province, China. The stress evolution in overburden strata, fracture development and gas migration law were obtained, and the best gas drainage area around the longwall working face was determined, thus providing support for gas extraction. Zhang et al. [29] used FLAC3D software to analyze the mining-induced fracture and stress evolution in the overburden strata of a longwall working face, and they obtained that the coal roof could be divided into five zones. The gas migration area was determined to provide a basis for eliminating gas outburst. Chen et al. [30] calculated the mining-induced permeability change by a numerical simulation method based on the Biot theory, and the water inrush risk from an aquifer above the coal seam was predicted.
The results showed that the mining-induced permeability change was exponentially related to the aquifer water pressure change, which was consistent with the field test results. Current research mainly includes the following subjects: the permeability investigation of overburden strata by numerical simulation and similar material simulation for revealing the gas migration law [24,28,29]; surface water protection and the prediction of water inrush risk from the coal seam roof or coal floor by numerical simulation [25,26,30]; and the study of permeability change laws in overburden strata solely by numerical simulation [16÷18]. Some other scholars have carried out permeability measurements of overburden strata by field tests, but they have mainly studied the evolution of permeability with time and space, without considering the water-resisting ability of the goaf [21,22,27]. In summary, there have been many studies on the integrity and permeability of overburden strata, but little attention has been paid to goaf overburden strata compaction. To reveal the goaf overburden strata compaction degree and avoid the limitations of numerical simulation and similar material simulation methods, the integrity and permeability of goaf overburden strata in the Huaibei mining area of China are studied on the basis of the RQD index, borehole television, and packer tests. The results can provide a new method for evaluating the goaf overburden strata compaction in abandoned coal mines. The remainder of this study is organized as follows. Section 3 elaborates the basic geological survey and research methods adopted in the study area. Section 4 evaluates the goaf overburden strata compaction degree based on the results of RQD and permeability coefficient. Section 5 summarizes and concludes the study.

METHODOLOGY

3.1 Geological Settings

The Huaibei mining area, which is located in Northern Anhui Province, is a major mining area in Eastern China (Fig. 1).
The target coal mine goafs are located in the Yuanzhuang and Shenzhuang coal mines of the Huaibei mining area. Stratigraphic classification of the study area belongs to the North China-type strata, and the primary mineable coal seam is the No. 3 coal seam, which is located in the Lower Permian Shihezi formation. The general structural feature of the area is a monoclinal structure with NE trend and a dip angle of 20-30°. The Shenzhuang coal mine is an abandoned mine and the underground water is not pumped any more, whereas the Yuanzhuang coal mine is a production mine. The boundary of the two mines is an artificial boundary, that is, a coal pillar boundary. Therefore, the production of the Yuanzhuang mine is threatened by the goaf water in the Shenzhuang mine. The study of the integrity and permeability of the goaf overburden strata near the coal mine boundary is necessary to provide a reference for the goaf water-inrush evaluation of the Yuanzhuang coal mine. The two nearest working faces of the two adjacent mines are working face III3142, mined in 1993-1994 in the Yuanzhuang coal mine, and working face S2II313, mined in 1973 in the Shenzhuang coal mine (Fig. 2). The minimum distance between the two working faces is less than 20 m, and working face S2II313 is located in the shallow part of working face III3142. The working face III3142 goaf is connected with the main roadways in the mine. Therefore, the goaf water of working face S2II313 may flow into the Yuanzhuang coal mine through the fracture zone of the goaf overburden strata between the two adjacent working faces.

Drilling Exploration

Two ground holes (marked as 1# and 2#) were drilled to investigate the integrity and permeability of the goaf overburden strata. The No. 1 hole is located in working face III3142, and the No. 2 hole in working face S2II313 (Fig. 2). The average thickness of the Quaternary strata is 43.35 m. The depth of No.
1 hole is 363.20 m, and the depth from 43.35 to 270.12 m is the non-coring section. The rocks mainly comprise mudstone, siltstone, and fine sandstone based on logging data. The depth from 270.12 to 363.20 m is the coring section, and the lithology and thickness are 35.08 m mudstone, 13.16 m fine sandstone, and 22.62 m sandstone (Fig. 3). The thickness of the caving fracture zone is approximately 8.10 m, the cores are cracked and loosened, and high-angle fractures are developed (Fig. 4). The depth of No. 2 hole is 280.67 m, and it is a non-coring hole. The rocks also mainly comprise mudstone, siltstone, and fine sandstone based on logging data.

Borehole Television Exploration

Borehole television images were used to identify and determine the characteristics of in-depth fracture development [31]. Therefore, this method was used to investigate the fracture development characteristics of the No. 1 hole. The borehole television images are displayed in Fig. 5. Borehole television was not conducted in No. 2 hole because the hole collapsed.

Packer Test

The packer test is a common method for testing the permeability of rock mass in engineering [32]. In this study, packer tests were conducted in two boreholes drilled vertically from the surface. The equipment utilized in this study is illustrated in Fig. 6.

Packer Test in No. 2 Hole

The hole was thoroughly washed before the packer test until the water in the hole was clear. Water injection was maintained for more than 24 h, until the change in water level and injection rate was zero. Finally, the recovery water level was observed, and the hole was sealed with cement slurry after the test. The process and results of the packer test are presented in Tab. 1 and Fig. 7, respectively. The permeability coefficient is calculated by the Dupuit and Babushkin formulas based on test data.
(1) The Dupuit formula is as follows:

K = 0.366·Q·lg(R/r) / (M·S), (1)

where Q is the injection water flow (L/s); M is the injection interval (m); S is the uplift height of the water level (m); R is the influence radius (m); r is the borehole radius (m); and K is the permeability coefficient (m/d). The permeability varies in different sections. The average permeability coefficient is calculated as 0.014 m/d, and the unit water inflow (q) is 0.00576 L/s.m.

(2) The Babushkin formula is as follows:

K = 0.527·ω·lg(1.32·L/r), (2)

where K is the permeability coefficient (m/d); ω is the unit water absorption (L/min.m.m); L is the injection interval (m); and r is the borehole radius (m). The unit water absorption is calculated by the formula as follows:

ω = Q/(L·P), (3)

where Q is the injection rate (L/min); L is the injection interval (m); and P is the water pressure expressed as a water head (m). The data are inputted into formulas (2) and (3), and the calculation results are shown in Tab. 3. The average permeability coefficient is calculated as 0.012 m/d, and the unit water absorption is 0.0073 L/min.m.m.

Packer Test in No. 1 Hole

The packer test in No. 1 hole was a simple test. Drilling fluid was consumed when the depth reached 270.12 m, and fluid consumption was serious at a depth of 323.65 m. The injection interval was from 270.12 to 323.65 m. The distance between the ground and the water surface in the hole was 110.90 m. The packer test was then conducted. The water injection rate was 15000 l/s for 1 h, and the distance between the ground and the water surface in the hole became stable at 79.60 m. Then, the groundwater depth, which fluctuated from 88.50 to 96.25 m, was observed. The permeability coefficient is calculated by the Dupuit and Babushkin formulas, as expressed in formulas (1)-(3), based on the test data. The injection interval is 53.53 m, the uplift height of the water level is 31.30 m, and the borehole radius is 0.0455 m. The data are inputted into formula (1). The permeability coefficient is calculated as 0.28 m/d, and the q is 0.133 L/s.m.
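The two permeability calculations for No. 1 hole can be sketched numerically. Assumptions to note: the Babushkin form below reproduces the reported 0.25 m/d directly from the published test values, but the influence radius R required by the Dupuit form is not reported, so the R used here (5.5 m) is a purely illustrative assumption; Q is converted to m³/d, as the classical 0.366 constant requires:

```python
import math

def dupuit_k(q_m3_per_day, m_interval, s_uplift, big_r, r_hole):
    """Dupuit-type injection formula: K = 0.366*Q*lg(R/r) / (M*S), K in m/d.
    (Standard form; the exact variant the authors used is not shown.)"""
    return 0.366 * q_m3_per_day * math.log10(big_r / r_hole) / (m_interval * s_uplift)

def babushkin_k(omega, l_interval, r_hole):
    """Babushkin formula: K = 0.527*omega*lg(1.32*L/r), omega in L/(min.m.m)."""
    return 0.527 * omega * math.log10(1.32 * l_interval / r_hole)

# No. 1 hole values from the text: L = 53.53 m, S = 31.30 m, r = 0.0455 m,
# omega = 0.15 L/(min.m.m), q = 0.133 L/s.m
print(round(babushkin_k(0.15, 53.53, 0.0455), 2))  # ~0.25 m/d, as reported

q_flow_m3_per_day = 0.133 * 53.53 * 86.4  # q (L/s.m) * interval (m) -> m3/d
print(round(dupuit_k(q_flow_m3_per_day, 53.53, 31.30, 5.5, 0.0455), 2))  # ~0.28 m/d with assumed R
```

An influence radius of a few metres is enough to reproduce the published 0.28 m/d, but since R is unreported, the Dupuit line should be read as a consistency check, not a recomputation of the authors' result.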
The data are inputted into formulas (2) and (3). The permeability coefficient is calculated as 0.25 m/d, and the unit water absorption is 0.15 L/min.m.m.

RESULT ANALYSIS AND DISCUSSION

4.1 Integrity of the Goaf Overburden Strata

The rock mass integrity is closely related to the fracture development degree. The RQD is a quantitative parameter that reflects the integrity of rock mass. The RQD was proposed by Deere [33] as a measure of the quality of borehole core and was defined as the percentage of borehole core or scanline that consisted of intact lengths ≥0.1 m; hence, it can be defined as follows:

RQD = (∑(i=1 to n) li / L) × 100%,

where li is the length of the i-th intact length ≥0.1 m, n is the number of intact lengths ≥0.1 m, and L is the total length of the borehole core or scanline. Moreover, the RQD is the most easily obtained index in exploration work. Thus, it is widely applied in engineering [34,35]. The RQD of No. 1 hole is calculated on the basis of the cores and compared with the extraction rate of the cores (Tab. 4 and Fig. 8). The RQD of the No. 1 hole core ranged from 7.3 to 66.7%, and the average value was 35.6%, which revealed that the integrity of the overburden strata of working face III3142 was poor and the fractures were well developed. In addition, a favorable correlation was found between the core adoption rate and the RQD, and the RQD value was small in the sections of high drilling fluid consumption (Fig. 3). The borehole television results showed that the fractures were well developed in the overburden strata of working face III3142 (Fig. 5). The distance between the ground and the water surface in the hole was approximately 350.8 m, and the depth of goaf water in the hole was approximately 1.1 m. Water seepage was evident on the hole wall, thus demonstrating that the integrity of the working face III3142 goaf overburden strata was poor.

Permeability of the Goaf Overburden Strata

The packer test on No.
2 hole indicated that the results obtained by the two methods were similar. The q was 0.00576 L/(s·m), less than 0.10 L/(s·m), and the average permeability coefficient was 0.012-0.014 m/d, less than 10⁻⁴ cm/s. According to the Regulation of Water Control in China Coal Mine and the Code of Engineering Geology Investigation Technology in China (Tab. 5), the overburden strata of the working face S2II313 goaf demonstrated weak water abundance and weak-to-micro-permeable strata, which indicates that the permeability of the goaf overburden strata was poor and that the strata had a certain water-resisting ability. For the packer test in No. 1 hole, the q was 0.133 L/(s·m), larger than 0.10 L/(s·m), and the average permeability coefficient was 0.26 m/d, larger than 10⁻⁴ cm/s. According to the same regulation and code (Tab. 5), the goaf overburden strata of working face III3142 demonstrated medium water abundance and permeable strata, indicating that the goaf overburden strata had high permeability. Therefore, goaf water can easily permeate into working face III3142 along the fractured zone in the overburden strata between the two working faces.

CONCLUSIONS

In order to estimate the compaction degree of goaf overburden strata, the integrity and permeability of the goaf overburden strata were studied using the RQD index, the field packer test, and borehole television exploration. The permeability coefficient and RQD of the goaf overburden strata were obtained, and the compaction of the two goafs was estimated. The following conclusions can be drawn:

(1) The overburden strata permeability of the boundary goafs of two adjacent coal mines was measured by the packer test, and the permeability of the two goaf overburden strata differs. The permeability of the goaf overburden strata of the production coal mine is higher than that of the abandoned coal mine, so the compaction of the abandoned coal mine goaf is better than that of the production mine.
(2) Borehole television is an effective method for exploring the integrity of the overburden strata of coal mine goafs: the fracture development can be observed clearly.

(3) The production of the Yuanzhuang coal mine remains unaffected by the goaf water in the Shengzhuang coal mine, indicating that the goaf overburden strata of the Shengzhuang mine are well compacted and have a certain water-resisting ability.

(4) The production practice is consistent with the test results. The permeability measured by the packer test can therefore be used to indirectly evaluate the compaction of the goaf overburden strata. The research results can provide a reference for evaluating the water inrush risk posed by goaf water in abandoned coal mines.

In this study, integrity and permeability were used to evaluate the compaction of the goaf overburden strata. The method is simple, requires only a few easily obtained indices, and provides a way to evaluate goaf compaction, laying a foundation for preventing goaf water hazards from adjacent abandoned coal mines. However, the compaction degree and the water-resisting ability of a goaf are also closely related to the composition of the goaf fillings and to the change in their physical properties after contact with water. Therefore, the properties of the goaf fillings after the goaf water level recovers should be considered in future studies, which can provide a basis for the prevention and control of water hazards in abandoned mines.
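The two indices used in this evaluation, the RQD of Section 4.1 and the screening of q and K against the thresholds quoted from the Chinese regulation and code (Tab. 5), can be sketched as follows. This is a simplified illustration: the actual codes define more grades than the two-way split below, and the example core pieces are made-up values.

```python
def rqd(intact_lengths_m, run_length_m, threshold_m=0.1):
    """RQD: percentage of a core run made up of intact pieces >= 0.1 m."""
    return 100.0 * sum(l for l in intact_lengths_m if l >= threshold_m) / run_length_m

M_PER_DAY_TO_CM_PER_S = 100.0 / 86400.0   # 1 m/d = 100 cm per 86,400 s

def screen_strata(q_l_s_m, k_m_per_day):
    """Two-way screen using the thresholds quoted in the text (Tab. 5):
    q = 0.10 L/(s*m) for water abundance, K = 1e-4 cm/s for permeability."""
    abundance = "medium" if q_l_s_m > 0.10 else "weak"
    k_cm_s = k_m_per_day * M_PER_DAY_TO_CM_PER_S
    perm = "permeable" if k_cm_s > 1e-4 else "weak-to-micro-permeable"
    return abundance, perm

# Made-up 3 m core run: only pieces >= 0.1 m count toward the RQD
print(rqd([0.45, 0.30, 0.08, 0.25], 3.0))   # (0.45+0.30+0.25)/3 -> 33.3...%
print(screen_strata(0.00576, 0.013))        # No. 2 hole values from the text
print(screen_strata(0.133, 0.26))           # No. 1 hole values from the text
```

Applied to the quoted test values, the screen reproduces the paper's classification: weak/weak-to-micro-permeable for the No. 2 hole goaf and medium/permeable for the No. 1 hole goaf.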
1766P COVID-19 and lung cancer: What do we know?

Background: Currently we still have limited information on how COVID-19 infection has affected lung cancer patients. In our study, we analysed whether there are differences in mortality from COVID-19 between patients diagnosed with lung cancer and the overall population within our hospital health area (320,000 people). We have also studied the most frequent characteristics of lung cancer patients who develop COVID-19 infection, and we have analysed possible factors of poor prognosis, as well as treatment outcome.

Methods: We performed a retrospective review of a total of 2216 patients admitted to Hospital Universitario Infanta Leonor in Madrid between March 5 and May 13, 2020 to identify the cumulative incidence of COVID-19 in patients with lung cancer and to describe the characteristics of these patients, treatment outcome, risk factors for poor prognosis, and mortality. We performed uni- and multivariate logistic regression.

Results: 22/2216 of the patients diagnosed with COVID-19 in our hospital had lung cancer (0.99%). 12/22 lung cancer patients with a COVID-19 diagnosis died (54.5%) vs 300/2216 COVID-19 patients in our hospital (p<0.0001). Lung cancer patients who died had a median age of 72 years (range 49-84 years). COVID-19 infection in lung cancer patients was more frequent in men (72.73%). 18/22 (81.81%) had locally advanced or metastatic tumours. We observed a trend towards higher mortality among patients with hypertension than among non-hypertensive patients (10/15 vs 2/7; P=0.095). We found higher mortality among patients who developed acute respiratory distress syndrome (ARDS) than among those who did not (4/4 vs 8/12; P=0.044). There seems to be a trend towards lower mortality among patients who received treatment with the combination of hydroxychloroquine and azithromycin than among those who did not (6/14 vs 6/8; P=0.145).

Conclusions: Lung cancer patients who became infected with COVID-19 have higher mortality than the general population. It is more frequent among men, and the development of ARDS results in a worse prognosis with higher mortality. Although treatment with azithromycin and hydroxychloroquine appears to be a good treatment option, we must wait until we have more data on the safety of the combination and results in larger patient series.

Legal entity responsible for the study: The authors.

Funding: Has not received any funding.

Disclosure: All authors have declared no conflicts of interest.

1765P Developing a risk assessment score for cancer patients during the COVID-19 pandemic

A. Indini, M. Cattaneo, M. Ghidini, E. Rijavec, C. Bareggi, B. Galassi, D. Gambini, F. Grossi. Medical Oncology Unit, Ospedale Maggiore Policlinico-Fondazione IRCCS Ca' Granda, Milan, Italy

Background: Data on the novel coronavirus (CoV) respiratory disease in cancer patients (pts) are limited. In some individuals, CoV infection triggers an aberrant inflammatory response, leading to lung tissue damage. Cancer pts treated with immunotherapy (IT) may therefore be more at risk for COVID-19 infection and related complications.

Methods: We performed a thorough review of the literature on CoV pathogenesis and cancer, selecting shared features of the two disease entities to develop a risk-assessment score quantifying both the risk of infection and the risk implied by cancer treatment delays.

Results: The score includes clinical and laboratory variables (Table). Pts' characteristics include: age, presence of comorbidities (hypertension, cardiovascular disease, diabetes, chronic obstructive pulmonary disease, chronic systemic infections), obesity, sex, Eastern Cooperative Oncology Group (ECOG) performance status (PS), and concomitant steroid treatment (>10 mg daily of prednisone equivalent, lasting for a >1-month period). Disease characteristics include: lung cancer diagnosis and history of thoracic radiotherapy (RT) (only for pts with extra-thoracic tumours).
Treatment characteristics include: line of treatment, type (IT or combined IT/chemotherapy [CT] considered high-risk, followed by CT and other anticancer drugs), and history of immune-related adverse events (irAEs). Laboratory tests include: neutrophil-to-lymphocyte ratio (NLR), lactate dehydrogenase (LDH), and C-reactive protein (CRP). Based on the resulting score, pts can be divided into the following risk categories: low (score <4), intermediate (score 4-6), and high risk (score >7).

Conclusions: There is a strong rationale supporting the presented data as potential risk factors for COVID-19 in cancer pts. The present score is currently undergoing validation in a wide population of cancer pts to confirm its role and potentially help physicians' treatment decisions.

Legal entity responsible for the study: The authors.

Funding: Has not received any funding.

Background: There is growing evidence that cancer patients may be more susceptible to contracting coronavirus disease 2019 (COVID-19) infection, show a more aggressive course, and have a poorer prognosis than the general population. An unbalanced inflammatory response and systemic coagulopathy seem to be the pathological hallmark underlying severe presentations. However, the complex immune cell interplay and the role of the tumour-associated pro-coagulative state in COVID-19 remain a challenge.

Methods: We prospectively evaluated cancer patients presenting to the emergency department of the Hospital Clínico San Carlos (Madrid, Spain) with severe pneumonia, and compared a comprehensive coagulation and immunological profile from blood samples on admission between those with SARS-CoV-2 positive and negative RT-PCR tests.

Results: 14 patients with suspected COVID-19 and receiving in-hospital care were prospectively followed. SARS-CoV-2 RT-PCR was positive on admission in 6 patients, and negative on admission and on re-test in 8 patients. Peripheral blood samples were drawn on admission.
In spite of the modest sample size, SARS-CoV-2-positive patients showed higher levels of D-dimer (median 6,355 vs.
Profile of melatonin and its receptors and synthesizing enzymes in cumulus–oocyte complexes of the developing sheep antral follicle—a potential estradiol-mediated mechanism

Background: Melatonin is an amine hormone that plays an important role in regulating mammalian reproduction. This study aimed to investigate the expression pattern of the melatonin synthesis enzymes AANAT and HIOMT and the melatonin receptors MT1 and MT2 in sheep cumulus–oocyte complexes (COCs), as well as the change in melatonin level in follicular fluid (FF) during antral follicle development. We also studied the effect of β-estradiol (E2) on MT1 and MT2 expression and on melatonin synthesis in COCs, so as to lay the foundation for further exploration of the mechanism regulating melatonin synthesis in the ovary.

Methods: COCs and FF were collected from antral follicles of different sizes (large follicles, diameter ≥ 5 mm; medium follicles, diameter 2–5 mm; small follicles, diameter ≤ 2 mm) in sheep ovaries. To assess whether E2 regulates the expression of melatonin synthesis enzymes and receptors in sheep COCs and whether this is mediated through the estrogen receptor (ER) pathway, the collected COCs were cultured in vitro for 24 h and then treated with 1 μM E2 and/or 1 μM ICI182780 (a non-selective ER antagonist). The expression of AANAT, HIOMT, MT1, and MT2 mRNA and protein was determined by qRT-PCR and western blot, and the melatonin level was determined by ELISA.

Results: AANAT, HIOMT, MT1, and MT2 were expressed at significantly higher levels in the COCs of small follicles than in those of large follicles (P < 0.05). However, the melatonin level was significantly higher in large follicle FF than in small follicle FF (P < 0.05). Further, the expression of AANAT, HIOMT, MT1, and MT2 and melatonin production were decreased by E2 treatment (P < 0.05), but when ICI182780 was added, the expression of AANAT, HIOMT, MT1, and MT2 and melatonin production recovered (P < 0.05).
Conclusions: We suggest that sheep COCs can synthesize melatonin, but that this ability decreases with increasing follicle diameter. Furthermore, E2 plays an important role in regulating the expression of MT1 and MT2, as well as melatonin synthesis, in sheep COCs through the ER pathway.

Background

Melatonin (N-acetyl-5-methoxytryptamine) is an indoleamine originally identified in the pineal gland, where it is synthesized enzymatically from serotonin (5-hydroxytryptamine) by the sequential action of arylalkylamine N-acetyltransferase (AANAT) and hydroxyindole-O-methyltransferase (HIOMT) [1-3]. Both AANAT and HIOMT are considered rate-limiting steps in melatonin production. Melatonin synthesis by the pineal gland is generally considered to be regulated by external photoperiodic cues; however, estrogen, an important ovarian hormone, also influences melatonin synthesis. One study showed that a higher dose of estradiol benzoate reduces the activities of AANAT and HIOMT in the pineal gland of ovariectomized rats and causes a decrease in the melatonin concentration in peripheral blood [4]. The pineal gland is thought to be the main site of melatonin synthesis; however, many extrapineal tissues, such as the retina [5], gastrointestinal tract [6], spleen, liver, kidney, heart [7], testes [8], and ovaries [9], also secrete melatonin. AANAT and HIOMT have been found in the ovaries of rats and humans [9-12]. The affinities of AANAT and HIOMT for their substrates in the ovaries are approximately equal to those in the pineal gland, which indicates that the ovary can also synthesize melatonin [11,12]. Moreover, the melatonin-synthesizing enzymes have been detected in granulosa cells, including those forming the cumulus oophorus [13,14], and in oocytes [15], and studies have found that human [16] and bovine [13] cumulus–oocyte complexes (COCs) can also synthesize melatonin.
Rather, these cells use the melatonin they produce for their own benefit, or for that of neighboring cells, as an antioxidant and as an autocrine or paracrine agent [17]. However, the developing follicle is characterized by high levels of steroid hormones and high metabolic demands, and oxidative stress may ensue as a result [18-20], thereby affecting the development of follicles and oocytes [21]. Melatonin, a powerful antioxidant, can scavenge reactive oxygen species (ROS) and reactive nitrogen species (RNS) via receptor-independent actions [22-24]. Melatonin not only directly protects COCs from oxidative damage, but also promotes the secretion by COCs of antioxidant proteins such as CuZn-SOD, Mn-SOD, and glutathione peroxidase (GPx), which protect the COCs themselves [25]. Beyond its antioxidant properties, melatonin has other functions in oocyte and follicular development. Melatonin has been found in human follicular fluid, and its concentration increases with follicular development [12]. The two high-affinity melatonin membrane receptors, MT1 and MT2, have been detected in granulosa cells, cumulus cells, and oocytes of humans [26] and rats [27], indicating that melatonin is involved, via its receptor pathway, in the regulation of steroidogenesis [28,29], follicular development [30-32], oocyte maturation [33-36], ovulation [37], and luteinization [38]. However, most research on melatonin and related proteins in follicular development has focused on humans and rodents, and there are few reports in sheep. Therefore, in this study, sheep were used as the experimental animal. Immunohistochemistry, real-time PCR, and western blotting were used to detect whether AANAT, HIOMT, MT1, and MT2 are expressed in sheep COCs and to analyze the expression of AANAT, HIOMT, MT1, and MT2 mRNA and protein in COCs from follicles of different sizes.
We also assessed the melatonin levels in follicular fluid (FF) from follicles of different sizes. In addition, we added exogenous E2 and the estrogen receptor inhibitor ICI182780 in vitro to test whether E2 regulates the expression of melatonin synthesis enzymes and receptors in sheep COCs.

In this experiment, ovaries were obtained from adult sheep (body weight: 35-55 kg) killed at a Lanzhou slaughterhouse. Ovary samples were held in Dulbecco's PBS (DPBS; Ca2+- and Mg2+-free) at 30-35°C containing streptomycin (100 IU/mL) and penicillin (50 mg/mL) and sent to the laboratory within 3 h. All experimental procedures involving animals were approved by the institutional animal care and local ethics committee.

COC collection

A total of 128 sheep ovaries were used in this experiment. Six ovaries were randomly selected, fixed in 4% formaldehyde for 24 h, and then embedded in paraffin for immunohistochemistry. Sheep COCs and FF were aspirated from large follicles (diameter ≥ 5 mm), medium follicles (diameter 2-5 mm), and small follicles (diameter ≤ 2 mm) of 60 sheep ovaries. The collected FF was clarified by centrifugation for 10 min at 3000 rpm. The supernatant was passed through a 0.45-μm filter and stored at −80°C until analysis of the melatonin concentration. The COCs were washed three times in DPBS and stored at −80°C until analysis of AANAT, HIOMT, MT1, and MT2 mRNA and protein expression.

Cell culture and treatment

The COCs for in vitro culture were aspirated by cutting the surface of 1-6 mm follicles from the remaining ovaries. Only COCs with finely homogeneous granular cytoplasm surrounded by compact layers of granulosa cells were selected for maturation. After washing three times with DPBS and once with TCM199, the COCs chosen for the experiment were placed in a four-well plate, each well containing 50 COCs in 700 μL of TCM199 with 5 mg/ml BSA.
The COCs were cultured in a humidified incubator (5% CO2) at 38.5°C in order to remove them from the in vivo hormone environment. After 24 h of culture, the medium was replaced with fresh medium supplemented with 5-HT (the melatonin precursor 5-hydroxytryptamine) at a final concentration of 1 μM [15]. The COCs were then treated for 24 h with: (1) 0.1% DMSO (w/v) only, as a control group; (2) 1 μM E2; (3) 1 μM ICI182780 (a non-selective ER antagonist); or (4) 1 μM E2 plus 1 μM ICI182780 (ICI182780 + E2 group). The COCs and culture medium were then collected and stored at −80°C until analysis of AANAT, HIOMT, MT1, and MT2 mRNA and protein expression and melatonin production.

Total RNA isolation and real-time polymerase chain reaction (qRT-PCR)

Total RNA was extracted using Triquick Reagent (Solarbio, Beijing, China). RNA purity and integrity were determined as described previously [39]. The RNA was then reverse-transcribed to single-stranded cDNA with a reverse transcription kit (Promega, Wisconsin, USA) for qRT-PCR. The qRT-PCR primers were designed according to the Ovis aries AANAT, HIOMT, MT1, MT2, and β-actin gene sequences shown in Table 1. qRT-PCR was conducted on an FTC-3000 thermocycler (Funglyn Biotech, Canada) in a 20-μl reaction volume consisting of 1 μl of cDNA, 1 μl of forward primer, 1 μl of reverse primer, 10 μl of 2× SYBR Green II PCR mix (TaKaRa, Shiga, Japan), and 7 μl of nuclease-free H2O. The PCR conditions were as follows: 95°C for 30 s, then 40 cycles of 95°C for 5 s and 60°C for 30 s; followed by 95°C for 30 s, 60°C for 90 s, and 95°C for 10 s. Four replicates were run for each sample to ensure the accuracy of the relative expression of the target gene. The 2^−ΔΔCt method was used to determine the expression of AANAT, HIOMT, MT1, and MT2 mRNA relative to β-actin, based on the system-generated Ct values [40].
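The relative-expression calculation above follows the standard Livak 2^−ΔΔCt scheme; a minimal sketch is below. The Ct values are hypothetical, chosen only to illustrate the arithmetic, since the paper's raw Ct data are not given here.

```python
def ddct_relative_expression(ct_target_sample, ct_ref_sample,
                             ct_target_control, ct_ref_control):
    """Livak 2^-ddCt method: fold change of a target gene (e.g. AANAT)
    normalized to a reference gene (here beta-actin) and a calibrator sample."""
    d_ct_sample = ct_target_sample - ct_ref_sample      # normalize to reference
    d_ct_control = ct_target_control - ct_ref_control   # same for calibrator
    dd_ct = d_ct_sample - d_ct_control
    return 2.0 ** (-dd_ct)

# Hypothetical Cts: the target amplifies 2 cycles earlier (relative to
# beta-actin) in the sample than in the calibrator -> 4-fold up-regulation.
print(ddct_relative_expression(24.0, 18.0, 26.0, 18.0))  # -> 4.0
```

A fold change of 1.0 means no change relative to the calibrator; values below 1.0 indicate down-regulation, as reported for the E2-treated COCs.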
Melatonin levels in FF from follicles of different sizes and in COC culture medium

The melatonin levels in FF from follicles of different sizes and in COC culture medium were quantified using an enzyme-linked immunosorbent assay (ELISA; sheep melatonin ELISA kit; USCN, Wuhan, China). In brief, cell culture medium was collected and centrifuged at 3000 rpm for 15 min at 4°C; the resulting supernatant was collected and stored at −80°C. Then, 10 μl of the extract from the cell culture medium was added to 96-well plates, and 40 μl of the diluted samples was added to each pre-coated well. After incubation for 1 h at 37°C with shaking and five washes, 100 μl of horseradish peroxidase (HRP)-conjugated detection antibody was added to each well, and the plate was incubated for 1 h at 37°C. After shaking and five washes, 100 μl of substrate solution was added, and the samples were incubated for 15 min at room temperature in the dark. The reaction was stopped by adding 50 μl of stop solution, and the absorbance was measured immediately at 450 nm against a standard curve. Each sample was tested in duplicate, and the net absorbance was obtained by subtracting the absorbance of the negative control (blank, without sample) from that of the samples. The melatonin level in cell culture medium is expressed as pg/ml.

Immunohistochemical staining

Expression of AANAT, HIOMT, MT1, and MT2 was analyzed by immunohistochemistry. Briefly, ovary samples were fixed in 4% paraformaldehyde (w/v) in 0.1 M phosphate buffer (pH 7.4) and embedded in paraffin. Sections (4 μm) were mounted onto gelatin/poly-L-lysine-coated glass slides and dried in a 60°C incubator for 2 h. They were dewaxed twice in xylene for 15 min each and rehydrated through graded ethanol solutions (100, 90, and 70% (v/v)).
The sections were then rinsed in water, washed three times with 0.01 M PBS (pH 7.4) for 3 min each, incubated with 0.3% H2O2 (w/v) for 10 min to block endogenous peroxidase activity, and then stained using the immunohistochemical SP procedure; detailed information on the SP procedure has been given in earlier papers [41]. The antibody dilutions for AANAT, HIOMT, MT1, and MT2 were 1:100, 1:200, 1:100, and 1:200, respectively. Color development was achieved using diaminobenzidine (DAB), and nuclear counterstaining was performed with hematoxylin. The negative control was incubated with PBS instead of the primary antibody, with all other conditions and steps identical. Images were observed and photographed using an Olympus DP71 optical microscope (Olympus, Japan).

Statistical analysis

Statistical analyses were performed using SPSS version 10.0 (IBM Corporation, NY, USA). All data were tested for normality and homoscedasticity, then subjected to one-way ANOVA followed by Duncan's multiple range test to determine differences. All quantitative data are presented as mean ± SEM. P < 0.05 was considered significant.

Results

Immunohistochemical staining of AANAT, HIOMT, MT1, and MT2 in sheep COCs

The localization of AANAT, HIOMT, MT1, and MT2 in COCs in sheep ovaries was analyzed by immunohistochemical staining. As shown in Fig. 1, AANAT, HIOMT, MT1, and MT2 were expressed in the same locations, prominently localized to the cumulus cells and oocytes, with almost no expression in thecal cells.

Quantification of melatonin levels in FF of follicles of different sizes

The melatonin levels in FF from follicles of different sizes were detected by ELISA (Table 2). The lowest melatonin level was seen in the small follicles (19.54 ± 1.64 pg/ml, P < 0.05), and there was no significant difference between medium and large follicles (26.12 ± 2.88 pg/ml and 29.83 ± 3.29 pg/ml, respectively; P > 0.05).
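The group comparison described under Statistical analysis can be sketched from first principles: a one-way ANOVA F statistic (Duncan's post hoc test is omitted here). The melatonin readings below are made-up values for illustration, not the paper's data.

```python
def one_way_anova_f(*groups):
    """F statistic for a one-way ANOVA: between-group mean square divided
    by within-group mean square."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    means = [sum(g) / len(g) for g in groups]
    grand = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Hypothetical melatonin readings (pg/ml) for small/medium/large follicle FF
small  = [18.1, 20.3, 19.0, 21.0]
medium = [25.0, 27.5, 26.2, 25.8]
large  = [29.1, 31.0, 28.9, 30.3]
f_stat = one_way_anova_f(small, medium, large)
print(round(f_stat, 1))   # far above the 5% critical value (~4.26 for df 2, 9)
```

A large F simply flags that at least one group mean differs; deciding which pairs differ is the job of a post hoc test such as Duncan's, which the authors ran in SPSS.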
The analysis showed that the melatonin level in small follicle FF was significantly lower than that in medium and large follicle FF (P < 0.05).

Expression of AANAT, HIOMT, MT1, and MT2 mRNA and protein in COCs from follicles of different sizes

AANAT, HIOMT, MT1, and MT2 mRNA expression levels in small follicle COCs were significantly higher than in large follicle COCs (P < 0.05, Fig. 2a); however, there was no significant difference in AANAT levels between medium and large follicles, and no significant difference in HIOMT levels between small and medium follicles. A similar trend was observed for protein expression (Fig. 2b): these proteins were significantly decreased in large follicles compared with small follicles (P < 0.05); however, the highest expression level of MT1 protein was seen in medium follicles (P < 0.05), and there was no significant difference in MT2 protein levels between medium and large follicles.

Effect of E2 and the ER antagonist ICI182780 on melatonin production and the expression of AANAT, HIOMT, MT1, and MT2 mRNA and protein in sheep COCs

To examine whether melatonin production was affected by treatment of COCs with 1 μM E2, 1 μM ICI182780, or a combination of 1 μM E2 and 1 μM ICI182780, the cell culture medium was collected and the melatonin concentration was measured. As shown in Fig. 3a, E2 significantly decreased the melatonin level (P < 0.05). ICI182780 alone had no effect on melatonin production, but when COCs were co-cultured with E2 and ICI182780, melatonin production was significantly increased compared with E2 treatment alone (P < 0.05). Moreover, E2 significantly decreased AANAT and HIOMT mRNA and protein expression in COCs (P < 0.05, Fig. 3b and c), whereas ICI182780 alone had no effect on their expression. However, the combination of E2 and ICI182780 significantly increased AANAT and HIOMT mRNA and protein expression compared with the E2 group (P < 0.05). The MT1 and MT2 mRNA and protein expression patterns were similar to those of AANAT and HIOMT (Fig. 3b and c).
Discussion

AANAT and HIOMT are the key enzymes in melatonin synthesis, and their presence determines whether a tissue or cell can synthesize melatonin. MT1 and MT2 are the main receptors through which melatonin exerts its biological activity. In this study, we detected not only MT1 and MT2 but also the AANAT and HIOMT proteins in sheep COCs. This finding is consistent with the known presence of AANAT and HIOMT in human [16] and bovine [13] COCs and of MT1 and MT2 in goat [42], yak [43], and human [26] COCs. This indicates that sheep COCs not only contain the melatonin synthesis pathway but are also a target of melatonin action.

In this study, we found that melatonin is present in sheep FF and that its level increases with increasing follicle diameter, similar to findings in humans [12]. The melatonin in the follicle originates not only from secretion by the cells within the follicle but also from enrichment from the blood [11,12,44]. Although the concentration of melatonin increases with follicle diameter, how the ability of the sheep COC itself to synthesize melatonin changes as the follicle develops was unclear. We therefore investigated the expression patterns of AANAT and HIOMT mRNA and protein in sheep COCs from follicles of different sizes. The results showed that this expression varies among follicles of different sizes, and that the expression of AANAT and HIOMT in small follicle COCs is significantly higher than in large follicle COCs. These findings suggest that although the melatonin level increases with follicle diameter, the ability of the COC itself to synthesize melatonin decreases. Studies have shown that as the antral follicle develops, the amount of vascular tissue increases; hence, substance exchange between FF and blood increases [45,46].
Another study showed that ovarian cells do not discharge melatonin into the general circulation [17]. These findings suggest that the melatonin in large follicles mainly originates from the blood. A study in rat ovaries showed that AANAT levels in oocytes increase progressively from primordial to antral follicles [15], suggesting that in preantral follicles, the follicle cells themselves synthesize melatonin and may be the main source of melatonin in the follicle. We also investigated the expression patterns of MT1 and MT2 mRNA and protein in sheep COCs from follicles of different sizes. The results showed that these expression patterns are similar to those of AANAT and HIOMT, with the expression of MT1 and MT2 in small follicle COCs significantly higher than in large follicle COCs.

One study showed that estrogen exposure not only downregulates MT1 in rat ovaries [47] but also reduces the activity of AANAT and HIOMT in the pineal gland of ovariectomized rats and decreases the melatonin concentration in peripheral blood [4]. E2, an ovarian hormone, plays an important role in the development of follicles and oocytes [48,49]. In addition, E2 levels in dominant follicles are significantly higher than those in atretic follicles and peak before ovulation [50]. The FF of large sheep follicles contains much higher levels of E2, up to 1 μM [51]. Thus, we speculated that the drastic decrease in melatonin-related proteins in sheep COCs of large follicles is associated with the high level of E2 in these follicles. To test this hypothesis, we simulated the high E2 concentration of large follicles in vitro by treating the COCs with 1 μM E2 after 24 h of culture. The results showed that 1 μM E2 significantly reduced the expression levels of MT1, MT2, AANAT, and HIOMT in COCs and decreased the melatonin level in the culture medium.
This indicates that the high level of E2 in large follicles not only inhibits the expression of the melatonin membrane receptors MT1 and MT2 in COCs, but also inhibits the expression of the melatonin synthesis enzymes AANAT and HIOMT, and thereby inhibits melatonin production. However, when E2 and ICI182780 were added together to the cultured COCs, the expression levels of MT1, MT2, AANAT, and HIOMT and melatonin production were restored. Thus far, there are few reports on how E2 regulates the expression of melatonin synthesis enzymes and melatonin receptors, and melatonin synthesis, through the estrogen receptor (ER). One study showed that the MT1 receptor is upregulated in estrogen receptor-negative cells (MDA-MB-231) and downregulated in estrogen receptor-positive cells (MCF-7) [52]. Our results show that E2 inhibits the expression of the melatonin synthesis enzymes AANAT and HIOMT through the ER, thus inhibiting melatonin production in COCs, and that the expression of the melatonin receptors MT1 and MT2 in COCs is likewise inhibited by E2 through the ER.

Many studies have reported effects of melatonin on the synthesis and function of estrogen. Chuffa et al. [53] showed that melatonin regulates estrogen secretion mainly through the neuroendocrine-gonadal axis, affecting ovarian function and downregulating estrogen secretion in the ovary. Melatonin can also act as a selective estrogen receptor modulator (SERM) by reducing the amount of estrogen binding to ER and inhibiting the binding of the E2-ER complex to DNA [54]. Recently, melatonin has been shown to regulate the activity of several enzymes responsible for the local synthesis of estrogens (aromatase, sulfatase, 17β-hydroxysteroid dehydrogenase, and estrogen sulfotransferase) in cultured human breast cancer cells, thus behaving as a selective estrogen enzyme modulator (SEEM) [55-57].
However, there are few reports on the effects of estrogen on the synthesis of melatonin and the expression of its receptors. This experiment demonstrated for the first time the inhibitory effect of E2 on melatonin production and the expression of related proteins in sheep COCs. This study also has several limitations. The ER inhibitor ICI182780 is widely used in basic research and has been proven to effectively inhibit ER function [58]. However, classical ER exists in two forms, ER-α and ER-β [59,60], and both are expressed in sheep follicles [61,62]. In this study, because we primarily focused on the effects of E2, we did not investigate which ER, ER-α or ER-β, participates in this regulatory process. This question can be addressed in future experiments using specific commercial ligands of ER-α and ER-β. Conclusions Our results demonstrate that sheep COCs can synthesize melatonin and are also a target of melatonin action. E2 reduces the expression of MT1 and MT2 and inhibits the expression of AANAT and HIOMT in sheep COCs through ER, ultimately inhibiting melatonin synthesis in COCs. This study provides a theoretical basis for further study of the E2-regulated synthesis of melatonin in COCs.
Relationships of Gut Microbiota Composition, Short-Chain Fatty Acids and Polyamines with the Pathological Response to Neoadjuvant Radiochemotherapy in Colorectal Cancer Patients Emerging evidence has suggested that dysbiosis of the gut microbiota may influence drug efficacy in colorectal cancer (CRC) patients during cancer treatment by modulating drug metabolism and the host immune response. Moreover, the gut microbiota can produce metabolites that may influence tumor proliferation and therapy responsiveness. In this study we have investigated the potential contribution of the gut microbiota and microbial-derived metabolites such as short-chain fatty acids and polyamines to neoadjuvant radiochemotherapy (RCT) outcome in CRC patients. First, we established a profile for healthy gut microbiota by comparing the microbial diversity and composition between CRC patients and healthy controls. Second, our metagenomic analysis revealed that the gut microbiota composition of CRC patients was relatively stable over treatment time with neoadjuvant RCT. Nevertheless, treated patients who achieved clinical benefits from RCT (responders, R) had significantly higher microbial diversity and richness compared to non-responder patients (NR). Importantly, the fecal microbiota of the R was enriched in butyrate-producing bacteria and had significantly higher levels of acetic, butyric, isobutyric, and hexanoic acids than NR. In addition, NR patients exhibited higher serum levels of spermine and acetyl polyamines (oncometabolites related to CRC) as well as zonulin (a gut permeability marker), and their gut microbiota was abundant in pro-inflammatory species. Finally, we identified a baseline consortium of five bacterial species that could potentially predict CRC treatment outcome. Overall, our results suggest that the gut microbiota may have an important role in the response to cancer therapies in CRC patients. 
Introduction Colorectal cancer (CRC) is the second most common malignant cancer in Western countries. The global burden of CRC is expected to increase substantially in the next two decades as a consequence of the adoption of Western lifestyles [1]. In recent years, several works have demonstrated that the gut microbiome could be a critical environmental factor that contributes to the tumorigenesis and progression of CRC, potentially by inducing pro-inflammatory responses, by producing microbial oncometabolites, and by interfering with the energy balance in cancer cells. Moreover, CRC is frequently associated with a dysbiosis in the microbial composition of the tumor and adjacent mucosa [2][3][4]. Several studies have suggested that the composition of the gut microbiota could affect the body's response to a variety of cancer therapies, including chemotherapy, radiotherapy, and immunotherapy [5][6][7]. Preoperative radiochemotherapy (RCT) followed by surgery has become the standard treatment for patients with CRC [8,9]. Recent studies have suggested that the gut microbiota may influence drug response (efficacy and toxicity) in CRC patients through several mechanisms such as immunomodulation, reduced diversity, translocation, metabolism, and ecological variation [10]. Specific gut bacteria have been shown to affect cancer treatment by modulating drug metabolism and the host immune response [11,12]. Thus, several phyla are known to mediate drug metabolism via different reactions such as isoxazole scission, denitration, proteolytic degradation, acetylation/deacetylation, deconjugation, and physical adherence to the drugs, as well as by amine formation and/or hydrolysis [13]. Scott et al. described that the gut microbiota was able to influence the efficacy of one of the first-line treatments for CRC, fluoropyrimidines, through drug interconversion involving bacterial vitamin B6 and B9 and ribonucleotide metabolism [14]. 
In addition, the effect of 5-fluorouracil treatment on CRC cells could be mediated by gut microbial metabolites [15]. Remarkably, Fusobacterium nucleatum is able to promote CRC resistance to chemotherapy by targeting both TLR4 and MYD88 innate immune signaling [16]. Furthermore, radiation may also lead to alterations in gut microbiota composition in animal models [17]. However, the clinical impact of radiotherapy on the gut microbiota of cancer patients remains mostly unexplored, although it has been proposed that the gut microbiota might play a role in the immunogenic effect of radiotherapy [18]. On the other hand, the gut microbiome produces bacteria-derived metabolites that could affect cancer proliferation and chemotherapy responsiveness. Thus, previous studies describe that SCFAs (such as butyric acid, isobutyric acid, and acetic acid) inhibit the growth of cultured human colorectal cancer cells and that butyric acid is the strongest inhibitor [19]. Ross et al. reported an association between the levels of the short-chain fatty acids (SCFAs) propionate and butyrate in patients with early-stage breast cancer and a pathological complete response (pCR) to neoadjuvant chemotherapy [20]. Coutzac et al. suggested that SCFAs limit anti-CTLA-4 activity in patients with metastatic melanoma [21]. Other bacteria-derived metabolites, such as the polyamines (PAs) (spermine, spermidine, and putrescine), are involved in almost all steps of colorectal tumorigenesis. PAs are indispensable for normal cell growth, gene expression, and cell proliferation, but their concentrations increase during the transition from a healthy cell to a tumor cell [23]. Recently, it was shown that the level of acetylated PAs is more specific for cancer. For example, N1,N12-diacetylspermine (DiAcSPM) was increased in CRC and in dysplastic colorectal lesions [24]. 
Therefore, taking all of the evidence together, we hypothesized a bidirectional interaction between neoadjuvant RCT and the gut microbiome in CRC patients: RCT might induce alterations in the gut microbiome, and these alterations might, in turn, influence the effectiveness of RCT by directly interacting with the treatment and/or by stimulating the host's immune response. In this study, we aimed to identify the possible relationship of the gut microbiome, the fecal SCFA levels, the serum levels of polyamines and acetyl derivatives of polyamines, and the intestinal permeability with neoadjuvant RCT outcome in CRC patients. Clinical Characteristics of the Patients and Healthy Controls CRC patients and healthy controls had comparable eating habits to exclude the influence of dietary differences. CRC patients and healthy controls followed a Mediterranean diet consisting of a high consumption of olive oil, fruits, legumes, vegetables, nuts, whole grains, and fish and a low intake of red meat and dairy products. Adherence to the Mediterranean diet was assessed by using a validated 14-item food frequency questionnaire in all study patients. All CRC patients completed the neoadjuvant RCT and underwent surgical resection. There was no significant difference between CRC patients and healthy controls in terms of age, sex, BMI, and biochemical data (Table 1). A total of 28 of the 40 CRC patients (70%) had a good response to the neoadjuvant RCT (responders, R) (TRG 1-2), and 12 (30%) had a poor or non-response (non-responders, NR) (TRG 3-5) to therapy. Both R and NR patients were similar in terms of sex, age, BMI, and stage of cancer, as shown in Table 1. Differences in Taxonomic Composition and Diversity of Gut Microbiota between CRC Patients and Healthy Controls The analysis of stool samples revealed 17,496,823 reads of the 16S rRNA gene (hypervariable V2-V9 regions), with an average of 105,632 (±10,825) reads for each sample in a range between 359 and 39,873. 
After trimming and filtering, 52,844 high-quality reads were selected. A total of 15,326 OTUs were obtained in the OTU clustering process, and after the alignment of the OTU representative sequences, 2582 OTUs were identified to have a relative abundance >1% in at least four samples (97% similarity cut-off). For the taxonomic assignment of these OTUs, the QIIME2 pipeline and Greengenes v13.8 were used, and the OTUs were binned into 7 phyla, 39 families, 45 genera, and 53 species. We first compared the landscape of the gut microbiome in the stool samples of all CRC patients at baseline and in healthy controls in order to define a normal gut microbiota profile. As expected, we found significantly higher diversity and richness (defined by the Shannon and Chao1 indexes, respectively) in the fecal samples of healthy controls with respect to those of CRC patients (Shannon p = 0.026 and Chao1 p = 0.001) (Figure S1A,B). The beta diversity (Bray-Curtis dissimilarity) comparison of the baseline CRC patients and the healthy controls indicated that the two cohorts had significantly different genus compositions of intestinal bacteria (p = 0.0001, ANOSIM) (Figure S1C). Furthermore, the analysis of the gut microbiota profiles between the CRC patients and the healthy controls at baseline revealed significant differences in abundance at different taxonomic levels. At the phylum level, the relative abundances of Fusobacteria (q < 0.001), Firmicutes (q < 0.001), Lentisphaerae (q = 0.007), and Proteobacteria (q = 0.003) were significantly increased in patients with CRC, while the relative abundances of Bacteroidetes (q < 0.001) and Actinobacteria (q = 0.034) were significantly decreased in CRC patients when compared to the controls (Figure 1A). 
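As a side note on the alpha-diversity metrics used here, the Shannon and Chao1 indices can be computed directly from a sample's OTU count vector. The sketch below is a plain-Python illustration with made-up counts, not study data (QIIME2 computes these internally); it uses the standard Shannon formula and the bias-corrected form of Chao1:

```python
import math

def shannon(counts):
    """Shannon diversity: H' = -sum(p_i * ln p_i) over OTUs observed in the sample."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def chao1(counts):
    """Bias-corrected Chao1 richness: S_obs + F1*(F1-1) / (2*(F2+1)),
    where F1 and F2 are the numbers of singleton and doubleton OTUs."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    return s_obs + (f1 * (f1 - 1)) / (2 * (f2 + 1))

sample = [120, 80, 40, 5, 1, 1, 2, 0]  # hypothetical OTU counts for one sample
print(round(shannon(sample), 3))
print(round(chao1(sample), 2))
```

Higher Shannon values indicate a more even, diverse community, whereas Chao1 extrapolates total richness from the rare (singleton and doubleton) OTUs, which is why the two indices are reported separately above.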
At the species level, while healthy subjects showed a significantly higher abundance of Bifidobacterium bifidum (q = 0.034) and Faecalibacterium prausnitzii (q = 0.040) with respect to the CRC patients, Fusobacterium nucleatum (q = 0.020), Bacteroides fragilis (q = 0.024), and Escherichia coli (q = 0.016) were significantly increased in the fecal samples of CRC patients in comparison to the controls. Changes in Gut Microbiota Diversity and Composition in Response to Neoadjuvant RCT Treatment in CRC Patients We compared the gut microbiota communities at baseline (T0) versus at post-treatment time points (T1, T2, and T3) to study the effect of neoadjuvant RCT on the gut microbial diversity and composition in CRC patients. The alpha diversity comparison showed no significant differences in the levels of richness (Chao1) and diversity (Shannon) between baseline and the different time points (Shannon p = 0.75 and Chao1 p = 0.61) (Figure 2A,B). Moreover, the PCoA plot based on the beta diversity (Bray-Curtis dissimilarity) revealed that the differences in the gut microbial community at T1, T2, and T3 compared to baseline (T0) were not significant (p = 0.716, ANOSIM) (Figure 2C). The main bacterial phyla (Firmicutes and Bacteroidetes) remained stable over time, while other, less abundant phyla, such as Fusobacteria and Proteobacteria, were significantly decreased at T3 compared to T0 (q = 0.042 and q = 0.039, respectively) in the CRC patients. Although the bacterial family and genus proportions differed between the different time points, they were not significantly altered by the RCT treatment (Wilcoxon test p > 0.05), apart from the genera Fusobacterium (q = 0.015), Escherichia (q = 0.04), and Klebsiella (q = 0.035), which were significantly decreased after treatment, and the genus Bifidobacterium (q = 0.049), which was significantly increased at T3 compared to T0 (Figure 3). 
Post-Treatment Microbiota Diversity and Composition Is Associated with Clinical Response to Neoadjuvant RCT in CRC Patients To evaluate the relationship between the microbial community and the treatment outcome, we classified the patients based on their response to RCT into responders (R) and non-responders (NR). As shown in Table 1, no significant differences in terms of stage of cancer, sex, age, and BMI were observed between the study groups (R vs. NR). An analysis of the alpha diversity at T3 revealed that the R group had higher diversity (Shannon index, q < 0.001; Simpson index, q = 0.039) and richness (Chao1 index, q = 0.015) than the NR group at the genus level (Figure 4A,B). Furthermore, the ordination plot based on Bray-Curtis dissimilarities and the Jaccard index showed different intestinal microbial compositions at the genus level between the R and NR groups at T3 (Bray-Curtis index, q = 0.038; Jaccard index, q = 0.035; non-parametric ANOSIM test) (Figure 4C). Baseline Microbiota Composition Could Predict Response to RCT Treatment in CRC Patients After describing the significant differences in the intestinal microbial composition between the R and NR groups after RCT treatment, we next assessed the predictive power of the gut microbiome in relation to neoadjuvant RCT response. We used random forest (RF) analysis to build a predictive model based on the overall gut microbiota profile using the species-level abundance data as input. After RF analysis with 500 bootstrap samples, we found that the overall gut microbiota composition data had a significant accuracy of 80% and an area under the curve (AUC) of 0.71. The main species accounting for this stratification were Ruminococcus albus, Bifidobacterium bifidum, Faecalibacterium prausnitzii, Fusobacterium nucleatum, and Bacteroides fragilis, and when the proportions of only these bacterial species were used for testing, the accuracy of the RF classifier increased to 96% (AUC = 0.92). 
Thus, the response to RCT or the lack of it were identified with an accuracy of 94% (AUC = 0.95) and of 91% (AUC = 0.92), respectively (Figure 7A). The validation cohort consisted of 84 CRC patients under neoadjuvant RCT (45 R patients and 39 NR patients) (data collected from the Genome Sequence Archive in the National Genomics Data Center, accession number CRA002850). After RF analysis in this validation cohort, an accuracy of 92.0% (AUC = 0.93) and 90.0% (AUC = 0.91) were obtained for the response to RCT or the lack of it, respectively (Figure 7B). Among the five species variables, Ruminococcus albus, Bifidobacterium bifidum, and Faecalibacterium prausnitzii were biomarkers of R patients, and Fusobacterium nucleatum and Bacteroides fragilis were biomarkers of NR patients. The area under the ROC curve (AUC) was 0.95, and the 95% confidence interval (CI) was 0.901-1 for the R patients (green), and the AUC was 0.92 and the 95% CI was 0.827-1 for the NR patients (red). (B) Validation cohort. The AUC was 0.93 and the 95% CI was 0.877-0.987 for the R patients (green), and the AUC was 0.91 and the 95% CI was 0.835-0.984 for the NR patients (red). 
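A minimal sketch of such a species-level random-forest classifier may help clarify the workflow. This is not the authors' code: it assumes scikit-learn, uses synthetic abundances, and the five species names merely echo those identified above.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import StratifiedKFold, cross_val_score

rng = np.random.default_rng(0)
species = ["R_albus", "B_bifidum", "F_prausnitzii", "F_nucleatum", "B_fragilis"]
n = 40  # cohort size, as in this study

# Synthetic relative abundances and a synthetic "non-responder" label that
# depends on the F. nucleatum / B. fragilis vs. R. albus / F. prausnitzii balance.
X = rng.random((n, len(species)))
y = (X[:, 3] + X[:, 4] > X[:, 0] + X[:, 2]).astype(int)

clf = RandomForestClassifier(n_estimators=500, random_state=0)  # forest of 500 trees
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc").mean()
print(f"cross-validated AUC: {auc:.2f}")
```

After fitting, `clf.fit(X, y).feature_importances_` is the kind of output that singles out a small discriminatory consortium such as the five species reported here.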
Nevertheless, compared to the R patients, the NR patients showed a significant over-representation of genes for lipid metabolism, such as arachidonic acid metabolism (q = 0.006); amino acid metabolism pathways, such as arginine and proline metabolism (q = 0.029) and glycine, serine, and threonine metabolism (q = 0.001); the metabolism of other amino acids, such as glutathione metabolism (q = 0.003); the metabolism of cofactors and vitamins, such as riboflavin metabolism (q = 0.003), ubiquinone and other terpenoid metabolism (q < 0.001), and folate biosynthesis (q = 0.014); glycan biosynthesis and metabolism, such as lipopolysaccharide biosynthesis (q = 0.007) and lipopolysaccharide biosynthesis proteins (q = 0.001); cellular processes and signaling, including cell motility and secretion (q = 0.0018); oxidative phosphorylation (q < 0.001); and pathways in cancer (q < 0.001) (Figure 8). Changes in the Serum Level of Polyamines and Zonulin and Fecal Levels of SCFAs after RCT Treatment in CRC Patients Significant differences in the serum levels of several polyamines and acetyl derivatives of polyamines were found between the R and NR patients at the post-treatment point (T3). Specifically, in the NR patients, we found a significant increase in the levels of spermine, N1-acetylspermine (N1-AcSP), N1,N12-diacetylspermine (N1,N12-DiAcSP), N1-acetylspermidine (N1-AcSPD), N1,N8-diacetylspermidine (N1,N8-DiAcSPD), and N1-acetylputrescine (N1-AcPUT) compared to those in the R patients. Within groups, there were also significant changes in the levels of N1-AcSPD and spermine in both the R and NR patients and in the serum levels of N8-AcSPD only in the NR group (Table 2). Serum polyamine levels were measured by means of ultra-high performance liquid chromatography tandem mass spectrometry (UHPLC-MS/MS). Values are expressed as mean ± SD or mean (95% CI). R: responder; NR: non-responder. 
1 Difference between R and NR patients at post-treatment when adjusted for baseline. 2 Comparison among post-treatment changes was conducted with a covariance model (ANCOVA) adjusted for baseline. * The Wilcoxon signed-rank test was used to calculate differences in polyamines between baseline and post-treatment in R and NR patients. p < 0.05 was considered statistically significant. SCFAs are bacteria-derived metabolites with important physiological functions in the host and anti-cancer properties. Analysis of the fecal levels of SCFAs revealed significant differences in the concentrations of acetic, butyric, isobutyric, valeric, isovaleric, and hexanoic acid between the R and NR study groups at post-treatment time point T3. Moreover, we found several significant differences in the within-group comparison of the fecal concentrations of acetic and butyric acid, which significantly increased after RCT treatment in the R group. On the other hand, serum zonulin levels (a circulating marker of gut permeability) were significantly increased in the NR group (but not in the R group) after RCT treatment (Table 3). Values are expressed as mean ± SD or mean (95% CI). R: responder; NR: non-responder. 1 Difference between R and NR patients at post-treatment when adjusted for baseline. 2 Comparison among post-treatment changes was conducted with a covariance model (ANCOVA) adjusted for baseline. * The Wilcoxon signed-rank test was used to calculate differences in the SCFAs and zonulin between baseline and post-treatment in R and NR patients. p < 0.05 was considered statistically significant. In addition, pairwise Spearman rank correlations were then computed between the bacterial species enriched in the gut microbiomes of the R and NR patients and the fecal SCFA, serum polyamine, and zonulin levels. 
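Each such pairwise test is a plain Spearman rank correlation between one species' relative abundance and one metabolite level across patients. A minimal sketch with SciPy, using fabricated values rather than study data, would be:

```python
from scipy.stats import spearmanr

# Hypothetical per-patient values: relative abundance of one species
# and the fecal level of one SCFA (arbitrary units).
f_prausnitzii = [0.08, 0.12, 0.05, 0.15, 0.10, 0.18, 0.07, 0.14]
butyrate      = [2.1,  3.4,  1.5,  3.8,  2.8,  4.9,  1.9,  4.2]

rho, p = spearmanr(f_prausnitzii, butyrate)
print(f"rho = {rho:.3f}, p = {p:.4f}")
```

Because Spearman works on ranks, it captures monotone associations of this kind without assuming linearity or normality of the abundance data.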
Interestingly, we found a statistically significant positive correlation between the fecal levels of butyrate and the abundance of Faecalibacterium prausnitzii (r = 0.816, p < 0.001) and Ruminococcus albus (r = 0.924, p = 0.008) in the R group and between the concentration of propionic acid and Bacteroides fragilis in the NR group. In addition, negative associations of Faecalibacterium prausnitzii with the serum levels of spermine (r = −0.619, p = 0.018) and N1,N12-DiAcSP (r = −0.793, p = 0.01) in the R patients were described, while there was a positive association of the abundance of Bacteroides fragilis and Fusobacterium nucleatum with the levels of N1,N12-DiAcSP (r = 0.436, p = 0.043; r = 0.637, p = 0.001, respectively) and N8-AcSPD (r = 0.547, p = 0.014; r = 0.752, p < 0.001) in the NR patients. Finally, Prevotella copri was positively associated with the serum zonulin levels in NR patients. Discussion In this study, we have demonstrated the existence of a significant association between the gut microbiota and the anti-cancer response of CRC patients treated with neoadjuvant RCT. Moreover, we have found that some microbial-derived metabolites such as SCFAs could be at least partially responsible for the response to RCT in these CRC patients. Finally, we have identified a baseline consortium of CRC-enriched bacterial species that may potentially serve as diagnostic bacterial markers of a good or bad response to neoadjuvant RCT. Whereas Ruminococcus albus, Bifidobacterium bifidum, and Faecalibacterium prausnitzii were overrepresented in R patients and chosen as discriminatory variables in our response-prediction RF model, Fusobacterium nucleatum and Bacteroides fragilis were overrepresented in the NR patients. The loss of microbial diversity has been associated with chronic health conditions [25][26][27] and cancer [27,28] as well as with poor outcomes to certain forms of cancer therapy [29][30][31]. 
Accordingly, recent works have also reported that patients with CRC display a lower bacterial diversity and richness in fecal samples and the intestinal mucosa compared to healthy individuals [32,33]. In this study, we found that compared to healthy controls, the CRC microbiota exhibited a state of dysbiosis with reduced overall bacterial richness and diversity. In addition, the analysis of the Bray-Curtis PCoA plot for beta diversity revealed that the CRC patients clustered separately from the healthy controls, suggesting important CRC-mediated microbial changes. Regarding gut microbiota composition, several microbes were found to be differentially represented in fecal samples between the two study groups. Thus, the gut microbiota in the CRC patients was enriched with pro-inflammatory opportunistic pathogens and was depleted in butyrate-producing bacteria, which have been shown to be essential for the preservation of intestinal homeostasis [34]. In particular, we have shown that some bacteria such as Fusobacterium nucleatum, Escherichia coli, and Bacteroides fragilis were highly prevalent in CRC patients in comparison to the healthy controls, whereas genera such as Roseburia, Faecalibacterium, and Bifidobacterium were depleted, demonstrating that microbial dysbiosis was already present in CRC at the time of diagnosis. On the other hand, we observed that gut microbiota composition was relatively stable over treatment time following RCT treatment, with the exception of a significant decrease in the abundance of Fusobacterium, Escherichia, and Klebsiella and a significant increase in Bifidobacterium (probiotic bacteria) at post-treatment time compared to baseline, showing the beneficial effect of RCT on the gut microbiome of CRC patients. Klebsiella and Fusobacterium are pathogens normally found in the human intestine that cause diarrhea and bloodstream infections and that considerably increase the rates of treatment failure and death [35]. 
After treatment, the CRC patients were classified as responders (R) versus non-responders (NR) based on their good or poor response to the RCT. Interestingly, we found significant differences in the alpha diversity at the genus level, with an increase in diversity (Shannon) and richness (Chao1) in the R patients compared to the NR patients. Similarly, there was a statistically significant difference in beta diversity (Bray-Curtis dissimilarities and Jaccard index), with a notable clustering effect by response status in the gut microbiome of these patients, indicating a difference in the bacterial community composition between the R and NR patients. At the taxa level, we found a significant enrichment in probiotic and butyrate-producing bacteria such as Bifidobacterium bifidum, Ruminococcus albus, Roseburia, and Faecalibacterium prausnitzii in the R patients, while the NR patients showed an enrichment in unfavorable microbial taxa such as Fusobacterium nucleatum, Bacteroides fragilis, Escherichia coli, Prevotella copri, and Klebsiella. Several studies have shown that butyrate-producing bacteria are negatively related to inflammatory bowel disease and colorectal cancer [36,37]. Additionally, both Fusobacterium and Prevotella have been related to recurrent CRC after chemotherapy. Given that Fusobacterium nucleatum has been previously correlated with chemoresistance [17], our results may suggest that the higher load of Fusobacterium nucleatum present in NR patients could be a potential promoter of CRC chemoresistance and therefore of a poor response to CRC treatment. Similarly, the enterotoxigenic Bacteroides fragilis, which was also enriched in the NR patients, is a significant source of chronic inflammation, and it has previously been associated with the development and aggressiveness of colorectal cancer and poor patient outcome [6,38]. 
These data also suggest that the gut microbiota composition of the R patients shifted towards a microbial profile that has great similarity to the gut microbiota of the healthy controls. Next, we sought to gain insight into the mechanism through which the gut microbiome may influence the response to RCT. Regarding the metabolic function of the gut microbiota, in the current study, PICRUSt analysis showed significant differences between the R and NR patients. In the NR patients, we found an increase in the abundance of genes for lipopolysaccharide biosynthesis as well as for arachidonic acid metabolism, glutathione metabolism, and amino acid metabolism pathways (such as arginine and proline metabolism) compared to the R patients. The significant increase in genes for lipopolysaccharide biosynthesis could be related to the significant increase in the abundance of Gram-negative bacteria such as Escherichia coli in the NR patients; these bacteria contain specific enzymes that produce LPS, which can induce Toll-like receptor 4 signaling and promote cell survival and proliferation in CRC patients [39]. Similarly, the arachidonic acid pathway is important in the development and progression of numerous malignant diseases, including CRC, because arachidonic acid stimulates key downstream signaling cascades that regulate cell proliferation, apoptosis, angiogenesis, inflammation, and immune surveillance [40,41]. With respect to the increase in the genes for glutathione metabolism in NR patients, some studies have described that elevated levels of glutathione in tumor cells are able to protect such cells in bone marrow, breast, colon, larynx, and lung cancers by conferring resistance to several chemotherapeutic drugs [42,43]. Other bacterial functions involving the metabolism of cofactors and vitamins and energy production pathways such as oxidative phosphorylation were also increased in NR patients. 
These pathways may serve as alternative bioenergetic sources for metabolically stressed cancer cells [44]. Remarkably, a recent metagenomic analysis reported that the CRC-associated microbiome showed an association with the conversion of amino acids into polyamines (e.g., the biosynthesis of putrescine from the amino acids L-arginine and L-ornithine), indicating that these metabolites could be particularly important in CRC development and progression [45]. In our study, significant differences in the serum levels of several polyamines and acetyl derivatives of polyamines were found between R and NR patients at the post-treatment point. Moreover, we observed that the abundances of N1,N12-DiAcSP and N8-AcSPD were positively associated with the increased abundance of Bacteroides fragilis and Fusobacterium nucleatum in NR patients. In fact, Bacteroides spp. and Fusobacterium spp. can synthesize putrescine and spermidine in vitro and in vivo [46]. Goodwin et al. demonstrated that the purified Bacteroides fragilis toxin (BFT) upregulates spermine oxidase in HT29/c1 and T84 colonic epithelial cells, producing the spermine oxidase-dependent generation of ROS and the induction of a marker of DNA damage, γ-H2A.x [47]. In another study, Johnson et al. found that antibiotic treatment led to a lower tissue concentration of N1,N12-diacetylspermine and a disturbed bacterial biofilm in resected CRC tissues compared to biofilm-negative CRC tissues, suggesting the implication of gut microbes in the increase of host-generated N1,N12-diacetylspermine [48]. Moreover, the activation of the amino acid metabolic pathways by the intestinal microbiota of the NR patients could contribute to the increase in polyamines, which are actively assimilated by the cells of the intestinal epithelium and induce rapid cell proliferation, favoring tumorigenesis [49,50]. 
On the other hand, several works performed in both cellular and animal models have demonstrated that CRC is linked to alterations in the metabolism of SCFAs, which have been shown to exhibit potential anti-carcinogenic effects [51,52]. Here, we have found that R patients displayed a significant over-representation of genes involved in butanoate metabolism and a significant increase in the fecal abundance of several SCFAs such as acetic and butyric acid after RCT treatment. Moreover, there was a positive correlation between the fecal levels of butyrate and the abundance of Faecalibacterium prausnitzii and Ruminococcus albus in these patients. Faecalibacterium prausnitzii is considered important in health promotion, as it is able to produce butyrate from dietary fibre and possesses anti-inflammatory properties [53]. A decrease in Faecalibacterium prausnitzii and butyrate levels defines microbiota dysbiosis in patients suffering from inflammatory bowel disease [54]. In addition, Faecalibacterium is able to use the acetate produced by Bifidobacterium (also increased in R patients), with the subsequent modulation of the intestinal mucus barrier through the modification of goblet cells and mucin glycosylation [55]. Butyrate is required for colonic epithelium repair and the production of Treg cells, which regulate the local immune response and suppress colonic inflammation and carcinogenesis [56]. Moreover, butyrate has been described to be able to induce the production of IL-18 by the intestinal epithelial cells through the activation of the GPR109a receptor, which stimulates mucosal tissue repair via the regulation of the production and availability of IL-22 [57]. The absence of IL-18 has been associated with gut microbiota dysbiosis, a dysregulation of homeostatic and mucosal repair, and an alteration of the inflammatory response, producing an increased susceptibility to carcinogenesis [58]. 
In addition, after RCT treatment, we found a significant decrease in the fecal levels of acetic, butyric, isobutyric, and hexanoic acid in the NR study group compared to the R patients, indicating the exhaustion of butyric acid-producing microbiota in their colon. In a previous study, hexanoic acid was shown to reduce the colonization and dysbiotic expansion of potentially pathogenic bacteria in the gut [59]. Finally, we found that plasma zonulin levels were significantly increased in the NR patients compared to the R patients. A higher zonulin level was correlated with the relative abundance of Prevotella copri in the NR patients. Zonulin is a protein synthesized in intestinal and liver cells that reversibly modulates the permeability of the intestinal epithelial barrier by modulating intercellular tight junctions [60]. Wright et al. found that Prevotella contains key enzymes implicated in mucin degradation, which are able to disrupt the colonic mucus barrier. A disrupted mucosal barrier may result in increased intestinal permeability, which allows the diffusion of antigens, toxins, and pathogens from the luminal environment into the mucosal tissues and circulatory system [55]. As a consequence, an inflammatory response can be triggered that induces cancer initiation, progression, and response to anticancer treatment [61]. Thus, the significant increase in Prevotella abundance found in our study could be associated in part with the poor or non-response to RCT in NR patients. This study has some limitations, such as the relatively small sample size, which could reduce the power of the study. However, despite the relatively small size of our study, statistically significant differences were observed, suggesting that the results presented herein provide solid evidence on the potential contribution of the gut microbiome to RCT outcomes in CRC patients. 
Moreover, our study also has several strengths, such as the careful design, the well-matched cohorts of CRC patients and controls, a complete definition of the inclusion and exclusion criteria, and the consideration of lifestyle-associated confounding factors that may affect the gut microbiota composition, such as dietary pattern.

Study Patients

A total of forty patients aged 35-75 years who were newly diagnosed with CRC in stages II-III (T2-T4 and/or N1-N2) from the Radiotherapy Oncology Service at the Virgen de la Victoria Hospital and with no metastatic lesions detected on imaging were enrolled in the study and were followed up for at least 1 year. All of the CRC patients received only neoadjuvant treatment for 5 weeks with pelvic radiation therapy (50 Gy in fractions of 2 Gy/session) and oral capecitabine (825 mg/m²/12 h) during radiotherapy treatment. Patients with a history of colorectal cancer or bowel resection, type 2 diabetes, chronic inflammatory bowel disease, severe active infection, or hereditary colorectal cancer syndromes were excluded from the study. Patients who received pelvic cancer radiation therapy or anti-tumor treatment in the previous 2 years, who used antibiotics or immunosuppressants in the previous 2 months, or who regularly used non-steroidal anti-inflammatory drugs, statins, or probiotics before the study were also excluded. A pathologist examined the surgical specimens, and tumor response after neoadjuvant RCT was determined according to the tumor regression grade (TRG) system described by Mandard et al. [62]. We divided the CRC patients into TRG 1-2 (patients with good response, or responders (R)) and TRG 3-5 (patients with poor or non-response (NR)). Blood and fecal samples were collected at baseline (T0), 2 and 4 weeks after starting RCT (T1 and T2, respectively), and 7 weeks after finishing treatment (T3).
In the study, we also included fecal samples from 20 healthy subjects who were matched with the CRC patients according to sex, age, and BMI. The healthy controls did not have gastrointestinal tract disorders or other complications and had not been administered antibiotics or probiotics during the 2 months prior to sample collection. The study protocol was approved by the Medical Ethics Committee at the Virgen de la Victoria University Hospital and was conducted in accordance with the Declaration of Helsinki. Written informed consent was provided by all study participants.

Laboratory Measurements

Fasting venous blood samples were collected, and serum was separated into aliquots and immediately frozen at −80 °C. Serum levels of glucose, total cholesterol, triglycerides, HDL-cholesterol, and LDL-cholesterol were measured in duplicate on a Dimension autoanalyzer (Dade Behring Inc., Deerfield, IL, USA) using enzymatic methods (Randox Laboratories Ltd., Ardmore, UK).

DNA Extraction and Gut Microbiota Sequencing

The frozen fecal samples were thawed at 4 °C to avoid dramatic temperature changes that might affect bacterial DNA integrity. Afterwards, the fecal samples were manually homogenized for 30 s with a sterile plastic scoop, and aliquots of 200 mg were used for DNA extraction with the QIAamp DNA Stool Mini kit following the manufacturer's instructions (Qiagen, Hilden, Germany). DNA concentration (A260) and purity (A260/A280 ratio) were estimated with a Nanodrop spectrophotometer (Nanodrop Technologies, Wilmington, DE, USA). DNA was amplified using the Ion 16S Metagenomics kit (Thermo Fisher Scientific, Madrid, Spain), which contains a primer pool to amplify multiple variable regions (V2, V3, V4, V6-7, V8, and V9) of the 16S rRNA gene.
The Ion Plus Fragment Library Kit (Thermo Fisher Scientific, Madrid, Spain) was used to ligate the barcoded adapters to the generated amplicons and to create the barcoded libraries, which were pooled and templated on the automated Ion Chef system (Thermo Fisher Scientific, Madrid, Spain). Sequencing was performed on an Ion S5 platform (Thermo Fisher Scientific, Madrid, Spain).

Bioinformatics Analysis

Analysis of the 16S rRNA amplicons was performed using QIIME 2 (version 2019.4). The q2-dada2 plugin with the DADA2 pipeline was used for quality filtering, denoising, dereplication, and chimera removal of the raw sequence data. The sequence variants obtained through the DADA2 pipeline were merged into a single feature table using the q2-feature-table plugin. Using the q2-vsearch plugin with 97% sequence similarity, all amplicon sequence variants from the merged feature table were clustered into OTUs using the open-reference clustering method against the Greengenes version 13_8 reference sequences. The OTUs were aligned with MAFFT (via q2-alignment) and were used to construct a phylogeny with FastTree 2 (via q2-phylogeny). The q2-feature-classifier classify-sklearn naive Bayes taxonomy classifier was used to assign taxonomy to the OTUs. Alpha diversity metrics (Shannon and Chao1), beta diversity metrics (Bray-Curtis dissimilarity), and principal coordinate analysis (PCoA) were estimated using the q2-diversity plugin after the samples were rarefied to 994 sequences per sample. Alpha diversity significance was estimated with the Kruskal-Wallis test, and beta diversity significance was estimated using the non-parametric ANOSIM test.
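For intuition, the two alpha-diversity metrics named above can be sketched in a few lines of Python. This is a didactic re-implementation operating on a hypothetical rarefied OTU count vector, not the QIIME code itself; note also that QIIME's Shannon metric uses log base 2, whereas natural log is used here (the two differ only by a constant factor):

```python
import math

def shannon(counts):
    """Shannon diversity H' = -sum(p_i * ln p_i) over taxa with nonzero counts."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)

def chao1(counts):
    """Chao1 richness estimate: S_obs + F1^2 / (2 * F2), where F1 and F2 are
    the numbers of singleton and doubleton taxa; bias-corrected form if F2 = 0."""
    s_obs = sum(1 for c in counts if c > 0)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 == 0:
        return s_obs + f1 * (f1 - 1) / 2.0
    return s_obs + f1 * f1 / (2.0 * f2)

# Hypothetical rarefied OTU counts for one sample (not study data)
sample = [120, 80, 40, 10, 5, 2, 1, 1, 0]
print(round(shannon(sample), 3), chao1(sample))
```

Shannon weighs both richness and evenness, while Chao1 extrapolates unseen richness from the rare (singleton/doubleton) taxa, which is why both are usually reported together, as in this study.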
Analysis of Short-Chain Fatty Acids (SCFAs) in Fecal Samples by Gas Chromatography (GC) Coupled with a Flame-Ionization Detector

The fecal concentrations of SCFAs were measured by GC coupled with a flame-ionization detector, as previously described [63][64][65][66], in the Servicios de Apoyo a la Investigación de la Universidad de Extremadura (SAIUEx). Briefly, 20 mg of the fecal samples were homogenized manually using a spatula in 200 µL of distilled water. Subsequently, 100 µL of the homogenized fecal samples were mixed with 40 mg of sodium chloride, 20 mg of citric acid, 40 µL of 0.1 M hydrochloric acid, and 200 µL of butanol:tetrahydrofuran:acetonitrile (50:30:20). The samples were then vigorously vortexed for 3 min and centrifuged at 14,870× g at room temperature for 10 min. The supernatant was transferred to a new plastic tube, 200 µL of a benzyl alcohol-pyridine mixture (3:2) and 100 µL of DMSO were added, and the mixture was vortexed for 5 s. Then, 100 µL of benzyl chloroformate was added carefully. To release the gases generated by the reaction, the tube lid was kept open for 1 min. The tube was then closed, and the mixture was vortexed. After derivatization, 200 µL of hexane was added to the reaction mixture, and the sample was vortexed for 5 min followed by centrifugation at 21,000× g for 2 min. Subsequently, 100 µL of the derivative extract (upper hexane layer) was transferred to a glass insert, and 5 µL were injected into the gas chromatograph and analyzed using an Agilent 6850 gas chromatograph equipped with a split/splitless injector and a flame-ionization detector (FID) (Agilent Technologies, Santa Clara, CA, USA). The temperature of the injector and detector was set to 250 °C, and the samples (5 µL) were injected at a split ratio of 25:1 using a fused-silica capillary DB-23 column (Agilent; 60 m × 0.25 mm internal diameter) coated with a 0.15 µm thick layer of 80.2% 1-methylnaphthalene.
Nitrogen was used as the carrier gas at 1 mL/min (held for 4 min), reduced to 0.8 mL/min (held for 1 min) and then to 0.6 mL/min (held for 1 min), and finally increased back to 1 mL/min. The temperature of the FID detector was maintained at 260 °C, and the flow rates of H2, air, and the make-up gas N2 were set to 30 mL/min, 350 mL/min, and 25 mL/min, respectively. The initial oven temperature was 100 °C (held for 2 min), increased to 200 °C at a rate of 15 °C/min, and finally maintained at 200 °C for 5 min. The identity of the SCFAs detected in the fecal samples was confirmed by comparing their retention times and mass spectra with those of analytical SCFA standards (Sigma-Aldrich, Madrid, Spain). The standard calibration curves for the SCFAs (acetic acid, propionic acid, butyric acid, isobutyric acid, valeric acid, isovaleric acid, 4-methylvaleric acid, hexanoic acid, and heptanoic acid) were prepared in triplicate over a concentration range of 15-1,000 µg/mL.

Analysis of Serum Polyamine Levels by Ultra-High Performance Liquid Chromatography Tandem Mass Spectrometry (UHPLC-MS/MS)

For the analysis of polyamine concentrations, serum samples were processed as previously described [67]. Briefly, 50 µL of serum (aliquoted into a 1.5 mL Eppendorf LoBind tube) was mixed with 5 µL of internal standard and 167 µL of methanol. The mixture was vortexed for 1 min, and 334 µL of chloroform was added, vortexed for 1 min, and centrifuged for 10 min at 15,000 rpm and 4 °C. After centrifugation, the upper layer was collected and transferred to a new tube, where 100 µL of carbonate-bicarbonate buffer (pH 9) and 50 µL of dansyl chloride (10 mg/mL in acetone) were added to derivatize the sample. The mixture was vortexed and placed in the dark for 1 h at room temperature. A total of two extractions of the compounds were conducted with 250 µL of ethyl acetate, between which 2.5 µL of trifluoroacetic acid was added.
A SpeedVac at 45 °C was used to evaporate the combined organic phases, which were stored at −20 °C until analysis. The samples were reconstituted in 50 µL of 0.2 M ammonium acetate and acetonitrile (30:70). Chromatography of the samples was performed with an Agilent UHPLC 1290 series binary pump (Agilent Technologies, Santa Clara, CA, USA), and the separation was carried out on a Kinetex EVO C18 column (2.6 µm particle size, 2.1 mm internal diameter × 150 mm length) (Phenomenex, Torrance, CA, USA) held at 25 °C. A gradient between water acidified with 0.1% formic acid (A) and acetonitrile acidified with 0.1% formic acid (B), at a flow rate of 400 µL/min, was used as the mobile phase for elution. The injection volume was 2.5 µL.

Intestinal Permeability Analysis

Plasma levels of zonulin were measured in duplicate using a commercial ELISA kit (Immunodiagnostik AG, Bensheim, Germany). Mean values were used for data analysis. Intra- and inter-assay coefficients of variation were between 3-10%, and the detection limit was 0.22 ng/mL.

Statistical Analysis

The Kruskal-Wallis rank-sum test was performed to compare bacterial abundance between the study groups, and the false discovery rate (FDR) using the Benjamini-Hochberg method was applied to correct the significant p-values (q < 0.05). The Kruskal-Wallis rank-sum test with subsequent post hoc Bonferroni correction was used to analyze differences in the clinical and biochemical variables between the three study groups, whereas differences between two groups were analyzed using the Mann-Whitney U test. Inter-group comparisons of post-treatment changes in fecal SCFAs and plasma zonulin levels were performed using a covariance model (ANCOVA) adjusted for baseline. A Wilcoxon signed-rank test was used to calculate differences in fecal SCFAs and plasma zonulin between baseline and the post-treatment timepoint T3.
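The Benjamini-Hochberg correction applied to the per-taxon p-values is a simple step-up procedure and can be sketched in pure Python; the p-values below are hypothetical examples, not results from this study:

```python
def benjamini_hochberg(pvals):
    """Return Benjamini-Hochberg adjusted q-values (step-up procedure),
    preserving the input order of the p-values."""
    m = len(pvals)
    order = sorted(range(m), key=lambda i: pvals[i])
    q = [0.0] * m
    prev = 1.0
    # Walk from the largest p-value down, enforcing monotone q-values.
    for rnk in range(m, 0, -1):
        i = order[rnk - 1]
        prev = min(prev, pvals[i] * m / rnk)
        q[i] = prev
    return q

# Hypothetical per-taxon p-values from Kruskal-Wallis tests
pvals = [0.001, 0.008, 0.039, 0.041, 0.042, 0.06, 0.074, 0.205, 0.212, 0.216]
qvals = benjamini_hochberg(pvals)
significant = [i for i, qv in enumerate(qvals) if qv < 0.05]
print(significant)  # -> [0, 1]
```

Note how three raw p-values below 0.05 (indices 2-4) survive only as q ≈ 0.084 after correction, which is exactly why the taxon comparisons in this study report q rather than raw p.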
The Spearman correlation coefficients were calculated to estimate the correlations between the bacterial taxa, the microbial-derived metabolites (SCFAs and polyamines), and intestinal permeability. Statistical analyses were conducted with the statistical software package SPSS version 26.0 (SPSS Inc., Chicago, IL, USA). Random forests (RF) were used to predict baseline bacteria (species-level relative abundance data) related to the neoadjuvant RCT response using the default parameters of the R implementation of the algorithm (R package "randomForest"), and bootstrapping (n = 500) was used to assess the classification accuracy. p-values below 0.05 were considered statistically significant.

Conclusions

In this study, we have demonstrated that the intestinal microbiota composition of CRC patients differs from that of healthy controls. In CRC patients, the gut microbiota is characterized by a significantly lower bacterial diversity and richness, a significant increase in proinflammatory opportunistic pathogens, and a decrease in the relative abundance of beneficial or commensal butyrate-producing bacteria. In addition, neoadjuvant RCT treatment did not induce significant changes in gut microbiota diversity and composition, with the exception of a significant decrease in Fusobacterium, Escherichia, and Klebsiella and a significant increase in Bifidobacterium at the post-treatment time compared to baseline. Nevertheless, after classifying the CRC patients into the R and NR groups according to their response to neoadjuvant RCT, we observed a significant increase in diversity and richness in the R patients compared to the NR patients. Additionally, a compositional change was shown between both study groups, with a significant enrichment of probiotic and butyrate-producing bacteria in the R patients, accompanied by an enrichment of unfavorable pro-inflammatory bacteria in the NR patients.
Moreover, the NR patients had significantly higher levels of spermine, some acetyl derivatives of polyamines, and serum zonulin, and significantly lower fecal levels of acetic, butyric, isobutyric, and hexanoic acids than the R patients. These microbial-derived metabolites are important factors that connect the intestinal microbiota to CRC and could be responsible for RCT efficiency. Moreover, in the NR patients, the PICRUSt analysis found an over-representation of genes involved in lipopolysaccharide biosynthesis as well as in arachidonic acid and glutathione metabolism, genes from pathways associated with bacterial pathogenesis, inflammation, cell survival, proliferation, and therapy response. In addition, we also identified a baseline consortium of CRC-enriched bacterial species (Ruminococcus albus, Bifidobacterium bifidum, Faecalibacterium prausnitzii, Fusobacterium nucleatum, and Bacteroides fragilis) that could potentially predict cancer treatment outcome, suggesting that the intestinal composition in CRC patients is important in predicting the response of the gut microbiome to neoadjuvant RCT. Altogether, our results suggest that a healthy gut microbiome could be indispensable for an optimum therapeutic response and that a dysbiotic microbiota could be the underlying reason for variable responses to similar therapeutic strategies in different patients.

Funding: This work was supported by PI15/00256 from the Institute of Health "Carlos III" (ISCIII), co-funded by the Fondo Europeo de Desarrollo Regional-FEDER. Maria Isabel Queipo-Ortuño was supported by the "Miguel Servet Type II" program (CPI18/00003, ISCIII, Spain, co-funded by the Fondo Europeo de Desarrollo Regional-FEDER) and by the "Nicolas Monardes" research program of the Consejería de Salud (C-0030-2018, Junta de Andalucía, Spain). Bruno Ramos Molina was supported by the "Miguel Servet Type I" program (CP19/00098, ISCIII, Spain, co-funded by the Fondo Europeo de Desarrollo Regional-FEDER).
Lidia Sanchez-Alcoholado was the recipient of a predoctoral grant (PE-0106-2019) from the Consejería de Salud y Familia (co-funded by the Fondo Europeo de Desarrollo Regional-FEDER, Andalucía, Spain). Aurora Laborda-Illanes was the recipient of a predoctoral grant, PFIS-ISCIII (FI19-00112), co-funded by the Fondo Europeo de Desarrollo Regional-FEDER, Madrid, Spain.

Institutional Review Board Statement: The study was conducted according to the guidelines of the Declaration of Helsinki and was approved by the Ethics Committee of the Virgen de la Victoria University Hospital (30 October 2015).

Informed Consent Statement: Informed consent was obtained from all subjects involved in the study.

Data Availability Statement: The data presented in this study are available upon request from the corresponding author. The data are not publicly available, as they contain information that could compromise the privacy of research participants.

Conflicts of Interest: The authors declare no conflicts of interest.
Synthesis of two SAPAP3 isoforms from a single mRNA is mediated via alternative translational initiation

In mammalian neurons, targeting and translation of specific mRNAs in dendrites contribute to synaptic plasticity. After nuclear export, mRNAs designated for dendritic transport are generally assumed to be translationally dormant, and activity of individual synapses may locally trigger their extrasomatic translation. We show that the long, GC-rich 5′-untranslated region of dendritic SAPAP3 mRNA restricts translation initiation via a mechanism that involves an upstream open reading frame (uORF). In addition, the uORF enables the use of an alternative translation start site, permitting synthesis of two SAPAP3 isoforms from a single mRNA. While both isoforms progressively accumulate at postsynaptic densities during early rat brain development, their levels relative to each other vary in different adult rat brain areas. Thus, alternative translation initiation events appear to regulate the relative expression of distinct SAPAP3 isoforms in different brain regions, which may function to influence synaptic plasticity.

Sustained translation initiation depends on the continued cycling of eIF2 between the GTP- and GDP-bound states 2 . Phosphorylation of eIF2α at Serine 51 prevents GDP/GTP exchange, reduces initiation events, and consequently inhibits general translation. Paradoxically, increased phospho-eIF2α levels enhance the synthesis of proteins encoded by certain mRNAs that typically possess several uORFs 13,14,21 . After translating a uORF, a portion of the post-termination 40S complexes can resume scanning, rebind the ternary complex and then reinitiate translation at a downstream uAUG. Successive reading of several uORFs strongly restricts the number of 43S complexes reaching AUG11 and therefore inhibits translation of the mORF. Phosphorylation of eIF2α lowers the levels of GTP-bound eIF2α.
This increases the likelihood that post-termination 40S subunits resuming scanning will bypass the remaining uORFs, not re-acquiring a ternary complex until reaching AUG11. Such a mechanism increases the probability that a reinitiating 43S complex will translate the mORF and explains the unusual increase in the levels of proteins synthesized from these transcripts. Importantly, synaptic activity is known to control the levels of phospho-eIF2α in neurons 22,23 , and the phosphorylation status of eIF2α bidirectionally modulates both synaptic plasticity and memory storage 24 . In mammalian neurons, a subset of mRNAs are specifically targeted to dendrites 25,26 . These mRNAs are believed to be kept in a translationally dormant state after exiting the nucleus until specific synaptic signals initiate local protein synthesis at synapses. The molecular details of this regulation are only partially understood [27][28][29] . For example, several dendritic transcripts initiate translation in a 5′ cap-independent manner using so-called internal ribosome entry sites (IRESs) 30 that recruit ribosomes directly to a start codon without the assembly of initiation factors at the 5′-end of the mRNA 31,32 . Importantly, many dendritically localized mRNAs encode proteins that are highly concentrated at postsynaptic densities (PSDs), such as SAPAP3 33,34 , different Shank/ProSAP family members (Shanks) 35 and PSD-95/SAP90 36,37 . PSDs are dense cytoplasmic protein signaling networks associated with the postsynaptic membrane of excitatory synapses [38][39][40] . SAPAP3, Shanks and PSD-95 are master scaffolding proteins of the PSD that cross-link neurotransmitter receptors, signaling molecules and cytoskeletal components [41][42][43] .
While SAPAP3 mRNAs have not yet been shown to be specifically translated at synapses, activity-triggered local synthesis of PSD components at synapses is generally believed to induce a reorganization of the postsynaptic signal transduction machinery and thereby regulate synaptic plasticity 27 . Consistently, SAPAP3 knockout mice exhibit behavior reminiscent of obsessive-compulsive disorder in humans, suggesting that tight control of SAPAP3 levels in mammalian brain neurons is essential to maintain normal synaptic physiology 44 . Here we show that the regulation of SAPAP3 levels is exerted at the level of translation. A particular uORF strongly down-regulates translation efficiency while also enabling the synthesis of two distinct SAPAP3 isoforms via alternative translation initiation.

Results

The 5′UTR of SAPAP3 mRNAs down-regulates translation. 5′UTRs are involved in regulating translation initiation, a major point of translation control. To investigate regulatory effects of the 5′UTR of rat SAPAP3 mRNAs (S3-5′UTR), we first determined its complete sequence. A 5′RACE product obtained from adult rat brain cDNA encodes a 295 nt GC-rich (72%) 5′UTR that is highly conserved in mouse, dog and human (Fig. 1A) and includes four uORFs and 104 nt of the previously reported rat SAPAP3 cDNA (NM_173138). To assess if the S3-5′UTR contains an IRES, bicistronic transcripts encoding separate Photinus (PhoLuc) and Renilla luciferase (RenLuc) ORFs connected via different intervening sequences were expressed in HEK293 cells and primary cortical neurons. IRES activity is defined as the PhoLuc to RenLuc activity ratio (Pho:Ren ratio), with the quotient obtained with the basic mRNA containing a synthetic 29 nt intervening sequence set to 1.
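Characterizing a 5′UTR as described above (GC content, enumeration of uORFs) is straightforward to script. The sketch below uses a short invented sequence rather than the actual S3-5′UTR, and defines a uORF simply as an AUG followed in-frame by a stop codon within the UTR:

```python
def gc_content(seq):
    """Fraction of G/C nucleotides, as used to call a 5'UTR 'GC-rich'."""
    seq = seq.upper()
    return (seq.count("G") + seq.count("C")) / len(seq)

def find_uorfs(utr):
    """Scan a 5'UTR 5'->3' for upstream ORFs: an AUG followed, in frame, by
    the first stop codon (UAA/UAG/UGA) within the sequence. Returns tuples
    of (start index, end index past the stop, peptide length in aa)."""
    stops = {"UAA", "UAG", "UGA"}
    utr = utr.upper().replace("T", "U")
    uorfs = []
    for start in range(len(utr) - 2):
        if utr[start:start + 3] != "AUG":
            continue
        for pos in range(start + 3, len(utr) - 2, 3):
            if utr[pos:pos + 3] in stops:
                uorfs.append((start, pos + 3, (pos - start - 3) // 3))
                break
    return uorfs

# Hypothetical mini-5'UTR: a start-stop-only uORF (like uORF1), a uORF
# encoding a 2-aa peptide, and a final AUG with no in-frame stop in the UTR
# (analogous to a uORF overlapping the main ORF, like uORF2).
demo = "GCAUGUAAGGAUGGCCGCAUGAACGC"
print(find_uorfs(demo))  # -> [(2, 8, 0), (10, 22, 2)]
```

A real analysis of the S3-5′UTR would additionally record each uAUG's frame relative to the main ORF and its Kozak context, since both properties determine leaky scanning and reinitiation behavior as discussed below.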
While known IRES elements from Encephalomyocarditis virus (EMCV) or Arc/arg3.1 mRNAs 30 produced Pho:Ren ratios of about 4 or higher, Pho:Ren ratios below 1 clearly showed that the S3-5′UTR exhibits no IRES activity in either cell system (see Supplementary Fig. S1 online). To further examine the translation regulatory potential of the S3-5′UTR, mammalian cells were transfected with various eukaryotic expression vectors. pEGFP-N3 encodes mRNAs consisting of a 92 nt synthetic 5′UTR, the enhanced green fluorescent protein (EGFP) ORF and a 192 nt 3′UTR partially derived from the SV40 early mRNA. The 5′UTR is short, contains no uORFs and enables efficient translation. p5′S3-EGFP transcripts are identical but additionally contain the S3-5′UTR plus the first eight codons of SAPAP3 mRNAs inserted upstream of and in-frame with the EGFP ORF. One day after transfection, pEGFP-N3 transfected HEK293 cells and neurons exhibited strong autofluorescence, while only very little EGFP fluorescence was observed in p5′S3-EGFP transfected cells (Fig. 1B&C). Thus, in comparison to the synthetic 5′UTR, the S3-5′UTR appears to strongly reduce translation in neuronal and non-neuronal cells. What mechanism underlies translational down-regulation? We tested transcripts derived from three different vectors, all containing an ORF encoding FLAG-tagged SAPAP3 and the complete 644 nt SAPAP3 3′UTR (S3-3′UTR; GenBank accession number FJ705274). However, each vector possesses a distinct 5′UTR: i) a synthetic 28 nt element supporting efficient translation (pFS3), ii) the S3-5′UTR (pS3-FS3) and iii) the 5′UTR of rat SAPAP1 mRNAs (S1-5′UTR, pS1-FS3). While similar in length to the S3-5′UTR, the 274 nt S1-5′UTR has only a 50% GC content. Northern blot analysis with RNA from transfected HEK293 cells revealed that all three recombinant mRNAs are present at comparable concentrations (Fig. 2A), indicating that the distinct 5′UTRs do not lead to different transcript levels.
However, while pFS3 and pS1-FS3 transfected cells contained comparable FLAG-SAPAP3 levels as observed by Western blotting, the recombinant protein was barely detectable in extracts of cells synthesizing the S3-5′UTR containing mRNA (Fig. 2B). Thus, in contrast to the S1-5′UTR, the S3-5′UTR strongly down-regulates translation efficiency. Translation control often involves interactions between 5′ and 3′UTRs [45][46][47] . To assess the regulatory effect of both the S3-5′UTR and the S3-3′UTR, HEK293 cells and cortical neurons were co-transfected with pLUC-based constructs and pREN, encoding PhoLuc and RenLuc, respectively. Luciferase activities were then determined from cell extracts. The Pho:Ren ratio of pLUC transfected cells synthesizing mRNAs containing an 87 nt synthetic 5′UTR, the PhoLuc ORF and a 294 nt 3′UTR partially derived from the bovine growth hormone (BGH) mRNA was set to 100%. Pho:Ren ratios of cells transfected with other constructs were calculated as a percentage of this value (nPho:Ren ratio). Replacing the BGH 3′UTR with the S3-3′UTR (pS33′-LUC) did not alter the nPho:Ren ratio in neurons and led to only a slight reduction (28%) in HEK293 cells (Fig. 2C). In contrast, swapping the synthetic 5′UTR for the S3-5′UTR (pS35′-LUC) drastically reduced the nPho:Ren ratio in both neurons (95%) and HEK293 cells (97%). In both cell systems, the additional exchange of the BGH 3′UTR for the S3-3′UTR (pS35′3′-LUC) did not significantly alter the S3-5′UTR mediated reduction of the nPho:Ren ratio. Thus, the S3-5′UTR strongly reduces translation efficiency in neuronal and non-neuronal cells without a significant contribution from the S3-3′UTR. These findings were supported by luciferase assays performed in reticulocyte lysates programmed with in vitro transcribed capped mRNAs. The PhoLuc activity achieved with 25 ng and 1 µg of pS35′-LUC mRNA is only about 3% and 1%, respectively, of the PhoLuc activity obtained with the same amounts of pLUC mRNA.

uORF-mediated translational down-regulation.
Down-regulation of translation efficiency is often mediated via uORFs or stable secondary structures within 5′UTRs 7,10 . Stem-loop formations with ΔG values below −50 kcal/mol often inhibit translation by stalling 43S pre-initiation complex scanning of the 5′UTR 9 . MFold predicts that the S3-5′UTR folds into a stable secondary structure with a ΔG of about −150 kcal/mol (see Supplementary Fig. S2 online). Does the predicted conformation contribute to translational down-regulation? Three pS3-FS3-based vectors encoding mRNAs harboring distinct deletions within the S3-5′UTR were constructed to address this question (Fig. 3A). Deleting nt 53-124 (pS3Δ53-124-FS3), 1-150 (pS3Δ150-FS3) and 1-203 (pS3Δ203-FS3) from the S3-5′UTR changes the ΔG values of the predicted secondary structures to about −100, −58 and −27 kcal/mol, respectively. Cell lysates of HEK293 cells co-transfected with pEGFP-N3 and one of the pS3-FS3-based vectors were analyzed by Western blotting with anti-EGFP and anti-FLAG antibodies (Fig. 3B). Comparable EGFP levels in all tested lysates indicate similar transfection efficiencies. However, FLAG-SAPAP3 was highly concentrated in pFS3 transfected cells but only weakly detected in cells expressing mRNAs containing either the complete S3-5′UTR or truncated versions thereof. Thus, the proximal 90 nt of the S3-5′UTR are sufficient for translational down-regulation. Since the ΔG value predicted for the secondary structure of this truncated 5′UTR is about −27 kcal/mol, the S3-5′UTR appears to down-regulate translation efficiency by a mechanism unrelated to the stalling of 43S complex scanning by strong secondary structures. Are the four evolutionarily conserved uORFs of the S3-5′UTR involved in translational control? Whereas uORF1 consists of only a start and a stop codon, uORF2, uORF3 and uORF4 encode peptides spanning 23, 11 and 5 amino acids, respectively.
Of the four uORFs, only uORF3 is in-frame with the SAPAP3 ORF (S3-ORF), while only uORF2 overlaps with the S3-ORF (Fig. 3A). The first two nt of the uORF2 stop codon also represent the last two nt of the SAPAP3 start codon (AUGA, Fig. 1). NetStart (www.cbs.dtu.dk/services/NetStart/) predicts that uAUG2 and uAUG3 can indeed function as translation start sites (Table 1).

[Figure 1 legend, fragment: ... (GenBank accession number AI836865) and human (BI756308) SAPAP3 mRNAs, indicating the AUG start codon of the SAPAP3 ORF, conserved nucleotides (highlighted by a black background) and four conserved uORFs (underlined; uORF phases are indicated relative to the phase of the SAPAP3 ORF). (B&C) HEK293 cells (B) and rat cortical neurons (C) transfected with pEGFP-N3 exhibit strong EGFP fluorescence (upper panels, green channel), whereas EGFP levels are drastically reduced in cells transfected with p5′S3-EGFP (lower panels, green channel). Identical exposure times were used to detect EGFP. In primary neurons, somatodendritic microtubule-associated protein 2 (MAP2) is detected by immunocytochemistry (red channel). Two-channel overlay pictures (C, right panels) indicate that transfected neurons are normally differentiated.]

www.nature.com/scientificreports SCIENTIFIC REPORTS | 2 : 484 | DOI: 10.1038/srep00484

To determine whether uORF2 and uORF3 interfere with translation initiation at AUG11, the translation efficiency of puORF2+3AAG-Δ150-FS3 transcripts, in which uAUG2 and uAUG3 were converted into AAG triplets, was examined in transfected HEK293 cells (Fig. 3B). In contrast to the weak FLAG-SAPAP3 synthesis from pS3Δ150-FS3 derived mRNAs, both puORF2+3AAG-Δ150-FS3 and pFS3 transcripts lead to high FLAG-SAPAP3 levels. Disruption of only uORF2 in the context of the entire S3-5′UTR (puORF2AAG-S3-FS3; Fig. 3C) completely abolished the down-regulation of FLAG-SAPAP3 synthesis, while disruption of only uORF3 (puORF3AAG-S3-FS3) had no effect.
Note that the point mutation in puORF2AAG-S3-FS3 transcripts does not change the predicted ΔG value of the S3-5′UTR. Thus, S3-5′UTR-mediated down-regulation of FLAG-SAPAP3 synthesis results from the translation of uORF2. Since uORF2 overlaps with the S3-ORF, ribosomes translating uORF2 bypass AUG11 and are thus unable to synthesize full-length SAPAP3.

uORF2 mediates synthesis of two distinct SAPAP3 isoforms from a single mRNA. Transcripts of pFS3-based vectors contain regions that are not present in SAPAP3 mRNAs, such as the FLAG-ORF. To eliminate all effects possibly mediated by these sequences, we performed experiments with constructs derived from pS3-S3, a vector encoding the full-length authentic SAPAP3 mRNA. Lysates from transfected HEK293 cells were analyzed by Western blotting utilizing a SAPAP3 antiserum (Fig. 3D). SAPAP3 was very efficiently synthesized from pS3 transcripts, in which the S3-5′UTR was replaced by a short synthetic 5′UTR. In comparison, authentic SAPAP3 mRNAs (pS3-S3) or identical transcripts missing nt 1-150 (pS3Δ150-S3) and 1-203 (pS3Δ203-S3) yielded strongly diminished SAPAP3 levels (Fig. 3D, ~130 kDa band designated "SAPAP3a"). However, disruption of both uORF2 and uORF3 in pS3Δ203-S3 transcripts (puORF2+3AAG-Δ203-S3) restored the SAPAP3a concentration to levels observed in pS3-transfected cells. Interestingly, cells expressing mRNAs containing the complete (pS3-S3) or a truncated S3-5′UTR (pS3Δ150-S3 and pS3Δ203-S3) produce a second SAPAP3 isoform with an apparent molecular weight of ~110 kDa ("SAPAP3b"). Yet, SAPAP3b is neither synthesized from an mRNA in which both uORF2 and uORF3 are disrupted (puORF2+3AAG-Δ203-S3) nor from transcripts totally lacking S3-5′UTR sequences (pS3). Thus, uORF2 and/or uORF3 are required for SAPAP3b synthesis while partially suppressing SAPAP3a synthesis.
Mutation of only uAUG3 in full-length SAPAP3 mRNAs (puORF3AAG-S3) did not alter the SAPAP3a:SAPAP3b level ratio (a:b ratio) as compared to authentic SAPAP3 mRNAs, while the single disruption of uORF2 (puORF2AAG-S3) shifted the ratio in favor of SAPAP3a. Co-migration of SAPAP3a and FLAG-SAPAP3 suggests that translation of the full-length S3-ORF leads to SAPAP3a. Remarkably, rat PSD preparations contain two SAPAP3 isoforms co-migrating with SAPAP3a and SAPAP3b synthesized in transfected cells (Fig. 3E). Taken together, these data suggest that two distinct SAPAP3 isoforms are synthesized from a single mRNA, in which uORF2 down-regulates SAPAP3a synthesis in favor of SAPAP3b synthesis.

Alternative start codons mediate synthesis of two SAPAP3 isoforms. The synthesis of two SAPAP3 isoforms may result from ATI involving two parallel scenarios: i) ribosomes scanning the S3-5′UTR skip uAUG2 (leaky scanning) and initiate SAPAP3a synthesis at AUG11; ii) ribosomes translating uORF2 bypass AUG11, reinitiate translation at a downstream AUG and thus synthesize SAPAP3b. NetStart predicts that the in-frame triplets AUG167 and AUG1277 are suitable for translation initiation (Table 1). To determine if one of these AUGs directs SAPAP3b synthesis, two pS3-derived vectors containing deletions of coding-sequence nucleotides 1-66 (pΔAUG11-S3) and 1-276 (pΔAUG167-S3) of the S3-ORF (Fig. 4A), respectively, were transfected into HEK293 cells. Western blot analysis revealed that mRNAs in which either AUG11 alone or AUG11 and AUG167 together are deleted lead to the synthesis of a protein co-migrating with SAPAP3b translated from pS3Δ150-S3 transcripts (Fig. 4B). In addition, when AUG1277 was mutated to an AAC triplet in pS3-S3, the resulting vector pS3-S31277AAC still led to the synthesis of SAPAP3a but not SAPAP3b (Fig. 4C). Thus, SAPAP3b synthesis appears to start at AUG1277.
Interestingly, an additional protein intermediate in size between SAPAP3a and SAPAP3b is synthesized in minute amounts from pΔAUG11-S3 transcripts but not pΔAUG167-S3 mRNAs, and is likely to result from rare translation initiation at AUG167. Taken together, different mechanisms appear to contribute to the synthesis of two SAPAP3 isoforms from a common mRNA. Leaky scanning of 43S complexes past all four uAUGs and subsequent translation initiation at AUG11 mediates SAPAP3a synthesis. In contrast, ribosomes translating uORF2 bypass AUG11, stop translation shortly thereafter and may subsequently reinitiate translation at AUG1277, leading to SAPAP3b synthesis. Notably, the first 300 nt of the SAPAP3 mORF are highly conserved in various vertebrate species (see Supplementary Fig. S3 online).

SAPAP3 isoform concentrations differ in distinct rodent brain regions. To determine the postsynaptic abundance of both SAPAP3 isoforms during brain development, we analyzed whole rat brain PSD fractions by Western blotting. Whereas SAPAP1 is already detected in embryonic brain (E20) and remains present until adulthood, its low molecular weight isoform GKAP first appears around postnatal day 21 (P21) (Fig. 5A). In contrast, both SAPAP3 isoforms progressively accumulate at PSDs during early rat brain development, a trend similarly observed for PSD-95. Notably, the a:b ratio remains approximately constant throughout brain development. While postsynaptic SAPAP levels drop after P21, PSD-95 still appears to accumulate at PSDs during late postnatal development. Further analysis of PSD fractions from six different adult rat brain regions revealed a strong postsynaptic accumulation of both SAPAP3 isoforms in the neocortex, hippocampus and thalamus (Fig. 5B), whereas the cerebellum, brain stem and olfactory bulb contain relatively low SAPAP3 levels.
In the hippocampus, thalamus, cerebellum and brain stem, SAPAP3b is the predominant isoform with a:b ratios of 0.69, 0.59, 0.47 and 0.89, respectively. In contrast, SAPAP3a levels in the olfactory bulb are about 20% higher than the corresponding SAPAP3b concentration (1.19 a:b ratio), while the a:b ratio in the neocortex is about 1.

(Figure 3 caption, continued:) … (3) and ΔG values of individual 5′UTRs are listed to their right. (B) Translation inhibition is independent of stable secondary structures. Protein extracts (10 µg) from untransfected and transfected HEK293 cells were analyzed by Western blotting with anti-FLAG tag and anti-EGFP antibodies. (C) Down-regulation of translation depends on an intact uORF2. Western blotting was performed as described in B. (D) Concomitant synthesis of a- and b-isoforms from authentic SAPAP3 mRNAs depends on uORF2. Western blots were performed as described in B using anti-SAPAP3 and anti-α-tubulin antibodies. (E) Two endogenous SAPAP3 isoforms co-migrating with SAPAP3a and SAPAP3b are present in a rat brain PSD fraction. Western blotting was performed as described in D.

Phosphorylation at Serine 51 decreases the level of functional eIF2α by inhibiting GTP-GDP exchange and consequently limits translation initiation events 2. In transcripts containing several uAUGs, post-termination 40S subunits resuming 5′UTR scanning are therefore more likely to reinitiate translation at the authentic mAUG after translating a uORF. To assess whether eIF2α phosphorylation may affect relative initiation rates occurring at AUG11 and AUG1277 of SAPAP3 mRNAs in vivo, we analyzed the a:b ratio in three different brain regions of heterozygous eIF2α+/S51A knock-in mice 48. In these animals, one eIF2α allele encodes a variant that contains an alanine instead of a serine residue at position 51 and is therefore phosphorylation resistant. Thus, heterozygous mice possess reduced phospho-eIF2α levels compared to wildtype animals.
We prepared both total homogenates and PSD enriched fractions from the neocortex, hippocampus and cerebellum of individual adult wildtype and heterozygous mice. In these brain fractions, the a:b ratio was determined by Western blotting with a SAPAP3-specific antiserum. In both homogenates and PSD enriched fractions obtained from all three tested brain areas, a:b ratios were found to be unaltered in eIF2α+/S51A knock-in animals compared to wildtype mice (Fig. 5C&D). We also did not observe any statistically significant variations in total SAPAP3a or SAPAP3b levels (normalized against tubulin concentrations) between wildtype and knock-in mice (Fig. 5E). However, whereas a:b ratios were close to 1 in homogenates from all three brain regions and postsynaptic sites of the cerebellum, SAPAP3a is the predominant isoform in PSD fractions of the neocortex and hippocampus. Taken together, these data suggest that while in brain neurons the relative synthesis rate of SAPAP3a compared to SAPAP3b may not be influenced by the phosphorylation of eIF2α, postsynaptic a:b ratios in different brain regions may still be variable.

Discussion

In this study, we describe cis-acting elements regulating the translation of mRNAs encoding the postsynaptic protein SAPAP3. At excitatory mammalian brain synapses, central scaffold proteins of the PSD, such as SAPAP3, Shanks and PSD-95, cross-link glutamate receptors, signaling molecules and microfilaments 39-41,43. Activity-induced synaptic translation of dendritic mRNAs encoding these scaffold proteins is thought to initiate a reorganization of the postsynaptic signaling machinery and thus mediate synaptic plasticity 38,41,42,44. When entering the cytoplasm, mRNAs designated for dendritic transport are generally assumed to remain translationally dormant until specific synaptic signals trigger their local translation 27,28.
However, molecular events controlling translation initiation of particular dendritic transcripts remain poorly understood. Our data show that SAPAP3 mRNAs contain a relatively long and GC-rich 5′UTR, which does not possess IRES activity. Instead, independent of the 3′UTR, the 5′UTR strongly down-regulates translation efficiency as compared to both a short synthetic 5′-leader and the 5′UTR of SAPAP1 mRNAs, which is of equivalent length to the S3-5′UTR. Since SAPAP1 transcripts, in contrast to SAPAP3 transcripts, are confined to neuronal somata 33,34, the observed regulatory difference between these two mRNAs encoding similar proteins may reflect distinct translation control mechanisms employed by different neuronal subregions. In particular, it is tempting to speculate that the S3-5′UTR mediates translation silencing while the transcripts are transported to dendrites. Moreover, based upon current knowledge, the identified regulatory mechanism sets SAPAP3 mRNAs apart from other dendritic transcripts that have been shown to regulate translation via 5′-IRES elements or sequence motifs residing within the 3′UTR 49-51. Taken together, these findings suggest that individual dendritic transcripts utilize distinct molecular mechanisms to regulate translation. How does the S3-5′UTR mediate down-regulation of translation? Our deletion analysis shows that although long and GC-rich, the S3-5′UTR does not limit translation initiation via the formation of a stable secondary structure that stalls the linear movement of scanning 43S complexes 9. Furthermore, deletion or mutation of uAUG1, uAUG3 and uAUG4 does not alter translation efficiency. Consistently, all three uAUGs are surrounded by suboptimal Kozak sequences and are thus likely to be bypassed by most scanning 43S complexes. In contrast, selective mutation of uAUG2 within the full-length S3-5′UTR strongly enhances translation initiation at the downstream mORF start codon.
As uORF2 overlaps with the S3-ORF, ribosomes translating uORF2 will bypass AUG11 and thus be unable to synthesize full-length SAPAP3. Despite this, a significant proportion of scanning 43S complexes can still bypass uAUG2 by leaky scanning and instead initiate translation at AUG11, as evidenced by the amount of SAPAP3a synthesized from pS3-S3 transcripts. In summary, uORF2 strongly diminishes the rate of translation initiation occurring at AUG11 and thereby limits the synthesis of full-length SAPAP3. Further studies will be needed to dissect which of the molecular control mechanisms described herein may be employed in particular subcellular regions of brain neurons. Quite unexpectedly, full-length authentic SAPAP3 mRNAs direct the synthesis of two distinct isoforms in transfected cells. Synthesis of the lower molecular weight isoform SAPAP3b depends on the presence of the intact S3-5′UTR. Additionally, mutation of uAUG2 but not uAUG3 dramatically shifted the relative ratio of both isoforms in favor of the longer SAPAP3a. We further showed that translation initiation at AUG11 and AUG1277 drives synthesis of SAPAP3a and SAPAP3b, respectively. Taken together, these findings suggest that two distinct SAPAP3 isoforms are synthesized from a single mRNA by ATI (Fig. 6). 43S complexes skipping uAUG2 via leaky scanning can initiate translation at AUG11, hence resulting in the synthesis of SAPAP3a. In contrast, ribosomes translating uORF2 bypass AUG11 and may either dissociate from the mRNA afterwards or resume scanning and reinitiate translation at AUG1277, thereby synthesizing SAPAP3b. A similar mechanism contributes to the synthesis of two FLI-1 isoforms from a single mRNA 52. While both AUG167 and AUG1277 are predicted to represent equally efficient start codons, our data show that only the second triplet serves as a competent initiator site for SAPAP3b synthesis.
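The branching logic above, leaky scanning past uAUG2 versus translation of uORF2 followed by downstream reinitiation, can be captured in a minimal deterministic sketch. The probabilities used here are illustrative placeholders, not values measured in this study:

```python
def isoform_fractions(p_leak, p_reinit):
    """Expected fraction of scanning ribosomes yielding each isoform.

    p_leak   : probability a 43S complex scans past uAUG2 (leaky scanning)
               and initiates at AUG11 -> SAPAP3a.
    p_reinit : probability a ribosome that translated uORF2 (and thereby
               bypassed AUG11) reinitiates downstream at AUG1277 -> SAPAP3b.
    Ribosomes that neither leak nor reinitiate produce no SAPAP3.
    """
    a = p_leak
    b = (1.0 - p_leak) * p_reinit
    return a, b

# Illustrative numbers only: 30% leaky scanning, 40% reinitiation.
a, b = isoform_fractions(0.30, 0.40)
print(round(a / b, 2))  # a:b ratio ~ 1.07
```

In this toy model, mutating uAUG2 corresponds to setting p_leak to 1, which eliminates SAPAP3b and maximizes SAPAP3a, matching the shift in the a:b ratio observed for puORF2AAG-S3 transcripts.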
This pronounced preference for reinitiation at AUG1277 as compared to AUG167 may result from its larger distance from the uORF2 stop codon. As a consequence, the probability of eIF2-Met-tRNAi-GTP reloading for 40S ribosomal subunits that resume scanning after translating uORF2, and thus become initiation competent again, is much higher at AUG1277 relative to AUG167 7,10. In Western blots, two endogenous rat SAPAP3 isoforms from PSD fractions co-migrated with SAPAP3a and SAPAP3b synthesized in transfected HEK293 cells. These data imply that the ATI scenario outlined above is indeed responsible for the synthesis of two distinct SAPAP3 isoforms in the rat brain. This assumption is further supported by the fact that uORF2 is highly conserved and the nucleotides surrounding AUG1277, including the Kozak sequence, are identical in SAPAP3 mRNAs from rat, mouse, dog and human (Figs. 1 & S3). Indeed, the mouse brain also contains two distinct SAPAP3 isoforms 34 that are no longer detected in SAPAP3 knockout mice in which a single exon containing most of uORF2, AUG11 and AUG1277 has been deleted 44.

Figure 5 | Both SAPAP3 isoforms are present at PSDs. (A) Both SAPAP3a and SAPAP3b accumulate at PSDs during rat brain development. PSD fractions (2.5 µg protein) isolated from the brain of 20 days old rat embryos (E20), 1, 3, 7 and 21 days old animals (P1-P21) and adult rats were analyzed by Western blotting with an anti-SAPAP3 antiserum. (B) The a:b ratio varies in different brain regions. PSD fractions of distinct adult rat brain areas were analyzed by Western blotting as described in A. Upper and lower panels show two different exposures of the same Western blot. (C) Brain homogenates and PSD enriched fractions derived from different brain areas of three wildtype (wt) and three heterozygous eIF2α+/S51A knock-in mice (ki) were used to detect both SAPAP3 isoforms and tubulin by Western blotting. (D) Bar graph indicating the ratio of SAPAP3a to SAPAP3b levels measured by Western blotting in homogenates and PSD enriched fractions obtained from the neocortex, hippocampus and cerebellum of wildtype (grey) and heterozygous eIF2α+/S51A knock-in mice (light grey). Variations in the SAPAP3a to SAPAP3b ratio observed in individual areas of rat (B) and wildtype mouse brains most likely reflect species specific differences. (E) Bar graph depicting total levels of SAPAP3a and SAPAP3b in different brain regions of wildtype (grey) and heterozygous eIF2α+/S51A knock-in animals (light grey). SAPAP3 levels were normalized against tubulin and normalized wildtype values are arbitrarily set to 1. Variations in both SAPAP3a and SAPAP3b concentrations observed between wildtype and knock-in mice are not statistically significant. See text for further details.

While phosphorylation of eIF2α reduces general translation, it enhances translation of particular mRNAs containing several uORFs 2. Analyzing the postsynaptic a:b ratio in three different brain areas of both wildtype and eIF2α+/S51A mice, we did not observe any genotype-specific differences. These findings indicate that control over the relative synthesis of SAPAP3a to SAPAP3b from the same mRNA is not exerted by eIF2α phosphorylation. Nevertheless, it remains to be ascertained if other signaling events regulating the probability of leaky scanning or reinitiation 31 could be used to control SAPAP3a:SAPAP3b ratios in the mammalian brain, and whether the ratio variations in different rodent brain areas observed herein are a result of such a regulatory mechanism. In addition to alternative splicing of pre-mRNAs, ATI represents an alternative mechanism to increase the number of isoforms which are generated from a single gene and often possess distinct cellular functions 31. Our data show that the 92 N-terminal amino acids of SAPAP3a, which do not encode a known domain, are missing in SAPAP3b.
In SAPAP1, the N-terminal 343 amino acid residues direct selective postsynaptic targeting 53. Our finding that both SAPAP3 isoforms are highly concentrated in PSD preparations, however, suggests that the unique N-terminal part of SAPAP3a is not required for trafficking to the PSD. Moreover, while the N-terminus of SAPAP1 binds neurofilaments 54, it is neither clear whether the corresponding region in SAPAP3 possesses the same capacity nor how the 92 N-terminal amino acids of SAPAP3a may influence such an interaction. Despite this lack of functional information, both SAPAP3a and SAPAP3b appear to be required for normal brain function, as the molecular components ensuring the synthesis of two isoforms are highly conserved in several mammalian species. As mice lacking both SAPAP3 variants display an obsessive-compulsive behavior 44, it will be interesting to test whether the selective loss of either SAPAP3a or SAPAP3b may be sufficient to cause this abnormality.

Methods

RNA preparation, polymerase chain reaction (PCR), 5′ rapid amplification of cDNA ends (5′ RACE) and Northern blotting. RNA preparation, PCR and reverse transcription initiated PCR (RT-PCR) were performed as described 55. PCRs to amplify 5′UTR sequences of SAPAP3 mRNAs contained 6% (v/v) DMSO. cDNA sequences corresponding to the 5′UTR of SAPAP3 mRNAs were amplified using oligonucleotide RA3 (CTCGGTCGCCATGGTAACCCCTC) and the SMART RACE cDNA Amplification Kit (Clontech). Total RNA from HEK293 cells was isolated using TRIzol reagent (Invitrogen) and Northern blots were generated using the glyoxal method 56. A cDNA fragment containing nucleotides 96 to 1440 of the rat SAPAP3 cDNA (GenBank accession number NM_173138) was labeled with 32P using the Prime-It II Random Primer Labeling Kit (Stratagene) and used to probe Northern blots. Labeled bands were visualized using a BAS-1800II phosphoimager (Fujifilm).

Eukaryotic expression vectors.
p5′S3-EGFP was constructed by inserting a cDNA region corresponding to the complete 5′UTR (Fig. 1A; GenBank accession number FJ705273) and the first 24 nucleotides of the coding region of rat SAPAP3 mRNAs (GenBank accession number AY530298) into the polylinker of pEGFP-N3 (Clontech). To generate pFS3, the EYFP coding region in pEYFP-N1 (Clontech) was replaced by a cDNA sequence encoding full-length rat SAPAP3 (GenBank accession number AY530298) tagged with an N-terminal FLAG epitope and the complete 3′UTR of rat SAPAP3 mRNAs (GenBank accession number FJ705274). cDNA sequences corresponding to the 5′UTR of rat SAPAP3 (Fig. 1A, nucleotides 1-281) and SAPAP1 mRNAs (GenBank accession number NM_022946) were inserted into pFS3 shortly upstream of the coding region to create pS3-FS3 and pS1-FS3, respectively. Vectors pS3Δ53-124-FS3, pS3Δ150-FS3 and pS3Δ203-FS3 are identical to pS3-FS3 except for deletions spanning nucleotides 53-124, 1-150 and 1-203 of the 5′UTR-encoding cDNA, respectively. puORF2AAG-S3-FS3 and puORF3AAG-S3-FS3 are derivatives of pS3-FS3 containing T→A point mutations in positions 229 and 243 of the 5′UTR cDNA, respectively. puORF2+3AAG-Δ150-FS3 is identical to pS3Δ150-FS3 but contains two T→A nucleotide exchanges in positions 229 and 243 of the 5′UTR cDNA. Replacing the EYFP cDNA in pEYFP-N1 by cDNA sequences corresponding to either the complete rat SAPAP3 mRNA or all regions except for the 5′UTR gave rise to vectors pS3-S3 and pS3, respectively. In the pS3-S3 derivatives puORF2AAG-S3, puORF3AAG-S3 and pS3-S3-1277AAC, the ATG triplets encoding uAUG2, uAUG3 and AUG1277 are mutated into AAG and AAC trinucleotides, respectively. Vectors pΔAUG11-S3 and pΔAUG167-S3 are identical to pS3 with the exception that nucleotides 1-66 and 1-276 of the coding region of the rat SAPAP3 cDNA (GenBank accession number NM_173138) are deleted, respectively.
pBFS3, pBFE and pBFA were generated by exchanging the intervening sequence between Photinus and Renilla luciferase cDNAs in pBicFire 57 with cDNAs corresponding to the 5′UTR of rat SAPAP3 mRNAs or IRES elements derived from Encephalomyocarditis virus or Arc/arg3.1 transcripts, respectively 30. Digesting pBicFire with either EcoRI and XbaI or NheI and EcoRI, treatment with Klenow polymerase (Fermentas) and ligation of vector ends with T4 DNA ligase (Fermentas) gave rise to pLUC and pREN, respectively. Vectors pS35′-LUC and pS33′-LUC are derivatives of pLUC in which cDNA sequences encoding either the synthetic 5′ or 3′UTR are exchanged for cDNAs corresponding to the respective regions of rat SAPAP3 mRNAs, whereas pS35′3′-LUC contains both 5′ and 3′UTR sequences of SAPAP3 cDNAs (for 5′ and 3′UTR sequences of various mRNAs used in this study see Supplementary Table 1 online).

Animals, cell culture, transfection and luciferase assays. Wistar rats were raised in the animal facility of the University Hospital Hamburg-Eppendorf. Rat primary neurons were essentially prepared and transfected as described 58, but neurons were grown in NEUROBASAL medium (Invitrogen) without a glial feeder layer and transfected seven days after plating. Growth and transfection of human embryonic kidney (HEK) 293 cells was performed as described 59. The Dual-Luciferase Reporter Assay System (Promega) was used according to the manufacturer's recommendations with cell extracts prepared 24 hours after transfection.

Antibodies, Western blotting and PSD preparations. GST fusion proteins containing amino acid residues 694-747 and 722-776 of rat SAPAP3 and SAPAP1 were used to raise rabbit polyclonal antisera #5297 and #5280, respectively. Monoclonal antibodies directed against the FLAG epitope (Stratagene) and rabbit polyclonal antisera recognizing GFP (Abcam) are commercially available. Western blotting was performed as described 59 with primary antibodies used at the following dilutions: affinity-purified anti-SAPAP3 (#5297), 1:2000; affinity-purified anti-SAPAP1 (#5280), 1:66; anti-FLAG, 1:2000; anti-GFP, 1:10000. PSD 60 and PSD enriched fractions 61 were essentially prepared as described, snap frozen in liquid nitrogen and stored at −80 °C.

Figure 6 | Synthesis of two distinct SAPAP3 isoforms from a single mRNA. As a result of leaky scanning, some 43S complexes will skip uAUG2 and initiate translation at AUG11, thus synthesizing SAPAP3a (upper part). Alternatively, ribosomes translating uORF2 bypass AUG11 and may reinitiate translation at AUG1277, giving rise to SAPAP3b (lower part). See text for further details.

www.nature.com/scientificreports SCIENTIFIC REPORTS | 2 : 484 | DOI: 10.1038/srep00484

Animal welfare. Experimental animals were bred at the animal facility of the McGill Life Sciences Complex, Montréal, Canada and handled in accordance with national guidelines for animal welfare. All studies were approved by the McGill University animal committee.

Nucleotide sequence GenBank accession numbers. cDNA sequences encoding the 5′ and 3′ UTRs of the rat SAPAP3 mRNA were deposited in the GenBank database under the accession numbers FJ705273 and FJ705274, respectively.
Conformal Killing $L^{2}$-forms on complete Riemannian manifolds with nonpositive curvature operator

We give a classification for connected complete locally irreducible Riemannian manifolds with nonpositive curvature operator which admit a nonzero closed or co-closed conformal Killing $L^{2}$-form. Moreover, we prove vanishing theorems for closed and co-closed conformal Killing $L^{2}$-forms on some complete Riemannian manifolds.

Introduction and results

Conformal Killing forms (also called conformal Killing-Yano tensors) were defined on Riemannian manifolds more than forty-five years ago by S. Tachibana and T. Kashiwada (see [24] and [9]) as a natural generalization of conformal Killing vector fields. We also know from the literature about closed conformal Killing forms, otherwise called closed conformal Killing-Yano tensors, and co-closed conformal Killing forms, otherwise called Killing-Yano tensors (see, for example, [7, pp. 426-427]; [18, pp. 559-564]). We remark here that the Hodge dual of a co-closed Killing form is a closed conformal Killing form; moreover, the converse is also true (see [14]). Surveys of the publications on conformal Killing, co-closed and closed conformal Killing forms and their numerous applications can be found in the introductions to our papers [19] and [20]. In addition, the list of recent papers on these forms should be taken into account: [4]; [6]; [13]; [21]; [22] and [27]. In the present paper we consider conformal Killing, co-closed and closed conformal Killing $L^{2}$-forms of degree $p$ for $1 \le p \le n-1$ on a simply connected and complete Riemannian manifold $(M, g)$ (see [11, pp. 36-37]). We say that the manifold has a nonpositive (respectively, nonnegative) curvature operator if $g(R(\theta), \theta) \le 0$ (respectively, $\ge 0$) for all two-forms $\theta \neq 0$. There have been many papers on the relationship between the curvature operator $R$ of a Riemannian manifold $(M, g)$ and its global characteristics, such as its homotopy type, topological type, etc.
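For reference, the defining equations of the three classes of forms discussed here can be stated in the convention that is standard in the modern literature on conformal Killing forms (this is a well-known textbook formula, not one recovered from the present paper):

```latex
% A p-form \omega on an n-dimensional Riemannian manifold (M, g) is
% conformal Killing if, for every vector field X,
\nabla_{X}\,\omega
  = \frac{1}{p+1}\,\iota_{X}\,d\omega
  - \frac{1}{n-p+1}\,X^{\flat}\wedge\delta\omega ,
% where \iota_{X} is interior multiplication, X^{\flat} = g(X,\cdot),
% and \delta is the codifferential.
% The co-closed case (\delta\omega = 0) gives Killing--Yano forms:
\nabla_{X}\,\omega = \frac{1}{p+1}\,\iota_{X}\,d\omega ,
% and the closed case (d\omega = 0) gives closed conformal
% Killing--Yano forms:
\nabla_{X}\,\omega = -\,\frac{1}{n-p+1}\,X^{\flat}\wedge\delta\omega .
```

For $p = 1$ the first equation reduces to the classical conformal Killing vector field equation, which is the generalization mentioned above.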
In connection with the above, our first result on conformal Killing forms will be proved by the most important analytic method of differential geometry "in the large", which was developed by S. Bochner for proving so-called vanishing theorems under appropriate curvature conditions on compact Riemannian manifolds (see [28]). S.-T. Yau generalized this method of proving vanishing theorems to the case of complete noncompact Riemannian manifolds (see, for example, [29]). We use a generalization of the "Bochner technique" to prove the following statement. In addition, we recall that an arbitrary co-closed conformal Killing $p$-form $\omega$ on an $n$-dimensional compact Kählerian manifold is parallel (see [25]), and that a conformal Killing form is parallel on an $n$-dimensional compact Riemannian manifold with nonpositive curvature operator (see [17]). The following theorem is an analogue of Theorem 1. Then its universal cover $M$ is either a round sphere, or has a factor isometric to a round sphere in its de Rham decomposition (see [12]). The forms for the curvature tensor $R$ of $(M, g)$ will be linearly independent in the Lie algebra of the orthogonal group $O(n)$. In this case, the corollary is true.

Preliminary information

Let $\Lambda^{p}M$ be the bundle of differential $p$-forms over a connected complete Riemannian manifold $(M, g)$. By $\nabla$ we will denote the Levi-Civita connection of $(M, g)$. The converse is also true (see also [14]). In [16] we found the operator $D^{*}$ formally adjoint to $D$ and then constructed the second-order differential operator $D^{*}D$. Properties of the operator $D^{*}D$ were studied in the papers [16]; [19]; [20] and [21]. In particular, we proved in [16] that $D^{*}D$ is a second-order self-adjoint elliptic differential operator acting on $p$-forms.

Proofs of the statements

A direct calculation yields the second inequality of Kato (see [ ]). At the same time, by the Weitzenböck decomposition (2.1), the operator $D^{*}D$ can be written in the following form. Then $\omega^{*}$ is a co-closed conformal Killing form whose covariant derivative vanishes. In particular, it means that $\|\omega^{*}\|^{2} = \mathrm{const}$.
In this case, we have a minus sign (see [29]), which we took into account in our proof. In particular, if $n = 2p$, we obtain from (3.4) the corresponding conclusion for a manifold with nonpositive curvature operator. In this case, the assertions of Lemma 2 and Theorem 2 become obvious. A Riemannian globally symmetric space $(M, g)$ is complete. We also know that a Riemannian symmetric space has nonpositive curvature operator if and only if it has nonpositive sectional curvature (see [5]). After the above remarks, the assertion of Corollary 1 becomes obvious. Let $\mathrm{Hol}^{0}$ be the restricted holonomy group of $(M, g)$.
Formation of massive protostars in atomic cooling haloes

We present the highest-resolution three-dimensional simulation to date of the collapse of an atomic cooling halo in the early Universe. We use the moving-mesh code arepo with the primordial chemistry module introduced in Greif (2014), which evolves the chemical and thermal rate equations over more than 20 orders of magnitude in density. Molecular hydrogen cooling is suppressed by a strong Lyman-Werner background, which facilitates the near-isothermal collapse of the gas at a temperature of about $10^4\,$K. Once the central gas cloud becomes optically thick to continuum emission, it settles into a Keplerian disc around the primary protostar. The initial mass of the protostar is about $0.1\,{\rm M}_\odot$, which is an order of magnitude higher than in minihaloes that cool via molecular hydrogen. The high accretion rate and efficient cooling of the gas catalyse the fragmentation of the disc into a small protostellar system with 5-10 members. After about 12 yr, strong gravitational interactions disrupt the disc and temporarily eject the primary protostar from the centre of the cloud. By the end of the simulation, a secondary clump has collapsed at a distance of $\simeq 150\,$au from the primary clump. If this clump undergoes a similar evolution as the first, the central gas cloud may evolve into a wide binary system. High accretion rates of both the primary and secondary clumps suggest that fragmentation is not a significant barrier for forming at least one massive black hole seed.

INTRODUCTION

Black holes (BHs) are a key ingredient in the formation and evolution of galaxies. In the local Universe, the stellar velocity dispersion in galaxy bulges is correlated with the mass of the BH at their centre (Ferrarese & Merritt 2000; Gebhardt et al. 2000). BHs also power luminous quasars by accreting gas from their host galaxies.
Recent observations suggest that quasars powered by BHs with masses $\gtrsim 10^9\,{\rm M}_\odot$ were already present when the Universe was less than one billion years old (Fan et al. 2003, 2006). These supermassive black holes most likely grew from smaller seed BHs that formed earlier, but the origin of these seeds remains unclear (Haiman 2006, 2009; Greene 2012; Volonteri 2012; Volonteri & Bellovary 2012). One possible candidate are the remnants of massive Population III stars (Madau & Rees 2001; Li et al. 2007; Johnson et al. 2012), or the direct collapse of primordial gas in haloes with virial temperatures $T_{\rm vir} \gtrsim 10^4\,$K, so-called atomic cooling haloes (Bromm & Loeb 2003; Bromm & Yoshida 2011).

E-mail: fbecerra@cfa.harvard.edu

In the former case, the seeds have initial masses of the order of $100\,{\rm M}_\odot$, and grow at or above the Eddington limit for the remaining 500 Myr between seed formation and $z \simeq 6$. However, numerical simulations have shown that accretion on to early BHs is inefficient, due to the low density of the gas surrounding the BH remnant, which is caused by photoionization heating from the progenitor star (Johnson & Bromm 2007; Alvarez, Wise & Abel 2009). Accretion rates are thus not high enough to allow efficient growth of the seed, which poses a serious complication for the Population III stellar remnant scenario. In the direct collapse scenario, haloes with virial temperatures $\gtrsim 10^4\,$K may host seed BHs that are substantially more massive. A prerequisite is that the accretion rate on to the central object is high enough that radiative feedback does not severely impede the accretion flow (Johnson et al. 2011; Hosokawa, Omukai & Yorke 2012; Hosokawa et al. 2013). In this case, a supermassive star or 'quasi-star' forms, which may collapse into a BH of mass $\sim 10^5 - 10^6\,{\rm M}_\odot$ (Heger et al. 2003; Schleicher et al. 2013; Chen et al. 2014).
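The growth requirement for Population III remnant seeds can be made quantitative: Eddington-limited accretion grows the mass exponentially with an e-folding (Salpeter) time of roughly $\epsilon/(1-\epsilon)\times 450\,$Myr. A rough sketch using standard textbook numbers (not values taken from this paper):

```python
import math

def eddington_growth(m_seed, t_myr, eps=0.1):
    """Mass (in the units of m_seed) after t_myr of Eddington-limited growth.

    e-folding time t_sal ~ eps/(1-eps) * 450 Myr, i.e. ~50 Myr for a
    radiative efficiency eps = 0.1.
    """
    t_sal = eps / (1.0 - eps) * 450.0  # Myr
    return m_seed * math.exp(t_myr / t_sal)

# A 100 Msun Pop III remnant accreting continuously for 500 Myr:
print(f"{eddington_growth(100.0, 500.0):.1e}")  # ~2e6 Msun
```

Even under this optimistic assumption of uninterrupted Eddington growth, a $100\,{\rm M}_\odot$ seed falls well short of $10^9\,{\rm M}_\odot$ by $z \simeq 6$, which is part of the motivation for the more massive direct-collapse seeds discussed next.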
Since the accretion rate in a Jeans-unstable cloud scales as $\dot{M} \propto T^{3/2}$, molecular hydrogen cooling must be suppressed until the virial temperature of the halo is high enough that Ly$\alpha$ cooling becomes important. This may be achieved by a Lyman-Werner (LW) radiation background (Omukai 2001; Bromm & Loeb 2003; Volonteri & Rees 2005; Spaans & Silk 2006; Schleicher, Spaans & Glover 2010; Johnson et al. 2013). Simple one-zone models have found that the critical flux is of the order of $J_{21,{\rm crit}} = 10^5$ in units of $J_{21} = 10^{-21}\,{\rm erg\,s^{-1}\,cm^{-2}\,Hz^{-1}\,sr^{-1}}$ for a blackbody spectrum with a temperature of $10^5\,$K (Omukai 2001). For Population I/II stars, recent studies have found that the critical flux may be somewhat lower (Shang, Bryan & Haiman 2010; Wolcott-Green & Haiman 2012; Van Borm & Spaans 2013; Latif et al. 2014b,a; Regan, Johansson & Wise 2014; Sugimura, Omukai & Inoue 2014). Even though the LW flux on cosmological scales is well below this value, local star formation may raise the flux to supercritical levels (Dijkstra et al. 2008; Dijkstra, Ferrara & Mesinger 2014; Agarwal et al. 2012; Visbal, Haiman & Bryan 2014). If the LW flux is high enough, the halo gas collapses nearly isothermally at $\simeq 10^4\,$K up to a density of $n_{\rm H} \simeq 10^6\,{\rm cm^{-3}}$, where the gas becomes optically thick to Ly$\alpha$ emission (Omukai 2001). At this point, continuum cooling via free-bound emission of H$^-$ takes over, and allows the gas to again contract nearly isothermally up to a density of $n_{\rm H} \simeq 10^{16}\,{\rm cm^{-3}}$. Once the continuum emission becomes trapped, the gas evolves nearly adiabatically and a protostar forms at the centre of the halo. During the initial collapse, the angular momentum is constantly redistributed by turbulence and bar-like instabilities, such that the cloud contracts nearly unhindered (Oh & Haiman 2002; Koushiappas, Bullock & Dekel 2004; Begelman, Volonteri & Rees 2006; Lodato & Natarajan 2006; Wise, Turk & Abel 2008; Begelman & Shlosman 2009; Choi, Shlosman & Begelman 2013; Latif et al. 2013a; Prieto, Jimenez & Haiman 2013).
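The $\dot{M} \propto T^{3/2}$ scaling follows from the characteristic collapse rate $\dot{M} \sim c_s^3/G$ of a marginally Jeans-unstable cloud. A back-of-the-envelope comparison, assuming an isothermal sound speed and a mean molecular weight $\mu \approx 1.22$ for neutral atomic primordial gas (order-of-magnitude numbers only, not results from this paper):

```python
import math

K_B = 1.380649e-23   # Boltzmann constant, J/K
M_H = 1.6726e-27     # proton mass, kg
G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
YR = 3.156e7         # year, s

def accretion_rate(T, mu=1.22):
    """Characteristic rate Mdot ~ c_s^3 / G in Msun/yr (order of magnitude)."""
    c_s = math.sqrt(K_B * T / (mu * M_H))  # isothermal sound speed, m/s
    return c_s**3 / G / M_SUN * YR

print(f"{accretion_rate(1e4):.2f}")   # ~0.1 Msun/yr (atomic cooling, 10^4 K)
print(f"{accretion_rate(2e2):.1e}")   # ~4e-4 Msun/yr (H2 cooling, ~200 K)
```

The roughly 350-fold difference between the two regimes, i.e. $(10^4/200)^{3/2}$, is why suppressing H2 cooling is essential for building up a massive central object quickly.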
The subsequent accretion phase was investigated by Regan & Haehnelt (2009) and Latif et al. (2013b,a). They found that a Keplerian disc forms around the primary protostar, which becomes gravitationally unstable and fragments into a small system of protostars. The secondary protostars merge on a short time-scale and do not prevent the growth of the primary protostar. These studies employed a pressure floor beyond a certain refinement level, such that the maximum density was limited to $n_{\rm H} \sim 10^6 - 10^9\,{\rm cm^{-3}}$. The simulations of Regan, Johansson & Haehnelt (2014) also displayed the formation of a disc-like object at the centre of the halo, which in some cases fragmented on a scale of $\sim 100\,$au. However, these simulations also suffered from limited resolution, and did not include the relevant H2 cooling and chemistry. Recently, Inayoshi, Omukai & Tasker (2014) used the most detailed chemical and thermal model to date, but stopped the simulation once the primary protostar had formed. In addition, they did not use cosmological initial conditions. We here attempt to improve upon these studies by carrying out a simulation that starts from cosmological initial conditions and is not resolution-limited. We use a slightly less sophisticated chemical model than Inayoshi, Omukai & Tasker (2014), but evolve the simulation well beyond the formation of the first protostar at the centre of the halo. Our paper is organized as follows. In Section 2, we describe the simulation setup and the chemistry and cooling network. In Section 3, we analyse the simulation and discuss the collapse of the central gas cloud, the formation and fragmentation of the disc, the development of the protostellar system, and the collapse of a secondary clump towards the end of the simulation. Finally, in Section 4 we summarize and draw conclusions. All distances are quoted in proper units, unless noted otherwise.
SIMULATIONS

We perform three-dimensional, cosmological hydrodynamical simulations to investigate the collapse of gas in atomic cooling haloes in which the formation of H2 has been suppressed by a LW background. For this purpose we employ the moving-mesh code arepo (Springel 2010). We also include the recently developed primordial chemistry and cooling network of Greif (2014). In the following, we briefly describe the initialization of the simulations, the extraction procedure and refinement criteria used to achieve densities nH ≈ 10 21 cm −3, and the chemistry and cooling network.

Dark matter simulations

We first initialize a dark matter (DM)-only simulation at a redshift of z = 99 in a standard Λ cold dark matter (ΛCDM) cosmology. We adopt cosmological parameters based on the Wilkinson Microwave Anisotropy Probe results (Komatsu et al. 2009). We use a matter density Ωm = 1 − ΩΛ = 0.27, baryon density Ω b = 0.046, Hubble parameter h = H0/100 km s −1 Mpc −1 = 0.7 (where H0 is the present Hubble expansion rate), spectral index ns = 0.96, and normalization σ8 = 0.81. The simulation is initialized in a box of side length 2 Mpc (comoving) with a total of 512 3 DM particles of mass 2.2 × 10 3 M⊙. The gravitational softening length is set to 195 pc (comoving), which corresponds to 5% of the initial mean inter-particle separation. We stop the simulation when the first halo with virial mass exceeding 10 8 M⊙ collapses. This occurs at z coll ≈ 12.4, when the first halo reaches Mvir ≈ 1.7 × 10 8 M⊙. At this point the halo has a virial radius of Rvir ≈ 1.4 kpc and a spin parameter λ ≈ 0.05.

Resimulations

The second step is to locate the target halo and flag it for further refinement. We select the particles belonging to that halo and a sufficiently large boundary region around it, and trace them back to their initial conditions. Once the particle locations have been determined, we reinitialize the simulation centred on the target halo.
In order to acquire higher resolution, we replace each DM particle by 64 less-massive DM particles and 64 mesh-generating points. The resolution is gradually decreased as the distance from the high-resolution region increases, replacing cells and DM particles by higher-mass particles outside the target region. The resimulation has lower resolution towards the edges of the box than the original DM-only simulation, but the accuracy of the gravitational tidal field around the target halo is preserved. The refined DM particle mass is given by M dm,ref = (1 − Ω b /Ωm)M dm /64 ≈ 28 M⊙, and the gravitational softening length is set to 49 pc (comoving). The refined mass of each cell is given by M gas,ref = (Ω b /Ωm)M dm /64 ≈ 6 M⊙. We stop the resimulation once the first cell has exceeded a density of nH ≈ 10 9 cm −3. We then proceed to extract the particles in the central 3 pc and reinitialize the simulation with reflective boundary conditions. Hence, the central region of the final output in the first resimulation becomes the initial condition for a second resimulation with a box size of 3 pc. Furthermore, at those densities the gas component is already well decoupled from the DM component, so we discard the DM and keep only the gas particles. We evolve the second resimulation until it exceeds a density of nH ≈ 10 19 cm −3, after which we conduct a second extraction similar in nature to the first, but cut out the central 5 × 10 −3 pc of the second resimulation, which we use as the side length for the third resimulation. This approach has the risk that perturbations from the edges of the box might influence the central regions. However, we explicitly avoid this issue by assuring that the sound crossing time through the box is much longer than the free-fall time of the central high-density cloud.
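The particle-splitting arithmetic above can be checked directly; the following sketch uses the cosmological parameters and parent particle mass quoted in the text and reproduces the refined DM and cell masses.

```python
# Particle splitting in the zoom-in: each parent DM particle of mass
# m_dm is replaced by 64 DM particles and 64 mesh-generating points.
# Cosmological parameters and the parent mass are taken from the text.
OMEGA_M = 0.27
OMEGA_B = 0.046

def refined_masses(m_dm, n_split=64):
    """Return (refined DM particle mass, refined cell mass) in Msun."""
    m_dm_ref = (1.0 - OMEGA_B / OMEGA_M) * m_dm / n_split
    m_gas_ref = (OMEGA_B / OMEGA_M) * m_dm / n_split
    return m_dm_ref, m_gas_ref

# Parent DM particle of 2.2e3 Msun -> roughly 28 Msun (DM) and 6 Msun (gas)
m_dm_ref, m_gas_ref = refined_masses(2.2e3)
```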
Refinement

An essential refinement criterion that grid codes have to fulfill to resolve gravitational instability and avoid artificial fragmentation is the so-called Truelove criterion (Truelove et al. 1997). This criterion states that the local Jeans length needs to be resolved by at least four cells, where the cell size is approximately given by h = (3V/4π) 1/3, and V is the volume of the cell. In order to adequately resolve turbulence, recent studies using grid codes with a fixed mesh have found that the Jeans length must be resolved by at least 32 cells (Federrath et al. 2011;Turk et al. 2012;Latif et al. 2013a). A disadvantage of using refinement based on the Jeans length is that shock-heated regions may be much less resolved than adjacent cold regions. In order to avoid this problem, we follow the refinement criterion proposed by Turk, Norman & Abel (2010), who suggest using the minimum temperature of the gas to evaluate the Jeans length. We slightly modify this criterion by using Tmin = 5000 K for cells with T ≤ Tmin, but the correct temperature for cells with T > Tmin. This ensures that the initial collapse phase is adequately resolved, while at high densities the resolution does not become excessively high and slow down the calculation. Below nH = 10 15 cm −3, we employ 64 cells per Jeans length, which is degraded to 8 cells above nH = 10 18 cm −3, with a linear interpolation between these densities. The maximum spatial resolution achieved with this refinement strategy is 6.6 × 10 −4 au. In addition to the Jeans refinement, we refine a cell if its mass increases to more than twice its initial mass.

Chemistry and cooling

A detailed description of the chemical and thermal model used here can be found in Greif (2014). Here, we only briefly describe the most important reactions and cooling processes. The chemical network employs a non-equilibrium solver at low densities and an equilibrium solver at high densities for the species H, H2, H − , H + , and e − .
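The density-dependent Jeans refinement described above can be sketched as follows. This is an illustration only: the 5000 K temperature floor, the 64-to-8 cells-per-Jeans-length ramp, and its density bounds are taken from the text, while the mean molecular weight and hydrogen mass fraction are assumed values.

```python
import math

# CGS constants (assumed standard values)
G = 6.674e-8      # gravitational constant
K_B = 1.381e-16   # Boltzmann constant
M_H = 1.673e-24   # mass of hydrogen atom

def cells_per_jeans_length(n_h):
    """Target number of cells per Jeans length: 64 below n_H = 1e15 cm^-3,
    8 above 1e18 cm^-3, and a linear ramp in log density in between."""
    log_n = math.log10(n_h)
    if log_n <= 15.0:
        return 64.0
    if log_n >= 18.0:
        return 8.0
    return 64.0 + (8.0 - 64.0) * (log_n - 15.0) / 3.0

def max_cell_size(n_h, T, gamma=5.0 / 3.0, mu=1.22, x_h=0.76):
    """Largest allowed cell size such that the Jeans length, evaluated with
    the 5000 K temperature floor, is resolved by the target cell count.
    The mean molecular weight mu and hydrogen mass fraction x_h are assumed."""
    T_eff = max(T, 5000.0)  # temperature floor from the refinement criterion
    c_s = math.sqrt(gamma * K_B * T_eff / (mu * M_H))
    rho = n_h * M_H / x_h   # mass density from the number density of H nuclei
    lam_jeans = c_s * math.sqrt(math.pi / (G * rho))
    return lam_jeans / cells_per_jeans_length(n_h)
```

Evaluating the temperature with a floor rather than the true value keeps shock-heated regions as well resolved as adjacent cold gas, which is the point of the Turk, Norman & Abel (2010) criterion.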
The transition from non-equilibrium to equilibrium H2 chemistry occurs at nH 2 ,eq = 10 15 cm −3 , since three-body reactions depend on the cube of the density and would otherwise prohibitively decrease the time-step of the non-equilibrium solver. For densities above n H + ,eq = 10 18 cm −3 , the electron and H + abundances are also considered to be in equilibrium. The main reactions include the formation of H2 via associative detachment as well as three-body reactions, the destruction of H2 via collisions and photodissociation, and the formation and destruction of H + by collisional ionizations and recombinations. The relevant cooling processes are H2 line cooling, H2 collision-induced emission, Lyα cooling, and inverse Compton cooling. H2 cooling plays a substantial role up to nH 10 15 cm −3 , where the gas becomes optically thick to the H2 line emission, while collision-induced emission becomes important at nH 10 14 cm −3 and provides the last radiative cooling channel (Omukai & Nishi 1998;Ripamonti & Abel 2004). Although we include molecular hydrogen cooling, its effect does not become important during the evolution of the simulation due to the presence of a strong LW background that dissociates H2 via the Solomon process (Abel et al. 1997). Previous studies found that a strong LW flux with J21 10 3 is required to dissociate molecular hydrogen in the progenitors of an atomic cooling halo (Omukai 2001;Johnson & Bromm 2007;Dijkstra et al. 2008;Latif et al. 2013b;Wolcott-Green, Haiman & Bryan 2011). Here, we assume a constant LW flux of J21 = 10 5 for a blackbody spectrum with T rad = 10 5 K, which is commonly used to estimate the spectra of Population III stars. In this case, the H − photodissociation rate is much smaller than the H2 photodissociation rate (Sugimura, Omukai & Inoue 2014). We approximate the combined effects of Lyα cooling and continuum cooling by assuming that Lyα cooling remains optically thin up to densities nH 10 16 cm −3 . 
The cooling rate is exponentially suppressed at densities nH ≳ 10 16 cm −3 to approximately reproduce the density-temperature relation found in Omukai (2001). Due to this simplification, we may somewhat underestimate the true cooling rate.

Collapse of central gas cloud

A number of studies have discussed the properties of the collapse of primordial gas clouds in atomic cooling haloes (e.g., Bromm & Loeb 2003;Regan & Haehnelt 2009;Choi, Shlosman & Begelman 2013;Latif et al. 2013a;Inayoshi, Omukai & Tasker 2014;Regan, Johansson & Haehnelt 2014). Here, we investigate the collapse of the gas over an unprecedented range in scale, as shown in Fig. 1. The six panels show a zoom-in on the central gas cloud, ranging from 10 pc down to scales of 10 au. The panel on the bottom-left side of the figure shows the primary protostar surrounded by an accretion disc. The cloud shows an irregular morphology and changes shape as it collapses. Its filamentary structure is indicative of turbulence, which is especially pronounced during the later stages of the collapse. On the largest scales, the cloud shows less substructure and is more spherically symmetric.

Figure 1. Zoom-in on the gas cloud that forms at the centre of the atomic cooling halo. The number density of hydrogen nuclei is weighted with the square of the density along the line of sight, which is perpendicular to the plane of the disc. Clockwise from the top left, the widths of the individual cubes are 10 pc, 1 pc, 0.1 pc, 1000 au, 100 au, and 10 au. The cloud has an irregular morphology that continues to change shape and orientation throughout the collapse. The filamentary structure indicates that turbulence is present on all scales.

Fig. 2 shows various physical quantities as a function of distance from the densest cell in the halo. The radial profiles are constructed from data of the three resimulations.
We proceed by extracting the inner 300 au from the last resimulation, while the range between 300 and 10 5 au is taken from the second resimulation. To complete the profiles, the outer region corresponds to data from the first resimulation up to 10 10 au. Due to the self-similarity of the collapse, moving from large to small radii is equivalent to moving from early to late times. Properties plotted in the figure are the number density of hydrogen nuclei, enclosed gas mass, temperature, H2 abundance, H − abundance, and Hii abundance. These profiles have been calculated using mass-weighted averages of the cells contributing to the radial bins. Colours and line styles represent different evolutionary stages of the gas cloud as described in the legend and the caption of the figure. As the gas collapses into the DM halo, it is shock-heated to the virial temperature. In the central parts of the halo, Lyα cooling becomes important and keeps the gas nearly isothermal at 10 4 K (Wise & Abel 2007). During this period, the H2 abundance builds up from 10 −16 to 10 −8 , with small spikes due to the existence of shocks in the outer regions of the halo. The Hii abundance increases by about one order of magnitude. The collapse then approximately follows the Larson-Penston solution for an isothermal, selfgravitating gas cloud (Larson 1969;Penston 1969), which is described by a density profile following ρ ∝ r −2 . The strong LW background suppresses the formation of H2 and maintains an abundance of 10 −8 down to scales of 10 3 au. This prevents H2 cooling, which in turn leads to a roughly isothermal collapse between 10 8 and 1 au. Over these scales, the H − and Hii abundances drop by many orders of magnitude due to recombination. On a scale of 10 3 au, the H2 fraction increases due to three-body reactions. 
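The mass-weighted radial binning used for these profiles can be sketched as follows; this is a minimal illustration, and the function and variable names are ours rather than from the simulation code.

```python
import math
from collections import defaultdict

def mass_weighted_profile(radii, masses, values, bins_per_dex=4):
    """Mass-weighted average of a cell quantity in logarithmic radial bins,
    a minimal sketch of how the profiles in Fig. 2 are constructed."""
    num = defaultdict(float)  # sum of m_i * v_i per bin
    den = defaultdict(float)  # sum of m_i per bin
    for r, m, v in zip(radii, masses, values):
        b = math.floor(bins_per_dex * math.log10(r))
        num[b] += m * v
        den[b] += m
    return {b: num[b] / den[b] for b in num}

# Two cells falling into the same radial bin are averaged by mass:
prof = mass_weighted_profile([1.0, 1.1, 100.0], [1.0, 3.0, 2.0], [10.0, 20.0, 5.0])
```

Because the collapse is nearly self-similar, combining the bins from the three resimulations into one profile is equivalent to following the cloud from early to late times.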
Figure 2. Radial profiles of various physical quantities at different evolutionary stages of the collapse. The earliest line corresponds to z ≈ 26, the green dash-dotted line to z ≈ 15, the red dotted line to when the number density first exceeds 10 9 cm −3, the cyan dashed line to 9 × 10 3 yr after that, the purple dash-dotted line to when the number density first exceeds 10 19 cm −3, the yellow dotted line to 6 yr after the formation of the first protostar, and the black solid line to the end of the simulation after approximately 18 yr. The halo follows several evolutionary stages from large to small scales: shock-heating to the virial temperature, onset of cooling, Jeans instability, isothermal contraction, formation of the primary protostar and disc, and fragmentation of the disc (see Section 3 for details).

Figure 3. Clockwise from the top left panel: distribution of the gas in temperature, H2, Hii, and H − abundances versus number density of hydrogen nuclei at the end of the simulation. The mass per bin over the total mass in the computational domain is colour-coded from blue (lowest) to red (highest). The solid black lines show the mass-weighted average values. After shock-heating to the virial temperature of 10 4 K, the gas collapses nearly isothermally to densities of n ≈ 10 16 cm −3. The gas then becomes optically thick to continuum emission and evolves nearly adiabatically. At this point, the Hii abundance dramatically increases from ∼ 10 −14 to unity. The H2 abundance stays below 10 −7 due to the LW background, but then increases to 10 −4 as three-body reactions set in. However, the H2 abundance never becomes high enough for H2 cooling to become important. The 'fingers' visible in the various distributions show the evolutionary paths of individual protostars.

Up to this point, the radial profiles agree well with those of previous studies (Regan, Johansson & Haehnelt 2014;Inayoshi, Omukai & Tasker 2014).
Figure 5. Left: from the top left to the bottom right, the panels show the mass-weighted average surface density, sound speed, orbital frequency, and Toomre parameter versus radius just before the disc fragments. Above 0.1 au, the power-law profiles of the surface density and rotation speed yield a Toomre parameter that is close to unity, which indicates that perturbations in the disc can grow. Right: effective equation of state, root-mean-squared density contrast, cooling time over free-fall time, and free-fall time over sound-crossing time. The isothermal collapse of the gas on scales ≳ 1 au results in γ eff ≈ 1, while the increasing optical depth of the gas to continuum emission on smaller scales results in an exponent that is closer to that of an adiabatic gas. The cooling time over the free-fall time has a local minimum on a scale of 1 au: this is approximately the radius at which the first fragment forms. The density contrast created by the supersonic turbulence is between 1 and 10. The free-fall time exceeds the sound-crossing time on a scale of 0.1 au, which shows the size of the central, Jeans-unstable clump.

In the final stage of the collapse, when the primary protostar forms, the gas becomes optically thick to continuum cooling at ≈ 1 au, which results in a rise in temperature of more than two orders of magnitude to 10 6 K. This is accompanied by a drop in the H2 abundance and an increase in both the H − and Hii abundances. At the end of the simulation, two pronounced spikes in the density profile are clearly visible in the central 100 au. These correspond to secondary protostars that have formed due to fragmentation in the disc. This will be discussed in detail in Section 3.2. In Fig. 3, we show the temperature-density distribution of the gas at the end of the simulation, next to those of the H2, H − , and Hii abundances. At low densities, the temperature distribution spans almost six orders of magnitude, reaching as high as 10 4 K.
A similarly high scatter is present in the H2 and H − abundances, while the Hii abundance varies only by two orders of magnitude. Up to nH ≈ 10 15 cm −3, the temperature distribution becomes much narrower, showing the near-isothermal collapse of the gas. Once three-body reactions become important, the distribution of the H2 fraction widens for densities in the range 10 5 − 10 15 cm −3, with particles reaching abundances as high as yH2 ≈ 0.1. The resulting temperature dispersion leads to an increasing dispersion in the H − and Hii abundances, while their average values continue to decrease due to recombinations. The values of the H2 abundance are somewhat smaller than those found in Inayoshi, Omukai & Tasker (2014), but agree with Latif et al. (2013a). We therefore do not distinguish between two thermal phases of the gas as in Inayoshi, Omukai & Tasker (2014). For densities ≳ 10 18 cm −3, the formation of the primary and secondary protostars can be recognized as 'fingers' of gas in the individual panels, which evolve nearly adiabatically. The high temperatures in the interior of the protostars result in a decrease of the H2 and H − abundances, and an increase of the Hii abundance to unity. The radial profiles of the magnitude of the radial velocity, rotational velocity, Keplerian velocity, turbulent velocity, and sound speed at the end of the simulation are shown in the left-hand panel of Fig. 4. In addition, the right-hand panel shows the Mach numbers of each velocity component. The turbulent Mach number is given by

Mturb = (1/cs) [ (1/M) Σi mi |vi − v rad,i − v rot,i |² ]^(1/2),

where cs denotes the sound speed of the radial bin, M the total mass, i the index of a cell contributing to the bin, mi its mass, vi the velocity, v rad,i the radial velocity vector, and v rot,i the rotational velocity vector. During the initial free-fall phase, the turbulent component is supersonic with Mturb ≈ 3. In contrast, the Mach number of the rotational velocity remains below unity, indicating the poor rotational support of the cloud at that stage.
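The turbulent Mach number quoted in the text, i.e. the mass-weighted root-mean-square of the velocity after subtracting its radial and rotational components, divided by the sound speed, can be evaluated per radial bin as in this sketch (names are illustrative):

```python
import math

def turbulent_mach(cells, c_s):
    """Mass-weighted turbulent Mach number of a radial bin:
    sqrt( sum_i m_i |v_i - v_rad_i - v_rot_i|^2 / M ) / c_s,
    where cells is a list of (m_i, v_i, v_rad_i, v_rot_i) tuples and
    the velocities are 3-vectors given as tuples."""
    total_mass = sum(m for m, _, _, _ in cells)
    weighted = 0.0
    for m, v, v_rad, v_rot in cells:
        dv = [v[k] - v_rad[k] - v_rot[k] for k in range(3)]
        weighted += m * sum(c * c for c in dv)
    return math.sqrt(weighted / total_mass) / c_s

# Single cell: velocity (3, 4, 0), radial part (3, 0, 0), no rotation;
# the turbulent residual has magnitude 4, so the Mach number is 4 / c_s.
mach = turbulent_mach([(1.0, (3.0, 4.0, 0.0), (3.0, 0.0, 0.0), (0.0, 0.0, 0.0))], 2.0)
```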
The trend for each component is roughly maintained once the halo has entered the isothermal collapse phase, with the exception of the Mach number of the radial velocity, which briefly drops to below unity. Down to 100 au, the rotational velocity oscillates between 0.2 and 0.5 of the Keplerian velocity, indicating a substantial degree of rotational support. It reaches its peak at the edge of the disc on scales ≈ 1 au, where vrot ≈ v kep. On smaller scales, the primary protostar is characterized by an increase in temperature and thus sound speed, such that all velocity components become subsonic. In addition, v rad drops precipitously, which shows that the infall rate decreases rapidly within the primary protostar. Similar values for the velocity have been found in previous studies (e.g. Regan, Johansson & Haehnelt 2014). Overall, we find good agreement between our results and previous work. However, some differences exist.

Figure 6. Enclosed gas mass over the mass-weighted average BE mass as a function of enclosed gas mass. Colours and line styles are the same as in Fig. 2. As the halo grows in mass, the BE mass increases due to the rise in the virial temperature, which reduces Menc/MBE. Once the atomic cooling halo is assembled, this ratio exceeds unity on a scale of 10 8 M⊙. Following the onset of runaway cooling due to Lyα emission, the central 10 6 M⊙ become Jeans-unstable (red dotted line). The minimum Jeans mass of the cloud is indicated by the purple dash-dotted line at ≈ 0.1 M⊙, which coincides with the initial mass of the primary protostar.

The morphology of the halo between 10 au and 10 pc is similar to that of Inayoshi, Omukai & Tasker (2014), but we do not find clumps on larger scales as pointed out by Regan & Haehnelt (2009), Latif et al. (2013a), and Regan, Johansson & Haehnelt (2014). However, this is not surprising, since in our case the gas has not yet had time to settle into a disc on these scales. The radial profiles resemble those of Latif et al.
(2013a) quite well, while Inayoshi, Omukai & Tasker (2014) found a slightly higher H2 abundance, which is also reflected in lower temperatures during the isothermal collapse phase. These differences may be caused by the different chemical networks used in the two studies (see Section 3.6).

Disc formation and fragmentation

After the formation of the first protostar, the gas becomes fully rotationally supported in a Keplerian disc. We study its stability by computing Toomre's parameter (Toomre 1964):

Q = cs κ / (π G Σ),

where cs is the sound speed of the gas, κ the epicyclic frequency of the disc, G the gravitational constant, and Σ the surface density of the gas. For the case of a Keplerian disc, the epicyclic frequency may be replaced by the orbital frequency Ω. The Q parameter was originally proposed to determine whether perturbations can grow in an infinitely thin, isothermal disc. Later studies have extended this criterion to thick discs, finding that it only deviates by a factor of order unity from the above equation (Wang et al. 2010). For values greater than Qcrit = 1, the system is stable due to gas pressure and shear by the differential rotation of the disc, while for lower values the system is unstable and hence susceptible to the growth of perturbations. These lead to the formation of spiral arms that transport mass inwards and angular momentum outwards.

Figure 7. Evolution of the protostellar system that forms at the centre of the atomic cooling halo. The number density of hydrogen nuclei is weighted with the square of the density along the line of sight, which is perpendicular to the plane of the disc. The top three rows show cubes with a side length of 20 au, centred on the position of the first protostar. The bottom row shows the later evolution of the protostellar system on a somewhat larger scale of 50 au, where the centre has been fixed on the position of the primary protostar after 12 yr. The time is measured from the instant when the density first exceeds 10 19 cm −3. The formation of a Keplerian disc around the primary protostar is clearly visible. Shortly thereafter, the disc becomes Toomre-unstable and spiral arms form that transport mass inwards and angular momentum outwards. After 6 yr, the disc becomes gravitationally unstable and fragments due to the high mass accretion rate from the surrounding cloud on to the disc, and the efficient cooling of the disc by continuum emission. Over the next 7 yr, an additional five protostars form before three-body interactions lead to the temporary ejection of the primary protostar from the cloud, which disrupts the disc.

Figure 8. Relative distance and velocity between the primary and secondary protostars over time. The latter initially orbits around the primary protostar at a distance of 4 au, but strong gravitational forces due to three-body interactions temporarily eject both protostars from the centre of the cloud after 12 yr. The relative velocity reaches a peak value of about 100 km s −1, which declines to 20 km s −1 towards the end of the simulation. For comparison, we also show the escape velocities of both protostars. The green dashed line corresponds to the primary protostar, and the red dotted line to the secondary protostar. Since the relative velocity decreases to well below the escape velocity, both protostars will likely return to the centre of the cloud.

Radial profiles for the gas surface density, sound speed, orbital frequency, and Toomre parameter are shown in the left-hand panel of Fig. 5. We compute the profiles using mass-weighted spherical shells centred on the densest cell in the halo of the final resimulation. The surface density increases as Σ ∝ r in the interior of the protostar, where the density is almost constant. On larger scales, the radial dependence changes to Σ ∝ r −1, as deduced from the relation ρ ∝ r −2 for isothermal collapse, while the orbital frequency roughly follows Ω ∝ r −1.
On scales between 0.1 and 100 au, the radial dependences of Σ and Ω thus cancel each other, such that Q remains roughly constant around unity. In the interior of the protostar, Q increases due to the increase in the sound speed and the different radial scaling between Σ and Ω. Since the value of Q is roughly equal to the critical value, the disc is prone to perturbation growth. Further properties at the time when the primary protostar has just formed and is surrounded by a disc that has not yet fragmented are shown in the right-hand panel of Fig. 5. From the top left to the bottom right, the panels show the effective equation of state, root-mean-squared density contrast, cooling time over free-fall time, and free-fall time over sound-crossing time. In the outer region of the disc, on scales ≳ 1 au, the equation of state is characterized by γ eff ≈ 1, as expected for isothermal collapse. The density contrast is roughly constant around unity, while the interior of the primary protostar is characterized by values close to 0.1. Here, γ eff increases to 1.2−1.5 as a reflection of the temperature increase by almost two orders of magnitude in the central 1 au. The cooling time remains well above the free-fall time in the inner 100 au of the halo and down to 1 au, the scale at which the gas becomes optically thick to continuum cooling. The free-fall time remains below the sound-crossing time down to 0.1 au, showing the gravitational instability of the cloud down to this scale.

Minimum fragment mass

Further evidence for the gravitational instability of the gas is presented in Fig. 6, where we plot the enclosed gas mass over the locally estimated Bonnor-Ebert (BE; Ebert 1955; Bonnor 1956) mass as a function of enclosed gas mass. Colours and line styles are the same as in Fig. 2.
The profiles have been computed using spherical shells centred on the densest cell, where the BE mass is calculated as the mass-weighted average among cells within a given radius according to

MBE ≈ 1.18 cs⁴ G^(−3/2) P^(−1/2),

where P denotes the gas pressure. During the initial collapse, the ratio of enclosed gas mass to BE mass decreases as a consequence of the rise in temperature as the gas is shock-heated. The enclosed gas mass surpasses MBE at Menc ≈ 10 8 M⊙, which is in agreement with the mass of the halo. As the halo keeps accreting, another region where the ratio exceeds unity emerges at about 10 6 M⊙. This marks the initial Jeans instability of the cloud. From 10 6 M⊙, the point where Menc surpasses MBE moves down to 0.1 M⊙ when the densest cell first reaches 10 19 cm −3. This is the minimum fragment mass and coincides with the initial mass of the protostar formed at the centre of the halo. From then on the temperature of the central object increases, which translates into an increase of the BE mass, and hence a decrease of the Menc/MBE ratio. As a result, the point at which this ratio equals unity briefly moves up to 10 M⊙ and always stays above 1 M⊙.

Protostellar system

The fragmentation of the disc into a small protostellar system is shown in Fig. 7. The top three rows show cubes of side length 20 au centred on the position of the primary protostar, while the cubes of the last row are 50 au wide with the centre fixed on the position of the primary protostar after 12 yr. In total, we present 16 different output times, which are measured with respect to the point in time at which the densest cell first exceeds 10 19 cm −3. During the first 6 yr, perturbations grow between 1 and 10 au in the form of spiral arms. After 7 yr, they become gravitationally unstable and the first secondary protostar forms. In the next 2 yr, the efficient cooling of the gas results in the formation of additional protostars, and after 12 yr a small protostellar system with six members has emerged.
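The locally estimated BE mass used in Fig. 6 can be sketched as follows; we evaluate the standard expression MBE ≈ 1.18 cs⁴ G^(−3/2) P^(−1/2) from the local temperature and density with the isothermal sound speed. The mean molecular weight and hydrogen mass fraction are assumed values, and the exact prefactor used in the simulation may differ.

```python
import math

# CGS constants (assumed standard values)
G = 6.674e-8
K_B = 1.381e-16
M_H = 1.673e-24
M_SUN = 1.989e33

def bonnor_ebert_mass(T, n_h, mu=1.22, x_h=0.76):
    """Bonnor-Ebert mass, M_BE ~ 1.18 c_s^4 / (G^(3/2) P^(1/2)), evaluated
    from the local temperature T [K] and density of hydrogen nuclei
    n_h [cm^-3] with the isothermal sound speed. Returns Msun.
    Note M_BE scales as T^(3/2) * rho^(-1/2)."""
    rho = n_h * M_H / x_h            # mass density (x_h assumed)
    c_s = math.sqrt(K_B * T / (mu * M_H))
    pressure = rho * c_s * c_s
    return 1.18 * c_s**4 / (G**1.5 * math.sqrt(pressure)) / M_SUN
```

Because MBE ∝ T^(3/2) ρ^(−1/2), the near-isothermal collapse to ever higher densities is what drives the minimum fragment mass down to the ∼ 0.1 M⊙ scale quoted above.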
Shortly thereafter, three-body interactions and strong tidal forces during a close passage of a secondary protostar and the primary protostar result in the disruption of the disc. Both protostars are ejected from the centre of the cloud. The sequence in the bottom row of Fig. 7 shows the evolution of this interaction and how both protostars move away from each other. To quantify the interaction between both protostars, Fig. 8 shows the relative distance and velocity between the protostars over time. For comparison, we also include the escape velocity of both protostars, using the enclosed mass in a spherical region around their respective centres, with radii equal to their separation. Once the secondary protostar forms, it orbits at a roughly constant distance of 4 au from the primary protostar, but they soon move together and their separation decreases to 1 au. Shortly thereafter, three-body interactions eject both protostars from the centre of the halo. This is reflected by a high relative velocity with a peak value of ≈ 100 km s −1, which is followed by a gradual drop in the relative velocity. The parabolic shape of the relative distance suggests that it may reach a point of turnaround, after which the protostars will begin to re-collapse towards the centre. This trend is supported by the declining profile of the relative velocity and the fact that it has fallen well below the escape velocity by the end of the simulation. In Fig. 9, we show the mass, radius, and accretion rate of all protostars over time. The solid lines correspond to individual protostars, while the black dashed lines denote the total mass and accretion rate, respectively. The radius of a protostar is calculated as the distance at which the Rosseland mean opacity reaches its maximum value (Stacy et al. 2013). The protostellar mass is given by the mass enclosed within that radius, and the accretion rate by the time derivative of the enclosed mass.
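The boundness check above, which compares the relative velocity of the ejected protostars with the escape velocity from the enclosed mass, can be sketched with illustrative numbers; the mass, separation, and relative velocity below are assumed for the example, not simulation output.

```python
import math

# CGS constants and units (assumed standard values)
G = 6.674e-8
M_SUN = 1.989e33
AU = 1.496e13   # [cm]
KM_S = 1.0e5    # [cm/s]

def escape_velocity(m_enc, r):
    """Escape velocity [cm/s] from the mass m_enc [g] enclosed within radius r [cm]."""
    return math.sqrt(2.0 * G * m_enc / r)

# Illustrative numbers: ~20 Msun enclosed within a 20 au separation gives
# v_esc ~ 40 km/s, so a relative velocity of ~20 km/s leaves the pair bound.
v_esc = escape_velocity(20.0 * M_SUN, 20.0 * AU) / KM_S
v_rel = 20.0  # [km/s]
bound = v_rel < v_esc
```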
A total of eight secondary protostars form during the evolution and fragmentation of the disc. Out of these, four survive until the end of the simulation. The rest merge with other protostars or are tidally disrupted. During the first 6 yr after the formation of the primary protostar, its mass builds up from 0.1 to 6.4 M⊙ at a rate of roughly 1 M⊙ yr −1, in agreement with previous work (Latif et al. 2013a;Inayoshi, Omukai & Tasker 2014). Its radius increases from 32 to 136 R⊙. The second protostar forms after 7 yr with an initial mass of 0.02 M⊙ and a radius of 22 R⊙. Most of the gas is accreted by the primary protostar, while the second protostar only accretes at a rate of 0.3 M⊙ yr −1 before it is tidally disrupted. Shortly thereafter, the disc fragments vigorously and gives rise to a protostellar system characterized by a massive primary protostar with 9.2 M⊙ and a radius of 160 R⊙, while the secondary protostars only have masses between 0.06 and 0.6 M⊙, and radii in the range 26 − 41 R⊙. The accretion on to the secondary protostars results in a slight decrease of the accretion rate on to the primary protostar to 0.5 M⊙ yr −1, while the total accretion rate remains roughly constant at 1.5 M⊙ yr −1. After 13 yr, the primary protostar is expelled from the centre of the halo and the disc is disrupted (see bottom row of Fig. 7). As the primary protostar is deprived of gas, its accretion rate drops to 0.3 M⊙ yr −1, and its radius decreases from 160 to 56 R⊙. Its final mass is 15 M⊙.

The formation of a protostellar system has recently been reported in studies of minihaloes that cool via H2 lines (Clark, Glover & Klessen 2008;Clark et al. 2011;Greif et al. 2011). Initial masses of protostars in atomic cooling haloes are an order of magnitude higher, while the accretion rates exceed those in minihaloes by about three orders of magnitude. Other studies have found similar values for the initial protostellar masses and accretion rates (Regan & Haehnelt 2009;Latif et al. 2013a;Regan, Johansson & Haehnelt 2014;Inayoshi, Omukai & Tasker 2014).

Figure 9. Stellar mass, radius, and accretion rate of all protostars formed in the simulation. Each line corresponds to an individual protostar, and the black dashed lines show the total mass and accretion rate, respectively. Initially, the mass budget is entirely dominated by the primary protostar (blue line), which grows from 0.1 to 15 M⊙ at a rate of ≈ 1 M⊙ yr −1, while its radius swells to well over 100 R⊙. Once the primary protostar is expelled from the centre, its accretion rate and size drop significantly. The protostar formed in the secondary clump (red line) grows to about 5 M⊙ at a rate of ≈ 1 M⊙ yr −1. The other protostars stay below 2 M⊙, and thus do not contribute significantly to the total protostellar mass.

Figure 10. Simultaneous collapse of a secondary gas clump at a distance of 150 au from the primary clump in the atomic cooling halo. The number density of hydrogen nuclei is weighted with the square of the density along the line of sight in a cube with a side length of 300 au. The panels on the right show zoom-ins on the primary and secondary clumps with side lengths of 60 and 20 au, respectively. While strong interactions occur in the central protostellar system, a second clump has collapsed and is in the early stages of its evolution. Ultimately, the clumps may evolve into a wide binary system.

Secondary clump

About 13 yr into the evolution of the protostellar system, a second clump collapses at a distance of about 150 au from the primary clump. Fig. 10 shows both clumps in a cube with a side length of 300 au at the end of the simulation, with smaller cubes showing zoom-ins on the individual clumps. The zoom-in on the secondary clump shows a protostar with a disc and spiral arms, similar to the early evolutionary stages of the primary clump.
The protostar in the secondary clump is denoted by the red line in Fig. 9. Its mass quickly grows from 0.2 to 4.9 M⊙, and its radius increases from 42 to 116 R⊙. Despite its later formation, it accretes more rapidly than the first protostar. Ultimately, both clumps may evolve into a wide binary system.

Caveats

Previous studies that investigated the collapse and fragmentation of gas in atomic cooling haloes did not have sufficiently high resolution to self-consistently follow the formation of protostars at the centre of the cloud. We have attempted to address this shortcoming by performing a simulation that is not resolution-limited. Nevertheless, we have neglected some physical processes that might affect the fragmentation of the cloud. In particular, we have assumed that the optically thin regime for atomic hydrogen cooling extends up to densities of 10^16 cm^-3. In reality, the gas becomes optically thick to Lyα radiation at densities of 10^6 cm^-3, and then free-bound continuum emission of H− becomes the main cooling agent. Previous studies have found that this kind of cooling may lower the temperature by up to a factor of 2 in the range n_H ≈ 10^15−10^20 cm^-3 compared to our study (Omukai 2001; Inayoshi, Omukai & Tasker 2014). A lower temperature should translate into a lower Toomre parameter, which would enhance the fragmentation seen in our simulation. In addition, at n_H ≳ 10^16 cm^-3 we have introduced an artificial cut-off for continuum cooling in order to approximately reproduce the density-temperature relation found in Omukai (2001). This simplification may also affect the thermal and gravitational stability of the gas. Another factor that might influence the temperature of the disc is the heating from the accretion luminosity of the primary protostar. The corresponding heating rate is given by

Γ_acc = κ_P ρ L_acc / (4π r²),

where κ_P is the Planck mean opacity, ρ the gas density, r the distance from the source, and L_acc = GMṀ/R the accretion luminosity.
The effects of the accretion luminosity have been discussed in similar studies that focused on minihaloes (Smith et al. 2011). They found that the additional heating of the gas may slightly delay fragmentation, but does not prevent it. The photospheric temperature of the protostar of 8000 K during the early stages of the collapse is too low to produce significant amounts of ionizing radiation. Latif et al. (2013a) investigated the influence of accretion luminosity in atomic cooling haloes. Assuming a power-law relation between the mass and the radius of the star, and an accretion rate of 1 M⊙ yr^-1, they computed an accretion luminosity of 2 × 10^-4 erg cm^-3 s^-1 for a 500 M⊙ clump with a size of 100 au and a temperature of 8000 K. This value is comparable to the energy emitted by Lyα cooling, and may exceed it once the mass of the clump reaches 1000 M⊙. However, Latif et al. (2013a) found that this difference only translates into an increase of the temperature by 500 K. Since we investigate the evolution of the protostellar system at even earlier times, when the mass of the protostar is much lower, the effects of the accretion luminosity are expected to be even smaller. Besides the aforementioned cooling and heating processes, we do not include the effects of magnetic fields. These are expected to become dynamically important in minihaloes as well as atomic cooling haloes (e.g. Xu et al. 2008; Schleicher, Spaans & Glover 2010; Sur et al. 2010; Peters et al. 2012, 2014; Schober et al. 2012; Turk et al. 2012; Latif et al. 2013c). Indeed, Latif, Schleicher & Schmidt (2014) found that the magnetic pressure provides additional support against gravity and delays or suppresses fragmentation. Future simulations should therefore include magnetic fields as well as a more detailed chemical and thermal model.
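To give a feel for the magnitudes involved, the accretion luminosity L_acc = GMṀ/R discussed above can be evaluated directly. The numbers below (a 15 M⊙ protostar accreting at 1 M⊙ yr^-1 with a radius of 100 R⊙) are representative values taken from the text; this is an illustrative sketch, not part of the simulation pipeline.

```python
# Illustrative evaluation of the accretion luminosity L_acc = G * M * Mdot / R
# in cgs units. Input values (15 Msun, 1 Msun/yr, 100 Rsun) are representative
# numbers from the text, not simulation output.

G = 6.674e-8          # gravitational constant [cm^3 g^-1 s^-2]
M_SUN = 1.989e33      # solar mass [g]
R_SUN = 6.957e10      # solar radius [cm]
YR = 3.156e7          # year [s]
L_SUN = 3.828e33      # solar luminosity [erg s^-1]

M = 15.0 * M_SUN          # protostellar mass
Mdot = 1.0 * M_SUN / YR   # accretion rate [g s^-1]
R = 100.0 * R_SUN         # protostellar radius

L_acc = G * M * Mdot / R  # accretion luminosity [erg s^-1]
print(f"L_acc = {L_acc:.2e} erg/s = {L_acc / L_SUN:.2e} L_sun")
```

For these assumed values the result is of order 10^6 L⊙, illustrating that accretion heating is energetically significant even though it struggles to raise the gas much above 10^4 K.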
SUMMARY AND CONCLUSIONS

We have performed the highest-resolution cosmological simulation to date of the formation and evolution of a protostellar system in an atomic cooling halo. We follow the collapse of the gas from a few Mpc down to 0.01 au, spanning almost 13 orders of magnitude in scale, and reaching densities as high as n_H ≈ 10^22 cm^-3. The simulation includes an equilibrium/non-equilibrium primordial chemistry solver that evolves five species (H, H2, H−, H+, and e−), and includes H2 line emission, H2 collision-induced emission, Lyα cooling, and inverse Compton cooling. Additionally, we have included a uniform LW background of strength J21 = 10^5 to prevent star formation in progenitor haloes. During the initial collapse, the gas is shock-heated to the virial temperature of about 10^4 K. The molecular hydrogen abundance briefly increases due to the presence of supersonic shocks, but the external radiation background photodissociates H2 to a level of y_H2 ∼ 10^-7 within the halo. As a result, runaway collapse due to Lyα cooling ensues once the virial mass has risen to 5 × 10^7 M⊙. The central gas cloud becomes Jeans-unstable with a mass of 10^6 M⊙ and collapses nearly isothermally over many orders of magnitude in density, characterized by a profile of the form ρ ∝ r^-2. At densities n_H ∼ 10^6 cm^-3, the gas becomes optically thick to Lyα emission and effectively cools via free-bound continuum emission of H− up to a density of n_H ∼ 10^16 cm^-3, where the continuum emission is trapped. The average H2 abundance increases to y_H2 ∼ 10^-4 at n_H ≳ 10^10 cm^-3 due to three-body reactions, but never becomes high enough for H2 line emission to become important. The H+ abundance declines to 10^-8 due to recombinations before increasing to unity for densities ≳ 10^16 cm^-3, where the gas evolves nearly adiabatically and a protostar with an initial mass of 0.1 M⊙ is formed. Following the formation of the primary protostar, the gas settles into a Keplerian disc.
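The ∼10^6 M⊙ Jeans mass quoted above for gas near the virial temperature can be checked with the standard expression M_J = (5kT/(GμmH))^(3/2) (3/(4πρ))^(1/2). The density adopted below (n ≈ 10^3 cm^-3) is our own fiducial assumption for illustration; the text does not quote a single density for the onset of instability.

```python
import math

# Order-of-magnitude check of the Jeans mass for atomically cooling gas.
# Assumed inputs: T = 8000 K and n = 1e3 cm^-3 (illustrative; not from the paper).
K_B = 1.381e-16   # Boltzmann constant [erg K^-1]
G = 6.674e-8      # gravitational constant [cm^3 g^-1 s^-2]
M_H = 1.673e-24   # hydrogen atom mass [g]
M_SUN = 1.989e33  # solar mass [g]
MU = 1.22         # mean molecular weight of neutral primordial gas (approximation)

T = 8.0e3               # gas temperature [K]
n = 1.0e3               # number density [cm^-3]
rho = MU * M_H * n      # mass density [g cm^-3]

M_J = (5.0 * K_B * T / (G * MU * M_H)) ** 1.5 * (3.0 / (4.0 * math.pi * rho)) ** 0.5
print(f"M_J ~ {M_J / M_SUN:.1e} Msun")
```

Within the uncertainty of the assumed density, this reproduces the ∼10^6 M⊙ scale of the unstable cloud.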
The Toomre parameter within the disc is close to unity, such that perturbations can grow. The emerging spiral arms feed gas on to the primary protostar at a rate of 1 M⊙ yr^-1. However, this is not sufficient to process the mass that accretes from the surrounding cloud on to the disc. In combination with the efficient cooling of the gas via continuum emission, the disc becomes gravitationally unstable and a secondary protostar forms after only 7 yr. The disc continues to fragment, such that after 18 yr a total of eight secondary protostars have formed. By the end of the simulation, four of these have survived, while the rest have merged with other protostars or been tidally disrupted. The primary protostar has grown to a mass of 15 M⊙, while all other secondary protostars have masses ≲ 2 M⊙. Three-body interactions lead to the temporary ejection of the primary protostar from the disc after 12 yr, and the disc is disrupted in the process. However, an analysis of the relative velocity of the protostars shows that it is well below the escape velocity. The primary protostar will therefore likely return to the centre of the cloud. After 13 yr, a second clump collapses at a distance of 150 au from the primary clump. It has not yet fragmented and contains a single protostar that rapidly grows to 5 M⊙. If this clump shows a similar pattern of rapid migration and merging, the cloud may evolve into a wide binary system. Despite the temporary ejection of the primary protostar from the centre of the cloud, subfragmentation likely does not substantially impede its growth. Once it returns to the centre of the cloud, its accretion rate will likely again increase to 1 M⊙ yr^-1. In addition, the secondary protostars formed in the disc quickly migrate to the centre of the cloud, where they merge with the primary protostar. They are also typically 10 times less massive than the primary protostar, which has accreted 15 M⊙ by the end of the simulation, while the most massive secondary protostar has only grown to 1.5 M⊙.
Most of the accreted material thus does not stem from other protostars, but from the bar-like instabilities in the disc. The secondary clump may be a much more potent candidate for accreting mass that may have otherwise been accreted by the primary clump, but even in this case the growth of the most massive protostar would be reduced by at most a factor of 2. It thus appears that fragmentation is not a significant barrier for forming at least one massive BH seed per atomic cooling halo, assuming that the LW background is high enough to prevent H2 cooling. Recent simulations have shown that the strength of the LW background may indeed be the more limiting factor (Latif et al. 2014a,b; Regan, Johansson & Wise 2014). One of the main caveats of this study is the simplified chemistry and cooling network. Future work should include a more detailed chemical model, such as that used in Inayoshi, Omukai & Tasker (2014). It may also become possible to treat the radiative transfer of the various line and continuum processes (e.g. Greif 2014). Finally, the influence of magnetic fields may be investigated with modules that have already been implemented in arepo (Pakmor, Bauer & Springel 2011). The influence of the radiation may not be that strong, since it is difficult to heat the gas above 10^4 K, while magnetic fields may have a substantial effect on the thermal and gravitational stability of the cloud (e.g. Latif et al. 2013c; Latif, Schleicher & Schmidt 2014). The additional support provided by magnetic fields may reduce the ability of the gas to fragment, and further increase the accretion rate of the primary protostar.
Evolution of ribozymes in the presence of a mineral surface

Mineral surfaces are often proposed as the sites of critical processes in the emergence of life. Clay minerals in particular are thought to play significant roles in the origin of life including polymerizing, concentrating, organizing, and protecting biopolymers. In these scenarios, the impact of minerals on biopolymer folding is expected to influence evolutionary processes. These processes include both the initial emergence of functional structures in the presence of the mineral and the subsequent transition away from the mineral-associated niche. The initial evolution of function depends upon the number and distribution of sequences capable of functioning in the presence of the mineral, and the transition to new environments depends upon the overlap between sequences that evolve on the mineral surface and sequences that can perform the same functions in the mineral's absence. To examine these processes, we evolved self-cleaving ribozymes in vitro in the presence or absence of Na-saturated montmorillonite clay mineral particles. Starting from a shared population of random sequences, RNA populations were evolved in parallel, along separate evolutionary trajectories. Comparative sequence analysis and activity assays show that the impact of this clay mineral on functional structure selection was minimal; it neither prevented common structures from emerging, nor did it promote the emergence of new structures. This suggests that montmorillonite does not improve RNA's ability to evolve functional structures; however, it also suggests that RNAs that do evolve in contact with montmorillonite retain the same structures in mineral-free environments, potentially facilitating an evolutionary transition away from a mineral-associated niche.
INTRODUCTION

Interactions between minerals and organic molecules likely played a role in the emergence of life on the early Earth and perhaps even play(ed) a role in the emergence of life on other planets. Mineral surfaces can support several processes that may be exploited by emerging life (Hazen and Sverjensky 2010) including the selective sorption (Franchi et al. 2003), concentration, protection (Biondi et al. 2007b), organization (Hanczyc et al. 2003; Konnyu et al. 2015; Shay et al. 2015), and chemical transformation (Huang and Ferris 2006) of organic molecules. Additionally, similarities between some bioinorganic structures and mineral surfaces suggest that metabolic functions in emerging life occurred on mineral surfaces (Nitschke et al. 2013). It is therefore important to address the role of inorganic structures when considering the processes involved in the origin(s) and early evolution of life. Clay minerals are among those predicted to be present in prebiotic environments, including the early Earth, where they have been proposed to facilitate the transition from abiotic chemistry to biology. Water on the early Earth (Mojzsis et al. 2001) would have weathered basaltic rocks and generated several different clay mineral species (Hazen et al. 2013). While direct evidence of clay minerals on the early Earth has been lost due to geological cycling, the presence of 3.5 billion year old clay minerals on Mars (Bristow and Milliken 2011) supports their predicted presence on the early Earth and other potentially prebiotic environments. With their small particle size and typically flattened plate-like crystallites, even small proportions of clay minerals in rocks and sediments provide the majority of mineral surface area available for reactions with organic compounds (Ransom et al. 1998). Interaction between organic molecules and clay minerals on the early Earth (or similar habitable planets) is therefore likely and may play a significant role in the emergence of life.
Among the potential interactions between organics and clay minerals, those involving nucleotides and nucleic acids are of particular interest given the central role of RNA in contemporary biology, and evidence of an even greater role for RNA in early life (Benner et al. 2012; Robertson and Joyce 2012). Montmorillonite clay minerals have been shown to bind RNA (Franchi et al. 2003), act as a scaffold to facilitate the formation of RNA from activated monomers (Huang and Ferris 2006; Joshi et al. 2009), and support formation of RNA-encapsulating vesicles (Hanczyc et al. 2003, 2007). At least two biologically derived, functional RNA structures (hammerhead and hairpin ribozymes) remain catalytically active in the presence of montmorillonite clay (Biondi et al. 2007a,b); however, it is unclear whether these two structures from contemporary biology are representative of ribozymes in general. Interaction with montmorillonite affects the activity of certain hammerhead ribozymes (Biondi et al. 2007a), and molecular dynamics simulations indicate altered folding pathways for RNA through interaction with montmorillonite (Swadling et al. 2010, 2015). Based on these observations, we predicted that montmorillonite would both interfere with the folding of certain functional structures and stabilize other structures that cannot properly fold without an inorganic scaffold. Through their impact on RNA folding, clay minerals could dramatically alter the distribution of functional RNAs within sequence space, possibly presenting unique opportunities and challenges for nascent life.
If clay minerals can support a wider variety of functional structures, then this could make it easier for RNA-based life to emerge in association with a mineral surface; however, if populations evolving in the presence of clay are sufficiently depleted in RNAs that can function in the absence of the mineral surface, this could represent a major challenge in transitioning away from a mineral-associated, initial niche. Our experiments address these possibilities. To understand broadly how RNA's potential to adopt functional structures can be influenced by the presence of a mineral surface, we evolved RNA populations in vitro in the presence or absence of Na-saturated montmorillonite. The RNA populations were evolved to catalyze RNA cleavage, a function catalyzed by several different RNA structures present in biology (Hammann et al. 2012) and in in vitro evolved populations (Jayasena and Gold 1997; Tang and Breaker 2000; Salehi-Ashtiani and Szostak 2001; Popović et al. 2015). We recently used this in vitro evolution approach to investigate the impact of pH and ion identity on RNA function (Popović et al. 2015), and showed that some structures that are highly favored in one environment are disfavored in others. In contrast, in the study described here we find that the outcomes of parallel in vitro evolution experiments conducted either in the presence or absence of montmorillonite are strikingly similar. This similarity provides evidence that montmorillonite does not provide an enhanced folding environment, but it does demonstrate the potential for a smooth evolutionary transition from an initial mineral-associated RNA world to environments more like the cellular environments of known, contemporary biology.

RESULTS

Ribozymes can be readily evolved through selection of self-cleavage activity in the presence of montmorillonite clay

We evolved self-cleaving ribozymes in vitro in the presence or absence of a Na-saturated montmorillonite clay (Fig. 1A,B).
The RNA construct used for in vitro evolution was a 203-nucleotide (nt) long RNA with 90 fully random positions, flanked by 5′ and 3′ constant sequences (Fig. 1C). Self-cleaving ribozymes were selected based on their ability to cleave a specific 16-nt target sequence within the 3′ constant sequence. Gel electrophoresis was used to separate active RNA sequences from inactive sequences based on the reduction in length upon self-cleavage. The RNA populations were evolved in parallel, along two separate evolutionary trajectories, starting from a shared, multicopy population of random sequences (Fig. 2A). The selection steps were carried out by first heat denaturing and refolding the populations, either in the presence of 10 mg/mL Na-saturated montmorillonite clay ([+]clay) suspended in a pH 7 buffer with 50 mM NaCl or in the presence of the same buffer without montmorillonite ([−]clay). After refolding, Mg2+ was added to a final concentration of 5 mM and the populations were allowed to react for 60 min.

FIGURE 1. The mineral and RNA used for in vitro evolution. (A) 2:1 phyllosilicate crystal structure of montmorillonite, with layers consisting of Si-bearing tetrahedral sheets sandwiching Al-bearing octahedral sheets. (B) X-ray diffraction pattern of the prepared Na-saturated montmorillonite sample in air-dried state confirms the identity and purity of the clay. The intensity of the X-ray reflections is shown as a function of the diffraction angle 2θ along with the corresponding interatomic spacing. (C) RNA construct used for in vitro evolution with 90-nt variable sequence, flanked by the 5′ and 3′ constant sequences. The constant sequences contain primer binding sites (PBS) for reverse transcription (RT) and PCR. The 3′ constant sequence contains the 16-nt cleavage site and a 29-nt spacer sequence 3′ of the cleavage site to improve separation between cleaved and uncleaved RNA during electrophoresis.
As a control for changes in the solution conditions that could occur from exchanges with the clay, the buffer used in the [−]clay selection steps was preincubated with montmorillonite for 60 min and then filtered to remove the clay particles. During the selection, the cleavage reaction was stopped and the RNA was separated from the clay by a 100-fold dilution into a denaturing stripping solution followed by filtration to remove clay particles prior to electrophoresis. Self-cleavage within the populations was apparent during the fifth round of evolution along both the [+]clay and [−]clay trajectories. The populations that emerged from the fifth round (C5 and B5) clearly exhibited self-cleavage activity. Both populations exhibited a similar extent of cleavage in the presence of clay, and for both populations the extent of cleavage is slightly higher in the absence of clay (Fig. 2B). While the extent of cleavage is similar for both populations, the size distribution of the cleavage products shows that the preferred cleavage sites for the two populations are different. Following in vitro evolution, six populations (B5, B6, B5C1, C5, C6, C5B1) were sequenced using high-throughput sequencing (Fig. 2A). 1.3 × 10^6 quality-filtered sequence reads were analyzed for each of these populations, which includes between 36,518 and 56,296 unique sequences. For each unique sequence the number of reads was counted. Many sequences are nearly identical to several other sequences in the population. Similar sequences were clustered into sequence families and the number of reads per family was determined. The number of reads for each family in the C6 population was used to assign names to the families based on their rank-order in terms of read abundance. Sequence families are defined such that all members are within 12 edits (substitutions, insertions, or deletions) of the family's most abundant sequence (Fig. 3A).
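The family definition above (a 12-edit radius around the most abundant member) can be sketched as a greedy procedure: rank sequences by read count, then attach each sequence to the first family whose representative lies within the cutoff. This is a minimal re-implementation for illustration; the authors' actual clustering pipeline is not specified here, and the toy sequences and cutoff below are our own.

```python
def edit_distance(a, b):
    """Levenshtein distance (substitutions, insertions, deletions) via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution (free if match)
        prev = cur
    return prev[-1]

def cluster_families(read_counts, cutoff):
    """Greedy clustering: the most-read unassigned sequence seeds a family;
    each sequence joins the first family whose representative is within `cutoff` edits."""
    families = []  # list of (representative, {sequence: reads})
    for seq, reads in sorted(read_counts.items(), key=lambda kv: -kv[1]):
        for rep, members in families:
            if edit_distance(seq, rep) <= cutoff:
                members[seq] = reads
                break
        else:
            families.append((seq, {seq: reads}))
    return families

# Toy population: two near-identical variants plus one unrelated sequence.
# The paper's cutoff of 12 applies to 90-nt variable regions; for these short
# toy sequences we use a cutoff of 3.
reads = {"ACGUACGUACGU": 500, "ACGUACGAACGU": 40, "UUUUGGGGCCCC": 120}
fams = cluster_families(reads, cutoff=3)
```

With these toy inputs the single-substitution variant joins the dominant family while the unrelated sequence seeds its own, mirroring how read counts per family are tallied in the text.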
There were between 207 and 3507 sequence families in the populations sequenced. The diversity of sequences within a family likely represents a combination of both mutations to shared parent sequences that arise during the amplification steps of evolution and sequencing errors of a shared sequence. Nearly all sequences that are more than 12 edits apart in sequence space are separated by edit distances between 40 and 55 (Fig. 3A). Those few sequences at edit distances greater than 12 and less than 40 appear to be largely the result of recombination events (Supplemental Fig. S1). Analysis of a simulated population shows that edit distances between random sequences are typically between 40 and 55 (Fig. 3A), indicating that the sequences within a given family are related. Additionally, the edit distances between families in the physical populations are comparable to the edit distances between random sequences in a simulated population (Fig. 3B).

Shared sequences dominate populations evolved in either the presence or absence of montmorillonite clay

All six populations are largely composed of the same sequence families (Fig. 4). The 10 families with the most reads in the C6 population account for 98.6% of the reads in that population and >89% of the reads in the remaining populations C5, C5B1, B5, B6, and B5C1. In all cases, the sixth round of evolution resulted in an increase in abundance of the largest sequence families relative to the rest of the population. After the sixth round (both when conditions are held constant and when they are switched), over 98% of sequence reads belong to these 10 shared families. Populations evolved in the presence or absence of clay are strikingly similar in terms of the identity of sequence families and the associated number of reads (Fig. 5). The differences between populations C5 and B5 (Fig. 5A) are comparable to the differences between a previously generated pair of replicate evolutionary trajectories (Supplemental Fig.
S2; Popović et al. 2015). The differences between populations C6 and B6 (Fig. 5B) are similarly small.

To assess the degree to which small differences between the [+]clay and [−]clay populations reflect a response to the presence or absence of clay, we switched conditions in the final round of evolution (Fig. 2A). Both when the populations are kept in the same conditions as the preceding rounds and when the conditions are changed, the number of reads in most families decreases (Fig. 6). An increase in some of the most abundant families shows that they are becoming relatively enriched at the expense of others. For both populations B5 and C5, switching conditions causes the populations to become dominated by Family 2 and the abundance of Family 3 is greatly diminished (Figs. 4, 6B). The direction of change in family abundances is frequently the same whether changing from [−]clay to [+]clay or from [+]clay to [−]clay (Fig. 6B). This suggests that the small differences in the evolved populations reflect small random differences in the starting populations and are not primarily driven by the presence or absence of clay, consistent with the similarity between populations C6 and B6.

Ribozyme activity is partially inhibited by the presence of clay

To test the impact of clay on the activity of specific ribozymes within these populations, we assayed the activity of representative sequences (the most abundant sequence within a family) from several families. While sequence comparisons suggest that clay has, at most, a modest impact on relative fitness, we assayed a representative set of sequences for clay-dependent activity. Families 1, 7, 9, and 13 were identified as candidates for sequences with higher activity in clay. They are between 29 and 126 times more abundant in the [+]clay population C6 than in the [−]clay population B6 (Fig. 5B) and increase from round 5 to 6 in the [+]clay trajectory (Fig. 6A). They also decrease in abundance between rounds 5 and 6 in the [−]clay trajectory (Fig. 6A).
Family 2 was selected because it grew to dominate the B5C1 population (switch from [−]clay to [+]clay), and a representative from Family 4 was identified as a family that potentially has higher activity without clay. Family 4 is the most overrepresented family (among the 20 most abundant) in B6 relative to C6 (Fig. 5B). Individual representative RNA sequences were prepared and allowed to cleave using the same conditions used for in vitro evolution. Under the selection conditions, all representative ribozymes tested cleaved in both the presence and absence of clay, and all with a lower extent of cleavage in the presence of clay (Fig. 7A). The diminished activity observed in the presence of clay appears to represent a true drop in activity and not an artifact of differential recovery of full-length and cleaved RNA from the clay. When ribozymes were incubated without clay for 1 h and then clay was added for 1 min prior to filtering, the observed cleavage was unchanged relative to the activity without clay (Supplemental Fig. S3). We further explored the clay dependence of ribozyme activity by measuring cleavage kinetics of individual representatives from Families 1, 2, and 4. For kinetic assays, the filtering step, used to remove clay particles prior to electrophoresis during the selection steps, was omitted and samples were loaded directly onto the gels in the stripping solution. Families 1 and 2 have slightly slower rate constants in the presence of clay (Fig. 7B,C) and almost identical amplitudes. In contrast, Family 4 is more strongly inhibited by clay. The representative from Family 4 was unusual in that, unlike the other five families assayed, it exhibited extensive cleavage during sample preparation, ranging from 25% to 67%. While the remaining uncleaved material is rapidly cleaved in the absence of clay, the rate and magnitude of cleavage are much smaller in the presence of clay (Fig. 7D).
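Rate constants and amplitudes from time courses like these are commonly extracted by fitting the cleaved fraction to a single-exponential model f(t) = A(1 − e^(−kt)). The sketch below estimates k by linearizing synthetic data; the numerical values are placeholders for illustration, not the measured kinetics of Families 1, 2, or 4.

```python
import math

def fit_rate_constant(times, fractions, amplitude):
    """Estimate k in f(t) = A * (1 - exp(-k*t)) by linearizing
    y = -ln(1 - f/A) = k*t and least-squares fitting a line through the origin."""
    ys = [-math.log(1.0 - f / amplitude) for f in fractions]
    return sum(t * y for t, y in zip(times, ys)) / sum(t * t for t in times)

# Synthetic cleavage time course with a known rate constant (placeholder values).
k_true, A = 0.05, 0.8                # per-minute rate constant and final amplitude
times = [1, 2, 5, 10, 20, 40, 60]    # minutes, spanning the 60-min selection window
fractions = [A * (1.0 - math.exp(-k_true * t)) for t in times]

k_fit = fit_rate_constant(times, fractions, A)
print(f"fitted k = {k_fit:.4f} min^-1")
```

On noiseless synthetic data the linearization recovers the input rate constant exactly; with real gel-quantified fractions, A would itself be a fitted parameter.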
The similar activities of representative sequences in both the presence and absence of clay and the overall similarity of the populations do not arise from a lack of interaction between the clay particles and the evolved ribozymes. As with the starting population, the evolved populations retain an affinity for montmorillonite (Fig. 8). When [−]clay and [+]clay populations are incubated with clay and then diluted with native buffer, the populations are largely retained within or immediately below the wells during electrophoresis (Fig. 8A). This indicates that the RNA remains associated with the clay particles, which cannot move into the gel. When clay-incubated samples are diluted with stripping solution, they are able to enter the gel, but still have slightly retarded mobility relative to samples without clay (Fig. 8B). Unimpeded mobility can be achieved for clay-incubated samples by dilution into stripping solution followed by filtering (Fig. 2B). Without the addition of stripping solution, the individual representative ribozymes are also retained within or immediately below the wells after clay incubation (Fig. 8C).

DISCUSSION

RNA's adsorption onto montmorillonite surfaces (Franchi et al. 2003), the reduced activity of at least one biologically derived ribozyme in the presence of montmorillonite (Biondi et al. 2007a), and its impact on RNA folding and dynamics in molecular dynamics simulations (Swadling et al. 2010, 2015) all suggest significantly altered folding landscapes. This impact on folding could make montmorillonite either more or less favorable to RNA-based life. We observe that the presence of montmorillonite did not significantly alter the number or identity of RNAs that can adopt functional structures during in vitro evolution.
Additionally, the ribozymes tested here are active in the presence of montmorillonite with only moderate inhibition, and even this limited inhibition may partially reflect indirect effects on folding that arise from the dynamic exchange of ions between solution and montmorillonite particles. The extensive overlap between ribozymes evolving in the presence and absence of a clay mineral surface suggests that the presence of montmorillonite does not provide unique opportunities to adopt functional structures, but it does suggest extensive opportunities for an evolutionary transition away from a mineral-associated niche.

The presence of Na-saturated montmorillonite did not significantly impact the evolution of a random RNA population when selecting for self-cleavage. On the population level, whether evolved with or without montmorillonite, there is a similar degree of cleavage after the same number of rounds of evolution (Fig. 2B). At the sequence level, we observe that the same sequence families dominate the majority of the populations in both the [−]clay and [+]clay evolved populations (Figs. 4, 5). Critically, this similarity is not an inevitable consequence of using this starting population or technique, as evidenced by the large differences that were previously observed (Popović et al. 2015) between populations evolved from the same starting population used here and the same partitioning method. The differences between the [+]clay and [−]clay populations are relatively small compared to the differences between our previously evolved self-cleaving ribozyme populations in which pH and ion identity were varied (Fig. 5; Supplemental Fig. S2; Popović et al. 2015). Many of the most abundant sequences that emerged in the [+]clay and [−]clay populations are also among the most abundant sequences present in populations evolved previously, at the same pH, from the same starting population (Supplemental Figs. S2, S4).
Additionally, the small differences that are present between the [+]clay and [−]clay populations are not reflected in changes in the populations upon changing the selection condition (Fig. 6), indicating a limited role of montmorillonite in generating those differences. Activity assays indicate that multiple unrelated sequences respond to the presence of montmorillonite similarly; most are moderately inhibited (Fig. 7). Self-cleavage prior to the selection steps could contribute to similarities between evolutionary trajectories, and our activity assays indicate that at least one family (Family 4) can undergo self-cleavage during preparative steps. Yet, multiple lines of evidence indicate that this behavior is not the primary determinant of the similarities we observe. For example, the other ribozymes tested had minimal cleavage prior to incubation. Additionally, Family 4 comprises ≤0.3% of the reads in four prior, independently evolved populations that used this same shared starting population and the same partitioning method, with those populations being selected for self-cleavage activity at pH 5 (Supplemental Fig. S4). In contrast, in the three previous evolution trajectories selected at pH 7, Family 4 is the most abundant family (Supplemental Figs. S2, S4), indicating that the evolutionary success of this sequence family depends more on the pH of the selection steps than on activity during preparative steps. Multiple structures capable of catalyzing self-cleavage are known to be prevalent within short RNA sequences in random sequence libraries (Jayasena and Gold 1997; Conaty et al. 1999; Tang and Breaker 2000; Salehi-Ashtiani and Szostak 2001) and in biology. At least two common structural motifs that are prevalent when self-cleaving ribozymes are evolved in the presence of montmorillonite are also common in multiple populations evolved in the absence of montmorillonite, the DCGUY-3WJ and hammerhead motifs.
The populations evolved here have many abundant sequences that contain the hammerhead ribozyme motif, which was previously observed in multiple populations evolved in the absence of minerals and at similar pH values (Tang and Breaker 2000; Salehi-Ashtiani and Szostak 2001; Popović et al. 2015). The hammerhead ribozyme is the most abundant recurring motif identified in the [+]clay population, with eight of the 20 most abundant families containing this structural motif. The hammerhead is also an abundant recurring motif identified within many biological RNAs (Hammann et al. 2012). The dominant structural motifs in the [+]clay evolved population are therefore clearly compatible with mineral-free environments, including modern cells. While clay minerals have several features that could be exploited by emerging life (e.g., their ability to build, concentrate, protect, and organize biomolecules including RNA), our results suggest that expanding the range of functional RNA structures is not one of them. The similarities between the populations evolved here and RNAs evolved previously in vitro and in vivo do, however, suggest the possibility of a smooth evolutionary transition from biomolecules evolving in association with mineral surfaces to evolving in mineral-free protocellular environments. These similarities also increase the confidence with which insights derived from in vitro evolution studies performed without minerals can be applied to origin of life scenarios involving mineral surfaces (Hanczyc et al. 2003; Briones et al. 2009; Konnyu et al. 2015; Shay et al. 2015). While the presence of montmorillonite had surprisingly little impact on the evolution of ribozymes that catalyze self-cleavage, it remains to be seen if intermolecular functions such as ligand binding or RNA ligation are more strongly impacted by the presence of this mineral surface. Furthermore, other mineral surfaces, even other classes of montmorillonite, vary in their characteristics (Joshi et al.
2009; Swadling et al. 2013) and may have different impacts on RNA evolution.

Preparation of montmorillonite

Wyoming montmorillonite SWy-2, purchased from the Clay Minerals Society, was disaggregated in deionized water using a sonic horn, and aggregates <0.5 µm were separated using a high-capacity centrifuge (6 × 1 L) at 2500 rpm with 12-min run times. Organics were removed from the clay minerals using multiple treatments with a 5% hypochlorite solution adjusted to pH 7 with HCl, a treatment that is mild enough not to damage the clay. Interlayer cations were exchanged by continuous stirring in a 1 M NaCl solution for an hour, followed by centrifugation, removal of the supernatant solution, and replacement with fresh NaCl solution. This process was repeated five times. Unincorporated Na ions were removed from the clay minerals by dialyzing against deionized water for several days until conductivity meter readings were <50 mS/m. The Na-saturated montmorillonite was then freeze-dried and stored between experiments in a desiccator. X-ray diffraction patterns of oriented specimens and random powder samples of Na-saturated clay aggregates were obtained in the air-dried state to confirm the identity and purity of the clay. X-ray diffraction patterns were collected on a Rigaku SmartLab XRD.

Preparation of DNA library and RNA population

A 226 base pair double-stranded DNA template was generated as described (Popović et al. 2015) with a 90-nt random region flanked by constant regions for amplification and size differentiation of the cleaved product. The sense strand DNA sequence is: GCCATGTAATACGACTCACTATAGGGACACGACGCTCTTCCGATCT(90N)GGGCATAAGGTATTTAATTCCATACTGGACCCAGTCAGTAGACACAACAAGTTCTTAGACGAGATAATACTACGCTAACACCGCACCAAC; the italicized region is the T7 promoter sequence and the bold region is a PCR and Illumina primer binding site. The underlined region corresponds to the cleavage sequence for the self-cleavage selection.
The DNA library of ∼2 × 10^14 molecules was transcribed to generate a population of ∼2 × 10^16 RNA molecules, from which aliquots of ∼10^15 molecules were taken for each of the two trajectories. Transcription was carried out in transcription buffer (50 mM Tris-HCl pH 7.5, 10 mM NaCl, 30 mM MgCl2, 2 mM spermidine, 40 mM DTT) with 5 mM of each NTP, 100 µM blocking oligo, and T7 RNA polymerase (Promega) for 15 h at 37°C. The blocking oligo (CTACTGACTGGGTCCAG), which is fully complementary to the cleavage sequence, was included to inhibit undesired self-cleavage during transcription (Salehi-Ashtiani and Szostak 2001; Saksmerprome et al. 2004). RNA was purified and the blocking oligomer was removed through denaturing PAGE. The RNA population was recovered from the gel through electro-elution (Bio-Rad), precipitated by the addition of 1/10 volume 3 M NaOAc pH 5.2 followed immediately by the addition of three volumes of 100% ethanol and centrifugation at 18,000g for 60 min, and then resuspended in water.

(Figure legend fragment) With the addition of stripping solution, samples incubated with clay enter the gel; their mobility remains slightly retarded relative to samples without clay. (C) Individual sequences from families 1, 2, 4, 7, 9, and 13 were incubated with or without clay. The same fraction of the purified transcription product was used for each family, so the total signal varies between families. Samples were diluted fourfold by the addition of buffer with 10% glycerol and then subjected to PAGE. Without the addition of stripping solution, samples incubated with clay do not fully enter the gel; most of the clay-incubated material does not enter the gel at all (the signal is lost), and the little that does enter is mostly at the top.
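The stated complementarity between the blocking oligo and the cleavage sequence can be checked programmatically. A minimal sketch using the sequences given above; the variable names are illustrative, and the template slice is the constant region downstream of the random insert:

```python
# Check that the blocking oligo is fully complementary to the cleavage
# sequence: its reverse complement should appear within the constant region
# downstream of the 90N random insert of the sense-strand template.

SENSE_3PRIME = ("GGGCATAAGGTATTTAATTCCATACTGGACCCAGTCAGTAGACAC"
                "AACAAGTTCTTAGACGAGATAATACTACGCTAACACCGCACCAAC")
BLOCKING_OLIGO = "CTACTGACTGGGTCCAG"

def reverse_complement(seq: str) -> str:
    """Return the reverse complement of a DNA sequence."""
    comp = {"A": "T", "T": "A", "G": "C", "C": "G"}
    return "".join(comp[base] for base in reversed(seq))

site = reverse_complement(BLOCKING_OLIGO)
print(site)                   # CTGGACCCAGTCAGTAG
print(site in SENSE_3PRIME)   # True
```

The reverse complement of the blocking oligo is found verbatim in the constant region, consistent with the oligo spanning the cleavage site.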
Evolution of site-specific cleavage in the presence or absence of clay

For each selection step, the populations were refolded by heating to 90°C for 3 min and cooling to ambient temperature over 15 min in a buffer with or without clay (1 µM RNA, 50 mM NaCl, 50 mM MOPS pH 7, and 10 mg/mL montmorillonite for the [+]clay selections). For the [−]clay selection steps, prior to the addition of RNA, the above buffer was preincubated for 60 min with 10 mg/mL sodium-exchanged montmorillonite and then filtered through a 150K MWCO filter (Pierce) to remove clay particles. Removal of clay particles upon filtering was verified by the loss of the characteristic UV absorbance peak of montmorillonite; after filtering, UV absorbance is <1% of the initial suspension and indistinguishable from background. This preincubation step allowed equilibration between the clay and the buffer so that any changes to the buffer arising from the presence of clay would be consistent between the two evolutionary trajectories. The preincubation does not alter the pH of the buffer. After refolding, MgCl2 was added to a final concentration of 5 mM Mg2+ and the samples were incubated for 60 min at ambient temperature (23°C). A 100× volume of stripping solution (10 M urea and 20 mM EDTA, adjusted to pH 10 with NaOH) was added to stop the reaction and dissociate the RNA from the clay. The samples were then filtered through a 150K MWCO filter to remove the clay particles. The flow-through was ethanol precipitated, resuspended in denaturing loading buffer, and subjected to denaturing PAGE (8 M urea, 6% polyacrylamide, 2 mM EDTA, 89 mM boric acid, 89 mM Tris, pH 8.3). Denaturing PAGE was used to separate the active sequences within the RNA population from inactive full-length RNA. Size standards were run alongside the population during PAGE so that only those sequences that cleave within the defined cleavage sequence were selected.
The catalytically active sequences were recovered from the gel through electro-elution (Bio-Rad), precipitated, and resuspended in water. The resuspended sample was then reverse transcribed using ImProm-II reverse transcriptase (Promega) and amplified via PCR using Taq DNA polymerase (Thermo Scientific). Finally, the PCR products were transcribed in vitro to generate the RNA population used in the next round of evolution. This process was repeated for six rounds.

Sequencing and analysis of evolved populations

In vitro evolved populations were sequenced on an Illumina HiSeq 2500 instrument. The six populations described here were sequenced along with 16 additional populations on a single lane. Prior to sequencing, populations were reverse transcribed and PCR amplified with primers that introduced indexing sequences, allowing multiple populations to be multiplexed in a single sequencing lane. Phusion High-Fidelity DNA polymerase (Thermo Scientific) was used for this PCR step to minimize mutations after the final selection step. All populations were diluted to the same concentration. Approximately 7 million raw sequence reads per population contained information on 100 positions per molecule. Constant sequences on the 3′ end of the sequence reads were removed from the variable regions of 85–93 nt, and raw reads were quality filtered by completely removing all reads in which any position within the variable region has a Phred score of <29, using a custom Python script. For all populations, >1.3 million reads remained after quality filtering. For comparative analysis, 1.3 million reads were chosen randomly from each population. Reads were counted, clustered into families of related sequences, and compared between populations using the FASTAptamer toolkit (Alam et al. 2015), with an edit distance of 12 used to define sequence families. The hammerhead and DCGUY-3WJ motifs were identified using motif descriptors as described (Popović et al. 2015).
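The quality filter described above (discard any read whose variable region contains even one position with Phred < 29) can be sketched in a few lines. This is an illustrative reimplementation, not the authors' script; the (sequence, quality string) input format and Phred+33 encoding are assumptions:

```python
# Sketch of the read quality filter: drop any read whose variable region
# contains a single position with a Phred score below 29.

PHRED_OFFSET = 33   # Phred+33 ASCII encoding (Illumina 1.8+), assumed here
MIN_PHRED = 29

def passes_quality(qual: str) -> bool:
    """True only if every position meets the Phred >= 29 threshold."""
    return all(ord(ch) - PHRED_OFFSET >= MIN_PHRED for ch in qual)

def quality_filter(reads):
    """Keep reads (variable region only, constant 3' end already trimmed)
    whose quality string passes at every position."""
    return [seq for seq, qual in reads if passes_quality(qual)]

reads = [
    ("GAUUACA", "IIIIIII"),  # 'I' = Phred 40 at every position -> kept
    ("GAUUACA", "III!III"),  # '!' = Phred 0 at one position -> discarded
]
print(quality_filter(reads))  # ['GAUUACA']
```

Filtering on the worst base of a read, rather than its mean quality, matches the "completely removing all reads" criterion in the text.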
Simulated populations were generated using a custom Perl script. Simulated populations included the same number of sequences and the same length distribution as the experimental populations.

Self-cleavage activity assays

RNA was transcribed from DNA templates in the presence of [α-32P]CTP and the blocking oligo. 32P body-labeled ribozymes were purified, refolded, and then incubated. For endpoint assays of individual sequences, reactions were initiated and stopped in the same way as in the selection. For kinetic assays, reactions were initiated as in the selection, but were stopped by the addition of a 3× volume of stripping solution and run directly on a PAGE gel. Products were separated by 6% PAGE and quantified using ImageQuant software to determine the amount of signal from each band. The extent of cleavage was calculated from the signal of the bands corresponding to the uncleaved RNA and the 5′ cleavage products, correcting for the difference in the amount of incorporated 32P. For kinetic assays, t0 is defined as the time when Mg2+ was added to the reaction. With the exception of Family 4, the extent of cleavage at t0 is minimal (<5%). Cleavage kinetics were fit to a single exponential, y(t) = A(1 − e^(−k_obs·t)), or a double exponential, y(t) = A1(1 − e^(−k_obs1·t)) + A2(1 − e^(−k_obs2·t)), using MyCurveFit (MyAssays Ltd.).  Prior to the fit, cleavage at t0 was subtracted and the extent of cleavage was normalized to the maximum extent of cleavage observed in the absence of montmorillonite.

SUPPLEMENTAL MATERIAL

Supplemental material is available for this article.
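For the single-exponential model used in the kinetic fits, y(t) = A(1 − e^(−k_obs·t)), the rate constant can be recovered from a time course by linearizing ln(1 − y/A) = −k_obs·t. A minimal sketch with synthetic, noise-free data; the amplitude, rate, and time points are illustrative values only, not data from the paper:

```python
import math

# Single-exponential cleavage model: y(t) = A * (1 - exp(-k_obs * t)).
def single_exp(t, A, k):
    return A * (1.0 - math.exp(-k * t))

A_true, k_true = 0.8, 0.15                  # amplitude, rate (min^-1)
times = [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]    # minutes after Mg2+ addition (t0)
ys = [single_exp(t, A_true, k_true) for t in times]

# Linearize ln(1 - y/A) = -k_obs * t, then take the least-squares slope
# through the origin to estimate k_obs.
zs = [math.log(1.0 - y / A_true) for y in ys]
k_fit = -sum(t * z for t, z in zip(times, zs)) / sum(t * t for t in times)
print(round(k_fit, 4))  # 0.15
```

With noisy data and an unknown amplitude, a nonlinear least-squares fit (as performed by MyCurveFit in the paper) is preferable to this linearization, which requires A to be known.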
Green Supply Chain Decision-Making considering Retailer's Fairness Concerns and Government Subsidy Policy

The government's green subsidy and the retailer's fairness concerns have great implications for enterprises' operation strategies in the green supply chain (GSC). With the continuous deepening of retailers' participation in supply chain management, the green services provided by retailers have come to play a crucial role in promoting the terminal sales of green products. To further investigate the effects of government subsidies and retailers' fairness concerns on the optimal decisions of product pricing, green R&D, and service level, we construct four two-stage GSC models: no subsidy and no fairness concerns; subsidizing the manufacturer without retailer's fairness concerns; subsidizing the manufacturer with retailer's fairness concerns; and subsidizing all members with retailer's fairness concerns. The results show that subsidizing the manufacturer significantly improves supply chain performance and environmental governance, but it exacerbates the unfair distribution of profits among members, and the retailer's fairness concerns drive it to offer a lower green service level. With consumers' green demand unable to be fully satisfied, the consumer surplus and the effectiveness of government environmental governance decrease accordingly. To eliminate the adverse effects caused by the unfair distribution of profits, it is necessary to subsidize retailers so as to share their green service costs and increase their share of profits.

Introduction

In recent decades, the rapid development of the global economy has greatly improved people's living standards, but the mass production and consumption of products have exacerbated greenhouse gas emissions.
According to a press release by the International Energy Agency (IEA) accompanying its Global Energy Review 2021, global energy demand will grow by 4.6 percent in 2021, and global energy-related carbon dioxide emissions will increase by 4.8 percent, reaching 33 billion tons. To reduce carbon emissions from the consumption of oil, coal, and other mineral resources, some countries have formulated a series of environmental protection laws and strengthened international cooperation [1]. Back in November 2015, leaders from 20 countries announced an initiative in Paris aimed at dramatically accelerating global clean energy innovation; the initiative required participating countries to commit to doubling their state-directed investment in clean energy within five years, and the consensus also spurred government subsidies for the development of clean energy technology. As a manufacturing powerhouse, China has been working hard to promote the green transformation of its manufacturing industry and to implement the construction of a green manufacturing system. During the 13th Five-Year Plan period, China issued a series of guidelines, including corresponding subsidy policies to encourage enterprises to participate in GSC practice. According to the Ministry of Ecology and Environment (http://www.scio.gov.cn/ztk/dtzt/44689/47315/index.html), China's carbon intensity in 2020 dropped 18.8 percent from 2015, exceeding the binding target of the 13th Five-Year Plan, while nonfossil energy accounted for 15.9 percent of China's energy consumption, both exceeding the 2020 targets China had promised to the international community. In September 2020, China's top leader Xi Jinping announced China's initiative to scale up its nationally determined contributions, aiming to peak carbon dioxide emissions by 2030 and achieve carbon neutrality by 2060.
In addition to the guiding role of the government in promoting green manufacturing, the behavior of channel members and consumers' green preferences also have a significant impact on supply chain operations. Encouraged by the government's green subsidy policy, manufacturers have an incentive to curb carbon emissions and to promote the environmental attributes of their products. As the concepts of green environmental protection and healthy living become deeply rooted in people's hearts, consumers are paying more and more attention to the environmental properties of products, such as the degradability of raw materials and the recyclability of products. Rapid economic development has also raised residents' income levels, and people are more likely to choose green products over traditional ones, even though the former are more expensive than the latter. Motivated by shaping a green corporate image and expanding market size, manufacturers are more willing to produce environmentally friendly products; for example, as a leading enterprise in the car industry, Toyota has introduced a pure electric edition of the Corolla to expand its product categories and satisfy the diversified needs of consumers. As the main initiators of green innovation, manufacturers always make decisions in their own favor, relying on technological advantages. Compared with traditional brown products, the performance of green products has not been fully recognized by consumers, and the demand for green products is uncertain. Driven by consumers' green purchasing behavior, manufacturers need to balance green product market expansion against increasing green R&D costs. As retailers directly face consumers, their service efforts, such as environmental education and advertising investment, are the main means of promoting the sales of green products [2].
As the leading institution regulating the market, the government provides subsidies or levies carbon taxes in order to improve environmental governance and social welfare [3, 4]; for example, to encourage consumers to buy new energy vehicles, the Ministry of Finance of China issued a notice on financial support policies for the promotion and application of new energy vehicles from 2016 to 2020, providing subsidies to consumers who choose new energy vehicles (http://jjs.mof.gov.cn/zhengcefagui/202004/t20200423_3502975.html). However, relevant government subsidies mainly focus on green manufacturing, and there are few studies on subsidies for green service. In fact, government subsidies given only to the manufacturer often cause the retailer to perceive unfairness and weaken the retailer's enthusiasm for participating in GSC management. To research the implications of government subsidy and the retailer's fairness concerns for GSC management, four dynamic game models are established. The following problems are studied in this paper: (1) compared with the baseline scenario, what is the impact of government subsidies on product pricing, green R&D, and green service? (2) How do government subsidies affect supply chain performance and environmental governance? (3) How do the retailer's fairness concerns affect green R&D, green service levels, consumer surplus, and the effectiveness of government environmental governance? (4) Which subsidy strategy is the optimal choice in terms of economic and social benefits, and what changes take place from the perspectives of enterprises, the government, and consumers?

This article is arranged as follows. First, we review previous research in Section 2. The problem presentation and assumptions are given in Section 3. In Section 4, using game theory, we obtain equilibrium results for the four models. The decision results under the four models are compared, and the reasons for the results are analyzed, in Section 5.
Section 6 conducts numerical analysis to verify the previous conclusions. Section 7 summarizes this article and discusses future research.

Literature Review

The research in this article is primarily related to three streams of literature: (i) green R&D and green service, (ii) fairness concerns in the supply chain, and (iii) government subsidy. A summary of the research gaps and the contribution of our work is given at the end of this section.

Green R&D and Green Service. Sustainable operation has become an important research field of business management and a major branch of operations research and management [5]. Recent studies have shown that environmental sustainability should be introduced into supply chain operational decisions [6, 7]. Ma et al. [2] researched the influence of different supply chain structures on product pricing and further investigated channel coordination when demand is influenced by quality improvement and service effort. Basiri and Heydari [8] investigate the green channel coordination issue with an existing nongreen traditional product and a green product. Bhattacharyya and Sana [9] consider a production inventory system model in the green manufacturing industry, establish a profit objective function that depends on service and the stochastic demand for green technology, and analyze the optimal decision for each variable. Yang et al. [10] research cooperation and coordination in a GSC with R&D uncertainty. Similar to their research, this paper also assumes that market demand is a linear function, jointly influenced by product greenness and service effort, and analyzes the determination of price and service effort under the leadership of the manufacturer. Taleizadeh et al.
[11] study the optimal pricing and production strategy of a two-stage GSC in which demand is influenced by price, refund rate, and green quality; they discuss the optimal decision variables under cooperative and noncooperative games and finally construct a cost-sharing agreement to provide high-quality products to purchasers. Ranjan and Jha [12] discuss the pricing strategy and coordination mechanism among members of a dual-channel supply chain, in which demand is a linear function of online/offline price, green quality level, and sales effort level. Similar to the studies above, we jointly discuss the optimal equilibrium results of the manufacturer's green R&D and the retailer's green service effort, and their impact on consumer surplus and environmental governance.

Fairness Concerns in Supply Chain. Numerous behavioral economics studies have pointed out that, in addition to pursuing maximum profits, people show great concern about the fairness of income distribution [13], which is mainly driven by uneven profit allocations. To explore the impact of fairness concerns on supply chain management, many scholars have conducted research on fairness concern behavior, channel structure, contract design, and related topics. Zhou et al. [14] study the optimization of contract design in a low-carbon supply chain channel and discuss the changes in optimal decisions and behavior when the retailer's fairness concern behavior is considered. Zhang et al. [15] discuss three decision-making scenarios in which the manufacturer is the leader to explore the impact of consumer environmental awareness and the retailer's fairness concerns on green product quality and product pricing. They find that the retailer's fairness concerns do not change the environmental quality of green products, and that the retailer's power in the supply chain and its degree of fairness concerns jointly determine whether the retailer can benefit from fairness concerns. Wang et al.
[16] consider demand driven jointly by product greenness and service level and study a centralized model, a decentralized model considering the manufacturer's fairness concerns, and a decentralized model without the retailer's fairness concerns; finally, a cost-sharing joint commission contract is proposed to achieve supply chain coordination. Jin et al. [17] discuss the influence of green optimism on the GSC and find that green optimism is always bad for upstream manufacturers and may be good for downstream retailers. Zheng et al. [18] consider supply chain leaders' willingness-to-cede behaviour and combine behavioural theory with game theory to achieve sustainable cooperation in the supply chain. Zhang et al. [19] study how green retailers' fairness concerns affect the greenness and profits of supply chain members and establish three coordination mechanisms to promote cooperation among supply chain members. Zhen et al. [20] research the influence of members' fairness concern behavior on a retailer's dual-channel supply chain, and the results show that if the manufacturer has a high level of fairness concerns, the retailer should not pay attention to fairness. Sana [20] discusses the formulation of optimal price and green quality under two models, considering the influences of product substitution and corporate social responsibility, in order to maximize the profits of individual and integrated systems. Ma et al. [21] study a closed-loop supply chain with four reverse channel structures, analyze the optimal pricing decisions, marketing effort, and collection rate under different structures, and further discuss the influence mechanism of the retailer's fairness concerns on marketing effort, recovery rate, and supply chain performance. Liu et al. [22] study the impact of the retailer's fairness concerns on a three-party sustainable supply chain and reveal how the retailer's fairness concerns affect cooperation among supply chain members.
Du and Zhao [23] investigate the combined influence of fairness preference and channel preference on business strategy; if manufacturers take fairness preferences into account, they will lower wholesale prices to reduce retailers' losses, and as fairness preferences increase, manufacturers tend to establish online channels with low acceptance. Zhang et al. [24] introduce the retailer's vertical and horizontal fairness concerns and discuss their impact on online channel strategy under direct selling and platform agency modes. The above studies show that fairness concerns have a significant impact on the operational performance of supply chain management, yet few studies have considered the effect of fairness concerns on green R&D and green service effort, let alone on consumer surplus and carbon emissions. It is therefore valuable to discuss the influence of members' fairness concerns on the operation of the GSC.

Government Subsidy Policy. In GSC management, besides the influence of members' attitudes toward fairness and rising investment costs on the optimal decisions of the supply chain, we cannot ignore the important role of the government, which acts as the regulator of economic operation and environmental governance and promotes the sustainable development of the economy and society. Ma et al. [25] research decision-making under the implementation of a government subsidy and analyze the impact of consumer subsidies from the perspective of consumers. Li et al. [26] consider a two-stage supply chain consisting of a fairness-neutral retailer and a fairness-concerned retailer, and the results show that when the cost of carbon emission is high, retailers pay great attention to fairness. In addition, when retailers pay more attention to fairness, the government should reduce the carbon tax to induce the manufacturer to reduce carbon emissions. Nielsen et al.
[27] investigate the optimal green level, members' profits, consumer surplus, and environmental improvement under two green technology incentive policies and consider the impact of single-period and two-period purchasing decisions on the sustainability goal of the supply chain. Sharma et al. [28] study the role of option contracts in achieving channel coordination and discuss the fairness perceived by channel members when retailers purchase products from suppliers through option contracts. Hadi et al. [4] consider a government that uses economic incentives and penalties to manage the environmental effects of companies, and the results show that the government's environmental protection strategy has a significant impact on revenue and profit. Sana [29] discusses product pricing under corporate social responsibility and studies a newsboy inventory model from the perspective of green product marketing; by comparing green marketing with nongreen marketing and considering government subsidy and tax, the objective expectation function is established and the equilibrium solutions are obtained. Han et al. [30] study the decision-making behavior of manufacturers in an e-commerce supply chain, considering government subsidy policy and fairness concerns. Su et al. [31] discuss optimal decision-making under a government subsidy coefficient, where the government has different subsidy strategies. Zhang et al. [32] study a WEEE closed-loop supply chain in which the manufacturer can authorize the retailer to remanufacture used products. Khosroshahi et al. [33] study different government subsidy strategies in the GSC, establish an interaction model between the degree of greenness and the level of transparency set by the manufacturer, and simulate how the market responds to the manufacturer's social responsibility decision. Wang et al.
[34] discuss the fairness problem in three closed-loop supply chain models, using the proportion of government subsidies as a coordinating variable to design a joint contract of "government subsidy sharing and cost sharing." Han et al. [35] discuss the influence of carbon tax cost and consumer preference on low-carbon operations and design a revenue-sharing contract that can significantly reduce carbon emissions and improve supply chain operational efficiency. Liu et al. [36] introduce a deposit-refund policy and a minimum recycling requirement for used products, so as to solve the problem that the disposal cost of recycling is not enough to cover the recycling subsidy already paid. Kang et al. [37] show that reasonable government subsidies can enhance the effectiveness of market resource allocation and improve corporate social responsibility, while the fairness concerns of farmers and enterprises aggravate double marginalization and reduce the efficiency of the supply chain; a social responsibility cost-sharing mechanism helps achieve Pareto improvement. All the above papers emphasize the important role of government policies in GSC management, but there are few studies on the impact of government subsidies on green R&D and green service effort, especially when the retailer is concerned about the fairness of profit distribution. This paper attempts to explore the impact of the government subsidy mechanism on members' optimal decision-making. Different from previous research, this paper considers that the manufacturer engages in green R&D while the retailer provides services to the market to improve the sales of green products, and it discusses the impact of government subsidy and the retailer's fairness concerns on the operation of the GSC.
In the context of the government subsidizing the manufacturer, the retailer needs to invest in sales costs; we examine how the retailer's fairness concerns affect consumer surplus and environmental governance, and how the government should adjust its subsidy strategy to reduce the adverse impact of fairness concerns. More specifically, considering the manufacturer's green R&D, the retailer's green service effort and fairness concerns, consumers' green preferences, and government subsidy, we introduce four government subsidy strategies and formulate four models: no government subsidy without retailer's fairness concerns, subsidizing the manufacturer without retailer's fairness concerns, subsidizing the manufacturer with retailer's fairness concerns, and subsidizing all members with retailer's fairness concerns. The main innovations of this paper are as follows. First, it explores the impact mechanism of the retailer's fairness concerns on green R&D and service effort. Second, we research the effects of government subsidy on members' profits and supply chain performance. Finally, we study the impact mechanism of the carbon subsidy and the retailer's fairness concerns on service effort, consumer surplus, and environmental governance.

Problem Description and Model Assumptions

Consider a supply chain consisting of a manufacturer and a retailer, where the manufacturer invests in green R&D and produces green products, while the retailer sells the green products to the market and provides service effort. Assume that market demand is influenced by a combination of retail price, product greenness, and service effort. The government plays an important role in promoting green manufacturing; to improve the ecological environment, the government can reduce the risk for enterprises engaging in green manufacturing through green subsidies. Besides, pursuing utility maximization, a channel member often shows great concern about its share of channel profits.
As a key member, the retailer plays an important role due to its direct access to consumers. To study the operation of the GSC considering government subsidy and the retailer's fairness concerns, this paper establishes a two-stage GSC model dominated by the manufacturer, and the four GSC models are shown in Figure 1. Model NN is a benchmark model; model MN studies the situation in which the government subsidizes the manufacturer without the retailer's fairness concerns; model MY studies the situation in which the government subsidizes the manufacturer with the retailer's fairness concerns; and model MRY studies the situation in which the government subsidizes both the manufacturer and the retailer with the retailer's fairness concerns. The symbols and their meanings are given in Table 1. Market demand is assumed to be D = Q − αp + βe + γs [38][39][40]. The cost of green R&D investment is assumed to be (1/2)c_g e^2 [41], and the cost of service effort investment is assumed to be (1/2)c_s s^2 [42]. The unit cost of a green product is c [43]. The consumer surplus CS is defined as in [44,45]. Assume that the total carbon emission reduction after green R&D investment is eD [46]. Environmental improvement after green R&D investment grows linearly with the carbon emission reduction, EI = c_e eD [32,39]; without loss of generality, c_e = 1 [47]. Unlike the assumptions of other scholars, we assume that the government distributes the governance cost saving from the reduction of carbon emissions between the manufacturer and the retailer, and the government carbon subsidy expenditure can be expressed as GS = (θ + δ)eD, with 0 < θ < 1, 0 < δ < 1, 0 < θ + δ < 1. Social welfare consists of the profits of the supply chain members, consumer surplus, and environmental improvement, minus government subsidies, and can be expressed as SW = π_m + π_r + CS + EI − GS [39,46,48,49].
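The demand, cost, and welfare definitions above can be collected into a small sketch. A few points are assumptions rather than quotations from the paper: the service-sensitivity coefficient is written γ, the consumer-surplus form CS = D^2/(2α) is a standard linear-demand choice (the paper gives its expression only by reference), and the subsidy split θeD to the manufacturer and δeD to the retailer is our reading of GS = (θ + δ)eD.

```python
# Sketch of the model primitives (plain functions, no optimization).
# Parameter defaults follow the numerical section: Q=50, alpha=4,
# beta=2, gamma=1, c=5, c_g=2, c_s=2.  CS = D^2/(2*alpha) is an
# assumed standard linear-demand form, not quoted from the paper.

def demand(p, e, s, Q=50, alpha=4, beta=2, gamma=1):
    """Market demand D = Q - alpha*p + beta*e + gamma*s."""
    return Q - alpha * p + beta * e + gamma * s

def profit_m(w, p, e, s, c=5, c_g=2, theta=0.0):
    """Manufacturer: wholesale margin plus subsidy theta*e*D, minus R&D cost."""
    D = demand(p, e, s)
    return (w - c) * D + theta * e * D - 0.5 * c_g * e ** 2

def profit_r(w, p, e, s, c_s=2, delta=0.0):
    """Retailer: retail margin plus subsidy delta*e*D, minus service cost."""
    D = demand(p, e, s)
    return (p - w) * D + delta * e * D - 0.5 * c_s * s ** 2

def social_welfare(w, p, e, s, alpha=4, theta=0.0, delta=0.0):
    """SW = pi_m + pi_r + CS + EI - GS, with EI = e*D (c_e = 1)."""
    D = demand(p, e, s)
    CS = D ** 2 / (2 * alpha)        # assumed consumer-surplus form
    EI = e * D                       # environmental improvement
    GS = (theta + delta) * e * D     # government subsidy expenditure
    return (profit_m(w, p, e, s, theta=theta)
            + profit_r(w, p, e, s, delta=delta) + CS + EI - GS)
```

With no subsidy (θ = δ = 0), for example, the point (w, p, e, s) = (8, 10, 1, 1) gives D = 13 and SW = 38 + 25 + 21.125 + 13 = 97.125.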
Equilibrium Analysis

In this section, we use backward induction to derive the optimal decisions in the two-stage GSC composed of a manufacturer and a retailer and explore the effects of government subsidy and the retailer's fairness concerns on the operation of the GSC. Furthermore, we compare the whole supply chain profit, consumer surplus, environmental improvement, and social welfare across the different models. To simplify the formulas, we introduce the auxiliary notations ϕ1, ϕ2, and ϕ3.

Model NN: No Government Subsidy without Retailer's Fairness Concerns. In order to facilitate comparison with the optimal decision results of the other models, we first establish a benchmark model, that is, the case of no government subsidy without retailer's fairness concerns. In this case, the manufacturer and the retailer play a two-stage Stackelberg game, each aiming to maximize its own profit. In the first stage, the manufacturer determines the wholesale price and the greenness of its products; in the second stage, the retailer determines the retail price and the green service level.

Proposition 1. If 0 < ϕ3 < √(2c_g ϕ2/c_s), the optimal decision variables of model NN have the following values. The proof can be found in Appendix 1.

Model MN: Subsidy to the Manufacturer without Retailer's Fairness Concerns. To investigate the impact of a government subsidy to the manufacturer on the operation of the GSC, we suppose that the unit subsidy for green R&D is a fixed constant and analyze the variation of product pricing, greenness, and green service level with the unit subsidy θ. At this point, the objective functions of the manufacturer and the retailer are as follows:

Proposition 2. If 0 < ϕ3 < √(2c_g ϕ2/c_s), the optimal decision variables of model MN have the following values, with D^{MN*} = αc_g c_s ϕ1/(2c_g ϕ2 − c_s ϕ3^2). The proof can be found in Appendix 2.
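As an illustration of the backward-induction procedure for model NN, the sketch below solves the two-stage game with SymPy for the parameter values used later in the numerical section (Q = 50, α = 4, β = 2, c = 5, c_g = c_s = 2, and an assumed service coefficient γ = 1). The paper's closed-form expressions are not reproduced here, so the printed values are only a numerical check under the linear-demand and quadratic-cost assumptions stated above.

```python
import sympy as sp

w, e, p, s = sp.symbols('w e p s')
Q, alpha, beta, gamma, c, c_g, c_s = 50, 4, 2, 1, 5, 2, 2

D = Q - alpha*p + beta*e + gamma*s
pi_m = (w - c)*D - sp.Rational(1, 2)*c_g*e**2   # manufacturer profit
pi_r = (p - w)*D - sp.Rational(1, 2)*c_s*s**2   # retailer profit

# Stage 2: the retailer chooses retail price p and service level s given (w, e).
stage2 = sp.solve([sp.diff(pi_r, p), sp.diff(pi_r, s)], [p, s], dict=True)[0]

# Stage 1: the manufacturer anticipates the retailer's response and chooses (w, e).
pi_m_reduced = pi_m.subs(stage2)
stage1 = sp.solve([sp.diff(pi_m_reduced, w), sp.diff(pi_m_reduced, e)],
                  [w, e], dict=True)[0]

w_star = float(stage1[w])
e_star = float(stage1[e])
p_star = float(stage2[p].subs(stage1))
s_star = float(stage2[s].subs(stage1))
print(f"w*={w_star:.3f}, e*={e_star:.3f}, p*={p_star:.3f}, s*={s_star:.3f}")
# -> w*=9.327, e*=2.308, p*=11.635, s*=1.154
```

Both stages reduce to linear first-order conditions here, so the solution is unique; the condition in the proposition plays the role of the second-order (concavity) requirement.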
Comparative Analysis of Equilibrium Results

To analyze the effect of the government subsidy policy and the retailer's fairness concerns on the operation of the different GSC models, this part compares the optimal decision variables and objective functions. In order to make the comparison more intuitive and to better understand how the variables change with each parameter, we carry out a simulation analysis in Section 6. The proof can be found in Appendix 5.

Proposition 5 (1) shows that the government subsidy policy can effectively encourage the manufacturer to engage in green R&D, while the retailer's fairness-concern behavior weakens this positive effect of the government green subsidy. Because a subsidy to the manufacturer alone overlooks the retailer's important role in promoting the sales of green products, it creates a sense of unfairness for the retailer; to improve its share of the profit distribution, the retailer raises the retail price of green products so as to increase its expected utility. The higher price reduces the demand for green products and discourages the manufacturer's enthusiasm for green R&D. Subsidizing both the manufacturer and the retailer according to the greenness of products can effectively alleviate the retailer's sense of unfairness and lower the retail price, and as the green product market expands, the manufacturer is willing to invest more in green manufacturing. Proposition 5 (2)-(3) shows that the government subsidy policy also has a positive impact on encouraging the retailer to engage in green service. Compared with the situation in which the government subsidizes the manufacturer without the retailer's fairness concerns, subsidizing only the manufacturer causes the retailer to pay attention to the fairness of profit distribution.
To offset the adverse effects of uncertain demand for green products and the increase in service cost, and so improve its share of the profit distribution, the retailer tends to reduce the green service level, thus reducing the market demand for green products, which in turn discourages the manufacturer's enthusiasm for green R&D. When the government subsidizes the manufacturer and the retailer is concerned about fairness, it is essential to provide a certain subsidy to the retailer according to the greenness of products, so as to enhance the fairness of profit distribution in the supply chain. The proof can be found in Appendix 6.

Proposition 6 (1)-(2) shows that, with the incentive of the government's green subsidy, the manufacturer is better able to withstand the risks of uncertain green market demand and large green R&D costs, and it can better satisfy consumers' green preferences, as consumers are willing to pay higher prices for green products. However, the fairness concerns of the retailer intensify the competition among channel members, and in order to improve its share of the profit distribution, the retailer tends to raise the retail price and reduce the green service effort. As a result, consumers are unable to fully understand the performance of green products and must pay a higher price for them, which shrinks the green market and reduces the manufacturer's profit. To relieve the competitive pressure among channel members, the government could subsidize the retailer for its contribution to product promotion, as long as the government subsidies to the manufacturer and the retailer do not outweigh the saving in governance costs resulting from environmental improvements. Benefiting from government subsidies to the manufacturer for green R&D, the retailer can order green products from the manufacturer at a lower wholesale price, and the product promotion strategy is more effective.
When the government subsidizes the retailer according to the greenness of products, it strengthens the retailer's resistance to uncertain market demand and high green service costs, alleviates the competition among channel members, and brings the retailer a higher profit. Proposition 6 (3) shows that, from the perspective of the whole supply chain, the relationship of supply chain profit across the different models is consistent with that of the manufacturer's profit. Due to the manufacturer's dominant position in GSC management, the manufacturer can even set up internal contracts to promote fairness in profit distribution. Government subsidies have improved the green quality of products, and consumers' green preferences are satisfied by green products at a lower price. The retailer's fairness concerns, however, lead to higher prices for green products, so consumers' green preferences cannot be fully satisfied and the consumer surplus falls. Proposition 7 (2)-(4) shows that the fairness concerns of the retailer aggravate the competition between upstream and downstream members, and as the conflict intensifies, the effect of the government subsidy policy is reduced. Through government subsidy, the conflict between the manufacturer and the retailer is alleviated, and the fairness of profit distribution is promoted. The levels of consumer surplus, environmental improvement, and social welfare are then higher than when only the manufacturer is subsidized. When the retailer has no fairness concerns, the government expenditure is higher than when the retailer has fairness concerns. When the retailer has fairness concerns, the government expenditure under subsidies to both the manufacturer and the retailer is higher than under subsidies to the manufacturer only.
Since the manufacturer's level of green R&D differs across carbon subsidy scenarios, the corresponding government subsidy strategies should be adjusted accordingly, provided, of course, that the total cost of government subsidies does not exceed the value of the environmental improvement.

Numerical Analysis

In this section, we perform numerical examples to illustrate the value of the two subsidy strategies and the influence of the retailer's fairness-concern coefficient on GSC performance; the numerical analysis further verifies the previous conclusions and brings additional managerial insights for enterprise managers. According to the previous constraints, we set the parameter values and ranges as Q = 50, α = 4, c = 5, δ = 0.2, c_s = 2, c_g = 2, β = 2, γ = 1, λ = 0:0.5:10, and θ = 0:0.05:0.8.

6.1. The Variation of Equilibrium Results with the Coefficient of Fairness Concerns λ. Figure 2 shows that, as the retailer's fairness concerns rise, the wholesale price, retail price, product greenness, and service effort all decrease. This means that the manufacturer and the retailer lower product pricing as the retailer's fairness concerns strengthen, but the manufacturer's marginal profit decreases, which leads to a reduction in green investment and hence in product greenness. To improve its share of the profit distribution, the retailer also reduces the level of green service. Figure 3 shows that the manufacturer's profit, the supply chain's profit, and social welfare are decreasing functions of the retailer's fairness-concern coefficient, while the retailer's profit is an increasing function of it. There is a trade-off between the manufacturer's profit and the retailer's profit, and subsidizing both the manufacturer and the retailer allows the retailer and the supply chain to obtain the highest profit.
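The qualitative pattern reported for Figure 2 can be reproduced with a small experiment. The sketch below assumes a Fehr-Schmidt-style disadvantage-averse utility U_r = π_r − λ(π_m − π_r) for the retailer and no subsidy (θ = δ = 0); the paper's own fairness formulation may differ, so this is an illustration of the mechanism rather than the exact model.

```python
import sympy as sp

def equilibrium(lam):
    """Two-stage Stackelberg equilibrium when the retailer maximizes the
    assumed fairness utility U_r = pi_r - lam*(pi_m - pi_r); theta = delta = 0."""
    w, e, p, s = sp.symbols('w e p s')
    Q, alpha, beta, gamma, c, c_g, c_s = 50, 4, 2, 1, 5, 2, 2
    D = Q - alpha*p + beta*e + gamma*s
    pi_m = (w - c)*D - sp.Rational(1, 2)*c_g*e**2
    pi_r = (p - w)*D - sp.Rational(1, 2)*c_s*s**2
    U_r = pi_r - lam*(pi_m - pi_r)
    # Stage 2: retailer's first-order conditions in (p, s) given (w, e).
    stage2 = sp.solve([sp.diff(U_r, p), sp.diff(U_r, s)], [p, s], dict=True)[0]
    # Stage 1: manufacturer's first-order conditions in (w, e).
    stage1 = sp.solve([sp.diff(pi_m.subs(stage2), w),
                       sp.diff(pi_m.subs(stage2), e)], [w, e], dict=True)[0]
    w_, e_ = float(stage1[w]), float(stage1[e])
    p_ = float(stage2[p].subs(stage1))
    s_ = float(stage2[s].subs(stage1))
    return w_, p_, e_, s_

for lam in (0, 1, 2):
    w_, p_, e_, s_ = equilibrium(lam)
    print(f"lam={lam}: w={w_:.3f}, p={p_:.3f}, e={e_:.3f}, s={s_:.3f}")
```

Under these assumptions all four quantities (wholesale price, retail price, greenness, and service effort) fall as λ grows, consistent with the trend described for Figure 2.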
When the retailer's fairness concerns fall below a certain threshold, the manufacturer's profit and total social welfare are the highest among the four models when the government subsidizes both the manufacturer and the retailer; at this point, the positive effect of government subsidies outweighs the negative effect of the retailer's fairness concerns, which benefits supply chain members, environmental governance, and social welfare.

6.2. The Variation of Equilibrium Results with the Unit Government Subsidy θ. It can be seen from Figures 4 and 5 that the greenness degree, service effort, members' profits, and social welfare are increasing functions of the government subsidy. As the government subsidy grows, the upward trend of greenness, retailer service effort, members' profits, and social welfare strengthens, which indicates that government subsidies stimulate the enthusiasm of supply chain members to participate in green production. Green production not only meets consumers' green preferences but also alleviates the contradiction of unfair profit distribution among channel members and further reduces the social production cost.

Managerial Insights

In an era of rapidly evolving information and technology, residents' income levels and living standards have greatly improved, which places a heavy burden on the environment. To realize the harmonious development of humanity and nature, we must carefully handle the relationship between economic activities and environmental governance. Reducing carbon emissions from economic activities is inseparable from the participation of enterprise management, consumer engagement, and government macro-control.
To comprehensively describe this tripartite game behavior, we consider that consumer demand is influenced by both product greenness and service effort and, by introducing government subsidies and the retailer's fairness concerns, explore the interaction between government policy and retailer behavior in implementing green manufacturing and reducing carbon emissions. Most studies have emphasized the important role of manufacturers in green technology investment and of retailers in green promotion, and they discuss pricing and decision-making behavior using game theory so as to provide experience for GSC management; however, fewer studies focus on the retailer's fairness concerns caused by government subsidies. In fact, retailers play an increasingly important role in supply chain operations due to their green product education and promotion. Green manufacturing enterprises implementing green R&D help reduce carbon emissions and the cost of environmental governance. How to distribute the environmental benefits reasonably among manufacturers, retailers, and consumers therefore becomes extremely meaningful. This paper considers government subsidies to retailers based on greenness and sales volume, which helps eliminate the negative effects of the retailer's fairness concerns and increase social welfare.

Conclusions

Based on a GSC composed of a manufacturer (M) and a retailer (R), from the perspective of greenness and service effort, this paper considers no government subsidy, government subsidy to the manufacturer without and with the retailer's fairness concerns, and government subsidy to both the manufacturer and the retailer. In the GSC, M engages in green R&D, produces green products, and wholesales them to R; R sells the green products to consumers and provides them with green service effort. This paper analyzes the impact of government subsidies and the retailer's fairness concerns on consumers, the environment, and society.
The results show that (1) government subsidies to the manufacturer can improve the overall profit of the supply chain and enhance the levels of consumer surplus and environmental improvement; (2) government subsidies to the manufacturer alone are not conducive to the fairness of profit distribution in the supply chain: the retailer's fairness concerns weaken the role of government subsidies, and the demand for green products, the level of environmental improvement, and social welfare all decrease; (3) subsidizing the manufacturer and the retailer at the same time can reduce the adverse effects of the retailer's fairness concerns and promote the effectiveness of environmental governance. The managerial implications of this study are as follows: (1) from the manufacturer's perspective, as the channel leader, it should strengthen cooperation among members to enhance GSC performance; (2) from the retailer's perspective, the retailer can leverage fairness concerns to obtain more profit but should restrain such behavior when products have high green efficiency; (3) from the GSC perspective, improving the green efficiency of products is an important means of promoting GSC performance, while the retailer's fairness concerns reduce product greenness, which is an obstacle to green product market expansion and is not conducive to the development of the GSC; (4) from the government's perspective, the government should subsidize both the manufacturer and the retailer to ease competition among members. This paper only discusses the single-channel case and does not consider competition between channels. In the future, the model will be extended to multiple manufacturers and multiple retailers, so as to compare the role of member alliances in eliminating the influence of fairness concerns under various circumstances. In addition, in actual production, manufacturers often have fairness concerns as well.
In future work, the fairness concerns of both the manufacturer and the retailer should be considered to study their impact on the operation of a multichannel GSC, together with whether government subsidies can coordinate these effects on the supply chain.
Removing endobronchial needle-like foreign bodies in two school-age children

Abstract Background: Endobronchial foreign bodies (EFBs) are rare in children over the age of three. Case presentation: Two school-age children had EFBs due to accidental inhalation of metal-containing foreign bodies held in the mouth. In case 1, CT showed a needle-like foreign body at the entrance of the right upper lobe bronchus, and in case 2, it was found in the posterior basal segment of the right lower lobe. The EFB in case 1 was successfully removed by rigid bronchoscopy. In case 2, the EFB was not accessible via fiberoptic bronchoscopy, and the foreign body was accidentally pushed into the right main bronchus during the thoracotomy for foreign body removal; however, it was later removed by rigid bronchoscopy. Conclusion: In cases of special types of bronchial foreign bodies, the surgical approach should be selected based on the features of the foreign body to minimize patient injury as much as possible.

Introduction Endobronchial foreign body (EFB) is an acute and critical condition in otolaryngology, most commonly affecting children aged 1-3, and the majority of foreign bodies are of plant origin. The standard procedure for removing foreign bodies is through rigid or fiberoptic bronchoscopy [1,2]. Aspiration of foreign bodies into the trachea is also common in children over 3 years old. However, the types of foreign bodies at this age are not limited to plant-based materials; thus, selecting the appropriate surgical method can be challenging. For non-plant EFBs, it is often necessary to remove the foreign bodies through tracheotomy or thoracotomy. Here, we report 2 cases of needle-type EFBs that presented difficulties during conventional bronchoscopic foreign body removal. Guided by preoperative CT, we were able to remove the foreign bodies with minimal injury, and the children recovered quickly during subsequent treatment.
Case reports Case 1 was a nine-year-old boy who was initially admitted due to a persistent cough that lasted over half a month and a fever persisting for four days. The child denied any history of foreign body aspiration. Physical examination revealed a temperature of 38.9 °C, diminished breath sounds, and rales in the right upper lung region. Right upper lobe pneumonia and atelectasis were seen on a chest CT, due to a needle-like metallic foreign body lodged in the right main bronchus and upper lobe bronchus (Figure 1). The CT scan confirmed that the foreign body was located in the right upper lobe bronchus, a position rarely encountered in clinical practice. Despite the child's denial of any history of foreign body aspiration, the diagnosis of an endobronchial foreign body was considered based on the patient's symptoms and CT findings. We decided to perform a rigid bronchoscopy examination and foreign body removal under general anesthesia (Figure 2). Following standard anesthesia for patients with tracheal foreign bodies (while maintaining spontaneous breathing), a rigid bronchoscope was inserted, and 100% oxygen was administered at 2 L/min through the side port. The anesthetist controlled the patient's breathing with the assistance of a manual resuscitator. The tip of a needle-like foreign body was found within the right upper lobe bronchus, accompanied by granulation tissue surrounding the bronchial wall. Since the foreign body was located in the bronchus of the right upper lobe, the rigid bronchoscope could only explore the opening of the right upper lobe bronchus and could not provide a clear view of the entire foreign body. During the removal, the foreign body could not pass through the lumen of the rigid bronchoscope, indicating that it was more complex than a simple needle-like foreign body and that the distal end might have had an enlarged, non-radiopaque portion. The foreign body was securely grasped by the bronchial forceps, and both the bronchial forceps along
with the foreign body were simultaneously withdrawn together with the rigid bronchoscope. When the foreign body reached the glottis, resistance was encountered again. To prevent injury to the glottis from the enlarged distal end of the foreign body, succinylcholine was administered to relax the laryngeal muscles, allowing the foreign object to pass through the glottis with less resistance. A face mask was used to provide oxygen after the foreign object had passed through the glottis. The surgical procedure proceeded smoothly, with minimal blood loss of approximately 1 ml, which was controlled after rinsing with 1:10000 epinephrine saline. There was no significant decrease in blood oxygen saturation during the operation. Postoperatively, the patient was returned to the ward and administered intravenous antibiotics (cefathiamidine 150 mg BID based on his weight of 30 kg) for anti-inflammatory treatment, as well as steroids (hydrocortisone 150 mg IV QD and Pulmicort 1 mg nebulized BID) to alleviate edema. A radiograph three days after surgery showed no residual foreign bodies or complications such as pneumothorax, and the child was discharged. Case 2 was a 10-year-old boy who was admitted to the hospital after accidentally inhaling a metal needle-like foreign body three days prior. A chest CT scan revealed an EFB in the basal segment of the right lower lobe (Figure 3). The physical examination showed a temperature of 36.4 °C and clear breath sounds in both lungs, with no rales or wheezing. CT imaging indicated that the EFB was far from the oropharynx and unlikely to be reachable with a conventional rigid bronchoscope, so fiberoptic bronchoscopy with a 3.0 mm diameter scope was performed first. However, the foreign body was not visible at any bronchial entrance during the fiberoptic bronchoscopy.
After failing to change the foreign body's location by patting the back, inversion, and magnet attraction, a thoracotomy was performed. Following general anesthesia and intubation, a preoperative chest radiograph was obtained, which showed the foreign body's location near the T9-10 vertebrae on the right side. Initial thoracoscopic exploration was conducted, but the foreign body was not observed under thoracoscopy. An intraoperative chest radiograph revealed that the foreign body had moved to the T5-6 level. Consequently, an incision was made between the 5th and 6th ribs. The surgeon palpated the area where the foreign body was located on the chest radiograph, but it was still inaccessible. A further intraoperative chest radiograph revealed that the foreign body had moved to the right main bronchus. The endotracheal tube was withdrawn, and an otolaryngologist inserted a rigid bronchoscope through the mouth. With the assistance of an anesthesiologist using a manual resuscitator, the otolaryngologist successfully extracted the foreign body (Figure 4). The patient had no postoperative complications, including pneumothorax, hoarseness, or laryngeal stridor. Based on his weight of 37 kg, he received postoperative intravenous anti-infection treatment (cefuroxime 750 mg TID for 7 days). The child made a full recovery and was discharged without complications.

Discussion Preoperative CT scans in both cases revealed a needle-shaped high-density object within the bronchus, and the EFBs were ultimately removed. Both children made a complete recovery and were discharged. The locations of the foreign bodies in the two cases were uncommon, and the clinical features were distinct in each case. Notably, the two surgical procedures differed significantly, necessitating further discussion.
The inflammation caused by endobronchial foreign bodies (EFBs) is related to the duration of the foreign body's presence in the airway and to the type of foreign body. The longer the foreign body is present, the more severe the bronchial inflammation [3,4]. However, simple steel foreign bodies cause less bronchial inflammation. In Case 1, any history of inhaling a foreign body was denied preoperatively, so the surgeon was unaware of the specific shape of the EFB. A preoperative CT scan showed atelectasis of the right upper lobe due to long-term obstruction of the bronchus by the foreign body [5]. Therefore, we speculated preoperatively that the foreign body in Case 1 was not a simple steel foreign body and might have a non-radiopaque enlarged part on its surface or at one end. The location of the foreign body in the bronchus in Case 1 was uncommon. When a foreign body enters the lower airway, it typically follows a more vertical, gravity-driven path. As a result, it is more commonly lodged in the right main bronchus, which is relatively straighter, wider, shorter, and closer to the trachea than the left main bronchus. Bernoulli's effect explains the rarer cases of a foreign body lodging in the left bronchus: because the diameter of the left main bronchus is smaller than that of the right, more negative suction pressure occurs during coughing, laughing, or speaking, leading to aspiration of the foreign body to the left side [6]. Therefore, it is extremely rare for a foreign body to enter the right upper lobe bronchus [7].
Postoperative analysis showed that the location of this foreign body was related to its structure. The foreign body was made of plastic on one end and steel on the other, and the densities of the two ends were quite different. The plastic end had a unique shape, with a narrow tip that gradually increased in diameter (as shown in Figure 2). During its descent after entering the trachea, the plastic end lodged at the entrance of the right upper lobe bronchus due to its shape. The EFB in Case 2 was notably different from the EFB in Case 1. This EFB was an elongated, simple steel foreign body that did not irritate the bronchial wall, so conservative observation was considered a viable option. However, no reports of long-term conservative observation for such foreign bodies were found in the literature, and current guidelines for EFB treatment still recommend removal upon initial discovery [7]. Moreover, the child's young age increases the risk of complications such as migration, granulation tissue growth, and mechanical issues, which would make the surgery more difficult and potentially harmful to the child. After failing to move the foreign body using techniques such as patting the back, inversion, and magnet attraction, a thoracotomy was performed. However, it was discovered during the thoracotomy that the foreign body had moved to the right main bronchus. We infer that, as the thoracic forceps and the surgeon's hands palpated and turned the lung tissue in various directions while searching for the foreign body, the foreign body was propelled by the expelled gas along the airflow into the right main bronchus, in a manner somewhat similar to the Heimlich maneuver. Ultimately, the foreign body was successfully removed using a rigid bronchoscope, avoiding more significant damage, such as lung tissue resection.
Conclusion We present two cases of endobronchial foreign bodies (EFBs) in school-aged children, as shown in preoperative CT scans. We developed a minimally invasive surgical approach for both cases based on the characteristics of the EFBs and their locations. In Case 1, the unusual location of the foreign body within the airway was related to its shape and texture. Additionally, the foreign body's appearance on CT scans did not accurately reflect its actual shape. Therefore, it is essential to assess the type and shape of the foreign body comprehensively, considering its location and the severity of inflammation. During surgery, the margin and shape of the foreign body should be carefully identified under rigid bronchoscopy. The needle-shaped foreign body in Case 2 was unreachable with fiberoptic bronchoscopy and required thoracotomy intervention. We suggest that during thoracotomy, a fiberoptic bronchoscope inserted through the endotracheal tube could monitor real-time changes in the foreign body's position as the lung tissue is palpated and rotated in various directions. This manipulation of the lung tissue may cause the foreign body to shift its position. If the foreign body is seen moving towards the desired path while the lung tissue is palpated and rotated, those movements can be repeated to guide the foreign body to a location reachable by bronchoscopy, thereby avoiding the need for lung tissue resection.

Figure 1. A needle-like metallic foreign body in the right main bronchus and right upper lobe bronchus.
Figure 2. The removed foreign body from Case 1.
Figure 3. (a, b) Chest anteroposterior and lateral radiographs showing a needle-shaped metallic foreign body aligned with the bronchial path; (c, d) transverse sections revealing a short rod-shaped dense shadow; (e, f) tracheobronchial reconstruction images showing a slender rod-shaped metallic dense shadow within the right lower lobe basal segmental bronchus.
Figure 4. The extracted foreign body from Case 2.
Subthreshold Micropulse Laser for Diabetic Macular Edema: A Review Diabetic macular edema (DME) is one of the main causes of visual impairment in patients of working age. DME occurs in 4% of patients at all stages of diabetic retinopathy. Using a subthreshold micropulse laser is an alternative or adjuvant treatment of DME. Micropulse technology demonstrates a high safety profile by selectively targeting the retinal pigment epithelium. There are no standardized protocols for micropulse treatment, however, a 577 nm laser application over the entire macula using a 200 μm retinal spot, 200 ms pulse duration, 400 mW power, and 5% duty cycle is a cost-effective, noninvasive, and safe therapy in mild and moderate macular edemas with retinal thickness below 400 μm. Micropulse lasers, as an addition to the current gold-standard treatment for DME, i.e., anti-vascular endothelial growth factor (anti-VEGF), stabilize the anatomic and functional retinal parameters 3 months after the procedure and reduce the number of required injections per year. This paper discusses the published literature on the safety and application of subthreshold micropulse lasers in DME and compares them with intravitreal anti-VEGF or steroid therapies and conventional grid laser photocoagulation. Only English peer-reviewed articles reporting research within the years 2010–2022 were included. Introduction Diabetes mellitus (DM) has become a civilization disease associated with a sedentary lifestyle and the aging of the population in the contemporary world. It is estimated that DM affects around 10% of the global population [1]. The prevalence of diabetes is increasing rapidly, and the World Health Organization (WHO) has recognized diabetes as a noncommunicable disease which is causing an epidemic in the 21st century. An insufficiently controlled and long-term disease is associated with a high risk of multiorgan complications, including those involving eyes. 
One of the main retinal complications is diabetic macular edema (DME), which leads to gradual visual impairment, especially at working age. DME occurs in 4% of patients diagnosed with DM, even at the early stage of diabetic retinopathy. The estimated number of adults worldwide with clinically significant DME in 2020 was 18.8 million, and this is projected to increase by half by 2045 [2]. According to the current international guidelines for the management of DME by the European Society of Retina Specialists (EURETINA), intravitreal anti-vascular endothelial growth factor (anti-VEGF) was established as first-line therapy in DME with visual impairment [3]. After publication of the results of the DRCR.net Protocol I and Protocol S studies, laser therapy was regarded as inferior to anti-VEGF treatment [4]. The availability of anti-VEGF injections has changed the standard of care for DME patients [5]. These agents improve both the functional and anatomical parameters of the retina. Currently, different anti-VEGF agents such as ranibizumab, aflibercept, brolucizumab, faricimab, and off-label bevacizumab have become the therapy of choice in DME treatment [6]. Intravitreal injections have a high efficacy and safety profile.
Micropulse treatment was initially performed with a 810 nm laser modality, whose wavelength penetrates deeply into the retina and is not absorbed by the macular carotenoids. The 577 nm wavelength targets oxyhemoglobin and melanin, and it is not absorbed by xanthophyll in the neurosensory retina. Commercially available devices can deliver conventional and micropulse shots at 577 nm, which enables the combination therapy of the grid micropulse laser and the direct photocoagulation of microaneurysms [18]. The 670 nm laser is less scattered and not absorbed by hemoglobin or xanthophyll; thus, it seems to be safe for the neurosensory retina [19]. There is no consensus on which wavelength is the most favorable for the treatment of DME; all the above-described devices have a high safety profile and are recommended for micropulse use.
Currently, the specific indications for the application of MPLT have not been established. It is considered an alternative treatment in macular disorders such as DME (Figure 1), central serous chorioretinopathy, and macular edemas that are secondary to retinal vein occlusion [20][21][22][23]. MPLT was proven to be efficient and free from adverse events in mild and moderate macular edemas with a central retinal thickness (CRT) below 400 µm and relatively good visual acuity [24]. As an adjuvant to anti-VEGF agents, it helps to stabilize the anatomic and functional retinal parameters with a lower required number of injections.
Materials and Methods The present paper reviews all the relevant literature on DME treatment with a subthreshold micropulse laser. The PubMed database and Mendeley were used as sources of studies from the years 2010-2022. Only peer-reviewed articles published in English reporting research were included. Relevant studies were identified using the following terms in combination with Boolean operators: subthreshold laser, micropulse laser, diabetic macular edema, clinically significant macular edema, anti-VEGF, intravitreal steroid, vitrectomy, conventional photocoagulation, ETDRS photocoagulation, continuous-wave photocoagulation, combined therapy, and safety. Subsequently, a manual search of the reference lists in the retrieved manuscripts was performed. Studies discussing the use of a micropulse transscleral laser for the treatment of glaucoma were excluded. A total of 68 full-text articles on MPLT were assessed for eligibility and divided into four sections covering safety, efficacy, and comparisons with conventional laser and intravitreal therapies (Table 1).
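The search strategy above combines free-text terms with Boolean operators. As an illustration only, since the authors do not publish their exact query string, a PubMed-style query could be assembled from those terms roughly as follows; the grouping into concept blocks and the date-range syntax are assumptions:

```python
# Sketch only: composing a PubMed-style Boolean query from the search terms
# listed in Materials and Methods. The grouping into concept blocks and the
# date-range syntax are illustrative assumptions, not the authors' exact query.
intervention = ["subthreshold laser", "micropulse laser"]
condition = ["diabetic macular edema", "clinically significant macular edema"]
comparators = [
    "anti-VEGF", "intravitreal steroid", "vitrectomy",
    "conventional photocoagulation", "ETDRS photocoagulation",
    "continuous-wave photocoagulation", "combined therapy", "safety",
]

def or_block(terms):
    """Quote each term and join with OR inside one parenthesized block."""
    return "(" + " OR ".join(f'"{t}"' for t in terms) + ")"

# AND the three concept blocks, then restrict to the review window 2010-2022.
query = " AND ".join(or_block(b) for b in (intervention, condition, comparators))
query += ' AND ("2010"[dp] : "2022"[dp])'
print(query)
```

A query of this shape would still need manual screening of the hits, as the authors describe (exclusion of transscleral glaucoma studies, reference-list hand search).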
Safety of MPLT A high safety profile of MPLT was reported in in vivo and in vitro studies (Table 2). Potential damage was assessed using mathematical models, investigated using animal and stem cell cultures, and measured in imaging tests such as infrared (IR) and red-free fundus photos, optical coherence tomography (OCT), fundus autofluorescence (FAF), microperimetry, fluorescein angiography (FA), and indocyanine green angiography (ICGA). Ohkoshi et al. [25] detected sites of application of the micropulse laser in scanning laser ophthalmoscopy in the retro mode. Dark spots were visible immediately after photostimulation, but they were not identified in FAF or in the fundus photos. After 1 week, the alterations were no longer observed. This study implied that MPLT affects the RPE cells and can cause localized swelling of the treated region. Luttrull et al. [26] assessed the risk of laser-induced retinal thermal injury by comparing computer modeling of tissue temperature after MPLT with clinical findings in imaging tests such as IR and red-free fundus photography, FAF, FA, and OCT. According to the study, an increased risk of retinal damage was related to higher retinal irradiance, and it was found in none of the patients treated with MPLT at a 5% duty cycle. Wells-Gray et al. [28] confirmed structural damage after MPLT by measuring the integrity of cone photoreceptors using advanced adaptive optics imaging. Midena et al., in their studies, pointed to the influence of MPLT on retinal biomarker levels in aqueous humor [29,30,32]. A strong correlation in protein concentration between the aqueous and vitreous humor had previously been proven [37]; therefore, the more easily accessible anterior chamber fluid was used for the samples.
The authors measured the concentration of biomarkers of RPE, Müller cells, and a panel of inflammatory molecules in eyes with DME before and after the MPLT treatment and compared the values with those of healthy control groups. The results of their papers were consistent, and they found the effect of MPLT on the expression of aqueous humor markers to be statistically significant. The decrease in proinflammatory proteins and the VEGF level suggested that MPLT may deactivate the retinal microglia and reduce diabetes-induced inflammation. Moreover, a significant decrease in bioindicators of Müller cell activation implied that MPLT induced positive retinal metabolic and morphological alterations. Vujosevic et al. [27] showed that both 577 nm and 810 nm micropulse lasers in a "high-density" pattern with 5% DC were safe and efficient in mild DMEs. No retinal damage was detected during any clinical imaging examination. They suggested that MPLT with the lowest DC and without titration could be a repeatable and simple treatment for patients. In reference to this study, Chang et al. [31] used the same micropulse laser parameters to assess the kinetics of RPE heat-shock protein (HSP) activation. HSPs are a group of proteins produced in response to cell exposure to stress and during tissue remodeling. This report showed that both lasers were equally efficient, but a higher predictability and wider safety margin resulted from the use of the 810 nm one. The upregulation of the HSP 70 family was confirmed in the study led by Shiraya et al. [33] on irradiated human RPE stem-cell cultures, which suggested that MPLT could be more beneficial for light perception, photoreceptor protection, and maintenance than a conventional laser. In agreement with the results of the HSP observations, De Cilla and colleagues [35] proved that MPLT not only reduced oxidative stress and markers of apoptosis, but also increased autophagy in mouse retinal cells.
This study proved that the oxidant-antioxidant balance shifted in favor of the antioxidant system with an increasing number of treatments and with a younger age. Moreover, no laser effect was shown in fellow untreated eyes. Yu et al. [34] conducted a study on tissue sections of enucleated rabbit eyes. In the experiment, the right eyes were treated using an 810 nm micropulse laser, and the left eyes were treated using a 532 nm micropulse laser with 5%, 10%, 20%, and 40% DC. The samples were analyzed for protein marker expression and morphological changes in the retinal tissues. The histologic effects and protein regulation induced by the two lasers were not distinguishable. The 5% DC therapy caused no retinal disruption or RPE damage. The absence of retinal damage induced by MPLT was confirmed in another animal model investigated by Hirabayashi [36]. Based on the upregulation of aquaporin 3 gene expression in retinal photoreceptors, the researchers concluded that MPLT may be responsible for suppressing macular edema and intensifying drainage of retinal fluid. However, the role of aquaporin 3 remains unclear, and it needs to be confirmed in other studies.
Abbreviations: MPLT, micropulse laser treatment; CWL, continuous-wave laser; DME, diabetic macular edema; FU, follow-up (in months); BCVA, best corrected visual acuity; CRT, central retinal thickness; CS, clinically significant; CI, center involved; RPE, retinal pigment epithelium; FAF, fundus autofluorescence; IR, infrared; FA, fluorescein angiography; DRIL, disorganization of inner retinal layers; mfERG, multifocal electroretinography; EDI, enhanced-depth imaging.
Nakamura et al. [38] found that functional improvement after MPLT was limited to an increase in visual acuity. According to the study, the macular sensitivity within the central 10° in microperimetry did not improve significantly, despite the increase in BCVA and the reduction in foveal thickness. Luttrull et al.
[14] observed significant differences between pre- and postoperative CRT in eyes with CRT < 300 µm, with a maximum reduction between 4 and 7 months after MPLT. The BCVA was stable, with a significant improvement between 4 and 7 months of the follow-up. According to Kwon et al. [51], MPLT did not cause chorioretinal scars despite repeated treatments and an increased number of micropulse shots. The study showed a similar efficacy of the micropulse and conventional lasers. Inagaki et al. [18] compared the efficacy of 810 nm and 577 nm MPLT combined with focal microaneurysm photocoagulation. They proved that both wavelengths are effective in reducing CRT and maintaining visual acuity. As advantages of the 577 nm wavelength, they pointed out that it required less power and enabled both the micropulse and classic therapies to be performed using the same device. Supplementary microaneurysm photocoagulation reduced the recurrence rate. Marashi et al. [59] agreed that a hybrid approach, combining threshold laser treatment of microaneurysms with a high-density subthreshold micropulse laser, effectively stabilized DME with minimal scar formation. Mansouri et al. [50] concluded that the retinal thickness affects the spread of the laser energy and influences the tissue response. The authors compared the efficacy of MPLT according to the anatomical severity of the edema, suggesting MPLT as an effective and safe therapy in mild and moderate DMEs. In the study, none of the eyes with an initial CRT > 400 µm responded to the therapy, and all required rescue injections of anti-VEGF. Citirik et al. [56] also showed a relationship between the efficacy of the micropulse laser and the central retinal thickness. The study indicated that eyes which had previously undergone ineffective bevacizumab treatment responded well to MPLT if the CRT was no higher than 300 µm. Nicolò et al.
[49] suggested that the micropulse laser is ineffective in eyes which previously did not respond sufficiently to focal or grid macular photocoagulation or an anti-VEGF treatment. Additionally, the authors reported a better response to the treatment in naïve patients, with stabilization of or improvement in the BCVA and CRT parameters. Valera-Cornejo et al. [41] observed changes in BCVA only in previously untreated patients. It should be underlined that the laser procedures were performed not only over the edema, but also over the entire macula, including the foveal center and unthickened retina. In contrast, the work by Abouhussein et al. [48] led to a different conclusion, i.e., that a single session of MPLT was effective in patients with a refractory DME below 400 µm. In terms of limitations, both studies had short follow-up periods and small sample sizes without randomization. Latalska and colleagues [47] found that the effects of the micropulse laser were more significant in a rural environment than in an urban one. Moreover, they pointed out that a glycated hemoglobin level ≤ 7% significantly influenced the improvement in CRT and near visual acuity. Optical coherence tomography angiography (OCT-A) is a novel noninvasive accessory examination, which enables imaging of vascular abnormalities and microaneurysms in the superficial and deep capillary plexus. It also reveals enlargement of the foveal avascular zone (FAZ), nonperfused areas, and neovascularization [60]. The studies by Vujosevic et al. [44,45] showed the mechanism of action of a micropulse laser via a reduction in the inflammatory biomarkers detected in OCT and OCT-A. They detected a decreased number of hyper-reflective spots and microaneurysms, whereas the chorioretinal perfusion parameters remained stable in response to MPLT. No significant differences have been observed between fixed and variable regimens of 577 nm MPLT for mild center-involved DMEs; however, Donati et al.
[16] suggested that fixed parameters facilitate the treatment and reduce the number of potential errors. Frizziero et al. [43] confirmed the safety of the fixed model. Nowacka et al. [58] reported stabilization of the macular structure through the maintenance of the bioelectrical function of cones and bipolar cells detected in mfERG. Ueda et al. [40] proposed the entropy of RPE cells as an objective indicator of the retinal healing process. They showed a positive correlation between the decrease in CRT after MPLT and entropy measurements in the RPE. According to Işık et al. [39], the response to MPLT may be related to the status of the central RPE and the glycated hemoglobin level; however, further studies on a larger group are required. A recent study by Kikushima et al. [19] compared the 577 nm with the novel 670 nm micropulse treatment. Both wavelengths seemed to be equally effective; however, the use of the 670 nm laser resulted in less scattering and better penetration. Comparison of Subthreshold Micropulse and Conventional Laser Treatment Studies comparing MPLT with conventional laser therapy are presented in Table 4. No damage was identified after MPLT in OCT scans, and fewer changes in the outer retina were seen after a pattern scanning laser than after a conventional laser. In most reports, the authors found micropulse subthreshold laser therapy to be equivalent to conventional macular photocoagulation [61][62][63][64][65][67][68][69][70][71][72]. Vujosevic et al. pointed out that MPLT is not only as effective as classic lasers in reducing macular edema, but also a less aggressive therapy, as shown by the increased retinal sensitivity in microperimetry. The positive influence on central retinal sensitivity was also confirmed in the study by Chhablani et al. [68]. Venkatesh and colleagues [63] suggested that MPLT did not induce any functional loss detectable in multifocal electroretinography, with equally good therapeutic effects. Inagaki et al.
[64] investigated Japanese patients with more pigmented retinas, which could predispose them to increased absorption of laser energy and more severe retinal damage. Changes in retinal morphology at 3 months after the laser therapy were detected only after pattern scanning and a conventional grid treatment. A recently published multicenter clinical trial by Lois et al. [72] included a large number of participants (266 eyes) with mild DMEs (<400 µm). The study confirmed the clinical effectiveness, safety, and cost-effectiveness of MPLT compared to a conventional laser treatment. Lavinsky et al. [62] observed the superiority of a high-density, confluent micropulse treatment regarding the anatomical and functional outcomes after 1 year of follow-up. In contrast, after the normal-density treatment (two burn widths apart), no improvement was seen. Correspondingly, Fazel et al. [67] found that MPLT significantly improved the BCVA and CRT parameters in eyes with a previously untreated, mild DME. The presented study showed MPLT to be more effective than continuous-wave treatment in the very short term (4 months). Similarly, Bougatsou et al. [69] agreed that MPLT was more efficacious than a conventional laser in non-center-involved clinically significant macular edemas, whereas Al-Barky et al. [70] observed slightly better functional outcomes after MPLT. Othman et al. [66] compared MPLT in treatment-naïve patients with MPLT in recurrent or persistent DME 3 months after conventional macular photocoagulation. The therapy was similarly effective in both groups; however, more patients in the secondary group required rescue therapy with an intravitreal steroid.
Available data on alternative subthreshold micropulse panretinal photocoagulation (PRP) for treating severe non-proliferative diabetic retinopathy and proliferative diabetic retinopathy are limited, and without studies of higher quality according to evidence-based medicine (EBM), it should be considered experimental [73,74]. Subthreshold Micropulse Laser Treatment and Intravitreal Therapy Numerous studies compared MPLT with intravitreal treatment or investigated combination therapy (Table 5). Most articles compared MPLT with bevacizumab, ranibizumab, and aflibercept. The treatment protocol for anti-VEGF monotherapy was three loading injections at a monthly interval followed by a pro re nata (PRN) scheme. The patients qualified for micropulse therapy after receiving three initial loading anti-VEGF doses and with a CRT below 400 µm. It was suggested that additional laser treatment could decrease the burden of injection frequency with similar functional and anatomical outcomes [75,[78][79][80]82,83,[85][86][87]. However, the study by Akhlaghi et al. [77] led to a different conclusion: adjuvant MPLT improved BCVA and CRT in eyes resistant to the bevacizumab therapy. Inagaki et al. [75] suggested that an initial loading dose of an intravitreal anti-VEGF agent, followed by a single MPLT session for residual edema, reduces the number of required injections and effectively improves BCVA and CRT. Akkaya et al. [76] found MPLT to be superior to anti-VEGF injections in patients with mild macular edema (CRT max. 350 µm) and good visual acuity (BCVA ≤ 0.15 logMAR) owing to less frequent visits, lower costs, and a higher safety profile. In this regard, MPLT could be considered as an early intervention and, if necessary, continued with anti-VEGF injections. The study by Abdelrahman et al. [81] compared patients treated with MPLT or ranibizumab with a control group using multifocal electroretinography (mfERG).
The functional outcome was measured not only by the subjective BCVA, but also by objective mfERG readings from the macular region. Only in the ranibizumab group was there a significant improvement in electrophysiological parameters after the treatment. They proved that both MPLT and ranibizumab improved the anatomical and functional retinal parameters, with superiority of the intravitreal agent. A recent retrospective study by Lai et al. [88] showed that aflibercept monotherapy resulted in higher short-term functional and anatomical improvement compared to MPLT with rescue aflibercept therapy; however, the long-term results did not show any significant differences. In contrast to other studies, MPLT was not preceded by initial anti-VEGF injections, and it was performed with focal conventional laser treatment of microaneurysms. In general, the authors agree that adjuvant micropulse therapy reduced the number of required intravitreal injections, apart from Koushan et al. [89], who did not find an additional benefit of a combined therapy. Elhamid et al. [9] treated center-involved DMEs, which previously did not respond to an anti-VEGF therapy, with a combination of an Ozurdex implant and MPLT. As in other studies, they suggested that a poor response after three initial monthly injections of anti-VEGF predicts a reduced response to subsequent doses. An early switch to a steroid implant diminished the number of intravitreal procedures. In this study, the frequency of recurrence was relatively lower than in other trials with the dexamethasone implant, which can be explained by the synergic effect of MPLT. In terms of limitations, the obtained results require confirmation in larger studies with a control group. Toto et al. [90] also demonstrated the effect of MPLT in addition to a dexamethasone implant.
The combined therapy reduced the frequency and the number of required injections, thus extending the treatment-free interval. Micropulse lasers also appear to be an efficient modality for decreasing persistent DMEs after pars plana vitrectomy. A comparative study by Bonfiglio et al. [91] showed that MPLT performed 6 months after surgery improved the anatomical and functional parameters in vitrectomized eyes. Conclusions An analysis of the available results is limited by the scarce number of large, randomized clinical trials. The reviewed studies varied in terms of the inclusion criteria, protocols, and treatment procedures. The detailed eligibility criteria for MPLT have not been defined; however, according to the presented literature, there are some therapeutic principles. Three meta-analyses which evaluated the efficacy of MPLT versus conventional photocoagulation or intravitreal injections have been published. Chen et al. [92] compared the mean change in BCVA and CRT across six randomized controlled trials (RCTs), including a total of 398 eyes. MPLT resulted in better visual acuity with a similar anatomical outcome. Similarly, Qiao et al. [93] compared MPLT with an mETDRS treatment in seven RCTs on 425 eyes. They found no statistical differences in BCVA and CRT after the treatments, with less retinal damage after MPLT. Wu et al. [94] performed a Bayesian analysis of 18 studies, comprising a total of 1758 patients, which assessed the effect of lasers in monotherapy or as adjuvants to anti-VEGF. The findings showed that ranibizumab plus conventional photocoagulation is more effective than micropulse laser monotherapy; however, there was no significant difference in efficacy between MPLT and bevacizumab plus conventional laser treatment, nor between MPLT and conventional laser monotherapy.
There are no standardized protocols for MPLT; however, according to the reviewed articles, micropulse panmacular treatment including the fovea, with a fixed regimen, seems to be a cost-effective, noninvasive, and safe therapy. Data in the analyzed articles confirmed that 577 nm laser applications using a 200 µm retinal spot, 200 ms pulse duration, 400 mW power, and 5% DC induced significant morphologic and functional improvement in the central retina and were not associated with any adverse events. Titration can prolong and complicate the procedure. The continuous-wave test burn is performed outside the posterior pole, over non-edematous retina, until a barely visible white spot is created. There is no consensus, after reaching the threshold, on how much to modify the laser power. Some authors switched the continuous wave to the micropulse mode, multiplying the threshold value by 0.5-4. Some researchers titrated the power in micropulse mode and then divided the value by 2. The proper subthreshold value is hard to determine, and errors can lead to overtreatment and inadvertent retinal damage. A confluent treatment using a fixed 400 mW power for the yellow laser with a low 5% DC and high intensity was confirmed to effectively stimulate RPE cells. None of the presented studies detected any visible signs of chorioretinal damage in the ancillary imaging tests or animal retinal sections. In contrast to the harmful conventional laser treatment, MPLT additionally increased the central retinal sensitivity. The efficacy of the micropulse laser was proven in mild DMEs with a CRT smaller than 400 µm, owing to the diffuse distribution of laser energy in the target tissue. In general, the treatment helps to stabilize or improve the visual acuity and decrease the macular edema. Better results are observed with a high-density protocol covering the macular region, with no spacing between the spots. Automatic pattern systems are helpful in the application of invisible laser spots.
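The dosimetry implied by the 577 nm protocol above (200 ms envelope, 400 mW, 5% duty cycle) can be made concrete with a little arithmetic. In the sketch below, the envelope duration, power, and duty cycle come from the reviewed protocol, while the 2 ms micropulse period and the titrated threshold power are illustrative assumptions:

```python
# Back-of-the-envelope dosimetry for the reviewed 577 nm protocol.
# Assumption: a 2 ms micropulse period; the 400 mW / 200 ms / 5% DC values
# are the protocol parameters cited in the reviewed studies.
power_w = 0.400          # 400 mW on-power
envelope_s = 0.200       # 200 ms pulse envelope per spot
duty_cycle = 0.05        # 5% DC
period_s = 0.002         # assumed 2 ms micropulse period

on_time_s = envelope_s * duty_cycle            # total laser-on time per spot
energy_j = power_w * on_time_s                 # delivered energy per spot
n_micropulses = envelope_s / period_s          # micropulses per envelope
on_per_pulse_s = period_s * duty_cycle         # on-time of each micropulse

print(f"laser-on time per spot: {on_time_s*1e3:.1f} ms")       # 10.0 ms
print(f"energy per spot:        {energy_j*1e3:.1f} mJ")        # 4.0 mJ
print(f"micropulses per spot:   {n_micropulses:.0f}")          # 100
print(f"on-time per micropulse: {on_per_pulse_s*1e6:.0f} us")  # 100 us

# Titration variants reported in the text: multiply a CW threshold power by
# 0.5-4, or titrate in micropulse mode and halve it. 300 mW is hypothetical.
cw_threshold_w = 0.300
print([round(cw_threshold_w * f, 3) for f in (0.5, 1, 2, 4)])
```

The short on-time per micropulse is what allows heat to dissipate between pulses, which is the stated rationale for the subthreshold, scar-free effect.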
The minimal interval from the treatment to obtain a significant response and a reduction in retinal thickness is about 3 months. Therefore, it can be recommended to start the therapy with three loading doses of anti-VEGF, followed by MPLT combined with PRN injections to achieve a quick response to anti-VEGF, which is supported by the long-lasting remodeling effect of MPLT. The increased number of micropulse sessions is associated with a greater retinal response. A combined treatment requires a lower number of anti-VEGF injections, and it is not inferior to monotherapy [95,96]. MPLT is also an emerging option as a standalone treatment for noncompliant patients and for those having contraindications for other therapies.
Cobalt-Assisted Morphology and Assembly Control of Co-Doped ZnO Nanoparticles The morphology of metal oxide nanostructures influences the response of the materials in a given application. In addition to changing the composition, doping can also modify the morphology of a host nanomaterial. Herein, we determine the effect of dopant concentration, reaction temperature, and reaction time on the morphology and assembly of CoxZn1−xO nanoparticles synthesized through non-aqueous sol-gel in benzyl alcohol. With the increase of the atom % of cobalt incorporated from 0 to 15, the shape of the nanoparticles changes from near spherical, to irregular, and finally to triangular. The tendency of the particles to assemble increases in the same direction, with Co0.05Zn0.95O consisting of non-assembled particles, whereas Co0.15Zn0.85O consists of triangular nanoparticles forming spherical structures. The morphology and assembly process are also sensitive to the reaction temperature. The assembly process is found to occur during the nucleation or the early stages of particle growth. The cobalt ions promote the change in the shape during the growth stage of the nanoparticles. 
Introduction Metal oxide nanostructures find application in a broad range of fields that include catalysis [1], energy storage and conversion [2][3][4], sensing [5], and medicine [6]. Their functionalities derive from the unique electronic, optical, and magnetic characteristics, as well as from surface lattice distortions/defects and surface reactivity arising at the nanoscale. As these properties depend on the chemical composition, doping is a valuable strategy for modulating or creating new properties of a host nanomaterial in a controlled manner [7]. Additionally, for a given composition, most properties show a strong dependence on the size and morphology of the nanostructures [8]. For example, the catalytic activity of nanoparticles depends on the type of facets exposed at the surface (i.e., their morphology), as different facets have different energies and therefore different chemical activity [9,10]. Non-aqueous sol-gel synthesis approaches in organic solvents have been successfully applied to the fabrication of a large variety of nanostructures, from pure inorganic to organic-inorganic hybrid materials [11][12][13]. They provide good control over the composition, size, shape, assembly, and crystallinity of nanomaterials, features that are influenced by factors such as the type and reactivity of the precursors, solvent, temperature, reaction time, or the presence of surfactants. In non-aqueous sol-gel processes, the oxygen-supplying species for the formation of metal oxides are generally either the solvent or the precursors. The moderate reactivity of the precursors in organic media leads to slow reaction rates in the formation of metal oxides, resulting in the production of highly crystalline materials at low/moderate reaction temperatures. This is particularly useful for synthesizing doped metal oxides.
Surfactant molecules containing carboxylic acid, phosphoric acid, or amine moieties are commonly employed for controlling the morphology of nanomaterials [14]. These molecules may preferentially bind to specific crystallographic facets during the growth of the nanoparticles and promote the growth in certain directions, which results in the formation of anisotropic nanoparticles of uniform size and shape. In addition, capping molecules can direct the assembly of particles through intermolecular forces such as hydrophobic interactions, hydrogen bonding, molecular dipole interactions, or π-π interactions [12,15]. Dopants or impurities can also have a significant impact on the morphology of host nanomaterials, as the adsorption of metal cations onto the facets of nanoparticles may change the growth rates along different directions [16,17]. Although the effect of dopants or impurities on the morphology of nanomaterials has been less investigated than that of surfactants, several studies have revealed that the presence of small ionic species in the reaction mixture can help direct the crystal growth of metals [18], metal chalcogenides [19,20], and metal oxide nanostructures [17,[21][22][23]. Bose et al. [20] reported the modification of the morphology of ZnSe nanocrystals upon doping with Mn, from hemispherical-like nanostructures for the undoped ZnSe to spherical nanocrystals for the Mn-doped material. A similar shape change with doping was observed for Nb-doped TiO2 [23]. TiO2 formed platelet nanoparticles, which evolved from platelets to peanut-like 1D nanorods with the addition of Nb. Yang et al.
[17] reported morphology and crystal phase changes for Mg doped ZnO with increasing the dopant content.Tetrapods, ultrathin nanowires and irregular nanoparticles were obtained for different Mg contents.The doping of ZnO with Cd 2+ , Mn 2+ , and Ni 2+ was also studied.Consequently, the addition of appropriate foreign ions to the synthesis of certain nanomaterials, even if they are not incorporated into the final product (i.e., doping), has been recently considered as another strategy for controlling the morphology of nanomaterials [16]. Zinc oxide is used in a wide variety of applications such as catalysis, sensing, and optoelectronic devices [24,25].Doping with transition-metals such as Co creates new properties (e.g., magnetic) and modifies its optical and catalytic behavior, extending the range of applications [26,27].ZnO and doped ZnO with various morphologies have been successfully synthesized in organic solvents [17,[28][29][30][31][32].In particular, the synthesis in benzyl alcohol is appealing for producing crystalline ZnO and transition-metal-doped ZnO nanomaterials with fairly uniform size and morphologies at low temperature without the use of surfactants, and for achieving high levels of substitutional doping of Co 2+ in the ZnO wurtzite structure without the formation of segregated phases [29][30][31].Herein, we report the effect of cobalt doping on the morphology and aggregation behavior of ZnO nanoparticles synthesized by non-aqueous sol-gel in benzyl alcohol.The shape of the nanoparticles changed from near spherical to triangular with the variation of the Co atom % in the ZnO host from 0 to 15.In addition, for the highest Co content, the particles assemble into spherical nanostructures.The effects are dependent on the reaction temperature. 
Synthesis of Co-Doped ZnO

The syntheses of the Co-doped ZnO materials were performed as follows: 1 mmol of zinc acetate (Aldrich, Munich, Germany, 99.99%) and 0.1, 0.2, or 0.3 mmol of cobalt(II) acetate (Aldrich, 99.995%) were added to 5 mL of benzyl alcohol (Aldrich, 99.8%) in a 10 mL microwave glass vial, under argon. The mixture was sealed with a silicone cap under Ar. Subsequently, the suspension was heated in a microwave reactor (Anton Paar Monowave 300, Graz, Austria) at 170 °C for 5 min (with a 50 s heating ramp to reach the final temperature), and finally rapidly cooled down with compressed air. The reaction temperature was controlled with a fiber-optic temperature probe inserted inside the reaction vial. The solid products were collected by centrifugation, washed three times with ethanol, and dried at 70 °C overnight. The samples obtained from the reaction mixtures containing 0.1, 0.2, and 0.3 mmol of cobalt acetate are denoted Co0.05Zn0.95O, Co0.09Zn0.91O, and Co0.15Zn0.85O, respectively (based on the Co atom % determined by energy-dispersive X-ray spectroscopy (EDX) analysis). The synthesis of Co0.15Zn0.85O was also performed at 170 °C for different time periods. The procedure described above was repeated, except that the reaction was stopped and quickly cooled down under a compressed air flow after being kept at 170 °C for 30 s, 45 s, 60 s, 90 s, and 150 s. The solid products were collected by centrifugation, washed three times with ethanol, dried at 70 °C, and characterized. Additionally, the material denoted Co0.15Zn0.85O was synthesized at 180 °C and 190 °C. The undoped ZnO sample was synthesized in the same way in the absence of the cobalt precursor and using 2 mmol of the zinc acetate precursor.
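The nominal Co fractions implied by the precursor amounts above can be compared with the EDX-measured dopant contents of 5, 9, and 15 atom % (see Results); a minimal sketch of that comparison (the gap between the two reflects the slower Co incorporation discussed later):

```python
# Nominal Co atom % implied by the precursor amounts, Co / (Co + Zn),
# compared with the EDX-measured dopant contents of 5, 9, and 15 atom %.
zn_mmol = 1.0
co_amounts = [0.1, 0.2, 0.3]
measured = [5, 9, 15]
nominal = [100 * co / (co + zn_mmol) for co in co_amounts]
for co, nom, meas in zip(co_amounts, nominal, measured):
    print(f"{co} mmol Co: nominal {nom:.1f} atom %, measured {meas} atom %")
```

The nominal values (about 9.1, 16.7, and 23.1 atom %) are consistently higher than the measured ones, in line with the lower incorporation rate of cobalt relative to the growth rate of the ZnO host.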
Characterization

Powder X-ray diffraction (XRD) patterns were recorded with a STOE MP diffractometer (STOE, Darmstadt, Germany) in transmission configuration using Cu Kα radiation (λ = 0.1541 nm). The measurements were performed in the 2θ range 5-90° with a step size of 0.5°. Transmission electron microscopy (TEM) images were acquired on a Philips CM 200 microscope (FEI, Hillsboro, OR, USA) at 200 kV. For determining the size of the nanoparticles by TEM, the size of ca. 50 nanoparticles was measured on several TEM images. Energy-dispersive X-ray spectroscopy (EDX) analyses were performed using an EDAX SDD detector (EDAX Inc., Mahwah, NJ, USA) coupled to the TEM. Diffuse reflectance ultraviolet-visible (UV-vis) spectra were collected with a Perkin Elmer LAMBDA 950 spectrophotometer (Perkin Elmer, Waltham, MA, USA) equipped with a 150 mm integration sphere, using BaSO4 as a reference, in the wavelength range of 200-800 nm. Fourier transform infrared (FTIR) spectra were measured on a Thermo Scientific Nicolet iS5 spectrometer (Thermo Fisher Scientific, Waltham, MA, USA) in the wavenumber range of 4000-400 cm−1 (4 cm−1 resolution), using pellets of the solid diluted in KBr. Carbon elemental analyses were performed on a HEKAtech Euro EA CHNSO elemental analyzer (HEKAtech GmbH, Wegberg, Germany).
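The crystallite sizes quoted in the Results are obtained from the XRD line broadening via the Scherrer equation, D = Kλ/(β cos θ). A minimal sketch of that calculation (the FWHM value below is hypothetical, for illustration only; K = 0.9 is a common shape-factor choice):

```python
import math

def scherrer_size(wavelength_nm, fwhm_deg, two_theta_deg, K=0.9):
    """Crystallite size D = K * lambda / (beta * cos(theta)), beta in radians."""
    beta = math.radians(fwhm_deg)            # FWHM of the reflection, in radians
    theta = math.radians(two_theta_deg / 2)  # Bragg angle (half of 2-theta)
    return K * wavelength_nm / (beta * math.cos(theta))

# Cu K-alpha radiation (0.1541 nm) and the (101) reflection of wurtzite ZnO
# near 36.3 deg 2-theta; a hypothetical FWHM of 0.8 deg gives a size of
# roughly 10 nm, the order of magnitude reported for the doped samples.
print(scherrer_size(0.1541, 0.8, 36.3))
```

Instrumental broadening is neglected in this sketch; in practice it would be subtracted from the measured FWHM before applying the equation.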
Results and Discussion

The reaction between zinc acetate and benzyl alcohol at temperatures around 170 °C produces crystalline wurtzite zinc oxide nanoparticles with sizes between 10 and 20 nm and quasi-spherical morphology (Figure 1a). The process involves an esterification reaction that starts with the nucleophilic attack of the oxygen of the alcohol on the carbon of the carbonyl group of the acetate ligand of the metal precursor and leads to the formation of benzyl acetate and hydroxylated zinc species [33]. The latter constitute the monomers for the formation of the zinc oxide, which occurs through condensation reactions of the hydroxylated zinc species with release of water. The same reaction mechanism allows the incorporation of transition metals such as Co, Fe, Ni, or Mn into the host ZnO structure [29,31].

Figure 1b-e shows the TEM images of the Co-doped ZnO nanostructures containing different amounts of cobalt, synthesized by reacting zinc and cobalt acetates with benzyl alcohol at 170 °C under microwave irradiation. The presence of cobalt in the products was confirmed by EDX analysis (Figure S1), and the amounts of dopant measured are 5, 9, and 15 atom % for Co0.05Zn0.95O, Co0.09Zn0.91O, and Co0.15Zn0.85O, respectively. The cobalt contents in the final products are slightly lower than the nominal amounts, likely due to the lower rate of cobalt incorporation compared to the growth rate of the ZnO host particles. Nevertheless, as the Co2+ and Zn2+ ions have similar sizes in the tetrahedral environment of the oxide, and are both borderline Lewis acids and therefore have similar reactivity, a high amount of 15 atom % of Co was introduced into the ZnO. The XRD patterns of the three doped ZnO materials show reflections arising exclusively from the hexagonal lattice of the host ZnO (Figure 1f), and no significant shifts of the diffraction angles are observed with the increase of the cobalt content. The average crystallite sizes, calculated from the (101)
reflections with the Scherrer equation, are 17, 11, 12, and 11 nm for ZnO, Co0.05Zn0.95O, Co0.09Zn0.91O, and Co0.15Zn0.85O, respectively, which are consistent with the sizes of the primary nanoparticles measured from the TEM images. Shifts of the reflections to higher angles (up to 0.1° 2θ) with increasing cobalt atom % in the ZnO structure, indicative of small lattice contractions, have been observed by some authors [34], while others, as in this work, have not detected a clear trend [29]. This behavior is associated with the close proximity of the sizes of the Co2+ (0.58 Å) and Zn2+ (0.60 Å) ions in tetrahedral coordination; consequently, the substitutional doping of Co2+ in the tetrahedral Zn2+ sites does not cause drastic alterations of the lattice parameters. It has been extensively reported in the literature that the substitution of Co2+ at the tetrahedral Zn2+ sites in the wurtzite structure originates a broad band in the visible region of the UV-vis spectra of Co-doped ZnO materials [34][35][36]. This band is made of three contributions that arise from d-d transitions. Figure 1g displays the d-d transition region of the diffuse reflectance UV-vis spectra of the Co-doped ZnO nanostructures with different amounts of cobalt. The spectra show the typical three bands at 567 nm, 610 nm, and 657 nm, due to the 4A2(F) → 2E(G), 4A2(F) → 4T1(P), and 4A2(F) → 2A1(G) transitions, which indicate the presence of Co2+ in the tetrahedral environment of the oxide and that substitutional doping of Co2+ for Zn2+ occurred for all the materials. The TEM images of the doped samples reveal significant changes in the morphology and assembly of the nanoparticles as the amount of dopant increases from 5 to 15 atom %. The Co0.05Zn0.95O material consists of irregularly shaped nanoparticles (Figure 1b), whereas Co0.09Zn0.91O contains a mixture of irregular particles and faceted triangular nanoparticles. In addition,
some of the latter are assembled forming half-spherical arrangements (Figure 1c). As the Co atom % increases to 15, the material consists mostly of triangular nanoparticles (of ca. 10-15 nm) assembled into spherical structures of ca. 350 nm in size (Figure 1d,e). The selected area electron diffraction (SAED) pattern of the assembled structures shows the reflections of the ZnO hexagonal lattice. The nanoparticles are crystallographically randomly oriented in these assemblies. These results suggest that the Co2+ ions promote the formation of triangular nanoparticles and their assembly into large structures. The change in the morphology of the nanoparticles can be explained considering that adsorption of the dopant species on the surface of the host during growth is a crucial step of the doping process [37,38]. The adsorption energies and residence times depend on the crystallographic surfaces to which the dopant is adsorbing. Erwin et al. [37] calculated the binding energies for Mn adsorbates on the surfaces of various semiconductors. They found that the binding energies of Mn on the (0001) surfaces of CdS and CdSe with wurtzite structures were higher than on the (1120) or (1010) surfaces. The adsorption energies of the dopant affect the doping efficiency and determine through which facet the doping will preferentially occur. Furthermore, dopant incorporation causes additional changes to the energy of the facets, and the adsorption of the dopant species on certain surfaces also modifies their reactivity. The latter effect can account for the observation that many ionic species are able to control the morphology of nanostructures without being incorporated into the host structure [21,39,40,41]. Consequently, the growth rates of the different facets will be different. It is inferred from this discussion that a possible "side effect" of the different adsorption energies of the dopants on different surfaces is the anisotropic growth of the nanocrystals, as observed in this work, which will
depend on the concentration of the dopant.

The assembly of particles in solution is usually promoted by intermolecular interactions established between molecules capping the particles, which are added to the reaction mixture or, in special cases, are formed in situ [12,15,42]. The syntheses of the Co-doped ZnO materials were performed in the absence of coordinating molecules such as surfactants. However, it has been found that oxidation of benzyl alcohol at high temperatures (ca. >230 °C) results in the formation of high amounts of benzoate species attached to the metal oxide nanoparticles, which can promote the assembly of particles and the formation of supercrystals by π-π interactions between the aromatic rings [42]. The amount of benzoate ligands attached to the surface of doped zirconia nanocrystals was found to increase with the reaction temperature and with the amount of dopant [42]. The FTIR spectra of the ZnO and Co-doped ZnO nanostructures (Figure S2) do not indicate the presence of benzoate species. The spectra show two bands at 1583 and 1416 cm−1, attributed to the antisymmetric and symmetric stretching vibration modes of the coordinated carboxylate moiety, but no bands from the vibration modes of aromatic rings are present. This suggests that the particles contain acetate ligands from the precursors adsorbed on the surface. However, carbon elemental analysis revealed that the amount of adsorbed organics is too small to promote the aggregation of the particles (C wt % between 2.5 and 3.5), and that there is no correlation between the amount of dopant (and, therefore, the tendency to assemble) and the amount of organic species on the material. The aggregation process instead seems to be associated with the amount of cobalt in the synthesis. A possible explanation for the effect of cobalt on the assembly process is that cobalt ions adsorbed on the surface of the nanocrystals act as bridges between the nanoparticles during the early growth stages, promoting their
assembly.

To gain insights into the evolution of the morphology and assembly of the nanoparticles, the formation reaction of Co0.15Zn0.85O was followed by TEM and XRD, as described in Section 2. For that purpose, the microwave (MW) reaction was stopped after just 30 s, 45 s, 60 s, 90 s, 120 s, and 150 s at 170 °C, and the solid product was characterized. The results are shown in Figure 2. No solid product was present after only 30 s of reaction, likely because at that point the mixture contained only monomeric species. On the contrary, agglomerates of nanoparticles are already formed after 45 s, suggesting that the assembly process occurs during the nucleation or early growth stages of particle formation. These agglomerates have a bouquet-like morphology and are made of very small nanoparticles of 2-3 nm with near-spherical shape. The XRD indicates that the nanoparticles are already crystalline with the wurtzite structure. Furthermore, the Co/Zn ratio in the agglomerates is similar to that of the final product (after 5 min of reaction). Considering that the nanoparticles at this stage have near-spherical morphology, similarly to the undoped ZnO material, and that the Co precursor is slightly less reactive than the zinc precursor, it is likely that part of the Co is still adsorbed at the surface, possibly bridging neighboring nanoparticles, which leads to the formation of the assemblies at the nucleation/early growth stage. At 60 s, the size of the assemblies increases to ca.
250 nm. After 90 s of reaction, in addition to a further increase in the size of the aggregates to 250-300 nm, changes in the shape of the nanoparticles are observed. Most of the nanoparticles have irregular shapes with sizes between 5 and 8 nm, but in some parts of the assemblies triangular NPs are already seen. For longer reaction times, the assemblies do not grow much more, and the main change observed is the modification of the shape of the individual nanoparticles. Therefore, these results confirm that the cobalt dopant controls the morphology of the nanoparticles during their growth. On the one hand, the preferential adsorption of cobalt on specific surfaces can hinder the growth of those facets, and consequently the growth is promoted along other directions. On the other hand, the preferential incorporation of Co on specific surfaces changes the energy of those facets and consequently changes the dissolution rates of the different facets, promoting the growth in specific directions during Ostwald ripening, which is the growth mechanism of the ZnO nanoparticles synthesized by the reaction studied here [33]. After 150 s of reaction, most of the particles in the assemblies have a triangular shape and the product is similar to the one obtained after 5 min of reaction. At 190 °C, the nanoparticles have irregular morphology and are not assembled. These results show that the promotion of the assembly of the nanoparticles and the changes in morphology are affected not only by the amount of cobalt present in the reaction mixture but also by the temperature. This is understood considering the effect of the temperature on the reactivity of the precursors, the surface adsorption energies, and the nucleation/growth rates. As reported by Chen et al.
[38], the doping of nanocrystals can involve several separate processes, such as surface adsorption and lattice incorporation, that show a strong dependence on the temperature. Therefore, it seems that 170 °C is the optimal reaction temperature, at which the balance between the reactivity of the precursors, surface adsorption energies, cobalt incorporation rate, and nanocrystal growth rate is ideal for promoting the assembly of the nanoparticles from an early stage of the growth process and directing the shape of the nanoparticles into a triangular one.

Conclusions

CoxZn1−xO nanomaterials with x = 0.05, 0.09, and 0.15 were synthesized by non-aqueous sol-gel in benzyl alcohol at 170 °C with microwave heating. The dopant was found to have a strong impact on the morphology of the nanostructures, which was attributed to the modification of the energies of the different facets, caused by dopant adsorption and incorporation, that promoted the growth in preferential directions. Undoped ZnO consists of quasi-spherical nanoparticles, whereas Co0.15Zn0.85O is made of triangular nanoparticles assembled into spherical structures. The assembly possibly results from surface-adsorbed cobalt species that bridge adjacent nanoparticles during the early growth stages of the particles. The morphology and assembly behavior of the Co0.15Zn0.85O nanoparticles are sensitive to the reaction temperature. In the temperature range studied (170-190 °C), the formation of well-defined assemblies of triangular nanoparticles was only observed at 170 °C, suggesting that the assembly process requires the correct balance between several processes that occur in solution during the nanostructure formation (e.g., surface adsorption energies, cobalt incorporation rate, and nanocrystal growth rate), which are differently affected by the temperature.
Figure 1. TEM images of (a) ZnO and Co-doped ZnO with (b) 5, (c) 9, and (d,e) 15 atom % of cobalt (as determined by EDX analysis); the right part of (d) shows the selected area electron diffraction (SAED) pattern of the Co0.15Zn0.85O nanostructures; (f) X-ray diffraction patterns of the ZnO and Co-doped ZnO materials (vertical lines correspond to reference patterns: blue, ZnO; pink, Co3O4); (g) diffuse reflectance UV-vis spectra of the Co-doped ZnO nanomaterials.

Figure 2. (a-h) TEM images and (i) XRD patterns of the products obtained at different reaction times during the synthesis of Co0.15Zn0.85O.
Magnetic particles and strings in iron langasite

Magnetic topological defects can store and carry information. Replacement of extended defects, such as domain walls and Skyrmion tubes, by compact magnetic particles that can propagate in all three spatial directions may open an extra dimension in the design of magnetic memory and data processing devices. We show that such objects can be found in iron langasite, which exhibits a hierarchy of non-collinear antiferromagnetic spin structures at very different length scales. We derive an effective model describing long-distance magnetic modulations in this chiral magnet and find unusual two- and three-dimensional topological defects. The order parameter space of our model is similar to that of superfluid ³He-A, and the particle-like magnetic defect is closely related to the Shankar monopole and the hedgehog soliton in the Skyrme model of baryons. Mobile magnetic particles stabilized in non-collinear antiferromagnets can play an important role in antiferromagnetic spintronics.

INTRODUCTION

The topology of defects in ordered states of matter is governed by the order parameter describing spontaneous symmetry breaking at a phase transition [1]. As the number of variables required to characterize an ordered state increases, so does the diversity and complexity of topological defects. A wide variety of defects is found in superfluid ³He, with the order parameter describing the orbital momentum, spin, and phase of the condensate [2,3]. Nontrivial topology does not necessarily make defects stable: a competition between interactions with different properties under the scaling transformation, x → Λx, is required to prevent the collapse of the defect [4]. Thus, isolated Skyrmion tubes in chiral magnets with a diameter of 10-100 nm are stabilized by Dzyaloshinskii-Moriya (DM) interactions [5,6] favoring non-collinear spins, which compete with the Zeeman and magnetic anisotropy energies favoring uniform states [7][8][9].
The small size and high stability of Skyrmion tubes in bulk chiral magnets and magnetic multilayers, as well as their dynamics driven by applied electric currents, make them promising information carriers in magnetic memory and data processing devices [10][11][12]. Even smaller skyrmions have recently been observed in centrosymmetric magnets [13][14][15], where they are stabilized by magnetic frustration and/or long-ranged interactions between spins mediated by conduction electrons [16][17][18][19][20]. Here, we discuss a realistic material that can host three-dimensional (3D) magnetic Skyrmions: non-singular defects which, unlike the Skyrmion tubes, have a finite size in all three spatial directions. These magnetic particles can transfer information in all directions, stimulating the design of three-dimensional spintronic devices. 3D Skyrmions originally emerged as solitons in the non-linear meson model of T. H. R. Skyrme [21]. The parameter space of this model, formed by four meson fields, is the three-sphere S³ parametrized by three angles. A closely related defect, the Shankar monopole, was predicted to exist in the A-phase of superfluid ³He [22,23]. The order parameter describing this phase is an SO(3) matrix, and the collection of all possible ordered states is the projective three-sphere RP³. The Shankar monopole has recently been realized in a Bose-Einstein condensate of trapped spin-1 particles by application of time-dependent and spatially inhomogeneous magnetic fields [24]. This defect is, however, unstable and has a short lifetime. Higher-dimensional order parameter spaces can also be realized in magnetic materials, in particular, antiferromagnets with triangle-based spin lattices showing a non-collinear 120° ordering of spins in the triangles described by an SO(3) matrix [25,26].
Noncollinear antiferromagnetic (AFM) orders give rise to electron and magnon bands with non-trivial topology and Weyl fermions [27][28][29][30][31], resulting in large anomalous Hall and Nernst effects [32,33] that can be controlled electrically [34]. We show that 3D skyrmions can naturally occur in the iron langasite, Ba3TaFe3Si2O14. This fascinating material is both magnetically frustrated and chiral. The Fe-langasite spin lattice is built of triangles formed by the Fe3+ ions in the ab layers (see Fig. 1), with AFM Heisenberg exchange interactions between spins in the triangles resulting in a 120° spin ordering [35]. Furthermore, competing exchange interactions between spins of neighboring triangles, stacked along the c-axis, give rise to a helical spiral modulation of the 120° ordering with a period of ~7 lattice constants along the c-axis. The direction of the spin rotation in the spiral is governed by the chiral nature of the langasite crystal [35][36][37][38] that, otherwise, has little effect on the spin structure. However, when the magnetic anisotropy is effectively reduced by an applied magnetic field, DM interactions give rise to an additional spiral modulation with a period of about 2000 Å along a direction perpendicular to the c-axis [39]. We show that the same DM interactions can stabilize more complex modulated states as well as unusual topological magnetic defects, in particular, magnetic particles carrying a 3D Skyrmion topological charge and an associated Hopf number.

Effective model

The 120° order of the classical unit spins S1, S2, S3 in the triangles can be described by two orthogonal unit vectors, V1 and V2 [39,40]: S1 = V1, S2 = −(1/2)V1 + (√3/2)V2, S3 = −(1/2)V1 − (√3/2)V2, so that S1 + S2 + S3 = 0.
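The 120° structure can be checked numerically; a small sketch, assuming the parametrization S1 = V1, S2,3 = −V1/2 ± (√3/2)V2 reconstructed above from the orthonormal frame (V1, V2):

```python
import numpy as np

# Orthonormal frame spanning the spin plane (here simply the x and y axes).
V1 = np.array([1.0, 0.0, 0.0])
V2 = np.array([0.0, 1.0, 0.0])

# 120-degree order of the three unit spins in a triangle.
S1 = V1
S2 = -0.5 * V1 + (np.sqrt(3) / 2) * V2
S3 = -0.5 * V1 - (np.sqrt(3) / 2) * V2

print(S1 + S2 + S3)        # sums to zero, as required
print(np.dot(S1, S2))      # cos(120 deg) = -0.5 between neighboring spins
n = np.cross(V1, V2)       # vector chirality n = V1 x V2
print(n)                   # here along +z
```

Each spin is a unit vector, adjacent spins make 120° angles, and the frame's third axis n = V1 × V2 is the vector chirality used as an order parameter in the following.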
Spatial rotations of the frame formed by V1, V2, and n = V1 × V2 are described by an SO(3) matrix R parametrized by three Euler angles, ϕ, θ, and Ψ [41]:

R(ϕ, θ, Ψ) = Rz(ϕ) Ry(θ) Rz(Ψ),  (1)

where Rz and Ry are the matrices of rotations around the z and y axes, respectively, and θ and ϕ are the polar and azimuthal angles of the unit vector n = (sin θ cos ϕ, sin θ sin ϕ, cos θ)ᵀ describing the direction of the vector chirality of the 120° spin order (see Fig. 2), the reference frame being V1⁽⁰⁾ = x̂ and V2⁽⁰⁾ = ŷ. The short-period spiral ordering observed in Fe-langasite in zero magnetic field originates from the competing exchange interactions between the spin triangles stacked along the c direction (see Fig. 1b). Importantly, the isotropic Heisenberg exchange interactions determine the spiral wave vector Q∥c [36]:

tan Qc = √3 (J5 − J3) / (2J4 − J3 − J5),  (2)

but not the orientation of the spiral plane described by the vector chirality n. The latter is governed by DM interactions between spins in the triangles, favoring a helical spiral with the helicity sign(nzQ) [35], and by an easy-plane magnetic anisotropy, both of which are two orders of magnitude weaker than the exchange interaction in the triangles [37,38,42]. On the other hand, the inter-triangle DM interactions in this chiral magnet tend to induce 'slow' variations of n and Ψ, giving rise to a long-period magnetic superstructure observed under an applied magnetic field [39]. The competition between the magnetic anisotropy favoring a unique direction of n and the tendency to large-scale modulations, both being relatively weak relativistic effects, can also stabilize topological magnetic defects that are superimposed on the fast spin rotations with the propagation vector along the c direction. To obtain an effective model describing long-period magnetic superstructures in Fe-langasite, we separate the fast and slow variations of the order parameter by introducing a slowly varying angle ψ(r): Ψ(r) = Qz + ψ(r).
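The spiral wave vector follows from the interlayer exchange constants via tan Qc = √3(J5 − J3)/(2J4 − J3 − J5); a sketch with hypothetical exchange values (arbitrary units, not fitted to experiment) chosen only to illustrate a period of roughly 7 lattice constants along c:

```python
import math

def spiral_wavevector(J3, J4, J5):
    """Qc (radians per lattice constant) from the inter-triangle exchanges.

    atan2 is used so the correct quadrant is obtained when the
    denominator changes sign.
    """
    return math.atan2(math.sqrt(3) * (J5 - J3), 2 * J4 - J3 - J5)

# Hypothetical exchange constants, picked to give a ~7-lattice-constant period.
Qc = spiral_wavevector(J3=0.2, J4=0.5572, J5=0.5)
period = 2 * math.pi / Qc   # spiral period in units of the lattice constant c
print(Qc, period)           # period close to 7
```

Only the three interlayer exchanges enter Qc; the intra-triangle exchange sets the 120° order itself and drops out of this relation.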
The energy is then expanded in powers of gradients of the three slowly varying angles θ, ϕ, and ψ, and averaged over the fast spin rotations (technical details of the derivation can be found in Supplementary Note 1). In the energy density of the effective model, Eq. (3), the first term originates from the interlayer Heisenberg exchange interactions (see Fig. 1b). The second term, with J⊥ = (√3/2)J2, results from the exchange interactions between the Fe triangles in the ab layers (Fig. 1a). The distances in the directions parallel (perpendicular) to the c-axis of the hexagonal lattice are measured in units of the lattice constant c (a). The energy is invariant under an arbitrary global rotation of spins. The third term in Eq. (3), playing the role of an internal magnetic field, originates from the DM interactions between spins in the triangles [see Eq. (2)], and the fourth term is the magnetocrystalline anisotropy. The next term is the coupling of the spiral ordering to an applied magnetic field H, which favors n∥H (χ > 0), since the magnetic susceptibility is largest for spins rotating in the plane perpendicular to the field vector. The last term in Eq. (3) is a Lifshitz invariant (LI) [6,7] allowed by the chiral nature of the langasite crystal, ∂⊥ being the gradient along the in-plane directions (the derivation of LIs for this non-collinear antiferromagnet is discussed in the "Methods" section).

Phase diagram

In zero field, the anisotropy terms with K1, K2 > 0 confine the spins to the ab plane and stabilize the spiral state called uniform (U), as in this state nz = +1 and ψ = const. In the enantiopure samples of Fe-langasite studied in experiments, nz = −1 [35][36][37]. The sign of nz does not affect the phase diagram. An applied magnetic field H ⊥ c tends to re-orient the spiral plane, eventually turning it perpendicular to the field (n∥H).
The re-orientation of n, which resembles the spin-flop transition in collinear antiferromagnets, activates the LI, which can stabilize two very different states with additional large-scale modulations. Assuming that n and ψ in the modulated states vary along a vector ξ = (cos ϕξ, sin ϕξ, 0) in the ab plane (this assumption is verified by numerical simulations), we exclude ψ from Eq. (3) using ∂ξψ = −cos θ ∂ξϕ − (λ/2J⊥)(ξ · n) and obtain an energy that depends only on n, for H∥x (see Fig. 1a).

Fig. 2: The 120° spin order. The order parameter is described by the polar and azimuthal angles, θ and ϕ, of the vector chirality n, and the angle Ψ of the spin rotation around n.

Figure 3 shows the phase diagram in the (K1, H) plane. In contrast to collinear antiferromagnets, the order parameter n does not abruptly flop but rotates continuously away from the z-axis in the xz plane. For H > HR, with χH_R² = |K1| + K2 − λ²/(2J⊥), the rotation angle is set by the field strength. While n is constant in space, the angle ψ varies monotonically, ψ = q(r · ξ) with q = −(λ/2J⊥) sin θ, corresponding to an additional rotation of spins around n (see Fig. 4b), recently observed in Fe-langasite above HR ~ 4 T [39]. The wave vector q increases as the field strength grows and n approaches the field direction. This 'tilted spiral' (TS) state, with n tilted away from the c-axis, has both helical and cycloidal components. In another kind of modulated state, the domain wall array (DWA) shown in Fig. 4a, ψ is constant whereas n rotates in the xz plane along the y-axis perpendicular to the applied field (ϕ − ϕξ = π/2). This state only appears for relatively small K1 (see Fig. 3). At a critical field, Hc1, the energy of the domain wall, across which θ varies by 2π, vanishes, which marks the transition from the uniform spiral state to the DWA state. As H increases further, the domain wall energy becomes negative, and the domain walls form an array with a period that decreases with the field.
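The reorientation field and the modulation wave vector of the tilted spiral follow from χH_R² = |K1| + K2 − λ²/(2J⊥) and q = −(λ/2J⊥) sin θ; a sketch with hypothetical parameter values (arbitrary units, not fitted to Fe-langasite):

```python
import math

# Hypothetical model parameters (arbitrary units, illustration only).
K1, K2 = 0.5, 0.3   # anisotropy constants
lam = 0.4           # strength of the Lifshitz-invariant (DM-derived) coupling
J_perp = 1.0        # in-plane exchange scale
chi = 0.01          # magnetic susceptibility

# Reorientation field: chi * H_R^2 = |K1| + K2 - lambda^2 / (2 J_perp)
H_R = math.sqrt((abs(K1) + K2 - lam**2 / (2 * J_perp)) / chi)

# Tilted-spiral wave vector once n lies fully along the field (theta = pi/2):
# q = -(lambda / (2 J_perp)) * sin(theta)
q = -(lam / (2 * J_perp)) * math.sin(math.pi / 2)

print(H_R, q)
```

Note that |q| grows with sin θ, i.e., the modulation wave vector increases as n rotates toward the field direction, matching the field dependence described above.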
This state is similar to the 'mixed state' in collinear antiferromagnets [43], except that in our case n rotates through the angle 2π across the wall, since the uniform states with nz = ±1 have different energies for K1 ≠ 0. At the second critical field, Hc2, the transition between the DWA and TS states occurs and the modulation direction described by ξ rotates abruptly through 90°. Although the energy of all states in the phase diagram of Fig. 3 can be found analytically (see Supplementary Note 3), we also performed numerical simulations of the model Eq. (3) rewritten in terms of the two orthogonal unit vectors V1 and V2 (see the "Methods" section), which confirm the phase diagram of Fig. 3. We also found metastable multiply periodic states: the vortex array with a square lattice (Fig. 5a, b), the vortex chains (Fig. 5c, d), and the hexagonal crystal of coreless vortices (Fig. 5e, f), which can be stabilized by thermal fluctuations at elevated temperatures.

Topological magnetic defects in two spatial dimensions

Singular topological defects in a model with an SO(3) order parameter in two spatial dimensions, Z2 vortices with energy logarithmically diverging with the system size, have been discussed in ref. [25]. Here we study non-singular finite-energy defects in the uniform ground state. One might think that, similarly to magnetic Skyrmions, such defects can be classified by the topology of n(x, y) textures after the angle ψ is integrated out from Eq. (3), as was done for the one-dimensional states. However, in two spatial dimensions, the resulting energy functional, E[n], contains long-ranged Coulomb interactions between the 'electric' charges induced by spatial variations of n. These interactions suppress Skyrmions, which are 'charged' and have an infinite 'electrostatic' energy. The electrostatic potential, φel, is a variable dual to ψ, where ϵμν is the antisymmetric tensor (μ, ν = x, y).
The divergence of the left-hand side vanishes as a result of the global gauge invariance: Eq. (3) is unchanged under ψ → ψ + α. The electrostatic potential satisfies the Poisson equation, −Δφ_el = 4πρ_el, with the electric charge density ρ_el = (1/4π) n · (∂_x n × ∂_y n) − (λ/8πJ_⊥)[∇ × n]_z, the first term being the Skyrmion charge density. Equation (3) can then be rewritten with a potential U(n) = K_1(1 − cos θ) + ((K_2 − λ²/2J_⊥)/2) sin²θ − (χ/2)(H · n)², plus a last term that is the positive electrostatic energy with an n-dependent 'dielectric' function. Here Q_sk is the Skyrmion charge, defined by Eq. (8), which is similar to the Mermin-Ho relation for the circulation of the superfluid velocity in ³He-A [44]. Since for a finite-energy defect the integral over the infinite-radius circle in Eq. (8) vanishes, Q_sk = 0, in agreement with π_2(SO(3)) = 0. A stable finite-energy defect in the spiral state with n ∥ z is shown in Fig. 6. In polar coordinates (ρ, φ), ϕ = φ + π/2, ψ = −φ and θ = θ(ρ) monotonically increases from 0 at ρ = 0 to 2π at ρ = ∞. The n-configuration shown in Fig. 6a is that of a target skyrmion [45-47] with Q_sk = 0, and the angle ψ forms a vortex with the winding number −1 (Fig. 6b). As in the vortices in type-II superconductors, the covariant derivative D_μψ vanishes far away from the vortex. However, it also vanishes at ρ = 0, so that the ψ-vortex has no core. Note that ϕ + ψ = const in the vortex center, where θ = 0, corresponds to non-rotating spins. This defect is stabilized by the LI in Eq. (3), which favors ψ varying along n and θ varying along the direction normal to n. Both trends are fulfilled in the coreless vortex. Topological protection is ensured by the existence of a non-contractible loop in the SO(3) manifold: π_1(SO(3)) = Z_2. A path from the center of the defect to infinity along any radial direction is such a loop. In the center of the defect and at spatial infinity n_z = +1, whereas inside the green ring in Fig.
6a n_z is negative, corresponding to the reversal of both the vector chirality of spins in triangles and the spiral helicity. The rotational symmetry of the defect turns the calculation of θ(ρ) into a one-dimensional problem (see Supplementary Note 4).

3D Skyrmion

The third homotopy group, π_3(SO(3)) = Z, allows for particle-like topological defects that have a finite spatial extent in all three directions. They are closely related to hedgehog solitons in the nonlinear meson model of T.H.R. Skyrme [21], carrying an integer topological charge identified with the baryon number, expressed through the antisymmetric Levi-Civita tensor ε_ijk (i, j, k = x, y or z) and L_i = U†∂_iU, with U an SU(2) matrix describing the four meson fields, U = Φ_0 + i(Φ · σ), where σ is a vector composed of Pauli matrices. In Skyrme's baryon, Φ_0 depends on the radius r, varying from −1 (the south pole of the 3-sphere) at r = 0 to +1 (the north pole) at infinity, and the vector Φ = (Φ_x, Φ_y, Φ_z) is directed along the radius vector r (a hedgehog), which guarantees that the 3-sphere formed by the meson fields wraps once around three-dimensional Euclidean space. This configuration was used as the initial state for numerical studies of 3D topological excitations in the uniform spiral state.

[Fig. 5 caption: a, b The vortex array with a square lattice found at low applied magnetic fields (H < H_R). c, d The alternating strings of merons and antimerons found at large applied magnetic fields (H > H_R). e, f The non-singular hexagonal vortex crystal. The first row (panels a, c and e) shows the vector chirality n and the second row (panels b, d and f) shows the corresponding angle ψ. In-plane components of n are indicated with arrows; n_z and ψ are color-coded. The angle ψ is plotted modulo 2π and the lines in the ψ-plots are branch cuts, across which ψ discontinuously changes by 2π.]

E. Barts and M. Mostovoy

The collapse of Skyrme's hedgehog is prevented by the terms of fourth order in spatial derivatives of Φ_α.
This stabilization mechanism is inefficient in Fe-langasite, where the fourth-order terms are relatively small. 3D defects in chiral materials can instead be stabilized by DM interactions. The dependence of the 3D Skyrmion energy on the length scale R (for a fixed shape) is given by E(R) = aR − bR² + cR³, where the first term is the positive exchange energy, the second term is the negative energy resulting from the LI, and the third term is the positive anisotropy energy counted from the energy of the uniform ground state (a, b, c > 0). The local minimum of E(R) corresponding to a metastable Skyrmion appears for b² > 3ac. Our numerical simulations with periodic boundary conditions in all three directions show that the λ required to stabilize the 3D defect exceeds the critical value above which the uniform spiral state becomes unstable towards additional periodic modulations, i.e., it transforms into the TS or DWA state in zero magnetic field. However, we have found a stable 3D Skyrmion in slabs with open boundary conditions along the z-direction, periodic boundary conditions along the x and y directions, and a surface anisotropy favoring n (anti)parallel to the c-axis, which suppresses the instability of the uniform state. This mechanism is similar to the stabilization of Hopfions in films of liquid crystals by boundary conditions [48]. Figure 7a shows that the 3D Skyrmion is an axially symmetric hedgehog elongated along the c-axis. The gray surface is the surface Φ_0 = cos(θ/2) cos((ψ + ϕ)/2) = 0 and the arrows show the direction of Φ = (Φ_x, Φ_y, Φ_z) = (sin(θ/2) sin((ψ − ϕ)/2), sin(θ/2) cos((ψ − ϕ)/2), cos(θ/2) sin((ψ + ϕ)/2)) at this surface. Φ_0 = −1 in the center of the 3D Skyrmion and Φ_0 = +1 at the periphery, so that the topological charge H = −1 (see also Supplementary Note 5). The n-configuration in the xy plane passing through the center of the defect is the 2D target skyrmion (Fig. 7b), whereas the xz cut through the defect (see Fig. 7c) shows a doughnut shape.
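The metastability condition quoted above follows from elementary calculus: E'(R) = a − 2bR + 3cR² has real stationary points (a minimum-maximum pair) only when its discriminant is non-negative, i.e., b² ≥ 3ac. A quick symbolic check, offered as a standalone illustration rather than part of the paper's analysis:

```python
import sympy as sp

a, b, c, R = sp.symbols("a b c R", positive=True)
E = a * R - b * R**2 + c * R**3  # Skyrmion energy vs. length scale

# Stationary points of E(R): roots of the quadratic E'(R) = a - 2bR + 3cR^2
roots = sp.solve(sp.diff(E, R), R)

# Real stationary points require a non-negative discriminant:
# (2b)^2 - 4*(3c)*a >= 0, i.e. b^2 >= 3ac.
disc = sp.discriminant(sp.diff(E, R), R)
print(sp.simplify(disc))  # simplifies to 4*b**2 - 12*a*c
```

For b² > 3ac strictly, the larger root is a local minimum of E(R), matching the metastable Skyrmion described in the text.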
In fact, the n-part of the 3D Skyrmion is a Hopfion, similar to Hopfions in ferromagnets and liquid crystals [49-53], and the 3D topological charge equals the Hopf number of the n-texture, written in terms of the vector potential a_i = −D_iψ = V_1 · ∂_iV_2 and the corresponding magnetic field b = [∇ × a] [54,55]. Figure 7d shows the false-color plot of the angle ψ at the n_z = −1/2 surface (a torus). The angle ψ winds around the torus, which reflects the fact that the Hopf number is the linking number for constant-n loops [54] (see Fig. 7e). The change of the angle ψ along the loop is Δψ = −∮dx · a = −4π. Importantly, the 3D Skyrmion is not merely a Hopfion, since the vector chirality n is only a part of the order parameter.

DISCUSSION

The importance of the larger order parameter space of triangular magnets with the 120° ordering for critical phenomena and topological defects was noted some time ago [25]. An additional ingredient discussed in this paper is the lack of inversion symmetry in the crystal lattice, which gives rise to DM interactions resulting in additional long-period modulations of the 120° spin structure. The spin non-collinearity at the scale of one crystallographic unit cell gives rise to new magnetic phases and topological defects at a much larger length scale determined by the strength of the DM interactions. We derived an effective model describing large-scale spin modulations in Fe-langasite, in particular the experimentally observed tilted spiral phase, and showed that the three-dimensional order parameter space of this non-collinear antiferromagnet allows for complex spin states and unconventional topological magnetic defects, such as the coreless vortex tube and the three-dimensional Skyrmion. The formal equivalence of the parameter spaces of antiferromagnets with a 120° spin ordering and superfluid ³He-A calls for a study of magnetic analogs of the wealth of topological defects in the superfluid system [2,3].
Note, however, that ³He and the Skyrme model do not allow for the LIs stabilizing nanosized topological defects in the chiral Fe-langasite. There are other non-collinear antiferromagnets with frustrated exchange interactions that can host unusual topological defects: manganese nitrides with the cubic inverse perovskite crystal structure, showing a variety of non-collinear spin structures and a giant negative thermal expansion effect [56-58]; Pb_2MnO_4, with a non-centrosymmetric tetragonal crystal lattice and a rare 90° spin ordering [59]; multiferroic hexagonal manganites, with a trimerized triangular spin lattice and strongly coupled structural, ferroelectric and magnetic defects [60-62]; swedenborgites, with alternating triangular and Kagome spin lattices, which, like Fe-langasite, are both frustrated and non-centrosymmetric [40,63]; and the conducting non-collinear antiferromagnets Mn_3Ge and Mn_3Sn, showing large anomalous Hall and Nernst effects and allowing for electric control of magnetic states [27-34,64]. The unconventional topological defects discussed in this paper can be a new avenue of research in AFM spintronics.

The effective description of the Heisenberg model

The microscopic expression for the exchange energy with five exchange constants (see Fig. 1b) is written in terms of V_1 and V_2 and then, using Eq. (1), in terms of θ, ϕ and Ψ. Next, we use Ψ(r) = Qz + ψ(r) and average the exchange energy over the short-period spin rotations. The resulting expression depends only on the slowly varying variables θ, ϕ and ψ and is expanded in powers of the gradients of these three angles. Details can be found in Supplementary Note 1. LIs for chiral antiferromagnets with a 120° spin order, such as swedenborgites and langasites, can be written in terms of the two vectors, V_1 and V_2, and their derivatives. They can easily be found using one-dimensional complex representations of 3_z.
To this end we introduce linear combinations of V_1 = (X_1, Y_1, Z_1) and V_2 = (X_2, Y_2, Z_2):

R_+ = X_1 + iX_2 + i(Y_1 + iY_2) = e^{i(ϕ−Ψ)}(cos θ − 1),
R_− = X_1 + iX_2 − i(Y_1 + iY_2) = e^{−i(ϕ+Ψ)}(cos θ + 1),

together with Z = Z_1 + iZ_2, and their complex conjugates, denoted by R̄_+, R̄_− and Z̄, respectively. These quantities transform in a simple way under the generators of the P321 group, 3_z and 2_y (see Table 1). The transformation rules follow directly from the symmetry properties of the order parameter,

3_z: ϕ → ϕ + 2π/3, Ψ → Ψ − 2π/3;
2_y: ϕ → −ϕ, Ψ → π − Ψ, (12)

with θ invariant under these transformations. Using these transformation properties, one obtains five independent LIs favoring an additional modulation with an in-plane wave vector: Im(R_+ ∂_+ R_−), …, and Im(R_− ∂_− Z), where A ∂_i B = A∂_iB − B∂_iA and ∂_± = ∂_x ± i∂_y. Two of the LIs vanish upon averaging over the fast spin rotations [39]. Further details and the microscopic derivation of the LI from the DMI between nearest-neighbor triangles in the ab plane can be found in Supplementary Note 2.

Numerical simulations

We rewrite the energy of the effective model Eq. (3) in terms of the unit vectors V_1 and V_2 (Eq. (13)), where n = V_1 × V_2, and V_1 and V_2 are slowly varying vectors with Ψ replaced by ψ. A term with a large J_ort > 0 is added to ensure the orthogonality of V_1 and V_2. We then discretize Eq. (13) and minimize the energy by solving two coupled Landau-Lifshitz-Gilbert equations for the unit vectors V_1 and V_2 with an artificially large Gilbert damping.

DATA AVAILABILITY

All data analyzed during the current study are available from the authors upon reasonable request.

Table 1. Transformation properties of R_±, Z, and their complex conjugates (see Eq. (11)) under the generators of the P321 group, 3_z and 2_y (here, ω = e^{i2π/3} and ω̄ = e^{−i2π/3}).
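The relaxation strategy described under "Numerical simulations" (two coupled unit-vector fields with an orthogonality penalty, driven to a minimum by strongly damped dynamics) can be sketched in a toy form. Everything below is an illustrative stand-in, not the paper's model: a 1D chain, a plain nearest-neighbor exchange energy, and hand-picked values of the penalty J_ort and time step:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64            # toy 1D chain of coupled triads
J_ort = 25.0      # penalty enforcing V1 . V2 = 0 (illustrative value)
dt = 0.005        # overdamped (damping-dominated) time step

def normalize(V):
    return V / np.linalg.norm(V, axis=1, keepdims=True)

# Random initial unit-vector fields V1(x), V2(x)
V1 = normalize(rng.normal(size=(N, 3)))
V2 = normalize(rng.normal(size=(N, 3)))

def effective_field(V, W):
    """-dE/dV for E = -sum_i V_i.V_{i+1} + J_ort * sum_i (V_i.W_i)^2."""
    H = np.roll(V, 1, axis=0) + np.roll(V, -1, axis=0)       # exchange
    H -= 2.0 * J_ort * np.sum(V * W, axis=1, keepdims=True) * W
    return H

def energy(V1, V2):
    ex = -np.sum(V1 * np.roll(V1, -1, axis=0)) - np.sum(V2 * np.roll(V2, -1, axis=0))
    return ex + J_ort * np.sum(np.sum(V1 * V2, axis=1) ** 2)

E0 = energy(V1, V2)
for _ in range(4000):
    # Overdamped LLG-like step: move each vector toward its effective
    # field, keeping only the component transverse to the vector itself.
    for V, W in ((V1, V2), (V2, V1)):
        H = effective_field(V, W)
        H -= np.sum(H * V, axis=1, keepdims=True) * V        # project out ||-part
        V += dt * H
    V1, V2 = normalize(V1), normalize(V2)

E1 = energy(V1, V2)
print("energy:", E0, "->", E1)
print("max |V1.V2|:", np.max(np.abs(np.sum(V1 * V2, axis=1))))
```

The run should show the energy dropping and the residual overlap V_1 · V_2 pushed toward zero by the penalty, mimicking how the large-J_ort term keeps the triad orthonormal during relaxation.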
Research on a reporting scheme for graded power-failure and recovery events from low-voltage acquisition terminals

For a long time, power supply companies have relied mainly on the power supply service command platform for organizing and dispatching repair work orders for low-voltage network power outages. However, the analysis of the outage location and scale, the required personnel, and the repair plan is based largely on judgments made after affected users phone in, which is prone to misjudgment of the workload or recovery time, low processing efficiency, high cost, and many safety risks. This paper analyzes power-failure and recovery events at the substation-area, meter-box and single-user levels and proposes a three-level event reporting scheme that solves the problem of reporting events immediately after a power outage. The scheme makes full use of the acquisition capabilities of the electricity information collection system and comprehensively improves customer service response speed and the level of distribution network operation and management.

Introduction

For a long time, power supply companies have relied mainly on the power supply service command platform for troubleshooting the low-voltage network, including organizing and dispatching work orders for power outages [1-2]. However, the analysis of the outage location and scale, the required personnel, and the repair plan is based largely on judgments made after affected users phone in, which is prone to misjudgment of the workload or recovery time, low processing efficiency, high cost, and many safety risks in follow-up on-site processing [3-5]. The electricity information acquisition system is connected with the power supply service command platform.
By pushing power-failure and recovery events of low-voltage household meters, acquisition units, and the main meter of the substation area or concentrator in real time, the electricity information acquisition system can assist the power supply service command platform in comprehensively analyzing and judging the authenticity, location, cause, nature and scope of a fault [6-8]. This shortens the troubleshooting response time, reduces cost, and improves customer satisfaction and system operation indices [9-10]. This paper analyzes power-failure and recovery events of the substation area, of monitoring points for key branches and users, and of single users and meter boxes, and proposes a three-level event reporting scheme. On the basis of the existing concentrator, electric energy meters and acquisition units in the substation area, the scheme adds monitoring devices at key monitoring points and combines the active reporting of power outage events with the outage-fault analysis system of the main substation to realize three-level online monitoring of the power supply status from the substation area down to the household meter. This solves the problem of reporting events immediately after a power outage.

Scheme overview

The low-voltage network currently contains metering acquisition terminals (Type I and Type II concentrators, monitoring and metering terminals for distribution and transformation, load management terminals, and data acquisition terminals for station electric energy), electric energy meters (three-phase and single-phase) and acquisition units (Type I and Type II). Application scenarios can be divided into the following types according to the data acquisition mode:

(1) The wireless public network is used between the main substation and the terminal for data communication.
RS485 is used between the terminal and the electric energy meter for data communication;

(2) The wireless public network is used between the main substation and the terminal; low-voltage power line carrier or micropower wireless is used between the terminal and the electric energy meter;

(3) The wireless public network is used between the main substation and the terminal; low-voltage power line carrier or micropower wireless is used between the terminal and the acquisition unit; RS485 is used between the acquisition unit and the electric energy meter;

(4) The wireless public network is used between the main substation and the electric energy meter.

Classified by the device experiencing them, power-failure and recovery events include electric energy meter, acquisition unit and terminal events. Classified by scope, events fall into the following types:

(1) Event of a single device (a single electric energy meter, acquisition unit or terminal);
(2) Event of multiple devices (multiple electric energy meters, acquisition units or terminals);
(3) Event of all devices (all electric energy meters, acquisition units or terminals).

To inform the main substation in time under these different device and scope combinations, the scheme covers the following four aspects:

(1) Event sensing: how a device correctly senses a power-failure and recovery event;
(2) Event reporting: how the communication network supports reporting of a power-failure and recovery event;
(3) Event research and judgment: how the substation correctly analyzes and judges a power-failure and recovery event;
(4) Event processing.
The way the substation configures a follow-up processing flow for a power-failure and recovery event. Reporting for an electric energy meter is currently designed to use the power-failure and recovery event before reporting the operating status once power returns, and State Grid power companies already have complete acquisition schemes for power-failure events after terminals and electric energy meters are powered on again. Therefore, this scheme only addresses how to report a power-failure event rapidly. In the communication network, if a node senses that a device at the user side has a power-failure event, corresponding reporting mechanisms must be designed to support reporting of events of different coverage scopes.

(1) Active reporting mechanism. Reporting of a single-node event in the communication network requires an active reporting mechanism: a node that detects a power-failure event (carrying, for example, the communication address of the affected node) reports it upstream without waiting to be polled.

(2) Conflict detection/collision avoidance mechanism. Reporting of a multiple-node event in the communication network requires a conflict detection/collision avoidance mechanism. Limited by the bandwidth of the communication channel, event reports will inevitably be lost if many nodes report simultaneously. Adding a conflict detection/collision avoidance mechanism reduces the probability of mutual conflicts among event reporting signals and improves the success rate of event reporting.

Reporting scheme for three-level power-failure and recovery events

According to the power-failure scenario, power-failure events are divided into three major categories: power failure and recovery of the substation area, of monitoring points for key branches and users, and of single users and meter boxes.
Power failure and recovery of the substation area

Power failure of an entire substation area caused by distribution-transformer or line faults greatly affects production, daily life and personal safety. It holds the top grade in the three-level event reporting and is the most urgent power-failure condition to repair. Reporting power-failure and recovery events of the substation area turns "passive repair" into "active operation and maintenance", which is crucial for enhancing power supply reliability, reducing user complaints and improving power supply service quality. By cause, power failure and recovery of the substation area can be divided into the following types.

Power failure and recovery of the substation area caused by a low-voltage branch switch abnormality. A branch monitoring device with a supercapacitor is installed at the output end of the low-voltage branch switch in the power distribution room to acquire the status and trip causes of the low-voltage switch in real time. In the event of a power failure, the event is reported to the concentrator/intelligent operation and inspection terminal via the carrier/wireless channel, and the concentrator/intelligent operation terminal reports it to the main-system substation, as shown in Figure 2. Because the intelligent operation monitoring terminal itself loses power, a local communication module with a supercapacitor must be provided. In the event of a power failure, communication over the local channel with the electric energy meter at the user end or the acquisition unit is used to verify a real power-failure event before reporting it to the main-system substation, as shown in Figure 3.
Power failure and recovery of monitoring points for key branches and users

A monitoring device with a supercapacitor is installed at key monitoring points; it can be an acquisition unit, an electric energy meter or a special device that generates a power-failure event after an outage occurs. (3) The local communication channel uses the collision avoidance mechanism; the electric energy meter communication module reports a power-failure and recovery event to the concentrator directly or via adjacent nodes. (4) The concentrator reports the power-failure and recovery event to the main acquisition substation via GPRS.
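The active-reporting and collision-avoidance mechanisms above can be sketched in a few lines: each node that senses a power failure reports spontaneously, but picks a random backoff slot so that simultaneous reports from many nodes do not all collide on the shared channel. The event level names, node names, and slot counts below are illustrative assumptions, not values from the scheme:

```python
import random
from collections import Counter

# Three-level classification of power-failure events (labels illustrative)
LEVELS = {
    1: "substation area",
    2: "key branch/user monitoring point",
    3: "single user/meter box",
}

def report_round(nodes, window):
    """One slotted-backoff round: each node picks a random slot in
    [0, window). A report gets through only if no other node chose
    the same slot (a simple collision model)."""
    slots = {node: random.randrange(window) for node in nodes}
    counts = Counter(slots.values())
    return [node for node, s in slots.items() if counts[s] == 1]

random.seed(1)
nodes = [f"meter-{i:03d}" for i in range(50)]   # 50 meters lose power at once
for window in (8, 64, 512):
    ok = report_round(nodes, window)
    print(f"window={window:3d}: {len(ok)}/{len(nodes)} reports collision-free")
```

Widening the backoff window trades a little reporting latency for a much higher collision-free rate, which is the essence of the conflict detection/collision avoidance mechanism described in the scheme.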
Tunable fluorescent probes for detecting aldehydes in living systems

Aldehydes, pervasive in various environments, pose health risks at elevated levels due to their collective toxic effects via shared mechanisms. Monitoring total aldehyde content in living systems is crucial because of their cumulative impact. Current methods for detecting cellular aldehydes are limited to the UV and visible ranges, restricting their use in living systems. This study introduces an innovative reaction-based trigger that leverages the exceptional selectivity of 2-aminothiophenol for aldehydes, leading to the production of dihydrobenzothiazole and activating a fluorescence response. Using this trigger, we developed a series of fluorescent probes for aldehydes by altering the fluorophore, allowing excitation and emission wavelengths across the visible to near-infrared spectral regions without compromising the reactivity of the bioorthogonal moiety. These probes exhibit remarkable aldehyde chemoselectivity, rapid kinetics, and high quantum yields, enabling the detection of diverse aldehyde types, both exogenous and endogenous, within complex biological contexts. Notably, we employed the most red-shifted near-infrared probe from this series to detect aldehydes in living systems, including biliary organoids and mouse organs. These probes provide valuable tools for exploring the multifaceted roles of aldehydes in biological functions and diseases within living systems, laying the groundwork for further investigations.

and an emission of 710 nm. Excitation and emission slits were set to five for each experiment.

Microwell Plate Reader: Fluorescence intensity was measured on a BioTek Synergy H1. An excitation of 485 nm and an emission of 520 nm were used with probe 1a.
Biliary organoid retrieval. Organoids were retrieved from Cultrex by removing the culture media and adding 500 µL Cultrex Organoid Harvesting Solution (R&D Systems; 3700-100-01) to the wells of a 48-well plate. The plate was incubated on a nutating platform for 45-60 min at 4 °C. After incubation, organoids were retrieved by resuspending the well contents 5-10x with a P1000 pipettor, allowing organoids to sediment, and then removing supernatant with a P1000 pipettor until only 200 µL remained in the well. 200 µL of solution was kept in the well at all times to avoid discarding organoids. Organoids were overlaid with 200 µL Biliary Expansion Media until treatment.

VII. Animal Studies: C57Bl/6 mice were used for this study. All animal studies were reviewed and approved by the Institutional Animal Care and Use Committee of Emory University. Male Balb/cJ mice 9 weeks of age (Jackson Labs) were maintained in a regulated humidity and temperature environment under pathogen-free conditions and provided food and water ad libitum under a controlled light and dark cycle. Organs derived from CO₂-euthanized mice were perfused by administering 10 mL of sterile PBS through the heart. Following perfusion, the heart, kidney, brain, liver, lungs, and spleen were collected, washed with ice-cold PBS, and residual PBS was removed with blotting paper. Organs were cut into pieces and snap-frozen for long-term storage at −80 °C. All mouse experiments were conducted in accordance with protocols approved by the Institutional Animal Care and Use Committee of Emory University School of Medicine.

Rate of benzothiazole formation in solution. The rate of benzothiazole formation was determined upon the reaction of probe 1a (10 µM, 1 equiv.) with propanal (1 µM, 0.1 equiv.)
in DMSO. The reaction was monitored over 30 min, with fluorescence intensity measurements collected every 1 min by fluorimeter. Increases in fluorescence were plotted against time and a one-phase association curve was used to determine the rate constant. The reaction was performed in triplicate.

Flow cytometry analysis of cell death by probe 1a. T-47D and MCF10A cells were incubated with 10 µM of probe 1a for 2 h or 24 h, with equal amounts of DMSO as control. Cells were then detached with trypsin and stained using Annexin V/PI following the manufacturer's protocol. To avoid fluorescent crosstalk, Annexin V (AV) conjugated to Pacific Blue or FITC was used to determine apoptosis. Propidium iodide (PI) was used to determine necrosis within the cellular populations. Cells were analyzed via flow cytometry within 1 h to quantify cell death. FlowJo software was used to analyze the cytometry data. PI and AV controls were used to determine quadrant placement. All experiments were performed in triplicate.

Probe 1a limit of detection of exogenous aldehyde in live cells. Live T-47D cells were plated on a 24-well IBIDI plate in supplemented RPMI media and incubated for 24 h at 37 °C and 5% CO₂. Cells were then treated with 10 µM of probe 1a and incubated for an additional 1 h. Cells were washed with PBS and incubated with increasing levels of propanal (1 µM, 10 µM, 25 µM, and 100 µM) for 1 h. Cells were immediately monitored for fluorescence increase with a microwell plate reader (kinetic run, ex. 485 nm, em. 520 nm).
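The one-phase association fit used above for the rate constant can be reproduced with SciPy. The synthetic data below (plateau, rate, noise level, and the per-minute sampling) are illustrative stand-ins for the actual fluorimeter readings, not values from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def one_phase(t, y0, plateau, k):
    """One-phase association: Y = Y0 + (Plateau - Y0) * (1 - exp(-k*t))."""
    return y0 + (plateau - y0) * (1.0 - np.exp(-k * t))

# Simulated 30 min kinetic run, one read per minute (made-up values)
t = np.arange(0, 31, 1.0)                      # minutes
rng = np.random.default_rng(7)
y = one_phase(t, 50.0, 900.0, 0.15) + rng.normal(0.0, 10.0, t.size)

# Fit: initial guesses from the first/last reads and a rough rate
popt, pcov = curve_fit(one_phase, t, y, p0=(y[0], y[-1], 0.1))
y0_fit, plateau_fit, k_fit = popt
print(f"fitted k = {k_fit:.3f} min^-1 (simulated with k = 0.150)")
```

The fitted k is the observed rate constant; with real plate-reader data one would replace `t` and `y` with the measured time points and fluorescence intensities.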
Live-cell monitoring of acetaldehyde levels through the addition of ethanol and an ALDH2 inhibitor. Live T-47D cells were plated on glass-bottomed 35 mm dishes in supplemented RPMI and incubated for 24 h. Cells were then incubated with a 10 µM concentration of probe 1a with or without the ALDH2 inhibitor DDZ (5 µM). After 1 h, the cells were washed once with PBS, then incubated with 10 mM ethanol for 1 h. Prior to imaging, cells were stained with CellMask and incubated for 10 min. Cells were subsequently washed 3 times in cold PBS (5 min) and stained with 1 µg/mL Hoechst for 5 min. Cells were imaged on a Leica SP8 confocal microscope and images were processed and analyzed using ImageJ software.

Live-cell monitoring of endogenous aldehyde levels in the presence of an ALDH2 activator and inhibitor. Live T-47D cells were plated on glass-bottomed 8-well plates in supplemented RPMI media and incubated for 24 h. Cells were then treated with DDZ (20 µM) or Alda-1 (50 µM). After 1 h, cells were cotreated with 10 µM of probe 1a with or without DDZ (20 µM) or Alda-1 (50 µM). Cells were then stained with CellMask and incubated for 10 min. Cells were subsequently washed 3 times with PBS (5 min) and stained with 1 µg/mL Hoechst for 5 min. Cells were then placed in fresh supplemented RPMI media and imaged on a Leica SP8 confocal microscope. The images were processed and analyzed using ImageJ software to determine pixel intensity per cell. Data were normalized to probe 1a-only wells to show the increases and decreases in intensity signal from DDZ and Alda-1, respectively.
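The normalization step above (fold change in mean pixel intensity relative to the probe-only wells) amounts to a simple ratio. The intensity numbers below are made-up placeholders used only to show the arithmetic:

```python
import numpy as np

# Mean pixel intensity per cell for each condition (illustrative numbers only)
intensity = {
    "probe_only": np.array([105.0, 98.0, 110.0]),
    "probe_DDZ": np.array([160.0, 150.0, 172.0]),    # ALDH2 inhibited -> more aldehyde
    "probe_Alda1": np.array([60.0, 72.0, 66.0]),     # ALDH2 activated -> less aldehyde
}

# Fold change relative to the probe-only baseline
baseline = intensity["probe_only"].mean()
fold = {cond: vals.mean() / baseline for cond, vals in intensity.items()}
for cond, f in fold.items():
    print(f"{cond}: {f:.2f}x probe-only signal")
```

With ImageJ-derived per-cell intensities in place of the placeholder arrays, the same ratio reproduces the reported normalization: DDZ-treated wells land above 1 and Alda-1-treated wells below 1.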
Flow cytometry analysis of cell death by probe 1c. T-47D cells were incubated with 15 µM of probe 1c for 2 h, with equal amounts of DMSO as control. Cells were then detached with trypsin and stained using Annexin V/PI following the manufacturer's protocol. To avoid fluorescent crosstalk, Annexin V (AV) conjugated to FITC was used to determine apoptosis. Propidium iodide (PI) was used to determine necrosis within the cellular populations. Cells were analyzed via flow cytometry within 1 h to quantify cell death. FlowJo software was used to analyze the cytometry data. PI and AV controls were used to determine quadrant placement. All experiments were performed in triplicate.

Cellular monitoring of exogenous aldehydes with probe 1c. T-47D cells were plated in 6 cm culture dishes and incubated for 24 h. Cells were treated with 20 µM or 100 µM propanal for 1 h before being washed with PBS. 15 µM of probe 1c in media was added to the cells and incubated for an additional 1 h. Cells were then fixed with methanol and mounted with Fluoroshield mounting media with DAPI. Cells were then imaged on a Leica Stellaris 8 confocal microscope.

Live-organoid monitoring of endogenous aldehyde levels in the presence of an ALDH2 activator and inhibitor. Live murine biliary organoids were retrieved from the extracellular matrix using the protocol above. Organoids were pretreated with 20 µM DDZ and 10 mM ethanol, or with 50 µM Alda-1, for 1 h. Organoids were then cotreated with 15 µM probe 1c with or without 20 µM DDZ and 10 mM ethanol or 50 µM Alda-1 for 4 h. Subsequently, organoids were treated with CellMask 488 and incubated for 10 min, followed by 3 PBS washes and Hoechst staining for 5 min. Organoids were placed in fresh media and imaged on a Leica Stellaris confocal microscope. The average pixel intensity of z-stacks was determined and plotted for 20 organoids from each trial.
Probe 1c efficiency inside mouse lung tissue. Freshly excised mouse lungs were obtained and placed into 15 µM of probe 1c in PBS. Tissues were gently rocked at 4 °C in the dark for 6 h or 24 h. Tissues were removed from treatment and snap-frozen.

[Figure S1a synthesis scheme residue: Step-II through Step-VII; 7a (R = Me, 72%, 6 h).]

Representative procedure for the synthesis of 5-(1,3-dioxolan-2-yl)-2-nitrobenzenethiol (Figure S1a, step II): An oven-dried 100 mL RB flask was charged with the protected aldehyde (1.0 equiv) in 10 mL 1,4-dioxane and 5 mL H₂O and placed at 0 °C. Then, sodium sulfide nonahydrate (1.3 equiv) was added portionwise at the same temperature and the mixture was stirred for 20 h at rt. The reaction progress was monitored by TLC. Upon completion, the reaction mixture was quenched slowly with dil. HCl and extracted with ethyl acetate. The organic extracts were combined, dried over anhydrous Na₂SO₄, and concentrated under reduced pressure. The residue was purified by silica gel column chromatography using hexane/ethyl acetate (5:2) as eluent to afford the thiol adduct, isolated as a colorless thick oil in 75% yield. The thiol adduct was dissolved in DMSO, ethanethiol (10.0 equiv) and I₂ (0.1 equiv) were added, and the mixture was stirred at 50 °C for 7 h. The reaction progress was monitored by TLC. Upon completion, the reaction mixture was quenched with ice-cold water and extracted with ethyl acetate. The organic extracts were combined, dried over anhydrous Na₂SO₄, and concentrated under reduced pressure. The residue was … ¹³C NMR: δ 144.4, 138.2, 126.3, 125.4, 124.0, 102.2, 65.5, 32.3, 14.3.
HRMS (ESI): [M]+ Calcd for [C9H8BrNO4] 272.9637, found 272.9630.

Representative procedure for the synthesis of 4-(ethyldisulfaneyl)-3-nitrobenzaldehyde (Figure S1b, Step II): (I) An oven-dried 100 mL RB flask was charged with the protected aldehyde (1.0 equiv) in 10 mL 1,4-dioxane and 5 mL H2O and placed at 0 °C. Sodium sulfide nonahydrate (1.3 equiv) was then added portionwise at the same temperature, and the mixture was stirred for 15 h at rt. Reaction progress was monitored by TLC. Upon completion, the reaction mixture was quenched slowly with dil. HCl and extracted with ethyl acetate. The organic extracts were combined, dried over anhydrous Na2SO4, concentrated under reduced pressure, and carried directly to the next step.

(II) The crude thiol adduct was dissolved in DMSO, ethanethiol (10.0 equiv) and I2 (0.1 equiv) were added, and the mixture was stirred at 50 °C for 7 h. Reaction progress was monitored by TLC. Upon completion, the reaction mixture was quenched with ice-cold water and extracted with ethyl acetate. The organic extracts were combined, dried over anhydrous Na2SO4, concentrated under reduced pressure, and carried directly to the next step.
HRMS (ESI): [M-H]- Calcd for [C50H54B2F4N6O8S2] 1027.3489, found 1027.3480.

Quantum yield determination of probe 1a. Quantum yield was calculated using the area under the curve of fluorescence versus absorption. Absorption was measured with a Cary 3500 UV-Vis using four separate concentrations of each sample, with values divided by ten for accurate analysis. Fluorescence area was measured with a Cary Eclipse fluorimeter using the same samples at a 10X dilution. All measurements were run in triplicate. Quantum yields of probe 1a and the corresponding propanal product (probe 2a) were determined using Cy2 as a reference compound.

Frozen samples were submitted to the Emory Cancer Tissue and Pathology core for sectioning. Sectioned samples were imaged on the Leica Stellaris confocal microscope.

IX. Supplementary Figure 1: a) Synthesis of probe 1a and product with aldehyde 2a.

Figure 3: Flow cytometry analysis of cell death by probe 1a. Cell death of T-47D cells treated with probe 1a (10 µM) for 2 h and 24 h. Cell death of MCF 10A cells treated with probe 1a (10 µM) for 2 h and 24 h. All experiments showed no increase in cell death compared to naïve cells.

Live-cell monitoring of endogenous aldehyde levels in the presence of ALDH2 activator and inhibitor. Cells were treated with 10 µM of probe 1a with or without DDZ (20 µM) or Alda-1 (50 µM). Average pixel intensity per area shows that the addition of DDZ increases pixel intensity, while the addition of Alda-1 decreases signal. These results are as expected in relation to the concentration of available aldehydes in the cells.

11B NMR (128 MHz, CDCl3): δ 1.24 (t, J = 32.5 Hz).
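The relative (reference-compound) quantum-yield method described above can be sketched numerically as below. The slopes and the reference quantum-yield value are hypothetical placeholders, not values from this work; the standard relation is Φ_s = Φ_ref · (m_s/m_ref) · (n_s/n_ref)², where m is the slope of integrated fluorescence versus absorbance and n the solvent refractive index.

```python
def quantum_yield(phi_ref, slope_sample, slope_ref, n_sample=1.0, n_ref=1.0):
    """Relative quantum yield from the slopes of integrated fluorescence
    vs. absorbance for sample and reference (n = solvent refractive index)."""
    return phi_ref * (slope_sample / slope_ref) * (n_sample / n_ref) ** 2

# Hypothetical slopes from a four-concentration dilution series, with both
# sample and reference measured in the same solvent (index ratio = 1).
phi_sample = quantum_yield(phi_ref=0.12, slope_sample=1.5e6, slope_ref=2.0e6)
# phi_sample ≈ 0.12 * 0.75 ≈ 0.09
```

The slope-based form is preferred over a single-point ratio because it averages out dilution errors across the concentration series.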
19F NMR (376 MHz, CDCl3): δ [...] (J = 62.2, 31.1, 16.3 Hz). HRMS (ESI): [M]+ Calcd for [C43H46BF2N5O4S2] 809.3052, found 809.3052.

XV. Figure 7: Fluorescence characterization of probe 1d. Probe 1d was mixed with propanal to create benzothiazole product 2d. The product was then tested for absorbance with a UV-Vis and excitation with a fluorimeter.

XVI. Figure 8: Flow cytometry analysis of cell death by probe 1c. The T-47D cell line was treated with probe 1c (5 µM or 15 µM) for two hours. A slight increase in cell death was observed upon addition of 15 µM of probe 1c, but the probe remains non-cytotoxic per regulations.

Confocal analysis of probe 1c in T-47D cells. T-47D cells were treated for 1 h with exogenous propanal at varying concentrations. Cells were washed with PBS, and probe 1c was subsequently added and incubated for 1 h. Confocal imaging showed an increase in fluorescent intensity with increasing exogenous aldehyde concentration.
Survival Nomogram for Young Breast Cancer Patients Based on the SEER Database and an External Validation Cohort Background Young breast cancer (YBC) patients are more prone to lymph node metastasis than other age groups. Our study aimed to investigate the predictive value of lymph node ratio (LNR) in YBC patients and create a nomogram to predict overall survival (OS), thus helping clinical diagnosis and treatment. Methods Patients diagnosed with YBC between January 2010 and December 2015 from the Surveillance, Epidemiology, and End Results (SEER) database were enrolled and randomly divided into a training set and an internal validation set with a ratio of 7:3. An independent cohort from our hospital was used for external validation. Univariate and least absolute shrinkage and selection operator (LASSO) regression were used to identify the significant factors associated with prognosis, which were used to create a nomogram for predicting 3- and 5-year OS. Results We selected seven survival predictors (tumor grade, T-stage, N-stage, LNR, ER status, PR status, HER2 status) for nomogram construction. The C-indexes in the training set, the internal validation set, and the external validation set were 0.775, 0.778 and 0.817, respectively. The nomogram model was well calibrated, and the time-dependent ROC curves verified the superiority of our model for clinical usefulness. In addition, the nomogram classification could more precisely differentiate risk subgroups and improve the discrimination of YBC prognosis. Conclusions LNR is a strong predictor of OS in YBC patients. The novel nomogram based on LNR is a reliable tool to predict survival, which may assist clinicians in identifying high-risk patients and devising individual treatments. Breast cancer has overtaken lung cancer as the most common type of malignancy globally. In 2020 alone, the number of newly diagnosed breast cancer patients reached 2.3 million, accounting for 11.7% of all cancer cases. 
1 Age is an essential factor for the long-term survival of breast cancer, and young patients often have an inferior prognosis in comparison with other age groups. [2][3][4] The ESMO guidelines define young breast cancer (YBC) patients as < 40 years. 5 YBC patients are relatively rare, making up only approximately 5.6% of all invasive breast cancer patients. 6 However, numerous studies have revealed that breast cancer in YBC patients is more aggressive (i.e., high tumor grade, common BRCA1/2 mutations, lymphovascular invasion) and is correlated with poorer prognosis. [2][3][4]7,8 Given the high level of heterogeneity, the traditional American Joint Committee on Cancer (AJCC) staging system may not predict the survival probability well for YBC patients. Thus, a new prediction tool is needed to assess prognosis accurately for individual planning. Lymph node ratio (LNR) is defined as the ratio between the number of positive lymph nodes (PLNs) and the total number of resected lymph nodes (RLNs), and it has been proposed to improve the prognostic accuracy of lymph node status in various tumors. [9][10][11] Likewise, the prognostic value of LNR has also been demonstrated in breast cancer. [12][13][14][15][16][17] (Xiao Huang and Zhou Luo have contributed equally to this work.) In several small-sample research studies, LNR even showed better prognostic ability than pathologic nodal stage stratification. [15][16][17] Compared with other age groups, YBC patients are more prone to lymph node metastasis. 3,4 LNR might therefore have particular significance for YBC patients. However, studies related to LNR in YBC patients are rarely reported. The prognostic role of LNR in YBC has been discussed in a previous report, but the cutoff point of LNR was based on other types of breast cancer instead of YBC.
18 In our study, LNR was analyzed as a continuous independent variable, and the analysis results were presented through time-dependent area under the receiver operating characteristic curve (AUC) values. Furthermore, to avoid redundancy or overfitting, LASSO regression was used to screen the most significant factors related to OS for nomogram construction. Compared with the original model, our new nomogram model included fewer variables, creating more convenience for clinical practice. Finally, we internally verified the prognostic performance of the proposed nomogram and carried out an external validation in an independent database.

Population Selection

The SEER database of the National Cancer Institute is a systematic population-based cancer database that covers about 30% of the population in the USA. In this study, we extracted the data from the SEER 18 registry database using SEER*Stat 8.3.9 software. All the patients we selected had been diagnosed with YBC from 2010 to 2015. The inclusion criteria were as follows: (1) invasive breast cancer patient; (2) female under the age of 40 years; (3) breast cancer as the first primary tumor, confirmed by histology; (4) underwent surgical treatment. Meanwhile, patients were excluded if: (1) diagnosed with inflammatory breast cancer or Paget's disease; (2) distant metastasis; (3) bilateral breast cancer; (4) no records of follow-up (survival time code of 0 months); (5) missing information on tumor grade, TNM stage, lymph node status, surgery type, ER, PR, or HER2 status. Ultimately, 11,666 eligible patients were included in our study. Referring to previous research, these patients were randomly divided into a training set (n = 8166) and an internal validation set (n = 3500) in a 7:3 ratio, for the construction and verification of the nomogram, respectively. 19,20 We consider 7:3 to be an appropriate ratio to apply to this study.
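The 7:3 random split described above can be sketched as follows; the seed and the use of Python's `random` module are illustrative assumptions, since the paper does not state its splitting code:

```python
import random

def split_cohort(n_patients, train_frac=0.7, seed=42):
    """Shuffle patient indices and split them into training/validation sets."""
    ids = list(range(n_patients))
    random.Random(seed).shuffle(ids)       # seeded for reproducibility
    n_train = round(train_frac * n_patients)
    return ids[:n_train], ids[n_train:]

train, validation = split_cohort(11666)
# 11,666 patients -> 8,166 training and 3,500 internal-validation cases,
# matching the cohort sizes reported in the paper.
```

Any other reproducible shuffling scheme would do equally well; the essential properties are that the two sets are disjoint and that the split is made before any model fitting.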
Using most of the data to construct the nomogram helps ensure the accuracy of the model, while a smaller portion of the data was used for validation to prevent overfitting. To further validate the proposed nomogram, 351 patients diagnosed with YBC from May 2012 to December 2018 in The Northern Jiangsu People's Hospital (NJPH) were used to form the external validation set. Patients in this validation set were recruited according to the same inclusion and exclusion criteria as the training cohort. The time of last follow-up was November 2021. This study was approved by the institutional review board of NJPH.

Variable Collection

Several variables were included in the present study: baseline demographics (i.e., age at diagnosis, race, marital status), tumor features (i.e., laterality, histological type, tumor grade, T-stage, N-stage, LNR, AJCC stage, ER status, PR status, HER2 status), therapy information (i.e., surgery, radiation, chemotherapy), and survival variables (i.e., vital status, survival months). We restaged all the included patients according to the eighth pathological edition of the AJCC staging system. 21,22 The chosen age cutoff value was based on a previously published study. 23 LNR is defined as the ratio of PLNs/RLNs, with the result rounded to one decimal place. In our research, the primary outcome was OS, defined as the time interval between the date of diagnosis and the date of death from all causes.

Statistical Analysis

Categorical variables are expressed as percentages and continuous variables as the mean ± standard deviation (SD). The time-dependent AUC curves were used to compare the predictive ability of LNR with the pN-stage. Univariate Cox regression analyses and the LASSO regression algorithm were used to screen clinical features significantly related to OS.
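The LNR definition used throughout the paper (positive over resected nodes, rounded to one decimal place) is simple enough to state as code; the example counts below are illustrative:

```python
def lymph_node_ratio(positive_nodes, resected_nodes):
    """LNR = PLNs / RLNs, rounded to one decimal place as in the paper."""
    if resected_nodes <= 0:
        raise ValueError("at least one resected lymph node is required")
    return round(positive_nodes / resected_nodes, 1)

# e.g. 7 positive nodes out of 12 resected
lnr = lymph_node_ratio(7, 12)  # -> 0.6
```

The guard against zero resected nodes matters in practice: LNR is undefined for patients with no nodal dissection, which is one reason the study restricted itself to surgically treated patients with recorded lymph node status.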
On the basis of the final results of the LASSO Cox regression, a novel nomogram including all the independent prognostic factors was developed to predict 3- and 5-year OS for YBC patients. To measure the performance of the nomogram, both internal and external validation were used. The C-index and the receiver operating characteristic (ROC) curve were used to evaluate the discrimination of the nomogram. The calibration curves were used to determine the degree of agreement between predicted probabilities and observed outcomes. Both discrimination and calibration were evaluated using bootstrapping with 1000 resamples. The net reclassification improvement (NRI) and integrated discrimination improvement (IDI) were used to compare the accuracy of the nomogram with that of the traditional AJCC staging system. The clinical usefulness and benefits of the nomogram were estimated by decision curve analysis (DCA) plots. Furthermore, on the basis of the risk score and X-tile software version 3.6.1 (Yale University, New Haven, CT), all patients were stratified into low-, intermediate-, and high-risk groups. In this study, SPSS 25.0 and R software (version 3.6.1) were adopted for all statistical analyses. All P-values were two-sided, and P < 0.05 was considered statistically significant.

Patient Baseline Characteristics

In total, 11,666 eligible patients with YBC were enrolled from the SEER database and randomly assigned to the training set (n = 8166) and the internal validation set (n = 3500). Meanwhile, 351 patients with YBC from our center were selected and used as the external validation set. The differences between the SEER cohort and the NJPH cohort were mainly in the baseline demographics and the therapy information. For clinicopathologic characteristics, the three groups differed significantly only in pathological type (p = 0.029). Infiltrating ductal cancer was the most common histopathologic type of YBC (SEER data: 93.6%, NJPH data: 90.9%).
High-grade tumors (poorly differentiated or undifferentiated) were more frequent in YBC patients (SEER data: 56.2%, NJPH data: 55.3%). Moreover, the whole population had a relatively high rate of lymph node metastasis (SEER data: 44.5%, NJPH data: 48.7%). Other clinicopathological characteristics are summarized in Table 1.

Time-Dependent AUC Curves for LNR and pN-Stage

On the basis of the cumulative sensitivity and dynamic specificity, the time-dependent AUC curves were plotted for OS status. Figure 1 illustrates the changes in AUC over time. In the patients diagnosed with YBC from the SEER database, the AUCs for OS were slightly better for the pN classification system than for LNR. However, as in other studies, LNR showed better prognostic power than the pN-stage in the patients from our center. [15][16][17]

Feature Selection and Nomogram Construction

A total of 15 clinical parameters were included in the training set. In the univariate Cox regression analysis, only laterality was not associated with OS (P = 0.780). The variables that reached prognostic significance in the univariate analysis were included in the LASSO regression. Among them, seven factors (i.e., tumor grade, T-stage, N-stage, LNR, ER, PR, and HER2 status) with nonzero coefficients were ultimately considered statistically significant factors related to OS (Fig. 2a, b). On the basis of these seven significant variables, a nomogram for predicting 3- and 5-year OS of YBC patients was developed (Fig. 2c). To use the nomogram, each level of these variables is assigned a specific point on the scale. By summing the points from each variable, a total point score is obtained for the individual patient. The 3- and 5-year OS probability can then be predicted by projecting the total points onto the total score scale of the nomogram.
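The point-summation workflow just described can be sketched as follows. All point values here are hypothetical placeholders (the real per-level points come from the fitted nomogram in Fig. 2c), and only four of the seven variables are shown for brevity:

```python
# Hypothetical per-level point tables; the real ones come from the fitted model.
POINTS = {
    "grade": {"I": 0, "II": 10, "III": 25},
    "t_stage": {"T1": 0, "T2": 30, "T3": 55, "T4": 80},
    "er": {"positive": 0, "negative": 20},
}

def total_points(grade, t_stage, lnr, er):
    """Sum nomogram points across variables for a single patient.
    On this hypothetical scale, LNR contributes linearly (60 points at LNR = 1)."""
    return (POINTS["grade"][grade] + POINTS["t_stage"][t_stage]
            + 60 * lnr + POINTS["er"][er])

pts = total_points("III", "T2", 0.6, "positive")  # 25 + 30 + (60 × 0.6) + 0 ≈ 91
```

The final step, mapping a total score to a 3- or 5-year survival probability, is a lookup (or interpolation) on the nomogram's bottom axis rather than a closed-form formula.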
For instance, for a young patient (< 40 years old) diagnosed with a grade III, T2N2, LNR 0.6, ER-positive, PR-positive, and HER2-negative breast cancer, the total point score for all variables was 223, which corresponded to 3- and 5-year OS rates of about 85.4% and 73.6%, respectively.

Performance and Validation of the Nomogram

The calibration curves of the nomogram showed high uniformity between the predicted and actual probabilities of OS in the training set (Fig. 3a), the internal validation set (Fig. 3b), and the external validation set (Fig. 3c). The C-index values based on the nomogram (training set, 0.775; internal validation set, 0.778; external validation set, 0.817) were higher than those based on the AJCC stage (training set, 0.735; internal validation set, 0.719; external validation set, 0.751). Meanwhile, time-dependent ROC curves at 3 and 5 years showed that the nomogram performed better in predicting OS than the traditional AJCC staging system in the training set (Fig. 3d, e), the internal validation set (Fig. 3f, g), and the external validation set (Fig. 3h, i). DCA was performed to compare the clinical applicability of the nomogram with that of the traditional AJCC staging system. As shown in Fig. 4, the DCA curves demonstrated that the nomogram could better predict 3- and 5-year OS, as it added more net clinical benefit compared with the AJCC stage model in all three cohorts. Subsequently, the NRI and the IDI were further used to compare the accuracy of the nomogram and the traditional AJCC staging system. In the training set, the NRI for 3- and 5-year OS were 0.257 (95% CI 0.208-0.345) and 0.190 (95% CI 0.124-0.237), and the IDI for 3- and 5-year OS were 0.086 (95% CI 0.068-0.109, P < 0.001) and 0.085 (95% CI 0.070-0.105, P < 0.001).
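A C-index of the kind reported above (e.g. 0.775 in the training set) measures the fraction of comparable patient pairs that the model ranks correctly. A minimal pure-Python illustration of Harrell's C on toy data (not the study's data) follows; ties in survival time are not handled, which is fine for the sketch:

```python
from itertools import combinations

def concordance_index(times, events, risks):
    """Harrell's C: of all comparable pairs, the fraction in which the
    patient with the shorter observed survival has the higher risk score.
    A pair is comparable only when the earlier time ends in an event."""
    concordant = comparable = 0.0
    for i, j in combinations(range(len(times)), 2):
        a, b = (i, j) if times[i] < times[j] else (j, i)  # a = earlier time
        if events[a] != 1:
            continue  # earlier observation censored -> pair not comparable
        comparable += 1
        if risks[a] > risks[b]:
            concordant += 1
        elif risks[a] == risks[b]:
            concordant += 0.5  # ties in predicted risk count half
    return concordant / comparable

# Toy cohort: follow-up months, event flag (1 = death), predicted risk score.
c = concordance_index([5, 10, 15, 20], [1, 1, 0, 1], [0.9, 0.4, 0.7, 0.2])
# 4 of 5 comparable pairs ranked correctly -> c = 0.8
```

A value of 0.5 corresponds to random ranking and 1.0 to perfect ranking, which is why the nomogram's 0.775–0.817 versus the AJCC stage's 0.719–0.751 represents a meaningful gain.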
These results were validated in the internal validation set and the external validation set (Table 2), suggesting that the nomogram predicted OS with greater accuracy than the traditional AJCC staging system. 24 The Kaplan-Meier survival curves revealed obvious discrimination among the different risk subgroups, whereas the traditional AJCC staging system had limited capability to identify high-risk patients in all three cohorts (Fig. 5).

DISCUSSION

The incidence of breast cancer in young women is relatively low. 6 However, compared with older patients, young breast cancer patients typically have a poor prognosis. [2][3][4] In this study, we explored the clinicopathological features and prognostic factors of YBC patients using the SEER database and the independent data from our center. In addition, seven significant factors associated with prognosis were identified through LASSO regression and were used to construct a new nomogram to predict survival in YBC patients. Finally, our study demonstrated that the nomogram outperformed the AJCC staging system in predicting 3- and 5-year OS of these individuals in both internal and external validation cohorts. Lymph node status in breast cancer is widely accepted as an important predictor of patient prognosis. 25,26 Traditionally, the number of PLNs was deemed the most significant prognostic factor in breast cancer and formed the foundation of the pN category of the AJCC staging system. 21 However, many factors may affect the number of examined lymph nodes, such as varied levels of surgical expertise and different handling of the surgical specimen by the pathologist. The tumor stage could be underestimated when the number of resected and assessed lymph nodes is insufficient, which might lead to inadequate treatment and incorrect prognostic judgment. 27 To tackle this problem, LNR has been introduced to assess prognosis in breast cancer.
[12][13][14][15][16][17] Many studies have shown that treating LNR as a categorical variable will weaken its prognostic power, and that it is better to assess LNR as a continuous variable to reveal its true performance. 28,29 We agreed with this view and analyzed LNR as a continuous variable. In our study, LNR exhibited excellent predictive capability in YBC patients, especially in the external validation set. Notably, LNR revealed better survival predictive ability than the pN-stage in the data obtained from our center, in line with the results of previous studies. [15][16][17] We consider that LNR might perform better than the pN-stage for predicting prognosis in a single-institution study with a small sample size. However, more research is required to confirm this conjecture. In 2020, through univariate and multivariate Cox analyses, Yi and colleagues developed a nomogram that included 13 predictors to predict the survival probability for YBC patients. 18 However, we considered that so many predictors are unnecessary for clinical application, because the inclusion of variables that are not significantly related to the outcome contributes little to the improvement of the model. Compared with traditional multivariable regression, LASSO regression is considered a better method to select variables since it can minimize overfitting and reduce the complexity of the model by using a loss function or penalty term added to the objective function. 30,31 Through the LASSO regression algorithm, only seven variables (i.e., tumor grade, T-stage, N-stage, LNR, ER, PR, and HER2 status) were identified as independent factors associated with OS in our study. On the basis of these variables, we constructed a more parsimonious nomogram, which greatly improves its applicability in clinical scenarios. In addition, the novel nomogram with fewer variables also performed very well in both internal and external validation.
Among the seven parameters included in our nomogram, the T-stage made the most significant contribution to OS. LNR and the pN-stage complement each other in reflecting the status of the lymph nodes, and together they better predict patient prognosis. In addition, tumor grade, ER, PR, and HER2 status were identified as prognostic factors of YBC, consistent with the results of previous studies. 4,18 Nonsignificant factors, such as race and marital status, were excluded from the nomogram, which saves the physician time and effort in collecting unnecessary information. In addition, adjuvant therapies, including radiotherapy and chemotherapy, were not retained as independent factors in the LASSO regression, possibly because they were generally associated with poor tumor features rather than treatment failure. The nomogram we developed exhibited a significantly stronger capability in risk stratification for YBC patients than the current AJCC eighth edition, and it can be used for patient consultation on survival information and for guiding clinical decision making and treatment allocation. Patients defined as high risk by the nomogram are expected to have a dismal prognosis, so we recommend that these patients receive additional treatment and intensive follow-up. Furthermore, in current clinical practice, multigene tests, such as the 21-gene recurrence score (21-RS) and the 70-gene signature (70-GS), are being used to predict recurrence and survival and to identify candidates for adjuvant chemotherapy among young women with early-stage hormone receptor-positive and HER2-negative breast cancer. 32,33 We suggest that the combination of the nomogram and genomics might better guide clinical decision-making for this subset of patients. There are several limitations in the present study. Firstly, this is a retrospective study based on the SEER database and the NJPH database; as such, selection bias is unavoidable.
Also, certain important information, such as the Ki-67 index, BRCA1- and BRCA2-related mutations, and endocrine therapy, is unavailable in the SEER database, the absence of which might reduce the predictive power of individual prognosis among YBC patients. Lastly, young age is associated with a higher risk of recurrence. 34 Unfortunately, the SEER database does not provide information about disease recurrence. Thus, the recurrence risk of YBC patients could not be assessed in our study.

CONCLUSIONS

For YBC patients, LNR can be regarded as a powerful prognostic factor. On the basis of LNR, we constructed a nomogram to provide a convenient and reliable tool for predicting OS in YBC patients, which would help physicians identify high-risk patients.
Sustainability in Health Service Industry: The Implementation of Material Flow Cost Accounting (MFCA) as an Eco-Efficient Analysis

This research analyzes the level of efficiency at hospitals in order to increase productivity and industrial competitiveness, and to support the government's commitment to the Sustainable Development Goals program. The research performed a productivity analysis using Material Flow Cost Accounting (MFCA), carried out in several stages: planning (Plan), implementation (Do), evaluation (Check), and follow-up (Act). The analysis focused on three months of financial and operational reporting data and showed that the current efficiency level of Hospital X was 68.54%, indicating a high level of efficiency. However, with the conventional method, the hospital was not yet able to identify the level of inefficiency related to the environment or pollution caused by its operational activities. The implementation of MFCA revealed that the hospital's operational activities over three months generated negative products amounting to 9% of the total material input, or Rp 12,516,456, which had previously gone unobserved under conventional methods. The material flow tracing performed with MFCA has key benefits, such as minimizing waste, thereby protecting the environment from damage and supporting the sustainable operation of Hospital X. This means that if MFCA is applied continuously in Hospital X, it will not only save expenses but can also achieve eco-efficiency and realize continuous improvement so that, in the long term, operational sustainability can be achieved. MFCA is a technique to calculate material loss or "waste". In the current production process, companies generally focus only on the output that is produced, which is transferred to the next processor or to consumers.
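The abstract's cost figures follow from MFCA's core allocation rule: material input cost is split between positive products (output) and negative products (waste) in proportion to the physical flows. The total-input figure below is back-calculated from the reported 9% and Rp 12,516,456, so treat it as an implied rather than a reported number:

```python
def mfca_cost_split(total_input_cost, waste_fraction):
    """Split material input cost into positive-product and
    negative-product (waste) cost: MFCA's basic allocation."""
    negative = round(total_input_cost * waste_fraction)
    return total_input_cost - negative, negative

# Implied three-month material input for Hospital X (Rp), ~12,516,456 / 0.09.
positive_cost, negative_cost = mfca_cost_split(139_071_733, 0.09)
# negative_cost -> 12,516,456 Rp, the negative product reported in the abstract
```

Tracking the negative-product column separately is exactly what the conventional cost report omits, which is why the 9% loss stayed invisible before MFCA was applied.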
Waste/material losses in MFCA are all materials (inputs) that are wasted and do not enter the output of a production process (whether goods or services). The productivity improvement program through MFCA is relatively new and is not yet widely known by businesses and institutions in various regions of Indonesia, especially in the health service industry. Several previous studies on the implementation of MFCA were generally conducted on manufacturing companies, in Indonesia and in Western and Asia-Pacific countries (Christ & Burritt, 2016; Darkamin & Barmaki, 2019; Tajelawi & Gharbarran, 2015; Ichimura & Takakuwa, 2013). The literature has documented that MFCA is a valuable indicator of a company's potential growth and environmental impact. Companies that implement MFCA could recognize substantial losses (waste) that were previously unnoticed. Some of the research evidence suggests that MFCA can improve decision-making procedures and increase profitability. The MFCA method differs from the traditional manufacturing cost-of-production system in that it allows companies to trace their inventory efficiency by controlling material wastes. There is still very limited research on eco-efficiency, especially in the health service industry. This is why it is necessary to conduct a research analysis using MFCA, considering the benefits it will have for improving health services (hospitals). Hospitals are service providers in the health sector. In their business processes, many materials are needed as inputs, and hospitals consume various resources to support health services. Hospitals allocate large funds to investment in medical equipment, buildings, and other supporting facilities.
These various tools, buildings, and facilities require resources to function properly, such as electricity and water, as well as other supporting materials with certain specifications (medical supplies such as gloves, masks, syringes, alcohol, bandages, gauze, etc.). In addition, they require large maintenance and operational costs. Most of the operational costs are fixed costs, while hospital income is a variable component whose amount cannot be determined with certainty each month. For the purposes of this research, the large operational costs of a hospital increase the medical expenses for both outpatients and inpatients. In addition, if the capacity of hospital facilities is not used optimally, the hospital's waste of medical supplies as well as water and electricity resources can have negative impacts on the hospital, both monetary and non-monetary (a negative impact on environmental sustainability). There is an urgent need to find solutions for hospital efficiency practices/procedures/strategies in order to help reduce operational costs, which will eventually help realize the SDGs, namely health and welfare; life on land and under the sea; and sustainability of the city, environment, and surrounding communities. This research focuses on solving inefficiency problems in hospitals, analyzing the current level of efficiency in Hospital X, and identifying efforts to increase production efficiency through eco-efficient decisions using Material Flow Cost Accounting (MFCA) to improve economic and environmental performance and achieve Sustainable Development Goals (SDGs). This research is expected to contribute in three ways. Firstly, it is expected to contribute to establishing an in-depth understanding of the MFCA concept and the implementation literature. It will add empirical evidence regarding the application of MFCA in the health service sector, particularly in hospitals.
This research is also expected to contribute alternative accounting methods for calculating environmental costs. Moreover, it is expected to provide empirical evidence of the benefits of MFCA in supporting environmental sustainability and increasing efficiency, productivity, and company performance. Secondly, this research can serve as input for evaluating the ongoing production process, in order to identify, reduce, and manage waste/material loss, as well as to increase the efficiency and productivity of the company. The results of this research are expected to support management in understanding the mechanism of MFCA. Thirdly, the results of this research also contribute to enlarging the scope of the Indonesian government's program in realizing its commitment to the Sustainable Development Goals (SDGs) in general, and the Ministry of Manpower's program in particular, namely an increase in productivity. Finally, the results of this research are expected to contribute to reducing the costs that must be borne by the community to enjoy health facilities. Using MFCA, costs are expected to decrease, and efficiency and productivity are expected to increase. Hence, companies will no longer pass wastes or material losses on to end consumers, which will eventually lower prices and make such services more affordable for them. This research adds empirical evidence on the implementation of MFCA in the health services industry, particularly in hospitals, since MFCA is a new method and has seen very limited implementation in Indonesia.

Resource-Based Theory

In the literature on management accounting regarding company strategy, there are at least two resource-based theories.
The first is the Resource-Based View (RBV), which explains that a company's competitive advantage can become superior performance by identifying key resources with advantage-creating characteristics, such as value; scarcity; inability to be imitated (inimitability) or substituted; durability and appropriateness; as well as excellence in competitiveness (Barney, 1991). The RBV argues that ownership and identification of the key resources under its control can create a sustainable competitive advantage for a company, through the development and distribution of products or services with distinctive characteristics for consumers (Clulow et al., 2007). Briefly, this theory links key resources to the value the company derives from them. Several studies on the RBV aim to find empirical evidence of how consumers perceive a company's key resources (Clulow et al., 2007). Another theory regarding company resources is the Resource Dependence Theory (RDT), which is closely related to the external environmental conditions faced by companies. According to the RDT (Pfeffer and Salancik, 1978), a company operates in an environment that provides scarce and limited resources, which places it under high uncertainty. To overcome the uncertainty of resource scarcity, this theory explains that the company tries to develop various relationships with business partners to control and ensure the availability of resources for its production/operational needs. This resource-based perspective underlies the idea that companies need to pay attention to the level of material inputs (all materials and resources entering the production process) instead of only focusing on the output itself. With scarce and limited resources, companies need to increase efficiency and productivity to produce products that are superior, in both quality and price, to those of competitors.
Resource-based theory is therefore the most appropriate explanation for the importance of material loss/waste reduction activities, which are the focus of Material Flow Cost Accounting (MFCA).

Environmental Management Accounting

Management accounting is a branch of accounting that intends to provide information to managers as a basis for decision making. The attention of the global business world to environmental issues, in connection with climate change, has made the discipline develop alongside ongoing social issues so that it can contribute to solving the problems faced, especially in the business world. Environmental Management Accounting (EMA) is a manager's tool for calculating, managing, and providing information in order to make decisions and to reconcile company interests with concerns about the social and environmental impacts of business processes (Burritt et al., 2008). EMA is a term used to describe the integration of physical environmental information into the management accounting system, involving various tools: physical or monetary; past- or future-oriented; routinely produced or produced for a special purpose; and focused on both the short and the long term (Christ and Burritt, 2015). Research focusing on environmental issues began to emerge in the field of environmental management accounting in 1988; however, such studies are still few in number compared with other accounting fields (Qian et al., 2010; Madein and Sholihin, 2014). The focus on climate change and attention to the impact of business processes on the environment are evidenced by the shift in production management across industries worldwide.
In the period after the Second World War up to the 1970s, industry was dominated by manufacturing processes that used non-renewable (fossil) fuels, rapidly developing mass production, and production management based on industrial engineering. The next stage was the period between the 1970s and 1990s, when a surge in oil prices resulted in price competition. The production process in this period focused on optimization, whose ultimate goals were to reduce production costs, produce as efficiently as possible, and apply lean production and cell production management. Next came the period of climate change, or global warming, with a decrease in available natural resources. At this time, the focus of the business world is on environmentally friendly production processes. Production management has shifted to the concept of kaizen, or process improvement in all fields, and has begun to implement energy-saving and waste-management strategies, reducing wastes to limit adverse impacts on the environment, among other methods through Material Flow Cost Accounting (MFCA).

Material Flow Cost Accounting

Material Flow Cost Accounting (MFCA) is an accounting technique that calculates the amount of material loss or waste in detail in each cycle involving input, process, and output (Tachikawa, 2015; Christ and Burritt, 2015; Ministry of Manpower, 2015). The MFCA method was first developed in Germany and further refined through company applications by experts in Japan. MFCA allows companies to obtain more output with minimal waste or loss of input materials, so it has a significant impact on cost reduction and quality improvement (Kemenaker, 2015; Tachikawa, 2015).
Tachikawa (2015) explains that MFCA has an advantage in that its application is not limited to certain types of industry or company sizes, but can be applied to any industrial scope and field. Through MFCA, companies gain three benefits: (1) reduced costs, because MFCA is able to reduce material loss/waste; (2) increased energy efficiency, which reduces CO2 levels and is very beneficial for the environment; and (3) increased material efficiency, because MFCA is able to identify the production process per flow of material to output (Tachikawa, 2015). Christ and Burritt (2015) examined the role of scientific research in helping to deepen the understanding of current MFCA developments.

Research Approach

This research uses a mixed-methods approach to solve the research problems. A qualitative approach is used because this research focused on a single research object in collecting data; this is also called the case study method. A quantitative analysis is also used to prove that the implementation of MFCA brings productivity changes to the hospital compared with productivity before the implementation of MFCA.

Types of Data and Data Collection Techniques

The focus of this research is to analyze the increase in hospital efficiency and productivity through environment-related decisions, by implementing MFCA in calculating the cost of basic services. The scope of this research is comprehensive and detailed in all areas of hospital operations involving input, process, and output. This research uses primary and secondary data. The secondary data of this research are:
1. Financial reports and reports related to hospital operational expenses.
2. The hospital company profile.
3. The hospital's business processes, especially the operational processes of the business unit (dental clinic) used as the research object.
Meanwhile, the primary data and data collection techniques in this research include management's commitment to service efficiency and the discussion of service processes and procedures in the agreed business unit, which is the focus of the research.

Findings and Results

This research was conducted in the midst of the Covid-19 pandemic, so the data collection process was carried out online in the context of physical distancing. In accordance with the agreement with the hospital management, the scope of this research is limited as follows: (a) operational scope: the dental polyclinic; (b) time: a period of three months (June, July, and August 2020); (c) data: part of the income and expense report of the dental polyclinic's operational process.

Dental Polyclinic Operational Process

The research was conducted on a dental polyclinic whose operational process consists of 4 main steps, namely: (a) patient registration; (b) patient examination; (c) payment; and (d) pharmacy. The operational process for dental polyclinic services is shown in the flowchart in Appendix 1.

Hospital Operational Efficiency Level

To answer the first problem formulation of this research, concerning the analysis of the hospital's operational efficiency, data on operational efficiency were collected for 3 months. The closer this efficiency value is to 1, the more the hospital can be said to have carried out its operational activities effectively.

Planning Stage

This stage is the first stage in an eco-efficient analysis using Material Flow Cost Accounting (MFCA). The planning stage consists of:
a. Determination of product or service targets. In accordance with the results of discussions with the management of Hospital X in Depok, the planning stage begins with determining the target product/service within the scope of the research.
In this research, the service target to be analyzed using MFCA is outpatient services at the dental polyclinic. This polyclinic was chosen because its procedures and operational scope are simpler than those of other services.
b. Determination of MFCA limits and quantity centers. In accordance with the MFCA concept, it is necessary to determine the quantity centers when implementing MFCA properly. A quantity center is a unit within the organization that involves input, process, and output. The output of one quantity center can consist of a successful product (positive product) and residual material/waste (negative product). For a brief description of the scope of the service target in relation to input, process, and output, Figure 1 shows the dental polyclinic service process. Based on the agreement with the management of Hospital X, it was agreed that the focus of this research is the hospital's operational process for three months, from June to August 2020.
d. Efficiency analysis using Conventional Production Management (CPM). The efficiency analysis of Hospital X Depok with conventional methods is as follows. In the previous section, the quantity centers that are the focus of the research were determined; at this stage they are used to compile a material-flow model per quantity center. This stage of the research begins by identifying, in detail, the activities that occur at each quantity center, as follows:

Quantity Center's Registration

The registration quantity center (QC) is the beginning of the service activities at the dental polyclinic of Hospital X in Depok. Here, patients (either new or old) register for service that day at the registration counter by bringing their medical card (old patients) or KTP/personal data (new patients) and insurance card (if any).

Quantity Center's Doctor Examination

The doctor's examination is the second QC in the dental polyclinic.
This activity starts when the patient re-registers or reports to the nurse on duty, who records complaints before the doctor's examination. The nurse records the patient's temperature and the answers to questions about his/her complaints. Then the patient waits to be called into the doctor's examination room. When it is the patient's turn to enter the examination room, the doctor interviews the patient, asks about his/her complaints, records the information in a medical record book, and performs an examination. The need for drugs and medical equipment depends on the patient's case. After the examination, the doctor refers the patient to supporting services (only if needed) such as laboratories, radiology (panoramic photo), and so on. In the next step, the doctor writes the examination note in the medical record book, provides a prescription note (if any), and writes all examination steps and the drugs or treatments advised on the treatment form, as the basis for charging medical expenses or the patient's financial administration settlement. The doctor also provides a note for the next visit (if needed).

Quantity Center's Administrative Completion

Further activities are carried out at the administrative completion QC, starting with the patient queuing at the cashier with the documents for making payment. Following the queue, the cashier serves the patient when it is his/her turn. The patient shows documents such as the completed medical treatment form, drug prescription, etc. The cashier then checks the information system and performs the necessary calculations. The patient pays the cashier's bill, either in cash or by debit, and receives proof of payment together with the drug prescription for collecting the drugs from the pharmacy.
Quantity Center's Drug Taking

The activities that occur in the drug-taking QC are as follows: (1) the patient goes to the pharmacy unit with proof of drug payment and the prescription from the doctor; (2) the pharmacists process the drug-taking; and (3) the patient's data are verified and the drugs are handed over. After the patient receives the medicine, the dental polyclinic service is complete. The next step, after identifying the activities of each QC, is to identify the material flow; this is presented in the next section.

Implementation Stage

The inputs and outputs based on the material flow model per quantity center are as follows:

Quantity Center's Registration

The inputs needed in this QC are papers (forms) for registration (for new patients), a ballpoint pen for writing, electricity for the application system on the computer, and blank medical card forms to print new patients' identities. The outputs of the registration QC are the queue numbers and the patients' treatment forms.

Quantity Center's Doctor Examination

The inputs needed in this activity are medical treatment forms, doctor services, nursing services, medical devices, electricity, water, supplies (medicine, etc.), doctor snacks, and patient snacks. The outputs produced are actions on patients, control schedules, drug prescriptions, and the doctor's examination reports/treatment forms containing details of the doctor's actions (bills).

Quantity Center's Administrative Completion

The inputs needed in this activity are the doctor's examination reports, drug prescriptions, electricity, medical treatment receipt papers, and payment stamps. The outputs produced are receipts for payment of examination fees, proof of drug payment, and drug prescriptions.

Quantity Center's Drug Taking

The inputs required in this activity are drug prescriptions, proof of drug payment, plastic wrapping for the drugs, and label stickers to record drug information.
The outputs produced in this activity are a copy of the drug prescription and the drugs handed over.
b. Determination of the tie compounds. This stage of the research analyzes the ties in the MFCA, or tie compounds, as a cycle of the relationship between input, process, and output, as described in Figure 6 (source: internal data of Hospital X Depok). As shown in the figure, the tie compounds at Hospital X in Depok, the object of the research, consist of 4 tie compounds, namely: 1. Tie-Compound Registration; 2. Tie-Compound Doctor Examination; 3. Tie-Compound Administrative Completion; and 4. Tie-Compound Drug Taking. These four tie compounds form the basis of the MFCA analysis.
c and d. Evaluation and filtering of material balances (c), and calculating the input-output in monetary units (d). These two steps of the research are presented in one section. The evaluation and filtering of material balances are carried out by the finance department by collecting supporting data for the financial statements, namely a list of material balances. This research was conducted from June 2020, so the material balance needed is the balance at the end of May 2020. The research then continues by calculating, in monetary units, the input-output included in each quantity center. Positive products trace the resources actually consumed into the service components produced or enjoyed by dental clinic patients; negative products trace the resources that are actually wasted, as these resources are not genuinely consumed to produce services. Some resource components are not available with accurate data because they are recorded at the hospital level (not at the polyclinic level); however, based on explanations from the hospital's financial staff, logical assumptions were made.
e. Calculating waste or loss (per unit and in rupiah). Waste or loss is identified from the tracing results for each quantity center and from explanations from the hospital's financial staff. Waste or loss in the manufacturing industry can easily be identified even with conventional cost calculation methods; this is not the case in the service industry.
Waste or loss cannot be identified if the company uses conventional cost calculations. The results of data collection and the identification of inputs and outputs (positive products and negative products), in both quantity units and monetary units, in each quantity center of Hospital X are presented in Table 2.
f. Analysis and interpretation of MFCA results. The implementation of MFCA in the dental polyclinic at Hospital X in Depok produces the material flow data presented in Table 2. The operational process in the dental polyclinic is simpler than in other polyclinics. The efficiency level of the dental polyclinic at Hospital X is 68.54%; that is, 68.54% of the resources consumed by the polyclinic in 3 months contributed to the formation of income. The eco-efficient analysis is implemented as part of a company's effort to pay attention to the impact of operational activities on environmental sustainability. Kondo and Nakamura (2005) explain that waste management and recycling procedures are companies' efforts to achieve eco-efficiency by comparing the inputs and outputs of the operational process. Table 2 shows the operational or service process in the dental polyclinic traced through material flow (MFCA). From the identification process per quantity center, over a period of 3 months (June to August 2020), negative products were obtained that had never previously been detected by Hospital X (source: processed data). The analysis of inputs and outputs, especially the identification of positive and negative products, shows (Table 2) that the input used by the hospital for 3 months, namely Rp 133,934,299, did not entirely produce positive products; a portion became waste/lost/damaged — in MFCA terms, a negative product (loss). A total of Rp 122,517,065 was positive product, i.e., actually consumed as services to patients during June to August 2020.
The positive products, expressed as a percentage, amount to 91% of the total input/resources/materials consumed by the dental clinic during the production of its services. The results show that Rp 12,516,456 of material became negative product — in MFCA terms, waste/lost/damaged. This waste/loss amounts to 9% (see Table 2) of the total material entering the production process of Hospital X's dental services over 3 months, and it had not previously been detected by the management of Hospital X. Even though only 9% of the total material/resources consumed is waste/loss, this amount is very material when related to the cumulative environmental damage it may cause, especially since the waste generated by the health service industry requires particularly careful attention and handling. The analysis over this 3-month period illustrates that, by tracing the flow of material as in the MFCA concept, Hospital X is able to reduce resource consumption, which not only saves on the hospital's expenses but also reduces material costs from IDR 133,934,299 to IDR 122,517,065 (a saving of 9%). The expected benefit in the long run, however, is the achievement of eco-efficiency, so as to realize the operational sustainability of the hospital. Eco-efficiency is an increase in efficiency resulting from efforts to minimize the level of pollution of the environment. The results of this research are consistent with those of Christ and Burritt (2015) and Tajelawi and Garbharran (2015): using Material Flow Cost Accounting (MFCA) can help companies/organizations create eco-efficient conditions, because MFCA is a tool created to encourage the achievement of eco-efficiency in organizations, allowing them to focus on reducing material use and improving economic performance.
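The 91%/9% split reported above can be reproduced from the Table 2 totals. The sketch below uses only the three rupiah figures quoted in the text; note that the reported positive and negative amounts do not sum exactly to the total input, but the independently rounded percentages match the 91% and 9% in the paper.

```python
# Reported 3-month totals for the dental polyclinic (rupiah), from Table 2.
TOTAL_INPUT = 133_934_299   # total material/resource input, Jun-Aug 2020
POSITIVE    = 122_517_065   # resources embodied in services delivered
NEGATIVE    =  12_516_456   # waste/loss (negative product) found by MFCA

def share_pct(amount: int) -> int:
    """Share of the total material input, as a rounded percentage."""
    return round(100 * amount / TOTAL_INPUT)

print(f"positive product: {share_pct(POSITIVE)}%")   # 91%
print(f"negative product: {share_pct(NEGATIVE)}%")   # 9%
```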
MFCA improves transparency in material flows and energy consumption in companies, which makes it a helpful tool for management (Dierkes and Siepelmeyer, 2019; Huang et al., 2019). MFCA is a solution in which financial and environmental performance can be synergized for continuous improvement.

Conclusion

This research analyzes the efficiency level of Hospital X's operations using a conventional method and Material Flow Cost Accounting (MFCA). The research was conducted over three months and showed that the hospital's current efficiency level was 68.54%, meaning that the hospital's efficiency level is quite high. However, with the conventional method, the hospital does not know how much inefficiency is related to the environment or to pollution caused by its operational activities. The implementation of MFCA revealed that the hospital's operational activities over three months produced a negative product of 9% of the total material input, or Rp 12,516,456, which had not previously been detected by conventional methods. Compared with the manufacturing sector, the efficiency level of the health care service industry is lower, because the MFCA concept is more suited to material waste reduction and more readily applicable in manufacturing. Nevertheless, the results of the material-flow-tracing analysis using MFCA have the main benefit of minimizing waste and energy consumption, thereby protecting the environment from damage and realizing the sustainable operation of Hospital X. This means that if MFCA is applied continuously, Hospital X will not only save expenses but can also achieve eco-efficiency and realize continuous improvement, so that, in the long term, operational sustainability can be achieved.

Limitations and Suggestions

This research has several limitations related to data access.
The research was conducted during the Covid-19 pandemic, so the research team could not directly visit the research object; communication and coordination with the hospital management were therefore carried out online, with limited intensity and limited data access for the researchers. Another limitation is the amount of data obtained: the hospital management limited the data to only three months and only to the dental unit/clinic, which made it difficult to trace some data related to operational costs at the central level. Assumptions and analogies were therefore used in the data analysis and discussion, based on the hospital's real case and the literature. The limited theoretical and empirical literature is a further limitation. Suggestions for further research are to use public hospital data sources (where financial reports are available) and to use proxy measures with secondary data. Future research can further explore MFCA in manufacturing companies, because the characteristics of this sector are more relevant to the flow of material that is the focus of MFCA; thus, it can be seen in more detail to what extent MFCA can encourage eco-efficiency.
Prognostic Value of Biventricular Strain in Risk Stratifying Patients With Acute Heart Failure

Background: Few studies have shown that right ventricular (RV) function is independently related to adverse events regardless of left ventricular (LV) function in heart failure. We evaluated the prognostic value of global longitudinal strain (GLS) of both ventricles in patients with acute heart failure.

Methods and Results: We measured biventricular strains in 1824 randomly selected patients (973 men, aged 70±14 years) from a strain registry. A total of 799 patients (43.8%) died during the median follow-up duration of 31.7 months. In univariate analysis, LVGLS and RVGLS were significantly associated with all-cause mortality. We classified patients into 4 strain groups according to LVGLS (≥9%) and RVGLS (≥12%). On Cox proportional hazards analysis, group 4 (<9% LVGLS and <12% RVGLS) had the worst prognosis, with a hazard ratio (HR) of 1.755 (95% confidence interval [CI], 1.473–2.091; P<0.001) compared with group 1 (≥9% LVGLS and ≥12% RVGLS). After multivariate analysis, both LVGLS (per 1% decrease; HR: 1.057; 95% CI, 1.029–1.086; P<0.001) and RVGLS (per 1% decrease; HR: 1.022; 95% CI, 1.004–1.040; P=0.014) remained significant. The HR of RVGLS <12% was higher in patients without pulmonary hypertension (assessed by maximal tricuspid regurgitation ≤2.8 m/s) after adjustment for LVGLS (HR: 1.40 [95% CI, 1.11–1.77] versus 1.07 [95% CI, 0.88–1.30] with pulmonary hypertension; interaction, P=0.043).

Conclusions: In patients with acute heart failure, RVGLS was significantly associated with all-cause mortality regardless of LVGLS, and those with decreased biventricular GLS showed the worst prognosis. The predictive power of RVGLS was more prominent in the absence of pulmonary hypertension.

Along with left ventricular (LV) dysfunction, right ventricular (RV) systolic dysfunction has been considered a poor prognostic factor in patients with heart failure (HF).
1,2 RV systolic dysfunction has also been identified as a potent predictor of adverse clinical outcomes in recent studies, independent of LV function.3,4 However, no large-scale studies have yet been conducted on this topic. Originally, strain measured using 2-dimensional speckle tracking echocardiography (2DSTE) was introduced for the analysis of LV function, and strain values can objectively reflect global and regional myocardial function.5 LV strain values can be used as prognostic indicators in patients with HF.6 Because they can represent intrinsic myocardial properties, their application has recently been extended to the analysis of the right ventricle and the left atrium. Recent echocardiographic guidelines recommend several indexes to measure RV systolic function.7 However, objective quantification of the right ventricle has been problematic because of its complex shape. Among the echocardiographic parameters assessing RV function, global longitudinal strain (GLS) is an excellent index, and reduced RVGLS is known to be a poor prognostic factor in several cardiovascular diseases.8-10 In this study, we evaluated the prognostic value of GLS of both ventricles and whether RVGLS can be an independent predictor of long-term prognosis in patients with acute HF.

Study Population

The RVGLS and LVGLS values of 1824 randomly selected patients from the registry for STRATS-AHF (Strain for Risk Assessment and Therapeutic Strategies in Patients With Acute Heart Failure; NCT: 03513653, https://clinicaltrials.gov/ct2/show/NCT03513653) were measured. STRATS-AHF is a study of strain measurement in 4312 patients hospitalized for acute HF at 3 tertiary university hospitals in Korea from January 2009 through December 2016.11 Acute HF was defined as a rapid onset or worsening of HF symptoms and/or signs requiring urgent evaluation and treatment.
12 We included all hospitalized patients with signs or symptoms of HF accompanied by either pulmonary congestion or objective findings of LV systolic dysfunction or structural heart disease. We excluded patients with acute coronary syndrome or severe primary valvular disease requiring surgery. All-cause deaths and dates of death were identified for 100% of participants from their medical records or from the Ministry of Public Administration and Security. The study protocol was approved by the institutional review board of each hospital. The institutional review boards waived the need for written informed consent from the study patients. The study complied with the principles of the Declaration of Helsinki. The data, analytic methods, and study materials will not be made available to other researchers for purposes of reproducing the results or replicating the procedure.

Calculation of the Sample Size

We estimated the sample size before the measurement of RVGLS using PASS 11 (NCSS Statistical Software). On the basis of previously reported data,13 we calculated the sample size to detect a hazard ratio (HR) of 1.3 between the two groups.14 A 2-sided log-rank test with an overall sample size of 1600 participants (800 in group 1 and 800 in group 2) achieves 99.1% power at a 0.05 significance level to detect an HR of 1.30 when the control group has an HR of 1.00. Considering the feasibility of RV strain measurement and the possibility of measurement errors in about 20% of cases, we attempted to measure RV strain in a total of 1920 randomly selected patients.

Echocardiographic Examination

We obtained all echocardiographic images using the standard echocardiographic techniques suggested by the American Society of Echocardiography, using commercial echocardiographic machines and a 2.5-MHz probe.7 The standard echocardiographic techniques included M-mode, 2-dimensional, and Doppler measurements.
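The power calculation for the log-rank test described under "Calculation of the Sample Size" was done in PASS. As a rough, independent cross-check — not the procedure the study used — Schoenfeld's classical approximation for the number of events required by a 2-sided log-rank test with 1:1 allocation can be sketched as follows:

```python
import math
from statistics import NormalDist

def schoenfeld_events(hr: float, alpha: float = 0.05, power: float = 0.80) -> float:
    """Approximate number of deaths required for a 2-sided log-rank test
    with 1:1 allocation to detect hazard ratio `hr` (Schoenfeld, 1983):
    D = 4 * (z_{1-alpha/2} + z_{power})^2 / (ln hr)^2."""
    z = NormalDist().inv_cdf
    return 4 * (z(1 - alpha / 2) + z(power)) ** 2 / math.log(hr) ** 2

# Events needed to detect HR = 1.30 at alpha = 0.05 with 80% power:
print(round(schoenfeld_events(1.3)))
```

Note that this gives the required number of *events* rather than enrolled patients; converting to a sample size needs an assumed event rate, which is where software such as PASS does the heavier lifting.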
We recorded the tissue Doppler-derived peak systolic and early and late diastolic velocities of the septal mitral annulus. LV end-systolic and end-diastolic volumes were measured from the apical 4- and 2-chamber views, and the LV ejection fraction (LVEF) was calculated using the biplane Simpson method.

Strain Analysis

We downloaded the echocardiographic images from the cardiac picture archiving and communication system in DICOM (Digital Imaging and Communications in Medicine) format. These DICOM files were sent to the strain core laboratory. Strain analysis was conducted using commercial software, TomTec (ImageArena 4.6), as described previously.15 TomTec software is vendor independent. For myocardial deformation analysis, the endocardial border was traced on the end-systolic frame of the selected image. The end-systolic frame was defined by the QRS complex or as the smallest LV volume during the cardiac cycle. The software automatically tracks speckles along the endocardial border and myocardium throughout the cardiac cycle. The peak longitudinal systolic strain was automatically defined as the peak negative value during the cardiac cycle. GLS in each view was calculated as the mean value of the 6 segments of that apical view. LVGLS was measured as the average of the GLS values from the 3 apical views (4-, 3-, and 2-chamber). RVGLS was measured only in the apical 4-chamber or RV-focused view. Because it was difficult to separate the RV free wall from the interventricular septum with this software version, we averaged all segmental strain values from the RV free wall and the ventricular septum. GLS was analyzed on a single cardiac cycle in patients with sinus rhythm; in patients with atrial fibrillation, the GLS value was calculated as the average of 3 cardiac cycles. The strain values were measured by a specialist who was blinded to the clinical data of the study population.
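The averaging scheme described above — six segmental peak strains per apical view, and LVGLS as the mean of the three apical views — can be sketched as follows. The segment values are purely illustrative, not data from the study:

```python
def view_gls(segments):
    """GLS of one apical view: mean of its 6 segmental peak strains (%, absolute)."""
    assert len(segments) == 6, "each apical view contributes 6 segments"
    return sum(segments) / 6

def lv_gls(ch4, ch3, ch2):
    """LVGLS: average of the GLS values from the 3 apical views."""
    return (view_gls(ch4) + view_gls(ch3) + view_gls(ch2)) / 3

# Illustrative (made-up) segmental strains:
ch4 = [12, 11, 10, 13, 12, 14]   # apical 4-chamber
ch3 = [11, 12, 10, 12, 13, 11]   # apical 3-chamber
ch2 = [10, 11, 12, 13, 12, 11]   # apical 2-chamber
print(round(lv_gls(ch4, ch3, ch2), 2))   # -> 11.67
```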
Statistical Analysis

Data were presented as mean±SD for continuous variables and numbers with frequencies for categorical variables.

Clinical Perspective

What Is New?
• In patients with acute heart failure, left and right ventricular global longitudinal strains (GLSs) were significantly associated with all-cause mortality even after the adjustment of other clinical variables.
• Patients with lower left ventricular GLS (<9%) and right ventricular GLS (<12%) had the worst prognosis.
• In patients with pulmonary hypertension, the predictive power of right ventricular GLS was less prominent than that in patients without pulmonary hypertension.

What Are the Clinical Implications?
• Measurement of left and right ventricular GLS can give prognostic information in admitted patients with acute heart failure.

For comparisons among groups, we used the Student t test or 1-way ANOVA for continuous variables and the χ² test (or the Fisher exact test if any expected count was <5 for a 2×2 table) for categorical variables. Because the GLS value was negative, we obtained the absolute value |x| for simpler interpretation. The correlation of LVGLS and RVGLS was calculated with the Pearson correlation coefficient. The NT-proBNP (N-terminal pro-brain natriuretic peptide) concentration was assessed using logarithmically transformed values (base 10, log[NT-proBNP]) because of its skewed distribution. Death data were collected from the medical records of the patients with regular clinical follow-up, and all-cause mortality and dates of deaths were identified by the Ministry of Public Administration and Security for the patients without regular follow-up. A receiver operating characteristic curve analysis was used to evaluate the optimal cutoff values of LVGLS and RVGLS for the prediction of all-cause deaths. A survival curve was plotted using the Kaplan-Meier method with comparison using the log-rank test.
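Optimal ROC cutoffs of the kind described above are commonly chosen by maximizing Youden's J (sensitivity + specificity − 1). The from-scratch sketch below illustrates the idea on synthetic data; the paper's analysis was performed in SPSS/MedCalc, and the variable names here are our own.

```python
def youden_cutoff(strain_abs, died):
    """Return the (|GLS| cutoff in %, Youden's J) maximizing
    sensitivity + specificity - 1, where strain below the cutoff
    is treated as predicting death."""
    n_pos = sum(died)
    n_neg = len(died) - n_pos
    best = (-1.0, None)
    for cut in sorted(set(strain_abs)):
        tp = sum(1 for v, d in zip(strain_abs, died) if d and v < cut)
        fp = sum(1 for v, d in zip(strain_abs, died) if not d and v < cut)
        j = tp / n_pos + (1 - fp / n_neg) - 1
        if j > best[0]:
            best = (j, cut)
    return best[1], best[0]

# Synthetic example: deaths cluster at low absolute strain values.
strain = [5, 6, 7, 8, 10, 11, 12, 13]
dead   = [1, 1, 1, 1, 0,  0,  0,  0]
print(youden_cutoff(strain, dead))  # (10, 1.0)
```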
The time to first adverse clinical event was analyzed using a multivariate Cox proportional hazards analysis to determine the independent predictors of mortality. Because we observed a sufficient number of adverse clinical events in our study, we included all significant variables from the univariate analysis as covariates in the multivariate analysis. However, in the case of a variance inflation factor >10 in the linear regression analysis, variables with multicollinearity with others were excluded from the analysis. In the multivariate analysis, we analyzed the individual effects of LVGLS and RVGLS as continuous variables in analysis A and analyzed the grouping effect of each value in analysis B. The intra- and interobserver variabilities of LVGLS and RVGLS were evaluated in 20 random participants by 2 independent investigators by calculating the intraclass correlation coefficient. The data were analyzed using SPSS v20 (IBM) and MedCalc v12.3.0.0 (MedCalc Software). A 2-sided P value of <0.05 was considered statistically significant.

Clinical Outcomes According to Biventricular GLS

A total of 799 patients (43.8%) died during the median follow-up duration of 31.7 months (interquartile range: 11.6-54.4 months). Age and body mass index were similar among groups. Group 4 had the highest heart rate, NT-proBNP concentration, LV dimensions, LV volumes, E/E′ ratio, and number of patients with New York Heart Association functional class IV. HF with reduced ejection fraction was the most common condition observed in group 4. However, the pattern of discharge medications did not differ significantly among the groups.

Prognostic Stratification According to the Presence of Pulmonary Hypertension

We divided our study population into 2 groups according to the presence of increased pulmonary arterial pressure (assessed by a maximal velocity of tricuspid valve regurgitation >2.8 m/s).
In univariate analysis, RVGLS <12% was a significant predictor of all-cause mortality regardless of pulmonary hypertension (Figures 3 and 4).

Discussion

In this study, we showed that RVGLS was significantly associated with all-cause mortality regardless of LVGLS. Those who had decreased biventricular GLS (LVGLS <9% and RVGLS <12%) showed the worst prognosis. RVGLS had greater significance in the absence of pulmonary hypertension.

Prognostic Stratification According to Biventricular GLS

Unlike LVEF, myocardial strain values based on 2DSTE can represent myocardial deformation. These have been known to be objective and reliable markers of intrinsic myocardial contractility. 5 Myocardial strain values obtained on 2DSTE, which is a simple and feasible method with good reproducibility, are strong prognostic factors among patients with HF, independent of LVEF. 6,11 In this study, LVGLS was a significant prognostic indicator of adverse clinical events (HR: 0.957; 95% CI, 0.943-0.971; P<0.001) and all-cause death (HR: 0.949; 95% CI, 0.933-0.965; P<0.001). A cutoff value of 9% was optimal for separating patients with and without adverse clinical outcomes in our study. This result is similar to the previously reported LV cutoff value in patients with symptomatic HF. 16 Along with LV dysfunction, RV dysfunction has been regarded as a poor prognostic factor in patients with HF. 1,2 Information on RV systolic function in patients with HF can provide complementary information in the stratification of patient prognosis. 1 RV systolic function can be influenced by LV systolic function. Because the right ventricle is easily influenced by ventricular loading conditions, RV enlargement and RV systolic dysfunction can be caused by elevated LV end-diastolic pressure reflected backward to the right ventricle. 17 In our study, we assessed RV systolic function using RVGLS. RVGLS obtained with 2DSTE has been used as a systolic marker with considerable feasibility and reproducibility.
The patients with a decreased RVGLS value (<12%) had an increased E/E′ ratio (21.0±13.0 versus 18.1±9.0, P<0.001), which is an echocardiographic indicator of LV end-diastolic pressure, and a larger left atrial diameter (47.2±9.9 mm versus 44.9±10.0 mm, P<0.001) compared with the other patients. These data suggest that the patients with decreased RVGLS values had higher LV end-diastolic pressure. Similar to LVGLS, RVGLS is a strong predictor of clinical outcomes in several cardiovascular diseases. 8,10,19 In our study, the cutoff value of RVGLS in the prediction of adverse clinical outcomes was 12%. In a study of patients with advanced systolic HF awaiting heart transplantation, RVGLS showed a significant correlation with the RV systolic stroke work index, a hemodynamic parameter usually used to evaluate RV function. 20 RVGLS <10.8% was the cutoff value for detection of a decreased RV systolic stroke work index (<0.25 mm Hg/L·m²). RVGLS <14.8% obtained by the velocity vector imaging algorithm was a prognostic factor in patients with HF with reduced ejection fraction (LVEF ≤35%). 13 We showed that the strain group based on LVGLS and RVGLS values was a significant prognostic factor in multivariate analysis. Group 1 had the best long-term prognosis, followed by groups 2, 3, and 4. Although group 2 seemed to have a higher survival rate than group 3, there was no significant difference between them. We think this phenomenon may have originated from ventricular interdependence. The left and right ventricles share a common interventricular septum and specific myocardial fiber orientation. RV systolic function can be influenced by LV systolic function. LV contraction can account for approximately 20% to 40% of RV systolic pressure, and RV contraction has been shown to influence approximately 4% to 10% of LV systolic pressure in several experiments. 21 The effect of RV dysfunction on long-term prognosis may be low because a relatively healthy left ventricle can overcome RV dysfunction.
Moreover, LV systolic dysfunction can activate the neurohumoral system and affect RV systolic function. 22

Prognosis According to RV Function and Pulmonary Artery Systolic Pressure

As a general rule, pulmonary hypertension caused by left HF is coupled with RV systolic dysfunction. [23][24][25] However, this relationship between pulmonary arterial pressure and RV systolic dysfunction in chronic HF is not always present because RV systolic function may adapt over time in response to an increase in RV afterload. As discussed earlier, RV enlargement and RV systolic dysfunction can be caused by elevated LV end-diastolic pressure reflected backward to the right ventricle because the right ventricle is easily influenced by ventricular loading conditions in chronic HF. The pathophysiology and prognosis of RV dysfunction in acute HF may differ from those in chronic HF. Consequently, pulmonary hypertension related to LV failure was the most important cause of RV dysfunction in our study. Pulmonary arterial systolic pressure was significantly higher in the patients with a decreased RVGLS value (46.1±15.2 versus 44.2±15.1 mm Hg). In the patients with elevated pulmonary arterial pressure, reduced LV strain rather than RV strain was the major determinant of all-cause mortality; however, all-cause mortality between the patients with reduced RV and LV strains in normal pulmonary arterial pressure was similar. In the patients with a normal pulmonary arterial pressure, the decreased RVGLS value may have resulted from intrinsic RV muscular dysfunction rather than passive transmission of increased pulmonary arterial pressure. Thus, patients with a decreased RVGLS value might have an intrinsic myocardial dysfunction, which can influence prognosis. Our results are different from those of the study by Ghio et al, 25 who showed that the assessment of RV function did not improve the prognostic stratification of patients with HF and normal pulmonary arterial pressure.
This difference might have been observed because of the different study populations and methods of measuring RV systolic function. They included patients with chronic HF and measured the RV ejection fraction via right heart catheterization. Conversely, we studied patients with acute HF and measured GLS using 2DSTE. The pathophysiology of RV dysfunction in acute HF is different from that in chronic HF. In acute HF, pulmonary artery pressure increased by congestion might worsen RV function. Moreover, RVGLS could represent an intrinsic myocardial property that could not be measured using volumetric methods.

Limitations

The study has several limitations. First, this study was retrospective, without a standardized protocol that used only 1 echocardiographic machine or acquired a focused RV view in the echocardiographic examinations. Moreover, the treatment pattern for HF might differ among physicians and hospitals; however, the enrolled patients were treated and followed up at an HF clinic with standard treatment guidelines for acute HF, and data on all-cause deaths were collected from the National Insurance data or National Death Records. We gathered all echocardiographic images using standardized imaging protocols. Second, there was vendor dependency in the strain measurement. We used a vendor-independent strain algorithm for the measurement of LVGLS and RVGLS. Because there can be different strain values using other algorithms, the cutoff values obtained in this study should be used with caution in other study populations in which other strain algorithms are used. Third, we measured RVGLS from the RV free wall and interventricular septum together because of the technical difficulty of RV strain measurement with this feature-tracking algorithm. If we were to use total RVGLS along with the RVGLS value from the RV free wall separately, the results might be more interesting and informative in the prediction of clinical outcomes.
Fourth, this study might have potential selection bias. Although the RV strain was measured in randomly selected patients, the study patients had higher NT-proBNP levels and worse LV systolic and diastolic parameters, as well as a higher incidence of all-cause death, than did those excluded from the STRATS-AHF registry. Finally, although strain values are currently the best echocardiographic markers reflecting myocardial systolic function, they are not yet regarded as the gold-standard method. 5,26 Further prospective studies with standardized protocols are needed to determine the clinical significance of these values.

Conclusions

In patients with acute HF, RVGLS was significantly associated with all-cause mortality regardless of LVGLS, and those who had decreased biventricular GLS showed the worst prognosis. The predictive power of the RV strain was more prominent in the absence of pulmonary hypertension.
Evidence for SARS-CoV-2 Infection of Animal Hosts

COVID-19 is the first known pandemic caused by a coronavirus, SARS-CoV-2, which is the third virus in the family Coronaviridae to cause fatal infections in humans, after SARS-CoV and MERS-CoV. Animals are involved in the COVID-19 pandemic. This review summarizes the role of animals as reservoirs, natural hosts and experimental models. SARS-CoV-2 originated from an animal reservoir, most likely bats and/or pangolins. Anthroponotic transmission has been reported in cats, dogs, tigers, lions and minks. As of now, there is no strong evidence for natural animal-to-human transmission or sustained animal-to-animal transmission of SARS-CoV-2. Experimental infections conducted by several research groups have shown that monkeys, hamsters, ferrets, cats, tree shrews, transgenic mice and fruit bats were permissive, while dogs, pigs and poultry were resistant. There is an urgent need to understand the zoonotic potential of different viruses in animals, particularly in bats, before they transmit to humans. Vaccines or antivirals against SARS-CoV-2 should be evaluated not only for humans, but also for the protection of companion animals (particularly cats) and susceptible zoo and farm animals.

Classification of Coronaviruses

Coronaviruses (CoVs) belong to the order Nidovirales, suborder Cornidovirineae, family Coronaviridae and subfamily Orthocoronavirinae. The latter is composed of four genera designated alpha, beta, gamma and delta CoVs (α-, β-, γ- and δ-CoV), corresponding to groups one to four (I to IV), respectively. This classification is based on sequence analysis, phylogenetic relatedness and serologic examination [1].

Coronavirus Structure and Genome Organization

The coronavirus particle is conserved across the observed diversity of these viruses [42]. The surface of the virion possesses club-shaped spike projections, giving the virus the appearance of a solar corona.
The RNA genome of coronaviruses is the largest known genome of RNA viruses [43]. It is a positive-sense linear single strand of 26-32 kb. The genome is typically organized as 5′ leader-UTR-replicase-S (Spike)-E (Envelope)-M (Membrane)-N (Nucleocapsid)-3′ UTR-poly(A) tail, with many accessory genes [43]. The genome encodes structural, accessory and non-structural proteins. The structural proteins include spike (S), envelope (E), matrix (M) and nucleocapsid (N); some CoVs also express hemagglutinin-esterase (HE), apparently derived from influenza C viruses [44] (see Table 2). The viral envelope is studded by the S, E, M and HE [45,46]. The S protein possesses a receptor-binding domain (RBD), antigenic epitopes and a cleavage site (CS). The S protein is cleaved by host proteases into S1 and S2 subunits, which are responsible for binding to the host cell receptor and fusion of viral and cellular membranes, respectively. The receptors of CoVs are variable and mostly virus-specific (Table 2). The M protein is a transmembrane protein. It is the most abundant structural protein and is important for virus morphology. The E protein is expressed in smaller amounts than the other structural proteins; it plays roles in assembly and release of the virus and has an ion-channel activity, while the N protein encapsidates the viral RNA genome. The replicase gene is about 20 kb (two-thirds of the genome) and encodes two open reading frames, ORF1a and ORF1b, which express two polyproteins, pp1a and pp1ab, respectively; the latter requires frameshifting by the polymerase [47,48]. Subsequently, pp1a and pp1ab are cleaved into individual non-structural proteins (nsps 1 to 16) after the expression of the papain-like proteases (PLpro), encoded within nsp3, and a serine-type protease, or Mpro, encoded by nsp5 [49,50].
Furthermore, many of these nsps assemble into the replicase-transcriptase complex (RTC) responsible for RNA replication and transcription, including, for example, the RNA-dependent RNA polymerase (RdRp; nsp12) [51]; the RNA helicase and 5′-triphosphatase (nsp13) [52]; the N7-methyltransferase and 3′-5′ exoribonuclease (ExoN) (nsp14), involved in replication fidelity and N7-methyltransferase activity [53]; and the 2′-O-methyltransferase (nsp16) [54]. The accessory proteins are mostly dispensable for virus replication in cell culture; however, they might be essential for viral pathogenesis [43,55]. Some accessory proteins also play a role in blocking innate immune responses, e.g., nsp1, which is absent in γ-CoV (avian infectious bronchitis virus "IBV" and turkey coronavirus "TCoV"; see Table 1). This is likely why they are non-essential for replication [43,56].

Table 2. Genome organization and S1/S2 cleavage site of different human coronaviruses. [Table body not reproduced here. In the original table, black bold letters mark structural ORFs and red bold letters the furin cleavage site; HCoV-229E, HCoV-NL63 and SARS-CoV lack a furin-like cleavage site.]

Genetic Evolution of Coronaviruses

Coronaviruses (CoVs) evolve through recombination and point mutations. The large viral RNA genome coupled with the low fidelity of RdRp (nsp12) allows spontaneous mutations to occur during virus replication, although at lower rates than in other RNA viruses [57][58][59], because CoVs have a proofreading mechanism, which seems to cause the lower substitution rate compared to other RNA viruses [60]. The mutation rate of CoVs is variable. For example, the mutation rates of murine hepatitis virus (MHV) and IBV have been estimated to be 0.44-2.77 × 10⁻² and 0.67-1.33 × 10⁻⁵ substitutions per site per year, respectively [61,62], while the evolution rate of SARS-CoV-2 was estimated to be ~9 × 10⁻⁴ substitutions per site per year [63].
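The per-site rates above can be put on a per-genome scale: assuming a genome of roughly 30,000 nt (a midpoint of the 26-32 kb range, our assumption), a rate of ~9 × 10⁻⁴ substitutions per site per year implies on the order of 27 substitutions genome-wide per year. A quick back-of-the-envelope sketch:

```python
def subs_per_genome_per_year(rate_per_site_per_year, genome_length_nt):
    """Expected substitutions accumulated across the whole genome in one year."""
    return rate_per_site_per_year * genome_length_nt

# SARS-CoV-2: ~9e-4 subs/site/year over a ~30,000 nt genome (assumed midpoint)
print(round(subs_per_genome_per_year(9e-4, 30_000), 1))  # 27.0
```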
Moreover, the mutation rate of CoVs can be increased more than five times under immune pressure (e.g., vaccination) or upon interspecies transmission [4,58,[64][65][66][67][68]. Importantly, CoVs are subjected to high-frequency recombination events, with rates of about 20% during mixed infection of cells with closely related viruses [69]. Such "mosaic" recombination was responsible for the natural evolution of novel viruses, as reported in SARS-CoV [70,71] and MERS-CoV [72], in addition to other CoVs [73][74][75]. In vitro, the generation of chimeric coronaviruses with high replication efficiency in human cells has been frequently described [76][77][78][79][80]. Such findings confirm the possibility of natural recombination in the emergence of potential pathogens to humans [78]. Therefore, recombination of the virus genome is a major pathway for the evolution of CoVs with efficient interspecies or intraspecies transmission capacity or higher virulence [81][82][83][84].

SARS-CoV-2

In December 2019, a cluster of human cases of severe pneumonia of unknown etiology was detected in Wuhan, Hubei Province, China. The infection has been traced back to a seafood and wet live animal wholesale market in the city [85]. On the 7th of January, a novel CoV was identified as the causative agent. Different tentative names, including novel coronavirus 2019 (nCoV-2019) and 2019-nCoV, were proposed for the new virus [85]. On 11 February, the WHO named the disease coronavirus disease 2019 (COVID-19). The International Committee on Taxonomy of Viruses (ICTV) published the official nomenclature of the virus as SARS-CoV-2 [86]. On 29 February, the WHO declared that the disease is called COVID-19 and the virus that causes it is SARS-CoV-2 [87]. On 11 March, the disease was declared a pandemic, marking the first known pandemic caused by a coronavirus [88].
In this review, we summarize data on SARS-CoV-2 in animals, available on 1 June 2020, in PubMed, Google Scholar, preprint servers and websites of animal and human health organizations (e.g., OIE, CDC and USDA).

Origin of SARS-CoV-2 and Wild-Animal Reservoir

The identification of reservoirs of zoonotic pathogens often plays a crucial role in effective disease control. Zoonotic pathogens that can infect a wide range of hosts (e.g., influenza) have been demonstrated to be a serious risk factor for emergence and re-emergence in humans [89,90]. The majority of significant viral diseases of humans have been transmitted from domestic and/or wild-animal reservoirs (reviewed in References [91][92][93][94][95][96][97]). Although we may never know with certainty the precise route of transmission of SARS-CoV-2, it is widely accepted that SARS-CoV-2 has an animal origin. However, the animal reservoir(s) remain to be precisely identified. Bats were the reservoir for SARS-CoV (2003)(2004) [98] and diverse SARS-related CoVs (SARSr-CoVs) [79,99]. Therefore, it is most likely that bats are the potential reservoir for SARS-CoV-2 [4], which is genetically close to a horseshoe bat SARSr-CoV (designated RaTG13), with 96% genetic similarity [100]. This virus was isolated from Rhinolophus affinis, between 2015 and 2017, in Yunnan Province, which is located far away from Wuhan (about 2000 km) [100,101]. However, the RBD and CS in the S protein are distinct between SARS-CoV-2 and RaTG13. The latter has a monobasic CS and several mutations in the RBD compared to SARS-CoV-2. Extensive sequence analysis estimated that RaTG13 and SARS-CoV-2 diverged 40 to 70 years ago, most likely in horseshoe bats [102]. Recently, a novel SARSr-CoV (designated RmYN02), with an insertion of polybasic amino acids in the CS, was detected in Yunnan Province between May and October 2019 [103].
With the testing of more samples from bats in China, there is a possibility of identifying more strains related to SARS-CoV-2. Moreover, the involvement of other intermediate hosts, probably pangolins, as a plausible conduit in the transmission of SARS-CoV-2 to humans cannot be excluded [102]. Recent studies found that Malayan pangolins (Manis javanica) are frequently infected with CoVs. Diverse CoVs were identified in the lungs, intestine and/or blood of pangolins sampled in 2017-2018. Sequence analysis indicated that pangolin CoVs belonged to two different lineages, and one lineage shared 97.4% amino acid identity in the RBD with SARS-CoV-2. Therefore, pangolins are considered to be a potential intermediate host for SARS-CoV-2 [102,[104][105][106][107]. Up to the end of May 2020, little to no evidence of recombination was observed [108]; however, it is conceivable that SARS-CoV-2 evolved after multiple "mosaic" recombination events of bat and/or pangolin SARSr-CoVs. The currently available data do not rule out a non-pangolin or non-bat intermediate host.

Dogs

In a surveillance of 27 dogs in Hong Kong, two dogs tested positive [109][110][111]. The first dog was identified on February 27, 2020 [109][110][111]. SARS-CoV-2 RNA was detected in swabs from the nasal and oral cavities of a quarantined 17-year-old Pomeranian dog. The owner tested positive for the virus, suggesting a human-to-dog transmission. The virus titer was very low in the dog samples, and no clinical signs were observed. Genetic analysis revealed that the dog and human viruses were closely related, supporting possible human-to-dog transmission. A few days later, neutralizing antibodies were detected in the blood samples. The dog died three days after the quarantine, probably due to unrelated health issues rather than SARS-CoV-2 infection [109][110][111]. The second dog was identified on March 18, 2020 [109][110][111].
A 2.5-year-old asymptomatic German shepherd dog tested positive for SARS-CoV-2 RNA, and neutralizing antibodies developed a few weeks later. The dog probably acquired the infection from the owner, who was also infected with the virus [109][110][111]. In France, neither RNA nor antibodies were detected in dogs living in the same room with veterinary students infected with SARS-CoV-2 [112]. Likewise, viral RNA was not detected in 12 dogs housed with confirmed infected individuals in Northern Spain in April-May 2020 [113]. These data suggest that dogs do not play a major role in COVID-19.

Cats

Antibodies were detected in 15/102 (14.7%) of domestic cats sampled in Wuhan, China, after the local SARS-CoV-2 outbreak between January and March 2020, using ELISA and/or a neutralization assay. The three cats with the highest titers were owned by three patients, indicating potential direct human-to-cat transmission rather than cat-to-cat transmission. Conversely, sera collected from stray cats or hospital cats had significantly lower titers, and no viral RNA was detected in nasopharyngeal and anal swabs [114]. In Hong Kong, viral RNA was detected in oral cavity, nasal and rectal swab samples obtained on March 30, 2020, from a clinically healthy pet cat whose owner was infected with the virus [115], and 14 additional cats from households in which one or more people were ill tested negative. In Belgium, on March 18, viral RNA of SARS-CoV-2 was detected in the feces and vomit of a cat with digestive and respiratory clinical signs. The owner of the cat was also infected with SARS-CoV-2, suggesting human-to-cat transmission [116]. In New York City, USA, on April 22, two pet cats were confirmed positive in two separate locations. Both cats had mild respiratory signs. Human-to-cat transmission has been suggested as the source of infection for both cats [117]. In Northern Spain, one out of eight cats tested positive for SARS-CoV-2 RNA in nasal swabs in April-May 2020.
The cat was housed with an infected patient with severe COVID-19 symptoms [113]. Other limited surveys of cats revealed neither RNA nor antibodies in pet cats residing with infected individuals in France [112]. These findings reveal that pet cats are more susceptible than dogs to SARS-CoV-2. They may develop mild symptoms and excrete the virus. Whether cats can play a role in virus transmission to humans or other animals is not yet clear.

Tigers

A four-year-old female Malayan tiger in the Bronx Zoo in New York City, USA, tested positive in April 2020. The virus was detected in respiratory-tract samples. The tiger exhibited respiratory signs (i.e., dry cough). Four more tigers in the zoo tested positive. Other co-housed tigers and animals tested negative, suggesting poor animal-to-animal transmission. The infection was presumably acquired from an asymptomatically infected zookeeper [118].

Lions

Three African lions in the Bronx Zoo in New York City, USA, tested positive in April 2020. The animals had a dry cough and inappetence. The infection was probably acquired from an infected yet asymptomatic zookeeper [118].

Minks

In The Netherlands, minks in two separate farms in Beek en Donk (n = 7500 minks) and in Milheeze (n = 13,000 minks) developed respiratory and gastrointestinal disorders in April 2020. The mortality rate was 1.2 to 2.4%, and deaths were mainly observed in pregnant females. Most necropsied minks had lung lesions, including interstitial pneumonia. Using RT-qPCR, viral RNA was detected in different samples, including the conchae, lung, throat swab, rectal swab and, less frequently, the liver and intestines. No viral RNA was detectable in the spleens. Some of the workers at the farm had previously tested positive for SARS-CoV-2; therefore, human-to-animal transmission was the most likely scenario for the infection of the minks.
Nevertheless, preliminary sequencing data suggested mink-to-human transmission for one worker; however, investigations are still ongoing [119].

Other Animals

Viral RNA was not detected in samples obtained from a guinea pig or two rabbits housed with humans with confirmed COVID-19 infections in three households in Northern Spain in April-May 2020 [113]. There are knowledge gaps on the role of other animals, particularly cattle, sheep, goats, horses and donkeys, in COVID-19, which should be addressed by targeted surveillance.

Experimental Animal Hosts

Animal models are of immense importance for understanding the pathobiology and amelioration of diseases. Faithful animal models should mimic human disease in sharing comparable morbidity, mortality and route of infection [120]. It is not always possible to find a faithful animal model to recapitulate the pathogenesis of virus infection in humans and to evaluate potential medical countermeasures, including antivirals and vaccines. Although nonhuman primates (NHP) are the gold standard for studying emerging viruses in humans [121], they are expensive and difficult to handle, and for ethical reasons (e.g., animal welfare [122]), they are not used as a first-line model. Small animals are easy to handle, cheaper than NHP and commercially available [121]. However, they vary in their susceptibility to different viruses and do not always recapitulate the clinical disease in humans, due to biological variations (e.g., presence of receptors and immune system). For instance, for the emerging CoVs in humans, mice, ferrets and hamsters were susceptible to SARS-CoV infection [123][124][125][126], but not to MERS-CoV [127][128][129], mostly due to species variations in DPP4 receptors [127,129]. In the last few months, several animal models have been studied to assess the virulence and pathogenesis of different SARS-CoV-2 isolates from different countries. These experiments are summarized in Table 3.
Rhesus Macaques

Rhesus macaques inoculated via a combination of intratracheal (IT), intranasal (IN), ocular (OC) and oral (OR) routes were described by Munster et al. [130]. Macaques showed a transient elevation in body temperature for one day only. In addition to bodyweight loss, some macaques showed changes in the respiratory pattern and piloerection, reduced appetite, hunched posture, pale appearance and dehydration. They completely recovered between 9 and 17 days post-inoculation (dpi). Pulmonary infiltrates were seen on radiographs from 1-12 dpi, which completely resolved by 14 dpi. Postmortem examination revealed interstitial pneumonia. Viral loads were detected in nose, throat and anal samples for up to 17 dpi. Viral RNA was detected in the lungs and respiratory tract, GIT and lymphoid tissues. No viral RNA was detected in blood or urine samples. Viral antigen was detected in macrophages in the lungs and in the lymph nodes. All animals seroconverted at 10 dpi [130]. IT inoculation of six male and female rhesus macaques with SARS-CoV-2 was described by Shan et al. [131]. Only one of the macaques exhibited transient inappetence, and the other animals remained healthy. Viral RNA was not detected in the blood. Viral RNA was detected in high amounts at 1 and 5 dpi in oropharyngeal swabs. Likewise, anal swabs tested positive in three of the six monkeys. Chest X-ray examination revealed patchy opacity that progressed to multiple ground-glass opacities. Lungs of euthanized monkeys had a variable degree of consolidation, edema, hemorrhage and congestion, with interstitial pneumonia. The virus was re-isolated from the trachea, bronchus and lungs, in addition to the oropharyngeal swabs [131]. In another study, IT inoculation of three-to-five-year-old rhesus macaques with SARS-CoV-2 resulted in reduced bodyweight in three out of four monkeys and transient inappetence, tachypnea and hunched posture [132].
Viral loads were detected in the nasal, oral and anal swabs. Viral RNA was detected in the nose, lung, gut, spinal cord, heart, skeletal muscles and bladder. X-ray radiography showed bilateral ground-glass opacification of the lungs, and necropsy at 7 dpi revealed mild to moderate interstitial pneumonia. SARS-CoV-2 antibodies were detected in sera collected at 14, 21 and 28 dpi. Interestingly, at 28 days post-infection, the monkeys were re-challenged with SARS-CoV-2. Neither viral RNA in different organs nor an elevation in antibody titers was observed, and chest X-rays were normal, indicating full protection from reinfection [132]. Moreover, rhesus macaques have been used for the evaluation of inactivated vaccines against SARS-CoV-2 [133]. IT-inoculated, non-vaccinated macaques developed severe interstitial pneumonia, and SARS-CoV-2 RNA was detected in the oral and anal swabs, as well as in the lungs, at 3-7 dpi [133]. These data confirm that rhesus macaques are a faithful animal model for studying the pathogenesis of, and vaccine efficacy against, SARS-CoV-2, resembling SARS-CoV and MERS-CoV.

Ferrets

The experimental infection of ferrets has been described in several studies. Shi et al. [134] assessed the virulence and transmission of two viruses: one from an environmental sample collected in the Wuhan Seafood Market, and another from a patient in Wuhan. IN-inoculated ferrets excreted infectious virus in the upper respiratory tract (i.e., nasal turbinate, soft palate and tonsils); the virus was not detected in the trachea, lungs, heart, liver, spleen, kidneys, pancreas, small intestine or brain. In separate experiments, viral RNA was detected in the rectal swabs, although at lower levels than in the nasal washes. No infectious virus was isolated from the rectal swabs of any ferret. Only two ferrets had fever and loss of appetite, at days 10 and 12 after infection.
All ferrets possessed serum anti-SARS-CoV-2 antibodies, as shown by ELISA and the serum neutralization test (SNT). In a third experiment by the same team, viral RNA was detected in the nasal turbinate, soft palate, tonsil and/or trachea for up to 8 dpi in IT-inoculated ferrets. Another study was done by Kim et al. [135]. In this study, ferrets IN-inoculated with a Korean virus exhibited reduced activity, elevated body temperature and occasional cough. Viral RNA was detected in the serum, nasal washes, saliva, urine, feces, nasal turbinate, trachea, lungs, intestine and kidneys. Viral antigen was detected in the nasal turbinate, trachea, lungs and intestine, and acute bronchiolitis was observed at necropsy. The virus was successfully transmitted to co-housed ferrets (direct contact) and via the airborne route (indirect contact), as indicated by the presence of antibodies on SNT and by viral excretion in the nasal washes, saliva, urine and fecal samples for up to 7 days post-exposure [135]. A study conducted by the Erasmus Medical Centre, using ferrets as a model, has been published as a preprint [136]. Ferrets were inoculated intranasally with a German SARS-CoV-2 isolate, and six hours after inoculation naïve ferrets were co-housed with each inoculated ferret, to assess direct-contact virus transmission. At 24 hpi, additional ferrets were housed in a separate cage, to assess airborne transmission. Viral RNA was detected in inoculated ferrets for up to 19 dpi in the throat, nasal and/or rectal swabs. Likewise, all direct-contact ferrets excreted virus for up to 17 days post-exposure, and the virus was successfully transmitted by air to three of the four indirect-contact ferrets. In the latter group, SARS-CoV-2 RNA was first detected from three to seven days post-exposure, and the ferrets remained positive for 13 to 19 days post-exposure [136]. Generally, excretion of the virus in the nasal swabs was higher than in the throat and rectal swabs.
Viable virus was isolated from the nasal and throat swabs, but not from the rectal swabs. All ferrets seroconverted at 21 dpi, with similar levels of antibody in primarily inoculated, direct-contact and most of the indirect-contact ferrets [136]. Another study, conducted at the Friedrich-Loeffler-Institut (FLI), Germany, showed that a German SARS-CoV-2 isolate can efficiently replicate in, and transmit to, co-housed ferrets without causing clinical signs [137-139]. Viral RNA was detected in the nasal washes, and to a lesser extent in the rectal swabs, obtained from inoculated and in-contact ferrets. Moreover, viral RNA was detected in the respiratory tract, intestine, muscle, skin, lymph node, adrenal gland and/or brain tissues of euthanized inoculated ferrets. Lesions were mostly restricted to the nasal cavity. All inoculated and some co-housed ferrets developed antibodies against SARS-CoV-2 [139]. Together, these experiments have shown that ferrets are a suitable animal model for studying the pathogenesis of SARS-CoV-2: they mimic the mild clinical signs, lung lesions and transmission seen in humans.

Mice

Several studies have been conducted in wild-type and transgenic mice. Studies showed that SARS-CoV-2 binds human ACE2 receptors (hACE2) but has limited binding to murine ACE2 [140-143]. Transgenic mice expressing hACE2 receptors were used in one study [144], in which specific-pathogen-free, 6-11-month-old wild-type (WT) and hACE2 transgenic mice were IN-inoculated with the SARS-CoV-2 HB-01 strain. Only the hACE2 transgenic mice exhibited slightly ruffled fur and up to 8% weight loss at 5 dpi. Virus isolation and/or detection was successful in the lungs in samples taken from 1 to 7 dpi. Lung lesions and histopathological changes, including pneumonia and infiltration of inflammatory and immune cells, were described.
No remarkable histopathological changes or viral antigen in the myocardium, liver, spleen, kidney, cerebrum, intestine or testis were observed [144]. In another study, 17-week-old transgenic female C57BL/6 Ces1c−/− mice were inoculated intranasally with a chimeric SARS-CoV carrying the SARS-CoV-2 RdRp. The mice developed bodyweight loss, lung hemorrhage and lung dysfunction. The virus was isolated from the lungs at 5 dpi [145]. Another study compared the infectivity of a Belgian SARS-CoV-2 isolate in wild-type BALB/c mice and in transgenic mice lacking functional T and B cells. The virus replicated at similar levels in both mouse strains, without remarkable differences in lung pathology. These results indicated that SARS-CoV-2 replicated, although at low levels, in mice lacking hACE2 [146]. Moreover, wild-type (WT) C57BL/6 mice and C57BL/6 mice with genetic ablation of their type I (Ifnar1−/−) and III (Il28r−/−) interferon (IFN) receptors were inoculated IN with SARS-CoV-2. Increased replication of the virus in the lungs was observed in Ifnar1−/− mice at 3 dpi, compared to WT and Il28r−/− mice. Moreover, Ifnar1−/− mice exhibited increased levels of intra-alveolar hemorrhage, sometimes with peribronchiolar inflammation. Interestingly, pretreatment of Ifnar1−/− mice with serum from a convalescent human SARS-CoV-2 patient reduced viral loads in the lungs [146]. These findings indicate that transgenic mice, but not wild-type mice, may play an important role in studying the immunopathology of COVID-19.

Hamsters

Many studies have described SARS-CoV-2 infection in hamsters. In the first study, 6-10-week-old golden Syrian hamsters were IN-inoculated with SARS-CoV-2 isolated from the nasopharyngeal aspirate of a patient in Hong Kong, after propagation in VeroE6 cells [147]. Primarily inoculated animals developed clinical signs within one week post-inoculation, including lethargy, ruffled fur, hunched posture, tachypnea and ~11% loss of bodyweight. None of the animals died.
Viral RNA was detected in the nasal turbinate and trachea from 2 to 7 dpi. The viral load was high in the lungs, with lower levels detected in the intestine, salivary glands, heart, liver, spleen, lymph nodes, kidney, brain and blood, particularly at 4 dpi. The hamsters recovered at 14 dpi and showed high serum neutralizing antibody titers at 7 and 14 dpi. Euthanized hamsters showed pathological changes in the nasal turbinate, trachea and lungs, including lung consolidation and severe pulmonary hemorrhage. Viral N protein was observed in the lungs and intestine. In the lungs, induction of interferon-γ and pro-inflammatory chemokines/cytokines was described. Viral transmission to naïve co-housed hamsters was successful. Although the in-contact hamsters did not suffer reduced bodyweight gain, the histopathological changes and viral antigen expression in the nasal turbinate, trachea, lung and extra-pulmonary tissues were similar to those of the primarily inoculated hamsters. Moreover, passive immunization of hamsters significantly reduced viral loads in the nasal turbinate and lungs; however, this occurred without a significant impact on clinical signs or histopathological changes [147]. In a second study, four-to-five-week-old male golden Syrian hamsters were intranasally inoculated with SARS-CoV-2 [148]. The hamsters had ruffled hair coats. Viral RNA was detected from 2 to 14 dpi, with the highest viral load in the lungs and lower levels in the kidneys and in fresh fecal samples. At necropsy, pneumonia and lung consolidation were reported. Viral N protein was demonstrated in the nasal epithelial cells, lungs and duodenum. Viral clearance and tissue repair were observed at 7 dpi. The virus transmitted efficiently from the primarily inoculated hamsters to co-housed naïve hamsters. Both the inoculated and co-housed hamsters lost >10% of their bodyweight. Viral RNA was detected in the nasal washes obtained 3 dpi from the co-housed hamsters.
All hamsters recovered, and neutralizing antibodies were detected within 14 dpi. In a third study, seven-to-eight-week-old golden Syrian hamsters (males and females) were challenged IN with wild-type (WT) SARS-CoV-2 or a mutant SARS-CoV-2 with a deletion of the polybasic cleavage site (CS) [149]. The WT virus caused more extensive histopathological changes in the lungs of infected animals and replicated more efficiently in the tracheal and lung tissues than the variant virus [149]. Another study compared the susceptibility of WT hamsters with that of STAT2−/− transgenic hamsters (with ablated Signal Transducer and Activator of Transcription 2, lacking type I and III IFN signaling) and IL28R-a−/− transgenic hamsters (lacking type III IFN signaling) [146]. After IN-inoculation with a Belgian virus, all wild-type hamsters had high viral loads in the lungs, with multifocal necrotizing bronchiolitis, massive leukocyte infiltration and edema. STAT2−/− hamsters developed high viral loads in the lungs, high-titer viremia, high levels of viral RNA in the spleen, liver and upper and lower gastrointestinal tract (GIT), and less-severe lung pathology. These data indicate that STAT2 plays a role in SARS-CoV-2 pathogenesis by restricting the systemic spread of the virus, yet it increases lung pathology [146]. Taken together, these experiments show that hamsters are a valuable small-animal model for studying the pathogenesis, immunopathology and transmission of SARS-CoV-2.

Dogs

Three-month-old beagles were challenged IN with a Chinese virus to assess virus replication and transmission [134]. Viral RNA was detectable in the rectal swabs; however, no viral RNA was detectable in any organ or tissue collected from a dog euthanized at 4 dpi. No infectious virus was recovered, and two of the inoculated dogs seroconverted by ELISA. Neither antibodies nor virus were detected in co-housed dogs, indicating a low susceptibility of dogs to SARS-CoV-2 [134].
Cats

The replication and transmission of a Chinese SARS-CoV-2 isolate in subadult cats (aged six to nine months) after IN challenge have been studied [134]. At 3 dpi, viral load was evident in the nasal turbinates, soft palates, tonsils, tracheas, lungs and small intestines of euthanized cats. Moreover, the virus was transmitted aerogenically to other cats, and viral RNA was detected in the fecal samples. Seroconversion and neutralizing antibodies were detected in inoculated and exposed cats, and severe lesions in the upper and lower respiratory tracts, including the lungs, were recorded [134]. Likewise, IT, IN, OC and OR inoculation of 15-18-week-old male and female domestic cats with SARS-CoV-2, and virus transmission to naïve co-housed cats, has recently been described [150]. The cats did not exhibit clinical signs, although virus was isolated from nasal swab specimens at 1 to 6 dpi from inoculated cats and at 3 and 9 dpi from co-housed cats. Virus detection was not successful in the rectal swabs. All cats seroconverted at 24 dpi [150]. These two experiments further confirm that cats are more susceptible than dogs to SARS-CoV-2. The potential role of cats in the transmission of the virus to other mammals remains to be studied.

Pigs

To date, two studies have determined the susceptibility of pigs to infection with, and transmission of, different SARS-CoV-2 isolates [134]. After IN challenge, neither viral RNA nor antibodies were detected in the inoculated animals [134,139] or in naïve contact pigs [134]. These experiments suggest that pigs are not susceptible to SARS-CoV-2.

Tree Shrew

Experimental infection of male and female tree shrews of different ages, ranging from six months to seven years, with SARS-CoV-2 has been described [151]. After IN-inoculation, most animals, particularly females, showed an increase in body temperature, without showing clinical signs or gross lesions.
Viral RNA was detected, particularly in the younger animals, for up to 12 dpi in the nasal, throat and anal swabs and/or blood samples. RNA was also detected in different organs, including the lungs, pancreas and uterus. Pathological alterations were observed mainly in the lungs, and to a lesser extent in other organs, including the spleen, intestine, brain, liver and heart [151].

Bats

The susceptibility of Egyptian fruit bats, which are genetically and immunologically distinct from the putative reservoir horseshoe bats [152,153], was studied after IN-inoculation with a German SARS-CoV-2 isolate [137,139]. Despite not showing any clinical signs, the bats excreted virus orally for up to 12 dpi. Moreover, viral RNA and/or infectious virus was detected in respiratory tissues and, at lower levels, in other organs, including the heart, skin and intestine [139]. Anti-SARS-CoV-2 antibodies were detected in inoculated and contact bats. Viral RNA was detected in co-housed bats for up to 21 dpi, indicating successful bat-to-bat transmission [139]. The results of this experiment further indicate that bats can support the replication and transmission of SARS-CoV-2.

Poultry

The susceptibility of poultry to SARS-CoV-2 has been assessed using genetically distinct viruses. A study of the replication and transmission of the SARS-CoV-2 Wuhan strain in chickens showed that neither RNA nor antibodies were detectable at 14 dpi [154]. Likewise, chickens inoculated with a German strain did not develop clinical signs, lesions or antibodies [139]. Furthermore, neither IN-inoculated nor co-housed ducks excreted viral RNA in swab samples, and all of the animals were seronegative at 14 dpi [154]. Likewise, chickens, ducks, turkeys, quail and geese challenged with SARS-CoV-2 did not show any clinical signs, and no virus replication or antibodies were detected [155].
These experiments suggest that poultry are not susceptible to the virus, and it is unlikely that they play a role in COVID-19.

Summary and Conclusion

COVID-19 is the first known pandemic caused by a coronavirus, and SARS-CoV-2 is the third virus in this family to cause fatal infections in humans, after SARS-CoV and MERS-CoV. Animals are involved in COVID-19 as reservoirs, animal hosts and experimental models (Figure 1). The virus originated from an animal reservoir, most likely bats and/or pangolins, or a yet-to-be-identified animal host. Targeted and retrospective surveillance should be conducted extensively to identify the reservoirs of SARS-CoV-2 and other related viruses before they transmit to humans. There are no data available on systematic surveillance, particularly in farm animals; however, it is likely that SARS-CoV-2 will become established in human populations and not in animals. There are several reasons for this assumption. (1) CoVs evolve at a lower rate than other RNA viruses (e.g., influenza), due to the proofreading activity of the RdRp; the virus is therefore less likely to become established in other animals. (2) SARS-CoV-2 shares similarities with SARS-CoV, which had a limited natural host range, including cats and raccoon dogs, and was only occasionally reported in other animals [161,162]. (3) So far, there is no evidence that HCoV-OC43 has been reported in animals, although it was transmitted from cattle to humans around 1890 [20]. (4) Fortunately, many domestic and companion animals are less susceptible to SARS-CoV-2 than humans. The low susceptibility of animals is probably attributable to restricting host factors, e.g., a functional ACE2 and specific proteases. A recent study has shown that the proportion of cells carrying both ACE2 and TMPRSS2 is high in cats, low in pigs, very rare in dogs and absent in chickens [163]. (5) To date, anthroponotic transmission is the main pathway for the infections and fatalities caused by SARS-CoV-2 in a few companion and zoo animals, and there is no strong evidence for natural animal-to-human transmission, except for mink, which remains to be confirmed. Importantly, there is no sustained animal-to-animal transmission. (6) Last but not least, many CoVs are endemic in animals in several countries, and no clear evidence is available for their transmission to humans. Moreover, whether the immune response against CoVs in animals can confer some protection against SARS-CoV-2 remains to be studied. To understand the pathobiology of the virus, experimental infections have been conducted in several animal species. The results showed that rhesus macaques, hamsters, ferrets, cats and fruit bats were permissive, while dogs, pigs and poultry were resistant. Monkeys (e.g., rhesus macaques) developed mild-to-moderate clinical signs, as seen in the majority of human SARS-CoV-2 infections; however, they are expensive and difficult to handle and are not available in every laboratory.
Hamsters and ferrets seem to be the most suitable models to study the molecular pathobiology of SARS-CoV-2, as for SARS-CoV [124] but not MERS-CoV [128], probably due to the different receptors used (ACE2 for the SARS viruses vs. DPP4 for MERS-CoV) [148]. So far, ferrets, hamsters, cats and, to a lesser extent, bats have been used to assess animal-to-animal transmission. Moreover, wild-type mice are a poor model for assessing virus pathogenesis or antiviral and vaccine efficacy. However, transgenic mice are a model that can be considered, particularly to study the elements of the immune system that might confer resistance to SARS-CoV-2 infection. CoV infections in humans were neglected for years. The recurrent severe infections caused by animal coronaviruses in the last two decades indicate that future outbreaks of related or unrelated CoVs in humans are inevitable. Although difficult to achieve, there is an urgent need to develop universal vaccines and antivirals against CoVs. Currently, there are several potential vaccines and antivirals against SARS-CoV-2, and some of them are under evaluation in clinical trials [164,165]. Although limited resources may prevent the wide application of vaccines against SARS-CoV-2 in animals, the evaluation of vaccines or antivirals should be considered for susceptible animals (i.e., pets and zoo animals). Vaccination of reservoir animals against rabies virus (an RNA virus) has proven effective in controlling rabies virus infections in humans and animals, and has allowed the eradication of rabies in terrestrial carnivores in several regions worldwide [166].
Evidence of bar-driven secular evolution in the gamma-ray narrow-line Seyfert 1 galaxy FBQS J164442.5+261913

We present near-infrared (NIR) imaging of FBQS J164442.5+261913, one of the few $\gamma$-ray emitting narrow-line Seyfert 1 ($\gamma$-NLSy1) galaxies detected at a high significance level by $Fermi$-LAT. This study is the first morphological analysis of this source and only the third of this class of objects. Conducting a detailed two-dimensional modeling of its surface brightness distribution and analysing its $J-K_s$ colour gradients, we find that FBQS J164442.5+261913 is statistically most likely hosted by a barred lenticular galaxy (SB0). We find evidence that the bulge in the host galaxy of FBQS J164442.5+261913 is not classical but a pseudobulge, against the paradigm of powerful relativistic jets being launched exclusively by giant ellipticals. Our analysis also reveals the presence of a ring with a diameter equalling the bar length ($r_{bar} = 8.13 \pm 0.25\ \textrm{kpc}$), whose origin might be a combination of bar-driven gas rearrangement and minor mergers, as revealed by the apparent merger remnant in the $J$-band image. In general, our results suggest that the prominent bar in the host galaxy of FBQS J164442.5+261913 has contributed most to its overall morphology, driving a strong secular evolution, which plays a crucial role in the onset of the nuclear activity and the growth of the massive bulge. Minor mergers, in conjunction, are likely to provide the necessary fresh supply of gas to the central regions of the host galaxy.

INTRODUCTION

Narrow-line Seyfert 1 (NLSy1) galaxies are type 1 active galactic nuclei (AGN) characterized by narrower Balmer lines (FWHM(Hβ) < 2000 km s−1) than in normal Seyferts, flux ratios [OIII]/Hβ < 3, strong optical FeII lines (FeII bump) and a soft X-ray excess (Osterbrock & Pogge 1985; Pogge 2000).
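The FWHM(Hβ) criterion above feeds directly into the virial black-hole mass estimates used for NLSy1s. As a hedged, self-contained sketch (not code from this paper): the BLR radius-luminosity normalization below is the commonly quoted Kaspi et al. (2000) form, while the virial factor f = 0.75 and the input values are illustrative assumptions.

```python
import math

# Hedged sketch (not the authors' pipeline): virial black-hole mass from
# the Hbeta FWHM and the 5100 A continuum luminosity. The radius-luminosity
# normalization is the commonly quoted Kaspi et al. (2000) form; the virial
# factor f = 0.75 and the example inputs are illustrative assumptions.

G = 6.674e-11        # gravitational constant, m^3 kg^-1 s^-2
M_SUN = 1.989e30     # solar mass, kg
LT_DAY = 2.59e13     # one light-day in metres

def virial_bh_mass(fwhm_kms, L5100_erg_s, f=0.75):
    """M_BH = f * R_BLR * FWHM^2 / G, returned in solar masses."""
    # R_BLR ~ 32.9 * (lambda L_lambda(5100 A) / 1e44 erg/s)^0.7 light-days
    r_blr_m = 32.9 * (L5100_erg_s / 1e44) ** 0.7 * LT_DAY
    v_ms = fwhm_kms * 1.0e3          # km/s -> m/s
    return f * r_blr_m * v_ms ** 2 / G / M_SUN

# A source right at the NLSy1 FWHM limit, with an assumed continuum luminosity:
m_bh = virial_bh_mass(fwhm_kms=2000.0, L5100_erg_s=1e44)   # ~2e7 solar masses
```

Because the mass scales as FWHM squared, doubling the line width quadruples the estimate, which illustrates why broad-line blazar hosts sit at much higher masses while NLSy1s fall one to two orders of magnitude below.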
Based on the full width at half maximum (FWHM) of their broad-line region (BLR) lines and the continuum luminosity (Kaspi et al. 2000), their central black hole masses (MBH) are estimated to range from ∼ 10^6 M⊙ to ∼ 10^7 M⊙ (Mathur et al. 2012a; although Baldi et al. 2016 show that these low MBH estimates might be seriously affected by the orientation of the BLR). Their low-mass black holes suggest that their accretion rates are close to the Eddington limit and that their host galaxies are in an early phase of galaxy evolution (Ohta et al. 2007). Unfortunately, relatively little is known about their host galaxies. Some studies find that their morphologies resemble those of inactive spirals, with a regular presence of stellar bars (Crenshaw et al. 2003a) and pseudobulges (Orban de Xivry et al. 2011; Mathur et al. 2012b). However, γ-ray emission has been detected in seven radio-loud NLSy1s (RL-NLSy1s) by the Large Area Telescope (LAT) on board the Fermi satellite, suggesting that highly beamed and strongly collimated relativistic jets can be launched by RL-NLSy1 AGN. The latter challenges the paradigm that such jets are launched exclusively by blazars hosted by giant elliptical galaxies (Laor 2000; Marscher 2009) with black holes of masses MBH ≳ 10^8 M⊙ accreting at low rates (McLure et al. 2004; Sikora et al. 2007). Therefore, a thorough analysis of the host galaxies of this new class of AGN (hereafter γ-NLSy1, Abdo et al. 2009) becomes a priority. So far, only two γ-NLSy1 host galaxies have been characterized, 1H 0323+342 (Antón et al. 2008; León Tavares et al. 2014) and PKS 2004-447 (Kotilainen et al. 2016). These studies reveal characteristics such as the presence of disks, rings, bars and pseudobulges, which are expected in normal NLSy1s but do not fit the common belief that powerful relativistic jets are launched exclusively by giant ellipticals. FBQS J164442.5+261913 has been detected by Fermi-LAT with high significance, having a test statistic TS > 25 (∼ 5σ, Mattox et al.
1996), and given its redshift (z = 0.145, Bade et al. 1995) it is the second closest after 1H 0323+342 (z = 0.061), making it an excellent candidate for an accurate morphological study of its host galaxy. With the aim of achieving a better understanding of the mechanisms needed to form and develop highly collimated relativistic jets, in this paper we present the results of our thorough analysis of FBQS J164442.5+261913. This paper is structured as follows: observations and data reduction are presented in Section 2; the methods we adopt to analyse the data are explained in Section 3; our results and discussion are presented in Sections 4 and 5; and in Section 6 we summarize our findings. Throughout the manuscript we adopt a concordance cosmology with Ωm = 0.3, ΩΛ = 0.7 and a Hubble constant of H0 = 70 km s−1 Mpc−1.

OBSERVATIONS AND DATA REDUCTION

The J- and Ks-band observations of FBQS J164442.5+261913 were conducted at the 2.5 m Nordic Optical Telescope (NOT) during the night of May 1, 2015, using the wide-field near-infrared camera NOTCam, with CCD dimensions of 1024 pix × 1024 pix and a pixel scale of 0.234″/pix, giving a field of view of ∼ 4 × 4 arcmin². During the night, the seeing was very good, with an average FWHM of ∼ 0.75″ and ∼ 0.63″ for the J- and Ks-bands, respectively. The target was imaged using the NOTCam standard J (λ_central = 1.246 µm) and Ks (λ_central = 2.140 µm) filters with a dithering technique, with individual exposures of 30 seconds and a typical offset of ∼ 10″. A total of 85 individual exposures for the J-band and 72 for the Ks-band were obtained, giving total exposure times of 2550 seconds and 2160 seconds, respectively. The data reduction was performed using the NOTCam reduction package for IRAF. First we corrected for the optical distortion of the wide-field camera, using distortion models based on high-quality data of a stellar-rich field. Then, bad pixels were masked out using a file available in the NOTCam bad pixel mask archive.
A normalized flat field was created from evening and morning sky frames to account for the thermal contribution. Using field stars as reference points, the dithered images were aligned and co-added to obtain the final reduced image used in our analysis. In order to photometrically calibrate the images, we retrieved J- and Ks-band magnitudes from 2MASS (Skrutskie et al. 2006), resulting in an accuracy of ∼ 0.10 mag. The derived integrated magnitudes in circular apertures are mJ = 15.35 ± 0.10 (MJ = −23.84 ± 0.10) and mKs = 13.44 ± 0.10 (MKs = −25.86 ± 0.10). Galactic extinction in the J and Ks bands is negligible (A_λ[J] = 0.058 and A_λ[Ks] = 0.025).

Photometric decomposition

We perform a 2D modeling of the galaxy using the image decomposition code GALFIT (Peng et al. 2011). We follow the procedure described in our previous studies of AGN host galaxies (León Tavares et al. 2014; Olguín-Iglesias et al. 2016), which is described below. The first, and most important, part of the analysis is the modeling of the point spread function (PSF) by fitting selected stars in the field of view (FOV, Fig. 1). These stars are non-saturated, have no sources within a ∼ 7″ radius, lie more than ∼ 10″ away from the border of the FOV and span a range of magnitudes that allows a proper characterization of the core and wings. Stars 2, 5, 6, 8 and 9 fulfil these criteria (see Figure 1) and thus are used to derive our PSF model. On the other hand, star 1 is saturated, stars 3 and 10 are very close to the border of the FOV, and stars 4 and 7 have close companions. Each selected star is centered in a 50 × 50 box, where all extra sources are masked out by implementing the segmentation image process of SExtractor (Bertin & Arnouts 1996). The stars are simultaneously modeled using one Gaussian function (intended to fit the core of the stars); then, an exponential function (intended to fit the wings of the stars) is added to the resulting model.
Similarly, depending on the residuals, we add extra Gaussians and exponential functions until the core and wings are satisfactorily fitted. For our imagery, six Gaussians and six exponentials (plus a flat plane that fits the sky background) were enough. The result is considered a suitable PSF model for our analysis once it successfully fits all the stars individually (Figure 2). Next, we fit FBQS J164442.5+261913 with a scaled version of our derived PSF as the only component, to constrain the unresolved AGN contribution at the centre of the galaxy. Since the residuals of the single-PSF model (hereafter model 1) are considerable (χ²_model1 = 4.148 ± 0.01 for the J-band and χ²_model1 = 3.171 ± 0.02 for the Ks-band), we continue our analysis by adding extra functions to the model. We use the Sérsic profile, expressed as

I(R) = I_e exp{ −κ_n [ (R/R_e)^(1/n) − 1 ] },

where I(R) is the surface brightness at the radius R, and κ_n is a parameter coupled to the Sérsic index n through Γ(2n) = 2γ(2n, κ_n), in such a way that I_e is the surface brightness at the effective radius R_e, where the galaxy contains half of the light (Graham & Driver 2005). The Sérsic profile has the ability to represent different stellar distributions, such as elliptical galaxies, classical and pseudo bulges, and bars, just by varying its Sérsic index n. Hence, when n = 4 the Sérsic function is known as the de Vaucouleurs profile (widely used to fit elliptical galaxies and classical bulges); when n = 1 it is an exponential function; and when n = 0.5 it is a Gaussian. Given that NLSy1s are known to be typically hosted in disc galaxies (Crenshaw et al. 2003b), we also explore models that include the exponential function, expressed as

I(R) = I_0 exp(−R/h_r),

where I(R) is the surface brightness at the radius R, I_0 is the central surface brightness and h_r is the disc scale length.

[Figure 1 caption: Horizontal arrows show the suitable (blue thick arrows) and unsuitable (red thin arrows) stars for the PSF construction.]
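The half-light condition that couples κ_n to the Sérsic index n (Graham & Driver 2005) can be evaluated numerically. A minimal pure-Python sketch, illustrative only and unrelated to the GALFIT code used in the paper:

```python
import math

# Illustrative sketch (not the GALFIT implementation): kappa_n is fixed by
# the half-light condition Gamma(2n) = 2 * gamma(2n, kappa_n), equivalently
# P(2n, kappa_n) = 1/2, where P is the regularized lower incomplete gamma
# function. Solved here by bisection with a Simpson-rule integral.

def reg_lower_gamma(a, x, steps=4000):
    """Regularized lower incomplete gamma P(a, x) via Simpson's rule."""
    if x <= 0.0:
        return 0.0
    h = x / steps
    f = lambda t: t ** (a - 1.0) * math.exp(-t) if t > 0.0 else 0.0
    s = f(0.0) + f(x)
    for i in range(1, steps):
        s += (4.0 if i % 2 else 2.0) * f(i * h)
    return (h / 3.0) * s / math.gamma(a)

def sersic_kappa(n, tol=1e-6):
    """kappa_n such that R_e encloses half the total Sersic light."""
    lo, hi = 0.0, 4.0 * n + 10.0          # the root lies well inside this bracket
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if reg_lower_gamma(2.0 * n, mid) < 0.5:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

def sersic_profile(R, Ie, Re, n):
    """Surface brightness I(R) = Ie * exp(-kappa_n * ((R/Re)^(1/n) - 1))."""
    return Ie * math.exp(-sersic_kappa(n) * ((R / Re) ** (1.0 / n) - 1.0))
```

For n = 4 (de Vaucouleurs) the solver returns κ ≈ 7.67 and for n = 1 (exponential) κ ≈ 1.68, matching the commonly tabulated values; by construction, I(R_e) = I_e for any n.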
Uncertainties

Since the error bars produced by GALFIT are purely statistical, and thus unrealistically small (Häussler et al. 2007; Bruce et al. 2012), we follow Kotilainen et al. (2011) and León Tavares et al. (2014) to derive the uncertainties of our fits. We identify the model parameters that could contribute most significantly to the errors. Regarding the PSF, spatial variations might affect the structural parameters of the galaxy model and, to a lesser extent, its magnitudes. To account for this, we compare the brightness distribution of our PSF model with the brightness distribution of each star in the field, whose only difference is assumed to lie in their positions. The sky background, on the other hand, can affect the magnitudes to a larger extent (compared to the PSF) and, to a lesser but still significant extent, the structural parameters of the galaxy model. Even though our imagery is in NIR bands, and thus the sky counts are ≈ 0, they may show large variations. To account for this, we run several sky fits in separate regions of 300 × 300 pixels (70″ × 70″) and use the mean and ±1σ of the resulting values to fit the galaxy. The outcome is a set of models with slightly different magnitudes, whose spread we take as the error due to the sky background. Model magnitudes are also affected by uncertainties in the zero-point, estimated from the magnitudes retrieved from 2MASS. Thus, zero-point magnitude variations (±0.1 mag) are also added to the errors in the magnitudes of our final models.

Fit of the isophotes

In addition to the morphological decomposition, we perform an analysis based on ellipse fits to the galaxy isophotes (Wozniak et al. 1995; Knapen et al. 2000; Laine et al. 2002; Sheth et al. 2003; Elmegreen et al. 2004; Marinova & Jogee 2007; Barazza et al. 2008). We perform this analysis using the ELLIPSE task in IRAF. This procedure reads a 2-dimensional image and fits isophotes to its light distribution.
The fits start from an initial guess of the x and y centre, ellipticity (ε) and position angle (PA). Each extracted isophote is represented by its surface brightness (µ), semimajor-axis length (R), PA and ε. The fitted isophotes are used to represent and analyse the azimuthally averaged surface-brightness profiles of the galaxy and of the models derived from the photometric decomposition. Furthermore, the sample of extracted isophotes is used to identify changes in PA and ellipticity that could be associated with different structures within the galaxy morphology.

STRUCTURE OF FBQS J164442.5+261913

In order to characterize the morphology of FBQS J164442.5+261913, we first assume that it is hosted by an elliptical galaxy, since only these types of galaxies are known to launch powerful relativistic jets able to produce γ-rays (Marscher 2009). Thus, we add a Sérsic profile to the single-PSF model that represents the AGN contribution. We constrain the Sérsic index to n > 2.0, given the observational evidence that the light profiles of most ellipticals and classical bulges are better described by a Sérsic function with n > 2, whereas most disc-like bulges have n < 2 (Fisher & Drory 2008; Gadotti 2009). By means of a χ² test, we find that the improvement of this model (hereafter model 2) is equal for the J- and Ks-bands (χ²_model2/χ²_model1 = 0.42 for the J-band and 0.42 for the Ks-band). The images of the galaxy and the models, as well as the azimuthally averaged surface-brightness profiles of the galaxy, the model and the sub-components of the model for each band, are shown in Figure 3. The J-band residual (top panel of Figure 4) shows a ring-like feature interrupted in its eastern part. Neither the residuals nor the surface-brightness profiles of the stars fitted with our PSF model show similar features. Moreover, its radius (∼3.5″) exceeds by far the FWHM of our PSF (∼0.75″). Hence, we consider the ring to be a real component of the host galaxy.
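As a consistency check (not part of the original analysis), the angular-to-physical conversions quoted in the text imply a scale of roughly 2.5 kpc per arcsecond; the pairs quoted elsewhere in the paper can be cross-checked against it:

```python
# The text pairs 3.2 arcsec with 8.1 kpc, which fixes the angular scale.
SCALE_KPC_PER_ARCSEC = 8.1 / 3.2  # ~ 2.53 kpc/arcsec

def to_kpc(arcsec):
    """Convert an angular size at the galaxy's distance to kpc."""
    return arcsec * SCALE_KPC_PER_ARCSEC

print(round(SCALE_KPC_PER_ARCSEC, 2))  # 2.53
print(round(to_kpc(5.15), 1))          # ~ 13.0, close to the quoted 13.10 kpc
```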
The Ks-band residual (top panel of Figure 5) shows an elongated and roughly symmetric structure with a length similar to the diameter of the ringed feature (∼3.2″ / ∼8.1 kpc). In both bands, an unfitted bump in the light distribution of the galaxy is observed (from ∼2.8″ to ∼3.7″), which is consistent with the ring and with the two light enhancements close to the ends of the elongated structure. Since the residuals are still considerable, we include an extra component in the last model (Figure 6). We choose an exponential function, since it can represent the likely presence of a disc in the host galaxy of a typical NLSy1 (we call this model 3). The improvement over model 2 is χ²_model3/χ²_model2 = 0.65 for the J-band and 0.79 for the Ks-band. From the residual images, we observe that the ring in the J-band (bottom panel of Figure 4) now seems better defined. Moreover, in the Ks-band (middle panel of Figure 5), hints of this structure emerge, whereas the elongated structure disappears. The elongated feature and the light enhancements might be explained by the presence of a stellar bar showing ansae (bright regions at the ends of bars, observed in ∼40% of SB0 galaxies; Martinez-Valpuesta et al. 2007; Laurikainen et al. 2007). Such a bar would be more easily detected in the Ks-band, since neither young luminous stars nor dust strongly affect its observed emission (Rix & Rieke 1993). Nevertheless, a powerful AGN, a bright bulge and a disc might outshine the bar, making its presence less evident. The upper panels of Figure 7 show an image of FBQS J164442.5+261913 in the Ks-band with the AGN and bulge contributions subtracted (using a bulge+AGN+disc model), revealing an elongated and symmetrical feature that resembles a stellar bar over the underlying disc.
In order to confirm the existence of a bar in the host galaxy of FBQS J164442.5+261913, we apply another widely used method for detecting and describing bars: the ellipse fit of the galaxy isophotes (see the plot in the lower panel of Figure 7). When ε and PA are plotted against radius, a bar is characterized by a local maximum in ε together with a roughly constant PA (typically ∆PA ≲ 20°) along the bar (Wozniak et al. 1995; Jogee et al. 1999; Menéndez-Delmestre et al. 2007). We see a region that fulfils these criteria (∼2.6″ < radius < ∼3.2″, with PA ∼ 78°), again suggesting the presence of a bar. Since the ellipse-fit method also hints at the presence of a bar, we proceed to characterize its morphology (Figure 8). We add a Sérsic profile to model 3 of the Ks-band to fit the light distribution of the stellar bar (we call this model 4). As initial guesses, we use a Sérsic index n = 0.5 (Greene et al. 2008) and the ε and PA derived from the ellipse fit of Figure 7. The improvement with respect to the model without a bar is χ²_model4/χ²_model3 = 0.90. From the residual image (lower panel of Figure 5) we can see that, in general, the residuals decrease, the hints of the ring remain and the ansae are better defined. The bump remains unfitted by the functions included in the model, which is expected given that it is caused by the ring and the bar ansae. The parameters derived from every model analysed are listed in Table 1.

[Figure caption: North is up and east is to the left. To enhance the S/N and to detect faint structures, the residuals were smoothed to < 1″ resolution. The segmented white circle has a 3.2″ radius and guides the eye along the ring feature. Blue arrows show the light enhancements at the ends of the bar (ansae). A likely minor-merger feature is observed in the eastern part of the galaxy (from R ≈ 3″ up to R ≈ 5″), with a J-band surface brightness µ = 21.0 ± 0.5 mag/arcsec², which originates the blue region at 3″ in the J − Ks colour profile of Figure 9.]
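The bar signature just described, a local maximum in ellipticity at a nearly constant position angle, can be expressed as a simple scan over an isophote table. The table below is hypothetical, shaped only to resemble the reported signature (ε peaking near R ∼ 2.9″ at PA ∼ 78°), not the actual ELLIPSE output:

```python
# Hypothetical isophote table: (R_arcsec, ellipticity, PA_deg).
isophotes = [
    (1.0, 0.10, 40.0),
    (2.0, 0.18, 60.0),
    (2.6, 0.30, 77.0),
    (2.9, 0.34, 78.0),
    (3.2, 0.31, 79.0),
    (4.0, 0.15, 55.0),
]

def bar_candidates(table, d_pa_max=20.0):
    """Return (R_inner, R_outer) windows where ellipticity has a local
    maximum and the PA stays within d_pa_max degrees (the bar criteria)."""
    out = []
    for i in range(1, len(table) - 1):
        r0, e0, pa0 = table[i - 1]
        r1, e1, pa1 = table[i]
        r2, e2, pa2 = table[i + 1]
        local_max = e1 >= e0 and e1 >= e2
        pa_stable = max(pa0, pa1, pa2) - min(pa0, pa1, pa2) <= d_pa_max
        if local_max and pa_stable:
            out.append((r0, r2))
    return out

print(bar_candidates(isophotes))  # [(2.6, 3.2)]
```

With this toy table, the scan recovers a single bar window spanning 2.6″–3.2″, mirroring the range quoted in the text.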
Figure 9 shows the J − Ks colour profile of the host galaxy of FBQS J164442.5+261913. The AGN contribution has been subtracted using the best-fit model for each band. In general, as we move from the centre to the outer parts of the host galaxy, the colour decreases from J − Ks = 4.33 mag down to J − Ks = 3.45 mag at R = 1.20″, showing that the central region (the bulge) is the reddest part of the host galaxy. From R = 1.20″ towards larger radii, the colour increases up to a local maximum of J − Ks = 3.63 mag at R = 1.55″. We link this increase in colour to the bar, since this is where its influence is greatest (see Figure 8). A second increase in colour is observed in the bar region from R = 2.20″ to R = 2.85″, with a maximum of J − Ks = 3.80 mag. Between 2.85″ < R < 3.15″, a blue region is observed, which corresponds to the eastern feature (inside the ring) in Figures 4 and 5. We observe a last local maximum at R = 3.30″ with a colour J − Ks = 3.80 mag. We associate this colour with the ring, with no influence from the ansae since, according to the observations of Martinez-Valpuesta et al. (2007), ansae do not show any colour enhancement (probably because they are a dynamical phenomenon). Finally, as we move outward, the disc becomes bluer, reaching an average colour J − Ks = 3.70 mag.

THE HOST GALAXY OF FBQS J164442.5+261913

According to the results shown in Table 1, the host of FBQS J164442.5+261913 can be classified as a barred lenticular galaxy (SB0). In addition to the ansae morphology, which is frequent in S0 galaxies (∼40% of S0s; Laurikainen et al. 2007), both the bulge and the disc fulfil the characteristics of lenticular galaxies presented in Laurikainen et al. (2010). They also find that, as in spirals (Hunt et al. 2004; Noordermeer & van der Hulst 2007), the luminosity of the bulge in S0s correlates with the luminosity of the disc.
According to this correlation (M_K,disk = 0.63 M_K,bulge − 9.3), the bulge of FBQS J164442.5+261913 should have a disc with an absolute magnitude M_K,disk = −24.55 ± 0.20, consistent with the absolute magnitude derived through the morphological analysis in this work (M_K,disk = −24.85 ± 0.25). The parameters derived in this work for the components of FBQS J164442.5+261913 are consistent with those of pseudobulges. Weinzirl et al. (2009) find a connection between pseudobulges and small Sérsic indices (n < 2.0), consistent with n = 1.8 ± 0.31 for the J-band and n = 1.9 ± 0.35 for the Ks-band derived for FBQS J164442.5+261913. Independently, Fisher & Drory (2008) find that pseudobulges and their discs are associated through their effective radii and scale lengths as r_eff/h_r = 0.21 ± 0.10, consistent with FBQS J164442.5+261913 (r_eff/h_r = 0.14 ± 0.07 for the J-band and 0.14 ± 0.06 for the Ks-band). By contrast, they find that classical bulges have large r_eff/h_r ratios (r_eff/h_r = 0.45 ± 0.28). Additionally, when we compare the structural parameters of Table 1 with the results of La Barbera et al. (2010), we find that FBQS J164442.5+261913 lies below the Kormendy relation (both for the J- and the Ks-band), consistent with Gadotti (2009), who finds that pseudobulges tend not to follow the Kormendy relation. Finally, if a galaxy hosts a pseudobulge, its centre should consist mostly of Population I material (young stars, gas and dust; Kormendy & Ho 2013). If we bear in mind that, in cases of a recent strong starburst, supergiants contribute to the K-band luminosity (Minniti & Rix 1996), then the J − Ks colour gradient of the host galaxy of FBQS J164442.5+261913 agrees with this pseudobulge classification criterion: in the central region, the Ks-band luminosity is stronger relative to the J-band than in any other region of the galaxy.
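As a quick arithmetic check of the bulge–disc scaling quoted at the start of this section: with the intercept sign taken so that the quoted magnitudes reproduce, the bulge magnitude reported later in the text (M_Ks = −24.21) predicts the quoted disc magnitude of −24.55.

```python
def predicted_disc_mag(m_bulge, slope=0.63, intercept=-9.3):
    """Laurikainen et al. (2010)-style bulge-disc relation as quoted above;
    the intercept sign here is inferred from the quoted numbers."""
    return slope * m_bulge + intercept

print(round(predicted_disc_mag(-24.21), 2))  # -24.55, as quoted in the text
```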
So far, only one other galaxy able to launch a relativistic jet powerful enough to accelerate particles up to γ-ray energies is known to host a pseudobulge: PKS 2004-447 (Kotilainen et al. 2016). We now evaluate whether the parameters derived for the bar in FBQS J164442.5+261913 are in accordance with those of active early-type galaxies. Using the radius of maximum ellipticity from the ellipse fits to the bar region as the bar length (Marinova & Jogee 2007), we find that the length of the bar in FBQS J164442.5+261913 is r_bar = 8.13 ± 0.25 kpc; normalized to the disc scale length h_r, this gives r_bar/h_r = 1.00 ± 0.06. On the other hand, we can calculate the bar strength f_bar (Abraham & Merrifield 2000; see also Whyte et al. 2002; Aguerri et al. 2009; Laurikainen et al. 2007; Hoyle et al. 2011), defined as

f_bar = (2/π) [ arctan (b/a)^(−1/2) − arctan (b/a)^(+1/2) ],

where b/a is the minor-to-major axis ratio of the bar. We obtain a bar strength f_bar = 0.17 ± 0.03. According to, e.g., Aguerri et al. (2009) and Laurikainen et al. (2007), the bar in FBQS J164442.5+261913 is long and weak, consistent with S0 galaxies as found by Laurikainen et al. (2002). The bar in FBQS J164442.5+261913 might be related to the ring through resonances (Athanassoula et al. 2010, and references therein), given that secular evolution is likely the main evolutionary process currently in progress in its host galaxy. The ring-like feature might therefore be the result of gas redistribution through angular-momentum transport driven by the bar (i.e. a ring built by a rotating bar interacting with the disc gas). In this scenario, the gas is moved by the bar into orbits near the dynamical resonances (for a review, see Athanassoula et al. 2013). Another scenario for the ring formation in FBQS J164442.5+261913 is a minor merger event. Athanassoula et al. (1997) show that the interaction of a small satellite galaxy with a barred galaxy can produce a ring that encloses the bar. Also, (Mapelli et al.
2015) show that minor mergers with gas-rich satellites can explain the formation of rings in lenticular galaxies. This scenario is supported by the residuals in the J-band (see Figure 4), where a feature of surface brightness µ = 21.0 ± 0.5 mag/arcsec² appears about ∼5.15″ (13.10 kpc) east of the centre of FBQS J164442.5+261913 (resembling the Seyfert galaxy NGC 1097, whose light distribution is strongly affected by a small satellite galaxy; Higdon & Wallin 2003). This feature seems to interrupt the shape of the ring in the eastern part of the galaxy and may even cause the colour enhancement at 3.0″. An alternative, and more likely, scenario was proposed by Marino et al. (2011) for their sample of lenticular galaxies: the formation of the ring might be a joint effect of secular evolution driven by the bar and of gas accreted from one or more small satellite galaxies. Moreover, since S0 galaxies lack a gas reservoir of their own (unlike spirals), this scenario also explains the origin of the gas needed to grow a massive bulge (M_J = −22.42 ± 0.40 and M_Ks = −24.21 ± 0.32) and to activate the black hole in FBQS J164442.5+261913, as well as the way this gas is channelled to the innermost parts of the galaxy (i.e. through angular-momentum transport driven by the bar; Shlosman et al. 1990; Ohta et al. 2007). We finally note that the parameters of the bar and the ring hosted by FBQS J164442.5+261913 are similar to those of the bar of PKS 2004-447 (Kotilainen et al. 2016) and the ring of 1H 0323+342 (León Tavares et al. 2014). The bars of PKS 2004-447 and FBQS J164442.5+261913 have lengths r_bar ≈ 7.80 kpc and r_bar = 8.13 ± 0.25 kpc (taking the length of the bar as the maximum in the ellipticity profile), respectively, with absolute Ks-band magnitudes M_K,bar = −23.44 ± 0.38 and −23.86 ± 0.52, respectively; the rings of 1H 0323+342 and FBQS J164442.5+261913 measure ∼8.24 kpc and ∼8.13 kpc, respectively.
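The Abraham & Merrifield (2000)-style bar strength used in this section can be sketched as follows. The axis ratio below is an assumed value chosen only to reproduce the reported f_bar ≈ 0.17; it is not a measured quantity from this work.

```python
import math

def bar_strength(b_over_a):
    """Bar strength f_bar: 0 for a round bar (b/a = 1), -> 1 for an
    infinitely thin bar (b/a -> 0)."""
    return (2.0 / math.pi) * (math.atan(b_over_a ** -0.5)
                              - math.atan(b_over_a ** 0.5))

# An assumed bar axis ratio near b/a ~ 0.58 reproduces the reported value:
print(round(bar_strength(0.58), 2))  # 0.17
```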
Moreover, PKS 2004-447 shows an arm-like feature whose origin might be related to a minor merger event (see Figure 19 of Athanassoula et al. 1997) and that, at some point, might evolve into a ring similar to the feature seen in FBQS J164442.5+261913. According to the most widely accepted processes for jet formation, the Blandford-Znajek (BZ; Blandford & Znajek 1977; MacDonald & Thorne 1982; Penna et al. 2013) and Blandford-Payne (BP; Blandford & Payne 1982) mechanisms, jet launching and collimation require very massive black holes with high spins and strong magnetic fields. All of this requires major mergers to occur, which fits well with previous observations (McLure et al. 2004; Sikora et al. 2007) and with the jet-formation paradigm (in which powerful relativistic jets are launched from giant elliptical galaxies; Marscher 2009). However, it is completely at odds with the morphology of FBQS J164442.5+261913, which has a bar and a disc, lacks a classical bulge, and has a black hole mass (as estimated from the FWHM of its BLR lines and the continuum luminosity; Yuan et al. 2008) of M_BH ∼ 8 × 10⁶ M_⊙ (although previous studies show that values M_BH ≳ 10⁸ M_⊙ can be obtained when estimating the black hole mass by different methods; Baldi et al. 2016; Calderone et al. 2013).

SUMMARY

We have performed a detailed photometric analysis of the γ-NLSy1 FBQS J164442.5+261913, using deep near-infrared imagery in the J- and Ks-bands taken with the near-infrared camera NOTCam on the NOT. The main results of this analysis are:

• The surface-brightness distribution of FBQS J164442.5+261913 is best fitted by a model consisting of the sum of a nuclear source, a bulge and a disc. In addition to these components, a stellar bar is detected and modelled in the Ks-band image. The morphological parameters derived from our analysis show that the bulge, the disc and the bar of the host galaxy of FBQS J164442.5+261913 fulfil the characteristics of SB0 galaxies.
• We find that the Sérsic index and the bulge-disc relations for FBQS J164442.5+261913 are in good agreement with those of pseudobulges. Therefore, the bulge in the host galaxy of FBQS J164442.5+261913 is statistically most likely to be a pseudobulge.

• In both the J- and Ks-bands, we detect a ring enclosing the bar that is interrupted by what seems to be a recent minor merger, which might hint at the formation process of such an inner ring, as suggested by Athanassoula et al. (1997).

• When comparing the ring and bar in FBQS J164442.5+261913 to the ring and bar in 1H 0323+342 and PKS 2004-447 (the only two γ-NLSy1s whose morphologies have been analysed until now), we find similarities in size and magnitude. Likewise, PKS 2004-447 shows an arm-like feature whose origin might be related to a minor merger event and that, at some point, might become a ring similar to the inner ring in FBQS J164442.5+261913.

We conclude that the prominent bar in the host galaxy of FBQS J164442.5+261913 has contributed most to its overall morphology by driving strong secular evolution, which plays a crucial role in the onset of the nuclear activity and the growth of its massive (pseudo)bulge. Minor mergers, in conjunction, are likely to provide the necessary fresh supply of gas to the central regions of the host galaxy. Although our findings strongly suggest that secular evolution is the main process taking place in FBQS J164442.5+261913, the available data are insufficient to address other questions, such as whether its (pseudo)bulge shows enhanced star-formation activity or whether it is rotation-dominated (as it should be, given its disky origin; Kormendy & Ho 2013). We therefore encourage multi-wavelength imaging and integral-field spectroscopy (IFS) observations of this galaxy and of the whole sample of radio-loud NLSy1s.
Structural, non-volatile magnetization, and dielectric studies on zinc-doped BiFeO3

We investigate the structure, non-volatile magnetization, and dielectric properties of zinc-doped bismuth ferrite (Bi1−zZnzFeO3; z = 0, 0.05, and 0.15) powder synthesized via the sol-gel auto-combustion method. We found that Zn doping induced the presence of Bi25FeO40 as a secondary phase and decreased the lattice parameters of the bismuth ferrite (BFO) phase structure. Furthermore, the doping decreases the dielectric constant and increases the slope of the magnetization and the magnetic coercivity. The increase of the magnetic coercivity implies enhanced non-volatile magnetization properties.

Introduction

Bismuth ferrite (BiFeO3, BFO) displays a broad range of interesting properties, including ferroelectric, magnetic, ferroelastic and electro-optical behaviour [1,2]. Because of these phenomena, BFO has been widely studied since its discovery. In recent years, considerable interest has arisen in studying the multiferroic properties of BFO [3]. BFO possesses a coexisting ferroelectric-antiferromagnetic order around room temperature (300 K), with a ferroelectric transition temperature of 1100 K and an antiferromagnetic Néel temperature of up to 640 K [4]. Enhancing the ferromagnetic properties is one of the main issues in developing BFO materials, as it offers a new class of applications such as multi-stage data-storage devices [1,5]. Recently, low-dimensional BFO multiferroic nanostructures combined with site-engineering capabilities have shown improved ferromagnetic properties compared to those of the bulk material [6,7]. It is also reported that the dielectric constants of nanoparticle-doped BFO are much higher than those of the corresponding ceramics and thin films [7-9]. For the integration of doped BFO into functional structures, it is crucial to investigate the stability of BFO and the effect of possible parasitic phases on the magnetic properties [6].
For non-volatile memory device applications, a small magnetic coercivity may compromise the stability of the magnetization, since the magnetic poles can then switch easily. The effect of Zn doping on the Bi-site of BFO on the magnetic properties is rarely reported. Meanwhile, Liu et al. [10] found that Zn substitution on the Fe-site improved the magnetic saturation. In contrast, Xu et al. [11] found that Zn substitution on the Fe-site made the ferromagnetism in BFO vanish at room temperature, where Zn doping was responsible for oxygen vacancies and for interrupting the Fe atom chains. However, doping Zn into the Bi-site is also expected to improve the ferromagnetic properties and could change the dielectric properties. Based on these facts, this research investigates the influence of Zn doping on the enhancement of the magnetic coercivity and the dielectric constant of BFO nano-powder for non-volatile magnetization, while maintaining the overall structure of BFO. Following the procedures of previous studies in producing nano-sized particle samples, sol-gel is chosen as the preparation method [4,6,10,11].

Experimental Methods

Zinc-doped bismuth ferrite (Bi1−zZnzFeO3; z = 0, 0.05, 0.15) powders were synthesized via a low-temperature sol-gel auto-combustion method. In this method, bismuth nitrate (Bi(NO3)3·6H2O) and iron(III) nitrate (Fe(NO3)3·9H2O) from Sigma-Aldrich, and zinc metal dissolved in nitric acid, were used as precursors. All the precursors were prepared in proportional compositions according to the wt% fractions of BiFeO3. Citric acid as a chelating agent was added in a 3:1 molar ratio with the metal ions. The mixed solution was stirred and heated at 80 °C until it became a viscous, light-brownish gel, then dried at 120 °C for 24 h until it became a xerogel. The xerogel was then further ground and calcined at 600 °C for 5 h.
The BFO phase structure was studied by XRD with a PANalytical diffractometer (X'Pert Pro, Cu-Kα1, λ = 1.5405 Å) at room temperature. The crystal structure and the lattice parameters were refined using HighScore Plus, based on the XRD results. The magnetic measurements were carried out using a vibrating-sample magnetometer (VSM) over a magnetic field range of 0-14 kOe at room temperature (~27 °C). The dielectric measurements were carried out with a four-point-probe system on an Agilent E4980A LCR meter over the frequency range 50 kHz to 2 MHz at room temperature.

X-ray diffraction analysis

The phases and crystalline structures of the samples within the range 2θ = 20°-70° were investigated by X-ray diffraction (XRD). The peak patterns of the Bi1−zZnzFeO3 (z = 0, 0.05, 0.15) powdered samples are displayed in figure 1, and the parameters of both phase structures are given in Table 1. It is clearly seen that the XRD patterns of the samples match the BFO phase structure. The characteristic hkl peaks (104) and (110) between 2θ = 31°-32° confirm the presence of the rhombohedral (hexagonal) structure with space group (s.g.) R3c (ICSD 98-018-0501) [1,6,12]. A small amount of Bi25FeO40, with a cubic structure and s.g. I23 (ICSD 98-004-1937), was also detected at z = 0.05 and 0.15 as a secondary phase [13]. The incorporation of Zn into the structure reduced the lattice parameters of BFO. Zn, which has a smaller atomic radius and valence than Bi, replaces Bi at its atomic position. This decreases the BFO cell volume, and the atomic packing becomes denser. From these results, it is suspected that the smaller atomic radius of Zn induced the secondary phase [14].

Magnetic characterizations

Magnetic measurements at room temperature were performed to investigate the magnetic ordering in the doped BFO powdered samples.
It is already known that, in bulk form, BFO possesses a G-type antiferromagnetic order with a low magnetic saturation. However, an enhancement of the magnetization by site engineering has been reported [5,7,8]. The room-temperature magnetization hysteresis loops of the samples are shown in figure 2. According to the XRD results, the presence of the secondary phase in the last two samples (z = 0.05, 0.15) increased the slope of the magnetic saturation [15]. This slope indicates the presence of an antiferromagnetic phase appearing between the ferromagnetic phases [16]. Therefore, the increases in remanence and saturation that shape the magnetic properties were not caused by the secondary phase but by Zn doping in the BFO structure.

Dielectric studies

The frequency dependence of the dielectric constant (εr) of Bi1−zZnzFeO3 (z = 0, 0.05, 0.15) at room temperature in the frequency range 50 kHz-2 MHz is shown in figure 3. From the figure, we can see that the decrease of the εr value with increasing frequency reflects the limited response of the polarization beyond a certain frequency. The dipoles in the sample can switch in phase with the alternating electric field, leading to a high total polarizability in the low-frequency region. When the frequency increases towards the high-frequency region, the ability of the dipoles to keep their orientation aligned with the switching field at a fast enough rate decreases, and hence the εr values decrease [2,17]. Below 100 kHz, the dielectric constant decreases from εr = 5398.84 for z = 0 to εr = 3750.65 for z = 0.05. Meanwhile, at 100 kHz and above, the dielectric constants of both samples are similar (εr = 1856.01 for z = 0 and εr = 1857.35 for z = 0.05). The presence of Zn in the z = 0.05 sample, which affects the density of the BFO phase, seems likely to decrease the dispersion of the dielectric constant [18].
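The dipole-relaxation argument above can be illustrated with a generic Debye model. This is only a qualitative sketch with made-up parameters (eps_static, eps_inf, and tau below are assumed values chosen to echo the order of magnitude of the quoted εr numbers), not a fit to the measured data:

```python
import math

def debye_eps_real(freq_hz, eps_static, eps_inf, tau_s):
    """Real part of the Debye permittivity: falls from eps_static toward
    eps_inf once the driving frequency outruns the relaxation time tau."""
    w = 2.0 * math.pi * freq_hz
    return eps_inf + (eps_static - eps_inf) / (1.0 + (w * tau_s) ** 2)

# Assumed parameters, echoing the quoted eps_r range for the z = 0 sample:
eps_50kHz = debye_eps_real(5e4, 5400.0, 1850.0, 2e-6)
eps_2MHz = debye_eps_real(2e6, 5400.0, 1850.0, 2e-6)
print(eps_50kHz > eps_2MHz)  # True: permittivity drops with frequency
```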
Conclusions

The effects of Zn doping on the structural, magnetic, and dielectric properties of Bi1−zZnzFeO3 (z = 0, 0.05, 0.15) at room temperature have been studied. It was found that the lattice parameters of BFO decrease after Zn doping; the unit cell becomes denser, which is associated with the presence of Bi25FeO40 as a secondary phase. The Zn content also induces an increase in the magnetic saturation and the existence of an antiferromagnetic phase that couples with the ferromagnetic one. The increment of the coercivity value makes the ferromagnetism more non-volatile. Furthermore, Zn doping decreases the dielectric constant at low frequencies.
A Comparative Study of Three Modern Translations of the Old English Lines (675-702) of Beowulf

In this article, I compare the modern translations of lines 675-702 of Beowulf in Seamus Heaney's 2000 translation, Roy Liuzza's 1999 translation, and Edwin Morgan's 1952 translation. I begin with Morgan's text, since it is the earliest translation, and end with Heaney's, as it is the most recent one. My evaluations of the three texts take into consideration the syntax, the poetic diction, and the approaches used by Heaney, Liuzza, and Morgan. I choose these lines in particular because they describe the confrontation with Grendel, and because an evaluation of the translations of the entire epic would be an overwhelming task. The article begins with a brief introduction to Old English structure and typological description, so that we may understand the challenge the aforementioned translators of Beowulf met as they worked with the original manuscript, and so that we can evaluate acutely the final product of their translations of these lines.

Introduction: Old English structure and typology

The structure of Old English is quite different from the structure the modern reader of English expects. English nowadays is usually described as a language in which the sentence begins with the subject, followed by the verb and an object. Sometimes we might generate sentences without direct objects, since some verbs in English are intransitive. But this order of the basic components of the English sentence (Subject/Verb/Object) only began to become the regular pattern of English after the Normans invaded England in the late 11th century. In other words, Old English does not always follow this pattern.
The problem is that, if Old English does not follow the familiar structure of its modern version, any complexity in understanding Modern English would be far more challenging in Old English. While Modern English, which we usually describe as a Subject/Verb/Object language, can be hard to interpret on some occasions, Old English might be described as either a Subject/Verb/Object or a Subject/Object/Verb language, and thus it is really hard to analyse or understand on many occasions.

The problem of understanding Old English grows even worse when we consider certain structural patterns that relate to nouns and verbs, as Greenberg noticed in the 1960s. Greenberg insists that languages that follow the VO structure share certain patterns, as do those that follow the OV one. These patterns characteristically show up in word ordering or typology, and they include the position of adjectives and genitives relative to nouns. Languages that follow the verb/object order tend to have the noun first, followed by adjectives or genitives. On the other hand, languages that follow the object/verb order tend to have adjectives or genitives precede nouns (in Fennel, 2001). Nevertheless, Modern English is an exception; although it follows the verb/object order, its adjectives and genitives precede nouns (Trask, 1996). This exceptional structure of Modern English does not straightforwardly carry over to Old English, since English has undergone radical changes over time (McMahon, 1994). Because English has shown radical changes in its structure since the Norman conquest of England, it might be located in a state of transition between the two aforementioned types, the Verb/Object and Object/Verb orders (Lehmann, 1973). In other words, the modern word order (Verb/Object) of English developed from the Object/Verb pattern of Old English. Thus, translators of Old English manuscripts must take into consideration the structural differences between Old and Modern English, which can be a very challenging task.
The challenge in translating Old English comes from the lack of typological harmony in this language. Old English is fundamentally an Object/Verb language, since it descends from North West Germanic, which is essentially an Object/Verb language. North West Germanic itself shows a lack of typological harmony, since nouns might precede adjectives and genitives might appear both before and after nouns. Furthermore, there are some pieces of evidence in which Old English shows a subject/verb/object typology, as it had been undergoing typological change from an Object/Verb language to a Verb/Object one (Lass 1994). For instance, a sentence like "Eanred mec agrof," which means "Eanred me carved," shows the regular form of Old English as an Object/Verb language (Trask 1996, p. 149). Thus, there is no doubt that Old English is very similar to its Germanic ancestor. Toward the end of the 11th century, after the Norman conquest of England, the Verb/Object order had developed into the basic structure of the English language, but adjectives remained unstable, appearing both before and after nouns even after the Verb/Object order became the main structure (ibid, p. 150). Thus, the translator of Old English must work hard on each sentence to figure out its final meaning.
The following section deals with lines (675-702) of Beowulf. I compare three modern translations of the aforementioned lines, which I choose as a sample of study for three important reasons. First, in these lines Beowulf talks about his coming confrontation with Grendel. Second, these lines include some cultural aspects that are related to the time period. Third, a comparison of the three translations of the entire epic would be an overwhelming task. My evaluations explore how the three translations deal with Beowulf's character and how they transfer the cultural aspects of the time period into their modern texts or adaptations of these lines. I will break the two passages into lines and comment on them in the three translations, beginning with Heaney's translation and ending with Morgan's.

Discussion

Seamus Heaney insists on the importance of maintaining the cultural aspects of Anglo-Saxon life in any successful translation. He thinks that translating Beowulf and turning it into modern English is a formidable task; in Heaney's words it is "like trying to bring down a megalith with a toy hammer." Along with this difficulty, Heaney wants to give his translation the "metrical shape" and "the power of verse" while re-writing a modern version of the original text. Heaney claims that part of him "had been writing Anglo-Saxon" while translating the poem (Heaney xxxiii). He uses diction, including archaic words, that reminds the reader of the Anglo-Saxon origins of the text. Unfortunately, the Broadview Anthology of British Literature does not introduce us to the approach used by Liuzza. But it is clear that he tries to maintain some archaic diction in his translation, along with some Anglo-Saxon aspects as well.
Heaney's attempts might not have satisfied Morgan, who completed his own translation nearly half a century earlier. In his introduction, Morgan argues that all the translations of Beowulf before 1952 had failed to produce a modern rendering that looks "like twentieth-century English diction," and therefore he decided to do it himself (Morgan xii). Morgan believes that "there is no use being faithful to the poetic archaism of the original" if the meaning will not be understood by modern readers (xiii). He thinks that modern diction will not ruin the beautiful meaning of Beowulf.

Before I start comparing and contrasting the three translations, it is worth sketching the general context of these lines. The twenty-seven lines contain the last words spoken by Beowulf before he goes to bed on the night of the confrontation with Grendel. The lines that follow move on to describe Grendel on his way to Heorot. Interestingly enough, the three translations are radically different when we consider the way they treat the character of Beowulf. Morgan presents a negative image of Beowulf, describing him as cocky and arrogant. Liuzza shows him as a good man who speaks "few boasting words." Heaney thinks that Beowulf is a good prince who is proud of what he has achieved in past battles.

Lines 675-679 in Heaney's translation read as follows:
And before he bedded down, Beowulf,
That prince of goodness, proudly asserted:
"When it comes to fighting, I count myself
As dangerous any day as Grendel."
As we can see above, the difference in presenting Beowulf is clear in the quoted translations. Morgan uses the words "vaunt and vow" to describe Beowulf's speech. Liuzza translates the original text as "few boasting words," representing Beowulf in a positive way. Heaney presents him as a "proud good prince," and we feel Heaney's positive attitude toward Beowulf as well. I also think that Morgan's translation does not make sense when we take lines 677-78 into consideration. Morgan gives us the impression that Grendel is a warrior rather than a beast when he claims that Grendel boasts of his deeds. The fact is that Grendel never speaks throughout the whole epic. Liuzza's translation is somewhat misleading as well, since Grendel attacks people while they are sleeping and never comes during the daytime. Grendel cannot be described as a warrior. Thus, I wholeheartedly believe that Heaney's translation makes more sense. We can also see that Liuzza and Morgan have used similar syntax but different diction, while Heaney uses different diction and syntax. At the same time, Heaney produces a more reasonable translation, since his rendering of the above quoted lines seems closer to both the modern reader and the original text, in my opinion. It is also worth noticing that Heaney uses fewer words in his translation but produces powerful meanings. Nevertheless, the syntax used in the next two lines is similar in the three translations.

Lines 679-680 in Morgan's translation read as follows:
Therefore, not with a sword shall I silence him,
Deprive him of his life, though it lies in my power;

Lines 679-680 in Liuzza's translation read as follows:
and so I will not kill him with a sword,
put an end to his life, though I easily might;

Lines 679-680 in Heaney's translation read as follows:
So it won't be a cutting edge I'll wield
to mow him down, easily as I might.
As we can see in the above quoted lines, Heaney and Liuzza both begin their lines with "so" while Morgan uses "therefore" instead. However, in the lines that follow, Heaney is the only one who uses "sea-rovers at rest beside him." In those lines, Heaney and Liuzza both use the word "bolster" while Morgan uses the modern word "pillow." Interestingly enough, all three use "lay down" in line 688. I do think that Liuzza's and Heaney's translations are fine ones for the aforementioned lines. In line 690, Heaney uses "sea-rovers," Liuzza uses "Seafarer," while Morgan uses "sea-venturers." I think Morgan's choice is the worst, since the word "venturer" does not even appear in the dictionary in connection with the sea. It is neither modern nor archaic. One more time, Heaney is closer to Liuzza than to Morgan. The same applies to lines 691-693:

Lines 691-693 in Morgan's translation read as follows:
Not one of them thought he would ever again
Leave there to find his beloved homeland,
His folks and his fortress, where he once was bred;

Lines 691-693 in Liuzza's translation read as follows:
None of them thought that he should thence
ever again seek his own dear homeland,
his tribe or the town in which he was raised,

Lines 691-693 in Heaney's translation read as follows:
None of them expected he would ever see
His homeland again or get back
To his native people who reared him.
As we can see clearly in the above quoted lines, Heaney is close to Liuzza in terms of syntax while Morgan is radically different in both syntax and diction. Morgan uses the word "bred" in line 693 while Heaney uses the archaic word "rear" and Liuzza uses the word "raise." I do believe Heaney's and Liuzza's choice of words is more suitable for the context than Morgan's "bred." However, we see some general agreement between Morgan and Liuzza on lines 694-698 in terms of syntax and word choice:

Lines 694-698 in Morgan's translation read as follows:
For they knew how sudden death had already
Swept from the wine-hall more than too many
Of those Danish men. The Lord wove them
Fortunate war-fates; solace and support
He gave the Weder-folk,

Lines 694-698 in Liuzza's translation read as follows:
for they had heard it said that savage death
had swept away far too many of the Danish folk
in that wine-hall. But the lord gave them
a web of victory, the people of the Weders,
comfort and support...

Lines 694-697 in Heaney's translation read as follows:
They knew too well the way it was before,
how often the Danes had fallen prey to death
in the mead-hall. But the lord was weaving
a victory on his war-loom for the Weather-Geats.

Morgan and Liuzza both agree on some words like "Weders" and "Danish" to describe Beowulf's people while Heaney uses the words "Danes" and "Weather-Geats" instead. Unlike Morgan and Liuzza, Heaney's word choice is the best because he keeps the references to Beowulf's tribe consistent throughout his text. Heaney also preserves the originality of the text in line 696, which relates to Anglo-Saxon culture, while Liuzza and Morgan are not as successful in conveying the cultural aspects of that line. Once more, Heaney uses fewer words and keeps both the originality and the readability of his text. In lines 698-702, we see Heaney and Liuzza closer in semantics while we see Morgan and Liuzza closer in syntax:

Lines 698-702 in Morgan's translation read as follows:
...
so that they all
Destroy their enemy through the strength of one,
By his powers alone. The truth is shown,
The great hand of God time out of mind
Moving mankind.

Lines 698-702 in Liuzza's translation read as follows:
..., so that they completely, through one
Man's craft, overcame their enemy,
by his own might. It is a well-known truth
that mighty God has ruled mankind
always and forever.

Lines 698-702 in Heaney's translation read as follows:
Through the strength of one they all prevailed;
they would crush their enemy and come
through in triumph and gladness. The Truth is clear:
Almighty God rules over mankind
and always has.

Heaney successfully finishes, at line 697, the part which talks about how God was siding with the Geats, while Liuzza and Morgan keep going halfway into line 698. In this respect, Morgan and Liuzza are close in syntax. But a careful look at the above quoted lines tells us that Liuzza and Heaney are close in semantics, since their choice of words is similar to some extent in lines 700-702. For example, they both use words like "Almighty" and "mighty" to describe God. They also use similar syntax in the last two lines.
Conclusion

I do believe that Morgan's translation fails to present Beowulf in pure 20th-century English diction, as he promised in his introduction, at least in the lines examined in this article. This might explain why Liuzza and Heaney both have used diction that relates in some way to, or comes directly from, the Anglo-Saxon. Liuzza's translation of the same lines seems to follow Morgan's syntactic choices but with different word choices. I do believe that Heaney's translation ranks as the best of the three in terms of syntax and semantics. Liuzza's translation comes in second place, though it is a fine translation in its own right. Morgan comes last, in my opinion, if we take into consideration his syntax and semantics. The differences we noticed in the three translations are natural, since the translators' understanding of Old English is not expected to be identical. Again, and as explained in the introduction, Old English can be misleading, with genitives and adjectives preceding or following nouns.

Lines 675-679 in Morgan's translation read:
Then the good warrior uttered vaunt and vow,
Beowulf of the Geats, before he went to rest:
"I do not count myself of feebler striking-force
In works of war than what Grendel boasts;

Lines 675-679 in Liuzza's translation read:
The good man, Beowulf the Geat,
Spoke few boasting words before he lay down:
"I consider myself no poorer in strength
And battle-deeds than Grendel does himself;
The effects of congenital brain serotonin deficiency on responses to chronic fluoxetine

The importance of reversing brain serotonin (5-HT) deficiency and promoting hippocampal neurogenesis in the mechanisms of action for antidepressants remain highly controversial. Here we examined the behavioral, neurochemical and neurogenic effects of chronic fluoxetine (FLX) in a mouse model of congenital 5-HT deficiency, the tryptophan hydroxylase 2 (R439H) knock-in (Tph2KI) mouse. Our results demonstrate that congenital 5-HT deficiency prevents a subset of the signature molecular, cellular and behavioral effects of FLX, despite the fact that FLX restores the 5-HT levels of Tph2KI mice to essentially the levels observed in wild-type mice at baseline. These results suggest that inducing supra-physiological levels of 5-HT, not merely reversing 5-HT deficiency, is required for many of the antidepressant-like effects of FLX. We also demonstrate that co-administration of the 5-HT precursor, 5-hydroxytryptophan (5-HTP), along with FLX rescues the novelty suppressed feeding (NSF) anxiolytic-like effect of FLX in Tph2KI mice, despite still failing to induce neurogenesis. Thus, our results indicate that brain 5-HT deficiency reduces the efficacy of FLX and that supplementation with 5-HTP can restore some antidepressant-like responses in the context of 5-HT deficiency. Our findings also suggest that feeding latency reductions in the NSF induced by chronic 5-HT elevation are not mediated by drug-induced increments in neurogenesis in 5-HT-deficient animals. Overall, these findings shed new light on the impact of 5-HT deficiency on responses to FLX and may have important implications for treatment selection in depression and anxiety disorders.

INTRODUCTION

Major depression and anxiety disorders are highly prevalent diseases that rank among the leading causes of disability worldwide.
[1][2][3] The negative impact of these disorders is exacerbated by the poor remission rates obtained with standard treatments. 4 Most antidepressants acutely increase the extracellular levels of serotonin (5-HT), but mood improvements typically do not emerge until after weeks of treatment. 5 Consequently, the clinical effects of antidepressants have been hypothesized to result from long-term adaptive responses to antidepressant administration, such as increased neurogenesis. [6][7][8] Antidepressant effects have also been hypothesized to result from the correction of endogenous 5-HT deficiency, 9 but recent studies have shown that mutant forms of the 5-HT synthesis gene, tryptophan hydroxylase 2 (Tph2), 10,11 which could result in impaired 5-HT synthesis, 12 are associated with poor antidepressant treatment responses. [13][14][15] These observations suggest that congenital brain 5-HT deficiency might reduce antidepressant efficacy. Here we used a genetically engineered mouse line, the Tph2 (R439H) knock-in (Tph2KI) mouse, which exhibits ~60-80% reductions in brain 5-HT 16 throughout postnatal development, 17 to examine the effects of 5-HT deficiency on responses to fluoxetine (FLX) in the novelty suppressed feeding (NSF) and tail suspension test (TST). The NSF is a particularly interesting preclinical measure of antidepressant-like effects in that the efficacy of antidepressants in this test requires chronic administration and has been reported to be neurogenesis dependent. 18,19 However, the neurogenesis dependence of antidepressant-like effects in the NSF has been challenged, 20 and the role of hippocampal neurogenesis in the etiology and treatment of depression and anxiety disorders remains highly controversial. 21 Our data indicate that 5-HT deficiency impairs NSF responses to FLX and prevents the induction of neurogenesis by FLX.
However, we also show that FLX reduces immobility time in the TST in both wild-type (WT) and Tph2KI mice, despite the fact that the magnitude of the FLX-induced increase in 5-HT is markedly reduced in Tph2KI compared with that of WT animals. In addition, we demonstrate that supplementation with 5-hydroxytryptophan (5-HTP), the 5-HT precursor, can restore the anxiolytic-like effects of FLX in the NSF in Tph2KI mice, despite failing to restore FLX's pro-neurogenic effects in these animals. Thus, our results suggest that 5-HT deficiency can impair a subset of antidepressant effects and indicate that antidepressant-like effects in the NSF do not require antidepressant-induced increases in hippocampal neurogenesis, at least under conditions of congenital 5-HT deficiency.

Animals and drug treatments

The Tph2KI mouse line is on a mixed background (c57BL6/J-129S6/SvEv) and has been described previously. 16 Homozygous Tph2 R439H KI mice and WT littermate controls were derived from heterozygous breeding pairs and were housed two to five per cage in a facility maintained at 23 ± 2 °C on a 12 h light-dark cycle. Eight- to 10-week-old age-matched littermates were used for all experiments. Male mice were exclusively used for all behavioral analyses and the subsequent FLX/neurogenesis studies. The baseline neurogenesis experiments (Figure 2) included balanced numbers of males and females, but the study was not sufficiently powered to reveal any significant sex differences. FLX (Spectrum Chemical Corporation, New Brunswick, NJ, USA) and desipramine (DES; Sigma, St Louis, MO, USA) were administered to mice via their drinking water (155 mg l−1) for a total of 4 weeks. Behavioral testing was conducted after 3 weeks of FLX administration, and mice were killed and processed for immunohistochemistry (IHC) analysis 1 week after behavioral testing. In our hands, this treatment regimen results in a dose of FLX equivalent to ~20 mg kg−1 per day for both WT and Tph2KI mice.
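The drinking-water dose equivalence stated above can be sanity-checked with simple arithmetic. The sketch below is a rough back-of-envelope check, not the authors' calculation: the ~4 ml daily water intake and ~30 g body weight are assumed typical values for an adult mouse, and only the 155 mg/l concentration comes from the text.

```python
# Rough check of the drinking-water dose equivalence.
# Assumed values (not from the paper): ~4 ml/day water intake, ~30 g body weight.
conc_mg_per_ml = 155 / 1000.0   # 155 mg/l of FLX, converted to mg per ml
intake_ml_per_day = 4.0         # assumed daily water intake of an adult mouse
body_weight_kg = 0.030          # assumed body weight (30 g)

dose_mg_per_kg_day = conc_mg_per_ml * intake_ml_per_day / body_weight_kg
print(round(dose_mg_per_kg_day, 1))  # prints 20.7, close to the reported ~20 mg/kg/day
```

Under these assumptions the oral dose lands at roughly 20.7 mg/kg/day, consistent with the ~20 mg/kg/day equivalence the authors report.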
22 An analogous treatment paradigm has been shown to be effective for DES. 23 Supplementation with 5-HTP was performed by injecting WT and Tph2KI mice intraperitoneally twice daily (at 0900 and 1700 h) with 5-HTP (20 mg kg−1) for 4 weeks, while administering FLX as described above. Again, behavioral testing was performed after 3 weeks (2 h following the first daily injection) and the mice were killed for IHC analysis 1 week later (2 h after the final 5-HTP injection). This treatment paradigm partially restores tissue levels of 5-HT in Tph2KI mice 2 h after administration. 22 Antidepressant-containing drinking water was administered via opaque bottles and was replaced twice weekly. Chlordiazepoxide (7.5 mg kg−1, Sigma) was administered intraperitoneally 20 min before performing the NSF. For proliferation studies, bromodeoxyuridine (BrdU; Sigma) solutions were prepared fresh daily by dissolution in saline (10 mg ml−1) and were intraperitoneally injected (100 mg kg−1) into mice 4, 18 and 24 h before killing. For survival and double-labeling studies, BrdU was administered via the drinking water (1 g l−1) in opaque bottles that were replaced twice weekly. All experiments were conducted in accordance with an animal protocol that was approved by the Duke University Institutional Animal Care and Use Committee.

Microdialysis

Surgery. Mice were anesthetized using isoflurane and placed in a Kopf (Tujunga, CA, USA) stereotaxic frame equipped with a mouse adapter (Stoelting Mouse and Neonatal Rat Adaptor, Wood Dale, IL, USA). A guide cannula (catalog number 5-300004; Brainlink, Groningen, The Netherlands) was implanted into the hippocampus (HIP; anterior-posterior: 3.3 mm, medial-lateral: 3.0 mm, dorsal-ventral: 1.5 mm), according to the Franklin and Paxinos mouse brain atlas. Each cannula was fixed in place with two anchor screws (CMA, Chelmsford, MA, USA) and carboxylate dental cement (CMA).
Operated mice were single housed, treated with antibiotics (1.2 mg sulfamethoxazole per ml and 0.24 mg trimethoprim per ml) in the drinking water (with or without FLX) and allowed to recover 48-96 h after implantation.

Dialysate collection. Sterile artificial cerebrospinal fluid (147 mM NaCl, 2.7 mM KCl, 0.85 mM MgCl2, 1.2 mM CaCl2; CMA) was delivered from a CMA 400 syringe pump at a flow rate of 0.45 µl min−1. Sixteen to 24 h before the start of sample collection, each mouse was gently restrained and a microdialysis probe (catalog number 5-140040, 2 mm membrane; Brainlink) was inserted into the guide cannula. Each mouse was then placed in a circular cage with bedding, chow and water (with or without FLX) available ad libitum. The probe tubing was stiffened with laboratory paper tape to avoid biting. A two-channel swivel (catalog number 375/D/22QM; Instech, Plymouth Meeting, PA, USA) allowed for unimpeded movement of the mouse. Between 0900 and 1100 h, one baseline dialysate (30 min duration) was collected on ice, shielded from light, immediately frozen on dry ice and stored at −80 °C.

High-performance liquid chromatography-electrochemical detection analysis. The high-performance liquid chromatography system consisted of a BASi (West Lafayette, IN, USA) LC-4C detector coupled to a BASi LCEC radial flow cell. The potential was set at +650 mV. Flow was provided by a Shimadzu (Columbia, MD, USA) LC-20AD solvent delivery module. The pump was preceded by an online degasser series 1100 from Agilent (Santa Clara, CA, USA). The chromatograms were analyzed using PowerChrom software (eDAQ, Colorado Springs, CO, USA). Ten microliters of dialysate was separated on a 1 × 100-mm UniJet microbore 3 µm octadecylsilyl column (BASi, West Lafayette, IN, USA) at a flow rate of 80 µl min−1. The mobile phase consisted of 24 mM Na2HPO4, 3 mM octanesulfonic acid, 27.4 mM citric acid, 107 µM EDTA and 17-18.5% (v/v) MeOH, pH adjusted to 4.8 with NaOH; 5-HT eluted at 11-13 min.
Behavioral analyses

The TST and NSF were performed as described previously. 16,24 Immediately following the NSF, mice were returned to the home cage, where the latency to feed and quantity of food consumed within 5 min were measured.

Immunohistochemistry

Non-overlapping images of the entire granule cell layer (GCL) and subgranular zone were taken on a fluorescence microscope (Zeiss, Oberkochen, Germany) by an individual unaware of genotype and treatment condition. The numbers of BrdU+, BrdU+/NeuN+, DCX+ or activated caspase-3+ cells were counted by an observer unaware of genotype and treatment condition. For proliferation studies, only BrdU+ cells within two cell widths of the subgranular zone were counted. However, for survival and double-labeling studies, BrdU+ cells were counted throughout the entire GCL, as they are known to migrate. 25 For BrdU/NeuN double-labeling experiments, at least 600 BrdU+ cells were examined in each group.

Real-time PCR

Mice were killed by cervical dislocation, decapitated and heads were rapidly cooled by submersion in liquid nitrogen for ~6 s. Brains were removed and a 1.5-mm diameter punch of dorsal HIP (primarily dentate gyrus) was obtained from a 1-mm coronal section, snap-frozen in liquid nitrogen and stored at −80 °C until further use. RNA was extracted with Trizol in combination with RNeasy minikits according to the manufacturer's protocol (Qiagen, Valencia, CA, USA). RNA was reverse transcribed using the iScript cDNA synthesis kit according to the manufacturer's protocol (Bio-Rad, Hercules, CA, USA), and real-time PCR was performed using a LightCycler (Roche Applied Science).

Statistical analysis

Data were analyzed using Student's t-tests or two-way analyses of variance with Tukey's post-hoc tests, where appropriate. In some cases (for example, microdialysis experiments), data were transformed (for example, log-transformed) before performing statistical analyses. Statistical analyses were performed using JMP software (SAS, Cary, NC, USA).
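As a rough illustration of the factorial design described above (genotype × treatment, analyzed by two-way ANOVA), the following is a minimal sketch of a balanced two-way ANOVA in plain numpy. It is not the authors' JMP analysis; the helper name and the example data are hypothetical, and the design is assumed balanced (equal cell sizes).

```python
import numpy as np

def two_way_anova(y, a, b):
    """Balanced two-way factorial ANOVA (fixed effects).
    y: observations; a, b: factor labels with equal cell sizes.
    Returns a dict mapping 'A', 'B' and 'AxB' to (F statistic, (df_num, df_err))."""
    y, a, b = np.asarray(y, float), np.asarray(a), np.asarray(b)
    A, B = np.unique(a), np.unique(b)
    n = len(y) // (len(A) * len(B))          # per-cell count (balanced design)
    grand = y.mean()
    # Sums of squares for the two main effects
    ss_a = sum((y[a == ai].mean() - grand) ** 2 * (a == ai).sum() for ai in A)
    ss_b = sum((y[b == bi].mean() - grand) ** 2 * (b == bi).sum() for bi in B)
    # Between-cell SS; the interaction is what remains after the main effects
    ss_cells = sum((y[(a == ai) & (b == bi)].mean() - grand) ** 2 * n
                   for ai in A for bi in B)
    ss_ab = ss_cells - ss_a - ss_b
    # Residual (within-cell) SS
    ss_err = sum(((y[(a == ai) & (b == bi)]
                   - y[(a == ai) & (b == bi)].mean()) ** 2).sum()
                 for ai in A for bi in B)
    df_a, df_b = len(A) - 1, len(B) - 1
    df_ab = df_a * df_b
    df_err = len(y) - len(A) * len(B)
    ms_err = ss_err / df_err
    return {
        "A": (ss_a / df_a / ms_err, (df_a, df_err)),
        "B": (ss_b / df_b / ms_err, (df_b, df_err)),
        "AxB": (ss_ab / df_ab / ms_err, (df_ab, df_err)),
    }
```

With the F statistics in hand, p-values follow from the F distribution (for example, `scipy.stats.f.sf(F, df_num, df_err)`), and post-hoc comparisons such as Tukey's HSD operate on `ms_err` and the cell sizes. A "genotype by treatment interaction," as reported throughout the results, corresponds to a large `AxB` F here.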
RESULTS

FLX increases extracellular 5-HT levels in the HIP of 5-HT-deficient mice

Consistent with our previous results, 16,17 microdialysis revealed that Tph2KI mice have reduced extracellular 5-HT (5-HT EXT) in the HIP compared with WT controls (main effect of genotype: F(1,25) = 135.8074, P < 0.0001, Figure 1a). Chronic treatment with FLX increased 5-HT EXT in both genotypes (main effect of FLX: Figure 1a). However, the magnitude of the FLX-induced increase in 5-HT EXT in Tph2KI mice (~2.25-fold, P = 0.0326) was markedly less than that observed in WT animals (~6.4-fold, genotype by drug interaction: F(1,25) = 28.3908, P < 0.0001, Figure 1a). Importantly, the levels of 5-HT EXT in Tph2KI mice after chronic FLX treatment were not significantly different from those in untreated WT mice (P = 0.3963), but they were only 12% of the levels achieved in FLX-treated WT animals. These results suggest that although chronic FLX treatment essentially reverses hippocampal 5-HT deficiency in Tph2KI mice, congenital 5-HT deficiency can significantly blunt the neurochemical effects of selective serotonin reuptake inhibitors (SSRIs).

Chronic FLX fails to induce an antidepressant-like effect in the NSF in Tph2KI mice

In the NSF, chronic FLX significantly reduced feeding latency (main effect of treatment: F(1,68) = 6.0055, P = 0.0168, Figure 1b), but a significant genotype by treatment interaction was also observed (F(1,68) = 10.4593, P = 0.0019, Figure 1b). Indeed, FLX reduced feeding latency in WT mice (P = 0.0012) but not in Tph2KI animals. Importantly, FLX-treated WT mice exhibited significantly shorter feeding latencies than FLX-treated Tph2KI animals (P = 0.0414), suggesting that the lack of effect in Tph2KI mice was not due to a floor effect.
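The microdialysis figures quoted above can also be cross-checked for internal consistency. The sketch below normalizes the WT baseline to 1 and takes only the ~6.4-fold and ~2.25-fold increases and the 12% ratio from the text; the inferred Tph2KI baseline deficit is an implication of those numbers, not a reported measurement.

```python
# Cross-check: do the reported fold changes and the 12% ratio between
# FLX-treated groups imply a Tph2KI baseline within the ~60-80% deficit
# range quoted in the introduction?
wt_fold, ki_fold = 6.4, 2.25    # FLX-induced fold increases (from the text)
flx_ratio = 0.12                # KI+FLX level as a fraction of the WT+FLX level

wt_flx = 1.0 * wt_fold          # WT 5-HT EXT after FLX (WT baseline = 1)
ki_flx = flx_ratio * wt_flx     # KI 5-HT EXT after FLX, relative to WT baseline
ki_baseline = ki_flx / ki_fold  # implied KI baseline, roughly a third of WT
deficit = 1.0 - ki_baseline     # roughly a two-thirds reduction
```

The implied Tph2KI baseline of ~0.34 (a ~66% deficit) sits inside the ~60-80% range, and the post-FLX Tph2KI level of ~0.77 is close to the WT baseline of 1, matching the report that the two did not differ significantly.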
A significant main effect of FLX on home-cage feeding latency was also observed (F(1,76) = 4.1378, P = 0.0454, Figure 1c), but the genotype by treatment interaction was not significant (P = 0.3142), thus suggesting that differential effects of FLX on appetitive drive in WT and Tph2KI animals were not responsible for the observed differential responses to FLX. In contrast to chronic FLX, acute treatment with chlordiazepoxide, a benzodiazepine, reduced feeding latency in both genotypes (main effect of treatment: Figure 1d). Similarly, chronic treatment with DES, a tricyclic antidepressant that preferentially inhibits norepinephrine reuptake, reduced feeding latency in both genotypes (main effect of treatment: F(1,35) = 11.3246, P = 0.0019, Figure 1e), suggesting that brain 5-HT deficiency does not impair anxiolytic-like responses to non-5-HT-specific drugs. Throughout the NSF experiments, control Tph2KI mice exhibited a tendency towards reduced feeding latencies compared with that of WT animals (Figures 1b-e). To determine the basis for this, we examined home-cage food consumption and feeding latency, and weight loss following a 24-h food-deprivation period. No significant genotype differences were observed in home-cage food consumption or weight loss (BDS, unpublished observations).

Brain 5-HT deficiency does not impair baseline neurogenesis

Because of the reported importance of hippocampal neurogenesis in the NSF, 18,19 we hypothesized that the lack of effect of FLX in Tph2KI mice in the NSF might result from a defect in the neurogenic response to FLX. As we have shown previously, 24 there are no significant baseline differences between WT and Tph2KI mice in BrdU incorporation in the subgranular zone (Figures 2a-c), but a detailed analysis of baseline neurogenesis in Tph2KI mice has not been performed previously.
Interestingly, IHC analysis for DCX, a marker of immature neurons, revealed a significant 37% increase in the number of DCX+ neurons in Tph2KI mice compared with that in WT controls (Student's t-test: P = 0.0025, degrees of freedom = 31, Figures 2d-f). To determine whether the increased numbers of DCX+ neurons resulted from increased survival of adult-generated neural progenitor cells, Tph2KI and WT mice were administered BrdU for 1 week and were killed either 1 or 21 days later. Importantly, no significant differences in the number of BrdU+ cells were observed in WT and Tph2KI mice killed on day 1 (Figure 2g). However, Tph2KI mice killed on day 21 had significantly more BrdU+ cells than WT controls (Student's t-test: P = 0.0351, degrees of freedom = 21, Figure 2h). In addition, IHC analysis for activated caspase-3, a marker of apoptosis, revealed decreased numbers of apoptotic cells within the GCL of Tph2KI animals when compared with that in WT controls (Student's t-test: P = 0.0142, degrees of freedom = 19, Figure 2i). We did not observe any statistically significant effects of 3 weeks of FLX treatment (beginning after the cessation of BrdU administration) on the survival of BrdU+ cells in either genotype (Figure 2j), although Tph2KI animals again exhibited an overall increase in survival compared with that of WT controls (main effect of genotype: F(1,20) = 4.446, P = 0.0478, Figure 2j).

[Figure 1 legend: P < 0.05 compared with WT FLX by Tukey's post-hoc test; @P < 0.05 by Tukey's post-hoc test compared with control Tph2KI mice; 'X' denotes a significant genotype by treatment interaction by two-way ANOVA (P < 0.05) and '#' denotes a significant main effect of genotype (P < 0.05 by two-way ANOVA); n = 7-8 per group for a, n = 19-21 per group for b, n = 10 per group for c, n = 9 per group for d, n = 8-11 per group for e and n = 22-27 per group for f.]
To evaluate whether this increased survival of adult-generated neurons might lead to a larger GCL, we compared GCL size in WT and in Tph2KI mice under baseline conditions and following chronic FLX administration. Tph2KI mice were observed to have a significantly larger GCL than WT controls (main effect of genotype: F(1,31) = 5.8218, P = 0.0219, Figure 2k). Although chronic FLX administration led to a slight increase in GCL size in WT mice, this effect did not reach significance. As a control, no significant genotype or treatment differences were observed in the size of the medial habenula (Figure 2l).

Brain 5-HT deficiency prevents the neurogenic effects of FLX

After observing no significant effects of FLX on cell survival, we next examined the effects of FLX on cell proliferation. No significant main effects of chronic FLX treatment or genotype were observed on BrdU incorporation. However, a significant genotype by treatment interaction was observed (F(1,37) = 5.5383, P = 0.024, Figures 3a-e). As expected, 6 chronic treatment with FLX before BrdU administration increased the number of BrdU+ cells in WT mice (P = 0.0259, Figures 3a, c and e). However, this treatment had no effect on BrdU incorporation in Tph2KI animals (Figures 3b, d and e). Chronic FLX significantly increased DCX immunoreactivity (main effect of treatment: F(1,55) = 8.4018, P = 0.0054, Figures 3f, h and j), but a significant genotype by treatment interaction was also observed (F(1,55) = 21.3702, P < 0.0001, Figure 3j). Indeed, the increased DCX was only apparent in WT mice (P < 0.0001) and not in Tph2KI animals. Similar to what was observed above (Figure 2d), control Tph2KI animals exhibited a 30% increase in the number of DCX+ cells, but this effect did not reach statistical significance using Tukey's post-hoc analysis, only with the less conservative Student's t-test (P = 0.0437).
We did not observe a significant increase in BrdU or DCX immunoreactivity in response to chronic DES treatment in either genotype (BDS and TLT, unpublished observations). We next performed double-labeling experiments to compare the percentage of BrdU+ cells that become NeuN+ neurons between the groups. In both WT and Tph2KI animals, ~80% of the BrdU+ cells in the GCL were also immunopositive for NeuN 3 weeks after a 1-week exposure to BrdU, and FLX administration did not significantly affect the proportion of BrdU+/NeuN+ cells in either genotype (Figure 4a). We again observed a significant increase in the number of surviving BrdU+ cells in Tph2KI mice compared with WT animals (main effect of genotype: Figure 4b). Similarly, the total number of BrdU+/NeuN+ neurons was greater in Tph2KI than in WT animals (main effect of genotype: F(1,20)).

[Figure 2 legend: Denotes a significant main effect of genotype by two-way analysis of variance, P < 0.05; n = 12 per group for a-c; n = 16-17 per group for d-f; n = 9 per group for g, n = 11-12 per group for h, n = 10-11 per group for i, n = 6 per group for j, n = 8-9 per group in k and l. The scale bar indicates 20 µm. Arrows denote BrdU+ cells, and arrowheads indicate DCX+ cells.]

Chronic FLX treatment fails to increase hippocampal BDNF mRNA levels in Tph2KI mice

As expected, 27 chronic FLX treatment led to significantly increased mRNA levels of BDNF (main effect of treatment: F(3,37), Figure 5a); however, a significant genotype by treatment interaction was also observed (F(3,37) = 4.5347, P = 0.0399). Tukey's post-hoc tests revealed that the effect of FLX was only significant in WT mice (P = 0.0216), not in Tph2KI animals. Tukey's post-hoc tests also revealed that Tph2KI animals exhibit increased hippocampal BDNF mRNA at baseline (P = 0.0446). The effects of FLX on CREB mRNA expression in the HIP were dependent upon genotype (significant genotype by treatment interaction: F(3,36) = 4.7420, P = 0.0361, Figure 5b).
However, Tukey's post-hoc tests did not reveal any significant differences between the groups (although the less conservative Student's t-test revealed a slight increase in CREB levels in FLX-treated WT mice, P = 0.0367, as expected28). We did not observe any significant effects of DES on hippocampal levels of CREB or BDNF (Figures 5c and d).

Co-administration of 5-HTP restores the anxiolytic ability of FLX in Tph2KI mice in the NSF

Unlike FLX alone, chronic co-administration of 5-HTP + FLX reduced feeding latency in both Tph2KI and WT animals (significant main effect of treatment: F(3,34) = 16.3484, P = 0.0003, Figure 5e). Co-administration of 5-HTP + FLX also led to an increase in the number of BrdU+ (P = 0.0474, Figure 5f) and DCX+ cells (P = 0.0187, Figure 5g) in WT mice. However, 5-HTP + FLX treatment failed to increase the number of BrdU+ (genotype by treatment interaction: F(3,41) = 4.1731, P = 0.0475, Figure 5f) or DCX+ cells (genotype by treatment interaction: F(3,63) = 9.8198, P = 0.0027, Figure 5g) in Tph2KI mice. Similar to what was observed above (Figures 2d and 3j), Tph2KI mice exhibited a 55% increase in the number of DCX+ neurons, but this effect did not achieve statistical significance. Chronic 5-HTP + FLX administration also failed to induce a significant increase in BDNF or CREB expression in Tph2KI animals (BDS, unpublished observations).

DISCUSSION

Our results suggest that 5-HT deficiency could reduce the efficacy of FLX by limiting FLX-induced increases in extracellular 5-HT (5-HT_Ext), thus blocking downstream cellular and molecular responses. This would be consistent with prior work that has implicated variants in Tph2 in antidepressant sensitivity in humans14,15 and with prior preclinical work showing that acute pharmacologic inhibition of 5-HT synthesis blocks the acute effects of SSRIs in the TST29 and forced swim test30-32 in rodents.
Although only FLX was examined here, it is likely that other SSRIs would be affected by 5-HT deficiency as well. A previous report demonstrated that acute 5-HTP administration can restore antidepressant-like responses to acute SSRI treatment in otherwise SSRI-insensitive NMRI mice, suggesting that combined 5-HTP + SSRI therapy could represent an antidepressant augmentation strategy,33 a hypothesis that is further supported by our finding that NSF behavior can be modified in Tph2KI mice by chronic combined 5-HTP + FLX treatment. Although the specific mutation expressed by Tph2KI mice is extremely rare, 5-HT deficiency could result from many different mutations in 5-HT system genes.34 As such, we hypothesize that the current results will be relevant for a wide range of genetic insults leading to 5-HT deficiency. Although we feel that studies using Tph2KI mice may be highly informative for psychiatric conditions, such as depression and anxiety, we do not claim that these animals completely recapitulate any disorder. Rather, we view these animals as a model of 5-HT deficiency, not of depression or anxiety per se. Similarly, we have utilized the TST and the NSF because of their strong predictive validity for antidepressant action, not on the basis of their face validity or relevance to depression- or anxiety-like behavior. Future studies examining the effects of 5-HT deficiency on responses to chronic stressors may be useful in determining the importance of 5-HT deficiency in regulating susceptibility to stress, which could, in turn, have implications for our understanding of the gene by environment interactions that lead to aberrant emotional behavior. The observed trend towards a reduction in feeding latency in the NSF in Tph2KI animals compared with that in WT controls is consistent with a role for 5-HT in anxiety-like behavior and is similar to the phenotypes reported in other transgenic models of 5-HT deficiency35,36 and an acute rat model of 5-HT depletion.37
We hypothesize that the variance in baseline feeding latencies in WT and Tph2KI mice (compare Figure 1 with Figure 5) may be associated with the varying levels of physiological arousal associated with different drug administration paradigms (that is, dietary vs injections). Indeed, previous reports from several groups, including our own, have shown that performance in the NSF test is sensitive to stress.19,24 The importance of adult hippocampal neurogenesis in depression- and anxiety-like behavior and in responses to antidepressants has been widely debated.21,38,39 The reported relationships between neurogenesis and stress,40-44 along with the fact that completely inhibiting neurogenesis prevents some of the behavioral effects of antidepressants, have suggested a role for hippocampal neurogenesis in the development and treatment of mood disorders.18,19,44-47 However, numerous studies, including the current study, have found neurogenesis to be of limited importance in depression-related behavior and/or in antidepressant-like responses.20,38,48-50 Our finding that chronic 5-HTP + FLX (or DES) administration, which does not increase neurogenesis in Tph2KI animals, reduces feeding latency in Tph2KI mice demonstrates that antidepressant-induced increases in neurogenesis are not required for this effect, at least not in 5-HT-deficient animals. These current results are distinct from previous studies that used X-ray irradiation to ablate all dividing cells, which revealed that antidepressants are ineffective when neurogenesis has been completely inhibited.18,19 Interestingly, it has been shown that promoting neurogenesis is not sufficient to induce antidepressant-like effects in the NSF,51 and the effects of several classes of experimental antidepressants, such as corticotropin-releasing factor 1 and vasopressin 1b antagonists, reportedly do not require adult hippocampal neurogenesis in animal models.52
Taken together, these data suggest that increased neurogenesis is neither required nor sufficient for feeding latency reductions in the NSF. Although 5-HT elevation has been repeatedly shown to increase the proliferation of adult hippocampal neural progenitor cells, the reported effects of 5-HT on the survival of neural progenitor cells have been inconsistent. Several groups have reported that chronic FLX administration increases the survival of newly born neurons in vivo,19,53,54 but other studies have suggested that chronic FLX increases both apoptosis and cell turnover in the HIP.55,56 Our results did not reveal a significant effect of FLX on cell survival but did demonstrate an unexpected increase in cell survival in 5-HT-deficient animals compared with that in WT controls. It is possible that the improved survival of adult-generated neurons in Tph2KI mice is related to their increased levels of hippocampal BDNF, which has been shown to have an important role in the survival (but not proliferation or maturation) of adult neural progenitor cells.57 It is likely that the increased size of the GCL observed in Tph2KI mice will have important implications for hippocampal function and hippocampal-dependent behaviors, but future research will be required to evaluate this possibility. Overall, our data indicate that chronic treatment with FLX can reverse brain 5-HT deficiency in Tph2KI mice but that FLX fails to induce several of its key molecular, cellular and behavioral effects in 5-HT-deficient animals. Importantly, several non-5-HTergic agents (that is, DES and chlordiazepoxide) appear to retain their efficacy in 5-HT-deficient animals, and behavioral responses to FLX can be restored in 5-HT-deficient animals by cotreatment with 5-HTP. These results suggest that 5-HT deficiency may contribute to insensitivity to SSRIs by limiting the magnitude of SSRI-induced 5-HT increments.
In addition, the observed decrease in feeding latency induced by combined 5-HTP + FLX treatment in the absence of increased neurogenesis further refines our understanding of the importance of hippocampal neurogenesis in mediating the effects of antidepressants.

[Figure 5 legend: The feeding latencies of WT and Tph2KI mice chronically treated with FLX + 5-HTP are shown. Quantification of the number of bromodeoxyuridine (BrdU)+ (f) and doublecortin (DCX)+ (g) cells in FLX + 5-HTP-treated WT and Tph2KI mice is shown. * Significant main effect of FLX by two-way analysis of variance (ANOVA; P < 0.05). ** P < 0.05 by Tukey's post-hoc test compared with WT control. 'X' indicates a significant genotype by treatment interaction by two-way ANOVA (P < 0.05); n = 10-11 mice per group for a; n = 9-11 mice per group for b-d; n = 9-10 per group for e; n = 11 per group for f; and n = 11-15 per group for g.]
Gene expression deconvolution in clinical samples

Cell type heterogeneity may have a substantial effect on gene expression profiling of human tissue. Several in silico methods for deconvoluting a gene expression profile into cell-type-specific subprofiles have been published but not widely used. Here, we consider recent methods and the experimental validations available for them. Shen-Orr et al. recently developed an approach called cell-type-specific significance analysis of microarray for deconvoluting gene expression. This method requires the measurement of the proportion of each cell type in each sample and the expression profiles of the heterogeneous samples. It determines how gene expression varies among pre-defined phenotypes for each cell type. Gene expression can vary substantially among cell types, and sample heterogeneity can mask the identification of biologically important phenotypic correlations. Consequently, the deconvolution approach can be useful in the analysis of mixtures of cell populations in clinical samples.

Background

Microarray expression profiling has proven to be a valuable technology in a wide variety of biological and biomedical investigations. One of its limitations, however, is the relatively large amount of mRNA required. Consequently, for analyses involving tissue from humans or experimental animals, the tissue samples used for mRNA extraction are often heterogeneous with regard to cell type. Because gene expression can vary substantially among cell types, gene expression profiles based on tissue samples of varying composition can be very difficult to interpret biologically. The problem is particularly serious for expression profiles intended for clinical use in informing treatment selection. Investigators have reported difficulties caused by sample heterogeneity in identifying biologically relevant differentially expressed genes and in developing and validating predictive models [1][2][3].
Although laser capture microdissection provides an experimental means of selecting a more homogeneous population of cells, it is time-consuming, and it is difficult to obtain sufficient purified tissue with adequately preserved RNA.

Expression deconvolution

Several statistical approaches have been proposed to deconvolute gene expression profiles obtained from heterogeneous tissue samples into cell-type-specific subprofiles. Most of the methods are based on a framework first proposed by Venet et al. [4], incorporating the linearity assumption that the expression of each gene in a mixture of cell types is a weighted average of the expression values that would exist for pure populations of those cell types. The weights are determined by the proportional composition of the cell types in the mixture and hence are the same for each gene but differ among sample mixtures. Since the publication of Venet et al. [4], several additional publications have appeared dealing with the deconvolution of gene expression profiles of complex tissues (for example, [5][6][7][8][9][10]). Without reviewing the details that distinguish the various methods, we attempt here to summarize the status of this area of development. When the proportions of the cell types in each mixture sample are known from fluorescence-activated cell sorting analysis, histopathological evaluation or other experimental methods, deconvolution is relatively straightforward. With the known proportions of the cell types in the mixture, deconvolution can be solved as a linear regression problem in which the cell-type-specific gene expression levels represent the regression coefficients. In fact, under these conditions, the regression problem can be solved separately for each gene. In some cases the cell-type-specific gene expression levels may be of interest in their own right, or interest may focus on differences in expression among cell types.
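When the cell-type proportions are known, the per-gene regression described above can be sketched in a few lines. The example below is purely illustrative: the data are synthetic and the variable names are our own, not those of any published implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic setup: 6 mixed samples, 3 cell types, 4 genes.
# H_true holds the (unknown) pure cell-type expression profiles;
# W holds the known per-sample cell-type proportions (rows sum to 1).
H_true = rng.uniform(1.0, 10.0, size=(3, 4))
W = rng.dirichlet(np.ones(3), size=6)

# Linearity assumption: mixture expression is a weighted average
# of the pure profiles, with per-sample weights.
X = W @ H_true

# With W known, each gene (each column of X) is an independent
# least-squares problem X[:, g] ~ W @ H[:, g]; one call solves them all.
H_est, *_ = np.linalg.lstsq(W, X, rcond=None)

print(np.allclose(H_est, H_true))  # noiseless data: exact recovery
```

With noisy real data the recovery is approximate rather than exact, and the quality of the estimates depends on how accurately the proportions in W were measured.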
For cancer studies, however, interest is often on differential expression among classes of tumors (such as responders versus non-responders to a treatment), with expression from normal epithelium and infiltrating immune cells of lesser interest. Shen-Orr et al. [8] developed cell-type-specific significance analysis of microarray (csSAM) for analyzing differentially expressed genes for each cell type in sample mixtures with microarray data. The relationship between measured gene expression in mixed samples and the expression of genes in the isolated pure subsets was tested experimentally for synthetic mixtures of liver, brain and lung cells from rats. Their in silico synthesized mixture expression profiles, obtained by multiplying the measured pure tissue expression profiles by the proportion of the tissue subset in a given mixture sample, were highly correlated with the experimentally measured expression profiles for the mixtures.
This provided direct support for the linearity assumption of all previous models. The deconvoluted estimates of cell-type-specific expression were in good agreement with expression measured in pure cell types for the vast majority of probes. The authors [8] then applied csSAM to human whole-blood gene expression array data from kidney transplant recipients. In the whole-blood analyses, no differentially expressed genes were detected between the rejection group and the stable group. However, a large number of differentially expressed genes were identified between the two groups in two individual cell types when csSAM was applied to each of the five quantified cell types: monocytes, basophils, neutrophils, eosinophils and lymphocytes. The method requires experimental measurements of the proportional composition of the component cell types in each sample. Although there are some pre-processing issues, such as normalization, that require further consideration, csSAM seems to be a useful tool for the analysis of gene expression profiling of heterogeneous samples with known relative cell type frequencies. Source code for csSAM in the R statistical programming language is available [8]. Several investigations performed deconvolution when the proportions of the component cell types were unknown but the expression of signature genes in pure cell types was known (for example, [5][6][7]). Abbas et al. [7] developed an approach to estimate the proportions of white blood cell subtypes in samples from patients with systemic lupus erythematosus. First, they selected the most highly expressed signature probesets (genes) among several of the 18 immune cell types of interest using the expression data from the pure cells. They then used expression profiles for these signature genes to solve a linear equation for the proportions of the 18 immune cell subtypes in both healthy donors and patients with lupus.
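The signature-gene approach of Abbas et al. [7] amounts to solving a small linear system for the mixing proportions. A minimal sketch, with synthetic numbers and three cell types standing in for the 18; non-negative least squares enforces the non-negativity that proportions require:

```python
import numpy as np
from scipy.optimize import nnls

# Hypothetical signature matrix: expression of 5 signature genes (rows)
# in 3 pure cell types (columns); each gene is high in one cell type.
S = np.array([[9.0, 1.0, 0.5],
              [8.5, 0.5, 1.0],
              [0.5, 7.0, 1.0],
              [1.0, 8.0, 0.5],
              [0.5, 1.0, 9.0]])

p_true = np.array([0.5, 0.3, 0.2])   # true cell-type proportions
x = S @ p_true                        # observed mixture profile

# Non-negative least squares recovers the mixing weights; renormalizing
# makes the estimated proportions sum to one.
w, _ = nnls(S, x)
p_est = w / w.sum()
print(np.round(p_est, 3))
```

The estimate matches p_true here because the data are noiseless; in practice the quality of the signature matrix, and how cleanly the signature genes separate the cell types, determines how well the proportions can be recovered.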
The deconvoluted results allowed them to find patterns of leukocyte dynamics and their correlations with clinical outcomes. In circumstances such as described by Abbas et al. [7] in which careful preliminary studies have been conducted to identify signature genes and determine their expression in pure cell subtypes, such deconvolution can be successful. Some proposals for deconvolution have been made for cases in which neither the proportions of the cell types in the mixtures nor signature genes are known. These approaches use a variety of methods, such as nonnegative matrix factorization [9,10]. The validations available are limited, however, and the number of samples required for accurate deconvolution may be large [9]. Consequently, when measurements of the proportions of the component cell types in individual samples are not available and signature genes for each cell subtype are unknown, we believe that the status of deconvolution of expression profiles of mixtures is less clear. Identifying genes that are differentially expressed among groups of diseased tissue samples is a frequent objective of gene expression profiling. Many of the publications referenced here ignore class information (such as disease versus normal or responder versus nonresponder) in performing the deconvolution and state or imply that the deconvoluted cell-type-specific expression profiles can then be used with standard software packages for investigating class comparisons [6,10]. This approach is potentially problematic, however, because the deconvoluted expression profiles are no longer statistically independent. Shen-Orr et al. [8] indicate that the deconvolution should be performed separately for each class being compared and that in using permutation tests to assess statistical significance, deconvolution should be repeated for each permutation of class labels. 
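The class-aware procedure that Shen-Orr et al. [8] recommend — deconvolve each class separately, and repeat the deconvolution for every permutation of the class labels — can be sketched as follows. This is an illustrative toy version on synthetic data, not the csSAM implementation, and all names are ours.

```python
import numpy as np

def class_difference(X, W, labels):
    """Deconvolve each class separately and return the difference in
    estimated cell-type-specific expression (cell types x genes)."""
    est = {}
    for g in (0, 1):
        rows = labels == g
        est[g], *_ = np.linalg.lstsq(W[rows], X[rows], rcond=None)
    return est[1] - est[0]

def permutation_pvalues(X, W, labels, n_perm=500, seed=0):
    """Permutation p-values: the deconvolution is re-run for every
    permutation of the class labels, as Shen-Orr et al. advise."""
    rng = np.random.default_rng(seed)
    obs = np.abs(class_difference(X, W, labels))
    exceed = np.zeros_like(obs)
    for _ in range(n_perm):
        null = np.abs(class_difference(X, W, rng.permutation(labels)))
        exceed += null >= obs
    return (exceed + 1) / (n_perm + 1)

# Synthetic demo: 20 samples, 3 cell types, 4 genes; gene 0 is
# differentially expressed between classes in cell type 0 only.
rng = np.random.default_rng(1)
W = rng.dirichlet(np.ones(3), size=20)
labels = np.array([0] * 10 + [1] * 10)
H0 = rng.uniform(1.0, 10.0, size=(3, 4))
H1 = H0.copy()
H1[0, 0] += 5.0
X = np.where(labels[:, None] == 0, W @ H0, W @ H1)
pvals = permutation_pvalues(X, W, labels)
print(pvals[0, 0], pvals[2, 3])  # small p at (0, 0); near 1 elsewhere
```

Re-deconvolving inside the permutation loop is what preserves the dependence structure among the deconvoluted estimates; permuting labels after a single deconvolution would not.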
Conclusions

Deconvolution of gene expression profiles for heterogeneous samples can be performed accurately when sufficiently accurate estimates of the proportional representation of component cell types in each sample are available and when the expression profiles of the components are sufficiently different. The csSAM method developed by Shen-Orr et al. [8] can be useful in such clinical applications. Further studies are needed to address potential confounding factors for deconvolution, such as data normalization and batch effect adjustment. As Shen-Orr et al. [8] indicated, although the assumption of linearity holds for the majority of probes, identification and exclusion of probes affected by non-linear amplification or synergistic cross-hybridization may provide more accurate deconvolution. Although most of the previous deconvolution methods have focused on single-label microarray data, they could potentially be adapted for use with dual-label arrays that use a homogeneous reference sample. Deconvolution of expression profiles when estimates of the proportional representation of component cell types in each sample are not available can be performed accurately in cases, such as that of Abbas et al. [7], in which careful preliminary studies have been conducted to identify expression profiles of signature genes from pure samples that clearly distinguish the cell types. Without the prior identification of such signature genes or the measurement of cell-type proportions, however, methods for the deconvolution of gene expression profiles for mixed tissue samples require further investigation and experimental validation to clarify the conditions under which accurate results can be obtained.

Abbreviations

csSAM, cell-type-specific significance analysis of microarray.
Editorial

Warm New Year greetings to all our readers!!! We hope this year brings relief and progress to humanity. BJIT remains committed to delivering on its challenge of consistently showcasing and disseminating novel research pertaining to computing applications and capable of altering the quality of human life. It is a matter of great privilege for me to unveil before you the forty-sixth issue, i.e. volume 15 number 01, of the "International Journal of Information Technology" [An official Journal of Bharati Vidyapeeth's Institute of Computer Applications and Management (BVICAM), New Delhi], with acronym BJIT. The issue is live on the Springer content platform SpringerLink and available to prospective readers through the Springer CS package globally. Throughout the world, nations have started recognizing that Information Technology (IT) is now acting as a catalyst in speeding up detection, correlation and pattern learning and in improving the quality of human life. Recent advancements in IT have touched almost every conceivable area of human life. Its degree of pervasiveness in day-to-day life is rapidly increasing every new day. On this backdrop, BJIT has accepted the challenge to consistently showcase, disseminate and institutionalize the rapidly changing, huge knowledgebase globally, with authenticity and accuracy, with a special focus on new research pertaining to IT applications for improving the quality of day-to-day life. Current research has expanded in volume as well as dimension in almost all fields of human endeavor. Applications of information technology have successfully been applied in almost every field.

Volume 15 Number 01 presents a compilation of 50 papers in the field of Artificial Intelligence and Machine Learning. These manuscripts were chosen out of over 500 manuscripts that span a broad variety of topics from various emerging areas of Information Technology and Computer Science, especially addressing current research problems related to spectrum sensing, network attack, classification, robust resource provisioning, factored language model, ensemble learning and healthcare IoT applications, to name a few.

The last decade has witnessed exponential growth in image-based applications. The first manuscript in this issue, "An improvised CNN model for fake image detection" by Yasir Hamid et al., evaluates novel computer vision models based on Convolutional Neural Networks for fake image detection. The second manuscript, "TConvRec: temporal convolutional-recurrent fusion model with additional pattern learning" by Brijendra Singh et al., proposes an intelligent prediction model based on convolutional-recurrent fusion for performance optimization. The manuscript "Light Weight Gradient Ensemble Model for detecting network attack at the edge of the IoT network" by D. Santhadevi et al. implements a novel scheme for accurately predicting malware while controlling computational expenses. The next manuscript, "A real-time correlation model between lung sounds & clinical data for asthmatic patients" by Divya Singh et al., outlines a novel correlation engine design that correlates individual clinical data with lung sounds. The next manuscript, "Power and area optimized adaptive Viterbi decoder for high speed communication applications" by Namratha et al., proposes a novel architecture for an area- and power-efficient adaptive Viterbi decoder. The manuscript "Knowledge graph enrichment from clinical narratives using NLP, NER, and biomedical ontologies for healthcare applications" by Anjali Thukral et al. prototypes a scheme for mapping clinical narratives to a Knowledge Graph (KG) so that vital clinical details as recommended by doctors can be used in healthcare.

The current improvements in the average lifespan of a human being can be attributed to the availability of varied healthcare initiatives. The manuscript "An empirical investigation into the altering health perspectives in the internet of health things" by Nour Mahmoud Bahbouh et al.
assesses varied recent healthcare applications and their advancements. The next manuscript, "Improvement in spectrum sensing of wireless regional area network with empirical mode decomposition" by Rahul Koshti et al., proposes a novel model to improve the performance of wireless regional area networks using empirical mode decomposition. The manuscript "Predicting Opinion Evolution based on Information Diffusion in Social Networks using a Hybrid Fuzzy based Approach" by Samson Ebenezar Uthirapathy et al. presents a new framework for analyzing both information diffusion and opinion evolution. The next manuscript, "A highly efficient implementation of fractional sample rate digital down converter on FPGA" by Debarshi Datta et al., proposes a reconfigurable digital down converter (DDC) to significantly lower the sampling frequency. The manuscript "An energy efficient robust resource provisioning based on improved PSO-ANN" by Ankita Srivastava et al. addresses the provisioning problem by scheduling tasks to virtual machines (VMs). The manuscript "Electrocardiogram signal classification using VGGNet: a neural network based classification model" by Agam Das Goswami et al. presents a novel ensemble-based classification model. The manuscript "A framework for vehicle quality evaluation based on interpretable machine learning" by Mohammad Alwadi et al. describes a computational framework for evaluating vehicle quality. The manuscript "Deep dilated CNN based image denoising" by Rashmi Chaurasiya et al. analyzes the effect of the receptive field on image denoising. The manuscript "A novel multivariate approach for the detection of epileptic seizure using BCS-WELM" by Priya Das et al. presents a novel weighted extreme learning machine (WELM) classifier for epileptic seizure detection. High-data-rate applications consume high power.
The manuscript "Design and implementation of high speed, low complexity FFT/IFFT processor using modified mixed radix-2^4-2^2-2^3 algorithm for high data rate applications" by C. A. Arun et al. presents a low-complexity Fast Fourier Transform (FFT) processor based on a modified mixed-radix algorithm. The manuscript "Channel scheduling based interference lowering power efficient algorithm (CShILPeA) for the wireless body area network: design and performance analysis" by Shilpa Vikas Shinde et al. captures a novel mechanism to mitigate interference. The manuscript "Query intent recognition by integrating latent dirichlet allocation in conditional random field" by Nahida Shafi et al. offers a multi-stage system to extract the representation of utterances from the word corpus. The manuscript "COVID-19 assessment using HMM cough recognition system" by Mohamed Hamidi et al. details a novel Hidden Markov model-based automatic speech recognition system. The manuscript "Simulation and excitation analysis of nano aperture-array for surface plasmon based memory applications" by Srujana Ramachandra et al. presents a mechanism for a plasmon-enabled optical memory device to achieve higher data transfer rates and data density. The manuscript "Development of greenhouse-application-specific wireless sensor node and graphical user interface" by Suman Lata et al. evaluates a multi-sensor wireless node for greenhouse applications. The manuscript "A proposed hybrid clustering algorithm using K-means and BIRCH for cluster based cab recommender system (CBCRS)" by Supreet Kaur Mann et al. details a novel model to assist cab drivers in efficient passenger pickup. The manuscript "Dynamic characterization of functional brain connectivity network for mental workload condition using an effective network identifier" by Mangesh Ramaji Kose et al. develops an efficient electroencephalogram-based approach to analyze the dynamic mental workload condition of the human brain.
The manuscript "A new approach for global task scheduling in volunteer computing systems" by Ehab Saleh et al. evaluates a novel global scheduling algorithm in a peer-to-peer volunteer network. The manuscript "A big data smart agricultural system: recommending optimum fertilizers for crops" by Vuong M. Ngo et al. analyzes an electronic agricultural record to manage agricultural big data. Image encryption optimization is an open research challenge. The manuscript "An improved image encryption algorithm using a new byte-shuffled Henon map" by Madhu Sharma et al. investigates a novel byte-shuffling enhancement of chaotic maps for the same. The manuscript "Investigations of standalone PV system with battery-super capacitor hybrid energy storage system for household applications" by K. Karunanithi et al. details a standalone photovoltaic system for household applications. The manuscript "Binary particle swarm optimization based edge detection under weighted image sharpening filter" by Ankush Verma et al. evaluates an edge detection approach that deals with the challenge of incorrect edge detection. The next manuscript, "Effective recognition of facial emotions using dual transfer learned feature vectors and support vector machine" by Swapna Subudhiray et al., recommends a novel mechanism for facial emotion classification. The manuscript "Study of drug assimilation in human system using physics informed neural networks" by Kanupriya Goswami et al. suggests a mathematical model for the assimilation, distribution and elimination of drugs in the human body. The manuscript "Class balancing framework for credit card fraud detection based on clustering and similarity-based selection (SBS)" by Hadeel Ahmad et al. investigates a novel hybrid mechanism for processing unbalanced data. The manuscript "Secure Authentication Framework for SDN-IoT network using Keccak-256 and Bliss-B algorithms" by Sahana D. S. et al.
introduces an enhanced framework that improves security and delivers efficient services to entities. The manuscript "Bundle relaying scheme for network deployed using grey wolf optimization in delay tolerant networks", Nidhi Sonkar et al. offers a mechanism to deploy static relay nodes to increase contact opportunity and forward the message. The manuscript "A deep reinforcement learning technique for bug detection in video games", Geeta Rani et al. details the design of a deep reinforcement learning-based model to detect bugs in a gaming environment. The manuscript "Lattice abstraction-based content summarization using baseline abstractive lexical chaining progress", G. Bharathi Mohan et al. characterizes a corpus reader content analysis and then de-noises the contents by eliminating the non-structural text in segmented sentences. The next manuscript "A spectral-spatial 3D-convolutional capsule network for hyperspectral image classification with limited training samples", Deepak Kumar et al. replicates a capsule network to overcome the challenges of hyperspectral image classification. The manuscript "A novel stock counting system for detecting lot numbers using Tesseract OCR", Parkpoom Lertsawatwicha et al. outlines a novel mechanism for counting stock. The manuscript "Machine learning based workload balancing scheme for minimizing stress migration induced aging in multicore processors", P. Jagadeesh Kumar et al. investigates a thermal estimation model and bases an aging-aware scheduler on the same. The manuscript "UML and NFR-framework based method for the analysis of the requirements of an information system", Mohd. Arif et al. proposes a model for the analysis of both functional and non-functional software requirements. The manuscript "A supervised machine learning-based solution for efficient network intrusion detection using ensemble learning based on hyperparameter optimization", Arindam Sarkar et al.
empirically details a novel model for intrusion detection. The manuscript "Extracting information and inferences from a large text corpus", Sandhya Avasthi et al. emulates an incremental topic model to process large text data. The next manuscript "Design of optimal bidirectional long short-term memory-based predictive analysis and severity estimation model for diabetes mellitus", R. Annamalai et al. simulates an optimal predictive analysis and severity estimation model for diabetes mellitus. The manuscript "Joint energy and latency-sensitive computation and communication resource allocation for multi-access edge computing in a two-tier 5G HetNet", Mobasshir Mahbub et al. suggests a novel mechanism for efficient computation and communication resource allocation for multi-access edge computing. The next manuscript "Bell pepper leaf disease classification with LBP and VGG-16 based fused features and RF classifier", Monu Bhagat et al. evaluates an approach for disease detection in big fields. The manuscript "Spoofing free fingerprint image enhancement", H. Mohamed Khan et al. elaborates a fingerprint enhancement, object area detection and Gabor-based ridge area model for spoof-free fingerprint images. The manuscript "Variations-tolerant low power wide fan-in OR logic domino circuit", Ankur Kumar et al. evaluates a strategy for reducing delay and power variations for better noise immunity. Nature inspired algorithms are being extensively used to solve complex optimization problems. The manuscript "Spider monkey optimization method to design two channel quadrature mirror filter bank with linear phase", Surendra Kumar Agrawal et al. explores a novel two channel linear phase quadrature mirror filter (QMF) bank by optimizing the filter tap weights of the prototype filter. The next manuscript "Energy-efficient resource allocation with a combinatorial auction pricing mechanism", Puja Prasad et al.
outlines an auction-based approach to ensure truthful price discovery and maximize revenue. The manuscript "Analysis of MIMO optical wireless data center networks", Anand Kumar Dixit et al. analyzes the modelling of wireless optical data centres as Multiple Input, Multiple Output (MIMO) systems. The manuscript "A dual fuzzy with hybrid deep learning architecture based on CNN with hybrid metaheuristic algorithm for effective segmentation and classification", Shafeen Nagoor et al. evaluates a hybrid deep-learning model for early tuberculosis diagnosis. I am sure the contributions in this issue, which is an amalgamation of novel applications of computer science and information technology, shall pave the way to improving our life and sustainability in the present environment. The manuscripts of the issue will not only enrich our readers' knowledge base but will also motivate many potential researchers to take up these challenging application areas and contribute effectively to the overall prosperity of mankind. As a matter of policy, all the manuscripts received and considered for the Journal are double-blind peer reviewed by at least two independent referees. Our panel of expert referees possesses a sound academic background and a rich publication record in various prestigious journals, representing Universities, Research Laboratories and other Institutions of repute globally. Finalizing the constitution of the panel of referees for double-blind peer review of the considered manuscripts was a painstaking process, but it helped us ensure that only the best, most interesting and novel of the considered manuscripts are showcased, and that too after undergoing multiple cycles of review, as required. I thank the entire editorial board, members of the resident editorial team and our panel of experts for steering the considered manuscripts through multiple cycles of review and bringing out the best from the contributing authors.
I thank my esteemed authors for having shown confidence in BJIT and considering it a platform to showcase and share their original research work. I would also wish to thank the authors whose papers could not have been published
An Ensemble Forecasting Alternative Based on Stochastic Parameter Perturbation (SPP) on Potential Vorticity Anomalies through the Identification of Weather Features as Coherent Objects

The Pacific Ocean witnesses frequent cyclonic activity. The destructive impact of these storms, including strong winds, heavy rain, and storm surge, causes flooding, landslides, and extensive damage. Understanding cyclone genesis and evolution is crucial for accurate forecasts and minimizing harm. Towards this direction, an alternative ensemble forecasting approach based on a stochastic parameter perturbation (SPP) scheme, applied to potential vorticity (PV) anomalies, was developed. Testing it on Typhoon Usagi demonstrated its effectiveness in introducing uncertainties to storm tracks and cyclone development. These findings highlight the potential of stochastic methods in regional forecasting systems.

Introduction

Super Typhoon Usagi was a very intense cyclone (equivalent to a category 4 hurricane on the Saffir-Simpson hurricane wind scale) that developed in the Western Pacific Ocean on 16 September 2013. Its development was influenced by a combination of favorable atmospheric conditions, including warm sea surface temperatures and low vertical wind shear [1]. The cyclone exhibited a well-defined eye at its center, surrounded by concentric bands of intense thunderstorms. Analysis of satellite imagery and meteorological data revealed that Cyclone Usagi underwent rapid intensification, with the minimum sea level pressure reaching 910 hPa [2]. This powerful tropical cyclone exhibited characteristics of a mature system, featuring sustained wind speeds of up to 205 km/h (according to the Japan Meteorological Agency, JMA).
Cyclone Usagi had significant impacts on several countries in its path, primarily affecting the coastal regions of the Philippines, Taiwan, and southern China. It triggered extensive flooding, landslides, and infrastructure damage, leading to 39 deaths and significant economic losses estimated at approximately USD 4.32 billion [3,4]. These impacts underscore the importance of preparedness measures, early warning systems, and resilient infrastructure in vulnerable coastal regions to mitigate the devastating effects of tropical cyclones. In this way, ensemble forecasts are employed in order to quantify the uncertainties in cyclone paths, dynamics and impacts. The present study is an effort to propose an alternative way of producing model ensembles based on stochastic parameter perturbations (SPP) on potential vorticity anomalies through the identification of weather features as coherent objects.

Model Set-Up

The numerical simulations presented in this study are performed using the Advanced Weather Research and Forecasting Model (WRF-ARW, version 4.2.2, [5]). The domain is set up with a horizontal grid resolution of 4 km and 61 hybrid terrain-following η levels up to 50 hPa. Initial and boundary conditions were obtained from hourly ERA5 reanalysis [6] at 0.25° grid spacing. The specific physical parameterization schemes common to all the simulations performed in this study are summarized in the following table (Table 1).

A Feature-Based Stochastic Scheme (FBS)

The feature-based stochastic scheme (FBS) aims to stochastically perturb the grid points that dynamically describe a cyclone system. This is carried out via a four-step procedure:

Step one-PV budget calculation: A module has been developed that calculates the non-conserved PV components of the total atmospheric PV at the beginning of every model time step. This module is described in detail in [13], where it has been used to analyse the processes that contribute to the intensification of Mediterranean cyclones.

Step two-Identifying and tracking objects: A new module has been developed and implemented into WRF to identify coherent 3D objects. Each object is composed of neighboring grid points of PVdiab or PVmo that exceed the absolute value of 0.75 PVU. However, we retain only objects that include at least one grid point of more than 2 PVU. From the perspective of PV invertibility, these two absolute value thresholds are deemed adequate for retaining objects which describe meso-scale systems in terms of size and have a significant impact on the atmospheric state. Since this study focuses on cyclones, we included an additional criterion that requires objects to be composed of grid points with negative pressure perturbation (P', expressed as pressure anomalies from model level averages) (Figure 1).

Step three-Tracking objects in time: Once identified, each object is separately labeled according to the time step at which it was identified. If there are overlapping objects, then the oldest time label is assigned to the object.

Step four-Assigning a perturbation coefficient: Finally, every object is assigned a coefficient c_t that changes in time according to Equation (1), where χ is a random number that ranges from −1 to 1, t is the time step, dt is the model time step, and τ is a constant in units of time. The choice of τ is arbitrary but nevertheless crucial for the frequency of changes of the perturbation coefficient. As an example, Figure 2 shows the time evolution of the perturbation coefficient c_t at every model time step for τ = 12 h.
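The object-identification criteria of Step two (points exceeding 0.75 PVU in absolute value, at least one point above 2 PVU, and negative pressure perturbation P') map naturally onto connected-component labeling. The sketch below is an illustrative reconstruction, not the authors' WRF module: the function name, the synthetic arrays, and the toy grid are assumptions for demonstration only.

```python
import numpy as np
from scipy import ndimage

def identify_objects(pv, p_prime, low=0.75, high=2.0):
    """Label coherent 3D objects of |PV| > low PVU that contain at least one
    grid point with |PV| > high PVU and consist only of grid points with
    negative pressure perturbation, loosely following Step two of the FBS
    scheme (illustrative sketch, not the paper's implementation)."""
    mask = (np.abs(pv) > low) & (p_prime < 0.0)   # threshold + cyclonic criterion
    labels, n = ndimage.label(mask)               # 3D connected components
    keep = np.zeros_like(labels)
    obj_id = 0
    for lab in range(1, n + 1):
        member = labels == lab
        if np.abs(pv[member]).max() > high:       # retain only "strong" objects
            obj_id += 1
            keep[member] = obj_id
    return keep, obj_id

# Toy example on a small synthetic (level, y, x) grid; all values hypothetical.
rng = np.random.default_rng(0)
pv = rng.normal(0.0, 0.5, size=(5, 20, 20))   # weak background PV noise
pv[2, 8:12, 8:12] = 3.0                        # one strong coherent anomaly
p_prime = -np.ones_like(pv)                    # all points cyclonic in the toy case
objs, n_obj = identify_objects(pv, p_prime)
print(n_obj)  # number of retained objects
```

The embedded strong anomaly always survives the 2 PVU retention test, while most weak noise components are discarded.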
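Equation (1) itself is not legible in this copy, so the exact update rule for c_t is unknown here. The sketch below assumes one plausible red-noise (AR(1)-style) form built only from the ingredients the text names (a uniform random χ in [−1, 1], the model time step dt, and the timescale τ); it is a hypothetical stand-in, not the paper's scheme.

```python
import random

def coefficient_series(tau_hours=12.0, dt_seconds=60.0, days=5.0, seed=1):
    """Assumed AR(1)-style evolution of a perturbation coefficient: c relaxes
    toward a fresh uniform random chi in [-1, 1] on timescale tau at every
    model time step. This update rule is an assumption, not Equation (1)."""
    rng = random.Random(seed)
    tau = tau_hours * 3600.0
    n = int(days * 24 * 3600 / dt_seconds)
    c, series = 0.0, []
    for _ in range(n):
        chi = rng.uniform(-1.0, 1.0)
        c += (dt_seconds / tau) * (chi - c)   # relax toward chi on timescale tau
        series.append(c)
    return series

s = coefficient_series()
print(min(s), max(s))  # the coefficient stays within [-1, 1] by construction
```

With dt much smaller than τ, the series decorrelates on roughly the τ timescale, consistent with the role the text assigns to τ in Figure 2.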
Results

In terms of the trajectory of Usagi, we can observe that the control simulation shows a track similar to the one obtained from the JMA, especially during the initiation and mature stages of Usagi (Figure 3). Although the control trajectory starts to diverge in the dissipation phase of Usagi, we can conclude that the control simulation reproduces the typhoon trajectory with sufficient accuracy.

To assess the model's sensitivity in cyclone forecasting, six simulations were conducted using the Stochastically Perturbed Parametrization Tendencies (SPPT) scheme within the WRF model, while maintaining the same model configuration. The ensemble cyclone tracks exhibited a close resemblance to the reference track (Figure 4), indicating a comparable spread. Similarly, the development of the cyclonic system, as reflected in the Mean Sea Level Pressure (MSLP) values at its center, varied around those of the control simulation.

Before implementing the perturbation coefficient on the physical tendencies of the objects, a set of experiments was performed by multiplying them with constant values. These values ranged from 0 to 2, with an increment of 0.25. This allowed for a deeper understanding of the impacts this procedure had on the system's evolution. The findings revealed that coefficients smaller than 1 (where 1 represents the control simulation) had a more pronounced effect compared to larger coefficients (Figure 5).

For the purposes of this study, the FBS scheme utilized random coefficients within the range of −0.1 to +0.4. The application of this scheme yielded a satisfactory spread, albeit narrower than that observed with the traditional SPPT methodology. The cyclone exhibited sensitivity throughout all stages, with the minimum mean sea level pressure (MSLP) value consistently higher than in the control case in most instances (Figure 6).

Conclusions

In this study, we present preliminary results on a new SPPT scheme where perturbations are applied uniquely to areas characterized by high PV produced by diabatic processes. Therefore, we only perturb the grid points which are expected to have a strong impact on the component of the atmospheric state that is sensitive to inherent model uncertainties. Our results show a spread in the tracks and MSLP evolution comparable to the one produced by the original SPPT method. In contrast to the original SPPT method, this approach is based on PV theory; we therefore consider the fact that the perturbations have a physical basis to be an advantage.

Figure 1. An example of identifying and tracking objects, Cyclone Usagi.

Figure 2. Five-day time evolution of the perturbation coefficient for characteristic length (τ in Equation (1)) equal to 12 h.

Figure 3. Usagi's trajectories depicted by the CNTRL simulation and the JMA from 16 UTC 9 September to 21 UTC 23 September 2013. CNTRL and JMA data are depicted every 3 h.

Figure 5. Cyclone tracks and minimum MSLP evolution during the experiments where the identified objects are multiplied with values ranging from 0 to 2 (0% to 200%).

Figure 6. Cyclone tracks and minimum MSLP evolution for all ensemble members produced through the proposed FBS approach.

Table 1. WRF parameterizations used for the study.
Technological and Operational Aspects That Limit Small Wind Turbines Performance

Small Wind Turbines (SWTs) are promising for distributed generation using renewable energy sources; however, their broad deployment requires addressing topics related to their cost-efficiency. This paper aims to survey recent developments about SWTs holistically, focusing on multidisciplinary aspects such as wind resource assessment, rotor aerodynamics, rotor manufacturing, control systems, and hybrid micro-grid integration. The wind resource produces inputs for the rotor's aerodynamic design that, in turn, defines a blade shape that needs to be achieved by a manufacturing technique while ensuring structural integrity. A control system may account for the rotor's aerodynamic performance interacting with an ever-varying wind resource. In the end, the concept of integration with other renewable sources is justified, according to the inherent variability of wind generation. Several commercially available SWTs are compared to study how some of the previously mentioned aspects impact performance and Cost of Electricity (CoE). Understanding these topics as a whole may permit identifying both tendencies and unexplored topics to continue expanding the SWT market.

Introduction

Energy demand grew by more than 2.3% in 2018 according to the International Energy Agency, showing the fastest growth pace of the last decade [1,2]. Strategies to supply such consumption must include a rapid increase in energy productivity, efficiency management strategies for energy systems, an integrated approach that uses centralized and decentralized sources, and a more significant share of renewables in the mix [3].
Depletion of fossil fuel sources and the associated effects of their non-rational use on the environment have raised interest in searching for alternatives, accounting for the development, renovation, adaptation and even hybridization of different renewable [4-7] and non-renewable generation sources [8]. To this end, the United Nations promotes the implementation of strategies to ensure access to affordable, reliable, sustainable and modern energy for all [9]. It is worth mentioning that hydropower facilities are important assets for the electric power sector and represent a key source of flexibility for electric grids with high penetrations of variable generation [10].

• Technology cost feasibility, in terms of manufacturing costs, maintenance costs and lifespan [30];
• Improvement in efficiency at low wind speeds in areas near the consumer centers where the resource may not be optimal [31];
• Noise control or reduction, since SWTs are expected to operate closer to the end-consumer and must therefore be as quiet as possible [32]; and
• Hybridization and integration with other sources of renewable energy, attending to the principle of spatial and temporal complementarity of the respective natural resources [33].

All the aforementioned challenges raise the need for practical solutions in which SWTs can be used in low-wind-speed environments (both rural and urban) with highly turbulent wind flows while being commercially affordable. In order to address these challenges, several multidisciplinary studies have been carried out in recent years on topics ranging from, but not limited to, wind resource assessment, aerodynamics, manufacturing, control systems and micro-grid integration. Among others, the following aspects stand out:

• As wind resource is the most differential factor when comparing SWTs with their large counterparts, most of the recent works have focused on it. Tadie Fogaing et al. [34] reviewed wind energy resources in urban locations, mentioning the requirement for SWTs in applications close to consumption areas. Therefore, the wind energy study must include a precise evaluation of the wind speed profile.
• James and Bahaj [35] focused on micro- and small-scale wind turbines in the UK context. Principally, the authors addressed SWTs installed in buildings in the UK in rural, suburban and urban environments. Additionally, the authors compared the wind speed computational tool called NOABL (Numerical Objective Analysis Boundary Layer) with annual measurements in rural, suburban and urban areas.
• Micallef and van Bussel [36] documented recent works related to SWTs and addressed a series of different disciplines connected to the aerodynamics in urban environments. The authors proposed an interesting discussion on the nature of existing methodologies for assessing wind resource in urban areas, encompassing analytical, experimental and numerical-based methods. KC et al. [37] presented a review on the topic of SWTs in the built environment aiming to understand issues related to wind resource, SWT performance, appropriate siting and the suitability of IEC 61400-2 in such an environment.
• Within the context of rotor aerodynamic design, the work of Karthikeyan et al. [38] is one of the most relevant precedents, as they explore the self-starting behavior of SWTs, discuss different devices for the improvement of performance and gather information on airfoil sections for wind turbine blades under low Reynolds conditions, for which several design techniques are implemented.
• Regarding design control approaches, Menezes et al. [39] reviewed some relevant works related to the topic of wind turbine control divided into three main areas: wind turbine torque control, blade pitch control and grid integration control; it focuses neither on SWTs specifically nor on vertical- or horizontal-axis wind turbines. That work recognized the small number of works that address wind turbine control concepts and presents a literature review of the topic to provide a base for further investigations in the field of wind turbine control techniques. In the discussion, the authors considered the potential of smart rotor applications and the overall potential of wind turbine control in the sustainable energy sector.

The previous analysis allows us to conclude that most of the works reported regarding SWTs tend to focus on specific issues without establishing a clear link between them, which would allow a more holistic view of the subject. In 2015, Tummala et al. [31] presented a review on SWTs discussing wind turbine classification, blade design, appropriate positioning, aero-acoustics, control and manufacturing in a holistic manner. However, its approach is mostly related to control, without paying much attention to manufacturing issues. Although wind resource and aerodynamics are paramount in the effectiveness of energy harnessing by SWTs, the implementation of both tailored manufacturing techniques and control systems may contribute to the further implementation of such machines.

Therefore, the aim of this paper is, on one hand, to provide a survey of the five main topics identified, i.e., wind resource, rotor aerodynamics, manufacturing techniques, control systems and hybrid micro-grids of SWTs with horizontal-axis configuration. The interaction between these topics allows for a better understanding of the phenomenon as a whole and shows how one topic can be better understood when linked with the rest. On the other hand, a complement to the key topics of this review is provided by a comparative exercise between different commercially available SWTs. This includes identifying the aspects mentioned previously under different resources and operating conditions, the respective effect on performance and, finally, the impact on the CoE.
This procedure is structured around six different wind turbine models, each one with particular characteristics of performance and manufacturing that illustrate the diversity evidenced in the survey sections. The comprehensive analysis of the wind resource (Section 2) and the effects of turbulence on SWT operation (Section 3) produces a fundamental input for the decisions on the aerodynamic design of the wind turbine rotor (Section 4). The final shape of the blades is a key factor for the selection of the manufacturing technique that guarantees both the aerodynamic shape of the blades and their structural integrity, as illustrated in Section 5. In addition, the blades' control system (Section 6) has to account for the aerodynamic performance of the blades and its interaction with the available wind resource. The above wind turbine topics affect the integration of SWTs with other renewable sources, driving research towards hybrid micro-grid design to improve the energy security of some regions (Section 7). The identified topics and their interactions are shown in Figure 2. Finally, the assessment of commercial wind turbines and the effect of the aforementioned topics on their performance and the CoE is presented in Section 8.

Wind Resource Estimation

Global Wind Power Potential (WPP) has been estimated at 94.5 TW; the regions with the highest WPP are Europe, Russia, and the United States with 37.5, 36 and 11 TW, respectively [40]. However, data from the European Wind Energy Association show that low-speed winds are the most frequent; around 14% of the time, the wind is too slow to produce electricity with large-scale wind turbines [41]. More recently, projections of the wind energy resource for most of the coastal region of the United States show a steady decrease, which would compromise the capacity to fully exploit the region's total WPP with offshore large-scale wind turbines [42].
Moreover, the lack of exact predictability and the fluctuations of wind energy lead to problems in the power flow of the transmission system, especially when the weak nature of the grid in remote areas and the uncertainty of wind are taken into consideration [43]. Deployment of SWTs commonly includes their installation and operation in rural areas and urban environments. An appropriate estimation and assessment of the wind resource, including wind velocity distribution and variation over time, is mandatory to determine the power generation and the load distribution on the wind turbine's components. In this sense, this section reviews the models used to evaluate wind speed, including those defined within the IEC standard to evaluate wind profiles in open terrain for the pre-feasibility of SWT projects. Finally, some studies related to the behavior of small wind turbines in urban areas are presented.

Wind Speed Estimation

Wind speed, from a mathematical point of view, is represented by space and time variables. If the analysis is independent of time, the model usually employed to determine the wind profile developed on a surface is described by the logarithmic law, as [36,44]:

u_∞(y) = (u_*/κ) ln((y − d)/y_0),

where u_∞(y) is the wind speed depending on the height above ground level y, κ is the von Karman constant, d is the zero-plane displacement where the wind speed has a value of 0 m/s, y_0 is the roughness height given by the aerodynamic effects of surface imperfections (typical values are given in Table 1), and u_* is the friction velocity due to the ground surface, given as:

u_* = √(τ_w/ρ),

where ρ is the air density and τ_w is the wall shear stress [45]. Anemometer wind speed measurements u_ref are used as reference values at the evaluation height y_ref to determine the speed magnitude for any other height y, given by [46,47]:

u_∞(y) = u_ref (y/y_ref)^α,

where α is the roughness coefficient of the ground. For a preliminary analysis, the values shown in Table 2 can be used [47]. Table 2. Values of roughness coefficient α for some terrain types [47].
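As a quick numerical illustration of the two profile laws above, the short Python sketch below extrapolates an anemometer reading to hub height; the measurement values and heights are invented for illustration only.

```python
import math

def log_law_speed(u_star, y, y0, d=0.0, kappa=0.41):
    """Logarithmic law: u(y) = (u*/kappa) * ln((y - d)/y0)."""
    return (u_star / kappa) * math.log((y - d) / y0)

def extrapolate_wind_speed(u_ref, y_ref, y, alpha):
    """Power law: u(y) = u_ref * (y/y_ref)**alpha."""
    return u_ref * (y / y_ref) ** alpha

# Hypothetical case: 5 m/s measured at 10 m, extrapolated to an 18 m hub
# over agricultural terrain with limited low obstacles (alpha = 0.16, Table 2)
u_hub = extrapolate_wind_speed(u_ref=5.0, y_ref=10.0, y=18.0, alpha=0.16)
```

Both laws grow monotonically with height, and the power-law form is the one retained by the IEC standard for the normal wind profile (Section "Standard IEC 61400-2").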
Terrain Type                                                                 α
Calm sea                                                                     0.09
Agricultural area, limited presence of obstacles less than 6 m high          0.12
Agricultural area, limited presence of obstacles between 6 m and 8 m high    0.16
Agricultural area, large presence of obstacles between 6 m and 8 m high      0.20
Urban or forest area                                                         0.30

On the other hand, the wind speed profile varies with time. Statistical models are an alternative to describe the stochastic behavior of the wind magnitude and direction [48]. In this regard, the Weibull and Rayleigh Probability Distribution Functions (PDFs) are commonly used to characterize the frequency of the wind speed magnitude. Additionally, these models are usually used to estimate the Annual Energy Production (AEP) [36]. The Weibull function is given by:

W(u_∞) = (k/c)(u_∞/c)^(k−1) exp(−(u_∞/c)^k),

where k is the shape parameter of the function, and c is a scale parameter [36,40]. In the literature, the shape parameter is related to the terrain morphology and the wind regime, with some typical values reported in Table 3 [47]; the scale parameter is given as [49,50]:

c = u_mean / Γ(1 + 1/k),

where u_mean is the mean wind speed, and Γ is the gamma function. The Rayleigh distribution is the case where the Weibull model takes the shape parameter k equal to two (2) [40], and it is given by the IEC standard [51] as:

P_R(u_hub) = 1 − exp(−π (u_hub/(2 u_ave))²),

where u_hub is the average wind speed at the height of the wind turbine rotor over 10 min, and u_ave is the annual average wind speed at the same height as u_hub [51]. Finally, the annual energy production obtained with the Weibull PDF is calculated as:

AEP = T ∫_{u_in}^{u_out} P(u_∞) W(u_∞) du_∞,

where AEP refers to the total energy generated over the operation time T, u_in is the cut-in wind speed, u_out is the cut-out wind speed, P(u_∞) is the power output given by the characteristic curve of the specific wind turbine to be evaluated [36], and W(u_∞) is the Weibull distribution given by Equation (4). The statistical approaches are usually adjusted with the real air density at the installation place and corrected for changes in pressure or temperature.
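To make the Weibull-based AEP estimate above concrete, the Python sketch below integrates a hypothetical power curve against the Weibull PDF by trapezoidal quadrature. The turbine parameters (cubic ramp to 1 kW rated power) and the mean wind speed are invented for illustration, not taken from any turbine in the survey.

```python
import math

def weibull_pdf(u, k, c):
    """Weibull probability density W(u) with shape k and scale c."""
    return (k / c) * (u / c) ** (k - 1) * math.exp(-((u / c) ** k))

def weibull_scale(u_mean, k):
    """Scale parameter c = u_mean / Gamma(1 + 1/k)."""
    return u_mean / math.gamma(1.0 + 1.0 / k)

def annual_energy_production(power_curve, k, c, u_in, u_out, T=8760.0, du=0.1):
    """AEP = T * integral of P(u) * W(u) du between cut-in and cut-out."""
    n = int((u_out - u_in) / du)
    integral = 0.0
    for i in range(n):
        u0, u1 = u_in + i * du, u_in + (i + 1) * du
        f0 = power_curve(u0) * weibull_pdf(u0, k, c)
        f1 = power_curve(u1) * weibull_pdf(u1, k, c)
        integral += 0.5 * (f0 + f1) * du
    return T * integral  # Wh if power_curve returns W and T is in hours

def power_curve(u, rated=1000.0, u_rated=10.0, u_in=3.0):
    """Hypothetical 1 kW turbine: cubic rise to rated power, clipped at rated."""
    if u < u_in:
        return 0.0
    return min(rated, rated * (u / u_rated) ** 3)

# Rayleigh-like site (k = 2) with 5.5 m/s annual mean wind speed
c = weibull_scale(u_mean=5.5, k=2.0)
aep_wh = annual_energy_production(power_curve, k=2.0, c=c, u_in=3.0, u_out=20.0)
```

Setting k = 2 reproduces the Rayleigh case assumed by the IEC standard for SWT design.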
It is also modified by system performance, in terms of maintenance or the availability of the power grid. The main disadvantage of this approach is that it is based on statistical models of wind measurements and the manufacturer's P-V curve (i.e., generated power as a function of wind speed). These values are usually estimated experimentally in a wind tunnel with controlled parameters [29], and may therefore not be accurate enough when compared with real operating conditions.

CFD Wind Velocity Profile Estimation

The evaluation of potential SWT sites begins by recognizing the wind resource; the random, erratic, and uncontrollable behavior [52] present in large wind farms or rural areas worsens due to the presence of buildings. Among other effects, high turbulence levels add constant changes in the magnitude and direction of the wind speed, affecting wind energy harnessing and reliability. For this reason, the prediction of the wind profile by Computational Fluid Dynamics (CFD) stands out for its comprehensive implementation [36,52,53]. The Unsteady Reynolds-Averaged Navier-Stokes (URANS) equations represent the conservation of mass and momentum for incompressible fluids without body forces [53][54][55]. This model is commonly used for wind resource characterization; it is formed by the continuity equation, given by [56]:

∂ū_i/∂x_i = 0,

and the conservation of momentum:

∂ū_i/∂t + ū_j ∂ū_i/∂x_j = −(1/ρ) ∂p̄/∂x_i + ∂/∂x_j (ν ∂ū_i/∂x_j − u_i'u_j'),

where ū_i denotes the mean velocity, u_i' is the fluctuating velocity, ν is the kinematic viscosity, and u_i'u_j' is the (averaged) Reynolds-stress tensor, an unknown variable that can be solved using the Boussinesq eddy-viscosity assumption [56,57], given by the following expression:

−u_i'u_j' = ν_t (∂ū_i/∂x_j + ∂ū_j/∂x_i) − (2/3) k δ_ij,

where k is the turbulent kinetic energy and ν_t is the kinematic eddy viscosity [58]. Additionally, it is necessary to find two turbulence properties [56], the turbulent kinetic energy k and the turbulent dissipation rate ε, given by the equations:

∂k/∂t + ū_j ∂k/∂x_j = ∂/∂x_j [(ν + ν_t/σ_k) ∂k/∂x_j] + P_k − ε,

∂ε/∂t + ū_j ∂ε/∂x_j = ∂/∂x_j [(ν + ν_t/σ_ε) ∂ε/∂x_j] + C_ε1 (ε/k) P_k − C_ε2 (ε²/k),

where P_k is the production of turbulent kinetic energy and ν_t is given by [56]:

ν_t = C_μ k²/ε,

and σ_k and σ_ε are Prandtl numbers, and C_ε1, C_ε2 and C_μ are model constants [54,55,59].
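As a minimal numeric sketch of the closure relations just given, the snippet below evaluates the eddy viscosity and a Boussinesq shear stress. The standard constant C_μ = 0.09 is the usual SKE value; the k, ε and shear values are illustrative assumptions, not from any cited study.

```python
# Standard k-epsilon closure constant
C_MU = 0.09

def eddy_viscosity(k, eps, c_mu=C_MU):
    """Kinematic eddy viscosity: nu_t = C_mu * k**2 / eps."""
    return c_mu * k * k / eps

def boussinesq_shear_stress(k, eps, dudy, dvdx=0.0, c_mu=C_MU):
    """Off-diagonal Reynolds stress -u'v' = nu_t * (du/dy + dv/dx)
    from the Boussinesq eddy-viscosity assumption (deviatoric part only)."""
    return eddy_viscosity(k, eps, c_mu) * (dudy + dvdx)

# Illustrative near-surface values: k = 0.5 m^2/s^2, eps = 0.05 m^2/s^3
nu_t = eddy_viscosity(0.5, 0.05)  # = 0.09 * 0.25 / 0.05 = 0.45 m^2/s
```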
The above CFD formulation is known as the Standard k-ε model (SKE). Alternative approaches for reproducing rural and urban wind conditions include variations of the SKE formulation [54], Reynolds Stress Models (RSM) or Large-Eddy Simulation (LES) techniques [53].

Standard IEC 61400-2

The IEC 61400-2 is the international regulation for SWTs. This standard defines four site classes in terms of wind speed and turbulence effects, which are characteristic of the zone and differ depending on the application site; they are shown in Table 4, where I_15 is the characteristic value of the turbulence intensity at a wind speed of 15 m/s and a is a dimensionless adjustment parameter. It is worth mentioning that these classes should not be considered for offshore applications or when the environment presents tropical storms [51]. The standard defines two wind regimes: Normal Wind Speed Conditions (NWC) and Extreme Wind Speed Conditions (EWC). The wind regimes and the SWT classes depicted in Table 4 define the Standard Wind Speed Conditions (SWC). SWT designs under the NWC regime take into account the Rayleigh distribution (see Equation (6)), according to the IEC standard [51]. Furthermore, the NWC regime also determines the Normal Wind Speed Profile (NWP) u_∞(y) (see Equation (3)) [46,51], with a value of α of 0.2. The last NWC factor is the Normal Turbulence Model (NTM); it describes the stochastic fluctuation of the wind speed according to 10 min average measurements, including the effect of the variation of magnitude and direction of the wind speed [51]. On the other hand, to account for extreme wind loads, the IEC standard suggests the EWC regime, which includes peak wind speeds and sudden changes in direction, among others.
The Extreme Wind Speed Model (EWM) addresses the 3-second gust speed estimated to be exceeded on average only once in 50 years, given by:

u_e50(y) = GF · u_ref (y/y_hub)^0.11,

as well as the one expected in one year [51]:

u_e1(y) = 0.75 u_e50(y).

In Equations (15) and (16), y_hub refers to the hub height of the wind turbine, and GF is the gust factor, defined by the IEC standard as 1.4. Both equations take into account an average variation of the wind direction within −15° and 15° [60]. Specifically, GF is given as [61]:

GF = û_∞ / ū_∞,

where û_∞ is a wind speed peak or gust and ū_∞ refers to the average wind speed. The American Society of Civil Engineers Standard establishes that a 3-s gust duration is sufficient to cause perceptible structural damage. Following the IEC standard, a GF of 1.4 ensures that a SWT under the corresponding reference wind speed will be safe under 3-second gusts; for these cases, the reference wind speed refers to 10 min average measurements [62]. The GF model is simple, but it does not represent the real gust profile [60]. Within the EWC regime, the Extreme Operating Gust (EOG) states that the gust magnitude over N years at the hub height is [51]:

u_gustN = β σ_1 / (1 + 0.1 (D/Λ_1)),

where D is the rotor diameter and β takes values of 4.8 and 6.4 for periods of one (N = 1) and 50 years (N = 50), respectively; Λ_1 is the turbulence scale parameter, given by:

Λ_1 = 0.7 y_hub for y_hub < 30 m, and Λ_1 = 21 m otherwise,

and σ_1 is the standard deviation of the longitudinal velocity component, expressed as [51]:

σ_1 = I_15 (15 + a u_hub)/(a + 1).

Finally, the EWC defines the Extreme Direction Change (EDC) by means of:

θ_eN = ±β arctan(σ_1 / (u_hub (1 + 0.1 (D/Λ_1)))),

which is defined over the same periods described by the EOG, with β taking the same values defined for Equation (18) [51].

Turbulence Effects on Wind Profiles and SWTs Performance

The IEC 61400-2 standard models are applicable to open-terrain wind measurements; however, their use in urban environments requires considering the obstacles and surface roughness present in the operation areas. Urban environments show higher turbulence intensity than open terrain.
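The peak-over-mean gust factor can be estimated from a sampled wind record. The sketch below is one illustrative implementation (a short moving-average "gust" window divided by the record mean); the 1 Hz series and gust values are invented, and the windowing choice is our assumption rather than the IEC procedure.

```python
def gust_factor(samples, window):
    """GF = peak short-window average ('gust') speed / overall mean speed."""
    mean = sum(samples) / len(samples)
    peak = max(sum(samples[i:i + window]) / window
               for i in range(len(samples) - window + 1))
    return peak / mean

# Hypothetical 1 Hz record: steady 8 m/s flow with one 3-second gust
series = [8.0] * 60
series[20:23] = [12.0, 12.5, 12.0]
gf = gust_factor(series, window=3)  # 3-sample window approximates a 3-s gust
```

For this invented record the resulting GF is close to the 1.4 design value assumed by the IEC standard.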
The impact of turbulence on the wind generation system must be considered during the SWT design process; otherwise, the power generation estimation will not be correct, and structural components may fail during operation [63]. Some works that present cases in which SWTs are evaluated under the cited standard are listed below. The Turbulence Intensity (TI) is commonly used to evaluate the effect of turbulence on SWT operation and is given by [64]:

TI = σ_u∞ / ū_∞,

where σ_u∞ is the standard deviation of the wind speed measurements over 10 min and ū_∞ is the average wind speed over the same time interval [64]. It is worth mentioning that the turbulence intensity does not include a time-dependent model, i.e., it carries no time information on the fluctuations of the wind speed profile, which makes the chronological observation of the wind speed difficult [37]. Further, the TI presents some issues: in particular, when the value of ū_∞ approaches or is equal to zero, the values of TI exceed 100%, giving erroneous correction values. Another disadvantage is that urban environments have gusts that significantly affect the standard deviation and the TI value. The main drawback of the turbulence index is that it assumes the wind measurements to be normally distributed. Moreover, when the model combines the Gaussian distribution with the TI factor, the wind profile takes negative values, introducing errors when the wind power generation is calculated. In this regard, Woolmington et al. [64] developed a model to characterize the turbulence behavior of the wind profile, called the Turbulent Fourier Dimension (T_DF), and compared it with the conventional turbulence intensity model using wind resource measurements at two sites in Dublin (Ireland). Rakib et al. [60] compared the standard IEC 61400-2 parameters with real operational measurements of a 5 kW HAWT with two blades and a rotor diameter of 5 m. The study employed three 3D ultrasonic anemometers installed at a height of 15 m on the wind turbine tower.
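The TI definition above, and the near-calm pathology the text mentions, can be demonstrated in a few lines of Python (the two sample records below are invented for illustration):

```python
import math

def turbulence_intensity(samples):
    """TI = sigma / mean of wind speed measurements over one interval."""
    mean = sum(samples) / len(samples)
    var = sum((u - mean) ** 2 for u in samples) / len(samples)
    return math.sqrt(var) / mean

# Moderate urban turbulence: TI comes out around 18%
ti_urban = turbulence_intensity([6.0, 8.0, 7.0, 9.0, 5.0, 7.0])

# Pathology noted in the text: a gusty near-calm record (mean close to zero)
# inflates TI well above 100%, giving erroneous correction values
ti_calm = turbulence_intensity([0.0, 2.0, 0.0, 0.0, 0.0, 0.0])
```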
The system was located at the University of Newcastle (Australia). Data were acquired with the WindView software. Wind speed was monitored over twelve months at a frequency of 20 Hz. With the real data, Rakib et al. [60] calculated the gust factor GF and compared it with the standard model. The authors concluded that the standard IEC 61400-2 did not represent the gust profile within an urban area. Rakib et al. [65] present a characterization of the wind resource in urban areas, specifically the vertical wind speed, and compare it with the formulation given by the standard IEC 61400 for open land. A horizontal-axis wind turbine of 5 kW rated capacity with two blades of 2.5 m length was used in the research. The wind turbine's rated operating parameters were a wind speed of 10 m/s and an angular velocity of 320 rpm at a TSR of 8, and it was mounted at a height of 18 m. Similarly, KC et al. [63] compared the IEC standard with the simulated behavior of a HAWT operated in turbulent urban terrain, focusing on the effects on the power output and fatigue loads. The wind profile was based on measurements at two different places, Port Kennedy (Australia) and an open area in Östergarnsholm (Sweden). The data were then processed with the software TurbSim v2.00, whose output can be used as input for the FAST software. The 5 kW wind turbine was modeled aeroelastically in FAST v7.02.00. The wind turbine had two blades, a rated wind speed of 10.5 m/s, a cut-in wind speed of 3.5 m/s, and a rated angular velocity of 320 rpm at a TSR of 8. Additionally, the computational model had a passive yaw control system by means of a delta-wing tail, and the blades used the SD7062 airfoil profile along the span. The blades of the described wind turbine had a length of 2.5 m and were simulated as made of glass fiber reinforced polymer (GFRP). The study concluded that the turbulence model of the standard IEC 61400-2 does not represent the actual operation of small wind turbines in turbulent environments.
The wind turbine's predicted performance in Port Kennedy, using the software FAST, showed that the power output increased due to the turbulence present in the wind profile. However, the blade root experienced a larger bending moment than in the Östergarnsholm simulation, due to the fluctuations in magnitude and direction of the wind speed. Recently, researchers have shown that the power generated by wind turbines installed in urban areas presents a reduction of between 15% and 30% of nominal capacity. A wind turbine with a capacity factor of 10% in open terrain can show a capacity factor of less than 7% when operating in a turbulent environment [66]. For these reasons, some researchers have compared the IEC standard method with real measurements during operation to analyze the variation between theoretical and actual conditions. Dilimulati et al. [66] present recommendations regarding the installation of wind turbines in urban areas. The authors suggest that, for the deployment of SWTs in urban sectors, the average wind speed must be at least 5.5 m/s. Additionally, the installation height must be at least 50% higher than the surrounding buildings or obstacles, and the hub must be located 30% higher than the rooftop; therefore, the installation should be above the turbulent boundary layer. Pagnini et al. [67] compared the performance of two wind turbines, one HAWT and one VAWT, both with a nominal capacity of 20 kW and installed in Savona (Italy). The study compared the electric power generation accounting for the turbulence index with the power generation obtained using the statistical method of the IEC 61400-12-1 standard. The study concluded that the manufacturers' curves for both turbines did not represent the real generation, which depends on the location, the roughness of the installation site, the direction of the wind resource, and the effects of turbulence.
The HAWT model presented a higher energy production than the VAWT model; however, the HAWT was more affected by gusts and fluctuations in the wind speed direction. Lubitz [68] studied the behavior of the Bergey XL.1 model, a SWT with a nominal capacity of 1.0 kW and a rotor diameter of 2.5 m. The turbine was mounted and operated in a rural area in Oxford (UK), where the wind resource showed turbulent behavior. The power generation was estimated by measuring the output voltage and the electric resistance of an external load. The author compared the obtained P-V curve with two studies of the same turbine. Additionally, this work presented results on wind speed frequency and turbulence intensity. The study concluded that, as TI increases at low speeds, power generation also increases; however, when TI increases at high speeds, power generation decreases. Ward and Stewart [69] studied the behavior of a wind turbine with a nominal capacity of 2.4 kW, considering the effect of the turbulence index. The authors mainly compared the manufacturer's power-speed curve with the power generation according to the IEC standard and applied the height correction factor for the wind speed measurements (Equation (3)). The results showed that the electric power generation increased due to the increase in TI at low speeds. However, when the speed increased with the same values of TI, the turbine's power output decreased, the same conclusion reached by Lubitz [68]. Cooney et al. [70] studied the energy production of an 850 kW wind turbine. The authors first described the resource with a wind rose and a histogram, then compared the power generation calculated from field measurements with the manufacturer's characteristic curve. Although the turbulence index was not involved in this study's mathematical model, the experimental measurements were fitted to the theoretical behavior.
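The measured-versus-manufacturer comparisons recurring in the studies above are usually built by sorting simultaneous wind/power samples into wind speed bins and averaging each bin, in the spirit of the method of bins of IEC 61400-12-1. A minimal sketch with invented 10-min averages:

```python
def method_of_bins(wind, power, bin_width=1.0):
    """Average measured power per wind speed bin; returns {bin_center: mean_power}."""
    bins = {}
    for u, p in zip(wind, power):
        center = (int(u / bin_width) + 0.5) * bin_width
        bins.setdefault(center, []).append(p)
    return {c: sum(ps) / len(ps) for c, ps in sorted(bins.items())}

# Invented 10-min averages: (wind speed in m/s, electrical power in W)
wind = [3.2, 3.8, 4.1, 4.9, 5.5, 5.2, 6.3, 6.8]
power = [40.0, 60.0, 95.0, 120.0, 180.0, 170.0, 260.0, 300.0]
curve = method_of_bins(wind, power)
```

The binned curve can then be plotted against the manufacturer's P-V curve to quantify the deviation induced by site turbulence.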
Furthermore, that work compared the curves of the real and theoretical power coefficient (C_p) and presented the implementation of the Weibull distribution to estimate future energy generation, complemented by an economic study of the Levelized Cost of Energy (LCoE) and Net Present Value (NPV). Carbó Molina et al. [71] studied different turbulence conditions of an H-Darrieus VAWT model in a wind tunnel to identify the effects of turbulence intensity and Reynolds number. For the experiment, the wind tunnel was fitted with passive grids to increase the turbulence inside it; additionally, measurements were taken in two wind tunnels of different sizes. The wind turbine scale model had two blades with a 5 cm chord NACA0018 airfoil, a diameter of 0.5 m, an area of 0.4 m², and an angular speed of 1200 rpm to reproduce an operational Reynolds number. The authors concluded that turbulence intensity had a positive effect on the wind turbine's power coefficient, which increased by 20% as the turbulence intensity rose from 0.5% to 15%, the effect being stronger at lower Reynolds numbers and TSRs. Battisti et al. [29] recognized the importance of the time scales of both the wind resource and the wind turbine. A time scale refers to the time between two states of wind velocity (direction or magnitude) or, in the case of the wind turbine, the time it takes to adapt to wind velocity changes. The interaction of these time scales determines the coupling between the natural phenomenon and a wind turbine's operation. Therefore, the turbine's response to wind condition variations depends on the time scale of the speed fluctuation and the turbine's response characteristics. The authors concluded that the inertial response governs the response time of the turbine. The authors also discussed the requirements for a turbine to operate under variable wind conditions and introduced the Required Rotor Acceleration (RRA) and the Available Rotor Acceleration (ARA).
The RRA is the acceleration required by the rotor to follow a change of wind speed (gust). For a fixed geometry and continuous tracking of the maximum generation point, the RRA is expressed as

ω̇_RRA = (λ_opt/R) u̇_∞,

where λ_opt is the optimum TSR, R is the radius of the rotor and u̇_∞ is the wind speed acceleration. For a large rotor radius, the RRA is smaller for any wind acceleration. On the other hand, if the turbine rotation speed increases, a higher acceleration of the rotor is required to track the wind acceleration. In this sense, Emejeamara and Tomlin [72] studied the effect of gusts on the wind speed profile and the importance of resource-tracking technologies for urban applications. The authors used a small-scale VAWT and the TI to study the effects of turbulence, which were related to the excess energy, or fraction of kinetic energy, available when the wind profile has gusts. The numerical value of the excess energy was presented through the indicator of Excess Energy Content (EEC), given as [72,73], where ū_∞ is the mean wind speed over 10 min, GEC refers to the gust energy coefficient, and T takes the value of 10 min. The value of the EEC can be applied in energy studies to improve the estimation of energy production. The results showed that, when the TI increased, there was additional energy due to gusts in the wind profile. On the other side, the ARA is defined as the maximum available angular acceleration when the rotor is free to accelerate under gust conditions, with no contribution of the torque applied in the opposite direction (i.e., the electric generator's torque); it is expressed as [29]

ω̇_ARA = Q_aero/J,

where Q_aero is the wind turbine torque and J is the rotor's moment of inertia. Battisti et al. [29] concluded that the ARA for HAWTs is one order of magnitude higher than for VAWTs of equivalent radius. This is more notable at low wind speeds, which are more common in urban environments. Finally, two possible scenarios arise from comparing the RRA and the ARA.
The first scenario occurs when RRA < ARA: the turbine can follow the variations in speed and the parameters established by the control. On the contrary, when RRA > ARA, there is a delay in the rotor acceleration, and the optimum TSR may not follow the wind variations since the required acceleration cannot be reached.

Aerodynamic Wind Turbine Rotor

Distributed generation refers to the use of small generation technologies to produce electricity close to the end users, an appealing alternative to large-scale farms or conventional plants for the supply of energy in urban and rural areas. Wind power for distributed-scale and off-grid applications has been implemented using turbines of up to 100 kW, commonly denominated SWTs and, to a similar extent, turbines in the "medium" size range of 101 kW to 1 MW [74]. Recent trends in the wind energy industry show that distributed generation below 10 kW is gaining popularity in rural or isolated areas and in urban environments. The implementation of energy generation policies that allow domestic consumers to sell excess energy into the grid increases the attractiveness of small generation units. Remote areas are another important opportunity for the use of small wind turbines: the penetration of an interconnected transmission system in this type of location must overcome several hurdles, such as geography or economic feasibility. Projections indicate that SWTs are cost-effective alternatives to conventional non-renewable and centralized generation for applications such as the electrification of rural areas and integration in hybrid systems with solar PV generation [75]. The success of distributed wind generation is bounded by aspects such as electricity generation and transmission costs, restrictions in the required area and the difference in LCoE when compared with other energy sources [30].
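The RRA/ARA comparison described above can be written as a small gust-tracking check. In the Python sketch below, the rotor inertia J and all numeric values are illustrative assumptions, not parameters from [29]:

```python
def required_rotor_acceleration(lambda_opt, radius, wind_accel):
    """RRA = (lambda_opt / R) * du/dt: rotor acceleration needed to hold optimum TSR."""
    return (lambda_opt / radius) * wind_accel

def available_rotor_acceleration(q_aero, inertia):
    """ARA = Q_aero / J: free angular acceleration under gust, no generator torque."""
    return q_aero / inertia

def can_track_gust(lambda_opt, radius, wind_accel, q_aero, inertia):
    """True when RRA < ARA, i.e., the rotor can follow the wind speed change."""
    rra = required_rotor_acceleration(lambda_opt, radius, wind_accel)
    ara = available_rotor_acceleration(q_aero, inertia)
    return rra < ara

# Illustrative SWT: TSR 7, 1.25 m radius, 2 m/s^2 gust, 40 N*m torque, 2 kg*m^2 inertia
tracks = can_track_gust(lambda_opt=7.0, radius=1.25, wind_accel=2.0,
                        q_aero=40.0, inertia=2.0)
```

With these invented numbers the RRA (11.2 rad/s²) stays below the ARA (20 rad/s²), so the rotor can follow the gust; a heavier rotor or a stronger gust flips the comparison.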
The performance of a wind turbine as a whole depends on several variables, including operating conditions such as wind and angular speed. The rotor blades' operation depends on one design element in particular: the airfoil section, which fixes not only the aerodynamic coefficients but also the bending moment of inertia. The recent interest in using wind energy for small-scale applications, e.g., off-grid generation as part of a hybrid system, has driven wind turbine design towards rotor diameters of less than 10 m and nominal powers below 10 kW. At these sizes, a conventional airfoil's behavior is significantly affected by the low Reynolds numbers that characterize the flow around the rotor's blades. This section depicts efforts concerning aerodynamic improvements in SWTs regarding their performance in urban and rural uses.

Low Reynolds Airfoils

Smaller Reynolds numbers characterize the flow conditions on small wind turbine blades; therefore, the down-scaling of a wind turbine rotor for small-scale generation applications requires a careful review of the key aspects of airfoil behavior at low Reynolds numbers and of the work done so far, to close the knowledge gap and support design decisions for the conception of efficient small HAWTs. The National Renewable Energy Laboratory (NREL) has worked on wind-turbine-dedicated airfoils since the early 1990s, designing and studying special-purpose airfoils with the use of numerical tools [76]. Somers [77] presents a design methodology of airfoils for rotors of 20 to 40 m in diameter, one of several publications that report the use of a computational tool based on the potential flow and boundary layer theories. These tools are used to obtain airfoil geometries from a set of specified constraints, e.g., a specified pressure distribution.
For example, the S822 (blade tip) and the S823 (blade root) airfoil sections, designed for stall-regulated rotors of 3 to 10 m in diameter, are obtained with this design methodology [78]. The stated design drivers for this work are a moderate maximum lift coefficient at the blade's tip and a maximized one at the root, low profile drag, and a maximum lift coefficient insensitive to roughness. The results on this set of airfoils report a soft stall behavior, beneficial for stall regulation and for the reduction of oscillatory loads under gusty wind. The variables of interest for the selection of an airfoil describe the geometry of the section, in the form of the relative thickness (t/c), and the aerodynamic efficiency, given by the maximum lift-to-drag ratio (L/D)_opt, the corresponding lift coefficient C_l,opt, and the maximum lift coefficient C_l,max. These properties are recurrent in published works; the relevant data have been arranged in Table 5, which also shows the flow regime at which the properties are reported and the specified use for each airfoil. Additional work by Somers [79] presents an improved series of airfoils: the S833 (for use at the mid-span region of the blade), the S834 (intended for the tip region), and the S835 (designed for the root region), all of them designed for rotors with variable-speed/variable-pitch regulation. Besides being intended for quiet operation, these airfoil geometries aim to provide high maximum lift, independence from roughness, low profile drag, and soft stall behavior. Giguère and Selig [80] present a series of four airfoils intended for use at Reynolds numbers between 100,000 and 500,000: SG6040 (root region), SG6041 (primary airfoil), SG6042 (primary airfoil) and SG6043 (primary airfoil).
These airfoils are designed to maintain optimum lift-to-drag ratios at varying operating conditions and are characterized, on the one hand, by high pitching moments resulting from the flat pressure gradient distribution, usually aimed at mitigating flow separation. On the other hand, the presented analysis shows a non-smooth stall behavior, a characteristic that limits the applicability of these airfoils to variable-pitch/variable-speed regulated turbines.

Airfoil Aerodynamic Aspects

The historical perspective presented by Tangler [81] highlights the importance of leading-edge roughness independence in the design of wind turbines. It points out how surface degradation in the leading-edge region negatively affects the performance of several NACA airfoil families used in early wind turbine designs from the 1980s. This work also discusses the phenomenon of laminar separation bubbles in relation to wind turbine performance and points out how, in the worst case, it can reduce lift while increasing drag. Laminar separation bubbles can occur due to high suction peaks in a low Reynolds flow over an airfoil; this explains why the requirement of a shallow pressure gradient on the upper surface of an airfoil is a common design constraint for low Reynolds airfoils. Lissaman [82] discussed this particular separation mechanism at the beginning of the 1980s, several years before the topic was actively taken into account in the design of wind turbine airfoils. The reduction of high suction peaks in airfoils can be achieved with a flatter pressure coefficient distribution over the section's upper (or suction) surface. Geometries such as those of the SG60XX set of airfoils are designed under this premise, resulting in relative thicknesses of 10 to 16%. At this point, the structural aspect takes relevance, as both works presented in [80,81] relate small wind turbine airfoils to small relative thicknesses.
This response to the less strict structural demands that a blade encounters in a small wind turbine opens a new area of discussion, as slender blades can be vulnerable to large deflections, deriving in potential issues such as flutter or blade-tower collisions. The work presented by Selig and McGranahan [83] consists of a careful experimental analysis of six different airfoils that includes the previously mentioned S822 and S834, as well as the E387. Besides giving insight into the nature of laminar separation bubbles through flow visualization, this work uses zig-zag tapes to recreate leading-edge roughness in the experimental campaign. The findings show that leading-edge roughness, as a mechanism triggering turbulent flow transition, can be beneficial at small Reynolds numbers (100,000), most likely by preventing the formation of the laminar separation bubble. At higher Reynolds numbers (up to 500,000), the increase in friction drag can outweigh any possible benefit derived from reducing the pressure drag.

Numerical Approaches for Aerodynamic Assessment

It is worth mentioning that extensive and rigorous experimental studies such as the one presented in [83] are not common. Except for a series of publications containing experimental airfoil characteristics [84][85][86], most of the works on airfoils for small wind turbines are focused on a single airfoil family or a reduced group of airfoils, or rely on numerical tools for predicting airfoil characteristics. Henriques et al. [87] present a work in which an airfoil is designed by prescribing not only the load distribution but also the thickness distribution along the chord length of the section. In this case, the airfoil geometry is determined iteratively using panel-method software. The resulting airfoil is analyzed through computational studies at Reynolds numbers between 300,000 and 1,000,000 and with an experimental test at a Reynolds number of 60,000.
Most numerical tools for airfoil design via iterative or optimization techniques use software based on potential flow theory. For instance, Kim et al. [88] implement an airfoil design aiming for low noise emissions and optimal aerodynamics using XFOIL [89], a well-known software based on panel methods. Natarajan et al. [90] use XFOIL to study the aerodynamic characteristics of an airfoil at Reynolds numbers below 250,000. However, this kind of tool must be used carefully in low Reynolds applications. Based on potential flow theory, the original formulation of a panel method does not account for viscous effects. The approach to airfoil analysis with panel methods thus introduces three sources of uncertainty for wind turbine analysis at low Reynolds numbers: (1) under-prediction of drag, (2) over-prediction of the maximum lift, and (3) inaccurate representation of the post-stall behavior. The disadvantages of panel-method models for predicting airfoil characteristics at low Reynolds numbers have been addressed by discussing current aspects of wind tunnel testing for wind turbines and their components, as shown by Van Treuren [91]. A different approach is taken by Grasso [92,93], who presents an optimization work with gradient-based methods and a hybrid approach that uses a genetic algorithm along with a gradient method. The latter author's work favors numerical analysis as a more feasible option for optimization tasks, which are characterized by the inclusion of aerodynamic and structural constraints in what is called Multidisciplinary Design Optimization (MDO). In this sense, an increasing level of sophistication in the modeling component of current optimization works can be observed. A modified version of XFOIL with an improved description of maximum lift and post-stall behavior is used in [92,93]. Benim et al.
[94] use a full RANS solver along with a two-equation turbulence model as part of an airfoil optimization problem, aiming to maximize torque generation while ensuring smooth operation. The work presented by Ram et al. [95] consists of implementing a genetic algorithm for the optimization of a small wind turbine airfoil. The optimization aims to maximize lift while minimizing drag, including a bump in the leading-edge region to force turbulent transition and recreate the effects of leading-edge roughness. The resulting airfoil is reported to achieve higher lift-to-drag ratios than the SG6043 section. Wata et al. [96] present an optimized airfoil based on the SG6043 section. A new geometry is generated by geometrically combining the baseline airfoil with other airfoils used in small wind turbines, which is then analyzed with XFOIL at low Reynolds numbers. Singh et al. [97] propose the design of an airfoil considering the low Reynolds number effects mentioned above and suggest an airfoil shape with a flat upper surface that minimizes the chances of flow separation. Avoiding laminar separation bubbles seems to be a common aim in many recent works on airfoil design for small and medium-scale wind turbines, as is the case of Hall [98]. The analysis of turbulent effects is a complex procedure that is often addressed with special care; this is the case of Chillon et al. [99] and their numerical analysis of vortex generators (VG) for wind turbine blades. The authors use source term modeling for the analysis of VGs in a RANS environment and a two-equation turbulence model (k-ω SST) for flow prediction at angles of attack below 12°. Due to the physics of detached flows around airfoils and the vanes of VGs, the authors use a detached eddy simulation (DES) analysis for angles between 12° and 20°, which corresponds precisely to the stall range.
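Optimization loops like the one of Ram et al. [95], where a genetic algorithm searches airfoil shape parameters for a high lift-to-drag ratio, can be illustrated with a minimal sketch. The two-parameter "polar" below is a hypothetical stand-in for a real aerodynamic evaluation (which in practice would come from XFOIL or a wind tunnel); all coefficients and bounds are illustrative assumptions.

```python
import random

# Toy polar model: lift and drag as functions of two shape parameters
# (camber m and relative thickness t). This is NOT a real aerodynamic model;
# it is a hypothetical surrogate so the optimization loop can be illustrated.
def lift_to_drag(m, t):
    cl = 0.2 + 8.0 * m - 2.0 * (m - 0.04) ** 2          # lift grows with camber
    cd = 0.008 + 0.5 * t ** 2 + 4.0 * (m - 0.03) ** 2   # drag grows with thickness/camber
    return cl / cd

def evolve(pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    # individuals: (camber, thickness), bounded to plausible ranges
    pop = [(rng.uniform(0.0, 0.08), rng.uniform(0.06, 0.18)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda ind: lift_to_drag(*ind), reverse=True)
        elite = pop[: pop_size // 3]          # keep the best third
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            m = 0.5 * (a[0] + b[0]) + rng.gauss(0, 0.002)   # crossover + mutation
            t = 0.5 * (a[1] + b[1]) + rng.gauss(0, 0.004)
            children.append((min(max(m, 0.0), 0.08), min(max(t, 0.06), 0.18)))
        pop = elite + children
    best = max(pop, key=lambda ind: lift_to_drag(*ind))
    return best, lift_to_drag(*best)

best, ld = evolve()
```

In a real design study the surrogate would be replaced by a viscous-corrected polar evaluation, and structural constraints (minimum thickness, for instance) would be added to the fitness function, as done in the MDO works cited above.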
Manufacturing Procedures

About 20% of a wind turbine's cost comes from the manufacturing of its blades [100]. Therefore, the most common material used in manufacturing SWTs is timber, mainly due to its low cost and suitable properties. These turbines are mostly fabricated by carving or machining blades from solid blocks, since timber-laminate composites are costly. Several efforts have been reported to provide easily manufacturable SWTs that enable sustainable electrification of rural communities. This manufacturability requires simple manufacturing tools and low-cost procedures that can be performed without highly skilled personnel. Melendez-Vega et al. [101] reported the design of a SWT based on the carving of a standard tube made of polyvinyl chloride (PVC). The authors aimed to generate a low-cost, lightweight design for residential use in rural locations. Latoufis et al. [102] proposed a SWT blade made of wood with an estimated cost of 650 EUR for a 2.4 m diameter rotor. Pourrajabian et al. [103] investigated four timber species (i.e., alder, ash, beech, and hornbeam) for use in small blades. The authors included the design and optimization of solid and hollow blades using genetic algorithms. Other alternatives include the use of metal in windmill blades, which are often fabricated by rolling galvanized steel [104]. Latoufis et al. [105] investigated the effect of leading-edge erosion on the power performance and acoustic noise emissions of locally manufactured SWTs. Eroded wind turbine blades were found to increase acoustic emissions by 10% over a range of wind speeds starting from 4 m/s, while the power reduction can be up to 23.7%.

Composite Reinforced Materials

However, when the blade length increases (to approximately more than 1.5 m), it is challenging to obtain knot-free planks, limiting the use of timber. Here is where composite laminates start to play an essential role by providing an alternative with higher specific properties to reduce inertia [103].
The most common composite materials employed include glass fiber and carbon fiber as reinforcements and polymers (resins) such as vinylester, polyester, and epoxy. Epoxy resins stand out for their longer shelf life and higher fatigue resistance; however, their main drawback is ultraviolet degradation, which implies the need for coatings [106]. Where composite materials are utilized, many of the issues related to the manufacturing of large wind turbines apply to the small-scale case. The standard method for manufacturing wind turbine blades from composite materials consists of molds machined from thin templates, which are spaced along the span with the gaps filled with the composite system. However, in SWT blades, dimensional accuracy is more critical since an optimum aerodynamic design needs to compensate for the small diameter [106]. Generally speaking, the first composite blades were manufactured by hand lay-up in open molds; however, the technology evolved towards more advanced techniques. In most cases, wind turbine blades are fabricated as two shells in conjunction with a spar or internal webs that are bonded together [107]. As stated by Clausen et al. [104], high-performance wind turbine blades made of composite materials require variations in chord and pitch along the blade, so pultruded profiles are not suitable. The most common manufacturing technique for SWTs, once the mold is fabricated, is Resin Film Infusion (RFI) [107]. RFI includes Vacuum Assisted Resin Transfer Molding (VARTM), also known as Vacuum Injected Molding (VIM), and Resin Transfer Molding (RTM). Vacuum infusion is not suitable for mass production since it is laborious and requires meticulous work. However, this method has proven low-cost and ideal for producing a small number of blades. Besides, it minimizes tooling costs and decreases development time [108].
For high-volume production, RTM appears to be a more suitable technique, together with variations such as Light RTM (LRTM) [106]. Hutchinson et al. [100] demonstrated that using LRTM instead of VARTM makes it possible to reduce cost by 3% and improve dimensional accuracy by 5.5%. Other reported improvements were the reduction of resin wastage, infusion time, and void formation; the latter led to an increase in the composite's mechanical performance. Several experimental and numerical works deal with void formation, aiming to predict and prevent its appearance [109][110][111][112][113]. The structural design procedure of a low-speed, horizontal-axis, bio-inspired wind turbine blade made of carbon/epoxy is presented in [114]. The methodology included the mechanical characterization of the carbon fiber composite material, CFD evaluation of the aerodynamic pressure distribution over the blade, and Fluid-Structure Interaction (FSI) simulations to find a configuration that balances aerodynamic and dynamic inertial loads, ensuring an almost undeformed geometry during the wind turbine's operation. The authors then designed a manufacturing process based on Vacuum Assisted Resin Injection (VARI) for the bio-inspired SWT [115]. The authors performed a study to analyze the resin flow in the manufacturing process, simulating different injection strategies. VARI offers advantages over RTM such as lower tooling cost, lower injection pressures, and reduced volatile emissions. The fabrication of blades that mimic some type of biological system is gaining interest in the scientific community. Kaminski et al. [116] developed a bio-inspired gravo-aeroelastic scaling method to structurally scale wind turbine blades from large wind turbines to 1/100-th scaled models. The scaled models can be low-cost and lightweight while maintaining adequate stiffness; they were fabricated by additive manufacturing inspired by bone growth.
Similarly, Ikeda et al. [117] designed and fabricated a SWT with bird-inspired flexed wing morphology, demonstrating that the proposed blades outperformed conventional designs by 8.1% in a proposed Robustness Index. A prototype of the blade was fabricated for validation purposes with a 2-mm-thick hollow structure made of CFRP. Other examples of bio-inspiration can be seen in [118][119][120].

Emerging Manufacturing Techniques

New advances led by the aerospace industry have allowed Automated Tape Lay-up (ATL) and Automated Fiber Placement (FP) techniques to be considered for wind turbine blade manufacturing, aiming to reduce costs and ease production. However, as much larger thicknesses and dimensions are expected in wind turbine blades compared with aircraft composites, some challenges need to be overcome before a broad implementation of such promising alternatives [107]. According to Watson et al. [121], future emerging manufacturing technologies in the wind power sector must focus on reducing costs and manufacturing tolerances. These include fabric-based materials and additive manufacturing for both molds and blades. Additive manufacturing is promising, particularly in the case of small wind turbines, since it makes it possible to manufacture complex blade shapes without requiring expensive molds [122]. Several studies have been carried out within this context in recent years. Poole and Phillips [123] used additive manufacturing to fabricate wind turbine blades of Polylactic Acid (PLA). The authors tested several reinforcement methods, i.e., pour-filled, short-fiber-infused, and pultruded-rod-reinforced, concluding that pultruded rod reinforcement was the most suitable. Chaudhary and Prakash [124] fabricated a 0.24-m-long blade using additive manufacturing with Acrylonitrile Butadiene Styrene (ABS). However, the drawback of this technique is related to the final product's strength and stiffness.
Additionally, the use of these additive manufacturing materials raises a concern regarding fatigue resistance, since they have not yet been certified for wind turbine blades under an existing standard. The solution to this issue may be to use reinforcements for the polymers used as inks. Rahimizadeh et al. [125] proposed a systematic scheme based on grinding and sieving to recycle the constituents of scrap blades and reuse them in a Fused Filament Fabrication (FFF) process, improving the mechanical performance of additively manufactured components. Recently, the discussion of the paradox of utilizing petroleum-based materials in the manufacture of wind turbines intended to be sustainable technologies has led to exploring other alternatives. These include the investigation of bio-based resins and bamboo [103]. Bamboo-based composites, in particular, have demonstrated suitability by offering higher strength and stiffness than birch-constructed laminates [104]. Shah et al. [126] showed that flax is a potential structural replacement for traditional E-glass fiber in small wind turbines. The flax blades were 10% lighter than blades made of glass fiber composites and offered manufacturing advantages such as the absence of itching and inhalation hazards during handling. Table 6 summarizes the reported manufactured small wind turbines, indicating the material and manufacturing process where reported.

Control Systems Approaches

This section presents several works related to both active and passive blade and rotor control systems. As throughout this review, the studies shown here are limited to SWTs and do not include control techniques and technologies concerning the generator. Instead, the focus is on HAWT blades, especially pitch, yaw, and stall control techniques. An innovative blade design is presented by Xie et al.
[130]; this blade's outer section can be folded out of the rotor's plane to regulate the pitch of the blade and, thus, control the energy conversion efficiency. In a later work by the same authors, this wind turbine was built and tested in a wind tunnel; the results showed that the maximum power coefficient could be reduced by up to 82.8% by fold control [131]. Hatami and Moetakef-Imani [132] present an improvement for small-scale HAWT pitch control achieved by implementing a Self-Tuning Regulator (STR). The resulting pitch control regulates rotor speed fluctuations above the rated wind speed more effectively, since it can adjust the controller gains in real time. The wind turbine was modeled with FAST; this model estimates the wind turbine parameters, and the obtained information is used to adjust the gains of a PID controller for the pitch of the blades. The work presented by Rocha et al. [133] demonstrated that pitch control has a significant effect on the performance of urban-installed wind turbines. BEM theory was used to model a HAWT with a fixed TSR. Analyses of variance over different blade pitch angles demonstrated that a blade pitch control system could be an effective method for improving the wind turbine's performance in urban conditions. A comparison between two different control systems and their influence on flicker emissions, voltage fluctuations, and mechanical loads is presented by Mohammadi et al. [134]. A yaw-controlled 10 kW HAWT and a stall-controlled 10 kW HAWT were simulated with electromechanical models using different software. It was concluded that the stall-controlled wind turbine provided better results in terms of flicker emissions, voltage fluctuations, and mechanical loads than the yaw-controlled one. To better accommodate the nonlinearities of wind turbine systems, Civelek [135] proposes a fuzzy logic controller for blade pitch.
This controller's coefficients are optimized with an advanced genetic algorithm, resulting in better output power stability. It should be noted that folding the blade also reduces the rotor's diameter and, thus, the amount of energy available for exploitation. Khaled et al. [136] presented a study on how the performance of a small-scale HAWT is affected by the length and cant angle of a winglet. Different winglet designs, lengths, and cant angles were optimized using an artificial neural network, while CFD simulations were performed to measure the influence of said winglets on the C_p and thrust force of the wind turbine. The best results were obtained for a winglet length of 6.32% of the rotor radius and a cant angle of 48.3°. Venkaiah and Sarkar [137] developed a fuzzy feedforward PID pitch controller for a HAWT. BEM theory was used to estimate the aerodynamic load acting on the blade and to determine the pitch angle for maximum power capture. An electrohydraulic actuator is driven by the proposed fuzzy feedforward PID controller, which achieved performance indices of 0.08606, 0.08849, and 0.09809 with normal, high, and very high leakage, respectively. Siavash et al. [138] reported a study where a small wind turbine was equipped with a controllable nozzle-diffuser duct surrounding the rotor; the mechanism controls the speed of the flow and the drag forces acting on the turbine structure. The duct consists of a fixed ring and a diffuser that can rotate relative to each other. This experimental turbine was tested in a low-speed wind tunnel with the duct in different configurations. The results showed that the controllable nozzle-diffuser augments the power output by up to 50% and the rotor speed by up to 61%. Plasma actuators are devices that can locally ionize the air and thus alter the speed and direction of the surrounding airflow.
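The speed-regulating pitch loops discussed above (the STR-tuned PID of [132] and the fuzzy feedforward PID of [137]) share a common structure: above rated wind speed, a controller acts on the rotor speed error to command blade pitch and shed excess aerodynamic torque. A minimal sketch follows, with an assumed one-state rotor model and illustrative gains that are not taken from any of the cited works.

```python
# Minimal sketch of PID-based pitch regulation above rated wind speed.
# The torque law, inertia, and gains below are illustrative assumptions.

def aero_torque(wind, pitch):
    # Hypothetical torque law: torque grows with wind^2 and is shed by pitching.
    return 40.0 * wind ** 2 * max(0.0, 1.0 - 0.05 * pitch)

def simulate(wind=14.0, omega_rated=30.0, dt=0.01, t_end=20.0,
             kp=0.8, ki=0.4, kd=0.05):
    J = 50.0                          # rotor inertia (assumed)
    q_load = aero_torque(11.0, 0.0)   # generator torque held at its rated value
    omega, pitch, integ, prev_err = 25.0, 0.0, 0.0, 0.0
    t = 0.0
    while t < t_end:
        err = omega - omega_rated     # positive error -> pitch towards feather
        integ += err * dt
        deriv = (err - prev_err) / dt
        pitch = min(max(kp * err + ki * integ + kd * deriv, 0.0), 20.0)
        prev_err = err
        omega += (aero_torque(wind, pitch) - q_load) / J * dt
        t += dt
    return omega, pitch

omega, pitch = simulate()
```

With integral action, the rotor speed settles at the rated value while the pitch finds the angle that balances aerodynamic and generator torque; an STR or fuzzy layer, as in the cited works, would adapt kp, ki, and kd online instead of keeping them fixed.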
In the work reported by Jukes [139], a plasma actuator control system based on a Surface Dielectric Barrier Discharge (SDBD) was implemented on a small-scale HAWT. The study determined that the flow separates from the blade's suction surface, radially outwards from the blade root, as the TSR is reduced. Placing plasma actuators in different areas of SWT blades allows energizing the boundary layer, reducing flow separation in the middle-to-tip sections of the blade. This can reduce the torque loss caused by aerodynamic drag by up to 24%, proving the feasibility of smart rotor control based on plasma actuators. A novel mechanism for improving rotor energy capture and load performance consists of the use of rotor blades with bend-twist coupling. This technology has been explored numerically for inducing a passive torsional response in the blades of the rotor in different applications; for instance, the work of Maheri et al. [140] reports increments in AEP of up to 13%, while Nicholls-Lee and Turnock [141] show increments in AEP of 2.65% for a rotor with variable pitch and bend-twist coupling. Similar numerical works have been performed on multi-megawatt wind turbines, such as Barr and Jaworski [142], which reports a 14% increase in AEP for a 5 MW rotor. Passive bend-twist coupling incorporates several technologies for blade torsional actuation, including composite material anisotropy, with and without tow steering, and geometrical approaches based on swept blade designs, as demonstrated in the aeroelastic and structural design works of Capuzzi et al. [143][144][145]. The use of these passive technologies is not limited to direct power increase; studies such as that of Zahle et al. [146] have evidenced their effectiveness for load alleviation, using numerical simulation in large rotors and combining active blade pitching for energy capture in the below-rated range with passive bend-twist coupling for load alleviation in the above-rated range.
Given the complexity of modeling for this kind of design problem, several authors have opted for novel optimization techniques. Restrepo-Montoya et al. [147] propose a metamodel-based methodology to design laminates with a bend-twist coupling effect by means of genetic algorithms (GA) and artificial neural networks (ANN) integrated with a finite element model (FEM), capable of defining the stacking sequence that a laminate needs to reach a certain twist angle when subjected to a bending load. This kind of strategy resembles that of Herath et al. [148], who use a GA approach to optimize the composite layup in a morphing blade with bend-twist coupling induced by differential stiffness elements. In other cases, GAs are applied to the joint design of rotor geometry and structures, taking global parameters such as AEP or CoE as the cost function of the optimization exercise [149][150][151]. Although the focus of this survey is on HAWTs, there are several works on blade control of VAWTs whose principles may potentially be applied to HAWTs. In Bianchini et al. [152], a BEM model was used to explore different pitch control strategies for VAWT rotors. The BEM model was successfully validated via CFD simulations, and it was determined that an optimized pitch could result in better exploitation of the higher-lift parts of the blade's polar during rotation. Three pitch optimization strategies were presented, and it was concluded that wind speed-dependent strategies to optimize the pitch angle produce a greater annual energy output, but the increased complexity of those systems might not be compensated. The authors also concluded that a fixed-pitch optimization approach could result in a cost-free improvement of the VAWT. An intelligent pitch angle control for an H-VAWT is presented by Abdalrahman et al. [153]. The C_p of the turbine is calculated with data obtained from CFD simulations of an H-VAWT blade at different TSRs.
The results of these simulations were used to build an aerodynamic model of the rotor. This model was then used to create an intelligent blade pitch controller for each blade's pitch angle using an artificial neural network. The obtained controller resulted in superior power output for the VAWT when compared with a standard PID controller. Sagharichi et al. [154] presented a study in which four pitch functions with different amplitudes were used to evaluate the relationship between the pitch angle and the self-starting performance of both fixed-pitch and variable-pitch H-VAWTs. Among the cases, the amplitude of case 1 is reported to have achieved the shortest starting time and a 34% increase in power.

Hybridization and Integration

One of the main disadvantages of wind energy is its variability in time due to the intermittent nature of the wind speed; therefore, the delivered energy also shows the same behavior. For this reason, wind power is classified as a variable renewable energy source (VRE) [155]. There are two ways to mitigate the effects of resource variability and guarantee energy security: (1) storage systems [156] and (2) electrical interconnection systems [157]. However, insufficient storage and the absence of interconnection in some areas increase the uncertainty of whether the installed capacity can adequately supply the energy demand [158]. By composing a hybrid system that integrates wind generation and other renewable sources, including a storage system such as a battery bank [15], it is possible to meet the energy demand of a region [159]. Koutroulis et al. [160] found that systems composed of solar photovoltaic (PV) panels, wind turbines, and battery banks (BB) meet the demand of sectors unconnected to transmission networks, even achieving lower capital and maintenance costs than systems that have only PV or WT.
Additionally, it is possible to facilitate the integration and complementarity of VREs through the interconnection of spatially distributed generators or the use of complementary generators, operations that apply demand monitoring or demand response, oversizing of storage systems with hydrogen, the use of electric vehicles as a storage concept [161], generation prediction [33], or adaptive droop control [162]. The concept of complementarity between renewable resources is of evident relevance in this context, mainly as an indicator of the energy security of a particular region. Renewable resources such as sunlight and wind depend on the location and the time at which the energy system is operated. Factors such as height above sea level, temperature, humidity, topography, or the presence of clouds directly affect VRE generation. This condition does not depend on whether the resource is renewable or not; the accessibility of fossil fuels in the region or its surroundings is also evaluated. However, the wind resource is present in all regions to a lesser or greater extent, making it essential for complementarity in electricity generation [158]. Spatial complementarity occurs when two or more energy sources coincide in a specific region, while temporal complementarity refers to periods in which the availability of the sources is complementary in the time domain. In this way, the concept of spatio-temporal complementarity arises, where multiple energy sources are simultaneous both in time and space [33]. Jurasz et al. [33] presented several correlations and indices to quantify energy complementarity. Among these correlations are Pearson's, Kendall's, and Spearman's rank correlation coefficients, canonical correlation analysis (CCA), and cross-correlation. Among the indices, the complementarity index of wind and solar radiation (CIWS) and an index that integrates a geographic regression model with principal component analysis (PCA) stand out.
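As an illustration of the temporal complementarity metrics listed above, Pearson's correlation coefficient between hourly wind and solar availability can be computed directly; a negative coefficient indicates that when one resource is low the other tends to be high. The diurnal profiles below are synthetic assumptions (daylight solar, night-peaking wind), not measured data.

```python
import math

# Pearson's correlation coefficient, one of the complementarity indicators
# discussed by Jurasz et al. [33].
def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

hours = range(24)
# Assumed profiles: solar as a daylight bell between 06:00 and 18:00,
# wind stronger at night (peaking around 02:00).
solar = [max(0.0, math.sin(math.pi * (h - 6) / 12)) for h in hours]
wind = [0.6 + 0.4 * math.cos(math.pi * (h - 2) / 12) for h in hours]

r = pearson(solar, wind)
# r < 0 here indicates temporal complementarity between the two resources.
```

With real data, the same coefficient evaluated over daily or monthly windows, or replaced by Kendall's or Spearman's rank versions, yields the complementarity maps used for siting hybrid systems.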
Mahesh and Sandhu [163] presented a summary of the complementarity of solar and wind resources; the authors dealt with the mathematical modeling, the system constraints when integrating battery banks, and the reliability of hybrid renewable energy systems. Shivarama Krishna and Sathish Kumar [164] affirmed that hybrid systems are the best option for building modern power grids, including economic, environmental, and social benefits. Chauhan and Saini [165] presented a summary of the possible configurations for the integration of renewable energies, storage technology options, and the mathematical modeling of wind systems, micro-hydroelectric systems, solar PV systems, and bioenergy systems, in addition to a summary of the numerical methods for optimizing the sizing of hybrid renewable energy systems and of control systems for energy management. Siddaiah and Saini [166] also presented some optimization techniques and identified the renewable sources of each study, the objective function of each algorithm, and whether the search approach was economic, environmental, social, or technical. Tezer et al. [167] evaluated optimization methods for hybrid systems and some objective functions, with emphasis on some hybrid optimization models. Kajela and Manshahia [168] summarized the types of renewable energies, their advantages and disadvantages, the importance of hybrid systems, some typical hybrid renewable energy systems (HRES), the modeling of objective functions and their restrictions, and the computational tools for sizing optimization. One of the challenges in the integration of wind and solar systems is related to voltage compatibility, where the photovoltaic solar system responds faster than the wind system [169]. Accordingly, studies have addressed this integration from the control of electrical variables. Chaib et al. [169] presented a control system that coupled the wind and solar systems through two switches that converge on a DC bus.
With the same aim, Huu [162] developed an adaptive control for a storage system in DC distribution networks. The authors introduced DC distribution micro-grids, the conventional droop control method, the battery storage system model, and the adaptive droop method. Sinha and Chandel [170] summarized the trends in optimization methods for hybrid systems, focused on solar and wind technologies; the authors also presented a comparative table showing the strengths and weaknesses of iterative approaches, graphic models, probabilistic approximations, linear programming, and trade-off methods. Khare et al. [171] also summarized systems composed of solar and wind technologies and focused on reviewing evolutionary optimization techniques, among which genetic algorithms (GA) and particle swarm optimization (PSO) stand out [7]. Al-falahi et al. [172] presented the developments in optimization methodologies for sizing HRES, mainly focused on the integration of wind and solar systems, and described the comparison between the different objective functions and restrictions of each method. Several authors highlight the low cost of PV systems within HRES when the hybrid sizing also includes wind energy generation. Maleki and Pourfayaz [173] carried out the sizing of an autonomous hybrid solar-photovoltaic/wind/battery-bank system using MATLAB and various evolutionary algorithms such as PSO, tabu search (TS), simulated annealing (SA), improved particle swarm optimization (IPSO), improved harmony search (IHS), a hybrid between IHS and SA, and artificial bee swarm optimization (ABSO). The best results obtained by the authors showed that PV/BB systems reached lower total annual cost (TAC) values than WT/BB systems. On the other hand, Torres-Madroñero et al.
[7] presented the sizing of a PV/WT/BB hybrid system using a Python computational tool that implemented GA and PSO methods with single- and multi-objective functions involving TAC and the levelized cost of energy (LCoE). The authors concluded that when the optimization method searched for the best configuration under an economic criterion (TAC or LCoE), the HRES consisted of 100% photovoltaic generation, obtaining LCoE values between 0.160 and 0.287 USD/kWh, depending on the energy demand case. In the same way, Mayer et al. [174] studied a novel method for sizing a hybrid system based on the environmental impact throughout the life cycle of the equipment involved in the HRES. The authors employed a multi-objective function combining the mentioned environmental factor and the net present cost (NPC), solved by the GA method. The HRES configuration took into account photovoltaic solar energy, wind energy, a solar heat collector, a heat pump, heat storage, a battery bank, and heat insulation thickness. The study concluded that as the configuration moves away from the economic optimum to reduce the environmental impact, the installed photovoltaic capacity increases, this being the most profitable way to reduce emissions.

Assessment of Currently Available SWTs

Some of the most relevant concepts for aerodynamics, wind resource, manufacturing, and control strategies discussed in previous sections are now analyzed from an energy generation and energy cost point of view. This analysis is based on three indicators: the AEP, the capacity factor (CF), and the CoE. The definition in Equation (7) is adopted for the calculation of the AEP. The wind resources under analysis have been defined as hypothetical cases by assuming Weibull probability distribution functions following Equation (4).
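The AEP calculation just outlined, a power curve weighted by a Weibull wind speed distribution and integrated over the 8760 hours of a year, can be sketched as follows. Equations (4) and (7) are not reproduced in this excerpt, so the standard Weibull density is used; the power-curve points and Weibull parameters are illustrative assumptions, not the certified data of any model in Table 7.

```python
import math

# Weibull probability density with shape k and scale c (m/s).
def weibull_pdf(u, k, c):
    return (k / c) * (u / c) ** (k - 1) * math.exp(-((u / c) ** k))

# Illustrative power curve: cubic ramp between cut-in and rated wind speed,
# constant at rated power up to cut-out (all parameters assumed).
def power(u, p_rated=5000.0, u_in=3.0, u_rated=11.0, u_out=25.0):
    if u < u_in or u > u_out:
        return 0.0
    if u < u_rated:
        return p_rated * (u ** 3 - u_in ** 3) / (u_rated ** 3 - u_in ** 3)
    return p_rated

# AEP in kWh: expected power under the Weibull pdf, times 8760 hours.
def aep_kwh(k, c, du=0.1):
    hours = 8760.0
    expected_w = 0.0
    u = 0.0
    while u < 30.0:
        expected_w += power(u) * weibull_pdf(u, k, c) * du
        u += du
    return expected_w * hours / 1000.0

aep = aep_kwh(k=2.0, c=7.0)
```

For this assumed 5 kW machine and a k = 2, c = 7 m/s resource, the sketch yields an AEP on the order of 11-12 MWh per year; changing k and c reproduces the sensitivity to the wind resource discussed in the Results below.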
The difference in mean wind speed and shape factor for each distribution is expected to describe wind conditions at sites of different characteristics; however, these do not correspond to actual measured data, and this should be considered when interpreting the subsequent results. Likewise, the power curves for the considered wind turbine models are taken as provided by certification reports, although the real data are naturally subject to variations that might affect the outcome of the calculations. The CF is the ratio between the annual energy output and the annual energy output at rated conditions, defined in [175][176][177] as:

$$CF = \frac{AEP}{P_{rated} \cdot 8760\,\mathrm{h}}$$

The model for the CoE is based on the annual fixed cost (C_fix) and the operation and maintenance cost (C_M&O), both of them estimated on a yearly basis. The final definition is given by:

$$CoE = \frac{C_{fix} + C_{M\&O}}{AEP}$$

The calculation of the fixed cost is based on a simple payback time analysis, assuming a 6% interest rate p.a. and a 10-year payback period; this results in a 13.6% annuity for the yearly payment of the initial investment. The initial investment represents the installed costs, assumed as 7500 USD/kW; similarly, the value of C_M&O is assumed as 40 USD/kW. All of the data on wind turbine cost is directly adopted from official sources [74,178,179], and it must be mentioned that the availability of information on actual project costs, as seen by manufacturers and operators, is scarce and varies from year to year. In this sense, the results of the following calculations are based on averaged values for the sake of illustration of typical wind turbine costs.

Results

Six different wind turbine models are considered, and their characteristics are shown in Table 7, including the manufacturer, model, rotor diameter D, swept area A, rated power P_rated, rated wind speed u_rated, cut-in speed u_in, cut-out speed u_out, and control type (yaw, pitch, or brake control).
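The cost and performance indicators above can be reproduced directly from their stated assumptions (6% interest over a 10-year payback, 7500 USD/kW installed cost, 40 USD/kW yearly O&M); the AEP and rated power in the example call are placeholder values, not data from Table 7.

```python
# Annuity factor for the yearly payment of the initial investment;
# with the stated 6% rate and 10 years this reproduces the quoted 13.6%.
def annuity_factor(rate=0.06, years=10):
    return rate * (1 + rate) ** years / ((1 + rate) ** years - 1)

# Capacity factor: annual energy output over the rated-condition output.
def capacity_factor(aep_kwh, p_rated_kw):
    return aep_kwh / (p_rated_kw * 8760.0)

# Simple CoE in USD/kWh: (annual fixed cost + annual O&M cost) / AEP.
def coe_usd_per_kwh(p_rated_kw, aep_kwh,
                    installed_usd_per_kw=7500.0, om_usd_per_kw=40.0):
    c_fix = annuity_factor() * installed_usd_per_kw * p_rated_kw
    c_om = om_usd_per_kw * p_rated_kw
    return (c_fix + c_om) / aep_kwh

a = annuity_factor()
cf = capacity_factor(aep_kwh=11000.0, p_rated_kw=5.0)
coe = coe_usd_per_kwh(p_rated_kw=5.0, aep_kwh=11000.0)
```

For a hypothetical 5 kW machine producing 11,000 kWh per year (capacity factor about 0.25), this model gives a CoE of roughly 0.48 USD/kWh; the figures in the Results below follow from the same computation applied to each model and wind resource case.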
Model 1, for example, corresponds to a machine with a wooden rotor aimed at low-cost manufacturing. Model 6 corresponds to a modern SWT with a simplified blade geometry, while Model 3 corresponds to a SWT with a rotor specially designed to endure strong winds. It is expected that the diversity of the selected turbines has a direct impact on the CoE, as each machine's sophistication should affect its capital cost accordingly. One aspect common to all models is that they start rated-power operation at wind speeds around 11 m/s, a typical rated wind speed for places with adequate wind resources. In contrast, the behavior of power in the above-rated wind speed range, as shown in Figure 3, is different for each model. Some turbines reach maximum power above 12 m/s (i.e., Model 2, Model 3, Model 4, and Model 6) or present power drops at high wind speeds, as is the case for Model 4. With a similar intention, five different wind resources are adopted for the CoE calculation. These are based on the Weibull probability distribution and are illustrated in Figure 4. According to Figure 3, the turbines corresponding to Model 1, Model 2, Model 3, and Model 4 are characterized by relatively smaller power ratings, ranging from 530 W for Model 1 to 5.4 kW for Model 4, whereas Model 5 and Model 6 are both situated above the 8 kW threshold in terms of rated power. Model 1 has a significantly smaller cost compared to the remaining models, not only because of its size but also because it has been specifically designed as a low-cost solution for rural electrification. The influence of the material and manufacturing techniques is yet to be verified; nevertheless, a significant drop in capital cost is expected when comparing sophisticated blade manufacturing techniques for high-performance composite materials to manual and simpler manufacturing techniques for cheaper materials such as wood.
A wide distribution in velocity magnitude characterizes the wind resources for Case 4 [67] and Case 5 [35]; this kind of resource, typical of coastal, open, and flat areas, is highly variable; as a consequence, the AEP, shown in Figure 5, is consistently better in these cases for each wind turbine model. The corresponding Weibull distributions (see Figure 4) show favorable frequencies for wind speeds in both the below-rated and above-rated wind speed ranges. Both Case 1 [35] and Case 2 [60] reveal a lower performance, most likely associated with the prevalence of wind speeds below the typical cut-in values of the considered wind turbines. A smaller degree of variation is also observed compared to the previously discussed cases, which have significant production in the above-rated range. For Case 1 and Case 2, a typical wind turbine model can be expected to deliver most of the AEP by operating within the below-rated range, that is, at wind speeds between the cut-in and rated values; this is reflected by the corresponding capacity factors, shown in Figure 6. The economic analysis presented here considers capital costs for each SWT, including operation and maintenance costs, without incorporating the machine's lifespan. The simple CoE, in USD/kWh, has been estimated for each Model and Case with the model described at the beginning of the section. An interesting result is the CoE for Model 1 shown in Figure 7, with lower values than any of the other models, regardless of the wind resource. The low costs associated with wooden blade manufacturing and simplified wind turbine construction can be identified as the driving factors for this result.
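The AEP and CF values discussed above come from weighting each power curve by the Weibull distribution of the site; a minimal sketch of that calculation follows. The idealized power curve (cubic ramp from cut-in to rated, flat to cut-out) and the Weibull parameters are assumptions for illustration, not the certified curves of the six models:

```python
import numpy as np

def weibull_pdf(u, k, c):
    # Weibull probability density with shape factor k (> 1) and scale factor c
    return (k / c) * (u / c) ** (k - 1) * np.exp(-((u / c) ** k))

def power_curve(u, p_rated=5400.0, u_in=3.0, u_rated=11.0, u_out=25.0):
    # idealized curve: cubic ramp from cut-in to rated, flat up to cut-out
    p = np.where((u >= u_in) & (u < u_rated),
                 p_rated * (u**3 - u_in**3) / (u_rated**3 - u_in**3), 0.0)
    return np.where((u >= u_rated) & (u <= u_out), p_rated, p)

def aep_and_cf(k, c, p_rated=5400.0):
    # mean power = integral of power curve weighted by the wind speed PDF
    u = np.linspace(0.0, 30.0, 3001)
    du = u[1] - u[0]
    mean_power = float(np.sum(power_curve(u, p_rated) * weibull_pdf(u, k, c)) * du)
    aep = mean_power * 8760.0             # Wh per year
    return aep, aep / (p_rated * 8760.0)  # CF = AEP / AEP at rated conditions
```

A narrow distribution concentrated below cut-in drives `mean_power`, and hence both AEP and CF, toward zero, which is the effect described above for Case 1 and Case 2.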
The results for Model 5, with a nominal power of about 17 kW, show the second-best performance in terms of CoE, particularly when considering adequate wind resources such as those of Case 4 and Case 5; in addition, it must be mentioned that this wind turbine makes use of variable pitch control, resulting in superior rotor conversion efficiency compared with the other models. The wind resource of Case 3 [35] results in Model 5 having a competitive CoE, and this may be attributed to the more widespread distribution of wind frequencies. Model 6 presents a performance similar to that of Model 5, but with slightly higher CoE, as the AEP is reduced by almost half. Such a reduction in energy output can be attributed to the fact that the rotor of Model 6 has blades with fixed pitch and apparently constant twist and chord distributions. Such a simplified rotor geometry can be expected to result in a smaller power output but, as shown in this analysis, the associated reduction in wind turbine cost compensates for it in the cost of electricity. The wind resource of Case 1 has resulted in an interesting set of CoE values, showing the highest variation with respect to the wind turbine model. A similar outcome is observed for Case 2, which indicates that the narrow probability distributions shown earlier in Figure 4 result in a CoE that is highly dependent on the shape of the power curve and on the cut-in wind speed of each particular wind turbine model. The relative similarity in CoE observed for Case 3, Case 4, and Case 5 can be attributed, under the same reasoning, to the wider probability distributions of these three wind resources. The data for Model 4 in Figure 7 shows a difference in the values of CoE, with significantly higher costs for wind resources with narrow probability distributions (Case 1 and Case 2). It must be said that Model 4 is characterized by a relatively high cut-in wind speed and a power reduction in the above-rated range, associated with the stall-regulated blades.
In summary, this analysis reveals that if a low-cost turbine can achieve a modest power output under a set of variable conditions, the low-cost factor can significantly help bring down the CoE, with moderate sensitivity to the wind resource variability. For models produced with more sophisticated manufacturing processes and materials, such as Model 2 to Model 6, a poor match between the power curve and the wind resource inevitably results in lower capacity factors. This can drive the CoE to very high values, rendering the machine disadvantageous at sites with small average wind speeds and little variation.

Conclusions

This review has tested several hypotheses about the current state of the five discussed topics. For wind resource assessment, this review shows that the existing literature focuses on typical wind speed estimation, on the treatment of wind velocity in the IEC standard, and on studies of operation under turbulent environments. Regarding aerodynamics, previous studies have focused on issues that affect rotor performance, airfoil design, and methodologies for flow analysis of rotor aerodynamics. Regarding manufacture, this paper demonstrated that the current aim of many works is to keep the cost as low as possible; in some works, however, the recent tendency is to use advanced materials to improve efficiency. For control systems, this review shows that most published articles report on active control systems for SWTs and that passive control techniques remain mostly unexploited. Finally, this study presents the complementarity of wind technology with other renewable energy generators, introducing the sizing and control of hybrid micro-grid systems. The analytical model presented by Equation (1) is a first approximation for a value of mean wind speed. This can be used to estimate the critical load on blades or other mechanical components.
However, the log-law model should not be used to assess the power generation of a wind turbine, because it carries no time dependence; a final estimation of power output based on it alone would therefore not be the right approach. The design of a wind turbine system requires an energetic study, and the approach given by Equation (7) is ideal for estimating the total energy over one year of operation. Additionally, the most frequent wind speed of the Weibull or Rayleigh distribution can be used to calculate the critical operation load, with the integration of a safety factor under the designer's criteria or following the IEC standard, which presents u_ref in Table 4. However, when the project and design require an accurate estimation of energy and load, the third approach, based on CFD models, is usually employed. Compared with the log-law and statistical models, it demands a higher computational cost and time investment to reach the same aim. Generally, in the studies reported in this work, the standard IEC 61400-2 does not represent the real operating conditions of small wind turbines that work in urban environments. However, the mathematical models given by the standard are useful to designers for conceptual design when SWTs operate in open areas. Moreover, this standard serves as a starting point for implementing correction models for SWTs in urban places. The majority of researchers compare the IEC standard with their own studies involving experimental methods, where wind tunnels and wind turbine models or real measurements during operation are used. The criteria for selecting one of these methodologies are related to the required accuracy and the state of maturity of the project. Some of the studies recognized that the gust effect carries additional energy that wind turbines can extract in urban areas; this is evident in the indicator EEC presented by Emejeamara et al. [73].
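The most frequent wind speed mentioned above is the mode of the Weibull distribution, which has a closed form; a small sketch follows (the shape and scale values are assumed examples):

```python
def weibull_mode(k, c):
    # mode (most frequent wind speed) of a Weibull(k, c) distribution;
    # the formula is valid for shape factors k > 1
    return c * ((k - 1.0) / k) ** (1.0 / k)

# for k = 2 the Weibull reduces to a Rayleigh distribution:
u_mode = weibull_mode(2.0, 8.0)   # ~5.66 m/s for a scale factor of 8 m/s
```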
However, the excess of energy is limited by the wind system's capacity to track or adapt its performance as a function of changes in wind speed, either in direction or in magnitude. For this reason, Battisti et al. [29] presented the reaction capacity of a wind turbine and its relation to the time scales of response, the time scales of wind speed change, the acceleration required by the wind turbine to adapt to the wind speed acceleration, and the maximum acceleration the turbine is allowed to reach. All of these factors [29] converge in the wind tracking index WTI, which is the quotient between ARA and RRA. If WTI is greater than or equal to 1, the turbine will be able to accelerate following the control parameters. If, however, WTI takes values less than 1, the wind turbine will be in a critical operating regime and will not be reactive to wind fluctuations in magnitude and direction. In particular, higher ARA values are obtained for HAWTs at higher speeds; thus, WTI is higher for HAWTs than for VAWTs, and in this way this type of turbine is more reactive and better suited for tracking gusts. In general, small diameters are required to operate in areas that demand a rapid reaction [29]. When implementing conventional airfoils in low Reynolds conditions (which predominate in the operation of small wind turbines), one of the main issues is laminar separation bubbles, which cause early flow transition or separation from the leading edge of the airfoil. This phenomenon decreases lift while increasing drag considerably, which is the least convenient behavior for a wind turbine blade. The operation of wind turbines can also become less efficient due to erosion at the leading edge of the blade; therefore, an airfoil with design properties that are insensitive to leading-edge roughness is more convenient for wind energy applications with low Reynolds flows. Several works impose the constraint of a shallow pressure distribution over the airfoil's upper surface to avoid adverse pressure gradients that result in trailing edge separation.
In this case, the design problem is based on inverse methodologies for determining the airfoil shape that satisfies a prescribed pressure distribution. Although not many works have been published regarding blade and rotor control focused on SWTs, this review shows the great variety of technologies studied in recent years: from simulations of different yaw or pitch controllers, to plasma actuators to control flow separation, to folding blades and shrouded rotors. Many of the reviewed works on rotor control rely on simulation techniques and theories. For these simulations to be accurate for SWT applications, the inputs related to wind conditions are critical, considering the wind resource requirements in the environments where SWTs are usually implemented. This shows the importance of accurate and reliable wind resource assessments and measurements for successfully developing and implementing appropriate control techniques on SWTs. A relatively small number of articles have been published on blade and rotor control systems developed explicitly for SWTs. This highlights an issue with the studies made on SWTs, which tend to scale down the same technologies used for large-scale wind turbines and expect them to work similarly. This ignores the differences between large- and small-scale wind turbines regarding aerodynamic considerations and the particularities of the wind resource in the usual environments where SWTs operate. Regarding operation, feasibility concerns are raised for wind generation systems based on SWTs, both in grid-connected and off-grid operation. In grid-connected applications, the required electrical connections and access to the installation zone represent an economic barrier. Conversely, when the system operates off-grid, instrumentation for wind speed measurement and civil construction foundations must be considered in the project, since these often imply high costs [35].
These costs may make rural and urban applications, both grid-connected and off-grid, unattractive if the generated power is not enough. Among technical challenges of diverse nature, such as noise or performance, certification is a crucial aspect in which progress must be made, considering that it is essential to opt for government incentives. Furthermore, the cost of accredited testing and certification is pointed out as a critical hurdle, given the high price that such processes can reach for a single small wind turbine model. Although PV systems stand out with lower costs than SWTs, the economic study carried out via the CoE in this review shows that, depending on the wind resource and the wind turbine model, the CoE values can be similar to the LCoE values found in previous works. This is the case of the SWT Model 1 with the wind speed profile of Case 5, where it reached a CoE value of 0.23 USD/kWh, which is within the range of LCoE values obtained by Torres-Madroñero et al. [7] (0.160 to 0.287 USD/kWh) for photovoltaic/battery systems. However, it is important to mention that the analytic approach to the CoE given by this review does not include the energy demand in its formulation. Moreover, the literature studies in some cases took the temporal wind speed profile instead of the PDF formulation presented here in Section 8. For accurate formulations and results comparing PV and wind generation, the works mentioned in Section 7 are advised. Reducing the difference between the demand and supply of energy and keeping the cost of electricity at low levels with a minimum impact on the environment are significant concerns in the development of generation technologies. In this sense, innovation has brought down costs and increased the availability of renewable power sources, where wind turbines and solar photovoltaic (PV) panels are the leading technologies for non-conventional renewable electricity production [186].
In particular, SWTs can be split into two market sectors: (1) machines devoted to supplying electricity in remote areas, and (2) grid-connected applications. The first sector concerns the use of SWTs in rural areas, mostly for off-grid consumers, either alone or in conjunction with other types of renewable energy. Here the reduction in capital and operational costs is paramount, since the aim is to provide reliable energy at a price not overwhelmed by the cost of a connection to the grid; additionally, such remote consumers often lack the economic resources to afford high costs. Regarding the second sector, the motivation is to use renewable sources harvesting on-site wind resources to be self-sufficient and, in surplus scenarios, to provide energy to the grid through so-called smart meter technologies [30].
Algebraic Structures of N=(4,4) and N=(8,8) SUSY Sigma Models on Lie Groups and SUSY WZW Models

Algebraic structures of N = (4, 4) and N = (8, 8) supersymmetric (SUSY) two dimensional sigma models on Lie groups (in general) and SUSY Wess-Zumino-Witten (WZW) models (as a special case) are obtained. For SUSY WZW models, these algebraic structures reduce to Lie bialgebraic structures, as in the N = (2, 2) SUSY WZW case, with the difference that there is one 2-cocycle for the N = (4, 4) case and there are two 2-cocycles for the N = (8, 8) case. In general, we show that an N = (8, 8) SUSY structure on a Lie algebra must be constructed from two N = (4, 4) SUSY structures and, in particular, there must be two 2-cocycles for Manin triples (one 2-cocycle for each of the N = (4, 4) structures). Some examples are investigated. In this way, a calculational method for classifying the N = (4, 4) and N = (8, 8) structures on Lie algebras and Lie groups is obtained.

Introduction

Supersymmetric two dimensional nonlinear sigma models play an important role in theoretical and mathematical physics, for example through their numerous string applications. Let us give a short bibliography of this subject. The relation between these theories and the geometry of the target spaces was studied about thirty-five years ago [1]. The biHermitian geometry of the target spaces of the N = 2 extended supersymmetric sigma models was first realized in [2] (see also [3]). Then the extensions to more supersymmetries, N = 4 and N = 8, were investigated [4] (see also [5]). Sigma models with extended supersymmetry can only be defined on a restricted class of target manifolds; more supersymmetry implies more restrictions on these geometries [5]. Extended supersymmetric sigma models on Lie group manifolds and also SUSY WZW models have been studied in [6]. The N = 2 and N = 4 extended superconformal field theories in two dimensions, and their correspondence with Manin triples, were investigated in [7] and [8].
Also, there are some notes about N = 8 superconformal field theory in [7]. The algebraic study of N = (2, 2) SUSY WZW models and of N = (2, 2) SUSY sigma models on Lie groups (algebraic biHermitian structures) was carried out in [9] and [10], respectively. In this paper, we obtain the algebraic structures of N = (4, 4) and N = (8, 8) SUSY sigma models on Lie groups (in general) and the algebraic structures of SUSY WZW models (in particular). The outline of the paper is as follows: in section two, we review the N = (2, 2) SUSY sigma models on Lie groups and their algebraic biHermitian structures [10], as well as SUSY WZW models and their correspondence to Manin triples. In section three, we obtain the algebraic bihypercomplex structures for the N = (4, 4) SUSY sigma models on Lie groups, and especially for the N = (4, 4) SUSY WZW models; we show their correspondence to Lie bialgebras with one 2-cocycle, and at the end of the section we give an example. Finally, in section four, the algebraic structure of the N = (8, 8) two dimensional SUSY sigma models on Lie groups is investigated; we show that for the N = (8, 8) SUSY WZW models this algebraic structure is a Manin triple with two 2-cocycles, and an example is given at the end of the section.

2 N = (2, 2) SUSY sigma models on Lie groups and SUSY WZW models

In this section, to make the paper self-contained, we briefly review the geometric description of the N = (2, 2) SUSY WZW and sigma models on Lie groups [2]-[6] and their algebraic structures [9], [10]. We will use the N = (1, 1) action for the description of the N = (2, 2) model and impose extended supersymmetry on the superfields.
With the knowledge that N supersymmetric sigma models have N supersymmetry generators (Q_i) and N − 1 complex structures (J_i) on a manifold M, so that N = (p, q) SUSY sigma models in two dimensions have p right-handed generators (Q_{i+}) and q left-handed generators (Q_{i−}), the N = (1, 1) SUSY sigma model has one right-handed generator (Q_+) and one left-handed generator (Q_−), and the action on the manifold M is written as follows [2]: such that this action is invariant under the following supersymmetry transformation: where Φ^μ are N = 1 superfields, so that their bosonic parts are the coordinates of the manifold M. Furthermore, the bosonic parts of G_{μν}(Φ) and B_{μν}(Φ) are the metric and antisymmetric tensors on M, respectively. Note that in the above relations Q_± and D_± are the supersymmetry generators and superderivatives, respectively, and ǫ_± are the parameters of the supersymmetry transformations. The above action is also invariant under the following extended supersymmetry transformation [2]: where J^μ_{±ν} ∈ TM ⊗ T*M. The consequences of the invariance of the action (1) under the above transformations are the following conditions on J^ρ_{±σ} [2]: where the extended connections Γ^{±μ}_{ρσ} have the following forms: such that and Γ^μ_{ρν} are Christoffel symbols. In order to have a closed supersymmetry algebra we must impose the integrability condition on the complex structures (J_±) (4) as follows [2]: In this manner, the N = (2, 2) SUSY structure of the sigma model on M is equivalent to the existence of a biHermitian complex structure (J_±) on M (4), (5), (9) such that their covariant derivatives with respect to the extended connection Γ^{±μ}_{ρν} vanish (6). If M is a Lie group G, then in the non-coordinate bases we have: where G_{AB} is the ad-invariant nondegenerate metric and H_{ABC} is an antisymmetric tensor on the Lie algebra g of the Lie group G.
Note that L^μ_A (R^μ_A) and L_μ^A (R_μ^A) are the components of the left (right) invariant one-forms and their inverses on the Lie group G; f_{AB}{}^C are the structure constants of the Lie algebra g, and J^B_A is an algebraic map J : g −→ g, i.e., an algebraic complex structure. Now, using the above relations and the following relations for the covariant derivative of the left invariant vielbein [11]: we have the following algebraic relations for the biHermitian geometry of the N = (2, 2) SUSY sigma models [10]: where (χ_A)^B_C = −f_{AB}{}^C are the matrices in the adjoint representation and (H_A)_{BC} = H_{ABC} for the matrices H_A. Note that relation (15) represents the ad-invariance of the Lie algebra metric G_{AB}. One can use relations (15)-(19) as the definition of an algebraic biHermitian structure on a Lie algebra [10], and calculate and also classify such structures on Lie algebras [10]. For the N = (2, 2) SUSY WZW models we have H_{ABC} = f_{ABC}; then (19) is automatically satisfied, and from (16) we obtain that the determinant of J² is (−1)^n, i.e., the dimension n of the Lie algebra g must be even and J^B_A has eigenvalues ±i. If we choose a basis T_A = (T_a, T_ā) for the Lie algebra g, we will have [9]: where this form of J satisfies (18). In this basis, according to (17), we must have the following form for G_{AB}: where g is an n/2 × n/2 symmetric matrix. According to (18), we have f_{abc} = 0 and f_{āb̄c̄} = 0; this means that f_{ab}{}^{c̄} = f_{āb̄}{}^{c} = 0, i.e., T_a and T_ā form Lie subalgebras g_+ and g_− such that (g_+, g_−) is a Lie bialgebra and (g, g_+, g_−) is a Manin triple [9]. The relation between Manin triples and N = 2 superconformal models (from the algebraic OPE point of view) was first pointed out in [7]. Also, the relation between N = (2, 2) WZW models and Manin triples (from the action point of view) was pointed out in [9]. In [10] we obtained all algebraic biHermitian structures related to four dimensional real Lie algebras.
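As a quick numerical sanity check of the block forms (20) and (21) above (J diagonal with ±i eigenvalues in the basis T_A = (T_a, T_ā), G off-diagonal with a symmetric block g), one can verify J² = −1 together with the compatibility condition JᵀGJ = G, which we take here as the standard Hermiticity condition assumed to correspond to relation (17). The matrices below are generic assumed values, not a specific Lie algebra:

```python
import numpy as np

n = 2  # half-dimension of the (complexified) basis; assumed small example
J = np.block([[1j * np.eye(n), np.zeros((n, n))],
              [np.zeros((n, n)), -1j * np.eye(n)]])

rng = np.random.default_rng(0)
g = rng.standard_normal((n, n))
g = g + g.T                       # symmetric block, as required below (21)
G = np.block([[np.zeros((n, n)), g],
              [g.T, np.zeros((n, n))]])

assert np.allclose(J @ J, -np.eye(2 * n))   # J^2 = -1, eigenvalues +/- i
assert np.allclose(J.T @ G @ J, G)          # Hermiticity-type compatibility
```

The second assertion holds for any symmetric g, which illustrates why (17) forces G_{AB} into the off-diagonal block form (21) in this basis.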
Let us consider a simple example of N = (2, 2) SUSY WZW models corresponding to the non-Abelian four dimensional Manin triple A_{4,8} [10]:

3 N = (4, 4) SUSY WZW and sigma models on Lie groups

As mentioned above, the correspondence between the N = 2, N = 4, and N = 8 superconformal Kac-Moody algebras and Manin triples was investigated in [7], and the Manin triple construction of N = 4 superconformal field theories was also investigated in [8]; but up to now the algebraic structures of the N = (4, 4) and N = (8, 8) SUSY sigma models on Lie groups have not been obtained. The action (1) was invariant under transformation (2). We now consider, for the N = (4, 4) case, the invariance of that action under the following SUSY transformations [2], [3] (instead of (3)); an N = (4, 4) SUSY sigma model must have four right-handed generators (Q_{+r}), four left-handed generators (Q_{−r}), and three complex structures (J_{±r}): such that the constraints on the complex structures are as follows [5]: where the closure of the algebra of SUSY transformations (i.e., [δ²_r(ǫ_r), δ²_r(ǫ_r)] and [δ²_r(ǫ_r), δ²_s(ǫ_s)]) has as a consequence the following relations [5]: these involve the Nijenhuis concomitant [12] of the complex structures J_{±r}. When the background is a Lie group G, then in the non-coordinate bases ((10)-(13)) the geometrical relations (25)-(31) take the following algebraic forms: In this way, relations (32)-(38) define the algebraic bihypercomplex structures on the Lie algebra g, such that we have three algebraic complex structures J_r (r = 1, 2, 3), where by use of (33) only two of them are independent, i.e., we have two independent algebraic complex structures (e.g., J_1 and J_2). As in the N = (2, 2) case, for the N = (4, 4) SUSY WZW models we have H_{ABC} = f_{ABC}; then relations (36) are automatically satisfied, and from (32), (34) and (35) one can obtain the following forms for J_1, J_2 and G, where we use the basis T_A = {T_a, T_ā} for the Lie algebra g.
Then from (37) one can obtain R^a_b = R^ā_b̄ = 0, and from (34) we obtain that Rᵀ = −R; then from (32) we see that the dimension of J_2 must be 4n, where n is an integer. So the dimension of the Lie algebra g must be 4n. Note that from (35), as in the N = (2, 2) case, we see that g = g_+ ⊕ g_−, where g_+ and g_− are Lie subalgebras with bases T_Γ = {T_a, T_ā}, a = 1, ..., n, such that the basis for g is now T_A = {T_Γ, T_Γ̄}, and they form a Lie bialgebra. Now from (38) we have: This means that we have a 2-cocycle. To show this, we consider the definition of the coboundary operator δ on an i-cochain γ on the Lie algebra g with values in the space M as follows [14]: ∀T_A ∈ g. The 2-cochain γ is a 2-cocycle when δγ = 0. Now, for the case M = C, we have: Using the following form for the 2-cochain in (43), after some calculation one can obtain (41). In this way, the algebraic structure of the N = (4, 4) WZW models is also a Lie bialgebra, as for the N = (2, 2) WZW models, with the difference that for the N = (4, 4) case we have a Lie bialgebra with a 2-cocycle, such that the independent algebraic complex structures (J_1, J_2) anticommute (37). As in the N = (2, 2) case, we consider the non-Abelian four dimensional Manin triple A_{4,8}. In this case (N = (4, 4)) we have the following forms for the metric G and the complex structures J_1 and J_2: with the following 2-cocycle: where I is the 2 × 2 unit matrix.

4 N = (8, 8) WZW and sigma models on Lie groups

Now, as in the N = (4, 4) case, we consider the action (1) again; this action is invariant under SUSY transformation (2) as well as under the following second SUSY transformations [5]: where for these transformations we have fourteen geometric complex structures J_{±r}.
As in the N = (4, 4) case, from the invariance of the action (1) under transformation (47), and from the closure of the algebra of transformations, one can again obtain relations similar to (25)-(31), with r = 1, ..., 7 [5], and also the same algebraic relations (32)-(38). For this case, from (34) we have the following relations among the algebraic complex structures: therefore only three of them (e.g., J_1, J_2 and J_4) are independent. As in the N = (4, 4) case, for the N = (8, 8) SUSY WZW models we obtain the following forms for the complex structures J_1, J_2 and J_3 and for G: where in this case from (32) we conclude that the dimension of the algebra g must be 8n, with n an integer, and relation (35) for J_2 reduces to the Lie bialgebra structure with Lie subalgebras g_+ and g_− of dimension 4n. In this case, relation (38) reduces to the following relations: i.e., the algebraic structures of the N = (8, 8) WZW models are Lie bialgebras with two 2-cocycles and three algebraic complex structures J_1, J_2 and J_3, which anticommute according to (37). As an example, consider a four dimensional complex Lie algebra L_9 with the following commutation relations [15]: one of the dual Lie algebras for the above Lie algebra is L̃_9, which satisfies the following mixed Jacobi identities: with the following commutation relations: Now, for this 8 dimensional Lie algebra g we have obtained the following algebraic complex structures J_1, J_2 and J_3 and metric G: one can use relations (32)-(38) to obtain and classify all these structures on low dimensional Lie algebras, as in the N = (2, 2) case [10].
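The two defining algebraic conditions used throughout, J_r² = −1 from (32) and the anticommutation of the independent complex structures from (37), can be checked numerically on a minimal 4-dimensional example. The matrices below are the standard quaternionic (bihypercomplex) structures on R⁴, used here as generic stand-ins rather than the specific J_r of the L_9 example:

```python
import numpy as np

I2, Z2 = np.eye(2), np.zeros((2, 2))
j = np.array([[0.0, -1.0], [1.0, 0.0]])   # 2x2 rotation generator, j^2 = -1

# standard anticommuting complex structures on R^4 (quaternionic structure)
J1 = np.block([[Z2, -I2], [I2, Z2]])
J2 = np.block([[j, Z2], [Z2, -j]])
J3 = J1 @ J2                               # the third structure is the product

for J in (J1, J2, J3):
    assert np.allclose(J @ J, -np.eye(4))              # each J_r^2 = -1, cf. (32)
assert np.allclose(J1 @ J2 + J2 @ J1, np.zeros((4, 4)))  # anticommutation, cf. (37)
```

This also illustrates why the dimension must be a multiple of 4: two anticommuting complex structures generate a quaternionic action, whose smallest real representation is 4-dimensional.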
New Results for the Pointing Errors Model in Two Asymptotic Cases

Several precise and computationally efficient results for pointing errors models in two asymptotic cases are derived in this paper. The normalized mean-squared error (NMSE) performance metric is employed to quantify the accuracy of the different models. For the case that the beam width is relatively larger than the detection aperture, we propose three kinds of models that have the form c_1 exp(−c_2 r²). It is shown that the modified intensity uniform model not only achieves an accuracy comparable to the best linearized model, but is also expressed in a more elegant mathematical way than the traditional Farid model. This indicates that the modified intensity uniform model is preferable in the performance analysis of free space optical (FSO) systems considering the effects of the pointing errors. By analogizing the beam spot with a point in the case that the beam width is smaller than the detection aperture, the solution of the pointing errors model is transformed into a smooth function approximation problem, and we find that in some scenarios a more accurate approximation can be achieved by the proposed point approximation model than by the model induced from the Vasylyev model.

I. INTRODUCTION

Free-space optical (FSO) is a wireless optical communication technology which has attracted considerable attention in both academia and industry due to its great potential: large bandwidth and high data rate, unregulated spectrum, low mass and low power requirements, and rapid and easy deployment [1]. Also, FSO technology can be deployed in so-called hybrid radio frequency (RF)/FSO systems, which are considered a promising solution for reliable wireless backhaul connectivity to enable long-range communications in future 6G networks [2].
Furthermore, the recently proposed reconfigurable intelligent surface (RIS) technology can be used to enhance the performance of FSO systems, thus broadening the range of FSO communication [3]. Despite these benefits of FSO technology, the performance of FSO communication systems can be degraded by adverse effects, such as beam wandering, beam spreading, and scattering, when the optical carrier propagates through atmospheric turbulence. It has been shown in [4] that turbulence-induced irradiance scintillation and pointing errors are the two major performance-limiting factors for FSO links with ranges longer than one kilometer. Note that beam wander and mechanical vibration both result in pointing errors, and they share the same mathematical model, differing only in the physical meaning of the parameters [5], [6]. Irradiance scintillation models have been studied extensively, and plenty of precise or mathematically tractable models have been proposed so far, such as the Gamma-Gamma distribution [7], the Fischer-Snedecor F distribution [8], the lognormal-Rician distribution [9], and the Málaga distribution [10]. Unfortunately, the results for the pointing errors models are greatly limited. To the best of the author's knowledge, the pointing errors model for a Gaussian beam was first developed by R. Esposito in [11], where it was expressed in terms of Marcum's Q function. Subsequently, a simple and efficient approximation of this model was presented by Farid [13], namely, the Farid model, which has been widely used in FSO systems. It should be noted, however, that the Farid model has two main drawbacks: 1) one is the low approximation accuracy when the radius of the beam width is less than twice that of the detection aperture; 2) the other is that it requires the computation of the complex error function erf(·). Recently, another pointing errors model was established by Vasylyev in the field of quantum communication [14].
Although it provides a good approximation regardless of the relationship between beam width and detection aperture, its complicated mathematical form greatly hampers analytic expressions for the system performance. In this work, we present some new results on the pointing errors model. Several computation-efficient models are proposed, and their accuracy is investigated in detail. For the case that the beam width is relatively larger than the detection aperture, the normalized mean-squared error (NMSE) performance indicates that the proposed modified intensity uniform model not only shows a better approximation accuracy than the Farid model, but also has a simpler expression. This is one of the key contributions of this paper. By analogizing the beam spot with a point in the case that the beam width is smaller than the detection aperture, the solution of the pointing errors model is transformed into a smooth function approximation problem. Numerical results demonstrate that the proposed point approximation model provides a higher approximation accuracy. (Marcum's Q function plays an important role in the performance analysis of communication systems; see [12] for its definition.)

II. POINTING ERRORS MODEL

In line-of-sight FSO communication links, misalignment between transmitter and receiver results in pointing errors, which are another performance-limiting factor besides turbulence-induced scintillation. We note that the pointing errors consist of two parts in practical FSO systems: one is the beam wandering caused by large scale eddies, and the other is due to mechanical vibration or thermal expansion. However, the former can be dealt with using a methodology similar to the latter, as shown in [5], [6]. After propagating a distance z from the transmitter, the normalized spatial distribution of a Gaussian beam at the receiver plane is given by

I(ρ; z) = (2/(π w_z²)) exp(−2‖ρ‖²/w_z²),    (1)

where ρ denotes the displacement from the beam center.
According to [15], the beam radius w_z at distance z is related to the beam waist w_0 at z = 0, the wavelength λ, and the atmospheric coherence length ρ_0, and can be expressed as

w_z = w_0 [1 + ε (λz/(π w_0²))²]^{1/2}, (2)

where ε = 1 + 2w_0²/ρ_0²(z). Specifically, ρ_0 = (0.55 C_n² k² z)^{−3/5} for a spherical wave, with C_n² and k = 2π/λ denoting the index-of-refraction structure constant and the wave number, respectively. At the receiver, pointing errors cause a deflection between the beam center and the aperture center, as shown in Fig. 1. Hence, the transmission efficiency within a circular detection aperture of radius R_a reads

h_p(r; z) = ∫∫_A I_beam(ρ − r; z) dρ, (3)

where A is the detector area. Owing to the symmetry of the beam shape and the detector area, h_p(r; z) depends only on the radial distance r = ‖r‖ and is given by

h_p(r; z) = ∫_0^{2π} ∫_0^{R_a} (2/(π w_z²)) exp(−2(ρ² + r² − 2ρr cos φ)/w_z²) ρ dρ dφ. (4)

Equivalently, (4) can be expressed in terms of the incomplete Weber integral [14], [16]; the resulting expression (5) involves the modified Bessel functions I_n(·). From (5), the pointing errors model at r = 0 can be easily derived as

h_p(0; z) = 1 − exp(−2R_a²/w_z²), (6)

which we denote by η.

III. NEW RESULTS OF POINTING ERRORS MODEL IN TWO ASYMPTOTIC CASES

In this section, we provide several methods to evaluate the pointing errors model in two asymptotic cases: w_z ≫ R_a and R_a ≫ w_z. In most current practical FSO systems, the divergence of the emitted laser beam is typically tens of µrad, while the size of the receiving aperture is on the order of tens of centimeters [17], [18]. Hence, both scenarios can occur, depending on the transmission distance z.

A. Models for w_z ≫ R_a

Combining the condition w_z ≫ R_a with (1), we demonstrate that the expression for the pointing errors takes the form h_p(r; z) = c_1 exp(−c_2 r²). This simple form facilitates the performance analysis of FSO systems when the radial displacement r follows a Rayleigh distribution,

f(r) = (r/σ_s²) exp(−r²/(2σ_s²)), r ≥ 0, (7)

where σ_s² is the jitter variance at the receiver.
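The aperture integral above also has a convenient exact evaluation: a Gaussian beam of radius w_z is a 2-D normal density with per-axis variance w_z²/4, so the collected fraction is a noncentral chi-square CDF (equivalent to the Marcum-Q formulation cited in the introduction). The sketch below checks this against the closed form h_p(0; z) = 1 − exp(−2R_a²/w_z²) and Monte-Carlos the mean of h_p under Rayleigh jitter for a model of the form c_1 exp(−c_2 r²); all parameter values are illustrative.

```python
import numpy as np
from scipy import stats

def h_p(r, w_z, R_a):
    # Exact fraction of a Gaussian beam displaced by r that falls inside a
    # circular aperture of radius R_a: noncentral chi-square CDF with 2 dof.
    sigma2 = w_z**2 / 4.0
    return stats.ncx2.cdf(R_a**2 / sigma2, df=2, nc=np.asarray(r, float)**2 / sigma2)

w_z, R_a = 4.0, 1.0
eta = 1.0 - np.exp(-2.0 * R_a**2 / w_z**2)   # closed form at r = 0
print(h_p(0.0, w_z, R_a), eta)               # the two values agree

# Mean of h_p = c1*exp(-c2*r^2) under Rayleigh(sigma_s) jitter: closed form
# c1*g/(g + 1) with g = 1/(2*c2*sigma_s^2), versus a Monte Carlo estimate.
sigma_s = 1.5
c1, c2 = 2.0 * R_a**2 / w_z**2, 2.0 / w_z**2
g = 1.0 / (2.0 * c2 * sigma_s**2)
r = np.random.default_rng(0).rayleigh(scale=sigma_s, size=500_000)
print(c1 * g / (g + 1.0), np.mean(c1 * np.exp(-c2 * r**2)))
```

The noncentral chi-square identity makes the exact model cheap to evaluate on a dense grid, which is handy for the NMSE comparisons in the numerical section.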
With (7), the unified probability density function (PDF) of h_p is obtained as

f_{h_p}(h) = (γ²/c_1^{γ²}) h^{γ²−1}, 0 ≤ h ≤ c_1, (8)

where γ² = 1/(2c_2σ_s²). In what follows, we aim at determining the coefficients c_1 and c_2 for the different pointing errors models. It should be emphasized that the results below are expressed in terms of elementary functions, avoiding the computation of complicated functions such as erfc(·) in the Farid model [13] and I_n(·) in the exact model.

1) Intensity Uniform Model: When w_z ≫ R_a, it can reasonably be claimed that the intensity distribution within the detector aperture is approximately uniform, and the intensity at the detector center can be taken as the intensity of the whole aperture area. As such, we have

h_p(r; z) ≈ πR_a² I_beam(r; z) = (2R_a²/w_z²) exp(−2r²/w_z²). (9)

In this case, c_1 = 2R_a²/w_z² and c_2 = 2/w_z².

2) Modified Intensity Uniform Model: Although the expression for the intensity uniform model is simple, it leaves out some important details. For example, comparing (9) with (6) shows that c_1 = 2R_a²/w_z² is only the first-order Taylor approximation of η = 1 − exp(−2R_a²/w_z²). Inspired by this result, the coefficient c_2 in (9) may exhibit the same behaviour as c_1; that is, 2/w_z² may be a Taylor approximation of some function. Specifically, when an exponential function is considered, we obtain (10). From (10), we have c_1 = η, while c_2 takes the corresponding exponential form.

3) Linearized Model: In this model, c_1 = η, the same as in the modified intensity uniform model, but the coefficient c_2 is determined in another way. Fig. 2 depicts the process of solving for c_2, which consists of two steps: the circle-square transformation and the equal-space partition. To calculate c_2 in this model, we assume that the equal-space partition step operates along only one axis, and that the intensity in each interval is a linear function of the coordinate along that axis while remaining constant along the other axis. By symmetry, the result is independent of the chosen axis; an example of the equal-space partition along the x-axis is shown in Fig. 3, where x_n − x_0 = nδ = √(πR_a²), with n and δ denoting the number of splits and the spacing, respectively.
Hence, based on the above description, the intensity distribution satisfies the piecewise-linear relation I_beam(x; z) ≈ k_i x + b_i for x_i ≤ x ≤ x_{i+1}, where x_{i+1} = x_i + δ, and k_i and b_i are the coefficients of the linear function on the i-th interval; they are determined by the two end points (x_i, I_beam(x_i; z)) and (x_{i+1}, I_beam(x_{i+1}; z)). The pointing errors model h_p(r; z) is then approximated accordingly. The detailed procedure for the determination of c_2 in the linearized model is given in Algorithm 1.

Algorithm 1 Algorithm for the determination of c_2 in the Linearized Model
Input: radius of detector aperture R_a, beam radius w_z, number of splits n, radial distance r_0.
Output: the coefficient c_2.

5) First Reduced Vasylyev Model: According to [14], the pointing errors model established by Vasylyev is expressed as

h_p(r; z) = η exp[−(r/R)^λ], (15)

where the shape parameter λ and the scale parameter R are given in [14]. By using the Taylor series of exp(·), I_0(·), and I_1(·) at R_a/w_z = 0 [19], the coefficients λ and R simplify, after some algebraic manipulations, to

lim_{R_a/w_z→0} λ = 2, lim_{R_a/w_z→0} R = w_z/√2. (17)

Substituting (17) into (15), the first reduced Vasylyev model is obtained as

h_p(r; z) ≈ η exp(−2r²/w_z²). (18)

In this case, c_1 = η and c_2 = 2/w_z².

B. Models for R_a ≫ w_z

1) Point Approximation Model: In this model, the beam spot acts like a point, as shown in Fig. 4. Nearly all of the laser beam energy is concentrated around this point, which leads to

h_p(r; z) ≈ 1 for r < R_a and h_p(r; z) ≈ 0 for r > R_a. (19)

Alternatively, the above can be rewritten as

h_p(r; z) ≈ ε(R_a − r), (20)

where ε(·) denotes the Heaviside step function. Specifically, the pointing errors at r = R_a can be obtained as

h_p(R_a; z) ≈ 1/2, (21)

since half of the beam energy then falls inside the aperture. Combining (19) with (21), the pointing errors model for the case R_a ≫ w_z can be expressed as

h_p(r; z) ≈ { 1, r < R_a; 1/2, r = R_a; 0, r > R_a }. (22)

The problem then becomes finding a smooth function that approximates (22) efficiently. Inspired by the fact that the logistic function is typically used to approximate the step function, we develop the pointing errors formula

h_p(r; z) ≈ 1/(1 + exp(α[(r/R_a)^{2k} − 1])), (23)

where k ∈ Z⁺, with Z⁺ denoting the set of positive integers, and α represents the logistic growth rate, or steepness, of the curve.
From (23), the derivative of h_p(r; z) at r = R_a reduces to the simple formula

dh_p(r; z)/dr |_{r=R_a} = −αk/(2R_a). (24)

Hence, combining (24) with [14, eqn. (D6)] yields the relationship (25) between k and α. In addition, by using the asymptotic expansion of the modified Bessel function I_n(z) for large z [21], i.e., I_n(nz) ≈ (2πnz)^{−1/2} exp(nz), (25) reduces to (26). The resulting formula indicates that the curve drops faster at the midpoint r = R_a as the ratio between the detector aperture and the beam width becomes larger, which is in line with expectations. Substituting (26) into (23) gives the point approximation model. Note that, for r > R_a, (23) reduces to

h_p(r; z) ≈ exp(−α[(r/R_a)^{2k} − 1]), (28)

and comparing (28) with the Gaussian profile in (1) suggests that the parameter k should intuitively be assigned the value 1; this choice is verified by the numerical results in the next section. (More specifically, (23) can be constructed directly from the Fermi-Dirac distribution, which has the properties of (22): the Fermi-Dirac occupation function is f(E) = 1/(exp[(E − µ)/(k_B T)] + 1) [20], and (23) is obtained by substituting E, µ, and k_B T with (r/R_a)^{2k}, 1, and 1/α, respectively.) Furthermore, by using (7) and (23), the PDF of h_p in this case can be approximated in closed form, where the constant c = exp(α) appears in the resulting expression.

2) Second Reduced Vasylyev Model: As R_a ≫ w_z, the coefficients λ and R in the Vasylyev model can be reduced to the forms in (29); a full derivation of (29) is presented in Appendix A. Hence, substituting (29) into (15), the second reduced Vasylyev model is obtained as (30). Correspondingly, by using (7) and (30), the PDF of h_p in this case is obtained approximately as (31).

IV. NUMERICAL RESULTS

In this section, we investigate the effectiveness of the pointing errors models presented in the previous section. The theoretical results are obtained through MATLAB and are included as a benchmark in all the figures. Moreover, from the perspective of computational efficiency, the number of splits n in the linearized model is set to 4 unless otherwise specified.
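A quick numerical check of the point approximation model. The exact profile is evaluated via the noncentral chi-square CDF (an equivalent of the Marcum-Q form of the exact model); since the paper's analytic expression for α in (26) is not reproduced here, α is instead fitted by minimizing the NMSE, which is enough to confirm that k = 1 gives an excellent logistic fit in the R_a ≫ w_z regime:

```python
import numpy as np
from scipy import stats, optimize
from scipy.special import expit   # numerically stable logistic 1/(1 + exp(-x))

def h_p_exact(r, w_z, R_a):
    # Exact collected fraction of a displaced Gaussian beam.
    s2 = w_z**2 / 4.0
    return stats.ncx2.cdf(R_a**2 / s2, df=2, nc=np.asarray(r, float)**2 / s2)

def h_p_point(r, R_a, alpha, k=1):
    # Logistic (Fermi-Dirac-style) smoothing of the step u(R_a - r), eqn (23);
    # eta ~ 1 when R_a >> w_z, so it is omitted here.
    return expit(-alpha * ((np.asarray(r, float) / R_a)**(2 * k) - 1.0))

w_z, R_a = 0.1, 1.0                       # R_a >> w_z regime
rs = np.linspace(0.0, 2.0 * R_a, 201)
h_ref = h_p_exact(rs, w_z, R_a)

def nmse(alpha, k=1):
    err = h_ref - h_p_point(rs, R_a, alpha, k)
    return np.sum(err**2) / np.sum(h_ref**2)

# Fit the steepness alpha numerically (a stand-in for the analytic eqn (26)).
alpha_fit = optimize.minimize_scalar(lambda a: nmse(a, 1),
                                     bounds=(1.0, 500.0), method="bounded").x
print(alpha_fit, nmse(alpha_fit, 1))
```

The fitted NMSE is small (well below 10⁻³ in this regime), consistent with the paper's observation that k = 1 suffices.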
It should be emphasized that the radial distance r_0 in the linearized model is optimized to minimize the NMSE, which is defined as ‖h − ĥ‖₂²/‖h‖₂², with h and ĥ representing the theoretical and approximate values, respectively. In Fig. 5, we present the theoretical and approximate results for the different models and values of w_z/R_a. The corresponding NMSE performance is shown in Table I. Note that the normalized optimized radial distances r_0*/R_a that minimize the NMSE for the three normalized beam widths w_z/R_a = 2, 4, 6 are 4.05, 12.95, and 27.6, respectively. From this figure, it can be clearly seen that the accuracies of the first reduced Vasylyev model and the intensity uniform model are close to each other, and they show the poorest approximation accuracy among these models. The accuracy of the modified intensity uniform model is comparable with that of the linearized model, and the former is more computation-efficient than the latter. Moreover, both of these models show excellent agreement with the theoretical values even when w_z/R_a = 2, where NMSE ≈ 1 × 10⁻⁵, and are more accurate than the traditional Farid model that is widely used in FSO systems.

Fig. 6. The normalized optimized radial distance r_0*/R_a, and the ratio of NMSE performance between the modified intensity uniform model and the linearized model, for different w_z/R_a.

In Fig. 6, we investigate the effects of the normalized beam width w_z/R_a and the number of splits n on the normalized optimized radial distance r_0*/R_a and the ratio of NMSE performance, where the NMSE ratio is taken between the modified intensity uniform model and the linearized model. From this figure, we find that the relation between r_0*/R_a and w_z/R_a is a quadratic function for both numbers of splits; one fitted expression is r_0*/R_a = 0.72(w_z/R_a)² + 0.08(w_z/R_a) + 1.01.
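As a quick consistency check, the reported quadratic fit can be evaluated at the three normalized beam widths used in Fig. 5; it reproduces the quoted optimal distances r_0*/R_a = 4.05, 12.95, 27.6 exactly at w_z/R_a = 2 and to within about 1% at the other two points:

```python
# Reported quadratic fit of the optimal normalized radial distance:
# r0*/Ra = 0.72*(wz/Ra)^2 + 0.08*(wz/Ra) + 1.01
def r0_star(x):
    return 0.72 * x**2 + 0.08 * x + 1.01

reported = {2: 4.05, 4: 12.95, 6: 27.6}   # optima quoted in the text
for x, ref in reported.items():
    print(x, r0_star(x), ref)
```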
Additionally, the R-squared value is 1 for both fits. As for the ratio of NMSE performance, the two curves are close to each other, which indicates that the NMSE performance for n = 4 and n = 6 is nearly equivalent.

Fig. 7 depicts the effect of k in the point approximation model on the approximation accuracy for w_z/R_a = 0.2 and w_z/R_a = 0.1. Specifically, the corresponding NMSE results for w_z/R_a = 0.1 are 5.5 × 10⁻⁵, 1.2 × 10⁻⁴, and 3.1 × 10⁻⁴ for k = 1, 2, 3, respectively. As can be seen, the best approximation is achieved when k = 1. As expected, the curve decreases more sharply at the midpoint r/R_a = 1 when w_z/R_a is smaller.

Fig. 8 shows the approximation results of h_p(r; z) for the point approximation model and the second reduced Vasylyev model; the corresponding NMSE results of these two models are shown in Table II. From this figure, both models provide an efficient approximation when compared with the theoretical values. However, the NMSE performance in Table II shows that the proposed point approximation model achieves a higher accuracy than the second reduced Vasylyev model when w_z/R_a ≤ 0.2.

V. CONCLUSION

In this work, we have presented several new results for the pointing errors model and investigated their accuracy in terms of NMSE performance. The linearized model was shown to provide the best approximation among these models, and the normalized optimized radial distance r_0*/R_a in this model has a quadratic relationship with the normalized beam width w_z/R_a. We also demonstrated that the modified intensity uniform model is not only more accurate than the traditional Farid model, as shown by the numerical results, but is also expressed in a simpler form. This indicates that our model is preferable for the performance analysis of FSO systems in the presence of pointing errors.
Furthermore, by treating the beam spot as a point when R_a ≫ w_z, the solution of the pointing errors model was transformed into a smooth-function approximation problem, and numerical results show that the proposed point approximation model achieves a better approximation than the model developed by Vasylyev when w_z/R_a ≤ 0.2.
Isolation and Characterization of Homodimeric Type-I Reaction Center Complex from Candidatus Chloracidobacterium thermophilum, an Aerobic Chlorophototroph* Background: Candidatus Chloracidobacterium thermophilum is the only aerobic chlorophototroph with type-I homodimeric reaction centers (RCs). Results: An RC carotenoid-binding protein (CBP) complex was isolated from Ca. C. thermophilum. Conclusion: Ca. C. thermophilum RCs contain bacteriochlorophyll a, chlorophyll a, and Zn-bacteriochlorophyll a′. Significance: This is the first description of aerotolerant type-I homodimeric RCs from the only known chlorophototrophic member of the phylum Acidobacteria. The recently discovered thermophilic acidobacterium Candidatus Chloracidobacterium thermophilum is the first aerobic chlorophototroph that has a type-I, homodimeric reaction center (RC). This organism and its type-I RCs were initially detected by the occurrence of pscA gene sequences, which encode the core subunit of the RC complex, in metagenomic sequence data derived from hot spring microbial mats. Here, we report the isolation and initial biochemical characterization of the type-I RC from Ca. C. thermophilum. After removal of chlorosomes, crude membranes were solubilized with 0.1% (w/v) n-dodecyl β-d-maltoside, and the RC complex was purified by ion-exchange chromatography. The RC complex comprised only two polypeptides: the reaction center core protein PscA and a 22-kDa carotenoid-binding protein denoted CbpC. The absorption spectrum showed a large, broad absorbance band centered at ∼483 nm from carotenoids as well as smaller Qy absorption bands at 672 and 812 nm from chlorophyll a and bacteriochlorophyll a, respectively. The light-induced difference spectra of whole cells, membranes, and the isolated RC showed maximal bleaching at 840 nm, which is attributed to the special pair and which we denote as P840. Making it unique among homodimeric type-I RCs, the isolated RC was photoactive in the presence of oxygen. 
Analyses by optical spectroscopy, chromatography, and mass spectrometry revealed that the RC complex contained 10.3 bacteriochlorophyll aP, 6.4 chlorophyll aPD, and 1.6 Zn-bacteriochlorophyll aP′ molecules per P840 (12.8:8.0:2.0). The possible functions of the Zn-bacteriochlorophyll aP′ molecules and the carotenoid-binding protein are discussed. Reaction center (RC) complexes are the central components of (bacterio)chlorophyll ((B)Chl)-based phototrophy and are responsible for the conversion of light energy into chemical energy. After absorbing a photon, a BChl dimer bound to the RC near the periplasmic surface of the membrane achieves a long-lived, charge-separated state by transferring an electron through a series of bound cofactors to a terminal acceptor, which is bound to the RC near the cytoplasmic surface of the membrane. Based on their terminal electron acceptors, RC complexes are classified into two types (1). Type-I RCs utilize Fe-S clusters as terminal electron acceptors, whereas type-II RCs use quinones as terminal electron acceptors. Green sulfur bacteria (GSB; Chlorobi and Chlorobiales) and heliobacteria (Firmicutes and Heliobacteriaceae) possess type-I RCs; purple bacteria (Proteobacteria) and filamentous anoxygenic phototrophs (Chloroflexi) possess type-II RCs; and cyanobacteria (Cyanobacteria), similar to plants and algae, possess both type-I and type-II reaction centers, photosystems I and II, respectively. All characterized GSB and heliobacteria are strict anaerobes, a trait once thought to be a consequence of the vulnerability of their RC-bound Fe-S clusters to oxygen (2,3). However, Chlorobaculum tepidum is extremely tolerant to oxygen so long as cells are not illuminated. This observation suggests that reactive oxygen species are the true problem, and consistent with this hypothesis, mutants lacking enzymes for protection against reactive oxygen species are more sensitive to oxygen (4).
The type-I RCs of GSB and heliobacteria uniquely have homodimeric core complexes, whereas all other RCs, including photosystems I and II, have heterodimeric core complexes (5). Despite their simpler composition, few detailed structural studies have been reported for homodimeric RCs, and some aspects of their biochemical and biophysical properties remain controversial (6). Until recently, only five of the currently recognized phyla of the domain Bacteria contained species capable of chlorophototrophic growth (7). The discovery of Candidatus Chloracidobacterium thermophilum (hereafter Ca. C. thermophilum) extended this distinction to a sixth phylum, Acidobacteria (8). Metagenomic sequence data from the hot spring microbial mats in which Ca. C. thermophilum was discovered (8,9) as well as the complete genome sequence of Ca. C. thermophilum (10) revealed the presence of pscA and pscB genes, which encode the homodimeric core subunit and the FA/FB-harboring subunit of a type-I RC, respectively. The Ca. C. thermophilum genome does not encode PscC, the c-type cytochrome that donates electrons to the primary donor (11,12), or PscD, a protein that may enhance electron transfer from the FA/FB clusters of PscB to ferredoxin (13) in the RCs of GSB. Time course metatranscriptome profiling studies over a diel cycle have demonstrated that transcripts of the pscA gene are least abundant during the day, when the microbial mats are oxic, and highest during the late afternoon and evening, when the mats are anoxic (14). Ca. C. thermophilum can be cultivated in the laboratory as an aerobe, and thus its RCs can also be synthesized under oxic conditions. These properties make these RCs a unique system for investigating electron transport in homodimeric type-I RCs, and information gained from these studies may contribute new insights into the evolutionary events that led from anoxygenic to oxygenic photosynthesis.
We have previously reported the purification and characterization of chlorosomes (8,15,16) and the BChl a-binding Fenna-Matthews-Olson (FMO) protein from Ca. C. thermophilum (17,18), components of the photosynthetic apparatus whose roles in light harvesting have been extensively characterized in GSB (19-22). Chlorosomes are large light-harvesting organelles, which attach to the inner surface of the cytoplasmic membrane and which contain >200,000 self-aggregating BChl molecules. The suprastructures of the BChl d and c molecules in chlorosomes of C. tepidum were recently described (23). The FMO protein, which forms a layer between the chlorosomes and RCs (24), functions both as a light-harvesting complex and as a conduit for excitation energy transfer between the chlorosome baseplate and the RC (20-22). Although its genome predicts that Ca. C. thermophilum has a photosynthetic apparatus very similar to that of GSB (i.e., chlorosomes, FMO, and type-I RCs) (8,10), the aerobic lifestyle of Ca. C. thermophilum suggests that its photosynthetic apparatus has unique modifications that allow it to remain functional in the presence of oxygen. We recently reported that the chlorosomes of Ca. C. thermophilum contain several novel proteins that are not known to occur in the chlorosomes of GSB or Chloroflexi (15). We have additionally reported that FMO from Ca. C. thermophilum has distinctive spectroscopic properties compared with FMO from GSB (17,18). These new features of the light-harvesting complexes of Ca. C. thermophilum seem to be related to the ability of this organism to grow phototrophically under oxic conditions. In this report, we describe the isolation, spectroscopic properties, and pigment composition of the Ca. C. thermophilum RCs. These oxygen-tolerant RCs are complexes formed from a PscA homodimer and a novel carotenoid-binding protein (CBP; this designation denotes the complex formed by the CbpC apoprotein and carotenoids).
Unexpectedly, these RC-CBP complexes contain two molecules of Zn-BChl a′ (the C-13² epimer of Zn-BChl a), which may act as the primary electron donor (P840) or an electron acceptor. The properties of these RCs are discussed and compared with those of other chlorophototrophs.

EXPERIMENTAL PROCEDURES

Purification of RC Complex from Ca. C. thermophilum-Ca. C. thermophilum cells were cultured photoheterotrophically at 53°C under oxic conditions in an orbital shaking incubator (85 rpm) as described previously (8). Cells (9 g, wet weight) were harvested by centrifugation; resuspended in 10 mM Tris-HCl, pH 7.5, containing 2 M NaSCN, 5 mM EDTA, 1 mM phenylmethylsulfonyl fluoride, 2 mM dithiothreitol (DTT), and 3 mg of lysozyme ml⁻¹; and incubated for 30 min. The cells were disrupted by sonication for 5 min and then passed three times through a French pressure cell at 138 megapascals at 4°C. Unbroken cells and large cell debris were removed by centrifugation (8,000 × g) for 10 min, and the resulting supernatant was subjected to centrifugation at 220,000 × g for 1.5 h. The resulting pellet containing total membranes and chlorosomes was suspended in the same buffer and loaded onto sucrose density gradients (20-50%), which were centrifuged for 18 h at 4°C (220,000 × g). The membrane layer that formed below the chlorosome layer was collected and diluted with buffer C (50 mM Tris-HCl, pH 8.0, 1 mM EDTA, 1 mM phenylmethylsulfonyl fluoride, 2 mM DTT), and the suspension was centrifuged again at 220,000 × g for 1.5 h. The resulting membrane pellets were suspended in ~8 ml of buffer C (~70 µg of pigments (BChl a and BChl c) ml⁻¹) and solubilized with 0.1% (w/v) n-dodecyl β-D-maltoside (DDM). After ultracentrifugation (220,000 × g for 1.5 h), the supernatant was decanted and subjected to anion-exchange chromatography on a DEAE-Sepharose column (2.5 × 8 cm) equilibrated with buffer C containing 0.02% (w/v) DDM.
The orange-colored RC preparation was eluted with buffer C containing 150 mM NaCl. The fractions were pooled and concentrated by ultrafiltration (10-kDa molecular mass cutoff; Millipore, Billerica, MA).

Isolation of CBP-Ca. C. thermophilum cells were suspended in 20 mM Tris-HCl buffer, pH 7.6; disrupted by sonication for 5 min; and passed three times through a French pressure cell at 138 megapascals at 4°C. After unbroken cells and large cell debris were removed by centrifugation (8,000 × g for 10 min), the supernatant was centrifuged at 220,000 × g for 1.5 h. The resulting pellet containing total membranes and chlorosomes was suspended in 20 mM Tris-HCl buffer, pH 7.6, containing 0.6 M sodium carbonate and incubated overnight at 4°C. The suspension was clarified by centrifugation (220,000 × g for 1.5 h), and the resulting blue supernatant enriched in FMO was stored at −80°C until required for other studies. The resulting pellet was suspended in 20 mM Tris-HCl buffer, pH 7.6, containing 18 or 34 mM n-octyl β-D-glucoside (OG) and incubated for 2 h. After centrifugation (220,000 × g for 1 h), the resulting supernatant was decanted, taking care to avoid the soft pellet containing the chlorosomes (although this supernatant usually exhibited a minor absorption peak at ~740 nm due to residual contaminating chlorosomes). This supernatant was loaded onto sucrose density gradients (10-50% (w/v) sucrose prepared in 20 mM Tris-HCl buffer, pH 7.6, containing 20 mM OG). After centrifugation at 220,000 × g for 18 h, an orange-colored layer containing the CBP was collected, diluted with the same buffer, and concentrated by ultrafiltration (Ultracel 10,000, Millipore).

Protein Identification-Polyacrylamide gel electrophoresis (PAGE) in the presence of sodium dodecyl sulfate (SDS) was performed by the method of Schägger and von Jagow (25).
Nondenaturing (native) PAGE was performed according to Allen and Staehelin (26) with minor modifications: SDS was replaced with 0.02% (w/v) DDM and 0.05% (w/v) sodium deoxycholate. The separating gel and the stacking gel contained 8% (w/v) and 2.5% (w/v) acrylamide, respectively (the ratio of acrylamide to N,N′-methylenebisacrylamide was 29:1 (w/w)). After electrophoresis, proteins were stained with Coomassie Brilliant Blue. Tryptic peptide mass fingerprinting analyses were performed using protein bands directly excised from the gel. Polypeptides in the gel slices were digested with trypsin as follows. Gel slices that had been stained with Coomassie Brilliant Blue were destained with 25 mM ammonium bicarbonate in 50% (v/v) acetonitrile. After vortexing for 10 min, gel slices were pelleted, and the liquid was removed. If the gel pieces were still blue, this process was repeated. Destained gel slices were dried by vacuum centrifugation. The gel pieces were then incubated with 10 mM DTT in 25 mM ammonium bicarbonate at 56°C for 1 h. Samples were centrifuged, and the liquid was removed. Iodoacetamide solution (10 mg ml⁻¹ in 25 mM ammonium bicarbonate) was added, and the samples were incubated at room temperature for 45 min in the dark. The gel samples were washed with 25 mM ammonium bicarbonate, dehydrated with 25 mM ammonium bicarbonate in 50% acetonitrile, and dried by vacuum centrifugation. The gel samples were incubated with trypsin solution (12.5 ng of trypsin µl⁻¹ in 25 mM ammonium bicarbonate; Promega) at 37°C for 16 h, after which the liquid was collected into a clean vial. After adding 5% (v/v) formic acid solution (in 50% acetonitrile), the gel pieces were vortexed for 20 min and sonicated for 15 min, and the liquid was collected into the same vial. This step was repeated to increase the peptide yield.
The solution containing the peptides from the digested protein was dried by vacuum centrifugation to reduce the volume and analyzed by LC-MS/MS, which was performed by the Mass Spectrometry Facility at the Huck Institutes for the Life Sciences at The Pennsylvania State University (University Park, PA). Peptides produced by tryptic digestion were identified using the search engine Mascot (Matrix Science, Boston, MA), and amino acid sequence data were deduced from the genome of Ca. C. thermophilum (10).

Spectroscopic and High Performance Liquid Chromatography (HPLC) Analyses-Absorption spectra were recorded with a Cary-14 spectrophotometer modified for computerized data acquisition (Olis, Inc., Bogart, GA) and a Genesys 10 spectrophotometer (Thermo Fisher Scientific, Waltham, MA). Light-induced difference spectra were recorded using a JTS-10 spectrophotometer (Bio-Logic, Claix, France) and a series of interference filters (full-width half-maximum ≤10 nm) to monitor absorption changes at specific wavelengths. Actinic light was provided by light-emitting diodes that emitted maximally at 630 or 740 nm. Samples were subjected to continuous illumination until maximum bleaching was achieved (as judged by absorbance changes at 840 nm), and the magnitude of the absorbance change was plotted against wavelength. The pigment ratio of BChl a per special pair was estimated using the known extinction coefficients for the type-I RC of GSB: ε810 nm = 100 mM⁻¹ cm⁻¹ for antenna BChl a in the RC (27) and Δε830 nm = 90 mM⁻¹ cm⁻¹ for the special pair (28). Electron paramagnetic resonance (EPR) spectroscopy was performed using a Bruker ECS-106 X-band spectrometer equipped with an Oxford liquid helium cryostat and temperature controller. Spectra were the average of eight scans recorded with the following conditions: frequency, 9.487 GHz; gain, 20,000; modulation amplitude, 5 gauss at 100 kHz. Power and temperature are specified in the legend for Fig. 4.
A Spectra-Physics Millenia CW laser operating at 2.2 watts provided actinic light, and dark-adapted samples were illuminated directly in the cavity. Light-induced spectra were obtained by subtracting the spectrum of a dark-adapted sample from that of the illuminated sample. The pigment compositions of the RC preparations were analyzed by reversed-phase (RP) HPLC on C18 columns (Supelco, Bellefonte, PA) as described by Frigaard et al. (29). RP-HPLC analyses of carotenoids on a C30 column (Bischoff Chromatography, Leonberg, Germany) were performed as follows. The gradient was composed of Solvent A (30% methyl t-butyl ether, 66% methanol, 4% water (v/v/v)) and Solvent B (50% methyl t-butyl ether, 30% methanol, 20% acetonitrile (v/v/v)). At the time of injection, the mobile phase was 30% Solvent B at a flow rate of 1 ml min⁻¹. Solvent B was linearly increased to 100% over 40 min, followed by a constant flow of 100% Solvent B for 8 min, after which Solvent B was returned to 30% in 1 min. Pigment ratios were determined using the following molar extinction coefficients: ε665 nm = 71.43 mM⁻¹ cm⁻¹ for Chl a (30), ε770 nm = 54.8 mM⁻¹ cm⁻¹ for BChl a (31), and ε491 nm = 141 mM⁻¹ cm⁻¹ for carotenoids (32). Zn-BChl aP was synthesized as follows. BChl aP was extracted from a purple bacterium, Roseobacter sp., with acetone:methanol (7:2, v/v) and purified by RP-HPLC. The purified BChl aP was treated with 1% (v/v) HCl to produce bacteriopheophytin aP, and the bacteriopheophytin aP was incubated with zinc acetate to produce zinc-chelated BChl aP (hereafter Zn-BChl aP). Diethyl ether and then water were added to the solution, and the ether phase containing Zn-BChl aP and residual bacteriopheophytin aP was collected and evaporated to dryness under a stream of nitrogen. The dried pigments were dissolved in acetone:methanol (7:2, v/v) for further analyses by RP-HPLC.
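Pigment stoichiometries follow from the Beer-Lambert law (A = εcl) with the molar extinction coefficients quoted above. The sketch below uses hypothetical absorbance readings purely for illustration; they are not measured values from this study:

```python
# Molar pigment concentrations from absorbance maxima via Beer-Lambert (A = eps*c*l),
# with the extinction coefficients quoted in the text (mM^-1 cm^-1).
EPS = {"Chl a (665 nm)": 71.43, "BChl a (770 nm)": 54.8, "carotenoid (491 nm)": 141.0}

# Hypothetical absorbance readings (1-cm path), for illustration only.
A = {"Chl a (665 nm)": 0.357, "BChl a (770 nm)": 0.548, "carotenoid (491 nm)": 0.705}
path_cm = 1.0

conc = {name: A[name] / (EPS[name] * path_cm) for name in A}        # mM
ref = conc["BChl a (770 nm)"]
ratios = {name: round(conc[name] / ref, 2) for name in conc}
print(ratios)   # pigment : BChl a molar ratios
```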
Carotenoids were extracted from CBP with acetone:methanol (1:1, v/v), purified by RP-HPLC, and dried under a stream of nitrogen. To test for the presence of keto group(s), the purified carotenoids were dissolved in isopropanol and incubated with NaBH4 as described (33). Absorption spectra were recorded before and after the NaBH4 reduction. To test for the presence of glycosyl and/or acyl esters, carotenoids extracted from CBP were dissolved in methanol and saponified using 5% (w/v) KOH. An equal volume of ether and then water was added to the solution, and the carotenoid-containing ether phase was collected. The carotenoid solution was dried under a stream of nitrogen, dissolved in methanol, and analyzed by RP-HPLC using the C18 column system described above.

RESULTS

Purification and Identification of RC Complex from Ca. C. thermophilum-To isolate RCs from Ca. C. thermophilum, a chlorosome-depleted membrane fraction was first obtained by sucrose density gradient ultracentrifugation using a buffer containing 2.0 M sodium thiocyanate. Sodium thiocyanate is a chaotropic agent that has been used to detach chlorosomes from cytoplasmic membranes in GSB and Ca. C. thermophilum (8,15,34). Although the membrane preparations obtained were not completely free of chlorosome contamination, as indicated by a chlorosome-specific absorbance peak at ~740 nm (data not shown), a large portion of the chlorosomes was removed by this method. Other chaotropes (e.g., sodium iodide) were tested, and they were also effective in completely detaching the chlorosomes and produced results similar to those with sodium thiocyanate. The chlorosome-depleted membranes were solubilized using 0.1% (w/v) DDM. After ultracentrifugation, the pellet contained the residual contaminating chlorosomes, and the supernatant no longer exhibited an absorption peak at ~740 nm. The supernatant fraction was subjected to anion-exchange column chromatography, and orange-colored, RC-containing fractions were collected.
SDS-PAGE analysis of the RC-containing fractions showed two polypeptides with apparent masses of 110 and 22 kDa (Fig. 1A). These bands were directly excised from the gel and subjected to tryptic peptide mass fingerprinting analysis (supplemental Fig. S1). The results showed that the 110-kDa band was PscA (Cabther_A2188; predicted mass, 99.2 kDa), and the 22-kDa band was a hypothetical protein (Cabther_A1191; predicted mass, 17.2 kDa), which was annotated as containing a prepilin-type N-terminal cleavage/methylation domain. The coverage percentages for the peptides detected in this analysis were 19.7% for PscA and 42.4% for the product of Cabther_A1191, to which we have assigned the gene locus designation cbpC (carotenoid-binding protein; see below). The PscB protein, which has a predicted molecular mass of 19.2 kDa and is predicted to ligate the two terminal electron-accepting [4Fe-4S] clusters (FA and FB) of the RC, was not observed. PscB may have been lost because of the use of chaotropic agents to remove chlorosomes during membrane isolation. PscB in the RCs of C. tepidum and PshB in the RCs of Heliobacterium modesticaldum are also easily removed, unlike the FA- and FB-containing protein PsaC in photosystem I (35,36). Native PAGE experiments performed on the purified RC complex showed a single, diffuse, orange-pigmented band, which had an apparent mass of about 480 kDa (Fig. 1B). This result suggested that the 22-kDa carotenoid-binding apoprotein CbpC and the 110-kDa PscA core subunit form a multisubunit complex. To investigate whether FMO was initially bound to the RC, as in GSB, chlorosome-containing membranes prepared without chaotrope treatment were solubilized with 18 mM OG and subjected to ion-exchange chromatography. FMO did not co-elute with the RC (data not shown). When membranes from C. tepidum were treated in the same manner, the RCs retained FMO (37,38). These results suggest that FMO is more loosely bound to the RC complex in Ca. C.
thermophilum than in GSB.

Spectroscopic Features of Type-I RC from Ca. C. thermophilum-The absorption spectrum of the isolated RC complex showed a large absorption peak at 483 nm with shoulders at about 455 and 515 nm and smaller peaks at 672 and 812 nm with a shoulder at ~825 nm (Fig. 2). Using the RC from GSB as a reference, the 812 and 672 nm peaks are attributed to the Qy bands of BChl a and Chl a, respectively. The large absorption band between 450 and 550 nm is most likely due to the high carotenoid content of the RC complex (see below). A small absorption peak at 600 nm, which could be attributed to the Qx band of BChl a, was observed, but this feature was usually obscured by the large carotenoid absorption band. The peak at 600 nm was more obvious in preparations that had been depleted of the CBP and were correspondingly more enriched in PscA. Fractions of this type were obtained during the purification of the CBP, but these fractions still contained some contaminating chlorosomes (data not shown).

Fig. 3A shows the light-induced difference spectrum of the RC complex measured by continuous illumination at room temperature under oxic conditions. The difference spectrum showed a large absorbance decrease at 840 nm with a shoulder at ~820 nm. The photobleaching at 840 nm was also the dominant feature observed in whole cells and chlorosome-containing membranes (Fig. 3B). Because the bleaching at 840 nm coincides with the presence of PscA (as measured by SDS-PAGE), we attribute the absorbance change at 840 nm to the special pair, which we denote as P840. Using extinction coefficients for the RC of GSB (ε810 nm = 100 mM⁻¹ cm⁻¹ for antenna BChl a (25) and Δε830 nm = 90 mM⁻¹ cm⁻¹ for P840⁺/P840 (26)) and freshly isolated RC complexes, the ratio of BChl a per special pair in Ca. C. thermophilum was estimated to be 10.3 ± 0.96.
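The BChl a-per-P840 estimate above is a straightforward Beer-Lambert calculation from two absorbance readings and the cited GSB extinction coefficients. The sketch below reproduces that arithmetic; the absorbance values used in the example are hypothetical illustrative numbers, not measurements from this study.

```python
# Antenna-BChl-a-per-special-pair estimate, following the extinction
# coefficients cited in the text (refs. 25 and 26). The absorbance inputs
# in the example call are HYPOTHETICAL, chosen only for illustration.

EPS_BCHL_810 = 100.0        # mM^-1 cm^-1, antenna BChl a at 810 nm
DELTA_EPS_P840_830 = 90.0   # mM^-1 cm^-1, P840+/P840 difference at 830 nm

def bchl_per_p840(a810, delta_a830, path_cm=1.0):
    """Molar ratio of antenna BChl a to photooxidizable P840."""
    bchl_mM = a810 / (EPS_BCHL_810 * path_cm)             # Beer-Lambert: c = A/(eps*l)
    p840_mM = delta_a830 / (DELTA_EPS_P840_830 * path_cm)
    return bchl_mM / p840_mM

# A steady-state A810 of 0.80 with a light-induced bleach of 0.070 at 830 nm
# would give a ratio of ~10.3, matching the value reported for these RCs.
print(round(bchl_per_p840(0.80, 0.070), 1))  # -> 10.3
```

Note that the two wavelengths differ (810 nm for the antenna pool, 830 nm for the photobleach), so each concentration is computed with its own coefficient before the ratio is taken.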
Consistent with the absence of absorbance features around 740 nm in the UV-visible spectrum, the RC complexes showed no measurable activity when illuminated with 740-nm actinic light. As expected, samples containing chlorosomes were active when illuminated with 740-nm actinic light (Fig. 3B). Note that all of the samples exhibited similar photobleaching behavior even in the presence of oxygen. It was not necessary to use a sealed, anoxic cuvette, which must be used to measure absorbance changes for oxygen-sensitive RCs (i.e. GSB and heliobacterial RCs; see below). These data demonstrated that the RCs retained photoactivity even after prolonged exposure to air and illumination.

A relatively large absorbance increase at 676 nm was a second feature that was common to the light-induced difference spectra of whole cells, chlorosome-containing membranes, and RC preparations. A similar feature has been observed in RCs from GSB, and in that case, it has been attributed to an electrochromic shift that occurs for Chl a molecules bound near the special pair (5, 39, 40). Similar to the RCs of GSB (see below), the RC complexes of Ca. C. thermophilum bind Chl a. Furthermore, the lifetime of the absorbance increase at 676 nm is highly similar to that at 840 nm. Thus, we tentatively assign the absorbance increase at 676 nm to an electrochromic shift of a Chl a molecule near the special pair.

The light-induced difference spectrum of whole cells also showed a relatively large bleaching at 553 nm, but no similar bleaching was observed in the difference spectrum of chlorosome-containing membranes (Fig. 3B). Furthermore, the lifetime for the recovery of oxidized P840⁺, as measured by the increase in absorption at 840 nm, was much longer in membranes than in whole cells. The addition of a soluble protein fraction back to membranes resulted in shorter recovery lifetimes for the absorption at 840 nm and the reappearance of the bleaching at 553 nm.
Given the wavelength of this change, its absence in membrane fractions, and its effect upon the recovery of the 840 nm photobleaching, we ascribe the feature at 553 nm to one or more soluble c-type cytochromes that act as electron donors to the oxidized special pair.

The light-induced EPR spectrum of chaotrope-treated membranes recorded at 84 K showed a derivative-shaped signal with a crossover at g = 2.002 (Fig. 4). This signal could only be generated using intense illumination. Plots of the signal intensity versus microwave power or temperature suggested that this signal originated from an organic radical; its line width of 8.8 gauss was consistent with that of a (B)Chl dimer. After the actinic illumination was turned off, the signal decayed to undetectable levels within minutes; hence, the light-induced EPR signal was completely reversible (data not shown). Based on the g-value, power and temperature dependences, and line width, this light-induced signal was assigned to the oxidized primary donor (P840⁺).

Pigment Composition of Ca. C. thermophilum RC Complex-Pigments extracted from the RC complexes were analyzed by RP-HPLC (Fig. 5). The elution profiles of pigment extracts were monitored at 770 nm for BChl a, 667 nm for Chl a and BChl c, 491 nm for carotenoids, and 270 nm for quinones. As shown in Fig. 5, the HPLC analyses verified the presence of BChl a (35 min), Chl a (39.5 min), and two major elution peaks corresponding to carotenoids (42 and 43 min). No BChl c was detected. When monitoring was performed at 270 nm, a compound with an absorption spectrum like that of menaquinone was sometimes, but not always, observed at 59 min (data not shown). Cells and chlorosomes of Ca. C. thermophilum contain menaquinone-8(H2), which is menaquinone-8 with one reduced double bond in the isoprenoid tail (16, 41). The molar ratio of BChl a to Chl a was found to be 1.60 ± 0.05.
Combined with the ratio of BChl a to P840 calculated above, the molar ratio of BChl a:Chl a:P840 was estimated to be 10.3:6.44:1.00. The absorption spectrum of the pigment eluting at 35 min (Fig. 5A, black line) was typical of BChl a; this pigment had the same elution time as authentic BChl aP derived from C. tepidum (29). Thus, the BChl a in Ca. C. thermophilum RCs is esterified with phytol (supplemental Fig. S2B). To determine the identity of the esterifying alcohol of the Chl a in the purified RC complexes (Fig. 5A, gray line), we used Chl a esterified with phytol (Chl aP) from Synechococcus sp. PCC 7002 and Chl a esterified with Δ2,6-phytadienol (Chl aPD) from C. tepidum as HPLC standards (40). The Chl a derived from the Ca. C. thermophilum RCs had the same elution time as Chl aPD from C. tepidum (supplemental Fig. S3). Thus, the Chl a molecules in the RC complexes of Ca. C. thermophilum are probably Chl aPD.

In addition to the major peak for BChl aP eluting at 35 min, a smaller peak eluting at ~40 min with a spectrum similar to that of a BChl was always observed in six different RC complex preparations. The absorption spectrum of this component had a maximum at 763 nm (supplemental Fig. S2C) and was very similar to that of Zn-BChl a. To verify its identity, a Zn-BChl a standard was chemically prepared (see "Experimental Procedures"), and the absorption spectrum of the resulting standard was measured (supplemental Fig. S2D). Although the 500-700-nm region of the absorption spectrum of the component eluting at 40 min was somewhat distorted by the overlapping absorbance of Chl aPD eluting at 39.5 min, the spectrum of this component was clearly similar to that of the Zn-BChl aP standard. To investigate this component further, the putative Zn-BChl aP fraction was collected and analyzed by mass spectrometry.
The putative Zn-BChl aP eluting at 40 min had a mass of 951.7 Da and also had the isotopic mass pattern that is typical for Zn-containing molecules (Fig. 6). These results establish that the RCs of Ca. C. thermophilum contain Zn-BChl aP. No Zn-BChl aP was observed in pigment extracts of the purified FMO protein (16) or chlorosomes (8, 15, 16), but this component was always observed in whole cells and RC preparations of Ca. C. thermophilum, which suggests that Zn-BChl aP is an RC-specific pigment. The ratio of the major BChl aP (at 35 min) to Zn-BChl aP (at 40 min) was 6.41 ± 1.58. Given that 10.3 BChl a molecules are bound to one RC complex, this suggests that 1.61 molecules of Zn-BChl aP are present per RC. Alternatively, if one assumes that there are actually 2.0 molecules of Zn-BChl aP per RC (per P840), then these RCs contain 12.8 BChl aP:8.0 Chl aPD:2.0 Zn-BChl aP per RC. The Zn-BChl aP in the RC complex had the same mass (Fig. 6) and absorption spectrum (supplemental Fig. S2) as the Zn-BChl aP standard, but it eluted about 1 min later during RP-HPLC analysis (supplemental Fig. S4). Because of this difference, we propose that the Zn-BChl aP in the Ca. C. thermophilum RC is the C-13² epimer, i.e. Zn-BChl aP′. It has previously been reported that BChl a′ and Chl a′, the C-13² epimeric forms of BChl a and Chl a, are slightly more hydrophobic than the latter and thus elute earlier upon normal-phase HPLC (40, 42).

Because the RP-HPLC profiles of carotenoids extracted from the RC complex and the CBP complex were nearly identical (Figs. 5B and 7D), most of the carotenoids extracted from the RC complex, especially the two major carotenoid species eluting at 42 and 43 min, are probably derived from CBP. However, a carotenoid that eluted at 50 min was not observed in the carotenoids extracted from the CBP alone, and this carotenoid also increased in membrane fractions enriched in PscA (supplemental Fig. S5).
This carotenoid had the same retention time and absorption spectrum as an authentic lycopene standard. Based on these results, lycopene appears to bind specifically to the RC core complex (the PscA homodimer), although it is possible that other carotenoids might also be components of this complex.

Characterization of Carotenoid-binding Protein-When membranes were solubilized with OG instead of DDM, fractions containing only the CBP could be isolated. Sucrose density gradient centrifugation of membranes solubilized with OG resolved three fractions: a thick orange-colored fraction, a brownish-green fraction, and a greenish-brown fraction (see Fig. 7A). As judged from their absorption properties, the middle green layer was a chlorosome-containing fraction, and the lower greenish-brown layer was a CBP-depleted, RC-enriched fraction that still contained some contaminating chlorosomes. The upper orange layer that contained the CBP was collected, diluted, and concentrated by ultrafiltration. SDS-PAGE analyses showed that the upper orange layer contained a single polypeptide, CbpC, with an apparent mass of 22 kDa (Fig. 7B). The absorption spectrum of the fraction containing only the CBP complex exhibited a large absorbance peak at 485 nm and a small peak at 672 nm (Fig. 7C). The ratio of the 672 nm peak to the 485 nm peak depended on the concentration of detergent used in the isolation. When the concentration of OG was increased from 18 to 34 mM, the peak at 672 nm became nearly undetectable (Fig. 7C, gray line).
RP-HPLC analysis of pigments extracted from the CBP demonstrated the presence of the same two carotenoid species as in the RC-CBP complex (Fig. 7D). This observation suggested that the two carotenoids detected in the RC-CBP complex were mostly derived from the CBP complex. BChl a and BChl c were not detected in the purified CBP complex (data not shown). Chl a was detected in the CBP sample that was isolated using 18 mM OG, and this suggested that the absorption peak at 672 nm was probably due to the presence of a small amount of Chl a. Ultrafiltration experiments showed that the pigments absorbing at 485 nm were bound to the protein and were unlikely to represent carotenoid pigments in detergent micelles (data not shown). When the CBP was electrophoresed at 4 °C by PAGE containing 0.1% (w/v) LDS instead of SDS, the unstained protein retained its yellow-orange color and had an apparent mass of ~22 kDa. Thus, it is proposed that the CbpC polypeptide binds carotenoids (see results from the RP-HPLC analysis described below).

When the pigment extract from the CBP complex was analyzed by RP-HPLC on a C18 column, two major carotenoid peaks (denoted peaks 1 and 2) were detected (Fig. 7D). These peaks were collected and reanalyzed by RP-HPLC on a C30 column as described under "Experimental Procedures." The elution profile of peak 1 on the C30 column showed that peak 1 contained two carotenoid species (denoted as peaks 1A and 1B) (supplemental Fig. S6, left panel), whereas the compound in peak 2 still eluted as a single compound (data not shown). To test whether these carotenoids contained keto groups, peaks 1A, 1B, and 2 were reduced with NaBH4. After NaBH4 reduction, the absorption spectra of all three carotenoid fractions changed and showed enhanced fine structure features (supplemental Fig. S6, A, B, and C, gray lines). These results indicated that all three carotenoid species contained at least one keto group.
The mass [MH⁺] of peak 1A was determined to be 551.4 Da. Based upon the absorption spectra before and after the NaBH4 treatment, the elution times from RP-HPLC, and its mass, peak 1A was identified as echinenone, which is known to be one of the major carotenoids in chlorosomes of Ca. C. thermophilum (16, 41, 43). The absorption spectra of peaks 1B and 2 were nearly identical both before and after the NaBH4 treatment. Before reduction with NaBH4, the absorption spectra of peaks 1B and 2 were similar to that of deoxyflexixanthin; after the NaBH4 treatment, the spectra were similar to that of 1′-hydroxytorulene (supplemental Fig. S6, B and C).

To test whether these carotenoid species contained glycosyl moieties, carotenoids extracted from the CBP complex were saponified by treatment with KOH, and the saponified carotenoids were analyzed by RP-HPLC (supplemental Fig. S7, red line). After saponification, peak 2 and about half of the material eluting in peak 1 disappeared, and a single new carotenoid (peak 4) appeared. This indicated that peak 2 and half of the material eluting as peak 1 contained glycosyl and/or acyl moieties. The non-saponified portion of peak 1 (denoted as peak 3) (supplemental Fig. S7, left panel, red line) also had an [MH⁺] mass of 551.4 Da. The absorption spectrum of peak 3 was similar to that of echinenone and peak 1A (see supplemental Fig. S6A), and the elution time of peak 3 upon RP-HPLC was the same as that of the echinenone standard purified from chlorosomes of Ca. C. thermophilum. Based on these results, peak 3 is assigned as echinenone (see supplemental Fig. S7F). Therefore, the saponified portion of peak 1 must have given rise to peak 1B. The appearance of the single large peak 4 and its absorption spectrum suggested that the chromophore portions of peak 2 and the saponified material eluting in peak 1 are the same compound. The absorption spectrum of peak 4 was similar to the spectra of peaks 1B and 2 in supplemental Fig.
S6. The [MH⁺] mass of peak 4 was 567.4 Da. Based on the mass data, the absorption spectra, an analysis of the carotenoid biosynthesis genes in Ca. C. thermophilum (discussed below), and the fact that the carotenoids in the CBP complex have keto groups, the chromophore portion of the two major carotenoids in the CBP complex is probably deoxyflexixanthin (supplemental Fig. S7E). The difference in elution times for the non-echinenone portion of peak 1 (peak 1B) and peak 2 likely arises from differences in the glycosyl and/or acyl moieties attached to the deoxyflexixanthin chromophore. No further attempts were made to identify the nature of these modifying groups, which must occur at the 1′-OH of the ψ-end of these molecules.

Oxygen Tolerance of Reaction Center Complex-To study the oxygen tolerance of the RC complexes that had been purified on the benchtop under oxic conditions, RCs were exposed to repeated illumination under oxic or anoxic conditions, and photobleaching of P840 was measured optically at 840 nm. When the RC complexes were diluted in anoxic buffer and sealed in a cuvette under anoxic conditions, the RC retained nearly 100% activity after eight rounds of P840 photobleaching and recovery (Fig. 8, diamonds). When the RCs were assayed under oxic conditions, the complexes still retained 99% activity after eight rounds of illumination and recovery (Fig. 8, squares). For comparison, RC core complexes, which had been isolated from the strict anaerobe H. modesticaldum and were devoid of the PshBI and PshBII proteins (36), lost nearly 40% activity after only six photobleaching cycles when assayed under similar oxic conditions (Fig. 8, circles). These results indicate that, when PscB is dissociated from the RC core homodimer, the RC-CBP complex isolated from Ca. C. thermophilum is much more oxygen-tolerant than the homodimeric RCs of heliobacteria. Whole cells and chlorosome-containing membranes from Ca. C.
thermophilum showed nearly no decrease in photoactivity even after dozens of actinic exposures (data not shown). Given that the PscB and PshB proteins that harbor the FA and FB [4Fe-4S] clusters in other homodimeric RCs are lost after treatment with chaotropes or high ionic strength buffer washes (36), it is highly likely that PscB is retained in whole cells and chlorosome-containing membranes, which were prepared under low ionic strength conditions. Combined with the results indicating that the RC core complexes were relatively oxygen-tolerant, these observations suggest that the RC-CBP complex of Ca. C. thermophilum is much more oxygen-tolerant than the homodimeric RCs of heliobacteria and GSB, both in the presence and the absence of PscB. This tentative conclusion will be tested more rigorously in future studies involving RC-CBP complexes containing PscB.

Carotenoids are known to function in photoprotection by quenching Chl triplet states and by quenching singlet oxygen (44). To test for a possible role of the CBP complex in oxygen tolerance, RC preparations that were depleted of the CBP complex were also assayed under oxic conditions. After eight illumination and recovery periods, the CBP-depleted, PscA-enriched RC fractions retained 94% activity (Fig. 8, triangles), but after 12 illumination periods, only 87% activity remained. These results suggest that the CBP complex might play a role in the oxygen tolerance of the RC-CBP complex. Attempts to reconstitute CBP-depleted RCs with the isolated CBP did not restore oxygen tolerance (data not shown).

Table 1 summarizes and compares the properties of the RCs of Ca. C. thermophilum, C. tepidum, and H. modesticaldum. In combination with RP-HPLC analyses, spectroscopic measurements suggested that BChl aP, Chl aPD, and Zn-BChl aP molecules are bound to the RC complex of Ca. C. thermophilum in the ratio 12.8:8.0:2.0 (per P840).
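The 12.8:8.0:2.0 stoichiometry follows arithmetically from the three measured ratios reported earlier (10.3 BChl a per P840, BChl a:Chl a = 1.60, and BChl aP:Zn-BChl aP = 6.41), rescaled so that exactly two Zn-BChl aP molecules are present per RC. A short sketch of that rescaling, using only values from the text:

```python
# Arithmetic behind the 12.8 : 8.0 : 2.0 (BChl aP : Chl aPD : Zn-BChl aP')
# stoichiometry, reproduced from the measured ratios reported in the text.

BCHL_PER_P840 = 10.3   # antenna BChl a per special pair (optical estimate)
BCHL_TO_CHL   = 1.60   # BChl a : Chl a (RP-HPLC)
BCHL_TO_ZN    = 6.41   # BChl aP : Zn-BChl aP (RP-HPLC peak areas)

chl_per_p840 = BCHL_PER_P840 / BCHL_TO_CHL   # ~6.44 Chl aPD per RC
zn_per_p840  = BCHL_PER_P840 / BCHL_TO_ZN    # ~1.61 Zn-BChl aP per RC

# Rescale so that exactly 2.0 Zn-BChl aP' are present per RC:
scale = 2.0 / zn_per_p840
stoichiometry = (BCHL_PER_P840 * scale, chl_per_p840 * scale, 2.0)
print(tuple(round(x, 1) for x in stoichiometry))  # -> (12.8, 8.0, 2.0)
```

The rescaling assumes an integral Zn-BChl content; the unscaled value of ~1.61 per RC is what the raw ratios give directly.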
The total (B)Chl content (~23 (B)Chl molecules) of these RCs is similar to those of other organisms with homodimeric type-I RCs (31, 45). The PscA core subunit may bind the entire complement of (B)Chl pigments, and the deduced amino acid sequence of PscA from Ca. C. thermophilum includes 22 histidine residues that might serve as (B)Chl ligands. However, CbpC may also bind some Chl a (see below). The PscA homodimer in GSB is estimated to bind 16 BChl a and four to six Chl a molecules (5, 31, 45). C. tepidum PscA contains 19 histidine residues per monomer as potential ligands for binding these (B)Chl molecules.

DISCUSSION

The BChl-like component eluting at ~40 min in the RP-HPLC profile (Fig. 5A) was confirmed to be Zn-BChl aP by its mass, the isotopic pattern of the mass spectrum (Fig. 6), its absorption spectrum with a wavelength maximum at 763 nm (supplemental Fig. S2C), and the similarity of its retention time upon RP-HPLC to that of Zn-BChl aP (supplemental Fig. S4). However, because of the small difference in the retention times of chemically produced Zn-BChl aP and the compound detected in the Ca. C. thermophilum RCs, we propose that the latter is actually the C-13² epimer, i.e. Zn-BChl aP′. This is the first time that a wild-type phototrophic bacterium has been shown to synthesize both Mg-BChl aP and Zn-BChl aP. Some species of the genus Acidiphilium have Zn-BChl aP as their sole BChl (46). In the case of Acidiphilium rubrum, cells synthesize but do not accumulate Mg-BChl a; the substitution of magnesium by zinc apparently occurs non-enzymatically after synthesis. A. rubrum uses Zn-BChl a not only for electron transfer reactions but also as an antenna pigment in the RC and light-harvesting 1 complexes. A recent study showed that Rhodobacter capsulatus produces small amounts of Zn-BChl a when the magnesium chelatase subunit ChlD is eliminated by mutation (47). In this case, ferrochelatase is responsible for the insertion of zinc into protoporphyrin IX.
It is noteworthy that Ca. C. thermophilum is found in neutral to slightly alkaline environments (pH 7-9), and hence it seems unlikely that magnesium release and zinc insertion occur naturally in the environment after the synthesis of Mg-BChl a. The insertion of zinc may therefore occur enzymatically in Ca. C. thermophilum. Based upon the analyses conducted in this study, the RCs of Ca. C. thermophilum most likely contain two molecules of Zn-BChl aP′ per homodimer or P840 (Table 1). These two Zn-BChl aP′ molecules could function as the special pair, the A0 acceptor, or even as secondary electron transfer components functioning between A0 and the Fe-S cluster FX. Whereas the λmax of Zn-BChl aP (763 nm) occurs at a shorter wavelength than that of Mg-BChl aP (770 nm), the Qy absorption band of the special pair in the Ca. C. thermophilum RC occurs at a longer wavelength (840 nm) than in GSB RCs (830 nm), although the difference spectrum has a very different shape. In the light-induced difference spectra for cells, membranes, and RC-CBP complexes, the dominant absorbance decrease at 840 nm with a shoulder at 820 nm is opposite of that observed in the RCs of GSB, which typically show maximal bleaching at 830 nm with a shoulder or associated smaller peak at a longer wavelength near 840 nm. Despite this pattern, the special pair in GSB is referred to as P840 (48). We similarly refer to the special pair of the Ca. C. thermophilum RC as P840, but we note that the special pair in this case actually exhibits maximal bleaching at 840 nm. These observations might indicate that the Zn-BChl aP′ molecules in the Ca. C. thermophilum RC do not function as the special pair but instead act as one of the electron acceptors. The redox potential of Zn-BChl aP is reported to be slightly higher than that of Mg-BChl aP (49).
On the other hand, heliobacterial RCs have a special pair comprising a dimer of BChl g′ epimers, and GSB reaction centers contain two molecules of BChl a′ (Table 1). Therefore, a special pair comprising Zn-BChl aP′ molecules is plausible. Additional spectroscopic studies will be required to establish the role of the Zn-BChl aP′ molecules in the Ca. C. thermophilum RCs.

Most RC complexes purified from GSB have five protein subunits, PscA, PscB, PscC, PscD, and FmoA (FMO) (see Table 1), although the presence or absence of FMO depends on the species and the detergents used (nicely summarized in Sakurai et al. (38)). The FMO protein is firmly attached to the cytoplasm-facing side of the RC in most GSB, and its orientation has recently been deduced (24). Although the pscB and fmoA genes occur in the same operon as pscA in the Ca. C. thermophilum genome (8, 10), the purified RC complexes did not contain FmoA or PscB. These proteins may have been lost because of the use of sodium thiocyanate to remove chlorosomes from the membranes during the purification. The Ca. C. thermophilum genome does not encode pscC and pscD genes (10), and functionally similar proteins were not identified during this study. The observation that soluble protein fractions could accelerate the recovery of light-induced photobleaching at 840 nm and the observation of light-induced bleaching at 553 nm in whole cells suggested that soluble c-type cytochrome(s) donate electrons to P840⁺.

The cbpC gene (Cabther_A1191), which encodes the apoprotein of the CBP complex, is annotated as having a prepilin-type N-terminal cleavage/methylation domain. Cabther_A1191 is not co-localized with any other pilus-related genes, which often occur in operons (50). The cbpC gene product obviously binds carotenoids (Fig. 7), and it seems highly unlikely that the Cabther_A1191 product is actually a pilus-related protein.
In the native PAGE experiments, the purified RC complex migrated as a single band at ~480 kDa, whereas SDS-PAGE and mass spectrometry only showed the presence of the 99-kDa PscA (apparent mass, ~110 kDa) and 22-kDa CbpC polypeptides. By RP-HPLC analysis, we estimated the ratio of carotenoids to Chl a in the purified RC complex to be 5.23 ± 1.04 (Table 1). Assuming 6.44 Chl a molecules are present in each RC, there are about 34 carotenoid molecules in the RC complex. It is highly unlikely that a 22-kDa CbpC apoprotein could bind 34 carotenoids, and thus there are probably multiple CbpC subunits in an RC complex. Assuming that the PscA core homodimer accounts for ~220 kDa of a 480-kDa complex, then 11.8 CbpC subunits would be required. Based on this stoichiometry, about 2.8 carotenoids are probably bound to one CbpC subunit. Future studies will be required to establish whether Chl a and/or carotenoids are being removed from the CbpC during solubilization and isolation and whether the CbpC-bound pigments can transfer energy to the (B)Chls of the RC core complex.

The spectroscopic properties of the CBP complex, with its intense absorbance from carotenoids and very weak absorbance from Chl a, are reminiscent of peridinin-Chl a protein (PCP), a 34-kDa light-harvesting antenna protein found in marine algae (51). PCP is unusual among light-harvesting complexes because of its high ratio of a carotenoid (peridinin) to Chl a. The crystal structure of PCP from the dinoflagellate Amphidinium carterae revealed that eight peridinin molecules and two Chl a molecules are bound per PCP monomer (52). In PCP, peridinin harvests light energy and transfers the excited states to Chl a. In line with the high carotenoid to Chl a ratio, the absorption spectrum of PCP displays a dominant absorbance band from peridinin in the 400-550-nm region and a small Qy band from Chl a at 670 nm (51). These spectral features are similar to those of the CBP complex of Ca. C.
thermophilum, although in the purified CBP fraction, the absorbance value for Chl a depended on the concentration of OG used during purification. This might imply that some Chl a molecules are located at the interface between the PscA core and the CbpC subunits and that these Chl a molecules can be displaced by detergent molecules during the purification. Whether the CBP complex functions as a light-harvesting complex like PCP is currently uncertain. However, the experiments shown here suggest that the CBP complex contributes to the photostability of the RC-CBP complex under oxic conditions (Fig. 8). It should also be noted that a ketocarotenoid, 3′-hydroxyechinenone, acts as a strong quencher in the orange carotenoid protein, which quenches excess excitation in cyanobacteria (53).

The major carotenoid in chlorosomes of Ca. C. thermophilum was identified previously as echinenone, which is also the most abundant carotenoid in whole cells (16, 41, 43). The synthesis of echinenone from lycopene requires cyclase(s) capable of producing β-carotene and a 4-ketolase (54). The Ca. C. thermophilum genome contains both cruA and crtYcYd genes, representing two of the four families of lycopene cyclases (54, 55), and a crtO gene for the 4-ketolase (10). The genome also contains crtC (1′,2′-hydratase) and crtD (3′,4′-desaturase) genes (10). The two major carotenoids in the CBP complex were shown to have glycosyl moieties, and the chromophore portion of these carotenoids was identified as deoxyflexixanthin. The synthesis of deoxyflexixanthin from lycopene requires a lycopene monocyclase (either CruA or CrtYcYd), CrtC, CrtD, and a 4-ketolase (CrtO). The presence of genes for two lycopene cyclases suggests that one may act preferentially as a monocyclase, whereas the other enzyme is either a bicyclase or preferentially adds a second ring to γ-carotene, like CruB in BChl e-containing GSB strains (56). The complement of genes for carotenogenesis in Ca. C.
thermophilum is completely consistent with the assignment of deoxyflexixanthin as the chromophore of these glycosylated (and/or acylated) carotenoids.

In summary, we have purified an RC-CBP complex from Ca. C. thermophilum and demonstrated that it retains light-induced photobleaching of P840 in the presence of oxygen. Overall, these RCs have properties that are intermediate between the more complex RCs of GSB and the simpler RCs of heliobacteria (Table 1). The purified RC-CBP complex contained only two polypeptides, the homodimeric core subunit PscA and a novel carotenoid-binding subunit, which may function in light harvesting, oxygen tolerance, and/or photoprotection. The CBP complex itself presents an interesting subject for future spectroscopic studies because of its high carotenoid to protein ratio and the possibility that it binds Chl a. Like other previously characterized homodimeric type-I RCs, the Ca. C. thermophilum RC-CBP complex binds a relatively small number of BChl a and Chl a molecules, but this RC is unique because it contains both Mg-BChl aP and Zn-BChl aP′. Because of its simple subunit composition, oxygen tolerance, and unique pigment complement, the RC of Ca. C. thermophilum may provide new insights into the structural, functional, and evolutionary relationships of RCs.
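The CbpC stoichiometry argument made under "DISCUSSION" (about 34 carotenoids per complex, ~11.8 CbpC subunits, ~2.8 carotenoids per subunit) is a simple mass-balance estimate; it can be sketched as below, using only values reported in the text. The 220-kDa core mass is the stated assumption (two copies of the ~110-kDa PscA).

```python
# Mass-balance sketch of the CbpC stoichiometry estimate: how many CbpC
# subunits, and how many carotenoids per subunit, fit into the 480-kDa
# RC-CBP complex. All inputs are values reported in the text.

COMPLEX_KDA    = 480.0   # native PAGE apparent mass of the RC-CBP complex
PSCA_DIMER_KDA = 220.0   # assumed PscA core homodimer mass (2 x ~110 kDa)
CBPC_KDA       = 22.0    # apparent mass of one CbpC subunit

CAR_PER_CHL = 5.23       # carotenoid : Chl a ratio (RP-HPLC)
CHL_PER_RC  = 6.44       # Chl a molecules per RC (pigment stoichiometry)

cbpc_per_rc  = (COMPLEX_KDA - PSCA_DIMER_KDA) / CBPC_KDA  # ~11.8 subunits
car_per_rc   = CAR_PER_CHL * CHL_PER_RC                   # ~34 carotenoids
car_per_cbpc = car_per_rc / cbpc_per_rc                   # ~2.8 per subunit

print(round(cbpc_per_rc, 1), round(car_per_rc), round(car_per_cbpc, 1))
```

As the Discussion notes, this estimate ignores any pigment lost during solubilization, so the per-subunit carotenoid count is a lower-bound-style approximation rather than a measured value.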
Groundwater quality on dairy farms in central South Africa

Dairy farms in central South Africa depend mostly on groundwater for domestic needs and dairy activities. Groundwater samples were collected from 37 dairy farms during 2009 and 2013. Sixteen water quality parameters were tested and compared to the standard. Four parameters in 2009 and six in 2013 exhibited 100% compliance with the standard. Nitrate, Escherichia coli and total coliforms showed relatively low compliance across farms and years. Almost all farms were non-compliant for hardness in both sampling years. T-tests revealed significant changes from 2009 to 2013 for pH (t = 2.580; p = 0.006), hardness (t = 2.197; p = 0.016) and potassium (K) (t = 1.699; p = 0.0468). For hardness, approximately 45% of the farms in 2009, and 57% in 2013, posed a health risk to sensitive consumers. More than 50% of the farms in both years demonstrated levels of nitrates that could pose a health risk, particularly for babies. High levels of coliforms and E. coli were found, indicating a health risk for clinical infections in consumers. The number of farms presenting 3 or more parameters with a health risk more than doubled, from 13.5% in 2009 to 27.0% in 2013.

INTRODUCTION

Dairy farming is the fourth largest agricultural industry in South Africa, representing 6% of the gross value of overall agricultural production (Mkhabela et al., 2010). The dairy industry is also a major contributor to the South African economy through employment, with about 60 000 farm workers employed by more than 4 000 milk producers (DAFF, 2012). The total number of milk producers, as recorded in January 2008, was 3 665, of which 919 were situated in the Free State Province (Mkhabela et al., 2010). The number of milk producers in the Free State decreased from 919 in 2008 to 498 in 2013 (Milk SA, 2013).
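The year-to-year comparisons reported in the abstract (e.g. t = 2.580 for pH) come from t-tests on per-farm measurements. A minimal paired-comparison sketch is shown below; the pH readings are hypothetical illustrative values, and the study does not state whether paired or independent-sample tests were used.

```python
# Minimal paired t statistic for two sampling years of per-farm readings.
# The pH values below are HYPOTHETICAL, not the study's data; the study's
# reported statistics were t = 2.580 (pH), 2.197 (hardness), 1.699 (K).

import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic: mean difference over its standard error."""
    d = [a - b for a, b in zip(x, y)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))

# Hypothetical pH readings for five farms in each sampling year:
ph_2009 = [7.4, 7.6, 7.1, 7.8, 7.3]
ph_2013 = [7.1, 7.4, 7.0, 7.5, 7.2]
print(round(paired_t(ph_2009, ph_2013), 2))  # -> 4.47
```

A positive t here means the 2009 readings were higher on average; significance would then be judged against a t distribution with n − 1 degrees of freedom.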
Dairy enterprises utilise water for all steps of the dairy industry, including cleaning, sanitisation, heating, cooling and floor washing. Dairy farm effluent, which refers to manure and urine deposited throughout the milking process, is diluted while washing the milking shed floor (Williamson et al., 1998; Hooda et al., 2000). Animal waste in dairy effluent is a major source of pollution through nutrient enrichment of streams and groundwater which may, in turn, have a significant impact on the environment (Wilcock et al., 1999; Ali et al., 2006; Atalay et al., 2008; Kay et al., 2008; Van der Schans et al., 2009).

The harmful effects of agricultural activities on groundwater (Gillingham and Thorrold, 2000; Dahiya et al., 2007; Monaghan et al., 2009) are becoming an increasing concern worldwide (Mohammad and Kaluarachchi, 2004). In South Africa, most dairy effluent is discharged onto pastures and land (Strydom et al., 1993) and has been shown to pollute groundwater (Tredoux et al., 2000). Therefore, disposal practices for dairy effluent and manure in dairy enterprises are currently undergoing critical revision to reduce their impact on groundwater quality (Goss and Richards, 2008).
Most dairy farms in the Free State utilise groundwater as a human drinking water source and for all dairy activities. Farm groundwater is rarely treated in South Africa. Therefore, if farm effluent and manure are disposed of inappropriately, faecally derived pathogens and nitrates may be introduced into groundwater which, in turn, may pose a risk to human health when the water is used for drinking and in dairy activities (Harter et al., 2002; Oliver et al., 2009). Faecal contamination of groundwater has been linked to outbreaks of various water-borne infections (Krolik et al., 2013). Nitrates have been implicated in methaemoglobinaemia and also in a number of inconclusive health outcomes (Fewtrell, 2004). Acquired methaemoglobinaemia is primarily an issue for infants less than 6 months old (Manassaram et al., 2010).

In South Africa, the quality of water used on dairy farms must meet minimum standards in order to comply with the conditions set out in Regulation 961 (RSA, 2012). Water used in a commercial dairy must comply with the South African National Standard 241 (SANS, 2011) for drinking water. Because the groundwater on dairy farms in South Africa is rarely monitored, this study was undertaken to assess compliance of groundwater with the SANS 241 (SANS, 2011) drinking water standard on dairy farms in the central Free State.
METHODS

Groundwater samples were collected from the point of use on 37 dairy farms located in the districts of Motheo, Xhariep and Lejweleputswa in the central Free State, South Africa. Water samples were collected during 2009, and sampling was repeated in 2013. Sixteen groundwater quality parameters were analysed, namely: electrical conductivity (EC), pH, total hardness, chloride (Cl), sulphate (SO4), phosphate (PO4), nitrate (NO3), fluoride (F), calcium (Ca), magnesium (Mg), sodium (Na), potassium (K), heterotrophic plate count (HPC), total coliforms and Escherichia coli. Total dissolved solids (TDS) were estimated by multiplying EC by the factor of 6.5 (WRC, 1998). Standard sampling and analytical procedures were followed for the physical and chemical parameters, as prescribed by SANS 241 (SANS, 2011) and the Department of Water Affairs (DWAF, 2006).

For the microbiological analyses, the instructions of the manufacturers of Petrifilm® and Colilert®-18 were followed. Prior to sampling, a tap was first flamed and thereafter left to run freely for at least 2 minutes. Electrical conductivity and pH were measured in situ using a MARTINI MI 806 pH/EC/temperature portable meter. For the chemical parameters, water was collected in 500 mℓ bottles while, for the microbial analysis, sterile 100 mℓ bottles were used. All samples were placed in an ice box and transported to the laboratory, where they were stored in a refrigerator at 4°C until analysis was completed. Chemical analyses were conducted by the Institute of Groundwater Studies in Bloemfontein. Water samples for microbiological analysis were processed in the microbiology laboratory of Mangaung Metropolitan Municipality in Bloemfontein. T-tests were performed on the different water quality parameters to ascertain if significant differences existed between the two sampling years.
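The two derived quantities in this section can be sketched in a few lines. Below is a minimal Python illustration (not from the paper) of the TDS estimate and the t statistic; treating the design as paired across years and the EC units expected by the 6.5 factor are assumptions, so this is a sketch of the arithmetic rather than the authors' actual workflow.

```python
import math
from statistics import mean, stdev

def tds_from_ec(ec):
    """Estimate total dissolved solids by multiplying electrical
    conductivity by the factor of 6.5 quoted from WRC (1998).
    The EC units expected by that factor are an assumption here."""
    return ec * 6.5

def paired_t_statistic(year_a, year_b):
    """t statistic for paired samples (the same farms in both years).
    The paper reports only t and p values; treating the design as
    paired is an assumption based on the repeated sampling."""
    d = [b - a for a, b in zip(year_a, year_b)]
    return mean(d) / (stdev(d) / math.sqrt(len(d)))
```

The resulting t statistic would then be compared against a t distribution with n − 1 degrees of freedom to obtain the p-values the paper reports.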
RESULTS

Of the 16 water quality parameters that were measured, 4 parameters in 2009 and 6 in 2013 exhibited 100% compliance with the standard (Table 1). Three parameters, namely nitrate, E. coli and total coliforms, showed relatively low compliance across the farms and years. Approximately one-third of the farms were non-compliant for E. coli and more than 50% for total coliforms in both sampling years. For hardness, almost all the farms were non-compliant in both sampling years. T-tests were performed to ascertain if there were any changes in the quality of the drinking water from 2009 to 2013. Only three of the parameters demonstrated significant change from 2009 to 2013, namely pH (t = 2.580; p = 0.006), hardness (t = 2.197; p = 0.016) and K (t = 1.699; p = 0.0468).

Health and economic implications

Hard water generally poses no health risk for consumers; however, water that is very hard or extremely hard could result in chronic health effects in sensitive groups, such as the aged and immune-compromised (WRC, 1998). In this study, approximately 45% of the farms in 2009 and 57% in 2013 demonstrated water that poses a risk for these sensitive consumer groups (Fig. 1a).

Hard water used for domestic purposes results in scale deposition, particularly in heating appliances, and also requires an increased use of soap (Rubenowitz-Lundin and Hiscock, 2013). The groundwater on many farms tested as hard or very hard, while the water on a few farms tested as extremely hard (Fig. 1a). Because water is used in all dairy cleaning operations, these levels of hardness could add to the running costs of a dairy by reducing the life span of equipment and increasing the amount of soap used.

High levels of nitrate in drinking water are of concern for babies, particularly as groundwater is the only source of drinking water on all the farms studied (WRC, 1998). More than 50% of the farms studied, in both years, demonstrated levels of nitrates that could pose a health risk (Fig.
1b). Of particular concern were the few farms with nitrate levels exceeding 40 mg/ℓ, which pose an acute risk for babies. Furthermore, nitrate poisoning of livestock could result in animal losses (Tredoux et al., 2000). Other adverse health effects in animals include an increased incidence of stillborn calves and abortion, lower milk production and reduced weight gain (Tredoux et al., 2004).

The high levels of coliforms found in the groundwater on many of the farms are of concern, particularly for sensitive groups. Counts of 10-100/mℓ could result in clinical infections, and counts of 100-1 000/mℓ could cause infections even with once-off consumption (WRC, 1998). Counts of >1 000/mℓ in the groundwater on 18.9% of the farms in 2009, and 5.6% in 2013, pose serious health risks for all users (WRC, 1998) (Fig. 2a).

Escherichia coli, on the other hand, poses a health risk to consumers at much lower levels, particularly to sensitive groups (WRC, 1998). Clinical infections are common, even with once-off consumption, at counts of >10-100/mℓ, and serious health effects are common in all users at counts of >100/mℓ (Fig. 2b). These risks are equally prevalent when untreated, polluted groundwater is used for food preparation (WRC, 1998). The four parameters (hardness, nitrate, coliforms and E. coli) were used to ascertain the health-risk exposure of consumers of groundwater on the farms. The number of farms presenting with 3 or 4 of these parameters at a level of risk more than doubled, from 13.5% in 2009 to 27.0% in 2013.
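The WRC (1998) bands quoted above map coliform counts to health-risk categories. A hypothetical helper illustrating that mapping might look like this (category labels are paraphrased; thresholds are as quoted in the text):

```python
def coliform_risk(count_per_ml):
    """Map a total coliform count (per mL) to the WRC (1998) risk bands
    quoted in the text. Category labels are paraphrased, and the band
    edges are read directly from the quoted thresholds."""
    if count_per_ml > 1000:
        return "serious risk for all users"
    if count_per_ml > 100:
        return "infections possible with once-off consumption"
    if count_per_ml >= 10:
        return "clinical infections possible"
    return "lower risk"
```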
DISCUSSION AND CONCLUSIONS

The region in which this study was undertaken is known for its hard water, caused mainly by the natural geology of the region. Nitrate enrichment of water can be attributed mostly to animal waste and run-off from the dairies (Wilcock et al., 1999). On some of the farms the nitrate levels were exceptionally high, up to 7 times greater than the South African specified health limit of 11 mg/ℓ (SABS, 2011), which is more stringent than the 50 mg/ℓ specified for nitrates by the World Health Organisation (WHO, 2011). On 2 farms in 2009 and on 1 farm in 2013, the nitrate measurement exceeded toxic levels of >50 mg/ℓ (Spalding and Exner, 1993). A groundwater study conducted in the rural areas of South Africa indicated that increasing nitrate levels in groundwater are hazardous to bottle-fed infants as well as to livestock (Tredoux et al., 2000).

A further concern is the high levels of coliforms and E. coli that were detected in the water used for domestic purposes and dairy activities. The numbers of total coliforms and E. coli found in the drinking water suggest that poor sanitation conditions and practices are potential reasons for the high presence of microbiological contaminants (Gwimbi, 2011). Although most coliforms do not cause disease, they are indicators of the presence of other disease-causing organisms (Wu et al., 2011). At the high levels found in this study, coliforms could pose a health threat even with once-off consumption (WRC, 1998). At more than 55% of the farms, E. coli contamination of drinking water fell into the categories of intermediate to very high risk, according to the WHO (1997). The presence of E. coli indicates faecal contamination and, therefore, poses a health threat to humans and animals residing on the farms (Pell, 1997). Immune-compromised patients, such as those suffering from HIV/AIDS, are particularly vulnerable.
Factors contributing to the contamination of milk include contact with animals and personnel engaged in milk processing, unhygienic milking equipment, poor quality water used in the dairy, and poor herd health (Altalhi and Hassan, 2009). Although the process of pasteurisation improves the safety and lengthens the shelf life of dairy products, it does not eliminate all microorganisms and their enzymes, spores and toxins. The thermal destruction process is logarithmic and eliminates bacteria at a rate that is proportional to the number of bacteria present in raw milk (Le Jeune and Rajala-Schultz, 2009). In instances where the bacterial count in raw milk is high, pasteurisation will not be able to kill all bacteria within the short period of its application (Lund et al., 2002). Milk buyers in South Africa apply a sliding scale for good quality milk and a penalty system for milk with low bacteriological quality when determining the value of the raw milk (Clover, 2013). Furthermore, the high bacterial content of the water could compromise the quality of dairy products and other farming produce (Jones, 1999). This study strongly suggests a revision of waste water management strategies on dairy farms in the Free State.

Figure 1: Distribution of measurements for (a) total hardness, and (b) nitrate (arrow indicates the limit of the South African National Standard)

Figure 2: Distribution of measurements for (a) total coliforms, and (b) E. coli (arrow indicates the limit of the South African standard)
Health sequelae of human cryptosporidiosis in industrialised countries: a systematic review

Background Cryptosporidium is a protozoan parasite which is a common cause of gastroenteritis worldwide. In developing countries, it is one of the most important causes of moderate to severe diarrhoea in young children; in industrialised countries it is a cause of outbreaks of gastroenteritis associated with drinking water, swimming pools and other environmental sources and a particular concern in certain immunocompromised patient groups, where it can cause severe disease. However, over recent years, longer-term sequelae of infection have been recognised and a number of studies have been published on this topic. The purpose of this systematic review was to examine the literature in order to better understand the medium- to long-term impact of cryptosporidiosis. Methods This was a systematic review of studies in PubMed, ProQuest and Web of Science databases, with no limitations on publication year or language. Studies from any country were included in qualitative synthesis, but only those in industrialised countries were included in quantitative analysis. Results Fifteen studies were identified for qualitative analysis which included 3670 Cryptosporidium cases; eight studies conducted in Europe between 2004–2019 were suitable for quantitative analysis, including five case-control studies. The most common reported long-term sequelae were diarrhoea (25%), abdominal pain (25%), nausea (24%), fatigue (24%) and headache (21%). Overall, long-term sequelae were more prevalent following infection with Cryptosporidium hominis, with only weight loss and blood in stool being more prevalent following infection with Cryptosporidium parvum. Analysis of the case-control studies found that individuals were 6 times more likely to report chronic diarrhoea and weight loss up to 28 months after a Cryptosporidium infection than were controls.
Long-term abdominal pain, loss of appetite, fatigue, vomiting, joint pain, headache and eye pain were also between 2–3 times more likely following a Cryptosporidium infection. Conclusions This is the first systematic review of the long-term sequelae of cryptosporidiosis. A better understanding of long-term outcomes of cryptosporidiosis is valuable to inform the expectations of clinicians and their patients, and public health policy-makers regarding the control and prevention of this infection. Systematic review registration PROSPERO Registration number CRD42019141311

Background

Cryptosporidiosis is a clinical disease, typically affecting the intestinal tract of humans and animals who have ingested the protozoan parasite Cryptosporidium in its oocyst (infective) stage [1]. Transmission of Cryptosporidium occurs predominantly via the faecal-oral route, or through consumption of contaminated food or water. Prevalence estimates for human Cryptosporidium infection still vary widely [11], and it remains difficult to quantify the true burden of cryptosporidiosis, as current estimates only account for the morbidity and mortality associated with the acute illness, while the potential contributions of long-term manifestations are not included [32,33]. A recent study from the Netherlands [2] found that long-term manifestations contributed nearly 10% of the total Disability-Adjusted Life Years (DALYs) and costs when included in burden of disease models for Cryptosporidium, suggesting a higher public health burden and cost than previously estimated. Accurate estimations of the burden of disease associated with Cryptosporidium will inform decisions regarding the allocation of diagnostic, surveillance and interventional measures to prevent and control Cryptosporidium infections.
Due to the potential morbidity and mortality associated with long-term sequelae of human cryptosporidiosis, an accurate estimation of the proportion of cases that develop such sequelae is needed to quantify true burden of disease estimates for Cryptosporidium. The objectives of this review were to: (i) estimate the proportion of people that self-report health sequelae post-Cryptosporidium infection; (ii) estimate the risk of specific sequelae following Cryptosporidium infection; and (iii) explore potential risk factors associated with developing sequelae following Cryptosporidium infection in industrialised countries.

Search strategy

We searched for studies in PubMed, ProQuest and Web of Science databases, with no limitations on publication year or language. The reference lists from relevant papers identified during our electronic searches were also reviewed for additional relevant papers which might warrant inclusion in our review. Search terms were initially developed and piloted using PubMed and, to ensure consistency, the same search terms were used when searching the ProQuest and Web of Science databases. Databases were searched using the following keywords: Cryptosporid*, Complications, Sequel*, Post-infecti*, Long term and Chronic. The full electronic search strategies are documented in Additional file 1: Table S1. The review was registered with PROSPERO, registration number CRD42019141311.

Selection of studies

All citations identified using the final search strategies were exported to Mendeley® reference managing software for organisation and removal of duplicates. The titles and abstracts of the remaining articles were screened for relevance by one reviewer (BC), after which the remaining articles were independently screened by two reviewers (BC and APD) to ensure consistent application of the pre-determined inclusion/exclusion criteria (Additional file 1: Table S1).
Studies from any country were included in qualitative synthesis, but only those in industrialised countries were included in quantitative analysis. An industrialised country was defined using Organisation for Economic Co-operation and Development (OECD) membership. Final inclusion of studies was decided by consensus, with any conflicts being reviewed by a third reviewer (RMC). The full text was retrieved and reviewed for articles where the title and abstract had been deemed relevant by reviewers.

Data extraction

Data were extracted from eligible studies and collated in a Microsoft Word document. We recorded post-Cryptosporidium infection health sequelae data as reported in the individual papers (e.g. prevalence, cumulative incidence, etc.). Relative risks or odds ratios were recorded where data were available. We also extracted the following study characteristics from each paper, if available: name of authors, year of publication, study location/setting, study design, year(s) of study, study duration and duration of follow-up, number of included study participants, participation rate, study population demographics (including age and gender distributions), Cryptosporidium species data, the diagnostic method used to ascertain Cryptosporidium infection and the types of sequelae reported. Additionally, where available, data on the incidence/prevalence of post-infectious IBS following Cryptosporidium infection and the IBS diagnostic criterion applied were collected.

Quality assessment

The methodological quality of the included studies was assessed using the Newcastle-Ottawa Scale (NOS) for non-randomised studies [34]. NOS was used to score studies across three domains: (i) the selection of the study groups; (ii) the comparability of the groups; and (iii) the determination of either the exposure or the outcome of interest in case-control or cohort studies, respectively. Scores ranged from 5 to 8 (Additional file 1: Table S2).
Statistical analysis

The proportion of Cryptosporidium cases that developed specific sequelae was calculated by dividing the number of individuals developing a sequela by the total number of Cryptosporidium cases. Where data were available from two or more appropriate studies, we used a random-effects meta-analysis model to obtain pooled estimates of prevalence for the outcomes of interest (i.e. sequelae) across the eligible studies. For this analysis, a study could be included more than once if sequelae data were reported longitudinally at different time periods. Data analyses were performed using Meta XI [35].

Assessment of heterogeneity and reporting biases

Forest plots and the I² statistic were used to assess heterogeneity between the studies. Values of 0-40%, 30-60%, 50-90% and 75-100% were interpreted as: might not be important, may represent moderate heterogeneity, may represent substantial heterogeneity, and considerable heterogeneity, respectively [36]. Funnel plots were used to assess publication bias and small-study effects. Stratified analysis was performed for the following subgroups: time (less than 6 months post-infection and more than 6 months post-infection) and Cryptosporidium species (e.g. C. parvum vs C. hominis).

Data synthesis

The number of papers identified, included and excluded is presented according to the requirements of the PRISMA statement [37] in Fig. 1. Fifteen studies were identified for qualitative synthesis and eight of these were identified as being set in industrialised countries and of sufficient quality for additional quantitative synthesis. The qualitative synthesis is shown in Table 1. Quantitative synthesis results are shown in Tables 2, 3 and 4 and Figs. 2 and 3.

Qualitative synthesis

Electronic searching returned 1251 PubMed, 2161 ProQuest and 3227 Web of Science abstracts.
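As an illustration of the pooled-prevalence and I² calculations described in the statistical analysis section above, here is a minimal DerSimonian-Laird sketch in Python. The review's analyses were run in dedicated meta-analysis software; this standalone version uses untransformed proportions and invented counts, so it is a sketch of the method, not a reproduction of the review's results.

```python
import math

def pooled_prevalence_dl(events, totals):
    """DerSimonian-Laird random-effects pooled proportion with I^2.

    events/totals are per-study counts (invented for illustration).
    A minimal sketch: dedicated tools also transform proportions
    (logit / double-arcsine) and report confidence intervals.
    """
    p = [e / n for e, n in zip(events, totals)]
    v = [pi * (1 - pi) / n for pi, n in zip(p, totals)]  # binomial variance
    w = [1 / vi for vi in v]                             # fixed-effect weights
    p_fe = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    # Cochran's Q and the I^2 heterogeneity statistic
    q = sum(wi * (pi - p_fe) ** 2 for wi, pi in zip(w, p))
    k = len(p)
    i2 = max(0.0, (q - (k - 1)) / q) * 100 if q > 0 else 0.0
    # DL estimate of the between-study variance tau^2
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c) if c > 0 else 0.0
    # random-effects weights and pooled estimate
    w_re = [1 / (vi + tau2) for vi in v]
    return sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re), i2
```

The returned I² value would then be read against the 0-40% / 30-60% / 50-90% / 75-100% interpretation bands quoted above.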
After removal of duplicates, screening and assessment, 15 articles were suitable for inclusion in the qualitative synthesis and the data extracted from these studies are summarized in Table 1. The 15 shortlisted studies included 3670 Cryptosporidium cases. The studies comprised 8 cohort studies and 7 case-control studies. Seven studies were conducted in children, with the remaining 8 studies including both adults and children. The duration of follow-up ranged from 2 months to 9 years. Studies were conducted in South America (3 studies), Africa (1 study), South Asia (2 studies) and Europe (9 studies) and were all based in a community setting. The selected studies were published between 1992 and 2019. The studies investigated a range of potential sequelae: diarrhoea (3 studies), developmental delay (2 studies), stunting of growth (4 studies) and multiple gastrointestinal and non-gastrointestinal symptoms (8 studies).

Quantitative synthesis

Adequate information to estimate post-Cryptosporidium infection sequelae was available in 8 of the 15 studies [4,[15][16][17][18][19][20][21]. The pooled estimates for each of the sequelae are shown in Table 2. Data for each individual sequela are available in Additional file 2. The eight studies were conducted in Europe between 2004 and 2019; four in Sweden, three in the UK and one in the Netherlands. The sequelae investigated were mostly gastrointestinal, with some non-gastrointestinal symptoms such as joint pain and eye pain, and most recruited cases were adults. This was in contrast to studies in non-industrialised countries, which focused on growth, nutrition and cognitive detriment in children. The most frequently investigated sequelae are listed in Table 2 and included diarrhoea, abdominal pain, vomiting, fatigue, joint pain, eye pain and headache. The most commonly reported long-term sequelae were diarrhoea (25%), abdominal pain (25%), nausea (24%), fatigue (24%) and headache (21%).
The distribution of gastrointestinal and non-gastrointestinal manifestations reported is shown in Fig. 2. Table 3 shows the pooled estimates for the prevalence of post-Cryptosporidium sequelae by time period post-infection. With the exception of eye pain and headache, all sequelae were more frequently reported within 6 months of Cryptosporidium infection.

Subgroup analysis

In all eight studies included in the quantitative analysis, species identification of Cryptosporidium had been performed. Four were outbreak cohort follow-up studies and so contained only one species (three contained C. hominis cases exclusively and one contained C. parvum exclusively). The other four studies contained both species; one of these four also contained a small number of other species (17/271 cases), but because of the low numbers, these have not been considered here. Figure 3 shows the pooled estimates for the prevalence of post-Cryptosporidium sequelae by Cryptosporidium species. Overall, long-term sequelae were more prevalent following infection with C. hominis, with only weight loss and blood in stool being more prevalent following infection with C. parvum. IBS was reported in 11% of cases; however, it should be noted that data for this outcome were only available from 2 studies, one of which only studied C. parvum cases.

Sequelae risk

Five of the 8 qualitative synthesis studies included a control group. A limited evaluation of the risk of individual sequelae was undertaken using the five available case-control studies [4,15,17,19,20]. Data were available for 10 sequelae (Table 4). Individuals were 6 times more likely to report chronic diarrhoea and weight loss up to 28 months after a Cryptosporidium infection than controls. Long-term abdominal pain, loss of appetite, fatigue, vomiting, joint pain, headache and eye pain were also 2-3 times more likely following a Cryptosporidium infection (Fig. 4).
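The "times more likely" figures above are odds ratios from the case-control comparisons. A minimal sketch of the underlying 2×2-table arithmetic, with invented counts rather than the review's data:

```python
import math

def odds_ratio(case_exposed, case_total, control_exposed, control_total):
    """Odds ratio for a sequela in cases vs controls (2x2 table).

    Counts are invented for illustration; the review's pooled figures
    come from the five case-control studies it analyses."""
    a, b = case_exposed, case_total - case_exposed
    c, d = control_exposed, control_total - control_exposed
    return (a * d) / (b * c)

def or_confint_95(a, b, c, d):
    """Woolf 95% confidence interval computed on the log-odds scale."""
    log_or = math.log((a * d) / (b * c))
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    return math.exp(log_or - 1.96 * se), math.exp(log_or + 1.96 * se)
```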
To view the PRISMA checklist relating to this work, please see Additional file 3.

Discussion

Of the 15 studies investigating long-term sequelae, just over half were set in industrialised countries. In contrast to those in non-industrialised settings, these involved mainly adult cases, with the inclusion of some children. Half were outbreak cohort studies, with the rest involving sporadic community cases. Studies from non-industrialised countries involved exclusively children, reflecting the greater clinical importance and recognition of paediatric infection in such settings. In industrialised countries there is more focus on detecting sporadic community cases of cryptosporidiosis in all age groups, partly in order to facilitate early detection of community outbreaks, for example from drinking water, swimming pools, or other environmental sources. The studies in non-industrialised countries also differed in that the children were recruited and tested as part of the specific studies, whilst the studies in industrialised countries relied on cases initially diagnosed routinely. The eight studies suitable for inclusion in the quantitative analysis were all carried out in just three countries in Europe (UK, Sweden and the Netherlands), where species data are routinely generated, and all except one [11] were relatively recent, dated between 2013 and 2019. In many non-industrialised countries, or in earlier European studies, species identification would not be routinely performed, and this is reflected in the study data. The geographical reach of the eight studies is somewhat limited, since they were all located in northwest Europe. Only five were case-control studies, and of these, only two included both C. hominis and C. parvum, with the other three limited to studying C. hominis alone following outbreaks.
Since the bulk of the cryptosporidiosis burden is found in low-income countries, there is a need in future to conduct similar quantitative evaluations using data from developing countries, where obtaining suitable data may be more challenging. There were some limitations to this review. The role of genotype in long-term outcomes could not be explored. Typing was undertaken by gp60 sequencing in three of the studies but was either not analysed with symptoms data [16], or was from an outbreak where all cases had the same subtype [4,18]. There were insufficient data to compare between studies. Another limitation was that, since not all cases were necessarily tested for all gastrointestinal pathogens, or the results of such tests were not stated, the long-term sequelae identified cannot be proven to be Cryptosporidium-specific and not due to other infectious agents. Most of the studies examined quantitatively concentrated on adult individuals, whereas cryptosporidiosis is commonest in young children. This over-representation of adults results from the fact that several of the studies followed large waterborne outbreaks involving many adults, rather than sporadic cases. Identifying and defining sometimes rather nonspecific sequelae is more difficult in very young children. However, a study by Carter et al. [21] of sporadic cases did include children, and in fact this study found that the proportion developing IBS or IBS-like symptoms was higher in children than in adults, with 78% reporting such symptoms among 5-17 year-olds and 63% among those aged 6 months to 4 years. The results indicate that sequelae are frequently reported after cryptosporidiosis, lasting up to at least 2 years. Only one study investigated cases for longer, up to 36 months [11]. Sequelae occur for both main infecting species, but the frequency of each differs by species.
Following the publication of the first study in 2004 [15], the evidence base surrounding post-Cryptosporidium infection sequelae has continued to expand [16][17][18][19][20][21]. Gastrointestinal sequelae such as continuing diarrhoea, nausea and abdominal pain appear particularly common, each reported by around a quarter of cases up to 36 months post-infection, with analysis of the case-control studies finding that persistent diarrhoea is around six times more likely than in controls, and weight loss over three times more likely, over 28 months. Fatigue and headache were also commonly reported and occurred in the case-control studies two to three times more commonly in cases than controls over the same time period. Overall, the most commonly reported long-term sequelae were diarrhoea (25%), abdominal pain (25%), nausea (24%), fatigue (24%) and headache (21%). Where it was investigated, there was evidence that symptoms meeting the definition for IBS were described in just over 10% of cases up to 36 months.

Conclusions

This is the first systematic review of the long-term sequelae of cryptosporidiosis. The proportion of cases self-reporting sequelae post-infection has been estimated and estimates of the risk of specific sequelae presented. Risk factors for sequelae were less well identified. A better understanding of the long-term outcomes of cryptosporidiosis is valuable to inform the expectations of clinicians and their patients, and public health policy-makers, regarding the control and prevention of this infection.
Depreciation Test of Fixed Assets – Necessity, Indices of Value Loss, Certainty and Frequency of Assessment Evaluation

IAS 36 "Impairment of Assets" was promulgated to ensure a consistent approach to reversible losses of value, removing specific national practices and the differences that arose from those divergent treatments. In the chain of procedures for determining the impairment of an asset or cash-generating unit, the first step is identifying assets that may be impaired, which requires strict professional judgement for each business. The impairment test is not applied at random, nor to all of an enterprise's assets. In general, the standard requires companies to perform impairment tests when there are indications that an asset may be impaired (but annually for intangible assets with an indefinite useful life and for goodwill). The option given by IAS 36 to choose between two values when determining the recoverable amount is not accidental: it reflects the view that a company can recover the value of its assets either through use or through sale on the market. However, measuring the two values is a complex process, very expensive for many businesses, based on the company management's own estimates and carrying a strong subjective load, which is reflected in the certainty and reliability of the data obtained.
Depreciation in the Romanian Accounting Sense Before Harmonization With International Accounting Standards

Before harmonization with International Accounting Standards, Romanian accounting deemed it necessary to value property elements at four moments: on entry into the patrimony, at inventory, at the closing of the financial year, and on exit from the assets. Impairment of value was addressed through accounting structures in the nature of provisions and depreciation charges. For the measurement of reversible impairment of value under Romanian accounting standards, attention falls on the inventory valuation. The valuation of assets at inventory emphasized, especially in practice, the quantitative rather than the qualitative side. During the inventory, intangible assets were valued at the actual value or utility of each element, called the inventory value, determined by the usefulness of the asset in the business, its physical condition and its market price. For property (tangible and intangible) subject to depreciation, the inventory value took account of the depreciation already charged, so the asset value was given by the net book value resulting from the amortization plan, unless the actual value was considered lower than the net accounting value. At the closing of the financial year, the entry value of property was compared with the utility (present) value determined during the inventory. Two situations result from this comparison: surpluses of value over the entry value, which, in application of the prudence principle, were not recorded in the accounts; and shortfalls of value below the entry value, which were recorded in the accounts either as an exceptional depreciation, when the impairment was irreversible, or as a provision, when the impairment was reversible, caused by factors such as:
a. the emergence of obsolescence that had not been taken into account in depreciation;
b. the overstatement of fixed assets through the application of inappropriate indices when assessing them;
c. their lack of utility for the enterprise at the time of the inventory (in storage, not used in the activity, etc.);
d. other reasons that determine a current value lower than the value at which they were included in the accounts.
Depreciation was regarded as the equivalent of the irreversible impairment of fixed assets caused by use, natural factors, technical progress or other causes, while impairment of a reversible and temporary character was treated through provisioning.

International Financial Reporting Standards Treat Depreciation Issues Differently

The term "impaired value" was introduced with a meaning different from the one frequently used in Romanian practice. It is prescribed that an asset must be reflected in the financial statements at an amount that does not exceed the recoverable amount obtainable from its use or from trading on an active market. The concept was developed precisely to provide a better and more reliable reflection of the value of an asset in the balance sheet of an enterprise's financial statements. This is because, in practice, in many European jurisdictions, although there were statutory obligations to compare the book value of assets with their market value, the requirements were not necessarily applied rigorously. Furthermore, some jurisdictions, particularly those in the British commercial-law tradition, required impairment to be reflected only when it was permanent and long term. The more rigorous approach of IAS 36 reflects the fact that the authorities became aware that this was a neglected area of financial reporting. Thus, at the balance sheet date, in accordance with IAS 36, the accounting value of a fixed asset is compared with its fair value and with the present value of the cash flows estimated to be generated through its use (the value in use). If the higher of these values is less than the carrying amount, an impairment is recognized. The purpose of the rule is precisely to prescribe the procedures an undertaking applies to ensure that its assets are not carried at a value greater than their recoverable amount. To understand how the recoverable value of fixed assets is established, the paper refers to a decision diagram (asking, among other things, whether the objective of the enterprise is to generate profit, whether the potential for use of the asset can be replaced at current cost, and whether the asset is used mainly to generate cash inflows); the diagram itself is not reproduced here.

Studies and Scientific Researches - Economic Edition, no. 15, 2010

What is the situation in Romania? Under OMFP no. 3.055/2009, the evaluation of assets is in line with these norms. Valuation in the financial statements respects the prudence principle, taking into account all value adjustments due to depreciation. At the balance sheet date, Romanian companies evaluate fixed assets by comparing the book value with the value determined on the basis of the inventory; shortfalls between the inventory value and the net book value are highlighted in the accounts either as additional depreciation, if the impairment is irreversible, or as an adjustment for depreciation or loss of value, where the impairment is reversible. There is thus an alignment of the Romanian accounting regulations with the provisions of IFRS: recognition of assets at year-end follows the recoverability principle, if we consider the inventory value as a proxy for market value, and likewise follows their utility for the enterprise. With the promulgation of IAS 36, a consistent approach to reversible losses of value is ensured, removing the specific practices and the differences arising from different treatments. In the chain of procedures for determining the impairment of an asset or cash-generating unit, the identification phase for possibly impaired assets must be crossed first, and a strict application of professional judgement is needed for each business. Applying the impairment test is neither random nor extended to all assets of an enterprise. In general, the standard requires companies to perform impairment tests when there are signs that an asset may be impaired (but annually for intangible assets with an indefinite useful life and for goodwill). The standard considers difficult and uncertain the capacity of intangible assets, especially those not yet available for use, to generate sufficient future economic benefits to ensure the recoverability of their accounting value; for this reason annual impairment testing is imposed on them. For all other assets, at each financial reporting date the company must determine whether there are circumstances indicating that impairment could occur. Regarding the future economic benefits attached to an asset, such circumstances exist when it is expected that:
a) significant changes occur in the manner or extent of use of the asset; or
b) the economic performance of the asset will be lower than initially expected; or
c) the market value of the asset has declined significantly relative to initial projections; or
d) the asset is physically worn or obsolete.
Note that this is not a requirement that possible impairment be calculated for all assets at each balance sheet date, which would be a very demanding task for many businesses. It is, rather, the existence of conditions that might suggest an increased risk of impairment that is to be assessed. Thus, at the balance sheet date it is necessary to identify those assets which, under the conditions considered, may be impaired. The existence of conditions, of cues of impairment, does not necessarily mean, however, that the company should measure the recoverable amount of those assets. Because the impairment test is a complex and, for some companies, quite expensive process (there are difficulties in determining the market value of assets, reliable medium-term cash flow projections may be lacking under local economic conditions, and determining the discount rate for cash flows requires hypotheses and assumptions that are not always defensible), the principle of materiality will be applied in measuring the recoverable amount. If in previous exercises a recoverable value significantly higher than the net book value was established for the asset under review, and the indications that the asset has since lost value do not consist of events that would reduce this difference, then a re-estimation of the recoverable amount is not necessary.

Example 1: An international transport company operates a whole fleet of buses covering different routes, without any bus being dedicated to a particular route. The company also owns a truck (TIR) used for the transportation of goods on request. The truck can be subject to an individual analysis, since it is possible to determine the cash flows generated by its use, and an active market exists on which its market value can be established. The latest estimate of the truck's recoverable amount was made in 2008, and it exceeded twice the book value. At the 2009 balance sheet date there were indications that the asset might be impaired, but the indicated impairment was not severe enough to exceed the difference between the 2008 estimated recoverable amount and the carrying amount, so the company did not consider a re-estimation of the recoverable amount necessary. Likewise, if the asset under review will still be used, the company not intending to dispose of it in the near future (recoverable value based on value in use), then even when the market records a change in interest rates that might reduce the recoverable amount, the company will not re-estimate it: if the changes in the market interest rate do not affect the discount rate used in estimating value in use; or if there are opportunities to counteract the changes in the discount rate through increases in future cash flows due, for example, to an improved market share.

For the identification of impaired assets, the standard provides a set of indicators of potential impairment and suggests that they represent a minimum list of factors to be taken into account. The first analysis will take into account impairment signals grouped into external criteria and internal criteria. The external criteria, or cues, are primarily the result of a technological break affecting the enterprise, a lower level of activity, reduced product prices, a degradation of the company's future prospects, or changes in the discount rate. The internal cues are generated by wear, by the degradation of the performance the assets deliver, by adjustments in the business activity (restructuring or closure), and so on: all internal information suggesting that in the future the asset's performance will be lower. Of the two categories of sources, the external criteria deserve particular attention, being external to the enterprise and beyond the influence of the enterprise and its management. The simple fact that one or more of the signals above raises concerns about the possible impairment of an asset does not necessarily mean that the asset must undergo an impairment test. However, in the absence of plausible explanations as to why the signs of possible impairment should not be considered further, the presence of one or more of these cues requires monitoring. The company's latitude in assessing the signals of a possible loss of value is narrowed by IAS 36, which makes an annual investigation of all such indications mandatory: "at each reporting date, the entities will check if there are indications of asset impairment. If such signs are identified, the entity shall estimate the recoverable amount of the asset." The possibility afforded by IAS 36 "Impairment of Assets" to choose, in determining the recoverable amount, between two values is not accidental. It is considered that the company can recover the value of its assets either through use or through the market. However, the measurement of the two values is a complex and, for many businesses, very expensive process, based on the company management's own estimates, with a strong subjective load, which is reflected in the certainty and reliability of the data obtained. In applying the impairment test to fixed assets, the presence, at the end of the reporting period, of signs of a possible loss of value triggers the measurement of the recoverable amount. Under the standard, a company has two obvious ways to recover the value of its assets: by trading them on an active market or by using them (the result of the extensive debate in the IAS 36 Basis for Conclusions, which concluded that neither market assumptions alone nor the company's own modelling fully reflects reality, which is why the higher of the two aggregates, consistent with likely management behaviour, was chosen). In theory, and mostly in practice as well, a company making rational choices would sell an asset if its fair value less costs to sell exceeded its value in use, and would continue to use the asset if its value in use exceeded the amount recoverable through sale. Thus, the economic value of an asset is measured most coherently by the higher of these two values, as the company will retain or dispose of assets in accordance with what appears to be their best and most efficient use. The identification of an impairment loss for an asset and the measurement of the recoverable amount should be based on an investment analysis of the estimated future cash flows expected to arise either from the market or from use: where, according to the available information, the net proceeds of sale exceed the cash flows from continued use, the sale of the asset is decided; if the operating cash flows are lower than initially estimated, but an immediate sale would fetch a low price or the potential future costs of keeping the asset can be recovered, the firm will still choose to use the asset. The business decision is based on the results of the investment analysis above, without the need to establish both values. The recoverable amount is the maximum of fair value less costs to sell and value in use, and exceedance of the net book value by either of the two is sufficient to consider the asset not impaired. Usually, it is easier to determine the fair value less costs to sell than the value in use. In the standard's definition, fair value less costs to sell is the amount obtainable from the sale of an asset in an arm's-length transaction between knowledgeable, willing parties, less the costs of disposal. As a measure given by the market, where supply meets demand, the estimation of fair value less costs of disposal is often performed with certainty by the business. The objective of the company is to identify, on the market, the fair value of the fixed asset, from which the costs of removal from service are deducted. In economies with specialized markets for assets, no significant difficulties will be met and the fair value is relatively easy for the enterprise to estimate. However, although IFRS frequently use notions such as fair value and active market, the reality of the Romanian economy shows that for most assets there are no active markets on which they could be valued. It is nevertheless possible to determine the fair value less costs to sell even if the asset is not traded on an active market.
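The decision rule described in this section, recoverable amount as the higher of fair value less costs to sell and value in use, with an impairment recognized only when the carrying amount exceeds it, can be sketched as follows (the amounts are illustrative):

```python
# Minimal sketch of the IAS 36 rule summarised above. The amounts are
# illustrative; in practice each value is a separate measurement exercise.

def impairment_loss(carrying_amount, fair_value_less_costs_to_sell, value_in_use):
    """Impairment loss to recognise (0.0 when the asset is not impaired)."""
    recoverable_amount = max(fair_value_less_costs_to_sell, value_in_use)
    return max(0.0, carrying_amount - recoverable_amount)

# An asset carried at 1,000 with a net selling price of 850 and a value
# in use of 920 is written down by 80 to its recoverable amount of 920.
print(impairment_loss(1000.0, 850.0, 920.0))  # 80.0
```

The same rule applied at cash-generating-unit level simply takes the unit's aggregated carrying amount and recovery values as inputs.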
In these circumstances the company will review the available information on past transactions in similar assets for which market selling prices are known; likewise, if offers have been made for similar assets at approximately similar price levels, an estimate of the net fair value can be made. Without intending to conclude, for many the concept of fair value knows only one reality: the market value. This is not, however, the only way to measure fair value, though it is the most objective one, because it is based on information outside the company, which the company cannot influence in any way. The use of valuation techniques is an alternative method in the absence of an established market price. Two approaches can be met: the first is a method of analogy, which calls on the market value of a similar asset showing distinctive characteristics identical, or at least similar, to those of the asset under review; the second models the exploitation of the asset using valuation techniques. The method of determining the value of a property through analogy or similarity is theoretically valid, but in practice difficult to realize, since similarity of characteristics is often difficult to establish and demonstrate. And yet, sometimes it is not possible to determine the fair value less costs of disposal, "in the absence of a basis for reliably estimating the amount obtainable from the sale of the asset in an arm's-length transaction between knowledgeable, willing parties". Where the asset has no market value, or no reliable estimate can be made of the amount an undertaking might obtain from its sale, the measurement of value in use is required. If the measurement of fair value less costs to sell, being usually established on a market, is often a certainty for the business, value in use involves estimates and discounting, in most cases based on subjective values. Moreover, its size is specific to each company; obtained through modelling, it has a much higher degree of subjectivity and is also more difficult to validate. The value in use involves discounting the cash flows attached to the future use of the asset at an appropriate discount rate. Estimating future cash flows involves a high degree of uncertainty, is quite subjective, and depends entirely on the management team, which may be faced with a lack of reliable medium-term cash flow projections under local economic conditions. Also, to determine the discount rate for cash flows it is necessary to make hypotheses and assumptions that are not always easy to sustain. Perhaps that is why the company is required, in shaping its forecasts of future cash flows, to base its estimates on reasonable assumptions: it will avoid excessive growth rates of income and significant expected cost reductions, considering that recent experience is a correct guide for the near future. It will also insist on choosing the most appropriate discount rate, knowing that this measure decisively influences the recoverable amount of the asset under review. The standard requires that the measurement of the recoverable amount be made for each individual asset, except where the asset does not generate cash inflows largely independent of those generated by other assets or groups of assets; in such cases the recoverable amount is calculated for the cash-generating unit to which the asset belongs.
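The value in use discussed above is, at bottom, a present-value computation over the forecast cash flows; a minimal sketch, with illustrative cash flows and an assumed 10 % discount rate:

```python
# Hedged sketch of a value-in-use measurement: forecast cash flows
# attached to continued use, discounted at a chosen rate. Both the cash
# flows and the 10 % rate below are illustrative assumptions.

def value_in_use(cash_flows, discount_rate):
    """Present value of forecast cash flows for years 1..n."""
    return sum(cf / (1.0 + discount_rate) ** t
               for t, cf in enumerate(cash_flows, start=1))

forecast = [300.0, 320.0, 310.0, 250.0]   # net cash inflows per year
print(round(value_in_use(forecast, 0.10), 2))
```

As the text notes, the result is only as reliable as the cash flow forecasts and the discount rate behind it; small changes in the rate move the recoverable amount materially.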
CALIBRATION AND VALIDATION OF THE ADVANCED LAND OBSERVING SATELLITE-3 "ALOS-3"

The "Advanced Land Observing Satellite-3" (ALOS-3, nicknamed "DAICHI-3") is the next high-resolution optical mission, a successor of the optical mission of the Advanced Land Observing Satellite (ALOS, "DAICHI") of the Japan Aerospace Exploration Agency (JAXA), and will be launched in Japanese Fiscal Year 2020. The ALOS-3 flight model is now under development. The major missions of ALOS-3 are (1) to contribute to a safe and secure society, including preparedness for natural disasters, and (2) to create and update geospatial information for land and coastal areas. To achieve these missions, the "WIde-Swath and High-resolution optical imager" (WISH, as a tentative name) is mounted on ALOS-3, consisting of high-resolution panchromatic and multispectral bands. This paper introduces an overview of ALOS-3's mission and the calibration and validation plan at JAXA. The standard product is the system-corrected data using the sensor models, which will be provided by the sensor development team; the sensor calibration therefore directly affects the accuracies of the standard product. In addition, a sensor model based on Rational Polynomial Coefficients will be contained in the level 1B2 standard product, which can be used for ortho-rectification and three-dimensional measurement from ALOS-3 images. As target accuracies of WISH's standard products, the geometric accuracies are less than 5 m horizontally without ground control points (GCPs), and 1.25 m horizontally and 2.5 m vertically with GCPs (1 sigma), and the radiometric accuracy is +/- 10 % absolute and +/- 5 % relative for the multispectral bands.
INTRODUCTION

The "Advanced Land Observing Satellite-3" (ALOS-3, nicknamed "DAICHI-3") is the next high-resolution optical mission, a successor of the optical mission of the Advanced Land Observing Satellite (ALOS, "DAICHI") of the Japan Aerospace Exploration Agency (JAXA) (Shimada et al., 2010). The ALOS-3 flight model is now under development, following the Critical Design Review (CDR) phase (Katayama et al., 2016). The major mission objectives are (1) to contribute to a safe and secure society, including preparedness for natural disasters, and (2) to create and update geospatial information for land and coastal areas. The "WIde-Swath and High-resolution optical imager" (WISH, as a tentative name) will be mounted on ALOS-3 and consists of a 0.8 m resolution panchromatic band and 3.2 m resolution multispectral six bands with a 70 km observation swath width. This paper describes overviews of ALOS-3's missions and products and the calibration and validation plan for WISH.

CHARACTERISTICS OF ALOS-3

2.1 Mission objectives of ALOS-3

Regarding the two major mission objectives of ALOS-3, utilization in the following applications and outcomes is expected.

Safe and secure society, including preparedness for natural disasters: In response to natural disasters in Japan, the Asian region and worldwide, disaster-related information, e.g. damaged-area and volume estimations and damage assessments associated with rescue activities, will be provided as soon as possible after an event. To meet this requirement, several emergency observation modes are prepared in ALOS-3 operation, i.e. point observation, observation-direction changing, and wide-area observation (Tadono et al., 2018). The analysis of the acquired data is basically a change detection between before and after the event; it is therefore important to observe the area and archive the data before the event as well.
This will also be usable for maintaining and updating hazard maps in the prevention phase. JAXA is also considering multi-satellite use combining ALOS-3 with ALOS-2, if it is still operating, and with ALOS-4, the next Synthetic Aperture Radar (SAR) satellite mission in Japan, as well as combined analysis with other Earth observation satellites.

Geospatial information for land and coastal areas: The Geospatial Information Authority of Japan (GSI) is responsible for generating and updating the official national topographic map of Japan, which is covered at the 1/25,000 scale. To contribute to this activity, a geometric accuracy of at least 5 m must be guaranteed. It is also important to identify surface textures, land use and land cover (LULC) and their changes to update the map. The RedEdge multispectral band provided in WISH will support monitoring of activity levels in forests, vegetation and agricultural areas. The image quality of ALOS-3 is therefore also important. These functions also contribute to activities in natural disaster response. JAXA plans to acquire stereo imagery through a stereoscopic observation mode using the sub-cycle orbit with a three-day difference in the nominal situation. A height accuracy of at least 2.5 m for the panchromatic band with ground control points (GCPs) is assigned as the target accuracy. In addition, bathymetry and environmental monitoring in coastal regions are defined as part of this mission and will be supported by the Coastal channel of the multispectral band. Figure 1 shows the expected in-orbit configuration of ALOS-3, and Tables 1 and 2 summarize the current specifications of ALOS-3 and the onboard instrument WISH, respectively, which is designed to improve and enhance the fine resolution and global observation capabilities achieved by the Panchromatic Remote-sensing Instrument for Stereo Mapping (PRISM) and the Advanced Visible and Near Infrared Radiometer type-2 (AVNIR-2) onboard ALOS.
For example, the ground sampling distance (GSD) is 0.8 m for WISH's panchromatic band compared with 2.5 m for PRISM, and 3.2 m for the multispectral bands compared with 10 m for AVNIR-2, while the observation swath width is the same, 70 km at nadir. For multispectral observation, two channels are added relative to AVNIR-2, i.e. the Coastal and the RedEdge. The data quantization is improved to 11 bits/pixel from the 8 bits/pixel of PRISM and AVNIR-2. This improvement will contribute to better image quality, but it causes a huge amount of mission data.

Table 3. ALOS-3 standard products and target accuracies:
- Raw data (not delivered to users)
- 1B1: radiometric system correction; CCD-unit image
- 1B2: radiometric + geometric system correction; georeferenced/geocoded, with RPC. Target geometric accuracy (1 sigma): 5 m (horizontal) without GCPs; 1.25 m (horizontal) and 2.5 m (vertical) with GCPs. Target radiometric accuracy (multispectral bands, 1 sigma): +/- 10 % (absolute); +/- 5 % (relative)
- 1C: rough ortho-rectification using an existing DEM/DSM, i.e. AW3D (Tadono et al., 2014)

The satellite orbit is kept sun-synchronous and sub-recurrent with a local sun time of 10:30 am, but the repeat cycle is 35 days instead of the 46 days of ALOS. This enhances the observable frequency at middle and high latitudes; however, small pointing-angle observations are necessary to cover the entire area at low latitudes. Along-track stereo observation by multiple sensors, as with PRISM, was not selected; however, the satellite has a body-pointing capability within 60 deg. in a cone from nadir, which will contribute to emergency observations if, for example, a natural disaster happens.

Data products of ALOS-3

To support the missions explained in Section 2.1, JAXA plans to produce ALOS-3 products in two categories, i.e. the standard product and the high-level product. The former is basically a system-corrected product and is distributed by the data distributor.
The calibration of the instrument and the accuracy assessment of the standard product will be conducted by JAXA. The latter category is intended to demonstrate ALOS-3 capabilities in applications; JAXA will therefore conduct algorithm development, validation and accuracy assessment.

The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences, Volume XLIII-B1-2020, 2020 XXIV ISPRS Congress (2020 edition)

Table 4. Planned calibration items for ALOS-3 standard products
Table 5. Planned validation items for ALOS-3 high-level and research products

2.3.1 Standard product: Table 3 summarizes the ALOS-3 standard products and target accuracies; they will be operationally generated and distributed by the data distributor. These are system-corrected products using radiometric and geometric sensor models that will be provided by the sensor development team. The sensor calibration therefore directly affects the accuracies of the standard product and the quality of subsequent processing and products. In addition, a sensor model based on Rational Polynomial Coefficients (RPC) will be generated and contained with level 1B2. A rough ortho-rectified image is prepared as level 1C, using an existing DEM, i.e. the PRISM DSM called AW3D (Tadono et al., 2014). Table 4 summarizes the calibration items for the ALOS-3 standard products, planned to be evaluated after the launch of the satellite and monitored during the operational phase until the end of the mission. The calibration accuracy is expected to improve and stabilize as evaluations accumulate. It will also be very important to find degradations, which should be reflected in sensor model parameters and algorithm updates if necessary.
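The RPC sensor model carried with the level 1B2 product maps ground coordinates to image coordinates as a ratio of polynomials in normalised latitude, longitude and height. A real RPC uses 20-term cubic polynomials for each numerator and denominator; the sketch below uses a reduced illustrative term set with made-up coefficients, not ALOS-3 values:

```python
# Hedged sketch of the RPC ground-to-image mapping. A real RPC uses
# 20-term cubic polynomials in normalised (lat, lon, h); this reduced
# term set and the coefficients below are illustrative assumptions.

def rpc_project(lat, lon, h, num, den):
    """One image coordinate (line or sample) as a ratio of polynomials
    in normalised ground coordinates."""
    terms = [1.0, lon, lat, h, lon * lat, lon * h, lat * h]
    return (sum(c * t for c, t in zip(num, terms)) /
            sum(c * t for c, t in zip(den, terms)))

# Made-up, almost-affine coefficients for demonstration.
num = [100.0, 2000.0, -1500.0, 0.5, 0.0, 0.0, 0.0]
den = [1.0, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0]
print(rpc_project(0.2, 0.1, 0.05, num, den))
```

One such polynomial ratio is evaluated for the line coordinate and one for the sample coordinate; refining these coefficients against GCPs is what the RPC accuracy confirmation described above assesses.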
Table 5 shows the planned validation items for the high-level and research products of ALOS-3, which are important for demonstrating its capabilities. ALOS-3 will undergo initial checkout (ICO) during the three months after launch, then enter the operational phase. The first three months of the operational phase are assigned as the initial calibration phase, during which intensive evaluations of the instrument itself will be done and the sensor model parameters updated to improve the accuracies. Three months are not enough to complete all sensor calibrations, especially the evaluations of geometric accuracy and stability over seasons and years, which should be monitored continuously during the operational phase. The initial calibration should therefore be done by ICO + three months as the first priority. Even so, there may not be enough time to complete it, because ALOS-3/WISH is an optical instrument and the acquired images are affected by clouds.

Test site establishment with reference data

To conduct the initial calibration effectively, we are now preparing calibration test sites with reference data, not only in Japan but also worldwide. For the geometric calibration, ground control points (GCPs) are being collected as much as possible in the pre-launch phase. Figure 2 shows the preparation status of GCPs in Japan, where global navigation satellite system (GNSS) measurements are conducted along major coastal lines. These GCPs will be used not only for calibration but also for image correction by the bundle adjustment method in Japan. In particular, a dense GCP network is now being created in the Kanto district, as indicated by the red square in Figure 2.

Figure 6. Atmospheric corrected image processing flowchart

Figure 3 shows an enlarged map of the dense GCP network in the Kanto district, Japan, which covers an area of approx. 150 km east to west and 70 km north to south with a 5 km mesh of GCP measurements.
This network will be able to cover two satellite paths. Figure 4 shows the preparation status of GCPs worldwide, consisting of six major satellite paths at continental or semi-continental scales. Some of these GCPs were measured by us; others were collected on a commercial basis. Currently, approx. 6,300 precise GCPs have been prepared in total and registered in the GCP database. As other geometry-related test sites, several reference DSM sites have been prepared by airborne LiDAR measurements (Takaku et al., 2016). For the radiometric calibration, several test sites have been selected worldwide as candidates; they are covered by homogeneous targets and have been used for calibrating other satellite optical instruments so far. Figure 5 shows the locations of the radiometric calibration test site candidates. Some of them are defined as common calibration sites in the Committee on Earth Observation Satellites (CEOS) Working Group on Calibration and Validation (WGCV), and also have continuous ground-based in-situ measurements of surface reflectance, atmospheric parameters, etc. They will be very useful for cross calibration as well as absolute calibration.

Geometric calibration and DSM validation

For the geometric calibration of WISH, both relative and absolute calibrations will be done using reference data, i.e. GCPs, precise DSMs, and calibrated images from other sources. We will first evaluate the relative CCD alignments, i.e. the interior orientation, using the dense GCP test site shown in Figure 3. The WISH instrument has 12 CCD units to achieve the 70 km swath width, corresponding to approx. 6 km width per CCD unit. After evaluating the relative CCD alignments, the site will be used to evaluate image distortions within a scene, which will also be done at the dense GCP test site. We will calculate residual geometric errors in both the X and Y directions and estimate the updated parameters in the sensor model.
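The residual evaluation step described above is typically summarised as a root mean square error (RMSE) over the GCP residuals in each direction, the figure of merit behind targets such as "5 m without GCPs"; a minimal sketch with illustrative residuals:

```python
import math

# Hedged sketch of the residual summary step: residuals at GCPs in the
# X (sample) and Y (line) directions reduced to an RMSE figure of merit.
# The residual values below are illustrative, not measured ALOS-3 data.

def rmse(residuals):
    """Root mean square of a list of residuals."""
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

dx = [1.2, -0.8, 0.5, -1.5]   # metres, X direction at each GCP
dy = [0.9, 1.1, -0.4, 0.2]    # metres, Y direction
print(round(rmse(dx), 3), round(rmse(dy), 3))
```

The same statistic, computed before and after a sensor model parameter update, indicates whether the update actually improved the geometric accuracy.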
The band-to-band registration will be evaluated as the relative calibration of the multispectral bands. The co-registration accuracy between the panchromatic and multispectral bands will be evaluated using images acquired over test sites with gentle, typical terrain features; this is important for producing pan-sharpened products of good quality. The absolute geometric accuracy and its stability will be evaluated using the global GCP sites shown in Figure 4, which will also be used to evaluate the pointing control and pointing determination accuracies.
Figure 7. Example of atmospheric correction using WorldView-2 in Ina City, Nagano Pref., Japan
All geometric evaluations will be conducted by applying an orientation technique based on the collinearity condition, similar to the geometric calibration of PRISM. We will calculate and evaluate the sensor model parameters and update them. After the parameters are updated, the geometric correction accuracy and the RPC accuracy will be confirmed using the level 1B2 and 1C products. After the initial calibration, we will also validate the precise DSMs and ortho-rectified images (ORI) generated from ALOS-3 stereo-pair images as high-level products, using the reference GCPs and DSM test sites. To conduct the calibration and validation efficiently, particularly in the initial phase, we are preparing the calibration sites with reference data not only in Japan but also worldwide. However, this may not be sufficient, so we plan additional GCP measurements and reference data collection based on images acquired by ALOS-3 after the satellite is launched. Radiometric calibration and surface reflectance validation The radiometric calibration will likewise be carried out through both relative and absolute evaluations.
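As a minimal sketch of how a look-up table can map top-of-atmosphere (TOA) radiance back to surface reflectance, the toy linear "radiative transfer model" below (path radiance 5.0, gain 80.0) stands in for a real code such as Rstar; all numbers are invented, and a real table would also be indexed by the atmospheric and geometric conditions.

```python
import numpy as np

# Offline: tabulate TOA radiance for surface reflectances 0.0 .. 1.0 under
# one fixed (hypothetical) atmospheric/geometric condition.
rho_grid = np.linspace(0.0, 1.0, 51)
toa_lut = 5.0 + 80.0 * rho_grid           # monotone in rho, hence invertible

def toa_to_reflectance(toa_radiance):
    """Invert observed TOA radiance to surface reflectance via the LUT."""
    return np.interp(toa_radiance, toa_lut, rho_grid)

obs = np.array([5.0, 45.0, 85.0])
rho = toa_to_reflectance(obs)             # -> [0.0, 0.5, 1.0]
```

The same inversion structure applies when the table is produced by a full radiative transfer run per atmospheric state, as in the ATC processing described in this section.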
For the relative radiometric calibration, the pixel-to-pixel, CCD-to-CCD, and channel-to-channel sensitivity variations will be evaluated using data acquired over test sites with homogeneous targets as well as night-time observations. The sensitivity linearity and the quasi-dark current will also be evaluated to monitor their stability. After the relative calibration, the absolute radiometric calibration will be conducted using the lunar calibration mode, vicarious calibration, and cross calibration with calibrated data from other satellites over the test sites. An atmospherically corrected image (ATC) product will be generated as part of the high-level products to demonstrate the potential of ALOS-3. Figure 6 shows the processing flowchart currently planned for the ATC product at JAXA, which introduces a radiative transfer model (e.g. Rstar 7) to build look-up tables under various atmospheric and geometric conditions. The radiance acquired at the top of the atmosphere is the input to the processing; atmospheric parameters (e.g. ozone, water vapor, sea-level pressure, air temperature, aerosol) and a DEM are referenced as ancillary data; and the calculated surface reflectance is obtained as the output. Figure 7 shows a test result of surface reflectance derivation using a WorldView-2 image of Ina City, Nagano Prefecture, Japan. Figure 8 shows a validation example of the surface reflectance derived in Figure 7, where the Sentinel-2 surface reflectance product was used for comparison. The result indicates a root mean square error (RMSE) of 5.5% in the near-infrared (NIR) channel as the maximum error, and less than 1% RMSE in the blue, green, and red channels, respectively.
Figure 8. Comparison of the surface reflectance derived from WorldView-2 with Sentinel-2
Image quality evaluations The image qualities of WISH (e.g.
the modulation transfer function (MTF), the signal-to-noise ratio, the effects of data compression, and the time delay integration (TDI) characterization) will also be investigated using images acquired after launch. They will be compared with the pre-flight test results, and temporal changes will be monitored continuously during the operational phase. CONCLUSIONS This paper introduced an overview of ALOS-3's mission and products, and the calibration and validation plan that will be carried out from the initial calibration phase between three and six months after launch. This activity will continue as operational calibration to improve and monitor the accuracy of the standard products during the operational phase, updating the sensor model parameters as well as the algorithms if necessary. ALOS-3 is scheduled to be launched in this fiscal year. As explained in the paper, we are now preparing for the launch of the satellite as well as for the initial calibration, collecting the reference data and developing the evaluation tools. However, this may not be enough to complete the initial calibration, which depends on the number of actual data acquisitions over the test sites. Therefore, we have the option of collecting additional reference data from available ALOS-3 data. We will report on the calibration results in the near future.
Workplace Violence Experienced by Substitute (Daeri) Drivers and Its Relationship to Depression in Korea Workplace violence is related to various health effects, including mental illnesses such as anxiety and depression. In this study, the relationship between the experience of workplace violence and depression among substitute drivers in Korea, namely daeri drivers, was investigated. To assess workplace violence, the daeri drivers were asked about the types and frequency of violence experienced over the past year. To assess the risk of depression, the Center for Epidemiological Studies Depression Scale was used. Odds ratios with 95% confidence intervals for depression were estimated using multiple logistic regression analysis. All of the daeri drivers had experienced verbal violence while driving, and 66 of the drivers (34.1%) had been in such a situation more than once in the past quarter of a year. Sixty-eight daeri drivers (42.2%) had experienced some type of physical violence over the past year. Compared to daeri drivers who had experienced workplace verbal violence fewer than 4 times and those who had not experienced workplace physical violence over the past year, higher odds ratios were observed, after adjustment, in daeri drivers who had experienced workplace verbal violence more than 4 times or physical violence at least once. The experience of verbal or physical workplace violence over the past year increased the risk of depression in the daeri drivers. Because violence against drivers can compromise the safety of the driver, the customer, and all other passengers, it is imperative that the safety and health of daeri drivers be highlighted. INTRODUCTION One of the emerging jobs in Korea is that of a "daeri" driver. In Korean, "daeri" means substitute or proxy. Daeri drivers provide driving services to people who, for various reasons, require a driver to drive their cars.
It is well known that some Koreans are heavy alcohol consumers (1), and drinking is almost an integral part of business and social life in Korea. In 2009, in Korea, driving under the influence of alcohol (DUI) resulted in the injury of 104 in every 100,000 people and in the death of 2 in every 100,000 people (2). To curb the social costs incurred due to DUI, many efforts are being made in Korea to discourage it; for example, the police have started conducting random inspections of drivers. The government's crackdown on drinking and driving involves different penalty levels according to the alcohol concentration in the person's blood (3): an alcohol concentration below 0.2% entails a fine of USD 5,000, and a concentration above 0.2% entails a fine of USD 10,000 along with the revocation of the driver's license. The increasing rate of alcohol consumption and the sobriety tests being conducted on the road have given rise to the need for daeri drivers in Korea. For example, if a salesperson is drunk and wants to return home in his car, in order to avoid DUI, he would have to hire a daeri driver to drive his car instead of driving it himself. Hence, daeri drivers usually work at night, drive other people's cars to the required destination, and most often have to do so while dealing with a drunk customer. A qualitative study reported that daeri drivers suffered from low income, were looked down upon socially, worked in an unsafe environment, were treated unfairly by the call service office, and had to engage in emotional labor in relation to drunk customers (4). Recently, workplace violence has become an important issue in Korea. News reports on service workers who have experienced workplace violence are omnipresent.
According to an article analyzing the experience of workplace violence among 30,000 workers in Korea, almost 6% of all workers had experienced workplace violence, and among service workers, 10% had experienced it (5). Workplace violence is also a severe problem for health care workers. A study on nurses revealed that 71% of the nurses had experienced workplace violence during the past year (6). The prevalence of physical violence and sexual harassment was also high, with 22% of the nurses having experienced physical violence and 20% of them having experienced sexual harassment (6).
http://dx.doi.org/10.3346/jkms.2015.30.12.1748
Jobs that involve dealing with customers carry a risk of workplace violence. However, although daeri drivers are prone to experiencing workplace violence, there is a dearth of studies elucidating the risk of workplace violence faced by daeri drivers. Workplace violence is related to various health effects, including mental illnesses such as anxiety and depression (7). In the association between workplace violence and mental illness, certain job characteristics are known to aggravate the risk of developing such conditions (8). Furthermore, mental illness and job stress are related to occupational injuries and accidents (9). In that sense, workplace violence and its relationship to mental illness among daeri drivers can be a very important social problem, because injuries to daeri drivers are closely linked with car accidents. However, no studies have been conducted on mental illness in daeri drivers. Therefore, in this study, the relationship between the experience of workplace violence and depression in daeri drivers was investigated by conducting the first ever survey on daeri drivers' mental health. Data source and study population The survey was conducted in September 2014, using a structured, self-administered questionnaire.
Since daeri drivers do not have a fixed workplace, the survey was conducted on a downtown road near Sinnonhyeon station for 10 days, from 2:00 am to 4:30 am. The road on which the survey was conducted is close to the center of Seoul, where the demand for daeri drivers is high. More than 1,000 daeri drivers pass through that road, because it connects them to the other parts of the city. A total of 166 daeri drivers participated in the survey; however, the data of 5 participants who failed to complete all the questions on the questionnaire were excluded. Hence, the data of 161 participants were used for the final analysis. Questionnaire and study variables The questionnaire used in the study was developed based on the results of face-to-face interviews with daeri drivers. The questions representing demographic and occupational characteristics pertained to age, marital status, educational level, household income, length of time worked at the job, daily working hours, and number of working days per month. To assess workplace violence, questions regarding the type of violence (verbal or physical) experienced and the frequency of violent experiences over the past year were asked. For each type of violence, relatively minor events such as simple arguments, disturbing behaviors, or unreasonable requests from customers were not included; only verbal abuse such as swearing or threatening and direct physical assault were counted. For daily working hours, the standard of 8 hr per day (40 hr per week) presented in the Labor Standards Act was applied, and working days per month were categorized as a five-day or six-day work week. In order to assess the risk of depression in the daeri drivers, the Center for Epidemiological Studies Depression Scale (CES-D) was used.
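As a minimal sketch of this screening step (not the authors' code): the CES-D has 20 items scored 0-3 that are summed to a 0-60 total. The real instrument reverse-scores four positively worded items, which is omitted here, and the study's cut-off of 21 is assumed to mean a total score of 21 or more.

```python
CESD_ITEMS = 20
CUTOFF = 21   # cut-off used in this study (the original proposal was 16)

def cesd_score(responses):
    """Sum of 20 item scores, each 0-3 (reverse-scoring omitted)."""
    if len(responses) != CESD_ITEMS or any(not 0 <= r <= 3 for r in responses):
        raise ValueError("expected 20 item scores, each in 0..3")
    return sum(responses)

def at_risk(responses, cutoff=CUTOFF):
    """Assumed rule: total score >= cutoff flags risk of depression."""
    return cesd_score(responses) >= cutoff

print(at_risk([1] * 20))   # total 20 -> False
print(at_risk([2] * 20))   # total 40 -> True
```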
The CES-D is a tool widely used in many epidemiological studies to screen for depression (10); since it comprises simple questions and evaluates the severity of depression based on the duration of each symptom, it is known to be suitable for community-based epidemiological studies (11). When the tool was first developed, a cut-off score of 16 was proposed to differentiate depressed people from non-depressed ones (12); however, the CES-D cut-off score for the screening of depression varies according to the subjects and the purpose of each study (13). In addition, respondents' reports regarding the severity and frequency of depressive symptoms differ according to their socioeconomic status (SES) and environmental factors (14). At the same time, differences in culture and language are also an important factor influencing the reporting of depressive symptoms (15). Furthermore, since the study subjects were actively engaged in occupational activity, they were unlikely to be suffering from clinical depression. As a result, for this research, which was a community-based study involving the screening of depression in Korea, a cut-off score of 21 was used (16). Based on this cut-off score, the daeri drivers who participated in the study were divided into two groups. Statistical analysis The demographic and occupational characteristics of the drivers were evaluated, and the differences in the prevalence of depression as evaluated by the CES-D according to these independent variables were assessed by a chi-square test. For the independent variables representing workplace violence, odds ratios (ORs) with 95% confidence intervals (95% CIs) for depression were estimated using multiple logistic regression analysis. Model I was adjusted for demographic characteristics such as age, marital status, educational level, and household income.
Model II was additionally adjusted for occupational characteristics such as length of time worked at the job, daily working hours, and number of working days per month. The risks were expressed as ORs in relation to reference groups of daeri drivers who had experienced verbal violence during work less than once in the past quarter and those who had never experienced physical violence at the workplace over the past year. All the analyses were 2-tailed, and P values less than 0.05 were regarded as statistically significant. All the analyses were performed using SAS software, version 9.3 (SAS Institute, Cary, NC, USA). Ethics statement All of the participants provided written informed consent for their voluntary participation in the study. The identifying information of all of the participants was deleted before the analyses. This survey was approved by the institutional review board of the Yonsei University Graduate School of Public Health (IRB number: 2-1040939-AB-N-01-2015-303). Demographic and occupational characteristics of the daeri drivers The mean age of the daeri drivers was 53 yr and all of them were male. Among the 161 drivers, 91 (56.5%) were in their 50s, which is the age at which people in Korea typically retire from their jobs. A majority of the daeri drivers (108 drivers, 67.1%) were married or living with a partner. The educational levels of the daeri drivers were diverse, ranging from below high school to college. In the case of household income, 48.4% of the drivers made more than 2.5 million won per month (approximately USD 2,500/month), but 22.4% of the drivers made less than 1.5 million won per month (approximately USD 1,500/month). There was no difference in the prevalence of depression according to the demographic characteristics of the daeri drivers.
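For illustration of the odds-ratio analysis described above, a crude odds ratio with a Wald 95% confidence interval can be computed from a 2×2 cross-tabulation as sketched below; the counts are invented, and this does not reproduce the paper's adjusted estimates, which come from multiple logistic regression.

```python
import math

def crude_or(a, b, c, d):
    """a, b: depressed / not depressed among the exposed;
    c, d: depressed / not depressed among the unexposed."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts for an exposure such as frequent verbal violence.
or_, lo, hi = crude_or(16, 50, 11, 84)
```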
The duration for which the participants had worked as daeri drivers varied from less than a year to more than 6 yr, but a majority of the drivers (n = 112, 71.3%) had worked for less than 6 yr. Ninety-one drivers (57.2%) worked for 8 hr or less per day, but the remainder worked for more than 8 hr per day. With regard to the number of working days per month, 43 of the daeri drivers (26.7%) worked for less than 5 days per week. The analysis revealed that there was no difference in the prevalence of depression on the basis of occupational characteristics. With regard to workplace violence experienced over the past year, all of the daeri drivers had experienced verbal violence while driving, and 66 of them (34.1%) had experienced verbal violence more than once in the past quarter. In addition to verbal violence, 68 of the daeri drivers (42.2%) had experienced certain types of physical violence while at work. Depression was more prevalent in daeri drivers who had experienced verbal violence more than once in the past quarter (24.2%, P = 0.034), and in those who had experienced even 1 instance of physical violence over the past year (25.0%, P = 0.017) (Table 1). Table 2 shows the effects of verbal and physical violence experienced over the past year on the depression of the daeri drivers. The results of the crude analysis revealed higher ORs for the drivers who had experienced verbal violence more than once in the past quarter (2.44, 95% CI: 1.05-5.68) and for those who had experienced even 1 instance of physical violence in the past year (2.77, 95% CI: 1.18-6.51). In Model I, higher ORs were also observed. DISCUSSION In this study, all the daeri drivers had experienced verbal violence and about 42% of them had experienced physical violence over the past year.
Furthermore, the odds of developing depression were almost twice as high for workers who had experienced even 1 instance of physical violence in the past year and for workers who had experienced verbal violence more than 4 times a year. These significant relationships were not attenuated after adjustment for SES. To the best of our knowledge, this is the first study to investigate workplace violence experienced by daeri drivers and its relationship to the drivers' depression. There are many types of workplace violence experienced by daeri drivers, and the drivers' firsthand experiences obtained through interviews were also recorded in the current study. The following are excerpts from our interviews with the daeri drivers. One driver stated, "Once, I drove a car owned by an interior designer. The drunken designer fell asleep, and a little later, he suddenly got up and threw a hammer at me for no reason at all. Maybe he lost control, or didn't know what he was doing because he was drunk". Another driver stated, "After I had an argument with a customer, he suddenly took out a pair of scissors and threatened me, and then even tried to cut his own abdomen to threaten me. I had to stop him from running away from the scene and call the police". Yet another stated, "Some customers request us to speed and even ignore traffic rules, telling us to run red lights and cross the centerline". A fourth driver stated, "After having experienced workplace violence, I can no longer concentrate on driving as I am constantly worried". Thus, our qualitative interviews revealed that daeri drivers are prone to various dangerous situations due to workplace violence. Daeri driving involves being confined in the closed space of the car with the customer, involves the possibility of accidents, and also involves dealing with drunken customers. All these possibilities are serious stress factors for daeri drivers.
The daeri drivers in this study experienced workplace violence with higher frequency compared to workers in other professions. Although the frequency of workplace violence varies according to occupation, preceding studies on workplace violence in Korea suggest that its overall prevalence is less than 5% (5). Workplace violence is known to be associated with sick leave, burnout, a poor job retention rate, and depression (17,18). The prevalence of depression in the general elderly Korean population ranges from 4.6% to 7.5% (19), and other studies investigating the prevalence of depression by occupation also reveal that the current prevalence of depression in most occupations is 10% or less (20). The high frequency and intensity of workplace violence experienced by daeri drivers might have resulted in their high prevalence of depression, namely, 16.8%. As we pointed out earlier on the basis of the qualitative interviews, such a high prevalence of depression among daeri drivers might be due to the stressful workplace environment, including the high susceptibility to violence. Moreover, according to the results of our interviews, many daeri drivers chose this career because they were fired from their original jobs or failed in business; hence, a majority of daeri drivers suffer from a lack of social support or economic difficulties. Such conditions might also have contributed to the high prevalence of depression among daeri drivers. In addition, the large difference in perceived socioeconomic status between daeri drivers and their drunken customers might have aggravated depression in the drivers (21). If customers wish to hire a daeri driver, they call a service center for daeri driving. The service center uploads details regarding the customer's request (i.e., where the customer wants to go and where the customer is parked) to a web program.
The web program sends the customer's request to all the daeri drivers who are under contract with the company. All the contracted daeri drivers receive the information via a smartphone application that sounds an alert every time there is a potential customer. The first driver who contacts the customer gets him/her and, consequently, the money. Hence, even though there are numerous daeri drivers at the same place at the same time, they are not really co-workers, but rather competitors. Such a harsh and competitive working environment can make daeri drivers feel lonely, and loneliness itself is an important predecessor of depression. Since avoiding DUI is the main reason that people hire daeri drivers, the drivers usually work from evening until the early hours of the morning. Night jobs are known to be related to various neuropsychiatric problems including anxiety and depression (22). Thus, the fact that daeri drivers have to work at night may be one of the important causes of the high prevalence of depression among them. Traditionally, marriage is known to have a favorable effect on depression due to its functions of fulfilling unmet psychological needs and helping an individual cope with stressful events (23,24). On the contrary, lower SES is known to be related to a higher prevalence of depression (25). The monthly household income of over 50% of the daeri drivers in this study was 2.5 million won or less (≤ USD 2,500/month). Further, the results of the study showed a higher prevalence of depression among the married drivers. According to our qualitative interviews, many of the study subjects chose to become daeri drivers because they had lost their original jobs. We believe that economic hardships due to low household income and the psychological burden of supporting a family might have led to the higher frequency of depression among the married daeri drivers.
The results also revealed that there was a higher prevalence of depression among drivers who had completed junior college or graduated from college than among drivers who were high school graduates. Although a higher educational level is traditionally believed to have a protective effect on depression, the effect of educational level on depression is still inconclusive (26). Daeri drivers often choose their job after stressful events such as layoffs, and such occupational events are known to have adverse effects on psychological health and self-esteem (27). It is possible that, for highly educated drivers, choosing to work as a daeri driver as a second career after having been laid off results in a greater sense of loss, which may have resulted in a higher prevalence of depression among this group of drivers. Meanwhile, depression was more common in the group of daeri drivers who had been working the job for a short period of time. The authors believe that this finding is also related to the difficulty involved in coping with a new job as a daeri driver. According to the results of our qualitative study, due to the unique characteristics of the job, daeri drivers work alone and there is no peer group to teach them how to perform the job better. Isolation from people due to the necessity of working at night, the absence of a peer group at work, and the hardships involved in coping with these stressful situations may have collectively influenced the results of this study. Currently, in Korea, violence against people who are driving is a punishable offense under the Additional Punishment Law on Specific Crimes, and the penalty levels exceed those for common assault or threat crimes, since violence against drivers is directly related to citizens' safety (2). Daeri drivers are no different from other drivers, and their safety and mental health are directly related to traffic safety and to the safety and security of citizens.
The current study has several limitations. First of all, the cross-sectional study design did not allow us to determine the causal direction between workplace violence and depression. The results were based on interviews with only 166 daeri drivers in a metropolitan city, so they cannot be generalized to daeri drivers who work in small cities. Because we carried out the survey for just 10 days during the summer, the effects of seasonal changes on depressive symptoms should also be considered (28). Further, no information was collected on when exactly over the past year the violent events occurred. This is important information, as recent experiences of violence are likely to have a greater effect on depressive symptoms than remote ones. In addition, no information on the medical history of the daeri drivers was collected; this information, too, is important, as a past medical history and a family history of depression can affect the symptom level of depression. Lastly, since market-entry regulations for daeri driving do not exist, leaving the job also happens easily and quickly, especially in the case of diseased workers. As a result, relatively healthy workers are over-represented in cross-sectional studies (healthy worker survival effect). At the same time, daeri driving essentially involves dealing with customers face to face; considering this well-known nature of the job, the likelihood of entering daeri driving might be higher among workers with a higher adaptability to emotional work (healthy worker selection effect). Such healthy worker effects result in an underestimation of the association between the work environment and its impact on health (29).
Nevertheless, the significant relationship between workplace violence and depression in daeri drivers shown in the current study indicates that the degree of violence experienced by daeri drivers is profound, though a more comprehensive prospective study with a representative sample is needed to elucidate this relationship. Daeri driving is a new occupation created by society's need to cope with random police inspections of DUI drivers. Besides Korea, other countries also have certain types of daeri driving, but the job's characteristics differ greatly according to cultural differences and social needs, especially in terms of social support. In Japan, daeri drivers are supervised by the National Police Agency and the Ministry of Land, Infrastructure, Transport and Tourism, and market entry is also regulated by law; in addition, the fare for daeri driving in Japan is 1.5 to 2 times the taxi fare. In the United States of America, a paid membership service named I'm Smart exists, which provides daeri drivers in teams of two to registered customers only. In England, daeri drivers are covered by workers' compensation insurance. In Korea, on the other hand, social security for daeri drivers is absent (30). This study is the first in the world to examine the relationship between workplace violence and depression in daeri drivers. The results highlighted the fact that daeri drivers experience severe workplace violence, and that even a small number of violent incidents was related to an increased risk of depression. Moreover, because violence against drivers can compromise the safety of the driver, the customer, and all other passengers, the safety and health of daeri drivers should be highlighted, and measures should be taken to ensure that they receive adequate attention. DISCLOSURE The authors have no potential conflicts of interest to disclose.
AUTHOR CONTRIBUTION Conception and design: Jung PK, Won JU, Yoon JH. Acquisition of data: Lee JH, Seok H, Lee W. Analysis and interpretation of
On the construction of dense lattices with a given automorphism group We consider the problem of constructing dense lattices of R^n with a given automorphism group. We exhibit a family of such lattices of density at least cn/2^n, which matches, up to a multiplicative constant, the best known density of a lattice packing. For an infinite sequence of dimensions n, we exhibit a finite set of lattices that come with an automorphism group of size n, a constant proportion of which achieves the aforementioned lower bound on the largest packing density. The algorithmic complexity for exhibiting a basis of such a lattice is of order exp(n log n), which improves upon previous theorems that yield an equivalent lattice packing density. The method developed here involves applying Leech and Sloane's Construction A to a special class of codes with a given automorphism group, namely the class of double circulant codes. Introduction A lattice packing of Euclidean balls in $\mathbb{R}^n$ is a family of disjoint Euclidean balls of equal radius centered on the points of some non-degenerate lattice. The proportion of the space covered by these Euclidean balls is called the density of the packing. When balls of volume $V$ are packed by a lattice $\Lambda$, the corresponding density is $V \cdot \det(\Lambda)^{-1}$, where $\det(\Lambda)$ denotes the determinant of the lattice, i.e. the volume of a fundamental region. The classical Minkowski-Hlawka theorem states that for $n$ greater than 1 there exist lattice packings with density at least $2^{1-n}\zeta(n)$. This lower bound on the lattice packing density was later improved by a linear factor to a quantity of the form $cn2^{-n}$ for a constant $c$. This improvement is originally due to Rogers [8] with $c = 2e^{-1}$. The constant $c$ was successively improved by Davenport and Rogers [3] to $c = 1.68$ and eventually by Ball [1] to $c = 2$.
In the meantime, Rush [9], building upon a technique of Rush and Sloane [10], essentially recovered the original Minkowski-Hlawka lower bound on the largest density of a sphere packing using coding theory arguments together with the Leech-Sloane Construction A for lattices. While this did not achieve the improved density of the form cn 2^{−n}, it had the alternative advantage of being more effective than the proofs of the above results. Rush's construction exhibits in a natural way a finite number of lattices among which dense ones exist. This number, though still too large to be in any way practical, is much smaller than what can be derived by applying the original proofs of the results highlighted above: consequently, the algorithmic complexity of Rush's construction is of the form exp(n log n), which is a substantial improvement over the preceding ones (see [2], p. 18). Recently, the improved lower bound cn 2^{−n} on the minimum density was made as effective as Rush's lattice construction, with c = 0.01, for (non-lattice) sphere packings by Krivelevich, Litsyn and Vardy in [7]. They use an elegant graph theory method that enables them to find dense packings with a time (and space) complexity exp(n log n). In this paper, we again make the cn 2^{−n} lower bound as effective, with c ≈ 0.06, without paying the price of losing lattice structure. In fact, the dense lattice packings that we exhibit have additional algebraic structure, namely they come together with an automorphism group of size n. This additional structure is not a by-product of our method but is an essential reason for the improved density. This is a small step towards showing that, in the asymptotic setting, algebraic constructions can compete with unstructuredness, and maybe even stand out. The starting point of our approach is similar to that of [10] and [9]: it relies upon Construction A to transform codes in F_p^n into lattices of R^n.
The specificity of the Rush-Sloane method is to consider codes designed for a metric which is unconventional in F_p^n but specially adapted to the Euclidean metric in R^n. However, instead of indiscriminately looking for the best codes for this metric in the whole space F_p^n, we depart from [10, 9] by restricting our attention to an exponentially smaller set of codes, namely a class that has a given automorphism group (double circulant codes), and prove that a constant fraction of them yield lattices with improved density. Similar codes were also used in a coding theory context to improve the classical Gilbert-Varshamov bound for linear codes by a linear factor [4]. Exhibiting a lattice basis has algorithmic (time) complexity exp(n log n). The paper is organized as follows: in Section 2, we show how dense lattices are constructed from "dense" codes and we formulate our main results, Theorem 1 and Corollary 2. In Section 3 we show how to obtain good double circulant codes.

2 From dense codes to dense lattices

Let S_n denote the Euclidean ball of radius 1 in R^n; we have

  Vol(S_n) = π^{n/2} / Γ(n/2 + 1).    (1)

Let S_n(d) denote the Euclidean ball of radius d in R^n, so that we have Vol(S_n(d)) = d^n Vol(S_n). Let ρ ∈ R be the radius of a Euclidean ball of volume p^{n/2} for p any positive number, i.e. Vol(S_n(ρ)) = p^{n/2}. By (1) and Stirling's formula we have

  ρ = √(np / (2πe)) (1 + o(1)),    (2)

where o(1) will always be understood to mean a quantity that vanishes as n goes to infinity. For Λ a lattice of dimension n it is customary to define its minimum norm by µ = min { ⟨x, x⟩ : x ∈ Λ, x ≠ 0 }. The lattice Λ defines a packing of R^n by spheres of Euclidean radius √µ/2 and the density of this packing is given by

  Δ(Λ) = Vol(S_n(√µ/2)) / det(Λ),    (3)

where det(Λ) stands for the determinant of Λ. From now on let p be a prime. We identify elements z of F_p with the elements z̄ of Z such that z̄ ≡ z (mod p) and |z̄| ≤ p/2. With this convention, following [10, 9], we introduce the norm of a vector x = (x_1, · · · , x_n) in F_p^n as the non-negative real number

  ‖x‖ = (x̄_1² + · · · + x̄_n²)^{1/2}.    (4)

Let B_{n,p}(d) denote the set of vectors x ∈ F_p^n such that ‖x‖ ≤ d.
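This balanced-representative norm is easy to compute; the following minimal Python sketch (the helper names are our own, not the paper's) illustrates it for the toy case p = 7:

```python
# Sketch of the Rush-Sloane norm on F_p^n: each coordinate is identified with
# its representative of least absolute value in Z, and the squared norm is the
# ordinary Euclidean one of that integer lift.

def balanced_lift(z, p):
    """Representative z_bar of z mod p with |z_bar| <= p/2."""
    z = z % p
    return z - p if z > p / 2 else z

def fp_norm_sq(x, p):
    """Squared norm ||x||^2 = sum of squared balanced lifts."""
    return sum(balanced_lift(z, p) ** 2 for z in x)

# For p = 7 the lift of 5 is -2, so (5, 1, 0) has squared norm 4 + 1 + 0 = 5.
norm_sq = fp_norm_sq([5, 1, 0], 7)
```

Because the lift is the smallest-absolute-value representative, the norm of a codevector and of its lift to Z^n agree, which is what lets code distance transfer to a lattice minimum under Construction A.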
We shall only be dealing with values of d such that d < p/2, so that the lift x → x̄ maps B_{n,p}(d) injectively into the set of integer points of S_n(d); hence, by fitting the sphere S_n(d) inside a union of n-cubes of volume 1,

  |B_{n,p}(d)| ≤ Vol(S_n(d + √n/2)).    (5)

Let us call an [n, k, d, p] code a k-dimensional subspace C of F_p^n such that d equals the minimum of the norm ‖x‖ of a nonzero codevector x ∈ C. We will refer to d as the minimum norm of the code C. Recall that Construction A associates to a code C the lattice

  Λ(C) = { y ∈ Z^n : y mod p ∈ C }.

It is readily seen that this lattice has minimum norm µ = min(d², p²) and determinant p^{n−k}. In the following, we will always ensure that d ≤ p, so that the [n, k, d, p] code C yields by Construction A a lattice of R^n of norm d² with density (3)

  Δ = Vol(S_n(d/2)) / p^{n−k}.    (6)

By (5) this gives a density

  Δ ≥ 2^{−n} (1 + √n/(2d))^{−n} |B_{n,p}(d)| / p^{n−k}.    (7)

We shall prove:

Theorem 1. There exists a constant c such that for any n = 2q, q a large enough prime, there exists a prime p, n² log n < p ≤ (n² log² n)^{5.5}, and an [n, n/2, d, p] code C such that |B_{n,p}(d)| ≥ cn p^{n/2}. Furthermore, the automorphism group of C contains a subgroup isomorphic to Z/2Z × Z/qZ.

The condition n² log n < p in Theorem 1 will ensure that the term (1 + √n/(2d))^{−n} in (7) tends to 1 when n tends to infinity. This will enable us to obtain:

Corollary 2. There exists a constant c such that for any n = 2q, q a large enough prime, there exists a lattice of R^n with density at least cn/2^n and whose automorphism group contains a subgroup isomorphic to Z/2Z × Z/qZ. Such a lattice can be constructed with time complexity exp(n log n).

The numerical value of the constant c in Theorem 1 and Corollary 2 can be estimated to be at least (2 − 1/e)(2 + e²π)^{−1} ≈ 0.064.

3 Double circulant codes and random choice

Let q be a prime and n = 2q. Let A be a q × q circulant matrix over F_p and let H be the n × q matrix obtained by stacking the q × q identity matrix on top of A. The associated double circulant code C consists of the vectors x = (x_L, x_R) ∈ F_p^n, with halves x_L and x_R of length q, such that x_L + x_R A = 0; this simply means that C is the kernel of the mapping x → x^t H from F_p^{2q} to F_p^q. There is a natural action of the group G = Z/2Z × Z/qZ on the space F_p^n: the generator of the factor Z/qZ shifts both halves x_L and x_R cyclically by one position, and the generator of Z/2Z maps x to −x. The double circulant code C is invariant under this group action and so is the norm of any vector x.
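To make these objects concrete, here is a small illustrative Python sketch (our own helper names; the systematic generator form (I | A) is one common convention for double circulant codes, not necessarily the paper's exact normalization) of a circulant matrix, a double circulant generator, and the integer basis Construction A produces from a code given in systematic form:

```python
# Sketch: circulant matrix from its first row, double circulant generator
# matrix (I | A), and a Construction A lattice basis: the lifted generator
# rows together with p times the unit vectors on the remaining coordinates.

def circulant(row):
    """q x q circulant matrix whose i-th row is `row` shifted right i times."""
    q = len(row)
    return [[row[(j - i) % q] for j in range(q)] for i in range(q)]

def double_circulant_generator(row):
    """Generator matrix (I | A) with A = circulant(row), entries in Z."""
    q = len(row)
    A = circulant(row)
    return [[1 if j == i else 0 for j in range(q)] + A[i] for i in range(q)]

def construction_a_basis(gen_rows, p, n):
    """Basis of the lattice C + pZ^n for a systematic generator matrix."""
    k = len(gen_rows)
    basis = [list(r) for r in gen_rows]
    for i in range(k, n):
        basis.append([p if j == i else 0 for j in range(n)])
    return basis

# Toy example: q = 3, p = 7; the resulting 6x6 basis has determinant p^(n-k).
B = construction_a_basis(double_circulant_generator([1, 2, 0]), p=7, n=6)
```

Only the first row of A needs to be stored, which is why choosing a random double circulant code amounts to choosing one uniform vector in F_p^q.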
Note that Construction A applied to the code C will clearly yield a lattice whose automorphism group contains G. To show that double circulant codes with a large minimum norm d exist, we shall study the typical behaviour of d when a double circulant code is chosen at random. We now formalize this: consider the random double circulant code C_rand obtained by choosing the first row of A, the vector (a_1 . . . a_q), with a uniform distribution in F_p^q. We are interested in the random variable X(w) equal to the number of nonzero codevectors of C_rand of norm not more than w. In other words we define

  X(w) = Σ_{0 ≠ x ∈ B_{n,p}(w)} X_x,    (8)

where X_x is the Bernoulli random variable equal to 1 if x ∈ C_rand and equal to zero otherwise. Our strategy is to study the maximum value of w for which we can claim P(X(w) > 0) < 1; this will prove the existence of codes of parameters [n, n/2, d > w, p]. The core remark is now that, if y = g · x, then X_y = X_x. Let now B′_{n,p}(w) be a set of representatives of the orbits of the elements of B_{n,p}(w), i.e. for any x ∈ B_{n,p}(w), |{g · x, g ∈ G} ∩ B′_{n,p}(w)| = 1. We clearly have X(w) > 0 if and only if Σ_{0 ≠ x ∈ B′_{n,p}(w)} X_x > 0. Denote by ℓ(x) the length (size) of the orbit of x, i.e. ℓ(x) = #{g · x, g ∈ G}. Since E[X_x] is constant on orbits, the union bound gives

  P(X(w) > 0) ≤ Σ_{0 ≠ x ∈ B_{n,p}(w)} (1/ℓ(x)) E[X_x].    (9)

Since n = 2q = |G| and q is a prime, the possible values of ℓ(x) in (9) are the divisors 1, 2, q and 2q of |G| (and ℓ(x) = 1 would force x = −x, i.e. x = 0). In fact a closer look shows that ℓ(x) = q is not possible. For this to happen, one of the two halves of x, call it y, would have all its q cyclic shifts distinct, and the property that −y equals some cyclic shift of y. But then it would be possible to partition the set of cyclic shifts of y into pairs of opposite vectors; but q is odd, a contradiction. Therefore ℓ(x) ∈ {2, 2q} for nonzero x, and inequality (9) gets rewritten as

  P(X(w) > 0) ≤ (1/2) Σ′ E[X_x] + (1/2q) Σ″ E[X_x],    (10)

where Σ′ runs over the nonzero x ∈ B_{n,p}(w) of orbit length 2 and Σ″ over those of orbit length 2q. We now switch to evaluating the right hand side of (10).

Syndrome distribution

We need to study carefully the quantities E[X_x] = P(x ∈ C_rand), for x ∈ B_{n,p}(w). For x ∈ F_p^n, let us write the syndrome of x as x^t H = σ_L(x) + σ_R(x), where σ_L(x) = x_L and σ_R(x) = x_R A. For any vector u = (u_0, . . .
, u_{q−1}) of F_p^q, denote by u(Z) = u_0 + u_1 Z + · · · + u_{q−1} Z^{q−1} its polynomial representation in the ring R = F_p[Z]/(Z^q − 1). For any u ∈ F_p^q, let C(u) denote the cyclic code of length q generated by the polynomial representation of u (i.e. C(u) is the ideal generated by u(Z) in the ring R). We have:

Lemma 3. The right syndrome σ_R(x) of any given x ∈ F_p^n is uniformly distributed in the cyclic code C(x_R). Therefore, the probability P(x ∈ C_rand) that x is a codevector of the random code C_rand is 1/|C(x_R)| if x_L ∈ C(x_R), and 0 otherwise.

Proof: A little thought shows that σ_R(x) has polynomial representation equal to x_R(Z) a(Z), where a = (a_1, a_q, a_{q−1}, . . . , a_2) is the transpose of the first column of A. Therefore, the image of the mapping ψ : a → x_R(Z) a(Z), for fixed x, is the cyclic code C(x_R). Since this mapping is linear, every element of C(x_R) has the same number of preimages (namely |Ker ψ|); therefore, when the distribution of a is uniform in F_p^q, the distribution of σ_R(x) is uniform in the code C(x_R).

The choice of p and the cyclic codes C(x_R)

The right hand side of (10) will be easiest to study if there are as few as possible cyclic codes in F_p^q, i.e. if the ring R has as few as possible invertible elements, equivalently if Z^q − 1 has as few as possible divisors in F_p[Z]. The next lemma tells us how to ensure this, while simultaneously bounding from above the size of p, so as to retain some control over the overall construction complexity.

Lemma 4. For any n = 2q large enough, there exists a prime p in the range n² log n ≤ p ≤ (n² log² n)^{5.5} such that p mod q is a primitive element of Z/qZ.

Proof: We just need to find p in the required range such that (p mod q) is a primitive element in Z/qZ. Let Q = q² p′, where p′ is a prime such that 4 log n ≤ p′ ≤ 4 log² n: such a p′ exists for q large enough, and we have n² log n ≤ Q ≤ n² log² n. Let α < q be a positive integer that is a primitive element in Z/qZ. Since q is prime we have q ≠ 0 mod p′, so that we may choose ε_1 ∈ {1, 2} and ε_2 ∈ {0, 1} such that r = (1 + ε_1 q)(α + ε_2 q) is coprime to p′ and therefore to Q. Note also that, for q large enough, r is smaller than Q, is not prime, and is equal to α mod q. By Linnik's Theorem on least primes in arithmetic progressions, there exists a prime p such that p = r mod Q and p ≤ Q^L for a constant L. We have p = r = α mod q. Note that since r is not prime we have Q < p in addition to p ≤ Q^L. By a result of Heath-Brown [5] we have L ≤ 5.5.

For p as in Lemma 4 we therefore have exactly two non-trivial cyclic codes over F_p of length q, namely C_1, the subspace generated by the all-one vector (or the generator polynomial 1 + Z + · · · + Z^{q−1}), and its dual, C_1^⊥, with generator polynomial Z − 1. Now Lemma 3 implies that there are exactly two types of non-zero vectors of F_p^n such that P(x ∈ C_rand) is different from zero and from 1/p^q, namely:

• vectors x such that x_L ∈ C_1 and x_R ∈ C_1; we call them vectors of type 1. For these vectors we have P(x ∈ C_rand) = 1/p.

• vectors x such that x_L ∈ C_1^⊥ and x_R ∈ C_1^⊥; we call them vectors of type 2. For these vectors we have P(x ∈ C_rand) = 1/p^{q−1}.

Next, we study the number of these exceptional vectors to evaluate their contribution to the upper bound (10). A vector of type 1 has the form x = (α·1, β·1) with α, β ∈ F_p, so the number N_1(w) of possible values of (α, β) such that ‖x‖ ≤ w is the number of pairs of balanced representatives (ᾱ, β̄) with ᾱ² + β̄² ≤ w²/q. Therefore, for w < (p − 1)/2 (which is always going to be satisfied for n large enough and p chosen as in Lemma 4), and bounding from above by the area of a 2-dimensional disc,

  N_1(w) ≤ π (w/√q + √2)².

Therefore (2) gives N_1(w) ≤ (p/e)(1 + o(1)) for w = ρ(1 + o(1)). We now switch to evaluating the cardinality N_2(w) of the set A of vectors of type 2 in B_{n,p}(w). Now let B be the set of vectors y of F_p^n obtained by the following procedure:

1. choose a vector x ∈ A;

2. choose two coordinates i, j with 1 ≤ i ≤ q and q + 1 ≤ j ≤ 2q;

3. choose two integers l, r such that |l| ≤ ⌈√(tp)⌉ and |r| ≤ ⌈√(tp)⌉, where t is a constant to be determined later;

4. define y = (y_1 . . . y_n) by y_i = l, y_j = r and y_h = x_h for h ≠ i, j.
We now define the bipartite graph with vertex set A ∪ B by putting an edge between x ∈ A and y ∈ B if y is obtained from x by the above procedure. Let E be the set of edges of this graph. The degree of a vertex x ∈ A is clearly q²(2⌈√(tp)⌉ + 1)² ≥ 4tpq², so that we have |E| ≥ 4tpq²|A|. Recall that x is of type 2 means that x_1 + · · · + x_q = 0 and x_{q+1} + · · · + x_{2q} = 0. Now let y ∈ B. There is at most one way of modifying two given coordinates i, j, 1 ≤ i ≤ q, q + 1 ≤ j ≤ 2q, so as to obtain a vector x ∈ A. In other words the degree of a vertex y ∈ B is at most q² and |E| ≤ |B|q². We have therefore

  N_2(w) = |A| ≤ |B| / (4tp).

Now notice that if x ∈ A and y ∈ B are adjacent in the bipartite graph we have ‖y‖² ≤ ‖x‖² + 2tp(1 + o(1)), so that B ⊂ B_{n,p}(w′) with w′ = (w² + 2tp)^{1/2}(1 + o(1)). In particular we have w′ = ρ(1 + o(1)), so that, applying (11), we get

  N_2(w) ≤ p^{n/2} e^{2teπ} / (4tp) · (1 + o(1)).

Now choose t = (2eπ)^{−1} so as to minimize e^{2teπ}/(4tp), and we get

  N_2(w) ≤ (e²π / (2p)) p^{n/2} (1 + o(1)).

Proof of Theorem 1 and Corollary 2

We are now ready to prove the main result. We have proved that for such a value of c, some double circulant codes with minimum norm d ≥ w must exist.

Proof of Corollary 2: Let C be the code in Theorem 1. By inequality (5), since cn p^{n/2} ≤ |B_{n,p}(d)|, the quantity d + √n/2 must be greater than the radius of a Euclidean ball of volume cn p^{n/2}. As before, by equality (2), the code's minimum norm d must be greater than √(pn) multiplied by a constant, so that the term (1 + √n/(2d))^{−n} in (7) converges to 1 when n → ∞, since √p/n → ∞. Therefore (7) yields the announced density for the lattice deduced from the code C by Construction A. Construction A preserves the automorphism group of the code in the lattice. The construction complexity is simply that of going over all double circulant codes of length n over F_p (there are p^{n/2} of them), and checking, by exhaustive search over the p^{n/2} codevectors, whether they contain a vector of norm less than the required bound. The resulting complexity equals therefore p^n times quantities of a lesser order of magnitude, i.e.
p^{n(1+o(1))}, which is not more, by Lemma 4, than 2^{2L(1+o(1)) n log₂(n)}.

Concluding comments

• The proof of Theorem 1 shows that, by lowering the value of c, we can make all the contributions to the probability of the existence of a codevector of weight ≤ w vanish, except for the codevectors of type 1. In other words, for small values of the constant c, the asymptotic probability that the double circulant code-random lattice yields a packing of density less than cn 2^{−n} equals the non-vanishing probability (not more than 1 − 1/(2e)) that codevectors of type 1 exist. When this happens, not only does the packing density drop below cn 2^{−n}, but it drops below the Minkowski density altogether. In contrast, typical random lattice packings have a density of order 2^{−n} [11].

• The action of the automorphism group of the lattices presented here is not transitive on the set of coordinates; it has two orbits. Can one construct dense lattices with a transitive automorphism group?

• The automorphism group here has size (at least) n. Could alternative constructions yield an automorphism group of guaranteed larger size (potentially resulting in increased packing densities)?
Compatibility Conditions for Discrete Planar Structure

Compatibility conditions are investigated for planar network structures consisting of nodes and connecting bars; these conditions restrict the elongations of bars and are analogous to the compatibility conditions of deformation in continuum mechanics. The requirement that the deformations remain planar imposes compatibility. Compatibility for structures with prescribed lengths and its linearization are considered. For triangulated structures, compatibility is expressed as a polynomial equation in the lengths of edges of the star domain surrounding each interior node. The continuum limits of the conditions coincide with those in the continuum problems. The compatibility equations may be summed along a closed curve to give conditions analogous to the Gauss-Bonnet integral formula. There are rigid trusses without compatibility conditions, in contrast to continuous materials. The compatibility equations around a hole involve the edges in the neighborhood surrounding the hole. The number of compatibility conditions is the number of bars that may be removed from a structure while keeping it rigid; this number measures the structural resilience. An asymptotic density of compatibility conditions is analyzed.

Introduction

Two overdetermined problems for the deformation of material are considered: the discrete problem for structures with prescribed lengths and its linearization, the discrete problem of prescribed elongations. These problems approximate two continuum problems used for reference, the nonlinear continuum problem of a given Cauchy-Green tensor and its linearization, the continuum problem of prescribed strain. Their compatibility conditions are investigated, mainly in the discrete linear situation, where they are conditions on the inhomogeneous term of an overdetermined matrix equation. The compatibility equations of a general discrete structure may involve data from widely separated points and are not usually local.
However, for a truss to approximate a material, the compatibility conditions must have a local nature and limit to the continuum compatibility equations at all points as the discretization is refined. In a continuous material, the compatibility equations express how nearby deformations influence deformations at a point. For discrete structures that approximate a continuous material, the compatibility equations must be supported on diminishing neighborhoods of most points, which we call material points. A class of structures which approximate the material in planar domains are the triangulated structures. Compatibility conditions of a triangulated structure are localized to triangles neighboring a vertex; thus, as the triangulation is refined, all interior vertices are material points and approximate all points in the domain.

Motivation: compatibility equations in a truss

Suppose that one is to arrange given rigid bars (edges) into a triangulated structure in the plane. The lengths of edges cannot be arbitrary. The relations between the bar lengths that allow them to fit as edges in a structure are called discrete compatibility conditions for this structure. As a simple but essential example, consider a neighborhood of an interior node V of a triangulated surface, that is, a node surrounded by a closed chain of nodes and edges joined by radial edges to the center node; all edges are rigid as in Figure 1. The union of triangles that meet the center node is called a star neighborhood. If one edge is removed, the distance between its ends is still fixed by the remaining edges; therefore its length is related to the lengths of other edges. This dependence is a compatibility condition. It is defined at each interior node of the triangulated surface structure, depends only on the triangles neighboring the interior node, and follows from the neighborhood remaining planar; namely, the sum of angles between edges going around the vertex is 360°.
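The angle-sum formulation can be checked numerically from edge lengths alone; the sketch below (our own hypothetical helpers, not code from the paper) recovers each triangle's angle at the center node via the law of cosines and tests whether the angles around the node sum to 2π:

```python
import math

def center_angle(r1, r2, rim):
    """Angle at the center node of a triangle with radial sides r1, r2 and rim edge."""
    return math.acos((r1 * r1 + r2 * r2 - rim * rim) / (2 * r1 * r2))

def angle_excess(radial, rim):
    """2*pi minus the sum of center angles; zero iff the star can lie flat."""
    m = len(radial)
    total = sum(center_angle(radial[i], radial[(i + 1) % m], rim[i])
                for i in range(m))
    return 2 * math.pi - total

# Six unit radial edges with six unit rim edges tile the plane with
# equilateral triangles, so the excess vanishes.
excess = angle_excess([1.0] * 6, [1.0] * 6)
```

A nonzero excess signals that the given bar lengths cannot be assembled into a flat star neighborhood, which is exactly the discrete compatibility condition at an interior node.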
Hence the curvature atom K(V) (see equation (6)) vanishes at the center. We call the linearized compatibility condition at an interior node a wagon wheel condition.

Cauchy-Green Deformation Tensor

Let us recall the basic definitions of the kinematics of a deformable body [MH 1983]. Let B ⊂ E² be a Euclidean material disk domain with piecewise smooth boundary and coordinates (X_1, X_2), and S ⊂ E² the target domain with coordinates (x_1, x_2).

Figure 1: Interior node compatibility condition: neighborhood stays rigid after removing an edge.

An in-plane displacement φ : B → S has the Green Deformation tensor (Right Cauchy-Green Deformation Tensor) C(X) = (∇φ(X))^T ∇φ(X). The problem (NC) is to find the deformation φ given a positive definite Green tensor ζ, i.e. such that C = ζ. Hence ζ is a Riemannian metric on B, pulled back from the Euclidean metric at φ(X). At each point of B, ζ is a symmetric 2 × 2 matrix whose three entries depend on the two components of the vector φ; they are therefore not independent but are constrained by a differential compatibility condition. Because the deformation remains planar, the compatibility condition is equivalent to the Gaussian curvature K(ζ) of the metric being zero.

General, triangulated and triangular structures

Definition 1. There are three types of discrete structures.

• A general discrete structure or abstract truss S is a connected finite graph: the vertices V_i ∈ V form a finite set of points in the plane, and the edges E form a finite collection of pairs of distinct vertices E_ij linking V_i to V_j. Connected means there is an edge path between each pair of vertices in the graph connecting one vertex to the other. A general discrete structure need not be a planar graph. The nodes and links may coincide, may overlap, and there may be more than one edge linking a pair of vertices.

• A triangulated structure is obtained from the vertices and edges of a piecewise linear triangulation of a planar domain.
A triangulation of a domain is a tiling by non-overlapping closed triangles whose union is the closure of the domain. The triangles are allowed to intersect only in a common vertex or along a whole common edge. A triangulated structure is made up of the nodes and edges of a triangulated domain. It is assumed that edges of the triangles are straight line segments. The union of vertices and edges of a triangulation is sometimes called the one-skeleton of the triangulation (e.g., [Ha 2002, p. 5]). Trusses made of non-intersecting links may be viewed as triangulated structures with missing links.

• A triangular structure is a triangulated structure whose links are unit edges of the triangular lattice. The triangular lattice is the set of points in the plane of the form k e_1 + ℓ e_2, where k and ℓ are integers and the vectors are e_1 = (1, 0) and e_2 = (1/2, √3/2). All links have unit length and the angles between neighboring links are all 60°.

Some of our theorems apply to general structures. The majority of theorems apply to triangulated structures, since the formulation of compatibility depends on the triangles around an interior node forming a planar neighborhood of the node. Others are proved only for triangular trusses. Because the triangular trusses are the simplest to understand, many notions will be explained for triangular trusses first.

Energy of approximating discrete structures

If the deformation is elastic, one can associate with it an energy density e(ζ) which depends on the deformation and may be expressed as the infimum over displacements φ with ζ = (∇φ)^T ∇φ, subject to boundary conditions. This is equivalent to the constrained infimum over Cauchy Deformation tensors of ∫ e(ζ) dx, where ζ is subject to K(ζ) = 0 and boundary conditions; the latter formulation is frame-independent, which may be preferable for some problems. Suppose that the links of a discrete structure have a prescribed length ℓ_ij = length(E_ij).
The nonlinear existence problem (ND) is whether this structure may be realized as points in the Euclidean plane such that distances between endpoints equal the prescribed lengths of the edges (1). When the structure is the triangulation of a planar domain, the compatibility condition states that the curvature atom (the angle excess, which depends on the ℓ_jk) vanishes at each interior vertex: K(V_i) = 0 (see Section 2). In Section 2.2, it is shown that for triangulated structures the compatibility conditions for (ND) may be expressed as a polynomial equation in the ℓ_ij. Let e(ℓ_ij) denote the energy associated to the length of a link, which is a convex function of the length such that e(ℓ) > 0 if ℓ > 0 and e(0) = ∞. The elastic energy of the whole structure is W(S) = Σ_{i<j} e(ℓ_ij), subject to K(V_i) = 0 at interior vertices. Let a sequence of triangulated structures approximate a material domain such that the diameter of the triangles tends uniformly to zero. In the continuum limit, the compatibility conditions for (ND) tend to the Christoffel-Riemann compatibility conditions of (NC), namely, that the Gauss curvature of ζ vanishes (see Section 7.4). Being a function of lengths, the expression for the energy is frame-independent, that is, invariant under translation and rotation of coordinates. By minimizing W(S) plus the work of external forces with respect to the ℓ_ij that satisfy the compatibility equations, one finds the equilibrium configuration expressed in terms of optimal lengths of edges. The energy and the equilibrium equations can be represented in a coordinate-free form, which can be convenient for calculating the state of structures undergoing large deformation. In the linearized theory (see Section 3), the energy is quadratic in the elongations λ_ij of the edges and the compatibility conditions (wagon wheel conditions) become linear: the elastic energy is a quadratic form in the λ_ij, minimized subject to the linear constraints Bλ = 0, where B is the matrix of compatibility coefficients.
Existence of deformations

The deformation of a continuous material is a vector field of deflections that is described by a symmetric right Cauchy-Green tensor. Being planar means that the Green tensor is subject to pointwise differential constraints, the compatibility conditions. The linearization of the problem of finding configurations with a prescribed right Cauchy-Green tensor (NC) is the problem of prescribed strains (LC). For the discrete approximating structures (trusses), the nonlinear problem is to find a configuration in the plane whose links have prescribed length (ND). Its linearization is the prescribed elongations problem (LD). Both are overdetermined and also require compatibility conditions for their solution. The compatibility conditions restrict how the edges can be deformed. The same compatibility conditions are valid for the deformed and undeformed configurations; therefore they represent constraints for any deformed structure. When the deformations are small, we arrive at linearized compatibility constraints. If the corresponding compatibility conditions hold, then all four problems may be solved for the deformation, at least locally. The solvability of (ND) under the condition that curvature atoms vanish at interior vertices is a simple case of Alexandrov's theory of polyhedral approximation and is discussed in Sections 2 and 7.2. That the vanishing of the Riemannian curvature is sufficient for (NC) to be solvable locally is classical [L 1926]. The local solutions are piecewise Euclidean; thus the global solubility of (NC) follows from the global solubility of (ND), see Section 7.3. The global solubility of (LC) may be obtained similarly to (NC) [So 1956]. The (LD) problem is an overdetermined matrix equation, which is soluble if the compatibility equations hold (Section 3).
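The solvability criterion for an overdetermined linear system can be illustrated on a toy problem; the sketch below (our own helper, not the paper's formulation) tests whether the right-hand side satisfies the compatibility conditions, i.e. lies in the column space of the matrix, by exact Gaussian elimination on the augmented matrix:

```python
from fractions import Fraction

def is_compatible(A, b):
    """True iff A x = b has a solution (exact rational arithmetic)."""
    rows = [[Fraction(v) for v in row] + [Fraction(bv)]
            for row, bv in zip(A, b)]
    n_cols = len(A[0])
    pivot_row = 0
    for col in range(n_cols):
        pivot = next((r for r in range(pivot_row, len(rows))
                      if rows[r][col] != 0), None)
        if pivot is None:
            continue
        rows[pivot_row], rows[pivot] = rows[pivot], rows[pivot_row]
        for r in range(len(rows)):
            if r != pivot_row and rows[r][col] != 0:
                f = rows[r][col] / rows[pivot_row][col]
                rows[r] = [a - f * p for a, p in zip(rows[r], rows[pivot_row])]
        pivot_row += 1
    # Any leftover row must read 0 = 0; a nonzero last entry means b violates
    # a compatibility condition.
    return all(row[-1] == 0 for row in rows[pivot_row:])

# Three elongation equations in two displacement unknowns: the third equation
# forces the compatibility condition b3 = b1 + b2.
A = [[1, 0], [0, 1], [1, 1]]
compatible = is_compatible(A, [2, 3, 5])    # 5 = 2 + 3: condition satisfied
incompatible = is_compatible(A, [2, 3, 6])  # violates the condition
```

The number of independent conditions of this kind is the codimension of the range, which is the quantity studied in the next sections.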
Compatibility conditions with local support and material points

The compatibility conditions of a general structure may be nonlocal, as in Figure 3c, where Q may be far separated from the rest of the truss. However, only specific structures approximate a material. For a material, compatibility conditions express how the deformations of a material in the neighborhood of a point influence the deformation at the point; thus they are local in nature. The support of compatibility conditions in a sequence of approximating structures must tend to material points of the material. For example, the sequence of trusses as in Figure 6 with just a single northeast link added in the northeast corner will create a single non-local compatibility condition whose support does not converge to a material point. (LD) is obtained by linearizing (ND) rather than by discretizing (LC); nevertheless, discrete and continuous compatibility conditions are consistent. The solution of the discrete problem (ND) approximates the solution of (NC) (Section 7.2). We show that the continuum limit of the vanishing-of-curvature compatibility constraints in a triangulated structure gives the continuum vanishing-of-curvature compatibility condition of (NC) (Section 7.4). The continuum limit of the linearized compatibility constraints, the wagon wheel conditions at interior vertices (Section 3.5), is the compatibility condition of strains, Ink(ε) = 0.

Compatibility conditions (wagon wheel conditions) in the discrete linear problem (LD)

The number of compatibility conditions is the codimension of the range of the prescribed elongations problem (LD). We call a structure generic if this dimension is given by the Maxwell dimension C_M (18), which is merely the excess in the number of equations over the number of variables (Section 3.2). For a triangulated structure without holes, C_M is given by the number of interior nodes (19). Equivalently, a structure is generic if it is infinitesimally rigid.
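The Maxwell count just described can be sketched in a few lines (a hypothetical helper, assuming the standard planar count of 2 displacement components per node minus 3 rigid-body motions):

```python
def maxwell_dimension(num_nodes, num_edges):
    """Excess of elongation equations (one per edge) over displacement
    unknowns (two per node, minus three planar rigid-body motions)."""
    return num_edges - (2 * num_nodes - 3)

# Star neighborhood of one interior node with m rim nodes: m rim edges plus
# m radial edges give C_M = 2m - (2(m + 1) - 3) = 1, matching the single
# interior node (one wagon wheel condition).
m = 6
c_star = maxwell_dimension(num_nodes=m + 1, num_edges=2 * m)

# A bare triangle has 3 edges and 3 nodes: C_M = 0, no compatibility condition.
c_triangle = maxwell_dimension(num_nodes=3, num_edges=3)
```

For a generic (infinitesimally rigid) triangulated structure without holes this count reproduces the number of interior nodes, in line with the statement above.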
For triangulated structures which are generic, there is an independent compatibility equation supported in the star-neighborhood of every interior vertex, the wagon wheel condition, which is explained in Section 3.5. In this paper, we explore the meaning of C_M for generic triangulated structures.

• In a triangulated structure, the number of compatibility conditions is equal to the number of edges that can be removed while keeping the structure rigid. The presence of additional edges increases the resilience of the structure and helps to maintain its structural integrity when damaged. C_M provides a quantitative measure of resilience, which we use to analyze the damage caused by a crack, multiple faults, etc. (Section 6).

• C_M is the sharp upper bound on how many links may be removed from a truss before it loses rigidity.

• The minimal number of links that can be removed from a structure to make it flexible may be considerably smaller than the Maxwell dimension C_M. For example, in a triangular truss, this number is two: at any convex corner which is connected to the rest of the truss by two or three links, removing one or two will free up the remaining attached link.

• In a triangulated structure, C_M for (LD) may be easily computed from the topology of the structure, which facilitates the analysis of damaged structures (Section 6).

1.9 Compatibility as a measure of resilience of a periodic structure

For periodic triangular structures, in which we consider repeated fixed period cells with holes, we define the asymptotic density of compatibility conditions as the homogenized limit of the number of compatibility conditions divided by the area, as the number of period cells tends to infinity (51). This number depends on both the size of the holes in the period cell and the geometry of the holes. A periodic material with a single long hole in the period cell is weaker than if the same-sized hole were round, and this is weaker than many small holes of the same area.
Integral form of compatibility conditions

There is an integral compatibility condition for all simple closed curves in the domain. In the linearized problem (LD) for triangulated structures, when we sum up the compatibility condition elements localized at nodes interior to a closed curve, we obtain a relation between the elongations supported on a strip along the curve. Lemma 6 shows that the sum of all localized compatibility conditions of (LD) cancels on vertices interior to a closed loop. Theorem 7 shows that the sum of the interior compatibility conditions gives a boundary compatibility condition that is supported on the double layer, the edges entirely within one link of the loop. The analogous boundary compatibility condition for (ND) is that, for a connected domain, the total turning angle going around a loop, which can be computed from the edge lengths of the double layer, is 2π. The compatibility condition for (LC) is expressible as the vanishing of an exact two-form and may be integrated analogously to integrating a gradient field over the boundary curve of a subdomain. In Section 4.5, applying Stokes's Theorem shows that there is a boundary compatibility condition that is also a double layer: it depends on both the boundary curvature and the normal derivative of the prescribed strain. The analogous condition for (NC) is the Gauss-Bonnet Theorem for connected domains: the integral of the curvature of the outer boundary as a plane curve plus the angle excesses at the vertices equals 2π (27). The expressions for these quantities in the same local curvilinear coordinates are presented in Section 4.6.

The genericity of structures

A structure is generic if it is infinitesimally rigid. The computation of the number of compatibility conditions is simple for generic triangulated structures, but which structures are generic?
For triangular structures, genericity is proved by showing that the compatibility conditions given by the wagon wheel conditions supported in the neighborhoods of all interior points, together with the conditions coming from ring-girders around the holes, form a basis for all compatibility conditions (Theorem 13). The genericity of Bigon-Triangle-Prism Structures (BTP Structures) of Section 5.3, a large family of structures including triangulated structures, is proved in Section 5.4. Such structures are built out of smaller infinitesimally rigid structures, and their C_M can be computed from that of the pieces. A structure whose geometry is regular may be far weaker than one whose edges point in many different directions. For example, if the structure consists of two rigid pieces that are connected by n links from one piece to the other, and these connectors are parallel, then the structure is not rigid at all. However, roughly speaking, if all the connecting links have pairwise independent directions, then one must remove n − 2 of them before the structure loses its rigidity along the seam. Thus it has n − 2 additional compatibility conditions. This is the prism construction, proved in Sections 5.3 and 5.4. Previous Results Compatibility conditions are routinely used in the calculation of the stresses of loaded frames. The elongations of several rods that end at a node are determined by the deflection of that node, and compatibility conditions hold if several nodes are interconnected. These conditions are commonly expressed through the deflections, where they play an auxiliary role in the determination of the stress state of the structure. Here we study geometrical aspects of compatibility conditions in complex networks. Network structures have been studied by mechanical, material, and physical scientists as well as by mathematicians. In the nineteenth century, Maxwell found conditions under which mechanical structures made out of bars joined together at their ends would be stable [M 1864].
He used the method of constraint counting to estimate the dimension (the Maxwell dimension) of infinitesimal deformations for generic structures. Recently, Maxwell's ideas were revived by Thorpe and collaborators in studies of network glasses. Jacobs and Hendrickson [JH 1997] developed an algorithm, the pebble game, to compute this dimension exactly. They applied this algorithm to study percolation of rigidity, the transition of a floppy structure to a rigid one (see the overview in [TJCR 1999]). These studies count the nullity in the case when the system (13) for (LD) is underdetermined. In the present paper, we are concerned with the opposite problem for rigid structures and study the degree to which the structure is overdetermined. Using network models to approximate materials is fairly common [BKN 2013], although their use to approximate Green tensor equations is rare. The spring network model of Hrennikoff [Hr 1941] was pioneering both in approximating elasticity and in using finite elements. The modern finite element formulation is based on lattice models; see for example [Bd 2007], [BS 2010]. However, the problem of compatibility of the links is usually avoided by describing the deformation through the positions of the nodes, so that compatibility is automatically satisfied. Network models were used to study the resilience of lattices. These models describe the transition of the network when damaged elements are replaced by initially inactive "waiting links", resulting in waves of damage; see Cherkaev and Zhornitskaya [CZ 2003], Cherkaev, Vinogradov and Leelavanichkul [CVL 2006] and Cherkaev and Leelavanichkul [CL 2012]. During the transition, compatibility varies. The compatibility and self-deformation of various triangular lattices with two kinds of fixed-length links were studied by Cherkaev, Kouznetsov, and Panchenko [CKP 2010], where the local compatibility conditions were derived.
The study of compatibility conditions for node and bar structures was initiated by Krtolica [K 2016], who proved that the continuum limit of the discrete compatibility condition is the continuous one (Theorem 20). The continuum limit of a triangulated structure is a planar solid; its deformation obeys the (NC) continuum compatibility condition K = 0 (see Section 7.1). We show that the discrete compatibility condition of (ND) tends to this condition. Similarly, we show that the (LD) compatibility condition tends to the one for (LC). Our interpretation of the network approximation (ND) of the prescribed Green's tensor problem (NC) as a polyhedral metric approximating a Riemannian surface metric is based on A. D. Alexandrov, e.g., [AZ 1962]. An alternative homogenization procedure for discrete models uses Γ-convergence (see the books by Braides [Br 2006]). Nonlinear Discrete Prescribed Length Problem (ND). Let v be the number of vertices and e the number of edges in a general structure. In addition to this combinatorial data, we associate a length ℓ_ij to each edge. If the structure is concretely realized as points and segments of the plane, each edge has a positive length ℓ_ij induced from the Euclidean metric. The realization problem asks to find the positions of the vertices in the Euclidean plane if the lengths are prescribed for given combinatorial data. The lengths are assumed to satisfy the triangle inequality: whenever V_i, V_j and V_k are the vertices of a triangle, then $\ell_{ik} \le \ell_{ij} + \ell_{jk}$. (2) Equality in the triangle inequality corresponds to a degenerate triangle with three collinear points. The realization of a truss may have multiple vertices at the same position in the plane, multiple edges connecting a pair of vertices, and degenerate triangles. We consider flexing a truss in the plane in such a way as to preserve the lengths. The abstract truss may not really be constructible as a linkage that flexes, because the links may have to pass through themselves in order to do so.
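As a concrete instance of the realization problem, a single triangle with lengths satisfying the strict triangle inequality can be placed in the plane explicitly. The following Python sketch (function name and coordinate conventions are ours, not the paper's) pins V_i at the origin and V_j on the positive x-axis:

```python
import math

def realize_triangle(l_ij, l_jk, l_ik):
    """Place a triangle with prescribed side lengths in the plane:
    V_i at the origin, V_j on the x-axis, V_k located via the cosine law.
    Requires the strict triangle inequalities."""
    assert l_ik < l_ij + l_jk and l_ij < l_jk + l_ik and l_jk < l_ik + l_ij
    # Cosine of the angle at V_i, between the sides of lengths l_ij and l_ik.
    cos_i = (l_ij**2 + l_ik**2 - l_jk**2) / (2.0 * l_ij * l_ik)
    Vi = (0.0, 0.0)
    Vj = (l_ij, 0.0)
    Vk = (l_ik * cos_i, l_ik * math.sqrt(1.0 - cos_i**2))
    return Vi, Vj, Vk

Vi, Vj, Vk = realize_triangle(3.0, 4.0, 5.0)
```

The returned positions reproduce the prescribed lengths; as in Theorem 1 below, the development is unique up to rigid motion and reflection (the sign of the y-coordinate of V_k).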
To an abstract truss which is combinatorially a triangulated surface B, we can associate a piecewise linear surface. Triangles with prescribed edge lengths ℓ_ij may be filled in by triangular pieces of the Euclidean plane with the given edge lengths. The metrics from the triangular pieces glue together to form a piecewise-linear (PL) metric g_B on the filled-in truss. The existence problem asks whether an abstract triangulation of a disk with a given metric comes from a truss in the Euclidean plane. We say that a PL immersion x : B → E^2 is a realizable configuration if the metric g_B induced on B is the pull-back of the Euclidean metric, $g_B = x^* g_{E^2}$. (1) In other words, is it possible to map the filled-in triangulated B to the plane in such a way that the lengths measured in the Euclidean plane correspond to the filled-in lengths, so that the image of each edge has the Euclidean length given by ℓ_ij? Solving (ND) for Triangulated Structures Suppose that the truss is an abstract triangulated planar domain, namely, it is the one-skeleton (vertices and edges) of a triangulated domain, embedded in the plane and bounded by g + 1 closed curves (g is the genus). For simplicity, the curves are required to be disjoint and simple. Simple means that the curves have no self-intersections, so the domain has no pinch points. Let f be the number of triangular faces. The Euler characteristic (e.g., [O 1966, p. 378]) for such a triangulated domain is given by the formula $v - e + f = 1 - g$, (3) where v is the number of vertices, e the number of edges and f the number of faces (triangles). If v_b and v_i denote the number of boundary and interior nodes, and e_b and e_i the number of boundary and interior edges, then $v = v_b + v_i$ and $e = e_b + e_i$; moreover, since every face has three edges, every interior edge bounds two faces and every boundary edge bounds one, we have $3f = 2e_i + e_b$ and $e_b = v_b$. (4) Substituting into Euler's formula, it follows that $e = 3v_i + 2v_b - 3 + 3g$. (5) The curvature atom K(V_i) at a node, which is the discrete analog of Gaussian curvature, is defined to be the angle excess [AZ 1962, p. 8]. If V_i is an interior vertex and V_{i_1}, ..., V_{i_k} are its adjacent vertices taken cyclicly, then $K(V_i) = 2\pi - \sum_{j=1}^{k} \angle(V_{i_j}, V_i, V_{i_{j+1}})$, (6) where V_{i_{k+1}} = V_{i_1}, k is the valence (number of links) at the vertex V_i, and ∠(A, B, C) is the Euclidean angle included between the vectors BA and BC. In the piecewise Euclidean metric g_B, this is determined from the side lengths by the cosine law $\cos \angle(V_{i_j}, V_i, V_{i_{j+1}}) = \frac{\ell_{i\,i_j}^2 + \ell_{i\,i_{j+1}}^2 - \ell_{i_j i_{j+1}}^2}{2\,\ell_{i\,i_j}\,\ell_{i\,i_{j+1}}}$. (7) A necessary and sufficient condition that the nonlinear realization problem is solvable is that the curvature atom vanishes at each interior vertex. Theorem 1. A necessary and sufficient condition that the edge lengths of a combinatorial triangulated disk may be nondegenerately developed (mapped jigsaw-puzzle-wise, triangle by triangle) in the Euclidean plane is that the strict triangle inequality ℓ_ik < ℓ_ij + ℓ_jk hold (compare (2)) for all triangles and that the curvature atoms vanish, $K(V_i) = 0$ at every interior vertex. (8) The development is unique up to reflection or rigid motion. This theorem is proved in the Appendix, Section 9.3. Polynomial Expression for the Compatibility Conditions for (ND) In this section, we show that the compatibility conditions for the prescribed lengths problem (ND) of a triangulated truss may be expressed as polynomial equations in the lengths of edges. We use the standard procedure for reducing systems of equations involving radicals to polynomial systems. Theorem 2. Let V_i be an interior node of a triangulated truss. For the prescribed lengths problem (ND), the compatibility condition that the curvature vanishes (8) may be expressed as a polynomial equation in the lengths of the edges of the triangles that meet V_i. Proof. If V_i is an interior vertex and V_{i_1}, ..., V_{i_k} are its adjacent vertices taken cyclicly, let φ_j denote the included angle ∠(V_{i_j}, V_i, V_{i_{j+1}}). Then, by (6), the condition K(V_i) = 0 is equivalent to $\cos(\varphi_1 + \varphi_2 + \cdots + \varphi_k) = 1$. (9) Expanding using the standard addition formulas, this gives a homogeneous trigonometric polynomial in sines and cosines of order k. The cosines cos(φ_i) are rational, homogeneous of degree zero functions of the lengths by the cosine law (7). The sines satisfy $\sin \varphi_i = \sqrt{1 - \cos^2 \varphi_i}$.
Multiplying through by the denominators, the condition may be written $Z = \sum_{j=1}^{N} P_{1j}\sqrt{Q_{1j}} = 0$, (10) where the P_1j and Q_1j are homogeneous polynomials in the lengths and where there are N = 2^{k−1} terms. To eliminate the irrational terms, raise Z to the powers 2, 3, ..., 2^{k−1}. Note that even powers of √Q_1j are polynomial and odd powers are the product of a polynomial and √Q_1j. Counting the types of products of radicals that occur in the multinomial expansions, there are altogether 2^k different types. Writing these types as Q̃_j, we get a linear system (11) for the Q̃_j's. Solving the system gives rational expressions for the Q̃_j, which may be substituted into (10). Multiplying through by the denominator, we arrive at the desired polynomial equation. Triangle example of the polynomial equivalent of K(V_i) = 0. There are shortcuts for the three-sided star about the interior vertex V surrounded by V_1, V_2 and V_3 (Figure 1b). The sum of the angles is φ_1 + φ_2 + φ_3 = 2π, so we obtain cos φ_3 = cos(φ_1 + φ_2) = cos φ_1 cos φ_2 − sin φ_1 sin φ_2. By substituting the squares of the radial lengths p_i = |V_i − V|^2 and of the circumferential lengths q_ij = q_ji = |V_i − V_j|^2, this transforms to a sixth-order polynomial equation in the lengths, in which i ≠ j ≠ k ≠ i: in the first sum j = i + 1 mod 3 and k = i + 2 mod 3, and in the second sum k(i, j) = 6 − i − j mod 3 is the index other than i and j. This expression is invariant under permutation of the V_i's. Hexagon polynomial version of K(V_i) = 0. For the central point of a hexagonal star, the condition (9) becomes a sum of products of six sines or cosines. One can check that the sines occur in even powers. Fifteen terms have two sines and four cosines, fifteen terms have four sines and two cosines, one term has all cosines and one term all sines; 32 terms in all. The expression (10) is a homogeneous function of degree six in the squares of the lengths of the edges.
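Before estimating degrees, note that the conditions (6)-(8) are easy to evaluate numerically. A small Python sketch (helper names are ours) computes the curvature atom of a star from its radial and rim lengths:

```python
import math

def included_angle(p, q, c):
    # Cosine law (7): the angle opposite the side of length c
    # in a triangle with sides p, q, c.
    return math.acos((p * p + q * q - c * c) / (2.0 * p * q))

def curvature_atom(radial, rim):
    """Angle excess (6) at an interior vertex: radial[i] are the spoke
    lengths, rim[i] joins neighbor i to neighbor i+1 (cyclically)."""
    k = len(radial)
    total = sum(included_angle(radial[i], radial[(i + 1) % k], rim[i])
                for i in range(k))
    return 2.0 * math.pi - total

flat = curvature_atom([1.0] * 6, [1.0] * 6)            # regular unit star
dented = curvature_atom([0.9] + [1.0] * 5, [1.0] * 6)  # one spoke shortened
```

Here flat vanishes, so the regular unit hexagonal star develops in the plane (Theorem 1); shortening one spoke while keeping the rim makes the angle sum exceed 2π, so dented is negative and these lengths are not realizable.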
Since there are only 32 different types of terms, the elimination procedure above requires raising this expression to the powers 2, 3, ..., 31. The solution of the resulting system (11) is substituted into (9) to get an expression of the compatibility condition which is rational in the lengths of the twelve sides of the hexagon. For a rough estimate of the degree of the polynomial, note that the highest degree of the coefficients of the equation (11) is 6n in the squares of the lengths. Thus the degree, in the squares of the lengths, of the polynomial compatibility condition is bounded by the degree of the coefficients times the degree of the determinant of the system: degree in squares of the lengths ≤ 6 × (6 + 6·2 + ··· + 6·31) = 17,856. The Linearized Discrete Problem of Prescribed Elongations (LD). The linearization of the discrete nonlinear realization problem (ND) is to prescribe the infinitesimal elongations along edges and solve for the infinitesimal displacements. 3.1 (LD) equations and the number of compatibility conditions for a generic truss. The infinitesimal deformations may be regarded as velocities of the nodes. To derive the equations, suppose that we have a time-dependent immersion of an abstract truss x(X, t) : B × (−δ, δ) → E^2, where x(X, 0) is the reference configuration. Differentiating (1) with respect to time gives, for all edges, $\frac{x_i - x_j}{|x_i - x_j|} \cdot (\dot x_i - \dot x_j) = \dot\ell_{ij} =: \lambda_{ij}$. (12) Stacking the displacements, we get the unknown vector U of dimension 2v. The linear system (12) may be written $AU = \Lambda$, (13) where A is an e × 2v matrix and Λ is the e × 1 column matrix of elongations. The null space of A, the velocities that do not change lengths up to first order, consists of the infinitesimal deformations; its dimension is n, the nullity of A. In the Euclidean plane, the velocities of rigid motions form a three-dimensional space, consisting of two translation dimensions and one rotation dimension. Velocity fields of rigid motions are always infinitesimal deformations and are, therefore, in the null space. Hence n ≥ 3.
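The system (13) can be assembled directly from the geometry. A minimal numpy sketch, using our own example of a hexagonal "wagon wheel" truss (a center joined to six ring vertices, with the six rim edges):

```python
import numpy as np

def elongation_matrix(pos, edges):
    """The matrix A of (12)-(13): the row for edge (i, j) gives
    lambda_ij = w . (u_i - u_j), with w the unit edge vector."""
    A = np.zeros((len(edges), 2 * len(pos)))
    for r, (i, j) in enumerate(edges):
        w = (pos[i] - pos[j]) / np.linalg.norm(pos[i] - pos[j])
        A[r, 2 * i:2 * i + 2] = w
        A[r, 2 * j:2 * j + 2] = -w
    return A

# Hexagonal wheel: vertex 0 at the center, vertices 1..6 on the unit circle.
th = np.arange(6) * np.pi / 3
pos = np.vstack([[0.0, 0.0], np.c_[np.cos(th), np.sin(th)]])
edges = [(0, i) for i in range(1, 7)] + [(i, i % 6 + 1) for i in range(1, 7)]

A = elongation_matrix(pos, edges)        # 12 x 14
n = 2 * len(pos) - np.linalg.matrix_rank(A)
# n == 3: the null space contains only rigid motions,
# so the wheel is infinitesimally rigid.
```

The nullity n = 3 confirms infinitesimal rigidity for this example; a flexible truss would have n > 3.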
These n degrees of freedom of motion must be pinned down by boundary conditions to determine the displacements uniquely. A truss is called infinitesimally rigid if the infinitesimal deformations are three-dimensional: they consist exactly of the velocity fields of rigid motions. An infinitesimal deformation which does not come from a rigid motion is called a nontrivial flex. Usually the system (13) is overdetermined. If the truss is infinitesimally rigid with the fewest possible edges, that is, if A has full rank and 2v − 3 = e, then we say the structure is statically determined. An example of a rigid truss that is not infinitesimally rigid is given in Figure 2. In such infinitesimally flexible trusses, there is equality in the triangle inequality for the nodal distances at the flexing vertex. Not every Λ ∈ R^e may be realized as the elongations of edges when a truss undergoes a deformation. The codimension of the range of A is called the number of compatibility conditions c. In fact, we may find a c × e matrix B of independent rows, called the compatibility conditions, such that range A = ker B. Since all the geometry is encoded in A, it is natural that the matrix B may be computed from A. Lemma 3. For any (not necessarily triangulated) truss in the plane, the compatibility matrix B may be computed as the first e columns of the matrix B̃ constructed below, where Ã is the matrix A augmented by a rank-three 3 × 2v matrix annihilating the rigid motions (14). Proof. Let A_0 be any rank-three 3 × 2v matrix that annihilates the translations and the rotation, and let B_0 be a c × 3 matrix. If we augment the matrix A by A_0, with a correspondingly augmented right side, then the kernel ker Ã vanishes if and only if the truss is infinitesimally rigid. In that case, the augmented system ÃU = Λ̃ (15) may be solved for U (16). Eliminating U from (15), we formulate the compatibility condition in terms of a matrix B̃ annihilating the augmented right side; B is then just the first e columns of B̃. The Maxwell Number of compatibility conditions in a generic truss. In this section, we assume that the trusses come from triangulated planar domains.
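Lemma 3 obtains B by augmentation and elimination. An equivalent numerical route (our choice, not the paper's construction) reads off the rows of B as a basis of the left null space of A, since range A = ker B. For the wheel truss used above, there is exactly one compatibility condition:

```python
import numpy as np

def elongation_matrix(pos, edges):
    # Row for edge (i, j): lambda_ij = w . (u_i - u_j), w the unit edge vector.
    A = np.zeros((len(edges), 2 * len(pos)))
    for r, (i, j) in enumerate(edges):
        w = (pos[i] - pos[j]) / np.linalg.norm(pos[i] - pos[j])
        A[r, 2 * i:2 * i + 2], A[r, 2 * j:2 * j + 2] = w, -w
    return A

# Hexagonal wheel truss: six spokes followed by six rim edges.
th = np.arange(6) * np.pi / 3
pos = np.vstack([[0.0, 0.0], np.c_[np.cos(th), np.sin(th)]])
edges = [(0, i) for i in range(1, 7)] + [(i, i % 6 + 1) for i in range(1, 7)]
A = elongation_matrix(pos, edges)

# Left null space of A: columns of U beyond the rank of A span it.
U, S, Vt = np.linalg.svd(A)
rank = np.linalg.matrix_rank(A)
B = U[:, rank:].T                 # c x e matrix; here c = e - rank = 1
```

Up to sign and normalization, the single row of B weights the six spokes by +1 and the six rim edges by −1: this is the wagon wheel condition derived in Section 3.5.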
Using e_b = v_b and (5), the quantity $C_M = e - 2v + 3 = v_i + 3g$ (17) is nonnegative, and we can estimate the number of compatibility conditions from below by the number of equations minus the number of unknowns plus the dimension of the rigid motions; this is the Maxwell dimension observed by James Clerk Maxwell [M 1864]. Thus, a simply connected truss which is statically determined has C_M = 0. The number of compatibility conditions is the maximal number of edges that may be removed from the truss while keeping it infinitesimally rigid. In terms of the dimension of infinitesimal deformations (the nullity n of A) we have rank(A) = 2v − n, so c = e − 2v + n ≥ C_M. We shall call the truss generic if its number of compatibility conditions is the Maxwell number. A truss is generic if and only if it is infinitesimally rigid. We will characterize the number of compatibility conditions geometrically for a large family of trusses in Theorem 13. For a triangulated disk, the genus is g = 0; thus, by (17), the Maxwell dimension equals the number of interior vertices. An example of a non-generic truss. In the infinitesimally flexible unit-length triangular truss illustrated in Fig. 2, there are v = 13 vertices and e = 24 edges, so the Maxwell dimension is C_M = e − 2v + 3 = 24 − 26 + 3 = 1. However, the kernel admits a nontrivial infinitesimal flex at vertex k in the vertical direction, so n = 4 and c = 2. Adding a link E_fk or E_gk would make the truss infinitesimally rigid. Recoverable and unrecoverable damage. Given a triangulated truss, we may suppose that some subset of the links, which we designate as damaged, is removed from the truss. If the resulting subtruss is still rigid, we call the damage recoverable. Thus any rigid truss may be regarded as a triangulated disk with recoverable damage. In Figure 3, removing more of the middle of the truss (dashed lines) results in a flexible truss. As another simple example, suppose one of the boundary vertices is a convex corner connected to the rest of the truss by two edges.
If one of the edges is damaged, then this corner is free to flex. The remaining free edge cannot be stretched; it carries no energy and thus is invisible to an elastic model. If the damage is interior to a truss (Figure 3b), it may only mean that some inner vertices are not determined from the others, and from the outside the subtruss looks rigid. The deformation at the inner vertex V_i cannot be determined. Still, the damaged triangulated truss may be viewed as a general truss. Moreover, any rigid triangulated truss may be viewed as a triangulated disk with recoverable damage. In the linear theory, we may suppose that we remove k links from the truss. This is equivalent to removing k rows from A. Let A_0 be a 3 × 2v matrix that annihilates the rigid motions, let A′ be the k × 2v matrix of rows corresponding to the removed edges, and let A″ be the (e − k) × 2v matrix of rows corresponding to the remaining links. If the damage is recoverable, the null space of A″ consists only of the rigid motions. Put Â equal to A″ augmented by A_0. If the damage is recoverable, Â has full rank and ÂU = Λ̂ may be solved as in (16). Hence the elongations of the removed links may be expressed in terms of the remaining ones, and the compatibility conditions for the reduced system are given by a matrix B̂; the compatibility matrix is just the first e − k columns of B̂. Thus any truss may be viewed as a damaged triangulated truss. For a simply connected triangular truss, it is shown in Theorem 12 that a basis for the compatibility conditions is given by the wagon wheels centered on the interior vertices. The support of a wagon wheel condition is localized to the triangles immediately neighboring the vertex. In case the truss suffers recoverable damage, the compatibility conditions are different. For example, if several links are removed from the middle of the truss, making a hole, then several compatibility conditions have to be combined to account for the hole. Relation to Elasticity Theory We focus on the geometry of the strain of a material.
However, in the constitutive theory, deformations are useful because the elastic energy in a link is often modeled as a function of the deformation. The infinitesimal displacements are related to the elongations via AU = Λ. In the linear case, let C = diag(c_11, c_22, ...) be the e × e elasticity matrix of positive spring constants for the edges. By Hooke's law, the forces CΛ along the edges are proportional to the elongations. Then A^T CΛ are the forces at the vertices, and K = A^T CA is the stiffness matrix, which is nonnegative definite with rank 2v − n. If F is the 2v × 1 vector of forces applied at the vertices, then force balance is A^T CΛ = F. The equation for balanced forces may be solved for elongations or displacements. The continuum analog is the elasticity system $\rho\,\ddot u = \operatorname{div}\big(c(x)\,e(u)\big) + f$, where ρ is the mass density, f is an external body force and c(x) is the elasticity tensor; here A^T is the analog of div and B is the analog of the incompatibility operator Ink(·). Compatibility conditions for the triangulated truss. In this section, we consider triangulated trusses only. We analyze compatibility conditions on the elongations λ_ij, that is, necessary conditions that are satisfied when the linearized discrete problem is soluble. The conditions are derived from the fact that the neighboring triangles around a vertex are embedded in the plane. The wagon wheel condition for a triangular truss. Let V_0 be an interior vertex of a triangular truss. The union of the triangles containing V_0 is a regular unit hexagon. Let λ_{0,i} (called λ_i for short) denote the radial elongations and λ_{i,i+1}, for i = 1, ..., 6 taken modulo n = 6, denote the concentric elongations. The condition may be deduced as the linearization of the compatibility condition K(V_0) = 0 for the discrete nonlinear problem.
The cosine law expresses the angles α_i of the triangles at V_0 in terms of the lengths. Since the sum of the angles is the constant 2π, the compatibility condition is the vanishing of the derivative of the angle sum (20). Substituting ℓ_i = ℓ_{i,i+1} = 1 for the lengths and α_i = 60° for the angles, this equation becomes the statement that the sum of the spoke elongations equals the sum of the rim elongations, $\sum_{i=1}^{6} \lambda_i = \sum_{i=1}^{6} \lambda_{i,i+1}$. It relates the elongations along the spokes to the elongations of the rim, so we call it the wagon wheel condition at the vertex V_0. Generalized wagon wheel condition for a triangulated truss. Suppose that V_0 is an interior vertex of valence n in a triangulated truss, and that V_1, ..., V_n are the adjacent vertices going around in order. It turns out that the compatibility equation is again that a weighted sum of the radial elongations L_i equals a weighted sum of the concentric elongations L_{i,i+1}, for i = 1, ..., n taken modulo n. Since the sum of the angles is the constant 2π, we again obtain (20). By regrouping the sum, this becomes (21). The wagon wheel condition may be rewritten in a simpler form. If we denote by W_i the unit vector from V_0 toward V_i and introduce the combinations of lengths in (23), it follows that h_i is the support distance, the distance from the line through the side V_iV_{i+1} to V_0. Also, subtracting the projection of ℓ_{i+1} on ℓ_i, we obtain the final wagon wheel condition. Theorem 4. In a triangulated truss, the compatibility condition at an interior vertex has the form of the wagon wheel condition (22), where h_i is defined by (23). Rearranging the wagon wheel condition (22) into a triangle-wise sum says, in other words, that on average the projected elongations of the radial components onto the circumferential line cancel the circumferential component of that line. Necessity of the wagon wheel condition for the triangular truss. On the triangular truss, the necessity computation is facilitated by the fact that the spokes are equal and evenly spaced, so that W_{i−1} + W_{i+1} = W_i. Thus, using (21) and ℓ_i = ℓ_{i,i+1} = 1, the wagon wheel condition is a consequence of satisfying the equations (LD). The same compatibility condition applies to affinely distorted hexagons.
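The unit-hexagon wagon wheel condition can be checked directly: for any velocity field of the nodes, the linearized elongations (12) of the six spokes and of the six rim edges balance. A short numeric sketch (the random velocity field and the names are our choices):

```python
import numpy as np

rng = np.random.default_rng(0)
th = np.arange(6) * np.pi / 3
V0 = np.array([0.0, 0.0])
V = np.c_[np.cos(th), np.sin(th)]   # unit hexagon of neighbors around V0
u0 = rng.standard_normal(2)          # arbitrary velocity of the center
u = rng.standard_normal((6, 2))      # arbitrary velocities of the ring

def elong(a, b, ua, ub):
    # Linearized elongation of the edge ab, as in (12).
    w = (a - b) / np.linalg.norm(a - b)
    return w @ (ua - ub)

spokes = sum(elong(V[i], V0, u[i], u0) for i in range(6))
rim = sum(elong(V[(i + 1) % 6], V[i], u[(i + 1) % 6], u[i]) for i in range(6))
# The two sums agree for every velocity field: the wagon wheel condition.
```

The agreement is exact (up to roundoff) because, for the regular unit star, the coefficient of each nodal velocity is the same on both sides of the condition.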
The affinely distorted case follows under the corresponding affine change of variables. Necessity of the wagon wheel condition for the triangulated truss. The compatibility conditions were derived by differentiating K(V_i) = 0. Here we show that they are necessary consequences of the equations (LD). Rewrite the equation (22) using the prescribed elongation equations (26). Using the coefficients suggested by the calculation above and (26), the wagon wheel condition is a consequence of satisfying the (LD) equations (13) in a triangulated truss as well. Integral and sum of compatibility around a curve The compatibility condition may be integrated over any simple closed curve of the structure. Consider a simply connected subdomain bounded by such a curve; using (36), the condition can be integrated to give a compatibility condition on the curve, as in Theorem 9. This is reminiscent of applying the divergence theorem to a solution of the second-order divergence equation −div A(x)∇u = 0 on a simply connected subdomain Ω to get a boundary integral involving the solution, $\oint_{\partial\Omega} \nu \cdot A(x)\nabla u \, ds = 0$, where ν is the inner unit normal. For the problem (NC), the integral compatibility condition is the Gauss-Bonnet formula [O 1966, p. 375], $\iint K \, dM + \oint \kappa_g \, ds + \sum_i \alpha_i = 2\pi$, (27) where K, κ_g, dM, ds and α_i are the Gauss curvature, the geodesic curvature of the boundary curve, the area form, the arclength and the angle changes at the corners, expressed in terms of the ζ metric. The compatibility condition for the prescribed deformation tensor ζ, the vanishing of the Gauss curvature K = 0, gives an integral compatibility condition around a curve. An analogous equation holds for the discrete problem (ND) (see Section 4.4). In a triangulated structure, the compatibility conditions are localized at interior points. For the linearized problem (LD), the wagon wheel condition is supported on the closed star neighborhood of an interior point. We will show that, for interior edges that are contained in four stars, the sum of the compatibility conditions vanishes on that edge.
Thus the sum over all interior stars results in a compatibility condition whose support consists of a double layer: the edges on the boundary or one link away from the boundary of the subdomain. Moreover, in Theorem 22 it is shown that the continuum limit of the boundary compatibility condition of (LD) is the compatibility condition of (LC). Sum of (LD) compatibility along a curve for a triangular truss. Consider a union of hexagons P in a triangular truss whose boundary curve is a single simple closed curve. Let σ(L) denote the functional on elongations given by the sum of the wagon wheel conditions for the hexagons of P. For those edges E_ij that are included in four hexagons, as a radial edge for the hexagons centered at the endpoints and as a circumferential edge for those hexagons centered on the opposite vertices of the triangles containing the edge, the sum cancels and the coefficient of L_ij in σ(L) is zero. Thus, only the edges whose endpoints lie in a double layer, at most one unit from ∂P, contribute to σ(L). The weights depend on which hexagons contain the edge. For example, if an edge is circumferential to two hexagons and radial for none, the weight of L_ij would be 2. The boundary edges receive the coefficient +1. The parallel curve, consisting of edges one link inside the boundary, receives the coefficient −1. At convex corners, the unique incoming edge receives −1. At boundary points with zero curvature, the incoming edges receive zero. At concave corners, the incoming middle edges receive +1. We summarize these weights as follows. In a simply connected domain, the difference of the lengths is accounted for by the curvature of the boundary. Denote the parallel curve by τ and the curvature atom at a boundary vertex V_j by κ_j, which for our hexagon domains takes values in {π/2, 0, −π/3, −2π/3}. Then, by Steiner's formula [Sa 1976, pp.
7-8], we obtain the identity (28) relating the lengths of the boundary and the parallel curve to the boundary curvature atoms. Equivalently, σ(L) may be written as the weighted sum (29) over the edges of the double layer, the boundary edges entering with coefficient +1. In the case that P is a convex union of regular hexagons, as in Figure 4.1b, σ(L) = 0 and the weights (29) simplify, because there are only boundary edges, a single incoming edge at each corner, and boundary-parallel interior edges. As a simple application of Theorem 7, we can conclude that if there are no elongations on the boundary of a domain, then the elongations on the neighboring edges of the boundary layer cannot all be positive. Corollary 5. Let P be a convex union-of-hexagons structure with interior points. Suppose that the elongations L_ij are zero on the boundary ∂P and positive on the edges within one link of the boundary. Then L cannot satisfy the compatibility conditions at all interior points of P. Proof. For convex domains, the weights σ(E_ij) of (29) are positive on the boundary edges and nonpositive, and somewhere negative, on the remaining edges of the unit boundary layer. Thus σ(L) = 0, that is (28), cannot hold for compatible elongations. Sum of compatibility along a curve for the discrete linear problem Consider a triangulated structure. First, we show that the sum of all wagon wheel compatibility conditions vanishes on an interior edge. Lemma 6. Let E_02 be an interior edge that is bounded on opposite sides by two nondegenerate triangles V_0V_1V_2 and V_0V_2V_3. Suppose further that all four vertices V_0, V_1, V_2 and V_3 are interior. Then the sum of the four wagon wheel conditions that involve E_02, the ones centered on V_0, V_1, V_2 and V_3, has zero L_02 coefficient. The wagon wheel conditions that involve E_02 are the ones centered at V_0 and V_2, where E_02 is a radial edge, and those centered on V_1 and V_3, where E_02 is a concentric edge. Exactly one term in (22) at each vertex appears.
The sum of the coefficients of L_02 combines the sines of the angles that E_02 subtends in the four wagon wheel conditions. We observe that twice the areas of the triangles V_0V_2V_1 and V_0V_2V_3 can be written, respectively, as products of two side lengths and the sine of the included angle. Using these area identities, the sum of the coefficients of L_02 vanishes, because the sum of the lengths of the projections of the sides of each adjacent triangle onto E_02 equals the length of E_02. The union of the closed triangles that meet a vertex is called a closed star neighborhood. Since the interior compatibility conditions are supported on closed star neighborhoods, let us consider the union of interior stars whose boundary consists of a simple closed curve. Let σ be the sum of the compatibility conditions, viewed as a linear functional on elongations. The value of σ on different types of edges depends on which interior stars contain the edge. We prove that it vanishes for edges contained in four different stars which are sufficiently distant from the boundary. Theorem 7. For a triangulated structure, let P be the union of the closed star neighborhoods of all interior points. Let σ be the sum of the compatibility conditions corresponding to the interior points. Then σ(E_ij) vanishes except for edges that either touch the boundary of P or have both endpoints one link away from the boundary. The coefficient σ(E_ij), for such edges, is the sum of the E_ij coefficients in all wagon wheel compatibility conditions whose star neighborhoods contain the edge E_ij as either a radial or a circumferential edge. The values of the coefficients, which have expressions in terms of the geometry of the triangulation, are given in the proof. Proof. As before, consider an edge E_02 and its adjacent triangles V_0V_2V_1 and V_0V_2V_3. Let us fix lengths in this proof: ℓ_1 = length(E_01), ℓ_2 = length(E_02), ℓ_3 = length(E_03), ℓ_4 = length(E_12) and ℓ_5 = length(E_23). The cases are distinguished by how many of the four vertices are interior: (1) one vertex is interior, which corresponds to a boundary edge or a unique incoming edge; (2) two vertices are interior, which corresponds to an isthmus edge, an extreme of multiple incoming edges, or a spine edge; (3) three vertices are interior, which corresponds to the middle of multiple incoming edges or a parallel boundary edge. Boundary edges. Let E_02 be an edge of the boundary ∂P.
This means that P lies on one side of E_02; therefore we may take V_1 to be an interior vertex and V_0, V_2 and V_3 to be boundary vertices. Then the coefficient of L_02 has only one term. Unique incoming edge. If both side triangles touch the boundary, V_0 is interior but V_1, V_2 and V_3 are not, then E_02 is radial and its coefficient again consists of a single term. Isthmus edges. In case the edge straddles a neck of P, V_0, V_2 ∈ ∂P but V_1 and V_3 are interior vertices. Extreme of multiple incoming edges. If one side triangle touches the boundary and the other does not, say, V_0 and V_1 are interior but V_2 and V_3 are not, then E_02 is both radial and circumferential, and its coefficient follows using (30) and (31). Interior spine edges. Both endpoints are interior but both triangles touch the boundary, say, V_0 and V_2 are interior but V_1 and V_3 are not; then E_02 is doubly radial, so using (30), (31), the area identities 2A_1 = h_1ℓ_4 = h_3ℓ_2 = h_5ℓ_1 and 2A_2 = h_2ℓ_5 = h_4ℓ_2 = h_6ℓ_1, and the fact that the two projections satisfy ℓ_1 cos α_1 + ℓ_4 cos β_1 = ℓ_3 cos α_2 + ℓ_5 cos γ_1 = ℓ_2, we obtain the stated coefficient. The remaining cases are computed similarly. Sum of compatibility conditions centered along a curve Consider a simple closed curve γ in a triangulated structure which does not surround a hole (is contractible) and whose points are interior points. Let U be the union of the stars centered on γ, and suppose that the part γ̃ of the boundary ∂U outside γ is also a simple closed curve. Then U may be viewed as the union of two strips: the boundary layer S inside γ and the boundary layer S̃ inside γ̃. The compatibility sums along the two strips must be equal, since they differ by the sum of the stars along γ; of course, we know both sums vanish. Let C_i denote the wagon wheel functional at the vertex V_i. One notices that σ̃(E_ij) and σ(E_ij) have opposite signs for edges E_ij ⊂ γ, since these are radial edges in S̃ and circumferential edges of S. For triangular trusses, they add to zero.
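The nonlinear analog of these boundary sums, taken up next for (ND), reduces to the statement that the total turning angle around the outer boundary is 2π, with all angles computed from edge lengths. A numeric sketch on the hexagonal wheel (the triangulation and helper names are ours):

```python
import math

def angle_at(p, a, b):
    # Interior angle of the triangle p-a-b at vertex p, via the cosine law.
    la, lb, c = math.dist(p, a), math.dist(p, b), math.dist(a, b)
    return math.acos((la * la + lb * lb - c * c) / (2.0 * la * lb))

# Hexagonal wheel: a center plus six boundary vertices; faces fan around it.
ctr = (0.0, 0.0)
ring = [(math.cos(k * math.pi / 3), math.sin(k * math.pi / 3))
        for k in range(6)]
tris = [(ctr, ring[k], ring[(k + 1) % 6]) for k in range(6)]

total_turning = 0.0
for p in ring:
    # Interior angle at the boundary vertex p: sum over incident faces.
    interior = sum(angle_at(p, *[q for q in t if q != p])
                   for t in tris if p in t)
    total_turning += math.pi - interior   # outer turning angle at p
# Going once around the outer boundary, the turning angles add up to 2*pi.
```

Each boundary vertex of the regular wheel contributes a turning angle of π/3, and the six contributions sum to 2π, as the boundary condition requires.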
This sum of conditions along a curve limits to the compatibility condition at the vertex as the curve shrinks to the vertex. Sum along a curve of (ND) compatibility The curve compatibility condition also holds for the nonlinear discrete problem. Let γ be a contractible closed curve that bounds the subdomain P. There is a boundary equation that holds for the double layer near the boundary that amounts to saying that the total angle change going around the outer boundary is 2π. Theorem 8. In a triangulated structure, suppose that the union of stars P is bounded by a single simple curve γ. Then the total turning angle of the γ may be expressed in terms of the prescribed lengths of edges on or within one link of the boundary edge where F are the triangular faces of P and Proof. The inner sum in (33) is the interior angle, the sum of the angles of triangles adjacent to the boundary vertex V i . Thus the bracket is the outer turning angle of γ at V i . The outer sum is the total over boundary vertices of the turning angles, which adds up to 2π for planar domains. Integral of compatibility along a curve for (LC) In this section, we deduce the continuum compatibility integral along a contractible curve γ. Let Ω be the domain bounded by γ. We begin with a derivation of the compatibility conditions in curvilinear coordinates using Cartan's moving frames (see , e.g., [O 1966]) and Chern's computation of the variation (as in [C 1962]). A computation in local coordinates is given by Brown [B 1957]. Moving Frame derivation of compatibility of linearized prescribed strain. Suppose that {e i } is an orthonormal frame and {ω i } such that ω i (e j ) = δ i j is the dual orthonormal co-frame for S. In the flat plane, they satisfy the structure equations The covariant derivative of this vector field is the vector valued one-form Pulling back the co-frame and connection form gives where θ i are forms on D and a j i = −a i j . 
Exterior differentiation on D × (−δ, δ) is given by The time derivative is computed by computing d and picking off the "dt" part. Differentiating, dropping "φ * ," and calling d dt θ i =θ i , Equating the D parts and the (−δ, δ) parts, gives The derivative of the metric is therefore where we have used the skew symmetry of a i j . Thus the linearized prescribed strain equation becomes where ij = ji is the prescribed strain. The second and third covariant derivatives of the deformation are defined by The compatibility condition is obtained by cyclicly permuting second covariant derivatives of (34), alternating signs and adding, The resulting compatibility equations are 0 = ij,k − jk, i + k ,ij − i,jk . In two dimensions this amounts to a single equation, namely i = j = k = so (no sum) Integral compatibility condition The equivalence of the closedness of the one form Note that β is a contraction of the covariant derivative of the prescribed strain, thus is a globally defined one-form depending on the prescribed strain. We have β i = η pq pi,q , where the η pq is the usual skew rotation tensor which is invariant like the identity tensor. Upon changing frame where α is a function. The tensorial nature of ij,k and β j dictates how components change under change of frame. For a frame changẽ for some special orthogonal matrix function R q k and its inverse R q k a k p = δ q p . Theñ Let us introduce some notation as in the derivation of the Gauss Bonnet Theorem [O 1966, p. 375.]. Let r denote arclength along the boundary, γ(r) parameterize the boundary curve with positive orientation and let t =γ(r) = c(r)e 1 (γ(r)) + s(r)e 2 (γ(r)), ν = −s(r)e 1 (γ(r)) + c(r)e 2 (γ(r)) be the unit tangent and inner normal vectors along the boundary where c(r) = cos φ(r), s(r) = sin φ(r) and φ(r) = ∠(e 1 (r), t(r)) is the angle of the tangent vector relative to the coordinate chart. Thus, if ∂Ω has length L, then the total change in angle going around the boundary is φ(L) − φ(0) = 2π. 
For C 2 boundary, the Frenet equation gives the geodesic curvature in terms of the covariant derivative of the tangent vector In the neighborhood of a boundary point in Fermi coordinates X(u, v) = γ(u) + vν(u) where u is arclength along ∂Ω and ν(u) is the inner unit normal to ∂Ω. The corresponding new frame on ∂Ω hasẽ 1 = t andẽ 2 = ν. v → X(u, v) is a straight line so ∇ ν ν = 0. The geodesic curvature κ g of the boundary is given by Recalling the definition of covariant derivative, and using skewness ofω i j and˜ ij =˜ ji , Similarly, using Fermi coordinates where straight lines have zero geodesic curvaturẽ Thus we have an expression for the integrand Integrating gives the boundary compatibility equation. Let Ω be a simply connected domain with C 2 boundary and ij be a C 1 prescribed strain field satisfying the local compatibility conditions and defined in the neighborhood of the closure of Ω. Then The components˜ ij near each boundary point are expressed in a local frame whereẽ 1 = t and e 2 = ν are the unit tangent and inner normal vectors on ∂Ω. Proof. The function˜ 21 = ν T t is periodic around ∂Ω so the integral of its tangential derivative vanishes. Applying Stokes's Theorem and (39) to dβ = 0 completes the proof of the boundary compatibility condition Compatibility integral for curve with corners. If the boundary is piecewise C 2 , then there are finitely many corner points γ(r i ) with 0 ≤ r 1 < r 2 < · · · < r k < L such that γ can be extended to a C 2 function on each interval [r i , r i+1 ]. Moreover, γ has limiting directionsγ(r i +) anḋ γ(r i+1 −) at endpoints. The angle change in direction at the corner r i is α As in the Gauss Bonnet theorem, the formula (40) may be generalized to include corners. The idea is to round off the corner with a circular arc C i (δ) with arbitrarily small radius δ which will tend to zero. 
We assume ij is C 2 and assume that the fillet C i (δ) of radius δ rounding out the corner at γ(r i ) osculates the boundary at r i − ρ(δ) and r i + σ(δ). Hence C i (δ) is in the c i δ disk about the corner point γ(r i ), where we may take c i = 1 + sec α i . The change of angle along the fillet is Pick some points in the middle of each segment r i < m i < r i+1 . The curve near the corner is approximated The length of C i (δ) is δθ i and the curvature is κ g = 1 δ . The corner contribution at r i is A i where β . For simplicity, we may assume γ(r i ) = 0 and that the corner is symmetric about the y-axiṡ γ(r i +) = (cos ψ, sin ψ) andγ(r i −) = (cos ψ, − sin ψ), so that the change in angle is α i = 2ψ. Hence lim Let us approximate ij by its Taylor expansion at the origin. Call the first order Taylor polynomial where f ij , g ij and h ij are constants so that the strain and its derivative as (x, y) → (0, 0). Let Ω be a simply connected domain with piecewise C 2 boundary and ij be a C 2 prescribed strain field satisfying the local compatibility conditions and defined in the neighborhood of the closure of Ω. Suppose there are k corners of ∂Ω at the vertices γ(r i ), where γ is a parameterization of ∂Ω by arclength and 0 ≤ r 1 < r 2 < · · · < r k < L where L is the length of the boundary. Then The components˜ ij near each boundary point are expressed in a local frame whereẽ 1 = t and e 2 = ν are the unit tangent and inner normal vectors. The angle change at the corners is given by α i = ∠(γ(r i −),γ(r i +)). At the corners, the componentsˆ ij near each boundary point are expressed in a local frame wherê are the unit vectors halfway between the tangent directions at the corner and the angle bisector. This theorem matches the discrete curve sum (29).˜ 11 −˜ 22 corresponds to the strain of the links along the boundary. ∂˜ 11 /∂ν corresponds to the difference between elongations on the curve parrallel to the boundary and on the boundary curve. 
κ g is a delta function which is nonzero at the corners and α = 60 • at convex exterior corners of a triangular domain P. Integral of compatibility along curve for (NC) Let ζ be the prescribed right Cauchy-Green tensor. The compatibility condition that the curvature of ζ vanishes implies the Gauss-Bonnet Formula (27) which says that the total change in angle going around of the boundary of a domain is 2π (, e.g., [O 1966]). This is easy to see if ζ is transformed to Euclidean coordinates, but here we express the angle change in the background coordinates in which ζ was initially given to compare with Theorem 9. Take a local ζ-orthonormal frame f i and let θ j be its dual coframe. Then the structure equations dθ i = θ j ∧ θ i j , θ j i + θ i j = 0, define the connection form. Vanishing of the curvature of ζ is given by the equation dθ 2 1 = 0. Recall the change of coordinates formula for the connection form under (37) θ j t = −a r t da r j + a r t a q j θ q r so that θ j i is not a tensor. In our case, the frame has rotated by an angle φ(r) at γ(r) so that at least along the curve the coordinate change is given by an orthonormal matrix where c(s) = cos φ(s) and s(s) = sin φ(s). From this we can computẽ To see this in terms of derivatives of ζ, we derive an expression for the connection form of the ζ metric in terms of background curvilinear coordinates. Let e i be an orthonormal background frame, ω j its dual coframe and ω j i its connection form. The ζ-metric is ζ = ζ ij ω i ⊗ ω j . Take a ζ-orthonormal frame with f 1 is in the e 1 direction given by where D = ζ 11 ζ 22 − ζ 12 2 is the determinant. The dual coframe is Splitting ω 2 1 = Γ 2 11 ω 1 + Γ 2 12 ω 2 , the connection form satisfies In case of Fermi coordinates whereẽ 1 is tangent to the boundary andẽ 2 is the inner normal, since Γ k ij = −Γ i kj , Applying Stokes's Theorem and (46), where σ is ζ-arclength, s is background arclength so dσ = ζ 11 ds. Arguing as before at the corners, we obtain Theorem 11. 
Let Ω be a simply connected domain with piecewise C 2 boundary and let ζ ij be a C 2 prescribed right Cauchy-Green tensor field satisfying the local compatibility conditions and defined in the neighborhood of the closure of Ω. Suppose there are k corners of ∂Ω at the vertices γ(r i ), where γ is a parameterization of ∂Ω by arclength and 0 ≤ r 1 < r 2 < · · · < r k < L where L is the length of the boundary. Then
The components ζ̃ ij near each boundary point are expressed in a local frame where ẽ 1 = t and ẽ 2 = ν are the unit tangent and inner normal vectors. κ g is the geodesic curvature of the boundary in the background metric. The angle change at the corners is given by α i = ∠(γ̇(r i −), γ̇(r i +)). At the corners, the components ζ̂ ij near each boundary point are expressed in a local frame (45) in unit vectors halfway between the tangent directions at the corner and the angle bisector.
Proof. Expanding (42) at the corner and applying the corner limit (43) gives the result.
The genericity of trusses.
Mechanical intuition suggests that the trusses we consider are infinitesimally rigid, thus are generic, and the number of compatibility conditions is given by the Maxwell count. In this section, we prove that many planar trusses are generic. For triangulated trusses without holes, this number is the number of interior vertices (48). The independence of wagon wheel conditions centered at different interior vertices shows that they form a basis for compatibility conditions.
A geometric basis for the compatibility conditions of triangular structures.
A basis for the compatibility conditions will be determined for a triangular structure X in the hexagonal lattice. We shall decompose X into pieces: thick regions (plates) connected by thin parts (girders). Consider the collection of nodes V H that are centers of hexagons contained in X. Consider the graph G H whose vertices are V H and whose edges are any pair of nodes in V H that are a unit apart.
In general G H is not connected. If G i is a connected component of G H , let P i be the union of hexagons whose centers are G i . Let us call P i a plate. Let us call a simple truss that contains no hexagons a girder. The plates may not make up all of X, what remains is a collection of girders that attach to the plates. It will be shown that the plates support the compatibility conditions and the girders are statically determined and don't support any compatibility conditions. Unlike for continuous materials, the girders bounded by a single simple curve are examples of rigid structures without compatibility conditions. If such a girder loops around and a single new edge is attached connecting the ends of the girder, then the new structure gains a single compatibility condition. A girder is bounded by two simple disjoint curves, a ring girder, has three compatibility conditions. For a structure X in the hexagonal lattice, the compatibility conditions consist of wagon wheel equations centered on the interior vertices and ring girders around the holes. We begin with simply connected domains. Theorem 12. Let X be the union of finitely many unit triangles of the triangular lattice. Suppose that the boundary ∂X consists of a single simple closed curve. Then the truss X is generic: the number of compatibility conditions for (LD) is given by the Maxwell number which also equals the number of interior vertices. Moreover, a basis for the compatibility conditions is given by the wagon wheels centered at the interior vertices which are supported on the hexagon neighborhood of the vertex. The simplicity of the boundary means that the boundary edge path has no self-intersections, thus X cannot be pinched together at a hinge vertex. The proof of Theorem 12 is given in Appendix, Section 9.2. We expect that wagon wheels form a compatibility basis in an arbitrary triangulated structure. 
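The Maxwell count in Theorem 12 can be checked by direct enumeration. The following is a minimal sketch (our own code, not from the paper; it assumes the standard axial-coordinate model of the unit triangular lattice): for hexagonal patches, e − (2v − 3) equals the number of interior vertices.

```python
# Check of Theorem 12's count on hexagonal patches of the triangular lattice:
# the Maxwell number e - (2v - 3) equals the number of interior vertices.
# Axial coordinates: vertex (a, b) sits at a*(1,0) + b*(1/2, sqrt(3)/2).

NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def hex_patch(R):
    """Lattice vertices within hex 'radius' R of the origin."""
    return {(a, b) for a in range(-R, R + 1) for b in range(-R, R + 1)
            if abs(a + b) <= R}

def edges(V):
    """All unit lattice links with both endpoints in V."""
    return {frozenset([v, (v[0] + da, v[1] + db)])
            for v in V for (da, db) in NBRS if (v[0] + da, v[1] + db) in V}

def interior(V):
    # interior vertex: all six neighbours (hence all six triangles) present
    return [v for v in V
            if all((v[0] + da, v[1] + db) in V for (da, db) in NBRS)]

for R in range(1, 5):
    V = hex_patch(R)
    maxwell = len(edges(V)) - (2 * len(V) - 3)
    assert maxwell == len(interior(V))   # e.g. R = 2: 42 - 35 = 7
```

For a hexagon of side R this gives v = 3R² + 3R + 1, e = 9R² + 3R, and maxwell = 3R² − 3R + 1, exactly the interior vertices (the hexagon of side R − 1).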
Since the triangulated truss is generic by Corollary 15, we know that the dimension of the compatibility conditions is the number of interior vertices (19). They would form a basis if we knew the wagon wheel conditions are linearly independent. The independence of wagon wheels in (ND) can be seen geometrically. The total angle at V i is not determined by the flatness of the surrounding vertices. To see this, imagine that the truss lived on a cone that is flat except at the vertex V i where the curvature atom might not vanish. Just imagine rolling a piece of paper into a cone with the vertex at V i . Since the stars not centered at V i do not surround V i , they are Euclidean, and the compatibility condition holds for lengths in the cone. However, they do not determine the cone angle at V i which may be arbitrary. The Number of compatibility conditions in a multiply connected truss. Next, we consider multiply connected planar domains. Thus we imagine X is a triangulated truss with g holes removed. So X is bounded by g + 1 pairwise disjoint simple closed curves γ i such that one of them, say γ 0 , contains the others within. The rest of the curves have at least four links so do not bound a single triangle and do not surround other components γ i . Then holes are the regions bounded by the interior curves γ 1 , . . . , γ g . Theorem 13. Let X be a triangulated truss with holes. Suppose that the boundary ∂X consists of g + 1 pairwise disjoint, simple closed curves. Then X is generic: the number of compatibility conditions equals the Maxwell number Intuitive explanation by a "hole filling" argument. Consider a triangulated truss T bounded by g+1 disjoint simple closed curves. Theorem 13 can be seen by an inductive argument, removing the holes one after another starting from a simply connected truss. The argument assumes that wagon wheel conditions used are independent, an assumption that will not be required for the proof. Suppose that γ i , i = 0, 1, . . . 
, g are the boundary curves with n ≥ 4 links each such that γ 0 contains the others. Fill in the holes with a continuation of the triangulation of T . Let H i be the disk region inside γ i , a hole to be removed, and T i = T ∪ (H i ∪ · · · ∪ H g ) the completely filled region T 1 with i − 1 holes removed. T 1 is simply connected so CC(T 1 ) = v i (T 1 ), the number of interior vertices, by Theorem 12. Removing one hole at a time, suppose that i − 1 holes have been removed and that To find CC after removing the next hole, suppose H i has v h interior vertices, e h interior edges, and γ i has f h triangles and n ≥ 4 links. Then the number of vertices and edges on γ i is n. The Euler Characteristic of H is 1 = v i − e i + f i . Being a triangulation implies 3f i = n + 2e i so that Now proceed inductively on the number of interior vertices. If there are no interior vertices, then there are n − 3 interior edges and n vertices on γ. The number of compatibility conditions of T i is the difference between the number of equations CC(T i ) + 2e(T i ) minus the number of variables 2e(T i ) for the generic system (15) augmented by annihilators of rigid motions. Removing n wagon wheel conditions centered on γ i and n − 3 interior edges gives the compatibility count for T i+1 , T i+1 has n fewer interior vertices, thus If there are interior vertices, then we may replace the hole with one with fewer interior vertices and with the same number of compatibility conditions. Suppose (50) holds for j interior vertices. Arguing inductively, if there are j + 1 interior vertices in H i , choose a triangle of the triangulation τ ⊂ H i such that one side is an edge of γ i and the other edges are interior. Cut τ from H i and glue it to T i . Now the region T i ∪ τ has the same number of compatibility conditions as T i , and its new hole H i − τ has n + 1 boundary links, but j interior vertices. Thus the induction holds, and the new hole may be removed from T i ∪ τ , completing the induction. 
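The count of Theorem 13 can be checked the same way on a small annulus. A sketch under the same axial-coordinate assumptions (our own code): drilling the central vertex out of a side-3 hexagon gives g = 1, and the Maxwell number matches v_i + 3g.

```python
# Annular truss: side-3 hexagon of the unit triangular lattice with the
# central vertex (and its six incident edges) drilled out, so g = 1.
# Theorem 13 predicts  compatibility = v_i + 3g = Maxwell number e - (2v - 3).

NBRS = [(1, 0), (-1, 0), (0, 1), (0, -1), (1, -1), (-1, 1)]

def lattice_edges(V):
    return {frozenset([v, (v[0] + da, v[1] + db)])
            for v in V for (da, db) in NBRS if (v[0] + da, v[1] + db) in V}

def interior_vertices(V):
    # interior = all six lattice neighbours (hence all six triangles) present
    return [v for v in V
            if all((v[0] + da, v[1] + db) in V for (da, db) in NBRS)]

V = {(a, b) for a in range(-3, 4) for b in range(-3, 4) if abs(a + b) <= 3}
V.discard((0, 0))                     # drill a one-vertex hole: g = 1
E = lattice_edges(V)
v_i = len(interior_vertices(V))
maxwell = len(E) - (2 * len(V) - 3)
assert maxwell == v_i + 3 * 1         # 15 = 12 + 3
```

Note the hole's boundary has six links, satisfying the n ≥ 4 requirement on interior curves.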
Proof using "branch cut" argument.
Proof. The statement is proved by induction on the number of holes. The base case g = 0 is proved by Theorem 12. Let X g+1 be a truss with g holes. Again, we shall construct a maximal subtruss Z g+1 containing all vertices of X g+1 which is statically determined and is obtained from X g+1 by removing v i + 3g edges. The idea is to make a "branch cut." From the hole bounded by γ g in X g+1 closest to the outside, draw a line segment from the hole to the outside that meets only some edges B between the hole and the outside. There must be b ≥ 3 of them. The truss X g = X g+1 \ B now has g − 1 holes. The outside curve and γ g have been combined into a single closed curve (their connected sum) by replacing the two B-edges on the curves γ 0 and γ g by two connecting segments from γ 0 to γ g . By the induction hypothesis, there is a statically rigid Z g in X g obtained by removing ṽ i + 3(g − 1) edges of X g , where ṽ i is the number of interior vertices of X g . Notice that b − 3 interior vertices of X g+1 were lost by making the branch cut, so ṽ i = v i − (b − 3). Also, notice that Z g+1 = Z g is also a maximal statically determined subtruss in X g+1 , obtained by removing ṽ i + 3(g − 1) + b = v i + 3g of its edges, where v i is the number of interior vertices of X g+1 . Thus the induction is complete.
Example of infinitesimally rigid truss with maximally many interior links removed.
In general, one can remove a link from every wagon wheel in a simply connected truss and still maintain static determinacy. Consider a rhombus P n consisting of the union of hexagons centered on k e 1 + ℓ e 2 where k, ℓ = 1, . . . , n, e 1 = (1, 0), e 2 = (1/2, √3/2). As in Theorem 14, for each hexagon we may remove a link (the NE link) and still keep rigidity. In fact, the resulting figure is statically determined with no compatibility condition. Thus it has no material points according to our definition. Removing one more link makes the structure flexible.
Figure 6: Rhombus P n with maximally many links removed maintaining infinitesimal rigidity.
There are n² hexagons and 3n² + 8n + 1 total links in P n . Thus the maximal number that can be removed while still maintaining rigidity approaches one third of the links as n → ∞. Compare this to the minimal number of links that need to be removed from P n to make the structure flexible while keeping it a periodicity cell. If all n + 2 NE links of P n were removed from the ℓ = const. row, then the structure would flex along that row. The proportion of links removed would tend to zero as n → ∞.
The definition of BTP Trusses.
We establish the genericity of a more general class of trusses built up by assembling rigid pieces, which we call BTP Trusses; this provides a second argument for the genericity of trusses that are subdomains of the triangular lattice. BTP trusses are built up erector set fashion by assembling rigid pieces to form a larger composite rigid piece. In addition, we determine the number of compatibility conditions of the composite truss in terms of its components and the attaching procedure. The BTP-Trusses (Bigon-Triangle-Prism trusses) are finite trusses built by assembling subunits of smaller BTP-Trusses according to some rules. The following constructions define BTP-Trusses. A single edge with two ending vertices is the basic BTP truss. A pair of edges attached to the same two vertices forms a bigon, which is also rigid. Three edges connected in a triangle also make a rigid truss. We observe that a rigid truss with two labeled vertices behaves like a single edge: two rigid trusses may be attached bigon or triangle fashion to make a larger rigid truss. Two distinct nodes at the same coordinates may be pinned together to make a single node. In addition, there is the prism construction, which uses three edges to connect two rigid trusses to make a larger rigid truss not gotten by forming bigons or triangles. Of course, our prisms are projections into two dimensions!
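Returning to the rhombus example, the link count 3n² + 8n + 1 for P n can be verified by enumerating hexagon edges; a minimal sketch in axial lattice coordinates (our own code, not from the paper):

```python
# Count the links of the rhombus P_n: the union of unit hexagons centered at
# k*e1 + l*e2, k, l = 1..n, in axial coordinates of the triangular lattice.
# Each hexagon contributes 6 spokes and 6 rim edges; shared links are counted
# once by collecting them in a set.

def hexagon_edges(c):
    """The 12 links of the unit hexagon centered at lattice node c."""
    ring = [(c[0] + da, c[1] + db) for da, db in
            [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]]
    spokes = [frozenset([c, p]) for p in ring]
    rim = [frozenset([ring[i], ring[(i + 1) % 6]]) for i in range(6)]
    return spokes + rim

for n in range(1, 5):
    links = set()
    for k in range(1, n + 1):
        for l in range(1, n + 1):
            links |= set(hexagon_edges((k, l)))
    assert len(links) == 3 * n * n + 8 * n + 1
```

The formula also follows from Euler's relation: P n has (n + 2)² − 2 vertices and 2n² + 4n triangles, so e = v + f − 1 = 3n² + 8n + 1.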
Since the third connecting edge may be far separated from the other edges, determining the rigidity of a truss is not a local problem. The composition rules of BTP trusses are as follows.
1. Single links. These consist of two different points and the edge connecting them.
2. Bigons. Suppose S and T are two BTP-Trusses, each containing at least two distinct points z 1 , z 2 ∈ S and z 3 , z 4 ∈ T such that the coordinates z 1 = z 3 and z 2 = z 4 . The bigon is the disjoint union S ⊔ T of S and T with the two pairs of points identified. The bigon is assembled by pinning two points together in each of the subassemblies.
3. Triangles. Suppose S, T and U are three BTP-Trusses, each containing at least two distinct points z 1 ≠ z 2 in S, z 3 ≠ z 4 in T and z 5 ≠ z 6 in U such that the coordinates z 2 = z 3 , z 4 = z 5 and z 6 = z 1 , and such that z 1 z 2 z 4 is non-degenerate (the three points are not collinear). The triangle is the disjoint union of the three sides with the three pairs of points identified. The triangle is assembled by pinning two points together in each of the three subassemblies to form a triangle.
4. (Planar) Prisms. Suppose P, Q, R, S, T are five BTP-Trusses, two of which contain at least three distinct points z 1 , z 2 , z 3 ∈ P and z 4 , z 5 , z 6 ∈ Q satisfying the non-degeneracy condition (62), and the others contain at least two distinct points z 7 ≠ z 10 in R, z 8 ≠ z 11 in S and z 9 ≠ z 12 in T such that the coordinates z 1 = z 7 , z 2 = z 8 , z 3 = z 9 , z 4 = z 10 , z 5 = z 11 and z 6 = z 12 . The prism is the disjoint union with six pairs of points identified, T prism = (P ⊔ · · · ⊔ T )/{z 1 ∼ z 7 , z 2 ∼ z 8 , z 3 ∼ z 9 , z 4 ∼ z 10 , z 5 ∼ z 11 , z 6 ∼ z 12 }. The prism is assembled by connecting the vertices of the triangles z 1 z 2 z 3 of P and z 4 z 5 z 6 of Q with edges z 1 z 4 of R, z 2 z 5 of S and z 3 z 6 of T .
5. Pin a vertex. Suppose that T is a truss that has two distinct vertices z 1 , z 2 ∈ T with the same coordinates.
The new truss is built by pinning the vertices z 1 and z 2 together.
For example, three single links may be assembled into a simple nondegenerate triangle. Another identical copy of this triangle may be attached to the first at two vertices, overlapping the first, forming a "bigon." The third vertices from each triangle are distinct nodes but have the same coordinates. Finally, these vertices may be pinned together. The BTP-Truss structure is not unique. The same double triangle truss also results from attaching a second edge to each of the three original edges of a triangle.
The genericity of BTP Trusses.
The triangle and prism constructions require that a nondegeneracy condition be satisfied. For example, in a triangle, the three edges cannot be collinear. In a prism, if the upper and lower triangles are connected by three parallel line segments, then the resulting truss is not infinitesimally rigid because it has a shearing flex. Similarly, if the line segments have a common point of intersection, then the prism isn't infinitesimally rigid: it has a rotational flex about the common point. The nondegeneracy condition (62) will be stated in the proof of the theorem.
Theorem 14. BTP-Trusses are infinitesimally rigid, hence generic. The number of compatibility conditions under a BTP combination is determined from the compatibility conditions of its parts. Let c i be the number of compatibility conditions for the part T i .
The proof is given in Appendix Section 9.1. It is unknown to the authors whether all infinitesimally rigid trusses are BTP-trusses. An immediate consequence is that the trusses of triangulated domains are infinitesimally rigid.
Corollary 15. Let T be a triangulated truss such that all triangles are non-degenerate. Then T is generic.
Suppose that T is built up starting from a single edge, one step at a time, by attaching two connected edges to form a triangle (such as gluing a triangle onto an outer edge), or by attaching a single edge to two existing vertices (such as gluing a triangle onto two existing edges, or connecting two vertices to surround a hole). The number of compatibility conditions is n b , the number of times a single edge is glued to two existing vertices.
Proof. The process of building the truss is just the BTP construction where triangles are made from the previous stage and two segments, and bigons are made from the previous stage and one segment. Each bigon increases the compatibility count by one.
How do the relative area and the relative number of holes influence the asymptotic compatibility condition? For simplicity, we restrict consideration to periodic triangular structures. Let the basic periodicity cell Υ be a k × k union of hexagons centered on a e 1 + b e 2 where a, b = 1, . . . , k. Suppose there are h holes per cell and m interior vertices taken by each hole. For simplicity, we assume that cells are bounded by h + 1 pairwise disjoint simple closed curves. Let Ω n be the n × n union of cells, slightly overlapping, centered on ak e 1 + bk e 2 where a, b = 1, . . . , n. The asymptotic compatibility density is defined to be the limit of the compatibility count per unit area. The total number of holes is g = n 2 h. The total number of interior vertices is v i = k 2 n 2 − hmn 2 . The area is base times height minus corner triangles. This shows that the asymptotic compatibility depends not just on the total area of the holes removed from the cell. Taking out more holes of the same total area increases AC, a proxy for material resilience. Equivalently, the influence of the holes in a triangular truss depends on the number of interior vertices removed. For example, if the hole is a p × p rhombus, then there are m = p 2 + 4p + 2 interior vertices removed, but the hole has area 2(p − 1) 2 triangles.
If the hole is a (p − 1) 2 × 1 trapezoid, it has the same number of triangles, but it has m = 2p 2 − 4p + 4 interior vertices removed.
A Single Hole
Suppose X is an annular domain in the triangular lattice bounded by two disjoint, simple closed curves, an inner one γ 1 and an outer one γ 0 . The number of compatibility conditions is v i + 3, where v i is the number of interior vertices of X. We address here how much weaker the region with the hole is than the region bounded just by γ 0 without the hole. For the sake of argument, let Y denote the hole-region bounded by γ 1 and Z = X ∪ Y the region bounded by just γ 0 . Let ℓ 0 and ℓ 1 be the number of boundary vertices on γ 0 and γ 1 , resp., and v 0 , v 1 and v 2 the number of interior vertices of X, Y and Z, resp. Neighboring the boundary curve γ 1 are the inside and outside unit neighborhoods. If γ 1 is smooth enough, then the 1-neighborhoods are a sequence of triangles which form a girder-ring. However, in general, a 1-neighborhood may not even be an annular domain; it may contain additional girders and even hexagons. Assume that γ 1 is a curve whose outer 1-neighborhood is a girder-ring. This would be the case if the curvature exceeds −60° at each vertex and no three consecutive vertices have angle −60° with respect to the outer normal. The outer curve must be longer and must have ℓ 1 + 6 edges. This description fits Steiner's formula for a parallel curve. Since γ 1 is a closed curve in the plane, if we traverse it in the counter-clockwise direction, then the total curvature is T = Σ i ∠ γ1 (v i ) = 360°. Now imagine that γ 1 is a bicycle chain. Attach an outer triangle to each link. Then the angles between consecutive triangles are ∠ γ1 (v i ) + 60°. It follows that we can insert one triangle whenever ∠ γ1 (v i ) = 0 for each link in the chain. In total, we can insert ℓ 1 + T /60° triangles to fill in the girder-ring. Similarly, if the inner 1-neighborhood is a girder-ring, then its inner edge has ℓ 1 − 6 edges.
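The ℓ 1 + 6 count for the outer rim is pure turning-angle bookkeeping: each 60° of exterior turning admits one extra inserted triangle, and the total turning of a simple closed curve is 360°. A small arithmetic sketch (our own code; angles are exterior turning angles in degrees, assumed nonnegative multiples of 60):

```python
# Outer rim of the girder-ring around a hole: one edge per link of the hole
# boundary plus one edge per 60 degrees of exterior turning.  Since the total
# turning of a simple closed curve is 360 degrees, the rim has l1 + 6 edges.

def outer_rim_edges(exterior_angles_deg):
    assert sum(exterior_angles_deg) == 360   # closed simple curve
    return len(exterior_angles_deg) + sum(a // 60 for a in exterior_angles_deg)

# hexagonal hole: 6 vertices turning 60 degrees each -> rim of 12 edges
assert outer_rim_edges([60] * 6) == 12
# unit-rhombus hole: turning angles alternate 120 and 60 -> rim of 10 edges
assert outer_rim_edges([120, 60, 120, 60]) == 10
```

Both examples return ℓ 1 + 6, independent of the hole's shape, as the text asserts.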
This would be the case if the curvature does not exceed 60° at each vertex and no three consecutive vertices have angle 60° with respect to the outer normal. Note that if both G 0 and G 1 are girder rings, then an open hexagon about a vertex either lies entirely inside γ 1 , lies entirely outside γ 1 , or its center is on the simple closed curve γ 1 . This means that in the collared case, we can compare the number of compatibility conditions before and after drilling out the hole. Thus, the number of compatibility conditions lost by drilling the hole is c(Z) − c(X) = ℓ 1 + v 1 − 3. (53) Strength is lost not only by removing the interior particles of the hole but also from the boundary points. The drop in the number of compatibility conditions is the number of interior and boundary points lost to the hole, minus three (53).
Small holes. Let us work out some examples. As a reality check, suppose that the hole is a single triangle Y . This hole should not weaken the truss, since the truss is made of triangles. Its removal results in the same number of compatibility conditions: ℓ 1 = 3 and v 1 = 0, so c(Z) − c(X) = 0. The smallest nontrivial inner boundary γ 1 is for the unit rhombus. The loss of compatibility dimensions is c(Z) − c(X) = 4 + 0 − 3 = 1. For holes consisting of 3, 4 or 5 adjacent triangles, there are eight (up to reflection and rotation) different shapes with a simple boundary curve. They have ℓ 1 = 5, 6 or 7, resp., but v 1 = 0 for all of them, so that c(Z) − c(X) = 2, 3, and 4, resp. Of the twelve simple curves about six triangles, the hexagon is unique. For the others, ℓ 1 = 8 and c(Y ) = 0, so that c(Z) − c(X) = 5. However, for the hexagon, ℓ 1 = 6 and v 1 = 1. Hence, c(Z) − c(X) = 4. Thus the shape of the hole influences how many compatibility conditions are lost.
Isoperimetric bound on the number of compatibility conditions lost by a single hole. For the girder Y with p triangles, c(Y ) = 0 and ℓ 1 = p + 2, so c(Z) − c(X) = p − 1. For a regular hexagon with side p, ℓ 1 = 6p and v 1 = 3p 2 − 3p + 1, so c(Z) − c(X) = 3p 2 + 3p − 2.
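Reading (53) as c(Z) − c(X) = ℓ 1 + v 1 − 3 (our reading of the loss formula; an assumption, but consistent with every example above) can be checked mechanically:

```python
# Compatibility loss from drilling a hole with boundary length l1 and v1
# interior vertices, read as  c(Z) - c(X) = l1 + v1 - 3  (assumed form of (53)).

def compatibility_loss(l1, v1):
    return l1 + v1 - 3

assert compatibility_loss(3, 0) == 0            # single triangle: no loss
assert compatibility_loss(4, 0) == 1            # unit rhombus
assert [compatibility_loss(l, 0) for l in (5, 6, 7)] == [2, 3, 4]
assert compatibility_loss(6, 1) == 4            # hexagon of six triangles
p = 5
assert compatibility_loss(p + 2, 0) == p - 1    # girder of p triangles
assert compatibility_loss(6 * p, 3*p*p - 3*p + 1) == 3*p*p + 3*p - 2  # side-p hexagon
```

All asserts pass, so the one-line formula reproduces each worked example of the section.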
For a girder with the same number of triangles, 6p 2 , as the hexagon, we have c(Z) − c(X) = 6p 2 − 1, which is much weaker. For a given length ℓ 1 , the range of losses of strength (53) is bounded on one side by v 1 = 0 and on the other side by an isoperimetric inequality for domains in the triangular lattice.
Proof. We find the region Ω with the maximal number of interior vertices subject to boundary length at most ℓ 1 . First, observe that the maximizing region is a hexagon: Ω ⊂ Θ where Θ is a hexagon with not necessarily equal sides and boundary length at most ℓ. To see it, consider the function ℘ i (x) = n i · x, where n i is the outward normal vector obtained by a 90° rotation of the direction vector e i of the ith side. Let the support distance in the n i direction be the maximum of ℘ i over Ω, and let z i be a point on the side that achieves this distance. Fill in the consecutive points by geodesics; for example, take the geodesic from z i to z i+1 which goes in the e i direction first, and then in the e i+1 direction. The resulting figure is the desired hexagon Θ. Now maximize the number of interior points of a hexagon. Let us call the lengths of the sides a, b, . . . , f, corresponding to the e 1 , e 2 , . . . directions. Observe that the length of any pair of adjacent sides equals the length of the opposite pair, because both pairs connect two parallel lines: a + b = d + e, b + c = e + f, c + d = f + a. (54) The last equation is the difference of the first two. The centers of hexagons within Θ form a hexagonal pattern bounded by points just inside each edge, thus having a, b, . . . points on a side. The objective function is the total number of points inside, which equals the area of the parallelogram between the e 1 -e 4 and the e 3 -e 6 parallel lines, minus the triangles in the e 2 and e 5 corners. The number of points on the bottom is the number of points on the "a" side plus those on the "b" side projected onto the "a" line. The point in the corner is counted twice.
The number of lattice points in an equilateral triangle with b − 1 points on a side is the triangular number ½ b(b − 1). Replacing e and f from (54), we obtain the total length L = a + 2b + 2c + d. Maximizing N subject to fixed length yields all sides of equal length, so the area maximizing figure is the regular hexagon. A hexagon with a points on a side is made up of six triangles with side a − 1 plus the point in the middle, yielding the formula N = 6 · ½ a(a − 1) + 1 = 3a^2 − 3a + 1. Theorem 17. Let X be an annular domain bounded by disjoint simple closed curves. If ℓ_1 ≥ 4 denotes the length of the inner boundary, then the compatibility loss may be estimated ℓ_1 − 3 ≤ c(Z) − c(X) ≤ ℓ_1^2/12 + ℓ_1/2 − 2. The result follows from applying the isoperimetric estimate, Theorem 16, to (53). How geometry of a hole influences asymptotic compatibility. How much do holes weaken a hexagonal lattice? We assume that the lattice is periodic and compute the large-scale average compatibility condition density for damaged material relative to the undamaged material. Note that removing a single edge reduces the number of interior vertices by four, but introduces a ring girder which supports three compatibility conditions. Thus m − 3 ≥ 1 compatibility conditions are lost for each hole, and (52) changes accordingly. The asymptotic compatibility is AC = 2/√3 ≅ 1.1547 for the triangular material without holes. The asymptotic compatibility depends not just on the total area removed from the cell. Taking out more holes of the same total area gives a larger AC, a proxy for material resilience. In other words, one large hole weakens the material more than a number of small holes of the same total area. A similar phenomenon was observed by Cherkaev and Ryvkin [?], [?]. 7 Discretization of the nonlinear continuum problem (NC). We consider the approximation of the prescribed Green tensor problem by discrete structures in two-dimensional materials.
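The undamaged value AC = 2/√3 is the density of interior vertices (one wagon wheel condition each) per unit area: each vertex of the unit-edge triangular lattice accounts for one fundamental cell of area √3/2. A numerical sketch of this bookkeeping:

```python
import math

# Basis of the unit-edge triangular lattice.
e1 = (1.0, 0.0)
e2 = (0.5, math.sqrt(3) / 2)

# One lattice point per fundamental cell, so the vertex density is
# 1 / |det(e1, e2)| = 1 / (sqrt(3)/2) = 2/sqrt(3).
cell_area = abs(e1[0] * e2[1] - e1[1] * e2[0])
density = 1.0 / cell_area

assert abs(density - 2 / math.sqrt(3)) < 1e-12
assert abs(density - 1.1547) < 1e-3
```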
Discrete approximations of the prescribed strain equation and other elasticity problems are sometimes justified as discretizations of partial differential equations via finite elements [Bd 2007], [BS 2010] or discrete differential forms [AFW 2006]. In this section, we recall another justification using the theory of approximating Riemannian metrics by piecewise linear metrics, first developed by A. D. Alexandrov [AZ 1962]. The prescribed Green (deformation) tensor ζ gives a Riemannian metric. The unknown is a mapping to Euclidean space which pulls back the Euclidean metric to the prescribed metric ζ. Its compatibility condition is that the Riemannian curvatures of both metrics agree, that is, they both vanish. We discuss its approximation by the discrete prescribed length problem on an approximating triangulated discrete structure whose lengths are determined from ζ. Associated to a triangulated structure is a piecewise Euclidean surface made by filling in triangles with pieces of Euclidean planes in such a way that the lengths of links correspond to Euclidean distances. It is also the continuous piecewise linear finite element map on the triangulation. The resulting piecewise Euclidean metric approximates ζ. The global existence problem for the prescribed Green tensor may be solved by solving the discrete prescribed length problem and taking the limiting configuration. In a similar vein, configurations of sequences of converging discrete structures converge to a continuous material. The compatibility condition, the vanishing of curvature atoms of the structures, also converges to the continuum compatibility condition of the limiting structure. Note that the compatibility equation K ≡ 0 has a linear highest order term, which is the compatibility condition of the linearized equation. The integrability condition implies local solvability of the prescribed metric equations. A nice exposition is in [L 1926, p. 242].
The existence of a deformation with prescribed right Cauchy-Green tensor is further discussed by Shield [Sh 1973] and by Blume [Bl 1989]. The integrability condition of the linearized prescribed strain equation similarly implies local solvability [So 1956]. The global solvability of (LC) may be deduced from the discrete approximation. Approximation of the (NC) equation by (ND), the nonlinear discrete prescribed length problem. Of the several ways to see how the discrete nonlinear problem approximates the nonlinear problem, we exploit the geometric interpretation of the prescribed Green deformation problem to give a geometric notion of weak convergence, due to A. D. Alexandrov. Triangulated Structures. Recall that a triangulated structure (D, ℓ_ij) is combinatorially the nodes and links of a piecewise linear triangulation of a simply connected planar domain D with an assignment of a positive length ℓ_E to each edge E of S (Section 1). An approximation to φ : B → S is obtained by taking a closed approximating triangulated disk D ⊂ (B, ζ) which is subdivided into sufficiently fine straight line segments, which is sometimes called a polyhedral or piecewise linear (PL) approximation. For each segment let ℓ_ij be the ζ-distance between the endpoints of the segment. Assuming that these lengths are smaller than the injectivity radius of ζ on D, the lengths are then the ζ-geodesic distances between the vertices. It means that there is a unique distance realizing ζ-geodesic in B between any two linked vertices. The ζ-geodesics may not agree with the straight triangle sides (of the background Euclidean metric), so in general, the ζ-length of a straight side from V_i to V_j may be greater than ℓ_ij. By Alexandrov's theory, the solutions of the discrete prescribed length equations for a sequence of such approximating triangulated structures whose triangle diameters tend to zero converge weakly to the solution of the nonlinear problem.
Can the abstract structure be realized as a Euclidean configuration? Let us assume that B is a PL triangulated disk for which we are given prescribed lengths. Assuming strict triangle inequalities hold, we can construct a nondegenerate straight edge triangle T in the Euclidean plane whose side lengths equal the given ℓ_ij distances between vertices. The affine map φ : T_ijk → T between the two-dimensional triangles determines a Euclidean metric in T_ijk by pulling back the Euclidean metric g_B = φ*(ds^2_{E^2}), which is triangle-wise Euclidean. The induced Euclidean metrics on each triangle glue together to give a metric for the abstract structure B, which is smooth except possibly at the vertices and continuous everywhere. Note that it may happen that the total Euclidean angle at an interior vertex is not 2π. The angle deficit, called an atom of curvature, is regarded as a point mass at V_i whose mass is given by ω(V_i) = 2π − Σ_{jk} θ_i^{jk}, where θ_i^{jk} is the Euclidean angle between the vectors V_i V_j and V_i V_k at V_i. The abstract metric induces an intrinsic distance function between pairs of vertices V, W ∈ S, namely ρ_B(V, W) = inf length(σ), where the infimum is taken over all piecewise smooth curves σ in B from V to W and length is in the piecewise Euclidean metric. The minimizers have to be paths of piecewise linear segments with possible bends only when the paths cross edges. In fact, possible kinks occur only at the vertices because interior points of edges have full Euclidean neighborhoods where the local length minimizers are straight lines. The metric g_B may be recovered from ρ_B. The curvature measure ω is a Borel measure supported on the vertices in B, and may be given by ω(G) = Σ_{V_i ∈ G} ω(V_i) for Borel sets G ⊂ B. This notion of curvature of an abstract structure (B, ℓ_ij) is due to A. D. Alexandrov [A 1948, p. 496], [AZ 1962, p. 156]. It applies, more generally, to surfaces with bounded curvature [AZ 1962, p.
6] for which, starting from the notion of a distance function, lengths of arcs, length minimizing (geodesic) arcs, angles between geodesics at a point, and angle deficits of geodesic triangles, the curvature measure of a Borel set may be defined.* A surface with a C^2 Riemannian metric with bounded Gauss curvature K(x) may also be regarded by Alexandrov as a surface of bounded curvature. The induced distance is the usual one obtained by minimizing the length of paths. The curvature measure is taken with respect to the induced area form. If we have a sequence of triangulations for a fixed domain B, then weak convergence in the sense of Alexandrov means that the induced distance functions converge uniformly on B.† * Nowadays, geometers are familiar with the offspring theory, "Gromov convergence" of length spaces. † Alexandrov's sense of convergence is also used in the context of Monge-Ampère equations, where there is an extensive regularity theory for Alexandrov solutions (limits of solutions of discrete approximating problems) whose convergence agrees with convergence in the viscosity sense [G 2016]. (NP) existence problems for structures. Consider the realizability problem: given an abstract triangulated disk structure (B, ℓ_ij) (net for short), is it possible to build a configuration, a continuous immersion φ : B → E^2 such that φ is linear on each triangle and is a local isometry? In other words, can we develop B into the Euclidean plane by gluing together the plane triangles in such a way that at each interior vertex the angles all close up? The answer is clearly "yes" assuming that the sum of the angles at every interior vertex is 2π. Globally, the development may wind around and overlap. The overlap may even occur at a single boundary vertex as at a branch point of a Riemann surface. The metric is obtained by pulling back the Euclidean metric from a single planar region φ(S). Such a configuration then realizes the prescribed lengths of edges.
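The closing-up test at an interior vertex can be computed directly from the prescribed lengths via the law of cosines; the curvature atom is 2π minus the total angle. A minimal sketch (function names are ours):

```python
import math

def angle(a, b, c):
    """Angle at the vertex between sides of lengths a and b,
    opposite the side of length c (law of cosines)."""
    return math.acos((a * a + b * b - c * c) / (2 * a * b))

def curvature_atom(spokes, rims):
    """2*pi minus the total angle at a vertex surrounded cyclically by
    triangles with spoke lengths spokes[i], spokes[i+1] and rim rims[i]."""
    n = len(spokes)
    total = sum(angle(spokes[i], spokes[(i + 1) % n], rims[i])
                for i in range(n))
    return 2 * math.pi - total

# Six unit equilateral triangles close up flat around a vertex.
assert abs(curvature_atom([1.0] * 6, [1.0] * 6)) < 1e-12
# Five such triangles leave a positive deficit of pi/3 (a cone point).
assert abs(curvature_atom([1.0] * 5, [1.0] * 5) - math.pi / 3) < 1e-12
```

A vertex is flat, hence developable, exactly when this quantity vanishes.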
In analogy to the continuum case, the existence of a configuration of a structure is possible if and only if the angles at each interior vertex add up to 2π, in other words, the curvature atoms vanish: the structure is flat. Argument using Alexandrov's Polyhedral Realization Theorem. In simple cases, the existence and uniqueness may be deduced from Alexandrov's theorems about the realizability of polyhedral metrics of the sphere as boundaries of convex polyhedra. For a flat disk, if the boundary is also convex, i.e., the interior angles at boundary vertices do not exceed π, then the answer is yes. The sum of the exterior angles at the boundary is 2π. Hence the boundary curve may be realized as the boundary of a convex polygon Π in the plane. By adding this polygon to the net forming the back side, we get enough polygons to make a two-sided disk which is a topological sphere. All vertices are interior vertices in the two-sided disk. The curvature atoms are nonnegative and add up to 4π since they are supported on the vertices of Π, with total curvature equal to the two 2π contributions from the back and the front. By Alexandrov's theorem for realizing polyhedral metrics [A 1950, p. 184], there is a (degenerate) two-sided flat convex polyhedron in E^3 whose boundary metric gives the abstract net metric. Omitting the back side corresponding to the polygon Π gives the desired realization. One can generalize to nonconvex boundary by first adding triangles to fill in the convex hull of B and then doubling as before. However, this works only for embedded polygons, and not for general nets. Realizing abstract triangulated structures. Intuitively, the unique existence of an immersion of the abstract structure is evident. The mapping of the first triangle may be moved by rigid motion or reflection to an arbitrary position, but after that, the continuation of the PL immersion of the abstract structure is uniquely determined by the gluing.
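Each continuation step places a new vertex from two already-placed neighbors and the two prescribed lengths; this is plain trilateration, with a sign choosing the side of the shared edge. A minimal sketch (all names are ours):

```python
import math

def place_third(A, B, lac, lbc, sign=1.0):
    """Place C so that |A - C| = lac and |B - C| = lbc; sign selects
    the side of the directed edge A -> B on which C lies."""
    lab = math.hypot(B[0] - A[0], B[1] - A[1])
    x = (lac**2 - lbc**2 + lab**2) / (2 * lab)     # coordinate along A -> B
    y = sign * math.sqrt(max(lac**2 - x**2, 0.0))  # offset to the chosen side
    ux, uy = (B[0] - A[0]) / lab, (B[1] - A[1]) / lab
    return (A[0] + x * ux - y * uy, A[1] + x * uy + y * ux)

# Develop one unit equilateral triangle: C must land at (1/2, sqrt(3)/2).
C = place_third((0.0, 0.0), (1.0, 0.0), 1.0, 1.0)
assert abs(C[0] - 0.5) < 1e-12 and abs(C[1] - math.sqrt(3) / 2) < 1e-12
```

Iterating this step triangle by triangle is exactly the development; when the lengths are flat the angles close up at every interior vertex, while a nonzero curvature atom makes the last triangle fail to match the first.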
The result may only be immersed: namely, the image under φ may have self-intersections: the abstract structure may wrap around so that part of the image crosses itself. We begin by formulating a realization lemma of Alexandrov for a single abstract structure [A 1950, p. 71]. It will be used to solve the global realizability of a given smooth metric ζ as well as the existence of a limit for a sequence of abstract structures. Theorem 1. Let (B, ℓ) be an abstract structure which is the 1-skeleton of a triangulated PL disk B with an assignment of edge lengths. Assume that the lengths satisfy a strict triangle inequality and that the structure has zero curvature atom at each interior vertex. Then there is a configuration φ : B → E^2 that realizes the structure, which says that for every edge E_ij from V_i to V_j, |φ(V_i) − φ(V_j)| = ℓ_ij. This relation shows that the pulled back Euclidean metric agrees with the polyhedral metric. Moreover, the configuration is unique up to rigid motion, which means that if φ̃ is another configuration, then there is an isometry (rigid motion) I of E^2 so that φ̃ = I ∘ φ. The proof is given in the Appendix. Solutions of the Prescribed Green Tensor Equation (NC). In the first application of Theorem 1, we use the existence of local solutions of (NP) and glue them together to give a global solution. To this end, we assume that ζ is sufficiently regular and that for each point X ∈ B there is a neighborhood U_X such that (NP) may be solved in any geodesic triangle T_ijk ⊂ U_X whose vertices are in the neighborhood. It means that there is a φ_ijk ∈ C^{1,α} that solves (NP) on T_ijk. Since the metric is flat, the interior angles are determined by side lengths via the cosine law, and thus add up to 2π going around a vertex because the angles of the curves emanating from such a vertex do. Note that {U_X}_{X ∈ B} is an open cover of B.
Suppose we are given a bounded domain B ⊂ E^2 and a smooth positive definite symmetric matrix function ζ_ij defined on a neighborhood of B whose curvature is everywhere zero. For simplicity, we may assume that B is a geodesic polygon in the ζ metric. We shall construct a continuous mapping φ : B → S = E^2 by continuing local solutions. We shall show that it solves (NP) on B. We begin by showing that there is an approximating sequence of triangulations to a given disk with metric (B, ζ). Start with an initial geodesic triangulation of B with triangles of diameter less than the injectivity radius of ζ. By barycentric subdivision we obtain a sequence of triangulations whose maximum diameter tends to zero. After finitely many subdivisions, we reach a triangulation whose triangle diameters are smaller than the Lebesgue number of the cover {U_X}. We arrange that the edges of the triangles are all minimizing geodesics whose length equals the distance between the ending vertices. We also require that triangles be nondegenerate, such that no side length equals the sum of its other two side lengths (a strict triangle inequality holds). The latter amounts to requiring that the third vertex not be on the geodesic determined by the other two. This can always be arranged vertex-wise by making a small perturbation of the proposed new vertex position. Call such a sequence of nondegenerate triangulations B_k, with the property that the largest diameter of a triangle in B_k tends to zero as k → ∞. In the realization problem for a metric, the barycentric subdivision results in vertices of B_k that are also vertices of B_{k+1}. For this purpose, the barycentric subdivision of B_k may require a small perturbation of the middle point if needed, as explained above. In this case, in each triangle, there is a Euclidean metric which coincides with ζ restricted to the edges. For a sufficiently fine subdivision, we have a geodesic triangulation B_m of B.
Each triangle comes with an isometry φ_T : T → E^2 determined up to a rigid motion, for example, by matching corresponding points using Fermi coordinates in T_ijk and T. By composing with an appropriate rigid motion R, the map R ∘ φ_T can be pasted to smoothly extend the map built by pasting the maps for the triangles one at a time. We have proved the global existence theorem for (NP). Lemma 18. Let B be a bounded open topological disk in E^2 with a prescribed C^2 positive definite matrix function ζ satisfying the K ≡ 0 compatibility condition defined in a neighborhood of B with induced metric and distance ρ such that the boundary ∂B is piecewise ζ-geodesic. There is a sequence of ζ-geodesic triangulations, call them B_k, such that the largest diameter of the triangles of B_k tends to zero. For k sufficiently large, the induced triangle-wise Euclidean metric and the corresponding induced isometry B → E^2 solve (NP). The discrete problem (ND) as an approximation to the continuum problem (NC). By pasting together local solutions we obtained a global solution to (NP). However, the construction required knowing geodesic triangles of ζ so that the flat structure on each T_ijk agreed with the flat metric ζ. If we used the PL straight line triangulation of a polygon B ⊂ E^2 instead, the edges of the triangles would no longer be geodesics in the ζ metric. We still define ℓ_ij = ρ_ζ(V_i, V_j) to be the ζ-distances between vertices, but then the PL edge segments, not being geodesics, will in general have a ζ-length greater than ℓ_ij. Thus if we construct the polyhedral metric on the PL disk, it will be flat because the angles at the vertices are determined by triangles in the flat metric ζ. However, the PL metric will not necessarily agree with the ζ-metric in the triangle. This metric still approximates the ζ-metric and its isometric embedding to E^2 approximates the isometric embedding of the ζ-metric. In this sense, the discrete nonlinear problem approximates the continuum (NP). Lemma 19.
Let B be a bounded PL triangulated disk in E^2 with a prescribed C^2 positive definite symmetric matrix function ζ defined in a neighborhood of B satisfying the compatibility condition K ≡ 0 with induced metric and distance ρ such that the boundary ∂B consists of piecewise E^2 straight line segments. There is a sequence of PL triangulations, call them B_k, such that the largest diameter of the triangles of B_k tends to zero. The polyhedral metrics approximate ζ in the sense of Alexandrov: the induced distance functions ρ_k → ρ_ζ uniformly in B × B. After composing by Euclidean isometries if necessary, the PL isometries φ_k : B_k → E^2 tend in C^1 to φ_ζ : B → E^2, a solution of (NP). Proof. B may be triangulated by PL triangles such that the ζ-diameter of each T_ijk is less than the injectivity radius of ζ and such that strict triangle inequalities hold. Call this triangulation B_1. Call the successive good PL barycentric subdivisions B_k. The maximal diameter of the triangles tends to zero as k → ∞. Since ζ_ij is bounded and uniformly positive on B_1, the ζ-diameters also tend to zero. Fix a point P_0 and a direction e_0 in the plane. If we designate a basepoint V_1 and a direction e at V_1 common to all triangulations B_k, we may use a rigid motion on the constructed configuration φ_k to arrange that the point, direction and orientation agree: φ_k(V_1) = P_0, d(φ_k)[V_1](e) = e_0 and d(φ_k)[V_1] is orientation preserving. By assumption or construction, the sum of the angles of the triangles adjacent to an interior vertex adds up to 2π. As before, we can find a development of (B_k, ζ) into the plane and construct the PL mapping φ_k which is an isometry to E^2 on each triangle. We claim that the sequence φ_k converges to a limiting map φ : B → E^2 which solves (NC) for the limiting structure. First, the φ_k are uniformly Lipschitz. For abstract configurations the φ_k are local isometries.
For the realization problem, this follows from the fact that ζ_ij is uniformly bounded by Λ^2 δ_ij, where Λ^2 is the supremum of all eigenvalues of ζ_ij(x) for all x ∈ B. It follows that the induced metric ρ is √2 Λ-Lipschitz on B and φ_k is Λ-Lipschitz. If ρ(x, y) ≥ ρ(x′, y′), the same inequality follows by swapping the roles of (x, y) and (x′, y′). Since the maximum stretch in a linear map on a triangle is along one of the edges, ρ_k(x, y) ≤ Λ|x − y| when restricted to a triangle of B_k. If σ is a straight line from x to y in B, let x = x_1, x_2, . . . , x_n = y be the intersections of σ with the edges of the triangles; then by the triangle inequality, ρ_k(x, y) ≤ Σ_i ρ_k(x_i, x_{i+1}) ≤ Λ Σ_i |x_i − x_{i+1}| = Λ|x − y|, because σ is length realizing in the background Euclidean metric. It follows that the sequence of maps {φ_k} is uniformly bounded and Lipschitz. Hence a subsequence converges uniformly to φ, which is Lipschitz. In the realization problem, because the functions φ_k and φ_{k′} agree on the vertices of B_k (say k ≤ k′), are uniformly Lipschitz, and the diameters of the triangles tend to zero, the limits of any two subsequences exist and are equal, hence the whole sequence φ_k converges to φ. By the same argument, the whole sequence ρ_k → ρ_∞ as k → ∞. The curvature atoms are all zero at interior vertices because ζ is a Euclidean metric on geodesic triangles. Let V be an interior vertex. On the one hand, the angles between the tangent vectors of triangle sides emanating from an interior vertex of any Riemannian surface add up to 2π. Thus the angles of the ζ-geodesic triangles add to 2π. Because the ζ-geodesic triangles are flat, their angles may be given from side lengths by the cosine law. On the other hand, the angles of the PL triangles in the induced PL metric are also determined from the lengths of the sides, hence result in the same angles as for the geodesic triangles. As these are disks with bounded curvature whose maximum triangle diameters converge to zero, by a theorem of Aleksandrov [AZ 1962, p. 79], the metrics converge uniformly ρ_k → ρ.
Moreover, the curvature measure converges to the curvature measure of the limiting structure. Thus φ_k converges weakly in the sense of Alexandrov to a solution of (NP). Generalizing the approximation sequence for (B_k, ζ), we now consider any convergent sequence of structures whose triangulations B_k become infinitely fine. We assume only that the polyhedral distance functions ρ_k converge uniformly. This limit turns out to be a surface with bounded curvature in the sense of Alexandrov. The configurations of the piecewise Euclidean structures converge to a configuration of the limiting structure. The structures are approximations of this limiting structure. Applied to the approximations from Lemma 18, the limiting configuration solves the global realizability problem for (NP). Moreover, the compatibility conditions (the flatness of the structures) converge to the compatibility assumed for ζ (the vanishing of curvature). That the approximate distances ρ_k converge to the limiting distance ρ follows from a theorem of [AZ 1962, p. 79]. In the ζ-geodesic convex polygon, let x, y ∈ B be any points. Then |ρ_k(x, y) − ρ(x, y)| ≤ c_1 d_k, where d_k is the largest ζ-diameter of the triangles in the triangulation B_k, and the constant c_1 = 2 + (n − 2)π depends on the number n of vertices of B. (In fact, Alexandrov-Zalgaller's estimate allows arbitrary Euler characteristic of B and an arbitrary bound on the integral curvature, in Aleksandrov's sense of manifolds with bounded curvature. In our setting, we assume B is homeomorphic to a disk and that the PL surfaces are flat.) Thus the sequence of distance functions ρ_k converges to a limiting function ρ_∞ which is a distance function. In the realization problem, we claim that ρ_∞ = ρ, the distance induced from ζ. The metrics are pullbacks of induced metrics ζ_k = φ_k* ds^2_Euclidean which induce distance functions, call them ρ_k = φ_k* ρ_Euclidean. Because φ_k → φ uniformly on B, the distance functions converge to the induced metric φ* ρ_Euclidean.
Both limits agree, so the limiting map is a solution in Alexandrov's sense of (NC): ρ = φ* ρ_Euclidean. Since the limiting map preserves the distance functions, the map itself may be recovered by trilateration. Since a point is uniquely determined by its distances to three nearby points in general position, the map is given by a rigid motion in Euclidean coordinates. This also implies that φ is C^1. 8 Compatibility conditions of (LD) imply those of (LC). The linearization of (ND) led to (LD) rather than the discretization of (LC). In this section, we verify that both the interior compatibility condition and the boundary compatibility condition for (LD) imply the compatibility conditions for (LC). We shall consider the limit of a truss as it approximates the continuous material. We shall show that the compatibility conditions on the elongations which are induced by a given strain field approximate the compatibility condition for the prescribed strain problem. The wagon wheel condition of (LD) implies the interior compatibility condition of (LC). Suppose B ⊂ E^2 is a PL triangulated disk. Consider the problem of determining an infinitesimal deformation u : B → E^2 by prescribing the strains ½(∇u + (∇u)^T) = ε, (55) where ε_ij = ε_ji is a given symmetric strain field. Were such a u to exist, because it is a map of Euclidean spaces, the strain field must necessarily satisfy the continuum compatibility condition in B, Ink(ε) = ε_11,22 − 2 ε_12,12 + ε_22,11 = 0 (56) (cf. (35)). This statement is the linearized equivalent of saying that the pulled back metric of a map between Euclidean spaces corresponds to a vanishing Riemann curvature. The infinitesimal deformation equations of a hexagon, Au = Λ, are a discretization of the continuum equations for prescribed strain. Its compatibility equation approximates the continuum compatibility. Theorem 20 (Expansion of Compatibility Equation for Regular Hexagons). Let B_3r ⊂ R^2 be the disk of radius 3r about 0 and H ⊂ B_2r be a regular hexagon with side length δ ≤ r containing 0.
Let u ∈ C^4(B_3r, R^2) be an infinitesimal deformation satisfying the strain equation (55). The wagon wheel condition (21) for the u-induced rates of change of distances between vertices of H has the Taylor expansion (57) about the origin as δ → 0, uniformly in B_2r, depending on ‖u‖_{C^4(B_2r, R^2)}. So if the discrete compatibility condition W = 0 holds for all δ, then the continuum compatibility conditions (56) hold. The conclusion holds for points centered anywhere inside the hexagon. Thus maintaining the wagon wheel condition for refined grids approximates the continuum compatibility condition. As we remarked after (25), for affine hexagons, the compatibility equation is the wagon wheel condition weighted by the respective side lengths. The theorem continues to hold for affine hexagons. A similar compatibility condition (22) holds with more complicated weights. We expect that the expansion of both of these in δ has the continuum compatibility condition as the lowest order coefficient. Proof. The proof of the theorem depends on expressing the rate of change of distance in terms of strains. Lemma 21. Let B_3r ⊂ R^2 be the disk of radius 3r about the origin and a_i, a_j ∈ B_3r. Let u ∈ C^4(B_3r, R^2) be an infinitesimal deformation with strains given by (55). If φ(x, t) is a deformation such that φ(x, 0) = x and (∂φ/∂t)(x, 0) = u(x), then the rate of change of the distance |φ(a_j, t) − φ(a_i, t)| at t = 0 is determined by the strains along γ, where γ(s) = a_i + s(a_j − a_i) for 0 ≤ s ≤ 1 is a parameterization of the line segment from a_i to a_j. Proof of Lemma. Let the positions of the vertices be denoted φ(a_i, t) with initial position a_i = φ(a_i, 0) and initial velocity U_i = (∂/∂t)φ(a_i, 0). Then the elongation is computed by the Fundamental Theorem of Calculus, where ∇u may be replaced by its symmetrization ε(γ(s)) = ½(∇u + (∇u)^T) in the quadratic form, proving the lemma. Finally, the strains are expressed in a Taylor series about the origin. The elongations of the edges of the hexagon H are computed by integrating the Taylor series in their expressions.
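The continuum condition (56), which the expansion recovers, can be sanity-checked numerically: any strain field induced by a smooth displacement satisfies Ink(ε) = 0 identically. A sketch with an arbitrarily chosen polynomial displacement u = (x^2 y^2, x^3 y) (our choice, not from the text):

```python
def eps(x, y):
    """Strain field induced by the displacement u = (x^2*y^2, x^3*y):
    e11 = u1_x, e22 = u2_y, e12 = (u1_y + u2_x)/2."""
    e11 = 2 * x * y**2
    e22 = x**3
    e12 = 0.5 * (2 * x**2 * y + 3 * x**2 * y)
    return e11, e12, e22

def ink(x, y, h=1e-3):
    """Central finite-difference Ink(eps) = e11_yy - 2*e12_xy + e22_xx."""
    e11_yy = (eps(x, y + h)[0] - 2 * eps(x, y)[0] + eps(x, y - h)[0]) / h**2
    e12_xy = (eps(x + h, y + h)[1] - eps(x + h, y - h)[1]
              - eps(x - h, y + h)[1] + eps(x - h, y - h)[1]) / (4 * h**2)
    e22_xx = (eps(x + h, y)[2] - 2 * eps(x, y)[2] + eps(x - h, y)[2]) / h**2
    return e11_yy - 2 * e12_xy + e22_xx

# Individually e11_yy = 4x, e12_xy = 5x, e22_xx = 6x; the combination vanishes.
for (x, y) in [(0.3, -0.7), (1.1, 0.4), (-0.5, 0.9)]:
    assert abs(ink(x, y)) < 1e-6
```

For this polynomial field the finite differences are exact up to rounding, so the residual is pure floating-point noise.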
The twelve elongations are put into the wagon wheel condition, and coefficients are collected (using Maple) to yield (57). The theorem was first proved by Krtolica [K 2016]. Compatibility curve sums for (LD) imply curve integrals for (LC). Let Ω be a simply connected subdomain with C^2 boundary. We can build an approximation Ω_n by approximating ∂Ω by a piecewise linear curve that passes through n equally spaced points V_{n,1}, V_{n,2}, . . . , V_{n,n} ∈ ∂Ω taken in order around ∂Ω, attaching inward facing equilateral triangles to each of the segments, connecting their interior vertices with edges forming a ring girder G_n along the boundary, and then filling the remainder with an arbitrary triangulation. Then the compatibility sum for (LD) gives an equation V(G_n, L) = 0 on the prescribed elongations L which is a weighted sum involving all edges of the double layer, the edges in the girder G_n at most one link from ∂Ω_n. We may partition the girder into n pieces G_{n,i} localized near each of the rim vertices V_{n,i} and split the sum accordingly. It turns out that if we fix a vertex V_{n,1} = X ∈ ∂Ω, take an arbitrary strain field near X, and consider its induced elongations L_n for the constructed triangulations, then the boundary strain compatibility for (LD) of each localized piece converges to the (LC) boundary integrand (40): V(G_{n,1}, L_n)/∆_n → β(e_1(X)) = −(∂ε_11/∂ν)(X) + (ε_11(X) − ε_22(X)) κ(X) as n → ∞, where r = ∆_n = |V_{n,i+1} − V_{n,i}| for all i is the common distance between boundary vertices at the n-th stage and ν is the inward normal. Figure 9: Piece of a boundary girder G_{n,i}. Since we suppose that the boundary is C^3, we perform the computation for a specific boundary curve that agrees up to third order with any given boundary curve. Theorem 22 (Expansion of compatibility condition along a curve).
Let Ω be a subdomain and ∂Ω a C^3 curve through the origin V_0 = 0 and tangent to the x-axis such that at the origin its curvature is κ, and its derivative of curvature with respect to arclength is b. Let Ω be the region above the curve. Let B_δ ⊂ R^2 be the disk of radius δ about the origin. Let V_1, V_4 ∈ ∂Ω be vertices on both sides of the origin such that |V_1 − V_0| = |V_4 − V_0| = r and let V_2 and V_3 be interior vertices above ∂Ω such that V_0 V_1 V_2 and V_0 V_3 V_4 are equilateral triangles. We suppose that r > 0 is so small that V_1, . . . , V_4 are in B_δ. Let T_r be a truss such that V_0 is adjacent only to the vertices V_1, V_2, V_3 and V_4. Let u ∈ C^4(B_δ, R^2) be an infinitesimal deformation satisfying the strain equation (55). If we denote by V_{0.5} and V_{3.5} the midpoints of the sides V_0 V_1 and V_4 V_0, resp., then let the localized piece of the boundary girder G_{n,1} near the origin be the V_0 V_{0.5} V_2 V_3 V_{3.5} part of the truss T_r = {E_01, E_02, E_03, E_04, E_12, E_23, E_34}. The curve compatibility condition for ∂Ω of Theorem 7, where we take half of the contributions from the sides E_01 and E_04, for the u-induced rates of change of distances between vertices of T_r has the Taylor expansion about the origin V(G_{n,1}, L_n) = −ε_11,2 + (ε_11 − ε_22)κ + O(r) as r → 0. Hence, in the limit as r → 0, the discrete curve sum compatibility condition tends to the continuum curve integral compatibility conditions (40). The distance between V_2 and V_3 will be smaller or larger than r, depending on whether κ > 0 or κ < 0. Note that the third derivative of the boundary influences only the r^2 term. The number of variables is 2v_S + 2v_T. The number of equations is e_S + e_T + 4. The kernels of A(S) and A(T) consist of rigid motions. The four extra equations guarantee that the rigid motion is the same for both A(S) and A(T), hence the grand system has a three-dimensional kernel: the bigon is rigid.
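That the kernels of the matrices A consist of infinitesimal rigid motions can be checked directly: under the velocity field of a rigid motion, every bar's elongation rate vanishes. A sketch (names are ours; the rotation sign follows W(Z) = Z × H with H = (0, 0, c)):

```python
def rigid_velocity(a, b, c):
    """Velocity field of an infinitesimal rigid motion: translation (a, b)
    plus rotation rate c about the origin, W(Z) = (a + c*Z2, b - c*Z1)."""
    return lambda z: (a + c * z[1], b - c * z[0])

def elongation_rate(p, q, w):
    """Unnormalized rate of change of |p - q| under velocity field w:
    (w(p) - w(q)) . (p - q); the bar p-q is unstrained iff this vanishes."""
    wp, wq = w(p), w(q)
    return (wp[0] - wq[0]) * (p[0] - q[0]) + (wp[1] - wq[1]) * (p[1] - q[1])

w = rigid_velocity(0.3, -1.2, 0.7)
for p, q in [((0.0, 0.0), (1.0, 0.0)), ((2.0, 1.0), (-1.0, 3.0))]:
    assert abs(elongation_rate(p, q, w)) < 1e-12
```

The translation part cancels in the difference and the rotation part is perpendicular to the bar, so the rate is identically zero for every pair of points.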
The Maxwell count of the bigon is c_bigon = (e_S + e_T + 4) − 2(v_S + v_T) + 3 = (e_S − 2v_S + 3) + (e_T − 2v_T + 3) + 1 = c_S + c_T + 1. In the triangle construction, the vertices are distinct for the trusses S, T and U. Connecting the legs of the triangle amounts to adding six equations. The number of variables is 2v_S + 2v_T + 2v_U. The number of equations is e_S + e_T + e_U + 6. The kernels of A(S), A(T) and A(U) consist of rigid motions. The six extra equations from the three sides and the non-degeneracy of the triangle guarantee that the rigid motion is the same for all three A(S), A(T) and A(U), hence the grand system has a three-dimensional kernel: the triangle is rigid. The Maxwell count of the triangle is c_triangle = (e_S + e_T + e_U + 6) − 2(v_S + v_T + v_U) + 3 = c_S + c_T + c_U. In the prism construction, there are three distinct vertices in the trusses P and Q and two distinct vertices in each of the trusses R, S and T. Intuitively, if P and Q were connected with just the two legs R and S, there would be one degree of freedom of motion. An appropriate third leg T will prevent such motion. If, for example, the three connecting legs were parallel, then the prism would admit an infinitesimal shear motion perpendicular to the legs. Connecting the legs of the prism amounts to adding twelve equations. The number of variables is 2v_P + · · · + 2v_T. The number of equations is e_P + · · · + e_T + 12. The kernels of A(P) through A(T) consist of rigid motions. The twelve extra equations do not necessarily guarantee that the rigid motion is the same for all five A(P) through A(T) unless R, S and T are in a non-degenerate position relative to P and Q; in that case the system (59) has a three-dimensional kernel, so the prism is rigid. The Maxwell count of the prism is c_prism = (e_P + · · · + e_T + 12) − 2(v_P + · · · + v_T) + 3 = (e_P − 2v_P + 3) + · · · + (e_T − 2v_T + 3) = c_P + · · · + c_T. We may write this nondegeneracy condition as a determinant inequality.
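The Maxwell count bookkeeping for the bigon, triangle, prism, and pin constructions is pure arithmetic in c = e − 2v + 3 and can be verified mechanically (helper names are ours; the sample edge and vertex counts are arbitrary):

```python
def maxwell(e, v):
    """Planar Maxwell count c = e - 2v + 3 of a truss with e edges, v vertices."""
    return e - 2 * v + 3

def joined(parts, extra):
    """Maxwell count of the trusses parts = [(e, v), ...] joined by
    `extra` additional connection equations."""
    return maxwell(sum(e for e, _ in parts) + extra, sum(v for _, v in parts))

S, T, U = (4, 3), (9, 6), (6, 4)   # arbitrary (edges, vertices) sample data
cS, cT, cU = (maxwell(e, v) for (e, v) in (S, T, U))
assert joined([S, T], 4) == cS + cT + 1                 # bigon: 4 extra equations
assert joined([S, T, U], 6) == cS + cT + cU             # triangle: 6 extra
assert joined([S, T, U, S, T], 12) == 2*cS + 2*cT + cU  # prism: five trusses, 12 extra
assert joined([T], 2) == cT + 2                         # pin: 2 extra
```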
View R, S and T as segments, so that, e.g., A(R) has the same kernel as the corresponding single-edge constraint. We know that P moves as a rigid body, so its displacement field is given by three parameters a, b, c corresponding to translation and rotation. In three dimensions, the velocity field of a rotation about the origin at Z is given by cross product with a fixed vector H: W(Z) = Z × H. Infinitesimal rotation in the x–y plane is given by crossing with H = (0, 0, c), so that W(x, y, 0) = (c y, −c x, 0). Adding a translation (a, b), the velocity of any rigid motion is thus (u, v) = (a + c y, b − c x). The prism is rigid if the legs connecting them make the rigid motions of P and Q coincide. Substituting the unknown motion (60) for the vertices of P and any fixed motion, say (u, v) = (0, 0), for the vertices of Q in the leg equations gives a homogeneous linear system for a, b and c. It has a trivial kernel if its determinant is nonvanishing. If the legs were parallel, then the first two columns would be multiples of one another and the determinant would vanish. But other configurations also allow infinitesimal flexes. For example, if the lines determined by the legs meet at the origin, then the areas of the parallelograms determined by the endpoints of the legs all vanish, so the last column is zero. In this case, a nontrivial flex is given by the velocity field of a rotation about the origin for P and zero for Q. The determinant is invariant under translation, so that any point may be the meeting point. In the pinning construction, the vertices are distinct for the truss T and two distinct vertices z_1 and z_2 share the same coordinates. Pinning the two vertices amounts to adding the two equations u(z_1) = u(z_2). The number of variables is 2v_T. The number of equations is e_T + 2. The kernel of A(T) consists of rigid motions. The two extra equations restrict the kernel further, so the pin is infinitesimally rigid. The Maxwell count of the pin is c_pin = (e_T + 2) − 2v_T + 3 = (e_T − 2v_T + 3) + 2 = c_T + 2. 9.2 Proof that wagon wheels form a basis in triangular domains.
The method of proof is to show that there is a maximal statically determined sub-truss in X that omits exactly v_i edges of X, and thus there are v_i compatibility conditions. We remove an edge from each interior hexagon in turn, showing that the wagon wheel of that hexagon is independent of the remaining hexagons. Let F_Y denote the triangles of X which are not in any plate. Define a graph G_Y consisting of vertices F_Y and edges between any two triangles of F_Y which share a common edge. The graph may not be connected. Let F_i denote its connected components. Define Y_i to be the truss made from the union of triangles in F_i. It turns out the Y_i are girders connected by the plates. Thus, after removing the plates, the Y_i are what remain. Lemma 23. The plates P_i are bounded by a single simple closed curve (they are simple trusses). Proof. Any point in P_i is within one unit of a center in G_i. Because the straight line path between neighboring centers is also in P_i, it is possible to construct a path from any point, through a path connecting centers, to any other point in P_i. Thus, P_i is path connected. The boundary of P_i can be at most one closed curve. If not, there would be triangles inside the outer boundary of P_i which are not in X, contrary to the assumption that X is simply connected. Lemma 24. The Y_i are girders. Proof. Y_i is connected because the graph F_i is connected. There are no hexagon points in Y_i. Thus every vertex is on the boundary. Thus any closed loop in Y_i may be homotoped through Y_i to a closed loop γ̃ in the set of boundary paths. As for plates, all lattice points within γ̃ are in X. However, any point not already encountered in ∂Y_i would be a hexagon point, thus not part of Y_i. Lemma 25. Girders are statically determined. Proof. The girder is infinitesimally rigid: any single triangle is determined up to a rigid motion, and gluing on another triangle to a rigid structure maintains rigidity because a common edge determines its motions.
We shall show that removing any edge from a girder results in a flexible structure; hence the girder is statically determined. A nontrivial flex yields a nontrivial infinitesimal flex. There are three types of triangles in a girder: those with exactly one, two or three neighboring triangles in the girder. In the first case, removing a boundary edge leaves another boundary edge which is free to flex, and removing the common edge leaves an empty quadrilateral, which flexes. In case the triangle has two neighboring triangles, remembering that the opposite corner is not a hexagon point, removing the boundary edge makes the opposite corner a hinge that flexes. In the case that the triangle has three neighboring triangles, removing an edge leaves an empty quadrilateral. Remembering that none of the vertices of the quadrilateral are hexagon points, the quadrilateral flexes. G_i, being a finite, connected, simply connected subgraph of the triangular lattice, has a metric geometry. In many ways it behaves like the Euclidean plane, and arguments from the plane can be applied to G_i. Choose a basepoint c_1. For each vertex c in P_i, let r(c) be the G_i-distance from the basepoint. (r is for "radius.") Since each edge has unit length, r(c) is the minimal number of edges in an edge-path connecting c_1 to c in G_i. Any c ∈ P_i \ G_i has r(c) = r(c_j) + 1 where c ∈ H(c_j). An edge-path between two vertices which realizes the distance is called a geodesic. There may be many geodesics between two points. However, in these triangular metric graphs, certain convexity properties still hold. For example, every point c ≠ c_1 has a neighbor with radius equal to r(c) − 1. The level sets of r are then "r-circles" about c_1. It turns out that the r-circles are lines, namely made up of unions of simple paths. We give some proofs. The triangular truss H has geodesics and distance circles. The lattice is generated by the vectors e_1 = (1, 0) and e_2 = (1/2)(1, √3).
Put e_3 = e_2 − e_1, e_4 = −e_1, e_5 = −e_2, e_6 = −e_3 and extend modulo 6, so e_7 = e_1, etc. The six edge directions emanating from a vertex are e_1, . . . , e_6. Note that there is a unique geodesic between the points c and c + r e_i of length r, given by the path t → c + t e_i where 0 ≤ t ≤ r. Every other path connecting the endpoints is longer. If a point is in a "sector" between the generating rays, say c = c_1 + a e_i + b e_{i+1} where a, b ∈ N are positive integers, then r(c) = a + b and the set of geodesics connecting c_1 to c are zig-zag paths which consist of a steps in the e_i direction and b steps in the e_{i+1} direction, taken in any order. These geodesics sweep out a parallelogram between the endpoints. Every other path connecting these endpoints is longer. This means that a geodesic curve either goes straight or turns right or left 60° at each vertex. In the subgraph G_i, the paths are restricted to paths connecting points of G_i. This could mean that the boundary of G_i may obstruct some of the paths between endpoints, or there may be a critical obstacle, meaning that all geodesics from c_1 to c must pass through certain boundary points. This is the usual situation for variational problems with an obstacle. If the obstacle is effective, then the distance minimizer goes through points of the obstacle. Note that G_i is not very concave. If G_i is to the right, then the boundary cannot contain, in order, the points c, c + e_1, c + 2e_1 and c + e_1 + e_6, because c + e_1 and c + e_1 + e_6 are a unit apart and are included as an edge in ∂G_i. Thus the boundary points here are c, c + e_1 and c + e_1 + e_6. In other words, going around the boundary of G_i clockwise, the curve may turn right at most 60° at a vertex. This is the same curvature as a glancing geodesic. Furthermore, three consecutive right turns of 60° do not occur.
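The sector formula r(c) = a + b can be confirmed by breadth-first search on the unobstructed lattice. The sketch below is my illustration (names are mine), storing lattice points in (a, b) coordinates with respect to the basis e_1, e_2:

```python
from collections import deque

# Neighbor steps in lattice coordinates (a, b), meaning a*e1 + b*e2:
# e1=(1,0), e2=(0,1), e3=e2-e1=(-1,1), and their negatives e4, e5, e6.
STEPS = [(1, 0), (0, 1), (-1, 1), (-1, 0), (0, -1), (1, -1)]

def bfs_dist(target, radius=12):
    """Graph distance from the origin on the (unobstructed) triangular lattice."""
    dist = {(0, 0): 0}
    q = deque([(0, 0)])
    while q:
        p = q.popleft()
        for da, db in STEPS:
            n = (p[0] + da, p[1] + db)
            if n not in dist and all(abs(x) <= radius for x in n):
                dist[n] = dist[p] + 1
                q.append(n)
    return dist[target]

# Sector formula from the text: for c = a*e_i + b*e_{i+1} with a, b >= 0,
# r(c) = a + b, realized by zig-zag geodesics with a steps of e_i and b of e_{i+1}.
for a in range(6):
    for b in range(6):
        assert bfs_dist((a, b)) == a + b
```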
For example, if G_i is on the left, then c, c + e_4, c + e_4 + e_3, c + e_4 + e_3 + e_2, c + e_4 + e_3 + e_2 + e_1 cannot be consecutive boundary points, because P_i would then contain the hexagon centered at c + e_3. Lemma 26. Suppose c ∈ G_i and c ≠ c_1. Then the minimal value of r on H(c) occurs at exactly one or two neighboring boundary points of ∂H(c). Proof. This is clear if c_1 ∈ H(c), so assume c_1 ∉ H(c). First, the minimum cannot occur at the center point, because one of its neighbors must have the value r(c) − 1 on the boundary. For brevity, let the hexagon be centered at the origin c = 0, and suppose the minimum occurs at two non-neighboring points e_j and e_k where j + 2 ≤ k ≤ j + 4 mod 6. Let γ_j and γ_k be geodesics from c_1 to e_j and from c_1 to e_k, respectively. Then γ_j γ_k^{−1} is a closed loop in G_i. Since G_i is simply connected, all vertices of H interior to the loop also belong to P_i. Moreover, hexagons centered on these vertices are also in P_i, so the vertices belong to G_i. Now, γ_j and γ_k between the hexagon and the last effective boundary obstacle point (or c_1 if there is no obstacle) make a closed loop σ in H. Follow the geodesic ray t → t e_{j+1}, t ≥ 0, from the center to where it meets the shortened loop σ, say on the γ_j side of the obstacle point. Because the ray is strictly minimizing and emanates from the center of the hexagon in a different direction than either γ_j or γ_k, it is shorter than the corresponding arc on γ_j. Hence r(e_{j+1}) < r(e_j), a contradiction. It follows that the minima of r on H(c) occur on the boundary of the hexagon and can be taken at most at two neighboring points. Lemma 27. r may not be equal at all three points of a unit triangle in G_i. Proof. After a rigid motion, for contradiction we may suppose r = r(0) = r(e_1) = r(e_2). Then the three points have neighbors with radius r − 1. Up to reflection and rotation there are four cases. Case 2: r − 1 = r(e_2 + e_3) = r(e_6).
Let γ_6 and γ_7 be geodesics from c_1 to e_6 and to e_2 + e_3, respectively. Let σ be the loop γ_6 γ_7^{−1}, at least from the last effective obstacle point. The points inside the loop are in G_i, as before. Say that these points are to the left of our five points. The ray f(t) = t e_4 for t ≥ 0 meets the loop at either γ_6 or γ_7 (or both). If it is γ_6, then the ray is shorter than the arc of γ_6 joined to the segment from e_6 to 0, of length r. Hence r(0) < r, which is a contradiction. If it is γ_7, then the new ray t → e_2 + t e_4 also meets γ_7, since it is parallel to the old ray and trapped inside the loop f to γ_7 to e_2 + e_3 to e_2 to 0. The new ray is shorter than the arc of γ_7 joined to the segment from e_2 + e_3 to e_2, of length r. Hence r(e_2) < r, which is also a contradiction. Case 3: r − 1 = r(e_5) = r(2e_1) = r(e_2 + e_3). Let γ_5 and γ_7 be geodesics from c_1 to e_5 and to e_2 + e_3, respectively. Let σ be the loop γ_5 γ_7^{−1}, at least from the last effective obstacle point. If the loop is to the left, we argue as in Case 2. If the loop is to the right, the ray f(t) = t e_1 for t ≥ 0 meets the loop at either γ_5 or γ_7 (or both). If it is γ_5, then the ray is shorter than the arc of γ_5 joined to the segment from e_5 to 0, of length r. Hence r(0) < r, which is a contradiction. If it is γ_7, then the new ray t → e_2 + t e_1 also meets γ_7, since it is parallel to the old ray and trapped inside the loop f to γ_7 to e_2 + e_3 to e_2 to 0. The new ray is shorter than the arc of γ_7 joined to the segment from e_2 + e_3 to e_2, of length r. Hence r(e_2) < r, which is also a contradiction. Case 4: r − 1 = r(e_5) = r(2e_1) = r(2e_2), which is almost the same as Case 3. This lemma has some immediate consequences. The differential geometric analog in the Euclidean plane is that the distance function from a point has unit gradient away from the center, and so its level curves are smooth curves. Moreover, the curvature of distance circles is positive. Lemma 28.
r takes three values on H(c), where c ∈ G_i \ H(c_1). Level sets of r are locally convex line segments. Proof. Neighbors of the minimum points at radius r have radius r + 1. The remaining points must have radius r + 2 by Lemma 27. Thus, locally, if the center of a hexagon has radius r, then two neighbors along a line or along a 60° angle have radius r too. Hence r-circles are locally curves which are convex. Lemma 29. Each plate P_i contains a maximal statically determined subtruss obtained by omitting one edge from each hexagon; consequently the v_i wagon wheel conditions of P_i are linearly independent. Proof. The idea is to construct a maximal statically determined subtruss Z of P_i. The number of edges in P_i \ Z is v_i, so that Z is maximal and the number of compatibility conditions for the truss is the Maxwell count c_M. The rough idea is to build up the subtruss one hexagon at a time inductively, by adding the remainder of each next hexagon minus one edge. This establishes a one-to-one correspondence between the hexagons and the missing edges. The subtruss is infinitesimally rigid. Then one checks that removing any one of the remaining edges results in a truss that admits a nontrivial infinitesimal flex. Since G_i is connected, we begin by ordering the centers c_j of the hexagons, where j = 1, . . . , v_i, in such a way that the next center is in the boundary of the union of the previous hexagons, c_{j+1} ∈ ∂(H(c_1) ∪ · · · ∪ H(c_j)), where H(c_k) is the hexagon centered at c_k; let Y_j be that portion of P_i consisting of the first j hexagons. We shall choose centers circlewise, starting from c_1, then points with r = 1, then r = 2, until we reach the furthest point. All radii occur because G_i is connected. Recall that the concentric circles are made up of a collection of paths that end where the circle exits P_i. Choose c_2 ∈ ∂H(c_1) ∩ G_i at one end of a component of a circle. Then take centers in order along this component of the bounding circle. Then continue at the start of the next component of the same circle, and continue in this fashion until the circle has been exhausted. Then continue in the next higher radius r-circle in the same fashion.
The result is that the distance function r(c_j) is nondecreasing as a function of j. It also follows that the condition (63) is satisfied. The fact that r-circles are locally convex is important because it implies that each new hexagon extends beyond Y_j and adds three to seven new edges to form Y_{j+1}; in particular, it adds at least one new interior edge. We shall construct a sequence of trusses Z_j ⊂ Y_j by induction, such that Z_j is statically determined. It turns out that in our construction, all vertices of Y_j occur in Z_j and the boundary edges of Y_j remain edges of Z_j. For the base case, Y_1 is a hexagon at the basepoint. Removing any one edge from Y_1 produces a statically determined truss. The wagon wheel condition on this hexagon restricts the allowable elongation for the removed edge. For the sake of definiteness of our construction, let Z_1 be Y_1 with the e_1 edge from the center removed. Suppose that Z_j ⊂ Y_j is the statically determined subtruss which is Y_j with j edges removed. Let Z_{j+1} be Z_j together with all new edges of H(c_{j+1}) except for one new interior edge: Z_{j+1} = Z_j ∪ H(c_{j+1}) \ Y_j \ (one new interior edge of H(c_{j+1})). Note that the new edges of Z_{j+1} are statically determined: if the velocities of the vertices are prescribed on Z_j, then they are determined for the new vertices. Note that this uses the fact that the additional edges come from a hexagon and are not two consecutive parallel edges, which would admit an infinitesimal flex. It also means that an additional wagon wheel condition on the new hexagon is required to restrict the elongation of the removed edge of H(c_{j+1}). Hence the new wagon wheel condition is independent of the previous wagon wheel conditions of Y_j. Let Z denote the structure after adding the v_i-th hexagon. We claim that the structure is infinitesimally rigid.
To see this, notice that by construction, the number of new edge equations in Z_{j+1} exactly equals two times the number of new vertices. Moreover, they are independent of the equations of Z_j. Hence the nullity of the matrix for Z_{j+1} is the same as the nullity of the matrix for Z_1, which is three, corresponding to its infinitesimal isometries. Furthermore, removing one more edge from the new edges of Z makes the structure infinitesimally flexible. Suppose we remove a new edge of Z_j. This removal introduces a nontrivial flex to Z_j, hence a nontrivial infinitesimal flex. The vertices of Z_k with k < j have zero velocities, but the velocities at some of the new vertices are non-vanishing. With each additional hexagon, the velocities of Z_j are used as boundary conditions, and the velocities on the new vertices of Z_{j+1} are uniquely determined from the new equations. Similarly, the velocities can be computed at all the new vertices added after Z_j. Hence Z has a nontrivial infinitesimal flex. Proof of Theorem 12. The simple truss X is the union of plates and girders, which are connected along edges. We claim that the contact region of any two plates or girders consists of a single edge. If two consecutive edges are shared, then the hexagon centered at their common vertex is wholly contained in X, hence is interior to some plate, and not on the boundary of a plate or a girder. Similarly, if two non-consecutive edges are shared, then, since each piece is simply connected, this forms two disjoint simple closed boundary curves for X, contrary to the assumption that there is only one. Note that we assume that the pieces have the same orientation as H, so that the two pieces traverse each shared edge in opposite directions. If one of the gluings reversed the orientation, then a one-sided Möbius strip would have been formed. If we consider a graph T whose vertices are the plates and the triangles of girders, and whose edges occur between two vertices whenever they share an edge, then T is a tree.
If not, X is not simply connected: X is bounded by more than one circle, contrary to assumption. The union of the girders and the Z's corresponding to the plates is a maximal statically determined substructure. Ordering the pieces in the tree so that the next piece attaches to the union of the previous pieces, as in a tree search, we prove infinitesimal rigidity and static determinacy just as in Lemma 29. Since the hexagons of distinct plates overlap at most on a single boundary edge, the wagon wheel conditions of different plates are independent. Thus all v_i wagon wheel conditions of X are linearly independent. Proof of Theorem 1. One proceeds as in a jigsaw puzzle, constructing the immersion one triangle at a time by extending the immersion to each triangle from what was already built. We claim that the triangles can be ordered in the net so that the next triangle is connected by exactly one edge or by exactly two adjacent edges to the union of the previous triangles. Numbering the triangles in this order, T_{n+1} shares exactly one or two edges with U_n = ∪_{j=1}^n T_j. Such an ordering may be chosen in reverse order from the abstract structure by first removing any boundary triangle with two boundary edges from U_n, resulting in a smaller disk U_{n−1} with boundary. Continue one at a time. If there are no more boundary triangles with two boundary edges, then two cases are possible: either there remain interior vertices in U_n or not. If there are interior vertices, remove a triangle from the boundary of U_n whose boundary edge is opposite an interior vertex. This results in a disk U_{n−1}. Then continue removing triangles with two boundary edges as before. Stop when one triangle remains. If there are no more interior vertices in U_n, then there must be a boundary triangle with two edges on the boundary, so we proceed as before. To see this, suppose there are V_i and V_b interior and boundary vertices, E_i and E_b interior and boundary edges, and F faces.
Because each interior edge bounds two triangles and each boundary edge bounds one, the total number of edges is three times the number of faces, less the number of interior edges, which have been double-counted: 2E_i + E_b = 3F. The Euler characteristic formula of a disk U_n is V_i + V_b − E_i − E_b + F = 1. Substituting for E_i and using V_b = E_b for disks, we get E_b = F + 2 − 2V_i. Thus if there are no interior vertices, E_b = F + 2, so there must be on average more than one boundary edge per face; in other words, some triangle has at least two boundary edges. With the right ordering of triangles in hand, we may develop the configuration jigsaw fashion. Say one common edge is E_{k,1} ⊂ T_k for k ≤ n. Then the PL map p_n : U_n → E^2 is extended to U_{n+1} by the unique linear map on the opposite side of p_n(T_k) which agrees with p_n on E_{k,1} (i.e., we paste the triangles in sequence). If T_{n+1} shares two edges with U_n, then by restriction of the triangulation, these must be two edges that meet at one endpoint, an interior vertex, say V_k, k ≤ n. The extension on one side coincides with the extension on the other side because, by the angle condition, the angle of T_{n+1} is the angle deficit of U_n at V_k, the total being 2π. The resulting extension completes p_{n+1} to a PL map on a neighborhood of V_k. The image under p_{n+1} is a Euclidean neighborhood of p_{n+1}(V_k) homeomorphic to the neighborhood of V_k in U_{n+1}. By choosing the sequence T_n carefully so that each U_n is a disk (which is possible on triangulated disks), these are the only types of common edge situations that are encountered. Otherwise, one could imagine extending the map on a ring that surrounds some triangles, so that filling in the holes would result in triangles with more than two edges in common with U_n. Thus we have provided an argument for the existence of an immersion φ for flat abstract structures.
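The edge and face counts in the ordering argument can be verified on concrete triangulated disks. The sketch below (my illustration) recovers (F, V_i, V_b, E_i, E_b) from a list of triangles and checks the identities 2E_i + E_b = 3F and E_b = F + 2 − 2V_i that drive the argument.

```python
from collections import Counter
from itertools import combinations

def disk_counts(triangles):
    """Given a triangulated disk as vertex triples, return (F, V_i, V_b, E_i, E_b)."""
    edge_mult = Counter(frozenset(e) for t in triangles for e in combinations(t, 2))
    E_i = sum(1 for m in edge_mult.values() if m == 2)   # shared by two faces
    E_b = sum(1 for m in edge_mult.values() if m == 1)   # on the boundary
    verts = {v for t in triangles for v in t}
    b_verts = {v for e, m in edge_mult.items() if m == 1 for v in e}
    return len(triangles), len(verts - b_verts), len(b_verts), E_i, E_b

# Hexagon wagon wheel: center 0, ring 1..6 -> one interior vertex.
wheel = [(0, j, j % 6 + 1) for j in range(1, 7)]
F, V_i, V_b, E_i, E_b = disk_counts(wheel)
assert (F, V_i, V_b, E_i, E_b) == (6, 1, 6, 6, 6)
assert 2 * E_i + E_b == 3 * F          # every face has three edges
assert E_b == F + 2 - 2 * V_i          # from Euler's formula V - E + F = 1
# With no interior vertices, E_b = F + 2 > F, so some triangle
# has at least two boundary edges.
strip = [(0, 1, 2), (1, 2, 3)]
F, V_i, V_b, E_i, E_b = disk_counts(strip)
assert V_i == 0 and E_b == F + 2
```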
It remains to argue that for different choices of the ordering of triangles in this construction, the PL map φ from the abstract net is unique up to rigid motion and reflection. Suppose that two configurations are built up starting from the same initial triangle. It suffices to show that the image of any point X in the net is uniquely determined relative to the initial triangle φ(T_1). Let us consider two sequences of puzzle pieces. Let φ be the PL map obtained from the order T_1, . . . , T_n, and let φ̃ be the PL map obtained from a different order T_{j_1}, . . . , T_{j_n}, where j_i is a permutation of {1, . . . , n} with j_1 = 1. Both φ and φ̃ are continuous. Choose a point X_0 ∈ T_1 = T_{j_1}. We use an open/closed argument. Take a PL path γ from X_0 to X in S. This means that γ does not meet a vertex (except possibly X) and that, restricted to each triangle it crosses, γ is linear with nonzero velocity. Let t_0 = sup{t ∈ [0, 1] : φ(γ(s)) = φ̃(γ(s)) for all 0 ≤ s ≤ t}. If t_0 = 1, then φ(X) = φ̃(X), so suppose t_0 < 1. Since φ = φ̃ on T_1, the two functions agree while γ is in T_1, so t_0 > 0. Since γ has positive velocity in each T_i it crosses, there is an ε > 0 so that γ(t) is in the same triangle, say T_i, for t_0 − ε < t < t_0. If γ(t_0) is an interior point, there is a δ > 0 so that γ(t) is in the same triangle for t_0 − ε < t < t_0 + δ. But if the linear functions φ and φ̃ agree on the first part of the triangle, they must agree on the second part. If γ(t_0) is a boundary point of a triangle, then by construction of γ it is an interior point of some edge E_{ij}. Since both φ and φ̃ are constructed by gluing the same Euclidean triangle along the edge φ(E_{ij}), φ and φ̃ continue to agree on T_j. Hence there is a δ > 0 so that γ(t) is in T_i ∪ T_j for t_0 − ε < t < t_0 + δ, and the linear functions φ and φ̃ agree on both of these triangles.
The upshot in both cases is that φ(γ(t)) and φ̃(γ(t)) agree for 0 ≤ t < t_0 + δ, contradicting the fact that t_0 is the supremum of the interval of agreement.
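The jigsaw development step itself — pasting the next Euclidean triangle across a shared, already placed edge — can be made concrete. The sketch below is my illustration (the function names are assumptions, not the paper's): it places the apex of each glued triangle via the law of cosines and checks that successive triangles land on opposite sides of the shared edge with all edge lengths preserved.

```python
import math

def place_apex(A, B, lenAC, lenBC, side=+1):
    """Place the third vertex of a triangle glued along the placed edge A-B.

    side=+1 puts the apex to the left of A->B, side=-1 to the right
    (the new triangle goes on the opposite side of the already built part).
    """
    ax, ay = A
    bx, by = B
    d = math.hypot(bx - ax, by - ay)
    # Law of cosines: signed distance from A to the foot of the apex along A->B.
    t = (lenAC**2 - lenBC**2 + d**2) / (2 * d)
    h = math.sqrt(max(lenAC**2 - t**2, 0.0))
    ux, uy = (bx - ax) / d, (by - ay) / d          # unit vector A->B
    nx, ny = -uy * side, ux * side                 # unit normal on chosen side
    return (ax + t * ux + h * nx, ay + t * uy + h * ny)

# Develop a strip of unit equilateral triangles, alternating sides,
# starting from the placed edge (0,0)-(1,0).
A, B = (0.0, 0.0), (1.0, 0.0)
C = place_apex(A, B, 1.0, 1.0, side=+1)      # apex of the first triangle
D = place_apex(B, C, 1.0, 1.0, side=-1)      # glue the next one across edge B-C
assert math.isclose(C[0], 0.5) and math.isclose(C[1], math.sqrt(3) / 2)
# All edge lengths of the developed triangles are preserved.
assert math.isclose(math.hypot(D[0] - B[0], D[1] - B[1]), 1.0)
assert math.isclose(math.hypot(D[0] - C[0], D[1] - C[1]), 1.0)
```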
Long-term colorectal cancer incidence after adenoma removal and the effects of surveillance on incidence: a multicentre, retrospective, cohort study Objective Postpolypectomy colonoscopy surveillance aims to prevent colorectal cancer (CRC). The 2002 UK surveillance guidelines define low-risk, intermediate-risk and high-risk groups, recommending different strategies for each. Evidence supporting the guidelines is limited. We examined CRC incidence and effects of surveillance on incidence among each risk group. Design Retrospective study of 33 011 patients who underwent colonoscopy with adenoma removal at 17 UK hospitals, mostly (87%) from 2000 to 2010. Patients were followed up through 2016. Cox regression with time-varying covariates was used to estimate effects of surveillance on CRC incidence adjusted for patient, procedural and polyp characteristics. Standardised incidence ratios (SIRs) compared incidence with that in the general population. Results After exclusions, 28 972 patients were available for analysis; 14 401 (50%) were classed as low-risk, 11 852 (41%) as intermediate-risk and 2719 (9%) as high-risk. Median follow-up was 9.3 years. In the low-risk, intermediate-risk and high-risk groups, CRC incidence per 100 000 person-years was 140 (95% CI 122 to 162), 221 (195 to 251) and 366 (295 to 453), respectively. CRC incidence was 40%–50% lower with a single surveillance visit than with none: hazard ratios (HRs) were 0.56 (95% CI 0.39 to 0.80), 0.59 (0.43 to 0.81) and 0.49 (0.29 to 0.82) in the low-risk, intermediate-risk and high-risk groups, respectively. Compared with the general population, CRC incidence without surveillance was similar among low-risk (SIR 0.86, 95% CI 0.73 to 1.02) and intermediate-risk (1.16, 0.97 to 1.37) patients, but higher among high-risk patients (1.91, 1.39 to 2.56). Conclusion Postpolypectomy surveillance reduces CRC risk. 
However, even without surveillance, CRC risk in some low-risk and intermediate-risk patients is no higher than in the general population. These patients could be managed by screening rather than surveillance.

INTRODUCTION

Colorectal cancer (CRC) causes considerable morbidity and mortality. 1 It can be prevented by removing adenomas, known precursors. 2 Patients at increased risk of CRC following adenoma removal are recommended surveillance colonoscopy. The 2002 UK surveillance guidelines stratify patients with adenomas into three risk groups, 3 as do the European Union (EU) and US guidelines. 4 5 Low-risk patients (with 1-2 adenomas <10 mm) are recommended no surveillance or surveillance at 5-10 years; while intermediate-risk/higher-risk patients (with 3-4 adenomas <10 mm or 1-2 adenomas with at least 1 ≥10 mm (UK/EU), or 3-10 adenomas or at least 1 ≥10 mm, with villous histology, or high-grade dysplasia (US)) are recommended 3-yearly surveillance. High-risk patients (with 5 or more adenomas <10 mm, or 3 or more adenomas with at least 1 ≥10 mm (UK), or more than 10 adenomas (US)) are recommended colonoscopy at 1 year or within 3 years before 3-yearly surveillance. The 2002 UK guidelines were largely based on studies using detection rates of advanced adenomas (AAs) at follow-up as a proxy for CRC, 3 6-9 which overestimates risk due to higher rates of AAs than CRC. 9 10 Moreover, as the guidelines were developed before substantial improvements in colonoscopy quality, 11 such intensive surveillance may no longer be necessary. In 2004, there was a call for proposals to reassess surveillance requirements among intermediate-risk patients, who account for most surveillance colonoscopies. 12 There was concern that the introduction of the Bowel Cancer Screening Programme (BCSP) in 2006 would increase demand for surveillance and overwhelm endoscopy services.
We developed a study that examined CRC incidence among intermediate-risk patients over a median of 7.9 years, identifying a higher-risk subgroup who benefited from surveillance and a lower-risk subgroup who could potentially forego surveillance. 13 These findings were timely as adenoma surveillance accounts for 20% of colonoscopies performed in the UK and USA, placing great pressure on endoscopy services. 14 15 Revision of the guidelines is required to minimise unnecessary colonoscopies while ensuring that patients at increased CRC risk receive surveillance. The present study examined CRC incidence among all three risk groups over a median of 9.3 years and assessed effects of surveillance on CRC incidence. We aimed to identify patient subgroups who could safely forego surveillance or receive less than currently recommended.

Study design and participants

We conducted a retrospective study using data from 17 UK hospitals on patients who had adenomas removed at baseline colonoscopy from 1984 to 2010 (mostly (87%) from 2000 to 2010). We used this cohort for our previous study of intermediate-risk patients. 13 16 For the present study, we obtained updated information on the cohort (eg, on surveillance examinations, cancers and deaths). This provided longer-term follow-up data for the intermediate-risk group. We additionally examined the low-risk and high-risk groups not previously analysed. Participating hospitals were required to have lower gastrointestinal endoscopy and pathology reports recorded electronically for at least 6 years prior to study start (2006). We searched endoscopy databases for patients who had undergone colonic examination before 31 December 2010, and searched pathology databases for reports of colorectal lesions. Endoscopy and pathology reports were pseudonymised and entered into a database (Oracle Corporation, Redwood City, California, USA). Summary values for size, histology and location were assigned to lesions seen at multiple examinations.
16 After identifying patients with colonic examinations before 31 December 2010, we looked back in these patients' records to identify the first occurrence of an adenoma, defining this as baseline. Multiple examinations were sometimes required at baseline to fully examine the colon and remove detected lesions, which we grouped and defined as the baseline visit. Baseline visits sometimes spanned days or months. Subsequent colonic examinations were grouped into surveillance visits, using rules described elsewhere. 16 We excluded patients without a colonoscopy or adenoma at baseline. We also excluded patients with CRC; a prior bowel resection; inflammatory bowel disease; polyposis, juvenile polyps, or hamartomatous polyps; Lynch syndrome or family history of familial adenomatous polyposis; colorectal carcinoma in situ reported more than 3 years before baseline; missing examination dates; or missing information needed for risk categorisation. We obtained data on cancers and deaths from National Health Service (NHS) Digital, NHS Central Register, and National Services Scotland through 2016 and entered these into the study database. We compared the cancer data with the hospital data and resolved data duplication and inconsistency issues. The primary outcome was incident adenocarcinoma of the colorectum, including cancers with unspecified morphology but assumed to be adenocarcinomas (those located between the rectum and caecum). In situ cancers and cancers with unspecified morphology but assumed to be squamous cell carcinomas (those located around the anus) were not included as CRCs. In line with previous methodology, 13 16 we excluded CRCs that we assumed had arisen from incompletely resected baseline lesions because we thought their inclusion could lead to biased estimates of risk and inappropriate surveillance recommendations. 
Namely, we excluded CRCs found in the same/adjacent colonic segment to a baseline adenoma ≥15 mm that was seen at least twice within 5 years preceding cancer diagnosis. In sensitivity analyses, we additionally excluded CRCs that satisfied only some of these criteria, but that we deemed likely to have arisen from incompletely resected lesions.

Statistical analysis

Sample size calculations were based on obtaining estimates of CRC incidence with a coefficient of variation of ~30%. Assuming an incidence rate of two CRCs per 1000 person-years, [17][18][19] nine CRCs and 4500 person-years in any risk subgroup would give a coefficient of variation of 33%. Thus, assuming the smallest subgroup would be 15% the size of the whole risk group, 60 CRCs were required in each risk group. We compared baseline characteristics among patients with and without surveillance using χ² tests, including sex, age, adenoma number, size, histology, and dysplasia, presence of proximal polyps, colonoscopy completeness, bowel preparation quality, year of baseline visit, length of baseline visit (in days or months), family history of cancer/CRC, number of hyperplastic polyps and presence of hyperplastic polyps ≥10 mm. Colonoscopy completeness and bowel preparation quality were defined by the most complete colonoscopy and best preparation during baseline. We estimated CRC incidence after baseline in each risk group. Time-at-risk started from the last examination at baseline. Time-to-event data were censored at first CRC diagnosis, death, emigration or date of complete ascertainment of cases in cancer registries. We examined effects of surveillance and baseline characteristics on CRC incidence. Exposure to successive surveillance visits started at the last examination in each visit. When CRC was diagnosed at a surveillance visit, we did not include the visit as surveillance as it offered no protection against CRC.
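The time-varying "number of surveillance visits" covariate amounts to splitting each patient's follow-up at surveillance visits. A minimal sketch of such episode splitting (illustrative only; the function and field names are my assumptions, not the study's code):

```python
def split_follow_up(baseline_end, censor_time, surveillance_times, event_at_visit=None):
    """Split a patient's follow-up into episodes for a time-varying
    'number of surveillance visits' covariate.

    Times are years since the last baseline examination. A visit at which
    CRC was diagnosed (event_at_visit) contributes no exposure, mirroring
    the rule that such a visit offered no protection.
    """
    visits = sorted(t for t in surveillance_times
                    if baseline_end < t < censor_time and t != event_at_visit)
    cuts = [baseline_end] + visits + [censor_time]
    return [{"start": start, "stop": stop, "n_visits": n}
            for n, (start, stop) in enumerate(zip(cuts, cuts[1:]))]

# Hypothetical patient: censored at 10 years, surveillance at 3 and 6 years.
eps = split_follow_up(0.0, 10.0, [3.0, 6.0])
assert [e["n_visits"] for e in eps] == [0, 1, 2]
assert eps[1] == {"start": 3.0, "stop": 6.0, "n_visits": 1}
# If CRC was found at the 6-year visit, that visit adds no exposure.
eps2 = split_follow_up(0.0, 6.0, [3.0, 6.0], event_at_visit=6.0)
assert [e["n_visits"] for e in eps2] == [0, 1]
```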
We used univariable Cox proportional-hazards models to calculate hazard ratios (HRs) and 95% confidence intervals (CIs). Multivariable Cox regression was used to identify independent CRC risk factors, using backward stepwise selection based on likelihood ratio tests to retain variables with p values <0.05. Number of surveillance visits was included as a time-varying covariate. Interactions between number of surveillance visits and age or sex were assessed by including interaction parameters. We performed Kaplan-Meier analyses to show time to cancer diagnosis and estimate cumulative CRC incidence with 95% CIs at 3 years, 5 years and 10 years. Cumulative incidence curves were compared using the log-rank test. We calculated standardised incidence ratios (SIRs) as the ratio of observed to expected CRC cases, with exact Poisson 95% CIs. Expected cases were calculated by multiplying sex-specific and 5-year age-group-specific person-years by the corresponding incidence in the general population of England in 2007. 20 We divided each patient's follow-up time into distinct periods: in the absence of surveillance, censoring at first surveillance; after first surveillance, censoring at second surveillance; and after second surveillance to final censoring. Using baseline risk factors, we stratified each risk group into lower-risk and higher-risk subgroups. Age was not included in the stratification criteria because older age is associated with worse colonoscopy quality and higher risks of complications; 21 nor were the year or length of the baseline visit, which do not help define clinically relevant subgroups. In our previous study of intermediate-risk patients, incomplete colonoscopies, colonoscopies of unknown completeness, poor bowel preparation, adenomas ≥20 mm, adenomas with high-grade dysplasia and proximal polyps were CRC risk factors.
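The "one minus Kaplan-Meier" cumulative incidence referenced above can be illustrated with a minimal product-limit sketch in plain Python (a toy illustration, not the study's analysis code; the example follow-up times and event flags are made up):

```python
def km_cumulative_incidence(times, events):
    """One minus the Kaplan-Meier survival estimator.
    times: follow-up time per patient; events: 1 = CRC diagnosed, 0 = censored.
    Returns (event_time, cumulative_incidence) pairs at each event time."""
    n = len(times)
    order = sorted(range(n), key=lambda i: times[i])
    survival = 1.0
    at_risk = n
    curve = []
    i = 0
    while i < n:
        t = times[order[i]]
        deaths = censored = 0
        # count events and censorings tied at this time
        while i < n and times[order[i]] == t:
            if events[order[i]]:
                deaths += 1
            else:
                censored += 1
            i += 1
        if deaths:
            survival *= 1 - deaths / at_risk   # product-limit step
            curve.append((t, 1 - survival))    # cumulative incidence = 1 - S(t)
        at_risk -= deaths + censored
    return curve

curve = km_cumulative_incidence([1, 2, 2, 3, 4], [1, 0, 1, 0, 1])
```

In practice such analyses are run with a survival-analysis package; the sketch only shows the estimator the footnotes refer to.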
13 16 In the present study, we used these factors to define higher-risk in a sensitivity analysis of the risk stratification criteria for intermediate-risk patients. Further sensitivity analyses excluded patients without a complete baseline colonoscopy.

Patient and public involvement
Our patient and public representatives reviewed the study proposal and results and have helped to develop plans for wider dissemination of the results.

RESULTS
There were 33 011 eligible patients in the updated cohort. Of these, we excluded 2859 with no baseline colonoscopy; 125 with CRC at baseline or a condition associated with increased CRC risk; 15 whose baseline occurred after 2010; 12 with colorectal carcinoma in situ more than 3 years before baseline; 2 with missing examination dates; 2 without adenomas; 980 whose risk could not be classified; and 44 who were lost to follow-up. Of the remaining 28 972, 14 401 (50%) were classed as low-risk, 11 852 (41%) as intermediate-risk and 2719 (9%) as high-risk (figure 1). Patients attending surveillance were younger than non-attenders and generally more likely to have had more adenomas, an adenoma with tubulovillous histology or high-grade dysplasia, hyperplastic polyps or missing data at baseline. A greater proportion of attenders than non-attenders had a baseline visit before 2005, a baseline visit spanning more than 1 day and a family history of cancer/CRC. Non-attenders were more likely to have had an incomplete colonoscopy or poor bowel preparation. Among intermediate-risk patients, attenders were more likely to be male and have had an adenoma ≥20 mm or hyperplastic polyp ≥10 mm (online supplementary table 1). The median age of low-risk patients was 64 years (IQR 55 to 72), 44% were women and 50% attended surveillance (table 1). The median time to first surveillance was 3.2 years (IQR 2.2 to 5.0).
During a median follow-up of 9.6 years (IQR 7.2 to 12.4), 195 CRCs were diagnosed, giving an incidence rate of 140 per 100 000 person-years (table 1). Number of surveillance visits, age, adenoma histology, proximal polyps and colonoscopy completeness were independently associated with CRC incidence. Adjusting for these factors, a single surveillance visit was associated with a 44% reduction in CRC incidence compared with no surveillance. Incidence was even lower with two surveillance visits (table 1). The median age of intermediate-risk patients was 66 years (IQR 58 to 74), 44% were women and 60% attended surveillance (table 2). The median time to first surveillance was 3.0 years (IQR 1.4 to 3.5). During a median follow-up of 9.1 years (IQR 6.6 to 12.4), 246 CRCs were diagnosed, giving an incidence rate of 221 per 100 000 person-years (table 2). Number of surveillance visits, age, adenoma dysplasia, proximal polyps, colonoscopy completeness, and year and length of baseline visit were independently associated with CRC incidence. Adenoma histology was not included in the final multivariable model because it was only associated with incidence when the unknown category was included. Adjusting for the other factors, a single surveillance visit was associated with a 41% reduction in CRC incidence compared with no surveillance. A similar reduction in incidence was seen with two surveillance visits (table 2).

Figure 1 Study profile. *Not mutually exclusive. †Reasons for loss to follow-up: 19 patients had all examinations after emigrating; 22 patients were untraceable through national data sources and had no surveillance; and 3 patients had an unknown date of birth.

The median age of high-risk patients was 67 years (IQR 61 to 74), 29% were women and 66% attended surveillance (table 3). The median time to first surveillance was 1.5 years (IQR 1.0 to 3.0).
During a median follow-up of 8.4 years (IQR 5.7 to 11.2), 84 CRCs were diagnosed, giving an incidence rate of 366 per 100 000 person-years (table 3). Number of surveillance visits, adenoma dysplasia and colonoscopy completeness were independently associated with CRC incidence. Adjusting for these factors, a single surveillance visit was associated with a halving of CRC incidence compared with no surveillance. Attendance at subsequent visits was associated with further incidence reductions (table 3). There were no significant interactions between number of surveillance visits and age or sex (all p values ≥0.05). Each risk group was then divided into lower-risk and higher-risk subgroups using the identified baseline risk factors.

Low-risk group
The higher-risk subgroup of low-risk patients comprised those with incomplete colonoscopies, colonoscopies of unknown completeness, tubulovillous or villous adenomas, or proximal polyps at baseline (n=9166, 64%); lower-risk patients had none of these (n=5235, 36%) (table 4). Higher-risk patients were older, more likely to have had a baseline visit before 2005, and had more surveillance than lower-risk patients (online supplementary table 2). Surveillance was associated with lower CRC incidence in the higher-risk but not the lower-risk subgroup; however, estimates in the lower-risk subgroup were imprecise owing to few CRCs (table 4).

Intermediate-risk group
The higher-risk subgroup of intermediate-risk patients comprised those with incomplete colonoscopies, colonoscopies of unknown completeness, adenomas with high-grade dysplasia or proximal polyps at baseline (n=7114, 60%); lower-risk patients had none of these (n=4738, 40%) (table 4). Higher-risk patients were older, more likely to have had a baseline visit before 2005, and had more surveillance than lower-risk patients (online supplementary table 2).
Surveillance was associated with reduced CRC incidence in the higher-risk but not the lower-risk subgroup, although estimates in the lower-risk subgroup were imprecise (table 4). Without surveillance, cumulative CRC incidence at 10 years was 2.6% (95% CI 2.1 to 3.3) in the whole intermediate-risk group, differing significantly between the lower-risk (1.3%, 95% CI 0.8 to 2.1) and higher-risk (3.7%, 95% CI 2.9 to 4.7) subgroups (table 5; figure 3). After first surveillance, cumulative CRC incidence still differed between the risk subgroups (table 5; figure 3), although incidence in the higher-risk subgroup was now similar to that in the general population (SIR 1.00, 95% CI 0.73 to 1.33) and was lower in the lower-risk subgroup (SIR 0.59, 95% CI 0.34 to 0.96) (table 5). When we additionally included poor bowel preparation and adenomas ≥20 mm in the classification of higher risk, the proportion of patients classed as higher risk increased to 74%. Incidence rates and effects of surveillance on incidence remained similar (data not shown).

High-risk group
The higher-risk subgroup of high-risk patients included those with incomplete colonoscopies, colonoscopies of unknown completeness or adenomas with high-grade dysplasia at baseline (n=902, 33%); lower-risk patients had none of these (n=1817, 67%) (table 4). The subgroups were similar regarding sex, age, year of baseline visit and number of surveillance visits (online supplementary table 2). Surveillance was associated with reduced CRC incidence in the higher-risk but not the lower-risk subgroup, although estimates in the lower-risk subgroup were imprecise (table 4).

Table 4 footnotes: †The higher-risk subgroup included patients with an incomplete colonoscopy or colonoscopy of unknown completeness, a tubulovillous or villous adenoma, or proximal polyps at baseline; the lower-risk subgroup included patients with none of these factors. ‡The higher-risk subgroup included patients with an incomplete colonoscopy or colonoscopy of unknown completeness, an adenoma with high-grade dysplasia or proximal polyps at baseline; the lower-risk subgroup included patients with none of these factors. §The higher-risk subgroup included patients with an incomplete colonoscopy or colonoscopy of unknown completeness or an adenoma with high-grade dysplasia at baseline; the lower-risk subgroup included patients with none of these factors.

Table 5 Cumulative colorectal cancer (CRC) incidence and age-standardised and sex-standardised incidence ratios. Footnotes: P values calculated with the log-rank test to compare incidence in the lower-risk and higher-risk subgroups of each risk group. *One minus the Kaplan-Meier estimator of the survival function was used to estimate the cumulative incidence of colorectal cancer. †Expected numbers of colorectal cancers were calculated by multiplying the sex and 5-year age-group-specific observed person-years by the corresponding incidence rates in the general population of England in 2007. ‡ § ¶ Subgroup definitions for the low-risk, intermediate-risk and high-risk groups, respectively, as given in the table 4 footnotes above. SIR, standardised incidence ratio.
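The SIR calculation described in the table 5 footnotes (observed over expected cases, with an exact Poisson 95% CI) can be sketched as follows. This assumes the standard chi-squared formulation of the exact Poisson interval and uses SciPy's `chi2.ppf`; the observed and expected counts below are hypothetical, not values from the study:

```python
from scipy.stats import chi2

def sir_exact_ci(observed: int, expected: float, alpha: float = 0.05):
    """SIR = observed/expected, with an exact Poisson CI for the observed
    count (chi-squared formulation), divided through by the expected count."""
    lower = 0.5 * chi2.ppf(alpha / 2, 2 * observed) if observed > 0 else 0.0
    upper = 0.5 * chi2.ppf(1 - alpha / 2, 2 * observed + 2)
    return observed / expected, lower / expected, upper / expected

# hypothetical example: 10 cancers observed where 5 were expected
sir, lo, hi = sir_exact_ci(observed=10, expected=5.0)
```

The expected count itself would come from indirect standardisation: summing sex- and age-group-specific person-years multiplied by the corresponding general-population rates.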
Without surveillance, cumulative CRC incidence at 10 years was 5.7% (95% CI 4.0 to 8.3) in the whole high-risk group, differing significantly between the lower-risk (3.8%, 95% CI 2.1 to 6.8) and higher-risk subgroups (9.9%, 95% CI 6.2 to 15.7) (table 5; figure 4). Compared with the general population, CRC incidence was higher in the whole high-risk group (SIR 1.91, 95% CI 1.39 to 2.56) and higher-risk subgroup (SIR 3.55, 95% CI 2.34 to 5.17), but not significantly different in the lower-risk subgroup (SIR 1.10, 95% CI 0.64 to 1.76) (table 5). After first surveillance, cumulative CRC incidence at 10 years was 5.6% (95% CI 3.1 to 9.8) in the whole high-risk group, 4.4% (95% CI 1.8 to 10.6) in the lower-risk subgroup and 7.8% (95% CI 3.8 to 15.4) in the higher-risk subgroup (table 5; figure 4). Compared with the general population, CRC incidence was not significantly different in the whole high-risk group (SIR 1.34, 95% CI 0.86 to 1.99) or lower-risk subgroup (SIR 1.01, 95% CI 0.52 to 1.76), but remained higher in the higher-risk subgroup (SIR 1.97, 95% CI 1.02 to 3.44). After a second surveillance visit, CRC incidence was no longer higher in the higher-risk subgroup than in the general population (table 5). In the main analysis, we excluded CRCs assumed to have arisen from incompletely resected baseline lesions: those found in the same/adjacent colonic segment to a baseline adenoma ≥15 mm that was seen at least twice within 5 years preceding cancer diagnosis (intermediate-risk group, n=38; high-risk group, n=12). In sensitivity analyses, we additionally excluded CRCs that satisfied only some of these criteria, but that we deemed likely to have arisen from incompletely resected lesions (low-risk group, n=6; intermediate-risk group, n=29; high-risk group, n=7). This negligibly affected the results (data not shown).
Excluding patients without a complete baseline colonoscopy (low-risk group, n=2682; intermediate-risk group, n=2885; high-risk group, n=365) had little impact (online supplementary tables 3-7), although high-grade dysplasia was no longer significant in intermediate-risk patients (online supplementary table 4).

DISCUSSION
This is the largest study examining long-term CRC incidence following adenoma removal and the effects of surveillance on CRC incidence. We obtained data from 17 hospitals on 28 972 patients who underwent baseline colonoscopy and polypectomy and were followed for a median of 9.3 years. Stratifying the cohort into low-risk (50%), intermediate-risk (41%) and high-risk (9%) groups according to the 2002 UK surveillance guidelines, 3 we identified heterogeneity in CRC incidence and in the effects of surveillance on CRC incidence among each risk group. Our analyses showed that patients in the low-risk group were indeed at low risk of CRC. Even among the two-thirds of the group at higher CRC risk than the rest owing to an incomplete colonoscopy, colonoscopy of unknown completeness, tubulovillous or villous adenoma, or proximal polyps at baseline, CRC incidence was similar to that in the general population, without any surveillance. Among the remaining one-third, CRC incidence without surveillance was lower than in the general population. In a resource-constrained setting, it is important to consider the opportunity costs of performing surveillance in a particular patient group; we think that patients remaining at increased CRC risk following a high-quality baseline colonoscopy, as compared with the general population, should be prioritised. Given this, and considering the risks of colonoscopy, we think that patients classified as low-risk do not require surveillance and they could instead be managed by screening.

Figure 2 Cumulative colorectal cancer incidence after baseline in the low-risk group. Cumulative colorectal cancer incidence with no surveillance (censoring at first surveillance) for the whole low-risk group (A) and the lower-risk and higher-risk subgroups (B). Cumulative colorectal cancer incidence after a single surveillance visit (censoring at second surveillance) for the whole low-risk group (C) and the lower-risk and higher-risk subgroups (D). 95% CIs are shown for each curve. The higher-risk subgroup included patients with an incomplete colonoscopy or colonoscopy of unknown completeness, a tubulovillous or villous adenoma, or proximal polyps at baseline; the lower-risk subgroup included patients with none of these factors.

Our results corroborated our previous finding that surveillance is warranted for most but probably not all intermediate-risk patients. 13 Among intermediate-risk patients with incomplete colonoscopies, colonoscopies of unknown completeness, adenomas with high-grade dysplasia or proximal polyps at baseline (60% of the risk group), CRC incidence without surveillance was higher than in the general population and a single surveillance visit conferred substantial protection against CRC. Among patients without these characteristics, CRC incidence was lower than in the general population after baseline colonoscopy, indicating that surveillance is not necessary. Incidence of CRC was high in the high-risk group; without surveillance, rates were double those in the general population. Cumulative incidence at 10 years was 6% both without surveillance and with one surveillance visit, falling to 3% with two visits. High-risk patients might therefore benefit from attending two surveillance visits, although studies are needed to define the optimum interval between first and second visits. When we stratified the high-risk group into subgroups, estimates were too imprecise to draw clear conclusions.
Our findings suggest that surveillance is warranted for high-risk patients (n=2719) and the higher-risk subgroup of intermediate-risk patients (n=7114) (34% of our cohort), but not for the lower-risk subgroup of intermediate-risk patients (n=4738) or low-risk patients (n=14 401) (66% of our cohort), who could instead be managed by screening. In the Bowel Cancer Screening Programme (BCSP) in England, surveillance is recommended for intermediate-risk and high-risk patients only. 23 In this setting, numbers of surveillance colonoscopies could be reduced by a third if the lower-risk subgroup of intermediate-risk patients forewent surveillance. Patients returning to the BCSP would be screened biennially with the faecal immunochemical test (FIT), which replaced the faecal occult blood test in June 2019. 24 Although FIT was introduced with a relatively high positivity threshold of 120 µg haemoglobin per gram of faeces, the threshold may be lowered over time if endoscopy capacity increases, which would improve FIT sensitivity for adenomas and early CRCs. 25 It is important that patients returning to screening are reminded to see their general practitioner if lower gastrointestinal symptoms occur. Several baseline characteristics were repeatedly predictive of CRC, including older age, incomplete colonoscopies, adenomas with high-grade dysplasia and proximal polyps. This aligns with our previous study of intermediate-risk patients, 13 16 and other studies describing these as risk factors for incident advanced neoplasia. 26 27 These findings reinforce the importance of a thorough baseline colonoscopy with complete resection of detected lesions. Incomplete resection might be implicated in the elevated risk among patients with high-grade dysplasia or proximal polyps, as advanced and proximal polyps have been associated with greater risks of incomplete resection. 28 Some proximal polyps in our study may have been serrated lesions, which are often proximally located, flat, and difficult to see and remove. 29 Unfortunately, serrated lesions were not consistently classified in the era of our data.

Figure 3 Cumulative colorectal cancer incidence after baseline in the intermediate-risk group. Cumulative colorectal cancer incidence with no surveillance (censoring at first surveillance) for the whole intermediate-risk group (A) and the lower-risk and higher-risk subgroups (B). Cumulative colorectal cancer incidence after a single surveillance visit (censoring at second surveillance) for the whole intermediate-risk group (C) and the lower-risk and higher-risk subgroups (D). 95% CIs are shown for each curve. The higher-risk subgroup included patients with an incomplete colonoscopy or colonoscopy of unknown completeness, an adenoma with high-grade dysplasia or proximal polyps at baseline; the lower-risk subgroup included patients with none of these factors.

Half of low-risk patients, 60% of intermediate-risk patients and 66% of high-risk patients attended surveillance. Non-attenders were older than attenders, more likely to have had an incomplete baseline colonoscopy or poor bowel preparation, and in the intermediate-risk group were more likely to be female, consistent with the literature. 13 16 30 That 30%-40% of intermediate-risk and high-risk patients had no surveillance suggests some underuse of surveillance colonoscopy. Unfortunately, we had no information on why patients did not attend surveillance, but reasons may have included patient comorbidities, objections to colonoscopy or process errors. Among low-risk patients, first surveillance occurred after a median of 3.2 years, earlier than recommended. 3 This has been observed elsewhere. 31 32 Possible explanations include slow adoption of guidelines and concern about postcolonoscopy CRCs.
There was greater adherence to recommended surveillance intervals for intermediate-risk and high-risk patients. Besides the present study and our previous study of intermediate-risk patients, 13 16 only one other study has compared CRC risk following adenoma removal with that in the general population in the absence and presence of surveillance. 17 This study included 5779 patients who underwent baseline colonoscopy from 1990 to 1999. Among patients with an AA (adenoma ≥10 mm, with high-grade dysplasia or villous histology) at baseline, CRC risk without surveillance was four times that in the general population and surveillance substantially reduced this risk. By contrast, among patients with non-AAs, CRC risk without surveillance was similar to that in the general population and surveillance did not affect CRC risk. The study was limited, however, by the small sample size and age of the data. Strengths of the present study include the large, high-quality data set, comprising detailed data from 17 hospitals on baseline and surveillance colonoscopies. The hospitals included general and teaching hospitals located throughout the UK. Few data were missing and follow-up was complete for nearly all patients. Most baseline colonoscopies were performed after the introduction of colonoscopy quality initiatives in 2001. 11 Nevertheless, 20% of patients did not have a complete baseline colonoscopy. Exclusion of these patients had little impact, however, indicating that the findings are applicable in the modern era of high-quality colonoscopy. Limitations include the observational design, meaning we cannot assume that surveillance caused the reductions in CRC incidence. However, we adjusted for potential confounders and still saw a large effect of surveillance on incidence. Use of routine data means that misclassification may have occurred; however, this would likely be non-differential, producing underestimations of effects.

Figure 4 Cumulative colorectal cancer incidence after baseline in the high-risk group. Cumulative colorectal cancer incidence with no surveillance (censoring at first surveillance) for the whole high-risk group (A) and the lower-risk and higher-risk subgroups (B). Cumulative colorectal cancer incidence after a single surveillance visit (censoring at second surveillance) for the whole high-risk group (C) and the lower-risk and higher-risk subgroups (D). 95% CIs are shown for each curve. The higher-risk subgroup included patients with an incomplete colonoscopy or colonoscopy of unknown completeness or an adenoma with high-grade dysplasia at baseline; the lower-risk subgroup included patients with none of these factors.

More patients attending surveillance were missing baseline data than non-attenders, particularly for colonoscopy completeness and bowel preparation quality, which is a potential source of bias. Some follow-up colonoscopies may have been for symptoms rather than surveillance. Additionally, as patients were stratified into risk groups by baseline adenoma size and number, we could not interpret the individual effects of these characteristics. Finally, although the follow-up period was long, the full benefit of surveillance on CRC incidence may not manifest until after 10 years.

CONCLUSION
A large proportion of patients with adenomas do not remain at increased CRC risk following a complete baseline colonoscopy and polypectomy, compared with the general population. In our cohort, this was true for low-risk patients, and intermediate-risk patients without high-grade dysplasia or proximal polyps. Surveillance is probably not necessary for these patients and routine screening would suffice, although patients should be reminded to contact their general practitioner if lower gastrointestinal symptoms occur.
Conversely, surveillance is warranted for high-risk patients, and intermediate-risk patients without a complete baseline colonoscopy or with high-grade dysplasia or proximal polyps, whose risk was higher than in the general population before surveillance. Incorporating these findings into guidelines could reduce surveillance colonoscopies by a third, while ensuring that patients at increased risk are protected.
Defined Essential 8™ Medium and Vitronectin Efficiently Support Scalable Xeno-Free Expansion of Human Induced Pluripotent Stem Cells in Stirred Microcarrier Culture Systems

Human induced pluripotent stem (hiPS) cell culture using Essential 8™ xeno-free medium and the defined xeno-free matrix vitronectin was successfully implemented under adherent conditions. This matrix was able to support hiPS cell expansion either in coated plates or on polystyrene-coated microcarriers, while maintaining hiPS cell functionality and pluripotency. Importantly, scale-up of the microcarrier-based system was accomplished using a 50 mL spinner flask, under dynamic conditions. A three-level factorial design experiment was performed to identify the conditions, in terms of a) initial cell density and b) agitation speed, that maximize cell yield in spinner flask cultures. A maximum cell yield of 3.5 is achieved by inoculating 55,000 cells/cm² of microcarrier surface area and using 44 rpm, which generates a cell density of 1.4×10⁶ cells/mL after 10 days of culture. After dynamic culture, hiPS cells maintained their typical morphology upon re-plating and exhibited pluripotency-associated marker expression as well as tri-lineage differentiation capability, which was verified by inducing their spontaneous differentiation through embryoid body formation; subsequent downstream differentiation to specific lineages such as neural and cardiac fates was also successfully accomplished. In conclusion, a scalable, robust and cost-effective xeno-free culture system was successfully developed and implemented for the scale-up production of hiPS cells.

Introduction
Human induced pluripotent stem (hiPS) cells are capable of self-renewing indefinitely and of differentiating into all the cell types of the human body [1]. Because of these characteristics, analogous to human embryonic stem (hES) cells, hiPS cells are promising sources for several biomedical applications [2].
However, to fully realize the potential of hiPS cells for cellular therapy, drug screening and disease modelling, the development of standardized, robust and scalable processes to produce large numbers of these cells, while maintaining their critical biological functionality and safety, is of prime importance. Typically, hiPS cells are expanded using adherent static cell culture systems that cannot provide a sufficient number of cells for downstream applications, presenting low cell yields and inherent variability of the culture process and of the final product. Translating cell culture from static plates to suspension systems is needed to achieve scalability of the process. Stirred bioreactors are an appropriate culture system for moderate to large-scale cell production given their robustly controlled operation and well-established scale-up protocols [3,4,5]. Several methodologies for human pluripotent stem (hPS) cell culture in these systems have been implemented in the last few years, including cultivation of cells encapsulated, typically inside hydrogels [6,7], adherent onto microcarriers [8,9], or as 3D aggregates in suspension [10,11]. Microcarrier technology confers distinct advantages as it provides homogeneous culture conditions to the cells, large surface areas for cell adhesion and growth [12,13] and, importantly, a large surface/volume ratio. Also, microcarrier culture in fully controlled bioreactors allows monitoring and control of environmental parameters, and can be scaled up relatively easily. Nevertheless, despite recent progress on scalable microcarrier hPS cell suspension culture, most of the methods are based on the use of non-defined extracellular matrix (ECM) extracts, such as Matrigel™ or Geltrex™, as the surface for cell adherence on microcarriers [14,15,16], and on commercially available serum-free media, such as mTeSR™ and StemPro® [14,17,18], that contain animal-derived products.
Envisioning the bioprocess translation to Good Manufacturing Practice (GMP) standards, great efforts have been made towards the translation of scalable culture systems to chemically defined and xeno-free conditions. A completely defined medium, Essential 8™, consisting of only eight components, was recently developed [19,20,21], and several other studies have reported defined surfaces that support long-term hiPS cell culture, like vitronectin, laminin, fibronectin and various synthetic peptides [15,18,22,23]. Nevertheless, the use of Essential 8™ medium to support expansion of hiPS cells on microcarriers coated with defined substrates has never been reported. To design a bioprocess to produce a biomedical product, it is of foremost importance to set up robust and reproducible production practices. Accordingly, robust predictive strategies to evaluate process parameters that will impact culture output need to be developed. Rational design of experiments can provide a model to predict the culture output as a function of multiple culture parameters [24,25]. Therefore, in this work, we implemented a stirred culture system based on the use of vitronectin-coated microcarriers and Essential 8™ medium for the scalable expansion of hiPS cells, using 50 mL spinner flasks. Importantly, a three-level factorial design model was used to identify the optimal conditions that maximize cell yield. Finally, given the potential applications of hiPS cells in differentiation and lineage specification studies, we investigated the differentiation capacity of hiPS cells cultured on microcarriers, under xeno-free chemically defined conditions, to cardiomyocytes and to neural progenitor cells.

Cells and microcarriers
The Gibco™ human induced pluripotent stem cell line used in this work was derived from CD34+ cells of healthy donors (Life Technologies).
The hiPS cells were routinely cultured on Geltrex (1:60, Life Technologies)-coated 6-well plates in Essential 8 (E8) medium (Life Technologies), in a humidified 5% CO₂ incubator at 37°C. The medium was refreshed daily and cells were routinely passaged at a split ratio of 1:4 using the EDTA method [26], when colonies reached 80% confluence. hiPS cells were adapted to Vitronectin (rhVTN-N, Life Technologies)-coated plates for two passages prior to inoculation onto microcarriers. Cells were routinely evaluated for karyotype abnormalities by conventional cytogenetics using the services of Genomed SA (Lisbon, Portugal). Polystyrene microcarriers (Solohill Engineering, Inc.), with 360 cm²/g of superficial area, were used to support cell growth. Microcarriers were mixed for 1 h with 70% ethanol (Sigma) at room temperature and washed 3 times with sterile phosphate-buffered saline (PBS). Coating of microcarriers was performed for 2 h at room temperature with Vitronectin in sterile PBS, using 0.5 μg/cm². Geltrex-coated microcarriers were used as a control (coating: 0.25 mL/cm² of Geltrex solution (1:60)). Prior to cell inoculation, microcarriers were incubated (at 37°C) for 30 min in culture medium.

Inoculation of hiPS cells on microcarriers
Inoculation as single cells. The protocol for the inoculation of hiPS cells on microcarriers as single cells was described recently [27].
Inoculation as clumps. Cells were incubated for 5 min with Cell Dissociation Buffer (Life Technologies) at room temperature, using the EDTA method [26]. Cells were then collected and inoculated on microcarriers with or without ROCK inhibitor (10 μM, Y27632, from Stemgent) for the first 24 h of culture.

hiPS cell expansion on microcarriers
Static culture. The protocol for the screening of microcarriers for hiPS cell expansion under static culture in low-attachment 24-well plates (Corning Inc.) was recently published [27].
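As a quick worked check of the coating arithmetic implied by the protocol above (1 g of microcarriers at 360 cm²/g, vitronectin at 0.5 μg/cm²; the variable names are mine):

```python
# worked check of the coating amounts; values taken from the protocol above
microcarrier_mass_g = 1.0          # microcarriers used per spinner flask
surface_per_gram_cm2 = 360.0       # superficial area of the polystyrene microcarriers
vitronectin_per_cm2_ug = 0.5       # coating density

total_surface_cm2 = microcarrier_mass_g * surface_per_gram_cm2      # 360 cm² per spinner
vitronectin_needed_ug = total_surface_cm2 * vitronectin_per_cm2_ug  # µg per spinner
```

So one spinner's worth of microcarriers requires 180 µg of vitronectin for coating.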
We used 3 cm2 of microcarrier superficial area per well and cells were inoculated at an initial density of 5x10^4 cells/cm2. Geltrex- and vitronectin-coated polystyrene microcarriers (GM and VtnM) were tested. 80% of the E8 medium was changed daily for 5 days. The cell yield in total cell number was calculated as the ratio Xday5/Xi, where Xday5 is the number of viable cells attached to the microcarriers at day 5, and Xi is the number of cells inoculated at day 0.

Spinner flask culture. The expansion of hiPS cells in a microcarrier stirred suspension culture was performed in presiliconized (Sigmacote, Sigma) spinner flasks (StemSpan™, StemCell Technologies), with a working volume of 50 mL. The impeller was composed of a horizontal magnetic stir bar with a vertical paddle. Agitation was provided by a magnetic stirrer platform (Variomag, Biosystem) placed inside a 5% CO2 incubator at 37°C. Cells were seeded as small clumps, at an initial density of 3, 5 or 7x10^4 cells/cm2, using a total of 1 g of coated polystyrene microcarriers (360 cm2/spinner) in 25 mL of E8 medium under static conditions, to promote cell-microcarrier contact. Medium was supplemented with ROCK inhibitor (10 μM) for the first 24 h after inoculation. After 24 h, the medium was replaced and adjusted to 50 mL of fresh E8 medium. Subsequently, intermittent stirring (3 min at 40 rpm every 2 h) was performed overnight to promote cell-cell and cell-microcarrier contact. Thereafter, the culture was continuously stirred at 30, 50 or 70 rpm and feeding was performed on a daily basis by replacing 80% of the volume with fresh pre-warmed medium. For spinner flask cultures, cell attachment efficiency to the microcarriers was calculated as the percentage Xday1/Xi, where Xday1 is the number of viable cells attached to the microcarriers at day 1 of culture and Xi is the number of cells inoculated at day 0.
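The two culture metrics defined above (attachment efficiency and cell yield) are simple ratios; a minimal sketch, using hypothetical cell counts purely for illustration:

```python
def attachment_efficiency(x_day1: float, x_inoculated: float) -> float:
    """Percentage of inoculated cells attached to the microcarriers at day 1 (X_day1/X_i)."""
    return 100.0 * x_day1 / x_inoculated

def cell_yield(x_final: float, x_inoculated: float) -> float:
    """Fold increase over the inoculum, e.g. X_day5/X_i (static) or X_max/X_i (spinner)."""
    return x_final / x_inoculated

# Hypothetical counts, chosen to match the magnitudes reported in the text.
print(attachment_efficiency(5.85e6, 1.8e7))  # -> 32.5 (%)
print(cell_yield(7.2e7, 1.8e7))              # -> 4.0 (fold)
```

The same two functions apply unchanged to static plate and spinner flask cultures; only the final time point differs.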
The maximum cell yield was calculated as the ratio Xmax/Xi, where Xmax is the maximum cell number attached to the microcarriers achieved during the culture.

Sampling. Duplicate 700 μL samples of the culture were collected from the spinner flasks every day. In order to detach the cells, microcarriers were incubated with 0.05% trypsin (Life Technologies) at 37°C for 10 min, in a heater mixer set at 750 rpm. After dissociation by pipetting, the mixture was filtered through a 100 μm mesh (cell strainer, from BD Biosciences) to remove the microcarriers. Cells were then centrifuged at 210 g for 5 min and viable and dead cells were determined by counting in a hemocytometer under an optical microscope, using the trypan blue dye exclusion test.

Cell harvesting from the microcarriers and re-plating. The cell harvesting and re-plating protocol at the end of the culture (static or dynamic) was performed as previously described [27]. Cells were resuspended in E8 medium supplemented with ROCK inhibitor (10 μM) and then inoculated at a density of 5x10^4 cells/cm2 of well area on GP.

Experimental design

The effects of two independent variables, initial cell density and agitation rate, on the cell yield were determined using a face-centered composite design (FC-CD) approach with STATISTICA software (StatSoft, Tulsa, OK). Each independent variable was evaluated at three coded levels (low (−1), central (0) and high (+1)), as portrayed in S1 Table, and combined in an FC-CD set up described as:

N = 2^(k−p) + 2k + C0

where N is the number of experiments, k is the number of variables (k = 2), p is the fractionalization number (in a full design, p = 0) and C0 is the number of central points, which provides an estimation of the experimental error. Accordingly, a total of 12 [2^(2−0) + (2×2) + 4] independent experiments were performed.
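The 12-run design matrix implied by N = 2^(k−p) + 2k + C0 can be enumerated directly: the 2^k factorial corners, the 2k face-centered axial points, and the C0 replicated center points. A sketch of that enumeration, decoding the coded levels to the process values used in this study:

```python
from itertools import product

# Coded levels mapped to the process values from the paper's design space:
# X1 = agitation rate (30/50/70 rpm); X2 = initial cell density (3/5/7 x10^4 cells/cm2).
agitation = {-1: 30, 0: 50, 1: 70}
density   = {-1: 3e4, 0: 5e4, 1: 7e4}

factorial = list(product([-1, 1], repeat=2))   # 2^k corner points of the full factorial
axial     = [(-1, 0), (1, 0), (0, -1), (0, 1)] # 2k axial points on the cube faces
center    = [(0, 0)] * 4                       # C0 = 4 replicated center points

design = factorial + axial + center
assert len(design) == 2**2 + 2 * 2 + 4         # N = 12 runs

for x1, x2 in design:
    print(f"agitation = {agitation[x1]} rpm, density = {density[x2]:.0f} cells/cm2")
```

The replicated center points are what allow a pure-error estimate of the experimental variability, as noted in the text.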
The data were fitted to a full quadratic model (including linear and non-linear effects, plus the two-way interaction) as follows:

Y = β0 + β1 X1 + β2 X2 + β11 X1² + β22 X2² + β12 X1 X2

where Y is the measured response or dependent variable (cell yield), X1 and X2 are the two independent variables, β0 is the intercept, β1 and β2 are the linear main effects, β11 and β22 are the quadratic coefficients, and β12 is the coefficient for the second-order interaction. The error of the prediction was estimated from the error obtained with the genuine replicates performed on the central points of the matrix, done in at least 4 independent experiments [28]. The coefficient of regression (R²) was also determined by the software.

) and anti-mouse IgG-PE (1:10) (StemGent). Immunocytochemistry against markers of the three germ layers was performed using antibodies against alpha smooth muscle actin (α-SMA; mouse: 1:1000; Dako), neuron-specific class III β-tubulin (TUJ1; mouse: 1:20 000; Covance) and SOX17 (mouse: 1:1000; R&D Systems), for the mesoderm, ectoderm and endoderm, respectively. The cardiomyocyte marker was the Troponin T cardiac isoform antibody (13-11) (cTNT; mouse: 1:500; Thermo Scientific). Neural progenitor cell markers were NESTIN (mouse: 1:1000; R&D Systems) and paired box gene 6 (PAX6; rabbit: 1:1000; Covance).

Characterization of hiPS cells and derivatives

Flow cytometry. Cells were kept at 4°C in 2% (v/v) paraformaldehyde (PFA, from Sigma). For surface staining, approximately 5x10^5 cells were resuspended in 100 μL of FACS buffer (3% (v/v) Fetal Bovine Serum (FBS, from Invitrogen) in PBS) with the diluted primary antibody, and incubated for 15 min at room temperature in the dark. Cells were washed twice with PBS and resuspended in 300 μL of PBS to be analysed by flow cytometry (FACSCalibur, Becton Dickinson). For negative controls, cells were incubated with the appropriate isotypes. For intracellular staining, the protocol used is described by Miranda et al. [29].
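Fitting the full quadratic model above amounts to an ordinary least-squares solve on a 12×6 design matrix. A minimal sketch (not the STATISTICA workflow the authors used): synthetic responses are generated from the coefficients reported later in the paper, so the fit simply recovers them:

```python
import numpy as np

# Coefficients (b0, b1, b11, b2, b22, b12) reported in the paper's fitted model.
beta_true = np.array([3.298, -1.610, -2.408, 0.677, -1.448, 0.700])

def model_matrix(x1, x2):
    # Columns: 1, X1, X1^2, X2, X2^2, X1*X2 (full quadratic with interaction)
    return np.column_stack([np.ones_like(x1), x1, x1**2, x2, x2**2, x1 * x2])

# The 12 coded runs of the face-centered composite design.
runs = [(-1, -1), (-1, 1), (1, -1), (1, 1),
        (-1, 0), (1, 0), (0, -1), (0, 1)] + [(0, 0)] * 4
x1, x2 = np.array(runs, dtype=float).T
A = model_matrix(x1, x2)
y = A @ beta_true                     # noise-free synthetic response, for illustration

beta_hat, *_ = np.linalg.lstsq(A, y, rcond=None)
print(np.round(beta_hat, 3))          # recovers beta_true
```

With real (noisy) yields in `y`, the same call would return the regression estimates, and R² follows from the residuals.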
For the negative controls, cells were incubated only with 3% (v/v) Normal Goat Serum (NGS, from Sigma) in PBS. The CellQuest software (Becton Dickinson) was used for all acquisitions/analyses.

Immunocytochemistry. For surface antigens, after removing the culture medium, cells were incubated for 30 min at 37°C in the presence of the primary antibodies diluted in medium. Cells were washed 3 times with PBS and incubated in the dark for 30 min, at 37°C, with the secondary antibodies. For intracellular staining, the protocol used is described by Miranda et al. [29]. Cells were examined using a fluorescence microscope (Leica DMI3000B/Nikon Digital Camera Dxm1200F).

RT-PCR. RNA was isolated using the PureLink® RNA Mini Kit (Life Technologies). cDNA was synthesized using 1 μg of total RNA and the High Capacity cDNA Reverse Transcriptase kit (Life Technologies). Real-time polymerase chain reaction (RT-PCR) was performed on a StepOne™ system using the TaqMan™ Gene Expression Assay (Applied Biosystems) (S2 Table).

In vitro hiPS cell differentiation potential. hiPS cell differentiation potential was evaluated in vitro via embryoid body (EB) formation and spontaneous differentiation. Cells from a spinner flask culture were harvested and inoculated as single cells on GP. At 80% confluence, cells were passaged with EDTA treatment to a 6-well low-attachment plate in EB medium (DMEM with 20% (v/v) FBS, 1% (v/v) MEM non-essential amino acids, 1 mM sodium pyruvate, 0.1 mM β-mercaptoethanol and 1% (v/v) Penicillin/Streptomycin, all from Invitrogen), supplemented with ROCK inhibitor for the first 24 h. Medium was changed every 2 days for 4 weeks thereafter. EBs were then dissociated with 0.025% trypsin and cells were inoculated in a 24-well plate coated with 4 μg/mL laminin (StemGent) and 10 μg/mL poly-D-lysine (Sigma). Medium was changed every 2 days for 1 week. Finally, cells were stained with anti-SOX17, TUJ1 and α-SMA antibodies.

Directed hiPS cell cardiomyocyte differentiation.
The Gibco® hPS cell Cardiomyocyte Differentiation Kit (Life Technologies) was used to induce cardiomyocyte differentiation of hiPS cells adherent to confluent microcarriers from a spinner flask culture (without cell harvesting). Confluent microcarriers were placed in a 24-well low-attachment plate (3 cm2 of microcarrier area/well) and the protocol was performed following the manufacturer's instructions. Also, cells harvested from the microcarriers at the end of the spinner flask culture were inoculated as single cells on GP and, at 80% confluence, cardiomyocyte differentiation was initiated. In both cases, at the end of the differentiation protocol, cells were stained for the cTNT and OCT4 markers.

Neural induction by dual-SMAD inhibition. Confluent microcarriers from a spinner flask culture were placed in a 24-well low-attachment plate (3 cm2 of microcarrier area/well). N2B27 medium supplemented with 10 μM SB431542 (SB, Sigma) and 100 nM LDN193189 (LDN, StemGent) was added and replaced daily for 12 days. N2B27 medium is composed of a 1:1 mixture of Dulbecco's modified Eagle's medium (DMEM)/F12 and Neurobasal medium supplemented with 1x N2 and 1x B27 (Life Technologies). At the end of the differentiation protocol, cells attached to microcarriers were stained for the NESTIN, PAX6 and OCT4 markers. Also, differentiated cells attached to the microcarriers were dissociated by pipetting and plated on GP in the same N2B27-based medium. This medium was replaced daily for 12 days. At day 12, cells were stained for the NESTIN, PAX6 and OCT4 markers.

Statistical analysis

All data presented show n = 3 replicates, unless stated otherwise. Error bars represent the standard error of the mean (SEM).

Xeno-free surfaces for adherent hiPS cell culture in E8 medium

In order to select the best xeno-free substrate for expansion of hiPS cells in combination with E8 culture medium, different substrates were tested and compared.
Since the ability of vitronectin (Vtn) surfaces to support long-term hiPS cell expansion in xeno-free E8 medium has been described in the literature [19], the model hiPS cell line was seeded onto Vtn and Geltrex surfaces and cultured in E8 medium. As shown in Fig 1A, no significant differences were found in cell morphology between hiPS cells cultured on these two surfaces. In both cases, hiPS cells demonstrated the typical morphology of tightly packed colonies with defined borders and a high nucleus-to-cytoplasm ratio. Comparison studies were performed using different substrate surfaces for the adhesion, expansion and serial passaging of hiPS cells. Besides Geltrex and Vtn, CELLStart™ (Life Technologies) and Synthemax® (Corning Inc.) surfaces were also evaluated. As can be seen in Fig 1B, cell growth kinetics were similar when hiPS cells were cultured on all surfaces, except on the CELLStart™ surface. As shown in Fig 1C, cells cultured on the Geltrex surface presented the highest fold increase (5.2±0.8), which could be expected due to its complex and rich protein composition [30]. However, this non-defined ECM extract surface may be a source of xenogeneic risk. The same figure shows a similar cell fold expansion when culturing the cells on Vtn (4.3±0.4) and Synthemax® (4.5±0.5) surfaces. Although Synthemax® is a chemically synthesized substrate [31], Vtn is a more cost-effective adhesion-promoting reagent [22], as evaluated in the literature [30]. hiPS cells were then cultured for four consecutive passages on the Vtn surface and immunofluorescence microscopy was performed to evaluate the expression of the intracellular and extracellular pluripotency markers OCT4, SOX2 and NANOG (with the corresponding DAPI stains of the nuclei), and TRA-1-60, TRA-1-81 and SSEA4, respectively (Fig 1D and 1E). The fluorescence images indicated that hiPS cells can be maintained in their undifferentiated state on the Vtn surface.
Moreover, flow cytometry analysis revealed consistently high expression levels of the pluripotency markers TRA-1-60 (94±1%), SSEA4 (97±1%), OCT4 (95±1%), NANOG (89±3%) and SOX2 (92±1%) (Fig 1F and 1G). Finally, it was also verified that hiPS cells consistently displayed a normal karyotype (46,XX) after four passages on Vtn-coated tissue culture plates, in E8 medium (data not shown). In conclusion, the combination of Vtn surfaces and E8 medium supports robust and long-term culture of undifferentiated hiPS cells under adherent static conditions.

hiPS cell expansion on vitronectin-coated microcarriers: inoculation strategy

After demonstrating that Vtn could support the long-term culture of hiPS cells in xeno-free E8 medium, this matrix was used to coat polystyrene microcarriers and thus to implement a scalable culture. Vtn-coated microcarriers were inoculated with 5x10^4 cells/cm2 in low-attachment 24-well plates. Three inoculation strategies were evaluated (Fig 2A). In strategy (a), cells were incubated for 1 h with ROCK inhibitor, dissociated with Accutase and inoculated as single cells in the presence of ROCK inhibitor for 24 h; in strategy (b), cells were dissociated with EDTA treatment and inoculated as cell clumps; and in strategy (c), cells were dissociated with EDTA treatment and inoculated as cell clumps, in the presence of ROCK inhibitor for 24 h. When hiPS cells were treated with EDTA for 3 minutes, this resulted in the formation of small clumps that were able to survive. However, due to the considerable size of these clumps (8-12 cells), cells tended to grow as aggregates rather than attaching onto microcarriers. Consequently, the time of incubation with EDTA was increased to 5 min in order to obtain smaller clumps (3-6 cells) and allow cell adhesion to the microcarriers.
Since a longer incubation time with EDTA had to be used in the microcarrier-based culture, the addition of ROCK inhibitor for the first 24 h of culture was considered beneficial for the survival of the smaller clumps. Fig 2B shows the cell yield (see Materials and Methods) for the three different inoculation strategies, after 5 days of static culture of hiPS cells on Vtn-coated polystyrene microcarriers (VtnM) and plates (VtnP). Cell expansion on Geltrex-coated polystyrene microcarriers (GM) and plates (GP) was evaluated as a control. The highest cell yields were obtained using inoculation strategy (c), and the results with VtnM (5.8±1.0 for strategy (a), 1.0±0.2 for strategy (b) and 6.6±1.0 for strategy (c)) were similar to the ones obtained with GM. Moreover, for strategy (c), the culture on microcarriers showed yields similar to the static culture on plates (6.4±0.7). The inclusion of the small molecule Y27632 (ROCK inhibitor) has already been reported [19,32] to improve initial cell survival and to support high clonal efficiency. Therefore, cell-microcarrier adhesion efficiency was improved and higher cell yields were obtained in this case. Importantly, as presented in Fig 2C, after 5 days of culture, hiPS cells cultured on VtnM and GM stained positively for the NANOG and OCT4 intracellular pluripotency markers. Considering these results, strategy (c) was chosen for scaling up the microcarrier-based culture.

Optimization of hiPS cell expansion in a scalable stirred spinner flask culture by a face-centered composite design

The next step was to implement a dynamic microcarrier-based system in 50 mL spinner flasks, envisaging the scalability of the expansion process. The protocol followed for the expansion experiments in the spinner flask is presented in Fig 3A and is composed of four steps.
In step (a), cells were inoculated in the spinner flask, using the EDTA/ROCKi method, on VtnM (20 g/L, corresponding to 360 cm2 of superficial area) and using half of the working volume (25 mL E8 medium supplemented with ROCK inhibitor). In step (b), attachment of the cells to the VtnM is initiated. This step corresponds to the initial 2 days of culture. The spinner flask was operated under static conditions during the first 24 h, a critical period for the success of the culture that depends on cell attachment efficiency. In our experiments, attachment efficiencies of hiPS cells to VtnM were very similar, 32.5±0.9%, for inoculations at different initial cell densities. Step (b) also involved the second day of culture, when ROCK inhibitor was removed from the medium, the working volume was established at 50 mL and the spinner flask was operated at an intermittent agitation (3 min at 50 rpm every 2 h) to maximize cell-cell and cell-microcarrier interactions. Step (c) corresponds to the period of cell expansion, which started with the initiation of the exponential growth phase at day 3 and ceased when the maximum cell yield was attained, between days 7 and 11, depending on culture conditions. During this step, the spinner flask was operated under continuous agitation, 80% of the medium was changed every day and samples were taken each day for cell counting. The final step (d) involved the characterization of the hiPS cells cultured on VtnM in the stirred spinner flask, through analysis of their pluripotency state and their differentiation potential by flow cytometry and immunocytochemistry. The most critical parameters for cell expansion in the spinner flask were identified to be the initial cell density and the agitation speed, which had already been evaluated for the culture of hES cells as aggregates in stirred suspension bioreactors [25].
Therefore, a face-centered composite design (FC-CD) was implemented to evaluate the influence of these two parameters on hiPS cell expansion, in terms of the maximum cell yield of the culture (S1 Table). In the factorial design, the selected values for the initial cell density were 3, 5 and 7x10^4 cells/cm2 and, for the agitation speed, 30, 50 and 70 rpm, chosen taking into consideration the operating conditions already reported for dynamic microcarrier cultures with hESC and mESC [12,33,34,35]. The equation that describes the quadratic model obtained for cell yield is:

Yield = 3.298 − 1.610 X1 − 2.408 X1² + 0.677 X2 − 1.448 X2² + 0.700 X1X2

where X1 is the agitation rate and X2 is the initial cell density. The second-order polynomial generated for cell yield in a spinner flask culture does not fully describe the experimental data (R² = 0.479), which could be anticipated due to the inherent variability of this cell culture system. Nevertheless, based on the regression model, a response surface plot and a 2D heat plot were established, as shown in Fig 3B and 3C. The optimal conditions predicted by the model to reach a maximum cell yield of 3.5 were 55,000 cells/cm2 for the initial cell density and 44 rpm for the agitation rate.

hiPS cell expansion under optimized dynamic culture conditions

In order to verify the validity of the proposed model, several runs of hiPS cell expansion in the spinner flask were performed under the optimum conditions given by the FC-CD: an initial cell density of 55,000 cells/cm2 and an agitation rate of 44 rpm. The maximum cell numbers achieved in these experimental culture runs were compared with the model-predicted value of the maximum cell number for a culture under the optimal conditions (Fig 4A). Predicted and experimental results for maximum cell yield were similar, 3.5 and 4.0±0.4, respectively.
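In coded units, the optimum of the fitted quadratic surface is its stationary point, found by setting both partial derivatives to zero, a 2×2 linear solve. A sketch of that calculation (the small offset from the 55,000 cells/cm2 reported by the software is attributable to rounding of the published coefficients):

```python
import numpy as np

# Fitted model in coded variables (X1 = agitation, X2 = initial density):
# Yield = 3.298 - 1.610*x1 - 2.408*x1^2 + 0.677*x2 - 1.448*x2^2 + 0.700*x1*x2
def yield_model(x1, x2):
    return 3.298 - 1.610*x1 - 2.408*x1**2 + 0.677*x2 - 1.448*x2**2 + 0.700*x1*x2

# Setting the gradient to zero gives the linear system:
# [-4.816  0.700] [x1]   [ 1.610]
# [ 0.700 -2.896] [x2] = [-0.677]
A = np.array([[-4.816, 0.700], [0.700, -2.896]])
b = np.array([1.610, -0.677])
x1_opt, x2_opt = np.linalg.solve(A, b)

# Decode: X1 center 50 rpm, step 20 rpm; X2 center 5e4 cells/cm2, step 2e4.
rpm = 50 + 20 * x1_opt
density = 5e4 + 2e4 * x2_opt
print(f"optimum ~ {rpm:.0f} rpm, {density:.0f} cells/cm2, "
      f"predicted yield ~ {yield_model(x1_opt, x2_opt):.2f}")
```

Because both quadratic coefficients are negative (downward concavity), this stationary point is a maximum inside the explored design region.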
Interestingly, the values of the maximum cell yield obtained in the experimental culture runs were all above the value predicted by the model. Therefore, despite the inherent variability of hiPS cell expansion on microcarriers under dynamic conditions in a spinner flask, the model obtained by the FC-CD proved to be a useful approximation for this culture system. hiPS cells expanded under these optimal conditions were then evaluated for their pluripotency and undifferentiated state. It was confirmed that hiPS cells growing attached to VtnM in a spinner flask retained their pluripotency characteristics, since these cells presented NANOG and OCT4 expression, as detected by immunocytochemical analysis. Also, the cells maintained their capacity to form typical undifferentiated colonies when harvested from microcarriers and re-plated on GP, since they stained positively for the pluripotency markers NANOG, OCT4, SOX2, SSEA4 and TRA-1-60 (Fig 4C). Pluripotency maintenance was demonstrated by flow cytometry analysis. As shown in Fig 4D, more than 93% of the cells were positive for the pluripotency markers NANOG, SOX2 and OCT4 after 12 days of culture. mRNA was isolated from hiPS cells at day 0 and at the end of the spinner flask culture (day 12) in order to assess the expression of the hiPS cell markers OCT4 and NANOG by RT-PCR (Fig 4E). It was confirmed that cells cultured in spinner flasks maintained gene expression of the pluripotency markers. Furthermore, hiPS cells collected at the end of a spinner flask culture retained a normal karyotype (46,XX). hiPS cell pluripotency was also evaluated in terms of the cells' ability to differentiate into progeny of the three germ layers, which was assessed in vitro by induction of spontaneous differentiation of cells harvested from VtnM at the end of a spinner flask culture.
Embryoid body (EB) formation was achieved using hiPS cells that were harvested, re-plated and cultured on GP, and finally inoculated in low-attachment plates to form cell aggregates in suspension. Cells were able to aggregate as EBs and were cultured for 5 weeks. Quantitative reverse transcriptase polymerase chain reaction showed upregulation of genes associated with the formation of the three germ layers: endoderm (SOX17, AFP), ectoderm (TUBB3) and mesoderm (T and SMA); and downregulation of pluripotency markers (OCT4 and NANOG) (Fig 4F). Furthermore, expression of the SOX17, TUJ1 and α-SMA markers, representing the three germ lineages (endoderm, ectoderm and mesoderm, respectively), was observed by immunostaining of the differentiated cells re-plated on laminin/poly-D-lysine-coated well plates (Fig 4G).

Differentiation potential of hiPS cells cultured in dynamic conditions

Two different experimental settings were used to evaluate the differentiation potential of hiPS cells after expansion under dynamic conditions in the spinner flask: directed cardiomyocyte (CM) differentiation and commitment to neural progenitor (NP) cells. Both directed differentiation protocols were performed by a) plating microcarriers with hiPS cells in low-attachment plates, and b) plating hiPS cells harvested from microcarriers on GP. Directed differentiation of hiPS cells into CM was performed after the spinner flask culture. Spontaneously contracting regions on GP (S1 Video) and beating cell-VtnM aggregates (S2 Video) on low-attachment plates were observed at day 10 of differentiation. CM induction was confirmed at day 16 by immunocytochemistry analysis (Fig 5A and 5B), since cTNT+ cells were obtained both on GP and on VtnM. Also, upon re-plating of CM obtained on VtnM onto GP after the differentiation protocol, it was possible to observe the presence of contracting colonies (S3 Video) that stained positively for the cardiac marker cTNT.
NP cells were also obtained from spinner flask-expanded hiPS cells by dual inhibition of SMAD signaling [36]. After a 12-day differentiation protocol on GP and on VtnM with hiPS cells cultured in the spinner flask, immunocytochemical analysis showed strong expression of the early neural differentiation markers PAX6 and NESTIN, whereas expression of the pluripotency marker OCT4 was not observed (Fig 5C and 5D). Also, NP cells obtained on VtnM were re-plated on GP after the differentiation protocol and, after 4 days, neuroepithelial cells arranged in neural rosette structures, which expressed the PAX6 and NESTIN markers, were observed. Fig 5E presents the relative gene expression levels obtained by quantitative RT-PCR after cardiac and neural differentiation. An increase in the transcription levels of representative cardiac marker genes (early markers ISL1 and GATA4 and late markers TNNT2 and NKX2.5) or of representative neural progenitor marker genes (PAX6 and SOX1) was demonstrated, while there was a decrease in pluripotency marker gene expression (OCT4 and NANOG).

Discussion

Biomedical applications of stem cell-derived products depend on the availability of large numbers of cells, or their differentiated progeny. However, developing a GMP-compliant, scalable and efficient process for stem cell production, namely hiPS cell expansion followed by directed differentiation into progenitor cells and then fully mature cells, is still a challenge. In vitro expansion of hPS cells relies on the cell-ECM interaction that occurs via cell surface adhesion molecules and enables cells to attach and proliferate. Geltrex (or Matrigel™) is an undefined mixture of ECM proteins extracted from Engelbreth-Holm-Swarm (EHS) mouse tumors; consequently, its quality and composition vary from lot to lot.
Also, there are safety concerns over the use of this substrate in clinical applications due to the risk of contamination with animal-derived pathogens and immunogens [37,38]. Multiple matrix proteins, such as laminin [39,40], vitronectin [22,41] and fibronectin [42,43], support hPS cell growth in the undifferentiated state. The vitronectin protein can be found in both serum and the ECM, mediates cell adhesion and spreading, and is relatively easy to overexpress and purify [22], thus being a very promising xeno-free substrate to support the cost-effective scale-up of hiPS cell proliferation. Although TeSR™ medium has been used for hPS cell expansion in the complete absence of animal proteins, the inclusion of human serum albumin (HSA) and human-sourced matrix proteins makes the production process expensive and impractical for scale-up. Recently, the basic components of hES cell and iPS cell culture were re-optimized in the absence of BSA and β-mercaptoethanol (BME, a toxic component in the absence of BSA) and a completely defined medium, E8 (eight components, including DMEM/F12), was developed [19]. E8 medium reduces process cost and simplifies quality control, and its simple composition makes it a promising medium for studying specific signaling pathways in self-renewal and differentiation. Therefore, the use of E8 medium may facilitate the transfer of hiPS cell research to the clinic. In the present work, it was demonstrated that a Vtn-coated surface combined with E8 medium can support hiPS cell expansion and serial passaging in tissue culture plates, while cells maintain their undifferentiated and pluripotent states. In parallel with the development of cell culture medium, a consistent xeno-free dissociation method is also important. Conventionally, hPS cells are passaged as aggregates using an enzymatic treatment, but this process is always accompanied by excessive cell death.
It was reported recently that, after a specific EDTA treatment, hiPS cells could be partially dissociated to generate small aggregates (3 to 5 cells) that survived [26] and attached to a GP within minutes, spread in 2 h and presented a colony-like morphology in 24 h. These results were confirmed in our work, since we demonstrated that our hiPS cell model retained stable proliferation and pluripotency markers after growth on the Vtn surface combined with E8 medium, for four consecutive passages, using EDTA treatment. We have also demonstrated that, by coating polystyrene microcarriers with the Vtn substrate, the model hiPS cell line was effectively expanded in a suspension culture under static conditions. The optimization of the inoculation protocol indicated that higher cell yields are obtained when cells are inoculated as small clumps (3-6 cells) using EDTA treatment (5 min), in the presence of ROCK inhibitor, for the first 24 h of culture. The cell yields achieved in the microcarrier suspension culture were comparable to the ones obtained when cells were cultured in tissue culture plates. The microcarrier-based culture is an attractive system due to its scale-up potential for hPS cell expansion, when combined with stirred bioreactors. Thus, spinner flasks were used in this work for scaling up hiPS cell culture on VtnM in E8 medium. As mentioned before, process optimization is required for the development of successful cell-based therapies [24]. Rational design of experiments is an interesting technique to develop a predictive mathematical model, to identify the critical conditions of a bioprocess system and to understand their impact on the culture output. Using a multifactorial approach and response surface methodology, we were able to evaluate the influence of the agitation rate and the initial cell density on the cell yield of the culture.
Results from the two-level factorial design suggested that the agitation rate has a negative effect (−1.610) on cell yield, while the initial cell density has a positive effect (+0.677). They also suggested that the yield response is more affected by the agitation rate parameter, which is related to the effect of shear forces on the cells. The second-order terms of both parameters were negative, indicating a downward concavity of the model, which suggests the existence of a maximum response within the range of the analyzed values, i.e. there is an optimum value of each culture parameter. In the case of the agitation rate, lower rates would result in less efficient oxygen and nutrient transfer, poorer mixing and larger microcarrier aggregates; however, higher rates result in higher shear stress values. The calculated maximum shear stress values for this culture system, following Nagata correlations [44], varied between 0.08 and 0.26 Pa when using agitation rates between 30 and 70 rpm. These values are well below the predicted value of 0.65 Pa at which significant effects on human embryonic kidney cell morphology occur [45] and the value of 0.78 Pa at which extensive murine embryonic stem cell damage and no proliferation were noted [46]. However, by analyzing the growth curves corresponding to the 70 rpm cultures (S1 Fig), it was interesting to notice that cell expansion only occurred when inoculating at the highest density (7x10^4 cells/cm2), and even in this case the culture presented a longer lag phase. In relation to cell inoculation densities, the initial cell number will affect the maximum yield of the culture due to the lack of critical autocrine signals in low-density cultures or the build-up of toxic metabolites in high-density cultures [25].
An optimal response was achieved, indicating that the optimal conditions were 44 rpm for the agitation rate and 55,000 cells/cm2 for the initial cell density, which corresponded to an expected yield of 3.5 that was validated and confirmed experimentally. This means that with the culture system implemented, a maximum hiPS cell density of 1.4x10^6 cells/mL could be obtained. Importantly, cells cultured in this system maintained their pluripotency state and presented a normal karyotype. hiPS cells harvested from microcarriers at the end of the spinner flask culture were able to differentiate into derivatives of the three embryonic germ layers through EB formation and spontaneous differentiation. Envisioning the incorporation of both expansion and differentiation steps in an integrated bioprocess, the use of microcarrier technology to directly generate hiPS cell-derived NP cells and CM, without harvesting the cells after the expansion period, was evaluated under serum-free and xeno-free conditions. We were able to efficiently differentiate hiPS cells attached to VtnM, previously cultured in a stirred spinner flask, into a) NESTIN+ and PAX6+ cells after a 12-day neural commitment protocol [36] and b) clusters of beating cells after 10 days of CM differentiation (using ready-to-use media, Life Technologies). The generation of NP cells using this technology has already been reported [47]; however, that system involved the use of Matrigel™ to coat the microcarriers. Also, CM were recently generated [48] using a differentiation protocol based on modulators of Wnt signaling; nonetheless, it involved the use of murine laminin to coat the microcarriers.

Conclusion

In conclusion, a scalable and efficient bioprocess for hiPS cell expansion using xeno-free and defined conditions was developed and optimized, in order to generate the larger numbers of hiPS cells needed for clinical, drug discovery and industrial applications.
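The reported maximum cell density follows directly from the optimum design point; a back-of-the-envelope check, assuming all 360 cm2 of microcarrier area per 50 mL flask is seeded at the optimal density and the model-predicted yield is attained:

```python
# Arithmetic check of the ~1.4x10^6 cells/mL figure reported in the text.
initial_density = 5.5e4   # cells/cm2 (optimal inoculation density)
carrier_area    = 360.0   # cm2 of microcarrier surface per spinner flask
working_volume  = 50.0    # mL working volume
predicted_yield = 3.5     # fold increase predicted by the FC-CD model

cells_inoculated = initial_density * carrier_area       # 1.98e7 cells
max_cells        = cells_inoculated * predicted_yield   # ~6.9e7 cells
print(max_cells / working_volume)                       # ~1.4e6 cells/mL
```

Note that the experimental yield (4.0±0.4) slightly exceeded the prediction, so realized densities can be somewhat higher.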
Importantly, this work paves the way towards the development of strategies for the scalable, integrated expansion and directed differentiation of hiPS cells to specific lineages (for example, neural and cardiac) under defined xeno-free conditions.
New Skull Material of Taeniolabis taoensis (Multituberculata, Taeniolabididae) from the Early Paleocene (Danian) of the Denver Basin, Colorado Taeniolabis taoensis is an iconic multituberculate mammal of early Paleocene (Puercan 3) age from the Western Interior of North America. Here we report the discovery of significant new skull material (one nearly complete cranium, two partial crania, one nearly complete dentary) of T. taoensis in phosphatic concretions from the Corral Bluffs study area, Denver Formation (Danian portion), Denver Basin, Colorado. The new skull material provides the first record of the species from the Denver Basin, where the lowest in situ specimen occurs in river channel deposits ~730,000 years after the Cretaceous-Paleogene boundary, roughly coincident with the first appearance of legumes in the basin. The new material, in combination with several previously described and undescribed specimens from the Nacimiento Formation of the San Juan Basin, New Mexico, is the subject of detailed anatomical study, aided by micro-computed tomography. Our analyses reveal many previously unknown aspects of skull anatomy. Several regions (e.g., anterior portions of premaxilla, orbit, cranial roof, occiput) preserved in the Corral Bluffs specimens allow considerable revision of previous reconstructions of the external cranial morphology of T. taoensis. Similarly, anatomical details of the ascending process of the dentary are altered in light of the new material. Although details of internal cranial anatomy (e.g., nasal and endocranial cavities) are difficult to discern in the available specimens, we provide, based on UCMP 98083 and DMNH EPV.95284, the best evidence to date for inner ear structure in a taeniolabidoid multituberculate. The cochlear canal of T. taoensis is elongate and gently curved and the vestibule is enlarged, although to a lesser degree than in Lambdopsalis.
Introduction Multituberculates were arguably the most successful evolutionary radiation of early mammals. Their temporal range extended from at least the Middle Jurassic to the late Eocene, an interval of approximately 130 million years (Kielan-Jaworowska et al. 2004; Butler and Hooker 2005; Schumaker and Kihm 2006; Dawson and Constenius 2018). They were most speciose and abundant in the Late Cretaceous (Judithian through Lancian North American Land Mammal Ages [NALMA]) and Paleocene of North America (Krause 1986; Cifelli et al. 2004; Kielan-Jaworowska et al. 2004; Weil and Krause 2008), represented by approximately 150 species and literally tens of thousands of specimens. It is therefore surprising that, despite this diversity and abundance, the skull anatomy of Late Cretaceous/Paleogene North American multituberculates is still very poorly known. By contrast, complete or nearly complete skulls are known for a plethora of Late Cretaceous and Paleocene Asian multituberculate genera that have been described and analyzed in great detail (see reviews and reconstructions in Kielan-Jaworowska et al. 2004: figs. 8.38-8.40 and Wible et al. 2019: figs. 21-23). Skull material of Late Cretaceous European multituberculates is limited to brief descriptions of partial crania of the genera Kogaionon (Rădulescu and Samson 1996; reconstructed in Kielan-Jaworowska et al. 2004: fig. 8.42A), Barbatodon (Smith and Codrea 2015), and Litovoi (Csiki-Sava et al. 2018) and incomplete dentaries of Barbatodon (Csiki et al. 2005; Smith and Codrea 2015; Solomon et al. 2016). Although multituberculates have been reported from the southern supercontinent Gondwana (see review in Krause et al. 2017), no cranial material is known. A small fragment of a dentary was assigned to the possible multituberculate Ferugliotherium by Kielan-Jaworowska and Bonaparte (1996), but the affinities of Ferugliotherium remain enigmatic (summarized in Rougier et al. 2021).
Taeniolabis taoensis is an iconic Paleocene mammal from the Western Interior of North America, illustrated as a representative of the Multituberculata in many textbooks and other secondary literature sources for over a century (e.g., Scott 1913; Simpson 1937a; Romer 1966; Kermack and Kermack 1984; Kurtén 1971; Savage and Long 1986; Rose 2006; Prothero 2017). It is notable in several respects: 1. T. taoensis (Cope 1882c) was among the first Cenozoic multituberculates to be described. The species was initially placed in the genus Polymastodon but was later deemed to be synonymous with the earlier-named Taeniolabis sulcatus Cope, 1882b, which is now considered a nomen dubium (see complicated history of synonymies in Simmons 1986, 1987). Neoplagiaulax eocaenus Lemoine, 1880 from Europe and Ptilodus mediaevus Cope, 1881 from North America were named in the year or two preceding the description of T. taoensis. Catopsalis foliatus Cope, 1882a was described earlier in the same year as Polymastodon taoensis Cope, 1882c. Catopsalis pollux (Cope, 1882c), now also a junior synonym of T. taoensis, and Ptilodus trovessartianus (Cope, 1882c), now placed in Parectypodus (see Krause 1977; Tsentas 1981), were described in the same paper as Polymastodon taoensis. Cope (1882b) initially estimated that it was the size of a sheep but later (Cope 1882c, 1884a) concluded that it equaled or exceeded the size of Macropus giganteus (= M. major), the Eastern Grey Kangaroo, males of which can reach up to 90 kg (Poole 1982). Romer (1966) drew a size comparison with woodchucks (Marmota monax), which have body masses ranging from 3.1 to 5.1 kg (Kwiecinski 1998), whereas Sloan (1979) estimated a body mass of 40 kg. More recent workers (Kielan-Jaworowska et al. 2004; Weil and Krause 2008; Scott et al. 2016) have suggested a body mass more comparable to that of, or even larger than, the North American beaver Castor canadensis (normally ~12-20 kg but up to 39 kg; Jenkins and Busher 1979). Evans et al.
(2012) estimated the maximum body mass of T. taoensis to be 30 kg. Wilson et al. (2012) employed a formula based on m1 area that yielded a body mass estimate of > 100 kg for the species. However, Wilson and colleagues concluded that the scaling of m1 area to body mass is different in multituberculates than in the therian reference group and therefore that cranial length would be a more accurate predictor for large multituberculates. When they employed this metric, their estimate was 22.7 kg for T. taoensis. Williamson et al. (2016) also reported a marked discrepancy in size when employing m1 area (103.0-107.6 kg) versus skull length (21.8 kg). Scott et al. (2016) developed a variety of body mass estimates for taeniolabidids based on tooth row length and m1 area. As in Wilson et al. (2012) and Williamson et al. (2016), their estimates based on m1 area were very high for T. taoensis, ranging from 33.6 to 107.5 kg but, when based on tooth row length, were much smaller, 7.7 to 19.4 kg. Finally, based on a nearly complete cranium described herein (DMNH EPV.95284) and using regressions based on cranial size (geometric mean of maximum cranial length and width), Lyson et al. (2019a) obtained a mean estimate of 34.0 kg for the body mass of T. taoensis (95% confidence interval = 20.8-55.6 kg). 6. T. taoensis has long been considered to be the "most specialized of known multituberculates" (e.g., Granger and Simpson 1929: 611; Matthew 1937) and to possess the most derived dentition of any known multituberculate. The species has among the fewest teeth of any multituberculate (dental formula of 2.0.1.2/1.0.1.2, shared with at least Lambdopsalis and Sphenopsalis, compared to as high as 3.1.5.2/1.0.4.2 for the most plesiomorphic multituberculates; see Krause et al. 2020d: Table 4). Furthermore, of the large number of multituberculate species sampled by Wilson et al. (2012), T. taoensis has the highest orientation patch count (OPC), a measure of dental complexity (Evans et al. 2007).
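All of the body-mass estimates above derive from log-log allometric regressions, back-transformed to kilograms. A minimal sketch of that calculation; the slopes and intercepts below are hypothetical placeholders chosen only to illustrate the predictor-dependent discrepancy, not the published coefficients of Wilson et al. (2012), Williamson et al. (2016), Scott et al. (2016), or Lyson et al. (2019a):

```python
import math

# Back-transform of a log-log allometric regression:
#   log10(mass_kg) = intercept + slope * log10(predictor)
# All coefficients and predictor values here are hypothetical.

def estimate_mass_kg(predictor, slope, intercept):
    """Predicted body mass (kg) from a single predictor (e.g., skull length in mm)."""
    return 10 ** (intercept + slope * math.log10(predictor))

# Because slope and reference group differ between predictors, the same
# specimen can yield very different estimates, mirroring the m1-area vs.
# skull-length discrepancy discussed above (values hypothetical).
skull_based = estimate_mass_kg(160.0, slope=3.0, intercept=-5.25)
tooth_based = estimate_mass_kg(30.0, slope=1.5, intercept=-0.2)
print(round(skull_based, 1), round(tooth_based, 1))
```

Errors in the exponent (slope) are amplified multiplicatively after back-transformation, which is why regressions calibrated on a therian reference group can overshoot badly when extrapolated to large multituberculates.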
The OPC value for T. taoensis even exceeds that of extant herbivorous rodents. Geological Setting, Age Control, and Paleobotanical Context The Corral Bluffs study area is located immediately east (~20 km) of the Colorado Front Range in the southwestern corner of the Denver Basin and within the eastern city limits of Colorado Springs, Colorado, USA (Fig. 1). The basin contains Cambrian through Eocene rocks including synorogenic strata that were deposited during both the Ancestral Rockies and Laramide orogenies. During the Late Cretaceous and Early Paleogene, the basin was a depocenter and accumulated synorogenic sediments that occur in two unconformity-bound packages informally named the Denver 1 (D1) and Denver 2 (D2) sequences (Raynolds 1997, 2002). The D1 sequence, which comprises the exposure in the Corral Bluffs study area, is the earlier of the two synorogenic sequences and consists of the Denver Formation (Maastrichtian and Danian) and the lower Dawson Formation. The D1 sequence overlies the Laramie Formation and is overlain by the upper Dawson Formation that forms the D2 sequence (Raynolds and Johnson 2003; Dechesne et al. 2011). The D1 sequence accumulated in the Denver Basin between ca. 68 and 64 Ma during the early Laramide Orogeny and is composed of reworked Mesozoic and Paleozoic sediments as well as Precambrian basement rock shed during uplift of the Colorado Front Range (Raynolds 1997, 2002; Raynolds and Johnson 2003). In the Corral Bluffs study area, the D1 sequence is dominated by sandstone and mudstone beds interpreted to represent riverine and floodplain depositional environments including channel, crevasse splay, and ponded water settings (Lyson et al. 2019a). Megafloral and fragmentary vertebrate fossils were first discovered in Corral Bluffs in the early 1900s (Lee 1913).
Studies on these collections, as well as on subsequent fossil discoveries throughout the 1900s and early 2000s, largely focused on the biostratigraphy of Corral Bluffs. Early analyses noted fragmentary dinosaur fossils in arroyos at the base of the bluffs and archaic early Paleocene mammal fossils eroding out of well-exposed, cliff-forming strata higher in the bluffs. These early fossil collections were used to loosely determine the placement of the Cretaceous/Paleogene (K/Pg) boundary in the section (Knowlton 1930; Gazin 1941; Brown 1943). Subsequent biostratigraphic work identified Puercan 2 (Pu2) interval zone mammals collected from the cliff-forming portion of the bluffs (Middleton 1983; Eberle 2003). In addition, several studies documented the diversity of mammals (Middleton 1983; Eberle 2003), turtles (Middleton 1983; Hutchison and Holroyd 2003; Lyson and Joyce 2011), and plants (Benson 1998; Johnson et al. 2003), with some of the turtle specimens being exceptionally complete (Lyson et al. 2021a, 2021b). [Fig. 1 caption, continued: (Table 1). c, Magnetostratigraphic, lithostratigraphic, chronostratigraphic, and biostratigraphic logs showing stratigraphic placement of localities at which specimens of T. taoensis (denoted by red stars) occur. Stratigraphy is tied to the Geomagnetic Polarity Time Scale (Gradstein et al. 2012; Ogg 2012) using remanent magnetization of the rocks in the Corral Bluffs study area, two CA-ID-TIMS U-Pb-dated volcanic ashes (denoted by yellow stars; these ash beds are at the same stratigraphic level and are interpreted as being the same laterally continuous bed that crops out approximately 750 m apart), and the palynologically defined K/Pg boundary (italicized dates) (Fuentes et al. 2019; Lyson et al. 2019a). The composite lithostratigraphic log is dominated by intercalated mudstone and sandstone, reflecting a variety of fluvial facies (Lyson et al. 2019a). Pollen interval zones are defined by diversification of Momipites spp. (family Juglandaceae) (Nichols and Fleming 2002) and placement of North American Land Mammal Ages (NALMA), as defined by Lofgren et al. (2004), is modified from Lyson et al. (2019a). Abbreviations: m, meters; Ma, million years ago; K/Pg, Cretaceous/Paleogene boundary. Modified from Lyson et al. (2019a). Scale bar in a = 20 km, b = 500 m.] More recently, Lyson et al. (2019a) documented a remarkable assemblage of recently discovered vertebrate and megafloral fossil localities. Most vertebrate fossils are preserved in non-coprolite phosphatic concretions, a presently unique mode of preservation in terrestrial environments, and are exceptionally complete (Lyson et al. 2019a, 2021a, 2021b). Importantly, Lyson et al. (2019a) documented, for the first time, the presence of Taeniolabis taoensis in the Denver Basin; its appearance defines the onset of the Pu3 interval zone. This, coupled with the chronostratigraphic framework of Fuentes et al. (2019), provided a temporal foundation to determine the timing of the Pu2/Pu3 transition in the Denver Basin. Six specimens provisionally referred to T. taoensis were available to Lyson et al. (2019a): two intact crania (DMNH EPV.95284, Fig. 2a, b; DMNH EPV.134082, Fig. 2c, d), one intact lower jaw (DMNH EPV.130973, Fig. 2g, h), and three fragmentary, unprepared specimens. With the then available material, Lyson et al. (2019a) conservatively placed the Pu2/3 boundary at the lowest in situ specimen (DMNH EPV.134082) of T. taoensis, approximately 113.4 m above the palynologically defined K/Pg boundary. They noted the possibility of an alternative placement of the Pu2/3 boundary ~6 m lower in the section (107.3 m above the K/Pg boundary) based on the presence of a nearly complete cranium (DMNH EPV.95284) found loose but intact on the surface of the outcrop (see Fig. 1c). The three fragmentary specimens of T. taoensis noted by Lyson et al.
(2019a) were found as float lower in the section (~90 m above the K/Pg boundary) on a broad, flat, upper surface of a large, laterally continuous sandstone unit, informally referred to as the "Bill Sandstone" (Middleton 1983; Eberle 2003). The major sandstone beds in the Corral Bluffs study area, like the Bill Sandstone, are cliff-forming and often form "platforms" on their upper surface. As a result, they are major accumulation surfaces for concreted fossil material eroding out of the slopes above these surfaces. In the case of the three fragmentary specimens noted by Lyson et al. (2019a), additional preparation revealed that two of these specimens were misidentified and that the third is referable to a taeniolabidid that is likely not Taeniolabis taoensis; the latter specimen will be dealt with in a subsequent manuscript. However, since Lyson et al. (2019a), one additional fragmentary specimen referable to T. taoensis (DMNH EPV.136300, Fig. 2e, f), comprising the anterior portion of a cranium, was discovered as float on top of the Bill Sandstone unit and at the base of a steep, ~15 m slope (Table 1, 89.8 m above the K/Pg boundary). Given that this specimen was found as float at the base of a steep slope on a major accumulation surface, we maintain the placement by Lyson et al. (2019a) of the Pu2/3 boundary at the lowest in situ T. taoensis specimen (DMNH EPV.134082), approximately 113.4 m above the palynologically defined K/Pg boundary (Fig. 1c). The four Corral Bluffs specimens currently referred to T. taoensis are from different localities and each is preserved in a phosphatic concretion. Two specimens were found in situ and two specimens were found as float (Table 1). One of the four specimens (lower jaw DMNH EPV.130973) was found in an amorphous mudstone facies that Lyson et al. (2019a) interpreted as representing a floodplain.
Three of the four specimens (all cranial) were preserved in concretions that incorporated coarse sand in the groundmass, demonstrating that they eroded out of channel deposits. The association of relatively intact T. taoensis and other vertebrate specimens (e.g., crania, not isolated teeth) with coarse-grained lithologies suggested to Lyson et al. (2019a) that the specimens so preserved represent species that probably inhabited riverine environments, based on the following logic: 1. Five facies were identified at the Corral Bluffs study area and sandstone-dominated facies were interpreted as representing riverine environments, whereas finer grained siltstone- and mudstone-dominated facies were interpreted as representing overbank floodplain environments (Lyson et al. 2019a). 2. Other vertebrates found at the Corral Bluffs study area were also predominantly found within a specific lithology. For instance, baenid turtles have a strong association with a sandstone lithology and chelydroid turtles have a strong association with finer-grained siltstone and mudstone lithologies. 3. Both anatomical (Hutchison 1984; Lyson et al. 2019b) and sedimentological (Holroyd and Hutchison 2002; Joyce 2009a, 2009b; Holroyd et al. 2014) data suggest that baenid turtles lived in aquatic riverine environments, while these same data suggest that chelydroid turtles lived in ponded water environments. Additionally, extant chelydroid taxa are predominantly found in ponded water floodplain environments (Ernst and Barbour 1989). Combined, these data suggest that the lithologic/taxon associations observed at the Corral Bluffs study area can be used to infer paleoenvironment, as has been done for other paleoecosystems (e.g., Lyson and Longrich 2011). As a result, consistent with the earlier analysis by Lyson et al. (2019a), we interpret the environment in which T. taoensis lived as dominated by river channels and corresponding floodplains draining the Laramide highlands to the west.
Precise stratigraphic placement for each specimen was obtained using the methods outlined in Lyson et al. (2019a; see also Table 1). This, coupled with the chronostratigraphic framework developed for the Corral Bluffs study area by Fuentes et al. (2019), allowed us to obtain precise ages for each T. taoensis specimen (Table 1). The chronostratigraphic framework is derived from the identification of three magnetochron boundaries (C30n/C29r, C29r/C29n, and C29n/C28r), the palynologically defined K/Pg boundary, and two chemical abrasion isotope dilution thermal ionization mass spectrometry (CA-ID-TIMS) 206Pb/238U dates on zircons separated from thin (ca. 2-3 cm thick) tonstein beds preserved within lignite beds (Fig. 1). The tonstein beds are interpreted to be the diagenetic remnants of volcanic ash falls into still water. These temporal benchmarks were used to calculate average sedimentation rates and interpolated ages for the section (Fuentes et al. 2019; Lyson et al. 2019a). Two age estimates for each T. taoensis specimen are provided based on two different age models (Table 1). These age models, the global Geomagnetic Polarity Time Scale (Gradstein et al. 2012) and estimates based on Denver Basin sediments (Clyde et al. 2016), have slight differences in the age estimates for the magnetochron boundaries and the K/Pg boundary. The interpolated ages for each T. taoensis specimen using both age models are provided in Table 1. Finally, considering recent biostratigraphic and magnetostratigraphic work in the section at Corral Bluffs, we note that all specimens of T. taoensis were recovered from sediments that include the Momipites wyomingensis - Kurtzipites trispissatus pollen zone (P2) and that all were found in magnetochron 29n (Nichols and Fleming 2002; Lyson et al. 2019a) (Fig. 1). The stratigraphic placement of in situ T. taoensis specimens in the Corral Bluffs study area facilitates placement of this mammalian species in megafloral context. Lyson et al.
(2019a) analyzed a dataset of 6,401 fossil leaves representing 233 morphospecies. These taxa were collected from 65 Late Cretaceous and early Paleocene localities covering ~1.2 Myr (30 m in the Late Cretaceous representing ~213 kyr, and 150 m in the early Paleocene representing ~917 kyr). They used this dataset to estimate plant raw richness, originations, extinctions, and standing richness, as well as mean annual temperature (MAT). At ~110 m above the palynologically defined K/Pg boundary, equivalent with the Pu2/Pu3 boundary as defined by the lowest in situ T. taoensis specimen (at 113.4 m), these data show the highest levels of raw richness and extinction in the Paleocene megafloral record in the bluffs (Lyson et al. 2019a: suppl.). While the number of fossils collected influences these data, they nonetheless indicate floral turnover at the Pu2/Pu3 boundary. Importantly, we see at this stratigraphic level, in the form of both fossil legume pods and leaflets, the first appearance of the angiosperm family Leguminosae (= Fabaceae) anywhere in the world outside of Central America (Centeno-Gonzalez et al. 2021; Lyson et al. 2019a). Legumes would have represented a new, high-protein food source for herbivores such as T. taoensis on the early Paleocene landscape. Finally, Lyson et al. (2019a) observed a ~3 °C increase in leaf-estimated MAT at the Pu2/Pu3 boundary. Taken together, these data indicate that floral and faunal turnover (likely migration), driven by temperature increase and the arrival of new plant food sources, occurred at the Pu2/Pu3 boundary. Specimens As noted above, there are many known but undescribed specimens of Taeniolabis taoensis from the San Juan Basin that have been collected for well over a century; these reside primarily in collections at AMNH, KU, NMMNH, and UCMP. Simmons (1987) provided a list of referred specimens known at the time and, based on online catalogs, it appears that additional specimens have been discovered since.
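The interpolated specimen ages discussed in the preceding section rest on average sedimentation rates between dated tie points. Using the round numbers quoted above (~150 m of early Paleocene section representing ~917 kyr), the idea can be sketched as follows; the single-segment model and the K/Pg boundary age used here are illustrative simplifications of the published multi-tie-point age models, not a reproduction of the Table 1 values:

```python
# Sketch of piecewise-linear age interpolation from an average sedimentation
# rate between two tie points. All numbers are illustrative round values.

KPG_AGE_MA = 66.0  # approximate K/Pg boundary age (Ma); illustrative value

def age_above_kpg(height_m, section_m=150.0, duration_kyr=917.0):
    """Interpolated age (Ma) for a level height_m above the K/Pg boundary,
    assuming one constant sedimentation rate over the whole segment."""
    rate_m_per_kyr = section_m / duration_kyr  # average sedimentation rate
    offset_kyr = height_m / rate_m_per_kyr     # elapsed time since the boundary
    return KPG_AGE_MA - offset_kyr / 1000.0

# The lowest in situ T. taoensis specimen sits ~113.4 m above the boundary:
print(round(age_above_kpg(113.4), 3))
```

The published framework subdivides the section at each magnetochron boundary and U-Pb-dated ash, so the real interpolation uses a different rate within each dated segment rather than one basin-wide average.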
However, because our study is focused on skull anatomy and because the vast majority of specimens consist of isolated teeth and fragmentary jaws, they are mostly not considered here. Our study of skull anatomy of T. taoensis, assisted by µCT imagery, is therefore largely limited to more detailed description of previously documented specimens --AMNH 16310, AMNH 16321, and UCMP 98083 (listed above in "Introduction"), all from the San Juan Basin --and original description of the new material from the Corral Bluffs study area of the Denver Basin, consisting of four specimens. Synoptic overviews of each of the seven primary specimens in the study sample, to indicate relative completeness and quality of preservation (prior to µCT scanning), are provided below; they are photographically illustrated in Figs. 2 and 3. DMNH EPV.95284 (DMNH locality 6266, Denver Basin) -Nearly complete cranium missing anterior-most portion of premaxilla (and all of the incisors except for the base of left I2) and small portion of right zygomatic arch; poor surface preservation (Fig. 2a, b); found within displaced concretion at base of a 2-3 m high ridge, suggesting it had not been transported a great distance. DMNH EPV.134082 (DMNH locality 6500, Denver Basin) -Partial cranium missing dorsal and anterior portions of the snout and anterior parts of the zygomatic arches; poor surface preservation (Fig. 2c,d); found within in situ concretion. DMNH EPV.136300 (DMNH locality 12111, Denver Basin) -Anterior portion of cranium found within displaced concretion except for tip of snout (anterior portions of premaxillae and both I2s), which was exposed but has better surface preservation than the more posterior parts that were inside the concretion; deformation slight, primarily involving ventral displacement of nasals (Fig. 2e, f); displaced concretion was found at the base of a ~15 m high ridge, and thus may have been transported a great distance. 
DMNH EPV.130973 (DMNH locality 7064, Denver Basin) -Moderately well-preserved left dentary missing only apical portions of coronoid process and mandibular condyle (Fig. 2g,h); found within in situ concretion. AMNH 16321 (locality listed on specimen label as "2 mi. above Ojo Alamo," which is in the Bisti/De-na-zin area, Williamson et al. 2012, San Juan Basin) -Moderately complete but highly fragmented cranium; µCT imaging of this specimen reveals that it is much less complete than in the current, restored specimen ( Fig. 3a, b), and also less than illustrated by Broom (1914: pls. XI, XII), who published the only photographs of the specimen. For instance, whereas the photographs in Broom (1914) indicate the presence of a left I2, and the reconstruction includes all four upper incisors (Fig. 3a, b), the µCT scan of the specimen demonstrates that none of the incisors are real. AMNH 16310 (locality listed on specimen label as "2 mi. above Ojo Alamo," which is in the Bisti/De-na-zin area, Williamson et al. 2012, San Juan Basin) -Very well preserved left dentary missing posterior half of ascending process, including most of coronoid process. The missing portions were (incorrectly) reconstructed in plaster and are illustrated in Fig. 3c, d. In full disclosure, a limitation of this study is that it was conducted during the 2020/2021 COVID-19 pandemic, which severely restricted previously planned direct access to original specimens at various museums. AMNH 16310 and AMNH 16321 had been borrowed and UCMP 98083 had been scanned (but not borrowed) prior to the pandemic. However, J. Meng kindly provided photographs of several AMNH cranial and mandibular specimens from the San Juan Basin, most of which had been referred to T. taoensis and discussed in the literature previously. 
These included AMNH 3036, holotype specimen consisting of "right maxilla fragment with M 1-2 and fragments of skull" (Simmons 1987: 798); AMNH 745 (combined with AMNH 16310 in reconstruction of left dentary by Granger and Simpson 1929: fig. 4A); AMNH 748 and AMNH 968, partial dentaries included in reconstruction of skull by Gregory (1910: fig. 8); and AMNH 27734, nearly complete right dentary. Three of the dentaries (AMNH 745, AMNH 748, AMNH 27734) are illustrated in Fig. 3e-j, with the condylar region of AMNH 27734 highlighted in Fig. 3k-n. We employed the photographs for a few supplementary or confirmatory observations of anatomical structures that were not visible, or poorly visible, on the seven primary specimens in our study sample to which we had direct access (or µCT scans in the case of UCMP 98083). It is also important to note that these photographs revealed that the reconstruction of the cranium of T. taoensis by Gregory (1910: fig. 8), reported as being based upon AMNH 3075, is actually based on the holotype specimen, AMNH 3036. AMNH 745, 748, and 968 are from Coal Creek Canyon, AMNH 27734 is from Barrel Spring Arroyo, whereas AMNH 3036 is simply listed as coming from "N.W. New Mexico." Finally, T. Williamson (pers. comm, 11/13/2019) alerted us to the existence of a cranial specimen in the NMMNH collections, NMMNH P-47645, and provided a photograph of it; it is highly concreted, deformed, and fragmented, and does not appear to yield any new anatomical information. Another limitation of our study is that, even with µCT technology, we were able to discern very few details of the nasal and endocranial cavities because of very low-density contrast between matrix and bone in the available sample. By contrast, the density difference was slightly better in UCMP 98083, which allowed segmentation of the inner ear; its anatomy is detailed below. 
Measurements Linear measurements of the skull and dentition were taken directly from the specimens wherever possible using a Mitutoyo CD-8″ CSX caliper. Other measurements were extracted from digital images using a combination of ImageJ, the 2-dimensional projections in ORS Dragonfly, and Adobe Illustrator, which permitted precise location of measurement endpoints and calculation of linear distances. [Fig. 3 caption: Skull material of Taeniolabis taoensis from the San Juan Basin, New Mexico. a, b, AMNH 16321, fragmentary, incomplete cranium restored with substantial amounts of plaster, in dorsal and ventral views. c, d, AMNH 16310, left dentary, the posterior portion of which was restored with plaster, in lateral and medial views. e, f, AMNH 745, left dentary, in lateral and medial views. g, h, AMNH 748, right dentary in lateral and medial views. i, j, AMNH 27734, right dentary in lateral and medial views. k, l, m, n, enlarged photographs of mandibular condyle of AMNH 27734 (see i, j) in lateral, medial, dorsal, and posterior views. Scale bar = 5 cm.] Angular measurements were extracted from digital photographs using the Measure Tool in Adobe Photoshop or by using semi-transparent protractors overlain on images in PowerPoint (Microsoft Office). Linear and angular measurements of the inner ear were taken with the Amira 3D measurement tool. Total inner ear length and cochlear canal curvature follow Schultz et al. (2017). All linear measurements are in millimeters (mm). Computed Tomography and Imaging Data and images for DMNH EPV.95284, DMNH EPV.130973, DMNH EPV.134082, DMNH EPV.136300, and UCMP 98083 were produced at the High-Resolution X-ray Computed Tomography Facility of the University of Texas at Austin (UTCT). Data and images for AMNH 16310 and AMNH 16321 were produced in the Microscopy and Imaging Facility of the American Museum of Natural History (AMNH) in New York.
16-bit TIFF stacks of raw scan data were processed in ORS Dragonfly (v 4.0, 4.1, 2020.1) with the artifact correction Gradient-Domain-Fusion filter and the contrast enhancing CLAHE (Contrast Limited Adaptive Histogram Equalization) filter. These filtered image stacks were rendered into surface meshes through dynamic threshold segmentation and meshing tools using ORS Dragonfly versions 4.0, 4.1, and 2020.1. Morphological smoothing and closing operations were applied to segmentations prior to meshing. All meshes were decimated by 50%. Meshes were smoothed with the Hamming Window Smoothing method, at 10-15 iterations. UCMP 98083 inner ear images were smoothed with the Laplacian Smoothing method at 15 iterations. Surface meshes were smoothed for final output in ORS Dragonfly 2020.1 and exported in the stereolithography (stl) file type. Final images were rendered in Blender 2.82 with the Cycles render engine and orthographic camera. The inner ear was reconstructed in Amira; label fields were imported into Dragonfly for final processing and imaging. Basic individual scan parameters, which vary for each specimen and facility, are reported below, as are adjustments made to the datasets. DMNH EPV.95284 (Fig. 4) -nearly complete cranium first scanned encased in phosphatic concretion at UTCT. Scan parameters: North Star Imaging (NSI) scanner. Fein Focus High Power (FFHP) source, 200 kV, 0.13 mA, aluminum filter, source to object 492.0 mm, source to detector 1316.851 mm, isometric voxel size = 90.9 μm, total slices = 1,927. After mechanical preparation, DMNH EPV.95284 was scanned again at UTCT. Scan parameters: NSI scanner. FFHP source, 180 kV, 0.15 mA, aluminum filter, source to detector 1317.262 mm, isometric voxel size = 82.7 μm, total slices = 1,925. Contrast between bone and sediment matrix was poor in both scans. CLAHE filtering resulted in major striping artifacts in the dataset and failed to homogenize contrast levels across the image stack. 
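The CLAHE filtering applied to these stacks is, at its core, histogram equalization performed on a clipped histogram, computed tile by tile. A much-simplified sketch of that core idea (one tiny 8-bit tile, a hypothetical clip limit, and none of full CLAHE's bilinear interpolation between tiles):

```python
# Much-simplified sketch of clip-limited histogram equalization, the idea
# behind the CLAHE filter used on the TIFF stacks. Operates on one small
# 8-bit tile; real CLAHE also blends the per-tile mappings bilinearly.

def clipped_equalize(tile, clip_limit=4, levels=256):
    """Equalize one tile after clipping its histogram at clip_limit."""
    hist = [0] * levels
    for v in tile:
        hist[v] += 1
    # Clip each bin and pool the excess counts; clipping is what limits
    # the contrast amplification (and hence the noise) in flat regions.
    excess = 0
    for i, h in enumerate(hist):
        if h > clip_limit:
            excess += h - clip_limit
            hist[i] = clip_limit
    # Redistribute the excess uniformly (integer remainder dropped here
    # for simplicity; real implementations iterate until it is consumed).
    bonus = excess // levels
    hist = [h + bonus for h in hist]
    # Build the cumulative mapping and remap the tile through it.
    total = sum(hist)
    cdf, run = [], 0
    for h in hist:
        run += h
        cdf.append(round((levels - 1) * run / total))
    return [cdf[v] for v in tile]

tile = [10] * 12 + [11] * 3 + [250]  # low-contrast tile with one bright voxel
print(clipped_equalize(tile))
```

On low-contrast, inhomogeneous µCT data like this, the per-tile mappings can diverge sharply between neighboring tiles, which is one way the striping artifacts mentioned above can arise.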
Due to the inconsistent contrast levels, window leveling had to be adjusted regularly during segmentation. Similarly, adjusting the Look Up Table (LUT) helped increase contrast levels. DMNH EPV.134082 (Fig. 5) -posterior portion of cranium scanned at UTCT. Scan parameters: NSI scanner. FFHP source, 200 kV, 0.17 mA, brass filter, source to detector 731.325 mm, isometric voxel size 73.7 µm, total slices = 1,752. DMNH EPV.136300 ( Fig. 6) -anterior portion of cranium scanned at UTCT. Scan parameters: NSI scanner. FFHP source, 160 kV, 0.85 mA, aluminum filter, source to detector 733.22 mm, isometric voxel size = 50.3 μm, total slices = 1,892. Contrast levels were higher and more consistent than in any other scan of DMNH material. Contrast was improved by adjusting LUT and window leveling. UCMP 98083 (Fig. 8) -nearly complete cranium and both dentaries scanned at UTCT. Scan parameters: NSI scanner. FFHP source, 150 kV, 0.12 mA, aluminum filter, source to detector 730.451 mm, isometric voxel size = 51.7 μm, total slices = 1,888. Contrast between bone and sediment matrix was generally poor and inconsistent across the image stack. Application of CLAHE filter resulted in striping artifacts and exacerbated inhomogeneity of the dataset. LUT and window leveling required regular adjusting. DMNH EPV.130973 (Fig. 9a- Diagnosis The most recent diagnosis of Taeniolabis was provided by Simmons (1987: 797) and was restricted to dental characters, as follows: "Dimensions of i1, I2, and M2/m2 greater than in any other multituberculate. Seven or more cusps in the labial cusp row and six or more cusps in the lingual cusp row of m1. Four or more cusps in lingual cusp row of m2. Nine or more cusps in labial and lingual cusp rows of M1. Four or more cusps in medial cusp row of M2. Ratio of tooth length p4/m1 less than 0.40." 
With the cranial material described herein, and because other taeniolabidoid taxa are also represented by cranial material, we are in a position to revise the diagnosis of the genus more comprehensively. We defer doing so, however, until we have described and analyzed new taeniolabidid skull material from the Denver Formation that we currently regard as not referable to T. taoensis (Krause et al. in prep.) and until we can examine the many dental specimens of T. taoensis housed in other museums (to which access is currently restricted because of the pandemic).

TAENIOLABIS TAOENSIS (Cope, 1882c)

Holotype

Specimen AMNH 3036, right maxillary fragment with M1-2 and fragments of skull (see clarification in Simmons 1987 regarding composition of holotype). Maxillary fragment illustrated by Cope (1884a: fig. 3e, b: pl. XXIIIc, fig. 6). Cranial fragments illustrated by Gregory (1910: fig. 8) but mistakenly labeled as AMNH 3075.

Measurements (Tables 2 and 3) and cusp formulae (Table 4) of the molars in these specimens fall within the ranges of variation for the San Juan Basin specimens measured and counted by Simmons (1987), the only exception being M1 lengths in DMNH EPV.95284 and DMNH EPV.136300, which fall less than 1 mm below the rather narrow range of the San Juan Basin M1 lengths. We provisionally take these measurements and counts as providing confirmatory evidence for assignment of the DMNH specimens to T. taoensis.

Diagnosis

"P4/p4, M1/m1, and length of lower tooth row larger than in any other multituberculate, including T. lamberti.... Anterior edge of coronoid process lies labial to posterior half of m1" (Simmons 1987: 799). As for the diagnosis of the genus, we defer revising the diagnosis of T. taoensis until we have more fully assessed new taeniolabidid material from the Denver Formation (Krause et al. in prep.).

Cranium

The cranial material from the Corral Bluffs study area is the most complete for Taeniolabis taoensis.
This material, coupled with that previously described from the San Juan Basin (without the aid of µCT technology), allows us to provide descriptions of most elements in more detail than has been possible heretofore. These descriptions are preceded by an overview of cranial size and shape and of how the latter differs from previous reconstructions.

Nasals

The nasals of Taeniolabis are extraordinarily long and broad elements that dominate the roof of the nasal cavity and form the dorsal margin of the external nasal aperture. They are wider posteriorly than anteriorly. As described and/or illustrated previously (Broom 1914: figs. 6, 8; Granger and Simpson 1929: figs. 4, 5A), each nasal articulates along strongly interdigitated sutures with the premaxilla and maxilla ventrolaterally, the parietal posterolaterally, and the frontal posteromedially. Interestingly, the midline suture with the contralateral nasal is also strongly interdigitated (rather than being planar) in at least AMNH 16321 (Figs. 3a, 7c); segments of this internasal suture can be discerned on the surface of DMNH EPV.95284 (Fig. 4c), DMNH EPV.136300 (Fig. 6c), and UCMP 98083 (Fig. 8c), but details cannot be distinguished. The medial aspects of the frontals project anteriorly to insert between the posterior ends of the nasals such that the nasal-frontal suture on each side is, from medial to lateral, oriented transversely, then obliquely (anteromedial to posterolateral), and then transversely again, ending at the triple junction with an anterolateral extension of the parietal. The nasals are incomplete anteriorly in AMNH 16321 (Figs. 3a, b and 7a-e) and were reconstructed in dorsal and lateral views by Broom (1914: figs. 6, 8), Simpson (1926: fig. 2), and Granger and Simpson (1929: figs. 4, 5A) to extend anteromedially to meet their counterparts in the midline but to terminate at almost the same level as the premaxillae. DMNH EPV.95284 (Fig. 4a-c) and especially DMNH EPV.136300 (Fig.
6a-c), which have relatively complete premaxillae, demonstrate for the first time that these earlier reconstructions are inaccurate. The relatively complete premaxillae preserved in DMNH EPV.136300 (Fig. 6a-c) reveal the presence of strong internarial processes that extend the premaxillae considerably farther anteriorly than previously realized (see "Premaxillae" below). As such, the termination of the nasals anteriorly falls well short of the anterior extent of the premaxillae (see reconstructions in Fig. 10a, c). As seen in DMNH EPV.95284 (Fig. 4c), DMNH EPV.136300 (Fig. 6c), and UCMP 98083 (Fig. 8c), the anterior shape of the nasals in dorsal view is more squared than estimated by Broom (1914: fig. 6), Simpson (1926: fig. 2), and Granger and Simpson (1929: fig. 5A). We do not see definitive evidence of sutures near the midline anteriorly that might indicate the presence of an internarial bar formed by the premaxillae and inserted between the anterior ends of the left and right nasals, but we also acknowledge that none of the available specimens preserves this area pristinely; we therefore provisionally regard an internarial bar as absent in T. taoensis. A distinct notch in the lateral outline of the external nasal aperture occurs where the nasal and premaxilla meet (Fig. 10a, c). This is not the same structure identified as an "anterior nasal notch" by Lillegraven and Krusat (1991; see also Wible and Rougier 2000), which occurs along the anterior margin of the nasal, not at its lateral edge. Broom (1914) and Granger and Simpson (1929) did not record the presence of nasal foramina in AMNH 16321, but at least one large nasal foramen, now obscured by matrix and/or plaster on the original specimen (Fig. 3a) but visible in the µCT scans (Fig. 7c, e), is present on the right side; the canal leading from it projects posterointernally.
Miao (1988: 18; see also Hurum 1994) stated that nasal foramina were present in a cranium of Taeniolabis "being studied by Simmons (personal communication)." This is presumably UCMP 98083, which was illustrated in ventral (but not dorsal) view by Greenwald (1988: fig. 1A). We here also confirm the presence of nasal foramina in UCMP 98083 (Fig. 8a, c, e). The available µCT scans reveal that there are at least five large foramina in the left nasal of UCMP 98083, whereas only one is visible in the less well preserved right nasal. The foramina, which appear to pass internally into the nasal cavity, occur in the posterior half of each nasal, and short neurovascular grooves extend generally anteriorly from the foramina toward the front or side of the snout. The single foramen visible on the right nasal is situated more posteriorly than any of those on the left. Four of the five foramina on the left are distributed along a more-or-less transverse line, with the fifth situated more anteriorly. We suspect that there are more foramina in the right nasal of UCMP 98083 (as illustrated in Fig. 10a) but that they are obscured by the relatively high amount of breakage on that side. Nonetheless, it is apparent that pronounced asymmetry in the number and position of nasal foramina is present. We could not conclusively confirm the presence of nasal foramina in any of the Corral Bluffs cranial specimens of T. taoensis but believe that this is owing to poor surface preservation.

Premaxillae

The premaxillae were not well known previously, particularly anteriorly along the midline and on the palate. The sutures with the maxillae on the lateral aspects of the snout and with the nasals dorsally are as depicted by Broom (1914: figs. 6, 8) and Granger and Simpson (1929: figs. 4, 5A) in AMNH 16321.
However, the premaxillae are fragmentary in this specimen and preclude evaluation of the presence or absence of the internarial process, the precise size, shape, orientation, and borders of the incisive foramina (anterior palatine foramina of Broom 1914; Granger and Simpson 1929), and the position and shape of the sutures with the maxillae on the palate. The premaxilla bears two incisors (I2 and I3) and has three processes: facial (posterodorsal), palatal, and internarial, the last of which was previously entirely unknown for Taeniolabis. The facial process on the side of the snout is more or less vertical in orientation, but gently convex laterally, and ascends to contact the nasal along a strongly interdigitated, roughly horizontal suture. As it ascends, the process does not change greatly in anteroposterior length. The posterior suture with the maxilla, also strongly interdigitated, is longer than that with the nasal, extends from anteroventral to posterodorsal in lateral view, and is slightly convex anteriorly. The palatal processes of the premaxillae are transversely domed, resulting in a gently concave palate, particularly anteriorly. The medial portion of the sutures between the palatal processes of the premaxillae and maxillae was previously unknown. We can confirm that, laterally, the sutures on AMNH 16321 (Fig. 7d) are as depicted in Granger and Simpson (1929: fig. 6), passing posteromedially from the premaxillary ridge (crista premaxillaris of Kielan-Jaworowska et al. 2005) for a short distance toward the midline. We cannot trace this premaxillary-maxillary suture on the palate of AMNH 16321 with confidence any farther toward the midline and therefore cannot determine if it intersects or passes posterior to the posterior border of the incisive foramen.
Digital segmentation of DMNH EPV.136300, however, reveals that the suture passes directly medially from where it crosses the premaxillary ridge to intersect the incisive foramen toward its posterior end (Fig. 11). The suture then passes directly medially from near mid-length on the medial aspect of the incisive foramen to meet its contralateral counterpart on the other side of the interpremaxillary suture. This differs from the reconstruction of Kielan-Jaworowska and Hurum (1997: fig. 11C), who drew the suture as touching the posterior border of the incisive foramen but not passing any farther medially. As definitively shown by DMNH EPV.136300, the incisive foramina are generally as conjectured by Granger and Simpson (1929: fig. 6, dashed lines) from AMNH 16321 but are smaller, more nearly oval (rather than reniform), and slightly more lateral, closer to the alveoli of I3 than described and depicted by those authors.

Fig. 11 Rendering of 3D virtual model of left side of palate, in ventral view, of Taeniolabis taoensis, DMNH EPV.136300, based on µCT data, illustrating premaxilla (light gray) and maxilla (dark gray) and the suture between them relative to the incisive foramen. Teeth are rendered in intermediate gray. Abbreviations: I2, upper second incisor; I3, upper third incisor; if, incisive foramen; ipms, interpremaxillary suture; M1, upper first molar; mx, maxilla; P4, upper fourth premolar; pms, premaxillary-maxillary suture; pmx, premaxilla; zpm, zygomatic process of maxilla. Scale bar = 1 cm

Although Broom (1914) explicitly stated that the size of the incisive foramina could not be determined in AMNH 16321 because of breakage, Granger and Simpson (1929: 614) described them as "oval, about 15 mm. long, and quite lateral in position, just internal to I3." The best-preserved incisive foramen is on the left side of DMNH EPV.136300 (Figs.
6d and 11); it is elliptical in shape and measures 10.3 mm long and 3.4 mm wide, thus shorter than estimated and depicted by Granger and Simpson (1929: fig. 6). The alveolus of I3 lies completely within the premaxilla. Its anterior border is directly lateral to the anterior border of the incisive foramen and its lateral border is immediately medial to the premaxillary ridge (best seen in DMNH EPV.136300 [Fig. 6d] and AMNH 16321 [Fig. 7d]), which marks the boundary between the facial and palatal processes of the bone. The premaxillary ridge is not sharp but is instead low and rounded (best seen on the right side of AMNH 16321; Fig. 7d). Wible et al. (2019) opined that the ridge is absent in Taeniolabis (and Ptilodus); we regard it as present but low and rounded, just not as sharp and crest-like as in some other multituberculates, although it also appears quite low and rounded in forms like Catopsbaatar. Kielan-Jaworowska et al. (2005: 489) considered the ridge to be a probable synapomorphy of Djadochtatherioidea, stating that "to our knowledge it does not occur in other multituberculates," but its presence in Taeniolabis indicates that this is likely not the case. A premaxillary ridge is also depicted by Miao (1988: fig. 18) as quite sharp in Lambdopsalis. The alveoli of I2 and I3 in DMNH EPV.136300 (Figs. 6d and 11) are separated by short diastemata (Table 5) that are less than the lengths of the alveoli of I2. These distances appear to be shorter on AMNH 16321, but fracturing and plaster preclude accurate measurement on this specimen; plate XI in Broom (1914), which includes a photograph prior to reconstruction, suggests that the diastemata were greater, but this area is now damaged and less complete (compare Figs. 2b and 7d). The anterior portions of the premaxillae of Taeniolabis were previously unknown; reconstructions of this region (Broom 1914: fig. 6; Granger and Simpson 1929: figs.
5A, 6), based on AMNH 16321, depicted a large empty space between the left and right I2s (see also Figs. 2a, b and 7c, d). The anterior parts of the premaxillae are, however, almost completely preserved in DMNH EPV.136300 (Figs. 6d and 11) and show, for the first time, that this space is occupied by substantial anterodorsal projections, the internarial processes, formed by the premaxillae. In lateral view, the internarial processes extend even farther anteriorly than the anterior-most extent of the I2s (Fig. 6a, b, d). The tips of the processes are, unfortunately, missing due to breakage and/or post-depositional erosion, thus precluding observation of their full height. However, because we do not see evidence of internarial processes inserted between the anterior ends of the nasals, we believe that they terminated shortly above where they are broken away and that they did not form a complete internarial bar separating the external nasal aperture into left and right halves (see reconstruction in Fig. 10a, c, d). The extensive bases of the left and right internarial processes abut one another and are separated only by the interpremaxillary suture. In anterior view (Figs. 6e and 10d), the ventral surfaces of the premaxillae form a strongly arched (concave ventrally) surface between the left and right I2s.

Septomaxillae

There is no evidence of septomaxillae in any of the new specimens of T. taoensis, confirming Broom's (1914) earlier suspicion of their absence based on AMNH 16321.

Vomer

We were unable to detect convincing evidence of the vomer in any of the available cranial specimens but regard this as simply owing to preservational issues and, in terms of µCT imaging, to the poor density contrast between bone and rock matrix, although there are indications of its presence in DMNH EPV.136300. There is, however, a prominent longitudinal ridge, which may be paired, on the dorsal surfaces of the premaxillae and maxillae on DMNH EPV.134082 (Fig.
5c, e), which has the floor of the nasal cavity exposed. The ridge appears to be situated to the right of the midline, but it may be a slightly displaced midline structure and may represent the base to which the vomer articulated. Other displaced bony remnants in DMNH EPV.134082 are preserved more posteriorly that could represent the actual vomer, but it is impossible to determine.

Lacrimals

The lacrimal of Taeniolabis was described as having a small dorsal exposure by Kielan-Jaworowska and Hurum (1997). However, the lacrimal bone was not identified in Taeniolabis by Broom (1914), Granger and Simpson (1929), or other earlier workers, with Broom stating (p. 128) that, "[I]f one occurs, it must be very small and situated low down within the orbit." Indeed, although the anterior rim of the orbit is fragmented in AMNH 16321, and whereas the nasomaxillary suture is abundantly clear, there is no trace of a suture in this region, tentatively confirming that the lacrimal did not have any facial exposure and indicating that the maxilla contributed exclusively to the formation of the anterior orbital rim. Similarly, none of the Corral Bluffs specimens available to us reveals evidence of a lacrimal, either on the orbital rim or within the orbit. This is arguably the result of poor preservation of surface detail in these specimens, but digital segmentation of the mid-region of the cranium of DMNH EPV.136300 (Fig. 12) did not reveal the preservation of a lacrimal, either on the orbital rim or inside the orbit. Furthermore, because we also do not see evidence of a lacrimal in UCMP 98083, we conclude that it was entirely absent in Taeniolabis.

Maxillae

The maxilla of T. taoensis is a large element that houses a small, simple premolar (P4) and two large, complex molars (M1, M2) in its alveolar process. The left and right cheektooth rows are approximately parallel to one another but diverge slightly anteriorly.
The facial process of the maxilla articulates via interdigitated sutures anteriorly with the premaxilla and dorsally with the nasal, essentially as depicted by Broom (1914: figs. 6, 8) and Granger and Simpson (1929: figs. 4, 5A). The facial process also extends posteromedially along and medial to the orbital rim to contact a long, anterolaterally directed projection of the parietal; this contact is possible, in part, because of the absence of an intervening facial process of the lacrimal. The contribution of the maxilla to the anterior orbital rim is more completely preserved in DMNH EPV.95284 (Fig. 4) and DMNH EPV.136300 (Fig. 6) than it is in AMNH 16321 (Fig. 7), in which much of the rim is broken away, but is best revealed by digital segmentation of the partially preserved anterior orbital rim (zygomatic root) in DMNH EPV.136300 (Fig. 12). This specimen demonstrates that the lacrimal is indeed absent and that the anterior orbital rim is composed solely of the maxilla. The only additional feature worthy of note on the facial process is a single infraorbital foramen, with a prominent groove extending anteriorly from it, situated directly anterior to the large root of the zygoma and well anterior to the level of P4. Digital segmentation of the mid-region of the cranium in DMNH EPV.136300 demonstrates that the maxilla provides a substantial contribution to the medial orbital wall, extending dorsally for a considerable distance from the alveolar process (Fig. 12). There, it contacts the nasal anterodorsally and the frontal dorsally, although the suture between the maxilla and frontal cannot be discerned in one area. Unfortunately, features that plausibly lie within, or are bordered by, the maxilla (e.g., maxillary and sphenopalatine foramina) on the medial wall of the orbit cannot be seen.
Because a lacrimal is not present, the facial process of the maxilla contributes exclusively to the anterior and some of the dorsal part of the orbital rim, which is continued posteriorly by the parietal. The facial process of the maxilla transitions into the zygomatic process but, in dorsal view, there is a prominent angle between the longitudinal axes of the two that marks the anterior margin of the root of the zygoma. The included angle, measured along the external margins of the two processes, in the least deformed of the Corral Bluffs specimens, DMNH EPV.136300 (Fig. 6c, d), is approximately 123°, slightly sharper than rendered by Granger and Simpson (1929: fig. 5A; ~129°) and especially by Broom (1914: fig. 6; 138°) based on AMNH 16321 (our measurement of AMNH 16321 is roughly consistent with that obtained by Granger and Simpson). The anterior margin of the root of the zygoma is well anterior to P4 and the posterior margin begins roughly opposite the embrasure between P4 and M1. The zygomatic process of the maxilla is extraordinarily deep and its dorsal surface forms the ventral margin of the orbit. The process extends posteriorly to contact the slightly less deep zygomatic process of the squamosal. The two processes articulate on the zygomatic arch along a planar suture that, in lateral view, is restricted to the anterior half of the arch (i.e., the zygomatic process of the maxilla is much shorter than the zygomatic process of the squamosal) and extends from anterodorsal to posteroventral.

Fig. 12 Rendering of 3D virtual model of medial wall and rim of right orbit of Taeniolabis taoensis, DMNH EPV.136300, based on µCT data. Position indicated in right lateral view of entire specimen in inset at lower right. Frontal is depicted in green, parietal in blue, nasal in red, maxilla in brown, and maxillary teeth (P4-M2) in gray.
Region where discrimination between frontal and maxilla not possible shown with green and brown stripes and region where discrimination between nasal and maxilla not possible shown with red and brown stripes. Abbreviations: bzr, broken zygomatic root; fr, frontal; M1, upper first molar; M2, upper second molar; mx, maxilla; n, nasal; P4, upper fourth premolar; pa, parietal. Scale bar = 1 cm

Hopson (in Hopson et al. 1989) stated that the presence or absence of a jugal could not be determined in AMNH 16321. UCMP 98083 (Figs. 8d and 13) reveals the presence of an anterior fragment of the rudimentary jugal on the medial aspect of the zygomatic process of the maxilla (see "Jugals" below). This contrasts with the reconstructions by Broom (1914: figs. 6, 8) and Granger and Simpson (1929: figs. 4, 5A), which depict (in dotted outlines) a much larger jugal, with at least part of it rising above the zygomatic processes of the maxilla and squamosal and bearing a postorbital process. UCMP 98083 indicates that the jugal would likely not be visible in lateral view (Fig. 8b). An anterior zygomatic ridge on the lateral aspect of the zygomatic arch of the maxilla, marking the dorsal boundary for the origin of masseter superficialis pars anterior, was described as present in AMNH 16321 by Kielan-Jaworowska et al. (2005: 509; based on observation of photographs in Broom 1914: pls. XI, XII). In addition, Yaoming Hu (pers. comm. to Kielan-Jaworowska et al. 2005: 509) was said to have identified an anterior zygomatic ridge in AMNH 16321. We are unable to confirm this identification on the original specimen; the right zygomatic arch is mostly missing, but the left arch is quite well preserved, although fragmented with several small missing areas filled with plaster (compare Fig. 3a, b with Fig. 7a, c, d). We also cannot identify curved ridges on what is preserved of the right zygomatic arch of UCMP 98083 (Fig. 8b, c).
The lateral surface of the zygomatic process of the maxilla on both AMNH 16321 and UCMP 98083 is essentially smooth (except for cracks) and gently convex. There is, however, a prominent depression, wide anteriorly and tapering and becoming indistinct posteriorly, on the ventral surface of this process in AMNH 16321 (Fig. 7d) that, despite incompleteness and fracturing, appears to be present in UCMP 98083 (Fig. 8d) as well. We assume, therefore, that the masseter superficialis pars anterior originated from this depression rather than from the lateral surface of the process. The poor surface preservation of the Corral Bluffs specimens precludes independent confirmation of these observations. Granger and Simpson (1929: 614) described the maxillary portion of the palate on AMNH 16321 as "greatly arched or domed, reaching its greatest height between the premolars." This region of the palate in AMNH 16321 is reconstructed with large amounts of plaster (Fig. 3b; see Broom 1914: pl. XI and Fig. 7d for images of the specimen without plaster infillings). The DMNH specimens, especially DMNH EPV.136300 (Fig. 6d), which is the least deformed in this region, indicate that the maxillary portion of the palate, although arched/domed, is not as strongly concave as reconstructed in AMNH 16321. The greatest degrees of curvature of the palate appear to be farther posterior, between the M1s, and far anteriorly, on the premaxillae between the I2s. On the palate, the full extent of the suture of the maxilla with the premaxilla was digitally segmented in DMNH EPV.136300 and is shown to pass medially from the premaxillary ridge and intersect the posterior margin of the incisive foramen (Fig. 11; see more detailed description in the "Premaxillae" section above). Faint sutures between the maxillae and the palatines can be seen on only one specimen, DMNH EPV.134082 (Fig. 5d); the fact that they are more or less symmetrically developed lends credence to their identification.
The combined sutures (described more fully in the "Palatines" section below), beginning opposite the distal quarter of M1, result in a shape similar to that of a bell, with the top of the bell situated anteriorly (see reconstruction in Fig. 10b). We could not identify either major or minor palatine foramina but ascribe this to poor preservation rather than to true absence. Finally, whereas Broom (1914: 128) identified a "small, oval" palatal vacuity, Granger and Simpson (1929: 614) opined that AMNH 3041 (a specimen not seen in the current study) "seems positively to indicate that palatal vacuities were not present," but then, in a later paper, Simpson (1937b: 735) left some doubt, stating that "[t]here was probably no palatal vacuity" in T. taoensis. The Corral Bluffs specimens, particularly DMNH EPV.136300 (Fig. 6d), demonstrate conclusively that palatal vacuities are absent.

Palatines

The palatines appear not to have been preserved in AMNH 16321; they are now reconstructed in plaster (compare Figs. 3b and 7d; see also Broom 1914: pl. XI), although Granger and Simpson (1929: fig. 6) drew some dashed lines indicating that the hard palate extended well posterior to the distal ends of M2. Distinct palatine-maxilla sutures were reconstructed by Kielan-Jaworowska and Hurum (1997: fig. 11C) to indicate that the palatines were, together, roughly bell-shaped with a sharply pointed posterior tip, extending from medial to M2 to, again, well posterior to M2, but we are unaware of any previously known specimens that demonstrate this size, position, and shape. The palatines are completely, although poorly, preserved in two of the Corral Bluffs specimens, DMNH EPV.95284 (Fig. 4d) and DMNH EPV.134082 (Fig. 5d), and anterior parts of them are present in DMNH EPV.136300 (Fig. 6d). Although fractured and deformed, they are also preserved in UCMP 98083 (Fig. 8d). Of these specimens, only DMNH EPV.134082 reveals faint sutures with surrounding bones (reconstructed in Fig.
10b) and even these must be characterized as somewhat uncertain. The anterior-most extent of the tentative suture with the maxilla is on the midline opposite the posterior quarter of M1. It extends laterally in a strong convexity and then posteriorly along a sinuous line until passing lateral to the pterygopalatine ridge and medial to the distal end of M2 and the retromolar extension of the maxilla. The fact that the maxillary-palatine sutures are symmetrically present on both sides in DMNH EPV.134082 provides some degree of confidence in their identification. It is clear that the central area of the palatines did not extend as far posteriorly as indicated in previous reconstructions, nor did they terminate medially in a very sharp point (Granger and Simpson 1929: fig. 6; Kielan-Jaworowska and Hurum 1997: fig. 11C; Wible et al. 2019: fig. 22C). Although the left and right palatines come together posteriorly to form a blunt, uvula-like tip that may have extended slightly past the level of the posterior margins of the left and right M2s, the main portions of the palatines did not. Also, although the posterior ends of the palatines appear to be slightly thickened toward the midline and potentially deflected slightly ventrally in DMNH EPV.134082 (Fig. 5d) and DMNH EPV.95284 (Fig. 4d), there is no convincing evidence for a large, strongly thickened, laterally expansive, markedly raised postpalatine torus as seen in several Late Cretaceous djadochtatherioids (see "Bony Palate" below). We therefore regard this feature to be absent in Taeniolabis, which is consistent with how it was scored by Rougier et al. (2016) and Wible et al. (2019). Finally, despite the new specimens, and µCT analysis of them as well as of previously known specimens, the positions of the major and minor palatine foramina cannot be discerned, and whether or not the palatine has any exposure within the orbit also remains unknown.
Jugals

Broom (1914: 128) reconstructed the jugal of Taeniolabis in AMNH 16321, stating that the anterior portion "did not reach far round the anterior orbital margin" and "probably had a postorbital process," and that the posterior portion was "perfectly preserved" and "merely a narrow splint of bone." Hopson et al. (1989: 206), however, as had Simpson (1937b) previously, concluded that the presence or absence of a jugal in AMNH 16321 "cannot be determined due to the poor preservation of the bone surface on the zygoma." We essentially concur with these authors but do see a depression on the medial aspect of the squamosal portion of the left zygomatic arch, along the dorsal half, that could represent a "scar" for the posterior end of the jugal (Fig. 7d). More anteriorly, there is a faint outline of what might be a suture for the rest of the element extending onto the zygomatic process of the maxilla. These traces are not convincing but are in approximately the same position as those depicted for the reconstructed zygomatic arch of Ptilodus by Hopson et al. (1989: fig. 5): medial to the arch, along its dorsal aspect, and overlapping the maxilla-squamosal suture. Similarly suggestive, but not definitive, evidence of a jugal is present on the medial aspect of the right zygomatic arch of DMNH EPV.95284 (Fig. 4d). Digital segmentation of the right zygomatic arch of UCMP 98083, however, does reveal a fragment of bone that we tentatively interpret to be at least part of the anterior end of a jugal (Fig. 13). It sits within a shallow fossa on the medial aspect of the zygomatic arch but is clearly incomplete, as indicated by broken surfaces. It likely would not have been visible in lateral view, at least not to the extent depicted by Broom (1914: fig. 8) and Granger and Simpson (1929: figs. 4, 5A), who showed it as a substantial element forming all of the ventrolateral rim of the orbit.
The dorsal margin of the preserved left zygomatic arch of AMNH 16321 is fragmentary in the region where one might expect a postorbital process indicating the position for attachment of the ventral end of the orbital ligament and marking the lower posterior boundary of the orbit. Broom (1914: fig. 8) had speculatively illustrated the postorbital process on the jugal, roughly opposite the postorbital process on the parietal. Although the zygomatic arches are not well preserved on DMNH EPV.95284 (Fig. 4), the narrowed, ridge-like dorsal edge may be preserved on the right side and, if so, indicates that the postorbital process is on the zygomatic process of the squamosal and slightly posterior to the position indicated by Broom; unfortunately, deformation and poor surface preservation of the specimen do not allow us to have confidence in that conclusion.

Fig. 13 Rendering of 3D virtual model of fragmentary jugal of Taeniolabis taoensis on medial aspect of right zygomatic arch of UCMP 98083, based on µCT data. Fragmentary jugal depicted in blue, zygomatic process of squamosal in red, and rest of cranium in gray. Abbreviations: bs, broken surface; fj, facet for jugal; j, jugal (anterior fragment); M1, upper first molar; otf, orbitotemporal fenestra; P4, upper fourth premolar; zpm, zygomatic process of maxilla; zps, zygomatic process of squamosal. Scale bar = 1 cm

Frontals

The frontals of T. taoensis were reconstructed by Broom (1914: fig. 6; followed by Granger and Simpson 1929: fig. 5A) in dorsal view, based on AMNH 16321, as small, flat, and unfused in the midline, with each having a long, straight medial margin, a shorter, slightly curved (concave posterolaterally) posterolateral margin, and an irregular anterolateral margin that is shorter still. Overall, the frontals are in the shape of a stemmed arrowhead, with the acute tip directed posteriorly.
The medial, posterolateral, and anterior/anterolateral margins articulate with the contralateral frontal, the parietal, and the nasal, respectively. The posterolateral suture, especially on the left side, is prominently displayed on DMNH EPV.134082, where its contact with the parietal has been substantially displaced (Fig. 5c, e). There is no contact with a lacrimal bone because that element is absent in Taeniolabis (see "Lacrimals" above). Broom (1914: fig. 8; followed by Granger and Simpson 1929: fig. 4) reconstructed the frontals as not contributing to either the dorsal orbital margin or to the medial orbital wall. Instead, the parietal was reconstructed as extending forward to contact the maxilla both above and within the orbit, thus contributing to the posterior part of the supraorbital margin (with the maxilla forming the anterior part) and also to the posterior part of the medial orbital wall. Digital segmentation of the medial orbital wall in DMNH EPV.136300 reveals that, although the parietal extends a process forward along the orbital rim to contact the nasal, it simply overlies the frontal in this area and the two main parts of the frontal, the dorsal frontal plate and the lateral orbital process, are connected deep to this parietal process (Fig. 12). Within the orbit, the frontal contacts the nasal anterodorsally and has a long, roughly horizontal contact ventrally with the maxilla, although parts of the intervening suture could not be fully discerned.

Parietal

The parietal of Taeniolabis, in dorsal view, is expansive. In the middle portion of its anteroposterior extent, it lies on either side of the frontals, the contact being V-shaped and paralleled posterolaterally by the low, rounded temporal ridges. Farther anteriorly, the parietal extends as narrow processes lateral to the frontals that pass so far forward that contact is made with the nasals anteromedially and, ultimately, with the facial processes of the maxillae anteriorly.
Posterior to the acute V-shaped termination of the frontals, the parietal comprises the entire posterior portion of the roof of the cranial cavity. Although the posterior-most median portion of the parietal in AMNH 16321 (Fig. 7c) is broken away, and although the left temporal ridge is more fragmentary than the right temporal ridge, the portions that are preserved indicate that the temporal ridges converged ~32 mm from the back of the cranium but did not fully meet in the midline, thus forming a double-ridged sagittal crest, the long apices of the two, more-or-less parallel crests being separated by ~6 mm. This double sagittal crest is not particularly tall. By contrast, the sagittal crest in DMNH EPV.95284 (Fig. 4a-c, e, f) is considerably longer (~45 mm), taller, and is a single, prominent midline feature. The sagittal crest in DMNH EPV.134082 (Fig. 5a-c, e, f) is not as well preserved but appears to more closely resemble that of DMNH EPV.95284 than that of AMNH 16321 in length (~41 mm), prominence, and singularity. The nuchal crests are mostly broken away in AMNH 16321 (Fig. 7) but they are almost complete in DMNH EPV.95284 (Fig. 4) and DMNH EPV.134082 (Fig. 5), demonstrating that they were very prominent and sharp, flared posteroventrolaterally (concave anteriorly) toward the squamosals, and overhung the concave occipital region. Unfortunately, the suture between the parietal and the squamosals cannot be identified with certainty in any of the available specimens and therefore it is impossible to know the relative contributions of each element to the nuchal crests. Similarly, the suture between the parietal and the occiput is obscured in all known specimens. DMNH EPV.95284 belies the conclusion of Simpson (1926: 233, 235) that "the sagittal and occipital [= lambdoid] crests are only moderately developed" in multituberculates and that "the temporal muscle was weak." In this specimen (and DMNH EPV.134082, Fig. 5), these crests are very prominently developed.
Gambaryan and Kielan-Jaworowska (1995: 82) had earlier observed that the sagittal crest is "prominent" in Taeniolabis (and Lambdopsalis). Gambaryan and Kielan-Jaworowska (1995: 85) inferred the position of the postorbital process in Taeniolabis from illustrations in Broom (1914) as being "small and situated on the anterior part of the parietal." We can confirm this position and the fact that it is a swelling rather than a distinct, pointed process from direct observation of AMNH 16321 (Fig. 7c) and DMNH EPV.95284 (Fig. 4c). This region is not well enough preserved on DMNH EPV.134082 to make similar assessments. Deformation of the cranium of DMNH EPV.95284 makes a conclusive determination of the size of the process difficult but, on the left side, it appears to be quite prominent.

Orbitosphenoids/Alisphenoids (and Anterior Lamina of the Petrosal)

Sutures between the elements of the lateral wall of the braincase could not be distinguished in any of the specimens in our sample, including the extent of the orbitosphenoid, alisphenoid, and anterior lamina of the petrosal (see "Petrosals" section below). Nevertheless, a few features are visible. A large foramen for the mandibular division of the trigeminal nerve is visible in UCMP 98083 (Fig. 14d). It is oval (anteroposteriorly longer than dorsoventrally tall). A possible foramen for the ramus superior of the stapedial artery is visible roughly in the center of the lateral braincase wall in DMNH EPV.95284 (Fig. 4b) and UCMP 98083 (Fig. 8b). A deep groove extends anterodorsally from it along the lateral braincase wall.

Basisphenoid/Presphenoid

A subtle, low, rounded midline ridge extending directly forward from the basioccipital ridge toward the posterior margin of the choanae can be seen in DMNH EPV.134082 (Figs. 5d and 14c) and, to a lesser extent, in DMNH EPV.95284 (Figs. 4d and 14b).
We interpret this ridge to be comprised of the basisphenoid and presphenoid, as interpreted for other multituberculates (e.g., Miao 1988; Wible and Rougier 2000), but, owing to poor surface preservation in these specimens, sutures delimiting either of these small elements are impossible to discern.

Pterygoids

Sutures that would delimit the margins of the pterygoid also cannot be seen in any of the available cranial specimens of T. taoensis but presence of the bone is indicated by tall, elongate, symmetrically curved (concave laterally) ridges that extend posteriorly from the end of the palate (reconstructed in Fig. 10b). Barghusen (1986) termed these structures 'pterygopalatine ridges.' More specifically, these ridges in Taeniolabis pass posteriorly from opposite the distolingual margin of M2 and then gently curve laterally, diverging slightly from one another before merging seamlessly (i.e., without a discernible suture) in the region of the promontorium on the petrosal. Although relatively poorly preserved in DMNH EPV.95284 (Figs. 4d and 14b), the pterygopalatine ridges are reasonably well preserved in DMNH EPV.134082 (Figs. 5d and 14c), where the one on the left side seems to be the least disturbed by postmortem taphonomic processes (the anterior part of the right ridge is displaced medially into the basipharyngeal canal).

Abbreviations (Fig. 14): bof, basioccipital fossa; bor, basioccipital ridge; bsp/psp, basisphenoid/presphenoid; ci, crista interfenestralis; cri, canal for ramus inferior; crp, crista parotica; fips, foramen for inferior petrosal sinus; fmV, foramen for mandibular division of trigeminal nerve; fv, fenestra vestibuli; jf, jugular fossa; lf, lateral flange; oc, occipital condyle; plf, perilymphatic foramen; plg, perilymphatic groove; pp, paroccipital process; ppr, pterygopalatine ridge; pr, promontorium; sf, stapedius fossa. Scale bars in a, c = 5 cm, b = 2 cm, d = 1 cm
It is assumed that these pterygopalatine ridges articulate dorsally and/or laterally with the alisphenoids and that they articulate medially with the presphenoid/basisphenoid, but this cannot be documented in any of the available specimens. Because of the poor surface preservation of this region in all available specimens, there is also no distinct pterygoid hamulus in evidence.

Squamosals

The right zygomatic arch of AMNH 16321, except for a small section anteriorly, contributed by the zygomatic process of the maxilla, is composed almost entirely of plaster; the left arch is fragmented, with several small gaps having been filled with plaster, but is not significantly deformed (compare Fig. 3a, b with Fig. 7). The zygomatic arches in DMNH EPV.95284 are preserved almost in their entirety (except for a short posterior section of the right arch) but are markedly bilaterally asymmetrical because of differential preservation and deformation (Figs. 2a, b and 4). Only the posterior half of the left zygomatic arch and very short anterior and posterior sections of the right zygomatic arch are preserved in DMNH EPV.134082 (Figs. 2c, d and 5), only the anterior roots of the arches are preserved in DMNH EPV.136300 (Fig. 6), and only an anterior portion of the right zygomatic arch is preserved in the juvenile cranium UCMP 98083 (Greenwald 1988: fig. 1A; Fig. 8). Measurements of depth and width of the zygomatic arches, where available, are provided in Table 5. AMNH 16321 (Fig. 7a) and UCMP 98083 (Fig. 8b) exhibit an obliquely oriented suture (from anterodorsal to posteroventral) between the zygomatic process of the squamosal and the zygomatic process of the maxilla, the suture ending posteroventrally at slightly less than midlength along the zygomatic arch. This is as depicted in lateral views by Broom (1914: fig. 8) and Granger and Simpson (1929: fig. 4), and in Figs. 7a, 8b, and 10c.
The illustrations by Broom and by Granger and Simpson added a short dorsal section of the suture that turned posteriorly again, creating an asymmetrical V-shaped suture in lateral view. The dorsal margin of the zygomatic arch of AMNH 16321 is broken away anteriorly, precluding clear evidence for this, as well as for a jugal that is depicted as rising above the level of the arch. UCMP 98083 (Fig. 8b) is inconclusive in demonstrating the shape of the maxillary-squamosal suture but indicates that the jugal probably did not rise above the dorsal margin of the zygomatic arch. Only the ventral-most part of this suture is visible on the medial side; more dorsally, the suture appears to be obscured by the diminutive jugal (see "Jugals" section above). The cross-sectional shape of the zygomatic process of the squamosal is essentially that of a tall isosceles triangle, the most acute angle of which is positioned dorsally. In lateral view, it is distinctly arched, convex dorsally and concave ventrally. Gambaryan and Kielan-Jaworowska (1995) identified intermediate and posterior zygomatic ridges (purportedly for the origins of masseter superficialis pars posterior and masseter medialis pars posterior, respectively), as seen on the squamosal of djadochtatherioid multituberculates, the anterior zygomatic ridge (purportedly for origin of masseter superficialis pars anterior) being situated more anteriorly and confined to the lateral surface of the maxilla. As stated above (section on "Maxillae"), Kielan-Jaworowska et al. (2005), based on observation of photographs in Broom (1914: pls. XI, XII) and a personal communication from Yaoming Hu, identified anterior and intermediate zygomatic ridges in AMNH 16321 and thereby implied comparability to the situation in djadochtatherioids. Our examination of the left zygomatic arch of AMNH 16321 indicates that some qualification is necessary.
An elongate, shallow, lenticular depression facing ventrolaterally (more ventrally than laterally) on the squamosal extends anteriorly from just lateral to the anterior end of the glenoid fossa to approximately midlength on the arch, anterior to the maxilla-squamosal suture. The lateral margin of this depression is likely equivalent to the intermediate zygomatic ridge of Gambaryan and Kielan-Jaworowska (1995) but it is important to emphasize that the depression itself faces more ventrally than it does laterally (unlike in djadochtatherioids). Following Gambaryan and Kielan-Jaworowska (1995), the depression served as the site of origin for pars posterior of the superficial masseter. Medial to this depression is a shallow groove that, following the inferences of Gambaryan and Kielan-Jaworowska (1995), may have served as the origin for masseter lateralis. It is relatively wide posteriorly, extending forward from just anterior to the glenoid fossa; the level of its anterior termination is not distinct. The glenoid fossa, best preserved on AMNH 16321 (Fig. 7d) and DMNH EPV.134082 (Fig. 5d), is very large and somewhat tear-drop shaped (the sharp apex of the tear situated anterolaterally), being slightly longer anteroposteriorly than wide mediolaterally, although its anterior termination is not distinct (see estimated measurements in Table 5). Its longitudinal axis is not strictly anteroposterior but, instead, trends in a slightly anterolateral to posteromedial direction. The articular surface of the fossa is flat anteroposteriorly but shallowly concave from medial to lateral, with distinct rims both anteromedially and posterolaterally, as best seen in AMNH 16321 (Fig. 7d). As stated above (see "Parietal"), the suture between the parietal and the squamosals cannot be discerned in any of the available specimens but, farther laterally, immediately anterior to the very prominent nuchal crests, Granger and Simpson (1929: fig. 
5A) identified a partial suture between the squamosal and the petrosal in dorsal view.

Petrosals

The description of the petrosal of T. taoensis is based on DMNH EPV.95284, DMNH EPV.134082, and UCMP 98083 (Fig. 14). Of those specimens, DMNH EPV.95284 is the most intact but the surface of the petrosal is altered because of postmortem taphonomic processes, and some finer structures are therefore difficult to discern. Furthermore, the density difference between sediment and bone is poor in this specimen and, as such, µCT data cannot aid significantly in identification of foramina or in tracing pathways of nerves and vessels. Although the external surface of UCMP 98083 (Fig. 14d) preserves greater detail (e.g., muscular attachments, grooves, foramina) than DMNH EPV.95284 (Fig. 14b), the basicranium of this specimen is more deformed. The ventral surface of the petrosal of DMNH EPV.134082 (Fig. 14c) is highly altered; however, similarities between DMNH EPV.134082 and DMNH EPV.95284, including, for example, the position of the jugular fossa and promontorium, are evident. The density difference between sediment and bone is likewise poor in DMNH EPV.134082, particularly on the left side. The contrast is slightly better on the right side, where the inner ear is discernible. Most of the petrosal of AMNH 16321 is reconstructed in plaster and does not preserve actual morphology. In ventral view, the promontorium is oriented anteromedially to posterolaterally, is elongate, and forms a distinct ridge that divides the middle ear cavity into two deeply excavated spaces (Fig. 14). The surface of the promontorium does not appear to bear any distinct grooves for the internal carotid or stapedial arteries. This is best seen in the partially preserved promontorium of UCMP 98083 (Fig. 14d). DMNH EPV.95284 (Fig.
14b) does not preserve any grooves either, although it should again be noted that the surfaces of the petrosal are preservationally altered, and it is possible that any grooves might have simply been obliterated in the process. Much of the right promontorium is covered in matrix in DMNH EPV.134082 and that of the left is too poorly preserved to evaluate its surface morphology (Fig. 14c). Based on the µCT scans and preserved external morphology of the specimens, it cannot be confirmed whether the perilymphatic duct exited the inner ear through a perilymphatic groove from the perilymphatic foramen or was enclosed in a cochlear canaliculus (and whether a true fenestra cochleae was present). In UCMP 98083 (Fig. 14d), a faint groove is visible on a block close to the perilymphatic foramen/fenestra cochleae, possibly representing a perilymphatic groove, but the block is separated from the petrosal and rotated out of position, and it is therefore unclear whether it truly connected to the perilymphatic foramen in life. Nevertheless, we tentatively refer to the foramen as the "perilymphatic foramen" in the description and comparison, as a perilymphatic foramen is present in all multituberculates known to date. The shape and size of the fenestra vestibuli and perilymphatic foramen are not obvious in DMNH EPV.95284 and DMNH EPV.134082 and are obscured by a large fracture in UCMP 98083. The fenestra vestibuli appears large in UCMP 98083, but the true size is difficult to estimate as the posterior and anterior edges appear to be broken (Fig. 14d). Separating the perilymphatic foramen and fenestra vestibuli is a short, narrow, and posteriorly trending bony ridge, the crista interfenestralis. Medial to the crista interfenestralis and extending along the whole length of the promontorium is a deeply excavated jugular fossa. The jugular fossa is somewhat teardrop-shaped, with a larger and rounded posterior edge and a narrower anterior edge.
The outline and size of the fossa are best seen on the right side of DMNH EPV.95284. Although the position of the fossa is also recognizable in DMNH EPV.134082 (Fig. 14c), its size appears to be considerably exaggerated by erosion. The broken fragment next to the perilymphatic foramen obscures the posterior aspect of the right jugular fossa in UCMP 98083, but the anterior aspect is visible and confirms the shape seen in the DMNH specimens (the left jugular fossa is not visible). There are several small openings in the lateral wall of the jugular fossa (along the posteromedial edge of the promontorium). Owing to the poor quality of preservation, it is unclear if these represent exposed emissary veins that drained medially into the inferior petrosal sinus or actual foramina for veins that drained medially from the inferior petrosal sinus into the jugular fossa (Fig. 14d). The crista interfenestralis is continuous with the paroccipital process posteriorly, dividing the rear of the middle ear cavity (divided post-promontorial tympanic recess). The space lateral to the crista interfenestralis and promontorium is likewise deeply excavated and extends slightly farther anteriorly than the jugular fossa (DMNH EPV.95284, Fig. 14b). Laterally, a prominent crista parotica defines the border of the lateral space, best seen in DMNH EPV.95284 (Fig. 14b), UCMP 98083 (Fig. 14d), and on the right side of DMNH EPV.134082 (Fig. 14c). At the posterior aspect of the lateral space, along a slightly elevated shelf, is an elongate fossa for the stapedius muscle (Fig. 14b, d). The stapedius fossa is best seen in UCMP 98083 and is barely visible in the DMNH specimens. The stapedius fossa does not seem to extend onto the lateral aspect of the crista interfenestralis as in Kryptobaatar and Guibaatar (Wible and Rougier 2000; Wible et al. 2019). The posterior wall of the lateral space is formed by the base of the paroccipital process.
The paroccipital process is distinct, small, and rounded, extending ventrally only slightly past the surface of the petrosal. This is much smaller than reconstructed by Granger and Simpson (1929: fig. 5B), who illustrated it extending ventral to the level of the occipital condyles. The crista parotica is anteromedially confluent with a broad and low lateral flange. The lateral flange is medially inflected and contacts the promontorium anteriorly (Fig. 14b-d). At the anterior tip of the lateral flange is a small foramen that could represent the canal for the ramus inferior of the stapedial artery (Fig. 14b, d). However, the foramen could not be traced through the µCT scans and its course is uncertain. Several other foramina should pierce the crista parotica and lateral flange but are not visible on the external surface or in the µCT scans of any of the specimens, including the foramen for the ramus superior of the stapedial artery, the tympanic aperture of the prootic canal (for the prootic sinus), and the secondary facial foramen (for the facial nerve). This is clearly due to poor preservation and does not represent absence of the foramina, because those foramina (or a combination of them) are generally present in multituberculates (see "Comparisons and Discussion"). Lateral to the crista parotica lies the epitympanic recess. The fossa incudis, for the crus breve of the incus, is not preserved in any of the specimens. In other multituberculates the posterior aspect of the epitympanic recess houses a narrow fossa incudis (Kryptobaatar, Wible and Rougier 2000; cf. Tombaatar, Ladevèze et al. 2010; Mangasbaatar, Rougier et al. 2016). Anterior to the epitympanic recess, in what possibly represents the anterior lamina, appears to be a large foramen that opens endocranially into the cavum epiptericum. The opening is best visible in UCMP 98083 (fmV in Fig.
14d), and appears to be a single large foramen, but due to poor preservation we cannot rule out that it is a fossa with two distinct foramina. We interpret this foramen to be an opening for the mandibular division of the trigeminal nerve. Finally, in occipital view, the posttemporal foramen, which we assume lies within the petrosal, is large; it is best preserved on the left sides of DMNH EPV.134082 (Fig. 5f) and UCMP 98083 (Fig. 8f). It is surrounded by a funnel-shaped entryway, the posttemporal fossa. The following description of the inner ear of Taeniolabis is primarily based on UCMP 98083, the best-preserved specimen for this region in the sample (Fig. 15; measurements in Table 6).

Abbreviations (Fig. 15): ap, apex; asc, anterior semicircular canal; cc, crus commune; cn, cochlear nerve (yellow); co, cochlear canal; fv, fenestra vestibuli; lsc, lateral semicircular canal; psc, posterior semicircular canal; vs, vestibule. Scale bar in a = 10 mm, in b-k = 5 mm

Much of the inner ear is intact in UCMP 98083 (aside from parts of the semicircular canals), but the endocast is infilled with sediment that has nearly the same density as the bone, which makes differentiation of the endocast difficult in some areas (e.g., cochlear canal on the left, semicircular canals on the right). Overall, the cochlear canal and vestibule can be better differentiated on the right side, whereas the semicircular canals are more visible on the left side. Some information can be gleaned from DMNH EPV.95284. The density difference between bone and sediment infill of the endocast is worse than in UCMP 98083, but still allowed for tracing a very coarse outline of the right and left inner ears. The outline, however, might not reliably represent the true morphology. The right inner ear is discernible in DMNH EPV.134082. The density contrast is much poorer on the left side, where only the vestibule and parts of the semicircular canals are visible.
The inner ear is not preserved among the basicranial fragments present in AMNH 16321. The cochlear canal is only gently curved laterally in dorsal or ventral view (49°). The basal portion of the canal is directed anteromedially, then curves to an anterolateral direction (Fig. 15). The slender cochlear canal gently changes from a relatively round cross section at the apex to a more oval cross section at the base, with a width and height of 1.8 mm and 2.0 mm at mid-length. The right cochlear canal measures 7.8 mm in length (measured from the apex to the contact of the vestibule and cochlear canal). The left cochlear canal is slightly longer (8.7 mm); however, its outline is less visible in the µCT scans and we believe that the 7.8 mm measured on the right is a more accurate representation of the morphology. The cochlear canal constitutes about 8.1% of cranial length in the juvenile UCMP 98083. The apex of the cochlear canal is only very gently expanded, possibly indicating the presence of a lagena macula (Fig. 15). A separate canal for the lagenar nerve could not be discerned in the µCT scans. This does not necessarily imply the absence of such a canal because the density contrast in the specimen is simply not sufficient to clearly delimit whether a lagenar nerve canal was present or not. The cochlear nerve passes into the cochlear canal through what appears to be a single foramen along the dorsal aspect of the canal. Lack of contrast makes it unclear whether any bony support structures for the cochlear nerve or hearing membrane existed. Given that the cochlear nerve passes through a single foramen in most multituberculates (Meng and Wyss 1995; Fox and Meng 1997; Ladevèze et al. 2010; Luo et al. 2016; Csiki-Sava et al. 2018; Wible et al. 2019), it is plausible that the failure to see such structures in UCMP 98083 and DMNH EPV.134082 might represent actual absence of a cribriform plate and primary or secondary bony laminae.
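The stated proportion can be checked arithmetically against the reported canal length; the implied cranial length below is our back-calculation, not a measurement given in the text:

```latex
\text{cranial length (UCMP 98083)} \approx \frac{7.8\ \text{mm}}{0.081} \approx 96\ \text{mm}
```

This back-of-the-envelope figure is consistent with the more reliable right-side canal measurement of 7.8 mm and the reported ~8.1% proportion.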
The vestibule of Taeniolabis is large, with a volume of 253-270 mm³. It is smooth and rounded and does not provide any indications for the boundaries of the utricle or saccule. The vestibular nerve could not be traced reliably in the µCT scans and thus cannot aid in the identification of the saccule or utricle. Of note is that the fenestra vestibuli opens into the vestibule, indicating that the scala vestibuli is incorporated into the inflation of the vestibule. The ampullae are slightly rounded and, in parts, difficult to differentiate from the enlarged vestibule. The semicircular canals are better differentiated on the left side of UCMP 98083; although also present on the right side, the sediment infill on the right (Fig. 15b-f) is nearly the same density as the bony labyrinth and the canals could not be as reliably traced as on the left (Fig. 15g-k). The radius of curvature of the three semicircular canals is fairly similar, with the posterior canal being slightly the largest (Table 6). The difference in radius of curvature between the posterior canal and the lateral and anterior canals is greater on the left than on the right, which might be driven by a fracture in the left posterior semicircular canal. The left posterior semicircular canal exhibits a peculiar bend, but the corresponding part on the right could not be traced and it is unclear what the actual morphology might have looked like. The anterior and posterior semicircular canals meet to form a long and robust crus commune. A secondary crus commune is absent; the lateral and posterior semicircular canals do not merge but remain separate, leading to two ampullae.

Interparietal

Discrimination of sutures in the occipital region is insufficient on any of the available specimens to resolve whether or not an interparietal (or, more finely, a postparietal and left and right tabulars) is preserved.
Occipital

The boundaries of the various components of the occipital bone (supraoccipital, paired exoccipitals, basioccipital) cannot be observed on any of the adult cranial specimens preserving the occipital region in our sample (DMNH EPV.95284, DMNH EPV.134082, AMNH 16321) simply because the sutures necessary to do so are either fused or obscured, with or without the aid of µCT imagery. There is, however, suggestive evidence of the ventral aspects of the sutures between the supraoccipital and left and right exoccipitals in the juvenile cranium of UCMP 98083 (Fig. 8f). These slant from dorsolateral to ventromedial and indicate that the supraoccipital contributed to a small median portion of the dorsal margin of the foramen magnum. The dorsal boundary of the supraoccipital, presumably with the parietal, cannot be discerned. It is not possible to know how much of the nuchal region of the cranium is contributed by the occipital (in addition to, potentially, the parietal, interparietal, petrosal, and squamosal), again because sutures are not visible, even in UCMP 98083. Furthermore, because of the poor surface preservation of the Corral Bluffs specimens and incomplete preservation, fragmentation, and/or deformation of the San Juan Basin specimens, other osteological features expected to be found in, or bordering on, the occipital cannot be differentiated. Despite these limitations, some major features of the occipital region of T. taoensis can be described for the first time. The occipital region is tall and, assuming that an interparietal is not present (see reviews of distribution among fossil and extant mammaliaforms in Koyabu et al. 2012; Krause 2014b), ends dorsally in prominent nuchal crests. Whether or not the occipital bone extends to actually participate in the nuchal crests cannot be discerned but it does so, or nearly so, in all multituberculates for which this region is known (see "Comparisons and Discussion").
The nuchal crests themselves are very prominent, joining with the equally prominent sagittal crest of the parietal in a triple junction to form a salient, peaked external occipital protuberance. This therefore differs strongly from the reconstructions by Broom (1914: fig. 8) and Granger and Simpson (1929: fig. 4), at least in lateral view, where the posterodorsal outline of the cranium appears more evenly rounded (convex posterodorsally). Furthermore, the position of the external occipital protuberance is farther posterior than depicted by Broom (1914: fig. 6) and Granger and Simpson (1929: fig. 5A). The nuchal area dorsal and lateral to the foramen magnum, between it and the prominent nuchal crests, is concave. Overall, because the occipital condyles lie slightly posterior to the level of the external occipital protuberance, the nuchal area is slightly slanted from anterodorsal to posteroventral in lateral view. The occipital condyles, previously unknown in T. taoensis, are preserved in DMNH EPV.95284 (Figs. 4b, d, f and 14b) but are best preserved in DMNH EPV.134082 (Figs. 5a-d, f and 14c). They project posteriorly, slightly beyond a transverse line formed by the nuchal crests, which is farther posterior than speculatively reconstructed by Broom (1914: fig. 6) and especially Granger and Simpson (1929: figs. 5A, 6). The condyles themselves, in posterior view, occupy the lower third of the area surrounding the foramen magnum and are large and rounded posteriorly and ventrally. In ventral view, the condyles are separated by a broad, V-shaped (approximately 90°) odontoid (= intercondyloid) notch. In posterior view, the foramen magnum, as preserved in DMNH EPV.95284 (Fig. 4f) and DMNH EPV.134082 (Fig. 5f), is triangular in shape, slightly wider than tall (Table 5), with each of the two dorsolateral sides of the triangle being straight, posteriorly projecting crests. The foramen appears to be more rounded laterally in the juvenile UCMP 98083 (Fig. 8f).
The foramen itself is directed straight posteriorly. A linear, vertically oriented projection of bone midway between the dorsal margin of the foramen magnum and the external occipital protuberance on DMNH EPV.95284 (Fig. 4f) represents a remnant of the external occipital crest. The crest is more faintly visible in DMNH EPV.134082 (Fig. 5f), AMNH 16321 (Fig. 7f), and UCMP 98083 (Fig. 8f). The basioccipital, on the ventral surface immediately anterior to the foramen magnum, is poorly preserved in all of the specimens. The best indication of its morphology is preserved in DMNH EPV.134082, where a low, midline ridge, with two shallow fossae on either side of it, extends anteriorly for a short distance (Fig. 14c). Farther laterally, the basioccipital contributes a significant medial border of the jugular fossae. A faint, transverse suture anterior to the low, midline ridge and two shallow fossae flanking it may mark the position of the spheno-occipital synchondrosis in DMNH EPV.134082.

Dentary

The dentary of T. taoensis has been described and/or illustrated by Cope (1884a: fig. 3a-d; 1884b: pl. XXIIIc, fig. 1, 1a- Greenwald (1988: fig. 1B, C) and photographs of AMNH 745 (Fig. 3e, f), AMNH 748 (Fig. 3g, h), AMNH 968, and AMNH 27734 (Fig. 3i-n). DMNH EPV.130973 (Figs. 2g, h and 9a-c) is among the most complete dentaries known for T. taoensis and is missing only the dorsal portions of the coronoid process, the terminus of the mandibular condyle, and a few inconsequential fragments elsewhere; its surface preservation, however, is not as pristine as that of AMNH 16310 (Figs. 3c, d and 9d-f). The horizontal ramus of AMNH 16310 is complete and very well preserved but only the anterior half of the ascending ramus is preserved; as such, much of the coronoid process, much of the pterygoid and masseteric fossae, and all of the mandibular condyle are missing. Both dentaries of UCMP 98083 (Fig.
8g-l) are preserved but both are missing large portions (particularly posteriorly), have numerous cracks, and are considerably deformed. AMNH 745, AMNH 748, AMNH 968, and AMNH 27734 reveal more of the coronoid process and mandibular condyle than is preserved in DMNH EPV.130973, AMNH 16310, and UCMP 98083. Combined, these specimens permit a full reconstruction of the dentary of T. taoensis (Fig. 10f-h) and indicate significant changes to the size, shape, and position of the coronoid process and mandibular condyle. The prior descriptions and illustrations of the dentary have revealed or confirmed that the horizontal ramus houses i1, p4, and m1-2; that there is a long diastema between i1 and p4; and that the ramus is short and deep (see Table 7 for measurements). The horizontal rami are also extraordinarily thick mediolaterally and meet anteriorly at an included angle of approximately 40-45° (precise measurement is not possible because of the difficulty of orienting specimens consistently and because the symphysis is unfused; see also fig. 1C in Osborn and Earle 1895). Given that the cheektooth row is set obliquely relative to the longitudinal axis of the dentary, and given the inferred palinal direction of the power stroke of the chewing cycle, the left and right rows would have been roughly parallel in life. In lateral view, the ventral margin of the dentary is sinuous (most strongly developed in AMNH 16310; Figs. 3c, d and 9d, e): convex anteriorly in the region of the incisor root (ventral to the diastema), concave ventral to m1, and convex posteriorly (ventral to the masseteric fossa). The posterior convexity extends posterodorsally in a smooth arc toward the condyle in DMNH EPV.130973 but has a bend that is variably expressed in AMNH 745, AMNH 748, AMNH 968, and AMNH 27734.
The ventral surface below the masseteric fossa is notable in being wide (widest at midlength and tapering anteriorly and posteriorly), flat, and strongly tilted in the coronal plane from ventrolateral to dorsomedial. The dorsal margin of the horizontal ramus is strongly concave in the region of the diastema, with the section above the incisor root being at an approximate right angle to the section anterior to the root of p4. On the lateral aspect of the dentary, a single mental foramen is present, situated anterior to and below the nadir in the arc of the dorsal margin of the diastema. The foramen is small; the canal immediately deep to it, measured in AMNH 16310, averages only approximately 1.15 mm in diameter, which seems extraordinarily small for such a large animal. The masseteric fossa is defined by anterodorsal and anteroventral margins (variably expressed as ridges) and extends anteriorly onto the horizontal ramus in the adult specimens in the sample, converging to a rounded point below p4. In the left dentary of UCMP 98083 (Fig. 8g), which represents a juvenile individual, this point appears to be slightly more posteriorly positioned, ventral to the anterior part of m1, but it is below p4 on the right (Fig. 8j). The names applied to the anteroventral margin of the masseteric fossa were reviewed by Gambaryan and Kielan-Jaworowska (1995); we follow those authors in referring to it as the masseteric crest even though it presents as a low rounded ridge in T. taoensis. Passing posteriorly, where it intersects the ventral margin of the dentary in lateral view, the masseteric crest becomes the masseteric line, which continues posteriorly and then posterodorsally to form the ventral and posteroventral borders of the masseteric fossa. Both DMNH EPV.130973 (Fig. 9a) and AMNH 16310 (Fig. 9d) reveal a gently curved ridge, convex anteriorly, inside the masseteric fossa that begins dorsally at the approximate level of the embrasure between m1 and m2.
There appears to be good evidence for this ridge in the photographs of AMNH 748 (Fig. 3g) and AMNH 968 (not illustrated) as well, but it is less clear in those of AMNH 745 (Fig. 3e) and AMNH 27734 (Fig. 3i). When present, the ridge descends through the masseteric fossa and becomes less distinct as it continues ventrally to ultimately intersect the masseteric line close to where the latter transitions to the masseteric crest. We refer to this feature as 'intramasseteric ridge 1.' The depression anterior to the ridge, bounded anteriorly by the convergence of the anterodorsal and anteroventral (masseteric crest) margins, is the masseteric fovea (fovea masseterica of Gambaryan and Kielan-Jaworowska 1995). In other words, the masseteric fovea is the anterior-most component of the masseteric fossa (which is not the case for all multituberculates; see "Comparisons and Discussion" below). It is likely that intramasseteric ridge 1 represents the dividing line between portions of the masseter muscle. Gambaryan and Kielan-Jaworowska (1995) regarded the fovea as serving as the insertion area of masseter medialis pars anterior (anterior deep masseter of Sloan 1979 but also known as the infraorbital portion of zygomaticomandibularis; Druzinsky et al. 2011), whereas the rest of the masseter muscle inserted posterior to the fossa. Farther posteriorly, at least as seen in DMNH EPV.130973 (Fig. 9a), the masseteric fossa is divided by another ridge, here termed 'intramasseteric ridge 2,' which ascends from the masseteric line where it changes direction from the ventral to the posteroventral boundary of the masseteric fossa. The ridge appears to become less distinct dorsally but this observation is based on DMNH EPV.130973, which exhibits some breakage in the area (Fig. 9a). This ridge presumably marks yet another division of the masseter muscle. Based on DMNH EPV.130973 (Fig. 9a) and AMNH 27734 (Fig.
3i) in particular, we infer that the masseteric fossa (and the dentary as a whole) was shorter posteriorly than reconstructed by earlier workers (e.g., Granger and Simpson 1929: fig. 4A), in part because the mandibular condyle was not suspended on a long, posterodorsally projecting condylar process. Kielan-Jaworowska et al. (2005) inferred the presence of a masseteric protuberance in Taeniolabis taoensis based on illustrations in Granger and Simpson (1929: fig. 4) and Simmons (1987: fig. 4.4). In AMNH 16310, DMNH EPV.130973, and UCMP 98083, all of which preserve good surface detail in this region, however, the masseteric crest, defining the anteroventral border of the masseteric fossa, does not end abruptly anteriorly in an enlargement similar to that identified as a protuberance in forms like Catopsbaatar (Kielan-Jaworowska et al. 2005: figs. 7, 9B1) or Djadochtatherium (Kielan-Jaworowska and Hurum 1997: fig. 5A). Indeed, the anteroventral margin of the masseteric fossa becomes less distinct as it passes anteriorly. We also see no evidence of a protuberance in the photographs of AMNH 745, AMNH 748, AMNH 968, or AMNH 27734. Finally, it must be noted that Fig. 4.4 in Simmons (1987) is of the holotype dentary (CCM 70-110) of T. lamberti (not T. taoensis) and the anteroventral margin of the masseteric fossa is broken posteriorly, thus precluding determination of relative prominence along the margin and confident appraisal of the presence or absence of a masseteric protuberance; we suspect it did not exist in that species either. On the medial side of the horizontal ramus, the mandibular symphysis is unfused (contra Weil and Krause 2008). It occupies much of the area ventral to the diastema and is comma-shaped with the head of the comma positioned anteriorly and the tail trailing posteriorly along the ventral aspect of the ramus; this is most pristinely preserved and best seen on AMNH 16310 (Fig. 9e).
AMNH 16310 exhibits a distinct ridge passing posteroventrally from the dorsal portion of the symphysis, ending below the mesial portion of m1; this ridge is much less distinct on DMNH EPV.130973 (Fig. 9b). In coronal section, the medial side of the horizontal ramus below the cheek teeth is gently concave (AMNH 16310) to almost flat (DMNH EPV.130973), bounded inferiorly by a ridge that becomes stronger as it passes posteriorly to form the ventral border of the pterygoid fossa, the pterygoid shelf, on the ascending ramus. As measured on the medial side of both DMNH EPV.130973 and AMNH 16310, the occlusal plane of the molars lies at an angle of approximately 20° relative to the ventral surface of the dentary; this is considerably higher than scored for Taeniolabis in recent phylogenetic analyses by, for example, Kielan-Jaworowska and Hurum (2001: char. 35, "11-17 degrees"), Mao et al. (2016: char. 7, "11-17 degrees"), and Wible et al. (2019: char. 73, "equal to or less than 10°"). The pterygoid fossa, more complete on DMNH EPV.130973 (Fig. 9b) than on AMNH 16310 (Fig. 9e), is massive and deeply excavated laterally and anteriorly, providing a huge insertion area for the medial pterygoid muscle. Its anterior border is very distinct and gently convex anteriorly, merging ventrally with a very strong pterygoid shelf (pterygoid crest of Simpson 1926), which bounds the entire ventral aspect of the fossa. The pterygoid shelf itself is very wide but narrows as it ascends posterodorsally toward the posterior margin of the mandibular condyle, just as the pterygoid fossa becomes shallower posteriorly. The mandibular foramen, for passage of the inferior alveolar nerve, could not be identified in any of the specimens available to us, either on the specimens themselves or through µCT imagery. It is likely to have been positioned as in T. lamberti, on the anterior wall of the pterygoid fossa at a level below m2 (see Simmons 1987).
The size and shape of the mandibular condyle and its position relative to the rest of the dentary have not been well documented previously. It is not preserved on any of the specimens we were able to examine firsthand (DMNH EPV.130973, AMNH 16310) or through µCT imagery (UCMP 98083) but is preserved on the specimens for which we have photographs (AMNH 745, AMNH 748, AMNH 968, AMNH 27734), courtesy of AMNH curator Jin Meng, and therefore can now be documented more fully. The condyle is best preserved on AMNH 27734 and is illustrated here in four views (Fig. 3k-n). The condyle was not as large and globular or with as long a neck as reconstructed by past authors (e.g., Osborn and Earle 1895: fig. 1A, C; Gregory 1910: fig. 8; Granger and Simpson 1929: fig. 4A), and certainly not as much as reconstructed in plaster on AMNH 16310 (Fig. 3c, d). In dorsal view (Fig. 3m), the condyle is lenticular in shape, strongly convex posteriorly and gently concave anteriorly, with the ridge passing anteriorly to form the mandibular notch doing so from the medial side of the condyle. In posterior view (Fig. 3n), the condyle extends only a short distance onto the posterior surface and has a distinctly convex ventral margin. The lateral and medial views (Fig. 3k, l) document a short and unconstricted condylar neck and a gently convex dorsal surface that becomes more convex posteriorly. The mandibular notch, in side view, is evenly rounded (concave dorsally), extending forward from the condyle and merging with the posterior margin of the coronoid process (Fig. 3i, j). The coronoid process is incomplete in all specimens known to us; it was reconstructed as very low by Cope (1884a: fig. 3a, b; 1884b: pl. XXIIIc, fig. 1, 1a), who also incorrectly reconstructed a distinct angular process, as very tall and recurved by Osborn and Earle (1895: fig. 1A), and as moderately tall and recurved by Broom (1914: fig. 8) and Granger and Simpson (1929: fig. 4A).
The process is, however, nearly complete and well preserved in AMNH 27734 (Fig. 3i, j), missing only some fragments along the posterior edge; it is the only specimen preserving the full anterior margin and dorsal tip. This specimen demonstrates that the coronoid process, while moderately tall, was likely not recurved, at least not strongly. The anterior base of the process is also at least partially preserved in DMNH EPV.130973 (Figs. 2g, h and 9a, b), AMNH 745 (Fig. 3e, f), AMNH 748 (Fig. 3g, h), AMNH 968, and AMNH 16310 (Figs. 3c, d and 9d, e). In lateral view, the anterior base arises opposite the posterior portion of m1 (i.e., m2 not visible in this view; DMNH EPV.130973, AMNH 748, AMNH 16310) or the anterior portion of m2 (AMNH 968, AMNH 27734). Acknowledging that there is also variability depending upon how the dentary is oriented, this is in slight contrast to Simmons (1987), who observed that it arises only opposite m1 and used it as a diagnostic feature differentiating T. taoensis from T. lamberti, in which, in the holotype and only known specimen preserving the dentary, it arises opposite the anterior part of m2; there appears to be more intraspecific variation in this feature than previously known. The process extends posterodorsally from the anterodorsal margin of the masseteric fossa, and produces a broad, shallow, U-shaped temporal groove (sulcus temporalis of Gambaryan and Kielan-Jaworowska 1995) between it and the buccal alveolar margin of m2. The dorsal apex of the coronoid process is peaked, and the anterior and posterior edges descend more or less symmetrically from that peak, although the anterior edge is longer and less vertical. Although the posterior edge of the coronoid process is not preserved in AMNH 27734, it has been reconstructed in plaster to descend and then merge with the mandibular notch in a concave, rounded outline that seems natural (Fig. 3i-l). This shape was used in the reconstruction of the dentary in Fig. 10f, h.
Comparisons and Discussion

Comparisons with the craniomandibular morphology of other multituberculate taxa are warranted because of the new anatomical information provided for our subject taxon, Taeniolabis taoensis, but also because many new cimolodontan multituberculate taxa have been described, many of them represented by skull material, since the last major descriptions of T. taoensis material (Broom 1914; Granger and Simpson 1929). This includes specimens of other taeniolabidids, consisting of a dentary of Taeniolabis lamberti (see Simmons 1987) and a few cranial fragments of Kimbetopsalis simmonsae (see Williamson et al. 2016). But more detailed comparisons are possible with cranial specimens of the family most closely related to taeniolabidids, lambdopsalids, in part because of new material of Sphenopsalis (Mao et al. 2016) but, most significantly, nearly complete skull material of the type genus and best-known representative, Lambdopsalis (Miao 1988). Of particular importance also is the recent discovery of a nearly complete skull (and partial postcranial skeleton) of Yubaatar, regarded as the immediate outgroup of Taeniolabidoidea (Xu et al. 2015). In addition, a plethora of skull material of many new genera and species of cimolodontan multituberculates has been discovered in the Late Cretaceous of Asia and Europe, primarily in the form of djadochtatherioids and kogaionids, respectively. The former has been described in considerable detail (e.g., Kielan-Jaworowska 1970a, 1970b, 1971, 1974; Kielan-Jaworowska and Dashzeveg 1978; Kielan-Jaworowska et al. 1986, 2005; Hurum 1994, 1998a; Gambaryan and Kielan-Jaworowska 1995; Hurum et al. 1996; Rougier et al. 1996b, 1997, 2016; Kielan-Jaworowska and Hurum 1997, 2001; Wible and Rougier 2000; Ladevèze et al. 2010; Wible et al. 2019) but only preliminary details of the latter have been published to date (Rădulescu and Samson 1996, 1997; Smith and Codrea 2015; Csiki-Sava et al. 2018).
Cranial Size and Body Mass Estimates

The cranium of Taeniolabis taoensis is the largest known among multituberculates. There are no other multituberculates known, even from fragmentary remains, that might have approached the size of T. taoensis, which we estimate from cranial size relative to a large sample of extant therian mammals to have had a body mass of approximately 35-40 kg (see "Description"). For the comparisons below, except for the closely related taeniolabidid Kimbetopsalis simmonsae, which is not known from the lower dentition but reported to be ~21% smaller than T. taoensis (based on length of M1; Williamson et al. 2016), we primarily use size of m1 because it is the only element in common to compare with most other large multituberculates. The length and width of T. taoensis m1s are 18.7-21.4 mm and 9.4-11.4 mm (n = 35), respectively, and the length and width of its M1s are 21.9-24.4 mm and 10.8-12.1 mm (n = 22), respectively (Simmons 1987: table 2), thus establishing it as the largest known multituberculate, Cenozoic or Mesozoic, from anywhere. The only m1 assigned to the congeneric T. lamberti is 16.0 mm long and 8.0 mm wide (Simmons 1987: table 3). Bubodens magnus, known from a single m1, is the largest known Mesozoic multituberculate from North America (Wilson 1987), but the length (12.8 mm) and width (6.0 mm) of the tooth are less than two-thirds the average dimensions of the m1s of T. taoensis. B. magnus is considerably larger than the largest known Mesozoic multituberculate from Eurasia, Yubaatar zhongyuanensis (m1 length = 9.2 mm; width = 3.6 mm) (Xu et al. 2015), which is less than half the size of T. taoensis.
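The linear size ratios asserted above can be checked directly from the quoted measurements. As a rough illustration only, taking the midpoints of the reported T. taoensis ranges as stand-ins for the averages (an assumption, since per-specimen means are not quoted here):

```latex
\begin{align*}
\bar{L}_{m1} &\approx \tfrac{1}{2}(18.7 + 21.4) = 20.05\ \text{mm}, &
\bar{W}_{m1} &\approx \tfrac{1}{2}(9.4 + 11.4) = 10.4\ \text{mm},\\[2pt]
\textit{Bubodens}:\quad & \tfrac{12.8}{20.05} \approx 0.64, &
& \tfrac{6.0}{10.4} \approx 0.58 \qquad (\text{both} < \tfrac{2}{3}),\\[2pt]
\textit{Yubaatar}:\quad & \tfrac{9.2}{20.05} \approx 0.46, &
& \tfrac{3.6}{10.4} \approx 0.35 \qquad (\text{both} < \tfrac{1}{2}).
\end{align*}
```

Note that these are ratios of linear tooth dimensions; because body mass scales roughly with the cube of linear size, the implied disparities in body size are substantially greater.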
Boffius splendidus is the largest known Cenozoic multituberculate from Europe (M1 length = 15.2-15.3 mm; width = 8.7-9.0 mm; m1 width = 9.6-10.0 mm) (Vianey-Liaud 1979; De Bast and Smith 2017: table 4) and Sphenopsalis nobilis is the largest known Cenozoic multituberculate from Asia (m1 length = 13.6 mm; width = 7.3 mm) (Mao et al. 2016); both are, on average, considerably smaller than T. taoensis. All of the multituberculates, purported or substantiated, from Gondwanan landmasses are much smaller than any of the above forms (see review in Krause et al. 2017).

Cranial Shape

The most apt descriptor of the cranium of T. taoensis is "robust," evoking comparisons with those of extant Australian wombats (Vombatus ursinus) or North American beavers (Castor canadensis). It is almost as wide as long (Table 5) and is, in general, of very heavy construction, with massive, squared zygomatic arches, a short, blunt snout, and prominent sagittal and nuchal crests indicating bulky masticatory and cervical musculature (Fig. 10). By contrast, the crania of other multituberculates are, in general, relatively gracile (see comparison figures including some of the most recently described cimolodontan taxa in Csiki-Sava et al. 2018: suppl. fig. 8; Wible et al. 2019: figs. 21-23). The cranial shape of Lambdopsalis most resembles that of Taeniolabis, the primary differences being the relative narrowness (in dorsal or ventral view) and shallowness (in lateral view) of the snout and the presence of inflated vestibular apparatuses on either side of the occipital condyles that superficially resemble tympanic bullae; these apparatuses are not present in T. taoensis.

Snout Region

Snout Shape

The snout of T. taoensis, relative to those of other multituberculates, is short and broad, although the anterior extension of the premaxilla revealed by DMNH EPV.136300 (Fig.
6) changes these proportions somewhat relative to earlier reconstructions (e.g., contrast the dorsal views of Broom 1914: fig. 6 and Granger and Simpson 1929: fig. 5A with the revised reconstruction in Fig. 10a). The strongest contrast in snout shape is with the long, narrow, tapered snouts of the European Late Cretaceous kogaionids (see Csiki-Sava et al. 2018: suppl. fig. 8D-F for dorsal reconstructions of Kogaionon, Barbatodon, and Litovoi). Taeniolabis and its close relative Lambdopsalis (Miao 1988: fig. 12) are noteworthy in that the sides of the snout, in dorsal view, are relatively parallel and are distinctively set off from the rest of the cranium by strong indentations at the roots of the zygomatic arches. The latter feature is also seen in at least some of the kogaionids, most notably in Kogaionon (Csiki-Sava et al. 2018: suppl. fig. 8D). A strong indentation is also apparent on an incomplete maxilla (UALVP 28212) of the basal taeniolabidoid Valenopsalis illustrated by Fox (2005: pl. 6, fig. 11). By contrast, the snouts of the ptilodontoids Ptilodus (Simpson 1937b: fig. 5), Ectypodus (Sloan 1979: fig. 1; Gingerich et al. 1983: fig. 2A), and Filikomys (Weaver et al. 2021: fig. 1, Extended Data fig. 4) and a variety of djadochtatherioids (see Wible et al. 2019: fig. 23) have a much smoother transition in this region whereas those of the microcosmodontid Microcosmodon (Fox 2005: pl. 1, figs. 1, 2), the eucosmodontid Stygimys (Sloan and Van Valen 1965: fig. 4), and the cimolomyid Meniscoessus (Archibald 1982: fig. 27a) seem to be intermediate in this regard.

Bony Composition of Snout

The snout of T. taoensis receives contributions from the nasals, premaxillae, and maxillae only. Despite the report of a septomaxilla in a specimen (V.J. 451-155) of the paulchoffatiid Pseudobolodon by Hahn and Hahn (1994), this could not be confirmed by Wible and Rougier (2000) or Rougier et al.
(2016); as such, this element appears to be absent in all multituberculates, and we confirm its absence in T. taoensis as well. There is also no evidence for facial exposure of the lacrimal in Taeniolabis; indeed, the bone appears to be entirely absent (Fig. 12), as has also been reported for the lambdopsalid Lambdopsalis (Miao 1988). The lack of facial exposure on the snout in taeniolabidoids stands in contrast to the condition in djadochtatherioids, which have a prominent, generally subrectangular facial exposure of the lacrimal, consistently articulating with the maxilla anteroventrally, the nasal anteromedially, and the frontal posterolaterally (Wible et al. 2019: fig. 23D-K). Extensive facial exposure of the lacrimal is also apparently present in the eobaatarid Sinobaatar (Hu and Wang 2002; Kusuhashi et al. 2009) and in the cimolomyid Meniscoessus (Weil and Tomida 2001). The lacrimal in the stem taeniolabidoid Yubaatar is reported to have "a narrow exposure on the skull roof" (Xu et al. 2015: 6) and, although not reconstructed as present in the kogaionid Barbatodon by Smith and Codrea (2015: fig. 2N, O), Csiki-Sava et al. (2018: suppl. appendix p. 49) scored the facial process of the lacrimal as "very small and arcuate" in both Barbatodon and Litovoi (although it was not depicted in their cranial reconstructions of the two genera [suppl. fig. 8E, F]). A facial process of the lacrimal was also not reconstructed for Kogaionon by either Kielan-Jaworowska et al. (2004: fig. 8.42A1) or Csiki-Sava et al. (2018: suppl. fig. 8D). As such, the condition in kogaionids is unclear. Simpson (1937b: 740) stated that "there is no suggestion of facial exposure of a lacrimal" in the ptilodontid Ptilodus and depicted it as absent (figs. 4, 5) and yet it was scored as "small and arcuate" by Kielan-Jaworowska and Hurum (2001) and subsequent workers; this appears to be in error.
Presence of the lacrimal on the face was indicated as "uncertain" in the microcosmodontid Microcosmodon (Fox 2005: 15), and it is not depicted in the drawings of the neoplagiaulacid Ectypodus by Sloan (1979: fig. 1) and Gingerich et al. (1983: fig. 2A). Concerning relatively early-branching forms, Kielan-Jaworowska et al. (2004: 266) stated that, although the lacrimal was reconstructed as small in paulchoffatiids by Hahn (1969, 1978b), it is "very poorly preserved, and one cannot depend on the reliability of this reconstruction." This may be underscored by the fact that Simmons (1993: char. 55) scored the facial process of the lacrimal in Paulchoffatia (now Meketichoffatia; Hahn 1993) as "large." Mao et al. (2016: char. 93; and derivative character matrices) scored a facial process of the lacrimal as "large, roughly rectangular" in the paulchoffatiid Rugosodon but its condition appears to be unknown (Yuan et al. 2013). In conclusion, given the complete absence of the facial process of the lacrimal in Lambdopsalis, Taeniolabis, and Ptilodus (and possibly other forms), it appears that the character states frequently used to describe facial exposure of the lacrimal should be separated into two characters, with the first documenting presence versus absence and the second documenting size and shape ("small and arcuate" versus "large and roughly rectangular"). The nasal bones of Taeniolabis are very large, both broad and long, and are not confined to the roof of the snout; instead, they extend well posterior to the anterior margins of the orbit. In all other cimolodontan multituberculates except Lambdopsalis (Miao 1988: fig. 12), Yubaatar (Xu et al. 2015: fig. 3b), and perhaps Ectypodus (Sloan 1979: fig. 1), the posterior margin of the nasals lies level with or anterior to the anterior margin of the orbitotemporal fenestra.
The facial (posterodorsal) process of the premaxilla in Taeniolabis does not insert between the nasal and the maxilla as sharply as it does in most djadochtatherioids (Sloanbaatar being a possible exception; see Wible et al. 2019: fig. 21) and at least some kogaionids (Barbatodon and Litovoi; Csiki-Sava et al. 2018: suppl. fig. 8E, F). Relative to other multituberculates, the premaxilla in Taeniolabis houses a massive central incisor (I2), which has been related to an enhanced gnawing function and concomitant reduction of shearing lower premolars (e.g., Simpson 1937a; Bohlin 1945; Gambaryan and Kielan-Jaworowska 1995; Weil and Krause 2008; Weaver and Wilson 2020). The facial process of the maxilla in Taeniolabis is unremarkable other than the fact that it contributes, in the absence of a facial process of the lacrimal, to the entire anterior orbital rim. It does not exhibit the lateral bulging described for djadochtatherioids such as Catopsbaatar, Djadochtatherium, Kryptobaatar, Mangasbaatar, and Tombaatar, a condition that Rougier et al. (2016) and Wible et al. (2019) attributed to possession of an enlarged maxillary sinus.

Tip of Snout and External Nasal Aperture

The external nasal aperture of T. taoensis, although not illustrated previously in anterior view, is profoundly different from what was previously known, primarily because of the presence, as revealed by DMNH EPV.136300 (Fig. 6a-e), of a prominent internarial process on the premaxilla and more anteriorly extended nasals (Fig. 10a-d). Although the dorsal ends of the left and right internarial processes are not preserved in DMNH EPV.136300, we tentatively infer from their shape that they ended in blunt tips (much as in the gondwanatherian Vintana [see Krause 2014a: fig. 1d; 2014b: fig. 5B, C], the marsupialiform Didelphodon [see Wilson et al. 2016: fig. 1a, b, d, f, g, h], or the "Gurlin Tsav deltatheroidan" [Szalay and Trofimov 1996: fig. 22; G. Rougier, pers. comm.])
and did not extend posterodorsally to insert between the anterior ends of the left and right nasals, for which we do not see direct evidence in the form of sutures near the midline. An internarial bar, formed by a complete internarial process (variously also called the internasal, prenasal, dorsal, or ascending process) dividing the external nasal aperture into left and right halves, is found in tetrapods ancestrally and, despite being rarely preserved in fossils, is known to have been retained in a variety of non-mammaliamorph cynodonts (e.g., Beishanodon, Dadadon, Galesaurus, Menadon, Riograndia, tritylodontids) as well as in some early-branching mammaliamorphs such as Sinoconodon, Morganucodon, Hadrocodium, Haldanodon, Necrolestes, and probably Docodon (Kemp 1982; Hopson and Barghusen 1986; Rowe 1986, 1988; Sues 1986; Lillegraven and Krusat 1991; Flynn et al. 2000; Wible and Rougier 2000, 2017; Bonaparte et al. 2001; Luo et al. 2001; Kammerer et al. 2008; Gao et al. 2010; Rougier et al. 2015; Pusch et al. 2019). Rowe (1988: 251) declared the internarial process to be "absent in adult Monotremata, Multituberculata, and Theria, rendering the external nares confluent in postnatal ontogeny." Miao (1988), however, inferred the presence in Lambdopsalis of an internarial process on each premaxilla that extended posterodorsally to insert between the anterior ends of the left and right nasals, thus forming an internarial bar and dividing the external nares. Although not completely preserved on any specimen, Miao's evidence included, in addition to internarial processes projecting dorsally at the anterior ends of the palatal processes of the premaxillae between the left and right I2s, a triangular piece of bone (or the suture for it) insinuated between the anterior ends of the left and right nasals in four specimens. He identified these triangular pieces of bone as premaxillae and inferred that they were dorsal (and posterior) extensions of the internarial processes.
He also suggested that an internarial bar may have been present in Chulsanbaatar. Hurum (1994) found no evidence for an internarial process in Chulsanbaatar (or Nemegtbaatar). Wible and Rougier (2000) also concluded that there was no evidence for an internarial bar in Chulsanbaatar, nor, in fact, in the more plesiomorphic paulchoffatiids Pseudobolodon and Kuehneodon. They also expressed doubt about its existence in Lambdopsalis, noting that the specimens of Lambdopsalis purported to preserve an internarial process insinuated between the nasals are quite different from one another in this regard and that the "process" may simply be broken parts of the nasals. We share these doubts.

Bony Palate

Incisor Positions

The positions of the upper incisors of Taeniolabis (Figs. 6d and 11; Table 5) are similar to those of Lambdopsalis (Miao 1988: fig. 18) and Catopsalis (Middleton 1982: pl. 1, fig. 3) in that the alveolus for I3 lies almost directly posterior to the much larger one for I2 and is closely approximated to it. In all three forms, I2 is much larger than I3, although the disparity in size is considerably greater in Taeniolabis. Furthermore, the alveolus of I3 in Taeniolabis and Lambdopsalis is positioned just inside of the lateral margin of the premaxilla; in other words, just inside of the premaxillary ridge. By contrast, I3 in djadochtatherioids is well separated from I2 by a very sizable anteroposterior diastema and is positioned well medial to the lateral margin of the premaxilla (Wible et al. 2019: fig. 22D-K); this condition also obtains in the eucosmodontid Stygimys (Sloan and Van Valen 1965: fig. 4) and to a slightly lesser degree in the cimolomyid Meniscoessus (Archibald 1982: fig. 27). The condition in the ptilodontid Ptilodus (Simpson 1937b: fig. 6), the ptilodontoid Filikomys (Weaver et al. 2021), and the eobaatarid Sinobaatar (Kusuhashi et al. 2009: figs.
5, 10, 12, 14, 15) is different still: I3 is separated from I2 to a degree intermediate between that seen in djadochtatherioids and that in Taeniolabis and Lambdopsalis, and I3 is positioned at the lateral margin of the premaxilla. I3 is also laterally positioned in the microcosmodontid Microcosmodon (Fox 2005: pl. 1, fig. 1; pl. 3, fig. 3) and the kogaionids Barbatodon (Smith and Codrea 2015: fig. 2G, O) and Litovoi (Csiki-Sava et al. 2018: fig. 1B, C); it is not far separated from I2 in these genera and the size disparity between I2 and I3 appears to be not as great as in other cimolodontans, including Taeniolabis and Lambdopsalis. The size disparity between I2 and I3 is even less in the lambdopsalid Sphenopsalis, in which the two teeth are described as "subequal"; both lie at the margin of the premaxilla (Mao et al. 2016: fig. 3). Whereas I3 lies wholly within the premaxilla in other cimolodontans for which the condition is known, Sphenopsalis and Tombaatar (Rougier et al. 1997) are unusual in that the posterior part of the alveolus is composed of the maxilla. Finally, I3 in the taeniolabidoid Prionessus, although not illustrated, is described as positioned at the lateral margin of the palate and as being much smaller than I2 (Meng et al. 1998). Palatal vacuities have been described, scored, and/or illustrated in a number of cimolodontan multituberculates, including the ptilodontid Ptilodus (Simpson 1937b: fig. 6), the neoplagiaulacid Ectypodus (Sloan 1979: fig. 1), the ptilodontoid Filikomys (Weaver et al. 2021), the eucosmodontid Stygimys (Sloan and Van Valen 1965: fig. 4) fig. 2b), with the latter purportedly having two pairs of vacuities. Two pairs are mistakenly scored for Meniscoessus, Nemegtbaatar, and Stygimys by Mao et al.
(2016, and derivative character matrices); it appears that the choanal opening was interpreted as the anterior rim of a second set of vacuities in these cases, as well as possibly in Yubaatar, which was described as having only one pair but scored as polymorphic with one or two pairs (Xu et al. 2015: char. 81).

Postpalatine Torus

A postpalatine torus is variably developed as a ventrally projecting bulge or distinctive, raised plate at the posterior ends of the palatine bones in several Late Cretaceous djadochtatherioids (e.g., Catopsbaatar, Chulsanbaatar, Guibaatar, Kamptobaatar, Kryptobaatar, Mangasbaatar, Nemegtbaatar, Sloanbaatar, Tombaatar; see Rougier et al. 2016: char. 44 [Nessovbaatar was scored as possessing a postpalatine torus but this was presumably in error because the taxon is not yet represented by cranial material]; Wible et al. 2019: chars. 49, 50). A postpalatine torus has also been identified in the microcosmodontid Microcosmodon (Fox 2005: pl. 1, fig. 1) as well as in the kogaionids Kogaionon (Rădulescu and Samson 1996: fig. 1) and Litovoi (Csiki-Sava et al. 2018) but not described or illustrated in detail; none is mentioned for Barbatodon (Smith and Codrea 2015). In contrast, a postpalatine torus is not described or apparent in the illustrations for Ptilodus (Simpson 1937b: fig. 6) or Meniscoessus (Archibald 1982: fig. 27a). Illustration of the palatal surface of Ectypodus by Sloan (1979: fig. 1) leaves presence or absence of a torus ambiguous. There is variability in how this character is coded, and how it is scored for some taxa, but it was scored as "absent or very faint" by Rougier et al. (2016) and as "absent" by Wible et al. (2019) in both Taeniolabis and Lambdopsalis. The Corral Bluffs material (DMNH EPV.95284, Fig. 4d; DMNH EPV.134082, Fig. 5d), even with poor surface preservation, indicates that a postpalatine torus was probably not present in Taeniolabis.
The condition in Lambdopsalis, however, seems uncertain; Miao (1988: 38) described the posterior part of the palatine as "greatly thickened into a torus." This does not comport with illustrations in Miao (1988: figs. 3, 13, 14, 18) or with observations of original specimens (Rougier, pers. comm.) and, as a result, the torus was scored as either absent or very faint by Rougier et al. (2016) and Wible et al. (2019).

Zygomatic Arch

Composition

The zygomatic arch in Taeniolabis and other multituberculates is formed by the zygomatic processes of the maxilla and squamosal, which meet anterior to arch midlength and which are joined along an oblique suture that ascends anterodorsally (Figs. 7a, c, 8b, c, and 10a, c, d). A small jugal also contributes to the arch (Figs. 8d and 13), probably overlapping the maxillary-squamosal suture on the medial aspect of the arch (toward the top). Where known, the jugal is reduced to a slender, splint-like element on the medial aspect of the zygomatic arch in multituberculates, buttressing the suture between the zygomatic processes of the maxilla and squamosal (Hopson et al. 1989; but see Fox 2005 for the interpretation that the multituberculate 'jugal' is a neomorphic ossification). Although small and rarely preserved, the jugal has been identified in several cimolodontan multituberculates including the Paleogene ptilodontoids Ptilodus and Ectypodus (Hopson et al. 1989) and the Late Cretaceous djadochtatherioids Nemegtbaatar, Chulsanbaatar, Kryptobaatar, and Guibaatar (Kielan-Jaworowska et al. 1986; Hopson et al. 1989; Wible and Rougier 2000; Wible et al. 2019), and possibly in Catopsbaatar (Kielan-Jaworowska et al. 2005). Facets for a jugal are also reported in the Late Jurassic paulchoffatiids Kuehneodon and Pseudobolodon (Hahn 1987; Hopson et al. 1989).
Perhaps of most relevance because of phylogenetic position, however, is the discovery of a relatively large, plate-like jugal on the medial aspect of the zygomatic arch (overlapping the zygomatic processes of the maxilla and squamosal) of Yubaatar (Xu et al. 2015: fig. 3) from the late Late Cretaceous of China. Yubaatar was recovered as the outgroup to Taeniolabididae + Lambdopsalidae by Xu et al. (2015) and Csiki-Sava et al. (2018). The diminutive size of the jugal in multituberculates contrasts strongly with its much larger size in most other cynodonts and especially with its massive size in the gondwanatherians Vintana and Adalatherium (Krause et al. 2014a, 2014b, 2020a, 2020b). In Vintana it is particularly large, larger than in any known Mesozoic mammaliamorph, primarily because of the presence of a deep, scimitar-like flange. In both forms, the jugal contributes to the anteroventrolateral portion of the orbit and extends posteriorly to a level opposite the posterior margin of the glenoid fossa, but does not contribute to the fossa. Among euharamiyidans, a jugal has been recorded for Vilevolodon (Extended Data fig. 1e), Maiopatagium (Extended Data fig. 3a-d), Arboroharamiya (Han et al. 2017: fig. 2), and Shenshou (Huttenlocker et al. 2018). In Vilevolodon it is long (at least two-thirds the length of the zygoma), extending anteriorly to contact the facial process of the maxilla and forming a part of the anterior orbit and posteriorly to border on, but not contribute to, the glenoid fossa on the squamosal. The jugal in Maiopatagium is incomplete posteriorly but reconstructed as also contacting the facial process of the maxilla anteriorly. The jugal of Arboroharamiya, although illustrated as present, was neither described nor scored. The jugal of Shenshou was scored as being long (Huttenlocker et al. 2018: char. 482).
A full survey of the zygomatic arches of multituberculates to determine presence or absence of zygomatic ridges was not possible for this study; such a survey, in our opinion, would require firsthand observation of what can be relatively subtle features. Nonetheless, zygomatic ridges appear to be prominent in eucosmodontids, based on published illustrations. A strong anterior zygomatic ridge is depicted in a reconstruction of the maxilla (based on UMVP 1481-1483) of Stygimys (Sloan and Van Valen 1965: fig. 4); it begins just above P1 and below the infraorbital foramen and arches posterodorsally onto the zygomatic arch. The anterior edge of a prominent anterior zygomatic ridge also appears to be developed above P2 and below the infraorbital foramen in a fragmentary maxilla (AMNH 16534) of Eucosmodon (Granger and Simpson 1929: fig. 17A). Outside of djadochtatherians and eucosmodontids, however, evidence for zygomatic ridges is less clear and somewhat controversial. The ubiquity of these zygomatic ridges on the lateral (rather than the ventral) surface of the zygomatic arches of multituberculates has been questioned by Fox (2005), who contested the observations of Gambaryan and Kielan-Jaworowska (1995) concerning the presence of zygomatic ridges in various "plagiaulacoids" (e.g., paulchoffatiids, Monobaatar, Arginbaatar) and reported their clear absence in several cimolodontans (Valenopsalis, Cimolodon, Ptilodus, Ectypodus, Neoplagiaulax, Microcosmodon) based on firsthand observation of specimens. Examining some of the same illustrations examined by these authors, we have similar reservations and can also confirm that zygomatic ridges, as defined by Gambaryan and Kielan-Jaworowska (1995), are not present in Taeniolabis (see sections on "Maxillae" and "Squamosals" above). We therefore tentatively concur with Simmons (1993), Rougier et al.
(1997), and Fox (2005) that zygomatic ridges are not ubiquitous among Multituberculata and cannot be regarded as an autapomorphy for the clade, that they are present in only relatively derived multituberculates, and therefore that the absence of zygomatic ridges appears to be the plesiomorphic condition for Multituberculata.

Cranial Roof

Composition

The cranium of T. taoensis is roofed by the nasals anteriorly, the parietal posteriorly, and the frontals centrally, with some relatively minor dorsolateral contributions by the premaxillae and maxillae anteriorly. This is the only taeniolabidid species for which the composition of the cranial roof is well known. Although all or most of the frontals are likely preserved in one of the cranial fragments comprising the holotype (NMMNH P-69902) of Kimbetopsalis simmonsae, sutures are not visible (Williamson et al. 2016: fig. 1). From what is illustrated (Williamson et al. 2016: fig. 1A), there are no obvious differences from the frontals of T. taoensis. The frontals of Lambdopsalis, as reconstructed by Miao (1988: fig. 12; see also Wible et al. 2019: fig. 23B) in dorsal view, appear to be very similar to those of Taeniolabis in being small and having the same general shape and sutural contacts. The frontals in known specimens of Sphenopsalis are incomplete but, in dorsal view, are scored as "deeply inserted between the nasals" anteriorly and appear to be less acutely pointed posteriorly than in Taeniolabis and Lambdopsalis (Mao et al. 2016: char. 90, fig. 9B). The frontals of Yubaatar have a squared anterior process inserted between the nasals and a strongly pointed posterior process that resembles that of Taeniolabis and Lambdopsalis (Xu et al. 2015: fig. 3b). The mid-portion of the frontal in Yubaatar differs from that in Taeniolabis and Lambdopsalis in that it extends laterally to contribute to the orbital rim (without dorsal overlap from the parietal).
In dorsal view, the frontals of djadochtatherioids have a long anterior process inserted between the nasals (Wible et al. 2019: fig. 23D-K) that is more acutely pointed than the broad, blunt incursion in taeniolabidoids, as represented by Taeniolabis (Fig. 10a) and Lambdopsalis (Miao 1988: fig. 12), or in the stem taeniolabidoid Yubaatar (Xu et al. 2015: fig. 3b). The frontals also appear to be pointed anteriorly in the eobaatarid Sinobaatar (Kusuhashi et al. 2009: fig. 11). Ptilodontoids do not appear to have a consistent pattern in this regard. In the ptilodontid Ptilodus, the frontals combine to form a short, pointed anterior incursion between the nasals (but the frontonasal suture is more complicated laterally) (Simpson 1937b: fig. 5) whereas in Ectypodus it is the nasals that are inserted between the left and right frontals, at least as depicted in a simple outline drawing by Sloan (1979: fig. 1). Kogaionids also exhibit a variable pattern, with Kogaionon (Rădulescu and Samson 1996: fig. 1) and Barbatodon (Smith and Codrea 2015: fig. 2N) having a mediolaterally more-or-less straight frontal-nasal suture whereas that of Litovoi (Csiki-Sava et al. 2018: suppl. figs. 7A, B, 8F) is depicted as gently curved, convex anteriorly. The long process of the parietal that extends forward, lateral to the dorsal exposure of the frontal, to contact the nasal in Taeniolabis (Fig. 10a) is also seen in Lambdopsalis (Miao 1988: fig. 12), although such contact is less extensive in the latter. Naso-parietal contact was also scored as present in Yubaatar by Xu et al. (2015: suppl. info. p. 4, char. 92), but this scoring is probably in error because such contact is elsewhere listed as absent (suppl. info. p. 18, char. 92) and their fig. 3b shows the frontal intervening between the two elements.
Among cimolodontans, therefore, naso-parietal contact in Taeniolabis and Lambdopsalis appears to be a unique condition, but Wible and Rougier (2000) have also reported such contact in several specimens of the paulchoffatiids Kuehneodon and Pseudobolodon. It appears to be absent, however, in the "plagiaulacidan" Glirodon (Engelmann and Callison 1999: figs. 1, 2). Gambaryan and Kielan-Jaworowska (1995: 45) concluded that the postorbital process in multituberculates "is situated on the parietal and the orbit is very large." The position was regarded as different than in most therians in being relatively posterior and not on the frontal (Novacek 1986). There are, however, some exceptions among placental mammals. For instance, some ctenodactyloid rodents bear a postorbital process on the parietal (Wible et al. 2005) and, as noted by Gambaryan and Kielan-Jaworowska (1995) and Wible and Rougier (2000), some hyracoids have a postorbital process that receives contributions from both the frontal and parietal (reviewed by Barrow et al. 2012). Wible and Rougier (2000: 82) elaborated, in part based on more recent discoveries, that there are three positions of the postorbital process in multituberculates: (1) "on the frontal and inconspicuous" (e.g., various paulchoffatiids, Ptilodus, Ectypodus), (2) "on the parietal and short" (e.g., Chulsanbaatar, Kamptobaatar, Nemegtbaatar), and (3) "on the parietal and long" (e.g., Catopsbaatar, Kryptobaatar). Rougier et al. (2016) added Djadochtatherium, Mangasbaatar, and Tombaatar to those taxa with long postorbital processes on the parietal. Wible et al. (2019) indicated that the postorbital process was probably on the parietal in Guibaatar as well. Weil and Tomida (2001) described the postorbital process of Meniscoessus as unique among multituberculates in being composed of both the frontal and parietal.
Despite the relatively far posterior position in multituberculates, Wible and Rougier (2000) presumed (contra Miao 1988), as do we, that the process in multituberculates marks the upper boundary between the orbit and the temporal fenestra, as it does in therians.

Postorbital Process

The composition of the postorbital process in Lambdopsalis has been controversial. Miao (1988) identified it as on the frontal, but lateral to where the parietal meets the nasal. Gambaryan and Kielan-Jaworowska (1995, based on a personal communication from Jin Meng) disagreed, and concluded that it was formed by the parietal. Because of this controversy, Rougier et al. (2016) and Wible et al. (2019) equivocated about the composition of the postorbital process in Lambdopsalis. A specimen of Lambdopsalis (IVPP V7151.50) reexamined by Mao Fangyuan (personal communication, September 2020) confirms the conclusion that the postorbital process is indeed on the parietal. In addition, she has concluded, based on comparisons with the new material of Lambdopsalis, that the postorbital process in specimen IVPP V19029 of Sphenopsalis is also on the parietal. This pattern is then comparable to that in Taeniolabis (Figs. 4, 7, and 10) and the stem taeniolabidoid Yubaatar (Xu et al. 2015: fig. 3b).

Bony Composition

Composition of the anterior and medial walls of the orbit, as well as the lateral wall of the braincase, remains poorly known for most multituberculates other than djadochtatherioids. The lacrimal contributes to the orbital wall, including the orbital pocket (see below), in those djadochtatherioids for which it can be distinguished (e.g., Nemegtbaatar - Hurum 1994; Kryptobaatar - Wible and Rougier 2000; Mangasbaatar - Rougier et al. 2016; Guibaatar - Wible et al. 2019). It is, however, a minor component relative to the frontal and maxilla. Kielan-Jaworowska et al.
(2004: 266) stated that "[T]he lacrimal has not been found in any Tertiary multituberculate, including Ptilodus (Simpson, 1937a; Krause, 1982a) and Lambdopsalis (Miao, 1988)" (contra Crompton et al. 2018: fig. 1). We have concluded the same for Taeniolabis; it is absent (see Fig. 12 and "Lacrimals" above). Outside of Djadochtatherioidea, lacrimals have been identified in the Late Cretaceous North American cimolodontans Meniscoessus (Weil and Tomida 2001) and Filikomys (Weaver et al. 2021: char. 93), but contributions of the element to the orbital wall have not been recorded. Similarly, as elaborated above (see "Bony Composition of Snout"), although facial exposure of the lacrimal is scored as present in kogaionids (Csiki-Sava et al. 2018: char. 93), its condition within the orbit remains unknown. The presence or absence of a lacrimal in the microcosmodontid Microcosmodon could not be ascertained (Fox 2005). The Late Cretaceous stem taeniolabidoid Yubaatar, however, is reported to have a lacrimal that "occupies the anteromedial corner of the orbit" (Xu et al. 2015: 6). Whether or not the perpendicular lamina of the palatine contributes to the medial orbital wall is unknown in Taeniolabis, simply because the sutures needed to evaluate the condition cannot be conclusively discerned in any of the available specimens. Although uncertainty exists, the palatine is thought to be absent from the medial orbital walls of paulchoffatiids (Hahn 1987; Hurum 1994), Lambdopsalis (Miao 1988), Chulsanbaatar (Hurum 1994), Tombaatar (Rougier et al. 1997), Kryptobaatar (Wible and Rougier 2000), Microcosmodon (Fox 2005), Catopsbaatar (Kielan-Jaworowska et al. 2005), Mangasbaatar (Rougier et al. 2016), and Guibaatar (Wible et al. 2019), seemingly replaced by expansion of the maxilla.
The palatine was described or reconstructed as present in the orbits of Ectypodus (Sloan 1979), Kamptobaatar (Kielan-Jaworowska 1971), and Nemegtbaatar (Hurum 1994, 1998a), but its presence in these taxa has been refuted, or at least questioned, by Rougier et al. (1997, 2016), Wible and Rougier (2000), and Wible et al. (2019). Nemegtbaatar appears to be the only possible remaining exception but, at the present time, Miao's (1988) hypothesis that the absence of orbital exposure of the palatine is a synapomorphy of Multituberculata appears to remain viable (see also Wible 1991; Crompton et al. 2018). We tentatively conclude that, lacking definitive evidence for contributions from the lacrimal and palatine to the orbital wall in Taeniolabis, the wall, at least anteriorly, was likely composed of only the frontal and maxilla. In addition, the parietal formed a small posterior section of the supraorbital rim by way of the long process that extends forward on the cranial roof, lateral to (and overlapping) the frontal and the posterior end of the nasal, to contact the maxilla. Sloan (1981: fig. 6.14) concluded that the anterior margin of the orbit in Taeniolabis was placed too far anteriorly in earlier reconstructions, level with the mesial edge of M1 by Broom (1914: fig. 8) and level with the mesial edge of P4 by Granger and Simpson (1929: fig. 4). Sloan positioned it much farther posteriorly, level with the middle of M1 (see also Wible et al. 2019: fig. 21C). Correspondingly, he also estimated that the orbit was much larger (35 mm in diameter) than reconstructed by earlier workers and thereby suggested nocturnal or crepuscular habits for Taeniolabis. Using the distance between the anterior edge of I2 and the anterior edge of P4 as a guide to standardize anteroposterior length, both the left and right sides of DMNH EPV.136300 (Fig. 6a, b) and the left side of DMNH EPV.95284 (Fig.
4a) demonstrate that this posterior shift by Sloan (1981) was probably excessive and that the anterior margin of the orbit lies approximately level with the mesial edge of M1 (see revised position in Fig. 10a-c), the same level as reconstructed by Broom (1914).

Position and Size

Correspondingly, although there is no indication of a postorbital process on the zygomatic arch of Taeniolabis (contra Broom 1914: figs. 6, 8; Granger and Simpson 1929: figs. 4, 5A, 6), the presence of a protuberant postorbital process on the parietal sets limits on the posterior border of the eyeball and its associated structures. Sloan (1981: fig. 6.14) depicted this process as posterior to the level of the distal end of the cheektooth row and also posterior to the posteriormost extent of the maxillary-squamosal suture on the zygomatic arch. This placement is incorrect. DMNH EPV.95284 (Fig. 4), DMNH EPV.134082 (Fig. 5), and AMNH 16321 (Fig. 7) exhibit direct or indirect evidence demonstrating that the postorbital process lies at a level farther anteriorly, opposite M2 (or even the distal end of M1) and opposite the approximate middle of the oblique maxillary-squamosal suture, almost as far anteriorly as originally reconstructed by Granger and Simpson (1929: figs. 4, 5A) but not as far as reconstructed by Broom (1914: fig. 8). None of the available specimens has a pristinely preserved orbit, the DMNH specimens all exhibiting considerable distortion and AMNH 16321 exhibiting breakage around the periphery of the orbital rims. The least damaged orbit is on the left side of AMNH 16321 (Fig. 7), where the maximum orbital diameter anterior to the postorbital process is only about 25 mm, almost 30% smaller than estimated by Sloan (1981). Based on available evidence, we conclude that the orbit of Taeniolabis was farther forward and much smaller than in Sloan's (1981: fig. 6.14) reconstruction.
In this context, we also agree with Gambaryan and Kielan-Jaworowska's (1995: 65) assessment that the anterior margin of the orbit and the postorbital process are positioned more anteriorly in both Taeniolabis (Fig. 10c) and Lambdopsalis (Miao 1988: fig. 17) than in djadochtatherioids (Wible et al. 2019: fig. 21D-J).

Orbital Pocket

The concept of an "orbital pocket" in multituberculates began with Sloan's (1979) reconstruction of the jaw musculature in the neoplagiaulacid Ectypodus. It was seemingly predicated on the observation that there was a space anterior to the eyeball and its adnexa (e.g., extraocular muscles, fat, lacrimal gland, nerves, vessels) that could not have been occupied by those structures, as has also been speculated for the extinct South American marsupialiform Argyrolagidae (Simpson 1970). Sloan (1979: 495) asserted that this pocket, "in front of the orbit and the temporal muscle," was the site of origin for the anterior deep masseter muscle. From his reconstruction (fig. 3B), it is clear that Sloan meant a pocket in front of the eyeball but still within (not in front of) the osseous orbital cavity. But he also appears to have envisioned the muscle overlapping the anterior edge of the eyeball superficially (and the temporalis muscle overlapping the posteroventral edge superficially), which is unlikely. Another difficulty with Sloan's reconstruction, as noted by Gambaryan and Kielan-Jaworowska (1995), is that the eye is placed opposite the postorbital process rather than anterior to it. Gambaryan and Kielan-Jaworowska (1995) formally designated the orbital pocket (theca orbitalis) and identified it in djadochtatherioids as serving as the origin for pars anterior of the medial masseter muscle (anterior deep masseter muscle of Sloan 1979), which inserted into the masseteric fovea on the dentary.
Kielan-Jaworowska (1971) had earlier identified a fossa in the same general area in the djadochtatherioid Kamptobaatar, later named the "orbitonasal fossa" by Kielan-Jaworowska et al. (1986), who speculated that it contained a gland. Rougier et al. (1997) identified the orbital pocket and orbitonasal fossa as the same space (i.e., that the two structures were synonymous) but Gambaryan and Kielan-Jaworowska (1995: 52) indicated that there was both an orbital pocket, containing pars anterior of the medial masseter muscle, and an orbitonasal fossa (found in Kamptobaatar, Sloanbaatar, and other djadochtatherioids), possibly containing a gland lying "at the posterodorsal end" of the orbital pocket (see also Wible and Rougier 2000; Kielan-Jaworowska et al. 2004). Gambaryan and Kielan-Jaworowska (1995) indicated that the anterior part of the medial masseter "rarely" originates from the orbital pocket in therian mammals and cited, as the only extant example, bathyergid hystricomorph rodents (blesmols or African mole-rats). Cox and Faulkes (2014) and Cox et al. (2020) identified the masticatory muscle of bathyergids in this region as the infraorbital portion of the zygomaticomandibularis (= pars anterior of the medial masseter of Gambaryan and Kielan-Jaworowska 1995 and other workers, although other names have also been applied; see nomenclature in Druzinsky et al. 2011), originating from the anterior wall of the orbit and the zygomatic process of the maxilla. In most bathyergids, the muscle's origin is confined to these areas, but in two forms, Cryptomys and Fukomys, a small slip passes through the infraorbital foramen to originate on the rostrum. Wible and Rougier (2000) and Wible et al.
(2019) reviewed the distribution of the orbital pocket in multituberculates, stating that it had been observed in a number of djadochtatherioids (e.g., Catopsbaatar, Chulsanbaatar, Guibaatar, Kamptobaatar, Kryptobaatar, Mangasbaatar, Nemegtbaatar, Sloanbaatar, and Tombaatar) as well as in the neoplagiaulacid Ectypodus (Sloan 1979), the taeniolabidid Taeniolabis (Sloan 1981), and the lambdopsalid Lambdopsalis (Gambaryan and Kielan-Jaworowska 1995: 65; the evidence for its "reduced" existence in Lambdopsalis was in the form of a personal communication from Desui Miao). It was stated to be absent in the paulchoffatiids Meketichoffatia and Pseudobolodon by Wible and Rougier (2000) and none has been reported in the Kogaionidae (Rădulescu and Samson 1997; Smith and Codrea 2015; Csiki-Sava et al. 2018), the microcosmodontid Microcosmodon (Fox 2005), the ptilodontid Ptilodus (Simpson 1937b; Krause and Wall 1992), or, to our knowledge, any other multituberculate. Sloan (1981) opined that the space identified as the orbit by Broom (1914) and Granger and Simpson (1929) in Taeniolabis is actually the pocket for the origin of the anterior deep masseter muscle (= anterior part of medial masseter = infraorbital part of zygomaticomandibularis). While we cannot rule out the existence of an orbital pocket in Taeniolabis, the evidence for one is much less clear than in djadochtatherioids. In djadochtatherioids, this pocket is deep anterodorsally, has a well-developed roof formed by the frontal, lacrimal, and maxilla, is open ventrally, and is demarcated posteriorly by a more-or-less vertical ridge, the orbital ridge, on the medial wall. The prominence of the orbital roof is particularly evident in ventral views of the cranium (compare those of the djadochtatherioids with those of the ptilodontoids and taeniolabidoids depicted by Wible et al. 2019: fig. 22).
Such a prominent roof is not present in Taeniolabis (which does not have a lacrimal) and an orbital ridge cannot be identified in the available sample. We therefore conclude that, if an orbital pocket did exist in Taeniolabis, it was much smaller than stated by Sloan (1981) and certainly much smaller and shallower (relatively) than in djadochtatherioids. Whether the orbital pocket, if it existed, contained a muscle of mastication or a gland (other than the lacrimal gland) or both, we do not know. Finally, the infraorbital foramen in Taeniolabis is of only modest proportions and therefore it seems unlikely that any part of the infraorbital portion of the zygomaticomandibularis passed through it, as speculated by Sloan and Van Valen (1965) for Stygimys (but see Gambaryan and Kielan-Jaworowska 1995 for a contrasting opinion).

Lateral Braincase Wall

Composition

Unfortunately, sutures in the lateral braincase wall in the available specimens of Taeniolabis cannot be identified, thus rendering moot any possible comparisons with other multituberculates concerning relative contributions. The situation is further exacerbated by the fact that sutures in the lateral wall of the braincase are generally very difficult to identify and poorly known in multituberculates (Kielan-Jaworowska et al. 2004; Crompton et al. 2018).

Foramina

Similarly, foramina in the lateral wall of the braincase are exceedingly difficult to discern in the available specimens of Taeniolabis, with the exception of a possible foramen (and groove) for the ramus superior of the stapedial artery in DMNH EPV.95284 and UCMP 98083 (?rsf and ?rsg in Figs. 4b and 8b, respectively) and what appears to be a single large foramen for the mandibular division of the trigeminal nerve in UCMP 98083 (fmV in Fig. 14d).
Typically, multituberculates have two foramina for the mandibular division of the trigeminal nerve in the anterior lamina and/or petrosal, which, following Simpson (1937b) and Wible and Rougier (2000), are identified as the foramen ovale inferium and the foramen masticatorium. This is the case for the paulchoffatiids Kuehneodon and Pseudobolodon, the ptilodontoids Ptilodus and Mesodma, and the djadochtatherioids Catopsbaatar, Chulsanbaatar, Kamptobaatar, Kryptobaatar, Nemegtbaatar, Sloanbaatar, and cf. Tombaatar (Wible and Hopson 1995; Wible and Rougier 2000; Ladevèze et al. 2010). Two foramina are also described for Lambdopsalis, but they are said to be in the alisphenoid (Miao 1988). Four foramina are noted for Mangasbaatar, of which three are considered to represent the foramen masticatorium (Rougier et al. 2016). The condition in Taeniolabis appears to be most similar to that of Guibaatar (Wible et al. 2019), with a single large foramen, although, as has been noted for Guibaatar, we cannot exclude the possible presence of a bony bar separating the foramen.

Mesocranium

Ridges and Troughs Posterior to the Choanae

Based on descriptions and illustrations in the literature, there appear to be several documented patterns of ridges and troughs in the ventral part of the mesocranium (in the basipharyngeal canal lying posterior to the choanae and anterior to the basioccipital) of multituberculates: (1) the Late Jurassic paulchoffatiid Pseudobolodon and the Paleocene ptilodontid Ptilodus have paired, longitudinally oriented pterygopalatine ridges (sensu Barghusen 1986) medial to the ventral margin of the alisphenoid and lateral to the vomer, presphenoid, and basisphenoid in the midline, with the resulting lateral pterygopalatine trough being substantially narrower and less deep than the medial trough (Hahn 1981; Wible and Rougier 2000); (2) Kamptobaatar (Kielan-Jaworowska 1970a, 1970b, 1971), Kryptobaatar (Wible and Rougier 2000), and Nemegtbaatar (Kielan-Jaworowska 1974; Kielan-Jaworowska et al.
1986) also have a medial and a lateral pterygopalatine trough on each side, but they are much more equal in development (Kielan-Jaworowska 1971; Kielan-Jaworowska et al. 1986; Wible and Rougier 2000); (3) the Late Cretaceous djadochtatheriids Guibaatar (Wible et al. 2019) and Mangasbaatar (Rougier et al. 2016), lacking a prominent vomer in the mesocranium, have only a single medial basipharyngeal channel, bounded by the left and right pterygopalatine ridges; and (4) the Paleocene Lambdopsalis does not have pterygopalatine ridges and there is therefore only a single trough between the lateral wall of the basipharyngeal canal and the midline crest formed by the vomer, presphenoid, and basisphenoid (Miao 1988). Although this region is not particularly well preserved and does not show any sutures in any of the specimens of Taeniolabis, it appears to resemble the condition in Lambdopsalis in this regard. Kielan-Jaworowska (1970b, 1974) and Kielan-Jaworowska and Hurum (1997) have emphasized the significance of the position of the pterygoid bones, medial to the lateral walls of the basipharyngeal canal, as a possible multituberculate synapomorphy, but this condition does not appear to apply to Lambdopsalis and Taeniolabis. Wible and Rougier (2000) and Rougier et al. (2016) discussed possible functions of the pterygopalatine troughs.

Basicranium

Petrosal

The promontorium in Taeniolabis is tubular, slender, and anteromedially-posterolaterally oriented, as it is in other multituberculates (Miao 1988; Wible and Rougier 2000; Ladevèze et al. 2010; Rougier et al. 2016; Wible et al. 2019). Taeniolabis does not appear to bear any distinct grooves for the internal carotid or stapedial arteries on the promontorium, although this might be owing to poor surface preservation. Several multituberculates exhibit a Y-shaped pattern of grooves on the promontorium for the internal carotid artery passing anteriorly and the stapedial artery passing posteriorly toward the fenestra vestibuli.
This pattern is clearly identifiable in the djadochtatherioids Kryptobaatar (Wible and Rougier 2000), cf. Tombaatar (Ladevèze et al. 2010), and Mangasbaatar (Rougier et al. 2016), as well as in the ptilodontoid Ectypodus (Sloan 1979). In addition, Wible and Hopson (1995) and Kielan-Jaworowska et al. (1986) reconstructed the stapedial artery as crossing the promontorium posterolaterally from the internal carotid artery toward the fenestra vestibuli in Valenopsalis joyneri (previously a species of Catopsalis; see Williamson et al. 2016). A groove for the proximal stapedial artery is also indicated along the lateral aspect of the promontorium passing toward the fenestra vestibuli in Litovoi (Csiki-Sava et al. 2018: fig. S6); a transpromontorial groove for the internal carotid is not illustrated. For Lambdopsalis, Miao (1988: 30) described a groove for the stapedial artery "along the lateral side of the promontorium" passing anteromedially from the posteromedial rim of the fenestra vestibuli. In contrast, the transpromontorial groove for the internal carotid artery and the groove for the stapedial artery are absent in Guibaatar (Wible et al. 2019). Only a slight indentation on the ventral margin of the fenestra vestibuli as well as the presence of a foramen medial to the crista parotica indicate the presence of the stapedial artery in Guibaatar. Absence of a stapedial groove is also noted for paulchoffatiids (Lillegraven and Hahn 1993). Similar to the condition in other multituberculates, the lateral flange contacts the promontorium medially in Taeniolabis; it is, however, unclear whether the medially inflected lateral flange houses a separate canal for the ramus inferior of the stapedial artery and/or the post-trigeminal vein. A small foramen is present at the contact between the lateral flange and promontorium, but its course cannot be traced.
In most multituberculates, the stapedial artery is reconstructed to branch into a superior ramus, passing through the crista parotica, and an inferior ramus passing anteriorly within the lateral space with the facial nerve (between the lateral flange and promontorium). A canal for the ramus inferior (canal for ?maxillary artery of Kielan-Jaworowska et al. 1986; post-trigeminal canal of Rougier et al. 1996a) within the medially inflected lateral flange has been identified in the ptilodontoid cf. Mesodma (Kielan-Jaworowska et al. 1986; Wible and Hopson 1995), the taeniolabidoids cf. "Catopsalis/Valenopsalis" (Kielan-Jaworowska et al. 1986; Wible and Hopson 1995) and Lambdopsalis (Miao 1988), the djadochtatherioid Kryptobaatar (Wible and Rougier 2000), and the cimolomyid ?Meniscoessus (Luo 1989). In contrast, the ramus inferior and post-trigeminal vein are reconstructed to pass with the facial nerve through the secondary facial foramen endocranially into the cavum supracochleare and cavum epiptericum in Guibaatar (Wible et al. 2019). In addition to variation in pathways for the canal for the ramus inferior, various patterns exist for passage of the ramus superior of the stapedial artery, prootic sinus (tympanic aperture of the prootic canal), and facial nerve (secondary facial foramen) among multituberculates; several other foramina merge in three different patterns.

The jugular fossa is scored as 'large and deep' in all Djadochtatherioidea for which this feature is known, as well as in Lambdopsalis (Rougier et al. 2016; Wible et al. 2019). Although the poor preservation of the Taeniolabis specimens prevents a precise reconstruction of the boundaries of the fossa, it is clear that it was likewise large and deep. In contrast, Wible and Rougier (2000) described the jugular fossa as shallow but not necessarily small in the ptilodontoids Ptilodus and Ectypodus, with it being scored as 'small and shallow' in Ptilodus by Rougier et al.
(2016) and in the basal multituberculate Pseudobolodon by Wible et al. (2019).

Inner Ear

Even with the increasing use of µCT scanning, publications based on virtually reconstructed endocasts of multituberculate inner ears, capable of capturing the external and internal morphology of the petrosal in great detail, are still relatively sparse. The most detailed description based on a µCT scan of the inner ear of the djadochtatherioid cf. Tombaatar was published by Ladevèze et al. (2010), but the resolution of the scan and preservation of the specimen left open several questions about morphology. A more recent study by Csiki-Sava et al. (2018: suppl. fig. 6) provided images and a brief description of the inner ear of the kogaionid Litovoi but did not provide detail sufficient for a full comparison with those of other multituberculates. An image of a 3D reconstructed inner ear of the cimolomyid Meniscoessus is included in Luo et al. (2016: fig. 6.9), in addition to a description based on low-resolution CT scans by Luo and Ketten (1991), and a published abstract by Weil and Tomida (2017). Conference abstracts based on 3D reconstructed inner ears have also been published for the paulchoffatiid Pseudobolodon (Schultz and Martin 2015) and the neoplagiaulacid Neoplagiaulax (Kotrappa and Farke 2015). To date, the most detailed descriptions of multituberculate inner ear morphology are based on fragmented petrosal morphology, histological thin sections, or x-ray or low-resolution CT scanning, including those of the closely related taeniolabidoid Lambdopsalis (Miao 1988; Meng and Wyss 1995), several paulchoffatiids (Lillegraven and Hahn 1993), the djadochtatherioids Chulsanbaatar and Nemegtbaatar (Hurum 1998b), and three not further specified multituberculate petrosals from the Hell Creek Formation (Fox and Meng 1997). As such, the inner ear of Taeniolabis (Fig.
15), even though not pristinely preserved, provides valuable insight into the inner ear morphology of multituberculates. All multituberculates known to date exhibit a cochlear canal that is only gently bent laterally, if at all. Impacting a comparison in degree of curvature of the cochlear canal among multituberculates is a paucity of accurate and comparable measurements. In many cases, comparisons are solely reliant on qualitative descriptions of very small differences in degree of bending. For example, Miao (1988) described the cochlear canal as "straight" in Lambdopsalis, confirmed by Luo and Ketten (1991), whereas Meng and Wyss (1995: 142) stated that it "bends slightly laterally." Based on the images provided by Meng and Wyss (1995: fig. 2b), we concur that the cochlear canal appears to slightly bend laterally in Lambdopsalis, perhaps a little less so than that in Taeniolabis (49°). A "rod-like and straight" morphology was noted by Luo and Ketten (1991: 225) for Valenopsalis and ?Meniscoessus, but Luo et al. (2016: fig. 6.9) illustrated, with higher-resolution imaging, a slight bending in the 3D reconstruction of the Meniscoessus cochlear canal. This is supported by Weil and Tomida (2017), who described the cochlear canal as "curved." A "slightly curved" cochlear canal has also been previously mentioned for Ptilodus (Simpson 1937b: 751), the unidentified multituberculates from Hell Creek (Fox and Meng 1997: 274), and cf. Tombaatar (Ladevèze et al. 2010: 325). Whether the cochlear canal is straight, slightly bent, or variable in djadochtatherioids is uncertain. Hurum (1998b: 83) described the cochlear canal as "straight" in Nemegtbaatar and Chulsanbaatar, although a very slight lateral bend can be seen in at least Chulsanbaatar (ZPal MgM-I/157), but high-resolution µCT reconstructions would be necessary to more accurately measure the degree of curvature in those forms.
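As an illustration of how such a curvature value can be quantified once a µCT-derived centerline of the cochlear canal is available, the angle between the basal and apical directions of the digitized canal can be computed from a few centerline landmarks. This is only a sketch of one possible measurement convention; the centerline coordinates below are hypothetical and are not taken from any of the specimens described here.

```python
import math

def bearing(p, q):
    """Unit direction vector from point p to point q (2D projection)."""
    dx, dy = q[0] - p[0], q[1] - p[1]
    n = math.hypot(dx, dy)
    return (dx / n, dy / n)

def curvature_angle(centerline):
    """Angle (degrees) between the basal and apical segments of a
    digitized cochlear-canal centerline, one simple proxy for the
    'degree of bending' discussed in the text."""
    base_dir = bearing(centerline[0], centerline[1])
    apex_dir = bearing(centerline[-2], centerline[-1])
    dot = base_dir[0] * apex_dir[0] + base_dir[1] * apex_dir[1]
    dot = max(-1.0, min(1.0, dot))  # guard against floating-point rounding
    return math.degrees(math.acos(dot))

# Hypothetical centerline: a canal whose apical segment is deflected
# ~49 degrees from the basal direction (cf. the value reported for
# Taeniolabis above).
canal = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.2),
         (2.0 + math.cos(math.radians(49)), 0.2 + math.sin(math.radians(49)))]
print(round(curvature_angle(canal)))  # ~49
```

With denser centerline sampling, the same approach extends naturally to a cumulative (arc-summed) angle, which is closer to how total coiling is reported for strongly curved cochleae.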
More recently, the djadochtatheriid Guibaatar was described as having a cochlear canal that is "subtly more curved laterally" than in Nemegtbaatar and Chulsanbaatar (Wible et al. 2019: 293). The degree of curvature is greater in the kogaionid Litovoi (76° based on measurements of Csiki-Sava et al. 2018: suppl. fig. 6d) and even greater in the paulchoffatiid Pseudobolodon (180°; Schultz and Martin 2015). Although some variation in the degree of curvature of the cochlear canal appears to be present in multituberculates, the cochlear canal appears to be much less curved in multituberculates than in gondwanatherians (210°, Hoffmann and Kirk 2020), basal cladotherians (> 270°, Rougier et al. 1992; Ruf et al. 2009; Luo et al. 2011, 2012; Harper and Rougier 2019), monotremes (> 140°, Schultz et al. 2017), and docodontans (Ruf et al. 2013; Panciroli et al. 2018), but is greater than in the eutriconodontan Priacodon and the stem therian Höövör petrosals (Harper and Rougier 2019). The relatively gentle bending of the cochlear canal in some derived multituberculates might be an apomorphic feature of these groups as lateral bending to a greater degree (> 140°) is common in mammaliaforms and appears to also be present in the most basal multituberculates, paulchoffatiids. It is unclear whether the cochlear canal in multituberculates contained a lagena macula similar to that of extant monotremes. Several multituberculates exhibit a slightly expanded apex of the cochlear canal but none of them shows any signs of a separate canal for the lagenar nerve. A gentle expansion of the apex is present in Taeniolabis. Miao (1988) did not discuss the presence or absence of a lagena in the closely related Lambdopsalis, but Meng and Wyss (1995: 142) described the basal part of the cochlear canal as "slightly narrower" than the anterior part; they did not, however, specifically tie this to the presence of a lagena.
However, the cochlear canal in Lambdopsalis appears to be gently enlarged to a similar degree as in Taeniolabis (Meng and Wyss 1995: fig. 2c). In Valenopsalis and ?Meniscoessus, the apex of the cochlear canal does not appear to be inflated in the reconstructions provided by Luo and Ketten (1991: fig. 3a, b), but Weil and Tomida (2017) noted an inflated apex for Meniscoessus, which is corroborated by the reconstruction provided in Luo et al. (2016: fig. 6.10). The early studies by Luo and Ketten (1991) and Luo et al. (1995) employed relatively coarse CT data, which might not have provided the high resolution necessary to detect such an inflation. Presence of a lagena has also been suggested based on apical inflation in one of the unidentified multituberculates (UALVP 26039) from the Hell Creek Formation (but not in UALVP 34144 and UALVP 26037; Fox and Meng 1997), the paulchoffatiid Pseudobolodon (Schultz and Martin 2015), the djadochtatherioid cf. Tombaatar (Ladevèze et al. 2010), and the kogaionid Litovoi (Csiki-Sava et al. 2018), whereas the cochlear canal has been described as straight and not showing "any signs of a lagena" in Chulsanbaatar and Nemegtbaatar (Hurum 1998b: 83). Whether a bony support system for the cochlear nerve (e.g., cribriform plate, primary bony lamina, secondary bony lamina, osseous ganglion canal) was present in Taeniolabis is uncertain due to poor preservation in the available specimens. The cochlear nerve appears to enter the cochlear canal through a single foramen, but the contrast between the sediment infill and bone is so poor in all of the specimens that it is impossible to differentiate any internal morphology of the cochlear canal. However, presence of a single cochlear foramen would be consistent with other descriptions of multituberculate inner ears. Most multituberculates described to date lack a cribriform plate and the cochlear nerve enters through a single foramen (e.g., Meng and Wyss 1995; Fox and Meng 1997; Ladevèze et al.
2010; Luo et al. 2016; Csiki-Sava et al. 2018; Wible et al. 2019). In addition, Fox and Meng (1997) described a longitudinal ridge on the inner surface of the lateral wall of the cochlear canal in an unidentified multituberculate (UALVP 26039) from the Hell Creek Formation as marking the most proximal course of the cochlear nerve within the canal. Similarly, a bony primary or secondary lamina is absent in most multituberculates (e.g., Meng and Wyss 1995; Schultz and Martin 2015; Csiki-Sava et al. 2018), although possible fragments within the cochlear canal that could represent bony laminae have been noted for some djadochtatherioids (Hurum 1998b; Ladevèze et al. 2010). The most prominent feature of the multituberculate inner ear is the enlarged vestibule. The greatest enlargements are seen in Lambdopsalis, Meniscoessus, Valenopsalis, and at least one of the Hell Creek multituberculates, UALVP 26039 (Miao 1988; Luo and Ketten 1991; Meng and Wyss 1995; Fox and Meng 1997; Luo et al. 2016; Weil and Tomida 2017). The vestibule of Taeniolabis is also expanded, but not quite to the same degree as in those taxa. In Lambdopsalis, the expansion of the vestibule is so great that the endocast of the lateral and even part of that of the posterior semicircular canal are confluent with the endocast of the vestibule; at least the osseous lateral semicircular canal is also confluent with the osseous housing of the vestibule in Meniscoessus. This is not the case in Taeniolabis; all osseous semicircular canals are free from the vestibule. In most multituberculates, the anterior and posterior semicircular canals fuse to form a crus commune, whereas the lateral and posterior semicircular canals remain separate (i.e., absence of a secondary crus commune). This is at least the case in Lambdopsalis, Meniscoessus, Taeniolabis, and Nemegtbaatar (Meng and Wyss 1995; Hurum 1998b; Luo et al. 2016). A very short secondary crus commune is present in cf. Tombaatar (Ladevèze et al.
2010) and the lateral semicircular canal is too incomplete in Litovoi to assess whether a secondary crus commune is present (Csiki-Sava et al. 2018). The size of the radius of curvature of the semicircular canals varies across multituberculates. In Lambdopsalis, the lateral semicircular canal is much larger than either the anterior or posterior canal, likely due to the presence of a greatly inflated vestibule (Hurum 1998b). In the paulchoffatiid Pseudobolodon, the anterior semicircular canal is much larger than either the posterior or lateral canal (Schultz and Martin 2015), whereas the semicircular canals are fairly similar in size in Nemegtbaatar (Hurum 1998b), cf. Tombaatar (Ladevèze et al. 2010), and Taeniolabis.

Occipital Region

This region is surprisingly poorly known in multituberculates. Kielan-Jaworowska et al. (2004: 268) stated that, with the exception of djadochtatherioids (e.g., Kamptobaatar and Sloanbaatar, Kielan-Jaworowska 1971; Kryptobaatar, Wible and Rougier 2000) and Lambdopsalis (see Miao 1988), the occipital plate has not been reconstructed in any other multituberculate. Since then, a partial cranium of the kogaionid Litovoi preserving much of the occiput was discovered, although few anatomical details were described or illustrated (Csiki-Sava et al. 2018: suppl. fig. 3E). We can now add Taeniolabis to the list of multituberculate taxa preserving the occipital region, and some indications of overall proportions, but, unfortunately, none of the specimens in our sample allows delineation of sutures between the occipital bone and the other elements that make up the occiput. We are therefore unable to ascertain, for instance, if the parietal contributed to the occipital plate, or if it was restricted to the dorsal cranial roof, as in djadochtatherioids such as Kamptobaatar, Kryptobaatar, and Sloanbaatar (Kielan-Jaworowska 1971; Wible and Rougier 2000), the stem taeniolabidoid Yubaatar (Xu et al. 2015), and the lambdopsalid Lambdopsalis (Miao 1988).
In these forms, the suture between the parietal and the occipital appears to follow, if not bisect, the nuchal crests. In Mangasbaatar, the supraoccipital portion of the occipital does not reach the nuchal crest but is very close to it (Rougier et al. 2016). Also, one important distinction between the occipital regions in Taeniolabis and Lambdopsalis is that, in the former, it is concave (best seen in DMNH EPV.95284 [Fig. 4f] and DMNH EPV.134082 [Fig. 5f]), whereas in the latter, it is strongly convex, a result of the inflated vestibular apparatus (Miao 1988: figs. 12, 17, 18). No sutures are recognized between the various components of the occipital (supraoccipital, paired exoccipitals, basioccipital) in the adult specimens in our sample of Taeniolabis. Intra-occipital synchondroses are generally the earliest among cranial sutures to fuse, at least in extant mammals (e.g., Wilson and Sánchez-Villagra 2009; Goswami et al. 2013; Rager et al. 2014), and are typically fused in multituberculates (Kielan-Jaworowska et al. 1986). Wible and Rougier (2000: fig. 16), however, documented a suture between the exoccipitals and the supraoccipital in one specimen (PSS-MAE 101) of Kryptobaatar, and Rougier et al. (2016) identified sutures within the occipital bone in Mangasbaatar. Similarly, we see evidence of sutures between the paired exoccipitals and the supraoccipital in the juvenile cranium (UCMP 98083, Fig. 8f) of T. taoensis. As for Kryptobaatar and Mangasbaatar, the supraoccipital of Taeniolabis appears to contribute a short, median portion of the dorsal margin of the foramen magnum, as is also the case in most non-mammaliaform cynodonts and fossil and extant mammaliaforms (reviewed in Krause et al. 2014b).

Dentary

Direct examination of DMNH EPV.130973 (Fig. 9a-c) and AMNH 16310 (Fig. 9d-f), and indirect evaluation of several AMNH specimens (Fig.
3e-n) from the San Juan Basin through high-resolution photographs, contribute several fundamentally new aspects to our knowledge of dentary morphology in Taeniolabis taoensis, most of them pertaining to the ascending ramus. The mandibular condyle is not as large and globular as depicted in previous reconstructions, nor is it suspended on a long, posterodorsally directed neck (peduncle). The coronoid process is intermediate in height relative to some earlier reconstructions and the position of its anterior border is more variable than previously characterized, thus eliminating a feature thought to differentiate T. taoensis from T. lamberti (see below). We can also establish that a masseteric protuberance is not present in T. taoensis (contra Kielan-Jaworowska et al. 2005) but that a masseteric fovea is present in the anterior part of the masseteric fossa. Among other taeniolabidids, the dentary of the congeneric T. lamberti is represented by a single specimen, the holotype (CCM 70-110; Simmons 1987), but is unknown for Kimbetopsalis simmonsae, although Williamson et al. (2016) allowed that an edentulous horizontal ramus (AMNH 3030) referred by Sloan (1981) to Catopsalis foliatus might possibly be referable to K. simmonsae (Lucas et al. 1997 had earlier suggested that AMNH 3030 might belong to Taeniolabis). Simmons (1987) contended that the coronoid process in T. lamberti arises from lateral to the anterior half of m2 whereas that of T. taoensis arises from lateral to the posterior half of m1. The dentaries of T. taoensis described above indicate that there is variation in this feature, therefore likely eliminating this character as one that differentiates the two species. In other comparable parts of their anatomy, the dentaries of T. taoensis and T. lamberti also appear to be very similar, other than the tentative observation by Simmons (1987: 802) that the latter "appears slightly smaller and less massive."
Although most comparable measurements of the dentary are not possible, the width of the horizontal ramus below m1 (12.1 mm in CCM 70-110; 14.6 mm in DMNH EPV.130973; 14.9 mm in AMNH 16310) and the relative lengths of the cheektooth row (34.1 mm in CCM 70-110, Simmons 1987; 36.5 mm in DMNH EPV.130973; 38.2 mm in AMNH 16310) support this observation. In the sister group of taeniolabidids, the lambdopsalids, consisting of Lambdopsalis bulla and Sphenopsalis nobilis, the dentary is well known in the former (Miao 1986: fig. 7; 1988: figs. 4, 26, 31, 32). The horizontal ramus of the dentary of L. bulla resembles that of T. taoensis in a number of features but is generally more slender, does not exhibit strong divisions within the masseteric fossa, and the mental foramen is more anteriorly positioned. The ascending ramus of L. bulla, however, is very different from that of T. taoensis in the following features: (1) longer condylar neck; (2) relatively larger and more posteriorly (rather than posterodorsally) directed condyle; (3) more deeply incised mandibular notch; and (4) lower, more reclined coronoid process. Finally, the ventral margin of the dentary in L. bulla, although broad and strongly tilted (Miao 1988: fig. 4), is straighter (less sinuous) than that of T. taoensis in lateral view. In the only other currently accepted lambdopsalid, Sphenopsalis nobilis, the dentary is represented only by a few fragments (Mao et al. 2016). Of the fragments illustrated in Mao et al. (2016: figs. 6, 7), there is nothing that would conclusively distinguish S. nobilis from T. taoensis in dentary morphology, but Mao et al. (2016: 438) refer to the presence of "a long condylar process that continues posteriorly to the mandibular condyle" and a "bulbous" condyle in the former, both of which are not present in T. taoensis. In Prionessus, a genus whose inclusion in Lambdopsalidae is debated (Xu et al. 2015; Mao et al. 2016; Scott et al. 2016; Williamson et al. 2016; Csiki-Sava et al.
2018), the dentary is represented by an edentulous, fragmentary specimen (the holotype, AMNH 20423; Matthew and Granger 1925: fig. 6), an even more fragmentary specimen preserving m1-2 (AMNH 21710; Matthew et al. 1928: fig. 1), and most of a horizontal ramus bearing i1, p4, m1-2 (IVPP 11132; Meng et al. 1998: fig. 3b). Little can be gleaned from the illustrations of these specimens that would serve to differentiate them from the dentary of T. taoensis, except perhaps that the dentary of Prionessus is more gracile, with a ventral border that is more sinuous, and a diastema that is less strongly concave dorsally. Among taeniolabidoids outside of taeniolabidids and lambdopsalids, the dentary is known in the controversial genus Catopsalis, which is generally recognized to be not monophyletic (e.g., Simmons and Miao 1986; Williamson et al. 2016). Although C. calgariensis (see Russell 1926; Simpson 1927; Middleton 1982; Higgins 2003), C. waddleae (see Buckley 1995; Johnston and Fox 1984), and C. kakwa (see Scott et al. 2016) are only known from isolated teeth, the dentary is at least partially known in specimens of C. alexanderi, C. fissidens, and C. foliatus. Concerning the dentary of C. alexanderi, Middleton (1982: 1201) stated, "[The] Granger and Simpson (1929: p. 611) description of the mandible of Taeniolabis would serve equally well for C. alexanderi. The jaw is not quite as robust; the masseteric fossa not as sharply defined anteriorly." In addition to these observations, it appears (see Middleton 1982: pl. 1, figs. 1, 2, 4) that the anterior limit of the masseteric fossa lies opposite the embrasure between m1 and m2 rather than below p4 as in T. taoensis. Furthermore, although much of the posterior and dorsal portions of the ascending ramus are not preserved, it appears that the mandibular condyle is not suspended by a long neck, as is also the case in T.
taoensis, but that the posteroventral margin is less rounded and the ventral margin is straighter (less sinuous) than in T. taoensis (see also Kielan-Jaworowska and Sloan 1979: fig. 2D). A robust dentary fragment of Catopsalis foliatus (AMNH 3035) containing p4, m1-2 and illustrated by Granger and Simpson (1929: fig. 10; see also Matthew 1937: fig. 75, Kielan-Jaworowska and Sloan 1979: fig. 2E, Lucas et al. 1997) in medial view, reveals little other than the deep excavation of the pterygoid fossa posterior to m2 and the flat, tilted (from ventrolateral to dorsomedial) ventral surface below the anterior portion of the pterygoid fossa. In these features, it closely resembles the dentary of T. taoensis. Although fragments of the dentary of C. fissidens are known (Granger and Simpson 1929; Lucas et al. 1997; Williamson et al. 2016), none appear to be sufficiently complete to yield any useful comparative information other than that the anterior border of the coronoid process appears to arise opposite the embrasure between m1 and m2 or just anterior to it (Lucas et al. 1997: figs. 2-9, 2-10; Williamson et al. 2016: fig. 3D). Even more basal within Taeniolabidoidea is Valenopsalis joyneri, previously regarded as a species of Catopsalis (Williamson et al. 2016: figs. 4, 5). Unfortunately, dentaries have not been described for this species. Yubaatar zhongyuanensis is currently regarded as the immediate outgroup to Taeniolabidoidea (Xu et al. 2015; see also Csiki-Sava et al. 2018) and is represented by both dentaries in the holotype and only known specimen. Relative to the dentary of T. taoensis, that of Y. zhongyuanensis is longer (relative to depth) and more slender, i1 is less erect, the anterior border of the masseteric fossa is farther posterior (below m1 rather than below p4), the coronoid process begins farther posterior (its anterior margin lying opposite m2), and the pterygoid fossa appears to be less deeply excavated (Xu et al. 2015: figs. 2, 4a [mislabeled as part d]).
The peduncle for the mandibular condyle, however, is less stalk-like (indeed, it is described as not having a neck) and the condyle is more dorsally directed than in Lambdopsalis bulla and, in this regard, more closely resembles the dentaries of T. taoensis. The ventral surface of the dentary in taeniolabidoids appears to be broad, flat, and strongly tilted (from ventrolateral to dorsomedial in coronal section) in the areas below the molars and the pterygoid fossa. This condition is present in at least Taeniolabis (Figs. 2h, 3d, h, j, 8h, 9b, e and 10h; Simmons 1987: fig. 4.1) and Catopsalis (Kielan-Jaworowska and Sloan 1979: fig. 2E; Lucas et al. 1997: figs. 3.1, 3.2, 3.5) and, seemingly, in Lambdopsalis (Miao 1988: figs. 26, 32) and Prionessus (Matthew and Granger 1925: fig. 6), and does not appear to be present, or at least as strongly developed, in other cimolodontan taxa. Earlier-branching forms such as paulchoffatiids (e.g., Kuhneodon: Hahn 1969: figs. 17, 18; Hahn 1978a: fig. 10; Meketibolodon: Martin 2018: fig. 45) and eobaatarids (e.g., Sinobaatar: Kusuhashi et al. 2009: figs. 8, 16) also exhibit a flat ventral surface posteriorly but it appears to be untilted or even tilted in the opposite orientation, from ventromedial to dorsolateral. This may indicate that a broad, flat, and strongly tilted ventral surface of the dentary is a synapomorphy uniting taeniolabidoids, although the condition in the stem taeniolabidoid Yubaatar is not described or directly illustrated (Xu et al. 2015) and is unknown to us. More definitively outside of Taeniolabidoidea, the dentaries of other cimolodontans are generally less robust than in T. taoensis and, more broadly, Taeniolabidoidea (Wible et al. 2019: fig. 25). Overall morphology is generally quite conservative within Multituberculata, thereby obviating the need for detailed comparison. Outside of Multituberculata, dentaries are also short and robust, with a sizeable diastema, in Gondwanatheria and Euharamiyida (Krause et al.
2020c: fig. 5A-C, K, L) but generally longer and more slender, without a sizeable diastema, in other, early-branching mammaliaform clades (Krause et al. 2020c: fig. 5J, M-X).

Conclusions

The craniomandibular morphology of the iconic early Paleocene multituberculate Taeniolabis taoensis is documented in this study on the basis of newly discovered specimens from the Denver Basin, Colorado, and long-known specimens (both described and undescribed) from the San Juan Basin, New Mexico. All specimens, where possible, were subjected to examination with µCT technology. The specimens from the Denver Basin are the first to be recorded from there and, correspondingly, also establish the approximate base of the Puercan NALMA Taeniolabis taoensis/Periptychus carinidens Interval Zone (Pu3) in the basin for the first time. Early reconstructions of the cranium of T. taoensis were based primarily on AMNH 16321, which µCT imaging reveals to be quite fragmentary, possessing major parts of the cranial roof and zygomatic arches but missing many critically important areas of the anterior snout, palate, mesocranium, basicranium, and occipital plate. The material examined in this study reveals profound changes to the shape of the skull relative to early reconstructions (Fig. 10). Some of the more salient differences include a more anteriorly extended premaxillary region; more prominent and more ridge-like sagittal and nuchal crests; pronounced peaks in the regions where the sagittal crest and temporal ridges intersect as well as where the sagittal and nuchal crests intersect; smaller and more laterally positioned incisive foramina; less posteriorly positioned choanae; smaller paraoccipital processes; a triangular foramen magnum; smaller, less bulbous, and more posteriorly situated occipital condyles; and a shorter dentary. The bony composition and features of the lateral wall of the braincase, nasal cavity, and endocranial cavity remain poorly known. Features previously unknown for T.
taoensis include the presence of an in situ I3, prominent internarial processes, numerous nasal foramina, a diminutive jugal on the medial aspect of the zygomatic arch, the frontal occupying a substantial portion of the medial wall of the orbit, a posttemporal foramen, and all aspects of the mesocranium, basicranium, and inner ear in the cranium, and a masseteric fovea on the dentary. We also document the absence of a septomaxilla, lacrimal, postpalatine torus, palatal vacuities, zygomatic ridges, and a masseteric protuberance. Reconstruction of the dentary was previously based primarily on AMNH 16310, which was incomplete posteriorly. The dentary specimens described herein add previously unknown details of the ascending ramus, primarily of the masseteric and pterygoid fossae, coronoid process, and mandibular condyle. Comparison of the craniomandibular morphology of T. taoensis with that of other cimolodontan multituberculates confirms that, of those taxa represented by significant skull material, closest resemblances are with the lambdopsalid Lambdopsalis.
Urban manta rays: potential manta ray nursery habitat along a highly developed Florida coastline

The giant oceanic manta ray Mobula birostris was listed in the US Endangered Species Act as a threatened species in 2018, yet insufficient data exist on manta populations throughout US waters to designate critical habitat. Taxonomic and genetic evidence suggests that manta rays in the Western Atlantic are a separate species (M. cf. birostris) and little is understood about the ecology and life history of this putative species. The juvenile life stage of both M. birostris and M. cf. birostris is particularly understudied. Here, we are the first to describe the characteristics of a manta ray population along a highly developed coastline in southeastern Florida using boat-based surveys and photo identification of individuals. Fifty-nine manta individuals were identified between 2016 and 2019. All males were sexually immature based on clasper development, and 96% of females were classified as immature based on size and absence of mating scars or visible pregnancies. Twenty-five (42%) individuals were observed more than once during the study period and 8 individuals were sighted over multiple years. The occurrence of juveniles, high site fidelity and extended use of the study area by juvenile manta rays suggest that southeastern Florida may serve as a nursery habitat. High occurrence of fishing line entanglement (27% of individuals) and vessel strike injury were documented, and rapid wound healing was observed. Future research and conservation efforts will focus on identifying the physical and biological features of the potential nursery habitat and on mitigation of anthropogenic impacts.

INTRODUCTION

Globally, manta rays (Mobula birostris and M. alfredi) are classified as Vulnerable on the IUCN Red List, mainly due to targeted fishing and bycatch.
Manta and mobula rays have been targeted globally for their gill plates, which are transported and sold in Asia as health tonics (Dulvy et al. 2014, Croll et al. 2016). The conservative life-history traits of manta rays (i.e. late age of maturity, long gestation period, low fecundity) make them particularly susceptible to exploitation (Dulvy et al. 2014) and rising anthropogenic threats. Other significant, yet less-studied threats to manta rays include ingestion of microplastics (Germanov et al. 2019b), vessel-strike injuries (McGregor et al. 2019), unsustainable tourism (Venables et al. 2016), habitat destruction and climate change (Stewart et al. 2018a). Recent taxonomic (Marshall et al. 2009) and genetic (Hinojosa-Alvarez et al. 2016, J. Hosegood et al. unpubl. at https://doi.org/10.1101/458141) evidence indicates there is a third species of manta ray (M. cf. birostris), which is noted to occur off the Atlantic coast of the USA. M. cf. birostris, though most closely related to M. birostris (Hinojosa-Alvarez et al. 2016, J. Hosegood et al. unpubl. at https://doi.org/10.1101/458141), is thought to be more similar in ecology to M. alfredi; however, there is a lack of data on populations from the Caribbean and Western Atlantic. As for many batoids, large gaps exist in the knowledge of manta ray life history and ecology (Martins et al. 2018). M. alfredi associates with neritic habitats and is the better studied of the 2 manta species, with data on gestation period (1 yr; Marshall & Bennett 2010b, Stevens 2016), age at maturity (8−17 yr for females; Marshall et al. 2019) and size at birth (130−190 cm; Stevens 2016, Murakumo et al. 2020). Less is known about the life history of M. birostris, as a result of its tendency to reside in more remote open-ocean habitats. The life stage of sexually immature or juvenile manta rays remains particularly understudied.
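The demographic consequence of this life-history syndrome (late maturity, low fecundity, high adult survival) can be illustrated with a minimal two-stage (juvenile/adult) projection matrix. All vital rates below are hypothetical placeholders chosen only to reflect the qualitative pattern described above; they are not estimates for any manta species.

```python
def growth_rate(juv_survival, adult_survival=0.95, maturation=0.1, fecundity=0.25):
    """Dominant eigenvalue (lambda) of a 2-stage projection matrix,
    found by power iteration. maturation ~ 1/(years to maturity).

    Projection matrix:
        [ juveniles ]   [ s_j*(1-m)   f   ] [ juveniles ]
        [ adults    ] = [ s_j*m      s_a  ] [ adults    ]
    """
    a11 = juv_survival * (1 - maturation)
    a21 = juv_survival * maturation
    v = [1.0, 1.0]
    lam = 1.0
    for _ in range(2000):
        # One projection step, then renormalize by the max component,
        # which converges to the dominant eigenvalue.
        w = [a11 * v[0] + fecundity * v[1],
             a21 * v[0] + adult_survival * v[1]]
        lam = max(abs(w[0]), abs(w[1]))
        v = [w[0] / lam, w[1] / lam]
    return lam

low = growth_rate(juv_survival=0.5)
high = growth_rate(juv_survival=0.8)
print(round(low, 3), round(high, 3))  # lambda rises with juvenile survival
```

With these placeholder rates, modest changes in juvenile survival move the population growth rate across the replacement threshold (lambda = 1), which is the intuition behind prioritizing juvenile habitats for such species.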
Of the 663 currently described batoid species, less than 6% have been the subject of studies concerning nursery habitats (Martins et al. 2018). Heupel et al. (2007) defined testable criteria of an elasmobranch nursery habitat as: (1) more juveniles are found in the proposed nursery area than in other areas, (2) individuals have a tendency to remain over time, and (3) the habitat is repeatedly used across years. While juvenile manta ray habitats are slowly being identified around the world, only 2 have been described that meet these criteria (M. birostris and M. cf. birostris: Childs 2001, Stewart et al. 2018b; M. alfredi: Germanov et al. 2019a). Juvenile survival rates are particularly important to population viability in species where there is a late age of maturity, high adult survival and low fecundity (Heppell et al. 2000). As global manta ray populations decline, the importance of survival at the juvenile life stage makes identifying critical habitats, including pupping and nursery areas, a priority for manta ray research and conservation efforts (Stewart et al. 2018a). Southeast Florida is characterized by a narrow (2−10 km) continental shelf and warm waters from the north-flowing Florida Current (Banks et al. 2007, Finkl & Andrews 2008). Nearshore waters of southeast Florida support many populations of marine megafauna including sea turtles (Makowski et al. 2005, Stewart et al. 2014), bottlenose dolphins (Tursiops truncatus; Litz et al. 2007), spotted eagle rays (Aetobatus narinari; Newby et al. 2014), goliath grouper (Epinephelus itajara; Koenig et al. 2017) and sharks (Kajiura & Tellman 2016). These megafauna have been subjected to increased harassment, injury and mortality as the human population of south Florida continues to grow (Bureau of Economic and Business Research 2019) and the coastal waters are more heavily used for recreational activities such as boating, fishing, jet-skiing, snorkeling, SCUBA diving and swimming (Adimey et al.
2014, Powell et al. 2018, Foley et al. 2019). The giant oceanic manta ray (M. birostris) was listed under the US Endangered Species Act as a threatened species (NOAA 2018), yet insufficient data exist on the manta population along the eastern USA to designate critical habitat. At the time of writing, there have been no published studies of manta rays in Florida, with the exception of one paper documenting the sighting of 3 manta rays in the Indian River Lagoon in the 1990s (Adams 1998). However, fishermen have long been aware of seasonal manta ray aggregations in northern Florida and use the manta rays to target cobia Rachycentron canadum, a popular game fish (Fig. 1 shows a photo array of anthropogenic impacts to Florida manta rays, including fishermen targeting cobia with manta rays). SCUBA diving is a popular pastime in Florida; yet, manta rays are rarely seen by divers on the reefs (REEF 2019; J. Pate pers. obs.). Here, we provide the first description of manta rays along the southeast coast of Florida, USA. Using in-water observations, we documented the sex, size, maturity status, associated fish taxa, behavior and spatio-temporal distribution of this manta population. We further present evidence that our survey area meets the criteria of a batoid nursery habitat (Heupel et al. 2007, Martins et al. 2018). We also quantify negative anthropogenic interactions of manta rays with boating and fishing, as well as photographically document the rapid healing of injuries. Finally, we discuss the value and limitations of these data with regard to management, and suggest future avenues for research and conservation. MATERIALS AND METHODS Boat-based surveys Visual boat-based surveys were conducted from June 2016 to November 2019 to locate and identify individual manta rays. A boat survey consisted of driving the boat at slow speeds (<10 knots) along a portion of a north−south transect between Jupiter Inlet (26°56' 40" N, 80°04' 13" W) and Boynton Beach Inlet (26°32' 44" N, 80°02' 31" W; Fig.
2), approximately 200 m from shore (distance from shore varied with conditions such as sea state and tide). This area was selected based on previous observations of manta rays swimming close to shore (<3 m depth). Surveys were typically conducted between inlets (e.g. between Jupiter Inlet and Palm Beach Inlet; Fig. 2), though could be shorter or longer depending on conditions. For example, if weather conditions were favorable and no mantas were sighted between Jupiter and Palm Beach Inlet, we sometimes continued south to Boynton Beach Inlet or partway there. In 2019, some surveys extended farther north to St. Lucie Inlet (27°09' 48" N, 80°09' 11" W; Fig. 2) when manta rays were difficult to locate in our standard survey area. Surveys normally began in the mornings (~09:00 h) when winds were typically lighter. An observer stood on the bow of the boat (Triumph 215 CC) for the best vantage point. Unmanned aerial vehicles (UAVs; DJI Mavic Pro & DJI Phantom 4; commonly drones) or planes were used on 26% (n = 46) of surveys in 2018 and 2019. We were unable to fly UAVs in a portion of our study area due to flight restrictions around airports. An encounter was defined as any time a manta ray was located by an observer or UAV. (Fig. 1 caption: Anthropogenic interactions with manta rays. Fishing line interactions on (A) a female manta ray with hooks in the pectoral fin and (B) a manta ray with multiple fishing line entanglements, including one that is cutting into the right pectoral fin, as well as evidence of a shark bite. Manta rays in south Florida are found in areas of high human activity: (C) a manta ray by a popular fishing jetty, (D) fishermen in northern Florida targeting cobia that swim with manta rays, (E) a manta ray inside Boynton Beach Inlet with a boat passing, and (F) a manta swimming in shallow coastal water along a public beach by a dredging outflow pipe.) When a manta ray was located, GPS location, water depth, water temperature and bottom cover type (sand or hard bottom) were recorded. Water temperature was not recorded during 2016 surveys. When a manta ray was located, a snorkeler entered the water to obtain a visual identification photograph of the ventral spot pattern. It was not possible to obtain a usable identification photo for all manta rays, and some manta rays were lost before a snorkeler was able to enter the water. Since the manta rays were frequently in shallow water (<2 m) swimming close to the seafloor, the snorkeler would often use an extended GoPro to reach underneath the manta ray while continuously recording. Still frames were extracted from GoPro videos for analysis. Distinguishing features such as scars, injuries and color patterns were also noted. Photo-identification is a commonly accepted method of identifying manta ray individuals and the methodology has been extensively documented (Marshall et al. 2011, Kitchen-Wheeler et al. 2012). Disc width was estimated by the in-water snorkeler by comparing the manta to an object of known size (i.e. snorkeler or the research vessel) to the nearest foot (0.3 m). Due to the inaccuracies of this method and estimator bias, we further binned individuals into 1 m size classes to allow for at least a broad-scale overview of the general sizes of encountered individuals. Population structure A photo of the pelvic fins was taken to determine sex (by presence or absence of claspers), and a chi-square test (RStudio 1.2.5042) tested for a significant departure from a 1:1 sex ratio.
Maturity status was determined for males by relative size of claspers. Juvenile male claspers are small and uncalcified, and do not extend past the pelvic fins (Fig. 3). (Fig. 2 caption: Each circle represents a manta ray encounter. Orange circles indicate a manta ray that was encountered within 1 km of a fishing pier or inlet jetty.) Sexually mature males have calcified claspers that extend well past the pelvic fins and fully developed clasper glands (see Fig. 3 in Marshall & Bennett 2010b). Female maturity was assessed by disc width and the presence of mating scars (Marshall & Bennett 2010b) or visible pregnancy. Mating scars are found on the distal end of the pectoral fin, usually the left (Marshall & Bennett 2010b). Though these scars may decrease in intensity, they persist over time and are easily recognized by trained observers. Females of Mobula birostris mature at disc widths >4 m, and females of M. alfredi mature at 3.0−3.5 m disc width (Marshall et al. 2019). We conservatively assumed that females with a disc width less than 3 m and an absence of mating scars were immature. If the female was greater than 3 m, but lacked scars or visible pregnancy, the maturity status was considered unknown. Each identified manta ray individual was classified as M. birostris or M. cf. birostris based on the key provided in Marshall et al. (2009) (Fig. 4). To ensure that no individuals belonged to M. alfredi, photos were analyzed for presence of a caudal spine. Lack of a caudal spine is a diagnostic feature of M. alfredi. Manta ray behavior was also recorded as 'directed swimming', 'feeding', 'cleaning' or 'reproductive' (Germanov et al. 2019a). Directed swimming manta rays maintained a directional heading and did not frequently change directions, as often observed in feeding manta rays. Directionally swimming manta rays would sometimes unroll one or both cephalic fins in response to the snorkeler.
Manta rays were considered to be feeding if the cephalic lobes were unrolled and the mouth and gill slits were open. Cleaning behavior is characterized by a manta ray hovering, usually over a reef, while being cleaned by juvenile reef fish. Reproductive behavior included any observation of courtship or mating (Stevens et al. 2018). If the behavior did not fall clearly into the categories described above, it was labeled as 'not determined'. A re-sighting was defined as a sighting of a manta ray individual on a different survey day. Time and straight-line distance were calculated between each re-sighting of an individual. Average time and distance were calculated between all manta ray resightings. Re-sightings were also collected from citizen science divers/snorkelers, sometimes outside the survey area. Sightings outside the survey area were excluded from the time and distance re-sighting analysis. Associated species In Florida, manta rays are used by fishermen to locate and target cobia Rachycentron canadum, though this association is better known in northern Florida. In our survey area, some fishermen have reported that the number of cobias associating with manta rays has decreased over time (J. Pate pers. obs.). Here, we used our photos and videos to quantify fish taxa associated with each manta ray encountered in our study area. Fish were considered associated with the manta ray if they were swimming close to the manta ray (<1 m) in the same direction for the duration of the encounter or if they were physically attached to the manta ray. Videos were only used if a complete view of the dorsal and ventral side was available. Fish were identified to genera if possible, but for many we were only able to identify to family. Anthropogenic impacts Injuries and fishing line entanglement were noted and/or photographed in each encounter. 
Fishing line and injury locations on the manta ray were documented as anterior ventral (AV), posterior ventral (PV), pectoral ventral (PeV), anterior dorsal (AD), posterior dorsal (PD) or pectoral dorsal (PeD) (Fig. 5). If injuries were in more than one body region, both were selected. If it was not possible to see where the hook was attached, it was categorized as no hook/line entanglement (NH). When possible, attached fishing gear was removed from manta rays. Wherever possible, injuries and scars were photographed and the cause was identified. Propeller injuries were identified by multiple parallel linear wounds (Byard et al. 2013, Foley et al. 2019), sometimes with a perpendicular linear wound caused by the skeg (Fig. 6). Shark bites were identified by crescent-shaped scarring or teeth rake marks (Marshall & Bennett 2010a) (Fig. 1B). Fishing line was listed as the cause of injury if it was still attached to the manta ray and was visibly causing injury, or if there was a remnant scar/injury from the line (similar to the one on the right pectoral fin of the manta ray in Fig. 1B). Other injuries (nicks, slices or missing tissue) were categorized as unknown cause, though the origin of the wound is likely anthropogenic (McGregor et al. 2019). Injuries were considered to have recently occurred if any fresh, vascularized flesh was exposed (Fig. 6A−D,F,I) and considered healed otherwise. If the injured individual was re-sighted, photos were taken to assess the rate of wound healing. Wounds were considered healed when completely closed (Fig. 6E,H,J). To quantify manta ray proximity to areas of high human impact, straight-line distance was calculated with Google Earth between every manta ray location and the nearest fishing pier or inlet jetty. Manta ray mortalities were opportunistically documented. RESULTS Manta rays were observed on 42.9% of surveys (n = 75). A maximum of 8 manta rays were encountered in a single survey.
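The straight-line distances described in the Methods were measured in Google Earth; the same great-circle distances can be approximated with the haversine formula. A minimal Python sketch (the function name is our own) using the Jupiter Inlet and Boynton Beach Inlet coordinates given in the survey description:

```python
import math

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km between two points given in decimal degrees."""
    r = 6371.0  # mean Earth radius, km
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = p2 - p1
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

# Inlet coordinates from the survey description, converted to decimal degrees
jupiter = (26 + 56/60 + 40/3600, -(80 + 4/60 + 13/3600))
boynton = (26 + 32/60 + 44/3600, -(80 + 2/60 + 31/3600))

transect_km = haversine_km(*jupiter, *boynton)  # full north-south transect length
print(f"{transect_km:.1f} km")
```

This yields roughly 44 km for the full transect, consistent with the survey area sketched in Fig. 2.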
One hundred and fifty manta ray encounters occurred during the study period (Fig. 8; 2016: 11 encounters; 2017: 66 encounters; 2018: 28 encounters; 2019: 45 encounters). Photographic identification was obtained for 130 (87%) of these encounters. Manta rays were found in an average (±SE) water depth of 2.7 m (±0.1 m, range 0.8−9.8 m) and water temperature of 28.0°C (±0.1°C, range 23.2−30.1°C). One hundred and twenty-seven (84.7%) manta ray encounters occurred over a sandy seafloor, 17 (11.3%) over hard bottom and for 6 (4%) encounters, habitat was not recorded. Manta ray individuals and population structure Of the 150 encounters, 59 unique manta ray individuals were identified (Fig. 8). The sampled individuals consisted of 31 male (52.5%) and 28 female (47.5%) manta rays, a sex ratio (1.1:1) that did not differ significantly from parity (chi-square test, χ²(1) = 0.15254, p = 0.6961). All male manta rays (n = 31) were sexually immature based on clasper size. Ninety-six percent (n = 27) of females were sexually immature based on disc width (<3 m) and absence of mating scars. Seventeen (29%) of the photographically identified manta ray individuals' disc widths at first sighting were <2 m, 41 were between 2 and 3 m (69%) and 1 individual was over 3 m (2%; Fig. 9). The single individual estimated to be greater than 3 m disc width was female, but lacked mating scars or visible pregnancy, thus maturity status was considered unknown (Fig. 9). All identified individuals were classified as Mobula cf. birostris based on the key in Marshall et al. (2009). An examination of a subset of individuals (n = 5) in the field by A. D. M. further confirmed that all examined individuals possessed skin morphology similar to that of M. cf. birostris (Marshall et al. 2009). Eighty-three percent of manta rays had photos where the base of the tail was clearly visible, and in all of them a caudal spine was present.
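The sex-ratio test reported above can be reproduced directly from the counts of identified individuals (31 males, 28 females); a short sketch in pure Python (the paper used RStudio, so this is an equivalent computation, not the original script):

```python
import math

# Observed counts of photo-identified individuals (this study)
males, females = 31, 28
n = males + females
expected = n / 2  # expected count per sex under a 1:1 ratio

# Pearson chi-square goodness-of-fit statistic, df = 1
chi2 = sum((obs - expected) ** 2 / expected for obs in (males, females))

# For df = 1, the p-value is P(X > chi2) = erfc(sqrt(chi2 / 2))
p = math.erfc(math.sqrt(chi2 / 2))
print(f"chi2 = {chi2:.5f}, p = {p:.4f}")  # matches chi2 = 0.15254, p = 0.6961
```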
Eighty-six (58%) of the manta ray encounters were behaviorally categorized as directed swimming, 37 (25%) were categorized as feeding and for 26 (17%) manta ray encounters behavior was not determined. No cleaning or reproductive behavior was observed. Twenty-five manta rays (42%) were sighted more than once, with a total of 78 re-sighting events. Of these re-sighted manta ray individuals, 48% were seen twice, 24% 3 times, 8% 4 times and 16% between 5 and 9 times (average 4.1 ± 0.9 sightings; range 2−23; median = 3). Four of these re-sightings were reported by boaters or divers, 3 outside our survey area. One individual was on a wreck east of our survey area, and 2 others were sighted 60 km south of the survey area. An average of 70.9 d elapsed between re-sightings (±11.9 d; range 1−387 d). Individual manta rays used the survey area across years, with 8 individuals being sighted in at least 2 consecutive calendar years. Two of these mantas (FL0020, female, and FL0027, juvenile male) were sighted in 3 consecutive years (2017, 2018, 2019). FL0027 was observed 23 times between 16 August 2017 and 22 October 2019. Over the entire period, its clasper size indicated a juvenile age class, with claspers not extending past the pelvic fins (Fig. 3). Anthropogenic impacts Of encounters with photographically identified manta rays, 99% had a ventral photo and 87% had a dorsal photo. Since we were not able to examine both sides for all manta rays, entanglement and injury frequencies represent a minimum for our study group. Sixteen individuals (27%) were seen foul-hooked or entangled in fishing line, of which 6 individuals interacted with fishing gear more than once. The most common area hooked was the ventral pectoral fin (PeV, 37%; Table 1). Fifty-two percent of manta rays were hooked in the pectoral fin (dorsal and ventral; Table 1). Seventeen percent of hooked manta rays were hooked in the anterior ventral region (AV; Table 1), near the mouth and gills (Figs. 1B & 7B).
Thirty-three injuries were documented on 27 (46%) manta individuals. Ten (30%) of the injuries were presumably from boat propellers, 10 (30%) from fishing line, 9 (27%) from an unknown cause and 4 (12%) from shark bites. Seven injuries were considered to have recently occurred, with 5 of those being caused by boat propellers. All propeller injuries occurred on the dorsum (with one cutting through to the ventral side as well, Fig. 6I), with 55% on the posterior, 27% on the anterior and 18% on the pectoral fin (Table 2). Eighty percent of fishing line injuries occurred on the pectoral fin (Table 2). We documented healing times for 4 injuries (on 3 individuals). These wounds completely healed in 40 and 15 d (FL009, injury a and injury b, respectively), 44 d (FL0020) and 25 d (FL0027; Fig. 6). Forty-two manta ray encounters (28%) occurred within 1 km of a fishing pier or inlet jetty (Figs. 1 & 2). Only one manta ray mortality occurred during the study period, but this occurred outside the study area. Florida Fish and Wildlife Commission was notified of a dead manta ray entangled in a vessel exclusion line (steel cable) on 18 July 2017 in Pompano Beach, Florida (26°13' 16.4" N, 80°5' 17.3" W, south of our survey area). The female measured 2.48 m in disc width and had no other signs of injury or fishing line entanglement. It is likely that the manta ray became entangled in the line and drowned. This manta ray had not been previously photographically identified in our surveys. DISCUSSION Documenting nursery habitats is a priority in manta ray research and conservation (Stewart et al. 2018a), yet only 2 nurseries (Mobula birostris in Texas, USA: Stewart et al. 2018b; M. alfredi in Indonesia: Germanov et al. 2019a) have been identified that meet the criteria of an elasmobranch nursery habitat (Heupel et al. 2007).
We provide evidence that our survey area meets these criteria: (1) more juveniles are found in the proposed nursery area than in other areas, (2) individuals have a tendency to remain over time, and (3) the habitat is repeatedly used across years. All males observed were sexually immature and 96% of females were of immature size without mating scars (criterion 1). Other populations in the western Atlantic, namely in the Yucatan Peninsula, Mexico (Marshall & Holmberg 2018), and northeastern Florida (Mullican et al. 2013, A. Marshall pers. obs.), consist mostly of adult manta rays. Juvenile M. birostris are rarely encountered in surveys of most manta populations, making this population somewhat unique, albeit similar to the one described in the Flower Garden Banks (Stewart et al. 2018b). Additionally, manta ray individuals in the south Florida population remained over time (criterion 2), with 42% of individuals being sighted more than once within the study period. One juvenile male manta ray (FL0027) was sighted 23 times over 797 d. Finally, we demonstrate that the nearshore waters of south Florida are used repeatedly among years (criterion 3). Manta rays were seen in every year of the study and aerial surveys for shark migrations have documented manta rays every year from 2011 to 2019 (J. Waldron & S. M. Kajiura unpubl. data). Individual manta rays were also shown to utilize the study area over consecutive calendar years. While our survey area fits the criteria for a nursery habitat, it is still unclear how large this critical habitat may be. Manta rays are capable of long-distance migrations and deep dives, thus our survey area likely does not encompass the entirety of the nursery habitat. Our methodology is limited by logistics and skewed towards finding manta rays in shallow, clear water.
Thus, while we are confident that our study site represents a portion of the nursery habitat, we are unsure of the geographical limits of habitat use by juvenile manta rays in Florida. Two of our photographically identified manta rays were seen by a local free-diver in shallow water 60 km south of our survey area, evidence that the nursery habitat may extend farther north, south and possibly east of our survey area. Nursery habitats tend to provide conditions beneficial for development, such as food availability and protection from predators (Heupel et al. 2007). Feeding behavior was observed in the proposed nursery habitat, sometimes involving groups of up to 5 individuals. These feeding events are likely manta rays taking advantage of ephemeral bursts of productivity, possibly from upwelling caused by eddies of the nearby Florida Current (Lirman et al. 2019) or nutrient-rich discharge from coastal inlets (Sonnetag & McPherson 1984). (Table 1 caption: Physical location of fishing hooks on manta ray bodies (Fig. 5). In 30 manta encounters, a fishing gear interaction was documented. In 8 encounters, no hook was seen and the fishing line was wrapped around the manta.) The large size of manta rays at birth protects them from many potential predators, except for large sharks. Only 4 (6.8%) of photographically identified manta rays were observed with shark bites, suggesting that predation rates are low within the study area. Similar rates of predation have been reported for juvenile M. alfredi (Deakos et al. 2011), suggesting juvenile manta rays may be selecting habitat, at least in part, based on predation risk. It is important to note that predation rates may be underestimated if mortality is cryptic in juvenile manta rays, and that our approach only documented manta rays that survived predation attempts. Manta ray proximity to areas of high human use (i.e. inlets and fishing piers) likely contributes to the frequency of fishing line entanglement and vessel strike (Fig. 2).
In French Polynesia, manta rays near inhabited islands are more likely to be observed with sublethal injuries caused by fishing gear or boat strikes than mantas near uninhabited islands (Carpentier et al. 2019). The human population of Florida increased 262% from 1960 to 2008, with 75% of people living in coastal counties (Wilson & Fishetti 2010). In eastern Florida, participation in recreational fishing increased 58% between 1981 and 2016 (pers. comm. from the National Marine Fisheries Service, Fisheries Statistics Division, 24 January 2020). Of the 3 manta rays documented by Adams (1998) in a Florida estuary 25 yr ago, 1 individual had deep lacerations from a propeller and was entangled in fishing gear. This suggests that anthropogenic interactions have been occurring for decades and are likely increasing as the human population of Florida increases. The largest threat to global manta ray populations is targeted fisheries and bycatch (Croll et al. 2016), yet less is known about sub-lethal interactions between manta rays and recreational fishing gear (Stewart et al. 2018a). Twenty-seven percent of manta ray individuals in the present study were foul-hooked or entangled in fishing line (Fig. 1), with 38% of those individuals having more than one interaction with fishing gear, an extremely high rate of anthropogenic interaction for small manta rays to sustain in their first years of life. Many marine megafauna, including dolphins, manatees and sea turtles, in Florida have been documented as entangled in fishing gear, and these interactions are increasing (Adimey et al. 2014). Most (52%) of the fishing hooks were on the pectoral fins of manta rays (Table 1). Similarly, Adimey et al. (2014) found that sea turtles and manatees were most often entangled on their flippers. These lateral hooking locations are indicative of passive interactions with fishing gear, i.e. interactions when a free-swimming animal accidentally comes into contact with the fishing gear.
Active interaction with fishing gear results from an animal seeking food or exploring a novel object (Adimey et al. 2014). Deakos et al. (2011) documented evidence of fishing hooks on manta ray cephalic fins in a Hawaiian population, with a 10% cephalic lobe injury rate, likely caused by entanglement in monofilament line. Variation in fishing behavior and gear may result in variation in severity and location of foul-hooking and/or entanglement. This would require that the appropriate mitigation strategies be tailored for each manta ray population and fishery. In Florida, fishermen target manta rays based on their well-known association with cobia, yet we rarely saw cobia with manta rays in our survey area. While some fishing interactions may result in minimal permanent injury to the manta ray, they likely cause considerable stress and possible sub-lethal effects. When fishermen have accidentally hooked manta rays, fight times have been over one hour (J. Pate unpubl. data). Fight time is correlated with physiological stress (i.e. lactate production) in elasmobranchs, with smaller sharks producing more lactate than larger sharks (Gallagher et al. 2014). Thus small, juvenile manta rays may be especially vulnerable to stress from capture. Additionally, fishing line wrapped around the mouth or cephalic fin can also affect a manta's ability to feed. Studies on other marine taxa also demonstrate reduced time spent foraging after anthropogenic interaction (Williams et al. 2006). Furthermore, stress from capture can induce early parturition in many elasmobranch species, including M. birostris (Adams et al. 2018). We documented 46% of individuals with scars or injuries. 
Although it can be difficult to determine the origin of many of the scars and injuries to manta rays, we were able to document 10 incidents where an outboard motor was the clear cause of a recent injury (denoted by multiple parallel linear injuries from propellers; usually there was another deeper, linear injury perpendicular to the propeller marks from the sharp edge of the skeg; Fig. 6). McGregor et al. (2019) documented rapid healing of a vessel strike in a single M. alfredi female. They found that 95% of the wound closed within 295 d (though this is likely an overestimate of healing time since they had to wait for the next season for the manta ray to be re-sighted) and modeled a healing half-life of 46 d. We were not able to obtain as high-quality photos for measuring wounds as McGregor et al. (2019). However, we were still able to document wound healing, from time of first injury observation to complete wound closure, for 4 injuries on 3 mantas (2 males, 1 female; Fig. 6). Complete wound closure ranged from 15 to 44 d, much faster than previously documented in manta rays (Marshall & Bennett 2010a, McGregor et al. 2019; Fig. 6), yet similar to healing rates in wild blacktip reef sharks Carcharhinus melanopterus (Chin et al. 2015). Rapid wound healing certainly leads to an underestimation of the frequency of vessel strike injuries (McGregor et al. 2019). Once healed, these wounds appear as barely visible light gray scars (Fig. 6) and may not be noticed in a quick underwater encounter. It is also possible that manta rays are experiencing blunt force trauma from vessel strikes, yet are not exhibiting any obvious external injuries. Nine percent of stranded sea turtles in Florida from 1984 to 2014 were found to have blunt force injuries probably caused by vessel strikes (Foley et al. 2019). Manta ray strandings in Florida are rare (D. Adams pers. comm.), yet mortality may be cryptic as manta rays are negatively buoyant and will sink when they die.
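The half-life figure cited from McGregor et al. (2019) implies an exponential healing curve, from which the time to any closure fraction follows directly. A small illustrative sketch of that model (an assumption for illustration, not a re-analysis of the wounds in this study):

```python
import math

HALF_LIFE_D = 46.0  # wound-healing half-life (d) modeled by McGregor et al. (2019)

def open_fraction(days):
    """Fraction of the wound still open after `days`, assuming exponential healing."""
    return 0.5 ** (days / HALF_LIFE_D)

def days_to_close(fraction_closed):
    """Days until the given fraction of the wound has closed under the same model."""
    return HALF_LIFE_D * math.log2(1 / (1 - fraction_closed))

print(f"{days_to_close(0.95):.0f} d to 95% closure")  # ~199 d under the model
```

The model predicts 95% closure in roughly 199 d, shorter than the 295 d observed interval, consistent with the authors' note that re-sighting delay likely overestimated true healing time.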
CONCLUSIONS The nearshore area between St. Lucie Inlet and Boynton Beach Inlet in southeast Florida has been identified as a potential nursery habitat for manta rays. Ninety-eight percent of manta rays observed were juveniles, and many individuals showed high site fidelity to the study area, which occurs along a highly populated coastline with intensive human traffic. We also documented wounds and entanglement consistent with high rates of negative interactions with fishermen and boaters, which are likely underestimated due to rapid wound healing in manta rays. In 2018, M. birostris was listed as threatened under the US Endangered Species Act, which requires that critical habitat of listed species be designated. However, to designate critical habitat, NOAA requires that the physical and biological features essential to conservation of the species be identified. As of 2019, NOAA determined that no such critical habitat had yet been identified (NOAA 2019). We suggest that this coastline be considered in future designations as a potential nursery habitat for this species. Very few juvenile manta ray habitats have been identified (Stewart et al. 2018b, Germanov et al. 2019a), making it crucial to identify and protect known or suspected nursery grounds. We recommend the use of aerial surveys, acoustic and satellite telemetry, as well as studies of prey availability to further elucidate the physical and biological features of this potential nursery habitat. Future research in this area should also focus on ontogenetic habitat shifts, as these juvenile rays presumably reach the age to join the greater adult population, and on the connectivity of this population with others regionally. The manta rays in our study appear to represent a third species, M. cf. birostris (Marshall et al. 2009, J. Hosegood et al. 2019 at https://doi.org/10.1101/458141), and the conservation status of this species is unknown, warranting further research into the life history and genetics of this population.
Systemic Immune-Inflammation Index and Circulating T-Cell Immune Index Predict Outcomes in High-Risk Acral Melanoma Patients Treated with High-Dose Interferon High-dose interferon alfa-2b (IFN-α-2b) improves the survival of patients with high-risk melanoma. We aimed to identify baseline peripheral blood biomarkers to predict the outcome of acral melanoma patients treated with IFN-α-2b. Pretreatment baseline parameters and clinical data were assessed in 226 patients with acral melanoma. Relapse-free survival (RFS) and overall survival (OS) were assessed using the Kaplan-Meier method, and multivariate Cox regression analyses were applied after adjusting for stage, lactate dehydrogenase (LDH), and ulceration. Univariate analysis showed that neutrophil-to-lymphocyte ratio ≥2.35, platelet-to-lymphocyte ratio ≥129, systemic immune-inflammation index (SII) ≥615 × 10^9/l, and elevated LDH were significantly associated with poor RFS and OS. The SII is calculated as follows: platelet count × neutrophil count/lymphocyte count. On multivariate analysis, the SII was associated with RFS [hazard ratio (HR) = 1.661, 95% confidence interval (CI): 1.066-2.586, P = .025] and OS (HR = 2.071, 95% CI: 1.204-3.564, P = .009). Additionally, we developed a novel circulating T-cell immune index (CTII) calculated as follows: cytotoxic T lymphocytes/(CD4+ regulatory T cells × CD8+ regulatory T cells). On univariate analysis, the CTII was associated with OS (HR = 1.73, 95% CI: 1.01-2.94, P = .044). The SII and CTII might serve as prognostic indicators in acral melanoma patients treated with IFN-α-2b. The indexes are easily obtainable via routine tests in clinical practice. Introduction Malignant melanoma is a highly aggressive skin cancer, and the global incidence rate is increasing by 3% to 5% annually [1]. Patients with thick primary lesions, ulcerated lesions, or regional metastases have a high risk of relapse [1].
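The indexes defined in the abstract reduce to simple ratios of routine blood counts. A minimal sketch with hypothetical patient values (the function names are our own; the cutoffs are the univariate thresholds reported above):

```python
def inflammation_indexes(neutrophils, lymphocytes, platelets):
    """Blood-count indexes (counts in 10^9 cells/l) as defined in the text."""
    return {
        "NLR": neutrophils / lymphocytes,            # neutrophil-to-lymphocyte ratio
        "PLR": platelets / lymphocytes,              # platelet-to-lymphocyte ratio
        "SII": platelets * neutrophils / lymphocytes # systemic immune-inflammation index
    }

def ctii(ctl, cd4_tregs, cd8_tregs):
    """Circulating T-cell immune index: CTLs / (CD4+ Tregs x CD8+ Tregs)."""
    return ctl / (cd4_tregs * cd8_tregs)

# Hypothetical patient, counts in 10^9/l
idx = inflammation_indexes(neutrophils=4.2, lymphocytes=1.5, platelets=250)
above_cutoffs = idx["NLR"] >= 2.35 and idx["PLR"] >= 129 and idx["SII"] >= 615
print(idx, above_cutoffs)
```

For this hypothetical patient, SII = 700 × 10^9/l, above the ≥615 × 10^9/l cutoff associated with poorer RFS and OS.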
In particular, patients with stage IIB to IIIC have the highest recurrence risk, with postsurgical relapse rates of 40% to 55% and 40% to 80%, respectively [2]. The clinical characteristics and prognosis of Asian patients show significant variations from those of Caucasian patients [3,4]. Acral melanoma is rarely observed in Caucasians but is the most commonly diagnosed pathological subtype in Asians, accounting for 47.5% to 65% of melanoma cases [5,6]. Furthermore, non-Caucasian melanoma patients exhibit a worse prognosis than Caucasian melanoma patients, and effective adjuvant treatment strategies for them are still lacking [7,8]. Presently, interferon alfa-2b (IFN-α-2b) is the only drug approved by the US Food and Drug Administration for the adjuvant treatment of high-risk postoperative melanoma. A meta-analysis of 14 randomized controlled trials concluded that IFN-α-2b was significantly associated with improved disease-free survival and overall survival (OS) [9]. Moreover, 1-year administration of IFN-α-2b was clinically beneficial in Asian patients with stage IIIB to IIIC acral melanoma or with ≥3 nodal metastases, which is quite different from the Caucasian population [10,11]. However, there remain some controversies about using adjuvant interferon therapy, such as significant toxicities and financial burdens. Therefore, it is crucial to investigate prognostic biomarkers that can identify patients who are more likely to benefit from adjuvant interferon therapy. It is clear that systemic inflammatory responses are a vital determinant of disease progression and survival in most cancers [12]. Infiltrating inflammatory cells in the immune system are increasingly recognized to be generic constituents of tumors that have opposing functions, as both tumor antagonists and promoters [13,14]. 
Therefore, several immune-based prognostic scores, such as neutrophil count, lymphocyte count, neutrophil-lymphocyte ratio (NLR), platelet-lymphocyte ratio (PLR), monocyte-lymphocyte ratio (MLR), systemic immune-inflammation index (SII), prognostic nutritional index (PNI), and circulating CD4+ T- and CD8+ T-cell counts have been developed to predict the prognosis in several cancers, including melanoma [15][16][17][18][19][20]. However, such parameters have never been utilized to predict outcome in acral melanoma patients treated with adjuvant interferon therapy. Moreover, the potential effects of peripheral lymphocytes, neutrophils, platelets, CD4+ regulatory T cells (CD4+ Tregs), CD8+ regulatory T cells (CD8+ Tregs), and cytotoxic T lymphocytes (CTLs) on melanoma recurrence and metastasis have not been explored. In this study, we developed a novel index, the circulating T-cell immune index (CTII), that is based on CD4+CD25+ regulatory T cells (CD4+ Tregs), CD8+CD28− regulatory T cells (CD8+ Tregs), and CD8+CD28+ cytotoxic T lymphocytes (CTLs). We found that the SII and CTII were promising independent predictive factors of prognosis of the patients with acral melanoma who had undergone adjuvant interferon therapy. Patients The study was approved by the medical ethics committee of Peking University Cancer Hospital & Institute. Written informed consent was obtained from all participants. We retrospectively reviewed the medical records of 226 patients with high-risk acral melanoma who visited Peking University Cancer Hospital between October, 2010, and October, 2016. All patients diagnosed with melanoma were confirmed histopathologically. All methods were performed in accordance with the relevant guidelines and regulations. To ensure that the whole blood parameters were representative of normal baseline values, none of the patients had lymphatic system disorders or malignant hematologic diseases. Furthermore, all of the patients were treatment-naïve. 
Study Design This was a retrospective, single-center study. Patients were divided into two groups according to IFN-α-2b dose. Cohort A (152 patients) received 4 weeks of intravenous induction therapy of IFN-α-2b (15×10⁶ U/m²/d, 5 days per week); Cohort B (74 patients) received 4 weeks of IFN-α-2b intravenous induction therapy (15×10⁶ U/m²/d, 5 days per week), followed by 48 weeks of subcutaneous maintenance therapy at a dose of 9×10⁶ U, 3 times per week. The dosage was based on that used in a previous clinical trial [11] as well as on our own clinical experience in Chinese melanoma patients [10]. The dosage was lower than the standard high-dose IFN dosage applied in the Eastern Cooperative Oncology Group trial [21,22] but was more suitable for Chinese patients since they generally cannot tolerate the standard dosage owing to its toxicity. The baseline parameters, including demographics, routine hematologic test results, CD4+ Tregs, CD8+ Tregs, CTLs, liver function parameters, and clinical history, were all obtained. The following parameters were collected for analysis: age, sex, date of melanoma diagnosis and date of death or last follow-up, American Joint Committee on Cancer (AJCC) M stage, serum lactate dehydrogenase (LDH), ulceration, and clinical history. Parameters were collected from data on routine hematologic tests that were performed at the time of initial diagnosis and before the adjuvant high-dose interferon treatments. Six inflammatory factors (NLR, PLR, SII, MLR, PNI, and CTII) were included in this analysis. These inflammatory factors were calculated as follows: NLR = N/L; PLR = P/L; SII = P × N/L; MLR = M/L; PNI = albumin + 5 × L; and CTII = CTLs/(CD4+ Tregs × CD8+ Tregs), where N, L, M, and P are the peripheral neutrophil, lymphocyte, monocyte, and platelet counts, respectively. Statistical Analysis Two end points were analyzed: OS and relapse-free survival (RFS). 
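As a concrete sketch, the six inflammatory factors defined above can be computed directly from routine blood values. The function below is illustrative (names and example values are invented, not patient data):

```python
def inflammatory_indices(neutrophils, lymphocytes, monocytes, platelets,
                         albumin, ctls, cd4_tregs, cd8_tregs):
    """Compute the six peripheral-blood indices defined in the text.

    Cell counts are in 10^9/l and albumin in g/l; CTLs, CD4+ Tregs and
    CD8+ Tregs follow the paper's notation. Names are illustrative.
    """
    return {
        "NLR": neutrophils / lymphocytes,
        "PLR": platelets / lymphocytes,
        "SII": platelets * neutrophils / lymphocytes,
        "MLR": monocytes / lymphocytes,
        "PNI": albumin + 5 * lymphocytes,
        "CTII": ctls / (cd4_tregs * cd8_tregs),
    }

# Plausible (invented) baseline values for one patient:
idx = inflammatory_indices(neutrophils=4.2, lymphocytes=1.5, monocytes=0.4,
                           platelets=220, albumin=42, ctls=0.35,
                           cd4_tregs=0.05, cd8_tregs=0.04)
# e.g. idx["SII"] = 220 * 4.2 / 1.5 = 616, just above the 615 cutoff
```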
OS was defined as the time from melanoma diagnosis to death due to any cause or until October, 2016, for patients who remained alive (censored). RFS was calculated from the time of initial treatment until the time of disease relapse or death due to any cause, or until October, 2016, for patients who remained alive (censored). Statistical evaluation was conducted with IBM SPSS statistical software (version 20.0). The t test was used to analyze mean values for normally distributed continuous variables, while the Mann-Whitney U test was used to compare mean values for non-normally distributed continuous variables. OS and RFS curves were estimated with the Kaplan-Meier method. Prognostic parameters associated with OS and RFS were assessed by both Cox univariate and multivariate analyses. Only possible prognostic factors associated with OS and RFS were subjected to Cox multivariable analysis. The R software was used to determine the cutoff values of the parameters associated with OS and RFS. The results are presented as hazard ratio (HR) with 95% confidence interval (CI). Receiver operating characteristic (ROC) curve analysis was used to evaluate predictive values of potential parameters for acral melanoma prognosis. For all statistical tests, P<.05 (two-tailed test) was considered statistically significant. Patient Characteristics A total of 226 patients with acral melanoma were enrolled in this study; 152 patients received the 4-week regimen, and 74 patients received the 1-year regimen. The median RFS and OS were 22.3 and 47.2 months, respectively. Patient characteristics are summarized in Table 1. There was no significant difference in OS and RFS between treatment arms. Therefore, all patients were subjected to prognostic factor analysis, regardless of their treatment arm. 
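The Kaplan-Meier estimates mentioned above can be sketched as a small product-limit computation. This is a generic illustration with invented follow-up data, not the authors' SPSS workflow:

```python
def kaplan_meier(times, events):
    """Product-limit survival estimate.

    times: follow-up times (e.g. months); events: 1 = relapse/death
    observed, 0 = censored. Returns (time, S(t)) at each event time.
    """
    at_risk = len(times)
    surv, curve = 1.0, []
    # Process chronologically; at ties, count events before censorings.
    for t, e in sorted(zip(times, events), key=lambda te: (te[0], -te[1])):
        if e:
            surv *= (at_risk - 1) / at_risk
            curve.append((t, surv))
        at_risk -= 1  # the subject leaves the risk set either way
    return curve

# Five hypothetical patients: relapses at 5, 8 and 12 months,
# censored follow-up at 8 and 16 months.
curve = kaplan_meier([5, 8, 8, 12, 16], [1, 1, 0, 1, 0])
```

At each event time the survival probability is multiplied by the fraction of at-risk subjects surviving that time; censored subjects simply leave the risk set without a step in the curve.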
Association of NLR, PLR, SII, MLR, PNI, and CTII with RFS and OS We used the R software to determine the cutoff values of lymphocyte count, neutrophil count, NLR, PLR, SII, MLR, PNI, and CTII for the prediction of RFS and OS based on the data of the 226 melanoma patients. We transformed the continuous data to dichotomous data by employing cutoff values. On univariate Cox analyses, the NLR, PLR, SII, LDH, ulceration, and AJCC M stage were significantly associated with the RFS and OS of patients with acral melanoma (Figures 1 and 2). The CTII was only associated with the OS of patients with acral melanoma (P=.044). The results of the univariate analyses are shown in Table 2. Factors found significant on univariate analysis were subjected to multivariate Cox proportional hazards analysis. As shown in Table 3, the SII remained independently associated with RFS and OS. Comparison of SII and CTII in Different Acral Melanoma Subgroups As AJCC M stage and tumor recurrence were significantly associated with prognosis in patients with acral melanoma, we compared the SII and CTII in different patient subgroups that were created based on the clinicopathological features (Figure 3). We found that the SII and CTII in stage III patients as well as in those who experienced recurrence were higher than in stage II patients and in those without recurrence (all P<.05). This indicated that the SII and CTII may predict melanoma invasiveness and metastatic potential. Discussion We investigated potential prognostic biomarkers of IFN-α-2b therapy in Asian patients with acral melanoma to evaluate the clinical benefit of the therapy on OS and RFS. Several clinical trials have indicated that the median RFS ranged from 20.4 to 30 months for high-risk melanoma [22,23]. 
Congruent with these studies, in the present study, the median RFS in acral melanoma patients treated with high-dose interferon was similar to the lower limit of the RFS range in the Caucasian population [10], which partly confirms that the acral melanoma subtype is associated with a significantly inferior prognosis, as previously suggested [24]. Such prognostic differences might arise because of the variations in the genetics, pathogenesis, and immune microenvironment between different ethnic populations [15,[25][26][27][28][29]. Several studies have shown that pretreatment NLR, neutrophil counts, and lymphocyte counts in patients with melanoma are valid prognosticators [15,16]. The SII, which is based on lymphocyte, neutrophil, and platelet counts, has not been investigated extensively in melanoma patients; we are the first to verify its role in predicting RFS and OS in such patients. The predictive value of the SII was shown to be higher than that of the NLR, PLR, and other conventional parameters such as serum LDH and ulceration. Moreover, the SII value is based on measures that are easily obtained during routine laboratory tests in clinical practice. Therefore, the SII ought to be a simple, low-cost, and effective biomarker that may assist in the surveillance of patients most likely to relapse or to benefit from adjuvant interferon therapy. This might also contribute to early and accurate decision-making concerning the most effective treatment strategy. Recent evidence indicates that infiltrating immune system cells present in the tumor microenvironment synergistically promote tumor progression. Tumor-promoting immune cells include macrophages, platelets, neutrophils, and T and B lymphocytes, which produce a tumor microenvironment attractive for tumor growth and metastasis and facilitate angiogenesis [13,[30][31][32][33]. 
Furthermore, some studies showed that immune cells facilitate tumor progression by releasing a series of molecules, such as the proangiogenic vascular endothelial growth factor, the proinvasive matrix degrading enzyme matrix metalloproteinase-9, and other cytokines [32,34]. Meanwhile, activated T cells and other lymphocytes demonstrate potent antitumor effects [35]. The balance between these opposing immune inflammatory responses in tumors is likely to be crucial for accurate prognosis as well as for determining appropriate antitumor treatments [12]. A better understanding of the role of infiltrating immune system cells ought to help clarify the association between cancer, immunity, and inflammation [17]. In our study, Cox univariate and multivariate analyses indicated that the SII was significantly associated with the outcome of melanoma. The CTII was also shown to be a predictive factor for OS. Additionally, we found that elevated SII and CTII values were associated with tumor vascular invasion and recurrence, indicating a more aggressive phenotype [36,37]. A recent study indicated that increased absolute lymphocyte counts concordant with delayed increases in CD4+ and CD8+ T cells are associated with a positive outcome in advanced melanoma patients treated with ipilimumab [18]. Patients with metastatic melanoma and a high baseline NLR also appeared to benefit from immunotherapy with agents such as ipilimumab [16]. Therefore, the appropriate predictive biomarkers may help select the appropriate therapies (or sequences). Such predictive biomarkers may also serve to expedite decisions on whether to continue a particular therapy or switch to alternative options. The limitations of this research include its retrospective nature and small sample size, which could produce selection biases. Moreover, the prognostic power of the NLR, PLR, SII, and CTII for the outcome of melanoma patients was not strong. 
A recent study indicated that the underlying mechanism through which elevated SII is associated with a poorer prognosis is an increase in the dissemination of tumor cells into the circulation, allowing such cells to escape immune surveillance and increase peripheral circulating tumor cell levels [17]. Therefore, we hypothesized that additional biomarkers such as circulating tumor cell levels could be combined with SII and CTII in order to improve the prognostic accuracy. Measuring changes in specific immune-related parameters during therapy can improve the real-time assessment of the drug's benefit. Martens et al. found that increases in absolute lymphocyte counts observed 2 to 8 weeks after ipilimumab initiation, combined with delayed increases in CD4+ and CD8+ T cell levels, are indicators of a positive outcome in metastatic melanoma patients [18]. Thus, further prospective, well-designed studies with larger populations focused on changes in SII and CTII during therapy are warranted. In conclusion, our study is the first to demonstrate the prognostic significance of the SII and CTII in high-risk acral melanoma patients treated with adjuvant IFN-α-2b. Both SII and CTII are easily assessable in clinical practice. Additional studies are required to clarify the mechanisms behind the association between elevated SII and CTII and poorer prognosis in melanoma patients.
2018-04-03T03:32:59.745Z
2017-07-12T00:00:00.000
{ "year": 2017, "sha1": "82fef89573bef970311fa349925180602d221a44", "oa_license": "CCBYNCND", "oa_url": "https://doi.org/10.1016/j.tranon.2017.06.004", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "82fef89573bef970311fa349925180602d221a44", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
121623569
pes2o/s2orc
v3-fos-license
On the Field Dependence of the Interface Energy in AF/FM Bilayers In the investigations of antiferromagnetic (AF)/ferromagnetic (FM) bilayer samples, distinct experimental techniques often yield different values for the measured exchange anisotropy field (H E ). We propose that the observed discrepancy may be accounted for in part by the dependence of the unidirectional anisotropy on the value of the external applied field (h). Using a simple microscopic model for representing the AF/FM interface, which incorporates the effect of interface roughness, we show that the interface energy between the AF and FM layers indeed varies with h, as recently observed in anisotropic magnetoresistance measurements, lending support to our proposal. Exchange bias or unidirectional anisotropy, which is characterized among other effects by the offset of the magnetic hysteresis loop from zero field, has become in recent years an important issue both technologically and in condensed matter physics. A thorough understanding of all the variables at play is still a debated topic (see the reviews in [1][2][3][4]). It seems that due to the diversity of systems exhibiting the phenomenon there may be more than just one mechanism generating it. Theoretical models have considered both compensated and uncompensated interfaces, single-crystal and polycrystalline systems, spin-flop coupling, interface roughness, and magnetic domains in the antiferromagnetic layer [5]. Up to now there is no definitive model accounting for all the richness of effects observed. In this work we employ a simple Ising model [6], which displays many of the hallmarks of systems exhibiting unidirectional anisotropy, to study the external-field dependence of the interface energy. 
In figure 1a we illustrate the usual model for an uncompensated AF interface and interlayer interactions, and in figure 1b for a compensated AF interface with ferromagnetic interlayer interaction, both including the effect of disorder or roughness at the interface. It is clear for the case in figure 1a that the directional symmetry is broken as long as the AF structure is not inverted by the external field. In the case of figure 1b the broken directional symmetry may look a little more subtle because it depends crucially on the interface disorder: it resembles the situation illustrated in figure 1c, where the heights may be seen as local magnetizations of the AF or FM. It is well known that in the investigation of samples of antiferromagnetic (AF)/ferromagnetic (FM) bilayers, distinct experimental techniques often yield different values for the measured exchange anisotropy field (H E ). This intriguing fact has been interpreted as arising from the distinct natures of the experimental techniques: some probe reversible properties of the system while others probe irreversible ones. Here we propose another reason for the observed discrepancy, namely the dependence of the interface energy between the AF and FM layers on the value of the external applied field (h), and the fact that each experimental technique employs a different field range. We use a further simplification of the model represented in figure 1, one which still retains its essential features, such as the effect of roughness at the AF/FM interface [6]. This model has been used earlier to study the thermal-history-dependent properties observed in exchange-coupled AF/FM bilayers. Consider two atomic monolayers with magnetic moments over congruent square lattices, one layer with ferromagnetically coupled moments and the other with two perfectly compensated antiferromagnetic sublattices. 
The moments from different layers are coupled by an interlayer exchange interaction, which can be FM or AF. The interface roughness is accounted for by randomly substituting a fraction of the atoms in the FM layer by atoms from the AF layer. The interlayer exchange interaction is given by equation (1), where the sum is over all sites at the interface and J c represents the coupling between FML and AFL atoms. All energies shall be measured in units of J 1 . The many-body problem posed by the model expressed by equations (1)-(5) is far from trivial. As in random-field magnets and spin glasses (behaviours also observed in some exchange-biased systems [7], with even a typical spin-glass Almeida-Thouless line [8] being detected [9]), the presence of randomness results in a complex phase space with strong metastability effects always present. The last terms in equations (3) and (4) act like an effective random field in the system and explicitly break time-reversal symmetry in the ferromagnetic sub-system, i.e., as long as the AF structure is not altered by the external field, the FM structure is not invariant under a change of magnetization direction, giving origin to the unidirectional anisotropy, as argued in [10] in terms of random fields acting at the interface. Following the mean-field approach of Soukoulis et al [11] in their studies of random magnets, the local thermally averaged magnetization at temperature T is given by equations (6a) and (6b) in terms of the local field in the mean-field approximation; µ = 1,2 specifies, respectively, FML or AFML atoms, the sum over (ij) runs over nearest neighbours, J ij is defined in equation (4), and T is measured in units of the Boltzmann constant. Following the same procedure as in [6,11], equations (6a) and (6b) are solved numerically by an iterative method, yielding the local and macroscopic magnetizations. As should be expected from the set of nonlinear equations (6), there are many possible local arrangements for the spins and thus effects of irreversibility and metastability set in. 
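The iterative mean-field scheme can be sketched as follows. For brevity this toy version uses a single 1D ring of Ising moments rather than the two coupled layers of the model, and all parameters (J, h, T, ring size) are illustrative:

```python
import math

def solve_mean_field(J, h, T, n=20, sweeps=2000, tol=1e-10):
    """Iterate m_i = tanh((h + sum over NN j of J*m_j) / T) to a fixed point.

    A 1D ring of n Ising moments with nearest-neighbour coupling J,
    uniform external field h and temperature T (k_B = 1): a toy
    stand-in for the coupled-layer mean-field equations.
    """
    m = [0.1] * n  # small uniform seed to break the m = 0 symmetry
    for _ in range(sweeps):
        new = [math.tanh((h + J * (m[(i - 1) % n] + m[(i + 1) % n])) / T)
               for i in range(n)]
        if max(abs(a - b) for a, b in zip(new, m)) < tol:
            return new
        m = new
    return m

# Ordered (low-T) and nearly paramagnetic (high-T) solutions:
ordered = solve_mean_field(J=1.0, h=0.2, T=1.0)
disordered = solve_mean_field(J=1.0, h=0.2, T=5.0)
```

Starting the iteration from different initial configurations converges to different metastable fixed points, which is how the hysteretic, thermal-history-dependent behaviour of the model shows up in practice.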
As in most experimentally studied systems, the model parameters were chosen accordingly; from figures 4a and 4b (the S = 2 case) we obtain the field dependence of the interface energy for several values of the interlayer exchange coupling. As might have been expected, this quantity is field dependent; the results in figures 3a and 4a agree qualitatively with the experimentally observed results [12]. However, new theoretical results are shown in Fig. 3(b) and Fig. 4
2019-04-14T02:03:31.945Z
2003-09-23T00:00:00.000
{ "year": 2003, "sha1": "15d1e2a0a994109da71ef7bbf5d2e940e931354e", "oa_license": null, "oa_url": "http://arxiv.org/pdf/cond-mat/0309535", "oa_status": "GREEN", "pdf_src": "Arxiv", "pdf_hash": "d0f4591cd0ee9bb6af5fa3089c52e6f7034b8720", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Materials Science", "Physics" ] }
4098224
pes2o/s2orc
v3-fos-license
Capturing patient-reported area of knee pain: a concurrent validity study using digital technology in patients with patellofemoral pain Background Patellofemoral pain (PFP) is often reported as a diffuse pain at the front of the knee during knee-loading activities. A patient's description of pain location and distribution is commonly drawn on paper by clinicians, which is difficult to quantify, report and compare within and between patients. One way of overcoming these potential limitations is to have the patient draw their pain regions using digital platforms, such as personal computer tablets. Objective To assess the validity of using computer tablets to acquire a patient's knee pain drawings as compared to paper-based records in patients with PFP. Methods Patients (N = 35) completed knee pain drawings on identical images (size and colour) of the knee as displayed on paper and a computer tablet. Pain area, expressed as pixel density, was calculated as a percentage of the total drawable area for paper and digital records. Bland–Altman plots, intraclass correlation coefficient (ICC), Pearson's correlation coefficients and one-sample tests were used in data analysis. Results No significant difference in pain area was found between the paper and digital records of mapping pain area (p = 0.98), with a mean difference of 0.002% (95% CI [−0.159% to 0.157%]). Agreement in pain area between paper and digital pain drawings was very high (ICC = 0.966, 95% CI [0.93–0.98], F = 28.834, df = 31, p < 0.001). A strong linear correlation (R² = 0.870) was found for pain area, and the limits of agreement show less than ±1% difference between paper and digital drawings. Conclusion Pain drawings as acquired using paper and computer tablet are equivalent in terms of total area of reported knee pain. The advantages of digital recording platforms, such as quantification and reporting of pain area, could be realized in both research and clinical settings. 
INTRODUCTION Pain drawings that capture area, location and distribution of pain can be used to aid diagnosis and track changes over time (e.g. after a course of treatment) (Margolis, Tait & Krause, 1986; Abbott et al., 2015; Southerst et al., 2013; MacDowall et al., 2017). Pain drawings are commonly captured on paper-based body schemas, charts or sketched diagrams (Thompson et al., 2009; Wood et al., 2007; Sengupta et al., 2006; Creamer, Lethbridge-Cejku & Hochberg, 1998; Post & Fulkerson, 1994; Elson et al., 2011). Despite studies reporting paper-based pain drawings to have good to very good inter-rater and intra-rater reliability, they make it difficult for clinicians to quantify and compare pain areas within and between patients (Thompson et al., 2009; Wood et al., 2007; Sengupta et al., 2006; Creamer, Lethbridge-Cejku & Hochberg, 1998; Post & Fulkerson, 1994; Elson et al., 2011). One way of overcoming these limitations is to have the patient draw the area and location of their pain on digital platforms, such as personal computer (PC) tablets. PC tablets offer considerable advantages to patients in health care settings. The advancements of touch-screen technology, such as those employed in smart-phones and hand-held PC tablets, make it possible to easily acquire, quantify, report and compare patient-completed pain drawings. Recent studies have investigated pain drawings recorded on digital platforms and shown reliability commensurate with that of paper drawings (Boudreau et al., 2016; Boudreau, Kamavuako & Rathleff, 2017). A study using high-resolution and contoured (3D) images of the knee found a high proportion of patients with patellofemoral pain (PFP) reported knee pain in mirrored locations and that the pain drawings were exceptionally symmetrical (Boudreau, Kamavuako & Rathleff, 2017). Interestingly, when the knee pain drawings were compared to duration of symptoms, those with a longer duration of symptoms appeared to draw patterns more closely resembling an 'O' shape. 
Given that longer duration of PFP symptoms has been shown to be a prognostic indicator of a poor outcome (Matthews et al., 2016), capturing pain areas digitally will allow quantification and real-time insight into the patient's condition, which may optimise patient management. The aim of this study was to assess the concurrent validity of paper and digital pain drawings in patients with PFP. The hypothesis was that the area of knee pain in patients with PFP acquired using hand-held PC tablets (digital drawings) is equivalent to that acquired using pen and paper. METHODS This concurrent validity study investigated the agreement of pain area between patient-completed paper and digital pain drawings. To minimise order and learning effect bias, the order of completing paper and digital pain drawings was randomised with approximately 1-2 min between drawings. The reporting of the study follows the Guidelines for Reporting Reliability and Agreement Studies (Kottner et al., 2011). Participants A consecutive sample of 35 patients from a clinical trial (Matthews et al., 2017) were recruited from the community of Aalborg, Denmark via public advertising or referred from sports medicine clinics and general practitioners. A musculoskeletal physiotherapist with experience in managing patients with PFP screened participants for inclusion into the study. Inclusion criteria were: (1) aged between 18 and 40 years with a history of non-traumatic anterior retro- or peripatellar knee pain of greater than six weeks' duration, (2) self-reported worst pain over the previous week equal to or greater than 3 out of 10 on a numerical pain scale (0 = no pain, 10 = worst pain imaginable), (3) symptoms provoked by at least two of the following activities: squatting, running, stair ascending or descending, or prolonged sitting. Individuals were excluded if they had any one of the following: concomitant injury or pathology of other knee structures (e.g. 
ligament, meniscal, tendon, iliotibial band, pes anserinus, fat pad), or a history of knee surgery, patellofemoral dislocation or subluxation, Osgood-Schlatter's disease, Sinding-Larsen-Johansson syndrome, a positive patellar apprehension test or evidence of knee joint effusion. The Ethics Committee in the North Denmark Region and the Danish Data Agency approved the study (N-20140022). All participants were provided with verbal and written information about the procedures of the study, and written informed consent was obtained prior to data collection. Data collection The participants were instructed by a second musculoskeletal physiotherapist to complete in a randomised order a paper and digital pain drawing to the best of their abilities. The verbal instruction given to the patients for both the paper and digital pain drawings was 'please use the pen to draw on the paper/screen where you most often experience your knee pain'. Digital pain drawings were performed on a PC tablet (Samsung Galaxy Note 10.1, Android 4.1.2) that displayed a 3D body schema of the lower torso (from the anterior superior iliac spine prominences, and below, such that contours of the left and right legs and knees were clearly visible) (Fig. 1). Participants used a permanent red marker with a 1 mm thick felt tip (Edding 400, Wunstorf, Germany) for the paper drawings and an S Pen™ that accompanied the tablet so as to control for line thickness and to enable precise drawings. In order to enable a valid comparison (i.e. 
like with like), three a priori calibrations were performed: (i) the thickness of the line created by the S Pen™ on the tablet was set to equal the thickness of the permanent red marker on the paper, (ii) the S-Pen drawing on the digital device created red 'pixels' to indicate the patient's knee pain location, area and distribution and (iii) the image of the lower body schema as displayed on paper was scaled to the same size as the lower body schema displayed on the computer tablet. Sample-size The hypothesis is that a hand-held PC tablet is a comparable and equally valid method for acquiring patient-completed pain drawings as compared to pen and paper records. Based on a previous study (Boudreau et al., 2016), it is hypothesized that the difference in pain area between these methods will not be greater than 1%. We used an equivalency sample-size calculation to determine the sample-size required to show that there were no clinically relevant differences between pain area(s) collected by paper vs. pain area(s) collected using a computer tablet. The data used for the sample-size calculation were based on means and standard deviations of two PFP groups collected in a pilot study. Using a conservative correlation factor of 0.5 between drawings, an equivalency limit of 6,720 pixels and an SD of 4,451 pixels at 5% significance and 95% power, it was necessary to collect 35 paper-digital pairs of pain drawings. Data management and analysis To compare pain areas between paper and digital records, we calculated the pixel density of the scanned paper and digitally acquired pain drawings. The pixel densities of the area of the pain drawing on both paper and digital media were expressed as a percentage of the total area of a blank body map of the lower body schema (i.e. a reference standard). 
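The pixel-density computation described above reduces to counting marked pixels and dividing by the drawable area of the body schema. A minimal sketch on a synthetic image (the study's actual workflow used Photoshop and the Navigate Pain software; names and the tiny 4×4 example are invented):

```python
def pain_area_percent(image, drawable_mask, mark=(255, 0, 0)):
    """Pain area as a percentage of the drawable body schema.

    image: rows of RGB tuples; drawable_mask: rows of booleans that are
    True inside the body schema (the blank schema serves as the
    reference standard for the total drawable area).
    """
    marked = drawable = 0
    for img_row, mask_row in zip(image, drawable_mask):
        for pixel, inside in zip(img_row, mask_row):
            if inside:
                drawable += 1
                if pixel == mark:  # a red "pain" pixel
                    marked += 1
    return 100.0 * marked / drawable

# Synthetic 4x4 image: 12 drawable pixels, 3 of them marked red.
RED, WHITE = (255, 0, 0), (255, 255, 255)
image = [[RED, RED, WHITE, WHITE],
         [RED, WHITE, WHITE, WHITE],
         [WHITE, WHITE, WHITE, WHITE],
         [WHITE, WHITE, WHITE, WHITE]]
mask = [[True] * 4, [True] * 4, [True] * 4, [False] * 4]
area = pain_area_percent(image, mask)  # 3 / 12 -> 25.0%
```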
Any pain areas that were ambiguous in terms of the boundaries and extent of pain were excluded, such as cross-hatching with unfilled areas, as shown in Fig. 2. Assessment of pain area for paper drawings One investigator (MM) who was not involved in data collection and was blind to the computer tablet records processed the paper records to determine the pixel density of the paper-based pain recordings. This investigator scanned all the paper records for subsequent determination of pixel density from the digital record. Paper drawings were scanned at 300 ppi, saved as a PDF file and imported into Adobe Photoshop CC (2015.1; Adobe Systems, San Jose, CA, USA) for analysis. The pen selection function was used to trace a path of the body schema and pain area to create a 'selected area' from which the pixel density was calculated. First, a reference standard of pixel density for the paper version of the lower body schema was created by scanning an unused paper version of the lower body schema. The total pixel density was calculated three times and then averaged. The pain area for each participant's paper record was traced and pixel density calculated. Assessment of pain area of digital drawings The Navigate Pain™ software preloaded on the PC tablet automatically calculates the red pixels associated with the pain drawings. The red pixels are also expressed relative to the total pixel area (total drawable area) of the lower body schema. The percent and absolute number of pixels were exported directly into an Excel document for data analyses. Data analysis One-sample t-tests were used to compare the difference in pixel density between paper and computer tablet recordings of the patient's pain drawings. Intraclass correlation coefficient (ICC) using an absolute-agreement, two-way mixed model was used to determine the agreement between paper and digital platforms. 
Pearson's correlation coefficients were used to express the degree of linear association between the two methods (Koo & Li, 2017). Limits of agreement (LoAs), using Bland-Altman plots, were used to express the agreement between the paper and computer tablet methods. The LoAs were presented as a range indicating the maximal potential difference between the two methods in 95% of the ratings. All statistics were performed in IBM SPSS Statistics, version 24 (IBM Corp., Armonk, NY, USA), and α = 0.05 was used as the level of significance.

RESULTS

Thirty-five participants were recruited into the study. Three participants were excluded, as they did not follow the drawing instructions (Fig. 2). One participant was excluded due to the use of arrows in their paper drawing to indicate a pain area (Fig. 2A). One participant was excluded because their paper drawing had incomplete circles with scribbled lines, leaving it unclear whether it truly represented their pain area (Fig. 2B). One participant was excluded due to the ambiguous use of zigzag lines in both the paper and digital drawings to indicate pain area (Fig. 2C). The remaining 32 participant drawings were analysed (Fig. 3). Participants were predominantly female (78%), with a mean age of 24.5 (5.6) years, a BMI of 23.7 (3.4) and an average symptom duration of 69.7 (range 2-192) months. Twenty-five of the 32 participants reported and marked bilateral symptoms. There was very high agreement in pain area between paper and digital pain drawings, as reflected by an ICC of 0.966 (95% CI [0.93-0.98], F = 28.834, df = 31, p < 0.001). There was a strong linear correlation in pain area between paper and digital pain drawings (R = 0.93, p < 0.0001) (Fig. 4). The drawings with the largest difference (1.1%) and smallest difference (0.05%) in pain area between paper and digital pain drawings are depicted in Fig. 5.

DISCUSSION

This study found minimal differences between pain area recordings made on paper and on a PC tablet.
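The Bland-Altman limits of agreement used in the analysis above follow a standard recipe: bias ± 1.96 × SD of the paired differences. A minimal sketch, with hypothetical paired pain areas rather than the study's data:

```python
import statistics

def bland_altman_limits(method_a, method_b):
    """Bias and 95% limits of agreement for paired measurements."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)            # sample SD of the differences
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paper vs. digital pain areas (% of body schema).
paper   = [10.0, 12.0, 11.0, 13.0]
digital = [10.5, 11.5, 11.0, 12.5]
bias, lo, hi = bland_altman_limits(paper, digital)
```

The interval (lo, hi) is the range within which roughly 95% of paper-digital differences are expected to fall, which is what the Bland-Altman plots in this study report.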
Results indicate that any difference in area is likely to be less than ±1%. These results support the hypothesis and provide an important first step towards validation of digitally acquired pain drawings for pain assessment in the knee. Digital pain drawings offer the advantages of being easily acquired and quickly quantified and interpreted, to assist in clinical diagnosis and comparison over time. Previous studies have investigated paper-based pain mapping to assist in the clinical diagnosis of knee and shoulder pain (Elson et al., 2011; Bayam et al., 2011, 2017). Participants were instructed to use small crosses ('X') (Elson et al., 2011) or symbols (Bayam et al., 2011, 2017) to mark out their pain location, type, distribution and severity, using several 'X's if pain was present in more than one location. Once the drawing was made, a grid-like template was used for categorizing anatomical zones of the knee. In its simplest form, placement of 'X's allows quick reporting and the identification of the general location of the pain. However, the utilisation of 'X's to mark out pain area limits the accuracy with which the patient can express their pain distribution and raises doubt about its diagnostic utility. By using a digital method in the current study, patients were able to fully express their perceived pain location and distribution and were not restricted to simple 'X's. Although not a focus of the present study, digital drawings could improve the diagnostic accuracy of pain drawings and be potentially useful and cost-effective as an adjunct tool to quantify and interpret a patient's pain. An unexpected observation in this study was the variability of the individual pain drawings in a cohort with a homogenous diagnosis. Of the 32 participants, 23 (72%) drew pain areas on both knees, with 12 (38%) patients drawing pain in two or more locations in the same knee. This observation has also been made in previous studies (Thompson et al., 2009; Elson et al., 2011).
Thompson et al. (2009) reported patients indicating pain in two or three locations or two regions, with several participants drawing three areas of local pain.

Figure 3: The variability of the 32 digital knee pain drawings, from patients diagnosed with PFP, used to assess pain area between paper and digitally acquired drawings.

By drawing multiple areas, it could appear that participants are expressing diffuse symptoms, multiple locations or different pain types, e.g. sharp or aching types of pain. The variability of these drawings could also be a reflection of the heterogeneous nature of PFP. PFP is an often persistent, multifactorial condition that is diagnosed by its clinical presentation with exclusion of other conditions (Crossley et al., 2016). This simplified approach to diagnosis could lead to the captured pain drawings expressing a variety of local nociceptive, peripheral and central sensitization pain presentations. Whilst the current study compared the percentage of pain area, the change in location of this area between drawings was not assessed. A change in location between two pain drawings has yet to be assessed, even in the earliest reliability studies. How much change in location is acceptable would of course depend on the bodily context, with more or less acceptable deviations depending on the pain being assessed, such as the knee or low back. The variability of the present drawings also warrants further consideration in future studies looking at the relationship between patient-perceived pain drawings, location and diagnosis. The level of anatomical detail displayed on the body schema used in the current study may have contributed to the minimal difference obtained between paper and digital pain drawings.
In a pain-mapping study of a chronic neck pain cohort, high reproducibility was found between paper and digital platforms as well as between simple body outlines and high-resolution contoured body schemas (Boudreau et al., 2016). However, a small fixed negative bias was identified, with drawings performed on paper being slightly smaller than those on the PC tablet, and pain areas were drawn slightly larger on the less-detailed body outline in comparison to the high-resolution body schemas (Boudreau et al., 2016). One explanation for these findings could be that the greater level of anatomic detail of the body schema is more recognizable to the patient. When a patient is able to see important anatomical landmarks, greater accuracy and precision of the pain drawings may result. A key consideration of this study is the verbal instructions given to the participants. The instructions given may have allowed some degree of ambiguity, as is evident from the three of 35 pain drawings that were excluded. These pain drawings were excluded due to unmarked areas within circles and the use of zigzag lines for shading in larger areas. As a recommendation for future studies, the instruction set should be explicitly clear and possibly include an example of a correct and an incorrect pain drawing. For example: 'please draw on the image that best represents the location and area of your pain. Please use solid lines or completely filled-in areas, leaving no clear spaces within the area'. A second consideration for this study was the method of acquiring the pain drawings. Clinicians have traditionally completed pain drawings on body schemas, charts or sketched diagrams (Thompson et al., 2009; Wood et al., 2007; Sengupta et al., 2006; Creamer, Lethbridge-Cejku & Hochberg, 1998; Post & Fulkerson, 1994; Elson et al., 2011). However, these studies identified considerations that warranted attention.
A comparison study found that clinicians drew significantly smaller areas of pain when compared to the patients' drawings (Post & Fulkerson, 1994), suggesting observer bias and filtering of information by the clinician, which may not accurately represent the patient's report. In the current study, this consideration was addressed by asking patients to complete the drawings themselves. It is imperative that future studies of pain drawings, and indeed clinical utilization, ensure that the pain drawings used to guide diagnosis are completed by patients (Post & Fulkerson, 1994). The results of this study open up possibilities regarding the benefits of using digital platforms in clinical examination. By combining touch-screen technology with a highly detailed body schema, pain drawings can be quickly quantified and interpreted to facilitate clinical decisions. In turn, clinicians can easily monitor symptoms by comparing pain drawings within and between patients over time. With the continued development of software, new avenues could be created for research. Future studies could explore and interpret pain drawings to enable the identification of previously unknown pain patterns. Identification of pain patterns could be particularly pertinent in patients with persistent and prevalent conditions such as knee or low back pain. Several studies on patients with low back pain have used pain drawings to locate body regions where patients have experienced pain (Hullemann et al., 2017; Gerhardt et al., 2016). Results suggest pain drawings might help clinicians to understand the patient's underlying mechanism of pain and improve treatment outcomes (Hullemann et al., 2017; Gerhardt et al., 2016). The advantages offered by digital recording platforms, such as automatic quantification and reporting of pain area, could be realized in both clinical settings and research to improve healthcare.

CONCLUSION

This study found knee pain drawings acquired on digital and paper-based platforms to be comparable in area.
This study provides an important first step in the testing of a digital interface that can facilitate the precise communication of patient-perceived knee pain area and location. The use of digital technology in health care, including digital platforms for patient-perceived pain drawing records, opens up many exciting possibilities in clinical and research settings.
Design and optimization of a Holweck pump via linear kinetic theory

The Holweck pump is widely used in the vacuum pumping industry. It can be a self-standing apparatus or part of a more advanced pumping system. It is composed of an inner rotating cylinder (rotor) and an outer stationary cylinder (stator). One of them has spiral guided grooves, resulting in a gas motion from the high towards the low vacuum port. Vacuum pumps may be simulated by the DSMC method, but due to the high computational cost involved, manufacturers commonly resort to empirical formulas and experimental data. Recently a computationally efficient simulation of the Holweck pump via linear kinetic theory has been proposed by Sharipov et al. [1]. Neglecting curvature and end effects, the gas flow configuration through the helicoidal channels is decomposed into four basic flows. They correspond to pressure- and boundary-driven flows through a grooved channel and through a long channel with a T-shaped cross section. Although the formulation and the methodology are explained in detail, results are very limited and, more importantly, they are presented in a normalized way which does not provide the needed information about the pump performance in terms of the involved geometrical and flow parameters. In the present work the four basic flows are solved numerically based on the linearized BGK model equation subject to diffuse boundary conditions. The results obtained are combined in order to create a database of the flow characteristics for a large spectrum of the rarefaction parameter and various geometrical configurations. Based on this database, the performance characteristics which are critical in the design of the Holweck pump are computed, and the design parameters, such as the angle of the pump and the rotational speed, are optimized. This modeling may be extended to other vacuum pumps.
Introduction

The choice of the equipment that is used for the creation and maintenance of vacuum conditions depends on various parameters, such as the required pressure, the throughput and the available time for the process. In many cases, the use of a single vacuum pump is not enough and a combination of pumps is needed. It is common to have a first stage where rough vacuum conditions are created and a second one for the achievement of the desired pressure. The optimization of the design and the operational parameters of the pumps has led to the development of numerical tools for the simulation of the flow in the pump. In many approaches the Navier-Stokes equations have been used with the corresponding slip boundary conditions. This method is well tested, but its range of applicability is limited to rough vacuum conditions. This is due to the fact that at lower pressures the continuum assumption breaks down and the recovered results are not reliable [2]. Another method that can be used for the simulation of flow systems in high vacuum conditions is the mesoscopic approach, with either stochastic or deterministic tools. The first is the Direct Simulation Monte Carlo (DSMC) method [3], where computational molecules move, reflect from solid boundaries and collide with each other so as to statistically mimic the behavior of real molecules. Each model particle in the simulation represents a large number of real molecules in the physical system. The methodology is stochastic in nature, since several modules in the algorithm, such as intermolecular collisions, are modeled in a probabilistic manner using random numbers. The state of the system is defined by the position and velocity vectors of the model particles. The most important drawback of DSMC is that it is appropriate only for relatively high Mach numbers, since otherwise the statistical noise can be significant.
The deterministic approach is based on the solution of the Boltzmann equation or the corresponding kinetic model equations [4,5]. The main unknown is the distribution function, while the macroscopic quantities can be recovered as its moments. The most common and computationally efficient method is to discretize the kinetic equation in the molecular velocity space by the discrete velocity method (DVM) and in the physical space by a finite-difference scheme. This approach is superior to the DSMC method when linearized flows are tackled. In the present paper the simulation of the Holweck pump based on linear kinetic theory is presented. This is a vacuum pump that is used either as a single apparatus or as a first stage of a pumping system. Simulations of the Holweck pump have been carried out especially in recent years. Most of them use either the DSMC method [6,7] or the Navier-Stokes equations [8,9] in order to recover the results, and they refer to various geometries. Recently, the deterministic approach has been implemented [1], but the details of the pump dimensions and the performance characteristics are not provided. The methodology proposed in that paper is followed here and, more specifically, the discrete velocity method (DVM) is applied to solve the BGK kinetic equation, which is valid for the case under investigation since there are no significant temperature variations. In addition, the equations are linearized, which is justified by the fact that the length of the pump channels is much larger than the characteristic length of the cross section and the local forcing term is relatively small. Results are presented for the mass flow rates and the characteristic curves, while a preliminary parametric study is performed for the optimization of the design parameters.

Statement of the problem

The Holweck pump consists of two coaxial cylinders. One of them is stationary and the other one is rotating, while the inner cylinder has helicoidal grooves printed on it.
The rotation causes a flow that can result in a pressure difference between the two ends of the cylinder. The pressure at the high vacuum end is P_h, while the corresponding pressure at the fore vacuum end is denoted by P_f. The flow is fully three-dimensional, but a simplification can be achieved if the effect of the curvature of the cylinder is neglected, as well as the end effects at the inlet and outlet of the pump. This approach can be justified by the fact that the ratio of the characteristic length of the grooves to the radius of the inner cylinder is less than 5%, while the length of the channels is large compared to the characteristic length of their cross section. This approach gives the opportunity to obtain the solution of the whole flow field by integrating the partial solutions over every cross section of the channel. On the other hand, if the equations are non-dimensionalized by the local forcing term, then only one cross section has to be solved, since the flow can be assumed to be a fully developed 3D flow in a duct. This procedure simplifies the solution and decreases drastically the computational time. The exact geometry of the pump is shown in Figs. 1 and 2 and Table 1. The geometry of the pump is identical to the one used in [7] and corresponds to a typical design of the Holweck pump. It is obvious that the same approach can be easily applied to any other configuration. In the present work, this configuration is kept constant except for the angle θ of the channels. This is one of the parameters (the other one is the rotational speed) which are optimized by using the methodology presented in the next paragraphs.

Formulation

As has already been stated, the method followed in the present work is the simulation of the flow by the BGK kinetic equation, which can be deduced from the Boltzmann equation if the collision term is replaced by the BGK model.
Then the equation takes the form

∂f/∂t + ξ · ∂f/∂r = (P/µ)(f_eq − f),

where ξ is the microscopic velocity, P the pressure, µ the gas viscosity, r the position vector and f = f(r, ξ) the distribution function. Finally, f_eq = f_eq(r, ξ) represents the local Maxwellian

f_eq = n(r) [m/(2πkT(r))]^(3/2) exp[−m(ξ − u(r))²/(2kT(r))].

Here m is the molecular mass, k is the Boltzmann constant, n(r) is the number density, u(r) is the macroscopic velocity and T(r) is the temperature. These quantities, as well as the shear stress tensor, can be calculated as moments of the distribution function f. The basic parameter of the flow is the Knudsen number, which determines the rarefaction of the flow. In this work the rarefaction parameter δ is used, which is proportional to the inverse Knudsen number and is defined as

δ = P D_h / (µ0 u_m),

where P is the local pressure, D_h the hydraulic diameter, µ0 the viscosity at the reference temperature T0 and u_m = (2kT0/m)^(1/2) the most probable molecular velocity. Since the length of the channels is large compared to D_h and the speed of the outer cylinder is much smaller than u_m, the kinetic equation can be linearized, non-dimensionalized and solved numerically. On the other hand, taking into account the fact that the flow is linear, a decomposition can be applied so that four subproblems are solved: longitudinal Poiseuille and Couette flow and transversal Poiseuille and Couette flow. Then the results can be combined properly in order to obtain the full solution. This approach, which has been proposed by Sharipov [1], has proved efficient and is followed in the present work. Applying this procedure further simplifies the numerical solution of the flow, since instead of one 3D problem one has to solve four 2D problems. In addition, it gives flexibility to the solution, because parameters such as the angle of the channel and the velocity of the cylinder are taken into account only when the results of the subproblems are combined.
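To make the definition of δ concrete, it can be evaluated directly from the quantities above. A minimal sketch; the gas data (argon at 300 K) and the channel dimensions are illustrative assumptions, not values from this paper:

```python
import math

K_BOLTZMANN = 1.380649e-23  # J/K

def rarefaction_parameter(P, D_h, mu0, T0, m):
    """delta = P * D_h / (mu0 * u_m), with u_m = sqrt(2 k T0 / m)."""
    u_m = math.sqrt(2.0 * K_BOLTZMANN * T0 / m)  # most probable molecular speed
    return P * D_h / (mu0 * u_m)

# Illustrative numbers: argon (m ~ 6.63e-26 kg, mu0 ~ 2.27e-5 Pa s at 300 K),
# a 5 mm hydraulic diameter and a local pressure of 100 Pa.
delta = rarefaction_parameter(P=100.0, D_h=5e-3, mu0=2.27e-5, T0=300.0, m=6.63e-26)
```

Since δ is proportional to the inverse Knudsen number, δ >> 1 corresponds to the hydrodynamic regime and δ << 1 to the free molecular regime.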
So, for a given cross section, a database of the subproblem results can be created and an optimization with respect to the angle or the velocity can be performed. For all four problems, the non-dimensionalization parameter in the physical space is the hydraulic diameter of the channel cross section, while the velocity vectors are non-dimensionalized as c = ξ/u_m. In the following paragraphs, the formulation for each of the four subproblems and the procedure for combining the results are presented. It is noted that this approach is included here for completeness, even though it can be found in [1]. In addition, here the parameter for the non-dimensionalization of length is the hydraulic diameter instead of the height of the groove, and for the two transverse flows the no-penetration boundary condition is used, while in [1] it is not clear which boundary condition is applied.

Longitudinal Poiseuille flow

The present flow is caused by a pressure difference along the z-axis of the channel. The distribution function is linearized as

and, by taking into account that the flow is considered fully developed and the assumption that the density and the temperature over the whole cross section remain constant, the kinetic equation takes the form

In addition, the fact that the distribution function h does not change along the z-axis allows us to eliminate one of the microscopic velocity components. If the projected distribution function is defined as

the kinetic equation then takes the form

which is solved in order to recover the macroscopic quantities. Diffuse boundary conditions are used for all the solid boundaries, and periodic boundary conditions at ±(b + d)/(2D_h). It is noted that this is also the treatment of the boundary conditions in the longitudinal Couette flow.
The dimensionless velocity u_z and stress tensor Π_yz are given as

The dimensional velocity and stress tensor are

Finally, the dimensionless mass flow rate and the reduced drag coefficient on the outer cylinder surface are given as

and the dimensional flow rate is

Longitudinal Couette flow

When the flow due to the motion of the upper plate in the z-direction is considered, the distribution function is linearized as

In this case the kinetic equation becomes

Again, c_z is eliminated by projecting the kinetic equation onto the velocity space:

The dimensionless macroscopic quantities are given by Eq. (11). The dimensional velocity and stress tensor are

The dimensionless mass flow rate and the reduced drag coefficient on the outer cylinder surface are

while the dimensional mass flow rate is

It has to be noted that by using the Onsager-Casimir theory the following relation is recovered:

Transversal Poiseuille flow

When the flow is caused by a pressure difference along the x direction, the first difference is that there are two components of the velocity, i.e. u_x and u_y. In addition, the density variations cannot be neglected. On the contrary, the temperature perturbations are small and the flow can be assumed isothermal [10]. Finally, the distribution function is linearized as

and the new form of the kinetic equation after the projection is

Again, the fact that the distribution function h does not change along the z-axis allows us to eliminate one of the microscopic velocity components. The projected distribution function is defined as

Φ(x, y, c_x, c_y) = (1/√π) ∫ h(x, y, c) e^(−c_z²) dc_z    (24)

The existence of density variations makes the use of the typical Maxwell boundary conditions inappropriate for the flow under investigation, and the no-penetration boundary condition is used. According to it, a new parameter ρ_w is calculated on the walls in order to satisfy the equilibrium of the momentum on the wall in the vertical direction.
This is a parameter without physical meaning, but it allows us to ensure that no momentum crosses the solid boundaries. The exact expressions for the estimation of ρ_w can be found in [11]. The dimensionless velocity u_x and stress tensor P_xy are given as

The dimensional velocity and stress tensor are

Finally, the dimensionless mass flow rate and the reduced drag coefficient on the outer cylinder surface are given as

Transversal Couette flow

The last flow that has to be considered is the one due to the motion of the upper plate in the x-direction. As in the corresponding flow due to a pressure gradient, three macroscopic quantities are involved in the kinetic equation, i.e. ρ, u_x and u_y. The linearization of the distribution function is

and the kinetic equation becomes

and, by using the projection procedure,

Here, the dimensionless macroscopic quantities are given by Eq. (26). It has to be noted that by using the Onsager-Casimir theory again, for the two transversal flows the following relation is recovered:

Numerical scheme

For the four subproblems the kinetic equation has to be solved in order to recover the macroscopic quantities and the dimensionless flow rate. To do so, in the present work the DVM is used. The main idea of the method is that the kinetic equation is solved for a set of discrete microscopic velocity vectors. Then numerical integration is applied in order to recover the moments of the distribution function, which are in fact the macroscopic quantities. The discrete velocities are chosen carefully and most often are the roots of an orthogonal polynomial, at least as far as the magnitude of the velocities is concerned, in order to recover the integrals with the best accuracy for a given number of velocities. Depending on the rarefaction of the flow, a different number of velocities is required. In general, the denser a flow is, the fewer velocities are required.
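The discrete-velocity idea described above can be illustrated on the simplest related configuration: linearized BGK plane Poiseuille flow between parallel plates with diffuse walls. This is a sketch under stated assumptions, not the paper's code: it uses a first-order upwind march along the characteristics and a simple trapezoidal velocity quadrature instead of the central-difference scheme and polynomial-root velocities described here.

```python
import math

def plane_poiseuille_G(delta, ny=101, nc=48, max_iter=500, tol=1e-7):
    """Reduced flow rate G for linearized BGK plane Poiseuille flow.

    Solves  c * dPhi/dy = delta * (u(y) - Phi) - 1/2  on y in [-1/2, 1/2],
    with    u(y) = (1/sqrt(pi)) * integral of Phi * exp(-c^2) dc,
    and diffuse walls (Phi = 0 for molecules leaving either wall).
    """
    dy = 1.0 / (ny - 1)
    # Trapezoidal quadrature in c on [-4, 4] with the Maxwellian weight.
    dc = 8.0 / (nc - 1)
    cs = [-4.0 + k * dc for k in range(nc)]
    ws = [math.exp(-c * c) * dc / math.sqrt(math.pi) for c in cs]
    ws[0] *= 0.5
    ws[-1] *= 0.5
    u = [0.0] * ny
    for _ in range(max_iter):
        unew = [0.0] * ny
        for c, w in zip(cs, ws):
            if abs(c) < 1e-12:
                continue
            phi = [0.0] * ny          # zero at the wall the molecules leave
            a = abs(c) / dy
            idx = range(1, ny) if c > 0 else range(ny - 2, -1, -1)
            for i in idx:             # implicit upwind march along y
                j = i - 1 if c > 0 else i + 1
                phi[i] = (a * phi[j] + delta * u[i] - 0.5) / (a + delta)
            for i in range(ny):
                unew[i] += w * phi[i]
        err = max(abs(x - y) for x, y in zip(u, unew))
        u = unew
        if err < tol:
            break
    # u is negative (flow opposes the pressure gradient); G = -2 * int u dy.
    G = -2.0 * dy * (sum(u) - 0.5 * (u[0] + u[-1]))
    return G, u

G, u = plane_poiseuille_G(1.0)
```

The fixed-point iteration on u(y) mirrors the structure of the 2D subproblem solvers: sweep all discrete velocities, rebuild the velocity moment, and repeat until the macroscopic field converges.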
In the present work, 16 values have been used for the magnitude of the velocities and 400 angles, since a polar coordinate system is used for the microscopic velocity space. On the other hand, when the rarefaction parameter is larger than 15, the number of discrete angles is reduced to 80. In the physical space, the grid used was uniform with ∆x = ∆y = 0.1 mm. For denser flows (δ > 1) the grid was refined threefold (∆x = ∆y = 1/30 mm), since it is known that higher values of δ require denser grids. The numerical scheme used for all four subproblems is a typical central-difference scheme, but it is applied along the characteristics of the microscopic velocity, since this gives more accurate results due to the Lagrangian nature of the Boltzmann equation; it is described in detail in [11].

Overall quantities

For the characterization of a pump, the required quantities include the pressure difference created and the corresponding throughput. The most important problem is that the dimensionless quantities G recovered by the numerical solution of the four subproblems depend on the local pressure and on the local rarefaction parameter δ. This is also the reason for the variation of their values, while the mass flow rate has to be constant for every cross section of the pump. In order to solve this problem, the quantity G_η is defined, which is related to the mass flow rate as

with P_h being the pressure in the high vacuum chamber. Since all the other quantities are constant, G_η should not depend on the position of the cross section. Application of the mass conservation law in the triangle of Fig. 3 gives

In addition, from Eqs. (14) and (20) it is deduced that

and since

it is concluded that

Accordingly, for the x-direction,

with

and finally

where l_x is the dimensional length of side x. By substituting Eqs. (40) and (43) into Eq. (37), a differential equation for the local pressure is recovered. For a known P_h, Eq.
(44) can be solved and the pressure distribution along the whole length of the pump recovered. It has to be noted that on every cross section the values of the dimensionless flow rates for the local δ have to be used. Finally, the pumping speed S and the throughput Q can be found as

where N_gr is the number of grooves.

Results and Discussion

Partial solutions

The first step for the recovery of the overall quantities is the creation of a full database including the dimensionless flow rates for the four subproblems described above. The results are presented in Figs. 4 and 5. They cover the range 0 ≤ δ ≤ 200. All the quantities have been obtained by using the developed kinetic numerical codes. As can be seen, the dimensionless flow rate for the

Parametric analysis

One of the parameters that is expected to affect the performance of the pump is the angle of the grooved channels with respect to the front surface of the pump (see Fig. 1). The angle in the present work is assumed to be constant over the whole length of the pump. In Fig. 7 the dependence of the exit pressure on the angle of the pump is examined. As can be seen, the influence of the angle is strong and the optimum angle depends on the rarefaction at the inlet of the pump. For δ_h = 1 the optimum angle seems to be close to φ = 12°, while for δ_h = 0.01 it is about φ = 15°. This correlation is expected, even though the exact value of the optimal angle cannot be anticipated without numerical or experimental results. It can be deduced that a gradual change of the angle along the pump, or a multi-stage pump of equal length but with a different angle for each stage, could be much more efficient. Finally, in Fig. 8 the influence of the speed of the rotating cylinder is examined. Increasing the rotational speed increases the pressure difference, which is the expected behavior.
Concluding remarks

The flow in a Holweck pump has been simulated by solving the linearized Boltzmann-BGK equation, using the discrete velocity method in the velocity space and a uniform Cartesian grid in the physical space. A decomposition method has been applied in order to reduce the computational cost. Results showing the dependence of the flow on the rarefaction parameter have been provided, while the influence of the grooves' angle and the rotational speed on the pressure difference produced by the pump has been examined. It has been shown that, for the sets of parameters tested, there is an optimum angle at which the pump produces the highest pressure difference for almost the whole range of the pump throughput, while increasing the rotational speed seems to increase the pressure difference in every case. Future studies can include a detailed investigation of the dependence of the pump performance on design parameters such as the grooves' dimensions, the pump length and the gas type. In addition, the angle of the grooves has to be optimized for various sets of the other design parameters, and the overall efficiency of the pump has to be taken into account. Finally, the same approach can be modified and applied to other kinds of pumps, such as the Gaede pump.
Ursolic Acid Inhibits the Activation of Kupffer Cells by Caspase-11/NLRP3 Inflammasome Signaling Pathways

Background: Previous studies have indicated that Kupffer cells (KCs) are the main regulatory cells for the activation of hepatic stellate cells (HSCs), and caspase-11/NLRP3 inflammasome signaling plays crucial roles in the activation of monocyte-macrophages. Ursolic acid (UA) is a traditional Chinese medicine with antifibrotic effects, but the molecular mechanism underlying these effects is still unclear.

Methods: A mouse primary Kupffer cell line in vitro and liver fibrosis mice (including specific gene knockout mice) in vivo were selected as experimental objects. RT-qPCR and Western blotting techniques were utilized to assess the mRNA and protein expression in each group. ELISA and histological analysis were utilized to assess liver injury and collagen deposition.

Results: In vitro, caspase-11/NLRP3 inflammasome signaling promoted the activation of Kupffer cells, and UA inhibited the activation of Kupffer cells by caspase-11/NLRP3 inflammasome signaling. In vivo, UA reversed liver damage and fibrosis in fibrotic mice, and this effect was related to Kupffer cells; the expression of caspase-11/NLRP3 inflammasome signaling in Kupffer cells of the UA group was inhibited. Even in the CCl4 group, the liver damage and fibrosis of NLRP3 knockout mice were alleviated, and related experiments also proved that the inhibitory effect of UA on Kupffer cells was related to the activation of the NLRP3 inflammasome.

Conclusion: Caspase-11/NLRP3 inflammasome signal transduction is closely related to the activation of Kupffer cells and the occurrence of liver fibrosis. Additionally, caspase-11/NLRP3 inflammasome signaling serves as a new target for UA antifibrotic treatment.

Introduction

Liver fibrosis is an over-repair response caused by various chronic liver injuries, characterized by excessive deposition of extracellular matrix (ECM), dominated by type I collagen, in the liver [1].
The continuous development of liver fibrosis can eventually lead to cirrhosis and even liver cancer, which seriously harms human health [2]. Activated hepatic stellate cells (HSCs) are the main source of ECM, and their activation and transformation is the central event of hepatic fibrosis. Overall, inhibition of HSC activation is the key to controlling the progression of liver fibrosis [1]. Kupffer cells (KCs) are the main regulatory cells in the process of liver fibrosis and activate quiescent hepatic stellate cells to promote fibrogenesis [3]. Cytokines secreted by activated Kupffer cells, including TGF-β, IL-1β, IFN, and CCL3, can directly affect the activation of HSCs. Proinflammatory factors such as TNF and IL-1β promote HSC activation through the NF-κB signaling pathway. CCL3 is a ligand of CCR1 and CCR5, which promote liver fibrosis [4]. Kupffer cells can also produce IL-1 receptor antagonist (IL-1Ra), IL-10 and other anti-inflammatory mediators, and can produce matrix metalloproteinases (MMPs) to promote the degradation of ECM and improve liver fibrosis [5]. Kupffer cells thus play a key role in regulating HSC activation and the progression of liver fibrosis. The NOD-like receptor protein 3 inflammasome (NLRP3 inflammasome) consists of pattern recognition receptors (PRRs), apoptosis-associated speck-like protein containing a caspase-recruitment domain (ASC), and caspase-1, and is a classic intracellular innate immune receptor that can be activated by internal and external danger signals to induce the release of IL-1, IL-18 and other proinflammatory cytokines [6].
Studies have shown that the NLRP3 inflammasome has two activation mechanisms: activation of caspase-1 through the NLRP3/ASC pathway is often referred to as the classical NLRP3 inflammasome pathway; however, caspase-11 can also be recruited and activated to activate caspase-1 through NLRP3, resulting in the release of IL-1β and IL-18 and inflammatory cell death, which is the nonclassical NLRP3 inflammasome pathway [7]. It has been reported that caspase-11−/− mice have stronger resistance to lethal sepsis, and their survival rate is significantly higher than that of caspase-1−/− mice and wild-type mice [7]. The activation of caspase-11 is more harmful to the body, which indicates that the nonclassical NLRP3 inflammasome pathway plays a more important role in the progression of inflammatory injury [8]. Ursolic acid (UA) is a natural monomer compound extracted from traditional Chinese medicinal plants and has anti-inflammatory, antifibrotic, and hepatoprotective effects [9,10]. However, whether ursolic acid has an inhibitory effect on Kupffer cell activation, and whether this inhibitory effect is related to the nonclassical caspase-11/NLRP3 inflammasome pathway, remain to be further studied. This study mainly explores the potential mechanism of ursolic acid against fibrosis and provides experimental support for the future clinical application of ursolic acid in the treatment of patients with liver fibrosis. Reagents and Antibodies The following reagents were used in this study: CCl4 and olive oil (Shandong Xiya Chemical Industries) [11,12]. The wild-type (WT) C57BL/6 mice used in the experiments were from the Department of Laboratory Animal Science of Nanchang University, and NLRP3 knockout C57BL/6 mice were purchased from the Jackson Laboratory (homozygous: B6.129S6-Nlrp3<tm1Bhk>/J). Based on widely recognized research, carbon tetrachloride (CCl4) was selected to induce liver fibrosis in mice [13].
According to the principle of random allocation, male C57BL/6 mice weighing 20 to 30 g were randomly divided into the control group [n = 10, gavage with olive oil (2 ml/kg) twice a week for 8 weeks], the CCl4 group [n = 10, gavage with CCl4 at 2 ml/kg (20% dilution in olive oil) twice a week for 8 weeks], and the UA group [n = 10, gavage with CCl4 at 2 ml/kg (20% dilution in olive oil) twice a week for 4 weeks, then gavage with CCl4 plus UA (40 mg/kg/day) for 4 weeks]. Male NLRP3 knockout mice were randomly divided into the NLRP3−/− group, NLRP3−/−+CCl4 group and NLRP3−/−+UA group (all treatments were consistent with the WT groups). Any mouse showing signs of pain during modeling or perfusion was euthanized as soon as possible. Mice were euthanized by isoflurane inhalation followed by cervical dislocation, and death was confirmed by neck tissue separation. All experimental procedures were approved by the Institutional Animal Care and Use Committee of the First Affiliated Hospital of Nanchang University (Nanchang, China). All animals received humane care in compliance with institutional guidelines. Histological analysis The paraffin-embedded liver and ileum samples were cut into 5 µm thick sections with a microtome. Sections underwent hematoxylin and eosin (HE) staining, Sirius red collagen staining and immunohistochemistry (IHC) analysis and were evaluated by microscopy. IHC was used to determine the localization and expression of related proteins. Specimens were incubated with an appropriate antibody and were observed and photographed by confocal microscopy. For immunofluorescence cytochemistry, nuclei were counterstained with DAPI, and images were acquired under a fluorescence microscope. Western blot analyses Total protein was obtained from tissue lysates or cell supernatants for Western blotting. Protein levels were determined using a BCA assay kit (Tiangen, Beijing, China).
Denatured proteins were separated on 10% Tris-glycine polyacrylamide gels by SDS-PAGE and transferred to PVDF membranes. The membranes were treated with a chemiluminescent substrate, and the protein bands were detected with a luminescent image analyzer (Bio-Rad ChemiDoc MP, USA). The relative level of each target protein was calculated as the gray ratio between the target protein band and the GAPDH band. Extraction of primary Kupffer cells Mice were anesthetized with isoflurane (300-500 ml/min). The abdominal cavity was opened aseptically, and the inferior vena cava was punctured while PBS solution was uniformly perfused with a syringe. The hepatic portal vein was injected with 50 ml of perfusion solution (0.05% collagenase IV) at 37°C and digested for 10 min. In detail, cell sediments were resuspended in 10 ml of RPMI 1640 and centrifuged at 300×g for 5 min at 4°C; the top aqueous phase was discarded, and the cell sediments were retained. Then, cell sediments were resuspended in 10 ml RPMI 1640 and centrifuged at 50×g for 3 min at 4°C. The top aqueous phase (cleared cell suspension) was transferred into a new 10 ml centrifuge tube and centrifuged at 300×g for 5 min at 4°C; the top aqueous phase was discarded, and the cell sediments were retained. The cell sediments mainly contained nonparenchymal liver cells, namely KCs, sinusoidal endothelial cells and stellate cells. To further purify the obtained cell population, the method of selective adherence to plastic was used according to Blomhoff et al. [14]. KCs were identified by immunofluorescence using an anti-F4/80 antibody. Statistical analysis Statistical analyses were performed using SPSS software version 22.0 (SPSS Inc., Chicago, IL), and figures were produced using GraphPad Prism 6.0 software. Quantitative data are expressed as means ± standard deviation (SD), and continuous variables were compared using one-way analysis of variance (ANOVA).
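As a toy illustration of the one-way ANOVA step described above, the F statistic can be computed in pure Python. The serum ALT values below are hypothetical, invented only for the example; the paper's actual analysis was performed in SPSS.

```python
def one_way_anova_f(groups):
    """Return the F statistic for a one-way ANOVA over a list of groups."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    # between-group sum of squares (weighted by group size)
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # within-group sum of squares
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# hypothetical serum ALT values (U/L) for control, CCl4 and UA groups
control = [32, 35, 30, 33, 31]
ccl4 = [210, 195, 220, 205, 215]
ua = [120, 110, 130, 115, 125]
F = one_way_anova_f([control, ccl4, ua])  # a large F means the group means differ
```

A significant F would then be followed by post-hoc pairwise comparisons, as described in the statistical analysis.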
If the ANOVA was positive, multiple comparisons were carried out using the Nemenyi test. All statistical tests were two-sided, and P < 0.05 was considered statistically significant. Ursolic acid (UA) inhibits the activation of Kupffer cells in vitro As shown in Figure 2, NLRP3 inflammasome expression in the LPS+H2O2+wedelolactone group was also lower than in the LPS+H2O2 and LPS+H2O2+MCC groups; importantly, this showed that caspase-11 is an effective stimulator of the NLRP3 inflammasome. Additionally, the proinflammatory cytokines (Figure 2I-J) secreted by Kupffer cells were decreased. The results indicated that the expression of the NLRP3 inflammasome or caspase-11 was significantly inhibited by MCC or wedelolactone, respectively, and that the caspase-11/NLRP3 inflammasome pathway plays a crucial role in the activation of Kupffer cells. To confirm that UA inhibits the activation of Kupffer cells through the caspase-11/NLRP3 inflammasome pathway, the relative protein expression of caspase-11 and the NLRP3 inflammasome pathway in the LPS+H2O2+MCC group was detected, as shown in Figure 2. UA reverses liver damage and fibrosis in fibrotic mice To evaluate the effect of UA on liver fibrosis, liver damage and collagen deposition in mouse livers were measured by HE and Sirius red staining (Figure 3A). Liver lobule damage, collagen deposition and inflammatory cell infiltration in the CCl4 group were significantly enhanced (P<0.05), and the manifestations of liver fibrosis were significantly improved after UA treatment (P<0.05). The ALT, AST, and hydroxyproline levels in mouse serum were determined to evaluate liver function and liver fibrosis (Figure 3B-D). Compared with the control group, the serum levels of ALT, AST and hydroxyproline in the CCl4 group were significantly increased; however, these levels were reduced in UA-treated fibrotic mice (P<0.05).
These results indicate that UA can reverse liver damage and fibrosis in vivo. Type I collagen (collagen-1), α-SMA, and TIMP-1 often serve as biomarkers of HSC activation, and changes in these biomarkers are often found during the progression of liver fibrosis. At the mRNA level, the expression of type I collagen (collagen-1), α-SMA, and TIMP-1 in the CCl4 group was significantly higher than that in the control group, and this increase was significantly attenuated by UA treatment. UA reverses liver fibrosis in fibrotic mice through the caspase-11/NLRP3 inflammasome pathway in Kupffer cells To confirm the role of the caspase-11/NLRP3 inflammasome pathway in liver fibrosis mice and the UA treatment group, immunohistochemical staining of whole liver tissue from the three groups of mice was conducted. The expression of caspase-11 and the NLRP3 inflammasome in the CCl4 group was significantly higher than in the control group, and their expression was significantly decreased by UA treatment (Figure 4A). To further analyze the activation of mouse Kupffer cells in vivo, mouse Kupffer cells were isolated. CD14 immunofluorescence of the isolated Kupffer cells showed that their purity was good (Supplementary Figure 1E). Caspase-11 was more highly expressed in Kupffer cells than in other liver cells (such as hepatocytes and HSCs) after CCl4 induction (Supplementary Figure 1A-D). First, the activation of Kupffer cells isolated from the three groups of mice was determined. ELISA showed that the proinflammatory cytokines (IFN-γ, TGF-β) secreted by Kupffer cells in the CCl4 group were significantly higher than in the control group (P<0.05) and were obviously decreased in the UA group compared with the CCl4 group (P<0.05) (Figure 4B-C). To confirm the effect of the caspase-11/NLRP3 inflammasome pathway on the inhibition of Kupffer cells by UA, the expression of caspase-11 and the NLRP3 inflammasome in isolated Kupffer cells was detected.
Consistent with the in vitro results, the expression of caspase-11 and the NLRP3 inflammasome in the CCl4 group was significantly higher than in the control group, and their expression in the UA group was significantly lower than in the CCl4 group (P<0.05) (Figure 4D-I). Effect of NLRP3 knockout on liver fibrosis and Kupffer cell activation To further confirm the role of the caspase-11/NLRP3 inflammasome pathway in liver fibrosis, male NLRP3 knockout mice were randomly divided into the NLRP3−/− group, NLRP3−/−+CCl4 group, and NLRP3−/−+UA group. As shown in Figure 5A, the liver lobule damage, collagen deposition and inflammatory cell infiltration in the NLRP3−/− group, NLRP3−/−+CCl4 group, and NLRP3−/−+UA group were significantly reversed compared with those of the WT+CCl4 group. Importantly, there was no significant difference among the three NLRP3−/− groups. The levels of ALT, AST, and hydroxyproline in the serum of the three NLRP3−/− groups were significantly lower than those in the WT+CCl4 group, with no obvious differences among the three NLRP3−/− groups (Figure 5B-D). The mRNA expression of collagen-1, α-SMA, and TIMP-1 in the three NLRP3−/− groups was significantly lower than that in the WT+CCl4 group, and the differences among the three NLRP3−/− groups were not statistically significant (Figure 5E-G). Kupffer cells were isolated from the mice, and the proinflammatory cytokines (IFN-γ and TGF-β) in Kupffer cells were measured. The mRNA expression of IFN-γ and TGF-β in the three NLRP3−/− groups was significantly lower than that in the WT+CCl4 group, and the expression levels among the three NLRP3−/− groups were similar (Figure 5H-I). The previous results indicated that the NLRP3 inflammasome plays a key role in liver fibrosis. To assess changes in the expression of the caspase-11/NLRP3 inflammasome, the mRNA and protein expression of caspase-11 and the NLRP3 inflammasome were detected.
As shown in Figure 5J, 5L, and 5N, the mRNA and protein expression of the NLRP3 inflammasome was significantly decreased after NLRP3 knockout in mice. Importantly, caspase-11 expression in the NLRP3−/−+CCl4 group after CCl4 induction was still significantly higher than in the NLRP3−/− group, and caspase-11 expression after UA treatment was significantly lower than in the NLRP3−/−+CCl4 group. These results suggest that the NLRP3 inflammasome plays an important role downstream of caspase-11 in the progression of liver fibrosis (P<0.05) (Figure 5J, 5K, 5M). Discussion Liver fibrosis is a process of extracellular matrix (ECM) deposition, or scar formation, caused by various factors, including viral hepatitis, nonalcoholic fatty liver, alcoholic fatty liver, and biliary or autoimmune liver disease [1]. The continuous development of liver fibrosis can eventually progress to liver cirrhosis and even liver cancer, which seriously endangers human health [2]. Liver fibrosis is the early stage of liver cirrhosis, and effective treatment intervention can prevent the progression of the disease [15]. Therefore, it is of great significance to develop antifibrotic drugs based on the pathogenesis of liver fibrosis. The transformation of quiescent HSCs into proliferative myofibroblasts is the central event in the pathogenesis of liver fibrosis; however, Kupffer cells are the main regulatory cells in this process [3]. Activation of resting HSCs by Kupffer cells can promote the progression of liver fibrosis, whereas apoptosis or degradation of activated HSCs induced by Kupffer cells can promote its regression. In the progressive stage of liver injury, hepatocyte injury or harmful substances (such as bacteria or lipopolysaccharide (LPS)) can trigger damage-associated molecular patterns (DAMPs) or pathogen-associated molecular patterns (PAMPs) that activate Toll-like receptors (TLRs) or tumor necrosis factor receptors (TNFRs) and stimulate Kupffer cell activation [16].
Then, activated Kupffer cells secrete proinflammatory factors, including TGF-β, TNF, IL-1β and IFN-γ. In the remission stage of liver injury, Kupffer cells transform into inflammation inhibitors and produce anti-inflammatory mediators, such as IL-1Ra and IL-10 [17,18]. Kupffer cells therefore play a double-edged-sword role in liver fibrosis, and the activation of Kupffer cells in vivo and in vitro served as the main object of observation in this study. First, this study found that the activation of primary liver Kupffer cells was significantly enhanced after stimulation with LPS combined with H2O2, and that this activation could be inhibited by ursolic acid in vitro. The in vitro results indicated that the nonclassical caspase-11/NLRP3 inflammasome pathway is involved in the activation of Kupffer cells. Next, after Kupffer cells were isolated from mouse livers, we demonstrated that UA could reduce CCl4-induced liver fibrosis and inhibit Kupffer cell activation; the caspase-11/NLRP3 inflammasome pathway plays an important role in the activation of Kupffer cells in vivo. Finally, the results of the NLRP3−/− mouse experiment showed that the caspase-11/NLRP3 inflammasome pathway was involved in Kupffer cell activation. The NLRP3 inflammasome is an intracellular multiprotein complex that is widely involved in the body's immune response and is related to the pathogenesis of tumors, arteriosclerosis, intestinal inflammation and metabolic diseases [19]. The NLRP3 inflammasome consists of PRRs, ASC, and caspase-1 and is widely distributed in monocytes-macrophages, dendritic cells (DCs), lymphocytes, granulocytes and antigen-presenting cells (APCs) [20]. The NLRP3 inflammasome is a classical receptor of intracellular innate immunity that can be activated by danger signals inside and outside the cell and then induce the release of the downstream proinflammatory factors IL-1β and IL-18 [21].
At present, the NLRP3 inflammasome is known to be expressed in hepatocytes, Kupffer cells, and HSCs in the liver and is activated under certain conditions, eventually leading to the release of IL-1β and IL-18 [22]. However, the expression level of the NLRP3 inflammasome in Kupffer cells was significantly higher than that in hepatocytes and HSCs, as shown in Supplementary Figure 1. Accordingly, previous studies have indicated that Kupffer cells are the main site of expression, assembly and activation of the NLRP3 inflammasome [23]. Two pathways are involved in the NLRP3 inflammasome activation mechanism: the classical NLRP3 inflammasome pathway (NLRP3/ASC/caspase-1) and the nonclassical NLRP3 inflammasome pathway (caspase-11/NLRP3/caspase-1) [7]. Hepatocyte death is induced by activated NLRP3 inflammasomes through the pyroptotic pathway and aggravates the progression of NASH [24]. Overall, the NLRP3 inflammasome plays a crucial role in the liver inflammation network. Caspase-11 is an important promoter of the nonclassical pathway of pyroptosis. During the progression of liver injury, gram-negative bacteria enter the liver through the portal vein and release surface lipopolysaccharides (LPS) that activate Kupffer cells through the TLR pathway [25]. LPS enters Kupffer cells through endocytosis, binds intracellular caspase-11 and activates it, thereby initiating the nonclassical pathway of pyroptosis [26]. On the one hand, activated caspase-11 can activate the downstream NLRP3 inflammasome, releasing IL-1β and IL-18; on the other hand, the pore-forming membrane protein GSDMD is activated, destroys the cell membrane, releases the cell contents, and causes inflammatory damage [27].
Cross-analysis of unbiased RNA sequencing/proteomic analyses identified caspase-11 (caspase-4 in humans) as a commonly upregulated gene in alcoholic hepatitis (AH) mice and patients, but not in chronic alcoholic steatohepatitis (ASH) mice or healthy human livers [28]. Recent studies have shown that HSPA12A attenuates LPS-induced liver injury by inhibiting caspase-11-mediated hepatocyte pyroptosis via PGC-1α-dependent acyloxyacyl hydrolase expression [28]. Caspase-11 activation is harmful to the body, and its role in the nonclassical NLRP3 inflammasome pathway is critical. In this study, our results indicated that ursolic acid can improve liver fibrosis by inhibiting the caspase-11/NLRP3 inflammasome pathway in Kupffer cells. In vivo and in vitro, the caspase-11/NLRP3 inflammasome pathway plays an indispensable role in the activation of Kupffer cells, and ursolic acid can inhibit the activation of Kupffer cells through this pathway. The results from NLRP3−/− mice showed that the NLRP3 inflammasome has an important impact on the activation of Kupffer cells by caspase-11. There are also some limitations, such as the lack of coculture experiments with Kupffer cells and HSCs and the limited number of caspase-11 intervention experiments. In conclusion, our results indicate that the caspase-11/NLRP3 inflammasome pathway plays an important role in the activation of Kupffer cells, and that UA may reverse liver fibrosis by intervening in the caspase-11/NLRP3 inflammasome pathway in Kupffer cells. This study clarifies possible molecular targets of UA against liver fibrosis and provides a reasonable experimental basis for the clinical application of UA in the future. Our results provide new insight into the treatment of liver fibrosis with UA; however, further in vivo and in vitro studies are needed to confirm these results. Declarations Financial support:
"What's ur type?" Contextualized Classification of User Types in Marijuana-related Communications using Compositional Multiview Embedding With 93% of the pro-marijuana population in the US favoring legalization of medical marijuana, high expectations of greater returns for marijuana stocks, and the public actively sharing information about the medical, recreational and business aspects of marijuana, it is no surprise that marijuana culture is thriving on Twitter. After the legalization of marijuana for recreational and medical purposes in 29 states, there has been a dramatic increase in the volume of drug-related communication on Twitter. Specifically, Twitter accounts have been established for promotional and informational purposes, some prominent among them being American Ganja, Medical Marijuana Exchange, and Cannabis Now. Identification and characterization of different user types can allow us to conduct more fine-grained spatiotemporal analysis to identify dominant or emerging topics in the echo chambers of marijuana-related communities on Twitter. In this research, we mainly focus on classifying Twitter accounts created and run by ordinary users, retailers, and informed agencies. Classifying user accounts by type can enable better capturing and highlighting of aspects such as trending topics, business profiling of marijuana companies, and state-specific marijuana policymaking. Furthermore, type-based analysis can provide a more profound understanding and a more reliable assessment of the implications of marijuana-related communications. We developed a comprehensive approach to classifying users by their types on Twitter through contextualization of their marijuana-related conversations. We accomplished this using compositional multiview embedding synthesized from People, Content, and Network views, achieving an 8% improvement over the empirical baseline. I.
INTRODUCTION "It's 4/20, and that means everyone is talking about marijuana," highlights the state of marijuana-related communication on Twitter, especially around the time marijuana legalization polls were conducted in the USA. As more evidence is gathered through research studies on the safety and benefits of the medical and recreational uses of cannabis, there is a rise in public demand for broader legalization of marijuana and its variants. Accordingly, it is useful to study the engagement of users on social media to better understand public opinion and its influence on policies. Characterization of marijuana concentrate users on social media can enable researchers and analysts to describe patterns of use, reasons for use, symptoms, and side effects, as well as identify predictors of risk with the help of spatiotemporal analysis. Specifically, classification of user types in marijuana communications on social media can aid in analyzing content-network dynamics at a user level, through an assessment of homophily in marijuana-related communities. Further, assessing the differences in marijuana conversations, information flow, and interactions between user types, such as retail, informed agency and personal accounts, can help better situate their characteristics and understand the implications. For instance, in the case of predicting the outcome of a state legalization process [1], understanding the public opinions of the residents, assessing trending marijuana-related topics in their conversations and monitoring their implications are relevant and critical, as these opinions translate to votes. We associate the personal user type (P) with an account handled by an individual user expressing their opinions, the retail user type (R) with an account managed by a business entity to promote and market marijuana-related products, and the informed agency user type (I) with an account handled by a group or organization to disseminate marijuana-related information.
Throughout the paper, we use informed agency and media interchangeably to refer to the same user type. In this study, we propose a user classification approach exploiting the multiview aspect of Twitter data and features extracted from the people, content, and network dimensions. The multiview nature stems from the inclusion of text, images (profile pictures), emoji and network interactions among accounts of different user types [2]. Hence, for reliable classification, we use Compositional Multiview Embedding (CME), which combines different elements of the context such as text, images, emoji and network activities. This study addresses two key challenges: (i) the imbalanced dataset due to the relatively few users of the retail and informed agency types, and (ii) the lack of proper use of different contextual dimensions, precisely, by incorporating Person-Content-Network views in compositional multiview embedding, for interpreting marijuana-related Twitter data. We create compositions of vector embeddings of these views of the Twitter data, called Compositional Multiview Embeddings (CMEs), as they can represent the context in a more coherent manner [3]. In our approach, we create two CMEs: (i) one using the tweet text, emoji and network interactions of users, and (ii) another using user descriptions and emoji. In Section V-B, we explain the correlation analysis performed on various feature combinations to assess their relationships. For instance, we found that the descriptions and network interactions of users are highly correlated, suggesting that their combination can affect the performance of the classifier on the validation and test data. Therefore, we did not create an embedding using these two views. We evaluated the classifiers based on the individual F-scores of the user type classes. We also generated embedding vectors for the profile pictures of users, which significantly improved the classification performance for the informed agency user type.
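Per-class F-scores, like those used above to evaluate the classifiers, can be computed directly from confusion counts. The counts below are purely illustrative, not the paper's results:

```python
def f_score(tp, fp, fn):
    """F1 score from true-positive, false-positive and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# toy (tp, fp, fn) counts for the Personal, Retail, Informed agency classes;
# class imbalance shows up as lower F-scores for the rarer classes
counts = {"P": (80, 10, 5), "R": (30, 8, 12), "I": (25, 5, 10)}
f_by_class = {label: round(f_score(*c), 3) for label, c in counts.items()}
```

Reporting the score per class, rather than a single accuracy, is what makes the effect of the imbalanced dataset on the retail and informed agency classes visible.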
Details of our approach and results are discussed in Sections V and VI, respectively. The remainder of the paper is organized as follows: In Section II, we describe related work on marijuana-related user classification. In Section III, we provide preliminaries about the concepts and technologies used. In Section IV, we provide an exploratory analysis that includes statistics about our dataset. Section V explains the features and our experimental settings, and Section VI discusses the results of our analysis. Section VII concludes the paper with a summary and future research directions. II. RELATED WORK In this section, we describe prior studies that are broadly related to user classification, under three prominent subheadings: (i) Embedding-based Approaches to User Classification, (ii) Diverse Features for User Classification, and (iii) User-level Approaches. A. Embedding-based Approaches to User Classification The profile of a user on Twitter consists of a user description, tweets exchanged with their followers/friends, and a profile picture. Researchers [4] utilized user tweets to learn an embedding model using Long Short-Term Memory (LSTM) Recurrent Neural Networks (RNNs) to classify users based on their gender and age, achieving accuracies of 91% and 82%, respectively. In contrast, [5] employed interactional features to generate embeddings for a semi-supervised approach; specifically, they utilize a small number of seed users with labels (e.g., news agency, person, genres) and interactions through "mentions" in their tweets. [6] proposed an approach to learn the interactional features of users by optimizing the structural and attribute-level properties of their networks, which characterize homophily in their communication. In another study [2], researchers utilized person-level multiview embedding to predict engagement, friend selection and demographic information of users.
In contrast, our study gleans person-, content- and network-level features, creating a composition of multiview embeddings through a vector addition operation that characterizes users in the context of marijuana-related communications on social media. B. Diverse Features for User Classification Prior work on user classification on social media has involved different sets of features: person-level features including profile [7], user behavior, first and last names [8], and demographics; content-level features including linguistic, domain-specific and generic LDA topics; and network-level features comprising follower-followee connections [7]. These features were utilized to glean political affinity, ethnicity, and favorability towards a particular profession, to generate machine-readable user profiles for improving user classification [7], and to cluster users based on their conversations and predict demographics [8]. Combining these features with network interactions results in a better-contextualized representation of the dataset [9]; the authors claimed that their model provides an in-depth analysis of users' communication from both content and network perspectives and improves user classification. C. User-level Approaches For particular problems such as identification of user interests and event detection, user-level understanding of the content as well as the network dynamics is pivotal. In [10], users were classified into three classes, namely organization, journalist (or media personnel), and ordinary person, to identify variation in characteristics across multiple events. Engagement of users on a particular subject on social media is considered an important signal in social media analytics and has been used for user classification in [11]. The authors developed a working model to categorize a user as Idea Starter, Commentator, Curator, Amplifier, or Viewer.
In the election domain, political homophily on social media forms a feature for user classification, and [12] illustrates its significance for resolving reciprocated or non-reciprocated ties in the network of users. Homophily creates social echo chambers that polarize the world of users. This fact can be used to discriminate ordinary users (or information seekers) from information providers (e.g., journalists). Moreover, topical analysis of the user-generated content can be informative about their intentions. In [13], a topic-centric Naive Bayes classifier was developed to identify topics and categorize unknown users based on the closeness of their topics to those of the users in the training dataset. In recent years, there has been a surge in the use of e-cigarettes among smokers, and Twitter has emerged as a cost-effective platform for sharing and promoting information. In [14], a user classification approach was designed employing metadata and tweeting behavior to classify users as individuals, informed agencies, marketers, spammers, or vapor enthusiasts. III. PRELIMINARIES Our approach uses several building blocks for an in-depth analysis of tweet content to extract relevant context from the marijuana dataset. Specifically, we discuss the people-content-network paradigm [15] and compositional word embeddings for expressiveness, EmojiNet for interpreting emoji, Clarifai for processing profile pictures, and SMOTE for oversampling. A. People-Content-Network On social media, communities are formed around various topics of interest through network interactions [15]. The information shared in tweets by a user in the marijuana community displays an intent that depends on the user's type [16].
For instance, personal users share their experiences and opinions on marijuana, retail accounts usually promote the use of marijuana and other related products that they sell, and media accounts disseminate information on marijuana-related events and festivals, as well as legalization processes. Accordingly, as these user types show different characteristics, it is critical to bring to bear different perspectives, such as person, content, and network, for reliable analysis and insights. We describe a systematic organization and analysis of features in Section V-C.

B. EmojiNet

Emoji are pictorial representations of facial expressions, places, foods, and other objects. They are often used by the marijuana community on social media to express opinions and emotions about marijuana-related topics. Emoji contribute to the interpretation of the content created by users and can improve recognition of the characteristics of user types. To this end, we make use of EmojiNet [17], which gathers meanings of 2,389 emoji. Specifically, EmojiNet provides a set of words (e.g., smile), associated POS tags (e.g., verb), and their sense definitions. It maps 12,904 sense definitions to 2,389 emoji, capturing platform-specific interpretations.

C. Word Embedding Model

A word embedding model created using word2vec can learn a rich low-dimensional representation of words in a tweet corpus. Word embedding procedures were initially developed to generate distributional representations over corpora such as Wikinews, news articles, and the Google News corpus, and they represent the current state of the art. [18] also shows that vector arithmetic over the word vectors can be used to solve analogies. For instance, the word embedding of "Queen" can be approximated by subtracting the word embedding of "Man" from that of "King" and adding the word embedding of "Woman." In recent studies [19], [20], researchers have shown that word embedding models perform well over short texts.
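The analogy arithmetic described above can be illustrated with a toy example. The 3-dimensional vectors below are invented for illustration only and do not come from any trained model; a real word2vec model would use hundreds of dimensions.

```python
import math

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

# Toy 3-d embeddings (hypothetical values, for illustration only):
# dimension 0 ~ "royalty", 1 ~ "male", 2 ~ "female"
emb = {
    "king":  [0.9, 0.8, 0.1],
    "queen": [0.9, 0.1, 0.8],
    "man":   [0.1, 0.9, 0.1],
    "woman": [0.1, 0.1, 0.9],
}

# king - man + woman should land closest to queen
target = [k - m + w for k, m, w in zip(emb["king"], emb["man"], emb["woman"])]
best = max(emb, key=lambda word: cosine(emb[word], target))
print(best)  # queen
```

The nearest-neighbor lookup by cosine similarity is the same operation a trained model would use; only the vectors here are fabricated.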
In another study [21], the authors created a named entity recognition shared task for data from microblogging platforms using distributed word representations. These recent and prior successes in modeling words as computable vectors encouraged us to utilize a pre-trained word2vec model trained over a generic Twitter corpus [21] or to train a new word embedding model over our domain-specific Twitter corpus. Depending on the type of the corpus (characterized using sentence-level statistics and word frequency counts), one of two neural network architectures can be used for learning word2vec embeddings: (i) the Continuous Bag-Of-Words (CBOW) model [18] and (ii) the Skip-gram model [18]. In our study, we used the skip-gram architecture.

D. Compositional Word Embedding

In our study, we utilize compositional word embedding [3] to combine feature-level embedding vectors and generate a comprehensive representation of a data point (e.g., user, tweet, user description). Specifically, we employ weighted vector addition, a linear composition function detailed in [3]. Formally, we define Z, the weighted composition of the embeddings U and V, as follows:

Z = W0 · U + W1 · V

where U, V ∈ R^(m×300) (m represents the number of users) are two embeddings that are composed by weight-based (e.g., cosine similarity matrix) modulation using W0, W1 ∈ R^(m×m), respectively. Note that in such a composition, the dimension of the input and output representations is unaltered. As detailed in Section V-B, it is essential to consider the correlation between different view embeddings before composing them. In Z, the weight matrices would be optimized through an optimization function; however, if the embeddings U and V are uncorrelated, it is computationally hard to generate the representation of Z, as such an optimization over two uncorrelated embeddings will fail to converge. Hence, we performed a linear composition, vector addition, to generate the representation of Z.
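A minimal sketch of the two composition options above, the weighted form Z = W0 · U + W1 · V and the plain vector addition Z = U + V, for a single data point. The 4-dimensional view embeddings are hypothetical, and scalar weights stand in for the corresponding rows of W0 and W1.

```python
def vec_add(u, v):
    """Plain composition: Z = U + V (dimension preserved)."""
    return [a + b for a, b in zip(u, v)]

def weighted_add(u, v, w0, w1):
    """Weighted composition: Z = w0*U + w1*V for one data point
    (scalars w0, w1 stand in for rows of the weight matrices W0, W1)."""
    return [w0 * a + w1 * b for a, b in zip(u, v)]

# Hypothetical 4-d view embeddings for one user
tweet_emb = [0.2, -0.1, 0.4, 0.0]   # content view
emoji_emb = [0.1,  0.3, -0.2, 0.5]  # emoji view

z_plain = vec_add(tweet_emb, emoji_emb)
z_weighted = weighted_add(tweet_emb, emoji_emb, 0.7, 0.3)
print(z_plain)
print(len(z_plain) == len(tweet_emb))  # dimension unaltered -> True
```

Either way, the composed vector keeps the 300-dimensional shape of its inputs (here, 4-dimensional), which is what allows further composition downstream.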
Since the classification is insensitive to the position of emoji and words in the content, we consider such a composition appropriate. Formally, Z = U + V is the vector addition of U and V.

IV. EXPLORATORY ANALYSIS

We conducted an analysis of our dataset by extracting statistical, textual, and topical information. Fig. 1 shows the word cloud synthesized from the tweets of the Informed Agency user type, which can be used to glean related topics. We have three classes of user types, namely Personal Accounts (P), Informed Agency (I), and Retail Accounts (R). Our corpus contains tweets crawled in Summer 2017, covering the months of June, July, and August and all states in the U.S. During this time frame, the volume of communication related to marijuana was high due to ongoing events (e.g., Cannabis Cup, The 420 Games) 5 . Data collection involved semantic filtering [22] utilizing the DAO 6 ontology on the eDrugTrends 7 /Twitris platform 8 . The corpus comprised a total of 4,106,566 tweets from 1,066,615 unique users. Out of nearly 4.1M tweets, 1,895,777 were identified as unique based on tweet ID and content. We randomly selected a set of 4982 users with 12,103 tweets from our pool of 1M unique users as the training set. The domain experts from CITAR 9 annotated the 4982 users in our training dataset as one of the following three types: Personal Accounts, Informed Agency, and Retail Accounts. After the annotation process, the distribution per user type was as follows: 4395 personal, 476 informed agency, and 111 retail accounts. Effectively, the distribution of user types in the training set is highly skewed. The reason for the sparsity among retailers (i.e., retail business Twitter accounts) is that marijuana is a Schedule I 10 drug under federal law, and thus its promotion on social media platforms is complicated by its federal status as an illegal drug.
Similarly, media accounts are far fewer than personal accounts, but still considerably more numerous than retail accounts. Such data imbalance poses a serious risk of biasing the classifier toward the majority class. Upon our initial exploratory analysis of the corpus, we saw that the content in tweets and descriptions of users is adequate to identify the characteristics of different user types. The average numbers of words in descriptions and tweets are 9.6 and 12.8, while the average numbers of emoji in descriptions and tweets are 0.46 and 0.26, respectively. 88% of the users have completed their descriptions, and these user descriptions carry emoji and text that can be utilized for classification. Further, interactions among users can play an essential role in disseminating information and influencing other connected users in the network. The median numbers of followers and friends for users are 367 and 376 respectively, and the average number of tweets per user is 3.85. Our corpus includes 2,837,734 interactions (mentions, retweets) between users, 83% of which are retweets and the rest mentions. These statistics suggest that there is much communication among users that can contribute to the classification of user types.

V. METHODOLOGY

The novelty in our approach to the user classification problem is to leverage the multiview aspect of the Twitter data by creating compositions of embeddings for different views. As depicted in the overall architecture of our approach in Fig. 2, this section provides details of the critical steps in our approach.

A. Preprocessing

At this stage, we trained two Word Embedding (WE) models, for the Content and People views, using our domain-specific Twitter corpus: (i) the Content WE model, based on 1.8M unique pre-processed tweets, and (ii) the People WE model, based on pre-processed user descriptions of 1M unique users.
We built two separate WE models because we observed that user descriptions were more complete and contained less jargon and slang than tweets. To obtain discriminative features for user classification, we removed stop words, punctuation, and alphanumeric characters from tweets and user descriptions. We also extracted URLs, mentions of screen names, retweeted user screen names, contact information (e.g., phone number, email, and web address), and emoji. After that, we lemmatized the tweet and user-description corpora. Moreover, we employ EmojiNet [17] to retrieve senses and keywords from emoji, and Clarifai 11 to process profile pictures. The overall goal is to enable gleaning semantically relevant information about users from their tweets for reliable determination of user types.

B. Correlation Analysis

In this study, we perform correlation analysis between embeddings of features from different views to assess which compositional operation is appropriate. The similarity between embedding vectors derived from the textual representation of features constrains the operations that can be used to combine them, since the resulting vector needs to be representative of its components. For example, when two embedding vectors are highly uncorrelated, dimensionality reduction does not generate a representative vector space. However, uncorrelated embeddings can be composed simply with vector addition, making the resulting vector space more representative. For instance, researchers [23] made use of operations such as addition and concatenation to combine word embedding vectors of the input text. These word embeddings were generated from text corpora and knowledge bases for a more contextually rich representation of the input text. Similarly, [24] retrofits word vectors using WordNet embeddings to enrich the word embeddings of the input text.
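A correlation check of this kind can be sketched in a few lines: Spearman's rho is the Pearson correlation computed over ranks. This is a simplified version without tie correction, and the score vectors below are hypothetical stand-ins for per-data-point similarities between two view embeddings.

```python
def ranks(xs):
    """Rank values from 1..n (ties keep first-appearance order here;
    a full implementation would average tied ranks)."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    for rank, i in enumerate(order, start=1):
        r[i] = float(rank)
    return r

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman's rho = Pearson correlation of the rank vectors."""
    return pearson(ranks(x), ranks(y))

# Hypothetical per-user similarity scores for two pairs of views
a = [0.1, 0.4, 0.2, 0.9, 0.7]
b = [0.2, 0.5, 0.3, 0.8, 0.6]   # monotonically related to a
c = [0.9, 0.1, 0.8, 0.2, 0.5]   # anti-related to a
print(spearman(a, b))  # ~1.0 (perfectly monotone)
print(spearman(a, c))  # negative
```

In practice a library routine would also return the p-value used for the hypothesis test; this sketch only computes the coefficient itself.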
The creation of embedding vectors is performed through probabilistic calculations [25], and the embedding of each view (Section V-D) may or may not correlate with that of the other views. We conducted correlation analysis between different pairs of view embedding vectors, as shown in Table II, which lists the Spearman correlations and their corresponding p-values. We use Spearman as our correlation metric to measure the similarity between view embeddings at each data point, since our embeddings do not follow a Gaussian distribution. In this analysis, our alternative hypothesis (H1) is that the two embedding vectors are uncorrelated, and the null hypothesis (H0) is that they are correlated. A p-value less than 0.01 suggests rejection of H0. Hence, based on Spearman, we see from Table II that for the first three pairs the null hypothesis of correlation H0 can be rejected, while for the pair User Description and Network, we are unable to reject the null hypothesis of correlation (H0). In fact, the data indicate that people interact closely based on similar user characteristics rather than shared tweet content in marijuana-related communications.

C. Feature Engineering

In our analysis, we have organized our features under three main categories: Person, Content, and Network, since we consider these the main views of Twitter communication that contribute to the context.

1) People: This set of user-level features contributes to differentiating the user types from each other on social media. Specifically, it includes user descriptions, name, screen name, contact information, and profile pictures.

• User Descriptions: This field holds the description of the account as defined by the user. As this metadata carries information on characteristics of the user, we exploit its elements, such as text, emoji, and contact information, by employing text processing techniques.
• Name: This field holds the name of each user, where users can enter their full personal, business, or organization name, or an arbitrary entry. We use this information to discriminate person users utilizing a lexicon 12 of commonly used first and last names. In fact, we found that 68% of the person users can be identified using names listed in the lexicon.

• Contact Information: We extract this information from the descriptions of users, as it includes phone numbers, email, and web addresses. Usually, retail accounts provide this information in their profile for their customers to reach them, making this feature a discriminative factor in classification.

• Profile Pictures: This visual form of Twitter data can reflect feelings, emotions, intentions, and other characteristics of a user. We consider this feature discriminative, as there is a noticeable difference in the profile pictures of personal, retail, and informed agency accounts. See Fig. 3 for examples.

2) Content: To glean discriminatory features from tweet content, we first separated text, emoji, and URLs, and then processed them separately.

• Tweet text: We first extracted tweet text by filtering out other elements such as mentions, URLs, and emoji, and concatenated the tweets of each user. We then created word embedding vectors from this textual data.

• URLs: Users usually provide URLs in their character-limited tweets to refer to a more detailed version of their stories. For instance, retail and media accounts use URLs in their tweets to direct clients to their web pages more often than personal accounts do. The number and frequency of URLs in a tweet can help discriminate among user types.

• Emoji: The use of emoji provides a concise and precise expression of opinions, reactions, sentiments, and emotions concerning a topic of discussion. It is a discriminative feature in our study, capturing the number and senses of emoji used by different user types.
3) Network: As users on Twitter primarily interact using replies, mentions, and retweets, we utilize these interactions as features to identify communication patterns for each user type. We consider replies as mentions. In our exploratory data analysis of marijuana-related communications, we found the following features to be prominent.

• Mentions: A derived feature where the author mentions the screen name of another user; considered a direct interaction.

• Retweets: A derived feature where the retweeting user forwards another user's tweet; considered a direct interaction between these two users.

We generate network embeddings by creating an adjacency matrix based on these interactions between users. This procedure is further explained in Section V-D.2.

D. Compositional Multiview Embedding (CME)

Twitter data contains multiple dimensions that we call views, such as People, Content, and Network. These views can be leveraged to contextualize a comprehensive and multilevel analysis of the Twitter social network. In our study, we employed the Content and People WE models for generating embeddings for the Content view (e.g., tweets) and the People view (e.g., user descriptions and profile pictures), respectively. As described in Section III-B, tweet content and user descriptions involve emoji, which we regard as critical for interpreting the meaning. For this reason, we extracted the textual representation of emoji from EmojiNet and generated cumulative emoji embeddings utilizing a pre-trained word embedding model trained over the Wikinews corpus [26], as explained in [27]. We also generated word embeddings for the profile pictures of users. As Clarifai provides a set of tags that textually represents a profile image, we input these tags into the People WE model, because we consider profile pictures as related to the People view.
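Turning an emoji into an embedding via its sense keywords can be sketched as follows. Both the sense lookup (EmojiNet-style) and the tiny word-vector table are hypothetical toy data, not the actual EmojiNet or WE model contents.

```python
# Toy stand-ins: a tiny sense lookup (EmojiNet-style) and a tiny
# word-embedding table (both hypothetical, for illustration only).
emoji_senses = {
    "🔥": ["fire", "lit"],
    "😀": ["smile", "happy"],
}
word_vecs = {
    "fire":  [0.6, 0.2],
    "lit":   [0.4, 0.4],
    "smile": [0.1, 0.8],
    "happy": [0.1, 0.6],
}

def emoji_embedding(emoji):
    """Cumulative emoji embedding: average of the word vectors of the
    emoji's sense keywords (words missing from the vocabulary are skipped)."""
    words = [w for w in emoji_senses.get(emoji, []) if w in word_vecs]
    if not words:
        return None
    dim = len(next(iter(word_vecs.values())))
    acc = [0.0] * dim
    for w in words:
        acc = [a + b for a, b in zip(acc, word_vecs[w])]
    return [a / len(words) for a in acc]

print(emoji_embedding("🔥"))
```

The same average-of-word-vectors pattern applies to Clarifai tags for profile pictures: each tag is looked up in the People WE model and the vectors are averaged.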
Then we generated CMEs by combining the embeddings at the intersection of different views of the Twitter data, as formulated below. For the Person and Content views (T), the word embedding vector WV_i^T of each data point (i represents the index of a data point in a view) is calculated by averaging the word vectors of the words present in the view. For instance, we preprocess the tweets of a user and generate word vectors of each word in 300 dimensions. Then we sum these vectors and divide by the number of words to generate the embedding vector for the tweets of the user. However, while we perform the average operation to generate embedding vectors for the Person and Content views, we do not average for the Network view. For the generation of network embeddings, we utilized interactional features (mentions and retweets) and performed t-SVD to generate dense embeddings, where each embedding has 300 dimensions; the procedure is detailed in Section V-D.2. We formally define the calculation of WV_i^T as

WV_i^T = (1 / |S_i|) Σ_{w ∈ S_i ∩ V} v_w

where S_i is the set of words in data point i, v_w is the embedding of word w, and V is the vocabulary of the Content WE model trained over the marijuana-related tweet corpus.

1) Tweet-Emoji (T+E) & User Description-Emoji (D+E): We explained the procedure for generating WEs for tweets, user descriptions, and emoji earlier in this section; here we explain how we generate CMEs for Tweet & Emoji and User Description & Emoji. As depicted in Fig. 4, to generate the Tweet-Emoji CME, we combine the WE vectors generated for Tweets and Emoji by performing the vector addition operation. Similarly, for the User Description-Emoji CME, we combine the WE vectors for User Descriptions and Emoji via vector addition.

2) Network Embedding (N): The user types that we characterize in this study have different volumes of network activities.
For instance, while the average retweet and mention rates (derived from Table I) per user are 0.9 and 0.09 respectively for personal accounts, they are 11.08 and 3.53 for informed agency accounts. Clearly, network activity can be used to distinguish and recognize these user types. Thus, combining the network activity information with tweet content and user information can contribute to a reliable classification. For representing the network activities of users, we created a weighted adjacency matrix of interactions; however, the adjacency matrix was sparse, which made generalization of the classifier difficult. Hence, creating dense vectors is imperative for better representation. For generating a low-dimensional dense vector, we utilize truncated Singular Value Decomposition (t-SVD), which has proven to generate dense embeddings in NLP and network embedding tasks [28]. Formally, we define the adjacency matrix as A ∈ R^(m×n), where m and n denote source users and target users respectively (capturing the direction of communication), and where, for a pair of users u_i, u_j, the cell A_{u_i,u_j} holds the interaction count, including both retweets and mentions, for the corresponding users. The adjacency matrix A is sparse and non-stochastic (∑_{j=1}^{n} A_{i,j} ≠ 1). As we need a dense and stochastic representation of the network activities, we normalize the values in each row so that they sum up to 1. This normalization is done by dividing every value by the sum of all values in its row, which makes the matrix stochastic. As the numbers of source and target users differ in A (1149 × 1701), we convert A to a square cosine matrix, denoted A_cosine ∈ R^(m×m), of dimension 1149 × 1149, since we want to measure the similarity between users in our training set. The transformation of A to A_cosine is formulated as follows:

A_cosine = (A · A^T) / (||A|| ||A^T||)
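The row normalization and row-wise cosine transformation described above can be sketched for a tiny interaction matrix; the counts below are hypothetical.

```python
import math

def row_normalize(A):
    """Make each row sum to 1 (row-stochastic); all-zero rows are left as-is."""
    out = []
    for row in A:
        s = sum(row)
        out.append([v / s for v in row] if s else list(row))
    return out

def cosine_matrix(A):
    """Pairwise cosine similarity between the rows of A (m x n -> m x m)."""
    norms = [math.sqrt(sum(v * v for v in row)) for row in A]
    m = len(A)
    C = [[0.0] * m for _ in range(m)]
    for i in range(m):
        for j in range(m):
            dot = sum(a * b for a, b in zip(A[i], A[j]))
            C[i][j] = dot / (norms[i] * norms[j]) if norms[i] and norms[j] else 0.0
    return C

# Hypothetical retweet+mention counts: 2 source users x 3 target users
A = [[2, 0, 2],
     [0, 3, 1]]
S = row_normalize(A)   # rows now sum to 1 (stochastic)
C = cosine_matrix(S)   # 2 x 2, symmetric, diagonal = 1
print([round(sum(r), 6) for r in S])  # [1.0, 1.0]
print(round(C[0][1], 4))
```

The resulting square, symmetric matrix with values in [0, 1] is what the t-SVD step then reduces to 300 dimensions.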
Each cell value in A_cosine lies between 0 and 1, and the matrix is symmetric. As A_cosine is 1149 × 1149, we need to reduce its dimension down to 300 in order to compose the network embedding with the other word embeddings. Therefore, we apply t-SVD over the matrix A_cosine, resulting in three square matrices U, Σ, U^T ∈ R^(m×m), where Σ = diag(σ_1, σ_2, ..., σ_m) holds the m singular values. After we apply the dimensionality reduction, the reduced matrix has dimension m × 300. We denote the reduced matrix as A_reduced ∈ R^(m×300), and its value is determined by:

A_reduced = U_{m×300} · (Σ^{-1}_{300×300})^T

The 300-dimensional embeddings in A_reduced are considered the network embeddings of the users and are used to create a CME in our user type classification.

3) Network-Tweet-Emoji (N+T+E): After we generate the network embeddings (NE) of users, we combine the WEs for Tweets and Emoji with the NE to generate the Network-Tweet-Emoji CME by performing the vector addition operation. The embeddings for Network, Tweets, and Emoji are all 300-dimensional.

E. Experimental Setting

In building the Content and People WE models, we used the skip-gram model with negative sampling. The negative sampling rate was set to 10 and the window size to 5; such a setup is desirable for average-sized datasets [18]. The Content WE model was trained on a pre-processed corpus of 1.8M unique tweets generated by 1M unique users, creating a vocabulary (V) of 16,531 words. The People WE model was trained on 946,975 pre-processed user descriptions obtained from 1M unique users, generating a vocabulary (V) of 16,903 words. Apart from linguistic differences between user descriptions and tweets, another reason to build two WE models is the multiview aspect of our dataset, which also includes profile pictures and emoji in a profile that reflect different contextual meanings compared to the tweets of a user.
In order to create an embedding of a profile picture, we used Clarifai to generate a text caption and then applied the People WE model to the caption.

Empirical Baseline: To the best of our knowledge, the problem of user type classification in marijuana-related communications on Twitter that we address in this study has not been investigated before. For this reason, we created an empirical baseline that utilizes word embeddings of the textual content of tweets and descriptions. We conducted two sets of experiments depending on the inclusion of the CME with network-level features. The first set of experiments does not include the CME with network-level features, and we incrementally include the Person- and Content-level features, utilizing all data points in our training set of 4982 users. As discussed in earlier sections, interactions also play an essential role in forming the characteristics of user types. Therefore, the second set of experiments included CMEs that contain Network-level features, where we take the best performing classification setting from the first set of experiments as the baseline for comparison. At this stage, we had to reduce the size of the training set down to 1149 users, where the sizes of the P, I, and R classes were 1045, 87, and 17, respectively. Since our training set was highly imbalanced, we applied the oversampling algorithm SMOTE to avoid bias toward the majority class at the expense of the minority classes. In our experiments, to illustrate the improvement that the domain-specific WE models provide, we also utilized a generic word2vec model, called Tweet2Vec [21], for comparison, as explained in detail in Section VI.
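The SMOTE oversampling step can be sketched in a minimal form: for each synthetic point, pick a minority sample, choose one of its k nearest minority neighbors, and interpolate between them. This is a simplification of the actual algorithm, and the 2-d "retail" embeddings below are hypothetical.

```python
import math
import random

def smote(minority, n_new, k=2, seed=0):
    """Minimal SMOTE sketch: synthesize n_new points by interpolating
    between minority samples and their k nearest minority neighbors."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        neighbors = sorted((p for p in minority if p is not x),
                           key=lambda p: math.dist(x, p))[:k]
        nn = rng.choice(neighbors)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([a + gap * (b - a) for a, b in zip(x, nn)])
    return synthetic

# Hypothetical 2-d embeddings of a minority class (e.g., retail accounts)
retail = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]]
new_points = smote(retail, n_new=4)
print(len(new_points))  # 4
```

Each synthetic point lies on a segment between two existing minority samples, so oversampling stays inside the region the minority class already occupies rather than duplicating points verbatim.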
VI. RESULTS

Tables III and IV present the results of the two sets of experiments. The different feature sets incorporate different views of the data, as explained in Section V. We systematically and gradually include person-content-network features to observe their individual contributions to the outcome of the classification. We evaluated our approach using the average F-score (Avg. F) across the user types (P, I, R). We also report precision, recall, and average F-score, and discuss the overall performance. The empirically chosen baseline approach achieved an overall F-score of 88% using the word embeddings of tweet content and user descriptions; the F-scores for the individual classes P, I, and R were 95%, 42%, and 73%, respectively. We generated these embedding vectors using the domain-specific word embedding models. As we see in Table III, the classifier built with the embeddings of tweets and descriptions generated through the Tweet2Vec model obtained an average F-score of 86% and underperformed for the P and I classes. Therefore, we continued our experiments using the Content and People WE models. As discussed in Section V, to better contextualize different elements of the content such as text and emoji, we generated CMEs from the tweet and emoji embeddings, and similarly from user descriptions and emoji. Though this experiment showed a reduction of the average F-score by 3%, precision improved by 10% for the I and R classes, meaning false positives for I and R were reduced. Given the small size of these classes in our training dataset, such improvement in precision encouraged us to further continue our experiments with the inclusion of CMEs. We further added the tweet and user metadata to the feature set, which still did not make a significant difference in performance. However, the inclusion of profile pictures as a feature showed a significant improvement in the overall F-score to 97%, where the F-scores for P, I, and R were 98%, 87%, and 90%, respectively.
As discussed earlier, we can benefit from the multiview aspect of Twitter data to cultivate a more satisfactory interpretation of the content. The inclusion of textual data, emoji, and profile pictures in our approach, combined through CMEs for the classification of user types, has impacted the outcome significantly. Furthermore, recall that in the second set of experiments we extended our study by applying our approach with the addition of network interactions between users, from which we generated network embeddings. We used the best performing classifier from the first set of experiments (Table III) as the baseline for the second set of experiments, to compare against our approach that incorporates the network embeddings. In the second set of experiments, we first added the network embedding as a separate feature along with the features of the second baseline approach, and it did not affect the performance. We then created a CME from the embeddings of tweets, emoji, and network, and it boosted the performance of each class, P, I, and R, in terms of F-score, by 1%, 6%, and 4%, respectively. It also improved the overall F-score by 1%. The improvement achieved by applying CMEs is meaningful, since the F-score of the second baseline was already very high, and our approach improved upon that performance.

VII. CONCLUSION AND FUTURE WORK

Our overarching goal was to utilize people-, content-, and network-related features in marijuana-related communications on Twitter to classify users into three prominent categories: Personal, Informed Agency, and Retail accounts. Such a classification ultimately provides support for understanding the dynamics of issues related to marijuana and its variants from location and temporal perspectives. Furthermore, dominant and trending topics can be identified for each user type for more precise and reliable subjective analysis of related events and their impacts.
In this paper, we introduced an approach to classify user types utilizing Compositional Multiview Embedding (CME). For this purpose, we learned a domain-specific embedding for tweet text and a separate embedding for user profile descriptions, mapped profile images to tags to obtain their embeddings, and incorporated emoji as words using EmojiNet embeddings. We also incorporated interactional features by creating network embeddings. Overall, we achieved a 7% improvement over the empirical baseline when we used the CMEs without network embedding, and an 8% improvement when we used the CMEs with network embedding; the latter also resulted in an F-score of 0.96. Although we implicitly address homophily by assessing the similarity between users based on different views, we plan to enhance our work by analyzing homophily in marijuana-related communications on Twitter as a case study, leveraging the approach explained in this paper. Upon completion of the review process, we will open-source our baseline and annotated dataset for reproducibility.
Effect of Huanglongbing or Greening Disease on Orange Juice Quality, a Review

Huanglongbing (HLB) or citrus greening is the most severe citrus disease, currently devastating the citrus industry worldwide. The presumed causal bacterial agent Candidatus Liberibacter spp. affects tree health as well as fruit development, ripening, and the quality of citrus fruits and juice. Fruit from infected orange trees can be either symptomatic or asymptomatic. Symptomatic oranges are small, asymmetrical, and greener than healthy fruit. Furthermore, symptomatic oranges show higher titratable acidity and lower soluble solids, solids/acids ratio, total sugars, and malic acid levels. Among flavor volatiles, ethyl butanoate, valencene, decanal, and other ethyl esters are lower, but many monoterpenes are higher in symptomatic fruit compared to healthy and asymptomatic fruit. The disease also causes an increase in secondary metabolites in the orange peel and pulp, including hydroxycinnamic acids, limonin, nomilin, narirutin, and hesperidin. Resulting from these chemical changes, juice made from symptomatic fruit is described as distinctly bitter, sour, salty/umami, metallic, musty, and lacking in sweetness and fruity/orange flavor. Those effects are reported in both Valencia and Hamlin oranges, two cultivars that are commercially processed for juice in Florida. The changes in the juice are reflective of a decrease in quality of the fresh fruit, although not all fresh fruit varieties have been tested. Earlier research showed that HLB-induced off-flavor was not detectable, by chemical or sensory analysis, in juice made with up to 25% symptomatic fruit blended into healthy juice. However, a blend with a higher proportion of symptomatic juice would present a detectable and recognizable off-flavor. In some production regions, such as Florida in the United States, it is increasingly difficult to find fruit not showing HLB symptoms.
This review analyzes and discusses the effects of HLB on orange juice quality in order to help the citrus industry manage the quality of orange juice and guide future research needs.

INTRODUCTION

Huanglongbing (HLB) is a citrus disease that has profoundly changed the size and shape of worldwide citrus production, and its negative effects keep impacting the industry as the disease continues to spread throughout the various citrus growing regions of the world (Gottwald et al., 2012). Practically all commercial citrus species and cultivars are vulnerable to HLB. The disease has an array of symptoms which can be detected anywhere on the plant, from the roots to the leaves, changing the chemical characteristics and sensory attributes of the fruit (Bové, 2006; Baldwin et al., 2010, 2018; Dala Paula et al., 2018). In this review, the effects of HLB on orange juice quality are described based on the current scientific literature.

WORLDWIDE CONSUMPTION AND PRODUCTION OF FRESH ORANGES AND ORANGE JUICE

citrus processing industry. The disease affects nearly all varieties of citrus, with grapefruit, sweet oranges, some tangelos, and mandarins being the most susceptible, and limes, lemons, sour oranges, and trifoliate oranges the least (Abdullah et al., 2009). It is difficult to determine where HLB originated. However, there is information suggesting that HLB was responsible for India's citrus dieback during the eighteenth century (Capoor, 1963; da Graça, 2008). Initially, researchers believed that the tristeza virus was the leading cause of the citrus dieback in India, but after a thorough survey, HLB was determined to be the primary cause (Fraser and Singh, 1968; da Graça, 2008). In China, HLB has been reported since 1919, described by Reinking (1919) as the citrus yellow shoot disease (Bové, 2006).
In 1937, the African variation was reported for the first time in South Africa (Van der Merwe and Andersen, 1937), and it was later linked to chromium and manganese toxicity. It was also associated with the leaf mottling citrus disease in the Philippines in the 1960s (Fraser et al., 1966; McClean and Schwarz, 1970). Currently, the disease has spread to more than 50 countries in Africa, Asia, Oceania, and the Americas (South, North, and Central America, and the Caribbean; Figure 1; CABI, 2017; EPPO, 2017). The first case of HLB in the Americas was reported in the state of São Paulo, Brazil in 2004 (Coletta-Filho et al., 2004; Teixeira et al., 2005a). However, in a survey conducted in São Paulo just 6 months after HLB had been reported in Brazil, 46 cities reported having infected trees, suggesting that HLB had been present for almost 10 years (Bové, 2006). A year later, in August 2005, symptoms of the disease were recognized in Florida, United States; in 2007 in Cuba; in 2008 in the Dominican Republic; and in 2010 in Mexico (Coletta-Filho et al., 2004; Halbert, 2005; Llauger et al., 2008; Matos et al., 2009; NAPPO North American Plant Protection Organization, 2010). Currently, HLB is present in all Florida citrus-growing counties (Baldwin et al., 2010), as well as in California, Georgia, Louisiana, South Carolina, and Texas (CABI, 2017; EPPO, 2017). As the severity of HLB increases, premature fruit drop becomes a growing problem which has contributed to declining yields in Florida, especially during the last few years (Chen et al., 2016). In Brazil, the states of São Paulo, Minas Gerais, and Paraná have reported the presence of HLB, with São Paulo being the most affected state. In India and China, HLB has spread to around 25 and 11 provinces, respectively (Table S).

CAUSAL AGENTS AND VECTORS OF HUANGLONGBING

It is well established that Huanglongbing is associated with the presence of the gram-negative bacterial genus Candidatus Liberibacter (CL).
Three species are known to cause the symptoms of HLB: CL asiaticus (CLas), CL americanus (CLam), and CL africanus (CLaf). The Asian and the American species can be transmitted by the psyllid Diaphorina citri Kuwayama (Hemiptera: Psyllidae), commonly called Asian citrus psyllid (ACP), and the African species by the insect Trioza erytreae (Hemiptera: Triozidae; Bové, 2006). Although HLB was first reported in Brazil and the US 15 years ago, the psyllid vector was reported in São Paulo and Florida as early as 1942 and 1998, respectively (Bové, 2006; Tansey et al., 2017). CLam was the most prevalent bacterial species in Brazil in 2005, initially affecting more than 90% of the infected trees, decreasing to 60% in 2007. During this period, there was an increase in CLas infection, from 5 to 35% of the infected trees, while mixed infection remained practically constant at 5% (Coletta-Filho et al., 2007; Gasparoto et al., 2012). Among the HLB bacteria, CLaf is sensitive to heat and to dry weather and thrives between 20 and 25°C, while the other species are heat tolerant and thrive at higher temperatures (Catling, 1969; Cheraghian, 2013). These observations might explain why CLaf is not present in hot and humid tropical and subtropical climates. As CLas has been difficult to culture in vitro, the recommended detection method is quantitative real-time polymerase chain reaction (qPCR) targeting the 16S rDNA gene (Teixeira et al., 2005b; Li et al., 2006).

SYMPTOMS OF HUANGLONGBING AND ITS IMPACT ON ORANGE TREES

In the early stages of the disease, it is difficult to make a clear diagnosis. McCollum and Baldwin (2017) noted that HLB symptoms are more apparent during cooler seasons than in warmer months. It is uncertain how long a tree can be infected before showing the symptoms of the disease but, when it eventually becomes symptomatic, symptoms appear on different parts of the tree.
Infected trees generally develop some canopy thinning, with twig dieback and discolored leaves, which appear in contrast to the other healthy or symptomless parts of the tree. The symptomatic leaves can be normal-sized, showing yellow coloration or a blotchy mottle, or they can be small, upright, and show a variety of chlorotic patterns resembling those induced by zinc or other nutritional deficiencies (McClean and Schwarz, 1970; da Graça, 1991; Albrecht et al., 2016; McCollum and Baldwin, 2017). The root systems are poorly developed, showing very few fibrous roots, likely due to nutrient starvation (da Graça, 1991; Batool et al., 2007). Symptomatic trees display excessive starch accumulation in the aerial plant parts, one of the predominant biochemical responses to HLB, due to the upregulation of glucose-phosphate transport, which is involved with the increased entrance of glucose into this pathway (Martinelli and Dandekar, 2017). It has been suggested that accumulation of starch in the leaves is also the result of decreased degradation and impaired transport, which results in an inefficient partitioning of photoassimilates among mature citrus leaves, roots, and young leaves. This imbalance in sugar transport and accumulation would affect sugar content in fruit. The starch remains indefinitely in the aerial plant parts; it does not degrade, even during the night cycles, resulting in root starvation, severe health decline, and death of trees (Etxeberria et al., 2009; Fan et al., 2010; Zheng et al., 2018). Along with the color changes and starch accumulation in symptomatic leaves, there are also changes in the secondary metabolite profiles. HLB affects the amounts of hydroxycinnamic acids and flavonoids in infected leaves, resulting in lower levels of vicenin-2, apigenin-C-glucosyl-O-xyloside, 2"-xylosylvitexin, luteolin rutinoside, and isorhoifolin compared to healthy leaves.
While healthy leaves contain only trace levels of limonin glucoside, infected leaves contain levels of 300 ± 22 µg/mL (Manthey, 2008). Proline and other amino acids were found in greater amounts in leaves showing symptoms of infection, and sugar metabolism was also affected (Cevallos-Cevallos et al., 2012; Albrecht et al., 2016). According to studies of infected orange fruit, HLB-symptomatic oranges are reduced in size, sometimes asymmetric, and contain small, brownish/black aborted seeds which can be seen when the orange is sectioned perpendicularly to the fruit axis. The orange peel turns green with an inversion of colors: the fruit turns from green to yellow/orange at the peduncular end while the stylar end remains green. In a healthy orange, the color change first starts at the stylar end, progressing only later to the peduncular area. HLB causes fruits to drop prematurely, resulting in a 30-100% yield reduction, and, ultimately, premature death of the tree. Tree mortality can occur several months to years after infection (McClean and Schwarz, 1970; da Graça, 1991; Bové, 2006; Batool et al., 2007; Bassanezi et al., 2011; Liao and Burns, 2012). HLB-symptomatic fruit from infected trees are smaller in diameter compared to asymptomatic fruit from infected and healthy trees, which have similar diameters (Table 1, Figure 2). Even though most of these symptomatic fruit do not make it to processing due to premature drop or elimination by sizing equipment (McCollum and Baldwin, 2017; Baldwin et al., 2018), more are entering the processing stream as there is not enough normal-sized fruit. The weight and juice content of symptomatic oranges are diminished compared to asymptomatic and healthy oranges, which are similar (Table 1). Most of the studies were performed with Valencia and Hamlin oranges (Liao and Burns, 2012; Massenti et al., 2016; Baldwin et al., 2018), as well as with two strains of Valencia and the Hamlin, Westin, and Pera varieties (Bassanezi et al., 2009).
HLB potentially causes trees to be more susceptible to other pests, including attacks by the citrus longhorned beetle (Anoplophora chinensis Forster). In advanced cases of HLB infection, a combination of citrus longhorned beetles and Phytophthora fungi is common (Halbert and Manjunath, 2004; Batool et al., 2007).

HUANGLONGBING CONTROL AND MITIGATION OF ITS SYMPTOMS

Current management strategies focus on vector control, avoiding the spread of infection, or management of infected trees. The success of individual or combined approaches depends on the infestation level. In regions where disease incidence is low, the most common practices are avoiding the spread of infection by removal of symptomatic trees, protecting grove edges through intensive monitoring, use of pesticides, and biological control of the vector ACP. The management of infected trees includes enhanced nutrition by foliar sprays of readily absorbable nutrients and phytohormones, or regulating soil pH to enhance nutrient uptake, and precision irrigation based on soil moisture sensing and the needs of HLB-affected trees (Stansly et al., 2010; Albrecht et al., 2012; Martini et al., 2016; Zheng et al., 2018). However, the control of HLB is still difficult, especially if the bacteria are widespread and their vectors are well established. Diseased trees in abandoned citrus groves act as abundant sources of CLas inoculum and insect vectors, and this has been a particularly prevalent problem in Florida. The most effective control strategy has been to remove infected trees in an area and then replant with CLas-free trees (Abdullah et al., 2009). Current recommendations are that control of the psyllid vector should begin as soon as its presence is noticed in citrus groves, even in regions free of HLB (McCollum and Baldwin, 2017). Another area-wide pest management approach to control the ACP and reduce the likelihood of resistance is the Citrus Health Management Areas (CHMAs) (Jones et al., 2013).
According to Singerman and Useche (2016), CHMAs coordinate insecticide application to control the ACP spreading across area-wide neighboring commercial citrus groves as part of a plan to address the HLB disease. The intensified insecticide application also creates environmental and public health concerns, as well as side-effects on non-target fauna such as beneficial arthropods. Singerman and Page (2016) indicated that CHMAs enhance growers' profitability when all growers involved participate in the program. Covered, protected production fields have been tested as an alternative for fresh citrus production in Florida. These protected systems work by physically excluding the ACP from the enclosed grove, thereby preventing contact between the ACP and trees. One of the main advantages is the reduced reliance upon frequent insecticide sprays to control psyllids (Ferrarezi et al., 2017a). Anti-psyllid screen houses and container-grown cultivation allow rapid young plant growth, thus playing important roles in developing new citrus production systems aimed at vector-free environments (Ferrarezi et al., 2017b). Florida growers have been using foliar nutritional spray products that often contain macro- and micro-nutrients to compensate for the lack of nutrient assimilation due to the disease, and compounds that are believed to activate "systemic acquired resistance" pathways in plants (such as salicylic acid) to increase tree defense response (Masuoka et al., 2011; Baldwin et al., 2012a). The benefits of this approach to disease management in the field have been criticized because the inoculum remains after application. Unfortunately, this perceived method of managing HLB potentially contributed to the proliferation of the disease in Florida after farmers stopped eliminating their infected trees. Unless the vector is thoroughly controlled, the spread of HLB to other orchard trees and neighboring farms is inevitable (Timmer et al., 2011; Gottwald et al., 2012).
In an evaluation of the effect of nutritional spray treatments on fruit quality, Hamlin oranges from treated trees had the same off-flavor as oranges from trees that did not receive the treatment, whereas Valencia oranges were notably sweeter. Nutritional treatments did not consistently result in less pathogen DNA for either variety (Baldwin et al., 2012a). The implementation of combined nutrient programs and insecticide treatments has been studied, and the results suggest that the beneficial effect on orange juice quality may have been cumulative, only manifesting later (Plotto et al., 2017). In addition to foliar nutritional sprays, plant growth regulators were tested, unsuccessfully, to reduce HLB-associated fruit drop (Albrigo and Stover, 2015). Incidentally, it was found that orange fruit showing HLB symptoms were also contaminated with Lasiodiplodia theobromae (diplodia), generally a postharvest pathogen, but which induced a greater abscission zone in symptomatic fruit (Zhao et al., 2015). [Table values: Healthy-R 6.9a; Healthy-D 6.9a; HLB-SY-R 6.1b; HLB-SY-D 6.2b. Legend: AS, asymptomatic; SY, symptomatic; Healthy, fruit harvested from healthy, non-shaken trees; Healthy-R, fruit retained on shaken healthy trees (healthy-retain); Healthy-D, healthy fruit that dropped to the ground upon shaking the trees (healthy-drop); HLB-R, fruit retained on shaken HLB-affected trees; HLB-D, fruit that dropped from HLB-affected trees.] A direct correlation between diplodia and ethylene production at the fruit abscission zone was established, and the use of pre-harvest fungicides reduced fruit drop (Zhao et al., 2016). However, HLB-infected fruit with a greater abscission zone (i.e., fruit that are more readily prone to drop on the ground) generally had lower quality than fruit harvested from the same trees but with a lesser abscission zone. The difference in quality was due to lower total sugars and higher bitter limonoids, and was more pronounced in early-harvested Hamlin.
The strategy of reducing fruit drop by reducing diplodia infection might have its benefit in delaying harvest to reduce the negative effect of HLB on fruit quality.

FRESH FRUIT AND ORANGE JUICE QUALITY AFFECTED BY CANDIDATUS LIBERIBACTER ASIATICUS

To better understand the influence of HLB on the chemical and physicochemical characteristics of orange juice, it is important to consider the factors which may affect them, such as variety, harvest date, location, maturity, and the presence of pulp in the juice. In general, variations due to harvest date are more pronounced than variations due to the disease (Bassanezi et al., 2009; Baldwin et al., 2010; Plotto et al., 2010). As the season progresses, the peel color of a healthy orange becomes less green and more orange, juice content declines, sugars and soluble solids content (SSC) increase, and titratable acidity (TA) and citric acid decrease.

Peel Color

As peel color often determines the attractiveness of an orange to the consumer, the effects of HLB on this important characteristic are of great concern within the fresh fruit citrus industry. Symptomatic oranges from HLB-affected trees (HLB-SY) are greener or less orange in peel color compared to asymptomatic oranges from HLB-affected trees (HLB-AS) or from HLB-unaffected trees (healthy). Several studies investigated changes in peel color due to infection by CLas. A less orange-colored peel was reported in symptomatic Hamlin fruit (Liao and Burns, 2012). However, variation in peel color of Valencia oranges depended on harvest date and year (Liao and Burns, 2012; Massenti et al., 2016), suggesting that Valencia orange may be less prone to peel color changes due to HLB. Valencia fruit has naturally more color than Hamlin and, therefore, the HLB effect on peel color would be less visible.

Sugar and Organic Acids

The physicochemical characteristics of oranges play a vital role in determining the quality of the orange juice produced.
There is no general agreement among available results in the scientific literature regarding changes in pH due to CLas infection. The pH of orange juice from HLB-infected trees was either higher, lower, or similar compared to juice made with oranges from uninfected trees (Plotto et al., 2008; Raithore et al., 2015; Dala Paula et al., 2018). TA, SSC, and SSC/TA tend to be similar in juice from asymptomatic HLB-AS and healthy oranges. However, a few studies reported differences, although small, in SSC/TA between HLB-AS and healthy Valencia and Hamlin orange juices (Dagulo et al., 2010; Massenti et al., 2016; Hung and Wang, 2018). Juice from HLB-SY fruit usually presents the highest TA and the lowest SSC and SSC/TA in Valencia and Hamlin (Tables 2 and 3), as well as Westin and Pera orange juices (Bassanezi et al., 2009). Recent studies reported variation among fruit affected by the disease, with higher SSC in juice from HLB-SY Hamlin (Hung and Wang, 2018) and Valencia, and a higher SSC/TA in juice from HLB-SY Hamlin compared to juice from healthy fruit (Hung and Wang, 2018). Uninfected trees have recently become difficult to find in Florida, which explains why, in the Hung and Wang (2018) study, healthy Hamlin oranges were from young 2-year-old trees grown under protective screens while HLB-SY or HLB-AS oranges were obtained from older field-grown trees, making the comparison less accurate than if trees had been of the same age and growing conditions. SSC/TA, a parameter commonly used as a fruit quality index, tends to increase at later harvest dates and is more heavily affected by harvest time and orange cultivar than by HLB infection status. Among the orange cultivars investigated, evaluations of the effects of HLB have predominantly addressed Valencia oranges. Glucose, fructose, and sucrose were quantified in orange juice from HLB-infected trees and compared with juice from oranges from uninfected trees.
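The SSC/TA quality index discussed above is a simple quotient of soluble solids (°Brix) over titratable acidity (% citric acid). A minimal sketch, using made-up values chosen only to mirror the reported trend (symptomatic juice: lower SSC, higher TA), illustrates how the index shifts:

```python
def ssc_ta_ratio(ssc_brix: float, ta_percent: float) -> float:
    """Fruit quality/maturity index: soluble solids content over titratable acidity."""
    if ta_percent <= 0:
        raise ValueError("titratable acidity must be positive")
    return ssc_brix / ta_percent

# Hypothetical values, loosely patterned on the trends reported above:
# symptomatic juice has lower SSC and higher TA, hence a lower ratio.
healthy = ssc_ta_ratio(ssc_brix=11.0, ta_percent=0.70)
symptomatic = ssc_ta_ratio(ssc_brix=9.5, ta_percent=0.85)
print(round(healthy, 1), round(symptomatic, 1))  # 15.7 11.2
```

Because both the numerator and the denominator move in the unfavorable direction in symptomatic fruit, the ratio amplifies the quality difference more than either measurement alone.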
In the early studies, glucose and fructose either did not vary or slightly decreased with disease status in fruit (Plotto et al., 2008; Baldwin et al., 2010; Slisz et al., 2012; Raithore et al., 2015; Table 4). Only recent studies reported a significant increase of glucose and fructose content in juice from HLB-SY fruit compared with healthy oranges (Dala Paula et al., 2018). On the other hand, sucrose and total sugar contents decreased in juice made with oranges from HLB-affected trees in most studies, and more notably in juices from HLB-SY Valencia and Hamlin oranges. The change in sugars in HLB-SY fruit reflects the disruption in the plant carbohydrate metabolism reported in leaves of citrus affected by HLB (Fan et al., 2010), as well as the impaired sugar transport due to the disease (Liao and Burns, 2012; Chin et al., 2014; Zheng et al., 2018). An increase in cell-wall invertase was observed in HLB-infected leaves, resulting in a decrease in sucrose content (Fan et al., 2010). Cell-wall invertase is a glycoprotein enzyme generally found in developing sink organs (roots and fruits) responsible for the hydrolysis of sucrose into glucose and fructose. Asymptomatic (HLB-AS and healthy) oranges can have sucrose contents ∼2.5 times higher than those of symptomatic fruit (Slisz et al., 2012). In addition, Fan et al. (2010) suggested that CLas preferentially uses fructose, causing an accumulation of glucose and sucrose, which are metabolic resources but also signaling components that act through feedback inhibition of photosynthesis and contribute to HLB's yellowing leaf mottle symptoms. Poiroux-Gonord et al.
(2013) also demonstrated an increase in sucrose content in the pulp of oranges adjacent to leaves subjected to photooxidative stress, even though the studied "Navelate" orange trees were not infected by CLas and, consequently, had none of the phloem blockage or impaired sap transport attributable to CLas (Hijaz et al., 2016). For organic acids, the majority of the studies reported similar citric and ascorbic acid levels in juice from HLB-unaffected fruit and asymptomatic oranges from HLB-affected trees. However, juice from HLB-SY oranges generally has a higher content of citric acid and a lower content of malic acid compared to juice from healthy fruit (Table 4). Poiroux-Gonord et al. (2013) also reported an increase in organic acids, especially succinic acid, in the pulp of oranges adjacent to leaves subjected to photooxidative stress, a situation associated with HLB effects in citrus leaves (Cen et al., 2017).

Secondary Metabolites

Oranges are an important source of secondary metabolites which promote human health, particularly flavonoids, limonoids, hydroxycinnamic acids, and polyamines. Many secondary metabolites result from the interaction between the plant and its environment, and are induced by biotic and abiotic factors. Changes in the levels of certain classes of secondary metabolites in oranges are frequently due to stress conditions in plants, including photooxidative stress in nearby leaves (Poiroux-Gonord et al., 2013). In addition, these compounds are influenced by many factors, such as cultivar, cultivation methods, degree of ripeness, and processing and storage conditions (Sudha and Ravishankar, 2002; Ramakrishna and Ravishankar, 2011; Chin et al., 2014). In terms of secondary metabolite content, juice made with asymptomatic oranges from HLB-infected trees is generally more similar to juice made with oranges from HLB-unaffected trees than to juice made with symptomatic fruit.
When differences are present, they are caused by harvest maturity rather than by disease status. The interaction of fruit maturity and HLB is not well understood, but Dagulo et al. (2010) suggested that fruit symptomatic for HLB are similar to immature fruit (lower sugars, higher acids, higher bitter limonoids), which is probably why the effect of HLB is more prevalent early in the season. They also suggested that HLB-affected fruit are slow to mature, likely due to a compromised vascular system. Baldwin et al. (2010) determined several secondary metabolites, including hydroxycinnamic acids at 6.3 min and 7.2 min, vicenin-2, feruloyl putrescine, narirutin 4′-glucoside, limonin glucoside, narirutin, nomilin glucoside, nomilinic acid glucoside, limonin, and nomilin, in asymptomatic and healthy juice made with Hamlin oranges harvested in December 2007. Feruloyl putrescine was the only secondary metabolite that was present at similar levels. However, the same orange cultivar harvested in February 2008 presented similar levels of the two hydroxycinnamic acids, vicenin-2, feruloyl putrescine, limonin glucoside, narirutin, and nomilin glucoside between healthy and asymptomatic juices. The same comparison performed with Valencia oranges harvested in April 2008 showed similar contents of all of the secondary metabolites; however, oranges from the June harvest showed different levels of feruloyl putrescine, limonin glucoside, and limonin. These results demonstrate that harvest maturity has a greater effect on the content of secondary metabolites than CLas infection. Juice made with HLB-affected oranges contains high levels of nomilin and limonin, more so when made from symptomatic oranges. Both nomilin and limonin are known to impart bitterness to citrus fruit and its juice (Maier et al., 1977, 1980; Hasegawa et al., 2000).
Early research on the effect of HLB on fruit quality suggested that limonin levels >1 mg/L could induce bitterness in juice, as this was also the detection threshold in water (Guadagni et al., 1973). However, further research showed that the recognition threshold of limonin is actually around 4-6 mg/L in a complex matrix such as orange juice (Guadagni et al., 1973; Dea et al., 2013). In fact, it is now recognized that only symptomatic oranges have their taste compromised (Plotto et al., 2010; Slisz et al., 2012; Chin et al., 2014; Raithore et al., 2015; Dala Paula et al., 2018) and only severely affected orange juice has limonin levels above 4 mg/L (Table 5). This suggests that there are other compounds involved in the bitter taste of juice from symptomatic oranges (Dala Paula et al., 2018), and that interactions of flavonoids, together with the combination of lower sugars and higher acids, enhance limonoid bitterness perception (Dea et al., 2013; Kiefl et al., 2018).

Amino Acids and Bioactive Amines

The accumulation of proline, arginine, and branched-chain amino acids is expected in plants subjected to conditions that induce stress, such as drought, high salinity and acidity, high incidence of light, high concentrations of heavy metals in the soil, and changes in temperature, as well as in response to biotic stress, such as plant diseases (Rai, 2002; Sharma and Dietz, 2006; Slisz et al., 2012; Malik et al., 2013). Studies showed that proline was higher in leaves of symptomatic HLB-infected trees (Cevallos-Cevallos et al., 2011, 2012; Malik et al., 2014), but it was lower in juice from HLB-SY Valencia fruit (Slisz et al., 2012). In contrast, Hung and Wang (2018) reported an accumulation of proline in Hamlin orange juice from HLB-infected trees. These authors suggested that some of the control trees in the Slisz et al. (2012) study possibly tested as false negatives due to the detection limit of PCR methods or uneven distribution of CLas throughout the tree.
However, in both studies the amino acids alanine, arginine, leucine, isoleucine, threonine, and valine were found at lower concentrations in juice from HLB-symptomatic oranges (Slisz et al., 2012; Hung and Wang, 2018). In juice from HLB-symptomatic Valencia and Hamlin oranges, the concentrations of asparagine and phenylalanine were over two times higher than in juice from healthy oranges, and histidine content also increased (Chin et al., 2014). An increase of asparagine and histidine contents was also found in juice from HLB-symptomatic Valencia fruit (Slisz et al., 2012) and in Satsuma orange leaves (Malik et al., 2014). [Table 5 notes: *results were converted from mg/kg to mg/L assuming an orange juice density of 1.0 g/cm³; **LOQ of limonin = 1.2 mg/kg; ***LOQ of nomilin = 5.0 mg/kg.] A suggested explanation for this trend is that CLas may have inhibited the tree defense mechanism which, in turn, reduced the action of proline dehydrogenase, an enzyme responsible for the activation of the biosynthetic pathways of proline from ornithine and glutamate. Thus, the levels of this amino acid could not increase (Slisz et al., 2012). However, the accumulation of phenylalanine in juice from HLB-affected oranges (Slisz et al., 2012) differs from the results of Malik et al. (2014) and Hung and Wang (2018). The latter authors explained that phenylalanine is an essential precursor for secondary phenylpropanoid metabolism by phenylalanine ammonia-lyase in higher plants and that its gene expression is significantly affected by CLas infection (Hung and Wang, 2018). Hamlin and Valencia HLB-symptomatic oranges showed high contents of the aromatic amine synephrine; however, juice from HLB-asymptomatic and healthy fruit had similar contents (Slisz et al., 2012; Chin et al., 2014).
In plants, putrescine is a necessary diamine precursor of polyamine synthesis (spermidine and spermine), and its increase is usually associated with environmental stress (Coelho et al., 2005; Gloria, 2006; Sharma and Dietz, 2006); however, putrescine content was not affected in juice from HLB-symptomatic oranges (Chin et al., 2014). On the other hand, feruloyl putrescine, a conjugate of putrescine and ferulic acid, is found at high concentrations in juice from HLB-symptomatic Hamlin oranges compared to juice from HLB-asymptomatic and healthy fruit. The same trend does not seem to be observed in Valencia oranges. Only a few studies have dealt with changes in the volatile compounds in orange juice affected by HLB (Dagulo et al., 2010; Hung and Wang, 2018; Kiefl et al., 2018). These studies have shown that monoterpenes tend to be higher and esters lower in juice affected by HLB (Dagulo et al., 2010; Kiefl et al., 2018). These studies have also shown that sesquiterpenes, including valencene, were typically lower in HLB-affected juice (Figure 3). These results are relevant to the quality of orange juice as esters typically impart fruity flavor and terpenes are characteristic of citrus volatiles: ethyl acetate, ethyl butanoate, and ethyl hexanoate have sweet fruity odors in orange juice (Plotto et al., 2008). Ethyl 3-hydroxyhexanoate is reported as one of the major esters in orange juice (Shaw, 1991; Fan et al., 2009), with a sweet and fruity odor (Buettner and Schieberle, 2001). Lower esters and higher terpenes are likely to result in an imbalanced flavor of orange juice. While the terpene alcohol linalool, with a fruity/floral characteristic, is desired in orange juice, other terpene alcohols (α-terpineol, 4-terpineol, carveol) are indicators of oxidation and poor quality (Dagulo et al., 2010; Kiefl et al., 2018). Dagulo et al.
(2010) suggested that the higher terpenes and lower sesquiterpenes in HLB-affected orange juice might be an indication of lower enzyme activity in the pathway converting terpenes to sesquiterpenes in the affected oranges. Contradictory results were reported for alcohols. Dagulo et al. (2010) and Baldwin et al. (2010) found that (Z)-3-hexenol was higher in juice from HLB-affected Valencia oranges, while Kiefl et al. (2018) found it was higher in juice from healthy fruit. In fact, Dagulo et al. (2010) and Hung and Wang (2018) found that all alcohols were higher in HLB-affected juice. The levels of aldehydes varied much more depending on the study, season, and cultivar. Octanal, nonanal, and decanal are important aldehydes with a characteristic citrus odor (Perez-Cacho and Rouseff, 2008) and were higher in juice from "healthy" oranges in the Kiefl et al. (2018) and Baldwin et al. (2010) studies. In contrast, these aldehydes were higher in juice from HLB-asymptomatic Valencia oranges in the Dagulo et al. (2010) study. Likewise, the "green" odor compound hexanal was 65 to 110% higher in HLB-unaffected samples in the Baldwin et al. (2010) study, up to 81% higher in HLB-symptomatic Valencia in the Dagulo et al. (2010) study, and about 25% higher in HLB-affected fruit (Kiefl et al., 2018). Considering all three studies, it is important to remember that volatile levels differ with harvest times, the type of process used to prepare the orange juice (Baldwin et al., 2012b), and HLB infection status. It is important to emphasize that, generally, asymptomatic orange juice is similar to healthy orange juice with respect to volatile profile. Not only does HLB affect the profile of volatiles in orange juice, but, by reducing fruit size, it also reduces peel oil extraction by 30% in HLB-symptomatic fruit. As in orange juice, sesquiterpene hydrocarbons are lower in the peel oil of symptomatic fruit, as are some monoterpenes and straight-chain aldehydes.
In another study, Xu et al. (2017) found compounds detected only in oil from HLB-affected fruit, including β-longifolene and perillene, two terpenes, and 4-decenal, an aldehyde. However, these authors admit that more samples should be analyzed to confirm these findings. These authors found that linalool, decanal, citronellol, citral, carvone, and dodecanal were higher in the oil from asymptomatic than from symptomatic fruit in Hamlin and Valencia oranges harvested twice in the season (Xu et al., 2017). Kiefl et al. (2018) analyzed peel oil by gas chromatography and olfactometry and found that mostly odor-active aldehydes contributed to the difference between healthy and HLB-affected Valencia oil, being higher in HLB-affected fruit.

Effects of HLB on Juice Sensory Characteristics

Early reports describing the symptoms of HLB disease on trees, leaves, and citrus fruit were published in plant pathology journals, and the effects on fruit were mostly described as visual defects. One report mentioned HLB-symptomatic oranges as having a "bitter and salty taste, especially in the early part of the season" (McClean and Schwarz, 1970). These were informal observations about fruit having off-flavor. Only recently have formal sensory analyses (triangle test, difference-from-control test) been used to describe and quantify other, more subtle taste attributes in HLB-affected fruit (Plotto et al., 2008). Studies have included analysis of juice prepared from fruit of healthy, unaffected trees, and of juice prepared from asymptomatic and symptomatic fruit from HLB-affected trees testing positive for CLas. Comparisons were made using difference-from-control tests where panelists rated the degree of difference between healthy and infected juice. Sensory results could be explained by chemical data and confirmed differences between healthy, asymptomatic, and symptomatic fruit/juice.
These comparisons were repeated with several cultivars, Hamlin, Mid-Sweet, and Valencia, and the differences between healthy and HLB-affected fruit were more pronounced and obvious to the palate with fruit harvested early rather than late in the season (Plotto et al., 2010). Juice made with these symptomatic, HLB-affected oranges had the most off-flavors, commonly described as "bitter," "sour," and "sour/fermented." Higher bitterness and sourness in symptomatic fruit could be explained by higher levels of limonin and titratable acidity together with lower soluble solids content. A trained panel provided more insight into the various descriptors characterizing orange juice made with HLB-symptomatic fruit, with several negative descriptors regarding taste and flavor (astringency, tingling, harshness, bitterness, metallic taste, low sweetness, saltiness/umami, musty, sourness/fermented, pungent/peppery, low citrusy taste; Tables 6, 7), usually due to an imbalance in the chemical composition of the affected fruit (Baldwin et al., 2012a; Plotto et al., 2010, 2017; Raithore et al., 2015; Dala Paula et al., 2018; Kiefl et al., 2018). HLB off-flavor in severely symptomatic fruit is so pronounced that processing healthy with affected fruit is likely to negatively impact the sensory quality of commercial orange juice (Bassanezi et al., 2009). Juice from HLB-symptomatic fruit, up to 25%, can be blended with juice from unaffected fruit without being perceived as off-flavored for both Hamlin and Valencia (Raithore et al., 2015). Another study found an even lower amount (10% by juice mass) of HLB-symptomatic fruit to be acceptable in a blend (Ikpechukwu, 2012). Both studies were performed with not-from-concentrate juice processed in a pilot plant, and can serve as a basis for processors who need to sort symptomatic fruit out before juicing to maintain overall juice quality (Raithore et al., 2015).
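The blending thresholds above lend themselves to a simple capacity calculation. The sketch below is purely illustrative (the function name and the framing are ours, not code from any cited study); it applies the 25% mass limit reported by Raithore et al. (2015), with the stricter 10% figure (Ikpechukwu, 2012) available as an alternative:

```python
def max_symptomatic_mass(total_mass_kg, threshold=0.25):
    """Largest mass (kg) of juice from HLB-symptomatic fruit that can go
    into `total_mass_kg` of blended juice without exceeding `threshold`
    (default 0.25, i.e., the 25% acceptability limit by juice mass)."""
    if not 0 <= threshold <= 1:
        raise ValueError("threshold must be a fraction between 0 and 1")
    return total_mass_kg * threshold

# For a 1000 kg batch of not-from-concentrate blend:
print(max_symptomatic_mass(1000))                  # 250.0 (25% limit)
print(max_symptomatic_mass(1000, threshold=0.10))  # stricter 10% figure
```

A processor sorting fruit before juicing could use such a bound to size the symptomatic-fruit stream relative to total throughput.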
No studies were found with juice made from concentrate, but processors always blend those juices and add volatiles which can mask some off-flavors. More in-depth studies on bitterness in orange juice revealed that the two known bitter limonoids in orange juice, limonin and nomilin, act in a synergistic manner and their thresholds of perception are lower when tasted together (Dea et al., 2013). Furthermore, both limonoids have a different taste characteristic: limonin is described as "bitter" whereas nomilin is described as "metallic" by some panelists, probably contributing to the taste synergy. Unlike other tastes, the detection thresholds for bitter molecules are generally extremely low, and bitterness can have a prolonged aftertaste.

Notes to Tables 6 and 7: *The list of sensorial descriptors includes commentaries made by the panel during sensory evaluations and attributes significantly higher in asymptomatic or symptomatic orange juice, CLas (+), compared to healthy juice (control). **In comparison with healthy orange juice (control), CLas (−). ***According to the authors, HLB-bitter refers to a long-lasting metallic, astringent, and harsh taste. (I) Frozen juice thawed overnight, served with and without pulp; juice was filtered, flash pasteurized at 71 °C for 10 s, immediately cooled, and then served. (II) Oranges were hand juiced, lightly pasteurized at 71 °C for 15 s, and frozen at −20 °C. (III) Oranges were extracted using a commercial JBT 391 single head extractor with premium juice extractor settings and pasteurized under simulated commercial conditions (1.2 L/min, 8 to 10 s hold time, 83 to 90 °C). (IV) Oranges were extracted as is customary industry practice; the premium setting was selected according to the particular characteristics of the peel oil specific to Valencia; juice was passed through a pressure filtration finisher with screen size 0.51 mm and then pasteurized under simulated commercial conditions (1.2 L/min, 90 °C).
Perception of bitterness is highly variable among humans, and because there are more than 50 known bitter receptors, studies of bitterness associated with juice affected by HLB are complex. Fractionated liquid chromatography of orange juice combined with taste analysis revealed that derivative molecules of hydroxycinnamic acids had bitter and astringent taste, and were more prevalent in juice from HLB-symptomatic oranges (Dala Paula et al., 2018). Using the same technique, Glabasnia et al. (2018) identified 10 polymethoxylated flavones (PMFs) that enhanced the bitterness due to limonin and nomilin in orange juice. Tasted without limonin and nomilin in a model solution, these PMFs increased astringency but not bitterness. These studies demonstrate the complexity of interactions between molecules belonging to two chemical classes (polyphenols and limonoids) on taste perception. The contribution of volatiles, sugars, acids, amino acids, and high molecular weight carbohydrates such as pectin to flavor and taste adds to the complexity of understanding the effect of HLB on juice quality. A new technology was developed to predict HLB-affected orange juice quality by measuring pathogen CLas titer using real-time PCR (Bai et al., 2013; Zhao et al., 2018). Fruit severely infected by HLB may have one or more of the following juice quality features: low sugar, abundant bitter limonoids, and high acid/sourness, but the common feature for all juice prepared from such fruit is high CLas titer, which correlated negatively with sensory characteristics (Bai et al., 2013; Zhao et al., 2018). The U.S. patent by Zhao et al. (2018) is the only study where CLas is quantified in orange juice from many sources, in an attempt to quantify the degree of infection. The amount of CLas titer in the juice (lower CT values) correlated negatively with sweetness and orange and fruity flavor, and positively with negative attributes, such as off-flavor and "umami."
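The titer-quality relationship above is a correlation between qPCR cycle-threshold (CT) values and sensory scores. The sketch below illustrates the reported direction of that relationship with made-up numbers (the CT and sweetness values are illustrative only, not data from Bai et al. (2013) or Zhao et al. (2018)): lower CT means more CLas, so CT and sweetness move together while titer and sweetness move oppositely.

```python
from math import sqrt

def pearson(x, y):
    """Plain Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical samples: qPCR CT (lower CT = higher CLas titer) vs panel sweetness
ct        = [22.0, 25.5, 28.0, 31.5, 34.0]   # illustrative, not measured data
sweetness = [ 3.1,  4.0,  4.8,  5.9,  6.5]   # illustrative, not measured data

r = pearson(ct, sweetness)
print(round(r, 3))  # close to +1: sweetness rises as CT rises (titer falls)
```

With titer expressed directly (e.g., copies per mL) instead of CT, the same data would give a correlation close to −1, matching the negative titer-sweetness relationship described in the text.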
FINAL CONSIDERATIONS

HLB affects the sensory and physicochemical characteristics of orange juice, although the available scientific literature presents contradictory results for some of these parameters. This may be due to factors such as different harvest times of the oranges, differences in the age of the trees between the control group and HLB group, unpredictable environmental stress, as well as the level of CLas infection of the orange trees. Juice made with HLB-symptomatic fruit usually has high TA, low SSC and SSC/TA, whereas juice made with asymptomatic fruit from HLB-infected trees is generally similar to juice processed with healthy fruit. In general, HLB causes a decrease in sucrose, total sugars, and malic acid contents, while ascorbic acid does not seem to be significantly affected by the disease. On the other hand, levels of citric acid, bitter limonoids (limonin and nomilin), hydroxycinnamic acids, and flavonoids (particularly tangeretin, nobiletin, narirutin, hesperidin, diosmin, and didymin) are higher in juice from HLB-symptomatic oranges compared to juice from healthy fruit. The contents of the amino acids alanine, arginine, asparagine, histidine, isoleucine, leucine, phenylalanine, proline, threonine, and valine are altered by HLB. Additionally, symptomatic Hamlin orange juice has high synephrine and feruloyl putrescine levels. Regarding the typical HLB off-flavor in orange juice, the loss of sweetness can generally be explained by lower sucrose and total sugar levels and SSC, along with higher citric acid, and sourness is explained by higher TA and citric acid content. Furthermore, some volatiles may contribute to increased or decreased perception of sweetness or sourness (Bartoshuk et al., 2017; Plotto et al., 2017). Elevated levels of limonin and nomilin are partially responsible for the typical HLB bitterness. These two limonoids have a synergistic effect which decreases their perception and identification thresholds in orange juice.
Beyond these compounds, there is evidence indicating that other compounds, possibly hydroxycinnamic acids, are involved in the typical HLB bitterness (Dea et al., 2013; Dala Paula et al., 2018). Unquestionably, more work is needed to further identify the full list of compounds contributing to the unpleasant taste and mouthfeel in HLB-affected orange juice. Sensory studies should take into consideration that lower sugar contents reinforce the perception of bitterness. There are relatively few published papers evaluating the effects of HLB on orange juice's chemical, physicochemical and, especially, sensorial qualities, and most of the research available was performed using Valencia oranges, followed by Hamlin. While citrus fruit sold fresh can be substantially devalued by loss of color and shape deformation, juice processors can still process oranges that are HLB-symptomatic as long as they are mixed with asymptomatic fruit in a <25% ratio of HLB-SY to asymptomatic fruit (healthy or HLB-AS). Processors traditionally add back flavor extracts from orange peel oil or orange essence to standardize juice (Ringblom, 2004), and have that tool to modulate citrus flavor and sweetness. Other attempts have been made to isolate compounds, or groups of compounds, from citrus juice, peel, or molasses, which could also increase sweetness or decrease bitterness perception in HLB-affected orange juice (Kiefl et al., 2017). More research to mitigate HLB-induced off-flavors and tastes could include the use of resins, which are already used to remove bitter limonoids; a resin that removes only bitter compounds without removing flavor volatiles would need to be designed. Other options include tailoring aroma packages to mask bitterness or enhance sweetness, or adding non-volatiles extracted from oranges that mask bitterness. Finally, substances that bind bitter limonoids in the juice and can then be removed, or enzymes that glycosylate bitter limonoids, rendering them non-bitter, could be added.
These efforts are likely to be pursued until a long-term solution is found for citrus greening disease.

AUTHOR CONTRIBUTIONS

BD-P, AP, JB, JM, EB, RF, and MG contributed to the writing and review of the manuscript.

ACKNOWLEDGMENTS

We thank CNPq (Conselho Nacional de Desenvolvimento Científico e Tecnológico) and Capes (Coordenação de Aperfeiçoamento de Pessoal de Nível Superior) for providing scholarships, and Fapemig (Fundação de Amparo à Pesquisa do Estado de Minas Gerais) for financial support.
Mononuclear Lanthanide(III)-Salicylideneaniline Complexes: Synthetic, Structural, Spectroscopic, and Magnetic Studies†

The reactions of hydrated lanthanide(III) [Ln(III)] nitrates and salicylideneaniline (salanH) have provided access to two families of mononuclear complexes depending on the reaction solvent used. In MeCN, the products are [Ln(NO3)3(salanH)2(H2O)]·MeCN, and, in MeOH, the products are [Ln(NO3)3(salanH)2(MeOH)]·(salanH). The complexes within each family are proven to be isomorphous. The structures of complexes [Ln(NO3)3(salanH)2(H2O)]·MeCN (Ln = Eu, 4·MeCN_Eu; Ln = Dy, 7·MeCN_Dy; Ln = Yb, 10·MeCN_Yb) and [Ln(NO3)3(salanH)2(MeOH)]·(salanH) (Ln = Tb, 17_Tb; Ln = Dy, 18_Dy) have been solved by single-crystal X-ray crystallography. In the five complexes, the LnIII center is bound to six oxygen atoms from the three bidentate chelating nitrato groups, two oxygen atoms from the two monodentate zwitterionic salanH ligands, and one oxygen atom from the coordinated H2O or MeOH group. The salanH ligands are mutually "cis" in 4·MeCN_Eu, 7·MeCN_Dy, and 10·MeCN_Yb while they are "trans" in 17_Tb and 18_Dy. The lattice salanH molecule in 17_Tb and 18_Dy is also in its zwitterionic form with the acidic H atom being clearly located on the imine nitrogen atom. The coordination polyhedra defined by the nine oxygen donor atoms can be described as spherical tricapped trigonal prisms in 4·MeCN_Eu, 7·MeCN_Dy, and 10·MeCN_Yb and as spherical capped square antiprisms in 17_Tb and 18_Dy. Various intermolecular interactions build the crystal structures, which are completely different in the members of the two families. Solid-state IR data of the complexes are discussed in terms of their structural features. 1H NMR data for the diamagnetic Y(III) complexes provide strong evidence that the compounds decompose in DMSO by releasing the coordinated salanH ligands.
The solid complexes emit green light upon excitation at 360 nm (room temperature) or 405 nm (room temperature). The emission is ligand-based. The solid Pr(III), Nd(III), Sm(III), Er(III), and Yb(III) complexes of both families exhibit LnIII-centered emission in the near-IR region of the electromagnetic spectrum, but there is probably no efficient salanH→LnIII energy transfer responsible for this emission. Detailed magnetic studies reveal that complexes 7·MeCN_Dy, 17_Tb, and 18_Dy show field-induced slow magnetic relaxation while complex [Tb(NO3)3(salanH)2(H2O)]·MeCN (6·MeCN_Tb) does not display such properties. The values of the effective energy barrier for magnetization reversal are 13.1 cm−1 for 7·MeCN_Dy, 14.8 cm−1 for 17_Tb, and 31.0 cm−1 for 18_Dy. The enhanced/improved properties of 17_Tb and 18_Dy, compared to those of 6_Tb and 7_Dy, have been correlated with the different supramolecular structural features of the two families. The molecules [Ln(NO3)3(salanH)2(MeOH)] of complexes 17_Tb and 18_Dy are by far better isolated (allowing for better slow magnetic relaxation properties) than the molecules [Ln(NO3)3(salanH)2(H2O)] in 6·MeCN_Tb and 7·MeCN_Dy. The perspectives of the present initial studies in the Ln(III)/salanH chemistry are discussed.

Magnetochemistry 2018, 4, 45; doi:10.3390/magnetochemistry4040045
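For readers who prefer kelvin, the barrier heights quoted above can be converted from wavenumbers using the second radiation constant hc/kB ≈ 1.4388 cm·K. The short sketch below is our illustration of this standard unit conversion, not part of the paper:

```python
# Convert effective energy barriers (Ueff) from cm^-1 to kelvin.
# hc/k_B = 1.438777 cm*K (the second radiation constant, CODATA value).
HC_OVER_KB = 1.438777  # cm*K

def barrier_in_kelvin(u_eff_wavenumbers):
    """Ueff in cm^-1 -> equivalent temperature E/k_B in K."""
    return u_eff_wavenumbers * HC_OVER_KB

# Barriers reported in the abstract for the three field-induced SIMs:
for label, ueff in [("7·MeCN_Dy", 13.1), ("17_Tb", 14.8), ("18_Dy", 31.0)]:
    print(f"{label}: Ueff = {ueff} cm^-1 = {barrier_in_kelvin(ueff):.1f} K")
```

This puts the reported barriers in the 19 to 45 K range, which helps when comparing them with thermal energies at the measurement temperatures.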
Introduction

The interdisciplinary field of Molecular Magnetism [1] has undergone revolutionary changes since the early 1990s, when it was discovered that the 3-dimensional metal coordination cluster [Mn12O12(O2CMe)16(H2O)4] could behave as a single-domain tiny magnet at a very low temperature [2][3][4]. This discovery gave birth to the currently "hot" area of Single-Molecule Magnetism. Currently, trivalent lanthanides, Ln(III), have entered the area in a dynamic way [5][6][7][8][9][10][11][12][13][14] by virtue of their large magnetic moments and single-ion anisotropies. Mononuclear Ln(III) complexes are central "players" in this "game" because they represent one of the smallest, magnetically bi-stable units and can, thus, be considered as ideal candidates for high-density storage or quantum computing [5,6,10,[15][16][17][18][19][20][21][22][23][24][25][26][27][28][29][30][31][32][33]. Mononuclear Ln(III) Single-Molecule Magnets (SMMs), which are often called Single-Ion Magnets (SIMs), have been reported with a blocking temperature (~60 K) approaching that of liquid nitrogen [34][35][36]. The effective anisotropic energy barrier for magnetization reversal arises from the intrinsic electronic sub-level structure of the Stark components [5]. Such energy considerations depend on the subtle variation of several parameters. These include the Kramers/non-Kramers character of the LnIII center, its oblate/prolate 4f-electron density, the geometry of the complex, the symmetry of the coordination sphere, and the donor strength of the donor atoms [5][6][7][8]. The relaxation of the magnetization may take place through various mechanisms, which are either spin-lattice processes (Raman, Orbach, direct relaxations) or the Quantum Tunneling of the Magnetization (QTM) process. After 10 years of intense research, the synthetic design criterion has been established and tested for Dy(III) complexes. Dy(III) SIMs are the most numerous because DyIII has an odd number of 4f
electrons (a Kramers' ion), which ensures that the ground state will always be bi-stable irrespective of the symmetry of the ligand field. The requirement for a successful Dy(III) SIM is a strongly axial ligand field. The coordination numbers that give rise to strong axial crystal fields are 1 and 2. Such Ln(III) complexes cannot be isolated, but extensive synthetic and magnetic experimental work has proven that the DyIII coordination number can be higher (this is the usual situation) as long as the strong axial crystal field is supported by equatorial ligands that are weak donors.

In addition to their exciting magnetic properties, the LnIII ions in their simple salts and complexes give rise to interesting photoluminescence properties arising from forbidden 4f-4f (or 4f-5d in the case of CeIII) electronic transitions [37,38]. The low intensity of the 4f-4f transitions is a disadvantage, and use of suitable organic ligand chromophores is required. Thus, many Ln(III) complexes exhibit sharp, intense emission lines upon UV light excitation because of the efficient intramolecular energy transfer from the coordinated ligand (which behaves as an antenna) into the emissive levels of the LnIII ions, which results in a radiative emission process [37][38][39]. Of particular interest are Ln(III) complexes such as Pr(III), Nd(III), Sm(III), Dy(III), Ho(III), Er(III), and Yb(III) ones that emit light in the near-IR region of the electromagnetic spectrum [40][41][42][43]. These LnIII ions find a wide variety of applications in fluoroimmunoassays (YbIII-based luminescence), telecommunications (ErIII, ~1.54 µm), optical communication systems (TmIII, ~1.4 µm), optical amplifiers (DyIII-based luminescence), and solid-state lasers (NdIII, ~1.06 µm; HoIII, ~2.09 µm).
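The relaxation pathways named above (direct, Raman, Orbach, and QTM) are commonly combined into a single phenomenological expression for the overall relaxation rate. The textbook form below is added here for orientation only; it is not quoted in the original text, and the prefactors A, C, n, τ0, and τQTM are fit parameters of the usual model:

```latex
\tau^{-1} \;=\; \underbrace{\tau_{\mathrm{QTM}}^{-1}}_{\text{tunneling}}
\;+\; \underbrace{A\,T}_{\text{direct}}
\;+\; \underbrace{C\,T^{\,n}}_{\text{Raman}}
\;+\; \underbrace{\tau_{0}^{-1}\exp\!\left(-\frac{U_{\mathrm{eff}}}{k_{\mathrm{B}}T}\right)}_{\text{Orbach}}
```

Fitting ac-susceptibility data to the Orbach term alone, at temperatures where it dominates, is what yields effective barriers Ueff such as those reported later in this paper.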
Another modern topic of relevance to the present work in the chemistry of Ln(III) complexes is the simultaneous incorporation of both SIM and photoluminescence properties in the same complex [44]. The concurrence of emission and magnetic properties in such bifunctional (or hybrid) materials creates the possibility of tuning light emission by a magnetic field. The initial studies were focused on the isolation of emissive ferromagnets due to their applications in multimodal sensing and optoelectronics [45]. Luminescent SIMs are also of great scientific value for the in-depth investigation of the mechanisms that govern the magnetization relaxation in mononuclear Ln(III) complexes, because photoluminescence studies can, in principle, allow scientists to determine spectroscopically the Stark sublevels of some LnIII ions and compare them with those derived from the magnetic measurements. Excellent results on this topic are available in References [44,[46][47][48][49].

Our groups have a long-standing, intense interest in the chemistry, magnetism, and photoluminescence of 4f-metal complexes [50][51][52][53][54][55][56], and recently our attention has focused on mononuclear emissive Ln(III) complexes exhibiting slow magnetization relaxation [57]. Our synthetic design strategy is to use ligands that act as terminal (either monodentate or chelating) and possess chromophores that can facilitate efficient energy transfer from their triplet state to the LnIII ions' excited (i.e., emissive) levels. Potentially polydentate O- and N-donor Schiff bases [58] often lead to dinuclear or polynuclear complexes [13,[59][60][61]. Rather surprisingly, mononuclear Ln(III) complexes with simple bidentate ligands derived from the condensation of salicylaldehyde (and its derivatives) and aniline (and its derivatives) have escaped the attention of scientists. The general formula of these ligands, which have the empirical name anils, is shown in Scheme 1. When neutral (the deprotonated phenoxido oxygen
atom can bridge two LnIII ions and favor dinuclear species), these ligands are expected to behave as bidentate chelating, which leads to mononuclear complexes. Moreover, the presence of two aromatic rings per molecule makes these Schiff bases potential antennas for sensitizing LnIII-based emission. Anils have written their own history in Physical Chemistry. The prototype, N-salicylideneaniline (R1 = R2 = H in Scheme 1; salanH; the systematic name is 2-(phenyliminomethyl)phenol; alternative names already used are 2-hydroxybenzylideneaniline or phenylsalicylaldimine), is a well-known organic photochromic compound whose crystals change their color from orange to red upon UV irradiation and then back to orange upon thermal fading or VIS irradiation [62][63][64][65]. The mechanism responsible for the photochromic color change has been proposed to involve (Scheme 2): (a) conversion of the colorless enol (or enol-imine) form to the orange cis-keto (or cis-keto-amine) form with excited-state intramolecular proton transfer upon UV irradiation (keto-enol tautomerization), and (b) cis-trans isomerization to afford the red trans-keto form. Extensive studies have revealed that the photochromic behavior of salanH is related to its molecular conformation in the crystal. The dihedral angle between the two aromatic rings is a key parameter for the appearance of the photochromic properties.
We have very recently reported the synthesis and characterization of the isomorphous complexes [Ln(NO3)3(5BrsalanH)2(H2O)]·MeCN, where 5BrsalanH is N-(5-bromosalicylidene)aniline (R1 = Br at the meta carbon position with respect to the OH-containing carbon atom) [57]. The DyIII member of this family shows field-induced magnetic relaxation and emits green light upon excitation at ~340 nm, with the photoluminescence being ligand-based. We report in this paper a logical continuation of this work, which involves the reactions of hydrated lanthanide(III) nitrates with neutral salanH (Scheme 1), the prototype of the anil group of ligands. Our primary goals were: (i) the comparison of the coordination chemistry of salanH towards Ln(III) ions with that of 5BrsalanH [57]; (ii) the investigation of the magnetic properties of the Tb(III) and Dy(III) complexes and the possibility to observe slow magnetization relaxation, i.e., SIM properties; and (iii) the study of the photoluminescence properties of selected compounds with emphasis on the possibilities to observe LnIII-based emission and to record emission in the near-IR region. The ultimate goal was to isolate emissive SIMs. The salanH/salan− ligands have widely been used in transition and main group metal chemistries [66-74], but no Ln(III) complexes have been reported to date. The structural chemistry of the free salanH compound is exciting. This aromatic Schiff base forms two
photochromic polymorphs, α1 [75] and α2 [76], both featuring non-coplanar phenyl rings, and a planar, thermochromic polymorph, β [77]. A fourth planar polymorph was also reported two years ago [78]. All four polymorphs are in the enol form featuring an intramolecular O-H•••N hydrogen bond. It should be mentioned at this point that the phenomenon of thermochromism is related to that of photochromism. Upon heating, the anils that are not photochromic in the crystalline state develop an absorbance spectrum resembling the spectrum of the colored photochromic solid, with the process being the transformation of the enol form to the cis-keto form. The two phenomena are mutually exclusive.

Before closing the introduction, we would like to state that there is currently a renewed interest in the chemistry of mononuclear Ln(III)-Schiff base complexes for two reasons: (a) It has been demonstrated that two and four electrons can be stored, respectively, in intramolecular and intermolecular C-C bonds formed by Ln(III)-assisted reduction of the imino group of Schiff base ligands, which shows that the latter can provide a promising alternative to amide and cyclopentadienyl ligands and open a novel route to the reductive chemistry of lanthanides [79]; and (b) Yb(III) complexes with polydentate chelating Schiff bases are qubit candidates due to the very large splitting between the electronic ground doublet and the first excited crystal field state and their intrinsic slow paramagnetic relaxation [80], as well as candidates for novel coupled electronic qubit-nuclear qubit systems [81].

Synthetic Comments

Since we were interested in preparing mononuclear Ln(III) complexes with the neutral salanH ligand, we avoided the addition of an external base (e.g., NaOH, Et3N, R4NOH, etc.)
in the reaction systems. Depending on the reaction solvent used, two families of mononuclear complexes were isolated. The other complexes are proposed to be isomorphous with the structurally characterized compounds based on elemental analyses, IR spectra, powder X-ray patterns (pXRDs), and unit-cell determinations for selected complexes. Assuming that these mononuclear complexes are the only products from their corresponding reaction mixtures, their formation is summarized by Equation (1), where Ln stands for lanthanide or yttrium and x is 5 or 6.
Analogous 1:1 reactions in MeOH gave yellowish orange solutions, from which orange crystals subsequently precipitated in moderate yields (40%-50%). The characterization of the products revealed the formulation [Ln(NO3)3(salanH)2(MeOH)]·(salanH) (Ln = Pr, 12_Pr; Nd, 13_Nd; Sm, 14_Sm; Eu, 15_Eu; Gd, 16_Gd; Tb, 17_Tb; Dy, 18_Dy; Ho, 19_Ho; Er, 20_Er; Yb, 21_Yb) and [Y(NO3)3(salanH)2(MeOH)]·(salanH) (22_Y). The crystals of 17_Tb and 18_Dy proved to be of X-ray quality and their structures were solved by single-crystal X-ray crystallography. Elemental analyses, IR spectra, pXRDs, and unit cell determinations for selected samples make us strongly believe that the other complexes are isomorphous with the structurally characterized compounds. Assuming that these mononuclear complexes are the only products from their corresponding reaction systems (this is not entirely correct, vide infra), their formation can be represented by Equation (2), where Ln stands for lanthanide or yttrium and x is 5 or 6. Further points of synthetic interest are reported in the "Supplementary Materials" section.

The samples used for the measurements are pure, as proven by their pXRDs (Figures 1 and S1). In the case of 12_Pr-22_Y, the experimental patterns of all compounds are identical with the simulated ones for the structurally characterized compounds 17_Tb and 18_Dy. In the case of 1·MeCN_Pr-11·MeCN_Y, the slight differences between the experimental and theoretical (as derived from the cifs) patterns are due to the presence of one solvate MeCN molecule per molecule of complex, which is completely or partially lost in the dried samples used for the pXRD measurements. We also calculated the pattern of complex 4·MeCN_Eu after removing the solvent atoms from the cif. Its similarity with the experimental pattern has slightly improved (Figure S2).
A full spectroscopic discussion (IR for all complexes, 1H NMR for the YIII complexes, and diffuse reflectance for selected complexes) is presented in the "Supplementary Materials" section. Representative spectra are shown in Figures S2-S7.

Description of Structures

The structures of complexes 4•MeCN_Eu, 7•MeCN_Dy, 10•MeCN_Yb, 17_Tb, and 18_Dy have been fully solved by single-crystal X-ray crystallography. Aspects of the molecular and crystal structures of the complexes are shown in Figures 2-10 and S8-S11. Numerical data are listed in Tables 1-4 and S1-S4. Complexes 4•MeCN_Eu, 7•MeCN_Dy, and 10•MeCN_Yb are isomorphous; thus, only the molecular and crystal structure of 7•MeCN_Dy will be described in detail. Complexes 17_Tb and 18_Dy are also isomorphous, and only the molecular and crystal structure of compound 18_Dy will be discussed in detail.

The crystal structure of 7•MeCN_Dy consists of complex molecules [Dy(NO3)3(salanH)2(H2O)] and lattice MeCN molecules; their ratio in the crystal is 1:1. The coordination sphere of the DyIII center is composed of six oxygen atoms from three bidentate chelating nitrato groups, two oxygen atoms that belong to the two organic ligands, and one oxygen atom from the aquo ligand (Figure 2); thus, the metal ion is 9-coordinate. The salanH molecules behave as monodentate O-donor ligands. The acidic hydrogen atom of each neutral ligand is clearly located on the nitrogen atom of the Schiff-base linkage and the ligands are, thus, in their zwitterionic form (Scheme 3). The protonation of the nitrogen atom blocks the second possible coordination site of salanH. It should be mentioned at this point that the acidic hydrogen atom is located on the oxygen atom in the crystal structures of the various polymorphs of the free ligand [75-78]. There are two intramolecular H bonds of moderate strength in the complex molecule, with the protonated nitrogen atoms N1 and N2 being the donors and the negatively charged coordinated oxygen atoms O1 and O2 being the acceptors, respectively (Figure S8).

There are no Archimedean, Platonic, or Catalan polyhedra with nine vertices, and no prisms or antiprisms can be constructed with this number of vertices. Thus, the only shapes that may be considered are those listed in Table S1. Using the program SHAPE [82], the best fit obtained for the LnIII centers in 4•MeCN_Eu, 7•MeCN_Dy, and 10•MeCN_Yb is the spherical tricapped trigonal prism (Figure 3, Table S1), with the nitrato atoms O3, O6, and O11 being the spherically distributed capping atoms. Since the coordinated nitrato groups impose small bite angles
(~51°), the polyhedra are distorted.

The two coordinated salanH ligands in the three complexes deviate from planarity. The ligand bearing N2 is more planar than the ligand bearing N1 (Table 2, Scheme 4). For example, in complex 7•MeCN_Dy, the angle φ between the aromatic rings of the ligand possessing N2 is 11.5°, while the angle between the rings of the ligand possessing N1 is 28.9°. The crystal structure of 18_Dy contains complex molecules
[Dy(NO3)3(salanH)2(MeOH)] (Figure 4) and lattice salanH molecules (Figure 5) in a 1:1 ratio. The DyIII ion is 9-coordinate, being bound to six oxygen atoms from the three bidentate chelating nitrato groups, to the oxygen atom of the terminally coordinated MeOH molecule, and to two oxygen atoms that belong to the zwitterionic monodentate salanH ligands (Scheme 3). There are three classical intramolecular H bonds of medium strength within the complex. The donors are the protonated nitrogen atoms of the salanH ligands (N1, N2) and the acceptors are the coordinated phenoxido atoms O1 and O2 and the coordinated nitrato oxygen atom O6. Their dimensions are: N1-O1 2.667 Å

The coordination polyhedra of the LnIII centers in 17_Tb and 18_Dy can be best described as spherical capped square antiprisms, with the nitrato oxygen O9 being the capping atom (Figure 6, Table S2). By contrast with 4•MeCN_Eu, 7•MeCN_Dy, and 10•MeCN_Yb, the salanH ligands in 17_Tb and 18_Dy are in mutually "anti" or "trans" positions, with the O1-Ln-O2 angle [151.1(1)° in 17_Tb and 150.9(1)° in 18_Dy] being the largest donor atom-LnIII-donor atom bond angle in the coordination sphere. In this respect, the conformation of 17_Tb and 18_Dy resembles that of the complexes [Ln(NO3)3(5BrsalanH)2(H2O)]•MeCN (Ln = Pr, Sm, Gd, Dy, Er) [57]. H-bonding interactions form the 3D architecture of the complexes 4•MeCN_Eu, 7•MeCN_Dy, and 10•MeCN_Yb (Figure S9); C28 is the methyl carbon atom of the lattice MeCN molecule. The parameters of the H bonds for the three complexes are listed in Table S3.

The supramolecular features of the isomorphous complexes 17_Tb and 18_Dy are also interesting, and we describe here those of the Dy(III) compound 18_Dy. The H bonds within the [Dy(NO3)3(salanH)2(MeOH)] complex molecule and the salanH lattice molecule have already been mentioned (see Figure S10). The lattice salanH molecules are H-bonded to the DyIII-containing complex molecules through the C36-H(C36)•••O11 and O12-H(O12)•••O13 (−x, −y + 2, −z + 2) H bonds (Figure 8). In addition to H bonds, π-π stacking interactions also contribute to the supramolecular structure. There are two types of π-π interactions for each coordinated salanH molecule with neighboring salanH ligands, indicated with different colors in Figure 8 (dashed dark and light green). The salanH ligands whose interaction is indicated by the dashed dark green line form an angle of 5.3(1)° between their mean planes (the same value is found for 17_Tb) and their centroid•••centroid distance is 3.28 Å (3.27 Å for 17_Tb). The salanH ligands whose interaction is indicated by the dashed light green line form an angle of 5.3(1)° between their mean planes for both 17_Tb and 18_Dy, and their centroid•••centroid distance is 3.41 Å for 18_Dy and 3.36 Å for 17_Tb. The complex molecules interact further via π-π stacking interactions through centrosymmetrically-related aniline
rings (rings containing the C21, C22, C23, C24, C25, and C26 atoms), with the centroid•••centroid distance being 3.36 Å for both compounds. All these different types of interactions result in a 3D architecture, which gives the characteristics of a hybrid molecular material to the complexes. In more detail, through the π-π interactions indicated by the light and dark green lines in Figure 8, a brick wall-type arrangement of the DyIII-containing complex molecules is formed, resulting in layers parallel to the (001) plane (Figure 9). The DyIII•••DyIII distances within each layer are in the range 9.596(1)-10.298(1) Å [9.598(1)-10.312(1) Å for the Tb(III) complex]. The layers interact from one side with a centrosymmetrically-related layer of brick wall-type through π-π interactions involving the C21-containing aniline rings (Figure S11), and from the other side with a layer of lattice salanH molecules (Figure 10) parallel to the (001) plane through the C36-H(C36)•••O11 and O12-H(O12)•••O13 (−x, −y + 2, −z + 2) H bonds (Figure 8). The layers are stacked along the c axis in such a way that two brick wall-type layers of the complex molecules are separated by a single layer of lattice salanH molecules (Figure S11), which gives characteristics of a coordination complex-organic molecule hybrid material to the structures of 17_Tb and 18_Dy. In each layer of lattice salanH molecules, two types of overlap are observed, with both relating the molecules through a center of symmetry; their mean planes are at distances of 3.23(2) and 3.42(2) Å [3.27(4) and 3.41(4) Å for 17_Tb] for the interactions indicated with dashed light violet-orange and mauve lines, respectively (Figure 10). The parameters of the H bonds for 17_Tb and 18_Dy are listed in Table S4.

The photoluminescence characteristics of the free ligand salanH and the Eu(III) (4•MeCN_Eu), Tb(III) (6•MeCN_Tb, 17_Tb), and Dy(III) (7•MeCN_Dy, 18_Dy) complexes are almost identical, which suggests no LnIII-based emission. The same excitation and emission profiles are seen for solid 15_Eu, except for an emission peak at 612 nm assigned [54,55] to the 5D0→7F2 transition, which indicates a partially sensitized red EuIII emission. The photoluminescence characteristics of the Gd(III) complexes indicate that the broad green emission at ~525 to 540 nm in the spectra of all eight complexes is salanH-centered and, thus, the coordinated ligand cannot act as a
"sensitizer" for 4f-metal luminescence. It seems that this LnIII-independent emission is due to an efficient LnIII-to-salanH back energy transfer [50,52,83]. A reason for this behavior might be the fact that the main absorption bands of coordinated salanH (at ~500 nm) are not close to the region where some LnIII ions absorb (<400 nm) [57]. Under the same excitation conditions, the room-temperature, solid-state emission spectra of the complexes are very similar to those recorded in acetone solutions (also at ~20 °C). Solid-state, room-temperature emission spectra upon CW laser excitation at 405 nm in the visible (Figure S18) and near-IR (Figure 12) regions have also been recorded for the Pr(III), Nd(III), Sm(III), Er(III), and Yb(III) complexes.

The emission spectra in the visible region upon CW laser excitation at 405 nm are, in general, very similar to those of the Eu(III), Gd(III), Tb(III), and Dy(III) complexes mentioned above; thus, they exhibit green, salanH-based luminescence. The similar f-f emission spectra of the Pr(III) complexes 1•MeCN_Pr and 12_Pr exhibit, in addition to the broad band at ~540 nm, three weak features: a shoulder at ~580 nm assigned [41] to the 3P0→3H5 transition, a rather broad peak at ~605 nm assigned to the 1D2→3H4 transition, and another shoulder at ~615 nm assigned to the 3P0→3F2 transition. It thus seems that a partial energy transfer from the coordinated organic ligands to PrIII is operative. The peak at ~605 nm is the most intense of the three due to the hypersensitivity of the 1D2→3H4 transition [41]. As a rule, Pr(III) complexes exhibit complicated emission spectra because the Pr3+ ion can display emission bands from three levels (3P0, 1D2, and 1G4) after excitation of the absorption of the organic ligands [84]. The Nd(III) complexes 2•MeCN_Nd and 13_Nd exhibit, in addition to the broad salanH-based band at ~540 nm, a weak emission feature with fine structure at ~590 nm, which is difficult to assign.
The T1→S0 transition from the excited triplet state to the ground state of the organic ligand is generally not observed at room temperature due to non-radiative losses. The non-radiative losses can be divided into intramolecular losses and external losses to the environment, the latter being mainly collisions with quenching sites (e.g., oxygen or water). At room temperature, even in the absence of oxygen and water, the non-radiative losses typically outcompete the radiative transition and, thus, detection of the triplet state at room temperature is not possible [85,86].

The assignments of the near-IR emission bands for the Pr(III), Nd(III), Sm(III), Er(III), and Yb(III) complexes are located at the top of the corresponding peaks in Figure 12. For a given LnIII ion, the emission spectra are very similar when displaying the characteristic lanthanide(III) emission peaks.

The emission spectra of the Pr(III) complexes 1•MeCN_Pr and 12_Pr show three bands at ~865, ~1020, and ~1465 nm, which can be assigned as originating from the 1D2→3F2, 1D2→3F4, and 1D2→1G4 transitions, respectively [41,84]. Some crystal-field fine structure can be observed, which is an indication that the PrIII center occupies well-defined crystallographic sites in the complex [84]. For the Nd(III) complexes 2•MeCN_Nd and 13_Nd, the spectra consist of three emission peaks that are assigned to the 4F3/2→4I9/2 (~885 nm), 4F3/2→4I11/2 (~1050 nm), and 4F3/2→4I13/2 (~1330 nm) transitions [40,41,43,84]. Among the three peaks, the 4F3/2→4I11/2 transition has the highest intensity and potential for laser systems, while the transition at ~1330 nm offers the opportunity to develop new materials suitable for an optical amplifier operating at 1.3 µm, which is one of the telecommunication windows [84]. For the near-IR emission spectra of the Sm(III) complexes 3•MeCN_Sm and 14_Sm, all the peaks come from the 4G5/2 excited state [84]. In the case of the Er(III) complexes 9•MeCN_Er and 20_Er,
the emission spectra display a peak at ~1530 nm, which covers a large spectral range from 1480 to 1620 nm. This is attributed to the typical 4I13/2→4I15/2 transition of ErIII [40,41,43,84]. There are a number of excited states of Er(III) from which emission is possible; the fact that emission is observed only from the 4I13/2 state suggests that an efficient non-radiative decay pathway exists from those states to the 4I13/2 state. The Er(III) complexes are promising for application in amplification because the transition at around 1530 nm is in the right position for the third telecommunication window [41,84]. To make a wide-gain bandwidth for optical amplification possible, a broad emission band is desirable; in our complexes, this requirement is fulfilled since the full width at half-maximum is large (>60 nm). The Yb3+ ion is an unusual case in Ln(III) emission because it has only one excited state, 2F5/2, located 10,200 cm−1 above the ground 2F7/2 state [84]. The emission spectra of the Yb(III) complexes 10•MeCN_Yb and 21_Yb consist of the characteristic peak at ~990 nm, which is assigned to the 2F5/2→2F7/2 transition [41,84,87]. For 21_Yb, the typical broader vibronic components at longer wavelengths are observed [84]. The near-IR emission of Yb(III) is very important because biological tissues and fluids (e.g., blood) are relatively transparent in this region (around 1000 nm), and the development of Yb(III) complexes for various analytical and chemosensor applications is a "hot" research topic in medicinal inorganic chemistry.
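As a quick consistency check on the numbers quoted above, the standard wavelength/wavenumber relation λ(nm) = 10^7 / E(cm−1) can be sketched; this is a simple illustrative script, not part of the original analysis:

```python
# Interconvert emission energies (cm^-1) and wavelengths (nm):
# lambda (nm) = 1e7 / E (cm^-1), and vice versa.

def wavenumber_to_nm(energy_cm1: float) -> float:
    """Wavelength (nm) corresponding to an energy in cm^-1."""
    return 1e7 / energy_cm1

def nm_to_wavenumber(wavelength_nm: float) -> float:
    """Energy (cm^-1) corresponding to a wavelength in nm."""
    return 1e7 / wavelength_nm

if __name__ == "__main__":
    # The Yb(III) 2F5/2 -> 2F7/2 gap of 10,200 cm^-1 corresponds to ~980 nm,
    # consistent with the observed emission peak at ~990 nm.
    print(round(wavenumber_to_nm(10200), 1))
    # The ~1530 nm Er(III) emission corresponds to ~6536 cm^-1.
    print(round(nm_to_wavenumber(1530), 0))
```

The ~10 nm offset between the calculated 980 nm and the observed ~990 nm peak is of the order of typical crystal-field splittings of the Yb(III) levels.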
The detailed elucidation of the mechanism of the near-IR emission properties of the Pr(III), Nd(III), Sm(III), Er(III), and Yb(III) complexes described in this work is beyond the scope of the present paper. With limited data at hand, the question of whether there is effective energy transfer from the triplet state of the salanH ligand to the near-IR emissive levels of the LnIII ions is difficult to answer. We believe that there is no efficient energy transfer from the ligand for two reasons: (a) the strong excitation intensity (from the CW laser) is able to activate the LnIII emission levels, and (b) the presence of the ligand-based green emission of the complexes in the spectra (Figure S18) at room temperature is indicative of an incomplete or zero energy transfer to the excited levels of the LnIII ions [42].

Magnetic Studies of the Tb(III) and Dy(III) Complexes

The direct current (dc) magnetic susceptibility data (χM) on well-dried samples of 6_Tb, 7_Dy, 17_Tb, and 18_Dy, collected in the temperature (T) range 2.0 to 300 K under an applied field of 0.03 T, are typical of mononuclear Tb(III) and Dy(III) complexes and will not be discussed in detail. For example, the 298 K χMT values for 7_Dy and 18_Dy are ~14.3 cm3 K mol−1, in very good agreement with the value expected for one DyIII center (6H15/2 free ion: S = 5/2, L = 5, J = 15/2, gJ = 4/3). Upon cooling, the values of the χMT product decrease continuously and slowly, reaching ~70% of their room-temperature values at 2.0 K, with the decrease being primarily due to the depopulation of the mJ sublevels of the ground J state [5]. The magnetization plots show a rapid increase at low fields and almost saturated values above 2 T (Figures S19 and S20). The magnetization values at the maximum applied field of 5 T (~6.5 NµB) are significantly lower than the value expected for one isolated DyIII center (10 NµB), which can be attributed to the crystal-field effects that lead to a substantial magnetic
anisotropy [5]. The observed experimental value is typical of mononuclear Dy(III) complexes [57].

Alternating current (ac) magnetic susceptibility experiments were carried out using a 4.0 G ac field oscillating in the frequency range of 10 to 1488 Hz in order to explore the magnetization dynamics of the four complexes. Under zero dc field, no out-of-phase (imaginary) components of the ac susceptibility, χ″M, were detected for frequencies between 10 and 1488 Hz, even at the lowest investigated temperature (2 K). Under an external dc field of 0.1 T (applied in order to suppress the QTM and to enhance the slow magnetic relaxation properties), well-defined, temperature- and frequency-dependent χ″M maxima were observed for the Dy(III) complexes 7_Dy and 18_Dy (Figure 13a,b), indicating field-induced slow magnetic relaxation. Complex 17_Tb shows tails of peaks at low frequencies under an external static field of 0.15 T, and a well-defined maximum was clearly visible only for the highest frequency examined (1488 Hz) (Figure 13c). The optimum dc fields were decided by examining the χ″M vs. T response under different dc fields between 500 and 2000 G at two different ac frequencies (10 and 1000 Hz). A field-induced dependence of the susceptibility on temperature and frequency was observed, and the optimum dc field giving the clearest signal was selected for the ac measurements (Figure S21). For the Tb(III) complex 6_Tb, no field-induced properties were observed.

While complex 7_Dy presents a single set of maxima in the χ″M vs. T graph in the 3.5-2.0 K range (2.0 K is the lowest temperature limit of our instrument), complex 18_Dy presents its set of maxima at a higher temperature range (8-4 K), indicating better properties for the latter. Such a difference is also clear for the two Tb(III) complexes 6_Tb and 17_Tb: complex 6_Tb has no magnetic response at all, while 17_Tb shows a χ″M dependence upon T.
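The free-ion expectations quoted in the dc discussion above (χMT ≈ 14.2 cm3 K mol−1 and a saturation magnetization of gJ·J = 10 NµB for DyIII) follow from the standard Landé formulas; a minimal numerical sketch:

```python
# Free-ion check for Dy(III) (6H15/2 ground term: S = 5/2, L = 5, J = 15/2),
# reproducing the expected chi_M*T and saturation magnetization values.

C = 0.125049  # N_A * mu_B^2 / (3 k_B) in cm^3 K mol^-1

def lande_g(S: float, L: float, J: float) -> float:
    """Lande g-factor for a free-ion ground term."""
    return 1.5 + (S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))

def chiT_free_ion(S: float, L: float, J: float) -> float:
    """Curie-law chi_M*T (cm^3 K mol^-1) for a free Ln(III) ion."""
    g = lande_g(S, L, J)
    return C * g ** 2 * J * (J + 1)

if __name__ == "__main__":
    S, L, J = 2.5, 5.0, 7.5  # Dy(III)
    g = lande_g(S, L, J)
    print(f"g_J = {g:.4f}")                                      # 4/3
    print(f"chi_M*T = {chiT_free_ion(S, L, J):.2f} cm^3 K mol^-1")
    print(f"M_sat = {g * J:.1f} N mu_B")                         # 10.0
```

The computed free-ion χMT is ~14.17 cm3 K mol−1, matching the ~14.3 cm3 K mol−1 measured at 298 K, while the full gJ·J = 10 NµB saturation value is not reached experimentally because of the crystal-field-induced anisotropy discussed in the text.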
For a given LnIII center, the different magnetic response between the two complexes of each pair (7_Dy vs. 18_Dy and 6_Tb vs. 17_Tb) is a consequence of the different crystal fields around the metal ion in the two families [5,6,10,15-33]. The different crystal fields arise mainly from (i) the different coordination geometries around the LnIII centers (tricapped trigonal prismatic in 6_Tb and 7_Dy vs. capped square antiprismatic in 17_Tb and 18_Dy), (ii) the different coordinated solvent molecules (H2O in 6_Tb and 7_Dy vs. MeOH in 17_Tb and 18_Dy), and (iii) the "cis" disposition of the salanH ligands in 6_Tb and 7_Dy as opposed to the "trans" disposition of the ligands in 17_Tb and 18_Dy (vide supra). Upon a more detailed examination, the better field-induced magnetic relaxation properties of 18_Dy compared with those of 7_Dy (a higher temperature range for the χ″M maxima in the former than in the latter) and of 17_Tb compared with those of 6_Tb (appearance of signals in the former and no magnetic response in the latter) can be correlated with the supramolecular structures of the two families (vide supra). The presence of the lattice (i.e., uncoordinated) salanH molecules in 17_Tb and 18_Dy "dilutes," in a sense, these complexes.

The magnetic relaxation parameters of the Dy(III) complexes have been calculated using the Arrhenius Equation (3):

ln(τ) = ln(τ0) + Ueff/(kBT) (3)

where τ0 is the pre-exponential factor, kB is the Boltzmann constant, and Ueff is the effective thermal energy barrier for magnetization reversal, known as Orbach relaxation [88]. Best fits of the linear parts (Figure 14, left) give the parameters Ueff = 13.1 cm−1, τ0 = 4.5 × 10−7 s for 7_Dy, and Ueff = 31.0 cm−1, τ0 = 2.5 × 10−7 s for 18_Dy. Due to the lack of maxima in the χ″M vs. T graph for 17_Tb, a Debye relaxation has been assumed and the SIM parameters have been calculated using Equation (4) [89].
Considering a single relaxation process, the least-squares fits of the experimental data (Figure 14, center) give average values of Ueff = 14.8 cm−1, τ0 = 3.5 × 10−6 s. Due to the limited number of points in the Arrhenius plot for 7_Dy, a Debye relaxation has also been assumed for this complex and the SIM parameters have likewise been calculated using Equation (4) [89]. The parameters are Ueff = 15.9 cm−1, τ0 = 5.3 × 10−6 s (Figure 14, right), in rather satisfactory agreement with the values obtained from the Arrhenius plot, which involves only three points [86].

For complex 18_Dy, an increase of the χ″M values at low temperatures is clearly observable (Figure 13b, left), which suggests another set of maxima below 2 K. This second set might also be present in 7_Dy and 17_Tb, but the characteristics of our instrumental setup do not allow us to detect such behavior below 2 K. The second set of maxima might indicate a pathway for magnetization relaxation different from the calculated Orbach process. This second set of maxima seems to be temperature-independent, as can be seen in the χ″M vs. T graph for 18_Dy (Figure 13b, left). Generally, such a temperature-independent process is attributed to relaxation by fast QTM, but this seems to have been suppressed by the application of the external magnetic field.

The fit of the χ″M vs. χ′M data (Cole-Cole or Argand plot) for complexes 7_Dy and 17_Tb, in the temperature range for which an Orbach process is assumed, was performed using the CCfit software. The fit gives α values below 0.3 in both cases, which indicates a narrow distribution of relaxation times (Figure S22, upper left and bottom left). A fit with reliable α values was not possible for 18_Dy due to the simultaneous existence of two pathways for magnetization relaxation (Figure S22, upper right).
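The Arrhenius analysis of Equation (3) amounts to a straight-line fit of ln τ against 1/T, with the slope giving Ueff and the intercept giving τ0. A minimal illustrative sketch (the relaxation times below are synthetic, generated from the 18_Dy parameters, not the measured data):

```python
# Orbach (Arrhenius) analysis: ln(tau) = ln(tau0) + Ueff / (kB * T).
# A linear least-squares fit of ln(tau) vs 1/T yields tau0 and Ueff.
import math

KB_CM1 = 0.695039  # Boltzmann constant in cm^-1 K^-1

def arrhenius_fit(T, tau):
    """Least-squares line through ln(tau) vs 1/T; returns (Ueff_cm1, tau0_s)."""
    x = [1.0 / t for t in T]
    y = [math.log(t) for t in tau]
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    slope = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
            sum((xi - xbar) ** 2 for xi in x)
    intercept = ybar - slope * xbar
    return slope * KB_CM1, math.exp(intercept)

if __name__ == "__main__":
    # Synthetic data generated with Ueff = 31.0 cm^-1, tau0 = 2.5e-7 s
    # (the values reported for 18_Dy), to show that the fit recovers them.
    Ueff_in, tau0_in = 31.0, 2.5e-7
    T = [4.0, 5.0, 6.0, 7.0, 8.0]
    tau = [tau0_in * math.exp(Ueff_in / (KB_CM1 * t)) for t in T]
    Ueff, tau0 = arrhenius_fit(T, tau)
    print(f"Ueff = {Ueff:.1f} cm^-1, tau0 = {tau0:.2e} s")
```

With real data, only the high-temperature linear part of the plot is fitted, as done in the text, since low-temperature points are contaminated by other relaxation pathways (e.g., QTM or Raman).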
Because, in the absence of high symmetry (as in 7•MeCN_Dy and 18_Dy), the Dy III ground state is a doublet along the anisotropy axis with an angular momentum quantum number mJ = ±15/2, we have determined the orientation of the ground-state magnetic anisotropy axes for the Dy III centers in the two Dy(III) complexes by employing a method reported in 2013 [90]. The method is based on an electrostatic point-charge approximation and requires only knowledge of the single-crystal X-ray structure of the complexes (and not the fitting of experimental magnetic data). In 7•MeCN_Dy and 18_Dy, the charge distribution consists of a plane containing the two phenoxido oxygen atoms and one bidentate nitrato group (7•MeCN_Dy), or one phenoxido oxygen atom and two bidentate nitrato groups (18_Dy). Following this method and using the FORTRAN program MAGELLAN [91], it is found that the ground-state magnetic anisotropy axis for the Dy III center is directed towards one of the phenoxido oxygen atoms in 7•MeCN_Dy (Figure 15, right) and towards two nitrato groups in 18_Dy (Figure 15, left). In our cases, the direction of the easy axes does not provide valuable information or predictions, since this is only a simple calculation; the distribution of the charged oxygen atoms (and the derived field) is spherical, and oblate-prolate [16] discussions are not possible.
Magnetic studies were performed with a magnetometer (Quantum Design, San Diego, CA, USA) operating at 0.03 T in the 300 to 2.0 K range for the magnetic susceptibility measurements and at 2.0 K in the 0 to 5 T range for the magnetization measurements. Diamagnetic corrections were applied to the observed susceptibilities using Pascal's constants [92].

Syntheses of [Pr...]: these complexes were prepared in an identical manner to 7•MeCN_Dy by simply replacing Dy(NO3)3•5H2O with the equivalent amount of the appropriate Ln(NO3)3•xH2O starting material (x = 5 or 6). Typical yields were in the 50% to 60% range (based on salanH). The complexes analyzed satisfactorily as lattice MeCN-free. In some samples, a small percentage of lattice MeCN (typically 0.1-0.4 moles per mole of the complex) could also fit well with the experimental microanalytical data. Anal.

Syntheses of [Pr...]: these complexes were prepared in an identical manner to 18_Dy by simply replacing Dy(NO3)...

(ii) The Tb(III) compound 17_Tb and the Dy(III) complexes 7_Dy and 18_Dy show field-induced slow magnetic relaxation. The enhanced/improved properties of 17_Tb and 18_Dy compared with those of 6_Tb and 7_Dy have been nicely correlated with the different supramolecular characteristics of the two families; and (iii) the complexes exhibit ligand-based green photoluminescence at room temperature, while near-IR, Ln(III)-based emission has been recorded for the Pr(III), Nd(III), Sm(III), Er(III), and Yb(III) members of the two families. Complexes 7_Dy, 17_Tb, and 18_Dy can be considered as compounds that exhibit both photoluminescence (albeit not derived from the 4f-metal ions) and field-induced slow magnetic relaxation, combining the two properties within the same molecule. We are still far from our ultimate goal of correlating optical and magnetic properties, as has been performed in an elegant way by other groups [44,46-49].
Our future efforts in the present general project are directed towards: (a) synthetic, structural, optical, and magnetic studies of the complex [Dy(NO3)3(salanH)2(MeOH)] (mentioned in the "Supplementary Materials" section) and its analogues with other 4f-metal ions; and (b) the preparation of mononuclear Ln(III) complexes with other anil-type ligands (R1, R2 = various non-donor groups in Scheme 1), with the goals of seeing whether there are other structural types in this chemistry (for example, whether a bidentate chelating coordination mode of the ligands can be realized), isolating complexes with zero-field SIM properties (for example, using bulkier anils to lower the coordination number of the Ln III center), and achieving Ln III-based emission with the organic ligand acting as an "antenna" (for example, enhancing the aromatic content of the ligands in order to realize efficient organic ligand-to-Ln III energy transfer). All these efforts, which are already well advanced, are in progress, and results will be reported in due course. Lastly, we have been working intensely to realize our long-term goal of achieving Dy III-based emission in a SIM. This will enable us to correlate luminescence and magnetism, since the highest f-f transition in the emission spectra can provide a direct picture of the splitting of the ground J multiplet.

Tables S1 and S2: Continuous Shape Measures (CShM) values for the potential coordination polyhedra of the Ln III center in the structurally characterized complexes. Tables S3 and S4: H-bonding interactions in the crystal structures of the structurally characterized complexes. Table S5: Crystallographic data for complexes 4•MeCN_Eu, 7•MeCN_Dy, 10•MeCN_Yb, 17_Tb, and 18_Dy.

Author Contributions: I.M.-M. and D.M. contributed toward the syntheses, crystallization, and conventional characterization of the complexes. Both also contributed to the interpretation of the results. J.M. and A.E.
performed the magnetic measurements, interpreted the results, and calculated the magnetic anisotropy axes of the Dy III centers in complexes 7•MeCN_Dy and 18_Dy. The latter also wrote the relevant part of the paper. L.C. and S.C. carried out the solid-state, room-temperature visible and near-IR emission studies upon CW laser excitation at 405 nm and interpreted the results. The latter also wrote the relevant part of the paper. V.B. performed the solid-state, room-temperature visible emission studies (including the recording of the excitation spectra), interpreted the results, and wrote the relevant part of the paper. C.P.R. and V.P. collected single-crystal X-ray crystallographic data, solved the structures, and performed the refinement of the structures. The latter also recorded pXRD patterns, studied in detail the supramolecular features of the crystal structures, and wrote the relevant part of the paper. S.P.P. coordinated the research, contributed to the interpretation of the results, and wrote the paper based on the reports of his collaborators. All the authors exchanged opinions concerning the interpretation of the results and commented on the manuscript at all stages.

Scheme 2. Proposed mechanism for the photochromic behavior of salanH. Dashed lines indicate H bonds.

Magnetochemistry 2018, 4, 29

Figure 1. Experimental X-ray diffraction patterns of freshly prepared and well-dried powders of complexes 17_Tb and 18_Dy. The simulated pattern of the structurally characterized Dy(III) complex 18_Dy (labelled as 18_Dy Theoretical) is also shown.
Figure 3. The spherical tricapped trigonal prismatic coordination polyhedron of the Dy III center in complex 7•MeCN_Dy. The smaller cream spheres define the vertices of the ideal polyhedron.

Scheme 3. The coordination mode of the zwitterionic ligand salanH in the mononuclear complexes of the present work. The coordination bond is drawn in bold.

The Dy-O bond lengths fall in the range of 2.300(1)-2.572(1) Å and are typical for 9-coordinate Dy(III) complexes [51,55,57]. The bond lengths of Dy III to the deprotonated phenoxido oxygens [2.300(1) and 2.310(1) Å] are shorter than the distances to the nitrato and aquo oxygens. Each Ln-O bond distance in the three isomorphous complexes 4•MeCN_Eu, 7•MeCN_Dy, and 10•MeCN_Yb follows the order Yb III < Dy III < Eu III, which is a consequence of the well-known lanthanide contraction. The C1-O1/C14-O2 [1.302(5), 1.311(5) Å], C6-C7/C19-C20 [1.415(6), 1.414(6) Å], and C7-N1/C20-N2 [1.299(6), 1.304(6) Å] bond lengths in the salanH ligands of 7•MeCN_Dy (as well as in 4•MeCN_Eu and 10•MeCN_Yb) indicate their enolate-protonated imine character (Scheme 3). However, the fact that these bond lengths are shorter, shorter, and longer, respectively, than those of the various polymorphic forms of salanH (the corresponding distances in the free ligand are ~1.35, 1.45-1.53, and ~1.27 Å) might suggest a small degree of ketone-amine character in the bonding scheme of the coordinated organic ligands. It is proposed that 7•MeCN_Dy has its ligands in an intermediate structure between the phenolate and quinoid tautomers, with a higher percentage of the former, and does not, therefore, exist in either of the two limiting structural forms. This proposal is further supported by the fact that the C1-C6 and C14-C19 distances [1.441(5) and 1.433(5) Å in 7•MeCN_Dy] are slightly longer than the other carbon-carbon distances in the corresponding benzylidene rings of the coordinated salanH ligands [1.363(7)-1.410(7) and 1.361(7)-1.410(6) Å, respectively].

There are no Archimedean, Platonic, or Catalan polyhedra with nine vertices, and no prisms or antiprisms can be constructed with this number of vertices. Thus, the only shapes that may be considered are those listed in Table S1. Using the program SHAPE [82], the best fit obtained for the Ln III centers in 4•MeCN_Eu, 7•MeCN_Dy, and 10•MeCN_Yb is the spherical tricapped trigonal prism (Figure 3, Table S1), with the nitrato atoms O3, O6, and O11 being the spherically distributed capping atoms. Since the coordinated nitrato groups impose small bite angles (~51°), the polyhedra are distorted.

a The same atom numbering scheme is applied for the two complexes. b This atom belongs to the coordinated MeOH molecule. c These atoms belong to the lattice (i.e., non-coordinated) salanH molecule, see Figure 5. d Ln = Tb. e Ln = Dy.

Figure 4. Partially labelled plot of the structure of the molecule [Dy(NO3)3(salanH)2(MeOH)] that is present in the crystal structure of complex 18_Dy.

Figure 5. Fully labeled plot of the structure of the lattice free salanH molecule that is present in the crystal structure of complex 18_Dy.
Figure 6. The spherical capped square anti-prismatic coordination polyhedron of the Dy III center in complex 18_Dy. The smaller cream spheres define the vertices of the ideal polyhedron.

One of the two salanH ligands in 17_Tb and 18_Dy is almost planar; the angle between the two aromatic rings is 1.1° in 17_Tb and 1.0° in 18_Dy. The second salanH ligand is somewhat less planar, the corresponding angle being 9.6° in 17_Tb and 9.8° in 18_Dy. The lattice salanH molecule is nearly planar, with an angle of 2.4° for both complexes.

Figure 9. A brick wall-type layer of the [Dy(NO3)3(salanH)2(MeOH)] molecules parallel to the (001) plane in the crystal structure of 18_Dy. The dashed dark green and light green lines indicate the two types of intermolecular π-π interactions between coordinated salanH ligands.
Figure 10. A layer of lattice salanH molecules parallel to the (001) plane in the crystal structure of 18_Dy. The dashed light violet-orange and mauve lines indicate the two types of overlap.

Figure 11. (a) Solid-state, room-temperature excitation (curve 1, maximum emission at 545 nm) and emission (curve 2, maximum excitation at 360 nm) spectra of free salanH; (b,c) solid-state, room-temperature excitation (curve 1, maximum emission at 540 nm) and emission (curve 2, maximum excitation at 360 nm) spectra of complexes 4•MeCN_Eu and 15_Eu, respectively.

Upon maximum excitation at 360 nm, the free salanH ligand shows a broad emission with a maximum at 545 nm, located in the green part of the visible spectrum. The photoluminescence characteristics of the free ligand salanH and of the Eu(III) (4•MeCN_Eu), Tb(III) (6•MeCN_Tb, 17_Tb), and Dy(III) (7•MeCN_Dy, 18_Dy) complexes are almost identical, which suggests no Ln III-based emission. The same excitation and emission profiles are seen for solid 15_Eu, except for an emission peak at 612 nm assigned [54,55] to the 5D0 → 7F2 transition.

Figure 13. Out-of-phase ac molar magnetic susceptibility signals (χ″) vs. T (at different ac frequencies, left) and vs. ac frequency (at different low temperatures, right) for complexes (a) 7_Dy, (b) 18_Dy, and (c) 17_Tb. All measurements were performed in the 10 to 1488 Hz frequency range under static fields of 0.1 (7_Dy, 18_Dy) and 0.15 T (17_Tb). Solid lines are guides for the eye.

Figure 14. (left) Arrhenius plots for complexes 7_Dy and 18_Dy in applied dc fields of 0.1 T; (center) plots of ln(χ″M/χ′M) vs. 1/T for 17_Tb at different ac frequencies in an applied field of 0.15 T; (right) plots of ln(χ″M/χ′M) vs. 1/T for 7_Dy at different ac frequencies in an applied field of 0.1 T. The solid lines represent the fits.

Figure S4: The 1H NMR spectrum of complex 22_Y in DMSO-d6. Figure S5: The 1H NMR spectrum of complex 11_Y in DMSO-d6. Figure S6: The 1H NMR spectrum of salanH in DMSO-d6. Figure S7: Diffuse reflectance spectra of the Pr(III), Nd(III), Sm(III), Er(III), and Yb(III) complexes of the two families of compounds. Figure S8: Molecular and supramolecular features of complex 7•MeCN_Dy. Figure S9: The 3D architecture of complex 7•MeCN_Dy. Figure S10: Molecular structural features of complex 18_Dy. Figure S11: Stacking of layers along the c axis in the crystal structure of 18_Dy. Figures S12-S17: Solid-state, room-temperature photoluminescence data for selected complexes in the visible region. Figure S18: Solid-state, room-temperature visible emission spectra of the Pr(III), Nd(III), Sm(III), Er(III), and Yb(III) complexes that belong to the two families of compounds upon CW laser excitation at 405 nm. Figures S19 and S20: Magnetization vs.
magnetic field plots for complexes 7_Dy and 18_Dy, respectively, at 2.0 K. Figure S21: Ac measurements for complexes 17_Tb and 18_Dy at variable fields.

Table 1. Cont. Bond distances (Å) for 4•MeCN_Eu (b), 7•MeCN_Dy (c) and 10•MeCN_Yb (d). a The same atom numbering scheme is applied for the three complexes: b Ln = Eu, c Ln = Dy, d Ln = Yb.
Radio Planetary Nebulae in the Magellanic Clouds

We report the extragalactic radio-continuum detection of 15 planetary nebulae (PNe) in the Magellanic Clouds (MCs) from recent Australia Telescope Compact Array+Parkes mosaic surveys. These detections were supplemented by new, high-resolution radio, optical and IR observations, which helped to resolve the true nature of the objects. Four of the PNe are located in the Small Magellanic Cloud (SMC) and 11 are located in the Large Magellanic Cloud (LMC). Based on Galactic PNe, the expected radio flux densities at the distance of the LMC/SMC are up to ~2.5 mJy and ~2.0 mJy at 1.4 GHz, respectively. We find that one of our new radio PNe in the SMC has a flux density of 5.1 mJy at 1.4 GHz, several times higher than expected. We suggest that the most luminous radio PN in the SMC (N S68) may represent the upper limit to radio peak luminosity, because it is ~3 times more luminous than NGC 7027, the most luminous known Galactic PN. We note that the optical diameters of these 15 MC PNe vary from very small (~0.08 pc or 0.32″; SMP L47) to very large (~1 pc or 4″; SMP L83). Their flux densities peak at different frequencies, suggesting that they may be in different stages of evolution. We briefly discuss mechanisms that may explain their unusually high radio-continuum flux densities. We argue that these detections may help solve the "missing mass problem" in PNe whose central stars were originally 1-8 M⊙. We explore the possible link between ionised halos ejected by the central stars in their late evolution and extended radio emission. Because of their higher-than-expected flux densities, we tentatively call this PN (sub)sample "Super PNe".

Table 1. Radio PN candidates in the SMC and LMC. A ⋆ in Col. 1 indicates that the PN was observed with the ATCA high-resolution mode at 6/3 cm. T and 3 are from the detection at the highest radio frequency.
The errors in flux densities at all frequencies are flux-dependent and are <10%. Fluxes (Col. 11) are taken from Reid & Parker (2009, in prep.). The optical diameter (Col. 12) is taken from Shaw et al. (2006). JD - Jacoby & De Marco (2002); SMP - Sanduleak, MacConnell, & Philip (1978).

INTRODUCTION

Planetary nebulae (PNe) possess ionized, neutral, atomic, molecular and solid states of matter in diverse regions with different temperatures, densities and morphological structures. Their physical environments range in temperature from 10^2 K to greater than 10^6 K. Although these objects radiate from the X-ray to the radio, the detected structures are influenced by selection effects due to intervening dust and gas, instrument sensitivity and distance (see the more specific discussions by Kwok (2005); Dgani & Soker (1998)).

⋆ Email: m.filipovic@uws.edu.au

Radio-continuum surveys of PNe in the Magellanic Clouds (MCs) potentially offer a flux-limited sample that could provide absolute physical attributes such as fluxes, emission measures, and spectral energy distributions (SEDs) (e.g., Zijlstra et al. (2008)). These in turn are relevant to the major issues of PN evolution, although MC PNe provide very limited information on morphology. Most known PNe are weak thermal radio sources, and although the morphologies of these radio objects are similar to their optical counterparts, radio interferometric observations allow us to image the structure of a PN's ionized component. A spherically symmetric, uniform-density PN has an ionized mass, M_i, that scales as

M_i ∝ F_5^{1/2} D_kpc^{5/2} n_e^{-1},    (1)

where D_kpc is the distance (kpc), F_5 is the radio flux density at 5 GHz (Jy) and n_e is the electron density (cm^-3) derived from forbidden-line ratios (Kwok 2000). In cases where the PN distance is unknown, this equation can be inverted to provide a crude but useful distance estimate (Mezger & Henderson 1967, Appendix A, eq. (A.14)).
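The inversion just described can be sketched numerically. The snippet below is a worked example, not the authors' code: it assumes the mass-distance scaling M_i = 0.0022 × D_kpc^2.5 M⊙, which, per the G313.3+00.3 example of Cohen et al. (2005), holds for F_5 = 25 mJy and T_e = 10^4 K, and brackets the distance by assuming a mean ionized mass of 0.1 to 0.25 M⊙.

```python
# Inverting the ionized-mass relation to estimate distance.
# The coefficient 0.0022 (Msun per kpc^2.5) is taken from the text for the
# specific case F5 = 25 mJy, Te = 1e4 K; it is not a general constant.
def distance_kpc(m_ion, coeff=0.0022):
    """Distance (kpc) for a given ionized mass (Msun): D = (Mi / coeff)^(1/2.5)."""
    return (m_ion / coeff) ** (1.0 / 2.5)

d_lo = distance_kpc(0.10)   # lower mass bound -> ~4.6 kpc
d_hi = distance_kpc(0.25)   # upper mass bound -> ~6.6 kpc
print(f"{d_lo:.1f}-{d_hi:.1f} kpc")
```

The bracketed range reproduces the 4.6-6.6 kpc estimate quoted for G313.3+00.3.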
For example, adopting a 5 GHz flux density of 25 mJy and T_e = 10^4 K gives an ionized gas mass of 0.0022 × D_kpc^2.5 M⊙. Assuming a mean PN ionized mass of 0.1 to 0.25 M⊙ allowed Cohen et al. (2005) to estimate that the distance to G313.3+00.3 lies between 4.6 and 6.6 kpc. However, this distance range may not always be valid, as Eq. 1 may not always be applicable. For example, some PNe, such as NGC 7027, the brightest PN in the radio sky, exhibit an ionized mass of only 0.057 M⊙ (Beintema et al. 1996). At the other extreme, large PNe have masses up to 0.5 M⊙ or sometimes well above 1 M⊙. We therefore point out that Eq. 1 is valid only if the PN is optically thin, although this will be true in the large majority of cases.

Studies of extragalactic PNe have the advantage that the distance is known with much greater certainty than for Galactic PNe. Centimeter radio emission from PNe can also be used to estimate interstellar extinction by comparing radio and optical Balmer-line fluxes (Luo, Condon & Yin 2005). The study of radio PNe at a known distance allows us to better understand the properties of PNe in our own Galaxy, and ultimately to refine methods of estimating their distances. We also point out that PNe may even reflect conditions inherent in their host galaxies. However, there is consistency in the bright-end cut-off of the PN Luminosity Function (PNLF) regardless of galaxy type (Herrmann et al. 2008). We present the first complete sample (S_i > 1.5 mJy) in the radio continuum of confirmed extragalactic PNe. Further analysis depends heavily on a variety of new, high-resolution and time-consuming observations, which are underway.

RADIO DATA

The large majority of known Galactic PNe are weak but detectable radio-continuum objects. Their thermal radio emission is a useful tracer of nebular ionization. Because centimeter-wavelength radiation is not extinguished by dust grains, the observed emission should be a good representation of the conditions in the PNe.
For these reasons, large-scale radio surveys of nearby galaxies such as the Magellanic Clouds (MCs) may serve as a perfect testbed for the detection of radio-continuum PNe outside our own Milky Way.

In the past decade, several Australia Telescope Compact Array (ATCA) moderate-resolution surveys of the MCs have been completed. Deep ATCA and Parkes radio-continuum (Filipović et al. 1995, 1997) and snap-shot surveys of the Small Magellanic Cloud (SMC) were conducted at 1.42, 2.37, 4.80 and 8.64 GHz by Filipović et al. (2002, 2005) and Payne et al. (2004), achieving sensitivities of 1.8, 0.4, 0.8 and 0.4 mJy beam^-1, respectively. The maps have angular resolutions of 98″, 40″, 30″ and 15″ at the frequencies listed above. New complete mosaics of the SMC at 4.80 and 8.64 GHz (both at sensitivities of 0.5 mJy beam^-1) have recently been completed by Dickel et al. (2009) and Filipović et al. (2009). Also, Mao et al. (2008) presented a new 20-cm ATCA mosaic survey of the SMC with a resolution of 18″ × 11″, which is well suited for initial PN detection, as the PNe appear as point sources.

For the Large Magellanic Cloud (LMC), a new moderate-resolution (40″; sensitivity ~0.6 mJy beam^-1) ATCA+Parkes survey by Hughes et al. (2006, 2007) and Payne et al. (2009) at 1.4 GHz (λ = 20 cm) complements the ATCA+Parkes mosaic images at 4.8 and 8.64 GHz obtained by Dickel et al. (2005). For these observations, the 4.8 GHz total-intensity image has a FWHM of 33″, while the 8.64 GHz image has a FWHM of 20″. Both have sensitivities of ~0.5 mJy beam^-1, and the positional uncertainties for all three radio-continuum maps of the LMC are less than 1″.

In addition to the ATCA+Parkes surveys, we also searched the 843 MHz Sydney University Molonglo Sky Survey (SUMSS; resolution ~45″, sensitivity ~2 mJy; Bock et al. 1999) for sources coincident with known catalogued PNe.
We also searched the specific MOST (Molonglo Observatory Synthesis Telescope) observations of the SMC presented by Turtle et al. (1998). From these mosaic surveys, a collection of targets was selected for follow-up observation. In several sessions since 2006, we have observed 5 of these PNe with the ATCA (project C1604) in "snap-shot" mode at 4.8 and 8.64 GHz, achieving resolutions as high as 1″. With this resolution, we expect that the larger PNe, such as SMP L83, might be resolved, but most PNe will still appear unresolved. These 5 PNe are marked with a ⋆ in Table 1 (Column 1).

METHOD AND RESULTS

The radio-continuum surveys described in Section 2 were initially searched within 2″ of known optical PNe for co-identifications. In the SMC, the PN lists given by Morgan (1995, his Table 3) and Jacoby & De Marco (2002, their Table 4) contain a total of 139 PNe. We found four radio sources (3%) that were spatially coincident with PNe: JD 04 (Fig. 1, left), SMP S11 (Fig. 2, left), SMP S17 (Fig. 1, right) and N S68 (Fig. 2, right). For more details see Table 1. We refer to the PNe using the names listed in Jacoby & De Marco (2002). Three other previously classified PNe, MA 1796, MA 1797 (Meyssonnier & Azzopardi 1993) and MG 2 (Morgan 1995), also appear to be coincident with corresponding radio sources. However, Stanghellini et al. (2003) found that these three sources are in fact ultra-compact H ii regions and not bona fide PNe. We also note that the radio flux densities for these three objects are much higher than those reported here for MC radio PNe. Another previously classified PN in the SMC, known as JD 26, was also detected across the range of our radio frequencies, but after close examination we re-classified this object as an H ii region.

Figure 1 (caption, continued). The radio-continuum contours are from 1 mJy beam^-1 in steps of 0.5 mJy beam^-1. The synthesised beam of the 1.4 GHz survey is 18″ × 11″. The larger optical extent of these two PNe is due to faint AGB halos.
Figure 2. ATCA high-resolution radio-continuum images of two SMC PNe: SMP S11 (left) and N S68 (right). Grey-scale images are at 4.8 GHz and the overlaid contours are from the 8.64 GHz observations, in both images from 0.5 mJy beam^-1 in steps of 0.5 mJy beam^-1. All images have an rms noise (1σ) of the order of ~0.1 mJy beam^-1. Both synthesised beams are circular (2″/4″) and are displayed as a black circle in the bottom left corner.

Within the LMC, we found 11 co-identifications using the optical PN catalogues presented by Leisy et al. (1997) and Reid & Parker (2006a,b). The catalogue by Leisy et al. (1997) contains accurate positions and finding charts for ~280 LMC PNe compiled from all major surveys prior to 1997. More recent catalogues presented by Reid & Parker (2006a,b) identify ~629 LMC PNe and PN candidates, classified into three groups: True (T), Likely (L), and Possible (P). All of our 11 co-identifications are classified type "T". We note that there are 11 other radio-continuum sources in the LMC (among them RP 105) previously classified by Reid & Parker (2006a) as true PNe, which were re-examined when the MIR data became available. Eight of these 11 point sources have significantly higher than expected flux densities, in the range of 10-15 mJy, at 1.4 GHz. By applying multi-frequency criteria to these 11 candidates, including the higher-than-expected flux density, they were found to be compact H ii regions. We suggest that an upper limit on radio flux density be included as a new parameter in the multi-frequency PN selection criteria. More details about this post-facto classification will be given below and in our subsequent papers. Finally, high-resolution imaging and spectra from a recent Hubble Space Telescope (HST) survey of 59 PNe in the MCs (Shaw et al. 2006) match 10 of our radio PNe (all in the LMC; see Table 1, Col. 11), displaying a wide range of morphologies.
We note that our 4 PN radio-continuum detections in the SMC have not yet been observed with the HST. We employed the "shift technique" in order to estimate the probability of false detections: we offset the positions of the known optical PNe by 30′ in each of four directions (±RA and ±Dec) and counted the number of spurious identifications. We find only one false detection per Cloud, implying that at most one of the cross-identifications in each Cloud that we report here occurred by chance.

Table 1 summarizes our 15 radio detections coincident with known LMC and SMC PNe. We list the ATCA radio source name, radio position (J2000), flux densities at 843 MHz, 1.4 GHz, 2.37 GHz, 4.8 GHz and 8.64 GHz, the spectral index and its error (the spectral index is defined as α in S_ν ∝ ν^α, where S_ν is flux density and ν is frequency), the flux density ratio between the Spitzer MIR flux at 8 µm and S_1.4GHz, the optical flux, the optical diameter (arcsec) and the optical name. None of these sources can be considered radio-extended, given our radio resolutions and their expected sizes at the distance of the MCs. All 15 radio detections are within 1″ of the best optical positions.

Spitzer Detections and Mid-Infrared Properties

True radio detections of MC PNe are dependent on accurate optical and IR identifications as PNe. For example, Sanduleak et al. (1978) note that a faint star with strong Hα emission can be misconstrued as a PN if its continuum lies below the sensitivity of the detector. Ultra-compact H ii regions have also been confused with PNe, and their paper lists examples of both in the LMC and SMC. Some of the optical counterparts among our radio-continuum PN candidates could, therefore, be confused with ultra-compact H ii regions, as shown by Stanghellini et al. (2003). Compact H ii regions and symbiotic stars constitute a major contaminant in the search for PNe at all frequencies. However, the mid-infrared morphologies and false colors of H ii regions are distinct from those of PNe (Cohen et al. 2007a,b).
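The "shift technique" described above can be illustrated with a toy Monte Carlo. Everything in this sketch is synthetic (random positions in a flat-sky approximation, invented field size and source counts, not the actual SMC/LMC catalogues); it only demonstrates the logic of counting matches before and after offsetting the catalogue by 30′.

```python
import math
import random

random.seed(42)  # fixed seed so the toy run is reproducible

def match_count(cat, srcs, radius_deg):
    """Number of catalogue entries with a radio source within radius_deg
    (flat-sky approximation with a cos(dec) correction on RA)."""
    n = 0
    for ra, dec in cat:
        if any(math.hypot((ra - r) * math.cos(math.radians(dec)), dec - d) < radius_deg
               for r, d in srcs):
            n += 1
    return n

# Synthetic radio sources and a synthetic PN catalogue in a 5 x 5 deg field.
srcs = [(random.uniform(0, 5), random.uniform(-5, 0)) for _ in range(200)]
cat = [(random.uniform(0, 5), random.uniform(-5, 0)) for _ in range(140)]

r = 2.0 / 3600.0  # 2 arcsec matching radius, as in the text
real = match_count(cat, srcs, r)

# Offset the catalogue by 30 arcmin in each of the four directions and recount;
# the shifted counts estimate the chance-coincidence rate.
shift = 0.5  # deg
shifted = [match_count([(ra + dra, dec + ddec) for ra, dec in cat], srcs, r)
           for dra, ddec in [(shift, 0), (-shift, 0), (0, shift), (0, -shift)]]
print(real, shifted)
```

With random positions at this source density, both the real and shifted counts are expected to be near zero, mirroring the paper's finding of at most one chance coincidence per Cloud.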
H ii regions that are classified as compact or even ultra-compact are associated with MIR structures such as multiple filaments and/or haloes. Unlike PNe, the MIR morphology of H ii regions is highly irregular. Their false colors (using IRAC bands 2 (4.5 µm), 3 (5.8 µm) and 4 (8 µm) as blue, green and red, respectively) are generally white, indicative of thermal emission by warm dust grains rather than of the fluorescent polycyclic aromatic hydrocarbon bands or molecular hydrogen lines that cause many PNe to appear orange or red. We have cut out small regions of Spitzer IRAC images around each of the catalogued PNe with potential radio detections. For the LMC, these come from the enhanced products of the SAGE Legacy program (PID 20203) available at the Spitzer Science Center (SSC). For the SMC, we downloaded the SSC IRAC mosaics recently made available from the SMC SAGE Legacy program (PID 40245). Combining the above techniques with detailed scrutiny of the optical spectra (Reid & Parker 2006a), we have been able to reclassify the brightest radio detections of PNe candidates (those with flux densities above 10-15 mJy) in both Clouds as coming from H ii regions rather than PNe. Jacoby & De Marco (2002) list JD 04, SMP S11, SMP S17 and N S68 as previously known SMC PNe in their table 4, recovered by their "blinking" technique using images obtained with an [O iii] filter (on-band) and a nearby continuum filter (off-band). They identified objects as PNe if their diameters were less than 10′′ and the [O iii] on-band image flux exceeded twice the off-band flux. We also detect these four objects in our Spitzer images and confirm the status of two (JD 04 and SMP S17) as bona-fide PNe. The other two objects (SMP S11 and N S68) appear to have a somewhat larger MIR/RC ratio (see Sect. 4.2), and from the MIR perspective each could also be an ultra-compact H ii region or an unusual PN. SMP L8, 25, 33, 39, 47 (Fig. 3), 48 (Fig. 4), 62, 74 (Fig. 5), 83 (Fig. 6), 84 and 89 were originally listed as LMC PNe by Sanduleak et al. (1978) on the basis of deep blue- and red-sensitive objective-prism plates taken with the Curtis Schmidt telescope at the Cerro Tololo Inter-American Observatory. These plates were originally obtained for other, unrelated programs, and objects with no evidence of a continuum were selected. Nine of these 11 were confirmed spectroscopically by Reid & Parker (2006b), whilst the other two lie outside their sampled field. Shaw et al. (2006) presented comprehensive high-resolution HST spectra and images for 8 of our 11 LMC radio-continuum PNe. All 15 radio-continuum PNe detections exhibit canonical optical spectra, leaving us in no doubt that they are indeed optical PNe. These criteria include the relative sizes and morphologies of the nebulae in Hα and red continuum; the contrast between nebular and ambient Hα emission; and the presence and intensity ratios of forbidden lines in the optical spectrum, characteristic of PNe but not seen in H ii regions. These are the attributes described by Reid & Parker (2006b). More than 10 of our radio detections are also confirmed as PNe from HST imaging (Shaw et al. 2006). In the meantime, two other LMC PNe (SMP L25 and SMP L33) have also been observed with the HST. Bernard-Salas et al. (2008) presented two Spitzer spectroscopic studies of a sample of 25 MC PNe, of which we report radio-continuum detections here for 4 objects (SMP L8, 62, 83 and SMP S11). Zijlstra et al. (1994) reported a radio [WC]-type planetary nebula in the LMC, SMP L58 (Sanduleak et al. 1978), with flux densities of 0.79 and 0.84 mJy at 4.8 and 8.64 GHz, respectively (based on well-calibrated ATCA June 1993 observations). We did not detect SMP L58 in any of our LMC mosaic radio surveys, as our detection limits (3σ = 1.5 mJy) are well above the SMP L58 flux densities. These two radio fluxes imply a spectral index of about 0.1±0.3.
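The two-point spectral index quoted for SMP L58 follows directly from the definition Sν ∝ ν^α. A minimal sketch (the function name is ours, for illustration):

```python
import math

def spectral_index(s1_mjy, nu1_ghz, s2_mjy, nu2_ghz):
    """Two-point spectral index alpha, defined by S_nu ∝ nu**alpha."""
    return math.log(s2_mjy / s1_mjy) / math.log(nu2_ghz / nu1_ghz)

# SMP L58 flux densities from Zijlstra et al. (1994):
# 0.79 mJy at 4.8 GHz and 0.84 mJy at 8.64 GHz.
alpha = spectral_index(0.79, 4.8, 0.84, 8.64)
print(f"alpha = {alpha:.2f}")  # prints alpha = 0.10
```

This reproduces the ∼0.1 value quoted above; the ±0.3 uncertainty comes from the flux-density errors, which the sketch does not propagate.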
Our non-detection of this PN highlights the difficulty of radio observations of extragalactic PNe, even with the most recent ATCA techniques. SMP L58 is sufficiently bright among MC PNe that it was originally detected by IRAS at 25 µm at a level of 220 mJy (Loup et al. 1997), with an estimated error of ±25%. The SAGE detection of this PN in the MIPS 24-µm band corresponds to 190±5 mJy (Hora et al. 2008), consistent with no change in MIR emission over the intervening almost two decades. The 8.0-µm flux densities in both SAGE epochs (3 months apart) were 37 mJy. That would imply a large MIR/RC ratio of ∼47 if we assume a flat spectrum down to 1.4 GHz from the Zijlstra et al. (1994) average 4.8/8.64-GHz flux densities or our own measurements. This rather large ratio points more towards an H ii region nature than a PN. However, Bernard-Salas et al. (2009) present the Spitzer spectrum and conclude that the dust is carbonaceous; an H ii region nature can therefore be excluded, further highlighting the difficulty of positively identifying PNe even given multi-wavelength data.
[Figure caption: MCELS optical image overlaid with the ATCA high resolution radio-continuum image at 4.8 GHz. The radio-continuum contours are from 0.75 mJy beam−1 in steps of 0.25 mJy beam−1. The synthesized beam of the radio image is 4′′ and the rms noise (1σ) is ∼0.25 mJy beam−1. We note the large optical extent, which is due to the faint AGB halo of the PN.]
Other Radio-continuum Extragalactic PNe and PNe Candidates
We also note two radio PNe detections in the Sagittarius dwarf galaxy (Dudziak et al. 2000). When scaled to the distance of the LMC (∼1 mJy at 4.8 GHz), neither of these PNe would be detectable in our surveys.
Expected Flux Density at the Distance of the MCs
Planetary nebulae within the MCs should not have measurable radio emission much above the sensitivity limits of our present-generation data. For example, scaling the radio flux of the very distant (6.6 kpc) Galactic PN G313.3+00.3 (Cohen et al.
2005) to the distance of the MCs would place it near or below our detection threshold. Probably the most luminous Galactic PN at present is NGC 7027; at a distance of 980±100 pc (Zijlstra et al. 2008), its 4.80-GHz flux density (5.6 Jy at its radio peak in 1987.34) would correspond to ∼2.2 mJy at the distance of the LMC and ∼1.5 mJy at the SMC. NGC 7027 is fading, and the original peak flux will have been higher than the current value, although by how much is uncertain. Assuming that a PN central star has a highest likely luminosity of 2×10^4 L⊙ (if higher, the object would evolve too quickly through the PN phase to be detectable), which is about twice that of NGC 7027, suggests a peak LMC radio flux of up to 3 mJy. A further correction factor is needed for the stellar temperature: the radio flux of an optically thick PN varies by up to a factor of 2 during its evolution, caused by the changing stellar temperature (which affects the number of ionizing photons) (Zijlstra et al. 2008). According to Zijlstra (2009, priv. comm.), this may add another 25-50% to the peak radio flux. However, very few PNe would be expected to show such values. Almost half of our sample presented here (7 out of 15) have similar or significantly higher flux densities at 4.8 GHz (up to 4.1 mJy for the SMC PN N S68) than the values projected for NGC 7027. This suggests that the most radio-luminous PNe, such as N S68, may represent an upper (or close to upper) limit in peak radio emission. At present, this SMC PN (N S68) is a factor of ∼3 more luminous than the Galactic PN NGC 7027. In general, we have found much higher flux densities than expected for PNe in both MCs; e.g. LMCPN J054237-700930 (SMP L89) with a 1.4-GHz value of 3.1 mJy (±10%) and SMCPN J004336-730227 (JD 04) with 5.1 mJy at 1.4 GHz. We note that 3 out of 4 detections in the SMC are surprisingly stronger (by up to a factor of three) than their LMC and Galactic cousins, even though the SMC is some 10 kpc further away than the LMC.
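The distance scaling applied to NGC 7027 above is the inverse-square law. A minimal sketch, assuming round-number MC distances of ∼50 kpc (LMC) and ∼60 kpc (SMC):

```python
def scale_flux(s_jy, d_from_kpc, d_to_kpc):
    """Scale a flux density to a new distance via the inverse-square law."""
    return s_jy * (d_from_kpc / d_to_kpc) ** 2

# NGC 7027: 5.6 Jy at 4.80 GHz (radio peak, 1987.34), distance 0.98 kpc
# (Zijlstra et al. 2008). Assumed distances: LMC ~50 kpc, SMC ~60 kpc.
s_lmc = scale_flux(5.6, 0.98, 50.0) * 1e3  # convert Jy -> mJy
s_smc = scale_flux(5.6, 0.98, 60.0) * 1e3
print(f"LMC: {s_lmc:.1f} mJy, SMC: {s_smc:.1f} mJy")  # ~2.2 and ~1.5 mJy
```

These reproduce the ∼2.2 and ∼1.5 mJy values quoted in the text.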
While these are small-sample statistics, one could ask why the SMC PN sample is brighter than that of the LMC. Could we be missing even brighter LMC PNe? However, the existence of observable infrared emission for the four detected SMC PNe implies that the dust in the shell must be relatively dense and close to the central star (CS) in order to be efficiently heated. The major part of the detected radio-continuum emission most likely originates from the dense ionized shell (<0.1 pc), which is, at a distance of 60 kpc, much smaller than our 8.64-GHz synthesized beam. The radio spectrum of JD 04 is fairly flat throughout the observed radio frequency range. However, N S68, SMP S17 and SMP S11 show a mild but distinctive drop in flux density toward the lower frequencies, very likely the effect of increased optical depth. The spectral index estimate for N S68 in the λ = 13 cm range agrees with the value of 0.6 predicted by the unbounded wind-shell model (Wright & Barlow 1975). Following the CS evolutionary model of Schoenberner (1981), high core-mass CSs travel through the heating part of the HR diagram much faster than low-mass CSs: a CS with a mass of 0.84 M⊙ will pass through the heating stage almost 10 times quicker than a CS of 0.6 M⊙ (Schoenberner 1993). At the same time, the number of ionizing photons (λ < 912 Å) is significantly larger for high-mass CSs (for example, a 0.65 M⊙ central star will produce almost 3 times more ionizing photons than a central star of 0.6 M⊙). Therefore, assuming that the SMC CS mass distribution is shifted toward 0.65 M⊙, it is reasonable to assume that the sudden drop in radio-continuum flux density, and the consequent gap between the bright detected objects and the sensitivity limit (see Sec. 4.3), could be caused by the quick recombination phase once the ionizing source is "turned off".
However, this model would also imply higher densities (>10^4 cm−3) in the ionized shell and therefore characteristically optically thick radio-continuum emission at the lower frequencies.
MC PNe Properties and Selection Criteria
We estimate spectral indices for 11 of the sources shown in Table 1 (Col. 9). Despite the rather large estimated errors, most of our sample is within the expected range (−0.3 < α < +0.3). These large errors are most likely due to the low flux-density levels of the associated detections. We cannot determine more accurate spectral indices without observations of higher spatial resolution and sensitivity, but we assume that the PNe emission is predominantly thermal. However, thermal emission also characterizes compact H ii regions. We point out that the radio-continuum flux densities peak at different frequencies and that in most cases the spectrum cannot be described by a single power law. We compared the corresponding PNe Hα fluxes (Reid & Parker 2009, in prep.) with the radio/IR data and found no obvious correlation. Also, our 10 radio LMC PNe detections observed with the HST exhibit a wide range of diameters, from very small (∼0.08 pc or 0.32′′; SMP L47) to very large (∼1 pc or 4′′; SMP L83; Fig. 6). While one cannot classify radio sources based solely on spectral index, we note that two of our radio-continuum PNe (SMP L74 and SMP L83; Figs. 5 and 6) have very steep spectra (α = −0.6±0.4 and α = −0.5±0.2), implying non-thermal emission, although the errors in the indices are large. We point out that Peña et al. (2004) report large variability in the optical star in SMP L83, which may be responsible for the steeper spectral index. Similarly steep spectral indices may originate from SNRs and background sources such as AGNs and/or quasars. We exclude these possibilities, as they would have very different characteristics at other frequencies such as X-ray and optical.
Also, these may exhibit similar physical processes to the Galactic nebulae associated with V1018 Sco and GK Per, which have spectral indices of −0.8. Cohen et al. (2006) attributed this rather steep spectrum to the collision between the fast and slow winds in the nebula, and neither object is normally classified as a PN. Nor is the ratio of MIR to radio flux densities, MIR/RC, uniquely diagnostic for individual objects. The MIR/RC ratios for the MCs are based on MIR flux densities at 8.0 µm and radio-continuum values at 1.4 GHz; these are given in Table 1 (Col. 10). Where data exist but not at 1.4 GHz, we assume a flat radio spectrum. The median value for 137 Galactic PNe gives a MIR/RC ratio of 4±1 (Cohen et al. 2009, in prep.). Very large values, of order 50-300, suggest optically thick radio emission regions (Cohen et al. 2007a); typical values are 25 for diffuse H ii regions (Cohen et al. 2007b) and 42 for ultra-compact H ii regions (Murphy et al., in prep.). The 14 MC MIR/RC ratios have a median value of 9±2, consistent with the Galactic sample (the formal difference between the median values for MC and Galactic PNe has less than 2σ significance). An absent ratio in Table 1 indicates no MIR detection. Although the primary emission mechanism of PNe is thermal, Dgani & Soker (1998) presented a revised model of the PNe emission mechanism after the discovery of an inner region of non-thermal radio emission in the "born-again" PN A30. They assumed that the fast wind from the central star carries a very weak magnetic field. Interactions of the wind with dense condensations trap magnetic field lines for long periods and stretch them, leading to a strong magnetic field. As the fast wind is shocked, relativistic particles form and interact with the magnetic field to create non-thermal emission. Nonetheless, the flux density from this mechanism for PNe in the MCs would be exceedingly low; less than 1 µJy at the distance of the SMC. Villaver et al.
(2007) found that the average central-star mass of a sample of 54 PNe located in the LMC is 0.65±0.07 M⊙, slightly higher than reported for those in the Galaxy. They attributed this difference to the lower metallicity of the LMC (on average half that of our Galaxy). This naturally raises the question: do MC PNe evolve differently from their Galactic cousins? Zijlstra (2004) proposed that low-metallicity stars evolve to higher final masses. There has been no convincing confirmation of this (see Gesicki & Zijlstra 2007 for a discussion of the accuracy of mass determinations). However, our results presented here suggest that this effect is present for the brightest sources. Also, one could interpret this as evidence for a more recent epoch of strong star formation in the MCs, leading to an overabundance of high-mass central-star PNe. Our radio PNe detections (Table 1) represent only ∼3% of the optical PNe population of the MCs. Whatever the emission mechanism, we are selecting only the strongest radio-continuum emitters, possibly at a variety of different stages of their evolution (Vukotic et al. 2009).
PN Central Star Properties
Most PNe have central-star and nebular masses of only about 0.6 and 0.3 M⊙, respectively. The detection of white dwarfs in open clusters suggests that the main-sequence mass of PNe progenitors can be as high as 8 M⊙ (Kwok 1994). Our preliminary spectroscopic study, covering 3 of the 4 SMC and 10 of the 11 LMC radio-detected PNe (Payne et al. 2008a,b), suggests that the nebular electron temperatures are within the expected range, assuming an average density of 10^3 cm−3. Given the values of radio flux density at ∼5 GHz, we estimated that the ionized nebular mass of these 13 MC PNe may be 2.6 M⊙ or greater. However, forbidden lines are insensitive to high densities, and one might question whether the [S ii] densities should be applied to the radio region.
This study suggests that the MC PNe detected in the radio-continuum may represent a predicted link to the "missing-mass" problem associated with systems possessing a 1-8 M⊙ central star. If a high rate of mass loss continues for an extended fraction of the Asymptotic Giant Branch (AGB) star's lifetime, a significant fraction of the star's original mass can accumulate in a circumstellar envelope (CSE). If the transition from the AGB to the PN stage is short, then such CSEs could have a significant influence on the formation of PNe, resulting in the detection of optical AGB haloes. The presence of these haloes has been known since the 1930s (Duncan 1937). We consider the notion that our ATCA observations may be detecting the extended radio counterparts of these AGB haloes, presumed to be composed of weakly ionized material. However, the peak radio flux in a PN occurs while the nebula is optically thick to ionizing radiation, meaning only a fraction (in most cases a small fraction) of the circumstellar matter would be ionized. Once the ionized mass increases above ∼0.1 M⊙, the radio flux decreases quite rapidly as the nebula becomes optically thin. Therefore, we conclude that haloes are very weak in the radio and cannot be detected in the MCs with the current observations. The ionized mass will increase at a much later stage, when the ISM interaction region merges with the PN (Wareing et al. 2007), but at that time the expected radio flux will be very low indeed. While most MC SNR flux densities are much higher than those of these 15 PNe (see Bojičić et al. 2007; Crawford et al. 2008a,b; Filipović et al. 2008, for a small sample of MC SNR flux densities), these PNe are still somewhat more luminous than their Galactic counterparts, prompting us to call these sources Super PNe or mini-SNRs.
While obvious differences remain between the two classes, the physical processes within these two groups (PNe vs SNRs) are perhaps not too dissimilar, if one takes into consideration the fact that older SNR shock fronts are isothermal in nature.
CONCLUSION AND FUTURE OBSERVATIONS
We present the first 15 extragalactic radio PNe, all in the MCs. At least 10 of these candidates can currently be positively identified as PNe via high-resolution optical (HST) imaging. All 15 radio-continuum objects examined here exhibit typical PN characteristics, i.e. canonical optical spectra and MIR properties, leaving us in no doubt that they are bona fide. Assuming they are radio PNe, their higher than expected flux densities at lower frequencies are most likely related to environmental factors and/or selection effects. We tentatively call this (sub)sample of PNe "Super PNe". We are presently conducting high resolution (∼1′′) ATCA observations of all 15 PNe candidates using a variety of frequencies and arrays. We also plan further optical confirmation of these PNe using high resolution [O iii] images from the HST.
ACKNOWLEDGMENTS
We used the Karma/MIRIAD software packages developed by the ATNF. The Australia Telescope Compact Array is part of the Australia Telescope, which is funded by the Commonwealth of Australia for operation as a National Facility managed by the CSIRO. M.C. thanks NASA for funding his participation in this work through ADP grant NNG04GD43G and JPL contract 1320707 with UC Berkeley. We thank the Magellanic Clouds Emission Line Survey (MCELS) team for access to the optical images. We thank the referee (Albert Zijlstra) for his excellent comments that have greatly improved this manuscript.
The Response of Sobaity Sea Bream Sparidentex hasta Larvae to the Toxicity of Dispersed and Undispersed Oil
Accidental oil spillages can release millions of barrels of oil into the marine environment, threatening aquatic wildlife such as fisheries. As part of oil spill response strategies, several chemical dispersants have been recommended and successfully used elsewhere. However, the adverse effects of dispersed oil on fish species in Kuwait are unknown. Therefore, this study investigated the toxicity of the water-accommodated fraction (WAF) and the chemically enhanced water-accommodated fraction (CEWAF) of Kuwait crude oil (KCO) prepared with three dispersants (Corexit® 9500, Corexit® 9527, and Slickgone® NS) against larvae of the sobaity sea bream Sparidentex hasta, a species of international economic significance. Total petroleum hydrocarbons (TPH) were used to compare the chemical compounds partitioned into the WAF of dispersed and non-dispersed oil. Toxicity tests with fish larvae showed that the WAF of non-dispersed oil and the Corexit® 9527-treated CEWAF had similar LC50 values (0.12 g oil l−1), whereas the CEWAFs of Corexit® 9500 and Slickgone® NS showed lower toxicity.
Introduction
The increasing global demand for energy has led to increased exploration and extraction of oil from various regions of the world, specifically in offshore environments [1][2][3]. As crude oil and related petroleum products are transported across global regions by ships or pipelines, accidental oil spillages can occur, leading to devastating environmental crises with chronic toxic effects on marine organisms such as fish [4][5][6][7][8]. Oil spill accidents such as the 1989 Exxon Valdez, the Arabian Gulf oil spill, and the 2010 Deepwater Horizon have resulted in severe adverse effects on marine organisms.
Also, major concerns have been raised regarding the chronic toxicity of dispersant usage as a response agent in future oil spills in marine environments ranging from tropical to polar, and the multitude of hazards associated with it [9][10][11][12]. The Arabian Gulf region has experienced small to massive oil spills through natural seepage, war-related activities, tanker accidents, and offshore drilling [13,14]. The Regional Organization for the Protection of the Marine Environment (ROPME) in the Arabian Gulf adopted an oil spill response strategy and recommended a list of dispersants to be used in the ROPME Sea Area in the case of an oil spill [15][16][17]. Spilled oil floats on the surface due to its low density and forms a thin layer that spreads over the water surface [18]. Wave action breaks the oil slick into oil droplets of ≥100 µm that disperse in the water column. Chemical dispersants facilitate the dispersion of oil as small oil droplets (10-50 µm) by lowering the interfacial tension between oil and water, thereby reducing its impact on surface-dwelling marine organisms such as fish larvae, marine mammals, and sea birds [19,20]. However, there are trade-offs that reflect the complexity of dispersant application, as it facilitates the entry of oil into the water column, rendering it hazardous to benthic biota [21]. There are numerous concerns about the toxic effects of dispersed oil on fish, which live in the water column [22][23][24][25][26][27]. The fraction of oil that partitions into the water phase is termed the water-accommodated fraction (WAF) and is routinely measured as total hydrocarbons (THC). In field experiments during an oil spill, THC concentrations below the oil slick after dispersant treatment ranged from 30 to 50 mg l−1, eventually decreasing after a few hours to <1-10 mg l−1 [20,28].
The THC fraction is bioavailable to marine organisms, and its effects have been studied previously; several concerns were raised about the application of chemical dispersants to disperse oil slicks after the Deepwater Horizon (DWH) oil spill incident in the Gulf of Mexico in 2010 [29][30][31][32]. It is imperative to highlight that both Corexit® 9500 and 9527 chemical dispersants were used in the DWH oil spill to combat surface and subsurface oil spillages. However, the consequences of using large volumes of oil dispersants during the DWH are unknown [33], and there are indications that Corexit 9527 and 9500 are toxic to marine life [34]. The objective of this study is to expand knowledge of the toxicity of dispersed and undispersed oil to the larval stage of a marine fish species (sobaity sea bream S. hasta) that is of global significance to fisheries. The current study has three main aims: (1) provide information on the analytical method to assess the toxicity of oil and oil spill combat products (Corexit 9500, 9527 and Slickgone) to marine fish; (2) provide approximate toxicity data that can be used for hazard assessment in marine pollution scenarios; and (3) support the global toxicity database of chemicals hazardous to the marine ecosystem. The bulk of acute toxicity data in the literature concerning oil and dispersed oil encompasses two oil dispersants of the Corexit formulation. Nevertheless, to our knowledge, little work has focused on investigating their acute toxic effects, in combination with Kuwait crude oil (KCO), on the sobaity sea bream S. hasta. The authors intend that the outcomes of this study will help establish the sobaity sea bream S. hasta as an indicator species for oil contamination.
Literature Review
In the literature, few studies have investigated the toxic effects of dispersed and undispersed Kuwait crude oil (KCO) on marine fish.
Also, sea bream have not been the subject of much ecotoxicological research assessing the adverse effects of hazardous chemicals threatening marine ecosystems. Others, like [35], investigated the effects of water-soluble fractions of Iraq crude oil on larval and post-larval stages of gilthead sea bream (Sparus aurata). The effects of water-soluble fractions of KCO on the growth, bioactivity, and survival of larval stages of red sea bream (Pagrus major) and black sea bream (Acanthopagrus schlegeli) were examined by [36]. It has also been demonstrated that exposure of fish such as tilapia, African catfish, and carp to water-soluble fractions (WSF) of crude oil can affect their hematological characteristics [37][38][39][40], and exposure of juvenile African sharptooth catfish C. gariepinus to sublethal levels of WSF affected its growth performance [39]. Moreover, northern wolffish Anarhichas denticulatus exposed to mechanically dispersed oil, chemically dispersed oil, and dispersant alone for 48 hrs showed reduced acetylcholinesterase (AChE) activity in the brain, an integral nervous system function [25,41]. Juvenile turbot (Scophthalmus maximus) exposed to fuel oil and/or Finasol OSR 52 chemical dispersant showed an additive effect of oil contamination coupled with hydrostatic pressure on cellular oxygen consumption; this combination reduced the capacity of S. maximus to withstand high pressure (10.1 MPa) after contamination with either test chemical [42]. Also, exposure to sublethal concentrations of oil can produce an array of adverse effects in the early life stages of marine fish, such as larval behavioural impairment and CYP1A induction, but not AChE inhibition [4,43]. In the present study, Kuwait crude oil (KCO) in different thicknesses was layered over a fixed surface area and volume of seawater to prepare the WAF.
Fish early life stages were selected for this study because they tend to be the most sensitive to crude oil exposure and allow comparisons of toxicity results among test species and their life stages [44][45][46][47]. Fish in their early life stages are sensitive to oil-derived polycyclic aromatic hydrocarbons (PAHs), which can lead to a series of significant ecological effects on fish population survival and recruitment [48][49][50][51]. Studies of early life stages, such as the embryonic stage in haddock, have concluded that it is more vulnerable to crude oil toxicity than the larval stages [52,53]. As there is a lack of knowledge regarding the toxic effects of dispersed and undispersed oil, specifically on fish species indigenous to the Arabian Gulf, careful toxicological assessment can help in oil spill response and mitigation. This study will provide information on whether the application of chemical dispersants to oil enhances its acute toxicity, and will also indicate the amount of oil on the surface that is detrimental to fish larvae survival. The data will be useful to regulatory authorities in deciding on dispersant use should an oil spill occur during the fish breeding season. The objective of the study herein is to assess the toxicity of undispersed KCO and of KCO dispersed with the individual oil dispersants Corexit® EC 9500A, Corexit® EC 9527A, and Slickgone® NS on larvae of the marine sobaity sea bream Sparidentex hasta. Natural seawater was collected and filtered through 0.45 µm Whatman® sterile membrane filter paper for the preparation of the water-accommodated fraction and the chemically enhanced water-accommodated fraction, and the same water was used for further dilutions. WAF of KCO was prepared at 1, 10, 20, 40, and 80 g oil l−1 seawater under pre-standardized conditions, according to [54].
For the preparation of CEWAF, a fixed oil loading of 1 g KCO l−1 filtered seawater (2 g KCO per 2 l seawater) was selected, as for the water-accommodated fraction (WAF), and a 10:1 (oil:dispersant) ratio was used, with 0.1 g dispersant l−1 (0.2 g dispersant per 2 l seawater) layered over the oil slick in a 2-l glass aspirator bottle filled with 2 l of seawater for the chemically enhanced water-accommodated fraction (CEWAF) preparations [44]. The test chemical was mixed for 24 hrs; mixing was then stopped and the solution was left to stand for 3 hrs for complete phase (oil/water) separation. The WAF/CEWAF solutions were drained, collected in amber bottles, and preserved in a refrigerator until the experiments took place [55].
Chemical Characterization
For the analysis of total petroleum hydrocarbons (TPH), a 100-ml WAF/CEWAF sample was extracted by adding MERCK® dichloromethane (CH2Cl2) in a solvent-rinsed 2-l separatory funnel with a Teflon stopcock and stopper. The mixture was shaken vigorously and dried over MERCK® grade anhydrous sodium sulfate (Na2SO4) and glass wool, which were pre-soaked and rinsed with dichloromethane; the solvent layer was then collected in a volumetric flask, which was labeled and stored for later analysis. The collected extract was analyzed on an RF-5301 PC SHIMADZU® spectrofluorophotometer using 310 nm excitation and 360 nm emission wavelengths. A standard multipoint calibration curve for TPH analysis was prepared using Kuwait crude oil, and results are reported in terms of KCO equivalents [56].
Larval Rearing
Larval stages of S. hasta were obtained from the hatchery of the Aquaculture Program at the Kuwait Institute for Scientific Research, 24 hrs after hatching (Fig. 1). The stocking density in rearing tanks for newly hatched larvae was 40 larvae per liter (1 liter was used), according to [57].
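The CEWAF recipe above (a fixed oil loading combined with a 10:1 oil:dispersant mass ratio) reduces to simple arithmetic. A minimal sketch, with a helper function of our own naming:

```python
def cewaf_recipe(volume_l, oil_loading_g_per_l=1.0, oil_to_dispersant=10.0):
    """Masses of oil and dispersant needed for a CEWAF preparation,
    given the seawater volume, a fixed oil loading (g per litre),
    and an oil:dispersant mass ratio."""
    oil_g = oil_loading_g_per_l * volume_l
    dispersant_g = oil_g / oil_to_dispersant
    return oil_g, dispersant_g

# For the 2-l aspirator bottle described in the text:
print(cewaf_recipe(2.0))  # (2.0, 0.2) -> 2 g KCO and 0.2 g dispersant
```

This matches the 2 g KCO and 0.2 g dispersant per 2 l of seawater quoted above.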
Rearing tanks were aerated with six air stones (5 x 5 x 7 each) and illuminated by sunlight and fluorescent light (40 W), with a light intensity at the water surface of 1500 lux at noon and 1000 lux at night [58]. Water quality parameters for the fish holding water were: dissolved oxygen (5-6 mg l−1), temperature (20-28ºC), salinity (40-42 ppt), and pH (8.2-8.6) (Fig. 2).
Fish Exposure System
96-hr acute toxicity tests were conducted following the OECD (Organisation for Economic Co-operation and Development) Guideline for the Testing of Chemicals - Fish Embryo Toxicity (FET) Test [59]. Toxicity tests were conducted using larvae of the sobaity sea bream Sparidentex hasta, one of the most significant commercial fish in the region, with considerable economic value and a wide geographic distribution ranging from the Arabian Gulf to the Oman Sea and the Western Indian Ocean [45]. Toxicity tests used the following preparations: Kuwait crude oil (KCO WAF), KCO + Corexit® 9500 dispersant (Corexit® 9500 CEWAF), KCO + Corexit® 9527 (Corexit® 9527 CEWAF), and KCO + Slickgone® dispersant (Slickgone® CEWAF). Solutions were prepared by two methods: variable oil loading, using a series of decreasing concentrations of KCO, and single loading with subsequent serial dilution [60]. The filtered seawater used to prepare the variable-loading WAF solutions was obtained from the same holding tank in which the fish larvae were reared. Seawater was aerated with pure oxygen for 15 min until saturation before the bioassay. Five test concentrations of KCO WAF and KCO CEWAF, plus a non-toxic control solution, were prepared using an appropriate geometric dilution series in which each successive concentration is about 50% of the previous one: 100%, 50%, 25%, 12.5%, and 6.25%. A static (non-renewal) toxicity test was conducted for 96 hrs with pre-hatched larvae.
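The geometric dilution series described above can be generated programmatically; a minimal sketch (the function and the mapping to nominal oil loadings are ours, for illustration):

```python
def dilution_series(top_percent=100.0, factor=0.5, n=5):
    """Geometric dilution series: each step is `factor` times the previous."""
    return [top_percent * factor ** i for i in range(n)]

# Percent-of-WAF test concentrations used in the bioassay
steps = dilution_series()
print(steps)  # [100.0, 50.0, 25.0, 12.5, 6.25]

# Equivalent nominal oil loadings for a 1 g oil per litre WAF stock
# (hypothetical mapping, assuming loading scales with dilution)
loadings = [round(1.0 * p / 100.0, 4) for p in steps]
print(loadings)  # [1.0, 0.5, 0.25, 0.125, 0.0625]
```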
Fish larvae were not fed throughout the exposure period; the test organisms were not fed because the yolk sac nourishes fish larvae for three days after hatching, and the oil globule further nourishes the same larvae for an additional two days. Toxicity testing was initiated 24 hrs after hatching. The weight of the larvae ranged between 0.10 and 0.75 mg. A minimum of 10 to 15 fish larvae was placed, using a glass Pasteur pipette, in 100 ml glass beakers.

Toxicity of Dispersed and Undispersed Oil

There are numerous toxicity testing methods for investigating the adverse effects of oil and oil-related products on the marine ecosystem, particularly saltwater fish. In this study, we adopted an exposure system in which fish larvae were exposed to serial dilutions of low concentrations of a test chemical (KCO WAF/KCO CEWAF). When interpreting toxicity data on dispersed and undispersed oil, it is imperative to take into consideration the intricate relationship between multiple factors, such as the physical/chemical composition of the test chemicals, application procedures, exposure regimes, and environmental hazards [11,61]. The concentration of dissolved petroleum hydrocarbons immediately following an oil spill will depend on several physical and environmental elements, such as seawater salinity, temperature, degradation by weathering and evaporation, and biological factors in the vicinity of the spillage area. This study is among the few to investigate the toxic effects of dispersed and undispersed oil on S. hasta larvae. It is well established in marine pollution research that there are few standard fish tests for assessing the toxicity of oil-related products, such as the use of inland silversides (Menidia beryllina) [62].
Even though some of the fish species used (saltwater/freshwater), such as zebrafish and sheepshead minnow, are considered tolerant to oil toxicity, they are still utilized because they meet the criteria for toxicity testing of oil products [63,64]. The toxic effects of different types of dispersed and undispersed oil on red sea bream (P. major) have been investigated by others [65]. Manufacturers of chemical dispersants must submit a report highlighting the effectiveness of a chemical dispersant and its toxicity as a product to the United States Environmental Protection Agency (U.S. EPA) in order to be listed in the National Contingency Plan (NCP) Product List [66]. Corexit® chemical dispersant was also listed for use in the ROPME Sea Area, but its toxicity to native marine fish is not known. In this study, the selection of fish larval stages was justified on the basis that, in oil spill scenarios, fish larvae will be vertically distributed in the water column and are likely to experience sporadic exposure to oil as water currents carry them into contaminated areas [67]. Therefore, the toxicity of the selected chemical dispersants was assessed in this study against the early life stages of a marine fish. In the study herein, recommended methods for using WAF/CEWAF for toxicity testing were followed, because it is the soluble fraction that enters an aquatic environment with the greatest ease and, as a result, can cause direct acute damage to aquatic organisms [68][69][70].

Chemical Analysis of WAF Prepared at Variable Oil Loadings

The TPH concentrations in WAF were 2.22, 3.44, 4.77, 7.21, and 5.78 mg l⁻¹ for oil loadings of 1, 10, 20, 40, and 80 g l⁻¹, respectively. The increase in TPH was linear up to 40 g l⁻¹; however, the increase in TPH in the WAF was not proportional to the oil loaded. Furthermore, increasing the oil loading to 80 g l⁻¹ resulted in less partitioning of TPH into the WAF than at the previous concentrations. The data for 80 g l⁻¹ KCO are not plotted in Fig. 3.
This result was substantiated by reported studies on the chemical characterization of WAF, as others also encountered this difficulty in the preparation of WAF and concluded that increasing the oil-to-dissolving-medium ratio did not increase the TPH content [20,71,72,73]. [11] observed in static-exposure oil WAF tests that hydrocarbon concentrations of TPH, BTEX (benzene, toluene, ethylbenzene, and xylene), and PAH were not constant, showing a decline within 48 hrs.

Toxicity of WAF Prepared at Variable Oil Loadings

The toxicity of WAF prepared with varying oil loadings of KCO was determined in exposure chambers, with fish larvae exposed for 96 hrs. WAFs prepared at 1, 10, 20, 40, and 80 g l⁻¹ were each tested separately, and each exposure ran for 96 hrs. The data reported in Table 2 show that exposure to WAF at 1 g l⁻¹ seawater resulted in high LC50 values at 24 hrs, which decreased with exposure time; at 96 hrs, the averaged LC50 was 0.12 ± 0.01 g l⁻¹. It is interesting to note that, although the WAFs prepared at higher oil loadings contained higher TPH values, their toxicity was lower than that of the 1 g l⁻¹ KCO preparation. The data in Table 2 also show that increasing the oil loading did not increase the toxicity of the WAF: the LC50 values obtained at 10, 20, 40, and 80 g l⁻¹ were 5.0 ± 3.1 (C.I. 3.3-6.7), 6.0 ± 3.1 (C.I. 3.3-6.7), 11.1 ± 2.0 (C.I. 7.1-17.4), and 21.0 ± 22.0 (C.I. 12.3-27.1) g l⁻¹, respectively (Fig. 4). In the present study, S. hasta larvae were exposed to WAF prepared at various oil loadings and subsequent dilutions at nominal concentrations, which were not renewed daily (static exposure). Exposure concentrations were observed to decrease as time progressed from 0 to 96 hrs, reflecting the natural scenario in the marine environment, where spilled oil undergoes dilution and evaporation.
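The LC50 values reported above come from dose-mortality data. The study's exact statistical method is not restated here; as a rough, hypothetical sketch, an LC50 can be interpolated on a log-concentration scale between the two tested doses that bracket 50% mortality (full probit or logistic regression would be needed to reproduce the reported confidence intervals):

```python
import math

def lc50(concs, mortality_pct):
    """Interpolate the concentration causing 50% mortality on a log10 scale.

    `concs` and `mortality_pct` are paired observations. A simple sketch,
    not a substitute for probit/logistic regression.
    """
    pairs = sorted(zip(concs, mortality_pct))
    for (c_lo, m_lo), (c_hi, m_hi) in zip(pairs, pairs[1:]):
        if m_lo <= 50.0 <= m_hi:
            if m_hi == m_lo:  # flat segment exactly at 50% mortality
                return c_lo
            frac = (50.0 - m_lo) / (m_hi - m_lo)
            log_lc = math.log10(c_lo) + frac * (math.log10(c_hi) - math.log10(c_lo))
            return 10 ** log_lc
    raise ValueError("50% mortality is not bracketed by the tested concentrations")

# Hypothetical 96 hrs dose-mortality data (% WAF vs % larval mortality).
estimate = lc50([6.25, 12.5, 25.0, 50.0, 100.0], [5, 20, 40, 70, 95])
```

With the hypothetical data above, the estimate falls between the 25% and 50% WAF doses, as expected from the bracketing mortalities of 40% and 70%.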
When WAFs were prepared at different oil loadings (1-80 g l⁻¹ seawater), with subsequent serial dilutions of each loading tested separately, an interesting pattern emerged. The WAF most toxic to sobaity sea bream larvae was the one prepared at the lowest oil loading, i.e., 1 g l⁻¹ seawater, in agreement with the observation of [46] that WAF solutions were acutely lethal when prepared with applied oil loadings of only 0.01 to 0.1 g l⁻¹ seawater. In the case of [74], the oil loading was even lower than the one used in this study. WAFs prepared with increasing oil loadings (≥1 g l⁻¹ seawater) and serial dilutions did not exert increasing toxic effects, indicating that saturation of water-soluble compounds was achieved at a 1 g l⁻¹ seawater oil loading and that a further increase in oil content could not increase the partitioning of water-soluble compounds into the aqueous medium. [11,75] have indicated that crude oils have a limited, narrow range of acute toxicity to marine biota, even though those oils might contain a wide range of hydrocarbons. Toxicity in oils is primarily caused by the presence of PAHs as major constituents [64,76] and, more precisely, by the presence of naphthalene, as oil droplet emulsion is increased in the WAF of oil [77]. It is also imperative to mention that, when reviewing the many oil toxicity studies in the literature, a pattern of inconsistencies emerges, indicating the difficulty and challenge of making comparisons between oil types and the biological species investigated. [64] suggested a universal analytical and exposure method, in addition to side-by-side comparative studies between oils and test organisms.

Chemical Analysis and Toxicity of CEWAF Prepared at Fixed Oil Loading

WAF prepared at 1.0 g l⁻¹ showed higher toxicity than WAF prepared with a higher concentration of oil on the surface of the water; therefore, chemical dispersants were applied at this oil loading.
The application of dispersants caused an increase in the TPH partitioned into the CEWAFs.

Table 2. Toxicity levels of Kuwait crude oil water-accommodated fractions (KCO WAF) (g l⁻¹) to S. hasta larval survival from 24 to 96 hrs, which was significant (p < 0.05), as determined by the lethal concentration affecting 50% of the exposed larvae (LC50) (g l⁻¹).

The TPH concentrations in the CEWAFs were 5.1, 17.7, and 33.2 mg l⁻¹ after Slickgone® NS, Corexit® 9527, and Corexit® 9500 treatment, respectively, whereas KCO alone at a 1.0 g l⁻¹ oil loading resulted in 2.0 mg l⁻¹ TPH in the WAF, comparable to the previous analysis. Therefore, in determining the toxicity of oil, it is vital to consider the oil-to-water ratio; in a spill scenario, the spread of oil over the water will be an essential consideration in determining the risk to water-column organisms. During exposure to KCO WAF from 0 to 96 hrs, increased mortality of S. hasta larvae was observed compared with the control, in line with [1]. Similar experimental observations were reported by [34], who observed high mortality rates upon exposure of gilthead sea bream (Sparus aurata) larval stages to the water-soluble fraction (WSF) of Iraqi oil, reflecting pollutant (KCO WAF/CEWAF) and species (sea bream) conditions similar to those applied in this study. Black sea bream (A. schlegeli) and red sea bream (P. major) larvae exhibited results similar to those achieved in this study, as their survival rates were significantly reduced upon exposure to the WSF of KCO [36]. [1] used oil loadings (1.2 g l⁻¹) approximately similar to the one used in this study and observed an array of toxicological defects in Atlantic haddock (Melanogrammus aeglefinus) embryo-larval stages, such as cardiotoxicity, spinal deformities, and pericardial edema. Moreover, the LC50 of WAF prepared at 1 g l⁻¹ seawater can be compared with that of WAF prepared at a higher loading (20 g l⁻¹ seawater).
A possible explanation is that the partitioning of KCO into the aqueous phase was not increased, possibly because of the fixed surface area of the underlying seawater in the WAF preparation bottles and the slow stirring speed used. [8] determined that crude oil had evident effects on the larvae of various carp species, including the common carp (Cyprinus carpio), carassin carp (C. auratus), and grass carp (C. idella). It has been determined that oil toxicity is related to the aromatic fraction of low molecular weight (LMW) and high molecular weight (HMW) polycyclic aromatic hydrocarbons (PAHs) [78]. Others have attributed the toxicity of oil to yolk-sac larvae to their sensitivity to low total polyaromatic hydrocarbon (TPAH) concentrations [79,80]. [81] concluded that exposure of the embryonic stages of Atlantic cod (Gadus morhua) to oil can lead to larval mortality and morphological deformities. Exposure during the embryonic (pre-hatch) period of haddock larvae (Melanogrammus aeglefinus) resulted in reduced eye size and an increased incidence of abnormal eye morphology; oil-exposed fish also had reduced mean blood flow speed, flow rate, and flow pulsatility [82,83]. In addition, dispersed oil droplets can contribute to the toxicity of the exposure medium, as the droplets can behave like a reservoir of toxic components that can be harmful to fish health. Like WAF alone, the toxicity of CEWAF also increased with exposure time (Table 3, Fig. 4).

Table 3. Toxicity levels of the Kuwait crude oil water-accommodated fraction (KCO WAF) and three chemically enhanced water-accommodated fractions (CEWAF) of oil dispersants (Corexit® 9500 CEWAF, Corexit® 9527 CEWAF, and Slickgone® NS CEWAF) (g l⁻¹) to S. hasta larval survival from 24 to 96 hrs, which was significant (p < 0.05), as determined by the lethal concentration affecting 50% of the exposed larvae (LC50) (g l⁻¹) for the WAF and CEWAFs of KCO [64,76].
The change in the order of toxicity of the CEWAF mixtures may be related to the different degradation rates and degradation products of the dispersants, indicating that toxicity data vary for different oil dispersants and different crude oil types [84]. [44] demonstrated that the primary function of an oil spill dispersant is to increase the entry of oil into the water column, thus modifying the exposure medium and increasing its toxicity. Dispersion of crude oil with an oil dispersant (CEWAF) has in some cases increased its toxicity compared with that of KCO WAF, as dispersants solubilize more of the oil fraction into the water column, making the oil more bioavailable to fish larvae [85]. [86] observed that exposure of chinook salmon smolts (Oncorhynchus tshawytscha) to Prudhoe Bay crude oil (PBCO) WAF and Corexit® 9500 CEWAF resulted in an LC50 of 155.93 mg l⁻¹ for the Corexit® 9500 CEWAF, some 20 times higher (i.e., less toxic) than the LC50 of the PBCO WAF (7.46 mg l⁻¹). This result suggests that hydrocarbon bioavailability to smolts may have been reduced under dispersed conditions, which may be attributed to several factors. [84] also observed an increase in larval mortality of the crimson-spotted rainbow fish (Melanotaenia fluviatilis) with time over a 96 hrs exposure to crude oil WAF and CEWAF. [43] observed that dispersed oil was more toxic to fish early life stages than native oil. [21] observed that larvae are unable to avoid waters contaminated with oil and dispersed oil, as their chemical receptors might be damaged at the initial interaction with petroleum hydrocarbons. [87] indicated that the Corexit® 9500 and 9527 dispersants are of low to moderate toxicity when tested on most aquatic species. Many factors, such as species and exposure duration, may contribute to the variability of test results. This finding is in agreement with what was obtained for larval S.
hasta, in that KCO WAF was of similar toxicity to Corexit® 9527 CEWAF, while Corexit® 9500 CEWAF was less toxic. [66] observed that Corexit® 9500A mixed with South Louisiana sweet crude oil produced toxicity similar to that of other types of dispersants. Those findings are in agreement with the data reported by [88], who observed that petroleum hydrocarbon products that did not undergo complete dissolution in the aqueous phase nevertheless, when dispersed, adversely affected marine organisms because of the joint effect of toxicity and physical fouling. [89] indicated that the Corexit® 9500A and 9527 chemical dispersant formulations did not differ in their toxicity to marine organisms. However, in a study performed by [90], dispersed oil exerted higher toxicity than undispersed oil on the early life stages of the orange-spotted grouper Epinephelus coioides. More arguably, the acute toxic effect concentration of the Corexit 9500 dispersant on biological species has a wide range: [87] concluded that it can be <1 to >1000 mg l⁻¹, while [91] indicated that it can range from 23 to 50 mg l⁻¹ for multiple marine species from the Gulf of Mexico. It is important to highlight the global differences in toxicity data among multiple marine trophic levels concerning oil and oil spill response agents (dispersants), owing to the variability of the analytical methods and biological species tested [11]. The acute toxicity data obtained from the current study will add further understanding to the global database on the toxicity of oil and oil spill-response agents, and investigating the toxicity of dispersed and undispersed oil on S. hasta will unquestionably increase taxa diversity for marine ecosystems in regions similar in nature to those in which the sobaity sea bream thrives.
Still, the decision of whether or not to use oil dispersants as an oil spill response (OSR) strategy is questionable and depends on the timing and location of the spill incident, considering that there will always be trade-offs in assessing the ecological benefits or harms associated with dispersant applications [92,93]. Although the outcomes of this study show dispersed oil toxicity similar to that of the parent oil, the application of oil spill response agents remains a controversial issue that requires constant scientific review and assessment. Approving dispersant use should also be evaluated individually, as each oil spill event (scenario) represents a different system of mixed physical, chemical, and biological entities. There is a paucity of toxicity data regarding selected marine fish species in different regions of the world that may be susceptible to major oil spills; therefore, additional ecotoxicological research on the acute toxicity of oil spill response agents to economically important fisheries resources is required to assist in the proper management of local resources.

Conclusions and Recommendations

In the present study, the toxicity of dispersed and undispersed crude oil was investigated against the larval stage of the sobaity sea bream S. hasta using field-relevant experimental durations and concentrations of test chemicals. Based on the results of the fish toxicity tests using different chemical treatments, the following conclusions have been drawn:
1. It was observed that a 1 g l⁻¹ seawater loading was most suitable for WAF preparation because, upon dispersant treatment, the KCO CEWAF was turbid at higher oil loadings, making it challenging to use in the toxicity assay, especially when attempting to count exposed larvae, which are not visible in a turbid solution.
2.
It was observed that individual dispersants had different capacities for dispersing crude oil, as represented by the TPH values compared with those of crude oil alone.
3. The toxicity of dispersed and undispersed oil to the larval stage of S. hasta was variable, demonstrating the different abilities of the oil dispersants to disperse oil into the aqueous medium.
4. Future research is required to understand the effects of other types of oil dispersants on the early life stages, as well as the adult stages, of commercially important marine fish species.
5. The long-term effects of dispersed and undispersed oil should be thoroughly assessed for other commercially important fish species.
The use of 3D-printed models in patient communication: a scoping review

3D models have been used as an asset in many clinical applications and a variety of disciplines, and yet the available literature studying the use of 3D models in communication is limited. This scoping review was conducted to draw conclusions from the current evidence and learn from previous studies, using this knowledge to inform future work. Our search strategy revealed 269 papers, 19 of which were selected for final inclusion and analysis. When assessing the use of 3D models in doctor-patient communication, there is a need for larger studies and studies including a long-term follow up. Furthermore, there are forms of communication that are yet to be researched and that provide a niche which may be beneficial to explore.

Graphical abstract: The use of 3D-printed models in patient communication: a scoping review (identify studies involving 3D models and communication; collate the available evidence; chart these data to make comparisons; use the comparisons to conclude what is to be done in future; follow-up and larger sample sizes are needed in future work).

3D modeling

In the last decade, 3D models have been employed in a variety of applications such as decision-making, surgical planning, trainee education and, more recently, communication. 3D printing can be used to construct individual models that are unique to a patient's anatomy. The literature on the use of 3D models in the medical field is now extensive; however, there are very few papers on the use of 3D models in communication.

Communication

It is generally well accepted that effective doctor-patient communication leads to overall better health outcomes. A plethora of studies have found that improving doctor-patient communication leads to increased adherence to treatment, a more effective recovery and better emotional health following discharge [1][2][3].
Additionally, the relationship between a doctor and a patient can have an impact on the length of hospital stay and the number of complications, in turn affecting the cost analysis associated with each patient [4,5]. There may be a variety of reasons for these observations. Effective communication between doctors and their patients means that patients are more likely to understand their illness and comprehend the potential consequences of not following their treatment plan [6]. Furthermore, a relationship of trust is often built in order to have effective communication, and this leads to patients feeling more comfortable asking questions and engaging in the consultation. Patient engagement within a consultation has been shown to influence the style in which a doctor conducts the consultation [7]. It was shown that the patients who are most engaged and responsive during their consultation receive more patient-centered care from physicians [8]. Patient-centered care has been the focus of clinical practice since medicine evolved to be less focused on a biomedical model and shifted to a psycho-biomedical model. This shift in clinical practice, and thus in doctor-patient relationships, was also associated with increased patient autonomy, with patients no longer being passively treated and, instead, being involved in decision-making and empowered to take control of their health.

Aim of scoping review

Given the abundance of evidence that effective communication increases patient satisfaction, some research has focused on how to improve communication during clinical practice [9][10][11][12]. Therefore, this review aims to analyze the available literature within all medical disciplines and use the current evidence to map this niche and inform future work. This review aims to answer the following question: what is the role of 3D models in doctor-patient communication? No other reviews have been conducted to analyze this aspect.
A scoping review was deemed the most appropriate methodology to follow, owing to the limited literature available and in light of the aims that have been set out, following considerations by Armstrong et al. [13]. A scoping review can be seen as complementary to other types of review, and although conducting this scoping review could act as a precursor to a systematic review, this is not its purpose. There is variability in the conduct of scoping reviews; however, the decision to conduct this review for the sole aim of mapping this broad topic is in line with guidance from a number of authors [14,15].

Methods

This scoping review was conducted following the Joanna Briggs Institute (JBI) methodology without the optional sixth stage [16]. The key steps were identification of the research question, identification of relevant studies, study selection, charting the data and, finally, collating, summarizing and reporting the results. The databases PubMed and Embase were used to identify relevant papers. Because research on 3D modeling and communication potentially has a psychological aspect, Embase was used to broaden the search for relevant papers. These results were then reviewed individually. Any relevant papers were charted onto a table and cross-referenced with the PubMed results.

Inclusion criteria

Inclusion criteria were defined before conducting the review and were detailed as follows:
• Studied communication (i.e., direct assessment of interaction and communicative dynamics among specialists or between specialists and nonspecialists, including patient understanding)
• Original research
• Full studies (no conference abstracts)
• Written in English

Exclusion criteria

Exclusion criteria were based on the inclusion criteria and were refined during the process of data collection. This was in line with guidance from Armstrong et al.
on how to conduct a scoping review, stating that inclusion and exclusion criteria may be adapted as data are collected [13]. This differs from a systematic review, in which inclusion and exclusion criteria must be defined from the onset of the process. Here, exclusion criteria included:
• Mentioned the word communication but in a different context (e.g., 'communication between cells')
• Review, editorial or conference abstract
• Nonhuman subjects
• Paper was about 3D bioprinting
• Communication was not an objective

Data charting

Two researchers independently assessed the abstract of each unique paper identified by the searches to determine whether it met the criteria for inclusion. If it was ambiguous whether a paper should be included, the full article was read and discussed by the two researchers in order to decide whether to include or exclude it. This process was documented using an Excel spreadsheet, which included the results of the search strategy and whether each paper was included or excluded, along with an exclusion category code. A data extraction tool was then constructed including: PMID, first author, discipline, type of study, themes, aim, general results, results specific to communication, type of condition, how data were collected, whether the data were qualitative or quantitative, the type of participant(s), type of communication, whether study participants were randomized, the sample size and, finally, whether there was a follow up for studying communication. Results were color coded according to medical discipline (e.g., cardiovascular, orthopedic etc.) to facilitate comparisons.

Logic model

Common themes identified across the papers were analyzed and mapped in a visual representation, creating a logic model to graphically encapsulate the theory of how the intervention (i.e., 3D models) produces its outcomes (Figure 3).
First, components that consistently occurred with the use of the models were identified. Subsequently, causal mechanisms were identified from these components; in other words, those observations that were noted as a result of the components associated with the model. Lastly, the end results were established from the literature; these outcomes included enhanced communication, building a rapport and benefiting patient education. Mapping the stages that led to the observed outcomes allows us to learn from the available literature how and why these models can be an asset in enhancing communication.

Specialties

The overall search strategy revealed 518 papers. After cross-referencing for duplicates, 269 papers were analyzed. Following the application of the exclusion criteria, 19 papers ultimately qualified for inclusion (Figure 1). The earliest paper was from 1991; however, 91% of the literature dated from 2013 onward. These included papers from within the fields of orthopedics (n = 9), cardiology and cardiovascular surgery (n = 7), ear, nose and throat (n = 1), gastric surgery (n = 1) and neurosurgery (n = 1) (Figure 2 & Table 1).

Study design

There was heterogeneity in the study design. Of the orthopedic studies, seven out of nine were randomized (i.e., model vs nonmodel), one was a cross-sectional study and another was a case series. Only one cardiac paper was a randomized study; four were cross-sectional studies, one was a case report and another was a cross-over trial in which the patients acted as their own controls. The one ear, nose and throat paper included was a cross-sectional study, the one neurosurgery study included was also of cross-sectional design and, lastly, the gastric surgery study that was included followed a randomized design procedure.

Sample size

The sample size of the 19 studies ranged from n = 1 participant (case report) to n = 103 participants in one randomized cardiac study.
The median sample size across all of the disciplines was n = 50 (interquartile range [IQR]: 55). The median sample size for the cardiac studies was n = 34 (IQR: 49), while for the orthopedic papers it was n = 74 (IQR: 50).

Data collection

Seventeen studies used questionnaires to gather feedback about the models, and two of these also included interviews. Of the two studies that did not use questionnaires, one used semistructured interviews and the other collected verbal feedback via telephone or email. None of the studies included a follow up on the use of the 3D models in communication. Some followed up patients for health outcomes, and only one mentioned that a follow up could be useful to assess the longer-term outcomes in communication, but it was not included in the published study. The questionnaire-based studies used rating scales to evaluate the models' features and, in some cases, complemented these observations by collecting free-text comments, which were qualitatively analyzed (n = 9).

Communication

The main type of communication studied was patient-doctor communication, with studies in the pediatric setting also considering communication between clinicians and families. Three cardiac studies evaluated communication between colleagues, while the other disciplines looked solely at doctor-patient communication. There was variation in the type of participant included, with a mix of studies evaluating patients, parents, clinicians and trainees, and combinations of these. The majority of the noncardiac studies solely analyzed patients when assessing 3D models in communication and did not record a clinician's perspective.

Feedback on models

In all 19 studies, feedback and results on the use of 3D models for communication were positive, irrespective of the type of respondent or type of communication. Illmann et al.
found that of the 85% of clinicians who found benefit from the models, 80% believed they would facilitate communication with colleagues and 72% believed they would be useful in communication with parents or families [17]. In an orthopedics study, patients were asked 'How much does the CT or 3D-printing model help you to gain a better communication with doctors?' [18]; patients in the 3D-printing group rated the 3D-printed model an average of 8.5, compared with 6.5 for those rating the CT scans. Some patients found the 3D models to be particularly useful when used in conjunction with other modalities, for example, when viewed alongside radiological images [19]. A minority of studies produced findings on communication that were not exclusively positive. One study asked clinicians to rank the applications in which they felt the models would work best; the clinicians ranked teaching as the most relevant and communication as the least relevant [20]. Another study reported patients having to emotionally confront the model as a barrier to its utility when faced with their brain tumors [19]. Further detail on the results of all 19 studies is shown in Supplementary Table 1. A logic model was created and is shown in Figure 3. Key components for the use of 3D models for communication purposes include the ability to visualize the anatomy, the patient-specific nature of the model and the opportunity for haptic handling.

Discussion

3D printing holds promise to assist and facilitate communication in clinical practice. Considering the broad search strategy, without limitation on the date of publication, and the broad literature on 3D printing in medicine, the results of this search revealed a very modest number of studies focused on communication, highlighting that this is an underexplored topic which is still in its infancy. Most of the analyzed literature dated from 2013 onward, which demonstrates the considerable expansion of the world of 3D printing in the last decade.
The reasoning behind the lack of emphasis on the communication aspect that 3D printing offers is not entirely clear. Possible reasons include a lack of guidance on how to analyze this application of 3D printing, as well as the time pressure that is typical of clinical commitments. This review may help to identify successful approaches and to recognize that this is an area with many opportunities for novel and exciting research.

Study characteristics

The sample size of the studies varied both between and within the disciplines, likely owing to differences in recruitment, participant type and experimental procedure. There were some challenges in analyzing the sample sizes of the studies because of the different recruiting strategies. For example, some of the studies did not specify how many clinicians were surveyed for the analysis of communication, so there was some disparity in whether clinicians were included within the sample size or their observations were anecdotally reported. Many of the studies did not use a comparator, making it difficult to draw conclusions as to whether 3D models are more or less effective in communication. Instead, these studies chose to gauge opinions on 3D models using, for example, a cross-sectional study design. This research could act as building blocks for further work conducted with a more pragmatic methodology, including a randomization procedure. Having these studies as a base for future work is useful in that the positive feedback gained on 3D models may encourage others to experiment with 3D-printing technology and explore its versatility in different types of communication. Although all of the studies received positive feedback for the 3D models, none included a follow up. Many of the studies followed up patients for other objectives, such as comparing surgical approaches and, therefore, measuring health outcomes after a sufficient time period had passed.
There was only one paper [19] that mentioned the possibility of following up the patients' thoughts on communication months after the initial consultation. This could be seen as a potential weakness in the design of many studies, as the interplay between a patient's emotional well-being and physical health is a well-established phenomenon that can have powerful implications [21]. Patients may have had positive feelings toward the 3D models when first introduced to the concept, but in the succeeding months these feelings may shift, for example, due to growing feelings of anxiety when visualizing their own anatomy. This was recognized by Biglino et al., who found that 30% of their sample of patients reported feeling more anxious when confronted with the model [22]. Including a follow up in future studies is therefore important to monitor the evolution of these dynamics, ideally engaging patients in the process. Reviewing the literature from a variety of disciplines has meant that comparisons can be made between them, and lessons can be extracted from one field of medicine and utilized in another. This was the case with the paper by van de Belt et al., which studied patient education and communication with 3D models for glioma treatment [19]. This paper was unique in that it focused on the potential psychological effects that may come with a patient's first experience of a patient-specific 3D model. It revealed both positive and negative psychological effects of this experience, which many other papers failed to acknowledge because they reduced communication to patient understanding. Instead of examining the effects of the model on the patient holistically, many papers chose to focus on how much patients can learn about their anatomy and condition, and neglected to perceive that this experience may be daunting for many patients, and that many patients may not want to know as much detail as a 3D model can provide.
Quantitative & qualitative assessment
The majority of the studies collected quantitative data; this was done most commonly via a Likert scale on a questionnaire filled out by the participant. There are advantages and disadvantages to both quantitative and qualitative data; however, what is imperative to remember is the complexity of communication. Communication represents a key part of human behavior, and assessing and conveying beliefs on a 1-10 scale is a reductionist approach that negates acknowledgment of the dynamic nature of communication. Furthermore, the empirical nature of scales may cause them to be labeled as objective; however, these scales are inevitably subjective. Despite the issues associated with gathering communication data via Likert scales, they hold their place in some contexts. For example, future work could collect quantitative data alongside qualitative data (e.g., via interview or free-text comments), an approach adopted in three of the 19 studies.

Communication as the primary focus & different facets of communication
Although all of the included studies studied communication, assessing communication was not a primary objective for the majority. Most chose to assess different uses of 3D printing, mainly surgical planning, and measured objective health outcomes. Often communication was only analyzed using a questionnaire that included one or two questions about communication with the 3D models among other, non-communication-based questions. Therefore, the number of studies that truly focused on communication was less than 19. There is a need for future studies to evaluate the use of models in communication as a sole or primary objective. A research effort should be placed on communication, as highlighted by the almost universally positive results observed across the 19 studies.
Most of the analyzed literature stated in its objectives that communication would be the focus of the study; however, the observations recorded were often focused on patient education. An example of this would be when researchers asked patients 'How much do you understand about your condition?' and 'How much do you know about the surgical plan?', as in [23]. Extrapolating improved communication dynamics from increased patient understanding may be misleading: although the two may intertwine, they are different entities. While increased patient understanding is desirable and may suggest effective communication, communication should not be assessed exclusively on patient knowledge and understanding. Furthermore, many of the papers assessed communication by simply asking 'Do you feel the models facilitated communication?', and while direct assessment is useful, it may be more valuable alongside assessing communication indirectly. A more meaningful assessment of communication could present more open questions, allowing participants to share their experiences more freely: how they use the model in their own time if they are given one to take home, or how the haptic handling of the model guided the consultation and helped them engage more in the discussion.

Limitations
This scoping review was limited to PubMed and Embase and did not consult other databases or gray literature. Also, studies not published in English were excluded. Due to the limited literature that has been published on 3D models and communication, a systematic review with a meta-analysis could not be conducted. It was therefore decided that a scoping review was more appropriate, recognizing the limitations of this but also the value of mapping the available literature to inform suggestions for future work.
Conclusion
This scoping review was undertaken because published studies on the use of 3D models for communication had not previously been assessed, and future work would benefit from guidance. Recommendations stemming from this review can inform the design of future studies exploring the use of 3D patient-specific models in facilitating communication. We argue that this is a clinically relevant area of research, with potentially important implications for patient empowerment and psychological adjustment.

Future perspective
Doctor-patient communication was the most frequently studied form of communication, with all of the noncardiac papers studying this type of communication. Although this review included studies involving only clinicians, these studies assessed clinicians' opinions on how 3D models affect communication with patients. This review did not include any studies that solely assessed communication between specialists. Some of the cardiac papers studied communication between parents/families and physicians; however, overall, the range of communication that has been studied is limited. Specifically, no studies were identified where the focus was the use of 3D models in communication between patients and their family and friends. This could be an interesting area to explore in future work, as participants in some of the included studies mentioned utilizing the models to better explain their condition (or their child's condition) to relatives and friends. As a patient's diagnosis, especially a life-long diagnosis, is often a large part of a patient's life, the ability to effectively communicate it with those to whom they are closest could be invaluable. Compared with patient-doctor communication (i.e., between a specialist and a nonspecialist), this is a case of communication between nonspecialists, as relatives and friends may have very little knowledge of normal anatomy.
The 3D models could thus add value to patients' lives beyond the consultation room. One area that has not been investigated is whether the use of models is more beneficial in communication with individuals with certain diagnoses. For example, within the cardiac context, six out of the seven studies looked at models of congenital heart disease; the other study looked at hypertrophic obstructive cardiomyopathy. Despite congenital heart disease encompassing a range of diagnoses, there was no direct comparison of whether any single diagnosis received greater benefit from the models. This may have been due to the limited sample sizes not allowing for reliable subgroup analyses, and the uniqueness of each patient's heart when it comes to complex anatomical defects. Additionally, congenital heart disease could be compared with other types of structural pathology, such as cardiomyopathies or valve disease. These comparisons between diseases could guide where 3D printing is best suited to be implemented, whether with patients with congenital heart disease, who tend to be younger, or patients with valvular disease, who tend to be older. If positive feedback is received from a range of patients with varying diagnoses, this would only strengthen the case for implementing 3D models in clinical practice to ultimately improve patient care. Much of the research that is currently available has a limited sample size and is often not randomized. In the future, this area of research can progress further by conducting larger-scale randomized studies, comparing the use of 3D models with an effective comparator such as conventional procedures. Furthermore, this research should include both qualitative and quantitative data collection so that a broader sense of patient perspectives about the models can be ascertained.
In doing this, richer data can be gathered, including any emotional and psychological effects the participants may feel toward the models, which will, in turn, affect communication. Additionally, research into how models can be used by patients to communicate with friends and family about their condition would be a novel approach and would provide insight into how 3D models can be employed in patients' lives. It will be interesting to evaluate the role of the models in aiding communication not only across disciplines but also during specific milestones in the patient's journey, such as transition into adulthood, pregnancy and end of life. Based on the evaluation of these studies, future studies could benefit from incorporating the following suggestions, summarized in Figure 4, in their study design. Studies should include a large sample size, which will depend on the pathology being studied, as well as utilizing a randomized study design including different types of participants and collecting opinions from specialists and nonspecialists. There should be a shift in focus away from solely evaluating patient understanding, and more toward the assessment of patient emotion and its effect on communication. A follow up should be included. Communication between nonspecialists is a new area to explore. Semistructured interviews with a smaller population could be used to gather more detailed and in-depth perceptions, alongside questionnaires with the larger population.

Objective
• To chart the available data from studies on the use of 3D models in communication and analyze these data to draw conclusions and inform future work.

Methods
• A PubMed and Embase search was conducted with a comprehensive search strategy, from which papers were identified and analyzed in detail.
• Comparisons were made between studies to identify strengths and weaknesses in both protocol and results.
Results
• A total of 269 papers were identified, from which 19 papers were deemed relevant and included in the final analysis.
• The majority of papers had positive outcomes for communication.

Discussion
• Larger randomized studies are needed with communication as a primary objective.
• These studies should include a follow up to observe results over time.
• Communication between nonspecialists is an interesting concept that is yet to be explored.

Supplementary data
To view the supplementary data that accompany this paper please visit the journal website at: www.futuremedicine.com/doi/suppl/10.2217/3dp-2021-0021

Financial & competing interests disclosure
The authors gratefully acknowledge the support of the British Heart Foundation (CH/17/1/32804), the Bristol BHF Accelera- No writing assistance was utilized in the production of this manuscript.
The Role of Judges in Dealing with Community Development

The purpose of this paper is to determine the role of the judge in facing the development of society. Judges are part of the important structure of the judicial power branch in Indonesia. Judicial power is an independent power to administer justice in order to uphold law and justice. Judges are given the power to judge. Judges have an important role as law enforcement officers in the law enforcement process in Indonesia, and must therefore pay attention to the objectives of the law. The role of the judge carries very heavy responsibilities: the judge is responsible to God Almighty, to the nation and state, to himself, to the law, to the parties and to society. Judges and society are elements that cannot be separated in a legal system. The judge is a product of the society and culture from which he comes and in which he lives. The function of the judiciary is to decide disputes between individuals and individuals, individuals and communities, even individuals or society and the state, and to form or make policy.
Introduction
The Indonesian state is, in fact, a constitutional state, so any violation of the law, or failure to obey the existing legal rules, will bring strict sanctions for the perpetrators. Everyone who lives as a member of society should help to create social order properly, namely by upholding the applicable law. The law must be enforced without selectivity in legal cases. The law recorded in written regulations, as well as in the rule of law and in unwritten law, is abstract and generally applicable; the law becomes concrete and specific once it has been applied and enforced in particular cases. Laws are basically made to create order and peace in society. Therefore, a legal system must run like a series of community organs that complement each other and maintain a high awareness of the applicable law. The paradigm that views law as a system has dominated the thinking of most legal circles, both theorists and practitioners, since the birth of the modern state in the 17th century until now, namely the paradigm that considers law as an order. In Weber's view, law is an order that is coercive, because the upholding of the legal order (which differs from other, non-legal social norms and orders) is fully supported by the coercive power of the state. Weber distinguishes between various legal systems on the basis of substantive and formal rationality. Weber said that law has substantive rationality when its substance consists of general rules in abstracto that are ready to be deduced in deciding various concrete cases.
On the other hand, law is said to lack substantive rationality if every case is resolved on the basis of political or ethical policies unique to its order, or is even resolved emotionally, without reference to general rules that are objectively present. Likewise, law can be said to be formally irrational if it is obtained only through inspirations, or through whispered revelations (wangsit) said to be received by charismatic leaders, so that its truth and worthiness cannot be objectively tested (Wignjosoebroto, 2008: 36-37). Law is one of the fields whose existence is essential to ensure the life of society and the state, especially as Indonesia is a state of law, which means that every act of state officials must be based on law and every citizen must obey the law. With today's increasingly complex world developments, serious problems often arise that need attention as early as possible. These problems, whether violations of existing norms in social life or of established rules, tend to create phenomena that are contrary to moral and ethical rules as well as legal rules. The violations that occur are the reality of human existence: humans cannot accept the rules in their entirety. If such things are allowed to drag on and receive too little attention, they will cause unrest in society and disturb public order (Iswanty, 2012: 390). The enforcement of the rule of law is a human effort to achieve the order that humans need. In terms of enforcement, the main thing is to synergize the three pillars: laws and regulations, law enforcement officers, and the legal culture of the community. Judges, as part of the law enforcement apparatus (judicial power), have a very important role in the birth of a just rule of law, and through their various decisions judges are able to become a means of regulating public order.
Courts, through judges' decisions, have the role of transforming ideas that come from abstract moral values into concrete events, so that the judge's decision visualizes abstract principles as concrete legal rules. In every case the incident will be seen, acknowledged or justified. The judge proves, by means of evidence, in order to obtain certainty that the incident qualifies, including in which legal relationship it stands. The judge will look for provisions that can be applied to the legal event concerned. So the judge will apply the law to the event, evaluate it and, in turn, assign the law to the incident that has occurred; of course, he will give justice according to his judgment. The laws that are applied in society will have an impact on society. In the law enforcement process, the court decides on a case. Humans are social creatures, creatures that live in groups. The human community deliberately creates an order in the form of social rules, a process that has been going on continuously for centuries. This social order is institutionalized through a process of habituation (habitualization): each action that is often repeated eventually becomes a pattern. This development gradually crystallizes into habit (folkways). There is a strong tendency for some of these behaviors to be formulated in community laws. As rules of law, they regulate what can and cannot be done, or what procedures must be followed, and the sanctions imposed by society on individuals who cannot conform are strict. The creation of this law is in line with the natural desire of humans to obtain justice in life together as members of society, so as to create order in a social structure. In addition to being a tool to change society, law can also lag behind social change if it turns out to be unable to meet the needs of society at a certain time and place, which can hinder development in other fields.
The abandonment of the rule of law can also lead to disorganization, a condition in which the old rules have faded away while the new rules meant to replace them have not yet been developed or formed. This situation can then lead to anomie, a chaotic state in which there is no guideline by which community members can measure their activities. Thus, humans who live in society will, inevitably, at every stage of their life be confronted with an applicable rule or law. Judges are at the forefront of deciding and adjudicating a legal event. What serves as the reference in the objective reality of society, arising from the creation of rules or sanctions issued by the state or government, lies at the legal level. The law is strict and compelling when it aims to create order and peace in a society. The law that exists in society is the law used to regulate cases rationally, that is, empirically, not speculatively. If the law is enforced with justice, the law will be upheld in society. The law does not look at social classes; it is people's legal awareness that will determine the course of law enforcement in Indonesia. Precisely because the role of the judge is so important in facing the development of society, it becomes the basis for fulfilling the rights of justice seekers, namely the community, in enforcing the law in an integrated manner through the available court channels. The description above rests on the premise that people's life activities are dynamic and always develop rapidly, especially in the current era of globalization; especially when society is in a transitional period, as Indonesia is today, with the diversity of cultures and values that develop within it, as well as a globalization that brings various influences from other legal systems. Therefore, a law enforcer, including a judge, has the role of enforcing the law in court by making fair legal breakthroughs.
For this reason, the authors are interested in pouring their analysis and research into a paper entitled 'The Role of Judges in Facing Community Development'. Hopefully it will be able to provide a systematic and holistic perspective on the legal scientific treasury related to the implementation of law enforcement by judges in facing the dynamic development of society. The research method consists of systematic and logical steps in searching for data relating to a particular problem, to be processed, analyzed, and then drawn into conclusions. The research model is qualitative, with a normative juridical research approach, through statutory regulations, legal doctrines and legal theories. The data collection technique in this research is the documentation study of various laws and regulations, legal doctrines and legal theories related to the role of judges in facing community development, analyzed using data reduction, data presentation, and the drawing of conclusions.

Arrangement of Judicial Powers
The concept of judicial power in Indonesian law is closely related to the political dynamics of Indonesian history. Judicial power was long designed to support the political power of the rulers, both during the Old Order and the New Order. In the Reformation Era, this abuse of power was corrected by the amendment to the 1945 Constitution (UUD 1945), which attempted to position the judiciary and judges apart from executive power. However, some do not realize that after the amendments to the 1945 Constitution, the personal independence of judges was actually eliminated (Irianto, 2017: 1). Throughout the history of Indonesian justice, the implementation of judicial duties has always been influenced by the politics prevailing in each era. From the era of Dutch East Indies colonialism to the Reformation Era after independence, politics has always influenced the Indonesian judicial system.
According to Sudikno Mertokusumo, "the judicial system is influenced by the system of government, economy, and politics", and the discussion of Indonesian justice "cannot be separated from the development of the constitution of the state in Indonesia" (Mertokusumo, 2011: 259). Long before the establishment of the Republic of Indonesia, the official regulation of the judicial system had been in place since the era of Dutch East Indies colonialism. This arrangement was adjusted to the political interests of the Dutch East Indies colonial power, which was colonizing Indonesia. That is why the scope of judges' authority, procedural law, and the division of types of court were regulated in a discriminatory manner by the Dutch East Indies colonial power, so that it remained entrenched in the colony. During the Japanese occupation (1942-1945), according to Sudikno Mertokusumo, the judicial system in effect was only a simplification of the judicial system of the Dutch East Indies era (Mertokusumo, 2011: 22). The Indonesian judicial system after the proclamation of independence on 17 August 1945 was, to some extent, a continuation of the judicial system inherited from Dutch East Indies colonialism. However, this continuation was not one hundred percent the same as during colonialism, because certain adjustments were made in line with the spirit of independence and the values held by the Indonesian people. The judiciary, for example, was no longer dominated by Dutch judges. Even so, the existence of the judicial system and court institutions in the course of Indonesian history has not been free from political influence. In the Soekarno era, under the Guided Democracy policy, the freedom and autonomy of judges were limited. Likewise, during the Soeharto era, through the policy of making economic growth the commander-in-chief, the executive branch regained control of the judiciary.
The two regimes confirmed executive power over the judiciary through the issuance of various laws (UU). This is reflected in Pompe's research, which found that over 40 years the government virtually never lost when litigating against society (Pompe, 2012). This happened because the government has always been a repeat player: the party with all the resources to win the case, as Marc Galanter put it (Galanter, 1974: 95). During the New Order era, a two-roof system prevailed. In the technical aspects of the judiciary, judges were under the Supreme Court; in organizational, administrative, and financial matters, however, judges were under the government bureaucracy (Ministry of Justice). So strong was executive power that it was difficult to tell whether judges served the government or the political party in power. In practice, judges were more subject to the executives who determined their welfare and careers. In fact, the executive could use its political power to gain the loyalty of judges. At that time, too, judges were given the status of civil servants (PNS) whose monoloyalty was directed toward the government (Irianto, 2017: 5). Meanwhile, in relation to its technical judicial functions, the Supreme Court's internal supervision of judges applied. Internal supervision was carried out by judges of higher position over the judges below them. Although the judges were under the roof of the Supreme Court, executive power was so strong that it could penetrate the judicial area. In this imbalanced power relation, the internal supervision of judges could be used to pressure judges into loyalty to the executive. The picture of the world of justice at that time was one of potential abuse of power and the proliferation of nepotism and corruption in the judiciary. The arrival of the reform era that ended the rule of the New Order had a very significant impact on judicial reform in Indonesia. Law No. 35 of 1999 is a milestone for the one-roof policy.
The Supreme Court (MA) now deals with all aspects, including the administrative, financial, and organizational matters previously in the hands of the executive. The transfer was carried out in stages for the general, religious, military and state administrative courts. The conversion of two roofs into one roof was embodied in 2001 through the third amendment to the 1945 Constitution, normatively regulated in Article 24 paragraph (1) of the 1945 Constitution. Indonesia is a state of law that always prioritizes law as the basis for all activities of the state and society. Indonesia's commitment to being a state of law is stated in writing in Article 1 paragraph (3) of the amended 1945 Constitution of the Republic of Indonesia. Every country wants law enforcers and laws that are fair and firm, without favoritism. One form of law enforcement is enforcement in court to settle the cases brought before it. During the examination in court proceedings, the judge who leads the trial must be active in asking questions and must give the opportunity to the defendant, represented by his legal adviser, to question the witnesses, as must the public prosecutor. In this way, it is hoped that the material truth will be revealed; the judge is responsible for everything he decides.

The Role of Judges as Law Enforcers
The most important components of the rule-of-law principle are the "separation of powers" and the "independence of the judiciary". The independence of the judiciary is a symbol of fair and impartial law enforcement. According to Soerjono Soekanto, the essence of a good law enforcement process is the harmonious application of values and rules, which are then manifested in behavior.
This pattern of behavior is not limited to members of the community, but also includes the "pattern-setting group", which can be interpreted as law enforcers in a narrow sense (Witanto and Kutawaringin, 2013: 3; Badriyah, 2016: 1). The judge, personified in an elected human figure called a "qadi", is often also depicted as the goddess Themis with her eyes closed as a symbol of neutrality and impartiality; she will not look right or left or flirt with one of the litigants. In the teachings of classical legal philosophy, the judge must follow an "unconditional obligation" without harboring any bad intention. Therefore, according to Montesquieu, judges act only as la bouche qui prononce les paroles des lois (merely the mouth that pronounces the words of the law) (Irianto, 2017: 9). Seen within the organizational structure, and mechanically, this makes the judge a person who is free of values and free from interests, because he is released from everything that is human and completely avoids environmental influences. The problem is how it is possible for judges to analyze cases only "purely" on the basis of the prevailing legal norms. A judge has at least some form of accountability in adjudicating a case; being responsible means the willingness and courage to carry out, as well as possible, everything that falls within his authority and duties. The responsibilities of judges are as follows:
1. The responsibility of the judge towards God Almighty;
2. The responsibility of judges to the nation and state;
3. The judge's responsibility towards himself;
4. The responsibility of the judge towards the law;
5. The responsibility of the judge towards the parties;
6. The responsibility of judges to society (Annisa, 2017: 161-163).
So heavy is the responsibility of judges in examining and deciding cases that it places judges in a noble position. Because of his position, the judge is confronted with several legal principles attached to that position, including: 1.
A judge (court) may not reject a case submitted to him on the grounds that the law is unclear. This principle stipulates that a judge presented with a case is obliged to examine it and may not refuse on the grounds that the law is not clear; the judge must be able to establish the truth of the criminal event in the case before him, and he must be able to find the law. 2. What has been decided by the judge must be considered true (res judicata pro veritate habetur). This provision means that the judge's decision in a case submitted to him is taken as correct, because the judge rules on the basis of the valid evidence presented to him, supported by his conviction of the perpetrator's guilt based on that evidence. 3. The judge must adjudicate, not make the law (judicis est jus dicere, non dare). This establishes that a judge, whose main task is to examine and decide a case, does so on the basis of valid evidence and his conviction of the truth founded on that evidence, so that the decision can be accounted for and considered fair. Judges may not issue decisions that are not based on evidence yet must be obeyed by the parties. Even so, in adjudicating a case the judge determines the law concretely, so that the decision in a concrete case can be regarded as law (judge-made law); in this formation of law, however, the judge's decision is limited by and bound to statute. 4. No one is a suitable judge in his own case (nemo judex idoneus in propria causa). This provision implies that a judge may only examine a case that has no connection with himself or his family; the judge examining the case must not have an interest in it, for instance because the parties are related to him by blood or kinship (Priyanto, 2010: 6). The principles mentioned above form the basis for carrying out the task of examining and deciding cases.
The task of examining and deciding cases is not an easy one. The judge must remain objective towards the cases presented to him. He must examine each case carefully and be able to verify that the case submitted to him is genuine, not fabricated, and not colored by other interests, especially political ones.

Judges and Community Development

Judicial power, in English often referred to as "the judiciary", has as its first and main function resolving disputes between individuals, between individuals and communities, and even between individuals or society and the state. Its second function is to form or make policy (KY, 2018: 62). If we liken the legal structure in a legal system to a vehicle, the judge as the driver determines the direction and outcome of a legal event; in the legal context, this vehicle refers to the legal institution. In every process of social change there is usually a force that pioneers the change, an agent of change. We know various social groups as agents of change, for example the government, schools, political organizations, intellectuals, farmers, and so on. What about the law: to what extent does the law operate in changing society? This is an important question, considering that Indonesian society is experiencing development and change in all fields. Development has dynamic aspects, even though many argue that the law maintains the status quo. The reality of life offers many examples of very rapid and observable change. Change is seen in the fact that the present differs from the past, and at the same time in the knowledge that the present, whatever its difference from the past, is actually the result of the development of what originally existed.
According to the Galilean-Newtonian conception, nothing in the universe is eternal; change is a necessity governed by cause and effect. As Longfellow wrote: "All must change to something new, to something strange" (Suteki, 2015: 26). Legal change and social change, and the relationship between them, have always been a concern and subject of study for legal scholars and other social scientists. The first concern is one of definition: what is meant by social change? In simple terms, social change can be understood as a restructuring of the basic patterns through which people in a given societal structure engage with one another in the fields of government, politics, law, economy, education, religion, family life, and other activities. According to Mochtar Kusumaatmadja, social change that occurs in a structured way, in the form of regular and systematized change in society, is a form of community development. Community development, or social change, is a matter of renewing ways of thinking and attitudes to life; without changing attitudes and ways of thinking, the introduction of new institutions into social life will certainly not succeed (Kusumaatmadja, 2006: 10). We can say that the role of law in development is as an instrument to ensure that social change proceeds in an orderly way. Orderly social change through legal procedures, whether in the form of statutory regulations or judicial decisions, is better than disorderly change, especially change through violent means; change and order are the twin goals of a society undergoing change (Kusumaatmadja, 2006: 20).
Learning from the history of the advanced societies of today, it can be seen that the social change or development they have undergone generally followed a long journey carried out systematically through successive stages: the unification stage, the industrialization stage, and the welfare-state stage. In the first stage the central problem is how to achieve political integration to create national unity; in the second, the main task of the state is to develop the economy and modernize politics; in the third, the main task of the state is to protect the people from the negative side of industrialization and to correct the mistakes of the previous stages by prioritizing community welfare. National unity is a prerequisite for an industrialized society, and industrialization is the path to a prosperous society (Rajagukguk, 1997: 1). Developing countries, including Indonesia, in their efforts to catch up with developed countries, generally try to pursue these three stages of development together. For Indonesia in particular, if the three stages are to be pursued simultaneously, one key to success is a legal culture able to accommodate the goals we are trying to achieve. In the context of societal or social change, law must be understood and developed as a single system containing various elements that are interconnected and inseparable from one another (Asshiddiqie, 2006: 2). Renewal of attitudes, characteristics, or values is necessary; what matters is which community values will be abandoned and replaced with new values deemed appropriate to community life, and which old values will be preserved (Kusumaatmadja, 2006: 11).
Every case submitted to the court must still be tried. Even if, after trial, the judge declares that the case does not fall within the scope of his competence, or that it is not a legal event that must be tried, the court must state this in the form of a decision, not by rejecting the case before it is tried. The judge is a product of the society and culture from which he comes and in which he lives. Judges as individuals, with their various backgrounds and the realities of their experience, are important to study. By understanding the existence of judges from several points of view, a comprehensive explanation of the various problems judges face can be obtained, as well as of how their functions and roles are carried out, the obstacles they face, and the access and support needed to maximize their knowledge and abilities, so as to produce decisions of good quality that satisfy society's sense of justice (Irianto, 2010: 2). As a broader, living system, society consists of various subsystems: cultural, social, political, and economic. Talcott Parsons' cybernetic theory holds that the primary function of the social subsystem in a wider society is to integrate interests that are diverse, plural, and even mutually opposed, and that therefore often generate friction in social interaction. Within the broad social system, law falls within the social subsystem, so that the main function of law is also to serve as an integrating mechanism. In law-enforcement practice, the courts in Indonesia carry out this integrative function through judges, so that judges have the responsibility to bring justice to the people and to search for the truth in order to create social integration, not, conversely, social disintegration (Suteki, 2015: 80-81).
Judges are believed to be figures capable of integrating various interests, differences, and frictions through a conversion process equipped with inputs in the form of adaptive functions, so as to maintain the pattern of integration. After this conversion process in the court institution, the decisions made by judges are expected to fulfill the elements of efficiency, legitimacy, and justice. In the common-law system, judges can create new law, known as the judge-made-law principle, so that judges are truly independent, not shackled by statutory regulations alone (la bouche de la loi). In the context of law enforcement in Indonesia, a judge examining and adjudicating a case cannot refuse to try it even where there is no legal regulation governing it or the regulation is unclear. This is expressly stated in Article 10 paragraph (1) of Law No. 48 of 2009 concerning Judicial Power: "Courts are prohibited from refusing to examine, hear and decide a case filed on the pretext that the law does not exist or is unclear, but are obliged to examine and judge it." Where a judge has to decide a case for which there is no legal rule, he must try to find the law, both written and unwritten. The judge has great authority. Judges, as the main actors of the judicial power, have the responsibility to hear, examine, and decide cases. With his ruling, a judge can resolve disputes and even extinguish rights of ownership. So great is the power of a judge that he is obliged to decide any problem that comes before him, however unclear or incomplete the rules, or even in the absence of rules, by seeking to dig for or make legal findings (rechtsvinding) (Wijayanto, 2018: 5). The role of the judge is to understand the purpose of law in society and to explore the justice and values that live in society, because the law in society is like a living organism.
Law in society is always factual and in constant change. The changes can be minor, gradual, and difficult to observe, but they can also be drastic. The relationship between law and reality is so fluid that the law is always changing. In many cases, changes in law are the result of changing social realities. But there are times when the law falters in keeping up with changes in society, causing a "gap" between society and the law. This means that the law does not exist in a vacuum, but rather tries to adapt to the development of society. The history of law is a history of adaptation to the changing needs of life. Here the judge has the main role and responsibility for changing the law. Judges can make changes by interpreting the law; in doing so, their role becomes significant in bridging the gap between outdated laws and developments in society. A judge cannot say that legal change is the sole responsibility of the legislature; the courts must take on the role of changing the law jointly with it. Justice-seekers naturally want the cases they bring to court to be decided by judges who are professional and of high moral integrity, so as to produce decisions that contain not only legal justice but also moral justice and social justice. Law, society, courts, and judges cannot stand alone; their relationship is dynamic, because change in one aspect affects the others. The problem is that legal change following social change is a double-edged sword. When a conscious change of law is made to capture the needs of society, the gap between the two can be bridged.
A judge's decision pronounced in a trial open to the public will influence not only the litigating parties but also the wider community, so the decision must reflect not only justice for the parties but also public justice. A good decision is one that can reflect a change in the dynamics of people's lives for the better, or at least serve as a deterrent to unlawful behavior, so that decisions become an effective medium for creating legal order in society. On a small scale, decisions are the medium for resolving the cases being tried; in a broader sense, the considerations in a decision crystallize into a generally accepted rule in society because they contain values that are good for people's lives. The responsibility of judges to the community does not mean fulfilling every wish of the community, or simply following the mainstream, because a judge's accountability to the community is not directed at a particular group with an interest in the case under examination. At present, the community often becomes a tool for stakeholders to influence judges' decisions, so that judges become shackled by public opinion and demands, even though what a group of people voices is not always true: the information about case material conveyed to the public is not always correct, and is sometimes engineered into issues that contradict the real facts. In reality, judges are strongly influenced by diverse identities, at least those grounded in life history, ethnicity and cultural tradition, class, religious belief, political views, gender, and even scientific ideology. Thus, the "juridical-normative" decision actually also carries a "sociological-cultural" claim, in line with the diverse and overlapping identities of a judge.
However, this is sometimes not realized by judges themselves or by the wider community. The awareness that judges are human encourages us to see them in their full human quality, and thus to see that judges are also a product of society. A society with shifting values and a permissive attitude toward corruption may well produce judges who are insensitive to corruption. For example, judges who interpret acts of corruption strictly within the text of the law, confined to the elements of self-enrichment and loss to the state by government employees and corporations, will fail to capture acts of corruption so diverse in form and scope that they escape being categorized as corruption at all. Such judges find it difficult to produce new legal findings that go beyond formal procedural justice and reach for substantial justice. The judge should interpret his existence as part of the community, even as a product of society; this shapes the way he responds to the various cases he has to decide. Another thing that is no less important is to study the judge's understanding of himself as part of the legal structure in law-enforcement institutions (Rahardjo, 2003: 224; Makbul, 2016: 19). The use of law as a tool of social change requires legal experts to have knowledge broader than legal knowledge in the familiar sense: a legal expert in this context must be able to understand the interaction between law and the other factors developing in society, be they social, economic, political, cultural, or otherwise. Difficulties here often lead to lag, and even to the failure of the law's role in accommodating development interests or social change. Changes in the field of law will have an impact on other areas of life, and vice versa. The legal function, on the one hand, can be used as a means of changing society towards a better order and, on the other, as a means of maintaining the existing social order.
Legislation as a tool of social change, besides the advantages above, also has weaknesses: laws and regulations are often unable to keep pace with the rapidly changing development of society, and they are also unable to accommodate comprehensively all the interests that exist in society. The plurality of society in Indonesia (as part of Asia) should be the basis for the direction of law enforcement. Law and society have a very strong relationship; Tamanaha even said that law is a peculiar form of social life. Brian Z. Tamanaha holds that law and society share a frame, which he calls The Law-Society Framework, with certain relational characteristics. Understanding law and the way law works in Indonesia can no longer be approached only through the three classical approaches: the philosophical, the juridical, and the sociological. Another approach is needed, such as the fourth approach Menski offers, known as the legal pluralism approach. The legal pluralism approach relies on the link between the state (positive law), social aspects (socio-legal), and natural law (moral/ethical/religious). A method of law that relies only on positive law, with its rules, logic, and rule-boundness, will only lead to a dead end in the search for substantive justice. Law enforcers must master the legal pluralism approach in order to make legal breakthroughs through the non-enforcement of law, because this approach is no longer imprisoned by legal formalism but has leapt towards consideration of living law and natural law. The method of law in Indonesia is not well served by a purely juridical (positivistic) approach, as in the country of origin of Indonesian law (continental Europe), without considering moral/religious aspects and socio-legal considerations.
The liberal, individualistic character of modern law in Indonesia must be balanced with the character of wisdom and compassion, unity, and the sense of justice in society reflected in the living law, so that the law is able to deliver the complete justice that is the goal of progressive law enforcement. A progressive judge, in carrying out his duties and authority, relies on two components: rules and behavior. The law is for humans, not the other way around. Starting from this basic assumption, the judge is present not for himself, but for something broader. So great is the responsibility of a judge, not only to the parties involved in a case but also to the community, and the highest responsibility of a judge is to God. Basically the judge is the representative of God in the world, and as a form of this responsibility, every verdict a judge issues begins with the sentence "For Justice Based on the One Godhead". The creativity of judges in making legal breakthroughs is part of the spirit of liberation from the entrenched culture of law enforcement (administration of justice), which is considered to hinder legal efforts to resolve problems. Judges in the law-enforcement process must dare to free themselves from standard (normative-positivistic) patterns that emphasize mere normative justice, because the search for justice cannot be seen only from the normative aspect but also from the sociological one, especially where aspects of social justice and the constitutionality of a law are concerned. Therefore, the role of law as a tool for social change will always involve other legal components, one of which is the judge (the legal structure), working as an inseparable and complementary system, so that gaps in statutory regulation can always be filled by law.
This is the law that in fact lives and is obeyed in the community; in the life of society there is never a legal vacuum.

Factors Affecting Judges in Actualizing Legal Justice in Society

A judge's verdict takes the form of words (language) that embody the juridical thinking of its maker, the judge. He will establish the facts, qualify them, and draw conclusions. This activity appears in the application of a legal rule to the collection of events presented by the parties, within a reasoned chain of considerations (motivation), so that there is a logical sequence between the legal considerations and the operative part of the decision (amar). No less important, conceptually the decision must provide individual justice in each case. For every individual, the most important thing is that the decision fits and satisfies a sense of justice. Unfortunately, because there are two opposing parties in a case, they perceive the decision differently. The losing party tends to say it is unfair, that there was collusion, and various other things that discredit the court. The will of the law, in everyday reality, is carried out through humans. From this standpoint, the humans who enforce the law occupy a truly important and decisive position: what the law says and promises ultimately comes true through their hands. Therefore, judges as subjects of law enforcement are able to influence whether the law is upheld in social life. Psychologically, everyone wants to live happily and avoid misery, so that a party declared the loser will look for ways to improve his position. Likewise, judges bring psychological dispositions to the examination of each case, up to the moment they issue a decision. The law provides legal remedies for a party not satisfied with the judge's decision. Yet even after the highest level of appeal has been exhausted, and the case is about to be executed, parties sometimes do not voluntarily carry out the verdict.
This, of course, is a burden on the court. To actualize the idea of justice, judges need a conducive situation, shaped by factors both external and internal to the judge.

a. Guarantee of the Freedom of the Trial/Judge (Independence of the Judiciary)

Judicial freedom is a necessity for the establishment of a rule-of-law state (rechtsstaat). The judge must be independent and impartial in deciding disputes, and in such a conducive situation the judge is free to transform ideas into the considerations of the decision. In Indonesia, guarantees of the independence of the judiciary are laid down in Articles 24 and 25 of the 1945 Constitution of the Republic of Indonesia, as emphasized in their elucidation: "Judicial power is independent power, meaning free from the influence of government power; in connection with this there must be a guarantee in the Law regarding the position of judges." This is emphasized again in the elucidation of Article 1 of Law No. 14 of 1970 jo. Law No. 4 of 2004 concerning the Principles of Judicial Power, which states: "This independent judicial power means that judicial power is free from interference by other state powers, and free from coercion, directives or recommendations coming from extra-judicial parties, except in matters permitted by law." Independent judicial power serves two purposes: first, to carry out the functions and authorities of the judiciary honestly and fairly; second, to enable the judicial power to supervise all actions of the authorities (Soekanto, 1993: 5). The consequences of an independent judicial power are: a. The rule of law: every dispute resolution must follow the process prescribed by law, based on the principles of equal treatment before the law and equal protection of the law; b. Justice as a pressure valve.
The judiciary is given authority as a pressure valve against any violation of law committed by anyone, without exception, including all forms of unconstitutional conduct and breaches of public order and propriety; c. Justice as the last resort: in upholding truth and justice, the judiciary is the last place of recourse; d. The judiciary as the implementer of law enforcement; e. The judiciary is justified in acting "fundamentally undemocratically": it requires no access from anyone, no negotiation with any party, and no "compromise" from the litigating parties. There is general agreement in the world's court community that judicial institutions are expected to do the following: 1. provide individual justice in individual cases; 2. operate in a transparent manner; 3. provide an impartial forum for resolving legal disputes; 4. protect citizens from the arbitrary use of government power; 5. protect the weak; 6. maintain formal records of decisions and legal status. On the basis of these provisions, the regulation of the independence of the judicial power actually looks solid.

b. Quality of Judge Professionalism

Every judge is required to carry out his duties professionally, that is, with the ability and skill to render decisions efficiently and effectively: in applying the law, in weighing decisions against the values of justice that grow and develop in society, and in predicting the social reactions and impacts of the decisions that are handed down. This professionalism is one side of the coin of the "profession"; the other side is professional ethics. Every profession thus has two aspects: professionalism as technical expertise, and professional ethics as the basis of morality.
Professionalism plays an important role, especially when judges carry out the juridical responsibilities and obligations attached to their positions. The Basic Law on Judicial Power, Law No. 14 of 1970 jo. Law No. 4 of 2004, obliges judges (Article 14 paragraph (1)): "... may not refuse to examine and try a case filed on the pretext that the law does not exist or is unclear, but are obliged to examine and judge it." To realize the professionalism of judges, judges should have deep mastery of knowledge and broad insight, reflected in the weight of the decisions they hand down, with the ability to know, understand, and internalize the applicable law, and the courage to make decisions based on law and justice.

c. Living the Professional Ethics of Judges

The professional ethics of judges are the principles of morality that underlie the judicial profession, serving as a guide for behaving and acting while holding the office of judge, both in and out of official duties. The Indonesian Judges Association (IKAHI) has formulated a code of honor for Indonesian judges in the form of the Panca Dharma Hakim, a form of supervision of its members. The values of the Panca Dharma Hakim are abstract in nature and consist of: Kartika: devotion to God Almighty; Chakra: being fair; Candra: being wise; Tirta: being honest; Sari: being virtuous.

Conclusion

Based on the discussion above regarding the role of judges in facing the development of society, the conclusions are as follows: 1. Judges are an important part of the structure of the judicial power branch in Indonesia. Judicial power is an independent power to administer justice in order to uphold law and justice. Judges are state judicial officers empowered by law to judge. Judges have an important role as law enforcement officers in the law-enforcement process in Indonesia; in examining and deciding cases, they must pay attention to the objectives of the law itself: legal certainty, justice, and legal benefit.
The great role of judges entails a very heavy responsibility: the judge is responsible to the one God, to the nation and state, to himself, to the law, to the parties, and to society. 2. Judges and society are elements that cannot be separated from a legal system. The judge is a product of the society and culture from which he comes and in which he lives. The first and main function of the judicial power branch is to resolve disputes between individuals, between individuals and communities, and even between individuals or society and the state; its second function is to form or make policy. Community development is a necessity of social change, and legal change and social change, with the relationship between them, have always been a concern and subject of study for legal scholars and other social scientists. Every case submitted to the court must still be tried; even if, after trial, the judge states that the case is not within the scope of his competence, the court must continue to judge by exploring the values that live and develop in society. Where society moves faster than the law itself, and in the conditions of Indonesia's plural society, judges in deciding a case must draw on various approaches, including the juridical, philosophical, sociological, and legal pluralism approaches, as a way of realizing the creativity of judges to make legal breakthroughs able to answer the problems of community development.
Amplification sensing manipulated by a sumanene-based supramolecular polymer as a dynamic allosteric effector

The synthesis of signal-amplifying chemosensors induced by various triggers is a major challenge across disciplines. In this study, a signal-amplification system that can be flexibly manipulated by a dynamic allosteric effector (trigger) was developed. The focus was on using supramolecular polymerization to control the degree of polymerization by changing the concentration of a functional monomer; this control was assumed to be mediated by a gradually changing, dynamic allosteric effector. A curved-π buckybowl, sumanene, and a sumanene-based chemosensor (SC) were employed as the allosteric effector and the molecular binder, respectively. The hetero-supramolecular polymer, SC·(sumanene)n, facilitated manipulation of the degree of signal amplification, accomplished by changing the sumanene monomer concentration, which resulted in up to a 62.5-fold amplification of a steroid signal. These results, and the concept proposed herein, provide an alternative to conventional chemosensors and signal-amplification systems.

In this study, a novel signal-amplification system surpassing natural ones was discovered. Within this system, dynamic changes in the allosteric effector occurred via adjustments of supramolecular polymerization, achieved for the first time by using the curved-π buckybowl sumanene [45] (Fig. 1c) as the monomer. A recent study showed that pristine sumanene spontaneously forms supramolecular polymers in solution in an isodesmic manner [46], i.e. K_n (nucleation) = K_e (elongation). Therefore, a sumanene-based chemosensor (SC, Fig. 1c) was constructed based on the guideline shown in Fig.
1b. Pristine sumanene gradually stacks on the convex face of the chemosensor to form hetero-supramolecular polymers (SC·(sumanene)n). The sumanene moieties stacked in these hetero-supramolecular polymers were likely to perturb the electronic properties of the molecular recognition sites in SC. Here, we report an unprecedented signal-amplification system based on hetero-supramolecular polymers composed of SC as a molecular binder and sumanene as a dynamic allosteric effector. In this case, signal amplification was read out from fluorescence changes upon complexation of guest molecules. This concept yielded a powerful, widely applicable chemosensor capable of manipulating signal amplification.

Photophysical properties of SC

The UV/vis absorption and fluorescence spectra, and the fluorescence lifetime decays, of SC in dichloromethane (CH2Cl2) are shown in Fig. 2 and Fig. S12 in the Supplementary Information (SI). The maximum peak in the UV/vis absorption spectrum of SC was observed at approximately 280 nm (Fig. 2a), similar to the sum of sumanene and the indole reference compound (ref, Fig. 1c). However, new absorption bands at approximately 310 and 360 nm were observed upon comparing the molar extinction coefficients of SC with those of sumanene and ref, suggesting that π-conjugation extends from the sumanene core to the indole chromophore (see Fig. (top)). An emission peak at 412 nm was observed in the fluorescence spectrum of SC (Fig. 2b). Given the appreciable bathochromic shift of SC relative to sumanene and ref, this peak may originate from the π-extended indole-sumanene conjugation. The fluorescence decay profiles (Fig.
2c) monitored at 392, 406, and 431 nm (λ_ex: 340 nm) were fitted to a sum of exponential functions, yielding a first short-lived species (0.4 ns) in the shorter-wavelength region, a major fluorescent species (2.6 ns) present across the entire region, and a second short-lived species (0.3-0.4 ns) in the longer-wavelength region (Table S1 in SI). The major excited species (2.6 ns) was assigned to the fluorescent SC monomer. A titration of SC with triethylamine as an organic base (Figs. S13-S14 and Tables S2-S3 in SI) showed that the fluorescence quenching mainly involved the first short-lived species; this was assigned to a fluorescent anionic species in which protons had dissociated from the indole moiety. A rise component with a negative A factor (relative abundance) was observed for the second short-lived species (Fig. 2c, Figs. S12c,d (SI), and Table S1 (SI)). By contrast, ref mainly showed the major monomeric species (3.2-3.4 ns) without the rise (Fig. S15 and Table S4 in SI). Density functional theory (DFT) calculations (functional/basis set: ωB97X-D/6-311G(d,p)) of SC were performed to identify the rise component (Fig. 2d). The three indole side chains aligned in a T-shaped manner at a distance of 6.7 Å. The DFT results suggested that the second short-lived species originates from a frustrated (second) excimer formed via the intramolecular, T-shaped overlap of the indole side chains, as observed in previous studies 47,48. This interesting photochemical property arises from the fixation of the fluorescent moieties on the sumanene scaffold.

Sensing behavior of SC

A series of benzene and cyclohexane derivatives were used as model guests for a proof of concept (Fig. 3a). Their sensing behaviors were then investigated in CH2Cl2. Upon the gradual addition of MB, a broadened peak over a wide range was observed in the long-wavelength region (> 355 nm) of the UV/vis absorption spectra during the titration of SC ⊂ MB (Fig.
S17a in SI). Efficient quenching was observed only as the new band emerged, and peak shifts were absent in the corresponding fluorescence spectra (Fig. 3b and Fig. S17b (SI)). Using this spectral change, K_SC was estimated as 7 ± 1 M−1, assuming 1:1 complexation (Fig. 3c); note that (1) the stoichiometric analysis in Fig. S20 in SI is discussed with the following anion system, and (2) the 1:1 complex structure is provided in advance (Fig. 3d). Subsequently, the anion sensing shown in Figs. S18-S19 in SI was examined. The UV/vis absorption spectra of SC ⊂ TBPB exhibited a similar broadening at long wavelengths (> 355 nm); however, a hypochromic effect was observed at approximately 320-355 nm. During fluorescence titration, the peak maxima shifted bathochromically with appreciable quenching. The spectral differences between neutral MB and anionic TBPB likely reflect different ground-state complexations based on the neutral or anionic SC. Furthermore, the K_SC value of the anionic SC, assuming 1:1 complexation, was enhanced to 850 ± 10 M−1. Therefore, the spectroscopic behaviors were affected by the strength of the anion complexation.

The interaction between SC and MB was further investigated using IR spectroscopy. The broad peak at 3303 cm−1, derived from the N-H stretching vibration of the indole ring, decreased in intensity and shifted to a higher wavenumber of 3329 cm−1 with increasing MB concentration (Fig. 3e). The DFT calculations also support the 1:1 complex structure of SC ⊂ MB (Fig. S20-2 in SI), as was the case with TBPB. Additionally, the ref compound binds MB (K_ref = 4 ± 1 M−1) and TBPB (K_ref = 4 ± 1 M−1) with appreciable fluorescence quenching, as also observed for SC (Fig.
S16 in SI). These results suggest that hydrogen bonds formed between the carbonyl moieties of the guests and the N-H protons on the indole rings of SC are the main recognition forces. This is consistent with the fluorescence quenching behavior observed after adding triethylamine (vide supra). The K_SC value of SC ⊂ TBPB was enhanced by a factor of 213 compared with K_ref of ref ⊂ TBPB, likely owing to cooperative complexation through the octopus-like arrangement of three indoles on the sumanene scaffold. To elucidate the thermodynamic parameters, a temperature-dependent van't Hoff analysis of SC ⊂ TBPB was performed at four temperatures from 5 to 35 °C (Fig. 3f and Figs. S18-S19 (SI)). The van't Hoff plot was a straight line, indicating the same complexation mechanism (same heat capacity) over this temperature range; ΔH° and TΔS° were −20.8 and −4.2 kJ mol−1, respectively. These thermodynamic parameters indicate enthalpy-driven complexation. Nevertheless, the observed entropy loss was relatively low despite immobilization by three recognition sites, which is accounted for by the classical chelate effect 49. Therefore, the high enthalpy gain and low entropy loss (the usual signature of cooperative binding) resulted in a higher, enhanced K_SC (ΔG° = −16.6 kJ mol−1) based on the distinctive structural specificity.
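The van't Hoff treatment above can be sketched numerically. This is a minimal illustration, not the authors' fitting code; the four K values below are synthetic, generated to be consistent with the reported ΔH° ≈ −20.8 kJ mol−1 and TΔS° ≈ −4.2 kJ mol−1 (at 298 K), and show how ΔH° and ΔS° are recovered from the slope and intercept of ln K versus 1/T:

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def delta_g(K, T):
    """Standard binding free energy: dG = -RT ln K (J/mol)."""
    return -R * T * math.log(K)

def vant_hoff_fit(temps_K, Ks):
    """Least-squares line through (1/T, ln K).
    slope = -dH/R and intercept = dS/R, so return (dH, dS)."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(K) for K in Ks]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    intercept = ybar - slope * xbar
    return -slope * R, intercept * R

# Synthetic K(T) consistent with dH = -20.8 kJ/mol, dS = -14.1 J/(mol K)
temps = [278.15, 288.15, 298.15, 308.15]  # 5-35 degC
Ks = [math.exp(-(-20.8e3) / (R * T) + (-14.1) / R) for T in temps]
dH_fit, dS_fit = vant_hoff_fit(temps, Ks)
```

With the reported K_SC = 850 M−1 at 25 °C, `delta_g(850, 298.15)` gives about −16.7 kJ mol−1, matching the quoted ΔG° = −16.6 kJ mol−1 within rounding.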
The sensing results for the eight guests with SC are listed in Table 1 and Figs. S21-S23 (SI). First, the K_SC values of SC against the methyl esters were in the range of 7-28 M−1; small but steady increases in K_SC were observed as the number of carboxylic groups increased. The K_SC values for TT and TC were similar, indicating that the contribution of the benzene ring (π-π interaction) to supramolecular complexation was minimal; this reinforces hydrogen bonding as the main driving force. The K_SC values of SC against the anions were between 850 and 1940 M−1, a 44-121-fold enhancement over the corresponding methyl esters. The marginal change in K_SC as the number of carboxylates increased from two to three was likely owing to the bulkiness of the TBP cation (steric hindrance). Similar K_SC values for TBPT and TBPC were observed, showing that the complexation derives mainly from hydrogen bonding interactions and not π-π interactions.

Hetero-supramolecular polymerization consisting of SC and sumanene

Following a previous study on homo-supramolecular polymerization 46, the formation of hetero-supramolecular polymers from SC and sumanene in CH2Cl2 was investigated. No new absorption bands were observed in the UV spectra as sumanene was titrated against 128 μM SC (Fig. S25 in SI); these spectra were similar to the concentration-dependent UV spectra of sumanene. Nevertheless, the hetero-supramolecular polymerization behavior was elucidated by calculating the molar extinction coefficient of the sumanene skeleton in SC, ε_sumanene,(calcd.). This provided the basis for the ε value of the SC·(sumanene)n hetero-supramolecular polymer, which was used to estimate α_agg (Fig.
S25, Table S5, and the relevant discussion in SI). Assuming an isodesmic model, based on the similar curvatures of SC (bowl depth 0.90 Å) and sumanene (bowl depth 0.89 Å) estimated from DFT calculations, a K_i value of 770 ± 120 M−1 was determined by analyzing the molar extinction coefficient at 363 nm. Heteromer formation was further supported by the diffusion coefficient (D) obtained via NMR diffusion-ordered spectroscopy (DOSY). The D value of a CD2Cl2 mixture of SC (447 μM) and sumanene (9.09 mM) was 7.37 × 10−10 m2/s (Figs. S26-S29 in SI). Applying an ellipsoid approximation model to this value gave an estimated 4-5-mer (Table S6 and the relevant discussion in SI). The obtained D values clearly support the formation of the heteromer rather than the monomer. Consistently, the number-average DP of the heteromer calculated under the isodesmic model was 3.3, similar to the DOSY estimate (see structures in Fig. 4). Here, we assumed that SC functions as a chain capper, since all the indole moieties in SC stably align in an endo fashion (see Fig. 2d), which hampers random copolymerization by insertion of SC in the middle of a chain. A further DFT calculation on trihydroxysumanene, a starting material of SC, also supports that the endo form is more stable than the exo form by 18.4 kJ mol−1, reinforcing the validity of the heteromer structure.

Amplification-sensing behavior using the hetero-supramolecular polymer

Broad peaks were observed in the long-wavelength region of the UV/vis absorption spectra of SC·(sumanene)n ⊂ MB in CH2Cl2 (Fig. S34a in SI), similar to those in the SC ⊂ MB spectra. Fluorescence quenching was observed in the fluorescence spectra of SC·(sumanene)n ⊂ MB (Fig.
5a); this quenching was similar to that observed in the SC ⊂ MB spectra. A new emission band appeared at longer wavelengths (approximately 500 nm) in the normalized fluorescence spectra (Fig. S34b in SI), indicating that supramolecular complexation of SC·(sumanene)n ⊂ MB occurred. The fluorescence changes at 399 nm were fitted (Fig. 5b, red), assuming that the hetero-supramolecular polymer and the guest molecule form a 1:1 complex owing to the shared recognition moiety (the SC starburst in the complex). The resulting apparent binding constant (K_SMP) was 79 ± 6 M−1, which is 11.3-fold higher than the K_SC of SC ⊂ MB (7 ± 1 M−1). Table 1 lists the K_SMP and K_SC values for the eight guest molecules and the sensing behaviors of the hetero-supramolecular polymer and SC systems. Large amplification ratios of 2.9-11.3 were observed for the esters, which have smaller K_SC values, whereas ratios of only 1.1-2.5 were observed for the anions, which have relatively large K_SC values (Figs. S35-S48 in SI). Therefore, signal-amplification sensing with sumanene supramolecular polymers may be useful for enhancing the weak signal of a target molecule with a small K in the SC system.

To elucidate the origin of the signal amplification, DFT calculations (functional/basis set: ωB97X-D/6-311G(d,p)) of SC, sumanene, and SC·(sumanene)n were performed (Fig.
6). The LUMO energy of SC was 0.02 eV; this value remained essentially constant when a single sumanene molecule stacked on SC (dimer formation). By contrast, stacking two or more sumanene molecules on SC gradually lowered the LUMO energy, from −0.02 eV (SC·(sumanene)2) to −0.09 eV (SC·(sumanene)4). Furthermore, the LUMO energy of (sumanene)5 was positive (0.45 eV). The negative LUMO energies are therefore likely to make the system more electron-accepting 50, which improves the acceptor properties of the SC binding site in the heteromer. Because the LUMO orbitals of SC·(sumanene)n extend from the sumanene core of SC to the indole binding site, the formation of SC·(sumanene)n influences the electron-acceptor properties of the indole moiety, which is the origin of the signal amplification. In addition, natural population analysis showed that hetero-supramolecular polymer formation causes electron transfer from sumanene to SC, resulting in an anionic SC core in the charge-transfer (CT) complex (Table S7 in SI). Therefore, the signals for the anions were not amplified, owing to electrostatic repulsion in the hetero-supramolecular polymer system, whereas amplification was observed for the esters.

Importantly, the experimental and theoretical findings showed that the number of stacked sumanene molecules plays a critical role in signal amplification. The conceptual mechanism shown in Fig. 1b was assessed by sensing MB at varying concentrations of the sumanene monomer with SC (Figs. S30-S33 in SI). The ln K_SMP value (proportional to −ΔG°) increased exponentially with DP (Fig. 7a); therefore, sumanene behaves as a dynamic allosteric effector. A conceptual illustration of the signal amplification observed in this study is shown in Fig. 7b.
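The degree of polymerization that drives this allosteric response can be estimated under the isodesmic (equal-K) model used earlier for SC·(sumanene)n. A minimal sketch using the standard closed-form expression for the number-average DP (an assumption here, not the authors' own fitting code) lands near the reported DP of 3.2-3.3 for K_i ≈ 770 M−1 and 9.09 mM sumanene:

```python
import math

def isodesmic_dp(K, c_total):
    """Number-average degree of polymerization for an isodesmic
    supramolecular polymerization (K_n = K_e = K):
        <DP>_N = (1 + sqrt(1 + 4*K*c_total)) / 2
    with K in M^-1 and total monomer concentration c_total in M."""
    return (1.0 + math.sqrt(1.0 + 4.0 * K * c_total)) / 2.0

dp = isodesmic_dp(770, 9.09e-3)  # ~3.2, close to the DOSY estimate
```

In the dilute limit the expression correctly approaches a DP of 1 (free monomer), and it grows monotonically with both K and total concentration, matching the concentration-controlled behavior exploited in this study.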
Steroid sensing: general validity and application using the hetero-supramolecular polymer

To further generalize the current signal-amplification method and demonstrate its applicability to biologically important materials, steroids with lower donor character, such as testosterone, corticosterone, and allylestrenol, were selected as target molecules (Fig. 8a). The fluorescence spectra of SC·(sumanene)n ⊂ allylestrenol exhibited distinctive quenching similar to that of the other sensing systems (Figs. S49-S53 in SI). A similar 1:1 fit of the fluorescence changes at 409 nm gave a K_SMP value of 250 ± 20 M−1, a 62.5-fold signal amplification over the K_SC of SC ⊂ allylestrenol (4 ± 0.3 M−1) (Fig. 8b). A lower K_SC value in the SC system corresponded to a higher amplification of K_SMP in the hetero-supramolecular polymer system. It was concluded that the signal amplification by the sumanene-based hetero-supramolecular polymer reached 62.5-fold, higher than in the other sensing systems.

Conclusion

A novel signal-amplification system was developed in which the curved-π sumanene monomer for supramolecular polymerization functions as a dynamic allosteric effector. This monomer effector alters the DP to flexibly manipulate the electronic properties at the binding site (positive heterotropic allosterism), achieving up to 62.5-fold amplification in sensing the biologically important steroid allylestrenol. The sensing method and conceptual guideline proposed herein facilitate the sensing of diverse guests that are difficult to detect by signal output using conventional lock-and-key chemosensors.

Figure 1. Concept and design guidelines of the chemosensors. (a) Model titration curves (lower (blue) and higher (red) binding constants), assuming that the 1:1 stoichiometric complexation represents each nonlinear least-squares binding isotherm. The black arrow indicates signal amplification. (b) Schematic of the concept for the dynamic allosteric effector that can flexibly manipulate binding equilibria via supramolecular polymerization. The red and blue pieces show a guest and a chemosensor (artificial receptor), respectively. (c) Chemical structures of sumanene as a monomer for supramolecular polymers, SC as a chemosensor, and ref as a reference compound.

Figure 3. Sensing behavior of SC. (a) Chemical structures of model guests. (b) Fluorescence spectra (λ_ex: 351 nm) of SC (421 μM; black) with gradual addition of MB (18.0-575 mM; from brown to blue) in CH2Cl2 at 25 °C, measured in a 1 mm cell; the excitation wavelength giving comparable absorbances was selected. (c) Nonlinear least-squares fitting line (assuming 1:1 stoichiometry of SC and MB, monitored at 407 nm) to determine the binding constant at 25 °C. (d) Optimized supramolecular complex structure of SC ⊂ trimesate (C6H3(COO−)3); counter cations omitted for clarity. (e) IR spectra of SC (4.2 mM, black) with gradual addition of MB (40 mM-1.6 M, brown to pink) in CH2Cl2 at room temperature; the green dotted line represents MB (1.6 M) in CH2Cl2. (f) van't Hoff plot of the binding constants obtained from the complexation of TBPB with SC in CH2Cl2 (r = 0.992).

Figure 5. Sensing behavior of the SC·(sumanene)n hetero-supramolecular polymer. (a) Fluorescence spectra (λ_ex: 355 nm) of SC (445 μM) with sumanene (8.87 mM, DP = 3.2, black) following the addition of MB (1.8-147 mM, from brown to blue) in CH2Cl2 at 25 °C, measured in a 1 mm cell; the excitation wavelength giving comparable absorbances was selected. (b) Normalized binding isotherms of SC·(sumanene)n (red) and SC (black) following the addition of MB at 25 °C.
Functional complementation between transcriptional methylation regulation and post-transcriptional microRNA regulation in the human genome

Background: DNA methylation in the 5' promoter regions of genes and microRNA (miRNA) regulation at the 3' untranslated regions (UTRs) are two major epigenetic regulation mechanisms in most eukaryotes. Both DNA methylation and miRNA regulation can suppress gene expression and the corresponding protein product; thus, they play critical roles in cellular processes. Although there have been numerous investigations of gene regulation by methylation changes and miRNAs, there has been no systematic genome-wide examination of their coordinated effects in any organism.

Results: In this study, we investigated the relationship between promoter methylation at the transcriptional level and miRNA regulation at the post-transcriptional level by taking advantage of recently released human methylome data and high-quality miRNA and other gene annotation data. We found that methylation level in the promoter regions and expression level were negatively correlated. We then showed that miRNAs tended to target genes with a low DNA methylation level in their promoter regions. We further demonstrated that this observed pattern was not attributable to gene expression level, expression broadness, or the number of transcription factor binding sites. Interestingly, we found that miRNA target sites were significantly enriched in genes located in differentially methylated regions or partially methylated domains. Finally, we explored the features of DNA methylation and miRNA regulation in cancer genes and found that cancer genes tended to have low methylation levels and more miRNA target sites.

Conclusion: This is the first genome-wide investigation of the combined regulation of gene expression. Our results support complementary regulation between DNA methylation (transcriptional level) and miRNA function (post-transcriptional level) in the human genome.
The results are helpful for understanding the evolutionary forces towards organismal complexity beyond traditional sequence-level investigation.

Background

Epigenetics refers to heritable changes that modify DNA or associated proteins without changing the DNA sequence itself [1]. It has been commonly accepted that both epigenetic mechanisms - DNA methylation modification at a gene's promoter region (5' of the gene) and microRNA (miRNA) regulation at the 3' untranslated regions (3' UTRs) - are important in gene expression regulation.
DNA methylation has been widely investigated as a heritable epigenetic modification of the genome and has been implicated in the regulation of most cellular processes, including embryonic development, transcription, chromatin structure, X chromosome inactivation, genomic imprinting, and chromosome stability [2][3][4][5][6]. Aberrant DNA methylation has frequently been reported to influence gene expression and subsequently cause various human diseases, especially cancer [7][8][9]. The causal relationship between variation in promoter DNA methylation and differences in gene regulation has been well recognized [10,11]. Recent work [12] revealed that hypermethylation at promoter CpG sites typically results in a lower transcription level of downstream genes; when methylation was experimentally removed from a gene's promoter region, its transcription level was often higher [13]. Among the ~28 million CpG dinucleotide sites that are susceptible to methylation in the human genome, approximately 10% are in the promoter regions of genes, where they may physically obstruct the binding of transcriptional proteins or may act indirectly through the recruitment of methyl-CpG-binding domain proteins via cytosine methylation [14][15][16]. The repressive role of promoter methylation in gene expression regulation has been reinforced by recent whole-genome bisulfite sequencing of the methylomes of more than 20 eukaryotes [17]. miRNAs are a class of small noncoding RNA molecules that regulate eukaryotic gene expression at the post-transcriptional level. They specifically bind mRNAs at their 3' UTRs based on sequence complementarity, leading to translational repression and gene silencing [18]. According to release 17 (April 26, 2011) of the miRNA database miRBase [19], there are 16,772 miRNA gene loci in 153 species and 19,724 distinct mature miRNA sequences [20].
Among them, the human genome encodes 1424 miRNA sequences, which may target approximately 60% of human protein-coding genes [21]. The large number of miRNAs discovered so far indicates that many biological processes, including cell cycle control, cell growth and differentiation, apoptosis, and embryonic development, are controlled by miRNA-mediated gene expression regulation [22]. Although there have been many important advances in understanding gene silencing at the transcriptional level through DNA methylation and at the post-transcriptional level through miRNA regulation, it remains unclear how these two major mechanisms cooperate at the genome-wide level to influence cellular processes. Thus, a combined analysis of these two mechanisms is likely to reveal important insights for a deeper understanding of gene regulation in cells. Considering that (1) DNA methylation acts on a gene's 5' promoter region, and transcription typically depends on demethylation of the promoter region, and (2) miRNAs target the 3' UTR to suppress a gene's post-transcriptional activity, we hypothesized that there is functional complementation between transcriptional promoter-methylation regulation and post-transcriptional miRNA regulation. If this hypothesis is valid, we would infer that (1) miRNAs preferentially target genes with a low DNA methylation level at the promoter regions, and (2) genes that are controlled by more miRNAs tend to have less promoter methylation regulation. We validated our hypothesis by deeply analyzing human methylome data from two cell lines. To the best of our knowledge, this is the first report of a complementary relationship between DNA methylation regulation and miRNA regulation in a eukaryotic genome. Furthermore, we found that cancer genes tended to be silenced by miRNAs and to escape DNA methylation suppression, supporting our hypothesis.
Gene annotation

Human and mouse gene structure data were retrieved from the Ensembl database (version 54), including Ensembl gene ID, Ensembl transcript ID, transcript start (bp), transcript end (bp), Ensembl protein ID, 3' UTR start, 3' UTR end, chromosome, and strand. We extracted the promoter region and 3' UTR position information from the gene structure data. If there were multiple transcripts for a gene, the transcription start site (TSS) and 3' UTR of the major transcript were used [23]. We retained only those genes without a distant alternative TSS (> 200 bp from the major TSS) and without ambiguous 3' UTR regions, to avoid potentially inaccurate mapping between the gene expression data and gene structures.

Analysis of DNA methylation data

Single-base resolution DNA methylation data were retrieved from Lister et al. (2009) [15], comprising whole-genome bisulfite sequencing data for two human cell lines: H1 human embryonic stem cells and IMR90 fetal lung fibroblasts. The methylation information for each promoter was extracted by mapping the promoter region (from -1000 to +200 bp relative to the TSS) to the genome-wide methylation data from the H1/IMR90 cell line. Based on the single-base resolution bisulfite sequencing data, we used methylation broadness to measure the DNA methylation level in specific genomic regions, calculated as the proportion of methylated CpG sites among the total CpG sites in a sequence (denoted "mCG/CG" hereafter). We also used "normalized" CpG content, the observed-over-expected CpG ratio (CpG O/E) in a sequence, to infer the pattern of DNA methylation in the human genome. CpG O/E is a robust measure of the level of DNA methylation on an evolutionary time scale owing to the specific mutational mechanisms of methylated cytosines [23].
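The mCG/CG broadness described above reduces to a simple fraction over the CpG sites in a window. A minimal sketch (a hypothetical helper, not the study's actual pipeline):

```python
def methylation_broadness(methyl_calls):
    """mCG/CG: proportion of CpG sites called methylated among all CpG
    sites in a region (e.g., a -1000..+200 bp promoter window).
    `methyl_calls` is one boolean per CpG site in the window."""
    calls = list(methyl_calls)
    if not calls:
        return 0.0  # no CpG sites in the window
    return sum(calls) / len(calls)
```

For example, a promoter window with three methylated calls out of four CpG sites yields a broadness of 0.75.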
Briefly, methylated cytosines are hypermutable owing to their vulnerability to spontaneous deamination, which causes a gradual depletion of CpG dinucleotides from methylated regions over evolutionary time. Consequently, genomic regions subject to strong germline DNA methylation (hypermethylated) lose CpG dinucleotide content over time and thus have lower-than-expected CpG O/E. In contrast, regions that undergo weak germline DNA methylation (hypomethylated) maintain high CpG O/E. This measure has been successfully used to indirectly estimate historical DNA methylation levels. In particular, the pattern of DNA methylation inferred from CpG O/E corresponds well to the actual pattern of DNA methylation in taxa as diverse as human and sea squirt. CpG O/E was calculated as the frequency of CpG sites divided by the frequencies of C and G [24]. The pattern of DNA methylation inferred from CpG O/E corresponds well to the actual pattern of DNA methylation in human stem cells (H1 cell line) and fetal lung fibroblasts (IMR90) [14,15]. Since the DNA methylation levels of the two strands in any given genomic region are highly correlated, we used the sense strand to represent the DNA methylation level of a given gene promoter region; similar results were obtained when we used the methylation level of the antisense strand (data not shown).

Compilation of miRNA targets

miRNAs and their predicted targets were extracted from the R package RmiR.hsa [25], which includes miRNA target site predictions from six sources: miRBase, TargetScan, miRanda, TarBase, mirTarget2, and PicTar. In this study, we used the target site predictions from two approaches: mirTarget2 and PicTar.
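The CpG O/E measure above can be sketched as follows. This is an illustrative implementation of the common observed/expected normalization (CpG frequency over the product of C and G frequencies), which matches the verbal definition given here but is not necessarily the authors' exact script:

```python
def cpg_oe(seq):
    """Observed/expected CpG ratio for a DNA sequence:
        CpG_O/E = (N_CpG / L) / ((N_C / L) * (N_G / L))
    Low values suggest historical hypermethylation (CpG depletion)."""
    seq = seq.upper()
    n_c, n_g = seq.count("C"), seq.count("G")
    n_cpg = seq.count("CG")
    length = len(seq)
    if n_c == 0 or n_g == 0:
        return 0.0  # expected CpG count is zero; ratio undefined
    return (n_cpg / length) / ((n_c / length) * (n_g / length))
```

For a CpG-rich toy sequence such as "CGCG", the ratio is 2.0 (more CpGs than expected), whereas a sequence containing C and G but no CG dinucleotides scores 0.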
Analysis of human gene expression data

We obtained expression data from 409 microarray experiments from McVicker and Green (2010) [26], collected from 12 studies [12,13,[27][28][29][30][31][32][33][34][35][36] and representing a wide variety of germline and somatic tissues. As these studies used two different platforms (Affymetrix hgu133plus2 and hgu133A microarrays), we only considered the probe sets shared by both arrays. The methods used to process the raw intensity data and to assign probe sets to genes are described in McVicker and Green (2010) [26]. In total, we assigned expression intensities to 9858 genes in 409 tissues. Among the 409 tissues, 64 containing germ cells were considered germline tissues, with the exception of germ cell tumors, embryonic stem cells, and immortalized cell lines (see additional file 1). Because the above data sets are highly redundant in terms of tissue or cell type, we used only the Gene Expression Atlas data to estimate the relative expression broadness (EB, the number of tissues in which a gene is expressed); these data have been widely used for this purpose. The Affymetrix raw data were downloaded from the website of the authors of reference [36], comprising 156 human (U133A/GNF1H) microarray experiments in 79 tissues. The expression level detected by each probe set was obtained as the average difference (AD) value computed with the MAS 5.0 algorithm (MAS5) [37]. The AD values were averaged among replicates. Using the annotation tables from the original study [36,38] and the Ensembl EnsMart tool, we mapped the probe IDs used in the microarray experiments to Ensembl gene identifiers. In approximately 20% of cases, multiple probes in the microarray targeted a single gene. The expression intensities of multiple probes corresponding to one gene were averaged after discarding all low-confidence probe sets (indicated by a suffix of ''_x_at'' or ''_s_at'' in the Affymetrix IDs) [39].
In this study, we used an AD value of 200 as the threshold to calculate the EB, as in our previous work [23]. The gene expression data for the two human cell lines H1 and IMR90 were obtained from reference [15]. These expression data were generated by a whole-transcriptome RNA sequencing (RNA-Seq) approach, and reads per kilobase of transcript per million reads (RPKM) were used to represent the expression level of each gene.

Cancer genes

We retrieved 427 human cancer genes and their annotations from the Cancer Gene Census database (CGC, 2010-03-30 version) [40]. Since a cancer gene may act in a dominant or recessive manner [41,42], we classified these 427 cancer genes into two groups, i.e., a dominant gene group (337 genes) and a recessive gene group (85 genes), according to their annotations in the CGC database. Five genes with ambiguous classification in the database were excluded from this analysis. An in-house Perl script was used to extract the orthologous 3' UTR alignment information and to identify human-specific indel events. Human-specific insertion and deletion event rates in the 3' UTR regions were calculated based on percent nucleotide difference: the indel rate equals the sum of the lengths of all indels in the aligned human sequences divided by the total length of the aligned sequences.

Results and discussion

Correlation between gene expression level and promoter DNA methylation

Although methylation of a gene's promoter region has long been considered a suppressor of gene expression [17,45], it remains unclear to what extent promoter DNA methylation influences gene expression level [45,46]. For example, most promoters harboring CpG islands (CGIs) remain unmethylated even in cells that do not express the corresponding gene. Conversely, most CpG-poor promoters are hypermethylated even in somatic cells in which the genes are expressed [47].
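The indel-rate definition given in the Methods above (computed by an in-house Perl script in the original study) can be sketched in Python for a single pairwise alignment; the gap handling below and the assignment of human-specific polarity are simplifying assumptions:

```python
def indel_rate(aligned_human, aligned_other):
    """Sum of the lengths of all indels (gap columns) divided by the
    total length of the aligned sequences, for one pairwise alignment.
    '-' marks a gap; a gap in the human row is a deletion, while a gap
    in the other row is an insertion relative to that sequence."""
    if len(aligned_human) != len(aligned_other):
        raise ValueError("aligned sequences must be the same length")
    gap_columns = sum(1 for h, o in zip(aligned_human, aligned_other)
                      if h == "-" or o == "-")
    return gap_columns / len(aligned_human)
```

For instance, aligning "AC-GT" against "ACTG-" has two gap columns out of five, giving an indel rate of 0.4.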
What is equally uncertain is the contribution of promoter methylation to tissue-specific gene expression. Although many studies have shown that tissue-specific differentially methylated regions (T-DMRs) can be linked to gene expression reprogramming in different tissues or developmental stages, others failed to demonstrate such a connection based on the analysis of a small set of genes [48,49]. To better understand the relationship between DNA methylation regulation and gene expression regulation through miRNA targeting, we explored to what extent promoter methylation affects the gene expression level using the genome-wide data set collected in this study. We used two independent measurements, i.e., methylation broadness and normalized CpG content (CpG O/E ), to test the correlation between promoter methylation and gene expression level. First, we calculated the broadness of DNA methylation in each gene promoter region in human H1 embryonic stem cells and IMR90 fetal lung fibroblasts, based on the recently published whole-genome single-base resolution methylome data [15]. Methylation broadness measures the fraction of cytosine sites detected as methylated in a given DNA segment, which is calculated as the proportion of methylated sites over the total sites in a sequence (termed mCG/CG) [17]. We calculated the pairwise correlation between promoter DNA methylation and gene expression level. We found that gene expression intensity was significantly and negatively correlated with the methylation level in the promoter regions, both in H1 cells (ρ = -0.468, P < 10 -15 ) and in IMR90 cells (ρ = -0.473, P < 10 -15 ). Next, we used CpG O/E to approximately infer the pattern of DNA methylation in the human genome. As a robust measurement of the level of germline DNA methylation on an evolutionary time scale [24], low CpG O/E and high CpG O/E reflect hypermethylation and hypomethylation, respectively.
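The mCG/CG broadness measure and its rank correlation with expression can be illustrated as below. This is a stdlib-only sketch with hypothetical data; the actual analysis used the base-resolution methylome calls of [15], and in practice one would use scipy.stats.spearmanr:

```python
def ranks(values):
    """Rank values from 1..n (toy helper; assumes no ties)."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    r = [0] * len(values)
    for pos, idx in enumerate(order, start=1):
        r[idx] = pos
    return r

def spearman_rho(x, y):
    """Spearman's rho as the Pearson correlation of the ranks (no ties)."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mean = (n + 1) / 2
    cov = sum((a - mean) * (b - mean) for a, b in zip(rx, ry))
    var = sum((a - mean) ** 2 for a in rx)
    return cov / var

def mcg_cg(calls):
    """Methylation broadness: fraction of CG sites called methylated."""
    return sum(calls) / len(calls)

# toy promoters (True = methylated CG site) and matched expression levels
promoters = [
    [True, True, True, False],     # broadly methylated promoter
    [True, False, False, False],
    [False, False, False, False],  # unmethylated promoter
]
expression = [2.0, 8.5, 30.0]      # RPKM-like values (hypothetical)

broadness = [mcg_cg(p) for p in promoters]
print(spearman_rho(broadness, expression))  # -> -1.0 (ranks fully inverted)
```

With the ranks of broadness fully inverted relative to the ranks of expression, the toy data reproduces in miniature the negative correlation reported above.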
We calculated the correlation between CpG O/E and gene expression level for a wide range of tissues. As shown in Figure 1, gene expression in most germline tissues was positively correlated with CpG O/E . Remarkably, we found the correlation was more significant in female germline tissues than in male germline tissues. The average gene expression intensity in all germline tissues was also significantly correlated with promoter CpG O/E (ρ = 0.37, P < 10 -15 ). Our results also showed either weak or no significant correlation in most somatic tissues (Figure 1). In summary, using different DNA methylation measurements, we found that the methylation level in a gene's promoter regions was negatively correlated with its expression level at the whole-genome level. It is worth noting that we found a more significant correlation between gene promoter DNA methylation level and gene expression level than previous studies did [3,15]. One possible reason is that we only used the genes with a unique TSS or largely overlapping promoter regions (see Methods). miRNAs preferentially target the genes with low DNA methylation level at the promoter regions We next tested the hypothesis that a functional complementation exists between transcriptional promoter region methylation regulation and post-transcriptional microRNA regulation. We retrieved unique miRNAs and their target sites for each human gene based on the predicted miRNA binding sites using the mirTarget2 [50] and PicTar [51] algorithms. We chose these two algorithms because most of the randomly selected miRNA targets predicted by mirTarget2 and PicTar have been validated as true targets [50,52]. Genes that have long 3' UTRs are likely to be regulated by more miRNAs [53]; thus, we treated the 3' UTR length as a proxy of the number of miRNA target sites for an additional correlation analysis. There were 12,730 genes that had both miRNA target predictions by mirTarget2 and promoter methylation measured using human H1 cells.
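CpG O/E (normalized CpG content) has a standard definition: the observed CpG dinucleotide count over the count expected from the C and G frequencies of the sequence. A minimal sketch with toy sequences; the paper computed this over the promoter window around the TSS:

```python
def cpg_oe(seq):
    """Normalized CpG content: observed CpG dinucleotides over the
    number expected from C and G frequencies, i.e. (N_CpG * L) / (N_C * N_G)."""
    seq = seq.upper()
    n = len(seq)
    c, g = seq.count("C"), seq.count("G")
    cpg = sum(1 for i in range(n - 1) if seq[i:i + 2] == "CG")
    if c == 0 or g == 0:
        return 0.0
    return (cpg * n) / (c * g)

# a CpG-rich toy fragment (high CpG O/E, hypomethylated on evolutionary
# time scales) versus a CpG-depleted one (hypothetical sequences)
print(cpg_oe("CGCGCGCG"))  # -> 2.0
print(cpg_oe("CATGCATG"))  # -> 0.0
```

A low CpG O/E arises because methylated CpGs are prone to deamination to TpG over evolutionary time, which is why the measure serves as a proxy for germline methylation, as noted above.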
Using this dataset, we found a significant negative correlation between gene promoter methylation and the number of miRNA target sites (Spearman's ρ = -0.29, P < 10 -15 ) (Table 1, Figure 2). Similarly, we found a significant negative correlation between gene promoter methylation and the number of miRNA target sites (ρ = -0.26, P < 10 -15 ) based on the 12,731 genes having both miRNA target predictions by mirTarget2 and promoter methylation from the methylome of human IMR90 cells (Table 1, Figure 2). Moreover, using the CpG O/E value in the promoter regions as a proxy of the promoter methylation level in germline cells, we found a significant positive correlation between CpG O/E and the number of miRNA target sites (ρ = 0.29, P < 10 -15 ) (Table 1, Figure 3). This positive correlation between CpG O/E and the number of miRNA target sites is consistent with the negative correlations above, because CpG O/E inversely reflects the promoter methylation level. Finally, when we used the miRNA target site data predicted by PicTar, we obtained very similar results (Table 1), indicating that our findings are robust. We further used the 3' UTR length to approximately measure the number of miRNA target sites. Consistent with the above results, we found negative correlations between 3' UTR length and promoter methylation level in both human methylomes (H1 and IMR90) (Table 1). This analysis revealed that genes with a higher promoter methylation level tended to have shorter 3' UTRs at the genome level. We questioned whether the observed correlations are unique to the human genome. Thus, we investigated the relationship between promoter DNA methylation level and the number of miRNA target sites in mice. We retrieved the corresponding gene structure data from the ENSEMBL database. The data processing, including the definition of the TSS and the estimation of 3' UTR length, was the same as for humans, as described in the Methods section.
We found a highly significant correlation between promoter CpG O/E and 3' UTR length (Spearman's ρ = 0.24; P < 10 -15 ), indicating that the negative correlation pattern between promoter region methylation and number of miRNA target sites still holds in mice. Since mammalian genomes share many CpG island features in their promoter regions [4], it is likely that the observed correlation is common in mammals, or even in many vertebrates. Enrichment of miRNA targets among genes with lower promoter methylation level is not a by-product of gene expression level, expression broadness or the number of transcription factor binding sites We next specifically investigated whether the above observed enrichment of miRNA targets among genes with a lower promoter methylation level was a by-product of ancillary features of the analyzed gene sets. The results from the following analyses indicated that it was not. First, we asked whether the relationship between DNA methylation and miRNA regulation could be explained by the underlying gene expression levels, since the DNA methylation of a gene's promoter regions and gene expression level are correlated in the majority of eukaryotes, and gene expression level is often positively correlated with the number of miRNA target sites. We estimated partial correlations [54] between DNA methylation and number of miRNA target sites after removing the contributions of gene expression level (Table 1: Spearman's rank correlation coefficients (ρ) and partial correlations between a gene's promoter methylation level and the number of microRNA target sites). The corresponding partial correlations were still highly significant, suggesting that covariance between DNA methylation (or the number of miRNA target sites) and gene expression level could not account for the observed relationships between DNA methylation and the number of miRNA target sites (Table 1).
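A first-order partial correlation of this kind (x = methylation, y = number of target sites, controlling for z = expression) can be sketched with the standard formula r_xy·z = (r_xy − r_xz·r_yz) / sqrt((1 − r_xz²)(1 − r_yz²)). The toy vectors below are hypothetical and already rank-valued, so the Pearson correlation of the data equals the Spearman correlation:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def partial_corr(x, y, z):
    """First-order partial correlation of x and y controlling for z."""
    rxy, rxz, ryz = pearson(x, y), pearson(x, z), pearson(y, z)
    return (rxy - rxz * ryz) / sqrt((1 - rxz ** 2) * (1 - ryz ** 2))

# toy ranks: promoter methylation (x), miRNA target count (y), expression (z)
meth    = [5, 4, 3, 1, 2]
targets = [1, 2, 3, 5, 4]
expr    = [1, 2, 3, 4, 5]
print(round(partial_corr(meth, targets, expr), 2))  # -> -1.0
```

Here the methylation-target-count association remains strongly negative even after expression is partialled out, mirroring the pattern reported in Table 1.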
Although the partial correlations between DNA methylation and miRNA regulation decreased after removing the effects of gene expression level, they were still highly significant. Second, broadly expressed genes tended to avoid miRNA regulation [55,56], implying that the correlation between promoter methylation and miRNA regulation could have been affected by the greater chance of a higher DNA methylation level in the promoter regions of broadly expressed genes. We indeed found the promoter methylation level was negatively correlated with gene expression broadness (EB) (for mCG/CG using H1 methylome data, Spearman's ρ = -0.19, P < 10 -15 ; for CpG O/E , ρ = 0.22, P < 10 -15 ) (Figure 4a). However, no significant correlation between the number of miRNA target sites and EB was observed (for miRNA target sites based on MirTarget2, ρ = -0.003, P > 0.1) (Figure 4b), and only a very weak correlation between the length of UTRs and EB (ρ = 0.03, P = 0.002) was observed. We had similar results using the methylation data of IMR90 and/or using the miRNA target sites predicted by PicTar (data not shown). Therefore, the effect of EB on the correlation of promoter methylation level and miRNA target sites could be largely ruled out. Third, recent studies found that genes with more transcription factor binding sites (TFBS) have a higher probability of being controlled by miRNAs [57]. We examined whether the promoter methylation levels are correlated with the number of TFBS. We extracted the TFBS data from [58]. A total of 22,067 genes had both TFBS and promoter methylation data. We found the correlation between TFBS and promoter methylation was very weak (Spearman's ρ = -0.016 for TFBS and CpG O/E ; ρ = -0.07 for TFBS and mCG/CG using H1 methylome data). This observation suggested that the correlations between promoter methylation level and the number of miRNA targets were not a side effect of the correlation between TFBS number and the number of miRNA target sites.
Finally, a previous study found that gene evolutionary rates were negatively correlated with the number of their regulatory miRNAs [53]. Therefore, we speculated that genes with stronger promoter methylation repression (which tend to be regulated by fewer miRNAs) might have evolved faster in their 3' UTRs and could show insertion or deletion bias. (Figure 2: The correlation between methylation level in promoter regions and the number of microRNA target sites. The number of microRNA target sites in each gene was predicted by mirTarget2. The methylation data was from the base-resolution methylomes of two human cell lines [15].) A possible mechanism behind the negative correlation between promoter methylation and the number of regulatory miRNAs is that genes with hypermethylated promoters may in turn shorten their 3' UTRs to reduce possible miRNA regulation. We tested this hypothesis with the following analyses. We extracted the human-mouse one-to-one orthologous 3' UTR sequences from PACdb [59] and aligned these orthologous sequences using the computer program Clustal W [60]. We calculated the substitution rates per site (termed K 3u ) based on Kimura's two-parameter model [61]. We found a weak positive correlation between K 3u and the promoter methylation level (Spearman's ρ = 0.15, P < 10 -15 between K 3u and mCG/CG using H1 methylome data; ρ = -0.1, P < 10 -10 between K 3u and CpG O/E ), indicating that promoter-hypermethylated genes tended to evolve faster in their 3' UTRs. We identified the human-specific insertion rate and deletion rate for the 3' UTRs of all genes (see Methods). However, there was no evidence that promoter-hypermethylated genes tended to shorten their 3' UTR length (P > 0.1). Further studies of promoter methylation and 3' UTR evolution will be needed to uncover the underlying mechanisms of the connection between promoter methylation level and the number of miRNA target sites.
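Kimura's two-parameter distance used for K 3u above separates transitions (P) from transversions (Q): K = −(1/2)·ln[(1 − 2P − Q)·sqrt(1 − 2Q)]. A minimal sketch on a toy alignment (the sequences are hypothetical):

```python
from math import log, sqrt

PURINES = {"A", "G"}

def kimura_k2p(seq1, seq2):
    """Kimura two-parameter distance between two aligned sequences.
    Gap columns are skipped; P = transition proportion, Q = transversion
    proportion among compared sites."""
    pairs = [(a, b) for a, b in zip(seq1, seq2) if a != "-" and b != "-"]
    n = len(pairs)
    transitions = transversions = 0
    for a, b in pairs:
        if a == b:
            continue
        # purine<->purine or pyrimidine<->pyrimidine = transition
        if (a in PURINES) == (b in PURINES):
            transitions += 1
        else:
            transversions += 1
    p, q = transitions / n, transversions / n
    return -0.5 * log((1 - 2 * p - q) * sqrt(1 - 2 * q))

# toy aligned 3' UTR fragments: two transitions (G->A, T->C), 8 sites
print(round(kimura_k2p("AAGGCCTT", "AAGACCTC"), 3))  # -> 0.347
```

Note that with P = 0.25 and Q = 0 the distance (≈0.347) exceeds the raw difference proportion (0.25), because the model corrects for multiple substitutions at the same site.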
miRNA targets are significantly enriched in genes located in differentially methylated regions or partially methylated domains Some genes may belong to specific groups of genes that are preferentially regulated by miRNAs or promoter region methylation. It is interesting to investigate the functional complementation between transcriptional promoter methylation and post-transcriptional miRNA regulation in such groups of genes. Specifically, we identified the genes located in differentially methylated regions (DMRs) and partially methylated domains (PMDs) using the data from Lister et al. [15]. According to Lister et al. [15], the DMRs were identified as the regions of the genome enriched for sites of higher levels of DNA methylation in IMR90 relative to H1 by Fisher's exact test. There were 491 regions considered as DMRs using the methylome data from the H1 and IMR90 cell lines. For the genes located in either DMRs or other genomic regions, we calculated the average number of miRNA target sites and the average promoter methylation level, respectively. Using the H1 methylome data, on average, genes located in DMRs and in other regions had mCG/CG ratios of 0.26 and 0.44 (P < 10 -15 , Mann-Whitney U test) (Figure 5a), and 17.2 and 14.3 miRNA target sites (P < 10 -6 , Mann-Whitney U test) (Figure 5b), respectively. These findings indicate that genes located in DMRs tended to maintain a low methylation level, whereas they might be regulated by more miRNAs. Therefore, there exists a negative correlation between DNA methylation level and the number of miRNA target sites. Lister et al. showed a trend of decreased methylation level in PMDs (partially methylated domains in the IMR90 cell line, contiguous regions with an average methylation level of less than 70%). We calculated the average number of miRNA target sites in PMDs and other genomic regions.
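The DMR-versus-other comparisons above rely on Mann-Whitney U tests. A stdlib-only sketch with hypothetical mCG/CG values; real analyses would use scipy.stats.mannwhitneyu, which handles ties and exact p-values:

```python
from statistics import NormalDist

def mann_whitney(x, y):
    """Mann-Whitney U with a two-sided normal approximation
    (no tie correction; adequate for an illustration)."""
    n1, n2 = len(x), len(y)
    u = 0.0
    for a in x:               # U = #{(a, b) : a > b} + 0.5 * #ties
        for b in y:
            if a > b:
                u += 1
            elif a == b:
                u += 0.5
    mean = n1 * n2 / 2
    sd = (n1 * n2 * (n1 + n2 + 1) / 12) ** 0.5
    z = (u - mean) / sd
    p = 2 * (1 - NormalDist().cdf(abs(z)))
    return u, p

# toy promoter mCG/CG ratios for genes inside vs outside DMRs (hypothetical)
dmr_genes   = [0.20, 0.25, 0.30, 0.22, 0.28]
other_genes = [0.40, 0.45, 0.50, 0.42, 0.48]
u, p = mann_whitney(dmr_genes, other_genes)
print(u, p < 0.05)  # U = 0.0: every DMR gene is less methylated; significant
```

With a complete separation of the two toy samples, U hits its extreme value and the approximate two-sided p-value falls below 0.05, the same direction of effect as the genome-wide comparison above.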
As expected, genes located in PMDs had a lower promoter methylation level (P < 10 -4 ) and were regulated by more miRNAs (P < 10 -6 ) (Figure 6). This result again demonstrated that a negative correlation existed between promoter methylation level and the number of miRNA target sites. DNA methylation and miRNA regulation in cancer genes Cancer is a common complex disease, and many genes have been reported to be involved in the development of cancer. Since cancer genes have been extensively studied and are often found to be regulated by miRNAs, it is interesting to examine whether the cancer genes are more likely to have low methylation, in accordance with our hypothesis and our observations above. To test this hypothesis, we retrieved human cancer genes and their annotations from the CGC database and compared them with other genes; Table 2 summarizes the results of these analyses. We found that cancer genes tended to have more miRNA target sites than other genes (average 18.60 miRNA target sites for cancer genes versus 14.34 for other genes, P < 10 -15 , Mann-Whitney U-test). On the contrary, cancer genes had lower methylation levels than other genes, regardless of whether the methylation level was measured by methylation broadness (mCG/CG), normalized CpG content (CpG O/E ), or the number of CGIs in the promoter regions (Table 2). For example, the normalized methylation level in cancer genes' promoter regions was lower than in other genes (average 0.33 for cancer genes versus 0.53 for other genes, P < 10 -15 , Mann-Whitney U-test). We next compared the features in two major groups of cancer genes: dominant and recessive cancer genes. Among the 427 cancer genes, there were 337 dominant cancer genes and 85 recessive cancer genes based on their annotations in the CGC database. We analyzed their DNA methylation levels and numbers of miRNA target sites. For the normalized methylation level and CpG O/E , no significant difference was detected between the dominant and recessive cancer genes.
However, the number of miRNA target sites in the dominant cancer genes (19.18) was larger than that of the recessive cancer genes (16.16). Finally, the number of CGIs in the promoter regions of the dominant cancer genes (0.73) was significantly smaller than that of the recessive cancer genes (0.87, χ2 test, P < 10 -15 ). These comparisons suggested different inheritance mechanisms for the dominant and recessive cancer genes in cancer, as we recently examined at the protein-protein interaction level [62]. Collectively, we observed that the promoter region methylation level in cancer genes was negatively correlated with their number of miRNA target sites. This observation still held after filtering out the potential confounding effects of gene expression level or expression broadness. This analysis indicated that the cancer genes tended to be silenced by miRNAs but could escape from DNA methylation suppression. Conclusion To understand how DNA methylation and miRNAs regulate the expression of their target genes, many previous exploratory studies have been reported, but all of them focused on the effect of each mechanism on the expression of target genes separately. In this study, we investigated the relationship between promoter methylation and miRNA regulation at the genome level by taking advantage of recently released human methylome data and high-quality miRNA and other gene annotation data. Our results suggested that there is a functional complementation between promoter methylation regulation at the transcriptional level and miRNA regulation at the post-transcriptional level. Specifically, the genes that are under stronger promoter DNA methylation control tend to avoid miRNA regulation by having fewer miRNA target sites, and vice versa.
From an evolutionary perspective, both the recruitment of DNA methylation in a gene's promoter region and the advent of new miRNA genes during the transition from invertebrates to vertebrates contributed to the high complexity of vertebrate organisms and cell types [63][64][65]. Although many recent studies have greatly improved our understanding of the evolutionary adaptations and conservation of DNA methylation and miRNA regulation, the relationship between DNA methylation and miRNA regulation, and how these two mechanisms dynamically influence each other's evolution and function, remain poorly understood. The results supporting complementary regulation between DNA methylation and miRNA function in this study provide a first attempt to uncover such an important and complex regulatory system, which will help us understand the evolutionary forces towards organisms' complexity beyond traditional sequence-level investigation. Additional material Additional file 1: The gene expression intensities in germline tissues. In total, 6569 genes were assigned expression intensities in 64 tissues. CpG O/E was calculated for the promoter region (-1000 bp to +200 bp relative to the TSS) of each gene. CGI: CpG island. Dominant and recessive genes are the two major cancer gene categories. *Genes with fewer than 3 CpG reads in their promoter region were excluded.
Simultaneous extraction and determination of pharmaceuticals and personal care products (PPCPs) in river water and sewage by solid-phase extraction and liquid chromatography-tandem mass spectrometry This study features the simultaneous extraction and quantification of 18 pharmaceuticals and personal care products (PPCPs). This is a pioneering method for the quantification of acetaminophen, sulfamethoxazole, diclofenac, atenolol, metoprolol, diethyltoluamide and oxybenzone in atmospheric pressure chemical ionisation mode. The method was validated for high repeatability and reproducibility with relative standard deviations of less than 10%. Instrument quantification limits for PPCPs were within the range of 0.05–1.0 µg L−1, and the method quantification limits (MQLs) for ultrapure water were within the range of 0.3–15 ng L−1. All samples were extracted using Oasis© hydrophilic–lipophilic balanced cartridges with optimised sample size and extraction conditions. Good accuracy was demonstrated, with solid-phase extraction recoveries above 80% for most PPCPs. In environmental matrices, the MQLs for river water, sewage treatment plant (STP) effluent and STP influent were 4–25, 10–153 and 38–386.5 ng L−1, respectively. The method was successfully applied to investigate occurrences of persistent PPCPs in Malaysian river and sewage samples. Introduction Pharmaceuticals and personal care products (PPCPs) have been an emerging class of pollutants in the past decade due to their ubiquitous nature, toxicity and persistence in the environment. The term 'emerging pollutants' describes the entrance or generation of pollutants into the environment in appreciable amounts, having a significant degree of persistency and exhibiting detrimental effects on organisms [1]. The occurrence of PPCPs in terrestrial and aquatic environments has exposed non-target organisms to PPCPs.
The ecotoxicity of reported PPCPs includes the development of antibacterial resistance in microorganisms such as Staphylococcus aureus; growth inhibition and retardation in phytoplankton when exposed to antibacterial compounds; and smaller adults, reduced egg production and abnormal growth in copepods [2]. Several PPCPs have been classified as potential endocrine disrupting compounds, capable of causing sexual disruption in exposed organisms. Thus, this study aims to develop a new, selective and sensitive method for the simultaneous extraction and quantification of PPCPs using LC-MS/MS in atmospheric pressure ionisation mode, which is less susceptible to matrix effects (ME) compared to electrospray ionisation (ESI) [28][29][30][31]. The developed method was applied for the quantification of PPCPs in environmental waters such as river water, sewage treatment plant (STP) influents and effluents using atmospheric pressure chemical ionisation (APCI) mode. All samples were filtered prior to SPE extraction using Whatman glass filters (GF/F, 0.7 µm, 47 mm) from Whatman International Ltd (Springfield Mill, UK). Millipore nylon membrane filters (0.2 µm, 47 mm) were used to filter all organic solvents prior to LC-MS/MS quantification and were purchased from Millipore (Massachusetts, USA). In addition, non-sterile membrane syringe filters (0.22 µm, 4 mm) were used to filter reconstituted samples after SPE extractions and were purchased from Membrane Solutions LLC (Texas, USA). Oasis © HLB 3 cc/60 mg cartridges were purchased from Waters (MA, USA). SPE extraction was carried out using an ISOLUTE VacMaster SPE vacuum manifold (UK). Nitrogen gas (N 2 ) (99.9%) was used to dry samples and all glassware after silanisation. N 2 was purchased from Linde Malaysia Sdn. Bhd. (Malaysia). All glassware was washed and prepared in accordance with EPA Method 1694 [32]. All glassware was also silanised prior to usage. Samples Ultrapure water was used as a blank reference sample.
For the purposes of method development and validation, river water was collected upstream of the Langat River Basin (N03°12ʹ 53.9ʹ, E101°53ʹ 1.06ʹE), where minimal impact from anthropogenic activities was expected. River water samples were collected in 1 L white non-transparent plastic bottles. Sewage samples were collected using the ISCO 3700 Portable sampler as 48-hour time-paced multiple-bottle composites. Samples were collected every 4 hours; thus, each 1 L composite sample consisted of 12 smaller samples collected at an Extended Aeration STP. All samples were transported on ice at approximately 4°C. Upon reaching the laboratory, samples were acidified to pH 2 using 37% sulphuric acid and preserved with 1 g L −1 sodium azide to minimise microbial degradation, as well as 50 mg L −1 ascorbic acid to quench any residual oxidant. Samples were filtered using Whatman GF/F filter paper to remove any suspended particulate matter. Typically, samples were extracted within 24 hours. The developed method was applied to detect PPCPs in the Langat River Basin; three river water locations (RW1 upstream, RW2 midstream and RW3 downstream) and one sewage treatment plant (STP1) were sampled. A map of sampling locations is shown in Figure 1. The Oasis HLB 3 cc/60 mg cartridges were conditioned with 3 mL of MTBE, 3 mL of methanol and 3 mL of acidified ultrapure water (acidified to pH 2 with formic acid). Each conditioning step was carried out by gravity elution followed by drying using a vacuum manifold. Samples were loaded onto cartridges at a rate of 10 mL min −1 . SPE cartridges were then washed with 3 mL of ultrapure water, which had been acidified to pH 2 with formic acid, and then vacuum-dried for 15-20 min. Subsequently, PPCPs were eluted into 15 mL centrifuge tubes using 3 mL of a methanol:MTBE mixture (10:90), followed by 3 mL of methanol. To achieve optimum results, elution was carried out by gravity flow.
The extract was then dried under a stream of nitrogen gas until near dryness. The extract was reconstituted with 250 µL of a mixture consisting of ultrapure water with 0.1% formic acid: methanol (75:25). The final extract was filtered using a 0.22 µm, 4 mm-diameter nylon syringe filter and transferred into a 2 mL amber glass vial fitted with a 250 µL silanised vial insert. 2.3.2. Liquid chromatography-tandem mass spectrometry An Accela high-speed LC system interfaced to a TSQ Quantum Ultra triple-stage quadrupole (QqQ) mass spectrometer (Thermo Scientific, CA, USA) was used to quantify the PPCPs. A Thermo Scientific Hypersil GOLD column (50 mm × 2.1 mm, 1.9 µm) was used in this study. The mobile phase consisted of a mixture of three solvents: 0.1% formic acid in ultrapure water (A), methanol (B) and ACN (C). A flow rate of 100 µL min −1 was used. The gradient was as follows: 90% A, 9% B and 1% C held for 1 min and then changed linearly to 1% A, 79% B and 20% C by the 15th min and held for 4.5 min; the gradient was then returned linearly to its initial composition and held constant for 5 min to ensure equilibration. The LC-MS/MS dwell volume was 65 µL. Quantifications in both APCI positive and negative modes were carried out using the LC-MS/MS fast-switching mode of less than 25 ms. The total run time was 25 min. The sample injection volume was 10 µL. Other optimised conditions include the discharge current (4.8 µA), vaporiser temperature (250°C), capillary temperature (250°C), sheath gas pressure (20 arb units) and auxiliary gas pressure (5 arb units). The incorporation of the final source parameters, compound parameters, mobile phase, gradient elution and reconstitution solvents for the quantification of PPCPs in HPLC methanol yields chromatograms as shown in Figure 2. Optimised APCI and MS/MS parameters were adopted for selected reaction monitoring (SRM) in LC-MS/MS analysis.
Quantification and method validation Validation of instrumental intra-day precision was carried out by quantifying a mixture of 100 µg L −1 of PPCPs in methanol at three intervals (morning, noon and evening) on three consecutive days using LC-MS/MS. Instrumental inter-day precision was verified by quantifying a mixture of 100 µg L −1 of PPCPs in methanol on five consecutive days using LC-MS/MS. Precision of the method was calculated by measuring the dispersion of sets of data under repeatability or reproducibility conditions as the relative standard deviation, RSD (%) = (SD/mean) × 100 (Equation (1)). Instrument detection limits (IDLs) and instrument quantification limits (IQLs) were validated by direct injection of decreasing concentrations of the PPCPs. The IDL and IQL of each target compound were determined at signal-to-noise (S/N) ratios of 3 and 10, respectively. The instrument was calibrated using a seven-point calibration curve at concentrations of 1, 10, 25, 50, 100, 250 and 500 ng mL −1 using a mixture of PPCPs in ultrapure water with 0.1% formic acid: methanol (75:25). The lowest calibration point was set at 1 ng mL −1 as it is the conservative IQL. A SIS mixture of 200 ng mL −1 of 13 C 2 -17α-ethynylestradiol, diclofenac-d 4 , 13 C 3 -trimethoprim, 13 C 3 -sulfamethazine and 13 C 3 -caffeine was added at every calibration concentration to generate the relative response factor, calculated according to Equation (2): RF = (A x × C IS )/(A IS × C x ), where A x is the peak area for the analyte of interest, A IS the peak area for the internal standard, C IS the concentration of the internal standard and C x the concentration of the analyte of interest. Linearity of the calibration curve was determined by employing least-squares regression. The coefficient of determination (R 2 ) was used to determine the linearity of each target compound. Xcalibur software version 2.0.7 from Thermo Scientific (CA, USA) was used for data collection, peak integration and linear regression.
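The precision and calibration quantities in this section can be sketched as follows. Note that the RSD formula and the relative response factor are reconstructed here from the variable definitions given in the text (a standard internal-standard calibration form); all numeric inputs are hypothetical:

```python
from math import sqrt

def rsd_percent(values):
    """Relative standard deviation (Equation (1)): sample SD / mean, in %."""
    n = len(values)
    mean = sum(values) / n
    sd = sqrt(sum((v - mean) ** 2 for v in values) / (n - 1))
    return 100 * sd / mean

def response_factor(a_x, a_is, c_is, c_x):
    """Relative response factor (Equation (2), reconstructed):
    RF = (A_x * C_IS) / (A_IS * C_x)."""
    return (a_x * c_is) / (a_is * c_x)

# three replicate injections of one standard (hypothetical responses)
print(rsd_percent([98.0, 102.0, 100.0]))       # -> 2.0 (% RSD)
# analyte peak area 5000 at 100 ng/mL; SIS area 10000 at 200 ng/mL
print(response_factor(5000, 10000, 200, 100))  # -> 1.0
```

A 2% RSD on the toy replicates would comfortably satisfy the repeatability criteria reported for this method.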
Each matrix was spiked with the PPCPs mixture at concentrations one to five times the estimated method detection limits (MDLs), resulting in S/N ratios between 2.5 and 5. Seven replicates of samples were subjected to the entire methodology. MDLs were calculated, based on a 99% confidence level, to be greater than zero and within one-third to one-fifth of the spike level using Equation (3): MDL = t (n−1,1−α = 0.99) × SD, where t (n−1,1−α = 0.99) is 3.14, the Student's t-value for six degrees of freedom, and SD is the standard deviation of the seven replicates [32][33][34]. The method quantification limits (MQLs) were calculated as 10 times the standard deviation of the spike level [34]. SPE recoveries were validated by spiking the PPCPs mixture into ultrapure water (250 ng L −1 ), river water (250 ng L −1 ), STP effluent (500 ng L −1 ) and STP influent (1000 ng L −1 ) prior to extraction. A reference sample for each matrix was spiked with the same concentration of the PPCPs mixture but after SPE extraction. The percentage SPE recovery was established using Equation (4): recovery (%) = C s /C r × 100, where C s is the concentration of target compounds in the spiked matrix and C r the concentration of target compounds in the reference sample. ME were evaluated by comparing the signal of target compounds in the sample matrix with the signal of target compounds in ultrapure water. River water, STP effluent and STP influent were spiked with 250, 500 and 1000 ng L −1 , respectively, after SPE extraction. ME (%) was calculated using Equation (5): ME (%) = [A s − (A sp − A usp )]/A s × 100, where A s is the peak area of spiked ultrapure water, A sp is the peak area of the spiked matrix extract and A usp is the background concentration of the matrix. ME% > 0% indicates ionisation suppression while ME% < 0% indicates ionisation enhancement. Validation of the SPE recovery and ME was performed in accordance with Al-Odaini et al.
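Equations (3)-(5) above translate directly into code; a minimal sketch with hypothetical replicate and peak-area values:

```python
T_99 = 3.14  # Student's t for 6 degrees of freedom at the 99% level

def mdl_mql(replicates):
    """Equation (3): MDL = t * SD of the spiked replicates; MQL = 10 * SD."""
    n = len(replicates)
    mean = sum(replicates) / n
    sd = (sum((v - mean) ** 2 for v in replicates) / (n - 1)) ** 0.5
    return T_99 * sd, 10 * sd

def spe_recovery(c_spiked, c_reference):
    """Equation (4): recovery (%) = Cs / Cr * 100."""
    return 100 * c_spiked / c_reference

def matrix_effect(a_ultrapure, a_spiked_matrix, a_unspiked_matrix):
    """Equation (5): ME% > 0 = ionisation suppression, < 0 = enhancement."""
    return 100 * (a_ultrapure - (a_spiked_matrix - a_unspiked_matrix)) / a_ultrapure

# seven spiked replicates, ng/L (hypothetical)
mdl, mql = mdl_mql([4.8, 5.2, 5.0, 4.9, 5.1, 5.3, 4.7])
print(round(mdl, 2), round(mql, 2))  # -> 0.68 2.16
print(spe_recovery(212.5, 250))      # -> 85.0 (%)
print(matrix_effect(1000, 900, 50))  # -> 15.0 (% suppression)
```

In the toy data the MDL (0.68 ng/L) sits well below the ~5 ng/L spike level, consistent with the requirement that the spike fall within one-third to one-fifth of itself times t·SD bounds described above.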
[21]. Concentrations of PPCPs in environmental matrices were calculated using Equation (6), where C ex is the concentration of target compounds in the sample extract, V ex the volume of the sample extract, V s the volume of sample collected and CF the concentration factor. Samples with concentrations outside the calibration range were diluted prior to re-analysis. Results and discussion 3.1. Optimisation of SPE conditions Many studies have featured the use of large HLB cartridge sizes such as 20 cc/1 g, 6 cc/500 mg and 6 cc/200 mg [5,8,10,32,35]. Renew and Huang [36] used an anion-exchange cartridge and an HLB cartridge in tandem for the extraction of antibiotics. Other studies have used the smallest SPE sorbent size and found acceptable recoveries of analytes [7,21]. Therefore, the smallest SPE cartridge (3 cc/60 mg) was used in this method due to its lower cost and promising recovery. Different sample volumes (25, 50, 100, 150, 200 and 250 mL) were optimised for the environmental matrices. The sample volume with the highest recovery of the PPCP standards was chosen for each matrix. The optimised sample volumes were 150 mL of river water, 150 mL of STP effluent, 100 mL of STP intermediate and 50 mL of STP influent. The optimisation of sample volume is important to avoid over-loading the SPE cartridge. Different ratios of reconstitution solvent and formic acid were optimised to improve peak intensity and separation. It was found that the use of a higher proportion of ultrapure water in the ratio led to lower chromatogram baselines and better separation for estradiol, estriol, estrone, ethynylestradiol, levonorgestrel and norethindrone. The addition of a small amount of formic acid led to sharper peaks and better selectivity and sensitivity [37]. The optimised reconstitution solvent used in this study was 75:25 (ultrapure water with 0.1% HPLC formic acid: HPLC methanol). Optimisation of LC-MS/MS conditions 3.2.1.
Optimisation of best ionisation

Non-polar compounds such as naproxen, gemfibrozil, diclofenac, DEET and oxybenzone experienced better ionisation in APCI, with higher detected peak areas. Natural and synthetic hormones performed best in APCI, as they are difficult to ionise in ESI mode; similar observations have been reported in previous literature and could be due to their high lipophilicity and lack of polar functional groups [5,15,16]. In this study, some PPCPs, such as caffeine, trimethoprim and sulfamethoxazole, demonstrated higher ionisation in ESI mode. However, these minor losses in ionisation were sacrificed to achieve the main aim of unification in a single quantification; ionisations for the other compounds were comparable. The APCI source was therefore selected for further optimisation. In addition, Wang and Gardinali [16] reported lower MDLs for ibuprofen, DEET, caffeine, acetaminophen, progesterone, estradiol and ethynylestradiol in APCI compared to ESI.

Optimisation of source-dependent parameters

Discharge current, vaporiser temperature, capillary temperature, and sheath and auxiliary gas pressures were optimised in this study. Most analytes showed a steady increase in peak area with increasing discharge current, except for estradiol, metoprolol, estrone and estriol. Most previous studies have used a high vaporiser temperature, from 350°C to 500°C [5,15]. In this study, a higher vaporiser temperature led to a decrease in peak area for most analytes, with the exception of marginal increases for metoprolol; therefore, a vaporiser temperature of 250°C was selected. The lower vaporiser temperature in this method was expected, as a lower solvent flow rate of 100 µL min−1 was employed. Meanwhile, most analytes demonstrated a major improvement at a capillary temperature of 250°C, with the most distinct improvement in norethindrone's peak area.
Increasing the sheath gas pressure and auxiliary gas pressure over the ranges of 20-45 and 5-30 arb units, respectively, yielded a steady decrease in peak areas for most PPCPs, except for trimethoprim and sulfamethoxazole, which fluctuated during optimisation. The optimum sheath and auxiliary gas pressures were 20 and 5 arb units, respectively.

Optimisation of mobile phase and gradient elution

Several mobile phases were optimised in this study, including methanol, ACN and ultrapure water, as well as different acidic additives such as acetic acid and formic acid. Of all the optimised mobile phases and additives, ACN yielded an improved peak shape and lower chromatogram baselines. ACN concentrations of 10%, 20% and 30% were optimised, and their impacts on the peak areas of the analytes are shown in Figure S1 (supplemental data). Significantly improved peak areas were found for all PPCPs except acetaminophen when increasing ACN from 10% to 20%. However, a further increase of ACN to 30% was detrimental, evidencing steep reductions in all peak areas. Thereafter, 20% ACN was adopted to develop the mobile phase gradient. Several flow rates were optimised to ensure that all 18 PPCPs had sufficient time for separation. In this study, estriol and estrone had the same precursor and daughter ions but different relative abundances. A faster flow rate of 200 µL min−1 hampered adequate separation of estriol and estrone; their optimum separation was achieved at 100 µL min−1.

Precision

The precision of the method was validated by repeatability (intra-day precision) and reproducibility (inter-day precision) under identical conditions. According to USEPA Method 1694, the required initial precision for acetaminophen, caffeine, gemfibrozil, naproxen, sulfamethoxazole and trimethoprim is a relative standard deviation (RSD) of not more than 30% [32].
Other studies conducted elsewhere have also revealed both intra-day and inter-day RSD values of less than 15% [7,8,39,40]. In this study, the RSD values for repeatability and reproducibility ranged from 0.4% to 7.3% and from 3.6% to 8.7%, respectively (Table S3 in supplemental data). Overall, the RSD for both repeatability and reproducibility was below 10%, indicating good precision.

Sensitivity

The IDLs and IQLs of the PPCPs in this study were quantified in the ranges of 0.001-0.1 µg L−1 and 0.005-1.0 µg L−1, respectively. The linearity of all calibration curves (R2) was above 0.997. IDLs, IQLs and R2 values for all PPCPs are summarised in Table S4 (supplemental data). The MDLs of all PPCPs are depicted in Table 1. MDLs ranged from 0.1 to 5 ng L−1 for ultrapure water (reference material), 1.5 to 8 ng L−1 for river water, 3 to 48 ng L−1 for STP effluent and 2 to 121.5 ng L−1 for STP influent. PPCP quantification often involves trace analysis; therefore, low MDLs in the parts-per-trillion range are crucial. MDLs for acetaminophen, atenolol, diclofenac, ethynylestradiol, levonorgestrel, metoprolol and norethindrone in river water and STP effluent were lower than in a previously published method that quantified in ESI mode [21]. The MQLs correspond to the reporting limits of the method; any environmental concentration below the MQL would be quantified with weak precision and poor accuracy. MQLs for each analyte are listed in Table 1. The MQLs ranged from 0.3 to 14 ng L−1 for ultrapure water (reference sample), 4 to 25 ng L−1 for river water, 10 to 153 ng L−1 for STP effluent and 38 to 386.5 ng L−1 for STP influent. The MDLs and MQLs for acetaminophen and caffeine reported in this study for river water are lower than those established in the USEPA method [32]. In addition, the MQLs for diclofenac in STP influent and effluent are lower than previously published using ESI mode [9].

Accuracy

3.3.3.1. SPE recovery.
SPE recovery is analyte- and matrix-specific; therefore, the percentage recovery for each PPCP was validated, as shown in Table 2. The recovery also acts as a performance evaluation of the HLB cartridge's ability to extract all target PPCPs. Recoveries for most PPCPs were above 80%, with minimal exceptions. The relative recoveries of PPCPs using isotope dilution were within ranges of 20-98.9% for ultrapure water (reference sample), 37-129% for river water, 54.1-96.5% for STP effluent and 63.1-97.8% for STP influent. According to the USEPA [32], recoveries for acetaminophen, caffeine, gemfibrozil, naproxen, sulfamethoxazole and trimethoprim are required to be within a range of 50-120% in reference water when the recovery is corrected by an internal standard. The optimised method fulfilled this criterion for all of these compounds, with the exception of acetaminophen, which had lower recoveries in the reference water and river water samples. The relative recovery for acetaminophen in reference water was 34.9% in this study, in comparison to 32%, 8.2% and 40% in previously published methods [5,21,39]; as such, the low recovery of acetaminophen was comparable with other studies. Atenolol also experienced low SPE recovery in reference water (20.1%). The same was observed by Lin et al. [10], where the recovery improved from 26.8% in reference water to 92.3% in river water. Some PPCPs demonstrated a reduction in recovery as the complexity of the environmental water matrices increased from river water to sewage. High levels of organic matter and chemicals in samples compete for binding sites, reducing the sorption efficiency of SPE cartridges [21,40]. This is an unavoidable phenomenon, but additional sample clean-up and more specific isotope dilution could improve the recovery of PPCPs [6,8,41]. Analyte-specific isotope dilution is recommended to overcome particularly low recovery in reference water, as well as increased matrix complexity.
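The validation arithmetic used in this section (Equations (3)-(5), the RSD used for precision, and the extract-to-sample back-calculation of Equation (6)) can be sketched compactly. This is a minimal sketch with hypothetical replicate values; the relation CF = Vs/Vex is our assumption, consistent with the variable definitions given for Equation (6):

```python
import statistics

# Student's t-value for n-1 = 6 degrees of freedom at the 99% confidence
# level, as given in the text for Equation (3).
T_99_DF6 = 3.14

def mdl(replicates):
    """Equation (3): MDL = t(n-1, 1-alpha=0.99) * SD of the spiked replicates."""
    return T_99_DF6 * statistics.stdev(replicates)

def mql(replicates):
    """MQL = 10 * SD of the spiked replicates."""
    return 10 * statistics.stdev(replicates)

def rsd_percent(values):
    """Relative standard deviation (%) used for intra-/inter-day precision."""
    return statistics.stdev(values) / statistics.mean(values) * 100

def spe_recovery(c_spiked, c_reference):
    """Equation (4): recovery (%) = Cs / Cr * 100."""
    return c_spiked / c_reference * 100

def matrix_effect(a_ultrapure, a_spiked_matrix, a_unspiked_matrix):
    """Equation (5): ME (%) = [As - (Asp - Ausp)] / As * 100.
    ME > 0 indicates ionisation suppression, ME < 0 enhancement."""
    return (a_ultrapure - (a_spiked_matrix - a_unspiked_matrix)) / a_ultrapure * 100

def sample_concentration(c_extract, v_extract, v_sample):
    """Equation (6): C = Cex * Vex / Vs = Cex / CF, assuming CF = Vs / Vex."""
    return c_extract * v_extract / v_sample

# Hypothetical seven replicate concentrations (ng/L) for one analyte:
reps = [248, 252, 250, 247, 253, 249, 251]
print(round(mdl(reps), 2), round(mql(reps), 2), round(rsd_percent(reps), 2))
print(spe_recovery(230, 250))                 # 92.0 (%)
print(matrix_effect(1000, 900, 50))           # 15.0 (%) -> suppression
print(sample_concentration(37500, 1.0, 150))  # 250.0 ng/L from a 1 mL extract
```

Note that the MDL acceptance check (spike level within one-third to one-fifth of the MDL estimate) is applied on top of Equation (3) and is not shown here.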
Vanderford and Snyder [6] demonstrated that utilising matched isotope-labelled analytes could greatly compensate for SPE loss and mitigate ME. In this study, gemfibrozil and naproxen had recoveries of 63.1% and 64.1%, respectively, in STP influent; with the specific isotope dilution reported by Vanderford and Snyder [6], their recoveries were 90% and 102%, respectively. On the same note, the analytes in this study that matched the SIS compounds, that is, caffeine, ethynylestradiol, diclofenac and trimethoprim, yielded recoveries above 90% in STP influent. Therefore, isotope dilution is recommended in order to improve SPE recovery. Despite the advantages of having exact isotope-labelled standards, this is often not practised [7,8,10,39,40]: isotope-labelled standards are rare and expensive, and they are not available for all compounds. Common practice in the quantification of PPCPs, given financial shortcomings and limited supplies, is to use the closest approximation to the analyte in structure and behaviour. In validating the performance criteria of SPE HLB cartridges, several other parameters, such as MDLs and MQLs, need to be taken into consideration. According to Gros et al. [7], low recoveries of analytes are usually not an obstacle to producing reliable quantifications as long as precision (repeatability and reproducibility) and sensitivity (MDLs and MQLs) are good. All compounds demonstrated good precision and sensitivity; therefore, the Oasis HLB 3 cc/60 mg cartridge was confirmed for further application.

3.3.3.2. Matrix effects.

The APCI mode experienced less ME, especially for non-polar and steroid compounds, than ESI mode [5,42,43]. This is beneficial when quantifying environmental matrices. The ME of each analyte is shown in Table 3. The SRM chromatograms of PPCPs and hormones spiked into different environmental matrices are shown in Figures S2 and S3, respectively (supplemental data).
All target compounds demonstrated good separation with minimal noise peaks. Five SISs were added to the sample to assist in correcting recoveries for SPE and ME. The peak area of caffeine was suppressed by 203.9%, but after recovery correction by its SIS this became an enhancement of 37.7% in STP effluent. In the event that an exact matched SIS was not available, the most compatible SIS was evaluated and chosen based on criteria such as structural similarity, behavioural similarity and performance of recovery correction; each analyte was thus matched to one of the SISs spiked in this study. 13C3-Trimethoprim was able to provide better recovery correction for atenolol in all three matrices: the ME after recovery correction was an enhancement of 10.5% in river water, 5.3% in STP effluent and 13.2% in STP influent. A similar method for selecting SISs has been published elsewhere [7]. Lin et al. [10] adopted 13C6-sulfamethazine as the sole SIS for quantifying 97 pharmaceuticals and hormones in environmental water samples, and another study reported the use of two SISs to correct the recoveries of 28 pharmaceuticals [39]. Therefore, using five SISs in this study was deemed sufficient to provide a reasonable quantification. Furthermore, dilution of sample extracts has also been shown to significantly reduce ME. In a study conducted by Gros et al. [7], the dilution of post-SPE extracts at ratios of 1:2 and 1:4 reduced signal suppression in STP effluent and influent, respectively; the drawback of this method is its loss in sensitivity. Another study concluded that post-SPE dilution requires a larger volume of SIS and results in slightly higher reporting limits [6]. Despite the reported drawbacks, dilution was shown to reduce ME, which is the main obstacle in environmental analysis. Instead of conducting post-SPE dilutions, which involve tedious calculations and a higher (and costly) volume of SIS, sample volume reduction was utilised in this study.
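The recovery correction applied via the SISs can be sketched as follows. The numbers are hypothetical, and the underlying assumption, standard for surrogate internal standards, is that the fractional loss observed for the SIS applies equally to its matched analyte:

```python
def sis_corrected(c_measured, sis_spiked, sis_measured):
    """Correct a measured analyte concentration for SPE loss and matrix
    effects using a surrogate internal standard (SIS): the fractional
    recovery of the SIS is assumed to apply to the analyte as well."""
    return c_measured * (sis_spiked / sis_measured)

# Hypothetical: SIS spiked at 500 ng/L is recovered at 400 ng/L (80% recovery);
# an analyte measured at 200 ng/L is corrected upward accordingly.
print(sis_corrected(200, 500, 400))  # 250.0
```

The quality of this correction depends on how closely the SIS tracks the analyte through extraction and ionisation, which is why structural and behavioural similarity were used as matching criteria above.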
Several sample volumes were extracted for each matrix, and their recoveries were documented and compared. The optimum sample volumes were 150 mL for river water, 100 mL for STP effluent and 50 mL for STP influent. Smaller sample volumes have been previously reported in other studies [6,21].

3.4. Environmental application

Limited studies have been carried out on the occurrence and distribution of PPCPs in Malaysian waters. To date, there are only a few relevant studies on the occurrence of human pharmaceuticals and synthetic hormones [9,21-23], and the occurrence of PCPs has never been studied in Malaysia. To the authors' best knowledge, DEET, gemfibrozil, estradiol, estriol, estrone, naproxen, oxybenzone, progesterone, sulfamethoxazole and trimethoprim are quantified here for the first time in Malaysian waters. The ranges and means of PPCP concentrations in samples collected from the three river locations (upstream, midstream and downstream) and the Extended Aeration STP (influent and effluent) are shown in Tables 4 and 5, respectively. Samples were collected in triplicate at every river water sampling point, totalling nine river samples for each PPCP. Meanwhile, sewage samples were collected using an ISCO 3700 portable sampler as 48-hour, time-paced, multiple-bottle composites, with samples collected every 4 hours; each composite sample therefore consisted of 12 smaller samples. Four PPCPs, namely levonorgestrel, naproxen, norethindrone and trimethoprim, were not detected (ND) in RW1, a recreational spot where swimming and picnic activities have previously occurred. Most PPCP concentrations peaked at midstream, as RW2 is an urbanised town. Seven PPCPs, namely ethynylestradiol, gemfibrozil, naproxen, norethindrone, progesterone, sulfamethoxazole and trimethoprim, had higher concentrations downstream at RW3. The highest PPCP concentration detected in river water was estriol at RW2, with a mean concentration of 3993 ng L−1.
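Paired influent and effluent concentrations from the STP sampling are compared via removal efficiency, conventionally computed as (Cin − Cout)/Cin × 100. A minimal sketch with hypothetical values:

```python
def removal_efficiency(c_influent, c_effluent):
    """STP removal (%) = (Cin - Cout) / Cin * 100; a negative value would
    indicate a higher effluent than influent concentration."""
    return (c_influent - c_effluent) / c_influent * 100

# Hypothetical influent/effluent pair (ng/L):
print(round(removal_efficiency(1000.0, 120.0), 1))  # 88.0
```

This simple concentration-based form ignores flow variation and in-plant retention time; flow-weighted composites like those collected here reduce, but do not eliminate, that limitation.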
Norethindrone and trimethoprim were detected below their MDLs in RW2, while gemfibrozil and progesterone were ND in RW2. As for the concentrations of PPCPs in the STP, the three highest influxes were caffeine, estriol and acetaminophen, with mean concentrations of 25,578, 7711 and 4236 ng L−1, respectively. Removal efficiencies of the STP were calculated in accordance with Li et al. [44]. These compounds were all excellently removed in the STP, with removal percentages above 85%. However, due to their high influx into the STP, they were not completely eliminated, with significant amounts still detected in the effluent sample. The highest PPCP concentration detected in the effluent was 1000 ng L−1 of estriol, with a removal of 88.6%. Estriol was detected in the Langat River Basin at alarming levels that could elicit chronic toxicity in aquatic organisms; according to Metcalfe et al. [45], exposure to nanogram-per-litre concentrations of estriol induces intersex (development of testis-ova) and altered sex characteristics in Japanese medaka. The high concentrations of natural estrogens (estradiol, estriol and estrone) are most likely associated with the discharge of untreated human and animal waste. According to Juahir et al. [46], 39% of the point sources of pollution in the Langat river basin consisted of swine and poultry farming; in addition, several recreational areas constitute non-point sources.

Conclusion

A selective and sensitive LC-MS/MS method was developed for the detection and quantification of 18 PPCPs in environmental waters. SPE using HLB sorbent provided an efficient method for simultaneous extraction, sample clean-up and enrichment. High selectivity was achieved by adopting SRM mode, in which specific pairs of precursor-product ions were monitored for quantification and confirmation. SPE loss and ME were mitigated using five deuterium-labelled SISs.
Losses during sample preparation, ME and instrumental fluctuations were compensated for by the application of the SIS quantification method. The developed method was validated for its performance in ultrapure water, river water, STP effluent and STP influent. Recoveries for the majority of PPCPs were above 80% in most environmental matrices, which is within the acceptance level of the USEPA [32]. MDLs and MQLs for some analytes in ultrapure water were as low as 0.1 and 0.3 ng L−1, respectively, and MDLs and MQLs for some PPCPs in river water, STP effluent and influent were lower than those reported in the USEPA and previously published methods [9,21,32]. In addition, the intra-day and inter-day precision of the quantification method were recorded to be less than 10% RSD. The robust and reliable method was then applied for the detection of PPCPs in Malaysian environmental water. The detection of several PPCPs, namely sulfamethoxazole, trimethoprim, estradiol, estriol, estrone, progesterone, DEET, oxybenzone, naproxen and gemfibrozil, in Malaysian environmental waters was reported for the first time. Estriol was quantified at several sampling locations at concentrations above 1000 ng L−1; these high concentrations are most probably associated with point-source pollution from untreated human and animal wastes. The occurrence of selected PPCPs at high concentrations is alarming, indicating the possibility of eliciting aquatic toxicity.

Disclosure statement

No potential conflict of interest was reported by the authors.
Securing Cluster-heads in Wireless Sensor Networks by a Hybrid Intrusion Detection System Based on Data Mining

A Cluster-based Wireless Sensor Network (CWSN) is a kind of WSN that, by avoiding long-distance communications, preserves the energy of nodes and is therefore attractive for related applications. The criticality of most applications of WSNs, together with their unattended nature, makes sensor nodes susceptible to many types of attacks. Cluster heads (CHs) are the most attractive targets for attackers, and given their critical operations in CWSNs, their compromise and control by an attacker will disrupt the entire cluster and sometimes the entire network; their security therefore needs particular attention and must be ensured. In this paper, we introduce a hybrid Intrusion Detection System (HIDS) for securing CHs that takes advantage of both anomaly-based and misuse-based detection methods, that is, a high detection rate and a low false-alarm rate. By using a novel preprocessing model, it also significantly reduces the computational and memory complexities of the proposed IDS and finally allows the use of clustering algorithms for it. The simulation results show that the proposed IDS, in comparison to existing works, which often have high computational and memory complexities, can serve as an effective and lightweight IDS for securing CHs.

In this paper, we present a hybrid IDS based on data mining algorithms for securing CHs, which, by using a novel data pre-processing model, reduces the computational complexity and memory consumption of the IDS and allows us to use data mining classification algorithms to detect intrusions and secure CHs in WSNs.
Therefore, in the proposed system, in addition to the benefits of both anomaly-based and misuse-based detection methods, which lead to a high detection rate and a low false-alarm rate, the proposed novel data pre-processing model keeps energy consumption to a minimum, which is very important in WSNs. In order to evaluate the proposed method and present the results, and because no real dataset exists for intrusion detection in WSNs, the KDDCup'99 dataset is used to evaluate the performance of IDSs in these networks. The simulation results show that the proposed IDS, in comparison to existing works, which often have high computational and memory complexities, can serve as an effective and lightweight IDS for securing CHs. This paper is organized as follows. In Section II, we introduce IDSs and then describe datasets for IDSs. In Section III, a review of the most important IDSs devised for WSNs is presented, along with their advantages and shortcomings. Section IV describes the proposed IDS. In Section V, we simulate the proposed IDS and present the related results. Finally, Section VI concludes the paper and outlines future work.

II. PRELIMINARIES

In this section, IDSs are described along with their types and requirements, and then datasets for IDSs are introduced.

A. Intrusion Detection Systems

In general, any type of unauthorized or unwanted activity in a network is called an intrusion. An IDS is a set of tools, methods, and resources to help identify, assess, and report intrusions. An IDS is not a single, separate unit, but rather part of an overall protection system that is installed alongside a network node.
An intrusion is defined as any set of activities that attempts to endanger the integrity, confidentiality or availability of a resource. An Intrusion Prevention System (IPS), which includes methods such as encryption, authentication, key management [8], [9], access control and secure routing, is considered the first line of defense against intrusions [10]. However, even in a relatively secure network, an IPS cannot completely prevent intrusions. Therefore, after IPSs, IDSs are considered the second line of defense against attacks and intrusions. The expected operating conditions for an IDS are as follows [11], [12]:
- Does not add new flaws and weaknesses to the network.
- Uses few network resources and does not reduce performance by imposing overhead.
- Low false-alarm rate, which is the percentage of normal activity that is detected as anomalous.
- High detection rate, which is the percentage of anomalies that are properly detected.
- Runs continuously and acts impalpably for the system and users (transparency principle).
- Conforms to standards to allow for future cooperation and development.

Each IDS has three main components [12], [13]:
- Monitoring section: monitors local events and neighbors, often controlling resource efficiency through analysis of traffic and local events.
- Analysis and detection: the main part of the IDS, dependent on the modeling algorithm. In this section, the behavior and activities of the network are analyzed and a decision is made whether to declare them an intrusion.
- Warning section: responsible for the reaction against an intrusion; it generates an alarm when an intrusion is detected.
IDSs are categorized into three groups based on their operation, which are described below [10], [14]:

Anomaly-based detection: This method is based on a statistical behavior model of the normal operations of network nodes, which is profiled; a certain deviation from it is detected as an anomaly. In other words, this method first describes the actual features of 'normal behavior' and then detects any activities that deviate from these behaviors as intrusions. The main advantage of this method is its high detection rate; its disadvantage is that it generates a high false-alarm rate.

Misuse-based detection: In this method, the patterns of previously known attacks are produced and used as a reference for identifying future attacks. The advantage of this technique is that it can accurately and efficiently detect known attacks. The disadvantages are that it requires prior knowledge to build attack patterns and that it cannot detect novel attacks. This method therefore has a low false-alarm rate, but its detection rate is also relatively low.

Specification-based detection: This method defines a set of specifications and constraints that describe the correct operation of a program or protocol, and program execution is then monitored against the defined specifications and constraints. In effect, this method combines the aims of the misuse and anomaly detection methods, being able to detect previously unknown attacks at a low false-alarm rate. Its disadvantage is the manual setting of all specifications, which is a time-consuming process for users.

B. Dataset for Intrusion Detection Systems

Because no real dataset exists for intrusion detection in WSNs, the KDDCup'99 dataset is used to evaluate the performance of IDSs in these networks.
The KDDCup'99 dataset was designed by Columbia University through the simulation of intrusions and attacks in a military network environment at the DARPA organization in 1998 [15]. It was performed in the MIT Lincoln Research Labs and then announced on the UCI KDD Cup 1999 Archive. Each sample of this dataset represents a connection between two network hosts, described by several groups of features:
- Traffic features are the attributes computed using a two-second time window.
- Host features are the attributes designed to assess attacks which last for more than two seconds.

Each sample is labeled as either normal behavior or one specific attack. The dataset contains 23 class labels, of which one is normal and the remaining 22 are different attacks, categorized into four classes: DoS, Probe, R2L, and U2R. In Table I, these four classes are presented with their descriptions and types of attacks [15], [17]; the descriptions of three of them are as follows:
- Probe: the attacker tries to gain information about the victim machine, with the intention of checking for vulnerabilities on it (e.g., port scanning). The attacks of Spoofed, Altered, or Replayed Routing Information, Sinkhole, Sybil, Wormholes, and Acknowledgment Spoofing need a probe step before they begin to attack, so they would be classified as Probe attacks.
- R2L: the attacker tries to gain access to the victim system by compromising its security via password guessing or breaking. Spoofed, Altered, or Replayed Routing Information, Sinkhole, Sybil, Wormholes, Hello Floods, and Acknowledgment Spoofing use weaknesses in the system to make an attack, so they would be classified as R2L.
- U2R: the attacker has local access privileges on the victim machine and tries to gain super-user (administrator) privileges, e.g., via a buffer-overflow attack. Sinkhole, Wormholes, and Hello Floods are caused by inner attacks and are therefore classified as U2R.
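As an illustration of how raw KDDCup'99 labels map onto these four classes, the sketch below uses a small, partial mapping (only a few well-known labels are shown; the full dataset has 22 attack labels):

```python
# Partial, illustrative mapping of raw KDDCup'99 labels to the four classes.
ATTACK_CLASS = {
    "normal": "Normal",
    "smurf": "DoS", "neptune": "DoS", "back": "DoS",
    "ipsweep": "Probe", "portsweep": "Probe", "satan": "Probe",
    "guess_passwd": "R2L", "warezclient": "R2L",
    "buffer_overflow": "U2R", "rootkit": "U2R",
}

def classify_label(label: str) -> str:
    # Label fields in the raw KDD files carry a trailing '.'
    return ATTACK_CLASS.get(label.rstrip("."), "Unknown")

print(classify_label("smurf."))            # DoS
print(classify_label("buffer_overflow."))  # U2R
```

Collapsing the 22 fine-grained attack labels into the four classes of Table I is the usual first pre-processing step before training a classifier on this dataset.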
In this paper, we used kddcup.data_10_percent.gz as our sampling source in creating the training and testing datasets. This dataset contains 10% of the data in the KDDCup'99 dataset, with a total of 494,021 sample records; complete statistics for this dataset are presented in Table III.

III. RELATED WORK

So far, many IDSs have been introduced for WSNs, but there is still competition to increase the detection rate, reduce the false-alarm rate and minimize energy consumption. Considering the high sensitivity of CHs and the need to guarantee their security, and also the disadvantages of anomaly-based and misuse-based detection, neither method alone is capable of securing CH nodes. Therefore, the best option for securing CHs is a hybrid IDS, as proposed in many references [18]-[28]. In the following, we introduce the most important related works.

In [18], a hybrid intrusion detection system is proposed for cluster-based WSNs that detects malicious nodes by integrating misuse detection rules and functional reputation. The main idea of the proposed method is that instead of detecting attacks only at the node level, the authors propose a collaborative and centralized design using mutual trust assessment between all network components, in which each sensor node computes functional reputation values for its neighbors by observing their activities (transmissions and data aggregation). To achieve this, they define five functional reputation metrics and benefit from the high detection rate of the misuse detection method by applying the relevant rules. The main problem with their methodology is that they only report energy-consumption results and present no discussion of the detectable types of attacks or their detection rates. In [19], an Integrated Intrusion Detection System (IIDS) is proposed for a heterogeneous cluster-based WSN.
According to the different capabilities of the sink, CH and sensor node (SN), and the different probabilities of attacks on them, three separate IDSs are designed. For CHs, a hybrid IDS is proposed that combines anomaly and misuse detection. The authors reduce the number of features to 24 using the SVM method and finally use a three-layer Back-Propagation Network (BPN) for classification. Their IDS, given its low false-alarm rate and low computational complexity, can be used in WSNs, but its main problem is its relatively low detection rate, given the importance of CHs. In [20], a Global Hybrid IDS (GHIDS) has been proposed that, to achieve the goals of a high detection rate and few false alarms, combines a technique based on a support vector machine (SVM) for detecting anomalies with a set of signature-based detection rules to identify attacks in cluster-based WSNs. Simulation results show that the proposed method performs well in terms of the detection rate and the false-alarm rate, but its underlying problem is high energy consumption due to the SVM-based anomaly detection technique, which is somewhat inappropriate for sensor networks. In [21], a method similar to [20] is proposed in which, to reduce computational complexity and energy consumption, the existing features are reduced to 4. This significantly improves energy consumption, but the detection rate is proportionally lower. In [22], [23] and [24], hybrid IDSs have been proposed that first use a novel feature-selection algorithm to reduce computational complexity and then use the SVM algorithm for classification. The work in [22] uses a combination of ant colony optimization and a feature-weighting SVM for effective feature selection, finally reducing the number of features to 25. The work in [23] uses a GA for feature selection, finally reducing the number of features to 10.
The work in [24] uses the intelligent water drops (IWD) algorithm, a nature-inspired optimization algorithm, for feature selection, finally reducing the number of features to 9. The main problem of all three methods is their relatively high computational complexity due to the use of the SVM classification algorithm. In [25], a Modified CuttleFish Algorithm (MCFA) approach is proposed that plays a crucial role in intrusion detection by selecting an appropriate subset of the most relevant features from the huge dataset. The Griewank fitness function is used to calculate the fitness of the MCFA, and a Naïve Bayes classifier is employed for classification. In [26], an entropy-based feature selection to select the important features, a layered fuzzy control language to generate fuzzy rules, and a layered classifier to detect various network attacks are proposed. In [27], an improved many-objective optimization algorithm (I-NSGA-III) is proposed using a novel niche preservation procedure. It consists of a bias-selection process that selects the individual with the fewest selected features and a fit-selection process that selects the individual with the maximum sum weight of its objectives. Experimental results show that I-NSGA-III can alleviate the imbalance problem, with higher classification accuracy for classes having fewer instances.

IV. PROPOSED INTRUSION DETECTION SYSTEM

One of the challenges of using IDSs in cluster-based WSNs is securing the CHs. Since CHs are of great importance in WSNs and perform cluster management, data aggregation, and data transfer to the base station, they are much more likely to be attacked than normal nodes, such that the intrusion into and control of a CH by an attacker will disrupt the operation of the entire cluster and, in some cases, the entire sensor network. Maintaining, and in some way guaranteeing, the security of CH nodes in a sensor network is therefore very important.
On the other hand, IDSs designed for common nodes, such as the one proposed in [7], are not suitable for CH nodes, given their high sensitivity and the need for security guarantees; and because of the individual shortcomings of anomaly-based and misuse-based detection, neither technique alone can secure CH nodes. At the same time, energy is a critical parameter in WSNs, and the network lifetime practically depends on it, so a lightweight method must be used for intrusion detection. In most cases, however, CH nodes have more resources than common nodes because of the tasks they perform, so a more capable IDS can be justified for them, in keeping with their high security sensitivity. In this section, we present a hybrid IDS based on data-mining algorithms for securing CHs. A data pre-processing model reduces the computational complexity and memory consumption of the IDS, making it feasible to use data-mining classification algorithms for detecting intrusions and securing CHs in WSNs. The proposed system therefore combines the benefits of anomaly-based and misuse-based detection, leading to a high detection rate and a low false-alarm rate, while the pre-processing model keeps energy consumption to a minimum, which is very important in WSNs. As shown in Fig. 1, packets received from other nodes are first examined by the anomaly detection model (described in Section IV-A), which quickly filters the large number of normal packets and delivers the abnormal packets to the misuse detection model (described in Section IV-B), where attacks and their types are identified. Packets that the misuse detection model cannot classify are finally resolved at the decision-making step. In the following, we describe the details of each step of the proposed IDS.

A. Proposed anomaly-based detection model

The anomaly detection model is used as the first line of defense in the proposed IDS. Of the large number of packets seen by a node, only a few relate to attacks; most reflect the normal state of the network. The anomaly detection model therefore acts as a filter: normal packets are passed through quickly, while abnormal packets are held back and handed to the misuse detection model for more accurate detection. An anomaly detection system uses a defined model of normal network behavior, so a packet is flagged as an anomaly when the current behavior deviates from the defined behavior. One problem with the anomaly detection model is that if the normal behavior patterns in the network change, the system usually classifies normal communication as abnormal, creating a misclassification problem; the reverse error, classifying abnormal communication as normal, is rare. To solve this misclassification problem, the second line of defense, a misuse detection system, takes delivery of the packets flagged as abnormal by the anomaly detection model and determines their final status through more accurate analysis. In other words, the anomaly detection model separates the relatively few abnormal packets from the many normal ones, passes the normal packets with high accuracy, and delivers the abnormal ones to the misuse detection model for closer examination. As mentioned, to create an anomaly detection model that monitors the status of data packets, patterns of normal network behavior must be created; because high performance is required, a rule-based analysis method is used in this paper.
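A minimal sketch of this first-stage filter follows. The actual rules of Table II are not reproduced here, so the features and thresholds below (packet rate, payload size, hop count) are hypothetical placeholders for the rule-based checks described above:

```python
# Sketch of the two-stage pipeline's first stage: a rule-based anomaly
# filter that passes normal packets and queues abnormal ones for the
# misuse detector. Rule features and thresholds are hypothetical.
def is_anomalous(packet, max_rate=100, max_payload=512):
    """Return True if any rule flags the packet as abnormal."""
    rules = [
        packet.get("rate", 0) > max_rate,            # unusually high send rate
        packet.get("payload_len", 0) > max_payload,  # oversized payload
        packet.get("hop_count", 0) <= 0,             # malformed routing info
    ]
    return any(rules)

def first_stage(packets):
    normal, suspicious = [], []
    for p in packets:
        (suspicious if is_anomalous(p) else normal).append(p)
    return normal, suspicious  # suspicious goes on to the misuse detector

normal, suspicious = first_stage([
    {"rate": 10, "payload_len": 64, "hop_count": 3},   # looks normal
    {"rate": 500, "payload_len": 64, "hop_count": 3},  # flooding-like
])
```

Because each rule is a cheap comparison, this stage can discard the bulk of normal traffic before any heavier classification runs.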
According to reference [29], the rules of Table II are used in this rule-based method to create the anomaly detection model.

B. Proposed misuse-based detection model

The misuse detection module uses models of known attack behavior, so a base model matching these behaviors must be created. Because the performance of most IDSs is guaranteed through training data, machine-learning methods fit this approach well. In this section, we present a data pre-processing model that increases the efficiency of the IDS and reduces its energy consumption, and we then examine different machine-learning algorithms for classification in order to obtain the best detection rate. Fig. 2 shows the steps of the proposed pre-processing model for the dataset. One of the factors that drives up computational complexity and memory consumption in data-mining methods is the number of training samples used to create the model; given the very large number of samples in the dataset, using it directly in WSNs is practically impossible. The first and second steps of the proposed model therefore apply techniques that reduce the number of samples to a size usable in WSNs. Removal of duplicated samples: as shown in Fig. 2, in view of the redundancy in the dataset, we first remove duplicated samples. As shown in Table III, removing the duplicated samples shrinks the dataset sharply (a reduction rate of 70.53%), which reduces computational complexity, energy consumption and memory consumption while increasing detection accuracy. Random sampling: after removing duplicated samples, we randomly sample 20,000 records as training data and 10,000 records as testing data from all records in the dataset.
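Steps 1 and 2 of the pre-processing model can be sketched in a few lines. The records and their fields below are toy placeholders, not the actual KDD Cup '99 schema:

```python
import random

# Toy records: (duration, src_bytes, label); the schema is illustrative.
records = [
    (0, 181, "normal"), (0, 181, "normal"), (0, 181, "normal"),
    (5, 239, "normal"), (5, 239, "normal"), (9, 310, "dos"),
]

# Step 1: remove duplicated samples while preserving order.
deduped = list(dict.fromkeys(records))
reduction = 1 - len(deduped) / len(records)  # fraction of records removed

# Step 2: random sampling into disjoint training and testing sets.
rng = random.Random(0)
shuffled = deduped[:]
rng.shuffle(shuffled)
train, test = shuffled[:2], shuffled[2:]
```

With the real dataset, the same two operations are what produce the 70.53% size reduction and the 20,000/10,000 train/test split described above.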
Given that the sample sets of the Probe, U2R and R2L attacks are very small, all of their records are sampled, with two-thirds taken as training data and one-third as testing data; the other sample sets are selected in proportion to their share of the kddcup.data_10_percent.gz dataset, as detailed in Table III. A large number of features is also one of the most important factors behind increased computational complexity, energy dissipation and memory consumption, and given the computational and memory constraints of WSN nodes, it makes the direct use of data-mining methods impossible. To overcome this problem and reduce computational complexity, energy dissipation and memory consumption, techniques must be used that cut the number of features down to an appropriate size; this is done in steps 3 and 4 of the proposed model. Feature selection (deletion of ineffective features): as a first optimization of the dataset, a few attributes can easily be identified, even by superficial inspection, as providing no discrimination, and removed from the dataset. As shown in Fig. 2, this step is presented as feature selection, and it eliminates the 6 features of least importance that create no distinction in the dataset. For example, the is_host_login and num_outbound_cmds features are zero in every record of the dataset and therefore create no distinction at all. These features are listed in Table IV. Dimension reduction and selection of effective features: to further reduce the computational complexity and energy dissipation of the WSN nodes, we use a feature-selection algorithm to reduce the dimensionality of the dataset.
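The "deletion of ineffective features" step amounts to dropping columns that take a single value across all records. A minimal sketch (feature names follow the examples in the text; the data rows are invented):

```python
# Drop features that are constant across all records, such as the
# all-zero is_host_login / num_outbound_cmds columns mentioned above.
def drop_constant_features(rows):
    """rows: list of dicts mapping feature name -> value."""
    names = rows[0].keys()
    constant = {n for n in names if len({r[n] for r in rows}) == 1}
    kept = [{n: r[n] for n in names if n not in constant} for r in rows]
    return kept, constant

rows = [
    {"duration": 0, "is_host_login": 0, "num_outbound_cmds": 0},
    {"duration": 5, "is_host_login": 0, "num_outbound_cmds": 0},
    {"duration": 9, "is_host_login": 0, "num_outbound_cmds": 0},
]
kept, removed = drop_constant_features(rows)
```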
Features whose values collapse the connection categories into a small group carry very little information about the behavior of a node in the network. This indicates that the original dataset contains data irrelevant to the IDS and therefore needs to be optimized. Feature selection is thus an important step in optimizing the dataset and can have a marked effect on the performance of the IDS. To select an effective set of features, we examined the most important feature-selection methods; Table V reports the resulting detection rates of different classification algorithms. As Table V shows, the greatest reduction of features comes from the ChiSquared method, with four selected features, which nevertheless achieves a high detection rate of 99.59% and is therefore very well suited for use in WSNs. The InfoGain method, with 11 features and a detection rate of 99.72%, can also be used in WSNs, but since it selects about three times as many features as the ChiSquared method, it imposes a higher computational overhead and consequently higher energy consumption on the system. In this paper, we therefore use the ChiSquared feature-selection method to reduce the dimensionality of the dataset; the four selected features, which increase the efficiency of the proposed IDS, are presented in Table VI. Data normalization: in the last step, we normalize the dataset. Reference [30] regards the normalization of features as an essential step in data pre-processing, so we also applied statistical normalization to the dataset. The goal of statistical normalization is to convert data drawn from any normal distribution into a standard normal distribution with zero mean and unit variance.
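Steps 4 and 5 can be sketched together: score each feature against the class label with a chi-squared statistic (the idea behind ChiSquared-style feature selection), then map the kept values to zero mean and unit variance. All data and feature values below are toy placeholders, not the KDD Cup '99 set:

```python
import math

def chi2_score(feature, labels):
    """Chi-squared statistic of a discrete feature vs. the class label."""
    n = len(labels)
    score = 0.0
    for f in set(feature):
        for c in set(labels):
            observed = sum(1 for x, y in zip(feature, labels)
                           if x == f and y == c)
            expected = feature.count(f) * labels.count(c) / n
            score += (observed - expected) ** 2 / expected
    return score

def zscore(values):
    """Statistical normalization: zero mean, unit variance."""
    mu = sum(values) / len(values)
    sigma = math.sqrt(sum((v - mu) ** 2 for v in values) / len(values))
    return [(v - mu) / sigma for v in values]

labels = ["normal", "normal", "dos", "dos"]
informative = [0, 0, 1, 1]    # perfectly tracks the label -> high score
uninformative = [0, 1, 0, 1]  # independent of the label -> score 0

normalized = zscore([2.0, 4.0, 4.0, 6.0])
```

Ranking features by `chi2_score` and keeping the top k is the essence of reducing the set to the four features of Table VI.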
Statistical normalization is defined as (1):

z = (x − μ) / σ,  (1)

where μ is the mean and σ the standard deviation of the n values of a given feature. Selection of an appropriate classification algorithm: finally, we evaluated the available classifiers on the KDDCup'99 dataset in order to select the best data classification algorithm for the proposed model. The results are presented in Table VII in the next section.

V. SIMULATION AND RESULTS

In the following, the simulation results of the proposed model on the KDDCup'99 dataset and the evaluation of the different classification algorithms are presented in Table VII. As Table VII shows, the best detection rate and false-alarm rate, 99.95% and 0.24% respectively, are obtained with the PART classification algorithm, whose training time (0.76 s) and testing time (0.025 s) are also very low, making it well suited for use in WSNs. We therefore use the PART classification algorithm for the final training and testing of the proposed IDS. PART is an algorithm for inferring rules by repeatedly generating partial decision trees, thus combining the two major paradigms for rule generation: creating rules from decision trees and the separate-and-conquer rule-learning technique [31]. To evaluate the performance of the proposed IDS and compare it with existing work, a set of evaluation criteria is considered; Table VIII compares the proposed IDS with the existing systems in terms of these criteria. All results reported below are averages over 10 simulation runs, and for a fair comparison with existing work, all methods were simulated on the same dataset (KDDCup'99), whose details are given in Table III. According to the results presented in Figures 4 through 6, the proposed system, with a high detection rate of 99.59% and a low false alarm rate of 0.
24%, as well as a low testing time of 0.025 s (indicating low computational complexity), is considered an effective and lightweight method. As shown in Fig. 4, the detection rate of the proposed IDS is 99.59%, the highest among the existing IDSs. Its very low false-alarm rate of 0.24%, presented in Fig. 5, trails [22], [23] and [27] only slightly, and because of their high computational complexity the proposed algorithm is still in the better position. In addition, the proposed IDS has a very low testing time of 0.025 s, presented in Fig. 6, only slightly behind [21]; given the low detection rate and high false-alarm rate of [21], the proposed algorithm is again the better choice.

VI. CONCLUSION

In this paper, we first introduced intrusion detection systems and then surveyed the various existing IDSs for securing CHs in WSNs. Considering the critical role of the CHs, we then proposed a hybrid IDS based on data-mining algorithms for their security which, through a data pre-processing model, dramatically reduces the computational complexity and memory usage of the IDS and makes it possible to use classification algorithms for intrusion detection and for securing CHs in WSNs. The simulation results show that, compared with existing systems, the proposed system combines low computational complexity with a high detection rate, a low false-alarms rate and a low testing time, and can be considered an effective and lightweight IDS for securing CHs in WSNs.
Positive solutions for boundary value problem of nonlinear fractional differential equation

In this paper, we investigate the existence of three positive solutions for the nonlinear fractional boundary value problem

D^α_{0+} u(t) + a(t) f(t, u(t), u″(t)) = 0, 0 < t < 1, 3 < α ≤ 4,

u(0) = u′(0) = u″(0) = u″(1) = 0,

where D^α_{0+} is the standard Riemann-Liouville fractional derivative. The method involves applications of a new fixed-point theorem due to Bai and Ge. The interesting point lies in the fact that the nonlinear term is allowed to depend on the second order derivative u″.

Introduction

Many papers and books on fractional calculus and fractional differential equations have appeared recently; see for example [1-3, 7-12]. Very recently, El-Shahed [5] used Krasnoselskii's fixed-point theorem on cone expansion and compression to show the existence and non-existence of positive solutions of a nonlinear fractional boundary value problem, where D^α_{0+} is the standard Riemann-Liouville fractional derivative. Kaufmann and Mboumi [6] studied the existence and multiplicity of positive solutions of a nonlinear fractional boundary value problem. Motivated by the above works, in this paper we study the existence of three positive solutions for the following nonlinear fractional boundary value problem:

D^α_{0+} u(t) + a(t) f(t, u(t), u″(t)) = 0, 0 < t < 1, 3 < α ≤ 4,  (1.1)

u(0) = u′(0) = u″(0) = u″(1) = 0,  (1.2)

by using a new fixed-point theorem due to Bai and Ge [4]. Here, the interesting point lies in the fact that the nonlinear term f is allowed to depend on the second order derivative u″. To the best of the authors' knowledge, no one has studied the existence of positive solutions for the nonlinear fractional boundary value problem (1.1)-(1.2).

Throughout this paper, we assume that the following conditions hold.

The rest of this paper is organized as follows: in Section 2, we present some preliminaries and lemmas; Section 3 is devoted to proving the existence of three positive solutions for BVP (1.1) and (1.2).
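For reference, the Riemann-Liouville fractional derivative appearing in (1.1) has the standard definition, stated here for completeness (with n = 4, since 3 < α ≤ 4):

```latex
% Standard Riemann--Liouville fractional derivative of order \alpha > 0,
% where n is the smallest integer with n \ge \alpha:
D_{0+}^{\alpha} u(t)
  = \frac{1}{\Gamma(n-\alpha)}
    \left(\frac{d}{dt}\right)^{\!n}
    \int_{0}^{t} (t-s)^{\,n-\alpha-1}\, u(s)\, ds .
```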
Preliminaries

For the convenience of the reader, we present some definitions from the theory of cones in ordered Banach spaces.

Definition 2.1. The map ψ is said to be a nonnegative continuous concave functional on a cone P of a real Banach space E provided that ψ : P → [0, ∞) is continuous and ψ(tx + (1 − t)y) ≥ tψ(x) + (1 − t)ψ(y) for all x, y ∈ P and t ∈ [0, 1]. Similarly, we say the map φ is a nonnegative continuous convex functional on a cone P of a real Banach space E provided that φ : P → [0, ∞) is continuous and φ(tx + (1 − t)y) ≤ tφ(x) + (1 − t)φ(y) for all x, y ∈ P and t ∈ [0, 1].

Definition 2.2. Let r > a > 0 and L > 0 be given, let ψ be a nonnegative continuous concave functional and let γ, β be nonnegative continuous convex functionals on the cone P. Define the convex sets

P(γ, r; β, L) = {x ∈ P : γ(x) < r, β(x) < L},

P(γ, r; β, L; ψ, a) = {x ∈ P : γ(x) < r, β(x) < L, ψ(x) > a},

together with their closures, obtained by replacing the strict inequalities with non-strict ones.

EJQTDE, 2008 No. 24, p. 2

Suppose that the nonnegative continuous convex functionals γ, β on the cone P satisfy:
(A1) there exists M > 0 such that ‖x‖ ≤ M max{γ(x), β(x)} for all x ∈ P;
(A2) P(γ, r; β, L) ≠ ∅ for any r > 0, L > 0.

Lemma 2.1. [4] Let P be a cone in a real Banach space E and let the constants be as in Definition 2.2. Assume that γ, β are nonnegative continuous convex functionals on P such that (A1) and (A2) are satisfied, that ψ is a nonnegative continuous concave functional on P such that ψ(x) ≤ γ(x) for all x ∈ P(γ, r2; β, L2), and let T : P(γ, r2; β, L2) → P(γ, r2; β, L2) be a completely continuous operator.

The above fixed-point theorem is fundamental in the proof of our main result. Next, we give some definitions from fractional calculus. The following lemma is crucial in finding an integral representation of the boundary value problem (1.1), (1.2).

Lemma 2.2. [3] Suppose that u ∈ C(0, 1) ∩ L(0, 1) has a fractional derivative of order α > 0. Then

I^α_{0+} D^α_{0+} u(t) = u(t) + c1 t^{α−1} + c2 t^{α−2} + … + cN t^{α−N}

for some ci ∈ R, i = 1, …, N, where N is the smallest integer greater than or equal to α.

From Lemma 2.2, we now give an integral representation of the solution of the linearized problem.

Lemma 2.3. If y ∈ C[0, 1], then the boundary value problem (2.1), (2.2) has a unique solution

u(t) = ∫₀¹ G(t, s) y(s) ds,

where G(t, s) denotes the corresponding Green's function.

Proof. From Lemma 2.2, we obtain the general representation of u. By (2.2), c2 = c3 = c4 = 0, and c1 is determined. Hence, the unique solution of BVP (2.1), (2.2) is as stated. The proof is complete.

Lemma 2.4. G(t, s) has the following properties.
Proof. It is easy to check that (i) holds. Next, we prove that (ii) holds: if t ≥ s, then the stated inequality follows by direct computation. The proof is complete.

Main results

Then we have the following lemma. By Lemma 3.1, X is a Banach space when it is endowed with the norm ‖u‖ = ‖u″‖₀. We define the operator T through the Green's function representation of Lemma 2.3; thus T : X → X. Define the cone P ⊂ X, where 0 < ω < 1 is as in (H2). We are now in a position to present and prove our main result.

Theorem 3.2. Assume that (H1) and (H2) hold, and suppose there exist suitable constants. Then BVP (1.1)-(1.2) has at least three positive solutions u1, u2 and u3.

Proof. By (H1), (H2), Lemma 2.4 and (3.3), for u ∈ P we have Tu(t) ≥ 0 for all t ∈ [0, 1], and thus T(P) ⊂ P. Moreover, it is easy to check by the Arzelà-Ascoli theorem that the operator T is completely continuous. We now show that all the conditions of Lemma 2.1 are satisfied.

Finally, we give an example to illustrate the effectiveness of our result.
Small chromosomes among Danish Candida glabrata isolates originated through different mechanisms

We analyzed 192 strains of the pathogenic yeast Candida glabrata isolated from patients, mainly suffering from systemic infection, at Danish hospitals during 1985-1999. Our analysis showed that these strains were closely related but exhibited large karyotype polymorphism. Nine strains contained small chromosomes, smaller than 0.5 Mb. With regard to year, patient and hospital, these C. glabrata strains had independent origins, and the analyzed small chromosomes were structurally unrelated to each other (i.e. they contained different sets of genes). We suggest that at least two mechanisms could have participated in their origin: (i) a segmental duplication covering the centromeric region, or (ii) a translocation event moving the larger chromosome arm to another chromosome, leaving the centromere part with the shorter arm. The first type of small chromosome, carrying duplicated genes, exhibited mitotic instability, while the second type, which contained the corresponding genes in only one copy in the genome, was mitotically stable. Apparently, C. glabrata chromosomes are frequently reshuffled in patients, resulting in new genetic configurations, including the appearance of small chromosomes, and some of the resulting "mutant" strains can have increased fitness in a particular patient "environment".

Electronic supplementary material: the online version of this article (doi:10.1007/s10482-013-9931-3) contains supplementary material, which is available to authorized users.

Introduction

Yeasts are unicellular eukaryotic organisms, and several species have been reported as opportunistic human pathogens. Candida glabrata has for many years been known to represent the non-pathogenic normal flora of healthy humans (Stenderup and Pederson 1962).
This yeast can be abundant in relatively healthy individuals, but it also causes vaginal candidiasis, a common mucosal infection that occurs in healthy, immuno-competent women (Mentel et al. 2006), and even systemic infections. The mortality rate of systemic infections caused by C. glabrata is high, as they are difficult to treat because of the resistance of C. glabrata to many antifungal drugs (Hitchcock et al. 1993; Komshian et al. 1989; Willocks et al. 1991). Because of the increased use of immunosuppressive therapy and the prolonged use of wide-spectrum antibiotics, the number of systemic and mucosal infections with C. glabrata has increased in recent years. This yeast has been reported to be the second most frequently found opportunistic yeast in humans, after Candida albicans (Fidel et al. 1999). C. glabrata is a rather close relative of Saccharomyces cerevisiae; the two yeasts separated after the yeast whole-genome duplication (WGD), approximately 50 million years ago, and both species are distant relatives of C. albicans (Dujon et al. 2004). Unlike the dimorphic diploid yeast C. albicans, all isolates of C. glabrata so far seem to be haploid. Mating in C. glabrata has not yet been observed, so this yeast is apparently asexual (Kaur et al. 2005). C. glabrata has been reported to exhibit high karyotype variability and may undergo rapid genome reorganisation even during infection in patients (Shin et al. 2007; Muller et al. 2009). It has also been reported that independent isolates from the same patient with C. glabrata fungemia had different karyotype patterns (Klempp-Selb et al. 2000). Chromosomal rearrangements and aneuploidy in C. albicans and C. glabrata have been demonstrated to increase the virulence potential and, in particular, drug resistance (Selmecki et al. 2006; Poláková et al. 2009). On the other hand, chromosomal aneuploidy in multicellular eukaryotes (e.g.
humans) is usually associated with genetic disorders, for instance cancer. The formation of new chromosomes as a molecular mechanism that can increase virulence was reported in our recent analysis of forty pathogenic strains (Poláková et al. 2009). Two of the reported strains had extra chromosomes under 500 kb in size, which we therefore named small or mini-chromosomes. The origin of the two discovered small chromosomes was explained through segmental duplication over the centromeric regions. One small chromosome was shown to be responsible for increased resistance towards the antifungal drug fluconazole: the duplicated segment encodes an ATP-binding cassette (ABC) family transporter, and the observed gene duplication apparently elevated the resistance towards azole in the patient (Poláková et al. 2009). In this study, we examined 192 isolates of C. glabrata collected from Danish patients during 1985-1999. The phylogenetic relationships were estimated and the strain karyotypes determined. Interestingly, new small chromosomes were found. One of our aims was to deduce the mechanism(s) that led to the origin of these small chromosomes, and another to find possible connections between the genes on the small chromosomes and the strain phenotype.

Clinical isolates

During 1985-1999, putative C. glabrata isolates from patients hospitalized at Danish hospitals, mainly isolated from blood and involved in systemic infections, were collected and deposited at the State Serum Institute (Copenhagen, Denmark). All other reported publicly available collections of pathogenic C. glabrata strains are based on much later samplings (see, for example, Klempp-Selb et al. 2000). Thus, our collection represents a unique tool to study the early appearance and development of systemic infections with this yeast. These clinical isolates were transferred in 2004 to our laboratory at Lund University (the Piskur yeast collection).
Each strain was isolated from a different patient, with the exception of a few strains isolated from the same patient at different time points. All available details on the strains and their isolation sources are presented in Supplementary materials Table S1. Forty of the deposited strains were analyzed previously (Poláková et al. 2009), and here we characterized the remaining 152 strains of the collection.

DNA extraction and polymerase chain reaction (PCR)

The yeast strains were grown overnight in YPD medium (1% yeast extract, 2% Bacto Peptone and 2% glucose) at 25°C on a rotary shaker. Genomic DNA was extracted according to the protocol described in Philippsen et al. (1991). Two regions, the D1/D2 domain of the nuclear 26S ribosomal DNA and a fast-evolving intergenic spacer region (IGS, located between the nuclear CDH1 and ERP6 genes on chromosome A), were amplified using a Stratagene Robo-cycler. The nuclear 26S rDNA D1/D2 domain was amplified with the primers NL1 (5'-GCA TAT CAA TAA GCG GAG GAA AAG-3') and NL4 (5'-GGT CCG TGT TTC AAG ACG G-3') under the following conditions: a first cycle with initial denaturation at 94°C for 3 min, followed by 35 cycles of 94°C for 2 min, 54°C for 1 min and 72°C for 2 min, completed by a final elongation at 72°C for 5 min. The primers used to amplify the IGS locus were ''00605'' (5'-CTC ACA AAT GGA TTC CTT AAA GAG TTC G-3') and ''00627'' (5'-GTC ACC AGA GTT GGA GTA CAT GTA G-3'), and the following conditions were applied: initial denaturation at 94°C for 3 min, followed by 35 cycles of 94°C for 45 s, 52°C for 1 min and 72°C for 1 min, completed by a final elongation at 72°C for 5 min. The PCR products were purified with the QIAquick gel extraction kit (Qiagen, Dorking, UK). The DNA concentration was measured using a NanoDrop ND-1000 spectrophotometer, and the sequencing was performed by MWG Biotech (Germany).
Sequence analysis and phylogenetic relationships

The obtained sequences were deposited in GenBank, and the accession numbers can be found in Supplementary materials Table S1. The sequences used for the phylogenetic trees were based on the D1/D2 domain and the IGS locus and were analyzed and aligned using the BioEdit/ClustalW program (Thompson et al. 1994). All positions containing gaps and missing data were eliminated from the dataset (complete-deletion option), leaving a total of 489 positions for D1/D2 and 474 positions for IGS in the final dataset. The analysis followed the previously published approach (Poláková et al. 2009), in which we analyzed the first forty strains. The evolutionary history was inferred using the neighbor-joining method (Saitou and Nei 1987), and the evolutionary distances were computed using the maximum composite likelihood method (Tamura et al. 2004). The evolutionary history was also inferred using the maximum parsimony method. Phylogenetic and molecular evolutionary analyses were conducted using MEGA version 5 (Tamura et al. 2011).

Azole susceptibility test

The yeast strains were inoculated into 5 ml YPD and grown overnight. The cells were pelleted and washed twice with sterile water. The strains were spotted as 3 µl drops at different serial dilutions (10³, 10⁴, 10⁵, 10⁶ cells/ml), to obtain single-cell colonies, using a lab hedgehog distributor on solid YPD medium with different concentrations of fluconazole (15, 45, 80, 125, 388.8 and 1116.8 µg/ml). The plates were incubated for 48 h at 37°C and then inspected visually for the appearance of single-cell colonies at the lower dilutions. Fluconazole was purchased from Toronto Research Chemicals (TRC), and the stock solutions were diluted in DMSO.

Karyotypes and pulsed-field gel electrophoresis (PFGE)

The chromosomes of each yeast isolate were prepared as described before (Petersen et al. 1999) and separated by pulsed-field gel electrophoresis using a CHEF Mapper XA (Bio-Rad).
The best separation was obtained under the following conditions: step 1, 240 s pulse for 6 h; step 2, 160 s pulse for 13 h; step 3, 120 s pulse for 10 h; step 4, 90 s pulse for 10 h; and step 5, 60 s pulse for 3 h. The included angle was 60° with a voltage of 4.5 V/cm. The sequenced C. glabrata CBS 138 (Supplementary material Fig. S1) and S. cerevisiae S288c (Y1307) strains were used as the standards.

Southern blotting

Chromosomes were separated on pulsed-field gels, which were subsequently depurinated for 20 min in 0.25 M HCl, denatured for 30 min (1.5 M NaCl; 0.5 M NaOH) and neutralized for 20 min (1.5 M NaCl; 1 M Tris-HCl, pH 7.5). The chromosomes were transferred to a Hybond-XL membrane (GE Healthcare) in 20× SSC solution (1.5 M NaCl; 0.15 M sodium citrate) for 3-4 h by vacuum transfer (VacuGene XL). UV light was used to crosslink the transferred DNA fragments. Thirteen isotope-labeled DNA probes, originating from genes in the vicinity of the thirteen known centromeres, were prepared using the sequenced C. glabrata strain CBS 138 as the template. The corresponding probes are listed in Supplementary materials (Table S2; Fig. S1, S2, S3 and S4). The following PCR conditions were used to amplify the hybridization probes: initial denaturation at 94°C for 3 min, followed by 35 cycles of 94°C for 45 s, 56°C for 45 s and 72°C for 1 min, completed by a final elongation at 72°C for 5 min. The PCR products were purified using the QIAquick PCR purification kit (Qiagen). For membrane hybridization, 100 ng of the purified PCR product was diluted and used for [α-³²P]dCTP labeling (GE Healthcare, Amersham Rediprime II DNA labeling system) for 30 min at 37°C. G-50 columns (GE Healthcare) were used to remove unincorporated nucleotides. The membrane was hybridized in 0.25 M Na₂HPO₄, 7% SDS and 1 mM EDTA at 60°C overnight, then washed twice with 2% SDS in 100 mM Na₂HPO₄ at room temperature for 5 min and once at 60°C for 20 min.
An Imaging Screen-K (35 × 43, Bio-Rad) and a Personal Imager FX (Bio-Rad) were used to detect the hybridization signals. The membrane was stripped twice using boiled 0.1% SDS for 5 min and re-hybridized with a new probe. Similarly, several putative resistance genes were labeled and their presence on the small chromosomes analyzed.

Small chromosome stability test

To check the stability of the newly described small chromosomes, single colonies from the strains carrying small chromosomes were inoculated in 2 ml YPD and incubated overnight at 25°C. 2 µl of the overnight culture was re-inoculated into a fresh 2 ml of liquid YPD. After 70 generations, different dilutions of each individual strain were plated on YPD and incubated overnight at 25°C. Eight to ten single colonies from each experiment were analyzed by PFGE.

Quantification of the expression potential by RT-qPCR

The genes analyzed in the transcription studies are presented in Table 2. The yeasts used in the transcription study were grown in YPD supplemented with glucose (20 g/l) as the carbon source, and the RNA preparation and RT-qPCR analysis followed the method presented in Rozpedowska et al. (2011). 1 µg of RNA was used for the synthesis of cDNA using the SuperScript III Reverse Transcriptase kit with RNaseOUT Ribonuclease Inhibitor and random primers. The expression studies were carried out using SYBR GreenER qPCR SuperMix with the cDNA as template and the specific primers. All kits and compounds were obtained from Invitrogen. The PCRs were run in duplicate in a RotorGene 2000 cycler under the conditions specified by Invitrogen. The take-off and amplification values, obtained from the relative quantification performed with the RotorGene 2000 software, were used to quantify the expression ratios with the help of REST 2009 V2.0.13 with RG mode25.
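REST-style relative quantification compares amplification efficiencies and take-off (Ct-like) values of the target and reference genes. A minimal sketch of a Pfaffl-type ratio of this kind follows; the efficiencies and cycle differences below are invented for illustration, and the actual analysis was done in REST:

```python
# Sketch of Pfaffl-type relative quantification, the kind of ratio that
# REST computes from efficiencies (E) and take-off/Ct values. All numbers
# here are invented for illustration.
def expression_ratio(e_target, dct_target, e_ref, dct_ref):
    """ratio = E_target**dCt_target / E_ref**dCt_ref,
    where dCt = Ct(control) - Ct(sample) for each gene."""
    return (e_target ** dct_target) / (e_ref ** dct_ref)

# The target gene amplifies 3 cycles earlier in the sample than in the
# control strain; the reference gene (e.g. beta-actin) is unchanged.
ratio = expression_ratio(e_target=2.0, dct_target=3.0,
                         e_ref=2.0, dct_ref=0.0)
```

With perfect efficiency (E = 2), a 3-cycle shift in the target gene alone corresponds to roughly an 8-fold change relative to the reference.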
The β-actin gene was used as the endogenous reference, and the sequenced strain Y1092 (CBS 138) was used as the untreated strain for comparison.

Phylogenetic relationship

In this study we analyzed the identity and phylogenetic relationships of our clinical isolates through the sequencing of two genetic loci. Initially, we could see that ten strains from the original collection had quite a distinct polymorphism in the D1/D2 domain (belonging to the nuclear 26S rDNA locus) and apparently did not belong to C. glabrata based on the yeast species definition (Kurtzman 2006). The karyotypes of these strains were also different from the C. glabrata ones (data not shown). These strains were likely misclassified during the initial determination and deposition; we excluded them from further experiments and from the analysis shown in Fig. 1, and they are not listed in Supplementary materials Table S1. We obtained in total (including 40 previously determined ones, see Poláková et al. 2009) 192 sequences of the D1/D2 domain and 192 sequences of the IGS, mapping between the nuclear CDH1 and ERP6 genes; these can be found in Supplementary materials Table S1. Seven different haplotypes of the D1/D2 locus, based on the 489 analyzed positions, were obtained (Fig. 1). The difference between the CBS 138 sequence and the least related strain, 003338, was observed at five positions (see also Fig. 1). According to the yeast species definition (Kurtzman 2006), this means that all strains belong to C. glabrata. When the fast-evolving IGS locus was analyzed, a more pronounced polymorphism was detected (Fig. 2). Therefore, more distinctive sub-groupings could be observed in Fig. 2 than in Fig. 1. Neighbor-joining and Maximum Parsimony methods defined the same small-chromosome-containing sub-groups (data not shown). In short, these experiments confirmed which of the strains in the collection were indeed C. glabrata and provided a basis to explain the origin of different molecular events (see later sections).

Karyotypes

The karyotypes of 151 isolates of C. glabrata were determined in this study; in addition, 40 strains had already been analyzed before (Poláková et al. 2009), thus providing 191 different karyotypes. There was apparent variation among the obtained karyotypes, ranging in the number of detected bands from ten to fourteen (Fig. 3). The fourteen chromosomes of CBS 138 are illustrated in Supplementary materials Fig. S1. Variation in the intensity of the chromosomal bands was also observed, and it could be explained by some of the more intense bands being composed of two or even more chromosomes. For example, Y663 likely has two double bands (of higher intensity than expected from an equal stoichiometric distribution), one in the K-L-M chromosome group and another in the C-D-E group. Figure 3 shows karyotypes originating from a set of strains belonging to the same phylogenetic sub-group, KA002574 (see Fig. 2, the arrowed group). These strains are very closely related, but they exhibited a clear chromosomal polymorphism: chromosome band numbers, sizes and intensities vary. Only one strain in this group, KA002870 (Y663) (Poláková et al. 2009), has 14 chromosome bands because of its small chromosome, while the remaining 24 strains show 10-13 bands (Fig. 3). The polymorphism is especially apparent within the large chromosomes K, L and M. This is in agreement with the previously published observations explaining the large-chromosome polymorphism as a consequence of variation in the gene copy numbers at the rDNA locus (Muller et al. 2009). Among those 25 related isolates, we also found some that were isolated in the same year from patients who were treated at the same hospital (Fig. 3; Supplementary material Table S1). However, even in these cases the karyotypes showed some degree of rearrangement.
Interestingly, two strains from the same patient, KA002940 (Y1640) and KA002941 (Y1641), taken at different times during treatment, showed different karyotypes, with 10-11 chromosomal bands detected (Fig. 4).

Small chromosomes

It was assumed that each small chromosome contains one of the known centromeres. To investigate the precise origin of the new small chromosomes, thirteen probes originating from genes in the vicinity of the CBS 138 centromeres were used in Southern analysis; they are listed in Supplementary materials Table S2. In Fig. 2 each strain with a small chromosome also has a Y number in its designation, and this is followed by a capital letter. These capital letters, B, D, E, F, G and J, indicate the relationship between the small chromosome and the CBS 138 chromosomes (Table 1). For example, in strain 002870 (Y663) F, the probe derived from a gene in the vicinity of the centromere of chromosome F hybridized with the corresponding small chromosome. Two strains, 003482 and 003668, had their small chromosomes hybridized to the D chromosome probe, but they are not related and belong to two different strain subgroups (Fig. 2).

Fig. 1 Phylogenetic relationships among pathogenic C. glabrata strains, based on seven different haplotypes, as deduced by the Neighbor-joining method. The analysis is based on the D1/D2 domain of the 26S rDNA-encoding locus. The numbers correspond to the museum numbers of the initial collection and can be found in Supplementary materials Table S1. Among the analyzed sequences (for accession numbers see Table S1), which had the 489 positions, 177 were identical with the C. glabrata type strain CBS 138. The strains belonging to the same haplotype are described in Supplementary materials Table S1. The bootstrap values are shown on some branches and the tree was not rooted. The scale bar in the Neighbor-joining analysis corresponds to 0.001 substitutions per nucleotide site.
In addition, two strains, 002870 and 003651, had their small chromosomes hybridized to the F probe, and they are apparently not closely related (Fig. 2). Therefore, in both pairs of strains the corresponding small chromosomes originated independently from the parental D or F chromosome, respectively. On the other hand, the two strains (002940 and 002941) whose small chromosomes originate from chromosome G are very closely related and originate from the same patient (Fig. 2). In some cases the probe hybridized only to the small chromosome and not to the other chromosomes (Fig. 5; Supplementary material Figs. S2, S3, S4). For example, when we used the gene probe called 'Gel' (Supplementary materials Table S2) on 002940 (Y1640) and 002941 (Y1641), we only obtained a signal from the small chromosome, and not from any larger chromosome (Fig. 5b). This could be explained, for example, by a translocation of the larger arm of the original chromosome G to another chromosome, while the left arm, the centromeric region and a part of the right arm remained as an autonomous, but smaller, chromosome. However, other mechanisms could additionally have contributed to the origin of these chromosomes. While these two small chromosomes most likely have the same origin, the parental chromosome was additionally remodeled upon the translocation event, giving two different sizes of 305 and 290 kb, respectively (Fig. 4). The small chromosomes from Y1643, Y1645 and Y1646, with sizes of 365, 332 and 420 kb, respectively, also result from translocation events. For example, it seems possible that in Y1643 the right arm of chromosome F was translocated to another chromosome, leaving a 365 kb fragment (with the centromere) as a small chromosome. In Y1645 and Y1646 we could deduce rearrangements/translocations involving chromosomes J and B, respectively (Supplementary material Figs. S2 and S4).

In the case of Y1642 and Y1644, the probe hybridizing to the small chromosome also hybridized to the larger chromosome D (Table 1; Fig. 5d). We explain these results as a partial duplication of chromosome D, resulting in the 285 and 290 kb small chromosomes, respectively. The original centromere in these cases is present in two copies, on the parental and the small chromosome. Y1642 and Y1644 both carry a duplication of chromosome D, but the duplications had independent origins (Fig. 2). Three closely related strains, Y1642, Y1643 and Y1645, contain three different kinds of small chromosomes, originating from three different parental chromosomes (Fig. 2), and thus from independent events. A majority of the clinical isolates with these small chromosomes were stable for several generations when grown in a non-selective medium (YPD without fluconazole).

Fig. 2 Phylogenetic relationship, as deduced by the Neighbor-joining method, based on the IGS region located between the CDH1 and ERP6 genes. 35 (plus CBS 138) different haplotypes (representing isolate sequences which had the available 474 positions) were deduced. The strain numbers correspond to the museum numbers of the initial collection and can be found in Supplementary materials Table S1. The names of the strains with small chromosomes are followed by a capital letter pointing out which CBS 138 chromosome is related to the small chromosome. Among the analyzed strains several sequences belonged to the same haplotype. The number of occurrences of each haplotype, in addition to the shown strain (and if different from 1), is written in brackets following the strain/sequence designation. The strains belonging to each of these haplotypes can be found in Supplementary materials Table S1. The group 002574 (analyzed for their karyotypes in Fig. 3) is arrowed. The bootstrap values are shown on some branches and the tree is not rooted.
As expected, Y1640, Y1641, Y1643, Y1645 and Y1646 were stable and retained their small chromosomes generated upon translocation, because a majority of the genes located on the corresponding small chromosome were present in only one copy per genome. On the other hand, Y1644 was mitotically unstable: the small chromosome was lost in almost two thirds of the progeny, and even chromosomal rearrangements could be observed in the resulting daughter lineages (Fig. 6a). The behavior of this strain was similar to the previously tested Y624 and Y663. The corresponding small chromosomes were a result of segmental duplications, and therefore the small-chromosome genes were present in duplicate; thus the small chromosome could in principle be lost. In contrast, Y1642, whose small chromosome contains a partial duplication of chromosome D, was stable in our experiments and kept the novel small chromosome for 70 generations (Table 1). Y1642 was not particularly resistant to azole (Table 1), but the small chromosome could carry some single-copy genes.

Putative resistance genes on small chromosomes

In C. glabrata, several genes play a role in the interactions between the yeast and the host. It could be that some of the genes found on the small chromosomes are involved in the virulence and/or anti-fungal drug resistance of the strain. Thus, we examined all identified small chromosomes for the presence of any putative virulence and resistance genes. The regions to the left and right of the centromere, corresponding to the size of the small chromosome, were analyzed (Table 2), employing the published genome of the CBS 138 strain. Several resistance genes were found (Table 2). The duplicated segment of chromosome D, found in the Y1642 and Y1644 small chromosomes, could encode CAGL0D03674g, an ortholog of the S. cerevisiae YPL226w gene that might be involved in drug transport. This gene is highly similar to C. albicans ELF1, conferring a drug-resistance phenotype (Sturtevant et al. 1998). However, our Southern analysis could not detect this gene on the small chromosome (Table 2; Supplementary materials Fig. S5A), and in addition, Y1642 and Y1644 are very sensitive to azole. In both Y1640 and Y1641, which are highly azole resistant, the 305 kb region from the left end of chromosome G, which includes the centromere, also encodes the gene CAGL0G00242g, belonging to the ATP-binding cassette family and highly similar to the S. cerevisiae YOR1 gene, which encodes an ABC transporter. Y1643 carries a small chromosome which originates from chromosome F. Several genes from this part of chromosome F are known to be involved in the resistance potential of C. glabrata. For example, they encode a transporter of the ATP-binding cassette family, CAGL0F01419g, which is highly similar to the S. cerevisiae AUS1 gene. In addition, the Y1643 small chromosome encodes another ATP-binding cassette family member, CAGL0F02717g, an ortholog of the S. cerevisiae ABC transporter gene PDR5 (known as PDH1 in C. glabrata), involved in the transcriptional activation of pleiotropic drug resistance. The small chromosome in Y1645 carries an ortholog of the S. cerevisiae DHA1 family of multidrug resistance transporters (CAGL0J00363g), and upregulation of this gene results in reduced susceptibility to azoles. In Y1646, the 420 kb chromosome segments on both sides of the centromere of chromosome B carry CAGL0B02343g, which encodes a protein required for aminotriazole resistance, similar to S. cerevisiae YML116 (SNQ1). The strains with small chromosomes which contained a putative resistance gene were analyzed for the expression level of the corresponding six genes. The expression of four of these genes was not changed in the corresponding strain where the gene was located on the small chromosome (Supplementary materials Table S3). However, in the case of Y1646, which is highly resistant to azole, the expression of CAGL0B02343g was elevated more than two-fold. This gene was also highly expressed in Y1642, which is not azole resistant.

Fig. 3 Electrophoretic karyotyping of 25 C. glabrata clinical isolates belonging to the same phylogenetic sub-group KA002574, which is arrowed in Fig. 2. Five groups of chromosomes (according to the CBS 138 nomenclature, see also Supplementary materials Fig. S1) are shown on the left, and the chromosome sizes on the right. The number of chromosome bands ranges from ten to thirteen, but KA002870 (Y663) has fourteen chromosome bands because of its small chromosome (arrowed as a). The large chromosome group (K-L-M) shows a clear variation, from one band, as in KA005064, to three bands, as in KA003250, or even four bands, as in KA005129. KA004709 and KA004773, arrowed as b and c, were isolated in 1997 from the same hospital but have clearly different karyotypes. In b we can see only ten bands, but the third smallest chromosome (located in the C-D-E group) is likely a double band, while in c there are 12 bands.

Fig. 4 Electrophoretic karyotyping of nine clinical isolates of C. glabrata with small chromosomes. S. cerevisiae S288C (Y1307) and CBS 138 were included as references to determine the size of the new chromosomes. Y624a is a daughter strain of KA000127 (Y624) which has lost its small chromosome, but the position of the small chromosome, as it would be in Y624, is circled. Y624 and its small chromosome were described previously (Poláková et al. 2009). The sizes of the small chromosomes, determined by calculation of chromosomal migration on the gel, were estimated to be between 280 and 420 kb (see also Table 2). Note that strains Y1640 and Y1641 are from the same patient, taken at different time points, and the two small chromosomes have a slightly different size.

Fig. 5 CBS 138 was used as a reference. Gel a was transferred to membrane b, which was hybridized with the 'Gel' gene probe originating from CBS 138 chromosome G. The Y1640 and Y1641 small chromosomes (arrowed) hybridized to the probe, showing that they share their origin with chromosome G. Note that in these two strains only one signal was obtained. The chromosomes from gel c were transferred to membrane d and hybridized with the probe 'Dcl' (originating from chromosome D). Note that in both Y1642 and Y1644 there were two bands, the original chromosome and the small chromosome, hybridizing to the probe.

Fig. 6 a Chromosome D rearrangement in one daughter lineage is arrowed in white. b Karyotypes of the parental strain Y1645 (lane 1) and ten (lanes 2-11) randomly selected progenies after 70 generations. The small chromosome is arrowed.

Table 2 notes: Vermitsky et al. (2006) and Gbelska et al. (2006). A putative gene was firstly predicted by bioinformatics tools and later confirmed by a Southern analysis. a The presence of the gene on the small chromosome was determined by Southern analysis (Supplementary materials Table S3).

Generation of new chromosomes and conclusion

In this study we examined a unique collection of C. glabrata strains covering Danish hospitals during the period 1985-1999. This time period is especially interesting because the main anti-fungal agents used nowadays, based on azoles, were introduced to Denmark in the early 1990s. Only limited sequence variability was detected in the D1/D2 domain (Fig. 1). However, when a fast-evolving locus, covering the intergenic region between two ORFs, was examined (Fig. 2), several phylogenetic sub-groups were found. Even strains with a very similar intergenic locus sequence, belonging to the same phylogenetic sub-group, had variable karyotypes (e.g. Fig. 3; Supplementary materials Fig. S6), confirming the previous suggestion that the C. glabrata chromosomes rearrange faster than point mutations accumulate within the genome sequence (Poláková et al. 2009).
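The small-chromosome sizes quoted above (roughly 280-420 kb) were estimated from migration distances on the pulsed-field gel against the marker strains. A minimal sketch of such an estimate, assuming log(size) varies linearly with migration distance between two flanking marker bands; the marker sizes and distances below are hypothetical, not measurements from this study:

```python
import math

def size_from_migration(d, d1, size1, d2, size2):
    """Interpolate fragment size (kb) at migration distance d, assuming
    log(size) is linear in distance between two flanking marker bands
    (d1, size1) and (d2, size2)."""
    frac = (d - d1) / (d2 - d1)
    return math.exp(math.log(size1) + frac * (math.log(size2) - math.log(size1)))

# Hypothetical marker bands: 450 kb at 5.0 cm and 225 kb at 6.0 cm.
# An unknown band migrating midway between them:
est = size_from_migration(5.5, 5.0, 450.0, 6.0, 225.0)
print(round(est))  # 318 (kb)
```

In practice the relation between size and mobility in PFGE is only approximately log-linear over a limited size window, so flanking markers close to the unknown band give the most reliable estimate.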
One could speculate that in each patient a nonpathogenic strain evolved into a virulent one, able to cause a systemic infection under immuno-suppressed conditions. Nine strains with small chromosomes (Fig. 4) belong to different sub-clades (Fig. 2); Y1642, Y1643 and Y1645 belong to a closely related group of strains (sub-group KA004540 in Fig. 2), and this clade gave rise to three different types of small chromosomes, related to CBS 138 chromosomes D, F and J, respectively. Apparently, the common progenitor of these strains was very prone to generating small chromosomes. D- and F-related small chromosomes also appear in distant clusters. In this report, we describe a new mechanism for the generation of small chromosomes, through chromosomal breakage and translocation of a centromere-less arm to another chromosome (Fig. 5; Supplementary material Figs. S2, S3, S4). Such translocations could be reciprocal or non-reciprocal, and are stable because the cell cannot tolerate a loss of the small chromosome (Table 1). While segmental duplications increase the gene dosage, the translocation pathway does not. When we examined the putative resistance gene CAGL0B02343g in the strain Y1646 (Table 2), we could see that its expression was significantly elevated (Supplementary material Table S3). One could then speculate that the high azole resistance phenotype of this strain (Table 1) is somehow connected with the over-expression of the CAGL0B02343g gene, coding for a multi-drug efflux pump. However, this gene is also highly expressed in Y1642, which is not very resistant to azole. We conclude that the small chromosomes contain more than the genes traced here, and it appears likely that some of these may contribute to an enhanced propagation in the patient. It seems that in our collection approximately every twentieth strain employed the strategy of small chromosome generation (Table 1).
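The mitotic instability of Y1644 noted earlier (its small chromosome was lost in almost two thirds of progeny over the ~70-generation stability test) can be converted into a rough per-generation loss rate, assuming a constant loss probability per generation; a minimal sketch:

```python
def per_generation_loss_rate(fraction_retained, generations):
    """Constant per-generation loss probability p solving
    (1 - p) ** generations == fraction_retained."""
    return 1.0 - fraction_retained ** (1.0 / generations)

# Roughly one third of progeny retained the small chromosome after ~70
# generations, giving a loss rate of about 1.6% per generation.
p = per_generation_loss_rate(1.0 / 3.0, 70)
print(round(p * 100, 2))  # 1.56
```

This constant-rate model is only a first approximation, since rearrangements in daughter lineages suggest the loss process is not strictly uniform.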
In addition, it could also be that some strains lost their small chromosome during preservation and growth under non-selective conditions in the laboratory medium. Generation of a new chromosome can provide genome configurations which could be more competitive, for example by increasing anti-fungal resistance in a certain patient habitat, and thus allow successful proliferation in a relatively hostile niche. While we described two paths of small chromosome generation, additional mechanisms may have been involved in the generation of the observed rearrangements.
Distributed education enables distributed economic impact: the economic contribution of the Northern Ontario School of Medicine to communities in Canada

Background

Medical schools with distributed or regional programs encourage people to live, work, and learn in communities that may be economically challenged. Local spending by the program, staff, teachers, and students has a local economic impact. Although the economic impact of DME has been estimated for nations and sub-national regions, the community-specific impact is often unknown. Communities that contribute to the success of DME have an interest in knowing the local economic impact of this participation. To provide this information, we estimated the economic impact of the Northern Ontario School of Medicine (NOSM) on selected communities in the historically medically underserviced and economically disadvantaged Northern Ontario region.

Methods

Economic impact was estimated by a cash-flow local economic model. Detailed data on program and learner spending were obtained for Northern Ontario communities. We included spending on NOSM's distributed education and research programs, the medical residents' salary program, the clinical teachers' reimbursement program, and spending by learners. Economic impact was estimated from total spending in the community adjusted by an economic multiplier based on community population size, industry diversity, and propensity to spend locally. Community employment impact was also estimated.

Results

In 2019, direct program and learner spending in Northern Ontario totalled $64.6 M (million) Canadian Dollars. Approximately 76% ($49.1 M) was spent in the two largest population centres of 122,000 and 165,000 people, with 1-5% ($0.7 M - $3.1 M) spent in communities of 5000-78,000 people. In 2019, total economic impact in Northern Ontario was estimated to be $107 M, with an impact of $38 M and $36 M in the two largest population centres.
The remaining $34 M (32%) of the economic impact occurred in smaller communities or within the region. Expressed alternatively as employment impact, the 404 full-time equivalent (FTE) positions supported an additional 298 FTE positions in Northern Ontario. NOSM-trained physicians practising in the region added an economic impact of $88 M.

Conclusions

By establishing programs and bringing people to Northern Ontario communities, NOSM added local spending and knowledge-based economic activity to a predominantly resource-based economy. In an economically deprived region, distributed medical education enabled distributed economic impact.

Supplementary Information: The online version contains supplementary material available at 10.1186/s13561-021-00317-z.

Background

In 2013, medical schools and teaching hospitals had an economic impact in Canada of $66 B (billion) CDN (Canadian Dollars) [1]. For distributed medical education (DME) programs, in which academic and clinical programs are offered in communities located away from the main campus, the economic impact is also distributed among participating communities and within the broader region [2,3]. DME program spending represents new money coming into rural or remote areas, and can help in the economic sustainability of these regions, with the potential for improvements in the social determinants of health and health equity, which in turn can have a positive economic impact [4-11]. However, with a few exceptions [2,3], studies typically have been conducted at the level of the province, state, or nation, and while some studies may estimate the impact on capital cities or large regions, the economic impact is not estimated for smaller cities or towns. Communities have an interest in knowing the community-specific economic impact, given the role of these communities in ensuring the success of DME.
Our study sought to fill this information gap by estimating the economic impact of the Northern Ontario School of Medicine's (NOSM) fully operational, community-engaged health professional education and research programs for specific communities in the historically underserved and economically disadvantaged region of Northern Ontario. NOSM's service region in Northern Ontario has 90% of Ontario's land area (806,787 of 908,699 km²), an area that exceeds that of the United Kingdom and France (exclusive of overseas territories), but has only 6% of the population of the province (840,739 of 13,448,494 people) (Fig. 1) [12]. Communities in the lower part of the service region are connected by road, rail, and air, whereas those in the upper part are connected by air and winter (ice) roads. The economy of this region is largely resource based [13], with socioeconomic characteristics and population health statuses that are worse than in the rest of the province [14]. NOSM's service region, relative to the whole province, has a higher proportion of Indigenous (14% vs. 2%) and Francophone (24% vs. 5%) people [15-17]. These minority groups have comparatively lower socio-economic status, poorer health status, and worse access to healthcare services [18,19]. The political decision to locate a stand-alone medical school in Northern Ontario, rather than to exploit perceived scale efficiencies of established and larger medical schools in southern Ontario, was undertaken to improve overall health outcomes and to counter the high cost of moving patients from remote communities to doctors in the major cities. Previous initiatives aimed at improving access to healthcare services in Northern Ontario were not fully successful [20-22]. NOSM, which started accepting students in 2005, was established with an explicit social accountability mandate to help improve the health of the people of Northern Ontario [23,24].
The deliberate creation of a new medical school in a historically underserved region sought to leverage the strong positive association between physicians' practice location and where they spent their childhood [25], as well as the strong positive association between practice location and where physicians completed their medical school education or residency training [22,26-28]. These training opportunities were extended by NOSM to other healthcare practitioners such as dietitians and rehabilitation therapists. The establishment of NOSM as a distributed medical school was viewed as part of a "comprehensive, four-year plan to invest in health and education, foster economic growth and balance the budget" in Northern Ontario [29]. At present, NOSM provides a distributed educational experience in over 90 communities for a broad range of students, including undergraduate medical students, postgraduate medical residents, and dietetic, rehabilitation therapy, physician assistant, and pharmacy students [24]. In addition, students and graduate students from other healthcare professional schools undertake placements in the region. These programs, staff, learners, and teachers increase the economic activity in participating communities and surrounding lands. This study sought to estimate the community-specific economic impact of spending attributable to NOSM's education and research programs and related activities.

Methods

To estimate the economic impact, we built a cash-flow model using Excel (Microsoft Office Professional Plus 2013, v15.0.5153.1000) for communities clustered in eight economic zones (defined below) and for NOSM's service region in Northern Ontario as a whole.
To this accounting structure we added a local economic model [30-34] using multipliers that incorporated population size, industry diversity, and the propensity to spend locally; these multipliers were derived from a regression equation developed with data specific to Ontario communities [35,36]. The eight distinct economic zones included two census metropolitan areas (CMAs, core population ≥ 100,000), four census agglomerations (CAs, core population ≥ 10,000), and two of the larger census subdivisions (CSDs). CMAs and CAs include cities and surrounding lands that represent zones of integrated economic activity, as inferred from commuter flows to urban cores [37]. One CSD had been part of a CA in 2001 and 2011, and therefore we grouped spending in all communities that had been part of the former CA, labelling this as Temiskaming Shores CSD+ (CSD plus). The second CSD, Sioux Lookout, is a health and social service hub for 29 First Nation (Indigenous) communities distributed across northwest Ontario. NOSM-related spending (described below) in this community was high relative to population size. All other CSDs in the region with NOSM-related spending, including First Nations Indigenous communities, were grouped together to maintain confidentiality. We estimated the economic impact for this group as a whole using a multiplier based on an average population size of 3260 people. We also estimated an additional intra-regional economic impact, given that community members were known to purchase goods and services from other communities in the region. Community-specific data used to develop the cash-flow model included: salaries and benefits of NOSM personnel and medical residents, and reimbursement for clinical teaching duties; spending on travel, supplies, and services; stipends paid to contract faculty; spending on educational programs; spending on research; and other spending for fiscal year (FY) 2014/2015.
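Numerically, the model described above reduces to community-level direct spending scaled by a community-specific multiplier, plus an intra-regional term. A minimal sketch with hypothetical spending figures and multipliers (the actual multipliers came from the Ontario regression equation and are not reproduced here):

```python
# Hypothetical direct NOSM-related spending ($M) and income multipliers;
# the real multipliers depended on population size, industry diversity
# and the propensity to spend locally.
zones = {
    "Large CMA":   {"spending": 25.0, "multiplier": 1.55},
    "Mid-size CA": {"spending": 3.0,  "multiplier": 1.35},
    "Small CSDs":  {"spending": 0.7,  "multiplier": 1.20},
}

def regional_impact(zones, regional_uplift=0.10):
    """Community impacts = spending x multiplier; the regional total applies
    a multiplier 10% above the largest community multiplier to all spending,
    with the difference reported as intra-regional impact."""
    community = {name: z["spending"] * z["multiplier"] for name, z in zones.items()}
    total_spending = sum(z["spending"] for z in zones.values())
    regional_multiplier = (1 + regional_uplift) * max(z["multiplier"] for z in zones.values())
    total = total_spending * regional_multiplier
    intra_regional = total - sum(community.values())
    return community, intra_regional, total

community, intra, total = regional_impact(zones)
print(round(total, 1))  # 48.9 ($M) for these hypothetical inputs
```

The structure mirrors the paper's accounting (community impacts plus an intra-regional line item), but every input value here is illustrative.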
These totals include spending recorded through the Paymaster program for salaries of medical residents (one of several learner groups) and the academic Alternate Funding Plan for clinical teachers (Supplement 1). We estimated average local spending per week for all other learners. This average weekly spending was multiplied by the number of learner-weeks per community to estimate annual local spending. We refer to the combined spending on all programs and by all learners as NOSM-related spending. Full postal codes were used to locate the employee, resident, clinical teacher, or vendor in specific communities within NOSM's service region (Fig. 1). Cash flow totals were cross-checked against publicly reported values in NOSM's Financial Statement of Operations [38], with "Amortization" replaced by "Cash flows from financing and investing activities (Obligations and Acquired)" plus payments to residents and clinical teachers. The cash flow model was constructed to best represent actual program, employee, teacher, and learner spending in Northern Ontario communities [39]. Data, particularly spending data, on programs that pre-dated NOSM were not readily obtainable, and therefore the counterfactual was the absence of all programs in the service region, reflecting the "gross change in a region's existing economy that can be attributed to a given industry" [33]. Prior to NOSM, there were no medical school satellite or regional sites in Northern Ontario. Instead, there was a diverse collection of programs affiliated with other Ontario medical schools (Supplement 2). In 2005, NOSM started a new, full 4-year undergraduate medical education program and since then has added five more postgraduate medical specialties, plus physician assistant and medical physics programs, and pharmacy placements.
In 2005 to 2006, NOSM began consolidation of existing healthcare and medical education programs and has since steadily increased enrolment, offered more types of placements, and recruited more healthcare providers, care facilities, and communities into its programs. The economic model summed direct, indirect, and induced economic effects to estimate the total economic impact of these programs and people in the eight economic zones and for the whole of the service region. The community-specific multipliers combined all effects into a single estimate of economic impact. We used 2016 Canadian census population sizes [12] to calculate the multipliers (described above) that were applied to cash flows to estimate the impact of all monies available to be re-spent in the community or region, corrected for monies that leave Northern Ontario. Detailed spending data from FY 2014/2015 were made available to the research team. These spending data were multiplied by the ratio of total spending in FY 2018/2019 divided by total spending in FY 2014/2015 to estimate spending in FY 2018/2019. We checked the assumption that spending patterns were reasonably consistent from year to year by using a Chi-squared test of the count of dollars in each of the 15 major spending categories across five fiscal years. Community-specific multipliers were applied to the adjusted spending. The regional impact was estimated using a multiplier that was 10% higher than the largest community's multiplier, which seemed reasonable given intra-regional spending. The regional multiplier was applied to total adjusted spending in the region. We calculated the effect on employment in Northern Ontario to obtain an alternative measure of the economic impact. The number of full-time equivalent (FTE) positions included NOSM employees and faculty, as well as employees of health care facilities whose salary and benefits were paid in whole or in part by NOSM but who were not formally NOSM employees.
FTE data also included residents who were also not formally NOSM employees. Data on clinical teachers FTE were not readily available and could not be included. We increased the income multipliers by 4.1% before estimating FTE. This increase was based upon a comparison of income and employment multipliers estimated for census divisions in Northern Ontario [40]. We also calculated a first approximation of the economic impact of NOSM-trained physicians who located their practice in the service region. We used the number of physicians known to be practicing in the service region in November 2018, multiplied by average gross income for family physicians (FPs) in Ontario ($291,090), and adjusted by a published multiplier of 1.07 for family practices in Canada [28,41,42]. A regional impact was estimated using a multiplier that was 10% higher, and was applied to total gross income for the region. For simplicity, we assumed that average FP income also applied to other medical and surgical specialists. Results Adjusted financial statements showed that total spending by NOSM, including salary for medical residents and reimbursement for clinical teaching duties, increased from $37. The total economic impact in the service region was estimated to be $107 M in 2019 (Fig. 3). This estimate assumed that some of the money that leaked out of one community in Northern Ontario would be spent in another community in Northern Ontario before leaving the region. In the two largest economic zones of Thunder Bay CMA and Greater Sudbury CMA, the economic impact was $38 M (35.0% of total) and $35.7 M (33.3% of total), respectively. The impact of spending in communities outside of these urban areas summed to $19.7 M (18.4% of the total impact). Intra-regional spending contributed an additional $14.2 M (13.3%). 
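The spending adjustment, multiplier application, and physician first approximation described above reduce to a few lines of arithmetic. A sketch with illustrative inputs (community names and spending figures are invented; the physician figures are those quoted in the text):

```python
def adjust_spending(spend_2015, total_2015, total_2019):
    """Scale detailed FY2014/15 community spending to FY2018/19 levels
    by the ratio of total spending in the two years."""
    ratio = total_2019 / total_2015
    return {c: s * ratio for c, s in spend_2015.items()}

def economic_impact(spending, multipliers, regional_uplift=1.10):
    """Apply community-specific multipliers; the regional multiplier is
    set 10% above the largest community multiplier."""
    local = {c: spending[c] * multipliers[c] for c in spending}
    regional = sum(spending.values()) * max(multipliers.values()) * regional_uplift
    return local, regional

# Illustrative communities and figures (not NOSM data).
spend_2019 = adjust_spending({"Town A": 10_000_000, "Town B": 4_000_000},
                             total_2015=14_000_000, total_2019=21_000_000)
local, regional = economic_impact(spend_2019, {"Town A": 1.5, "Town B": 1.3})

# Physician first approximation, using the figures quoted in the text:
# 256 NOSM-trained physicians x $291,090 average gross income x 1.07,
# with a regional multiplier 10% higher.
physician_regional_impact = 256 * 291_090 * 1.07 * 1.10  # ~= $87.7 M
```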
Per capita impact generally followed the same pattern, though the Sioux Lookout CSD and the Temiskaming Shores CSD+ had a per capita impact that was surpassed only by Greater Sudbury and Thunder Bay (Table 1). The pattern of employment impact in the region mirrored that of income impact. In November 2018, there were 226 family physicians and 30 other medical or surgical specialists who had trained at NOSM and were practising in the region (Table 1). The economic impact of these physicians in the region was estimated to total $87.7 M. Discussion For every dollar spent by NOSM, including monies spent in support of clinical duties by residents, reimbursement for teaching duties by physicians, and spending by learners, an estimated $0.66 was generated in additional economic activity in 2019 in NOSM's service region of Northern Ontario. Although 68% of the economic impact occurred in the two largest population centres, other cities and towns in the region shared 18% of the economic impact, while the intra-regional economic impact was estimated at 13%. The economic impact in Northern Ontario increased by 60% over eleven years, from $67 M in 2008 [2] to $107 M in 2019. By comparison, the impact of Health Sciences North (HSN) outside of Sudbury was one-twelfth of NOSM's impact outside of Sudbury or Thunder Bay. Much of this difference can be explained by a difference in mandates and organizational structure. For instance, HSN serves as the hospital for Greater Sudbury as well as a tertiary and quaternary care referral centre for northeast Ontario, with each community having its own independent hospital. In comparison, NOSM has central campuses in Greater Sudbury and Thunder Bay, with teaching sites in over 90 communities across northeast and northwest Ontario. Notwithstanding the differences in organizational mandates, NOSM's 12-fold higher impact outside of the major urban areas demonstrated a distributed impact. However, the economic impact relative to the gross domestic product (GDP) of the region was small.
The best available information suggested that the economic impact of $107 million represented 0.3% of the region's GDP [43,44]. It is also important to note that spending and economic impact disproportionately accrued to the larger population centres of Greater Sudbury and Thunder Bay, as evidenced by the higher per capita impact values. More could be done to achieve an equitable distribution while recognizing differences in infrastructure, industry diversity, population size, proximity to larger centres, propensity to spend locally, and other salient economic characteristics as well as pertinent programmatic opportunities and challenges. Regardless of the proportion of GDP and per capita impact, spending by DME programs in participating communities and the impact associated with re-spending constitute an investment in economically deprived regions and may help improve employment, income, education, and other social determinants of health [8,45,46]. In many communities, this spending represents new money. Findings from an earlier study [2,47], from a similar study conducted on a DME program in Québec [48], and from a study that specifically examined impact on recruitment in DME communities [49] have demonstrated additional social and economic benefits in participating communities. These studies have also shown an increase in civic pride, reputation, networking opportunities, recruitment of healthcare professionals, attractiveness to new businesses, and other benefits in the community. There was more than dollars at work, though new dollars helped. Limitations There are practical and theoretical limits to local economic impact analyses [30][31][32][33][34]. Nonetheless, this approach is considered reasonable for short-term estimates in small, simple economies [31] such as Northern Ontario, and it is commonly used to estimate the economic impact of universities, teaching hospitals, and medical schools [1,3,30].
The counterfactual was the absence of any of the programs and activities associated with NOSM. This was used because of the difficulty in obtaining program spending information before NOSM, and because NOSM subsumed all previous programs, added more programs, and increased the number of learners, staff, and teachers. Consequently the net economic impact of NOSM may be lower than estimated by our model. However, our model did not measure all benefits (described later in the discussion), which may justify the higher estimate. In the absence of detailed spending data for 2019, the model used a ratio to adjust spending in 2015 to that in 2019. An examination of spending in broad categories showed no significant differences across five fiscal years and so the use of this ratio seemed reasonable. Income multipliers were developed prior to 2012 for communities in Ontario and do not differentiate among spending type. Community population size is the sole independent variable, though the formula accounted for industry diversity and propensity to spend locally [36]. Nonetheless, these multipliers were in the range estimated in 2019 for the health care and social assistance sector in Northern Ontario [40]. Increasing the income multiplier by 4% to estimate employment impact seemed reasonable given a similar difference between income and employment multipliers in the aforementioned publication [40]. Unmeasured benefits Our approach did not consider all economic activity linked to NOSM. For instance, the model excluded some spending by graduate research students, visitor spending, and construction costs-all three of which were minimal. With a focus on NOSM programs and activities, the model included spending of funds that reimbursed clinicians for teaching duties, but not other types of clinician spending. This additional economic impact can be large [49,50]. 
For example, a very preliminary estimate suggested that NOSM-trained physicians who located their practice in the region had an economic impact of $88 M. Also out-of-scope was any change in the economic burden associated with improved health status or social impact [47,51] attributable to NOSM. We expect that these benefits have accrued, but we do not have evidence to support this claim. On the other side of the equation, the model excluded the cost of municipal services required by NOSM employees, learners, or clinical teachers. However, these demand costs may be negligible or negative, given that the population is stable or declining in most Northern Ontario communities [12]. Our study did not assess how the economic impact of NOSM-related spending compared to other existing or potential provincial healthcare initiatives. The timing and focus of new government project and program expenditures is complex and largely opaque, but there is no reason to think that NOSM displaced other public spending for healthcare or development in Northern Ontario. On the contrary, it is likely that the presence of NOSM has attracted other developments in academic and health sectors including the health research institutes in Thunder Bay and Sudbury. Nor is there any reason to think that NOSM displaced monies that were otherwise going to frontline care in the region. It is possible that NOSM reduced the need to transport some patients to large centres for primary or ambulatory care, but this is probably a small effect. Future study is required to account for all costs and benefits to assess the relative impact on economic activity. Conclusions Our economic impact study demonstrated that NOSM's DME programs and associated activities, spending by staff, clinical teachers and learners, and research activities contributed to the Northern Ontario economy in a way that extended beyond the production of health care professionals. 
In Northern Ontario, the economic impact on participating communities was at least 60% greater than the original government investment. This expenditure in a low resource region provided an economic stimulus and, along with NOSM graduates who set up practice in the region, may help improve the social determinants of health and the health of the population. DME is also DEI: distributed economic impact. Robinson as part of a broader socio-economic impact study funded by the Ontario Ministry of Health and Long-Term Care (MOHLTC). The authors thank the reviewers for their comments on the manuscript. The views expressed in this paper are those of the authors and do not necessarily reflect those of NOSM or the MOHLTC. Authors' contributions All authors (JCH, DRR, RPS) made substantial contributions to the conception or design of the work, and to the interpretation of data for the work. JCH acquired the data and conducted the modelling in consultation with DRR. JCH drafted the manuscript, while DRR and RPS critically revised the text for important intellectual content. All authors provided final approval of the submitted version and have agreed to be "accountable for all aspects of the work in ensuring that questions related to the accuracy or integrity of any part of the work are appropriately investigated and resolved" (http://www.icmje.org/recommendations/browse/roles-and-responsibilities/defining-the-role-of-authors-and-contributors.html). Authors' information JCH, MSc, is Senior Research Associate and Associate Director with the Centre for Rural and Northern Health Research, Laurentian University, Sudbury, Ontario, Canada. JCH conducts national and international research on medical schools' admissions policies, socio-economic impact, discipline choice, and graduate practice location as well as on health service access and utilization in rural regions and other underserved areas.
DRR, PhD, is an Associate Professor with the School of Northern and Community Studies, Laurentian University, Sudbury, Ontario, Canada. DRR is an economist and a leading expert on Northern Ontario economic development. RPS, AM, MBBS, MClSc, is Professor of Rural Health and Founding Dean Emeritus of the Northern Ontario School of Medicine (NOSM), Laurentian University, Sudbury, and Lakehead University, Thunder Bay, Ontario, Canada. RPS undertakes research into aspects of rural health with studies of: the health needs of small rural communities; sustainable models of health care in remote rural communities; rural health workforce, particularly rural family physicians; rural medical and health professional education and training outcomes and impact; and recruitment and retention of remote rural health workers. Funding Funding for the study was provided to the Centre for Rural and Northern Health Research (CRaNHR) by the Northern Ontario School of Medicine (NOSM) with monies approved by the Ontario Ministry of Health and Long-Term Care (MOHLTC). Availability of data and materials Aggregated data are available from the corresponding author upon reasonable request.
Treatment of Inflammatory Dentigerous Cyst Using a Surgical T Drain in a Child Abstract Dentigerous cysts are rarely reported in young children. They are usually asymptomatic and only identified when becoming significantly large. Treatment by enucleation may damage structures like the inferior alveolar nerve, maxillary sinus, or permanent teeth, thus reducing the child's quality of life. Therefore, conservative surgical treatment such as decompression is indicated. This case report describes the treatment and subsequent complete regression of an inflammatory dentigerous cyst based on the decompression method using a customized surgical tube in a 10-year-old girl. The innervation was preserved, and permanent teeth erupted. Introduction The dentigerous cyst (DC), also called a follicular cyst, is odontogenic in nature and includes the crown of an unerupted or impacted tooth. 1,2 Though it is the second most common jaw cyst, affecting 0.9 to 7.3% of the population, dental literature reports a low prevalence in children. 2,3 The condition is most often found in persons in their thirties. Only 4 to 9% of all DCs occur in the first decade of life. [4][5][6] Their origin can be developmental or inflammatory, but their exact etiology remains unclear. An inflammatory dentigerous cyst (IDC) appears around an unerupted permanent tooth due to inflammation spreading from an overlying nonvital primary tooth. 7 It occurs most often in the mandibular premolar region, where primary molars are damaged by caries. 5,8 Smaller DCs are generally asymptomatic and accidentally discovered, for instance, during a routine radiographic examination. Larger cysts may cause expansion of the bone resulting in facial asymmetry, root resorption, and shifting of adjacent teeth. 8 A follicular cyst radiographically appears as a well-defined unilocular radiolucency surrounding the crown of an unerupted tooth.
Inflammatory types usually involve the roots of a nonvital primary tooth and the crown of an unerupted permanent successor that can be displaced. 7,8 A correct diagnosis requires histopathological analysis because unicystic ameloblastoma and odontogenic keratocysts exhibit similar radiographic features. 8 The DC is treated using enucleation, marsupialization/decompression, or a combination of the two procedures. Enucleation should be done for any cyst that can be safely removed without sacrificing adjacent structures. 9,10 However, when treating larger cysts, or those present in pediatric patients with mixed dentition, the decompression method is preferred as it protects the unerupted permanent successors. 11
Keywords: dentigerous cyst, decompression, T drain, children, oral surgery
Case Report A 10-year-old girl was referred to the Department of Maxillofacial and Oral Surgery for painless swelling on the right side of the mandible. Intraoral examination revealed a normal-looking mucosa with a thin expansion of the buccal cortical plate, exhibiting bone elasticity on palpation in the primary mandibular right first molar region. The patient denied any sensory deficit. There was no account of specific systemic diseases or previous traumatic injuries in the affected area. A panoramic radiograph and a cone-beam computed tomography (CBCT) brought in by the mother showed significant unicystic radiolucency, with well-defined margins expanding from the primary mandibular right second molar to the permanent central incisor on the same side. The nonerupted permanent canine was horizontally shifted, and the first premolar mesially inclined, while the second premolar seemed to be typically positioned. The roots of the central and lateral right incisors were tilted, and the inferior alveolar nerve was in contact with the lesion. The primary first molar was significantly damaged by caries and nonvital. The root of the primary canine was resorbed (►Figs. 1 and 2).
Based on these clinical and radiological findings, a provisional diagnosis of an IDC caused by the primary mandibular right first molar was made. The primary mandibular right canine and the first and second molars were extracted under general anesthesia due to the patient's age and fear of the procedure. First, an incisional biopsy for the histopathological examination was performed. Then, a decompression device made from a prefabricated surgical T drainage tube (T-FR Huali Technology No.666 Chaoqun street High tech area, Changchun, Jilin, China) was used. It was cut precisely to the desired length and width from measurements on a preoperative CBCT, and its vertical end was positioned inside the cystic lumen. Next, the horizontal part (wings) was drilled on both sides, providing an easier fixation on the mucosa. The device was inserted into the extraction socket of the primary first molar and secured with 4-0 nylon sutures (►Fig. 3). The patient's parents were instructed to irrigate the cyst cavity using 10 mL syringes filled with 0.9% saline solution by inserting the plastic part of the cannula into the tube entrance three times a day. Postoperative follow-up appointments were scheduled to take place every 3 months. A histopathological examination of the lesion confirmed the clinical diagnosis of the IDC (►Fig. 4). Three months later, the postoperative radiograph showed a more vertically positioned canine with reduced radiolucency (►Fig. 5). The decompression tube needed to be shortened due to the canine eruption. A significant lesion regression was observed at the 6-month follow-up, leading to the removal of the drain (►Fig. 6). A year after the decompression had been done, all permanent teeth involved in the eruption process maintained vitality. The complete regression of the lesion with bone formation was radiographically observed (►Fig. 7), and the innervation of the right inferior alveolar nerve was preserved entirely.
The patient was referred to an orthodontist to correct the rotated canine position. Discussion Even though the pathogenesis of the follicular cyst remains unclear, its connection to inflammation caused by the nonvital primary tooth is obvious. A study involving a histological evaluation of cysts occurring in the mixed dentition stage detected an inflammatory process caused by a primary tooth in 93.6% of the observed follicular cysts. 12 Based on this information, removing the source of inflammation, that is, the primary mandibular right first molar in our patient, is the essential therapeutic procedure. Several authors have shown that decompression is an effective treatment for odontogenic cysts. 13,14 It is a conservative technique that retains the permanent teeth, pulp vitality, and, in this case, essential structures like the inferior alveolar nerve. However, this approach requires compliance from the patient. 14 Reducing intraluminal pressure and facilitating bone formation requires keeping the cyst open. This is done using various devices, such as a simple iodoform gauze, stents, brackets and chains attached to impacted teeth, or removable partial dentures that act like obturators. 15,16 In our case, we used a tube modified from a surgical T drain and secured with sutures. It was practical given that the material is soft and does not damage the underlying mucosa. Also, it can be easily cut to the desired length, and its "wings" helped keep it from accidentally moving into the bone defect. Even though tube maintenance can be challenging for patients, especially children, the patient's mother said it became part of their daily routine. Besides some adjustments performed during the checkup appointments, we did not observe commonly reported problems like infection or obliteration of its entrance.
17 Full eruption of the involved permanent teeth and healing of the cystic cavity in our patient occurred after 12 months, which is somewhat longer than Allon et al reported, where the estimated mean decompression period is 7.5 months in children under 18 years of age. 18 This outcome may be due to the lesion's size or the case's specifics. Previous case reports, as well as ours, show that the permanent successors, even when badly dislocated, erupt into the dental arch. 19 A systematic review by Nahajowski et al showed that a patient's young age (≤10 years) and root formation below half its total length seem to be factors that increase the probability of a spontaneous eruption. 20 Not many published studies report DCs treated using decompression in children, which may be due to the low incidence of DCs in that population. Therefore, further studies of this kind should be conducted. Conclusion IDCs can be treated successfully with minimal intervention using a conservative method like decompression. By extracting the infected primary teeth and ensuring continuous drainage utilizing a device like ours, essential structures can be protected and spontaneous eruption of the permanent teeth achieved, thus reducing the need for prosthetic rehabilitation. The patient should be scheduled for regular follow-ups until the healing process has been completed. Ethics Approval and Consent to Participate All procedures performed were in accordance with the ethical standards of our institutional research committee and with the 1964 Helsinki Declaration and its later amendments. A written statement of consent has been obtained from our patient. The study was approved by the local ethical committee. Funding None.
An Observation on F_2 at Low x A simple parametrisation of H1 and ZEUS data at HERA is given for the ranges in x and Q^2 of 10^{-4} - 5×10^{-2} and 5 - 250 GeV^2, respectively. This empirical expression is based on a strikingly similar dependence of the average charged particle multiplicity on the centre of mass system energy sqrt{s} in e+e- collisions on the one hand, and the x dependence of the proton structure function F_2 as measured at small x on the other hand. To the best of our knowledge, this similarity has not been noted before. One of the most successful tests of perturbative QCD is the quantitative explanation of scaling violations, i.e. the Q^2 dependence of the nucleon structure functions at fixed x-values. Here Q^2 and x denote the usual deep-inelastic variables, the squared four-momentum transfer and Bjorken-x. Previous fixed target experiments measured the structure function F_2(x, Q^2) for the region x > 0.01 [1,2]. These data were, therefore, sensitive to the valence content of the nucleon. The DGLAP evolution equations [3] describe the Q^2 evolution of the structure functions in this region very well. However, the data from the electron-proton collider HERA explore a new kinematic region. Values of Bjorken-x in the range 10^{-5} < x < 10^{-2}, for Q^2 larger than 1 GeV^2, are reached. In this region the valence contribution is expected to be negligible and F_2 to be driven by the gluon in the proton. Recently new data on F_2 from the H1 [4] and ZEUS [5] experiments at HERA, based on the 1994 data taking period, have been published. The data have reached a level of precision of 3-5% in a large region of the kinematical plane. They show very clearly that F_2 rises strongly for decreasing x for all Q^2 values, and strong scaling violations are observed in the new deep-inelastic region at low x. Originally it was thought that in the HERA region ln 1/x terms, not included in the DGLAP resummation, could become important.
However, it has turned out that these evolution equations are still successful in describing the Q^2 dependence of the data [4]. The rise of F_2 at small x was predicted more than twenty years ago [6] from the leading order renormalization group equations of perturbative QCD. Ball and Forte recently pursued these ideas [7] and proposed a way to demonstrate that the low-x data at HERA exhibit double asymptotic scaling (DAS) dominantly generated by QCD radiation. They obtained an expression for F_2(x, Q^2) in the double asymptotic limit of low x and large Q^2. The recent F_2(x, Q^2) measurements of H1 for Q^2 > 5 GeV^2 are broadly in agreement with such a scaling behaviour. Hence, in this region these data are expected to be sensitive to the fundamental QCD evolution dynamics, and not to depend on unknown (non-perturbative) starting distributions at sufficiently large Q^2 and small x. This idea has also been exploited in the dynamically generated parton distributions [8] which predicted, for the same reason, the rise of F_2 at small x prior to data. Qualitatively these results can be understood by viewing the deep inelastic collision at low x as the interaction of a virtual photon with partons in a space-like parton cascade which stretches from x of order one to x << 1, and thus covers a rapidity range ∝ ln(1/x). For very small x, the rapidity range is large and a well-developed cascade can be formed. In the leading-log approximation this leads to an expression [3] for F_2 of the DAS form

F_2(x, Q^2) ∝ exp{ [ (16 N_c / b) · ln(1/x) · ln( α_s(Q_0^2)/α_s(Q^2) ) ]^{1/2} }.   (1)

Here b is the leading coefficient in the β-function for the expansion of α_s, namely b = 11 − 2n_f/3 with n_f the number of flavours; N_c is the number of colours. Another cornerstone of the success of perturbative QCD is the set of calculations and predictions for particle production in time-like parton cascades in e+e− collisions, based on the Modified Leading Log Approximation (MLLA) evolution equations and the assumption of Local Parton Hadron Duality (LPHD) [9,10].
In this approximation, the average parton multiplicity in e+e− collisions as a function of the centre of mass system (CMS) energy √s is given by

⟨n(s)⟩ = K · Γ(B) · (z/2)^(1−B) · I_{B+1}(z),   (2)

with z = [ (16 N_c / b) · ln(√s/2Q_1) ]^{1/2}. The function I_{B+1}(z) is a Bessel function of order B + 1, with, for four flavours, B = (11 + 2n_f/27)/b = 1.355. Here Γ is the Gamma function and K a normalization constant. The parameter Q_1 is the p_t cutoff of the partons in the shower and was found to be in the range of 250-290 MeV [9] from fits to the data. Eqn. (2) gives a very good description of the average charged hadron multiplicity in e+e− collisions for CMS energies in the range √s = 3 − 130 GeV [9]. Expression (2) can be approximated at large z by

⟨n(s)⟩ ∝ z^(1/2−B) · e^z.   (3)

Comparing expressions (1) and (3), one notices an intriguing similarity. For fixed Q^2 they have a similar functional dependence on 1/x and s/4Q_1^2, respectively. The connection s → 1/x emerges naturally in Regge-inspired phenomenology, see e.g. [11]. In Fig. 1 we compare the e+e− data on average charged particle multiplicities versus √s and the HERA low-x F_2 data versus 2Q_1/√x, with Q_1 = 270 MeV, as determined in [9]. The e+e− multiplicity data are represented by curves resulting from a phenomenological fit through the data as derived by OPAL [12]. The curves are normalized to the F_2 data for each Q^2 bin separately. This shows that at small x the evolution of F_2 with 1/x and the dependence of the average charged particle multiplicity in e+e− collisions on √s are indeed quite similar, as suggested by the expressions above. This simple observation led us to study fits of the full expression (2) to the new low-x measurements of F_2 at HERA, with the change s/4Q_1^2 → 1/x. The Q^2 dependence (absent in e+e−) was assumed to be given by a slowly varying function of Q^2 of the form (ln[α_s(Q_0^2)/α_s(Q^2)])^δ, with δ taken to be a constant. The final expression fitted to the data is

F_2(x, Q^2) = C(Q^2) · Γ(B) · (z/2)^(1−B) · I_{B+1}(z),   (4)

with C(Q^2) = C_0 · (ln[α_s(Q_0^2)/α_s(Q^2)])^δ and z = [ (16 N_c / b) · ln(1/x) ]^{1/2}, where Q_0 is taken to be 1 GeV and the two-loop expression for α_s is used with Λ_QCD = 263 MeV [13].
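The large-z step above rests on the standard asymptotic form I_ν(z) ≈ e^z/√(2πz). A quick numerical check with a stdlib power-series implementation (our own illustration, not the paper's fitting code; B = 1.355 as quoted in the text):

```python
import math

def bessel_iv(nu, z, terms=200):
    """Modified Bessel function I_nu(z) from its power series,
    with each term computed in log space to avoid overflow."""
    total = 0.0
    for k in range(terms):
        log_term = ((2 * k + nu) * math.log(z / 2.0)
                    - math.lgamma(k + 1) - math.lgamma(k + nu + 1))
        total += math.exp(log_term)
    return total

B = 1.355    # (11 + 2*n_f/27)/b for n_f = 4, as quoted in the text
z = 50.0     # a large z, where the asymptotic form should hold
exact = bessel_iv(B + 1, z)
asymptotic = math.exp(z) / math.sqrt(2.0 * math.pi * z)
# exact/asymptotic tends to 1 as z grows; at z = 50 the two agree
# to within a few per cent, which is what licenses the large-z form.
```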
Note that the normalization factor C(Q 2 ) and power δ are the only fit parameters at any fixed Q 2 . The result is shown in Fig. 2, where a fit is made in each bin of Q 2 on the H1 and ZEUS data, separately. The expression (4) describes the data well over the whole kinematic region, except at large y = Q 2 /xs values, where the contribution of valence quarks is expected to become important. The difference in the results obtained using the H1 or ZEUS data can hardly be distinguished. We find that the data are best reproduced for δ ∼ 0.7 (see below), definitely below the value δ = 1 derived from the asymptotic form in perturbation theory [6,7]. The result for the normalization C is shown in Fig. 3 as function of Q 2 . In the range 5 < Q 2 < 250 GeV 2 , C is essentially constant with a value of about 0.38. For lower Q 2 , a clear breaking of this regularity is observed, and hints that additional contributions to F 2 become important. Encouraged by the results shown in Figs. 2 and 3 we perform a combined fit of the H1 and ZEUS data to eqn. (4) with C(Q 2 ) = C 0 constant over the whole Q 2 range, in the region 5 < Q 2 < 250 GeV 2 , x < 0.05, y > 0.02. The latter two conditions are imposed to avoid the valence quark region. The result is shown in Fig. 4. The fit has χ 2 /N DF = 265/231, using the full errors. The relative normalization of the H1 and ZEUS data was left free. The normalization factors found are 0.99 and 1.025 for H1 and ZEUS respectively, well within the quoted normalization uncertainties [4,5]. The statistical errors on the fit parameters are from a fit with the statistical errors of the data only. Using the full error matrix of H1 and/or ZEUS each of the measured quantities entering the F 2 analysis is varied in turn. For the two fit parameters we find C 0 = 0.389 ± 0.005(stat) ± 0.012(syst) and δ = 0.708 ± 0.007(stat) ± 0.028(syst). 
From the fits to data of the individual experiments we find for H1: C 0 = 0.385 ± 0.007(stat) ± 0.020(syst) and δ = 0.683 ± 0.010(stat) ± 0.055(syst) (χ 2 /N DF = 76/97); for ZEUS: C 0 = 0.384 ± 0.007(stat) ± 0.009(syst) and δ = 0.723 ± 0.010(stat) ± 0.025(syst) (χ 2 /N DF = 186/134). A point by point analysis shows that the region y < 0.04 is responsible for a substantial contribution to the χ 2 for the ZEUS data. With two free parameters only: the normalization C 0 and δ, we are able to account for the x and Q 2 dependence of F 2 starting from a parametrisation which successfully describes the energy dependence of the mean charged multiplicity in e + e − annihilation, provided s is identified with 1/x. We also note that according to eqn. (4) F 2 grows slower than any power of 1/x but faster than any power of ln 1/x. In particular, for most of the regions in Q 2 shown in Fig. 4, the F 2 data indeed increase faster than ln 1/x, contrary to the claims in [14]. Fig. 4 shows λ = d ln F 2 (x, Q 2 )/d ln(1/x) calculated from (4) for a number of x values. A rise of λ with Q 2 is observed. Note that its value depends on the x-region: λ increases with increasing x. The growth of λ with Q 2 is often considered to be indicative of a transition from a region of "soft" pomeron exchange (λ ∼ 0.1) at low Q 2 to a regime of "hard" pomeron exchange (λ ∼ 0.3 − 0.4) at high Q 2 . This argument is based on measurements of d ln F 2 (x, Q 2 )/d ln(1/x) which cover, however, different ranges in x as Q 2 changes. Fig. 4 demonstrates that the so-called "soft" to "hard" transition is much less spectacular when x is kept fixed. In addition, we note that the slopes at a given Q 2 are larger at large x than at small x. This runs contrary to the often expressed opinion that the small x region in deep-inelastic scattering probes the "hard" pomeron. In first instance we regard eqn. 
(4) as a compact parametrisation of the F 2 data at small x, where the dynamics of the F 2 evolution is expected to be dominated by gluons. Since it is based on a result of the MLLA evolution equations, which include coherence, it is well adapted to be used e.g. as an ansatz for starting distributions in QCD fits of proton structure data. However, it is tempting to speculate that the similarity observed here is more than just a mathematical coincidence. It indeed suggests that, at least qualitatively, the evolution of the structure function at low-x can be attributed to the development of an unhindered QCD parton shower in "free" phase space quite similar to that in e + e − . For F 2 this also follows essentially from the observation of DAS and the success of the dynamically generated GRV parton distributions. Whether a more profound explanation for the empirical regularity reported here exists, remains an interesting open question.

Summary

A striking similarity between the rise with energy ( √ s) of the charged particle multiplicity in e + e − and the rise of F 2 at HERA is observed. To the best of our knowledge, this similarity has not been noted before. For Q 2 ≥ 5 GeV 2 and 10 −4 < x < 0.05, the phenomenologically successful MLLA expression for the average multiplicity in e + e − collisions, with the transformation s → 1/x, and adding a QCD inspired Q 2 dependence, describes the HERA data on F 2 at small x very well. The result suggests that both deep inelastic small-x scattering and e + e − annihilation can be adequately described by angular ordered QCD radiation in an essentially free phase space.

Figure 1: Comparison of e + e − data on average charged particle multiplicities versus √ s and the HERA low-x F 2 data versus 2Q 1 / √ x, with Q 1 = 270 MeV, for Q 2 = 22 GeV 2 (ZEUS) and 25 GeV 2 (H1). The e + e − multiplicity data (solid lines) are represented by curves resulting from a phenomenological fit through the data [12].
The curves are normalized to the F 2 data for each Q 2 bin separately.
Cooperative Behavior in a Model of Evolutionary Snowdrift Games with $N$-person Interactions

We propose a model of evolutionary snowdrift game with $N$-person interactions and study the effects of multi-person interactions on the emergence of cooperation. An exact $N$-th-order equation for the equilibrium density of cooperators $x^*$ is derived for a well-mixed population using the approach of replicator dynamics. The results show that the extent of cooperation drops with increasing cost-to-benefit ratio and the number $N$ of interaction persons in a group, with $x^{*}\sim1/N$ for large $N$. An algorithm for numerical simulations is constructed for the model. The simulation results are in good agreement with theoretical results of the replicator dynamics.

The theme of how cooperative behavior emerges among competing entities has attracted the attention of physicists, applied mathematicians, biologists, and social scientists in recent years [1,2,3,4,5,6,7,8,9,10]. There are good reasons why physicists have shown much interest in this problem and have made contributions. The cooperative behavior is similar to that in interacting spin systems, and some important features, e.g., phase transitions and universality, which carry a heavy flavor of statistical physics, have also been observed in evolutionary models of cooperation with spatial structures [9,11]. Indeed, applying ideas in physics across different disciplines is a key characteristic of physics in the new millennium. A powerful tool to study cooperative phenomena is the theory of evolutionary games based on such basic models as the prisoner's dilemma (PD) [12,13,14] and the snowdrift game (SG) [15,16]. The basic PD is a two-person game [17,18], in which two players simultaneously choose one of two possible strategies: to cooperate (C) or to defect (D). If one plays C and the other plays D, the cooperator pays a cost of S = −c and the defector receives the highest payoff T = b (b > c > 0).
If both play C, each player receives a payoff of R = b − c > 0. If both play D, the payoff is P = 0. Thus, the PD is characterized by the ordering T > R > P > S of the payoffs, with 2R > T + S. In a single encounter, defection is the better action in a well-mixed or fully connected population, regardless of the opponents' decisions. Allowing for repeated encounters and evolution of characters could lead to cooperative behavior [12]. Due to practical difficulties in measuring the payoffs or even ranking the payoffs accurately [19,20], there are serious doubts on taking PD to be the most suitable model for studying emerging cooperative phenomena in a competing setting [21]. The evolutionary snowdrift game (ESG) has been proposed [21] as an alternative to PD and has attracted some recent studies [6,7,8]. The basic snowdrift game (SG), which is equivalent to the hawk-dove or chicken game [15,16], is again a two-person game. It is most conveniently described using the following scenario. Consider two drivers hurrying home in opposite directions on a road blocked by a snowdrift. Each driver has two possible actions -to shovel the snowdrift (cooperate (C)) or not to do anything (not-to-cooperate or "defect" (D)). If the two drivers cooperate, they could be back home on time and each will get a reward of b. Shovelling is a laborious job with a total cost of c. Thus, each driver gets a net reward of R = b−c/2. If both drivers take action D, they both get stuck, and each gets a reward of P = 0. If only one driver takes action C and shovels the snowdrift, then both drivers can get through. The driver taking action D (not to shovel) gets home without doing anything and hence gets a payoff T = b, while the driver taking action C gets a "sucker" payoff of S = b − c. The SG refers to the case of b > c > 0, leading to T > R > S > P . Thus, PD and SG only differ by the order of P and S in the ranking of the payoffs. 
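The payoff rankings above can be made concrete in a few lines. A minimal sketch (payoff entries taken directly from the text; the values b = 1, c = 0.6 are an arbitrary illustrative choice satisfying b > c > 0):

```python
# Payoff tables for the two-person games described in the text, using the
# standard labels T (temptation), R (reward), P (punishment), S (sucker).
def pd_payoffs(b, c):
    # Prisoner's dilemma: T = b, R = b - c, P = 0, S = -c
    return {"T": b, "R": b - c, "P": 0.0, "S": -c}

def sg_payoffs(b, c):
    # Snowdrift: T = b, R = b - c/2, P = 0, S = b - c
    return {"T": b, "R": b - c / 2, "P": 0.0, "S": b - c}

b, c = 1.0, 0.6
pd, sg = pd_payoffs(b, c), sg_payoffs(b, c)
assert pd["T"] > pd["R"] > pd["P"] > pd["S"]      # PD ordering T > R > P > S
assert 2 * pd["R"] > pd["T"] + pd["S"]            # with 2R > T + S
assert sg["T"] > sg["R"] > sg["S"] > sg["P"]      # SG ordering T > R > S > P
```

Only the relative position of S and P changes between the two games, which is the "seemingly minor difference" discussed next.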
This seemingly minor difference leads to significant changes in the cooperative behavior, when evolution of characters is introduced. Following replicator dynamics [14], there exists a stable state with coexisting cooperators and defectors in SG for a well-mixed population. More interestingly, it was found that spatial structures tend to suppress the extent of cooperation in ESG [21], in contrast to the common belief that spatial structure constitutes a favorable ingredient for cooperation [22,23]. Most models of evolutionary games proposed so far for studying cooperative phenomena, including those with competitions among a group of entities, involve only two-person interactions. In reality, multi-person interactions are abundant, especially in biological and social systems. A representative model is the so-called public goods game (PGG) [24], for studying group interactions in experimental economics. The PGG considers an interacting group of N agents or players. Each player either contributes a public good of value b at a cost c with 0 < c < b, or does nothing at no cost. With n cooperators in the group, the total contributions Rnb are divided evenly among all players in the group, where R (R < N ) is called the public good multiplier. Thus a cooperator will get a benefit of Rnb/N − c, and a defector gets Rnb/N without doing anything. Obviously, in a one-shot PGG, defectors outperform cooperators, leading to a Nash equilibrium where all players are defectors. For N = 2, PGG reduces to PD and thus PGG represents an N-person prisoner's dilemma game. Motivated by the recent works on ESG and PGG, we propose and study an N-person interacting model of SG. We refer to our model as the N-person evolutionary snowdrift game (NESG). The key question is how cooperation is affected by allowing for N-person interactions. The evolution of cooperative behavior in the NESG is studied analytically within the framework of the replicator dynamics [14].
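The one-shot PGG payoffs quoted above can be sketched directly; the parameter values below (N = 5, R = 3, b = c = 1) are arbitrary illustrative choices with R < N:

```python
# One-shot public goods game payoffs as given in the text: with n cooperators
# in a group of N, multiplier R < N, each cooperator gets R*n*b/N - c and each
# defector gets R*n*b/N.
def pgg_payoffs(n, N, R, b, c):
    share = R * n * b / N
    return share - c, share   # (cooperator payoff, defector payoff)

N, R, b, c = 5, 3.0, 1.0, 1.0
for n in range(1, N + 1):
    coop, defect = pgg_payoffs(n, N, R, b, c)
    assert defect - coop == c   # defectors outperform cooperators by c, always
```

Since a defector is better off by exactly c for any composition of the group, the all-defector state is the one-shot Nash equilibrium, as stated above.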
For arbitrary interacting group size N , an exact N -th-order equation for the equilibrium frequency or fraction of cooperators x * (r) is derived for a well-mixed population, where r = c/b is a parameter that characterizes the cost-to-benefit ratio in SG. The equation can be solved numerically for x * as a function of r for any N . As the size of the interacting group increases, cooperation in NESG decreases and x * ∼ 1/N for large N . These results are checked against results obtained by numerically simulating the evolutionary dynamics and good agreement is found. The N -person evolutionary snowdrift game is defined as follows. Consider a system consisting of N all agents. In an N -person game, an agent competes with a group of N − 1 other agents. Depending on the situation, the interacting group of N agents can be chosen at random among the N all agents, as in the case of a well-mixed population, or defined by an underlying geometry, as in the case of a regular lattice or other networks. There is a task to be done and every agent will get a reward of b if it is completed by one or more agents within the group. The total cost of performing the task is c, which could be shared among those who are willing to cooperate. The payoff of an agent thus depends on (i) the character of the agent and (ii) the characters of his N − 1 competing agents. Here, we will focus on the case of a well-mixed population. For an agent of C-character, his payoff depends on the number of C-character agents in the interacting group including himself. The C-character agents are those who are willing to share the labor in completing the task. If the agent under consideration is the sole C-character agent in the group, then his payoff is b − c. If there are two C-character agents, then his payoff is b − c/2, and so on. Thus, a C-character agent in an N -person snowdrift game has a payoff of

P C (n) = b − c/n,  (1)

where n is the number of C-character agents in the group of N agents including the agent concerned.
For an agent of D-character, his payoff depends on whether there is a C-character agent in the group. As long as there is one, the task will be completed and the D-character agent will get a payoff of b without doing any work. When there is no C-character in the group, then his payoff vanishes since the group has N D-character agents and no one is willing to perform the task. Thus, a D-character agent in an N -person snowdrift game has a payoff of

P D (n) = b for n ≥ 1, and P D (0) = 0,  (2)

where n is the number of C-character agents in the group. As evolution proceeds in NESG, the numbers of C-character and D-character agents become time-dependent. The model is original. It is different from the previous models in which the payoffs are typically evaluated by summing up the payoffs of two-player games, for a player competing with a number of other players. There are many real-life situations where pairwise interactions are inapplicable. We give two examples here where N-person interactions are more appropriate. (i) In a public construction project such as a bridge, a school or a road that serves a small remote community, everyone in the neighborhood will benefit (b) and the cost (c) can be shared by those who are willing to contribute. (ii) A place such as a classroom, a dormitory or a student common room needs to be cleaned regularly with a labor of cost c, and every user will get a benefit b from the cleanliness. Certainly, more realistic modelling will require additional parameters, e.g., more incentives for carrying out the task in the form of long term returns. Here, we study the simplest version as the model can be treated analytically and thus provides insight into the extent of cooperation as a function of the parameters r and N in the model. The evolutionary behavior in NESG in a well-mixed population is introduced through the replicator dynamics [14]. The frequency of cooperation is x(t) = N C (t)/N all , where N C (t) is the number of C-character agents in the population at time t [6,21].
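The two payoff rules just described translate directly into code. A minimal sketch (the function names are ours, not the paper's):

```python
# The NESG payoffs described above, written as functions.
def payoff_C(n, b, c):
    # Cooperator: shares the total cost c with the n cooperators in the group
    # (n counts the agent itself, so n >= 1).
    return b - c / n

def payoff_D(n, b, c):
    # Defector: free-rides on any cooperator in the group; n is the number of
    # C-character agents among the N group members.
    return b if n >= 1 else 0.0

b, c = 1.0, 0.6
assert payoff_C(1, b, c) == b - c       # sole cooperator
assert payoff_C(2, b, c) == b - c / 2   # cost shared by two cooperators
assert payoff_D(0, b, c) == 0.0         # nobody performs the task
assert payoff_D(3, b, c) == b           # free ride
```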
The time evolution of x(t) is governed by the following differential equation [14]:

ẋ = x (f C − f̄),  (3)

where f C (t) (f̄(t)) is the instantaneous average fitness of a C-character agent (of the whole population). These quantities are equivalent to the corresponding average payoffs in the case of strong coupling [25]. In the well-mixed case, interacting groups of N agents are randomly chosen. The fitness f C , which is in general time dependent, is determined as follows according to the binomial sampling [25]:

f C = Σ_{j=0}^{N−1} C(N−1, j) x^j (1 − x)^{N−1−j} (b − c/(j+1)),  (4)

where C(N−1, j) is the binomial coefficient. This takes into account the various combinations of the characters of an agent's N − 1 neighbors: the first three factors in the sum give the probability of having (j+1) C-character agents in the group of N agents. Similarly, the instantaneous average fitness f D (t), or the average payoff of a D-character agent, is given by

f D = b [1 − (1 − x)^{N−1}].  (5)

These expressions amount to a mean field approach. In Eq. (3), the dynamics of cooperation is that x(t) will increase (decrease) if the fitness f C is greater (smaller) than the instantaneous average fitness f̄(t) of the whole population. The latter is defined by

f̄ = x f C + (1 − x) f D .  (6)

Substituting Eq. (6) into Eq. (3), the dynamics of x(t) is governed by

ẋ = x (1 − x) (f C − f D ).  (7)

Although it is possible to solve the time evolution of x(t), we will instead focus on the steady state. After the transient behavior, the system evolves into a steady state, i.e., the Nash Equilibrium, in which ẋ = 0. It follows from Eq. (7) that the steady state or equilibrium frequency of cooperation x * satisfies

f C (x * ) = f D (x * ), with 0 < x * < 1.  (8)

Substituting Eqs. (1) and (2) into Eqs. (4) and (5) gives f C and f D in terms of N , b and c. Equation (8) for x * can then be expressed as

c Σ_{j=0}^{N−1} C(N−1, j) (x * )^j (1 − x * )^{N−1−j} /(j+1) = b (1 − x * )^{N−1}.  (9)

Using the identity

C(N−1, j)/(j+1) = C(N, j+1)/N,  (10)

we have

Σ_{j=0}^{N−1} C(N−1, j) x^j (1 − x)^{N−1−j} /(j+1) = (1/(N x)) Σ_{k=1}^{N} C(N, k) x^k (1 − x)^{N−k},  (11)

and thus the relation

Σ_{j=0}^{N−1} C(N−1, j) x^j (1 − x)^{N−1−j} /(j+1) = [1 − (1 − x)^N ]/(N x).  (12)

Applying Eq. (12) to Eq. (9), we find

r [1 − (1 − x * )^N ] = N x * (1 − x * )^{N−1},  (13)

which is an N -th-order equation for x * (r, N ) in the steady state, where r = c/b. Note that the size of the population N all does not enter, as the analysis assumes an infinite population following the mean field spirit. For N = 2, Eq. (13) reduces to x * = 2(1 − r)/(2 − r), the known result of the standard two-person evolutionary SG in a well-mixed population [6,21].
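The steady-state condition can be solved numerically by bracketing the root of f_C − f_D, with the fitnesses written out from the binomial-sampling prescription described above. A sketch (function names are ours; SciPy's brentq is used as a generic root finder):

```python
# Numerical solution of the steady-state condition f_C(x*) = f_D(x*) for the
# well-mixed NESG.
from math import comb
from scipy.optimize import brentq

def f_C(x, N, b, c):
    # binomial sampling over the N-1 neighbors; j cooperators among them
    return sum(comb(N - 1, j) * x**j * (1 - x)**(N - 1 - j) * (b - c / (j + 1))
               for j in range(N))

def f_D(x, N, b, c):
    # a defector earns b unless all N-1 neighbors defect
    return b * (1 - (1 - x)**(N - 1))

def x_star(N, r, b=1.0):
    # f_C - f_D is positive near x = 0 (value b - c) and negative near x = 1
    # (value -c/N), so an interior root is bracketed in (0, 1).
    return brentq(lambda x: f_C(x, N, b, r * b) - f_D(x, N, b, r * b),
                  1e-12, 1.0 - 1e-12)

# Two-person limit: replicator fixed point of the standard SG, x* = 2(1-r)/(2-r)
assert abs(x_star(2, 0.5) - 2.0 / 3.0) < 1e-9

# Large-N behavior: N * x* tends to a constant, i.e. x* ~ 1/N
scaled = [N * x_star(N, 0.5) for N in (10, 20, 40, 80)]
```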
For N ≥ 5, Eq. (13) can be solved numerically for x * (r, N ). Figure 1 shows the results (lines) of x * (r) for N = 2, 3, 5, 10. We note that x * (r) decreases as r increases for arbitrary N , with a more rapid drop as r increases for larger values of N . This indicates that the incentive for being a cooperator drops as r and N increase, and agents tend to wait for someone else to perform the task and enjoy a free ride. For a given r, the dependence of x * on N is shown in Fig. 2 on a log-log scale. The results (lines) show that x * decreases with increasing N , with a power-law of exponent −1 for large N . Analytically, the large N behavior can be extracted by taking the small x * limit of Eq. (13). We find

r (1 − e^{−N x * }) = N x * e^{−N x * },  (14)

from which x * ∼ 1/N for large N follows. As a supplement and to verify the results using the replicator dynamics, we also perform numerical simulations on NESG. The algorithm goes as follows. An agent in a total population of N all agents can take on either the C-character or D-character. The initial characters of the agents are assigned randomly. At each time step, an agent i is randomly chosen and a group of N − 1 other agents are randomly chosen among the N all − 1 agents to compete with i. Depending on the character of agent i, his payoff P i is evaluated according to Eq. (1) or Eq. (2). Evolution of the character of agent i is introduced by comparing his payoff with that of another agent j, which is again randomly chosen. For the chosen agent j, he would compete with a randomly chosen group of N − 1 agents and his payoff is P j . If P i is less than P j , the character of agent i will be replaced by that of agent j with a probability (P j − P i )/b. If P i ≥ P j , the character of agent i remains unchanged. The results from numerical simulations (symbols in Fig. 1 and Fig. 2) are in good agreement with the analytic results based on the replicator dynamics.
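The simulation algorithm above can be implemented directly. A minimal sketch (the population size, parameter values, and the time-averaging over the second half of the run are our choices, not the paper's):

```python
# Direct implementation of the NESG simulation algorithm described above.
import random

def simulate(N_all, N, b, c, steps, seed=1):
    rng = random.Random(seed)
    chars = [rng.random() < 0.5 for _ in range(N_all)]   # True = C-character
    n_C = sum(chars)

    def sample_group(i):
        # N - 1 distinct opponents drawn at random, excluding agent i
        group = set()
        while len(group) < N - 1:
            k = rng.randrange(N_all)
            if k != i:
                group.add(k)
        return group

    def payoff(i):
        n = sum(chars[k] for k in sample_group(i)) + chars[i]
        if chars[i]:
            return b - c / n                  # Eq. (1): cooperators share c
        return b if n >= 1 else 0.0           # Eq. (2): free ride or nothing

    acc = cnt = 0
    for t in range(steps):
        i, j = rng.randrange(N_all), rng.randrange(N_all)
        P_i, P_j = payoff(i), payoff(j)
        # adopt j's character with probability (P_j - P_i)/b when P_i < P_j
        if P_i < P_j and chars[i] != chars[j] and rng.random() < (P_j - P_i) / b:
            chars[i] = chars[j]
            n_C += 1 if chars[i] else -1
        if t >= steps // 2:                   # time-average the second half
            acc += n_C
            cnt += 1
    return acc / (cnt * N_all)

x_avg = simulate(N_all=500, N=2, b=1.0, c=0.5, steps=100_000)
```

For N = 2 and r = 0.5 the time-averaged cooperator fraction should fluctuate around the replicator fixed point x* = 2/3.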
The approach to constructing a proper simulation algorithm will also be useful in studying variations of the model for which analytic approaches fail. In summary, we have proposed and studied an evolutionary snowdrift game with N -person interactions. We derived an exact N -th-order equation for the equilibrium frequency x * (r, N ) of cooperators in a well-mixed population using the approach of replicator dynamics. The results show that the level of cooperation drops as r increases. For fixed r, x * drops with the number N of interacting persons in a group and scales as x * ∼ 1/N for large N . We also constructed a numerical algorithm to simulate the model. The simulation data are in good agreement with the analytic results of the replicator dynamics. Further extension of NESG to include the effects of spatial structures such as regular lattices and complex networks will be interesting.
High-performance printed electronics based on inorganic semiconducting nano to chip scale structures

Printed electronics (PE) is expected to revolutionise the way electronics will be manufactured in the future. Building on the achievements of the traditional printing industry, and the recent advances in flexible electronics and digital technologies, PE may even substitute for conventional silicon-based electronics if the performance of printed devices and circuits can be at par with silicon-based devices. In this regard, inorganic semiconducting materials-based approaches have opened new avenues, as printed nano (e.g. nanowires (NWs), nanoribbons (NRs) etc.), micro (e.g. microwires (MWs)) and chip (e.g. ultra-thin chips (UTCs)) scale structures from these materials have been shown to have performance at par with silicon-based electronics. This paper reviews the developments related to inorganic semiconducting materials-based high-performance large area PE, particularly using the two routes, i.e. Contact Printing (CP) and Transfer Printing (TP). A detailed survey of these technologies for large area PE onto various unconventional substrates (e.g. plastic, paper etc.) is presented along with some examples of electronic devices and circuits developed with printed NWs, NRs and UTCs. Finally, we discuss the opportunities offered by PE, and the technical challenges and viable solutions for the integration of inorganic functional materials into large areas, 3D layouts for high throughput, and industrial-scale manufacturing using printing technologies.

Introduction

There is growing interest in developing large-area flexible electronics for applications across numerous sectors, including wearables, robotics, consumer electronics, and healthcare [1][2][3][4][5][6].
Flexible electronics has several advantages such as conformability to different shapes, which make it indispensable for the above application areas where electronic devices are needed on unconventional substrates to either conform to curvy surfaces or to degrade naturally [7][8][9][10][11][12][13][14][15][16][17][18][19][20]. Accordingly, significant research efforts are being made to develop electronic devices and systems with flexible form factors and novel manufacturing technologies [9,11,[21][22][23][24][25][26][27][28][29][30]. These range from integrating off-the-shelf electronic devices on flexible printed circuit boards to printing functional inks and materials to realise active devices and circuits [21,31]. Among these technologies, printed electronics (PE), defined as the printing of circuits on diverse planar and non-planar substrates such as paper, polymers and textiles, has seen rapid development motivated by the promise of low-cost, high volume, high-throughput production of electronic devices [22]. The growth in the number of publications related to PE in recent times (Fig. 1) indicates this advancement. For more than five decades, silicon-based conventional electronics has dominated the high-end electronics market. In this regard, inorganic semiconducting materials have opened new avenues through printed nano (e.g. nanowires (NWs), nanoribbons (NRs)), micro (e.g. microwires (MWs)) and chip (e.g. ultra-thin chips (UTCs)) scale structures [25,26,[42][43][44][45][46][47][48][49][50][51][52][53]. These nano/micro/chip scale structures are typically single-crystalline and result in high-performance electronics. For instance, mobilities (in cm 2 /V·s) of about 270 for n-type Si-NWs [54], 300 for p-type Si NWs, 730 for Ge/Si core/shell NWs [55] and 660 for Si NRs [46] have been demonstrated.
The transfer printing (TP) and contact printing (CP) methods developed to transfer or print these structures onto flexible substrates could address the traditional thermal budget issue associated with inorganic semiconducting materials, i.e., the high-temperature processing requirements that make it difficult to fabricate devices directly on flexible polymeric substrates [26,29,47,49,56]. Figure 2 shows a qualitative comparison of these technologies in terms of fabrication cost and device performance along with their advantages and limitations. The possibility of high performance, at par with silicon-based electronics, from printed inorganic semiconducting materials-based devices has revived the discussion about PE as a substitute for conventional silicon-based electronics [57,58]. With combined features such as low fabrication cost and high performance, inorganic semiconducting materials-based PE could provide a means to implement innovative solutions such as large area ultra-thin electronic skins (eSkin) for ubiquitous computing and pervasive context-sensitive inter-object interaction [21,31,[59][60][61]. PE also leads to less material wastage, which could help to reduce electronic waste (e-waste) and potentially allow reuse of some of the electronic materials (e.g. conductive inks), opening new avenues towards circular electronics. Considering these developments, it is an opportune time to review the latest advances in PE technologies and the new opportunities they offer through high-performance devices. Most of the surveys on PE so far have focussed on organic materials-based electronics and the low-end applications from them [62][63][64][65][66]. A few reviews have also focused on inorganic nanomaterial synthesis [61,[67][68][69][70], printing technologies [22,71,72], transfer printing of either chip scale UTCs [49] or inorganic micro/nanostructures [73] and applications [29,49,73].
For high-performance electronics, these works mainly focus on the high mobility of the inorganic materials. While this is an important factor, technological parameters such as channel lengths and ohmic junctions etc. also influence the performance of devices and require more attention. Considering these factors and the application potential of high-performance PE, this paper provides a thorough review of printing technologies for nano to chip scale inorganic semiconducting structures, mainly in 2D layouts. High-performance soft electronic devices and circuits from the assembly of a multitude of advanced inorganic materials of various dimensions, including nano (NWs, NRs, NMs etc.), micro (MWs), and chip scale (UTCs), are presented. Further, we discuss the potential of these techniques for two-dimensional materials (Graphene) and simultaneous printing of multi-materials (heterogeneous integration) in three dimensional (3D) layouts. This paper is organized into five sections: In section II, we briefly present the synthesis of nano to chip or macro (e.g. wafer) scale inorganic elements including NWs, NRs, NMs and UTCs. Section III describes the CP and TP technologies for printing NWs, NRs, NMs and UTCs etc. In Section IV, we present a few examples of printed inorganic materials based flexible electronic devices. We conclude the review in section V, where we summarise the key developments and present an overview of the main challenges for high-performance PE along with potential solutions.

Fig. 1 Year-wise number of publications in printed electronics. The data for these plots were taken from the Web of Science by using relevant keywords (e.g., Printed Electronics). The figure also shows the number of publications in printed flexible electronics. (Dahiya et al., Nano Convergence (2020) 7:33)
Printable inorganic nano to macro scale structures: fabrication methods and techniques

The inorganic materials-based nano to macro scale structures (sub-100 nm to wafer scale) discussed above could be fabricated using either bottom-up or top-down approaches through a wide variety of physical and chemical techniques, as summarised in Fig. 3 [70,[74][75][76][77][78]. The developed techniques largely aim to produce structures with precisely controlled dimensions and chemical composition, which are crucial for the development of novel flexible devices (e.g. FETs [79,80], light-emitting diodes [81], thermoelectric [82] and piezoelectric nanogenerators [83][84][85]).

Nanoscale structures

Various top-down and bottom-up approaches have allowed large area synthesis of III-V NWs, metal oxide NWs and IV NWs at wafer scale [73]. This section briefly describes the fabrication and growth of inorganic NWs under different conditions (dry and wet etching, high temperature ambient etc.) in the dimensional range from a few nm to areal coverage larger than wafers.

Top-down approach

The top-down approach is commonly carried out by selective etching of single crystalline wafers using wet chemicals or plasma processes. The process starts with a wafer such as Si, Ge, GaAs, which determines the final crystallinity and chemical composition of the structures (NWs, NMs, NRs etc.) to be printed. The NWs are produced by employing nano-patterning techniques such as lithography (optical and e-beam) [25,86,87], nanosphere based patterning [88,89], laser interference lithography [90,91], etc. The etching could be done using a dry plasma process [92,93] or acid-based wet chemical etching [25,86,87,91]. Dry etching methods have merits such as high precision, uniformity over wafer scale, scalability etc., but they could also lead to stress generation due to high energy plasma and isotropic lateral etching issues. Alternatively, HF acid-based metal assisted chemical etching (MACE) (Fig.
4a) of Si is the most cost-effective technique to date for the fabrication of sub-100 nm Si nanostructures [88,89]. The selective etching of the Si wafer could be carried out using patterns produced with nanosphere lithography (Fig. 4a). These NWs could be printed on various substrates using the CP technique to obtain the nanoscale electronic layers, which are eventually used for devices. Using top-down means, synthesis of semiconducting NWs of different materials such as Si [25,88], InP [94], GaN [95], GaAs [86,87,96] etc. has been demonstrated. Top-down methods have also been used to obtain horizontally aligned NRs/NMs from SOI wafers with thickness ranging from a few nm to a few tens of nm and lateral dimensions between a few tens of µm to mm. Their fabrication process involves anisotropic wet or dry etching of selected exposed regions on the top side of the Si wafer, and then undercut removal of the buried oxide (BOX) with hydrofluoric acid to release Si NRs or NMs structures [46,[97][98][99][100]. Figure 4b illustrates one such example where the optical lithography and wet chemical etching steps are followed to obtain Si NRs from a commercial SOI wafer [46]. This technique produces horizontal arrays of NRs over SOI source wafers and these are eventually transfer printed over flexible substrates. Figure 4b also illustrates the steps for achieving ohmic contacts on NRs/NMs by selective doping, which is critical for achieving high device performance (discussed in later sections). The top-down approach has also been used to develop NWs/NRs from bulk wafers (Fig. 4c) [101][102][103], however, the dimensional control with SOI wafers is much better. Many compound semiconducting materials including GaAs [104][105][106][107], GaN [108], and InP [109] have been obtained in a conceptually similar manner to that of Si shown in Fig. 4b-c. The major limitation of the top-down approach is the requirement of single crystalline wafers, which are not available for many technologically important III-V and oxide semiconductors.

Fig. 4 Schematic representation of the growth of nanoscale structures via top-down methods. a Silicon nanowires using the metal assisted chemical etching (MACE) process. Schematics adapted from [59]. b Schematic diagram illustrating stages of the Si NRs fabrication and selective doping. (i) The source wafer consists of a 70 nm thick layer of active Si < 100 >, on top of 2 µm of BOX, supported by 600 µm bulk Si. (ii) The Si NRs' geometry is defined by a conventional UV lithography procedure: photoresist is spin coated and soft baked, the samples are exposed to a UV source, and subsequently the NR definitions are developed. (iii) Dry etching is performed by reactive ion etching (RIE) using a combination of CH 3 /O 2 gas sources, to finalize the NRs structure after photoresist removal using acetone and IPA. (iv) The first step of selective n + type doping of the active regions (source/drain for FETs) is plasma enhanced chemical vapour deposition (PECVD) of a SiOx layer. (v) The SiOx barriers over the active regions are etched away by dry etching. (vi) A spin-on dopant method with phosphorus is used to create ohmic contacts at the source and drain regions on the source wafer while the channel is masked by SiOx, which serves as a barrier. A wet etching method using hydrofluoric acid is performed to remove both the dopant diffusion barrier layer and the buried oxide layer, releasing the NRs from the bulk wafer and allowing the selectively doped NRs to be transfer printed to any flexible substrate. Adapted with permission from [46]. c Schematic illustration of the fabrication of NWs/NMs by use of anisotropic wet-chemical etching techniques applied to bulk wafers.
Bottom-up approach

The bottom-up approaches have been widely used to grow single crystalline 1D materials using atomic and molecular precursors, exploiting well known physical and chemical techniques (chemical vapour deposition (CVD), vapour phase transport (VPT), supersaturated solutions etc.). The major advantage of the bottom-up approaches is their ability to precisely tune crystallinity and composition during the growth process. Synthesis strategies, including catalyst particle assisted vapour-liquid-solid (VLS) [75,[110][111][112], vapour-solid (VS) [113][114][115][116], vapour-solid-solid (VSS) [117][118][119][120][121], and low-temperature solution-based processes such as hydrothermal [74,79,83], have been widely exploited for growing inorganic NWs. The VLS growth of NWs (Fig. 5a) is one of the notable methods to produce single crystalline and composition controlled semiconducting NWs in the sub-100 nm regime. The VLS mechanism offers many advantages such as single crystallinity, in situ composition control, site specific growth, precise dimensional control in the sub-100 nm regime and the ability to produce heterostructures (core-shell and axial) with sharp interfaces. This wide process variability offers many advantages for the development of high-performance PE. The VLS growth process produces vertically oriented high aspect ratio NWs over rigid substrates which are compatible with contact and roll-to-roll (R2R) printing [59]. Bottom-up methods have also been successfully used for 2D nanomaterials, which are key components for flexible large area electronics. For example, the growth of monolayer (ML) graphene over a Cu substrate under CVD conditions (Fig. 5b) (temperature over ~ 1000 °C and methane (CH 4 ) source) has been demonstrated over areas ranging from a few mm to several centimetres [122]. The large area graphene has been seamlessly transferred over flexible PVC substrates using a solution-based transfer process (Fig. 5b).
Bottom-up approaches largely use high-temperature (> 600 °C) processing conditions to produce the high-quality inorganic materials needed for high-performance flexible electronics. Various printing technologies bridging the high-temperature growth processes with low-temperature device fabrication steps are discussed later in Sect. 3.

Microscale structures

Just like the NMs and NRs, microscale structures have been successfully fabricated from SOI wafers using top-down methods, as described above.

Chip scale structures: ultra-thin chips

The chip scale or macroscale structures can be anything with dimensions > 100 μm that can be clearly seen without any microscopy imaging tools. UTCs are typical examples of macrostructures that are obtained by physical or chemical removal of bulk silicon through top-down approaches [49]. Various methodologies used for obtaining UTCs include grinding, controlled spalling technique (CST), dry etching, and wet etching [26,49,125-131]. Briefly, CST is a slim-cut process in which a thin silicon layer is removed or exfoliated from the bulk silicon chip: a tensile Ni stressor layer is deposited on the bulk chip and a shear force is applied to generate a stress-induced fracture parallel to the surface of the bulk chip [127]. UTCs obtained through the CST process suffer from deterioration in device carrier concentration and mobility due to the stress induced during the process [126]. In addition, it is challenging to completely remove the stressor layer. Another popular and broadly utilized thinning approach is the back-grinding technique, in which a grinding wheel is used to physically remove the back/bottom side of the silicon, reducing the thickness of bulk silicon down to less than 10 μm within a few minutes [130,132]. However, the stress induced during the grinding process can alter the silicon crystalline structure, resulting in undesirable warping.
Alternatively, dry etching can be used to physically dislodge silicon atoms from the bulk chip through high-energy reactive ions, which has proven to be a stress-relieving technique that reduces surface damage [128]. In this case, high cost, low throughput, and practical difficulties are the critical issues. To overcome the cost issue, wet etching processes using either tetramethyl ammonium hydroxide (TMAH) or potassium hydroxide (KOH)-based etchants have been widely adopted [26,125,133]. The concentration of the etchant and the etching temperature play an important role in controlling the etch rate [130]. A detailed discussion and comparison of various UTC technologies is given elsewhere [49]. The transfer process of UTCs is presented in the next section.

Technologies for printing of inorganic semiconductor-based nano to macro scale structures

This section describes the two main technologies that have been explored for printing inorganic semiconductor-based nano to macro scale structures to obtain high-performance PE: contact printing (CP) and transfer printing (TP).

Contact printing

In the CP technique, the nanostructures (usually NWs) come into direct contact with the receiver substrate [22,25,42]. Generally, CP is a dry transfer technique suitable for printing any type of bottom-up and/or top-down synthesised vertical NWs. However, modified CP methods have also been developed with controlled surface functionalization (e.g., with −NH2 and −N(Me)3+ terminated monolayers) and/or patterning of the receiver substrate [44,134]. This method shows excellent density and alignment control of printed NWs, demonstrating the great potential of the CP approach for large-area manufacturing in an R2R manner [59], as shown in Fig. 6. In CP, the as-grown NWs, on their respective donor substrates, are brought into physical contact with the flexible receiver substrate.
Then, the donor substrate with vertically grown NWs is pressed against the receiver, and sliding along one direction leads to transfer of the nanostructures (Fig. 6a). The advantage of CP is that the printed NWs are transferred onto the receiver substrate in an aligned manner, dictated by the direction of sliding, on both rigid and flexible substrates. This alignment is enabled by the shearing force generated during sliding and is favourable for the fabrication of large area device arrays with uniform high performance. Specifically, improved NW alignment reduces the variation in NW density across the substrate as well as the overlapping of adjacent NWs. The usual 'grow-harvest-suspend' route [135,136] for transferring NWs is not compatible with CP, as the conventional approaches it involves (electric-field-directed assembly, bubble-blown techniques, Langmuir-Blodgett, etc.) suffer from poor scalability (about mm scale), low density, and non-uniformity. CP allows transfer and alignment in a single step, resulting in a simple process and allowing the use of donors with non-aligned NWs. The method can also be used for printing NRs and potentially for small (mm scale) flexible chips [46]. It is important to understand the mechanism of CP in order to control the printing process and obtain the desired results, such as NW density and alignment. CP has three main steps: (i) NW bending; (ii) alignment of NWs by the applied shear force; and (iii) breakage and transfer of NWs upon anchoring to the receiver substrate through surface chemical and physical interactions [25,137], as shown in Fig. 6a. Studies of the printing mechanism of NWs as a function of NW aspect ratio, NW material, and applied contact pressure have helped to find the safe range of contact pressures (i.e. without reaching the fracture limit of a single NW) for printing while preserving the maximum original length of the NWs [25].
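The fracture-limited pressure window described above can be illustrated with elementary cantilever beam theory, a simplification with assumed, hypothetical dimensions rather than the full multiphysics analysis of [25]: for a NW of diameter D and length L deflected by δ at its tip, the maximum surface strain appears at the root, ε_max ≈ 3Dδ/(2L²), valid for small deflections (δ ≪ L).

```python
# Minimal sketch, not the FEM model of [25]: root strain of an end-loaded
# cantilever NW, eps_max = 3*D*delta / (2*L^2).  All numbers are assumed.

def root_strain(diameter: float, length: float, deflection: float) -> float:
    """Maximum surface strain at the NW root (small-deflection beam theory)."""
    return 3.0 * diameter * deflection / (2.0 * length ** 2)

D, L = 30e-9, 10e-6      # 30 nm diameter, 10 um long NW (hypothetical)
eps_fracture = 0.05      # ~5 % fracture strain, order of magnitude for Si NWs

for delta in (1e-6, 3e-6, 5e-6):   # tip deflections well below L
    eps = root_strain(D, L, delta)
    status = "safe" if eps < eps_fracture else "fracture"
    print(f"delta = {delta * 1e6:.0f} um -> eps_max = {eps:.2e} ({status})")
```

Because ε_max scales as D/L², thin, high-aspect-ratio NWs tolerate large deflections, which is consistent with the need for a continuous bending stroke to actually reach the fracture strain at the root.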
Using COMSOL Multiphysics, these studies identified several of the key parameters responsible for efficient transfer of NWs, including: (i) the maximum strain (ε_max) and maximum stress (σ_max) regions along the NW body; (ii) the dependence of ε_max and σ_max on the NW deflection (δ); and (iii) the dependence of δ on the NW diameter (D). This in-depth study also revealed that the contact printing mechanism requires a continuous and progressive bending of the NWs to reach the fracture strain close to the root of the NW. This can be achieved by using both a constant contact pressure between the donor and receiver substrates and a micrometric sliding stroke. In practice, the use of NWs of different materials and sizes requires extensive optimisation following the simulation analysis. Equipment that can carry out the CP process for different types of NW donor substrates must offer precise control over the printing parameters [25]. Optimisation is also required in relation to the receiver substrates, with softer flexible materials being damaged by the large lateral and shear forces applied to detach the NWs from the growth substrate. When considering large area printing, uniformity of the printed array is key, as it directly influences the performance variation across devices. Donor and receiver substrate alignment plays a big role in achieving a uniform print [25]. With most NWs being synthesized on flat rigid substrates, optimal plane-to-plane alignment is challenging, and R2R approaches could provide better results. The NW CP technique has recently been advanced with the addition of a new alignment method called "combing" [138,139]. In this process, NWs are anchored to defined areas of a substrate and then drawn out over chemically distinct regions of the substrate (Fig. 6b).
This technique has two coexisting steps, NW anchoring and NW directional alignment, which are also present in conventional CP. However, the combing technique allows both processes to be observed and controlled individually. While the anchoring force is necessary during the NW printing process, it can dramatically hinder the directional alignment [138,139]. For example, the high crossing-defect density and the difficulty of realizing precise registration and positioning of single NWs in predefined positions are mainly associated with the use of excessive anchoring forces [42]. The combing method has demonstrated good potential to overcome these drawbacks of CP, successfully reducing the crossing-defect density down to 0.04 NWs/μm by tuning both the anchoring and combing forces.

Fig. 6 a Contact printing mechanism [25]. b Combing mechanism [138]. c Vision of an R2R production line for NW-based functional circuits on large-area flexible electronics [59]. d Mechanism of transfer printing [46], also showing how the transfer printing approach enables the transition from high-temperature to room-temperature processing steps. e Concept of roll transfer printing technology

In terms of realizing single NW devices, the combing method shows great advantages over traditional CP. By controlling the predefined anchor window, the success rate for the realization of a single NW device is ~ 90%: significantly higher than the success rate (~ 60%) achieved using conventional CP [42]. It should also be noted that, while the combing method gives higher control of NW alignment (~ 98.5% of the NWs aligned to within 1% of the combing direction), the resulting NW density is not comparable to that of the conventional CP process (~ 9 NWs/μm) [42]. Consequently, the combing method shows more potential for the realization of single-NW-based devices, while the conventional CP method could be used to achieve high NW density over large areas with better alignment.
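The practical impact of the per-device success rates quoted above (~ 90% for combing vs ~ 60% for conventional CP) compounds quickly at array scale. Assuming independent failures, an idealization, the yield of an N-device array is simply p^N:

```python
# Array yield under independent per-device failures (an assumed idealization).
# Per-device success rates follow the ~90 % (combing) vs ~60 % (CP) figures.

def array_yield(p_device: float, n_devices: int) -> float:
    """Probability that all n_devices in an array work."""
    return p_device ** n_devices

for n in (1, 10, 50):
    print(f"N = {n:2d}: combing {array_yield(0.90, n):.2e}, CP {array_yield(0.60, n):.2e}")
```

Even modest per-device gains therefore dominate at scale, which is why tuning the anchoring and combing forces matters for anything beyond single-device demonstrations.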
Using CP, modified CP, and combing techniques, printed devices, circuits, and systems based on NWs of high-mobility materials have been explored. These examples are discussed in Sect. 4. The CP method also shows good potential for roll-to-roll (R2R) manufacturing. For example, rolls of NWs (i.e. NWs on cylindrical substrates) could be used for printing electronic layers at defined locations, as exemplified in Fig. 6c. Alternatively, the NWs could be on a planar donor substrate, with the receiver substrate wrapped around the cylindrical rolls. The synthesis of NWs on tubes of glass, quartz, and stainless steel using bottom-up methods has been demonstrated in the past [162]. By using such rolls in differential roll-printing [162] and roll transfer-printing [163] settings, the contact printing approach can be extended to R2R-type printing. The example of an R2R process for 1T-1M structures, shown in Fig. 6c, could be the building block for neuromorphic architectures [59,140].

Transfer printing

The TP technique enables the transfer of laterally aligned structures or inks from a donor substrate to a receiver substrate, generally using a soft elastomeric stamp. TP provides a promising route to fabricate bendable electronic devices at large scale using semiconducting NWs, NRs, and UTCs [26,46,96,141].

Transfer printing of nano and micro scale structures

The TP technology was primarily explored to overcome the manufacturing problems (e.g. thermal budget issues) associated with the use of traditional microfabrication processes for flexible electronics. The concept and mechanism of TP are explained in Fig. 6d. The processing steps that require high temperatures are first carried out on a Si wafer (Fig. 4b-c), which can withstand them; the micro/nanostructures (NWs, NRs, NMs, etc.) are then picked up (step 1) and transferred to flexible receiver substrates (step 2), where further low-temperature fabrication steps are carried out.
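The switching between pick-up (step 1) and printing (step 2) is often rationalized with a kinetically controlled adhesion model: the effective adhesion energy of a viscoelastic (e.g. PDMS) stamp grows with peel velocity, commonly fitted as G_stamp(v) = G0(1 + (v/v0)^n), so fast peeling retrieves the object and slow peeling releases it. The constants below are illustrative assumptions, not measured values:

```python
# Sketch of kinetically controlled transfer printing.  The model form
# G_stamp(v) = G0 * (1 + (v/v0)**n) is a common empirical fit; G0, v0, n
# and the object/substrate adhesion G_os below are assumed for illustration.

G0, v0, n = 0.1, 1e-3, 0.6   # J/m^2, m/s, dimensionless (assumed)
G_os = 0.3                   # object/substrate adhesion energy, J/m^2 (assumed)

def g_stamp(v: float) -> float:
    """Velocity-dependent stamp/object adhesion energy."""
    return G0 * (1.0 + (v / v0) ** n)

for v, step in ((1e-1, "step 1, fast peel"), (1e-6, "step 2, slow peel")):
    action = "pick up" if g_stamp(v) > G_os else "print"
    print(f"{step}: G_stamp = {g_stamp(v):.3f} J/m^2 vs G_os = {G_os} -> {action}")
```

The crossover velocity at which G_stamp(v) = G_os separates the retrieval and printing regimes; in practice, surface treatments and temperature shift G_os and thereby widen or narrow this operating window.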
The mechanism of transfer printing can be understood by studying the competing fracture between the stamp/object interface and the object/substrate interface [29,142]. During the first step (step 1, Fig. 6d), i.e. object retrieval from the donor substrate, the stamp/object interface must be stronger than the object/substrate interface. Conversely, to print the object onto the receiver substrate (step 2, Fig. 6d), the stamp/object interface must be weaker than the object/substrate interface. For large area electronics, controllable and reproducible transfer of micro/nanostructures from the donor to the receiver substrate is needed, and hence precise control of the interface properties is necessary. It is generally assumed that the adhesion strength at the micro/nanostructure-substrate interface is not influenced by the applied force/stimulus and does not play an important role during the printing process. Therefore, control over the micro/nanostructure-stamp interface is key to successful printing. To this end, control over factors such as surface functionalization, surface morphology modification, temperature, and peeling velocity is needed [143-145]. A few recent review papers have further described the various TP techniques [29,146,147]. The TP technology can be exploited to create a pilot line for heterogeneous integration of smart systems in a semiconductor foundry environment for foil-to-foil manufacturing. Nano/microstructures (NMs, NRs, etc.) of Si and compound semiconductors such as GaAs, GaN, InP, and InAs have also been printed using the TP approach to produce several classes of flexible devices [45,46,48,87,96,108,136,141,148-152]. The TP technique has also been used to transfer carbon-based high-mobility materials such as carbon nanotubes (CNTs) and graphene over large areas [30,153-159]. The TP of CNTs is commonly carried out by first depositing a metallic layer (e.g.
Au) on top of the CNTs; then, using a transfer substrate typically consisting of PDMS [160], a CNT stamp is formed; and finally, the Au/CNT layer is transferred to the receiver substrate. The TP approach has also been demonstrated for multilayer superstructures of large collections of CNTs configured in horizontally aligned arrays, and for complex geometries of arrays and networks on a wide range of substrates [161]. Being a two-step process, TP has an increased level of complexity compared with CP: the use of an intermediate stamp introduces additional challenges, some of which are discussed in Sect. 5. As with CP, using cylindrical stamps as shown in Fig. 6e, it may also be possible with TP to achieve R2R transfer or stamp printing, although this has not been attempted so far [59].

Transfer printing of chip or macro scale structures

Printed nanomaterials, including NWs, NRs, CNTs, and fibers, have been extensively exploited for realizing PE devices, but large-scale integration remains an elusive task. UTCs have the capability to fill this gap and deliver high-performance flexible electronics with a variety of functionalities by combining the high performance of Si technology with system-in-foil applications [26,49,103,128,162,163]. First, the devices and integrated circuits are fabricated on a rigid silicon wafer using a conventional CMOS approach. Subsequently, the thinning (discussed in Sect. 2) and transfer printing techniques are utilized to transfer the UTCs to a flexible polyimide substrate (Fig. 7) [26,127,164,165]. Figure 7a depicts one such approach for TP of UTCs, fabricated via a chemical method, onto flexible polyimide through a PDMS-assisted wafer-scale transfer process [26]. In this transfer process, an oxide layer is thermally grown on the rear side of the wafer in selective regions, acting as a hard mask for chemical etching, to achieve UTCs of different dimensions as shown in Fig. 7b-d.
Following this, the TP of UTCs onto flexible polyimide is carried out in two stages: (1) transfer to a temporary second wafer coated with 200 μm thick PDMS (Fig. 7e-g), and (2) sequential transfer to a temporary third wafer coated with polyimide, shown in Fig. 7h-l. In stage 1, the 200 μm thick PDMS is spin coated on a temporary second silicon carrier wafer and a low-power plasma treatment is performed to enhance the adhesion of the front side of the silicon membrane to the PDMS-coated temporary wafer. The bulk silicon region is removed by dicing the membrane along the thinned region, which leaves the UTCs on the PDMS-coated second wafer. As the front side of the UTCs faces the PDMS (Fig. 7g), stage 2 of the transfer process is performed to regain access to the active devices by transferring the chips to the receiving substrate (third temporary wafer) coated with polyimide. Subsequently, the second temporary wafer is removed by etching the PDMS layer, and the UTCs are then encapsulated in polyimide on both sides. Finally, the transfer-printed UTCs are released from the third temporary wafer to realize flexible high-performance integrated circuits and devices. Photographic and cross-sectional SEM images of the transferred UTCs are shown in Fig. 7m and n, respectively. The flexible UTCs and MOSFET devices laminated between PVC sheets are shown in Fig. 7o and p, respectively [26]. The presented approach opens interesting opportunities for heterogeneous integration of organic and inorganic semiconductors on foil. However, thinning makes the chips fragile, so they require extra care in handling, and the integration process can therefore be expensive. Further, the level of flexibility UTCs can achieve is not as high as that of NW-based PE; for example, the minimum bending radius demonstrated for UTCs is only ~ 1.4 mm so far [26]. Figure 8 summarises the key features of the printing techniques presented in this section for the fabrication of large area flexible electronics.
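The flexibility gained by thinning follows directly from bending mechanics: for a chip of thickness t bent to radius r, the peak surface strain is ε ≈ t/(2r) (neutral plane at mid-thickness, encapsulation layers neglected). A quick sketch with assumed thicknesses and a ~ 1% strain limit:

```python
# Why thinner chips bend further: eps ~ t / (2r) for a sheet of thickness t
# at bend radius r.  Thicknesses and the 1 % strain limit are assumed.

def min_bend_radius(thickness: float, strain_limit: float) -> float:
    """Smallest radius before the surface strain exceeds strain_limit."""
    return thickness / (2.0 * strain_limit)

for t_um in (500, 50, 10):                 # bulk wafer -> thinned chip -> UTC
    r = min_bend_radius(t_um * 1e-6, 0.01)
    print(f"t = {t_um:3d} um -> min bend radius ~ {r * 1e3:.1f} mm")
```

A ~ 10 µm chip can thus reach mm-scale radii, consistent with the ~ 1.4 mm reported for UTCs, while a full-thickness wafer cannot bend appreciably at all.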
Exploiting these techniques, nano to chip (macro) scale materials have been assembled to obtain devices over large areas, retaining most of the key features of conventional Si electronics, such as short switching times and high integration density. The printing of nanoscale materials, particularly 1D materials with sub-20 nm diameter, is of significant interest for printed large area electronics.

Printed devices/circuits using nano to macro scale elements

This section presents some examples of devices/circuits developed using printed inorganic nano to macro scale structures or building blocks.

Printed devices

The contact and transfer printing techniques described in the previous sections have been utilized to print inorganic structures of various dimensions and materials (described in Sect. 2) to develop devices in flexible and stretchable forms. In general, CP has been used for the transfer of nanoscale structures (mainly NWs), and TP for nano to macro scale materials. Some examples are presented in this section.

Transistors

Generally, the effective mobility of an electronic device determines important performance parameters such as switching speed, current density, power efficiency, and transit frequency (f_T) [166]. Compared to organic thin-film transistors (OTFTs), transistors built on flexible substrates with printed inorganic elements offer great potential for emerging applications such as the IoT, where high performance (e.g. faster communication and computation) is required. NWs have been used to obtain nanoscale electronic devices, such as semiconductor NWs assembled into nm-scale FETs [54,55] and p-n diodes [44,167]. However, some of the technological processing steps, such as deposition of a high-quality gate dielectric and ohmic source/drain contact formation at room temperature (RT), are still challenging.
Overcoming these challenges, Si-NR-based FETs (NR-FETs) were recently developed on fully flexible polyimide (PI) substrates, as shown in Fig. 9. The distinct feature of these devices is that the high-quality silicon nitride (SiNx) dielectric was deposited directly on the printed NRs at RT (Fig. 9a-b). Electrical characterisation of the NR-FETs showed high performance (mobility ≈ 656 cm2/Vs and on/off ratio > 10^6), which is on par with the highest performance of similar devices reported with high-temperature processes, and significantly higher than that of devices reported with low-temperature processes. The reported NR-FETs are mechanically robust, with the ability to withstand mechanical bending cycles (100 cycles tested) without performance degradation (Fig. 9c-f). Generally, the ohmicity of the metal-semiconductor (MS) contacts deteriorates with high-temperature dielectric deposition processes [168], which affects transistor performance and reliability. The high performance achieved by the NR-FET devices can be attributed to the RT dielectric deposition process, with negligible degradation of the source/drain contacts. The measured breakdown field strength (> 2.2 MV/cm) further confirms the excellent quality of the RT-deposited dielectric (Fig. 9g). High-performance transistors based on printed nano/micro scale structures (NWs and NRs) of compound semiconductors such as GaAs, GaN, and InAs on plastic substrates have also been demonstrated, making them potentially useful platforms for ultra-high-frequency electronics [45,98,169,170]. Printing of graphene sheets can also produce high-mobility transistors (hole mobility µp ∼ 3700 cm2/Vs) [171]. However, a limitation of graphene is its zero band gap, which restricts its use in digital applications.
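Effective mobilities like those quoted above are typically extracted from the linear-regime transfer curve via μ_eff = g_m·L/(W·C_ox·V_DS). The sketch below uses hypothetical device numbers, not those of the cited NR-FETs, and adds a crude transit-frequency estimate f_T ≈ μ_eff·V_DS/(2πL²):

```python
import math

# Linear-regime mobility extraction (hypothetical device, assumed values).
L, W = 10e-6, 100e-6   # channel length and width, m
C_ox = 5e-4            # gate capacitance per unit area, F/m^2
V_ds = 0.1             # linear-regime drain bias, V

def mobility_cm2(gm: float) -> float:
    """mu_eff = gm * L / (W * Cox * Vds), returned in cm^2/Vs."""
    return gm * L / (W * C_ox * V_ds) * 1e4

gm = 3.3e-5            # transconductance dId/dVgs in the linear region, S
mu = mobility_cm2(gm)
f_t = (mu * 1e-4) * V_ds / (2 * math.pi * L ** 2)   # crude fT estimate
print(f"mu_eff ~ {mu:.0f} cm^2/Vs, fT ~ {f_t / 1e6:.1f} MHz")
```

The f_T scaling with μ/L² is why high-mobility inorganic channels, rather than organic ones, are attractive when fast switching is needed on plastic substrates.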
The CP approach can enable the assembly and fabrication of multifunctional 3D NW-based electronics on both planar and flexible substrates through monolithic printing steps [42-44,172]. In this respect, the 10 functional device layers of Ge/Si NWs stacked to form a 3D electronic structure are noteworthy [42]. Notably, these NW-FETs show minimal variation in threshold voltage and exhibit a large average on-current of 4 mA with a standard deviation of only 15%. Using CP, multifunctional circuitry that utilizes both the sensory and electronic functionalities of NWs has also been demonstrated, as discussed in Sect. 4.2. Printing of UTCs (chip or macro scale) on flexible substrates is a viable route to complex, large-scale integrated high-performance electronics in a flexible form factor. In this direction, an innovative approach for wafer-scale transfer of UTCs onto flexible substrates was demonstrated [26]. The methodology has been demonstrated with various devices: UTCs with resistive samples [125], metal oxide semiconductor (MOS) capacitors [173-176], n-channel metal-oxide-semiconductor field-effect transistors (MOSFETs) [26], CMOS Hall sensors [177], and ion-sensitive field-effect transistors (ISFETs) [133]. An example of MOSFET devices fabricated at wafer scale is shown in Fig. 10a, and a microscopic image of a single MOSFET in Fig. 10b. The transfer characteristics of the n-MOSFETs were measured under different bending conditions (compressive under concave bending, planar, and tensile under convex bending). Under bending, the effective mass of the carriers is affected by the splitting and lowering of energy bands, which increases the overall current under tensile strain and decreases it under compressive strain [133,162,163,178]. This can be observed in the transfer (Fig. 10c) and output characteristics (Fig. 10d).
The n-MOSFET in the planar state demonstrated an effective mobility of 350 cm2/Vs and an on/off ratio of 2.42 × 10^4. The effective mobilities of the n-MOSFET under tensile and compressive stress were 384 cm2/Vs and 333 cm2/Vs, respectively. Further, the fabricated devices demonstrated stable performance over 100 bending cycles (Fig. 10e) with negligible device-to-device variation (Fig. 10f) [26]. This technique has the capability to deliver high-performance flexible circuits with reliable device performance.

Photodetectors

Printing of inorganic building blocks can also be used to form optoelectronic devices such as photodetectors (PDs) and light-emitting diodes (LEDs) in a mechanically flexible format [25,148,170,179,180]. For example, a CP system was used to fabricate ZnO and Si NW-based ultraviolet (UV) PDs in a Wheatstone bridge (WB) configuration on rigid and flexible substrates (Fig. 11). The UV PDs based on the printed ensemble of NWs demonstrate high efficiency, a high photocurrent-to-dark-current ratio (> 10^4), and reduced thermal variations because of the inherent self-compensation of the WB arrangement. Owing to the statistically smaller dimensional variations in the ensemble of NWs, the UV PDs made from them exhibit a uniform response. Similarly, printing approaches have been exploited to fabricate visible [43] and near-infrared [148] PDs on flexible/stretchable substrates.

Energy generators

In addition to electronic and optoelectronic devices, printing of inorganic materials can yield high-performance flexible energy harvesting devices such as solar cells and piezoelectric and thermoelectric generators [82,124,182-184]. For example, spatially organized printed ZnO NW arrays enable the fabrication of piezoelectric nanogenerators (PENGs) (Fig. 12a-c). To enhance the output power of PENGs, all NWs must be oriented in the same direction.
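The need for a common polar orientation can be seen from a toy superposition estimate: if each NW contributes ±v depending on its orientation, N co-oriented NWs add to Nv, whereas randomly flipped ones largely cancel. Equal, additive per-NW contributions are an idealization assumed here:

```python
import random

# Toy model: each NW contributes +v or -v depending on its polar orientation.
random.seed(0)
N, v = 1000, 1.0   # number of NWs and per-NW piezopotential (arb. units)

aligned_total = N * v                                      # constructive sum
random_total = sum(random.choice((+v, -v)) for _ in range(N))  # cancellation
print(f"co-oriented: {aligned_total:.0f}  randomly oriented: {random_total:+.0f} (arb. units)")
```

The random case fluctuates around zero with magnitude ~ √N, so aligning the crystallographic polarity via the CP sweeping direction is what makes the macroscopic potential add constructively.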
This particular requirement is difficult to satisfy with most nanostructure assembly/integration approaches, such as Langmuir-Blodgett (LB) deposition [185], solution shearing methods including the blown-bubble approach [186,187], and capillary force assembly [188]. As shown in Fig. 12a, the CP method ensures that the crystallographic orientations of the horizontal NWs are aligned along the sweeping direction. Consequently, the polarity of the induced piezopotential is also aligned, leading to a macroscopic potential contributed constructively by all the NWs [184]. Similarly, PZT ribbons were printed using the TP approach to fabricate a flexible mechanical energy harvester (Fig. 12d-f). Electromechanical characterization of the PZT ribbons by piezo-force microscopy (Fig. 12f) indicates that their energy conversion metrics are among the highest reported on a flexible medium. The TP method was also exploited to develop flexible micro thermoelectric generators (µ-TEGs) on a poly(ethylene terephthalate) (PET) substrate (Fig. 12g-h). A TEG module, consisting of an array of 34 alternately doped p-type and n-type Si microwires, was developed on an SOI wafer using standard photolithography and etching techniques, and the TEG modules were then transferred from the SOI wafer to the PET substrate by the TP method. A maximum open-circuit voltage of 9.3 mV was recorded from the flexible µ-TEG prototype at a temperature difference of 54 °C.

Pressure sensors

Large-scale integration of high-performance inorganic nanoscale elements on mechanically flexible substrates enables sensitive sensing devices [30,189]. For example, large area TP of a graphene layer onto a photovoltaic (PV) cell resulted in an energy-autonomous touch-sensitive system for soft robotics (Fig. 13). Transfer of a single graphene layer was demonstrated by transferring a 4-inch CVD-grown monolayer of graphene from Cu foil to 125-μm-thick poly(vinyl chloride) (PVC) substrates using a hot-lamination method at 125 °C [30].
The transfer process led to the fabrication of large area flexible graphene-based capacitive touch sensors (Fig. 13a-b). The fabricated sensors showed high sensitivity (4.3 kPa−1) over a wide range of pressures (0.11-80 kPa) (Fig. 13c). One of the key features of the fabricated eSkin is its high transparency, i.e. sunlight absorption below 5%, which allows effective harvesting of light energy by a PV cell underneath the eSkin. The viability of the graphene-based skin sensors was also analysed by means of a dynamic characterization consisting of the grabbing of a soft object. The response obtained from the capacitive sensors was successfully used as tactile feedback in an artificial hand (Fig. 13d), allowing the manipulation of rigid and soft objects of different shapes (Fig. 13e).

Fig. 13 c Touch sensor response vs. applied pressure. d eSkin with capacitive sensors integrated onto a robotic hand. e Self-powered eSkin used as tactile feedback for a robotic hand. Reprinted with permission from Ref. [30]

Printed circuits and systems

Large-scale and heterogeneous integration of printed inorganic nano to macro scale elements has led to the realisation of flexible electronic logic devices, circuits, and systems [39,42,43,141,189-192]. In one example, a 3D stacking methodology with contact-printed NWs (see Sect. 3.1) led to ultra-high-performance electronics not accessible by scaled complementary metal-oxide-semiconductor (CMOS) technology (Fig. 14a) [42]. By repeating the printing process, up to ten layers of active NW-FET devices were assembled, and a bilayer structure consisting of logic in layer 1 and non-volatile memory in layer 2 was demonstrated (Fig. 14b-c). In another example, exploiting the sensory and electronic functionalities of nanoscale elements, multifunctional circuitry was realised using contact-printed, ordered, parallel arrays of optically active CdSe NWs and high-mobility Ge/Si NWs (Fig. 14d-g) [43].
The NW-based photosensors [189] and electronic devices were then interfaced to enable an all-NW circuitry with on-chip integration, capable of detecting and amplifying an optical signal with high sensitivity and precision (Fig. 14h). It was found that ~ 80% of the circuits demonstrated successful photoresponse operation (Fig. 14i). The potential of the CP technique was further demonstrated by large area (7 × 7 cm2) printing of parallel NW arrays as the active-matrix backplane of a flexible pressure-sensor array (18 × 19 pixels) (Fig. 14j-k) [189]. The integrated sensor array effectively functions as an eSkin capable of monitoring applied pressure profiles with high spatial resolution. The mechanical flexibility of one such fabricated device can be seen in the optical image in Fig. 14j, while Fig. 14k shows a pressure map of the same device. The eSkin system can provide fast mapping of normal pressure distributions in the range from 0 to 15 kPa.

Opportunities

As presented in this survey, the last decade has witnessed huge progress in the field of flexible inorganic PE [21,30,37,100,175,176,193-196]. It carries the advantages of both conventional electronics (high performance and functionality) and printed organic electronics (low fabrication cost, large area, etc.). Printing of intrinsically flexible high-mobility materials, including silicon-based materials (NMs [48], NRs [46,103], NWs [25], MWs [100,123]), carbon-based materials (CNTs [197,198], graphene [30,176]), two-dimensional (2D) transition-metal dichalcogenides (TMDCs) [199,200], and metal oxide nanomaterials (such as ZnO) [25,83,110,111,201,202], has been attempted. Vertically aligned NWs are usually transferred using the CP technique [25], whereas the transfer of laterally aligned structures such as NMs and NRs is generally performed using the TP approach [46].
Exploiting these printing methods, high-mobility materials/inks have been used to fabricate a variety of flexible electronic devices, such as FETs [46,203], PDs [25,204], temperature and pressure sensors [205,206], radio-frequency identification (RFID) tags [207], energy harvesters (solar cells, thermoelectric generators, piezoelectric generators, etc.) [82,148,182,184,208,209], stretchable interconnects [210], and many others [29,73]. These intrinsically flexible inorganic materials have enabled many novel applications that neither conventional electronics nor organic PE could achieve, such as personal healthcare monitoring [17,207,211], human-machine interfaces (artificial intelligence) [59,212,213], and neuromorphic computing [140,214,215], where faster computing and mechanical flexibility are both needed. For the continued growth of inorganic PE, exploration of the fundamental device physics [30,140], of the effects of bending on devices [125,163], and of innovative fabrication approaches and new form factors is required to meet the needs of this next generation of high-performance large area electronics. Large area flexible electronics is mechanically conformable to the human body, enabling human-interactive electronics. Unlike conventional electronics, which aims at realizing electronic devices of smaller size and higher density (Moore's law), the priority of large-area flexible electronics is to fabricate components with diversified functionalities, such as biochips, microelectromechanical sensors, power electronics, and analog/RF devices, with flexibility, stretchability, and disposability, in a cost-effective manner (Fig. 15). Consequently, large area electronics will increasingly be key to futuristic applications. Inorganic PE manufacturing has the advantage of being a simple and cost-effective approach that can provide long-term solutions for large area electronics.
The presented survey shows that extensive research effort has been devoted to the development of printing technologies, from research on materials and devices to fully integrated systems. Printing technologies, mainly contact and transfer printing, facilitate the transfer, assembly, and patterning of intrinsically stretchable electronic nanomaterials; they have been actively investigated and have provided many notable breakthroughs for the advancement of large-area electronics (Fig. 8). From the fabrication standpoint, a notable feature of the CP and TP methodologies is that they separate the semiconductor growth process (on a rigid substrate) from the device (flexible) substrate. The advantage of doing so is the independence of these methods from traditional requirements for epitaxy and thermal budget, which allows the development of transistors, sensors, etc. at temperatures compatible with plastic substrates, and that without sacrificing the ability to incorporate high-quality single-crystal semiconductor building blocks. However, several technical challenges exist, such as non-uniformity in material growth and transfer, limited scalability, and integration issues (including heterogeneous and three-dimensional integration), which need to be addressed for the next generation of high-performance large-area electronics using printing technologies. Some of these challenges are discussed in the following section.

Large-scale integration of nanoscale features

The CP technique has been successful in transferring nanoscale features such as NWs at wafer scale [42,43], but it is hard to achieve a high yield of functional devices. Future printing techniques should be able to transfer nanoscale structures such as NWs and NTs at wafer scale (and beyond) in a controlled manner. To achieve large-area printing of these structures, many existing barriers need to be overcome. The foremost is the uniform growth of nanoscale structures over large areas.
Top-down approaches, such as optical- and electron-beam-lithography-enabled wet/dry etching, consistently demonstrate their superiority in the nanometre-scale control of device definition and placement [77]. On the other hand, bottom-up NW growth approaches offer routes to nano features that may not be formed by top-down means. The exact synthesis route to future semiconductor nanostructure-based flexible electronic devices is unclear; however, it is quite probable that the route will exploit both top-down and bottom-up techniques in tandem to allow a scalable process achieving nanostructures at wafer level with good uniformity [88]. The second barrier to large-scale integration of nanoscale structures is to develop printing techniques that allow transfer of these structures with good uniformity over large areas. As a potential solution to the non-uniformity issue, CP can be a complementary technique for stamp-printing. This means that CP can be used to print NWs from the growth substrate to a foreign receiver substrate, resulting in highly aligned arrays of NWs horizontally printed on the receiver substrate. Then, TP can be employed to transfer the NWs to the final device substrate. However, the total transfer yield obtained by combining contact- and stamp-printing techniques could be lower than when using CP alone. The main challenge is to achieve a 100% yield without missing any inks/objects as the destination substrate area increases.

Fig. 15 Printed electronics enable higher diversification and functionalities than conventional electronics, including biochips, microelectromechanical sensors, power electronics, and analog/RF devices in large-area electronics. Printing technologies have been developed and exploited to provide versatile routes for assembly, from nano to macro scale to 3D integration, of inorganic functional materials/inks into well-organized arrangements on various substrates for large-area, high-performance, flexible inorganic electronics.
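The yield penalty of chaining contact- and stamp-printing can be illustrated with a simple multiplicative model (a sketch under the assumption of independent per-step yields; the numerical yields below are hypothetical illustrative values, not measured data):

```python
def combined_yield(step_yields):
    """Overall transfer yield of a multi-step printing sequence,
    assuming the per-step yields are independent (multiplicative model)."""
    total = 1.0
    for y in step_yields:
        total *= y
    return total

# Hypothetical per-step yields: CP from the growth substrate, then TP
# from the intermediate carrier to the final device substrate.
cp_only = combined_yield([0.95])
cp_then_tp = combined_yield([0.95, 0.90])

# A chained process can never beat its weakest single step.
assert cp_then_tp <= min(0.95, 0.90)
```

Under this model, any additional transfer step can only lower the total yield, which matches the qualitative statement above that the combined CP + TP yield could be lower than CP alone.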
To exploit the potential of CP for large-area printing, NWs can be directly grown on cylindrical rolls using a bottom-up process and then used as a stamp (Fig. 6c) [59]. The bottom-up synthesis of NWs on tubes of glass, quartz, and stainless steel, and even on polymers like PDMS, has been demonstrated in the past [53,83]. One could see new commercial opportunities, for example, commercializing NW rolls just as Si wafers are commercialized today. By using such rolls in differential roll printing [53] and roll transfer-printing [172] settings, the CP approach can be extended to an R2R-type printing.

Technological parameters

Technological parameters such as channel lengths, ohmic junctions, etc. are also important factors influencing device performance. TP of micro/nanostructures, produced from the parent wafer using standard microfabrication techniques, results in well-defined structures on target device substrates. However, residues from the intermediate stamp may remain on the surface of the micro/nanostructures that are transferred onto the target substrate [99,130]. The interfacial contacts between the active micro/nanostructures and the deposited metal contacts or dielectric material need attention, as they play a critical role in the electrical performance and reliability of the device. Since the elastomeric stamp is generally an insulating material (e.g. PDMS), its residue may pose a challenge in employing the transferred nanostructures as building blocks for high-performance electronics. For example, in the presence of PDMS residues it is difficult to realize metal contacts for the source and drain terminals of a Si micro/nanowire transistor. The reported method [99] provides a solution for completely removing PDMS residues from the surface of micro/nanostructures transferred onto flexible substrates. Moreover, given micro/nanostructure dimensions such as those of NWs, NRs, etc., the transfer steps are more complex.
Perfect mask alignment at such diminutive sizes may pose challenges. Nevertheless, using the TP approach, multistep stamp printing has been successfully demonstrated with feature resolution down to the nanoscale [216]. Another challenge is the printing of high-resolution, high-aspect-ratio metal lines for the miniaturisation of the device channel length. The channel length is a very critical parameter in CMOS technology for high device performance. At present, printed transistors have channel lengths of a few microns, which is far larger than in advanced conventional Si electronics (a few nanometres). However, during the initial development stages, i.e. when CMOS technologies were at the point where PE presently is, the channel length was longer than 1 µm. Following the growth trend of CMOS technologies, directly printing submicron channels on printed inorganic semiconductors could become possible in future with advances in printing technologies.

Direct 3D integration capability

The 3D integration of PE could offer major advantages in the future for miniaturized high-performance flexible devices, just like the 3D integration of conventional CMOS electronics. The CP technique has shown potential for vertical 3D stacking of printed NWs [42]. As an example, functional devices with ten vertically stacked layers of Ge/Si NWs have been demonstrated [42]. The best attribute of CP is its compatibility with monolithic 3D integration, meaning that layer-by-layer assembly does not alter the properties of existing layers. Moreover, as mentioned previously, CP and TP can complement each other in layer-by-layer printing of NWs to form vertical 3D stacks. Advances in multi-material additive manufacturing could offer new avenues for introducing eSkin-like features in prosthetics and robotics [59,217,218].
For instance, such 3D manufacturing processes could be employed to develop prostheses with directly integrated or embedded touch sensors, thereby enabling robust limbs that are also free from wear-and-tear issues. The ability to simultaneously print multiple materials in 3D will also address the traditional robotic eSkin issue of wiring routing.

Heterogeneous integration of materials for multi-functionality

Bringing multi-functionality into eSkin-like devices or other wearables is important for efficient miniaturization and for monitoring different input parameters with a single device [14,43,170,219]. Heterogeneously integrated NWs with distinct functionalities will represent the future technology, where cost-competitive, scalable strategies allow integration of diverse materials with complementary performance [25,43,[220][221][222]. The need of the hour is to develop printing techniques that overcome the critical issue of multi-functionality and permit the highly precise integration of individually selected semiconductor NWs of different materials (e.g. InP, GaAs, ZnO, Si) onto a variety of substrates (e.g. polymer, silicon, silica, metals) [25]. This will open avenues towards the manufacture of heterogeneous devices, consisting of integrated systems made from pure and/or hybrid inorganic/organic materials.

Conclusions

PE technologies are emerging as a dynamic manufacturing route for large-area, high-performance electronics. This advancement is driven by the demand for new functionalities such as flexible, conformal devices for applications in wearables, robotics, healthcare, etc. However, the modest performance thus far offered by organic semiconducting and dielectric materials-based inks has restricted PE applications to the low end. To this end, printed inorganic semiconducting materials-based devices show huge potential to achieve performance on par with silicon-based electronics.
The presented survey captures the recent developments in the field of inorganic PE. Key printing techniques to integrate nano- to macro-scale inorganic functional elements are presented. Each transfer technique has distinct advantages and disadvantages depending on many factors: ink structure, orientation, and dimensions, and application requirements (flexibility, functionality, and so on). The advancements in PE technologies have essentially enlarged the range of high-performing materials that can be patterned onto a variety of nonconventional substrates to achieve new form factors, including stretchability. Finally, we have discussed challenges and potential solutions for nanostructure-based large-area high-performance electronics. Printing techniques are attractive here mainly due to their simplicity, low processing temperatures, suitability for large-area and mass production (compatibility with R2R technology), compatibility with 2D and 3D monolithic integration, reproducibility, reliability, and compatibility with flexible substrates. Advances in inorganic printed electronics open avenues for fabricating complex circuits/devices with CMOS-comparable performance, enabling new circuit topologies and heterogeneous integration, and yielding systems that will increasingly interact with their environment. The unification of new form factors, diversification, and functionality is an appealing new aspect of electronics manufacturing and can be achieved by printing techniques.
Lens Galaxies vs. CDM

By directly probing mass distributions, gravitational lensing offers several new tests of the CDM paradigm. Lens statistics place upper limits on the dark matter content of elliptical galaxies. Galaxies built from CDM mass distributions are too concentrated to satisfy these limits, so lensing extends the ``concentration problem'' in CDM to elliptical galaxies. The central densities of the model galaxies are too low on ~10 pc scales to agree with the lack of central images in observed lenses. The flux ratios of four-image lenses imply a substantial population of dark matter clumps with a typical mass ~10^6 Msun. Thus, lensing implies the need for a mechanism that reduces dark matter densities on kiloparsec scales without erasing structure on smaller scales.

Introduction

The popular Cold Dark Matter (CDM) paradigm is facing several challenges on small scales (e.g., [26]). The dynamics of spiral galaxies, especially rotation curves and fast-rotating bars, suggest that in observed galaxies dark matter halos are much less concentrated than predicted by CDM (e.g., [11], [13]), although this conclusion is still controversial (e.g., [32]). The number of satellite dwarf galaxies in the Local Group is much smaller than the number of subhalos in CDM simulations [20], [25], although the discrepancy may be explained by the astrophysics of star formation rather than by the physics of the dark matter particle [6]. These tests of CDM are limited, however, by uncertainties in interpreting luminous tracers of the potential. Gravitational lensing offers a different test that probes mass distributions directly. Strong lensing by galaxies robustly determines the total mass in the inner 5-10 kpc of lens galaxies, which are predominantly elliptical galaxies. It also offers the possibility to detect small-scale mass concentrations in galaxy halos [8], [10], [19], [22], [24].
Lensing thus offers new tests of CDM that avoid dynamical uncertainties and extend the tests from spiral galaxies to ellipticals.

Star+Halo Models

I construct new models for lens statistics that include both stellar and dark matter components (see [18] for details). In principle, I take a CDM dark matter halo, add baryons, let the baryons condense into a galaxy, and use the adiabatic contraction formalism [3] to compute how the dark matter distribution is modified by the baryons. In practice, I fix the stellar galaxies and use the models to place dark matter halos around them. The stellar components are treated as Hernquist models for elliptical galaxies, normalized by observed galaxy luminosity functions [21], Fundamental Plane relations [29], and Bruzual & Charlot [5] model mass-to-light ratios (which are reliable for the old stellar components of elliptical galaxies). Two free parameters apply to the dark matter halos. First, halos with the Navarro, Frenk & White [27] dark matter profile are described by a concentration parameter. A halo's concentration is determined by its mass and redshift, but with a scatter of 0.18 dex [7]. I include the scatter and take the median concentration to be a free parameter. Second, to relate the total, virial mass of the dark matter halo (M_d) to the mass of the stellar component (M_s), I define the "cooled mass fraction" f_cool = M_s/(M_d + M_s). I take the cooled mass fraction to be the second free parameter in the models, assuming only that it is smaller than the global baryon fraction, f_cool ≤ Ω_b/Ω_M.

Lens Statistics and Galaxy Masses

Lens statistics can be used to test the CDM models, because changes to galaxy dark matter halos affect the number of lenses and the distribution of lens image separations. Figure 1a demonstrates the test by comparing the model predictions with the data from the Cosmic Lens All-Sky Survey (CLASS; e.g., [16]), which is the largest homogeneous survey for lenses.
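The concentration prescription above (a median concentration with 0.18 dex lognormal scatter) can be sketched as a simple Monte Carlo draw. This is an illustrative sketch, not the paper's actual code; the default median of 7.7 corresponds to the fiducial CDM value quoted in the text:

```python
import numpy as np

def sample_concentrations(median_c=7.7, scatter_dex=0.18, n=100_000, seed=0):
    """Draw halo concentrations with lognormal scatter:
    log10(C) is Gaussian with median log10(median_c) and width scatter_dex."""
    rng = np.random.default_rng(seed)
    log_c = rng.normal(np.log10(median_c), scatter_dex, size=n)
    return 10.0 ** log_c

c = sample_concentrations()
# The sample median recovers the input median concentration,
# and the scatter in log10(C) recovers the 0.18 dex width.
```

Because the scatter is in log space, the distribution of C is skewed: the mean concentration sits above the median.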
Increasing the concentration of dark matter halos raises the amount of dark matter in the inner parts of galaxies, leading the models to predict more and larger lenses. Because the stellar components of the galaxies are fixed, decreasing the cooled mass fraction increases the amount of dark matter, again leading to more and larger lenses. Using statistical tests to compare the models to the data leads to confidence intervals in the (C, f_cool) plane, as shown in Fig. 1b. Lensing requires the models to have low concentrations or high cooled mass fractions. Adding the constraint on f_cool from the baryon content of the universe leaves only a small region of parameter space where the models are acceptable. Fiducial CDM models predict a median concentration C ≃ 7.7 for galaxies (indicated in Fig. 1b). This value is allowed by lens statistics only if galaxies are nearly 100% efficient at cooling their baryons (f_cool ≃ Ω_b/Ω_M), which is implausible (e.g., [2]). The constraints in Fig. 1b are conservative, because most of the systematic effects in the lensing analysis strengthen the lensing constraints (see [18]). Changing the cosmology (increasing Ω_M) has little effect on the lensing analysis but reduces the upper limit on f_cool. Translating the constraints into enclosed mass leads to the conclusion that dark matter can account for no more than 33% of the mass within 1 R_e and 40% of the mass within 2 R_e (95% confidence limits on average mass fractions). (Fig. 1b caption: The shaded region at the top is excluded at 95% confidence by measurements of Ω_b (e.g., [12], [31]). All results are shown for a cosmology with Ω_M = 0.2 and Ω_Λ = 0.8. See [18] for more discussion.) Note that these limits are for the mass in spheres, whereas lensing limits on the mass in cylinders indicate that dark matter halos are still important in ellipticals. The lensing limits are consistent with the mass estimates from dynamical analyses of nearby elliptical galaxies [14].
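The cooled-mass-fraction definition and its baryon bound can be illustrated in a few lines (a sketch: the galaxy masses and the value of Ω_b are hypothetical numbers chosen for illustration, while Ω_M = 0.2 follows the cosmology used here):

```python
def cooled_mass_fraction(m_stellar, m_dark):
    """f_cool = M_s / (M_d + M_s): the fraction of a halo's total mass
    that has cooled into the stellar component."""
    return m_stellar / (m_dark + m_stellar)

# Illustrative masses in solar units (a hypothetical galaxy).
f = cooled_mass_fraction(1e11, 9e11)   # f_cool = 0.10

# Baryon bound: f_cool cannot exceed the global baryon fraction.
# omega_b here is an assumed illustrative value; omega_m as in the text.
omega_b, omega_m = 0.04, 0.2
assert f <= omega_b / omega_m
```

Note how increasing omega_m tightens the bound omega_b/omega_m, mirroring the statement above that a higher Ω_M reduces the upper limit on f_cool.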
By contrast, the CDM models predict dark matter mass fractions of ∼28% inside 1 R_e if baryon cooling is 100% efficient, and even higher fractions for more reasonable cooling efficiencies.

Odd Images and Galaxy Centers

Nearly all observed lenses have an even number of images (usually two or four). Lens theory, by contrast, predicts that each lens should have an additional "odd" image located near the center of the lens galaxy, although it is demagnified by the high central density of the lens galaxy. At optical wavelengths an odd image would be swamped by light from the lens galaxy, but in a radio lens an odd image should be detectable. The lack of odd images in radio lenses thus places strong lower limits on the central densities of lens galaxies [28]. The CDM model galaxies predict that ≳30% of (radio) lenses should have detectable odd images, implying that the model densities are much too low on ∼10 pc scales (see [18] for details). Steep central cusps (ρ ∝ r^(−α) with α ≃ 2) and/or central black holes can help suppress odd images, but for realistic parameter ranges neither offers an attractive solution. The lack of odd images in observed lenses thus remains a puzzle whose resolution will reveal interesting new constraints on the very inner parts of distant galaxies.

Lensing and CDM Substructure

One claimed problem with CDM is that the number of subhalos in CDM model galaxies is much larger than the number of satellite dwarf galaxies in the Local Group, which suggests that CDM overpredicts the amount of substructure in galaxy-mass halos [20], [25]. Two solutions have been proposed. On the one hand, changing the nature of the dark matter could reduce the power on small scales and eliminate the substructure [4], [9]. On the other hand, astrophysical processes such as photoionization could inhibit star formation in low-mass systems, meaning that the CDM subhalos exist but are dark [6]. Dwarf galaxy surveys cannot distinguish between these scenarios.
Tidal streams offer an alternate test, because they can be disrupted by encounters with subhalos [17], [23], but the observational evidence is not yet available. Lensing offers a better test by being directly sensitive to mass in subhalos. Mass clumps in the lens galaxy introduce small-scale variations in the lensing potential that alter the flux ratios of the lensed images [8], [22], [24]. Dalal & Kochanek [10] show that the incidence of "anomalous" flux ratios in 4-image lenses requires that ∼2% of the mass be in small clumps on the scale ∼10^4–10^8 M_⊙, which is in good agreement with the amount of substructure predicted by CDM. In other words, lensing strongly supports the scenario in which many subhalos exist but lack stars, and opposes changes to the nature of the dark matter that eliminate substructure. To complement statistical analyses like [10], I have studied a single 4-image lens in detail using data at a variety of wavelengths to obtain constraints on individual mass clumps [19]. In B1422+231, the optical A/C flux ratio is largely consistent with smooth lens models while the radio A/C flux ratio is not (Fig. 2). Simultaneously explaining the optical and radio flux ratios and the shape of the radio image requires a mass clump in front of image A. A highly concentrated, point-mass clump must have a mass ∼10^4–10^5 M_⊙, while a more extended isothermal sphere must have a mass ∼10^6–10^7 M_⊙. This is the first measurement of a particular clump lying in a distant galaxy (z_l = 0.34) and detected by its mass. Interestingly, there also appears to be a clump passing in front of image B, but this clump is probably just a star in the lens galaxy. In the future, detailed analyses of individual clumps as in B1422+231 will be combined with statistical analyses like [10] to constrain not only the substructure mass fraction but also the masses, densities, and sizes of dark subhalos, and the substructure mass function.
Conclusions

Lens statistics imply that the dark matter densities in the inner parts of elliptical galaxies are lower than predicted by CDM, in agreement with the conclusion from dynamical analyses of spiral galaxies. The CDM paradigm must therefore be modified to reduce dark matter densities on kiloparsec scales. Various mechanisms have been proposed, ranging from astrophysics (disk bars that erase dark matter cusps [33]) to cosmology (a tilted power spectrum [1]) to particle physics (dark matter that is not collisionless and cold [4], [9], [30]). Lensing also implies that lens galaxies have high densities on small scales (≲10 pc). The central densities of galaxies must be much higher than predicted in CDM model galaxies to explain the absence of central or "odd" images in observed lenses. The flux ratios in four-image lenses imply that a substantial fraction of the dark matter (∼2%) lies in small-scale clumps rather than a smooth halo component [10], and B1422+231 suggests that a typical clump mass is ∼10^6 M_⊙ [19]. Thus, while lensing supports other evidence that a mechanism is needed to reduce dark matter densities on kiloparsec scales, it also suggests that the mechanism must not remove structure on small scales, which argues against changing the nature of the dark matter particle.
Prediction of Force Recruitment of Neuromuscular Magnetic Stimulation From 3D Field Model of the Thigh

Neuromuscular magnetic stimulation is a promising tool in neurorehabilitation due to its deeper penetration, notably lower distress, and respectable force levels compared to surface electrical stimulation. However, this method faces great challenges from a technological perspective. The systematic design of better equipment and the incorporation into modern training setups require better understanding of the mechanisms and predictive quantitative models of the recruited forces. This article proposes a model for simulating the force recruitment in isometric muscle stimulation of the thigh extensors based on previous theoretical and experimental findings. The model couples a 3D field model for the physics with a parametric recruitment model. This parametric recruitment model is identified with a mixed-effects design to learn the most likely model based on available experimental data with a wide range of field conditions. This approach intentionally keeps the model as mathematically simple and statistically parsimonious as possible in order to avoid over-fitting. The work demonstrates that the force recruitment particularly depends on the effective, i.e., fiber-related cross section of the muscles, and that the local median electric field threshold amounts to about 65 V/m, which agrees well with values for magnetic stimulation in the brain. The coupled model is able to accurately predict key phenomena observed so far, such as a threshold shift for different distances between coil and body, the different recruiting performance of various coils with available measurement data in the literature, and the saturation behavior with its onset amplitude. The presented recruitment model could also be readily incorporated into dynamic models for biomechanics as soon as sufficient experimental data are available for calibration.
I. Introduction

Magnetic stimulation is a method for activating neurons noninvasively through electromagnetic induction with strong and brief magnetic pulses. At present, magnetic stimulation focuses nearly exclusively on the brain [1]. Administered transcranially, magnetic stimulation can evoke direct effects, such as motor-evoked potentials [2], [3] or phosphenes [4], while certain pulse rhythms or patterns can also modulate neural circuits and shift their excitability with respect to endogenous signals [5]. However, the development of magnetic stimulation has been strongly related to the periphery; even the first successful experiments were performed on lower motor fibers and not the brain [6]. Long straight axons have been used for studying the basics of excitation since then [7]–[11]. In neuromuscular applications, magnetic stimulation can serve as an almost pain-free alternative to transcutaneous electrical stimulation [12]–[26]. Classical rehabilitation can be supported by evoking muscle contraction or by performing more complex tasks, such as cycling [27]. In rehabilitation, neuromuscular stimulation serves to counteract muscle atrophy and to support relearning of movement sequences. Furthermore, orthodromic signals traveling from the periphery back to the central nervous system seem to trigger supportive neuroplastic effects [29]. Particularly the earlier neuromuscular magnetic stimulation approaches stimulated the major nerve trunks before they enter the muscle, in the hope of achieving strong muscle contraction [16], [17], [20]. Researchers optimized coils to improve targeting and increase the recruitable force [21]. If targeted well, the evoked forces can be reasonable, but the handling is substantially more complicated than in electrical stimulation and requires experienced operators, as the operator has to locate the nerve trunk and place a focal coil very accurately.
Targeting and coil placement are further hampered by the contraction of the muscles and associated shifts in the anatomy. In any case, this method easily reaches rather high stimulation amplitudes for effective contraction (e.g., 70% to 100%, often on machines with an already increased base power level), and still only a small portion of subjects or patients reach their maximum force levels [23], [24]. These high stimulation amplitudes can cause distress despite the better tolerability than electrical stimulation and necessarily cause extreme heating problems in the coils and pulse sources when used for training purposes [28]. Thus, recent research and clinical efforts prefer the stimulation of the intramuscular axon tree instead, where the procedure is more practical [12], [18], [19], [25]. Whereas the activation used to be weaker initially, appropriate coil designs could improve recruitment and demonstrate that better technologies can overcome such weakness and surpass electrical stimulation [13]. However, the design of novel technology for neuromuscular stimulation, including better coil geometries, requires a quantitative understanding of the recruitment. Currently, there is a major knowledge gap between the physics and the neurophysiology of neuromuscular stimulation. Consequently, it is also not clear which physical quantity is responsible for the neuromuscular activation. Thus, the optimization of coils is also rather ad hoc. The technology for neuromuscular stimulation is therefore only improving slowly and falling behind the rapid developments in transcranial magnetic stimulation [30], [31]. For the brain, in contrast, recruitment with magnetic and electrical stimulation has been studied intensively [32]–[34], so that both experimental data sets and realistic recruitment models are available [35], [36]. Furthermore, such measurements allowed matching models with experimental data so that the dominant physical quantities could be identified [37]–[39].
Although the physical and neurophysiological conditions in the brain are obviously rather different from those in a peripheral muscle, the work on brain stimulation can still serve as inspiration. Recent three-dimensional field modeling techniques and experiments provide the basis to study the activation problem quantitatively for the quadriceps femoris muscle and to identify the relevant relationships between the field characteristics of the stimulation coil and the muscle activation [41], [43]. The experimental study tested a variety of coils that generate widely different field conditions in the thigh and generated further variations through different coil-leg distances with flexible spacers. The wide range of field conditions generated by the combination of both allowed ruling out that the gradient of the electric field plays a substantial role in the stimulation of the intramuscular axon tree, which pervades the muscle densely with fine branchlets, contacts each individual muscle fiber, winds with rather small curvature around them, and forms a high number of terminals. However, all available analyses in the literature are mere correlation studies, which estimate neither muscle recruitment nor force generation. The available experimental data in the literature, on the other hand, form a sufficiently large data source with enough parameter variation to support the design of a digital twin. The combination of a realistic 3D anatomy model with a parametric recruitment model that estimates the force from the anatomical (muscle and fiber anatomy) and physical (induced electric field distribution) output of the 3D model promises to close the gap between the physics and the force response. The data furthermore can serve for the calibration of free parameters to close the present gaps in understanding. The presented model estimates the recruitment behavior that can be observed in isometric stimulation experiments.
Appropriate mechanical descriptions are well known in the literature. Riener et al., for instance, propose a sophisticated implementation of Hill dynamics together with biomechanics and a first-order fatigue/recovery description [44]. However, the neuromuscular recruitment in such models is usually represented by a sigmoid fitting curve. This work will demonstrate that a realistic recruitment by neuromuscular magnetic stimulation can be estimated directly from the field conditions to replace ad-hoc sigmoidal fitting curves. We will furthermore show that such a model (described in Section II), with as little as one individual parameter (the individual maximum force per muscle cross-section; an optional second parameter can compensate some apparent position offsets of one specific coil, APL, in the experimental data), allows matching data to subjects from previous experiments in the literature (see Section III). The other parameter, specifically the threshold electric field magnitude, could be considered rather constant among the subjects and might reflect the typical range of all healthy subjects.

A. Anatomy

A high-resolution model of the human thigh was prepared based on the visible-human data set of the US NIH [45]. The data include 70 mm color photographs of cryosections with 1 mm spacing in the z-direction, which provide substantially higher resolution than tomography scans and allow a more detailed identification of tissues and particularly of the boundaries in between. Similar to other models in magnetic stimulation [46]–[48], the geometry consists of macroscopic regions with dedicated electrical properties and neglects microscopic structures, such as cell membranes. This common approach assures computability. The different classes of segmented tissue types include skin, fat, eleven muscles or muscle groups, the femur, blood vessels, and major extramuscular nerve branches, although the latter are not the stimulation target themselves.
The data were segmented with standard image processing methods in Matlab (The MathWorks, Natick (MA), USA). The femur and the muscles were identified by simultaneous three-channel analysis of the color data and segmented by thresholding with a manually fine-tuned tolerance band. Furthermore, visible structures that delimit and adjoin the muscular tissue, such as tendons, supported the separation of the different muscles along their boundaries and the reconstruction of their surfaces. The separation of the fat tissue was performed in two stages. A basic frame was obtained from thresholding. As is common in image segmentation, the threshold was determined in the corresponding histograms as a compromise between wrongly identifying other tissue types (false positives) and forming holes due to unclassified regions (false negatives). Afterwards, the data set was cleaned by eroding and reconstructing the mask in order to eliminate image noise and sharp edges. The data did not exhibit enough contrast at the boundaries to extract the skin geometry from the images because the embedding gelatin had diffused into the skin. As a remedy, this cover was generated artificially by spanning a thin layer of tissue on top of the virtual body which follows the shape of the surface (see Fig. 1 in the middle): the adipose layer was dilated with a three-dimensional Gaussian filter; thresholding that data set and subtracting all other segmented tissue types as well as still unclassified interior parts formed an approximately 2 mm surface layer. The basis for the blood vessels was prepared by subtracting all already classified regions from the original raw data. From the remaining tissue, small unsegmented islands that did not belong to any blood vessel were eliminated by a region-growing algorithm, which identified such geometrically isolated subspaces for subsequent removal of unconnected spots. Remaining artifacts and noise were cancelled by three-channel thresholding.
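The dilate-and-subtract construction of the artificial skin cover can be sketched with plain array operations. The toy volume and one-voxel shell below are purely illustrative and use a simple 6-neighbourhood dilation in place of the three-dimensional Gaussian filter of the original Matlab pipeline:

```python
import numpy as np

def surface_shell(mask, thickness=1):
    """Grow a binary 3D mask by `thickness` voxels (6-neighbourhood
    dilation via shifted copies) and return only the newly added shell,
    i.e., the dilated mask minus the original."""
    grown = mask.copy()
    for _ in range(thickness):
        step = grown.copy()
        for axis in range(mask.ndim):
            for shift in (-1, 1):
                step |= np.roll(grown, shift, axis=axis)  # wraps at edges
        grown = step
    return grown & ~mask

# toy 'adipose layer': a 4x4x4 cube centered in a 10x10x10 volume
fat = np.zeros((10, 10, 10), dtype=bool)
fat[3:7, 3:7, 3:7] = True
skin = surface_shell(fat, thickness=1)  # one-voxel cover around the fat
```

Because `np.roll` wraps around the array borders, the mask must stay away from the volume edges, as it does in the toy example.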
Interrupted connections of the network formed by the vessels were reconstructed by cubic interpolation between the unintentionally separated branches. The major nerve branches (the femoral and sciatic nerves as well as the tibial offsprings) were segmented manually from the imaging data. The boundaries of the individual regions were smoothed with a three-dimensional Gaussian filter in order to suppress unrealistic artifacts and moiré effects. Remaining gaps were filled with the tissue type of the nearest neighbor. For the simulations, likewise performed in Matlab, only half of the segmented geometry, namely the right thigh, was meshed. The model is depicted in Figure 1 with its various components.

B. Coils

We modelled four different coil types, namely a standard circular coil (RND15), the racetrack coil RT-120 (MagVenture, Copenhagen, Denmark), the saddle-shaped design APL from [13], [41], and a figure-of-eight coil (MC-B70, MagVenture, Copenhagen, Denmark). The former three devices are taken from the experimental study of [41]. The experimental performance of the figure-of-eight coil for magnetic stimulation of the intramuscular nerve structures has not been reported in the literature; it was added to predict its force potential using the model calibration to the other coils. Furthermore, it generates falsifiable force estimates that can be tested in later experiments to stress-test the model. The winding of each coil was extracted from X-ray images and modeled with every individual turn [49], [50]; a simplification of the coils to single-turn representations, as is common in the literature, was avoided here (see Fig. 5). The smaller coils (RND15, RT-120, and MC-B70) were placed with their cross-hairs at the very same location in the center of the proximal third of the right thigh, which is known to evoke the strongest responses in neuromuscular stimulation experiments for the quadriceps muscle group [13]. The larger APL coil covers almost the full thigh.
The upper edge of the coil ends at the groin. All coils are rotated laterally by 5° in outward direction, i.e., to the right for the right thigh in the model. The positions are visualized in Fig. 5. To reproduce the spacer conditions of the previous comparative study, we additionally modelled the coils in positions lifted by 5 mm, 10 mm, and 15 mm perpendicularly to the surface.

C. Physics

Due to the low back-action of the induced currents in the tissue, the coils were implemented as unmeshed wires, which determined the magnetic vector potential A through the Biot-Savart forward solution [51]. The segmented anatomy was meshed with hexahedra and solved with a quasi-static finite volume method (FVM) with more than 70 million volume cells. The FVM provides a high degree of stability and used an established decoupled formulation for stable simulation of eddy currents as detailed in the appendix [28], [41]. The electromagnetic induction was solved quasistatically for the sinusoidal 5 kHz current pulse of the modeled device. The electrical tissue characteristics were assigned according to established data in the literature [52] for the relevant frequency range as reported in Table I.

D. Force Recruitment Model

To date, 3D models have concentrated on the physical side and studied the distribution of the electric field because the background of eddy currents is well defined and tangible through Maxwell's equations. However, for the initially requested reproduction of the experimentally observed effects, a physiologic description is essential. The presented model translates the physical data from the eddy-current simulation into experimentally accessible quantities, namely the isometric force level. The muscle recruitment and the force generation with their particular features are to a large share caused by the physics in combination with the muscle anatomy.
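As a side note on the physics input: the vector potential of the unmeshed coil windings enters through a Biot-Savart sum over the wire geometry. A minimal discrete sketch (a toy single-turn loop and midpoint summation, not the multi-turn X-ray-reconstructed windings or the FVM solver of the study) could look as follows:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability [T*m/A]

def vector_potential(r, wire_pts, current):
    """Magnetic vector potential A(r) of a coil winding given as a closed
    polyline, via a discrete Biot-Savart sum over straight segments
    (midpoint rule): A = mu0*I/(4*pi) * sum(dl / |r - r_mid|)."""
    seg = np.diff(wire_pts, axis=0)              # segment vectors dl
    mid = 0.5 * (wire_pts[1:] + wire_pts[:-1])   # segment midpoints
    dist = np.linalg.norm(r - mid, axis=1)       # |r - r'| per segment
    return MU0 * current / (4.0 * np.pi) * (seg / dist[:, None]).sum(axis=0)

# single circular turn of radius 5 cm in the xy-plane, 1 kA drive current
t = np.linspace(0.0, 2.0 * np.pi, 401)
loop = np.column_stack([0.05 * np.cos(t), 0.05 * np.sin(t), np.zeros_like(t)])
A_axis = vector_potential(np.array([0.0, 0.0, 0.02]), loop, 1e3)  # on axis
A_off = vector_potential(np.array([0.03, 0.0, 0.02]), loop, 1e3)  # off axis
```

On the symmetry axis, the tangential segment contributions cancel (A vanishes), while off-axis points see a finite vector potential, which is a convenient sanity check for the discretization.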
The physics model provides the local electric fields in the individual muscles and, based on those, the effective activated muscle cross section or volume as described below, which in turn feeds a parametric mixed model (Fig. 2). The parametric mixed model contains all open degrees of freedom and thus all calibratable components of the model, whereas the remaining parts of the recruitment follow from anatomy, physiology, and physics and are implemented as described below. The parameters were subsequently calibrated to experimental data from the literature [41]. We set up several parametric models with different numbers of parameters to compare the two plausible hypotheses for force recruitment and the compensation of individuality; we identified the best-suited model through Schwarz' Bayesian information criterion, which serves as an analytical bias-compensated estimate of the Kullback-Leibler divergence and trades off goodness of fit against model complexity [53]. The models are summarized in Table II. The activation of the intramuscular nerve tree is believed to occur in the fine structure of the axon terminals, close to the neuromuscular junctions [41], [54]. For the physiologic approximation here, a microscopic threshold is defined for every junction, which in turn also provides the threshold of the related muscle fiber. Previous research demonstrated that the primary gradient of the electric field is of minor relevance for explaining the force generation in neuromuscular magnetic stimulation, whereas the magnitude of the electric field strength of the various coils correlates with the response [41], [43]. The surface effects at the axon membrane required for an excitation are generated very effectively even in homogeneous fields due to the fine fiber structure with its small curvature radii.
This phenomenon appears to reflect the situation in the cerebral cortex, where the activation is also triggered by the field strength rather than by any gradient thereof [55]-[57]. Consequently, we defined a local threshold condition at position r within each muscle, which is fulfilled as soon as the norm of the induced electric field E exceeds a certain minimum value E_th, i.e., ∥E(r)∥ > E_th. The free threshold parameter (see Table II) determines the x-axis position of the recruitment curve and the onset of saturation at higher stimulation amplitudes. The recruitable force in turn was treated as an individual characteristic. We set up two fundamental models (Models 1 vs. 2 in Table II), one volume-related and one cross-section-area-related, as follows. Most obvious and plausible might be estimating the force from the activated muscle volume, which was used as one model alternative (Model 2). Due to the fibrous structure of muscles, however, the force generation is not a volume-related issue, nor does the innervation support such an approach [58]. Instead, the number of parallel myofibrils determines the peak force during the onset of a contraction, when neither short-term nor endurance fatigue effects are notable, whereas the number of muscle fibers in series is widely irrelevant for the tendon force [59]-[61]. This number is approximately proportional to the activated cross-section area perpendicular to the fiber pennation [62], [63]. Accordingly, the activated share of the physiologic cross-section area of a specific muscle, i.e., the share that fulfills the threshold condition, acts as an approximation for the force. Each muscle of the quadriceps was handled separately in the model. The corresponding cross sections in each muscle were tilted in order to reflect the pennation axis. For the force evaluation, the cross section with the highest supra-threshold area was taken into account.
The individual pennation angles with respect to the femur axis were 10° for the m. rectus femoris, 8° for the m. vastus lateralis, −8° for the m. vastus intermedius, and 15° for the m. vastus medialis. The maximum force level per physiologic muscle cross section is known to be highly individual due to its dependence on the training state, the blood circulation, microphysiology, the fiber-type composition, and the actual metabolic conditions [63]. For both fundamental model designs, we assumed, following earlier observations and to keep the model parsimonious, that the maximum force per area is similar for all members of the relevant extensor muscle group and therefore assigned only one parameter to the muscles of the group (f_i or υ_i) [63]. Thus, we set up two parametric mixed model alternatives, each with several refinement levels to be calibrated (see Table II): The first model estimated the isometric force recruitment from the physics model based on the activated volume of the extensor muscles, i.e., the volume with suprathreshold electric field. The electric field threshold was a group parameter for all subjects, while the force-to-volume relation, which also determines the maximum recruitable force, was individual (υ_i for individual i). As the curvature of the APL coil was too small for some subjects, particularly when further rubber sheet spacers were inserted, a refinement allowed for an individual shift of the APL distance (APL_0,i). The second fundamental model used the activated share of the pennation-corrected muscle cross-section (A_eff). In its simplest form, the threshold field E_th was a group parameter (i.e., assumed to be a relatively constant figure, averaged out across many axon branchlets and terminals) and the force per area f_i an individual parameter. Similarly, in a further refinement, the APL coil was given freedom for individual distance shifts.
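The area-related recruitment rule (threshold condition plus force per suprathreshold cross-section share) boils down to a few lines. In the sketch below, the field map, cell size, and force-per-area value are invented for illustration; only the threshold of 65 V/m corresponds to the calibrated median value reported in the Results:

```python
import numpy as np

def recruited_force(e_mag, cell_area, e_th, f_per_area):
    """Isometric force estimate: force per area times the activated share
    of the (pennation-corrected) muscle cross-section, i.e., the cells
    whose induced electric field magnitude exceeds the threshold e_th."""
    active_area = cell_area * np.count_nonzero(e_mag > e_th)
    return f_per_area * active_area

# toy cross-section map (40 x 50 cells of 1 mm^2 each): field decays
# linearly with depth away from the coil side of the thigh
e_map = np.tile(np.linspace(120.0, 20.0, 50), (40, 1))  # V/m
force = recruited_force(e_map, cell_area=1e-6, e_th=65.0, f_per_area=3e5)
```

Raising the threshold shrinks the activated area and hence the force, which is exactly the mechanism that lets a single calibrated E_th fix the onset and saturation of the recruitment curve.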
The force recruitment models were coupled to the 3D physics model through the electric field and the muscle anatomy (see Fig. 2). Each model was calibrated to experimental data from the literature through mixed-effects maximum-likelihood regression [41]. The regression maximized the logarithmic likelihood L(F = (F_ij)_ij | M, E_th, f_i, υ_i, APL_0,i; x = (x_ij)_ij) of the forward model M generating the measured force responses for every sample of each subject (force F_ij in response to stimulus strength x_ij) by varying the parameters (E_th, f_i, υ_i, APL_0,i). As the experimental data set did not contain measurements of all coils in every subject, the regression routine was designed to allow for missing data but performed a combined regression of the entire set at once with shared and individual parameters and one overall likelihood per model; blanks due to missing data accordingly did not contribute to the likelihood but are reflected in the sample count. We evaluated Schwarz' Bayesian information criterion for model identification, as it accounts for the different degrees of freedom of the models, particularly in the presence of group and individual parameters.

III. Results

Whereas the forward model is relatively fast, the calibration to the data, being based on iterative optimization of the logarithmic likelihood, was computationally more demanding and needed more than 7,000 CPU-hours on a simulation server with 24 Xeon cores and 256 GB memory for completion. The best-fitting description used the threshold electric field, the area-related force, and the shift of the APL coil. The electric field threshold across all data amounted to E_th = (70.5 ± 21.6) V/m. However, the large spread was caused by only one outlier (see Subject S09 in Fig. 3).
This outlier reached 151 V/m, which may be the result of a bad fit of the curved APL coil during the experiments used here and a large effective coil-muscle distance due to a thick subcutaneous adipose layer of the specific subject, which exceeded the one in the model, rather than of a really higher local threshold of the intramuscular innervation. The median threshold field was only 65 V/m. Fig. 4 depicts the results of the calibrated model with threshold and maximum force as parameters for the four different coils with various distances from the thigh. Every coil, except the figure-of-eight device, was simulated in its initial condition and for distance values of 5 mm, 10 mm, and 15 mm between the coil and the skin surface in perpendicular direction to reproduce the experimental data. The figure-of-eight coil was incorporated for evaluating its unaltered performance only, since experimental values for its distance dependence are not available in the literature. The calibration of the threshold also determines the onset of the saturation at higher amplitudes, which agrees well with the experiments and provides validation (see Fig. 4). The simulation reflects the two degrees of freedom, i.e., the shift of the recruitment curve with the coil-body distance and the slope variations between the different coils, which were observed in experiments [41]. This is remarkable for one reason: neither of them was represented by any parameter or modelled in any way; they are purely a result of the electric field distribution generated by the different coils or their movement. The distance of the coil from the thigh shifts the threshold in an almost linear way over the observed range. The shift of the recruitment curves in the model is smaller than in the experiments, particularly for the APL coil [41]. This deviation (29% for the APL coil, 12% for RT-120 in the depicted curves) could have two causes. One is the experimental setup, in which spacer sheets made of rubber were used.
Whereas coils in direct contact with the thigh ideally match the surface, not least because of the flexibility of the subcutaneous adipose layer, rubber spacers are rather stiff, so that the distance increases to a higher extent than the thickness of the sheets. Off-standing edges and air-filled gaps in the case of the APL design are difficult to quantify and simulate correctly. In addition, the simulated model is a standard anatomy and does not represent the individual characteristics of the subjects behind the experimental data used here. The difference between the slopes of the various coils is relatively stable in experiments as well as in the model. The standard circular coil nearly coincides with the racetrack device RT-120. In the simulations, the APL coil presents an approximately 2.5 times higher slope in the linear range. The figure-of-eight coil, a device which is rarely used for neuromuscular stimulation of the intramuscular axon tree due to low torque but high distress compared to other devices, shows less than half the slope of the standard round coil. The evaluation of the experimental series in [41] yielded a ratio of the slopes of the APL and of the standard circular coil of 2.6. The different performance of the coils can be visualized clearly by marking the specific subvolume which exceeds the threshold for different points on the recruitment curve. Fig. 5 illustrates the part of the quadriceps muscle (dark) above the threshold field strength. Around the threshold, both coils exhibit just marginal activation. Whereas the increase is rather slow and locally confined for the round coil, the APL device is adjusted to the shape and the size of the quadriceps; this coil even reaches saturation within the output range of the pulse source and at reasonable power. The more homogeneous recruitment might be a further, physiologic argument for using coil designs similar to the APL device.
The round coil, although frequently used in neuromuscular stimulation, is not able to activate a wider range of fibers but forms relatively local hot spots. These inhomogeneous conditions could damage relaxed fibers due to the nonphysiological strain acting on them. Although a volume-based approach to the force estimation might be more obvious at first, the growth of the suprathreshold volume in the 3D figures for higher stimulation amplitudes is much larger than the experimentally observed force increase. A direct relation between the suprathreshold volume and the force response overestimates the slope of the APL coil (factor of 3.3 instead of 2.6) and predicts a notably lower threshold for this coil compared to the other devices. The difference would correspond to a distance of almost 20 mm, which was not observed in the experiments [41].

IV. Conclusion

Whereas magnetic stimulation had long been too weak for daily neuromuscular applications, more efficient coils eradicated this flaw. A rather simplistic understanding in combination with heuristic trial and error was sufficient for this improvement [13] and allowed the derivation of the coupling factor to estimate the maximum efficiency level [43], [71]. This work computationally reproduced the recruitment behavior of neuromuscular magnetic stimulation for the first time. The underlying model is comparably simple, but turned out to be very capable and predicted the different recruitment of various stimulation conditions correctly. If relative torques without absolute force values are sufficient, only one parameter has to be calibrated, i.e., the local intramuscular electric threshold field. The extraction of this single parameter from measurements captures all remaining effects, such as the experimentally found slope differences between coils and the shift of the recruitment curve for increased coil-thigh distance.
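The single-parameter calibration highlighted here can be illustrated with a toy profile-likelihood sketch: for Gaussian measurement noise, maximizing the likelihood reduces to least squares, the individual force scale f_i then has a closed-form solution per subject, and the shared threshold E_th is scanned on a grid; subject/coil combinations missing from the data simply contribute nothing. All numbers and the linear toy forward model below are hypothetical stand-ins for the 3D field model:

```python
import numpy as np

def area(x, eth):
    """Toy forward model standing in for the 3D field simulation:
    the activated cross-section grows linearly above the threshold."""
    return np.clip(x - eth, 0.0, None)

def calibrate(data, eth_grid):
    """Scan the shared threshold E_th; for each candidate, the individual
    force-per-area f_i follows in closed form from least squares.
    `data` maps subject -> (stimulus amplitudes, measured forces)."""
    best = None
    for eth in eth_grid:
        sse, f = 0.0, {}
        for subj, (x, force) in data.items():
            a = area(x, eth)
            f[subj] = (force @ a) / (a @ a)          # closed-form f_i
            sse += float(((force - f[subj] * a) ** 2).sum())
        if best is None or sse < best[0]:
            best = (sse, eth, f)
    return best[1], best[2]

# two synthetic subjects sharing E_th = 65 V/m but with different f_i
x = np.linspace(40.0, 120.0, 9)
data = {"S01": (x, 2.0 * area(x, 65.0)),
        "S02": (x, 3.5 * area(x, 65.0))}
eth_hat, f_hat = calibrate(data, np.arange(50.0, 81.0, 1.0))
```

The recovered shared threshold and per-subject scales mirror the structure of the actual mixed-effects regression, with one group parameter and one individual parameter per subject.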
It is very likely that the model can be simplified further, for instance with respect to the anatomic resolution, in order to provide a handy tool for coil designers. In this context, it could address the urgent need of experimentalists and clinicians for adequate equipment, as the performance of a design from the drawing board can be subjected to a first, yet rather informative quantitative evaluation without large effort. Furthermore, the approach was intentionally kept as simple as possible in order to demonstrate that neuromuscular magnetic stimulation behaves macroscopically, i.e., on average, notably simpler than the complexity of the microscopic neuromuscular structure might suggest. The parsimony of the model avoids a high number of parameters and factors that might facilitate over-fitting. Still, the model concurs with previous experiments and stimulation studies (see, e.g., [12], [72]). The simplicity also reflects the need of coil designers in academia and industry for a flexible and usable model. The model design and calibration became possible due to experimental data of recruitment curves under a sufficient variety of electromagnetic conditions from the literature [41]. The underlying study evaluated a series of coils and identified two degrees of freedom for changing the magnetic (and induced electric) field conditions across individuals. The essential parameters of the physiologic description, given by the average threshold of the intramuscular axon microstructure and the force per cross section, can be calibrated using measurements. All further characteristics of the recruitment curve, such as the onset of saturation, result from them. In addition, the calibration provides a value of 65 V/m for the local electric field strength at threshold in the microstructure after excluding one outlier, which is closely related to the microanatomic conditions of the intramuscular axon tree in the neighborhood of the nerve terminals.
Interestingly, the threshold value is comparable to corresponding values reported for magnetic stimulation in the brain [64]. The model is able to mimic the properties of known muscle stimulation coils quantitatively. It predicts the different slopes of various coils correctly and also describes the shift of the threshold with increasing spacing between coil and thigh. In addition to serving as a tool to design and test new neuromuscular stimulation equipment in silico, the model can replace the usually predefined sigmoid functions for the recruitment curve or the behavior of the contractile element in biomechanical models [44], [65], [66]. The incorporation into such a framework furthermore provides all temporal aspects of force generation, such as force onset, fatigue, and pulse repetition rate, and enables the integration into a control loop for functional magnetic stimulation [67], [68]. However, this step will require additional experimental data and induces further work in this field. The quality of the underlying anatomical and geometrical model is very high and among the most detailed in magnetic stimulation. However, even under isometric conditions, muscles change their shape during contraction. Most actual applications of neuromuscular stimulation, such as cycling, are not isometric but need force estimation for the entire motion cycle with changing knee angles and associated muscle length variations, for which the situation becomes even more complex. The presented model neglects this. On the one hand, this constraint kept the model simple and was important to enable this very first available model to explain and predict the recruitment of neuromuscular magnetic stimulation from field models and anatomy; the predictive power of the model is sufficiently high after all.
On the other hand, a future generation should include motion to represent the key applications of neuromuscular stimulation in rehabilitation and muscle training. Such a step, however, will require an anatomical model including the motion and the intermittently contracted muscle, or better, imaging scans during such a cycle. The model closed the gap between the stimulation coil and the force generated by the muscle, a gap which previously prevented systematic optimization of coils and required lengthy and costly experiments. Thus, an obvious next step for future research is the improvement of coils for neuromuscular magnetic stimulation. Due to the completeness of the model from the coil design to a muscle force prediction, the initial, more conventional heuristic search could, with an appropriate formalism, even be improved by a numerical optimization approach [73]. While the model may serve well for developing the technology further, replacing its macroscopic nature by a microscopic one including the intramuscular motor nerve tree may be another next step to improve the understanding of how and where on the local level magnetic stimulation activates nerve and in turn muscle fibers.

Appendix

The formulation used here and established previously is derived from applying the current continuity to Ampère's circuital law and introducing the electric field E through Ohm's law. After enforcing Gauß' law with Poincaré's lemma, resp. Helmholtz' theorem, through the magnetic vector potential with B = curl A for the magnetic flux density B and Coulomb gauge, the governing equations follow with the local electrical conductivity σ and the electrical potential ϕ. The FVM turns the differential equation into an integral balance over the volume V_{i,j,k} of each hexahedral volume cell (i, j, k).
Spatial discretization and first-order finitization with second-order error [75] of the integrals delivers

$$\Delta x_{i,j,k}\,\Delta y_{i,j,k}\,\Delta z_{i,j,k}\, f_{i,j,k} = \sigma_{i,j,k}\left(\Delta y_{i,j,k}\Delta z_{i,j,k}\,\frac{\phi_{i+1,j,k}-\phi_{i-1,j,k}}{2\Delta x_{i,j,k}} + \Delta x_{i,j,k}\Delta z_{i,j,k}\,\frac{\phi_{i,j+1,k}-\phi_{i,j-1,k}}{2\Delta y_{i,j,k}} + \Delta x_{i,j,k}\Delta y_{i,j,k}\,\frac{\phi_{i,j,k+1}-\phi_{i,j,k-1}}{2\Delta z_{i,j,k}}\right) + O(\Delta x^2, \Delta y^2, \Delta z^2) \quad (4)$$

for local variables with dimensions in the subscript. The vector potential for the excitation term f_{i,j,k} of each cell was provided by the coil's Biot-Savart solution. The second equation was amended by natural Neumann boundary conditions for ϕ on the surface and assembled into a matrix-vector equation [28], [74]. The equations were solved by the Gauß-Seidel method.

Fig. 1. Segmented model in the resolution used in the simulation (in the back). In the middle image and in the front, the model is peeled in order to uncover the otherwise hidden structures.

Fig. 2. Structure of the overall model with field simulation and parametric mixed recruitment model.

Fig. 3. Regression to experimental recruitment data. Each plot displays a different subject, with symbols representing the measurements and lines the force output of the 3D field model in combination with the calibrated mixed model. The experimental data set did not include measurements of all coils in every subject. Since the differences in forces are predicted from electric field models of the coils and not influenced by any calibrated parameters, the model, once calibrated, can estimate the recruitment of any coil, as shown in Fig. 4.

Fig. 4. Recruitment curves of all studied coils and conditions as predicted by the selected model. Four different distances were simulated (0 mm, 5 mm, 10 mm, 15 mm). The slope ratio between the different coils as well as the threshold-shift effect are predicted in good accordance with previous reports and not generated by any of the degrees of freedom, i.e., calibration parameters.
Instead, they arise solely from the anatomy and the electromagnetic conditions of the coil geometry and therefore do not vary across subjects here. The figure-of-eight coil (MC-B70) shows a substantially lower slope, which can be quantitatively tested in the future to stress-test the model.

Fig. 5. Recruitment for the round circular coil (a) and the APL design (b) at different stimulation amplitudes (20%, 35%, and 70% of maximum stimulator output): whereas the round circular coil shows a rather local activation pattern, the APL device leads to suprathreshold stimulation in almost the whole quadriceps muscle at 70%. The part of the muscle volume which fulfils the threshold condition is shaded in dark color; the blue stream lines illustrate the magnetic field.

Goetz
First-principles DFT insights into the mechanisms of CO2 reduction to CO on Fe (100)-Ni bimetals

Iron and nickel are known active sites in the enzyme carbon monoxide dehydrogenase, which catalyzes the reduction of CO2 to CO reversibly. The presence of nickel impurities in the earth-abundant iron surface could provide a more efficient catalyst for CO2 degradation into CO, which is a feedstock for hydrocarbon fuel production. In the present study, we have employed spin-polarized dispersion-corrected density functional theory calculations within the generalized gradient approximation to elucidate the active sites on Fe (100)-Ni bimetals. We sought to ascertain the mechanism of CO2 dissociation to carbon monoxide on Ni-deposited and Ni-alloyed surfaces at 0.25, 0.50 and 1 monolayer (ML) impurity concentrations. CO2 and (CO + O) bind exothermically, i.e., at −0.87 eV and −1.51 eV respectively, to the bare Fe (100) surface with a decomposition barrier of 0.53 eV. The presence of nickel generally lowers the amount of charge transferred to the CO2 moiety. Generally, the binding strengths of CO2 were reduced on the modified surfaces and the extent of its activation was lowered. The barriers for CO2 dissociation mostly increased upon introduction of Ni impurities, which is undesired. However, the 0.5 ML deposited (FeNi0.5(A)) surface is promising for CO2 decomposition, providing a lower energy barrier (of 0.32 eV) than the pristine Fe (100) surface. This active 1-dimensional defective FeNi0.5(A) surface provides a stepped surface and a Ni–Ni bridge binding site for CO2 on Fe (100). The Ni–Ni bridge site on Fe (100) is more effective for both CO2 binding or sequestration and dissociation than the stepped surface providing the Fe–Ni bridge binding site.

Introduction

The levels of carbon dioxide in the atmosphere continue to increase as a result of anthropogenic activities like the combustion of fossil fuels, leading to global warming and climate change [1].
CO2 is an abundant and cheap carbon-one source, which could be a useful feedstock in the production of transportation fuels [2], industrial chemicals [3], and polymers [1]. However, due to the stability and inertness of the CO2 molecule, catalysts are required for conversion [4,5]. Despite the difficulties associated with industrial CO2 conversion, anaerobic enzymes such as carbon monoxide dehydrogenases are known to reversibly catalyze the reduction of CO2 to CO at ambient conditions of temperature and pressure [6]. CO2 is said to anchor and receive electrons at the bridge site of iron and nickel in the Fe-Ni-S cluster of carbon monoxide dehydrogenases [7]. Catalytic CO2 decomposition into CO has become an active field of research in catalytic chemistry, as CO is the feedstock of the Fischer-Tropsch process for the production of long-chain hydrocarbon liquid transportation fuels [2,8]. Catalytic conversion of CO2 to a valuable industrial feedstock like CO is thus an attempt to ease the effects of CO2 on our environment. Although experimental studies of CO2 reduction on single-crystal surfaces show activity for CO2 chemisorption and reduction on bare Fe and Ni, including Ni (110) and Fe (111) [9], the energetics and mechanisms of CO2 transformation to viable products like CO, methane, formic acid, etc. on these bare metal surfaces were not well understood. The extent of CO2 activation and dissociation on iron and nickel surfaces has been shown experimentally to be face-specific [10][11][12][13][14]. This was later supported by density functional theory (DFT) calculations, whereby the barrier to CO2 dissociation on the low-Miller-index surfaces of iron followed the trend Fe (100) ~ (111) < (110) [15].
Several computational studies have also been carried out to investigate the interactions of CO2 with Ni, mostly employing spin-polarized density functional theory within the generalized gradient approximation (DFT-GGA) to understand the energetics on its various topologies [16][17][18][19][20][21][22][23][24]. CO2 is reported to bind more strongly on iron than on nickel, while its decomposition products bind more strongly to nickel than to iron. Kinetically, decomposition is observed to be favored on iron over nickel [21]. Iron and nickel are known active sites in the enzyme carbon monoxide dehydrogenase (CODH), and the presence of nickel impurities in earth-abundant iron could provide more active materials for CO2 decomposition to CO. CO2 interactions with bare and 1 ML deposited surfaces of the low-Miller-index surfaces of iron have been investigated previously [15], where nickel is seen to alter the ease of CO2 dissociation. However, to the best of our knowledge, no theoretical studies have been carried out on alloys of iron and nickel, and the effect of the nickel deposition concentration on the activity of the iron surface has also not been explored. In the present study, we have employed dispersion-corrected spin-polarized density functional theory calculations within the generalized gradient approximation (DFT-D2-GGA) to elucidate the mechanism of CO2 reduction to carbon monoxide on the pristine Fe (100) facet, its nickel alloys, and nickel-deposited surfaces at concentrations of 0.25 ML, 0.5 ML and 1 ML.

Computational details

All calculations were carried out with the spin-polarized density functional theory method as implemented in the Quantum ESPRESSO package [25]. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [26] was used in all simulations.
The surface was described by an asymmetric slab model, with periodic boundary conditions applied to the central super-cell so that it is reproduced periodically throughout space. The XcrysDen software [27] was employed for the visualization of structures and electron densities. Fermi-surface effects were treated by the Fermi-Dirac smearing technique, using a smearing parameter of 0.03 Ry. The energy threshold defining self-consistency of the electron density was set to 10^-6 eV. Grimme's D3 correction was applied to account for van der Waals dispersion. The iron (100) surface was cleaved with the METADISE code [28] and a p(3 × 3) super-cell was employed for all calculations, as the binding energy of CO 2 does not change significantly with increasing super-cell size [15,29]. The slab was built to a thickness of three unit cells, made up of six atomic layers. A vacuum of 20 Å was added above the surface to prevent interactions between periodic images along the z-axis. The top three layers of the slab were relaxed in all calculations, which has previously been reported to be the converged structure of iron (100) [15,30,31]. All gaseous adsorbates were optimized in a cubic box of side 20 Å and allowed to relax in all calculations. Neighboring adsorbates in laterally repeating units of the slabs were more than 5 Å apart. Based on convergence tests, the plane-wave cutoffs were set to 40 Ry for the kinetic energy and 320 Ry for the charge density. Monkhorst-Pack k-point grids of (7 × 7 × 7), (5 × 5 × 1) and (1 × 1 × 1) were used for the bulk, surface and adsorbate systems, respectively. The Climbing Image Nudged Elastic Band (CI-NEB) method was used to determine the energy barriers for dissociation. Vibrational modes were calculated, and a single imaginary frequency was taken as indicative of a transition state. Löwdin charge analysis was employed to characterize the charge density upon adsorption of CO 2 . 
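For readers reproducing this setup, the stated parameters map onto a pw.x input along the following lines. This is a hedged sketch, not the authors' actual input file: the pseudopotential filename, atom count, and magnetization are placeholders, and note that Quantum ESPRESSO expects conv_thr in Ry rather than eV.

```
&CONTROL
  calculation = 'relax'
/
&SYSTEM
  ibrav = 0, nat = 18, ntyp = 1,        ! placeholder slab size
  ecutwfc = 40.0,                       ! kinetic-energy cutoff, Ry
  ecutrho = 320.0,                      ! charge-density cutoff, Ry
  occupations = 'smearing',
  smearing = 'fermi-dirac',
  degauss = 0.03,                       ! Ry
  nspin = 2,
  starting_magnetization(1) = 0.5       ! placeholder value
/
&ELECTRONS
  conv_thr = 1.0d-6
/
ATOMIC_SPECIES
  Fe  55.845  Fe.pbe-spn-kjpaw_psl.UPF  ! placeholder pseudopotential name
K_POINTS automatic
  5 5 1 0 0 0
```

The slab geometry (ATOMIC_POSITIONS, CELL_PARAMETERS) would come from the METADISE-cleaved p(3 × 3) cell described above.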
Zero-point vibrational corrections were ignored in the binding energy estimations, as our previous studies show that they do not affect the qualitative view of the reaction on the surface.
CO 2 adsorption on pure and bimetallic surfaces
The computational parameters were first validated by calculating the bulk properties of iron. The unit cell of iron crystallizes in the body-centered cubic (BCC) form, and our spin-polarized DFT-D3 calculations were able to reproduce the electronic properties of bulk iron [15,32]. The stabilities of the nickel-modified surfaces were assessed through the formation energy, E_f = E_(modified slab) + n E_Fe - E_slab - n E_Ni for substitutional doping (for deposition, where no iron atoms are displaced, the n E_Fe term is omitted), where E_Fe is the energy of a single iron atom, E_Ni is the energy of a single nickel atom, n is the number of dopants, and E_slab is the energy of the perfect Fe (100) surface. As seen in Table 1, deposition is generally favored thermodynamically over alloying; nickel prefers to segregate on iron rather than alloy with it at all concentrations from 0.25 to 1 ML. The high instability of the doped surfaces relative to the deposited surfaces shows that, thermodynamically, at 0.25 ML, 0.5 ML and 1 ML concentrations nickel will segregate on the surface rather than diffuse inward to form alloys with Fe (100). Carbon dioxide adsorption was then studied on the pure and defective surfaces (see Fig. 2), and the binding energy of CO 2 on each surface was calculated as E_ads = E_(slab+CO2) - E_slab - E_CO2, where E_(slab+CO2) is the energy of the adsorbed system, and E_slab and E_CO2 are the energies of the isolated surface and gaseous carbon dioxide, respectively. The preferred CO 2 adsorption site on the clean Fe (100) surface has been reported to be the hollow site in the C 2V adsorption mode [15]. At this active site, i.e., the hollow site with CO 2 in the preferred C 2V adsorption state, we investigated the effect of doping on CO 2 binding and the extent of activation. As shown in Fig. 2, CO 2 binding to bare Fe (100) is exothermic, with an adsorption energy of -0.87 eV. This is consistent with earlier reported values of -0.9 eV [15], -0.92 eV [33], and -0.7 eV [19]. 
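These energy definitions reduce to simple arithmetic on DFT total energies. A minimal sketch follows; all numerical totals are hypothetical, chosen only so that the adsorption example reproduces the paper's bare Fe (100) value of about -0.87 eV.

```python
def adsorption_energy(e_slab_co2, e_slab, e_co2):
    """E_ads = E(slab+CO2) - E_slab - E_CO2; negative means exothermic binding."""
    return e_slab_co2 - e_slab - e_co2


def formation_energy(e_modified, e_slab, n, e_ni, e_fe=None):
    """Formation energy of a Ni-modified Fe slab.

    Substitutional doping (n Fe atoms replaced by Ni): pass e_fe so the
    displaced iron atoms are returned to the reservoir.
    Deposition (Ni ad-atoms, no Fe displaced): leave e_fe as None.
    """
    e_f = e_modified - e_slab - n * e_ni
    if e_fe is not None:
        e_f += n * e_fe
    return e_f


# Illustrative totals in eV (hypothetical; only the ~-0.87 eV result
# mirrors the paper's bare Fe(100) adsorption energy).
print(adsorption_energy(-1050.87, -1000.0, -50.0))        # ~ -0.87
print(formation_energy(-1010.0, -1000.0, 2, -5.2))        # deposition case
print(formation_energy(-1010.0, -1000.0, 2, -5.2, -4.9))  # substitution case
```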
Introduction of nickel into the bulk of iron (structure d) as a point defect at 0.25 ML increases the binding strength of CO 2 to -0.92 eV, with CO 2 coordinating to four iron atoms at the hollow site. Increasing the nickel concentration to a 1D defect (structure e) increases the iron-CO 2 interaction at the hollow site to -0.96 eV. At the 2D defect (structure f), however, the iron-CO 2 interaction at the hollow site is decreased to -0.84 eV, which is weaker than the binding on bare Fe (100). Comparing the electronegativities of nickel and iron, nickel is more electron-withdrawing, and increasing its concentration in the bulk of iron lowers the electron density available at the surface for transfer into the CO 2 moiety. This electron-withdrawing effect of nickel is felt at the surface as the nickel concentration increases. In addition, increasing the concentration of nickel dopant at the sub-surface site introduces appreciable Ni character at the surface, as bare Ni is known to bind CO 2 more weakly [34]. In the Ni-deposition cases shown in Fig. 2a, b, stepped surfaces are formed, which lead to lower CO 2 surface coordination (η-CO 2 ) and lower binding energies, owing to the presence of nickel on the surface, compared with the alloyed surfaces in Fig. 2d-f. For the single nickel ad-atom at 0.25 ML deposition, binding takes place at the bridge between iron and nickel, and the binding strength is weakened to -0.33 eV relative to bare iron. Increasing the nickel concentration at the surface decreases the binding strength of CO 2 further, to -0.19 eV for FeNi 1 (B) (c). Ni generally weakens the CO 2 binding strength, except in cases where the Ni effect is less strongly felt at the surface. These results show that decreasing the CO 2 surface coordination through the introduction of ad-atoms, and the presence of more electronegative atoms such as nickel at the surface, both weaken CO 2 binding. As shown in Table 1, the presence of nickel also influences the Fermi levels and the work functions of the slabs. 
The work function reflects the electrochemical potential of the slab, i.e., its ability to transfer charge: as the work function increases, this potential is reduced. The lower the electrochemical potential of the slab, the weaker the net charge it transfers to the CO 2 moiety and the lower the degree of activation of the molecule. The lower the concentration of Ni, the smaller its impact on the work function. Hence, to increase charge mobility, less electronegative materials hold better potential for reducing surface work functions, as Ni is more electronegative than Fe. Again, stepped surfaces with a low CO 2 coordination number show stronger binding, as seen for example by comparing the monolayer-deposited and single-atom-deposited surfaces, i.e., FeNi 0.25 (A) and FeNi 1 (A). The net amount of charge gained by the CO 2 molecule from the surface and the extent of CO 2 activation on the surfaces are seen to correlate, consistent with earlier studies [23].
CO 2 dissociation on pure and bimetallic surfaces
The reaction energies for CO 2 dissociation and the dissociation barriers were calculated as E_dis = E_products - E_slab - E_CO2 (5) and E_a = E_TS - E_IS (6), where E_products is the energy of the adsorbed dissociated system, E_slab is the energy of the isolated slab, E_CO2 is the energy of the isolated carbon dioxide molecule, E_TS is the energy of the transition state, and E_IS is the energy of the intermediate state, i.e., adsorbed CO 2 . To reduce surface interactions between adsorbed molecules, the decomposed species CO and O were also optimized individually on the surfaces to determine the binding energies in Table 2. As reported in Table 2, the binding energy of decomposed CO 2 (E dis ) is generally favorable thermodynamically relative to the binding energy of CO 2 (E ads ). The thermodynamics of the dissociation step was also calculated relative to the activated CO 2 moiety (Step dis ) and was found to be thermodynamically favored on all surfaces. 
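The barrier extraction of Eq. (6) amounts to simple post-processing of a converged NEB image path. The sketch below is illustrative: the image energies are made up, except that the 0.53 eV barrier mirrors the bare Fe (100) value reported in this work.

```python
def neb_energetics(image_energies):
    """Given energies (eV) of NEB images from the initial state (adsorbed CO2)
    to the final state (CO + O), referenced to the initial state, return the
    approximate barrier E_a = E_TS - E_IS and the step reaction energy."""
    e_is = image_energies[0]
    e_ts = max(image_energies)   # highest image ~ climbing image / TS
    e_fs = image_energies[-1]
    return e_ts - e_is, e_fs - e_is


# Hypothetical image path; only the 0.53 eV barrier matches the paper's
# bare Fe(100) result.
barrier, step_dis = neb_energetics([0.00, 0.21, 0.53, 0.05, -0.48])
print(barrier, step_dis)  # 0.53 -0.48
```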
The reaction barriers for the dissociation step were then computed on the various surfaces. The energy profile diagram showing the energy transitions along the reaction coordinate is shown in Fig. 3. On bare Fe (100), a dissociation barrier of 0.53 eV was found; earlier studies reported 0.22 eV [19] and 0.8 eV [35]. Comparing the energy barriers for the CO 2 dissociation step (E a in Table 2), nickel generally impedes CO 2 dissociation, as higher barriers are encountered on the modified surfaces than on pure Fe (100). The monolayer-deposited (1 ML) FeNi 1 (A) surface appears to be the most kinetically challenged surface for CO 2 dissociation, with a barrier of 4.16 eV. This is expected, as it is the surface with the most nickel atoms and thus the most pronounced nickel effects at the surface. Here the nickel behavior predominates, as nickel surfaces present higher decomposition barriers than iron [34]. The FeNi 0.5 (A) (1D defect) surface is kinetically promising for CO 2 dissociation, with CO 2 coordinated at a Ni-Ni bridge site. Although FeNi 0.5 (A) is the least stable of the deposited surfaces, its formation is thermodynamically favored; it is also the deposited surface that binds CO 2 most strongly and is the most suitable for CO 2 sequestration. Comparing the work-function trends with the behavior of the modified surfaces (Fig. 4), as the work function increases, the Fermi level, which reflects the electrochemical potential of the surface, is seen to decrease. The Fermi level correlates well with the extent of CO 2 activation: the lower the work function, the higher the Fermi level and the more activated the CO 2 molecule on the surface, as seen at the work function of 3.80 eV for pure iron. This shows that to increase CO 2 activation, the Fermi level and work function of the surface need to be modified. The barriers for CO 2 dissociation did not correlate strongly with the work function or with the degree of CO 2 activation. 
CO 2 binding and CO + O binding trends follow a similar pattern: surfaces that bind CO 2 strongly also bind CO + O relatively strongly compared with other surfaces, as seen for FeNi 0.25 (B) at 3.86 eV. Larger surface work functions are associated with weaker binding energies, as seen around 5 eV.
Conclusion
The effect of Ni alloying and deposition on the ease of direct CO 2 dissociation has been studied using the DFT method. Nickel prefers to segregate on iron rather than alloy with it at concentrations from 0.25 ML up to 1 ML, as alloying is seen to be unstable. The stabilities of the modified surfaces were of the order FeNi 0.25 (B) < FeNi 0.5 (B) < FeNi 1 (B) < Fe (100). CO 2 binds exothermically to the bare Fe (100) surface (E ads = -0.87 eV). Ni at the bulk site improves the binding of CO 2 and its applicability for CO 2 sequestration, except at 1 ML doping, where the effect of bulk Ni is felt more strongly at the surface. These results show that introducing a high amount of nickel into the bulk of iron weakens CO 2 binding. The work function and Fermi-level energy of the modified surfaces correlate strongly with the degree of CO 2 activation and the binding trends. Thermodynamically, dissociation is favored on all surfaces probed. Kinetically, CO 2 dissociation is most favored on the FeNi 0.5 (A) surface, which is stepped and allows CO 2 to coordinate to two surface Ni atoms. Generally, the barriers for CO 2 dissociation are raised compared with bare Fe (100), especially on the monolayer-deposited (1 ML) FeNi 1 (A) surface. Ni deposition on Fe at 0.5 ML coverage could offer the most viable nickel-modified iron surface for CO 2 reduction and would provide a more reactive surface for hydrogen-unassisted CO 2 splitting into CO (a feedstock essential for the Fischer-Tropsch process).
Improved Method for Individualization of Head-Related Transfer Functions on Horizontal Plane Using Reduced Number of Anthropometric Measurements
An important problem to be solved in modeling head-related impulse responses (HRIRs) is how to individualize HRIRs so that they are suitable for a listener. We modeled the entire magnitude head-related transfer functions (HRTFs), in the frequency domain, for sound sources on the horizontal plane of 37 subjects using principal components analysis (PCA). The individual magnitude HRTFs could be modeled adequately well by a linear combination of only ten orthonormal basis functions. The goal of this research was to establish multiple linear regression (MLR) between the weights of the basis functions obtained from PCA and a few anthropometric measurements, in order to individualize a given listener's HRTFs from his or her own anthropometry. We propose here an improved individualization method based on MLR of the weights of the basis functions, utilizing 8 measurements chosen out of 27 anthropometric measurements. Our objective experimental results show performance superior to that of our previous work on individualizing minimum-phase HRIRs, and also better than similar research. The proposed individualization method shows that the individualized magnitude HRTFs approximate the original ones well, with small error. Moving sound synthesized with the reconstructed HRIRs could be perceived as if it were moving around the horizontal plane.
INTRODUCTION
Without two eyes, the direction of a sound source can be recognized by a person using his or her two ears. The primary cues for localizing the direction of a sound are the interaural time difference (ITD), the interaural level difference (ILD), and the spectral modification caused by the pinna, head, and torso. These primary sound cues are encoded in the HRTF. On the horizontal plane, ITD and ILD are the two main cues for the perception of sound direction [1]. 
The HRTF is defined as the acoustic filter of the human auditory system, in the frequency domain, from a sound source to the entrance of the ear canal. The counterpart of the HRTF in the time domain is known as the head-related impulse response (HRIR). One key application of binaural HRTFs is the creation of a Virtual Auditory Display (VAD) in virtual reality, to filter monaural sound. This is based on the psychoacoustic fact that a convincing spatial sound can be obtained using only two channels. As suggested by [3] and [4], the HRTF changes with the direction of the sound source and varies from subject to subject due to inter-individual differences in anthropometric measurements. Synthesis of an ideal VAD system requires a series of empirical measurements of individual HRTFs for every listener. These measurements are not practical because of the heavy and expensive equipment required, as well as the long measurement time. Most commercial virtual auditory systems are currently synthesized using generic/non-individualized HRTFs that ignore inter-subject differences. However, non-individualized HRTFs suffer from distortions such as in-head localization when using headphones, inaccurate lateralization, poor vertical effects, and weak front-back distinction, caused by unsuitable HRTFs applied to a listener [1], [4]. Thus, it is a priority to develop an individualization method that estimates proper HRIRs for a listener, presenting adequate sound cues without measurement of the individual HRIRs. The individualization of the HRTF in the frequency domain, or the HRIR in the time domain, is nowadays a challenging subject of much research. 
Several HRTF individualization methods have been developed, such as HRTF clustering and selection of a few most representative ones [5], HRTF frequency scaling [6], a structural model of composition and decomposition of HRTFs [7], HRTF database matching [8], the boundary element method [9], HRIR subjective customization of pinna responses [10] and of pinna, head, and torso responses [11] in the median plane, and HRTF personalization based on multiple regression analysis (MRA) in the horizontal plane [12]. Shin and Park [10] suggested an HRIR customization method based on subjective tuning of only the pinna responses (0.2 ms out of the entire HRIR) in the median plane, using PCA of the CIPIC HRTF Database [2]. They achieved the customized pinna responses by letting a subject tune the weight on each basis function. Hwang and Park [11] followed a similar method to [10], but they fed PCA with the entire median-plane HRIRs; each HRIR was 1.5 ms long (67 samples) from the arrival of the direct pulse. These HRIRs include the pinna, head, and torso responses. They subjectively tuned the weights of the three dominant basis functions corresponding to the three largest standard deviations at each elevation. Hu et al. [12] personalized the estimated log-magnitude responses of HRTFs by MRA. First, the log-magnitude responses are estimated using PCA as a linear combination of weighted basis functions. The weights of the basis functions are then estimated from anthropometric measurements using MRA. Our individualization method is similar to that in [12], but in the PCA modeling we employed the magnitude responses of the HRTFs instead of the log-magnitude responses utilized by Hu et al.; our selection procedure for anthropometric measurements is also different. The entire set of horizontal magnitude HRTFs calculated from the original HRIRs in the CIPIC HRTF Database is included in a single analysis. 
Thus, all horizontal magnitude HRTFs for both ears share the same set of basis functions, which cover not only the inter-individual variation but also the inter-azimuth variation. This paper presents an individualization method based on a statistical PCA model of magnitude HRTFs and MLR between the weights of the basis functions and a selected few anthropometric measurements, which differs from and improves upon [12]. Section 2 describes the proposed individualization algorithm, the database used, minimum phase analysis, PCA of magnitude HRTFs, minimum phase reconstruction and synthesis of HRIR models, individualization of magnitude HRTFs using MLR, and the correlation analyses for selecting the independent and dependent variables of the MLR models. Section 3 discusses the experimental results, covering the basis functions and their weights resulting from PCA, and the performance of the proposed individualization method.
PROPOSED INDIVIDUALIZATION METHOD
The goal of our research is to develop an improved individualization method for HRTFs on the horizontal plane, using multiple regression models between magnitude HRTFs and a few anthropometric measurements. This method individualizes magnitude HRTF models into suitable HRIRs for a given listener, using a few of his or her own anthropometric measurements. The suitable individualized HRIRs are necessary when the listener uses a spatial audio application. The schematic diagram of the proposed HRTF individualization method is shown in Fig. 1. The database of HRIRs used in this research was provided by the CIPIC Interface Laboratory of the University of California at Davis [2], [3]. This database is reviewed briefly in a subsection below. First, as seen in Fig. 1, we obtained from the database the entire set of original HRIRs on the horizontal plane for 37 subjects, consisting of 50 HRIRs for each ear of each subject. 
We used a total of 3700 HRIRs in modeling and individualizing the HRIRs of a listener. Each HRIR was processed by a 256-point fast Fourier transform (FFT) to transform it into its corresponding complex HRTF. As the object of HRTF modeling using PCA, we took only 128 frequency components of the magnitude of the complex HRTF; at this step, the phase of the complex HRTF was discarded. Then, we computed the mean of the entire set of magnitude HRTFs. This mean was subtracted from each magnitude HRTF to obtain its corresponding directional transfer function (DTF). This subtraction was performed to center the magnitude HRTF data, which is necessary for PCA to give a good result. For HRTF modeling purposes, all DTFs were thus fed into PCA. The PCA delivered 128 ordered basis functions or principal components (PCs) and their weights (PCWs). The PCs were ordered from the PC with the largest eigenvalue to the PC with the smallest eigenvalue. It must be kept in mind that each eigenvalue determines the percentage of the variance of all DTFs explained by its corresponding PC. The first PC, corresponding to the largest eigenvalue, explained the largest percentage of the variance of the entire set of DTFs. To later attain individualized HRTFs for a new listener, we performed multiple linear regression (MLR) between the PCWs resulting from PCA and a few anthropometric measurements of the 37 subjects in the database. The detailed selection process of the anthropometric measurements, from a total of 27, is explained in a separate subsection below. The MLR method provided regression coefficients that relate the PCWs to the selected anthropometric measurements. These regression coefficients were then applied to a set of anthropometric measurements of a new listener to obtain estimated PCWs for that listener. A linear combination of the PCs weighted by these estimated PCWs resulted in an individualized DTF. 
The desired individualized HRIRs of a listener were attained using the reconstruction process shown by the dashed lines in Fig. 1. Each individualized DTF achieved from the MLR method and PCA was added to the mean of the DTFs calculated before, to yield its individualized magnitude HRTF. A minimum phase was imposed on the individualized magnitude HRTF to produce an individualized complex HRTF. Here we followed the assumption that the phase of the HRTF can be approximated by minimum phase [13]. The inverse Fourier transform was finally applied to obtain individualized HRIRs from the corresponding complex HRTFs. The initial left- and right-ear time delays, due to the distance from the sound source in a particular direction to each ear drum, were inserted into the left-ear HRIR and the right-ear HRIR, respectively. The database used, the minimum phase analysis, PCA of the magnitude HRTFs in the frequency domain, minimum phase reconstruction and synthesis of HRIRs, the MLR method, and the selection of anthropometric measurements are explained in the following subsections.
The Database Used
Most commercial VAD systems convolve input signals with a pair of standard HRIRs, which ordinarily come from a series of studies that used public HRIR data of the acoustic manikin known as the Knowles Electronics Manikin for Auditory Research (KEMAR). HRIRs vary significantly among individuals, hence a database resulting from a sufficiently large number of HRIR measurements is needed in order to perform HRIR modeling. The CIPIC Interface Laboratory at the University of California, Davis, USA, measured HRIRs with high spatial resolution from more than 90 subjects [2], [3]. They have released CIPIC HRTF Database Release 1.2, a subset of the database covering 45 subjects. This database is downloadable from their website and can be used freely for academic research purposes. 
The CIPIC HRTF Database not only consists of impulse responses from 1250 spatial directions for each ear of each subject, but also includes a set of anthropometric measurements of all subjects. We used the CIPIC HRTF Database in our research because of its extensive features. The number of human subjects involved in the HRIR measurements is 43, comprising 27 males and 16 females. Two other subjects are a KEMAR with small pinnae and a KEMAR with large pinnae. All impulse responses were measured with the subject seated at the center of a circle of radius 1 meter. The position of the head was not fixed, but the subject could monitor his or her head position. As sound sources, Bose Acoustimass TM loudspeakers, with a cone diameter of 5.8 cm, were mounted at different positions on a half-circle hoop. Golay-code signals were generated by a modified Snapshot TM system from Crystal River Engineering. Each ear canal was blocked, and Etymotic Research ER-7C probe microphones were used to pick up the Golay-code signals. The output of each microphone was sampled at 44,100 Hz with 16-bit resolution and processed by Snapshot's oneshot function to produce a raw HRIR. A modified Hanning window was applied to the raw HRIR to eliminate room reflections, and the result was then free-field compensated to improve the spectral characteristics of the transducers used. The length of each HRIR is 200 samples, with a duration of about 4.5 ms. The direction of a sound source was determined by the azimuth angle, θ, and the elevation angle, ø, in interaural-polar coordinates. Although the anthropometric measurements are not very accurate, they allow investigation of possible correspondences or correlations between physical dimensions and HRTF characteristics. Following the approach suggested by Genuit [5], there are 27 anthropometric measurements in the database, consisting of 17 measurements of the head and torso and 10 measurements of the pinna, as shown in Fig. 2 [2], [3]. 
Generally, the histograms of the subjects' measurements indicate a normal distribution of values. Discarding the offset measurements x4, x5, and x13, whose percentages of deviation can be ignored, the mean percentage of deviation is ±26%. Thus, there is a sufficient amount of variation in the measurements and sizes of the subjects in the database used.
Minimum Phase Analysis
Each HRIR in the database used was measured with a distance of one meter from the sound source to the center of the subject's head. In the graph of HRIR versus time, a time delay is observed before the maximum amplitude of the HRIR occurs; this is the time needed by the sound wave to propagate from its source to the ear drum. To eliminate this time delay, the HRIR can be reconstructed into a minimum-phase HRIR using the Hilbert transform. In the minimum-phase HRIR, the phase is no longer arbitrary; it is set in such a way that it is determined entirely by the magnitude response. A linear time-invariant filter, H(z) = B(z)/A(z), is said to have minimum phase if all of its poles and zeros are inside the unit circle, |z| = 1, in the z-plane. Equivalently, a filter H(z) has minimum phase if both it and its inverse, 1/H(z), are stable. A minimum phase filter is also causal, since noncausal terms in the transfer function correspond to poles at infinity. The simplest example of a minimum phase filter would be the unit-sample advance, H(z) = z, which consists of a zero at z = 0 and a pole at z = ∞. A filter is said to have minimum phase if both the numerator and the denominator of its transfer function are minimum phase polynomials in z^-1, i.e., a polynomial of the form B(z) = b0 (1 - θ1 z^-1)(1 - θ2 z^-1) ··· (1 - θM z^-1) 
A general property of minimum phase impulse responses is that among all impulse responses, hi(n), having identical magnitude spectra, impulse responses with minimum phases experience the fastest decay in the sense that, where hmp(n) is a minimum phase impulse response. The equation above represents that the energy in the first K + 1 samples of the minimum-phase case is at least as large as any other causal impulse response having the same magnitude spectrum. Thus, minimum-phase impulse responses are maximally concentrated toward time t=0 among the space of causal impulse responses for a given magnitude spectrum. Because of this property, minimum-phase impulse responses are sometimes called minimum-delay impulse responses. It is known that in a minimum phase filter, H(z) = e a(z) e i b(z) , the relations, b(z) = -H {a(z)} and a(z) = -H {b(z)}, are also valid, where H {} is the Hilbert transform. The logarithmic change of these relations was obtained mainly through the calcultion of real cepstrum. It is proposed by Kulkarni et al. [13], that the phase of HRIR can be approximated by minimum phase. A minimum phase system function, H(z), of an HRIR, h(n), has all poles and all zeros that are placed inside the unit circle |z| =1 in the z-plane. The calculation of real cepstrum of an original HRIR, which has arbitrary phase, results in a minimum phase HRIR, hmp(n). We can say that the minimum phase HRIR is the removed initial time delay version of the correspond original HRIR. But both kinds of HRIR have the same magnitude spectrum in the frequency domain. The real cepstrum, v(n), of HRIR, h(n), is calculated as follow, where ln and Re{} denote respectively natural logarithm and the real part of a complex variable, FD{} and F 1 − D {} are the discrete Fourier transform and its inverse respectively. This real cepstrum is then weighted by the following window function, 2 if n > 0. 
In the case of a rational H(z), the window function can be seen as performing a complex-conjugate inversion of the zeros outside the unit circle, so that a minimum phase HRIR is obtained. Hence the desired minimum phase HRIR, hmp(n), results from hmp(n) = Re{ F_D^-1{ exp( F_D{ w(n) v(n) } ) } }.
PCA of Magnitude HRTFs in Frequency Domain
Complex HRTFs were obtained by applying the fast Fourier transform (FFT) to the HRIRs of the database used. The entire set of complex HRTFs was computed from the left-ear and right-ear HRIRs of 37 subjects on the horizontal plane. There are 50 HRIRs from different directions (50 azimuths) on the horizontal plane for each ear of a subject, so a total of 3700 complex HRTFs were produced by 256-point FFTs. We took only the magnitudes of all complex HRTFs as the input of the PCA modeling. Only the first 128 frequency components of each magnitude HRTF were taken into the analysis, because of the symmetry of the magnitude spectrum. A matrix composed of DTFs is needed by PCA. The original data matrix, H (N×M), is composed of the magnitude HRTFs on the horizontal plane, in which each column vector, hi (i = 1, 2, …, M), represents a magnitude HRTF of one ear of a subject in one direction on the horizontal plane. The number of magnitude HRTFs of each subject on the horizontal plane is 100 (2 ears × 50 azimuths). Hence, the size of H is 128 × 3700 (N = 128, M = 3700). The empirical mean vector (µ: N×1) of all magnitude HRTFs is given by µ = (1/M) Σ(i=1..M) hi. The DTF matrix, D, is the mean-subtracted matrix, D = H - µy, where y is a 1×M row vector of all 1's. The next step is to compute the covariance matrix, S = (1/M) DD*, where * indicates the conjugate transpose operator. The basis functions or PCs, vi (i = 1, 2, …, q), are the q eigenvectors of the covariance matrix, S, corresponding to the q largest eigenvalues. If q = N, the DTFs can be fully reconstructed by a linear combination of the N PCs. However, q is set smaller than N, because the goal of PCA is to reduce the dimension of the dataset. 
An estimate of the original dataset is obtained here with only 10 PCs, which account for 93.93% of the variance in the original data D. By using only 10 PCs to model the magnitude HRTFs, we expected to obtain satisfactorily good results. The PC matrix, V = [v1 v2 … vN], consisting of the complete set of PCs, can be obtained by solving the eigenvalue equation SV = VΛ, where Λ = diag{λ1, …, λ128} is a diagonal matrix formed by the 128 eigenvalues; each eigenvalue, λi, represents the sample variance of the DTFs projected onto the i-th eigenvector or PC, vi. Then, the weights of the PCs (PCWs), W (10×3700), corresponding to all DTFs, D, can be obtained as W = V*D, where the PC matrix is now reduced to V = [v1 v2 … v10]. The PCWs represent the contribution of each PC to a DTF; they contain both the spatial features and the inter-individual differences of the DTFs. Thus, the matrix of models of the magnitude HRTFs, Ĥ, is given by Ĥ = VW + µy. Table 1 shows the percentage variance and the cumulative percentage variance of the DTFs in the database explained by PC-1 to PC-20 (v1, v2, …, v20), respectively. Applying more PCs would reduce the modeling error between the magnitude HRTFs of the database and the modeled magnitude HRTFs, but on the other hand it would cost more computing time and a larger memory space. The PC matrix, V, which at first has 128×128 elements, was reduced to a matrix of only 128×10 elements. We used only the first 10 PCs out of all 128 PCs; in this way, only 10 PCWs are needed to form each model. Hence, one can clearly see the advantage of PCA in significantly reducing the memory space needed. Fig. 3 shows the left magnitude HRTF of Subject 003 and its PCA model for the direction with azimuth -80° and elevation 0° (top panel). The bottom panel shows the right magnitude HRTF and its PCA model for the same direction. We can see that the models approximate the corresponding magnitude HRTFs well. 
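The PCA pipeline above (mean removal, covariance eigendecomposition, projection onto the leading PCs, reconstruction) can be sketched with NumPy. The data below are synthetic stand-ins; in practice the 128 × 3700 magnitude-HRTF matrix of the paper, with N = 128, M = 3700 and q = 10, would be used.

```python
import numpy as np

rng = np.random.default_rng(0)
N, M, q = 16, 200, 4                   # paper uses N=128, M=3700, q=10
H = rng.normal(size=(N, M))            # stand-in for magnitude HRTFs

mu = H.mean(axis=1, keepdims=True)     # empirical mean vector (Nx1)
D = H - mu                             # DTF matrix (centered data)
S = (D @ D.conj().T) / M               # covariance matrix

lam, V = np.linalg.eigh(S)             # eigh returns ascending eigenvalues
order = np.argsort(lam)[::-1]          # sort PCs by descending variance
lam, V = lam[order], V[:, order]

Vq = V[:, :q]                          # q leading PCs (basis functions)
W = Vq.conj().T @ D                    # PCWs: weights of the PCs (qxM)
H_hat = Vq @ W + mu                    # low-rank model of the data
explained = lam[:q].sum() / lam.sum()  # fraction of variance captured
```

With q = N the reconstruction is exact; truncating to q < N trades a small modeling error for the large reduction in storage described above.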
Minimum Phase Reconstruction and Synthesis of HRIR Models

As explained in the previous subsection, we obtained the PC matrix, V, and the PCW matrix, W, from the PCA method. Both matrices, together with the empirical mean vector, µ, were applied to yield the matrix of models of magnitude HRTFs, Ĥ, as suggested by (11). By now, we could calculate the models of the magnitude HRTFs of both ears. In order to synthesize the models of the complex HRTFs, the phase information of the left- and right-ear models of the magnitude HRTFs must be inserted into those models. We reconstructed the models of the complex HRTFs based on the approach of Kulkarni et al. [13], who assumed that the phase of an HRTF is minimum phase. The phase function for a given model of a magnitude HRTF was calculated using the Hilbert transform of the natural logarithm of the model. The minimum phase, ϕmp, of a model of a magnitude HRTF, ĥi (i=1,2,…,M), is given by

ϕmp = Imag{ Hilbert( −ln ĥi ) },     (12)

where Imag{} denotes the imaginary part of a complex number and ln is the natural logarithm. Thus, the model of the minimum-phase complex HRTF, ĥc, can be calculated as

ĥc = ĥi · exp(j·ϕmp),     (13)

where exp() denotes the exponential function. The corresponding model of the minimum-phase HRIR, ĥmp(n), is given by the inverse fast Fourier transform (IFFT) of its complex HRTF, ĥc, from (13). Furthermore, in reconstructing the models of the left-ear and right-ear minimum-phase HRIRs for a particular direction of the sound source into the related models of the left-ear and right-ear HRIRs, we needed to insert into each minimum-phase HRIR model the respective time delay related to the distance travelled by the sound wave from the sound source to each ear drum of a subject. The time delays to be inserted were obtained from the means of the time delays for the respective directions on the horizontal plane over all subjects in the database used.
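The Hilbert-transform reconstruction just described can be sketched as follows. The helper name and the integer-sample delay handling are illustrative assumptions (a real implementation would use the fractional mean delays from the database); the sanity check rebuilds a known minimum-phase filter from its magnitude alone:

```python
import numpy as np
from scipy.signal import hilbert

def min_phase_hrir(mag, itd_samples=0):
    # phi_mp = Imag{ Hilbert( -ln |H| ) }, as in the minimum-phase
    # reconstruction above; `mag` is a full-length (symmetric)
    # magnitude spectrum with no zero bins.
    phi = np.imag(hilbert(-np.log(mag)))
    h_c = mag * np.exp(1j * phi)           # minimum-phase complex HRTF
    h_mp = np.real(np.fft.ifft(h_c))       # minimum-phase HRIR via IFFT
    # crude integer-sample onset delay standing in for the mean ITD
    return np.roll(h_mp, itd_samples)

# Sanity check: the magnitude of a known minimum-phase filter
# h0 = [1, 0.5] (zero at z = -0.5, inside the unit circle) should be
# reconstructed back to approximately h0 itself.
Nfft = 64
h0 = np.zeros(Nfft)
h0[0], h0[1] = 1.0, 0.5
mag = np.abs(np.fft.fft(h0))
h_rec = min_phase_hrir(mag)
```

Since scipy's `hilbert` builds the analytic signal via the FFT, this is numerically equivalent to the causal-cepstrum (homomorphic) construction of the minimum-phase spectrum.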
The difference between the left-ear time delay and the right-ear time delay is called the interaural time difference (ITD), which humans need to determine the sound-source direction. Fig. 4 shows, on the left panel, the original HRIRs of subject 003 for the direction with azimuth -80° and elevation 0°. On the right panel, we can see the related models of the left and right HRIRs. These models resulted from the reconstruction of the PCA models of the magnitude HRTFs into their corresponding HRIRs, as explained before. However, the models of the magnitude HRTFs obtained so far had not yet been individualized.

Individualization of Magnitude HRTFs Using MLR

As shown in Fig. 1, the individualization of the models of the magnitude HRTFs, which resulted from the PCA, was done through MLR of the PCW matrix, W, using the anthropometric measurements of a listener. From the matrix W of (10), we can extract a weight vector, wi,θ (37x1), consisting of the weights of the i-th PC, vi, for one ear of all subjects at azimuth θ on the horizontal plane, where i=1,2,...,10. In this research, we employed only 8 anthropometric measurements of a subject in the individualization process. The selection of these 8 measurements is discussed in detail in a separate subsection below. These selected measurements of all subjects being analyzed were gathered in the columns of an anthropometric matrix, X (37x9), whose first column consists of all 1's. Suppose that the relation between the weight vector, wi,θ, and the anthropometric matrix, X, is given by

wi,θ = X·βi,θ + Ei,θ,     (14)

where βi,θ (9x1) is the regression-coefficients vector and Ei,θ (37x1) is the estimation-errors vector. The regression coefficients were found by least-squares estimation. This estimation is performed by solving the optimization problem min{Ei,θ(n)}, where Ei,θ(n) is the estimation error of the n-th dependent variable. The PCWs and the anthropometric measurements are, respectively, the model's dependent and independent variables.
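Under hypothetical stand-in data, this least-squares fit can be sketched with numpy as follows (`np.linalg.lstsq` evaluates the same closed-form solution β = (XᵀX)⁻¹Xᵀw numerically):

```python
import numpy as np

rng = np.random.default_rng(2)
n_subj, n_meas = 37, 8

# Hypothetical anthropometric matrix X (37 x 9): an intercept column of
# ones plus 8 selected measurements per subject (random stand-ins here).
X = np.hstack([np.ones((n_subj, 1)), rng.standard_normal((n_subj, n_meas))])

# Hypothetical PCW vector w_{i,theta}: the i-th PC weight of one ear of
# all 37 subjects at one azimuth theta.
w = rng.standard_normal(n_subj)

# Least-squares estimate of beta_{i,theta}; lstsq solves the same
# normal equations with better numerical behavior than an explicit inverse.
beta, *_ = np.linalg.lstsq(X, w, rcond=None)

# Individualized estimate of the PCWs for these subjects
w_hat = X @ beta
```

Repeating this fit for every PC index i and azimuth θ yields the full set of regression coefficients used to predict a new listener's PCWs from their measurements.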
From (14), the regression coefficients for the i-th PCWs at azimuth θ, βi,θ, can be estimated as

βi,θ = (Xᵀ·X)⁻¹·Xᵀ·wi,θ.     (15)

As suggested by (15), to enhance the performance of the MLR method, both the dependent and the independent variables need to be selected carefully. By applying PCA to the magnitude HRTFs, the dimensions of the dependent variables were reduced significantly, and so was the complexity of the models. Several correlation analyses were employed to select the independent variables in order to obtain a more accurate and simpler MLR model, as explained further in subsection 2.6.

Correlation Analyses for Selection of Anthropometric Measurements

We employed the CIPIC HRTF Database, which is composed of both the measured HRIRs and some anthropometric measurements for 45 subjects, including the KEMAR mannequin with both small and large pinnae. The detailed definitions of all 27 anthropometric measurements are given in [2], [3] and can be seen in Fig. 2. Modeling a listener's own HRIRs via his or her own anthropometric measurements directly affects the feasibility and complexity of the system. It is obviously not advisable to use all measurements in the model: unnecessary measurements would conceal useful information, resulting in a worse regression model. Besides, many measurements are very difficult to measure correctly. There are three parameters that are psychoacoustically important in the perception of natural sound, i.e. the interaural time difference (ITD), the interaural level difference (ILD), and the pinna-notch frequency, fpn. The ITD is the time difference between the arrival of the first pulse from a sound source in a particular direction at the left ear drum and its arrival at the right ear drum. For sound-source directions on the median plane, the ITD is near zero; for a perfectly symmetric head, there is no ITD on that plane. Thus, one can say that the ITD is a function of azimuth on planes with fixed elevation.
The ITD can be calculated from the time delay of the maximum cross-correlation between the left and right HRIRs at a particular direction. The ILD is defined as the level or magnitude difference (in dB) in the frequency domain between the left and right magnitude HRTFs at a particular direction of the sound source. For a particular direction, we obtained an ILD for each frequency component in the range 0–22050 Hz. ILDs are generally analyzed for a given frequency component on the horizontal plane and on the median plane. Another significant psychoacoustic parameter is the pinna-notch frequency, fpn, the notch frequency in the magnitude spectrum of an HRTF caused by diffraction and reflection of the sound wave on a pinna. The ITD and ILD are significant for the perception of the azimuth of a sound source; they strongly affect the variation of the HRTFs on the horizontal plane. The ILD and fpn, on the other hand, play an important role in the perception of the elevation of a sound source and affect the variation of the HRTFs on the median plane. It is difficult to characterize the range of HRTF variation among subjects. However, the maximum ITD, ITDmax, the maximum ILD, ILDmax, and fpn are simple and perceptually relevant parameters that characterize the existing HRTF variation. Correlation analyses were applied to determine which anthropometric measurements have strong correlations with ITDmax, ILDmax, and fpn. From the most strongly correlated anthropometric measurements, four were chosen from the head and torso sizes, i.e. x1 with ρ = 0.736, x3 with ρ = 0.706, x6 with ρ = 0.726, and x12 with ρ = 0.768, where ρ denotes the correlation coefficient between the measurement and ITDmax. These 4 measurements were employed in the individualization of the magnitude HRTFs using the MLR method. Correlation analyses between ILDmax and the head and torso sizes provided weaker correlations but confirmed the choice of x1, x6, and x12.
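The ITD and ILD computations described above can be sketched as follows; the sampling rate and the toy impulse HRIRs are assumptions for illustration:

```python
import numpy as np

fs = 44100  # assumed sampling rate of the HRIR recordings

def itd_from_hrirs(h_left, h_right, fs=fs):
    # ITD = lag (in seconds) of the maximum cross-correlation
    # between the left and right HRIRs
    xcorr = np.correlate(h_left, h_right, mode="full")
    lag = np.argmax(xcorr) - (len(h_right) - 1)
    return lag / fs                        # positive: left ear lags

def ild_db(h_left, h_right, n_fft=256):
    # per-frequency-bin ILD in dB between left and right magnitude HRTFs
    mag_l = np.abs(np.fft.rfft(h_left, n_fft))
    mag_r = np.abs(np.fft.rfft(h_right, n_fft))
    return 20.0 * np.log10(mag_l / mag_r)

# Toy HRIRs: identical impulses, the left ear hearing it 20 samples later
h_r = np.zeros(128); h_r[10] = 1.0
h_l = np.zeros(128); h_l[30] = 1.0
itd = itd_from_hrirs(h_l, h_r)             # = 20 / fs seconds
```

Per-subject parameters such as ITDmax can then be correlated against each anthropometric measurement with, e.g., `np.corrcoef` to obtain the ρ values above.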
The selection of x3, x6, and x12 was also confirmed by the correlation analyses on the horizontal plane between the first PCWs, w1,θ, from the PCA of the magnitude HRTFs and the anthropometric measurements. We focused on the first PCWs because they have the largest variation across azimuths. The effects of the pinna sizes are stronger on the HRTFs on the median plane than on those on the horizontal plane [1], but overall the pinna sizes affect the HRTFs in all directions. The correlation analyses between fpn and the anthropometric measurements provided, in general, weaker correlations than those of ITDmax. Four pinna sizes had the strongest correlations with fpn and were therefore chosen: d1 with ρ = 0.435, d3 with ρ = 0.360, d5 with ρ = 0.204, and d6 with ρ = 0.280. These selected pinna sizes are easy to measure and represent measures of height and width. Hence, eight anthropometric measurements, x1, x3, x6, x12, d1, d3, d5, and d6, were chosen and fed into the MLR method in order to calculate the regression coefficients. These eight anthropometric measurements are the same as those used in our previous work [14]. The regression coefficients were then applied to estimate the PCWs of a DTF at each direction on the horizontal plane.

EXPERIMENTS' RESULTS AND DISCUSSION

In this section, we discuss the performance of the proposed individualization method based on objective simulation experiments comparing the original magnitude HRTFs of the database with the individualized models of the magnitude HRTFs. The experiments employed only the data on the horizontal plane of 37 subjects out of the 45 subjects in the database, because the database does not include the complete set of anthropometric measurements for all subjects: the selected 8 anthropometric measurements are available for only 37 subjects.

Basis Functions Resulting from PCA

The inputs of the PCA were 3700 DTFs processed from the HRIRs on the horizontal plane of 37 subjects.
By solving the eigen equation, we obtained 10 basis functions or PCs to model the given DTFs. Fig. 5 shows the first five basis functions, v1,...,v5. As shown in this figure, all five basis functions are roughly constant and approximately zero at frequencies below 1–2 kHz. This reflects the fact that there is almost no direction-dependent variability in the DTFs in this frequency range. Regardless of the weights applied to the basis functions, the resulting weighted sum will be close to zero in this range. Above about 2 kHz, all five basis functions have nonzero values. It is obvious that, with the exception of the first PC, the high-frequency variability in these basis functions represents the direction-dependent high-frequency peaks and notches in the DTFs. The higher-order basis functions have more ripples and more detail, especially at frequencies above about 2 kHz. These trends are similar for the sixth to tenth basis functions. Taken together, all basis functions seem to capture the high-frequency spectral variability. They also reflect spectral differences between sources in front of and behind the subject.

Weights of Basis Functions

Since PCA assumes that DTFs can be represented by a relatively small number of basic spectral shapes, the PCs, it seems reasonable to expect that the amount each basic shape contributes to the DTF at a given source position would be related, in a simple way, to the source azimuth and elevation. For source positions on the horizontal plane, this amount or weight is related to azimuth only. Ipsilateral sources have positive PC-1 weights, whereas contralateral sources have negative weights. The magnitudes of the weights for ipsilateral sources are much larger than those for contralateral sources. This distribution of PC-1 weights is similar across the 37 subjects, with low intersubject variability. As seen in Fig.
5, the first basis function has an almost flat magnitude across all frequencies, so the PC-1 weights can be said to act as an overall gain in the HRTF modeling. The remaining nine PC weights have larger variability on the ipsilateral side, while on the contralateral side, beginning at about azimuth 0°, the weights have almost constant values near zero. The higher the order of a PC, the flatter its weight pattern for sources on the horizontal plane. We observed that the patterns of the PC weights are roughly similar across subjects and ears.

Performance of Proposed Individualization Method

The performance of the estimated magnitude HRTFs on the horizontal plane, obtained either from the PCA or from the individualization, was evaluated by the ratio, in percent, of the mean-square error of the difference between the estimated magnitude HRTF and the original magnitude HRTF from the database to the mean-square value of the original magnitude HRTF:

ej(θ) = 100% · Σ [hj(θ) − ĥj(θ)]² / Σ [hj(θ)]²,

where hj(θ) is the j-th original magnitude HRTF at azimuth θ on the horizontal plane and ĥj(θ) is the corresponding estimated magnitude HRTF. The larger the error ej(θ), the worse the performance of the estimated magnitude HRTF; better localization results are achieved with a small error. Before individualizing the magnitude HRTFs using MLR, the mean error of the PCA modeling of the magnitude HRTFs was calculated across all data in the database. First, the PCA modeling was performed for all data from all source directions of the 45 subjects. This experiment resulted in a mean error of 3.31% across all directions and subjects, while the mean error across the directions on the horizontal plane was 3.65%. Second, the modeling was performed using the data at all directions of only 37 subjects, which resulted in a mean error of 3.32% overall and 3.68% on the horizontal plane. From these two experiments, it can be said that the corresponding mean errors were practically the same.
Third, the data of both ears of the 45 subjects at directions only on the horizontal plane were used, and a mean error of 3.67% was obtained. Finally, the PCA modeling was performed using the data of both ears of only 37 subjects at directions only on the horizontal plane. This experiment resulted in a mean error of 3.68%. Again, we obtained the same mean errors from the last two experiments. In summary, using the data of 45 subjects or of 37 subjects yielded the same mean errors across the related directions, and the mean errors on the horizontal plane were the same whether the data from all directions or only from the directions on the horizontal plane were used. These mean errors are less than half of the corresponding mean errors obtained in our previous work on PCA modeling of minimum-phase HRIRs [14]. In individualizing the magnitude HRTFs, we used only the data of both ears of the 37 subjects at directions on the horizontal plane, i.e. the results of the fourth experiment mentioned above. We obtained here a significantly small mean error of the PCA models of the magnitude HRTFs, 3.68%, compared with 8.32% in [14]. In turn, we individualized the PCA models of the magnitude HRTFs using MLR with the eight chosen measurements. The mean error differed from subject to subject in the database, and a good performance of the individualized left-ear magnitude HRTFs of a subject was not always matched by the right-ear ones. The overall mean error was only 12.17%, which was much better than the 22.50% of [14]. Fig. 7 shows the left- and right-ear errors as a function of azimuth, where positive azimuth corresponds to source directions on the right side. The right-ear errors of subject 003 are mostly very good, about 5%, across azimuths, while the left-ear errors of subject 163 seem to be much better than its right-ear errors. Under the assumption stated below, if the spectral distortion (SD) score defined by Hu et al.
[12] were applied to determine the performance of the individualized magnitude HRTFs, our SD scores for subject 003 and subject 163 on the front horizontal plane would be no larger than 1 dB, which is much better than in [12]. Using the logarithm property 20·log(|a|/|b|) = 20·log|a| − 20·log|b|, we assumed that the difference of log-magnitudes (20·log|a| − 20·log|b|) in the SD score might be replaced by the difference of magnitudes in our case, because we produced individualized magnitudes of HRTFs whereas Hu et al. produced individualized log-magnitudes of HRTFs from PCA. The proposed individualization method introduced additional errors overall, and these additional errors were introduced by the MLR. The unsystematic behavior of the PC weights across subjects and across directions made it quite difficult for the MLR to estimate adequately accurate regression coefficients. Besides, we performed here a linear regression of the anthropometric measurements to estimate the weights of the PCs; a higher-order regression might provide better estimates of these weights. The individualized magnitude HRTFs of subject 003 approximate the corresponding original magnitude HRTFs well, particularly at frequencies below about 8 kHz. Fig. 8 shows the individualized and original magnitude HRTFs for both the left and right ears for the extreme directions on the front horizontal plane. The top, middle, and bottom panels correspond to azimuths -80°, 0°, and 80°, respectively. Informal listening tests conducted with five subjects showed that all subjects perceived a good and natural moving sound around the horizontal plane when the subjects' individualized reconstructed HRIRs for the sound-source directions were used in a headphone simulation.

CONCLUSION

In this paper, a simple and efficient individualization method of magnitude HRTFs for sources on the horizontal plane, based on principal components analysis and multiple linear regression, was proposed.
The proposed method showed better performance in the objective simulation experiments than similar research and was superior to our previous work. The additional errors introduced by the MLR into the PCA model might be lowered by applying a higher-order regression or an algorithm other than least squares for the MLR.

Dadang Gunawan received the B.Sc. degree in electrical engineering from the University of Indonesia in 1983, and the M.Eng. and Ph.D. degrees from Keio University, Japan, and the University of Tasmania, Australia, in 1989 and 1995, respectively. He is the Head of the Telecommunication Laboratory and of the Wireless and Signal Processing Research Group of the Electrical Engineering Department, University of Indonesia. His research interests are wireless communication and signal processing. Prof. Dr. D. Gunawan is a senior member of the IEEE and the IEEE Signal Processing Society.
EXPLORING CUSTOMER SATISFACTION AND LOYALTY IN THE TELECOMMUNICATIONS INDUSTRY: A COMPREHENSIVE REVIEW In the telecommunications industry, renowned for its customer-centric approach, this research delves into the vital connection between customer satisfaction and loyalty. The study comprehensively reviews existing literature, drawing insights from diverse geographical regions. This meticulous analysis uncovers both common threads and disparities within the current body of research, providing valuable guidance for scholars and researchers navigating the intricate landscape of customer satisfaction and loyalty within the telecom sector. Additionally, the review illuminates the evolving definitions of customer loyalty, ranging from observable behaviours to profound commitment, while delineating three distinct approaches to measurement. This systematic exploration underscores the contemporary shift in business strategy, highlighting the transition from a focus on customer acquisition to one centred on retention, ultimately reaffirming the pivotal role of customer loyalty in today's business landscape. 
INTRODUCTION

The evolution of communication technology has been intrinsically linked to shifts in political and economic systems, resulting in corresponding changes in power structures. Communication, ranging from intimate exchanges to mass dissemination, has been integral to human history since the development of speech around 100,000 BCE. Technology's role in communication traces back to the earliest use of symbols around 30,000 BCE, including petroglyphs, pictograms, ideograms, and cave paintings. Subsequently, advancements such as writing, printing technologies, telecommunications, and the Internet have significantly transformed communication. The history of telecommunications, for instance, began with ancient methods like smoke signals and drums across Africa, Asia, and the Americas. However, the emergence of permanent semaphore systems in Europe during the 1790s marked a notable turning point, and in the 1830s electrical telecommunication networks began to take shape, ushering in a new era. In the contemporary telecommunications landscape, customers enjoy a plethora of service providers and actively exercise their right to switch providers. The industry is characterized by intense competition, offering consumers abundant choices and minimal switching costs, which leads to high customer turnover. In such a fiercely competitive sector, customers seek specialized services at lower prices. For businesses in this environment, cultivating a base of satisfied and loyal customers yields substantial financial benefits, including reduced costs through improved retention, enhanced profitability from expanded relationships, and new referral revenue streams. Companies with loyal customers typically report higher profitability, as these customers are inclined to increase their purchases and promote the business through word of mouth. Given the competitive nature of the telecommunications industry, customer retention and satisfaction have taken precedence over customer
acquisition. With annual churn rates averaging 30-35%, retaining existing customers is cost-effective and strategically crucial. Telecom companies must continually monitor the factors influencing customer satisfaction and loyalty. This paper reviews the literature on customer satisfaction and loyalty within the telecommunications sector, shedding light on the factors that drive customer behaviours.

Volume 2 Issue 4 October-December 2023

While customer satisfaction and loyalty have been extensively studied in the context of physical goods and services, fewer studies have explored these aspects within mobile telecommunications services. The limited research in this area has been complemented by subsequent studies that delve into the factors shaping customer satisfaction and loyalty. The primary objective of this paper is to synthesize the available literature to gain a comprehensive understanding of the concepts of customer satisfaction and loyalty within the realm of telecom services. The study reviews the literature surrounding customer satisfaction and loyalty in telecommunications, assesses various research methodologies, and examines the relationships among the considered variables. The paper is structured as follows: it begins with an overview of customer satisfaction and loyalty concepts specific to the services sector, followed by a summary of selected studies investigating factors influencing customer satisfaction and loyalty in telecommunications across diverse geographical regions. The review then delves into a detailed analysis of the chosen literature, leading to a discussion and conclusion highlighting areas of consensus and contradiction among the reviewed studies. Finally, based on the insights from the review, the paper outlines directions for future research in the field.
Customer satisfaction: In the realm of modern marketing, customer satisfaction stands as a pivotal and widely acknowledged concept. Its significance extends far and wide, encompassing various facets of business operations. Notably, higher levels of customer satisfaction yield a host of tangible benefits, including improved financial performance. This enhancement is achieved through several means, such as reduced customer attrition, cultivating customer loyalty, propagating positive word-of-mouth, and elevating a firm's image and reputation. Given its multifaceted importance, customer satisfaction has been the subject of extensive research over the past two decades, reflecting its central role in contemporary business strategies (Srivastava, 2014). Today, in the business landscape, customer satisfaction reigns supreme as a primary concern across various sectors. A survey of the current literature reveals a rich tapestry of customer satisfaction definitions and dimensions. This diversity is aptly illustrated in Table 1 and the subsequent discussion, underscoring the multifaceted nature of the customer satisfaction concept. At the core of satisfaction models lies the theory of disconfirmation. This theory forms the foundation upon which models of customer satisfaction are constructed. Customer satisfaction is the vital link connecting the intricate processes leading to purchase and consumption with subsequent post-purchase phenomena. These post-purchase effects encompass a spectrum of outcomes, including changes in attitudes, repeat purchases, and the cultivation of brand loyalty (Churchill & Surprenant, 1982). It is worth noting that this definition finds resonance in the works of scholars like Jamal and Naser (2003) and Mishra (2009), further reinforcing the centrality of customer satisfaction in contemporary business discourse.
Table 1: Definitions of the customer satisfaction concept

Oliver (1987): Customer satisfaction is an outcome of a purchase/usage experience, an essential variable in the chain of purchase experience linking product selection with other post-purchase phenomena, including favourable word-of-mouth and customer loyalty.

Anderson and Sullivan (1993): Customer satisfaction can be broadly characterized as a post-purchase evaluation of product quality given pre-purchase expectations.

Giese and Cote (2000): Consumer satisfaction is defined as a reaction (cognitive or affective) to a specific topic (e.g., a buying experience and the accompanying product) that happens at a specific moment (e.g., post-purchase, post-consumption).

Schiffman and Kanuk (2004): Customer satisfaction is an individual's view of a product or service's performance relative to expectations.

Zeithaml and Bitner (2004): Customer satisfaction is a customer's evaluation of a product or service regarding whether it has met their needs and expectations; any failure to meet needs and expectations results in dissatisfaction with the product or service.

Hill and Alexander (2006): Customer satisfaction measures how your organization's product performs in relation to customer requirements.

Karunakaran (2008): Customer satisfaction measures how well a product's perceived performance fits the buyer's expectations. If a product fails to meet expectations, the customer is unhappy; if it meets expectations, the customer is satisfied; if performance exceeds expectations, the customer is delighted.

Vadde, S. (2012): Customer satisfaction is customers' mindset about a particular company when their expectations have been met or exceeded over the product or service's lifetime.

Dumbre, G., and Kaldante, K. (2014): Customer satisfaction is the consumer's assessment of the apparent discrepancy between prior expectations (or another performance standard) and the actual performance of the good or service as viewed after use.

Quadree, S. T., & Pahari, S. (2022): Customer satisfaction is a customer's attitude toward the service provider or an emotional reaction to the gap between what consumers expect and receive regarding fulfilling some need, aim, or desire.

The difference between expected and experienced standards plays a pivotal role in determining satisfaction, as Liu and Khalifa (2003) outlined. Disconfirmation, which signifies the extent to which a product falls short of meeting customer expectations, is critical to understanding satisfaction (McKinney et al., 2002; Spreng et al., 1996). Various scholars have approached customer satisfaction from different angles. Gundersen et al. (1996) define it as a post-consumption evaluative judgment specific to a product or service. Oliver (1980) characterizes it as a consequence of an evaluation process that juxtaposes pre-purchase expectations with performance perceptions during and after the consumer experience. Westbrook and Oliver (1991) view customer satisfaction as a post-choice evaluative judgment concerning a particular purchase selection. Similarly, Kristensen et al. (1999) emphasize that customer satisfaction is an evaluative response rooted in the purchase and consumption experience of comparing expectations and reality. Moliner et al.
(2007) introduce the idea of customer satisfaction as a consumer's judgment of pleasure versus displeasure. They identify two critical responses associated with satisfaction: the cognitive response, which involves comparing expectations to performance, and the affective response, which relates to the emotional experience of the service encounter. Kumar and Oliver (1997) align satisfaction with customers' perceptions of their expectations being met, a sense of receiving 'fair' value, and an overall feeling of contentment. Further classifications of satisfaction delve into its components and nature. Giese and Cote (2000) categorize satisfaction based on response type (cognitive or emotional), focus (product, usage experience, or expectations), and the stage at which the response occurs (post-purchase, post-consumption, or cumulative experience). Halstead et al. (1994) highlight that customer satisfaction is a transaction-specific affective response resulting from comparing product performance against pre-purchase standards. Cote et al. (1989) suggest that satisfaction is determined by when the evaluation occurs: naturally occurring, after consumption, or externally driven. Additionally, Vadde (2012) defines customer satisfaction as the consumer's perception of a particular company when their expectations have been met or exceeded over the product or service's lifetime. Dumbre, G.
and Kaldante, K. (2014) describe it as the consumer's assessment of the apparent discrepancy between prior expectations or another performance standard and the actual performance of the good or service viewed after use. Satisfaction is also viewed through different approaches, including transaction-specific and cumulative perspectives. Transaction-specific satisfaction pertains to evaluating a specific product or service experience, often restricted to a single encounter. In contrast, cumulative satisfaction involves the holistic evaluation of all purchase commodities or service experiences. This cumulative evaluation provides valuable insights into business operational performance indicators (Anderson et al., 1994). Customer satisfaction directly impacts customer behavior, particularly regarding repurchase intentions and actual repurchasing, ultimately influencing an organization's future revenue and profitability. However, it is worth noting that not all research aligns with this positive correlation. Bowen and Shoemaker (2003) present an alternative perspective, suggesting that satisfied consumers may be less inclined to return to a company and share positive word-of-mouth recommendations with others. This divergence in findings can be attributed, in part, to situations where a company fails to meet customer desires or expectations (Roig et al., 2006).
In essence, the literature review underscores that customer satisfaction is rooted in a cognitive process that compares what customers receive against what they expected from the service they obtain.

Customer loyalty: Customer loyalty has emerged as a crucial focal point in the business landscape. Traditionally, marketing efforts were primarily geared towards acquiring new customers, with minimal emphasis on retaining existing ones. However, this paradigm has shifted significantly, driven by the recognition that many customers are lost before or during the repurchase decision-making process, often due to subpar service quality. This underscores the prevalence of customers actively exploring alternative options, highlighting the pivotal elements of commitment, affiliation, and engagement within a service-oriented marketplace (Bhattacharjee, 2005). Over the past couple of decades, there has been a noteworthy evolution in the strategic orientation of businesses. The focus has transitioned sequentially from a primary emphasis on quality, to customer satisfaction, and, most recently, to customer loyalty as the prevailing remedy for sustained success (Johnson and Gustafsson, 2006). Dawes and Swailes (1999) put forth the notion that high customer loyalty stands at the heart of effective customer retention, and organizations that compete through loyalty are poised to thrive in the competitive arena.
The conceptualization of customer loyalty has undergone steady development over the years. Initially, loyalty was predominantly associated with brand loyalty in the context of tangible products. Gremler and Brown (1996) highlighted that prior research predominantly concentrated on brand loyalty in relation to physical goods, with limited exploration of customer loyalty towards service-oriented entities. Gremler and Brown (1996) expanded the scope of loyalty to encompass services (intangible goods), defining service loyalty as the extent to which a customer exhibits repeat purchasing behavior from a service provider, maintains a favorable attitude towards the organization, and exclusively considers this provider when a need for the service arises. Customer satisfaction, in this view, is a person's overall perception of a service provider, or an emotional response to a mismatch between the expected service and the service received to meet their goals or needs.

CUSTOMER SATISFACTION AND LOYALTY IN THE TELECOMMUNICATIONS SECTOR

This section of the article synthesizes existing research findings to better understand consumer loyalty and satisfaction within the telecommunications industry. These insights have been gleaned from studies conducted in various countries, spanning South Korea, Yemen, Bangladesh, India, Jordan, Bahrain, Kurdistan, Nigeria, Pakistan, Ghana, Syria, and Greece. The collective body of research published in national and international journals sheds light on critical factors contributing to customer satisfaction and loyalty in telecommunications. In a world witnessing a steady surge in mobile phone users, the significance of studying customer satisfaction and loyalty within the telecommunications domain has grown exponentially. To illustrate this, Gerpott et al.
(2001) undertook a study in the German mobile cellular telecommunications market, focusing on the factors influencing customer retention from the perspective of service providers. Analyzing data collected from residential mobile communications users, they employed structural equation modeling to explore the intricate relationships between customer retention, customer loyalty, and customer satisfaction. Their findings underscored the pivotal role of overall customer satisfaction in shaping customer loyalty, which, in turn, impacts customers' decisions regarding the continuation or termination of their contractual relationships with mobile network operators. Key determinants identified included mobile service pricing and the quality of personal service. Notably, the landscape of customer satisfaction and loyalty research within the telecommunications sector began to evolve significantly post-2006, witnessing a substantial uptick in studies exploring these dimensions (Turel and Serenko, 2006). Athanassopoulos and Iliakopoulos (2003) honed their research to focus on residential customers of a major European telecommunications company. Their inquiry delved into the service delivery process experienced by customers at various transaction points to shape their overall judgment about the organization. Building on existing theoretical frameworks, they postulated hypotheses regarding these transactional elements' impact and empirically tested them using data collected from 2500 fixed-line residential customers. Their findings highlighted the positive contributions of all transaction elements to the overall satisfaction variable. Notably, attributes associated with continuous transactions, such as corporate image, billing, product quality, and branch service, exerted a more substantial influence on overall perceived performance than incident-driven transactions like new service provisioning and fault repair. Wang et al.
(2004) delved into the mobile communication market in China, systematically exploring the dynamic relationships between key drivers: service quality, customer value, and customer satisfaction. Their study involved developing and testing structural equation models, unveiling that not all quality-related factors hold equal weight in shaping customer-perceived service quality, customer value, and customer satisfaction. In addition to these direct interrelationships, the study unearthed the moderating effect of customer value on the relationship between customer-perceived service quality and customer satisfaction. Furthermore, it revealed that customer value and satisfaction significantly influenced customers' behavioral intentions, whereas the impact of customer-perceived service quality on behavioral intentions was mediated through customer value and satisfaction. These findings underscored the critical importance of satisfaction in building and maintaining customer loyalty, emphasizing the need for service providers to enhance service quality and customer value. To further explore these aspects in a Syrian context, Rahhal (2015) employed factor analysis based on data collected from 460 Syrian mobile phone service users. The study aimed to develop a valid and reliable instrument to measure customer-perceived service quality, incorporating both service delivery and technical quality aspects. The findings underscored service quality's direct and significant impact on customer satisfaction, mediated through network quality, responsiveness, and reliability. In a broader strategic context, Lim (2020) emphasized that retaining existing customers can significantly boost profitability, making customer loyalty a crucial metric for success. Similarly, Quadree and Pahari (2022) highlighted the need for telecom companies to differentiate their brand through product value, pricing, customer services, and other factors that foster long-term customer loyalty. Understanding customer
purchasing patterns and satisfaction becomes paramount in gaining a competitive edge within the market.

Customer satisfaction: The literature study identified a prevailing and evolving significance of customer satisfaction in contemporary business strategies. It underscores the multifaceted nature of customer satisfaction definitions and dimensions, rooted in disconfirmation theory. Researchers have approached customer satisfaction from various angles, highlighting its role as a post-consumption evaluative judgment juxtaposing pre-purchase expectations with performance perceptions. The response type, focus, and stage at which satisfaction occurs offer diverse classifications, ranging from transaction-specific to cumulative perspectives. While the positive correlation between customer satisfaction, repurchase intentions, and actual repurchasing is a recurring theme, some studies suggest that satisfaction might not always guarantee customer loyalty or positive word-of-mouth recommendations, particularly when companies fail to meet customer expectations.
Customer loyalty: The discussed literature reflects a shifting paradigm in the business landscape, where the traditional focus on customer acquisition has given way to a growing recognition of the imperative need for customer retention, primarily due to high attrition rates often linked to inadequate service quality. This evolution in business strategy underscores the rising significance of customer loyalty, which has undergone various conceptual refinements, encompassing behaviors like repeat purchasing, fostering positive attitudes, and cultivating exclusivity in consumers' choice of service providers. The literature reveals diverse interpretations of customer loyalty, spanning from behavioral indicators to deep-seated commitment. It also delineates three distinct approaches to understanding and measuring customer loyalty. In essence, the literature survey underscores the ascendant importance of customer loyalty, offering evolving definitions and emphasizing the contemporary emphasis on retaining and nurturing existing customer relationships.
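The third of those approaches, which integrates attitudinal and behavioral loyalty (Dick and Basu, 1994, as cited later in this review), crosses relative attitude with repeat patronage to yield four loyalty categories. A minimal sketch of that two-by-two classification follows; the normalized [0, 1] inputs and the 0.5 cutoff are illustrative assumptions, not part of the original framework:

```python
def loyalty_quadrant(relative_attitude: float, repeat_patronage: float,
                     cutoff: float = 0.5) -> str:
    """Classify a customer into Dick and Basu's four loyalty quadrants.

    Inputs are assumed normalized to [0, 1]; the cutoff is an
    illustrative assumption for demonstration purposes.
    """
    high_attitude = relative_attitude >= cutoff
    high_repeat = repeat_patronage >= cutoff
    if high_attitude and high_repeat:
        return "true loyalty"
    if high_attitude:
        return "latent loyalty"    # favorable attitude, little repeat buying
    if high_repeat:
        return "spurious loyalty"  # habitual repeat buying without attachment
    return "no loyalty"


print(loyalty_quadrant(0.9, 0.2))  # latent loyalty
```

The sketch shows why purely behavioral measures can mislead: a customer who repurchases out of habit or inertia (spurious loyalty) looks identical, behaviorally, to a truly loyal one.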
Customer satisfaction and loyalty in the Telecommunications sector - The synthesis of extensive research within the telecommunications sector reveals several key findings. Customer satisfaction and loyalty are complex, interrelated concepts influenced by diverse factors such as service quality, pricing, corporate image, and trust. The significance of customer satisfaction in fostering loyalty is consistently emphasized across various studies. Service quality, including network performance and responsiveness, is critical in shaping customer satisfaction. Additionally, customer perception of value, pricing, and the overall customer experience contribute significantly to satisfaction and loyalty. Regional variations in customer preferences and demographics also influence these dynamics. Telecom companies must prioritize customer satisfaction through improved service quality and value propositions to cultivate and maintain customer loyalty in a competitive market.

CONCLUSION

The literature on customer satisfaction and loyalty reveals their pivotal roles in modern business strategies. Customer satisfaction is a multifaceted concept rooted in disconfirmation theory, encompassing various evaluative judgments that compare pre-purchase expectations with actual performance. Customer loyalty has evolved as a central focus, with diverse interpretations ranging from behavioural indicators to deep commitment. The telecommunications sector exemplifies the complex interplay of factors shaping customer satisfaction and loyalty, emphasizing the need for service quality, pricing, and overall customer experience. This literature underscores the strategic importance of both constructs for sustained competitive success.

Lovelock and Wirtz (2004) articulated customer loyalty as "a customer's willingness to continue patronizing a firm over the long term and recommending the firm's products and services to friends and associates." Heskett et al.
(1994) defined customer loyalty as "representing repeat purchases and referring the company to other customers." Conversely, Bloemer and Kasper (1995) associated loyalty more with commitment, characterizing it as absolute loyalty rather than mere repetitive purchasing behavior, which essentially denotes brand repurchases irrespective of commitment. Similarly, Gremler and Brown (1996) conceptualized customer loyalty as the degree to which a customer demonstrates recurrent purchase behavior from a service provider and maintains a favorable opinion of the service provider (Rahman et al., 2014). In contrast, Zikmund et al. (2003) defined customer loyalty as a customer's dedication or attachment to a brand, retailer, manufacturer, service provider, or organization, predicated on favorable views and behavioral responses, such as repeat purchases. Oliver (1999) posited that customer loyalty encompasses a consumer's overall connection to, or strong commitment to, a product, service, brand, or company. Oliver (1999) elaborated that loyalty translates to a high commitment to consistently purchase or consume preferred products and services, even in changing circumstances or in the face of competitors' marketing efforts that could sway consumer preferences. Krishnan et al. (2001) underscored that customer loyalty is a strategic approach that benefits both customers and businesses. According to Edvardsson et al. (2000), customer loyalty manifests as a desire or inclination to engage in future transactions with the same company. The literature review highlights three distinct approaches to understanding customer loyalty: the behavioral loyalty approach (Grahn, 1969), the attitudinal loyalty approach (Bennett & Rundle-Thiele, 2002; Jacoby, 1971; Jacoby & Chestnut, 1978), and the integration of the attitudinal and behavioral loyalty approaches (Dick & Basu, 1994; Jacoby, 1971; Jacoby & Chestnut, 1978; Oliver, 1997). Kim et al.
(2004) probed the role of switching barriers as direct determinants of customer loyalty and satisfaction among South Korean mobile telecom customers. Their study, involving 306 participants, revealed substantial positive impacts of customer satisfaction and switching barriers on customer loyalty. Customer support, value-added services, and call quality were identified as influential factors shaping service quality and significantly impacting client satisfaction. The study also highlighted the importance of loss and move-in costs as switching barriers, underlining their positive influence on customer satisfaction. Moreover, the study revealed that switching barriers directly affected customer loyalty, and this influence operated through an interaction with customer satisfaction. In a broader context, Eshghi et al. (2008) explored service-related factors in the Indian mobile telecommunications sector. Their study, encompassing 238 mobile phone users across four major Indian cities, identified six factors (relational quality, competitiveness, reliability, reputation, support features, and transmission quality) as the underlying dimensions by which Indian mobile phone customers assessed the quality of their service. These factors significantly predicted customer satisfaction, repurchase intentions, and customers' likelihood to recommend the service to others. The study found that these dimensions played distinct roles in shaping various facets of customer behavior, underlining the multifaceted nature of customer satisfaction and loyalty. Balaji (2009) focused on customer satisfaction with mobile services in India, employing a structural equation model based on data collected from 199 Indian postpaid mobile users. The study investigated the relationships between customer expectations, quality, value, satisfaction, and loyalty. The findings underscored the critical role of perceived quality in determining customer satisfaction, subsequently
fostering trust, price tolerance, and long-term customer loyalty. Notably, the study identified a lack of correlation between perceived value and customer satisfaction, implying that mobile service subscribers considered the price charged appropriate for the high-quality services they received. This suggested that improving service quality was the key driver of satisfaction in the Indian mobile services context. Ahmed et al. (2010) delved into the relationships between the service quality dimensions (tangibles, responsiveness, empathy, assurance, and reliability), customer satisfaction, and customer repurchase intentions (loyalty) in the telecom sector of Pakistan. Their study, involving 331 young mobile users, revealed significant and positive relationships between all dimensions of service quality, customer satisfaction, and repurchase intentions, except empathy, which displayed significant but negative associations with satisfaction and repurchase intentions. Additionally, the study unveiled the mediating role of customer satisfaction in the relationship between service quality and customers' repurchase intentions, highlighting the intricate interplay between these variables in shaping customer loyalty. Boohene and Agyapong (2010) scrutinized the factors influencing customer loyalty for Vodafone in Ghana, adopting the SERVQUAL model as the primary framework for assessing service quality. Using multiple and logistic regression analysis, the study explored the correlations between service quality, customer satisfaction, image, and customer loyalty. Their findings supported the proposed model, emphasizing direct paths from customers' perceived service quality and trust to customer loyalty, and indirect paths through customer satisfaction. This study provided valuable insights into the nuanced relationships between these dimensions in the Ghanaian telecommunications landscape. Kim and Lee (2010) conducted a web-based survey among 469 South Korean respondents, reinforcing previous research by
highlighting the influential roles of service quality, service price, and corporate image as strong antecedents of customer loyalty. The study underscored the pivotal contributions of service quality and price in cultivating customer loyalty to service providers, further emphasizing the significance of corporate image in shaping overall reputation and prestige, thereby impacting customer loyalty. Akroush et al. (2011) ventured into the Jordanian mobile telecommunications market, empirically constructing a structural equation model to delineate the antecedents of customer loyalty. Drawing from a sample of 756 mobile users, their findings highlighted customer loyalty as a multi-faceted concept, significantly influenced by customer satisfaction, trust, perceived switching costs, and perceived service quality. Interestingly, the study revealed that customer satisfaction wielded the most substantial direct effect on customer loyalty, subsequently influencing customer trust and perceived switching costs. While customer trust also played a role, it was weaker than customer satisfaction in driving loyalty. The study also identified brand awareness as a robust determinant of customer loyalty. However, customer support service showed no significant relationship with customer loyalty. Deng et al. (2010) delved into the factors influencing customer satisfaction and loyalty to mobile instant messaging in China, highlighting trust, perceived service quality, perceived customer value, and switching costs as critical contributors to customer satisfaction. Their study revealed that perceived service quality had the most significant effect on customer satisfaction, with trust and customer satisfaction positively influencing customer loyalty. While the effects of switching costs were smaller in magnitude, they still contributed to loyalty. The results highlighted
Assessment of Lower Third Molar Eruption Status in Different Facial Growth Patterns in Adults Introduction Various skeletal and dental factors help in predicting the mandibular third molar eruption, but the reliability of these factors may vary within subjects with different growth patterns. Thus, the present study aims to analyze these parameters for the lower third molar eruption in subjects with different facial growth patterns. Material and Methods The study was conducted on 120 pre-treatment lateral cephalograms and orthopantomograms of the subjects who were equally divided (based on the SN-GoGn angle) into three groups: normodivergent, hypodivergent, and hyperdivergent. The groups were further subdivided into impacted and erupted subgroups based on mandibular third molar eruption status. Nine radiographic parameters were compared between the impacted and erupted subgroups, using the independent Student’s t-test, to check their association with mandibular third molar eruption in different growth patterns. Results Beta angle was significantly different in the erupted and impacted subgroups in all three groups (with p <.05). The retromolar space and alpha angle was significant in hypodivergent group (p <.01) and the gamma angle was significant in the hyperdivergent group (p <.01). Conclusion Among all the parameters that were analyzed for the third molar eruption, only the beta angle was significantly related to the third molar eruption in subjects with all three different growth patterns. Introduction The mandibular third molar is the most frequently impacted tooth after the maxillary third molar. 1 The prevalence of third molar impaction varies from 16.7% to 68.6%. 2 The initial appearance of third molars on the radiograph can be seen from 5 to 16 years of age and its eruption is around 18-24 years. 
3 The tooth has a mesial and lingual angulation during its development; therefore, for its normal eruption, it must change its angulation and become upright. Richardson, in the year 1978, stated three ways in which impaction of the tooth could occur: (a) the tooth became upright, but not sufficiently to permit its eruption; (b) the tooth did not change its angulation; and (c) the tooth underwent reverse angulation.4

Awareness about the eruption status of third molars is of great significance to orthodontists in planning treatment, and also for the stability and maintenance of the dentition in the long run, as there has been a constant debate on the role of the facial growth pattern in third molar eruption. Björk reported that reduced space for the mandibular third molar is seen when the growth of the condyle is vertical. He also stated that the growth of the mandibular condyle in a vertical direction is linked with decreased resorption on the anterior border of the mandibular ramus.7 So, it could be said that different growth patterns influence the eruption of the mandibular third molars.

In a previous study done by Jakovljevic et al.,8 certain skeletal and dental radiographic factors were studied in relation to different anteroposterior malocclusions and in different age-related groups, but these parameters have not yet been studied in different vertical malocclusions. Thus, the present study aims to analyze these skeletal and dental parameters for the lower third molar eruption in subjects with different growth patterns. The secondary objective of the study is to determine the percentage of lower third molar eruptions in different anteroposterior malocclusions.

Material and Methods

The study was conducted on 120 pre-treatment records (lateral cephalograms and orthopantomograms) of the subjects who reported to the Department of Orthodontics and Dentofacial Orthopaedics for orthodontic treatment in 2019-2020. An approval certificate from the ethical research committee of Baba Jaswant Singh Dental College, Hospital and Research Institute was obtained for the study.
The inclusion criteria for the subjects and their radiographs were that the subjects should be 18 years or above, have no history of orthodontic treatment, have no history of extractions, and the radiographs should be of good quality. Subjects with incomplete records or dentofacial deformities were excluded from the study.

Nine radiographic parameters were evaluated, out of which RMS, mesiodistal width, space/width ratio (SWR), ramal width, and the alpha, beta, and gamma angles were evaluated on the OPG (Figure 1), and the length of the body of the mandible and the height of the ramus of the mandible were measured on the lateral cephalogram (Figure 2). The definitions of all radiographic parameters are given in Table 1.

The lateral cephalograms were used to allocate the subjects, based on the SN-GoGn angle, into three groups: Group I, normodivergent (SN-GoGn 27°-32°); Group II, hypodivergent (SN-GoGn < 27°); and Group III, hyperdivergent (SN-GoGn > 32°), with 40 subjects in each group. The SN-GoGn angle was used because the two points, Sella and Nasion, move only a minimal amount whenever the head deviates from the true profile position.9 The sample size for the study was calculated using the formula n = Z² × SD² / e² = (1.96)² × (7.3)² / (2.7)² ≈ 28 per group at the 95% confidence level. (The sample size was calculated by taking reference from a previous study done by Jakovljevic et al.8 Their article aimed to analyze various skeletal and dental parameters for the lower third molar eruption in subjects with different anteroposterior malocclusions and in different age-related groups. Our study aimed to analyze various skeletal and dental parameters for the lower third molar eruption in subjects with different vertical malocclusions in adults.)
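The sample-size arithmetic above can be checked with a short script. This is a sketch of the standard normal-approximation formula for estimating a mean to a given margin of error; the z, SD, and margin values are the ones quoted in the text:

```python
def sample_size_per_group(z: float = 1.96, sd: float = 7.3, margin: float = 2.7) -> float:
    """n = z^2 * sd^2 / e^2 for estimating a mean to within `margin`.

    z = 1.96 corresponds to a 95% confidence level; sd is the assumed
    standard deviation of the measurement; margin is the tolerated error.
    """
    return (z ** 2) * (sd ** 2) / (margin ** 2)


n = sample_size_per_group()
print(round(n, 2))  # ~28.08, i.e. about 28 subjects per group
```

This reproduces the study's figure of 28 per group; the authors enrolled 40 per group, comfortably above the minimum.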
The three groups were further subdivided into two subgroups, one with impacted and the other with erupted third molars. The third molars were considered erupted if they reached the level of the occlusal plane drawn on the OPG; otherwise, they were considered impacted.

The linear and angular measurements were compared between the impacted and erupted subgroups to see if the parameters were significant enough to predict the mandibular third molar eruption. The study's secondary objective was to evaluate the percentage of third molar eruption in different anteroposterior malocclusions, so the parameters were also studied in Class I, II, and III malocclusions, for which the subjects were divided based on the ANB angle.

Statistical Analysis

The software used to formulate the results was SPSS 22.0 for Windows (SPSS Inc., Chicago, USA), and the level of significance was set at p < .05. The independent Student's t-test was used to check the different parameters for their role in the eruption/impaction of third molars in the erupted and impacted subgroups. Univariate logistic analysis was used to determine statistically significant predictors for mandibular third molar eruption. The chi-square test was used to find the difference between erupted and impacted third molars in the different anteroposterior groups.

To check the reliability, 15 subjects were randomly chosen, and the radiographs were retraced and analyzed by the same operator after 15 days. The same operator recorded three skeletal and five dental parameters, and the reliability of the data was checked using the intraclass correlation coefficient reliability index.

Table 1. Definitions of all the radiographic parameters used in the study:
1. Sella-Nasion plane (SN): the cranial line between the center of the sella turcica and the anterior point of the frontonasal suture.9
2. Mandibular plane: a line passing through Gonion and Gnathion, as used in the Steiner analysis by Cecil Steiner.9
3. Occlusal plane (OP): a line drawn through the highest points of the crowns of the mandibular incisors and first molars.
4. Mandibular line (ML): a tangent drawn on the lower border of the mandibular body.
5. Retromolar space (RMS): the length of a line drawn along the occlusal plane from the anterior limit of the ramus of the mandible to the posterior limit of the mandibular second molar.
6. Mesiodistal width of the lower third molar (MDW): the distance between the mesial and distal surfaces of the lower third molar crown.
7. Space/width ratio (SWR): the retromolar space/mesiodistal width ratio of the lower third molar, also known as the Ganss ratio.10
8. Ramus width: the distance measured from the midpoint on the anterior ramal wall to the midpoint on the posterior ramal wall.
9. Alpha angle: the angle of the long axis of the lower third molar to the mandibular line.
10. Beta angle: the angle between the long axes of the lower third and second molars.
11. Gamma angle: the angle of the long axis of the lower second molar to the mandibular line.
12. SN-GoGn: the angle formed between the SN plane and the mandibular plane.9
13. ANB angle: the angle formed by the intersection of the lines joining nasion to point A and nasion to point B.9
14. Length of the body of the mandible: the distance measured between Gonion and Gnathion.
15. Height of the ramus of the mandible: the distance measured between articulare and gonion.

Results

A total of 225 mandibular third molars in 120 subjects (76 females and 44 males) were evaluated. The mean age of the subjects in the normodivergent, hypodivergent, and hyperdivergent groups was 20.85, 20.15, and 20.47 years, respectively.
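The between-subgroup comparison described under Statistical Analysis can be illustrated with a self-contained pooled (Student's) two-sample t statistic. The measurement values below are synthetic, purely for demonstration; they are not data from the study:

```python
from statistics import mean, variance


def pooled_t_statistic(group_a: list, group_b: list) -> float:
    """Independent-samples Student's t with a pooled variance estimate.

    t = (m1 - m2) / sqrt(sp^2 * (1/n1 + 1/n2)), where sp^2 pools the two
    sample variances weighted by their degrees of freedom.
    """
    n1, n2 = len(group_a), len(group_b)
    m1, m2 = mean(group_a), mean(group_b)
    sp2 = ((n1 - 1) * variance(group_a) + (n2 - 1) * variance(group_b)) / (n1 + n2 - 2)
    return (m1 - m2) / (sp2 * (1 / n1 + 1 / n2)) ** 0.5


# Hypothetical retromolar-space readings (mm) for erupted vs impacted molars.
erupted = [14.0, 15.5, 13.8, 16.1, 14.9]
impacted = [11.2, 12.0, 10.8, 11.9, 12.4]
t = pooled_t_statistic(erupted, impacted)
print(round(t, 2))  # a large positive t suggests more space in the erupted group
```

In practice the t statistic would be compared against the t distribution with n1 + n2 - 2 degrees of freedom (as SPSS does internally) to obtain the p-value against the .05 threshold.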
The results showed a high intraclass correlation coefficient (between 0.880 and 1.000), indicating good reliability. Out of the total 225 third molars evaluated, the hypodivergent group had the highest number of impacted molars, while the hyperdivergent group had the least (Table 2). All the parameters were assessed for their association with eruption/impaction of the mandibular third molar in the total sample (Table 3) using Student's t-test. It was noted that only the RMS and the alpha and gamma angles were significantly increased in the erupted subgroup, and the beta angle was significantly increased in the impacted subgroup.

On assessment of the skeletal and dental parameters in the normodivergent group, it was seen that only the beta angle was significantly different between the erupted and impacted subgroups (Table 3). In the hypodivergent group, the results showed that the RMS and the alpha angle were significantly increased in the erupted subgroup, while the beta angle was significantly decreased in the same subgroup (Table 3). In the hyperdivergent group, the results showed that the gamma angle was significantly increased, and the beta angle significantly decreased, in the erupted subgroup (Table 3).

For evaluation of the number of erupted and impacted third molars within the anteroposterior groups, the results showed that the percentage of erupted and impacted third molars was 64.7% and 35.3% in Class I, 64.6% and 35.4% in Class II, and 70% and 30% in Class III malocclusion, but the result was not statistically significant (Table 4).

Discussion

The eruption of the mandibular third molar depends on a variety of factors. The influence of facial growth pattern on third molar eruption has been under observation since Björk reported that a reduced space for the mandibular third molar is seen when the growth of the condyle is vertical.7 Behbehani et al.
11 established that a reduced jaw angle measured at 18 years of age was more commonly linked to third molar impaction.12 Erdem et al.13 stated that the chances for mandibular third molar eruption were higher in patients with a vertical facial growth pattern, while Datana et al.14 concluded that the eruption or impaction of the lower third molar was independent of the vertical growth pattern of the individual. So, it can be said that the growth pattern of an individual might influence the eruption process of mandibular third molars. Hence, this study was conducted to analyze the effect of various radiographic predictors in a sample divided on the basis of facial growth patterns.

When the skeletal parameters in the total sample were compared between the erupted and impacted subgroups, our results showed that the mandibular length, ramus height, and ramus width were not significant enough to predict the third molar eruption. Ross Kaplan15 observed similar findings and suggested that the length of the mandible was not reduced in cases with impacted third molars. However, Richardson16 stated that a reduced mandibular length might lead to a reduced RMS, thus causing third molar impaction. Behbehani et al.11 also reported that a decreased mandibular length might increase the risk of third molar impaction. When the above skeletal parameters were evaluated within the different growth pattern groups, the results were similar.

When the linear dental parameters (RMS, mesiodistal width of the third molar, and space/width ratio) in the total sample were compared between the erupted and impacted subgroups, the RMS was significantly greater in the erupted subgroup, while the increase in mesiodistal width and SWR was not significant. Hattab et al.
17 reported similar results, as they found significantly reduced RMS in the group with impacted third molars. In contrast to our study, Tsai18 observed that the mesiodistal dimension of the mandibular third molar was a significant factor determining the eruption of the tooth. This difference could be due to the fact that the pattern of growth, the development of the jaws, and the size of the teeth are likely to vary among different populations and races. Another study, by Kaur et al.,19 also reported a wider mesiodistal dimension of the mandibular third molar in the impacted group, which could be due to the difference in the method of measurement of the dimension between the two studies (clinical versus radiographic).

When the significance of the RMS was examined within the different growth patterns, it was observed that the RMS remained significant in relation to the eruption of the third molar only in the hypodivergent group.

When the angular parameters in the total sample were assessed in the erupted and impacted subgroups, it was seen that the alpha, beta, and gamma angles were significantly different between the two subgroups. Our results showed that the mandibular third molars were more upright (alpha angle) in the erupted group as compared to the impacted one. Similarly, Behbehani et al.11 also concluded that an increased mesial angulation of the third molar is a significant risk factor for third molar impaction. Artun et al.20 also stated that more than 40° angulation of the third molar to the occlusal plane at the end of orthodontic treatment is a potential risk for third molar impaction. The present study revealed that the angulation between the long axes of the third and second molars (beta angle) was significantly reduced in the erupted group compared to the impacted one. A similar significant decrease in beta angle was reported by Jakovljevic et al.9 and Uthman.
21 The association of third molar eruption and beta angle could be due to the fact that an upright angulation of the second and third molars causes less consumption of the RMS, leading to the eruption of the third molar.

Only the beta angle was associated with third molar eruption in all three groups, while the alpha angle was significant only in the hypodivergent group and the gamma angle was favorable in the hyperdivergent group. This different degree of association of these angles in different growth patterns could be due to the fact that the direction of eruption of the second and third molars varies within different mandibular growth patterns.

When the number of erupted and impacted third molars was studied in different anteroposterior malocclusions, it was seen that a higher percentage of third molar eruption occurred in Class III malocclusion. However, this difference was statistically insignificant. The findings of our study were in concordance with Jain and Valiathan, 22 while Jakovljevic et al. 8 reported increased eruption in skeletal Class III cases.

Out of the total investigated radiographic parameters in the total sample, a significant difference in their association with third molar eruption was found for the RMS and the angular measurements. The correlation of the RMS and the alpha angle with third molar eruption was reduced in the normodivergent and hyperdivergent groups. When the gamma angle was evaluated, it was significant only in the hyperdivergent group.

The present study shows that the role of certain radiographic parameters varies across the different growth pattern groups. As third molar eruption forms an integral part of orthodontic treatment planning, certain radiographic predictors should be considered for stable orthodontic treatment in the various growth patterns.
As the present study was cross-sectional, a longitudinal study would be more insightful for identifying the parameters that are significant enough to predict third molar eruption status in subjects with different growth patterns.

Conclusion

The conclusions of the study are as follows:
• Out of the nine studied parameters, the RMS, along with the angular measurements (alpha angle, beta angle, and gamma angle), was significantly related to third molar eruption.
• The RMS was significant for third molar eruption only in the hypodivergent group.
• When the angular parameters were studied in different growth patterns, only the beta angle had a significant influence on third molar eruption in all three groups, while the alpha angle was not significant in the normodivergent and hyperdivergent groups and the gamma angle was not significant in the normodivergent and hypodivergent groups.
• There was no significant difference in third molar eruption among the different anteroposterior skeletal classes.

Figure 1. Linear and Angular Measurements on OPG.
Table 1. Definitions of All the Radiographic Parameters Used in the Study.
Table 2. Percentage of Erupted and Impacted Third Molars in Different Growth Patterns.
Table 3. Comparison of the Linear and Angular Measurements Between Impacted and Erupted Subgroups in the Total Sample and the Normodivergent (Group 1), Hypodivergent (Group 2), and Hyperdivergent (Group 3) Groups.
Table 4. Percentage of Erupted and Impacted Third Molars Among the Anteroposterior Groups.
$|V_{ub}|$ from $B\to\pi\ell\nu$ decays and (2+1)-flavor lattice QCD We present a lattice-QCD calculation of the $B\to\pi\ell\nu$ semileptonic form factors and a new determination of the CKM matrix element $|V_{ub}|$. We use the MILC asqtad 2+1-flavor lattice configurations at four lattice spacings and light-quark masses down to 1/20 of the physical strange-quark mass. We extrapolate the lattice form factors to the continuum using staggered chiral perturbation theory in the hard-pion and SU(2) limits. We employ a model-independent $z$ parameterization to extrapolate our lattice form factors from large-recoil momentum to the full kinematic range. We introduce a new functional method to propagate information from the chiral-continuum extrapolation to the $z$ expansion. We present our results together with a complete systematic error budget, including a covariance matrix to enable the combination of our form factors with other lattice-QCD and experimental results. To obtain $|V_{ub}|$, we simultaneously fit the experimental data for the $B\to\pi\ell\nu$ differential decay rate obtained by the BaBar and Belle collaborations together with our lattice form-factor results. We find $|V_{ub}|=(3.72\pm 0.16)\times 10^{-3}$, where the error is from the combined fit to lattice plus experiments and includes all sources of uncertainty. Our form-factor results bring the QCD error on $|V_{ub}|$ to the same level as the experimental error. We also provide results for the $B\to\pi\ell\nu$ vector and scalar form factors obtained from the combined lattice and experiment fit, which are more precisely determined than from our lattice-QCD calculation alone. These results can be used in other phenomenological applications and to test other approaches to QCD. I. INTRODUCTION The Cabibbo-Kobayashi-Maskawa (CKM) matrix [1,2] element |V_ub| is one of the fundamental parameters of the Standard Model and is an important input to searches for CP violation beyond the Standard Model.
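The model-independent z parameterization mentioned above can be made concrete with a short sketch of the conformal map from q² to z. The meson masses and the particular choice of t₀ below are illustrative assumptions, not the paper's exact inputs.

```python
import math

MB, MPI = 5.27966, 0.13957        # GeV; illustrative B and pion masses
T_PLUS = (MB + MPI) ** 2          # pair-production threshold t+
T0 = (MB + MPI) * (math.sqrt(MB) - math.sqrt(MPI)) ** 2  # a common choice of t0

def z_of_q2(q2, t0=T0, tplus=T_PLUS):
    """Conformal map q^2 -> z; the semileptonic region maps into a small |z| disc."""
    a = math.sqrt(tplus - q2)
    b = math.sqrt(tplus - t0)
    return (a - b) / (a + b)

Q2MAX = (MB - MPI) ** 2
zmax = max(abs(z_of_q2(0.0)), abs(z_of_q2(Q2MAX)))
```

With this t₀ the whole range 0 ≤ q² ≤ (M_B − M_π)² maps to |z| ≲ 0.28, which is why a short polynomial in z can describe the form factor over the full kinematic range.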
Constraints on new physics in the flavor sector are commonly cast in terms of over-constraining the apex of the CKM unitarity triangle. In contrast to the well-determined angle β of the unitarity triangle, the opposite side |V_ub/V_cb| is poorly determined, and the uncertainty is currently dominated by |V_ub|. This is due to the fact that charmless decays of the B meson have far smaller branching fractions than the charmed decays, as well as the fact that the theoretical calculations are less precise than for sin 2β, |V_us|, or |V_cb|. Currently the most precise determination of |V_ub| is obtained from charmless semileptonic B decays, using exclusive or inclusive methods that rely on measurements of the branching fractions and the corresponding theoretical inputs. Exclusive determinations require knowledge of the form factors, while inclusive determinations rely on the operator product expansion, perturbative QCD, and non-perturbative input from experiments. There is a long-standing discrepancy between |V_ub| determined from inclusive and exclusive decays: the central values from these two approaches differ by about 3σ. It was argued in Ref. [3] that this tension is unlikely to be due to new-physics effects, and it is therefore important to examine the (theoretical and experimental) inputs to the |V_ub| determinations. With the result obtained in this paper, the tension is reduced to 2.4σ. In the limit of vanishing lepton mass, the Standard Model prediction for the differential decay rate of the exclusive semileptonic B → π ℓν decay is given by

dΓ/dq² = (G_F² |V_ub|² / 24π³) |p_π|³ |f_+(q²)|²,

where |p_π| = [(M_B² + M_π² − q²)²/(4M_B²) − M_π²]^{1/2} is the pion momentum in the B-meson rest frame. To determine |V_ub|, the form factor f_+(q²) must be calculated with nonperturbative methods. The first unquenched lattice calculations of f_+(q²) with 2+1 dynamical sea quarks were performed by HPQCD [4] and the Fermilab/MILC collaborations [5] several years ago. Here we extend and improve Ref. [5] in several ways.
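A quick numerical sketch makes the kinematics of the rate formula above concrete. The constants below are standard reference values and the form-factor argument is a pure placeholder (not a lattice result); only the algebra of dΓ/dq² is taken from the text.

```python
import math

GF = 1.1663787e-5          # Fermi constant, GeV^-2
MB, MPI = 5.27966, 0.13957 # B and pion masses in GeV (illustrative precision)
VUB = 3.72e-3              # central |Vub| quoted in this paper

def p_pi(q2):
    """Pion momentum |p_pi| in the B rest frame at momentum transfer q2 (GeV^2)."""
    val = (MB**2 + MPI**2 - q2) ** 2 / (4.0 * MB**2) - MPI**2
    return math.sqrt(max(val, 0.0))

def dGamma_dq2(q2, f_plus):
    """dGamma/dq^2 in GeV^-1 in the massless-lepton limit, as in the formula above."""
    return GF**2 * VUB**2 / (24.0 * math.pi**3) * p_pi(q2) ** 3 * f_plus**2
```

A useful sanity check: at q² = (M_B − M_π)² the pion is at rest, |p_π| vanishes, and the differential rate goes to zero.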
The most recent exclusive determination of |V_ub| from the Heavy Flavor Averaging Group (HFAG) [6] is based on combined lattice plus experiment fits and yields |V_ub| = (3.28 ± 0.29) × 10^{-3}, where the error includes both the experimental and theoretical uncertainties. The experimental data included in the average are the BaBar untagged six-q²-bin data [7], the BaBar untagged twelve-q²-bin data [8], the Belle untagged data [9], and the Belle hadronic-tagged data [10]. The theoretical errors on the form factors from lattice QCD [5] are currently the dominant source of uncertainty in |V_ub| [11]. Hence a new lattice calculation of f_+(q²) with improved statistical and systematic errors is desirable.1 For comparison, the value of |V_ub| from the inclusive method quoted by HFAG is about (4.40 ± 0.20) × 10^{-3} [6] using the theory of Ref. [15]. In this paper, we present a new lattice-QCD calculation of the B → π ℓν semileptonic form factors and a determination of |V_ub|. Our calculation shares some features with the previous Fermilab/MILC calculation [5] but makes several improvements. We quadruple the statistics on the previously used ensembles and improve our strategy for extracting the form factors by including excited states in our three-point correlator analysis. In addition, we include twice as many ensembles in this analysis. The new ensembles have smaller lattice spacings, with the smallest lattice spacing decreased by half. This analysis also includes ensembles with light sea-quark masses that are much closer to their physical values (m_l/m_s = 0.05 versus 0.1). The smaller lattice spacings and light-quark masses provide much better control over the dominant systematic error due to the chiral-continuum extrapolation. We find that heavy-meson rooted staggered chiral perturbation theory (HMrSχPT) in the SU(2) and hard-pion limits provides a satisfactory description of our data.
All together, these improvements reduce the error on the form factors by a factor of about 3. Finally, we introduce a new functional method for the extrapolation over the full kinematic range. The determination of |V_ub| from a combined fit to our lattice form factors together with experimental measurements also yields a very precise determination of the vector and scalar form factors over the entire kinematic range. These form factors will be valuable input to other phenomenological applications in the Standard Model and beyond. An example is the rare decay B → π ℓ⁺ℓ⁻, which we will discuss in a separate paper. This paper is organized as follows. In Sec. II, we present our calculation of the form factors. We describe the lattice actions, currents, simulation parameters, correlation functions and fits to extract the matrix elements, renormalization of the currents, and adjustment of the form factors to correct for quark-mass mistunings. In Sec. III, we present the combined chiral-continuum extrapolation, followed by an itemized presentation of our complete error budget in Sec. IV. (Footnote 1: Note that there are several other efforts with 2 [12] and 2+1 flavors of sea quarks [13,14].) We then extrapolate the form factors to the full q² range through the functional z-expansion method in Sec. V. We also perform fits to lattice and experimental data simultaneously to obtain |V_ub|. We conclude with a comparison to other results and a discussion of the future outlook in Sec. VI. Preliminary reports of this work can be found in Refs. [16,17]. II. LATTICE-QCD SIMULATION In this section, we describe the details of the lattice simulation. We briefly describe the calculation of the form factors in Sec. II A. We also calculate the tensor form factor, which follows an analysis similar to that of the vector and scalar form factors. The tensor form factor enters the Standard-Model rate for B → π ℓ⁺ℓ⁻ decay, and our final result for f_T will be presented in a forthcoming paper. In Sec.
II B, we introduce the actions and simulation parameters used in this analysis. This is followed, in Sec. II C, by a brief discussion of the currents and lattice correlation functions. The correlator fits to extract the lattice form factors are provided in Sec. II D. In Sec. II E, we discuss the renormalization of the lattice currents. In Sec. II F, we correct the form factors a posteriori to account for the mistuning of the simulated heavy b-quark mass. A. Form-factor definitions The vector and tensor hadronic matrix elements relevant for B → π semileptonic decays can be parameterized by the following three form factors:

⟨π(p_π)|V^μ|B(p_B)⟩ = f_+(q²) [ p_B^μ + p_π^μ − (M_B² − M_π²)/q² q^μ ] + f_0(q²) (M_B² − M_π²)/q² q^μ,
⟨π(p_π)|T^{μν}|B(p_B)⟩ = (2/(M_B + M_π)) ( p_B^μ p_π^ν − p_B^ν p_π^μ ) f_T(q²),

where V^μ = q̄γ^μ b and T^{μν} = i q̄σ^{μν} b. In lattice gauge theory and in chiral perturbation theory, it is convenient to parameterize the vector-current matrix elements by [18]

⟨π(p_π)|V^μ|B(p_B)⟩ = √(2M_B) [ v^μ f_∥(E_π) + p^μ_{π,⊥} f_⊥(E_π) ],

where v^μ = p_B^μ/M_B is the four-velocity of the B meson and p^μ_{π,⊥} = p_π^μ − (p_π·v)v^μ is the projection of the pion momentum in the direction perpendicular to v^μ. The pion energy is related to the lepton momentum transfer q² by E_π = p_π·v = (M_B² + M_π² − q²)/(2M_B). With this setup, we have

f_∥(E_π) = ⟨π(p_π)|V⁴|B(p_B)⟩ / √(2M_B), f_⊥(E_π) = ⟨π(p_π)|Vⁱ|B(p_B)⟩ / (√(2M_B) p_π^i),

where no summation is implied by the repeated indices here. The form factors f_+ and f_0 are

f_+(q²) = (1/√(2M_B)) [ f_∥(E_π) + (M_B − E_π) f_⊥(E_π) ],
f_0(q²) = (√(2M_B)/(M_B² − M_π²)) [ (M_B − E_π) f_∥(E_π) + (E_π² − M_π²) f_⊥(E_π) ]. (2.8)

B. Actions and parameters The lattice gauge-field configurations we use have been generated by the MILC Collaboration [19-21], and some of their properties are listed in Table I. These twelve ensembles have four different lattice spacings, ranging from a ≈ 0.12 fm to a ≈ 0.045 fm, with several light sea-quark masses at most lattice spacings in the range 0.05 ≤ am_l/am_h ≤ 0.4. The parameter range is shown in Fig. 1. We use the Symanzik-improved gauge action [22-24] for the gluons and the tadpole-improved (asqtad) staggered action [25-30] for the 2+1 flavors of dynamical sea quarks and for the light valence quarks. Both Table I and Fig.
1 also indicate the ensembles used in the previous Fermilab/MILC calculation [5]. The current analysis benefits from an almost fourfold increase in statistics over that of Ref. [5], as well as finer lattice spacings and lighter sea-quark masses. All ensembles have a large enough spatial volume, M_π L ≥ 3.8, such that the systematic error due to finite-size effects is negligible compared to other uncertainties. In this calculation, we work in the full-QCD limit, so that the light valence-quark masses am_l are the same as the light sea-quark masses, which are degenerate. For the bottom quarks, we use the Fermilab interpretation [31] of the Sheikholeslami-Wohlert clover action [32]. In Table II, we list parameters for the valence quarks, with dimensionful quantities expressed in units of r_1, where r_1 is the characteristic distance between two static quarks such that the force between them satisfies r_1² F(r_1) = 1.0 [33,34]. The absolute lattice scale is determined by comparing the Particle Data Group (PDG) value of f_π with the lattice calculation of r_1 f_π, obtaining r_1 = 0.3117(22) fm [35]. The uncertainty quoted here encompasses the absolute variation of the lattice-spacing determination between MILC [36] and HPQCD [37].

Table I. Parameters of the MILC asqtad gauge-field ensembles used in this analysis. From left to right: approximate lattice spacing a in fm, the (light/strange)-quark mass ratio am_l/am_h, the coupling constant β, the tadpole parameter u_0 determined from the plaquette, the lattice volume, the number of configurations N_cfg, M_π L (L is the spatial length of the lattice), and the number of configurations of the four ensembles that were used in Ref. [5].

C. Currents and correlation functions We calculate the two-point functions C_P(t; p) and three-point functions C_J(t, T; p), where P = B, π labels the pseudoscalar meson, the operators O_P (O_P†) annihilate (create) the states with the quantum numbers of the pseudoscalar meson P on the lattice, and J = V^μ, T^{μν} are the lattice currents.
For the B meson, we use a mixed-action interpolating operator O_B, which is a combination of a Wilson clover bottom quark and a staggered light quark [5], where S(x, y) is a smearing function. For the pion, we use an operator constructed from two one-component staggered quarks. The current operators are constructed in a similar way (2.14), where the heavy-quark field spinor Ψ is rotated to remove tree-level O(a) discretization effects, via [31] Ψ(x) = (1 + a d_1 γ·D_lat) ψ.

Table II. Heavy-quark masses and other parameters used in the simulation. Starting in the third column: the clover parameter c_SW, the simulation b-quark mass parameter κ_b, the current rotation parameter d_1, the number of sources N_src, and the two source-sink separations T. Note that we use the same valence light-quark mass as m_l in the sea, except on the a ≈ 0.09 fm, m_l/m_h = 0.00465/0.031 ensemble, where a slightly different valence mass am_l = 0.0047 is used.

The three-point functions depend on the source-sink separation T between the π and B mesons. The signal-to-noise ratio is largely determined by T. A convenient approach is to fix the source-sink separation T in the simulations and then insert the current operators at every time slice in between. The source-sink separations T at different lattice spacings, sea-quark masses, and recoil momenta are chosen to be approximately the same in physical units. To minimize statistical uncertainties and reduce excited-state contamination, we tested data with different source-sink separations before choosing those shown in Table II. The B meson is at rest in our simulation, while the daughter pion is either at rest or has a small three-momentum. The light-quark propagator is computed from a point source so that one inversion of the Dirac operator can be used to obtain multiple momenta. The spatial source location is varied randomly from one configuration to the next to minimize autocorrelations.
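On a cubic spatial lattice, correlators at momenta related by permutations of the integer components are statistically equivalent and can be averaged to reduce noise. A minimal sketch of that bookkeeping follows; the lattice extent L and the momentum triples are illustrative.

```python
from itertools import permutations

L = 24  # illustrative spatial lattice extent in lattice units

def momentum_class(n):
    """All distinct permutations of an integer triple n, in units of 2*pi/L."""
    return sorted(set(permutations(n)))

def average_over_directions(values):
    """Average correlator values measured along equivalent momentum directions."""
    return sum(values) / len(values)

p100 = momentum_class((1, 0, 0))  # three equivalent directions
p110 = momentum_class((1, 1, 0))  # three equivalent directions
p111 = momentum_class((1, 1, 1))  # a single direction under permutations
```

Averaging over the three (1,0,0)-type directions, for example, uses the same quark propagator while cutting the statistical noise of that momentum channel.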
The b-quark source is always implemented with smearing based on a Richardson 1S wave function [38] after fixing to Coulomb gauge. We compute both the two-point function C_π(t; p) and three-point function C_J(t, T; p) at several of the lowest possible pion momenta in a finite box: p = (2π/L)(0, 0, 0), (2π/L)(1, 0, 0), (2π/L)(1, 1, 0), (2π/L)(1, 1, 1), and (2π/L)(2, 0, 0), where contributions from each momentum are averaged over permutations of components. We find the correlation functions with momentum (2π/L)(2, 0, 0) too noisy to be useful, so we exclude these data from our analysis. D. Two-point and three-point correlator fits In this subsection, we describe how to extract the desired matrix element from two- and three-point correlation functions. With our choice for the valence-quark actions and for the interpolating operators, the two- and three-point functions take the form [39]

C_P(t; p) = Σ_n (−1)^{n(t+1)} |Z_P^{(n)}|² [ e^{−E_P^{(n)} t} + e^{−E_P^{(n)} (N_t − t)} ], (2.16)
C_J(t, T; p) = Σ_{m,n} (−1)^{m(t+1)} (−1)^{n(T−t)} Z_π^{(m)} Z_B^{(n)} M_J^{(mn)} e^{−E_π^{(m)} t} e^{−M_B^{(n)} (T−t)}, (2.17)

where N_t is the temporal length of the lattice and the Z^{(n)} are the overlap factors of the interpolating operators with the states. Note that due to the staggered action used for the light quarks, the meson interpolating operators also couple to the positive-parity (scalar) states, which oscillate in Euclidean times t and T with the factors (−1)^{n(t+1)} and (−1)^{n(T−t)}. Our goal is to extract M_J^{(00)}, the ground-state matrix element, from these correlation functions. To suppress the contributions from the positive-parity states to the ratio, we follow the averaging procedure of Ref. [5], which exploits the oscillating sign in their Euclidean time dependence. The time averages can be thought of as a smearing over neighboring time slices {t, t + 1, t + 2} × {T, T + 1} to significantly reduce the overlap with opposite-parity states. Denoting the averaged correlators by C̄_P and C̄_J, we then use the ratio [5]

R_J(t, T; p) = C̄_J(t, T; p) / √( C̄_π(t; p) C̄_B(T − t; 0) ) × √( 2E_π^{(0)} / ( e^{−E_π^{(0)} t} e^{−M_B (T−t)} ) ), (2.20)

where E_π^{(0)}(p) and M_B are the ground-state pion energy and B-meson rest mass, respectively.
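Ratios of this type are commonly fit allowing for a leading excited-state term, R(t) = f (1 + A e^{−ΔM(T−t)}); once the splitting ΔM is fixed from the two-point fits, the model is linear in (f, f·A) and can be solved in one least-squares step. The sketch below uses synthetic numbers, not the paper's data.

```python
import numpy as np

# Synthetic ratio data of the form R(t) = f * (1 + A * exp(-dM * (T - t))).
f_true, A_true, dM, T = 0.95, -0.15, 0.40, 18
t = np.arange(5, 15, dtype=float)
R = f_true * (1.0 + A_true * np.exp(-dM * (T - t)))

# Linear model: R(t) = f + (f*A) * exp(-dM * (T - t)), solved with lstsq.
design = np.column_stack([np.ones_like(t), np.exp(-dM * (T - t))])
(f_fit, fA_fit), *_ = np.linalg.lstsq(design, R, rcond=None)
A_fit = fA_fit / f_fit
```

The linearity is the practical payoff of determining ΔM separately: no nonlinear minimizer is needed for the ratio itself, which keeps the fit stable on noisy data.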
The uncertainty in the B-meson rest mass has a significant impact on the ratio R_J, so we follow a two-step procedure. We first determine the pion and B-meson ground-state energies as precisely as possible using the corresponding two-point functions. We then feed these ground-state energies into the ratio R_J, preserving the correlations with jackknife resampling. For the pion two-point functions at zero momentum, the oscillating states (the terms in Eq. (2.16) with odd powers of −1) do not appear. Thus, we fit the pion two-point functions using Eq. (2.16) with the lowest two non-oscillating states (n = 0, 2). For the two-point functions with nonzero momentum, the contribution from oscillating states is small but noticeable. We find that we only need to include the lowest three states (n = 0, 1, 2) in the fits. Because the momenta we consider are typically small compared to 2π/a, the continuum dispersion relation is satisfied within statistical errors, as shown in Fig. 3. In the main analysis, we therefore use the mass M_π from the zero-momentum fit and the continuum dispersion relation to set E_π^{(0)}(p) = √(|p|² + M_π²) for nonzero momentum. Because the zero-momentum energy has a significantly smaller statistical error than that of nonzero momentum, using this choice and the dispersion relation for the nonzero-momentum energy leads to a more stable and precise determination of M_J^{(00)}. Table IV lists the relevant fit ranges for the two-point fits. In the two-point correlators (except the zero-momentum pion two-point correlators), the noise grows rapidly with increasing t, the distance away from the pion source in the temporal direction. The data points at large t are not useful, and including them would lead to a larger covariance matrix, which would be difficult to resolve given the limited number of configurations. We choose the upper end of the fit ranges, t_max, such that the relative error does not exceed 20%.
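The jackknife propagation and dispersion-relation check described above can be sketched as follows; the sample size, noise level, and momentum are made-up numbers for illustration only.

```python
import numpy as np

def jackknife(samples, estimator):
    """Single-elimination jackknife: return (central value, error) of an estimator."""
    samples = np.asarray(samples)
    n = len(samples)
    loo = np.array([estimator(np.delete(samples, i)) for i in range(n)])  # leave-one-out
    center = loo.mean()
    err = np.sqrt((n - 1) / n * np.sum((loo - center) ** 2))
    return center, err

rng = np.random.default_rng(0)
M_pi, p = 0.14, 0.31                               # GeV; illustrative values
E_true = np.sqrt(p**2 + M_pi**2)                   # continuum dispersion relation
fake_E = E_true + 2e-3 * rng.standard_normal(200)  # synthetic fitted energies
E_jk, E_err = jackknife(fake_E, np.mean)
```

For the sample mean the jackknife central value reproduces the plain average exactly; its value lies in nontrivial estimators (fit outputs), where resampling the underlying configurations preserves the correlations between quantities, as the text describes.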
The lower end, t_min, is chosen such that the excited-state contamination is sufficiently small, i.e., the resulting central values of the ground-state energy are stable against variations in t_min, as shown in Fig. 4 (left). In our analysis, there are two places where quantities from the B-meson two-point functions enter. To choose t_max, we again apply the 20% rule on the relative error. The lower bound t_min is chosen in a manner similar to the pion two-point fits, and the stability plot is shown in Fig. 4 (right). The chosen fit ranges are shown in Table IV. We test for autocorrelations by blocking the configurations on each ensemble with different block sizes, and then using a single-elimination jackknife procedure to propagate the statistical error to the two-point correlator fits for M_π and M_B^{(0)}. We do not observe any autocorrelations in our data, as illustrated in Fig. 5, and choose not to block the data. The ratios in Eq. (2.20) have the advantage that the wavefunction overlap factors Z_P cancel, but the trade-off is that we need an additional factor (the square-root term on the right-hand side) to remove the leading t dependence in the ratio. If the lowest-lying states dominated the ratio R_J, then it would be constant in t and proportional to the lattice form factor f_J. The subscript J now runs over ⊥, ∥, and T, corresponding to the operators V^i, V^4, and T^{4i}, respectively. Our previous analysis employed a simple plateau fit, constant in time. With our improved statistics, the small excited-state contributions to the ratio are significant and cannot be neglected. On the other hand, even with our improved statistics, we find that contributions to R_J from wrong-parity states are still negligible. We use two different fit strategies to remove excited-state contributions and use the consistency between them as an added check that any remaining excited-state contamination is negligibly small. The first strategy starts with the ratio in Eq.
(2.20) and minimally extends the plateau fitting scheme by including the first excited state of the B meson in the following form:

R_J(t, T; p) = h_J [ f_J^lat + A_J e^{−ΔM_B (T−t)} ], (2.21)

where A_J and f_J^lat are unconstrained fit parameters, ΔM_B is the lowest energy splitting of the pseudoscalar B meson, and the prefactors are h_∥ = 1, h_⊥ = p_π^i, and h_T = (√2 M_B p_π^i)/(M_B + M_π). We choose the fit ranges for R_J such that contributions from pion excited states to R_J can be neglected. The fit parameter ΔM_B is determined by the B-meson two-point correlators. In practice, we fit the ratio in Eq. (2.21) along with the B-meson two-point correlation functions with ΔM_B as a common parameter. We find it beneficial in the combined fit to include both the local and smeared two-point correlation functions. We use 2+2 states for both correlators, but use a different set of fit ranges (listed in Table V). The results of these two-point fits are shown in Fig. 6. The agreement in the B-meson energies between the separate and combined fits is very good, but the combined fit leads to smaller errors. To summarize our strategy, for the case of zero momentum, we fit the ratio R_∥(t) together with the local and smeared B-meson two-point correlators. Figure 7 shows an example of these fits. Figure 8 shows the stability plots of R_⊥ against variations in the fit ranges of the ratio fits, and variations in the fit ranges of both two-point correlators. The preferred fit ranges are set to be in the stable region under these variations. Our second fit strategy includes excited-state contributions from both the pion and the B meson. It starts with a different ratio, without time averages, which ensures that there are enough data points to constrain all the parameters. We find that this is sufficient to remove contributions from excited states, and we therefore adopt this method for the main analysis. E.
Matching We match the lattice currents to continuum QCD with the relation J ≐ Z_J J^lat, where J and J^lat denote the vector or tensor currents in the continuum and lattice theories, respectively, and "≐" means "has the same matrix elements" [40]. We calculate the current renormalization with the mostly nonperturbative renormalization method [18,41],

Z_{J_bl} = ρ_{J_bl} √( Z_V4bb Z_V4ll ),

where Z_V4bb and Z_V4ll are the matching factors for the corresponding flavor-conserving vector currents. These factors capture most of the current renormalization. The remaining flavor off-diagonal contribution to the matching factor, ρ_{J_bl}, is close to unity. We calculate the factors Z_V4bb and Z_V4ll nonperturbatively for each ensemble by computing the matrix elements of the flavor-conserving vector currents, where the lattice current V⁴_ll is a bilinear of light staggered-quark fields and V⁴_bb is a bilinear of clover heavy-quark fields. The factors Z_V4bb and Z_V4ll are listed in Table VI. Because there is very little m_l dependence in the factor Z_V4ll, we use the same Z_V4ll for ensembles with different light-quark masses but the same lattice spacing. The factor Z_V4bb depends crucially on the heavy b-quark mass, though it has negligible light-quark mass dependence. We use lattice perturbation theory [42] to compute the remaining renormalization factors,

ρ_J = 1 + α_s^V(q*) ρ_J^{[1]},

where we take the strong coupling in the V scheme [42] at a scale q* that corresponds to the typical gluon loop momentum. In practice, we choose q* = 2/a. The details of the calculation of the one-loop coefficients ρ_J^{[1]} will be presented elsewhere. The values used in this work are shown in Table VI. F. Heavy-quark mass correction In the clover action, the hopping parameter κ_b corresponds to the bare b-quark mass. When we started generating data for this analysis, we had a good estimate for the bottom-quark κ_b on each ensemble, but not the final tuned values, which were obtained as described in Appendix C of Ref. [43].
We therefore need to adjust the form factors a posteriori to account for the slightly mistuned values of κ_b. The κ_b parameters are adjusted so that the corresponding B_s kinetic masses match the experimentally measured value [43]. Table VII shows both the simulation and final tuned κ_b values. For some ensembles, the difference between the two is as large as 7σ of the statistical uncertainty associated with the tuning procedure. We study the κ_b dependence of the lattice form factors by generating data on the a ≈ 0.12 fm, m_l/m_h = 0.2 ensemble with two additional values of κ_b.

Table VI. The parameters for the renormalization of the form factors. The first two columns label the ensemble with its approximate lattice spacing and the sea light- and strange-quark mass ratio. The third column is the simulation κ_b. The fourth and fifth columns are the nonperturbative heavy-heavy and light-light renormalization factors. The sixth, seventh, and eighth columns are the one-loop estimates of ρ_V4, ρ_Vi, and ρ_T, respectively. The tensor current has a nonzero anomalous dimension; the numbers reported here match to the MS-bar scheme at renormalization scale μ = m_2, which corresponds to the pole mass.

We expand each form factor about the reference point m̃_2^{-1} (which corresponds to the tuned κ_b) as follows:

f(m_2^{-1}) = f(m̃_2^{-1}) [ 1 + (−∂ ln f/∂ ln m̃_2)(ln m̃_2 − ln m_2) ], (2.28)

where the masses and E_π are all in r_1 units. To obtain f at the reference point, we need to find the dimensionless normalized slope −(∂ ln f/∂ ln m̃_2). We use exactly the same procedure as described in Sec. II D for κ_b = 0.0901 to obtain the B → π ℓν form factors f_{∥,⊥,T} for the additional values κ_b = 0.0860 and 0.0820. We apply the matching factors given in Table VI. Finally, we take m̃_2 to be the kinetic mass corresponding to κ_b = 0.0868 (the tuned kappa given in Table VII) and use it as the reference point. We fit each form factor at each momentum for the three data points to the linear form given in Eq. (2.28), taking f(m̃_2^{-1}) and −(∂ ln f/∂ ln m̃_2) as fit parameters. The result is shown in Fig. 10 (left).
As shown in the plot, the normalized slope −(∂ ln f/∂ ln m̃_2) has a very mild E_π dependence. Therefore, for each form factor we perform a correlated fit to all momenta to obtain a single common normalized slope. The result is shown in Table VIII. Fitting the data to a linear form in E_π results in a slope statistically consistent with zero. To examine the light-quark mass dependence of the normalized slopes, we repeat the same procedure for the B → K semileptonic form factors with a heavier daughter valence quark, am_s = 0.0349, which is close to the physical strange-quark mass. The results are plotted in Fig. 10 (right). We fit the points of each form factor to a constant and tabulate the results in Table VIII. Comparing the normalized slopes for f^{B→π} and f^{B→K}, taking into account statistical correlations, we observe a mild but statistically significant light daughter-quark mass dependence. So we fit the slopes for f^{B→π} and f^{B→K} simultaneously to a linear form,

−∂ ln f/∂ ln m̃_2 = c + d (m_l/m_s),

where m_l/m_s = 0.2 and 1.0 for f^{B→π} and f^{B→K}, respectively. The results for the parameters c and d are given in Table VIII. Note that the results in Table VIII are also used in Ref. [44]. We use the parameters c and d in Table VIII to determine the normalized slope −(∂ ln f/∂ ln m̃_2) for each ensemble. Although the dependence of the normalized slopes on the light daughter-quark mass is resolvable, the effects are small for the ensembles we use in the analysis (with light daughter-quark masses ranging from 0.05 m_s to 0.4 m_s). We expect similarly small effects from the spectator-quark masses. We also expect that the lattice-spacing dependence of the normalized slopes is small, because each is a dimensionless ratio. We therefore correct each lattice form factor on each ensemble by the factor

1 + (−∂ ln f/∂ ln m̃_2)(ln m_2 − ln m̃_2),

where m_2 and m̃_2 are the kinetic masses corresponding to the simulation κ_b and tuned κ_b, respectively. The resulting relative shift for each ensemble is shown in Table VII.
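The a posteriori κ_b adjustment then amounts to a one-line multiplication per ensemble. The slope and kinetic masses below are illustrative stand-ins, not entries from Tables VII or VIII.

```python
import math

def kappa_correction(f_sim, slope, m2_sim, m2_tuned):
    """Multiply a simulated form factor by 1 + slope * ln(m2_sim / m2_tuned).

    slope is the normalized slope -(d ln f / d ln m2) at the tuned mass;
    m2_sim and m2_tuned are the kinetic masses for the simulated and tuned kappa_b."""
    return f_sim * (1.0 + slope * math.log(m2_sim / m2_tuned))

# Illustrative numbers only: a 2% mass mistuning and an O(1) normalized slope
# translate into a sub-2% shift of the form factor, consistent with the text's
# observation that the corrections stay small.
f_corr = kappa_correction(f_sim=1.10, slope=0.9, m2_sim=1.02, m2_tuned=1.00)
```

Because the correction is first order in the logarithm of the mass ratio, even the ensembles with a 7σ κ_b mistuning pick up only a percent-level shift.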
Although the corrections to κ_b itself are significant for some ensembles, the corresponding corrections to the form factors are much smaller (≲ 2.3%), as a consequence of the small normalized slopes. III. CHIRAL-CONTINUUM EXTRAPOLATION Here we extrapolate the form factors at four lattice spacings with several unphysical light-quark masses to the continuum limit and physical light-quark mass. We use heavy-meson rooted staggered chiral perturbation theory (HMrSχPT) [45,46] in the hard-pion and SU(2) limits. We also incorporate heavy-quark discretization effects into the chiral-continuum extrapolation. A. SU(2) staggered chiral perturbation theory in the hard-pion limit The full-QCD next-to-leading-order (NLO) HMrSχPT expression for the semileptonic form factors can be written as in Eq. (3.1) [45], where J = ⊥, ∥, T. Note that the expressions are in units of the mass-independent scale r_1, and the coefficients c_i^J have dimension r_1^{−3/2}. The leading-order terms are given in Eqs. (3.2)-(3.3). The terms δf_{J,logs} and δD_logs are the one-loop nonanalytic contributions in the chiral expansion, and depend upon the light pseudoscalar meson mass and energy [45]. Note that the tensor form factor has the same pole structure as f_⊥; we therefore use the same pole location and nonanalytic corrections for f_T as for f_⊥. The terms analytic in χ_i are introduced to cancel the scale dependence arising from the nonanalytic contribution in Eq. (3.1). The dimensionless variables χ_i are proportional to the quark mass, pion energy, and lattice spacing, and are defined in Eqs. (3.4)-(3.7). Note that the valence mass m_l is equal to the sea mass in our data. The low-energy constant μ relates the pseudoscalar meson masses to the quark masses, and Δ_ξ is the mass splitting for staggered taste ξ. The average taste splitting in Eq. (3.7) is Δ̄ ≡ (1/16) Σ_ξ Δ_ξ. The quantities μ and Δ_ξ are obtained from the MILC Collaboration's analysis of light pseudoscalar mesons and are shown in Table IX. We constrain the parameter g_{B*Bπ} with a prior.
The value of g_{B*Bπ} has been calculated with lattice QCD in the static limit [47,48] or with a relativistic b quark [49] on gauge fields generated with domain-wall or Wilson sea quarks [50]. We set the prior, based on these lattice-QCD calculations, to be g_{B*Bπ} = 0.45 ± 0.08, where the error covers the differences among the different determinations of the coupling. The LO and NLO coefficients, {c_i, 0 ≤ i ≤ 5}, are well determined by the data. Note that the formula given in Eq. (3.1) is slightly different from that in Ref. [5], where the NLO coefficients therein are our |c^J_i/c^J_0| (i ≠ 0). With the introduction of the variables χ_i defined in Eqs. (3.4)-(3.7), we should expect |c^J_i/c^J_0| ∼ O(1); in the actual fits, the coefficients are consistent with this expectation. Note that the coefficients c^J_i are dimensionful, and they are evaluated here in r_1 units. We constrain them with loose priors. Standard HMrSχPT uses the assumption that the external and loop pions are soft, i.e., E_π ∼ m_π [51,52]. In our work, however, the external pion energies can be quite large, in some cases as much as 7 times the physical pion mass, and standard HMrSχPT may not converge well enough in this range. Indeed, the fit of the lattice form factor f_∥ to Eq. (3.1) gives a poor confidence level (p ∼ 0), which is not improved by including higher-order contributions in the chiral expansion. Bijnens and Jemos [53] proposed an approach called hard-pion χPT, in which the internal energetic pions are integrated out and the E_π dependence is absorbed into the low-energy constants. Since hard-pion χPT provides a more appropriate description of our data, we adopt it in this analysis. The explicit expressions for the hard-pion nonanalytic terms δf^hard_{J,logs} using SU(3) chiral perturbation theory, as well as its SU(2) limit, are given in the appendix of Ref. [44]. We take the SU(2) limit by integrating out the strange quark. The resulting expression has no explicit strange-quark mass dependence, which has been absorbed into the values of the low-energy constants.
The SU (2) hard-pion χPT provides a better description of our f data than the SU(3) hard-pion χPT (p value 0.29 versus 0.09 from the NLO χPT fit with priors). We also find that the chiral expansion converges faster using SU(2) χPT when including higher-order chiral corrections in the fit to our data, which results in smaller χPT truncation errors than from using SU (3) χPT. Finally, Ref. [52] provides phenomenological arguments to prefer the application of SU(2) HMχPT over SU(3) to lattice-QCD data. We therefore use the SU(2) formula for our central value fit, but consider SU(3) fits in our systematic error analysis. Based on the above discussion, we use the following conditions for f ⊥ , f and f T in Table IX where Eq. (3.10) is a consequence of the hard-pion limit, Eq. (3.11) and the factor 2 in the first term of Eq. (3.12) follow from the fact that we take m l = m l and m h has been integrated out, Eq. (3.12) preserves the chiral scale independence of the SU(2) hard-pion NLO expression, and a 2 ∆ I is the taste splitting of the taste-singlet pseudoscalar meson mass. The fits of the lattice form factors using NLO SU(2) hard-pion HMrSχPT have acceptable confidence levels. We find, however, that there is a sizable shift in the fit result when including higher-order terms in the χPT expansion. We therefore need to study the effects of higher-order contributions in the chiral expansion. B. Next-to-next-to-leading order (NNLO) corrections We supplement the NLO SU(2) hard-pion χPT expression with the following NNLO analytic terms such that the complete NNLO χPT expression is, (3.14) Note that f NLO J here uses the hard-pion and SU (2) χPT, as manifested in Eqs. (3.9)-(3.12). All light-quark discretization errors that arise from taste violations are included here; generic errors from light-quark and gluon action, which are O(α s a 2 Λ 2 ), are discussed in Sec. III C. 
Again, the expectation from chiral perturbation theory is that the coefficients of these analytic terms should satisfy |c^J_i/c^J_0| ∼ O(1) when written in terms of the dimensionless variables χ given in Eqs. (3.4)-(3.7).

C. Heavy-quark discretization effects

The chiral-continuum extrapolation implemented in Eq. (3.14) accounts for the discretization effects from the gluons and the light staggered quarks. Discretization effects from the heavy b quark need a separate treatment. Heavy-quark discretization errors arise from the short-distance mismatch of higher-dimension Lagrangian and current operators [40,41]. By power counting, such mismatches are of O(a^2 Λ^2) or O(α_s aΛ), where Λ is a QCD scale appropriate for the heavy-quark expansion. We follow the same method for incorporating the heavy-quark discretization effects described in Ref. [35] and include an error function δf^HQ_J in Eq. (3.1), where the mismatch functions f_{E,X,Y,B,3} are given in the Appendix of Ref. [35]. The error functions f_B, f_E arise from mismatches of operators in the Lagrangian, while the functions f_X, f_Y, f_3 arise from those of the vector current. The last term in Eq. (3.15) accounts for higher-order heavy-quark and generic light-quark and gluon errors not included in Eq. (3.14), which is of order α_s (aΛ)^2. The fit parameters are constrained with priors: 0 ± 1 for z_Y, z_B, z_0 and 0 ± √2 for z_X, z_3; the latter two are wider because the functions f_X and f_3 both appear twice [41]. To summarize, after incorporating the heavy-quark discretization effects, the complete NNLO SU(2) hard-pion HMrSχPT expression is obtained by adding δf^HQ_J to Eq. (3.14). To examine the size of discretization effects, we plot the form factors f_⊥ and f_∥ with light-quark mass m_l = 0.2 m_h at each lattice spacing versus a^2 in Fig. 12. As we can see from the plots, the observed lattice-spacing dependence is very mild, with the data points at the largest lattice spacing (a ≈ 0.12 fm) only about two statistical sigma away from the continuum limit.

IV.
SYSTEMATIC ERROR BUDGET

The error output from the central-value fit described in Sec. III C already includes the systematic errors due to the light- and heavy-quark discretization effects and the uncertainty on g_{B*Bπ}. We now discuss other sources of systematic uncertainty. We tabulate systematic error budgets for f_+ and f_0 at a representative kinematic point, q^2 = 20 GeV^2, within the range of lattice data in Table X. We also present the error budget for the full simulated lattice momentum range in Fig. 17. (Figure legend: form-factor data at pion energies E_π r_1 = 0.28, 0.8, and 1.2 with m_l = 0.2 m_s, for various pion momenta; a slight extrapolation/interpolation is applied to adjust the raw data to the same E_π r_1. The range E_π r_1 ∈ [0.28, 1.2] is used in the q^2 extrapolation to the full kinematic range.)

A. Chiral-continuum extrapolation

As discussed above, our central fit uses NNLO SU(2) hard-pion HMrSχPT including contributions from heavy-quark discretization effects and the uncertainty in g_{B*Bπ}. Here we consider variations of the fit function and of the data included, to estimate truncation and other systematic effects. To disentangle the uncertainties due to these variations, we turn off the heavy-quark discretization error terms and examine the variations one at a time. First, we study the effects of truncating the chiral expansion by adding next-to-NNLO (NNNLO) analytic terms δf^NNNLO_{J,analytic} in our fits, with coefficients constrained by the same priors as the NNLO coefficients. The variations in f_+ due to changing the order of the χPT analytic terms are shown in Fig. 13; the resulting error as a function of q^2 is shown in Fig. 14. The standard soft-pion HMrSχPT fits of f_⊥ have reasonable confidence levels, but those of f_∥ do not. We therefore estimate the effect of using the hard-pion formalism by using standard HMrSχPT for f_⊥ while still employing hard-pion χPT for f_∥. The resulting difference from the preferred fit is small, less than 1% for f_+.
The same conclusion also holds for the form factor f_0. We use SU(2) χPT for our central fit and estimate the associated systematic effect by comparison with the corresponding SU(3) fits. To check how our results are affected by data with high momenta, we also perform a fit excluding data with p = (2π/L)(1, 1, 1). As shown in Fig. 14, the form factors f_+ and f_0 from the low-momentum fit agree very well with those from the preferred full-data fit in the region q^2 > 20 GeV^2. The systematic difference increases for small q^2, where the highest-momentum data provide important information. Figure 14 summarizes the effects of all these variations. Comparing the deviations between the central values of the alternate and preferred fits to the statistical error of the preferred fit, we find that the deviations are almost always smaller than the statistical error of our preferred fit. This confirms that the fit errors of our preferred fits adequately account for the systematic effects associated with these variations. We therefore do not quote any additional systematic error due to these sources. We include heavy-quark discretization effects in our chiral-continuum extrapolation. As a consistency check, we compare our result with a power-counting estimate obtained by evaluating δf^HQ_J in Eq. (3.15) at the a ≈ 0.045 fm lattice spacing, setting the coefficients z_i = 1 and taking Λ = 500 MeV for the heavy-quark scale. We find δf^HQ_J ≲ 1.5%. Figure 15 shows that the NNLO fit error (without the heavy-quark discretization effects), added to the 1.5% power-counting estimate in quadrature, yields a similar error to that of the full fit. Thus, again, it is not necessary to add an additional error to that of the preferred chiral-continuum fit.

B. Light- and bottom-quark mass uncertainties

The effect of mistuning the b-quark mass in our simulation has been largely reduced via the corrections described in Sec. II F. Errors still arise, however, from the uncertainty in the tuned value κ_b itself and from the procedure for shifting the form factors. From Eq.
(2.30) we estimate the relative error as the combination of two contributions: δ(1/m_2), which is related to the uncertainty due to the error in κ_b, and δ(∂ ln f/∂ ln m_2), the uncertainty on the normalized slope. The values of the physical κ_b with errors are given in Table VII, and we can find the statistical uncertainty of the normalized slope using Table VIII. Using Eq. (2.30), we find that the value of δf/f on all ensembles is at most 0.6%. We take the average value of δf/f over all ensembles, which is 0.4%, to be the error due to tuning κ_b, and assign the same error to f_+ and f_0. To obtain the physical form factors, we evaluate the result of the chiral-continuum fit at the physical light- and strange-quark masses determined from the MILC Collaboration's analysis of light pseudoscalar mesons [19]. (Although we use SU(2) χPT, we include an analytic term proportional to χ_sea to allow for a slight shift to the physical strange sea-quark mass.)

We convert the lattice form factors and pion energies to physical units using the relative scale r_1/a determined from the static-quark potential (see Table III) and the absolute scale r_1 = 0.3117(22) fm [35]. The statistical uncertainties on r_1/a are negligible. We propagate the uncertainty in r_1 by shifting it ±1σ and repeating the chiral-continuum fit. We find shifts of at most 0.5% in the range of simulated momenta.

D. Current renormalization

With the mostly nonperturbative renormalization procedure that we use for the heavy-light currents, there are two sources of error. The first is due to the nonperturbatively calculated flavor-diagonal factors Z_{V4_bb} and Z_{V4_ll}. Their values and errors are given in Table VI. We estimate the systematic error due to the uncertainties of Z_{V4_bb} and Z_{V4_ll} by varying their values by one sigma and looking for the maximum deviations in the form factors f_+ and f_0. The resulting deviations are small, ranging from 0.4% to 0.5%.
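The κ_b-tuning error propagation just described can be sketched as follows. All inputs are illustrative placeholders (not the Table VII/VIII values); the two contributions are combined in quadrature, which is an assumption about how the paper combines them:

```python
import math

# Hedged sketch of the kappa_b-tuning uncertainty: the applied shift in f is
# (d ln f / d ln m2) * Delta(ln m2), so its error receives one contribution
# from the kinetic-mass (kappa_b) uncertainty and one from the slope's own
# uncertainty. All numbers are illustrative placeholders.

slope, d_slope = 0.20, 0.03              # normalized slope and its error
m2_sim, m2_tuned, d_m2 = 5.1, 5.0, 0.02  # kinetic masses (lattice units) and error

dln_m2 = math.log(m2_tuned / m2_sim)     # logarithmic shift applied to f
err_from_kappa = abs(slope * d_m2 / m2_tuned)  # from the tuned-mass uncertainty
err_from_slope = abs(d_slope * dln_m2)         # from the slope uncertainty
df_over_f = math.hypot(err_from_kappa, err_from_slope)  # combined in quadrature
```

With percent-level mass uncertainties and small slopes, the resulting δf/f naturally lands at the sub-percent level quoted in the text.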
The second source of error is due to the truncation of the perturbative expansion in the calculation of the ρ_J. Because the ρ_J are defined from ratios of renormalization constants, their perturbative corrections are small by construction. Indeed, as seen in Table VI, for V^4_{bl} they are less than 1%, and for V^i_{bl} they range between 2-3%. For the scale-independent vector current, we observe that the one-loop corrections to ρ_{V^4_bl} are smaller than those for ρ_{V^i_bl}, and we use the same error estimate for both. In order to accommodate possible accidental cancellations, we take the error as 2 ρ^{[1]}_max α_s^2, where ρ^{[1]}_max α_s is an upper bound on the one-loop correction to V^μ_{bl} in the range of heavy-quark mass am_0 ≤ 3 that corresponds to the range of lattice spacings included in our analysis. The coupling is evaluated at the scale of the next-to-finest lattice spacing in our calculation, a ≈ 0.06 fm. This procedure yields an error estimate of 1%, which is larger than the one-loop correction to ρ_{V^4_bl} over most of the mass range, and amounts to about 50% of the one-loop correction to ρ_{V^i_bl} in the mass range that corresponds to the three finest lattice spacings. This leads to an error of 1% for both f_+ and f_0 due to the perturbative renormalization factors.

E. Finite volume effects

We estimate the size of the finite-volume effects by replacing the infinite-volume chiral logarithms with discrete sums and repeating the chiral-continuum extrapolation. The change in our preferred fit after including finite-volume corrections is very small, less than 0.01%, which we simply neglect. The subdominant errors, such as those from heavy-quark mass tuning, the current renormalization, etc., have mild q^2 dependence, as can be seen in Fig. 16. We therefore treat them as constant in q^2 when propagating them.
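Propagating these flat subdominant errors amounts to a quadrature sum over sources. A minimal sketch with placeholder component sizes (loosely inspired by the percentages quoted in the text, not the actual Table X entries):

```python
import math

# Minimal sketch of propagating the flat subdominant errors: take each source's
# maximum over the simulated q^2 range and add the sources in quadrature.
# The component sizes below are placeholders, not the paper's Table X values.

components = {
    "kappa_b tuning":   0.004,
    "Z_V4bb, Z_V4ll":   0.005,
    "rho_J truncation": 0.010,
    "r_1 scale":        0.005,
}
delta_f = math.sqrt(sum(err**2 for err in components.values()))
```

Because the sum is in quadrature, the largest single source dominates and the total stays well below the linear sum of the components.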
For each source, we take the maximum estimated error in the simulated q^2 range; we then add these individual error estimates in quadrature to obtain an overall additional systematic error δf. We find both δf_+ and δf_0 to be 1.3%. In the next section, we will use our result for f_+ to obtain |V_ub| via a combined fit with experimental data to the z expansion. Due to phase-space suppression, the experiments have poor access to the large-q^2 region. On the other hand, the lattice-QCD form factor has a larger error than experiment at small q^2, due to the sizable q^2 extrapolation. As discussed below, the value of |V_ub| is mostly determined in the region q^2 ≈ 20 GeV^2, which is at the low end of the q^2 range where the lattice-QCD form-factor error is still small. We therefore provide tabulated error budgets for the two form factors f_+, f_0 from our calculation at the particular kinematic point q^2 = 20 GeV^2 in Table X. The error on f_+(20 GeV^2) is approximately 3.4%, which is about one third of the error on our previously determined form factor in Ref. [5]. We compare our results for f_+ and f_0 with full errors, which are obtained by adding the fit errors from the χPT fits and δf in quadrature, with previous lattice-QCD calculations in Fig. 18. Our result for f_+ agrees with previous results obtained at q^2 ≳ 17 GeV^2 from Refs. [4,5,13], but is more precise. Our result for f_0 is consistent with Ref. [13], but not with Ref. [4].

(Figure 18 caption: Comparison of f_+ (left) and f_0 (right) from this work with previous lattice-QCD calculations by HPQCD [4], Fermilab/MILC [5] and RBC/UKQCD [13].)

V. z EXPANSION AND DETERMINATION OF |V_ub|

The chiral-continuum extrapolation described in the previous sections yields the form factors in the range 17 GeV^2 ≤ q^2 ≤ 26 GeV^2. In this section, we extrapolate them to the full kinematic range using the model-independent z expansion.
The form factors resulting from the chiral-continuum extrapolation are functions specified by a set of parameters. One could, in principle, incorporate the z expansion with the χPT expansion from the outset (see, e.g., Ref. [55]). With such an approach, however, the coefficients of the z expansion would have a nontrivial dependence on m_l and a that must be derived from the underlying chiral effective theory. Because the dependence of the coefficients on a and m_l is unknown, we instead carry out the extrapolation in two steps, taking the chiral-continuum extrapolated results and feeding them into the z expansion. We introduce a functional method to perform the z expansion. We also apply the z expansion to the experimental data and, after verifying that the fits to experiment and to lattice QCD are consistent, we carry out a combined fit to obtain |V_ub|. A byproduct of the last step is a precise determination of f_+(q^2), constrained by lattice QCD at high q^2 and experiment at low q^2.

A. z expansions of heavy-light semileptonic form factors

The z expansion involves mapping the variable q^2 = t to a new variable z by [56]

    z(t, t_0) = (√(t_+ − t) − √(t_+ − t_0)) / (√(t_+ − t) + √(t_+ − t_0)),

where t_± = (M_B ± M_π)^2 and t_0 is a free parameter. A suitable choice of t_0 centers the full kinematic range for semileptonic B → π ℓν decay around the origin z = 0 and, moreover, restricts z to |z| < 0.28. The small, bounded interval, together with a constraint from unitarity, ensures convergence of the expansion. As discussed below, we find in practice that the convergence is rapid. The form factors f_+ and f_0 are analytic in z except for the branch cut [t_+, ∞) and poles in [t_−, t_+]. We can write

    f_i(q^2) = 1/(P_i(z) φ_i(z)) Σ_n a_n z^n,

where P_i(z), i = +, 0, are the Blaschke factors, which are introduced to remove the poles of f_i in the region [t_−, t_+], and φ_i(z) are the outer functions [56,57]. We choose simple outer functions φ_{+,0} = 1 and employ BCL-type formulas to expand the form factors; the associated unitarity inequality on the expansion coefficients involves the constants B_{0k}, whose values for the form factors f_+, f_0 we tabulate in Table XI. The inequality saturates when N_z → ∞.
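The mapping and the quoted bound |z| < 0.28 can be checked numerically. The masses below are PDG-like placeholders in GeV, and the choice t_0 = (M_B + M_π)(√M_B − √M_π)^2 is an assumption (a common symmetric choice), since the text does not fix t_0 at this point:

```python
import math

# Hedged numerical check of the z mapping for B -> pi l nu. Masses are
# PDG-like placeholders; t0 below is the standard symmetric choice.

MB, MPI = 5.27958, 0.13957
T_PLUS = (MB + MPI) ** 2
T_MINUS = (MB - MPI) ** 2
T0 = (MB + MPI) * (math.sqrt(MB) - math.sqrt(MPI)) ** 2

def z_of(t, t0=T0):
    """z(t, t0) = (sqrt(t+ - t) - sqrt(t+ - t0)) / (sqrt(t+ - t) + sqrt(t+ - t0))."""
    a = math.sqrt(T_PLUS - t)
    b = math.sqrt(T_PLUS - t0)
    return (a - b) / (a + b)

# The full semileptonic range 0 <= q^2 <= t_- maps into a small symmetric
# interval around z = 0:
z_hi = z_of(0.0)       # q^2 = 0 end of the range
z_lo = z_of(T_MINUS)   # q^2 = t_- end of the range
```

With this t_0, both endpoints land at |z| ≈ 0.280, reproducing the bound stated in the text.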
Although we do not incorporate this constraint into our fits, we check that our results satisfy it.

B. Functional method for the z expansion

In previous work, we used synthetic data points generated from the χPT fit as inputs to the z fit [5], but here we take a new approach. We exploit the facts that the χPT expansion is linear in the fit parameters and that it contains only a finite number of independent functions (see Eq. (3.1)). We construct a covariance function K(z_1, z_2), defined as the covariance of the form factor at any pair of points (z_1, z_2), using the set of functionals from the χPT expansion. Our new approach is to formulate the z expansion using the eigenfunctions of an integral operator defined from K(z_1, z_2). Let us start with the NLO χPT expression, Eq. (3.1), as an example. Because f_⊥ and f_∥ are linear in their coefficients c^⊥_i and c^∥_i, we can express them both in the compact form f_J = C_J · ξ + η, with ξ, η functions of q^2. The uncertainty of the function f is encoded in the uncertainty of the coefficient vector C_J. In all these expressions, we are only interested in the terms with E_π (or q^2) dependence and, hence, z dependence. We can now define the covariance function K(z, z′) in some valid domain [z_1, z_2]. Explicitly,

    K(z, z′) = Σ_{m,n} ξ_m(z) Cov_{mn} ξ_n(z′),

where Cov is the covariance matrix of the involved coefficients c^J_n,

    Cov_{mn} = ⟨δc_m δc_n⟩.   (5.12)

The covariance function K(z, z′) is a Mercer kernel [59], and Mercer's theorem ensures that there exists a set of orthonormal functions ψ_i(z), defined over the domain [z_1, z_2], such that

    K(z, z′) = Σ_i λ_i ψ_i(z) ψ_i(z′),

where λ_i, ψ_i are the eigenvalues and eigenfunctions of the operator L_K induced by the integral equation

    ∫ K(z, z′) ψ_i(z′) dz′ = λ_i ψ_i(z).   (5.14)

The form factor f(z) can naturally be expanded in the basis of ψ_i(z): we only need to project the expansions in Eqs. (5.3) and (5.4) onto the same basis.
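A discretized sketch of this construction follows. The polynomial basis and random (but valid, positive-semidefinite) coefficient covariance below are toy stand-ins for the χPT fit functions and their fitted covariance, and the integral operator is approximated by a Riemann sum on a grid:

```python
import numpy as np

# Toy sketch of the functional method: K(z1,z2) = xi(z1)^T Cov xi(z2), built
# from a placeholder polynomial basis xi_n(z) and a PSD coefficient covariance.
# The Mercer eigenproblem is solved on a grid, with the integral replaced by a
# Riemann sum (L_K ~ K * dz).

rng = np.random.default_rng(1)
z = np.linspace(-0.28, 0.28, 200)            # domain [z1, z2]
basis = np.vstack([z**n for n in range(4)])  # toy basis functions xi_n(z)

A = rng.normal(size=(4, 4))
cov = A @ A.T                                 # valid (PSD) coefficient covariance

K = basis.T @ cov @ basis                     # K(z_i, z_j) on the grid
dz = z[1] - z[0]
eigvals = np.linalg.eigvalsh(K * dz)[::-1]    # descending eigenvalues of L_K
```

Because the kernel is built from only four basis functions, at most four eigenvalues are nonzero; this mirrors the statement in the text that the extrapolated form factors are described by a handful of independent functions.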
The process of finding the expansion coefficients b n is equivalent to minimizing the following function (in analogy to the usual χ 2 function, replacing the sum over discrete points with an integral over a continuous variable): is the form factor function from the χPT fit expanded in terms of ψ i , and To summarize, we expand any form factor function f χPT obtained from the chiralcontinuum extrapolation in the basis formed by the eigenfunctions of its covariance function K(z, z ). We then project the z expansion onto the same basis. Finally, we solve for the expansion parameters b n by minimizing the function χ 2 lat defined in Eq. (5.15). C. Details on z expansion of the form factors In addition to the fit errors from the chiral-continuum fit, we also need to propagate the subdominant errors, which have very mild q 2 dependence. We treat them as constant in q 2 and add them in quadrature, obtaining δ f = 1.3% (which is the same for f + and f 0 ). To include this effective subdominant error to the fit, we slightly modify the covariance function defined in Eq. (5.10) by χPT fit functions for f ,⊥ (including the HQ discretization contributions). Many of them, however, are set to zero in the continuum limit or become constant once the light-quark mass is fixed at its physical value. In the end, the chiral-continuum extrapolated f + is described by only 6 independent functions. For f 0 , the number of independent functions is 7. Although we work in the functional basis in which the covariance function K(z, z ) is diagonalized, singular modes can arise because K(z, z ) is built upon Cov f , which itself may have singular modes. Figure 19 shows the spectra of the operator L K for form factor f +,0 . The spectrum of f 0 contains two very small eigenvalues 10 −12 , and they are well separated from the other modes. When we discard these two modes, the fit quality of the functional z fit improves from p = 0.03 to p = 0.46. 
For f_+, we do not need to apply any cut on the eigenvalues. We first consider separate fits of f_+ and f_0 without any constraints on the coefficients; the results are given in Table XII. The kinematic constraint f_+(q^2 = 0) = f_0(q^2 = 0) is satisfied automatically, as shown in Fig. 20 (left). The pattern of the coefficients (up to n = 4) from the fits for f_+ is shown against the heavy-quark estimate in Fig. 21 (left); they are consistent with each other. The results of the fits of f_{+,0} with the kinematic constraint are shown in Fig. 20 (right). With this constraint, we again examine how the fit varies with higher order N_z. We find that the fit central values do not change significantly when we change N_z from 4 to 5, in contrast to the case from 3 to 4, as shown in Fig. 22 and Table XIII. We perform several additional checks to confirm the stability of our results against various choices in the analysis. We also try removing the smallest eigenvalue from the covariance function K(z, z′) for f_+; we find that the resulting central values are essentially unaffected. Finally, we also try the fit using, instead of the BCL formula, the Boyd-Grinstein-Lebed (BGL) formula, which uses more complicated outer functions [57]. We find that the resulting form factors are within one standard deviation of the BCL result. To summarize, we obtain our preferred result from a simultaneous fit to f_+ and f_0 with N_z = 4 and the kinematic constraint imposed. The z coefficients with errors from our preferred fit and their correlation matrix are provided in Table XIV. This information is sufficient to reproduce the lattice form-factor results over the full kinematic range. Figure 24 shows a comparison of our results with other theoretical calculations of the form factors [13,61]. While our results are consistent with the previous results, ours are significantly more precise in the region z ≤ 0.1. Finally, it is interesting to compare the lattice form factors with theoretical expectations from heavy-quark symmetry.
In the soft-pion limit, the vector and scalar form factors f_+ and f_0 are related by a soft-pion theorem [62]; the relation can be extended to include the 1/m_b correction, which turns out to be simply an additional multiplicative factor (f_{B*}/f_B)^{−1} in the soft-pion limit. In Fig. 25 we plot the ratio (f_0/f_+)/(1 − q^2/M^2_{B*}), obtained using the coefficients of our preferred z expansion in Table XIV, compared with the prediction in the soft-pion limit from heavy-quark symmetry and χPT [62] (hatched band); the width of the hatched band reflects only the uncertainty from g_{B*Bπ} = 0.45(8) and not other theoretical errors. The difference of f_{B*}/f_B from one also provides a measure of Λ/m_b ∼ 6%, which would indicate that (Λ/m_b)^2 corrections may even be below the percent level. The lattice form factors agree with the theoretical expectation for q^2 ≳ 27 GeV^2.

D. Determination of |V_ub|

We now combine our lattice form factors with experimental data for B → π ℓν to obtain |V_ub|. The Standard-Model prediction for the differential branching fraction is τ_B dΓ/dq^2, where dΓ/dq^2 is defined in Eq. (1.1). The contribution from f_0 is negligible due to the small lepton mass. Given f_+(q^2), the branching fraction in the ith q^2 bin [q^2_i, q^2_{i+1}] is

    ∆B_i = τ_B ∫_{q^2_i}^{q^2_{i+1}} dq^2 (dΓ/dq^2).

For the combined lattice-plus-experiment z fit, we define a χ^2 for the experimental measurements ∆B^exp_i as

    χ^2_exp = Σ_{i,j} (∆B_i − ∆B^exp_i) [Cov_exp^{-1}]_{ij} (∆B_j − ∆B^exp_j),

where ∆B^exp_i is the experimentally measured branching fraction in the ith q^2 bin (i is a shorthand notation for each bin in each experiment included in the fit) and Cov_exp is the experimental covariance matrix, including the statistical and all systematic errors. We omit systematic correlations between the BaBar and Belle analyses, because they do not share any major systematic errors. The BaBar 6-bin and 12-bin data have very small overlaps in the selection of samples, so the statistical errors can be considered approximately uncorrelated.
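The partial branching fraction in a q^2 bin can be evaluated numerically. This sketch uses the standard massless-lepton rate dΓ/dq^2 = G_F^2 |V_ub|^2/(24π^3) |p_π|^3 |f_+(q^2)|^2; the single-pole form factor and its normalization are toy placeholders, not the paper's fit result, so the output is illustrative only:

```python
import math

# Hedged sketch: partial branching fraction for B -> pi l nu in a q^2 bin,
# with a massless lepton. The toy f+ (single-pole ansatz, placeholder
# normalization 0.9) is NOT the paper's fitted form factor.

GF = 1.1663787e-5                              # Fermi constant, GeV^-2
MB, MPI, MBSTAR = 5.27958, 0.13957, 5.32465    # masses in GeV (PDG-like)
TAU_B = 1.638e-12 / 6.582119569e-25            # B+ lifetime converted to GeV^-1
VUB = 3.72e-3                                  # central value quoted in this paper

def p_pi(q2):
    """Pion momentum in the B rest frame."""
    e_pi = (MB**2 + MPI**2 - q2) / (2.0 * MB)
    return math.sqrt(max(e_pi**2 - MPI**2, 0.0))

def fplus(q2):
    return 0.9 / (1.0 - q2 / MBSTAR**2)   # toy single-pole shape

def delta_B(q2_lo, q2_hi, n=2000):
    """Midpoint-rule integral of tau_B * dGamma/dq^2 over the bin."""
    pref = GF**2 * VUB**2 / (24.0 * math.pi**3)
    dq = (q2_hi - q2_lo) / n
    total = sum(p_pi(q2_lo + (i + 0.5) * dq)**3
                * fplus(q2_lo + (i + 0.5) * dq)**2 for i in range(n))
    return TAU_B * pref * total * dq

bf_high_bin = delta_B(20.0, 25.0)
```

The |p_π|^3 factor makes the phase-space suppression at large q^2 explicit: the rate vanishes at q^2 = t_− even though f_+ grows toward the B* pole.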
There is some systematic correlation between the two analyses, which is, however, expected to be insignificant [66]. The Belle untagged and tagged data are also largely uncorrelated, because the dominant sources of systematic error in these two measurements are very different. In summary, we take the four experimental analyses as independent measurements. On the other hand, there are systematic correlations between the two isospin modes of the Belle tagged data, which we estimate as follows. Let ∆B^−_i and ∆B^0_α be the branching fractions in the ith and αth bin of the charged and neutral decay modes, respectively. Let σ^−_x, σ^0_x be the systematic uncertainties of the two modes from source x, and r^{−0}_x the correlation between them. Then we estimate the off-block-diagonal elements of the systematic error covariance matrix by

    S_{iα} = Σ_x r^{−0}_x σ^−_{i,x} σ^0_{α,x},

where the sum is over all sources of systematic error. That said, only a few of the systematic errors contribute noticeably to the sum, and the biggest source of error, the tag calibration, dominates. From the correlation matrices, we construct the total covariance matrices of each isospin decay mode by adding the statistical matrices and the systematic matrices. We then take the direct sum of the covariance matrices of the B^− and B^0 modes block-diagonally and add the off-block-diagonal elements S_{iα}, so that we can fit them simultaneously. We first fit the z expansion to the experimental data only and without any constraints on the coefficients. We use the BCL formula with three parameters, N_z = 3, where the normalization is |V_ub| b_0. The result is shown in Table XV. To check the consistency in shape among the experimental data sets, we also fit each experimental data set separately. The individual fits all have acceptable confidence levels and p values, but the combination of all four data sets gives a rather poor fit that is not improved by going to higher order in z, e.g., N_z = 4.
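The construction of the combined covariance for the two isospin modes can be sketched as follows. Bin counts, per-source errors, source labels, and correlations are all toy placeholders, not the Belle values:

```python
import numpy as np

# Hedged sketch: block-diagonal statistical+systematic covariances for the two
# isospin modes, plus off-block-diagonal elements
#     S[i, alpha] = sum_x r_x * sigma_minus[i, x] * sigma_zero[alpha, x].
# All numbers below are illustrative placeholders.

n_minus, n_zero = 3, 2          # toy numbers of q^2 bins per mode

# per-source systematic errors (rows: sources; columns: bins) and correlations
sig_minus = np.array([[0.02, 0.03, 0.01],    # hypothetical source, e.g. tag calibration
                      [0.01, 0.01, 0.01]])   # hypothetical source, e.g. tracking
sig_zero  = np.array([[0.04, 0.02],
                      [0.01, 0.02]])
r = np.array([1.0, 0.5])        # correlation of each source between the modes

cov_mm = np.diag([0.05**2] * n_minus) + sig_minus.T @ sig_minus
cov_00 = np.diag([0.06**2] * n_zero)  + sig_zero.T @ sig_zero
S = (r[:, None, None] * sig_minus[:, :, None] * sig_zero[:, None, :]).sum(axis=0)

cov = np.block([[cov_mm, S], [S.T, cov_00]])   # full (n-+n0) x (n-+n0) matrix
```

As long as each |r_x| ≤ 1, the per-source joint blocks are positive semidefinite, so the full matrix stays a valid covariance once the positive statistical diagonals are added.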
The poor fit stems from the BaBar11 measurement, which is only marginally consistent with the other three. To perform a combined fit to the lattice and experimental data, we define the total chi-squared function

    χ^2 = χ^2_lat + χ^2_exp,   (5.27)

where the lattice and experimental chi-squared functions are defined in Eqs. (5.15) and (5.24), respectively. The fit is performed to these five independent data sets with common shape parameters b_m and overall normalization |V_ub|, by minimizing Eq. (5.27). Table XVI gives the results of the combined lattice+experiment fits with N_z = 4. In the combined fit to lattice form factors and experimental data, the kinematic constraint between f_+ and f_0 at q^2 = 0 is unimportant for the determination of |V_ub|. This is because the experimental data constrain the shape at low q^2. Removing the kinematic constraint from the combined fit and fitting only with the vector form factor f_+ changes neither the coefficients of the z expansion nor the value of |V_ub|. We also try varying the number of parameters b_m in the z expansion (N_z). The results are shown in Table XVIII. Compared to our preferred fit with N_z = 4, the fit using N_z = 3 gives a very low p value and a shift of about 1σ in both the form factor and |V_ub|, while the fit result using N_z = 5 nearly coincides with that of the N_z = 4 fit and the values of |V_ub| are almost identical. The experimental data are plotted in Fig. 27 (left), along with the z fits to the lattice data and to all experimental data. The lattice form factor and experimental measurements provide complementary information and, when combined, yield an accurate description of the form factor over the full q^2 range and hence a precise determination of |V_ub|. The plot shows that the experimental data dominate the determination of the form-factor shape in the large-z (small-q^2) region, while the lattice-QCD form factor dominates the small-z (large-q^2) region.
In the intermediate region around q^2 ∼ 20 GeV^2 (z ∼ 0), the lattice-QCD and experimental uncertainties are similar in size. This region is decisive in determining |V_ub| and, hence, can be used to estimate the separate contributions from lattice and experimental data to the |V_ub| uncertainty. At q^2 = 20 GeV^2, the error on the lattice-QCD form factor f_+ is about 3.4% (see Table X), and the error on f_+|V_ub| from the experiment-only fit is 2.8% at the same momentum. Adding these two errors in quadrature gives a total uncertainty of 4.4%, which is consistent with the error on |V_ub| obtained from the full fit, 4.3%. Another estimate of the individual error contributions to |V_ub| can be obtained from the uncertainties on the fit parameters of the separate lattice-QCD fit and the fit to all experimental data in Table XV.

(Figure 27 caption: Right: the analogous plot for the partial branching fraction dB/dq^2. The fits including lattice results use N_z = 4, while the experiment-only fit uses N_z = 3. The experimental data points and the experiment-only z-fit result in the left plot have been converted from (∆B/∆q^2)^{1/2} to f_+ using |V_ub| from the combined fit. The lattice-only fit result (cyan band) and the combined-fit result (red band) in the right plot are converted from the form factor with the same |V_ub|.)

VI. RESULTS AND CONCLUSION

Our final result for |V_ub|, obtained from our preferred z fit combining our lattice-QCD calculation of the B → π ℓν form factor with experimental measurements of the corresponding decay rate, is

    |V_ub| = (3.72 ± 0.16) × 10^{−3}.   (6.1)

The error includes all experimental and lattice-QCD uncertainties. The contribution from lattice QCD to the total error is now comparable to that from experiment. The error reported here, following HFAG [6], does not apply the PDG prescription for discrepant data; that prescription [65] would scale the error by a factor of √(χ^2/dof) = 1.2. As can be seen from Table XVII and Fig.
26, the low fit quality is due to the tension between the BaBar11 data set and the others. An inspection of all the experimental data in Fig. 27 shows that the point near z = −0.1 in the BaBar11 data set is lower than the others and a bit more precise than one might have anticipated, but does not suggest that this or any of the data sets have any systematic problems. We compare our determination of |V ub | with other results in Fig. 28. In particular, our result is consistent with the recent determination from HFAG using our collaboration's 2008 form-factor determination [5] obtained from a small subset of the gauge-field ensembles used in this work. The difference in the central values is due to a small shift in the central values for the form factor f + of this analysis compared to our previous analysis [5]. As shown in Fig. 18 (left), the form factor f + from this analysis is consistent within errors with the previous analysis, but shifted slightly downward and with an error smaller by roughly a factor of three. The two analyses have very little statistical and systematic correlation. Our result is also compatible with Standard-Model expectations from CKM unitarity [69,70]. Although our determination of |V ub | is higher than that in Ref. [5], and thus closer to the determination from inclusive B → X u semileptonic decays [6], the inclusive-exclusive disagreement is still greater than 2σ. A byproduct of the combined lattice and experiment fit is a more precise determination of the vector and scalar form factors than from the lattice-QCD calculation alone. Both form factors f + and f 0 are well determined from lattice QCD in the high q 2 region, and f + is strongly constrained by experiment in the low q 2 region. This information is then transferred to f 0 via the kinematic constraint f 0 (0) = f + (0). The resulting form factors are shown in Fig. 29. The corresponding z-expansion coefficients and their correlations are given in Table XIX. 
These represent the present best knowledge of the B → πℓν form factors, and can be used in other phenomenological applications or to test other nonperturbative QCD calculations. Future improvements in the determination of the B → π semileptonic form factor f_+ will further reduce the uncertainty on |V_ub|. If the uncertainty of f_+^{B→πℓν} at q^2 ∼ 20 GeV^2 can be reduced further from 3.4% to 1.5%, we would expect a precision of 3% in |V_ub|, using the current experimental input. With the anticipated improvement in the experimental rate measurement from Belle II, this error would be reduced further. The reduction of uncertainty in f_+^{B→πℓν} is expected with the newly available MILC gauge ensembles that are being generated using the highly improved staggered quark (HISQ) action [71]. The new HISQ ensembles have statistics similar to the asqtad ensembles, but with much smaller light-quark discretization effects. Further, the HISQ ensembles simulated at the physical light-quark masses will remove the need for a chiral extrapolation, thereby eliminating a significant source of uncertainty in this work. These ensembles have already helped to determine the form factor f_+^{K→πℓν}(0) [72] and the leptonic decay constants f_{D(s)} and f_K [73], and hence the relevant CKM matrix elements |V_us|, |V_cd| and |V_cs|, with high precision. All of these improvements will further refine and reduce the uncertainties in |V_ub|, and may also help to resolve the inclusive/exclusive puzzle.

(Fig. 28 caption: form factors from this analysis; our earlier work [5] (now superseded, but with updated experimental input from HFAG 2014 [6]); a three-flavor lattice calculation by RBC/UKQCD [13]; light-cone sum rules (orange square) [61]; and HPQCD [4] (using the q^2 > 16 GeV^2 experimental data only). The blue upward-pointing triangle is obtained from Λ_b → pℓν decay using lattice-QCD form factors from Ref. [67] and experimental data from LHCb [68]. The black diamond shows the inclusive determination using B → X_u ℓν decays [6] with the theoretical approach of Ref. [15]. Also shown is the expectation from CKM unitarity [69] (green filled circle). For the exclusive determinations from B → πℓν decay (squares), all four experimental results [7–10] are used, except in the LCSR z-fit, where only the more recent BaBar [8] and Belle [10] data are used.)
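As context for the z fits discussed above, the z-expansion maps the semileptonic q^2 range onto a small interval in the conformal variable z. A minimal sketch of the standard mapping follows; the mass values are illustrative (PDG-like), and the choice of t0 that minimizes |z| over the physical region is an assumption here, not necessarily the paper's input:

```python
import math

# Conformal variable z(q^2) used in z-expansions of B -> pi l nu form factors.
# Masses in GeV; values are illustrative, not the paper's exact inputs.
M_B, M_PI = 5.279, 0.1396
T_PLUS = (M_B + M_PI) ** 2           # pair-production threshold (B pi)
T_MINUS = (M_B - M_PI) ** 2          # q^2 at zero pion recoil
# A common choice of t0 minimizing |z| over [0, t-] (an assumption here):
T0 = T_PLUS * (1.0 - math.sqrt(1.0 - T_MINUS / T_PLUS))

def z_of_q2(q2, t0=T0):
    """Map q^2 to the conformal variable z."""
    a = math.sqrt(T_PLUS - q2)
    b = math.sqrt(T_PLUS - t0)
    return (a - b) / (a + b)

# The physical region maps to a small, roughly symmetric interval in z,
# which is why a low-order polynomial in z suffices for the form factor.
# z is about +0.28 at q^2 = 0 and about 0 near q^2 = 20 GeV^2.
print(z_of_q2(0.0), z_of_q2(20.0), z_of_q2(T_MINUS))
```

This makes concrete the statement in the text that q^2 ∼ 20 GeV^2 corresponds to z ∼ 0 for this kind of t0 choice.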
Symbolic Extensions of Smooth Interval Maps *

In this course we will present the full proof of the fact that every smooth dynamical system on the interval or circle X, constituted by the forward iterates of a function f : X → X which is of class C^r with r > 1, admits a symbolic extension, i.e., there exists a bilateral subshift (Y, S) with Y a closed shift-invariant subset of Λ^Z, where Λ is a finite alphabet, and a continuous surjection π : Y → X which intertwines the action of f (on X) with that of the shift map S (on Y). Moreover, we give a precise estimate (from above) on the entropy of each invariant measure ν supported by Y in an optimized symbolic extension. This estimate depends on the entropy of the underlying measure µ on X, the "Lyapunov exponent" of µ (the genuine Lyapunov exponent for ergodic µ, otherwise its analog), and the smoothness parameter r. This estimate agrees with a conjecture formulated in [15] around 2003 for smooth dynamical systems on manifolds.

forecasting is done by complicated software which must be fed information in digital form. Modern black boxes that register the history of airplane flights or truck rides do it in digital form. Even our mathematical work is registered mainly as computer files. Analog information is nearly an extinct form. While studying dynamical systems (in any understanding of this term) sooner or later one is forced to face the following question: "How can the information about the evolution of a given dynamical system be most precisely turned into a digital form?" As researchers specializing in dynamical systems, we are responsible for providing the theoretical background for such a transition.
So suppose that we are observing a dynamical system, and that we are indeed turning our observation into digital form. That means, from time to time, we produce a digital "report", a computer file, containing all our observations since the last report. Suppose, for simplicity, that such reports are produced at equal time distances, say at integer times. Of course, due to the bounded capacity of our recording devices and the limited time between reports, our files have bounded size (in bits). Because the variety of digital files of bounded size is finite, we can say that at every integer moment of time we produce just one symbol, where the collection of all possible symbols (called the alphabet and denoted by Λ) is finite.

An illustrative example is filming a scene using a digital camera. Every unit of time, the camera registers an image, which is in fact a bitmap of some fixed size (the camera resolution). The camera turns the live scene into a sequence of bitmaps. If the scene is filmed with sound, each bitmap is enhanced by a small sound file, also of bounded size. We can treat every such enhanced bitmap as a single symbol in the alphabet of the "language" of the camera.

The sequence of symbols is produced as long as the observation is being conducted. We have no reason to restrict the global observation time, and we can agree that it goes on forever. Sometimes (but not necessarily), we can also admit that the observation has been conducted forever in the past as well. In this manner, the history of our recording takes on the form of a unilateral or bilateral sequence of symbols from some finite alphabet Λ.
Advancing in time by one unit corresponds, on one hand, to the unit-time evolution of the dynamical system and, on the other, to shifting the enumeration of our sequence of symbols. In this manner we have come to the conclusion that the digital form of the observation is nothing else but an element of the symbolic space Λ^S, where S stands either for the set of all integers Z or the nonnegative integers N_0. The action on this space is the familiar shift transformation σ given by σ(x) = y, where x = (x_n)_{n∈S} and y = (x_{n+1})_{n∈S}.

Now, in most situations, such an observation of the dynamical system will be lossy, i.e., it will capture only some aspects of the observed dynamical system. Much of the dynamics will be lost. For example, the digital camera will not be able to register objects hidden behind other objects; moreover, it will not see objects smaller than one pixel, or their movements, until they pass from one pixel to another. However, it may happen that, after a while, each object eventually becomes visible, and that we are able to reconstruct its trajectory from the recorded information.

Of course, lossy digitalization is always possible and hence presents a lesser kind of challenge. We will be much more interested in lossless digitalization. When is it possible to digitalize a dynamical system so that no information is lost, i.e., in such a way that, after viewing the entire sequence of symbols, we can reconstruct the trajectory of every smallest particle in the system? Well, it is certainly so when the dynamical system under observation is not too complicated: when its rigidly moving particles are few and large, the motion between the integer time moments is fully determined by the positions at the integer moments, and, at such moments, each particle has only finitely many available positions. In other words, when the system is discrete in every aspect. But is this the only case?
The answer is no. At least at the purely theoretical level, the variety of systems that allow lossless digitalization is much larger. The class depends on the kind of approach we assume. We will concentrate on two levels: measure-theoretic and topological. Assuming the measure-theoretic point of view, each discrete-time dynamical system is the action of a measure-preserving transformation on a measure space. We do not care about distances between particles; all we care about is partitions and the probabilities with which the particles occupy the cells of these partitions. Here we are completely settled within the realm of ergodic theory. Assuming the topological point of view we do care about distances, but only up to preservation of convergence, i.e., we respect open and closed sets. In this setup we are within the realm of topological dynamics.

In the first, ergodic-theoretic context, the question about "lossless digitalizability" of a system is relatively easy to answer. For automorphisms of probability spaces it is completely solved by the celebrated Krieger Generator Theorem: an automorphism T of a probability space (X, F, µ) is isomorphic to the shift on a symbolic space Λ^Z (equipped with some shift-invariant measure) if and only if the Kolmogorov–Sinai entropy h_µ(T) of the automorphism is finite.

For endomorphisms, although the theorem no longer applies (in full generality), we can employ the notion of the natural extension. If T is an endomorphism of a probability space and has finite entropy, then its natural extension is an automorphism and has the same finite entropy. By the Krieger Theorem, this natural extension is isomorphic to a symbolic system. The original endomorphism becomes a measure-theoretic factor of the symbolic system. The natural extension in its digital (i.e., symbolic) form clearly contains complete information about all its factors, in particular about the original endomorphism, which in this manner becomes losslessly digitalized.
On the other hand, any system (automorphism or endomorphism) of infinite entropy can be neither represented nor embedded in a symbolic system, because all symbolic systems have finite entropy. So, any digitalization of an infinite-entropy system must be lossy. We have thus fully characterized the measure-theoretic systems (on probability spaces) which are losslessly digitalizable: these are precisely the systems of finite Kolmogorov–Sinai entropy. The digitalization is then isomorphic either to the system itself or, at worst, to its natural extension.

At the level of topological dynamics this problem is much more complicated. Here, given a topological dynamical system (X, T) (X is a compact metric space and T : X → X is a continuous map, perhaps a homeomorphism), we seek its digitalization in the form of some, also topological, symbolic system. These are constituted by compact, shift-invariant subsets of the symbolic spaces Λ^S equipped with the action of the shift transformation σ. Such systems are called subshifts for short. There are slight differences in our understanding of shift-invariance for unilateral and bilateral sequences, but we skip these details here.
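The shift map and a subshift given by forbidden words can be sketched in a few lines. This toy example is ours, not part of the course text: finite windows of sequences are modeled as Python strings over the alphabet {0, 1}, and the subshift is the golden-mean shift, which forbids the word "11":

```python
# Finite windows of symbolic sequences modeled as strings over a finite alphabet.

def shift(word):
    """The shift sigma drops the symbol at time 0: sigma(x)_n = x_{n+1}."""
    return word[1:]

def in_subshift(word, forbidden):
    """A block is allowed in the subshift iff no forbidden word occurs in it."""
    return not any(f in word for f in forbidden)

# Golden-mean shift: sequences over {0, 1} with no two consecutive 1s.
FORBIDDEN = ["11"]

w = "010010100"
print(shift(w))                        # "10010100"
print(in_subshift(w, FORBIDDEN))       # True
print(in_subshift("0110", FORBIDDEN))  # False
```

The check illustrates shift-invariance of the allowed language: every subword of an allowed block is again allowed, so the set of admissible sequences is closed under the shift.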
If we desire a symbolic system (subshift) (Y, σ) that carries all the information about a given topological dynamical system (X, T), respecting its topological structure, a number of rather obvious limitations immediately pop out. First of all, we have very little chance to create a symbolic system that would be topologically isomorphic (i.e., conjugate) to (X, T). Only expansive maps on zero-dimensional spaces are conjugate to subshifts, and these properties are rather exceptional among topological dynamical systems. In every other case we can only hope to build a symbolic extension, i.e., a subshift (Y, σ) of which (X, T) would be a topological factor. There is equally little chance that the extension will be conjugate to the topological natural extension of (X, T). The natural extension would have to be zero-dimensional and expansive, which implies that X is itself zero-dimensional and T nearly (not exactly, but close to) expansive. So, the symbolic extension (Y, σ), if one exists, will usually be something else than (X, T) or its natural extension. Such a (Y, σ) will contain other, "unwanted" dynamics joined with the dynamics of (X, T). It may even have necessarily larger topological entropy! Unlike in the measure-theoretic case, finite entropy (this time topological) does not even guarantee the existence of a symbolic extension. This is a phenomenon first discovered by Mike Boyle, whose interest in this subject was provoked by a question of Joe Auslander. Mike Boyle also indicated examples of systems with finite topological entropy such that symbolic extensions do exist, but all have topological entropy essentially larger (by some constant) than that of (X, T).
In this manner we are led to studying the following general problem, which we can summarize in the two questions below, concerning a given topological dynamical system (X, T).

QUESTION 1: Does there exist a topological symbolic extension (Y, σ) of (X, T)? In other words, is (X, T) a topological factor of some subshift?

QUESTION 2: If yes, what is the infimum of the topological entropies of all its symbolic extensions?

These two questions (and some related ones) have triggered the creation of a relatively new branch of topological dynamics, the theory of symbolic extensions. It should not be surprising that this theory is embedded in the theory of entropy of topological dynamical systems. In fact, it has led to some new developments in this theory: the discovery of some new entropy-related notions and invariants of topological conjugacy. It turns out that, in order to handle the two major questions posed above, one needs to focus not only on the topological entropies of the involved systems (the system (X, T) and its symbolic extensions (Y, σ)), but also on the measure-theoretic (Kolmogorov–Sinai) entropies of all invariant measures supported by these systems. The two key notions of the theory are defined below.

Definition 1.1. Let (X, T) be a topological dynamical system. The topological symbolic extension entropy of (X, T) is defined as follows:

    h_sex(X, T) = inf { h_top(Y, σ) : (Y, σ) is a symbolic extension of (X, T) }

(with the convention inf ∅ = ∞, so that h_sex(X, T) = ∞ when no symbolic extension exists).

A refinement of this notion at the level of invariant measures is provided below.

Definition 1.2. Let (X, T) be a topological dynamical system and let P_T(X) denote the set of all T-invariant measures µ on X. Let (Y, S) be a topological extension of (X, T) and let π : Y → X be the corresponding factor map.
On P_T(X) we define the extension entropy function by the formula

    h_ext^π(µ) = sup { h_ν(S) : ν ∈ P_S(Y), πν = µ }.

Then, on P_T(X), we define the symbolic extension entropy function by

    h_sex(µ) = inf { h_ext^π(µ) : π : (Y, σ) → (X, T) is a symbolic extension }

(again with inf ∅ = ∞). One of the fundamental tools in the theory of symbolic extensions is the following theorem (one inequality is obvious, the other requires some machinery):

Theorem 1.3 (Symbolic Extension Entropy Variational Principle). h_sex(X, T) = sup { h_sex(µ) : µ ∈ P_T(X) }.

The main task of the theory of symbolic extensions reduces to solving the following problem:

PROBLEM 1: Compute (or estimate) h_sex for a given system (X, T) using its internal properties.

Notice that the definition of h_sex is so constructed that solving Problem 1 answers both of the formerly formulated Questions 1 and 2. In full generality, the problem so phrased has been solved in the paper [3], and then refined in [12]. The solution is in terms of the so-called entropy structure, a carefully selected sequence of functions on P_T(X) which reflects the emergence of the entropy of different measures at refining scales. Crucial are the upper semicontinuity properties of these functions and the multiple defect of uniformity in their convergence. The reason why these items are so essential can, very roughly and briefly, be explained as follows: in the system (X, T) some invariant measures may reveal all of their entropy already at large scale (as in expansive systems); other measures may need a very small scale (i.e., fine covers) for their entropy to be detected. Now, in the symbolic extension (Y, σ), the small-scale dynamics must be "magnified" and become visible at the large scale of the symbolic system (in symbolic systems all dynamics happens at large scale). If the "large-scale measures" are approximated in P_T(X) by the "small-scale measures", the magnification of small-scale dynamics may lead to enlarging the entropy of the large-scale dynamics. This causes the overall entropy of the symbolic extension to grow.
In this course we will concentrate on a more particular problem, concerning smooth maps. We will show how this problem is solved in dimension one, i.e., for smooth maps of the interval or of the circle, in terms of much more familiar parameters, such as the degree of smoothness r and the (slightly refined) Lipschitz constant.

The history of research on topological symbolic extensions

The first result concerning symbolic extensions in topological dynamics is due to William Reddy and goes back to 1968 ([21]). It says that every expansive homeomorphism T on a compact metric space has a symbolic extension. The construction provided no control over the entropy of this extension.

It was clear that expansiveness was a much too strong requirement. All known examples of finite-entropy systems seemed to admit symbolic extensions. One of the spectacular applications of symbolic extensions occurs in the study of hyperbolic systems. Using Markov partitions, such systems can be lifted to subshifts of finite type, which allows symbolic dynamical methods to be applied to hyperbolic systems. This approach belongs to the classics; it is described, for example, in Bowen's book [1]. Generally, however, very little was known. The natural question whether all finite-entropy systems indeed have symbolic extensions had presumably been puzzling many people between the years 1970 and 1990. Around 1989, Joe Auslander addressed this question to Mike Boyle, one of the best experts in symbolic dynamics. Within some time (less than a year), Boyle came up with the negative answer, by constructing an appropriate example. A version of the same example showed that, even if a system does admit a symbolic extension, there may exist a necessary gap between the entropy of the system and that of any symbolic extension. He called this gap the residual entropy. These examples were presented at the Adler conference in 1991, but not published until 2002 (after the author of this note had already published his own
version of Boyle's examples in 2001). These examples proved only one thing: there is no easy answer to Questions 1 and 2 stated in the preceding section.

For the next 8 years, the progress was rather limited and not published. Mike Boyle collaborated in this matter with Doris and Ulf Fiebig. They tried to construct symbolic extensions by means of symbolic and topological methods (without using invariant measures), which, from today's perspective, explains why their results were so restricted.

Around 1998 the same problem was encountered by the author of this note. Together with Fabien Durand, they were characterizing all factors of so-called Toeplitz flows, and one of the three conditions for a system to be such a factor was that it admits some symbolic extension ([13]). It soon turned out that nobody knew any general criteria for that. Mike Boyle was able to say that any system of entropy zero has a symbolic extension also of entropy zero ([2]), which was very useful for the study of factors of Toeplitz flows.

In 1999, the author of this note spent a month in Marseille, devoting all his energy to trying to understand why some systems have, and others do not have, symbolic extensions. For simplicity, he focused on zero-dimensional systems, which seemed to be the best class to study. He discovered that the existence of symbolic extensions depends on the distribution of entropy over invariant measures, which led to the first result containing criteria for the existence, and an estimate of the topological entropy, of symbolic extensions for general zero-dimensional systems ([11]). In particular, he showed that an asymptotically h-expansive zero-dimensional system admits a symbolic extension of the same topological entropy. In the same paper he published the already mentioned examples based on those of Mike Boyle.
A year later, Boyle and the Fiebigs published a long paper containing the results of their long-lasting collaboration ([4]). The old examples appear here in the original version, next to new ones, where the transformation is on a disc and is differentiable at all but one point. In terms of positive results, all asymptotically h-expansive systems (not necessarily zero-dimensional) are shown to possess principal symbolic extensions, i.e., such that not only is the topological entropy the same as that of (X, T), but also the Kolmogorov–Sinai entropy of every invariant measure is the same as that of its image in the system (X, T). Since expansive systems are asymptotically h-expansive, we recover here a refined version of Reddy's first result. Since any system of entropy zero is asymptotically h-expansive, we also recover the fact communicated earlier by Boyle to the author of this note. Another spectacular application, neatly included in [4], concerns smooth maps. Shortly before that, Jérôme Buzzi had proved that any C^∞ map on a Riemannian manifold is in fact asymptotically h-expansive ([9]). (Many years earlier Sheldon Newhouse proved a seemingly weaker statement [20], which from today's perspective is equivalent to Buzzi's result.) Now, this fact receives a new meaning: every C^∞ map on a manifold admits a principal symbolic extension. If we agree that symbolic extensions are "lossless digitalizations", then principal symbolic extensions can be regarded as "gainless" (without superfluous information) digitalizations. The fact that all C^∞ maps can be losslessly and gainlessly digitalized became one of the iconic achievements of the theory of symbolic extensions. However, an immediate question arises: what about C^r maps, where r < ∞?
In 2001, the author of this note visited Mike Boyle. Leaving smooth systems aside, they worked on the general theory. Their work [3] contains the complete and general characterization of the symbolic extension entropy function h_sex. It also contains the aforementioned variational principle for the symbolic extension entropy. Problem 1, and both questions stated in the preceding section, became completely solved. The solution still refers to zero-dimensional systems: each system with finite entropy is first shown to possess a principal zero-dimensional extension (using the theory of mean dimension, by E. Lindenstrauss and B. Weiss [17, 16]), and then it is shown how to build a symbolic extension of a zero-dimensional system. The notion of an entropy structure is introduced for zero-dimensional systems, the key tool for computing the symbolic extension entropy function. A criterion is provided for when the symbolic extension entropy function is attained, i.e., when a symbolic extension exists whose entropy function matches the symbolic extension entropy function (an "optimal" digitalization).

The next year, the author of this work developed a consistent theory of entropy structures for general topological dynamical systems ([12]). Among other things, this allowed the phrasing of several results from the preceding work to be simplified, by skipping the intermediate stage of a zero-dimensional extension. The theory of entropy structures, although its importance rests upon the application to symbolic extensions, has gained an independent interest, and several papers have appeared devoted to other aspects of the entropy structure theory ([8, 18]).
At the same time the author collaborated with Sheldon Newhouse. The focus of this collaboration was on smooth maps on Riemannian manifolds. The obtained results ([15]) are of a negative nature: roughly speaking, they prove that (in some class) a typical C^1 system in dimension d ≥ 2 admits no symbolic extension at all (infinite symbolic extension entropy), while a typical C^r map, where 1 < r < ∞ (also for d ≥ 2), does not admit a principal symbolic extension (without saying whether it does admit a symbolic extension). In their examples, the gap between the entropy of the system and the entropy of a symbolic extension (the residual entropy) is bounded below by a term (which we denote here by R) proportional to the Lipschitz constant and inversely proportional to r − 1. They formulated a conjecture that the residual entropy in their examples is the worst possible, i.e., that every C^r map with r > 1 does admit a symbolic extension, and the symbolic extension entropy is, in the worst case, equal to the entropy plus R.
This conjecture triggered a number of papers containing partial results. In all cases the conjecture has been confirmed. In 2005 the author of this note, jointly with Alejandro Maass, proved the conjecture true in dimension d = 1 ([14], the subject of this course). This result was then complemented by David Burguet, who provided examples of C^r interval maps showing the estimate R for the residual entropy to be sharp ([5]). In the meantime, Lorenzo Díaz and Todd Fisher proved related results for partially hyperbolic diffeomorphisms ([10]). Recently, Burguet proved the conjecture in two more cases: for C^r nonuniformly expanding maps (such that every invariant measure of positive entropy has all Lyapunov exponents nonnegative) on manifolds of any dimension and for any r > 1 ([6]), and, even more recently, for any C^2 surface diffeomorphism ([7]). The general case of a C^r map (or diffeomorphism) on a compact manifold of dimension d remains an open problem, and Burguet's latest result for d = 2 is the most advanced step toward the full solution.
Introduction to entropy structures

For the purposes of this course, we will not need the general definition of the entropy structure. It suffices to know that any entropy structure has the form of a sequence of functions h_k : P_T(X) → [0, ∞) such that h_k(µ) ↗ h(µ) for every invariant measure µ. Sometimes it is better to consider the tails θ_k = h − h_k. Then we have θ_k ↘ 0 pointwise. Not all sequences (θ_k)_{k≥1} converging monotonically to zero are entropy structures: there are additional conditions on how they converge in reference to the dynamics. Still, there are many possible entropy structures in one dynamical system, but they are all equivalent to each other in a specific sense. Instead of listing the conditions which classify a given sequence (θ_k) as an entropy structure, we will simply specify one particular such sequence (θ_k), which has been proved to satisfy these conditions in the paper [12]. Only this entropy structure will be used throughout this course. The precise description of this sequence will be given in the next section.

So suppose we have already chosen an entropy structure (θ_k). This allows us to compute the symbolic extension entropy function h_sex. The derivation of h_sex from the entropy structure is via the "transfinite sequence" (u_α) of functions on P_T(X), defined below:

Step 0: u_0 ≡ 0;
Successor step: u_{α+1} = lim_k ↓ (u_α + θ_k)~ ;
Limit step: for a limit ordinal α, u_α = (sup_{β<α} u_β)~

(recall that f~(x) = lim sup_{y→x} f(y)).
Theorem 3.1 ([3]). There exists a countable ordinal α_0 such that u_α = u_{α_0} for every α ≥ α_0, and

    h_sex = h + u_{α_0} on P_T(X).

Combining this with Theorem 1.3 we get

    h_sex(X, T) = sup { h(µ) + u_{α_0}(µ) : µ ∈ P_T(X) }.

As a digression, let us mention that the theory of entropy structures allows one to characterize the famous Misiurewicz parameter h*(T) (the one used to define asymptotically h-expansive systems, by h*(T) = 0) as the pointwise supremum of the function u_1:

    h*(T) = sup { u_1(µ) : µ ∈ P_T(X) }.

The two parameters appear at opposite poles of the transfinite sequence: h*(T) is the "supremum of the first order", while h_sex(T) is the "supremum of all orders". The participation of h(µ) in only one of the suprema causes the two notions not to be related by any inequality. Only one implication holds in general: h*(T) = 0 =⇒ h_sex(T) = h_top(T). In fact we have the equivalence:

    h*(T) = 0 ⟺ h_sex ≡ h on P_T(X).

This is to say, asymptotically h-expansive systems are exactly those which admit a principal symbolic extension ([4]).

The Newhouse entropy structure

Now we provide the definition of a local entropy, created by Sheldon Newhouse in 1989. Later, the author of this note verified that local entropy with respect to a refining sequence of open covers becomes an entropy structure. Below we use the following notation: F is any Borel subset of X, V is an open cover of X, and V_x^n is any set containing x and having the form

    V_x^n = V_{i_0} ∩ T^{−1}V_{i_1} ∩ · · · ∩ T^{−(n−1)}V_{i_{n−1}},   V_{i_j} ∈ V.

A set E is (n, δ)-separated if for any two points y, y′ ∈ E there is some i ∈ {0, 1, . . . , n − 1} with d(T^i y, T^i y′) ≥ δ. Also, µ denotes an invariant measure and σ is a number smaller than 1.

We extend the function h_New(X|•, V) to all of P_T(X) by averaging over the ergodic decomposition. This function is called the local entropy function given the cover V. The Newhouse entropy structure is obtained as the sequence

    θ_k = h_New(X|•, V_k),

where (V_k) is a sequence of open covers, each finer than the preceding one, and with the maximal diameters of their elements decreasing to zero. This is indeed an entropy structure ([12]).
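The cardinality of (n, δ)-separated sets is what entropy counts, and it can be explored numerically. The following rough sketch is purely illustrative (the map, grid, and parameters are our own choices, not from the course): it greedily builds an (n, δ)-separated set for the full logistic map, whose topological entropy is log 2:

```python
import math

def is_separated(f, x, y, n, delta):
    """True if max_{0 <= i < n} |f^i x - f^i y| >= delta (Bowen distance test)."""
    for _ in range(n):
        if abs(x - y) >= delta:
            return True
        x, y = f(x), f(y)
    return False

def separated_count(f, n, delta, grid=2000):
    """Greedily extract an (n, delta)-separated set from a grid of initial points."""
    pts = []
    for k in range(grid):
        x = (k + 0.5) / grid
        if all(is_separated(f, x, y, n, delta) for y in pts):
            pts.append(x)
    return len(pts)

f = lambda x: 4.0 * x * (1.0 - x)   # full logistic map, h_top = log 2
n, delta = 7, 0.1
s = separated_count(f, n, delta)
# (1/n) log s is a crude finite-size proxy for entropy; it approaches
# h_top = log 2 only as n grows and delta shrinks.
print(s, math.log(s) / n)
```

The greedy construction produces a maximal, not maximum, separated set, so the count is only a lower bound on the true (n, δ)-separated cardinality; for illustrating exponential growth in n that is enough.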
Key ingredients in the one-dimensional result

In this section we state the main result of [14] and two key theorems leading to it. The first one, called "The Antarctic Theorem", is an estimate of the local entropy for C^r interval (or circle) maps. The exotic name of the theorem comes from the fact that the breakthrough in proving it was made during the author's trip to Antarctica; in fact, while he was spending a sleepless night camping on the snow on one of the Antarctic islands. This is the only statement in this course which uses the specific properties of the interval. The second intermediate result, called "The Passage Theorem", can be phrased as a completely general fact and in this form has already been used by Burguet in his two latest results. Its name reflects that it provides a "bridge" between the local entropy estimate of the preceding theorem and the final estimate of the symbolic extension entropy function, given in the main result. One can also associate the name with the Drake Passage, where, returning from Antarctica, the author attempted to apply his discovery to symbolic extensions (which was accomplished after returning to Santiago, with the help of the coauthor A. Maass). This section also contains the derivation of the Estimate Theorem from the two intermediate theorems.

The detailed proofs of the Antarctic, Passage and Estimate Theorems can be found in [14]. In this course we sketch these proofs, skipping some details. Instead, we will try to be more convincing by illustrating some of the arguments with figures.

Let f be a C^r transformation of the interval or of the circle X, where r > 1. Let µ ∈ P_f(X). We denote

    χ(µ) = ∫ log |f′| dµ,   χ_0(µ) = max{χ(µ), 0}.

(For ergodic measures in dimension one, χ(µ) is the Lyapunov exponent.)

Theorem 5.1 (The Antarctic Theorem). Fix some γ > 0. For each µ ∈ P_f(X) there exists an open cover V of X such that for every ergodic measure ν in an open neighborhood of µ in P_f(X).
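For ergodic µ, χ(µ) is the Lyapunov exponent, which by the ergodic theorem can be approximated by the Birkhoff average of log |f′| along a typical orbit. A minimal numerical sketch follows; the map and the initial point are our illustration, not from the text (for the full logistic map, the exponent with respect to the absolutely continuous invariant measure is log 2):

```python
import math

def lyapunov(f, df, x0, n, burn=1000):
    """Approximate chi(mu) = int log|f'| dmu by a Birkhoff average along an orbit."""
    x = x0
    for _ in range(burn):                        # discard a transient
        x = f(x)
    s = 0.0
    for _ in range(n):
        # clamp avoids log(0) in the (measure-zero) event the orbit hits x = 1/2
        s += math.log(max(abs(df(x)), 1e-300))
        x = f(x)
    return s / n

f = lambda x: 4.0 * x * (1.0 - x)   # full logistic map
df = lambda x: 4.0 - 8.0 * x
chi = lyapunov(f, df, 0.123, 200_000)
print(chi)   # close to log 2 ~ 0.6931 for Lebesgue-typical initial points
```

The same average computed along orbits of other ergodic measures (e.g., periodic orbits) would converge to the exponent of that measure instead, which is exactly the measure-dependence of χ(µ) used in the theorems above.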
The Passage Theorem says the same, but without assuming ergodicity of ν. The function χ̃_0(µ) is defined by averaging χ_0 over the ergodic decomposition of µ. Since χ_0 is evidently convex, χ̃_0 is usually slightly larger than χ_0 (except at ergodic measures, where the two are equal).

Theorem 5.2 (The Passage Theorem). Fix some γ > 0. For each µ ∈ P_f(X) there exists an open cover V of X such that for every invariant measure ν in an open neighborhood of µ in P_f(X).

The main result is this:

Theorem 5.3 (The Estimate Theorem). Let f be a C^r transformation of the interval or of the circle X, where r > 1. Then

    h_sex(µ) ≤ h(µ) + χ̃_0(µ)/(r − 1)   for every µ ∈ P_f(X).

As a consequence, by the symbolic extension entropy variational principle (and the usual variational principle),

    h_sex(f) ≤ h_top(f) + log⁺ L(f)/(r − 1),

where L(f) denotes the Lipschitz constant of f.

Remark 5.4. The Lipschitz constant can easily be replaced by the smaller constant R(f) = lim (1/n) log L(f^n), where f^n denotes the composition power of f.

We now describe how the Estimate Theorem is deduced from the Passage Theorem. This is fairly easy. So, assume the Passage Theorem holds. The Ruelle inequality (h(ν) ≤ χ_0(ν) for ergodic ν, see [22]) easily implies, by averaging over the ergodic decomposition, that for any invariant measure ν we also have

    h(ν) ≤ χ̃_0(ν).

Thus, for ν sufficiently close to µ, the two bounds hold simultaneously. Clearly, h_New(X|ν, V) (as well as χ̃_0(ν)) cannot be negative. The situation is illustrated in the figure below: the horizontal axis represents all measures ν in the vicinity of µ, parametrized by χ̃_0(ν); on the vertical axis we have the upper bound for h_New(X|ν, V). It is seen from this picture (which replaces elementary calculations) that the asserted bound holds for all considered measures ν. Plugging this into the definition of the transfinite sequence, we obtain the estimate for u_1. We proceed by transfinite induction. Suppose

    u_β ≤ χ̃_0/(r − 1)

for all ordinals β < α. Then, near a measure µ, this bound holds together with the local-entropy estimate. The situation is shown in the figure below.
We are using the fact that χ̃₀ is an upper semicontinuous function, hence in a sufficiently small vicinity of µ all measures ν satisfy χ̃₀(ν) ≤ χ̃₀(µ) + γ. This is why the domain of the graph extends only a bit beyond χ̃₀(µ) (further to the right it would grow, so we are happy not to have to include that part). By passing to the upper limit as ν approaches µ, we get

u_α(µ) ≤ χ̃₀(µ)/(r − 1) + γ,

and, since γ is arbitrary, u_α(µ) ≤ χ̃₀(µ)/(r − 1). By transfinite induction, u_α ≤ χ̃₀(µ)/(r − 1) for all ordinals, including α₀. Now, using the transfinite characterization of the symbolic extension entropy, we get the desired result:

h_sex(µ) = h(µ) + u_{α₀}(µ) ≤ h(µ) + χ̃₀(µ)/(r − 1).

Sketch of the proof of the Antarctic Theorem

The proof relies on the following, fairly elementary counting lemma:

Lemma 6.1. Let g : [0, 1] → R be a C^r function, where r > 0. Then there exists a constant c > 0 such that for every 0 < s < 1 the number of components of the set {x : g(x) ≠ 0} on which |g| reaches or exceeds the value s is at most c · s^(−1/r).

Proof. For 0 < r ≤ 1, g is Hölder, i.e., there exists a constant c₁ such that |g(x) − g(y)| ≤ c₁ |x − y|^r. If |g(x)| ≥ s, then g cannot vanish at any point closer to x than (s/c₁)^(1/r). The component containing x is at least that long, and the number of such components is at most c · s^(−1/r), where c = c₁^(1/r). Jointly, the number of all components I on which |g| exceeds s is at most 2 + (c + 1) · s^(−1/r) ≤ c′ · s^(−1/r) for a suitable constant c′ (the number 2 is added because the above argument does not apply to the extreme components, which need not contain critical points).

For g = f′ we obtain the following.

Corollary 6.2. Let f : [0, 1] → [0, 1] be a C^r function, where r > 1. Then there exists a constant c > 0 such that for every s > 0 the number of branches of monotonicity of f on which |f′| reaches or exceeds s is at most c · s^(−1/(r−1)).

Next we apply the above to counting the possible ways by which a point, with a bounded below derivative for the composition power of f, may traverse the branches of monotonicity. We make a formal definition.

Definition 6.3. Let f be as in the formulation of Corollary 6.2. Let I = (I₁, I₂, . .
., I_n) be a finite sequence of branches of monotonicity of f (i.e., any formal finite sequence whose elements belong to the countable set of branches, admitting repetitions). Denote

a_i = min{ sup_{x ∈ I_i} log |f′(x)|, −1 }.

Choose S ≤ −1. We say that I admits the value S if

(1/n) Σ_{i=1}^n a_i ≥ S.

Notice that, if there exists a sequence of points y_i ∈ I_i with log |f′(y_i)| ≤ −1 for each i and satisfying (1/n) Σ_{i=1}^n log |f′(y_i)| ≥ S, then I admits the value S.

Lemma 6.4. Let f : [0, 1] → [0, 1] be a C^r function, where r > 1. Fix ǫ > 0. Then there exists S_ǫ ≤ −1 such that for every n and S < S_ǫ the logarithm of the number of sequences I of length n which admit the value S is at most (1 + ǫ) · n · (−S)/(r − 1).

Proof. Without loss of generality assume that S is a negative integer. Let I be a sequence of n branches of monotonicity which admits the value S. Denote k_i = ⌊a_i⌋. Then (−k_i) is a sequence of n positive integers with sum at most n(1 − S). Now, in a given sequence (k_i), each value k_i may be realized by any branch of monotonicity on which max log |f′| lies between k_i and k_i + 1 (or just exceeds −1 if k_i = −1). From Corollary 6.2 it follows that there are no more than c · e^(−k_i/(r−1)) such branches for each k_i. Jointly, the logarithm of the number of sequences of branches of monotonicity corresponding to one sequence (k_i) is at most n log c + n(1 − S)/(r − 1).

Lemma 6.5. Let f be a C^r transformation of the interval or of the circle X, where r > 1. Let U and V be as described above. Let ν be an ergodic measure and let S(ν) = ∫_U log |f′| dν. Then

h_New(X|ν, V) ≤ (1 + ǫ) · (−S(ν))/(r − 1). (6.1)

Proof. Let F be the set of points on which the nth Cesaro means of the function 1_U log |f′| are close to S(ν) for n larger than some threshold integer (we are using the ergodic theorem; such a set F can have measure larger than 1 − σ). For x ∈ F and large n consider a set V^n_x containing x, with V_i ∈ V (as in the definition of local entropy). Consider the finite subsequence of times 0 ≤ i_j ≤ n − 1 when V_{i_j} = U. Let nζ denote the length of this subsequence and assume ζ > 0. For a fixed δ let E be an (n, δ)-separated set in V^n_x ∩ F and let y ∈ E.
The sequence (i_j) contains only some (usually not all) of the times i when f^i(y) ∈ U. Thus, since y ∈ F, we have

(1/n) ( Σ_j log |f′(f^{i_j}(y))| + A ) ≥ S(ν) − ǫ,

where A is the similar sum over the times of visits to U not included in the sequence (i_j). Clearly A ≤ 0, so it can be skipped. Dividing by ζ we obtain

(1/(nζ)) Σ_j log |f′(f^{i_j}(y))| ≥ (S(ν) − ǫ)/ζ.

The right hand side above is smaller than S_ǫ. This implies that along the subsequence (i_j) the trajectory of y traverses a sequence I (of length nζ) of branches of monotonicity of f admitting a value, roughly S(ν)/ζ, smaller than S_ǫ. By Lemma 6.4, the logarithm of the number of such sequences I is dominated by

(1 + ǫ) · n · (ǫ − S(ν))/(r − 1). (6.2)

At times i other than i_j the set V_i contains only one branch, so if two points from V^n_x ∩ F traverse the same sequence of branches along the times (i_j), they traverse the same full sequence of branches along all times i = 0, 1, . . ., n − 1. The number of (n, δ)-separated points which, along all times i = 0, 1, . . ., n − 1, traverse the same given sequence of branches of monotonicity is negligibly small. This, together with (6.2), implies that the logarithm of the cardinality of E can be only negligibly larger than (6.2). The proof is concluded by dividing by n, and letting n → ∞.

Proof of the Antarctic Theorem. Fix an invariant measure µ and some γ > 0. We need to consider only ergodic measures ν close to µ. If χ(µ) < 0 then, by upper semicontinuity of the function χ, for ν sufficiently close to µ, χ(ν) < 0, so by the Ruelle inequality (and since always h_New(X|ν, V) ≤ h(ν)), h_New(X|ν, V) = 0 and the assertion holds for any open cover. Now suppose that χ(µ) ≥ 0. Clearly, then µ(C) = 0.
Since log |f′| is µ-integrable, the open neighborhood U of C (on which log |f′| < S_ǫ) can be made so small that the (negative) integral of log |f′| over the closure of U is very close to zero (closer than some ǫ). Then

∫_Ū log |f′| dµ > −ǫ. (6.3)

The integral in (6.3) is an upper semicontinuous function of the measure (U^c is an open set on which log |f′| is finite and continuous and negative on the boundary), hence (6.3) holds for all invariant measures ν in a neighborhood of µ. All the more so, since we have included the boundary in the set of integration, and the function is negative on that boundary. Then, for all invariant measures ν in a neighborhood of µ,

−S(ν) ≤ χ(µ) − χ(ν) + 2ǫ. (6.4)

We define the cover V with the above choice of the set U (recall, V consists of U and some intervals on which f is monotone). We can now apply Lemma 6.5. Substituting (6.4) into (6.1) we get

h_New(X|ν, V) ≤ (1 + ǫ) (χ(µ) − χ(ν) + 2ǫ)/(r − 1).

Of course, χ(µ) can be replaced by a not smaller number χ₀(µ). If χ(ν) < 0 then h_New(X|ν, V) = 0 ≤ (χ₀(µ) − χ₀(ν))/(r − 1), so, in any case we can write

h_New(X|ν, V) ≤ (1 + ǫ) (χ₀(µ) − χ₀(ν) + 2ǫ)/(r − 1).

Because the function χ₀(·)/(r − 1) is bounded, the contribution of the error terms ǫ can be made smaller than the additive term γ.
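The counting estimate of Corollary 6.2 can be illustrated numerically. The sketch below is a toy check (not part of the original argument): it takes f(x) = x³ sin(1/x), whose derivative oscillates infinitely often near 0 and which is roughly of class C^1.5, and counts on a grid the branches of monotonicity on which |f′| reaches a threshold s. The count grows as s shrinks but stays well below a polynomial bound of the form c · s^(−1/(r−1)).

```python
import numpy as np

def count_branches_above(s, n=2_000_000):
    """Count branches of monotonicity of f(x) = x^3 sin(1/x) on [1e-4, 1]
    on which |f'| reaches the threshold s (grid-based approximation)."""
    x = np.linspace(1e-4, 1.0, n)
    fp = 3 * x**2 * np.sin(1 / x) - x * np.cos(1 / x)   # f'(x)
    # label branches: a new branch starts wherever f' changes sign
    branch_id = np.concatenate([[0], np.cumsum(np.diff(np.sign(fp)) != 0)])
    # count distinct branches containing a point with |f'| >= s
    return len(np.unique(branch_id[np.abs(fp) >= s]))
```

For this f the branch near x ≈ 1/(kπ) has max |f′| of order 1/(kπ), so the observed count scales like s^(−1), comfortably inside the corollary's s^(−1/(r−1)) ≈ s^(−2) bound.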
Sketch of the proof of the Passage Theorem

In the Passage Theorem, we need to drop the assumption that ν is ergodic. The key tool is the lemma below. Recall that the ergodic decomposition allows one to represent each invariant measure ν as the barycenter of a probability measure M_ν supported by the set of ergodic measures. In order to distinguish more easily between probability measures on P_T(X) and invariant measures on X (which are points in P_T(X)) we will consistently use the term "distribution" with regard to probability measures on P_T(X), in particular to the ergodic (and nonergodic) decompositions of invariant measures. Below, by a joining of two distributions M, M′ on some space we understand any distribution on the Cartesian square of the space, with marginals M, M′.

Lemma 7.1. In a topological dynamical system (X, T), let µ, ν_n ∈ P_T(X), and ν_n → µ in the weak* topology. Choosing a subsequence we can assume that the ergodic decompositions M_{ν_n} converge to some distribution M on P_T(X). By continuity of the barycenter map, bar(M) = µ. Then, given any ǫ > 0, for n large enough, there exists a joining J_n of M_{ν_n} and M such that J_n(∆^ǫ_e) > 1 − ǫ, where ∆^ǫ_e = {(ν, τ) ∈ P_T(X) × P_T(X) : ν is ergodic and dist(ν, τ) < ǫ}.
Proof. The proof is elementary, and we only sketch it. We partition P_T(X) into finitely many Borel sets F_i of diameter smaller than ǫ and with boundaries of M-measure zero. Then, for large n, the numbers M_{ν_n}(F_i) are very close to M(F_i) (for every index i).

for any ergodic ν in the ǫ_τ-neighborhood of τ. For each τ the Lebesgue number of V_τ is a positive number ξ_τ. Let ǫ be so small that ǫ_τ > ǫ and ξ_τ > ǫ for M-nearly all τ (belonging to a set P ⊂ P_T(X) with M(P) ≈ 1). We let V be an open cover by sets of diameter smaller than ǫ. This cover is finer than V_τ for M-nearly each τ, hence (7.1) holds for such τ, V and ǫ. By Lemma 7.1, for n large enough there exists a joining J_n of M_{ν_n} and M satisfying J_n(∆^ǫ_e) > 1 − ǫ. We fix such an n, let J_τ be the conditional of J_n for τ fixed on the second coordinate, and let ν_τ denote bar(J_τ). We have

∫ ν_τ dM(τ) = ν_n. (7.2)

By the properties of the joining J_n, for M-nearly all τ the distribution J_τ is nearly supported by the ǫ-neighborhood of τ. These conditions together imply that for M-nearly every τ the distribution J_τ is nearly supported by the ergodic measures ν which satisfy (7.1) for the cover V. The idea of the above argument is presented on this figure. For simplicity, ν_n is shown as a convex combination of two ergodic measures ν_a and ν_b (the distribution M_{ν_n} is supported by these two points). The limit distribution M has barycenter µ and in this figure is also supported by two points τ_a and τ_b (not necessarily ergodic; M need not be the ergodic decomposition of µ, which in this case is a convex combination of completely different measures µ_a and µ_b). The role of the joining J_n is to associate to each τ in the support of M a "part" of the distribution M_{ν_n}, called J_τ, nearly supported by a small neighborhood of τ. In the case shown on the figure, it associates to τ_a the point mass at ν_a, and to τ_b the point mass at ν_b.
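A toy discrete sketch of the coupling idea in Lemma 7.1 (illustrative only; cells and masses are hypothetical): when two distributions place nearly the same weight on each partition cell F_i, one can build a joining that puts almost all of its mass on pairs lying in the same cell. The sketch below uses the total-variation coupling, a close relative of the scaled-products construction used in the proof.

```python
def near_diagonal_coupling(p, q):
    """Given two distributions p, q over the same cells (dicts cell -> mass),
    return a coupling {(cell_p, cell_q): mass} that puts mass
    sum_i min(p_i, q_i) on same-cell pairs (total-variation coupling)."""
    coupling = {}
    # common part: matched within each cell
    common = {i: min(p[i], q[i]) for i in p}
    matched = sum(common.values())
    for i, m in common.items():
        if m:
            coupling[(i, i)] = m
    # leftovers: distributed as a product of the residual distributions
    rp = {i: p[i] - common[i] for i in p}
    rq = {i: q[i] - common[i] for i in q}
    rest = 1 - matched
    if rest:
        for i, a in rp.items():
            for j, b in rq.items():
                if a and b:
                    coupling[(i, j)] = coupling.get((i, j), 0) + a * b / rest
    return coupling
```

If the cell weights of p and q differ by at most ǫ in total, the mass off the "diagonal" (same-cell pairs) is at most ǫ, mirroring J_n(∆^ǫ_e) > 1 − ǫ.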
Integrating both sides of (7.1) with respect to J_τ we get the analogous estimate for M-nearly every τ.

For larger r we proceed inductively: suppose that the lemma holds for r − 1. Let g be of class C^r. By elementary considerations of the graph of g, with each component I = (a_I, b_I) of the set {x : g(x) ≠ 0} we can disjointly associate an interval (x_I, y_I), so that |g| attains at x_I its maximum on I and y_I is a critical point lying to the right of I (see the figure below). There are two possible cases: either (a) y_I − x_I > s^(1/r), or (b) y_I − x_I ≤ s^(1/r). Clearly, the number of components I satisfying (a) is smaller than s^(−1/r). If a component satisfies (b) and |g| exceeds s on it, then, by the mean value theorem, |g′| attains on (x_I, b_I) a value at least s/s^(1/r) = s^((r−1)/r). Because g′ is of class C^(r−1), by the inductive assumption, the number of such intervals (x_I, y_I) (hence of components I) does not exceed c · (s^((r−1)/r))^(−1/(r−1)) = c · s^(−1/r).

For large values of −S, the first term, and the last 1, can be skipped at a cost of multiplying −S by (1 + ǫ). The number of all possible sequences (k_i) with sum n(1 − S) is negligibly small on the logarithmic scale. So the logarithm of the number of all sequences of branches of monotonicity which admit the value S is, regardless of n, estimated from above as in the assertion.

Regardless of whether f is a transformation of the interval or of the circle X, the derivative f′ can be regarded as a function defined on the interval [0, 1]. Let C = {x : f′(x) = 0} be the critical set. Fix ǫ > 0. Fix some open neighborhood U of C on which log |f′| < S_ǫ. Then U^c can be covered by finitely many open intervals on which f is monotone. Let V be the cover consisting of U and these intervals. The figure below shows f and the set U.
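The "negligibly small on the logarithmic scale" count of the integer sequences (k_i) mentioned above can be quantified by a stars-and-bars estimate; a brief sketch:

```latex
% The number of n-tuples of positive integers (-k_1,\dots,-k_n) with sum
% at most n(1-S) is bounded by the binomial coefficient
\[
  \binom{n(1-S)+n}{n} \;\le\; \bigl(e\,(2-S)\bigr)^{n},
\]
% so its logarithm divided by n is at most \log\bigl(e(2-S)\bigr), which is
% o(-S) as -S \to -\infty, hence negligible next to the main term n\,\frac{-S}{r-1}.
\[
  \frac{1}{n}\,\log \#\{(k_i)\} \;\le\; \log\bigl(e(2-S)\bigr) \;=\; o(-S).
\]
```

This is why multiplying −S by (1 + ǫ) absorbs the combinatorial factor for all S below a suitable S_ǫ.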
The joining J_n is obtained as the sum of appropriately scaled product distributions M|_{F_i} × M_{ν_n}|_{F_i}. Such a joining is supported by the ǫ-neighborhood of the diagonal (see figure below).

Proof of the Passage Theorem. Suppose that there exist γ > 0 and a sequence ν_n converging to µ which, for any choice of an open cover V, eventually does not satisfy the assertion of the Passage Theorem. By choosing a subsequence we can assume that the ergodic decompositions M_{ν_n} converge to some distribution M on P_T(X) with bar(M) = µ. By the Antarctic Theorem, for every τ in the support of M there is some open cover V_τ and ǫ_τ > 0 such that

h_New(X|ν, V_τ) ≤ (χ₀(τ) − χ₀(ν))/(r − 1) + γ (7.1)
Continuous Resonance Tuning without Blindness by Applying Nonlinear Properties of PIN Diodes

Metamaterial antennas consisting of periodical units are suitable for achieving tunable properties by applying active elements to each unit. However, for compact metamaterials with a very limited number of periodical units, resonance blindness exists. In this paper, we introduce a method to achieve continuous tuning without resonance blindness by exploring, and hence taking advantage of, the nonlinear properties of PIN diodes. First, we obtain the equivalent impedance of the PIN diode through measurements, then fit these nonlinear curves with mathematical expressions. Afterwards, we build the PIN diode model with these mathematical equations, making it compatible with implementing co-simulation between the passive electromagnetic model and the active element of PIN diodes so that, in particular, the nonlinear effects can be considered. Next, we design a compact two-unit metamaterial antenna as an example to illustrate the electromagnetic co-simulation. Finally, we implement the experiments with a micro-control unit to validate this method. In addition, the nonlinear stability and the supplying voltage tolerance of nonlinear states for two kinds of PIN diodes are investigated as well. This method of obtaining smooth tuning with the nonlinear properties of PIN diodes can be applied to other active devices, as long as PIN diodes are utilized.

Introduction

Electromagnetic metamaterials (EM MTMs) [1] employ periodical units that are derived from split-ring resonators (SRRs) [2], composite right-left-handed (CRLH) structures [3], and high-impedance structures (HISs) [4], to obtain a negative refractive index, negative phase constant, and high surface impedance, thereby achieving the unique properties of super-lens [5], back-forward radiation [6], and field enhancement [7].
Thanks to EM MTMs being characterized by a periodical configuration, it is possible to realize multiple tunable states either in spectrum resonances [8] or spatial radiation patterns [9] by applying active components to each periodical unit. This kind of tuning mechanism benefits from a periodical array with n unit cells, where each unit can be tuned individually into m states utilizing active elements such as PIN diodes [8,10-12], varactors [13-20], or MEMS [21]; thus, ideally speaking, we can possess in total as many as m^n tunable states. This means that extremely large MTMs with an infinite (n → ∞) number of units have an infinite number of tunable states, leading to continuous tuning. Conversely, compact MTMs with a very limited number of units have only several discrete tunable states. The absence of continuous tunability in an active MTM design is called tuning blindness, and it has two causes: the MTM design has very few periodical units, such as two, three, or five cells; and the RF switches used as the active components in the MTM have only two tunable states (ON/OFF). For instance, as demonstrated in Figure 1a, we simulate an MTM antenna containing two-unit (n = 2, m = 2) HIS structures, and it indeed demonstrates several tunable resonances, but they are discrete, with unavoidably induced resonance blindness (as shown in the shadow area). Similarly, in [9], where programmable radiations are realized with PIN diodes, and in [21], where programmable spectrum resonances are achieved with MEMS, there exists tuning blindness as well. More specifically, in [9], though scanning beams from roughly −60° to +60° are obtained, as the shadow area demonstrates in Figure 1b, scanning blindness occurs from −15° to +15°. In this paper, we explore another method to achieve continuous tuning with PIN diodes: investigating the equivalent impedance in the transition zone between the fully ON and OFF states, and exploiting the nonlinear zone in between.
Thanks to the PIN diodes possessing this nonlinear zone, we can achieve continuous spectrum tuning without blindness and, meanwhile, with low actuated voltages of less than 1.5 V, which is suitable for NB-IoT scenarios that require many tunable but narrow-band spectrum channels with low power consumption.

Figure 1. Tuning blindness exists in (a) a compact MTM antenna with two periodical units and in (b) programmable spatial radiation patterns [9].

In practice, MTMs with a limited number of periodical units are quite common, and in some cases, they are even preferred due to their compact size. In addition to the method proposed in this paper, another common method for avoiding blindness is to increase the tunable states m possessed by the individual cell, while the unit number n is kept to a small value for a compact size.
For instance, [13-20] introduce tunable antennas using varactor diodes, or variable capacitors, to obtain multiple tunable states m = 9, 7, 6, respectively. These good works with multi-state tuning indeed increase the tuning continuity with discrete structures, but usually require variable voltages of up to 20 V, which might not be compatible with low-power applications such as narrow bandwidth Internet of Things (NB-IoT) [22-24]. This paper is arranged as follows: Section 2 investigates the nonlinear property of PIN diodes; Section 3 introduces the electromagnetic (EM) co-simulation; Section 4 presents experiments; Sections 5 and 6 provide the discussion and conclusion.

Nonlinear Properties

PIN diodes are conventionally utilized as RF switches with two states (ON/OFF). However, there exists a transition zone in between. In order to investigate this nonlinear property, we study the relationship between the equivalent impedance and the actuated voltage using the PIN diode A (MACOM MA4AGBLP912, MACOM, Lowell, MA, USA). First, we measure the PIN diode by employing a microstrip line in a 5 GHz band. As shown in Figure 2a, we make a slot in the middle of the standard 50 Ω microstrip line and integrate the surface-mounted PIN diode A there, then apply two inductors (Murata LQW18AN22NG00, Murata, Nagaokakyo, Kyoto, Japan) with a large value (22 nH) to block the interference from the DC supplier. Second, we apply transmission line (TL) theory to analyze this equivalent circuit model, as shown in Figure 2b. The equivalent model includes Z_c = 50 Ω, which represents the characteristic impedance of the standard transmission line with length l_0, the equivalent impedance Z_pin of the PIN diodes, and the port impedance Z_port = 50 Ω.
According to TL theory, the equivalent impedance of the PIN diodes Z_pin can be retrieved from the input impedance Z_in [25], where β is the phase constant and the input impedance Z_in is measured in experiments. As shown in Figure 2c, as the actuated voltages are varied from 0 V to 1.5 V, the equivalent impedance of the PIN diodes Z_pin changes accordingly; the resistance ranges from 225 Ω down to a very small value close to 0 Ω, and the reactance varies from −200 Ω to a very small value as well. In particular, we can observe that there exists a transition zone (marked by the shadow area in Figure 2c) between the PIN diode's OFF zone, where the impedance is around 200 − 200j Ω, and the ON zone, where the impedance is a very small value close to zero. In this transition zone, the actuated voltage is around 1-1.2 V and, accordingly, the impedance varies nonlinearly and smoothly from the OFF state to the ON state. In order to accommodate the EM co-simulation including passive EM models and nonlinear active components, we build a PIN diode model with respect to the nonlinear properties and considering the parameters of actuated voltages and frequencies. Referring to the impedance curves shown in Figure 2c, the curves in the transition zone are nonlinear with an S shape, which is close to the Boltzmann function [26]. Thus, we select the Boltzmann function to fit them.
Based on Boltzmann's mathematical model, the real part Z_Re and the imaginary part Z_Im are

Z_Re = Z_Re_on + (Z_Re_off − Z_Re_on) / (1 + exp((V − V_Re_0)/d_Re)), (2)

Z_Im = Z_Im_on + (Z_Im_off − Z_Im_on) / (1 + exp((V − V_Im_0)/d_Im)), (3)

in which V is the actuated voltage for PIN diode A, and Z_Re_off and Z_Re_on are the measured Z_Re when the diode is in the OFF state with V = 0 V and the ON state with V = 1.5 V. Similarly, Z_Im_off and Z_Im_on are Z_Im when V = 0 V (OFF state) and 1.5 V (ON state). V_Re_0 is defined as the voltage at which Z_Re equals the mean of Z_Re_off and Z_Re_on, while V_Im_0 is the voltage at which Z_Im equals the mean of Z_Im_off and Z_Im_on. Parameters d_Re and d_Im set the slope of the curves Z_Re and Z_Im at V = V_Re_0 and V = V_Im_0. Until now, the above equations have concerned only one frequency point, but we need to consider the whole frequency band.
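A minimal numerical sketch of such a sigmoid model. The standard Boltzmann form used here is an assumption (the paper's fitted equations are not reproduced verbatim), and the parameter values are the f_2 = 5.3 GHz numbers quoted in the text (Z_Re_off = 223.8 Ω, Z_Re_on = 11.18 Ω, V_Re_0 = 1.08 V, d_Re = 0.04751):

```python
import math

def boltzmann(v, z_off, z_on, v0, d):
    """Boltzmann sigmoid: ~z_off at low V, ~z_on at high V, midpoint at v0."""
    return z_on + (z_off - z_on) / (1 + math.exp((v - v0) / d))

# Assumed sample parameters for Z_Re at f2 = 5.3 GHz (values quoted in the text).
Z_OFF, Z_ON, V0, D = 223.8, 11.18, 1.08, 0.04751

for v in (0.0, 1.0, 1.08, 1.2, 1.5):
    print(f"V = {v:4.2f} V  ->  Z_Re ~ {boltzmann(v, Z_OFF, Z_ON, V0, D):7.2f} ohm")
```

The model reproduces the three regimes: near Z_off for V below ~1 V, a steep smooth drop across the 1-1.2 V transition zone, and near Z_on at 1.5 V.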
This means all the parameters in (2) and (3) — Z_Re_off, Z_Re_on, V_Re_0, d_Re and Z_Im_off, Z_Im_on, V_Im_0, d_Im — need to be related to frequencies. We select several frequency points located at the relatively low, moderate, and high sections of the band, and fit them to the equations, thereby covering the whole frequency band when describing the nonlinear properties. In particular, according to the shapes of these curves with respect to frequency, the mean function and the Gaussian function are applied to fit the real part Z_Re and the imaginary part Z_Im, respectively. For Z_Re, the relative parameters with respect to frequencies can be described as in (4)-(7), where three typical frequency points are f_0 = 4.7 GHz, f_1 = 5 GHz, and f_2 = 5.3 GHz. Other parameters are Z_Re_on = 11.18 Ω, Z_Re_off(f_2) = 223.8 Ω, V_Re_0(f_2) = 1.08 V, d_Re(f_2) = 0.04751, d_1 = 126.57, d_2 = −0.05024, and d_3 = 7.45 × 10^−3. Similarly, we use (8)-(11) for the imaginary part Z_Im, where the relative parameters are defined analogously. In particular, the parameter f_3 = 4.922 GHz is derived from the peak position of the Gaussian function. Finally, we obtain the completed equations expressing the nonlinear property of the PIN diode. Note that we fit the measured impedance curves of the PIN diode with these abovementioned equations through several typical frequency points f_0, f_1, and f_2; thus, we need to double-check whether they can represent the whole frequency band. We randomly select the frequencies 4.75 GHz, 4.9 GHz, and 5.13 GHz in the band, and compare the fitting curves with the measured results. As shown in Figure 3a, Z_Re and Z_Im match well with the measured ones, implying the equivalent effectiveness of the nonlinear property in the whole frequency band.
In this way, we obtain the mathematical expressions to describe the nonlinear properties of the PIN diode, and accordingly model this PIN diode in ANSYS Electronics Desktop, ensuring the nonlinear property is considered in the EM co-simulation. To demonstrate that the nonlinear properties can be exploited for achieving smooth and uniform resonance tuning, we implement a proof-of-concept level simulation with the PIN diode A (MACOM MA4AGBLP912). As shown in Figure 3b, it is a parallel L_1C_1 circuit model with the parameters L_1 = 1 nH and C_1 = 1 pF.
In particular, we put another capacitance C_2 in the shunt direction with the same value C_2 = 1 pF, but it can be connected or disconnected in parallel with the L_1C_1 circuit via the PIN diode, which is controlled by the supplying voltages through the OFF state, the nonlinear states, and the ON state. An inductance of L_2 = 1 H is used to block the interference from the DC suppliers. Theoretically speaking, there should be a continuous resonance tuning between the resonance 1/(2π√(L_1C_1)) = 5.03 GHz when the PIN diode is ideally open, and 1/(2π√(2L_1C_1)) = 3.56 GHz when the diode is ideally short, through middle states while actuating the diode in the nonlinear zone. As shown in Figure 3c, by controlling the actuated voltages to make the PIN diode work in the OFF, ON, and transition states, the resonances are tuned from 3.43 GHz to 4.78 GHz via the nonlinear states 3.78 GHz, 4.11 GHz, and 4.44 GHz, respectively. This continuous and smooth resonance tuning verifies the concept of eliminating the resonance blindness with the nonlinear properties of PIN diodes. In brief, PIN diodes have the advantage of nonlinear properties while the actuated voltages fall in the transition zone, providing the potential capability of continuous tuning in an MTM antenna even with a very limited number of units. That is different from varactors, which rely on a large dynamic voltage range, and from MEMS, which have a noncontinuous equivalent capacitance variation due to the beam membrane pull-in at the 1/3 position [27].

Layout, Design, and EM Co-Simulation

We design a compact MTM antenna using PIN diodes to introduce the EM co-simulation, and take advantage of its nonlinear properties to realize the smooth tuning and eliminate the resonance blindness.
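Before moving on, the two limiting resonances quoted for the proof-of-concept LC circuit above can be checked directly; a quick numeric sketch with the stated L_1, C_1, C_2 values:

```python
import math

def lc_resonance_hz(l_henry, c_farad):
    """Resonant frequency of an ideal parallel LC tank: 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2.0 * math.pi * math.sqrt(l_henry * c_farad))

L1, C1, C2 = 1e-9, 1e-12, 1e-12      # 1 nH, 1 pF, 1 pF (values from the text)

f_open = lc_resonance_hz(L1, C1)        # PIN diode ideally open
f_short = lc_resonance_hz(L1, C1 + C2)  # PIN diode ideally short (C2 in parallel)
```

The open-diode tank resonates near 5.03 GHz and the short-diode tank near 3.56 GHz, matching the text; intermediate diode impedances in the nonlinear zone sweep the resonance between these limits.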
As in Figure 4a, the active MTM structure comprises two periodical cells, which produce a compact size; a PIN diode that plays the role of the active element in each unit; and inductance chips for blocking the interference from the DC suppliers. Via holes are made between the top and bottom layers (Figure 4b) to connect the micro-control unit (MCU) for the DC supply. As seen in the side view in Figure 4c,d, PIN diodes placed in the two slots of each unit electrically connect/disconnect these slots, thus manipulating the zeroth-order resonances (ZORs) of the MTM antenna. Thanks to the MTM configuration separating units from each other, the voltages actuating each PIN diode can be controlled independently. FR4 material with permittivity ε_r = 4.3, tanδ = 0.02, and thickness h = 2.5 mm is used as the substrate. The unit cell is designed according to CRLH-TL theory [11], in which the equivalent circuit model has inductances and capacitances in both the series and shunt directions, thereby producing the zeroth-order resonance (ZOR) at the frequency where β = 0. The mechanism can be qualitatively demonstrated by the equivalent circuit model shown in Figure 4e. The left-handed capacitance is equivalently considered as C_L = 2C_L1 + C_L2, where the capacitance C_L1 is formed by the gaps between adjacent units and C_L2 is produced by the two symmetric J-shaped patches. The left-handed inductance L_L is generated by a strip patch in the x-direction, and is regarded as being connected to the ground through another capacitance C_g induced between the edge patch and the ground. The series inductance L_R and shunt capacitance C_R are formed from the conventional microstrip line. According to CRLH theory, the ZOR ω_zor is related to the shunt-directed resonance ω_sh [11], which indicates what the active element PIN diodes are particularly utilized to tune: shorting/opening the PIN diodes alters the effective area of the edge patch, equivalently varying the capacitance C_g and, hence, tuning the ZORs.
Moreover, we design the unit to operate in the ZOR mode because, at this resonance, the phase constant β = 0 and the guided wavelength is infinite, leading to the favorable characteristic that the resonance is independent of the physical length [11]. Therefore, we have the freedom to employ an arbitrary number of periodical units. For a compact MTM antenna demonstrating ZOR tuning without blindness, we utilize two periodical units as an example.
We use ANSYS Electronics Desktop to simulate the whole design, including the passive EM model and the active element PIN diodes, as illustrated in Figure 5a. First, we design the antenna with a passive simulation in HFSS without any diodes; in this passive full-wave simulation, the presence/absence of rectangular patches imitates the ON/OFF states of the PIN diodes, giving a preliminary simulation of the electric field distributions and radiation patterns. Afterward, lumped ports are set up in the EM model where the active elements are placed, allowing us to insert the active element models there. Then, we build the SPICE model for PIN diode A (MACOM MA4AGBLP912) with the mathematical equations shown above, thereby capturing its nonlinear property. In addition, the S2P file of the inductor (Murata LQW18AN22NG00, Murata, Nagaokakyo, Kyoto, Japan) is employed in the EM co-simulation as well. As shown in Figure 5b, a simulation conducted with the S2P file of the inductor exhibits good isolation, of less than −20 dB, between the DC supplier and the RF signals. With the active elements ready, we finally implement the EM co-simulation by assigning the S2P file of the inductor and the SPICE model of PIN diode A to the lumped ports. In particular, four DC voltage sources are connected to the lumped ports to supply the PIN diodes accordingly. With this method, the co-simulation results can be obtained within a few minutes.
With the co-simulation method, we obtain the active MTM antenna simulations plotted in Figure 6, in which both the linear and nonlinear cases are illustrated. In the linear case, as shown in Figure 6a, there are OFF and ON states; we code a PIN diode in the OFF state as state 0 when 0 V is applied, and code the ON state as state 1.5 when 1.5 V is applied. For example, 0-1.5-0-1.5 means the second and fourth PIN diodes are actuated to the ON state while the other two diodes are in the OFF state. In a nonlinear case, as shown in Figure 6b, however, there are additional nonlinear states whose actuated voltages fall in the transition zone. We code these nonlinear states exactly as the voltages actuated to the PIN diodes. For instance, 0-1.1-1.23-1.09 indicates the four PIN diodes are actuated with 0 V, 1.1 V, 1.23 V, and 1.09 V, respectively.
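The state codes used above can be handled with a small helper; this is an illustrative sketch matching the text's conventions (the function names are ours, not from the paper):

```python
def encode_state(voltages):
    """Format per-diode actuation voltages as a state code,
    e.g. [0, 1.5, 0, 1.5] -> '0-1.5-0-1.5'."""
    return "-".join(f"{v:g}" for v in voltages)

def decode_state(code):
    """Parse a state code back into the list of voltages."""
    return [float(tok) for tok in code.split("-")]

def is_linear(voltages, v_on=1.5):
    """A 'linear' state uses only the fully OFF (0 V) or fully ON
    (v_on) levels; anything in the transition zone is nonlinear."""
    return all(v in (0.0, v_on) for v in voltages)
```

For instance, `is_linear(decode_state("0-1.1-1.23-1.09"))` is false, since three of the four voltages sit in the transition zone.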
In Figure 6b, the ZORs of the nonlinear case are tuned from 4.71 GHz to 5.31 GHz via 4.82 GHz, 4.96 GHz, 5.11 GHz, and 5.19 GHz, while in the linear case, as shown in Figure 6a, the resonances are tuned from 4.71 GHz to 5.31 GHz but the tuning is not smooth and continuous, and there is blindness in the band from 4.83 GHz to 5.07 GHz. Since each unit can provide four coding sequences, 0-0, 0-1.5, 1.5-0, and 1.5-1.5, an MTM antenna consisting of two units has 16 coding states in total to cover 4.71 GHz to 5.31 GHz, while the nonlinear case has additional intermediate states. As shown in Figure 6c, by comparing the 16 states of the linear case and 30 selected states of the nonlinear case, we find that the nonlinear advantages allow the ZOR tuning to be smooth, continuous, and uniform, without resonance blindness.

In summary, we apply the S2P file of the inductor, the SPICE model of PIN diode A, and the DC voltage model to the EM co-simulation. These two-port models have the advantage of not requiring a complicated equivalent circuit model with all the detailed parameters of R, L, and C, because all of these circuit parameters are included in the S2P or SPICE model. Thanks to the EM co-simulation considering the nonlinear properties of PIN diodes, we can simulate an active MTM antenna with continuous and uniform resonance tuning and eliminate the resonance blindness. The nonlinearity of PIN diodes not only prevents the frequency-tuning blindness caused by the compact MTM design's limited discrete states, but also makes the frequency tuning uniform.
Experimental Implementation

According to the previous design, the compact MTM antenna consisting of two cells is fabricated as shown in Figure 7a. The configuration and layout are exactly those in Figure 4: the FR4 substrate has the parameters ε r = 4.3 and tanδ = 0.02, and PIN diode A and the inductance chip are a MACOM MA4AGBLP912 and a Murata LQW18AN22NG00, respectively. As demonstrated in Figure 7b, via holes go through the substrate to connect four pairs of wires, so as to supply these PIN diodes through the micro-control unit (MCU). In this design, the four channels of the DC supply can be manipulated independently because of the isolated and periodical configuration of the MTM. Figure 7c shows the setup for anechoic chamber measurements, in which a laptop running C code controls the MCU for voltage manipulations.
We measure both the linear case, which includes the OFF state (actuated voltage 0 V, coded 0) and the ON state (actuated voltage 1.5 V, coded 1.5), and the nonlinear case (each state coded as its actuated voltages), which considers applying voltages in the transition zone. In Figure 8a,b, several ZORs of the linear and nonlinear cases are demonstrated; as the resonances are tuned from 4.7 GHz to 5.3 GHz through many tuning states, the resonance tuning of the linear case is not uniform, while that of the nonlinear case is uniform and smooth. More specifically, as shown in Figure 8c, more tunable states are compared. For the linear case, which considers all m^n = 4^2 = 16 tuning states (m = 4 represents the four PIN diodes, and n = 2 indicates the PIN diodes' ON/OFF states) of the compact two-cell MTM antenna, we can clearly observe that the tuning is nonuniform and that blindness clearly exists in the frequency bands of 4.9–5.1 GHz and 5.1–5.25 GHz. For instance, states 0-0-1.5-1.5 and 1.5-0-0-0 have almost the same resonant point and overlap at 5.12 GHz, while states 1.5-1.5-0-0 and 0-1.5-0-1.5 are separated by roughly 0.2 GHz and are recognized as tuning blindness.
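For reference, the full linear-state space can be enumerated mechanically; a short sketch (the state-code formatting follows the text's convention, the helper itself is ours):

```python
from itertools import product

# Each of the four PIN diodes is either OFF (0 V) or ON (1.5 V),
# giving the 16 linear coding states discussed in the text.
linear_states = ["-".join(f"{v:g}" for v in combo)
                 for combo in product((0, 1.5), repeat=4)]

print(len(linear_states))   # 16
print(linear_states[:3])    # ['0-0-0-0', '0-0-0-1.5', '0-0-1.5-0']
```

With only these 16 discrete states available, gaps between neighboring resonances cannot be filled, which is exactly the blindness the nonlinear states remove.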
For the nonlinear case with manipulated supply voltages in the transition zone of 1 V to 1.2 V, however, the tuning is very uniform, leading to continuous resonance tuning without blindness. In this case, we code the supplied voltage of each PIN diode working in the nonlinear zone. For example, state 1.05-1.5-1.5-0 means the four PIN diodes from left to right are actuated with 1.05 V, 1.5 V, 1.5 V, and 0 V, respectively. Thanks to the PIN diodes possessing the nonlinear property, we can obtain many tunable states. Thirty tunable states are illustrated in Figure 8c, and it is seen that the ZORs are tuned uniformly with a step of around 0.02 GHz in the range from 4.7 GHz to 5.3 GHz, eliminating the resonance blindness and indicating the nonlinear advantages of PIN diodes. In addition, as shown in Figure 8c, in both the linear and nonlinear cases, the simulated ZORs agree well with the measured ones, validating the effectiveness of the nonlinear model and EM co-simulation.
Note that, as shown in Figure 8b, the bandwidth varies as the resonances are tuned across different states. This can be explained by the fact that when the active element PIN diodes are used to tune the effective area of the edge patch, they vary the circuit parameter C g, as shown in Figure 4e. Meanwhile, the PIN diode itself also introduces resistance, which varies the conductance G. Hence, the Q factor and bandwidth change. More specifically, according to CRLH theory, the resonance ω zor is dominated by the shunt-directed resonance ω sh, as indicated in Equation (14). Thus, the Q factor and bandwidth are investigated in terms of the shunt-directed circuit part. As shown in Figure 4e, which illustrates the equivalent circuit model, the shunt admittance can be written as

Y sh = G + jωC R + 1/(jωL L + 1/(jωC g)). (15)

The quality factor Q is

Q = ω zor C sh / G, (16)

where C sh is the effective shunt capacitance set by C R and C g. Consequently, the bandwidth can be expressed as

BW = ω zor / Q = G / C sh. (17)

This equation can qualitatively explain the relationship between the bandwidth and the different tunable states. Employing PIN diodes in an active MTM antenna electrically opens/shorts the gaps in the edge patch, varying the parameter C g. On the other hand, the resistance variations of the PIN diodes in the shunt direction affect the conductance G.
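The qualitative argument can be put into numbers with the textbook parallel-resonator relations Q = ω0·C/G and BW = f0/Q; the component values below are assumptions for illustration only, not measured parameters of the antenna.

```python
import math

def q_and_bandwidth(f0, C, G):
    """Textbook parallel-resonator figures: Q = w0*C/G and the
    3 dB bandwidth BW = f0/Q = G/(2*pi*C) in Hz.
    f0, C, G here are illustrative assumptions."""
    w0 = 2 * math.pi * f0
    Q = w0 * C / G
    return Q, f0 / Q

f0 = 5.0e9  # a ZOR near the middle of the tuning range
for C, G in ((1.0e-12, 2.0e-3), (1.2e-12, 2.0e-3), (1.2e-12, 3.0e-3)):
    Q, bw = q_and_bandwidth(f0, C, G)
    print(f"C = {C*1e12:.1f} pF, G = {G*1e3:.1f} mS -> "
          f"Q = {Q:.1f}, BW = {bw/1e6:.0f} MHz")
```

The three cases show that changing either the effective shunt capacitance or the conductance alone already shifts the bandwidth, consistent with the observation that different tunable states have different bandwidths.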
That indicates that the tunable states vary both C g and G. According to Equation (17), these two variables change the bandwidth. Therefore, as seen in Figure 8b, the bandwidth changes according to the tunable state.

We study the gains, efficiency, and radiation patterns of the active MTM antenna with PIN diode A (MACOM MA4AGBLP912). In particular, the two extreme states (completely ON and completely OFF) and four nonlinear states are investigated; in the other states, the gains and radiation efficiency are on the same level and the radiation patterns are quite similar. As shown in Figure 9, the two extreme states, 0-0-0-0 and 1.5-1.5-1.5-1.5, in which the PIN diodes are completely OFF/ON, have gains of 3.73 dBi and 2.27 dBi, respectively. For the four nonlinear states, 0-0-1.02-0, 0-1.01-1.5-0, 1.05-1.5-1.5-0, and 1.5-1.5-1.09-1.5, the measured gains are 3.41 dBi, 2.77 dBi, 2.51 dBi, and 2.4 dBi, which lie between the gains of the two extreme cases. The corresponding radiation efficiencies of the nonlinear states are 49%, 43.5%, 37.7%, and 36.4%, between the two extreme states of 54% (OFF state) and 36% (ON state). In terms of radiation patterns, as illustrated in Figure 10a–f, all the states, including the completely ON/OFF states and the four nonlinear states, demonstrate similar radiation patterns, and the measured radiation patterns agree well with the simulated ones.
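As a quick consistency check, the gains and efficiencies quoted above (values copied from the text; the dictionary layout is ours) can be verified to bracket every nonlinear state between the two extremes:

```python
# Measured gains (dBi) and radiation efficiencies from the text.
extremes = {"0-0-0-0": (3.73, 0.54), "1.5-1.5-1.5-1.5": (2.27, 0.36)}
nonlinear = {
    "0-0-1.02-0":       (3.41, 0.49),
    "0-1.01-1.5-0":     (2.77, 0.435),
    "1.05-1.5-1.5-0":   (2.51, 0.377),
    "1.5-1.5-1.09-1.5": (2.4,  0.364),
}

g_off, e_off = extremes["0-0-0-0"]
g_on, e_on = extremes["1.5-1.5-1.5-1.5"]
for state, (gain, eff) in nonlinear.items():
    # Each nonlinear state sits between the fully ON and fully OFF cases.
    assert g_on <= gain <= g_off and e_on <= eff <= e_off
print("all nonlinear states lie between the ON and OFF extremes")
```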
In this part, based on the EM co-simulation, we implement experiments with PIN diodes, which demonstrate nonlinear advantages over the linear case in eliminating resonance blindness and in realizing uniform and continuous ZOR tuning.

Discussion

In this section, several interesting items associated with the nonlinearity of PIN diodes are discussed. First, we keep the same MTM antenna design but employ PIN diode B (MACOM MA4FCP300) instead, to study the generality of this kind of nonlinear property.
As shown in Figure 11a, it demonstrates similar nonlinear properties: there exists a nonlinear zone where the actuated voltages fall in the transition zone of 0.6–0.7 V, and by taking advantage of this nonlinearity, we can achieve the same advantages over the linear case in realizing uniform and continuous ZOR tuning without blindness in the range of 4.7 GHz to 5.3 GHz. The radiation patterns are quite similar to those of PIN diode A, and the gains and radiation efficiency are illustrated in Figure 11b; the gains and radiation efficiencies for the two extreme cases are 1.34 dBi and 3.46 dBi, and 38.41% (ON state) and 51.4% (OFF state), respectively, while for the nonlinear states 0-0-0.66-0, 0-0.69-1-0, 0.64-1-1-0, and 1-1-0.61-1, the corresponding values are on the same level but lie between those of the completely ON and OFF states. That means that, whether for PIN diode A or B, the nonlinear property is not a special case and exists similarly and generally in other kinds of PIN diodes.
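For bookkeeping, the transition zones quoted in the text for the two diodes can be captured in a small lookup; the classification helper below is our own simplification (it coarsely treats everything below the zone as OFF and everything above as ON):

```python
# Transition (nonlinear) zones taken from the text; dict layout is ours.
TRANSITION_ZONE = {
    "MA4AGBLP912": (1.0, 1.2),  # PIN diode A
    "MA4FCP300":   (0.6, 0.7),  # PIN diode B
}

def regime(diode, v):
    """Coarsely classify an actuation voltage as OFF, nonlinear, or ON."""
    lo, hi = TRANSITION_ZONE[diode]
    if v < lo:
        return "OFF"
    if v <= hi:
        return "nonlinear"
    return "ON"
```

For example, 1.1 V drives diode A into its nonlinear zone, while the same 1.1 V would fully turn diode B on, which is why the state codes for the two diodes use different voltage levels.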
Second, we study the stability of the nonlinear property, namely, how stable the PIN diodes are while they work in the nonlinear zone. As shown in Figure 12a, we measure four nonlinear states, 1.01-0-1.5-0, 1.05-1.5-1.5-0, 1.11-1.5-1.5-0, and 1.5-1.5-1.09-1.5, when using PIN diode A (MACOM MA4AGBLP912), and another four nonlinear states, 0-0-0.66-0, 0.61-0-1-0, 1-1-0.64-0, and 1-1-0.61-1, when using PIN diode B (MACOM MA4FCP300, MACOM, Lowell, MA, USA), four times on different dates. In the measurements for the two different PIN diodes, the ZORs remain the same with only slight variation, indicating good stability of the nonlinear property. For example, for PIN diode A, state 1.01-0-1.5-0 gives the same resonance at 5.18 GHz at the different measurement times, and the other nonlinear states vary by less than 0.005 GHz. Third, voltage tolerance needs to be investigated because the ZORs seem very sensitive to voltage variation when the PIN diodes operate in the nonlinear transition zone.
Figure 13a shows that the supplying voltage has good tolerance, avoiding the risk of excessive sensitivity to voltage variations regardless of the type of PIN diode. For instance, considering state 1.01-0-1.5-0 for PIN diode A, we achieve a stable resonant frequency of 5.18 GHz while varying the supplying voltage from 1.005 V to 1.015 V, meaning a voltage tolerance of 0.01 V. For PIN diode B with the state 0.61-0-1-0, as shown in Figure 13b, we similarly achieve a stable resonant frequency of 5.18 GHz while varying the supplying voltage from 0.605 V to 0.621 V, indicating a voltage tolerance of 0.016 V.

Finally, for the proposed active MTM antenna, we investigate the influence of the active components, including the inductance, MCU, and PIN diodes, on the radiation gains and efficiency. Looking at Figure 14a, several states for both the active and passive cases are shown; the gains with active components decrease by 1 to 2 dBi compared to those without, while the radiation efficiency of the active case, as shown in Figure 14b, is lower than that of the passive case, but by no more than 10%.
Employing active components, as compared in Table 1, indeed shows the nonlinear advantages in eliminating resonance blindness over the passive case or the case applying only RF switches with OFF/ON states. Meanwhile, the proposed active MTM antenna requires actuated voltages lower than 1.5 V, so it can be applied to 5G narrow-bandwidth Internet of Things (NB-IoT) systems with low power capacities.

Conclusions

In this paper, we study the nonlinear property of PIN diodes, fit it into an EM co-simulation, and, in particular, apply it to an active MTM antenna to eliminate resonance tuning blindness. We conclude that the nonlinear property indeed helps achieve smooth resonance tuning with low actuated voltages, and that it can be generally extended to other PIN diodes with good stability and voltage tolerance. The active MTM antenna with uniform and smooth frequency tuning slices the frequency spectrum into many narrow-band channels, which suits 5G narrow-bandwidth Internet of Things (NB-IoT) applications requiring narrow-bandwidth spectrum channels and low power capacities.
Institutional Review Board Statement: Not applicable.

Informed Consent Statement: Not applicable.

Data Availability Statement: The data presented in this study are openly available.
Risk Factors and Clinical Characteristics of Patients with Ocular Candidiasis

Ocular candidiasis is a critical and challenging complication of candidemia. The purpose of this study was to investigate the appropriate timing of ophthalmologic examinations, the risk factors for complications of ocular lesions, and their association with mortality. This retrospective cohort study applied multiple logistic regression analysis and Cox regression models to cases of candidemia (age ≥ 18 years) in patients who underwent ophthalmologic consultation. Of the 108 candidemia patients who underwent ophthalmologic examination, 27 (25%) had ocular candidiasis, and 7 had the more severe condition of endophthalmitis, which included subjective ocular symptoms. In most cases, the initial ophthalmologic examination was performed within one week of the onset of candidemia, with a diagnosis of ocular candidiasis; in three cases, however, the findings became apparent only at a second examination 7–14 days after onset. The independent risk factors extracted for the development of ocular candidiasis were the isolation of C. albicans (OR, 4.85; 95% CI, 1.58–14.90), an unremoved CVC (OR, 10.40; 95% CI, 1.74–62.16), and a high βDG value (>108.2 pg/mL) (HR, 2.83; 95% CI, 1.24–6.27). Continuous ophthalmologic examination is recommended in cases of candidemia with the above risk factors, with an initial examination within 7 days of onset and a second examination 7–14 days after onset.

Introduction

Candidemia is a nosocomial infection that is a major cause of morbidity and mortality [1,2]. More than 250,000 patients are affected worldwide every year, and more than 50,000 of them die of this critical infectious disease [3]. A comprehensive candidemia care bundle definitely improves patient care and mortality in patients with this severe infectious disease [4][5][6][7].
Generally, a comprehensive candidemia care bundle includes the appropriate choice of antifungal drugs, the removal of central venous catheters, and ophthalmological examinations. Candidemia sometimes leads to hematogenous disseminated lesions in several parts of the body, and metastatic ocular infection is a particularly challenging complication [8]. Therefore, ophthalmological examination is strongly recommended in all guidelines for invasive candidiasis [9]. The guidelines, however, include few detailed recommendations about when and how often ophthalmological examination should be performed. Although both the necessity and utility of repeated ophthalmological examination have previously been reported, specific guidelines are yet to be published. Furthermore, there have been few reports of the risk factors associated with ocular candidiasis. One factor is that the relationship between ocular candidiasis and prognosis in patients with candidemia is not well known. Here, we describe a retrospective cohort study to determine the risk factors related to ocular candidiasis and identify the relationships between mortality and eye involvement. In addition, we analyze the clinical characteristics of patients with ocular involvement in order to establish appropriate intervention methods via ophthalmological examination. Study Design This retrospective cohort study was conducted from April 2013 to March 2020 at Kurume University Hospital in Japan, a tertiary-care university hospital with more than 1000 beds. We used a microbiology database to identify all patients (age ≥ 18 years) with candidemia and collected clinical information from their medical records. This study was approved by the Medical Ethics Committee of Kurume University Hospital (No. 21123) and was implemented in accordance with the Declaration of Helsinki. 
Patient consent was waived owing to the retrospective nature of the study. Clinical Definitions Candidemia was defined as the isolation of Candida spp. from at least one blood culture in a patient with clinical signs of infection. Subsequent positive cultures from the same patient were considered a new episode if an interval of more than 30 days had transpired between the two episodes. Exclusion criteria were patients less than 18 years of age, cases considered to be contamination and therefore untreated, and patients who had received no ophthalmological consultation. The onset of candidemia was defined as the day the initial positive blood sample was drawn, i.e., the sample that subsequently yielded a culture positive for Candida spp. To classify ocular candidiasis, we referred to criteria established in previous studies [10][11][12][13]. Proven ocular candidiasis was defined as ocular lesions combined with a positive histology or culture of vitreous aspirate. Probable Candida endophthalmitis was defined as vitritis or fluffy lesions with extension into the vitreous. Probable Candida chorioretinitis was defined as deep focal white infiltrates in the retina. In addition, hemorrhages, Roth spots, or nerve fiber layer infarctions (cotton wool spots) in patients with candidemia were classified as probable Candida chorioretinitis if no other causes for these abnormalities were present (e.g., diabetes mellitus or hypertension). If signs of chorioretinitis were seen in patients with an underlying systemic disease that could cause similar lesions (e.g., diabetes, hypertension, or concomitant bacteremia), these cases were classified as possible ocular candidiasis. In statistical analysis, we grouped all cases as either "ocular candidiasis" or "non-ocular candidiasis" according to the above-mentioned classification. The ocular candidiasis group included patients with proven ocular candidiasis, patients with probable Candida endophthalmitis, and patients with probable Candida chorioretinitis. 
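As an aside, the 30-day new-episode rule above can be sketched in a few lines of Python (a minimal sketch; the function and variable names are ours, not the study's):

```python
from datetime import date

def group_episodes(culture_dates, gap_days=30):
    """Group one patient's positive blood-culture dates into episodes.

    A new episode begins when more than `gap_days` days separate a positive
    culture from the previous one; otherwise the culture belongs to the
    current episode. Returns a list of episodes, each a list of dates; the
    first date of each episode is taken as the onset of candidemia.
    """
    episodes = []
    for d in sorted(culture_dates):
        if episodes and (d - episodes[-1][-1]).days <= gap_days:
            episodes[-1].append(d)   # within 30 days: same episode
        else:
            episodes.append([d])     # gap > 30 days: new episode
    return episodes

# Illustrative dates only: two cultures 4 days apart, then one 52 days later.
cultures = [date(2020, 1, 5), date(2020, 1, 9), date(2020, 3, 1)]
eps = group_episodes(cultures)
onsets = [ep[0] for ep in eps]       # onset = first positive draw per episode
```

By this rule the example yields two episodes, with onsets on January 5 and March 1.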
The non-ocular candidiasis group, on the other hand, included patients with possible ocular candidiasis and patients with no abnormal ocular findings. In previous research, possible ocular candidiasis has often been counted as ocular candidiasis. In the present study, we assigned possible ocular candidiasis to the non-ocular candidiasis group to ensure a strict assessment of risk factors [10]. Underlying Conditions and Clinical Status The predisposing factors and clinical information acquired from medical records included age, sex, underlying diseases (diabetes mellitus, hypertension, chronic heart disease, chronic obstructive pulmonary disease, liver diseases, chronic kidney disease, malignant disease), immunocompromised status (steroid therapy, neutropenia (absolute neutrophil count <500/µL), use of immunosuppressive agents, exposure to chemotherapy, exposure to radiation therapy, HIV infection, receipt of stem cell transplantation), intensive care unit (ICU) admission, history of surgery, prior antibiotic exposure, prior antifungal exposure, presence of a central venous catheter (CVC), removal of the CVC, interval between onset of candidemia and removal of the CVC, total parenteral nutrition (TPN), mechanical ventilation, hemodialysis, septic shock, initial antifungal drugs, interval between onset and administration of antifungal drugs, causative Candida spp., persistent bloodstream infection (blood culture again positive after an interval of at least 72 h from onset), and 30-day mortality. The ophthalmological clinical course was also collected from the medical records. 
Ophthalmological clinical information included the following: when the ophthalmologic examination was performed, counted from the onset of candidemia; at which ophthalmologic examination ocular candidiasis was diagnosed; the ophthalmoscopy findings; the presence of subjective ocular symptoms; and the ophthalmologic treatment history. The (1,3)-β-D-glucan (βDG) test values were evaluated via the WAKO β-glucan test (Wako, Tokyo, Japan), and the highest value measured within 7 days of the onset of candidemia was used. Statistical Analysis Continuous variables were presented as the mean ± standard deviation (SD) or interquartile range (IQR) and compared using either a Student's t-test or a Mann−Whitney U test. For the values of βDG, logistic single-regression analysis was used for comparison. Categorical variables were presented as numbers and percentages and compared using either the χ2 test or Fisher's exact test. We compared demographic characteristics, clinical factors, and outcomes between episodes with and without ocular candidiasis using univariate analysis, and multivariate analysis was conducted using items that were statistically significant (p-value < 0.05) in univariate analysis. Multivariate statistical analysis was conducted using logistic regression with odds ratios (OR) and 95% confidence intervals (CI). A more detailed statistical analysis was performed for βDG. Receiver operating characteristic (ROC) curves for βDG values were described, and their cut-off values were determined with the maximum Youden index. All cases were divided into groups with either high or low βDG values based on the cut-off values, and the relationship between βDG values and ocular candidiasis was illustrated using a Kaplan−Meier estimator. Cox regression models were used to calculate the adjusted hazard ratios (HRs) with a 95% confidence interval. 
The association between 30-day mortality and ocular candidiasis was also studied using the same statistical methods. Two-tailed p-values of <0.05 were considered statistically significant. Statistical analyses were performed using the R programming language, version 4.1.3, and JMP software, version 15. Classification of Ocular Candidiasis The process for selecting target cases appears in Figure 1. We identified 149 cases of candidemia and excluded 11 cases: 8 pediatric cases and 3 untreated cases that were considered contamination. Of the 138 cases of candidemia collected from the medical records of adult patients (age ≥ 18 years), 30 cases with no ophthalmological examination were also excluded. Almost all of the patients with no ophthalmological examination were severely ill; these patients either died before diagnosis of candidemia, died before the scheduled ophthalmological examination, or withdrew from treatment because of a poor prognosis. Among the remaining 108 cases of patients who had undergone ophthalmological examination, abnormal findings were found in 40 cases (37%) via fundoscopy. These 40 cases included 7 cases of probable Candida endophthalmitis, 20 cases of probable Candida chorioretinitis, and 13 cases of possible ocular candidiasis according to the criteria stated above. According to the definition mentioned above, 27 cases, which included 7 with probable Candida endophthalmitis and 20 with probable Candida chorioretinitis, were defined as ocular candidiasis in statistical analysis. Actually, the ophthalmologic findings in the cases with possible ocular candidiasis in this study were more likely due to other diseases such as diabetic retinopathy rather than to fungal causes. No changes in fundus findings were observed in these patients during ongoing ophthalmologic examinations. 
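For the univariate risk-factor comparisons reported in this study, the odds ratio and its Wald 95% confidence interval from a 2×2 table can be computed as follows (a minimal sketch with illustrative counts, not the study's data; the multivariable ORs in the paper come from logistic regression, which this does not reproduce):

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio with Wald 95% CI from a 2x2 table:
                           factor present   factor absent
    ocular candidiasis           a                b
    no ocular candidiasis        c                d
    """
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Illustrative (hypothetical) counts only:
or_, lo, hi = odds_ratio_ci(10, 5, 20, 40)
```

With these toy counts the odds ratio is (10×40)/(5×20) = 4.0, and the CI excludes 1, i.e., the association would be called significant at the 0.05 level.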
Clinical Characteristics of Ocular Candidiasis Patient backgrounds, ocular symptoms, and the timing of ophthalmologic consultations for the 27 patients with ocular candidiasis appear in Table 1. Seven patients with probable Candida endophthalmitis complained of subjective eye symptoms. Six patients presented with poor eyesight, two with myodesopsia, and one with misty vision. 
Most patients with ocular symptoms received an ophthalmologic examination and a diagnosis of vitritis within 2 days of the onset of subjective symptoms. On the other hand, some patients reported having been aware of eye symptoms for 4-5 days before the screening ophthalmologic examination. These patients did not consider their ocular symptoms important because they were not asked about eye symptoms. Generally, ocular candidiasis was found by ophthalmological examination after Candida was isolated from a blood culture, but candidemia was suspected in two of the cases with subjective eye symptoms on the basis of fundus findings before Candida spp. was isolated. When one of these two cases had an episode of fever, no blood culture was submitted and the central venous catheter was removed, which could have led to a delay in the diagnosis of candidemia. Of the seven patients with probable Candida endophthalmitis, all presented with vitreous opacity, and six had bilateral lesions. No cases required surgical treatment of the eye, but one case was treated by direct administration of voriconazole (VRCZ) into the vitreous. The twenty patients with probable Candida chorioretinitis did not complain of subjective eye symptoms. Fundus examination identified 18 patients with bilateral abnormal findings, and 2 patients had abnormal findings on one side only. All cases presented with cotton wool spots, and some showed petechial hemorrhage. Primary ophthalmological examinations were conducted an average of 5.6 ± 3.7 days following the onset of candidemia (median, 5 days). A second examination was performed an average of 10.1 ± 4.8 days following the first (median, 8.5 days). Of the 27 patients, 6 died before the second ophthalmologic examination; the remaining 21 patients all had two or more repeated ophthalmologic examinations. 
Most cases showed improvement in fundus findings at the second and subsequent ophthalmologic examinations, but three cases showed a tendency toward exacerbation. In these three cases, the second ophthalmological examinations were conducted 8, 9, and 12 days, respectively, after the onset of candidemia. It is important to emphasize that in these three cases, no abnormalities were evident during the first ocular examination, and the abnormal findings became apparent only during the second examination. In particular, in the patient with AML and severe neutropenia, the ocular lesions gradually worsened with each ophthalmologic examination despite the administration of appropriate antifungal medications, which eventually led to treatment with intravitreal injections. Analysis of Risk Factors for Ocular Candidiasis The characteristics of the patients appear in Table 2. First, we present an overall picture of the 108 cases analyzed in this study. The average age was 67.8 ± 12.5 years, and 67% of the patients were men. As for inpatient wards, 38.8% of the patients were admitted to the intensive care unit (ICU). In terms of underlying disease, malignant tumors accounted for the largest percentage of patients (51.8%), with diabetes mellitus second (33.3%). As for immunosuppressive factors, 29.6% were receiving chemotherapy, and 17.5% were receiving steroid therapy. Almost one-quarter (24%) of the patients had undergone laparotomy or open-heart surgery within the past month. 
The breakdown of fungal species was as follows: Candida albicans was the most commonly isolated species (42.6%), followed by Candida parapsilosis (25.0%), Candida glabrata (14.8%), C. famata (5.6%), C. tropicalis (4.6%), and C. krusei (4.6%). Of the strains detected in this analysis, none with known drug susceptibility were suspected to be highly resistant to antifungal drugs such as fluconazole or echinocandins. Initial antifungal drugs were as follows: azoles (59.3%), echinocandins (37.0%), and liposomal amphotericin B (2.8%). Persistent bloodstream infection accounted for 36.2% of cases. Among the cases with persistent candidemia, there was one case each with suspected complications of infectious endocarditis and vertebral osteomyelitis. The 30-day mortality rate was 20.4%. In univariate analysis, unremoved CVC, isolation of C. albicans, and βDG value were statistically significant, and multivariate analysis was performed as defined above. The results indicated that isolation of C. albicans (OR, 4.85; 95% CI, 1.58-14.90), unremoved CVC (OR, 10.40; 95% CI, 1.74-62.16), and a high βDG value (OR, 1.003; 95% CI, 1.0004-1.005) were independent risk factors for ocular candidiasis (Table 3). With respect to the types of catheters that were not removed, the percentage of central venous access ports (CV ports) was high (50%). Multivariate analysis was performed using items that were statistically significant (p < 0.05) in univariate analysis; all items were also statistically significant (p < 0.05) in multivariate analysis. βDG Values and Ocular Candidiasis As mentioned above, we determined that a high βDG level is an independent risk factor for the development of ocular candidiasis. Therefore, we further analyzed the relationships between βDG levels and ocular candidiasis. Receiver operating characteristic (ROC) curves for βDG values were developed, and the cut-off value determined with the maximum Youden index was 108.2. 
The area under the curve (AUC) was calculated to be 0.68. We divided the patients into two groups: βDG > 108.2 pg/mL and βDG ≤ 108.2 pg/mL. The distribution of time to the diagnosis of ocular candidiasis by ophthalmological examination was estimated using the Kaplan−Meier estimator and analyzed by log-rank testing (Figure 2). The log-rank test showed p < 0.05, which was statistically significant. We used Cox regression models to calculate the adjusted hazard ratios (HRs) and 95% confidence intervals. High βDG values (>108.2 pg/mL) were a statistically significant risk factor for ocular candidiasis (HR = 2.83; 95% CI = 1.24-6.27). ROC curves were created for the presence of ocular lesions, and the Youden index cut-off was calculated to be 108.2; based on this β-D-glucan value, cases were divided into two groups, and the log-rank test was statistically significant at p < 0.05. 30-Day Mortality and Ocular Candidiasis We also investigated the association between complications of ocular candidiasis and 30-day mortality. 
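The βDG cut-off selection and the time-to-diagnosis curve described above can be sketched in pure Python (toy data for illustration only; the study itself used R and JMP):

```python
def youden_cutoff(values, labels):
    """Choose the cut-off maximizing Youden's J = sensitivity + specificity - 1.
    values: test results (e.g., peak betaDG in pg/mL); labels: 1 = ocular
    candidiasis, 0 = none. Prediction rule: value > cutoff -> positive."""
    pos = [v for v, y in zip(values, labels) if y == 1]
    neg = [v for v, y in zip(values, labels) if y == 0]
    best_j, best_cut = -1.0, None
    for cut in sorted(set(values)):
        sens = sum(v > cut for v in pos) / len(pos)
        spec = sum(v <= cut for v in neg) / len(neg)
        if sens + spec - 1 > best_j:
            best_j, best_cut = sens + spec - 1, cut
    return best_cut, best_j

def kaplan_meier(times, events):
    """Kaplan-Meier estimate of the probability of remaining undiagnosed.
    times: days from onset to diagnosis or censoring; events: 1 = diagnosed."""
    s, curve = 1.0, []
    for t in sorted(set(times)):
        d = sum(1 for ti, ei in zip(times, events) if ti == t and ei == 1)
        n = sum(1 for ti in times if ti >= t)   # still at risk at time t
        if d:
            s *= 1 - d / n
            curve.append((t, s))
    return curve

# Illustrative (hypothetical) values only:
cut, j = youden_cutoff([50, 60, 90, 120, 150, 200], [0, 0, 0, 1, 1, 1])
km = kaplan_meier([2, 3, 3, 5, 8], [1, 0, 1, 1, 0])
```

On the toy data the chosen cut-off separates the groups perfectly (J = 1), and the Kaplan−Meier curve steps down at each day on which a diagnosis is made, with censored patients removed from the risk set thereafter.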
Patients were grouped according to 30-day mortality, and their backgrounds, including complications of ocular candidiasis, were compared (Table 4). In univariate analysis, female sex, unremoved CVC, persistent BSI, ocular candidiasis, and βDG values were statistically significant, and in multivariate analysis, unremoved CVC (OR, 17.76; 95% CI, 2.18-144.38) was detected as an independent risk factor for 30-day mortality (Table 5). Though ocular candidiasis was not an independent risk factor for 30-day mortality, the mortality rate for patients with ocular candidiasis was 37.0%, compared with 14.8% for patients without ocular candidiasis. Multivariate analysis was performed using items that were statistically significant (p < 0.05) in univariate analysis; unremoved CVC remained statistically significant (p < 0.05) in multivariate analysis. Discussion Ocular candidiasis is a critical complication of candidemia. Because ocular candidiasis can lead to blindness, close attention should be paid to this complication of candidemia, and ophthalmologic evaluation is strongly recommended [14]. The purpose of this study was to propose a more effective and comprehensive treatment strategy for candidemia by identifying the risk factors for ocular candidiasis in patients with candidemia and how those risk factors correlate with prognosis, as well as by determining the appropriate time for an ophthalmologist to perform a fundus examination. For most cases in the present study, ocular candidiasis was diagnosed within 7 days after the onset of candidemia. In some cases, however, the fundus findings became apparent only 7-14 days after onset, so a single early examination would have missed them. Furthermore, in the present study, the independent risk factors for ocular candidiasis were as follows: isolation of C. albicans, an unremoved catheter after the onset of candidemia, and a high βDG value. 
Although not statistically significant, mortality tended to be higher in cases complicated by Candida eye lesions. The incidence of ocular candidiasis has been significantly reduced over the past few decades owing to the promotion of appropriate antifungal drug use [15]. Nonetheless, previous studies have reported incidences of ocular candidiasis ranging from 2 to 46% [10,12,16,17]. In the present study, the complication rate of ocular candidiasis was 25%, which did not differ significantly from previous reports. We presented patients with candidemia who had complained of ocular symptoms and who had complications of advanced vitritis, which suggests that subjective ocular symptoms should always be checked in cases of candidemia. Here, we must reiterate that the basic policy is to perform ophthalmologic examinations, because there are cases in which patients do not complain of ocular symptoms, such as those admitted to the ICU. Candida eye lesions can be divided into either vitritis or retinitis depending on the depth of the lesion. The condition commonly referred to as endophthalmitis corresponds to vitritis. It is important for a clinician to distinguish between these two types of inflammation, as they may lead to different treatment strategies. Vitritis is sometimes treated surgically or by intravitreous injection of antifungal drugs. In the present study, only one case was treated by intravitreous injection [14]. With respect to the selection of antifungal agents, it is necessary to select agents with demonstrated penetration into the vitreous. Echinocandins are not recommended for use in patients with ocular lesions because of poor vitreous penetration, although there are reports that they can be expected to reach the chorioretina [18][19][20]. Early ophthalmologic consultation within 7 days of onset is recommended, although some reports indicate that diagnoses have been made after the 7th day [9,10]. 
In the present study, 23 cases were diagnosed within 7 days of onset, while 3 cases were diagnosed at the second examination 7-14 days after onset, having had no findings at the first examination. Of the three cases, the case with the most slowly developing ocular lesions was characterized by severe neutropenia. Some reports have recommended that patients with neutropenia undergo ophthalmologic examination after their neutrophil counts have recovered [21,22]. Repeat fundus examinations are more important in neutropenic patients because these cases are less likely to mount an immune response, which could delay the manifestation of ocular lesions. We have shown that isolation of C. albicans is an independent risk factor for ocular candidiasis. This result is similar to previous reports [12,22,23]. Abe et al. reported a strong association between C. albicans and ocular candidiasis owing to its greater capacity for invasion, induction of inflammatory mediators, and recruitment of both neutrophils and inflammatory monocytes [24]. In addition, C. albicans also accounted for a major proportion of all cases in this study. Although there are some reports of an increase in non-albicans Candida, C. albicans remains the most frequently detected organism at our facility [25]. It has been reported that C. parapsilosis and C. glabrata are less likely to cause ocular candidiasis [12,22]. In our study, these results were not statistically significant because of the small number of cases included. Continued studies are needed for further investigation. Furthermore, the emergence of fluconazole-resistant C. albicans and of echinocandin-resistant strains due to FKS gene mutations has been reported [26,27]. As mentioned above, however, no strains suspected of being highly drug-resistant were observed at our hospital. This result suggests that drug resistance has not contributed to the complications of ocular lesions at our institution. 
Monitoring the distribution of Candida species isolated at each facility, together with antifungal drug susceptibility, is important from a therapeutic standpoint, such as in the selection of antifungal drugs for the overall treatment of candidemia. The majority of candidemia is caused by catheter-related bloodstream infection (CRBSI), and indwelling CV catheters are one of the most important risk factors [3]. Therefore, the prompt removal of catheters is strongly recommended in cases of candidemia. In this study, we showed that not removing catheters increased the risk of developing ocular candidiasis. Candida forms biofilms on catheters, and it is known that within biofilms, antifungal drugs do not reach concentrations sufficient to exert an antifungal effect [28][29][30]. It is notable that a higher percentage of CV ports was found among the catheters that were not removed. Generally, CV ports tend not to be removed immediately after the onset of CRBSI, in comparison with CVCs, because their removal requires a surgical procedure. Early removal of catheters in patients with candidemia should be recommended because it not only improves prognosis but also may contribute to a lower incidence of ocular candidiasis. βDG is an adjunct diagnostic marker for fungal infections and is sometimes used to determine treatment efficacy [31]. The relationship between a high βDG value and ocular candidiasis has been reported, but few reports have examined the association between βDG levels and the time from the onset of candidemia to the diagnosis of ocular candidiasis [12]. In the present study, we identified a high βDG level as an independent risk factor for ocular candidiasis. The timing of ophthalmology consultations was not standardized in this study, which is a limitation, but the Kaplan−Meier curve showed that the proportion of cases diagnosed with ocular candidiasis increased over time in the group with higher βDG values. 
This result indicates that repeat ophthalmologic examinations may help with the diagnosis of ocular candidiasis in cases of candidemia with high βDG values. Though complication by ocular candidiasis was not extracted as a statistically independent risk factor for 30-day mortality, patients with ocular candidiasis experienced a higher mortality rate than uncomplicated cases. The patients excluded from the study because they had no ophthalmologic examination included cases of early death in the course of the disease, so the correlation between ocular involvement and prognosis may not have been fully evaluated. In fact, the 30-day mortality rate in the 138 cases, including patients with no ophthalmologic examination, was 32.3%, whereas the 30-day mortality rate for the 108 cases ultimately included in the study was 20.4%. Previous reports have also found higher mortality in cases with ocular lesions; although the ocular lesions themselves may not affect the prognosis, the severity of disease implied by disseminated lesions is expected to. Careful management of patients with ocular candidiasis is desirable because of the risk of deterioration in their general condition. We were unable to extract any factors that would make ocular candidiasis less likely to occur in this analysis. Therefore, we recommend that ophthalmological examination be performed in all cases presenting with candidemia. Even after a negative blood culture is confirmed, ophthalmological examination should be repeated in cases of candidemia with ocular candidiasis, and if new lesions have appeared or the lesions do not improve, the antifungal drug should be changed on the basis of tissue penetration and drug sensitivity, and treatment should be continued until the fundus findings improve. 
Conclusions Repeated searches for ocular lesions by ophthalmological examination are an essential strategy in cases of candidemia; we recommend that the first examination occur within 7 days of onset and that a second be scheduled 7-14 days after onset. Furthermore, it is also necessary to check constantly for the presence of subjective eye symptoms. Particular attention should be paid to complications of ocular lesions in cases with the following factors: isolation of C. albicans, an unremoved CVC, and a high level of βDG. Since the prognosis may be poor in cases with ocular involvement, we also recommend careful systemic management of those patients. Author Contributions: T.S. and K.G. designed the experiments, conducted the main experiments, and prepared the original draft; K.G., C.T. and H.W. supervised and revised the manuscript; T.S. and K.H. analyzed the data. All authors have read and agreed to the published version of the manuscript. Funding: The authors have no external support or funding to report. All work was funded by the departmental research budget of Kurume University. Institutional Review Board Statement: All studies described herein were approved by the Human Ethics Review Boards of Kurume University (21123). Informed Consent Statement: Informed consent was obtained from all subjects involved in the study. Data Availability Statement: Not applicable.
2022-05-13T15:09:36.732Z
2022-05-01T00:00:00.000
{ "year": 2022, "sha1": "45045cb28e15d407a759ce63c1e2e62f83df87b8", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2309-608X/8/5/497/pdf?version=1652271897", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "396734414112aceb375c2e793d1bb96248a9eeed", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [] }
17739163
pes2o/s2orc
v3-fos-license
Automating Quality Metrics in the Era of Electronic Medical Records: Digital Signatures for Ventilator Bundle Compliance Ventilator-associated events (VAEs) are associated with increased risk of poor outcomes, including death. Bundle practices including thromboembolism prophylaxis, stress ulcer prophylaxis, oral care, and daily sedation breaks and spontaneous breathing trials aim to reduce rates of VAEs and are endorsed as quality metrics in intensive care units. We sought to create electronic search algorithms (digital signatures) to evaluate compliance with ventilator bundle components as the first step in a larger project evaluating the effect of the ventilator bundle on VAEs. We developed digital signatures of bundle compliance using a retrospective cohort of 542 ICU patients from 2010 for derivation and validation, and tested signature accuracy in a cohort of 100 randomly selected patients from 2012. Accuracy was evaluated against manual chart review. Overall, the digital signatures performed well, with a median sensitivity of 100% (range, 94.4%–100%) and a median specificity of 100% (range, 99.8%–100%). Automated ascertainment from electronic medical records accurately assesses ventilator bundle compliance and can be used for quality reporting and research on VAEs. Introduction Patients who receive mechanical ventilation are at high risk of complications and poor outcomes including death [1]. To effectively manage these high-risk patients, providers are encouraged to put in place best-practice "bundles" addressing the use of deep vein thrombosis (DVT) prophylaxis, peptic ulcer prophylaxis, oral hygiene, elevation of the head of the bed, daily sedation holiday, and daily spontaneous breathing trial [2]. The ventilator bundle has formed the backbone of many quality improvement efforts and metrics for intensive care units, though its impact on patient outcomes remains uncertain [3]. 
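The accuracy evaluation summarized in the abstract above (digital signature output compared against manual chart review) reduces to a 2×2 confusion matrix per bundle element; a minimal sketch (function name and sample data are illustrative, not the study's):

```python
def signature_accuracy(predicted, reference):
    """Sensitivity and specificity of a digital signature against manual
    chart review. predicted/reference: parallel lists of booleans, one per
    patient (True = bundle element compliant)."""
    tp = sum(p and r for p, r in zip(predicted, reference))          # both positive
    fn = sum((not p) and r for p, r in zip(predicted, reference))    # missed by signature
    tn = sum((not p) and (not r) for p, r in zip(predicted, reference))
    fp = sum(p and (not r) for p, r in zip(predicted, reference))    # false alarm
    return tp / (tp + fn), tn / (tn + fp)  # (sensitivity, specificity)

# Illustrative only: 4 patients, signature agrees with manual review on 3.
sens, spec = signature_accuracy([True, True, False, False],
                                [True, False, False, False])
```

In this toy example the signature catches the one truly compliant patient (sensitivity 1.0) but flags one non-compliant patient as compliant (specificity 2/3); the study reports per-element figures of this kind.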
In 2011, the CDC/NHSN proposed a new approach to surveillance encompassing a broader range of ventilator complications termed ventilator-associated events (VAEs) [4]. We sought to investigate whether compliance with ventilator bundle practices effectively reduces the risk of the broader set of VAEs and to evaluate the relative contribution of each bundle element to patient outcomes. In order to accomplish this, we needed to develop a reliable strategy for assessing bundle compliance for a large number of patients in an efficient manner. Manual chart review is the "gold standard" of retrospective studies. However, it is time-consuming, prone to error, resource intensive, and not feasible for large sample sizes. The recent development of information technology and the widespread use of electronic medical record (EMR) systems [5] make automated electronic chart search strategies, or "digital signatures," an attractive alternative. A digital signature can also be translated into real-time automated algorithms or "sniffers," where the same rules that were used to retrospectively search charts electronically can give real-time or near-real-time reports and alerts to improve patient care [6]. This study aimed to develop and validate digital signatures for each part of the ventilator bundle, including DVT prophylaxis, peptic ulcer prophylaxis, oral care, head of bed elevation, and sedation breaks. Materials and Methods We designed this study as a retrospective study with both derivation and validation cohorts ascertained from intensive care unit patients. The Mayo Clinic Institutional Review Board approved the study as minimal-risk research with waived informed consent. Study Population. We used a retrospective cohort of 1000 randomly selected patients who were admitted to the intensive care unit (ICU) for at least two consecutive days during 2010 to form our derivation cohort. Of these, 542 met our study inclusion criteria, including two consecutive days of mechanical ventilation and research authorization.
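As an illustration of what such a digital signature encodes, a minimal sketch of a DVT-prophylaxis rule ("any qualifying anticoagulant administered within a 24-hour window") is given below. The medication names and the record layout here are hypothetical assumptions; the study's actual drug lists and datamart schema are described in its Table 1 and reference [7].

```python
from datetime import datetime, timedelta

# Hypothetical example agents; the study's actual formulary list is in its Table 1.
DVT_PROPHYLAXIS_AGENTS = {"heparin", "enoxaparin", "dalteparin", "warfarin", "fondaparinux"}

def dvt_prophylaxis_compliant(med_records, window_start):
    """Return True if any qualifying anticoagulant was administered
    (any dose) within the 24-hour window starting at window_start.

    med_records: iterable of (drug_name, administration_time) tuples,
    an assumed simplification of a medication administration record.
    """
    window_end = window_start + timedelta(hours=24)
    return any(
        drug.lower() in DVT_PROPHYLAXIS_AGENTS and window_start <= t < window_end
        for drug, t in med_records
    )

day2 = datetime(2010, 3, 1, 0, 0)
records = [("Enoxaparin", datetime(2010, 3, 1, 8, 30)),
           ("pantoprazole", datetime(2010, 3, 1, 9, 0))]
print(dvt_prophylaxis_compliant(records, day2))  # True
```

The point of encoding the rule over administration events, rather than over checkbox documentation, is the same one made later in the paper: the signature captures whether the agent was actually given.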
Our derivation cohort included both ventilated and nonventilated patients to ensure we would have an adequate number of both "true positive" and "true negative" compliance examples for each element of the bundle while adjusting our search strategy. We then validated the electronic data extraction strategy in an independent cohort of 100 randomly selected patients who were mechanically ventilated for at least two consecutive days in 2012. The purpose of selecting mechanically ventilated patients from two different years was to better assess the performance of the strategy. Patients aged < 18 years or without research authorization were excluded. Electronic Data Extraction. To develop the electronic data extraction strategy, we utilized data from a custom integrative relational research database that contains a near-real-time copy of clinical and administrative data from the electronic medical record (EMR). The Multidisciplinary Epidemiology and Translational Research in Intensive Care (METRIC) datamart accumulates pertinent vital signs, fluid input/output, and medication administration record data within an average of 15 minutes of its entry into the EMR and serves as the main data repository for data rule development. More detailed structures and contents have been previously published [7]. For each bundle element, we iteratively improved the accuracy of our electronic query using the derivation cohort (Figure 1: flow chart). In all iterations, we calculated and analyzed sensitivity and specificity compared to the reference standard and examined discordant pairs for data which could be used to improve the electronic search accuracy. Once we achieved acceptable sensitivity and specificity, we validated our queries in another independent cohort and calculated the final sensitivity and specificity of our digital signatures. The final electronic queries for each ventilator bundle component are presented in Table 1. Reference Standard.
The reference standard was defined as the agreement between manual and electronic data extraction. A trained investigator (LH), who was blinded to the electronic data extraction result, performed comprehensive medical record review to identify the presence or absence of each component of the ventilator compliance bundle according to the predefined definitions (Table 1) between 00:00 and 23:59 on ICU day 2 in the derivation cohort and mechanical ventilator day 2 in the validation cohort. In case of disagreement between manual and electronic data extraction, a third independent investigator (JCO), who was blinded to both results, made the final adjudication; this definition has been previously used [8]. Table 1: Bundle components and definitions. The "medical definition" refers to the objective of the bundle element. The "EMR definition" is how we operationalized this for our digital signature. The "EMR section used" refers to the portions of the patient chart searched with the digital signature for the bundle element. DVT prophylaxis: medical definition, the presence of an appropriate anticoagulant within a 24-hour period; EMR definition, the systemic administration of one of the specified medications within 24 hours regardless of dosage used; EMR sections used, medication administration record and fluid data. Peptic ulcer prophylaxis: medical definition, the presence of an appropriate acid-inhibitory drug or sucralfate within a 24-hour period; EMR definition, the systemic administration of one of the specified medications within 24 hours regardless of dosage used. Statistical Analyses. We summarized clinical characteristics of the derivation and validation cohorts using mean ± SD for continuous variables and counts with percentages for categorical variables. We calculated the sensitivity and specificity of each electronic data extraction based on the comparison of the test result and the reference standard in the two cohorts.
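The per-element accuracy figures can be reproduced from the 2×2 agreement counts, together with an exact (Clopper-Pearson) binomial confidence interval of the kind commonly reported alongside them. A stdlib-only sketch, using bisection to invert the binomial tail (the counts 17/18 and 82/82 are illustrative examples, not values taken from the study's tables):

```python
from math import comb

def binom_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k + 1))

def _invert(f, target, increasing):
    """Bisect a monotone f on [0, 1] to find p with f(p) = target."""
    lo, hi = 0.0, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if (f(mid) < target) == increasing:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def clopper_pearson(k, n, alpha=0.05):
    """Exact (Clopper-Pearson) confidence interval for a binomial proportion k/n."""
    lower = 0.0 if k == 0 else _invert(lambda p: 1 - binom_cdf(k - 1, n, p), alpha / 2, True)
    upper = 1.0 if k == n else _invert(lambda p: binom_cdf(k, n, p), alpha / 2, False)
    return lower, upper

def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# e.g. 17 of 18 true positives detected: sensitivity 94.4%
sens, spec = sensitivity_specificity(tp=17, fn=1, tn=82, fp=0)
print(round(100 * sens, 1), clopper_pearson(17, 18))
```

In practice one would use a library routine (e.g. SciPy's binomial-test confidence interval), but the bisection makes explicit what "exact test for proportions" computes.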
The 95% confidence intervals were calculated using an exact test for proportions. JMP statistical software (version 9.0, SAS Institute Inc.) was used for all data analysis and randomization. Results and Discussion The derivation subset included a total of 542 ICU patients randomly selected from January 2010 to December 2010. The validation subset included a total of 100 randomly selected patients from January 2012 to December 2012. There were no differences in age, gender, or race between the two groups. The demographic and baseline characteristics of the derivation and validation subsets are summarized in Table 2. The sensitivities of the five ventilator bundle components ranged from 92% to 100% in the derivation subset in our final iteration. The specificities ranged from 50% to 99.8% after modification. Elevation of the head of the bed was the one bundle element that could not be improved to an adequate sensitivity or specificity because of variable and inconsistent charting. We thus decided not to validate this query and did not test it in our validation cohort. When examining the validation cohort, the sensitivities of our digital signatures ranged from 94.4% to 100%, and specificity was 100% for each (Table 3). Manual chart review was slow, requiring our reviewers to access two or more programs to abstract the relevant data from the EMR, taking an average of 10 minutes per patient. We achieved comparable results with electronic data abstraction, which will allow us to scan compliance of thousands of patients in a reasonable time frame for the second part of our study, an assessment of the effect of ventilator bundle compliance on the risk of developing a VAE. With the widespread adoption of EMRs, the digital signature is an increasingly attractive alternative to manual chart review. Digital signatures have several advantages. First, they are more efficient, making larger-scale cohort studies practical without significant personnel or time expenditure.
Second, in developing them, we can look for markers of specific activities that correlate with actual patient outcomes and thus mitigate some types of reporting bias. For example, our DVT prophylaxis signature looks for times when one of the commonly used agents was actually administered, as opposed to asking staff to fill out a checkbox saying that "DVT prophylaxis has been addressed." More broadly, this kind of search allows automated searching beyond simple billing codes and administrative data, which are notoriously variable in accuracy [9][10][11][12]. Finally, digital signatures have the potential to be translated into real-time electronic search algorithms, or "sniffers," to provide near-real-time data. For example, the same rules that we used to develop our peptic ulcer prophylaxis signature could provide real-time data on compliance, use, and misuse. Sniffers are increasingly prevalent, though a recent systematic review highlighted issues with variable performance and accuracy owing in part to inadequate validation [13]. As we noted in our effort to derive and validate a signature for head of bed elevation, variability in documentation practice may limit the ability to derive a clinically useful digital signature; however, emerging automatic documentation technology could help overcome this limitation. An interesting feature we noted in our validation cohort was higher diagnostic performance than in our derivation cohort. As our derivation cohort was what we used to derive the search, we expected to be "overfitted" to that set and to lose both sensitivity and specificity as we moved to another cohort. However, we instead noted improvement. This probably owes to improvements made in the ICU datamart's accuracy over time, as our derivation cohort was drawn from archived data from 2010, and validation used the same rules on data from 2012.
We noted better agreement between datamart and EMR data in the more recent set and thus better performance with our rules-based signatures. With reasonable search algorithms, this allows us to move forward and evaluate the efficacy of specific ventilator bundle elements in preventing VAE. A previous study at our institution using pre- and post-bundle implementation measures found no effect, but that study was an ecological design and was not able to evaluate individual patient bundle compliance [3]. With these signatures, we will be able to give a higher-resolution evaluation of the effect of the ventilator bundle. We can also work towards developing real-time compliance monitoring of the ventilator bundle for quality improvement purposes, aiming to indirectly improve care and reduce costs with passive monitoring of value-adding practices. Our study also has several limitations. First, as noted above, we are limited by what is electronically documented and by the accuracy of initial inputs. Second, preferred medications and formularies differ between hospitals, and while our digital signature may be a starting point for other hospitals attempting something similar, calibration and validation would be necessary to generalize this elsewhere. Finally, the single-center, academic nature of our institution could raise the concern of referral bias and further limit the generalizability of our approach. Conclusion The digital signatures used to extract and screen the usage of ventilator-associated pneumonia bundle elements were both sensitive and specific for DVT prophylaxis, peptic ulcer prophylaxis, daily sedation break, and oral care. We were not able to derive a similarly useful signature for head of bed elevation. These signatures have acceptable sensitivity and specificity for use in our larger study of the impact of the ventilator bundle on risk of VAE.
2018-04-03T01:45:57.740Z
2015-06-08T00:00:00.000
{ "year": 2015, "sha1": "599a4d6a75c9e8f69815f77cf32f008970c7f04f", "oa_license": "CCBY", "oa_url": "http://downloads.hindawi.com/journals/bmri/2015/396508.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "0c08ec7474da1ee54e71a4cd78681a234a8268d0", "s2fieldsofstudy": [ "Computer Science", "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
260807621
pes2o/s2orc
v3-fos-license
Handgrip strength associates with effort-dependent lung function measures among adolescents with and without asthma Studies have shown an association between handgrip strength (HGS) and FEV1, but the importance of this in relation to asthma pathophysiology and diagnostics remains unclear. We investigated the relationship between HGS and lung function metrics and its role in diagnosing asthma. We included 330 participants (mean age: 17.7 years, males: 48.7%) from the COPSAC2000 cohort and analyzed associations between HGS, asthma status, spirometry measures (FEV1, FVC, MMEF, FEV1/FVC), airway resistance (sRaw), methacholine reactivity (PD20) and airway inflammation (FeNO). Finally, we investigated whether HGS improved FEV1 prediction and classification of asthma status. HGS was only associated with forced flows, i.e., positive association with FEV1 and FVC for both sexes in models adjusted for age, height, and weight (P < 0.023). HGS improved adjusted R2-values for FEV1 prediction models by 2–5% (P < 0.009) but did not improve classification of asthma status (P > 0.703). In conclusion, HGS was associated with the effort-dependent measures FEV1 and FVC, but not with airway resistance, reactivity, inflammation or asthma status in our cohort of particularly healthy adolescents, which suggests that the observed associations are not asthma specific. However, HGS improved the accuracy of FEV1 estimation, which warrants further investigation to reveal the potential of HGS in asthma diagnostics. Objective assessment at age 18 years. Handgrip strength. Data collection was performed according to the American Society of Hand Therapists guidelines 17. A DHD-1 digital hand dynamometer (SH1001; Saehan, Changwon, Korea) was used for measuring HGS. The dynamometer was set to position 2.
The participants were seated in a chair in an upright position with the elbow at a 90° angle and the wrist in a neutral position with a maximal extension of 30°. COPSAC physicians demonstrated these positions to the participants and gave oral instructions on how to perform the measurements. Participants were instructed to squeeze the dynamometer as hard as they could and were verbally encouraged during the procedure. Three measurements were conducted on each hand with a change of hand between each measurement. A maximal variation of 10% between attempts on each hand was accepted. Hand dominance was recorded, and the average of the three measurements on the dominant hand was used as the measure of HGS. Lung function measurements. Spirometry: FEV1, FVC and MMEF were measured by spirometry in accordance with the ERS/ATS international guidelines using the MasterScope Pneumoscreen (Erich Jäeger, Würzburg, Germany) 14. FEV1, FVC and the FEV1/FVC ratio are used to measure airway obstruction. Whole-body plethysmography: sRaw, which is a measure of airway resistance, was assessed by whole-body plethysmography with the MasterScope Bodybox (Erich Jäeger, Würzburg, Germany) as previously described in detail 18. Methacholine challenge was done using the Vyntys APS Pro (CareFusion, 234 GmbH, Germany) 19. The dose started at 36 μg with stepwise increases of 36 μg until 144 μg, after which the dose was doubled until a final dose of 9216 μg. A three-point logistic regression model was used to estimate, from the dose-response curves, the cumulative methacholine dose that caused a 20% drop in FEV1 from baseline (PD20) 20. The methacholine challenge is a bronchoprovocation test that assesses airway hyperreactivity. A low PD20 indicates hyperreactive airways.
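The study estimated PD20 with a three-point logistic fit of the dose-response curve. A simpler, widely used alternative is log-linear interpolation between the last cumulative dose producing less than a 20% FEV1 fall and the first producing at least 20%. The sketch below implements that simplified approach; the dose-response numbers are invented for illustration and are not study data.

```python
from math import log, exp

def pd20_loglinear(cum_doses, fev1_drops):
    """Estimate PD20 by log-linear interpolation between the last
    cumulative dose with < 20% FEV1 fall and the first with >= 20%.
    (A simplified alternative to the study's three-point logistic fit;
    assumes the response series starts below a 20% fall.)

    cum_doses: cumulative methacholine doses (micrograms), increasing.
    fev1_drops: % fall in FEV1 from baseline at each dose.
    """
    points = list(zip(cum_doses, fev1_drops))
    for (d1, r1), (d2, r2) in zip(points, points[1:]):
        if r1 < 20 <= r2:
            return exp(log(d1) + (log(d2) - log(d1)) * (20 - r1) / (r2 - r1))
    return None  # 20% fall never reached: not hyperreactive at the maximum dose

print(pd20_loglinear([36, 72, 108, 144, 288], [2, 5, 9, 14, 26]))  # ≈ 203.6 µg
```

Interpolating on the log-dose scale reflects the roughly log-linear shape of methacholine dose-response curves; returning None mirrors the clinical reading that no PD20 exists below the final dose.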
FeNO level was measured in duplicate using the CLD 88 sp (Eco Medics, DX0256, Switzerland), in accordance with standard operating procedures 21. FeNO is a measure of exhaled nitric oxide, which serves as a biomarker for eosinophilic airway inflammation. High FeNO levels indicate the presence of inflammation. Categorical data are presented as total number and percentage. Comparisons between subgroups were done using Student's t-test, the Wilcoxon rank sum test and the Chi-squared test. The associations between HGS and continuous outcomes were examined using linear regression models and log-linear regression models, whereas logistic regression was used for binary outcomes. HGS analyses are usually adjusted for sex, age, height, and weight 24, and as these are also associated with lung function, they were chosen a priori to be included in the models. For sensitivity analyses, we further investigated the impact of a wide variety of covariates as potential confounders, including in the models those covariates that were associated with both HGS and outcomes. The impact of asthma on the associations between HGS and lung function was examined by (1) adjusting the models for asthma status, (2) analyzing the data stratified by asthma status, and (3) investigating for interaction by adding a cross-product term to the models. The value of HGS for improving standard prediction of FEV1 and FVC using age, height, weight and asthma was assessed using adjusted R2-values and ANOVA tests. The value of HGS for improving classification of asthma status based on FEV1 was assessed using receiver operating characteristic (ROC) curves comparing areas under the curve (AUC) from logistic regression models with vs. without inclusion of HGS. The statistical analyses were performed as complete case analyses using two-tailed tests. A P-value ≤ 0.05 was considered significant. All statistical analyses were done using R statistical software version 4.0.2.
Results Baseline characteristics. A total of 370 (90%) of the 411 participants in the COPSAC2000 cohort completed the 18-year follow-up visit. Of these 370 participants, 330 (80%) were included, as they had measurements of HGS and at least one lung function outcome measure. The study population was primarily Caucasian (N = 318, 96%), the median age was 17.6 years (IQR 17.4–17.9), and 161 (49%) were males. The mean HGS was 42.7 kg (SD 8.8) for males and 27.2 kg (5.4) for females, with a sex difference (P < 0.001). Baseline characteristics and lung function results are outlined in Table 1 and Supplementary Table e1, and the correlations between HGS measures are shown in Supplementary Figure e1. The 330 included vs. 81 excluded participants had better social circumstances and a higher prevalence of allergic rhinitis (Supplementary Table e2). Predictors of HGS are outlined in Supplementary Table e3. Among these potential sex-specific differences in HGS predictors, there were interactions between sex and body fat percentage, body fat mass, muscle mass, and fitness (P-interactions < 0.05). We thereafter investigated whether the predictors of HGS were associated with any of the outcome measures. Muscle mass, muscle percentage, body fat mass, body fat percentage and fitness were associated with one or more lung function outcomes (Supplementary Table e4). Impact of asthma on the association between HGS and lung function. There was no association between HGS and asthma status for either the entire study group or among males or females (β-estimate asthma vs.
no asthma, all: −0.23 kg, −1.88 to 1.42, P = 0.784; males: −0.18 kg, −3.025 to 2.659, P = 0.899; females: −0.20 kg, −1.968 to 1.578, P = 0.829). Further, adding asthma as a covariate to the models did not substantially change the findings, i.e., there were still positive associations between HGS, FEV1 and FVC (Supplementary Table e8). Finally, there was no interaction between HGS and asthma status for FEV1 or FVC in the entire study group or among females, whereas among males there was a trend towards interaction for FEV1 (P-interaction = 0.075) but not FVC (P-interaction = 0.118) (Table 3). HGS for predicting lung function and classifying asthma status. Adding HGS to a model predicting FEV1 consisting of age, height, weight and asthma raised the adjusted R2 by 0.02 (P < 0.001) in the entire study group, 0.03 (P = 0.009) among males and 0.05 (P < 0.001) among females, suggesting that 2%, 3% and 5% of FEV1 variation is explained by HGS, respectively. The findings were similar for FVC (Table 4). Discussion This study provides a thorough examination of the association between HGS and measures of asthma pathophysiology among 18-year-old adolescents with and without asthma. HGS was positively associated with the effort-dependent measures FEV1 and FVC among both sexes but was not associated with any other asthma endpoints in our cohort of predominantly healthy adolescents, suggesting that the observed associations are not asthma specific. However, adding HGS to the standard spirometry prediction equation improved the accuracy of FEV1 estimation, which warrants further investigation to reveal the potential of HGS in asthma diagnostics.
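The adjusted-R² increments quoted above come from comparing nested models, where adjusted R² penalizes the ordinary R² for the number of predictors p fit to n observations. A minimal sketch of the formula; the R² values plugged in below are hypothetical, chosen only to show how a ~0.02 increment arises, and are not the study's fitted values.

```python
def adjusted_r2(r2, n, p):
    """Adjusted R^2 = 1 - (1 - R^2)(n - 1)/(n - p - 1)
    for a model with p predictors fit to n observations."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# Illustrative (hypothetical) numbers for n = 330 adolescents:
base = adjusted_r2(0.60, n=330, p=4)       # age, height, weight, asthma
with_hgs = adjusted_r2(0.625, n=330, p=5)  # ... + handgrip strength
print(round(with_hgs - base, 3))  # 0.024
```

The penalty term means a fifth predictor must raise the raw R² by more than chance alone would to raise the adjusted value, which is why the increment is paired with an ANOVA test of the nested models.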
We found that HGS was positively associated with FEV1 and FVC in both males and females. This aligns with some previous studies 9,11,25, whereas others found no association 26,27 or conflicting results 28. The ambiguous results in the literature might be due to differences in study populations, where the association seems most robust in healthy, young populations 9,11 rather than in populations consisting of aging 27 or ill people 26,28,29, which also fits with our findings in a largely healthy cohort of 18-year-olds. We found a trend of association between HGS and MMEF and no association with the FEV1/FVC ratio, which contradicts previous studies in healthy subjects 11,30. MMEF is more affected by small airway obstruction than FEV1 31,32, which may explain the discrepancy between the current literature and our results, as our cohort consists of adolescents who are predominantly healthy or have mild to moderate asthma. To our knowledge, no previous studies have investigated the relationship between HGS and sRaw, PD20 or FeNO. There was no association between HGS and any of these endpoints, including asthma status, which suggests that the associations between HGS, FEV1 and FVC are not driven by underlying chronic illness of the airways, but are perhaps rather an overall illustration of muscle strength. This was supported by the fact that adjusting the associations between HGS and spirometry for asthma did not affect the estimates substantially. Our subgroup analyses of participants with vs. without asthma showed the strongest association between HGS, FEV1 and FVC among participants without asthma, but there was no interaction between HGS and asthma status in relation to FEV1 and FVC.
HGS was not a marker of asthma status in our study group of 18-year-old participants who were predominantly healthy or had mild to moderate asthma, which is in contrast to the sparse existing literature 4,5. Our cohort has been followed prospectively since one month of age at the COPSAC clinic, and therefore we might have diagnosed more mild asthma cases and have very well-controlled asthma patients in the cohort. This might affect a possible association between asthma and HGS, although this is speculative. A plausible mechanism behind the associations between HGS and the effort-dependent lung function measures is that HGS might be a surrogate marker of respiratory muscle strength, which might be why it is only associated with the effort-dependent spirometry measures in our study. Previous studies have shown an association between HGS and respiratory muscle strength measured as the Maximal Expiratory Pressure (MEP) and Maximal Inspiratory Pressure (MIP) 33,34. To our knowledge, relatively few studies have examined the relationship between MEP, MIP and spirometry indices, and they found that MEP and MIP were associated with increased spirometry indices 34,35.
Even though the relationship between HGS, FEV1 and FVC seems independent of asthma status in our study, HGS may be utilized to improve the estimation of lung function, which is usually based on anthropometrics including sex, age, height, ethnicity and sometimes weight 36. The results of our study showed that adding HGS to the prediction equations for FEV1 and FVC gave more accurate predicted values, with a 2–5% increase in explained variance. However, it is possible that this might be due to unaccounted-for confounders or non-linearities. This increase in accuracy did not improve classification of asthma status in our cohort. However, our cohort consisted of predominantly healthy adolescents with few subjects with mostly mild asthma, where FEV1 was not a strong predictor of asthma status. Therefore, there is a need for larger cross-sectional studies to examine whether the use of HGS could improve the diagnostics of asthma. Interestingly, we observed several differences between males and females regarding factors associated with HGS, primarily within body composition and fitness. Body composition has previously been associated with HGS 37-40, which aligns with our results showing an association between muscle mass and HGS. This study is strengthened by the single-center setup where all objective measurements were done by trained professionals strictly following standard operating procedures. The participants of the COPSAC2000 cohort have partaken in examinations repeatedly from birth until age 18 years and are therefore highly competent in performing lung function tests, which assured a high completion rate. The examination of the association between HGS and several measures of lung function, airway resistance, reactivity, and inflammation adds to the literature, as former studies have solely investigated the relationship between HGS and spirometry indices 9-12,27.
Further, the extensive exposure information made it possible to examine determinants of HGS and sex differences, and to delineate potential confounders of the relationship between HGS and lung function outcomes. One limitation of the study is the relatively low number of participants compared to some previous studies on HGS and spirometry 10,11, which may have reduced the statistical power, mainly in the subgroup analyses of participants with vs. without asthma. However, we were still able to show a strong relationship between HGS, FEV1 and FVC, in contrast to no association between HGS and measures of airway resistance, reactivity, and inflammation. The high-risk nature of the cohort regarding asthma limits the generalizability of our findings. Further, the age range was limited, and the participants were primarily of Caucasian origin. However, previous studies including participants not solely born to mothers with asthma 11, participants of different age groups 4,10,11, race 9,10, and/or residing in different geographical regions 9,10 found results similar to ours. Conclusion Handgrip strength was associated with the effort-dependent measures FEV1 and FVC but not with airway resistance, reactivity, inflammation, or asthma status. However, adding HGS to the standard prediction equation for FEV1 improved accuracy, which warrants further investigation to reveal the potential of HGS in asthma diagnostics.
Governance. We are aware of and comply with recognized codes of good research practice, including the Danish Code of Conduct for Research Integrity. We comply with national and international rules on the safety and rights of patients and healthy subjects, including Good Clinical Practice (GCP) as defined in the EU's Directive on Good Clinical Practice, the International Conference on Harmonisation's (ICH) good clinical practice guidelines and the Helsinki Declaration. We follow national and international rules on the processing of personal data, including the Danish Act on Processing of Personal Data and the practice of the Danish Data Inspectorate. Predictors including increasing height, total muscle mass, muscle percentage, increased fitness (VO2/kg/min) measured by a step test, and decreased body fat percentage and body fat mass were positively associated with HGS. Table 1. Baseline Characteristics. Data are presented as n (%) for categorical variables, mean (SD) for continuous normally distributed variables, and median (Q25:Q75) for continuous non-normally distributed variables. The Chi-squared test was used for categorical variables, two-sample t-tests for normally distributed continuous variables, and the Wilcoxon rank sum test for non-normally distributed variables. FeNO = Fractional Exhaled Nitric Oxide, FEV1 = Forced Expiratory Volume in 1 second, FVC = Forced Vital Capacity, HGS = Handgrip Strength, MMEF = Maximal Mid-expiratory Flow, PD20 = Provocation Dose of methacholine causing a drop of 20% in FEV1, sRaw = Specific Airway Resistance. Table 2.
Association Between Handgrip Strength, Lung Function, Airway Inflammation and Hyperreactivity. Multiple linear regression was used for continuous outcomes† and multiple log-linear regression was used for log-transformed continuous outcomes‡. All analyses are adjusted for age, height, and weight. Analyses performed on the overall group are further adjusted for sex. Key: FeNO = Fractional Exhaled Nitric Oxide, FEV1 = Forced Expiratory Volume in 1 second, FVC = Forced Vital Capacity, HGS = Handgrip Strength, MMEF = Maximal Mid-expiratory Flow, PD20 = Provocation Dose of methacholine causing a drop of 20% in FEV1, sRaw = Specific Airway Resistance. Table 3. Subgroup Analyses of Asthma versus No Asthma and Interaction Analyses between Handgrip Strength and Asthma. Multivariate linear regression was used for all analyses. Key: FEV1 = Forced Expiratory Volume in 1 second, FVC = Forced Vital Capacity, HGS = Handgrip Strength. Table 5. Classification of Asthma Status Dependent on FEV1 with versus without Handgrip Strength. This table displays the results of ROC analyses classifying asthma status based on FEV1. The AUC was compared for two models in each group, one where FEV1 is adjusted for age, height, and weight and one where FEV1 is adjusted for age, height, weight, and handgrip strength. The P values derive from analyses comparing the two models. Key: AUC = Area under the curve, FEV1 = Forced Expiratory Volume in 1 second, HGS = Handgrip Strength.
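The AUC compared in the ROC analyses of Table 5 can be computed nonparametrically as the Mann-Whitney probability that a randomly chosen case receives a higher model score than a randomly chosen control. A minimal sketch with invented scores (not study data):

```python
def auc_mann_whitney(case_scores, control_scores):
    """AUC = P(score_case > score_control) + 0.5 * P(tie),
    i.e. the Mann-Whitney U statistic scaled to [0, 1]."""
    wins = sum((c > k) + 0.5 * (c == k)
               for c in case_scores for k in control_scores)
    return wins / (len(case_scores) * len(control_scores))

print(auc_mann_whitney([0.9, 0.8, 0.6], [0.7, 0.5, 0.4]))  # ≈ 0.889
```

An AUC of 0.5 corresponds to chance-level discrimination, which is why adding a predictor is judged by whether it shifts the AUC of the nested model significantly.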
2023-08-12T06:17:38.316Z
2023-08-10T00:00:00.000
{ "year": 2023, "sha1": "daada2d26aebf389a9de5cb20c389ce788e66ebb", "oa_license": "CCBY", "oa_url": "https://www.nature.com/articles/s41598-023-40320-4.pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "2b8e9f7ece23f4d8303df6dcd37f73a96708c984", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
55987432
pes2o/s2orc
v3-fos-license
Effects of arbuscular mycorrhizae on growth and mineral nutrition of greenhouse propagated fruit trees from diverse geographic provenances (1) Centre National de Recherche Scientifique et Technologique / Institut de l'Environnement et de Recherches Agricoles, Département Productions Forestières (CNRST/INERA-DPF). Laboratoire de Microbiologie. 03 BP 7047 Ouagadougou 03 (Burkina Faso). E-mail: sbkady@gmail.com (2) Université des Sciences, des Techniques et des Technologies de Bamako. Faculté des Sciences et Technologies. Laboratoire de Recherche en Microbiologie et de Biotechnologie microbienne. BP E3206. Bamako (Mali). (3) Université des Antilles et de la Guyane. Faculté des Sciences exactes et naturelles. Laboratoire de Biologie et Physiologie végétales. BP 592. FR-97150 Pointe-à-Pitre (Guadeloupe, France). INTRODUCTION Sahelian countries are facing rapid degradation of natural resources, resulting in a dramatic reduction in soil fertility and in the provision of food and other ecosystem services. Fruit trees are traditionally and intensively exploited by local people for fruits, seeds, fodder and medicines (Ambé, 2001). They contribute to food security, as they help overcome nutritional problems, and are an important source of revenue for smallholder farmers (Atangana et al., 2001; Akinnifesi et al., 2004; Leakey et al., 2005). For their high nutritive and economic added value, fruit trees are often alternative crops, both in agroforestry systems and in orchards, and as such have become a priority in agronomic research efforts (Leakey et al., 2005; Akinnifesi et al., 2006; Franzel et al., 2007). Among the fruit tree species well adapted to arid and semi-arid regions and commonly used by farmers, néré (Parkia biglobosa [Jacq.] G.Don), tamarind (Tamarindus indica L.)
and jujube (Ziziphus mauritiana Lam.), three multipurpose fruit trees from West Africa, are the most popular. These fruit trees grow slowly in West African soils due to different factors, among which nutrient deficiency, particularly P, and erratic rainfall (Querejeta et al., 2003; Lynch, 2007) have the most impact. Under such conditions, fruit trees largely rely upon arbuscular mycorrhizal (AM) fungi for growth and nutrient uptake (Mathur et al., 2000; Guissou et al., 2001; Kung'u et al., 2008; Fitter et al., 2011; Smith et al., 2011). Arbuscular mycorrhizal establishment extends the plant root system's capacity to explore more water resources in the soil and to cope with stress situations (Mathur et al., 2000; Guissou et al., 2001; Manoharan et al., 2010). Furthermore, prophylactic effects have often been reported, proving in many situations that AM fungi can act as biological control agents by lessening the proliferation and damage caused by pests, insects and soil-borne diseases (St-Arnaud et al., 2005; Chandra et al., 2010; Ozgonen et al., 2010; Jung et al., 2012). The inoculation of 13 fruit trees with an efficient AM fungus isolate, Glomus aggregatum Schenck & Smith emend. Koske, or with a non-efficient AM fungus isolate, Glomus intraradices Schenck & Smith, showed that the jujube fruit tree responded better to AM inoculation than the other fruit trees, regardless of the AM fungus used as inoculum (Guissou et al., 1998; Bâ et al., 2000). These data on mycorrhizal dependency (MD) and mineral nutrition potential have focused on a single provenance of the néré, tamarind and jujube fruit trees, even though the benefits of AM fungi on plant growth can vary widely between plant species, and even between cultivars or species from different geographic provenances (Lesueur et al., 2005; Plenchette et al., 2005; Belay et al., 2013; Sousa et al., 2013). In order to verify the level of variability between néré, tamarind or jujube plants originating from different geographical origins,
each of these fruit trees was inoculated or not inoculated with an efficient AM fungal strain of Glomus aggregatum. Comparative analyses of nutrient uptake and MD measurements were performed on greenhouse propagated species grown in a P-deficient soil. Results are discussed taking into consideration genetic diversity in tree species and provenance influences to optimize large-scale fruit tree production in agroforestry systems. Seeds of each fruit tree species collected from five different geographic locations in Burkina Faso and Senegal (Table 1) were purchased from the Centre National de Semences Forestières (CNSF, Burkina Faso) and the Institut Sénégalais de Recherches Agricoles / Direction de Recherches et de Productions Forestières (ISRA/DRPF, Senegal). Mycorrhizal fungus The AM fungus isolate used in this experiment was Glomus aggregatum Schenck & Smith emend. Koske (isolate IR 27), isolated from the rhizosphere of Acacia mangium Willd. at Dinderesso in Burkina Faso. The fungus was propagated on maize plants (Zea mays L.) grown in pot cultures for 4 months. The AM inoculum consisted of a mixture of sand, spores, hyphae and infected maize root fragments. Twenty g of inoculum, containing approximately 103 infected propagules (Guissou et al., 1998), were added to each pot culture. Non-inoculated (control) pots received the same amount of autoclaved inoculum (120 °C for 20 min) with 10 ml of inoculum water extract collected by a vacuum filtration system.
Pot experiment The seeds were surface scarified and sterilized by immersion in 95% sulphuric acid for 30 min, 45 min, and 10 min for néré, tamarind, and jujube, respectively. The sterilized seeds of the three plant species were rinsed several times in sterile distilled water for 24 h and then aseptically pre-germinated on moist sterilized cotton in Petri dishes at 30 °C until the radicles appeared. Once germinated, they were selected for uniformity before sowing one seedling per cylindrical plastic pot (24 cm height × 7.5 cm diameter). The pots were watered to field capacity and maintained at that moisture level by weighing the pots in the morning and in the afternoon and replenishing the water used (i.e. the difference between morning and afternoon weights). The experiment was arranged in a factorial design with two factors for each fruit tree species separately: 5 provenances × 2 AM treatments (inoculated and non-inoculated [control]). Each of the 10 treatments was set up in a completely randomized design with 10 replicates per treatment combination, for a total of 100 plants for each fruit tree species. The experiment was conducted under nursery conditions and plants were grown under natural light (day length approximately 12 h) at a mean daytime temperature of approximately 35 °C.
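The factorial layout just described (5 provenances × 2 AM treatments × 10 replicates per species) can be enumerated to confirm the plant count; the provenance labels below are placeholders, not the actual location names:

```python
from itertools import product

# Hypothetical labels standing in for the five seed provenances of one species
provenances = ["P1", "P2", "P3", "P4", "P5"]
am_treatments = ["inoculated", "control"]   # 2 AM treatments
replicates = 10                             # pots per treatment combination

# 5 provenances x 2 AM treatments = 10 treatment combinations
combinations = list(product(provenances, am_treatments))
total_plants = len(combinations) * replicates

print(len(combinations), total_plants)  # 10 treatment combinations, 100 plants
```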
Harvest and chemical analysis Plant shoots and roots were harvested separately six months after inoculation. Shoot height, total dry weight (TDW), and root/shoot ratio were measured. Plant material was dried in an oven at 70 °C for seven days and weighed. The compound leaves (leaf blade, petiole, and rachis) were analyzed for N, P and K concentrations. Mycorrhizal dependency (MD) of each provenance of fruit trees was determined by expressing the difference between the TDW of AM plants and the TDW of non-AM plants as a percentage of the TDW of AM plants (Plenchette et al., 1983). To determine the AM fungal colonization rate, randomly sampled roots were collected from each plant, carefully washed with tap water and deionised water to remove adhering soil particles, cut into 1-cm long fragments, and cleared for 1 h in 10% KOH at 80 °C. The cleared roots were then stained with 10% Trypan blue (Phillips et al., 1970). A total of 100 1-cm root pieces per plant were randomly selected, mounted on microscope slides and examined for colonization patterns (40× magnification) using a compound microscope fitted with an eyepiece scale. The AM colonization rate was the colonization intensity, calculated as the length of cortical cells colonized (in mm) by the AM fungi for each root fragment, expressed as a percentage of total root length colonized (McGonigle et al., 1990; Declerck et al., 1996). The P and N concentrations in compound leaves were determined by the molybdate blue method (Murphy et al., 1962) and by colorimetry after Kjeldahl digestion, respectively. The K concentration in compound leaves was determined by means of atomic absorption spectrophotometry (John, 1970).
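The mycorrhizal dependency definition above (Plenchette et al., 1983) amounts to a one-line computation; the dry weights in the example are illustrative values, not measurements from this study:

```python
def mycorrhizal_dependency(tdw_am: float, tdw_non_am: float) -> float:
    """MD (%) = (TDW of AM plants - TDW of non-AM plants) / TDW of AM plants * 100."""
    return (tdw_am - tdw_non_am) / tdw_am * 100.0

# Illustrative total dry weights in grams (not measured values from the paper)
print(round(mycorrhizal_dependency(8.0, 2.0), 1))  # 75.0 -> highly AM-dependent plant
print(round(mycorrhizal_dependency(5.0, 4.5), 1))  # 10.0 -> weakly AM-dependent plant
```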
Data analysis For each fruit tree species, all data were subjected to a two-factor analysis of variance (provenance × AM treatment) using the general linear models procedure of SAS (1990). A threshold of 5% was considered statistically significant. Means of parameters with significant F values were compared using the Fisher protected least significant difference (LSD) test (Steel et al., 1980). RESULTS The analysis of variance revealed that for the interactions between provenance and AM treatment, the level of significance of the measured parameters varied according to the fruit tree (Tables 2, 3 and 4). The results showed that in each of the three fruit tree species, no AM formation was observed in non-inoculated plants regardless of the provenance, thus indicating that no contamination occurred between the different treatments (Table 5). Influence of néré seed provenances on AM response Inoculation of néré plants with the G. aggregatum isolate significantly increased the shoot growth of plants from Néma and Diégoune compared to those from Bazèga, Soumousso and Bankartougou (Table 6). There was no difference in root/shoot ratios among most of the provenances except those of Néma and Soumousso, which were respectively lower and higher than the rest (Table 6). Arbuscular mycorrhizal root colonization levels were uniform regardless of the seed provenance. The N and P concentrations in compound leaves were significantly enhanced by inoculation with G. aggregatum for all provenances. The néré plants from Bazèga showed the highest P and K content compared to other provenances (Table 6). Influence of tamarind seed provenances on AM response In tamarind plants, G.
aggregatum significantly increased the shoot height of plants from Tiénaba. Total dry weight was significantly increased in plants from Tiénaba and Foungioune compared to those from Kongoussi, Sondogtenga and Comin-Yanga (Table 7). Plants from Tiénaba were the most AM-dependent and those from Sondogtenga showed the lowest AM dependence (Table 7). The root/shoot ratio was in general comparable for either AM or non-AM treatments regardless of the plant provenance (Table 5). The AM inoculation significantly increased the N content of tamarind leaves from Foungioune, Kongoussi and Comin-Yanga. P and K contents were significantly higher in tamarind leaves from Kongoussi compared to those of all other provenances (Tables 3 and 7). Table 2. Analyses of variance on main effects and their interaction on shoot height, total dry weight, AM colonization, mycorrhizal dependency and N, P, K concentrations, in relation to the provenance, inoculation and interaction provenance × inoculation of Parkia biglobosa (Fisher's test, p = 5%) -Analyses de variance des effets principaux et leur interaction sur la hauteur des plants, le poids sec total, le taux de mycorhization, la dépendance mycorhizienne et les concentrations des parties aériennes en N, P et K en fonction de la provenance, de l'inoculation et de l'interaction provenance x inoculation de Parkia biglobosa (test de Fisher, DW: dry weight -poids sec). Table 3.
Analyses of variance on main effects and their interaction on shoot height, total dry weight, AM colonization, mycorrhizal dependency and N, P, K concentrations, in relation to the provenance, inoculation and interaction provenance × inoculation of Tamarindus indica -Analyses de variance des effets principaux et leur interaction sur la hauteur des plants, le poids sec total, le taux de mycorhization, la dépendance mycorhizienne et les concentrations des parties aériennes en N, P et K en fonction de la provenance, de l'inoculation et de l'interaction provenance x inoculation de Tamarindus indica. Table 4. Analyses of variance on main effects and their interaction on shoot height, total dry weight, AM colonization, mycorrhizal dependency and N, P, K concentrations in relation to the provenance, inoculation and interaction provenance × inoculation of Ziziphus mauritiana -Analyses de variance des effets principaux et leur interaction sur la hauteur des plants, le poids sec total, le taux de mycorhization, la dépendance mycorhizienne et les concentrations des parties aériennes en N, P et K en fonction de la provenance, de l'inoculation et de l'interaction provenance x inoculation de Ziziphus mauritiana. In the same column and for the same parameter, means with a letter in common are not significantly different according to the Fisher protected LSD test -Dans la même colonne et pour le même paramètre, les moyennes avec une lettre en commun ne sont pas significativement différentes selon le test LSD de Fisher.
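The Fisher protected LSD criterion invoked in the table notes above can be sketched numerically. This is an illustrative sketch only: the error mean square, replicate count, t critical value and treatment means below are made-up numbers, not values from this study:

```python
import math

def fisher_lsd(t_crit: float, mse: float, n_per_group: float) -> float:
    """Least significant difference for comparing two treatment means
    with n_per_group replicates each, given the ANOVA error mean square."""
    return t_crit * math.sqrt(2.0 * mse / n_per_group)

# Illustrative numbers (not from the paper): MSE from the ANOVA error term,
# 10 replicates per treatment, t critical value for the error df at alpha = 0.05.
t_crit = 1.99   # approximate two-sided 5% t quantile for ~80 error df
lsd = fisher_lsd(t_crit, mse=4.0, n_per_group=10)

# Two means whose difference exceeds the LSD are declared different
# (and would receive different letters in the tables).
mean_a, mean_b = 12.5, 10.2
print(round(lsd, 2), abs(mean_a - mean_b) > lsd)
```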
Influence of jujube seed provenances on AM response In jujube plants, Glomus aggregatum significantly increased the shoot height, total dry weight and percentage N, P and K content of inoculated plants (Table 5). In plants from the Léri provenance, shoot height and total dry weight were greater than in plants from the other provenances (Table 8). Plants from Gonsé appeared to be the least AM-dependent (Table 8). The root/shoot ratio was significantly higher in AM inoculated plants than in non-inoculated ones regardless of the provenance of the tree (Table 5). No significant variation was observed in AM root colonization between inoculated plants, but significant differences were observed between inoculated and non-inoculated plants (Tables 4 and 5). The N, P and K concentrations were significantly higher in the compound leaves of AM plants from the Léri provenance (Table 8). Influence of fruit trees on AM response Analysis of the data presented in table 1, and in tables 6 to 8, revealed that the response of the studied fruit trees to inoculation with G. aggregatum varies with the tree and the rainfall regime of its provenance. In fact, the Parkia plants from Bazèga and Bankartougou, at 600-900 mm rainfall, respond well (Tables 1 and 6), while Ziziphus plants from Léri, Bandia and Colomba respond well at 900-1,000, 400-600 and 900-1,000 mm rainfall, respectively (Tables 1 and 8). Tamarindus is the only tree which responds well to inoculation with G. aggregatum between 300-1,000 mm of rainfall. In fact, Tamarindus plants from Tiénaba and Foungioune have the highest mycorrhizal dependency at 400-600 and 900-1,000 mm rainfall, respectively (Tables 1 and 7).
DISCUSSION The néré, tamarind, and jujube plants used in this study are multipurpose fruit tree species commonly grown in orchards and agroforestry systems under the arid and semi-arid climatic conditions of West Africa. They usually grow on soils characterized by low organic matter concentration and reduced available P, making them ideal candidates for testing the potential practical applications of arbuscular mycorrhizal inoculation. The greenhouse experiment, using a P-deficient substrate that mimics some of the indigenous soil parameters, together with the introduction of a standard inoculum containing mycorrhizal fungi, provided an opportunity for comparative analysis. Our results revealed that, regardless of seed provenance and plant species, the mycorrhizal root colonization levels were high and comparable (80-90%) within each fruit tree species provenance. Despite these high colonization rates, shoot height and total biomass production differed significantly among provenances of the same plant species. For example, despite a high AM root colonization level, néré plants originating from Bankartougou and tamarind plants from Kongoussi and Sondogtenga did not display differences in biomass production compared to the non-inoculated controls. These results indicate that the level of AM root colonization remains a weak indicator of plant growth benefits (Cavender et al., 2006; Nunes et al., 2008) because it was not always consistent with the impact AM symbiosis has on plant growth yields.
The significant enhancement of biomass production in AM-colonized jujube plants from all provenances was directly proportional to their MD values, a proportionality that did not exist in tamarind and néré plants. These results corroborate previous reports which found that these three fruit trees responded differently to AM inoculation (Bâ et al., 2000; Guissou et al., 2001; Solaiman et al., 2008; Johnson et al., 2010; Schultz et al., 2010). The high MD values for jujube were previously obtained and are known to vary according to the AM fungal species used as inoculum (Mathur et al., 2000; Smith et al., 2000; Urcelay et al., 2003). Mycorrhizal dependency values can be predicted neither by root colonization measurements nor by root architecture (Guissou et al., 1998), even though several authors have stated that the length and the density of root hairs are good indicators of the MD of plant species or cultivars (Simard et al., 2002; Collier et al., 2003; Sorensen et al., 2005; Janos, 2007; Johnson et al., 2010). Nevertheless, our results demonstrate, for the first time, that the MD of some Sahelian fruit trees varies according to their provenance, and as such corroborate previous data obtained with Acacia (Duponnois et al., 2003; Lesueur et al., 2005; Belay et al., 2013) and Dalbergia sissoo Roxb. DC. (Devagiri et al., 2001) leguminous trees. Interestingly, regardless of AM inoculation, there was a difference in N, P and K absorption by plants, particularly with tamarind (Tiénaba and Kongoussi provenances) and jujube (Léri and Gonsé provenances). When the two provenances and G.
aggregatum inoculation are investigated, an additional N, P and K uptake takes place. It is then possible to suggest that tamarind and jujube from those provenances may have developed ecological plasticity in order to better adapt themselves to poor nutrient soil conditions. In almost all provenances (except tamarind from Tiénaba), AM inoculation significantly improved N, P and K absorption compared to non-AM fruit trees. These findings are consistent with published data on the enhanced nutrient uptake observed in mycorrhizal plants (Fitter et al., 2011; Smith et al., 2011; Jiang et al., 2013), partly due to the existence of an extraradical hyphal network capable of exploring a greater soil volume (Simard et al., 2002; Marulanda et al., 2003; Schnepf et al., 2008). An interesting element, in terms of mineral absorption, can be observed between species and provenances. The P levels in compound leaves of AM inoculated jujube from Colomba, Bandia and Dahra were respectively 7, 10 and 6 times higher than in control plants, whereas in the other provenances of Léri and Gonsé only a two-fold increase was observed. In that particular case, the most efficient provenances in P uptake coincided with the most AM-dependent ones. On the other hand, with néré plants, a reducing effect of the AM inoculation was observed in the N and K uptake from Tiénaba. This is probably due to a growth dilution effect brought about by increased plant biomass in AM plants compared to non-AM plants (Ahiabor et al., 1994).
CONCLUSIONS In conclusion, the MD and mineral nutrient absorption potential may vary depending on the plant species and their provenance. Our results revealed the existence of substantial provenance variation, which can be utilized to initiate tree improvement programs for these species and large-scale fruit production in orchards and other agroforestry systems. For successful practical application of these findings, further investigations are now required to evaluate the competitiveness of the G. aggregatum isolate with the indigenous AM fungal populations of West African soils, particularly in Burkina Faso and Senegal, as well as Mali and Niger, and to reveal the underlying mechanisms. Table 1. Geographical and climatic characteristics of seed provenance locations for the seeds of the three fruit tree species used -Caractéristiques géographiques et climatiques des localités de provenances des graines chez les trois espèces d'arbres fruitiers utilisés. Table 7. Effects of inoculation with Glomus aggregatum on shoot height, total dry weight, root/shoot ratio, AM colonization, mycorrhizal dependency, leafed stems N, P and K concentrations of Tamarindus indica plants originating from five provenances in West Africa -Effets de l'inoculation avec Glomus aggregatum sur la hauteur des plants, le poids sec total, le rapport racine/tige, le taux de mycorhization, la dépendance mycorhizienne et les concentrations en N, P et K des plants de cinq provenances d'Afrique de l'Ouest de Tamarindus indica. b In the same column and for the same parameter, means with a letter in common are not significantly different according to the Fisher protected LSD test -Dans la même colonne et pour le même paramètre, les moyennes avec une lettre en commun ne sont pas significativement différentes selon le test LSD de Fisher. Table 5.
Interaction between inoculation with Glomus aggregatum and provenance on shoot height, total dry weight, root/shoot ratio, AM colonization, leafed stems N, P and K concentrations of Parkia biglobosa, Tamarindus indica and Ziziphus mauritiana plants originating from five provenances in West Africa -Interaction entre l'inoculation avec Glomus aggregatum et la provenance sur la hauteur des plants, le poids sec total, le rapport racine/tige, le taux de mycorhization, les concentrations des parties aériennes en N, P et K des plants de Parkia biglobosa, Tamarindus indica et Ziziphus mauritiana de cinq provenances d'Afrique de l'Ouest. In the same column and for the same species and parameter, means with a letter in common are not significantly different according to the Fisher protected LSD test -Dans la même colonne et pour les mêmes espèce et paramètre, les moyennes avec une lettre en commun ne sont pas significativement différentes selon le test LSD de Fisher. Table 6. Effects of inoculation with Glomus aggregatum on shoot height, root/shoot ratio, mycorrhizal dependency, N, P and K concentrations of Parkia biglobosa plants originating from five provenances in West Africa -Effets de l'inoculation avec Glomus aggregatum sur la hauteur des plants, le rapport racine/tige, la dépendance mycorhizienne et les concentrations en N, P et K des plants de cinq provenances d'Afrique de l'Ouest de Parkia biglobosa. Table 8. Effects of inoculation with Glomus aggregatum on shoot height, total dry weight, mycorrhizal dependency, leafed stems N, P and K concentrations of Ziziphus mauritiana plants originating from five provenances in West Africa -Effets de l'inoculation avec Glomus aggregatum sur la hauteur des plants, le poids sec total, la dépendance mycorhizienne et les concentrations en N, P et K des parties aériennes des plants de Ziziphus mauritiana de cinq provenances d'Afrique de l'Ouest.
b In the same column and for the same parameter, means with a letter in common are not significantly different according to the Fisher protected LSD test -Dans la même colonne et pour le même paramètre, les moyennes avec une lettre en commun ne sont pas significativement différentes selon le test LSD de Fisher.
2018-12-05T19:29:55.824Z
2016-09-19T00:00:00.000
{ "year": 2016, "sha1": "2b0dd81b526ebd7ed6009c7d6d65ca62a24b6fcf", "oa_license": "CCBY", "oa_url": "https://popups.uliege.be/1780-4507/index.php?file=1&id=16754&pid=13149", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "2b0dd81b526ebd7ed6009c7d6d65ca62a24b6fcf", "s2fieldsofstudy": [ "Agricultural And Food Sciences" ], "extfieldsofstudy": [ "Biology" ] }
244414386
pes2o/s2orc
v3-fos-license
Second and third harmonic generation from gold nanolayers: experiment versus theory The use of semiconductors, metals, or ordinary dielectrics in the fabrication of nanodevices is at the cutting edge of today's technology, exploiting the properties of light propagation and localization at the nanometric scale in new and surprising ways. Understanding accurately how light interacts with these materials at the nanoscale is crucial if one is to properly engineer nano-devices. When the nanoscale is reached, light-matter interactions display new phenomena where conventional approximations may not always be applicable, and they should be either revised or abandoned. In this work, we measure the efficiency of second and third harmonic generation from gold nanolayers. The experimental results are compared with numerical simulations based on a detailed microscopic hydrodynamic model that considers different effects playing a role in the nonlinear response that are not usually considered by more generic models. The agreement between experimental and theoretical results proves the importance of all these contributions. Introduction In recent years, more effort has been put into properly engineering nano-antennas, filters, and other devices whose geometrical features approach atomic size. For this reason, it is very important to understand how light interacts at the nanoscale with metals, semiconductors, or ordinary dielectrics. At this scale, light-matter interaction can display completely new phenomena and conventional approximations should be revised. This is the case, for instance, of two well-known nonlinear processes such as second and third harmonic generation (SHG and THG). These phenomena have been studied extensively in different optical materials. Usually, high conversion efficiencies are sought, which requires thick nonlinear materials with high nonlinearities, phase-matching conditions and low material absorption.
Under these circumstances, the leading nonlinear polarization term corresponds to the bulk contribution described through the second and third order nonlinear susceptibility tensors (χ⁽²⁾ and χ⁽³⁾). However, when nanometric sizes are reached, SH and TH efficiencies may decrease, and phase matching conditions and absorption may no longer play a primary or significant role. Moreover, the effective χ⁽²⁾ and χ⁽³⁾ may not coincide with their bulk, local counterparts and may depend on the type of nonlinearities that are triggered. In addition, contributions to the nonlinear polarization arising from electric quadrupole-like and magnetic sources should also be taken into account. Most of the models used to explain harmonic generation from metals at the nanoscale rely on assigning an effective surface and volume χ⁽²⁾ for SHG, and an effective volume χ⁽³⁾ to describe THG, which generally lack a detailed, microscopic, dynamical description of light propagation and light-matter interactions. With this, most theoretical predictions appear to accurately describe the general shape of the angular dependence of the SH signal, but fail to describe the observed amplitude. In this work, we report experimental measurements of SHG and THG from 20nm and 70nm-thick Au nanolayers. These measurements are compared with numerical simulations based on a microscopic hydrodynamic model which accounts for surface, magnetic and bulk nonlinearities arising from free and bound charges, preserving linear and nonlinear dispersion, nonlocal effects due to pressure and viscosity, and an intensity-dependent free electron density. This model is adapted and applied anew based on previous work reported in references [1][2][3][4]. Experiments Measurements of SH signals from the 20nm-thick Au nanolayer have been conducted in transmission and in reflection, while the SHG from the 70nm-thick Au nanolayer has been studied in reflection. In both cases, incident pulses tuned at 800nm and 1064nm have been used.
The process of THG from the 20nm-thick layer was studied both in transmission and in reflection using incident pulses tuned at 1064nm. For this purpose, we have developed an experimental set-up, shown in Fig. 1, capable of analysing the angular dependence of the harmonic signals in both transmission and reflection configurations. First of all, a half-wave plate is used to control the polarization of the incident field. Then, we filter out any possible SH or TH coming from the different optical components placed before the sample. We use a lens to focus the beam on the sample plane. A focal length f=200mm was used for the SHG experiments to obtain beam intensities in the range of 1-2 GW/cm². A focal length f=100mm was employed for the THG measurements in order to achieve higher beam intensities. The sample is mounted on a rotary support which allows us to take measurements as a function of the angle of incidence. Just after the sample, the fundamental field is attenuated by means of a filter in order to avoid any potential SHG or THG from the surfaces of the optical elements placed along the set-up after the sample. After that, a lens with focal length f=100mm is used to collimate the beam, and a polarizer helps us select the SH or TH polarization. Then, we use a prism and a blocking edge to separate and obscure the remaining fundamental field radiation from the SH or TH path. Finally, the harmonic signals are detected by means of a photomultiplier tube, on which we place a narrow-band spectral filter with a 20nm band pass transmission around either the SH or the TH wavelength. This whole set-up is mounted on a rotary platform which allows us to take measurements in transmission and in reflection. A calibration procedure has been performed to estimate the efficiency of a given process as the ratio between the SH or TH intensity generated in transmission or reflection and the total peak pump pulse intensity just before the sample.
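The calibration just described reduces each efficiency to a simple intensity ratio; the sketch below uses illustrative numbers (the pump intensity matches the 1-2 GW/cm² range quoted above, but the harmonic intensity is made up):

```python
def conversion_efficiency(harmonic_intensity: float, pump_peak_intensity: float) -> float:
    """Efficiency of SHG/THG as the ratio of the generated harmonic intensity
    (in transmission or reflection) to the total peak pump intensity at the sample."""
    return harmonic_intensity / pump_peak_intensity

# Illustrative numbers: 1.5 GW/cm^2 pump, 3 W/cm^2 of detected second harmonic
pump = 1.5e9   # W/cm^2
sh = 3.0       # W/cm^2
print(conversion_efficiency(sh, pump))  # 2e-09
```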
Fig. 1. Experimental set-up developed for measuring SH and TH signals from 20nm and 70nm-thick Au nanolayers. The rotary support allows us to rotate the sample so that we can take measurements as a function of the angle of incidence. The detection system is mounted on a rotary platform so that measurements can be taken in transmission and in reflection. Theory To understand the experimental results, we have performed numerical simulations reproducing the experimental situation. These simulations are based on a theoretical model that embraces full-scale, time-domain coupling of matter to the macroscopic Maxwell's equations. Our approach consists in formulating a microscopic, hydrodynamic model to understand the linear and nonlinear optical properties of metals by accounting for competing surface, magnetic and bulk nonlinearities arising from both free and bound electrons, preserving linear and nonlinear material dispersion, nonlocal effects due to pressure and viscosity, and an intensity-dependent free electron density, to which we refer as the hot electron contribution. When applying Newtonian dynamics to free and bound electrons, we obtain the simultaneous material equations of motion, Eqs. 1 and 2. Here, P_{b,j} and P_f are the bound and free electron polarizations, respectively. Eq. 1 describes the behaviour of bound electrons. The j counter indicates multiple bound electron species. In a metal like Au, one free (Drude) and two bound (Lorentz) electron species generally suffice to describe the local, linear dielectric function down to a wavelength of ~200nm. Each Lorentz species is characterized by a third order, isotropic nonlinearity P_{b,j}^{NL} = −β(P_{b,j} · P_{b,j}) P_{b,j}, where the coefficient β may be derived from a nonlinear, two-dimensional oscillator model; taking into account typical bound electron densities, lattice constants and resonance frequencies for this material, it has a value of β ≈ 10 :; . This parameter governs all third order effects triggered by the background crystal, i.e.
bound electrons, including self-phase modulation, nonlinear absorption, and THG power conversion efficiencies. Eq. 2 determines the action of free electrons. As can be seen, apart from the usual linear driving term, there are different free electron contributions that give rise to harmonic generation, such as a quadrupole-like Coulomb term. In this work, we highlight and discuss for the first time the relative roles bound and hot electrons play in THG, and conclude that the generated TH signal is mostly triggered by hot electron dynamics. In this section, we present some of the experimental results that we have obtained, which are compared with their corresponding numerical predictions. In Fig. 2 we show the results of SHG from the 20nm-thick Au layer when the sample is illuminated at 800nm. Experiments are depicted using full circles and simulations using solid curves. Transmitted (blue) and reflected (red) efficiencies are plotted as a function of the angle of incidence. As can be seen, the maximum of the reflected curve is slightly shifted towards a larger angle of incidence with respect to the transmitted curve. This behaviour can be appreciated in both the experimental and the predicted results. Moreover, the ratio between the measured reflected and transmitted efficiencies is also reproduced in the theoretical case. In Fig. 3 we present the reflection measurements (full circles) of the SH signal generated by the 70nm-thick Au layer when it is illuminated at 1064nm. As can be seen, the maximum of the curve is reached at a large angle of incidence, around 70º, a fact that is reproduced in the predicted results. Fig. 3. Measured (full circles) and predicted (solid curves) SHG reflected efficiencies as a function of the angle of incidence from the 70nm-thick Au layer when the incident pulse is tuned at 1064nm. In Fig.
4, we show the results of the transmitted TH from the 20nm-thick Au layer as a function of the angle of incidence when the input pulse is tuned at 1064nm. In blue, we plot the transmitted TM-polarized TH for a TM-polarized incident pump, and in red we represent the transmitted TE-polarized TH generated when the incident field is TE-polarized. The simulations are performed by taking both bound and hot electrons into account. By introducing only the bound electron contribution, efficiencies of order 10⁻¹¹ were obtained, clearly inadequate to explain our measurements. Instead, what is required to reproduce the conversion efficiencies that we observe is the introduction of the hot electron contribution. In order to see how important the bound and hot electron contributions to the TH signal can be, we have plotted in Fig. 4 predictions of the reflected TH efficiency from a 20nm-thick Au layer as a function of the input wavelength for three different scenarios: taking only bound electrons into account (black), taking only hot electrons into account (red), and taking both contributions into account (blue). It can be seen that, separately, each type of third order nonlinearity yields a quantitatively and qualitatively similar response, with a TH peak for a pump wavelength of ~600nm. However, their combined response redshifts the TH peak. This prediction runs counter to intuition, because an increased free electron density should blueshift the plasma frequency, with an expectedly similar outcome for the TH peak. It is obvious that the two components interfere and conspire to instead redshift the peak, an effect that encapsulates a cautionary tale for any experimental result that may be cavalierly extrapolated without the benefit of proper assumptions and theoretical support. Fig. 4. Measured (full circles) and predicted (solid curves) transmitted THG efficiencies as a function of the angle of incidence from the 20nm-thick Au layer when the incident field is tuned at 1064nm.
A TM- (blue) and TE-polarized (red) TH field is detected when the fundamental field is TM- or TE-polarized, respectively. Conclusions In conclusion, we have reported measurements on SHG and THG from 20nm- and 70nm-thick Au layers. The SH field generated by the 20nm-thick sample, measured as a function of the angle of incidence, carries information about combined surface and volume currents excited on and inside the sample. The angular dependence of the reflected SH signal from the 70nm-thick Au layer supports mostly surface currents. These measurements are compared with numerical simulations based on a hydrodynamic approach to model light-matter interactions that makes no assumptions about effective surface or volume nonlinearities. Instead, we rely on temporal and spatial derivatives and mere knowledge of the effective electron mass to determine the relative magnitudes of surface and volume contributions. With this approach we find remarkable agreement with experimental observations. The THG measurements and simulations are reported for TM and TE polarizations of the generated field from the 20nm-thick layer in transmission and as a function of the angle of incidence. The generated TH signal is attributed mostly to hot-electron dynamics.
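The interference-driven redshift discussed above can be illustrated with a toy numerical model: two complex Lorentzian third-order amplitudes that individually peak at the same pump wavelength, but whose coherent sum peaks at a longer wavelength once a relative phase is introduced. All parameters below (centers, widths, the π/2 phase) are illustrative assumptions, not values taken from the hydrodynamic model.

```python
import numpy as np

def lorentzian_amp(lam, center, width, phase=0.0):
    """Complex Lorentzian amplitude; |amp|^2 peaks at `center`."""
    return np.exp(1j * phase) * width / ((lam - center) + 1j * width)

lam = np.arange(400.0, 900.0, 1.0)  # pump wavelength grid, nm

# Toy third-order amplitudes (illustrative parameters, not fitted values):
A_bound = lorentzian_amp(lam, 600.0, 50.0)            # bound-electron channel
A_hot   = lorentzian_amp(lam, 600.0, 150.0, np.pi/2)  # hot-electron channel, phase-shifted

peak = lambda y: lam[np.argmax(y)]
print(peak(np.abs(A_bound)**2))          # 600.0 -- bound channel alone
print(peak(np.abs(A_hot)**2))            # 600.0 -- hot channel alone
print(peak(np.abs(A_bound + A_hot)**2))  # > 600 -- coherent sum redshifts the peak
```

The point is qualitative only: two coherent channels can shift a composite resonance in a direction that neither channel exhibits alone, exactly the kind of behavior that cannot be extrapolated from one channel in isolation.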
2021-11-20T16:23:35.040Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "322c7c2d3e70e5d4a657054b01ee70d41090c197", "oa_license": "CCBY", "oa_url": "https://www.epj-conferences.org/articles/epjconf/pdf/2021/09/epjconf_eosam2021_07003.pdf", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "3d20fa2237691b34ae68695c7ab0096027cee2ff", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [] }
258921509
pes2o/s2orc
v3-fos-license
The use of APE in the Problem-Based Learning Process in English subjects Every effective, inventive, and creative educational procedure will result in high-caliber students. Hence, every instructor should be able to instruct using a good, student-friendly learning model so that when learning is implemented, pupils can learn successfully. Apart from selecting the learning model to be utilized, the teacher must have innovative ideas and methods for creating instructional materials that will be delivered to students, especially in English content, if they are to follow the learning process with ease. One of these involves employing educational media. Students will be able to follow the learning process with excitement and fun if appropriate learning materials are used, which will prevent them from becoming disinterested. With the "APE media Secret Food Chain Box," this problem-based learning can be carried out. In problem-based learning, problems from a lesson are used to help students build their problem-solving skills.
INTRODUCTION To produce creative students, teachers should strive for creative and innovative learning. A teacher's success can be seen from the success of his students; if the students are successful, then the teacher is a great teacher, one who can inspire the students (Nurdyansyah, 2015), (Musfiqon, 2014). Beyond inspiring, the teacher must also have sufficient knowledge according to what is needed (Husniati et al., 2020). Problem-based learning is very beneficial for students because it can improve critical thinking skills and help them solve their own problems so that they can produce alternative solutions for each problem (Dwiyanto & Surur, 2016), (Johnson & Johnson, 2009), (Nurdyansyah, Siti Masitoh, Bachtiar Syaiful Bachri, 2018). Problem-based learning itself is also a learning model that encourages students to learn actively and expand their knowledge (Komariah, 2011), (Khikmiyah, 2021). In practice, students are directly involved in solving problems, outlining the roots of existing problems in order to get good solutions and become independent human beings. In order to create a learning process that encourages students to build their own knowledge in the teaching and learning process, it is necessary to have a strategy that encourages students to build their own knowledge and learning that emphasizes problem solving (Pratiwi, 2010), (Andini et al., 2020), (Ali et al., 2021). The objective of this study is to analyze the effect of using the Problem-Based Learning model on student learning outcomes by using the APE media Secret Food Chain Box in the English subject in elementary school.
RESEARCH METHOD This research uses the PTK (classroom action research) method (Priyono, 2008), (Berta & Hoffmann, 2020). Before conducting the research, the authors made observations, which aimed to review the site, find out the willingness of schools to be used as research locations, and after that determine the student population (Komariah, 2011). The sources of data for this study were students, grade V teachers and colleagues, while the data collection tools included teaching media, English language test sheets, observation sheets and interview guidelines. The validity of this study relies on data triangulation (N. A. Salim et al., 2021), carried out using test techniques, interview techniques, and observation techniques (Wahidmurni, 2017). RESULT AND DISCUSSION In this study the teacher applies the Problem-Based Learning Model. This activity is carried out for 45 minutes. The activities carried out by the teacher are: (1) presenting the problem, in which the teacher conveys the learning objectives and the material explained by the teacher (Mustikawati, 2015), (Burns et al., 2021); (2) dividing students into several groups, formed heterogeneously, and then checking the students according to their group (Martínez-Andrés et al., 2017), (Reina et al., 2019); (3) giving assignments to students, in which the teacher instructs students to observe the APE English media about foods and drinks in the English subject for class V and gives discussion sheets to each group, after which the students work on the discussion sheets given by the teacher; (4) guiding students in observing the APE, guiding them to understand the material, and also solving the problems given by the teacher (Morrison et al., 2021); (5) students present their observations and the teacher ensures that each group gets a presentation turn and gives other participants the opportunity to ask questions; (6) analyzing and drawing conclusions, in which students and the teacher discuss and draw conclusions from the results of the presentations, and
students and teachers reflect on the material that has been taught (Pratiwi & Mangunsong, 2020), (A. Salim et al., 2020). Quantitative data was obtained from the results of the teacher's assessment of students after carrying out the learning process. The results of the teacher's assessment before and after using the APE are as follows. Based on the results of the assessment, it can be seen that there was a fairly high increase: the average value rose from 58.5 before using the APE to 68.6 after using it, an increase of 10.1. This increase shows that the KKM score for school English, namely 65, has been fulfilled. The observation results stated that 2 out of 5 groups could analyze the APE well. They could identify, analyze, explain, and present the results of their discussions to their friends. In addition, the other groups were able to make good questions based on their analysis of the APE Secret Food Chain Box. CONCLUSION The conclusion from this study is that the problem-based learning process is a learning model in which students actively participate, because students are required to solve problems given by the teacher. The results of the teacher's assessment were 58.5 before APE learning was carried out and 68.6 after using the APE Secret Food Chain Box; there was an increase of 10.1 points.
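As a quick sanity check of the figures quoted above (a sketch using only the class averages reported in the text; per-student scores are not available):

```python
# Class averages and KKM threshold as reported in the study.
before_avg = 58.5   # average score before using the APE media
after_avg = 68.6    # average score after using the APE media
kkm = 65            # minimum mastery criterion (KKM) for school English

gain = round(after_avg - before_avg, 1)
print(gain)              # 10.1 points of improvement
print(after_avg >= kkm)  # True -- the post-APE average meets the KKM
```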
2023-05-27T15:02:44.415Z
2023-05-22T00:00:00.000
{ "year": 2023, "sha1": "2b346e05166f967a3d6621153a986bf0cece1dec", "oa_license": "CCBY", "oa_url": "https://madrosatuna.umsida.ac.id/index.php/Madrosatuna/article/download/1585/1757", "oa_status": "HYBRID", "pdf_src": "Anansi", "pdf_hash": "ea29c8c21e44560674622b6a434b5b72ffe06177", "s2fieldsofstudy": [ "Education", "Linguistics" ], "extfieldsofstudy": [] }
195763930
pes2o/s2orc
v3-fos-license
POSS Compounds as Modifiers for Rigid Polyurethane Foams (Composites) Three types of polyhedral oligomeric silsesquioxanes (POSSs) with different functional active groups were used to modify rigid polyurethane foams (RPUFs). Aminopropylisobutyl-POSS (AP-POSS), trisilanoisobutyl-POSS (TS-POSS) and octa(3-hydroxy-3-methylbutyldimethylsiloxy)-POSS (OH-POSS) were added in an amount of 0.5 wt.% of the polyol weight. The fillers were characterized in terms of particle size, particle dispersion and their effect on the viscosity of the polyol premixes. Next, the obtained foams were evaluated by their processing parameters, morphology (Scanning Electron Microscopy, SEM), mechanical properties (compressive test, three-point bending test, impact strength), viscoelastic behavior (Dynamic Mechanical Analysis, DMA), thermal properties (Thermogravimetric Analysis, TGA, thermal conductivity) and application properties (contact angle, water absorption). The results showed that the morphology of the modified foams is significantly affected by the filler type, which resulted in inhomogeneous, irregular, large cell shapes and further affected the physical and mechanical properties of the resulting materials. RPUFs modified with AP-POSS exhibit better mechanical properties compared to the RPUFs modified with the other POSSs. Introduction Polyurethanes (PUs) are a group of polymers characterized by highly diverse properties, thanks to which they have a very wide range of industrial applications [1][2][3]. For the synthesis of polyurethanes, isocyanates, first obtained by Wurtz in 1849, are used as the fundamental raw material [4]. Among synthesized polymeric materials, PUs currently occupy fifth place among the most commonly used plastics in the world and constitute 7.7% of total plastics produced [5,6]. The global PU market is dominated by PU foams, which account for over 65% of global production.
Worldwide production of PU foams mainly comprises flexible foams, which cover about 37% of world production and are thus the largest group of polyurethane materials; rigid foams, immediately after flexible foams, are the second largest group of polyurethane plastics and constitute about 28% of global polyurethane production [7]. Rigid polyurethane foams (RPUFs) are used as high-performance thermal insulation materials in construction, pre-insulated pipelines and in the refrigeration industry due to their properties, such as closed-cell structure, low thermal conductivity, and low moisture absorption capacity [8][9][10][11]. Open-cell polyurethane foams increase the heat transfer capacity, which means they are characterized by much higher thermal conductivity than RPUFs with a closed-cell structure [12]. The thermal insulation properties of RPUFs also depend on the apparent density of the foam, which has a fundamental influence on the heat conduction coefficient. The values of apparent density for RPUFs with an open-cell structure are 10 to 12 kg m −3, while for closed-cell foams the value usually lies in the range 25-70 kg m −3. The smallest values of thermal conductivity are observed for RPUFs with a closed-cell structure, the apparent density of which is from Manufacturing of RPUFs RPUFs containing 0.5 wt.% of AP-POSS, TS-POSS, and OH-POSS were obtained as follows. The adequate amount of the selected POSS was added to the Izopianol 30/10/C and the mixture (component A) was homogenized with an overhead stirrer at 600 RPM under ambient conditions for approximately 60 s. Purocyn B (component B) was added to component A and the mixture was stirred for 10 s at a speed of 1800 RPM. Following the information provided by the supplier, the ingredients were mixed in the ratio of 100:160 (ratio of component A to component B). The prepared system was poured into an open mold and allowed to expand freely in the vertical direction.
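The dosing just described can be expressed as simple batch arithmetic: POSS at 0.5 wt.% of the polyol weight, and components A and B mixed 100:160. The 50 g polyol basis below is an illustrative assumption, not a quantity from the paper.

```python
# Batch arithmetic implied by the recipe (illustrative basis, not measured data).
polyol_mass = 50.0                      # g of Izopianol 30/10/C (assumed basis)
poss_mass = 0.005 * polyol_mass         # POSS at 0.5 wt.% of the polyol weight
component_a = polyol_mass + poss_mass   # polyol premix (component A)
component_b = component_a * 160 / 100   # Purocyn B (component B), per the 100:160 ratio

print(poss_mass)     # 0.25 g of POSS
print(component_b)   # 80.4 g of isocyanate component
```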
RPUFs were conditioned at room temperature for 24 h. After this time, samples were cut with a band saw into appropriate shapes (determined by the obligatory standards listed below in the Characterization Techniques) and their physico-mechanical properties were investigated. A schematic figure of the synthesis of RPUFs is presented in Figure 2. Characterization Techniques The average size of POSS powder particles was measured using a Zetasizer NanoS90 instrument (Malvern Instruments Ltd., UK). The particle size in polyol dispersion (0.04 g/L) was determined with the dynamic light scattering (DLS) method. The absolute viscosities of polyol and isocyanate were determined according to ASTM D2930 (equivalent to ISO 2555) using a rotary Viscometer DVII+ (Brookfield, Germany). The torque of the samples was measured over a range of shear rates from 0.5 to 100 s −1 at room temperature. The apparent density of foams was determined according to ASTM D1622 (equivalent to ISO 845). The densities of five specimens per sample were measured and averaged. The morphology and cell size distribution of the foams were examined from cellular structure images taken using JEOL JSM-5500 LV scanning electron microscopy (JEOL Ltd., USA). All microscopic observations were made in the high-vacuum mode at an accelerating voltage of 10 kV. The samples were scanned in the free-rising direction. The average pore diameters, wall thickness, and pore size distribution were calculated using ImageJ software (Media Cybernetics Inc.). The thermal properties of the synthesized composites were evaluated by TGA measurements performed using a STA 449 F1 Jupiter Analyzer (Netzsch Group, Germany). About 10 mg of the sample was placed in the TG pan and heated in an argon atmosphere at a rate of 10 K min −1 up to 600 °C. The decomposition temperatures (T5%, T10%, T50% and T70% of mass loss) were determined. The compressive strength (σ10%) of the foams was determined according to ASTM D1621 (equivalent to ISO 844) using a Zwick Z100 Testing Machine (Zwick/Roell Group, Germany) with a load cell of 2 kN and a speed of 2 mm min −1. Samples of the specified sizes were cut with a band saw in a direction perpendicular to the foam growth direction. Then, the analyzed sample was placed between two plates and the compressive strength was measured as the ratio of the load causing 10% deformation to the sample cross-section, in the directions parallel and perpendicular to the square surface. The result was averaged over 5 measurements per sample. The impact test was carried out in agreement with ASTM D4812 on a pendulum with a 0.4 kg hammer at an impact velocity of 2.9 m s −1, with sample dimensions of 10 × 10 × 100 mm. All tests were performed at room temperature. At least five samples were prepared for the tests. The three-point bending test was carried out using a Zwick Z100 Testing Machine (Zwick/Roell Group, Germany) at room temperature, according to ASTM D7264 (equivalent to ISO 178). The tested samples were bent at a testing speed of 2 mm min −1. The obtained flexural stress at break (εf) results for each sample were expressed as a mean value; the average of 5 measurements per composition was accepted. Dynamic mechanical analysis (DMA) was performed using an ARES Rheometer (TA Instruments, USA). Torsion geometry was used with samples of thickness 2 mm. Measurements were examined in the temperature range 20-250 °C at a heating rate of 10 °C min −1, using a frequency of 1 Hz and an applied deformation of 0.1%. Surface hydrophobicity was analyzed by contact angle measurements using the sessile-drop method with a manual contact angle goniometer with an optical system OS-45D (Oscar, Taiwan) to capture the profile of a pure liquid on a solid substrate. A water drop of 1 μL was deposited onto the surface using a micrometer syringe fitted with a stainless steel needle. The contact angles reported are the average of at least ten tests on the same sample. Water absorption of the RPUFs was measured according to ASTM D2842 (equivalent to ISO 2896). Samples were dried for 1 h at 80 °C and then weighed. The samples were immersed in distilled water to a depth of 1 cm for 24 h. Afterward, the samples were removed from the water, held vertically for 10 s, the pendant drop was removed, and the samples were blotted between dry filter paper (Fisher Scientific, USA) for 10 s and weighed again. The average of 5 specimens was used. Changes in the linear dimensions were determined in accordance with ASTM D2126 (equivalent to ISO 2796). The samples were conditioned at temperatures of 70 °C and −20 °C for 14 days. The change in linear dimensions was calculated in % from Equation (1), Δl = ((l − l0)/l0) × 100%, where l0 is the length of the sample before thermostating and l is the length of the sample after thermostating. The average of 5 measurements per composition was reported. Average Size of POSS Powder Particles and the Dispersion of POSS-Modified Polyol Premixes One of the most important parameters determining the behavior of the filler in the polymer matrix is the size of its particles. If the particles are too small, their dispersion may be difficult because they have a greater tendency to aggregate and agglomerate, forming large clusters in the matrix.
Too large particles may affect the foaming process and further properties of the obtained materials. The particle size of the POSS powder was measured in a polyol dispersion (0.04 g/L). The results of the particle size measurements are given in Figure 3. From the diagram, it follows that the size of AP-POSS particles ranges from 65 to 104 nm; the highest percentage, 29%, is shown by 82 nm particles. In the case of TS-POSS, the particle size distribution is somewhat larger and ranges from 59 to 108 nm, with the largest volume fraction at 69 nm. Such small particle sizes of nanofillers may suggest their tendency to agglomerate in the polyol, which may negatively affect mechanical and functional properties. Figure 4 shows the optical micrographs obtained for the polyol systems with AP-POSS, TS-POSS and additionally OH-POSS. A comparison of the optical images for the sample with AP-POSS (Figure 4a) with that of TS-POSS (Figure 4b) reveals that in both cases the particles are well dispersed in the polyol systems and no aggregates of the POSS particles are observed. A different trend is observed for the sample with OH-POSS.
As presented in Figure 4c, a homogeneous dispersion of the polyol system is observed, as a result of the liquid character of the used OH-POSS. Impact of POSS on PU Mixture Viscosity The viscosity of the reactive mixture was measured first, since it is a critical parameter affecting the foaming process [49]. Increased viscosity hinders bubble growth, yielding foams with lower cell size. Table 1 presents the results of the change in dynamic viscosity depending on the type of POSS in the polyol mixture. The polyol premixes that contained AP-POSS, TS-POSS and OH-POSS are characterized by an increase in their viscosity, as a result of the presence of POSS particles interacting with the polyether polyol through hydrogen bonding and van der Waals interactions [48]. Compared to the control polyol, the AP-POSS-modified polyol mixture has the greatest dynamic viscosity. The rheological properties of the polyol premixes are shown as viscosity versus shear rate in Figure 5a. In all systems, the viscosity is generally reduced at increased shear rates. Such a phenomenon is typical for non-Newtonian fluids with a pseudoplastic nature and is quite often found in many previous works [50,51]. To further analyze the data, the graph of viscosity versus shear rate is converted to log viscosity versus log shear rate form, as shown in Figure 5b. It can be seen that the curvatures of viscosity versus shear rate can be made close to linear using this log-log format, with regression of 0.979-0.982. The power law index (n) was calculated from the slopes. All results are presented in Table 1. For the system containing AP-POSS, the power law index is lower than that of its TS-POSS- and OH-POSS-modified counterparts. It indicates that the effect of the filler on the pseudoplasticity behavior becomes more significant for systems modified with AP-POSS, leading to highly non-Newtonian behavior.
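The log-log linearization step can be sketched as follows. Since the measured viscosity curves are not tabulated in the text, the data here are synthetic, generated from an assumed power-law fluid; the point is only how the power law index n is recovered from the slope of log viscosity versus log shear rate (slope = n − 1).

```python
import numpy as np

# Synthetic shear-thinning data: a power-law fluid, eta = K * gamma_dot**(n - 1).
# K and n_true are illustrative values, not fitted to the paper's premixes.
K, n_true = 5.0, 0.4
gamma_dot = np.logspace(-0.3, 2, 20)        # shear rates ~0.5-100 1/s, as in the test range
eta = K * gamma_dot ** (n_true - 1.0)

# Linear fit in log-log space: slope = n - 1, so n = slope + 1.
slope, intercept = np.polyfit(np.log10(gamma_dot), np.log10(eta), 1)
n_fit = slope + 1.0
print(round(n_fit, 3))   # 0.4 -- n < 1 indicates pseudoplastic (shear-thinning) behavior
```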
The Influence of POSS on the Maximum Temperature (Tmax) of the Reaction Mixture during the Foaming Process The reaction of the synthesis of RPUFs is highly exothermic [15,52]. The rate of increase in temperature determines the activity of the reaction mixture, which is associated with the reactivity of the components of the mixture. As shown in Table 2, the introduction of AP-POSS, TS-POSS, and OH-POSS into the PU system increases the activity of the reaction mixture, which is confirmed by an increase in the Tmax during the foaming process in each case. The presence of additional groups as a result of the incorporation of the filler can lead to the exothermic reaction providing more heat to the system, and consequently a higher temperature of the modified system compared to the PU-0. The Tmax increases by about 20 °C with the addition of each POSS and appears at longer times compared to the PU-0 (Figure 6). Basically, an analogous tendency was observed by other authors in previous works [16,53,54].
Foaming Kinetics of RPUFs

The foaming process was characterized by measuring the characteristic processing times: cream, extension, and gelation time. The cream time was measured from the start of mixing of the components to the visible start of foam growth, the extension time as the time elapsed until the foam reached its highest volume, and the gelation time as the time when the foam solidified completely and the surface was no longer tacky [17]. The results presented in Table 2 indicate a slight increase in cream and extension time for the RPUFs containing AP-POSS, TS-POSS, and OH-POSS. This dependence is mostly related to the fact that well-dispersed filler in the reaction mixture acts as a nucleating agent and that the modified systems exhibit higher viscosity. It was reported in a previous work that higher viscosity has a major impact on the growth of RPUFs and increases the reaction time by a few minutes [39]. An increase of filler content also affects the kinetics of the reaction and the phase separation: the rate of PU polymerization during foaming and of morphology development is slowed down [40]. The addition of the filler into the system decreases the rate of isocyanate conversion during the early stage of the reaction.
Also, due to the presence of the filler, the mobility of the molecules is reduced [40], leading to prolonged cream and extension times [18,22]. Compared to the PU-0, the composites modified with the fillers are also characterized by a shorter tack-free time, indicating that the filler particles act as a curing accelerator. Among the studied fillers, the highest values of extension time and tack-free time are determined for the PU-AP composites, as a result of their higher viscosity compared to the PU-TS and PU-OH counterparts.

Density of RPUFs

Apparent density is one of the most important parameters controlling the physical, mechanical, and thermal properties of RPUFs, and it influences their performance and applications. The densities of the prepared foams are presented in Table 2. In general, the apparent density tends to increase when POSS are added. PU-0 is characterized by an apparent density of 38 kg m−3, which increases to 43, 42, and 40 kg m−3 for the samples with AP-POSS, TS-POSS, and OH-POSS, respectively. This effect can be explained by the role of the filler particles in nucleation and cell growth. The POSS particles act as nucleation sites promoting the formation of bubbles, a trend that increases with nanoparticle content; at the same time, the growth of the resulting cells is hindered by the faster gelling reaction, reflected in the higher viscosity. This leads to bubble collapse and higher-density foams. Moreover, the reactive groups of the POSS particles (such as hydroxyl and amine groups) react with isocyanate (-NCO) groups; the amount of isocyanate available to react with water and produce CO2 blowing gas therefore decreases, lowering the foaming ratio and increasing the density. To sum up, the density of the composite foams increased with the incorporation of POSS filler into the PU system.
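Apparent density is simply specimen mass over envelope volume (as in ISO 845-type measurements). A minimal sketch, with hypothetical specimen dimensions chosen only for illustration:

```python
def apparent_density(mass_g, length_mm, width_mm, height_mm):
    """Apparent foam density in kg/m^3 from mass (g) and dimensions (mm)."""
    volume_m3 = (length_mm / 1000.0) * (width_mm / 1000.0) * (height_mm / 1000.0)
    return (mass_g / 1000.0) / volume_m3

# Hypothetical 100 x 100 x 25 mm specimen weighing 9.5 g:
print(round(apparent_density(9.5, 100.0, 100.0, 25.0), 1))  # -> 38.0 kg/m^3
```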
Morphology of RPUFs

The cell morphology is one of the most important factors determining the physico-mechanical properties of RPUFs [23,51]. The foaming process, the formation of cells, and their shape can be explained by a nucleation and growth mechanism [24]. A proper balance of filler concentration, reaction temperature, viscosity, and dispersion of the filler in the polymer matrix is the key to optimization of the cellular structure of RPUFs [25]. The cellular structures of the RPUF composites are presented in Figure 7. As observed from the micrographs of the neat PU-0 (Figure 7a,b), the cell size and cell distribution are nearly uniform, and the PU-0 consists of closed cells with a negligible amount of cells with broken walls. With the addition of AP-POSS, the overall cell structure becomes less uniform and the number of broken cells increases (Figure 7c,d). A similar trend is observed for sample PU-TS, as shown in Figure 7e,f, although it has a higher content of broken cells compared to PU-AP. A more homogeneous structure is observed in Figure 7g,h, which corresponds to the PU-OH.
The closed-cell structure is well-preserved, and the number of broken cells is decreased. The higher content of open cells in the RPUFs modified with AP-POSS and TS-POSS can be connected with poor interfacial adhesion between the filler surface and the polymer matrix, which promotes earlier cell-collapse phenomena and increases the possibility of generating open pores [26]. Moreover, possible interphase interactions between POSS and PU in the cell struts disturb the formation of a stable foam structure [27], resulting in the coalescence of crowded cells. The alteration of cell morphology as the result of filler incorporation was also observed in previous studies [28,35,55]. The cell sizes of the RPUFs were statistically analyzed from the SEM images by means of ImageJ software, and the median values are summarized in Table 2. The cell size distributions of the RPUFs are presented in Figure 8. From the table, the PU-0 has fewer cells with a larger cell size than the POSS-modified composites: the PU-0 has an average cell size of 466 µm, and the addition of small amounts of POSS yielded smaller cells. The RPUFs with AP-POSS, TS-POSS, and OH-POSS have cell sizes of 396 µm, 389 µm, and 408 µm, respectively. This means that the RPUF composites containing POSS have a higher cell density and smaller cell size than the PU-0, so it can be concluded that the POSS addition reduces the cell size. This may be due to the increased viscosity of the system after POSS addition, which restrains the expansion of the cells. Moreover, it has been well established in previous works that filler particles can act as nucleation sites for cell formation; since a higher number of cells starts to nucleate at the same time, a higher number of cells with reduced diameter is present [56-61].
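The cell-size statistics reported here (median values from image measurements) can be reproduced in a few lines. The diameters below are illustrative stand-ins, not the measured ImageJ data:

```python
import statistics

def cell_size_summary(diameters_um):
    """Median cell diameter (micrometres) and count of measured cells."""
    d = sorted(diameters_um)
    return statistics.median(d), len(d)

# Illustrative diameters for a reference foam and a POSS-modified foam:
ref = [420, 450, 466, 480, 510]
mod = [360, 380, 396, 410, 430]
print(cell_size_summary(ref)[0], cell_size_summary(mod)[0])  # -> 466 396
```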
Compressive Strength of RPUFs

The mechanical properties of RPUFs depend primarily on the cell morphology, with the strength being higher in the direction of foam expansion.
In Figure 9 it can be seen that all the compressive stress-strain curves of the RPUFs are composed of a first linear region, corresponding to the elastic response of the material, and a second region in which the curves present a large plateau: the stress remains roughly constant while the cell walls plastically deform and rupture, until the cells are crushed. Nevertheless, some differences can be observed between the samples. The increase in brittleness caused by the reinforcements leads to a more abrupt transition from the elastic region to the plateau, in contrast to the smooth transition observed for the PU-0. The elongation at break of the PU composites decreases with POSS incorporation, implying that the POSS particles make the PU matrix more rigid. This is a common result in PU composites reinforced by a conventional filler [16,62,63]. The compression modulus and compressive strength of the RPUFs are presented in Table 3. The compressive strength of all the modified materials, tested both parallel and perpendicular to the direction of foam rise, is greater than that of the reference foam. The largest increase in compressive strength is observed for the PU-AP: about 351 kPa in the parallel direction and 159 kPa in the perpendicular direction. The foams containing TS-POSS and OH-POSS show a slight decrease in compressive strength compared with the RPUFs containing AP-POSS; however, it is still larger than for the PU-0. As presented in Figure 10, the mechanical properties are closely related to the apparent density of the polymer composites. An increase in density is accompanied by an increase in the mechanical properties, since in compression the stiffness arises from the buckling of cell walls. A higher density reflects a more compact cellular structure, hence more material per unit area, and the modulus and strength increase [64].
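For reference, compressive strength is the measured force divided by the loaded cross-section, and the compression modulus is the initial slope of the stress-strain curve. A minimal sketch with synthetic force-strain data (not the measured curves) showing one way to extract both values:

```python
import numpy as np

def compressive_properties(force_N, area_mm2, strain):
    """Return (strength_kPa, modulus_MPa) from force-strain data.

    Strength is taken as the maximum stress; the modulus is the slope
    of a linear fit over the initial (elastic) part of the curve.
    """
    stress_mpa = np.asarray(force_N) / area_mm2          # N/mm^2 == MPa
    strength_kpa = float(np.max(stress_mpa)) * 1000.0
    k = max(2, len(strain) // 4)                         # initial quarter of points
    modulus_mpa, _ = np.polyfit(strain[:k], stress_mpa[:k], 1)
    return strength_kpa, float(modulus_mpa)

# Synthetic elastic-then-plateau response for a hypothetical 50 x 50 mm specimen:
eps = np.array([0.00, 0.01, 0.02, 0.03, 0.04, 0.06, 0.08, 0.10])
F = np.array([0.0, 219.0, 438.0, 657.0, 876.0, 878.0, 877.0, 878.0])
s, E = compressive_properties(F, 2500.0, eps)
print(round(s, 1), round(E, 1))  # strength in kPa, modulus in MPa
```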
POSS-modified foams obtained in this study show apparent density values of 40-43 kg m−3 and compressive strengths of 309-351 kPa, which are well within the range exhibited by conventional commercial foams, which present densities in the 15-130 kg m−3 range and compressive strength values of 200-220 kPa (for RPUFs at a density of 40 kg m−3) [60,65]. Based on these results, the foams modified with POSS can potentially be used on an industrial scale in the construction and packaging industries.

Figure 9. Compression behaviors of RPUFs measured parallel to the foam rise direction.
Flexural Strength of RPUFs

As in the case of the compression results presented in Figure 10, a correlation between flexural strength (σf) and apparent density is observed as well (Figure 11). It can also be seen that the incorporation of the POSS fillers affects the σf of the modified materials. Compared to the PU-0, σf is improved by the addition of POSS in all cases. The flexural strength of PU-AP increases by about 17%, from 0.402 to 0.469 MPa, compared to the PU-0. A similar trend is observed for the RPUFs modified with TS-POSS and OH-POSS: σf increases to 0.430 and 0.427 MPa for samples PU-TS and PU-OH, respectively. Figure 12 shows the stress-elongation curves for the RPUFs.
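Flexural strength in a three-point bending test follows the standard beam formula σf = 3FL/(2bh²). A short sketch; the specimen geometry and failure load below are hypothetical, chosen only to illustrate the calculation:

```python
def flexural_strength(force_N, span_mm, width_mm, height_mm):
    """Three-point bending strength, sigma_f = 3*F*L / (2*b*h^2), in MPa."""
    return 3.0 * force_N * span_mm / (2.0 * width_mm * height_mm ** 2)

# Hypothetical foam bar: 100 mm span, 25 mm wide, 20 mm high, failing at 31.3 N:
print(round(flexural_strength(31.3, 100.0, 25.0, 20.0), 2))  # MPa
```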
All samples exhibit linear elastic behavior in the low-stress region and plastic deformation in the high-stress region, pointing to the comparable mechanical performance of the modified foams. The incorporation of POSS reduces the elongation at break (εf) of the RPUFs in all cases. This is attributed to the presence of POSS aggregates within the PU matrix, which may act as defects during the tensile testing process and decrease the εf of the foam composites.

Impact Strength of RPUFs

A correlation between impact strength and apparent density is observed as well (Figure 13). With the incorporation of AP-POSS, TS-POSS, and OH-POSS, the impact strength increases from 0.35 to 0.46, 0.45, and 0.42 kJ m−2, respectively. This behavior is related to the good reinforcement-matrix interface and the generation of fracture paths through the POSS-reinforced RPUFs. The deformability of the RPUF matrix is thus reduced, which in turn affects the ductility of the foam surface. With this effect, the foam composite tends to form a more rigid structure, reducing the foam's energy absorption and resulting in greater impact strength.
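Impact strength in kJ m⁻² is the absorbed fracture energy divided by the specimen cross-section, as in Charpy-type tests. A minimal sketch with hypothetical specimen values:

```python
def impact_strength(energy_J, width_mm, thickness_mm):
    """Impact strength in kJ/m^2: absorbed energy over cross-sectional area."""
    area_m2 = (width_mm / 1000.0) * (thickness_mm / 1000.0)
    return (energy_J / 1000.0) / area_m2

# Hypothetical specimen with a 10 x 10 mm cross-section absorbing 0.046 J:
print(round(impact_strength(0.046, 10.0, 10.0), 2))  # -> 0.46 kJ/m^2
```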
Polymers 2019, 11, x FOR PEER REVIEW

Figure 13. Effect of apparent density on the impact strength of RPUFs.

Dynamic Mechanical Analysis (DMA) and Thermogravimetric Analysis (TGA)

The dynamic mechanical behavior of the RPUFs as a function of temperature is shown in Figure 14.
The results presented in Figure 14a and Table 4 indicate that the incorporation of POSS into the PU matrix affects the value of Tg, which corresponds to the maximum of the loss tangent (tanδ) versus temperature curve. Compared to the RPUFs modified with TS-POSS and OH-POSS, the RPUFs containing AP-POSS are characterized by a higher Tg. Wu et al. [66] have shown that the Tg of RPUFs reflects the rigidity of the polymer matrix, which is a function of the isocyanate index, cross-link density, and aromaticity level of the RPUFs. Given that the isocyanate index was held constant in this study, the increase in the Tg for the POSS-modified samples must reflect the increased aromaticity and cross-link density due to the presence of the POSS [67]. Moreover, as shown in Figure 14a, the reference and POSS-modified foams exhibit one wide peak in the temperature range analyzed. The peak becomes broader with POSS incorporation due to the different relaxation mechanisms appearing in the modified materials as a consequence of the added filler. The broadening of the tanδ peak is often attributed to a broader distribution of molecular weight between crosslinking points or to heterogeneities in the network structure [19].
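Reading Tg as the temperature of the tanδ maximum can be scripted directly. The sweep below is illustrative data with a single broad peak, not the measured DMA curves:

```python
import numpy as np

def glass_transition(temperature_C, tan_delta):
    """T_g taken as the temperature at the maximum of the tan(delta) curve."""
    i = int(np.argmax(tan_delta))
    return temperature_C[i]

# Illustrative temperature sweep with one broad peak:
T = np.array([80, 100, 120, 140, 160, 180, 200])
td = np.array([0.10, 0.15, 0.24, 0.31, 0.27, 0.18, 0.12])
print(glass_transition(T, td))  # -> 140
```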
In Figure 14b, it is also notable that the RPUFs modified with POSS are characterized by a higher storage modulus (E') compared to PU-0. It can be concluded that the addition of each POSS significantly increases the E' of the PU, and consequently the stiffness of the studied composites is also enhanced. This is due to the presence of filler in the PU matrix as well as the higher viscosity of the modified systems, which seriously limits the mobility of the polymer chains, increasing their stiffness. Similar results are reported in the literature [68,69].

The thermal degradation of the pure polyurethane foam and the hybrid composites was monitored by TGA, as displayed in Figure 15a. The thermo-oxidative decomposition temperatures for 5, 10, 50, and 70% weight loss were evaluated from the TGA curves, as listed in Table 4. In the case of the PU foams, thermal degradation occurred in three stages. In the first stage, at about 10% loss of the initial mass, dissociation of urethane bonds occurs at temperatures of 150 to 330 °C [70,71]. The second degradation step, corresponding to a weight loss of about 50%, occurs between 330 and 400 °C and is attributed to the decomposition of the soft polyol segments [70,72]. The third degradation step, associated with the degradation of the fragments generated during the second stage, occurs at about 500 °C, corresponding to 80% loss of mass [70,72].
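Decomposition temperatures at fixed weight-loss levels (T5%, T10%, ...) are read off the TGA curve by interpolation. A sketch with a synthetic residual-mass curve (not the measured thermograms), using `numpy.interp`:

```python
import numpy as np

def temp_at_weight_loss(temperature_C, mass_pct, loss_pct):
    """Interpolate the temperature at which the sample has lost loss_pct %.

    mass_pct is the residual mass (%) as a monotonically decreasing curve;
    np.interp needs increasing x, so we interpolate on the reversed curve.
    """
    target = 100.0 - loss_pct
    return float(np.interp(target, mass_pct[::-1], temperature_C[::-1]))

# Synthetic residual-mass curve:
T = np.array([100.0, 200.0, 300.0, 400.0, 500.0, 600.0])
m = np.array([100.0, 98.0, 85.0, 45.0, 25.0, 20.0])
print(round(temp_at_weight_loss(T, m, 5.0), 1))  # temperature at 5 % loss
```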
It can be observed that the addition of the fillers affects the thermal stability of the RPUFs (Table 3). The POSS used as foam modifiers are themselves characterized by higher thermal stability, with percentage mass losses at much higher temperatures than PU-0. However, in the presence of POSS, an acceleration of mass loss at the initial stage of degradation was observed.
The reduction of thermal stability can be attributed to the non-homogeneous dispersion of POSS and to changes in cross-link density [51]. This is confirmed by the SEM images, which clearly show that the presence of POSS increases the heterogeneity of the RPUF morphology. In further degradation steps, the modified foams are slightly more stable and are characterized by mass losses obtained at similar temperatures as the pure foam, with maximum mass-loss rates at approximately 314-322 °C and 551-584 °C, which is related to the reaction of oxygen with hydroperoxides that are themselves unstable and decay, creating more free radicals [73]. In addition, it can be seen that the amount of char residue for the POSS-filled foams is increased compared to PU-0. This results in more stable char layers that can protect the material from further decomposition and, in turn, increase thermal stability. The change is also visible on the DTG curves, the first derivative of the TGA signal, which represent the rate of decomposition during heating. It can be seen in Figure 15b that the degradation rate of the POSS-modified foams is slightly lower than that of the PU-0 foam.

Dimensional Stability, Contact Angle and Water Absorption

For RPUFs, which are often used as construction materials, dimensional stability as well as affinity for water are very important parameters. Table 5 and Figure 16 show the dimensional stability of the foams at low (−20 °C) and high (70 °C) temperature, respectively. The variability of dimensions at low temperature was slightly higher than at high temperature for the same foam samples.
Furthermore, the percentage linear changes in length, width, and thickness after exposure indicate that the addition of POSS generally resulted in smaller dimensional changes of the modified foams compared to the reference foam, indicating a stabilizing effect of POSS. This is particularly evident at elevated temperature, where for the TS-POSS-modified foams the dimensional stability improved by an average of 20% in comparison with the PU-0. The only exception to this trend is the POSS-OH-modified sample, which shows slightly larger changes in linear dimensions compared to the reference sample, especially at reduced temperature. However, according to the industrial standard, PU panels tested at 70 °C should show less than 3% linear change; in each case, the dimensional stability of the PU foams is thus still within commercially acceptable limits [74].
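The percentage linear change used for dimensional stability is simply the relative change of each dimension after exposure. A sketch with hypothetical before/after panel measurements, checked against the 3% industrial limit mentioned above:

```python
def linear_change_pct(before_mm, after_mm):
    """Percentage linear change of one dimension after thermal exposure."""
    return (after_mm - before_mm) / before_mm * 100.0

# Hypothetical length, width, thickness of a panel before and after 70 degC aging:
before = [100.0, 100.0, 25.0]
after = [99.2, 99.5, 24.9]
changes = [round(linear_change_pct(b, a), 2) for b, a in zip(before, after)]
print(changes)                                   # -> [-0.8, -0.5, -0.4]
print(all(abs(c) < 3.0 for c in changes))        # within the 3 % limit -> True
```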
Polyhedral oligomeric silsesquioxanes significantly affected the hydrophobicity of the foams (Figures 17 and 18). Regarding water absorption, it is notable that the foams modified by POSS absorb less water than the reference sample. This effect is attributed to the greater surface roughness of the foams with smaller pore sizes, as well as to the lack of large surface pores in which water droplets can be stored.
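Water absorption is the relative mass gain after immersion, expressed as a percentage of the dry mass. A minimal sketch with hypothetical sample masses:

```python
def water_absorption_pct(dry_mass_g, wet_mass_g):
    """Water uptake after immersion as a percentage of the dry mass."""
    return (wet_mass_g - dry_mass_g) / dry_mass_g * 100.0

# Hypothetical foam sample: 10.0 g dry, 11.12 g after 24 h immersion:
print(round(water_absorption_pct(10.0, 11.12), 1))  # -> 11.2 %
```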
Lower water absorption indicates greater hydrophobicity, which is also well illustrated by the contact angles of foam surfaces with water ( Figure 18). The most hydrophobic foam was modified with POSS-OH, which achieved a contact angle of 140 • and water absorption at the lowest level (11.2% after 24 h). This is due to the presence of non-polar side chains in the corners of silsesquioxane cages, which reduces the surface energy of the entire system. with POSS-OH, which achieved a contact angle of 140° and water absorption at the lowest level (11.2% after 24 h). This is due to the presence of non-polar side chains in the corners of silsesquioxane cages, which reduces the surface energy of the entire system. Conclusions RPUFs were successfully reinforced using POSS with hydroxyl and amino groups. The impact of POSSs on thermal properties, dynamic mechanical properties, physico-mechanical properties (compressive strength, three-point bending test, impact strength apparent density), foaming parameters and morphology of RPUFs was examined. The presented results indicate that the addition of AP-POSS, TS-POSS, and OH-POSS in the range of 0.5 wt.% influences the morphology of analyzed foams and consequently their further mechanical and thermal properties. It was noticed that RPUFs modified with AP-POSS are characterized by smaller and more regular polyurethane cells. This suggests better compatibility between PU foam matrix and AP-POSS compared with other fillers. This results in significant improvement of physico-mechanical properties and thermal stability of composites with AP-POSS. For example, compared to the RPUFs modified with OH-POSS and TS-POSS, composition with 0.5 wt.% of the AP-POSS showed greater compressive strength (351 kPa) and higher flexural strength (0.469 MPa). However, the highest hydrophobicity showed OH-PU foams, which were characterized by the greatest contact angle (140°) and less water uptake (11.2% after 24 h). 
with POSS-OH, which achieved a contact angle of 140° and water absorption at the lowest level (11.2% after 24 h). This is due to the presence of non-polar side chains in the corners of silsesquioxane cages, which reduces the surface energy of the entire system. Conclusions RPUFs were successfully reinforced using POSS with hydroxyl and amino groups. The impact of POSSs on thermal properties, dynamic mechanical properties, physico-mechanical properties (compressive strength, three-point bending test, impact strength apparent density), foaming parameters and morphology of RPUFs was examined. The presented results indicate that the addition of AP-POSS, TS-POSS, and OH-POSS in the range of 0.5 wt.% influences the morphology of analyzed foams and consequently their further mechanical and thermal properties. It was noticed that RPUFs modified with AP-POSS are characterized by smaller and more regular polyurethane cells. This suggests better compatibility between PU foam matrix and AP-POSS compared with other fillers. This results in significant improvement of physico-mechanical properties and thermal stability of composites with AP-POSS. For example, compared to the RPUFs modified with OH-POSS and TS-POSS, composition with 0.5 wt.% of the AP-POSS showed greater compressive strength (351 kPa) and higher flexural strength (0.469 MPa). However, the highest hydrophobicity showed OH-PU foams, which were characterized by the greatest contact angle (140°) and less water uptake (11.2% after 24 h). Conclusions RPUFs were successfully reinforced using POSS with hydroxyl and amino groups. The impact of POSSs on thermal properties, dynamic mechanical properties, physico-mechanical properties (compressive strength, three-point bending test, impact strength apparent density), foaming parameters and morphology of RPUFs was examined. 
The presented results indicate that the addition of AP-POSS, TS-POSS, and OH-POSS in the range of 0.5 wt.% influences the morphology of analyzed foams and consequently their further mechanical and thermal properties. It was noticed that RPUFs modified with AP-POSS are characterized by smaller and more regular polyurethane cells. This suggests better compatibility between PU foam matrix and AP-POSS compared with other fillers. This results in significant improvement of physico-mechanical properties and thermal stability of composites with AP-POSS. For example, compared to the RPUFs modified with OH-POSS and TS-POSS, composition with 0.5 wt.% of the AP-POSS showed greater compressive strength (351 kPa) and higher flexural strength (0.469 MPa). However, the highest hydrophobicity showed OH-PU foams, which were characterized by the greatest contact angle (140 • ) and less water uptake (11.2% after 24 h).
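The % linear change criterion used throughout the dimensional-stability discussion above is simple arithmetic; the following minimal Python sketch (with hypothetical specimen dimensions, not values from Table 5) shows the computation and the <3% industrial acceptance check:

```python
def linear_change_pct(initial_mm: float, final_mm: float) -> float:
    """Percent linear change of one dimension after conditioning."""
    return (final_mm - initial_mm) / initial_mm * 100.0

# Hypothetical foam specimen dimensions (mm) before/after exposure at 70 °C
dims_before = {"length": 100.0, "width": 100.0, "thickness": 25.0}
dims_after = {"length": 99.2, "width": 99.3, "thickness": 24.8}

changes = {k: linear_change_pct(dims_before[k], dims_after[k]) for k in dims_before}
# Industrial acceptance: every |linear change| must stay below 3%
acceptable = all(abs(v) < 3.0 for v in changes.values())
```

The same check is applied per dimension (length, width, thickness) and per conditioning temperature; a panel passes only if all dimensions stay inside the limit.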
MiR-291a/b-5p inhibits autophagy by targeting Atg5 and Becn1 during mouse preimplantation embryo development microRNA-290 (miR-290) clusters are highly expressed in mouse preimplantation embryos, but their specific role and regulatory mechanisms in the development of mouse preimplantation embryos remain unclear. Here, we found that miR-291a-5p and miR-291b-5p, as mature microRNA molecules of the miR-290 clusters, were dynamically expressed in mouse preimplantation embryos. The expression of miR-291a-5p and miR-291b-5p in mouse embryos increased during the 2–4-cell stages and was accompanied by decreasing mRNA expression of the autophagy-related genes Atg5 and Becn1. Immunofluorescence studies showed that the formation of autophagosomes and autophagic lysosomes increased in the 1-cell stage, decreased in the 2-cell stage, and rapidly decreased during the 4–8-cell stage. Transmission electron microscopy (TEM) also demonstrated that there were autophagosomes with a double-layer membrane structure in the cytoplasm of fertilized eggs, whereas this structure was not observed in the unfertilized oocyte cytoplasm. Moreover, miR-291a/b-5p inhibited the protein and mRNA expression of Atg5 and Becn1 in NIH/3T3 cells. A dual-luciferase reporter assay confirmed that miR-291a/b-5p directly targeted the Atg5 and Becn1 genes. MiR-291a/b-5p repressed the rapamycin-induced autophagy-related conversion of LC3-I to LC3-II, ultimately inhibiting the formation of autophagosomes. Furthermore, microinjection of mouse zygote cytoplasm with miR-291a-5p inhibitors increased the mRNA expression of Atg5 and Becn1 in mouse embryos and facilitated the first cleavage of mouse embryos and blastocyst formation. Our results suggest an important role of miR-291a/b-5p during mouse preimplantation embryo development. Introduction Mammalian embryo development begins with the fusion of sperm and ovum.
When the sperm is combined with the ovum, the fertilized oocyte is activated and early embryo development begins. After fertilization, the number of cells in the embryo increases rapidly, and the number of nuclei and amount of DNA increase exponentially, but the total amount of cytoplasm remains constant. 1 In fact, mammals begin to prepare and accumulate a set of maternal mRNAs and proteins required for embryonic development as early as oocyte growth and maturation. After fertilization, the maternal mRNA and protein stored in the cytoplasm provide important nutritional support for embryonic development, which is also unique to preimplantation embryo development. After fertilization, these maternal stocks are rapidly degraded and replaced by new substances encoded by the genes of the fertilized ovum. 2 In mice, fertilization activates the degradation of the transcripts stored in the ovum, which is approximately 90% complete by the 2-cell phase. 3 Autophagy is a highly conserved and critical metabolic degradation system in the cell. Through the autophagy system, cells can cope with hunger, hypoxia, immune responses, etc., and gain a survival advantage. 4 As mentioned earlier, after fertilization, the cytoplasmic content of the embryo undergoes an "oocyte-to-embryo transition" process in which maternal RNA and proteins are degraded and used to provide amino acids and energy. 5 The embryonic genome performs new RNA and protein synthesis as well as organelle remodeling. Previous studies have shown that many of the maternal proteins in embryos are degraded via the ubiquitin–proteasome system. 6 Recent research suggests that autophagy, as another important degradation system, participates in the turnover of cytosolic proteins and plays an important role in this process. [7][8][9][10] The Atg5 and Becn1 genes are two critical regulatory molecules in the process of autophagy.
Atg5-decient oocytes, due to limited autophagy function, cannot develop further when the zygote develops to the 4-8-cell embryo phase. 9 Becn1-decient mutant mouse embryos have been shown to exhibit signicant developmental delay and death at E7.5 days. 11 The embryonic body composed of undifferentiated embryonic stem cells, aer the deletion of Atg5 or Becn1, has defects in the function of recruiting and clearing dead cells, exhibits low intracellular ATP levels and is unable to develop cavities. 12 These studies suggest that the autophagy-related genes Atg5 and Becn1 are important for early embryo development, energy supply, and maintenance of intracellular homeostasis. MicroRNA (miRNA, miR) is a class of small single-stranded non-coding RNA molecules of approximately 22 nucleotides in length. The function of miRNA in inhibiting gene expression is based on its complementary to the 3 0 untranslated region (3 0 UTR) of target mRNA through its seed sequence region. Binding to the target mRNA, miRNA acts to degrade the target mRNA or inhibit the translation of the target protein. 13 The miR-290 gene cluster, cloned from mouse embryonic stem cells, ranges from miR-290 to miR-295 and is the most abundantly expressed miRNA in mouse embryonic stem cells. 14 The miR-290 gene cluster is also abundantly expressed in mouse preimplantation embryos. 15 Aer implantation in mouse embryos, the expression level of the miR-290 cluster rapidly declines to no expression. 16 In various organs of adult rats, including heart, liver, spleen and lung, the miR-290 cluster is not expressed either. 17 Recent studies have shown that the miR-290 cluster promotes mouse embryonic stem cell proliferation 18 and also promotes mesoderm and endoderm differentiation and development by targeting Pax6. 19 During mouse embryonic development, vitellicle and somite development defects occur in miR-290 cluster-decient mice. 
20 These studies suggest that the miR-290 gene cluster may play an important role in regulating the development and differentiation of preimplantation embryos. However, the mechanism by which the miR-290 clusters act on embryonic development through the autophagy system is not well understood. In the present study, we investigated the dynamic expression profiles of miR-291a/b-5p as well as the Atg5 and Becn1 genes in preimplantation embryos. We found that miR-291a/b-5p inhibited autophagosome formation by targeting the autophagy-related genes Atg5 and Becn1. We also found that inhibition of miR-291a-5p by embryonic cytoplasmic microinjection of miR-291a-5p inhibitors promoted the development of preimplantation embryos from the 1-cell phase to the 2-cell phase. The male mice used for mating were … weeks old and were replaced after more than 3 months. The female mice used for superovulation were 4 weeks old. The experimental mice were housed in the SPF (Specific Pathogen Free) animal room of the Fourth Military Medical University, with a constant 12 h light/dark cycle. Cell culture and transfection The mouse embryo-derived fibroblast cell line NIH/3T3 (gifted by Dr Shan Wang, Department of Biochemistry and Molecular Biology, Fourth Military Medical University) was cultured in DMEM medium supplemented with 10% fetal calf serum (FBS) and a penicillin–streptomycin mixture (penicillin 100 U ml⁻¹, streptomycin 100 µg ml⁻¹). The cells were cultured in a saturated-humidity incubator at 37 °C and 5% CO₂. Cell transfections were performed with Lipofectamine® 2000 transfection reagent (Invitrogen) according to the reagent manual. Induction of superovulation in mice and acquisition of MII oocytes Female mice were intraperitoneally injected with pregnant mare's serum gonadotropin (PMSG, Ningbo Second Hormone Factory, China) at a dose of 8 IU per mouse. After 46–48 h, the female mice were intraperitoneally injected with human chorionic gonadotropin (HCG, Ningbo Second Hormone Factory, China) at a dose of 8 IU.
MII oocytes were collected 15 h after HCG injection. Specifically, female mice were sacrificed by cervical dislocation. The abdominal cavity was dissected and the bilateral oviducts of the mice were separated. The oviducts were placed in a droplet of M2 medium prepared in advance in a 35 mm culture dish pre-warmed at 37 °C. The ampullae of the fallopian tubes were torn under a stereo microscope, so that the cell-encapsulated oocyte masses were released. The oocyte masses were digested with hyaluronidase (working concentration 300 µg ml⁻¹, Sigma-Aldrich) to remove the cumulus granulosa cells; then intact and refractive MII oocytes were collected. Collection of mouse embryos at different stages In order to collect a large number of preimplantation mouse embryos at different developmental stages, the mice were divided into 4 groups, each of which contained 10 male mice and 10 superovulated female mice. Group I mice were treated with PMSG or HCG on day 1 or day 3, respectively; group II mice were treated on day 2 or day 4; group III mice were treated on day 3 or day 5; group IV mice were treated on day 4 or day 6. The drugs PMSG and HCG were intraperitoneally injected into the mice, and the female mice that had been successfully mated were housed in separate cages. On the 7th day, the preimplantation embryos of the mice at different developmental stages were collected. To collect mouse embryos at the 1-cell stage, embryos were collected 22 h after HCG injection, using the same method as that for the MII oocytes. To collect mouse embryos between the 2-cell stage and the morula or blastocyst stage, each female mouse was sacrificed by cervical dislocation, the abdominal cavity was dissected, and the fallopian tube and uterine horn were separated and placed in M2 medium pre-warmed at 37 °C.
Finally, a syringe needle was inserted into the end of the fallopian tube, the fallopian tube was gently rinsed with M2, and the embryos or blastocysts were collected. Construction of the dual-luciferase reporter gene vector To construct the Atg5 or Becn1 dual-luciferase reporter vector, target sequences of Atg5 or Becn1 were predicted using the online bioinformatics software TargetScan 6.2 (http://www.targetscan.org/). The target sequences were synthesized (Sangon Biotech, Shanghai) and inserted into the SacI/XbaI sites of the dual-luciferase reporter plasmid pmirGLO (Promega) to obtain the Atg5 or Becn1 reporter plasmids. The target sequences were as follows (lowercase letters indicate the sequence of the mutation sites): Atg5-WT-F: … Dual-luciferase activity assay The cell culture medium was removed and the cells were washed with 1× PBS. 100 µl per well of 1× PLB passive lysis buffer (Promega) was added to the cells, and the cells were lysed on a room-temperature shaker for 15 min. Then the cell lysate of each well was transferred to a 1.5 ml tube and centrifuged at 12 000 rpm for 5 min. The supernatant was collected to detect the luciferase activity using a Promega GloMax 20/20 Luminometer (Promega). Real-time PCR Total RNA was extracted using a PicoPure® RNA Isolation Kit (Invitrogen) according to the manufacturer's instructions. Reverse transcription and real-time PCR were performed using the SYBR® PrimeScript miRNA RT-PCR Kit (Takara) according to the manufacturer's instructions. The fold change of expression was analyzed using the 2^(−ΔΔCt) method. U6 or actin was used as an internal control for quantification. The primers used for PCR were as follows: miR-291a-5p F: … Western blot The cell lysates were centrifuged at 12 000 rpm for 10 min at 4 °C. The supernatant was collected and the protein concentration of each sample was adjusted to 5 mg ml⁻¹. The same amount of protein was separated on a 10% SDS-PAGE gel and transferred to nitrocellulose membranes.
The membranes were blocked with 5% skim milk at room temperature for 1 h, and then incubated with the indicated primary antibodies, including rabbit anti-Atg5 (A0856, Sigma), rabbit anti-Becn1 (HPA028949, Sigma) or rabbit anti-LC3-I/II (ABC929, Sigma), overnight at 4 °C. After washing the membrane 3 times, 5 min per wash, the membranes were incubated with the secondary antibodies for 1 h at room temperature; anti-actin (A4700, Sigma) was used as the loading control. The bands on the membranes were visualized using an Immobilon Western Chemiluminescent HRP Substrate Chemiluminescence Kit (Millipore). The images were quantified using ImageJ 1.47 software. Immunofluorescence For fixation and infiltration, the collected oocytes and preimplantation embryos at each stage were placed in a dark wet box with 1% paraformaldehyde (PFA, Sigma) and 0.2% Triton X-100 (Sigma) for 1 h at room temperature. The washing solution (3% BSA/PBS) was prepared, and the infiltrated specimens were then washed sequentially in droplets 5 times, 5 min per wash. After washing, the samples were placed in the blocking solution (3% BSA, 10% FBS/PBS) at 4 °C overnight. Then the samples were incubated with primary antibody (1:500 dilution of rabbit anti-LC3, L8918, Sigma; 1:500 dilution of mouse anti-LAMP2, SAB1402250, Sigma) at 4 °C overnight. After incubation with the primary antibody, the samples were washed 3 times and then incubated with secondary antibodies (1:400 dilution of anti-rabbit, 1:400 dilution of anti-mouse, Sigma) at 4 °C overnight. Finally, the samples were observed under a fluorescence microscope (Olympus) in a dark room and photographed. Transmission electron microscopy (TEM) The mouse oviduct of each embedded sample was prepared, and the oviduct was transferred into pre-cooled 3% glutaraldehyde and fixed at 4 °C overnight. Then the samples were treated using the following steps: 1% osmic acid staining and fixation, ethanol gradient dehydration, acetone penetration, and epoxy resin embedding and trimming.
The semi-thin sections were prepared under light microscopy, and then the ultrathin sections were observed and captured using a JEM-2000EX transmission electron microscope. Cytoplasmic microinjection To prepare the microinjection plate, M2 medium preheated to 37 °C was added to the bottom of a 35 mm plate placed on a 37 °C Thermo Plate thermostat (Tokai Hit, Japan). A Microloader capillary tip (Eppendorf) was used to backfill the injection needle with 3 µl of microinjection reagent. The needle was then installed on the FemtoJet 4i microinjection operator (Eppendorf) and the "Clean" button was pressed to unblock the needle. About 30 mouse embryos were transferred into the injection plate and microinjection was performed in batches. Microinjection of mouse embryonic cytoplasm was performed in a high-power field of the injection system by selecting a zygote that had a second polar body, a female pronucleus and a male pronucleus. After the injection was completed, the injured embryos were removed. The morphologically intact embryos were selected for further in vitro culture. Statistical analysis Data are expressed as mean ± standard deviation. Statistical analysis was performed with Student's t test, and p < 0.05 was considered statistically significant. GraphPad Prism 5 was used for statistical analysis, and Image-Pro Plus 6.0 was used for graphical analysis. Results Dynamic expression of miR-291a/b-5p, Atg5 and Becn1 in mouse MII oocytes and preimplantation embryos First, we detected the dynamic expression of miR-291a-5p and miR-291b-5p at different developmental stages of mouse preimplantation embryos using a real-time PCR assay. As shown in Fig. 1A, after fertilization, the expression of miR-291a-5p and miR-291b-5p in the 1-cell-phase embryos decreased compared with that in oocytes. Subsequently, the expression of miR-291a/b-5p increased significantly from the 4-cell phase to the blastocyst phase.
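The real-time PCR fold changes reported in these results were computed with the 2^(−ΔΔCt) method described in the methods (U6 or actin as internal control). A minimal sketch of that arithmetic, with hypothetical Ct values rather than measured ones:

```python
def fold_change_ddct(ct_target_sample: float, ct_ref_sample: float,
                     ct_target_calibrator: float, ct_ref_calibrator: float) -> float:
    """Relative expression by the 2^(-ΔΔCt) method: normalize the target
    gene to the reference gene in each group, then compare the groups."""
    delta_ct_sample = ct_target_sample - ct_ref_sample          # ΔCt, sample
    delta_ct_calibrator = ct_target_calibrator - ct_ref_calibrator  # ΔCt, calibrator
    ddct = delta_ct_sample - delta_ct_calibrator                # ΔΔCt
    return 2.0 ** (-ddct)

# Hypothetical Ct values: target gene vs. actin, 2-cell embryos vs. oocytes
fold = fold_change_ddct(26.0, 18.0, 24.0, 18.0)
# ΔΔCt = (26-18) - (24-18) = 2  →  relative expression = 2^-2 = 0.25
```

A fold change below 1 corresponds to lower expression in the sample than in the calibrator group, matching how the stage-by-stage comparisons against oocytes are read here.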
Then we measured the expression of the autophagy-related genes Atg5 and Becn1 at different developmental stages of the mouse preimplantation embryos. The results demonstrate that after fertilization, the expression level of Atg5 mRNA in the 1-cell phase of the fertilized ova was significantly higher than that in the oocytes. The expression abundance decreased significantly at the 2-cell phase until the blastocyst phase (Fig. 1B). In contrast, the overall expression abundance of Becn1 mRNA was much lower at the 1-cell phase than in the oocytes. During mouse preimplantation embryonic development, the overall expression of Becn1 increased at the 2-cell phase, then significantly declined at the 4-cell phase and gradually decreased until the blastocyst phase (Fig. 1C). A comprehensive analysis was performed to compare the expression of miR-291a-5p, miR-291b-5p, Atg5 and Becn1 at the different developmental stages of the mouse preimplantation embryos. The results showed that after fertilization, the expression of miR-291a-5p and miR-291b-5p increased gradually with the development of the mouse preimplantation embryos, and increased significantly at the 4-cell phase. The expression level of Atg5 mRNA increased significantly in the 1-cell phase of the fertilized ovum, and then gradually decreased from the 2-cell phase to the blastocyst phase (Fig. 1D). In contrast, the expression level of Becn1 mRNA was lower than that in the oocytes after fertilization. Becn1 expression increased at the 2-cell phase, and then gradually decreased until the blastocyst phase (Fig. 1E). These results suggest that the expression trend of miR-291a/b-5p was opposite to that of Atg5 or Becn1 during the development of the preimplantation embryos.
Dynamic changes of autophagy in mouse MII oocytes and preimplantation embryos To observe the formation of autophagosomes in mouse preimplantation embryos, the collected embryos were transferred into the swollen oviduct ampullae of the mice, and the dispersed embryos were embedded and fixed for TEM analysis (Fig. 2A). The ultrastructure of the mouse preimplantation embryonic cells was examined using TEM. The results show that the autophagic double-layer membrane structure appeared in the cytoplasm of the fertilized ovum at the 1-cell phase after fertilization, indicating the formation of initial autophagosomes (Fig. 2B). In the cytoplasm of the oocytes, the lysosomal structure was observed, but the double-layer membrane structure of autophagic vacuoles could not be detected (Fig. 2C). To investigate the dynamic changes of autophagy in preimplantation embryos, immunofluorescence was performed using an LC3B primary antibody labeled with an Alexa Fluor® 488 green fluorescent secondary antibody and a LAMP2 primary antibody labeled with an Alexa Fluor® 594 red fluorescent secondary antibody. As shown in Fig. 2D, a green spot shows an LC3 aggregate, which represents the formation of autophagic vacuoles. A red spot marks a site of accumulated lysosomal-associated membrane protein (LAMP), representing the location of a lysosome. A position where a green spot coincides with a red spot represents the formation of an autophagic lysosome, which results from the fusion of an autophagic vacuole and a lysosome. The results show that the production of autophagic lysosomes increased after fertilization, and then decreased significantly after the 2-cell phase. During development at the 4–8-cell phase, the autophagic lysosomes decreased continuously and were difficult to detect at the blastocyst phase.
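Significance calls such as "decreased significantly" rest on the Student's t test described in the statistical-analysis section (p < 0.05 on mean ± SD data). A minimal stdlib-only sketch of the pooled two-sample t statistic, using hypothetical triplicate measurements; in practice the p-value is then read from the t distribution (e.g. via scipy.stats):

```python
from statistics import mean, stdev

def two_sample_t(a: list, b: list) -> tuple:
    """Pooled two-sample Student's t statistic and degrees of freedom."""
    na, nb = len(a), len(b)
    va, vb = stdev(a) ** 2, stdev(b) ** 2          # sample variances
    sp2 = ((na - 1) * va + (nb - 1) * vb) / (na + nb - 2)  # pooled variance
    t = (mean(a) - mean(b)) / (sp2 * (1.0 / na + 1.0 / nb)) ** 0.5
    return t, na + nb - 2

# Hypothetical normalized intensities, fertilized vs. unfertilized groups
t, df = two_sample_t([1.8, 2.1, 1.9], [1.0, 1.1, 0.9])
# |t| is then compared against the t distribution with df degrees of freedom
```

With the small n = 3 groups typical of these experiments, the pooled form assumes roughly equal group variances; Welch's variant drops that assumption.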
MiR-291a/b-5p inhibits the formation of autophagosomes by targeting Atg5 and Becn1 To further explore the relationship between miR-291a/b-5p and the Atg5 or Becn1 genes, we employed the online bioinformatics software TargetScan 6.2 for microRNA target prediction to analyze the potential miRNA binding sites in the 3′ untranslated region (3′UTR) of Atg5 and Becn1. As shown in Fig. 3A, the seed regions of miR-291a-5p and miR-291b-5p were predicted to bind positions 527–534 in the 3′UTR of Atg5 or positions 482–488 in the 3′UTR of Becn1 mRNA. To verify the inhibitory effect of miR-291a/b-5p on the target genes, wildtype/mutant luciferase reporters containing the target/mutant region of Atg5 or Becn1 were constructed and a dual-luciferase activity assay was performed in NIH/3T3 cells. The results showed that miR-291a-5p and miR-291b-5p significantly inhibited the luciferase activity of the wildtype Atg5 and Becn1 reporters but not the mutated reporters (Fig. 3B). To determine the effect of miR-291a/b-5p on the mRNA expression of the Atg5 and Becn1 genes, miR-291a-5p or miR-291b-5p mimics were transfected into NIH/3T3 cells and the mRNA levels of Atg5 and Becn1 were examined using real-time PCR. After transfection, the expression levels of miR-291a-5p or miR-291b-5p were obviously upregulated compared with the miRNA negative control (miR-NC) transfected group (Fig. 3C). Upon overexpressing miR-291a/b-5p in NIH/3T3 cells, the expression levels of Atg5 or Becn1 mRNA were significantly inhibited in the miR-291a/b-5p mimic-transfected groups compared to the NC group (Fig. 3D). To further determine the effect of miR-291a/b-5p on the protein expression of Atg5 and Becn1, NIH/3T3 cells were transfected with miR-NC, miR-291a-5p or miR-291b-5p mimics. Then the transfected cells were treated with rapamycin for 24 h and the extracted proteins were collected for western blotting.
Fig. 1 Dynamic expression of (A) miR-291a/b-5p, (B) Atg5 and (C) Becn1 mRNA at different developmental stages of mouse preimplantation embryos measured using real-time PCR assay. # P < 0.05, ## P < 0.01, ### P < 0.001 vs. miR-291b-5p expression in oocytes; *P < 0.05, **P < 0.01, ***P < 0.001 vs. miR-291a-5p, Atg5 or Becn1 expression in oocytes. Expression trends between miR-291a/b-5p and (D) Atg5 or (E) Becn1 in mouse preimplantation embryos. This journal is © The Royal Society of Chemistry 2019.

The results showed that miR-291a-5p and miR-291b-5p slightly downregulated Atg5 protein expression in cells after rapamycin-induced autophagy (Fig. 3E). In contrast, both miR-291a-5p and miR-291b-5p inhibited the expression of Becn1 protein. Among them, miR-291a-5p exhibited a more significant inhibitory effect than miR-291b-5p (Fig. 3F). Taken together, these results suggest that Atg5 and Becn1 are potential targets of miR-291a/b-5p. During the formation of autophagosomes, free LC3-I in the cytosol is modified and converted to LC3-II and finally aggregates on the membrane of autophagosomes. Therefore, LC3 is an important protein in the process of autophagosome formation. To investigate the effect of miR-291a/b-5p on autophagosomes in NIH/3T3 cells, we transfected NIH/3T3 cells with miR-291a/b-5p mimics and then induced autophagy using rapamycin. The ratio of LC3-I to LC3-II protein was detected by western blot assay to monitor the transformation of LC3-I to LC3-II, which reflected the changes of autophagosome formation. As shown in Fig. 3G, rapamycin induced the conversion of LC3-I to LC3-II in cells, while miR-291a-5p and miR-291b-5p downregulated this conversion. The downregulation effect of miR-291a-5p was more obvious than that of miR-291b-5p. To further monitor the extent of autophagosome formation in cells, we observed the LC3-aggregated particles using fluorescence microscopy.
EGFP-LC3, the green uorescent proteinlabeled LC3B eukaryotic expression vector, was co-transfected into NIH/3T3 cells with miR-NC or miR-291a-5p mimics. Aer treatment with rapamycin in cells for 24 h, the number of LC3 aggregated cells was observed under a uorescence microscope. The results showed that the numbers of LC3 aggregated cells increased signicantly aer induction of autophagy by rapamycin, while the transfection of miR-291a-5p signicantly inhibited the numbers of LC3 aggregated positive cells (Fig. 3H). Collectively, these results suggest that miR-291a/b-5p inhibited the conversion of LC3-I to LC3-II by downregulating Atg5 and Becn1 expression, ultimately inhibiting the formation of autophagosomes. 3 (A) Bioinformatics software TargetScan 6.2 was used to predict the binding region of miR-291a/b-5p in the 3 0 UTR sequence of Atg5 or Becn1. (B) Dual-luciferase reporter assay was performed in NIH/3T3 cells to verify the inhibition effect of miR-291a/b-5p targeting Atg5 or Becn1. (C) Real-time PCR was performed to obtain the expression levels of miR-291a-5p or miR-291b-5p in cells after transfection of miR-291a-5p or miR-291b-5p mimics in NIH/3T3 cells. (D) mRNA expression levels of Atg5 or Becn1 were detected in the NIH/3T3 cells transfected with miR-291a-5p, miR-291b-5p or miRNA negative control (miR-NC) mimics. After transfecting miR-NC, miR-291a-5p or miR-291b-5p mimics in NIH/ 3T3 cells, cells were treated with rapamycin for 24 h. Western blot was performed to detect the protein expression of (E) Atg5 or (F) Becn1. Actin was used as an internal control, which was set to 1. (G) After co-transfecting miR-NC, miR-291a-5p or miR-291b-5p as well as EGFP-LC3 plasmid in NIH/3T3 cells, cells were treated with rapamycin for 24 h, and the changes of LC3-I/II in cells were detected by western blot. Actin was used as an internal control, which was set to 1. 
(H) After co-transfecting miR-NC, miR-291a-5p and the EGFP-LC3 plasmid into NIH/3T3 cells, the cells were treated with rapamycin for 24 h. The LC3-aggregated particles (white arrow) were observed under a fluorescence microscope. Three fields were randomly selected to calculate the proportion of EGFP-LC3 aggregated particles (n = 3, *P < 0.05, **P < 0.01). MiR-291a-5p inhibitors promote the development of preimplantation embryos To explore the role of miR-291a-5p in the development of preimplantation embryos, microinjection of mouse zygote cytoplasm with 5′-FAM-modified miR-291a-5p inhibitors was performed at the 0.5 days post-coitum (dpc) stage. After injection, the embryos were cultured in vitro, and the distribution of the miR-291a-5p inhibitors was observed using a fluorescence microscope (Fig. 4A). The images show that the miR-291a-5p inhibitors were evenly distributed in the embryonic cytoplasm 2.5 h after microinjection, and were further distributed into each blastomere following fission of the embryo. Green fluorescence could be observed at 24 h and 48 h after injection, but the fluorescence intensity was weaker than that at 2.5 h after injection. We subsequently examined the effect of the miR-291a-5p inhibitors on miR-291a-5p expression in mouse embryos as well as on the expression of Atg5 and Becn1 mRNA in the embryonic cytosol. Real-time PCR results demonstrated that the expression of miR-291a-5p in the inhibitor group was significantly lower than that in the other three control groups (Fig. 4B, p < 0.05 compared with the TE buffer group or the scramble inhibitor group, p < 0.01 compared with the normal culture control group). After injection with miR-291a-5p inhibitors, the expression of Atg5 mRNA in the embryo cytoplasm was higher than that in the other control groups, but was only statistically significant compared with the scramble inhibitor group (Fig. 4C, p < 0.05). In contrast, the expression of Becn1 mRNA was significantly higher than that in the other control groups (Fig. 4D, p < 0.05 compared with the TE group or the scramble inhibitor group, p < 0.01 compared with the normal culture control group). To observe the effect of miR-291a-5p inhibitors on mouse preimplantation embryo development, we finally collected mouse zygotes and performed cytoplasmic microinjection of 1-cell-phase mouse embryos at the 0.5 dpc stage. After the injection, the rates of embryo development to the 2-cell phase and blastocyst phase were observed at 1.5 dpc and 4.5 dpc, respectively. The results demonstrate that when the mouse embryos developed from the 1-cell to the 2-cell phase, the embryo cleavage rate in the miR-291a-5p inhibitor group was higher than that in the other control groups (p < 0.05). At 4.5 dpc, the blastocyst formation rate in the miR-291a-5p inhibitor group was statistically higher than that in the other control groups (p < 0.05) (Fig. 4E and F). These results indicate that microinjection of miR-291a-5p inhibitors significantly inhibited the expression of miR-291a-5p in the embryonic cytoplasm, increased the development rate of mouse embryos from the 1-cell stage to the 2-cell stage, and promoted blastocyst formation. Discussion During ovum development, a large number of maternal substances, including mRNA and protein, are stored in the oocytes. After fertilization, these substances are rapidly degraded, and the genes of the fertilized ovum encode many new intracytoplasmic substances. This process, by which the cytoplasmic content of the ovum is transformed into the cytoplasmic content of the fertilized ovum, is called the oocyte-to-embryo transition or the maternal-to-zygotic transition. 21 This transition mode is conserved across multiple species and is critical for embryonic development. If the cytoplasmic material cannot be degraded, it will impair the further development of the embryo.
The known ubiquitin/proteasome-mediated pathways for protein degradation are not sufficient to clean up the maternal products during the transformation process. 22 Recent studies have suggested that autophagy degradation systems play an important role in this cleanup process. 23,24 In the present study, we demonstrated that the inhibition of early embryonic miR-291a-5p promotes the expression of the autophagy-related genes Atg5 and Becn1, which may play a role in promoting autophagy, accelerating the degradation of maternal substances and the production of amino acid energy, thereby advancing the early development of mouse embryos. Becn1 is one of the earliest autophagy-related genes identified in mammals. Becn1 is a coiled-coil protein of approximately 60 kDa in size and contains a binding domain that interacts with the Bcl-2 protein. 25 Becn1 forms a complex by binding to the phosphatidylinositol-3-kinase (PI3K) vacuolar sorting protein 34 (VPS34), 26 which is a key protein in the initial stage of autophagosome formation and promotes the formation of the bilayer membrane structure of autophagosomes. Yue et al. constructed a mutant mouse with Becn1 deletion and found that the mutant embryos showed significant developmental delay, with death occurring by embryonic day 7.5 (E7.5). Moreover, the visceral endoderm size and cell structure exhibited abnormal defects. 11 Embryoid bodies (EBs) are composed of undifferentiated embryonic stem cells, which can further develop cavities. 27 Studies have confirmed that EBs composed of Becn1-deficient cells are unable to develop normally due to defects in the function of recruiting and clearing dead cells, and exhibit low intracellular ATP levels. 12 These studies suggest that Becn1 is important for the early development of embryos, energy supply, and maintenance of homeostasis in the intracellular environment. The Atg5-Atg12 covalent ubiquitination system is important for the formation of autophagosomes.
Studies in mouse embryonic stem cells have shown that Atg5-Atg12 is covalently bound to the surface of the autophagosome bilayer membrane structure at the early stage of autophagy. 28 Atg5 remains localized on this bilayer membrane structure as the membrane grows. When Atg5 is deficient, the development of the bilayer membrane structure of autophagosomes in mouse embryonic stem cells is defective. Moreover, Atg5-Atg12 is also involved in assisting LC3 in binding to the developing autophagosome bilayer membrane structure. Thus, the Atg5-Atg12 covalent system plays a crucial role in the growth and closure of the bilayer membrane structure in the early stage of autophagosome formation. Tsukamoto S. et al. 9 fertilized Atg5-deficient oocytes with Atg5-deficient sperm and found that the fertilized ova developed to the 4-8-cell stage without developing further afterwards. However, if the defective oocytes were fertilized with normal sperm, they could develop normally; the level of protein synthesis was also downregulated in the autophagy-deficient embryos. Therefore, during the early development of the embryo, the autophagy degradation system and the autophagy-related gene Atg5 are critical for mammalian preimplantation embryo development.

Fig. 4 (A) After microinjection of miR-291a-5p inhibitors, the expression of miR-291a-5p inhibitors in mouse embryos was observed using fluorescence microscopy. The mouse zygote was injected at 0.5 dpc with 5′ FAM-modified miR-291a-5p inhibitor molecules, which showed green fluorescence. Green fluorescence was observed 2.5 h, 24 h and 48 h after injection. Scale bar = 100 μm. After microinjection of the control, TE buffer, scramble inhibitor or miR-291a-5p inhibitor for 24 h, real-time quantitative PCR was performed to detect the expression of (B) miR-291a-5p, (C) Atg5 mRNA and (D) Becn1 mRNA in mouse embryos. $P < 0.05, $$P < 0.01 vs. scramble inhibitor group; #P < 0.05, ##P < 0.01 vs. TE group; **P < 0.01, ***P < 0.001 vs. control group. (E) After microinjection of control, TE buffer, scramble inhibitor or miR-291a-5p inhibitor for 24 h, the cleavage rates of mouse embryos at the 2-cell and blastocyst phases were observed under a microscope. (F) Analysis of mouse embryo development rate at 1.5 dpc (2-cell) and 4.5 dpc (blastocyst) after microinjection of various reagents, as above (n = 3, *P < 0.05 vs. control group, $P < 0.05 vs. scramble inhibitor group, ##P < 0.01 vs. TE group).

This journal is © The Royal Society of Chemistry 2019

In recent years, a large number of studies have shown that post-transcriptional and translational regulation mediated by non-coding miRNA molecules is involved in autophagy in tumors. 29 The modulation mediated by miRNAs gives cells a survival advantage in response to starvation, genotoxic stress, and hypoxia. Chen et al. 30 found that the miR-290 gene cluster was upregulated in the B16F1 progeny cell line by comparing the mouse melanoma cell line B16 with its passaged progeny cell line. This upregulation caused no significant change in cell proliferation, migration or anchorage-independent growth, but conferred protection against glucose starvation. The miR-290 gene cluster inhibited autophagic death of mouse melanoma cells in a glucose starvation environment through the downregulation of various autophagy genes including Atg7 and ULK1. Based on the above studies, we hypothesized that the miR-290 gene cluster may be involved in the regulation of preimplantation embryo development by targeting autophagy-related genes. Therefore, we examined the dynamic expression of miR-291a/b-5p at different stages of development. We showed that miR-291a/b-5p had a low expression abundance after fertilization, and a significant increase from the 4-cell phase to the blastocyst stage. In contrast, the expression trend of Atg5 and Becn1 mRNA showed an inverse relationship with miR-291a/b-5p expression.
Furthermore, we conrmed that Atg5 and Becn1 were the direct targets of miR-291a/b-5p using a dual-luciferase reporter assay. In conclusion, as mature molecules of the miR-290 cluster, miR-291a/b-5p was dynamically expressed and the expression trend of miR-291a/b-5p showed an inverse relationship with the expression of the autophagy-related genes Atg5 or Becn1 during mouse preimplantation embryo development. MiR-291a/b-5p inhibited the formation of autophagosomes and exhibited targeted inhibition effects on Atg5 and Becn1 in NIH/3T3 cells. Repression of miR-291a-5p with miRNA inhibitors in fertilized ova upregulated the mRNA levels Atg5 and Becn1, promoting the rst cleavage and blastocyst formation in mouse embryos. Our study suggests the crucial role of miR-291a/b-5p during mouse preimplantation embryo development. Conflicts of interest There are no conicts to declare.
Trends in food and nutrient intake over 20 years: findings from the 1998-2018 Korea National Health and Nutrition Examination Survey

OBJECTIVES We aimed to examine the current status and trends of food and nutrient intake in the Korean population over the past 20 years using data from the Korea National Health and Nutrition Examination Survey (KNHANES).

METHODS We conducted a survey of 116,284 subjects over the age of one year in Korea, who participated in the KNHANES between 1998 and 2018. We collected data on the subjects' intake for the previous day using the 24-hour recall method. The annual percent change (APC) in the food groups and nutrient intake was calculated using SAS and Joinpoint software.

RESULTS The intake of grains (APC=-0.4, p<0.05) and vegetables (APC=-0.8, p<0.05) was observed to decrease. In contrast, the intake of beverages, meat, dairy, and eggs increased. In particular, beverage intake increased by more than four times (APC=9.2, p<0.05). There was no significant change in energy intake. However, the proportion of energy intake from carbohydrates decreased by approximately 5%p (APC=-0.3, p<0.05), whereas that from fat increased by approximately 5%p (APC=1.1, p<0.05). Additionally, there were decreases in the proportion of energy intake from breakfast and homemade meals and increases in the energy intake from snacks, dining out, and convenience food. The intake of vitamin C (APC=-3.2, p<0.05) and sodium (APC=-2.3, p<0.05) significantly decreased.

CONCLUSIONS Over the past 20 years, there have been decreases in the intake of grains, vegetables, carbohydrates, sodium, and vitamin C and increases in the intake of beverages, dairy, meat, eggs, and fat. Since nutritional status is an important factor in the prevention and management of chronic diseases, it should be continuously monitored.

INTRODUCTION

dietary fiber and whole grains, account for the third leading risk factor contributing to death after smoking and hyperglycemia [2].
Nutrition is an important factor in the prevention and management of chronic diseases, such as cardiovascular disease, cancer, and diabetes. In most countries, including Korea, national-level surveys have been conducted to determine changes in nutritional intake with the aim of preventing and managing chronic diseases [3,4]. In Korea, the Korea National Health and Nutrition Examination Survey (KNHANES) is used to evaluate nutrition policies by monitoring the nutritional status, identifying nutritionally vulnerable groups, and comparing against the targets for the objectives of the National Health Plan (HP) [5]. In Korea, deaths due to chronic diseases, such as cancer, cardiovascular disease, and diabetes, account for approximately 80% of all deaths [6]. Over the past 20 years, the prevalence of obesity and hypercholesterolemia has increased, while the prevalence of hypertension and diabetes has stagnated [7]. Therefore, it is necessary to prepare a prevention and management plan through an in-depth understanding of the changes in nutritional intake, which are major risk factors for chronic diseases. Although the Korea Disease Control and Prevention Agency (KDCA, formerly the Korea Centers for Disease Control and Prevention) publishes a report ("Health Statistics") every year, it is not sufficient to describe trends in nutritional intake, because it also includes results from the health examination and health interview. Some previous studies [8,9] have reported changes in the diet of Koreans, such as an increased intake of animal food groups and fat. However, since these studies were based on data collected prior to 1998, it is necessary to understand the latest changes in the diet.
While there are other previous studies [10,11] that analyzed more recent data, it remains difficult to understand the changes in the overall nutritional status, as their results are mainly concerned with particular nutrients, such as energy and sodium. This study aimed to examine the changes in the intake of major foods and nutrients over the past 20 years using data from the KNHANES (1998-2018) to provide evidence for the prevention and management of chronic diseases in Korea.

Study subjects

Based on the National Health Promotion Act enacted in 1995, the KNHANES has been conducted since 1998 for the production of national health statistics. The KNHANES was conducted in November-December in 1998 and 2001, April-June in 2005 (April-May for the nutrition survey), and July-December in 2007. From 2008, it has been conducted as an annual survey (January-December) to produce statistics without seasonal variation. Since 1998, a two-stage stratified cluster sampling method has been used to select approximately 200 primary sampling units (PSUs) and 20-23 households per PSU. All eligible members aged one year and above within the sample households become the target sample. Subjects were all over one year old and members of the households sampled using the method described above. For our analysis, we used the data from 116,284 people over one year of age who completed the 24-hour dietary recall between 1998 (first survey) and 2016-2018 (seventh survey).

Nutrition survey

The KNHANES consists of a health examination, a health interview, and a nutrition survey. The nutrition survey is divided into a 24-hour dietary recall, a dietary behavior survey, and a food security survey.
For the 24-hour dietary recall, a team of dieticians visited each subject's household and conducted individual interviews with all household members over the age of one year to collect data about the name and amount of each dish or food consumed, as well as the location and type of meal eaten during the previous day, in chronological order [12]. To determine the exact amount of intake, we investigated each individual's intake using various measuring aids. If a subject had eaten dishes cooked in the household, the ingredients and the amounts used for cooking the meals were surveyed and reflected in their personal food and nutrient intake. When the subjects dined out, we used the recipe database (DB), which is composed of the ingredient list of a dish and the amount of each food ingredient, to calculate the intake of food and nutrients from the dishes consumed. For each individual's daily intake, the energy and nutrient intakes were calculated using the nutrient DB for each food established based on the National Standard Food Composition Table [7]. The main food groups examined in our study, which showed differences in the amount of intake across years, were grains, vegetables, beverages (non-alcoholic beverages), fruits, meat, dairy (milk and dairy products), and eggs. For nutrients, we examined the total energy and the proportion of energy from carbohydrates, proteins, and fats, and the components for the fourth HP objective, including vitamin A, riboflavin, vitamin C, calcium, sodium, and iron. To examine the changes in the energy composition, we presented the proportion of energy intake for each of the source nutrients of energy: fat, carbohydrate, and protein. As the unit of reference intake for vitamin A changed from μg RE to μg RAE in the Dietary Reference Intakes for Koreans 2015 (2015 KDRIs), KNHANES reported the vitamin A intake in μg RAE from the seventh survey (2016-2018).
In this study, we used the unit μg RE for vitamin A intake so that we could compare the data over the past 20 years. The proportion of energy intake from each meal and meal type was also examined to identify the dietary changes. The meals were divided into breakfast, lunch, dinner, and snacks. The meal types were divided into homemade meals, dining out (a dish from a restaurant or an institutional food service), single food (i.e., fruit, snack, and milk), and convenience food (ready-to-eat food or ready-to-cook food).

Statistical analysis

All analyses were performed using SAS version 9.4 (SAS Institute Inc., Cary, NC, USA) and the Joinpoint Regression Program (National Cancer Institute [NCI], Bethesda, MD, USA). To represent the Korean population, the sampling weights assigned to subjects were applied to all analyses. The sampling weights were generated by considering the complex sample design, the non-response rate of the target population, and post-stratification. To adjust for differences in the results from changes in the age structure of each year, age-standardized results were calculated using the age- and sex-specific structures of the estimated population based on the 2005 population projections for Korea. The estimates and their standard errors obtained from SAS were input into NCI's Joinpoint program, with the number of joinpoints set to 0 or 1, and the annual percent change (APC) was calculated. The APC was tested against the null hypothesis of no annual change at a significance level of 0.05; additionally, the Monte Carlo permutation method in the Joinpoint Regression Program was used to test the statistical significance of the optimal model.

Ethics statement

This study was approved by the Institutional Review Board of the KDCA (2007-2014, 2018). For certain years (2015-2017), ethical approval was waived under the Act (Article 2, Paragraph 1) and the Enforcement Regulation (Article 2, Paragraph 2, Item 1) of the Bioethics and Safety Act.
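The APC reported throughout this paper is the constant yearly percent change implied by the slope of a log-linear fit to the annual estimates. As a rough illustration only (not the authors' SAS/Joinpoint workflow; a single-segment fit with no joinpoint search, on a synthetic series), the calculation can be sketched in Python:

```python
import math

def annual_percent_change(years, values):
    """Estimate APC by ordinary least squares on log-transformed values.

    Fits ln(value) = a + b * year; the APC is (exp(b) - 1) * 100,
    i.e. the constant yearly percent change implied by the slope b.
    """
    n = len(years)
    logs = [math.log(v) for v in values]
    mean_x = sum(years) / n
    mean_y = sum(logs) / n
    sxx = sum((x - mean_x) ** 2 for x in years)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(years, logs))
    b = sxy / sxx  # slope of the log-linear trend
    return (math.exp(b) - 1) * 100

# Hypothetical beverage-intake series (g/day) growing 9.2% per year,
# anchored at the 1998 figure of 45.3 g; not actual survey data.
years = list(range(1998, 2019))
values = [45.3 * 1.092 ** (y - 1998) for y in years]
apc = annual_percent_change(years, values)  # recovers ~9.2 for this series
```

Joinpoint extends this idea by searching for calendar years where the slope changes and testing each segment separately; the sketch above corresponds only to the zero-joinpoint case.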
RESULTS

A total of 116,284 subjects (52,213 males, 64,071 females) over the age of one year, who had completed the 24-hour dietary recall of the KNHANES (1998-2018), were included. Their average age increased by about nine years, from 34.5 years to 43.5 years, over the past 20 years. The proportion of subjects who were college graduates or higher increased by approximately 10%p from 2005 to 2018 (Table 1).

Values are presented as mean±standard error. Diff, difference between the data from 1998 and 2018; APC, annual percent change. 1 The age-standardized mean and standard error were calculated using the 2005 population projections for Korea. 2 The percentage of energy from fat means the percentage of energy from fat (g of fat×9 kcal/g) compared to the sum of energy from fat, carbohydrates, and protein; the respective percentages of energy from the other components were calculated using a similar equation. *p<0.05.

Since the KNHANES was introduced in 1998, the intake of grains has decreased (APC = -0.4, p < 0.05). In particular, the intake of grains by female subjects significantly decreased (Table 2). The intake of vegetables has decreased since 2005 (APC = -1.5, p < 0.05), whereas the intake of fruits showed a tendency to decrease, but not to a statistically significant level. The food group with the largest change in intake over the past 20 years was beverages, which increased significantly since 2005 to become 4.6 times more than the intake in 1998 (45.3 g in 1998 and 208.4 g in 2018; APC = 9.2, p < 0.05). During the same period, meat intake was also observed to increase. In particular, the intake of meat by male subjects doubled over the past 20 years (82.7 g in 1998 and 160.0 g in 2018; APC = 0.7, p < 0.05). The intake of dairy tended to increase continuously until 2011 (APC = 3.6, p < 0.05); however, there was no significant change afterward. The intake of eggs increased, resulting in an intake of 31.0 g in 2018, 1.5 times that of 21.7 g in 1998 (APC = 2.0, p < 0.05).
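The table footnote above defines the percentage of energy from each macronutrient using the Atwater factors (9 kcal/g for fat, 4 kcal/g for carbohydrate and protein). A minimal sketch of that calculation follows; the gram values are illustrative, chosen only to roughly reproduce the 2018 shares, and are not survey data:

```python
ATWATER = {"carbohydrate": 4.0, "protein": 4.0, "fat": 9.0}  # kcal per gram

def energy_shares(grams):
    """Return each macronutrient's share (%) of total macronutrient energy."""
    kcal = {k: grams[k] * ATWATER[k] for k in grams}
    total = sum(kcal.values())
    return {k: round(100 * v / total, 1) for k, v in kcal.items()}

# Illustrative daily intake in grams (hypothetical, not KNHANES values):
shares = energy_shares({"carbohydrate": 300.0, "protein": 73.0, "fat": 48.5})
# shares -> fat 22.6, carbohydrate 62.2, protein 15.1
```

With these illustrative inputs the fat share comes out near the 22.6% reported for 2018, and the carbohydrate share near 62.2%.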
The total energy intake tended to increase over the past 20 years, but only by a significant amount in males (APC = 0.7, p < 0.05) (Table 3). The proportion of the energy intake from fat increased significantly, from 17.9% in 1998 to 22.6% in 2018 (APC = 1.1, p < 0.05). Furthermore, this increase has accelerated in both male and female subjects since 2009. During the same period, the proportion of energy intake from carbohydrates decreased by 4.9%p (67.1% in 1998 and 62.2% in 2018), whereas that from protein did not significantly change. The proportion of energy intake from breakfast significantly decreased (23.1% in 1998 and 16.2% in 2018; APC = -1.8, p < 0.05), whereas that from snacks increased (APC = 1.5, p < 0.05). While the proportion of energy intake from homemade meals decreased, that from dining out almost doubled (18.9% in 1998 and 36.6% in 2018), and that from single food or convenience food increased by approximately 1.5 times (15.5% in 1998 and 25.1% in 2018).

DISCUSSION

Since the introduction of the KNHANES in 1998, the intake of grains, vegetables, and fruits has decreased, whereas the intake of beverages, meat, dairy, and eggs has increased over the past 20 years. Additionally, these changes were related to changes in the nutrient intake, resulting in a decrease in the intake of vitamin C and an increase in the intake of riboflavin. The total energy intake of the male subjects tended to increase slightly. The proportion of energy intake from fat increased; similarly, the energy intake from dining out or convenience food increased. In our study, the intake of plant-based foods and the proportion of energy from carbohydrates decreased, whereas the intake of animal-based foods and the proportion of energy from fat increased.
The increase in the intake of animal food and fat is consistent with the results reported in previous domestic studies, such as a study that analyzed the changes in food intake from 1969 to 1995, which suggests that this trend had already begun before 1998 [8,9]. The composition of households in Korea has also changed; for example, the number of single-person households has increased [13]. Furthermore, more females are now employed [14], leading to changes in the food environment. Upon reviewing the energy intake trends for each meal and type of meal, we found changes in the diet, such as a decreased energy intake from breakfast and homemade meals, and an increase in the proportion of energy intake from snacks, dining out, and convenience food. These changes in the diet are believed to have contributed to changes in the sources of energy intake. The food group with the largest changes in intake over the past 20 years was beverages. While the year in which joinpoints occurred differed between male and female subjects, intake significantly increased in both. As sugar-sweetened beverages (SSB) contribute a large proportion of the energy and total sugar intake, it is recommended to reduce the consumption of SSB as much as possible [15,16]. The energy intake from beverages in subjects over one year old in Korea increased by 41.8 kcal (30.7 kcal in 1998 and 72.5 kcal in 2018), which may have contributed to the increase in the total energy intake. In addition, beverages such as sugar-added coffee, soft drinks, and fruit-based beverages were considered major sources of total sugar intake [7]. Further research analyzing the trend of beverage intake by subclass (sweetened or unsweetened) is needed. In contrast, the intake of vegetables has decreased significantly over the past 20 years.
While the intake of fruits showed a decreasing tendency during the same period, this change was not statistically significant. Considering that the intake of fruits and vegetables may vary by season more than that of other food groups, we analyzed their APC between 2008, the year in which the annual survey was introduced, and 2018. We found that the intake of fruits has decreased in both male and female subjects by approximately 11% every year since 2015. Moreover, the intake of vegetables has decreased more markedly since 2014 compared to 2008-2013 (data not shown). The proportion of adults over the age of 19 years who consumed more than 500 g of fruits and vegetables decreased from 42.9% in 1998 to 29.4% in 2018. The decrease was particularly pronounced in those aged 19-29 years (a decrease of 25.1%p) and 30-49 years (a decrease of 19.7%p). The cause of this decrease in young and middle-aged adults should be further examined [7]. The energy intake was 1,988 kcal in 2018 (aged ≥ 1 years), which has slightly increased (by 53 kcal) over the past 20 years. In particular, the male subjects were found to have a significant increase in energy intake (149 kcal). There was no significant change in the energy intake of female subjects over the past 20 years; however, the APC over the ten-year period from 2008 to 2018 showed a tendency to increase (APC = 0.7, p < 0.05; data not shown). The energy intake was 2,093 kcal in the United States (2017-2018, aged ≥ 2 years) [3] and 1,900 kcal in Japan (2018, aged ≥ 1 years), which did not differ significantly from the energy intake in Korea [4]. However, unlike the energy intake trends in Korea, the energy intake in Japan has tended to decrease over the past 20 years (1995-2016) [17]. Although there was no significant change in the total energy intake, the proportion of energy intake from carbohydrates decreased in both male and female subjects.
The proportion of energy intake from fat increased by 4.7%p over the past 20 years, reaching 22.6% (fat intake = 49.5 g) in 2018 [7]. This increase was statistically significant in both male and female subjects. Similar trends have also been reported in studies conducted in the United States and Japan [17,18]. The proportion of energy intake from fat in Korea was lower than that in the United States (36.0%; fat intake = 85.0 g) and Japan (28.3%; fat intake = 60.4 g). Furthermore, it fell within the acceptable macronutrient distribution range of the 2015 KDRIs. However, it has increased by approximately 5%p over 20 years, including about 4%p over the last ten years. The proportion of people who consume more than 30% of their total energy from fat has also increased. In particular, younger age groups, such as adults in their 20s (14% in 1998 and 29% in 2018) and 30s (9% in 1998 and 24% in 2018), showed significant increases (data not shown). Given these trends and the increasing prevalence of hypercholesterolemia and obesity, it is necessary to continuously monitor the intake of fat and the proportion of energy intake from fat. The nutrient intake was influenced by changes in food intake. The decreased consumption of fruits and vegetables may have contributed to a decrease in the vitamin C intake, whereas increases in the consumption of meat, dairy, and eggs may have contributed to an increase in the riboflavin intake. The intake of calcium has remained fairly unchanged over the past 20 years, as the intake of dairy has increased while the intake of vegetables has decreased. The nutrient with the greatest change in intake was sodium, which showed statistically significant decreases in both male and female subjects. The decrease in sodium intake may have been caused by factors such as a decrease in the intake of major food sources and the enforcement of policies to reduce sodium intake.
The intake of cabbage kimchi, a major source of sodium for Koreans, decreased from 83.8 g in 1998 to 62.9 g in 2018 [7,19]. In addition, as the need for sodium reduction emerged in Korea, a national task force was established by the government in 2007, and the National Plan to Reduce Sodium, including a campaign to improve public awareness and the voluntary reformulation of processed foods (such as fried noodles, paste, and confectionery) to lower their sodium content, was implemented in 2012 [20]. Consequently, the sodium intake was significantly reduced to 3,255 mg in 2018 compared to 4,586 mg in 1998. However, considering that this is still higher than the recommended maximum level of 2,000 mg and that about 75% of people over the age of nine years consume 2,000 mg or more, further initiatives are still required to reduce the intake of sodium [7]. While the 24-hour dietary recall of the KNHANES was conducted using the same method for 20 years, there were differences in the survey period and the nutritional DB used to calculate the results. For example, the average intake of fruits showed a difference of more than 100 g depending on the survey period: 197.3 g in 1998 and 208.3 g in 2001 (survey period from November to December) versus 87.6 g in 2005 (survey period from April to May) [7]. This suggests that seasonal changes may have influenced the intake of fruits and vegetables. Accordingly, we performed a further analysis of the food and nutrient intake trends from 2008, the year in which the annual survey was established, to 2018. While we observed no significant change in fruit intake over the past 20 years, an analysis using the data between 2008 and 2018 showed a decreasing trend since 2015 (APC = -11.1, p < 0.05). Unlike the 20-year analysis, the ten-year analysis showed significant changes in the energy intake of female subjects (APC = 0.7, p < 0.05), as well as in the proportion of energy intake from protein (APC = 0.4, p < 0.05) and the vitamin A intake (APC = -3.4, p < 0.05).
Other foods and nutrients showed slight differences in the APC; however, their increasing or decreasing tendencies were similar to those analyzed over the 20-year period (data not shown). Second, since the main purpose of the KNHANES is to estimate the nutritional status of the current year, it uses the latest recipe DB and nutrient DB for each food to calculate the results. This is beneficial because the results reflect the most current nutritional information at the time the nutritional status is evaluated. However, the effect of a new DB needs to be considered when evaluating food and nutrient intake trends. For example, since the iron intake calculated according to the National Standard Food Composition Table version 9.1 [21] was lower than that calculated from the Revised version 8 [22] (data not shown), the difference between DBs needs to be considered when comparing the results between the sixth (2013-2015) and seventh (2016-2018) KNHANES. When changing the DB for data processing, we calculated the results by applying both the existing DB and the new DB to the same 24-hour dietary recall data. Subsequently, these results were reviewed by the relevant government agencies and experts. For reference, the information on the DB used to calculate the survey results is described in detail and published in the "Health Statistics" and the "Guidebook for Data Users" of the KNHANES. In conclusion, apart from the decrease in sodium intake, there have been few positive changes in food and nutrient intake over the past 20 years: the intake of fruits and vegetables decreased, while the intake of beverages and fat increased. Since nutritional intake is an important factor in preventing and managing chronic diseases, it is necessary to develop and actively enforce nutrition policies to promote better nutritional status.
In addition, since the KNHANES is an ongoing surveillance system that supports the development of health policies, it is necessary to improve the survey method and conduct in-depth analysis to explore the nutritional problems and nutritional factors related to chronic diseases.
Changes in Cardiac Levels of Caspase-8, Bcl-2 and NT-proBNP Following 4 Weeks of Aerobic Exercise in Diabetic Rats

Introduction: Cardiac apoptosis is one of the most important cardiovascular complications of diabetes. We aimed to investigate the changes of caspase-8, Bcl-2, and N-terminal pro B-type natriuretic peptide (NT-proBNP) in cardiac tissue after 4 weeks of aerobic exercise in male rats.

Introduction

Cardiovascular disorders are the main cause of morbidity and mortality in diabetic patients, not only due to coronary artery disease and related high blood pressure, but also due to the direct adverse effects of diabetes on the heart, independent of other pathological factors. Research has shown that apoptosis plays a major role in the pathogenesis of diabetes-induced heart disease. 1 Collagen is deposited when heart cells are lost through apoptosis. Ultimately, this reduction in cardiac compliance increases the tension of the heart muscle cell wall, causing ventricular dysfunction. The occurrence of apoptosis in the pathway of diabetes-induced myocardial damage has been demonstrated by the activation of apoptotic pathway components and caspase activity. [3][4][5] In fact, various studies have shown that diabetes significantly increases apoptosis in cardiac cells. 5 In the early stages of heart damage, the number of cells that are lost is greater, which indicates the activation of the incremental regulation of anti-apoptotic pathways after the reduction of the cells. 4 The process of apoptosis, or programmed cell death, is regulated by mitochondrial proteins including the B-cell lymphoma-2 (Bcl-2) family, which is divided into anti-apoptotic proteins (Bcl-2, Bcl-XL, Bcl-W, Bfl-1 and Mcl-1) and pro-apoptotic proteins (Bax, Bak, Bad, Bcl-Xs, Bid, Bik, Bim and Hrk), the latter playing a leading role in accelerating the onset of apoptosis.
6 Whereas apoptosis is restrained by anti-apoptotic proteins that prevent the release of cytochrome c from the mitochondria, pro-apoptotic proteins accelerate its release. 6 As a proto-oncogene antagonist of apoptosis at the mitochondrial level, with a weight of 28 kDa, Bcl-2 prevents oxidative damage to the cell and is known as one of the most prominent inhibitors of apoptosis; in addition to limiting the release of cytochrome c from mitochondria and preserving the integrity of the mitochondrial membrane, it binds the apoptotic protease activating factor (Apaf-1) and thereby prevents the activation of caspase-9. 6,7 A mitochondrion is an inseparable component of the internal pathway of apoptosis and the site of deposition of many of the proteins acting in the early stages of this process, including members of the Bcl-2 family. 8 Mitochondrial functions are impaired as a result of DNA damage, leading to irreversible injury; therefore, mitochondria participate in both the internal and external pathways of programmed cell death. 9 In general, the pathways involved in stimulating the apoptosis process are divided into 2 categories: the internal (or mitochondrial) pathway, which is regulated by Bcl-2 family proteins and, by activating Bak/Bax, leads to permeabilization of the mitochondrial membrane; and the external pathway (or death-receptor pathway), which is initiated by ligand binding to TNF-family death receptors and results in the activation of caspase-8 and, consequently, caspase-3. 5,9 The process of apoptosis is carried out by a family of cysteine proteases called caspases.
10 The external pathway of apoptosis begins through death receptors and activates caspase-8. After activation, caspase-8 can directly activate the effector caspases or act through the Bid protein. 11 In the internal pathway, the release of cytochrome c from the mitochondrion activates caspase-9 and ultimately the effector caspases. 10 There is a close relationship between these two pathways, in that the Bid protein, as a caspase-8 substrate, triggers the release of cytochrome c after its transfer to the mitochondrion. 13 Previous reports have noted that diabetes increases the levels and activity of caspase-8. 14 Studies have also shown that the level of the N-terminal prohormone of brain natriuretic peptide (NT-proBNP) is significantly higher in diabetic subjects than in healthy subjects. [15][16][17] BNP levels have a significant positive correlation with cardiac failure, the severity of hypertrophy, and poor ventricular diastolic function. [18][19][20][21] Researchers have argued that the NT-proBNP level can be measured as a predictor of left ventricular dysfunction and of an increased risk of death due to damage to the heart muscle. [22][23][24][25] Research has shown that, in addition to their beneficial effects on the systemic changes associated with obesity and type 2 diabetes, regular exercise modifies many metabolic disorders in the diabetic population. 4,15 Such adaptations are due to both the indirect effects of exercise-induced systemic changes and the direct effects of exercise on cardiac contractile activity. 15 Exercise plays a protective role in the heart against the complications of diabetes through the reduction of oxidative stress and apoptosis in heart cells. 16 Given the dearth of research on the effects of exercise on diabetes-induced heart muscle damage and apoptosis, we examined the effects of a 4-week course of progressive aerobic exercise on the cardiac markers caspase-8, Bcl-2 and NT-proBNP in diabetic male rats.
Allocation and Training Protocol
Based on the model used in the study by Chow et al for the selection of trial samples, 26 the animals were randomly divided into 4 groups of 10 rats: control, diabetes, control + exercise, and exercise + diabetes. Before performing the exercise protocol, the subjects were familiarized with the treadmill for 1 week. The familiarization program consisted of 5 sessions of walking and running at a speed of 5 to 8 m/min without slope for 8 to 10 minutes. The training program consisted of running on a non-slip treadmill following the principle of progressive overload: session duration increased from 25 minutes in the first week to 44 minutes in the fourth week (a one-minute increase per session compared to the previous session), and speed increased from 15 m/min in the first week to 18 m/min in the fourth week (1 m/min/wk), with 5 sessions per week for 4 weeks. 27,28 To warm up, the subjects ran at 7 m/min for 3 minutes at the beginning of each training session; then, to reach the target speed, the treadmill speed was increased by 2 m/min each minute. To cool down in every training session, the treadmill speed was decreased steadily until it reached the initial speed.

Diabetes Induction
Diabetes was induced by an intraperitoneal injection of streptozotocin (STZ) solution from Sigma-Aldrich, Germany (CAS 18883-66-4, Calbiochem), dissolved in citrate buffer (pH = 4.5, 0.1 mol/L) at 55 mg/kg body weight. 29 Fourteen days after STZ injection, blood glucose concentration was measured with a glucometer using blood samples collected from the animals. The criterion for diabetes was a blood glucose level greater than 250 mg/dL. To equalize the effect of injection, the control group received the same volume of 0.1 mol/L citrate buffer intraperitoneally.
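The dosing arithmetic of the induction protocol is simple enough to sanity-check in code. A minimal sketch (function names are mine, not from the study; only the 55 mg/kg dose and the >250 mg/dL criterion come from the text):

```python
def stz_dose_mg(body_weight_g: float, dose_mg_per_kg: float = 55.0) -> float:
    """Absolute streptozotocin dose for one rat, using the 55 mg/kg
    body-weight dosing stated in the text."""
    return dose_mg_per_kg * body_weight_g / 1000.0

def is_diabetic(glucose_mg_dl: float) -> bool:
    """Inclusion criterion from the text: blood glucose > 250 mg/dL."""
    return glucose_mg_dl > 250.0

# A rat at the reported mean body weight (~200.63 g) receives about 11 mg STZ.
print(round(stz_dose_mg(200.63), 2))  # → 11.03
```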
Caspase-8, Bcl-2, and NT-proBNP Measurement
Forty-eight hours after the last training session, all groups were anesthetized under identical fasting conditions with an intraperitoneal injection of ketamine (50 mg/kg body weight) and xylazine (3 mg/kg body weight); the chest was opened and the heart tissue was collected. To measure the indices, liquid nitrogen was used to powder the heart tissue; 0.1 g (100 mg) of the powder was then homogenized with 1 mL of PBS buffer, the extracted solution was centrifuged for 15 minutes at 5000 rpm, and the supernatant was used to measure the indices. 18 Cardiac caspase-8 was measured with an ELISA kit from MyBioSource, USA (MBS2022115, 96 tests), using a quantitative sandwich method (sensitivity 0.023 ng/mL). 18 Cardiac levels of Bcl-2 were measured with a MyBioSource ELISA kit (MBS704330, USA) using a quantitative sandwich method (sensitivity 0.65 pg/mL). 19 Cardiac levels of NT-proBNP were measured with a MyBioSource ELISA kit (MBS2509359, USA) using a quantitative sandwich method (sensitivity 2.49 ng/L). All of the above steps were carried out in the Biochemistry Laboratory of the Faculty of Physical Education and Sports Science at Mazandaran University.

Statistical Analysis
The Shapiro-Wilk test was run to assess the normality of the data distribution. Given the distribution of the data, a two-way analysis of variance (ANOVA) was used to compare the groups on the studied variables, with the Tukey test as the post hoc test. The level of significance was P < 0.05. Statistical procedures were performed with SPSS version 22.0.
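The authors ran a two-way ANOVA with Tukey post hoc tests in SPSS. As a rough illustration of the variance partitioning behind such a comparison, here is a minimal pure-Python one-way ANOVA F statistic (a simplification: it ignores the factorial design and is not the authors' analysis; the toy data are mine):

```python
def one_way_anova_f(*groups):
    """F statistic = between-group mean square / within-group mean square.
    Each argument is a list of measurements for one group."""
    all_vals = [x for g in groups for x in g]
    grand_mean = sum(all_vals) / len(all_vals)
    k, n = len(groups), len(all_vals)
    # Between-group sum of squares: group size times squared mean deviation.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: deviations from each group's own mean.
    ss_within = sum((x - sum(g) / len(g)) ** 2 for g in groups for x in g)
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Toy data (not the study's measurements):
print(round(one_way_anova_f([1, 2, 3], [2, 3, 4]), 3))  # → 1.5
```

In practice one would obtain the p-value from the F distribution with (k-1, n-k) degrees of freedom, as SPSS does.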
Results
Figure 1 shows the mean and standard deviation of caspase-8 levels in heart tissue in the present study. As can be seen, caspase-8 levels in diabetic rats were significantly higher than in the control group (P = 0.001). On the other hand, the results indicated that caspase-8 levels in the exercise group were significantly lower than in the diabetes group (P = 0.001). Moreover, the results showed a significant increase in caspase-8 levels in the exercise + diabetes group compared to the exercise group (P = 0.001). Figure 2 shows the levels of Bcl-2 in the various study groups in terms of mean and standard deviation. As can be seen, there is no significant difference between the groups. Figure 3 shows the mean and standard deviation of NT-proBNP in the heart tissue of the different groups in the current study. Our results indicated that NT-proBNP in the diabetes group was significantly higher than in the control group (P = 0.001). However, NT-proBNP was significantly lower in the exercise group than in the control group (P = 0.001). It was also observed that NT-proBNP was significantly reduced in the exercise and exercise + diabetes groups compared to the diabetes group (P = 0.001 and P = 0.014, respectively).

Discussion
This study aimed to determine the changes in cardiac levels of caspase-8, Bcl-2 and NT-proBNP, as markers of apoptosis and of the inhibition of cardiac apoptosis, in STZ-induced diabetic rats after 4 weeks of aerobic exercise (running). The results revealed that intraperitoneal injection of STZ (55 mg/kg body weight) resulted in a significant increase in caspase-8 and NT-proBNP levels and no significant change in Bcl-2 levels. Diabetes is the most prevalent metabolic disorder in the developing world, and extensive studies are underway to find suitable treatments for it. 20 Similarly, Kanter et al reported histological disorders, including loss of muscle fibers and irregularities in striated muscles, after the induction of diabetes.
16 Moreover, Shamsaei et al observed that after the induction of diabetes in rats, necrosis and apoptosis occurred in the neurons of the hippocampus. 30 Kim et al, 31 in contrast, showed that caspase-3 increased in the eyes of diabetic rats; in addition, the expression of the pro-apoptotic protein Bax and of Bcl-2 increased in the eyes of diabetic rats. The reason for this discrepancy in the change in Bcl-2 protein may lie in differences between tissue protein levels and gene expression, since changes in the level of gene expression vary with changes in tissue levels, and these changes are not necessarily associated. 22,32 Dousar et al also revealed the incidence of apoptosis in the myopathy of diabetic rats. 33 Joussen et al demonstrated the activation of caspase pathways, damage to retinal cells, apoptosis, and endothelial cell loss in diabetic rats, confirming the results of the present study. 34 Kang et al stated that phosphorylation of the pro-apoptotic protein Bad decreases in mesenchymal cells at high glucose concentrations, emphasizing the progression of apoptosis. These disorders in the expression and phosphorylation of the Bcl-2 family are associated with the release of cytochrome c and the activation of caspases. It was reported that oxidative stress in mesenchymal cells exposed to a high concentration of glucose is an important event in the activation of the cell death program, resulting in mitochondrial dysfunction and the activation of caspase-3. 35 In line with the current study, Shiroo et al observed severe apoptosis in the cardiac cells of untreated diabetic rats. Furthermore, diabetes significantly increased the rats' lipid peroxidation rate, levels of carbonyl protein (an index of protein oxidation), and superoxide dismutase.
36 Diabetes increases the level of oxidative stress, which leads to elevated levels of reactive oxygen species and reduces antioxidant defense capacity, resulting in the programmed death of heart cells by apoptosis. 21 However, the precise molecular mechanisms of apoptosis under high glucose concentrations have not yet been determined. Scholars have argued that the mechanism by which glucose induces cell death varies depending on the cell and tissue studied. 16 Research has reported that the causes of diabetes-induced cardiac apoptosis, in addition to increased oxidative stress, include inflammatory processes and cytokines such as TNF-α, IL-1β, and IFN-γ; together with their effect on nitric oxide, these lead to the appearance of Fas ligand on inflammatory and cardiac cells, which ultimately activates caspase signaling and causes cell death by apoptosis in cardiac cells. 37 It was also observed that in the non-diabetic groups, exercise did not cause any changes in caspase-8, NT-proBNP or Bcl-2. On the other hand, exercise in the diabetic groups led to a significant decrease in NT-proBNP, while no changes in caspase-8 or Bcl-2 were observed. Previous studies showed that the increased activity of antioxidant enzymes and the decreased lipid peroxidation that follow exercise have important effects on preventing the apoptotic complications of diabetes and the tissue damage caused by the oxidative stress that follows the disease. 24 Regular exercise has been shown to increase the activity of antioxidant enzymes, increase resistance to oxidative stress, and thus reduce oxidative damage. 25 Previous evidence has shown that regular exercise is effective in preventing and delaying diabetes, increasing insulin sensitivity and improving glucose metabolism.
38 It has also been shown that exercise before ischemia results in a decrease in the ratio of pro-apoptotic to anti-apoptotic proteins such as Bcl-2, and in decreased signaling for caspase pathway activation, especially of caspase-3 (the final caspase of the apoptosis pathway). 39 The capacity to neutralize free radicals is probably one of the most important mechanisms of cellular defense against cardiac damage. Reactive oxygen species are produced as a natural by-product of the mitochondrial electron transfer chain; however, when their level exceeds the antioxidant capacity of the cell, they can lead to cell death. Oxidative stress induced by reactive oxygen species is highly associated with diabetes and its complications and can cause cell death through various pathways. 39 NT-proBNP derives from the precursor of the BNP hormone, a prohormone of 108 amino acids that is cleaved by proteases into the BNP molecule (amino acids 77 to 108) and NT-proBNP (amino acids 1 to 76). 31 Studies have reported that levels of protein kinase B decrease in animals with diabetes mellitus. 31 Furthermore, another probable mechanism of cellular protection against apoptosis by exercise involves the important effect of exercise in enhancing the expression of protein kinase B, since protein kinase B has been shown to increase with aerobic exercise. 40

Conclusion
Finally, it can be concluded that 4 weeks of aerobic exercise probably reduces the severity of apoptosis in diabetic rats.

Competing Interests
The authors declare that they have no competing interests.

Ethical Approval
All the ethics of working with animals were examined by the Ethics Committee of Razi University of Kermanshah and approved under code 024-2-396.
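The residue numbering quoted in the discussion for proBNP cleavage can be sanity-checked arithmetically. A small sketch (the 108/77-108/1-76 numbering comes from the text; all variable names are mine):

```python
# proBNP is a 108-amino-acid prohormone; cleavage yields NT-proBNP
# (residues 1-76) and the active hormone BNP (residues 77-108).
PROBNP_LENGTH = 108
nt_probnp_residues = range(1, 77)   # residues 1..76 inclusive
bnp_residues = range(77, 109)       # residues 77..108 inclusive

# The two fragments together must account for the whole prohormone.
assert len(nt_probnp_residues) + len(bnp_residues) == PROBNP_LENGTH
print(len(nt_probnp_residues), len(bnp_residues))  # → 76 32
```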
Animals
Forty male Wistar rats weighing 200.63 ± 17.47 g were obtained from the Animal Breeding Center of the Pasteur Institute of Iran, Tehran, and transferred to the Animal Laboratory of the Faculty of Physical Education and Sports Sciences of Mazandaran University. All subjects were kept under controlled environmental conditions with an average temperature of 22 ± 3°C, a 12-hour light/12-hour dark cycle, and free access to water and food. Rat chow was purchased from Behparvar Company, Iran.

Figure 1. The mean and standard deviation of caspase-8 in picograms per milliliter. * P = 0.001, ** P = 0.001, and *** P = 0.014 represent significant differences between the groups. P < 0.05 was considered the level of significance.
Figure 3. The mean and standard deviation of NT-proBNP in nanograms per milliliter. * and ** P = 0.001 indicate significant differences between the groups. P < 0.05 was considered the level of significance.

This is consistent with the results obtained in this research. In previous studies, it has also been observed that running exercise reduces the levels of NT-proBNP; these results are consistent with the current research. Cellular and molecular factors are linked to each other through cascade signaling, which is triggered following external stimuli and stress. Protein kinase B is the main agent in the phosphatidylinositol-3 kinase signaling pathway, which plays a role in many cellular processes, including cell survival, metabolism, growth and proliferation. Increased expression and activity of protein kinase B inhibit apoptosis pathways by phosphorylating the anti-apoptotic proteins of the Bcl-2 family, inactivating apoptotic precursor proteins such as Bax, or directly controlling caspase activity.
2018-12-05T01:51:51.424Z
2017-12-31T00:00:00.000
{ "year": 2017, "sha1": "0c67c1ab25805f35bff57ec64abb4b50fc6b706b", "oa_license": "CCBY", "oa_url": "http://ijbsm.zbmu.ac.ir/PDF/ijbsm-2174", "oa_status": "GOLD", "pdf_src": "Anansi", "pdf_hash": "0c67c1ab25805f35bff57ec64abb4b50fc6b706b", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
261238604
pes2o/s2orc
v3-fos-license
Current and emerging treatment options for hairy cell leukemia Hairy cell leukemia (HCL) is a lymphoproliferative B-cell disorder characterized by pancytopenia, splenomegaly, and characteristic cytoplasmic hairy projections. Precise diagnosis is essential in order to differentiate classic forms from HCL variants, such as the HCL-variant and VH4-34 molecular variant, which are more resistant to available treatments. The current standard of care is treatment with purine analogs (PAs), such as cladribine or pentostatin, which provide a high rate of long-lasting clinical remissions. Nevertheless, ~30%–40% of the patients relapse, and moreover, some of these are difficult-to-treat refractory cases. The use of the monoclonal antibody rituximab in combination with PA appears to produce even higher responses, and it is often employed to minimize or eliminate residual disease. Currently, research in the field of HCL is focused on identifying novel therapeutic targets and potential agents that are safe and can universally cure the disease. The discovery of the BRAF mutation and progress in understanding the biology of the disease has enabled the scientific community to explore new therapeutic targets. Ongoing clinical trials are assessing various treatment strategies such as the combination of PA and anti-CD20 monoclonal antibodies, recombinant immunotoxins targeting CD22, BRAF inhibitors, and B-cell receptor signal inhibitors.

Introduction
Hairy cell leukemia (HCL) is defined, according to the World Health Organization classification, as a mature peripheral B-cell neoplasm that accounts for 2%-3% of all adult leukemias. 1 It is characterized by infiltration of the bone marrow, liver, and spleen by malignant B cells with hair-like cytoplasmic projections and an indolent course. HCL is more frequent in males, with an overall male to female ratio of 4:1, and the median age at onset is 52 years.
2 Hairy cells have a characteristic immunophenotyping profile showing positivity for the CD11c, CD25, and CD103 markers, in addition to the B-cell antigens CD20 and CD22. However, recurrent chromosomal translocations have not been identified. Recently, the BRAF V600E mutation has been identified as activating the MEK-ERK pathway in patients with classic HCL (HCL-c). 3 This finding has implications for pathogenesis, diagnosis, and targeted therapy. Precise diagnosis and a detailed workup are essential because the clinical profile of HCL can closely mimic that of other chronic B-cell lymphoproliferative disorders that are treated differently. Variants of HCL, such as the HCL-variant (HCL-v) and VH4-34 molecular variants, have a different immunophenotype and specific VH gene usage and are more resistant to available treatments. BRAF mutations are absent in both variants. 4 Before the introduction of purine analogs (PAs), splenectomy and treatment with interferon alpha led to clinical and hematological responses; however, these were rarely complete, and median survival was only 4 years. [5][6][7] Front-line treatment of HCL is currently based on the purine nucleoside analogs pentostatin and cladribine. Both agents confer high and durable response rates, but 30%-40% of patients will eventually relapse 5-10 years after their first treatment. 8,9 In this disease, the major challenge is the treatment of patients with multiple clinical relapses as well as those with HCL variants that are refractory to standard PA treatments. This article provides an update on the treatment options that are currently available and reviews the results of clinical trials of novel molecules that may change the future therapeutic landscape of this rare disease.

Current standard of care
The main indications for treatment are symptomatic cytopenias or painful splenomegaly. If a patient is asymptomatic and cytopenias are minimal, it is reasonable to adopt a watch-and-wait policy.
It should be noted that the risk of opportunistic infections in patients with monocytopenia, with or without neutropenia, is high. Therefore, even asymptomatic patients may be considered for early treatment.

First-line treatment
PA therapy as primary treatment
Since the effects of the PAs pentostatin and cladribine were discovered in HCL patients, treatment with these drugs has remained the standard of care. Both agents induce complete remission (CR) in a high proportion of patients (80%), and most studies demonstrate a median disease-free survival of 10 years. [8][9][10][11] Nevertheless, pentostatin and cladribine have not been compared in large, randomized trials, and most of the available response and toxicity data derive from published retrospective series. A long-term follow-up study by Else et al demonstrated no difference in outcome between the two agents. 9 The overall response rates (ORRs) were 96%-100%, CR rates were around 80%, and 10-year overall survival was between 85% and 100%, without any statistically significant differences. Relapse rates were similar for both groups (44% for pentostatin and 38% for cladribine). In another retrospective study, 107 patients treated with pentostatin or cladribine were evaluated, showing CR rates of 92% in pentostatin-treated patients and 88% in cladribine-treated patients, with an ORR of 100% in both groups. Minimal residual disease (MRD) was positive in 52% and 47% (P=0.445), respectively. Of those treated, 51% and 25% relapsed (P=0.016), and the median treatment-free interval (TFI) was 95 months for pentostatin versus 144 months for cladribine, without any significant differences. Considering the two groups together, the data showed a TFI of 170 months versus 44 months for patients achieving CR versus partial remission (PR). 12 Doses and PA treatment schedules were recently published in the revised British guidelines for the diagnosis and management of HCL, 13 and are included in Table 1.
Different dosing schedules have been used with similar results. Robak et al showed that cladribine could be infused over 2 hours instead of 24 hours, with 19 out of 23 patients achieving CR. The same study demonstrated that there was no statistically significant difference between CR rates in the group receiving the medication over 5 days versus 7 days, with the 5-day arm presenting fewer infectious complications. 14 Cladribine has also been administered at doses of 0.15 mg/kg weekly for 6 weeks. Initial results suggested similar efficacy with decreased immunosuppression and fewer infections. 15 However, a larger follow-up trial that included 138 patients showed that this schedule did not significantly reduce toxicities. 16 Furthermore, daily subcutaneous administration of cladribine over 7 consecutive days has been studied in 73 patients. The efficacy and toxicity were comparable to studies with intravenous cladribine, and the plasma concentrations of subcutaneous cladribine were similar to those obtained using the intravenous route. 17 Using a 5-day subcutaneous dosing regimen, von Rohr et al treated 62 HCL patients with cladribine 0.14 mg/kg/d. The CR rate achieved was 76%, with an overall response rate of 97%. 18 Multiple studies have identified the type of response to PA as an important prognostic factor, with patients who only achieve a partial response faring significantly worse than those achieving CR. 9,10,19 Therefore, achieving CR is an important treatment goal. Bone marrow assessment after cell count recovery (4-6 months after cladribine therapy or following eight to nine courses of pentostatin) is recommended, and a second course of PA therapy should be administered if patients do not enter CR.
Table 1. Purine analog treatment schedules in HCL
Pentostatin (2′-deoxycoformycin): 4 mg/m2 every 2 weeks until maximum response, plus one or two extra injections
Cladribine (2-chlorodeoxyadenosine):
- 0.1 mg/kg/d as a continuous IV infusion for 7 days
- 0.14 mg/kg/d as an IV infusion over 2 hours for 5 consecutive days
- 0.14 mg/kg/d as an IV infusion once weekly for 6 consecutive weeks
- 0.14 mg/kg/d as a SC bolus injection for 5 consecutive days
- 0.14 mg/kg/d as a SC bolus injection once weekly for 5 consecutive weeks

Monitoring MRD
Although both PAs are highly effective and lead to excellent overall survival, it appears that neither of them is curative, since the majority of patients have evidence of MRD despite being in CR. Several authors have studied the presence of MRD using immunohistochemical, flow cytometry, or polymerase chain reaction techniques, 20-23 which detected MRD in 40%-60% of patients who had received PAs. Flow cytometry is clearly superior to the other techniques, but the prognostic implication of detectable MRD after therapy for HCL may be confounded by several factors: variability in the sensitivity of the techniques used to assess MRD, nonuniformity of the criteria used to define MRD, variability in the timing of MRD assessments after therapy, and the limited numbers of patients in the reported studies. However, the preponderance of the data suggests that persistence of MRD after therapy with nucleoside analogs is predictive of eventual disease recurrence. Tallman et al 24 and Wheaton et al 25 studied MRD using immunohistochemical techniques in patients treated with PAs. The prevalence of MRD detected after cladribine (13%) was lower than that after pentostatin treatment (26%) but did not differ significantly. However, the 4-year relapse-free survival rate (55% versus 88%) was significantly lower if MRD was detected.
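For orientation only (not dosing guidance), the cumulative per-course cladribine exposure implied by the Table 1 schedules can be tabulated; the schedule labels are mine, while the per-dose values and dose counts come from the table:

```python
# (per-dose mg/kg, number of doses) for each cladribine schedule in Table 1.
CLADRIBINE_SCHEDULES = {
    "continuous IV x 7 d": (0.10, 7),
    "2-h IV x 5 d":        (0.14, 5),
    "IV weekly x 6":       (0.14, 6),
    "SC bolus x 5 d":      (0.14, 5),
    "SC weekly x 5":       (0.14, 5),
}

def course_total_mg_per_kg(per_dose: float, n_doses: int) -> float:
    """Cumulative mg/kg of cladribine delivered over one course."""
    return per_dose * n_doses

for name, (dose, n) in CLADRIBINE_SCHEDULES.items():
    print(f"{name}: {course_total_mg_per_kg(dose, n):.2f} mg/kg per course")
```

Note that four of the five schedules deliver the same cumulative dose of 0.70 mg/kg; only the 6-week weekly regimen differs (0.84 mg/kg).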
In a retrospective study by López-Rubio et al, 12 MRD positivity (analyzed by immunophenotyping) was detected in 49% of 107 HCL patients treated with PAs, and significant differences were found between the two treatments. The estimated TFI by Kaplan-Meier analysis was 97 months in MRD-positive patients, while the median TFI in MRD-negative patients was not reached (P = 0.059). A study recently published by Garnache Ottou et al 26 confirmed the utility of flow cytometry-based MRD detection for identifying patients at high risk of relapse.

Chemoimmunotherapy: rituximab with cladribine or pentostatin
The rationale for adding rituximab to PAs was based on the efficacy and safety of this combination in patients with relapsed disease, 27 as well as the significant synergy of this combination observed in other lymphoproliferative disorders. Ravandi et al 28 reported a Phase II clinical trial in 31 previously untreated patients with HCL. Cladribine was administered intravenously at 5.6 mg/m 2 over 2 h/d for 5 consecutive days. Approximately 4 weeks after initiating cladribine, eight weekly doses of rituximab (375 mg/m 2 intravenously) were administered. A CR rate of 100% was reported, and after a median follow-up of 25 months, the medians for CR duration, PFS, and overall survival had not been reached. Patients achieved MRD-negative status after completion of treatment with rituximab, as demonstrated by flow cytometry and consensus polymerase chain reaction. Despite a significant decline in the number of CD4+ and CD8+ T cells, as well as in serum immunoglobulin levels, no increase was observed in the incidence of opportunistic infections. A longer follow-up study may provide further information regarding the importance of achieving MRD-negative CR for long-term outcomes. Cervetti et al 29 analyzed the results of ten patients who received four cycles of rituximab after administration of cladribine.
Before starting anti-CD20 antibody therapy, two patients were in CR, six in PR, and two showed no significant response to cladribine. All cases were IgH-positive. Eight out of ten patients (four in PR, two in CR, and two unresponsive after 2-chlorodeoxyadenosine) were evaluable for response. Two months after the end of anti-CD20 therapy, all evaluated patients presented complete hematological remission. Moreover, rituximab increased the percentage of molecular remissions up to 100% 1 year after the end of treatment. Interestingly, in all cases but one, including those that remained persistently polymerase chain reaction-positive, semiquantitative molecular analyses showed MRD levels lower than those found before rituximab administration. These results not only confirm the therapeutic effect of rituximab but also show its relevance in eradicating MRD in HCL. In a subsequent extended follow-up study, Cervetti et al 30 analyzed 27 HCL patients treated with anti-CD20 after pretreatment with cladribine. Patients who demonstrated persistent MRD or detectable clinical disease were treated with rituximab (375 mg/m 2 once a week for four doses). Hematological, immunological, and molecular analyses were repeated at 2, 6, and 12 months after the end of anti-CD20 treatment. The overall hematological response rate was 100% (CR 89%, PR 11%) 2 months after the last rituximab infusion. Molecular analysis revealed a progressive increase in the number of molecular remissions, with an overall molecular response rate of 70%. PFS was significantly affected both by the quality of response to rituximab (2-year PFS of 50% for patients achieving PR versus 94% for cases in CR; P < 0.001) and by molecular status (30% for MRD-positive versus 100% for MRD-negative patients; P < 0.001).
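The PFS and TFI figures throughout this section come from Kaplan-Meier analysis. For readers unfamiliar with the method, here is a minimal product-limit estimator in Python (illustrative only; the example data are synthetic, not the study's, and this is not the authors' software):

```python
def kaplan_meier(times, events):
    """Minimal Kaplan-Meier product-limit estimator.
    `times` are follow-up times; `events[i]` is True for an observed event
    (e.g. relapse) and False for a censored observation.
    Returns a list of (time, survival probability) steps."""
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival = 1.0
    steps = []
    i = 0
    while i < len(data):
        t = data[i][0]
        d = sum(1 for tt, e in data if tt == t and e)   # events at time t
        removed = sum(1 for tt, _ in data if tt == t)   # all leaving risk set
        if d:
            survival *= (n_at_risk - d) / n_at_risk
            steps.append((t, survival))
        n_at_risk -= removed
        i += removed
    return steps

# Synthetic example: relapses at months 1, 3, 4; one patient censored at month 2.
print(kaplan_meier([1, 2, 3, 4], [True, False, True, True]))
# → [(1, 0.75), (3, 0.375), (4, 0.0)]
```

Note how the censored patient at month 2 leaves the risk set without dropping the survival curve, which is exactly why censoring-aware estimates differ from naive relapse percentages.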
Two ongoing studies are evaluating therapy with rituximab and PAs: one is an MD Anderson Cancer Center-supported "Phase II Study of 2CDA followed by rituximab in HCL" (NCT00412594), which is in the recruitment phase; the other is the National Cancer Institute-supported "Cladribine with simultaneous or delayed rituximab to treat HCL" (NCT00923013), for which recruitment has finished. The results of the two studies will provide useful data for the initial management of patients.

Role of interferon alpha and splenectomy
The role of interferon alpha is limited to patients presenting with severe pancytopenia and those with a pressing need for cell count recovery. A regimen of 3 mega-units three times a week will gradually improve blood counts and facilitate the subsequent use of either nucleoside analog. 31 The principal indication for splenectomy is the finding of very significant splenomegaly in the presence of low-level bone marrow infiltration. Figure 1 presents a proposed algorithm for first-line treatment in HCL. This strategy is based on the expert opinion adopted by the Spanish CLL group. The aim is to eradicate MRD in patients with a clinical response in order to improve the TFI. In these cases, the number of rituximab doses required to achieve a deeper response is not clear. Nevertheless, the cost and side effects of rituximab have to be taken into account, especially with respect to the added immunosuppression. This is the rationale behind the administration of four doses of rituximab and further evaluation of MRD status before consolidation with four additional doses if MRD+ is detected.

Treatment at relapse and refractory disease
Although relapsed and refractory diseases are vastly different entities, the majority of studies include both subsets when evaluating treatment options.
Patients with relapsed disease can be given additional courses of pentostatin or cladribine, although the response rate and duration tend to decrease with each subsequent course. 9,[32][33][34] The choice of agent at relapse may depend on the duration of first remission: if short (<2 years), the alternative agent should be used; if longer (>2 years), the patient should be retreated using the same agent. The combination of pentostatin or cladribine with rituximab is another therapeutic option. 35 In a retrospective study published by Else et al, 36 18 patients with relapsed disease were treated with pentostatin or cladribine in combination with rituximab, showing a CR rate of 89%. Of the 13 patients who were evaluated for MRD, all were negative, and responses were sustained at a median follow-up of 36 months with an estimated relapse rate of ~7%-11% at 3 years. Both agents were well tolerated in the long term, with lymphocytopenia being the main concern. Fludarabine or bendamustine combinations with rituximab have recently been explored in two small series with promising results, 27,37 and an ongoing study (NCT01059786) is also comparing bendamustine with pentostatin, each in combination with rituximab, in the multiply relapsed setting. In relapsed patients, treatment with interferon should be considered in the absence of any other available alternative. 38,39 Figure 2 shows a proposed algorithm for second-line treatment in HCL. Patients relapsing more than 2 years after initial treatment are treated with the same PA plus eight doses of rituximab, while those relapsing within the first 2 years after initial treatment should receive an alternative PA plus eight doses of rituximab. In the case of a second relapse, different treatment options are listed. Recently, a subgroup of patients has been described that does not respond to first-line PA or subsequent treatments.
These patients are characterized by the presence of leukocytosis, bulky splenomegaly, unmutated IGHV status, and p53 dysfunction, which generally confer resistance to PAs and a poor prognosis. 40 In cases with these clinical features, molecular analysis for p53 dysfunction (deletion/mutation) and IGHV mutational status is recommended to establish a different treatment approach in the context of clinical trials or the use of new agents based on molecular findings.

HCL variant
HCL-v is considered to be unrelated to HCL and, according to the 2008 World Health Organization classification, is now separately categorized as an unclassifiable splenic B-cell lymphoma/leukemia together with the splenic diffuse red pulp variant of B-cell lymphoma. However, it is important in the differential diagnosis of HCL because it responds differently to treatment. HCL-v differs from HCL in its morphology, immunophenotype, and molecular characteristics. As in HCL, the cells in most HCL-v cases are villous and large. However, the cells in HCL-v have a distinct nucleolus and round nucleus resembling B-cell prolymphocytic leukemia. The immunophenotype of HCL-v cells differs from that of HCL in that, as a rule, CD25 and HC2 are not expressed. CD103 is expressed infrequently, and CD11c is nearly always positive. Moreover, patients with HCL-v have wild-type BRAF. 4,41 HCL-v responds differently to standard HCL treatment, being generally resistant to interferon alpha and rarely achieving CR with either pentostatin or cladribine. In the largest series published to date (n=58), splenectomy resulted in a good PR in two-thirds of patients. 42 Very rarely, patients may achieve CR after three or four courses of cladribine. Various case reports have demonstrated the usefulness of rituximab either alone or after splenectomy in the treatment of HCL-v patients.
[43][44][45] In a recent study published by Kreitman et al, 46 ten HCL-v patients received cladribine 0.15 mg/kg on days 1-5, with eight weekly doses of rituximab 375 mg/m² beginning on day 1. After 6 months, nine out of ten patients had achieved CR, and eight remained free of MRD after a follow-up of 12-48 (median 27) months. No dose-limiting toxicities were observed when combining cladribine and rituximab. Cytopenias in CRs resolved within 7-211 (median 34) days without major infections. The authors concluded that although cladribine alone lacks effectiveness for early or relapsed HCL-v, cladribine with immediate rituximab achieves CRs without MRD and its administration is feasible.

Emerging treatment options in HCL
As a result of an improved understanding of the pathobiology of HCL, new therapeutic targets have been identified (summarized in Table 2). Agents targeting these molecular pathways are under investigation, and some of them have demonstrated significant activity in relapsed patients. Ongoing and planned clinical trials are assessing several treatment strategies, such as the combination of PAs and various anti-CD20 monoclonal antibodies, recombinant immunotoxins targeting CD22 (eg, moxetumomab pasudotox), 47,48 BRAF inhibitors, such as vemurafenib, 49,50 and B-cell receptor (BCR) signaling inhibitors (eg, the Bruton's tyrosine kinase inhibitor ibrutinib). In Table 3, we show preliminary results and toxic effects of novel therapeutic agents in HCL.

Immunotoxins
These hybrid agents contain a monoclonal antibody that identifies and binds to a specific cell target (CD25 or CD22 in the case of HCL) and a truncated toxin (Pseudomonas or diphtheria exotoxin), which is released inside the cell to block protein synthesis. A few clinical trials from the National Cancer Institute have studied the efficacy of recombinant immunotoxins against CD22 (BL22 and moxetumomab pasudotox) and CD25 (LMB-2) in patients with refractory HCL.
Since CD25 is not universally overexpressed in HCL, the focus has been diverted to CD22, which is uniformly overexpressed in HCL. Kreitman et al 51 detailed the use of the recombinant immunotoxin BL22 in a Phase I study of patients resistant to first-line cladribine. The study included 16 HCL patients with CD22 expression, three of whom had variant disease. Of the patients treated, eleven achieved a CR and two attained a PR. Three patients ultimately relapsed (two with HCL-v) but achieved a second CR after retreatment. The side effects included dose-limiting cytokine release syndrome. Additionally, hemolytic uremic syndrome developed in two patients; both cases resolved with plasmapheresis. No hematological toxicity was observed. In a Phase II clinical trial, the immunotoxin BL22 was tested in 36 patients with relapsed and refractory HCL. 52 After one cycle (40 μg/kg every other day, three doses), the CR rate was 25% and the ORR was 50%, with CR improving to 47% and ORR to 72% after retreatment. The median follow-up of this study was 26 months; six patients relapsed, including four of the 17 patients with a CR. ORR was impacted by spleen size. The regimen was well tolerated, with mostly grade 1 and 2 hypoalbuminemia and elevated liver function tests. Three patients developed hemolytic uremic syndrome, but none required plasmapheresis. In order to enhance the efficacy and safety of BL22, the binding affinity of the immunotoxin to CD22 was improved by identifying a mutant (with three different amino acids) in the complementarity-determining region 3 of the monoclonal antibody variable heavy chain. This immunotoxin is known as moxetumomab pasudotox, or HA22. An update of the Phase I dose-escalation trial of moxetumomab pasudotox in 49 patients with relapsed and refractory HCL showed a CR rate of 57% and an ORR of 88%.
53 Among patients who received the high dose (50 μg/kg every other day, three doses; n=33), 64% achieved CR, and 13 out of 21 achieved MRD-negative CR, which was maintained after a median of 32 months. Two patients developed grade 2 hemolytic uremic syndrome, which was reversible without specific treatment. Those who had previously undergone splenectomy achieved PR. Patients with the lowest HCL burden appeared to have a better chance of a durable CR. A National Cancer Institute-supported study of moxetumomab pasudotox is ongoing.

BRAF and MAPK pathway inhibitors
The BRAF (V600E) mutation is a molecular hallmark of HCL-c. The BRAF gene is a member of the serine/threonine protein kinase family. Its product, the B-raf protein, is part of the signal transduction protein kinase family, which is critical to cell division and differentiation. BRAF mutations provide Ras-independent activation of the MAPK pathway, causing hyperactivation of ERK and thereby promoting the growth, survival, and differentiation of HCL cells. BRAF V600E mutations also occur in other cancers, such as malignant melanoma and papillary thyroid cancer. Clinical trials of BRAF inhibitors to treat HCL are motivated by results from the use of BRAF inhibitors to treat metastatic melanoma. Vemurafenib is an oral agent that inhibits the BRAF kinase and specifically targets cells containing BRAF (V600E) mutations. Anecdotal case reports have demonstrated the activity of vemurafenib in patients with relapsed HCL. 49,50,54 Preliminary results of a Phase II trial of vemurafenib in five patients with relapsed HCL have been reported. 55 At the 2014 American Society of Hematology annual meeting, Park et al presented data on 20 patients treated with vemurafenib 960 mg twice daily for 3 months. 56 Patients with a partial or complete response with detectable MRD were allowed to receive vemurafenib for up to three additional months.
Twenty patients were evaluable for toxicity and 17 for disease response, with a median follow-up of 10 months. Among the evaluable patients, the ORR was 100%. Six patients achieved CR (four MRD− and two MRD+) and eleven achieved PR with very minimal disease. Tiacci et al 57 reported the results of an Italian Phase II clinical trial evaluating the efficacy and safety of vemurafenib in HCL patients who were refractory to or had relapsed after PAs. The study included 28 patients: six refractory to first-line treatment with a PA and 21 who relapsed early and/or repeatedly after PAs. Vemurafenib was administered orally on an outpatient basis at a dose of 960 mg twice daily for a median of 16 weeks and was generally well tolerated. The ORR was 96% (34.6% CR and 61.4% PR), obtained after a median of 8 weeks and 9 weeks, respectively. Retreatment with vemurafenib was able to induce remissions in patients relapsing after a CR, but was less effective in patients relapsing after a PR. Sascha et al 58 presented a European multicenter experience of 21 patients treated with vemurafenib. Patients had received a median of 3 (range 0-12) previous treatment lines. Vemurafenib was started at a dose of 240 mg bid in 18 patients and continued at this dose in 14 patients. In the remaining patients, doses were escalated to between 480 mg and 960 mg, with a median treatment duration of 90 (range 55-167) days. Blood counts improved in 20 patients, who met response criteria. The median times to neutrophil count recovery, platelet count recovery, and improvement of anemia were 39 days, 28 days, and 67 days, respectively. Seven patients achieved a CR and 13 patients achieved a PR. Patients who received >240 mg bid (n=8) did not achieve significantly more CRs than patients receiving 240 mg bid. The median observation time was 12 months (range 3-31 months), and median event-free survival (retreatment or death) was 17 months for all patients. Seven patients were retreated at relapse a median of 10 months after stopping vemurafenib.
All patients again demonstrated a response to vemurafenib. The main toxic effects of vemurafenib are skin rash, photosensitivity, and arthralgia. Extrapolating the clinical data from studies of BRAF inhibitors in melanoma, some of the concerns with the use of vemurafenib in HCL are the development of skin cancers, relapse of HCL owing to activation of the MEK pathway, and development of resistance to vemurafenib. 59,60 As a result, other agents, such as dabrafenib, are now being studied. Another strategy is to combine BRAF inhibitors with MEK inhibitors such as trametinib. In patients with melanoma, it has been shown that PFS durations were better after treatment with a combination of dabrafenib and trametinib than with dabrafenib alone, 61 results that have been validated in in vitro studies of HCL cells. 62 Patients with HCL-v and VH4-34 variants have mutations in the MAPK pathway; therefore, this combination strategy might be useful for patients with HCL-v or VH4-34, which generally do not respond well to PAs. However, the long-term impact of this strategy is currently unclear, and several reports of resistance to these BRAF inhibitors are emerging. 59

Ibrutinib
BCR is a transmembrane receptor complex consisting of an extracellular portion (a surface immunoglobulin receptor) and a cytoplasmic portion comprising a heterodimer of CD79a and CD79b. The extracellular receptor consists of two heavy and two light chains, which bind to the antigen. Activation of BCR results in the stimulation of various intracellular signaling pathways involving kinases such as SYK, BTK, and LYN, thus stimulating lymphoid cell growth. In recent years, the development of BCR signaling kinase inhibitors has brought a paradigm shift in the therapeutic landscape of lymphoid malignancies. Currently, ibrutinib is the only BTK inhibitor that is commercially available. Ibrutinib has shown excellent results in chronic lymphocytic leukemia and mantle cell lymphoma.
63,64 Sivina et al 65 explored the expression and function of the BCR-associated kinase BTK and its inhibitor ibrutinib in HCL, demonstrating that BTK protein is expressed in HCL cells and that low ibrutinib concentrations induce full BTK target occupancy in HCL cells. Treatment with ibrutinib inhibited BCR downstream signaling and the proliferation and metabolism of HCL cells, suggesting that ibrutinib has a direct effect on HCL cell survival and growth. All this justifies the development of BCR-associated kinase inhibitors, such as ibrutinib, in patients with HCL. A Phase II clinical trial (NCT01841723) is currently evaluating the use of ibrutinib in patients with relapsed HCL. Preliminary data presented at the American Society of Clinical Oncology (ASCO) annual meeting suggested that this agent is well tolerated in HCL. 66 These data were updated at the 20th Congress of the European Hematology Association 67 with the following results: 13 patients (two with HCL-v and eleven with relapsed HCL-c) received continuous oral ibrutinib (420 mg daily) in 28-day cycles. One MRD-negative complete response (HCL-c) and five partial responses were observed (ORR 46%). Four additional patients (30%) with stable disease have experienced clinical benefit that does not meet the criteria for PR and continue on treatment. At a median follow-up of 14.5 months, nine patients (69%) remain progression free on treatment, three patients (one with HCL-v and two with HCL-c) have progressed, and one patient (HCL-c) discontinued in cycle 8 after failing to resolve baseline neutropenia. The most frequent (≥20%) grade 3/4 adverse events included lymphopenia (37%), hypophosphatemia (30%), neutropenia (23%), and infection (23%). Common grade 1/2 adverse events included myalgias (61%), headache (38%), dizziness (38%), diarrhea (38%), arthralgias (30%), rash (30%), and fatigue (30%). Other hematologic adverse events included grade 1/2 anemia (38%) and grade 1/2 thrombocytopenia (38%).
Redistribution lymphocytosis (peaking at day 8) occurred in both HCL-v patients and in one HCL-c patient with circulating disease at baseline.

Expert commentary
HCL therapy with PAs has improved treatment outcomes, with long-term remissions and a life expectancy that is not significantly different from that of a healthy matched population. However, long-term follow-up studies have shown that relapse-free survival curves do not reach a plateau with PAs and late relapses can occur; therefore, the treatment of relapsed and refractory disease remains a challenge. In patients with HCL, due to its marked efficacy, brief treatment duration, and favorable toxicity profile, our first-line treatment is cladribine, followed by a second cycle if CR is not achieved. MRD should be monitored, and we recommend four to eight doses of rituximab to try to achieve a complete MRD-negative remission. It is expected that the results of ongoing studies will provide information regarding the effectiveness of adding rituximab to first-line treatment. Upon relapse after a long remission, and especially in patients treated with cladribine, we recommend an analysis of the mutational status of IGHV genes and a search for VH4-34 gene usage, together with an analysis of TP53 mutations. If negative, patients may be retreated with a second course of cladribine or pentostatin with rituximab. However, a change of PA is recommended in patients who have only had a short remission. In patients with a second relapse, refractory disease, positivity for TP53 or BRAF V600E mutations, or VH4-34 gene usage, treatment with targeted immunotoxins, BRAF inhibitors (either alone or in combination with MEK inhibitors), and BCR signaling inhibitors, such as ibrutinib, has provided new approaches. Positive Phase II data for moxetumomab pasudotox, an anti-CD22 immunotoxin, indicate that immunotoxin therapy can achieve durable MRD-negative CRs in patients with refractory disease.
Vemurafenib and ibrutinib are currently the most promising agents undergoing clinical trials. The relative lack of serious side effects, such as myelotoxicity, and oral administration are the major advantages of these agents over conventional PAs.
Relationship of Sulfated Glycosaminoglycans and Cholesterol Content in Normal and Arteriosclerotic Human Aorta

Sulfated glycosaminoglycans were extracted from arteriosclerotic and adjacent nonarteriosclerotic areas of human aortas from persons ages 28 to 83 years; the glycosaminoglycans were compared with the cholesterol and triglyceride content of the tissues. Sulfated glycosaminoglycans were isolated after proteolytic digestion of defatted arterial tissue and were quantified after reductive labeling with NaB3H4. The amount of glycosaminoglycans in the aorta increased with the age of the person and the cholesterol content (degree of arteriosclerosis) of the aorta. The proportion of chondroitin sulfate/dermatan sulfate increased significantly with age and cholesterol content, whereas the corresponding amount of heparan sulfate decreased. (Arteriosclerosis 9:154-158, March/April 1989)

Proliferation of arterial smooth muscle cells and accumulation of lipids are basic events in the pathogenesis of arteriosclerosis. 1,2 Proliferating arterial smooth muscle cells synthesize and secrete increased amounts of proteoglycans, 3 which are capable of interacting with lipoproteins.
4-7 The complexes thus formed cause the accumulation of low density lipoprotein in the arterial wall and the subsequent development of arteriosclerosis. Proteoglycans containing chondroitin sulfate, dermatan sulfate, and heparan sulfate glycosaminoglycans have been detected in human 8 and in mammalian 9-12 arteries. Such proteoglycans have been characterized as individual macromolecular species by chemical and physicochemical procedures. 10-13 Extracellular proteochondroitin sulfate/dermatan sulfate may constitute a viscoelastic gel that regulates the flux of macromolecular plasma constituents into the vessel wall. 14 Proteoheparan sulfate, on the other hand, is thought to be involved in the control of smooth muscle cell growth, because cell-associated heparan sulfate from confluent arterial smooth muscle cells 15 and arterial endothelial cells 16,17 specifically inhibits the proliferation of arterial smooth muscle cells. It has been established that the content of sulfated glycosaminoglycans in the artery increases as arteriosclerosis progresses, 18,19 but no information is available about corresponding changes in the content of heparan sulfate. In view of the antiproliferative activity of heparan sulfate and its potential role in the pathogenesis of arteriosclerosis, 13 the quantitative evaluation of sulfated glycosaminoglycans is of special interest. The present study shows that the increasing cholesterol content during the development of arteriosclerosis in the human aorta is accompanied by decreasing amounts of heparan sulfate and an increase in chondroitin sulfate/dermatan sulfate.

Isolation and Fractionation of Sulfated Glycosaminoglycans
Thoracic aortas were obtained from the Institute of Pathology, University of Münster. Twenty-five human aortas were obtained at autopsy within 8 hours after death. Segments of the aortas ranging from the left arteria subclavia to the sixth arteria intercostalis were used for the studies.
Throughout the preparation of samples, all aortas were kept on ice. The specimens were freed from fat and adhering connective tissue and were rinsed with cold saline. After removal of the adventitia, macroscopically normal-appearing and adjacent arteriosclerotic specimens (on average 3 cm apart) ranging from 0.2 to 2.5 g were selected for analysis. Focal intimal thickening with the appearance of fatty streaks and/or fibrous plaques was regarded as arteriosclerotic. Plaques with hemorrhage, ulceration, or mineralization were not included. The samples were minced into 5x5 mm pieces, were delipidated with chloroform/methanol (2:1), and were dried under vacuum in the presence of paraffin. Total lipids were recovered after solvent removal and were analyzed for cholesterol and triglycerides by standard methods. 20 The defatted samples were subjected to proteolysis in 0.1 M acetate buffer (pH 5.8) containing 0.5% papain (200 U/g), 0.05% EDTA, and 0.005 M cysteine at 65°C for 24 hours. After papain digestion, which completely dissolved the arterial tissue, the glycosaminoglycans were precipitated with cetylpyridinium chloride at a final concentration of 1% (wt/vol). The cetylpyridinium-glycosaminoglycan precipitate was dissolved in 2 ml of 1 M MgCl2 and was precipitated with 2.5 volumes of ethanol containing potassium acetate (final concentration 1% [wt/vol]). The precipitated potassium salts of the glycosaminoglycans were centrifuged and dissolved in distilled water. Glycosaminoglycans were precipitated by the addition of cetylpyridinium chloride in the presence of MgCl2 to final concentrations of 1% wt/vol and 0.125 M, respectively. Under these conditions, hyaluronate remained in solution.
The insoluble cetylpyridinium salts of glycosaminoglycans were pelleted by centrifugation, were dissolved in 1 M MgCl2, were precipitated with ethanol containing potassium acetate (final concentration, 1% wt/vol), were washed twice with ethanol and once with ether, and then were dried under a stream of air. Glycosaminoglycans were radiolabeled by reduction with NaB3H4 according to the procedure of Glaser and Conrad. 21 After destruction of excess borohydride, glycosaminoglycans were recovered by gel filtration on a Sephadex G 50 column (0.8x50 cm) equilibrated with 1 M NaCl at ambient temperature. Material eluting with 1 M NaCl between Kav=0 and Kav=0.1 was pooled and, after adding 0.5 mg unlabeled chondroitin sulfate, was dialyzed against 6 M urea in 0.1 M Tris-HCl, pH 7.0. For separation of glycosaminoglycans by ion exchange chromatography, the dialyzed pools were applied to a 2 ml DE 52 column equilibrated with the above buffer at ambient temperature. After elution of unbound material with 5 ml of buffer, bound glycosaminoglycans were eluted with a linear gradient of NaCl (0 to 0.6 M, 10 g/10 g) in the above buffer.

Further Procedures
Chondroitin sulfate and heparan sulfate were assayed enzymatically as described elsewhere. 22 After enzyme digestions, samples were thermally inactivated and subjected to gel filtration on a Sephadex G 50 fine column (0.8x50 cm) equilibrated and eluted with 1 M NaCl. The appearance of 3H-labeled material in the total volume (Vt) was indicative of degradation. Radioactivity was measured with a liquid scintillation counter (Packard A 4430, Packard Instruments GmbH, Frankfurt, FRG) using Instagel (Packard Instruments GmbH) as the scintillation medium.

Results
The total cholesterol and triglyceride content and the total sulfated glycosaminoglycans were analyzed in grossly normal-appearing regions and in adjacent arteriosclerotic areas of 25 human aortas from subjects 28 to 83 years old.
The total cholesterol and triglyceride content was quantified by enzymatic analysis according to standard procedures. The total content of sulfated glycosaminoglycans was determined after quantitative release of chondroitin sulfate, dermatan sulfate, and heparan sulfate from the respective proteoglycans by proteolytic digestion and complete dissolution of the arterial wall, followed by β-elimination and 3H-labeling of the monosaccharide residue (xylose) at the reducing end of the polysaccharide chain. The 3H-radioactivity reflected the number of glycosaminoglycan molecules. All values were expressed as milligrams or 3H-cpm/g dry weight of tissue. No effort was made to quantify native proteoglycans, because these are not quantitatively extractable from tissue with dissociative solvents. 9,18

Total Glycosaminoglycan Content Increases with Age and Degree of Arteriosclerosis
Plots of the radioactivity of 3H-glycosaminoglycans against age or cholesterol content of the aorta indicated that the glycosaminoglycan content increased proportionally (Figure 1). The age-dependent increase appeared more pronounced in arteriosclerotic tissue than in normal areas, but the correlation coefficient was less than 0.7, so the difference was not significant. No correlation between triglyceride and 3H-glycosaminoglycan contents was found (data not shown).

Amount of Heparan Sulfate Relative to Total Glycosaminoglycan Decreases, and Chondroitin Sulfate/Dermatan Sulfate Increases, with Increased Age and Cholesterol Content
Heparan sulfate and chondroitin sulfate/dermatan sulfate were separated on the basis of their different anionic charges (Figure 2). Heparan sulfate was distinguished by its sensitivity to heparitinase; chondroitin sulfate/dermatan sulfate, by its resistance to heparitinase and its susceptibility to chondroitinase ABC.

Figure 2. Separation of 3H-heparan sulfate and 3H-chondroitin sulfate/dermatan sulfate by ion exchange chromatography. The total sulfated glycosaminoglycans were isolated and labeled by reductive β-elimination in the presence of NaB3H4. 5x10^3 cpm were applied to a DE 52 column (2 ml) and eluted with a NaCl gradient (50 g of 50 mM Tris-HCl buffer, pH 7.0, and 50 g of 50 mM Tris-HCl buffer containing 0.6 M NaCl). Bars represent fractions that were pooled and used for further analysis.

When age- and cholesterol-dependent changes in the glycosaminoglycan fractions were sought, it became clear that the relative proportion of heparan sulfate decreased linearly with increasing age and increasing cholesterol content in both normal and arteriosclerotic regions of the arteries. The correlation coefficients were r=-0.91 for normal and r=-0.95 for arteriosclerotic segments (Figures 3A and 3B). From the data in Figure 3, it can be calculated that the amount of heparan sulfate relative to total glycosaminoglycans decreased from 41% to 20% with increasing age and from 41% to 23% with increasing cholesterol content. In contrast, the chondroitin sulfate/dermatan sulfate fraction showed a marked linear increase with increasing cholesterol content and age, the correlation coefficients being r=0.92 and r=0.95, respectively, for normal and arteriosclerotic areas of the arteries (Figures 4A and 4B).

Discussion
The glycosaminoglycan content of arterial tissue and its alteration during arteriosclerosis has been the subject of several reports (see references 23 and 24 and the references cited therein). However, no information is available on changes in glycosaminoglycan concentration in relation to the lipid content of human arteries. Recently, Ylä-Herttuala et al 24 studied the composition of glycosaminoglycans in human coronary arteries and found that, with increasing age and in advanced arteriosclerotic lesions, there were increases in the proportion of chondroitin sulfate/dermatan sulfate and decreases in heparan sulfate.
However, the arteriosclerotic lesions were not characterized with respect to their lipid content. In our study, a significant age- and cholesterol-dependent increase in the percentage composition of chondroitin sulfate and dermatan sulfate, and a corresponding decrease in heparan sulfate, was demonstrated. Macroscopically normal-appearing and arteriosclerotic specimens were included in the analysis. However, since even ostensibly undiseased segments of vessels may contain early arteriosclerotic lesions, cholesterol content, which is the characteristic feature of arteriosclerotic lesions, was considered the definitive parameter for the degree of arteriosclerosis. In our study, quantification of the individual glycosaminoglycans was based on selective 3H-labeling of the reducing terminus of each polysaccharide chain. Therefore, the radioactivity does not reflect the glycosaminoglycan concentration, but rather the number of glycosaminoglycan chains. Consequently, the relative increase in chondroitin sulfate/dermatan sulfate and the decrease in heparan sulfate with increasing age and increasing degree of arteriosclerosis indicate corresponding changes in the number of glycosaminoglycan chains. However, when the chain length of heparan sulfate isolated from areas with low cholesterol content was compared with that from areas with high cholesterol content, no significant differences were found, as judged from the elution profile of the heparan sulfate chains on Sephacryl S 300 (Hollmann, unpublished observations). Likewise, the lengths of chondroitin sulfate and dermatan sulfate chains were not significantly different, although in some cases longer chondroitin sulfate/dermatan sulfate chains were isolated from arteriosclerotic lesions than from normal areas. This confirms the observations of Wagner et al., 23 who calculated that in arteriosclerotic plaques there are fewer, but longer, chondroitin sulfate chains relative to core protein in the proteoglycan molecule.
The increase of chondroitin sulfate/dermatan sulfate with increasing cholesterol content (Figure 3A) is in accordance with the finding that cholesterol-rich low density lipoproteins in arteriosclerotic arteries accumulate concomitantly with glycosaminoglycans. 7,25 This phenomenon is explained by our results. Proliferation of arterial smooth muscle cells is a characteristic feature in the development of arteriosclerotic plaques. Since proliferating arterial smooth muscle cells have been shown to synthesize and secrete larger amounts of dermatan sulfate-rich proteoglycans than quiescent cells, 3 the known low density lipoprotein binding capacity of dermatan sulfate-rich proteoglycans causes trapping of lipoprotein, preferentially in areas of cell proliferation. On the other hand, the decrease in heparan sulfate with increasing severity of arteriosclerosis (Figure 4) is of special interest, because arterial smooth muscle cells produce a heparan sulfate species with antiproliferative activity. 15 Thus, heparan sulfate is thought to be involved in controlling the growth of smooth muscle cells. 15-17 However, it remains to be established whether loss of heparan sulfate can cause accelerated cell proliferation, which is known to be an early event in the development of arteriosclerotic plaques. Atherogenesis and the concomitant change in the glycosaminoglycan pattern may be initiated by hyperlipoproteinemia. This assumption is supported by the finding that the accumulation of plasma low density lipoprotein in the arterial wall after hypercholesterolemia induces altered glycosaminoglycan synthesis in medial smooth muscle cells. 26
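The linear relationships and correlation coefficients reported above (e.g., r=-0.91 to -0.95 for heparan sulfate versus age or cholesterol content) follow from a standard Pearson correlation over paired measurements. As a minimal illustrative sketch (not the authors' analysis code), the computation can be reproduced on hypothetical data of the same form; the sample values below are invented for demonstration only.

```python
# Illustrative sketch: computing a Pearson correlation coefficient of the
# kind reported for Figures 3 and 4. The data points are hypothetical
# (cholesterol content in mg/g dry weight, % heparan sulfate of total
# sulfated glycosaminoglycans); they are NOT taken from the paper.
import math

samples = [(10, 41.0), (40, 35.5), (80, 30.0), (120, 26.5), (160, 23.0)]

def pearson_r(pairs):
    """Pearson correlation coefficient for a list of (x, y) pairs."""
    n = len(pairs)
    mx = sum(x for x, _ in pairs) / n
    my = sum(y for _, y in pairs) / n
    sxy = sum((x - mx) * (y - my) for x, y in pairs)   # covariance sum
    sxx = sum((x - mx) ** 2 for x, _ in pairs)         # variance sums
    syy = sum((y - my) ** 2 for _, y in pairs)
    return sxy / math.sqrt(sxx * syy)

r = pearson_r(samples)
print(f"r = {r:.2f}")  # strongly negative for this declining trend
```

A strongly negative r on such data mirrors the reported inverse relationship between heparan sulfate proportion and cholesterol content; a positive r of similar magnitude would correspond to the chondroitin sulfate/dermatan sulfate trend.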
(−)-Epicatechin protects thoracic aortic perivascular adipose tissue from whitening in high-fat fed mice † High adipose tissue (AT) accumulation in the body increases the risk for many metabolic and chronic diseases. This work investigated the capacity of the flavonoid (−)-epicatechin to prevent undesirable modifications of AT in mice fed a high-fat diet. Studies were focused on thoracic aorta perivascular AT (taPVAT), which is involved in the control of blood vessel tone, among other functions. Male C57BL/6J mice were fed for 15 weeks a high-fat diet with or without added (−)-epicatechin (20 mg per kg body weight per d). In high-fat diet fed mice, (−)-epicatechin supplementation: (i) prevented the expansion of taPVAT, (ii) attenuated the whitening of taPVAT (according to the adipocyte morphology, diameter, and uncoupling protein-1 (UCP-1) levels) and (iii) blunted the increase in plasma glucose and cholesterol. The observed taPVAT modifications were not associated with alterations in the aorta wall thickness, aorta tumor necrosis factor-alpha (TNF-α) and NADPH-oxidase 2 (NOX2) expression, and endothelial nitric oxide synthase (eNOS) phosphorylation levels. In summary, our results indicate (−)-epicatechin as a relevant bioactive protecting from the slow and silent development of metabolic and chronic diseases that are associated with excessive fat intake. Introduction Adipose tissue (AT) dysfunction is associated with a state of chronic inflammation that is linked to the onset of cardiovascular and metabolic diseases, including hypertension, type 2 diabetes, and non-alcoholic fatty liver disease. 1 Thus, controlling AT expansion and dysfunction is a strategy to improve health by reducing the incidence and consequences of those diseases.
AT has different localization and functional capabilities, which define two major types of AT: white AT (WAT) and brown AT (BAT). In terms of distribution in mammals, WAT is mostly subcutaneous and visceral. The latter includes mesenteric (mWAT), epididymal (eWAT), retroperitoneal (rWAT), and perirenal (pWAT) AT pads. Beyond the well-established interscapular depots in human infants and rodents, BAT is also present in adult subjects. 2,3 A characteristic that differentiates WAT and BAT is the density of mitochondria in adipocytes, which is lower in WAT than in BAT. Also, the use of energy favors lipid storage in WAT but dissipates as heat in BAT. In addition, a third type of AT exists surrounding blood vessels, i.e. the perivascular AT (PVAT). This AT displays features of WAT and BAT. 4 In rodents, PVAT surrounding the abdominal aorta exhibits a WAT-like phenotype, but the PVAT surrounding the thoracic aorta (taPVAT) is more similar to BAT in terms of morphology and functions. AT pads can interchange their structural, cellular and molecular characteristics in response to both physiological and pathological conditions. 5 This plasticity can result in positive or negative health effects. Positive examples are the shift from WAT to BAT (browning) after cold exposure or physical activity. In contrast, undesirable changes from BAT to WAT (whitening) occur associated with age and obesity. 6 Flavonoids are compounds present in edible fruits and vegetables, and increasing evidence supports the benefits of their consumption in human health. 7-16 In terms of mechanisms of disease, (−)-epicatechin supplementation has been shown to mitigate systemic and WAT insulin resistance in high-fat fed mice, 15 in part due to its capacity to inhibit WAT inflammation, endoplasmic reticulum stress and oxidative stress.
17,18 In this work, we investigated the capacity of (−)-epicatechin to prevent pathological modifications of AT developed in mice fed a high-fat diet. We observed that (−)-epicatechin supplementation attenuated the high-fat induced whitening of taPVAT. Animals, diets and experimental design All procedures were in agreement with standards for the care of laboratory animals as outlined in the National Institutes of Health Guide for the Care and Use of Laboratory Animals (NIH Pub. No. 85-23, revised 1996) and were approved by the Institutional Committee for the Care and Use of Laboratory Animals, School of Pharmacy and Biochemistry, University of Buenos Aires, Argentina (CUDAP: EXP-UBA: 75405/16). Male C57BL/6J mice were housed under conditions of controlled temperature (21-25 °C) and humidity, with a 12 h light/dark cycle. Mice (8 per group) weighing 20 ± 2 g were randomly divided into the following groups depending on the diet: (i) control group (C), receiving a control diet (10% of total calories from lard fat); (ii) control-epicatechin group (CE), receiving the control diet supplemented with (−)-epicatechin (20 mg per kg body weight per d); (iii) high-fat group (HF), receiving a high-fat diet (60% of total calories from lard fat); and (iv) high-fat-epicatechin group (HFE), receiving the high-fat diet supplemented with (−)-epicatechin (20 mg per kg body weight per d). 18,19 The amount of (−)-epicatechin provided to mice is equivalent to 200 mg d−1 for a 70 kg human, a quantity attainable through the optimization of fruit and vegetable intake and/or pharmacological strategies.
10,11 The control and high-fat diet compositions are shown in the ESI.† Pellets were prepared by adding the lard fat and/or (−)-epicatechin once every two weeks, to adjust the amount of (−)-epicatechin according to food consumption and animal weight. Dry pellets were stored at 4 °C. Food consumption and body weight were recorded weekly. After 15 weeks of the respective treatments, mice were weighed and euthanized in a CO2 chamber. Blood was collected from the abdominal aorta into heparinized tubes and plasma was obtained after centrifugation at 600g for 15 min at 4 °C. Blood plasma samples were frozen at −80 °C. The clean aorta and the aorta surrounded by taPVAT, eWAT, mWAT, rWAT, and pWAT were excised immediately and either processed for histology or flash frozen in liquid N2 for further analyses. Biochemical determinations Glucose, total cholesterol, and triglycerides in plasma were measured using a Cobas C-501 autoanalyzer (Roche Diagnostics, Mannheim, Germany). Western blotting analysis eWAT and mWAT were homogenized (proportion 1:3, w:v) in lysis buffer (150 mM NaCl, 50 mM Trizma-HCl, 1% (v/v) NP-40, pH 8.0) in the presence of protease and phosphatase inhibitors, and centrifuged at 600g for 10 min at 4 °C. The supernatant was collected and considered as the total homogenate. Total homogenates were added to a 2× solution of Laemmli buffer and heated at 95 °C for 5 min. Sample aliquots containing 40 μg of protein were separated by reducing 10% (w/v) polyacrylamide gel electrophoresis, and electroblotted onto polyvinylidene difluoride membranes. Colored molecular weight standards (GE Healthcare, Piscataway, NJ, USA) were run simultaneously. Membranes were blocked for 2 h in 5% (w/v) nonfat milk and incubated overnight in the presence of the corresponding primary antibody (1:1000 dilution in PBS). After a subsequent incubation for 90 min at room temperature in the presence of the corresponding HRP-conjugated secondary antibody (1:5000 dilution in PBS), complexes
were visualized by chemiluminescence. Films were scanned and the densitometry analysis was performed using ImageJ (National Institutes of Health, Bethesda, Maryland, USA). Proteins were normalized to the β-actin content. The protein content of total homogenates was measured by the Lowry method. 20 Histological and immunochemical analysis of AT and aorta For each animal, a portion of thoracic aorta with surrounding taPVAT and a portion of eWAT and mWAT were separated, fixed in phosphate-buffered 10% (v/v) formaldehyde (pH 7.2) and embedded in paraffin. Three-µm sections were cut and stained with hematoxylin-eosin or Masson's trichrome stains. Histological evaluations were performed using a Nikon E400 light microscope (Nikon Instrument Group, Melville, NY, USA). To evaluate aorta morphometry, the wall media thickness and lumen diameter were measured. To establish expansion of taPVAT, the area of taPVAT was relativized to the aortic wall media thickness. Immunohistochemistry of taPVAT was evaluated with antibodies against UCP-1 (1:100 in PBS). Immunostaining was quantified as the percentage of positive staining per area from 20 random images viewed at ×400 magnification. Measurements were performed using Image-Pro Plus version 4.5 for Windows (Media Cybernetics, LP, Silver Spring, MD, USA). Statistical analysis Data from food intake, energy intake, body weight, biochemical parameters, and western blotting analysis were analyzed by one-way ANOVA followed by Tukey-Kramer's post-hoc test using StatView 5.0 (SAS Institute, Cary, NC, USA). Histological and immunohistochemical data were analyzed by the nonparametric Kruskal-Wallis test followed by Dunn's post-test using GraphPad Prism 5.01 (GraphPad Software, Inc., San Diego, CA, USA). All data are presented as mean ± standard error of the mean (SEM) with significance set at p < 0.05. Results Food consumption, energy intake, and body weight during the treatment period are shown in Fig.
1. Daily food consumption did not change significantly because of the different treatments. HF and HFE consumed lower amounts of food compared to C and CE (Fig. 1A), with differences among groups already being observed in the first week under treatment. The energy intake was calculated considering the food consumption and the caloric value of the control and high-fat diets, yielding similar caloric intakes among the four groups (Fig. 1B). Body weight increases were similar for the four experimental groups during the first 4 weeks, and were higher for the groups receiving the high-fat diet during the remaining treatment period (Fig. 1C). The final body weight in HF and HFE was significantly higher than that observed in C and CE (Fig. 1D). Body weight gain was independent of the presence of (−)-epicatechin in the diet. Glycemia and blood lipid parameters were determined as indexes of systemic cardiometabolic responses to the diets (Table 1). Glycemia was significantly higher in HF (p < 0.05) as compared to C, CE and HFE. Total plasma cholesterol was significantly higher in HF as compared to C and CE; no difference was found for total cholesterol between HFE and both C and CE. Triglyceride levels showed no differences among the groups. The relative mass of eWAT, mWAT, rWAT, and pWAT depots was significantly higher in HF and HFE compared to C and CE (Fig. 2). This resulted in an intra-abdominal adiposity (sum of the four fat pads relative to the body weight) of 84 ± 6 and 82 ± 5 mg g−1 for HF and HFE, respectively, with no significant difference between them, but higher than the values reported for C and CE (52 ± 5 and 63 ± 5 mg g−1, p < 0.05). Fig. 3 and 4 show the effects of the high-fat diet and of (−)-epicatechin supplementation on several characteristics of WAT pads. Representative images of hematoxylin-eosin stained eWAT and mWAT are shown in Fig.
3A and 4A, respectively. For eWAT, the analysis of adipocyte size distribution according to diameter (Fig. 3B) showed that: (i) the percentage of medium (51-75 µm) and large adipocytes (76-100 µm) was similar for the four experimental groups; (ii) the percentage of small adipocytes (25-50 µm) was lower in HF compared with C, CE and HFE; and (iii) the percentage of very large adipocytes (>100 µm) was higher in HF and HFE than in C and CE. The expression of the pro-inflammatory molecules IL-6 and iNOS was evaluated by western blotting. In eWAT, IL-6 expression was similar in C, CE and HF, and significantly lower (≈46% with respect to HF, p < 0.05) in HFE (Fig. 3C). Meanwhile, no significant differences were found in iNOS expression among the four experimental groups (Fig. 3D). Similar results were observed in mWAT. The adipocyte size distribution results showed that: (i) the percentage of medium (51-75 µm) and large adipocytes (76-100 µm) was similar in the four experimental groups; (ii) the percentage of small adipocytes (25-50 µm) was lower in HF compared with C, CE and HFE; and (iii) the percentage of very large adipocytes (>100 µm) was higher in HF and HFE than in C and CE (Fig. 4B). IL-6 expression was similar in C, CE and HF, and significantly lower (≈40% compared to HF, p < 0.05) in HFE (Fig. 4C). Meanwhile, no significant differences were found in iNOS expression among the four experimental groups (Fig. 4D). Morphometric characteristics of taPVAT are shown in Fig. 5A. The expansion of taPVAT was estimated through the ratio of taPVAT area/media thickness of the thoracic aorta. This ratio was significantly higher in HF (75%, p < 0.05) compared to C, CE, and HFE (Fig. 5B). Histological characterization of adipocytes showed clear differences between taPVAT characteristics in C and CE with respect to HF (Fig.
6A and B). Most of the taPVAT in C and CE showed a BAT-like appearance (round nuclei, and small and multilocular lipid droplets), and dispersed WAT-like adipocytes (flattened non-central nuclei, and large lipid droplets). In HF there was a clear inversion in the proportion of adipocyte phenotypes from BAT to WAT; meanwhile, HFE showed an intermediate phenotype, closer to C and CE. Quantification of the adipocyte size showed that the diameter was significantly lower in HFE compared to HF (29%, p < 0.05) (Fig. 6B). Additional confirmation of the BAT characteristics of the taPVAT was obtained by UCP-1 staining. The presence of UCP-1 showed a similar pattern/distribution to that observed for the adipocyte size (Fig. 6A and C). Quantification of UCP-1 staining shows that while about 68% of staining was observed in C and CE, only 16% was observed in HF (p < 0.05). In HFE, the staining was 48%, being significantly higher than that in HF (p < 0.05) and lower than that in C and CE (p < 0.05) (Fig. 6D). The physiological actions of taPVAT affect vascular remodeling and the function of the aorta. The four experimental groups showed a similar aorta wall thickness relative to the lumen diameter (Fig. 7A), suggesting the absence of vascular smooth muscle cell proliferation. In addition, no significant changes were observed in the expression of an inflammatory marker in the aorta, such as TNF-α (Fig. 6B), as well as in determinants of nitric oxide bioavailability: (i) the expression of gp91, the catalytic subunit of NOX2 (Fig. 6B and C); and (ii) the phosphorylation of eNOS (p-eNOS/eNOS).
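The four-group comparisons reported above were analyzed by one-way ANOVA (see Statistical analysis). A minimal pure-Python sketch of the F statistic, using made-up body-weight-like values for the four diet arms rather than the study's own measurements:

```python
# Minimal one-way ANOVA F statistic, pure Python. The values below are
# illustrative only; group labels mimic the four diet arms: C, CE, HF, HFE.
groups = {
    "C":   [20.1, 21.0, 19.5, 20.4],
    "CE":  [20.3, 19.8, 20.9, 20.6],
    "HF":  [23.5, 24.1, 23.0, 23.8],
    "HFE": [23.2, 23.9, 23.4, 23.7],
}

values = [x for g in groups.values() for x in g]
grand_mean = sum(values) / len(values)
k, n = len(groups), len(values)  # number of groups, total observations

# Between-group and within-group sums of squares
ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                 for g in groups.values())
ss_within = sum((x - sum(g) / len(g)) ** 2
                for g in groups.values() for x in g)

# F = mean square between / mean square within; large F means the group
# means differ far more than the within-group scatter would explain.
f_stat = (ss_between / (k - 1)) / (ss_within / (n - k))
print(round(f_stat, 1))
```

The F statistic would then be compared with the F distribution at (k − 1, n − k) degrees of freedom; the post-hoc Tukey-Kramer step the authors used identifies which specific pairs of groups differ.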
Discussion High AT accumulation in the body increases the risk for many metabolic and chronic diseases. The potential management through bioactives of undesirable changes that accompany the consumption of high-calorie diets and AT expansion can have a major impact on health. This work investigated the capacity of (−)-epicatechin to prevent adverse modifications of the AT in mice fed a high-fat diet. A major finding was that (−)-epicatechin supplementation attenuated the whitening of taPVAT, i.e. enlarged adipocytes and lower UCP-1 levels, induced by the high-fat diet. In parallel, the increases in plasma glucose and cholesterol associated with high-fat diet consumption were blunted by (−)-epicatechin. Experiments were carried out in a C57BL/6J sub-strain in which the high-fat diet did not lead to overt obesity. Thus, mice fed the high-fat diet showed a body weight 13% higher than mice fed the control diet, which is a moderate response compared to other data reported for the same strain, food and time of treatment, i.e. 35-45%. 15,17,18,21 This modest increase in weight gain allowed us to analyze the effects of (−)-epicatechin in an early stage of high-fat diet-induced dysmetabolism. The increase in weight in HF and HFE despite the similar caloric intake compared to C and CE could be explained by the fact that fat is energetically more efficient than carbohydrates and proteins at promoting a positive energy balance and fat accumulation. 22,23 In mice consuming the high-fat diet, both body weight and intra-abdominal adiposity were not affected by (−)-epicatechin supplementation, while increases in glycemia and plasma cholesterol were partially or totally prevented. Mitigation of high-fat induced alterations in plasma glucose and dyslipidemia was previously reported to be associated with (−)-epicatechin intake.
15,17,24,25 These results support the protective action of (−)-epicatechin in diet-induced metabolic disorders, even in the absence of extreme changes in body weight and/or fat accumulation. Increasing evidence suggests that the pathogenesis of obesity is to a large extent related to both a pathological expansion of WAT pads and systemic inflammation. 26 AT expansion can occur through hyperplasia (increased number of cells) and/or hypertrophy (increased cell size). 27-31 In the present work, the adipocyte size distribution was similar for high-fat fed and control fed mice, suggesting that the predominant expansion mechanism was hyperplasia. In agreement with this, both eWAT and mWAT pads in high-fat fed mice did not show a pro-inflammatory condition as determined by adipocyte levels of IL-6 and iNOS. Interestingly, the minimal changes manifested in the percentage of small and large/very large adipocytes in HF were not present when (−)-epicatechin was supplemented in the diet. These results are in agreement with previous reports showing systemic and local anti-inflammatory effects of (−)-epicatechin 32,33 and specifically in WAT pads modified by a high-fat diet. 17,24 Similar effects of (−)-epicatechin were observed even in the offspring of female mice fed a high-fat diet.
34-41 In the case of taPVAT, its BAT-like phenotype is crucial for the maintenance of the normal structure and function of the thoracic aorta segment; meanwhile, its whitening would be deleterious. This study shows that taPVAT was drastically transformed by the consumption of a high-fat diet, acquiring the features of WAT, while the WAT pads showed a mild expansion, due to adipocyte hyperplasia, without a substantial pro-inflammatory condition. Therefore, taPVAT whitening emerges as an early hallmark of alterations induced by a high-fat diet. (−)-Epicatechin supplementation was associated with the absence of taPVAT expansion and the attenuation of its transformation into WAT-type adipocytes, as evidenced by adipocyte morphology and UCP-1 expression. Accordingly, (−)-epicatechin has shown effects as a WAT browning agent in experimental models of diet-induced obesity and in adipocytes in culture. In rats fed a high-fat diet, the administration of (−)-epicatechin promoted an increase in the abdominal WAT expression of UCP-1 and deiodinase-2, both normally expressed in BAT adipocytes. 42 More recently, in a similar mouse model, (−)-epicatechin showed a WAT browning effect evidenced by the increased expression of key BAT proteins, including UCP-1. 43 The modulatory effects of (−)-epicatechin reversing/attenuating the taPVAT whitening could be explained by its capacity to mitigate the down-regulation of the BAT peroxisome proliferator-activated receptor. Those mechanisms were also shown to occur in 3T3-L1 adipocytes subjected to inflammatory conditions 44 or exposed to palmitate. 45 Finally, it is relevant to consider that the observed taPVAT modifications were not associated with alterations in aorta remodeling, i.e.
changes in the aorta wall thickness; inflammation, i.e. TNF-α expression; and determinants of superoxide anion and nitric oxide availability. The latter is suggested by the absence of changes in both the aorta NOX2 catalytic subunit gp91 and the phosphorylation of eNOS. These results agree with the concept that some aspects of the vascular pathology induced by high-fat diets are due to the development of dysfunctional PVAT. 41 It is important to note that for all parameters studied, (−)-epicatechin had non-significant effects in mice fed the control diet. This absence of effects suggests that (−)-epicatechin (as well as other related flavonoids) generally mitigates deviations related to pathological conditions, such as those triggered by high-fat diet consumption. 16 In summary, (−)-epicatechin provided protective effects in mice fed a high-fat diet in terms of fat metabolism. One of the most significant effects was the prevention of the acquisition of WAT features by taPVAT, affording a crucial strategy to maintain a healthy vasculature. Other positive actions of (−)-epicatechin were observed in metabolic changes triggered by excessive fat consumption. These observations make (−)-epicatechin a valuable bioactive protecting from the slow and silent development of metabolic diseases. Fig. 1 Food and energy intake and body weight. Daily food intake (A), daily energy intake (B), body weight (C), and final body weight (D) from CE, C, HF and HFE. Results are expressed as means ± SEM (n = 8). *p < 0.05 vs. C and CE. # p < 0.05 vs. all other groups. Fig.
3 Cell size distribution and expression of pro-inflammatory molecules in eWAT. eWAT representative images after hematoxylin-eosin staining (A), quantification of eWAT adipocyte size distribution (B), and IL-6 (C) and iNOS (D) expression in eWAT from CE, C, HF and HFE obtained by western blotting. β-Actin was used as the loading control. Results are expressed as means ± SEM. For statistics details see the statistical analysis section. *p < 0.05 vs. C and CE. # p < 0.05 vs. all other groups. Fig. 4 Cell size distribution and expression of pro-inflammatory molecules in mWAT. mWAT representative images of hematoxylin-eosin staining (A), quantification of mWAT adipocyte size distribution (B), and IL-6 (C) and iNOS (D) expression in mWAT from CE, C, HF and HFE obtained by western blotting. β-Actin was used as the loading control. Results are expressed as means ± SEM. For statistics details see the statistical analysis section. *p < 0.05 vs. C and CE. °p < 0.05 vs. HF. Fig. 5 taPVAT expansion. Transversal sections of the thoracic aorta and taPVAT stained with hematoxylin-eosin (4×) (A), and ratio of the area of taPVAT/media layer thickness of the aorta (B) from CE, C, HF and HFE. Results are expressed as means ± SEM (n = 4). # p < 0.05 vs. all other groups. Table 1 Effect of (−)-epicatechin on metabolic parameters in high-fat fed mice. a Variables were measured in plasma under non-fasting conditions. Results are expressed as means ± SEM (n = 8). # p < 0.05 vs. all other groups. *p < 0.05 vs. C.
Correlation between the severity of COVID-19 vaccine-related adverse events and the blood group of the vaccinees in Saudi Arabia: A web-based survey Background: Recent epidemiological studies have reported an association between the ABO blood group and the acquisition, symptom severity, and mortality rate of coronavirus disease 2019 (COVID-19). However, the association between the ABO blood group antigens and the type and severity of COVID-19 vaccine-related adverse reactions has not been elucidated. Patients and Methods: We conducted a cross-sectional, questionnaire-based study in Saudi Arabia from February to April 2022. The study cohort included adults who had received or were willing to receive at least two doses of a COVID-19 vaccine of any type. We used Chi-square test to assess the association between the ABO blood groups and vaccine-related adverse reactions. p values of <0.05 were considered significant. Results: Of the 1180 participants, approximately half were aged 18–30 years old, 69.2% were female, and 41.6% reported their blood group as O. The most frequent COVID-19 vaccine-related adverse reactions were fatigue (65%), pain at the injection site (56%), and headache (45.9%). These adverse reactions demonstrated significant correlations with the education level (p = 0.003) and nationality (p = 0.018) of the participants following the first dose, with gender (p < 0.001) following the second dose, and with the general health status (p < 0.001) after all the doses. Remarkably, no correlation was observed between the severity of the vaccine-related adverse reactions and ABO blood groups. Conclusion: Our findings do not support a correlation between the severity of COVID-19 vaccine-related adverse reactions and the ABO blood groups of the vaccinees. The creation of a national database is necessary to account for population differences. 
Introduction Severe acute respiratory syndrome coronavirus-2 (SARS-CoV-2) has caused a challenging and threatening global disease pandemic known as coronavirus disease 2019 (COVID-19). It is a highly contagious disease that has disrupted the world's health and economy. In parallel with the restrictions imposed to prevent viral spread and the trials of repurposed antiviral treatments, there has been accelerated development of vaccines to prevent or restrict potential viral damage. 1,2 Thus, the Food and Drug Administration and regulatory bodies in the United States and various countries have granted emergency use authorization to some of these rapidly developed COVID-19 vaccines, with more than 200 ongoing clinical trials for COVID-19 vaccines globally (Moderna COVID-19 Vaccine, 2019; Sunny et al., 2020). The most prevalent COVID-19 vaccines available are those based on mRNA platforms. Although these vaccines appear to be highly effective, they are also reactogenic, which means that they are likely to cause a noticeable immune response (Liu et al., 2021). The World Health Organization (WHO) defines adverse reactions as "a response to a drug that is noxious and unintended, and which occurs at doses normally used in man for prophylaxis, diagnosis, or therapy of disease, or for the modification of physiological function" (The Importance of Pharmacovigilance, 2002). Mild-to-moderate pain at the injection site was the most prevalent reaction among all 11 COVID-19 vaccine trials, with up to 88% of participants experiencing pain that typically resolved within 24-48 h after onset, with a higher incidence recorded in the younger population than in the older population (Li et al., 2020). Other serious adverse events include thrombotic thrombocytopenia syndrome (TTS) (Islam et al., 2021).
Researchers have identified antibodies that bind to platelet factor 4, similar to those associated with heparin-induced thrombocytopenia, in the absence of any previous heparin exposure. 9 Vaccines that are more likely to cause TTS, such as Vaxzevria, should be avoided in younger adults for whom an alternative vaccine is available. 8 Moreover, researchers specifically reported delayed intense local reactions in Moderna's phase III trial in 0.8% and 0.2% of the participants after the first and second doses, respectively. However, there was no mention of whether those who had reactions after the first dose experienced a recurrence after the second dose (Lindsey et al., 2021). Several host and viral factors play crucial roles in the host-virus interaction. Interestingly, a recent epidemiological study reported an association of the ABO blood group type with SARS-CoV-2 acquisition, symptom severity, and related mortality. 5 This is thought to be a result of natural antibodies against blood group antigens that may act via innate immune mechanisms to neutralize viral particles. 5 Alternatively, blood group antigens could serve as additional receptors for the virus, such that individuals expressing these antigens on epithelial cells would have a higher propensity for SARS-CoV-2 infection (Li et al., 2020). One study found a higher probability of testing positive for COVID-19 in patients with blood group A (Li et al., 2020). By contrast, researchers observed a lower probability of the infection in patients with blood group O than in the general population (Li et al., 2020). On the contrary, other studies have failed to establish such a correlation (Sunny et al., 2020).
Of note, the frequency of ABO blood groups in Saudi Arabia (SA) is as follows: O > A > B > AB, with Rh-positive predominance. 11 Interestingly, Alessa et al. did not find an association between COVID-19 vaccine-related adverse events and ABO blood groups among general surgeons in SA between July 2021 and May 2022. However, that study was limited by the small sample size and lacked generalizability, as it included only general surgeons who received an mRNA-based COVID-19 vaccine (Alessa et al., 2022). To the best of our knowledge, the association between ABO blood group antigens and the type and severity of COVID-19 vaccine-related adverse events has never been investigated in the Saudi general population, nor, to our knowledge, globally. Thus, this study aimed to investigate the relationship between COVID-19 vaccine-related adverse events of any type and the ABO blood groups in the general population to enable better understanding, prediction, and further management of the disease. Study design and subjects This cross-sectional, online-questionnaire-based study investigated the correlation between blood group antigens and the type and severity of COVID-19 vaccine-related adverse events in SA. The study included adults aged ≥18 years who were willing to receive at least two doses of the COVID-19 vaccine and those who had received at least one dose of a COVID-19 vaccine of any type. Participants aged <18 years and those not willing to receive a COVID-19 vaccine were excluded. We distributed a self-administered online questionnaire to a random sample of adult participants (N = 1180) from various cities in all regions of SA. The maximum sample size required to provide statistical power to our study at a confidence level of 95% and a margin of error of 5% was 385.
This was calculated using the following equation: n = z² × p(1 − p)/e², where n is the sample size, z (1.96) is the z-score associated with the level of confidence (95%), p is the sample proportion (0.5) expressed as a decimal, and e (0.05) is the margin of error expressed as a decimal. We enrolled 1180 participants, which is just over three times the calculated sample size, to overcome any possible bias that may originate from the snowball sampling technique and to ensure that the responses represented a diverse population. The Scientific Research Ethical Committee at Taif University approved the study, and participants provided their consent online before submitting their responses. Data collection We collected the data from February 2022 to April 2022. We first distributed the questionnaire to participants through social media platforms using Google Forms. In addition, we contacted the deanship of scientific research of public universities in all regions of SA so that they could email the invitation to the study, along with the link to the questionnaire, using staff members' and students' confirmed emails available in their databases. We secured the data and limited access to the primary investigators. Questionnaire development and validation The study questionnaire consisted of three sections, all of which were developed specifically for this study. The first section (eight items) included the demographic characteristics, such as sex, age, nationality, social status, educational level, working sector, having a family member working in the healthcare sector, and residential region. The second section (seven items) addressed the participant's blood type, Rhesus (Rh) factor, general health status, vaccination status, history of COVID-19 before and after receiving the vaccine, and the presence of common diseases in SA (for example, hypertension, diabetes, obesity, heart diseases, and asthma).
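The sample-size formula above can be checked with a few lines of Python; the inputs are exactly those quoted in the text (z = 1.96, p = 0.5, e = 0.05):

```python
import math

# Sample size for estimating a proportion: n = z^2 * p * (1 - p) / e^2,
# with z = 1.96 (95% confidence), p = 0.5 (most conservative), e = 0.05.
def sample_size(z=1.96, p=0.5, e=0.05):
    return math.ceil(z ** 2 * p * (1 - p) / e ** 2)

print(sample_size())  # 385, matching the minimum quoted in the text
```

Note that p = 0.5 maximizes p(1 − p), so this is the largest sample the formula can require at the chosen confidence and margin of error.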
The third section (16 items) was about vaccine adverse events: the type, onset, duration, and severity of the adverse events, which the participants rated on a scale of 1-10 after each dose (for up to three doses, when applicable), and the type of vaccine and booster. For optimal analysis, we categorized the severity score of vaccine-related adverse events as follows: mild, 1-3; moderate, 4-7; or severe, 8-10 (Ganesan et al., 2022). We provided the questionnaires in Arabic for optimal comprehension, given that the primary language of the participants in SA is Arabic. A panel of four researchers at the College of Pharmacy at Taif University reviewed the questionnaire for clarity, consistency, and appropriateness for the local context. The questionnaire was also validated on 25 participants in a field trial, and their data were not included in the analysis. The revised questionnaire contained 31 items.

Statistical analyses

Data were analyzed using IBM SPSS Statistics for Windows, Version 25.0 (IBM Corp., Armonk, NY, United States). Descriptive statistics were computed to illustrate the sociodemographic and other selected characteristics of the respondents, and frequency distributions for the numerical and categorical variables were presented. Chi-squared tests were calculated for the cross-tabulation of variables related to the type of vaccine and the severity of its related adverse events against other sociodemographic and clinical characteristics of the participants. A p-value of less than 0.05 was considered significant.

Results

This study included 1180 adult participants. Nearly half of the participants (n = 613) were between 18 and 30 years old, 69.2% (n = 817) were female, 47.5% (n = 561) had a bachelor's degree, and 59.7% (n = 705) were from the Western region of SA. Other sociodemographic data of the study participants are shown in Table 1. Clinical characteristics of the study participants are presented in Table 2.
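The severity binning described in the questionnaire section (mild 1-3, moderate 4-7, severe 8-10) can be sketched as a simple mapping:

```python
def severity_category(score: int) -> str:
    """Map a self-reported 1-10 severity score to the study's categories:
    mild (1-3), moderate (4-7), severe (8-10)."""
    if not 1 <= score <= 10:
        raise ValueError("score must be between 1 and 10")
    if score <= 3:
        return "mild"
    if score <= 7:
        return "moderate"
    return "severe"

print([severity_category(s) for s in (2, 5, 9)])  # ['mild', 'moderate', 'severe']
```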
In this regard, around 41.6% (n = 491) reported having the type O blood group. The Rh factor was positive in 41.3% of the participants (n = 487), 63.3% of the participants (n = 747) reported receiving three doses of a COVID-19 vaccine, and 38.4% (n = 453) and 45.2% (n = 533) reported having mild and moderate symptoms, respectively, after receiving COVID-19 vaccines. About three-quarters of the participants (n = 863) reported receiving the Pfizer vaccine as their first COVID-19 vaccine dose (Table 2). More vaccine-related adverse events were reported with the Pfizer vaccine than with the others (73.1% after the first dose, 70% after the second dose, and 50.4% after the third dose; p = 0.001). Participants reported moderate symptoms after receiving the second dose of the Pfizer vaccine (47.7%, p = 0.001). Most of the reported adverse events started within 8 h of receiving the dose and lasted for 1-3 days with all doses of the three types of vaccine used in SA (Table 3). The self-reported adverse events associated with different vaccines at different doses, and their frequencies, are presented in Table 4. As shown, the most frequent COVID-19 vaccine-related adverse events after the first dose were fatigue (65%), pain at the injection site (56%), and headache (45.9%). More serious adverse events after the first vaccine dose were less common, namely difficulty breathing (9.55%), seizures (0.51%), and blood clots (0.77%). Table 5 presents the correlation between COVID-19 vaccine-related adverse events and the different demographic characteristics. Approximately 31.8% of the females experienced moderate vaccine-related adverse events after the second dose, compared with only 11.3% of the male participants. A significant correlation was found between COVID-19 vaccine-related adverse events after the first dose of the COVID-19 vaccine and education level (p = 0.003) and nationality (p = 0.018).
We found the same correlation with sex after the second (p < 0.001) and third (p = 0.002) doses of the vaccine. We made a similar observation regarding the vaccine-related adverse events and the participants' general health status (p < 0.001 for all three doses, Table 6). On the other hand, no correlation was observed between the severity of adverse events and the ABO blood group. However, a significant correlation was noticed with the Rh factor after the second dose of the vaccine (p = 0.044). The multivariate regression analysis revealed a correlation between the severity of adverse events and gender, nationality, education level, and general health status (Table 7).

Discussion

In this study, we investigated the relationship between COVID-19 vaccine-related adverse events and the ABO blood groups. We found no significant correlation between the COVID-19 vaccine-related adverse events and the blood groups of the study participants. However, we found a correlation between COVID-19 vaccine-related adverse events and the general health status, education level, sex, Rh factor, and nationality of the participants. The reported adverse events were similar to those reported in the literature and were non-immunological; they mostly started within 8 h after vaccination and lasted between 1 and 3 days. Nevertheless, when we looked at the breakdown of adverse events, their frequency and type differed slightly between doses. For example, pain or edema at the injection site (56%), headache (45.9%), and fever (43.8%) were among the adverse events our participants most frequently reported after their first dose, which was consistent with other reports.14,15 However, the frequency of these adverse events appeared to decrease significantly after each subsequent dose.
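The chi-squared cross-tabulations behind the p-values reported above can be illustrated on a hypothetical 2×2 table (the counts below are invented for illustration and are not the study's data). For a 2×2 table the Pearson statistic has a closed form, and the p-value at 1 degree of freedom follows from the normal tail via the complementary error function:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-squared statistic for the 2x2 table [[a, b], [c, d]]
    (no continuity correction) and its p-value at 1 degree of freedom."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    # For 1 dof: P(chi2_1 > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p

# Hypothetical counts: moderate adverse events (yes/no) by sex.
chi2, p = chi2_2x2(30, 50, 10, 60)
print(round(chi2, 3), p < 0.05)  # 10.288 True
```

The same test on real data would normally be run with a statistics package (the study used SPSS crosstabs); the closed form here is only to make the computation transparent.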
This conflicts with what has been reported by the Centers for Disease Control and Prevention in the United States, where reactions after the third dose were comparable with those after the second dose (Hause et al., 2021). This could be attributable to the participants' inability to recall events after the subsequent COVID-19 vaccine doses, or to self-medication with analgesic or antipyretic agents, as they had previously received information on mitigation strategies for vaccine-related adverse events, which were well known to them by the second and third doses. Our results also contradicted what was reported locally, as more adverse events were observed after the second dose of the Pfizer/BioNTech vaccine in adults in SA (Ahsan et al., 2021). Still, these studies were limited by their small sample sizes. In addition, we included all types of COVID-19 vaccines in our study and investigated the vaccine-related adverse events including those of the booster dose, possibly decreasing the total reported adverse events compared with other studies (Ahsan et al., 2021). In our study, younger participants (18-30 years; 52.4% after the first dose) reported adverse events more frequently than older ones (>60 years; 1.4% after the first dose) did, which was consistent with the results of other reports.7,16 However, older participants were under-represented in our study. Considering the correlation with the demographic data after adjustment for other variables, we found a significant relationship with sex after the second and third doses (p < 0.001 and p = 0.002, respectively). This observation might be driven by the higher frequency of severe COVID-19 vaccine-related adverse events that the study participants reported after the second and third doses (11.7% and 11.6%, respectively) compared with the first dose (9.4%). Our findings resemble those from local studies, where sex was a predictor of the severity or occurrence of adverse events.
Women and younger adults have more profound vaccine-related responses. Moreover, we found a correlation with nationality and education level after the first dose of the vaccine (p = 0.018 and p = 0.003, respectively), which is consistent with the local studies and might be driven by more moderate vaccine-related adverse events among holders of a bachelor's degree or above after the first dose. We did not find any correlation between the severity of adverse events and the ABO blood group among the study participants. This finding is consistent with that of a small local study (n = 612) conducted among surgeons in SA who received 1-2 doses of mRNA-based COVID-19 vaccines, which found no correlation with the blood group (Alessa et al., 2022). Overall, we found a correlation with the general health status, which could be driven by more severe adverse events with all doses. Mohammed et al. found a correlation between reporting adverse events and having known allergies among the 397 healthcare providers who participated in their study in SA. Another group of researchers made the same observation in a small study conducted in SA (Ahsan et al., 2021). In neither study did the researchers investigate the correlation between adverse events and general health status. This correlation could be used to identify people at risk of developing adverse events, who would benefit from more frequent monitoring or a preventive self-medication intervention. One of this study's strengths is that it is the first investigation of the correlation of COVID-19 vaccine-related adverse events with the ABO blood groups of COVID-19 vaccinees in the community, taking into account all three doses of COVID-19 vaccines of any type. Second, we included a suitable representation of the study population in SA, as we included adult participants from all regions of SA and various age groups.
Third, all the study participants completed all mandatory items of the questionnaire. Nonetheless, this study had a few limitations. First, the participants may have recall bias and therefore might have inaccurately or incompletely reported their SARS-CoV-2 vaccine-related adverse events. Second, we noticed a low participation rate among participants from the southern and northern regions of SA and among non-Saudi participants, which might affect the generalizability of the results. Third, the use of an online survey may not have been objective, because the participants might have overestimated or underestimated the severity of their self-reported adverse events. Fourth, nonresponse bias may be present, since some participants did not respond to the optional questions related to the severity of the adverse reactions following the second and third vaccine doses.

Conclusion

This study does not support a correlation between COVID-19 vaccine-related adverse events and the ABO blood groups of the vaccinees in SA. The creation of a national database would be necessary to account for population differences. Our results showed that the general pattern of vaccine-related adverse events resembles what has been reported internationally and locally. However, we have not studied the more severe forms, such as anaphylaxis and facial paralysis.

Data availability statement

The raw data supporting the conclusions of this article will be made available by the authors, without undue reservation.

Ethics statement

The studies involving human participants were reviewed and approved by the Scientific Research Ethical Committee at Taif University. The Ethics Committee waived the requirement of written informed consent for participation.
2022-11-18T15:31:27.060Z
2022-11-17T00:00:00.000
{ "year": 2022, "sha1": "a857c7c623b19a73c5f997a9296c6baeacfed144", "oa_license": "CCBY", "oa_url": "https://www.frontiersin.org/articles/10.3389/fphar.2022.1006333/pdf", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "908cb752247bfbd6a6a09b7cea4d169455c69f31", "s2fieldsofstudy": [ "Medicine" ], "extfieldsofstudy": [ "Medicine" ] }
119268054
pes2o/s2orc
v3-fos-license
P-V criticality of first-order entropy corrected AdS black holes in massive gravity

We consider a massive black hole in four-dimensional AdS space and study the effect of thermal fluctuations on the thermodynamics of the black hole. We consider thermal fluctuations as logarithmic correction terms in the entropy. We analyse the effect of the logarithmic correction on thermodynamic potentials like the Helmholtz and Gibbs free energies, which are found to be decreasing functions. We study critical points and stability and find that the presence of the logarithmic correction is necessary to have a stable phase and a critical point.

Introduction

In order to prevent the violation of the second law of thermodynamics, one can associate a maximum entropy with black holes [1][2][3][4][5]. Otherwise, when an object with a finite entropy crossed the horizon, the entropy of the universe would spontaneously decrease and therefore violate the second law of thermodynamics. The scaling of this maximum entropy with the black hole horizon area led to the holographic principle [6,7], which equates the degrees of freedom in any region of space to the degrees of freedom on the boundary. The holographic principle will be corrected near the Planck scale, since quantum gravity corrections modify the topology of space-time at this scale [8,9]. As we know, the holographic principle is inspired by the entropy-area relation, hence quantum gravity corrections will modify the entropy-area relation. In that case, the original black hole entropy is given by S_0 = A/4, where A is the black hole event horizon area. The corrected entropy-area relation of a black hole may then be written as S = S_0 + α log A + γ_1 A^{-1} + γ_2 A^{-2} + · · ·, where α, γ_1, γ_2, · · · are coefficients which depend on the black hole parameters. Also, the area dependence has been obtained in specific models of quantum gravity.
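The corrected entropy-area expansion above is easy to check numerically: the logarithmic and inverse-area terms are subleading, so their relative contribution shrinks as the horizon area grows. The coefficient values below (α = γ₁ = 1) are arbitrary illustrative choices, not values from the paper.

```python
import math

def corrected_entropy(A, alpha, gamma1):
    """S = A/4 + alpha*log(A) + gamma1/A, keeping the first two corrections
    of the expansion S = S0 + alpha*log(A) + gamma1/A + ..."""
    return A / 4 + alpha * math.log(A) + gamma1 / A

def relative_correction(A, alpha=1.0, gamma1=1.0):
    """Size of the correction terms relative to the leading S0 = A/4."""
    s0 = A / 4
    return abs(corrected_entropy(A, alpha, gamma1) - s0) / s0

# The correction fraction drops steadily with increasing horizon area.
for A in (10.0, 1e3, 1e6):
    print(A, relative_correction(A))
```

This is the quantitative sense in which such corrections matter only for small (near-Planck-scale) black holes, a point the paper returns to when discussing small r₊.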
In that case, the logarithmic correction of the form α log A has already been used to study the corrected thermodynamics of several kinds of black holes, such as the Gödel-like black hole [10]. We should note that the thermodynamic corrections of black holes can be studied using non-perturbative quantum general relativity [11], where the conformal blocks of the conformal field theory are used to study the behavior of the density of states. The effects of quantum corrections on black hole thermodynamics have already been studied with the help of the Cardy formula [12]. The corrected thermodynamics of a black hole has also been studied under the effect of matter fields around the black hole [13][14][15]. The thermodynamic corrections produced by string theory have also been studied and are in agreement with the other approaches to quantum gravity [16][17][18][19]. The corrections to the thermodynamics of a dilatonic black hole have also been discussed and observed to have the same universal form [20]. The partition function of a black hole is very useful for studying its corrected thermodynamics [21]. It is also possible to use the generalized uncertainty principle to produce thermodynamic corrections, which yields the logarithmic correction [22,23], in agreement with the other approaches to quantum gravity. It should be noted that the Einstein equations in the Jacobson formalism are thermodynamic identities [24,25]. Therefore, a quantum correction to the space-time topology produces thermal fluctuations in black hole thermodynamics, with the same universal form as expected from quantum gravitational effects [26][27][28]. In fact, these corrections have already been considered for several black geometries.
For example, an AdS charged black hole has been studied under the effect of the logarithmic correction of the entropy, and it has been found that the thermodynamics of the AdS black hole is modified by the thermal fluctuations [29]. The effect of thermal fluctuations on the thermodynamics of a black Saturn has also been studied in Ref. [30]. It was found that the thermal fluctuations do not have any major effect on the stability of the black Saturn. The thermal fluctuations of a modified Hayward black hole have been studied, where it was found that they reduce the pressure and internal energy of the Hayward black hole [31]. The effect of thermal fluctuations on the thermodynamics of a charged dilatonic black Saturn has also been studied [32]. It was stated that the thermal fluctuations can be studied either using a conformal field theory or using the fluctuations in the energy of the system. However, it was found that the fluctuations in the energy and the conformal field theory produce the same results for a charged dilatonic black Saturn; this result may differ for other black objects. The thermodynamics of a small singly spinning Kerr-AdS black hole under the effect of thermal fluctuations has been studied recently [33], with the conclusion that the logarithmic correction becomes important when the size of the black hole is sufficiently small. This enables us to test the effects of quantum fluctuations on black holes by analyzing the effects of thermal fluctuations, for example on dumb holes (the analogues of black holes), to obtain the correct coefficients of the correction terms [34]. Such corrections may affect the critical behavior of black objects; for example, a dyonic charged anti-de Sitter black hole, which is the holographic dual of a Van der Waals fluid [35], was considered in Ref. [36], where logarithm-corrected thermodynamics was investigated with the result that the holographic picture remains valid.
However, the van der Waals phase transitions of charged black holes in massive gravity without any higher-order correction were discussed in Ref. [37]. An important application of such a logarithmic correction can be found in the study of quark-gluon plasma properties using the AdS/CFT correspondence [38][39][40][41]. It may, for example, affect the shear viscosity to entropy ratio [42]. Massive gravity, having overcome its traditional problems, has seen a resurgence of interest due to recent progress [43], yielding an avenue for addressing the naturalness problem of the cosmological constant. The possibility of a massive graviton was first studied by Fierz and Pauli [44,45]. Further, van Dam and Veltman [46] and Zakharov [47] studied the linear theory coupled to a source and discussed the curious fact that the theory makes predictions different from those of linearized general relativity even in the limit as the graviton mass goes to zero. Later, some specific nonlinear massive gravity theories were studied [48,49], which possess a ghost-like instability known as the Boulware-Deser ghost. Significant progress has been made in the construction of massive gravity theories without such an instability [50,51]. The most straightforward way to construct a massive gravity theory is to simply add a mass term to the GR action, giving the graviton a mass in such a way that GR is recovered as the mass vanishes. Recently, charged BTZ black holes in the context of massive gravity's rainbow have been studied [52]. The massive BTZ black holes in the presence of Maxwell and Born-Infeld electrodynamics in asymptotically (A)dS spacetimes have also been studied [53]. More recently, the higher-order correction of the entropy and the thermodynamic properties of the Schwarzschild-Beltrami-de Sitter black hole have been studied [54]. In fact, the P − V criticality of charged black holes in Gauss-Bonnet massive gravity has also been presented [55].
The Van der Waals-like phase transition [56] and the P − V criticality of AdS black holes in a general framework [57] have recently been discussed. Now, we would like to obtain the effect of the first-order (logarithmic) corrected entropy on the thermodynamics and P − V criticality of black holes in the AdS space-time of massive gravity. For this purpose, we consider the 4-dimensional charged black hole in massive gravity with a negative cosmological constant and discuss the effect of the first-order correction on various thermodynamic quantities. For example, we derive the entropy, Hawking temperature, Helmholtz free energy, internal energy, pressure, enthalpy and Gibbs free energy. We analyse the Helmholtz free energy with respect to the correction coefficient α, which confirms that the effect of the logarithmic correction is important at small r₊ (or high temperature) and that there exists a critical radius at which the Helmholtz free energy vanishes. Also, we show that the logarithmic correction has no important effect on the pressure of a black hole with a large event horizon radius. In fact, the internal energy, enthalpy and Gibbs free energy are found to be decreasing functions of the correction parameter. We further discuss the holographic duality of the logarithmic corrected AdS black hole in massive gravity with a Van der Waals fluid for the large black hole and find that the thermal fluctuations have no important effect there. In order to study the effect of thermal fluctuations on the critical points, we analyse the P − V behavior of the black hole. We discuss the effect of thermal fluctuations in view of the critical point and the stability of the model. From the plots, we find that the logarithmic correction helps to remove the instability of the black hole. For the stability of the model, we obtain the necessary requirement that the trace of the Hessian matrix of the Helmholtz free energy be non-negative. This paper is organized as follows.
In the next section, we recall black holes of AdS space-time in massive gravity. In section 3, we introduce the logarithmic corrected entropy as the leading order of thermal fluctuations. In section 4, we discuss the holographic dual picture of the black hole. In section 5, we study the critical point and stability of the black hole. Finally, in section 6, we conclude and summarize the results.

AdS black holes in massive gravity

Let us consider the action for (3+1)-dimensional massive gravity with a Maxwell field given in Ref. [58], where Λ = −3/l² is the cosmological constant and k = 1, 0, or −1 corresponds to a spherical, Ricci-flat, or hyperbolic horizon for the black hole, respectively. Here F_{μν} is the Maxwell field-strength tensor, c_i are constants, and U_i are symmetric polynomials of the eigenvalues of the matrix √(g^{μα} f_{αν}), where f_{μν} is a fixed symmetric tensor. The action admits a static black hole solution with the space-time metric and reference metric as given in Ref. [58]. Here, h_{ij} dx^i dx^j is the line element for an Einstein space with constant curvature. The metric function f(r) in terms of the electric charge q is written as [58]

f(r) = k + r²/l² − m₀/r + q²/(4r²) + (c₁ m²/2) r + c₂ m².

The black hole horizon can be determined by setting f(r)|_{r=r₊} = 0; hence the mass parameter m₀, which is related to the total mass of the black hole, is given by

m₀ = r₊ [ k + r₊²/l² + q²/(4r₊²) + (c₁ m²/2) r₊ + c₂ m² ],

where the outer horizon r₊ is the largest real root of the equation f(r) = 0. For example, choosing the parameters as m₀ = 2, c₁ = 1, c₂ = 1, k = 1, l = 1, m = 0.2, and q = 1, one obtains r₋ = 0.1346110283 and r₊ = 0.9126757206, together with two complex roots. There is also a chemical potential corresponding to the electric charge q, given by μ = q/r₊.
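As a sanity check, the quoted horizon radii can be recovered numerically from f(r) = 0 with the stated parameters. This is a sketch assuming the metric function f(r) = k + r²/l² − m₀/r + q²/(4r²) + (c₁m²/2)r + c₂m², the standard form for this solution in Ref. [58]; both quoted roots satisfy it to high precision.

```python
def f(r, m0=2.0, c1=1.0, c2=1.0, k=1.0, l=1.0, m=0.2, q=1.0):
    """Metric function of the charged AdS black hole in massive gravity
    (form assumed from Ref. [58]); horizons are its positive real roots."""
    return (k + r**2 / l**2 - m0 / r + q**2 / (4 * r**2)
            + 0.5 * c1 * m**2 * r + c2 * m**2)

def bisect(g, lo, hi, iters=200):
    """Plain bisection; assumes g(lo) and g(hi) have opposite signs."""
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

r_minus = bisect(f, 0.05, 0.3)  # inner horizon bracket: f > 0 at 0.05, f < 0 at 0.3
r_plus = bisect(f, 0.5, 2.0)    # outer horizon bracket: f < 0 at 0.5, f > 0 at 2.0
print(r_minus, r_plus)  # ≈ 0.134611..., 0.912676...
```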
(2.6)

First-order corrected thermodynamics

The first-order corrected entropy is given in [26], where α is a constant having dimension of length and the zeroth-order entropy S₀ is given in [59]. The Hawking temperature follows from the surface gravity on the outer horizon r₊ [58],

T = (1/4π) [ 3r₊/l² + k/r₊ − q²/(4r₊³) + c₁m² + c₂m²/r₊ ].   (3.3)

Exploiting relations (3.1) and (3.3), the corrected entropy (3.4) follows. Using the entropy and temperature, we can find the Helmholtz free energy, where l_p is a constant of integration with dimension of length. In Fig. 1 we draw the Helmholtz free energy in terms of the horizon radius for various values of the correction coefficient α. As expected, F_α → 0 as r₊ ≫ 1, which means that the effect of the logarithmic correction is important at small r₊. In the right plot of Fig. 1, we can see the uncharged (q = 0) case, for which the effect of the logarithmic correction becomes significant at high temperature (infinitesimal r₊). It should be noted that the cases k = 0 and k = ±1 lead to similar results. Also, negative values of c₁ and c₂ have no important effect. The left plot of Fig. 1 shows that there is a critical radius r_c where F_α = F_{−α} = 0, with r₋ ≤ r_c ≤ r₊, where equality holds for the extremal black hole (m₀ ≈ q with r₊ ≈ 0.4). It should be noted that the value of the event horizon depends on the value of m₀. Also, the second term on the rhs of Eq.
(3.6) corresponds to an ordinary AdS black hole with the thermodynamic pressure P = −Λ/8π = 3/(8πl²) related to the cosmological constant. In order to calculate the internal energy, we use the well-known thermodynamic relation E = F + TS and obtain the corrected internal energy, where E α = αc 1 m 2 4π log 16 √ πl 2 l p 12r 4 + − q 2 l 2 + 4kl 2 r 2 + + 4m 2 l 2 r 2 + (c 2 + c 1 r + ) 4kl 2 r 3 + + 4c 2 m 2 l 2 r 3 + − q 2 l 2 r + + 4m 2 c 1 l 2 r 3 + + 12r 5 . It is clear that the correction parameter α decreases the value of the internal energy in all cases k = 0, ±1, i.e., for the charged, uncharged and extremal black holes. As we know, the modified pressure due to thermal fluctuations can be obtained from the derivative of the Helmholtz free energy with respect to the volume, where P α = 3α 8π 2 r 2 + l 2 + αq 2 32π 2 r 6 + − α 64π 2 r 2 + l 2 12 . (3.14) As expected, the pressure is a decreasing function of r₊, while it is an increasing function of α at small radius. We find that the logarithmic correction has no important effect on the pressure of a black hole with a large event horizon radius. Also, we can obtain the enthalpy; we find that the enthalpy is a decreasing function of α as well. The Gibbs free energy, obtained using the relation G = H − TS = F + PV (3.18), is a decreasing function of α like the other thermodynamic potentials.

Holographic duality

It would be interesting if the AdS black hole in massive gravity had a holographic dual in the form of a Van der Waals fluid, with the equation of state

P_W = T/(v − b) − a/v²,

where we assume K_B = 1 (units of the Boltzmann constant). Also, a and b are positive constants: the constant a parameterizes the strength of the intermolecular interactions, while the constant b accounts for the volume excluded owing to the finite size of the molecules in the fluid. If a and b are both set to zero, the equation of state of an ideal gas is recovered.
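For the Van der Waals fluid P = T/(v − b) − a/v² (with K_B = 1), the critical point is the standard one, v_c = 3b, T_c = 8a/(27b), P_c = a/(27b²), at which both ∂P/∂v and ∂²P/∂v² vanish. The finite-difference check below confirms this (a = b = 1 chosen purely for illustration):

```python
def p_vdw(T, v, a=1.0, b=1.0):
    """Van der Waals pressure with K_B = 1: P = T/(v - b) - a/v^2."""
    return T / (v - b) - a / v**2

a, b = 1.0, 1.0
v_c, T_c, P_c = 3 * b, 8 * a / (27 * b), a / (27 * b**2)

# Central finite differences of P(v) at fixed T = T_c.
h = 1e-4
d1 = (p_vdw(T_c, v_c + h) - p_vdw(T_c, v_c - h)) / (2 * h)
d2 = (p_vdw(T_c, v_c + h) - 2 * p_vdw(T_c, v_c) + p_vdw(T_c, v_c - h)) / h**2
print(abs(d1) < 1e-6, abs(d2) < 1e-4, abs(p_vdw(T_c, v_c) - P_c) < 1e-12)
```

These are the same two vanishing-derivative conditions the paper imposes on the black hole pressure P(V) to locate its critical point in section 4.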
In that limit, P_W reduces to the ideal gas law P = T/v. Now, the AdS black hole in massive gravity with logarithmic correction is the holographic dual of a Van der Waals fluid if P = P_W, where P is given by equation (3.13). Using numerical analysis, we find that this duality holds for the large black hole (V ≫ 1), and therefore thermal fluctuations have no important effect in this case. In Fig. 2, we draw ΔP = P − P_W and find some regions where ΔP = 0, corresponding to large V. In this limit, the value of α is not important, and thermal fluctuations play no key role in violating the holographic dual picture. There also exists a divergence at the critical volume. Hence, it is possible to have a dual Van der Waals fluid in the presence of the logarithmic correction. In order to find the effect of thermal fluctuations on the critical points, we should analyze the P − V behavior of the black hole. Therefore, we can study the P − V criticality via the conditions ∂P/∂V = 0 and ∂²P/∂V² = 0 at the critical point. By using the relations (3.9) and (3.13), we can draw P in terms of V, as illustrated in Fig. 3. It is clear that a critical point also exists in the presence of the logarithmic correction. The typical behavior of P for the selected values of the parameters shows a critical point at V ≈ 3.5, which means T_c ≈ 0.3. In the α = 0 limit, the first condition of (4.3) yields one equation, (4.4), while the second condition gives another, (4.5). It is clear that equations (4.4) and (4.5) are never satisfied simultaneously. It means that without thermal fluctuations there is no critical point.

Critical points and stability

As illustrated in the previous section, there is no critical point for the AdS black hole of massive gravity in the absence of the logarithmic correction. Hence, we should consider the effect of thermal fluctuations to obtain the critical point and study the stability of the model. The first step is to study the specific heat, C = T (∂S/∂T), which yields, where C α = 4 c 1 l 2 m 2 r 3 + + 2 q 2 l 2 + 24 r 4 + α −12 r 4 + + 4 l 2 (c 2 m 2 + k) r 2 + − 3 q 2 l 2 .
(5.3) In the plots of Fig. 4, we can see the typical behavior of the specific heat for all space curvatures k = 0, ±1. We can see some regions of negative specific heat for α = 0 and α > 0, but the specific heat is completely positive for negative α. It means that the logarithmic correction can remove the instability of the black hole. For a positive correction coefficient (α = 1), the black hole has positive specific heat for r₊ > r₀, where r₀ is the zero of the specific heat. The important point is that for charged black holes with a chemical potential, the sign of the specific heat is not enough to conclude the stability of the model; a more important test is required, using the Hessian matrix of the Helmholtz free energy, which we denote by H. By using the relations (3.3), (3.7) and (3.9), one can find that the determinant of the matrix H vanishes. It means that one of the eigenvalues is zero, and we should consider the other one, which is the trace τ of the matrix (5.4). Now, the crucial condition for stability is τ ≥ 0. In Fig. 5, we can see the behavior of τ with r₊. It is clear that a positive region exists only for the case of positive α. Hence, we find that the presence of the logarithmic correction of the form (3.1) with positive α is essential to have a critical point and stability, at least for small values of r₊. For example, we examine the special case with our selected values of the parameters. In the case of c₁ = 1, c₂ = 1, k = 1, l = 1, m = 0.2, q = 1, we see that stability exists approximately for r₊ ≤ 0.625. These values of the horizon radius are obtained for m₀ ≤ 1.3. Hence, we find a suitable condition on the black hole mass for which it is in the stable phase.

Conclusions

In this paper, we have considered charged black hole solutions in 4-dimensional massive gravity with a negative cosmological constant and studied the first-order corrected thermodynamics and phase structure of the black hole solutions.
In particular, we have computed the first-order corrected entropy, Hawking temperature, Helmholtz free energy, internal energy, pressure, enthalpy and Gibbs free energy. We have plotted the Helmholtz free energy in terms of the horizon radius for various values of the correction coefficient α, which confirms that the effect of the logarithmic correction is important at small r₊ (or high temperature) and, also, that there exists a critical radius at which the Helmholtz free energy vanishes. We found that the logarithmic correction has no important effect on the pressure of a black hole with a large event horizon radius. However, the internal energy, enthalpy and Gibbs free energy are decreasing functions of the correction parameter. Furthermore, we showed that the AdS black hole in massive gravity with logarithmic correction is the holographic dual of a Van der Waals fluid for the large black hole and, consequently, found that thermal fluctuations have no important effect there. In order to find the effect of thermal fluctuations on the critical points, we analyzed the P − V behavior of the black hole. Furthermore, we studied the effect of thermal fluctuations in order to obtain the critical point and the stability of the model. From the graphical analysis, we found that the logarithmic correction can be used to remove the instability of the black hole. For the stability of the model, we found, remarkably, that the trace of the Hessian matrix of the Helmholtz free energy must be non-negative.
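The determinant/trace argument used for the stability analysis can be checked on any symmetric 2×2 matrix: when det H = 0 the two eigenvalues are exactly 0 and tr H, so the trace alone decides the sign of the nontrivial eigenvalue. The numbers below are toy values, not the paper's Hessian.

```python
import math

def sym2x2_eigs(a, b, d):
    """Eigenvalues of the symmetric matrix [[a, b], [b, d]] via the
    closed form (tr ± sqrt(tr^2 - 4 det)) / 2."""
    tr, det = a + d, a * d - b * b
    disc = math.sqrt(tr * tr - 4 * det)
    return (tr - disc) / 2, (tr + disc) / 2

# Toy degenerate Hessian: choose d = b^2 / a so that det = 0 exactly.
a, b = 2.0, 3.0
d = b * b / a
lo, hi = sym2x2_eigs(a, b, d)
print(lo, hi, a + d)  # one eigenvalue is 0, the other equals the trace
```

Hence, for the black hole, checking τ = tr H ≥ 0 is equivalent to checking that the only nonzero eigenvalue of the degenerate Hessian is non-negative.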
High-speed gas/vapour jets injected into a cross-moving sonic liquid represent an important phenomenon with useful applications in environmental and energy processes. In the present experimental study, a pulsating jet of supersonic steam was injected into cross-flowing water. Circulation zones of opposite vorticity, owing to the interaction between the steam jet and the cross-flowing water, were found, while a large circulation zone appeared in front of the nozzle exit. Most of the small circulation regions were observed at higher water-flow rates (>2 m 3 /s). Among the prime mixing variables (i Introduction The injection of a high-speed gas/vapour into a low-speed cross-flowing gas/liquid presents an interesting phenomenon which has been exploited by the scientific community for multiple purposes. One application is the injection of steam into polluted air, which wets the suspended solid particles so that they can then be removed from the air supply using cyclone separators [1]. Depending on the application, the two fluid streams can be mixed at any speed, including subsonic, sonic, supersonic or even hypersonic [2][3][4][5][6][7]. So far, the mixing studies related to this phenomenon have mainly been directed towards noise control inside cavities [7][8][9][10][11][12], the thrust vector control of moving bodies [13,14], and combustion chambers [15][16][17][18]. In one study, three different types of injection were investigated, namely the circular transverse, circular oblique and elliptical transverse configurations, into a stream flowing at Mach 2 [3]. It was observed that, in comparison to the oblique injection, the two transverse configurations achieved maximum penetration into the flowing stream, since the oblique injection has a reduced momentum component in the transverse direction. Other studies have included the injection of different gases, such as argon, helium and nitrogen, into a free
stream of air inside a wind tunnel, with the air flowing at supersonic speed [19]. Here, a model was applied to determine the forces on the walls and, with the help of analytical treatment, corrections were made based on data that provided the pressure distribution, shock-wave shapes, injected mass fraction, total pressure and velocity profiles in the downstream direction.

In the current study, the hydrodynamics of the flow were investigated when a supersonic steam jet was injected into a cross-flowing water stream. Before presenting the experimental measurements involving the flow hydrodynamics, some of the necessary mathematical formulations are given to establish the criteria used for characterising the flow regimes. A vital quantity in such studies is the ratio of the momentum fluxes of the steam jet and the cross-flowing fluid [20,21], which can be used to determine the extent of the penetration of the steam jet into the cross-flowing water:

J = (ρ jet V jet 2 )/(ρ water V water 2 ) = (γ jet P jet M jet 2 )/(γ water P water M water 2 )    (1)

where ρ represents the density, V is the mean velocity, γ is the ratio of specific heats (c p /c v ), P is the pressure and M represents the Mach number. Another important quantity is the height (H mid ) of the point where the higher velocity profiles terminate, which can be observed visually via a characterisation setup such as a particle image velocimetry (PIV) system. This height can suitably be non-dimensionalised by the diameter of the steam nozzle exit, as given in the relation of [22] (Equation (2)), which involves the factor 1.25(1 + γ water )γ water M 2 water . Using the above two relations, together with the data acquired through an appropriate experimental setup and measurement devices, the flow fields were characterized. Studies of the hydrodynamics of flows in which the cross-flowing stream travels at sonic or supersonic speed are broadly similar; most of them began with subsonic flows and were later extended to supersonic or hypersonic flows. To date, an ample number of studies has investigated the hydrodynamics of jets injected into a cross-flowing stream of fluid(s), but no study known to the authors of the present manuscript has discussed the effect of injecting a supersonic steam jet into cross-flowing water at sonic speed. In addition, the hydrodynamics of the resultant flow regimes are presented here with regard to the influence on the shear layer, together with the quantification of the most effective operating condition for the efficient mixing of steam and water. Therefore, the present manuscript is an effort in this regard. The details of the experimental setup, the experiments performed and the derived results are presented in the following sections.
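As a quick illustration of the momentum-flux ratio of Equation (1), J can be evaluated directly from the fluid properties. The numbers below are illustrative stand-ins, not measured values from this experiment.

```python
# Momentum-flux ratio of Eq. (1): J = (rho_jet V_jet^2) / (rho_water V_water^2),
# equivalently (gamma P M^2)_jet / (gamma P M^2)_water, since rho V^2 = gamma P M^2.
# All numerical inputs below are assumed values for illustration only.

def momentum_flux_ratio(rho_jet, v_jet, rho_water, v_water):
    """Ratio of jet to cross-flow momentum fluxes (dimensionless)."""
    return (rho_jet * v_jet**2) / (rho_water * v_water**2)

# Example: low-density steam at high speed vs. dense water at moderate speed
rho_steam, v_steam = 1.5, 500.0     # kg/m^3, m/s (assumed)
rho_water, v_water = 998.0, 20.0    # kg/m^3, m/s (assumed)

J = momentum_flux_ratio(rho_steam, v_steam, rho_water, v_water)
print(f"J = {J:.3f}")  # J ~ 0.94 for these assumed inputs
```

A value of J near unity means the jet and cross-flow momentum fluxes are comparable, so the jet bends strongly rather than penetrating far.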
Experimental Setup The experimental setup (Figure 1) consisted of a square flow channel with a height of 10 cm, a length of 120 cm and a width of 10 cm. Steam was injected into the flow channel through a supersonic nozzle with the following dimensions: full length, 10 cm; throat, 0.5 cm; length of converging section, 8.5 cm; length of diverging section, 1.5 cm; nozzle inlet diameter, 2 cm; and nozzle exit diameter, 1.5 cm. Steam was injected at an inlet pressure of 3 bars, and the water was run through the duct at volumetric flow rates varying from 1 to 3 m 3 /s in increments of 0.5 m 3 /s. The water's superficial velocity was measured with hot film anemometers (HFAs) in the upstream area, where water was the only medium. The square duct was filled with water, and water was then circulated through the flow channel at varying flow rates by means of an externally regulated pump. The PIV system was used with fluorescent 1.0 g/cc microspheres as tracer particles to characterize the steam's velocity and direction. A double-pulsed Nd:YAG laser of 532 nm wavelength, pulsing at a frequency of 15 Hz with a maximum energy of 500 mJ, was used to generate a light sheet of 2 mm thickness, with the help of appropriate lenses, to cover the whole fluid domain for the PIV scans, as seen in Figure 1.
The scattered light was guided by the reflecting mirror through the column onto the charge-coupled device (CCD). The adaptive cross-correlation (ACC) algorithm was applied to process the initial scans; this method proved useful in providing the velocity fields in regions of large velocity gradients, as was the case in the present study, particularly in the interfacial region between the steam bubbles and the surrounding water. For this reason, two passes of the ACC were adopted in this work: the initial interrogation window was 64 × 64 pixels, and the refinement pass was conducted at 32 × 32 pixels. The measuring field of view (H × W), including the steam jet and the surrounding area, was 200 mm × 100 mm, which was covered using CCD cameras, each covering a region of 100 mm × 100 mm. It was found that the steam jet's velocity profile could generate a periodic oscillation at the entrance into the duct, with the velocity varying across a range of values over time. Data were acquired for 10 min at a rate of 10 Hz in a synchronized manner, with the same shutter speed used for both cameras. Using this configuration, 6000 frames were acquired in 10 min for a single operating condition from a single camera. A National Instruments data acquisition module and data processing software were used to record the images. In principle, for the steam injection, the micro-bubbles could have been used as tracer particles; however, as the steam was injected inside a pool of water, these small bubbles could not persist due to sudden condensation, so the original tracer particles were used instead [23]. The mass flow rates were measured with mass flow meters at the steam inlet pipe and at the water channel's inlet and outlet. The experimental scheme at the various operating conditions can be seen in Table 1. The results drawn based on the mentioned phases of the experiments
are discussed in detail in the following section. Results and Discussion In the current study, a supersonic steam jet was injected into cross-flowing water in a square duct, resulting in a complex hydrodynamic picture of the observed flow regimes. The results are described in detail in the following sections. Hydrodynamics of the Supersonic Steam Jet into the Cross-Flowing Water From the PIV images, the structure of the flow regimes within the fluid domain can be seen in Figure 2. It should be noted that, throughout the manuscript, only the PIV image with the highest mass flow rate of water is referred to. The errors in the current PIV measurements are within 0.12-0.34%. The first image of each result reported here represents the case in which the steam was injected at an inlet pressure of three bars and the water travelled at a volumetric flow rate of 2.0 m 3 /s. The flow domain was divided into multiple sections, which are represented by the circles in Figure 2. A total of 6000 images were acquired over a duration of 10 min to support Figure 2.
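The window cross-correlation at the core of the ACC processing described in the setup can be sketched as follows. This minimal single-pass version recovers a known pixel shift between two synthetic frames; the actual two-pass scheme repeats the same operation at 64 × 64 and then 32 × 32 pixels around the first-pass estimate.

```python
import numpy as np

# Minimal sketch of PIV window cross-correlation: estimate the displacement
# of a tracer pattern between two frames from the peak of the FFT-based
# circular cross-correlation. Synthetic data stand in for real PIV frames.

rng = np.random.default_rng(0)

def correlate_window(win_a, win_b):
    """Return integer (dy, dx) displacement of win_b relative to win_a."""
    corr = np.fft.ifft2(np.fft.fft2(win_a).conj() * np.fft.fft2(win_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map wrapped FFT indices to signed displacements
    shift = [p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape)]
    return tuple(shift)

# Synthetic frame pair: random tracer field shifted by (3, 5) pixels
frame_a = rng.random((64, 64))
frame_b = np.roll(frame_a, shift=(3, 5), axis=(0, 1))

print(correlate_window(frame_a, frame_b))  # recovers the (3, 5) shift
```

Dividing the displacement by the inter-frame time and the magnification converts the pixel shift into a local velocity vector, which is how the velocity fields of Figure 2 are built up window by window.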
On average, the overlapped images under a single set of operating conditions show that the steam initially penetrated vertically into the flowing water before bending towards the direction of the water flow. It should be noted that all results were plotted with the steam-water velocity profiles normalised using the velocity of the steam at the nozzle exit. From Figure 2, it can be clearly seen that the region of maximum velocity occurred above the nozzle exit, towards the right side, due to the deflection of the steam jet under the influence of the momentum acquired from the flowing water. In contrast, the region above the steam jet shows a gradual decrease in velocity in the downstream direction; the jet showed an expansion profile in the region downstream of the nozzle exit. As seen in Figure 2, the height of the point where either the jet achieved the highest horizontal velocity or the highest velocity profile terminated was measured as 27 D nozzle . The height above the nozzle exit was non-dimensionalised by the diameter of the nozzle exit. The vertical (Y) penetration of the steam jet was estimated with the help of the equation of [24] (Equation (3)), where X is the horizontal distance from the location of the steam nozzle exit and d is the diameter of the steam jet at the nozzle exit. As seen in Figure 2a, the PIV-based measurement of the jet's penetration showed little discrepancy with the penetration values estimated from this equation. It can be observed that, in the present case, the supersonic steam injected into the cross-flowing water showed a penetration length a little above the line obtained from the estimations using Equation (3). A possible reason for this may be the buoyancy of the steam relative to the water, which is absent in Equation (3). A deviation from the line drawn based on this equation has also been reported in earlier experiments [25]. As can be seen in Figure 2, downstream of the steam nozzle the formation of large vortical structures ensured mixing, which was further amplified by the cross-flowing water. Also seen in Figure 2, regions of low pressure were formed at the bottom of the deflected steam jet, which may have been due to the low-pressure suction of the steam jet just near the exit of the nozzle, as also illustrated in earlier studies [26,27]. These circulation zones had an opposite vorticity in comparison to the vorticity of the large circulation zones at the front. It was further observed that, among the experimental phases, most of the small circulation zones were observed at higher flow rates of water, contrary to the number of circulation zones at lower flow rates. This shows the dependence of the low-pressure circulation zones on the volumetric flow rate of the water, which may be proportional in terms of mixing, as observed in earlier studies [28][29][30]. The non-dimensionalised turbulence kinetic energy (TKE) relation was given as

TKE = (u′ 2 + v′ 2 + w′ 2 )/(2U m 2 )

where u′, v′ and w′ are the fluctuating velocity components along the x, y and z-axis, respectively, and U m is the mean velocity of the flow at the given location in the flow domain. For the values of the TKE, we used this relation; however, as our interest is mainly focused on the jet penetration and on the flow characteristics in the vertical plane, the velocity component along the z-axis was taken as unity. This, on the one hand, keeps the stability and correctness of the equation intact and, on the other hand, helps us to focus the observation window on the x-y plane only. The 2D PIV image here captures the upward propagation of the steam jet (y-axis), whereas the cross-flowing water forces the steam jet to bend along the length of the duct (x-axis). Parameters such as the TKE are associated with the velocity profile of the steam jet along the length of the duct against the height of the duct. The orientation of the laser sheet and the camera were at right angles to each other, with the laser sheet aligned along the x-y plane. As seen from Figure 2b, two distinct regions of TKE could be identified under the initial flow conditions, and this increased to three at the highest flow rate. In the upstream
region, the TKE resulted from the shear-layer interaction at the steam's interface with the water, and from the interaction between the shock and the boundary layer. With the rise in the flow rate of the water, the increased deflection of the steam jet along the direction of the water flow, together with the sudden expansion of the steam, produced small secondary circulations visible in the TKE profiles. For the calculation of the Reynolds shear stress, the time-averaged mean velocities were subtracted from the instantaneous velocities along the x and y-axes. Furthermore, K-type thermocouples were attached to the anemometer probes to measure the fluid temperature; the temperature values thus obtained were used to determine the corresponding steam density from the steam tables. The Reynolds shear stress values were thus estimated. The Reynolds shear stress measurements can be seen in Figure 2c, which shows two spots having high Reynolds shear stress values.
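The reduction from instantaneous PIV velocity fields to the non-dimensionalised TKE and RSS maps discussed above can be sketched as follows, using synthetic fluctuating fields in place of real PIV data and keeping only the in-plane components, as in the text.

```python
import numpy as np

# Sketch: form non-dimensionalised TKE and Reynolds shear stress (RSS) maps
# from a time series of 2D velocity fields. Synthetic random fluctuations
# stand in for real PIV data; the unresolved out-of-plane component w' is
# simply dropped in this in-plane sketch.

rng = np.random.default_rng(1)
n_frames, ny, nx = 600, 50, 100

u = 10.0 + rng.normal(0.0, 1.2, (n_frames, ny, nx))  # streamwise velocity
v = 2.0 + rng.normal(0.0, 0.8, (n_frames, ny, nx))   # wall-normal velocity

u_fluc = u - u.mean(axis=0)          # u' = u - <u> (time average per point)
v_fluc = v - v.mean(axis=0)
u_mean_mag = np.sqrt(u.mean(axis=0)**2 + v.mean(axis=0)**2)

# TKE* = (<u'^2> + <v'^2> + <w'^2>) / (2 U_m^2), with w' omitted here
tke = ((u_fluc**2).mean(axis=0) + (v_fluc**2).mean(axis=0)) / (2 * u_mean_mag**2)

# RSS* = -<u'v'> / U_m^2
rss = -(u_fluc * v_fluc).mean(axis=0) / u_mean_mag**2

print(tke.mean(), rss.mean())
```

Each output array has the shape of one PIV frame, so plotting it directly produces contour maps analogous to Figure 2b,c.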
The non-dimensionalised Reynolds shear stress (RSS), −u′v′/U m 2 , was concentrated in the regions where the mean velocity gradients between the injected steam and the cross-flowing water were higher. Across the y-z plane at the heights h = Y/D = 1.25, 1.5, 1.75 and 2.0, the Reynolds shear stress showed a slightly diffusive behaviour at the higher non-dimensionalised heights along the y-axis (normal to the flow). In the present study, our main purpose is to present a macroscopic view of the turbulence kinetic energy (TKE) and Reynolds shear stress (RSS) and to find their impacts on the flow characteristics. It was found that the vortical structures affected the TKE as well as the associated decay of the flow. Regions of high TKE were found to grow initially due to the jet's interaction with the surrounding flow field, driven by the significant contributions of the buoyancy, the jet's inlet pressure and the lateral velocity of the surrounding water. All of these factors aided the generation of the vortical structures that enhanced mixing in these regions. The role of these vortical structures of different scales and strengths can be seen in Figure 2a,b; they increased the strain rate of the flow and thus directly affected the production of the turbulence kinetic energy. The interaction of these structures and the resulting distribution of the TKE across the vertical planes is critical to understanding the mixing between the distinct flow domains (i.e., the jet and the cross-flowing water). The contours of the TKE show how the flow structures shifted the turbulence from the region above the steam jet's exit nozzle to the region to the right, into the flowing water. Further, the main reason for the generation of RSS is the interaction between the region dominated by the fluctuating velocity of the violent steam jet and the region carrying the mean water flow velocity. The stress gradient in these flows depends on the
time history of the turbulence. The RSS vanished in the regions where the velocity gradients were temporally zero, as can be seen from Figure 2c,d. The comparative profiles of the Reynolds shear stress and the mean velocity through the high-stress regions are shown in Figure 2e. Shear Layer-Driven Instabilities The shear layer generated at the interface between the steam and water contains instabilities that have long been observed in the case of vertical jet injection. These instabilities are formed by various flow-induced mechanisms that depend upon the Reynolds number and the velocity and density gradients between the two interacting fluids. The PIV images used to address this topic were converted into black-and-white images by applying a decolourisation technique with measures taken to maintain the contrast [31], followed by a Matlab-based edge detection technique [32].
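The greyscale conversion and edge detection step can be sketched as follows. This numpy version is only an illustration of the idea, not the Matlab pipeline of [31,32], and the synthetic frame is an invented stand-in for a real PIV image.

```python
import numpy as np

# Sketch of the image-processing step: reduce a colour frame to greyscale
# while stretching the contrast, then pick out interface edges with a
# simple gradient-magnitude operator (a crude stand-in for [31,32]).

def to_grey_stretched(rgb):
    """Luminance greyscale followed by full-range contrast stretching."""
    grey = rgb @ np.array([0.299, 0.587, 0.114])
    lo, hi = grey.min(), grey.max()
    return (grey - lo) / (hi - lo + 1e-12)

def edge_magnitude(grey):
    """Gradient magnitude via central differences (crude edge detector)."""
    gy, gx = np.gradient(grey)
    return np.hypot(gx, gy)

# Synthetic frame: dark water with a bright jet region on the left
img = np.zeros((40, 60, 3))
img[:, :20, :] = 0.9
edges = edge_magnitude(to_grey_stretched(img))
print(edges.max())  # strongest response sits on the jet/water interface
```

In the real images the ridge of high gradient magnitude traces the steam-water shear layer, which is what makes the KH ring vortices of Figure 3 visible.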
Since greyscale conversion involves a loss of data, decolourisation was applied with a contrast-preservation method. This relaxed the colour-contrast constraints and enabled us to construct a non-local colour pair (i.e., white and black) using a non-linear bounding hierarchy in which the duplication of the local colour was removed. The initial profile thus obtained showed the formation of a hovering vortex, which encapsulated the jet from the windward side, contributing to vortex pairs generated counter to each other, without affecting the horseshoe-like structures formed parallel to the surface of the flow vessel. Such structures were found to be dominant up to a height of 2.0 cm in the current experiment, as seen in Figure 2d. Due to the gradient between the velocities of the steam jet and the surrounding cross-flowing water, the shear layer contained vortical structures which overturned to cause Kelvin-Helmholtz (KH) instabilities. With sufficient water flow surrounding the steam jet, the KH instabilities were transformed into periodic ring vortices around the jet body, mainly at the interface between the steam and water, as can be observed in Figure 3. A careful examination of all the PIV images reveals that the behaviour involving the KH instabilities was not continuous; the main reason for this discontinuity may have been the inception and collapse of the steam bubbles after emerging from the exit of the nozzle. Thus, we can regard this behaviour as a mean, or time-averaged, behaviour [33]. From Figure 3, a small separation region can be seen just on the windward side near the exit of the nozzle, whereas the shear-layer-induced overturning circulations were influenced to a greater extent by the rise in the flow rate of the water. The vortex rings were found to be tilted initially and expanded more towards the windward side of the flow; however, after rising to a certain height, they tilted
in a clockwise direction and were diverted towards the right-hand side [34]. Counter-rotating vortex pairs (CVPs) were observed in this study, along with leeward-side folds, which can be noticed in Figure 3. The region near the nozzle exit appeared to be the dominant source of the CVPs. Based on these observations, the upstream part of the jet, tilted towards the right-hand side in a clockwise direction, together with the downward side, where the planes aligned with the direction of the jet, contributed to the creation of the counter-rotating vortex pairs on the windward and leeward sides of the jet [35][36][37]. Effects of Operating Conditions on the Mixing In this section, the flow process optimization of the supersonic steam jet injected into the cross-flowing water was conducted with reference to the effect of the flow properties, as well as the setup dimensions, on the mixing of the steam with the water. The prominent parameters used in the study of the steam-water flow process optimization included the flow rate ratio of the steam and water, the angle of the steam injection, and the ratio of the distance between the vessel inlet and the centre of the nozzle exit (l 1 ) to the distance between the flow channel outlet and the centre of the nozzle exit (l 2 ), i.e., l 1 /l 2 . The most effective parameter was determined based on the
extreme difference analysis method. The details of the variations in the vital process parameters are summarized in Table 2. Four criteria for the optimization of the flow process are set here: the extent of the pressure recovery, the extent of the mixing, the penetration height and the separation length. The pressure recovery was evaluated using the relation of [38]. The mixing efficiency (E mixing ) is defined as the ratio of the mass flow rate of the steam to the mass flow rate of the water (Equation (5)), where α is the void fraction (i.e., the fraction of steam in the total of steam + water) and the subscripts s and w represent the steam and the water, respectively. The penetration height was the height from the nozzle exit to the point at which the maximum velocity profile terminated in the cross-flowing water, and the separation length was the distance between the location of the separation of the shock wave and the centre point of the steam-injecting nozzle's exit [37]. The values of these four parameters, set here as the major criteria for the optimization of the flow conditions to achieve the maximum mixing between the two phases, are given in Table 3. It was found that the non-dimensionalised penetration height had an almost proportional effect on the non-dimensionalised separation length of the jet, whereas the penetration height itself was influenced by the ratio of the mass flow rates of the steam and the cross-flowing water. The most crucial parameter, the mixing efficiency, showed an increasing trend with the rise in the ratio of the mass flow rates, the penetration height and the separation length. The mixing efficiency was evaluated using the expression given in Equation (5).
In our observation, the most prominent factor affecting the mixing efficiency was the ratio of the mass flow rates and, subsequently, the velocity of the steam jet. However, in comparison with the other factors, the optimization of the mixing phenomenon was influenced most by the mixing efficiency. The effect of the angle of injection was found to have an inverse impact on all four variables, followed by the ratio of the distances. From the discussion above, it can be inferred that the ratio of the steam jet mass flow rate to the mass flow rate of the cross-flowing water had a significant impact on all of these parameters, affecting not only the mixing efficiency but also the penetration height and the separation distance ratio. Based on the measured values of all four parameters, extreme difference analysis was adopted here to determine the parameter with the greatest effect on the mixing performance of the supersonic steam jet injected into the cross-flowing water [38], which we believe to be the most important aspect of the performance of the two-phase mixing. It is expressed as

R = max(A i ) − min(A i )

where R is the extreme difference and A i are the values of the specific influencing parameter over all its levels. By using this relation, the extreme difference analysis shows that the greatest effect was imparted by the flow rate ratio of the steam and water, as shown in Figure 4.
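The extreme difference analysis reduces to computing a range per influencing parameter. The sketch below uses invented response values (not the Table 2/3 data) to show the bookkeeping.

```python
# Sketch of extreme-difference (range) analysis: for each influencing
# parameter, R = max(A_i) - min(A_i) over the response values it produces;
# the parameter with the largest R has the strongest effect.
# The response values below are invented for illustration only.

responses = {
    "flow-rate ratio": [93.9, 96.5, 99.3],   # mixing efficiency (%) per level
    "injection angle": [97.1, 96.2, 95.4],
    "l1/l2 ratio":     [96.0, 96.4, 96.9],
}

extreme_diff = {name: max(vals) - min(vals) for name, vals in responses.items()}
most_influential = max(extreme_diff, key=extreme_diff.get)

for name, r in extreme_diff.items():
    print(f"{name}: R = {r:.2f}")
print("dominant factor:", most_influential)
```

With these invented numbers the flow-rate ratio gives the widest range, which mirrors the conclusion drawn from Figure 4.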
Conclusions In the present experimental study, the physical picture of the phenomena associated with the injection of a supersonic steam jet into cross-flowing water in a square duct was investigated. A 1.2 m long square flow channel with a width and height of 10 cm served as the experimental setup, and the PIV technique and HFA sensors were used as diagnostic techniques, mainly to determine the extent of the penetration of the steam jet into the cross-flowing water, which scales with the ratio of the momentum fluxes of the steam jet and the cross-flowing water. The flow domains, derived from the PIV images in terms of the normalised velocity and the normalised planar duct dimensions, showed the penetration of the steam jet and its subsequent bending towards the water flow. The PIV-based jet penetration was compared with the model equation; the small difference between the two was attributed to the buoyancy of the steam, which is absent from the model equation.
Also, in the PIV images of the steam jet flow domain, small circulation zones were observed with vorticity opposite to that of the large circulation zone in front of the nozzle exit; most of these small circulation zones occurred at higher water flow rates. Other significant mixing parameters, such as the TKE, the RSS and the shear-driven instabilities, were also estimated to characterize the mixing between the two phases. With the rise in the flow rate of the water, the jet was deflected more towards the water flow, leading to a sudden expansion of the steam, and secondary small circulations were observed in the TKE profiles, whereas the RSS showed a slightly diffusive trend at higher y/D values across the direction normal to the flow. Further, from the PIV measurements, the shear layer containing the vortical structures was identified from the velocity gradient between the steam jet and the surrounding water. These structures overturned to cause KH instabilities, characterized by the formation of periodic ring vortices at the steam-water interface. CVPs were also observed along with leeward-side folds; the flow near the nozzle exit appeared to be the dominant source of the CVPs. The influence of the operating conditions on the optimization of the supersonic steam jet in cross-flowing water was also studied by evaluating the pressure recovery, mixing efficiency, penetration height and separation height. According to the extreme difference analysis, the most influential aspect of the performance of the phenomenon was the mixing efficiency. The distribution of the TKE varied between 0.01 and 0.51, and the values of the RSS varied between 0.25 and 0.27 (on both sides). Regarding the mixing efficiency, our observations indicated that the most prominent factor affecting it was the ratio of the mass flow rates and, consequently, the velocity of the steam jet. However, compared with the other factors (the pressure recovery, which varied from 32.21 to 89.33; the penetration height, which varied from 11.7 to 28.2; and the separation length, which varied from 4.1 to 7.8), the mixing phenomenon was characterized more by the mixing efficiency, which varied from 93.89 to 99.31%.

Figure 1. A schematic of the square duct with water and steam injections.
Figure 2. (a) Non-dimensionalised velocity profiles/penetration line/region across the fluid medium. (b) TKE profiles observed in experimental phase 5. (c) Reynolds shear stress profiles observed in experimental phase 5. (d) Reynolds shear stress profiles observed with the variation in the height of the PIV observation plane. (e) Mean velocity vs. Reynolds shear stress profiles at a steam inlet pressure of 3 bars through the high-stress regions.
Figure 3. Hydrodynamics of the shear layer and the jet profiles.
Figure 4. Quantification of the operating parameters on the mixing efficiency, pressure recovery, penetration height and the separation distance.
Table 1. Experimental phases and operating conditions.
Table 2. Experimental phases and operating conditions for design optimization.
Table 3. Optimization of the design based on the mixing efficiency and related variables.
Interrelation between Payout and Financing Decisions: Evidence from Emerging Markets (Взаимосвязь решений о выплатах собственникам и финансировании на примере компаний с развивающихся рынков капитала)

<b>English Abstract:</b> Financing and payout decisions generally affect a company's economic performance: they have an impact (both directly and indirectly) on the free cash flow and, thus, on the company's and shareholders' value. The search for the optimal capital structure and the optimal payout policy strategy that are likely to maximize shareholders' utility has resulted in papers dedicated to the determinants of capital structure and payout policy. In such papers, one of the policies is usually treated as a determinant of the other. This constraint does not allow researchers to draw conclusions about the existence or absence of interrelation between payout and financing choices. To capture this interrelation, simultaneous regression analysis should be performed. Researchers, though, have not come to a unified conclusion about the existence and direction of such interrelation. The absence of definite results, as well as the scarcity of research on emerging markets, makes this topic highly relevant. The results of recent research on the interrelation between payout and financing decisions are discussed in this paper. We also develop an econometric model that allows us to check the existence of the interrelation in emerging markets and to compare the results to those obtained from developed markets. The article contributes to the existing literature in the following directions: first, two debt variables are taken into account (total and long-term debt) as well as two payout policy variables (total payout and dividend payout). Second, macroeconomic variables are controlled for. Third, the results obtained from companies from emerging countries are compared to those obtained from developed markets.
<b>Russian Abstract:</b> Решения в области политики финансирования и политики выплат акционерам во многом определяют экономическую эффективность компании: они оказывают влияние на чистый денежный поток (прямо и опосредованно), а значит, и на стоимость компании, и на благосостояние акционеров. Поиск оптимальной структуры капитала и оптимальной стратегии в области выплат акционерам, которые бы обеспечили максимальную полезность для акционеров, обусловил появление работ, посвященных как детерминантам долговой нагрузки, так и детерминантам дивидендной политики. В подобных работах обычно одна из политик рассматривается в качестве детерминанты для другой, что не позволяет сделать вывод о наличии или отсутствии двусторонней связи. Для определения наличия двусторонней связи необходимо использовать системы одновременных уравнений. Исследователи не могут прийти к единому мнению относительно взаимной зависимости структуры капитала и дивидендной политики между собой.Отсутствие однозначных выводов о взаимозависимости политики финансирования и политики выплат между собой, а также низкая проработка проблемы на развивающихся странах обуславливают актуальность разработки данной темы.В статье рассмотрены и обобщены основные результаты исследований, посвященных взаимосвязи решений о выплатах и финансировании. Разработана эконометрическая модель, позволяющая определить наличие искомой взаимосвязи в развивающихся странах, а также сравнить результаты с компаниями с развитых рынков капитала.Статья дополняет уже существующие исследования по следующим направлениям: во-первых, исследуется несколько спецификаций структуры капитала (совокупный и долгосрочный долг) и дивидендной политики (совокупные и дивидендные выплаты). Во-вторых, в оба уравнения включаются макроэкономические параметры. В-третьих, будут проанализированы различия во взаимосвязи между компаниями из США и компаниями из развивающихся стран. 
Introduction

Capital structure and dividend policy are among the most researched topics in corporate finance. In 1958 and 1961, Modigliani and Miller published two papers dedicated to capital structure and dividend policy, respectively. The main conclusion of these papers is the irrelevance of financing and payout policies for value creation under certain assumptions (absence of corporate taxes, absence of transaction costs, and absence of information asymmetry). In the real world, these assumptions never hold and MM's theorems do not apply. This means that financing and payout decisions may actually affect the company's value. By adjusting capital structure and dividend policy, the management is able to pursue the aim of value maximization. On the one hand, for example, when corporate taxes actually exist, there is a positive effect of the tax shield, which reduces the tax burden on interest payments by the amount of the tax rate. The company's value will increase by the present value of this tax shield effect (PVTS) minus the costs of financial distress (COFD; when the company increases its debt, the PVTS increases, but so does the probability of financial distress). On the other hand, the company may use payout policy as a positive signal to markets, which will result in a higher stock price and an increase in the company's value. Let us discuss a way of interrelation between financing and payout decisions 1. The main goal of any commercial company is making profit. Net income may be distributed in two ways: it may be invested in some projects or it can be paid out to the company's shareholders. Net income in this case can be considered an internal source of financing; cash holdings may also be considered internal sources. Obviously, there can be a situation when net income and other internal sources are not enough to meet both the needs of strategic investments and shareholders' interests.
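This funding gap can be illustrated with a simple sources-and-uses identity — a sketch for intuition, not the exact budget-constraint equation used in the literature: internal funds plus external financing must cover investment and payout in each period.

```python
# Illustrative sources-and-uses identity (a sketch, not the paper's equation):
#   net_income + new_debt + new_equity = capex + payout + change_in_cash
# Rearranged, the external funds a firm must raise are:

def required_external_funds(net_income, capex, payout, change_in_cash=0.0):
    """Funding gap to be covered by new debt or equity issues; a negative
    value means a surplus that can repay debt or be retained."""
    return capex + payout + change_in_cash - net_income

# Hypothetical firm: income flat, capex rising, payout maintained.
gap = required_external_funds(net_income=100.0, capex=80.0, payout=40.0)
print(gap)  # 20.0 -> must issue debt/equity, or cut payout or investment
```

The sign of the gap is what links the two policies: a positive gap forces a choice between new debt and lower payout, while a negative gap allows debt repayment or higher payout.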
In such cases, the managers can decide to draw external funds, i.e. either debt financing or equity financing (Picture 1). The choice between these two alternatives will depend on the cost of debt and the cost of equity.

Picture 1. Interrelation between financing and payout decisions

The interrelations of the variables depicted in Picture 1 determine the relations between financing and payout decisions. In the paper by Lambrecht and Myers, the authors came up with a simple budget constraint equation that expresses the picture above in a mathematical way. Let us try to develop this idea and discuss some possible situations that show these interrelations in practice.

Table 1. The possible ways of interrelation between financing and payout decisions

In the first line of Table 1, there is a situation where the company has to finance its increasing capital expenditures while net income holds constant or even drops. The company will probably draw some debt. But what will happen to the payout? Companies rarely cut their dividends because of the strongly negative market reaction to such events. So, when payout decisions are made after investment decisions, debt may be used to maintain some level of payout or to slightly increase it. If this happens, the sign of interrelation will be positive. The sign may, however, be negative if the payout decisions are made in a later period. For instance, an increase in debt and capital expenditures in year 1 may be evidence of an emerging investment program. This increase may affect the payout decision in year 2 negatively, and the sign will also be negative. In addition, the company may be in a situation when it is not able to draw enough funds to cover both investments and payout. Therefore, the sign of interrelation will depend on which decision has higher priority: investment or payout. The second line of Table 1 depicts a situation when capital expenditures decrease while net income holds constant.
In this case, the company will try to repay some debt using free cash and increase payout (by the amount of the decrease in capital expenditures). The sign of interrelation is now negative. In terms of theory, payout decisions may be made regardless of investment policy, for instance if shareholders would like to withdraw free cash holdings from management's control. The sign will again be negative, as predicted by agency theory. We also have to take into account the fact that a Secondary Public Offering (SPO) may also be used as an external source of finance. This way of financing does not imply any periodic interest payments, but it usually implies additional dividend payments to the new shareholders (the dividend per share can stay the same, but, for example, the ratio of dividend payments to total assets may increase). Simultaneously, the capital structure (determined as the ratio of total debt to total assets) will decrease. Let us now discuss some empirical papers that tried to investigate the interrelation between financing and payout decisions in the developed countries and emerging markets.

Literature on the interrelation between financing and payout decisions

To determine whether an interrelation between financing and payout decisions truly exists, it is not enough to use one variable as a determinant of the other in a regression equation. We need to take into account the fact that capital structure and dividend policy are endogenous variables, which means that the coefficients can be inconsistent. In this case, we need to use a system of equations and special econometric tools to determine the coefficients. Usually these tools include two- or three-step least squares. These two tools allow determining the fact of simultaneous interrelation between two or more endogenous variables (Table 2). One group of authors [Peterson, Benesh, 1983] points to a positive sign of interrelation.
In this case, we can talk about signaling: companies use debt to maintain or increase payout and provide markets with positive signals to boost the company's stock prices. The other group of authors agrees that the interrelation between financing and payout decisions exists; however, they argue that this interrelation has a negative sign. This result supports the agency theory, which states that dividends are used to reduce the free cash holdings under the control of managers. Such companies usually have enough cash to both decrease debt and increase payout. The third group does not find any evidence in support of the hypothesis of the existence of the interrelation [Dhrymes, Kurz, 1967]. These authors only find evidence for the effect of payout policy on capital structure. As for the emerging markets (Vietnam and South Korea), the results obtained from these samples are quite controversial. In Vietnam, the authors find that the interrelation between financing and payout has a negative sign [Vo, Nguyen, 2014], while the sample of Korean companies shows a positive interrelation [Yong et al., 2007]. Based on these two papers, one cannot make an unquestionable conclusion on the sign and existence of the interrelation between financing and payout decisions. Now we move to the empirical part of the paper.

Econometric model development

As was stated previously, a very limited number of papers have been dedicated to the problem of interrelation between financing and payout decisions. Even when authors investigate this puzzle on samples of developed countries, they cannot come to a unified conclusion. That is why we decided to test our hypotheses not only on companies from emerging markets, but also on American companies. This allows us to compare the trends in the decision-making process between American companies and emerging countries' companies. In addition, the data on American companies seem to be more reliable, which helped us to adjust our model.
We used the S&P Capital IQ database to obtain the necessary data. This database was chosen because of its convenient output interface and the reliable and relevant data needed for the current research. The drawback of the database is its limited time coverage (the data are available from 2007). However, in other databases it is hard to find reliable data on the emerging countries earlier than 2006. Therefore, for the purpose of this paper, S&P Capital IQ is sufficient. The sample, which consists of data for the period 2007-2013 (at the time of writing this paper, 2014 data were not available), allows us to obtain the necessary number of observations. This period shows the trends of interrelation that took place in the emerging markets recently. Macroeconomic variables were obtained from the World Bank's World Development Indicators database. The Stata package was used for evaluating the econometric model. Based on the papers from the previous section and the obtained samples, we can propose the following hypotheses for the companies from developing countries:

1. There is a negative interrelation between the payout-to-assets ratio and the debt-to-assets ratio in the companies from developing markets;
2. The negative interrelation between the payout-to-assets ratio and the debt-to-assets ratio takes place in both developing [Vo, Nguyen, 2014] and developed countries;
3. The specifications of the payout-to-assets ratio and the debt-to-assets ratio do not affect the sign of interrelation.

For the econometric analysis of the interrelation between the payout-to-assets ratio and the debt-to-assets ratio, we construct the following system of equations (1):

Payout_it = α_0 + α_1 Debt_it + α_2 q_Tobin_it + α_3 CapEx_it + α_4 Cash_it + α_5 ROS_it + α_6 Macro_t + α_7 Payout_i,t−1 + ε_it
Debt_it = β_0 + β_1 Payout_it + β_2 q_Tobin_it + β_3 CapEx_i,t−1 + β_4 Cash_it + β_5 ROA_it + β_6 Macro_t + γ_it

where Payout is the level of payout to shareholders. We use two proxies for this variable: total payout (tot_payout), which is the ratio of the sum of dividend payout and repurchases to total assets, and dividend payout (div_payout), which is the ratio of dividend payout to total assets; Debt is the company's capital structure.
We use two proxies for capital structure as well: total debt (tot_debt), which is the ratio of the sum of short-term and long-term debt to total assets, and the ratio of long-term debt to total assets (lt_debt); q_Tobin is the ratio of the company's market capitalization to the book value of the company's assets; CapEx is the company's investment policy, determined as the ratio of capital expenditures to total assets; Cash is the company's cash holdings — the ratio of cash to total assets; this variable allows us to determine the effect of the company's cash flows on the payout and debt ratios; ROS is return on sales (the ratio of net income to sales), which allows us to check the effect of accounting performance on payout decisions; ROA is return on assets (the ratio of net income to total assets), which allows us to check the effect of accounting performance on financing decisions; Macro includes three variables that characterize the macroeconomic environment in emerging countries 2: the annual inflation rate (infl), the natural logarithm of Gross Domestic Product per capita (ln_gdp), and the ratio of total market capitalization to the country's GDP (mcap_to_gdp); ε and γ are the error terms. In the equations of System (1), instruments are chosen to consider the panel structure of our data. In the first equation we replace ROA with ROS (return on sales, which is the ratio of net income to sales), and in the second, CapEx with lagged CapEx. This was done to make the set of instruments more diversified in order to fight the endogeneity problem. We decided to use 3SLS instead of 2SLS because it allows for possible correlation between the errors, and we cannot be sure that there is no correlation between the errors in our sample. The next section is dedicated to the evaluation of the econometric model and the discussion of results.

Results of econometric research

We start this section with the discussion of descriptive statistics for our samples.
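Before turning to the data, the estimation logic behind System (1) can be illustrated in miniature. The sketch below runs two-stage least squares for one equation of a simultaneous system on simulated data (all numbers are illustrative, not from the paper's sample); 3SLS repeats this per equation and then adds a GLS step using the estimated cross-equation error covariance:

```python
import numpy as np

# Simulated simultaneous system:
#   y1 = a*y2 + b*x1 + e1   (think: payout equation)
#   y2 = c*y1 + d*x2 + e2   (think: debt equation)
rng = np.random.default_rng(0)
n = 5000
x1, x2 = rng.normal(size=n), rng.normal(size=n)
e1, e2 = 0.1 * rng.normal(size=n), 0.1 * rng.normal(size=n)

a, b, c, d = -0.5, 1.0, 0.3, 1.0          # true structural coefficients
y1 = (a * (d * x2 + e2) + b * x1 + e1) / (1 - a * c)   # reduced form for y1
y2 = c * y1 + d * x2 + e2

def ols(X, y):
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Stage 1: project the endogenous regressor y2 on all exogenous variables.
Z = np.column_stack([np.ones(n), x1, x2])
y2_hat = Z @ ols(Z, y2)

# Stage 2: regress y1 on the fitted y2 and the equation's own exogenous x1.
beta = ols(np.column_stack([np.ones(n), y2_hat, x1]), y1)
print(beta[1])   # close to the true simultaneous coefficient a = -0.5
```

Because the fitted value of the endogenous regressor is used in the second stage, the estimate of the simultaneous coefficient is consistent, whereas naive OLS of y1 on y2 would be biased by the feedback between the two equations.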
In Table 3, one can see the descriptive statistics for the companies from the United States.

Table 3. Descriptive statistics for the US companies

It is clear from Table 3 that the sample is very diversified, with very different companies: from firms that do not pay any dividends (non-payers) to active payers; from zero-debt companies to active borrowers; from non-profitable to highly profitable, and so on. We can also see that there are no extraordinary observations. Now we move to the emerging countries' statistics.

Table 4. Descriptive statistics for the companies from emerging countries

It is clear from Table 4 that the observations from the sample of developing countries are also very diversified. This is not surprising, as we have companies from 9 different emerging countries. It is interesting to notice that, on average, the dividend payout ratio for developing countries is 0.64% higher than that for American companies, while the total payout ratio is 2% lower than that for American companies. This is probably because repurchases are now a more popular way of distributing payout to shareholders in the developed than in the developing countries. The total debt ratio on average is almost equal for both American and emerging countries' companies, but it is clear that in the US long-term debt is used more widely (maybe because of the term structure of interest rates or some structural differences in the economies).

Results for the US companies

At first, we test our hypotheses on the sample of American companies.

Table 5. Results for the companies from the United States 3

As shown in Table 5, there is a significant interrelation between financing and payout decisions in the US companies for all specifications of the payout and debt ratios. For the model with the total payout ratio there is a negative sign of the interrelation, while lagged capital expenditures affect debt ratios positively.
This result can be interpreted as follows: if the company cuts its investments in year 0, in year 1 it can use free cash holdings both for repaying the existing debt and for boosting its total payout. However, for the model with the dividend payout ratio the interrelation has a positive sign. As stated earlier, US companies prefer to distribute cash to their shareholders using repurchases instead of dividends. We can assume that if the company faces a bad year (in terms of low or negative net income), it may struggle to pay out some minimum level of dividends using debt finance. On the contrary, if internal sources of finance are sufficient, the company will make a repurchase, which will be considered an additional payout. These results allow us to make two important conclusions. First, in the United States managers really do make financing and payout decisions simultaneously. Second, the sign of the interrelation may be affected by the specification of payout policy (whether it is the total payout ratio or the dividend payout ratio). Using the first proxy supports the agency theory, while the second supports the signaling theory. We explain these differences by the fact that dividends nowadays make up a small fraction of total payout in the US. The next section discusses the results obtained for the companies from emerging markets.

Results for emerging markets' companies

In Table 6, one can see the results for the sample of companies from all nine emerging countries.

3 Here and below *p < 0.1; **p < 0.05; ***p < 0.01.

Table 6. Results obtained on the sample of developing countries

The obtained results for the sample of nine emerging countries support our hypothesis about the existence of an interrelation between the payout and debt ratios (Table 6). The sign of the interrelation is negative for the models with the total debt ratio (which supports the agency theory) and positive for the models with the long-term debt ratio. There can be two reasons for that.
First, companies in emerging countries may use only long-term borrowings to finance payouts. Second, the results may be affected by the diversified sample. Therefore, the next step is to evaluate the model for each country separately. There is a significant interrelation between the payout and debt ratios in all nine countries; however, the signs vary among debt specifications and countries (Table 7). In Argentina, India, South Korea, Peru, Portugal, Singapore and Thailand the sign of interrelation is negative for almost every model. For Russian and Chinese companies the sign is positive for every model (which supports the signaling model). Most developing countries have patterns of making financing and payout decisions similar to those of the companies from the United States. The only difference is in the results for the models with the dividend payout ratio. As stated earlier, this can be explained by the fact that dividends are less popular nowadays in the US than in the developing countries. Let us look at the descriptive statistics for Russia (Table 8) and China (Table 9).

Table 8. Descriptive statistics for the companies from Russia

From Table 8, one can see that on average Russian companies use long-term debt more widely than other emerging countries. Moreover, Russian companies use repurchases more widely. These characteristics have some similarities with those of the US companies. Russian companies may use debt financing to maintain some appropriate level of payout to attract new investors. However, we did not find any evidence that Russian companies use repurchases to distribute additional funds (as the US companies do).

Table 9. Descriptive statistics for the companies from China

From the descriptive statistics of Chinese companies, we cannot find any explanation for the possible differences in results between companies from China and companies from other emerging countries. Probably, Chinese companies use debt to maintain some competitive level of payout to attract new investors.
Moreover, they use debt to finance their investments, while increasing profitability leads to increasing payout. To sum up, our hypothesis about the negative interrelation between debt and payout ratios cannot be rejected for the companies from emerging countries (except China and Russia). This means that the results can differ not only between developed and developing countries but also among the members of these two groups.

Comparison of the results

Using two samples constructed from the financial data of American and emerging countries' companies, we managed to find some statistically significant results. We successfully employed the three-step least squares method to capture the simultaneous interrelation between debt and payout ratios. Financing and payout decisions are really made simultaneously and have a statistically significant interrelation with each other. There is a negative interrelation for the companies from the United States for the models with the total payout ratio and a positive one for the models with the dividend payout ratio. The specification of capital structure does not affect the results for the US companies. For the sample of nine emerging countries' companies, we found a negative interrelation for the models with the total debt ratio and a positive one for the long-term debt ratio. The payout specification did not affect the sign. However, when we investigated the emerging countries separately, we found that the signs of interrelation may vary among countries. The sign of the interrelation may be sensitive to the debt level specification. The long-term debt ratio and the total debt ratio may be used interchangeably in research on US companies, but not on companies from the developing countries. These two types of debt ratios are used for different purposes in emerging markets, which is why it is reasonable to study them separately.
Although the payout level specification does not affect the sign of the interrelation in emerging countries, we tend to think that one should always test hypotheses on both specifications. We also come to the following conclusions concerning the other determinants. First, the lagged payout level has a positive influence on the current level of payout in every model in the US and in the emerging countries, which supports Lintner's hypothesis. Second, macroeconomic variables positively affect both capital structure and payout level. Third, cash holdings affect the level of debt negatively; however, we did not find any evidence for a positive relation between cash and payout level. Similarly, there is no evidence for a negative relationship between the level of capital expenditures and payout ratios, but a positive relationship between capital expenditures and debt ratios does take place in both samples.

Conclusion

Payout and financing decisions are really made simultaneously and are jointly determined. There is a negative interrelation between the total payout ratio and debt ratios and a positive one between the dividend payout ratio and debt ratios in the US companies. We tend to think that this result can be explained by the fact that repurchases are now a more popular type of payout than dividends in the US. Dividends in the US might be considered a "minimum" payout level, which will be maintained by any means, including new debt issues. For instance, if the company faces a significant negative change in its net income, it can draw more debt to maintain the dividend payout at its minimum acceptable level. However, when there is a positive change in net income, the company can make a repurchase (which will be considered an extra payout) and reduce its debt.
Probing Toxic Content in Large Pre-Trained Language Models

Large pre-trained language models (PTLMs) have been shown to carry biases towards different social groups, which leads to the reproduction of stereotypical and toxic content by major NLP systems. We propose a method based on logistic regression classifiers to probe English, French, and Arabic PTLMs and quantify the potentially harmful content that they convey with respect to a set of templates. The templates are prompted by the name of a social group followed by a cause-effect relation. We use PTLMs to predict masked tokens at the end of a sentence in order to examine how likely they are to enable toxicity towards specific communities. We shed light on how such negative content can be triggered within unrelated and benign contexts based on evidence from a large-scale study, and we explain how to take advantage of our methodology to assess and mitigate the toxicity transmitted by PTLMs.

Introduction

The recent gain in size of pre-trained language models (PTLMs) has had a large impact on state-of-the-art NLP models. Although their efficiency and usefulness in different NLP tasks is incontestable, their shortcomings, such as the learning and reproduction of harmful biases, cannot be overlooked and ought to be addressed. Present work on evaluating the sensitivity of language models towards stereotypical content involves the construction of assessment benchmarks (Nadeem et al., 2020; Tay et al., 2020; Gehman et al., 2020) in addition to the study of the potential risks associated with the use and deployment of PTLMs (Bender et al., 2021). Previous work on probing PTLMs focuses on their syntactic and semantic limitations (Hewitt and Manning, 2019; Marvin and Linzen, 2018), lack of domain-specific knowledge (Jin et al., 2019), and absence of commonsense (Petroni et al., 2019).
However, except for a recent evaluation of hurtful sentence completion (Nozza et al., 2021), we notice a lack of large-scale probing experiments for quantifying toxic content in PTLMs, or of systematic methodologies to measure the extent to which they generate harmful content about different social groups. In this paper, we present an extensive study which examines the generation of harmful content by PTLMs. First, we create cloze statements which are prompted by explicit names of social groups followed by benign, simple actions from the ATOMIC cause-effect knowledge graph patterns (Sap et al., 2019b). Then, we use a PTLM to predict possible reasons for these actions. We look into how BERT (Devlin et al., 2019), RoBERTa (Liu et al., 2019), and GPT-2 (Radford et al., 2019) associate unrelated and detrimental causes with basic everyday actions, and examine how frequently the predicted words relate to specific social groups. Moreover, we study the same phenomenon in two other languages by translating more than 700 ATOMIC commonsense actions into Arabic and French, along with names of social groups, and then run the same experiments using the French PTLM CamemBERT (Martin et al., 2020) and the Arabic AraBERT (Antoun et al., 2020). We find that, overall, the predicted content can be irrelevant and offensive, especially when the subject of the sentence is part of a marginalized community in the predominant culture of the language. In order to gauge the toxicity generated by the different language models, we train simple toxicity classifiers based on logistic regression using available hate speech and offensive language datasets. We reduce the classification bias using a two-step approach: first, we filter out examples with identity words, which typically lead classifiers to predict a toxic label; then we perform a second classification step on the remaining examples. Our main contributions can be summarized in the following.
• We perform a large-scale extensible study on toxic content in PTLMs without relying on datasets which are specific to such a task.
• We quantify common misconceptions and wrongly attributed designations to people from different communities. This assessment can be taken into account when using a PTLM for toxic language classification, and when adopting a mitigation strategy in NLP experiments.
• We develop a large dataset based on structured patterns that can later be used for the evaluation of toxic language classification and harmful content within PTLMs. We make our data resources publicly available to the community.

Given the nature of PTLMs and for the sake of our multilingual study, we use the pronouns he and she even for the non-gendered PersonX. ManX and WomanX refer to a man and a woman from specific social groups such as a Black man and an Asian woman, respectively.

The rest of the paper is organized as follows. We first introduce our methodology in Section 2. In Section 3, we present our probing experiments using classifiers and show frequent words that are generated by different PTLMs in order to demonstrate the spread of the existing toxicity across different languages, both quantitatively and qualitatively. Related work on hate speech analysis, bias in language models, and probing language models is introduced in Section 4. Finally, we conclude our paper in Section 5 and discuss the ethical considerations of our study in Section 6.

Methodology

We adopt a rule-based methodology based on Masked Language Modeling (MLM) in order to probe the toxicity of the content generated by different PTLMs. As shown in Figure 1, we use a PTLM on a one-token masked cloze statement which starts with the name of a social group, followed by an everyday action, and ends with a predicted reason for the action. Our goal is to provide a set of tests and a process to assess toxicity in PTLMs with regard to various social groups.
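Concretely, building such one-token cloze tests amounts to crossing subjects, actions, and gendered connectors. The sketch below is a minimal illustration of that construction; the group names, actions, and connector wording are invented placeholders, not the paper's exact lists.

```python
# Sketch of the cloze-statement construction: subject x action x connector,
# ending in a single masked token for the PTLM to fill in.
# All lists below are illustrative stand-ins, not the paper's data.

SOCIAL_GROUPS = {
    "person": ["A person", "A Black person", "A Muslim person"],
    "man":    ["A man", "An Asian man", "A refugee man"],
    "woman":  ["A woman", "A Hispanic woman", "A Jewish woman"],
}

ACTIONS = ["prepares dinner", "goes hiking", "joins the basketball team"]

# "because he/of his" for person and man subjects, "because she/of her" otherwise
CONNECTORS = {
    "person": ["because he is", "because of his"],
    "man":    ["because he is", "because of his"],
    "woman":  ["because she is", "because of her"],
}

MASK = "[MASK]"  # placeholder for the token the PTLM would predict

def make_cloze_statements():
    """Cross subjects, actions, and connectors into one-token cloze tests."""
    statements = []
    for kind, subjects in SOCIAL_GROUPS.items():
        for subject in subjects:
            for action in ACTIONS:
                for connector in CONNECTORS[kind]:
                    statements.append(f"{subject} {action} {connector} {MASK}.")
    return statements

stmts = make_cloze_statements()
print(len(stmts))   # 3 kinds x 3 subjects x 3 actions x 2 connectors = 54
print(stmts[0])     # "A person prepares dinner because he is [MASK]."
```

Scaling the real lists of ATOMIC heads and social groups through the same cross product yields the hundreds of thousands of sentences reported later in the paper.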
Probing Patterns

We use the ATOMIC atlas of everyday commonsense reasoning based on if-then relations (Sap et al., 2019b) to create cloze statements to fill in. Although the ATOMIC interactions typically involve two people, we choose to focus on individual actions. Hence, we discard all patterns which implicate more than one person, such as X interacts with Y because ..., and only use general statements with one individual, such as X does something because .... We prompt the statements by the name of a social group and use gendered pronouns to evoke the effect of the action. For the sake of normalizing English, French, and Arabic patterns, we do not consider the pronoun they. As shown in Table 1, we adapt X to be either a person, a man, or a woman. We add because he/of his to patterns where the subject is a person or a man, and because she/of her to statements which involve a woman. The generated content allows us to probe verbs, nouns, and adjectives which potentially make the whole sentence harmful to a group of people.

Table 2: Examples of social groups we use in our experiments. Race refers to different racial groups; Rel. to different (non)religious affiliations; Gen. to different genders and sexual orientations; Politics to various political views; Intersect. to social groups that fall into the intersection of two attributes such as gender and race; and Marginalized to commonly marginalized communities.

Lists of Social Groups

The PersonX and PersonY placeholders contained in the original ATOMIC patterns are insufficient to probe a PTLM with respect to present social entities and constructs. Slightly modified patterns such as ManX or WomanX give us an idea about the disparities between men and women only. Therefore, in order to look into additional variations in detail, we propose to include social groups in our evaluation by substituting PersonX, ManX, and WomanX in a way that involves different subgroups such as "Black men" or "Asian women".
The subgroups share a general social attribute or a value system. Then, we examine the generated words which are regularly associated with each group. Table 2 contains examples of these subgroups.

The Generated Data

We use a total of 1,000 ATOMIC heads for each language, 6,000 patterns for English and French, and 4,000 patterns for Arabic. We generate 378,000 English sentences, 198,300 French sentences, and 160,552 Arabic sentences using the presented patterns. We notice in the examples shown in Table 3 that, when using a PTLM to reason about the possible intentions related to basic actions, stereotypical, confusing, and harmful content can easily be generated. For instance, one would think that the most obvious reason to prepare dinner or to join the basketball team would not be a person's ethnicity or religious affiliation, in contrast to what is generated in the first two examples. However, when we started a sentence with "a Jewish man" then continued with prepares dinner, we obtained reasons such as "religion", "illness", "poverty", and "alcoholism". Then, when substituting the subject of a sentence by "an Arab" and the action being him on the basketball team, we obtained reasons such as "race" and "faith", even before "height". The case of a refugee woman going hiking is even worse, since most of the generated content is related to death and diseases, and the PTLM produces syntactically incoherent sentences where nouns such as tuberculosis and asthma appear after the pronoun she. Given the frequency of the observed incoherent and harmful content, we come up with a way to quantify how often such content tends to be generated.

Probing Classifiers

We propose to use simple toxic language classifiers despite their bias towards slurs and identity words (Sap et al., 2019a; Park et al., 2018; Ousidhoum et al., 2020). Due to the trade-off between explainability and performance, we train simple logistic regression (LR) models rather than deep learning ones.
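As a toy illustration of such an LR probe, the following trains a bag-of-words logistic regression by plain gradient descent; the six labeled sentences are invented stand-ins for the cited hate-speech datasets, not real examples from them.

```python
import math
from collections import Counter

# Toy stand-in for the paper's LR toxicity probes: bag-of-words features,
# logistic regression fit by gradient descent on tiny invented data.
TRAIN = [
    ("you are wonderful and kind", 0),
    ("what a lovely day outside", 0),
    ("thanks for the great help", 0),
    ("you are a stupid idiot", 1),
    ("shut up you worthless fool", 1),
    ("go away you stupid fool", 1),
]

vocab = sorted({w for text, _ in TRAIN for w in text.split()})

def featurize(text):
    counts = Counter(text.split())
    return [counts[w] for w in vocab]

def train_lr(data, epochs=300, lr=0.5):
    w, b = [0.0] * len(vocab), 0.0
    for _ in range(epochs):
        for text, y in data:
            x = featurize(text)
            z = b + sum(wi * xi for wi, xi in zip(w, x))
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - y                      # gradient of the log-loss wrt z
            b -= lr * g
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
    return w, b

def predict_toxic(text, w, b):
    x = featurize(text)
    z = b + sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z)) > 0.5

w, b = train_lr(TRAIN)
print(predict_toxic("you stupid fool", w, b))    # True
print(predict_toxic("a lovely kind day", w, b))  # False
```

The actual probes are trained on the labeled datasets listed below; only the model family (LR over word features) is the same.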
We trained an LR classifier on four relatively different English datasets (Davidson et al., 2017; Founta et al., 2018; Ousidhoum et al., 2019; Zampieri et al., 2019), four others in Arabic (Ousidhoum et al., 2020; Albadi et al., 2018; Mulki et al., 2019; Zampieri et al., 2020), and the only one we know about in French (Ousidhoum et al., 2019). Table 4 shows the performance of the LR classifiers on the test splits of these datasets, respectively. The usefulness of the classifiers can be contested, but they remain reasonably good pointers since their performance scores are better than random guesses. We use the three classifiers to assess different PTLMs and compare the extent to which toxicity can be generated despite the benign commonsense actions and simple patterns we make use of.

Bias in Toxic Language Classifiers

Toxic language classifiers show an inherent bias towards certain terms, such as the names of some social groups which are part of our patterns (Sap et al., 2019a; Park et al., 2018; Hutchinson et al., 2020). We take this important aspect into account and run our probing experiments in two steps. In the first step, we run the LR classifier on cloze statements which contain patterns based on different social groups and actions without using the generated content. Then, we remove all the patterns which have been classified as toxic. In the second step, we run our classifier over the full generated sentences with only patterns which were not labeled toxic. In this case, we consider the toxicity of a sentence given the newly PTLM-introduced content.

Table 5 (excerpt): CamemBERT 23.38% 20.30% 17.69%; AraBERT 3.34% 6.59% 5.82%.

Finally, we compare counts of potentially incoherent associations produced by various PTLMs in English, French, and Arabic.

Experiments

We use the HuggingFace library (Wolf et al., 2020) to implement our pipeline which, given a PTLM, outputs a list of candidate words and their probabilities. The PTLMs we use are BERT, RoBERTa, GPT-2, CamemBERT, and AraBERT.
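The two-step procedure can be sketched as follows. Both the toxicity classifier and the masked-token predictor are stubs here (a fixed keyword list and fixed candidate words, both hypothetical); in the paper they are the trained LR models and PTLMs served through the HuggingFace library.

```python
# Sketch of the two-step probing pipeline. The stubs below stand in for
# the trained LR classifier and for a PTLM's top-k masked-token predictions.

def stub_classifier(text):
    """Stand-in toxicity classifier: flags a few illustrative keywords."""
    toxic_words = {"disease", "stupid", "criminal"}
    return any(w.strip(".") in toxic_words for w in text.lower().split())

def stub_ptlm_topk(cloze, k=3):
    """Stand-in for masked-token prediction; returns fixed candidates."""
    return ["religion", "criminal", "job"][:k]

def probe(patterns):
    # Step 1: drop patterns the classifier already labels toxic, so that
    # identity words alone cannot drive the toxicity counts.
    kept = [p for p in patterns if not stub_classifier(p.replace("[MASK]", ""))]
    # Step 2: fill the mask with each candidate, count toxic completions.
    toxic, total = 0, 0
    for pattern in kept:
        for word in stub_ptlm_topk(pattern):
            total += 1
            toxic += stub_classifier(pattern.replace("[MASK]", word))
    return len(kept), toxic, total

patterns = [
    "A refugee goes hiking because of his [MASK].",
    "A stupid person sings because of his [MASK].",  # removed in step 1
]
print(probe(patterns))  # (1, 1, 3): one pattern kept, 1 of 3 completions toxic
```

Only toxicity introduced by the predicted token is counted in step 2, which is the point of filtering first.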
Main Results

We present the main results based on the proportions of toxic statements generated by different PTLMs in Table 5. In the first step, 9.55%, 83.55%, and 18.25% of the English, French, and Arabic sentences to be probed were filtered out by the toxic language classifiers. As we only have one relatively small dataset on which we train our French LR classifier, that classifier shows more bias and is more sensitive to the existence of keywords indicating social groups. The English and Arabic data were found to be less sensitive to the keywords and actions present in the patterns. After filtering out the toxic patterns that our classifier labeled as offensive, we fed the sentences generated from the remaining patterns to the toxic language classifiers. The overall results for the three PTLMs in English and the two Arabic and French PTLMs are shown in Table 5. This large-scale study of five popular pre-trained language models demonstrates that a substantial proportion of the generated content given a subject from specific social groups can be regarded as toxic. In particular, we found that for English, BERT tends to generate more toxic content than GPT-2 and RoBERTa, which may also be due to the fact that GPT-2 generated a large number of stop words. Although the French PTLM CamemBERT seems to produce more toxic content than the Arabic and English PTLMs, this may only be because we assess fewer samples in French after the first filtering step. Hence, we need additional evidence to be more conclusive. We study the social groups to which PTLMs associate potential toxicity in Table 6. The outcome is consistent with the overall results in Table 5.
For instance, the statistics show that refugees and disabled people are often linked to toxic statements in BERT, and that people with Down Syndrome and African people are commonly associated with toxicity in French, while we observe a difference in scale due to AraBERT often predicting stopwords and Arabic pronouns. Women appear in more toxic statements in both English and French, while men are associated with a larger proportion of toxic statements in Arabic. Despite the possibility of false positives and false negatives, the statistics show that there is a significant amount of toxic content generated by widely used PTLMs that needs to be examined.

Human Evaluation

We randomly sample 100 generated sentences and ask five domain experts to annotate them as stereotypical, insulting, confusing, or normal.

Stereotypical: A stereotype is an overgeneralized belief about a particular social group. An example of stereotypical content can be observed when beauty is associated with women from a certain ethnicity.

Insulting: A generated insulting statement can consist of a direct insult regardless of the context, such as names of animals associated with social groups (X is a dog). Other indirect insulting statements depend on the context of the statement, such as saying that someone received a job offer because of their ethnicity, religion, or gender and not due to their abilities.

Confusing: A statement is labeled confusing when annotators cannot decide whether the statement is problematic or not due to a lack of information. For instance, one can annotate X prepares dinner for his friends because of his religion as confusing, since it may lack commonsense or may instead reflect X's dietary restrictions, but the annotator cannot decide without more context. Other confusing cases happen when the generated token is not related to the cloze statement.
Normal: The generated content sounds normal.

Discussion

As shown in Table 7, many instances were considered toxic by the annotators. Some examples of negative English sentences that all five annotators agreed on are a black woman makes a snack because of her poverty and a disabled man tells mom because of his blindness. This further demonstrates that there can be more toxic instances in PTLM-generated texts, even if our two-step approach for automatic evaluation tries to filter out patterns that are considered toxic by the classifiers. Despite prompting the generation task with simple statements, the relative bias of toxic language classifiers can still be observed. In addition, harvesting the generated data by breaking a given sentence into a subject, an action, and a reason (which corresponds to the unmasked token) to guide the classification process allowed us to counter a considerable portion of false positives. This may later help us define a trust value, or how much each part of the sentence contributes to the toxicity score, and make this process explainable. In fact, an explainable toxic language detection process could speed up human annotation since the annotators would be pointed to the part of the sentence that may have misled the classifier.

Frequent Content in English

We show examples of potentially harmful yet relatively informative descriptive nouns and adjectives which appear as Top-1 predictions in Table 8. We observe a large portion of (a) stereotypical content, such as refugees being depicted as hungry by BERT and afraid by GPT-2; (b) biased content, such as pregnant being commonly associated with actions performed by (1) Hispanic women and (2) women in general; and (c) harmful content, such as race, religion, and faith attributed as intentions to racialized and gendered social groups even when they perform basic actions.
This confirms that PTLM-generated content can be strongly associated with words biased towards social groups, which can also support an explainability component for toxic language analysis in PTLMs. In fact, we can also use these top generated words and their strongly attached associations as anchors to further probe other data collections or to evaluate selection bias in existing toxic content analysis datasets (Ousidhoum et al., 2020).

Frequent Content in French and Arabic

Table 8: Examples of relatively informative descriptive nouns and adjectives which appear as Top-1 predictions. We show the two main social groups that are associated with them. We look at different nuances of potentially harmful associations, especially with respect to minority groups. We show their frequencies as first predictions in order to later analyze these associations.

Similarly to Table 8, Table 9 shows biased content generated by Arabic and French PTLMs. We observe similar biased content about women, with the common word pregnant in both French and Arabic, in addition to other stereotypical associations such as gay and Asian men being frequently depicted as drunk in Arabic, and Chinese and Russian men as rich in French. This confirms our previous findings in multilingual settings.

A Case Study on Offensive Content Generated by PTLMs

When generating Arabic data, in addition to stereotypical, biased, and generally harmful content, we have observed a significant number of names of animals, often seen in sentences where the subject is a member of a commonly marginalized social group in the Arabic-speaking world such as foreign migrants. Table 10 shows names of animals that usually carry a negative connotation in the Arabic language. Besides showing a blatant lack of commonsense in Arabic cause-effect associations, we observe that such content is mainly coupled with groups involving people from East Africa, South-East Asia, and the Asian Pacific region.
Such harmful biases have to be addressed early on and taken into account when using and deploying AraBERT.

Table 10: Frequency (Freq) of social groups (S) associated with names of animals in the predictions.

dog:       Japanese 2,085; Indian 2,025; Chinese 1,949; Russian 1,924; Asian 1,890
pig:       Hindu 947; Muslim 393; Buddhist 313; Jewish 298; Hindu women 183
donkey:    Indian 472; Pakistani 472; Brown 436; Arab 375; African 316
snake:     Indian 1,116; Chinese 831; Hindu 818; Asian 713; Pakistani 682
crocodile: African 525; Indian 267; Black 210; Chinese 209; Asian 123

The words are sometimes brought up as a reason (e.g., A man finds a new job because of a dog) as part of implausible cause-effect sentences, yet sometimes they are used as direct insults (e.g., because he is a dog). The latter statement is insulting in Arabic.

Related Work

The large and incontestable success of BERT (Devlin et al., 2019) revolutionized the design and performance of NLP applications. However, we are still investigating the reasons behind this success, including on the experimental setup side (Prasanna et al., 2020). Classification models are typically fine-tuned using PTLMs to boost their performance, including hate speech and offensive language classifiers (Aluru et al., 2020; Ranasinghe and Zampieri, 2020). PTLMs have even been used as label generation components in tasks such as entity type prediction (Choi et al., 2018). This work aims to assess toxic content in large PTLMs in order to help examine the elements which ought to be taken into account when adapting the aforementioned strategies during the fine-tuning process.
Similarly to how long-existing stereotypes are deep-rooted in word embeddings (Papakyriakopoulos et al., 2020; Garg et al., 2018), PTLMs have also been shown to recreate stereotypical content due to the nature of their training data (Sheng et al., 2019). Different probing experiments have been proposed to study the drawbacks of PTLMs in areas such as the biomedical domain (Jin et al., 2019), syntax (Hewitt and Manning, 2019; Marvin and Linzen, 2018), semantic and syntactic sentence structures (Tenney et al., 2019), pronominal anaphora (Sorodoc et al., 2020), commonsense (Petroni et al., 2019), gender bias (Kurita et al., 2019), and typicality in judgement (Misra et al., 2021). Except for Hutchinson et al. (2020), who examine what words BERT generates in fill-in-the-blank experiments with regard to people with disabilities, and more recently Nozza et al. (2021), who assess hurtful auto-completion by multilingual PTLMs, we are not aware of other strategies designed to estimate toxic content in PTLMs with regard to several social groups. In this work, we are interested in assessing how PTLMs encode bias towards different communities. Bias in social data is a broad concept which involves several issues and formalisms (Kiritchenko and Mohammad, 2018; Olteanu et al., 2019; Papakyriakopoulos et al., 2020; Blodgett et al., 2020). For instance, Shah et al. (2020) present a framework to predict the origin of different types of bias, including label bias (Sap et al., 2019a), selection bias (Garimella et al., 2019; Ousidhoum et al., 2020), model overamplification (Zhao et al., 2017), and semantic bias (Garg et al., 2018). Other work investigates the effect of data splits (Gorman and Bedrick, 2019) and mitigation strategies (Dixon et al., 2018; Sun et al., 2019). Bias in toxic language classification has been addressed through mitigation methods which focus on false positives caused by identity words and lack of context (Park et al., 2018; Davidson et al., 2019; Sap et al., 2019a).
We take this issue into account in our experiments by looking at different parts of the generated statements. Relatedly, there has been an increasing amount of work on explainability for toxic language classifiers (Aluru et al., 2020; Mathew et al., 2021). For instance, Aluru et al. (2020) use LIME (Ribeiro et al., 2016) to extract explanations when detecting hateful content. Akin to Ribeiro et al. (2016), more recent work on explainability by Ribeiro et al. (2020) provides a methodology for testing NLP models based on a matrix of general linguistic capabilities named CheckList. Similarly, we present a set of steps in order to probe for toxicity in large PTLMs.

Conclusion

In this paper, we present a methodology to probe toxic content in pre-trained language models using commonsense patterns. Our large-scale study presents evidence that PTLMs tend to generate harmful biases towards minorities due to their spread within the pre-trained models. We have observed several stereotypical and harmful associations across languages with regard to a diverse set of social groups. We believe that the patterns we generated, along with the predicted content, can be adopted to build lexicons of the toxic language that has been noticed within PTLMs, and that the observed associations can be used to mitigate implicit biases in order to build more robust systems. Furthermore, our methodology and predictions can help us define toxicity anchors that can be utilized to improve toxic language classification. The generated words can also be used to study socio-linguistic variations across languages by comparing stereotypical content with respect to professions, genders, religious groups, marginalized communities, and various demographics. In the future, we plan to revise our data by adding actions, more fluent and complex patterns, and longer generated statements which involve human interactions between people within the same social group and people who belong to different ones.
Ethical Considerations

Our research addresses the limitations of large pre-trained language models which, despite their undeniable usefulness, are commonly used without further investigation of their impact on different communities around the world. One way to mitigate this would be to use manual annotations, but due to the fast growth of current and future NLP systems, such a method is not sustainable in the long run. Therefore, as shown in our paper, classifiers can be used to point us to potentially problematic statements. We acknowledge the lack of naturalness and fluency in some of our generated sentences, as well as the reliance of our approach on the biased content which exists in toxic language classifiers. Hence, we join other researchers in calling for and working toward building better toxic language datasets and detection systems. Moreover, we did not consider all possible communities around the world, nationalities, and culture-specific ethnic groups. Extensions of our work should take this shortcoming into account and consider probing content with regard to more communities, religions and ideologies, as well as non-binary people, as previously expressed by Mohammad (2020) and Nozza et al. (2021). Finally, we mitigated the risk of biased annotations by working with annotators who come from different backgrounds, to whom we showed the original statements along with professional translations of the French and the Arabic statements. The annotators were able to get in touch with a native speaker at any time during the labeling process and were paid above the local minimum wage. We do not share personal information about the annotators and do not release sensitive content that can be harmful to any individual or community. All our experiments can be replicated.

Examples (original language, with English translations):
ENGLISH: An immigrant woman plays ice hockey because she is pregnant.
FRENCH: A Hispanic person explains the situation because he is Jewish.
ARABIC: A Buddhist seeks an ambulance because he is Muslim.
FRENCH: An Asian woman takes the picture because she is ugly.
ARABIC: A European decides to rob a bank because he is French.
ARABIC: An African man starts to speak because he is a n*g*o.
2021-07-28T13:32:04.246Z
2021-01-01T00:00:00.000
{ "year": 2021, "sha1": "080df61ee1c15ff3c8e5d0d82d60bfd80e372e38", "oa_license": "CCBY", "oa_url": "https://aclanthology.org/2021.acl-long.329.pdf", "oa_status": "HYBRID", "pdf_src": "ACL", "pdf_hash": "4c48f2695e02e3b431c0daddce4c3ab39c554f1e", "s2fieldsofstudy": [], "extfieldsofstudy": [ "Computer Science" ] }
21191348
pes2o/s2orc
v3-fos-license
Elevational Gradients in β-Diversity Reflect Variation in the Strength of Local Community Assembly Mechanisms across Spatial Scales

Despite long-standing interest in elevational-diversity gradients, little is known about the processes that cause changes in the compositional variation of communities (β-diversity) across elevations. Recent studies have suggested that β-diversity gradients are driven by variation in species pools, rather than by variation in the strength of local community assembly mechanisms such as dispersal limitation, environmental filtering, or local biotic interactions. However, tests of this hypothesis have been limited to very small spatial scales that limit inferences about how the relative importance of assembly mechanisms may change across spatial scales. Here, we test the hypothesis that scale-dependent community assembly mechanisms shape biogeographic β-diversity gradients using one of the most well-characterized elevational gradients of tropical plant diversity. Using an extensive dataset on woody plant distributions along a 4,000-m elevational gradient in the Bolivian Andes, we compared observed patterns of β-diversity to null-model expectations. β-deviations (standardized differences from null values) were used to measure the relative effects of local community assembly mechanisms after removing sampling effects caused by variation in species pools. To test for scale-dependency, we compared elevational gradients at two contrasting spatial scales that differed in the size of local assemblages and regions by at least an order of magnitude. Elevational gradients in β-diversity persisted after accounting for regional variation in species pools. Moreover, the elevational gradient in β-deviations changed with spatial scale. At small scales, local assembly mechanisms were detectable, but variation in species pools accounted for most of the elevational gradient in β-diversity.
At large spatial scales, in contrast, local assembly mechanisms were a dominant force driving changes in β-diversity. In contrast to the hypothesis that variation in species pools alone drives β-diversity gradients, we show that local community assembly mechanisms contribute strongly to systematic changes in β-diversity across elevations. We conclude that scale-dependent variation in community assembly mechanisms underlies these iconic gradients in global biodiversity.

Introduction

Changes in biological diversity along elevational gradients represent one of the most striking and consistent patterns of life on Earth [1][2][3]. Elevational-diversity gradients have puzzled biologists for centuries, but mechanisms responsible for them remain a source of contention, and a major focus of macroecological research [4,5]. Understanding the causes of elevational-diversity gradients not only represents one of the most classic and fundamental problems in ecology and evolution [1], but also has critical implications for the conservation and management of biodiversity in the face of anthropogenically driven global change [3,6]. Despite widespread interest in the causes of elevational-diversity gradients, empirical studies to date have focused almost exclusively on patterns of species richness [2,7]. In contrast, surprisingly little is known about the patterns and causes of spatial variation in community composition (β-diversity) across elevations. β-diversity is a critical component of biodiversity that reflects variation in species composition among local assemblages, as well as the relationship between local (α-) and regional (γ-) diversity [8][9][10][11]. Consequently, patterns of β-diversity can be used to study mechanisms of community assembly along environmental or geographic gradients [10]. At global scales, β-diversity has been shown to vary across latitudes, decreasing from tropical to temperate regions [12,13,11].
In contrast, we lack rigorous evaluations of elevational gradients in β-diversity. β-diversity has been reported to decrease towards high elevations [11]. However, the few reports of how β-diversity changes with elevation are typically limited by low replication [11,14] and/or short elevational extents [15], frequently lack the within-elevation replication necessary for measuring β-diversity at a particular point along the gradient [16,17], or are conducted only at very small spatial scales [11,14]. As a result, despite decades of research on elevational-diversity gradients and the important insights that can be gained from studying β-diversity, both the patterns and causes of elevational gradients in β-diversity remain largely unknown. Multiple processes at various scales can cause variation in β-diversity. On the one hand, β-diversity is hypothesized to reflect community assembly mechanisms that selectively limit the membership and abundance of species in communities [14,18]. For example, changes in β-diversity can result from variation in the strength of dispersal limitation [19], species-sorting due to environmental heterogeneity [20], or priority effects [21]. On the other hand, changes in β-diversity are hypothesized to reflect variation in the characteristics of regional species pools [10,[22][23][24][25]. For example, simulations have demonstrated that when the size of the species pool varies strongly among regions, random sampling alone can lead to differences in β-diversity: large species pools produce dissimilar local assemblages and high β-diversity, whereas small species pools produce similar local assemblages and low β-diversity [11].
Indeed, two recent studies of woody plant β-diversity along a latitudinal [11,26] and an elevational [11] gradient found that gradients in β-diversity disappeared after controlling for variation in species pools, a pattern which could suggest an overriding influence of broad-scale evolutionary and ecological processes responsible for the formation of regional species pools [11,24,26]. In contrast, other studies have found that gradients in β-diversity persist after controlling for variation in species pools [14,18], suggesting an important role for geographic variation in local community assembly mechanisms [27,28]. These conflicting patterns highlight the need for an expanded framework that explicitly considers the factors that would cause the relative importance of species pools and assembly processes to vary across biogeographic gradients [29]. One key factor that may influence variation in β-diversity and its underlying mechanisms is spatial scale [2,[30][31][32][33]. Spatial scale can strongly influence both patterns [34,35] and mechanisms [35,36] of β-diversity. For example, increasing the size of regions and/or the geographic distances among local assemblages can increase the relative importance of local processes by increasing environmental heterogeneity that would lead to stronger species sorting, or by increasing isolation and dispersal limitation [37]. In contrast, local deterministic processes might be weak when local assemblages are small [38], so sampling effects and variation in species pools might become the overriding force behind β-diversity gradients at small scales. To date, however, elevational studies of β-diversity have not explicitly examined the influence of spatial scale as a driver of biogeographic gradients in β-diversity and their underlying processes [11,14,18,26]. 
To the extent that mechanisms of community assembly vary with spatial scale [36,39,40], this could help reconcile contrasting patterns of β-diversity observed across elevational-diversity gradients. In this study, we use a null-model approach to disentangle the scale-dependent contributions of local community assembly mechanisms and variation in regional species pools to elevational gradients in β-diversity. We present an analysis of a comprehensive study of tropical plant diversity along an elevational gradient in the Bolivian Andes. In contrast to previous null-model analyses based on a relatively small number of samples (7-8 plots), species (~60-600), and short elevational extents (~1200-2200 m) [11,14], we compared patterns of β-diversity along a 4,000-m elevational gradient that included 440 plots and 2,668 woody plant species. Importantly, our data set allowed us to test for scale-dependency by comparing patterns at two contrasting spatial scales that differed by at least an order of magnitude in the size of local assemblages and regions, as well as in the average distance among local assemblages. We compared observed elevational gradients in β-diversity to gradients expected by two null models of random assembly from regional species pools. If biogeographic variation in local community assembly mechanisms is not an important determinant of β-diversity gradients, then the elevational gradient in β-diversity should disappear after accounting for sampling effects and variation in species pools [11]. In contrast, if elevational changes in local assembly mechanisms are important, then elevational gradients in β-diversity should persist after removing the effects of variation in species pools. In contrast to the hypothesis that variation in species pools is the sole driver of gradients in β-diversity [11,26], we show that biogeographic differences in local assembly mechanisms contribute to a mid-elevational peak in β-diversity.
Moreover, we find that this pattern is strongly scale-dependent and becomes stronger at larger spatial scales. Our results suggest that scale-dependent variation in community assembly mechanisms underlies these iconic gradients in global biodiversity.

Materials and Methods

The Madidi Project: A floristic inventory of northwestern Bolivia

Data used in these analyses were collected as part of the Madidi Project (www.mobot.org/madidi), a 12-year collaboration to study the flora in and around Madidi National Park, Bolivia (Fig. 1) [41]. The Madidi region encompasses a wide range of environmental conditions and vegetation types [42], extending from lowland plains at ~200 m to mountain peaks above 6,000 m. Species composition and abundance of woody plants were obtained from 440 0.1-ha (20×50-m) plots (Fig. 1). Plots were generally located in closed-canopy mature forest at least 100 m from one another (average nearest-neighbor distance = ~540 m). Plots range in elevation from 254 to 4,351 m, covering the entire elevational distribution of forests in the eastern slopes of the Bolivian Andes. Each 0.1-ha plot was divided into ten 10×10-m subplots. Within each subplot, all woody plants with a diameter at breast height (130 cm) of at least 2.5 cm were measured and identified to a species or morphospecies name. Specimens were collected to voucher each species/morphospecies at each site; these specimens are deposited at the Missouri Botanical Garden and the Herbario Nacional de Bolivia. Fieldwork was conducted with permits granted by the Ministerio de Medio Ambiente y Agua of Bolivia. Extensive taxonomic work was conducted to standardize taxonomic names across all plots. Unidentified individuals (<3.2%) were excluded from analyses. In total, our dataset contains information on the distribution of 159,040 individuals and 2,668 species/morphospecies. Plot data are deposited and can be accessed via Tropicos.
Summary information for each small- and large-scale region can be found in the Supporting Information (S1 and S2 Datasets).

Partitioning diversity into regional (γ-), local (α-) and β-components

For regions along the elevational gradient, we measured β-diversity by partitioning diversity (D) among its regional (γ-), local (α-) and β-components. Following Jost [43], the β-diversity component was defined as:

qDβ = qDγ / qDα,

where qDγ is the regional diversity and qDα the diversity of local assemblages. The mathematical definitions of qDγ and qDα can be found in Jost [43]. In this framework, q is a non-negative number that defines the "order" of the diversity components, and controls the sensitivity of the index to rare species. We partitioned diversity using components of order one (q = 1), which weigh species proportionally to their abundances, making qDγ and qDα equal to the exponential of Shannon diversity. Diversity partitioning was conducted in R using the package "vegetarian" [44]. To investigate whether our results are sensitive to changes in metric, we repeated our analyses using three additional measures of β-diversity: (1) mean of Bray-Curtis distances among all pairs of local assemblages, (2) qDβ when q = 0, which weighs all species equally irrespective of abundance, and (3) proportional species turnover (β = 1 − α_richness / γ_richness) [8,11]. Results based on these alternative metrics lead to similar conclusions (S1 Results). All β-diversity metrics used in our analyses represent "variation" sensu Anderson et al. [9], which is defined as the non-directional change in community composition across sampling units.

Spatial scales of analysis

To test for scale-dependence in patterns of β-diversity, we defined local assemblages and regions using two contrasting spatial scales (hereafter referred to as "small" and "large"). At both scales, the elevational span of analysis was very similar: the ~4,000-m elevational gradient across the Madidi region.
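The q = 1 partition described above (β = γ/α, with each component equal to the exponential of Shannon entropy) was computed by the authors in R with the package "vegetarian"; purely as an illustrative sketch (function names are ours, not the paper's), the same calculation can be written in a few lines of Python, assuming equal weights for all local assemblages:

```python
import math

def shannon_diversity(counts):
    """Hill number of order q = 1: the exponential of Shannon entropy."""
    total = sum(counts)
    entropy = -sum((c / total) * math.log(c / total) for c in counts if c > 0)
    return math.exp(entropy)

def partition_diversity(assemblages):
    """Partition diversity into gamma, alpha, and beta (q = 1, equal weights).

    `assemblages` is a list of dicts mapping species -> abundance.
    Returns (gamma, alpha, beta) with beta = gamma / alpha (Jost 2007).
    """
    # Gamma: diversity of the pooled regional abundance distribution.
    pooled = {}
    for asm in assemblages:
        for sp, n in asm.items():
            pooled[sp] = pooled.get(sp, 0) + n
    gamma = shannon_diversity(pooled.values())
    # Alpha (q = 1): exponential of the mean local Shannon entropy.
    mean_entropy = sum(math.log(shannon_diversity(a.values()))
                       for a in assemblages) / len(assemblages)
    alpha = math.exp(mean_entropy)
    return gamma, alpha, gamma / alpha
```

For two identical assemblages this yields β = 1; for two equally sized, completely distinct assemblages it yields β = 2, the maximum for two equally weighted assemblages.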
However, the contrasting scales differed by an order of magnitude or more in the size and distances between local assemblages (i.e. grain size and lag, respectively), as well as in the size of regions (i.e. spatial extent). At the small scale, we defined a local assemblage as a 10×10-m subplot, and a region as a 0.1-ha plot (10 assemblages per region; N = 440 regions). At this scale, β-diversity represents variation in species composition within a small plot [11,14,26]. At the large scale, we defined a local assemblage as a 0.1-ha plot, and a region as a group of 10 plots located at a similar elevation (10 assemblages per region; N = 18 regions) [45]. We produced 18 large-scale regions by dividing the elevational gradient into equal-sized elevational bands, and selecting 10 plots falling within each band. Plots were selected to ensure that large-scale regions were comparable along the elevational gradient (S1 Methods). The typical distance among local assemblages in large-scale regions was ~19 km, and the typical range in elevation was ~165 m (S1 Methods). At this scale, β-diversity represents variation in species composition among plots within a narrow elevational band. We used these contrasting spatial scales to compare elevational patterns in β-diversity and their underlying mechanisms between (1) the very small scales used in recent studies [11,14,26] and (2) the larger scales that ecologists would typically use to define regions along broad-scale environmental gradients. We did not examine β-diversity at larger elevational extents (>165 m) because an increase in the spatial extent of the elevational bands would confound variation in community composition within elevations with species turnover among elevations [9].
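The large-scale regions above were built by dividing the gradient into equal-sized elevational bands and taking 10 plots per band (with additional comparability constraints detailed in S1 Methods). A simplified sketch of that grouping step in Python; the selection rule here (first plots by elevation within each band) is a naive placeholder of our own, not the authors' procedure:

```python
def make_large_scale_regions(plot_elevations, n_bands=18, plots_per_band=10):
    """Group plots into equal-width elevational bands and pick up to
    `plots_per_band` plots per band (naive placeholder selection).

    `plot_elevations` maps plot id -> elevation in metres.
    Returns a list of `n_bands` lists of plot ids.
    """
    lo = min(plot_elevations.values())
    hi = max(plot_elevations.values())
    width = (hi - lo) / n_bands
    bands = [[] for _ in range(n_bands)]
    # Assign each plot to the band containing its elevation.
    for plot, elev in sorted(plot_elevations.items(), key=lambda kv: kv[1]):
        idx = min(int((elev - lo) / width), n_bands - 1)  # clamp the top edge
        bands[idx].append(plot)
    return [b[:plots_per_band] for b in bands]
```

In the actual study, plot selection within each band was further constrained so that regions were comparable in area and elevational range along the gradient.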
Because sampling effort across elevational bands was standardized in terms of area (i.e., ten 0.1-ha plots), and because we used forest plots to produce estimates of species pools across regions (elevational bands), our measures of diversity in regional species pools represent relative diversity densities, rather than total diversity. This can bias our estimates of species-pool diversity in two ways. First, if there are gradients in the density of individuals per plot, elevational bands with more individuals might appear to have higher diversity than elevational bands with fewer individuals [46]. Second, because the total number of unobserved species within an elevational band is likely to vary along the elevational gradient, our standardized sampling might accurately estimate the species pool in low-diversity elevational bands, but underestimate the size of the species pool in high-diversity elevational bands [47]. Both effects could modify the patterns in γ-diversity that we report here. To evaluate the extent to which these biases may influence our results, we used (1) rarefaction to standardize sampling by numbers of individuals, and (2) various metrics of extrapolation to estimate the total number of species that would be expected had sampling of species pools been complete (S2 Methods). We found that although there is a gradient in the density of individuals, and our sampling underestimates the total number of species present at a particular elevation, the overall patterns of γ-diversity would remain the same had other approaches to estimating regional species pools been used (S2 Methods). Furthermore, the proportion of the total species pool that was sampled at each elevation varies little across most of the elevational gradient.
This suggests that although we are underestimating γ-diversity, additional field surveys designed to sample entire species pools (an impractical endeavor in most hyperdiverse tropical regions) would likely lead to the same general conclusions we reach from our standardized estimates.

Random-assembly null models and β-deviations

To disentangle the contribution of local community assembly mechanisms from sampling effects owing to variation in species pools, we compared observed β-diversity to values expected under two null models. Both null models account for regional sampling effects due to the size and structure of species pools, but eliminate local processes that determine the abundances and distributions of species across local assemblages. Thus, deviations from the null models can be used to quantify the relative effects of local community assembly mechanisms [10,11]. Null models, however, are only approximate tools, and results must be interpreted as "a 'toe-in-the-door' regarding mechanisms" [9]. Further studies, particularly replicated experiments, monitoring studies along biogeographic gradients [24], and studies that integrate information from other dimensions of community structure (e.g. phylogenetic and functional [48]), will be needed to confirm the conclusions supported by our analyses. The effects of local community assembly mechanisms on β-diversity can be mediated by (1) non-random patterns in the distribution of species across communities (e.g. spatial aggregation or "clumping"), or (2) variation in the distribution of individuals across species (i.e. structure in the regional species abundance distribution [SAD]) [18,22,23]. To examine these mechanisms, we compared observed β-diversity to two different null models that eliminate either one or both of these types of local effects. Our two null models differ in the way randomization algorithms model the regional SAD when creating null local assemblages:

1. Fixed regional SAD null model.
This null model eliminates effects of local assembly processes that constrain the membership of individuals in local communities, and that create patterns of intraspecific aggregation and interspecific co-occurrence [11,23,26]. In this null model, the species pool is defined as the observed number and abundances of species in a region [11]. In this way, the regional SAD is constrained to be the same in null and empirical datasets. Local assemblages are then created by randomly sampling individuals without replacement from the regional species pool. Deviations from this null model represent the influence of local processes that cause non-random distributions of species across communities.

2. Random regional SAD null model.

This second null model eliminates effects of local assembly processes that not only constrain the membership of species in local communities, but also processes that structure regional species abundances [18,23]. In this null model, the species pool is defined only as the observed number of species in a region. Here, the regional SAD is randomized by re-assigning individuals to each species in the region with equal probability. Local assemblages are then produced by randomly sampling individuals without replacement from the regional species pool using the randomized SAD. Deviations from this null model represent the influence of local processes causing non-random patterns in the distribution of (1) species across communities and (2) individuals across species.

Previous applications of these types of null models have constrained randomizations so that empirical and null local assemblages have the same total number of individuals [14,26,49]. Arguably, however, the number of individuals in a local assemblage (i.e. community size) is also controlled by local processes, which these null models supposedly eliminate [11]. Here, we focus on an alternative approach that eliminates this constraint from the randomization algorithms.
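The authors provide R implementations of these randomizations in S1 Code. Purely as an illustration, the two null models can be sketched in Python as follows; assigning each individual to a local assemblage uniformly at random is our interpretation of the unconstrained variant (local community sizes are not fixed), and the function names are ours:

```python
import random

def fixed_sad_null(assemblages, rng):
    """Fixed-SAD null model: preserve the regional abundance of every
    species, but place each individual in a random local assemblage."""
    k = len(assemblages)
    pool = [sp for asm in assemblages for sp, c in asm.items() for _ in range(c)]
    null = [dict() for _ in range(k)]
    for sp in pool:
        j = rng.randrange(k)  # community sizes are left unconstrained
        null[j][sp] = null[j].get(sp, 0) + 1
    return null

def random_sad_null(assemblages, rng):
    """Random-SAD null model: preserve only regional richness and the total
    number of individuals; re-draw each individual's species with equal
    probability before placing it in a random local assemblage."""
    species = sorted({sp for asm in assemblages for sp in asm})
    n_total = sum(c for asm in assemblages for c in asm.values())
    k = len(assemblages)
    null = [dict() for _ in range(k)]
    for _ in range(n_total):
        sp = rng.choice(species)  # randomized regional SAD
        j = rng.randrange(k)
        null[j][sp] = null[j].get(sp, 0) + 1
    return null
```

Under the fixed-SAD model the regional abundance of every species is conserved exactly; under the random-SAD model only regional richness and the total number of individuals are conserved.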
Analyses based on null models that constrain numbers of individuals in local assemblages lead to similar conclusions (S2 Results). After null assemblages were produced by a particular null model, we partitioned diversity in the same way as we did for the empirical data. This produced a null value of β-diversity expected from (1) random sampling from the observed species pool and (2) the absence of local community assembly mechanisms. We implemented 1,999 iterations of each null model, producing a frequency distribution of null β-diversity values for each region. Based on this frequency distribution, we calculated a β-deviation (sensu [11]):

β-deviation_i = (β_obs,i − β̄_null,i) / σ_null,i,

where β_obs,i is the observed β-diversity of region i, and β̄_null,i and σ_null,i are the average and standard deviation of the frequency distribution of null values for region i. A β-deviation is a standardized measure of the difference between observed and null β-diversity, and can be interpreted as the relative effect of local assembly mechanisms on β-diversity (e.g. dispersal limitation, habitat filtering) after removing effects of sampling from observed species pools [10,29]. We produced β-deviations along the elevational gradient by repeating these calculations for all regions. R functions to produce null local assemblages and calculate β-deviations are provided in the Supporting Information (S1 Code).

Statistical analyses

To test for elevational gradients in diversity, we regressed observed γ-, α- and β-diversity against elevation using ordinary least-squares models (OLS) [9]. Due to non-linearity in these relationships, we compared fits of linear, quadratic and cubic regressions and selected the regression model with the lowest corrected Akaike information criterion (AICc) [50]. Identical analyses were also conducted to characterize elevational gradients in mean null β-diversity and β-deviations. If variation in local assembly mechanisms influences elevational gradients in β-diversity, we would expect a significant relationship between β-deviations and elevation.
On the other hand, if elevational gradients in β-diversity were solely the result of sampling effects owing to variation in species pools, then β-deviations should not be significantly related to elevation [11]. To test for scale dependency in the contribution of local community assembly mechanisms to elevational patterns of β-diversity, we compared the strength and shape of elevational gradients in β-deviations between the small and large spatial scales. The strength of the gradients was measured using adjusted R² values (adj. R²), whereas the shape was measured using standardized regression coefficients. To compare adj. R² values and regression coefficients between gradients, we created 99% confidence regions around their original estimates using cubic OLS regressions and non-parametric bootstrapping [51,52]. If confidence regions for different gradients did not overlap each other's estimates, we concluded that gradients were significantly different in strength or shape. We used cubic OLS models so that regression coefficients would be comparable among elevational gradients. For these scale analyses, we used orthogonal polynomials to make coefficients independent from one another; we also centered and standardized all dependent and predictor variables to eliminate effects of magnitude [53]. Significant differences between scales would suggest that elevational patterns of local assembly mechanisms are scale dependent. Finally, we tested for scale dependency in the strength of local mechanisms structuring assemblages irrespective of elevational patterns. First, we compared average log-transformed β-deviations against zero using four separate one-sample t-tests, one for each combination of spatial scale (small versus large) and null model (fixed SAD versus randomized SAD).
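The β-deviation used throughout these analyses is a standardized effect size computed from the 1,999 null values per region; a minimal sketch in Python (the paper's own functions are in R, in S1 Code):

```python
import statistics

def beta_deviation(beta_obs, beta_null_values):
    """Standardized effect size of β-diversity for one region:
    (observed β − mean of null βs) / standard deviation of null βs."""
    mean_null = statistics.mean(beta_null_values)
    sd_null = statistics.stdev(beta_null_values)  # sample SD of the null distribution
    return (beta_obs - mean_null) / sd_null
```

A positive β-deviation indicates more compositional variation among assemblages than expected from random sampling of the species pool; a value near zero indicates no detectable local structuring.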
A significant difference from zero would suggest that assemblages are not the result of random uncorrelated sampling from species pools [23], and that local processes are important in creating structure among assemblages within regions. Second, we compared the magnitude of log-transformed β-deviations between scales using a linear mixed-effects model where scale and null model were fixed effects, and region was a random effect. To maintain independence between levels of the factor "scale", we conducted this analysis using only the 262 small-scale regions that were not part of any large-scale region. Differences between scales would suggest that, independently of changes with elevation, the importance of local mechanisms structuring assemblages varies with spatial scale.

Results

Diversity varied strongly with elevation and spatial scale. At both small and large scales, γ- and α-diversity showed strong monotonic decreases with elevation (Table 1; Fig. 2). Observed β-diversity also varied strongly along the elevational gradient, and the shape and strength of the pattern differed between spatial scales. At the small scale, observed β-diversity had a moderate monotonically decreasing relationship with elevation. In contrast, at the large scale, β-diversity had a strong hump-shaped relationship with elevation, with a peak towards intermediate elevations (1,750-2,000 m), and a more pronounced decrease towards the highlands than toward the lowlands (Table 1; Fig. 2). Elevational gradients in β-diversity persisted after accounting for sampling effects and regional variation in species pools (Table 1; Fig. 2). At both scales and for both null models, the mean null-expected β-diversity decreased monotonically with elevation. Even after accounting for these null-expected gradients, however, β-deviations retained significant relationships with elevation at both scales (Table 1; Fig. 2). Elevational gradients in β-deviations varied strongly between spatial scales.
The strength of the gradient, measured using the proportion of variation in β-deviations explained by elevation (adj. R²), was between 5 and 10 times higher at the large scale relative to the small scale (Table 1; Figs. 2 and 3). At small scales, the variation in β-deviations explained by elevation ranged from 7-14% and was much lower than the explained variation for observed β-diversity (~54%). At large scales, in contrast, the explained variation for β-deviations ranged from 74-80% and was similar to the explained variation for observed β-diversity (~73%). In addition, the shape of the gradient was also scale dependent (Table 1; Figs. 2 and 3). At the small scale, β-deviations generally increased with elevation (Fig. 2C), a pattern opposite to the negative relationship for observed β-diversity (Fig. 2B). At the large scale, in contrast, both β-deviations and observed β-diversity showed a mid-elevation peak (Fig. 2E,F), with β-deviations peaking at higher elevations and decaying rapidly above ~3,700 m. Finally, the magnitude of β-deviations was scale dependent and higher than expected by random sampling. β-deviations were 17 to 19 times higher at the large relative to the small spatial scale (Fig. 4). β-deviations were also typically higher than expected by the null models (Fig. 4). The only exception was at the small scale using the random SAD null model, where β-diversity was slightly lower than null-model expectations.

Discussion

Our results demonstrate that elevational gradients in β-diversity reflect variation in the strength of local community assembly mechanisms across spatial scales. Specifically, we found that the influence of local assembly mechanisms becomes stronger and co-varies more tightly with elevation at larger scales. These findings contradict the recent hypothesis that regional variation in species pools alone can account for gradients in β-diversity along broad ecological and biogeographic gradients [11].
Instead, our results suggest that the relative importance of local and regional controls on β-diversity is strongly scale dependent. Together, these results provide some of the strongest insights to date on the relative importance of community assembly mechanisms and regional species pools in shaping species-rich tropical tree communities along elevational gradients.

Table 1. Regional (γ-), local (α-) and β-diversities were calculated for two spatial scales: small (among 0.01-ha subplots within a 0.1-ha plot) and large (among 0.1-ha plots within an elevational band). Diversity was partitioned following Jost [43] and by weighting each species proportionally by its abundance (i.e. diversity of order 1). Results are also presented for mean null β-diversity and β-deviations (i.e. standardized differences between observed and null β-diversity). Null β-diversity and β-deviations were calculated using two null models, one that randomizes the regional species abundance distribution (r-SAD) and one that fixes it to be identical to the one observed in the empirical data (f-SAD; see Methods). These null models do not maintain the observed number of individuals in each local assemblage (see also Fig. 2). Similar results were obtained using a variety of different β-diversity metrics and null models that constrained the observed number of individuals (S1 and S2 Results). doi:10.1371/journal.pone.0121458.t001

Elevational gradients in β-diversity reflect variation in the strength of community assembly mechanisms across spatial scales

We found that the strength of local assembly mechanisms changes systematically along tropical elevational gradients. At small scales, the gradient in observed β-diversity became a weak gradient in β-deviations, suggesting that the gradient in β-diversity at this scale is primarily driven by variation in species pools.
Even so, the gradient in β-deviations remained significant, indicating that variation in local assembly mechanisms also contributes to elevational patterns of β-diversity at very small spatial scales. Differences in statistical power can help explain variable results between our analyses and other studies of β-deviations along elevational gradients at small scales. For example, whereas Kraft et al. [11] analyzed tropical tree communities using 8 regions along a ~2,500-m elevational gradient in Costa Rica, our comparable small-scale analyses are based on 440 regions along a ~4,000-m gradient. Indeed, our chances of finding a significant gradient in β-deviations at small scales using only 8 regions would have been only between 11 and 14% (power analysis results not shown). In addition, Mori et al. [14] found a significant elevational gradient in β-deviations at small scales across low-diversity temperate forests in Japan (~60 species), a result that parallels our findings in high-diversity tropical forests (~2,600 species). At large scales, in contrast, we found a strong gradient in β-deviations similar to the gradient in observed β-diversity. This suggests that the relative contribution of local community assembly processes to elevational gradients in β-diversity is strongly scale dependent. At small scales, variation in local assembly mechanisms might be significant but weak relative to sampling effects owing to variation in species pools. At large scales, on the other hand, local assembly mechanisms vary strongly across elevations, and contribute substantially to elevational patterns of community assembly and β-diversity. Importantly, our results suggest that inferences about assembly mechanisms shaping β-diversity patterns at small scales [11,26] cannot be extrapolated to larger spatial scales. Instead, increases in scale can lead to a reduction in the perceived strength of sampling effects and an increase in the importance of local community assembly processes in shaping elevational gradients in β-diversity.

Fig 2. Elevational gradients in diversity at two contrasting spatial scales: small (among 0.01-ha subplots within a 0.1-ha plot; top row) and large (among 0.1-ha plots within an elevational band; bottom row). A) and D) Regional (γ-) and local (α-) diversity. B) and E) Observed β-diversity and mean null β-diversity. C) and F) β-deviations (standardized effect sizes of β-diversity). Null β-diversity and β-deviations were calculated based on two null models, one that randomizes the regional species abundance distribution (r-SAD) and one that fixes it to be identical to the one observed in the empirical data (f-SAD; see Methods). Diversity was partitioned following Jost [43] and by weighting each species proportionally by its abundance (i.e. diversity of order 1). All relationships were statistically significant (Table 1). doi:10.1371/journal.pone.0121458.g002

Local assembly mechanisms structuring species assemblages are detectable at very small spatial scales, but become stronger at large scales

Our results suggest that the overall strength (magnitude) of local assembly processes varies strongly with spatial scale. After controlling for sampling effects and variation in species pools, we found that β-deviations were 17-19 times larger at large scales compared to small scales. Even so, we found significant deviations from null models even when local assemblages were characterized at very small grain sizes (10×10 m) and separated by at most ~90 m (i.e. small-scale analyses), a pattern also observed in other recent analyses conducted at similarly small spatial scales [11,14,26].
These small-scale deviations could be explained by multiple ecological processes including dispersal limitation [54], small-scale variation in edaphic and topographic characteristics [55,56], and biotic interactions like competition and natural enemy attack at the neighborhood scale [48,57,58]. Many of these processes can also vary with scale, potentially explaining the scale dependency in the magnitude of β-deviations observed in our study. For example, increases in the extent of regions and distances among assemblages can increase environmental heterogeneity and isolation of communities, leading to stronger species sorting or dispersal limitation [59]. Importantly, our results demonstrate that the spatial structure of local assemblages does not result simply from uncorrelated sampling of individuals from species pools [11,23,60], but reflects scale-dependent variation in the strength of community assembly mechanisms.

Local community assembly mechanisms are weakest in lowland tropical forests and at very high elevations

We found that the strength of local community assembly mechanisms generally increased with elevation, but then decreased dramatically for regions above ~3,700 m. This pattern is very conspicuous at large scales, and subtle at small scales. The observed decrease in the strength of local assembly processes at high elevations coincides with a dramatic shift in the composition of Andean floras. After a gradual replacement of species along the elevational gradient up to approximately 3,700 m, there is a strong shift in species composition such that forests above and below that elevation do not share any species (Fig. 5). This suggests that unique environmental conditions (e.g. temperature) might restrict the membership of species to very high-elevation forests, and potentially also change the dynamics of local community assembly. In contrast, a previous study of lower-diversity temperate forests across a shorter elevational extent (<1,500 m) found a monotonic increase in β-deviations with increasing elevation [14]. A similar pattern was observed across the high-diversity forests in our study, where β-deviations generally increased with elevation below 1,500 m (Fig. 2). Across the entire elevational gradient, however, the signature of local assembly mechanisms structuring forest assemblages appears to be of similar strength in tropical lowlands and at very high elevations. A variety of local mechanisms could explain the mid- to high-elevation peak in β-deviations [17]. For example, the strength of species sorting or dispersal limitation may peak at these elevations, creating high dissimilarity among local assemblages. However, we know of no empirical evaluation of changes in environmental heterogeneity or the dispersal ability of species with elevation that could help explain our results.

Fig 4. Variation in the overall magnitude of β-deviations between small and large spatial scales. β-deviations were calculated using the random SAD (r-SAD) and fixed SAD (f-SAD) null models (see Methods). The horizontal grey line marks the value of no difference from null-model expectations (i.e. a β-deviation of zero). β-deviations above the line indicate higher β-diversity than expected by random sampling of individuals from observed species pools. Note that β-deviations are higher at large scales than at small scales (linear mixed-effects model: t276 < 38.97; p < 0.001). In addition, mean β-deviations are statistically different from zero for all combinations of spatial scale and null model (one-sample t-tests: |t| > 4.77; p < 0.001). doi:10.1371/journal.pone.0121458.g004
Moreover, mechanisms underlying geographic gradients in β-diversity do not have to vary consistently with the pattern [29], such that similarly low β-deviations at high and low elevations could reflect different mechanisms of community assembly, and these mechanisms can be different from those operating at intermediate elevations where the peak occurs. For example, in a recent comparison of tropical (Bolivia) and temperate (Missouri) regions, Myers et al. [29] found similar β-deviations in the two regions. However, β-deviations were more strongly correlated with environmental variables in the temperate region, and more strongly correlated with spatial variables in the tropical region. This suggests that the same magnitude of β-deviations may be explained by different mechanisms across biogeographic regions with different species pools. The extent to which elevational gradients in β-deviations reflect shifts in the relative importance of different assembly mechanisms remains an important question for future research in temperate and tropical ecosystems.

Conclusions

Despite long-standing interest in the ecology, evolution and conservation of elevational-diversity gradients [1][2][3], surprisingly little is known about elevational patterns and mechanistic drivers of β-diversity, particularly in species-rich tropical regions. Using one of the most well-described elevational gradients of tropical plant diversity, we show that the assembly of communities along broad biogeographic gradients reflects the interplay of local community assembly mechanisms and regional influences owing to variation in species pools. In contrast to the recent hypothesis that variation in species pools alone drives biogeographic gradients in β-diversity [11], we show that variation in local assembly mechanisms contributes strongly to systematic changes in β-diversity across elevations, resulting in a mid-elevational peak in β-diversity.
Moreover, we find that the relative importance of community assembly processes is strongly scale dependent. At small scales, local assembly mechanisms are detectable, but random sampling from observed species pools can account for most of the elevational gradient. At large spatial scales, variation in local assembly mechanisms is a dominant force driving changes in β-diversity along elevational gradients. Our study suggests that scale-dependent variation in local community assembly mechanisms, combined with biogeographic variation in species pools, contributes to the origin and maintenance of these iconic and threatened gradients in global biodiversity.

Acknowledgments

[…] the Universidad Autónoma de Madrid, and the Taylor and Davidson families. We thank all the researchers, students and local guides who were involved in the collection of the field data. We are thankful to all the taxonomic experts who provided identifications for plant specimens. We thank two anonymous reviewers for helpful suggestions that improved the manuscript. Finally, we thank Iván Jiménez for helpful discussions, ideas and comments.
Preliminary study on dosimetry characteristics of a novel cylindrical dose verification system

Abstract. Objective: To develop a novel ionization chamber array dosimetry system, study its dosimetry characteristics, and perform preliminary tests for plan dose verification. Methods: The dosimetry characteristics of this new array were tested, including short-term and long-term reproducibility, dose linearity, dose rate dependence, field size dependence, and angular dependence. Open-field and MLC-field plans were designed for dose testing. Thirty patient treatment plans (10 intensity-modulated radiation therapy [IMRT] plans and 20 volumetric modulated arc therapy [VMAT] plans) that had undergone dose verification using Portal Dosimetry were randomly selected for verification measurements, and the dose verification results were evaluated. Results: The array performed well on all dosimetry characteristics. The gamma passing rates (3%/2 mm) were more than 96% for the combined open-field and MLC-field plans. The average gamma passing rates were (99.54 ± 0.58)% and (96.70 ± 3.41)% for the 10 IMRT plans and (99.32 ± 0.89)% and (94.91 ± 6.01)% for the 20 VMAT plans at the 3%/2 mm and 2%/2 mm criteria, respectively, which is similar to the Portal Dosimetry measurement results. Conclusions: This novel ionization chamber array demonstrates good dosimetry characteristics and is suitable for clinical IMRT and VMAT plan verification.
INTRODUCTION

However, their complexity in planning design and delivery requires a standard and strict quality assurance (QA) procedure to guarantee patients' safety and treatment delivery accuracy [6,7]. Three common methods can be used to perform IMRT dose verification: (1) the true composite (TC) delivery method uses the actual treatment plan parameters for the patient to perform measurements to verify the composite dose distribution; (2) the perpendicular field-by-field (PFF) measurement compares the planned versus measured dose for each perpendicular field; and (3) the perpendicular composite (PC) measurement obtains a single dose image integrated over all the perpendicular fields for analysis. For pretreatment dose verification, the TC delivery method provides the closest simulation of the treatment delivered to the patient [8]. Several well-established commercial devices are available for pretreatment dose verification, including diode arrays, ionization chamber arrays, etc. [9][10][11][12][13] These devices can be used to perform QA measurements using suitable methods based on their properties. For example, planar ionization chamber detector arrays may be suitable for measurements perpendicular to the direction of beam incidence. If used for TC dose verification measurements, the measurement phantom must be of sufficient thickness when the gantry is rotated to a horizontal or near-horizontal position relative to the detector array; otherwise, the effective measurement area of the array for lateral beams is insufficient. A cylindrical phantom keeps the effective measurement area constant during gantry rotation; typical representatives include the diode arrays Delta4 (ScandiDos AB, Uppsala, Sweden) and ArcCHECK (Sun Nuclear Corp., USA), both of which use cylindrical phantoms that effectively avoid these defects [16,17]. To improve on the inherent drawbacks of planar ionization chamber arrays, and inspired by the cylindrical phantom design, a
novel cylindrical array system (ArcMap, RayDose, China) was designed for pretreatment dose verification, to be used for TC dose verification measurements with rotational delivery techniques. In this study, the dosimetry characteristics of this new ionization chamber array were evaluated, and preliminary tests of clinical treatment plan dose verification were performed to evaluate the applicability of the array for IMRT and VMAT verification. This study is expected to provide basic reference data for further clinical application studies of this system.

Dosimetry system

The novel dose verification system consists of an ionization chamber array and software for dose analysis. The array has 1764 cylindrical air ionization chamber detectors embedded inside a cylindrical phantom, forming 21 large detector rings (21 cm in diameter) and 21 small detector rings (19 cm in diameter). The rings are arranged in parallel and alternately along the axis of the phantom. With this arrangement, the 1764 ion chambers form two cylindrical measurement surfaces at different depths inside the phantom. Each measurement surface consists of 21 rings of the same diameter spaced at 1.02 cm. Each ring has 42 evenly distributed ionization chambers with an angular spacing of 8.57°. The cylindrical measurement surfaces with diameters of 21 and 19 cm are denoted "outer-arc" and "inner-arc," respectively. An offset of 0.5 cm exists between the outer-arc and inner-arc along the phantom's axis. Each ionization chamber has a diameter of 6 mm, a height of 4.8 mm, and a volume of 0.135 cm³. The measurement phantom is 29.5 cm long, and the physical density of its material is 1.04 to 1.06 g/cm³. The physical area of the measurement (detector array) is 21 × 21 cm². The measurement phantom is a hollow cylinder (13.4 cm and 26.6 cm inner and outer diameters, respectively), on the one hand, to reduce the weight of the
equipment and to facilitate its use; on the other hand, it accommodates plugs made of homogeneous or non-homogeneous materials with a diameter of 13.4 cm, and a cavity in the center of the plug is designed to accommodate a thimble chamber for measuring the absolute dose at the center of the cylindrical phantom. The software system for dose analysis displays in real time the dose images collected on both measurement surfaces of the array, and compares the measured dose distribution with the reference dose distribution. Figure 1 illustrates the ArcMap dose verification system and its internal structure.

Linear accelerator and treatment planning system

The research work was mainly performed with a 6 MV x-ray beam from a linear accelerator (Vitalbeam, Varian, USA). The accelerator was equipped with 60 pairs of MLC leaves, with a maximum field size of 40 × 40 cm². The plans were generated in Varian's Eclipse 15.5 Treatment Planning System (TPS) using the anisotropic analytical algorithm (AAA, version 15.5.12) and a 2 mm grid size for dose calculation.

Dosimetry characteristics measurement

In this study, the phantom with a homogeneous plug was used for measurements. Unless specified otherwise, the cylindrical phantom's axis was aligned with the accelerator gantry's rotation axis and the phantom's geometric center coincided with the accelerator's isocenter in all measurements. A 0.6 cc thimble chamber (PTW 30013) was inserted into the ArcMap phantom to measure the accelerator output at the isocenter as a reference.

Array calibration

To correct the sensitivity bias caused by differences between the detectors of the ionization chamber array, the array must first be calibrated. Since the detectors are interleaved in the inner-arc and outer-arc, they differ from a single-plane distribution. On the basis of the principles of existing methods, an autonomously optimized calibration method was developed [18].
The ionization chambers at the top of the central ring of the outer-arc and inner-arc are selected as calibration references for the remaining detectors on the two measurement surfaces, respectively. ArcMap array calibration was performed with 20 radiation exposures under 10 different setup requirements, identified as steps A through J (each step was repeated twice). For each step, the accelerator delivered a nominal dose of 100 MU. The phantom was in the SAD setup, which is the initial position for array calibration. The array calibration procedure consists of the following four parts:

1. Part I: With the phantom in the initial position, set the field size to 28 × 30 cm², rotate the gantry to 0°, and irradiate the array with a nominal dose of 100 MU. Then rotate the gantry to 17° and 60° and perform the same operation (corresponding to steps A, B, and C).
2. Part II: Rotate the gantry to 0° and set the field size to 28 × 30 cm². From the initial calibration position, move the couch 10.2 mm in the G direction of the accelerator's gun-target axis (so that the ionization chamber longitudinally adjacent to the reference chamber is positioned at the center of the beam), and irradiate the array. Next, move the couch 10.2 mm from the initial calibration position in the T direction and irradiate the array (corresponding to steps D and E).
3. Part III: Similar to Part II, but the couch travel is adjusted to 96.9 mm (so that the ionization chamber at the edge of the array is positioned at the center of the beam), and the field size is set to 7 × 5 cm² (corresponding to steps F and G).
4. Part IV: Similar to Part I, but the phantom is rotated 180° about its axis; the remaining operations are unchanged (corresponding to steps H, I, and J).
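The basic idea behind per-detector sensitivity calibration can be sketched as follows. This is a simplified illustration only, assuming that under one broad uniform exposure each detector's corrected reading should match that of the reference chamber; the multi-step procedure above (steps A-J, with couch shifts, gantry angles, and phantom rotation) is not modeled here, and the function name and sample numbers are made up.

```python
def calibration_factors(readings, ref_index):
    """Toy sensitivity calibration: scale each detector so that, under a
    uniform irradiation, its corrected reading equals the reading of the
    reference chamber (e.g. the chamber at the top of the central ring).
    `readings` holds raw detector readings from one uniform exposure."""
    ref = readings[ref_index]
    return [ref / r for r in readings]

# Applying the factors equalizes the corrected readings:
raw = [1.02, 0.98, 1.00, 1.05]          # made-up raw readings
factors = calibration_factors(raw, ref_index=2)
corrected = [r * f for r, f in zip(raw, factors)]
```

In practice, a scheme like this is repeated over several setups so that every detector is exposed near the beam center at least once, which is the role of the couch shifts and phantom rotation in the procedure above.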
After the calibration procedure was completed, the calibration result of the detector array was verified by two sets of measurements. (1) Angle rotation measurement: the accelerator delivered a 100 MU beam at gantry intervals of 30° around the phantom, with a field size of 25 × 25 cm²; the phantom was rotated 180° about its axis when irradiating the bottom half. After the measurements were completed, the array had acquired post-irradiation dose images of beams incident from 12 different directions (12 images from each of the two measurement surfaces). (2) Axial movement measurement: with a field size of 25 × 10 cm² and a gantry position of 0°, three different areas of the detector array were exposed to the irradiation field in turn by moving the treatment couch, with the accelerator delivering 100 MU each time.

Reproducibility

Measurements were made with a 10 × 10 cm² field size and a 400 MU/min dose rate. At a gantry angle of 0°, the accelerator delivered 100 MU each time. Short-term reproducibility was evaluated by calculating the maximum deviation (MD) and standard deviation (SD) of 10 consecutive measurement readings of the ionization chambers in the central region of the irradiation. In addition, the measurement was taken once a month and repeated six times to evaluate long-term reproducibility. The ArcMap software was used to record the corresponding measurement results. During the measurements, the thimble chamber was placed at the isocenter to obtain the absolute dose and correct for variation in the accelerator output.

Dose linearity

Measurements were performed with a 10 × 10 cm² field size and repeated thrice. The accelerator delivered beam doses of 1-600 MU at a 0° gantry position.
The measured results were averaged to reduce measurement uncertainty. The readings of the ionization chambers in the central region of the field were selected for dose linearity analysis. For each measurement, the thimble chamber was used to measure the absolute dose at the isocenter to correct for variation in the accelerator output.

Dose rate dependence

Measurements were performed with a field size of 10 × 10 cm². In 6X mode, the accelerator delivered 100 MU each time at regular dose rates of 20, 40, 80, 100, 200, 300, 400, 500, and 600 MU/min, respectively. In 6X-FFF mode, 100 MU was delivered each time at high dose rates of 600, 800, 1000, 1200, and 1400 MU/min, respectively, and the array performed dose measurements at sampling rates of 50 and 100 ms. The dose rate dependence of the ionization chambers was evaluated from the responses of the ArcMap array. A thimble chamber was used to correct for variation in the accelerator output. Identical measurements were repeated thrice, and the results were averaged.

Field size dependence

Field size dependence of the ionization chambers was evaluated using 100 MU deliveries for six field sizes (3 × 3, 5 × 5, 7 × 7, 10 × 10, 15 × 15, and 20 × 20 cm²) at a gantry angle of 0°. The measurements were repeated thrice at each field size, and the results were averaged. The measured readings of the detectors at the center of the field were selected as the target values and compared with the TPS-calculated values.
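The analysis metrics used in the reproducibility and dose-linearity tests above (SD and MD of repeated readings expressed in percent of the mean, and the R² of a reading-versus-MU fit) can be sketched as follows. The helper names and sample numbers are illustrative, not the vendor software or the study's data.

```python
import statistics

def reproducibility(readings):
    """SD and maximum deviation (MD) of repeated detector readings,
    expressed in percent of their mean, as used for short- and long-term
    reproducibility in the text. Illustrative helper only."""
    mean = statistics.mean(readings)
    norm = [r / mean for r in readings]
    sd = statistics.pstdev(norm) * 100.0          # SD in % of the mean
    md = max(abs(n - 1.0) for n in norm) * 100.0  # worst deviation in %
    return sd, md

def linearity_r2(mu, readings):
    """R^2 of a least-squares line of detector reading vs. delivered MU."""
    n = len(mu)
    mx, my = sum(mu) / n, sum(readings) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(mu, readings))
    sxx = sum((x - mx) ** 2 for x in mu)
    syy = sum((y - my) ** 2 for y in readings)
    return sxy * sxy / (sxx * syy)

# Made-up data: ten nearly identical repeats, and a perfectly linear MU sweep.
sd, md = reproducibility([100.0, 100.02, 99.98, 100.01, 99.99,
                          100.0, 100.02, 99.98, 100.01, 99.99])
mus = [1, 10, 50, 100, 300, 600]
r2 = linearity_r2(mus, [0.031 * m for m in mus])
```

For an ideally linear detector, R² approaches 1; values better than 0.99999, as reported below for this array, indicate essentially perfect linearity over the tested MU range.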
Angular dependence

To measure the directional responses of the array detectors inside the phantom, the selected ion chamber was positioned at the accelerator isocenter. Measurements were performed with a 10 × 10 cm² field size at varying gantry angles. The gantry was rotated counterclockwise from 105° to 255°, with 100 MU of beam output per 15°. To avoid couch attenuation when irradiating the bottom half, the phantom was rotated 180° about its axis and the selected detector was repositioned at the isocenter; the gantry was then rotated counterclockwise from 90° to 270°, with 100 MU of beam output per 15°. Three identical measurements were repeated to reduce uncertainty in the measurement process. Furthermore, two different ionization chambers were selected to evaluate the difference in their directional responses. The TPS-calculated doses corresponding to each measurement condition were obtained as references to evaluate the angular dependence of the array.

MLC field testing

Shaper software (Varian, USA) was used to design two types of MLC field plans. Static MLC field: the MLCs employ a step-and-shoot method within a 20 × 16 cm² field, with the leaf positions on the Bank B side kept fixed and the leaves on the Bank A side moved sequentially with 4-cm spacing to form five subfields. Dynamic MLC field: the MLCs employ a sliding-window method in a 24 × 20 cm² field, moving with a 4-cm subfield width. At a gantry angle of 0°, each MLC field plan was delivered once at a collimator angle of 0° and once at 90°, with 100 MU delivered per subfield. The two testing cases described above were measured with the ArcMap system and compared with the TPS-calculated dose distributions.
Patient-specific IMRT and VMAT plan dose verification

Twenty VMAT plans and ten IMRT plans were randomly selected from patients' clinical treatment plans, and these plans were ported to the computed tomography (CT) images of the ArcMap phantom to create the corresponding verification plans. All the IMRT plans employed a dynamic delivery method, and all the VMAT plans consisted of two full treatment arcs. The verification plans were measured using the actual gantry positions of the treatment plans. The measured TC dose distribution was compared with the calculated dose distribution.

Evaluation method

The gamma analysis method was used to compare the measured dose distribution with the TPS-calculated dose distribution. Both measurement surfaces were combined to perform the gamma analysis and calculate global gamma passing rates (GPRs). The measured results were used as the reference distribution, and the TPS-calculated dose distribution was linearly interpolated onto a 1 mm grid before comparison with the measured dose distribution. Different dose-difference tolerances and distance-to-agreement (DTA) tolerances were chosen as criteria to compute gamma indices. In this study, the ArcMap-measured and TPS-calculated dose distributions were compared using three different criteria (3%/3 mm, 3%/2 mm, and 2%/2 mm). Gamma analysis was performed in absolute dose mode, with a low-dose threshold of 10%.
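The gamma comparison described above can be sketched for a simplified one-dimensional case. This is an illustrative implementation of the standard gamma-index formula (global normalization to the reference maximum, low-dose threshold, pass at gamma ≤ 1), not the ArcMap software itself, which evaluates doses on two cylindrical surfaces; the toy profile is made up.

```python
import math

def gamma_passing_rate(ref_pos, ref_dose, eval_pos, eval_dose,
                       dd=0.03, dta=2.0, threshold=0.10):
    """Global 1-D gamma passing rate in percent, e.g. a 3%/2 mm criterion
    with a 10% low-dose threshold. Positions are in mm; the dose-difference
    term is normalized to the maximum reference dose (global gamma)."""
    d_max = max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        if rd < threshold * d_max:
            continue  # skip points below the low-dose threshold
        g2 = min(((rp - ep) / dta) ** 2 + ((ed - rd) / (dd * d_max)) ** 2
                 for ep, ed in zip(eval_pos, eval_dose))
        gammas.append(math.sqrt(g2))
    passed = sum(1 for g in gammas if g <= 1.0)
    return 100.0 * passed / len(gammas)

# Identical distributions pass everywhere; a 1 mm shift still passes
# under a 2 mm DTA tolerance.
pos = [float(i) for i in range(11)]                          # mm
dose = [100 * math.exp(-((p - 5.0) ** 2) / 8) for p in pos]  # toy profile
gpr = gamma_passing_rate(pos, dose, pos, dose)
```

A production implementation would additionally interpolate the evaluated distribution onto a finer grid (the text uses 1 mm) so that the DTA search is not limited by the detector spacing.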
Array calibration

The calibration factors of the inner-arc and outer-arc detectors range from 0.9657 to 1.0395 and from 0.9609 to 1.0473, respectively. Figure 2 shows the calibration factors of the 42 chambers on the center rings of the outer-arc and inner-arc. In the angle rotation measurements, the array measurements at gantry angles of 0° and 30° were used as two reference dose distributions, which were circularly shifted and aligned with the remaining measured images and then compared with them. Table 1 summarizes the gamma analysis results between the dose distributions measured by the array. In the axial movement measurement, three different areas of the array were irradiated and three dose images were acquired (Figure 3). The measured dose distributions were compared with each other; the total GPRs were 98.47% (579 of 588 dose points), 99.83% (587 of 588 dose points), and 97.79% (575 of 588 dose points) at the 1%/1 mm criterion, respectively. Both parts of the verification measurements demonstrated high passing rates, indicating good calibration results for the array.

Reproducibility

Thirteen ionization chambers closest to the central axis of the field were selected to evaluate reproducibility: nine in the outer-arc, numbered 1 to 9, and four in the inner-arc, numbered 10 to 13. IC_5 refers to the top central ionization chamber (IC) of the outer-arc. Figure 4a shows the short-term reproducibility of the 13 ionization chamber detectors. The measurement results of each ion chamber were normalized to the average of 10 consecutive measurements. The SDs of the 13 ion chambers over 10 consecutive measurements were less than 0.02%, and the MDs were less than 0.04%. Figure 4b shows the long-term reproducibility: for all selected detectors, the SD was less than 0.41% and the MD was less than 0.61%.
Dose linearity

The measured readings of the ionization chambers in the central region of the beam were normalized and linearly fitted against the accelerator output. The selected detectors showed good dose linearity, with R² values of the fitted lines better than 0.99999. Figure 5 shows the detector dose linearity against the accelerator output. The discrepancy between the actual measured values and the theoretical values of the corresponding fitted line was within 0.2%.

Dose rate dependence

Figure 6a shows the dose rate dependence of the array between 20 and 600 MU/min. Each measured reading of the selected ionization chambers was normalized to the respective measurement at 600 MU/min. From the normalized measurements, the response variation was less than 0.2% and the SD was less than 0.08% for all selected ionization chambers at the different dose rates. Figure 6b shows the measurements of IC_5 at both high and low sampling rates over the dose rate range of 600 to 1400 MU/min. Each measured reading was normalized to the 600 MU/min measurement at the 50 ms sampling rate. The response variation across dose rates at 50 ms is less than 0.05%, and the SD is less than 0.02%. However, at a sampling rate of 100 ms, the variation of the ionization chamber response was less than 0.15% over the range of 600 to 1200 MU/min, while at 1400 MU/min the response decreased by 10.12%.

Field size dependence

As the irradiation field increased, the response of the array increased. The readings of the selected ionization chambers were normalized to their respective readings for the 10 × 10 cm² field. Figure 7 shows the field size dependence of two ionization chambers located on different measurement surfaces.
Comparing the ArcMap measurements with the TPS calculations, the discrepancy is about −1.1% for the 3 × 3 cm² field, and the discrepancies are all within ±1% for the remaining fields.

Angular dependence

Figure 8 shows the chamber angular dependence determined by comparing the relative chamber response to the TPS-calculated reference. The measured values of the detector and the calculated values of the TPS were normalized to their respective values at a beam incidence of 0°. Compared with the TPS-calculated reference values, the difference between the measured and calculated results of the ionization chamber is essentially within ±1%; the maximum discrepancy is 1.08% at an incidence angle of 90°. The directional responses of the two selected ion chambers, normalized to their respective responses at normal beam incidence, exhibited a similar pattern; their difference at 105° is 0.83%, which is the only result greater than 0.7%, and the differences are less than 0.4% for 15 of the 24 angles.

Combined open field testing

The measured dose distributions of the two open field plans were compared with the reference dose distributions calculated by the TPS at the 3%/2 mm criterion. The number of passing dose points and the corresponding GPRs for the array are given in Table 2. The passing rates for the two plans are better than 96%. Figure 9 shows the dose profiles from the TPS calculation and the array measurement for the combined field plan; the abscissa indicates arc distance along the detector ring.

MLC field testing

The test results of the MLC fields are given in Table 2. The passing rates for all plans are better than 96% at the 3%/2 mm criterion. Figure 10 shows the dose profiles for the static MLC field plans with collimator angles of 0° and 90°, respectively. Figure 11 displays the dose images corresponding to the two MLC field plans, acquired on each of the two measurement surfaces.
Table 2: Gamma analysis results of the dose distributions for the combined open field plans and MLC field plans at the 3%/2 mm criterion.

Patient-specific IMRT and VMAT plan dose verification

The gamma analysis results for the dose verification measurements of all patient plans are shown in Table 3. At the 3%/2 mm criterion, the average GPRs were better than 98% and 96% for the IMRT and VMAT plans, respectively. At the 2%/2 mm criterion, the average GPRs were 96.70% and 94.91% for the 10 IMRT plans and 20 VMAT plans, respectively.

DISCUSSION

In IMRT and VMAT dose verification measurements, the detector used for dosimetry must be accurate: detector linearity, reproducibility, and the response of the detector to the beam incidence angle all affect the measurement results. In designing and studying this novel ionization chamber array measurement system, the following aspects were considered: (1) the ionization chamber was selected as the detector of this system, mainly because it is one of the most classic detectors and has inherent advantages over other detectors, such as good long-term stability and a sensitivity independent of the accumulated irradiation dose; (2) the ionization chambers are arranged axially, which avoids the influence of different beam incidence angles on the chamber response over the full 360° range; (3) the detectors are arranged on two cylindrical surfaces in a staggered manner, which effectively increases the spatial resolution and avoids the beam path at certain angles (such as near 90° or 270°) passing through many cavities before reaching a detector. Whether the tool is suitable for practical verification measurements must first be fully tested, including array calibration and a study of its dosimetry characteristics. This study first tested the accuracy of the array
calibration method, and the results showed that the inner and outer ionization chambers meet the requirements of clinical application after calibration. In addition, periodic array calibration helps identify detector units that may develop problems and improves the accuracy of the measured dose.

Detector reproducibility and dose linearity are two important indicators for clinical applications. From the above measurements, the dose verification system shows good measurement reproducibility (short-term and long-term) and dose linearity. The array has no dose rate dependence over the tested dose rate range and can be applied to variable-dose-rate radiotherapy techniques such as VMAT.

Several different methods have been used to evaluate the angular dependence of the ArcCHECK array, and this study refers to the measurement method used by Li et al. [9,19,20] Placing the selected ionization chamber at the accelerator's isocenter ensures that the detector is always located at the center of the irradiation field, which makes the measurements and TPS calculations more accurate. In addition, we performed a test with the array in the isocenter setup: the responses of the detector with clockwise gantry rotation from 340° to 20° were measured at intervals of 2° (corresponding to an incidence angle range of −22.3° to 22.3°). The measured values varied between −0.24% and 0.35% relative to the corresponding TPS-calculated values. The measurements show that the ArcMap array has low angular dependence, owing to the use of cylindrical ionization chambers with each detector axis parallel to the axis of the phantom. In this respect, the ionization chamber has some advantages over diode detectors [21]. However, noncoplanar arcs are not covered in this work, and further studies are needed.
Before dose verification, open field testing can be used to determine the current working state of the machine, such as deviations in output, and also to reveal problems with the measuring tool more easily. Two testing cases were designed in this work: a combined field formed by square fields of different sizes under 0° irradiation, and a box irradiation field delivered from different angles. On the one hand, the steepness of the gradient in the single-field penumbra region was appropriately reduced to avoid an obvious influence of setup deviation on the test results; on the other hand, this also takes into account that the phantom is cylindrical rather than flat. The test results were good, but it should be noted that when using box irradiation, the attenuation effect of the couch must be taken into account. Therefore, three-field irradiation (excluding 180° incidence) can also be considered as a testing case.

Significant deviations in MLC leaf position accuracy during IMRT and VMAT plan delivery are among the most important factors affecting the accuracy of dose delivery and among the more likely problems to occur. Therefore, testing the MLC field before verification, or when poor dose results are found, can help to solve the related problems. In this work, static and dynamic MLC field testing cases were designed, and the tool was used to test the MLC fields after the MLC leaves were maintained and serviced. The test results are good and can serve as a benchmark for monitoring MLC status in routine work. How to use this tool to test MLC leaf position accuracy, and what accuracy can be achieved, will need to be addressed in future research [22].
Compared with a single plane of a two-dimensional detector array, the dual-layer design of the ArcMap array in three-dimensional space improves spatial resolution. It is worth noting that the effective measurement areas of the two measurement surfaces are different, and the corresponding detector densities also differ. The ArcMap array can directly obtain dosimetry information at two different depths for analysis, while helping to better reconstruct the three-dimensional dose. Building on the current dual-layer design, the diameter of the inner measurement surface could be further reduced, which would bring the inner measurement surface closer to the high-dose region during plan verification measurements.

In this study, the 30 randomly selected clinical plans had completed dose verification using Portal Dosimetry (Varian) before patient treatment. The EPID has high resolution and ensures that the detection plane is always perpendicular to the beams during measurement. The average GPR of the verification results using Portal Dosimetry was (98.91 ± 1.53)% at the 3%/2 mm criterion. The 30 plans were also verified using the ArcMap system, and the GPRs of all plans were better than 96% at the 3%/2 mm criterion. A paired t-test revealed no significant discrepancy between the GPRs evaluated by the ArcMap and those evaluated by Portal Dosimetry, with a corresponding p-value of 0.46 (>0.05). Plans for nasopharyngeal sites are usually more complex than those for other sites, and the randomly selected plans included 13 nasopharyngeal VMAT plans (all consisting of two full treatment arcs). Owing to the good dosimetry characteristics of the ArcMap array (no dose rate dependence, low angular response dependence, etc.), the gamma analysis results still showed high passing rates when the system was used to verify complex VMAT plans.
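The paired comparison of per-plan GPRs from the two systems can be sketched as follows. The helper and sample numbers are illustrative (not the study's data); only the t statistic is computed, with significance judged against the two-tailed 5% critical value rather than an exact p-value.

```python
import math
import statistics

def paired_t_statistic(a, b):
    """Paired t statistic for matched samples, e.g. gamma passing rates of
    the same plans measured by two QA systems:
    t = mean(d) / (sd(d) / sqrt(n)), where d are per-plan differences."""
    diffs = [x - y for x, y in zip(a, b)]
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample SD of the differences
    return mean_d / (sd_d / math.sqrt(len(diffs)))

# Made-up GPRs (%) for five plans measured by both systems:
arcmap = [98.5, 99.1, 97.8, 98.9, 99.4]
portal = [98.7, 99.0, 98.1, 98.8, 99.2]
t = paired_t_statistic(arcmap, portal)
# A |t| below the two-tailed 5% critical value (about 2.776 for df = 4)
# indicates no significant difference between the two systems.
```

A paired test is appropriate here because the two GPR values for each plan come from the same delivery and are therefore correlated; an unpaired test would waste that pairing.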
CONCLUSIONS

We have designed a new ionization chamber array. This dose verification system shows good dosimetry performance and meets the requirements for detectors in clinical applications. Preliminary tests on clinical plans have demonstrated the applicability and effectiveness of dose verification for VMAT and IMRT plans, while providing a basic reference for further studies on the clinical application of the tool. This tool further enriches the choice of dose verification tools for physicists.

AUTHOR CONTRIBUTIONS

Long He contributed to the project design, made all measurements, performed the data analysis, and wrote the manuscript. Jinhan Zhu contributed to the project design, measurements, and data analysis. Xuetao Wang, Bailin Zhang, and Qiang Hu participated in measurements and data analysis. Lixin Chen and Xiaowei Liu initiated this project, contributed to its design, and edited the manuscript. Xiaowei Liu oversaw the general progress and determined the final version of the manuscript.

ACKNOWLEDGMENTS

This study was supported by the National Natural Science Foundation of China (No. 12005315).

CONFLICT OF INTEREST STATEMENT

The authors declare no conflict of interest related to this work.

DATA AVAILABILITY STATEMENT

The data that support the findings of this study are available from the corresponding author upon reasonable request.

Figure 1: ArcMap dose verification system. (a) Dosimetry device; (b) axial cross-section of the phantom and sketch of the internal detector geometry: the inner and outer detector rings have an axial offset; the homogeneous plug (13.4 cm diameter) has a central cavity that can accommodate a thimble chamber.

Figure 2: The calibration factors of the 42 chambers on the center rings of the outer-arc and inner-arc.
Figure 3: Dose images acquired by the array with the ArcMap phantom moved along the axial direction under a 25 × 10 cm² field. If the array were cut open from its bottom, the ionization chambers of the ArcMap would spread onto two planes. The positions of the dose images acquired in each of the three areas of the array are shown schematically in panels (a), (b), and (c), respectively.

Due to the cylindrical geometry of the array, the two measurement surfaces can be regarded as consisting of 42 generatrices parallel to the axis of the cylindrical phantom; each generatrix is numbered from 0° to 360° in clockwise order, starting from the generatrix at the top of the array (right above), so as to correspond to the gantry angle during rotation.

Figure 4: Array reproducibility measurement results. (a) Short-term reproducibility of 13 ionization chambers in the central region of the array. (b) Long-term reproducibility of 6 ionization chambers.

Figure 5: Dose linearity measurement results. The normalized measured readings in the figure came from detector IC_5.

Figure 7: The field size dependence of the ArcMap ionization chambers for field sizes of 3 × 3 cm² to 20 × 20 cm². D_meas/D_calc indicates the ratio of ArcMap measured results to TPS calculated results.

Figure 8: Angular dependence of the ArcMap array detector. D_meas/D_calc indicates the ratio of ArcMap measurement to TPS calculation.

Figure 10: Comparison of the dose profiles from TPS calculation and array measurement for static MLC plans with (a) collimator of 0° and (b) collimator of 90°. The abscissa indicates arc distance along the detector ring.

Figure 11: Dose images acquired by the two measuring surfaces of the array. (a) Static MLC field plan with collimator of 0°; (b) static MLC field plan with collimator of 90°; (c) dynamic MLC field plan with collimator of 0°.
Gamma analysis results between measured dose distributions.Map0 • represents the dose distribution measured by the ArcMap array where the y-axis of the irradiation field coincides with the 0 • -generatrix of the array. TA B L E 1 a Dose rate response of the ArcMap array.(a) 6 MV mode, measurements of the ionization chamber at regular dose rate (only 4 are shown) and (b) 6 MV-FFF mode, measurements of the IC_5 at high and low sampling rates at high dose rates. • .Directional responses of two selected ion chambers normalized to theirF I G U R E 6 The gamma analysis results of IMRT and VMAT plans.
Search for single production of scalar leptoquarks in proton-proton collisions at sqrt(s) = 8 TeV

A search is presented for the production of both first- and second-generation scalar leptoquarks with a final state of either two electrons and one jet or two muons and one jet. A data sample of proton-proton collisions at a center-of-mass energy of sqrt(s) = 8 TeV recorded with the CMS detector corresponds to an integrated luminosity of 19.6 inverse femtobarns. Upper limits are set on both the first- and second-generation leptoquark production cross sections as functions of the leptoquark mass and the leptoquark couplings to a lepton and a quark. Results are compared with theoretical predictions to obtain lower limits on the leptoquark mass. At 95% confidence level, single production of first-generation leptoquarks with a coupling and branching fraction of 1.0 is excluded for masses below 1730 GeV, and single production of second-generation leptoquarks with a coupling and branching fraction of 1.0 is excluded for masses below 530 GeV. These are the best overall limits on the production of first-generation leptoquarks to date.

Introduction

Leptoquarks (LQ) are hypothetical color-triplet bosons with spin 0 (scalar LQ) or 1 (vector LQ), which are predicted by many extensions of the standard model (SM) of particle physics, such as Grand Unified Theories [1-8], technicolor schemes [9-11], and composite models [12]. They carry fractional electric charge (±1/3 for the LQs considered in this paper) and both baryon and lepton numbers, and thus couple to a lepton and a quark. Existing experimental limits on flavor-changing neutral currents and other rare processes disfavor leptoquarks that couple to a quark and lepton of more than one SM generation [13, 14]. A discussion of the phenomenology of LQs at the LHC can be found elsewhere [15].
The production and decay of LQs at proton-proton colliders are characterized by the mass of the LQ particle, M_LQ; its decay branching fraction into a charged lepton and a quark, usually denoted β; and the Yukawa coupling λ at the LQ-lepton-quark vertex. At hadron colliders, leptoquarks could be produced in pairs via gluon fusion and quark-antiquark annihilation, and singly via quark-gluon fusion. Pair production of LQs does not depend on λ, while single production does, and thus the sensitivity of single-LQ searches depends on λ. At lower masses, the cross sections for pair production are greater than those for single production. Single production cross sections decrease more slowly with mass, exceeding pair production at masses of order 1 TeV for λ = 0.6.

Several experiments have searched for LQs. The H1 collaboration has produced limits on various singly produced LQ types; the one most relevant for comparison with this search is the LQ called S_0^R in Ref. [16], for which they place a limit at 500 GeV, assuming λ = 1.0 and β = 1.0. The D0 collaboration has set a lower mass limit of 274 GeV on singly produced scalar LQs, again assuming λ = 1.0 and β = 1.0 [17]. Limits from pair production of leptoquarks exclude leptoquark masses below 1010 GeV for the first generation and 1080 GeV for the second generation, for β = 1.0 [18].

The main single-leptoquark production mode at the LHC is the resonant diagram shown in Fig. 1. However, significant contributions are made by the diagrams with non-resonant components shown in Fig. 2. These contributions increase with both the LQ mass and coupling; the invariant mass distribution of a first-generation LQ of mass M_LQ = 1 TeV and coupling λ = 1.0 possesses a tail extending to very low masses that is comparable to the peak in magnitude. The reconstructed shape of the resonance peak itself is not strongly affected by λ. Also, interference with the qg → qZ/γ* → qℓ+ℓ− SM process can occur at dilepton masses in the vicinity of the Z boson mass peak and at lower energies. Treatments for this interference region and the above-described low-mass off-shell tail of the lepton-jet mass distribution are detailed in Section 5.

The final-state event signatures from the decays of singly produced LQs can be classified as either two charged leptons and a jet, where the LQ decays to a charged lepton and a quark, or a charged lepton, missing transverse energy, and a jet, where the LQ decays into a neutrino and a quark. The two signatures have branching fractions of β and 1 − β, respectively. For this study, and for S_0^R-type LQs, β is 1.0, disregarding LQ decays to a neutrino and a quark. Because the parton distribution functions (PDF) of the proton are dominated by the u and d quarks, the single production of LQs of the second and third generations is suppressed.

The charged leptons can be electrons, muons, or taus, corresponding to the three generations of LQs. In this paper two distinct signatures with charged leptons in the final state are considered: one with two high transverse momentum (p_T) electrons and one high-p_T jet (denoted eej), and the other with two high-p_T muons and one high-p_T jet (denoted µµj).

The CMS detector

The central feature of the CMS apparatus is a superconducting solenoid of 6 m internal diameter, providing a magnetic field of 3.8 T.
Within the solenoid volume are a silicon pixel and strip tracker, a lead tungstate crystal electromagnetic calorimeter (ECAL), and a brass and scintillator hadron calorimeter (HCAL), each composed of a barrel and two endcap sections. Extensive forward calorimetry complements the coverage provided by the barrel and endcap detectors. Muons are measured in gas-ionization detectors embedded in the steel flux-return yoke outside the solenoid.

The ECAL energy resolution for electrons with E_T ≈ 45 GeV from Z → ee decays is better than 2% in the central pseudorapidity region of the ECAL barrel (|η| < 0.8), and is between 2% and 5% elsewhere. For low-bremsstrahlung electrons, where 94% or more of their energy is contained within a 3 × 3 array of crystals, the energy resolution improves to 1.5% for |η| < 0.8 [19].

Muons are measured in the pseudorapidity range |η| < 2.4 with detection planes made using three technologies: drift tubes, cathode strip chambers, and resistive-plate chambers. Matching muon tracks derived from these measurements to tracks measured in the silicon tracker results in a relative p_T resolution, for muons with 20 < p_T < 100 GeV, of 1.3-2.0% in the barrel and better than 6% in the endcaps; the p_T resolution in the barrel is better than 10% for muons with p_T up to 1 TeV [20].

The first level of the CMS trigger system, composed of custom hardware processors, uses information from the calorimeters and muon detectors to select the most interesting events. The high-level trigger (HLT) processor farm further decreases the event rate from around 100 kHz to around 400 Hz, before data storage.
The particle-flow event algorithm reconstructs and identifies each individual particle with an optimized combination of information from the various elements of the CMS detector. The energy of photons is directly obtained from the ECAL measurement, corrected for zero-suppression effects. The energy of electrons is determined from a combination of the electron momentum at the primary interaction vertex as determined by the tracker, the energy of the corresponding ECAL cluster, and the energy sum of all bremsstrahlung photons spatially compatible with originating from the electron track. The energy of muons is obtained from the curvature of the corresponding track. The energy of charged hadrons is determined from a combination of their momentum measured in the tracker and the matching ECAL and HCAL energy deposits, corrected for zero-suppression effects and for the response function of the calorimeters to hadronic showers. Finally, the energy of neutral hadrons is obtained from the corresponding corrected ECAL and HCAL energies.

A more detailed description of the CMS detector, together with a definition of the coordinate system used and the relevant kinematic variables, can be found in Ref. [21].

Data and simulation samples

The data were collected during the 8 TeV pp run in 2012 at the CERN LHC and correspond to an integrated luminosity of 19.6 fb−1. In the eej channel, events are selected using a trigger that requires two electrons with p_T > 33 GeV and |η| < 2.4; in the µµj channel, events are selected using a trigger that requires one muon with p_T > 40 GeV and |η| < 2.1.
Simulated samples for the signal processes are generated for leptoquark mass hypotheses between 300 and 3300 GeV and coupling hypotheses between 0.4 and 1.0 in the eej channel, and for leptoquark mass hypotheses between 300 and 1800 GeV and a coupling hypothesis of 1.0 in the µµj channel. Production of LQs in the µµj channel is suppressed because of the proton PDF, as discussed in Section 1.

The main sources of background are tt, Z/γ* + jets, W + jets, diboson (ZZ, ZW, WW) + jets, single top quark, and QCD multijet production. The tt + jets background shape is estimated from a study based on data described in Section 6; the simulation sample for the normalization of the tt + jets background, as well as the samples for the Z/γ* + jets and W + jets backgrounds, are generated with MADGRAPH 5.1 [22]. Single top quark samples (s- and t-channels, and W boson associated production) are generated with POWHEG 1.0 [23-26], and diboson samples are generated with PYTHIA (version 6.422) [27] using the Z2 tune [28]. The QCD multijet background is estimated from data.

For the simulation of signal samples, the CALCHEP generator [29] is used for the calculation of the matrix elements. The signal cross sections are computed at leading order (LO) with CALCHEP and are listed in Table A.1 in the appendix. Blank entries were not considered because of the small size of the cross section. The resonant cross sections σ_res are shown in Fig. 3 and are defined by the kinematic selections given in Section 5.
The PYTHIA and MADGRAPH simulations use the CTEQ6L1 [30] PDF sets, those produced with CALCHEP use the CTEQ6L PDFs, and the POWHEG simulation uses the CTEQ6m set. All of the simulations use PYTHIA for the treatment of parton showering, hadronization, and underlying-event effects. For both signal and background simulated samples, the simulation of the CMS detector is based on the GEANT4 package [31]. All simulated samples include the effects of extra collisions in a single bunch crossing as well as collisions from nearby bunch crossings (in-time and out-of-time pileup, respectively). The pileup profiles in simulation are reweighted to the distributions of reconstructed vertices per bunch crossing in data collected by the CMS detector [32]. In the eej channel, the background and signal are rescaled by a uniform trigger efficiency scale factor of 0.996, which is measured in Ref. [33]. In the µµj channel, the background and signal are rescaled by muon η-dependent efficiency factors of 0.94 (|η| ≤ 0.9), 0.84 (0.9 < |η| ≤ 1.2), and 0.82 (1.2 < |η| ≤ 2.1). An uncertainty of 1% is assigned to these factors to account for variations during data-taking periods and statistical uncertainties.
Event reconstruction

Muons are reconstructed as tracks in the muon system that are "globally" matched to reconstructed tracks in the tracking system [20]. Muons are required to have p_T > 45 GeV and |η| < 2.1. Additionally, they are required to satisfy a set of criteria optimized for high p_T: they are reconstructed as "global" muons with tracks associated to hits from at least two muon detector planes, together with at least one muon chamber hit included in the "global" track fit [20]. To perform a precise measurement of the p_T and to reduce background from muons from secondary decays in flight, at least eight hits are required in the tracker and at least one in the pixel detector. To minimize background from cosmic ray muons, the transverse impact parameter with respect to the primary vertex is required to be less than 2 mm and the longitudinal distance less than 5 mm. Muons are required to be isolated by applying an upper threshold of 0.1 on the relative tracker isolation. The relative tracker isolation is defined as the ratio of the scalar p_T sum of all tracks in the tracker coming from the same vertex, excluding the muon candidate track, in a cone of ΔR = √((Δη)² + (Δφ)²) = 0.3 (where φ is the azimuthal angle in radians) around the muon candidate track, to the muon p_T.
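The cone-based relative tracker isolation described above can be sketched in a few lines of Python (a simplified illustration with hypothetical track records, not the actual CMS particle-flow code):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Angular separation dR = sqrt(d_eta^2 + d_phi^2), with d_phi wrapped into [0, pi]."""
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2.0 * math.pi - dphi
    return math.sqrt((eta1 - eta2) ** 2 + dphi ** 2)

def rel_tracker_isolation(muon, tracks, cone=0.3):
    """Scalar p_T sum of the other tracks within the cone, divided by the muon p_T."""
    pt_sum = sum(
        t["pt"]
        for t in tracks
        if t is not muon
        and delta_r(muon["eta"], muon["phi"], t["eta"], t["phi"]) < cone
    )
    return pt_sum / muon["pt"]

muon = {"pt": 100.0, "eta": 0.0, "phi": 0.0}
tracks = [
    muon,
    {"pt": 5.0, "eta": 0.1, "phi": 0.1},   # inside the cone
    {"pt": 50.0, "eta": 1.5, "phi": 2.0},  # well outside the cone
]
iso = rel_tracker_isolation(muon, tracks)  # 5/100 = 0.05, below the 0.1 threshold
```

A muon in this toy event passes the isolation requirement because only the soft nearby track contributes to the cone sum.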
Electrons are required to have a reconstructed track in the central tracking system that is matched in η and φ to a cluster of ECAL crystals with a shape consistent with an electromagnetic shower. The transverse impact parameter of the track with respect to the primary vertex is required to be less than 2 mm for electrons in the barrel (|η| < 1.442) and less than 5 mm for electrons in the endcap (|η| > 1.560). Electrons are required to be isolated from reconstructed tracks other than the matched track in the central tracking system and from additional energy deposits in the calorimeter. The transverse momentum sum of all tracks in a cone of ΔR = 0.5 around the electron candidate's track and coming from the same vertex must be less than 5 GeV. Also, the transverse energy sum of the calorimeter energy deposits falling in the cone of ΔR = 0.5 must be less than 3% of the candidate's transverse energy. An additional contribution accounting for the average contribution of other proton-proton collisions in the same bunch crossing is added to this sum. To reject electrons coming from photon conversions within the tracker material, the reconstructed electron track is required to have hits in all pixel layers. Electrons in the analysis have p_T > 45 GeV and |η| < 2.1 to match the muon requirements (excluding the transition region between the barrel and endcap detectors, 1.442 < |η| < 1.560). Selection criteria for electron identification and isolation optimized for high energies are also applied [33].
Jets are reconstructed with the CMS particle-flow algorithm [34, 35], which measures stable particles by combining information from all CMS subdetectors. The jet reconstruction algorithm used in this paper is the anti-k_T algorithm [36, 37] with a distance parameter of 0.5, which only considers tracks associated to the primary vertex. Jet momentum is determined as the vectorial sum of all particle momenta in the jet, and is found from simulation to be within 5% to 10% of the true momentum over the whole p_T spectrum and detector acceptance. An offset correction is applied to jet energies to take into account the contribution from additional proton-proton interactions within the same bunch crossing. Jet energy corrections are derived from simulation, and are confirmed with in situ measurements of the energy balance in dijet and photon + jet events [38]. Additional selection criteria are applied to each event to remove spurious jet-like features originating from isolated noise patterns in certain HCAL regions. The jet energy resolution amounts typically to 15% at 10 GeV, 8% at 100 GeV, and 4% at 1 TeV, to be compared to about 40%, 12%, and 5%, respectively, obtained when the calorimeters alone are used for jet clustering [34].

Jets are required to have p_T > 45 GeV, |η| < 2.4, and an angular separation from leptons of ΔR > 0.3.
Event selection

We require that events in both the eej and µµj channels contain at least two leptons and at least one jet that satisfy the above identification criteria. Additional kinematic requirements are applied to remove regions in which the trigger and identification criteria are not at plateau efficiency and to reduce large backgrounds. This creates a basic preselection region: the jet p_T must be larger than 125 GeV, the dilepton invariant mass M_ℓℓ must be larger than 110 GeV, and the scalar sum of transverse momenta of objects in the event, S_T = p_T(ℓ_1) + p_T(ℓ_2) + p_T(j_1), is required to exceed 250 GeV, where ℓ_1 is the highest-p_T lepton in the event, ℓ_2 is the second-highest-p_T lepton, and j_1 is the highest-p_T jet. The two leptons in the event are required to have opposite charges.

After this initial selection, a final selection is optimized for each channel separately by maximizing S/√(S + B), where S is the number of signal events in the simulation passing a given selection and B is the number of background events in the simulation passing the same selection. We optimize for each LQ mass hypothesis by varying the requirements on M_ℓj and S_T. Here M_ℓj is defined as the higher of the two possible lepton-jet mass combinations.
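A threshold scan of this kind can be illustrated as follows (a minimal Python sketch; the event values and candidate thresholds are invented, and S/√(S + B) stands in for the full per-mass-point optimization):

```python
import math

def significance(s, b):
    """Figure of merit S/sqrt(S+B) used for the threshold optimization."""
    return s / math.sqrt(s + b) if s + b > 0 else 0.0

def optimize_threshold(signal_vals, background_vals, thresholds):
    """Return the cut value (e.g. on S_T or the lepton-jet mass) maximizing S/sqrt(S+B)."""
    best_t, best_z = None, -1.0
    for t in thresholds:
        s = sum(1 for v in signal_vals if v > t)
        b = sum(1 for v in background_vals if v > t)
        z = significance(s, b)
        if z > best_z:
            best_t, best_z = t, z
    return best_t, best_z

# Toy samples: the signal sits at high S_T, the background is mostly soft.
signal = [900.0] * 10
background = [300.0] * 100 + [900.0]
best_t, best_z = optimize_threshold(signal, background, [250.0, 600.0])
```

In this toy, the tighter cut wins: it keeps all ten signal events while rejecting almost all of the background.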
As discussed in Section 1, owing to the unique aspects of single LQ decays, two generator-level requirements are applied to the simulated signal samples. The first is M_ℓℓ > 110 GeV, to remove LQ decays that are in the Z boson interference region. The second is a requirement on M_ℓj, chosen to remove the t-channel diagram contributions in the low-mass off-shell region while preserving most of the resonant signal. This requirement is set at M_ℓj > 0.67 M_LQ for the first-generation studies and M_ℓj > 0.75 M_LQ for the second-generation studies. The thresholds for M_ℓj were chosen separately for each channel because of the differences in the distribution shape. The dilepton invariant mass requirement at the generator level precisely matches the reconstruction-level requirement at the preselection. These two requirements define the resonant region. Cross sections at the generator level before and after these requirements are provided in Table A.1 in the appendix.

The eej channel selection after optimization is identical for all couplings. The threshold on S_T starts at 250 GeV for M_LQ = 300 GeV and increases linearly until it reaches a plateau value of 900 GeV at M_LQ = 1125 GeV. The M_ej threshold starts at 200 GeV for M_LQ = 300 GeV and increases linearly until it plateaus at 1900 GeV above M_LQ = 2000 GeV. In the µµj channel, after optimization, the threshold on S_T starts at 300 GeV for M_LQ = 300 GeV and increases linearly until it plateaus at 1000 GeV above M_LQ = 1000 GeV. The M_µj threshold starts at 200 GeV for M_LQ = 300 GeV and increases linearly until it plateaus at 800 GeV above M_LQ = 900 GeV. The exact threshold values are listed in Tables B.1 and B.2 in the appendix.
Background estimations

The SM processes that mimic the signal signature are Z/γ* + jets, tt, single top quark, diboson + jets, W + jets, and QCD multijet events where jets are misidentified as leptons. The dominant contributions come from the first two processes, whereas the other processes provide minor contributions to the total number of background events.

The contribution from the Z/γ* + jets background is estimated with a simulated sample that is normalized to agree with data at preselection in the Z-enriched region of 80 < M_ℓℓ < 100 GeV, where M_ℓℓ is the dilepton invariant mass. With this selection, the data sample (with the non-Z/γ* + jets simulated samples subtracted) is compared to Z/γ* + jets in simulation. The resulting scale factor, representing the ratio of the measured yield to the predicted yield, is R_Z = 0.98 ± 0.01 (stat) in both the eej and µµj channels. This scale factor is then applied to the simulated Z/γ* + jets sample in the signal region of M_ℓℓ > 110 GeV. In order to account for possible mismodeling of the p_T(ℓℓ) spectrum of the Z/γ* + jets background sample, where p_T(ℓℓ) is the scalar sum of the p_T of the two highest-p_T leptons in the event, we perform a bin-by-bin rescaling of yields at preselection and full selection by scale factors measured in an inverted M_ℓℓ selection (M_ℓℓ < 110 GeV). These scale factors differ from unity by 1% to 10%, depending on the p_T(ℓℓ) bin, and are applied to the Z/γ* + jets sample in the signal region of M_ℓℓ > 110 GeV.
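The normalization step described above amounts to a simple ratio of yields; a minimal Python sketch, with invented control-region counts (the actual yields are not given in the text):

```python
def z_scale_factor(n_data, n_non_z_sim, n_z_sim):
    """R_Z = (data - non-Z/gamma* backgrounds) / simulated Z/gamma*+jets yield,
    measured in the Z-enriched control window 80 < M_ll < 100 GeV."""
    return (n_data - n_non_z_sim) / n_z_sim

# Hypothetical yields chosen to reproduce the quoted R_Z = 0.98.
r_z = z_scale_factor(n_data=10500.0, n_non_z_sim=700.0, n_z_sim=10000.0)
```

The factor is then applied multiplicatively to the simulated Z/γ* + jets yield in the signal region.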
We estimate the tt background with a tt-enriched eµ sample in data, selected using the single-muon trigger. We use a selection that is identical to our signal selection in terms of kinematic requirements, except that we require at least a single muon and a single electron rather than two same-flavor leptons. The eµ sample is considered to be signal-free, because limits on flavor-changing neutral currents imply that LQ processes do not present a different-flavor decay topology [13, 14]. The tt background is largely dominant in the eµ sample with respect to the other backgrounds. This background is expected to produce the ee (µµ) final state with half the probability of the eµ final state, thus the eµ sample is scaled by a factor of 1/2. This factor is multiplied by the ratio of electron (muon) identification and isolation efficiencies, R_ee/eµ (R_µµ/eµ). The estimate is further scaled by the ratio of the double-electron trigger efficiency to the single-muon efficiency, R_trig,ee in Eq. (3), or by the ratio of the efficiency of the single-muon trigger in dimuon final states to the single-muon efficiency, R_trig,µµ in Eq. (4). The resulting estimates of the number of tt events in the ee and µµ channels are

N_ee^tt,est = (1/2) R_ee/eµ R_trig,ee (N_eµ^data − N_eµ^non-tt sim),   (1)
N_µµ^tt,est = (1/2) R_µµ/eµ R_trig,µµ (N_eµ^data − N_eµ^non-tt sim),   (2)

with

R_trig,ee = ε_ee / ε_µ,   (3)
R_trig,µµ = (1 − (1 − ε_µ)²) / ε_µ,   (4)

where ε_µ and ε_ee are the single-muon trigger and double-electron trigger efficiencies, respectively, and N_eµ^data and N_eµ^non-tt sim are the numbers of eµ events observed in data and estimated from backgrounds other than tt, respectively. R_trig,µµ is the ratio of the efficiency of the single-muon trigger on a dimuon sample to its efficiency on a single-muon sample; the numerator, 1 − (1 − ε_µ)², is one minus the likelihood of trigger failure on both muons.
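The eµ-based estimate reduces to the scalings described above; a Python sketch with invented efficiencies and yields (the true inputs are not quoted in the text):

```python
def r_trig_mumu(eff_mu):
    """Eq. (4): single-muon trigger efficiency on a dimuon sample over the
    single-muon efficiency; the numerator is one minus the probability
    that the trigger fails on both muons."""
    return (1.0 - (1.0 - eff_mu) ** 2) / eff_mu

def tt_estimate(n_emu_data, n_emu_non_tt, r_idiso, r_trig):
    """N_tt^est = 1/2 * R_idiso * R_trig * (N_emu^data - N_emu^non-tt sim)."""
    return 0.5 * r_idiso * r_trig * (n_emu_data - n_emu_non_tt)

# Hypothetical inputs for the mumu channel.
eff_mu = 0.9
n_tt_mumu = tt_estimate(
    n_emu_data=1000.0, n_emu_non_tt=100.0,
    r_idiso=0.95, r_trig=r_trig_mumu(eff_mu),
)
```

Note that R_trig,µµ exceeds one whenever ε_µ > 0: two muons give two chances to fire the single-muon trigger.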
The contribution from QCD multijet processes is determined by a method that makes use of the fact that neither signal events nor events from other backgrounds produce final states with same-charge leptons at a significant level. We create four selections, with both opposite-sign (OS) and same-sign (SS) charge requirements, as well as isolated and non-isolated requirements. Electrons in isolated events must pass the isolation criteria optimized for high-energy electrons [33], and muons are required to have a relative tracker isolation less than 0.1, as discussed in Section 4. Non-isolated events are those with leptons failing these criteria. The four selections are as follows:

A: OS, isolated;  B: OS, non-isolated;  C: SS, isolated;  D: SS, non-isolated.   (5)

The shape of the background is taken from the SS region with isolation requirements, and the normalization is obtained from the ratio between the number of OS events and the number of SS events in the non-isolated selection. Thus, the number of events, N_QCD,est, is estimated by

N_QCD,est = r_B/D N_C,   (6)

where N_C is the number of events in region C of Eq. (5) and r_B/D is the ratio of the number of events (measured in data with the simulated non-QCD backgrounds subtracted) in regions B and D. The result is that QCD multijet processes account for 2% (1%) of the total SM background in the eej (µµj) channel.

The contributions of the remaining backgrounds (diboson + jets, W + jets, single top quark) are small and are determined entirely from simulation.

The preselection-level distributions in M_ℓℓ, S_T, and M_ℓj are shown in Figs.
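The same-sign/isolation construction is a standard ABCD-style estimate; a minimal Python sketch with invented region counts (non-QCD simulation assumed already subtracted):

```python
def qcd_abcd_estimate(n_c_ss_iso, n_b_os_noniso, n_d_ss_noniso):
    """N_QCD,est = r_B/D * N_C: the shape comes from the same-sign isolated
    region C, and the OS/SS transfer factor r_B/D is measured in the
    non-isolated regions B and D."""
    r_bd = n_b_os_noniso / n_d_ss_noniso
    return r_bd * n_c_ss_iso

# Hypothetical region counts.
n_qcd = qcd_abcd_estimate(n_c_ss_iso=50.0, n_b_os_noniso=200.0, n_d_ss_noniso=100.0)
```

The method assumes the OS/SS ratio for multijet events is the same for isolated and non-isolated leptons, which is why it is measured in the non-isolated sideband.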
4 and 5 for the observed data and estimated backgrounds, where they are compared with a signal LQ mass of 1000 GeV in the eej channel, and with a signal LQ mass of 600 GeV in the µµj channel. In all plots the Z/γ* + jets prediction is normalized to data and the tt prediction is taken from the study based on data. Data and background are found to be in agreement. The numbers of events selected in data and in the backgrounds at each final selection (for each hypothesis mass) are shown in Tables C.1, C.2, and C.3 in the appendix. The observed data and background predictions are also compared after the final selection, for λ = 0.4 and a signal LQ mass of 1000 GeV in the eej channel and a signal LQ mass of 600 GeV in the µµj channel, in Figs. 6 and 7.

Systematic uncertainties

The sources of systematic uncertainties considered in this analysis are listed below. To determine the uncertainties in signal and background, each kinematic quantity listed is varied individually according to its uncertainty, and the final event yields are re-measured to determine the variation in the predicted number of background and signal events.

Jet energy scale and resolution uncertainties are estimated by assigning p_T- and η-dependent uncertainties in the jet energy corrections, as discussed in Ref. [38], and varying the jet p_T according to the magnitude of that uncertainty. The uncertainty in the jet energy resolution is assessed by modifying the p_T difference between the particle-level and reconstructed jets by an η-dependent value between 5% and 30% for most jets [38].
Uncertainties in the charged-lepton momentum scale and resolution also introduce uncertainties in the final event acceptance. An energy scale uncertainty of 0.6% in the ECAL barrel and 1.5% in the ECAL endcap is assigned to electrons [39], and an uncertainty of 10% in both the ECAL barrel and endcap is applied to the electron energy resolution [39]. There is an uncertainty of 0.6% per electron in the reconstruction, identification, and isolation requirements. For muons, a p_T-dependent scale uncertainty of 5% (p_T/1 TeV) is applied, as well as a 1-4% p_T-dependent resolution uncertainty [20]. In the case of momentum scale uncertainties, the momentum is directly varied, and in the case of momentum resolution uncertainties, the lepton momentum is subjected to a Gaussian random smearing within the uncertainty. A 2% per-muon uncertainty in the reconstruction, identification, and isolation requirements, as well as a 1% muon HLT efficiency uncertainty, are assumed as well.

Other important sources of systematic uncertainty are related to the modeling of the backgrounds in the simulation. The uncertainty in the Z/γ* + jets background shape is determined by using simulated samples with the renormalization and factorization scales and the matrix-element parton-shower matching thresholds varied by a factor of two up and down. The scale factors for the normalization of the Z/γ* + jets background are assigned an uncertainty of 0.6% in both channels, and the normalization of the tt background is assigned an uncertainty of 0.5% in both channels, based on the statistical uncertainties measured in the studies described in Section 6. An additional uncertainty of 4% is applied to the tt background normalization in the µµj channel to account for possible signal contamination from first-generation LQs in the control sample (the contamination is extremely small in the other channel because of the suppressed second-generation signal). An uncertainty in the Z/γ* + jets background from the p_T(ℓℓ) scale factors is
assessed by taking the weighted average of the uncertainties from each p_T(ℓℓ) bin. The estimate of the QCD multijet background from data has an uncertainty of 15%.

An uncertainty in the modeling of pileup in simulation is determined by varying the number of simulated pileup interactions up and down by 6% [40], and an uncertainty of 2.6% in the measured integrated luminosity is applied [41].

Uncertainties in the signal acceptance, the background acceptance, and the cross sections due to the PDF choice, of 4-10% for signal and 3-9% for background, are applied, following the PDF4LHC recommendations described in Refs. [42, 43].

Finally, a statistical uncertainty associated with the size of the simulated sample is included for both background and signal.

The systematic uncertainties are listed in Table 1, together with their effects on signal and background yields, corresponding to the final selection values optimized for M_LQ = 600 GeV. The PDF uncertainty is larger in the µµj channel because of the large uncertainty associated with the s-quark PDF.

Results

The observed data are consistent with the no-signal hypothesis. We set an upper limit on the leptoquark cross section by using the CL_S modified frequentist method [44, 45] with the final event yields. A log-normal probability function is used to model the systematic uncertainties, whereas statistical uncertainties are described with gamma distributions, with widths determined according to the number of events simulated or measured in data control regions.

To isolate the limits for resonant LQ production, we apply the resonant requirements at the generator level on both the lepton + jet mass, M(ℓ, j) > 0.67 M_LQ (0.75 M_LQ) for first- (second-) generation LQs, and on the dilepton mass, M_ℓℓ > 110 GeV. These requirements make the limits extracted from data more conservative and are discussed in Section 5.
A resonant cross section σ_res is computed with respect to those requirements. Limits are then computed with the reduced sample of simulated signal events and compared to σ_res. The 95% confidence level (CL) upper limits on σ_res·β as a function of the leptoquark mass are shown in Fig. 8, together with the resonant cross section predictions for single scalar leptoquark production. The uncertainty band on the theoretical cross section prediction corresponds to uncertainties in the total cross section due to PDF variations, with an additional +70% uncertainty from the k factor from NLO corrections [46]. The observed limits are listed in Tables B.1 and B.2 in the appendix. By comparing the observed upper limit with the theoretical production cross section times branching fraction, we exclude single leptoquark production at 95% CL for LQ masses below the values given in Table 2. Limits on single production of the S_0^R-type LQ from the H1 collaboration exclude LQ production up to 500 GeV (λ = 1.0) and up to 350 GeV (λ = 0.6) [16].
Summary

A search has been performed for the single production of first- and second-generation scalar leptoquarks in final states with two electrons and a jet or two muons and a jet, using a data set of proton-proton collisions at 8 TeV corresponding to an integrated luminosity of 19.6 fb−1. The selection criteria are optimized for each leptoquark signal mass hypothesis. The number of observed candidates for each mass hypothesis agrees with the number of expected standard model background events. Single production of first- (second-) generation leptoquarks with a coupling of 1.0 is excluded at 95% confidence level for masses below 1755 (660) GeV. These are the most stringent limits to date for single production. The first-generation limits for couplings greater than 0.6 are stronger than those from pair production and are the most stringent overall limits on leptoquark production in the first generation to date.

This section contains tables of data, background, and signal yields after the final selection. Event counts vary between the two channels due to differences in the optimized thresholds for S_T and M_ℓj, as well as differences in the electron and muon efficiencies. The first listed uncertainty is statistical, the second is systematic; in cases where only one uncertainty is listed, it is statistical.

Figure 3: Cross sections for single LQ production, calculated at LO in CALCHEP and scaled by the acceptance of the requirements described in Section 5, as a function of the LQ mass in GeV.
Figure 4 : Figure 4: Distributions of M ee (top left), S T (top right), and M ej (bottom) at preselection in the eej channel."Other backgrounds" include diboson, W+ jets, and single top quark contributions.The points represent the data and the stacked histograms show the expected background contributions.The open histogram shows the prediction for an LQ signal for M LQ = 1000 GeV and λ = 0.4.The horizontal error bars on the data points represent the bin width.The last bin includes overflow. Figure 5 :Figure 6 : Figure 5: Distributions of M µµ (top left), S T (top right), and M µj (bottom) at preselection in the µµj channel."Other backgrounds" include diboson, W+ jets, single top quark, and QCD multijet contributions.The points represent the data and the stacked histograms show the expected background contributions.The open histogram shows the prediction for an LQ signal for M LQ = 600 GeV and λ = 1.0.The horizontal error bars on the data points represent the bin width.The last bin includes overflow. Figure 7 : Figure 7: Distributions of S T and M µj at final selection, in the µµj channel.The points represent the data and the stacked histograms show the expected background contributions.The open histogram shows the prediction for an LQ signal for M LQ = 600 GeV and λ = 1.0.The horizontal error bars on the data points represent the bin width.The last bin includes overflow. 
Figure 8: Expected and observed upper limits at 95% CL on first and second generation leptoquark single production resonant cross section as a function of the leptoquark mass. First generation limits are shown on the left plot with a resonant region of M j > 0.66 M LQ, M > 110 GeV, and second generation limits are shown on the right plot with a resonant region of M j > 0.75 M LQ, M > 110 GeV. The uncertainty bands on the observed limit represent the 68% and 95% confidence intervals. The uncertainty band on the theoretical cross section includes uncertainties due to PDF variation and the k factor.

Table 1: Systematic uncertainties (in %) and their effects on total signal (S) and background (B) in both channels for the M LQ = 600 GeV final selection.

Table A.1: Signal cross sections calculated at LO in CALCHEP. Resonant cross sections scaled by the acceptance of the selections described in Section 5 are listed under each corresponding LO cross section.

Table B.1: The eej channel threshold values for S T, M ej, and M ej,gen vs. LQ mass (for all couplings), and the corresponding observed limits. Columns: M LQ, S T threshold, M ej threshold, M ej,gen threshold, observed limit on σ res.

Table B.2: The µµj channel threshold values for S T, M µj, and M µj,gen vs. LQ mass, and the corresponding observed limits. Columns: M LQ, S T threshold, M µj threshold, M µj,gen threshold, observed limit on σ res.

Table C.1: Data and background yields after final selection for the eej channel for first-generation LQs, shown with statistical and systematic uncertainties. "Other backgrounds" refers to diboson+jets, W+ jets, single-top quark, and QCD. The values do not change above 2000 GeV.
2016-04-27T20:54:35.000Z
2015-09-12T00:00:00.000
{ "year": 2015, "sha1": "4dceaa0ca230d459686beeaa3f7546566f774a84", "oa_license": "CCBY", "oa_url": "http://link.aps.org/pdf/10.1103/PhysRevD.93.032005", "oa_status": "HYBRID", "pdf_src": "Arxiv", "pdf_hash": "e4839dc63d031c6567e10f3d60ee4e95b278c691", "s2fieldsofstudy": [ "Physics" ], "extfieldsofstudy": [ "Physics" ] }
262173570
pes2o/s2orc
v3-fos-license
Comparison of CD4+/CD8+ Lymphocytic Subpopulations Pre- and Post-Antituberculosis Treatment in Patients with Diabetes and Tuberculosis Is there a CD4+ and CD8+ immunity alteration in patients with pulmonary tuberculosis (TB) and diabetes (DM) that does not recover after antituberculosis treatment? This prospective comparative study evaluated CD4+ and CD8+ lymphocytic subpopulations and antituberculosis antibodies in patients with diabetes and tuberculosis (TB-DM), before and after antituberculosis treatment. CD4+ T cell counts were lower in patients with TB-DM compared to those with only TB or only DM, and these levels remained low even after two months of anti-TB treatment. Regarding the CD8+ T cell analysis, we identified higher blood values in the DM-only group, which may be explained by the high prevalence of latent tuberculosis (LTBI) in patients with DM. IgM antituberculosis antibody levels were elevated in patients with only TB at baseline and 2 months post-anti-TB treatment, while IgG did not show any relevant alterations. Our results suggest an alteration in CD4+ immunity in patients with TB-DM that did not normalize after antituberculosis treatment. Introduction Tuberculosis (TB) and diabetes mellitus (DM) represent significant global health challenges, particularly in low-income countries. The coexistence of these two conditions, TB-DM, has drawn increasing attention due to its prevalence and the complex interactions between the diseases [1,2]. TB is estimated to affect 10.6 million individuals worldwide annually, while DM has reached a global prevalence of 537 million, with projections anticipating a further rise by 2045 [3,4]. The Mexico and Texas border reported a DM prevalence of 25% among TB cases [5]. Notably, individuals with DM face a 3-4-fold increased risk of developing active TB, making the co-occurrence of these conditions a significant public health concern [6,7].
DM in patients with TB has been associated with impaired cellular immunity, altered lymphocyte subpopulations, and aberrant cytokine production. CD4+ T cells and their cytokine profile (IL-12, IFN-γ, TNF-α, IL-17, and IL-23) are the main immunity alterations, with IFN-γ and TNF-α being the most important ones [8][9][10][11]. On the other hand, CD8+ T cells have a protective role in immunity against TB; they can recognize infected cells and produce cytokines such as TNF and IFN-γ, lyse infected cells, and kill Mycobacterium tuberculosis (MTB), though not as effectively as CD4+ T cells. Therefore, CD8+ T cells also play a critical role in preventing the reactivation of latent tuberculosis infection (LTBI) [12].

The tuberculin skin test (TST) and QuantiFERON-TB Gold Plus (QFT-Plus) give an indirect qualitative measure of immune cellular function against TB. QFT-Plus, in addition, is useful in diagnosing LTBI in the general population, especially in DM, in agreement with the WHO's recommendations as a method to identify and treat LTBI, avoiding TB reactivation [4].

Finally, regarding humoral immunity, IgM and IgG antibodies against MTB are crucial to trigger the adaptive immune response. These antibodies have been of value in diagnosing lung TB in patients with a negative sputum smear, and their positivity indicates active TB [13,14].

The aforementioned immunological abnormalities have been implicated in the progression from LTBI to active disease in patients with DM. This increased risk has been associated with higher levels of glycosylated hemoglobin and lipid profile alterations [15].
Despite extensive research on immune dysfunction in patients with TB-DM, there remains a scarcity of data on the dynamics of immunological characteristics before and after antituberculosis treatment in this population [16,17]. This knowledge gap prompted us to carry out a prospective, comparative study to investigate the impact of antituberculosis treatment on adaptive immunity in patients with TB-DM.

The main objective of our study focused on evaluating CD4+ and CD8+ T cell behavior before and two months after anti-TB treatment in patients with DM. In addition, our secondary objectives were the evaluation of anti-MTB antibodies in patients with TB-DM, as well as the evaluation of cellular immunity with the TST and QFT-Plus.

Study Population We conducted an observational, comparative study from October 2018 to September 2022 at the University Hospital "Dr. José Eleuterio González", a referral center for TB in Monterrey, Mexico. The Internal Institutional Review Board (IRB) approved the study with protocol code NM18-00007 to ensure compliance with ethical guidelines and patient confidentiality. We included four groups of participants: (1) healthy subjects, (2) patients with only DM, (3) patients with only pulmonary TB, and (4) patients with pulmonary TB-DM. Biochemical parameters and cellular/humoral immunity were measured at baseline, and a second measurement of the latter was taken after 2 months, only in the TB groups (only TB and TB-DM). Eligible patients were adults (aged ≥18 years) of any gender, recently diagnosed with TB, with or without DM. We excluded patients with a diagnosis of human immunodeficiency virus (HIV) infection, use of corticosteroids, the presence of bacterial or fungal infection, immunosuppression due to chemotherapy or the use of biologics, active cancer of any kind, collagen or hematological diseases, pregnancy, or breastfeeding.
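The four-group design and exclusion screening described above can be sketched as a simple eligibility function. The field names, exclusion labels, and example record are hypothetical illustrations, not the study's actual data dictionary:

```python
# Hypothetical screening logic mirroring the study design: four groups
# (healthy, DM-only, TB-only, TB-DM) after applying the exclusion
# criteria. Field and condition names are invented for illustration.
EXCLUSIONS = {"hiv", "corticosteroids", "active_infection",
              "immunosuppression", "active_cancer", "collagen_disease",
              "hematological_disease", "pregnancy_or_breastfeeding"}

def assign_group(patient):
    # adults only, and none of the exclusion criteria may be present
    if patient["age"] < 18 or EXCLUSIONS & patient["conditions"]:
        return None  # not eligible
    if patient["tb"] and patient["dm"]:
        return "TB-DM"
    if patient["tb"]:
        return "TB-only"
    if patient["dm"]:
        return "DM-only"
    return "healthy"

print(assign_group({"age": 52, "tb": True, "dm": True, "conditions": set()}))
# TB-DM
```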
Active TB was defined through detecting acid-fast bacilli in Ziehl-Neelsen staining of sputum or bronchoalveolar lavage, positive culture, or positive mycobacterial PCR. DM was defined according to American Diabetes Association (ADA) criteria [18]. Several biochemical parameters were analyzed, including fasting glucose, lipid profile, urea, creatinine, transaminases, and HbA1c. Anthropometric measurements, including weight and height, were also collected.

Lymphocytes Subpopulations Complete peripheral blood was used to evaluate the lymphocyte subpopulations in healthy subjects and patients via flow cytometry. The blood sample (50 µL) was used to assess absolute cell counts with BD Multitest™ CD3/CD8/CD45/CD4 (Becton Dickinson, USA) and BD Trucount™ Tubes (Becton Dickinson, USA). Samples were incubated for 30 min at room temperature; erythrocytes were lysed using an LNW protocol. After staining, 2500 lymphocytes were acquired in a FACS Canto II flow cytometer (Becton Dickinson, San José, CA, USA) and analyzed using FACS Canto software version 3.0 (Becton Dickinson, San José, CA, USA).

Cellular Immunity Cellular immunity was assessed through the tuberculin skin test (TST) and QuantiFERON-TB Gold Plus (QFT-Plus) at the time of inclusion.

Anti-MTB Antibodies Indirect ELISA tests were carried out according to the patent No.
285260, called "Proceso de detección de tuberculosis", developed in the Immunology Department of the Medical School of the UANL (Arce-Mendoza and Rosas-Taraco, 2011). First, 1 µg/well of the antigen diluted in acetate buffer pH 7.2-7.4 was placed in 96-well Costar plates. The plates were incubated overnight at 4 °C; then, supernatants were discarded, and the plates were blocked with 200 µL of 5% skim milk diluted in phosphate buffer. Sera of controls and patients were used to evaluate the levels of anti-MTB protein antibodies [19]. Samples were diluted at 1:50 in 1% skim milk. Peroxidase-conjugated anti-human IgM and IgG antibodies were diluted at 1:10,000. The plates were read in an iMark™ spectrophotometer (Bio-Rad, Tokyo, Japan) at λ = 490 nm.

Statistical Analysis Data were analyzed using SPSS v.20.0 and GraphPad Prism. Descriptive statistics were used to describe demographic variables. We used the chi-square test for categorical variables and ANOVA (post hoc Tukey) or Kruskal-Wallis (post hoc Dunn) for quantitative variables among the different groups, according to their distribution. A p-value of <0.05 was considered statistically significant.

[…] patients with only TB, 6/15 (40%) patients with only DM, and 6/15 (40%) healthy subjects. A significant correlation between the TST and QFT-Plus was present in all groups (Kappa 0.8), except in the DM-only group, where there was a correlation between the TST and QFT-Plus of Kappa 0.438, with p < 0.001 (Table 1).

We also identified a significant difference in the absolute count of CD8+ T cells between the DM-only group with 678 (±299.4) cells/µL and the TB-only group with 343 (±232.2) cells/µL, where p < 0.001 (Figure 2b).

Evaluation of Antibodies Immunity The level of anti-MTB IgM antibodies at baseline was 0.47 (±0.20) in the DM-only group vs.
0.72 (±0.27) in only TB, with p < 0.05. There was a significant difference at 2 months of anti-TB treatment between the DM-only group, 0.47 (±0.20), and the TB-only group, 0.77 (±0.20), p < 0.01 (Figure 3a). No significant difference was found in anti-MTB IgG antibodies (Figure 3b).

Discussion In this study, we compared the immune profiles in patients with pulmonary TB-DM vs. those with only pulmonary TB, only DM, and healthy subjects. We observed differences in cellular and humoral immunity patterns at the pre-treatment stage that were partially corrected after 2 months of treatment. In addition, basal assessment of cellular immunity was abnormal in only the DM group.

CD4+ T cells play a major role in adaptive immunity against TB, according to our analysis of lymphocytic subpopulations. We found that CD4+ levels were lower in the TB-DM group compared with the other three groups, and there was no recovery after 2 months of anti-TB treatment (only TB and TB-DM groups). This finding is similar to other studies that showed a preponderant role of CD4+ activity in the induction and maintenance of protective immunity against TB through the production of IFN-γ and TNF-α. DM seems to induce a decrease in CD4+ blood levels and activity in patients with TB, the same as in our study [8,9].

Interestingly, according to our results, the persistence of low post-treatment CD4+ levels in the active pulmonary TB groups (only TB, TB-DM) was significantly lower than in the DM-only group and healthy subjects, indicating an absence of cellular immunity recovery after 2 months of treatment. In addition, we compared our low CD4+ levels in active TB with historical controls of other studies and obtained the same results [20]. However, we did not find a significant difference in CD4+ levels within the same TB-DM group (pre- and post-treatment), but their blood value is relevant and could be associated with impaired immune recovery, according to other published studies. In our study, subjects presented with low CD4+ levels at baseline that persisted after 2 months of anti-TB treatment, suggesting an impaired immune recovery [16].

Regarding the CD8+ T cell analysis, we identified higher blood values in the DM-only group compared with the TB-only group, which may be explained by the high prevalence of LTBI in patients with DM, similar to other studies that found that the frequency of antigen-specific CD8+ T cells was higher in individuals with LTBI and could be associated with a protective effect to avoid the progression to active TB [12].
In our study, IgM levels were elevated in patients with only TB at baseline, which indicates active TB. However, 2 months post-anti-TB treatment, levels remained significantly elevated in the same group. We do not have a clear explanation for this paradoxical response; it could result from a slower decrease in blood levels, which might return to normal in a subsequent analysis after 3 or 4 months of treatment. In addition, there was no difference in IgM and IgG in patients with TB-DM [14].

Finally, the TST and QFT-Plus exhibited good agreement regarding active TB (TB-only and TB-DM groups), but the TST performed poorly in the DM-only group compared to QFT-Plus, which may reflect anergy due to impaired immunity in this population that leads to false negative TST results. These results suggest that QFT-Plus should be used instead of the TST when LTBI is suspected in patients with DM, as per the WHO's recommendations [4].

There have been hypotheses on cellular immunity behavior in patients with TB, which we identified in our research. A study demonstrated an alteration in the transcriptomes of patients with both TB and DM, which reduced type I immunity and interferon responses in correlation with intermediate and elevated glycosylated hemoglobin levels [21].

Other studies have associated lower CD4+ T cell and IL-10 activity in the blood and lungs with elevated glycosylated hemoglobin levels that persisted after anti-TB treatment [22]. Hypertriglyceridemia associated with foamy macrophages and low levels of HDL cholesterol has been associated with tissue damage [23]. Nonetheless, we did not identify a significant difference in triglyceride blood levels. Still, we found low levels of HDL cholesterol that recovered after anti-TB treatment in the TB-DM group, as reported in other studies [24].
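The TST/QFT-Plus agreement discussed above is typically quantified with Cohen's kappa, which corrects observed agreement for the agreement expected by chance. A minimal sketch with invented paired test results (1 = positive, 0 = negative):

```python
# Cohen's kappa for agreement between two binary tests (e.g. TST vs.
# QFT-Plus). The paired results below are invented, not study data.
def cohens_kappa(a, b):
    n = len(a)
    p_obs = sum(x == y for x, y in zip(a, b)) / n  # observed agreement
    # chance agreement from the marginal positive/negative rates
    pa_pos, pb_pos = sum(a) / n, sum(b) / n
    p_exp = pa_pos * pb_pos + (1 - pa_pos) * (1 - pb_pos)
    return (p_obs - p_exp) / (1 - p_exp)

tst = [1, 1, 1, 0, 0, 1, 0, 0, 1, 1]
qft = [1, 1, 0, 0, 0, 1, 0, 1, 1, 1]
print(round(cohens_kappa(tst, qft), 3))  # 0.583
```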
Limitations to this study include a small sample size and loss to follow-up. It is also a single-center study. We did not measure phagocytosis, other lymphocyte subpopulations, or cytokines, and we did not evaluate the anti-DM treatment associated with immunity. Other immune function alterations associated with elevated glycosylated hemoglobin levels involve monocyte activation, antigen presentation, and phagocytosis [23], which we, unfortunately, did not evaluate. We focused only on the baseline and on the period after two months of intensive anti-TB treatment. We did not follow up until the completion of treatment after 6 or 9 months, which could be an interesting evaluation for future studies.

Conclusions There is an alteration of adaptive immunity CD4+ T cells in patients with TB and DM which does not recover after 2 months of anti-TB treatment. We need more studies to confirm this finding.

* Refer to Figure 1 for baseline and follow-up results.
2023-09-24T15:28:42.351Z
2023-09-01T00:00:00.000
{ "year": 2023, "sha1": "00037e513286c62b090ca87bfffc0a11aac48206", "oa_license": "CCBY", "oa_url": "https://www.mdpi.com/2076-0817/12/9/1181/pdf?version=1695192444", "oa_status": "GOLD", "pdf_src": "PubMedCentral", "pdf_hash": "f7fa5b0f0ba7621127548b285e97443f9e73993e", "s2fieldsofstudy": [ "Medicine", "Biology" ], "extfieldsofstudy": [ "Medicine" ] }
244774132
pes2o/s2orc
v3-fos-license
Association between work status and depression in informal caregivers: a collaborative modelling approach Abstract Background Care is regularly provided on an informal basis by family and friends and it is well established that caregivers experience high rates of depression. The majority of research on caregivers tends to focus on older, full-time caregivers, with less attention paid to working caregivers (in paid employment). The aim of this study is to explore the impact of work status on depression in caregivers. Methods A sample of individuals from the 2014 European Social Survey dataset, aged 18 and older, who reported being a caregiver, were investigated (n = 11 177). Differences in sociodemographic, mental and physical health and social network variables, between working and non-working caregivers, were investigated. Hierarchical logistic regression models were used to investigate associations between the caregivers’ work status and depression. This study was developed in partnership with a panel of caregivers who contributed to the conceptualization and interpretation of the statistical analysis. Results Findings showed that 51% of caregivers reported being in paid employment. Non-working caregivers were more likely to be female, older, widowed, have lower education levels and provide intensive caring hours. They were also more likely to report depressive symptoms than working caregivers after controlling for sociodemographic, social networks and intensity of caring (adjusted odds ratio = 1.77, 95% confidence interval = 1.54–2.03). The panel considered policies to support continued work important as a means of maintaining positive mental health for caregivers. Conclusions Supportive policies, such as flexible working and care leave, are recommended to allow caregivers to continue in paid work and better manage their health, caring and working responsibilities. 
Introduction Informal care is regularly provided by family and friends 1 and plays an essential role in the healthcare system. Informal caring responsibilities often fall disproportionally on certain demographic groups, such as middle-aged women 1 with lower levels of education. 2 European differences are evident from a recent publication, reporting that caregivers were most likely to be unemployed women, aged 50-59 years, using European Social Survey (ESS) data. 2 Links with reduced wellbeing have been identified among caregivers. [2][3][4] Informal caregivers, who provide care to a sick or disabled relative, are at an increased risk of depression compared with non-caregivers. 3 Rates of depression vary within the caregivers' population, from 29% up to 42%, 5,6 which is considerably higher than the prevalence in the general population at 4.4%. 7 This is a cause for concern as depression in informal caregivers can have negative consequences on both the caregivers' and the care-recipients' health and wellbeing. 8,9 Sociodemographic factors associated with increased odds of depression in general 10 and caregiver populations 6,11 include lower education, female gender, economic inactivity and being divorced or widowed. Caregivers face further unique caregiving-related risk factors. Increased caregiving stressors, such as physical health symptoms 8 and caregiver burden, 12 are associated with depression. Working caregivers may be at an increased risk as they face the challenge of balancing caring responsibilities with work and other responsibilities. While the strain of this dual role has been discussed, 4,13 other studies suggest that caregivers may benefit from paid employment. 14,15 Positive links between employment and caregiver wellbeing were identified in a study of parental caregivers of children with intellectual disabilities.
16 Research found that full-time working caregivers had lower levels of depression, measured on Beck's Depression Inventory scale, compared with caregivers working less than part-time. 16 Elsewhere, employment was shown to reduce caregivers' distress. 17 The benefits of paid employment may include the opportunity to have a role outside of caring, access to workplace-based social support and greater social networks and enhanced economic resources. 15 A conceptual framework of the challenges faced by those combining work and unpaid care identified multiple interacting challenges including high and/or competing caregiver demands, psychosocial or emotional stressors, the distance between the workplace and care-recipient's residence and caregiver's health and financial pressure. 4 Potential solutions to these challenges include informal or formal help with caring, domestic support, technology, work accommodations, flexible work hours, self-employment and emotional support. 4 This study was conceptualized and developed in partnership with caregivers, acknowledging and valuing their knowledge in deciding what factors impact their health. While this type of public and patient involvement (PPI) is rare in statistical modelling, it can support collective learning, advance understanding and increase impact. 18 The combined aims, of the researchers and panel, focus on the health implications for working family caregivers, which builds on previous international research. 19 The researchers and panel identified two key questions of interest: 'how do working and non-working caregivers differ?' and 'what is the impact of work status on caregivers' depression?'
Synthesizing panel feedback with the reviewed literature, 4,8,10 the researchers developed the following refined aims: (i) to identify sociodemographic, mental and physical health and social network differences, between working and non-working caregivers, and (ii) to investigate the impact of work status on caregivers' depression, using hierarchical logistic regression models and controlling for sociodemographic variables. Study This study uses data from the 2014 seventh round of the ESS, which focuses on 'social inequalities in health and their determinants'. Anonymized data from the ESS are freely available without restrictions for not-for-profit purposes. The ESS is a biennial cross-national survey of attitudes and behaviour established in 2001. The ESS uses cross-sectional, probability samples, which are representative of all persons aged 15+, resident within private households in each country. ESS is a pan-European survey of 21 countries: Austria, Belgium, Czech Republic, Denmark, Estonia, Finland, France, Germany, Hungary, Ireland, Israel, Lithuania, The Netherlands, Norway, Poland, Portugal, Slovenia, Spain, Sweden, Switzerland and the UK. National co-ordinators and survey agencies ensure compliance with ethics approval procedures at a country level, overseen by the ESS European Research Infrastructure Consortium which subscribes to the Declaration on Professional Ethics of the International Statistical Institute. Data were collected via face-to-face interviews with individuals aged 15+ living in private households. The average response rate for all countries was 51.6%. Data from a total of 35 063 participants were collected. The 2014 ESS was analyzed, as it is the latest round to include data relating to informal caregivers. Complete information on the survey, including questionnaires, is available from http://www.europeansocialsurvey.org.
Sample A sub-sample of participants from the ESS dataset, aged 18+, who reported being a caregiver, were investigated (n = 11 177, 32% of all participants). A caregiver was defined as someone who reported looking after or helping family members, friends, neighbours or others. A non-intensive caregiver was defined as anyone who provided up to 10 h of help a week, while an intensive caregiver was someone who provided over 10 h of help a week. Differences between caregivers and non-caregivers are detailed elsewhere. 2 Mental health Depression was assessed using an eight-item version of the Center for Epidemiological Studies Depression Scale. 22 Individuals were asked how often they felt each of the following in the past week: felt depressed, felt everything was an effort, sleep was restless, was happy, felt lonely, enjoyed life, felt sad and could not get going. Those scoring a value of 10 or more were classified as having depressive symptoms. 20 The validity and reliability of this scale for depression were previously demonstrated. 23 Physical health Participants were asked which of the following health problems they have had or experienced in the last 12 months (yes, no), from a list of the following: heart or circulation problem, high blood pressure, breathing problems, back or neck pain, muscular or joint pain in hand or arm, or muscular or joint pain in foot or leg, stomach or digestion related, skin condition related, severe headaches, diabetes and cancer. These conditions were chosen based on prevalence across Europe and common causes of death. 24 Social network Participants were asked how often they socially meet with friends, relatives or colleagues (once a month or less, several times a month, once a week, several times a week/everyday). Work status Work status was defined where a participant reported their main activity in the last 7 days as paid employment.
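The CES-D-8 classification described above can be sketched as follows, assuming the usual 0-3 coding per item (from "none of the time" to "almost all of the time") with the two positively worded items ('was happy', 'enjoyed life') reverse-scored. The coding details and the example responses are illustrative, not taken from the ESS codebook:

```python
# Assumed CES-D-8 scoring: each item coded 0-3, positive items
# reverse-scored, total >= 10 flags depressive symptoms. The item
# responses below are invented for illustration.
POSITIVE_ITEMS = {"was_happy", "enjoyed_life"}

def cesd8_score(responses):
    total = 0
    for item, value in responses.items():
        total += (3 - value) if item in POSITIVE_ITEMS else value
    return total

responses = {"felt_depressed": 2, "everything_effort": 2, "sleep_restless": 1,
             "was_happy": 1, "felt_lonely": 2, "enjoyed_life": 1,
             "felt_sad": 2, "could_not_get_going": 1}
score = cesd8_score(responses)
print(score, score >= 10)  # 14 True
```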
Statistical analysis The dataset for analysis was pooled across all countries and both post-stratification and population weights were applied to ensure that the survey data represent the national populations aged 15+ with respect to age, gender, education and region and give all countries a weight proportional to population size. Categorical data were described using counts and percentages. Pearson's χ2 test was used to test associations between categorical variables. Cramer's V effect size, with V = 0.1, 0.3 and 0.5 for a small, medium and large effect, respectively, was reported where appropriate. Hierarchical logistic regression models were used to analyze associations between the caregivers' work status and depression (Model 1), controlling for sociodemographic variables (Model 2) and controlling for intensity of caring and social networks (Model 3). Adjusted odds ratios (AORs), corresponding 95% confidence intervals (CIs) and the Nagelkerke R 2 goodness-of-fit statistic are reported. A 5% level of significance was used. All statistical analysis was undertaken using SPSS Version 24. Public and patient involvement Two stakeholder panel meetings were held with four caregivers, prior to and after statistical analysis. The panel included four full-time family caregivers; three females and one male, and all were older adults. None of the caregivers were active in the labour market at the time of this study, but three had previously balanced work and care responsibilities. The four caregivers were recruited from a larger PPI panel of older adults who have committed to working with academics on various research projects; panel recruitment is described elsewhere. 25 The initial meeting focused on discussing experiences of providing care to family members, defining a caregiver and balancing paid work and caregiving.
Potential health and demographic differences, between working and non-working caregivers, were considered and factors which influence these differences were identified and discussed, informed by existing literature. This collaborative or participatory modelling approach involves all stakeholders in the model building process, where participants can suggest characteristics for inclusion in the model and how they may impact on the outcome. 26 Thus, the final variable selection was based on previous research findings, available data, as well as panel feedback. The second meeting focused on interpreting the results of the statistical analysis and framing discussion points. The PPI meetings were unstructured and facilitated through online video calls. Face-to-face meetings were not possible due to Irish public health restrictions as a result of the COVID-19 pandemic in late 2020. Results Sociodemographics Table 1 presents demographic information on caregivers (n = 11 177). The majority are female, middle aged, employed, with an upper secondary education and reported being non-intensive caregivers (77.8%), compared with 21.9% reporting intensive caring over 10 h. Sociodemographic differences between working and non-working caregivers are evident from table 1. Working caregivers are more likely to be male, middle aged, have a higher level of education and provide non-intensive caring hours. Non-working caregivers are more likely to be female, older, widowed, have lower education levels and are more likely to provide intensive caring hours. Physical health, depression status and social networks Table 2 presents caregivers with a physical or mental health complaint, by work status. For all caregivers, 12.9% report symptoms of depression, with a statistically significant difference between working and non-working caregivers (16.5% of non-working caregivers reporting depression compared with 9.6%, P < 0.001). Back or neck pain is the most prevalent health complaint (48.1%).
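Two of the quantities used in this analysis can be reproduced from first principles: Cramer's V from a chi-square statistic, V = sqrt(χ2/(n(k−1))) where k is the smaller of the row/column counts, and an adjusted odds ratio with its 95% CI from a logistic-regression coefficient β and standard error SE, AOR = exp(β), CI = exp(β ± 1.96·SE). The chi-square value and the (β, SE) pair below are invented for illustration (the latter chosen to land near the reported AOR of 1.77):

```python
import math

# Cramer's V from a chi-square statistic: V = sqrt(chi2 / (n * (k - 1))),
# where k is the smaller number of rows/columns in the contingency table.
def cramers_v(chi2, n, k):
    return math.sqrt(chi2 / (n * (k - 1)))

# Adjusted odds ratio and 95% CI from a logistic-regression coefficient
# (beta) and its standard error: AOR = exp(beta), CI = exp(beta +/- 1.96*SE).
def aor_with_ci(beta, se, z=1.96):
    return tuple(round(math.exp(beta + d * se), 2) for d in (0.0, -z, z))

# Invented inputs: a chi2 of 111.77 on n = 11177 for a 2x2 table,
# and (beta, se) picked to land near the reported AOR of 1.77.
print(round(cramers_v(111.77, 11177, 2), 3))  # 0.1 -> small effect
print(aor_with_ci(0.571, 0.070))              # (1.77, 1.54, 2.03)
```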
Across nearly every health complaint there is a statistically significant difference in prevalence between working and non-working caregivers, with those not working more likely to report the complaint. Table 3 presents caregivers' social networks, by work status. Non-working caregivers are most likely to report frequent social meetings, with 48.1% reporting socializing several times a week/every day compared with 41.8% of working caregivers.

Discussion

This study examined data from the 2014 ESS and focused on the health implications for working and non-working caregivers. Within the ESS, 11 177 respondents (32%) were characterized as informal caregivers, with over half (51%) also being in paid employment. Results found that non-working caregivers are at a considerably higher risk of depression compared with working caregivers (16.5% of non-working caregivers reported depressive symptoms compared with 10% of working caregivers). After controlling for sociodemographic variables, intensity of caring and social networks, non-working caregivers were more likely to report depressive symptoms than working caregivers (AOR = 1.77, 95% CI = 1.54-2.03). This study was conceptualized and interpreted with a panel of caregivers. The finding that non-working caregivers are more at risk of depression resonated with the panel, but they considered the prevalence of depression somewhat lower than expected. While higher rates of depression were reported in reviews of caring populations, 5,6 these reviews included few studies with population-representative samples. Similar rates of depression to our findings were reported in other studies using population-representative samples. 27,28 The panel highlighted that working caregivers have the ability to physically and mentally leave their caring responsibilities
temporarily: they feel more independent and important, have opportunities to make money, get dressed up and have social interactions at work. They discussed the mental health impact of being a full-time caregiver: feeling isolated, lonely, invisible, guilty and misunderstood; feelings which are commonly reported in other qualitative studies of family caregivers. [29][30][31] The panel detailed the benefit of the social experience of work for their mental health. This is in accordance with our statistical findings showing links between social networks and depression. Elsewhere in the literature, results suggest that long-term activity restrictions are related to increased depression in caregivers. 8 Despite the benefits of working, our panel highlighted that, depending on the amount of caring being provided, balancing work and caring responsibilities is not feasible long term. They reported feeling like they 'lived two separate lives', one at work and one at home. This was a cause of stress and anxiety, as they felt conflicted about going to work when they were needed at home. Generally, the PPI panel believe the support structures are not in place to support working caregivers and reported feeling under-valued by the government. The panel made recommendations towards flexible working conditions, such as the option to work from home, and also acknowledged the positive effects support groups and good social networks have on caregivers' mental health. This aligns with previous research, which reported the protective impact of social networks for caregivers' mental health, 32 and with our findings that caregivers who had more social contact were less depressed. Yet, while the panel considered work an important means of facilitating impromptu social interactions, our statistical results suggested that non-working caregivers had more social meetings.
Public health campaigns or interventions focusing on improving social networks for caregivers have previously been recommended 32 and may contribute to raising awareness of the importance of strong social networks for caregivers' mental health. Verbakel et al. 2 made general recommendations towards supportive policies, such as respite care, training or counselling, being made more easily accessible to caregivers. However, we must consider how current supportive policies for caregivers vary considerably across Europe. 33,34 While financial support is the most common type of support provided, 33 findings suggest that more effective supports are those that give a break from caring responsibilities, support caregivers emotionally and provide them with skills to improve and better deal with their care situation. 35 Our findings suggest working caregivers have better mental health. Thus, implementing more standardized policies could aid working caregivers in balancing their dual responsibilities and better sustain informal care, which is an important resource for our healthcare systems. Since data collection in 2014, policy changes have been primarily at individual country level, with heterogeneous policies in place. However, on 20 June 2019, the European Union Directive on work-life balance for parents and carers introduced the entitlement to 5 days of carers' leave per year for workers providing personal care or support to a relative or person living in the same household, and extended the right to request flexible working arrangements to working carers. 36 While many EU Member States already have measures in place that go beyond these provisions, the Work-Life Balance Directive can nevertheless be considered an important step in recognizing informal carers. Recently, Brimblecombe et al. 37 discussed the idea of reducing the need for unpaid care.
They recognized the substantial costs incurred by governments for caregivers, through lower tax revenue, welfare benefit payments and health service use. While these costs are essential to support caregivers, the question was raised as to whether these funds could be better spent on supportive policies. Initiatives could be developed to support the education, training, employment, financial situation and physical and mental health of caregivers. More specifically, Brimblecombe et al. highlighted how support in workplaces is valuable to working caregivers, a point consistent with our PPI panel's views. Here, the panel considered flexible working essential, as they believe the cost of a replacement or substitute caregiver during work hours does not equate to the income earned. The panel noted that the ability to work at home in some capacity was more suited to facilitating working and caring responsibilities. A European report strengthens this claim by suggesting suitable interventions to facilitate caregivers combining work and care, including care leave and making work flexibility legally possible. 34 While it is clear that work support is fundamental to caregivers' wellbeing, evidence also suggests that a combination of interventions is most effective. 38 Other suitable support policies for working caregivers' wellbeing could include combinations of formal care services for people with care needs ('replacement' or 'substitution' care), psychological therapy, training and education, and support groups. The COVID-19 pandemic has accelerated changes to the ways people work, and these changes have the potential to create additional challenges and/or potential benefits for working caregivers. 39 Further research is needed on the longitudinal and differential impact of the pandemic on working family caregivers. 39

Strengths and limitations

A unique strength of this study is the collaboration between a PPI panel of family caregivers and academic researchers.
PPI in statistical analysis is often underexplored, but acknowledging and valuing lay knowledge of the context supported meaningful interpretations of our findings. Another strength is the use of data from a large pan-European study of 21 countries, providing useful insights into the health implications for working and non-working caregivers. Rates of depression are somewhat lower than previously identified in caregiver populations. 5,6 Thus, our findings may be somewhat conservative due to the self-reported nature of the ESS data and the lack of data collected on specific caring responsibilities. For example, the caring role and the hours provided are both self-reported, meaning some undefined caregivers may be excluded from analysis, and information on who is being cared for (e.g. adults and/or children; live-in care vs. care outside the home) is not reported and therefore cannot be accounted for in the analysis. The panel hypothesized differences in stress by work status; however, no measure of stress was collected in the ESS and therefore it could not be incorporated into the analysis. Future research could consider working collaboratively with a panel of caregivers prior to data collection to expand on the variables to be measured and increase the explanatory ability of the statistical models. The cross-sectional nature of the data is also a limitation, as no conclusions can be made as to the long-term impact of caring on mental health. The data collection date (2014) means that changes in social policy to support carer participation in the labour market in Europe 40 in the intervening years would not be reflected in the statistical findings. While several significant associations of depression are identified, a substantial proportion of the variance in the models is unexplained. While none of the four caregivers involved in the panel was working at the time of the study, three had previously balanced work and caring commitments.
As circumstances, attitudes and legislation may change over time, the perspective of caregivers currently active in the labour market might have resulted in alternative feedback. Due to the timing of this study, with COVID-19 restrictions in place, it was not feasible to recruit additional caregivers to the already established research panel. 25 We would recommend that future work consider a similar collaborative modelling approach with a mix of caregivers who are both active and inactive in the labour market. The insights provided by the caregiver panel may be restricted to an Irish focus, as legislative and normative contexts with regard to labour market participation and care provision may differ across the other countries represented in the ESS. We identified cross-country differences in our study, and further subgroup analysis by region may be useful to identify any effect modification for caregivers' depression, supported by a more detailed analysis of cross-country differences in caregiver legislation and by consulting caregivers from across Europe.

Conclusions

In a study of 11 177 caregivers from the 2014 ESS, differences between working and non-working caregivers were evident. The findings were interpreted in partnership with a panel of caregivers, highlighting the value of collaborative modelling. Findings suggest that non-working caregivers are at a considerably higher risk of depression when compared with working caregivers. Supportive policies such as flexible working and care leave are recommended. Enabling caregivers to continue in paid work and better balance their caring and working responsibilities would support caregivers' health and sustain an important resource for our healthcare systems.