Connes duality in Lorentzian geometry
The Connes formula giving the dual description for the distance between points of a Riemannian manifold is extended to the Lorentzian case. It turns out that its validity essentially depends on the global structure of spacetime. The duality principle classifying spacetimes is introduced. The algebraic account of the theory is suggested as a framework for quantization along the lines proposed by Connes. The physical interpretation of the obtained results is discussed.
Introduction
The mathematical account of general relativity is based on Lorentzian geometry as a model of spacetime. Like any model, a physicist needs to identify it in terms of measurable values. In this paper we focus on evaluations of intervals between events, and the measurable entities will be the values of scalar fields. This work was anticipated by the Connes distance formula for Riemannian manifolds, in which the supremum is taken over smooth functions whose gradient does not exceed 1.
This formula gives rise to a new paradigm in the account of differential geometry, one more sound from the physicist's point of view since it expresses the distance through the values of scalar fields on the manifold. Our goal is to investigate to what extent this formula is applicable in Lorentzian manifolds.
The first observation was that even in Minkowski space this formula is no longer valid in its literal form. The reason is that the Cauchy inequality, on which the Connes formula is based, does not hold in Minkowski space. In section 2, following Connes' guidelines, we obtain an estimate rather than an exact expression for the interval. In 'good' cases, in particular in Minkowski spacetime, this estimate is exact and gives an analog of the Connes formula.
The attempt to generalize it to arbitrary Lorentzian manifolds led to counterexamples which show the drastic difference between Riemannian and Lorentzian manifolds. In section 3 the duality principle is introduced in order to single out the class of Lorentzian manifolds that are as 'good' as Riemannian ones. It turns out that one can find a 'bad' spacetime even among those conformally equivalent to the Minkowskian one. An example is provided in section 3.
The Connes duality principle plays an important role in the framework of the so-called 'spectral paradigm' in the account of non-commutative differential geometry [2]. However, the correspondence principle for this theory has been corroborated only on Riemannian manifolds. Following these lines, in section 4 we show a way to introduce non-commutative Lorentzian geometry.
Connes formula
Both the Riemannian distance and the Lorentzian interval are based on the calculation of the same integral,

$$\ell(\gamma) = \int_\gamma \sqrt{\left|g(\dot\gamma, \dot\gamma)\right|}\,dt,$$

which is always referred to a pair of points. The intervals (resp., distances) as functions of two points are obtained as extremal values of this integral over all appropriate curves connecting the two points.
There is a remarkable duality, suggested by Connes for the Riemannian case, which evaluates this integral without tracing curves. Let us consider it in more detail.
For any two points x, y of a Riemannian manifold M connected by a smooth curve γ, the following evaluation of its length ℓ(γ) takes place:

$$|f(x) - f(y)| = \left|\int_\gamma (\nabla f, \dot\gamma)\,dt\right| \le \int_\gamma |\nabla f|\,|\dot\gamma|\,dt \le \ell(\gamma),$$

based on the Cauchy inequality |(∇f, γ̇)| ≤ |∇f|·|γ̇|. So the distance ρ(x, y) between the points of the manifold satisfies the following inequality:

$$|f(x) - f(y)| \le \rho(x, y), \qquad (1)$$

where f ranges over all functions whose gradient does not exceed 1: |∇f| ≤ 1.
It was shown by Connes [3] that, as a matter of fact, no curves are needed to determine the distance, which may be obtained directly as

$$\rho(x, y) = \sup\{\,|f(x) - f(y)| \;:\; f \in C^\infty(M),\ |\nabla f| \le 1\,\}. \qquad (2)$$

The physical meaning of this result is the following: we can evaluate the distance between the points by measuring the difference of potentials of a scalar field whose intensity is not too high. So, the duality principle (2) takes place in Riemannian geometry. Note that this formula is valid even for non-connected spaces: in this case both sides of the above equality are equal to +∞, if we assume, as usual, the infinite value of the infimum when the ranging set is void.
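A quick illustration (ours, not from the original paper): in Euclidean space the supremum in (2) is essentially attained by the distance function itself. Taking

$$f(z) = \|z - x\| \quad\Rightarrow\quad |\nabla f(z)| = 1 \ \ (z \ne x), \qquad |f(x) - f(y)| = \|x - y\| = \rho(x, y),$$

and smoothing f near z = x yields admissible C^∞ competitors whose values come arbitrarily close to the distance.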
The question arises: can we write down a similar evaluation for the Lorentzian case?
Duality inequality in Lorentzian geometry
The Cauchy inequality from which the Riemannian duality principle was derived is no longer valid in the Lorentzian case. Instead, we have the reverse inequality

$$(\nabla f, \dot\gamma)^2 \ge (\nabla f)^2 (\dot\gamma)^2, \qquad (3)$$

which holds whenever both ∇f and γ̇ are non-spacelike. If under these circumstances we also have (∇f, γ̇) ≥ 0, the inequality (3) reduces to

$$(\nabla f, \dot\gamma) \ge \sqrt{(\nabla f)^2}\,\sqrt{(\dot\gamma)^2}.$$

Now let x, y be two points of a Lorentzian manifold M such that there is a causal curve γ going from x to y. Then for any global time function f on M we immediately obtain the analog of the inequality (1):

$$f(y) - f(x) = \int_\gamma (\nabla f, \dot\gamma)\,dt \ \ge\ \int_\gamma \sqrt{(\nabla f)^2}\,\sqrt{(\dot\gamma)^2}\,dt.$$

Introduce the class F(M) of global time functions satisfying the following condition:

$$(\nabla f)^2 \ge 1. \qquad (4)$$

Then the Lorentzian interval between x and y,

$$l(x, y) = \sup_{\gamma:\,x \to y} \ell(\gamma), \qquad (5)$$

the supremum being taken over all causal curves from x to y, can be evaluated as follows: ℓ(γ) ≤ f(y) − f(x) for every such curve and every f ∈ F(M), provided the class F (4) is not empty. Introducing the value

$$L(x, y) = \inf_{f \in F(M)} \bigl(f(y) - f(x)\bigr),$$

we obtain the following duality inequality:

$$l(x, y) \le L(x, y). \qquad (6)$$

It is worth mentioning that this inequality is still meaningful when the points x, y cannot be connected by a causal curve. In this case the supremum l(x, y) is taken over the empty set of curves and its value is, as usual, taken to be −∞, so that the inequality (6) holds trivially.

In Minkowski spacetime the inequality (6) becomes an equality: l(a, b) = L(a, b) for any pair of points a, b.

Proof. Assume with no loss of generality that a = 0. If b is in the future cone of a = 0, then l(a, b) = √((b, b)), and the linear function f(x) = (b, x)/√((b, b)) belongs to F and gives f(b) − f(a) = l(a, b), so L(a, b) = l(a, b). If b lies on the null boundary of the cone, so that l(a, b) = 0, one uses a family of functions f_ε built from the null direction of b and a vector v defining the time orientation. The direct calculation shows that (∇f_ε)² ≥ 1 and (∇f_ε, v) ≥ 0, that is, f_ε ∈ F. In the meantime f_ε(b) = ε(v, b)/(1 − ε), which can be made arbitrarily close to zero by an appropriate choice of ε. Now let the point b be beyond the future cone of the point 0, so that l(a, b) = −∞; then the points can be separated by a spacelike hyperplane f_k(x) = (k, x) = 0, with the vector k chosen to be future-directed and normalized, (k, k) = 1. Then (k, b) < 0, the rescaled functions n·f_k belong to F for n ≥ 1, and n(k, b) → −∞, whence L(a, b) = −∞ as well. ✷

Remark. Note that if we borrow the definition of l(a, b) from [1], namely l(a, b) = 0 for b ∉ J⁺(a), then the duality principle would not hold even in the Minkowskian case: this was the reason for us to introduce the definition (5).
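For concreteness, here is the simplest instance of the above statement (our worked example): take a = 0 and b = (T, 0, 0, 0) with T > 0 in Minkowski coordinates. Then

$$l(a, b) = T, \qquad f(x) = x^0 \in F \ \ \text{since}\ (\nabla f)^2 = 1, \qquad L(a, b) \le f(b) - f(a) = T,$$

and the duality inequality (6) forces L(a, b) = T = l(a, b).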
Consider one more example. Let M = S¹ × R³ be a Minkowskian cylinder, where S¹ is the time axis. In this case any two points x, y ∈ M can be connected by an arbitrarily long timelike curve, therefore l(x, y) = +∞. In the meantime the class F(M) is empty (since there are no global time functions), and therefore L(x, y) = +∞. So we see that even in this "pathological" case the duality principle is still valid.
Note that the class F(M) itself characterizes spacetimes. In general, if the class F(M) is not empty, then the spacetime M is chronological (this follows immediately from the fact that F(M) consists of global time functions). Now we may ask whether all Lorentzian manifolds are as 'good' as the Minkowskian one. In the next section we show that the answer is no.
Duality principle
In the previous section we proved the duality inequality (6), which holds in any Lorentzian manifold. However, unlike the case of Riemannian spaces, this inequality may be strict, which is corroborated by the following example. Let M be a Minkowskian plane from which a closed segment connecting the points (1, −1) and (−1, 1) is cut out (Fig. 1). For suitable pairs of points whose causal curves are forced to bypass the slit, the interval l(x, y) falls strictly below the infimum L(x, y). We say that a spacetime enjoys the duality principle if l(x, y) = L(x, y) for every pair of its points; this characterization is global. The example presented above shows that this notion is not hereditary: if we take an open subset of a 'good' manifold, it may happen that it no longer enjoys the duality principle.
Contemplating the above examples might lead to the erroneous conclusion that the duality principle breaks only when the spacetime manifold is not simply connected. The next example [8] shows that there are manifolds that are simply connected, geodesically convex, and admit global chronology, yet do not enjoy the duality principle.
Let M be the right half-plane (−∞ < t < +∞, x > 0) with a metric tensor conformally equivalent to the Minkowskian one, defined in the coordinates t, x as follows:

$$ds^2 = \frac{dt^2 - dx^2}{x}.$$

(With this conformal factor, the condition (4) reads f_t² − f_x² ≥ 1/x in terms of the flat metric, which is the form used in the proof below.) The example illustrates the problems related to the dual evaluations: it shows that the existence of a global time function does not guarantee that the class F is non-empty. The spacetime M evidently admits global time functions, for instance f(t, x) = t. However, the following proposition can be proved.
Proposition. The class F (M) is empty.
Proof. Suppose there is a function f(t, x) satisfying (4), and consider two values A, B (A < B) of the function f. The corresponding level lines of f are the graphs of functions t_A(x), t_B(x). Since the derivative f_t > 0, we have t_A(x) < t_B(x). These functions are differentiable and their derivatives are bounded: |t′_A|, |t′_B| ≤ 1 (because the level lines are everywhere spacelike), therefore they have limits as x → 0. Let us show that these limits are equal.
Consider the difference B − A and evaluate it at fixed x:

$$B - A = \int_{t_A(x)}^{t_B(x)} f_t\,dt \ \ge\ \frac{1}{\sqrt{x}}\bigl(t_B(x) - t_A(x)\bigr),$$

where the factor 1/√x is directly obtained from the condition (∇f)² ≥ 1/x, which forces f_t ≥ 1/√x. Hence t_B(x) − t_A(x) ≤ √x (B − A), so the limit of t_B(x) − t_A(x) as x → 0 is equal to 0. Since the values A, B were taken arbitrarily, we conclude that all the level lines of the global time function f come together to a certain point. Therefore these lines (being spacelike) cannot cover all of the manifold M. ✷
This proposition shows that the space M does not support the duality principle: we can take two points a, b on a timelike geodesic and calculate the finite interval l(a, b), while L(a, b) = +∞, the class F(M) being empty.
Algebraic aspects and quantization
Let us study the dual evaluations from the algebraic point of view. It was pointed out by Geroch [6] that the geometrical framework of general relativity can be reformulated in a purely algebraic way. Recall the basic ingredients of Geroch's approach. The starting object is the algebra A = C^∞(M); the vector fields on M are then the derivations of A, that is, the linear mappings v : A → A satisfying the Leibniz rule

$$v(ab) = v(a)\,b + a\,v(b).$$

Denote by V the set of vector fields on M (= derivations of A). It is possible to develop tensor calculus along these lines: as in differential geometry, tensors are appropriate polylinear forms on V. In particular, the metric tensor can be introduced in purely algebraic terms.
Geroch's viewpoint is in a sense 'pointless' [10]: it contains no points given ab initio. However, the points are immediately restored as one-dimensional representations of A. For any x ∈ M the appropriate representation x̂ reads

$$\hat{x}(f) = f(x), \qquad f \in A.$$

Now let M be a Riemannian manifold. If we then decide to calculate the distance between two representations x̂, ŷ in a 'traditional' way, we have to introduce such a cumbersome object as a continuous curve in the space of representations. It is the result of Connes (2) which lets us stay in the algebraic environment:

$$\rho(x, y) = \sup_{f \in F} |\hat{x}(f) - \hat{y}(f)|,$$

and the problem now reduces to an algebraic description of the class F of suitable elements f of the algebra A. The initial Connes suggestion still refers to points:

$$F = \{\,a \in A \mid \forall m \in M\ \ |\nabla a(m)| \le 1\,\}. \qquad (7)$$
Connes' intention was to build a quantized theory which could incorporate non-commutative algebras as well. For that, the construction of a spectral triple was suggested [2].
A spectral triple (A, H, D) is given by an involutive algebra of operators A in a Hilbert space H and a self-adjoint operator D with compact resolvent in H such that the commutator [D, a] is bounded for any a ∈ A (note that D is not required to be an element of A).
Then for any pair (x, y) of states (= non-negative linear functionals) on A the distance d(x, y) between x and y may be introduced:

$$d(x, y) = \sup\{\,|x(a) - y(a)| \;:\; a \in A,\ \|[D, a]\| \le 1\,\}.$$

The suggested construction satisfies the correspondence principle with Riemannian geometry. Namely, we form the spectral triple with A = C^∞(M), H = L²(M, S), the Hilbert space of square-integrable sections of the irreducible spinor bundle over M, and D the Dirac operator associated with the Levi-Civita connection on M [9]. Then d(x, y) recovers the Riemannian distance on M (see, e.g., [2]).
Comparing the class used in the spectral triple, {a ∈ A : ‖[D, a]‖ ≤ 1}, with the gradient condition (7), we see that the operator D is a substitute for the gradient. Following [7,5], the gradient condition (7) can be written in terms of the Laplace operator, taking into account that

$$(\nabla f)^2 = \tfrac{1}{2}\,\Delta(f^2) - f\,\Delta f,$$

which restores the metric on M according to the Connes duality principle (2) for Riemannian manifolds. However, this condition is still checked at every point of M.
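A one-line check of this identity (with the analyst's sign convention Δ = div grad):

$$\Delta(f^2) = \nabla\!\cdot\!\bigl(2f\,\nabla f\bigr) = 2f\,\Delta f + 2(\nabla f)^2 \;\Longrightarrow\; (\nabla f)^2 = \tfrac{1}{2}\,\Delta(f^2) - f\,\Delta f.$$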
We suggest an equivalent algebraic reformulation of (7) with no reference to points. Starting from the notion of the spectrum of an element of an algebra [4],

$$\mathrm{spec}(a) = \{\lambda \in \mathbb{C} \mid a - \lambda\cdot 1 \ \text{is not invertible}\},$$

and taking into account that the spectrum of a multiplication operator coincides with the range of the multiplicator, we reformulate the Connes condition f ∈ F as

spec(1 − (∇f)²) is non-negative. (8)

Within this framework, to pass to the Lorentzian case we simply substitute the D'Alembertian ✷ for the Laplacian ∆, and the spectral condition (8) changes to

spec((∇f)² − 1) is non-negative,

which makes it possible to recover the Lorentzian interval provided the duality principle holds.
So we see that the notion of spectral triple is well suited to developing quantized Lorentzian geometry along the lines of Connes' theory.
Comparison between Classical- and Rotational-Mechanical-Chair-Assisted Maneuvers in a Population of Patients with Benign Paroxysmal Positional Vertigo
Introduction: Benign paroxysmal positional vertigo (BPPV) stands as the most common cause of peripheral vertigo. Its treatment with repositioning maneuvers on an examination table is highly effective. However, patients with back or neck problems, paraplegia, or other conditions face challenges with these maneuvers, potentially experiencing longer healing times and creating additional difficulties for physicians diagnosing and treating BPPV in everyday practice. The emergence of mechanical rotational chairs (MRCs) offers a more convenient alternative for performing these maneuvers. Objectives: The primary objective was to compare the effectiveness of maneuvers on the examination table with those on MRCs in BPPV patients diagnosed in the emergency room and randomly assigned to one of the treatment options. The secondary objectives included a comparison of patient quality of life during BPPV episodes and after their resolution, and an analysis of recurrences and associated risk factors. Methods: This was a cohort study of sixty-three patients diagnosed with BPPV in the emergency department. Patients were classified into two cohorts depending on the diagnostic and treatment maneuvers (MRC or conventional repositioning maneuvers (CRMs)) and received weekly follow-ups until positioning maneuvers became negative. Subsequent follow-ups were conducted at 1 month, 3 months, and 6 months after the resolution of vertigo. Results: Thirty-one patients were treated with CRMs and 32 with the TRV chair. Mean age was 62.29 ± 17.67 years, and the most affected canal was the PSC (96.8%). The mean number of required maneuvers was two, and 55.56% of patients required only one maneuver until resolution. Recurrence was present in 26.98% of the patients during the 6-month follow-up. Comparing both cohorts, there were no statistically significant differences between treatments (TRV vs. CRM) regarding the number of maneuvers, number of recurrences, or days until remission of BPPV. Dizziness Handicap Inventory and Visual Analogue Scale values decreased considerably after BPPV resolution, with no statistically significant differences between the groups. Age was identified as a covariable for the number of maneuvers and days until BPPV resolution, with greater age implying a greater number of maneuvers. Conclusions: There was no difference between the two treatment modalities for BPPV in our population. The quality of life of patients improved six months after the resolution of BPPV, regardless of the treatment applied.
Introduction
Benign paroxysmal positional vertigo (BPPV) is the most common cause of peripheral vestibular disorders. It is caused by the displacement of otoconia from the utricle into one or more semicircular canals (SCCs), and it is characterized by brief episodes (half a minute to one minute) of rotatory vertigo triggered by specific head movements [1]. Otoconia can be floating in the endolymphatic space of the SCC (canalolithiasis) or be attached to the cupula within the SCC (cupulolithiasis) [2,3].
In the literature, a female-to-male ratio of 2:1 is described, with involvement of the right posterior semicircular canal (PSC) in approximately 72-80% of cases. The average age of onset ranges between 49 and 57 years [4,5]. Nevertheless, the involvement of other semicircular canals, along with other rare variants of BPPV, often leads to misdiagnosis.
Although the disease is benign, in addition to the vertigo and vegetative symptoms, BPPV can be very debilitating, producing psychological and physical impairments that decrease the quality of life. It also increases the risk of falls and fear of falling, especially in elderly patients [6,7]. However, once the episode is resolved, an improvement in these symptoms is observed, although some patients may still experience chronic subjective symptoms [8].
Up to 50% of BPPV cases resolve spontaneously in 2 to 12 weeks [5,9,10], and between 90 and 98% resolve after one or two repositioning maneuvers [11]. However, a specific group of patients with high risk of BPPV needs more treatment sessions and has a higher risk of recurrence. This group includes those with a history of head trauma, reduced head mobility, and the geriatric population [12].
The management of BPPV involves canalith repositioning maneuvers, such as the Epley or Semont maneuver [13,14], whose aim is to relocate the otoconia from the PSC back into the utricle. Maneuvers are typically performed manually on the examination table. However, it is important to note that the success of these maneuvers depends on correct execution by the examiner, on inter-examiner variability, and on subject-specific factors that may hinder the satisfactory completion of the maneuver and can even increase the risk of multicanal involvement if inappropriately performed. Similarly, maneuvers performed by patients at home may result in a lower cure rate. Patients with conditions such as obesity, cervical arthritis, paraplegia, or other mobility disorders face challenges in carrying out these maneuvers [15]. Considering these limitations, mechanical rotational chairs (MRCs) have been developed for the diagnosis and treatment of BPPV. These chairs allow for 360-degree movements to perform all the described BPPV maneuvers. They enable specific angles and inclinations for each SCC, as well as more abrupt movements with greater acceleration force, without altering the position of the patient's head or requiring neck hyperflexion. The patient remains seated in the same position throughout the procedure [16].
So far, studies conducted with this tool have shown that MRCs are superior in the treatment of BPPV patients with multicanal or bilateral BPPV, cupulolithiasis, or refractory vertigo [18]. However, it is important to note that most of these studies are retrospective, non-randomized, and lack a systematic and protocolized follow-up [19][20][21]. As a result, there is a lack of clinical evidence regarding the true superiority or benefits of MRCs compared with traditional maneuvers in a population that accurately represents the everyday reality of an emergency and ENT unit in any healthcare facility.
Taking the above information into consideration, we developed a cohort study comparing the efficacy of the two BPPV treatment modalities available in our facility.
Patient Selection
Sixty-three patients were recruited for this cohort study from March 2023 to February 2024 by members of the Department of Otorhinolaryngology at Getafe University Hospital. Patients with a recent history of labyrinthine disorders (including vestibular neuritis, acute sensorineural hearing loss, and Ménière's disease) were excluded from this study, because diagnosing and treating BPPV in these cases is more challenging and the presence of another vestibular disorder can act as a confounding factor for symptoms and quality of life. Other pathologies, such as vestibular migraine or any other disorders of the central nervous system, were excluded with a complete neurological assessment. A cerebral MRI was performed in cases of doubt regarding neurological etiology. Additionally, pregnant women and patients under 18 years old were excluded.
Patients were divided into two cohorts depending on whether the diagnostic and treatment maneuvers were undertaken using a TRV MRC (TRV chair, Interacoustics, Middelfart, Denmark) or conventional repositioning maneuvers (CRMs) on the examination table.
This project was approved by our institutional review board (CEIm22/58). Informed consent was obtained from every patient.
Patient Assessment
Patients were admitted to the emergency unit and were diagnosed with BPPV according to the criteria formulated by the Committee for Classification of Vestibular Disorders of the Bárány Society (clinical presentation: recurrent episodes of vertigo triggered by changes in head position; duration: seconds to several minutes; frequency: the episodes occur frequently, often several times a day; trigger: the episodes are triggered by specific head movements, such as rolling over in bed, getting up from a lying position, or looking up or down; nystagmus: the patient exhibits nystagmus (abnormal eye movements) during the episodes; no other causes: the patient does not have any other underlying conditions that could cause vertigo, such as vestibular migraine, Ménière's disease, or labyrinthitis) [22]. Patients were always treated with CRMs on the first visit, as they were seen in the emergency department.
Upon enrollment in this study, each patient underwent a comprehensive evaluation conducted by the same team on every occasion. This evaluation encompassed a thorough review of the clinical history and a neurotological physical examination. Eye movement was recorded using goggles equipped with an infrared camera (Visual Eyes™ 505, Interacoustics, Denmark).
Patients with prior episodes of BPPV had not undergone treatment with any mechanical rotational chair, instead receiving conventional maneuvers, and the most recent episode had occurred at least 6 months before the onset of the new episode.
Every patient underwent weekly examinations, including specific diagnostic maneuvers for each semicircular canal (Dix-Hallpike, McClure, and the Head Hanging position) and specific repositioning maneuvers tailored to the affected semicircular canal, until no nystagmus was seen during positioning maneuvers and no vertigo symptoms were triggered. Subsequently, they received follow-up appointments at one month, three months, and six months after being treated.
During the follow-up, patients underwent a complete neurotological evaluation and repositioning maneuvers, if needed, depending on the cohort they were included in. They filled in the Dizziness Handicap Inventory (DHI) and Visual Analogue Scale (VAS) on every visit, and the Short Falls Efficacy Scale International (Short FES-I) on the second visit and one month after cure of BPPV. Figure 1 shows the flowchart followed for every patient and the specific maneuver applied depending on the treatment modality. In case of recurrence, we evaluated the patient again on a weekly basis until the negativization of the diagnostic maneuvers.
Videos S1 and S2 depict the execution of maneuvers in each group.
Handicap Measurements
We used the DHI questionnaire [23,24], while vertigo severity was assessed using the VAS. The DHI was developed in 1990 and translated into and validated in Spanish in 2000. It consists of 25 questions assessing the physical, emotional, and functional impact of vertigo on daily life, with scores ranging from 0 to 100 and higher scores indicating greater handicap. The DHI is simple and reliable, and allows for a comprehensive evaluation of the patient's condition.
VAS scores were obtained by asking the patient to rate the severity of their vertigo from 0 to 10.
Additionally, a risk-of-falls scale, the Short FES-I, was filled out [25]. The Short FES-I evaluates how concerned the patient is about the possibility of falling while performing seven activities, rating each one from "not concerned at all", "somewhat concerned", and "fairly concerned" to "very concerned", with a total score from 7 to 28.
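For illustration, a minimal scoring sketch for these questionnaires (our code; the answer data are hypothetical, and only the scoring ranges described above are assumed):

```python
# Hypothetical scoring helpers for the questionnaires described above.
# DHI: 25 items, each answered "yes" (4), "sometimes" (2) or "no" (0); total 0-100.
# Short FES-I: 7 items rated 1 ("not concerned at all") to 4 ("very concerned"); total 7-28.

DHI_POINTS = {"yes": 4, "sometimes": 2, "no": 0}

def dhi_total(answers: list[str]) -> int:
    """Sum DHI points over the 25 items; higher means greater handicap."""
    assert len(answers) == 25
    return sum(DHI_POINTS[a] for a in answers)

def short_fes_i_total(ratings: list[int]) -> int:
    """Sum Short FES-I ratings over the 7 activities (each rated 1-4)."""
    assert len(ratings) == 7 and all(1 <= r <= 4 for r in ratings)
    return sum(ratings)

# Example with made-up responses:
print(dhi_total(["yes"] * 5 + ["sometimes"] * 10 + ["no"] * 10))  # -> 40
print(short_fes_i_total([1, 2, 2, 3, 1, 1, 2]))                   # -> 12
```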
Statistical Analysis
All data were processed using the Statistical Package for the Social Sciences for Windows (SPSS version 25.0; IBM Corp., Armonk, NY, USA).
The normality of the study population's distribution was assessed using the Kolmogorov-Smirnov test.
Values were described as the mean ± standard deviation. For the evolution of continuous quantitative variables, the ANOVA test was used. For qualitative variables, the chi-square test was performed, and the t-test was employed to compare quantitative variables between groups. A bivariate correlation study was performed to analyze the impact of age and body mass index (BMI) on the number of recurrences, days until remission of BPPV, and number of maneuvers. Statistical significance for all tests was set at a p-value < 0.05.
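For illustration, a minimal sketch of these between-group analyses using SciPy (our code, run on made-up data; none of the numbers below are the study's):

```python
# Hypothetical re-creation of the analyses described above (SciPy), on made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Made-up outcome data for the two cohorts (not the study's data).
maneuvers_trv = rng.poisson(2.3, size=32) + 1   # TRV chair group
maneuvers_crm = rng.poisson(1.7, size=31) + 1   # conventional-maneuver group

# t-test comparing a quantitative variable between groups.
t, p_t = stats.ttest_ind(maneuvers_trv, maneuvers_crm)

# Chi-square test for a qualitative variable (e.g., recurrence yes/no per group).
recurrence_table = np.array([[8, 24],    # TRV: recurred / did not
                             [9, 22]])   # CRM: recurred / did not
chi2, p_chi, dof, _ = stats.chi2_contingency(recurrence_table)

# Pearson bivariate correlation (e.g., age vs. number of maneuvers).
age = rng.normal(62, 18, size=63)
maneuvers = np.concatenate([maneuvers_trv, maneuvers_crm])
r, p_r = stats.pearsonr(age, maneuvers)

print(f"t-test p={p_t:.3f}, chi-square p={p_chi:.3f}, Pearson r={r:.2f} (p={p_r:.3f})")
```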
Results
Sixty-three subjects were included in this study and completed the treatment and follow-up. A total of 32 of the 63 patients (50.8%) were randomized to treatment with the TRV chair, and 31 of the 63 subjects (49.2%) were randomized to traditional treatment on an examination bed.
Eighteen of the patients (28.6%) were male and forty-five (71.4%) were female. Mean age was 62.29 ± 17.67 years in the global population. The most affected canal was the PSC, accounting for 96.8% of the population. The left ear was affected in 54% of the population and the right ear in 42.9%. In total, 14.29% (9/63) of patients had bilateral involvement. These findings are shown in Table 1. No statistical differences were found in the distribution of population characteristics between groups based on gender, age, affected side and semicircular canal, or body mass index, making the groups statistically comparable. Mean follow-up time of the population was 196.97 ± 23.87 days (6.5 months).
Risk Factors Distribution
Sixteen of thirty-two (50%) patients treated with the TRV chair had previously suffered at least one episode of BPPV, versus nineteen of thirty-one (61.29%) in the CRM group. Table 2 summarizes the frequencies of all the studied risk factors in each group. There were no significant differences between the groups when comparing the possible risk factors for BPPV.
Number of Treatments and Days until Resolution of BPPV
The TRV group required a mean of 2.34 ± 0.56 maneuvers to achieve remission of BPPV, whereas the CRM group needed 1.71 ± 0.21 (Figure 2). There was no statistically significant difference in the number of treatments required between the two groups in our population (p = 0.120). There were five patients in the TRV group and two in the manual treatment group requiring five or more maneuvers, of whom four had bilateral BPPV. Table 3 shows the number of patients that needed one maneuver, two to four, and five or more in each group. (TRV: Thomas Richard-Vitton chair; CRM: conventional repositioning maneuvers.)

The mean number of days until achieving a negative maneuver in the global population was 15.08 (±12.14); specifically, it was 17.75 (±2.53) days in the TRV group and 12.32 (±1.57) days in the manual treatment group (Figure 3). No significant differences were found between the groups (p = 0.076).
However, the Pearson correlation test showed a significant correlation between age and days until cure (r = 0.246; p = 0.026) and between age and the number of maneuvers applied (r = 0.296; p = 0.009) (Figure 4). Despite being significant, these results indicate a weak correlation. The ANOVA test also showed that age is a covariable in the number of maneuvers (p = 0.026) and days until remission (p = 0.024).
Quality of Life Evaluation: DHI, VAS and Short FES-I Scores
Table 4 summarizes the quality of life test scores in each group. Improved quality of life was reflected in significantly lower DHI total scores and VAS scores at follow-up visits after successful treatment (p < 0.001) (Figure 5). Nevertheless, the comparison of scores between the two treatment groups did not show statistically significant differences in either first-visit scores or post-treatment scores (Table 4).
Recurrence
Seventeen of sixty-three (26.98%) subjects suffered a recurrence within the six-month follow-up. The TRV group had a recurrence rate of 25% (8/32) and the CRM group of 29.03% (9/31), showing no statistically significant differences (p = 0.718). Of all the recurrences, only four patients across both groups experienced a canal shift. All cases shifted from horizontal semicircular canal (HSC) canalolithiasis to the PSC; three subjects were in the TRV group and one patient in the CRM group.
There was a moderate positive correlation between BMI and the number of recurrences (r = 0.514; p = 0.017).
Discussion
The present study represents one of the first prospective randomized studies comparing the treatment of common BPPV patients with mechanical chairs versus manual repositioning maneuvers. It is also one of the few prospective, randomized studies with long-term follow-up in which patients were treated by the same team every time and referred directly to the ENT department right after their first visit to the emergency room. Many studies on this topic do not have a control group [19], select only complex patients with multicanal BPPV [26], or compare the efficacy of different mechanical rotational chairs with manual maneuvers [21].
Likewise, many other studies are retrospective [17,26], do not include long-term follow-up [27], or have overly long intervals between visits while the patients are affected by BPPV [19].
In our study, both cohorts were comparable, as they did not differ with respect to demographic or clinical baseline characteristics. The average age of our population was slightly higher than in some published series [5], whereas other studies report a similar mean age [17]. The female-to-male ratio was higher, as we found more women than men, as seen in the majority of neurotological disorders [4,28]. Regarding the affected canal and side, findings differ between studies: some report a similar ratio of left/right involvement [28], whereas others report more frequent involvement of the right labyrinth [29]; in addition, we found a significantly higher incidence of bilateral BPPV, which can be challenging to treat [4]. These differences between studies can be explained by the challenges many clinicians face when diagnosing BPPV, as there are no clear objective tests for it, and the diagnosis depends on the clinical experience of the physician and on the collaboration of the patient [30]. These demographic differences may also affect treatment efficacy but are mostly dependent on the type of hospital and neurotology unit providing the treatment.
Treatment Efficacy
Our results, with 55.56% of our patients cured after the first visit, are comparable to other studies, where success rates range from 34.2% to 87% [13,28,31,32]. These findings align with the literature because every patient in our study received manual treatment during their first visit. This is due to patient recruitment being conducted through the emergency department, where only an examination table is available, after diagnosis and initial treatment. This represents a common clinical scenario, as BPPV patients are treated with manual maneuvers in most cases. However, this is a limitation of our study, and in future studies it would be beneficial to compare two groups of patients with acute vertigo, each receiving the specific maneuver for their respective treatment group (TRV or CRM) from the first visit.
This wide range of success rates in the literature could be explained by the variety of protocols followed in each study, the fact that not all patients co-operate equally in the execution of the maneuver [15], the differences in follow-up periods, or the experience of the treating clinician, as some reports were run by general practitioners [33].
Regarding treatment, both modalities were equal in terms of the number of maneuvers and days to achieve remission of BPPV, as no statistically significant differences were found. Research on this topic yields very variable results. Tan et al. conducted a prospective randomized study on PSC-BPPV that showed statistically significant differences between groups in the mean number of maneuvers at the first follow-up visit; however, there were no differences in long-term follow-up [20]. On the other hand, one study that retrospectively evaluated the treatment of multicanal BPPV with TRV or manual treatment showed no statistically significant differences [26]. There is only one other study that closely resembles ours in design, even though it only included short-term follow-up; it did not show differences between groups in terms of the number of maneuvers [27]. All previously published studies only evaluated the number of treatments but did not report the time interval between visits/treatments. We consider this an important point, as long periods between follow-ups can lead to biases due to spontaneous resolution of BPPV [34][35][36]. This was considered when designing our study protocol to minimize the possibility of spontaneous resolution.
The same studies that did not find differences in the number of maneuvers, especially the retrospective study by Baydan-Aran et al. [26] and the one by Luryi et al. [36], showed a similar mean number of treatments in each group, which matches our study.
A stratification of our population into age groups shows that the older the subjects, the more maneuvers and days until resolution they needed. Nevertheless, we can see a tendency for subjects in the TRV group to achieve resolution faster than the manual treatment group (Figure 4). This can be explained by the fact that the elderly have more restricted mobility of the neck and of the whole body when the repositioning maneuvers are performed [15], and they might represent a selected at-risk population who could benefit from treatment using a TRV chair.
Quality of Life
Questionnaires answered by our population show a statistically significant improvement in scores, with no significant differences between groups. This is consistent with other research on this topic analyzing BPPV groups treated using mechanical rotational chairs [27,37,38] or manual procedures [6,8].
Recurrence
Up to 50% of patients experience a recurrence of BPPV within the next 10 years, with 80% of these occurring in the first year [38]. Many studies evaluating recurrences of BPPV treated with MRCs show that recurrences happen mostly in the first six months and affect about 25% of this population, which matches our findings (26.98%) [17]. However, many of these studies do not have a long-term, uniform, and consistent follow-up for each subject. Thus, a longer follow-up period should be considered in our study to evaluate recurrence rates beyond the six-month period.
Conclusions
In conclusion, the TRV chair proves to be a safe tool for both diagnosing and treating BPPV, without an increased risk of recurrence. However, it is not superior to manual treatment in conventional BPPV patients referred to our unit after a first treatment on the examination table in the emergency room. Nevertheless, the TRV chair seems to be more effective in elderly patients and more convenient in populations with mobility difficulties.
Quality of life improves considerably after the resolution of BPPV, no matter the treatment modality. Further studies need to be conducted to analyze the long-term follow-up of these two groups of patients prospectively. It would also be appropriate to have a larger population in order to study larger subgroups of patients (age, type of BPPV, risk factors, etc.).
Figure 2. Error bar showing the number of maneuvers needed to achieve successful treatment in each group. Student's t-test with statistical significance level defined as p-value < 0.05.

Figure 3. Error bar showing the number of days needed to achieve successful treatment in each group. Student's t-test with statistical significance level defined as p-value < 0.05.

Figure 4. Error bar showing fewer maneuvers in the TRV group in patients more than 65 years old. Student's t-test with statistical significance level defined as p-value < 0.05.

Figure 5. Bar chart showing the evolution of DHI total scores in initial and follow-up visits. ANOVA test with statistical significance defined as p-value < 0.05.

Table 3. Number of maneuvers.

Table 4. Quality of life scores. Student's t-test with statistical significance level defined as p-value < 0.05. 1 Short Falls Efficacy Scale International (Short FES-I). 2 Visual Analogue Scale (VAS). 3 Dizziness Handicap Inventory (DHI).
Free Style Perforator Flaps for Aesthetic Facial Reconstruction
BACKGROUND Functional and cosmetic outcomes affect reconstruction of the face more than any other region of the body. Using a freely designed flap based on a predetermined perforator, allowing a wide range of movement and manipulation, can give an optimal outcome. We present our clinical experience with free-style facial perforator flaps, the surgical technique, and complications. METHODS Thirty patients with post-tumor-resection defects of the face were reconstructed with free-style local perforator flaps between January 2014 and November 2016. Doppler was used to identify the perforator vessels preoperatively. RESULTS Twenty-two clinical cases had no complications. Four had venous congestion that resolved spontaneously, three had superficial necrosis of the distal third, and one suffered a hematoma. CONCLUSION Free-style perforator flaps were applied to achieve better cosmetic facial reconstruction, allowing a one-stage procedure and decreasing donor-site morbidity. Modern anatomical understanding, good planning, and meticulous surgical technique can affect clinical results.
1. Department of Plastic Surgery, Faculty of Medicine, Menoufia University, Menoufia, Egypt. 2. Department of Plastic Surgery, Faculty of Medicine, Tanta University, Tanta, Egypt.
INTRODUCTION
Functional and cosmetic outcomes affect reconstruction of the face more than any other region of the body. Local facial flaps are an excellent option due to the color and texture match of their tissues. The excellent vascularity of facial skin ensures a reliable blood supply to pedicled or islanded local flaps. Limited range of motion and bulkiness at the pedicle site are among the limitations that confront local pedicled flaps and may require secondary surgical revision [1]. Hofer [2] was the first to use for facial reconstruction the free-style approach that was introduced by Mardini and Wei in 2004 [3].
Free-style perforator flaps can be harvested depending on Doppler signals in a specific region [2-4]. A large arc of rotation, due to the thin pedicle, allows these flaps to reach different defects in the face [5]. The need for primary closure of the harvesting site and the pedicle location [6] can limit reconstruction with these flaps to small and medium-size facial defects [7]. Gunnarsson and Thomsen (2016) reported a maximum facial perforator flap size of 9×5 cm [4]. Although Taylor and Palmer in 1987 [8] studied perforator arteries on cadavers, and others have recently studied perforator vessels in the face [2,9,10], clinical studies discussing the use of local perforator flaps for facial reconstruction are still scarce, and so the clinical applications of these flaps have not been adequately investigated. The surgical advantages of local perforator flaps have been well described [7,11,12], unlike their cosmetic manipulation or complications. We aimed to present our clinical experience with the surgical technique, cosmetic outcome, and complications of predetermined free-style facial perforator flaps.
MATERIALS AND METHODS
Thirty patients with facial defects after tumor excision were reconstructed with free-style local perforator flaps between January 2014 and November 2016 at the Plastic Surgery Department, Tanta University. Patients' ages ranged from 35 to 72 years; 16 were males and 14 were females. Twenty flaps were based on facial artery perforators at the nasolabial region, 7 flaps on infraorbital artery perforators, and 3 in the post-auricular area.
Skeletonization of the perforator vessel was done in 12 cases to increase the range of movement, while in the other 18 cases no skeletonization was done. All the flaps were of the propeller type. The arc of rotation of the propeller flaps ranged from 90 to 180 degrees, and 4 of them were bent over to fit into the defect. General anesthesia was used in 20 patients, while the other 10 patients were operated on under local anesthesia. Preoperatively, the excision margins and the expected defect were established; then, Doppler was used to identify the perforator vessels around the expected defect.
One suitable perforator was chosen, and the flap was designed to fit the expected defect. Intraoperatively, the chosen perforator was explored before raising the flap to make sure that it was suitable for the vascular supply of the flap and for its proposed movement into the defect.
Whether the perforator would be skeletonized or not was governed by the movement needed for the flap to be inset without any compromise of its blood supply (Figures 1-4).
RESULTS
Histopathological examination of the resected lesions showed 22 basal cell carcinomas, 6 squamous cell carcinomas, and 2 melanomas. No recurrence was observed during the follow-up period. Twenty-two cases had no complications; three had venous congestion that resolved spontaneously within 2 days (these were the three flaps that had been bent over to fit into the malar defect); three had superficial necrosis of the distal third; and two suffered hematomas that needed evacuation under local anesthesia (Figures 5-8).
DISCUSSION
The knowledge of perforator-based design evolved from the angiosome concept introduced by Taylor and Palmer in 1987 [8]. Blondeel et al.
described a perforating vessel as a vessel that has its origin in one of the axial vessels of the body and passes through different structures, perforating the deep fascia before reaching the skin [13]. Hofer et al. in 2005 described the facial artery perforator flap and indicated the location of the perforators [2]. D'Arpa et al. published a series of nasolabial perforator flaps for alar reconstruction and reported that facial artery perforators pierce the superficial musculoaponeurotic system layer, owing to the absence of deep fascia in the face [14]. On the other hand, other studies suggested that localization of suitable perforator vessels in the face cannot be guaranteed by Doppler, due to the anatomical features of these regions [7,12,14,15,16]. With experience, we could rely on the use of Doppler for planning perforator flaps of the face, despite the superficial position of the axial arteries of the face, which can be confused with the perforators. Free-style perforator flaps can be based on one or more perforators, obtaining a reliable blood supply together with great versatility in design, free choice of orientation, an arc of rotation up to 180°, a wider range of motion compared with local flaps, and primary closure of the donor site along the relaxed skin tension lines to minimize scarring.
These flaps, based on a perforator from a known axial vessel, can be raised in different areas such as the nasolabial sulcus and the peri-oral, peri-zygomatic, and submental regions [2,4,6-15,17,18]. During dissection, it is necessary to leave a cuff of subcutaneous fatty tissue around the artery to avoid pedicle kinking and to choose a safe direction of rotation in 180° propellers before insetting [18]. We recommend accurate selection of patients and identification of possible risk factors which can lead to complications, such as diabetes, smoking, radiation, and immunosuppression [19]. In conclusion, free-style local perforator flaps are useful for the cosmetic reconstruction of complex facial defects because of their versatility, wide arc of rotation, similar texture, and color match, with pleasing results.
CONFLICT OF INTEREST
The authors declare no conflict of interest.
Performance Assessment of Green Filtration System with Evaporative Cooling to Improve Indoor Air Quality (IAQ)
This paper focuses on the performance assessment of a green filtration system incorporating evaporative cooling, used to enhance indoor air quality. The system was invented in an attempt to provide a clean indoor environment as a practical solution for certain places. Indoor air quality (IAQ) and public health risk are related to each other, since most of the city population stays indoors rather than outdoors. Indoor air contamination originates from mixed sources such as volatile organic compounds (VOCs) and indoor airborne particulate matter (PM). The results show that the green filtration system manages to filter PM and VOCs from the air, but not as efficiently as modern air filters on the market. Furthermore, the evaporative cooling system offers substantial energy savings within hot and arid climatic regions.
I. INTRODUCTION
Indoor air quality (IAQ) and public health risk are related to each other, since the percentage of the city population that stays indoors rather than outdoors is around 85-90% [1]. The United States Environmental Protection Agency classifies indoor air quality as one of the five national health threats [2]. Indoor air contamination originates from mixed sources such as volatile organic compounds (VOCs) and indoor airborne particulate matter (PM). The green filtration system has progressively grown into a favored approach for indoor air contaminant reduction [3], and it additionally imparts an encouraging workplace decoration. A passive green filtration system can treat indoor air pollutants such as volatile organic compounds (VOCs), which are predominantly extracted by the rhizosphere bacterial colony of the greenery [4], plus CO2, which is removed during the photosynthesis process [5]. By contrast, dynamic systems employ forced ventilation, driving air through the substrate and planting-bed foundation system, which elevates the filtration and air purification capability of the system. It has been suggested that a dynamic green filtration system for controlling indoor air contaminants is a more practical alternative to conventional mechanical practice, and it inherently assists in diminishing city greenhouse gas emissions [6].
Evaporative cooling is an approach that can be utilized to cool air, water, or both by exploiting the immense enthalpy of water vaporization. The underlying concept is to convert sensible heat into latent heat through the evaporation of water, generating a decline in temperature and a rise in the humidity of the surrounding air. In contrast to typical air conditioning practice, it is estimated that evaporative cooling systems may demand only around one-fourth of the electric consumption of a vapour compression system used for air conditioning [7]. Moreover, evaporative cooling can also offer other substantial benefits, including coherent operation, economy in terms of set-up and maintenance costs, superb indoor air quality as fully fresh air is used, little pollution, and reasonable primary energy use [8], [9]. Regarding thermal comfort, a stand-alone evaporative cooling structure can satisfy occupant requirements in tropical and dry conditions, as the final relative humidity remains in the range of 60-70%. Nevertheless, it is difficult to achieve a comparable outcome in humid regions, because near-saturated inlet conditions prevent the required temperature drop. In such situations, hybrid structures such as desiccant-assisted evaporative cooling are desired.
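To make the heat balance concrete, here is a small sketch (ours, not from this study; the effectiveness value and air states are illustrative assumptions) of the standard saturation-effectiveness model for a direct evaporative cooler:

```python
# Direct evaporative cooling: sensible heat is converted to latent heat,
# so the outlet dry-bulb temperature approaches the inlet wet-bulb temperature.
# Standard model: T_out = T_in - effectiveness * (T_in - T_wb)

def outlet_dry_bulb(t_in_c: float, t_wb_c: float, effectiveness: float) -> float:
    """Outlet dry-bulb temperature of a direct evaporative cooler (deg C)."""
    assert 0.0 <= effectiveness <= 1.0
    return t_in_c - effectiveness * (t_in_c - t_wb_c)

# Illustrative values only: 31 degC inlet air, 24 degC wet bulb, 60% pad effectiveness.
print(outlet_dry_bulb(31.0, 24.0, 0.60))  # -> 26.8 degC
```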
This paper presents an experimental assessment of a green filtration system with evaporative cooling in terms of several factors affecting air filtration, such as temperature, humidity, air velocity, particulate matter (PM) removal, CO2 level, and volatile organic compounds (VOCs), together with a feasibility analysis of evaporative cooling in Malaysia. The outcome of this study focuses on providing a better understanding of green filtration combined with a cooling system.
A. Testing Room
The study was conducted at Universiti Kuala Lumpur Malaysia France Institute (UniKL MFI), Section 14, Bandar Baru Bangi, Malaysia. The location experiences hot and rainy weather within the humid tropical zone, with a nearly uniform temperature throughout the year and high total rainfall. Humidity in the area is within the range of 65%-85%, with average temperatures ranging between 26.5 °C and 31 °C [10].
As for the room specification, it is a room located at Level 2 with a 100% panel ceiling board, and the total floor area is approximately 512 ft². The ceiling height is 11 ft. There are three (3) windows, but only two (2) can be opened for ventilation purposes. There is a single access door to the room. The appliances in the room are six (6) fluorescent lamps of 36 W each, two (2) ceiling fans with a power input of 75 W, one (1) ceiling-exposed VRV air conditioner with a capacity of 2.0 HP, and two (2) sets of ducting with fans for educational purposes.
C. Data Collection
Data collection was done for the test room and the test kit to measure the effectiveness of the green filtration system and the evaporative cooling effect. Ambient and room temperature and relative humidity were gathered with a Kimo KH50 Data Logger for 28 days, from 28 March to 24 April 2019, a period long enough to capture the fluctuation of the data due to weather changes. The Thermal Comfort instrument was used to measure the CO2 level, room pressure, relative humidity, and temperature of the room from 11 April to 24 April 2019. For the test kit performance, data collection focused on the temperature, relative humidity, air velocity, particle measurement, and TVOC measurement at the front and rear of the test kit. The instruments used for measurement at the test kit are shown in Table I.
The data for the test kit were taken over 2 days, 22-23 April 2019, from 10 am to 12 pm, for four (4) different kinds of plants in a closed room without any ventilation and with only the air conditioner turned on. For the afternoon session, when the temperature was at its peak, data were taken from 2 pm to 4 pm in an open room where the air conditioner was turned off and the ceiling fan was turned on; air circulation into the test room was allowed via the window and door. On the next day, VOC measurements were taken at 10 am; VOC gases such as formaldehyde, benzene, xylene, toluene, and even carbon monoxide were released into the closed room 5 minutes before the readings. The VOC measurements were taken at set distances of 0.25 m, 0.5 m, and 1.0 m before the air went through the test kit, as illustrated in Fig. 2. All readings after the test kit were taken at a distance of 15 cm in front of the test kit. The effectiveness of evaporative cooling was determined in two steps: first, only the blower fan was turned on instead of both the blower fan and the pump; the second measurement was recorded with both the blower fan and the pump turned on at the same time. At every measuring point, three (3) to four (4) readings were taken and averaged to obtain the most representative reading.
D. Effectiveness
Measuring the effectiveness of the botanical components required different types of plants, as plant structure and arrangement are the main factors in filtering the air. For this purpose, two (2) types of structure were selected: potted plants, namely the Bamboo Palm, Chrysanthemum and Ficcus Plant, and a vining plant, the Money Plant. To assess the effectiveness of the green filtration system with evaporative cooling, data were collected and expressed in graph form. The performance of the test kit was calculated using the following formulas: equation 1 determines the effectiveness of evaporative cooling in reducing the temperature of air passing through the test kit, while equation 2 determines the RH escalation in the air after passing through the test kit.
Equation 3 is similar to the above but uses different variables, as it determines the filtration effectiveness for particulate matter (PM) at each concentration. The PM fractions measured in this experiment ranged from 0.3 µm to 10 µm.
Equation 4 was used to verify the percentage decline in VOCs after the test kit. It was applied to two gasses, formaldehyde and benzene. A sketch of these calculations is given below.
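The equations themselves did not survive extraction in this copy of the text, so the sketch below assumes each is a simple percentage change between readings taken before and after the test kit, which matches how the results are reported later. The numeric values in the example are illustrative, not measured data.

```python
# Minimal sketch of the effectiveness calculations (equations 1-4), under the
# assumption that each is a percentage change between before/after readings.
# The exact equations are not reproduced in the source text.

def reduction_pct(before: float, after: float) -> float:
    """Percentage reduction of a quantity across the test kit (eqs. 1, 3, 4)."""
    return (before - after) / before * 100.0

def escalation_pct(before: float, after: float) -> float:
    """Percentage increase of a quantity across the test kit (eq. 2)."""
    return (after - before) / before * 100.0

# Illustrative values only:
print(f"Cooling effectiveness: {reduction_pct(30.5, 26.58):.2f} %")   # eq. 1
print(f"RH escalation:         {escalation_pct(65.0, 81.05):.2f} %")  # eq. 2
print(f"PM5.0 removal:         {reduction_pct(1000, 684):.2f} %")     # eq. 3
print(f"Formaldehyde removal:  {reduction_pct(0.40, 0.20):.2f} %")    # eq. 4
```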
III. RESULTS AND DISCUSSION
A. Ambient Temperature and Relative Humidity
Both ambient temperature and relative humidity were collected using the Kimo KH50 Data Logger, as illustrated in Fig. 3. For the CO2 level in the test room, a total of 32,263 readings were recorded. They showed a fluctuating trend but remained within the acceptable range of 500-700 ppm, apart from one incident in which the CO2 level rose above 700 ppm, reaching exactly 725.6 ppm at 6.35 am on 4th April 2019. The average CO2 level in the test room was 592 ppm. During this period, the test room was ventilated through the windows and the ceiling fan was turned on. No occupant was allowed to enter the test room except during irrigation of the plants. For a closer look at the CO2 profile in the closed room, where no ventilation was allowed, 24 hours of data taken before the experiment, from 11.00 am on 22nd April 2019 to 11.00 am on 23rd April 2019, were used for monitoring purposes, as illustrated in Fig. 6.
Fig. 6. CO2 level vs. time in the closed test room over 24 hours.
Nevertheless, the CO2 profile taken for graphing reveals an interesting piece of information. Most of it was recorded under open ventilation and closely resembles the typical daily variation of CO2, also known as the seasonal and diurnal CO2 pattern. To show that the closed room behaves differently, 24 hours of data during which no ventilation was allowed were plotted; the result was striking, in that plant photosynthesis and respiration could be observed at certain times.
D. Reading Before and After Test Kit (Closed Room)
Reading Before Test Kit - Morning
Table II and Table III display the readings for temperature and relative humidity taken before and in front of the test kit, under the test conditions below. These readings were taken as the reference for comparison in the subsequent experiments. Table IV shows the readings when the test kit's water pump was turned on. This provided an evaporative cooling effect: water from the aquarium underneath the planter boxes circulated from the pipes above the planter boxes back to the aquarium through the cooling tower infill, which acts as a cooling pad. The air dry-bulb temperature was further reduced when the water pump was ON.
As shown in Fig. 7, the highest air velocity was through the Chrysanthemum plant, at 1.55 m/s, and the lowest was through the Money Plant, at 1.14 m/s; this can be attributed to differing blockage of the airflow by the leaves of the different plant types. Fig. 8 shows the dry-bulb temperature data after the pump was turned on. The temperature decreased for all plants; the highest reading was 27.68 °C for the Bamboo Palm and the lowest was 26.58 °C for the Ficcus Plant. Based on Fig. 9, the RH value increased at all points measured. The maximum was 81.05% for the Money Plant and the minimum was 75.95% for the Bamboo Palm, with RH increasing gradually from Bamboo Palm to Chrysanthemum, Ficcus Plant and Money Plant. The RH increased due to the evaporative cooling effect of the water stream flowing down through the cooling tower infill, which directly brings down the air temperature, as shown in Fig. 9. Fig. 9. Test kit RH measurement in a closed room.
Airborne particle counts were taken using a CEM Particle Counter Meter, in a closed room rather than an open room, because closed-room readings were almost the same as the environmental readings. Three (3) sets of data were taken: the first set before the test kit; the second after the test kit with only the blower fan turned on; and the last after the test kit with both the blower fan and the pump turned on. Particle concentrations were measured over the range 0.3 µm to 10 µm, as shown in Table V.
F. Total Volatile Organic Compounds (TVOC)
For the TVOC test, several TVOC products were prepared to produce gasses such as ammonia, formaldehyde, benzene, trichloroethylene, xylene and toluene. Table VII lists these TVOC products and Table VIII shows the TVOC readings. As shown in Fig. 12, the highest formaldehyde removal efficiency was 50.0%, by Chrysanthemum, and the lowest was 18.18%, by Money Plant. For benzene, the highest was 56.65%, also by Chrysanthemum, and the lowest was 10.84%, by Bamboo Palm.
As for the factors affecting the different air filtration measurements, such as air velocity, temperature and relative humidity, it was observed that plant structure affects air velocity, as some of the leaves and the main stem block the airflow. The temperature profile was measured under two conditions, an open room and a closed room. In the closed room, the Ficcus Plant recorded the highest temperature drop, 4.7%, after the evaporative cooling was turned on, which also increased the humidity of the air. Finally, regarding the feasibility of evaporative cooling in Malaysia, this study found that the potential use of direct evaporative cooling in the Malaysian capital is very significant and can exceed 95% for the base case if humidity control is not required. Where humidity control is required, the energy needed for dehumidification can be quite significant. It was also found that the potential use of direct evaporative cooling is significantly influenced by the characteristics of the building and the associated cooling system, for example the design parameters. From this experiment, the test kit power input is much lower, at just 196 W, compared with about 746 W for a 1.0 HP air conditioner and 250 W for a typical portable air purifier. As for the evaporation and drift losses in the system, the water level in the test kit aquarium, which has a capacity of 70 liters, dropped by 1.5 cm every 2 hours.
IV. CONCLUSIONS
In conclusion, there has been frequent research on the performance of today's market filters and portable in-room air purifiers, but few studies on the effectiveness of botanical components for air filtration. The highest particulate removal efficiency was 31.58% for PM5.0 and the lowest was 2.87% at PM0.5. For TVOC removal, the highest formaldehyde removal efficiency was 50.0%, by Chrysanthemum, and the lowest was 18.18%, by Money Plant; for benzene, the highest removal efficiency was 56.65%, demonstrated by Chrysanthemum, and the lowest was 10.84%, by Bamboo Palm. Based on the particle count and TVOC measurement results, it can be concluded that both types of plants can be implemented in the system as long as the arrangement is correct. Moreover, the selection of particular plants for particular applications is important. For example, the Money Plant and the Bamboo Palm are well suited to smoking areas, as they can filter most TVOC gasses such as formaldehyde, benzene and carbon monoxide, as well as particulate matter, as proven by the data presented above. Chrysanthemum excels at filtering and eliminating the toxin ammonia according to the NASA Clean Air Study, although we were unable to acquire those data owing to a lack of equipment and funds. Additionally, the system's ability to remove VOCs and CO2 and to modulate temperature and humidity makes the device superior to most non-biological systems as a general air quality maintenance device. Nonetheless, further controlled laboratory experiments are needed to investigate the long-term performance of the system and to better describe the simultaneous removal of PM, VOCs and CO2. Such experimental research will provide empirical data from which to develop a simulation model that can be used to optimize the system's design, as well as to advance the implementation of the device.
The relationship between blood pressure and functional fitness of older adults in Korea
Hypertension, also known as high blood pressure (BP), is a critical health issue that can cause cardiovascular disease. It is observed more frequently in older adults. Thus, this study aimed to identify the functional fitness and body composition factors that significantly influence both systolic and diastolic BPs in older adults. Data from 155,256 older adults (51,751 men [33.3%] and 103,505 women [66.7%]) who underwent functional fitness tests between 2013 and 2018 were analyzed. The following seven functional fitness measures were used: (a) aerobic endurance (2-min step), (b) upper body muscle strength (hand grip strength), (c) lower body muscle endurance (chair sit-and-stand), (d) flexibility (sit-and-reach), (e) agility (Timed Up and Go), and (f) body composition (body mass index [BMI] and body fat percentage). Systolic and diastolic BPs were used as outcome variables. In examining the proposed relationships, the regression analysis revealed that BMI, body fat percentage, sit-and-reach, 2-min step, hand grip, chair sit-and-stand, and Timed Up and Go were significantly associated with systolic and diastolic blood pressures.
INTRODUCTION
Hypertension, a major cause of cardiovascular diseases, including heart disease and stroke, is the most important factor affecting the global disease burden and is a significant public health issue (Wang et al., 2017). Its prevalence is growing worldwide, which may be attributed to population aging, poor diet, alcohol consumption, physical inactivity, being overweight, and persistent exposure to stress (World Health Organization, 2013).
High blood pressure (BP) is a major modifiable risk factor for cardiovascular diseases and mortality in older people. It is a risk factor for myocardial infarction, stroke, congestive heart failure, end-stage renal disease, and peripheral vascular disease (Burt et al., 1995; Edwards et al., 2007). In young adults, BP gradually increases with age. By the age of 60 years, the majority of the population develops hypertension, and one in four people aged ≥70 years suffers from hypertension (Burt et al., 1995). The Framingham Heart Study has demonstrated that people who maintain a normal BP until the age of 55-65 years have a 90% risk of developing hypertension if they survive until the age of 80-85 years (Guo et al., 2012). The onset and progression of hypertension differ according to sex. The incidence is higher among men until young and middle adulthood, whereas it becomes more prevalent among women in older adulthood. The changes in systolic BP (SBP) and diastolic BP (DBP) also differ according to age. SBP increases throughout an individual's lifetime, whereas DBP increases slowly until approximately 60 years of age, after which it gradually declines or remains unchanged (Burt et al., 1995). Hence, as age increases, SBP increases, whereas DBP decreases, with a concomitant increase in pulse pressure (Kim et al., 2023). BP is associated with the risk of cardiovascular disease and all-cause mortality.
From middle to old age, increased BP is strongly correlated with cardiovascular disease and overall mortality, without evidence of a lower threshold down to at least 115/75 mmHg (Lewington et al., 2002; Noh et al., 2022). Hypertension is the most important global risk factor for disease burden (Kim et al., 2023). In older people, the risk of cardiovascular events and mortality is directly related to SBP and pulse pressure, although it is inversely related to DBP (Franklin et al., 1999; Rigaud and Forette, 2001), reflecting the changing physiological pattern of BP with aging. Lower DBP is a predominant marker of higher pulse pressure, which is a stronger determinant of risk (Franklin et al., 2015).
Hypertension management guidelines emphasize routine physical activity as an early treatment to prevent, reduce, and halt the progression of hypertension, and report that aerobic and resistance exercises contribute to lowering BP. High physical activity enhances physical fitness (Haskell et al., 2007), and high body mass index (BMI), low functional activity, and low functional fitness are modifiable risk factors for hypertension. Functional fitness is a stronger predictor of cardiovascular disease than functional activity (Crump et al., 2016; Nielsen and Andersen, 2003) and a better indicator of habitual functional activity than self-reported activity (Edwards et al., 2007; Williams, 2001).
Since 2013, the Korean government has been working toward increasing functional activity in individuals aged ≥65 years through the National Fitness Award Project, a national testing and consultation service to promote the functional fitness and health of all citizens. This study aimed to identify the effects of functional fitness on BP in older Korean adults.
Research data
This cross-sectional study used data from the Korea Institute of Sports Science Fitness Standards as part of the National Fitness Award Project. In Korea, there are 81 test centers (75 fixed centers and 6 mobile measurement teams) across 17 regions. This study tested the functional fitness of older adults. The participants in this study were aged 65-90 years. Raw functional fitness test data from 155,256 individuals (51,751 men [33.3%] and 103,505 women [66.7%]) who voluntarily participated at the centers from 2013 to 2018, along with their demographic information, were used in this study.
Functional fitness measurement
The functional fitness battery developed by the project for older adults comprised the following seven measures: (a) aerobic endurance (2-min step), (b) upper body muscle strength (hand grip strength), (c) lower body muscle endurance (chair sit-and-stand), (d) flexibility (sit-and-reach), (e) agility (Timed Up and Go), and (f) body composition (BMI and body fat percentage). Height, weight, and BP were also recorded (Jeoung and Pyun, 2022). All functional fitness parameters were measured at the centers. Each functional fitness test showed high internal consistency, with satisfactory reliability statistics (r) ranging from 0.70 to 0.93 (Choi et al., 2014). All the items were measured by certified national professional health and fitness instructors working full-time at the centers.
Data analysis
Data were analyzed using IBM SPSS Statistics ver. 21.0 (IBM Co., Armonk, NY, USA). Descriptive statistical analysis was performed to calculate the means, standard deviations, frequencies, and percentages of the measures, and a paired t-test was used to determine sex differences. Prior to performing the main analysis, a correlation test was conducted to examine the associations between the variables. Multiple regression analyses were performed to test the relationships between functional fitness factors and SBP and DBP. The significance level was set at P<0.05.
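For readers who want a concrete picture of this analysis step, the sketch below shows how such a multiple regression could be run in Python with statsmodels. The file name and column names are hypothetical placeholders; the source does not describe the dataset layout, and SPSS was the tool actually used.

```python
# Minimal sketch of the multiple regression step described above, assuming a
# CSV file with one row per participant. All file and column names below are
# hypothetical; the study itself used IBM SPSS, not Python.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fitness_data.csv")  # hypothetical file name

predictors = ("bmi + body_fat_pct + sit_and_reach + two_min_step"
              " + grip_strength + chair_sit_stand + timed_up_and_go")

for outcome in ("sbp", "dbp"):  # systolic and diastolic blood pressure
    model = smf.ols(f"{outcome} ~ {predictors}", data=df).fit()
    print(model.summary())  # coefficients and p-values (significance at 0.05)
```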
RESULTS
Table 1 presents the demographic information, sex differences in BP, and functional fitness of the participants. Altogether, 155,256 participants were included, comprising 51,751 men and 103,505 women, with the women being older than the men. Although the men were taller and heavier than the women, the women had higher BMI values and body fat percentages than the men. Moreover, the men had higher SBP and DBP than the women. Among the fitness tests, the values for grip strength, chair sit-and-stand, and 2-min step were higher for the men, who demonstrated better performance than the women, although the sit-and-reach values were higher for the women than for the men. By contrast, the men performed better in the Timed Up and Go test than the women did (Table 1).
According to the BP standard recommended by the Korean Society of Hypertension (2021), 44.9%, 13.75%, 23.7%, and 17.65% of the participants had normal BP, elevated BP, prehypertension, and hypertension, respectively. Furthermore, the men had a higher prevalence of hypertension than the women (Table 2).
Table 3 presents the results confirming the differences in BP according to the levels of the physical function items. For all items, a significant difference in BP was observed according to physical function level (P<0.001). Specifically, for obesity-related BMI, the higher the BMI, the higher the BP (post hoc, obesity > overweight > normal > underweight; P<0.001). Moreover, it was also confirmed that the weaker the grip, the higher the BP (post hoc, 4>3>2>1; P<0.001), and the lower the score, the lower the BP (P<0.001). However, low scores in the sit-and-reach, chair sit-and-stand, Timed Up and Go, and 2-min step tests were associated with higher BP (P<0.001).
As presented in Table 4, to identify the risk factors for hypertension, models 1, 2, and 3 were adjusted for the following parameters: fitness level, BMI, age, sex, and disability level. Compared with normal BP, the increase in BP to borderline hypertension and hypertension showed a significant risk depending on a person's physical fitness level, obesity, age, and sex. Specifically, age and sex differed by 1.025 times (odds ratio [OR], 1.025; 95% confidence interval [CI], 1.016-1.035; P<0.001), and hypertension by 1.033 times (OR, 1.033; 95% CI, 1.023-1.044;
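Since Table 4 reports odds ratios with 95% confidence intervals, the following sketch shows how such values are typically obtained from a logistic regression. The binary outcome coding and column names are assumptions, not details given in the source.

```python
# Minimal sketch of deriving odds ratios with 95% confidence intervals from a
# logistic regression, as reported in Table 4. The outcome coding and column
# names are assumptions; the exact model specification is not in the source.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("fitness_data.csv")  # hypothetical file name

logit = smf.logit(
    "hypertension ~ fitness_level + bmi + age + sex + disability_level",
    data=df,
).fit()

# Odds ratios are the exponentiated coefficients; applying the same transform
# to the confidence bounds gives the 95% CIs.
summary = pd.concat([np.exp(logit.params), np.exp(logit.conf_int())], axis=1)
summary.columns = ["OR", "CI 2.5%", "CI 97.5%"]
print(summary)
```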
DISCUSSION
This study aimed to determine the prevalence of hypertension and the effects of functional fitness factors on BP among older adults (≥65 years) in South Korea, and compared functional fitness and BP according to sex. Obesity was more prevalent among the women than among the men, and the men performed better in the grip strength, chair sit-and-stand, 2-min step, and Timed Up and Go tests, whereas the women performed better in the sit-and-reach test. The prevalence of hypertension tends to increase with age, and it was higher in men than in women in all age groups. The prevalence of systolic hypertension increases with age, whereas that of diastolic hypertension tends to decline. The prevalence of systolic hypertension was higher among men than women in the 60-74 year group, whereas that of diastolic hypertension was higher among women than men in the 80-90 year group. Lee and Lee (2012) reported that the incidence of hypertension tended to increase with advancing age and was higher among men than among women. The Korean Society of Hypertension (2021) has reported similar results. Furthermore, the rates of increase of both SBP and DBP were higher in men than in women. However, Burt et al. (1995) observed that SBP increased with age, with a higher rate of increase in men than in women, while DBP generally increased until the age of 50-59 years but declined with advancing age. These results are inconsistent with those of this study. Further research is needed to examine the relationship between the increasing prevalence of hypertension and environmental factors and race, in addition to previously known contributing factors such as diet, alcohol consumption, physical activity, being overweight, and stress. Burt et al. (1995) reported that the prevalence of hypertension differs according to race, and our findings revealed consistent trends in an elderly Korean population. Our study demonstrated that functional fitness factors were significantly associated with both SBP and DBP, suggesting that they affect SBP and DBP elevation. Many previous studies have highlighted the importance of regular physical activity and nutritional management in hypertension management; in particular, high physical activity contributes to enhanced physical fitness. Regarding the relative risk of cardiovascular disease in relation to physical activity and physical fitness, the prevalence of cardiovascular disease declined by approximately 25% with increased physical activity and by 60% with increased physical fitness. Interestingly, although physical activity gradually decreased the relative risk of cardiovascular disease, even a slight improvement in physical fitness out of the bottom 25% drastically diminished the relative risk of cardiovascular disease. Hence, physical fitness has more health benefits than physical activity alone (Williams, 2001).
In this study, hypertension was linked to health-related fitness parameters, and poor fitness parameters contributed to elevated BP. Crump et al. (2016) reported that physical fitness factors help reduce hypertension. Choi et al. (2014) also demonstrated that the hypertensive population had poor cardiopulmonary endurance, muscle strength, and balance, based on which they recommended exercises that improve cardiopulmonary endurance and muscle strength. Regarding chronic levels of inflammation, Edwards et al. (2007) have reported evidence on the relationship between the inflammatory response to acute exercise and physical fitness, and that the change in circulating lymphocyte populations with acute exercise is differentiated by physical fitness. Many studies have emphasized the importance of increasing physical activity and functional fitness with age (Bakker et al., 2018; Chen et al., 2009; Edwards et al., 2007; Haskell et al., 2007; Jeoung and Pyun, 2022). Hence, various measures to promote physical activity and functional fitness are required to treat and prevent chronic diseases, including hypertension, in the aging population.
Table 1. Sex differences in functional fitness and blood pressure.
Table 3. Differences in blood pressure by functional fitness level. ***P < 0.001, tested using one-way analysis of variance; the Scheffé test was used for post hoc comparisons.
Table 4. Odds ratios with 95% confidence intervals for the National Fitness Award Project data on hypertension risk.
Epigenetic Mechanisms and Microbiota as a Toolbox for Plant Phenotypic Adjustment to Environment
The classic understanding of organisms focuses on genes as the main source of species evolution and diversification. The recent concept of genetic accommodation questions this gene centric view by emphasizing the importance of phenotypic plasticity on evolutionary trajectories. Recent discoveries on epigenetics and symbiotic microbiota demonstrated their deep impact on plant survival, adaptation and evolution thus suggesting a novel comprehension of the plant phenotype. In addition, interplays between these two phenomena controlling plant plasticity can be suggested. Because epigenetic and plant-associated (micro-) organisms are both key sources of phenotypic variation allowing environmental adjustments, we argue that they must be considered in terms of evolution. This ‘non-conventional’ set of mediators of phenotypic variation can be seen as a toolbox for plant adaptation to environment over short, medium and long time-scales.
Keywords: plant plasticity, phenotypic plasticity, microbiota, epigenetics, rapid adaptation

Evolution is driven by selection forces acting on variation among individuals. Understanding the sources of such variation, which has led to the diversification of living organisms, is therefore of major importance in evolutionary biology. Diversification is largely thought to be controlled by genetically based changes induced by ecological factors (Schluter, 1994, 2000). Phenotypic plasticity, i.e., the ability of a genotype to produce different phenotypes (Bradshaw, 1965; Schlichting, 1986; Pigliucci, 2005), is a key developmental parameter for many organisms and is now considered a source of adjustment and adaptation to biotic and abiotic constraints (e.g., West-Eberhard, 2005; Anderson et al., 2011). However, many current studies still focus on genetically generated plasticity to predict and model biodiversity responses to a changing climate (Peck et al., 2015), omitting considerable individual variability. In addition, it is striking how poorly this variability is integrated and that both experiments and models most often measure population averages (Peck et al., 2015).
Because of their sessile lifestyle, plants are forced to cope with local environmental conditions, and their survival subsequently relies greatly on plasticity (Sultan, 2000). Plastic responses may include modifications in morphology, physiology, behavior, growth or life history traits (Sultan, 2000). In this context, the developmental genetic pathways supporting plasticity allow a rapid response to environmental conditions (Martin and Pfennig, 2010), and the genes underlying these induced phenotypes are subjected to selection. If selection acts primarily on the phenotype, the environmental constraints an organism has to face can lead either to directional selection or to disruptive selection of new phenotypes. Thus, novel traits can result from environmental induction followed by genetic accommodation of the changes (West-Eberhard, 2005). These accommodated novelties, because they arise in response to the environment, are proposed to have a greater evolutionary impact than mutation-induced novelties (West-Eberhard, 2005). The links between genotype and phenotype are often blurred by factors including (i) epigenetic effects inducing modifications of gene expression as well as post-transcriptional and post-translational modifications, which allow a quick response to an environmental stress (Shaw and Etterson, 2012), and (ii) the plant symbiotic microbiota recruited to dynamically adjust to environmental constraints (Vandenkoornhuyse et al., 2015). We investigate current knowledge regarding the evolutionary impact of epigenetic mechanisms and symbiotic microbiota and call into question the suitability of the current gene-centric view in the description of plant evolution. We also address the possible interactions between responsive epigenetic mechanisms and the symbiotic interactions shaping the biotic environment and phenotypic variations.
GENOTYPE-PHENOTYPE LINK: STILL APPROPRIATE?
In the neo-Darwinian synthesis of evolution (Mayr and Provine, 1998), phenotypes are determined by genes. The underlying paradigm is that phenotype is a consequence of genotype (Alberch, 1991) in a non-linear interaction due to overdominance, epistasis, pleiotropy, and covariance of genes (see Alberch, 1991; Pigliucci, 2005). Both genotypic variations and the induction of phenotypic variation through environmental changes have been empirically demonstrated, thus highlighting the part played by the environment in explaining phenotypes. These phenotypes are consequences of the perception, transduction and integration of environmental signals. The latter depends on environmental parameters, including (i) the reliability or relevance of the environmental signals (Huber and Hutchings, 1997), (ii) the intensity of the environmental signal, which determines the response strength (Hodge, 2004), (iii) the patchiness of the habitat (Alpert and Simms, 2002) and (iv) the predictability of future environmental conditions given current environmental signals (Reed et al., 2010). The integration of all these characteristics of the environmental stimulus regulates the triggering and outcomes of the plastic response (e.g., Alpert and Simms, 2002). In this vein, recent work has shown that plant phenotypic plasticity is in fact determined by the interaction between plant genotype and the environment rather than by genotype alone (El-Soda et al., 2014). Substantial variations in molecular content and phenotypic characteristics have been repeatedly observed in isogenic cells (Kaern et al., 2005). Moreover, recent analyses of massive datasets on genotypic polymorphism and phenotype often struggle to identify single genetic loci that control phenotypic trait variation (Anderson et al., 2011). The production of multiple phenotypes is not limited to the genomic information, and the idea of a genotype-phenotype link no longer seems fully appropriate in the light of these findings. Besides, evidence has demonstrated that phenotypic variations are related to gene transcription and RNA translation, which are often linked to epigenetic mechanisms, as discussed in the following paragraph (Rapp and Wendel, 2005).
EPIGENETICS AS A FUNDAMENTAL MECHANISM FOR PLANT PHENOTYPIC PLASTICITY
"Epigenetics" often refers to a suite of interacting molecular mechanisms that alter gene expression and function without changing the DNA sequence (Richards, 2006;Holeski et al., 2012). The best-known epigenetic mechanisms involve DNA methylation, histone modifications and histone variants, and small RNAs. These epigenetic mechanisms lead to enhanced or reduced gene transcription and RNA-translation (e.g., Richards, 2006;Holeski et al., 2012). A more restricted definition applied in this paper considers as epigenetic the states of the epigenome regarding epigenetic marks that affect gene expression: DNA methylation, histone modifications (i.e., histone amino-terminal modifications that act on affinities for chromatin-associated proteins) and histone variants (i.e., structure and functioning), and small RNAs. These epigenetic marks may act separately or concomitantly, and can be heritable and reversible (e.g., Molinier et al., 2006;Richards, 2011;Bilichak, 2012). The induction of defense pathways and metabolite synthesis against biotic and abiotic constraints by epigenetic marks has been demonstrated during the last decade mainly in the model plant species Arabidopsis and tomato (e.g., Rasmann et al., 2012;Slaughter et al., 2012;Sahu et al., 2013). Epigenetics is now regarded as a substantial source of phenotypic variations (Manning et al., 2006;Crews et al., 2007;Kucharski et al., 2008;Bilichak, 2012;Zhang et al., 2013) in response to environmental conditions. More importantly, studies have suggested the existence of epigenetic variation that does not rely on genetic variation for its formation and maintenance (Richards, 2006;Vaughn et al., 2007). However, to date, only a few studies have demonstrated the existence of pure natural epi-alleles (Cubas et al., 1999) although they are assumed to play an important role in relevant trait variation of cultivated plants (Quadrana et al., 2014). Similarly to the results observed in mangrove plants (Lira-Medeiros et al., 2010), a recent work on Pinus pinea which exhibits high phenotypic plasticity associated with low genetic diversity, discriminated both population and individuals based on cytosine methylation, while the genetic profiles failed to explain the observed phenotype variations (Sáez-Laguna et al., 2014). Epigenetics can provide phenotypic variation in response to environmental conditions without individual genetic diversity. It could hence provide an alternative way or an accelerated pathway for adaptive 'evolutionary' changes (Bossdorf et al., 2008). Epigenetic marks could also 'tag' a site for mutation: it is known that methylated cytosine is more mutable increasing the opportunity for random mutation to act at epigenetically modified sites.
EPI-ALLELES, GENETIC ACCOMMODATION AND ADAPTATION
Even if totally independent epigenetic variations (i.e., pure epi-alleles) are scarce and still need to be investigated, the evolutionary significance of the resulting epigenetically induced phenotypic variations is being increasingly debated (Schlichting and Wund, 2014). Assuming that selection acts on phenotypes and that these phenotypes are not always genetically controlled, it can be argued that new phenotypes arising from adaptive plasticity are not random variants (West-Eberhard, 2005). Changes in trait frequency then correspond to a 'genetic accommodation' process (West-Eberhard, 2005; Schlichting and Wund, 2014) through which an environmentally induced trait variation becomes genetically determined by a change in gene frequency that affects the trait's 'reaction norm' (West-Eberhard, 2005; Crispo, 2007). It may also be suggested that genetic accommodation can result from the selection of genetic changes optimizing the novel variant's adaptive value through modifications in the form, regulation or phenotypic integration of the trait.
In the "adaptation loop, " the effect of environment on plant performance induces the selection of the most efficient phenotype. The epigenetic processes are not the only engines of plant phenotypic plasticity adjustment. Indeed, plants also maintain symbiotic interactions with microorganisms to produce phenotypic variations.
PLANT PHENOTYPIC PLASTICITY AND SYMBIOTIC MICROBIOTA
Plants harbor an extreme diversity of symbionts, including fungi (Vandenkoornhuyse et al., 2002) and bacteria (Bulgarelli et al., 2012; Lundberg et al., 2012). During the last decade, substantial research efforts have documented the range of phenotypic variations enabled by symbionts. Examples of mutualist-induced changes in plant functional traits have been reported (Streitwolf-Engel et al., 1997; Wagner et al., 2014), which modify the plant's ability to acquire resources, reproduce, and resist biotic and abiotic constraints. The detailed pathways linking environmental signals to this mutualist-induced plasticity have been identified in some cases. For instance, Boller and Felix (2009) highlighted several mutualist-induced signaling pathways allowing a plastic response of plants to viruses, pests and pathogens, initiated by the flagellin/FLS2 and EF-Tu/EFR recognition receptors. Mutualist-induced plastic changes may affect plant fitness by modifying the plant's response to its environment, including (i) plant resistance to salinity (Lopez et al., 2008), drought (Rodriguez et al., 2008) and heat (Redman et al., 2002) and (ii) plant nutrition (e.g., Smith et al., 2009). These additive ecological functions supplied by plant mutualists extend the plant's adaptation ability (e.g., Vandenkoornhuyse et al., 2015), leading to fitness benefits for the host in highly variable environments (Conrath et al., 2006), and can therefore affect evolutionary trajectories (e.g., Brundrett, 2002). In fact, mutualism is a particular case of symbiosis (i.e., long-lasting interaction) and is supposed to be unstable in terms of evolution, because a mutualist symbiont is expected to improve its fitness by investing less in the interaction. Reciprocally, to improve its fitness, a host would provide fewer nutrients to its symbiont. Thus, from a theoretical point of view, a continuum from parasites to mutualists is expected in symbioses. However, the ability of plants to promote the best cooperators by a preferential C flux has been demonstrated both in Rhizobium and in arbuscular mycorrhiza interactions with Medicago truncatula (Kiers et al., 2007, 2011). Thus, the plant may play an active role in the process of mutualist-induced environmental adaptation, as it may be able to recruit microorganisms from soil (for review, Vandenkoornhuyse et al., 2015) and preferentially promote the best cooperators through a nutrient embargo toward less beneficial microbes (Kiers et al., 2011). In parallel, vertical transmission or environmental inheritance of a core microbiota has been suggested (Wilkinson and Sherratt, 2001), constituting a "continuity of partnership" (Zilber-Rosenberg and Rosenberg, 2008). Thus, the impact on phenotype is not limited to the individual's lifetime but also extends to reproductive strategies and to the next generation. Indeed, multiple cases of alteration in reproductive strategies mediated by mutualists such as arbuscular mycorrhizal fungi (Sudová, 2009) or endophytic fungi (Afkhami and Rudgers, 2008) have been reported. Such microbiota, being selected by the plant and persisting through generations, may therefore influence the plant phenotype and be considered a powerhouse allowing rapid buffering of environmental changes (Vandenkoornhuyse et al., 2015). The idea of a plant as an independent entity on the one hand and its associated microorganisms on the other has therefore recently matured toward understanding the plant as a holobiont or integrated "super-organism" (e.g., Vandenkoornhuyse et al., 2015).
HOLOBIONT PLASTICITY AND EVOLUTION
If the holobiont can be considered the unit of selection (Zilber-Rosenberg and Rosenberg, 2008), even though this idea is still debated (e.g., Leggat et al., 2007; Rosenberg et al., 2007), then the occurrence of phenotypic variation is enhanced by the versatility of the holobiont composition, both in terms of genetic diversity (i.e., mainly through microbiota genes) and phenotypic changes (induced by mutualists). Different mechanisms allowing a rapid response of the holobiont to these changes have been identified: (1) horizontal gene transfer between members of the holobiont (i.e., transfer of genetic material between bacteria; Dinsdale et al., 2008), (2) microbial amplification (i.e., variation of microbe abundance in relation to environmental variation) and (3) recruitment of new mutualists into the holobiont (Vandenkoornhuyse et al., 2015). In this model, genetic novelties in the hologenome (i.e., the combined genomes of the plant and its microbiota, the latter carrying more genes than the host) are a consequence of interactions between the plant and its microbiota. The process of genetic accommodation described in the Section "Epi-Alleles, Genetic Accommodation and Adaptation" impacts not only the plant genome but can also be expanded to all components of the holobiome, and may thus be enhanced by the genetic variability of the microbiota. In the holobiont, phenotypic plasticity is produced at different integration levels (i.e., organism, super-organism) and is also genetically accommodated or assimilated at those scales (i.e., within the plant and mutualist genomes and therefore the hologenome). The holobiont thus displays greater potential phenotypic plasticity and a higher genetic potential for mutation than the plant alone, thereby supporting selection and the accommodation process in the hologenome. In this context, the variability of both mutualist-induced and epigenetically induced plasticity in the holobiont could function as a "toolbox" for plant adaptation through genetic accommodation. Consequently, mechanisms such as epigenetics, which allow the production of phenotypic variants in response to the environment, should be of importance in the holobiont context.
DO MICROBIOTA AND EPIGENETIC MECHANISMS ACT SEPARATELY OR CAN THEY INTERACT?
Both epigenetic and microbiota interactions allow plants to rapidly adjust to environmental conditions and subsequently support their fitness (Figure 1). Phenotypic changes ascribable to mutualists and the transmission of mutualists to progeny are often viewed as epigenetic variation (e.g., Gilbert et al., 2010). However, this kind of plasticity is closer to an "interspecies induction of changes" mediated by epigenetics rather than "epigenetics-induced changes" based solely on epigenetic heritable mechanisms (see the section on epigenetics for a restricted definition). Apart from the difficulty of drawing a clear line between epigenesis and epigenetics (Jablonka and Lamb, 2002), evidence is emerging of the involvement of epigenetic mechanisms in mutualistic interactions. An experiment revealed changes in DNA adenine methylation patterns during the establishment of symbiosis (Ichida et al., 2007), suggesting an effect of this interaction on the bacterial epigenome or, at least, a role of epigenetic mechanisms in symbiosis development. A correct methylation status also seems to be required for efficient nodulation in the Lotus japonicus-Mesorhizobium loti symbiosis (Ichida et al., 2009), and the miRNA "miR-397" was only induced in mature nitrogen-fixing nodules (De Luis et al., 2012). As epigenetic mechanisms are involved in the development of symbiosis, we assume that epigenetic phenomena may have significant effects on mutualist associations. As yet, little is known about the epigenetic effects and responses underlying host-symbiont interactions. These epigenetic mechanisms and microbiota sources of plant phenotypic plasticity may act synergistically, although this idea has never been convincingly addressed. As far as we know, several important issues bridging epigenetic mechanisms and microbiota remain to be elucidated, such as (1) the frequency of epigenetic marking in organisms involved in mutualistic interactions, (2) the range of phenotypic plasticity associated with these marks either in the plant or in microorganisms, (3) the consequences of these marks for holobiont phenotypic integration, (4) the functional interplay between epigenetic mechanisms and microbiota in plant phenotype expression, and (5) the inheritance of epigenetic mechanisms and thus their impact on symbiosis development, maintenance and co-evolution. To answer these questions, future studies will need to involve surveys of plant genome epigenetic states (e.g., the methylome) in response to the presence/absence of symbiotic microorganisms. Recent progress made on bacterial methylome survey methods should provide useful tools to design future experiments on this topic (Sánchez-Romero et al., 2015).

FIGURE 2 | A plant's phenotypic variations can be inherited even in the case of a phenotypic trait not controlled by a gene/genome variation. This rapid response to environmental change involves epigenetic mechanisms and/or the recruitment of microorganisms within the plant microbiota. Heritable transgenerational plasticity mediated by epigenetic mechanisms and/or mutualists could be followed by genetic accommodation and long-term adaptation.
Although research on the interaction between microbiota and epigenetics is in its infancy in plants, recent work, mostly on humans, supports the existence of such linkages. Indeed, a clear link has been evidenced between microbiota and human behavior (Dinan et al., 2015). Other examples of microbiota effects are (i) their deep physiological impact on the host through serotonin modulation (Yano et al., 2015) and (ii) their incidence on the adaptation and evolution of the immune system (Lee and Mazmanian, 2010). Such findings should find an echo in plant-symbiont research and encourage further investigation of this topic.
More broadly, and despite the above-mentioned knowledge gaps, our current understanding of both epigenetic mechanisms and the impact of microbiota on the expression of the plant phenotype invites us to take these phenomena into consideration in species evolution and diversification.
'EXTENDED PHENOTYPE' AND 'HOLOGENOME THEORY'

Microbiota and epigenetic mechanisms play different but complementary roles in producing phenotypic variations, which are then subjected to selective pressure. Diversification of traits is suggested to depend on evolutionary time (necessary for the accumulation of genetic changes, i.e., Martin and Pfennig, 2010), but rapid shifts in plant traits, as allowed by both microbiota and epigenetics, would provide accelerated pathways for their evolutionary divergence. In addition, such rapid trait shifts also permit rapid character displacement. Induction of DNA methylation may occur more rapidly than genetic modification and could therefore represent a way to cope with environmental constraints on very short time scales (during the individual's lifetime; Rando and Verstrepen, 2007). In parallel, microbiota-induced plasticity is achieved both on a short time scale (i.e., through recruitment) and on larger time scales (i.e., through symbiosis evolution; Figure 2). Because of the observation of transgenerational epigenetic inheritance, the relevance of epigenetically induced variations is a current hot topic in the contexts of evolutionary ecology and environmental change (Bossdorf et al., 2008; Slatkin, 2009; Zhang et al., 2013; Schlichting and Wund, 2014). This has stimulated renewed interest in the 'extended phenotype' (Dawkins, 1982). The central idea of Dawkins' 'extended phenotype' (Dawkins, 1982) is that the phenotype cannot be limited to biological processes related to gene/genome functioning but should be 'extended' to consider all effects that a gene/genome (including organism behavior) has on its environment. For example, the extended phenotype invites us to consider not only the effect of the plant genome on its resource acquisition but also the effect of the genome on the plant's symbionts as well as on nutrient availability for competing organisms. More recently, the development of the 'hologenome theory' (Zilber-Rosenberg and Rosenberg, 2008) posits that evolution acts on composite organisms (i.e., the host and its microbiome), with the microbiota being fundamental for host fitness by buffering environmental constraints. Both the 'extended phenotype' concept and the 'hologenome theory' admit that the environment can leave a "footprint" on the transmission of induced characters. Thus, opportunities exist to revisit our understanding of plant evolution to embrace both environmentally induced changes and the related 'genetic accommodation' processes.
ACKNOWLEDGMENTS
This work was supported by a grant from the CNRS-EC2CO program (MIME project), CNRS-PEPS program (MYCOLAND project) and by the French ministry for research and higher education. We also acknowledge E.T. Kiers and D. Warwick for helpful comments and suggestions for modifications on a previous version of the manuscript and A. Salmon for helpful discussions about epigenetics.
Position-dependent impact of hexafluoroleucine and trifluoroisoleucine on protease digestion
Rapid digestion by proteases limits the application of peptides as therapeutics. One strategy to increase the proteolytic stability of peptides is the modification with fluorinated amino acids. This study presents a systematic investigation of the effects of fluorinated leucine and isoleucine derivatives on the proteolytic stability of a peptide that was designed to comprise substrate specificities of different proteases. Therefore, leucine, isoleucine, and their side-chain fluorinated variants were site-specifically incorporated at different positions of this peptide resulting in a library of 13 distinct peptides. The stability of these peptides towards proteolysis by α-chymotrypsin, pepsin, proteinase K, and elastase was studied, and this process was followed by an FL-RP-HPLC assay in combination with mass spectrometry. In a few cases, we observed an exceptional increase in proteolytic stability upon introduction of the fluorine substituents. The opposite phenomenon was observed in other cases, and this may be explained by specific interactions of fluorinated residues with the respective enzyme binding sites. Noteworthy is that 5,5,5-trifluoroisoleucine is able to significantly protect peptides from proteolysis by all enzymes included in this study when positioned N-terminal to the cleavage site. These results provide valuable information for the application of fluorinated amino acids in the design of proteolytically stable peptide-based pharmaceuticals.
General information
All reactions were run under an argon atmosphere unless otherwise indicated. Coupling constants J are given in Hertz (Hz). Multiplicities are classified by the following abbreviations: s = singlet, d = doublet, t = triplet, q = quartet, br = broad or m = multiplet, and combinations thereof. High resolution mass spectra were obtained on an Agilent ESI-ToF 6220 (Agilent Technologies, Santa Clara, CA, USA).
Peptide characterization
High resolution mass spectra were recorded on an Agilent 6220 ESI-ToF LC-MS spectrometer (Agilent Technologies Inc., Santa Clara, CA, USA) to identify the pure peptide products. The samples were dissolved in a 1:1 mixture of water and acetonitrile containing 0.1% (v/v) TFA and injected directly into the spray chamber by a syringe pump at a flow rate of 10 µL min−1. A spray voltage of 3.5 kV was used, the drying gas flow rate was set to 5 L min−1 and the nebulizer to 30 psi. The gas temperature was 300 °C.
To verify the purity of the synthesized peptides, analytical HPLC was carried out on a Chromaster 600 bar DAD system with CSM software (VWR/Hitachi, Darmstadt, Germany). The system works with a low-pressure gradient and comprises an HPLC pump (5160) with a 6-channel solvent degasser, an organizer, an autosampler (5260) with a 100 µL sample loop, a column oven (5310) and a diode array flow detector (5430).
A Luna™ C8 (2) column (5 μm, 250 × 4.6 mm, Phenomenex®, Torrance, CA, USA) was used. The eluents were water and ACN, both containing 0.1% (v/v) TFA; the flow rate was adjusted to 1 mL/min and the column was heated to 24 °C. The gradient method used is shown in Table S1. UV detection of the peptides was performed at 220 nm. The data were analyzed with EZChrom Elite software (version 3.3.2, Agilent Technologies, Santa Clara, CA, USA).

Figure S1: Analytical HPLC chromatograms of purified peptides; column: Luna™ C8 (5 µm, 250 × 4.6 mm, Phenomenex®); solvent A was H2O, solvent B was acetonitrile, both containing 0.1% (v/v) TFA. The flow rate was 1 mL/min; linear gradient from 5% B to 70% B over 18 min (see Table S1).
Enzymatic digestion studies
Characterization of the enzymatic digestion reactions was carried out via analytical HPLC on a LaChrom-ELITE HPLC system from VWR International Hitachi (Darmstadt, Germany). The system comprises an organizer, two HPLC pumps (L-2130) with solvent degassers, an autosampler (L-2200) with a 100 µL sample loop, a diode array flow detector (L-2455), a fluorescence detector (L-2485) and a high-pressure gradient mixer. The eluents were water and ACN, both containing 0.1% (v/v) TFA, and a flow rate of 3 mL/min was applied. The linear gradients used are shown in Table S3. Method A was used to follow the digestion of the non-fluorinated peptides, and method B was applied for the fluorinated peptides. For chromatograms where insufficient baseline separation was observed, measurements were repeated using method C [FA (pepsin), P2-Leu…]. Identification of the proteolytic cleavage products (Tables S4-S7) was based on the mass-to-charge ratios determined with an Agilent 6220 ESI-ToF-MS instrument (Agilent Technologies, Santa Clara, CA, USA). For this, the quenched peptide-enzyme solutions were analyzed after 120 min and 24 h of incubation. The solutions were injected directly into the spray chamber using a syringe pump with a flow rate of 10 µL min−1. The spray voltage was set to 3.5 kV, a drying gas flow rate of 5 L min−1 was used, the nebulizer was set to 30 psi, and the gas temperature to 300 °C.
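To connect these chromatographic settings to the stability results, the sketch below shows one common way of reducing such digestion time-course data to a half-life: fitting a first-order decay to the intact-peptide peak area. The time points, peak areas and the first-order model are all illustrative assumptions; the source only states that digestion was followed by an FL-RP-HPLC assay combined with MS.

```python
# Minimal sketch of reducing HPLC digestion time-course data to a
# proteolytic half-life. Peak areas and time points are illustrative;
# the first-order decay model is an assumption, not stated in the source.
import numpy as np
from scipy.optimize import curve_fit

# Time (min) and intact-peptide peak area normalized to t = 0.
t = np.array([0.0, 15.0, 30.0, 60.0, 120.0])
frac_intact = np.array([1.00, 0.72, 0.55, 0.31, 0.10])

def decay(t, k):
    return np.exp(-k * t)  # first-order proteolysis

(k,), _ = curve_fit(decay, t, frac_intact)
print(f"rate constant k = {k:.4f} 1/min, half-life = {np.log(2) / k:.1f} min")
```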
The fragmentor voltage was 200 V. Not all of the corresponding fragments could be detected.

Table S5: Identification of the cleavage products of the different peptides by ESI-ToF mass spectrometry after digestion with pepsin.
Positioning of the HKUST-1 metal-organic framework (Cu3(BTC)2) through conversion from insoluble Cu-based precursors
A Cu-based metal–organic framework (HKUST-1 or Cu3(BTC)2, BTC = 1,3,5-benzene tricarboxylate) has been synthesized from insoluble Cu-based precursors and positioned on substrates. Patterning of HKUST-1 was achieved through a two-step process: (1) the positioning of the insoluble Cu-based ceramic precursors on substrates using a sol–gel solution, and (2) the subsequent conversion into HKUST-1 by treatments with an alcoholic solution containing 1,3,5-benzene tricarboxylic acid (H3BTC) at room temperature for 10 min. This technique has been found to be suitable for both inorganic and polymeric substrates. The HKUST-1 pattern on a polymer film can be easily bent without affecting the positioned MOFs crystals. This approach would allow for versatile and practical applications of MOFs in multifunctional platforms where the positioning of MOFs is required.
Introduction
Metal-organic frameworks (MOFs), also referred to as porous coordination polymers (PCPs), are a new class of three-dimensional crystalline porous materials consisting of metal ions (or metal-oxo clusters) and organic linkers.1 In recent years, MOFs have attracted much attention due to their porosity and the possibility of combining high surface areas with pore characteristics that can be designed at the molecular level.1 These unique features make MOFs suitable materials for gas adsorption,2 catalysis3 and sensing,4 as well as promising agents for separation,5 electronics,6 decontamination7 and drug delivery.8 Due to the desirable properties of MOFs, recent research has focused on spatial positioning and device fabrication through the integration of MOFs into miniaturised multifunctional platforms such as microfluidic or lab-on-a-chip devices.9 Integration of MOFs into these devices requires the materials to be positioned in a geometrically controlled fashion on substrates. However, achieving spatial control over the location of MOF materials is challenging, since such porous crystals are usually obtained through a delicate self-assembly process.10 Therefore, much effort has been devoted to developing methodologies for controlling the location of MOFs. Examples include the deposition of MOF precursors and subsequent conversion into porous crystals,11 selective substrate functionalization to trigger MOF growth at precise locations12 and other advanced techniques potentially compatible with current conventional lithographic methods.13 Synthesis of MOFs from ceramic materials is a recent research trend and has attracted great interest in MOF technology, due to various features advantageous for MOF-based device fabrication, including fast and spatially controlled conversion into MOFs and controlled architecture of the material at the mesoscale.14 Importantly, this method can exploit established technologies utilised for the production of ceramic films and patterns with finely tuned chemical compositions such as sol-gel,15 physical and chemical vapour methods,16 spray deposition17 and chemical processing of metals.18 In addition, the low residual content of metal ions in the solution after the conversion into MOFs can address concerns related to the environmental impacts of MOF fabrication. Among the recent reports focusing on the production of MOFs from ceramic precursors, a remarkable approach has been described by Majano et al.19 These authors found that solvent-insoluble Cu(OH)2 can be easily and quickly transformed into HKUST-1 (also called Cu3(BTC)2, BTC = 1,3,5-benzene tricarboxylate) at room temperature through an acid-base reaction (3Cu(OH)2 + 2H3BTC → Cu3(BTC)2 + 6H2O). This discovery was subsequently combined with the use of photolithography for patterning copper, converting it into Cu(OH)2 nanotubes and finally using it as a feedstock material to induce HKUST-1 formation.20
Although the versatility towards different substrates (including Cu mesh, wire and grid) has been successfully demonstrated, the substrates need to be made of metallic copper. To improve the versatility towards practical applications, the localization of MOFs on other substrates such as glass or flexible polymers is highly desired.
Herein, we report a novel strategy to position the HKUST-1 MOF on different supports through a two-step approach, namely, the positioning of the solvent insoluble Cu-based ceramic precursors on substrates, followed by their conversion into HKUST-1, as illustrated in Fig. 1. The solvent insoluble Cu-based precursors, which are ceramic materials according to modern definitions, 21 have been obtained by reacting Cu(NO3)2·3H2O with ammonia solutions. The obtained nanopowder products were subsequently converted to HKUST-1 through treatment with an alcoholic solution containing H3BTC. This method has been successfully combined with a local deposition of a sol-gel solution to promote the adhesion between the ceramic nanopowder precursors and glass or plastic substrates. The ceramic patterns were converted into the corresponding HKUST-1 patterns. Remarkably, the pattern on the polymeric substrate could be bent, showing that MOF patterns can be applied on flexible substrates. In accordance with the need for improved patterning protocols, 15a the method proposed here offers a cheaper alternative to photolithography. Indeed, the process for the fabrication of the mold used in the contact printing process 22 does not require the proximity lithography equipment involved in the microfabrication of a thermally sensitive resist. 13d Although the resolution in the present investigation is on the millimetre scale, use of established soft lithography protocols such as contact printing 23 and dip-pen lithography 24 can potentially increase the resolution up to a few microns.
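For orientation, the stoichiometry of the acid-base conversion above (3Cu(OH)2 + 2H3BTC → Cu3(BTC)2 + 6H2O) fixes the proportions of precursor and linker. The short Python sketch below illustrates the calculation; the molar masses are standard values and the 1 g basis is arbitrary, i.e., these are not quantities taken from this work.

# Stoichiometry of 3 Cu(OH)2 + 2 H3BTC -> Cu3(BTC)2 + 6 H2O.
# A back-of-the-envelope check, not a protocol from the paper.
M_CUOH2 = 97.56    # g/mol, Cu(OH)2
M_H3BTC = 210.14   # g/mol, benzene-1,3,5-tricarboxylic acid (C9H6O6)
M_HKUST = 604.87   # g/mol, Cu3(C9H3O6)2

grams_precursor = 1.0                    # basis: 1 g of Cu(OH)2
mol_precursor = grams_precursor / M_CUOH2
mol_linker = mol_precursor * 2.0 / 3.0   # 2 mol linker per 3 mol Cu(OH)2
mol_mof = mol_precursor / 3.0            # 1 mol MOF per 3 mol Cu(OH)2

print(f"H3BTC required: {mol_linker * M_H3BTC:.2f} g")          # ~1.44 g
print(f"Theoretical HKUST-1 yield: {mol_mof * M_HKUST:.2f} g")  # ~2.07 g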
Materials preparation
The insoluble ceramic Cu-based precursors were prepared as follows: 0.5-4 ml of mixed solutions of NH3 aq. (5 ml) and EtOH (25 ml) were added to EtOH (30 ml) containing 0.6 g of Cu(NO3)2·3H2O. The mixed solution was stirred for 24 h at room temperature. The precipitate was washed with water and collected by centrifugation. The collected powders were dried under vacuum at room temperature. The prepared samples are denoted as Cu-based precursors_X, where X = 0.5, 1, 2, 3 and 4 ml (amount of mixed solution of NH3 and EtOH added).
HKUST-1 was prepared from the solvent insoluble Cu-based precursors as follows: 97 mg of Cu-based precursors_X was added to the mixed solution of 5 ml of EtOH and 2 ml of water containing 106 mg of H3BTC as an organic linker. The mixture was stirred for 10 min at room temperature. The deep turquoise-coloured powder was washed with water and EtOH, and then collected via centrifugation. The collected samples were dried under vacuum. The prepared samples are denoted as HKUST-1_X, where X = 0.5, 1, 2, 3 and 4 ml. For comparison, HKUST-1 was also prepared from Cu(NO3)2·3H2O by using the previously reported method. 25 The prepared sample is denoted as HKUST-1_ref.
Positioning of HKUST-1 was performed via the following procedure: a sol-gel solution was prepared by mixing GPTMS (3 ml) and APTMS (3 ml) under stirring at room temperature, followed by adding 1 M HCl solution (0.1 ml) and stirring for 1 h at room temperature. The sol-gel solution was used as a viscous medium to bind the sprinkled Cu-based precursors_2 to substrates. The patterns were obtained using stamps to transfer the sol-gel solution onto selected locations. Subsequently, the MOF precursor was sprinkled on top of the patterned sol-gel solution. The system was aged for 5 minutes at room temperature to dry the sol-gel solution. For conversion to HKUST-1, the substrate was immersed into the mixed solution of 5 ml of EtOH and 2 ml of water containing 106 mg of H3BTC at room temperature. After 10 min, the substrate was removed from the solution and washed with EtOH, then dried in air.
General methods
The surface morphologies of samples were observed using a field emission scanning electron microscope (FE-SEM; Merlin; Carl Zeiss, Germany) with a thin iridium film coating. X-ray powder diffraction (XRD) patterns were collected using a PANalytical X'Pert Pro diffractometer employing Co Kα radiation (λ = 0.179 nm). Nitrogen adsorption-desorption isotherms were collected using a BEL-SORP mini (BEL Japan, Inc.) at 77 K. Fourier transform infrared spectroscopy (FT-IR; ALPHA FT-IR spectrometer, Bruker Optik GmbH) was employed in the ATR configuration. Small-angle X-ray scattering (SAXS) and wide-angle X-ray scattering (WAXS) were performed at the Australian Synchrotron. A capillary with a 1.5 mm diameter was used for the in situ characterization by SAXS and WAXS. Once filled with the solution, the capillary was illuminated by a high-intensity X-ray beam at the beamline, and the scattered radiation was registered by the detector (Pilatus 1M). An on-axis video camera allowed for parallax-free sample viewing and alignment at all times before and during exposure, enabling precise and rapid sample alignment.
Results and discussion
The solvent insoluble Cu-based precursors were prepared from Cu(NO3)2·3H2O by the ammonia treatment with different concentrations. Their SEM images are displayed in Fig. 2a-e. The particle size increased with the amount of ammonia added, and plate-like structures were seen at higher ammonia concentrations. Subsequently, the conversion of the insoluble Cu-based precursors was carried out by adding an alcoholic solution containing H3BTC. After about 10 seconds of reaction time, the colour of the precursor materials rapidly changed to deep turquoise, visibly indicating the formation of HKUST-1. Significant changes in morphology from the solvent insoluble Cu-based precursors to the octahedral crystals of HKUST-1 were observed, as shown in Fig. 2a′-e′. It should be noted that the particle size of HKUST-1 decreased with increasing amount of ammonia added for the preparations of the Cu-based precursors.
In order to confirm the formation of HKUST-1 structures, powder X-ray diffraction (XRD) measurements were carried out. As shown in Fig. 3a, the insoluble Cu-based precursors were found to include the crystalline Cu2(OH)3NO3 26 and Cu(NH3)4(NO3)2 phases. 27 The data were analyzed using the Rietveld method, 28 and the approximate phase fractions were calculated using the method of Hill and Howard. 29 The peaks in the diffraction patterns collected from the three samples prepared with less than 3 ml of ammonia solution indicate that the samples contain a large proportion of the Cu2(OH)3NO3 phase. However, Cu2(OH)3NO3 has two polymorphs, the monoclinic rouaite phase 26a and the orthorhombic gerhardtite phase, 26b with very similar diffraction patterns (see Fig. S1 in the ESI†). The difference between these phases relates to the packing of the NO3− ions between the Cu layers. Analysis of the diffraction data indicates both phases are present in the samples; however, discrepancies between the observed and calculated patterns suggest there may be some disorder between the layers of the Cu2(OH)3NO3 phase(s). The samples prepared with 3 and 4 ml of ammonia solution were found to also contain the crystalline Cu(NH3)4(NO3)2 phase. 27 The Rietveld analysis indicates that the concentration of this phase increases with increasing amount of ammonia solution. For example, excluding any amorphous component, the proportion of the Cu(NH3)4(NO3)2 phase relative to the Cu2(OH)3NO3 phase(s) increased from ∼27 wt% to ∼53 wt% on going from 3 ml to 4 ml of ammonia solution, respectively (Fig. S2 in the ESI†). However, the accuracy of these measurements is limited by discrepancies between the observed and calculated relative peak intensities of the Cu2(OH)3NO3 phase. An EDX investigation revealed the presence of carbon (6-8%), confirming the presence of residual solvent (ethanol), which may disrupt the crystal structure of the Cu2(OH)3NO3 phase. Previous studies have shown that layered metal hydroxy nitrates can be sensitive to the presence and intercalation of organic molecules. 30 It was also found that the observed diffraction patterns after the conversion to MOFs (Fig. 3b) are in good agreement with the previously reported diffraction pattern of a HKUST-1-type structure 31 as well as the pattern of HKUST-1_ref, which was synthesized following the aqueous room temperature protocol proposed by Bradshaw et al. 25 No diffraction peaks corresponding to residual precursors or the bulk copper oxides were observed for the patterns of converted HKUST-1_X.
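For readers unfamiliar with the Hill and Howard approach cited above, it converts the refined Rietveld scale factor S of each phase into a weight fraction through the product ZMV (formula units per cell, times formula mass, times cell volume). A minimal sketch of that arithmetic follows; the numerical inputs are placeholders, not refined values from this study.

# Hill & Howard (1987): weight fraction of phase p from Rietveld scale
# factors S and the Z*M*V product. Inputs below are hypothetical.
def weight_fractions(scales, zmv):
    terms = [s * z for s, z in zip(scales, zmv)]
    total = sum(terms)
    return [t / total for t in terms]

# e.g. two Cu2(OH)3NO3 polymorphs plus Cu(NH3)4(NO3)2 (placeholder inputs)
fractions = weight_fractions(scales=[1.2e-6, 0.8e-6, 1.5e-6],
                             zmv=[9.1e5, 9.0e5, 7.5e5])
print([f"{w:.1%}" for w in fractions])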
FT-IR measurements were also performed to characterize both the insoluble Cu-based precursors and the corresponding HKUST-1. The spectra are displayed in Fig. 4. In the spectra for Cu-based precursors_X, the wide band attributed to O-H stretching modes appeared in the region of 3550-3200 cm−1. 32a The mode at around 3340 cm−1 for Cu-based precursors_3 and 4 is attributed to the N-H stretching vibration, suggesting the presence of residual NH3. 32b The absorption bands at around 877, 1044, 1240, 1296, 1333 and 1416 cm−1 can be assigned to vibrational modes of nitrate (NO3−) ions interacting with the copper hydroxide layers. 32c,d Moreover, the bands at 1640 and 820 cm−1 for Cu-based precursors_3 and 4 are attributed to the N-H bending mode. 32b The remaining bands at 670 and 772 cm−1 are assigned to the Cu-OH stretching mode. 32e After treatment with an alcoholic solution containing H3BTC, the absorption bands attributed to O-H stretching modes in the region of 3550-3200 cm−1 significantly decreased for all HKUST-1_X samples. New modes were observed and are assigned to the organic molecules used as ligands in the HKUST-1_X frameworks (BTC): 729, 760 and 940 cm−1, C-CO2 stretching; 1114 cm−1, C-O stretching; 1373, 1449 and 1645 cm−1, COO-Cu2 stretching. 33 These results confirmed the conversion of the solvent insoluble Cu-based precursors to HKUST-1. The absorption bands coming from nitrate ions are still observable for all HKUST-1_X materials.

N2 adsorption isotherms are shown in Fig. 5. Using the BET (Brunauer-Emmett-Teller) method, the specific surface areas of the Cu-based precursors_X (X = 0.5, 1, 2, 3 and 4) were measured (26, 23, 16, 11 and 6 m2 g−1, respectively), showing that the feedstock ceramic materials have a low porosity. After conversion into the MOF, type I isotherms were measured for all of the prepared materials, indicating the microporosity of the samples. The specific surface areas of HKUST-1_X (X = 0.5, 1, 2, 3 and 4) were determined to be 1021, 1019, 998, 965, and 966 m2 g−1, respectively. HKUST-1_ref was also found to have a surface area of 1236 m2 g−1. With the room temperature conversion proposed here, we found that the surface areas are between 17% (HKUST-1_0.5) and 22% (HKUST-1_3 and 4) lower than that of HKUST-1_ref. This result suggests that residual precursor species still exist in the framework, which is consistent with the results obtained by FT-IR measurements. This conclusion is supported by the previous studies converting other Cu-based ceramics into HKUST-1. 19,20 In summary, the investigation performed here (XRD, FT-IR and N2 adsorption measurements) indicates the formation of HKUST-1 from the solvent insoluble Cu-based precursors.
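The BET surface areas quoted above derive from a linear fit of the transformed isotherm over the low relative-pressure range. The generic calculation is sketched below in Python with synthetic data chosen to land near the ~1000 m2 g−1 scale reported here; it is an illustration, not the instrument software's analysis.

import numpy as np

def bet_surface_area(p_rel, v_ads, cross_section=0.162e-18, v_molar=22414.0):
    # Specific surface area (m2/g) from an N2 isotherm at 77 K.
    # p_rel: relative pressures p/p0 (use ~0.05-0.30); v_ads: volume
    # adsorbed (cm3 STP per g); 0.162 nm2 is the usual N2 cross-section.
    y = p_rel / (v_ads * (1.0 - p_rel))      # BET linearization
    slope, intercept = np.polyfit(p_rel, y, 1)
    v_m = 1.0 / (slope + intercept)          # monolayer volume, cm3/g STP
    n_m = v_m / v_molar                      # mol adsorbed per g
    return n_m * 6.022e23 * cross_section    # m2/g

# synthetic example points constructed to give roughly 1000 m2/g
p = np.array([0.05, 0.10, 0.15, 0.20, 0.25, 0.30])
v = np.array([203.0, 234.0, 256.0, 277.0, 298.0, 321.0])
print(f"{bet_surface_area(p, v):.0f} m2/g")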
In situ SAXS/WAXS measurements were conducted at the Australian Synchrotron to gain an insight into the kinetics of HKUST-1 growth from the solvent insoluble Cu-based precursors. For the measurements, the Cu-based precursors were placed in a capillary with the mixed solution of EtOH and water, followed by the controlled addition of the alcoholic solution containing the H3BTC linker using a syringe. Once filled with the solution, the capillary was illuminated by a high-intensity X-ray beam, and the scattered radiation was collected by the two 2D detectors. The obtained 2D scattering patterns were background-subtracted, radially integrated and plotted as a function of the reaction time. As an example, the conversion from Cu-based precursors_2 into HKUST-1_2 is shown in Fig. 6. The peaks attributed to the HKUST-1 structure appeared just after the ligand was injected (within 5 seconds). Within 30 seconds of the ligand injection, plateaus were found in the peaks corresponding to the HKUST-1 structure, including two intense peaks which appeared at Q = 0.67 and 0.82 Å−1. These peaks are attributed to the (022) and (222) planes of the HKUST-1 framework, respectively. This result indicates that the conversion process was almost completed within 30 seconds of the reaction time. Similar trends were observed for all of the samples, with the reaction occurring within ca. 30 seconds. However, fluctuations in the intensity of the diffraction peaks were observed due to MOF crystals passing in and out of the volume illuminated by the X-ray beam in the process of settling to the bottom of the capillary, which made accurate quantification of the phase fractions difficult.
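The Miller-index assignment quoted above can be checked from the scattering vector alone. Assuming the literature cubic lattice parameter of HKUST-1 (a ≈ 26.3 Å; this value is an outside assumption, not stated in the paper), d = 2π/Q reproduces both reflections:

import math

a = 26.34  # Angstrom, literature cubic cell of HKUST-1 (assumed here)

def q_cubic(h, k, l):
    # scattering vector Q = 2*pi/d for a cubic cell
    d = a / math.sqrt(h * h + k * k + l * l)
    return 2.0 * math.pi / d

print(f"Q(022) = {q_cubic(0, 2, 2):.2f} 1/A")  # ~0.67, the first peak
print(f"Q(222) = {q_cubic(2, 2, 2):.2f} 1/A")  # ~0.83, close to 0.82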
HKUST-1 was recently used as a functional material for sensing, 34a catalysis, 34b decontamination 34c and electronics. 34d Here we propose a protocol that can position HKUST-1 on substrates for potential use in practical device fabrication. The protocol can be described as a two-step approach. First, the solvent insoluble Cu-based precursors were positioned on a glass slide and fixed on it using a sol-gel solution based on APTMS and GPTMS (Fig. 7a). Then, the substrate was immersed into the mixed solution of EtOH and water containing H3BTC (10 min, 25 °C) to convert the precursor materials to HKUST-1 (Fig. 7b). SEM measurements confirmed the successful formation of the octahedral crystal morphology of HKUST-1 on the glass slide, as found for the free nanopowder form. A highly advantageous aspect of the approach proposed here is the variety of patterning procedures available for the sol-gel technology. 35 Although the synthesis of MOF thin films has been of great interest in recent years, there are only a few reports describing the fabrication of MOF thin films on flexible substrates, such as porous polymers and paper fibers. 36 Positioning of MOFs on flexible substrates would offer further opportunities for practical applications due to their light weight, flexibility, low cost, and ease of design over conventional substrates. 37 To demonstrate this, a bendable polyethylene terephthalate (PET) film was employed as a substrate using the same method developed in this study, as shown in Fig. 8. Positioning of HKUST-1 was successfully demonstrated through the two-step approach. It was also confirmed that the prepared film could be easily bent without changes in the positioned MOF, as shown in Fig. 8d. A previous study proposed an interesting technique (ink-jet printing) for positioning HKUST-1 on a number of substrates, including flexible plastic sheets. 36e However, the protocol proposed here does not require heating steps or the use of hazardous solvents (e.g. DMSO) in the process. In addition, this environmentally friendly approach offers easy access to positioned MOFs for the exploration of MOF properties in miniaturized devices and the exploitation of the functional properties for industrial application.
Conclusions
We have presented the conversion of solvent insoluble Cu-based precursors into HKUST-1 through a simple and quick acid-base reaction at room temperature. The technique has been successfully adopted for use in the positioning of MOFs on flat substrates. To demonstrate the broad versatility and applicability of this method, positioning was also performed on a flexible polymer substrate. The fabricated thin film was found to be easily bendable without change in the positioned MOF crystals. This approach offers significant advantages, including the stability over time of the Cu-based ceramic nanopowder used as a precursor, an environmentally friendly protocol for the conversion from ceramic into HKUST-1 and the exploitation of established technologies for HKUST-1 film and pattern fabrication.
Fig. 1
Fig. 1 (a) A schematic illustration of the conversion of the solvent insoluble Cu-based precursors into HKUST-1 in powder form. The conversion is achieved by immersing the Cu-based precursors into a mixed solution of EtOH and water containing H3BTC as an organic linker at room temperature (r.t.) for 10 min. (b-e) Schematic illustrations of the conversion of the solvent insoluble Cu-based precursors into HKUST-1 on substrates using the protocol reported here. (b, c) A sol-gel solution is stamped on substrates. (d) The Cu-based precursors are sprinkled and bound to the substrates via the sol-gel solution, which acts as a viscous medium. (e) The Cu-based precursors are converted into HKUST-1 by immersing the substrates in the alcoholic solution containing H3BTC at r.t. for 10 min.
Fig. 6
Fig. 6 Nucleation and growth of HKUST-1_2 at room temperature as a function of time when the alcoholic solution containing BTC is added to the solution containing Cu-based precursors_2. The ligand was injected at t = 30 seconds. A SAXS-based diffraction pattern of HKUST-1_2 as a function of time shows that nucleation starts within 5 seconds after addition of the organic linkers.
Fig. 7
Fig. 7 Photographs of (a) the positioned solvent insoluble Cu-based precursors (Cu-based precursors_2) and (b) HKUST-1_2, and (c) SEM image of HKUST-1_2 on a glass slide substrate.
Fig. 8
Fig. 8 Photographs of (a) the positioned solvent insoluble Cu-based precursors (Cu-based precursors_2) and (b, d) HKUST-1_2, and (c) SEM image of HKUST-1_2 on a PET substrate.
Substantial Impact of Post Vaccination Contacts on Cumulative Infections during Viral Epidemics
Background: The start of 2021 will be marked by a global vaccination campaign against the novel coronavirus SARS-CoV-2. Formulating an optimal distribution strategy under social and economic constraints is challenging. Optimal distribution is additionally constrained by the potential emergence of vaccine resistance. Analogous to chronic low-dose antibiotic exposure, recently inoculated individuals who are not yet immune play an outsized role in the emergence of resistance. Classical epidemiological modelling is well suited to explore how the behavior of the inoculated population impacts the total number of infections over the entirety of an epidemic. Methods: A deterministic model of epidemic evolution is analyzed, with 7 compartments defined by their relationship to the emergence of vaccine-resistant mutants and representing three susceptible populations, three infected populations, and one recovered population. This minimally computationally intensive design enables simulation of epidemics across a broad parameter space. The results are used to identify conditions minimizing the cumulative number of infections. Results: When an escape variant is only modestly less infectious than the originating strain within a naïve population, there exists an optimal rate of vaccine distribution. Exceeding this rate increases the cumulative number of infections due to vaccine escape. Analysis of the model also demonstrates that inoculated individuals play a major role in the mitigation or exacerbation of vaccine-resistant outbreaks. Modulating the rate of host-host contact for the inoculated population by less than an order of magnitude can alter the cumulative number of infections by more than 20%. Conclusions: Mathematical modeling shows that optimization of the vaccination rate and limiting post-vaccination contacts can affect the course of an epidemic. Given the relatively short window between inoculation and the acquisition of immunity, these results might merit consideration for an immediate, practical public health response.
Results: When an escape variant is only modestly less infectious than the originating 28 strain within a naïve population, there exists an optimal rate of vaccine distribution. 29 Exceeding this rate increases the cumulative number of infections due to vaccine 30 escape. Analysis of the model also demonstrates that inoculated individuals play a 31 major role in the mitigation or exacerbation of vaccine-resistant outbreaks. Modulating 32 the rate of host-host contact for the inoculated population by less than an order of 33 magnitude can alter the cumulative number of infections by more than 20%. CC-BY-NC-ND 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) preprint The copyright holder for this this version posted December 24, 2020. ;https://doi.org/10.1101https://doi.org/10. /2020 Introduction: 40 41 The emergence of the novel Severe Acute Respiratory Syndrome Coronavirus 2 42 (SARS-CoV-2) responsible for the Covid-19 pandemic motivated dramatic public health 43 intervention including recommendations for isolation and quarantine throughout most of 44 2020 1 . The beginning of 2021 will be marked by widespread vaccine distribution. 45 Optimizing distribution is challenging and subject to a myriad of social and economic Naïve, unvaccinated, hosts are more easily infected than vaccinated hosts but 54 mutations conferring resistance are unlikely to provide a selective advantage in the 55 naïve background. Thus, naïve hosts are likely to shed escape variants at very low, 56 likely, negligible rates. The reverse is true for vaccinated hosts. Recently vaccinated, 57 inoculated, hosts that are not yet immune remain highly susceptible to infection with the 58 originating strain, and in these hosts, mutations conferring resistance are more likely to 59 provide a selective advantage. As a result, a substantial fraction or even most of the 60 virus shed by such hosts will be resistant mutants. This situation is analogous to the 61 administration of a low-dose antibiotic regime 14,15 . In both cases, the pathogen is 62 introduced to a susceptible host and subject to elevated selective pressure towards the 63 emergence of resistant (escape) variants.
65
We sought to establish the constraints imposed by virus escape on optimal vaccine 66 distribution and the role played by the small, but critical, population of inoculated hosts.
67
To this end, we constructed an epidemiological compartment model to simulate 68 vaccination campaigns over a broad parameter regime. This minimally computationally 69 . CC-BY-NC-ND 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) preprint The copyright holder for this this version posted December 24, 2020. ;https://doi.org/10.1101https://doi.org/10. /2020 intensive approach enabled us to simulate many possible scenarios for epidemic 70 evolution, in order to determine the optimal vaccination strategy for each condition. We divided the population into 7 compartments (Fig. 1A). Three compartments are The remaining compartment, Recovered (R), contains all hosts who were previously 84 infected. Vaccination is represented by a reduction in susceptibility to infection with the 85 originating strain. Naïve hosts are inoculated at rate kV. Inoculated hosts do not 86 immediately acquire immunity and mature into the vaccinated compartment at rate kM. 87 All infected hosts recover at rate kR.
89
Within the timescale of the model, recovery is assumed to grant stable immunity, and 90 any variation in population size due to birth/death is assumed to be negligible. It should 91 be noted that, if recovery from the Infected-Naïve or Infected-Inoculated compartments 92 does not confer immunity against escape infection, the key results in this work will have 93 an even greater impact on the vaccination outcome. Hosts come into contact at rate kC. 94 For simplicity, we assume that contact with an escape-infected host can only produce 95 an escape infection. Also, vaccine efficacy is assumed to be perfect such that 96 vaccinated hosts cannot be infected with the originating strain. The Inoculated-Infected 97 compartment is assumed to represent a symmetric composition of escape and 98 originating infections such that the total probability of a Naïve or Inoculated host being 99 infected after contact with a Naïve-Infected or Inoculated-Infected host is the same.
100
. CC-BY-NC-ND 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) preprint The copyright holder for this this version posted December 24, 2020. ; https://doi.org/10. 1101 Finally, we assume that the probability of escape-infection is the same for Naïve and An escape mutant can emerge within an Infected-Naïve or Infected-Inoculated host.
112
The parameter α represents the infectivity of the escape variant relative to the 113 originating strain when a Naïve host interacts with an Infected-Naïve host. The 114 parameter β represents the infectivity of an escape variant when a Naïve host interacts 115 with an Infected-Escape host relative to the infectivity of the originating strain when a 116 Naïve host interacts with an Infected-Naïve host. Informally, α reflects the ratio of 117 escape variant to originating strain shed by Infected-Naïve hosts, whereas β reflects the 118 fitness of an escape variant relative to the originating strain. Finally, we introduce the 119 parameter q to represent the impact of varying the rate of host-host contact for 120 Inoculated hosts relative to that for the other compartments. q>1 represents increased 121 contact, and q<1 corresponds to decreased contact. This completes the model 122 description and structures the differential equations: CC-BY-NC-ND 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. CC-BY-NC-ND 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) preprint The copyright holder for this this version posted December 24, 2020. ; https://doi.org/10.1101/2020.12.19.20248554 doi: medRxiv preprint escape infections, the fraction of those infections attributable to contact with inoculated 142 hosts would be smaller.
144
The solutions of the ODEs were obtained using the MATLAB ode45 method 16 . However, the outcome substantially differs for high β (Fig. 2B). Due to the vaccine 165 escape, at high rates of distribution, the cumulative number of infections remains large.
166
Furthermore, there is an optimal rate of distribution such that exceeding this rate 167 increases the cumulative number of infections. In this regime, the benefit of reducing the 168 size of the population susceptible to infection by the originating strain is outweighed by 169 the cost of increasing the selective pressure for the emergence of escape variants. In all 170 subsequent analysis, the vaccination rate is varied but vaccination is fixed to begin 171 when 1% of the population has recovered from infection.
172
. CC-BY-NC-ND 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) preprint The copyright holder for this this version posted December 24, 2020. ; https://doi.org/10. 1101 Infections can be mitigated by reducing host-host contact. We sought to determine how 174 perturbing the contact rates for hosts in the Inoculated compartment relative to that of all 175 other compartments, q, affects the outcome. For β ranging between 0.75 and 1, we 176 considered three relative contact rates, q=[0.2,1,5] (Fig. 2C). Increasing the rate of host-177 host contact only within this compartment has a significant impact on the cumulative 178 number of infections. The optimal vaccination rate is also perturbed. Furthermore, if the 179 optimal vaccination rate it exceeded, reducing q below 1 slows the accumulation of 180 infections (Fig. 2D). The converse is true for increasing q, and the landscape is similar 181 when vaccination falls below the optimal rate (Fig. 2E).
183
Having demonstrated how q perturbs the optimal vaccination rate and how reducing q 184 below 1 can mitigate or delay the cumulative number of infections even if this rate is not 185 met or, conversely, is exceeded, we sought to establish the impact of q on the 186 cumulative number of infections across a wide range of β at the optimal vaccination rate 187 for each condition (Fig. 3A). The cumulative number of infections is sensitive to q across 188 the entire range of β. Varying q within an order of magnitude can substantially aid or 189 hinder the vaccination campaign, and when q>>1, the optimal rate of vaccination is 0. CC-BY-NC-ND 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity.
(which was not certified by peer review) preprint
The copyright holder for this this version posted December 24, 2020. ; https://doi. org/10.1101org/10. /2020 We additionally consider the cumulative infections added or subtracted relative to the 203 outcome corresponding to q=1 and scaled by the fraction of infections due to vaccine 204 escape (Fig. 3E). Varying q within an order of magnitude alters the cumulative number 205 of infections added or subtracted by more than 20% of the cumulative number of 206 infections in the absence of vaccination, RNull, again demonstrating the critical role 207 played by inoculated hosts with respect to vaccine escape. The landscape is similar 208 when then rate of vaccination is fixed (Fig. 3F). However, newly emergent variants are not always much less infectious and can be 221 responsible for non-negligible disease incidence 17,18 . Here, we demonstrated that, even 222 when escape variants are modestly less infectious, there exists an optimal vaccination 223 distribution rate such that exceeding this rate increases the cumulative number of 224 infections. This optimal rate depends on the infectivity of the escape variants. Such a 225 prediction is impractical at the time of writing, that is, in the very first days of the 226 vaccination campaign, and the cost of overestimation of the optimal vaccination rate is 227 far less than that of underestimation. However, to our knowledge, this phenomenon, 228 analogous to the evolution of antibiotic resistance, is not widely appreciated and, as 229 such, seems to warrant discussion.
231
Of more practical concern is the role of inoculated hosts in the emergence of escape 232 variants. Within low-dose antibiotic regimes 14,15 , hosts are susceptible to infection with 233 . CC-BY-NC-ND 4.0 International license It is made available under a is the author/funder, who has granted medRxiv a license to display the preprint in perpetuity. (which was not certified by peer review) preprint The copyright holder for this this version posted December 24, 2020. ; https://doi.org/10. 1101 the originating variant, and in such hosts, the pathogen is subjected to elevated 234 selective pressures towards the emergence of resistance. Similarly, within inoculated 235 hosts, the virus is subjected to gradually increasing selective pressures towards the 236 emergence of resistance while the intra-host population remains sufficiently large to 237 explore a substantial fraction of the mutation space. We demonstrated that moderately In this study, we leveraged classical modelling techniques to elucidate the factors that 247 could substantially impact the outcome of any vaccination campaign. Although these 248 results are broadly applicable, they are not necessarily instrumental for predicting 249 quantitative outcomes for the current SARS-CoV-2 pandemic or any other particular 250 epidemic. Similarly, the model presented in this study is not designed to forecast long-251 term outcome, a topic that has been thoroughly addressed for the case of SARS-Cov-2 252 and more generally 7-13,21-23 .
The results presented here appear to be of immediate interest in relation to the vaccination campaign against SARS-CoV-2 that is beginning at the time of this writing.
However, although diversification of the virus is apparent 24-26 and the potential for escape is being actively investigated 5,6,27,28 , it cannot be ruled out that all escape variants have a substantially reduced infectivity, represented by β << 1, in which case the highest possible vaccination rate will be optimal and the benefits specific to post-vaccination contact limitation could be negligible.

Depending on the infectivity of escaping, vaccine-resistant mutants, the optimal vaccination rate with respect to the cumulative number of infections can be lower than the maximum rate. Contact rates for inoculated hosts can have a substantial impact on the course of the epidemic.

Fig. 3 caption (fragment): E. (∑Escape(q)/∑Infections(q))·(∑Infections(q) − ∑Infections(q=1))/RNull, for a range of β and q, optimizing kV for each condition. F. Same as in E for fixed kV = 0.03. A-C/E/F: 961 values were computed for each panel and 4x by 4x bilinearly interpolated points are displayed.
Bile acids are important direct and indirect regulators of the secretion of appetite- and metabolism-regulating hormones from the gut and pancreas
Objective Bile acids (BAs) facilitate fat absorption and may play a role in glucose and metabolism regulation, stimulating the secretion of gut hormones. The relative importance and mechanisms involved in BA-stimulated secretion of appetite- and metabolism-regulating hormones from the gut and pancreas are not well described and were the purpose of this study. Methods The effects of bile acids on the secretion of gut and pancreatic hormones were studied in rats and compared to the most well described nutritional secretagogue: glucose. The molecular mechanisms that underlie the secretion were studied by isolated perfused rat and mouse small intestine and pancreas preparations and supported by immunohistochemistry, expression analysis, and pharmacological studies. Results Bile acids robustly stimulate secretion of not only the incretin hormones, glucose-dependent insulinotropic peptide (GIP) and glucagon-like peptide-1 (GLP-1), but also glucagon and insulin in vivo, to levels comparable to those resulting from glucose stimulation. The stimulation of GLP-1, neurotensin, and peptide YY (PYY) secretion was secondary to intestinal absorption and depended on activation of basolateral membrane Takeda G-protein receptor 5 (TGR5) receptors on the L-cells, with the following order of potency: lithocholic acid (LCA) > deoxycholic acid (DCA) > chenodeoxycholic acid (CDCA) > cholic acid (CA). Thus, BAs did not stimulate secretion of GLP-1 and PYY from perfused small intestine in TGR5 KO mice but stimulated robust responses in wild type littermates. TGR5 is not expressed on α-cells or β-cells, and BAs had no direct effects on glucagon or insulin secretion from the perfused pancreas. Conclusion BAs should be considered not only as fat emulsifiers but also as important regulators of appetite- and metabolism-regulating hormones by activation of basolateral intestinal TGR5.
INTRODUCTION
The gut is the source of several hormones with important effects on appetite (neurotensin (NT), glucagon-like peptide-1 (GLP-1), and peptide YY (PYY)) and glucose regulation (the incretin hormones, glucose-dependent insulinotropic peptide (GIP) and GLP-1) [7,16,29]. Activation of this system, therefore, represents a potential approach to treat type-2-diabetes and obesity, and this has generated considerable interest in understanding the molecular sensing mechanisms underlying the secretion of these hormones. Recent studies have indicated that BAs, in addition to their well-known role in fat absorption, may stimulate the secretion of a number of appetite- and metabolism-regulating peptide hormones from the gut and pancreas, including glucagon, GIP, GLP-1, insulin, and PYY [2,3,6,13,14,19,30,34,35,38,46,47], but the molecular sensing mechanisms involved are not well understood. The aim of this study was two-fold. First, we sought to evaluate the rat as a model for BA effects by examining the effects of BAs on the secretion of the most important appetite- and metabolism-regulating hormones from the gut and pancreas (ghrelin, GLP-1, NT, GIP, glucagon, insulin, and PYY) and to compare the secretory responses to those elicited by glucose. Second, we sought to characterize the molecular sensing machinery behind the hormonal responses using a physiologically relevant model. The project relies on in vivo studies in rats as well as studies involving isolated, perfused preparations of the mouse and rat small intestine and the rat pancreas, allowing the secretory mechanisms to be studied in detail; these studies were supplemented with immunohistochemical and receptor pharmacology studies.
Chemicals
Test reagents were obtained as specified in supplementary methods.
Animal studies
Studies were conducted with permission from the Danish Animal Experiments Inspectorate (2013-15-2934-00833) and the local ethical committee in accordance with the guidelines of Danish legislation governing animal experimentation (1987) and the National Institutes of Health (publication number 85-23).
Systemic effects of bile acids in anesthetized rats – intraluminal stimulation
Male Wistar rats (∼250 g) were obtained from Janvier (Saint Berthevin Cedex, France) and housed two per cage under standard conditions with ad libitum access to chow and water and left to acclimatize for at least one week before the study. Studies were carried out in different groups of rats on two occasions. On the day of study, rats were fasted 8 h prior to the study (1500 h) with access to water. Rats were anesthetized by a subcutaneous injection with Hypnorm/midazolam, then the lower part of the abdominal cavity was opened and a needle inserted into the inferior vena cava approximately 2 cm cranial to the iliac veins. A plastic tube was also inserted into the intestinal lumen approximately 8 cm below the pyloric sphincter. A basal blood sample was collected 5 min later and another immediately before stimulant administration. At time 0 min, rats (n = 9) were given 1.5 mL intraluminally of one of the following four solutions: 1) a complex bile acid mixture consisting of CA, GCA, TCA, GDCA, TDCA, DCA, CDCA, GCDCA and TCDCA, each 0.321 mM; 2) a mixture of UDCA and TUDCA, each at a concentration of 1.404 mM, thus resulting in a similar total bile acid concentration as the complex bile acid mixture (2.808 mM); 3) D-glucose (50% (w/v); positive control); or 4) 0.9% NaCl (negative control). All stimulants were diluted in 0.9% NaCl. Rat body weights did not differ between treatment groups (305 ± 5 g vs. 304 ± 3 g vs. 300 ± 4 g vs. 295 ± 5 g, P > 0.05 for all groups), and, on each study occasion, rats from the same cage received different treatments. Blood for hormone and blood glucose measurement was collected through the needle in the vena cava (200 μL/time point; 1800 μL in total) into ice-cold EDTA-coated tubes at time −5, 0, 2, 5, 10, and 30 min, and blood glucose was immediately measured by a handheld glucometer utilizing the gluco-oxidase method (Accu-chek Compact plus device, Roche, Mannheim, Germany). Samples for hormone analysis were centrifuged (1,650 × g, 4 °C, 10 min) to obtain plasma, which was transferred to fresh Eppendorf tubes and immediately frozen on dry ice. Samples were stored at −20 °C until analysis. In between sample collection, the needle was regularly flushed with isotonic salt water (∼0.2 ml) to prevent clot formation. Samples were analyzed as described in the biochemical measurement section.
2.4. Isolated perfused rat and mouse small intestine and rat pancreas

Methods are described in more detail in the supplementary methods and elsewhere [8,9,12,24,44]. Male Wistar rats (∼250 g) were housed as described above and acclimatized for at least a week. Heterozygous mice (TGR5 +/−) [46] were transferred to the University of Copenhagen and bred to generate both TGR5 +/+ and TGR5 −/− progeny. Animals were housed under the same conditions as the rats. On the day of the experiment, TGR5 +/+ or TGR5 −/− mice (weight matched) or Wistar rats were anesthetized, and the abdominal cavity was opened. The upper half of the small intestine and the entire large intestine (for small intestine perfusions) or the entire intestine (for the pancreas perfusions) were carefully removed after tying off the supplying vasculature. Furthermore, for the pancreas perfusions, the spleen and stomach were removed, and the kidneys were excluded from perfusion by tying off the renal arteries. For the intestinal perfusions, a plastic tube was inserted into the lumen to allow administration of luminal stimulants. The small intestine (the distal half and the most proximal part connected to the pancreas) or the pancreas was perfused vascularly through the upper mesenteric artery or the abdominal aorta with warm (37 °C) modified Krebs-Ringer buffer gassed with 95% O2 and 5% CO2 (both v/v) until equilibrium, using a Uniper UP-100 perfusion system (Hugo Sachs; Harvard Apparatus, March-Hugstetten, Germany), and the venous effluent was collected through a catheter in the vena portae. As soon as the catheters were in place, the animals were killed by perforation of the diaphragm, and the preparation was allowed to equilibrate for approximately 30 min before sample collection. Venous effluent was collected, immediately chilled on ice, and transferred to −20 °C within 30 min.
2.5. TGR5 expression in isolated murine α-, β-, δ- and L-cells

TGR5 expression was examined in L-cell positive and L-cell negative epithelial cells isolated from transgenic mice (GLU-Venus) expressing fluorescent proteins under the control of the proglucagon promoter [36]. Pancreatic α-cells were isolated from GLU-Venus transgenic mice and pancreatic δ-cells from transgenic mice expressing YFP under the control of the somatostatin promoter, as described previously [1]. The β-cells were isolated on the basis of their size, using forward and side scatter characteristics distinguishing them from other nonfluorescent islet cells in single cell preparations from mice with glucagon-driven Venus expression. TGR5 expression was determined by RT-PCR and expressed relative to beta-actin as described previously [36] and specified in supplementary methods.
Biochemical measurements
In vivo rat study: Plasma concentrations of peptide hormones were determined using a customized xMAP-based multiplex assay (Milliplex MAP rat metabolic hormone magnetic bead panel, metabolism multiplex assay, cat. no. RMHMAG-84K, Millipore) for the following selected analytes: Amylin (Active), C-Peptide, Ghrelin (Active), GIP (Total), GLP-1 (Total), Glucagon, IL-6, Insulin, Leptin, MCP-1, PP, PYY (Total), and TNF-α. The assay is reported to have no significant cross-reactivity with the other hormones tested. After the analytes were bound to the antibody-coupled beads, a biotinylated detection antibody was added, followed by incubation with a streptavidin-phycoerythrin conjugate. Subsequently, the beads were subjected to a flow cytometry-based detection method using a Luminex laser-based analyzer (catalog no. Luminex 200, Luminex, Austin, TX 78727, USA). The manufacturer's instructions were closely followed. Reported intra- and interassay variation was 1–8% and 7–29%, respectively. The median fluorescent intensities from the eight calibrators were used to interpolate concentrations in plasma samples using 5-parameter logistic regression.
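The 5-parameter logistic (5PL) interpolation mentioned above has a standard closed form and inverse. The sketch below is generic; the calibrator values are hypothetical, and this is not the vendor's software.

import numpy as np
from scipy.optimize import curve_fit

def logistic_5pl(x, A, B, C, D, E):
    # five-parameter logistic: asymptotes A (zero dose) and D (high dose),
    # slope B, inflection C, asymmetry E
    return D + (A - D) / (1.0 + (x / C) ** B) ** E

def concentration_from_mfi(y, A, B, C, D, E):
    # invert the fitted 5PL to interpolate sample concentrations
    return C * (((A - D) / (y - D)) ** (1.0 / E) - 1.0) ** (1.0 / B)

# hypothetical calibrator series (concentration vs. median fluorescent
# intensity), eight points as in the assay above
std_x = np.array([4.9, 19.5, 78.1, 312.5, 1250.0, 5000.0, 20000.0, 80000.0])
std_y = np.array([18.0, 55.0, 190.0, 640.0, 1900.0, 4300.0, 6900.0, 8200.0])
popt, _ = curve_fit(logistic_5pl, std_x, std_y,
                    p0=[10.0, 1.0, 1000.0, 9000.0, 1.0], maxfev=20000)
print(concentration_from_mfi(2500.0, *popt))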
Isolated perfused organs
Total bile acid concentrations in venous effluents were measured by an enzymatic assay relying on the ability of bile acids (in the presence of 3α-hydroxysteroid dehydrogenase) to reduce thio-NAD+ to thio-NADH (detected) (Total Bile Acid Kit, cat. no. STA-631, Cell Biolabs Inc., CA, USA). Peptides from the isolated perfused mouse/rat intestine/pancreas were quantified by use of extensively validated in-house radioimmunoassays (RIAs). GLP-1 concentrations were measured with assay 89390, employing an antibody specifically targeting the amidated C-terminus of the molecule, thus measuring total GLP-1 (both intact GLP-1 7-36amide, the primary metabolite GLP-1 9-36amide, and other potential N-terminally truncated or extended isoforms) [32]. The amidated isoform of GLP-1 was targeted rather than the glycine-extended isoform (GLP-1 7-37) because the rat predominantly stores and secretes amidated GLP-1 [22,45]. Rat PYY (total) was measured employing antibody T-4093, measuring both intact PYY 1-36 and the primary metabolite PYY 3-36 [45]. NT (total) was measured using antibody 3D97, which targets N-terminal epitopes in the 1-8 sequence, thus targeting total NT [23]. Glucagon was measured with the C-terminally directed antibody 4305, which reacts with all bioactive forms of glucagon [31]. Insulin was measured using antibody 2006, which detects all bioactive forms of (murine) insulin [5]. Somatostatin was measured with a side-viewing antibody (assay 1758), thus detecting all bioactive peptide forms (SST-14 and SST-28) [17]. For all measurements, standard curves were prepared in perfusion buffer, which was shown to be devoid of matrix effects in control studies. Further details on the respective assays, including experimental detection limits, can be found elsewhere [25].
2.8. TGR5-induced cAMP production

COS-7 cells were grown in DMEM supplemented with 2 mM L-glutamine, 180 U/ml penicillin, 45 μg/ml streptomycin, and 10% (v/v) FBS according to protocols described previously [39]. 35,000 COS-7 cells per well were seeded in 96-well plates coated with poly-D-lysine and modified for either human or rat TGR5 expression using a transient calcium phosphate precipitation transfection procedure [21], using a pCMV6-XL5 or pCMV6-Entry vector, respectively (cat. no. SC123312 and RN210451, OriGene Technologies Inc., Rockville, MD). On the assay day, two days after transfection, growth medium was removed from the cells, and they were left to equilibrate in HBS buffer containing 1 mM IBMX for 30 min at 37 °C. Concentrations of different BAs or the TGR5 agonist RO6272296 were added to duplicate wells, and the cells were incubated for 30 min at 37 °C and subjected to the in vitro HitHunter cAMP assay (based on enzyme fragment complementation, DiscoveRx) carried out according to the manufacturer's instructions. The TGR5 agonist was included in all runs to allow for data normalization between runs.

Figure legend (fragment): Red: bile acid group; black: glucose group; grey: 0.9% NaCl group (negative control). n = 9 (bile acid and glucose); n = 3 (0.9% NaCl). *P < 0.05, **P < 0.01 between respective baseline and response values or between treatment groups (AUCs).
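Concentration-response parameters such as the EC50 values and Hill slopes reported in Supplementary Table 1 are conventionally obtained by fitting a four-parameter Hill equation to the normalized cAMP responses. A minimal sketch with hypothetical data (not measurements from this study) follows.

import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, hill_slope):
    # four-parameter concentration-response curve
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill_slope)

# hypothetical concentrations and cAMP responses normalized to the
# reference TGR5 agonist (% of its maximal response)
conc = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
resp = np.array([2.0, 5.0, 14.0, 38.0, 68.0, 88.0, 96.0])
(bottom, top, ec50, n), _ = curve_fit(hill, conc, resp,
                                      p0=[0.0, 100.0, 0.5, 1.0])
print(f"EC50 = {ec50:.2f}, Hill slope = {n:.2f}")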
Immunohistochemistry
Control wild type C57BL/6 male mice and male Wistar rats (∼300 g, Taconic, Ejby, Denmark) were killed and approximately 2 cm of the jejunum was collected. Tissue samples were fixed in formalin, paraffin embedded, sectioned, and processed using standard methods [51]. The immunofluorescence staining protocol employed involved microwave treatment as previously described [52]. The double immunofluorescence staining was performed using (rabbit) anti-GLP-1 (ab22625, Abcam, 330 Cambridge Science Park, Cambridge, CB4 0FL, UK) and (goat) anti-TGR5 (cat. no. SC-48687, Santa Cruz Biotechnology, Inc., Dallas, TX) diluted 1:50 in 10% (v/v) FBS in PBS. Sections were incubated overnight at 4 °C, washed thoroughly with PBS, and then double labeled with (donkey) anti-rabbit IgG (H + L) Alexa Fluor 546 conjugate (cat. no. A10040, ThermoFisher Scientific, Slangerup, Denmark) and (donkey) anti-goat IgG (H + L) Alexa Fluor 488 conjugate (cat. no. A11055, ThermoFisher Scientific) after 30 min incubation. Cell nuclei were stained with DAPI (4′,6-diamidino-2-phenylindole dihydrochloride). Fluorescence imaging was performed using a Zeiss confocal 510 microscope equipped with a water 63x/1.0 Plan-Apochromat objective and Zen software (Carl Zeiss, Oberkochen, Germany). In a separate set of experiments, the specificity of the TGR5 antibody applied above, as well as of another TGR5 antibody (cat. no. LS-A1936, LifeSpan BioSciences, Inc., Seattle, WA), was validated by transfecting HEK293 cells with cDNA clones containing the mouse or rat (Myc-DDK-tagged) TGR5 transcript (cat. no. MR227683 and RR210451) or with a mock vector. Antibody SC-48687 reacted strongly with cells transfected with either mouse or rat TGR5 cDNA (identified by co-expression of c-Myc), whereas the other TGR5 antibody (LS-A1936) did not react with TGR5-transfected cells and was therefore judged unsuitable for further characterization (data shown in Supplementary Fig. 1A). To further validate the specificity of the SC-48687 antibody, we applied the same protocol as described above to small intestinal tissue collected from a TGR5 KO mouse or a wild type littermate. Staining was confirmed in the tissue from the wild type littermate but not the TGR5 KO mouse (Supplementary Fig. 1B). IBAT staining was performed on the same tissue by similar methods using an antibody from Aviva Systems Biology (cat. no. OAEB00210, San Diego, CA).
Data presentation and statistical analysis
In vivo data are presented as absolute mean values ± SEM and as baseline-corrected (calculated as the mean of the −5 and 0 min concentrations) area under the curve (AUC) values to compare overall effects during the course of the study between treatments. Significance was assessed by comparing responses with their own baseline (average of −5 and 0 min) or by testing AUCs against each other. Perfusion responses were assessed by comparing averaged concentrations during the stimulation period (10 consecutive 1-min observations) with mean basal levels over a similar duration: immediately before the stimulation (5 consecutive 1-min observations) and at the end of the following equilibrium period (also 5 consecutive 1-min observations). Significance was assessed by one-way ANOVA followed by Tukey post hoc tests. For the remaining data, an unpaired or paired two-sided t-test was used to assess differences between two groups, as indicated. For all tests, the threshold for significance was set to P = 0.05.
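The baseline-corrected (net) AUC used for the in vivo comparisons can be computed with the trapezoidal rule; a minimal sketch, assuming the sampling grid of the in vivo protocol above and illustrative concentrations, is given below.

import numpy as np

def net_auc(times, conc):
    # baseline-corrected AUC; baseline is the mean of the -5 and 0 min
    # samples, as in the in vivo protocol above
    t = np.asarray(times, dtype=float)
    y = np.asarray(conc, dtype=float)
    baseline = y[:2].mean()
    t, y = t[t >= 0.0], y[t >= 0.0] - baseline   # integrate from t = 0
    return float(np.sum(np.diff(t) * (y[1:] + y[:-1]) / 2.0))

# e.g. a plasma hormone trace at -5, 0, 2, 5, 10, 15, 30 min
# (illustrative values, not study data)
print(net_auc([-5, 0, 2, 5, 10, 15, 30],
              [12.0, 13.0, 30.0, 42.0, 28.0, 20.0, 15.0]))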
RESULTS
3.1. Intra-luminal delivery of bile acids stimulates secretion of numerous appetite- and metabolism-regulating peptides in anesthetized rats

Blood glucose was unaltered by an intra-luminally administered BA-mix (cholic acid (CA), deoxycholic acid (DCA) and chenodeoxycholic acid (CDCA), both in free form and in their glycine- and taurine-conjugated forms, 1 mM each) (P > 0.05, Figure 1A) but increased after intra-luminal glucose treatment (baseline: 9.6 ± 0.2 mM, peak: 15.5 ± 1.2, P < 0.001). Glucose as well as the BA-mix stimulated amylin secretion 15 and 30 min after administration (P < 0.05, Figure 1B). Glucagon secretion increased after BA-mix administration at time point 5 min (P < 0.05, Figure 1C1) and net AUC 0-15 min increased (P NaCl vs. BA-mix = 0.07, Figure 1C2). Both glucose and BAs stimulated plasma C-peptide and insulin secretion at the later time points (Figure 1D,E). GIP and GLP-1 secretion was robustly stimulated by the bile acid mix (2-4 fold) at 5 min (P < 0.05 compared to baseline, n = 9, Figure 2A,B) and returned to baseline at 15 min. Similar responses were observed after intra-duodenal glucose, and neither peak values nor AUCs differed between treatments (P GIP, glucose vs. BA-mix AUCs = 0.78; P GLP-1, glucose vs. BA-mix AUCs = 0.77; n = 9, Figure 2A,B). Glucose and BAs both tended to increase PYY secretion (Figure 2C). For all measured peptides, plasma concentrations remained unchanged at all time points in the isotonic saline control group (P > 0.05, Figures 1 and 2).
Effects of bile acids and a bile acid sequestrant
Since GLP-1 and PYY secretion was stimulated by BAs in vivo, we next investigated the molecular mechanisms underlying these (as well as NT) responses using the isolated, perfused rat small intestine. Intra-luminal instillation of a BA mixture (CA, DCA and CDCA, in both unconjugated and glycine- and taurine-conjugated forms) increased secretion of GLP-1, PYY and NT to 4-8 times the basal secretion, and this was prevented by colesevelam (Figure 3A1-2, n = 6). Colesevelam is a BA sequestrant, which binds to BAs and reduces their absorption (Figure 3A3-4). In order to investigate whether the lack of gut hormone secretion in the presence of this BA sequestrant (Figure 3A1-2) was due to reduced availability of free BAs to stimulate luminally exposed receptors or to reduced absorption and thus reduced basolateral BA exposure (Figure 3D,G), we next tested the effects of inhibiting the predominant transporter for conjugated bile acids, the ileal bile acid transporter (IBAT).
Role of the ileal-bile acid transporter (IBAT)
Transporter inhibition attenuated both TDCA absorption and hormone secretion (Figure 3B1-4, n = 6). In a separate line of experiments, we tested the effect of the IBAT inhibitor together with intra-luminal infusion of a mixture of unconjugated BAs (CA, DCA, and CDCA: each 1 mM), which are more lipophilic than the conjugated BAs (and should, therefore, be absorbed by IBAT-independent diffusion over the intestinal epithelium). In this case, the inhibitor had no effects on either the BA absorption or the GLP-1 and NT responses (although PYY secretion was slightly reduced) (P < 0.01, Figure 3C1-4, n = 6). As IBAT couples BA uptake to the co-uptake of sodium, activity of this transporter could (if present on the respective enteroendocrine cells), in itself, stimulate peptide secretion by depolarizing the L-cell, as is the case for glucose-stimulated GLP-1 and NT secretion (through the sodium-glucose transporter 1) [11,22-24,33,37]. In the rat, IBAT was predominantly expressed in the ileum (Figure 3D) and co-localized with GLP-1 (Figure 3E), but the effect of IBAT activity on TDCA-stimulated GLP-1, NT, or PYY secretion from the perfused rat intestine does not appear to involve cell depolarization, since inhibition of L-type voltage-gated calcium channels (by nifedipine) had no effect on the secretory response (Figure 3F1,2). In separate control studies, intra-luminal administration of TDCA robustly stimulated the secretion of GLP-1, NT, and PYY, and in the case of GLP-1 and NT to a comparable extent when applied twice in the same experiment (Figure 3F1,2, n = 6), confirming that the attenuation of TDCA-stimulated secretion in the presence of the IBAT inhibitor was a specific event.
3.2.3. The role of the farnesoid-X-activated receptor or G protein-coupled bile acid receptor 1 (GPBAR1/TGR5)

We next investigated whether the secretory response to BAs could be explained by activation of the BA-sensitive receptors FXR or TGR5. Intra-arterial, but not intra-luminal, application of a poorly absorbable TGR5 agonist [48] robustly stimulated GLP-1, NT, and PYY secretion (Figure 4A1,2), while application of a specific FXR agonist had no effect whether applied from the luminal or vascular route (Figure 4B1,2). In COS-7 cells transiently transfected with either the human or rat TGR5 (Figure 4C,D), a strong activating signal was observed with pharmacologically relevant BA doses (EC50 < 1 mM) in the following order of potency: LCA > DCA > CDCA for both receptors. Comparable efficacy and potency were observed for the free, taurine-, and glycine-conjugated forms of the respective BAs. In contrast, UDCA (free or taurine- and glycine-conjugated) did not activate human or rat TGR5 in concentrations up to 10 mM, while CA (free as well as glycine- and taurine-conjugated forms) was a weak agonist for the rat but not the human receptor (Figure 4C1-5, D1-5). Additional pharmacological properties of the tested BAs (Hill slope, time to peak, EC50 value, and percentage efficacy compared to the maximal cAMP response resulting from TGR5-agonist stimulation) are provided in Supplementary Table 1. As we found TGR5 to be expressed by isolated primary mouse L-cells and to co-localize with GLP-1 and PYY in rat small intestine (Figure 4E,F), our data, therefore, indicate that BA-stimulated secretion is dependent on direct activation of basolaterally located TGR5 receptors expressed by the N- or L-cell. Intra-luminal UDCA (which does not activate TGR5) in vivo did not change GLP-1 secretion, whereas a complex BA mixture (CA, DCA, and CDCA in unconjugated and glycine- and taurine-conjugated forms) with the same total BA concentration resulted in a pronounced increase (Figure 4G). Furthermore, intra-arterial TUDCA (0.01 mM) stimulated neither GLP-1, NT, nor PYY secretion from the isolated perfused rat small intestine, in contrast to the TGR5-activating BAs TCDCA and TDCA, both of which robustly stimulated secretion when applied at the same concentration (P < 0.001, n = 6, Figure 4H1,2). To further demonstrate the role of the TGR5 receptor, we also studied the isolated perfused small intestine from TGR5 knockout (KO) mice [46]. Here, intra-luminal administration of bile acids (the same complex mixture as above) had no effect on GLP-1 and PYY secretion, whereas a robust hormone response was seen in wild-type littermates (Figure 4I,J).
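As an aside on how receptor pharmacology parameters such as the EC50 and Hill slope reported in Supplementary Table 1 are typically derived, a dose-response curve is fitted to the cAMP readout. A minimal sketch under that assumption; the doses and responses below are invented, not the study's measurements:

```python
# Minimal sketch: estimating EC50 and Hill slope from a cAMP dose-response
# readout by fitting a four-parameter Hill curve. All data are invented.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, bottom, top, ec50, hill_slope):
    """Four-parameter Hill (log-logistic) dose-response curve."""
    return bottom + (top - bottom) / (1.0 + (ec50 / conc) ** hill_slope)

# Hypothetical TGR5 cAMP responses (arbitrary units) at increasing BA doses
doses = np.array([0.01, 0.03, 0.1, 0.3, 1.0, 3.0, 10.0])
resp = np.array([2.0, 3.5, 8.0, 18.0, 30.0, 36.0, 38.0])

params, _ = curve_fit(hill, doses, resp, p0=[resp.min(), resp.max(), 0.3, 1.0])
bottom, top, ec50, slope = params
print(f"EC50 = {ec50:.2f}, Hill slope = {slope:.2f}")
```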
3.3. No direct effects of bile acids on the secretion of pancreatic hormones
To investigate whether the BA-stimulated insulin response and the tendency toward stimulated glucagon secretion observed in vivo resulted from direct effects of BAs on the pancreas, we studied the direct effects of BAs and of direct TGR5 activation in the isolated perfused rat pancreas, using the same BA-mix and TGR5 agonist used in vivo and in the intestine perfusions. Neither BAs nor the TGR5 agonist (even at a 10-fold higher dose than that which robustly stimulated GLP-1, NT, and PYY secretion in the gut) had any effect on glucagon or insulin secretion, whether at low (3.5 mM) or high (10 mM) glucose levels (Figure 5A-D), consistent with our observation that neither pancreatic α- nor β-cells isolated from mice express TGR5 (Figure 4F). Glucose had the expected effects on the secretion of these peptides: raising the glucose concentration from 3.5 to 10 mM inhibited glucagon secretion by a factor of approx. 15 and stimulated insulin secretion by a factor of approx. 10, while L-arginine (positive control) also robustly stimulated the secretion of both hormones.
DISCUSSION
In humans, BAs have been identified as stimuli for glucagon, GLP-1, insulin, NT, and PYY secretion [2,3,13,30,38], but a detailed analysis of the physiological sensing mechanisms involved has not been carried out, and this was the purpose of our study. Because of their chemical nature, unspecific effects of BAs could potentially confound interpretation of results, especially when used in high concentrations. Since such studies cannot easily be carried out in humans, we turned to rats and first investigated the effects of intra-luminal BA stimulation (a complex mixture) on a number of appetite- and metabolism-regulating peptide hormones in anesthetized animals. The BAs were administered in a dose similar to that reached intra-luminally in the proximal small intestine in humans after meal intake, i.e., 4-12 mM depending on meal composition [41,49], and well above the critical micellar concentration of 1-2 mM [15]. However, whether the composition of the applied BA mix in our study is representative of the postprandial intra-luminal composition in humans (or rats) is difficult to assess. This is because most data on the composition of BAs in humans have been collected from samples obtained from a peripheral vein. Little is known about how closely this reflects the composition of BAs originally secreted into the intestinal lumen, because intestinal reabsorption and liver retention may differ between BAs and because of differences in the microbial activity in the gut, which dehydroxylates the primary BAs. In the anesthetized rat, intra-luminal administration of the complex BA-mix (9 mM in total) robustly stimulated the secretion of GIP, GLP-1, C-peptide, and insulin, mirroring the effects of BAs in humans [2,3,13,38]. Remarkably, the secretory responses were of comparable magnitude to those resulting from intra-luminal glucose administration (a powerful stimulus for most of these peptides [18]). For PYY, the half-hour sample collection period may have been too short to allow the stimulants to reach all of the more distally located PYY cells. To study the secretory pathways directly involved in BA-stimulated gut hormone secretion, we used the isolated perfused rodent small intestine. The benefit of this model is that it allows a high level of experimental control (e.g., it is possible to discriminate between responses caused by luminal or vascular activation of secretory sensors), high time resolution (secretion is followed minute by minute), and application of drugs that may have been hazardous or lethal in vivo, as reviewed [43]. Additionally, the natural anatomical arrangement of the cells (including polarization) and the vascular perfusion (and therefore convective drag of absorbed nutrients and secreted hormones) are preserved [43], allowing studies of the full dynamics of absorption and secretion. Thus, conclusions drawn from results obtained using this model are likely to be physiologically relevant.

[Figure 5 caption: A: Glucagon (1,2) and insulin (3,4) secretion at low glucose (3.5 mmol/L) in response to the TGR5 agonist RO9272296 or a complex BA-mix. B: Glucagon (1,2) and insulin (3,4) secretion at high glucose (10 mmol/L) in response to the TGR5 agonist RO9272296 or a complex BA-mix. L-arginine (Arg) was included at the end of all experiments and used as a positive control. n = 6 for all experiments. *P < 0.05, **/##P < 0.01, ***/###P < 0.001, ****P < 0.0001. Stars indicate significance compared to baseline and hashes indicate significance between treatments.]
Similar to the in vivo study, intra-luminal BA administration resulted in robust GLP-1, NT, and PYY responses, confirming in vitro studies where a variety of BAs have been shown to stimulate GLP-1 secretion from the GLP-1-producing cell lines GLUTag and STC-1, from primary murine cell cultures, and from mouse gut tissue mounted in Ussing chambers [6,19,34]. In our perfused gut model, we were able to demonstrate that peptide secretion depends on the absorption of BAs, since a BA sequestrant (colesevelam) or pharmacological inhibition of IBAT, the transporter responsible for conjugated BA uptake from the intestinal lumen, eliminated the secretory responses. In contrast, the secretory responses to a mixture of unconjugated BAs (which are absorbed by IBAT-independent diffusion over the intestinal epithelium) were unaffected by IBAT inhibition. Consistent with this, IBAT inhibition attenuated conjugated BA-stimulated GLP-1 secretion from mouse epithelium mounted in Ussing chambers [6]. Although L-cells have been shown to express IBAT [6] and we here demonstrate co-localization (by immunohistochemistry) of this transporter with GLP-1, the secretory response to luminal (conjugated) BAs did not appear to depend on cellular depolarization resulting from coupled sodium transport, since inhibition of voltage-gated calcium channels (by nifedipine) had no effect on the secretory response to conjugated BAs (whereas the same dose of nifedipine has been shown to eliminate glucose-stimulated GLP-1 and NT secretion [23,24]). This is consistent with our previous observation regarding BA-stimulated GLP-1 secretion from mouse gut tissue mounted in Ussing chambers [6]. We therefore next investigated whether stimulated secretion relied on activation of BA-sensitive receptors. BAs have previously [6,46] been shown to activate the surface receptor TGR5 [20] and the nuclear receptor FXR, the latter being effectively activated by CDCA, TCA, TCDCA, and TDCA (but not tauro-muricholic acid) [28,50]. In our perfused rat intestine model, application of a specific FXR agonist (GW4064) from the luminal or vascular side (at concentrations far above the reported EC50 of 15 nM [4]) had no effect on peptide secretion, indicating that activation of FXR is not acutely involved in the secretory response. Rather, this nuclear receptor may play a role in the beneficial (transcriptional) adaptations resulting from gastric bypass operations in mice [40]. In contrast, administration of a poorly absorbable TGR5 agonist [48] to the perfused rat intestine from the vascular, but not the luminal, side of the preparation robustly stimulated the secretion of GLP-1, NT, and PYY. This is consistent with a study by Ullmer et al., who showed that systemic rather than luminal activation of TGR5 stimulated GLP-1 and PYY secretion in mice, since intravenous but not oral administration of the same TGR5 agonist used here stimulated secretion [48]. By immunohistochemical and pharmacological studies, we showed that BAs appear to stimulate secretion of these peptides by direct activation of TGR5 at the level of the L-cell, because the receptor is co-localized with GLP-1 on the basolateral side of the cell and is activated (both the human and rat versions of the receptor) by a variety of BAs in the following order of potency: LCA > DCA > CDCA, irrespective of conjugation. Consistent with this, UDCA and its glycine- and taurine-conjugated isoforms neither activated the receptor nor stimulated GLP-1 secretion.
Since a TGR5 antagonist, which could provide further evidence for the role of this receptor, is not available, we used TGR5 KO mice [46] and wild-type littermates for gut perfusion studies [12]. In line with our observations from the rat and two studies using other experimental models [6,46], TGR5 deficiency resulted in elimination of BA-stimulated GLP-1 and PYY secretion. BAs or selective TGR5 agonists have been suggested to directly stimulate glucagon and insulin secretion from isolated human and rodent islets [10,26,27]. However, although we observed stimulatory effects of BAs on insulin secretion in vivo, these seemed to be indirect, as the same complex BA mixture that gave rise to robust secretion of gut hormones had no effect on secretion of glucagon or insulin from the perfused rat pancreas. Furthermore, TGR5 was not expressed in primary α- or β-cells, and a TGR5 agonist (which stimulated secretion of GLP-1, NT, and PYY from the perfused intestine) had no effect on endocrine hormone secretion from the perfused pancreas. Therefore, the late increase in insulin secretion could be a result of BA-stimulated secretion of GIP and GLP-1, which could be tested in experiments involving blockade of the relevant hormone receptors with GIP and GLP-1 receptor antagonists (e.g., the truncated GIP isoform GIP(3-30)NH2, recently shown to be a competitive antagonist of the rat GIP receptor [42], and the widely used GLP-1 receptor antagonist exendin 9-39). In contrast, glucagon and insulin secretion was very sensitive to changes in the glucose concentration in the perfusate and responded to our positive control (L-arginine), confirming that the preparation was functional and responded appropriately to physiological stimuli. BA-stimulated secretion of insulin, therefore, appears to be mediated indirectly and, although this remains to be clarified, potential mediators could include GIP, GLP-1, and fibroblast growth factor 19 and/or 21 (FGF19/FGF21). It therefore appears that BAs stimulate the secretion of a number of gut hormones by activation of TGR5 located on the basolateral side of the enterocytes, whereas the effects of BAs on glucagon and insulin secretion are indirect. A detailed model of the molecular mechanisms underlying BA-stimulated secretion of gut hormones is provided in Figure 6.
CONCLUSION
Our study shows that BAs that activate TGR5 have marked effects on the secretion of appetite- and metabolism-regulating hormones and, therefore, in addition to their role as fat emulsifiers, should be regarded as important regulators of blood glucose and metabolism. Given the strong stimulatory effect on GIP, GLP-1, NT, and PYY secretion, the mechanisms of BA-stimulated secretion, here demonstrated to depend on absorption and TGR5 activation, may represent appealing targets for the development of drugs for the treatment of obesity and type 2 diabetes.
Shock Wave Intravascular Lithotripsy (IVL)-Assisted Staged Percutaneous Coronary Intervention (PCI) for a Calcified Right Coronary Artery in a Patient With Unstable Angina: Shock the Rock
Coronary artery calcification represents one of the most challenging and demanding subsets of percutaneous coronary intervention (PCI). Accumulated evidence has aptly outlined that shockwave intravascular lithotripsy (IVL) is a reliable tool to overcome calcified stenosis in coronary arteries. However, there is a lack of published case reports in the Indian context. Herein, we describe a case of right coronary artery (RCA) calcification successfully managed with shock wave IVL-assisted staged PCI. Initially, a 74-year-old male patient presented with ST-segment elevation myocardial infarction (STEMI). At that time, coronary angiography demonstrated calcific thrombotic occlusion in the left anterior descending artery (LAD) and stenosis in the proximal and mid tubular RCA. It was decided to proceed with immediate PCI of the LAD followed by staged PCI of the RCA. The patient presented with unstable angina at the time of the second, repeat PCI of the RCA and was managed with shock wave IVL-assisted staged PCI. Ultimately, the patient's condition improved, with good thrombolysis in myocardial infarction (TIMI) flow.
Introduction
Coronary artery calcification (CAC) hinders percutaneous coronary intervention (PCI) by impeding stent crossing, disrupting the drug-polymer coating on the stent surface, altering drug delivery and elution kinetics, and reducing stent expansion and apposition [1]. Thankfully, the novel shockwave intravascular lithotripsy (IVL) system, which emits circumferential mechanical energy and thus disrupts calcium and minimizes soft-tissue injury while aiding stent deployment, has greatly enriched the therapeutic armamentarium for calcified lesions [2]. Despite the overwhelming clinical evidence with encouraging outcomes available for shock wave IVL-assisted PCI [3][4][5], there is a paucity of data in the Indian context. Toward this end, we hereby report a case of a male patient with right coronary artery (RCA) calcifications who presented with unstable angina. He was treated with shockwave IVL-assisted staged PCI.
Case Presentation
A 74-year-old male with a past history of hypertension, uncontrolled diabetes mellitus, and pancreatitis was admitted to a tertiary care facility for staged PCI of the RCA. He had developed ST-segment elevation myocardial infarction (STEMI) nearly one month earlier, which was managed with the implantation of a drug-eluting stent (DES) (Figure 1).
LAD: left anterior descending artery
At that time, transthoracic echocardiography revealed an akinetic mid anterior septum, apex, and mid anterior wall; a dilated left ventricle; severe left ventricular systolic dysfunction with an ejection fraction of 30%; grade 1 diastolic dysfunction; and mild aortic regurgitation. All laboratory parameters were normal except for troponin-I (619.9 ng/dl). Further, coronary angiography revealed a normal left main (LM); 100% calcific thrombotic stenosis in the left anterior descending artery (LAD); 50% stenosis in the proximal left circumflex artery (LCX); normal obtuse marginal arteries OM1 and OM2; 70% stenosis in the proximal RCA and 80-90% stenosis in the mid tubular RCA; and a normal posterior descending artery (PDA)/posterior left ventricular (PLV) artery. We planned to proceed with immediate PCI of the LAD followed by staged PCI of the RCA. Nearly one month later, the patient was admitted to an intensive care unit (ICU) for the second PCI using the shockwave IVL system. At this time, he presented with unstable angina. On admission, the patient had a blood pressure of 143/86 mmHg, a pulse rate of 69 beats per minute, and an oxygen saturation of 99% on room air. Moreover, chest and neurological examinations were normal. Coronary angiography performed on arrival demonstrated a normal LM; a type-II vessel with mild disease proximal to a patent stent in the LAD; normal diagonal arteries D1 and D2; 40-50% stenosis in the proximal LCX; normal OM1 and OM2 arteries; 60-70% stenosis in the proximal RCA with a mid 95% calcific stenosis followed by 70% diffuse stenosis; and a normal PDA/PLV (Figure 2). The patient was diagnosed as having single-vessel disease. A 3.5 JR guiding catheter (Medtronic, Minneapolis, MN, USA) was engaged in the ostial RCA, and the guidewire was then crossed to the distal RCA. Thereafter, shockwave IVL was performed sequentially using a 2.0 × 10 mm balloon, which was inflated to 12 atmospheres, with 10 pulses of ultrasound energy delivered successively over 10 seconds (Figure 3). This cycle was repeated four times. Stents were then implanted, followed by post-dilatation with a 3.5 × 10 mm non-compliant balloon (Figure 6).
Discussion
The prevalence of CAC in PCI cases is estimated to be within the range of 18-26%; however, this number is expected to rise. Extremely calcified coronary lesions remain a hurdle for PCI as their dilatation and precise implantation of stents are quite challenging. Very tight calcified lesions may resist dilatation at low balloon inflation pressures or rupture at high pressures. This causes high rates of procedural complications and, consequently, poor clinical outcomes [6]. Recent therapeutic calcium debulking techniques used to overcome CAC can be categorized into two groups: non-balloon (rotational, orbital, and laser coronary atherectomy) and balloon-based technologies (high-pressure/ultra-high-pressure non-compliant balloon, scoring/cutting balloons, shockwave IVL balloon). Sadly, these techniques are encumbered by a significant number of procedural complications, such as distal embolization, vascular wall injury, and coronary artery dissection or perforation [7]. Derived from lithotripsy technology employed to treat nephrolithiasis [8], the shockwave IVL system has also proved to be beneficial in treating CAC. Compared to the aforementioned conventional approaches, the shockwave IVL system requires minimal training and provides excellent outcomes of luminal widening, successful stent implantation, and reduced risk of major adverse cardiovascular events [6]. These observations are noted in our case, too.
The Disrupt Coronary Artery Disease (CAD) II study, a prospective, multicenter, single-arm post-approval study, confirmed the safety and effectiveness of shock wave IVL-assisted PCI in severe CAC [9]; this is in agreement with findings reported by Aksoy et al. [10]. Wong and coworkers [3] reported the usefulness of the shockwave IVL system in coronary calcium modification to optimize stent expansion in cases of acute coronary syndromes (ACS) (29% of ACS patients underwent staged PCIs for severe non-culprit lesions), stable angina, and PCI before transcatheter aortic valve implantation. According to Tsiafoutis et al. [11], shock wave IVL-assisted PCI appears to be a safe and useful alternative to achieve procedural success in CAC in STEMI cases. Further, many authors have reported promising outcomes of the shock wave IVL system in percutaneous revascularization of severely calcified LM disease [12], severely calcified and undilatable LAD lesions in a patient with recurrent myocardial infarction [4], and chronic total occlusion (CTO) PCI [13].
All previously mentioned studies hold the view that shock wave IVL-assisted PCI remains the default strategy for severely calcified coronary stenoses. In the present case, calcified stenosis of the RCA was successfully treated with shockwave IVL-assisted staged PCI in a patient who presented with unstable angina. We used shockwave IVL in our case considering the following facts: a) IVL has a preferential effect on deep calcium compared with other ablation techniques; b) being a balloon-based technique, it is user-friendly with a short learning curve [14]. Patel and co-investigators [15] proposed the use of shockwave IVL in staged PCI, during which thrombus burden and myocardial electrical instability may be significantly less. This notion is backed by the DEFER-STEMI (Deferred Stent Trial in STEMI), wherein deferring stent implantation in STEMI culminated in reduced no-reflow and increased myocardial salvage, with nearly 4% requiring urgent PCI prior to the staged procedure [16]. However, there is room for research on shockwave IVL-assisted staged PCI in treating CAC.
Conclusions
CAC still represents one of the most challenging subsets in PCI because of worse clinical outcomes. Slowly but surely, the proportion of patients with severely calcified lesions has been projected to grow worldwide, including in India, and the optimal treatment for the condition remains demanding. IVL is a relatively novel technique designed to overcome calcified stenosis in coronary arteries, with promising outcomes from several clinical trials. In this case, our experience demonstrates that combining the shockwave IVL system with staged PCI is safe in treating CAC and is associated with a low rate of complications and high procedural success.
Additional Information

Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Customers’ Assessment on ATM Services in Bangladesh
E-banking implies the provision of banking products and services through electronic delivery channels. The study was designed to investigate ATM services in Bangladesh, their impact on social life, and the security of ATM accounts. The main objective of the present study is to find out the practice, impact, and security status of ATM booths in Bangladesh. The investigation was concentrated in various divisions, districts, and towns in Bangladesh. The sample consisted of 120 bankers and bank customers, selected purposively from various territories. Out of the 120 respondents, 38 bankers and 72 bank customers who use ATM cards gave their opinions on many issues concerning ATM accounts. Data were collected using a questionnaire administered by the researcher and processed by microcomputer using the Statistical Package for the Social Sciences (SPSS). From the analysis of the data, the following major finding was obtained: ATM card holders in Bangladesh feel insecure about hijackers when withdrawing and depositing money at ATM booths, and not all ATM booths of Bangladeshi banks are in a safe position.
Introduction
E-banking, the latest generation of electronic banking transactions, has opened up a new window of opportunity for existing banks and financial institutions. Most banks have their own websites, but not all of them offer internet facilities; the main reason is that the banks do not have the IT infrastructure and proper security features. In Bangladesh, most people are illiterate and, obviously, technology-ignorant [1]. An ATM is simply a data terminal with two input and four output devices. Like any other data terminal, the ATM has to connect to, and communicate through, a host processor. The host processor is analogous to an Internet Service Provider (ISP) in that it is the gateway through which all the various ATM networks become available to the cardholder (the person wanting the cash). An ATM has two input devices (card reader and keypad) and four output devices (speaker, display screen, receipt printer, and cash dispenser) [2]. Electronic banking does not mean only 24-hour access to cash through an Automated Teller Machine (ATM) or direct deposit of paychecks into checking or savings accounts, as many consumers may think. Electronic banking (e-banking) involves many different types of transactions; it is a form of banking where funds are transferred through an exchange of electronic signals between financial institutions rather than an exchange of cash, cheques, or other negotiable instruments [3]. Electronic banking has been around for quite some time in the form of automatic teller machines (ATMs) and telephone transactions [4]. Some of the foreign commercial banks and private commercial banks are already using this optical fiber network for conducting online transactions, ATM, and point of sale (POS) services. Bangladesh joined the information superhighway by connecting itself with the international submarine cable system in 2006. A total of 159 Internet Service Providers (ISPs) have now been connected with this system, of which 64 are actively providing services. The government allocated Tk. 5,000 crores to rescue four government-owned banks [5]. E-banking is a new concept in the banking sector of Bangladesh. It is becoming popular, and almost all Bangladeshi banks now offer many e-banking facilities. E-banking is growing in Bangladesh day by day, with domestic private commercial banks and foreign commercial banks in the leading position; state-owned commercial banks do not offer all the functions of e-banking. The majority of Bangladeshi customers use mobile bank accounts and ATM accounts [6]. There were around 50 ATM booths operating in the country; two foreign banks had several ATMs of their own, and two local ATM service providers offered syndicated or rental services to several banks [7]. In Bangladesh, credit card and point of sale (POS) services are already provided by a quarter of local banks, while ATM and internet banking are expanding rapidly, especially in major cities [7]. A broad spectrum of electronic banking services, a subset of e-finance, is available in Bangladesh with various degrees of penetration. Credit card and POS services are provided by 23% of banks (PCBs and FCBs), and several thousand POS terminals have been set up in major cities of the country. Telebanking is the second most penetrated e-banking service in Bangladesh. ATM use is expanding rapidly in major cities. A group of domestic and foreign banks operates a shared ATM network, which drastically increases access to this type of electronic banking service [9].
Due to the incredible proliferation of information and communication technology (ICT), the concept of money has changed radically. Different business sectors have merged into a single, globally accessible one, and this has been possible only because of the proliferation of ICT [10].
Methodology of the Study
This study is based on primary data. Data were collected from 16 selected banks in Bangladesh, from branches located in the Dhaka, Chittagong, Rajshahi, and Khulna Divisions. Bank samples were selected from state-owned commercial banks, domestic private commercial banks, and foreign commercial banks in Bangladesh, and bank customers were selected from among university, college, and school teachers and students, businessmen, and other professionals from the private and public sectors. Respondents were basically of two categories: bankers and bank customers. Bankers were selected from different branches of the selected banks. Two sets of questionnaires were designed and distributed to bankers and bank customers; the questionnaires were distributed among the respondents through email. Questions were basically of four categories: five questions related to the availability and location of ATM accounts, eight to the security of ATM accounts and ATM booths, five to the problems and prospects of ATM accounts, and five soliciting opinions on how to solve the existing problems of ATM accounts in Bangladesh. Among the respondents, 44 were bankers and 76 were bank customers. Data were analyzed using SPSS software.
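For the categorical cross-tabulations this kind of SPSS analysis produces, the standard procedure is a chi-square test of independence. A minimal sketch of the equivalent computation; the counts are invented for illustration, not the survey's actual tallies:

```python
# Minimal sketch: chi-square test of independence between respondent type and
# perceived ATM security. The contingency counts below are hypothetical.
from scipy.stats import chi2_contingency

#               highly secured  not sufficient  don't know
table = [
    [10,            20,             14],   # bankers
    [8,             39,             29],   # bank customers
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.3f}")
```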
Analysis of Data
The following presents the results and analysis of the data collected for the study. Responses from a total of 120 respondents, consisting of 44 bankers and 76 bank customers, 98 male and 22 female, 34 married and 86 unmarried, and 28 from rural and 92 from urban areas, were considered. Figure 1 shows the percentage of respondents using various types of e-accounts: 28.3% of respondents do not use any account, 15.8% use mobile banking, 5% use credit card accounts, 35.9% use ATM accounts (24.2% ATM card users plus 11.7% debit card users), 10.8% use internet banking, 11.7% use debit card accounts, and 4.2% use other accounts. Table 2 reveals that the majority of respondents (48%) using online accounts are bachelor degree holders. The next largest group of online account users are master degree holders (47%), and the smallest number of respondents (1%) are Ph.D. holders. Among online accounts, the ATM account is used by the majority (36%) of the respondents. Table 3 shows that 29% of the respondents are from rural areas, 33% from municipalities, 14% from various district towns, and 24% from different city corporations. Fifteen percent of respondents from various locations mentioned that the ATM account is highly secured, while 31% mentioned that the security of ATM accounts in Bangladesh is not sufficient.
Of the total respondents, 11.2% do not know about the security status of ATM accounts in Bangladesh. The majority (42.5%) of users from municipal areas stated that the security of ATM accounts is not sufficient, while the majority (48%) of users from city corporations mentioned that ATM accounts in Bangladesh are sufficiently secured. Table 4 shows that 15.83% of respondents use mobile banking. Among the mobile bank account holders, 42.11% feel that e-banking is sufficiently secured. Among the ATM card users, six said it is not sufficiently secured. Among the respondents, 28.34% do not have an online account. The table shows that the majority of mobile bank account holders feel that the security of this type of account is not sufficient, while credit card users feel this account is highly secured.
Table 5 shows that e-banking in Bangladesh is not free from the security problem of hijackers: customers feel insecure when withdrawing money from ATM booths. In this research, 49.1% of respondents fear hijackers when withdrawing or depositing money at ATM booths. Table 6 shows that the majority of the respondents use a GP modem, while 18.33% have no connection. Users of the same connection do not all feel the same comfort, owing to their location. Only 14 respondents rated the internet connection as excellent, and 30% of the 120 respondents said the speed quality is slow and disturbing. Table 7 shows that the majority of respondents (41.54%) from domestic private commercial banks gave their opinion that the ATM service quality of their bank is very good, while the majority of respondents from state-owned commercial banks mentioned that the ATM service of their banks is not good. The same service at foreign commercial banks is excellent: according to 43.75% of respondents from foreign commercial banks, the ATM service of their bank is excellent. Table 8 shows that out of 120 respondents, 62.50% mentioned the power crisis, 35.83% stated slow internet speed, 48.33% mentioned insecurity from hijackers when withdrawing or depositing cash at ATM booths, 21.67% cited hackers' hidden cameras, 32.5% mentioned the unavailability of closed-circuit cameras inside and outside ATM booths, 49.17% mentioned the unavailability of services for 24 hours, 28.33% said there is no provision to use fingerprints and digital signatures to make the account more secure, and 27.5% feel insecure using ATM booths in Bangladesh. Hackers may install hidden cameras inside ATM booths to collect the passwords of account holders and then use them for hacking. As Table 8 shows, 21.67% of users feel insecure using ATM booths because of hackers' hidden cameras, so ATM booths should be protected against such cameras to avoid this type of crime. 9. Use Finger Print and Digital Signature: Table 8 revealed that 28.33% of users mentioned that their banks do not provide provisions to use fingerprints and digital signatures to authenticate the actual user and avoid hacking.
Conclusion
The ATM account is one of the most popular e-banking services provided by banks in Bangladesh. Almost all domestic private commercial banks and foreign commercial banks offer this service to their customers, but transactions in this system are not highly secured. In the recent past, many incidents occurred in which users suffered and the authorities did not take any responsibility. Account holders therefore feel insecure about using ATM booths, although sometimes they are bound to use them because cash is unavailable from other sources. Bangladeshi bank customers have limited knowledge of e-banking transactions, so they are not ready to accept any financial difficulties; furthermore, they will not trust e-banking services if they experience any insecurity or any kind of difficulty. Bankers therefore have to be sincere about ATM services and their security.
The outcome of nonoperative treatment for adult humeral shaft fractures using a U-shaped slab in resource-limited settings: a prospective cohort study
Background Humeral shaft fractures, constituting 3–5% of musculoskeletal injuries, are commonly managed conservatively using functional braces. However, this approach may not be feasible in resource-limited settings. This study aimed to evaluate the functional outcomes of nonoperative treatment for humeral shaft fractures in adults utilizing a U-shaped slab. Methods This prospective study was conducted from August 2021 to August 2022 involving individuals aged 16 years and older who received nonsurgical treatment for humeral shaft fractures at public tertiary hospitals in Rwanda. The assessment focused on various functional outcomes, including alignment, union rate, range of motion, return to activities of daily living, and DASH score. Results The study included 73 participants, predominantly males (73.9%), with a median age of 33 years. The union rate was high at 89.04%, and 10.96% experienced delayed union. Radial nerve palsy occurred in 4.11% of patients, but all of these patients fully recovered within three months. Despite angular deformities during healing in the majority of participants, these deformities did not significantly impact functional outcomes. According to the international classification of disabilities, 77% of participants achieved a good functional grade. Conclusion The conservative U-shaped slab method was effective at managing humeral shaft fractures. However, optimal results necessitate careful participant selection and comprehensive rehabilitation education. Implementing these measures can improve the overall success of nonoperative management.
Background
Humeral shaft fractures significantly contribute to musculoskeletal injuries and are more common in men than in women; they constitute 3-5% of all adult fractures and account for 20% of humeral fractures in the adult population [1]. The incidence of humeral shaft fractures has a dual-peaked age distribution [2,3], and with the increasing elderly population, there is concern that the incidence of these fractures could increase [4].
For managing humeral shaft fractures, a functional brace is the preferred method because it offers advantages such as early resumption of activities, favourable functional outcomes, limited complications, patient comfort, and cost-effectiveness [5,6]. Initially, a coaptation splint is used for 7 to 14 days to reduce swelling, followed by the application of a humeral functional brace. Regular radiographs are taken over three weeks to ensure proper maintenance of reduction, with subsequent imaging obtained at 3- to 4-week intervals [7][8][9].
Recent surgical techniques and implant innovations have led to an increased inclination toward immediate intervention for humeral shaft fractures [4,10]. Despite the presence of published randomized controlled trials, the question of whether surgical treatment yields superior or inferior outcomes compared to nonoperative management for humeral shaft fractures remains unresolved [11,12]. High-income countries show comparable functional outcomes and patient satisfaction between surgical and nonsurgical management of humeral midshaft fractures, while low-income countries prefer nonoperative management of humeral shaft fractures, although concerns about elbow stiffness persist [13][14][15].
Most of the available studies on functional outcomes after nonoperative management of humeral shaft fractures have involved the use of a functional brace, which is not commonly used in low-income settings. This study aimed to assess the efficacy of nonoperative treatment for humeral shaft fractures using a U-shaped slab, a common approach in low-income settings.
Study design and settings
This prospective cohort study focused on participants with humeral shaft fractures who sought consultation between August 2021 and August 2022. The study was conducted within the orthopedic unit of the Department of Surgery at the University Teaching Hospital of Kigali (CHUK) and the Rwanda Military Hospital (RMH). These hospitals, CHUK and RMH, are tertiary and referral public hospitals in Rwanda that cater to patients from across the country. Both are situated in Kigali, the capital city of Rwanda.
CHUK has a total of 519 beds for inpatients, with the surgery department occupying 125 beds, 40 (32%) of which are designated for orthopedics. The RMH, on the other hand, has a bed capacity of 500, providing healthcare services to approximately 40,000 to 50,000 patients annually, including both military personnel and civilians.
Study population and eligibility criteria
This study included individuals aged 16 years or older who had sustained humeral shaft fractures suitable for nonoperative management (less than 20 degrees of anteroposterior angulation, less than 30 degrees of varus-valgus angulation, and less than 3 cm of shortening), with only limb traction performed during splint application. Participants sought consultation at CHUK or RMH within two weeks of injury during the study period and underwent conservative treatment involving a U-shaped slab. Patients with unacceptable alignment for nonoperative treatment; those with nonunion or malunion; those with a floating elbow, pathological fractures, or a history of osteomyelitis; and those with open fractures or burns that impeded nonoperative treatment were excluded.
Study procedure
Patients with acute closed humeral shaft fractures who sought consultation at the outpatient department (OPD) or the Accident and Emergency Department and were prescribed nonoperative treatment were included in the study. Patients were enrolled after receiving initial treatment there (limb traction only during splint application; the weight of the splint itself creates a continuous, gentle pulling force on the limb, which helped maintain fracture reduction) and were followed up clinically and radiologically at six and twelve weeks. In addition, patients were encouraged to self-exercise, as pain tolerated, by moving the limb and joints as much as possible.
At the 6-week follow-up, x-ray controls were included. For patients who exhibited both clinical and radiological signs of union, the U-shaped slab was removed, and physiotherapy was prescribed two times per week for a period of six weeks. The assessment at the 6-week follow-up included evaluating radiological and clinical signs of union, indicated by the absence of bone pain and of tenderness when stressing the fracture site, as well as joint movement. At the 12-week follow-up, another x-ray was taken, the functional outcome was assessed, and patients noted to have nonunion were offered surgery.
Treatment outcomes were evaluated using specific parameters, including alignment, consolidation, complications, the International Classification of Impairments, Disabilities, and Handicaps, and the DASH score. Alignment measurements were conducted in both the coronal plane (varus and valgus) and the sagittal plane (anterior and posterior) using the Dx-view and Vision web computerized systems. These measurements were derived from both initial and final radiographs. Consolidation was clinically evaluated and characterized by the absence of bone pain, tenderness, and movement when stressing the fracture site. Radiographic union was determined by the presence of callus formation on plain x-rays. Delayed union was defined as the absence of clinical union 12 weeks after the initial trauma.
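As an aside on the alignment measurement: on a digitized radiograph, angulation reduces to the angle between the proximal and distal fragment axes. A minimal sketch of that computation with hypothetical landmark coordinates; the Dx-view and Vision web systems used in the study perform this internally, so the function below is purely illustrative:

```python
# Minimal sketch: angular deformity as the angle between two digitized shaft
# axes, each defined by two landmark points. Coordinates are hypothetical.
import numpy as np

def angulation_deg(prox_a, prox_b, dist_a, dist_b):
    """Angle (degrees) between the proximal and distal fragment axis vectors."""
    v1 = np.subtract(prox_b, prox_a)
    v2 = np.subtract(dist_b, dist_a)
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

# Two points along each fragment's axis in image coordinates (pixels)
print(angulation_deg((100, 50), (105, 150), (105, 150), (130, 245)))
```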
Limb function was assessed by evaluating pain and the return of movement at the shoulder, elbow, and hand. This assessment was graded according to the International Classification of Impairments, Disabilities, and Handicaps as follows:
• Grade I: Pain and complete limitations preventing any activities.
• Grade II: Mild pain and significant restraint, severely impeding daily activities.
• Grade III: Limitations allowing for daily activities with some challenges.
• Grade IV: Minimal restriction, no interference with daily activities, and absence of pain.
• Grade V: Unrestricted activities and absence of pain.
Data collection and analysis
In this study, patients with humeral shaft fractures from the outpatient department (OPD) or the Accident & Emergency Department were identified, and relevant information was recorded on data capture forms. A pre-elaborated questionnaire was completed at the 6th and 12th weeks of follow-up. The data were entered into EpiData and secured on the primary investigator's password-protected computer.
When collecting the data, we categorized the energy mechanism as follows:
• Low-energy mechanism: humeral shaft fractures resulting from relatively mild or minimal forces applied to the humerus. These fractures typically occur during activities or incidents with minimal impact on the arm, such as slipping, tripping, or falling from a standing position without significant external force applied to the arm.
• Moderate-energy mechanism: humeral shaft fractures resulting from forces stronger than those causing low-energy fractures but less severe than those associated with high-energy fractures. This category included incidents where patients experienced a direct blow to the arm without substantial impact, including sports injuries such as football and basketball.
• High-energy mechanism: humeral shaft fractures occurring due to extremely strong forces or significant trauma, often linked to severe accidents, falls from considerable heights, or direct blows with substantial impact.
For analysis, the data were analysed with the statistical software package SPSS version 28.0. The study findings are presented in tables and charts. To determine associations within the results, the chi-square test, binary logistic regression, and multivariable logistic regression were employed. The significance of the results was assessed by calculating the p value and odds ratio (OR).
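To illustrate this style of analysis (not the study's actual data), a multivariable logistic regression yielding odds ratios, 95% CIs, and p values can be sketched as follows; the simulated data frame merely mimics the study's predictor variables:

```python
# Minimal sketch: multivariable logistic regression reporting OR, 95% CI, and
# p values. The data are simulated and only mimic the study's variables.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 73
df = pd.DataFrame({
    "age_under_35": rng.integers(0, 2, n),
    "smoker":       rng.integers(0, 2, n),
    "transverse":   rng.integers(0, 2, n),
})
# Simulated outcome: good functional grade (1) vs. poor (0)
logit_p = 0.5 + 1.2 * df["age_under_35"] - 1.5 * df["smoker"] - 1.0 * df["transverse"]
df["good_outcome"] = (rng.random(n) < 1 / (1 + np.exp(-logit_p))).astype(int)

X = sm.add_constant(df[["age_under_35", "smoker", "transverse"]])
fit = sm.Logit(df["good_outcome"], X).fit(disp=0)

# Exponentiate coefficients and confidence bounds to obtain ORs with 95% CIs
or_table = np.exp(pd.concat([fit.params, fit.conf_int()], axis=1))
or_table.columns = ["OR", "2.5%", "97.5%"]
print(or_table.assign(p=fit.pvalues))
```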
Results
A total of 73 adult patients with humeral shaft fractures who sought consultation during the study period were enrolled, and no patients were lost to follow-up. These individuals were treated using a U-shaped slab, and their progress was monitored for three months. Subsequently, the functional outcomes were assessed.
Sociodemographic profile of the enrolled participants
The study included participants ranging from 16 to 76 years of age, with an average age of 35 years. Predominantly, the participants were male (73.97%), with a significant portion residing in Kigali city (43.8%). The male-to-female ratio was approximately 3 to 1.
Regarding the etiology of the humeral shaft fractures, the largest percentage resulted from motorcycle accidents (52.5%), followed by falls (26.03%), motor vehicle accidents (12.33%), assault (6.85%), and bicycle accidents (2.74%). Among the participants, 19.18% had hypertension, while 6.85% had diabetes mellitus. Additionally, 13.7% of the participants had a history of smoking. The median duration between the accident and consultation was two days. A majority of the patients (73.97%) presented with right-sided injuries, and 69.86% of them had injuries to their dominant limb. Regarding the location of the fractures, 75.34% were mid-shaft fractures, while 19.18% and 5.48% were proximal third and distal third shaft fractures, respectively. According to the fracture patterns, the most common type was oblique fracture (53.42%), followed by spiral fracture (28.77%), transverse fracture (9.59%), and comminuted fracture (8.22%).
Concerning the energy mechanisms causing the fractures, 49.32% were the result of high-energy forces, while 45.20% and 5.48% were caused by moderate-energy and low-energy mechanisms, respectively.
[Table: Clinical characteristics of the humeral shaft fractures managed non-operatively (variables, n, %)]
Clinical factors at the 6th and 12th weeks of follow-up
By assessing the joint range of motion and related discomfort at the 6th week of follow-up, following Stewart and Hundley's classification, we observed that eight patients (10.96%) achieved an excellent rating, 52 (71.23%) received a good rating, six (8.22%) were evaluated as fair, and seven (9.59%) were categorized as poor. At the 12th week, utilizing the International Classification of Impairments, Disabilities, and Handicaps, a significant majority of the participants (77%) achieved favourable functional grades (IV and V) for the affected limb. Consequently, the remaining 23% exhibited less favourable functional grades (II and III).
Radiological and clinical findings at 6 and 12 weeks of follow-up
At the 6th week of follow-up, 62 (84.93%) of the participants exhibited adequate callus formation coupled with a lack of pain and tenderness at the site of the fracture. Only 6.85% reported severe pain, while 9.59% experienced moderate pain. The most prevalent complication was elbow pain, affecting 97.26% of the participants, followed by elbow stiffness (defined as restriction of the range of motion at the elbow joint, typically measured in degrees of flexion and extension), noted in 86.30% of the patients. No comparison of the affected with the unaffected limb was performed or documented during the study.
During the 12th week of follow-up, the status of the callus improved in 89.04% of the participants, while 10.96% still experienced signs of delayed union of the fracture.
A total of seventy-one participants, accounting for 97.26% of the sample, successfully restarted their activities of daily living (ADL) within twelve weeks. The average duration for initiating ADLs was found to be eight weeks. Notably, none of the participants reported suffering severe pain at this juncture. The majority of participants (53.42%) indicated that they had experienced mild pain. Remarkably, complete functional restoration was observed in all three individuals diagnosed with radial nerve palsy. The calculated DASH score, median (Q1-Q3), was 14.12 (3-12).

Association between alignment at the 12th week of follow-up and functional outcome

A significant association was found between reduced antero-posterior (AP) angulation at the 12th week of follow-up and favourable functional outcomes. However, there were no statistically significant differences in the outcomes concerning varus/valgus alignment or shortening of the affected limb (p = 0.179).
[Table: Radiological and clinical findings at the 6th week of follow-up]
Factors associated with delayed fracture union
When examining the occurrence of delayed fracture union, 19.05% of patients with spiral fractures and 57.14% of patients with transverse fractures experienced this condition (p < 0.001).
In terms of medical history, 60% of patients with diabetes mellitus and 42.86% of those with chronic hypertension exhibited delayed fracture union, while considerably smaller percentages of 7.35% and 3.39%, respectively, were affected among patients without these conditions (p < 0.001). A similar pattern emerged concerning smoking history, with 60% of patients with such a history experiencing delayed fracture union, in contrast to only 3.17% of those without such a history.
True predictors of functional outcome using multivariable logistic regression analysis
All the predictors that showed statistically significant associations with functional outcomes in the binary logistic regression analysis, including patient age, fracture pattern, fracture location, smoking status, mechanism of injury, hypertension, diabetes status, and education level, were incorporated into the multivariable logistic regression analysis. After the final analysis was conducted, patient age, fracture pattern, and smoking status were identified as the most accurate independent predictors of functional outcomes among patients with humeral shaft fractures treated nonoperatively with a U-shaped slab. Participants younger than 35 years were more likely to attain good functional outcomes than those aged 35 to 65 years (OR = 5.15; 95% CI = 1.27-20.83; p = 0.021). Conversely, all four patients aged older than 65 years exhibited poor functional outcomes by the 12th week of follow-up.
Regarding the fracture patterns, participants with oblique fractures exhibited a significantly greater probability of achieving good functional outcomes by the 12th week of follow-up than those with transverse fractures (OR = 21.8; 95% CI = 3.18-152.0; p = 0.002). Similarly, participants with spiral fractures tended to have better functional outcomes than those with transverse fractures, although the difference was not significant (OR = 6.25; 95% CI = 0.94-41.52; p = 0.058).
Patients with no history of smoking were significantly more likely to achieve good functional outcomes than those with a smoking history (OR = 12.36; 95% CI = 2.73-56.08; p = 0.001).
Discussion
The functional outcomes of humeral shaft fractures treated with a U-shaped slab can vary depending on various factors, including specific fracture characteristics, patient age, overall health, and compliance with rehabilitation and follow-up protocols. Management of humeral shaft fractures with a coaptation U-shaped slab can provide good to excellent outcomes, as revealed by different researchers [16,17].
In our study, we examined seventy-three participants with humeral shaft fractures (HSFs) whose ages ranged from 16 to 76 years (median age: 33 years). Males constituted the majority of the injured individuals. Among the participants, thirty-seven were younger than 35 years of age, while 32 fell within the 35-65 years age range. Notably, more than half of the participants had sustained their fractures due to motorcycle accidents, followed by falls from heights.
The age group younger than 65 years represents an active demographic group that is predominantly composed of males engaged in various physically demanding occupations. Additionally, given the prevalent use of motorcycles as a primary mode of transportation in our setting, it is unsurprising that motorcycle accidents contribute significantly to the occurrence of injuries in this group. This could explain the higher incidence of injuries among males and the dominant presence of individuals under 65 years old.
Our findings align with a study by Oboirien M. that focused on the management of humeral fractures in a resource-poor region in northwestern Nigeria. Oboirien's study similarly highlighted the prominence of road traffic accidents, particularly motorcycle accidents, as the leading cause of humeral fractures. This parallel strengthens the consistency and relevance of our observations [18].
Our study revealed interesting patterns concerning the energy mechanisms responsible for humeral shaft fractures (HSFs) across different age groups. We found that more than half of the patients younger than 35 years sustained their fractures through high-energy mechanisms. In the 35- to 65-year-old age group, the majority of patients required moderate to high energy to fracture their humerus. Conversely, most patients aged older than 65 years experienced humeral fractures due to low-energy mechanisms.
Our findings are consistent with the findings of previous studies by G. Tytherleigh-Strong et al., Eben A. Carroll et al., and Nicolas Gallusser. These studies also revealed a bimodal age distribution pattern for HSFs. The first peak occurred in the third decade among men and was characterized by high-energy mechanisms, while the second peak was observed in women during the sixth to seventh decade and was typically caused by low-energy mechanisms [8,19,20].
Notably, we observed that most participants with transverse fracture patterns developed delayed union, with half of all participants with transverse fractures experiencing this outcome. This observation echoes the results of L. Klenerman, who reported a strong association between delayed union and transverse mid-shaft humeral fractures, with three out of five delayed union cases involving transverse fractures [21].
In our study, eleven patients showed no signs of union during the 6th week of follow-up. Eight of these (72.73%) continued to exhibit a lack of clinical and radiological improvement in union signs during the 12th week of follow-up. At the 6th week of follow-up, the splint was readjusted in those who exhibited a lack of improvement, resulting in improved angulation as the splint's weight continued to aid fracture reduction effectively. These findings are consistent with the results of Sargeant et al., who suggested that the absence of clinical and radiological signs of union at the 6th week of follow-up can predict delayed union [24].
The angular deformities of humeral shaft fractures gradually improved throughout treatment with the U-shaped slab. Upon admission, the median anterior/posterior angulation was 10.50°, while the median varus/valgus angulation was 8.17°. By the 12th week of follow-up, these angles had improved, with the median anterior/posterior angulation decreasing to 6.2° and the median varus/valgus angulation to 4.38°. Although most participants exhibited some residual angular deformity even after healing, these deformities did not have a substantial impact on the final functional outcome. Our findings are consistent with similar research by H. Majeed et al., L. Klenerman, and Abdul Rehman et al., who also demonstrated that despite residual angular deformities, overall functional outcomes were not adversely affected [16,21,22].
The incidence of radial nerve palsy in our study was 4.11%. Notably, patients with primary radial nerve palsy recovered spontaneously by the 12th week of follow-up, and no patients developed radial nerve palsy after receiving treatment. Our study's results were more favourable than those of a systematic review conducted by Y. C. Shao et al. on radial nerve palsy associated with humeral shaft fractures. In that review, the prevalence of radial nerve palsy across 21 papers was 11.8% (532 palsy cases among 4517 fractures). Additionally, most palsies (70.7%) recovered spontaneously in patients treated conservatively. The lower incidence of radial nerve palsy in our study can be attributed to the smaller sample size, as the systematic review included a significantly larger number of participants [25].
In line with our findings, Hunter also reported that 8.5% of patients with radial nerve palsy achieved spontaneous recovery by the 12th week. The positive recovery trend observed in both studies highlights the potential for favourable outcomes in patients with radial nerve palsy associated with humeral shaft fractures [23].
In a study conducted by Abdul Rehman et al. involving 100 patients with humeral shaft fractures (HSFs), similar criteria were employed. Their results indicated that 60% of patients achieved an excellent outcome, 27% obtained a good outcome, 11% had a fair outcome, and 2% experienced a poor outcome. Notably, Abdul Rehman's study evaluated outcomes at the 16th week of follow-up, in contrast to our investigation, which assessed outcomes at the 6th week, immediately after U-slab removal. Interestingly, a parallel comparison between the two studies revealed a consistent trend: both studies reported that a substantial proportion of patients achieved excellent to good outcomes (87% in Abdul Rehman's study and 81.19% in ours).
We did not find a discernible association between sex and functional outcomes. However, several factors, including patient age, fracture pattern, fracture location, smoking habits, and adherence to rehabilitation protocols, emerged as statistically significant predictors of patient outcomes.
Among our participants, 10 were smokers and 63 were nonsmokers. Within the smoker group, a majority experienced delayed union and exhibited poor functional grades. Moreover, all patients who did not adhere to the rehabilitation instructions demonstrated poor functional outcomes. In contrast, only 5% of those who followed the rehabilitation instructions experienced poor functional outcomes. Our rehabilitation protocol consisted of self-exercise as tolerated by pain, moving the limb and joints as much as possible, and physiotherapy of the affected limb twice per week for a period of six weeks.
Our findings are consistent with those of E. Shields et al. (2015), who demonstrated that patient age, psychiatric history, insurance type, fracture location, and Charlson comorbidity index score had substantial influences on patient-reported functional outcomes after treatment of humeral shaft fractures [26].
Conclusion
We analysed 73 adult patients with closed humeral shaft fractures (HSFs) managed nonoperatively using a U-shaped slab. The median age of the participants was 33 years, with males comprising the majority and a male-to-female ratio of 3:1. The right side was more commonly affected, and the dominant limb was frequently injured. The majority of patients had mid-shaft fractures, most of which were oblique. Notably, in patients older than 65 years, HSF was more commonly associated with low-energy mechanisms, while in younger age groups, moderate-to-high-energy mechanisms were the predominant cause.
We achieved an excellent rate of union with favourable functional outcomes. Delayed union was primarily linked to transverse fracture patterns and a history of smoking. The rate of radial nerve palsy was low, with all patients classified as having primary radial nerve palsy exhibiting neuropraxia. These patients recovered spontaneously within three months, and no patient developed radial nerve palsy after receiving treatment.
Despite some participants healing with residual angular deformities, these deformities did not significantly impact the overall functional outcome. The median DASH score was 14.12, reflecting favourable limb function, and the functional grades ranged from excellent to good. In our analysis, we did not find any association between sex and functional outcome. However, we identified several predictors of functional outcome, including patient age, fracture pattern, history of smoking, and the use of physiotherapy.
Proper patient selection by an orthopaedic surgeon for certain patient groups (transverse fracture, smoking history, and older age) and careful education concerning rehabilitation may offer additional value in improving functional outcomes for patients with humeral shaft fractures managed nonoperatively using a U-shaped slab.
Limitations of the study
This study has limitations, such as a short follow-up period and a small sample size, which may affect the accuracy of its findings. A longer follow-up and a larger sample size would provide more comprehensive insights into functional outcomes in patients with humeral shaft fractures, enhancing the validity of the results.
Table: Functional grading at the 6th week of follow-up by Stewart and Hundley's criteria; functional grade of the affected limb at the 12th week of follow-up; radiological and clinical findings at the 12th week of follow-up. Presence of adequate callus (bridging both bony cortices): yes, 62 (84.93%); no, 11 (15.07%). Absence of pain and tenderness at the fracture site. Activities of daily living: "When did you start activities of daily living (such as washing, cooking, housecleaning, and cleaning yourself)?"
Simple Moving Voltage Average Incremental Conductance MPPT Technique with Direct Control Method under Nonuniform Solar Irradiance Conditions
A new simple moving voltage average (SMVA) technique with a fixed-step direct control incremental conductance method is introduced to reduce solar photovoltaic voltage (VPV) oscillation under nonuniform solar irradiation conditions. To evaluate and validate the performance of the proposed SMVA method in comparison with the conventional fixed-step direct control incremental conductance method under extreme conditions, different scenarios were simulated. Simulation results show that in most cases SMVA gives better results with more stability than traditional fixed-step direct control INC, with a faster tracking system, reduced sustained oscillations, fast steady-state response, and robustness. The steady-state oscillations are almost eliminated because of the extremely small |dP/dV| around maximum power (MP), which verifies that the proposed method is suitable for standalone PV systems under extreme weather conditions, not only in terms of bus voltage stability but also in overall system efficiency.
Introduction
Penetration of solar photovoltaic (PV) power in centralized or decentralized generation systems has been evolving at a rapid pace in recent years and is considered one of the most promising power generation options among all renewable energy sources (RES) for sustainable energy development. But due to the intermittency of environmental conditions and the nonuniform nature of solar irradiance, it produces significant fluctuation in solar power generation. This is because solar irradiance is not highly correlated even between close locations at very short timescales, which is one of the important factors in solar power generation output variability. Studies have shown that increased geographical diversity in a solar PV generation system leads to a decrease in output power efficiency and sometimes generates hotspots, which cause damage to the solar cells [1]. So far, power system operators accommodate solar and wind power variability through storage reserves to stabilize the power output levels [2]. Technically, there are two ways to improve the efficiency of PV power generation: either to develop low-cost, high-efficiency solar conversion materials or to operate the PV system at the maximum power point (MPP) for optimal output power. Because of the high cost of solar cells, it is necessary to operate the PV array at the maximum operating point. Therefore, maximum power point tracking (MPPT) is considered an essential part of a PV generation system and is one of the key issues for researchers seeking to reduce the effects of the nonlinear characteristics of the PV array [3].
So far, different MPPT algorithms have been proposed for optimization of PV output power, such as perturb & observe (P&O) [4][5][6], incremental conductance (INC) [7,8], hill climbing [9,10], neural network, fuzzy logic theory, and genetic algorithm [11][12][13]. However, it has been observed that most MPPT methods are developed by assuming that solar irradiance is applied uniformly to the entire PV array. Unfortunately, the nonlinearity of solar irradiation directly affects the PV characteristic because of multiple local maxima (the mismatching problem), which can be exhibited on the current-voltage (I-V) and power-voltage (P-V) curves of a solar PV array if the entire array does not receive uniform solar irradiation.
Although some researchers have worked on partially shaded conditions (PSC) and fast-changing solar irradiance MPPT [14][15][16][17], in [14] a two-stage MPPT with instant online V_oc and I_sc measurement was proposed. This MPPT is very simple to implement, but an additional circuit is required to track the real maximum power point (RMPP) under nonuniform insolation conditions. A novel algorithm to track the global power peak (GPP) under partially shaded conditions, based on several critical observations in conjunction with a DC-DC converter to accelerate the tracking speed with feed-forward control, is proposed in [15]. A fast MPPT process with DC-DC converter duty cycle modulation under partial shading conditions and load variations, using a modified incremental conductance (INC) algorithm to track the GMPP, is proposed in [16], and a modified incremental conductance algorithm under fast-changing solar irradiance, which reduces oscillation in solar module power at zero level and mitigates the inaccurate response, is discussed in [17].
Among all the aforementioned MPPT algorithms, incremental conductance (INC) and perturb & observe (P&O) are commonly used for small- and large-scale PV power plants, because both algorithms operate in accordance with the power-voltage (P-V) curve of the PV module and tune the duty cycle of the converter to reach the next MPP accordingly. In P&O, steady-state oscillation occurs because the perturbation continuously changes in both directions to maintain the MPP under rapidly changing solar irradiance, which makes the system less efficient and causes more power losses [6,18]. However, the conventional incremental conductance method determines the slope of the P-V curve by varying the converter duty cycle in fixed or variable step sizes until the MPP is achieved; in this way oscillation under rapidly changing solar irradiance is reduced with greater efficiency, but due to the complicated algorithm the speed is slow.
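To make the P&O behavior described above concrete, the following is a minimal sketch of one perturb & observe iteration; it is not the paper's code, and the function name and parameters are illustrative. The sign convention assumes a boost converter, where raising the duty cycle lowers the PV operating voltage.

```python
def po_mppt_step(p, v, p_prev, v_prev, duty, delta_d=0.001):
    """One perturb & observe iteration (illustrative sketch).

    If the last perturbation increased power, keep perturbing in the same
    direction; otherwise reverse -- this direction-flipping is exactly what
    causes the steady-state oscillation around the MPP described above.
    """
    if (p - p_prev) * (v - v_prev) > 0:
        duty -= delta_d   # power rose with voltage: keep raising V_PV (boost)
    else:
        duty += delta_d   # otherwise move the operating voltage the other way
    return min(max(duty, 0.0), 1.0)   # clamp duty cycle to [0, 1]
```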
As discussed, nonlinear solar irradiance produces significant fluctuation in PV output voltage (V_PV). So far, no considerable work has been done to minimize the fluctuations of the PV terminal voltage of the MPPT controller, which is directly related to optimizing efficiency and reducing the MPP tracking time. In this paper, a direct control incremental conductance method with a simple moving voltage average (SMVA) technique is proposed. Using SMVA, we examined the variability of V_PV among different PV array configurations with nonuniform solar irradiance to aggregate the plant output at varying timescales. The simulation of the proposed model is performed in MATLAB/Simulink, and results are compared with the conventional fixed-step direct control incremental conductance method. The comparison results reveal that the proposed SMVA method provides better output by eliminating the steady-state oscillations, with greater output that is fast and accurate in response to variation of solar irradiation.
In Section 2 of this paper, an overview of the conventional and fixed-step direct control INC methods is given, and the proposed SMVA technique is discussed in Section 3. Section 4 presents the case example, results are discussed in Section 5, and finally the conclusion is drawn in Section 6.
Direct Control Incremental Conductance MPPT Method
The traditional conventional incremental conductance (INC) method is based on two independent control loops, as shown in Figure 1(a). The first loop uses the incremental and instantaneous conductance to generate the error signal, and the second is a closed loop with a proportional-integral (PI) controller to drive the error to zero at the MPP according to (1):

(1) dI/dV = -I/V (at the MPP)

But in practical implementation of INC under nonuniform solar irradiance, the slope of the P-V characteristic curves dP/dV ≠ 0 at the MPP. Therefore, a direct control incremental conductance method is proposed in [19][20][21] to simplify the control circuit, in which the second loop, the proportional-integral (PI) controller, is eliminated, as shown in Figure 1(b); the duty cycle is adjusted directly in the algorithm and, to compensate for the PI controller's error detection function, a small marginal error of 0.002 is tuned in the code. Rewriting (1) for direct control INC with a fixed duty cycle and a small marginal error, the new equation becomes (2):

(2) |dI/dV + I/V| ≤ ε_IC

where ε_IC is the allowed error between the incremental and instantaneous conductance. The error (ε_IC) is set on a constant basis or by following a trial-and-error procedure [22]. But it has been observed that a large marginal error provides faster convergence to the MPP but produces unnecessary steady-state oscillations, whereas a small marginal error produces less steady-state oscillation with slow convergence, which tends to decrease the efficiency of the system [19].
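The following is a minimal sketch of one fixed-step direct-control INC iteration using the marginal error of (2); it is not the authors' MATLAB code, and all names (inc_mppt_step, delta_d, eps_ic) are illustrative. As in the P&O sketch earlier, the sign convention assumes a boost converter, where raising the duty cycle lowers the PV operating voltage.

```python
def inc_mppt_step(v, i, v_prev, i_prev, duty, delta_d=0.001, eps_ic=0.002):
    """One fixed-step direct-control INC iteration (illustrative sketch).

    v, i           -- present PV voltage and current samples
    v_prev, i_prev -- samples from the previous iteration
    duty           -- present duty cycle of the DC-DC converter
    delta_d        -- fixed duty-cycle step (0.001 in the base case here)
    eps_ic         -- marginal error replacing the PI loop (0.002 in the text)
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        # Voltage unchanged: decide from the current change alone.
        if di > 0:
            duty -= delta_d            # move to a higher operating voltage
        elif di < 0:
            duty += delta_d
    else:
        err = di / dv + i / v          # incremental + instantaneous conductance
        if abs(err) > eps_ic:          # outside the marginal error band of (2)
            if err > 0:                # dP/dV > 0: left of the MPP
                duty -= delta_d        # raise V_PV (boost converter convention)
            else:                      # dP/dV < 0: right of the MPP
                duty += delta_d
    return min(max(duty, 0.0), 1.0)    # clamp duty cycle to [0, 1]
```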
Proposed Method
In this paper, a simple moving voltage average (SMVA) technique is proposed for removing oscillatory effects such as ripple in the solar PV generator voltage V_PV under nonuniform solar irradiance. The proposed technique is inspired by the advantages of the practical simple moving average (SMA) model, which is frequently used in financial markets to form a trend-following indicator by reducing price fluctuations. Herein, SMVA does not predict price direction but rather defines the voltage direction with a lag, because it is based on solar irradiation, computing and averaging the irradiation signal in a time series analysis. The moving average is a simple low-pass FIR (Finite Impulse Response) filter commonly used for smoothing an array of sampled data/signals; SMA has been used effectively by different scholars [22,23] in engineering for reducing noise in random samples while retaining a sharp step response and for computing monitored values to predict future data.
Although several other soft computing methods have been developed, as can be found in the works of Stevenson and Porter [24], Hansun and Subanar [25][26][27], and Popoola et al. [28,29], the moving average method is still considered the best by many researchers due to its simplicity, objectivity, reliability, and usefulness.
Therefore, the SMA technique is adopted for reducing the oscillatory effect of V_PV under nonuniform solar irradiation conditions. The proposed simple moving voltage average (SMVA) model is developed by following (4), in conjunction with fixed-step direct control incremental conductance MPPT as shown in Figure 2, where x(n) and y(n) are the input and output signals of the SMVA, respectively, and N is the size of the moving average window, which holds the number of samples of the input signal as per the defined limit and operates by averaging the number of points from the input signal to produce each point in the output signal [30]:

(4) y(n) = (1/N) Σ_{i=0}^{N-1} x(n - i).
In technical analysis, the number of sample points is stochastic; it depends on the nonuniformity of solar irradiance one is concentrating on. One characteristic of the SMVA is that if the data have an intermittent fluctuation, then applying an SMVA of that period will eliminate that variation (with the average always containing one complete cycle). If 20 measurements, x_1 through x_20, are available, the successive 5-period simple moving averages, for example, are

SMVA_5 = (x_1 + x_2 + x_3 + x_4 + x_5)/5, SMVA_6 = (x_2 + x_3 + x_4 + x_5 + x_6)/5, and so on.

Technically, it is not possible to compute a 5-period moving average until 5 periods' data are available; that is why the first moving average in the above example starts with SMVA_5. In Figure 5, an output signal of SMA is given where a fluctuating (noisy) signal is smoothed by following (6), with 10 and 20 data points; it can be observed that as the filter length (the parameter N) increases, the smoothness of the output increases.
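As an illustration of Eq. (4) and the 5-period example above, the following is a minimal moving-window sketch, not the authors' Simulink implementation; the class name SMVA, the sample values, and the window size are illustrative.

```python
from collections import deque

class SMVA:
    """N-point simple moving voltage average: y(n) = (1/N) * sum of last N x."""

    def __init__(self, n):
        self.window = deque(maxlen=n)   # holds the most recent n samples

    def update(self, v_pv):
        """Feed one V_PV sample; return the smoothed value for the MPPT stage."""
        self.window.append(v_pv)
        return sum(self.window) / len(self.window)

# Echoing the 5-period example: the first full average appears at the 5th sample.
smva = SMVA(n=5)
samples = [400.0, 392.0, 407.0, 398.0, 404.0, 395.0]   # a fluctuating V_PV trace
smoothed = [smva.update(x) for x in samples]
# smoothed[4] is SMVA_5 = (x1 + ... + x5) / 5; smoothed[5] is SMVA_6, and so on.
```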
Case Example
The rooftop centralized PV system installed at Zhejiang University, Yuquan Campus, College of Electrical Engineering Building, is taken as an example, as shown in Figure 6. The installed PV system is regarded as a good one because the modules are identical and face the sun at the same angle and direction; the irradiation of these panels is therefore supposed to be uniform. However, one should notice that the chiller, water storage tank, and weather data collection unit cause shading effects on adjacent panels, as can be seen in the red circles. With reference to the ZJU rooftop PV system, a 3 kW system was designed using a 37-watt PV module to quantify the analysis; PV panel specifications are shown in Table 1.
An approximation is made such that the PV panel peak power reduction rate is directly proportional to the shading effects. Therefore, irradiance is estimated as (1) normal PV panel: 1000 W/m²; and (2) shaded PV panel: between 768.36 W/m² and 426.96 W/m², as shown in Figures 7(a) and 7(b).
Results and Discussions
To validate the performance of the proposed simple moving voltage average (SMVA) technique under nonuniform solar irradiation, a MATLAB/Simulink model was developed as shown in Figure 9, consisting of a 3 kW PV array, a DC-DC boost converter (its component values are given in Table 2), and a fixed-step direct control incremental conductance MPPT controller with the SMVA technique.
In the first step of the simulation, nonuniform solar irradiation is applied to the PV array: the irradiation was set to 800 W/m² at t = 0.0 s, decreased to 600 W/m² at 0.02 s, increased back to 1000 W/m² at t = 0.04 s, and finally decreased from 800 W/m² to 600 W/m² from 0.06 s to 0.1 s, with a constant temperature of 25 °C. In traditional PV systems, the photovoltaic voltage V_PV is given directly as an input to the MPPT controller, but in the proposed model V_PV is given as an input to the SMVA module, as depicted in Figure 9, and the output of the SMVA is given as an input to the MPPT controller.
To investigate and validate the efficacy of the SMVA model, buffer sizes (numbers of sample points) of N = 10 and N = 30 with a change in duty cycle ΔD = 0.001 are applied; simulation results are presented in Figure 10. Figure 10(a) shows the output voltage (V_PV) of the PV array, which is given as an input to the SMVA module to perform voltage smoothing and reduce the fluctuation using a span of data points following (6) with the buffer sizes N = 10 and N = 30. Scenarios were simulated with changes in duty cycle ΔD = 0.001, 0.005, and 0.01, with the SMVA buffer size adjusted to N = 10. Results are shown in Figures 12 and 13, where the magenta color represents the simple fixed-step direct control INC's output voltage and power and the blue lines are the proposed SMVA outputs. The results show that the performance of the proposed technique is much better than that of the fixed-step direct control incremental conductance method at different duty cycle step changes ΔD. It can easily be observed that the output voltage and power of the proposed technique give greater efficiency with more stability at all the different step-size changes compared to direct control INC. As can be seen in Figures 12(a), 12(b), and 12(c), as ΔD increases from 0.001 to 0.01, INC's output voltage decreases from the range of 385-407 volts to 374-397 volts, with the upper limit decreasing by 10 volts and the lower limit by 11 volts, while the SMVA output voltage remains higher than INC's, moving from the range of 387-410 volts to 385-405 volts, with the upper limit changing by 5 volts and the lower limit by 2 volts. In the same way, in Figures 13(a), 13(b), and 13(c), the output power comparison between INC and the proposed SMVA method can be observed: at ΔD = 0.001, INC's output power is between 2775 and 3100 watts, while at the same duty cycle the SMVA output power is 2825-3150 watts; at ΔD = 0.01, INC's output power is 2625-2955 watts, whereas the SMVA output power is 2750-3055 watts.
Figures 12 and 13 show that the proposed simple moving voltage average (SMVA) technique with the direct control incremental conductance MPPT method can efficiently deal with the tradeoff between dynamic response speed and steady-state accuracy. The steady-state oscillations are almost eliminated because of the extremely small |dP/dV| around the MPP; as shown in Figure 10(f), the ripple voltage is less than 1.0 volt. The dynamic performance is obviously better than that with fixed-step direct control INC.
Furthermore, Tables 3(a) and 3(b) summarize the measured output voltage and power of INC and the proposed SMVA method with different buffer sizes N = 10, 30, and 50 and changes in duty cycle ΔD = 0.001, 0.003, 0.005, and 0.01; in order to verify the repeatability of the results, the same tests were carried out at three different irradiance levels. It can be seen that at ΔD = 0.001, INC and SMVA perform extremely closely in most cases because of the small change in duty cycle. As reported in [31,32], a smaller ΔD reduces the steady-state losses caused by the oscillation of the PV operating point around the MPP, but it makes the algorithm slower and less efficient in the case of rapid changes in solar irradiation, while a larger step size contributes to faster dynamics but excessive steady-state oscillations, resulting in comparatively low efficiency. This can easily be seen in Figures 12 and 13 and Tables 3(a) and 3(b): as the change in duty cycle increases from ΔD = 0.001 to 0.01, INC's output voltage and power decrease and the fluctuation increases. From the above study, it is observed that in most cases SMVA gives better results with more stability than traditional fixed-step direct control INC, with a faster tracking system under extreme weather conditions and a reduction in sustained oscillations, which verifies that the proposed method is suitable for standalone PV systems under extreme weather conditions, not only in terms of bus voltage stability but also in overall system efficiency.
Conclusion
In this paper, a simple moving voltage average (SMVA) technique with a fixed-step direct control incremental conductance method was employed. Simulation results show that the proposed technique is able to reduce V_PV oscillations, thereby reducing the power losses faced by the conventional INC algorithm under nonuniform solar irradiation. This method is also able to improve not only the steady and dynamic state but also the design efficiency of the system. In conclusion, the proposed method performs accurately and
Figure 1: Block diagrams of INC MPPT algorithm implementation techniques; (a) reference voltage control; (b) direct duty ratio control.
A certain size of SMVA moving-window block diagram is shown in Figures 3(a) and 3(b), where the window moves along the array compiled from the input signal, one element at a time, and the average of all elements in the current window is the output of the SMVA. When calculating successive values, a new value comes into the sum and an old value drops out, replacing each data point with the average of the neighboring data points defined within the span. The flow chart of the proposed SMVA model in conjunction with INC is depicted in Figure 4.
According to Figures 7(a) and 7(b), arrays A, B, and C with different irradiance P-V characteristic curves are drawn in Figure 8, in which multiple maximum power points due to irradiance mismatch are observed.
Figure 4: Flow chart of the proposed simple moving voltage average (SMVA) model with direct control incremental conductance.
Figure 5: Example of a moving average filter. In (a), the original noisy signal; in (b) and (c), this signal is filtered with 10- and 20-point moving average buffer sizes.
Table 1: Electrical characteristic data of the Solkar 36 W PV module.
Table 2: Boost converter component values.
Never Too Late: A Case Report on Transcatheter Aortic Valve Implantation in a 97-Year-Old Patient
Aortic valve stenosis is a well-recognized valvular problem in the aging population. Transcatheter aortic valve implantation (TAVI) is becoming an increasingly popular treatment alternative to surgical aortic valve replacement for frail elderly individuals with symptomatic severe aortic valve stenosis. There are multiple research reports documenting the effectiveness of TAVI in octogenarians; however, few authors discuss the success of this procedure in nonagenarians. This case report depicts the successful transfemoral implantation of a prosthetic aortic valve in a 97-year-old man. Moreover, the current literature on TAVI outcomes in nonagenarians is reviewed.
Introduction
Stenosis of the aortic valve is a common type of valvular heart disease in the elderly due to degenerative changes of the valve [1]. The prevalence of aortic stenosis in North Americans older than 75 years is 2.7 million [2]. Of these, 540,000 individuals will develop severe symptomatic aortic stenosis [3], which manifests as chest pain, syncope with exertion, and congestive heart failure [4]. This drastically reduces quality of life and increases mortality risk. Although surgical aortic valve replacement (SAVR) has been the standard treatment for severe symptomatic aortic stenosis, this approach is controversial in the elderly population, as peri-operative death rates approach 10% in those beyond 90 years of age [3]. This is concerning, as the United States' population is comprised of 1.9 million nonagenarians who live with various disabilities, including aortic valve stenosis [5]. Furthermore, the nonagenarian population is expected to quadruple by the year 2050 [5].
Transcatheter aortic valve implantation (TAVI) has been a feasible alternative to SAVR in those with symptomatic severe aortic stenosis and high mortality risk [3]. Alain Cribier pioneered this treatment approach in France in 2002 [6]. Based on two multicenter North-American trials-namely, PARTNER 1 [7] and PARTNER 2 [8]-which showed comparable results in those undergoing TAVI versus SAVR with regard to symptomatic improvement and all-cause mortality, the procedure was later approved by the United States' Food and Drug Administration in 2011 [6]. According to the 2017 focused update guideline of the American Heart Association and the American College of Cardiology, TAVI is a class I recommendation for severe symptomatic aortic stenosis in patients who are at high risk for surgical valve replacement [9]. While TAVI has frequently been performed in octogenarians, there is limited literature on its effectiveness in nonagenarians. The author of this article will report on the successful outcome of TAVI in a 97-year-old North-American man. Furthermore, a review of the literature on the therapeutic effectiveness of this procedure in the nonagenarian patient population will be included.
Case Report
In June 2016, a 97-year-old man presented to the cardiology clinic with a feeling of impending doom and symptoms of heart failure New York Heart Association class III (dyspnea with minimal exertion, peripheral edema, and fatigue) after recently being treated in the emergency department for similar symptoms with intravenous diuretics. The patient had a long-standing history of asymptomatic severe aortic stenosis and had been highly functional until that day. Three years prior, he was denied SAVR due to being considered a high surgical risk. A 2D echocardiogram revealed a trileaflet aortic valve with a valve area of 0.5 cm² (normal is 3-4 cm²) and a mean transvalvular gradient of 48 mmHg (normal is <5 mm Hg), which indicated severe aortic valve stenosis. Additional co-morbidities consisted of moderate tricuspid regurgitation, hypertension, chronic obstructive pulmonary disease (COPD), chronic renal disease stage III, gastrointestinal hemorrhage in 2013, and adenocarcinoma of the prostate that was treated in 1991 with radiation and adjuvant hormone therapy. On assessment, his blood pressure was 143/70 mm Hg, heart rate was 50 beats per minute, respiration rate was 14 breaths per minute, and he was afebrile. Auscultation of the heart revealed the classic murmur of aortic valve stenosis, a loud ejection murmur over the aortic area radiating to the carotid arteries. He had bilateral lower extremity edema, +2, and non-pitting.
Preoperative Evaluation for TAVI
The patient was admitted to the hospital emergently. His pre-operative risk assessment for 30-day mortality-the Society of Thoracic Surgeons (STS) score-was elevated at 14.4% [10], and he was thus evaluated for TAVI. Multiple tests were performed to assess the feasibility of the procedure. CT angiograms of the thorax, abdomen, and pelvis were implemented to investigate for abnormalities of the vasculature that would prohibit a transfemoral approach for TAVI. Considering that stroke is a common complication of this procedure [6], a carotid ultrasound was performed to evaluate for carotid atherosclerosis. Two cardiothoracic surgeons examined the patient and declared that he would be at high mortality risk to have SAVR, and thus they recommended TAVI. Cardiac catheterization was performed to evaluate for coronary artery disease and to obtain hemodynamic measurements.
Performance of TAVI
Under general anesthesia, the right and left femoral arteries were each accessed with 6-french sheaths. A temporary pacemaker was placed in the right ventricle through an 8-french sheath in the right femoral vein. Balloon valvuloplasty was performed by advancing a balloon via the right femoral artery sheath, and during rapid ventricular pacing at 160 beats per minute, inflating it across the aortic valve to clear the stenosis and to deploy the 26-mm SAPIEN S3 bioprosthetic aortic valve ( Figure 1), which expanded within the native aortic valve ( Figure 2). The purpose of rapid ventricular pacing during TAVI is to reduce cardiac output, which facilitates balloon inflation across the valve and placement of the bioprosthetic aortic valve. The mean valvular gradient after TAVI decreased to 1.9 mm Hg (normal is <5 mm Hg). There were no intraoperative complications. The patient was extubated and transferred to the coronary care unit with the temporary transvenous pacemaker, which was removed two days later.
Postoperative Course
A 2D echocardiogram performed on the first postoperative day showed that the prosthetic aortic valve was well seated without any regurgitation. A 12-lead electrocardiogram revealed new onset paroxysmal atrial fibrillation with slow ventricular response (his heart rate was in the range of 50 beats per minute). Anticoagulation treatment for the prevention of thromboembolic events was initiated with Apixaban 2.5 mg BID. The lower dose of Apixaban was selected because he was older than 80 years and his serum Creatinine level was above 1.5 mg/dL [11]. In addition, Clopidogrel 75 mg daily was started to prevent stenosis of the bioprosthetic valve. The patient was discharged home three days post procedure.
Follow-up Visits
One month later, during a follow-up appointment with the primary care provider, the patient was found to be severely bradycardic and became unresponsive for a few minutes. He regained consciousness without any resuscitative efforts and was taken emergently to the hospital. An inpatient limited 2D echocardiogram showed normal systolic function with an ejection fraction of 55-60%. Unfortunately, nothing was reported on the function of the bioprosthetic aortic valve. The patient remained asymptomatic during the hospitalization and was discharged home the next day. A review of the patient's home medications revealed that he was taking the negative chronotropic medication metoprolol succinate, which may have precipitated his syncopal episode. He was instructed to stop this medication.
During the six-month follow-up visit, the patient reported continued symptomatic improvement. He had mild peripheral edema. Dyspnea occurred with more significant exertion; thus, NYHA functional class II. He remained off metoprolol as instructed, and despite being bradycardic with a heart rate of 55 beats per minute, he did not experience any further episodes of dizziness. A limited 2D echocardiogram revealed that the bioprosthetic valve was well seated without any paravalvular leak. The ejection fraction was 65% and he had mild diastolic dysfunction. The patient was told to stop clopidogrel (as he had completed the standard six-month treatment), and to continue antiplatelet therapy with Aspirin 81 mg daily indefinitely.
Literature Review
There were nine articles in the literature that addressed TAVI in nonagenarians, which included four case reports and thirteen research studies. Most participants in the studies were women. Patient selection for TAVI was generally based on operative mortality risk, which was calculated using the Society of Thoracic Surgeons (STS) score or the European System for Cardiac Operative Risk Evaluation (EuroSCORE); however, a few authors mentioned the importance of also using determinants of quality of life such as physical functioning status, social support, and cognitive status [12][13][14][15]. The Duke Activity Status Index was mentioned in two TAVI studies for the evaluation of physical functioning [12,13]. The Kansas City Cardiomyopathy Questionnaire (KCCQ-12) was another instrument used to evaluate the impact of cardiac symptoms on quality of life [15].
The author of the earliest case study documented the performance of TAVI via the transapical approach in a 96-year-old woman, with positive outcomes at one-month follow-up [11]. Jabs et al.'s case report illustrated the success of TAVI in a 99-year-old patient with multiple comorbidities, whose valve function and physical functioning were excellent at five-year follow-up [12]. Similar findings were noted in Kneitz et al.'s case study of a 95-year-old woman [16]. The oldest patient known from the literature to have had TAVI was a 102-year-old woman [13]. Approximately four years later, transthoracic echocardiography revealed good functioning of her bioprosthetic aortic valve with mild paravalvular aortic regurgitation. In addition, she was able to perform activities of daily living without any assistance [13].
Mortality after Hospital Discharge
There were great variations in the death rates post TAVI, which were likely impacted by the sample sizes of the research articles. The 30-day all-cause mortality rates ranged from 0% to 27% [15,17-19,21,23-27], while the one-year mortality rates were between 10 and 32% [15,17-19,21,23]. However, survival at 5-year follow-up was 30.4% [25]. Most comparison studies revealed that mortality rates at one year were similar in nonagenarians versus the younger cohorts [19-21,24]. However, Scholtz et al. found that nonagenarians had significantly higher death rates at 30 days and one-year follow-up [24]. Despite enrolling a healthier nonagenarian population, Escarcega et al. reported that death rates at 30 days were higher than in the octogenarian cohort [21]. Moreover, death rates at six months post TAVI were noted to be double in Yamamoto et al.'s nonagenarian patient population as compared to their patient group younger than 90 years [18]. This was attributed to extensive cardiovascular comorbidities such as moderate to severe mitral regurgitation and NYHA functional class IV. Most non-survivors had previous aortic valvuloplasty and congestive heart failure within the preceding 12 months [18]. Causes of death included hemorrhage, pneumonia, heart failure, stroke, and sudden death [18]. Similar findings were reported by Gurvitch et al., who found that deaths post TAVI were primarily due to respiratory problems, such as respiratory failure, pneumonia, and chronic obstructive pulmonary disease [20]. Escarcega et al. reported that variables associated with 30-day mortality in nonagenarians were hemorrhage, stroke, and post-TAVI implantation of a pacemaker [21]. The factors correlated with one-year mortality in this patient population were the same as those at 30 days, with the addition of moderate aortic valve regurgitation [21]. The transfemoral versus the transapical approach was a predictor of lower 30-day mortality [21]. Furthermore, male gender and renal failure were found to be mortality predictors [25].
TAVI in Centenarians
Although the scope of this literature review is to describe TAVI in nonagenarians, it is worth illustrating the success of this procedure by mentioning the outcomes in a sample of 24 centenarians from Arsalan et al.'s research study [15]. Post-procedural complications in these patients were primarily vascular in nature and more frequent as compared to the younger cohort. There were no deaths at 30 days, and the one-year mortality was 6.7% [15].
Discussion
It is estimated that there are 100,000 candidates for TAVI in North America [2]. Favorable results were reported in octogenarians undergoing TAVI [3]. The literature base on TAVI in nonagenarians is presently limited. Most of the articles described above depicted the effectiveness of TAVI in European subjects. However, the success rate of this procedure is generally unknown in North American nonagenarians. There are multiple approaches to TAVI (i.e. transfemoral, transapical, transaortic, subclavian, transcarotid), but the transfemoral route was most frequently implemented in the nonagenarian populations from the reviewed articles [29]. This was likely due to the less-invasive nature of the transfemoral approach.
Comparison of TAVI versus SAVR outcomes in nonagenarians was very limited in the research literature. There were similar lengths-of-stay and mortality rates in-hospital and at one year follow-up [30][31][32]. Those who underwent SAVR were more likely to experience renal failure and to require blood transfusions [30]. Patients in both treatment groups had improved quality of life at one year post-procedure [31]. However, given the less-invasive nature of TAVI in comparison to SAVR, its popularity will likely continue to increase, which will prompt further research in this patient population. Suggestions for future research include studies with larger samples and long-term follow-up that compare the effectiveness of the various types of bioprosthetic aortic valve as well as the patients' quality of life. Furthermore, standardization of geriatric assessment pre-and post TAVI to determine quality of life would be beneficial in determining the success of this procedure across health care centers in various geographic locations.
Although TAVI seems to be a promising alternative for those who are too frail for SAVR, the financial impact of this procedure on the health care system has been a rarely evaluated variable in the literature. A cost analysis of transfemoral TAVI based on the PARTNER trial cohort B revealed a procedural cost of $42,806 and a hospitalization cost of $78,542 [33]. These costs were higher than those associated with standard nonsurgical therapy [33], conventional SAVR [34], and the newer sutureless technique for SAVR [35]. The incremental cost effectiveness ratio for TAVI was $502,000 per year of life saved, which was deemed acceptable according to US healthcare spending thresholds [34]. However, there were no cost effectiveness analyses of this procedure in nonagenarians, despite its increasing frequency and the high number of comorbidities in this patient population. Furthermore, when considering costs and therapeutic success, careful deliberation must be given to potential complications and anticipated quality of life post-procedure. The only complication experienced by the patient described in this case report was atrial fibrillation. His physical functioning after TAVI was excellent, and he had extensive social support.
Conclusions
As the life expectancy continues to rise, especially in developed nations, and more individuals survive into the tenth decade of life and beyond, there is a need for less invasive treatments that add quality to longevity. TAVI is a revolutionary approach to symptomatic severe aortic stenosis, which carries a grim prognosis for those who do not qualify for surgical valve replacement. The current case report of the 97-year-old man demonstrates that it is never too late to push the boundaries of medicine in the new millennium.
Strategy Planning in Simultaneous Interpreting -- with Beijing Historical and Cultural Speeches as Examples
Simultaneous interpreting involves multiple layers of strategy planning at any given point in time for the practitioner. History and culture speeches, such as those in China in general and in Beijing in particular, may be rather unique from the perspective of language and may not have much "equivalence" in other cultures. They therefore pose a challenge for message conveyance in simultaneous interpreting, which may mean that it is necessary to make a strategy plan before the conference starts, and as the conference goes on, in order to convey the message appropriately. This paper applies theories borrowed from psychology and management to the creation of a strategy plan model for simultaneous interpreting. It attempts to find the strategy model of simultaneous interpreting by analyzing a professional interpreter's performance working on three speeches on the history and culture of Beijing and by analyzing the interview with the interpreter. Based upon the review of the literature and the analysis of the transcripts of both the interpreting and the interview, a model of strategy planning in simultaneous interpreting is then proposed to facilitate the practice.
Introduction
Beijing historical and cultural speeches are considered difficult by professional interpreters, as is clearly shown in the interview with a CATTI Level I interpreter in this study (China Accreditation Test for Translators and Interpreters; Level I is currently the highest level in the examination track). When interpreting something she was familiar with, the interpreter would unconsciously forecast what the speaker was going to say, thus making a fairly adequate plan for the incoming segments of the speech, though problems did occur from time to time. However, when she was interpreting something she was less familiar with, it seems that her performance was greatly undermined by the unfamiliar concepts, which she called "words", presenting a confusion of strategies of completely different natures. Therefore, it seems that strategy planning, to a certain extent, is necessary for the interpreting of Beijing historical and cultural speeches, and this paper will focus on the discussion of strategy planning, a necessary process if interpreting performance is to be improved.
Several concepts should be defined in order to discuss strategy planning in the simultaneous interpreting of Beijing historical and cultural speeches. (1) Strategy plan. A strategy of interpreting is the method an interpreter chooses at the micro and macro levels in order to interpret the speech. A plan is a psychological phenomenon of mental simulation of actions and an inherently "adaptive activity, promoting rather than inhibiting flexible reactions to a changing environment". [5] [7: 213] (2) Simultaneous interpreting (SI). There are several definitions of simultaneous interpreting which describe the practice from different perspectives: (a) SI conveys a message into another language at virtually the same moment in time as it is expressed in the first language. [12] [23: 2] (b) SI is a unique human activity practiced under extreme cognitive processing conditions and constrained by short-term memory capacity. [1] [23: 2] (c) SI is a translation practice transferring the thoughts and ideas of one language into another, delivered almost synchronously with the speaker. [21] [23: 2] (d) SI is a type of interpreting in which the interpreter orally and precisely expresses the ideas and contents of one language (source language) in another (target language) at almost the same speed as the speaker. [22] [23: 2] All the above definitions describe the features of simultaneous interpreting: (a) it is a form of interpreting; (b) the interpretation happens at virtually the same time as the original speech; (c) this type of interpretation is heavily constrained by the processing capacity of the interpreter. (3) Beijing historical and cultural speeches. This is a type of speech reflecting what existed and exists in Beijing, covering various aspects of the history and current society of Beijing and what is rarely seen in non-Chinese cultures.
Strategy Planning
Planning is conceived of as "the mental simulation of actions in a dynamic environment". [7: 214] It is a psychological concept which, according to research, "can have a pervasive impact on performance, influencing not only the course of action pursued but also other, more subtle aspects of performance such as learning, motivation, and teamwork". [7] [6] [13] [18] And planning is "an inherently adaptive activity, one more likely to promote than inhibit flexible reactions to a changing environment" [7] [5], rather than an inhibiting factor preventing flexible adjustment for high-level performance. [7] [14] That means planning is aimed at better performance and is inherently adaptive and flexible, which is in line with the nature of simultaneous interpreting, where constant adjustment is needed as the interpreter deals with the unexpected for a fair amount of time in practice. "Planning, in fact, appears to influence performance in five ways: (a) It contributes to more effective problem solving; (b) it promotes learning; (c) it enhances motivation; (d) it facilitates adaptation; and (e) it enhances coordination." [7: 225] "Planning, whatever its contributions to individual performance, is a resource-intensive activity calling for environmental appraisal, goal appraisal, contingency projection, interactive plan development, and monitoring." [7: 227] [3].
It is believed that the factors of goal identification, environmental analysis, specification of causes and consequences, and forecasting, which operate together in integrated planning, play major roles in planning. [7] "...planning begins with environmental monitoring and needs assessment, followed by generation and prioritization of applicable goals. With goal identification, it becomes possible to identify the key steps and relevant contingencies needed to formulate a prototype plan. Given this prototype, key causes and contingencies operating in the local situation can be identified and the potential consequences of plan execution can be projected, leading to development of a revised, more detailed plan." [7] It seems that this is also applicable to the interpreting setting. And the "goal", which seems to be an important link in this whole planning process, may relate to what is called "skopos" in translation theory. Skopos, first proposed by Vermeer in the 1970s, indicates the purpose of a piece of translation or translational behavior. [8: 78-79] [19: 71] The core concept of Skopostheorie is that the purpose of translational behavior is the key determinant of the translation process. [19: 71] Goal identification, or skopos identification, may thus help constitute the general plan of the interpreting strategy.
Upon having a general plan, some detailed situations, or contingencies, may be encountered in the execution process, and "knowledge" could be the basis for their treatment. It is believed by Mumford et al. that knowledge of the possible contingencies and of the general rules and procedures is needed for the execution of strategies and the handling of contingencies. [7] [11] It seems that interpreters should be able to grasp knowledge of the interpreting procedure and potential interpreting difficulties in order to make an adequate plan for the interpreting strategy. Experienced interpreters may acquire this knowledge in the course of developing their careers, while novice interpreters may do so through training and theoretical learning.
With general and local strategy plans in place, it is necessary to coordinate the plans at the two levels in order to make them an integral whole. It is believed that planners, which should include interpreters, should efficiently organize their activities in accordance with the goals, and this process may be more rapid for experts than for beginners. [7: 224]
Simultaneous Interpreting Strategy
According to Gile, there is an effort model of simultaneous interpreting, as well as several operational conditions. Simultaneous interpreting (SI) consists of several parts, shown in the equation:

(1) SI = L + P + M + C

where L stands for listening and analysis, P for reformulation (production) in the target language, M for short-term memory, and C for coordination.
To make simultaneous interpreting work without failure, several operational requirements need to be met:

(2) TR = LR + MR + PR + CR

where TR is the total processing capacity requirement, LR the processing capacity requirement for L, MR the requirement for M, PR the requirement for P, and CR the requirement for C.
To make simultaneous interpreting proceed uninterruptedly, five conditions must be met at any point in the simultaneous interpreting process:

(3) TR ≤ TA, with TA the total available processing capacity;
(4) LR ≤ LA, with LA the processing capacity available for L;
(5) MR ≤ MA, with MA the processing capacity available for M;
(6) PR ≤ PA, with PA the processing capacity available for P;
(7) CR ≤ CA, with CA the processing capacity available for C.

The core of these inequalities is that the capacity requirement for each effort should be smaller than the capacity available. Therefore, it can be deduced that reducing the use of processing capacity at any given point in time is a necessary principle if simultaneous interpreting is to proceed smoothly.
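To make the role of inequalities (3)-(7) explicit, the following is a toy sketch that simply encodes them as a feasibility check. The model itself provides no way to measure these quantities, so the function name and all numbers below are purely hypothetical.

```python
def si_feasible(req, avail, total_avail):
    """Check Gile's conditions (3)-(7) for one moment of interpreting.

    req, avail  -- dicts keyed by effort: 'L' (listening), 'M' (memory),
                   'P' (production), 'C' (coordination)
    total_avail -- TA, the total available processing capacity
    """
    efforts = ("L", "M", "P", "C")
    tr = sum(req[e] for e in efforts)                          # TR, Eq. (2)
    per_effort_ok = all(req[e] <= avail[e] for e in efforts)   # (4)-(7)
    return tr <= total_avail and per_effort_ok                 # (3)

# A dense, unfamiliar segment pushes the listening requirement past capacity:
req   = {"L": 0.50, "M": 0.20, "P": 0.25, "C": 0.10}
avail = {"L": 0.40, "M": 0.30, "P": 0.30, "C": 0.15}
print(si_feasible(req, avail, total_avail=1.0))   # False: LR > LA and TR > TA
```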
Many scholars believe that anticipation is an important skill in simultaneous interpreting. Jones [4: 105-107] believed that interpreters should be able to anticipate the speech in order to avoid the awkward situation of not being able to finish a sentence already started, which results from passively following the speech. And Nolan [9: 18-19] believed that preparation and anticipating what the speaker is going to say are very important for the interpreter. The preparation and anticipation may include getting the speech material before the conference starts, gaining familiarity with the subject matter to be discussed, and attending a meeting in advance to understand procedural rules and terms. At the same time, Nolan was aware of potential changes in the real conference setting compared with what is prepared and anticipated. Therefore, it is in line with the flexibility requirement in the strategy planning discussed in the previous part of the paper.
Nolan discussed anticipating from a general perspective, while Jones made it more concrete at several levels. Firstly, anticipating the structure and the general thrust of the speech from the context of a meeting; this is based upon knowledge of the conference and its procedures, the same as Nolan's proposition. Secondly, recognizing speech patterns and rhetorical structures; this is the "middle" layer of anticipation, which requires the interpreter to understand the speech type and its possible direction of development. Jones [4: 14-21] categorized four types of speeches commonly seen in international conferences, namely reasoned logical argument, narrative speech, persuasive speech, and rhetorical speech, and Wang [16] made this division of speech types a very important component of strategy for the consecutive interpreting of Beijing historical and cultural speeches. Thirdly, anticipating specific words or phrases in individual sentences to understand the direction of the sentence. At the same time, Zhang believed that the communicative environment of interpretation makes the operation of interpreting special, and the flexibility of interpreting strategy makes the information between the source and target texts asymmetrical, which differs from the traditional standard of faithfulness in translation.
Research Questions and Methods
The research questions of this study are: (1) What are the challenges in interpreting Beijing historical and cultural speeches from the interpreter's perspective? (2) What are the possible strategies adopted by the interpreter in interpreting Beijing historical and cultural speeches? (3) What is a possible model of the strategy plan in simultaneous interpreting?
In order to explore the possible model of the strategy plan, a study was conducted to understand the strategy planning process of a conference interpreter. The method used in this study is qualitative, including observation, interviews, case study, and content analysis.
Inviting an Interpreter
As the content analyzed is case-based, one interpreter, who gave her full consent to the study, was invited for this study. The interpreter was female and 31 years old at the time the research was conducted. She held a master's degree in translation and interpreting, had passed the CATTI Level I Interpreters' Test, and had six years of practicing experience on the freelance conference interpreting market, having worked at hundreds of sessions of international conferences. She also worked with social institutions on the training of interpreters.
Judging from her actual performance and her interview in this study, the interpreter was able to convey the message of the speaker in a general sense and made many choices out of consideration for the audience. Therefore, the interpreter can be regarded as a professional, and the results of the research are, to a certain extent, able to reflect the state of strategy planning in conference interpreting.
Because the performance of and interview with this particular interpreter seem representative to a certain extent, and because of the limitation of various research resources, it was decided to limit the number of professional conference interpreters in this study to one.
Choosing the Content of the Research
Because of the diversity and multitude of possible speeches relevant to Beijing history and culture, it was decided that this study would choose representative speeches with the following features: (1) the content of the speech should be about Beijing history and culture; (2) the style of the speaker should be typically Chinese, clearly different from the speech structure and common ways of expression of ordinary international conferences.
Three speeches from TV shows were chosen, all of them introductions to Beijing history and culture. The second speech differs from the first and the third in that it seemed more similar to the speeches of international conferences, which the speaker attended frequently. The first and the third speeches were mainly meant for Chinese audiences and were thus closer to the traditional Chinese speech style. The second speech was chosen for the sake of comparison, to see whether any major differences in strategy planning can be observed when the speech styles differ.
Process of Data Collection
There are two major steps in generating data for this study: (a) lab work; (b) transcription of audio and video.
The interpreter was invited to the simultaneous interpreting lab equipped with the newclass DL760 system, which creates conditions similar to working with a real conference system. The videos of the three TV programs were played on a computer screen, with QQ video used to mask the subtitles at the bottom of the screen so that the interpreter would not be able to read them.
The processes of interpreting and interviewing can be seen in the following three figures. The interpreter came in formal clothes and with her computer, putting herself in a conference state. She did not, however, prepare for the topic of Consort kin of the imperial families, although the researcher had told her the topic beforehand. This is a normal circumstance during busy periods on the freelance market, and the researcher decided to give her the English equivalent of the key term "consort kin".
After collecting the data in the form of audio recordings, the researcher worked with students of the master's program in translation and interpreting to fully transcribe the interpreting and the interviews.
Interview
To understand the strategies used in the interpreter's lab performance, and thus to tap the potential of the strategy plan for performance enhancement, the interview with the interpreter was first analyzed to establish which strategies were used and how the interpreter perceived their effect.
The framework of analysis is based upon (1) Jones's [4] proposition of the three levels of anticipation and (2) Wang's [16] strategy model of consecutive interpreting. Accordingly, three different levels of analysis of interpreting strategy are identified: (1) strategies related to the general structure and thrust of the speech, which is relevant to the context; (2) strategies related to the speech type and the local sense, which goes beyond the words and expressions; (3) strategies related to the words and expressions.
The four segments of the interview in this study were semi-structured: only several questions were listed as the general structure before the interview, and the other questions asked during the interviewing process arose spontaneously from points that interested the researcher. The outline of the interview included the following component: (1) What do you think of the difficulty level of this segment? Please comment on the difficult points and the easy points.

Numbers in the following three tables show how many times the interpreter mentioned a given point. Generally speaking, the number of times counted is determined by the sense; for instance, the interpreter speaking at length on one point is counted as one mention, whereas mentioning two concepts or word problems consecutively is counted as two. Judging from the number of mentions at the three levels of strategies, the total number of mentions of strategies related to the general structure is smaller than that of strategies related to the speech type and the local sense, which in turn is smaller than that of strategies related to the words and expressions (hereinafter referred to as the first, second, and third levels). The second is 112% more than the first, and the third is 83% more than the second. This may show that the interpreter paid more attention to word and concept strategies than to the other two levels. Words and concepts are perhaps legitimate concerns for interpreters, as these are what can be directly perceived. They are also very important in building up the local sense. What the interpreter mentioned quite a few times in her interviews on Consort kin of the imperial families and Imperial workshop was that unfamiliar words were major obstacles for her interpreting.
Although the total number of mentions at the second and the third levels is much larger than at the first, Shan Jixiang guiding you to tour the Forbidden Palace seems to be an exception: the interpreter talked more about the general strategy for this program than about the local sense or about words and concepts. One important reason, perhaps, is that the interpreter had once interpreted for the speaker on a similar speech and was very confident about both the general thrust and the details. Therefore, she was happy to take the initiative to talk about the general thrust without any leading questions posed by the researcher.
The Consort kin and the Imperial workshop, on the other hand, proved to be very different. Even with questions from the researcher leading towards discussion of the general thrust, the interpreter could not help but talk about words and concepts at length. She believed that not being familiar with the two topics, including the things being talked about, the background knowledge, etc., was a huge constraint on her performance. This division may mean that the interpreter is able to make a general strategy plan when she believes the topic being talked about is somehow "under control", while for less familiar topics it is hard for her to do so. For the unfamiliar topics, it seems that the interpreter was prone to "passive" following of the speech instead of having an adequate strategy plan. Therefore, it is a point of discussion whether an interpreter is able to form strategy plans even for less familiar topics, perhaps more at the level of the local sense than of the general thrust.
Taking a closer look at the two interviews on the Consort kin, it is obvious that mentions of the first and the second levels mainly appeared in the first interview, not the second; the second interview mainly focused on words and concepts. This may be because the interpreter believed she had nothing new to say about strategies at the first two levels and focused only on the problems she met in the interpreting process, which is in line with the fact that the second interview saw more mentions of passive coping or inability to deal with words and concepts.
Strategies Adopted by the Interpreter at the Three Levels
This part is mainly based upon the analysis of the transcript of the interpreting of the three programs.
Strategies of General Structure and Thrust of the Speech
This level is less obvious in the actual interpreting performance, since the nature of simultaneous interpreting, unlike written translation where structural change is possible, only allows output at the local level. Therefore, it is difficult to observe this kind of strategy directly from the transcript, while strategies at this level may be known from the interview with the interpreter.
Strategies Related to the Speech Type and the Local Sense
(1) Consort kin of the imperial families: (a) Summary translation. Instead of following every sentence and every tiny piece of information closely, the interpreter summarized what the speaker had just said. This should normally be a summary of a very long segment; a summary of two or three sentences expressing one and the same idea is usually the longest that may be considered acceptable.
Strategies Related to the Words and the Expressions
(1) Using explanations or superordinates to replace words whose equivalents are unknown. (2) Omission of unimportant words.

It can be seen that there are many more types of strategies at the "mid" level than at the level of words and expressions that can be directly observed. Compared with what was mentioned in the interview, it seems that what the interpreter actually worked on is at the local level of "sense", which is in line with what was proposed by the théorie du sens. Although the interpreter appeared to have many problems with "words" in the interview, she actually had more tools at hand at the level of sense, not words. In other words, words may give interpreters problems, and she tried to solve those problems at the higher level of sense.
Whether there are strategies at the level of the general thrust and structure is not directly observable, and this may be the subject of further study. Table 4 shows the strategies at the sense level observed in this study. As can be seen from Table 4, some strategies are common to all three renditions, such as summary translation, addition, and omission, while some strategies are more prominent in some renditions than in others. For instance, "gradually adjusting interpreting sense-making" appeared mainly in Consort kin and Imperial workshop, which suggests that the sense was probably not very clear to the interpreter at the beginning and became clearer as sense built upon sense. This corresponds to what the interpreter said in the interview, as she mentioned several times that she was not clear about certain meanings because of either a lack of background knowledge or difficulty in understanding. On the other hand, the interpreting of Shan Jixiang guiding you to tour the Forbidden Palace was much more comfortable for the interpreter, as she had already interpreted for the speaker on a very similar topic. Perhaps for the same reason, one strategy stands out in this part of the rendition, namely "following every sentence closely". It shows that the interpreter was at ease with this part of the rendition, and accordingly her performance here was better than in the other two.
Failures in the Interpretation
Although certain strategies were applied, there are a few failures in the interpretation of the three programs, as shown below.
Consort kin of the Imperial Families
(1) Being stuck in the dilemma of either following the sentences closely or making a summary. (2) Pronoun confusions that muddle the sense; for instance, the interpreter confused male and female pronouns, and sometimes used "it" without clearly referring to anything. (3) Missing important comments. (4) Unclear rendition of human relationships; for instance, there was local confusion, and previous confusion sometimes led to confusion in later paragraphs. (5) Unclear rendition of typical Chinese concepts, such as emotions, articles, and proper nouns.
Imperial Workshop
(1) Unclear rendition of human relationships. (2) Unclear rendition of typical Chinese concepts, such as titles of people. (3) Number errors. (4) Inability to back-translate foreign names. (5) Logical errors. (6) Grammatical errors, such as sentences with no subject. (7) Unclear rendition caused by introducing characters without appropriate context for the listeners of the translation. (8) Misunderstanding of the source text.

Failures in the three sections of interpreting are summarized in Table 5. It is obvious from Table 5 that there are several errors all three interpretations have in common, such as "unclear rendition of typical Chinese concepts", "logical error", and "misunderstanding of source text". Typical Chinese concepts, which describe a "world" very different from that of traditional English-speaking countries, can be very difficult to translate unless known beforehand. Therefore, substantial knowledge of traditional Chinese concepts seems to be a must for interpretation on related subjects.
The logical errors seem to arise when, at certain points, the interpreter was unable to keep up with each sentence but was still trying to follow the sentences instead of making a summary, thus producing unclear logic; this seems to be an error that is much more personal to the interpreter herself. Therefore, for speeches that cannot be followed closely, the interpreter should choose either to follow closely or to make a summary. To a certain extent, the two strategies are not usable at the same time unless the interpreter is absolutely clear about the logic at hand, which seems unlikely, as the catching-up process already consumes too much of her capacity, especially when the speech cannot be followed closely. Therefore, choosing one of the two strategies is important in this process.
At the same time, the appearance of errors seems to be related to the features of the different texts. For instance, both Consort kin of the imperial families and Imperial workshop show "unclear rendition of human relationships", but Shan Jixiang guiding you to tour the Forbidden Palace does not, which indicates that the first two texts feature complicated human relationships while the third does not. The same holds for "unclear rendition caused by introducing characters without appropriate context", which only appeared in Imperial workshop. Therefore, it seems necessary that an interpreter be able to quickly identify the prominent features of a text that may affect interpreting performance and develop strategies accordingly, as flexibility and adaptability are the features of strategy plans in general and of interpreting strategy plans in particular.
Some of the errors are more personal ones, such as "pronoun confusions that muddle the sense", "slip of the tongue", etc., and the strategies for these problems could be more personal as well.
It seems likely that errors occur at the second and the third levels, not at the level of the general thrust, which would otherwise mean distortion of the message. Judging from the interview, some of the errors and failures at the sense level may stem from the interpreter's inability to deal adequately with words and concepts. It can therefore be deduced that, in the interpretation of Beijing historical and cultural speeches, words and concepts, which carry much of the concrete meaning and are the foundations for describing the "world", are closely linked with sense-making at the "mid" level. The interpreter thus needs to know the words and concepts adequately and to apply strategies that help with sense-making at the mid-level.
From the strategies and the failures, it is clear that both differ according to the features of the texts. Therefore, it seems important to try to understand the features of the texts in order to form appropriate strategies. At the same time, the failures and inappropriate strategies appear to be related to inadequate strategy planning. It may therefore be necessary for the interpreter to work on the strategy plan if further improvement of performance is to be made.
Findings and Discussion: Strategy Plan of Simultaneous Interpreting
Based on the literature review and the analysis of the interview and the interpreter's performance, several factors can be identified that compose the strategy planning of the interpreter for an assignment.
(1) Goal identification and keeping. A functionalist view of translation theory is that goal and function are the main factors determining translation strategies [15: 52]. Goal identification can occur at several different levels. (i) The general goal of the conference: when an interpreter serves a conference, it is important to know its theme, background, and purpose. (ii) The general goal of individual speakers: every speaker may have his or her own agenda and purpose in delivering a speech, whether expressing opinions, evoking emotions, or communicating information, which is basically in line with the purposes of the different speech types proposed by Jones [4]. This general goal obviously has a heavy impact on the interpreter's general strategy plan. For instance, if the goal is to evoke people's emotions, the interpreter should perhaps try to do so as well; if it is to communicate information, the interpreter should make sure key information is clearly conveyed by giving it high priority when mental processing resources are insufficient. (iii) The local purpose of the speaker: when the speaker says something, there may be a local purpose as well. For instance, in an emotional speech there may also be important information, and priority should then be given to the piece of information the speaker wishes to convey, which may also be important.
The higher-level goal should always be kept in mind when pursuing lower-level goals, and when mental resources are not sufficient to fulfill all of them, the interpreter needs to prioritize the goals.
From the interview with the interpreter, it can be seen that she understood the goal, in general terms, as making the audience understand the sense of the speaker. However, goal identification was not particularly clear, which is understandable given that the research did not take place at a real conference, making it possible to confuse the goals. Without clearly identified goals, it is easy for the interpreter to fall back on "equivalence" and "loyalty", the vague but popular understanding of what a translation should be held by many Chinese who are not translation theorists. When this happens, the interpreter can be trapped, trying to follow closely while unable to, resulting in a confusion of logic.
Therefore, in a real conference setting, it seems to be a good strategy to identify goals at different levels, keep implementing them, and prioritize them when they are not in line with each other.
(2) Environmental analysis and forecasting. In the strategy plan of simultaneous interpreting, environmental analysis should probably cover more than the physical surroundings of the interpreter at work. In this study, it is considered to be composed of two main parts: (i) the analysis of the speaker, including educational background, profession, delivery speed, accent, speech style, temperament, and virtually anything about the speaker that can be identified on the spot and that may affect the interpreting strategies; (ii) the analysis of the audience, including educational background, profession, information reception habits, and anything else that may affect the interpreting strategies.
Here, similar to goal identification and keeping, where interpreters have to prioritize goals when they lead to competing strategies, interpreters may do the same with the strategies derived from environmental analysis.
Forecasting is closely linked to environmental analysis in that the interpreter may forecast what is going to be said by the speaker based upon the judgment of the speaker and the three levels of goals identified.
(3) Contingencies. It is inevitable that something unexpected will happen while interpreting; therefore, an interpreter should always watch out for difficulties. It is, however, possible to plan for contingencies to a certain extent. (i) Being familiar with the interpreting procedure. This may relieve the interpreter of the effort of minding the procedure, thus freeing more processing capacity for the unexpected.
(ii) Being aware of possible difficulties. For instance, the interpreter may be aware of common challenges in interpreting, such as names, numbers, enumerations, a word in a foreign language, poor pronunciation, etc. [2: 160-161]. Wang [17] argued that nouns (or names), numbers, and logical relationships are the three main difficulties for Chinese students in consecutive interpreting. Figure 4 below presents a model of the strategy plan of simultaneous interpreting based upon the literature review and the case study in this research.
Conclusions
As can be seen from the literature review and the analysis, the strategy plan is important for simultaneous interpreting, as in other activities. Simultaneous interpreting is an activity in which the interpreter needs to consider all relevant possible factors; it takes the utterance of the speaker as its foundation but goes far beyond it.
Strategy planning actually starts before the conference begins, possibly from the moment the task is accepted, though at that point the plan can only be rough at best. As the preparation process goes on, the plan becomes more detailed. Under "normal" circumstances the strategy plan can be ideal, although "normal" circumstances may not be normal in themselves: practice is full of contingencies, which have to be dealt with from time to time, as can be seen from the model. When this happens, the prioritization of factors established under normal circumstances should be traced back in order to properly apply strategies to the contingencies. This, together with the prioritization of various other factors, demands intensive allocation of mental resources, as the decision-making process takes only a very short time. The practice of simultaneous interpreting is therefore highly demanding on mental processing capacity, as all of the activities in the model take place at the same time, while prioritization, which takes the processing to another level, happens in a split second. The internalization of this ability perhaps requires the interpreter's instinct as well as long-term training and practice.
Treatment of Gunshot-Related Mandibular Fracture with Splint-Guided Reduction: Case Report
Background/Aim: Gunshot injury-related mandibular fractures often have a complex pattern, characterized by comminution, bone loss, and soft-tissue avulsion. The management is difficult and varies between individual cases. Case Report: A 41-year-old male patient presented with marked swelling and ecchymosis in the left mandibular region. Intraorally, he had a deviated open bite on the left side. A unilateral comminuted mandibular fracture was diagnosed by panoramic radiograph and computed tomography. An acrylic dental splint-guided open reduction and internal fixation, including intermaxillary fixation through brackets and intermaxillary elastics, was planned. No complications were observed throughout the healing period, and healing at the fracture site was satisfactory. The occlusion returned to the preinjury position and was stable. Conclusions: This case report shows that successful functional and esthetic results can be achieved with a strict patient-specific treatment protocol for a comminuted mandibular fracture due to gunshot injury.
Introduction
Mandibular fractures are the most common type of maxillofacial fracture. They cause both functional and esthetic impairment 1,2 and have different etiologies depending on geographic region, cultural status, lifestyle, and socioeconomic status 3,4. The most frequent etiologies include traffic accidents, violence, gunshots, falls, and work and sport accidents 5,6. Among these, gunshot injury-related mandibular fractures differ from those of other etiologies and often have a complex pattern characterized by comminution, bone loss, and soft-tissue avulsion. The management of gunshot injury-related mandibular fractures varies between individual cases and often presents difficulty for the surgeon. There are risks of both death and infection. A strict patient-specific treatment protocol, including occlusion, should be established. If the occlusion is not addressed, the treated fracture may be nonfunctional and yield unesthetic results, and a redo surgery may be required. Reestablishment of the preinjury occlusion provides accurate positioning of the jaw and anatomical reduction of the fracture 7.
The aim of this case report is to present the treatment of a 41-year-old male patient with a history of mandibular fracture related to a suicide attempt with a gunshot.
Case Report
A 41-year-old male patient with a history of mandibular fracture related to a suicide attempt with a gunshot was referred to the department of orthodontics for occlusal impairment. The patient consulted 8 days after the injury. He had a medical history of diabetes mellitus and psychotic depression. Extraorally, he had marked swelling, ecchymosis, pain, and tactile sensation on the left side of his face, around the mandible. The projectile had entered under his right mandible and exited from the left cheek. Intraoral examination revealed a deviated open bite starting from the left anterior and extending to the posterior (Figure 1). The maximum mouth opening was 15 mm. Radiological examination showed a comminuted mandibular fracture on the left side, and the anterior and posterior cortexes of the mandibular angle were slightly separated (Figure 2).

An acrylic dental splint (ADS)-guided open reduction with both mini screw and intermaxillary fixation (IMF) using brackets and intermaxillary elastics was planned for the treatment. First, the teeth were rinsed with water and dried with airflow. All teeth from the upper and lower right second premolars to the upper and lower left second premolars were acid-etched with 37% orthophosphoric acid. After the teeth were rinsed and dried again, the primer was applied. Brackets (Roth prescription, 0.018 inch) were bonded to the teeth with an orthodontic composite resin. A 0.016×0.022 inch-diameter stainless steel wire was bent appropriately to fit passively into the bracket slot and attached to the teeth with ligature wire. Crimpable surgical hooks were placed on the bent arch wires to attach intermaxillary elastics (Figure 3). Following the bonding procedure, dental impressions of both the upper and lower jaws were obtained using alginate impression material, and plaster models were prepared. A traditional surgical model setup was performed according to an old photograph showing the preinjury occlusion (Figure 3). An ADS was fabricated so as to establish the preinjury occlusion. Before surgery, the compatibility of the ADS with the upper and lower teeth was confirmed.

Following the presurgical preparation, the fracture segments were reduced with the aid of the ADS, and internal fixation was achieved with four bicortical screws (length 15 mm, diameter 2 mm) under general anesthesia. Two days post-surgery, the ADS was placed into the mouth and IMF was done over the brackets using intermaxillary elastics to guide the mandible. During the IMF period, the patient was set on a liquid diet and was examined weekly. Four weeks after surgery, the IMF device was released, and a few training elastics were retained to ensure the stability of the fracture and occlusion. Then, the elastic force was gradually reduced. He was advised a soft diet and instructed to perform mouth opening-closing exercises.

The patient experienced minimal discomfort during the healing period. No complications were observed. The panoramic radiograph taken 3 months after the injury was normal (Figure 4), and the brackets and arch wires were removed. Healing at the fracture site was satisfactory. The occlusion returned to the preinjury position and was stable. The maximum mouth opening increased from 15 mm to 35 mm (Figure 5).
Discussion
Gunshot injury-related mandibular fractures are different from other, simpler mandibular fractures. There are often numerous small pieces of comminuted bone and foreign bodies covered with non-vital soft tissue at the wound site. The severity of gunshot injuries to the mandible depends on the energy of the projectile (low or high), the entrance angle of the projectile, the proximity to the anatomic region, and the anatomy of the injured region 8-10. Gunshot injuries occur mostly in males 8,11. The most common causes of these injuries are interpersonal civilian violence and suicide attempts 12. The present case was also an attempted suicide with the patient's own handgun while on duty.
As with all fractures, gunshot injury-related mandibular fractures should be treated as soon as they are diagnosed; otherwise, the risk of infection increases as treatment is postponed. However, immediate treatment of these fractures may not always be possible due to airway problems, hemorrhage, pain, and other issues. In our case, the consultation at the department of orthodontics took place on the eighth day after the injury because of the patient's general condition and high glucose values. Providing care for such patients is difficult for both surgeons and orthodontists, as the treatment is challenging. First, it is necessary to analyze in detail how the event occurred so as to propose an appropriate treatment. After stabilizing the patient's vital signs, the treatment steps are removal of foreign bodies and dead tissue in the wound area, reduction of the fracture, and subsequent fixation. Previous studies have proposed different treatment approaches for gunshot injury-related mandibular fractures, including external fixation, closed reduction with IMF, internal wire fixation, and open reduction and internal fixation with or without IMF 7,11,13-15. External fixation and closed reduction with IMF have previously been recommended in some studies so as to avoid periosteal stripping and devascularization of the comminuted bony segments in selected patients 13,14.

With advances in technology and increased experience, open reduction and internal fixation are now advocated in these cases 7,11,14,15. This has significantly shortened the postoperative healing period 16. Furthermore, patients treated with open reduction and internal fixation with adjunct IMF have been reported to have fewer complications than those treated with closed reduction with IMF 11,14. For this case, our surgeons preferred open reduction and internal fixation with bicortical mini screws. The aim of using mini screws was to decrease the infection risk by using fewer instruments; furthermore, the procedure is technically easier, and the use of a plate would not have provided control over the third fragment of the fracture.

Concerns regarding malocclusion after open/closed reduction prompted researchers to include occlusion in the treatment of these patients 7,13,15,17. In particular, for gunshot injury-related mandibular fractures, difficulties in reducing multisegmented bones during fixation may result in improper fixation, leading to malocclusion and unesthetic results. Furthermore, malocclusion occurring after rigid fixation cannot respond to orthodontic adjustment, so additional surgical interventions are required to reestablish the preinjury occlusion. For the reestablishment of preinjury occlusion, splints, orthodontic brackets, and combinations of these have been used in the literature 7,9,15,17. Cohen et al. 17 suggested the use of a splint to reestablish preinjury occlusion and obtained excellent results; unlike in this case report, they used an arch bar for IMF. In a study of cases with comminuted or complex maxillofacial fractures, Konas et al. 7 reported only one case of mild malocclusion after splint-assisted open reduction and internal fixation including IMF through the application of intermaxillary elastics over brackets, and the splint was the leading device for overcoming problems related to intraoperative reduction. On the contrary, Peleg and Sawatari reported that 6 out of 30 patients with comminuted mandibular fractures had malocclusion after open reduction and internal fixation in combination with splint use and IMF, a slightly higher malocclusion rate 9, a result likely related to the severity of the fractures and the small sample size. In this case, the occlusion was restored with an ADS and IMF through the application of intermaxillary elastics over brackets. This method has the advantages of providing dental alignment and stabilization and of preventing the fractured segments from rotation and distraction; the occlusal surface prevents overeruption of the teeth 17. We used brackets for IMF instead of an arch bar, as the latter may damage the gingival and periodontal tissues and requires increased surgical time. On the other hand, IMF through the brackets is itself challenging and time-consuming.

At the end of the treatment, we did not encounter any complications in the patient. In contrast to this result, several complications may arise from these injuries, such as infection, malocclusion, malunion/nonunion, plate exposure, facial asymmetry, airway obstruction, nerve injury, and sequestration 9,11. Such fractures require special management to minimize these complications.

Conclusions

Each case of gunshot-related mandibular fracture should be evaluated individually, and the treatment plan should be patient-specific, depending on factors such as the general condition of the patient, the severity of the fracture, the skill of the doctor, and the availability of appropriate equipment. An ADS-guided open reduction and internal fixation, including IMF with intermaxillary elastic application over the brackets, provides optimum stabilization and treatment results when the patient is an appropriate candidate.
The Effect of Sexual Counseling Based on PLISSIT and EX-PLISSIT Models on Sexual Function, Satisfaction, and Quality of Life: A Systematic Review and Meta-Analysis
This systematic review and meta-analysis study aimed to investigate the effect of sexual counseling based on PLISSIT (Permission, Limited Information, Specific Suggestions, and Intensive Therapy) and EX-PLISSIT models on sexual function, satisfaction, and quality of sexual life. We searched seven electronic databases (MEDLINE, CINAHL, Web of Science, Cochrane Library, ProQuest, Scopus, and PubMed). Studies published between January 1, 2010, and August 16, 2022, were included in the search. Eighteen articles were eligible for inclusion in the analysis. There was a significant difference in the sexual function scores of the PLISSIT and EX-PLISSIT groups and the comparison groups (standardized mean difference (SMD): 1.677; 95% CI 0.668, 2.686; p < 0.05) and in the "sexual and communication satisfaction" sub-dimension of sexual life quality (SMD: 0.748; 95% CI 0.022, 1.475; p < 0.05). There was no difference in the sexual satisfaction (SMD: 0.425; 95% CI −0.335, 1.184; p > 0.05) or quality of sexual life scores of the PLISSIT and EX-PLISSIT groups and the comparison groups (SMD: −0.09; 95% CI −0.211, 0.032; p > 0.05). The effect of PLISSIT and EX-PLISSIT model-based sexual counseling on sexual function was moderated by the time of evaluation of the results after the intervention, the type of comparison group, the study population, and by whom the intervention was applied. Sexual counseling based on the PLISSIT and EX-PLISSIT models improved sexual function scores and the "sexual and communication satisfaction" sub-dimension of sexual life quality.
Introduction
Sexuality is one of the basic human needs (East & Hutchinson, 2013). Sexual function is an important part of health and one of the factors affecting the quality of life (Panahi et al., 2021). Sexual quality of life, on the other hand, refers to a general state of well-being in sexual function and satisfaction with sexual function (Mohammadi et al., 2022). In recent years, models have come to be commonly used in sexual health assessment and counseling. The use of models is considered effective in improving the sexual function of individuals and increasing their sexual satisfaction and sexual life quality (Ziaei et al., 2022). Among these models are the PLISSIT (Permission, Limited Information, Specific Suggestions, and Intensive Therapy) and EXTENDED PLISSIT (EX-PLISSIT) models (Taylor & Davis, 2007). The PLISSIT model was first developed by Annon (1976). The model includes four levels of intervention (Permission, Limited Information, Specific Suggestions, Intensive Therapy). Each level suggests approaches for responding to sexual concerns. The first level, meeting the sexual health needs of the individual, is the evaluation process. At the second level, where information is given about the effect of the disease on sexuality and how treatment can affect sexual function, it is emphasized that informing patients about the effects of their treatments on sexual health has an important place among nursing interventions.
The third level includes special suggestions and information for the individual/partner in order to make sexual life more satisfying.The fourth level involves intensive therapy and requires referral to a specialist in sexual rehabilitation.
The PLISSIT model is a standard model and one of the most commonly used. One of its limitations is its linearity: proceeding from one level to the next, the therapist cannot diagnose the necessity of returning to a previous level to resolve the patient's sexual concerns. Additionally, it does not include reflection and review elements. Thus, Taylor and Davis (2007) developed the EX-PLISSIT model as an extension of the PLISSIT model. The EX-PLISSIT counseling model is based on the key concepts of the PLISSIT model. The main difference is that the permission step is at the center of the other steps. Unlike in the PLISSIT model, the steps are intertwined with each other rather than sequential. In this way, the model enables the individual to reveal his or her feelings and thoughts about sexuality. In the PLISSIT model, counseling and intervention move from one level to the other linearly, while in the EX-PLISSIT model they are cyclical, with the permission level at the center of the other levels. Although the EX-PLISSIT model is based on the main concepts of the PLISSIT model, feedback is essential in the EX-PLISSIT model to increase self-awareness. In the EX-PLISSIT model, after seeking feedback from the client and reviewing outcomes, the therapist is better placed to challenge his or her own assumptions.
There are systematic review and meta-analysis studies on the effectiveness of the PLISSIT model in the literature (Kharaghani et al., 2020; Mashhadi et al., 2022; Tuncer & Oskay, 2022). A systematic review found that PLISSIT model-based sexual counseling was an effective, simple, useful, and cost-effective counseling method. A meta-analysis study showed that psychological interventions including the PLISSIT model significantly improved the sexual function of women (Kharaghani et al., 2020). In another meta-analysis, sexual counseling based on the PLISSIT and EX-PLISSIT models was found to be effective for sexual dysfunction (Mashhadi et al., 2022). Although these meta-analyses evaluated the effect of the PLISSIT and EX-PLISSIT models on sexual function, no studies in the literature have examined the effect of the PLISSIT and EX-PLISSIT models on sexual satisfaction and quality of sexual life. For this reason, this systematic review and meta-analysis aimed to investigate the effect of sexual counseling based on the PLISSIT and EX-PLISSIT models on sexual function, level of satisfaction, and quality of sexual life.
Design
This study followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guideline (Moher et al., 2015). The protocol of the study was registered in the PROSPERO database, which allows systematic review and meta-analysis studies to be registered (ID: CRD42021240114).
Search Method
We searched seven electronic databases (MEDLINE, CINAHL, Web of Science, Cochrane Library, ProQuest, Scopus, and PubMed). Studies published between January 1, 2010, and August 16, 2022, were included in the search.
The following MeSH search headings were used: "women OR female" AND plissit OR ex-plissit AND "sexual health" OR sexual OR func* OR "sexual func*" OR dysfunc* OR "sexual dysfunc*" OR "quality of life" OR satisfac* OR "sexual satisfac*" OR "sexual life." These terms and their combinations were searched as text words or in the abstract/title.
Types of Studies
We limited the studies to randomized controlled trials and controlled trials that compared the PLISSIT and EX-PLISSIT models with control groups (e.g., usual care, placebo, no intervention, or waitlist control) or other intervention groups (the BETTER model, the Sexual Health Model (SHM), and solution-focused groups).
Language of Studies
The studies published in English were included in the analysis.
Participants
We applied no restrictions to the participants.
Types of Interventions
We considered PLISSIT and EX-PLISSIT model-based interventions. We applied no restrictions to the intervention type, dosage, duration, etc.
Types of Outcome Measures
The outcomes measured using validated scales were sexual function (Female Sexual Function Index [FSFI], Brief Index of Sexual Functioning for Women [BISF-W], and Sexual Dysfunctional Beliefs Questionnaire), sexual satisfaction (Hudson's Index of Sexual Satisfaction [ISI] and Berg's Sexual Satisfaction Questionnaire), and quality of sexual life (Sexual Quality of Life-Female [SQOL-F]). We imposed no restrictions on the time of measuring health outcomes after the intervention (follow-up period).
Search Outcomes
We initially identified 224 records. The results were saved to a citation manager in EndNote X8-2. Then, 115 duplicated articles were identified and excluded. After removing duplicates, we excluded 86 records by reviewing titles and abstracts. The full texts of the remaining 23 records were retrieved and screened for eligibility. We excluded four articles: two reported unwanted outcomes, and two were not in English. As a result, 19 articles were eligible for inclusion in the systematic review and 18 articles in the meta-analysis. Figure 1 presents the flowchart of this systematic review according to the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) (Page et al., 2021) and illustrates the process of study selection.
Study Selection
First, we imported the documents retrieved from the seven databases into EndNote to check for duplicates. The initial screening was then performed by two reviewers (SCO and ADG), who independently screened the titles and abstracts of potentially relevant studies. The full texts of the studies that met the inclusion criteria were retrieved and evaluated independently for inclusion by the two reviewers (SCO and ADG), who read the full texts and determined whether the studies should be included in the analysis. Any disagreement during the study selection process was resolved by seeking consensus; when consensus could not be reached, a third researcher (ASE) independently reviewed the full text of the study, and the three researchers discussed whether it should be included.
Assessment of Study Quality
Two reviewers (SCO and ADG) independently assessed study eligibility. The quality of the included studies was assessed using RoB 2, the revised Cochrane risk of bias tool for randomized trials (Higgins et al., 2011). Before the assessment, the researchers learned how to use the tool, which covers the following items: (1) randomization process; (2) deviations from intended interventions; (3) missing outcome data; (4) measurement of the outcome; (5) selection of the reported result; and (6) overall bias. Each item has three options: low risk, high risk, and unclear risk. All assessments of the researchers were integrated into a risk of bias graph and risk of bias summary (Table 1).
Two researchers (SCO and ADG) selected original studies on the basis of the inclusion criteria and reviewed the quality of each.Any disagreement between the two researchers regarding the assessment process was resolved through discussion.The quality assessment process was checked by the third researcher (ASE).
Data Abstraction
Two researchers (SCO and ADG) extracted detailed data on publication information (authors' names, publication year, publication country, and journal), objective, study design, participant characteristics (sample size, diagnosis, and age), intervention details, comparators, tools, outcomes, and results of each study using a structured data extraction form, which was confirmed by the third researcher (ASE).
Data Synthesis and Analysis
All statistical analyses were performed using the Comprehensive Meta-Analysis software 3.0. All randomized studies with sufficient data to calculate the standardized mean difference were included in the meta-analyses (Higgins et al., 2022). One of the studies included in the analysis reported only median and min-max values (Dangesaraki et al., 2019); therefore, it was not included in the meta-analysis.
Results are presented as standardized mean differences (SMD) with 95% confidence intervals (CI) for continuous variables. Cochran's Q test, I², and tau-squared (τ²) were used to examine heterogeneity. Heterogeneity was considered present when Cochran's Q test was statistically significant and the I² statistic was > 50% (Borenstein et al., 2011). Another indicator of heterogeneity is the tau-squared (τ²) statistic; if its value is zero, there is no heterogeneity (Quintana, 2015).
A random effects model was used in case of heterogeneity. We weighted the studies included in the meta-analyses using the inverse-variance method. A two-sided p value of < 0.05 was used to indicate statistical significance. A subgroup analysis was carried out on factors thought to affect the homogeneity of the studies. Subgroup analyses with mixed-effects models were applied to examine potential categorical moderators of the effects of the PLISSIT and EX-PLISSIT models (Borenstein et al., 2011). Publication bias was assessed for the outcomes sexual function and sexual satisfaction. The funnel plot, Begg and Mazumdar's rank correlation test, and Egger's regression asymmetry test were used to evaluate publication bias. There was evidence of publication bias when the results from both tests were statistically significant (Borenstein et al., 2011).
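As an illustration of the statistics named above, the sketch below (our own, not the Comprehensive Meta-Analysis software used in this study; the effect sizes and variances are hypothetical) computes Cochran's Q, I², the DerSimonian-Laird τ², and a random-effects pooled SMD from inverse-variance weights:

```python
import numpy as np

# Hypothetical per-study SMDs and sampling variances (illustration only;
# these are not the data analyzed in this review).
y = np.array([1.2, 0.4, 2.1, 0.9])      # standardized mean differences
v = np.array([0.10, 0.08, 0.15, 0.12])  # sampling variances

w = 1.0 / v                             # inverse-variance (fixed-effect) weights
y_fixed = np.sum(w * y) / np.sum(w)     # fixed-effect pooled estimate

Q = np.sum(w * (y - y_fixed) ** 2)      # Cochran's Q
df = len(y) - 1
I2 = max(0.0, (Q - df) / Q) * 100       # I-squared, as a percentage
tau2 = max(0.0, (Q - df) / (np.sum(w) - np.sum(w**2) / np.sum(w)))  # DL tau^2

# Random-effects pooling, used when heterogeneity is present.
w_re = 1.0 / (v + tau2)
smd = np.sum(w_re * y) / np.sum(w_re)
se = np.sqrt(1.0 / np.sum(w_re))
print(f"Q = {Q:.3f} (df = {df}), I2 = {I2:.1f}%, tau2 = {tau2:.3f}")
print(f"SMD = {smd:.3f}, 95% CI ({smd - 1.96*se:.3f}, {smd + 1.96*se:.3f})")
```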
Search Results
A total of 224 studies written in English were retrieved from all databases. Then, 115 duplicate articles were excluded. The remaining 109 articles were reviewed by title and abstract, and 86 studies that did not meet the inclusion criteria were excluded. The full texts of 23 articles were evaluated for relevance, and a total of four articles were excluded: two whose language was not English and two whose measured outcomes were not appropriate (Fig. 1). The risk of bias of the 19 articles meeting the eligibility criteria was evaluated, and they were included in the review. However, since one of these studies (Dangesaraki et al., 2019) did not report mean and SD values, it was excluded from the analysis, and 18 studies were included in the meta-analysis.
Study Characteristics
A total of 1501 women with mean ages ranging from 18 to 60 years participated in the 19 randomized controlled trials included in the study. The samples of the evaluated studies comprised HIV-positive women (one study), women with MS (three), post-hysterectomy women (two), women with cyclic mastalgia (one), puerperal women (three), women with diabetes mellitus (one), women with sexual problems (two), pregnant women (three), women with a high body mass index (one), women with gynecologic cancer (one), and women with spinal cord injury (one). Eighteen of the studies were carried out in Iran, and one was conducted in Turkey (Table 2). The duration of the counseling sessions based on the PLISSIT and EX-PLISSIT models differed: the number of counseling sessions varied from 1 to 5, and the duration of each session varied from 30 to 120 min. In all studies, one comparison group was a control group that received routine care; the other comparison groups were solution-focused group counseling, the BETTER model, and the SHM.
Sexual quality of life was evaluated in four studies, with the SQOL-F used as the measurement tool (Azari-Barzandig et al., 2020; Dangesaraki et al., 2019; Kazemi et al., 2021; Mohammadi et al., 2022). In these studies, sexual quality of life was reevaluated 2 months after the intervention; only one study evaluated it 2 weeks after the intervention (Kazemi et al., 2021).
Risk of Bias
The bias assessment of the nineteen RCTs using the Cochrane RoB 2.0 instrument showed that six studies were at high risk of bias and thirteen studies raised some concerns. These studies lacked detailed randomization methods and allocation concealment. In addition, data collection was based on self-report questionnaires, the participants and practitioners could not be blinded, and no intention-to-treat estimates were reported. Table 1 shows the details of the risk of bias assessment for the included studies.
Quality of Sexual Life
Outcome data were available for three trials (1164 women). A random effects model was selected because these studies were heterogeneous (τ² = 0.584, Q = 123.725, df = 11, p < .001, I² = 91.109%). The forest plot in Fig. 4 illustrates that there was no difference in the quality of sexual life scores of the PLISSIT and EX-PLISSIT groups and the comparison group (SMD: −0.666; 95% CI −0.520, 0.389; p > 0.05). The SQOL-F sub-dimension scores of the groups were also examined. No difference was found between the PLISSIT
Subgroup Analysis
Two months (5.588) and 3 months (4. The intervention performed by a trained midwife (4.699) had a greater effect in improving sexual function than that performed by research team members (0.824) (effect size: 1.770; 95% CI 0.685, 2.854; p < 0.05) (Table 3). Three months after the intervention (2.042) had a greater effect in improving sexual satisfaction than 1 and 2 months (1.004, 95% CI 0.685, 1.323, p < 0.05). There was no significant overall effect of the PLISSIT and EX-PLISSIT models on sexual satisfaction by comparison group, study population, or provider of the intervention (p > 0.05) (Table 3).
Publication Bias
Based on the funnel plots of standard error by SMD for the outcomes sexual function and sexual satisfaction, it was not possible to conclude that publication bias was likely (Figs. 5 and 6): the Begg and Mazumdar rank correlation test (p = 0.465) and Egger's regression test (p = 0.138) were not significant for the outcome sexual function, and likewise the Begg and Mazumdar rank correlation test (p = 0.916) and Egger's regression test (p = 0.408) were not significant for the outcome sexual satisfaction.
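For readers unfamiliar with Egger's regression asymmetry test, the following minimal sketch (our own illustration with hypothetical data, not the analysis performed in this study) regresses the standardized effect on precision; an intercept significantly different from zero indicates funnel-plot asymmetry:

```python
import numpy as np
from scipy import stats

# Hypothetical effect sizes and standard errors (illustration only).
y  = np.array([1.2, 0.4, 2.1, 0.9, 1.5])
se = np.array([0.32, 0.28, 0.39, 0.35, 0.30])

z = y / se          # standardized effects
prec = 1.0 / se     # precisions

# OLS of z on precision with an intercept; the intercept measures asymmetry.
X = np.column_stack([np.ones_like(prec), prec])
beta, _, _, _ = np.linalg.lstsq(X, z, rcond=None)
resid = z - X @ beta
n, k = X.shape
sigma2 = resid @ resid / (n - k)
cov = sigma2 * np.linalg.inv(X.T @ X)
t_intercept = beta[0] / np.sqrt(cov[0, 0])
p_intercept = 2 * stats.t.sf(abs(t_intercept), df=n - k)
print(f"Egger intercept = {beta[0]:.3f}, t = {t_intercept:.3f}, p = {p_intercept:.3f}")
```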
Discussion
Our meta-analysis revealed that individual sexual counseling based on the PLISSIT and EX-PLISSIT models, applied in different populations, improves sexual function. There was also a positive and significant improvement, but only in the "sexual and communication satisfaction" sub-dimension of sexual life quality.
Sexuality is affected by many physiological, cultural, social, and psychological factors (McCool-Myers et al., 2018). This effect is more pronounced in pathologies directly related to reproduction and in medical conditions such as multiple sclerosis and spinal cord injury that affect the muscular and neurological systems (Azimi et al., 2019; Courtois et al., 2017). In addition, chronic diseases and periods of biological and hormonal change such as pregnancy, the postpartum period, and menopause are known to directly and negatively affect sexual life (Sentürk Erenel et al., 2015; Gutzeit et al., 2020; Rahmanian et al., 2019). The women in our study had different characteristics (HIV positive, MS, post-hysterectomy, pregnant, postpartum, cancer, diabetes mellitus, sexual problems, high body mass index, and spinal cord injury). The study revealed that individual sexual counseling based on the PLISSIT and EX-PLISSIT models had a positive effect on sexual function. Similar to our results, two systematic reviews examining the effectiveness of sexual counseling based on the PLISSIT model reported that the use of the model was effective in improving sexual function (Kırıcı & Ege, 2021; Tuncer & Oskay, 2022). One meta-analysis investigating the effect of PLISSIT-based sexual counseling on sexual dysfunction in both women and men reported that the counseling positively affects sexual function (Mashhadi et al., 2022).
Our meta-analysis showed that individual sexual counseling based on the PLISSIT and EX-PLISSIT models had a positive effect on the sexual and communication satisfaction sub-dimension of the SQOL scale. It is known that as sexual function improves, the harmony between couples increases and the general quality of life is positively affected (Jones et al., 2018; Mallory et al., 2019). In this context, our meta-analysis findings support the literature. However, no significant effect was observed in this study in the other sub-dimensions of the SQOL scale (feeling of worthlessness, psychosexual feelings, and suppression of sexual expression). This may be related to the multidimensional and complex nature of sexuality.
According to the subgroup analysis, the effectiveness of sexual counseling based on the PLISSIT and EX-PLISSIT models on sexual function and satisfaction was high in the 2nd and 3rd months, whereas the effect on sexual function decreased by the 6th and 7th months. This shows that the effectiveness of counseling based on the PLISSIT and EX-PLISSIT models declines over the following periods. It is therefore important to evaluate the effects of sexual counseling at regular intervals and to repeat the intervention when necessary (Kharaghani et al., 2020).
In studies with a control group, the effect of the PLISSIT and EX-PLISSIT models on sexual function was higher than in studies with other comparison groups (BETTER and SHM groups). This may be because participants in the control-group studies received only routine care, without any active intervention.
There was no standardized model-based counseling in the reviewed studies, and counseling was carried out by different practitioners with different populations. According to the subgroup analysis, the effectiveness of sexual counseling based on the PLISSIT and EX-PLISSIT models on sexual function is higher in women with diseases (MS, HIV, SCI, diabetes mellitus, CM, and obesity) and post-hysterectomy women than in women with sexual problems and pregnant or postpartum women. Sexual interest and desire decrease due to physical and physiological changes during pregnancy and postpartum. The frequency of sexual intercourse during pregnancy decreases due to cultural practices; lack of awareness that sexual intercourse during pregnancy is not contraindicated unless advised by an obstetrician; beliefs that it may cause miscarriage, stillbirth, or fetal infections; lack of appropriate counseling by health-care providers about safe sexual practices during pregnancy; and lack of communication between spouses about their sexual expectations and needs during pregnancy (Fernández-Carrasco et al., 2020; Thapa et al., 2023). During the puerperium, sexual interest and desire decrease due to body changes, pain, fatigue, anxiety, and role changes (Drozdowskyj et al., 2020). The models are likely less effective in women during pregnancy and the puerperium because sexual interest and desire decline in these periods and the focus shifts to the baby. Furthermore, according to the subgroup analysis, we identified that delivery of the counseling by trained midwives was an important moderator of increased sexual function. This may be due to the longer duration of counseling provided by trained midwives. In addition, midwives, like nurses, are the main care providers who first contact the patient and are perceived as more reliable by patients (Demir et al., 2020). These factors may explain the high effectiveness of midwife-led interventions.
Studies in this meta-analysis were rated as having "some concerns" or "a high risk of bias." The reviewed studies did not include blinding, and the data were collected using self-report questionnaires, which increased the risk of bias. In addition, allocation concealment was not ensured during the allocation process, and no appropriate analyses were used in the evaluation of missing data, which increased the risk of bias in the reviewed studies. Therefore, the results of this meta-analysis should be evaluated in light of the risk-of-bias findings. Experimental studies with a low risk of bias are needed to clearly demonstrate the effect of individual counseling based on the PLISSIT and EX-PLISSIT models on sexual health parameters.
Conclusion and Recommendations
This meta-analysis showed that sexual counseling based on the PLISSIT and EX-PLISSIT models provided significant improvements in sexual function and in the "sexual and communication satisfaction" sub-dimension of sexual quality of life. According to the subgroup analysis, the effect of sexual counseling based on the PLISSIT and EX-PLISSIT models on sexual function was moderated by the time of evaluation after the intervention, the type of comparison group, the study population, and by whom the intervention was applied. In addition, methodologically stronger experimental studies are needed in this area to explain the current effect more clearly.
Fig. 1 Flowchart of the literature search and study selection
Fig. 4 Meta-analyses for quality of sexual life. SMD: standardized mean difference; CI: confidence interval; IV: inverse variance
Fig. 5 Funnel plot for sexual function
Table 1 Risk of bias summary (ROB 2.0) for RCTs. +: low risk; −: high risk; ?: some concerns
Measurements taken 2-3 months after the intervention had a greater effect in improving sexual function than those with other measurement times (effect size: 2.489; 95% CI 1.256, 3.721; p < 0.05). In studies with a control group, the effect of the PLISSIT and EX-PLISSIT models on sexual function was higher than in other studies (BETTER and SHM groups) (effect size: 1.514; 95% CI 0.614, 2.414; p < 0.05). The effect of the PLISSIT and EX-PLISSIT models on the sexual function of women with diseases (2.792) and with hysterectomy (1.432) was greater than in pregnant or postpartum women (0.899), women with cancer (0.127), and women with sexual problems.
Table 3 Results of subgroup analysis. k: number of samples; SHM: Sexual Health Model; MS: multiple sclerosis; SCI: spinal cord injury; CM: cyclic mastalgia; HBMI: high body mass index
Application of zeta-function techniques to the Compactified Gross-Neveu Model
Jorge M. C. Malbouisson
We consider the N-component D-dimensional Euclidean massive Gross-Neveu model, confined in a (D − 1)-dimensional cubic box (edge L), at finite temperature (T). Using ζ-function analytical regularization, we determine the large-N effective coupling constant (g) as a function of L, T and the fixed coupling constant (λ), for the cases D = 2, 3, 4. In all cases, we find that g tends to 0 when L goes to 0 or T goes to infinity, corresponding to an "asymptotic freedom" type of behavior. For finite L and T, distinct behaviors appear depending on the value of λ. For small λ only "asymptotic freedom" occurs. However, for λ greater than a "critical" value (λ_c), starting from small values of L (and low enough temperatures), a divergence of g appears as L approaches a value L_c(λ) which lies in a finite interval for λ ≥ λ_c. Such behavior suggests that the system becomes spatially confined in a box of size L_c(λ) if λ is large enough. If the temperature is raised, the divergence disappears at a temperature T_d(λ) which can be considered as a deconfining temperature. Taking the fermionic mass as the constituent quark mass, the confining length and the deconfining temperature obtained are comparable with the estimated values for hadrons.
Fifth International Conference on Mathematical Methods in Physics - IC2006
April 24-28, 2006, Centro Brasileiro de Pesquisas Físicas, Rio de Janeiro, Brazil
Introduction
The difficulty of handling QCD analytically has stimulated, over the last decades, the use of phenomenological approaches and the study of effective, simplified theories to get clues about the behavior of hadronic systems. In the realm of effective field theories, renormalizability is not a definitive requirement for a theoretical model to have physical meaning. The simplest effective model that may be conceived to describe quark interactions is a direct four-fermion coupling, where the gluon fields are integrated out and all color degrees of freedom are ignored, in a way similar to the Fermi treatment of the weak interaction; this corresponds to the Gross-Neveu (GN) model [1], considered in space-time dimension D = 4. In fact, the Gross-Neveu model is not (perturbatively) renormalizable for dimensions greater than D = 2 but, for D = 3, the N-component massive Gross-Neveu model has been constructed in the large-N limit [2].
In this report we generalize previous work on the 3-D Gross-Neveu model with one compactified spatial dimension, at zero [3] and finite [4] temperature, to arbitrary dimension D, studying in particular the cases D = 2, 3, 4 with all spatial dimensions compactified. We consider the massive GN model in D dimensions at finite temperature with d (≤ D) compactified coordinates, one of them being the imaginary time, whose compactification length is the inverse temperature. The compactification of spatial dimensions, engendered through a generalization of the Matsubara procedure (antiperiodic boundary conditions), corresponds to considering the system contained in a parallelepiped "box" with bag-model boundary conditions on its faces [5,6]. In other words, our system is defined inside a spatial region in thermal equilibrium at some temperature. We study the behavior of the system as a function of its size and of the temperature. The large-N GN model, in arbitrary dimension D, will be regularized along the lines of the previous papers, that is, by subtracting polar terms coming from Epstein-Hurwitz generalized zeta-functions. We show that the model treated in this way has the same structure for all values of D, which allows us to conjecture that it makes sense, in particular, for the space-time dimension D = 4. This assumption is reinforced a posteriori by the fact that the numerical results for the confining spatial dimensions and the deconfining temperature are of the same order of magnitude as the corresponding values for D = 2 and D = 3, and moreover are in the right ballpark of the experimentally measured values.
Similar ideas have been applied in different physical situations: for spontaneous symmetry breaking in the compactified φ⁴ model [7,8]; for second-order phase transitions in superconducting films, wires and grains [9]; for the Casimir effect for bosons [10] and for fermions in a box [11]. For the Gross-Neveu model, discussed in the present paper, we obtain simultaneously an asymptotic-freedom type of behavior and spatial confinement, for low enough temperatures. We also show that, as the temperature is increased, a deconfining transition occurs. We calculate the values of the confining lengths and the deconfining temperature and compare the results with the values obtained from experiments and lattice calculations.
Compactified Gross-Neveu model
We consider the Wick-ordered, massive, N-component Gross-Neveu model in a D-dimensional Euclidean space, described by the Lagrangian density

ℒ = ψ̄(x)(i γ^µ ∂_µ + m) ψ(x) + (u/2) (ψ̄(x) ψ(x))²,

where m is the mass, u is the coupling constant, x is a point of R^D and the γ's are the Dirac matrices. The quantity ψ(x) is a spin-1/2 field having N (flavor) components, ψ_a(x), a = 1, 2, ..., N, and summation over flavor and spin indices is understood. We shall take the large-N limit (N → ∞), which permits considerable simplification. We use natural units, ħ = c = 1. The objective is to determine the renormalized large-N (effective) coupling constant when d (≤ D) Euclidean coordinates, say x_1, ..., x_d, are compactified. The compactification is engendered via a generalized Matsubara prescription, which corresponds to considering the system with toroidal topology in the compactified directions, (S¹)^d × R^{D−d}. In other words, the coordinates x_i are restricted to segments of length L_i (i = 1, 2, ..., d), with the field ψ(x) satisfying anti-periodic boundary conditions. If we choose one of the coordinates to represent the imaginary (Euclidean) time (say x_d), such a scheme leads to the system at finite temperature, with d − 1 compactified spatial dimensions; in this case, L_d stands for β = 1/T, the inverse of the temperature. Otherwise, with all x_i referring to spatial coordinates, the model refers to d compactified dimensions at T = 0. For fermions, spatial compactification corresponds to the system being constrained by bag-model boundary conditions (no outgoing currents) [5,6], "living" inside a d-dimensional parallelepiped "box" whose parallel faces are separated by distances L_i (i = 1, 2, ..., d) [5,6]. In any case, to describe the model with d compactified Euclidean coordinates, the Feynman rules should be modified following the Matsubara replacements

∫ dk_i/(2π) → (1/L_i) Σ_{n_i = −∞}^{+∞},   k_i → ν_i = 2π(n_i + 1/2)/L_i   (i = 1, ..., d).

The large-N effective coupling constant between the fermions will be defined in terms of the four-point function at zero external momenta. At leading order in 1/N, summing chains of one-loop (bubble) diagrams, the {L_i}-dependent four-point function has the formal expression

Γ^{(4)}_{Dd}(0; {L_i}) = u / [1 + N u Σ_{Dd}({L_i})],   (2.3)

where the {L_i}-dependent Feynman one-loop (bubble) diagram Σ_{Dd}({L_i}) is given by Eq. (2.4); in that expression, the ν_i = 2π(n_i + 1/2)/L_i (i = 1, ..., d) are the Matsubara frequencies and k stands for a continuous (D − d)-dimensional vector in momentum space.
To define a renormalized effective coupling constant, we have to handle the ultraviolet divergences of Σ_{Dd}({L_i}). In order to simplify the use of regularization techniques, the dimensionless quantities b_i = (mL_i)^{-2} (i = 1, ..., d) and q_j = k_j/(2πm) (j = d+1, ..., D) are introduced. In terms of these quantities, the one-loop diagram Σ_{Dd}(s; {b_i}) is expressed through the function U_{Dd}(µ; {b_i}) defined in Eq. (2.6).
This shows explicitly that Σ_{Dd} has dimension of mass^{D−2}, which is the inverse of the mass dimension of the coupling constant. We shall use a modified minimal subtraction scheme, employing concurrently dimensional and analytical regularizations, where the terms to be subtracted are poles (for even D ≥ 2) of the Epstein-Hurwitz zeta-functions [3]. Thus, performing the integral over q = (q_{d+1}, ..., q_D) in Eq. (2.6), using well-known dimensional regularization formulas, we obtain Eq. (2.7). Transforming the summations over half-integers into sums over integers, Eq. (2.7) can be written in terms of the multiple (d-dimensional) Epstein-Hurwitz zeta-function Z_d^{h²}(η, {a_i}), defined in Eq. (2.9).
The function Z_d^{h²}(η, {a_i}) can be analytically extended to the whole complex η-plane [8], through a generalization of the procedure presented in Refs. [12,13]; for reviews of applications of zeta-function regularization, see also Ref. [14]. We find the representation of Eq. (2.10) (see the Appendix), where K_α(z) is the Bessel function of the third kind. This implies that the function U_{Dd}(µ; {b_i}) can also be analytically extended to the complex µ-plane.
Using an identity for the Γ-functions and grouping similar terms appearing in the parcels of Eq. (2.8), we find Eq. (2.11), with W_{Dd}(µ; {b_i}) given by Eq. (2.12), where {ρ_j} stands for the set of all combinations of the indices {1, 2, ..., d} with j elements and the functions F_{Dj}(µ; a_1, ..., a_j) (j = 1, ..., d) are defined by Eq. (2.13). The use of Eq. (2.11) leads directly to an analytic extension of Σ_{Dd}(s; {b_i}) for complex values of s, within a vicinity of s = 2. We notice that the first term in this expression for Σ_{Dd}(s; {b_i}), involving the Γ-functions, does not depend on the parameters b_i, that is, it is independent of the compactification lengths L_i (i = 1, ..., d).
For even dimensions D ≥ 2, this term is divergent due to the poles of the Γ-functions. Since we are using a modified minimal subtraction scheme, where the terms to be subtracted are poles appearing at the physical value s = 2, this term should be suppressed to give the renormalized single bubble function, Σ^R_{Dd}({b_i}). For the sake of uniformity, this term is also subtracted in the case of odd dimensions, where no poles of the Γ-functions are present; in such a situation, we perform a finite renormalization. The second term, which depends on the compactification lengths L_i and arises from the regular part of the analytical extension of the Epstein-Hurwitz zeta-functions, gives the renormalized one-loop diagram, Eq. (2.15). Notice that, replacing b_i by (mL_i)^{-2} in that expression, we recover explicitly Σ^R_{Dd}({L_i}). Now, we proceed to analyze the behavior of the large-N coupling constant in various cases.
Large-N renormalized coupling constant
As is usual in four-body interacting field theories, we shall define the coupling constant in terms of the four-point function at fixed external momenta, taking p = 0. The coupling constant is then interpreted as measuring the strength of the interaction between the fermions. Thus, inserting Σ^R_{Dd}({b_i}) into Eq. (2.3) and taking the limit N → ∞ and u → 0, with Nu = λ fixed as usual, we find the large-N ({b_i}-dependent) renormalized coupling constant, for d (≤ D) compactified dimensions, as

g_{Dd}({b_i}) = λ / [1 + λ Σ^R_{Dd}({b_i})].   (3.1)

This is the basic result for the subsequent analysis. Some general properties of the renormalized effective coupling constant can be obtained from the fact that the dependence of Σ^R_{Dd} on {b_i} is dictated by the Bessel functions of the third kind appearing in Eq. (2.13).
First, notice that if all b_i tend to zero, that is, if all L_i → ∞, then Σ^R_{Dd} → 0 and therefore g_{Dd}({b_i}) → λ in this limit. This is an expected result, expressing a consistency condition of our calculations: when all the compactification lengths tend to infinity, the renormalized effective coupling constant must reduce to the renormalized fixed coupling constant in free space at zero temperature, λ. On the other hand, if any of the b_i tends to ∞ (that is, if any compactification length L_i goes to 0), the renormalized single bubble diagram Σ^R_{Dd} diverges, implying the vanishing of the renormalized effective coupling constant g_{Dd}, irrespective of the value of λ. This suggests that the system presents an ultraviolet asymptotic-freedom type of behavior for short distances and/or for high temperatures.
Interesting features should appear if Σ^R_{Dd} acquires negative values; in such a situation, depending on the value of λ, the renormalized effective coupling constant may diverge for finite values of the lengths L_i. Such a possibility, and its consequences, will be explicitly investigated in the following subsections, where we consider, in particular, the compactified model for space-time dimensions D = 2, 3, 4 at zero temperature. The discussion of finite-temperature effects is postponed to the section on the effect of temperature below.
Compactified Gross-Neveu model in D = 2 at T = 0

Firstly, consider the case of a two-dimensional space-time (D = 2), where only the spatial coordinate is compactified, that is, d = 1, and fix L_1 = L. Inserting these values of D and d into Eqs. (2.12) and (2.13), and remembering that b_1 = b = (mL)^{-2}, Eq. (2.15) becomes the expression for Σ^R_{21}(L) given in Eq. (3.3),
where the function E_1(x) is given by Eq. (3.4). As stated before, Σ and λ are dimensionless for D = 2. Notice that L has dimension of inverse mass, so the arguments of the Bessel functions are dimensionless, as they should be. We can calculate Σ^R_{21}(L) numerically by truncating the series in Eq. (3.4), which defines the function E_1(y), at some value n = N. For moderate and large values of mL (say, 0.5), N can be taken as a relatively small value; for example, for mL = 0.5 with N = 36, we obtain the correct value of Σ^R_{21} to six decimal places.
However, due to the presence of positive and negative parcels in the summation and the fact that the functions K_0(z) and K_1(z) diverge for z → 0, large values of N are required to calculate Σ^R_{21} for small values of mL; for mL = 0.005, we have to take N = 4500 to obtain Σ^R_{21} to six decimal places. Some features of the function Σ^R_{21}(L) can be obtained from the numerical treatment of Eq. (3.3) and can also be visualized from Fig. 1, where this quantity is plotted as a function of mL. As already remarked on general grounds, Σ^R_{21}(L) diverges (→ +∞) when L → 0 and tends to 0, through negative values, as L → ∞. We find that Σ^R_{21}(L) vanishes for a specific value of L, which we denote by L^{(2)}_{min}, being negative for all L > L^{(2)}_{min}; it also assumes a minimum (negative) value at a value of L we denote by L^{(2)}_{max}, for reasons that will be clarified later. Numerically, we find Σ^{R,min}_{21} ≈ −0.0445. This dependence of Σ^R_{21} on L, in particular the fact that Σ^R_{21}(L) is negative for L > L^{(2)}_{min}, has a remarkable influence on the behavior of the renormalized effective coupling constant.
In the present case, Eq. (3.1) becomes

g_{21}(L, λ) = λ / [1 + λ Σ^R_{21}(L)].

We first note that, independent of the value of λ, g_{21}(L, λ) approaches 0 as L → 0; therefore, the system presents a kind of asymptotic-freedom behavior for short distances. On the other hand, starting from a low value of L (within the region of asymptotic freedom) and increasing the size of the system, g_{21} will present a divergence at a finite value of L (L_c), if the value of the fixed coupling constant (λ) is high enough. In fact, this will happen for all values of λ above the "critical value" λ^{(2)}_c = −(Σ^{R,min}_{21})^{-1}. We interpret this result by stating that, in the strong-coupling regime (λ > λ^{(2)}_c), the system gets spatially confined in a segment of length L^{(2)}_c(λ). The behavior of the effective coupling as a function of mL is illustrated in Fig. 2, for some values of the fixed coupling constant λ.
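The numerical procedure behind these statements (truncate the Bessel series, then locate the smallest root of 1 + λ Σ^R_{21}(L) = 0, which gives the confining length discussed just below) can be sketched as follows. Since Eq. (3.4) is not reproduced in this text, sigma_R is an illustrative alternating Bessel-K series with the same qualitative features (divergence as mL → 0, negative values at large mL), not the paper's actual expression; λ = 30 is likewise just a sample strong-coupling value.

```python
# Illustrative truncation of a Bessel-K series and location of a confining
# length as the smallest root of 1 + lambda * Sigma(L) = 0 (sketch only).
import numpy as np
from scipy.special import kn
from scipy.optimize import brentq

def sigma_R(mL, N=500):
    """Truncated alternating series of Bessel functions of the third kind.
    Stand-in for Eqs. (3.3)/(3.4); the coefficients here are illustrative."""
    n = np.arange(1, N + 1)
    return np.sum((-1.0)**(n + 1) * (kn(0, n * mL) - n * mL * kn(1, n * mL))) / np.pi

lam = 30.0  # sample value of the fixed coupling constant

# f changes sign between small mL (where Sigma is large and positive) and the
# region where Sigma is negative enough, so brentq brackets the smallest root.
f = lambda mL: 1.0 + lam * sigma_R(mL)
mL_c = brentq(f, 0.05, 2.0)
print(f"illustrative confining length: mL_c = {mL_c:.4f}")
```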
From the definition of λ^{(2)}_c, we find that, for λ = λ^{(2)}_c, the divergence of g_{21}(L, λ) is reached as L approaches the value that makes Σ^R_{21} minimal, which we denoted by L^{(2)}_{max}. On the other hand, L^{(2)}_c(λ) tends to L^{(2)}_{min}, the zero of Σ^R_{21}, as λ → ∞.
In other words, the confining length L^{(2)}_c(λ) always lies between L^{(2)}_{min} and L^{(2)}_{max}. For a given value of λ, the confining length L^{(2)}_c(λ) can be found numerically by determining the smallest solution of the equation 1 + λ Σ^R_{21}(L) = 0. These results are presented in Fig. 3, where we plot mL^{(2)}_c(λ).

Compactified 3-D Gross-Neveu model at T = 0

We start by considering the 3-D model at zero temperature, with two compactified dimensions. We should then take D = 3 and d = 2 in formulas (2.12)-(2.15), with L_1 and L_2 being the compactification lengths associated with the two spatial coordinates x_1 and x_2 (measured in units of m^{-1} through √b_1 and √b_2, respectively). Using that K_{±1/2}(z) = √(π/2z) e^{−z} and summing geometric series, we find the expression for the renormalized bubble diagram given in Eq. (3.6), where the function G_2(x, y) is defined by the accompanying double series. Notice that the numerical computation of Σ^R_{32}(L_1, L_2) is greatly facilitated by the fact that the double series defining the function G_2(y, z) is rapidly convergent.
We remark, initially, that by taking one of the compactification lengths to infinity in expression (3.6), all terms depending on it vanish and we regain the renormalized bubble diagram for the case where only one spatial dimension is compactified in the 3-D model; thus, in particular, all the results of Ref. [3] follow. We see explicitly that if both L_1 and L_2 tend simultaneously to ∞, Σ^R_{32} goes to zero and g_{32} → λ, confirming the general statement made above. Also, if either L_1 or L_2 tends to 0, Σ^R_{32} → +∞, implying that the system becomes asymptotically free, with the effective coupling constant vanishing in this limit. However, instead of working on more general grounds, we restrict our analysis to the case where the system is confined within a square of size L, by taking L_1 = L_2 = L, without losing the generality of our results.
The quantity Σ^R_{32}(L, L)/m behaves, as a function of L (measured in units of m^{-1}), in the same way as the quantity shown in Fig. 1. We find that it vanishes for a specific value of L and, for λ large enough, the renormalized effective coupling constant diverges at a finite length L^{(3)}_c(λ). The behavior of the effective coupling as a function of L, for increasing values of the fixed coupling constant λ, shows the same pattern as that of Fig. 2 for the preceding case. We find that the divergence occurs at L^{(3)}_c(λ) ≤ L^{(3)}_{max}. Again, we interpret such a result by considering the system spatially confined, in the sense that, starting with L small (in the region of asymptotic freedom), the size of the square cannot grow above L^{(3)}_c(λ).
Compactified 4-D Gross-Neveu model at T = 0

For D = 4 we take all three spatial coordinates compactified, L_1 = L_2 = L_3 = L and d = 3 in Eqs. (2.12)-(2.15), which corresponds to considering the system confined within a cubic box. We obtain the renormalized bubble Σ^R_{43}(L), where the functions H_j, j = 1, 2, 3, are defined by Eqs. (3.9)-(3.11). The quantity Σ^R_{43}(L)/m² has the same behavior as its counterparts for D = 2 and D = 3. We find numerically that Σ^R_{43}(L) vanishes for L = L^{(4)}_{min}. As in the other cases discussed in detail before, the renormalized effective coupling constant diverges at a finite length L^{(4)}_c(λ) for all λ above the critical value λ^{(4)}_c = −(Σ^{R,min}_{43})^{-1} ≈ 439.5 m^{-2}, meaning that the system gets confined in a cubic box of edge L^{(4)}_c(λ). The plot of L^{(4)}_c(λ), as a function of l = λ/λ^{(4)}_c, shows features similar to those of Fig. 3, but with the limiting values L^{(4)}_{min} and L^{(4)}_{max}.
Effect of temperature on the compactified Gross-Neveu model
We now discuss the effect of raising the temperature on the renormalized effective coupling constant for the Gross-Neveu model with all spatial dimensions compactified. Finite temperature is introduced through the compactification of the time coordinate, with the compactification length given by L_D = β = 1/T. Although in a Euclidean theory time and space coordinates are treated on the same footing, the interpretations of their compactifications are rather distinct. On general grounds, we expect the dependence of Σ^R_D and g_D on β to follow patterns similar to those of the dependence on L. In fact, as β → 0 (that is, T → ∞), Σ^R_D → ∞, implying that g_D → 0 independently of the value of the fixed coupling constant λ; thus, we have asymptotic-freedom behavior at very high temperatures. Therefore, we expect that, starting from the compactified model at T = 0 with λ ≥ λ^{(D)}_c, raising the temperature will lead to the suppression of the divergence of g_D and the consequent deconfinement of the system. In this section, we discuss this deconfining transition and determine the deconfining temperature for the cases D = 2, 3, 4.
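The deconfinement criterion used below (the value of β at which the minimum over L of the inverse coupling vanishes) can be sketched numerically in the same illustrative setting as before. The stand-in bubble simply adds independent Bessel-K series for the L and β directions; the actual Σ^R_{22} of the next subsection also contains a genuine double series, the function E_2(x, y), which is omitted here.

```python
# Sketch of the deconfining-temperature search: scan beta for the point at
# which min over L of (1 + lam*Sigma(L, beta))/lam = 0 (illustrative Sigma).
import numpy as np
from scipy.special import kn
from scipy.optimize import brentq

def series(x, N=200):
    n = np.arange(1, N + 1)
    return np.sum((-1.0)**(n + 1) * (kn(0, n * x) - n * x * kn(1, n * x)))

def sigma_R(mL, mbeta):
    # Additive stand-in; the true bubble also carries an E_2-type cross term.
    return (series(mL) + series(mbeta)) / np.pi

def min_inverse_g(mbeta, lam=30.0):
    grid = np.linspace(0.2, 5.0, 400)
    return min((1.0 + lam * sigma_R(mL, mbeta)) / lam for mL in grid)

# Small beta (high T): the minimum stays positive (no divergence of g);
# large beta: it goes negative, so a deconfining point exists in between.
mbeta_d = brentq(min_inverse_g, 0.5, 5.0)
print(f"illustrative m*beta_d = {mbeta_d:.3f}  ->  T_d = {1.0/mbeta_d:.3f} m")
```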
D = 2
We now consider the effect of finite temperature on the 2-D compactified model. For that, we take the second Euclidean coordinate (the imaginary time, x_2) compactified in a length L_2 = β = 1/T, T being the temperature. In this case, the L- and β-dependent bubble diagram, obtained from Eqs. (2.13)-(2.15) with b_1 = L^{-2} and b_2 = β^{-2} (L and β measured in units of m^{-1}), can be written in terms of the function E_1(x) of Eq. (3.4) and a function E_2(x, y) defined by a corresponding double series. Firstly, observe that if β → ∞, all terms depending on β vanish and Σ^R_{22}(L, β) reduces to the zero-temperature expression, Σ^R_{21}(L). On the other hand, independently of the value of λ, if β → 0 then Σ^R_{22}(L, β) → ∞ and the system becomes asymptotically free; therefore, we expect that raising the temperature tends to suppress the divergence of g, favoring the disappearance of the spatial confinement mentioned above. Such reasoning implies that, for a given value of λ ≥ λ^{(2)}_c, there exists a temperature, T^{(2)}_d(λ), at which the divergence in g disappears and the system becomes spatially unconfined. We can determine T^{(2)}_d(λ) by searching for the value of β at which the minimum of the inverse of the effective coupling constant vanishes.
In Fig. 4, we plot g^{-1}_{22}(L, β, λ) as a function of L, for some values of β and a fixed value of λ > λ^{(2)}_c. We find that, in this example with λ = 30, the minimum value of g^{-1}_{22} vanishes for β = β^{(2)}_d(λ), which corresponds to the deconfining temperature T^{(2)}_d = 1/β^{(2)}_d.
D = 3
We now investigate the effect of the temperature in the compactified 3-D Gross-Neveu model by considering the coordinate x_3 (the imaginary time) compactified in a length β = 1/T. Taking L_1 = L_2 = L and proceeding as in the D = 2 case, we find that the minimum of the inverse effective coupling vanishes for β^{(3)}_d ≈ 1.65 m^{-1}, which corresponds to the deconfining temperature T^{(3)}_d ≈ 0.61 m; this result can be illustrated in a figure with the same pattern as that appearing in Fig. 4 for the D = 2 case. The plot of T^{(3)}_d(λ), as a function of l = λ/λ^{(3)}_c, has the same aspect as that in Fig. 5, with the limiting values T^{(3)}_{d,min} ≈ 0.54 m and T^{(3)}_{d,max} ≈ 0.87 m.
D = 4
To look at the effect of finite temperature and determine the deconfining temperature for the fully compactified model in D = 4, we need to compactify the imaginary time besides the spatial coordinates. With b_4 = β^{-2}, measuring the lengths in units of m^{-1}, we find the bubble Σ^R_{44}(L, β) from Eqs. (2.12)-(2.15), where the functions H_1, H_2 and H_3 are given by Eqs. (3.9)-(3.11) and H_4(x, y, z, w) is defined by an analogous quadruple series. As before, we can determine the deconfining temperature by searching for the value of β(λ) for which the minimum of the inverse of the effective renormalized coupling constant, g^{-1}_{44}(L, β, λ) = (1 + λ Σ^R_{44}(L, β))/λ, vanishes. For example, taking the specific case of λ = 620 m^{-2}, we find the value of β at which this minimum vanishes, corresponding to the deconfining temperature T^{(4)}_d ≈ 0.59 m. However, in the case of D = 4, the lowest value of the deconfining temperature is T^{(4)}_{d,min} = 0; this means that, for λ = λ^{(4)}_c, the system is confined at T = 0 but becomes unconfined at any finite T; the deconfining temperature is bounded above by T^{(4)}_{d,max}.
Concluding remarks
We have analyzed the N-component D-dimensional massive Gross-Neveu model with compactified spatial dimensions, both at zero and finite temperature. The large-N effective coupling constant g, for T = 0, shows a kind of asymptotic-freedom behavior, vanishing when the compactification length tends to zero, irrespective of the value of the fixed coupling constant λ. In the strong-coupling regime, where the fixed coupling constant is greater than some critical value, starting from small compactification lengths and increasing the size of the system, a divergence of the renormalized effective coupling constant appears at a given length, L_c(λ), signaling that the system becomes spatially confined. When the temperature is raised, a deconfining transition occurs at a temperature T_d(λ), as the minimum of the inverse of the renormalized effective coupling constant reaches zero. These general aspects of the model hold for arbitrary values of D, as explicitly shown for D = 2, 3, 4. It should be emphasized that these results are intrinsic to the model, not emerging from any adjustment. The limiting values of L_c(λ) and T_d(λ) depend only on the fermion mass. Thus, to get an estimate of these values we have to fix the parameter m. To do so, we consider the Gross-Neveu model as an effective theory for the strong interaction (in which the gluon propagators have been shrunk, similarly to the Fermi treatment of the weak force) and take m to be the constituent quark mass, m ≈ 350 MeV ≈ 1.75 fm^{-1} [15]. With such a choice, for the model with D = 3 and both spatial coordinates compactified, we find 0.74 fm < L_c(λ) < 1.20 fm and, correspondingly, 305 MeV > T_d(λ) > 189 MeV. These values should be compared with the experimentally measured proton charge diameter (≈1.74 fm) [16] and the estimated deconfining temperature (≈200 MeV) for hadronic matter [17]. A detailed analysis of such a comparison, for arbitrary dimension and in particular for D = 4, will be presented elsewhere.
Appendix: Analytical continuation of the multivariable zeta function
Here, we summarize the steps to obtain Eq. (2.10). First, rewrite Eq. (2.9) in terms of sums over positive integers.
Figure 3: Plot of the confining length (in units of m^{-1}) as a function of l = λ/λ^{(2)}_c; the horizontal dashed lines correspond to the limiting values mL^{(2)}_{min} and mL^{(2)}_{max}.
Figure 4: Inverse of the effective coupling constant, g^{-1}_{22}, with λ = 30 fixed, as a function of L (in units of m^{-1}), for some values of β (in units of m^{-1}): 2.4, 1.15 and 1.0 (dashed, full and dotted lines, respectively).
Issues in detecting abuse of xenobiotic anabolic steroids and testosterone by analysis of athletes’ urine
Over the last decade the number of laboratories accredited by the International Olympic Committee (IOC) has grown to 25. Nearly half of the ≈90 000 samples tested annually are collected on short notice, the most effective means to deter the use of anabolic androgenic steroids (AAS). The major urinary metabolites of AAS have been characterized and are identified by their chromatographic retention times and full or partial mass spectra. The process of determining whether an athlete has used testosterone (T) begins with finding a T to epitestosterone (E) ratio > 6 and continues with a review of the T/E-time profile. For the user who discontinues taking T, the T/E reverts to baseline (typically ≈1.0). For the extremely rare athlete with a naturally increased T/E ratio, the T/E remains chronically increased. Short-acting formulations of T transiently increase T/E, and E administration lowers it. Among ancillary tests to help discriminate between naturally increased T/E values and those reflecting T use, the most promising is determination of the carbon isotope ratio.
Testing for Anabolic Steroids

accredited laboratories
In 1983 five laboratories were accredited by the International Olympic Committee (IOC) to perform national and international sport drug testing. Now there are 25 accredited laboratories, and several more are training and preparing for the initial inspection and examination. The accreditation process is initiated by a recommendation from the National Olympic Committee and a commitment from the Committee or other relevant national authority to support the laboratory. The laboratories are reaccredited annually on the basis of performance on one annual examination with blind samples and several proficiency tests. In addition, the laboratories are obliged to adhere to rules and regulations laid down in the IOC Medical Code [1]. Of these, the most important is a provision that forbids testing samples from athletes outside of a regulated program that includes sanctions.
For a program to help deter athletes from using drugs, an important factor is geographical access to a laboratory; thus, the ideal distribution of laboratories would be in proportion to population. This is not yet the case, but it is improving. Seventeen of the laboratories are in Europe, three in North America, three in Asia, and one in Australia; South Africa is the most recent addition. There is still no laboratory in South America or the Middle East. A second, equally important factor is the number of tests performed on the national athletes. In some countries, Olympic-caliber athletes are tested many times a year by the national authority and may be tested other times by an International Federation. In other countries, where there is little or no national testing, an elite athlete may be tested only at major national or international competitions.
results reported
The total number of tests conducted by IOC-accredited laboratories and the percentage of those positive for anabolic steroids are shown in Fig. 1. The total number of tests performed in 1994, the latest year for which data are available, was 93 680, compared with 33 982 in 1986. The graph shows rapid growth in the late 1980s, with a peak annual increase of 36% between 1989 and 1990. This corresponds to a time when sport organizations and the media were particularly focused on the influence of drugs on sport. In the 1990s, the annual growth has tapered off, averaging 3.5% for the last 3 years shown in Fig. 1. Of greater importance than the number of tests is the proportion of tests performed on samples collected out-of-competition (OOC), i.e., samples collected on short notice or with no notice. OOC testing is the most effective deterrent to the use of anabolic androgenic steroids (AAS). In 1987, the percentage of total tests that was OOC testing was 17%, whereas in 1993 and 1994 it was 39% and 43%, respectively. It is encouraging that the percentage of OOC tests continues to increase at a rate equal to or greater than the total growth rate.
Laboratories that contribute data such as those reported in Fig. 1 are asked to classify the samples into the categories of major international competition, international competition, national competition, and OOC. Ideally, the OOC competition category should be further subdivided by the actual amount of time elapsed between notice to test and collection of the sample-because some steroids have effective half-lives of detection of just a few hours. Some of the OOC samples in Fig. 1 may represent samples collected after as much as 3 days' notice and others after virtually no notice. In some countries, virtually all OOC tests are no-notice tests.
The percentage of samples that test positive for AAS (Fig. 1) provides some information on the trend of usage; however, the lack of further detail precludes conclusions about individual sports or subsets of athletes according to level of performance. For example, an elite athlete who competes several times a year in track and field will probably give many samples, both in-competition and OOC, over the course of a year. Yet Fig. 1 gives only the total number of tests, not the number of athletes tested. Further, the data do not take into consideration the ultimate disposition of the case. For example, not all cases illustrated in Fig. 1 result in sanctions, owing to various factors that influence the adjudication process [2]. Despite the shortcomings in the available data, however, evidence is ample that a growing number of nations and sport federations are conducting credible testing.
Detection of Xenobiotic AAS
In 1994 about two-thirds of the steroid-positive cases shown in Fig. 1 were xenobiotic AAS. For these cases, detection is based on identifying the parent drug or metabolite(s) or both. Identification consists of obtaining the chromatographic retention time or relative retention time and the mass spectrum of the substance and showing that the (relative) retention time and spectrum match those of a reference compound. Fig. 2 shows the total ion chromatogram for an analyzed urine from a male who ingested boldenone. The peaks corresponding to boldenone (17-hydroxyandrosta-1,4-dien-3-one) and two major metabolites (17-hydroxy-5-androst-1-en-3-one and 3α-hydroxy-5-androst-1-en-17-one) are emphasized [3]. The spectra of these three peaks matched those of known (reference) compounds (data not shown). In this case the analysis identified not only a xenobiotic steroid but also two metabolites of the steroid. Thus the amount of analytical information obtained in this analysis is far beyond simple identification of a single substance; indeed, this analysis is helpful in responding to assertions that the sample might have been fortified, because the finding of metabolites is legal evidence that the individual who submitted the urine ingested the substance.
The ability of the laboratories to detect xenobiotic AAS continues to improve, given a growing body of knowledge of the metabolism of the compounds, method improvements, and instrument advances. The major metabolites of virtually all AAS are known and many have been synthesized [4]. For some drugs, as many as 11 metabolites have been detected [5]. The recent development of methods for analysis by high-resolution mass spectrometry (HRMS) is an important, but expensive, advance. HRMS operates in the selected-ion monitoring (SIM) mode to screen for xenobiotics and in the full-scan mode for confirmations [6]. This approach is particularly adept at screening for long-lasting metabolites of stanozolol and methandienone. As in all sports drug testing, a confirmation assay is performed on samples that screen positive. Note that the HRMS-based method described by Schänzer et al. [6] uses immunoextraction or HPLC to clean up the sample extracts before HRMS confirmation of stanozolol or methandienone metabolites, respectively.
Fig. 2. The arrows point to boldenone (17-hydroxyandrosta-1,4-dien-3-one) and two major metabolites: 17-hydroxy-5-androst-1-en-3-one (M1) and 3α-hydroxy-5-androst-1-en-17-one (M2).
One aspect of identification that continues to be debated is the absolute amount and nature of analytical information that must be obtained before a positive report is issued. Some advocate obtaining a full ion scan of at least one of the substances named in the positive report. Others find that SIM is sufficient, and still others use SIM with the major ions bracketed by the adjacent masses. Monitoring the preceding nominal mass allows demonstration that it is not substantially more intense than the ion of interest and thus excludes the possibility that the latter is an isotopic peak from some interference. The subsequent mass must have the expected response. Generally, drug-testing programs in the context of athletics ascribe to the full-scan approach for at least one of the substances reported. This is in contrast to the widely accepted practice in drugs-of-abuse testing of reporting positive cases on the basis of SIM data [7,8].
Another aspect of xenobiotic steroid testing that merits review is the question of the origin of the steroid and metabolites. One report [9] describes the finding of small amounts of boldenone and two metabolites in the urine of a normal man who had not received boldenone. This case argues for caution in interpretation of positive tests for low quantities of boldenone, for additional studies of this issue, and for devising ways to discriminate between endogenous and exogenous sources of boldenone. It is illegal in most countries to treat or feed cattle with anabolic steroids, and European countries in particular vigorously monitor meat for contamination with anabolic steroids; nevertheless, evidence of contamination exists [10]. Debruyckere et al. [11], investigating the possibility that contaminated meat could result in steroids or their metabolites in human urine, found that some meat obtained from butcher shops contained clostebol acetate and that its ingestion led to excretion of a clostebol metabolite (4-chloro-delta-4-androstene-3-alpha-ol-17-one [9]) in urine. The possibility that urine from untreated subjects may contain very low concentrations of nandrolone metabolites has been discussed [12,13]; however, the mass spectrum of these substances has not been presented, and others find no evidence for nandrolone metabolites in human urine after ingestion of meat from animals that were treated with nandrolone decanoate 28 and 61 days before slaughter [14]. Clenbuterol has been reported in very small amounts (<1 ng/L) in some human urines after ingestion of meat from clenbuterol-treated animals [14]. In an experiment designed to show human contamination, ingestion of meat from chickens deliberately treated with methenolone heptanoate led to finding methenolone and metabolites in urine [15].
Detection of Doping with Testosterone (T)
Doping with endogenous steroids is the most serious issue facing sport today; when cleverly administered, these are very difficult to detect. Quadrupole mass spectrometers cannot distinguish between pharmaceutical T and natural T because their spectra are identical. The average male produces 6-10 mg of T per day, of which only ≈1% is excreted in urine. The urine concentration of T increases briefly after T administration; however, T has a short half-life (≈1 h) and its concentration falls rapidly [16]. Even T enanthate, an ester that increases concentrations of T in serum for ≈2 weeks, often does not cause an obvious increase in urine T. Urinary androsterone and etiocholanolone account for ≈70% of a dose of T [17]; however, many other endogenous steroids are metabolized to androsterone and etiocholanolone, and neither their concentrations nor their ratio is a useful marker of T administration. The ratio of androsterone to epitestosterone (E; 17α-hydroxy-androst-4-en-3-one) has some utility as a marker of recent T use.
Various approaches to prove T administration have been considered within the practical constraint that only one untimed urine is available for the screening test. Brooks et al. [18] pointed out that urine concentrations vary over a very wide range such that defining T doping as a concentration that exceeds an upper limit of normal would be difficult. Instead, they advocated measuring the ratio of T to luteinizing hormone (LH) in urine, because chronic administration of T inhibits the production of LH and lowers the urine concentration of LH. Detailed studies of urine T/LH ratios are available [19]. Because the T/LH and other urine ratios are independent of urine volume and other factors that influence concentration, such ratios have been used extensively to detect doping with endogenous substances. One major drawback of T/LH is the lack of a reference method for LH [20].
Doping with T was reported in the 1950s [21], but not until 1982 did an effective test become available: Donike et al. [22] proposed to detect T doping by monitoring the T/E ratio. E is the 17α epimer of T and is present in urine in concentrations similar to T. T and E differ chemically only in the configuration of the hydroxyl group on C-17. T is not metabolized to E [23], and T/E increases after T administration. Subsequently, Dehennin and Matsumoto pointed out that T administration lowers the concentration of E [24]; thus, the increase in T/E results from an increase in T and a decrease in E.
In 1982 the IOC Medical Commission defined a T/E > 6:1 as the dividing line between the presumed upper limit of normal T/E values and doping with T. Since then, a small number of cases have been found with T/E > 6:1 but without the subjects having been administered T [25-28]. Therefore, the definition of T doping has been changed: individuals with T/E > 6:1 undergo further testing before a conclusion is reached on the etiology of the increased T/E. The shape of the distribution of T/E values for 3710 male athletes (Fig. 3) is skewed, having a median of 1.1, and is 5.6 or less in 99% of the athletes tested. Of those with T/E > 6:1, some admitted using T and others denied ever using it. Moreover, a T/E that is not > 6 does not mean that the individual has not used T recently; rather, it is consistent with recent use of short-acting T, use of T and E together, use of E to camouflage recent T use, or no use of T. When T/E is not > 6, the laboratory issues a negative report, recognizing that it may be describing a false-negative sample.
t/e-time profile
In view of the many interpretations of an increased T/E value in a single urine, a strategy is needed for managing individual cases. If the athlete has never been tested before, one should try to obtain at least two additional samples at 3- to 6-week intervals, with as short a notice as possible. Next, plot the T/E results against time and calculate the mean, SD, and CV. Stable T/E values are interpreted as consistent with physiological or natural increases of T/E, and decreasing T/E values indicate that the high result reflected T administration. If the individual has been tested two or more times before the increased T/E result is obtained, and these T/E values are near or less than the median T/E for males (≈1.0), then the high T/E is a strong indication of T administration. The actual value of the T/E is very helpful in differentiating natural increases from T administration, particularly if the T/E is 15 or higher. At those values, the likelihood of a natural increase is remote. Conversely, the closer the high T/E is to 6:1, the more likely is a physiological etiology.
Our data on urinary T/E ratios from drug-free males reveal that, for three or more samples taken at monthly or greater intervals, the CV will be < 60%; 55% is the maximum we have observed. Fig. 4 shows the T/E-time profiles for three athletes. Athletes B and C have mean T/Es of 1.0 and 2.3 and CVs of 9.8% and 39.5%, respectively. Athlete A, whose initial T/E was 8.0, has a mean of 2.7 and a CV of 105%; his pattern is typical of a T user who gets caught and discontinues T. In our experience, most T users who give three or more samples will have a CV > 60%. This is in agreement with the data of Garle et al. [29] and appears to be consistent with the athletes who displayed a "T/E spike" pattern described by Baenziger and Bowers [30].
Many males will have a remarkably stable T/E over several months or years [31-33]. For example, the volunteer in Fig. 5 has a stable T/E over the first 9 consecutive days (CV 3.5%) and in intermittent assays over 8 months (CV 35%). Donike et al. reported [32] that the CV (in normal males) will not exceed 30%; however, in a study of 796 athletes who provided at least 3 urines over 2 years, Baenziger and Bowers [30] showed (their Figs. 8 and 9) that about one-third had CVs > 30%, and that the 90th percentile for the CV of T/E ratios was 58%. Further, in a study [29] of 28 athletes with at least one T/E > 6, 17 (61%) had CVs < 30%, but 10 (36%) had CVs of 31-43%; the authors regarded the former group as "not likely to be testosterone users," and the latter as being in a "grey area." In the only study that covers several months of sampling in females (2 samples a month for 12 months, 5 subjects), Mareck-Engelke et al. [33] reported CVs of 15%, 51%, 25%, 31%, and 30%. Thus, although CVs are a useful guideline for understanding the T/E-time profile of an athlete, various factors may influence the T/E (see below, and [34]); therefore, the interpretation of a profile that includes one sample with T/E > 6 should take into account all the available information for the subject tested.
The role of quantification of the concentrations of T and E is important in the interpretation of T/E; however, the most useful information is the overall pattern of the T/E-time profile. Analytical and instrumental factors that influence the determination of T/E need to be taken into consideration [35,37]. If the actual T/E value is reported, it is important to have documentation of the confidence interval of the measurement, in case that information is requested. Likewise, if the sample is reported as > 6:1, it is important to have documentation of the statistical criteria used to make the decision.
The precise value of T/E is less critical if the value is two or more times the 6:1 threshold for reporting. The rationale for caution is that precise numbers are readily attacked by litigation attorneys, and lay adjudicators often do not understand measurement variability. For example, the T/E on confirmation A will not be identical to the confirmation B value for T/E, and both will differ from the T/E reported in the screening assay. The T/E of the first sample from athlete A was 8.2, followed by subsequent T/Es of 1.2, 1.3, and 1.4 (mean = 3.0, CV = 114%). The T/Es from athlete B showed minimal variability (mean = 0.97, CV = 9.8%); athlete C revealed greater variability (mean = 2.3, CV = 40%) but gave ratios still within our norms for non-T-using controls.
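The profile statistics used in this section (mean, sample SD, CV, and the 60% CV guideline) are straightforward to compute; the sketch below reproduces athlete A's figures quoted above.

```python
# T/E-time profile statistics; reproduces athlete A's quoted values.
import numpy as np

def te_profile_stats(te_values, cv_cutoff=0.60):
    """Mean, CV (sample SD / mean), and a coarse classification per the
    >60% CV heuristic for discontinued T use described in the text."""
    x = np.asarray(te_values, dtype=float)
    cv = x.std(ddof=1) / x.mean()   # ddof=1: sample standard deviation
    pattern = ("typical of a T user who discontinues" if cv > cv_cutoff
               else "consistent with a stable, natural profile")
    return x.mean(), cv, pattern

mean, cv, pattern = te_profile_stats([8.2, 1.2, 1.3, 1.4])   # athlete A
print(f"mean = {mean:.1f}, CV = {cv:.0%}: {pattern}")        # mean = 3.0, CV = 114%
```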
In many cases of increased T/E, the concentration of E is low (<5 µg/L). Because the CV of the assay increases at low concentrations of T or E (or both), the corresponding CV of the T/E may be relatively high. Low concentrations of T are not an issue with T/E ratios; however, very high concentrations of T (>300 µg/L) are an indication of recent T administration. Expressing T and E concentrations in nanograms per milligram of creatinine is useful, but this approach has not been widely used, probably because of concerns that creatinine excretion rates are influenced by exercise and many other factors [26].
increased t/e
T/E between 6:1 and 10:1. If an athlete produces three or more urine samples with T/E > 6:1, none with T/E < 6:1, and a CV < 60%, we tentatively classify the results as naturally increased and recommend additional tests. We also take into consideration the absolute value of the E and the amount of advance notice for the collections. If, at this point, a clinical examination and serum tests have not been performed, we recommend serum tests that are routinely available to the clinician: complete blood count; routine chemistries, including HDL and LDL cholesterol; free and total T, sex-hormone-binding globulin, dihydrotestosterone, follicle-stimulating hormone, and LH; urine LH; and a complete medical history and physical examination, with emphasis on the hypothalamic-pituitary-testicular axis and the adrenal. If these evaluations are normal, the likelihood of a T-secreting tumor is extremely remote. If the serum T is increased or the follicle-stimulating hormone or LH is low, or both, it is important to extend the evaluation and determine whether there is an endogenous source of T or use of exogenous T. In difficult cases, a 24-h urine may be collected, analyzed for the total amounts of T, androsterone, and etiocholanolone, and the results compared with published norms [39]. For a few of the athletes we monitor, serum tests and clinical evaluation have been performed and the results have been normal.
T/E > 10:1. In this situation, we still recommend a minimum of two additional tests, even though we have never encountered a case of potentially physiologically increased T/E where the mean is > 15:1. If the first sample tested from an athlete exceeds 15:1, we expect the subsequent T/E values to decrease to < 6:1; in all cases where three additional samples have been obtained, the T/E did fall to < 6:1. If the index T/E is 10-15:1, a later T/E will decline to < 6:1 in most, but not all, cases. When it does not, we recommend the clinical evaluation described above and, if possible, one or more of the ancillary tests described below.
Recently, we have encountered several cases where the T/E has remained in the 9-13:1 zone despite as many as five additional samples collected with < 24-h notice (Fig. 6). These cases are a growing concern because they raise the specter of sophisticated T delivery systems that can produce stable yet very high T/E values; moreover, they may be increasing in frequency.
ancillary tests to detect t administration
Invasive tests. In case the above approach still does not lead to a definitive determination of whether or not T has been used, additional tests may be performed, with the understanding that documented experience with these tests is limited and that more-invasive techniques are involved.
The ketoconazole challenge test [26,27,40] involves collecting urines before and after administration of an oral dose of ketoconazole, which inhibits the synthesis of T. In a normal male, after administration of ketoconazole, urinary T declines, E is unchanged, and T/E decreases [27,40]. Conversely, if the T/E is increased because of T administration, the T/E after ketoconazole administration increases. In individuals who probably have a naturally higher T/E, the ketoconazole test reveals a pattern similar to that of the normal male [27,40]. Experience to date with the ketoconazole test indicates that it differentiates between T users and nonusers. Its greatest potential application is to determine whether individuals suspected of having physiologically increased T/E respond as expected. At this time, the test is not widely used in the US because it requires drug administration and entails the risks related thereto, as well as commitment and expense on the part of the athlete or the sports organization. There is also a sense that athletes with naturally increased T/E should not be coerced into the test for the sake of proving their innocence.
Fig. 5. For the first 9 days a sample was collected each day, all were analyzed in one batch, and the CV was only 3.5%. For the second period of consecutive daily samples shown, the CV was 21%. The CV for all samples was 35%.
Fig. 6. T/E-time profile for one athlete who gave five samples on short notice over 13 months. Although the mean of 13 is high, the CV is relatively low (22%).
Another approach is based on measuring the concentration in serum of any substance that precedes T in the biosynthetic pathway to T and calculating a ratio of that substance to T; in the presence of exogenous T, the production of the precursors will be suppressed and the ratio should be high. Carlström et al. [20] described the serum 17α-hydroxyprogesterone/T ratio, presented the distribution of values for 12 males who did not use T, and showed that the ratio is increased after T administration; in the one subject with chronically increased T/E, the 17α-hydroxyprogesterone/T ratio was within the normal range. More experience with this test is desired.
Noninvasive tests. In typical T/E analysis, the glucuronides and sulfates in the urine samples are hydrolyzed with β-glucuronidase from Escherichia coli or Helix pomatia. In the former case, the test measures the ratio of total T (unconjugated T plus the T deconjugated from the glucuronide) to total E. In the latter case, the unconjugated glucuronide and to some extent the sulfate fractions are taken into account. Dehennin [41] has taken the additional step of measuring the concentrations of T and E glucuronides (TG, EG) and sulfates (TS, ES) plus other precursors of T and E. He used β-glucuronidase to obtain the glucuronide fraction and DEAE-Sephadex to separate the sulfates from the glucuronides and the nonconjugated steroids. He has proposed [41,42] that individuals with T/E >4:1 have a defect in E production, which results in decreased concentrations of EG and normal or increased concentrations of ES. Further, he finds that individuals with T/E >4:1 have low EG/ES ratios, and their TG/(EG+ES) values are less than the increase threshold (mean + 4.5 SD) of the reference group. Additional studies of these and other proposals [30] of Dehennin are underway. To that end, Sanaullah and Bowers [43] have synthesized deuterium-labeled T and E glucuronides and sulfates and have developed a liquid chromatography-tandem mass spectrometry method for directly quantifying these moieties.
Another noninvasive test is the determination of the carbon isotope ratio (CIR). Most carbon atoms are ¹²C, and a very few are ¹³C and ¹⁴C. Becchi et al. [44] explored the hypothesis that the ¹³C/¹²C of synthetic T differs from the ¹³C/¹²C in endogenous T. Using a specialized GC-MS that measures this CIR, they found support for the hypothesis by showing that, after T administration, the δ¹³C (‰) values for T were < −27, whereas samples from normal controls had values that were less negative than −27. In a subsequent study, the δ¹³C (‰) values for T, cholesterol, and metabolites of T were determined for 25 samples from 8 apparently healthy volunteers before and after T administration [45]. The δ¹³C (‰) values of T and metabolites were lower after T administration, and discriminant analysis correctly identified the samples collected after T administration [45]. More recently, we have measured the δ¹³C (‰) values of T in 14 urines from three individuals whom we expected to have physiological increases of T/E. The δ¹³C (‰) values of T in all 14 were between −24 and −27. At this juncture, the CIR studies consistently show that after T administration the δ¹³C (‰) values of T are −30 to −36 if the T/E is >10; however, the studies to this point have not included enough samples from T users with T/E ratios in the critical range of 4:1 to 10:1, so the sensitivity of the CIR test is not known. Although it was encouraging to find high T δ¹³C (‰) values in three subjects presumed to have physiologically increased T/E, more data are needed. Further ongoing studies are anticipated to improve on the measurement of CIR and on the premeasurement analytical techniques.
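For background, the δ¹³C notation used above is the conventional per mil deviation of a sample's ¹³C/¹²C ratio from an agreed reference standard (VPDB in isotope-ratio practice); the definition below is standard usage rather than something stated in this excerpt:

```latex
\delta^{13}\mathrm{C}\ (\text{per mil}) =
  \left(
    \frac{\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{sample}}}
         {\left({}^{13}\mathrm{C}/{}^{12}\mathrm{C}\right)_{\mathrm{standard}}} - 1
  \right) \times 1000
```

More negative values indicate ¹³C depletion, which is why T of synthetic (plant sterol) origin, at −30 to −36, reads lower than endogenous T.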
Other Variables Influencing T/E
Ethanol. In 1992, Falk et al. [46] measured the T/E before and after administering ethanol (2 g/kg) over 4 h to four males. The ratio increased by 60% in one subject, by ~20% in two, and did not change in the fourth [46]. Subsequently, papers presented at a workshop on doping [47-49] found differing effects of ethanol on the T/E. Males showed no significant effect of ethanol on T/E at doses of 1.0-1.2 g/kg [46,49], whereas the same dose produced large increases in another study [36]. At very high doses of ethanol (2 g/kg), the T/E ratios of males increased, although none exceeded 6:1 [46,47,49]. The T/E ratios in females appear to be more sensitive to ethanol. Doses of 1.0-1.2 g/kg produced either no effect on T/E [49] or increases [48], and high doses produced increases [47,49], including two that exceeded 6:1 [48,49]. The effect of ethanol on T/E appears to be limited to the 8 h after completing the ingestion. It will be important to advance these studies by adding placebo controls to factor out the effects of biorhythms and other sources of potential bias and to clarify dose-response relationships. Until further details are available, one may prudently consider that large and inebriating doses of ethanol may increase the T/E for several hours after ingestion and that some females may be particularly sensitive to the ethanol effect.
Gender. Little is published about gender differences in T/E; however, in our experience, the T/E distribution of a control group of female students, who are not at risk to take T, is shifted slightly to the left (lower values) of the distribution for control males. The reason for this is not known. We have found, however, that the menstrual cycle may influence T/E. We obtained daily T/E values on morning urines from three females throughout a total of five menstrual cycles; each showed a peak during menses, and the nadir-to-peak T/E ratio could vary by nearly threefold over the course of a cycle. Data on three cycles in one of these subjects have been presented [50]. In other studies that collected 24-h urines from four females on days 2, 7, 14, and 21 of the menstrual cycle [33], and at the beginning and at mid-cycle for 1 year (n = 5) [51], there was no evidence of a relationship between T/E and the menstrual cycle. Additional studies using identical protocols will help clarify the relation between T/E and the menstrual cycle.
Masking Testosterone Administration
Short half-life formulations of T. Currently, the drug of choice for the management of hypogonadal syndromes is T enanthate, typically given by injection every 2 or 3 weeks in doses of 100-200 mg. Because parenteral injections are inconvenient, the pharmaceutical industry has been developing alternative formulations and routes of administration. T cyclodextrin is a formulation administered by having the patient place a tablet in the buccal-sublingual pouch to be absorbed into the bloodstream. A dose of 5 mg produces serum concentrations of T within the reference range for normal [52] but, because of the very short half-life of T, the drug must be taken three times per day. This preparation leads to very high urinary ratios of T/E, but these typically fall to <6:1 within 4-6 h (Fig. 7) [55]. Two T formulations that utilize the transdermal route of administration became available recently and others are in development [53,54]. One of these is a T gel that is administered by applying the gel on the skin. As expected, this formulation also increases the urine T/E (Fig. 7). (These studies were approved by the institutional review board.) The question remains whether or not these preparations can be administered in sufficient doses to enhance performance. The threshold dose for a pharmacological effect of T in men is not known, but 600 mg/week of T enanthate for 10 weeks is known to increase fat-free body mass and muscle size and strength in healthy men [56]. Given that T cyclodextrin is readily bioavailable, we can reasonably suspect that it would provide enhancement.
Administration of E. Anecdotal reports suggest that E is used as an emergency measure to rapidly lower a T/E that is above normal as a result of T administration. E is not available in a pharmaceutical dosage form, but it is available as a chemical. For this reason, the IOC Medical Commission classified E as a urine-manipulating agent, set 150 µg/L (520 nmol/L) as the threshold for reporting cases, and later changed the threshold to 200 µg/L (693 nmol/L). In our opinion, the threshold should be higher: we often encounter cases in the 150-200 µg/L range. In the future, we expect that the CIR technique might be useful for detecting E administration. The two highest E concentrations we have encountered in urine samples were 1200 µg/L (4.16 µmol/L) and 1550 µg/L (5.20 µmol/L).
Perhaps the ideal doping agent would be a combination of T and E designed to deliver T and E in a ratio of ~25:1, i.e., the ratio of the production rates of T and E in men [42]. In theory, this might produce a T/E of 1:1 and therefore produce false-negative test results while allowing administration of T. In practice, the dosing regimens do not always lead to T/E <6:1, and the formulations are cumbersome to prepare. Moreover, Dehennin [42] proposes to detect this scheme by measuring the ratio of T and E to 5-androstene-3β,17α-diol, a precursor of E, and Kicman et al. [19] have shown that the T/LH ratio is high after administration of combined T and E.
Fibre Laser Treatment of Beta TNZT Titanium Alloys for Load-Bearing Implant Applications: Effects of Surface Physical and Chemical Features on Mesenchymal Stem Cell Response and Staphylococcus aureus Bacterial Attachment
A mismatch in bone and implant elastic modulus can lead to aseptic loosening and ultimately implant failure. Selective elemental composition of titanium (Ti) alloys coupled with surface treatment can be used to improve osseointegration and reduce bacterial adhesion. The biocompatibility and antibacterial properties of Ti-35Nb-7Zr-6Ta (TNZT) using fibre laser surface treatment were assessed in this work, due to its excellent material properties (low Young's modulus and non-toxicity) and the promising attributes of fibre laser treatment (very fast, non-contact, clean and only causes changes in surface without altering the bulk composition/microstructure). The TNZT surfaces in this study were treated in a high speed regime, specifically 100 and 200 mm/s (or 6 and 12 m/min). Surface roughness and topography (WLI and SEM), chemical composition (SEM-EDX), microstructure (XRD) and chemistry (XPS) were investigated. The biocompatibility of the laser treated surfaces was evaluated using mesenchymal stem cells (MSCs) cultured in vitro at various time points to assess cell attachment (6, 24 and 48 h), proliferation (3, 7 and 14 days) and differentiation (7, 14 and 21 days). Antibacterial performance was also evaluated using Staphylococcus aureus (S. aureus) and Live/Dead staining. Sample groups included untreated base metal (BM), laser treated at 100 mm/s (LT100) and 200 mm/s (LT200). The results demonstrated that laser surface treatment creates a rougher (Ra value of BM is 199 nm, LT100 is 256 nm and LT200 is 232 nm), spiky surface (Rsk > 0 and Rku > 3) with homogenous elemental distribution and decreasing peak-to-peak distance between ripples (0.63 to 0.315 µm) as the scanning speed increases (p < 0.05), generating a surface with distinct micron and nano scale features. The improvement in cell spreading, formation of bone-like nodules (only seen on the laser treated samples) and subsequent four-fold reduction in bacterial attachment (p < 0.001) can be attributed to the features created through fibre laser treatment, making it an excellent choice for load bearing implant applications. Last but not least, the presence of TiN in the outermost surface oxide might also account for the improved biocompatibility and antibacterial performances of TNZT.
Introduction
Aseptic loosening is the most commonly cited indication for load bearing orthopaedic implant revision surgeries [1]. A mismatch in elastic modulus of bone and implant means that required stresses for bone remodelling are not obtained, leading to stress shielding, which causes bone resorption and ultimately results in aseptic loosening. The ability of an implant to successfully integrate into native tissue is determined by the surface features, namely surface roughness and topography (physical), wettability (physiochemical) and chemistry (chemical). All of these play a crucial role in modulating cell-surface interactions. The ideal surface conditions for optimal osseointegration still remain to be fully elucidated; however, a consensus exists that surface features are vital in the control of cell response (adhesion, proliferation and differentiation). Among several cell types involved in the osseointegration process, mesenchymal stem cells (MSCs) are multipotent progenitor cells that are responsible for self-replicating and differentiating into varying lineages. The osteogenic lineage includes two fundamental cell types, osteocytes and osteoblasts, which are responsible for the formation of new bone. The hematopoietic-derived osteoclasts resorb bone and are crucial for bone homeostasis [2]. Insufficient mechanical loads at the implant site lead to exacerbated osteoclast activity [3], which can potentially be reduced by choosing a material with lower elastic modulus, a property belonging to beta (β) titanium (Ti) alloys.
Ti-based alloys, namely commercially pure (cp) Ti and Ti-6Al-4V, have been successfully applied for orthopaedic applications in the past few decades on account of their promising mechanical and corrosion properties, as well as desirable biocompatibility. In principle, they are classified as alpha (α), near-α, α + β, metastable β or stable β, depending upon material composition (α or β-stabilizers) and thermo-mechanical processing history. β-stabilizers, such as Nb and Ta, are isomorphous, while Zr is a neutral stabilizer [4]. β Ti alloys offer unique characteristics in comparison with their cp Ti (α Ti) and Ti-6Al-4V (α + β Ti) counterparts. Among a family of β Ti alloys which have been developed for orthopaedic applications in recent years, the Ti-Nb-Zr-Ta (TNZT) quaternary alloy system is particularly promising due to its excellent properties, which include superior biocompatibility, low elastic modulus (55 GPa) [5], good corrosion resistance and the absence of toxic elements such as aluminium (Al) and vanadium (V). The cytotoxicity and adverse tissue reaction caused by Al and V have been widely reported in literature [6][7][8][9]. Another notable flaw in the choice of the conventional material used for load-bearing orthopaedic implants is the staggering difference in elastic modulus of cortical bone [10] and Ti-6Al-4V (110 GPa) [11]. The utilisation of TNZT can aid in reducing aseptic loosening and stress shielding, and subsequently can cause a reduction in tissue reaction to particulate debris, while surface modification in the form of fibre laser treatment can be implemented to reduce bacterial infection and improve native cell adhesion.
Bacterial infection caused by Staphylococcus aureus (S. aureus) is considered to be a major issue with this particular strain, accounting for the majority (34%) of implant associated infections [12]. The process is initiated through three core steps: bacterial adherence to the implant surface, bacterial colonisation and lastly biofilm formation. The biofilm consists of layers of bacteria covered in a self-produced extracellular polymeric matrix, hence phagocytosis cannot occur and antibiotics have no effect [13]. Bacteria are non-specific with their adherence, attaching to both rough and smooth surfaces, and to different types of materials. Bacterial adherence and subsequent biofilm development is detrimental to the performance of implants, and the ensuing infection can also be a cause of significant morbidity and mortality to implant patients. Therefore, implementing strategies to minimise the likelihood of initial bacterial adherence to the implant surfaces is crucial to prevent bacterial infection [14,15].
Findings in recent research [16,17] show that surface modification by laser treatment can be used to reduce bacterial infection and improve native cell adhesion. Laser surface treatment, particularly when carried out by fibre laser technology, is a novel technique for implant surface modification [18-20], providing a clean, fast and highly repeatable process. One important characteristic of the laser surface treatment implemented in this study is that the rapid solidification process produces a homogenous surface with little thermal penetration, resulting in little to no distortion [21]. Laser treatment has been shown to improve surface hardness and wear and corrosion resistance [22-25]. Other sources of pain after primary total hip replacement (THR), as recorded by the UK National Joint Registry (NJR), include adverse soft tissue reaction to particulates and infection, both of which can be improved upon by combining better material selection in the form of a novel beta titanium alloy with surface modification using fibre laser treatment. In terms of the economic benefit, TNZT is potentially superior to currently used materials, as it, coupled with laser surface treatment, can simultaneously decrease the most common indications for hip revision surgery listed in the NJR reports [1,26-29]. TNZT use for hip stems may also have a longer life span beyond the current three quarters of hip replacements that last 15-20 years [30] before a revision surgery is required. A systematic review suggested that just over half of hip replacements last 25 years [30]; as ever, there is room for improvement.
If a quantitative relationship between the surface features of an implant and the cell responses were to be established, implants could be designed with specific surfaces which would aid in the host's natural healing processes [31]. To date, very little work has been conducted using the aforementioned composition of beta titanium alloy, and even less specifically focusing on the effect of surface features in relation to MSC response.
The study objectives were to investigate surface feature effects, namely, roughness, topography, composition and chemistry of untreated and fibre laser treated beta titanium alloy Ti-35Nb-7Zr-6Ta on human MSC response by assessing attachment, proliferation and differentiation at various time points, as well as S. aureus bacterial attachment, to determine if biocompatibility and antibacterial properties can be improved upon simultaneously using fibre laser treatment.
Materials
Ti-35Nb-7Zr-6Ta plates were sourced (American Element, USA) with dimensions of 250 mm × 250 mm with 3 mm thickness, and were wire cut using electrical discharge machining (EDM) (Kaga, Japan) into 30 mm × 40 mm plates. The material was polished prior to laser treatment using a progression of silicon carbide (SiC) papers with a finish of 1000 grit. Standard metallographic procedures were followed to remove the pre-existing oxide layer and any surface defects present after the manufacturing process. Samples were ultrasonically cleaned in acetone for 10 min, rinsed with deionised water and air-dried prior to laser treatment and material characterisation. A sample size of n = 3 was used for all material characterisation and in vitro cell culture experiments except bacterial attachment, for which n = 4 samples were used.
Laser Treatment
Laser surface treatment was performed using an automated continuous wave (CW) 200 W fibre laser system (MLS-4030). The laser system was integrated by Micro Lasersystems BV (Driel, the Netherlands) and the fibre laser was manufactured by SPI Lasers UK Ltd (Southampton, UK). The laser wavelength was 1064 nm. The samples were prepared using the following parameters: laser power 30 W, stand-off distance 1.5 mm, argon gas with 30 L/min flow rate and two different scanning speeds, 100 and 200 mm/s. Laser sample groups are denoted as LT100 and LT200 hereafter. The laser-treated area of the surface was 6 mm² in square shape. The laser energy at the two speeds was 1.8 and 0.9 J respectively (see Supplementary Materials for calculations). The control base metal samples (1000 grit finish) are denoted as BM. The sample plate was used for bacterial attachment; otherwise samples were wire cut using EDM into 6 mm diameter discs. Prior to biological culture, samples were ultrasonically cleaned in acetone twice for 1 h, then in deionised water for 30 min and air-dried in the fume hood before a final sterilisation step in an autoclave (Prestige Medical, UK) at 121 °C and 1.5 bar pressure for 20 min to destroy any microorganisms present on the surface.
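The supplementary calculation is not included in this excerpt; the reported energies are, however, consistent with taking the per-track energy as laser power multiplied by the traverse time over a 6 mm scan length, offered here only as a plausible reconstruction:

```latex
E = P\,\frac{L}{v}, \qquad
E_{100} = 30\ \mathrm{W} \times \frac{6\ \mathrm{mm}}{100\ \mathrm{mm/s}} = 1.8\ \mathrm{J}, \qquad
E_{200} = 30\ \mathrm{W} \times \frac{6\ \mathrm{mm}}{200\ \mathrm{mm/s}} = 0.9\ \mathrm{J}
```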
Surface Roughness, Topography and Composition
The surface roughness and 3D profile of the untreated and laser treated samples were captured using white light interferometry (WLI) (Talysurf CCI 6000, UK). Roughness was assessed using four parameters: arithmetic mean (Ra), maximum profile height (Rz), surface skewness (Rsk) and surface kurtosis (Rku). Values were extracted from the 1.2 mm² scan areas perpendicular to the laser track orientation. Scanning electron microscopy (SEM) was used to image the ripples on the laser treated surfaces (FlexSEM 1000, Hitachi, UK). SEM images were acquired using a 20 kV beam and backscattered electron compositional (BSE-COMP) mode detection. SEM images for energy dispersive X-ray spectroscopy (EDX) analyses were acquired using a Zeiss Leo 1455VP SEM at 20 kV beam energy with secondary electron detection. EDX data were acquired in the SEM using an Oxford Instruments X-Act detector (Abingdon, UK) with INCA v4.15 acquisition and processing software.
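As a point of reference for the four parameters quoted in this study, the sketch below implements the standard ISO 4287 amplitude definitions over a single levelled profile; the instrument software applies filtering and per-sampling-length evaluation, which this simplification omits, and the synthetic profile is purely illustrative.

```python
# Minimal sketch of the amplitude roughness parameters used in this study,
# computed over one levelled 1D profile (no filtering or sampling lengths).
import numpy as np

def roughness_params(z):
    z = np.asarray(z, dtype=float)
    z = z - z.mean()                      # remove the mean line
    ra = np.mean(np.abs(z))               # Ra: arithmetic mean deviation
    rz = z.max() - z.min()                # Rz: maximum profile height (simplified)
    rq = np.sqrt(np.mean(z ** 2))         # RMS roughness, used for normalisation
    rsk = np.mean(z ** 3) / rq ** 3       # Rsk: skewness, >0 means spiky peaks
    rku = np.mean(z ** 4) / rq ** 4       # Rku: kurtosis, >3 means sharp peaks
    return ra, rz, rsk, rku

# Illustrative spiky, ripple-like profile over a 1.2 mm scan line
x = np.linspace(0.0, 1.2e-3, 4096)                    # metres
z = 100e-9 * np.sin(2 * np.pi * x / 0.63e-6) ** 8     # synthetic sub-micron spikes
print(roughness_params(z))                            # expect Rsk > 0 and Rku > 3
```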
Phase Identification
Phase and crystallographic structure of the samples were captured by X-ray diffraction (XRD) using a PANalytical X'Pert Pro MPD (PANalytical, UK) with a CuKα radiation source operated at 40 kV, 40 mA with 1/2° fixed slit, 10° anti-scatter slit and 0.02° step size with Ni filter. Samples were analysed in a 2 theta (2θ) range between 10°-90°.
Surface Chemistry
X-ray photoelectron spectroscopy (XPS) spectra were acquired using a bespoke ultra-high vacuum (UHV) chamber fitted with Specs GmbH Focus 500 monochromated Al Kα X-ray source and Specs GmbH Phoibos 150 mm mean radius hemispherical analyser with 9-channeltron detection. Survey spectra were acquired over the binding energy range between 0 and 1100 eV using a pass energy of 50 eV, and the high resolution scans over the C 1s, Ti 2p, Zr 3d, Nb 3d and O 1s lines were made using a pass energy of 20 eV. Data were quantified using Scofield cross-sections corrected for the energy dependencies of the effective electron attenuation lengths and the analyser transmission. Data processing and curve fitting were carried out using the CasaXPS software v2.3.16 (CasaXPS, UK).
Attachment
Cell culture was performed in a Class II microbiological safety cabinet, and sterile conditions were maintained. Human mesenchymal stem cells (passage 5-9) (Texas A&M Health Science Centre College of Medicine, Institute for Regenerative Medicine, USA) were cultured in tissue culture flasks (Thermo Scientific). The medium was comprised of Minimum Essential Medium Alpha with GlutaMAX (Gibco), supplemented with 16.5% foetal bovine serum and 1% glutamine. The cells were maintained in a humidified atmosphere with 5% CO₂ at 37 °C, and were sub-cultured when they reached confluency by washing with phosphate buffered saline (PBS) and dissociated with 0.05% Trypsin-EDTA (Gibco) to provide adequate cell numbers for all studies undertaken, or for further subculture or cryopreservation. Cells were counted using a haemocytometer (Agar Scientific), and seeded in a 96 well plate (Sarstedt) at a density of 5 × 10³ cells per well and maintained in the same culture conditions as previously mentioned. Early cell attachment was assessed at 6, 24 and 48 h using direct immunofluorescent staining, as follows. Cells were quickly washed with cold PBS, fixed with 4% paraformaldehyde (PFA) for 15 min and permeabilised with 0.1% Triton X-100 in PBS for 5 min. Cells were then blocked with 5% donkey serum in PBS for 30 min. The cells were stained with α smooth muscle actin (SMA)-Cy3 conjugated mouse antibody (1:200) in 5% donkey serum in PBS for 45 min at 37 °C. Cells were then counterstained with 4',6-diamidino-2-phenylindole dihydrochloride (DAPI) (1:1000) in PBS for 5 min. Each step was performed at room temperature unless otherwise stated. Cells were washed with PBS in between every step except after blocking. The last step involved a final wash with distilled water.
Samples were transferred to a new well plate for imaging using the Leica DMi8 inverted fluorescence microscope (Leica, Germany). Three images were captured per sample at magnification ×100, giving a total of nine images per group at each time point.
Proliferation
Cell proliferation capacity was assessed using the CyQUANT NF Cell Proliferation Assay Kit (Thermo Scientific) at 3, 7 and 14 days. The dye solution was prepared by adding 1× dye binding solution to 1× Hank's balanced salt solution (HBSS). The cell culture medium was aspirated and 200 µL of the dye solution was added to each well and incubated at 37 °C for 30 min. The fluorescence intensity was measured using a microplate reader (Varioskan LUX) at 485 nm excitation and 530 nm emission.
Differentiation
Cells were stained using indirect immunofluorescent assay staining at 7, 14 and 21 days. All steps were the same as previously mentioned (see Section 2.6.1), except after blocking, where cells were instead incubated with anti-osteocalcin (10 µg/mL) (R&D Systems) in 5% donkey serum in PBS for 1 h. Cells were then stained with Alexa Fluor 488 donkey anti-mouse (1:500) (Thermo Scientific) in PBS for 45 min. This was followed by DAPI staining, PBS wash and distilled water wash. Samples were transferred to a new well plate for imaging using the Leica DMi8 inverted fluorescence microscope (Leica, Germany). Three images were captured per sample at magnification ×100, giving a total of nine images per group at each time point.
Bacterial Attachment
The TNZT plate was washed three times with sterile PBS. S. aureus (ATC 44023) was cultured in Müller Hinton broth (MHB) for 18 h at 37 °C on a gyratory incubator with shaking at 100 rpm. After incubation, sterile MHB was used to adjust the culture to an optical density of 0.3 at 550 nm, and it was diluted (1:50) with fresh sterile MHB. This provided a bacterial inoculum of approximately 1 × 10⁶ colony forming units (CFU)/mL. 1 mL of culture was applied to the plate, carefully suspended over a petri dish base, at an inoculum not exceeding 2.4 × 10⁶ CFU/mL, as verified by viable count. The plate was incubated for 24 h at 37 °C on a gyratory incubator with shaking at 100 rpm. Four samples of each group, for both untreated and laser treated, were tested to ensure the consistency of the results. After 24 h of incubation, the plate was washed three times with sterile PBS to remove any non-adherent bacteria. The adherent bacteria were stained with fluorescent Live/Dead® BacLight™ solution (Molecular Probes) for 30 min at 37 °C in the dark. The fluorescent viability kit contains two components: SYTO 9 dye and propidium iodide. SYTO 9 labels all bacteria, whereas propidium iodide enters only bacteria with damaged membranes. Green fluorescence indicates viable bacteria with intact cell membranes, while red fluorescence indicates dead bacteria with damaged membranes. The stained bacteria were observed using a fluorescence microscope (GXM-L3201 LED, GX Optical). Twenty-four random fields of view (FOV) were captured per group. The surface areas covered by the adherent bacteria were calculated using ImageJ software. The areas corresponding to the viable bacteria (coloured green) and the dead bacteria (coloured red) were individually calculated. The total biofilm area was the sum of the green and red areas, and the live/dead cell ratio was the ratio between the green and red areas. The results are expressed as the means of the twenty-four measurements taken per group.
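The coverage and live/dead computation described above reduces, per field of view, to simple area sums over the thresholded green and red channels; the numpy sketch below is an illustrative equivalent of the ImageJ measurements, not the authors' actual script.

```python
# Sketch of per-field biofilm metrics from boolean fluorescence masks:
# green_mask (SYTO 9, live) and red_mask (propidium iodide, dead).
import numpy as np

def biofilm_metrics(green_mask, red_mask):
    live_area = int(green_mask.sum())
    dead_area = int(red_mask.sum())
    total_biofilm = live_area + dead_area          # total biofilm area, per the text
    coverage = total_biofilm / green_mask.size     # fraction of the field of view
    live_dead = live_area / dead_area if dead_area else float("inf")
    return coverage, live_dead

# In the study, such metrics would be averaged over the 24 fields per group.
```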
Statistical Analysis
Data were expressed as mean ± standard error (SE). The significance of the observed differences between untreated and laser treated samples was tested by one-way and two-way ANOVA using Prism software (GraphPad Prism Version 7, USA). A p-value of ≤0.05 was considered significant: (*) p < 0.05; (**) p < 0.01; (***) p < 0.001.
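The authors used Prism; for illustration, the same one-way comparison can be run with SciPy, here on hypothetical replicate values (the numbers below are placeholders, not data from the paper):

```python
# One-way ANOVA across the three groups, analogous to the Prism analysis.
from scipy.stats import f_oneway

bm    = [199.1, 200.2, 198.6]   # hypothetical replicate Ra values, nm
lt100 = [255.4, 257.0, 255.9]
lt200 = [231.8, 233.0, 232.7]

f_stat, p_value = f_oneway(bm, lt100, lt200)
print(f"F = {f_stat:.2f}, p = {p_value:.4g}")   # p <= 0.05 taken as significant
```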
Surface Roughness by WLI
The 3D and 2D profiles of the untreated and laser treated surfaces can be seen in Figure 1A,B. The laser treated surfaces appear to have very smooth tracks (as indicated by each individual colour scale), showing a greater surface polishing effect with increased laser scanning speed, as shown by the shift in the colour scale of the 2D profiles. It is also evident that the base metal areas in between the laser tracks have significant variation in surface roughness, as is expected of the metallographic preparation process (i.e., polished by SiC paper). The roughest regions are prominent at the very edge of the laser tracks for each surface (as indicated by the arrows in Figure 1A,B). The surface roughness values extracted from the 2D profiles can be seen in Figure 1C. The Ra values ranged from a low of 199.3 nm on BM to a high of 256.1 nm on the LT100 surface, while LT200 had a Ra of 232.5 nm. The Rz values followed a similar trend to the Ra, with BM having the lowest maximum profile height (1900.3 nm) while LT100 and LT200 had similar values, 2879.7 and 2845.3 nm, respectively. Overall, the laser surface treatment increased the Ra and Rz in comparison to the untreated surface, due to the rough edges created along the laser created tracks.
The untreated and laser treated samples were also quantified using the surface skewness and kurtosis values. The skewness defines whether a surface consists of spikes (Rsk > 0) or valleys (Rsk < 0), while the kurtosis defines whether a surface has peaks (Rku > 3) or is flat (Rku < 3); see Figure 1D for a diagram illustrating the visual differences between varying Rsk and Rku values. There was a slight increasing trend in Rsk and a more notable difference in Rku as the laser scan speed increased, as seen in Figure 1C. The LT100 surface had the lowest Rsk (0.346), while the BM had a value of 1.229. The LT200 had the highest Rsk at 1.676, showing that the highest scanning speed creates the spikiest surface. All Rku values were >3, with BM being the lowest at 5.749 and increasing with scanning speed. There was a sharp increase in Rku at the highest scanning speed, reinforcing the laser surface polishing effect. All the surfaces have Rsk > 0 and Rku > 3. The LT200 group had the highest skewness and kurtosis, suggesting it had more peaks and spikes present on the edges of the laser created tracks, but the smoothest laser treated area.
Surface Topography and Composition by SEM-EDX
The SEM images of the untreated and laser treated surfaces can be seen in Figure 2A(a-d).
The typical surface morphology after mechanical grinding can be seen above and below each laser track in Figure 2A(a,b), with the surface exhibiting random scratches, pits and grooves. The tracks created using laser treatment had a distinctive ripple effect, which was more prominent at the lower scan speeds, with small ripples present along the entire track and the periodic appearance of distinctive larger ripples. The magnified SEM images show that the ripples were much smaller and more uniform on the LT200 surface than on LT100, which had more distinct arches, as shown in Figure 2A(c,d). The tracks created by laser surface treatment became smoother as the scanning speed increased, verifying the laser surface polishing effect observed in the WLI images.
The ripple effect was quantified by calculating the peak-to-peak distance between ripples, using ImageJ, as seen in Figure 2B,C. The distance became smaller with increasing laser scanning speeds. The peak-to-peak distance between ripples halved when comparing LT100 and LT200, dropping from 0.63 to 0.315 µm. The surfaces were significantly different from one another (p < 0.001).
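The peak-to-peak measurement was made in ImageJ; one illustrative alternative, sketched below under the assumption that a grey-level line profile is sampled perpendicular to the ripples, estimates the mean spacing with SciPy's peak finder.

```python
# Estimate mean peak-to-peak ripple spacing from a 1D intensity profile.
# Illustrative alternative to the ImageJ measurement used in the paper.
import numpy as np
from scipy.signal import find_peaks

def mean_ripple_spacing(profile, pixel_size_um):
    """profile: intensity values sampled perpendicular to the ripples."""
    profile = np.asarray(profile, dtype=float)
    peaks, _ = find_peaks(profile, prominence=np.ptp(profile) * 0.2)
    if len(peaks) < 2:
        return None                                  # too few ripples detected
    return float(np.diff(peaks).mean() * pixel_size_um)

# Spacings near 0.63 um would match LT100, and near 0.315 um LT200.
```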
The SEM-EDX analyses were performed to determine if there were any notable differences in elemental distribution post laser treatment, and can be seen in Figure 2D. The base metal polished areas were quite distinctive from the smooth laser treated tracks; the boundary between these areas is defined by a dashed yellow line. The carbon and oxygen were quite densely concentrated on the base metal polished area of the samples, while homogeneously distributed in the laser tracks on each surface, irrespective of scanning speed. It is apparent that for each element, except Ta, the laser treatment created a homogenous elemental distribution within the tracks with no measurable spatial variation. Enriched particles of Ti, Nb and Zr can be seen in Figure 2D(i-vi) for each surface, as indicated by the black arrows, which can be attributed to the base metal polished surface features. There was a more uniform and homogeneous oxide film present post laser treatment.
[Figure 2 caption fragment: Scale bar = 100 µm. EDX elemental maps show the spatial distribution of carbon, oxygen, titanium, niobium, zirconium and tantalum. Black arrows indicate particle enrichment in the base metal polished zones for all elements except tantalum.]

Phase Identification by XRD

The phase and structure of the samples were identified using XRD, as seen in Figure 3. There was a notable preferential crystallographic phase shift post laser treatment. The blue dot indicates the presence of a specific peak in all samples. The untreated base metal surface had a prominent peak at ~56° β (200), with additional smaller peaks present (~38.5° β (110) and ~69° β (211)). The crystallographic plane shifted to ~38.5° β (110) after laser treatment for LT100. All samples had a peak present at ~38.5° β (110), ~56° β (200) and ~69° β (211), although LT100 had an additional weak peak present at ~83° β (220). All peaks were associated with beta phases; no alpha phases were detected. This is due to the presence of beta stabilising elements (niobium and zirconium) in the material, which suppresses the formation of alpha phase.
Surface Chemistry by XPS

The XPS spectra and narrow scans for the untreated and laser treated groups can be seen in Figure 4. A summary of the % concentrations by XPS on untreated and laser treated surfaces can be found in Table 1, and a summary of the species detected is noted in Table 2. All surfaces showed the expected Ti, Nb, Zr, C and O, with the additional presence of N, Zn and Cu. The presence of organic nitrogen is probably due to sample exposure to air [32]. The Cu and Zn on the BM surface were attributed to surface contamination during manufacture of the alloy, and showed levels which reduced as a consequence of laser treatment. Each element and assignment, as seen in Table 2, was found in all of the laser treated samples.

The untreated sample had no Ti 2p3/2 in the form of Ti metal present in the surface layer, and a very weak presence of Ti³⁺ in Ti₂O₃/TiN at 456.2 eV. The majority of Ti at the outermost untreated surface was found in the Ti⁴⁺ state in TiO₂ at 458.4 eV. The case was similar for the two other metal elements, niobium and zirconium. Nb was only found in the Nb⁵⁺ state in Nb₂O₅ at 207 eV, while the majority of Zr was found in the Zr⁴⁺ assignment in ZrO₂ at 182.3 eV. There was very little to no nitride present on the untreated sample, whereas there was some evidence from the laser treated samples to suggest that the laser treatment creates a nitride layer at a BE range of 395.8-397 eV.

The oxygen spectra for the untreated and laser treated samples typically showed a relatively sharp lower binding energy metal-bonded component, and a broader but relatively featureless higher binding energy component typical of organic oxygen, e.g., bonded within the hydrocarbon contamination layer. The metal-bonded component (i.e., Ti-O, Nb-O and Zr-O) was typically found at 529.8 eV. The carbon-oxygen region was typically fitted with two components at 530.9-531.5 eV and 532.3-532.9 eV, representative of O=C and O-C bonding respectively, typically associated with residual organic contamination.
Attachment
It is evident at the early attachment time points that the cells behaved distinctly differently on the laser treated surfaces in comparison with the untreated surfaces between the 24 and 48 h time points, as seen in Figure 5. Alpha SMA was specifically used to clearly visualise cell morphology. The MSCs were visually similar in shape at 6 h on all surfaces, displaying the typical polygonal structure with uniform spreading regardless of underlying surface topography. At 24 and 48 h, the cells on the laser treated surfaces began to show evidence of interacting with the surface. At 24 h, the LT100 cells remained fairly rounded and polygonal in shape, while the LT200 surfaces encouraged cell stretching, seen across and along the laser created tracks. At 48 h, the cell shapes had changed again, with LT100 causing the cells to become slightly smaller in size in comparison with their untreated BM counterparts. LT200 appears to have the most influence on cell shape, as it is clear that at 48 h the cells displayed a spindle shaped appearance. Meanwhile, the BM surface encourages cells to stretch; this is probably an effect of the scratch marks remaining after the SiC paper polishing process.
Proliferation
The fluorescence intensity of MSCs on the untreated and laser treated surfaces at Day 3, 7 and 14 can be seen in Figure 6. A higher intensity is associated with more cells being present on the surface. There were no significant differences between surfaces within time points until Day 7 and 14. At Day 7, there was a significant difference between BM and LT100 (p < 0.05). At Day 14, the untreated BM had the highest fluorescence intensity, followed by LT200 then LT100. BM was significantly different from the two laser treated groups, LT100 (p < 0.001) and LT200 (p < 0.05).
Differentiation
MSC morphology was qualitatively assessed using fluorescence staining of osteocalcin at Day 7, 14 and 21. There was a very distinct difference in MSC morphology on the untreated and laser treated surfaces, as seen in Figure 7. Those on BM were spindle shaped, while TCP had good coverage, as expected. More distinct cell shapes can be seen on LT200 than on LT100. The distinctly different cell morphology can be seen at the later differentiation time points. The number of cells increased on all surfaces between Day 7 and Day 14, with TCP having nearly full coverage and a monolayer. The cells on BM had the same spindle shape, while the cells on both laser treated surfaces had begun to form clusters, with the larger clusters seen on LT200 and small rounded cells present alongside the clusters.
At Day 21, the TCP had formed a monolayer of MSCs, while cell clusters could be seen on the BM surface. There were multiple cell cluster formations on the laser treated surfaces, which could be indicative of bone-like nodule formation, suggesting that the chosen laser treatment parameters encourage bone to grow more quickly than on the BM surface and that laser surface treatment is a potential modification technique to encourage faster bone growth.
Bacterial Attachment
The bacterial attachment results on the untreated and laser treated surfaces can be seen in Figure 8. Attachment was quantitatively analysed using S. aureus coverage and the live-to-dead ratio to determine which surface(s) elicited a bactericidal response. In the fluorescence images, live cells were stained green with SYTO 9 (viable bacteria with intact cell membranes), while dead cells were stained red with propidium iodide (dead bacteria with damaged membranes). As seen in Figure 8A, there was a visibly higher number of green stained cells present on the untreated surface. After only 24 h, the lower number of bacteria on the laser treated surfaces in comparison with the untreated surface suggests that laser treatment created an inhospitable environment for the bacteria, causing them to become non-viable. The bacterial attachment results were quantified using bacterial coverage and the ratio of live-to-dead cells. The bacterial coverage on each laser treated surface was significantly different (p < 0.001) from the untreated BM surface; see Figure 8B. There was a four-fold reduction in bacterial coverage on the laser treated groups relative to the untreated group. Likewise, the live/dead ratio of bacteria present on the laser treated surfaces was drastically reduced; see Figure 8C.
Discussion
Bimodal texturing is an important consideration for improvement in surface feature design. Micro and nanotopography must be used in tandem to create a surface that is sufficient for long-term stability [19,33]. Micro scale features such as grooves, ridges and pits can increase surface area and provide more opportunities for attachment, and these features can cause cells to align and organise within them. Features at the nano scale directly affect protein interactions, filaments and tubules, which control cell signalling and regulate cell adhesion, proliferation and differentiation [34].
The roughest regions were prominent at the very edge of the laser tracks for each surface (as indicated by the arrows in Figure 1A,B). This can be attributed to melt pool dynamics at the liquid/solid boundary (or melt pool/heat affected zone (HAZ) boundary) during the laser re-melting process [35]. It is known that the micro-ripple surface (as seen in Figure 2A) results from oscillation of the liquid metal due to the Marangoni convection and hydrodynamic processes driven by thermocapillary motion acting on the melt pools [36,37]. The black marks seen in Figure 2A(a-d) could be due to contamination from the material handling process, although further in-depth analysis is required.
There was a more uniform and homogeneous oxide film present post laser treatment (Figure 2D). The elements on the LT100 and LT200 surfaces were uniformly distributed with no measurable spatial variation over the surface area imaged, suggesting the elongated and thin brush-like marks (extended from the interior areas of laser tracks to the boundary, as seen in Figure 2A) were laser-induced surface features as a consequence of the convection field in the complex melt pool dynamics and solidification processes during laser treatment [38]. Tantalum is the only element that did not show any evidence of particle enrichment, perhaps due to it being the least abundant element present in the quaternary alloy.
There was a notable preferential crystallographic phase shift post laser treatment (Figure 3). This could be due to the preferential orientation of a specific phase (i.e., peak angle of ~38.5°) caused by higher laser energy input at the lower scanning speed. However, further in-depth analysis is still required to investigate how the difference in laser energy input between the two scanning speeds (i.e., higher at 100 mm/s and lower at 200 mm/s) makes the change. Sharp dominant phase peaks present in the XRD analysis indicated the treated material had a high degree of crystallinity.
The significant reduction of bacterial attachment and/or biofilm formation on the laser treated surfaces can be attributed to the following: (i) Firstly, the SEM-EDX results indicated "homogenisation" or "finer dispersion" of metal compositions in the laser-melted surfaces (i.e., absence of metal-enriched particles, as indicated in Figure 2D). In addition, the laser treated surfaces look smoother in the SEM images (Figure 2A), and the original fine-scale texture and roughness of the untreated areas was not present after laser treatment. Both of these can be linked to the formation of a more uniform and homogeneous oxide film after laser treatment. (ii) Secondly, the XPS data and also the SEM-EDX results indicate that overall the oxide film was thinner after laser treatment, but it was likely to be more uniform in thickness, as described in point (i) above. Therefore, possibly, there was better coverage by a more well-defined though thinner oxide layer over the treated areas. (iii) Thirdly, laser treatment removed residual organic contaminants, as indicated by the reduced proportion of both oxygen-bonded and carbon-bonded species in the oxide film; such contaminants could act as potential sources attracting bacteria to attach to surfaces via non-covalent interactions [39]. Laser treatment helping to reduce overall levels of organic surface contaminants has been reported elsewhere [16,36]. This can be due to rapid vaporisation of more volatile species under the sudden input of energy from the laser, or, possibly, more adherent contaminants being buried as the locally-melted metal re-solidifies after laser treatment. (iv) Finally, the presence of titanium nitride (TiN) in the oxide film after laser treatment could contribute to the antibacterial activities. It has been reported that a surface with TiN can deactivate biofilm formation [14,40]. Likewise, zirconium nitride (ZrN) is known to be an antibacterial material [41]. However, the possibility of an antibacterial effect attributed to ZrN can be eliminated in this study, because there is no evidence for the existence of ZrN in the oxide film, as shown in the XPS narrow scan profile in Figure 4B (i.e., the binding energy for ZrN would be expected to be around 180.9 eV [42]). In contrast, the evidence for TiN present in the oxide film after laser treatment is clear (i.e., the curve fitted at the binding energy around 456.2 eV in Figure 4B).
It is important to note that, although the evidence for the appearance of metallic species (namely Ti and Nb metals) in the oxide film after laser treatment is also very clear, the results in this study indicate that these do not necessarily encourage bacterial attachment and/or biofilm formation. The presence of the oxide layer improves the corrosion resistance of the material's passive surface [43]. The organic contamination found in the form of O-C and O=C bonds could be due to the material handling process or a carbon-containing cleaning agent; further in-depth analysis is required.
The ultrastructure of the bone-titanium interface demonstrates simultaneous direct bone contact, osteogenesis and bone resorption [44]. Upon implantation, a material surface initially interacts with water, followed by protein adsorption and then cell interaction, which includes MSCs. The surface macro scale is responsible for the interlock between bone and implant [45]. The micro scale can influence cell orientation [46] and potentially proliferation capacity and differentiation ability. The nano scale can influence cell-to-cell signalling [34], and can override biochemical cues [47]. Independent of the surface chemistry, the surface scale, namely micro and nano topography, has a significant effect on cell behaviour [48]. The cell cytoskeleton organisation is strongly affected by the orientation of the surface structures (i.e., physical roughness and topography), which stimulates cell contact guidance [49]. Contact guidance refers to the phenomenon whereby cells adjust their orientation and align along nano-micro-groove-like patterns to grow. It was first observed by Harrison [50] in 1912, and the terminology was first described by Weiss and Taylor [46] in 1945. Curtis and Wilkinson reviewed the materials (one being titanium surface oxides) and topographical structures which can affect cell behaviour, such as grooves, ridges, spikes and pits [51]. Research has evolved since, and emphasis is now placed on how cell geometric cues can direct cell differentiation, manipulating cells into square, rectangular and pentagon shapes in the process [52].
The cell coverage was similar across the early attachment time points (Figure 5), which correlates with the proliferation data at Day 3 (Figure 6). Although the coverage was similar, the morphology was distinctive on the laser treated surfaces. At Day 14, the proliferation results show that the BM surface had the highest fluorescence intensity (significantly different from both laser treated surfaces), yet the differentiation results at the same time point show that BM had spindle shaped cells and approximately 60% coverage, while the laser treated groups first began to show morphological evidence of cluster formation, perhaps indicative of bone-like nodules; additional in-depth analysis is required to further characterise the cell behaviour. It is clear that cell shape is a relevant parameter in the biomaterial design process, as a fundamental physiological feature of functional tissue [53]. Faster bone formation by laser surface treatment could be explained by the hypothesis that osteoblast precursors migrating into the pores of a rough surface reach confluence earlier within the enclosed space, cease proliferation and then differentiate [54]. Improvement of overall in vitro performance links to the surface roughness and topographical features, with these being the main indicators of osseointegration success [19,55,56]. The introduction of laser technology for titanium surface modification is feasible, and evidently beneficial for accelerating bone formation [57].
The antibacterial effect arising from the changes in surface chemistry of TNZT after laser treatment could, in principle, also apply to the attachment of MSCs, i.e., negatively impact MSC attachment on laser treated surfaces. However, positive results for attachment and coverage of MSCs on the laser treated surfaces, at least comparable with those of the untreated (polished) surfaces, can still be observed in this study. This can be attributed to the size difference between bacteria and MSCs, namely 0.5-1 µm and 20-30 µm, respectively.
In the authors' recent study [16,36], bacteria were found to be insensitive to the micro-sized surface features, namely micro-ripples in the laser tracks [36], while the response of bacteria was very much dictated by the nano-sized features, i.e., nano-spiky features are effective in inhibiting bacterial attachment and in killing the bacteria that do attach [16]. Similarly, Puckett et al. found that certain nanometre-sized titanium topographies may be useful for reducing bacteria adhesion while promoting bone tissue formation [58]. In contrast, the relatively large-sized MSCs, compared with the bacteria, are more sensitive to surface features in both the micro- and nano-sized range. It has been reported by Chan et al. [59] that laser-induced surface ripples or patterns of micro size can encourage higher cell attachment of MSCs, leading to a higher cell coverage on the surfaces after laser treatment. Surface modification is emerging as a promising strategy for preventing biofilm formation on abiotic surfaces [60]. Implant success relies upon the surface inhibiting bacterial adherence and concomitantly promoting tissue growth.
To summarise, there is a competing process between effects caused by the laser-induced surface chemistry and micro-features, i.e., changes in surface chemistry after laser treatment could make the surface less hospitable to MSCs, whereas the micro-sized ripples (or physical features) can promote more cell attachment and coverage. However, at this stage it is still inconclusive as to how the laser-induced chemical or physical effects individually act on the response of MSCs, or which one is more dominant. A single surface feature effect cannot be studied in isolation from the others.
Conclusions
The beta titanium alloy Ti-35Nb-7Zr-6Ta coupled with fibre laser surface treatment in a high speed regime (ranging from 100 to 200 mm/s) is a promising choice for load bearing implant applications. The surface roughness, topography and composition can be tailored by fibre laser treatment to improve in vitro mesenchymal stem cell attachment, proliferation and differentiation, as well as reducing bacterial attachment.
The major findings of this research are summarised below:
1) Fibre laser treatment can be used to polish the TNZT surfaces in the high speed regime, with the scanning speed of 200 mm/s (or 12 m/min) being the most effective in this study;
2) The laser treated samples exhibited surface homogenisation (i.e., a homogeneous elemental distribution), and showed only beta phases after fibre laser treatment, namely β (110) and (200);
3) Fibre laser treatment created a rougher (Ra value of BM was 199 nm, LT100 was 256 nm and LT200 was 232 nm) and spiky surface (Rsk > 0 and Rku > 3) with a homogeneous elemental distribution and the presence of TiN in the outermost oxide layer, which encouraged bone-like nodule formation and a bactericidal effect.
To summarise, the cell (attachment, proliferation and differentiation of MSCs) and bacterial culture (live/dead ratio of S. aureus) results indicate that LT200 is the optimal condition to treat the TNZT surface, giving the most desirable MSC responses and a significant reduction in bacterial adhesion.
Funding:
The work described in this paper was supported by research grants from the Queen's University Belfast, awarded to C-WC and LC (D8201MAS, D8304PMY). The cell work was funded by Biotechnology and Biological Sciences Research Council (BBSRC) and British Heart Foundation (BHF) grants.
Joint reconstruction-segmentation on graphs
Practical image segmentation tasks concern images which must be reconstructed from noisy, distorted, and/or incomplete observations. A recent approach for solving such tasks is to perform this reconstruction jointly with the segmentation, using each to guide the other. However, this work has so far employed relatively simple segmentation methods, such as the Chan--Vese algorithm. In this paper, we present a method for joint reconstruction-segmentation using graph-based segmentation methods, which have been seeing increasing recent interest. Complications arise due to the large size of the matrices involved, and we show how these complications can be managed. We then analyse the convergence properties of our scheme. Finally, we apply this scheme to distorted versions of ``two cows'' images familiar from previous graph-based segmentation literature, first to a highly noised version and second to a blurred version, achieving highly accurate segmentations in both cases. We compare these results to those obtained by sequential reconstruction-segmentation approaches, finding that our method competes with, or even outperforms, those approaches in terms of reconstruction and segmentation accuracy.
Image reconstruction background
The general setting for image reconstruction is that one has some observations $y$ of an image $x^*$, which are related via
$$y = T(x^*) + e \tag{1.1}$$
where $T$ is the forward model, typically a linear map, and $e$ is an error term (e.g. a Gaussian random variable). Solving (1.1) for $x^*$, given $y$, $T$, and the distribution of $e$, is in general an ill-posed problem. A key approach to solving (1.1), pioneered by Tikhonov [46] and Phillips [40], has been to solve the variational problem
$$x \in \operatorname*{argmin}_{x} \; D(T(x), y) + R(x) \tag{1.2}$$
where $R$ is a regulariser, encoding a priori information about $x^*$, and $D$ enforces fidelity to the observations and encodes information about $e$.
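To make (1.2) concrete, here is a minimal NumPy sketch of its simplest instance: a linear forward model, quadratic fidelity $D$, and Tikhonov regulariser $R(x) = \lambda\|x\|^2$, for which the minimiser solves the normal equations. The function name, parameter values, and these specific choices of $D$ and $R$ are illustrative only, and are not the choices used later in this paper.

```python
import numpy as np

def tikhonov_reconstruct(T, y, lam=1e-2):
    """Solve argmin_x ||T x - y||^2 + lam * ||x||^2 via the normal equations.

    A minimal instance of the variational problem (1.2) with quadratic
    fidelity D and Tikhonov regulariser R(x) = lam * ||x||^2.
    """
    # Normal equations: (T^T T + lam I) x = T^T y
    A = T.T @ T + lam * np.eye(T.shape[1])
    return np.linalg.solve(A, T.T @ y)

# Forward model: a random linear map, plus a Gaussian error term e.
rng = np.random.default_rng(0)
T = rng.standard_normal((100, 80))
x_true = rng.standard_normal(80)
y = T @ x_true + 0.1 * rng.standard_normal(100)
x_hat = tikhonov_reconstruct(T, y, lam=0.5)
print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```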
Image segmentation background
One of the most celebrated methods for image segmentation is that of Mumford and Shah [37]. This method segments an image $x : \Omega \to \mathbb{R}$ by constructing a piecewise smooth $\tilde x \approx x$ and a set of contours $\Gamma$ (the boundaries of the segments) minimising a given segmentation energy, namely the Mumford-Shah functional
$$MS(\tilde x, \Gamma) := \int_\Omega (\tilde x - x)^2 \, d\mu + \lambda \int_{\Omega\setminus\Gamma} |\nabla \tilde x|^2 \, d\mu + \nu\, \mathrm{length}(\Gamma), \tag{1.3}$$
where $\mu$ is the Lebesgue measure. As $MS$ is difficult to minimise in full generality, Chan and Vese [17] devised a method where $\tilde x$ is restricted to being piecewise constant. This simplifies (1.3) to an energy which can be minimised via level-set methods. Some key drawbacks of these methods are that they: can be computationally expensive, as one must solve a PDE; can be hard to initialise [23]; can perform poorly if the image has inhomogeneities [51]; and are constrained by the image geometry, and so less able to detect large-scale non-local structures. These Mumford-Shah methods are related to Ginzburg-Landau methods, see e.g. [21], because the Ginzburg-Landau functional Γ-converges to total variation [36]. Because of this Γ-convergence (which also holds on graphs [47]), Bertozzi and Flenner [7] were inspired to develop a segmentation method based on minimising the graph Ginzburg-Landau functional using the graph Allen-Cahn gradient flow. Soon after, Merkurjev, Kostić, and Bertozzi [34] introduced an alternative method using a graph Merriman-Bence-Osher (MBO) scheme. These "PDEs on graphs" methods have received considerable attention, both theoretical, see e.g. [6,33,48], and in applications, see e.g. [14,15,24,31,35,42]. In previous work by some of the authors, Budd and Van Gennip [12] showed that the graph MBO scheme is a special case of a semi-discrete implicit Euler (SDIE) scheme for graph Allen-Cahn flow, and Budd, Van Gennip, and Latz [13] investigated the use of this SDIE scheme for image segmentation and developed refinements to earlier methods that resulted in improved segmentation accuracy.
Joint reconstruction-segmentation background
Reconstruction-segmentation was traditionally approached sequentially: first reconstruct the image, then segment the reconstructed image. The key drawback of this method is that the reconstruction ignores any segmentation-relevant information. At the other extreme is the end-to-end approach: first collect training data $\{(y_n, u_n)\}$ of pairs of observations and corresponding segmentations, then use this data to learn (e.g., via deep learning) a map that sends $y$ to $u$. However, this forgoes explicitly reconstructing $x^*$, can require a lot of training data, and the map can be a "black box" (i.e., it may be hard to explain its segmentation or prove theoretical guarantees).
Joint reconstruction-segmentation (a.k.a. simultaneous reconstruction and segmentation) lies between these extremes, seeking to perform the reconstruction and segmentation simultaneously, using each to guide the other. It was first proposed by Ramlau and Ring [43] for CT imaging, with related (but extremely varied) methods later developed for other medical imaging tasks (for an overview, see [19, §2.4]). An extensive theoretical overview of task-adapted reconstruction was developed in Adler et al. [2], which found that joint reconstruction-segmentation produced more accurate segmentations than both the sequential and end-to-end approaches. These methods were enhanced in Corona et al. [19] using Bregman methods, and a number of theoretical guarantees were proved about this enhanced scheme. However, these approaches have mostly relied on Mumford-Shah or Chan-Vese methods for the segmentation.
Contributions and outline
The primary contribution of this work will be a joint reconstruction-segmentation method based around the joint minimisation problem
$$\min_{x \in \mathbb{R}^{N\times\ell},\, u \in \mathcal{V}} \; \alpha\|T(x) - y\|_F^2 + R(x) + \beta\, GL_{\varepsilon,\mu,f}\big(u;\, \Omega(F(x), z_d)\big), \tag{1.4}$$
where $x$ is the reconstruction, $u$ is the segmentation, the first two terms describe a reconstruction energy as in (1.2), and the final term is a segmentation energy using the graph Ginzburg-Landau energy. The use of this energy is motivated by the success of the graph Ginzburg-Landau-based segmentation methods described in Section 1.2. These objects, and other groundwork required for this paper, will all be defined in Section 2. In particular, in this paper: i. We will present an iterative scheme for solving this minimisation problem, which alternately updates the candidate reconstruction and the candidate segmentation (Section 3).
ii. We will devise algorithms for computing the steps of this iterative scheme (Sections 4 to 6). We compute the reconstruction update by linearising the corresponding variational problem (Section 4). We compute the segmentation update via the SDIE scheme (Section 5).
iii. We will demonstrate the convergence of this iterative scheme to critical points of the joint minimisation problem (Section 7).
iv. We will apply this scheme to highly-noised and to blurred versions of the "two cows" image familiar from [7,11,13,34] (Section 8). Our scheme will exhibit very accurate segmentations which compete with or outperform sequential reconstruction-segmentation approaches.
Framework for analysis on graphs
We begin by giving a framework for analysis on graphs, abridging Budd [11, §2], which itself is abridging Van Gennip et al. [48]. Let $(V, E, \omega)$ be a finite, undirected, weighted, and connected graph with neither multi-edges nor self-loops. The finite set $V$ is the vertex set, $E \subseteq V^2$ is the edge set (with $ij \in E$ if and only if $ji \in E$ for all $i, j \in V$), and $\{\omega_{ij}\}_{i,j\in V}$ are the weights, with $\omega_{ij} \ge 0$, $\omega_{ij} = \omega_{ji}$, $\omega_{ii} = 0$, and $\omega_{ij} > 0$ if and only if $ij \in E$. We define function spaces
$$\mathcal{V} := \{u : V \to \mathbb{R}\}, \qquad \mathcal{E} := \{\varphi : E \to \mathbb{R}\}.$$
For a parameter $r \in [0,1]$, and writing $d_i := \sum_j \omega_{ij}$ for the degree of vertex $i \in V$, we define inner products on $\mathcal V$ and $\mathcal E$ (and hence inner product norms $\|\cdot\|_{\mathcal V}$ and $\|\cdot\|_{\mathcal E}$):
$$\langle u, v\rangle_{\mathcal V} := \sum_{i\in V} d_i^r u_i v_i, \qquad \langle \varphi, \psi\rangle_{\mathcal E} := \frac{1}{2}\sum_{i,j\in V} \omega_{ij}\varphi_{ij}\psi_{ij}.$$
Next, we introduce the graph variants of the gradient and Laplacian operators:
$$(\nabla u)_{ij} := u_j - u_i \;\text{ (for } ij \in E\text{)}, \qquad (\Delta u)_i := d_i^{-r}\sum_{j\in V}\omega_{ij}(u_i - u_j),$$
where the graph Laplacian $\Delta$ is positive semi-definite and self-adjoint with respect to $\mathcal V$. As shown in [48], these operators are related via $\langle u, \Delta v\rangle_{\mathcal V} = \langle \nabla u, \nabla v\rangle_{\mathcal E}$. We can interpret $\Delta$ as a matrix. Define $D := \operatorname{diag}(d)$ (i.e. $D_{ii} := d_i$, and $D_{ij} := 0$ otherwise) to be the degree matrix. Then writing $\omega$ for the matrix of weights $\omega_{ij}$ we get $\Delta = D^{-r}(D - \omega)$.
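As a quick illustration of these definitions, the following NumPy sketch builds the Laplacian $D^{-r}(D-\omega)$ from a weight matrix and numerically verifies the identity $\langle u, \Delta v\rangle_{\mathcal V} = \langle \nabla u, \nabla v\rangle_{\mathcal E}$ for $r = 1$. It is a toy check on a dense random graph, not the implementation used in the paper's experiments.

```python
import numpy as np

def graph_laplacian(omega, r=1):
    """Return the graph Laplacian D^{-r} (D - omega) for a symmetric
    non-negative weight matrix omega with zero diagonal."""
    d = omega.sum(axis=1)                  # degrees d_i = sum_j omega_ij
    L = np.diag(d) - omega                 # unnormalised Laplacian (r = 0)
    return np.diag(d ** (-r)) @ L          # r = 1: random walk Laplacian

rng = np.random.default_rng(1)
W = rng.random((6, 6)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
Delta = graph_laplacian(W, r=1)

# Check <u, Delta v>_V = <grad u, grad v>_E with the r = 1 inner products:
u, v = rng.standard_normal(6), rng.standard_normal(6)
d = W.sum(axis=1)
lhs = np.sum(d * u * (Delta @ v))                 # <u, Delta v>_V, r = 1
grad = lambda f: f[None, :] - f[:, None]          # (grad f)_ij = f_j - f_i
rhs = 0.5 * np.sum(W * grad(u) * grad(v))         # <grad u, grad v>_E
print(np.isclose(lhs, rhs))
```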
The choice of $r$ is important. For $r = 0$, $\Delta = D - \omega$ is the standard unnormalised (or combinatorial) Laplacian. For $r = 1$, $\Delta = I - D^{-1}\omega$ (where $I$ is the identity matrix) is the random walk Laplacian.
There is also an important Laplacian not covered by this definition: the symmetric normalised Laplacian $\Delta_s := I - D^{-1/2}\omega D^{-1/2}$. For image segmentation it is important to use a normalised Laplacian, see [7, §2.3], so we shall henceforth take $r = 1$.
The graph Ginzburg-Landau functional
In this paper, we shall use graph-based segmentation methods based on minimising the graph Ginzburg-Landau functional. The basic form of this functional is
$$GL_\varepsilon(u) := \frac{1}{2}\|\nabla u\|_{\mathcal E}^2 + \frac{1}{\varepsilon}\langle W\circ u, \mathbf{1}\rangle_{\mathcal V},$$
where $W$ is a double-well potential and $\varepsilon > 0$ is a parameter. In particular, following Budd [11] we shall be taking $W$ to be the double-obstacle potential:
$$W(s) := \begin{cases} \frac{1}{2}s(1-s), & s \in [0,1], \\ \infty, & \text{otherwise}. \end{cases}$$
Furthermore, we define the graph Ginzburg-Landau functional with fidelity by
$$GL_{\varepsilon,\mu,f}(u) := GL_\varepsilon(u) + \frac{1}{2}\langle u - f, M(u - f)\rangle_{\mathcal V},$$
where $M := \operatorname{diag}(\mu)$ for $\mu \in \mathcal{V}_{[0,\infty)}$ the fidelity parameter and $f \in \mathcal{V}_{[0,1]}$ is the reference. We define $Z := \operatorname{supp}(\mu)$, which we call the reference data. Note that $\mu_i$ parameterises the strength of the fidelity to the reference at vertex $i$. We may assume without loss of generality that $f$ is supported on $Z$.
It is worth briefly describing why minimising GL ε,µ,f is a good way to segment an image. The first term penalises the segmentation u if two vertices with a high edge weight are in different segments, encouraging the segmentation to group similar vertices together. The second term wants the segmentation to be binary. The third term penalises u for disagreeing with an a priori segmentation, propagating those a priori labels to the rest of the vertices.
It will be useful for our joint reconstruction-segmentation scheme to redefine $GL_{\varepsilon,\mu,f}$ as a function of both $u$ and $\omega$. A simple calculation gives that
$$GL_{\varepsilon,\mu,f}(u) = \frac{1}{4}\sum_{i,j\in V}\omega_{ij}(u_i - u_j)^2 + \frac{1}{\varepsilon}\langle W\circ u, \mathbf{1}\rangle_{\mathcal V} + \frac{1}{2}\langle u - f, M(u - f)\rangle_{\mathcal V}.$$
Note that (recalling that $r = 1$, so that $\langle a, b\rangle_{\mathcal V} = \sum_i d_i a_i b_i$ with $d_i = \sum_j \omega_{ij}$) this is linear in $\omega$. We can therefore define $G_{\varepsilon,\mu,f} : \mathcal V \times \mathbb{R}^{V\times V} \to \mathbb{R}$ by
$$G_{\varepsilon,\mu,f}(u, \omega) := \frac{1}{4}\big\langle \omega, \overline{d}(u)\big\rangle_F + \frac{1}{\varepsilon}\langle W\circ u, \mathbf{1}\rangle_{\mathcal V} + \frac{1}{2}\langle u - f, M(u - f)\rangle_{\mathcal V}, \qquad \overline{d}(u)_{ij} := (u_i - u_j)^2,$$
where $\langle\cdot,\cdot\rangle_F$ denotes the Frobenius inner product. Furthermore, note that if $v_i := \frac{1}{\varepsilon}W(u_i) + \frac{1}{2}\mu_i(u_i - f_i)^2$, then the last two terms equal $\sum_i d_i v_i = \langle \omega, v\mathbf{1}^T\rangle_F$, making the linearity in $\omega$ explicit.
Turning an image into a graph
To represent an image as a graph, we first let our vertex set $V$ be the set of pixels in the image, and consider the image as a function $x : V \to \mathbb{R}^\ell$, where $\ell$ depends on whether the image is greyscale, RGB, or hyperspectral, etc. To build our graph, we construct feature vectors $F(x) =: z : V \to \mathbb{R}^q$ where $F$ is the feature map (which we shall assume to be linear). The philosophy behind these vectors is that vertices which are "similar" should have nearby feature vectors. What "similar" means is application-specific, e.g. [49] incorporates texture into the features, [7,14] give other options, and there has been recent interest in deep learning methods for constructing features, see e.g. [20,35]. Next, we construct the weights on the graph by defining $\omega_{ij}$ to be given by some similarity function evaluated on $z_i$ and $z_j$. There are a number of standard choices for the similarity function, see e.g. [7, §2.2]. For our choices for feature map and similarity measure see Section 8.2 and (3.1), respectively.
The Nyström extension
A key practical challenge is that $V$ is usually very large, and hence matrices such as the weight matrix $\omega \in \mathbb{R}^{V\times V}$ and $\Delta$ are much too large to store in memory. Instead, we shall compress these matrices using a technique called the Nyström extension, first introduced in Nyström [39] and developed for matrices in Fowlkes et al. [22]. Consider an $N\times N$ symmetric matrix $A$, written in block form
$$A = \begin{pmatrix} A_{XX} & A_{XX^c} \\ A_{X^cX} & A_{X^cX^c} \end{pmatrix},$$
where $X$ is the interpolation set, with $|X| =: K \ll N$. Let $A_{XX} = U_X \Lambda U_X^T$, and let $u^i_X$ be a column eigenvector of $U_X$ with eigenvalue $\lambda_i$. The idea of the Nyström extension is to extend this eigenvector to a vector
$$u^i := \begin{pmatrix} u^i_X \\ \lambda_i^{-1} A_{X^cX}\, u^i_X \end{pmatrix},$$
which can be observed to resemble a quadrature rule for the eigenvalue problem. This leads to the approximation
$$A \approx \begin{pmatrix} A_{XX} \\ A_{X^cX} \end{pmatrix} A_{XX}^{-1} \begin{pmatrix} A_{XX} & A_{XX^c} \end{pmatrix} = \begin{pmatrix} A_{XX} & A_{XX^c} \\ A_{X^cX} & A_{X^cX}A_{XX}^{-1}A_{XX^c} \end{pmatrix}, \tag{2.1}$$
where in the first equality we used that $U_{X^c} = A_{X^cX}U_X\Lambda^{-1}$. The upshot of (2.1) is that we only need to store and calculate with $A_{XX}$ and $A_{X^cX}$, which are much smaller than $A$. Also, (2.1) yields an efficient way to approximate matrix-vector products $Av$.
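The following NumPy sketch illustrates (2.1) and the fast approximate matrix-vector product it yields. It uses a pseudoinverse of $A_{XX}$ for numerical robustness and a random interpolation set; both are illustrative choices (the paper's experiments use the Nyström-QR variant described in Appendix C).

```python
import numpy as np

def nystrom(A, idx):
    """Nystrom approximation of a symmetric PSD matrix A using the
    rows/columns in `idx` as the interpolation set X.
    Returns factors (C, W_pinv) with A approx C @ W_pinv @ C.T."""
    C = A[:, idx]                   # the N x K block [A_XX; A_X^cX]
    W = A[np.ix_(idx, idx)]         # A_XX, the K x K interpolation block
    return C, np.linalg.pinv(W)

rng = np.random.default_rng(2)
B = rng.standard_normal((200, 20))
A = B @ B.T                         # symmetric PSD, rank 20
idx = rng.choice(200, size=40, replace=False)
C, Wp = nystrom(A, idx)
v = rng.standard_normal(200)
# Fast approximate matrix-vector product: A v approx C (Wp (C^T v)).
err = np.linalg.norm(A @ v - C @ (Wp @ (C.T @ v))) / np.linalg.norm(A @ v)
print(err)   # near machine precision here, since K exceeds rank(A)
```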
3 A joint-reconstruction-segmentation scheme on graphs
Set-up
We will begin by formally stating our reconstruction-segmentation task.

Problem 3.1. Let $Y$ be the set of pixels of the image $x^*$ that is to be reconstructed and segmented, observed via $y = T(x^*) + e$ as in (1.1), and let $Z$ be the set of pixels of a reference data image $x_d$, for which a reference segmentation $f$ is given. Here $Y$ and $Z$ are disjoint finite sets. Given $y$, $T$, $x_d$, and $f$, reconstruct $x \approx x^*$ and find $u : Y \cup Z \to \{0,1\}$ such that $u|_Y$ segments $x$ and $u|_Z$ is close to $f$.
Next, we must incorporate this into a graph framework, following Section 2.3. Let $V := Y \cup Z$ be the vertex set of our graph, and let the edge set $E$ be given by $E := \{ij \mid i, j \in V,\, i \ne j\}$. Let $N := |Y|$ and $N_d := |Z|$. To encode the candidate reconstruction $x : Y \to \mathbb{R}^\ell$ and the reference image $x_d$ in the weights on this graph, we define feature maps $F$ and $F_d$, and feature vectors $z : Y \to \mathbb{R}^q$ and $z_d : Z \to \mathbb{R}^q$ by $z := F(x)$ and $z_d := F_d(x_d)$. Since $x_d$ and $F_d$ are given, we hereafter treat $z_d$ as given. We then define the edge weights via $\omega = \Omega(z, z_d)$, where $\Omega(z, z_d)$ is given by (for $\tilde z := (z, z_d)$)
$$\Omega(z, z_d)_{ij} := \exp\left(-\frac{\|\tilde z_i - \tilde z_j\|_F^2}{q\sigma^2}\right) \;\text{ for } i \ne j, \qquad \Omega(z, z_d)_{ii} := 0, \tag{3.1}$$
with $\|\cdot\|_F$ denoting the Frobenius norm. The $q$ in the denominator averages over the $q$ components of $\tilde z$ so that parameter choices for $\sigma$ generalise better.
Note 3.2. The feature vectors z and z d are defined so that z does not depend on x d and z d does not depend on x. This is a simplification, since x and x d might be different parts of the same image and hence one might want z to partially depend on x d . However, this simplification greatly aids in the following analysis, and in computation, as it means that the edge weights between vertices of Z can be considered fixed and given.
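For concreteness, a small sketch of the weight construction (3.1): the default $\sigma = 3$ matches the value used in the denoising experiments of Section 8, but the function name and the toy features are ours.

```python
import numpy as np

def gaussian_weights(z, sigma=3.0):
    """Weight matrix omega_ij = exp(-||z_i - z_j||^2 / (q sigma^2)) for
    feature vectors z (N x q), with zero diagonal (no self-loops)."""
    q = z.shape[1]
    sq_dists = np.sum((z[:, None, :] - z[None, :, :]) ** 2, axis=2)
    omega = np.exp(-sq_dists / (q * sigma ** 2))
    np.fill_diagonal(omega, 0.0)
    return omega

rng = np.random.default_rng(3)
z = rng.standard_normal((5, 27))    # e.g. 3x3 RGB patch features, q = 27
print(gaussian_weights(z).round(3))
```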
The joint reconstruction-segmentation scheme
To solve Problem 3.1, we will employ a variational approach. We will consider our candidate reconstructions $x$ and segmentations $u$ to be candidate solutions to the following joint minimisation problem:
$$\min_{x\in\mathbb{R}^{N\times\ell},\, u\in\mathcal{V}} \; \alpha\|T(x) - y\|_F^2 + R(x) + \beta\, G_{\varepsilon,\mu,f}\big(u, \Omega(F(x), z_d)\big), \tag{3.2}$$
where $R$ is a convex regulariser, which following Appendix B we shall assume can be written as $R(x) = \mathcal{R}(K(x))$ for $K$ a linear map and $\mathcal{R}$ convex and lower semicontinuous (l.s.c.) with convex conjugate $\mathcal{R}^*$ proper, convex, l.s.c., and non-negative. The first two terms in the objective functional are a standard Tikhonov reconstruction energy as in (1.2), and the final Ginzburg-Landau term is the segmentation energy. As this problem (and related variational problems considered in this paper) are non-convex, by "solving" we will mean finding adequate local minimisers.
To avoid needing to solve the difficult problem (3.2) directly, we will use the following alternating iterative scheme to approach solutions (where $\alpha, \beta, \eta_n, \nu_n$ are parameters):
$$x_{n+1} \in \operatorname*{argmin}_{x} \; \alpha\|T(x) - y\|_F^2 + R(x) + \beta\, G_{\varepsilon,\mu,f}\big(u_n, \Omega(F(x), z_d)\big) + \eta_n\|x - x_n\|_F^2, \tag{3.3a}$$
$$u_{n+1} \in \operatorname*{argmin}_{u} \; \beta\, G_{\varepsilon,\mu,f}\big(u, \Omega(F(x_{n+1}), z_d)\big) + \nu_n\big\|(u - u_n)|_Y\big\|_{\mathcal V}^2. \tag{3.3b}$$
We can understand this scheme intuitively as iterating the following steps: I. Given the current segmentation, update the reconstruction using the segmentation energy as an extra regulariser and the previous reconstruction as a momentum term.
II. Given the current reconstruction, update the segmentation using the previous segmentation of the image to be reconstructed as a momentum term.
Initialisation
The simplest initial reconstruction $x_0$ would be $x_0 := T^+(y)$, where $T^+$ is the (Moore-Penrose) pseudoinverse of $T$ (see [27, §5.5.2]). However, in practice $T^+(y)$ can be too poorly structured to give a good initial segmentation. Also, the pseudoinverse can be highly unstable [27, §5.5.3] and does not exist for non-linear $T$. Thus, we favour initialising by applying a cheap and better behaved reconstruction method to $y$. The initial segmentation $u_0$ is constructed by segmenting $x_0$ via the SDIE methods to be described in Section 5.
Solving (3.3a)
We now describe how we compute (approximate) solutions to (3.3a). This minimisation problem is highly computationally challenging, so we will simplify it by linearising (3.3a). This reduces the problem to one which can be solved by standard methods.
Linearising (3.3a)
The challenging term in (3.3a) is the Ginzburg-Landau energy term. Recall from Section 2.2 that this can be written
$$G_{\varepsilon,\mu,f}\big(u_n, \Omega(F(x), z_d)\big) = F_1(F(x)) + F_2(F(x)) + \text{const}, \tag{4.1}$$
where, for a matrix $A$, $A_{YZ} := (A_{ij})_{i\in Y, j\in Z}$ and likewise for $A_{YY}$, and where $F_1$ collects the terms involving the block $\Omega_{YY}(F(x), z_d)$ and $F_2$ those involving $\Omega_{YZ}(F(x), z_d)$ (the remaining terms involve only the fixed block $\Omega_{ZZ}$, and hence are constant in $x$). Let us assume that our candidate minimiser for (3.3a) is close to $x_n$ (this assumption will become more reasonable the higher the value of $\eta_n$ is). Then we can make the following approximation:
$$F_1(F(x)) + F_2(F(x)) \approx F_1(F(x_n)) + F_2(F(x_n)) + \langle g_n,\, x - x_n\rangle_F, \tag{4.2}$$
where $g_n := \nabla_x F_1(F(x_n)) + \nabla_x F_2(F(x_n))$. We will describe how to compute $g_n$ in Appendix A. Using this approximation, we can approximate (3.3a) by solving
$$x_{n+1} \in \operatorname*{argmin}_{x} \; \alpha\|T(x) - y\|_F^2 + R(x) + \beta\langle g_n, x\rangle_F + \eta_n\|x - x_n\|_F^2. \tag{4.3}$$
This is of the form of a standard variational image reconstruction problem (1.2). To solve (4.3), we shall be employing an algorithm of Chambolle and Pock [16], see Appendix B.
Note 4.1. Due to difficulties in employing the algorithms of [16] for non-linear T , we will henceforth take T to be linear (except in Section 7). The framework we describe in this paper is however applicable for general T , so long as one is able to efficiently solve (4.3) for that T .
For $t, x \in \mathbb{R}$, let $F_t(x) := (1 - e^{-tx})/x$ (with $F_t(0) := t$), and extend $F_t$ to (real) matrix inputs via its Taylor series. Then, for any given $u_0 \in \mathcal{V}$, the fidelity-forced diffusion equation (5.3) has a unique solution, given by the map
$$u(t) = e^{-t(\Delta + M)}u_0 + F_t(\Delta + M)\, Mf.$$
The solution to (5.2) is then given by the following theorem.
Note 5.3. The τ = ε special case described by (5.4b) is the graph MBO scheme, which has seen widespread use in image segmentation, pioneered by Merkurjev et al. [34].
We describe how to compute this SDIE scheme in Appendix C.
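As a rough illustration of the $\tau = \varepsilon$ (MBO) special case, the following sketch performs one fidelity-forced-diffusion-then-threshold step on a small dense graph, using exact matrix functions rather than the Nyström/Strang approximations of Appendix C. It assumes $\mu$ is not identically zero (and the graph connected), so that $\Delta + M$ is invertible; all names and parameter values in the demo are illustrative.

```python
import numpy as np
from scipy.linalg import expm

def mbo_step(u, Delta, M, f, tau):
    """One graph MBO update (the tau = eps case of the SDIE scheme):
    fidelity-forced diffusion du/dt = -Delta u - M (u - f), run for time tau,
    followed by thresholding at 1/2. Dense small-graph sketch only."""
    A = Delta + M                  # invertible when mu is not identically 0
    E = expm(-tau * A)
    # Variation of constants: u(tau) = e^{-tau A} u0 + A^{-1}(I - e^{-tau A}) M f
    b = np.linalg.solve(A, (np.eye(len(u)) - E) @ (M @ f))
    return ((E @ u + b) >= 0.5).astype(float)

# Tiny demo on a random connected graph, with fidelity on the first 3 vertices:
rng = np.random.default_rng(7)
W = rng.random((8, 8)); W = (W + W.T) / 2; np.fill_diagonal(W, 0.0)
d = W.sum(axis=1)
Delta = np.diag(1 / d) @ (np.diag(d) - W)     # random walk Laplacian (r = 1)
M = np.diag([50.0] * 3 + [0.0] * 5)
f = np.array([1.0, 1.0, 0.0] + [0.0] * 5)
print(mbo_step(rng.random(8), Delta, M, f, tau=0.1))
```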
The algorithm for (3.3b)
We summarise the above as Algorithm 2.
Convergence analysis
In this section, we will show that (3.3) converges to critical points of (3.2), using the theory from Attouch et al. [5]. From Note 4.1 we recall that in this section we do not require $T$ to be linear. We first rewrite (3.2) abstractly:
$$\min_{u\in\mathcal{V},\, x\in\mathcal{X}} \; J(u, x) := F(x) + G(u, x) + H(u),$$
where $\mathcal{X} := \mathbb{R}^{N\times\ell}$, $F(x) := \alpha\|T(x) - y\|_F^2 + R(x)$, $G(u, x) := \beta\, G_{\varepsilon,\mu,f}\big(u, \Omega(F(x), z_d)\big)$ (with the double-obstacle potential split into its smooth quadratic part, absorbed into $G$, and the constraint $u \in \mathcal{V}_{[0,1]}$), and $H(u) := 0$ if $u \in \mathcal{V}_{[0,1]}$ and $H(u) := \infty$ otherwise.

Note 7.1. Both $F$ and $H$ are proper and l.s.c., and $G$ is $C^\infty$ (indeed, analytic), so $J$ is proper and l.s.c.

Then (3.3) can be written as
$$x_{n+1} \in \operatorname*{argmin}_x \; F(x) + G(u_n, x) + \eta_n\|x - x_n\|_F^2, \qquad u_{n+1} \in \operatorname*{argmin}_u \; H(u) + G(u, x_{n+1}) + \nu_n\big\|(u - u_n)|_Y\big\|_{\mathcal V}^2, \tag{7.1}$$
and our partially linearised iterative scheme can be written as
$$x_{n+1} \in \operatorname*{argmin}_x \; F(x) + \beta\langle g_n, x\rangle_F + \eta_n\|x - x_n\|_F^2, \qquad u_{n+1} \in \operatorname*{argmin}_u \; H(u) + G(u, x_{n+1}) + \nu_n\big\|(u - u_n)|_Y\big\|_{\mathcal V}^2. \tag{7.2}$$
However, the presence of the semi-norm $\|\cdot|_Y\|_{\mathcal V}$ in (7.1b) is an obstacle to the deployment of the theory from [5]. Hence, we will make an assumption that for all $n$, $u_n|_Z = f$. In practice, we have observed that this approximately holds, and furthermore the larger the value of $\mu$ the more closely this will hold. Thus, under this assumption (7.1) becomes:
$$x_{n+1} \in \operatorname*{argmin}_x \; F(x) + G(u_n, x) + \eta_n\|x - x_n\|_F^2, \qquad u_{n+1} \in \operatorname*{argmin}_u \; H_f(u) + G(u, x_{n+1}) + \nu_n\|u - u_n\|_{\mathcal V}^2, \tag{7.3}$$
where $H_f(u) := 0$ if $u \in \mathcal{V}^f_{[0,1]} := \{u \in \mathcal{V}_{[0,1]} \mid u|_Z = f\}$ and $H_f(u) := \infty$ otherwise, and we write $J_f := F + G + H_f$. We begin by making some key definitions.

Definition 7.2 (Kurdyka-Lojasiewicz property). A proper l.s.c. function $g : \mathbb{R}^n \to (-\infty, \infty]$ has the Kurdyka-Lojasiewicz property at $\bar z \in \operatorname{dom} \partial g$ if there exist $\eta \in (0, \infty]$, a neighbourhood $U$ of $\bar z$, and a continuous concave function $\varphi : [0, \eta) \to [0, \infty)$, such that
• $\varphi$ is $C^1$ with $\varphi(0) = 0$ and $\varphi' > 0$ on $(0, \eta)$, and
• for all $z \in U$ such that $g(\bar z) < g(z) < g(\bar z) + \eta$, the Kurdyka-Lojasiewicz inequality holds:
$$\varphi'\big(g(z) - g(\bar z)\big)\, \operatorname{dist}\big(0, \partial g(z)\big) \ge 1.$$
If $\varphi(s) := cs^{1-\theta}$ is a valid concave function for the above with $c > 0$ and $\theta \in [0, 1)$, then we will say that $g$ has the Kurdyka-Lojasiewicz property with exponent $\theta$ at $\bar z$.

Definition 7.3 (Semi-analyticity and sub-analyticity). Following e.g. Lojasiewicz [32], we define $A \subseteq \mathbb{R}^n$ to be a semi-analytic set if for all $z^* \in \mathbb{R}^n$ there exists a neighbourhood $U$ containing $z^*$ and a finite collection of analytic functions $(a_{ij}, b_{ij})$ such that
$$A \cap U = \bigcup_i \bigcap_j \{z \in U \mid a_{ij}(z) = 0,\; b_{ij}(z) > 0\}.$$
Following Hironaka [28], we define $A$ to be a sub-analytic set if for all $z^* \in \mathbb{R}^n$ there exists a neighbourhood $U$ containing $z^*$, $m \in \mathbb{N}$, and a bounded semi-analytic set $B \subset \mathbb{R}^{n+m}$ such that
$$A \cap U = \{z \mid \exists y \in \mathbb{R}^m \text{ such that } (z, y) \in B\}.$$
We define $g : \mathbb{R}^n \to (-\infty, \infty]$ to be a semi-analytic function if its graph $\operatorname{Gr} g := \{(z, z') \in \mathbb{R}^n \times \mathbb{R} \mid z' = g(z)\}$ is a semi-analytic set, and $g$ to be a sub-analytic function if its graph is a sub-analytic set. Both of these collections of sets are closed under elementary set operations.
We collect some key results regarding these definitions in the following lemma.
i. If $g$ is proper and l.s.c., then for all $\bar z \in \operatorname{dom} \partial g$ with $0 \notin \partial g(\bar z)$, $g$ has the Kurdyka-Lojasiewicz property with exponent $\theta$ at $\bar z$ for every $\theta \in [0, 1)$.

ii. If $g$ is proper and sub-analytic, $\operatorname{dom} g$ is closed, and $g$ is continuous on its domain, then for all $\bar z \in \operatorname{dom} g$ with $0 \in \partial g(\bar z)$, there exists $\theta \in [0, 1)$ such that $g$ has the Kurdyka-Lojasiewicz property with exponent $\theta$ at $\bar z$.

iii. If $g : \mathbb{R}^n \to (-\infty, \infty]$ is sub-analytic and $h : \mathbb{R}^n \to \mathbb{R}$ is analytic, then $g + h$ is sub-analytic.

Proof of (iii). Fix $(z^*, w^*) \in \mathbb{R}^n \times \mathbb{R}$. Since $g$ is sub-analytic, there exists a neighbourhood $U$ containing $(z^*, w^* - h(z^*))$ and a bounded semi-analytic set $B \subset \mathbb{R}^{n+1+m}$ such that $\operatorname{Gr} g \cap U = \{(z, w) \mid \exists y \in \mathbb{R}^m \text{ such that } (z, w, y) \in B\}$. Since $h$ is continuous, there exists a neighbourhood $V$ containing $(z^*, w^*)$ such that for all $(z, w) \in V$, $(z, w - h(z)) \in U$. Define $B' := \{(z, w, y) \mid (z, w - h(z), y) \in B\}$. Since $h$ is analytic, $B'$ is a bounded semi-analytic set, and for all $(z, w) \in \operatorname{Gr}(g + h) \cap V$, there exists $y \in \mathbb{R}^m$ such that $(z, w, y) \in B'$. It follows that $g + h$ is sub-analytic.

Proof of (iv). $\mathcal{V}^f_{[0,1]}$, $\mathcal{X}$, and $R$ are semi-analytic, and hence sub-analytic, and therefore if $\operatorname{Gr} g$ is sub-analytic then so is $\operatorname{Gr} h$.

Assumption 7.5. Suppose that $R$ is sub-analytic, continuous on its domain, bounded below, and $\operatorname{dom} R$ is closed. Suppose also that $T$ is analytic and that $F(x) \to \infty$ as $\|x\|_F \to \infty$.

Note 7.6. Examples of $R$ satisfying this assumption are: $R(x) := \|Ax\|_1$ (commonly used in compressed sensing, see Adcock and Hansen [1]), where $A$ is any matrix, and $R$ given by a feedforward neural network with a ReLU activation function (commonly used as regularisers, see e.g. Arridge et al. [4]); see Theorem D.1 for proofs. Examples of $F$ satisfying the assumption are when $T$ is an invertible linear map or $R$ is coercive.
Proof. Note that, by Assumption 7.5, $J_f(u, x)$ is bounded below, and that for all $x \in \mathcal{X}$, $J_f(\cdot, x)$ is proper. Hence given the assumption on $\eta_n$ and $\nu_n$, [5, Assumption (H1)] is satisfied, and therefore the result follows by [5, Lemma 3.1].
which is therefore closed by the assumption. Next, note that by the assumption and the continuity of $G$ and $H_f$ (in the latter case, on its domain), $J_f$ is continuous on its domain. Finally, since $G$ is analytic, $\|T(x) - y\|_F^2$ is analytic, and $R$ is sub-analytic, it follows by Lemma 7.4(iii-iv) that $J_f$ is sub-analytic.
Let $(\bar u, \bar x) \in \operatorname{dom} \partial J_f$. If $0 \in \partial J_f(\bar u, \bar x)$, then by the above, Lemma 7.4(ii) applies. Thus there exists $\theta \in [0, 1)$ such that $J_f$ has the Kurdyka-Lojasiewicz property with exponent $\theta$ at $(\bar u, \bar x)$. If instead $0 \notin \partial J_f(\bar u, \bar x)$, then by Lemma 7.4(i), as $J_f$ is proper and l.s.c., for all $\theta \in [0, 1)$, $J_f$ has the Kurdyka-Lojasiewicz property with exponent $\theta$ at $(\bar u, \bar x)$.

Lemma 7.9. Suppose that, for some $0 < a < b$ and all $n$, $\eta_n, \nu_n \in (a, b)$, and that Assumption 7.5 holds. Then for all $u_0 \in \mathcal{V}^f_{[0,1]}$ and for all $x_0 \in \operatorname{dom} R$, if $(u_n, x_n)_{n\in\mathbb{N}}$ is defined from $(u_0, x_0)$ by (7.3), then $\|(u_n, x_n)\| := \|u_n\|_{\mathcal V} + \|x_n\|_F$ is bounded.

Proof. Since $u_n \in \mathcal{V}^f_{[0,1]}$ for all $n$, it suffices to show that $\|x_n\|_F$ is bounded. For all $u \in \mathcal V$ and for all $x \in \mathcal{X}$, $G(u, x) + H_f(u) \ge 0$, and, by Theorem 7.7(i), $J_f(u_n, x_n) \le J_f(u_0, x_0)$ for all $n$. Hence $F(x_n)$ is bounded, and so, by Assumption 7.5 (as $F(x) \to \infty$ as $\|x\|_F \to \infty$), $\|x_n\|_F$ is bounded.

Theorem 7.10. Suppose that: for some $0 < a < b$ and all $n$, $\eta_n, \nu_n \in (a, b)$; $(u_n, x_n)_{n\in\mathbb{N}}$ is defined from $(u_0, x_0)$ by (7.3); and Assumption 7.5 holds. Then the convergence behaviour is governed by the Kurdyka-Lojasiewicz exponent $\theta$:

i. If $\theta = 0$, then $(u_n, x_n)$ converges in finitely many steps.
Note 7.11. The above convergence results concerned (7.3), in which everything is solved without linearisation. In Bolte et al. [9], similar convergence results as in [5] were proved for fully linearised alternating schemes for energies with the Kurdyka-Lojasiewicz property. It is beyond the scope of this work to extend such results to (7.2), which is only partially linearised, but it is the authors' belief that such an extension is likely to be straightforward.
Applications
We will test our scheme on distorted versions of the "two cows" images familiar from [7,11,13,34].

Figure 1: Two cows: the reference data image, the image to be segmented, the reference f (which is a segmentation of the reference data image), and the ground truth segmentation associated to Example 8.1. Both segmentations were drawn by hand by the authors.

Timings were taken with implementations executed serially on a machine with an Intel® Core™ i7-9800X @ 3.80 GHz [16 cores] CPU and 32 GB of RAM.
The feature map and its adjoint
In the following applications, we will define $F$ (and likewise $F_d$ mutatis mutandis) as follows. Recall that $x : Y \to \mathbb{R}^\ell$. For each pixel $i \in Y$, suppose we have a map $N_i : \{1, \dots, k\} \to Y$ which defines the $k$ "image-neighbours" of $i$ in $Y$, and we likewise have a kernel $K : \{1, \dots, k\} \to \mathbb{R}$. Then for each channel $s \in \{1, \dots, \ell\}$ of $x$, $i \in Y$, and $p \in \{1, \dots, k\}$, we define
$$\big(Z(x^s)\big)_{ip} := K(p)\, x^s_{N_i(p)},$$
and we define $z = F(x) \in \mathbb{R}^{N\times k\ell}$ by $z := \big(Z(x^1)\; Z(x^2)\; \cdots\; Z(x^\ell)\big)$. We now derive $F^* : \mathbb{R}^{N\times k\ell} \to \mathbb{R}^{N\times\ell}$, the adjoint of $F$ with respect to the Frobenius inner products. It follows that $Z$ has adjoint $Z^* : \mathbb{R}^{N\times k} \to \mathbb{R}^N$ given by
$$\big(Z^*(w)\big)_j = \sum_{\{(i,p)\,:\, N_i(p) = j\}} K(p)\, w_{ip}.$$
Finally, by construction, $F^*(w) = \big(Z^*(w^1)\; Z^*(w^2)\; \cdots\; Z^*(w^\ell)\big)$. For this section we will take $k = 9$, the image-neighbours of pixel $i$ to be the $3\times 3$ square centred on $i$ (with replication padding at the boundaries), and $K$ to be 9 multiplied by a $3\times 3$ Gaussian kernel with standard deviation 1, centred on the centre of that square, computed via fspecial('gaussian'). As the images are RGB: $\ell = 3$, and $z \in \mathbb{R}^{N\times 27}$.
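The following NumPy sketch builds these patch features for a single channel; for RGB one would apply it per channel and concatenate, as above. The kernel construction mimics the fspecial('gaussian') choice described in the text, but the function names and the normalisation details here are our own illustrative choices.

```python
import numpy as np

def patch_features(x, K):
    """Feature map for a single-channel image x (H x W): each pixel's
    feature vector is its 3x3 neighbourhood (replication padding),
    weighted entrywise by the kernel K (length 9). Returns z (H*W x 9)."""
    xp = np.pad(x, 1, mode="edge")
    H, W = x.shape
    cols = []
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            cols.append(xp[1 + di:1 + di + H, 1 + dj:1 + dj + W].ravel())
    return np.stack(cols, axis=1) * K[None, :]

# K: 9 times a normalised 3x3 Gaussian kernel (std 1), flattened.
g = np.array([np.exp(-(i**2 + j**2) / 2) for i in (-1, 0, 1) for j in (-1, 0, 1)])
K = 9 * g / g.sum()
z = patch_features(np.arange(12.0).reshape(3, 4), K)
print(z.shape)  # (12, 9)
```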
Measures of reconstruction and segmentation accuracy
In the following examples, we will measure the accuracy of a reconstruction x relative to a ground truth of x * by the Peak Signal-to-Noise Ratio (PSNR) (defined as in [1, (2.6)]).
We will measure the accuracy of a segmentation $u$ relative to a ground truth $u^*$ by its Dice score, defined as $2TP/(2TP + FP + FN)$, where $TP$ is the number of pixels which are true positives ("positives" will here be pixels identified as "cow"), $FP$ of false positives, and $FN$ of false negatives. That is (since "positive" pixels will be given label "1"):
$$\text{Dice score of } u \text{ relative to } u^* := \frac{2\, u\cdot u^*}{2\, u\cdot u^* + u\cdot(\mathbf{1} - u^*) + (\mathbf{1} - u)\cdot u^*}.$$
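Both measures are short computations; the following sketch implements the Dice score above and one common PSNR convention (the paper's precise PSNR definition is that of [1, (2.6)], which we do not reproduce here, so the psnr function is an assumption).

```python
import numpy as np

def dice(u, u_star):
    """Dice score 2TP / (2TP + FP + FN) for binary labellings in {0,1}."""
    tp = np.sum(u * u_star)
    fp = np.sum(u * (1 - u_star))
    fn = np.sum((1 - u) * u_star)
    return 2 * tp / (2 * tp + fp + fn)

def psnr(x, x_star, peak=1.0):
    """Peak signal-to-noise ratio in dB, relative to ground truth x_star."""
    mse = np.mean((x - x_star) ** 2)
    return 10 * np.log10(peak ** 2 / mse)

u = np.array([1, 1, 0, 0]); u_star = np.array([1, 0, 0, 1])
print(dice(u, u_star))  # 2*1 / (2*1 + 1 + 1) = 0.5
```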
Denoising the "two cows" image
As our first application, we will test our method on a highly noised version of Example 8.1.
Example 8.2 (Noised two cows).
Let Z, f , and x * (the true image that is to be reconstructed and segmented) be as in Example 8.1. Let the observed data y (see Figure 3) be x * plus Gaussian noise with mean 0 and standard deviation 1, created via imnoise. Thus, T is the identity. This is a very high noise level, with a typical PSNR of y relative to x * of 6.2 dB.
Parameters and initialisation
Unless otherwise stated, all parameter values were chosen through manual trial-and-error. We let $\alpha = 0.75$, $\beta = 10^{-5}$, $\eta_n = 0.1$, $\nu_n = 10^{-6}$, and $\sigma = 3$. For the SDIE scheme for (3.3b), we choose $\tau = \varepsilon = 0.00285$, $\mu = 50\chi_Z$, $k_s = 5$, and stopping condition parameter $\delta = 10^{-10}$. For the Nyström-QR scheme we take $K = 100$. As regulariser $R$ we use Huber-TV [29], i.e. a total variation regulariser in which the Euclidean norm of the discrete gradient $\big((x_{i\to} - x_i), (x_{i\downarrow} - x_i)\big)$ is replaced by its smooth Huber relaxation, scaled by a multiplicative factor of 10; here $x_{i\downarrow}$ is the $x$-value at the pixel directly below pixel $i$ in the image and $x_{i\to}$ the $x$-value at the pixel directly to the right of pixel $i$ (with replication padding at boundaries). The initial reconstruction $x_0$ was computed via a standard TV-based (i.e., Rudin-Osher-Fatemi [44]) denoising with fidelity term 1.05. That is,
$$x_0 \in \operatorname*{argmin}_{x} \; \mathrm{TV}(x) + \frac{1.05}{2}\|x - y\|_F^2, \qquad \mathrm{TV}(x) := \sum_{i\in Y}\big\|(\nabla x)_i\big\|_2;$$
this is solved using the split Bregman method from Getreuer [25], using code from https://getreuer.info/posts/tvreg/index.html (accessed 10 August 2022), with 50 split Bregman iterations and a tolerance of $10^{-5}$. The initial segmentation $u_0$ of $x_0$ was computed via the SDIE scheme with the above parameters and initial state $u_0 = 0.47\chi_Y + f$.
Example results
Before discussing the choice of parameters, timings, and accuracy in more detail, we present an example run of the reconstruction-segmentation method for the noisy data and set-up described in Example 8.2 and Section 8.4.1, respectively. We show the results of our denoising-segmentation in Figure 4. In the figure we see that the reconstruction is not particularly good by itself (as one would expect given the very high noise level), but the segmentation is very good (compared to the baseline achieved by other methods, see Section 8.4.4). Importantly, the reconstruction increases the contrast between cows and background, which aids the segmentation.
Parameters, accuracy, and timings
We now study the denoising-segmentation of Example 8.2 more quantitatively. We consider timings of the total runs, but also of the most important steps, as well as reconstruction and segmentation accuracy. In order to understand the dependence on the algorithm's parameters we consider four settings: the setting from Section 8.4.1, a change in the segmentation parameters compared to Section 8.4.1 (K = 70 (decrease), ε = τ = 0.003 (increase)) (as used in [13]), a doubling of the segmentation weight (β = 2 · 10 −5 ), and a decrease in the reconstruction weight (α = 0.5). We list the results of these settings in Table 1, and present them in Figures 5 to 9.
In all of the settings, we obtain a quick and accurate segmentation. The PSNR of the reconstruction is similar in all settings, suggesting that this low accuracy is intrinsic to this noise level. The initial segmentation is very fast, but in all settings not as accurate as the joint reconstruction-segmentation. Of the results, some observations are particularly remarkable: • For the pure MBO segmentation discussed in [13], the Nyström rank $K$ and $\tau = \varepsilon$ were chosen as in the second setting presented here (Figure 7). Whilst this choice was optimal for pure segmentation, and the smaller rank leads to a smaller computational cost, the obtained Dice scores are worse than all the other settings.
• The larger β value ( Figure 8, see also Figure 5 third from left) puts more weight on the Ginzburg-Landau energy, leading to the best Dice average, but (unexpectedly) also the highest PSNR average.
• The Dice values have relatively large standard deviations. This is likely caused by the very high noise level, but also by the inherent randomness of the Nyström extension.
• We attain near peak Dice and PSNR values in fewer than ten iterations in all cases.
Comparison to sequential reconstruction-segmentation
Finally, we compare the accuracy of our joint reconstruction-segmentation approach to the more traditional sequential approach. That is, we will first denoise the image of cows, and then segment it. Segmentations will be performed using the graph MBO scheme with the same set-up as in Section 8.4.1.
One example of this sequential approach is our initialisation process (i.e., TV denoising followed by MBO segmentation), which we observed gave worse PSNR and Dice scores than the joint reconstructionsegmentation output. However, this is perhaps unfair because that initialisation was specifically chosen because it is quick (around 1.5s, see Table 1), whilst the whole joint reconstruction-segmentation scheme takes about 2.5 minutes to run (although as was mentioned above, the scheme achieves near peak accuracy in closer to 1 minute).
For a potentially fairer comparison, we consider three more sophisticated denoisers: denoising with total generalised variation (TGV) regularisation [10], through code by Condat [18] and downloaded from https://; a BM3D denoiser; and a CNN denoiser. Remarkably, however, we observe that the BM3D denoiser barely outperforms TV, the TGV denoiser performs slightly worse than TV, and the CNN denoiser performs much worse (see Figure 10). Comparing with the PSNR scores in Table 1, we note that the TV, TGV, BM3D, and CNN denoisers are all outperformed by the reconstruction from our scheme. The mean Dice score ± standard deviation (over 50 trials) for MBO segmentations of the TGV denoised image is 0.7138 ± 0.0578, of the BM3D denoised image is 0.8180 ± 0.0130, and of the CNN denoised image is 0.5167 ± 0.0094. Typical segmentations are shown in Figure 11. Similarly to the reconstructions, all of the segmentations are worse or considerably worse than the segmentations obtained from our scheme. However, the sequential denoising-segmentations are notably faster than our scheme; we report the timings in Table 2.
Deblurring the "two cows"
For our next example, now with T not equal to the identity map, we consider a blurred version of Example 8.1.
Example 8.3 (Blurred two cows).
Let Z, f , and the true image x * (that is to be reconstructed and segmented) be as in Example 8.1. Let the observed data y (see Figure 12) be a horizontal motion blurring of x * of distance 75 pixels (with symmetric padding at the boundary) created via imfilter, plus Gaussian noise with mean 0 and standard deviation 10 −1 created via imnoise. This y has a typical PSNR relative to x * of around 17.9 dB.
T , its adjoint, and (B.3)
In Example 8.3, the forward model $T$ works by convolving $x$ with a motion blur filter $M$ (computed using fspecial('motion')). The adjoint $T^*$ therefore corresponds to convolution with a filter $M'$ defined by reflecting $M$ in both axes. In the case of motion blur, $M' = M$, so $T$ is self-adjoint. Furthermore, $M$ has non-negative values, and so $T$ is represented by a non-negative (symmetric) matrix. In order to solve (4.3), recall that we need to compute (B.3). That is, we need to be able to compute solutions to equations of the form
$$\big((2\eta_n + \delta t^{-1})\, I + 2\alpha\, T^2\big)\, x = z.$$
We will do this via a fixed-point iteration.
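The paper does not spell out its fixed-point iteration at this point, so the following is one natural Richardson-type choice, not necessarily the authors' exact scheme: rearrange the equation as $x = (z - 2\alpha T^*(Tx))/c$ with $c := 2\eta_n + \delta t^{-1}$, which is a contraction whenever $2\alpha\|T\|^2 < c$. The 1-D moving-average "blur" (self-adjoint, with norm at most 1) and the parameter values below are illustrative.

```python
import numpy as np

def solve_fixed_point(apply_T, apply_Tt, z, c, alpha, n_iter=200):
    """Solve (c I + 2 alpha T^* T) x = z by the fixed-point iteration
    x <- (z - 2 alpha T^*(T x)) / c, contractive when 2 alpha ||T||^2 < c.
    apply_T / apply_Tt apply the (matrix-free) blur and its adjoint."""
    x = z / c
    for _ in range(n_iter):
        x = (z - 2 * alpha * apply_Tt(apply_T(x))) / c
    return x

# Toy forward model: 1-D moving-average "motion blur" (self-adjoint here).
rng = np.random.default_rng(4)
k = np.ones(5) / 5
blur = lambda x: np.convolve(x, k, mode="same")
z = rng.standard_normal(50)
eta, dt, alpha = 10.0, 0.1, 10.0          # illustrative parameter values
c = 2 * eta + 1 / dt                      # c = 2*eta_n + dt^{-1} = 30
x = solve_fixed_point(blur, blur, z, c, alpha)
print(np.linalg.norm(z - (c * x + 2 * alpha * blur(blur(x)))))  # residual
```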
We take $R$ as in Section 8.4.1 except that we change the multiplicative factor from 10 to 1. Another change is the number of iterations of the algorithm: while we iterate for 25 steps in the denoising problems, in preliminary runs (not reported) we noticed in the deblurring case that 15 steps are sufficient. The initial reconstruction $x_0$ is computed via TV deblurring with fidelity term 45, i.e.,
$$x_0 \in \operatorname*{argmin}_x \; \mathrm{TV}(x) + \frac{45}{2}\|T(x) - y\|_F^2.$$
This is solved by the split Bregman method of Getreuer [26], using code from https://getreuer.info/posts/tvreg/index.html (accessed 10 August 2022), with 50 split Bregman iterations and a tolerance of $10^{-5}$. The initial segmentation $u_0$ of $x_0$ is computed via the SDIE scheme with the above parameters and initial state $u_0 = 0.47\chi_Y + f$.
Example results
Before discussing the choice of parameters, timings, and accuracy in more detail, we present an example run of the reconstruction-segmentation method for the noisy data and set-up discussed in the beginning of this section. Here, we use the parameter setting from Section 8.5.2. We show the results of this example run in Figure 13. The effect of an increasing contrast between cows and background in the reconstruction, which we already observed for the denoising results in Section 8.4.2, is even more visible in this deblurring reconstruction. In contrast to the denoising setting, here this even leads to reconstructions with PSNRs that deteriorate over the runtime of the algorithm.
Parameters, accuracy, and timings
As in Section 8.4.3, we now study the deblurring-segmentation of Example 8.3 more quantitatively. We again consider timings of the total runs and of the key steps, as well as reconstruction and segmentation accuracy. Again, we look at four parameter settings: the setting from Section 8.5.2, a change in the segmentation parameters compared to Section 8.5.2 (K = 100 (decrease), ε = τ = 0.00285 (increase), u 0 = 0.45χ Y + f (lower constant on Y )), an increase in the segmentation weight (β = 1.52 · 10 −5 ), and an increase in the reconstruction parameters (α = η n = 10). We list the results of these settings in Table 3, and present them in Figures 14 to 18.
We now comment on those simulation results. As mentioned before, we see that in most settings, the reconstruction PSNR is reduced over the course of the algorithm. The joint reconstruction-segmentation algorithm enhances the contrast between segments to a point where the reconstruction quality suffers (see Figure 14). Unlike in most settings we used in the denoising problem, here the segmentation accuracy is not always monotonically increasing.
It is interesting to note that (as shown in Figure 16) ε, τ should be chosen smaller for the deblurring problem than for Example 8.2 or the noise-free problem (see [13]). In the Allen-Cahn equation, a smaller ε leads to a smaller interface and, thus, a harder thresholding. A harder thresholding should lead to a stronger regularisation which aids the deblurring. Indeed, the softer thresholding in Figure 16 leads to more parts of the background being incorrectly identified as cow. Moreover, an even larger Nyström rank K is required. If K = 100, the results have a large variance and barely improve the initial segmentation on average. When increasing β, we see a slight increase in the Dice standard deviation, which might imply that the method becomes more unstable when increasing the influence of the Ginzburg-Landau energy. When increasing α, as expected, we see an increased reconstruction accuracy.
In the case where β = 1.5 · 10 −5 (Figure 17), we see a certain long-term instability: the Dice score reaches its maximum at iteration step number 9, but is considerably lower at the end of the algorithm. In iteration step 9, we obtain an average Dice of 0.8754 (±0.0073), beating all of the results reported in Table 3. Hence, the number of iterations is also an important tuning parameter as the system can experience metastability.
Comparison to sequential reconstruction-segmentation
As in the denoising case, we observe that (except in the setting of Figure 16) our joint scheme outperforms the sequential TV-based initialisation in terms of Dice score (and in the setting of Figure 18, also in PSNR).
For a fairer comparison, we seek to compare the performance of our scheme to that of a sequential method using a more sophisticated deblurrer. We consider three alternative deblurrers: the TV-based deblurrer but with 500 split Bregman iterations and tolerance $10^{-10}$; the BM3D deblurrer from Mäkinen et al. [38] (i.e., BM3DDEB in the associated software); and a BM3D denoising followed by a TV deblurring (with fidelity term 150, 100 split Bregman iterations, and tolerance $10^{-7}$). Typical deblurrings via these methods are shown in Figure 19. We segment only the latter of these, as it is the only one with a perceptible improvement over the TV deblurring.

Figure 19: Typical deblurred output for the TV-based deblurrings, BM3D deblurring, and BM3D denoising followed by TV deblurring.

The mean Dice score ± standard deviation (over 50 trials) for MBO segmentations of this deblurred image (with $\tau = \varepsilon = 0.00285$, $K = 200$, and initial state $0.45\chi_Y + f$, cf. Figure 16) is 0.8737 ± 0.0143. Segmentations from the first three of these runs are shown in Figure 20. Visually, these segmentations appear slightly patchier than those we obtain with our joint method (cf. Figure 13), but they have higher Dice scores (with the exception of the metastable optimal segmentation observed in Figure 17, which has a slightly higher mean Dice score). However, the sequential deblurring-segmentations are much faster than the joint method; over 50 runs, their mean (± standard deviation) reconstruction time is 23.45 (±0.16) s, segmentation time 5.85 (±0.05) s, and total time 29.30 (±0.17) s.
Conclusions and directions for future work
In this paper, we have developed a joint reconstruction-segmentation scheme which incorporates the highly effective graph-based segmentation techniques that have been developed over the past decade. There are numerous challenges which arise in the efficient implementation of this scheme, but we have shown how these obstacles can be navigated. Furthermore, we have shown how the Kurdyka-Lojasiewicz-based theory of [5,9] can be applied to show the convergence of our scheme.
Finally, we have tested our scheme on highly-noised and blurry counterparts of the "two cows" image familiar from the literature. In the denoising case, we observed that our scheme gives very accurate segmentations despite the very high noise level, and gives reasonably accurate reconstructions, with a run time of about 2.5 minutes. Moreover, our joint scheme substantially outperforms sequential denoising-segmentation methods, in both segmentation and reconstruction accuracy, even when much more sophisticated denoisers are employed, albeit at the cost of a much longer run time.
In the deblurring case, again our scheme gives highly accurate segmentations (with a run time of about 3 minutes), but in the reconstructions it introduces an artificial level of contrast between the "cows" and the "background". This aids the segmentation accuracy at the cost of the reconstruction accuracy deteriorating over the course of the iterations. Increasing the reconstruction weighting prevents this effect, at the expense of a lower segmentation accuracy. Increasing the segmentation weighting has the curious effect of producing a very accurate but metastable (w.r.t. a change in the number of iterations) segmentation after 9 iterations. Compared to sequential deblurring-segmentation, our scheme produces worse reconstructions and on the whole slightly worse segmentations (with the exception of the metastable segmentation, which is slightly better) and runs considerably slower. However it should be noted that the deblurring method which was used is more sophisticated than the one used within our scheme, and that our joint scheme does give substantially more accurate segmentations than sequential deblurring-segmentation using only a TV-based deblurrer (i.e., our initialisation process).
There are three major directions for future work. First, the scheme in its current form has many parameters which must be tuned by hand. Future work will seek to develop techniques for tuning these parameters in a more principled way, so that this scheme can be applied to a large image set without the need for constant manual re-tuning.
Second, we have in this work applied our scheme only to artificially noised/blurred images, and in the comparison to sequential methods we did not exhaust the state-of-the-art. Future work will seek to test our scheme on real observations, with potentially unknown ground truths and/or forward maps, and compare our scheme to other state-of-the-art methods (including other joint reconstruction-segmentation methods such as those in Corona et al. [19]).
Finally, there are a number of potential ways to make our scheme more accurate. One is to use a different regulariser. Candidates of particular interest are implicit regularisation with a "Plug-and-Play" denoiser (as in Venkatakrishnan et al. [50]) and learned regularisation (see e.g. Arridge et al. [4, §4]). Another potential improvement is the use of more sophisticated feature maps (e.g., the CNN-VAE maps used in Miller et al. [35]), though in this choice one is constrained by the need to compute both the map and its adjoint very efficiently. Moreover, the theory in this paper assumes a linear feature map. A third opportunity for improvement lies in solving (3.3a) without resorting to linearisation.
A Computing g n
In this appendix we explain how we compute $g_n$, which is defined in Section 4. We recall from Section 3.1 that $z := F(x)$, and likewise we define $z_n := F(x_n)$. Then $g_n = F^*\big(\nabla_z F_1(z_n) + \nabla_z F_2(z_n)\big)$.
We now compute each term in turn. To compute $\nabla_z F_1(z)$, note that, by (3.1), for all $l \in Y$ and $r \in \{1, \dots, q\}$,
$$\frac{\partial\, \Omega(z, z_d)_{ij}}{\partial z_{lr}} = -\frac{2}{q\sigma^2}\, \Omega(z, z_d)_{ij}\, (\tilde z_{ir} - \tilde z_{jr})(\delta_{il} - \delta_{jl}),$$
where $\delta$ denotes the Kronecker delta. The coefficient matrix multiplying these derivatives in $F_1$ is symmetric, and therefore the two $\delta$ terms contribute equally, yielding a closed-form expression for $\nabla_z F_1(z)$. To compute $\nabla_z F_2(z)$, note that, by a similar argument as the above, the analogous derivative formula holds for all $l \in Y$ and $r \in \{1, \dots, q\}$. Hence, letting $B(z)$ denote the corresponding coefficient matrix, we arrive at a similar formula as for $\nabla_z F_1(z)$. Tying this all together, we get an expression (A.1) for $g_n$. To compute (A.1), we need to compute matrix-vector products of the form $C_n v$. Recalling (4.1), it follows that $C_n$ is built from Hadamard products involving the block $\Omega_{YV}(z_n, z_d)$. Next, we observe a neat linear algebra result: for any matrix $A$ and vectors $a, b$,
$$\big(A \circ a b^T\big)\, v = \operatorname{diag}(a)\, A\, \operatorname{diag}(b)\, v,$$
where $\circ$ denotes the Hadamard (entrywise) product, and in this case $A = \Omega_{YV}(z_n, z_d)$. Hence it suffices to be able to compute terms of the form $\Omega_{YV}(z_n, z_d)v$. Via the Nyström extension (2.1) we have
$$\Omega_{YV}(z_n, z_d)\, v \approx \Omega_{YX}\, \Omega_{XX}^{-1}\, \Omega_{XV}\, v,$$
where $X \subseteq V$ is some interpolation set (and the subscripted $\Omega$s are the corresponding blocks of $\Omega(z_n, z_d)$), so we can compute such products quickly. These considerations lead us to Algorithm 4 to compute $C_n v$ for $v \in \mathcal V$.
Algorithm 4 Definition of the CProd function to be used in Algorithm 1.
(B.3)
If $T$ is non-linear, things are more difficult. There may not exist a valid $\gamma$, and if so one must use a method such as [16, Algorithm 1] to solve (4.3), which has slower convergence. We must still compute $\operatorname{prox}_{\delta t G}(x)$. Since $T$ is assumed to be differentiable, for all $x$ there exists a linear map $DT_x$ such that $T(x + \delta x) = T(x) + DT_x(\delta x) + o(\|\delta x\|)$. Then it follows that
C Computing the SDIE scheme
In this appendix we describe how we compute the SDIE scheme that is described in Section 5.1. By Theorem 5.2, an SDIE update has two steps: a fidelity-forced diffusion and a piecewise linear thresholding. The thresholding is trivial to compute, but the diffusion is non-trivial. Our method for computing fidelity-forced diffusion was described in detail in [11, §5.2.6], so here we only reproduce the key details. By Theorem 5.1, given $u_n$ and the parameter $\tau > 0$, we seek to compute
$$e^{-\tau(\Delta + M)}u_n + b \approx \big(e^{-\frac{1}{2}\delta t M}\, e^{-\delta t \Delta}\, e^{-\frac{1}{2}\delta t M}\big)^m u_n + b,$$
where $m \in \mathbb{N}$ and $\delta t := \tau/m$. Computing matrix-vector products with $e^{-\frac{1}{2}\delta t M}$ is straightforward, since $M$ is a diagonal matrix. To compute matrix-vector products with $e^{-\delta t\Delta}$, we will compute an approximate eigendecomposition of $\Delta$ using the Nyström-QR method (recommended by Alfke et al. [3]) described in Algorithm 6. For details on this method, see [11, §5.2.3].
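The following sketch applies this Strang-split diffusion given a truncated eigendecomposition $\Delta \approx U\operatorname{diag}(\lambda)U^T$ (of the kind a Nyström-QR method produces). It applies the exact exponential of that rank-$K$ approximation, i.e. the complement of the retained eigenvectors is treated as eigenvalue 0; the inhomogeneous term $b$ is omitted, as the text computes it separately via a semi-implicit Euler scheme. Names and the toy stand-in for the eigendecomposition are ours.

```python
import numpy as np

def diffuse_strang(u, U, lam, mu, tau, m):
    """Approximate e^{-tau(Delta+M)} u via Strang splitting with m substeps
    of size dt = tau/m, using a truncated eigendecomposition
    Delta approx U diag(lam) U^T with orthonormal columns U."""
    dt = tau / m
    half_M = np.exp(-0.5 * dt * mu)               # e^{-dt M / 2}, M diagonal
    def heat(v):                                  # exact exp of rank-K approx
        c = U.T @ v
        return U @ (np.exp(-dt * lam) * c) + (v - U @ c)
    v = u + 0.0
    for _ in range(m):
        v = half_M * heat(half_M * v)
    return v

rng = np.random.default_rng(5)
# Tiny stand-in for a Nystrom-QR output: K orthonormal columns + eigenvalues.
U, _ = np.linalg.qr(rng.standard_normal((50, 10)))
lam = np.sort(rng.random(10))
mu = np.zeros(50); mu[:5] = 50.0                  # fidelity on 5 vertices
print(diffuse_strang(rng.random(50), U, lam, mu, tau=0.003, m=3).shape)
```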
Note C.1. Algorithm 6 really computes an approximate decomposition $U_s\Sigma U_s^T$ of $\tilde\omega := D^{-1/2}\omega D^{-1/2}$, and then makes a further approximation $\Delta_s = I - \tilde\omega \approx U_s(I_K - \Sigma)U_s^T =: U_s\Lambda U_s^T$, where $I_K$ is the $K\times K$ identity matrix, and so on for the random walk Laplacian.
Finally we note that, by Theorem 5.1, b is the fidelity-forced diffusion with initial condition u 0 = 0 at time τ . We compute b via the semi-implicit Euler scheme used for fidelity-forced diffusion in [34]. To compute this, again we use the Nyström-QR decomposition of ∆.
Note C.2. We do not use the scheme from [34] for all of the fidelity-forced diffusions because the Strang formula is more accurate for the e −τ (∆+M ) u n term, see [11, §5.2.5-5.2.7] for details.
D Examples of sub-analytic regularisers
The following theorem shows that the examples of regularisers R that are given in Note 7.6, do indeed satisfy the conditions required by Assumption 7.5.
Proof. i. $\operatorname{Gr} f = \{(x, \|Ax\|_1) \mid x \in \mathbb{R}^n\}$. For each sign vector $\epsilon \in \{-1, 1\}^m$, on the set where $\epsilon_i(Ax)_i \ge 0$ for all $i$ we have $\|Ax\|_1 = \sum_i \epsilon_i(Ax)_i$; hence $\operatorname{Gr} f$ is a finite union of sets each defined by finitely many analytic (indeed, linear) equalities and inequalities, and thus $\operatorname{Gr} f$ is a semi-analytic set. The other properties are trivial.
ii. The neural network $NN$ is a composition of piecewise-linear continuous functions, and is hence piecewise linear and continuous. It follows that it is semi-analytic. It is bounded below due to $\rho$ applying the ReLU function.
Transcriptional regulation of the proto‐oncogene Zfp521 by SPI1 (PU.1) and HOXC13
Abstract The mouse zinc‐finger gene Zfp521 (also known as ecotropic viral insertion site 3; Evi3; and ZNF521 in humans) has been identified as a B‐cell proto‐oncogene, causing leukemia in mice following retroviral insertions in its promoter region that drive Zfp521 over‐expression. Furthermore, ZNF521 is expressed in human hematopoietic cells, and translocations between ZNF521 and PAX5 are associated with pediatric acute lymphoblastic leukemia. However, the regulatory factors that control Zfp521 expression directly have not been characterized. Here we demonstrate that the transcription factors SPI1 (PU.1) and HOXC13 synergistically regulate Zfp521 expression, and identify the regions of the Zfp521 promoter required for this transcriptional activity. We also show that SPI1 and HOXC13 activate Zfp521 in a dose‐dependent manner. Our data support a role for this regulatory mechanism in vivo, as transgenic mice over‐expressing Hoxc13 in the fetal liver show a strong correlation between Hoxc13 expression levels and Zfp521 expression. Overall these experiments provide insights into the regulation of Zfp521 expression in a nononcogenic context. The identification of transcription factors capable of activating Zfp521 provides a foundation for further investigation of the regulatory mechanisms involved in ZFP521‐driven cell differentiation processes and diseases linked to Zfp521 mis‐expression.
| INTRODUCTION
Alterations in gene expression during lymphocyte differentiation can lead to malignancies including leukemia and lymphoma. Retroviral insertions that cause up-regulation of Zinc finger protein 521 (Zfp521) or its paralogue, Zinc finger protein 423 (Zfp423), are associated with B-cell leukemia in mice (Hentges et al., 2005; Warming et al., 2003, 2004). Links between ZNF521 (the human orthologue of Zfp521) and human leukemia also exist, as a translocation that generates a chimaeric fusion protein of ZNF521 and PAX5 has been found in paediatric acute lymphoblastic leukemia (Mullighan et al., 2007). Retroviral insertions at the Zfp521 locus also promote the formation of B-cell acute lymphoblastic leukemia (B-ALL) in mice expressing the chimaeric oncogenic fusion protein E2A-hepatic leukemia factor (E2A-HLF), and ZNF521 over-expression is found in patients with translocations generating E2A-HLF fusion proteins (Yamasaki et al., 2010). Recent investigations have established a role for Zfp521 in B-cell differentiation, mediated through an interaction with the B-cell transcription factor EBF (Hentges et al., 2005; Hiratsuka et al., 2015; Mega et al., 2011).
Additional functions for ZFP521 and its paralogue ZFP423 have been identified, demonstrating that these multiple zinc-finger proteins participate in cell proliferation and differentiation events critical for the formation of a diverse set of cell types. An important role for ZFP521 in cell differentiation events has been documented for neural cells (Han et al., 2012; Hashemi et al., 2016; Kamiya et al., 2011; Lobo et al., 2006; Ohkubo et al., 2014; Tang et al., 2015), erythrocytes (Matsubara et al., 2009), chondrocytes (Correa et al., 2010; Hesse et al., 2010; Kiviranta et al., 2013; Park and Kim, 2013) and adipocytes (Kang et al., 2012). Similarly, ZFP423 has been identified as a key factor in adipocyte differentiation (Addison et al., 2014; Gupta et al., 2010, 2012; Hiratsuka et al., 2015), in addition to being required for cerebellar development (Warming et al., 2006). The protein domains and interaction partners of ZFP521 and ZFP423 required in different cell types vary (Correa et al., 2010; Hesse et al., 2010; Kamiya et al., 2011; Mega et al., 2011; Spina et al., 2013), suggesting multiple mechanisms by which these large zinc-finger proteins can regulate cellular activities. Despite the emerging evidence that ZFP521 and ZFP423 are important factors in determining cell fate, information regarding the regulation of their expression in lymphocytes has been limited to the viral-mediated over-expression of these genes seen in B-cell leukemia.
Therefore, we sought to identify factors that directly regulate Zfp521 expression during B-cell differentiation.
B-cell differentiation involves a complex cascade of transcription factor activity leading to specific patterns of gene expression in cells at various stages of differentiation (Busslinger, 2004;Dias et al., 2008;Northrup & Allman, 2008;Nutt & Kee, 2007;Singh et al., 2005). The ETS-family transcription factor SPI1 (also referred to as PU.1) functions in directing cell fate during haematopoiesis (Oikawa et al., 1999). Along with a role in myeloid lineage commitment, SPI1 is needed for the generation of lymphoid progenitors during B-cell differentiation, and mice lacking Spi1 fail to form B-cells (Scott et al., 1994). SPI1 functions early in the process of B-cell development to specify lymphoid progenitors by activating the expression of genes such as the IL7 receptor, which are essential for B-cell differentiation (DeKoter et al., 2002). SPI1 binds to DNA as a monomer through its ETS-domain (Kodandapani et al., 1996), and also acts cooperatively with various other DNA binding proteins, including other ETS family transcription factors, to activate transcription of target genes (Li et al., 2000).
The homeodomain is a DNA-binding protein domain present in many developmentally important transcription factors. Several studies have revealed roles for Hox genes in normal haematopoiesis and in haematological malignancies such as leukemia (reviewed in Argiropoulos and Humphries, 2007). Hox genes have a variety of roles in B-cell differentiation and function. For example, over-expression of human HOXB3 in mouse bone marrow results in a decrease in the total number of B220+ B-cells. Conversely, deletion of Hoxb3 in the bone marrow of adult mice also inhibits B-cell differentiation, with knock-out animals showing a reduced number of pro-B cells (Ko et al., 2007). Deletion of the homeobox gene Hoxa9 causes a reduction in the number of lymphocytes present in the spleen of adult mice, due to defects in the specification of committed B-lymphocyte progenitor cells in the bone marrow. In addition to these roles in B-cell differentiation, Hox genes are also associated with leukemia. For example, genes in the Hoxa cluster are frequent targets of murine leukemia retroviral insertions (Bijl et al., 2005).
HOX gene over-expression is also a common feature of human lymphoid malignancies, indicating that dysregulation of HOX genes may be an important mechanism for leukemic transformation (Argiropoulos and Humphries, 2007).
The phenotypes of mouse mutants of both Spi1 and Hox genes reveal roles for these factors in B-cell development and differentiation.
The expression profile of Zfp521, and its over-expression in mouse B-cell leukemia, suggests that appropriate regulation of Zfp521 is important for B-cell function. We therefore sought to identify transcription factors required for expression of the Zfp521 gene, finding that SPI1 and HOXC13 synergistically regulate Zfp521 expression in a dose-dependent manner. Furthermore, transgenic mice over-expressing Hoxc13 also have increased Zfp521 expression in the fetal liver, the site of B-cell differentiation during development. Further studies are needed to examine whether SPI1 and HOXC13 regulate Zfp521 in additional cell types, and whether alterations in the activity of these regulatory factors contribute to B-cell leukemogenesis.
| Evolutionary analysis of Zfp521 and Zfp423
The close protein sequence similarity between ZFP521 and ZFP423 suggests that the genes encoding these proteins arose from a gene duplication event during evolution. We identified orthologues of ZFP521 and ZFP423 based on protein sequences from the genomes of both invertebrate and vertebrate organisms (Supporting Information Table 1), and assembled a phylogenetic tree. We found that the vertebrates analyzed have both ZFP521- and ZFP423-related protein sequences (Figure 1a,b), while organisms such as insects have only a single protein sequence that is equally related to ZFP521 and ZFP423.
This finding suggests that vertebrates have retained both genes following a duplication event. One reason for paralogous genes to be retained after duplication is that the paralogues specialize, such that they no longer share conserved functions or conserved expression patterns. Given that both Zfp521 and Zfp423 cause leukemia in mice when over-expressed (Hentges et al., 2005; Warming et al., 2003; Warming et al., 2004), they may both retain similar functions in lymphocytes. However, their expression patterns in B-cells have diverged (Warming et al., 2003; Warming et al., 2004), providing an explanation for the retention of both paralogues in vertebrate genomes. Due to the noted expression of Zfp521 in B-cells during differentiation (Hiratsuka et al., 2015; Warming et al., 2003), and links between Zfp521 over-expression and B-cell leukemia (Hentges et al., 2005; Warming et al., 2003), we sought to identify transcriptional regulators of Zfp521.
| Zfp521 promoter analysis
In order to identify transcriptional regulators of ZNF521, an analysis of the promoter region was performed. We identified a 1Kb region upstream of the human ZNF521 gene transcription start site as annotated in the UCSC genome browser (Figure 2a), and the corresponding region 1Kb upstream of the annotated mouse Zfp521 transcriptional start site from the UCSC genome browser (Figure 2b). DNaseI hypersensitivity sites are present near the ZNF521 transcriptional start site, which were experimentally identified from GM12878 lymphocyte and K562 erythroleukemia cells (Ho & Crabtree, 2010). The Zfp521 promoter region shows hypersensitivity to DNaseI cleavage in CD19+ B-cells isolated from an 8-week-old mouse (Sabo et al., 2006), suggesting the Zfp521 promoter region has transcriptional activity in B-cells. Both the mouse and human promoter regions lack a consensus TATA box, but instead have a GC-rich region near the transcriptional start site (Figure 2a,b).
There is a region of high conservation amongst 100 vertebrates in the ZNF521 promoter region overlapping with a K562 DNase I hypersensitivity hot spot (Figure 2a), although the mouse Zfp521 promoter region does not show a similar region of high conservation amongst placental mammals (Figure 2b).
Sequence alignments between the mouse and human Zfp521/ZNF521 promoters revealed the presence of two conserved binding sites for the transcription factor SPI1, a known B-cell transcription factor (Figure 2c, gray boxes). We defined the proximal site as SPI1a and the distal site as SPI1b. Both SPI1 sites overlap with regions of DNase I hypersensitivity, suggesting these promoter regions are bound by regulatory proteins in lymphocytes and leukemia cells. A similar analysis of the ZNF423 gene promoter did not reveal any SPI1 binding sequences (data not shown). In addition to the predicted SPI1 binding sites in the ZNF521 promoter, we also found a predicted conserved binding site for HOXC13 distal to the predicted SPI1 binding sites (Figure 2c, underlined). HOXC13 has been demonstrated to bind the ETS domain of SPI1, and is coexpressed with SPI1 in erythroid leukemia cells (Yamada et al., 2008). No other conserved binding sites for transcription factors known to be involved in B-cell differentiation were identified. Therefore, we sought to test the hypothesis that SPI1 and HOXC13 may regulate ZNF521/Zfp521 expression, either individually or synergistically.
| SPI1 and HOXC13 regulate Zfp521 expression
To determine if SPI1 and HOXC13 could act as Zfp521 transcriptional regulators, we generated luciferase reporter constructs containing varying regions of the Zfp521 promoter (Figure 2d). The 1Kb construct contains the HOXC13 and both SPI1 predicted binding sites, while the 0.5Kb promoter only contains the predicted SPI1 binding sites. The 0.2Kb promoter contains only the most proximal SPI1 binding site. We then tested the ability of SPI1 or HOXC13 to activate these various reporters in HEK293 cells. We found that when transfected individually, both SPI1 and HOXC13 proteins modestly activated the Zfp521 luciferase reporter constructs. Upon cotransfection of SPI1 and HOXC13 we noted a greater-than-additive activation of the Zfp521 reporter, confirming that SPI1 and HOXC13 activate Zfp521 in a synergistic manner (Figure 2e). The greatest activation was detected from the 1Kb Zfp521 promoter reporter construct containing all HOXC13 and SPI1 predicted binding sites (Figure 2e). Similar results were found when the reporter assays were performed in BCL1 B-lymphoblast cells (data not shown).
| SPI1 and HOXC13 regulation of Zfp521 is dose-dependent
We detected the greatest activation of Zfp521 using the 1 Kb reporter construct, so we further examined Zfp521 regulation using this construct for transfections in HEK293 cells. To confirm that the activation depended on SPI1 and HOXC13, we generated truncated versions of the SPI1 and HOXC13 proteins, lacking their respective DNA binding regions (Figure 3a). We confirmed that truncated mutant forms of SPI1 and HOXC13 proteins showed no Zfp521 reporter activation above background levels, even when cotransfected with a wild-type version of the partner protein (Figure 3b).

FIGURE 1 (legend fragment) Other species as named on tree. 521 = ZNF521/Zfp521, 423 = ZNF423/Zfp423.
Following the observation that SPI1 and HOXC13 synergistically activate Zfp521 expression, we examined the effects of varying the dosage of SPI1 and HOXC13 on Zfp521 reporter activation. We performed cotransfection assays with decreasing amounts of either SPI1 or HOXC13 plasmid, while maintaining a constant concentration of the other construct. We found that the interaction between SPI1 and HOXC13 is dose-dependent (Figure 3c).

FIGURE 2 The human and mouse ZNF521/Zfp521 promoter regions contain SPI1 and HOXC13 predicted binding sites. a: Human chromosome 18q11.2. Scale bar = 2.5 Kb. The GC% in 5-base windows is shown at the top. The human ZNF521 promoter region used in this study is shown as a black box labelled promoter. ZNF521 exon 1 is annotated. H3K27Ac marks, DNaseI hypersensitivity hot spots in K562 erythroleukemia cells, and DNaseI hypersensitivity hot spots in GM12878 lymphocyte cells are shown. Sequence conservation among 100 vertebrates is shown at the bottom. The locations of the predicted SPI1 (Sa and Sb) and HOXC13 (H) binding sites are labeled. Sites Sa and Sb overlap with K562 DNase I hypersensitivity hot spots. b: Mouse chromosome 18qA1. Scale bar = 2.5 Kb. The GC% in 5-base windows is shown at the top. The mouse Zfp521 region used in this study and in experimental constructs is shown as a black box labeled promoter. Zfp521 exon 1 is annotated. DNaseI hypersensitivity hot spots in CD19+ B-cells are annotated. Sequence conservation among placental mammals is shown at the bottom. The locations of the predicted SPI1 (Sa and Sb) and HOXC13 (H) binding sites are labelled. Sites Sa and Sb overlap with CD19+ B-cell DNase I hypersensitivity hot spots. c: The mouse Zfp521 promoter sequence. Predicted SPI1 binding sites are shown in gray, and the predicted HOXC13 binding site is underlined. d: Zfp521 reporter constructs used in transfection assays. Gray box represents predicted HOXC13 binding site, black boxes represent SPI1 binding sites. e: Reporter assay measuring Zfp521 promoter activation following HEK293 cell transfection with SPI1 (light gray bars), HOXC13 (medium gray bars), or SPI1 and HOXC13 cotransfection (dark gray bars). Relative expression levels are shown as normalized to transfection with reporter alone (black bars). Cotransfection of SPI1 and HOXC13 shows a statistically significant activation of the 1Kb reporter construct when compared to vector control (t-test; p < 0.05). Error bars represent standard deviation of 3 independent transfections, each with three technical replicates.
| SPI1 and HOXC13 bind the Zfp521 promoter
A physical interaction between SPI1 and HOXC13 has been reported (Yamada et al., 2009). As our epitope-tagged constructs varied slightly from the ones previously reported, we verified that our full-length FLAG-tagged HOXC13 protein did indeed bind to full-length SPI1 (Figure 4). Following the confirmation that the SPI1 and HOXC13 proteins encoded by our epitope-tagged constructs shared a physical interaction, we next examined whether they could also bind the Zfp521 promoter directly. We performed chromatin immunoprecipitation assays on HEK293 cells transfected with either FLAG-SPI1 or FLAG-HOXC13 and the Zfp521 1Kb promoter construct. Following immunoprecipitation with an anti-FLAG antibody, we detected an enrichment of the Zfp521 promoter sequence from extracts individually transfected with either FLAG-SPI1 or FLAG-HOXC13 (Figure 3d). However, immunoprecipitation with a nonspecific antibody (IgG) did not allow amplification of the Zfp521 promoter. Likewise, transfection of the empty FLAG vector control did not produce enrichment of the Zfp521 promoter following immunoprecipitation. As a control, we demonstrated that we could detect the Zfp521 promoter in all input samples. We also detected a GFP vector control sequence in our input samples, but not in any reactions subjected to immunoprecipitation. The enrichment of the Zfp521 promoter region following immunoprecipitation of either SPI1 or HOXC13 indicates that SPI1 and HOXC13 proteins bind the Zfp521 promoter (Figure 3d) in a specific manner.
To provide further support for the hypothesis that SPI1 and HOXC13 bind the Zfp521 promoter, we assayed binding in vitro via EMSAs. We confirmed that the full-length SPI1 and HOXC13 proteins were indeed capable of binding to their predicted recognition sequences in the Zfp521 promoter, as indicated by reduced migration of the promoter DNA (Figure 5a-c). SPI1 and HOXC13 truncation mutants did not alter DNA migration. The addition of specific antibodies (anti-SPI1 or anti-FLAG) abolished the band shift observed in the EMSA, suggesting that the binding of the antibody interfered with the DNA-binding activity of the SPI1 and FLAG-HOXC13 proteins. The addition of a nonspecific antibody (IgG) did not disrupt the migration of the promoter DNA sequence. Additionally, incubating the nonspecific antibody with the SPI1 and HOXC13 truncation mutant proteins did not produce a shift in DNA migration, demonstrating that the binding seen in reactions with the nonspecific antibody is due to SPI1 and HOXC13 proteins rather than the antibody.

FIGURE 3 Analysis of Zfp521 transcriptional activation by SPI1 and HOXC13. a: SPI1 and HOXC13 protein isoforms encoded by expression vectors. The full-length isoforms are shown on top and truncation mutants shown below. Both truncation mutants remove the DNA binding domain from the protein. b: Reporter assay measuring Zfp521 promoter activation following HEK293 cell cotransfection of wild type SPI1 and HOXC13 (black bar), transfection of truncated SPI1 and wild type HOXC13 (charcoal gray bar), transfection of wild type SPI1 and truncated HOXC13 (medium gray bar), or truncated SPI1 and truncated HOXC13 (light gray bar). Results were normalized to transfection with an empty expression vector (white bar). Only cotransfection of both wild-type proteins demonstrated statistically significant gene activation when compared to empty expression vector control (t-test; p < 0.05). Error bars represent standard deviation of 3 independent transfections, each with three technical replicates. c: SPI1 and HOXC13 regulation of the Zfp521 promoter is dose-dependent. Decreasing the amounts of transfection plasmid of either SPI1 or HOXC13 protein causes a decrease in promoter activation in HEK293 cells. The graph shows relative expression levels normalized to transfection with reporter construct only. d: Evaluation of SPI1 and HOXC13 binding to the Zfp521 promoter. Transfection & IP: Protein-DNA cross-linking followed by immunoprecipitation of FLAG-SPI1 or FLAG-HOXC13 with an anti-FLAG antibody allows amplification of the Zfp521 promoter region, but immunoprecipitation using irrelevant antibody (IgG) does not precipitate the Zfp521 promoter. Extracts from HEK293 cell transfections of the FLAG empty vector control do not show amplification of the Zfp521 promoter following immunoprecipitation. The GFP vector control DNA is not detected in any samples following immunoprecipitation. Right panel: Input samples show the presence of Zfp521 and GFP vector control DNA samples.
We also wished to examine whether SPI1 and HOXC13 were colocalized on the Zfp521 promoter, in support of our finding that these proteins synergistically activate Zfp521 expression. We therefore sought to determine if the presence of anti-FLAG antibody (detecting FLAG-HOXC13 transfected protein) in a binding reaction including both SPI1 and HOXC13 full-length proteins would disrupt band shifting of either of the SPI1 binding sites of the Zfp521 promoter. A reciprocal reaction was performed using the SPI1 antibody and the putative HOXC13 binding site of the Zfp521 promoter. In all cases we found that the presence of the partner antibody disrupted the band-shift (Figure 5d) in a manner similar to that seen for the specific antibodies. From these results we conclude that the SPI1 and HOXC13 proteins are closely associated at their predicted binding site sequences in the Zfp521 promoter, supporting the model for synergistic regulation of Zfp521 expression.
| Knockdown of SPI1 and HOXC13 reduces ZNF521 expression
To confirm that SPI1 and HOXC13 are required for activation of Zfp521, we performed a knockdown experiment. shRNA plasmids containing sequences targeting SPI1 and HOXC13 were transfected either individually or jointly into THP-1 cells. A plasmid with a scrambled shRNA sequence was used as a control. Expression levels of SPI1, HOXC13, and ZNF521 were measured in each transfection condition by qPCR. We found that ZNF521 expression levels were significantly reduced upon cotransfection of the SPI1 and HOXC13 shRNA constructs (Figure 6a) as compared to the scrambled shRNA control. Transfection of either the SPI1 or HOXC13 shRNA constructs individually reduced ZNF521 expression, but the reduction was not as great as the cotransfection condition. SPI1 expression and HOXC13 expression were each reduced following the transfection of their specific shRNA construct, but not by transfection with the control shRNA plasmid (Figure 6a).
2.7 | SPI1 and HOXC13 expression rescues the effects of Zfp521 knockdown

Using the mouse BCL1 B-lymphoblast cell line, we have demonstrated that knockdown of Zfp521 reduces cell viability, increases apoptosis, and alters expression of pro-B-cell genes (Al Dallal et al., 2016). To provide further evidence for a role for SPI1 and HOXC13 in Zfp521 regulation, we also assessed whether addition of these factors could restore cell viability following knockdown of Zfp521 in BCL1 cells.
We therefore cotransfected SPI1 and HOXC13 into BCL1 cells 3 days after an initial transfection with the Zfp521 shRNA or experimental control plasmids. On day 7 after the initial transfection, the ratio of viable cell number from cells receiving the rescue plasmids (denoted 7+) compared to control cells receiving no rescue plasmid (denoted 7) was calculated. We found that cotransfection of SPI1 and HOXC13 into cells with Zfp521 knockdown led to a significant increase in viable cell number, as compared to cells with control transfections (Figure 6b).

| Spi1, Hoxc13, and Zfp521 expression in immune tissues

The fetal liver is the site of B-cell differentiation during development (Yokota et al., 2006). If SPI1 and HOXC13 cooperatively regulate Zfp521 expression during B-cell differentiation, we hypothesized that they should both be expressed in the fetal liver. We found that both genes are expressed in mouse fetal liver from two separate animals at E16.5 (Figure 7a,b). Expression is also detected in adult mouse bone marrow. We next sorted B-cells from the bone marrow and spleen into different subpopulations. Zfp521 expression in these sub-populations has been confirmed (Hiratsuka et al., 2015). Spi1 expression was not detected in bone-marrow derived pro-B-cells, but was present in bone marrow derived pre-B-cells, spleen derived immature B-cells, and spleen derived mature B-cells (Figure 7c). We found that Hoxc13 was expressed in all subpopulations examined (Figure 7d). These data support the hypothesis that SPI1 and HOXC13 are developmental regulators of Zfp521 expression.

FIGURE 5 In vitro assessment of SPI1 and HOXC13 binding to the Zfp521 promoter predicted binding sites. DNA sequences added to assay are shown above gels, and protein extracts added are shown below gels for each lane. a: EMSA reactions demonstrate that full-length SPI1 binds to the predicted SPI1a binding site of the Zfp521 promoter, because the migration of the DNA probe is reduced when protein extract is added prior to electrophoresis (arrow). Addition of an anti-SPI1 antibody abrogates the shift seen in DNA migration, while addition of a nonspecific antibody (IgG) does not affect the shift in DNA migration. The truncation mutant form of SPI1 (aa1-169) is not capable of binding to the Zfp521 promoter sequence. Adding specific or nonspecific antibody with the truncation mutant does not affect migration of the Zfp521 promoter SPI1a DNA sequence. A mutated version of the Zfp521 promoter SPI1a site is not bound by wild type SPI1 protein. b: EMSA reactions demonstrate that full-length SPI1 binds to the predicted SPI1b binding site of the Zfp521 promoter (arrow). Addition of an anti-SPI1 antibody abrogates the shift seen in DNA migration, while addition of a nonspecific antibody (IgG) does not affect the shift in DNA migration. The truncation mutant form of SPI1 (aa1-169) is not capable of binding to the Zfp521 promoter sequence. Adding specific or nonspecific antibody with the truncation mutant does not affect migration of the Zfp521 promoter SPI1b DNA sequence. A mutated version of the Zfp521 promoter SPI1b site is not bound by wild type SPI1 protein. c: EMSA reactions demonstrate that full-length FLAG-HOXC13 binds to the predicted Hoxc13 binding site of the Zfp521 promoter (arrow). Addition of an anti-FLAG antibody abrogates the shift seen in DNA migration, while addition of a nonspecific antibody (IgG) does not affect the shift in DNA migration. The truncation mutant form of Hoxc13 (Hoxc13delta) is not capable of binding to the Zfp521 promoter sequence. Adding specific or nonspecific antibody with the truncation mutant does not affect migration of the Zfp521 promoter Hoxc13 DNA sequence. A mutated version of the Zfp521 promoter Hoxc13 site is not bound by wild type FLAG-Hoxc13 protein. d: The use of the anti-FLAG antibody (recognizing FLAG-HOXC13) eliminates the shift of the Zfp521 promoter predicted SPI1a and SPI1b binding sites (arrow), similar to the results seen for SPI1 protein and anti-SPI1 antibody on its own predicted promoter site (Figure 5a-b). The use of the SPI1 specific antibody eliminates the shift of the Zfp521 promoter predicted HOXC13 binding site, similar to the results seen for HOXC13 protein and anti-FLAG antibody on its own predicted promoter site (Figure 5c). As shown in Figure 2c, the SPI1 binding site further away from the Zfp521 transcriptional start site was called SPI1a, and the SPI1 site closer to the Zfp521 transcription start site termed SPI1b.
| SPI1 and HOXC13 cooperatively up-regulate Zfp521 expression in vivo
We cotransfected Spi1 and HOXC13 expression vectors into the mouse B-lymphoblast BCL1 cell line, and tracked endogenous Zfp521 expression levels through qPCR. We found that endogenous Zfp521 expression increased with cotransfection of wild-type SPI1 and HOXC13 (Figure 7e). However, Zfp521 expression levels were not as greatly increased following either SPI1 or HOXC13 single transfections. To extend our analysis, we also examined fetal liver expression of Zfp521 in embryos of Tg(Hoxc13) 61B1Awg mice, a transgenic strain over-expressing Hoxc13 (Tkatchenko et al., 2001). The relative expression levels of both genes were compared within the same mouse embryo. Transgenic embryos with high levels of Hoxc13 expression showed a significant increase in Zfp521 expression in the liver at E16.5 as compared to nontransgenic littermates (Figure 7g). Notably, not all Tg(Hoxc13) 61B1Awg embryos demonstrated increased Hoxc13 expression in the liver, perhaps due to epigenetic alterations of transgene expression, but the expression levels of Hoxc13 and Zfp521 were strongly correlated in all embryos (R² = 0.91). As the fetal liver is a critical developmental location of B-cell differentiation (Yokota et al., 2006), we conclude that the HOXC13-dependent mechanism of Zfp521 regulation is relevant in vivo.

FIGURE 6 Knockdown of SPI1 and HOXC13 reduces ZNF521 expression, and expression of Spi1 and HOXC13 can rescue Zfp521 knockdown cell defects. a: SPI1 and HOXC13 were knocked down by shRNA transfection in THP-1 cells, either in combination (black bars) or individually (gray bars). A control vector with a scrambled noneffective shRNA sequence was used as a control (light gray bars). Gene expression of ZNF521, SPI1, and HOXC13 was assayed in each knockdown condition (labels on x-axis). Knockdown of both SPI1 and HOXC13 resulted in significantly reduced ZNF521 expression as compared to control shRNA transfection (t-test; p < 0.05). Individual knockdowns had a less profound reduction in ZNF521 expression levels. Expression levels of SPI1 and HOXC13 confirm that the shRNA transfections reduced gene expression of each gene as expected. b: Using a knockdown rescue assay, we compared the ratio of viable cells on day 7 post-transfection for cells that were initially transfected with Zfp521 knockdown shRNA plasmids or control plasmids to cells that had the original knockdown plus a transfection on day 3 of Spi1 and HOXC13 expression constructs (day 7+). Cell viability is recovered when Spi1 and HOXC13 are cotransfected into BCL1 cells after Zfp521 shRNA transfection, and shows a significant increase when compared to cells with initial mock transfection (t-test; p < 0.05). Cells with initial empty vector or a scrambled Zfp521 shRNA sequence transfection do not show a similar increase in cell viability after rescue transfection. c: The percentage of dead cells identified by trypan blue staining in BCL1 cell cultures following Zfp521 knockdown or control transfections on day 3 (left) is lower than the percentage of cells on day 7 (middle). When Spi1 and HOXC13 expression constructs are cotransfected on day 3, the percentage of dead cells on day 7+ (right) is reduced as compared to cells on day 7 that did not receive rescue plasmid transfections (t-test, p < 0.05). d: Caspase 3/7 activity was measured in BCL1 cells 7 days after transfection with Zfp521 shRNA or control plasmids (left). Introduction of Spi1 and HOXC13 expression vectors by cotransfection on day 3 resulted in a significant reduction in Caspase 3/7 activity on day 7+ (middle), with no significant difference in cells initially transfected with Zfp521 shRNA as compared to cells with any other initial transfection condition. Cells transfected with a control empty vector on day 3 did not show a similar reduction in Caspase 3/7 activity (day 7+C; right). NS = nonsignificant. In all panels error bars represent standard deviation of 3 independent transfections, each with three technical replicates.

FIGURE 7 Assessment of Spi1, Hoxc13, and Zfp521 expression in immune tissues. a: The expression of Spi1 as detected by RT-PCR in mouse fetal liver (FL) and adult bone marrow (BM) samples. Spleen cDNA (SP) is used as a positive control, and no template as a negative control (neg). Bioline Hyperladder I (L) is used as a molecular weight standard. b: The expression of Hoxc13 as detected by RT-PCR in mouse immune tissues (samples as in a). Hindlimb (HL) is used as a positive control. c: Expression of Spi1 in FACS sorted B-cells. Expression was analyzed in pro-B cells isolated from bone marrow (BMPro), pre-B cells isolated from bone marrow (BMPre), immature B-cells isolated from spleen (SPimm), and mature B-cells isolated from spleen (SPmat). Embryonic day 11.5 (E11.5) cDNA was used as a positive control. d: Expression of Hoxc13 in cDNA samples (samples as in c). e: Quantitative RT-PCR demonstrates that cotransfection of SPI1 and HOXC13 into mouse BCL1 cells results in up-regulation of endogenous Zfp521 expression (black bar). The up-regulation is greater than for cells transfected with either protein individually, cells with no transfection, or empty expression vector. Results show expression normalized to 18S levels, reported relative to BCL1 mock transfected cells (BCL1). Error bars show standard deviation of three independent experiments, each with three technical replicates. f: RT-PCR on BCL1 cells used for quantitative expression analysis confirms expression of Spi1 and HOXC13 from transfection plasmids. g: The expression levels of Hoxc13 and Zfp521 were compared to each other within fetal liver samples from the same embryo. A correlation is seen between the relative expression levels of Hoxc13 and Zfp521 in mouse fetal liver from GC13 transgenic (Tg) mice (black triangles) and nontransgenic (wildtype) littermate controls (gray diamonds; R² = 0.91). Expression was measured by quantitative RT-PCR and normalized to Gapdh.
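For readers wishing to reproduce the correlation analysis described above, a minimal sketch follows (Python with SciPy assumed available; the expression values are illustrative Gapdh-normalized placeholders, one pair per embryo, not the study's measurements):

from scipy import stats

# Illustrative relative expression values, one pair per embryo.
hoxc13 = [0.8, 1.0, 1.1, 3.5, 6.2, 7.9]   # relative Hoxc13 expression
zfp521 = [0.9, 1.1, 1.0, 2.8, 5.5, 7.1]   # relative Zfp521 expression

res = stats.linregress(hoxc13, zfp521)    # least-squares fit
print(f"R^2 = {res.rvalue**2:.2f}, slope = {res.slope:.2f}")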
| DISCUSSION
The large, multi-zinc finger proteins ZFP521 and ZFP423 were initially characterized as proto-oncogenes, both causing B-cell leukemia in mouse models (Hentges et al., 2005; Warming et al., 2003; Warming et al., 2004). Subsequently the critical roles that these factors play in cellular differentiation have been identified, and ongoing investigations will likely reveal additional details about the mechanisms by which they regulate gene expression. Both proteins contain zinc-fingers of the Krüppel type, which are predicted to mediate protein-protein interactions as well as possess DNA-binding ability. Whilst protein binding partners for ZFP521 have been identified, such as EBF (Mega et al., 2011), the DNA sequence to which ZFP521 binds is as yet unknown.
Likewise, the DNA sequence bound by ZFP423 has not been identified. Although both proteins have been shown to regulate cellular differentiation processes, the overlap between their functions at a molecular level, such as whether they both bind the same DNA sequence, is unclear. Our phylogenetic data, combined with prior reports of different expression patterns for these two genes, suggest that their expression patterns have diversified following a gene duplication event, which may explain why both genes were retained in vertebrate genomes.
As Zfp521 expression is detected early in the B-cell differentiation process (Hiratsuka et al., 2015; Warming et al., 2003), relevant transcription factors that initiate Zfp521 expression should be expressed at early stages of B-cell differentiation. In fact, SPI1 expression is found in committed lymphoid progenitor cells (DeKoter et al., 2002), making SPI1 a good candidate for the regulation of Zfp521 during B-cell differentiation. Spi1, Hoxc13, and Zfp521 are all expressed in the fetal liver, where B-cell differentiation originates during development (Yokota et al., 2006). Importantly, we find that if we cotransfect SPI1 and HOXC13 into mouse BCL1 cells, a B-lymphoblast cell line, the endogenous expression levels of Zfp521 in the cells are up-regulated (Figure 7e). Hoxc13 over-expressing mice display up-regulation of Zfp521 in the fetal liver, providing further supporting evidence that SPI1 and HOXC13 coregulation of Zfp521 expression is relevant during developmental specification of B-lymphocytes.
Zfp521 up-regulation resulting from retroviral insertions at the 5' end of the Zfp521 locus causes B-cell transformation in mice (Hentges et al., 2005; Warming et al., 2003). The tumors present in these mice express a variety of cellular markers known to be downstream of the B-cell transcription factor EBF, which are indicative of activated B-cell receptor (BCR) signalling (Hentges et al., 2005). Additionally, retroviral insertion up-regulation of Zfp521 has been identified as an important cooperative event that increased the incidence of B-ALL in mice expressing an E2A-HLF transgene as a model for human t(17;19) acute lymphoblastic leukemia (Yamasaki et al., 2010). The finding that SPI1 and HOXC13 coregulate Zfp521 expression highlights the possibility that over-expression of either of these factors could lead to B-cell leukemia via activation of Zfp521, resulting in enhanced BCR signalling in a manner similar to that displayed in AKXD27 mice (Hentges et al., 2005). During the process of bone differentiation Cyclin D1 has been identified as a ZFP521 target gene, promoting proliferation of growth plate chondrocytes (Correa et al., 2010). Tumors from mice with retroviral insertions within the Zfp521 gene have increased Cyclin D1 expression (Al Dallal et al., 2016), suggesting that over-expression of Zfp521 leads to an excessive proliferation of immature B-lymphocytes via Cyclin D1 up-regulation, culminating in B-cell leukemia. In humans ZNF521 over-expression is found in acute myeloid leukemia but not in B-cell leukemia (Mesuraca et al., 2015), although B-cell leukemia initiating cells do show increased expression of ZNF521 (Aoki et al., 2015). It has thus been proposed that up-regulation of ZNF521 in rare leukemia initiating cells may facilitate tumor progression, even though ZNF521 over-expression is not detected in the majority of leukemic B-cells from patients (Mesuraca et al., 2015).
The role of SPI1 in leukemia is complex. Over-expression of SPI1 is known to cause erythroleukemia by inhibiting apoptosis and blocking the terminal differentiation of erythrocytes (Yamada et al., 2008). STAT3 activation of SPI1 is a key step in the disease progression of Friend erythroleukemia (Hegde et al., 2009), reinforcing the finding that SPI1 over-expression alters erythroid cell differentiation. However, acute myeloid leukemia can result from loss-of-function mutations in SPI1 or reduced Spi1 expression (Sive et al., 2016; Will et al., 2015), while induced expression of SPI1 in myeloid leukemia cells restores their differentiation and reduces their proliferation rates (Tkatchenko et al., 2001). Likewise, expression of SPI1 is severely reduced in human patients with chronic myeloid leukemia due to aberrant promoter methylation in tumor cells (Yang et al., 2012). Deletion of SPI1 and the related ETS-transcription factor SPI-B in the B-cell lineage results in fully-penetrant pre-B-cell acute lymphoblastic leukemia (B-ALL), suggesting that SPI1 and SPI-B are tumor suppressors in the B-cell lineage (Sokalski et al., 2011). Likewise, mice lacking both SPI1 and IRF8 develop B-ALL (Pang et al., 2016). These results suggest that over-expression of SPI1 may play a part in erythroid leukemia, while reduced SPI1 expression contributes to myeloid and B-cell leukemia, although the many diverse functions of SPI1 at multiple time points during hematopoiesis complicate the analysis of SPI1 in specific lineages.
HOXC13 is a key downstream target of the polycomb group family gene B-cell specific Moloney murine leukemia virus integration site 1 (BMI-1). BMI-1 dysregulation leads to lymphoma and non-small cell lung carcinoma in humans (Jacobs et al., 1999; Vonlanthen et al., 2001).
Additionally, in human cervical adenocarcinoma cells BMI-1 knockdown promotes the up-regulation of HOXC13 expression, contributing to a block in cell proliferation (Chen et al., 2011). Therefore, BMI-1-induced HOXC13 repression may cause abnormal cell proliferation contributing to adenocarcinoma. HOXC13 has additional links to leukemia, because fusions of NUP98 and HOXC13 cause human myeloid leukemia (Cheng & Reed, 2007). Also, an antagonistic role for SPI1 and HOXC13 in erythroid cell differentiation has been proposed based on data from erythroleukemia cell lines (Yamada et al., 2008). Cross-talk between SPI1 and the MEIS/HOX gene regulation pathway has been noted in mixed lineage leukemia (Zhou et al., 2014). Yet to our knowledge there are no prior reports of HOXC13 involvement in lymphoid leukemia.
However, based on the results of this study, the potential role of HOXC13 in lymphoid leukemia and B-cell gene expression via regulation of Zfp521 should be explored. Future experiments are required to determine the contribution of perturbations in Hoxc13 and Spi1 expression to dysregulation of Zfp521, and the potential for alterations in the expression of these transcription factors to promote lymphoid leukemia.
4 | MATERIALS AND METHODS

| Phylogenetic analysis
Protein sequences for orthologues of both ZFP521 and ZFP423 from multiple species were obtained from the online database of the National Center for Biotechnology Information (Supporting Information Table 1).
Multiple sequence alignment was performed using MUltiple Sequence Comparison by Log-Expectation (MUSCLE) (Edgar, 2004). Phylogenetic trees were generated by MUSCLE, with branch lengths presented either as a cladogram (Figure 1a) or as real branch lengths (Figure 1b).
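As a minimal sketch of this step, assuming the MUSCLE v3 command-line tool (Edgar, 2004) is installed on the PATH and using hypothetical file names, the alignment and guide tree could be generated as follows:

import subprocess

def align_orthologues(in_fasta="zfp521_zfp423_orthologues.fasta",
                      out_fasta="aligned.fasta",
                      out_tree="tree.nwk"):
    # MUSCLE v3 flags: -in/-out for the alignment, -tree2 for the
    # Newick tree built from the final alignment iteration.
    subprocess.run(["muscle", "-in", in_fasta, "-out", out_fasta,
                    "-tree2", out_tree], check=True)

align_orthologues()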
| Bioinformatic promoter analysis
The 1Kb regions upstream of the annotated mouse and human Zfp521/ZNF521 and Zfp423/ZNF423 genes were obtained from the UCSC genome browser using the mm9 assembly for mouse and the hg38 assembly for human. Annotated tracks for DNase I hypersensitivity (Ho & Crabtree, 2010) and the percentage of GC bases in the promoter regions were obtained from the UCSC genome browser on the hg38 and mm9 assemblies. DNase I hypersensitivity data were generated by the University of Washington ENCODE group (Sabo et al., 2006). The 1Kb mouse and human promoter regions were searched for transcription factor binding sites using TESS (Schug, 2008). Conserved sites were identified through manual inspection of annotated promoters.
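A simplified illustration of such a binding-site search is sketched below. The purine-rich GGAA-core consensus used here is an assumption standing in for the positional weight matrices TESS actually applies, and the toy promoter sequence is hypothetical:

import re

def revcomp(seq):
    # Reverse complement for scanning the minus strand.
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

def scan_promoter(seq, core=r"[AG]GAGGAA[GC]"):
    # Scan both strands for the assumed SPI1-like core consensus.
    hits = []
    for strand, s in (("+", seq), ("-", revcomp(seq))):
        for m in re.finditer(core, s):
            pos = m.start() if strand == "+" else len(seq) - m.end()
            hits.append((pos, strand, m.group()))
    return sorted(hits)

promoter = "TTAGAGGAAGTGCC"  # toy stand-in for the 1 kb upstream sequence
print(scan_promoter(promoter))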
| Constructs
IMAGE clone 3600260 was used as the wild-type Spi1 expression construct and as the template for site-directed mutagenesis. IMAGE clone 6171228 was used as the wild-type HOXC13 expression vector. Primer sequences for Zfp521 promoter amplification are listed in Supporting Information Table 2. Promoter regions were cloned upstream of the luciferase gene in the pGL3 Basic plasmid. Nomenclature of the reporter plasmids is based on the approximate size of the promoter regions. The human HOXC13 cDNA sequence was cloned in-frame into pFLAG-CMV2 (Sigma) to create the FLAG-tagged expression construct. The HOXC13 DNA-binding mutant construct (HOXC13-delta) was previously described (Potter et al., 2006).

4.6 | Transfection assays

0.5 µg plasmid DNA was transfected into 5 × 10⁵ HEK 293 cells with Fugene 6 (Roche), and cells were cultured for 48 h. 2 µg plasmid was transfected into 2 × 10⁶ BA/F3 cells with Amaxa nucleofector reagent (Lonza). Cells were grown for 48 h to collect total RNA or protein for analysis. 1 µg plasmid DNA was transfected into 1 × 10⁵ BCL1 cells with FuGENE HD (Roche) in OptiMEM Media (Sigma). Cells were grown for 48 h for qPCR assays, and 3 or 7 days for knockdown rescue assays.
| Western blot
Cell lysate was prepared in RIPA buffer. 50 µg total protein was subjected to 12% SDS-PAGE and transferred to a PVDF membrane. The membrane was incubated with mouse anti-FLAG antibody (M2, Sigma) or goat anti-SPI1 antibody (anti-PU.1 D19, Santa Cruz). Protein was detected with an ECL kit (GE Healthcare).
| Luciferase assays

Ratios of firefly luciferase activity to renilla luciferase activity were calculated for each sample. Reactions were performed in triplicate, and all results represent combined analysis of three separate experiments. Expression levels are shown relative to transfection of the luciferase reporter without cotransfection of protein expression constructs. Statistical significance was determined by t-test comparison to the reporter control.
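A minimal sketch of this normalization and testing scheme, with illustrative (not measured) luminescence values and SciPy assumed available:

import numpy as np
from scipy import stats

firefly_ctrl = np.array([1200., 1100., 1300.])   # reporter alone
renilla_ctrl = np.array([900., 950., 880.])
firefly_test = np.array([5400., 6100., 5800.])   # + SPI1 + HOXC13
renilla_test = np.array([910., 970., 940.])

ratio_ctrl = firefly_ctrl / renilla_ctrl         # per-well normalization
ratio_test = firefly_test / renilla_test

# Relative expression: fold activation over the reporter-only control.
fold = ratio_test / ratio_ctrl.mean()
t, p = stats.ttest_ind(ratio_test, ratio_ctrl)
print(f"fold activation = {fold.mean():.1f} +/- {fold.std(ddof=1):.1f}, p = {p:.3g}")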
| Protein coimmunoprecipitation
HEK 293 cells cotransfected with 2 µg of SPI1 (wild type or mutant constructs) and FLAG-HOXC13 were harvested 48 h following transfection and lysed in PBS with 1% Triton X-100 and 0.01% Igepal CA-630.
The expression levels of ZNF521, SPI1 and HOXC13 were determined by qPCR as described below for mouse cell line and tissue samples. The qPCR primers are listed in Supporting Information Table 2.

4.13 | Zfp521 shRNA knockdown rescue assays

Knockdown of Zfp521 was performed by transfection of a cocktail of 4 vectors with shRNAs targeting Zfp521 into BCL1 cells as previously described (Al Dallal et al., 2016). The rescue assay with SPI1 and HOXC13 was performed by cotransfecting 0.5 µg each of Spi1 and HOXC13 expression constructs into knockdown or control cells 3 days after Zfp521 shRNA transfection. As a control, a rescue transfection was performed with pcDNA3.1 empty vector on day 3 post-shRNA transfection. Cell viability, trypan blue staining, and Caspase 3/7 activity analysis were performed as previously described (Al Dallal et al., 2016).
| Flow cytometry
Bone marrow was extracted from the femurs of two 129S5 mice by flushing the bones with DMEM, pooling the cells, passing the cells through a 21 gauge needle and filtering the cells using a 70 µm filter (BD). Splenocytes were isolated by pressing the spleens from 2 mice against a mesh, then suspending the cells in DMEM. Erythrocytes were lysed by incubating the cells with red blood cell lysis buffer (Roche) for 5 minutes at room temperature. Cells were washed with FACS buffer (PBS, 1% FCS, 0.05% sodium azide), pelleted and resuspended with 1 µg of Fc receptor block and incubated for 20 minutes at 4 °C. The cells were washed in FACS buffer, and resuspended in 0.313 µg/mL anti-B220 (RA3-6B2) APC (eBioscience), 2.5 µg/mL anti-IgM (Il/41) PE (eBioscience), 0.625 µg/mL anti-IgD (11-26c) FITC (eBioscience), 2.5 µg/mL anti-c-Kit (2B8) PE-Cy5 (eBioscience) and 1 µL/mL Fixable Viability Dye eFluor 450 (eBioscience) and incubated for 40 min at 4 °C in the dark. OneComp eBeads (eBioscience) were used for single-stained controls. Cells were washed in FACS buffer, resuspended in DMEM with 25 mM HEPES at a concentration of 1 × 10⁷ cells/mL and filtered through a 50 µm filter (BD Biosciences). The cells were sorted into immature B cells (B220+ IgM+ IgD−), mature B cells (B220+ IgM+ IgD+), pro-B cells (B220+ IgM− IgD− c-Kit+) and pre-B cells (B220+ IgM− IgD− c-Kit−) using a FACS Aria (BD Bioscience) and collected into a 5 mL polypropylene tube containing DMEM with 10% FBS.
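The sorting scheme above reduces to simple boolean gating on marker positivity; the sketch below illustrates the logic with randomly generated marker calls (a simplification of actual FACS gating, not the instrument's analysis):

import numpy as np

rng = np.random.default_rng(0)
n = 10000
# Toy marker-positive calls for n events (True = positive for that marker).
B220, IgM, IgD, cKit = (rng.random(n) > 0.5 for _ in range(4))

immature = B220 & IgM & ~IgD            # B220+ IgM+ IgD-
mature   = B220 & IgM & IgD             # B220+ IgM+ IgD+
pro_b    = B220 & ~IgM & ~IgD & cKit    # B220+ IgM- IgD- c-Kit+
pre_b    = B220 & ~IgM & ~IgD & ~cKit   # B220+ IgM- IgD- c-Kit-
print(immature.sum(), mature.sum(), pro_b.sum(), pre_b.sum())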
4.16 | Zfp521, Spi1, and Hoxc13 quantitative RT-PCR

BCL1 or BA/F3 cells were transfected with pFlagCMV-HOXC13 and pCMVSport6-Spi1 (Amaxa nucleofector), or empty expression vector control, and incubated for 2 days prior to RNA harvesting. Fetal livers were dissected from Tg(Hoxc13) 61B1Awg E16.5 transgenic mice or nontransgenic littermate controls. All animal work at the Medical University of South Carolina was done in compliance with MUSC institutional animal care and use committee-approved procedures. Total RNA was extracted with TRI reagent (Sigma), treated with DNaseI, reverse transcribed, and subjected to real-time quantitative PCR for Zfp521 and Gapdh (Eurogentec qPCR Core kit for SYBR Green). Zfp521 primers were previously reported (Hentges et al., 2005). Hoxc13 primer sequences were the same as those used for RT-PCR. Gapdh primer sequences are listed in Supporting Information Table 2. Reaction conditions were 95 °C for 10 minutes, followed by 40 cycles of 95 °C for 15 seconds and 60 °C for 1 minute. Expression levels were normalized to Gapdh.
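Relative expression from such qPCR data is conventionally computed with the 2^−ΔΔCt method; a minimal sketch with illustrative Ct values, assuming this standard method matches the Gapdh normalization described above:

import numpy as np

def rel_expression(ct_target, ct_ref, ct_target_cal, ct_ref_cal):
    """2^-ddCt relative to a calibrator (e.g. a nontransgenic littermate)."""
    dct = np.asarray(ct_target) - np.asarray(ct_ref)
    dct_cal = np.mean(np.asarray(ct_target_cal) - np.asarray(ct_ref_cal))
    return 2.0 ** -(dct - dct_cal)

# Zfp521 in a Tg(Hoxc13) fetal liver vs. a wild-type littermate (toy Ct values):
print(rel_expression([24.1, 24.3, 24.0], [18.2, 18.3, 18.1],
                     [26.5, 26.4, 26.6], [18.2, 18.1, 18.3]))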
Bioactives and Technological Quality of Functional Biscuits Containing Flour and Liquid Extracts from Broccoli By-Products
Broccoli by-products are an important source of health-promoting bioactive compounds, although they are generally underutilized. This study aimed to valorize non-compliant broccoli florets by transforming them into functional ingredients for biscuit formulation. A broccoli flour and three water/ethanol extracts (100:0, 75:25, 50:50; v/v) were obtained. The rheological properties and the content of bioactive compounds of the functional ingredients and biscuits were evaluated. The 50:50 hydroalcoholic extract was the richest in glucosinolates (9749 µg·g−1 DW); however, the addition of even a small amount strongly affected dough workability. The enrichment with 10% broccoli flour proved to be the best formulation in terms of workability and color compared to the other enriched biscuits. The food matrix also contributed to protecting bioactive compounds from thermal degradation, leading to the highest total glucosinolate (33 µg·g−1 DW), carotenoid (46 µg·g−1 DW), and phenol (1.9 mg GAE·g−1 DW) contents being present in the final biscuit. Therefore, broccoli flour is a promising ingredient for innovative healthy bakery goods. Hydroalcoholic extracts could be valuable ingredients for liquid or semi-solid food formulation.
Introduction
Oxidative stress and the modern lifestyle are the main influencing factors in human health problems. A variety of negative effects on body functionality can be triggered by the inhibition of our body's natural antioxidant system's ability to destroy free radicals [1]. Including more sources of antioxidants in our daily diets prevents such consequences. Natural antioxidants are essential secondary metabolites characterized by powerful antioxidant activity [2]. Pigments and other compounds derived from various fruits and vegetables, such as tocopherols [3-5], carotenoids [3,6], glucosinolates [7,8], and polyphenols [9,10], have been addressed in different studies in recent decades because of their multiple benefits for human health. Promising opportunities for their application in the food, pharmaceutical, cosmetic, and various other sectors have been recognized [2]. Carotenoids play a significant role as natural antioxidants and serve as precursors to vitamin A [11]. Tocopherols have demonstrated their ability to provide a defense against inflammatory conditions and cancer [12]. Glucosinolates and their isothiocyanates play a notable role in preventing and treating multiple chronic diseases such as cardiometabolic disorders [8]. Phenolic compounds also exert antioxidant, anti-inflammatory and anticancer activities [13].
Cruciferous vegetables including cauliflower, cabbage, brussels sprouts, and broccoli are known for their bioactive compounds, which enhance the prevention of colorectal cancer and reduce the prevalence of numerous diseases [14]. The production of broccoli had doubled by 2019 compared to the end of the 20th century, and Italy represents the fourth largest producer after India, the USA, and Spain [15]. Broccoli is appreciated especially for its abundance of glucosinolates, which have been recognized as potential candidates for chronic disease prevention [8]. Twelve glucosinolates have been identified in eighty genotypes of broccoli florets, including glucobrassicin, 4-methoxyglucobrassicin, 4-hydroxyglucobrassicin, glucoerucin, neoglucobrassicin and glucoraphanin [16]. Broccoli contains exceptionally high levels of bioactive components such as tocopherols, estimated at around 155 and 1.57 µg·g−1 dry weight (DW) in the leaves and florets, respectively [17]. In addition, broccoli had the highest tocopherol content compared to other Brassicaceae vegetables [3]. Lutein, β-carotene, violaxanthin, and neoxanthin were the predominant carotenoids in broccoli [3,17].
Unfortunately, 85-90% of broccoli's total aerial biomass, including the leaves, stems, and non-compliant florets, is usually disposed of as by-products and underutilized [13,17,18]. These by-products contain high amounts of valuable nutrients that can be recovered and utilized, including vitamins, glucosinolates, carotenoids and chlorophylls. The evaluation of phytonutrients in broccoli by-products revealed that the leaf tissue contained higher levels of essential nutrients, such as β-carotene, vitamins E and K, and minerals like Mn and Ca, as well as a greater total phenolic content (TPC) and DPPH antioxidant activity compared to the floral parts [17]. Various beneficial effects of broccoli by-products, such as antiobesity, antioxidation, anti-inflammation, anticancer and antifungal effects, have been confirmed in different studies [17,19,20]. An extract from broccoli sprouts enhanced the metabolism in individuals with obesity and poorly managed diabetes [8]. Broccoli leaf powder was regarded as a compelling and innovative ingredient suitable for gluten-free cake recipes [21]. Hence, broccoli by-products emerge as promising functional ingredients for incorporation into food products to valorize their health-promoting attributes.
Recently, increased interest has been shown towards new functional foods due to the beneficial health properties associated with their consumption. Further attention has been paid to the use of food by-products in functional foods aimed at reducing the environmental impact of food production since the introduction of circular economy principles to food science [22]. Different forms of broccoli waste have been used for the formulation of new functional food products with enhanced health benefits, for example snack bars [23], crackers [10], cakes [21] and bread [24]. The enhanced technological and functional attributes of these products have been acknowledged. Color is one of the most important parameters able to drive consumer choice [25]. The pigments present in broccoli (carotenoids and chlorophylls) provide an attractive green color when broccoli flour is employed in food formulations. For instance, its incorporation improved the attractiveness of gluten-free sponge cakes [21]. The use of broccoli flour in food formulations also has an effect on the texture of the final products. For instance, increasing the amount of broccoli flour led to an increase in the hardness of broccoli-containing crackers [10] and sponge cakes [21]. Most of the organoleptic attributes, including the flavor, appearance, crunchiness, and consistency, of snack bars prepared with a broccoli-soybean-mangrove flour were favored by the vast majority of semi-trained panelists [23]. In this respect, it is important to valorize broccoli and its by-product ingredients.
In this work, we evaluated the enrichment of biscuits, one of the most popular bakery goods, with different functional ingredients from broccoli by-products. For this purpose, four ingredients (three liquid extracts and one flour) were obtained using eco-friendly (food-grade) solvents and cost-effective techniques. We then investigated the effect of their inclusion on the rheological properties of the doughs (i.e., extensional viscosity, spread and stickiness, color) as well as the final biscuits (i.e., texture profile, fracture, color). At the same time, the broccoli-derived ingredients and biscuits were analyzed to assess their content of bioactive compounds (i.e., glucosinolates, polyphenols, carotenoids, and tocopherols).
Broccoli By-Products and Derived Ingredients
Blanched frozen broccoli by-products were provided by a local company (O.R.T.O. Verde S.c.a.p.a., Senigallia, Italy). Broccoli by-products, accounting for non-compliant florets and stems, were stored at −40 °C before being freeze-dried (Superco engineering, CryoDryer 5, Augsburg, Germany). Freeze-drying was carried out using an automatic process consisting of a primary drying performed at 0.38 mbar and a secondary drying at 0.25 mbar. Dried broccoli was ground into flour (BF), and some of this flour was employed to make the liquid extracts. According to Directive 2009/32/EC [26], the extraction solvents chosen to produce food ingredients were water and ethanol, which were employed in three different ratios (i.e., 100:0, 75:25, 50:50, v/v), hereafter referred to as 100W, 75W25ET, and 50W50ET, respectively. A single batch (50 mL) for each extract was prepared using a 1:15 (w/v) sample-to-solvent ratio, warmed at 40 °C for 1 h under stirring, as suggested by Bojorquez-Rodríguez et al. [27]. Later, the extracts were centrifuged (Remi Elektrotechnik Ltd., Neya 16R, Mumbai, India) at 3500 rpm for 5 min at 4 °C to remove the residue and stored at −20 °C prior to chemical analysis and biscuit preparation.
Biscuit Preparation
BF and extracts were added as functional ingredients in the biscuit formulations, for a total of 4 doughs. In detail, 10% BF was used to substitute wheat flour, while liquid extracts (30 mL each) replaced some of the sunflower oil. A control dough was also prepared. The ingredients used for the preparation of biscuits are listed in Table 1 and the samples are shown in Figure 1. A planetary mixer (Kenwood, Model KWL90.244SI, Woking, UK) was used to mix the ingredients and prepare the dough. At first, eggs, sugar and cream of tartar were mixed at maximum speed (200 rpm) for 7 min. The liquid ingredients, i.e., sunflower oil, milk, and broccoli extracts, were poured and mixed for another 3 min. After this, flour, broccoli flour, salt, and sodium bicarbonate were slowly added, and the mixing speed was reduced to about 70 rpm for 4 min. The dough was left to rest at a refrigerated temperature (4 °C) for 30 min. The dough was rolled on a flat surface with a rolling pin set at a constant rolling thickness of 1 cm. Half of the dough was employed for rheological measurements, and half was sized with a biscuit mold (50 × 30 mm). For each dough, 24 biscuits were obtained and baked in a ventilated oven (Bosch, HSG636ES1, Munich, Germany) at 180 °C for 18 min and then cooled to room temperature for 30 min.
Rheological and Mechanical Analysis
Rheological and mechanical analysis of the dough and biscuit samples was performed using a universal testing machine (Zwick Roell, model 1 kN, Ulm, Germany) equipped with specific tools.
Lubricated Biaxial Extension of Doughs
Dough workability was first investigated by evaluating the strain resistance under large deformations (up to 75%) preceding and exceeding the fracture point. Two cylindrical plexiglass probes with a height of 11 mm and a diameter of 15 mm and with lubricated bottom and upper surfaces were used. The load (N) was registered as a function of time (s) and load-line displacement (mm), while the load-line displacement rate was set at 50 mm·min−1 (0.00083 m·s−1).
The apparent biaxial extensional viscosity (ABEV) was calculated as

ABEV(t) = 2 F(t) h(t) / (π r(t)² v)

where F(t) is the load (N) at time t, h(t) is the height (m) and r(t) is the radius (m) of the dough specimen at time t, and v is the load-line displacement speed (m·s−1); the corresponding biaxial strain rate is v/(2 h(t)). The hardening rate of the dough, expressed as the maximum increase in ABEV as a function of strain rate, was calculated in the range of strain rates preceding the fracture point. The latter was expressed as the strain rate (s−1) at which the ABEV reached its maximum value. The lubricated biaxial extension test was replicated 10 times for each dough formulation.
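As a numerical illustration of this calculation, the sketch below assumes a constant-volume cylindrical specimen (so that r(t)² = r0²·h0/h(t)) compressed at constant speed, with an illustrative strain-hardening load curve in place of measured data:

import numpy as np

h0, r0, v = 11e-3, 7.5e-3, 0.00083        # probe height (m), radius (m), speed (m/s)
t = np.linspace(0.1, 9.0, 200)            # time (s)
h = h0 - v * t                            # specimen height h(t) (m)
F = 2.0 * np.exp(300.0 * (h0 - h))        # illustrative strain-hardening load (N)

r2 = r0**2 * h0 / h                       # constant-volume radius squared (m^2)
abev = 2.0 * F * h / (np.pi * r2 * v)     # apparent biaxial extensional viscosity (Pa*s)
strain_rate = v / (2.0 * h)               # biaxial strain rate (1/s)

peak = np.argmax(abev)                    # fracture point = strain rate at max ABEV
print(f"fracture point at strain rate {strain_rate[peak]:.3f} 1/s, "
      f"peak ABEV {abev[peak]/1e3:.1f} kPa*s")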
Spread and Stickiness of Doughs
The ability to adhere to 45°-inclined plexiglass surfaces (50 mm internal diameter, 25 mm internal depth) under large deformations preceding and exceeding the fracture point was evaluated. Dough surface properties were determined by performing a loading-unloading test, using two complementary conical probes. The same amount of dough specimen was loaded in the receiving conical probe; then, it was left to rest for 10 min to fully recover all internal stresses. During the loading step, the specimen was compressed and the load-line displacement rate was set at 50 mm·min−1 (0.00083 m·s−1) up to a gap of 8 mm between the conical surfaces, allowing the dough to exceed its fracture point. The unloading step was performed immediately after the loading step, allowing the tension load to be registered during full detachment of the specimen from the plexiglass surfaces.
The maximum compressive load measured during the loading step was taken as the spreading resistance, and the maximum tension load measured during the unloading step as the stickiness. The test was replicated 10 times for each formulation.
Texture Profile Analysis of Biscuits
The load vs. time curves acquired in two loading-unloading cycles were used to simulate the two bites. Parameters such as springiness, hardness, cohesivity, friability, fragility, resiliency, and chewiness were determined. The test consisted of a 0.5 N preload, loading the biscuit at 10 mm·min−1 to 25% deformation, unloading at 400 mm·min−1 and resting for 60 s for fast recovery, reloading the biscuit at 10 mm·min−1 to 25% deformation, and unloading at 400 mm·min−1 and resting for 60 s for recovery.
The biscuits used for texture profile analysis were characterized by an average size of 50 × 30 × 12 mm for the width, length, and thickness, respectively.The test was repeated 10 times for each formulation.
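A minimal sketch of how the main TPA parameters can be extracted from the two load-time cycles is given below; the definitions used (peak force, area ratio, duration ratio) follow common TPA conventions, which may differ in detail from the study's analysis, and the force curves are illustrative:

import numpy as np

# Illustrative force-time data for the two 25%-deformation cycles.
t1 = np.linspace(0.0, 18.0, 200); f1 = 8.0 * np.sin(np.pi * t1 / 18.0)  # bite 1 (N)
t2 = np.linspace(0.0, 15.0, 200); f2 = 5.5 * np.sin(np.pi * t2 / 15.0)  # bite 2 (N)

area1 = np.sum(f1) * (t1[1] - t1[0])       # area under bite 1 (N*s)
area2 = np.sum(f2) * (t2[1] - t2[0])       # area under bite 2 (N*s)

hardness = f1.max()                         # peak force of the first bite (N)
cohesiveness = area2 / area1                # area ratio
springiness = t2[-1] / t1[-1]               # duration ratio
chewiness = hardness * cohesiveness * springiness
print(hardness, round(cohesiveness, 2), round(springiness, 2), round(chewiness, 2))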
Fracture Test of Biscuits by Single-Edge Notched Three-Point Bending Test
The biscuit was analyzed in its original shape and geometry after making a notch with a depth of 2 mm at the center of the convex surface.The alignment of the notch on the biscuit surface with the loading tool was ensured and a non-destructive preload of 0.1 N was imposed.The fracture test was carried out with a load-line displacement speed of 1 mm•min −1 under position control conditions.
Load vs. time and load vs. load-line displacement data were numerically analyzed by performing first, second, and third derivative analysis to calculate the Young's elastic modulus (E), together with the work required to initiate and to propagate the fracture within the biscuit ligament. At least 10 replications were performed for each biscuit.
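As an illustration of the elastic-modulus estimate, the sketch below applies the standard three-point-bending beam relation; the support span is an assumed value, the slope is illustrative, and the study's derivative-based analysis and notch correction are not reproduced:

# Flexural (Young's) modulus from the initial linear region of a
# three-point bend: E = S^3 m / (4 b d^3), with S = support span,
# m = load-deflection slope, b = width, d = thickness.
S, b, d = 40e-3, 30e-3, 12e-3      # m (span assumed; width/thickness as reported)
m = 2.0e3                          # N/m, illustrative initial slope
E = S**3 * m / (4 * b * d**3)      # Pa
print(f"E = {E/1e6:.2f} MPa")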
Color Analysis of Biscuits
Baking performance was evaluated by determining changes in color attributes. A digital camera (Panasonic, DMC-FZ1000, Osaka, Japan) operating at high resolution (pixel size of 5.7 µm) was used under uniform light to acquire the images of the biscuit surfaces. The size calibration was performed using a phantom tool with standardized sizes. L* (lightness; values range from 0, black, to 100, white), a* (positive values indicate the degree of redness, negative values the degree of greenness), and b* (positive values indicate the degree of yellowness, negative values the degree of blueness) of the CIELAB system were considered and analyzed using Adobe Photoshop (2020, v21.x). The values of L*, a*, and b* were extracted by using the histogram function and were converted into the standard values, as reported by [28]. Then, chroma (C*), hue angle (hab), total color difference (∆E), whiteness index (WI) and yellowness index (YI) were calculated according to the CIE 1976 L*a*b* color space [29]. These indices were determined according to Pathare et al. [30].
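The derived indices can be computed directly from the L*, a*, b* values; a minimal sketch following the standard CIELAB definitions (cf. Pathare et al. [30]), with illustrative biscuit and control readings rather than the study's data:

import math

def color_indices(L, a, b, L0, a0, b0):
    """C*, hue angle (deg), dE vs. reference (L0, a0, b0), WI, YI."""
    C = math.hypot(a, b)                                   # chroma
    hue = math.degrees(math.atan2(b, a)) % 360.0           # hue angle
    dE = math.sqrt((L - L0)**2 + (a - a0)**2 + (b - b0)**2)
    WI = 100.0 - math.sqrt((100.0 - L)**2 + a**2 + b**2)   # whiteness index
    YI = 142.86 * b / L                                    # yellowness index
    return C, hue, dE, WI, YI

# 10% broccoli-flour biscuit vs. control biscuit (illustrative values):
print(color_indices(55.0, -4.0, 28.0, 68.0, 5.0, 30.0))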
Determination of Glucosinolates
Broccoli-derived ingredients (i.e., flour and hydroalcoholic extracts) plus the enriched biscuits were analyzed for their glucosinolate (GLS) content, following extraction according to [16] with minor modifications. BF (0.5 g) and biscuits (1 g) were extracted with 3 and 2 mL of methanol/water (70:30, v/v), respectively, at 70 °C for 30 min using a thermoblock (Falc Instruments srl, Model TA120P1, Bergamo, Italy) and then sonicated for 15 min. The extraction and the analysis were performed in triplicate. The mixture was centrifuged (Remi Elektrotechnik Ltd., Neya 16R, Mumbai, India) at 4500 rpm for 5 min, and the supernatant was collected in a 2 mL Eppendorf tube. The supernatant was dried using an integrated speedvac system (Thermo Fisher Scientific, Model ISS110-230, Waltham, MA, USA) and the residue was dissolved in ultrapure water. The redissolved extracts of BF and biscuits, along with the hydroalcoholic extracts, were filtered through a 0.45 µm filter (Sartorius regenerated cellulose membrane) and 2 µL was injected into a UPLC-PDA-MS system (Waters Corporation, Acquity, MA, USA). The separation was performed on a CSH C18 (100 × 2.1 mm, 1.7 µm) column according to Nartea et al. [31]. The mobile phase consisted of A (0.1% formic acid in water, v/v) and B (0.1% formic acid in acetonitrile, v/v). The gradient was as follows: 96% A and 4% B, 0 min; 85% A and 15% B, 10 min; 30% A and 70% B, 20 min; 30% A and 70% B, 25 min; 96% A and 4% B, 30 min. The flow rate was kept constant at 0.3 mL·min−1. The column temperature was set at 35 °C and sample loading was performed at 20 °C. PDA spectra were recorded from 200 to 500 nm.
The MS was operated in negative electrospray ionization mode in full scan at a range of 50-600 m/z; the cone voltage was 15 V; and the capillary voltage was 0.8 kV.Retention time and mass spectra were used to identify and further quantify GLSs.
Total Phenolic Content
BF and the biscuits (1 g) were mixed with 15 mL of ethanol/water (50:50, v/v) at 40 °C for 1 h in the dark. Then, they were centrifuged, and the supernatant was collected. The total phenolic content (TPC) of BF, the extracts, and the biscuits was determined according to the Folin-Ciocalteu method [32]. The reaction mixture was composed of 20 µL of the sample, 1.58 mL of water, and 100 µL of Folin reagent. After 7 min, 300 µL of sodium carbonate solution was added to the mixture, which was vortexed and left at room temperature for 30 min. The absorbance was measured at 750 nm in a UV-Vis spectrophotometer (Onda, UV-31 SCAN, Beijing, China). Each sample was analyzed in triplicate and the results were expressed as mg gallic acid equivalents (GAE)·g−1 of the sample, using a calibration curve of gallic acid.
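The conversion from absorbance to mg GAE·g−1 is the usual linear-calibration arithmetic; a minimal Python sketch (helper names and the dilution handling are our assumptions about the workflow):

    import numpy as np

    def tpc_mg_gae_per_g(abs750, std_conc, std_abs,
                         sample_mass_g, extract_vol_ml, dilution=1.0):
        """TPC from Folin-Ciocalteu absorbance via a gallic acid curve.

        std_conc (mg GAE/mL) and std_abs: calibration standards at 750 nm.
        """
        slope, intercept = np.polyfit(std_conc, std_abs, 1)
        conc = (abs750 - intercept) / slope       # mg GAE/mL in the assay
        total_mg = conc * dilution * extract_vol_ml
        return total_mg / sample_mass_g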
Simultaneous Determination of Carotenoids and Tocopherols
Carotenoids and tocopherols from BF and the biscuits were extracted according to [33] with slight modifications. Each sample (100 mg) was added to acetone (5 mL), kept for 15 min at 4 °C, vortexed, and centrifuged (Remi Elektrotechnik Ltd., Neya 16R, Mumbai, India) at 3000 rpm for 5 min at 4 °C. The extraction was repeated twice. The supernatants were collected, filtered (0.45 µm regenerated cellulose, Sartorius), dried, and resuspended in 1 mL of acetone. Finally, 3 µL was injected into a UPLC-PDA-FLR system (Waters Corporation, Acquity, MA, USA). The column was an HSS T3 C18 (100 × 2.1 mm, 1.8 µm). The chromatographic conditions were set according to [31]. PDA was set at 450 nm to detect carotenoids, while for tocopherols, FLR was set at 290 nm excitation and 330 nm emission. Carotenoids and tocopherols were identified via comparison of retention times and absorbance spectra with pure standards. Their quantification was performed by means of external calibration. Each sample was analyzed in triplicate.
Statistical Analysis
The data were analyzed using STATISTICA v. 10 (StatSoft, Inc., Tibco, CA, USA). One-way ANOVA and Tukey tests were applied to identify statistical differences (p < 0.05) among samples in terms of bioactive compounds. Principal component analysis (PCA) was performed based on correlation matrix data, factor coordinates, and eigenvalue analysis. To gain insight into the interplay between the dough enrichment strategy and the final quality of the enriched biscuits, the texture properties of the biscuits were treated as input active variables in the PCA, while color properties were treated as supplementary variables.
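An equivalent open-source workflow (a sketch, not the STATISTICA procedure actually used; SciPy, statsmodels, and scikit-learn are assumed to be available):

    import numpy as np
    from scipy import stats
    from statsmodels.stats.multicomp import pairwise_tukeyhsd
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    def compare_groups(groups, alpha=0.05):
        """groups: dict mapping formulation name -> replicate values."""
        f, p = stats.f_oneway(*groups.values())          # one-way ANOVA
        labels = np.concatenate([[k] * len(v) for k, v in groups.items()])
        values = np.concatenate(list(groups.values()))
        tukey = pairwise_tukeyhsd(values, labels, alpha=alpha)
        return f, p, tukey

    def pca_on_texture(X):
        """Correlation-matrix PCA == PCA on z-scored variables (rows = biscuits)."""
        Z = StandardScaler().fit_transform(X)
        pca = PCA().fit(Z)
        return pca.transform(Z), pca.explained_variance_ratio_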
Rheological Properties of Doughs
Figure 2A shows changes in the apparent biaxial extensional viscosity (ABEV, kPa·s) as a function of the strain rate (s−1) in the dough.

As can be inferred from Figure 2A, a fracture point is reached during the lubricated biaxial compression test, which corresponds to the maximum ABEV value. Significant differences are observed (p < 0.001) among the investigated dough formulations, both in terms of maximum ABEV and maximum hardening rate, as evaluated within the full range of strain rates preceding the fracture point of the dough structure (Table S1). The ABEV values follow the order 100W < STD < 75W25ET < 50W50ET < BF10, while the strain hardening rate follows the opposite order.
The partial substitution of sunflower oil with broccoli extracts led to significant changes in the extensional resistance and strain hardening rate compared to D_CTRL.Such a result suggested that the higher the level of ethanol used for the extraction, the higher the extraction of structural active compounds (e.g., polysaccharides, pectin, flavonoids, phenols) which were able to interact with wheat flour hydrocolloids under strain.
On the other hand, the partial substitution of wheat flour in the dough preparation with BF led to a dough (D_BF10) with the highest resistance to the biaxial extension, as suggested by the highest level of ABEV (2395.63 kPa•s) and the lowest hardening strain rate (5.1 × 10 −5 kPa•s 2 ), being 3.3 times higher and 0.3 times lower than D_CTRL, respectively.Such a result suggested that the broccoli's hydrocolloids, mainly consisting of lignin, cellulose, hemicellulose, and pectin play a major role in dough consistency and workability with respect to the wheat flour hydrocolloids.Lignin, cellulose, and hemicellulose behave as elastic and viscoelastic structural elements in vegetable matrices such as cauliflower and broccoli, while pectin plays a plasticizing role [34].
The spread (maximum compressive load, F max ) and the stickiness (maximum tension load, F min ) of doughs are shown in Figure 2B.
F max followed the order D_CTRL < D_100W < D_75W25ET < D_50W50ET < D_BF10. These results were consistent with those for the extensional resistance, except for the spread resistance of D_100W, which was significantly higher than that of D_CTRL, irrespective of the observed ABEV values. As far as the stickiness properties are concerned, it is worth noting that no significant difference between D_CTRL and D_BF10 was detected, while the doughs prepared with hydroalcoholic extracts were almost 7.7 times stickier than D_CTRL and D_BF10. This suggests the ability of the hydroalcoholic mixture to extract from broccoli a significant amount of relatively low-molecular-weight polysaccharides, which are able to modify the surface properties of the enriched doughs under strain. Consequently, our doughs were limited to 30 mL of hydroalcoholic extracts to preserve dough workability and to avoid excessive stickiness.
Based on the bulk and surface properties, as expressed in terms of apparent extensional viscosity, strain hardening rate, maximum load to spread and stickiness, as well as on the practice used to prepare the investigated doughs, we considered D_BF10 as the functional dough with the best workability features and the highest amount of bioactives from broccoli.
Rheological Properties of Biscuits
Texture properties of biscuits (Table S2) were used as active variables to perform PCA, while color attributes (Figure S1) were used as supplementary variables. The conducted PCA displayed 99.12% of the total variance, with PC1 explaining the most (84.83%) (Figure 3).
B_BF10 appeared to be the hardest, most fragile, and most resilient biscuit and, at the same time, was the least springy and friable among all the enriched products. B_75W25ET showed the opposite textural features, while B_CTRL, B_100W, and B_50W50ET displayed intermediate behavior. In detail, B_BF10 showed the lowest springiness (related to the extent of recovery of the original shape from the compressed state) and the highest resiliency (rate of recovery of the original shape after compression), while no differences were detected among the other biscuits, including the control. Such results suggest that BF provided a significant quantity of hydrocolloids, resulting in the strongest elastic and viscoelastic interactions in the final biscuit. Moreover, these interactions enable air volume entrapment during baking, and therefore B_BF10 also displayed the greatest work to initiate and work to propagate a fracture (Figure S2 and Table S3).
According to the derived texture properties, B_75W25ET showed the lowest chewiness, most likely due to the highest quantity of compounds having plasticizing effects on the biscuit structure under strain and fracture. In contrast, the chewiness of B_BF10 was the highest among the investigated biscuits. Similarly, in gluten-free products, Krupa-Kozak et al. [24] highlighted an increase in the chewiness of bread prepared with 5% broccoli leaf powder. The highest chewiness and hardness registered for B_BF10 suggest that the biscuit requires a longer oral processing time and a higher number of chews before swallowing [35]. It has been reported that a higher number of masticatory cycles increases satiety and reduces food intake, which could be helpful for targeting eating disorders [36].
Regarding color attributes, B_BF10 showed great differences compared to the control and the other enriched biscuits in terms of colorfulness (∆E), WI, YI, C*, and b* parameters.This could be attributed to the abundance of colored components such as phenols and carotenoids.Color is important to meet consumer acceptability [25].In a previous study, the incorporation of broccoli by-product in crackers was positively evaluated [10].Therefore, the color of B_BF10 could be appreciated by consumers.
Among the hydroalcoholic extracts, the higher the amount of ethanol, the higher the concentration of GLSs recorded, up to a total of 9749.1 ± 15.3 µg·g−1 DW in E_50W50ET. A similar trend was observed by Bojorquez-Rodríguez et al. [27] in broccoli sprouts, where the concentration of GLSs in the extracts with 50% ethanol was much higher than that with 0% ethanol. The total GLS concentration detected in E_50W50ET was almost 8 times higher than in BF. This could be attributed to the extraction method, and especially to the sample/solvent ratio used (1:15 in E_50W50ET vs. 1:6 in BF). Pham et al. [37] noticed that a high sample-to-solvent ratio limited the dissolution of compounds into the solvent due to a saturation effect.
Unfortunately, no GLSs were detected in the biscuits enriched with hydroalcoholic extracts, not even in the one made with E_50W50ET, despite its high GLS concentration. This could be related to the limited amount of extract that could be added to the biscuit formulation (see Section 3.1). Therefore, considering the final concentration of GLSs achieved in the extracts together with the limitation imposed by the stickiness of the dough, hydroalcoholic extracts are not suitable ingredients to significantly fortify bakery goods. However, they could be valuable ingredients for other liquid or semi-solid formulations, providing a considerable amount of health-promoting GLSs.
On the other hand, seven out of nine GLSs were identified in the biscuit enriched with 10% BF for a total of 33.2 ± 3.4 µg•g −1 DW (Table 3).Glucoraphanin accounted for 54.5% of the total.Glucoraphanin is particularly important because it can be degraded by enzymes of the intestine into sulforaphane, which is a strong chemo-preventive isothiocyanate [8,13,[38][39][40].For instance, Mirmiran et al. [41] noticed a significant reduction in the level of inflammatory markers after 4 weeks of supplementation with broccoli sprout powder with a high sulforaphane concentration.
Total Phenolic Content
The TPC of the broccoli-derived ingredients varied from 8.7 to 10.8 mg GAE·g−1 DW, with E_100W displaying the highest value (Figure 4A). Our findings are slightly higher than those reported by [17], who found a TPC of about 2.5 mg GAE·g−1 DW in florets. This could be related to the shorter extraction time these authors used (10 min vs. 1 h) or to the variety and other environmental factors, which could imply variation in the content of antioxidants [42].
Once the hydroalcoholic extracts were included in the biscuit formulation, they did not lead to a significant increase in TPC compared to the control biscuit (~0.73 mg GAE•g −1 DW; Figure 4B).By contrast, the TPC of B_BF10 was almost double this value (1.9 mg GAE•g −1 DW).Such a result suggests that the food matrix (BF) protects these compounds from thermal degradation [43].A similar concentration was observed in crackers enriched with 15% broccoli stem flour [10].The encapsulation of phenolic compounds is one of the most commonly used techniques to avoid their loss [22].In this case, the incorporation of BF represents a cost-effective alternative to fortify biscuits with phenolic compounds.
Tocopherols and Carotenoids
Among the broccoli-derived ingredients investigated, tocopherols and carotenoids were determined only in BF.Due to the lipophilic nature of these compounds, none of them were detected in the hydroalcoholic extracts.BF was mainly characterized by β/γ and α tocopherols (~47.4 µg•g −1 DW in total, Figure 5A).
Lee et al. [3] found concentrations ranging from 1.8 to 286 µg·g−1 DW of total tocopherols among twelve Brassicaceae. However, tocopherols accumulate differently in different broccoli organs, with leaves having the highest concentration compared to florets and stems [17]. No statistical differences were found among the enriched biscuits compared to B_CTRL. However, the results of the tocopherol analysis showed quite high data dispersion. This may be the result of the diverse composition of the functional ingredients in low-molecular-weight compounds (e.g., polysaccharides, peptides), which affected the rheological and mechanical properties of doughs and biscuits, probably causing a non-uniform oxidative deterioration of tocopherols during biscuit preparation and baking. It is noteworthy that the tocopherol fraction in B_CTRL can mainly be attributed to sunflower oil (used as an ingredient), which was reported to have γ- and α-tocopherol concentrations of about 92 and 432 mg·kg−1 [44]. The sunflower oil content was reduced by the hydroalcoholic extracts in the case of the enriched biscuits.
Regarding carotenoids, lutein and β-carotene were predominant in BF (27.4 and 18.7 µg·g−1 DW, respectively; Figure 5B). Previously, violaxanthin and neoxanthin were also identified in broccoli florets, at concentrations on the order of µg·g−1 DW, similar to β-carotene [3,17]. Different extraction methods might be more successful in recovering a more complete carotenoid profile. The hydroalcoholic extracts were devoid of a carotenoid fraction; hence, the biscuits made with them did not differ from B_CTRL. The presence of lutein in B_CTRL and in the other biscuits containing hydroalcoholic extracts (approximately 6 µg·g−1 DW) can be attributed to the wheat flour [45]. On the other hand, a 10% substitution level of BF led to a significant enrichment of B_BF10; namely, the content of β-carotene was 3.0 ± 0.6 µg·g−1 DW and that of lutein was 7.8 ± 0.4 µg·g−1 DW. The recovery rates were 166% and 137% for β-carotene and lutein, respectively. This suggests not only that the thermal treatment did not degrade these compounds, but that it actually promoted their release from the matrix. Previously, Nartea et al. [31] enriched pizzas with 10 and 30% cauliflower flour, obtaining a proportional increase in the carotenoid content. However, the recovery rates in pizzas enriched at 10% ranged between 12 and 57%; the lower recovery rates were mostly related to yeast fermentation. Hence, B_BF10 could be an alternative source of vitamin A that could be introduced into a balanced daily diet.
Conclusions
The transformation of broccoli by-products into functional ingredients, such as hydroalcoholic extracts and flour, for food applications represents a good circular economy approach and a cost-effective alternative to valorize them.The substitution of sunflower oil with hydroalcoholic extracts in biscuit formulations was not suitable to guarantee proper enrichment because of the enhanced stickiness and poor workability of the doughs observed.However, the high glucosinolate content suggests them as potential ingredients for liquid or semi-solid applications.
On the other hand, the incorporation of 10% broccoli flour resulted in a cohesive and easy-to-work dough with enhanced nutritional quality in terms of the glucosinolate, carotenoid and phenolic content.The matrix helped to prevent the thermal degradation of these sensitive compounds.Therefore, broccoli flour is a promising functional ingredient to fortify biscuits, and potentially other bakery goods.Biscuits with broccoli flour will be further analyzed to evaluate their shelf-life as well as consumer acceptance through sensory analysis.The future perspective involves developing these functional biscuits at the industrial scale.
Supplementary Materials:
The following supporting information can be downloaded at: https://www.mdpi.com/article/10.3390/antiox12122115/s1. Figure S1: Color attributes of control and functional biscuits prepared with broccoli-derived ingredients. Values are reported as average ± standard error. The color attributes reported are as follows: hab, hue angle; C*ab, chroma; YI, yellowness index; WI, whiteness index; ∆E*, total color difference; Figure S2: Work to initiate and propagate a fracture in biscuits under plane-strain conditions; Table S1: Hardening rate and apparent biaxial extensional viscosity (ABEV) values of control and functional doughs; Table S2: Texture profile analysis. Values are reported as average ± standard error; Table S3: Work to initiate and work to propagate a fracture of control and functional biscuits.
Figure 1. Control, enriched doughs and their biscuits prepared with broccoli-derived ingredients.
Figure 2. Rheological properties of doughs: (A) biaxial extension of the doughs under large-strain lubricated compressive conditions; (B) spread and stickiness of the doughs under large-strain conditions. Error bars represent the confidence interval consisting of two times the standard error around the mean values.
Table 1. List of ingredients in control and enriched biscuits.
Table 2. Concentration of glucosinolates in broccoli-derived ingredients (flour and liquid extracts). Results are expressed as µg·g−1 DW. Values are reported as average ± standard deviation.
Table 3. Concentration of GLSs in the functional biscuit made with 10% broccoli flour. Results are expressed as µg·g−1 DW. Values are reported as average ± standard deviation.
|
2023-12-16T16:15:20.027Z
|
2023-12-01T00:00:00.000
|
{
"year": 2023,
"sha1": "061eb0dfd48cc7df632e32dd53de20ff37b7982c",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2076-3921/12/12/2115/pdf?version=1702536209",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "9b96989e93d520ec3f7a003519a15a17f6318c2d",
"s2fieldsofstudy": [
"Agricultural and Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
249848101
|
pes2o/s2orc
|
v3-fos-license
|
Experimental evaluation of neutron-induced errors on a multicore RISC-V platform
RISC-V architectures have gained importance in the last years due to their flexibility and open-source Instruction Set Architecture (ISA), allowing developers to efficiently adopt RISC-V processors in several domains at a reduced cost. For application domains such as safety-critical and mission-critical, the execution must be reliable, as a fault can compromise the system's ability to operate correctly. However, the application error rate on RISC-V processors has not been evaluated as extensively as it has for standard x86 processors. In this work, we investigate the error rate of a commercial RISC-V ASIC platform, the GAP8, exposed to a neutron beam. We show that for computing-intensive applications, such as classification Convolutional Neural Networks (CNNs), the error rate can be 3.2x higher than the average error rate. Additionally, we find that the majority (96.12%) of the errors on the CNN do not generate misclassifications. Finally, we also evaluate the events that cause application interruption on GAP8 and show that the major source of incorrect interruptions is application hangs (e.g., due to an infinite loop or a race condition).
I. INTRODUCTION
Thanks to its open-source Instruction Set Architecture (ISA), RISC-V based processors are today adopted in several domains, including end-user applications [1], High Performance Computing (HPC) [2], and safety-critical applications [3], [4]. As the RISC-V architecture allows full customization, it enables system design at reduced cost. Consequently, RISC-V processors have become a promising option for safety-critical and mission-critical systems, where power consumption, real-time execution, and reliability are of the highest importance. However, the majority of RISC-V works mainly focus on design for performance and power consumption, often neglecting reliability. RISC-V architectures need to be evaluated in order to characterize the errors that can reduce their reliability and potentially prevent the system from meeting its mandatory constraints.
The sources of such errors can be environmental perturbations, ionizing radiation, software errors, or process, temperature, and voltage variations [5], [6]. It has been demonstrated that faults caused by radiation have the highest error rates [7]. A terrestrial neutron strike may perturb a transistor's state, generate bit-flips in memory, or induce current spikes in logic circuits that, if latched, lead to an error. Neutron-induced events are usually soft errors because the device is not permanently damaged: a new write operation will correctly store the value on the struck memory cell, and likewise a new operation using the struck logic gate will provide the correct result.
To evaluate the device reliability under radiation, researchers expose devices such as CPUs [8], GPUs [9]- [11], FPGAs [12], [13], and Tensor Processing Units [14] to a radiation flux and measure the error rate. Recently, the error rate of soft RISC-V processors synthesized on FPGAs has been measured, when exposed to heavy ions and neutrons [15]- [17]. However, as far as we know, none of the existing works considered RISC-V processors physically implemented as an ASIC.
This work is the first to characterize the error rate of a commercial ASIC RISC-V platform, i.e., the ultra-low-power GAP8 platform from GreenWaves [18], with a RISC-V master core and a cluster of 8 RISC-V cores, measured in neutron beam experiments. Beam experiments provide a realistic error rate that can be directly scaled to the terrestrial flux, and thus our results can be used to estimate the real error rate of the GAP8 RISC-V platform. Additionally, to effectively investigate the error rate, we classify the fault outcome of each incorrect execution. We can then distinguish, for instance, whether the execution hangs or crashes due to a memory error.
We report results from beam testing campaigns that assess the reliability of the GAP8 platform, while running a representative set of codes, and measure the error rates and the Mean Execution Between Failures (MEBF). The beam data covers accelerated testing that represents a total of more than 570,000 years of the platform operation. The main contributions of this work are as follows: • Provide experimental data and findings on the impact of the neutron-induced errors on GAP8 RISC-V platform. • Reliability evaluation of 5 representative codes from different domains, from signal processing to machine learning. • Analysis of the fault outcomes for the erroneous executions observed in the experiments using the GAP8 Software Development tool Kit (SDK). • We measure the MEBF to estimate the number of executions that GAP8 devices would perform before experiencing a failure.
The remainder of the paper is organized as follows. Section II presents the background on radiation effects on computing devices and the current advances in RISC-V reliability. Section III presents the tested device, selected codes, and describes the evaluation methodology. The experimental results are presented in Section IV, and finally Section V concludes the paper.
II. RADIATION EFFECTS ON RISC-V DEVICES
The effects of radiation have been studied in beam experiments for CPUs [8], GPUs [9]-[11], FPGAs [12], [13], Deep Neural Network accelerators [14], ARM CPUs [19], and memories [20]-[22]. When a device is exposed to a beam of accelerated particles, faults are induced in the hardware and may manifest as output errors. Beam experiments are the standard methodology for injecting faults into an electronic device. As the chip is uniformly exposed to the beam of particles, the faults are not limited to a subset of the device resources. Consequently, beam experiments provide a realistic error rate for a device running an application.
As RISC-V processors have been employed in several application domains, including space exploration [3], [23], HPC [2], and safety-critical systems [4], it becomes imperative to study RISC-V reliability against neutron-induced errors. The reliability of soft RISC-V cores synthesized on FPGAs exposed to neutrons and heavy ions has been studied in past works [15]-[17]. Researchers applied Triple Modular Redundancy to RISC-V and evaluated its reliability in beam experiments [15]. In fact, as RISC-V is an open-source ISA, developers can easily modify the architecture to add fault tolerance and evaluate the hardened architecture in beam experiments. However, the fault model and error rate measured in beam experiments are significantly hardware-dependent. As a result, the fault model and error rate obtained for soft cores synthesized on FPGAs do not correspond to those of ASIC processors, and thus the evaluation of soft cores synthesized on FPGAs is not representative of applications running on ASIC RISC-V processors. Note that RISC-V processors built as ASICs are preferable over soft cores, especially for ultra-low-power applications, as the use of an FPGA may add unnecessary power consumption.
To the best of our knowledge, this is the first paper that: (1) evaluates a commercial ASIC multicore RISC-V platform (GAP8) on a neutron beam and extracts the realistic error rate for five representative codes; (2) evaluates the outcomes observed in the beam experiments. Furthermore, we keep a trace of each incorrect execution in the beam experiments in order to analyze it further, providing useful insights.
A. Fault propagation on GAP8
The natural flux of high-energy neutrons at sea level is about 13 neutrons/(cm² × h) [24]. When interacting with the hardware, high-energy neutrons may generate soft errors in the system. Figure 1 shows the error propagation on the GAP8 ultra-low-power RISC-V platform. The striking particle can generate single or multiple bit flips in a memory resource such as caches, registers, or buffers, or even corrupt the value of a functional unit inside a processing core. If the value is used as part of the algorithm computation, the incorrect value will be propagated by the code running on the device. At the application level, the fault may manifest as one of the following outcomes: (1) No effect on the program output: the fault is masked, and neither the program output nor the circuit functionality is affected. (2) Silent Data Corruption (SDC): the program finishes, but the output is not correct. (3) Detected Unrecoverable Error (DUE): the system stops working, forcing it to be rebooted or power cycled. A DUE can result from events such as uncorrectable memory events, crashes, or an error that generates an infinite loop (the program hangs). Commercial devices like GAP8 often offer an SDK that allows developers to profile and trace events in the application running on the device. Researchers have successfully monitored neutron-induced events using the SDK or the operating system for ARM [25], FPGAs [26], and GPUs [11], [27]. The event tracing helps us understand the sources of the errors and the syndromes caused by the radiation in the experiments. On GAP8, we can classify the DUE observed in each code execution into four main sub-classes: • Illegal instruction: the GAP8 SDK returned a crash code that represents an invalid instruction being executed. Illegal instructions can happen when a neutron-generated error corrupts the instruction's opcode; • SDK exception: the GAP8 SDK is responsible for initializing and configuring the device before executing the application for each iteration. If the SDK cannot pre-set the memory or execute the application on the device, it will throw an exception to the user. When the SDK exception happens multiple times sequentially, we perform a power cycle on the board; • Timeout: a Single Event Effect (SEE) on GAP8 may lead to an infinite loop inside the application. To avoid the application hanging indefinitely, we set a Timeout limit to stop the running code and start it again; • Memory error: when the application running on GAP8 cannot allocate memory, the API will generate an error reporting a memory allocation error.
III. EXPERIMENTAL METHODOLOGY

In this section, we describe the GAP8 RISC-V processor, the codes we characterize, the metrics adopted, and how they are measured for GAP8.
A. Device under test and evaluated codes

Evaluated platform: We consider the GAP8, a multicore RISC-V platform from GreenWaves Technologies. The device under test is built in 55 nm TSMC 55LP CMOS technology. The SoC comprises a cluster of 8 RISC-V cores connected by a Logarithmic Interconnect crossbar and a RISC-V main processor to manage the cluster. GAP8 has an L1 memory of 64 KB and an L2 memory of 512 KB, and both memories are shared between the cores of the cluster. Each core of the cluster operates at a maximum frequency of 175 MHz. It supports only integer and fixed-point arithmetic. GAP8 is an ultra-low-power device, and it can process 22.65 GOPS with a power consumption of 96 mW [18].
Evaluated codes: We chose the five representative codes listed in Table I, spanning the signal processing and machine learning domains. The selected codes vary in their complexity and implementation characteristics. Codes such as the MNIST Convolutional Neural Network (CNN), Matrix Addition, and Matrix Multiplication are highly parallelizable, easily distributed among GAP8's eight cores, and computing-intensive. That is, for those codes, the memory instructions are few, and most of the instructions are arithmetic ones. On the other hand, Finite Impulse Response (FIR) and Bilinear Resize also have computing-intensive parts, but in addition have a high number of memory instructions and synchronizations between the main core and the cluster.
The choice of diverse codes increases the quality of the obtained results, which are extendable to different applications [28]. We select a CNN to classify handwritten digits as a representative case of embedded machine learning algorithms. The CNN is a quantized (16-bit integer) version of LeNet CNN [29] and has two convolutional layers, each one followed by a ReLU and a MaxPooling layer. The last layer that performs classification is a Linear layer. For the CNN error evaluation, we separate the errors observed on the classification output by their criticality. That is, the errors that are observed on the evaluated CNN can be separated into Tolerable SDCs (i.e., the ones that do not affect the inference result) and Critical SDCs (i.e., the ones that modify the classification result).
B. Beam Experiment Setup
Using beam experiments, we can effectively evaluate the reliability of the GAP8 platform. We expose the GAP8 to a neutron beam to evaluate its error rate. We use the default GAP8 SDK to build and run the experiments. Our experiments were performed at the ChipIR facility of the Rutherford Appleton Laboratory, UK. Figure 2 shows the setup of the experiments. The facility delivers a beam of neutrons with a spectrum of energies that resembles the atmospheric one [30]. The available neutron flux was about 3.5 × 10^6 n/(cm²·s), ~8 orders of magnitude higher than the terrestrial flux at sea level [24]. Since the terrestrial neutron flux is low, it is improbable to see more than a single corruption during program execution in a realistic application. We have carefully designed the experiments to maintain this property (observed error rates were lower than 1 error per 2,000 executions). The experimental data can then be used to estimate the error rate in the terrestrial radioactive environment.
We created a software watchdog that runs on the GAP8's host computer (see Figure 2) to monitor and perform recovery from device hangs. The software watchdog controls the application under test by executing the applications and monitoring if the execution does not return a result inside a time limit. Then, the program is killed and relaunched if it stops responding in a predefined time interval. We set each code timeout individually depending on the code execution time, up to 6× the expected execution time. The software watchdog also logs if the application crashes during the execution of the code. When the application hangs, the software watchdog uses an ethernet-controlled switch to perform a power cycle on the USB replicator, rebooting the GAP8 board. The board power cycle is necessary to detect when the GAP8 stops responding.
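The watchdog logic amounts to a launch-monitor-recover loop; a minimal sketch of the host-side control (Python; the command, timeout handling, and the power_cycle callback driving the ethernet-controlled switch are our assumptions about the actual script):

    import subprocess, time

    def run_with_watchdog(cmd, timeout_s, power_cycle, max_sdk_errors=3):
        """Launch the test binary, kill it on hang, power-cycle on repeated crashes."""
        sdk_errors = 0
        while True:
            try:
                r = subprocess.run(cmd, timeout=timeout_s,
                                   capture_output=True, text=True)
            except subprocess.TimeoutExpired:
                print("hang detected -> log Timeout, power cycle, relaunch")
                power_cycle()                 # toggle the controlled switch
                continue
            if r.returncode != 0:
                sdk_errors += 1               # crash / SDK exception
                if sdk_errors >= max_sdk_errors:
                    power_cycle()             # repeated exceptions -> reboot
                    sdk_errors = 0
            else:
                sdk_errors = 0                # parse r.stdout for SDCs here
            time.sleep(0.1)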
From beam experiments, we can calculate the cross-section by dividing the number of observed errors by the received particle fluence η (neutrons/cm²), as shown in Equation (1):

    σ = N_errors / η    (1)
The fluence η is obtained by multiplying the average neutron flux provided by the test facility (neutrons/(cm²·s)) by the effective execution time. The cross-section (cm²) represents the circuit area that will generate an output error (SDC or DUE) if hit by a particle. The higher the number of computation resources, the higher the cross-section, and the higher the probability of an impinging particle generating an error.

Fig. 3: Figures 3a and 3b show the cross-sections and the Mean Executions Between Failures (MEBF) extracted from the neutron beam experiments for each evaluated code, respectively. The hatched bars represent the average cross-section and MEBF, respectively. It is worth noting that the MEBF is calculated using the execution time provided by the GAP8 SDK.
As shown in previous works, the cross-section alone does not contain information about the execution time [31], [32]. Thus, to better evaluate the reliability of the tested device and codes, we need to correlate error rates with performance. Hence, we evaluate the Mean Executions Between Failures (MEBF) rate for each tested code. The MEBF is defined as the number of correct executions completed before experiencing a failure [32]. We calculate the MEBF by dividing the Mean Time Between Failures (MTBF) rate by the code's execution time. The MTBF is the inverse of the cross-section multiplied by the terrestrial flux, i.e., 1/(σ × flux_terrestrial). The MEBF rate is then directly proportional to the resilience of the tested setup: a higher MEBF rate means more results can be computed without experiencing errors.
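The σ → MTBF → MEBF chain is simple arithmetic; the following helper functions (Python, our names) make the scaling explicit:

    TERRESTRIAL_FLUX = 13.0                    # neutrons/(cm^2 * h) at sea level [24]

    def cross_section(n_errors, fluence_cm2):
        return n_errors / fluence_cm2          # cm^2, Equation (1)

    def mtbf_hours(sigma_cm2, flux=TERRESTRIAL_FLUX):
        return 1.0 / (sigma_cm2 * flux)        # hours between failures

    def mebf(sigma_cm2, exec_time_s, flux=TERRESTRIAL_FLUX):
        return mtbf_hours(sigma_cm2, flux) * 3600.0 / exec_time_s

    # usage: sigma = cross_section(n_err, beam_flux * exposure_s)
    #        then mebf(sigma, code_exec_time_s)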
IV. BEAM EXPERIMENTS RESULTS
In this section, we present the results of the first evaluation of a commercial parallel ultra-low-power processor based on the RISC-V architecture. We compare the cross-section of the various evaluated codes and the Mean Executions Between Failures (MEBF). Additionally, we discuss the erroneous execution outcomes observed in the beam experiments for each code.
A. GAP8 error rate

Figure 3a shows the experimental cross-section measured in the beam experiments (y-axis) for the five codes we evaluated and the average cross-section. We present the error rate split into two categories, Silent Data Corruption (SDC) and Detected Unrecoverable Error (DUE).
The results show that the DUE rate varies less than the SDC rate across the evaluated configurations. The largest DUE rate variations come from FIR (72% higher than the average) and MNIST (54% lower than the average). The FIR code is split into steps interleaved with operations sequentially performed by the master core. The majority of the FIR execution consists of sequential code and memory management on the main core, leading to less efficient utilization of GAP8 resources. Consequently, an incorrect synchronization between the cluster of cores and the main core controller is likely to generate a DUE. On the contrary, the MNIST CNN is highly optimized to extract the maximum performance from the cluster of cores, minimizing unnecessary management by the main core controller and reducing Direct Memory Access (DMA) operations, which leads to a lower DUE rate than the other codes. FIR, Bilinear Resize, and Matrix Mul operate over large arrays, which leads to more DMA requests and synchronization actions performed by the main core, increasing the DUE rate. Figure 3a shows that the SDC rate is directly related to how the evaluated codes use the functional units. As most of the GAP8 resources are underutilized during FIR execution, the SDC rate is expectedly much lower than the average (4.5× lower than the average SDC rate). Matrix Addition performs the summation of two linear arrays, which allows the code to be highly parallelized and equally distributed among the cores of the cluster. An error in a core's functional unit or in an unprotected memory resource is then likely to generate an SDC. In contrast, Bilinear Resize and Matrix Multiplication can also stress the GAP8 computing units; however, performing the calculations on large memory arrays that occupy or exceed the L1 memory (see Table I) requires a high number of memory operations, which reduces the usage of the functional units compared to the other codes. As a result, the SDC rate is reduced.
The MNIST CNN stresses all the computational and memory resources of the GAP8 platform, leading to the highest observed error rate (3.2× the average SDC rate). When used for image classification, a CNN produces a tensor of values representing the probabilities of classified objects in the frame. Then, after the last layer of the CNN, the probability values will be ranked, and the highest value will be selected. Although a fault can propagate to the last layer of the CNN, it may not change the classification.

Fig. 4: Figure 4a shows the percentages of the erroneous execution outcomes. Figure 4b shows the execution time ratio between the faulty and the fault-free execution time. Timeout executions are not shown as they are always higher than 3× the expected execution time. The execution time is measured considering the SDK device setting step.
To evaluate the criticality of the SDCs on the MNIST CNN, we separate the SDCs into Tolerable SDCs (an SDC that does not change the classification) and Critical SDCs (an SDC that changes the classification). In our experiments, 96.12% of the SDCs were classified as Tolerable SDCs. Our findings are aligned with previously published works that measured the criticality of errors on GPUs and TPUs [14], [33], [34]. In fact, it has been demonstrated that CNNs can tolerate variations in the output values and still produce the correct classification [33], [34]. However, Critical SDCs are a non-negligible threat, especially for safety-critical and mission-critical systems. Therefore, such errors should be identified and analyzed in order to design hardening techniques for low-power devices, like the GAP8 platform, that at least detect critical errors that may undermine the system's reliability.
B. Mean Executions Between Failures
The cross-section is a metric that considers the probability of error given the resources used by the application. However, the cross-section does not give us any insight into how the performance can impact the reliability. Thus, we must use the MEBF to analyze the application's reliability and performance impact. Figure 3b shows the MEBF for the evaluated codes. It is worth noting that we calculated the MEBF using the sum of the SDC and DUE cross-section for this evaluation. Then, we can evaluate the MEBF for all the radiation induced-events on the GAP8 platform.
Regarding the MEBF, the differences between the evaluated codes change compared to the cross-section evaluation. MNIST CNN has the highest error rate and, consequently, even though the code does not have the highest execution time, it has the lowest MEBF (2.2 × 10^10) among the executed codes. Conversely, Matrix Multiplication has neither the highest error rate nor the highest execution time, leading to the highest MEBF (8.2 × 10^10). It is worth noting that even though Bilinear Resize does not have a high cross-section, its MEBF is only 40% higher than that of MNIST due to its long execution time. On the other hand, FIR and Matrix Addition have an MEBF near the average.
The cross-section and the MEBF can be directly used to estimate the device's failure rate by multiplying the obtained cross-section by the terrestrial flux (13 neutrons/(cm²·h)). If we consider the average cross-section from Figure 3a, the sum of the SDC and DUE cross-sections (≈1.34 × 10^-9 cm²) would yield a failure after every ≈6.66 × 10^7 hours of operation. However, this analysis is valid only when a single operating device is considered. The scenario changes when we extend the analysis to multiple devices working in parallel. When considering a million GAP8-based systems executing simultaneously, the interval between faults would be reduced to 66.6 hours, i.e., one fault every ≈2.8 days. This is an alarming result, since many IoT applications in smart home appliances are expected to be in charge of controlling critical systems, such as heating and cooling systems and high-voltage mechanisms.
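The fleet-level scaling above follows directly from treating the devices as independent: a quick check (Python; values taken from the averages reported in this section):

    def fleet_mtbf_hours(single_mtbf_h, n_devices):
        # failures arrive n_devices times faster across an independent fleet
        return single_mtbf_h / n_devices

    single = 6.66e7                         # hours, average GAP8 MTBF
    print(fleet_mtbf_hours(single, 1e6))    # -> 66.6 h, i.e. ~2.8 days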
C. Analysis of the incorrect execution outcomes
Thanks to the setup presented in Section III, we can distinguish the outcomes of each incorrect execution (i.e., executions that do not finish and/or produce an incorrect output). Figure 4a shows the percentages of each outcome for each evaluated code in the vertical axis, i.e., SDC, Timeout, Illegal Instruction, Memory error, and SDK exception.
The executions that finished with SDCs or Timeout are the majority of the incorrect executions observed in the experiments. On average, the sum of SDCs and Timeouts represents 92% of the incorrect executions. In fact, the probability that a loop control variable, stored in a memory or a register, is modified and thus leads to a Timeout execution is higher for codes with more branch, memory, and main-to-cluster synchronization instructions, such as FIR and Bilinear Resize. Furthermore, for the FIR and Bilinear Resize codes, more errors from Illegal Instructions, Memory errors, and SDK exceptions are observed compared to the codes composed of more arithmetic instructions: Matrix Add, Matrix Mul, and MNIST CNN.
Additionally, in Figure 4b, we report the ratio between the expected average execution time (without faults) and the execution time when an incorrect execution is observed (vertical axis). We draw a black line crossing the y-axis at 1 to represent each code's error-free average execution time. The most striking result is that most executions finishing with an SDC have an execution time near the expected average, while some executions are far from it. In fact, even though SDCs are only errors observed at the application's output, the neutron-induced events may perturb the control flow or increase/reduce the number of loop iterations, generating a mismatch between the expected output and the observed one. A change in the number of iterations caused by the error may accordingly reduce or increase the execution time of the code.
V. CONCLUSIONS
In this paper, we have evaluated the realistic error rate of the GAP8 RISC-V multicore processor, implemented as an ASIC, exposed to a flux of neutrons. We have considered five representative codes for our experiments, including a quantized Convolutional Neural Network. The error rate of the device extracted from the beam experiments is directly related to how the code uses the processor resources. That is, codes that have to perform more synchronization actions between the main core and the cluster of cores and more memory operations, like FIR, have a higher DUE rate. Conversely, more compute-intensive codes that stress the functional units and have fewer memory instructions, like the MNIST CNN, have a higher SDC rate.
Using the GAP8 SDK, we have been able to trace the outcomes and the execution times of the incorrect executions observed in the radiation experiments. The results show that a fault can reduce or increase the execution time of an instance of a code that finishes with an SDC. These results can be used to tune the building and deployment of smart city, HPC, and safety-critical applications.
Even if beam experiments are the most realistic way to evaluate an electronic device, they still lack fault propagation visibility. In the future, we will inject faults into the GAP8 microarchitecture to simulate and evaluate fault propagation. Thanks to the RISC-V open-source ISA, we can perform realistic and exhaustive fault simulations at different levels of abstraction, such as RTL models and cycle-accurate simulators. The evaluation of fault propagation will then be an essential step in proposing fault tolerance for parallel multicore RISC-V processors.
ACKNOWLEDGMENT

This project has received funding from the European Union's Horizon 2020 research and innovation program under the Marie Sklodowska-Curie grant agreement No 899546 and with the support of the Brittany Region, and was partially funded by ANR FASY (ANR-21-CE25-0008-01) and ANR RE-TRUSTING (ANR-21-CE24-0015-02). Neutron beam time was provided by ChipIR (DOI:10.5286/ISIS.E.RB2200004-1). We acknowledge the researchers who supported and helped with the neutron experiments: Dr. Paolo Rech, Dr. Christopher Frost, and Dr. Carlo Cazzaniga.
|
2022-06-20T01:15:40.522Z
|
2022-06-17T00:00:00.000
|
{
"year": 2022,
"sha1": "c851a81bb33c1902d7b25bb953272bf022c98c16",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Arxiv",
"pdf_hash": "8aceaca782a0b51824371e7b439dfdc75e1ef43c",
"s2fieldsofstudy": [
"Physics",
"Engineering"
],
"extfieldsofstudy": [
"Computer Science"
]
}
|
266098205
|
pes2o/s2orc
|
v3-fos-license
|
Hemostatic capability of ultrafiltrated fresh frozen plasma compared to cryoprecipitate
This in vitro study evaluated the potential hemostatic effect of fresh frozen plasma (FFP) ultrafiltration on clotting factors, coagulation parameters, and plasma properties. ABO-specific units of FFP (n = 40) were prepared for the concentrated FFP and cryoprecipitate. Plasma water was removed from FFP by ultrafiltration using a dialyzer with a pump running at 300 mL/min. Aliquots of each concentrated FFP taken after 50, 100, 200, and 250 mL of fluid removal were measured with standard coagulation assays, clotting activity tests, and plasma property measurements, and compared with the corresponding parameters of cryoprecipitate. Concentrated FFP contained 36.5% of the fibrinogen in FFP, with a mean concentration of 7.2 g/L, lower than the cryoprecipitate level. The levels of factor VIII (FVIII), von Willebrand factor (VWF):antigen (Ag), and VWF:ristocetin cofactor (RCo) were also lower in concentrated FFP, whereas the levels of factor V, factor IX, factor XIII, antithrombin, and albumin were higher in concentrated FFP. Maximum clot firmness (MCF) in thromboelastometry was approximately one-half of that in cryoprecipitate. Although the levels of VWF:Ag, VWF:RCo, and FVIII differed depending on the ABO blood type, fibrinogen levels and MCF were not significantly different among the ABO blood groups in FFP and concentrated FFP.
Results
Table 1 shows the pre- and post-UF trends of coagulation factors, coagulation parameters, plasma proteins, and liquid properties after removing the predefined volumes of fluid. Although the levels of all measured coagulation factors and anticoagulants, protein, and density increased significantly with plasma concentration, the increases were not proportional to the concentrated volume. The mean fibrinogen, FV, FVIII, FIX, FXIII, VWF:RCo, and albumin levels increased by 170%, 178%, 190%, 182%, 114%, 148%, and 144%, respectively, in the final concentrate after UF compared with FFP. The mean MCF in EXTEM and NATEM significantly increased from 21.1 to 36.5 mm (76%) and from 22.1 to 34.4 mm (59%), respectively, between the FFP aliquot and post-UF after removing 250 mL of fluid. CT in NATEM and CFT were significantly shortened in concentrated FFP compared with FFP, whereas CT in EXTEM was significantly elevated after removing 200 mL of fluid. Alpha angles in NATEM were approximately 66% higher in concentrated FFP after removing 250 mL of fluid than in FFP (p < 0.0001). The process lasted a mean of 32 min, during which the FFP was recirculated through the circuit approximately 6.9 times. During UF, the filtration time increased with the volume removed.
Table 2 shows the levels of coagulation parameters, plasma proteins, density, osmotic pressure, and volume of the cryoprecipitates and concentrated plasma products. UF concentrated a mean plasma volume of 481.6 mL by a factor of 6.4 to 19.3. After UF, the mean plasma volume was 57.3 mL, which was not significantly different from the cryoprecipitate volume. The fibrinogen concentration in concentrated FFP ranged from 4.2 to more than 9.0 g/L (n = 4), with a mean concentration of 7.2 g/L, which was significantly lower than that in cryoprecipitate units. Cryoprecipitates contained more VWF and FVIII than the concentrated FFP, whereas FV, FIX, FXIII, and antithrombin values were significantly higher in concentrated FFP than in cryoprecipitate units. In addition, total protein, albumin levels, and density were significantly higher in the concentrate after UF than in cryoprecipitate, while osmotic pressure was significantly lower in concentrated FFP than in cryoprecipitate units.
MCF in EXTEM and NATEM were approximately two-fold higher in cryoprecipitate units than in concentrated FFP. CT and CFT in EXTEM were significantly shortened in cryoprecipitate units compared with concentrated FFP (p < 0.001), but the difference was not observed for CT and CFT in NATEM. Alpha angles in EXTEM were 11. The post hoc power analysis showed that a sample size of 20 per unit was adequate, with 100% power. The MCF effect sizes (f) in EXTEM and NATEM were 4.6 and 3.6, respectively, with an α-level of 0.05. Table 3 presents a subset analysis of the levels of fibrinogen, FVIII, VWF, and ROTEM-MCF in FFP, cryoprecipitate, and concentrated FFP for each blood type of the donors. Group AB donors exhibited significantly higher FVIII and VWF:RCo in FFP and concentrated FFP compared with group O donors, whereas the values of these coagulation factors in cryoprecipitate were not significantly correlated with the ABO blood group. Each bag of FFP, concentrated FFP, and cryoprecipitate showed similar levels of fibrinogen and ROTEM-MCF regardless of blood type.
Discussion
We found that ultrafiltration increased the fibrinogen concentration 2.7-fold, a level consistent with prior reports 12 in patients in whom MUF was used for bleeding management after cardiac surgery [6][7][8] . Although concentrated FFP contains less fibrinogen than cryoprecipitate, it also provides multiple other hemostatic factors that may be important for bleeding management 13,14 . After UF, 36.5% of the fibrinogen (a mean of 448 mg) was recovered from 480 mL of FFP, less than in cryoprecipitate, which usually contains 40-60% of the fibrinogen in FFP 15 . We also noted a correlation between the fibrinogen content of FFP and that of concentrated FFP after UF (r = 0.634, p < 0.0001), as a higher fibrinogen content of FFP raises the final level. FFP with fibrinogen levels above 330 mg/dL was concentrated to levels over 900 mg/dL.
In addition to fibrinogen, the increased levels of FV, FIX, and FXIII in concentrated FFP may also improve hemostasis by activating tenase and prothrombinase to increase thrombin generation 16,17 and by cross-linking α2-antiplasmin to polymerized fibrin, thus making fibrin more resistant to degradation 18,19 . Another potential advantage of concentrated FFP is that it provides antithrombin and other plasma proteins, which may be significant for restoring hemostatic balance 20,21 .
Further, the increase in total protein and albumin in concentrated FFP contributes to increasing colloid osmotic pressure [22][23][24] . Overall improvements in hemostasis with concentrated FFP were noted in the shortened CFT, the elevated α-angle of NATEM, and fibrin polymerization. However, EXTEM-CT was significantly prolonged after UF, which might reflect the concentrated anticoagulant (Acid Citrate Dextrose-A) in FFP. In contrast, NATEM-CT was significantly shortened with decreased PT, possibly due to TF generation during UF 25 . This discrepancy may be attributed to the sensitivity of viscoelastic tests to the initial activation of the coagulation cascade 26 . Because thrombin generation is amplified by increasing TF concentrations, the EXTEM test is reproducible, and results were comparable among patients 27,28 . In this study, the variability of NATEM-CT also decreased when TF was added in the EXTEM assay. However, NATEM analysis pronounced the changes in CFT and α-angle during UF, in agreement with the other coagulation assays. This suggests that NATEM analysis could provide a more subtle depiction of hemostatic status. Regardless of ABO blood group, the labile factors FVIII and VWF, known as acute-phase reactants 29 , were significantly higher in cryoprecipitate than in concentrated FFP. In addition, group O donors had significantly lower FVIII and VWF levels in FFP than group AB donors, findings consistent with other studies 30,31 . Theoretically, the ABO antigens may alter the rate of VWF synthesis, secretion 31 , or clearance 31 , accompanied by changes in FVIII levels. The levels of FVIII and VWF:RCo were significantly different among ABO blood groups after UF but were the same in cryoprecipitate. However, blood type was not related to fibrinogen level or fibrin polymerization.
The difference in hemostatic potency and plasma properties between concentrated FFP and cryoprecipitate can be explained by how fibrinogen and coagulation factors are separated from FFP. Fibrinogen and other coagulation factors in cryoprecipitate are partially lost to the remaining plasma, called cryosupernatant 32 , and through activation or denaturation caused by the freezing temperature, increased pH 33 , and the technique itself 34 . The UF technique, by contrast, removes water, electrolytes, and small molecules across a semipermeable membrane by hydrostatic pressure 35 . Theoretically, the effect of filtration or adsorption 35 on the ultrafilter membrane depends on the molecular weight of each coagulation factor. The hemofilter used in this study has a cut-off point of 10 kDa, and substances with a molecular weight of less than 50 kDa can be removed by UF. However, larger molecular components (> 300 kDa), such as VWF, FVIII, FXIII, and fibrinogen 36 , were also not completely preserved. The direct flow in UF moves perpendicular to the membrane, resulting in fouling and concentration polarization 37,38 , which reduce the driving force across the membrane, degrade the selectivity of separation, and extend filtration time, although the average processing time was 32 min.
This study has several limitations. First, it was based on in vitro experiments conducted under static conditions. Second, more than 20 mL of aliquots were required after each removal of predefined plasma water, and pre-washing is necessary before manual measurement of plasma density; more fibrinogen could therefore otherwise have been extracted from FFP after UF. Finally, the extraction rates of fibrinogen and coagulation factors were specific to UF methods using a polysulfone-based membrane run by an ultrafilter pump. Many polysulfone-based membranes are combined with polyvinylpyrrolidone (PVP) to avoid protein adsorption, as described by the Vroman effect 39 . The state of PVP on the inner surface of the membrane affects the prevention of protein adsorption onto membrane surfaces 36,40 . Consequently, our findings may not be generalizable to other types of dialyzers. In conclusion, UF reduced the total volume and increased the fibrinogen concentration 2.7 times within 32 min but did not reach the levels of hemostatic factors found in cryoprecipitate. Concentrated FFP could restore FV, FIX, and FXIII as well as antithrombin and provide additional plasma proteins. Blood type was not related to hemostatic potency, despite the different levels of FVIII and VWF among ABO blood groups. Additional clinical studies of concentrated FFP may further define its potential application for managing bleeding in cardiac surgical patients when cryoprecipitate is not readily available. Finally, this in vitro study offers insights into UF, but its potential effects need to be further investigated in clinical studies.
Outline of the study
We used ABO-specific FFP-LR480 (480 mL) supplied by the Japan Red Cross because plasma levels of VWF and FVIII differ among ABO blood groups. Five units of FFP from each blood type were used to make the UF concentrate, and an equal number of FFP units were used to produce cryoprecipitate. The final volume of concentrated FFP after 250 mL removal was comparable to that of cryoprecipitate. The validation study was performed on samples taken after removing 50, 100, 200, and 250 mL of fluid from FFP by (1) evaluating changes in prothrombin time (PT), activated partial thromboplastin time (APTT), fibrinogen, antithrombin activity, coagulation factors, VWF:antigen (Ag) and activity, and viscoelastic properties, as well as total protein, albumin, density, and osmotic pressure; and (2) comparing the coagulation parameters, plasma proteins, and plasma properties, such as density and osmotic pressure, between cryoprecipitate at the time of thawing and the concentrated FFP after 250 mL fluid removal. Our Institutional Review Board (No. 5433) approved this study, and informed consent was obtained via an opt-out function on the website.
Preparation of concentrated FFP by ultrafiltration
FFPs were thawed at 37 °C and circulated through a dialyzer with a polysulfone-based membrane (NV-10U: Toray, Tokyo, Japan; inner diameter 200 μm, wall thickness 40 μm, gamma-ray sterilization, ultrafiltration rate 43 mL/h/mmHg), a hemofilter similar to the type used during CPB, to remove fluid through a convective process involving filtration across membranes (Fig. 1). The ultrafilter pump ran at a flow rate of 300 mL/min for ultrafiltrate removal with a hollow fiber, which is equivalent to the UF flow rate in adults during CPB. The amount of plasma water extracted was measured, and the aliquot from each concentrated FFP was used for the measurements.
Preparation of cryoprecipitate
Cryoprecipitate was prepared from single-donor FFP that was thawed at 1-6 °C for 12-16 h and then subjected to a hard spin at 4500g for 10 min of centrifugation 34,41 . The supernatant plasma was removed, leaving an insoluble precipitate and a reduced plasma volume. The residual material was resuspended, refrozen within 1 h of thawing, and stored in a blast freezer at − 18 °C. Cryoprecipitate samples were collected for laboratory and viscoelastic measurements after thawing in batches at 30-37 °C.
Laboratory measurements
While total protein and albumin were measured using a Labospect 008α analyzer (Hitachi, Tokyo, Japan), coagulation profiles, including PT, APTT, PT-international normalized ratio, fibrinogen level (Clauss method), and antithrombin activity, were analyzed in the central hematological laboratory using an XN-3000 (Sysmex Co., Kobe, Japan), according to the institutional protocol. The activities of FV, FVIII, and FIX and the activity of FXIII were determined with a CS-5100 (Sysmex, Kobe, Japan) using a one-stage clotting assay with individual factor-deficient plasma (Siemens Healthineers, Marburg, Germany) and a JCA-BM8020 (JEOL, Tokyo, Japan) using a synthetic substrate method with Berichrom F XIII (Sysmex, Kobe, Japan), respectively. While VWF:Ag was tested on an ACL-TOP 700 (Werfen, Barcelona, Spain) using a latex immunoturbidimetric assay (HemosIL von Willebrand Factor Antigen; Instrumentation Laboratory Company), VWF activity was assessed as VWF:ristocetin cofactor activity (VWF:RCo) on a CS-5100 (Sysmex, Kobe, Japan) using aggregometry with lyophilized fixed platelets and ristocetin (BC von Willebrand reagent; Siemens Healthcare Diagnostics). Plasma density was measured with a Density/Specific Gravity Meter (DA-130N: Kyoto Electronics Manufacturing, Kyoto, Japan), which detects oscillation specific to a substance in proportion to its mass. Osmotic pressure was determined using the freezing-point depression method with an Osmo Station (Arkray, Kyoto, Japan).
Sample size calculation
We conducted a post hoc power analysis using G*Power 3.1 based on the result of a study that showed a significant difference in MCF values between 20 units of cryoprecipitate and 20 units of concentrated FFP using an unpaired t-test.
Clot formation was induced by adding 20 μL of 0.2 M calcium chloride solution (star-TEM 20 reagent, Tem Innovations GmbH, Munich, Germany). The reagent was added to the cup and mixed adequately with 300 μL of plasma sample. We performed thromboelastometry without additional activators (NATEM) as well as recombinant tissue factor (TF)- and phospholipid-activated ROTEM (EXTEM). The following ROTEM parameters were analyzed: the clotting time (CT [s]), alpha angle (angle of the tangent at a 2-mm amplitude [°]), the clot formation time (CFT [s]), and the maximum clot firmness (MCF [mm]).
Statistical analyses
Data were tested for normal distribution using the Shapiro-Wilk test. Either repeated-measures one-way analysis of variance or the Friedman test was performed to detect changes in coagulation profiles, plasma proteins, and ROTEM values between FFP and each hemoconcentration step. The various parameters obtained from cryoprecipitate and concentrated FFP were compared using the two-tailed Student's t-test or the Mann-Whitney U test, depending on the underlying distribution. The differences in FVIII, VWF:Ag, and VWF:RCo in FFP, cryoprecipitate, and concentrated FFP from each blood type were assessed by analysis of variance with a post hoc Tukey test. The criterion for rejection of the null hypothesis was p < 0.05. All statistical analyses, except the statistical power analyses, were performed using the Statistical Package for the Social Sciences software (version 11.0; IBM, Chicago, IL, USA).
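For readers who want to trace the decision logic of this workflow, the sketch below chains the same tests in Python with SciPy. The data are synthetic placeholders (20 values per arm, mirroring the n = 20 units), and this is not the authors' analysis code:

```python
# Minimal sketch of the statistical workflow described above, using
# SciPy on synthetic placeholder data. Group means/SDs are invented.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
mcf_concentrated = rng.normal(36.5, 4.0, 20)   # placeholder EXTEM-MCF, mm
mcf_cryo = rng.normal(70.0, 8.0, 20)           # placeholder EXTEM-MCF, mm

# 1. Test each group for normality (Shapiro-Wilk).
normal = all(stats.shapiro(x).pvalue > 0.05
             for x in (mcf_concentrated, mcf_cryo))

# 2. Choose the two-sample test accordingly: unpaired t-test if both
#    groups look normal, otherwise the Mann-Whitney U test.
if normal:
    result = stats.ttest_ind(mcf_concentrated, mcf_cryo)
else:
    result = stats.mannwhitneyu(mcf_concentrated, mcf_cryo)
print(result)

# 3. Repeated measurements across the five hemoconcentration steps:
#    Friedman test when normality fails (paired samples per unit).
steps = [rng.normal(21 + 4 * i, 3.0, 20) for i in range(5)]  # placeholder
print(stats.friedmanchisquare(*steps))
```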
Figure 1.
Schematic representation of the ultrafiltration circuit for concentrating fibrinogen and other coagulation factors in FFP using an ultrafilter pump. FFP was circulated through a dialyzer with a polysulfone-based membrane to remove plasma fluid via a convective process involving filtration across membranes. The ultrafilter pump ran at a flow rate of 300 mL/min to remove plasma water using a hollow fiber.
Table 1.
Trends of activity of clotting factors and coagulation parameters, plasma proteins, and plasma properties from FFP to each hemofiltration step. Values are expressed as mean ± SD or median (range). Differences compared with baseline (*) are shown if results from two-way analysis of variance for repeated measures are significant (p < 0.05). FFP fresh frozen plasma, VWF:RCo von Willebrand factor ristocetin cofactor activity, VWF:Ag von Willebrand factor antigen, PT-INR prothrombin time-international normalized ratio, PT prothrombin time, APTT activated partial thromboplastin time, EXTEM extrinsic test, CT clotting time, CFT clot formation time, MCF maximum clot firmness, NATEM non-activated thromboelastometry, NA not assessed.
Table 2.
Activity of clotting factors and coagulation parameters, plasma proteins, and plasma properties in cryoprecipitate and ultrafiltered FFP units. Values are expressed as mean ± SD. VWF:RCo von Willebrand factor ristocetin cofactor activity, VWF:Ag von Willebrand factor antigen, PT-INR prothrombin time-international normalized ratio, PT prothrombin time, APTT activated partial thromboplastin time, EXTEM extrinsic test, CT clotting time, CFT clot formation time, MCF maximum clot firmness, NATEM non-activated thromboelastometry.
Table 3.
Subset analysis of the levels of factor VIII, VWF:Ag, and VWF:RCo in FFP, ultrafiltration units, and cryoprecipitate from donors of each blood type. Values are expressed as mean ± SD. Differences compared with group O donors (*) are shown if results from two-way analysis of variance for repeated measures are significant (p < 0.05). FFP fresh frozen plasma, VWF:RCo von Willebrand factor ristocetin cofactor activity, VWF:Ag von Willebrand factor antigen, PT-INR prothrombin time-international normalized ratio, PT prothrombin time, APTT activated partial thromboplastin time, EXTEM extrinsic test, CT clotting time, CFT clot formation time, MCF maximum clot firmness, NATEM non-activated thromboelastometry, NA not assessed.
Effect of Chemical Bath Deposition Variables on the Properties of Zinc Sulfide Thin Films: A Review
Zinc sulfide (ZnS) thin films prepared using the chemical bath deposition (CBD) method have demonstrated great viability in various applications, encompassing photonics, field emission devices, sensors, electroluminescence devices, and optoelectronic devices, and they are crucial as buffer layers in solar cells. These semiconducting thin films are popular among researchers for industrial and research applications. CBD appears attractive due to its simplicity, cost-effectiveness, low energy consumption, low-temperature compatibility, and superior uniformity for large-area deposition. However, numerous parameters influence the CBD mechanism and the quality of the resulting thin films. This study offers a comprehensive review of the impact of the various parameters that can affect the properties of ZnS films grown by CBD. This paper provides an extensive review of the film growth and the structural and optical properties of ZnS thin films as influenced by various parameters, including complexing agents, the concentration ratio of the reactants, stirring speed, humidity, deposition temperature, deposition time, pH value, precursor types, and annealing temperature and environment. Various studies are screened for the key influences of the CBD parameters on the quality of the resulting films. This work aims to motivate researchers to provide additional insight into the preparation of ZnS thin films using CBD and to optimize this deposition method to its fullest potential.
Introduction
Zinc sulfide (ZnS) is a metal chalcogenide belonging to the II-VI compound semiconductors, which are gaining widespread interest due to their broad range of applications [1][2][3][4]. ZnS is composed of metal and sulfur atoms [5]. Nanostructured metal sulfides provide crystal chemists with a rich subject for study because of their diverse structural characteristics [6]. They are proving to be highly promising materials for the production of a wide range of devices serving a wide range of purposes [7,8]. ZnS has n-type conductivity [3] with a semiconductor band gap of 3.54 eV for cubic zinc blende (ZB) and 3.91 eV for hexagonal wurtzite (WZ) [9][10][11][12]. These values imply that ZnS has a wide band gap [13]. The ZnS optical band gap is wider than that of cadmium sulfide (CdS), allowing better short-wavelength visible transmittance [14]. Among its crystalline structures, ZnS adopts the cubic phase (zinc blende, sphalerite) at room temperature, whereas at higher temperatures it adopts the hexagonal phase (wurtzite) [14][15][16]; ZnS thus has two natural phases, cubic and hexagonal [17,18]. At ambient temperature, the exciton binding energy of ZnS (38 meV) is greater than the thermal energy (25 meV), allowing excitonic emission [16]. Moreover, ZnS exhibits other benefits, being non-toxic, earth-abundant, and inexpensive, with a refractive index (n) of 2.35 at room temperature [11,15,19]. Researchers have also focused on synthesizing ZnS doped with various elements, such as Ni, Fe, Mn, and Cu; the resulting modifications of the energy band gap [18,47,[51][52][53][54] and optical transmittance [18,47,[51][52][53][54] in the ultraviolet-visible region were reported in previous publications. Varying these parameters affects the final chemical and physical properties of the deposited thin films, so discussing the CBD aspects of ZnS thin films is crucial. With a reliable CBD method to synthesize ZnS thin films, the combination of compounds with superior electrical properties has resulted in new composite materials that have attracted significant technological interest in recent years. The addition of a second phase can considerably enhance a composite material's electrical properties [55,56]. ZnS doped with a small percentage of dopants also changes its structural, optical, electrical, and magnetic properties for various purposes [57].
In this work, we give a complete review of the various characteristics of ZnS thin films produced using the CBD method. The deposition of ZnS using CBD is more complex than that of cadmium sulfide (CdS) [40,58] because ZnS has a low solubility product (K_sp) of 10⁻²⁵ [14]; it can therefore precipitate at low Zn²⁺ and S²⁻ ion concentrations [58]. In particular, the conditions under which ZnS and/or ZnO can deposit simultaneously are much broader [40]. We begin with a brief overview of the investigation into ZnS properties and current research trends in CBD ZnS thin film development. Next, we review how the CBD parameters for ZnS thin films are controlled, including the effects of the complexing agent, the concentration ratio [Zn]/[S], stirring, humidity, deposition temperature, deposition time, pH, solution, and Zn precursor. The observed differences in the structural, morphological, electronic, optical, and chemical-bonding results of ZnS thin films formed by CBD are reviewed. Findings on ion-doped ZnS thin films, which have attracted considerable interest for enhancing the optical, electrical, and structural properties of ZnS thin films, are also reviewed. After examining the influence of these parameters on the CBD of ZnS thin films, their usefulness, limits, and future possibilities are explored. The objective is to provide suggestions for optimizing the CBD process parameters to produce higher-quality ZnS thin films for diverse applications.
Current Trends in Research and Properties of ZnS Thin Films Utilizing the CBD
This section summarizes research trends observed between 1985 and 2023 to give researchers an overview of current research activity and potential future study directions for ZnS thin films prepared by CBD. The physical, chemical, mechanical, electrical, thermal, and optical properties of the films are addressed, revealing that ZnS films have the potential to be employed in a variety of applications.
Current Research Trends
The annual trends in publications on ZnS thin films prepared by CBD from 1985 to 2023 are shown in Figure 1. The data were extracted from the Scopus database using the keywords "zinc sulfide thin films" and "chemical bath deposition". The database search in Scopus was conducted in the second week of November 2022. As Figure 1A shows, there were 446 publications in total from 1985 to 2010 and 1560 publications from 2011 to 2023. As history unfolded, in 1988 Lokhande et al. [59] proposed the use of CBD for the electrodeposition of ZnS thin films from an acidic bath. In 1992, ENSCP and IPE began developing a CBD ZnS-based buffer layer for CIGS solar cells with 9-10% efficiency [60] (see Figure 1B). In 1996, Showa Shell developed a CIGS cell of 12.8% efficiency with ZnS as its buffer layer. In 1999, Nakada et al. [61] (AGU group) achieved the most efficient CIGS thin film solar cell at the time, with a ZnS buffer layer and 16.9% active-area efficiency. Afterward, two distinct development routes emerged. The Showa Shell technique achieved 14-15% efficiency (see arrow 1 in Figure 1B), while the AGU group's approach yields 17-19% efficiency (see arrow 2 in Figure 1B) [49]. ASC continued to work on the ZnS-based solar cell buffer and finally achieved 16% efficiency (see arrow 3 in Figure 1B) [49]. Nakada and Mizutani reported in 2002 [48] that the CBD-ZnS buffer layer on CIGS increased efficiency to 18.1%. Hariskos et al. [49] recorded an efficiency of 18.6% for CBD-prepared ZnS/CIGS solar cells. This heralded the start of a new era for CBD exploration of ZnS thin films, which helped open new opportunities. As can be seen in Figure 1A, research on CBD ZnS thin films increased by 77% between 2011 and 2023, a timeframe spanning eleven years. Additionally, the number of papers published in the last eleven years is more than three times the total number of articles published from 1985 to 2010. This demonstrates that CBD ZnS thin film development is gaining momentum and becoming more widespread.
Properties of Zinc Sulfide Thin Film
ZnS is a noteworthy semiconductor compound of the II-VI group [42,43]. ZnS is chemically and technologically more stable than alternative chalcogenides (such as ZnSe), making it a good host material [9]. ZnS is typically found naturally in two crystalline structures: cubic or zinc blende (ZB) and hexagonal or wurtzite (WZ) [14][15][16]. Both Zn and S have tetrahedral coordination geometry [9]. ZnS exhibits a large band gap at room temperature (300 K) and has attracted considerable interest [44]. The hexagonal phase is the high-temperature polymorph, while the cubic phase forms at low temperatures [43]. Studies [17,62,63] have reported a range of temperatures for the crystal structure transition between the ZB and WZ phases. For the ZB structure, a = b = c = 5.41 Å with Z = 4, whereas for the WZ phase, a = b = 3.82 Å and c = 6.26 Å with Z = 2 [43]. The minor change in atomic arrangement affects the physical properties of both phases [64]. The reported band gap energies (E_g) for the ZB and WZ structures are 3.54 eV and 3.91 eV, respectively [9]. ZnS has a greater band gap than ZnO (3.4 eV) [9]. The larger E_g enables the layer to transmit more photon energy and exhibit increased light absorption [13]. It is ideal for visible-blind ultraviolet (UV) light-based electronics, including photodetectors and sensors [9]. ZnS has a high melting point (1800-1900 °C), which allows researchers to work at high temperatures [43]. Table 1 lists the properties of ZnS. Additionally, nano-ZnS has extraordinary chemical and physical properties, including a high surface-to-volume ratio, quantum size effects, surface and volume effects, macroscopic thermal annealing, enhanced optical absorption, a low melting point, and catalysis [9].
The ZnS nanostructure can be used in innovative solar cells, such as a buffer layer for Cu(In,Ga)Se₂ (CIGS)-based thin film solar cells [49], quantum dot-sensitized solar cells (QDSCs), dye-sensitized solar cells (DSSCs), and organic-inorganic hybrid solar cells [65]. In solar cell applications, the ZnS-based buffer layer is a promising replacement for CBD cadmium sulfide (CdS). Liu and Mao [50] found that CBD-ZnS films have higher transmission in the short-wavelength range and a higher band gap (E_g = 3.51 eV) than CBD-CdS (E_g = 2.41 eV). The ZnS buffer layer is also considered harmless, efficient, and cheap. ZnS has been grown as a CIGS buffer layer since 1992 (Figure 1B). Four distinct deposition processes have been used: chemical bath deposition (CBD), atomic layer deposition (ALD), physical vapor deposition (PVD), and ion layer gas reaction (ILGAR). To date, CBD has been the most successful approach, as described by arrows 1, 2, and 3 in Figure 1B [49].
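A quick back-of-envelope illustration of why the wider ZnS gap passes more short-wavelength light: the absorption edge can be estimated as λ ≈ 1240/E_g (nm per eV). The sketch below is ours, not from [50], and simply applies this rule of thumb to the band gaps quoted above:

```python
# Absorption-edge estimate from the band gap: lambda (nm) ~= 1240 / Eg (eV).
def absorption_edge_nm(eg_ev: float) -> float:
    """Approximate absorption-edge wavelength for a direct-gap material."""
    return 1239.84 / eg_ev  # hc expressed in eV*nm

for name, eg in [("ZnS (zinc blende)", 3.54),
                 ("ZnS (wurtzite)", 3.91),
                 ("CdS", 2.41)]:
    print(f"{name}: Eg = {eg} eV -> edge ~ {absorption_edge_nm(eg):.0f} nm")

# ZnS edges fall in the UV (~350 nm and ~317 nm), while CdS absorbs into
# the visible (~514 nm) - consistent with ZnS buffer layers transmitting
# more short-wavelength light than CdS.
```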
Synthesis of ZnS Thin Films Using Chemical Bath Deposition
Chemical bath deposition (CBD) is an established method for the deposition of thin films and has been employed as a synthesis method for over 140 years [66]. CBD is a chemical process in which the deposition is controlled by chemical reactions [43]. CBD has several advantages: low cost, simplicity, uniformity, ease of substrate choice, the possibility of multi-film runs, and controlled growth conditions. It is therefore considered to have greater commercial potential than sputtering or thermal evaporation [67].
Basic Experimental Setup for CBD
CBD produces durable, adherent, homogeneous, and rigid films with good reproducibility using a relatively straightforward procedure [43,68]. Figure 2 shows that these methods require only a solution container and a substrate for deposition [69]. The complete deposition system for an open system was placed in an enclosed system with a relative humidity (RH) of 60%, 70%, or 80% [69]. A closed and isolating apparatus is preferred to prevent contamination during sample preparation [43]. Immersing the dipping solution and substrate in a water bath is necessary [43]. A magnetic stirrer is used to blend the chemicals in the solutions properly, and stirring together with a thermostatic bath is used to maintain a constant temperature [68]. CBD does not require extremely expensive chemicals processed to a high level of purity, making it a more affordable process. CBD deposition varies with material composition. During CBD, researchers may deposit films safely using a fume hood and nontoxic complexing agents.
Figure 2.
Experimental set-up for CBD with (1) magnetic stirrer, (2) container bath, (3) substrate, (4) beaker, (5) thermometer, (6) clamp stand, (7) relative humidity (RH) controller, and (8) Substrates can be coated in a single cycle by immersing them in a solution comprising the chalcogenide source, the metal ion, additional acid or base (to adjust the pH of the solution), and a chelating agent [40]. CBD can be used to deposit ZnS solution on a diverse range of substrates, such as silicon [70,71], glass [50], gallium arsenide (GaAs) [70], indium tin oxide (ITO) [72], tin oxide SnO [73], and soda lime glass (SLG) [18,69,74]. Between ZnS, GaAs, and silicon, there is a lattice mismatch of 4.9% and 0.9%, respectively. The glass substrate should be carefully cleaned (washed, scrubbed, degreased, and rinsed) using procedures described in numerous studies [41,[75][76][77]. The cleaning technique etches the substrate surface before the deposition to produce nucleation sites, which enhances thin film adherence [41]. CBD chemical bath solution is produced in sufficient volumes to deposit thin coatings on substrates. The reaction solution is thoroughly mixed, and the substrate is vertically clamped into the solution in a beaker covered with synthetic foam to prevent dust or unwanted particles from infiltrating the solution [78]. The solution was obtained in a beaker and left for the appropriate dip periods at deposition temperatures [41].
Basic Principle of CBD
The CBD approach to thin film synthesis is easy and rapid, with deposition at room temperature and normal air pressure, which is beneficial from an ecological and economic standpoint [75]. The CBD technique is used to produce ZnS thin films on glass surfaces [39]. It relies on decomposing a zinc salt, thiourea, and a complexing agent that permits the formation of soluble Zn²⁺ and S²⁻ species in the solution [39,79]. Zn²⁺ and S²⁻ ions are released steadily into the solution during deposition, condensing on appropriately mounted substrates to yield ZnS thin films [80].
The CBD solubility principle and ionic product are discussed here, since the CBD methodology depends on the relative solubility of the product [81]. It is critical to understand the CBD mechanism in terms of the solubility product (K_sp). Consider an extremely sparingly soluble salt (ZnS) in equilibrium with its saturated aqueous solution (when placed in water):

ZnS(s) ⇌ Zn²⁺ + S²⁻

Using the law of mass action,

K = [Zn²⁺][S²⁻] / [ZnS(s)],

where K is the equilibrium constant and [Zn²⁺], [S²⁻], and [ZnS(s)] are the concentrations of Zn²⁺, S²⁻, and ZnS(s) in the solution, respectively. The concentration of the pure solid ZnS(s) is itself a constant, say K′. Given that K and K′ are constants, their product, labeled K_sp, is also a constant, which yields:

K_sp = [Zn²⁺][S²⁻]

The solubility product (SP) is K_sp, and the ionic product (IP) is [Zn²⁺][S²⁻]. The K_sp value for ZnS is 10⁻²⁵ [43,49]. The IP equals the SP when the solution is saturated. Supersaturation, precipitation, and ion nucleation on the substrate and in the solution emerge whenever the IP exceeds the SP [43,81]. The solubility product is affected by three primary factors: temperature, solvent, and particle size [81]. Solubility is the spontaneous interaction between two or more compounds to produce a uniform molecular dispersion; when the solute and solvent are in equilibrium, the solution is considered saturated. How the equilibrium between a precipitate and its ions in solution varies with temperature depends on whether the heat of solution is endothermic or exothermic [81]. Endothermic and exothermic peaks can be determined using a differential scanning calorimetry (DSC) curve [82]. Eid et al. [83] previously reported the DSC curve for CBD-ZnS powder: (a) the initial peak at 100 °C corresponds to an endothermic reaction, (b) the second peak at 350 °C corresponds to an exothermic reaction due to crystallization, and (c) when the temperature reaches 740 °C, the sample undergoes breakdown or dissociation. Regarding the solvent factor, adding alcohol or another water-miscible solvent with a low dielectric constant lowers the solubility of a relatively insoluble material in water [50]. As for the particle size factor, as particle size decreases, solubility increases [81].
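The precipitation criterion above can be stated compactly in code. The following minimal Python sketch (with illustrative ion concentrations and our own helper names) checks whether a bath is supersaturated by comparing the ionic product with K_sp:

```python
# Sketch of the supersaturation criterion: precipitation (and hence
# film growth) is expected only while IP = [Zn2+][S2-] exceeds Ksp.
KSP_ZNS = 1e-25  # solubility product of ZnS quoted in the text

def deposition_regime(zn_molar: float, s_molar: float) -> str:
    ip = zn_molar * s_molar  # ionic product
    if ip > KSP_ZNS:
        return "supersaturated -> ZnS can precipitate / deposit"
    if ip == KSP_ZNS:
        return "saturated -> equilibrium"
    return "undersaturated -> no deposition"

# Even nanomolar free-ion levels exceed Ksp, which is why complexing
# agents are needed to throttle the free Zn2+ concentration.
print(deposition_regime(1e-9, 1e-9))    # IP = 1e-18 >> 1e-25
print(deposition_regime(1e-13, 1e-13))  # IP = 1e-26 <  1e-25
```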
Reaction Mechanism for ZnS Deposition
Solids form in CBD from thermodynamically unstable, or "supersaturated", baths. Two reactions produce solid material: (i) homogeneous precipitation (within the bulk of the solution) and (ii) heterogeneous precipitation (at surfaces, i.e., on the substrate or on the reaction vessel wall) [84].
The reaction mechanism for ZnS deposition was discussed in [43,84] and is depicted in Figure 3. Three distinct mechanisms or models govern CBD-ZnS thin film deposition. In the first, the ion-by-ion process, ions condense at the interacting surface to produce the film via a heterogeneous reaction (see Figure 3A); the resulting film has small, compact, and adherent particles. Ion-by-ion growth requires heterogeneous nucleation and nucleus growth [85]. The second is cluster-by-cluster growth, in which surface absorption of colloidal particles pre-formed in solution produces particulate films (see Figure 3B). This route follows a homogeneous reaction in which particles from the solution accumulate on a substrate to produce a film [43]; the film has larger spherical particles with poor compaction and adhesion [85]. In the third, both processes (ion-by-ion filling on clusters) may occur and interact, resulting in films containing colloidal particles (see Figure 3C). Heterogeneous and homogeneous nucleation determine which process dominates [84]. When both mechanisms are involved, the result is a mixed mechanism [43], and the growth of ZnS films involves such a mixed process (see Figure 3D) [85]. In general (in aqueous solution), OH⁻ binds to the substrate; water-soluble [Zn(L)ₙ]²⁺ complex ions then react with OH⁻ to free Zn²⁺ and generate Zn(OH)₂ nuclei, around which thiourea is bound and hydrolyzed to create the ZnS film. For high-quality ZnS films, it is essential to restrict the cluster-by-cluster mechanism and enhance the ion-by-ion process. Furthermore, ZnS is expected to form cluster-by-cluster from solution, whereas CdS thin films form by ion-by-ion growth; CBD accordingly tends to generate powdery, poorly adhesive, and transparent ZnS films [47].
Based on their observations, Liu et al. [86] identified the ion-by-ion process as the predominant growth mechanism for ZnS thin films. This indicates that ZnS film growth involves heterogeneous nucleation on the substrate surface and ion-by-ion incorporation. The CBD procedure depends on bath temperature, the pH of the reaction solution, the concentrations of the reacting species, complexing agents, etc. [85,86]. High-quality (adherent) films are formed exclusively from supersaturated zinc hydroxy solutions, depending on the substrate.
Factors Affecting the Properties of CBD ZnS Thin Films
The CBD method discussed in this work is used to produce ZnS thin films on glass substrates. In CBD, numerous variables must be controlled, such as the complexing agent, Zn source, sulfur source, stirring speed, humidity, pH, bath temperature, deposition time, precursor types, and annealing (environment and temperature) [18]. These CBD parameters and how they affect different CBD-ZnS thin films are detailed in Table 2. CBD also depends on the purity of its constituents. All researchers who aim to produce quality work should be mindful of the factors depicted in Table 2, since they may have an impact on the significance of their findings. Experimental conditions greatly affect film characteristics [87].
Parameters of CBD That Affect the Properties of ZnS Thin Films
Depending on the literature, ZnS films have shown varying outcomes. Tec-Yam et al. [76] discussed the quality of ZnS films with respect to their morphologies and their optical and structural properties. Inconsistency in the properties of CBD ZnS films, such as thickness, transmittance, and band gap energy, is mostly attributable to the varied compositions and microstructures of the films resulting from the controlled CBD parameters (see Table 4). Optical measurements of ZnS thin films were typically taken over 300-800 nm [18,75,76,91,96,100,101,112], and the energy band gap was computed using the appropriate relations. The high transmittance and low reflection of ZnS films grown on glass substrates (see Table 4) indicate that they are an excellent candidate for antireflective applications. To provide a concise summary, Sections 3.4.1-3.4.9 discuss all the parameters and their potential effects on the properties of CBD-ZnS films. ZnS thin films can serve as buffer layers for Cu(In,Ga)Se₂ solar cells with optimized CBD parameters [58,76]. Complexing agents reported in the literature include, among others [95,104], tartaric acid (C₄H₆O₆) [95] and urea (CH₄N₂O) [90]. The pH of alkaline solutions was adjusted by adding ammonia (NH₄OH) [18,69,75,85,96,100] or potassium hydroxide (KOH) [76], and acidic solutions were adjusted with hydrochloric acid (HCl) [90]. The sulfur sources utilized included sodium sulfide (Na₂S) [94], thioacetamide (C₂H₅NS) [88,90], and thiourea (SC(NH₂)₂).
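The band gap extraction mentioned above is commonly done with the Tauc relation for a direct allowed transition, (αhν)² = A(hν − E_g). The sketch below uses synthetic data and an assumed 100 nm film thickness (not values from any cited study) to show the standard recover-and-extrapolate procedure:

```python
# Minimal Tauc-plot sketch: recover alpha from transmittance, then
# extrapolate the linear region of (alpha*hv)^2 to zero to estimate Eg.
import numpy as np

d_cm = 100e-7                    # assumed film thickness: 100 nm
hv = np.linspace(3.0, 4.2, 200)  # photon energy grid, eV
eg_true, A = 3.7, 1e10           # synthetic "true" gap and Tauc prefactor

# Forward model: (alpha*hv)^2 = A*(hv - Eg) above the absorption edge.
alpha = np.sqrt(A * np.clip(hv - eg_true, 0.0, None)) / hv  # cm^-1
T = np.exp(-alpha * d_cm)        # Beer-Lambert transmittance

# Inverse: alpha = -ln(T)/d, build (alpha*hv)^2, fit the linear region.
alpha_est = -np.log(np.clip(T, 1e-12, 1.0)) / d_cm
y = (alpha_est * hv) ** 2
mask = (y > 0.1 * y.max()) & (y < 0.8 * y.max())   # linear Tauc region
slope, intercept = np.polyfit(hv[mask], y[mask], 1)
print(f"estimated Eg ~ {-intercept / slope:.2f} eV")  # recovers ~3.70
```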
Influence of Complexing Agents
The CBD approach relies on the controlled precipitation of a deposit from a solution onto a substrate [105]. The complexing agent is a crucial deposition condition for CBD-synthesized ZnS thin films: it binds metallic ions to prevent homogeneous precipitation. The metal complex formed hydrolyzes slowly to release positive ions, and ZnS thin films are deposited when the Zn²⁺ and S²⁻ ion product exceeds the ZnS solubility. Deposition may be homogeneous or heterogeneous. The faster homogeneous process precipitates powdery ZnS particles in large quantities on the substrate; to suppress this homogeneous route, metal complexes must form. In the heterogeneous process, Zn²⁺ and S²⁻ ions are slowly released into the liquid and condense on the substrate [12]. The complexing agent is particularly beneficial in preventing precipitation on solid surfaces and powder formation in the bath solution.
Depositing ZnS by CBD is challenging due to its very low solubility product, K_sp = 10⁻²⁴·⁷ [75]. To generate homogeneous ZnS thin films, numerous researchers have consequently utilized complexing agents to modulate the Zn²⁺ ion content during deposition [52,75,85]. The complexing agent aids thin film formation [85] and affects the morphology of the ZnS layer [52]. Furthermore, a complexing agent in solution extends the lifetime of the deposition bath and improves film adhesion on glass substrates [19]. One or more complexing agents may be employed for CBD-ZnS thin films [52]. During thin film deposition, researchers utilize ammonia (NH₃) [39,113], hydrazine hydrate (N₂H₄·H₂O), disodium ethylenediaminetetraacetate, tartaric acid, nitrilotriacetic acid [114], etc., as the complexing agent [19]. Hydrazine hydrate (HH), triethanolamine (TEA), and trisodium citrate (TSC) form stable compounds with zinc ions, inhibiting zinc hydroxide and zinc oxide production [115].
Mushtaq et al. [93] reported the CBD of ZnS onto glass substrates with and without ammonia. No contaminant peaks were found for the films prepared with ammonia. Compared with the films grown without ammonia, the film grown with ammonia had a smooth surface with few pores. The addition of ammonia increased the transmittance of the ZnS films from 15.82% to 75.82%. ZnS films with ammonia exhibited smaller energy band gap values than those without; all films had band gaps between 4.15 and 4.56 eV.
Goudarzi et al. [88] used the microwave-assisted chemical bath deposition (MA-CBD) method to prepare ZnS films in a short time without an ammonia solution, with acetic acid as the complexing agent. XRD revealed a highly crystalline cubic structure, and FE-SEM showed homogeneous, compact, small particles in films without cracks or pinholes. Hydrazine is a common additive complexing agent for ZnS deposited via CBD. It increases uniformity, improves homogeneity, and improves the growth rate as well as the interfacial adherence of the ZnS thin films [50,75,80]. Hydrazine also functions as a bridge-forming ligand and enhances surface binding [39,40], and hydrazine complexes offer less steric hindrance to the sulfide ion due to their lower coordination number [40]. Complexing agents such as ammonia and hydrazine have been popular for preparing ZnS thin films by CBD [40,105,115]; Dona and Herrero [80], Wei et al. [85], and Liu et al. [86] demonstrated CBD complexing agents for ZnS thin films using NH₃ and N₂H₄.
However, hydrazine hydrate is extremely combustible, carcinogenic, and poisonous [80]. Several researchers have therefore studied ZnS thin film deposition using less harmful complexing agents, such as Na₃-citrate [18,75,116], EDTA [47], and tartaric acid [1], all developed to replace hydrazine hydrate with non-toxic chemicals. Shin et al. [112] evaluated the impact of complexing agents on the structural, chemical, compositional, morphological, optical, and growth properties of ZnS thin films. Zinc acetate, thiourea, Na₃-citrate, and Na₃-citrate/EDTA were utilized for the production of ZnS thin films on glass substrates. CBD-ZnS thin films were prepared (a) without a complexing agent, (b) with Na₃-citrate, and (c) with Na₃-citrate and EDTA (mixture), as shown in Figure 4. The ZnS thin film deposited without complexing agents was amorphous, whereas those formed with complexing agents had a broad ZnS peak at 28°. Complexing agents produced ZnS thin films that were thicker (over 100 nm) and smoother than those without a complexing agent (thickness below 50 nm). Films without complexing agents, with Na₃-citrate, and with combined EDTA + Na₃-citrate had 85%, 65%, and 70% transmittance and optical band gaps of 3.94 eV, 3.87 eV, and 3.84 eV, respectively. The film with Na₃-citrate and EDTA had a carrier concentration of 9.12 × 10¹⁰ and a mobility of 5.98 cm²/V·s.

Liu et al. [86] prepared ZnS on glass substrates using CBD and examined the impact of two complexing agents (trisodium citrate and hydrazine hydrate) and their concentrations on the structure, composition, morphology, optical properties, and growth mechanism of the ZnS thin films. The XRD study in Figure 5A shows that the ZnS thin films (for both trisodium citrate and hydrazine hydrate) have a zinc-blende structure with a pure cubic phase; the 2θ peaks were compared to the ZnS standard values from PDF card 65-9585. The ZnS thin film formed using 0.8 M trisodium citrate had large voids and spherical particles with sizes below 100 nm that were not very densely packed (see Figure 5B), whereas ZnS films produced with 0.49 M and 0.82 M hydrazine hydrate contained homogeneous, densely compacted 20-30 nm particles (see Figure 5C).

Regarding the growth mechanism of ZnS in the trisodium citrate-ammonia system, [Zn(citrate)ₙ]²⁺, [Zn(NH₃)₃]²⁺, and [Zn(NH₃)₄]²⁺ were formed, with stability constants of 10⁸·³, 10⁶·⁶, and 10⁸·⁹, respectively. In the hydrazine-ammonia system, [Zn(N₂H₄)₃]²⁺, [Zn(NH₃)₃]²⁺, and [Zn(NH₃)₄]²⁺ have stability constants of 10⁵·⁵, 10⁶·⁶, and 10⁸·⁹, respectively. If the stability constant is low, the complex ions release Zn²⁺ more promptly, and ZnS and Zn(OH)₂ colloid particles develop rapidly; if the stability constant is high, the complex ions release Zn²⁺ slowly, slowing ZnS deposition in solution or on the substrate. [Zn(citrate)ₙ]²⁺ in the trisodium citrate system is more stable than [Zn(N₂H₄)₃]²⁺ in the hydrazine hydrate system. The hydrazine hydrate system therefore generated more Zn(OH)₂ colloidal particles, which make the reaction solution milky and murky. Zn(OH)₂ nuclei on the substrate adsorb and hydrolyze thiourea to produce ZnS, and the nucleation density of Zn(OH)₂ on the substrate governed the formation of high-quality ZnS thin films [86]. Applying hydrazine hydrate leads to a larger nucleation density of Zn(OH)₂ nuclei on the substrate than trisodium citrate; in this scenario, ZnS thin films with small, fine particles formed.

Deepa et al. [115] reported the impact of various complexing agents (hydrazine hydrate (HH), trisodium citrate (TSC), and triethanolamine (TEA)) on the formation mechanism and characteristics of CBD-synthesized ZnS thin films. All films made with a distinct complexing agent were polycrystalline with different orientations: because the rate at which zinc ions are released depends on the stability of the complex, the deposition rate varies with the complexing agent, resulting in varied crystallite orientations. Figure 6A shows the AFM image for the film with TSC. The root mean square (R_q) surface roughness values for ZnS films with HH, TEA, and TSC were 71.06 nm, 87.39 nm, and 123.6 nm, respectively; films prepared using HH were smoother than those produced using TEA and TSC, and the film surfaces with TSC were the roughest. The transmittance of films using HH, TEA, and TSC was 87%, 60%, and 50%, respectively; the film with HH had the highest transmittance due to its smoother surface and larger grains. The HH, TEA, and TSC films had direct band gaps of 3.73 eV, 3.64 eV, and 3.57 eV, respectively (see Figure 6B). The ZnS film with TSC had the highest emission peak (at 422.5 nm), followed by TEA (at 424.5 nm) and HH (at 416.2 nm).

Thus, the literature has shown that complexing agents are crucial to ZnS thin film development and homogeneity in CBD [98,115]. Although ZnS thin films can be produced using the non-toxic complexing agents [86,112,115] described above, those films had rough morphology, a poor growth rate, and a discontinuous microstructure, whereas hydrazine hydrate produces a smoother film. A high stability constant of the zinc complexes in solution favors gradual zinc ion release, so diffusion and coalescence are accomplished before second-layer nucleation [115]. It is widely reported that ammonia and hydrazine are the most preferred complexing agents for ZnS films prepared by CBD [105].
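To make the role of the stability constant concrete, the following hedged sketch treats the complex as a lumped 1:1 species ZnL ⇌ Zn²⁺ + L with overall constant β, so that [Zn²⁺] ≈ [ZnL]/(β[L]) under ligand excess. The concentrations are illustrative; only the β values come from the text above:

```python
# How the complex stability constant throttles free Zn2+ available for
# ZnS formation. Lumped 1:1 model: beta = [ZnL] / ([Zn2+][L]), so
# [Zn2+] = [ZnL] / (beta * [L]) when the ligand is in excess.

def free_zn(total_complex_m: float, beta: float, free_ligand_m: float) -> float:
    return total_complex_m / (beta * free_ligand_m)

for ligand, beta in [("Zn-citrate", 10**8.3),
                     ("Zn-hydrazine", 10**5.5)]:
    zn = free_zn(total_complex_m=0.01, beta=beta, free_ligand_m=0.1)
    print(f"{ligand}: free [Zn2+] ~ {zn:.2e} M")

# The ~600x lower stability constant of the hydrazine complex releases
# Zn2+ correspondingly faster, matching the denser Zn(OH)2 nucleation
# reported for hydrazine hydrate baths.
```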
Influence of Concentration Ratio of the Reactants
The [Zn]/[S] reactant concentration ratio in the deposition fluid (e.g., zinc sulfate heptahydrate and thiourea) affected CBD production and the characteristics of ZnS thin films. The origin of the reactants affects the product's composition, which can influence the deposited layer's physical and chemical properties. By modifying the composition of the reactive solution, the competition between homogeneous and heterogeneous nucleation processes can be controlled to promote the formation of thin films [81]. Li et al. [51] and Liu et al. [18] varied SC(NH2)2 concentrations (0.03, 0.06, 0.09, 0.12, and 0.15 M) for CBD ZnS, giving ZnSO4/SC(NH2)2 ratios of 1:1, 1:2, 1:3, 1:4, and 1:5. As seen in Figure 7A, when ZnSO4/SC(NH2)2 was 1:2, the atomic % of Zn and S was nearly 50% each. The Zn atomic % was larger in the film if the ZnSO4/SC(NH2)2 ratio was more than 1:2, and reduced if the ratio was less than 1:2. The films with ZnSO4/SC(NH2)2 ratios of 1:1, 1:2, and 1:3 were smooth and compact; however, films with ratios of 1:4 and 1:5 had surface defects (see Figure 7B). In another study [85], the thiourea (SC(NH2)2) concentration did not affect ZnS thin film surface morphology, but the ammonia concentration did. Thin films produced with 1.2 M ammonia had cracks, whereas those deposited with 0.7 M ammonia were denser and smoother. These results can be explained in terms of ZnS deposition kinetics and the reaction mechanism. The [Zn(N2H4)3]2+ ions predominate at 0.1 M ammonia; colloidal particles formed through a homogeneous reaction in solution progressively adhered to the substrate, so ZnS film growth was cluster-by-cluster. The [Zn(NH3)4]2+ ions are significant for deposition at 0.7 and 1.2 M ammonia, in which ZnS films grown on the substrate ion-by-ion are denser and smoother. The transmittance was greater than 70% between 350 and 900 nm, and the optical band gap of the ZnS thin films varied between 3.76 and 3.78 eV. These results demonstrated that a high-quality ZnS thin film requires a specific ratio of reactant concentrations.
Influence of Stirring Speed
Typically, the CBD process stirred the ZnS solution to ensure the chemical components are uniformly distributed [106,117]. Stirring also limits colloidal particle and grain size by disrupting particle diffusion to the surface and the adsorption of particles on the substrate [43]. However, Dona and Herrero [80] found that ZnS film growth was independent of CBD solution stirring. Ke et al. [106] reported that a static reaction bath (unstirred condition) was beneficial for the growth of heterogeneous ZnS thin films and improved the structural and optical features compared to prior literature and stirred conditions. Figure 8 displays the ZnS thin films deposited for 2 and 2.5 h. Throughout the deposition, the reaction solutions were maintained at 50 °C, 70 °C, and 90 °C. Ke et al. [106] found that ZnS thin films deposited at 90 °C without stirring showed improved morphology. ZnS films deposited over 2 h without stirring had better crystallinity, and all films had band gap energies between 3.93 and 4.06 eV.
Hubert et al. [118] concurred that the stirring rate did not affect ZnS thin film growth for deposition times shorter than 30 min; after 30 min of deposition, however, stirring affected ZnS thin film formation. This contradicts the results of references [80,106]. Zhang et al. [117] observed that stirring improves cluster adsorption on ZnS films, and discovered that the stirring speed affected the thickness of the film but not its crystallization and optical properties. The films showed similar optical properties independent of stirring speed, exhibiting high transmittance (70-88%) in the visible spectrum and a ZnS band gap of around 3.63 eV. Onal and Altiokka [119] demonstrated that ZnS films obtained without stirring exhibited lower XRD peak intensity, lower thickness, and lower band gap values than films obtained with stirring; the films grown with stirring were denser and less transparent. Typically, metal ions are heavier than water molecules, so when the solution was stirred and mixed, centrifugal force drove metal ions toward the walls of the bath container and the surface of the glass substrate.
The literature discussing the effect of stirring on ZnS thin films is contradictory. Generally, stirring is advantageous because it forces particles to interact throughout the process and brings solvent components into contact with the solute [81]. With stirring, the solution obtained was more transparent [85]. As the rate of stirring increased, Onal and Altiokka [119] demonstrated that the ZnS thin film adhered more efficiently to the glass substrate. Several studies prepared ZnS thin films by CBD under continuous stirring at varying stirring speeds [58,76,85,105].
Influence of Humidity
In operational use, the CBD process can be carried out either in a closed reaction container (hermetic CBD system) [51,80,85] or an open reaction container (open CBD system) [114,120]. Hermetic CBD prevents evaporation and environmental interference by closing off the chemical bath and substrate [85]. The volume of the system is constrained to achieve thermal equilibrium, and the bath-substrate interface is scarcely disturbed [43]. In an open CBD system, ambient humidity interferes at the gas-liquid interface. To date, little systematic research has been conducted on the influence of humidity on CBD-ZnS thin films. Lin et al. [69] examined the influence of humidity on CBD-ZnS thin film synthesis. Experiments were conducted with hermetic CBD and open CBD under relative humidities of 60%, 70%, and 80%. In XRD, all films were amorphous for both open CBD and hermetic CBD. Film morphology was highly sensitive to relative humidity (see Figure 9a-c). Hermetic CBD produced a compact conformal thin layer with tightly packed microstructures (see Figure 9d), whereas open CBD films had many cracks and powdery regions. X-ray photoelectron spectra revealed that relative humidity increases the concentration of ZnO compounds in an open system. The hermetic CBD film had the highest average visible transmittance (77%) in the wavelength range of 400-800 nm, exceeding all other films, and exhibited improved transmission and morphology compared to the open CBD films. According to Shobana et al. [42], a closed and isolated apparatus (hermetic CBD) is preferable for producing a thin film free of impurities.
Influence of Deposition Temperature
Bath temperature affects the chemical reaction rate [81]. As the temperature rises, the complex dissociates and the kinetic energy of the particles increases, which enhances ion interaction and deposition at nucleation centers on the substrate. Generally, CBD precipitates a compound from solution onto a substrate at 303-353 K [69].
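As a rough illustration of this temperature sensitivity, the sketch below applies an Arrhenius rate law; the activation energy is an assumed placeholder, not a value reported for thiourea hydrolysis in the cited work.

```python
# Illustrative Arrhenius estimate of how bath temperature changes reaction rate.
# Ea is an assumed activation energy; it is NOT a value taken from the cited
# CBD studies, so the factors below are qualitative only.
import math

R = 8.314       # gas constant, J/(mol K)
Ea = 60e3       # J/mol, assumed

def rate_ratio(T1_C, T2_C):
    """Factor by which the rate increases going from T1 to T2 (Celsius)."""
    T1, T2 = T1_C + 273.15, T2_C + 273.15
    return math.exp(-Ea / R * (1 / T2 - 1 / T1))

print(f"60 -> 80 C: rate x{rate_ratio(60, 80):.1f}")   # ~x3.4
print(f"75 -> 95 C: rate x{rate_ratio(75, 95):.1f}")   # ~x3.1
```

Even with a modest assumed activation energy, a 20 °C increase roughly triples the reaction rate, consistent with the thicker films reported at higher bath temperatures below.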
Zhou et al. [101] prepared CBD ZnS thin films at different deposition temperatures (75 °C, 80 °C, 85 °C, 90 °C, and 95 °C) to study their properties. The films were annealed at 200 °C. As the deposition temperature increased, the ZnS thin films thickened and the average particle size increased; thicknesses were in the range of 73-200 nm. Above 600 nm, the optical transmittance of the films exceeded 75%. As the deposition temperature increased, the transmittance dropped: films deposited at lower temperature are thinner and more transparent, which might be attributed to the varied thicknesses and roughness. Gode et al. [54] prepared films at deposition temperatures of 60 °C, 70 °C, and 80 °C. Deposition at 60 °C and 70 °C produced amorphous films; the best diffraction peak was observed at a deposition temperature of 80 °C for a deposition time of 4.5 h. Liu et al. [18] varied the bath temperature for ZnS thin films through 75 °C, 80 °C, and 85 °C, and higher deposition temperatures enhanced the film thickness. Dona and Herrero [80] found that as the deposition temperature was raised from 70 to 80 °C, CBD-ZnS thin films thickened. Jawad and Alioy [121] likewise observed with the CBD approach that Cd1-xZnxS film thickness increased from 60 °C to 80 °C, because the hydrolysis of thiourea increased as the temperature increased.
Influence of Deposition Time
Deposition time is a significant factor affecting the crystal structure and optical characteristics of ZnS films deposited using CBD [43,54]. Liu et al. [100] investigated varied deposition times (20, 40, 60, and 80 min) on the properties of CBD-ZnS. The surface of a film formed at 80 °C was not smooth and dense even when the deposition time exceeded 60 min. The average transmittance decreased as the deposition time increased, because the average transmittance drops as the film's thickness increases. For example, at a constant bath temperature (80 °C), the transmittance decreased (85.2%, 80.3%, 76.8%, and 74.4%) and the thickness increased (98 nm, 135 nm, 177 nm, and 273 nm) as the deposition time increased (20 min, 40 min, 60 min, and 80 min). Luque et al. [91] studied deposition times of 30, 60, and 90 min for ZnS thin film performance. The ZnS growth on glass was most homogeneous for the material with 90 min of reaction, exhibiting strong transmission (80% in the spectral region of 300 to 800 nm), an electrical resistivity of 10^6 Ω cm, and an optical band gap of 3.62 eV.
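As a quick plausibility check on these numbers, the sketch below extracts an effective absorption coefficient from the thickness-transmittance pairs of Liu et al. [100], assuming a bare Beer-Lambert law T = exp(-alpha * d) and neglecting reflection and interference, which is a strong simplification.

```python
# Effective absorption coefficient from the thickness/transmittance pairs
# reported by Liu et al. [100], under a simple Beer-Lambert model
# (reflection and interference neglected).
import math

data = [(98e-7, 0.852), (135e-7, 0.803), (177e-7, 0.768), (273e-7, 0.744)]  # (cm, T)

for d, T in data:
    alpha = -math.log(T) / d
    print(f"d = {d * 1e7:.0f} nm, T = {T:.1%} -> alpha ~ {alpha:.2e} cm^-1")
# For the two thinnest films alpha comes out nearly constant (~1.6e4 cm^-1),
# but it drops for the thicker ones, so losses ignored here (reflection,
# surface scattering) evidently grow as the films thicken.
```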
Kumar et al. [89] reported varied deposition times of 60, 80, 100, and 120 min for CBD-prepared ZnS. XRD revealed that longer deposition times increase crystallinity and crystallite size. As the deposition time increased, the particle size increased (193-242 nm), the thickness increased (239-590 nm), the transmission of the films stayed over 70%, and the band gap values declined (3.68 eV-3.68 eV). XPS studies showed that the ZnS thin film formed at 120 min had two peaks centered at 1044.9 eV and 1021.8 eV, matched to Zn 2p1/2 and Zn 2p3/2; the peaks correspond to Zn2+. The evaluation of electrical performance parameters revealed an enhanced ideality factor in heterostructures manufactured with a deposition time of 60 min compared to the other films.
Gode et al. [54] demonstrated that deposition time significantly influences ZnS thin film crystallite and grain size, thickness, and optical properties, showing the impact of longer deposition times on the growth of ZnS films using CBD. When the deposition temperature reached 80 °C, the glass substrates were put into the solution. To evaluate the growth rate, deposition times of 3, 3.5, 4, and 4.5 h were monitored. The ZnS (0 0 8) peak grew narrower as the deposition time increased, indicating improved crystallinity. As the deposition time increased, the thin film grain size (40-82 nm) and thickness (403-934 nm) increased. The films' transmittance in the visible region was 66-87%. Energy-dispersive X-ray analysis (EDX) revealed zinc and sulfur in the films. The average atomic S/Zn ratio was computed to be 0.51, 0.56, 0.57, and 0.58 for deposition durations of 3, 3.5, 4, and 4.5 h, respectively. All films had metal-rich surfaces with ratios below the stoichiometric ratio (S/Zn = 1). The deviation in S/Zn ratios might be attributable to the greater oxygen atomic percentage; oxygen might come from the atmosphere or the bath solution. For the layer deposited for 4.5 h, the electrical conductivity is 4.67 × 10^-10 (Ω cm)^-1 and the resistivity is 2.14 × 10^9 Ω cm.
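A one-line check confirms that these two figures are mutually consistent, since conductivity is the reciprocal of resistivity (which is also why the conductivity unit above must be (Ω cm)^-1 rather than Ω cm):

```python
# Quick check that the reported conductivity and resistivity for the 4.5 h
# film (Gode et al. [54]) are reciprocals, as sigma = 1 / rho requires.
rho = 2.14e9            # ohm * cm (reported resistivity)
sigma = 1 / rho         # (ohm * cm)^-1
print(f"sigma = {sigma:.2e} (ohm cm)^-1")  # ~4.67e-10, matching the text
```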
Haddad et al. [122] demonstrated that deposition time influences the structural, morphological, transmission, and photoluminescence properties of CBD-ZnS thin films. Without stirring, deposition times were varied from 2 to 6 h. Increasing the deposition time increased the XRD peak intensity, indicating improved film crystal quality. In their case, after a few minutes of deposition, ZnO nuclei surrounded by S2- ions arise. With longer deposition times, more S2- ions surround the ZnO nuclei (weakening the Zn-O interaction), and O2- ions diffuse out from the inner surface, leading to ZnS via sulfidation. Deposition time raised the crystallite size from 2.6 to 10 nm and decreased the lattice parameter. As deposition proceeds, ZnS nucleation stops, and ZnS film growth is then related to an increase in average particle size rather than continuous primary-particle nucleation and deposition. Film thickness increased from 300 to 610 nm between 2 and 3 h and then fell to 530 nm for 4, 5, and 6 h. The films' transmittance was independent of deposition time but the absorption was not; visible-spectrum transmittance ranged from 60 to 83% between 400 and 800 nm. Photoluminescence data showed that deposition time affects ZnS thin film emissions: the 5 h film had the strongest UV-blue emission, whereas the 4 h film had the weakest.
Goudarzi et al. [47] reported that CBD ZnS film formation began approximately 15 min after reactant mixing and lasted approximately 7 h. At wavelengths exceeding 350 nm, the ZnS film with 0.5 h deposition (18 nm thick) transmitted more than 85%; the 100 nm-thick layers transmitted greater than 70% of visible light. Lower deposition times result in improved transmittance at shorter wavelengths, which increases the short-circuit current. Thus, the literature reveals that deposition time is a critical factor affecting the growth, structural, and optical properties of CBD ZnS; optimizing the buffer layer thickness is required for ZnS to improve solar cell performance [47].
Influence of pH Value
Reaction rate and deposition rate rely on supersaturation and the rate of MX formation (where M and X are the metal and O-/OH- ions, respectively) [81]. The deposition rate is proportional to the ratio of film thickness to deposition time [105]. To produce high-quality thin films, precursor solutions must contain hydroxyl ions (OH-) [81]. The production of a thin film relies on the reaction mixture's pH, and the pH is determined by the OH- ions; pH affects the equilibrium between complexing agents and water [81]. Lekiket and Aida [105] reported that the pH of the reaction fluid affected CBD-ZnS thin films and discovered that the deposition rate was pH-dependent, varying from 0.9 to 2 nm/s for pH levels between 9 and 11. As seen in Figure 10, the film formed at pH 10 had a reduced deposition rate due to the increased ZnS solubility at this pH; in a thermodynamic study, Hubert et al. [123] computed ZnS solubility in an ammonia solution and reported a maximum solubility at pH = 10. At pH values ranging from 9 to 10.66, an XRD peak indicated that the films were crystalline. The average transmittance stood between 75% and 80%, while the band gap energy ranged from 4.0 eV to 4.2 eV for all pH values. Selma and Alioy [121] reported that pH = 10 provides the optimal NH3 concentration for binding Zn2+ into its complex in the deposition bath. Nasr et al. [39] studied CBD zinc sulfide thin films at pH 10 and 11.5. At pH = 10, they obtained a ZnS coating with high crystallinity and transmission ranging from 20 to 46%; for the ZnS film at pH = 11.5, the transmission was higher, between 55 and 71%, but there was no discernible XRD diffraction peak. Both pH = 10 and pH = 11.5 resulted in a band gap of 3.78 eV for the ZnS film. Kang et al. [114] synthesized CBD ZnS thin films in an acidic solution with a pH range of 5.0 to 6.5, utilizing various pH levels of ethylenediamine tetra-acetic acid disodium salt (Na2EDTA). All films had a nanocrystalline structure, and the direct band gap (3.78 to 3.91 eV) varied with the solution pH. This shows that films of the same material formed in acidic and alkaline environments may exhibit distinct characteristics.
Tec-Yam et al. [76] reported on the CBD-ZnS system composed of Zn-OH-NH3-H2O components at different pH ranges. Figure 11 shows that only Zn2+ ions are present, with no conditions for ZnS film formation, at pH 5 (acidic solution). pH values ranging from 6 to 10 produced undesired intermediate complexes and precipitates, and no ZnS was observed. When the pH exceeds 10, [Zn(NH3)4]2+ is more likely to occur. [Zn(NH3)4]2+ and [Zn(NH3)3]2+ have stability constants of 10^8.9 and 10^6.6, respectively, and higher stability constants slow Zn2+ discharge from the complexes [11]. In line with the literature [11,39,76,85], [Zn(NH3)4]2+ is the most preferred ion for ZnS production. The CBD-ZnS thin film is generated as follows in the presence of NH3 as a complexing agent in the alkaline solution: [Zn(NH3)n]2+ → Zn2+ + nNH3; SC(NH2)2 + 2OH- → S2- + CN2H2 + 2H2O; Zn2+ + S2- → ZnS [11]. The ideal pH range for producing high-quality CBD-ZnS films was 10 to 12. These ZnS films were deposited at the optimal pH by varying the alkaline complexing agents. Through the introduction of NH3 as a complexing agent, the [Zn(NH3)4]2+ zinc complex was formed, hence Zn2+ ions were slowly released; the reduced deposition rate due to [Zn(NH3)4]2+ can improve ZnS film quality. In recent years, a few researchers [98,124-126] have developed and employed species distribution diagrams (SDDs) and solubility curves (SCs) to undertake physicochemical analysis of ZnS synthesis through the distinct species produced in the chemical solution as a function of pH. These diagrams may provide the researcher with guidance before developing ZnS thin films.
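A minimal version of such a species distribution diagram for the Zn-NH3 system can be sketched from the stability constants quoted above; the total ammonia concentration is an assumption, and hydroxide and lower ammine complexes are omitted, so the output is qualitative only.

```python
# Minimal species-distribution sketch for the Zn-NH3 system vs. pH, in the
# spirit of the SDDs cited above. Only [Zn(NH3)3]2+ (log beta = 6.6) and
# [Zn(NH3)4]2+ (log beta = 8.9) are included; hydroxide and lower ammine
# complexes are ignored, so this is qualitative only.
C_NH3_total = 0.5   # M, assumed total ammonia
pKa = 9.25          # NH4+ / NH3 acid-base pair

b3, b4 = 10 ** 6.6, 10 ** 8.9

for pH in range(5, 13):
    nh3 = C_NH3_total / (1 + 10 ** (pKa - pH))   # free NH3 from acid-base split
    denom = 1 + b3 * nh3 ** 3 + b4 * nh3 ** 4    # Zn2+ plus the two complexes
    f_zn = 1 / denom
    f_4 = b4 * nh3 ** 4 / denom
    print(f"pH {pH:2d}: free Zn2+ fraction {f_zn:.2e}, "
          f"[Zn(NH3)4]2+ fraction {f_4:.2f}")
```

The output reproduces the behavior in Figure 11: essentially all zinc is free Zn2+ at pH 5, while above pH 10 the [Zn(NH3)4]2+ complex dominates and buffers the free Zn2+ at a very low level.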
Influence of Precursor Type
Multiple investigations have confirmed that chemical factors, such as the Zn precursor, influence ZnS thin film growth and physical properties. Hong et al. [11] examined how zinc sulfate (ZnSO4), zinc acetate (Zn(CH3COO)2), and zinc chloride (ZnCl2) influenced the CBD ZnS thin film growth rate, structure, and optical characteristics. Utilizing ZnSO4, Zn(CH3COO)2, and ZnCl2, the film thickness was measured to be ∼90 nm, ∼60 nm, and ∼30 nm, respectively. The influence of the Zn source on film thickness was correlated with the complex-ion stability constants: the ZnSO4, Zn(CH3COO)2, and ZnCl2 precursors had stability constants of 0.70, 0.78, and 1.5, respectively. Due to the slow Zn2+ release from ZnSO4, with its lower stability constant, a thicker ZnS layer was formed. The XPS results imply that the binding energies of Zn 2p3/2 in ZnS films differ significantly depending on the Zn precursor, which results in the discrepancy between the Zn, S, and O constituents. Utilizing ZnSO4, Zn(CH3COO)2, and ZnCl2, the Eg values were 3.40, 3.49, and 3.44 eV, respectively. Although ZnS thin films from different precursors have comparable Eg values, the oxygen content in ZnS can influence Eg. Khatri and Patel [71] explored the influence of zinc precursors (ZnCl2, Zn(CH3COO)2, and ZnSO4) on ZnS thin film growth using CBD.
The XRD analysis revealed a hexagonal phase for all zinc precursors. The particle size decreased (27 nm, 25 nm, and 22 nm), the thickness increased (37 nm, 39 nm, and 41 nm), the transmittance at wavelengths >350 nm decreased (29%, 25%, and 10%), and the band gap increased (4.10 eV, 4.17 eV, and 4.25 eV) for thin films with the zinc precursors ZnCl2, Zn(CH3COO)2, and ZnSO4, respectively. ZnS thin film thickness was affected by the growth rate and the zinc precursor in solution during the growth process. All ZnS films showed a first-order Raman shift at 348 cm-1. Hall-effect measurements showed carrier concentrations of 10^15 to 10^17 cm-3, indicating n-type ZnS thin films; those produced with ZnCl2 had the highest carrier mobility. Both studies [11,71] imply that the zinc precursor affects the physical and growth qualities of CBD-ZnS.
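The crystallite sizes quoted from XRD throughout this section usually come from the Scherrer equation, D = Kλ/(β cos θ); the sketch below shows the calculation for the cubic ZnS (111) reflection, with an assumed peak width that is not taken from any specific study.

```python
# Scherrer estimate of crystallite size from an XRD peak, the usual route to
# the crystallite sizes quoted in this review. The peak width is an assumed
# illustrative value, not one reported for these films.
import math

K = 0.9                  # shape factor
lam = 1.5406e-10         # Cu K-alpha wavelength (m)
two_theta = 28.5         # degrees, ZnS (111) cubic reflection
fwhm_deg = 0.35          # assumed peak width (degrees in 2-theta)

theta = math.radians(two_theta / 2)
beta = math.radians(fwhm_deg)
D = K * lam / (beta * math.cos(theta))
print(f"Crystallite size ~ {D * 1e9:.0f} nm")   # ~23 nm for these inputs
```

For this assumed width the result (~23 nm) falls in the 22-27 nm range reported above, illustrating how sensitive the quoted sizes are to the measured peak broadening.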
Ernits et al. [74] demonstrated that zinc precursors also affect the physical and structural properties and the efficiency of ZnS(O,OH) buffer layers. According to Sinha et al. [43] and more recent research (Table 4), ZnSO4 serves as the preferable precursor material for producing ZnS films by CBD.
Influence of Annealing Temperature and Environment
All previously described parameters (Sections 3.4.1-3.4.8) were evaluated before or during the deposition process to determine whether they contributed to the improvement of film growth. After the film is deposited on the glass by CBD, annealing can be applied. Annealing improves the crystallinity, morphology, and optical characteristics (transmissivity) of the pre-deposited film [76,127]. Different films require heat treatment at varying temperatures (200-550 °C) and steps. Thermal energy removes loosely bound surface particles during air annealing, hence smoothing the films [43].
Zhou et al. [127] investigated the annealing effect on CBD ZnS thin films prepared on SLG. The ZnS films were prepared without annealing and with annealing in air for 1 h at 200 °C and 300 °C, respectively. They discovered that the annealing temperature affects the morphology and optical properties but not the crystalline structure; the films remained amorphous. Before annealing, the film's primary components were ZnS and ZnO. Without annealing, a certain quantity of Zn(OH)2 may form in the ZnS films, and after annealing the Zn(OH)2 degraded to ZnO. For the pre-annealing ZnS films, the grains were tiny, non-uniform, and poorly defined; after annealing, the surface became dense and flat, and the particles were well dispersed. The ZnS films' transmissivity exceeded 80%, with the films without annealing showing higher transmissivity than those annealed at 200 °C and 300 °C for 1 h.
Several studies [101,128,129] found that annealing changed the optical properties of ZnS films without affecting the crystalline structure (whether amorphous or crystalline as deposited). Oliva et al. [128] synthesized CBD-ZnS films and annealed them at 200 °C and 400 °C. According to their findings, the crystal structure of the as-grown and annealed samples did not change; XRD confirmed the ZnS films had a cubic, sphalerite-type structure. The band gap energy of the films was 3.70 eV and 3.45 eV for the as-grown and annealed films, respectively. Both the as-grown and post-annealing transmissivities of the films were observed to be between 70% and 80%. Oliva et al. [128] thus found that annealing ZnS films lowered the optical band gap energy and transmittance without affecting the crystal structure. Zhou et al. [101] examined CBD ZnS thin films before and after annealing in air and likewise discovered that annealing affects the morphology and optical properties but has no effect on the film's crystallinity. All deposited films and films annealed at 200 °C were amorphous. The annealed ZnS thin film had a more homogeneous, dense surface morphology and particle distribution than the as-deposited films (see Figure 12). ZnS oxidized to ZnO and grew smaller particles during air annealing. The annealed ZnS film had poorer transmittance than the as-deposited film in the 350-800 nm wavelength range. Annealing makes the ZnS thin film surface more uniform, reduces the defect density, and lowers the transmittance. Khalil et al. [129] investigated CBD ZnS thin films post-annealed at 100-400 °C in air for 1 h. All deposited and annealed films exhibited crystalline structures; no impurity other than ZnS was discovered, matching the hexagonal wurtzite structure. After annealing, the ZnS crystallites expanded and became aggregated. The thickness of the films increased (from 160.13 to 203.3 nm) as the post-annealing temperature rose. Both the transmittance and Eg (in the range of 3.9653 eV to 3.6888 eV) decreased as a result of the higher post-annealing temperature, greater particle size, and morphological modification.
According to certain research [97,102,130], the CBD technique yields either amorphous or weakly crystalline films, and high-temperature annealing is necessary to increase the film's crystallinity. Sayed et al. [97] presented a study on the effects of different annealing temperatures on CBD-ZnS films. The films were annealed at temperatures ranging from 150 to 300 °C. XRD results showed the films had their dominant diffraction intensity at the (111) plane, indicating a preferred orientation of crystallization. The transmittance (77.32-79.43%) and band gap values (3.34-3.45 eV) of the films increased with increasing annealing temperature. Gode [130] reported on films annealed from 100 °C to 500 °C in 100 °C increments in air for one hour. The study found that the films' optical and crystallization properties depend greatly on the annealing temperature. XRD data demonstrated that the film was initially deposited in an amorphous phase (see Figure 13a), and heat treatment slightly changed the film's structure (see Figure 13b).
The film became polycrystalline after annealing at 500 °C, which proved that post-deposition annealing boosts crystallinity. Raman spectra of the annealed ZnS samples revealed first-, second-, and third-order Raman phonons. The direct band gap dropped from 4.01 to 3.74 eV as the annealing temperature increased. Ahn and Um [102] investigated the influence of varying the annealing temperature from 100 °C to 300 °C on the crystallization and optical properties of ZnS films deposited on SLG. The ZnS thin film annealed at 100 °C exhibited an amorphous structure, but as the annealing temperature increased (from 200 to 300 °C), the thin film's crystalline quality was enhanced. The average grain size ranged from 134.1 to 178.5 nm. Annealing improved the ZnS thin film quality by increasing the grain size, and the surface became compact and homogeneous, as revealed by FESEM images. XPS spectra of the 300 °C-annealed film showed the characteristic Zn-S bond peak. As the annealing temperature increased (100 °C, 200 °C, and 300 °C), the direct energy gap (3.89 eV, 3.85 eV, and 3.82 eV) and surface roughness (28.49 nm, 22.45 nm, and 19.91 nm) decreased. This reduction in the direct energy gap is a result of the increased crystallite size, which decreased the dislocation density.
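The direct band gaps quoted throughout this section are typically obtained by Tauc extrapolation of (αhν)^2 versus hν; the sketch below illustrates the fit on synthetic data (in practice α would be computed from the measured transmittance and thickness as α = -ln(T)/d).

```python
# Sketch of a direct-gap Tauc extraction, (alpha * h * nu)^2 vs. photon
# energy, as used for the E_g values quoted in this section. The data below
# are synthetic; real use would start from a measured transmittance spectrum
# with alpha = -ln(T) / d.
import numpy as np

E = np.linspace(3.0, 4.5, 300)            # photon energy (eV)
Eg_true, A = 3.8, 1.0                     # synthetic direct gap, prefactor
tauc = A * np.clip(E - Eg_true, 0, None)  # (alpha*h*nu)^2 is linear in E - Eg
rng = np.random.default_rng(0)
tauc = tauc + rng.normal(0, 0.005, E.size)  # small noise, so the fit matters

# fit the linear rise above the gap and extrapolate to the energy axis
mask = (tauc > 0.1 * tauc.max()) & (tauc < 0.6 * tauc.max())
m, c = np.polyfit(E[mask], tauc[mask], 1)
print(f"Extrapolated E_g ~ {-c / m:.2f} eV (input {Eg_true} eV)")
```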
Although annealing in air is described in most references, annealing ZnS films in different atmospheres has also been reported by Shin et al. [52], Cao et al. [90], and Shan et al. [95]. Shin et al. [52] investigated the effects of different annealing temperatures and atmospheres on CBD-prepared ZnS thin films on ITO-coated glass substrates. ZnS thin films were annealed at 300-550 °C in vacuum, N2, and N2 (95%) + H2S (5%). The as-deposited ZnS thin film was amorphous and contained both Zn-OH and Zn-S bonds, whereas the annealed thin films had only Zn-S bonds, showing that annealing ZnS thin films eliminates the Zn-OH phases. The films annealed at various temperatures in vacuum and N2 atmospheres were amorphous or weakly crystalline. The best environment for generating ZnS thin films with high crystallinity was discovered to be N2 + H2S: the S in the N2 + H2S atmosphere boosted the annealed ZnS thin film's X-ray peak intensity. The as-deposited films and those annealed in vacuum, N2, and N2 + H2S had Zn atomic ratios of 50.14%, 49.85%, 46.85%, and 42.02%, respectively. Due to thermally evaporated Zn atoms and the H2S gas, ZnS thin films annealed in the different atmospheres had increased S concentrations. ZnS thin films annealed in vacuum and N2 had smaller grain sizes than those annealed in N2 + H2S. Films annealed at the various temperatures and environments had poorer transmittance than the as-deposited film, but all films exhibited moderate visible transmittance (>70%). Depending on the annealing conditions, the direct band gap of the annealed films ranged between 3.89 and 3.5 eV. Cao et al. [90] reported on a CBD-ZnS film annealed at 500 °C in Ar/H2S (5%) for one hour. XRD showed a weak peak at 28.5° corresponding to reflections from the (1 1 1) planes of cubic ZnS, indicating that the as-deposited ZnS thin films had low crystal quality. After annealing, intense and sharp reflection peaks appeared at 28.5°, 47.5°, and 56.3°, the latter two associated with the (2 2 0) and (3 1 1) planes of cubic ZnS. The XRD results demonstrate that annealing ZnS thin films enhances crystallization. Since the majority of the ZnO or Zn(OH)2 was converted into ZnS, all annealed samples contained an oversupply of S. Shan et al. [95] reported on CBD-ZnS films annealed in a sulfur environment at 500 °C for one hour.
The as-deposited ZnS thin films and S powder (0.4, 0.8, and 1.2 mg) were enclosed in a vacuum glass tube for sulfurization at S vapor pressures of 2.0, 4.0, and 6.0 × 10^3 Pa. The sulfurization pressure affected the films' crystallographic, morphological, and optical properties. XRD data showed that post-annealing in a sulfur environment helped the amorphous ZnS precursor film become crystalline cubic ZnS with increased crystallinity. The transmission spectra demonstrated that the transmittance (50-80%) of the film rapidly increased as the sulfur pressure rose, suggesting that sulfurization reduces the band gap of the ZnS film. In summary, according to the literature, annealing influences the morphology and optical properties, in particular the transmission and Eg, of CBD-prepared ZnS films [101,102,127]. In terms of morphology, annealing ZnS thin films causes particle growth and smoothing of the surface [101]. High-temperature annealing is necessary to enhance the film's crystallinity [105,130], because the CBD method typically produces amorphous or weakly crystalline as-deposited films; the as-deposited ZnS thin film forms a combination of ZnS and Zn(OH)2 phases [131]. Thermal energy increases the crystallinity of the annealed films [52,90]. The film surface becomes defect-free as contaminants and artifacts disappear during annealing, increasing the XRD peak intensity [43]. Annealing lowers the optical transmittance by reducing the space between particles and making them more compact [101,102]. The observed transmittances for annealed ZnS thin films were between 70% and 80% [102,127,128]; ZnS with these transmittance levels is appropriate for use as a buffer layer in CIGS solar cells, replacing CdS films [102].
Thus, numerous CBD parameters influence the shape, structure, and optical properties of ZnS films. Additionally, other phases formed during film growth affect the properties of CBD-deposited ZnS thin films. For this reason, several publications have observed that the crystallization, structure, morphology, optical properties, and stoichiometry vary depending on the chemical reagents and conditions used in the CBD process. All the CBD parameters combined can cause several effects on the films, which are sometimes difficult to regulate and recognize, resulting in inconsistent findings across researchers. To obtain high-quality ZnS films, the CBD procedure must therefore be applied reproducibly [129]. All researchers who want successful outcomes should consider the aspects above (Sections 3.4.1-3.4.9) that might affect their results.
Dopant Concentration Influence on ZnS Thin Film Properties
CBD is gaining popularity as a relatively straightforward and cost-effective technique. Nevertheless, dopant ions have to be added to stabilize the ZnS system against environmental impacts, including chemical corrosion, to enhance the structural, optical, and electrical properties, and to expand the applications of ZnS thin films [132,133]. Doping allows scientists to achieve desirable properties, including excellent optical absorbance, a tunable structure, a wide or narrow band gap, variable emission color, ferromagnetism, etc. [55-57,134]. For example, in solar energy applications it is essential to modify the energy levels to absorb solar energy; doping can introduce intermediate energy levels within the band gap, and these intermediate or impurity levels can alter the electronic structure and the energy-level transitions [135]. Recent work on doping manganese (Mn) [132,136], copper (Cu) [133], iron (Fe) [137], nickel (Ni) [138], cobalt (Co) [139], indium (In) [134], and aluminum (Al) [140] into ZnS thin films utilizing CBD is covered in this section. Talantikite-Touati et al. [132] synthesized Mn-doped ZnS thin films (at concentrations of 0%, 1%, 3%, and 5%) using CBD (in an alkaline bath, pH = 12.83). The films were deposited on glass. According to the XRD pattern (see Figure 14), all of the films were crystalline and exhibited a cubic ZnS structure. The crystallite size of the undoped ZnS thin film was larger than that of the ZnS:Mn thin films due to internal strain. Mn doping boosted the transmittance, with values ranging from 50 to 80% in the visible spectrum, enhancing the optical characteristics of the thin films by 30%; the best transmittance was obtained for the film with 5% Mn doping. The band gap values were between 3.43 and 3.75 eV, and the band gap of the thin films decreased as the Mn concentration increased. The band gap of the undoped ZnS thin film (3.75 eV) was greater than the 3.68 eV reported for bulk ZnS. Babu et al. [136] synthesized Mn:ZnS thin films using CBD with Mn concentrations between 0 and 12%. XRD patterns revealed that the Mn-doped ZnS films were crystalline and exhibited a cubic structure (see Figure 15). The grain size of the films increased up to 6% Mn; at Mn concentrations higher than 6%, the grains tended to dissolve and the film surface became porous. The band gap energy varied from 3.68 eV to 3.81 eV with increasing Mn doping. Horoz et al. [135] demonstrated that larger absorption of ZnS:Mn in the visible range improved the conversion efficiency due to Mn2+ internal-energy transitions in the ZnS. Their study revealed that intermediate energy levels in a wide band gap widen the absorption window into the visible range, improving solar cell device performance.
Aghaei et al. [133] utilized CBD to produce ZnS thin films doped with Cu2+ ions (at concentrations of 0.0008, 0.04, and 0.75). XRD showed that all Cu:ZnS films were crystalline and formed a cubic zinc blende structure. Increasing the Cu:Zn molar ratio increased the film's grain size (to approximately 100 nm). The band gap energy of the Cu:ZnS thin films decreased from 3.84 to 3.64 eV as the molar ratio increased. The transmittance of the films was around 50-70%; the sharp increase in the transmittance spectra from 310 nm to 340 nm indicated a homogeneous and compact crystal structure in the Cu:ZnS thin films. The PL intensity was strongly dependent on the dopant Cu concentration and increased significantly with the Cu:Zn molar ratio of the precursor solutions, reaching its highest value at a ratio of 0.04:100. The Cu:ZnS films may be used in optoelectronic devices, such as light-emitting diodes.
Akhtar et al. [137] deposited nanocrystalline Fe-doped ZnS thin films on glass substrates using CBD, with Fe concentrations of 0-15.62%. All films formed a cubic zinc blende phase. SEM showed that the particle sizes of the undoped and Fe-doped ZnS thin films were 65-150 nm, and magnetic measurements demonstrated that all Fe-doped ZnS thin films exhibit ferromagnetism at room temperature. Akhtar et al. [138] synthesized nickel-doped ZnS thin films on glass substrates by CBD. According to XRD analysis, the undoped and 6.25% Ni:ZnS thin films exhibited a cubic phase structure. The average particle diameter of the films was 80 nm, and the Ni-doped ZnS thin films showed ferromagnetism at ambient temperature. Akhtar et al. [139] examined Co-doped ZnS thin films prepared by CBD. All ZnCoS thin films were crystalline. After Co doping, the lattice parameter of the films was lower (5.382 to 5.306 Å) than that of undoped ZnS (5.406 Å). The Co concentration slightly increased the band gap energy of the films, which averaged 3.6 eV. The transmittance of the ZnS thin films, between 60 and 80%, decreased as the Co content increased. The doping concentration increases the number of luminescence centers, which increases the green-emission PL intensity at 510 nm, and the saturation magnetization increased with increasing Co concentration. Based on the half-metallic and ferromagnetic characteristics of Ni-, Fe-, or Co-doped ZnS [137-139], these films might be utilized in spintronic devices.
Jrad et al. [134] prepared indium-doped zinc sulfide (ZnS:In) thin films with In concentrations ranging from 0 to 10% by CBD. The ZnS bath solution was kept at 80 °C and the deposition time was 90 min. The XRD (111) peak intensity of the films increased with increasing In dopant concentration up to 6%; above 6% In, the XRD peak intensity decreased, indicating a loss of film crystallinity. The band gap energy varied between 3.70 and 3.76 eV for In concentrations from 0 to 10%, so the dopant concentration had little influence on the band gap energy. In addition, the transmittance values (50-70%) and reflectance values (20-40%) for all ZnS:In thin films allow the films to be used as a buffer or optical window layer in solar cells.
Maria et al. [140] deposited aluminum-doped ZnS (ZnS:Al) thin films on glass using CBD. Al was utilized at concentrations of 0 to 18 weight percent (wt.%). The ZnS and ZnS:Al thin films were deposited at 85 °C for 3 h. XRD showed that all ZnS:Al thin films exhibited hexagonal wurtzite crystal structures. FESEM showed that Al concentrations of 12 and 18 wt.% exceed the solubility limit of ZnS. The ZnS:Al films were thinner (127.61-221.71 nm) than the undoped ZnS film (230.27 nm). The XPS spectra reveal the presence of Zn, S, O, C, and Al in the ZnS:Al film. The ZnS:Al film with 6 wt.% Al increased the transmittance from 70% to 80%. With an increasing Al doping percentage, the band gap values decreased from 3.71 eV to 3.52 eV, the carrier concentration varied from −1.82 × 10^17 to −3.13 × 10^17 cm^-3, and the resistivity varied from 2.58 × 10^5 to 1.25 × 10^5 Ω cm.
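From the reported carrier concentration and resistivity, one can estimate the Hall mobility as μ = 1/(q n ρ); the pairing of the two values below is an assumption (the paper reports only ranges), so treat the result as an order-of-magnitude check.

```python
# Rough Hall-mobility estimate from the ZnS:Al values quoted above,
# mu = 1 / (q * n * rho). Pairing n = 1.82e17 cm^-3 with rho = 2.58e5
# ohm*cm is an assumption; the source reports only ranges.
q = 1.602e-19                 # elementary charge, C
n = 1.82e17                   # carrier concentration, cm^-3
rho = 2.58e5                  # resistivity, ohm * cm
mu = 1 / (q * n * rho)        # mobility, cm^2 / (V s)
print(f"mu ~ {mu:.2e} cm^2/Vs")   # ~1e-4 cm^2/Vs, i.e. very low mobility
```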
This section demonstrates that substantial research has been conducted worldwide to analyze the properties of undoped and doped ZnS thin films prepared by CBD. The dopant in a ZnS thin film [132-134,136,140] tunes the band gap energy, which may be important in the design of a suitable buffer layer for thin film solar cell manufacturing.
Applications of ZnS Thin Films
Controlling the variables of CBD (see Section 3) will result in high-quality ZnS films with broad application potential. ZnS has attracted considerable interest due to its superior optical and electrical features. Through its unique properties, ZnS has emerged as a promising contender for a variety of applications: photonics, field emission devices, electroluminescence devices, lasers, infrared windows, display technologies, biological devices, nontoxic sensing, and optoelectronic devices, and it is crucial as a buffer layer in solar cells [1,10,28]. Nakada and Mizutani [37] and Hariskos et al. [38] reported CBD-ZnS buffer layers on CIGS with efficiencies of 18.1% and 18.6%, respectively.
ZnS can be used as a buffer layer in Cu(In,Ga)Se2 (CIGS)-based thin film solar cells, as shown in Figure 16. This thin buffer layer is utilized to prevent diffusion during deposition operations and to improve cell stability [141]. CIGS thin film solar cells are highly efficient photovoltaic (PV) technologies [142]; the CIGS thin-film solar cell with the highest efficiency to date reached 23.35% in 2019 [143], while in 2017 CIGSSe established a record cell efficiency with a maximum of 22.9% [144]. Nakamura et al. [143] utilized cadmium-free double buffer layers with superior characteristics instead of CdS buffer layers for CIGSSe, whereas Kato et al. [144] utilized CdS for the CIGSSe buffer layer. Several contemporary record cells had poisonous CdS buffer layers, rendering industrial manufacture and sale in certain locations unfeasible [143]. ZnS offers an alternative buffer layer material devoid of Cd. The superior efficiency is attributable to the band gap energy (Eg) of the cadmium-free double buffer layers (CBD-ZnS), which is around 3.8 eV [48], significantly more than that of CdS (2.42 eV) [143]. Consequently, more blue light enters the CIGS absorber layer, increasing the short-circuit current density (Jsc) [48]. Furthermore, due to the hazardous risks associated with the manufacture and usage of CdS, researchers have focused on the creation of Cd-free buffer layers [1]. Therefore, investigating the prospects of CBD ZnS-based buffer layers appears fascinating [48]. Present energy demand is rising, and industry relies on non-renewable sources; longstanding global energy concerns have led to the pursuit of clean, renewable energy sources. From the renewable energy perspective, converting solar energy into electricity makes PV a promising technology [141]. Future technologies will require eco-friendly, cheaper materials, so thin film photovoltaics are predicted to gain popularity; cheaper and more energy-efficient techniques are also needed, and CBD-ZnS films will certainly receive greater attention. Bhattacharya and Ramanathan [145] reported that the conversion efficiency of ZnO/ZnS/CIGS solar cells was 18.6%. However, ZnS films had a resistivity of roughly 10^7 Ω cm [140], and solar cell buffer layers cannot tolerate such high resistivity, so ZnS films must be adequately doped: doped films have distinct chemical and physical properties compared to undoped ZnS. Doped transition metals (Mn, Cr, Co, Fe, and Ni) enhance visible light absorption [140], and doping nanostructures with trivalent metal cations (Al and In) changes their optical and photoluminescence properties [140].
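The practical meaning of this band gap difference is the optical cutoff wavelength of each buffer, λ ≈ hc/Eg ≈ 1240 nm·eV / Eg; the short sketch below compares the two materials using the gap values quoted above.

```python
# Optical cutoff wavelengths implied by the buffer-layer band gaps quoted
# above (lambda = h*c / E_g ~ 1240 / E_g in nm): the wider ZnS gap admits
# far more of the blue spectrum into the absorber than CdS does.
for name, Eg in [("CBD-ZnS", 3.8), ("CdS", 2.42)]:
    print(f"{name}: E_g = {Eg} eV -> absorbs below ~{1240 / Eg:.0f} nm")
```

A ZnS buffer only absorbs below roughly 326 nm, versus about 512 nm for CdS, which is why the blue response and Jsc of the cell improve with the Cd-free buffer.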
Using the CBD method, homogeneous ZnS thin films with improved structural and optical properties have been produced in recent years. CIGS solar cells can use ZnS films with superior optical properties [141], and solar cells, diode lasers, and other optoelectronic devices may also benefit from higher-quality CBD-ZnS films [43].
Limitations of CBD Method and Recommendations
Based on the outcomes of this review, this section summarizes the issues and concerns that have been identified. Addressing these difficulties will improve the CBD method for producing ZnS thin films. A few restrictions of the CBD technique are listed below [43].
Researchers should take care to select a substrate that will not react with the precursor solution; SLG or ITO/FTO-coated glass can be employed as the substrate during deposition.
The solubility product of zinc sulfide is quite small, Ksp ≈ 10^(−24.7). The concentration of free Zn²⁺ ions in the CBD solution must therefore be controlled throughout deposition to manage precipitation. For example, ZnSO₄ dissociates into Zn²⁺ and SO₄²⁻ ions, while thiourea hydrolyzes via SC(NH₂)₂ + OH⁻ → SH⁻ + CH₂N₂ + H₂O and SH⁻ + OH⁻ ↔ S²⁻ + H₂O; finally, the bath reaction is Zn²⁺ + S²⁻ ↔ ZnS [146]. Due to its low solubility, however, ZnS produced by this direct reaction precipitates onto the exposed surface (homogeneous process). Additionally, zinc hydroxide (Zn(OH)₂) precipitation is typical during CBD growth of ZnS [147]; such a film has low optical transmittance because of its rough topology [146]. Zn(OH)₂ formation must be minimized to obtain a high-quality zinc sulfide film. This issue can be resolved by employing a proper complexing agent, which releases only small concentrations of free ions according to the complex-ion dissociation equilibrium constant. The most popular complexing agents in CBD baths are ammonia and hydrazine: ammonia provides an adequately alkaline medium for zinc complex ions, whereas hydrazine promotes ZnS incorporation into the film and helps reduce hydroxide concentrations [146,147].
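To make the precipitation constraint concrete, the sketch below computes the largest free Zn²⁺ concentration the bath can hold before bulk ZnS precipitates; the free-sulfide levels are assumed values for illustration, since the text quotes only the solubility product.

```python
# Illustrative solubility-product check for a CBD-ZnS bath.
# Ksp(ZnS) ~ 10^(-24.7) as quoted above; precipitation begins once
# the ion product [Zn2+][S2-] exceeds Ksp.

KSP_ZNS = 10 ** -24.7

def max_free_zn(s2_minus: float) -> float:
    """Largest free [Zn2+] (mol/L) avoiding bulk ZnS precipitation
    at a given free sulfide concentration [S2-] (mol/L)."""
    return KSP_ZNS / s2_minus

# Even tiny sulfide levels force [Zn2+] to be vanishingly small, which is
# why a complexing agent (ammonia/hydrazine) must buffer the free Zn2+.
for s2 in (1e-10, 1e-12, 1e-14):  # assumed free sulfide concentrations
    print(f"[S2-] = {s2:.0e} M  ->  max [Zn2+] = {max_free_zn(s2):.2e} M")
```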
The CBD solution is usually disposed of after each deposition. Filtering the precipitate and treating it with acids or other chemicals can recover starting material for subsequent depositions.
If a researcher performs multilayer CBD depositions, unwanted interactions may occur between previously deposited layers and the deposition solution; the layering sequence must be chosen carefully to address this challenge [43].
The desired film growth and thickness cannot be set automatically during deposition. The most essential issue for cells with ZnS buffer layers is regulating the ZnS thickness [147]; CBD, however, is a promising method for controlling ZnS film thickness and crystallinity. To apply ZnS thin films as buffer layers, the film thickness must be optimized: it should be 60–73 nm to minimize reflectance at 550–700 nm, and with minimal reflectance the optimal buffer layer is obtained [76]. If the film thickness must be controlled, variables such as the stirring rate, deposition time, and bath temperature can be adjusted.
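The 60–73 nm window is consistent with a simple quarter-wave antireflection estimate, d = λ/(4n). The sketch below assumes a typical visible-range refractive index of n ≈ 2.35 for ZnS, a value not quoted in this review, and reproduces a similar thickness range:

```python
# Quarter-wave antireflection thickness: d = wavelength / (4 * n).
# n ~ 2.35 for ZnS in the visible is an assumed, typical literature value.

N_ZNS = 2.35

def quarter_wave_thickness(wavelength_nm: float, n: float = N_ZNS) -> float:
    """Film thickness (nm) that minimizes reflectance at one wavelength."""
    return wavelength_nm / (4.0 * n)

for wl in (550, 700):
    print(f"lambda = {wl} nm -> d = {quarter_wave_thickness(wl):.1f} nm")
# Prints ~58.5 nm and ~74.5 nm, bracketing the 60-73 nm range quoted above.
```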
Summary and Conclusions
This paper has provided an overview of CBD-ZnS thin films and the numerous parameters that affect their properties and quality. Scopus trends for the second week of November 2022 indicate that CBD ZnS thin-film development is gaining momentum and expanding: 1560 publications on CBD ZnS thin films have appeared since 2011, compared with a total of 446 publications from 1985 to 2010. CBD is an excellent approach for ZnS film deposition, as it represents the most feasible, reliable, simple, and cost-effective way to produce thin films over large areas at near room temperature. Three primary deposition processes are reportedly involved in the CBD technique: ion by ion, cluster by cluster, and a mixed mechanism. Typically, the deposition time and the nucleation processes determine which mechanism is preferable.
The CBD parameters, such as complexing agent, zinc salt, [Zn]/[S] ratio, stirring speed, humidity, deposition temperature, deposition time, solution pH, precursor types, and annealing, influence the properties of ZnS films. The most significant findings from this overview of CBD parameters are summarized as follows: complexing agents affect the growth, homogeneity, and specularity of ZnS thin films in CBD. ZnS thin films generated without complexing agents have a slow growth rate, a rough morphology, and a discontinuous microstructure. Ammonia and hydrazine are the most commonly used complexing agents for CBD-prepared ZnS films, and hydrazine hydrate smooths the film.
The reactant concentration ratio [Zn]/[S] in the deposition solution governs the physical and chemical properties of the ZnS thin layer. Modifying the reactive solution composition can shift the competition between homogeneous and heterogeneous nucleation processes to enhance thin-film formation.
Reports on the effect of stirring ZnS solutions in CBD are contradictory. In general, stirring is beneficial because it promotes particle interaction and distributes solvent components through the solution; the solution becomes clearer as it is stirred. It has been proposed that ZnS coats the glass substrate more densely as the stirring rate increases (from 300 rpm to 1200 rpm).
The CBD process can be conducted in either a closed reaction vessel (hermetic CBD system) or an open reaction vessel (open CBD system). Hermetic CBD prevents evaporation and environmental interference, whereas humidity disrupts the gas–liquid interaction in open CBD systems. Hermetic CBD ZnS films demonstrate superior transmission and morphology compared with open-CBD samples, so hermetic CBD is favored for manufacturing impurity-free thin films.
Researchers have generally examined ZnS thin films deposited at temperatures ranging from 60 to 95 °C. The deposition temperature can be employed to regulate film thickness: as the deposition temperature rises, the thickness of the ZnS thin film changes accordingly.
The growth, structural, and optical properties of CBD-ZnS depend on the deposition time. CBD ZnS film synthesis begins about 15 min after the reactants are combined. Various studies have reported preparation periods for ZnS thin films ranging from minutes to hours. Increasing the deposition time controls the thickness and growth rate, and the undesirable ZnO phase disappears with longer deposition times. A prior study found that ZnS grown on glass after 90 min of reaction was more uniform and had good transmission (80% in the spectral region of 300 to 800 nm). The short-circuit current increases with shorter deposition times.
The pH of the reaction mixture affects thin-film formation; the formation of superior thin films requires precursor solutions containing hydroxyl ions (OH⁻). An acidic solution (pH 5) contains Zn²⁺ ions but provides inadequate conditions for ZnS film production. Between pH 6 and 10, undesirable intermediate complexes and precipitates are produced but no ZnS. When the pH of the chemical bath exceeds 10, [Zn(NH₃)₄]²⁺ is more likely to form, which leads to the most desirable ZnS product. A pH range of 10 to 12 is optimal for producing high-quality CBD-ZnS films.
Multiple studies have revealed that chemical factors, such as the Zn precursor, play an important part in the formation mechanism and physical properties of ZnS thin films. The following Zn precursor salts have been used for CBD-ZnS growth: Zn(CH₃COO)₂·2H₂O, Zn(Ac)₂, ZnCl₂, ZnI₂, Zn(NO₃)₂·4H₂O, ZnSO₄·7H₂O, and ZnSO₄. According to the most recent research, ZnSO₄ has been identified as the preferred CBD precursor material for ZnS production.
Annealing influences the structural, morphological, optical, and electrical properties. High-temperature annealing is necessary to enhance film crystallinity; different films require heat treatment at 200–550 °C. Annealing also eliminates disturbing surface particles, smoothing the films. Annealing ZnS films under vacuum, N₂, and N₂ + H₂S atmospheres has been studied as a way to eliminate the Zn(OH)₂ phase and increase crystallinity. The best environment for high-crystallinity ZnS thin films is N₂ + H₂S, owing to the continuous supply of S during the annealing process.
Doping ZnS thin films with transition metal ions (Ni, Fe, Mn, Co, and Cu) and trivalent metal cations (Al and In) has garnered attention as a way to achieve and tune properties such as high optical absorption, high transmittance, narrow or wide band gap energy, tunable emission color, and ferromagnetism. Precise control of doping in nanocrystals can facilitate the synthesis of functional materials exhibiting desirable features for practical uses such as solar cells and spintronics.
Consequently, numerous CBD parameters influence the morphological, structural, optical, and electrical properties of ZnS films, and all researchers who desire successful outcomes should be mindful of all potential influencing parameters. Although CBD-prepared ZnS thin films have demonstrated improved properties, several concerns still require additional exploration. It is crucial to take precautions when selecting a substrate so that it does not react with the precursor solution. The solubility product of zinc sulfide is quite small (Ksp ≈ 10^(−24.7)), and ZnS precipitates onto the exposed surface through the direct reaction; utilizing a complexing agent that releases modest amounts of ions in line with the complex-ion dissociation equilibrium constant is needed to address this problem. CBD solutions are discarded after each deposition, but the precipitate can be filtered and treated with acids or other chemicals for reuse. Thickness control is the most important factor for cells with ZnS buffer layers; if the film thickness must be managed, parameters such as stirring rate, time, and bath temperature can be adjusted. These issues should be resolved soon, because ZnS films have the potential to serve multiple purposes in a variety of innovative solar cell applications. ZnS is an environmentally safe compound with a higher band gap energy (3.7 eV) than CdS (2.4 eV); it can boost short-wavelength absorption, replace CdS, and serve as a Cd-free buffer layer material for CIGSSe thin films. With the aim of producing CBD-ZnS that could boost CIGS device efficiency to 20% [49,50], there is still a great deal to discover about the production of optimized ZnS by CBD and its impact on new solar cells. Numerous CBD techniques and modifications are therefore required to enhance the desirable features of solar cell materials. The outcomes of this study could provide useful insights into the synthesis of ZnS thin films by CBD; to obtain the optimal qualities of ZnS thin films, in-depth research and reviews of the CBD parameters are recommended.
Artificial Neural Networks and Neuro-Fuzzy Models: Applications in Pharmaceutical Product Development
HIGHLIGHTS
• Introduction regarding artificial neural networks and neuro-fuzzy logics.
• Application of neuro-fuzzy logic in pharmaceutical product development.
Abstract: Pharmaceutical product development is a challenging, time-consuming, and cost-intensive process. Computational methods can assist and speed up the industrial process. Artificial neural networks (ANN) and neuro-fuzzy models are artificial-intelligence tools that can be used in pharmaceutical product development to enhance productivity, quality, and consistency. In the present review, the working principles of ANN and neuro-fuzzy models are discussed, elaborating on their different types, advantages, and disadvantages. Furthermore, the application of these computational techniques in the development of pharmaceutical products such as suspensions, emulsions, microemulsions, nanocarriers, tablets, and transdermal preparations is discussed in detail.
INTRODUCTION
Pharmaceutical product development requires active and inactive ingredients together with several procedure- and process-related factors, all of which are difficult to regulate and optimize. For decades, this process has been accomplished by trial and error, guided by the formulator's understanding. Pharmaceutical product development is thus a time-consuming, expensive, and complicated procedure, and computational approaches can assist and speed up the industrial process. Artificial neural networks (ANN) and neuro-fuzzy models are artificial-intelligence techniques that may improve efficiency, quality, and consistency in the production of pharmaceutical products. Artificial intelligence is a branch of computer science that creates and implements algorithms for data analysis and interpretation. It is a broad set of categories spanning statistics, machine learning, pattern recognition, clustering, similarity-based methods, logic, and probability theory, together with biologically inspired approaches such as neural networks and fuzzy modelling, collectively called computational intelligence [1]. This knowledge enables a specialist to characterize the data that drive the process and to adapt a primary method, gaining insight and solving the problem with an alternative arrangement of standardized data; the procedure incorporates associations between raw materials and process conditions [2].
Neural networks have received a great deal of attention in the last decade; scientists and engineers are paying close attention to them, and they are among the most powerful computing tools ever constructed. Artificial neural network (ANN) technology simulates the pattern-recognition skills of the brain's neural networks. Like a single neuron in the brain, an artificial neuron unit takes inputs from various external sources, analyses them, and makes a decision. ANNs are vital for modeling diverse non-linear systems, since they do not require rigorously organized experimental designs and can map functions using historical or partial data. Medicinal chemistry, psychology, engineering, and pharmaceutical research are just a few of the domains where ANNs may be used, because of their ability to make predictions, recognize patterns, and build models. ANN applications include analytical data analysis, pre-formulation, optimization of pharmaceutical formulations, in vitro–in vivo correlation, quantitative structure–activity and structure–property relationships, proteomics, prediction of skin and blood–brain barrier permeability, and pharmacokinetic and pharmacodynamic modeling. A neuro-fuzzy method depends on a fuzzy system that is trained using a neural network learning method. The (heuristic) learning technique works with local information and only modifies the underlying fuzzy system locally. A three-layer feed-forward neural network can be used to describe a neuro-fuzzy system: input variables are represented by the first layer, fuzzy rules are represented by the middle (hidden) layer, and output variables are defined by the third layer. The adopted neuro-fuzzy model is the adaptive neuro-fuzzy inference system (ANFIS) [17], which integrates artificial neural networks with fuzzy logic (FL) to simulate complicated non-linear problems such as pharmaceutical formulation optimization. Fuzzy logic is a powerful signal-processing method that has been utilized effectively in various domains such as system modeling and control, detection, pattern recognition, and denoising.

The central unit of the nervous system is called a neuron, or nerve cell. It possesses a cell body, which contains a nucleus storing genetic information and cytoplasm housing the molecular machinery that produces the neuron's material. Tree-like nerve-fibre structures called dendrites branch from the cell body; these act mainly as receptors that receive signals from other neurons. A long fibre, the axon, also extends from the cell body (as shown in Figure 1) and divides into various strands and sub-strands connecting to many other neurons at synaptic junctions, or synapses. The axon of a neuron carries signals via thousands of neurotransmitter connections to different neurons. The transfer of a signal from one cell to another takes place at the synapse through a very intricate chemical process, in which specific transmitter substances are released from the sending side of the junction [3].
Artificial neural networks (ANNs)
ANNs, also known as neural networks (NNs), are computer systems modelled after the biological neural networks that make up animal brains. An ANN consists of a set of linked units or nodes called artificial neurons, which loosely resemble the neurons in a biological brain. Each link, like a synapse in a biological brain, can convey a signal: an artificial neuron receives a signal, processes it, and then sends signals to the neurons linked to it. The "signal" at a connection is a real number, and each neuron's output is generated by some non-linear function of the sum of its inputs; different layers may apply different transformations to their inputs. Signals pass from the first layer (input layer) to the last layer (output layer), perhaps after traversing the layers several times [4]. The feedback model is another kind of architecture, in which one layer's output feeds back to a previous layer or to the same layer; the presence or absence of such feedback connections distinguishes the two types of structure. A feed-forward model has no backward flow of information, so it retains no record of previous outputs. In recurrent networks, by contrast, the next state of activity depends on the input weights and the previous state of the system [5–7]. A three-layered ANN model with a single hidden layer is depicted in Figure 2.
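As a minimal illustration of the feed-forward pass just described, the sketch below builds a three-layer network in Python; the layer sizes, random weights, and sigmoid activation are illustrative choices, not values taken from the review.

```python
import numpy as np

# Three-layer feed-forward ANN: each neuron applies a non-linear function
# to the weighted sum of its inputs, as described above.
rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# input layer: 4 features -> hidden layer: 5 neurons -> output layer: 1 neuron
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)
W2, b2 = rng.normal(size=(1, 5)), np.zeros(1)

def forward(x):
    """Forward pass: signals flow input -> hidden -> output."""
    h = sigmoid(W1 @ x + b1)      # hidden-layer activations
    return sigmoid(W2 @ h + b2)   # network output

x = np.array([0.2, 0.7, 0.1, 0.9])  # e.g. normalized formulation variables
print(forward(x))
```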
Neuro-Fuzzy logic
A data-driven learning approach derived from neural network theory trains a neuro-fuzzy system based on an underlying fuzzy system. Only local information is used in this heuristic to induce local modifications in the basic fuzzy system. As demonstrated in Figure 3, a neuro-fuzzy system can be represented as a special three-layer feed-forward neural network: the first layer represents the input variables, the second layer represents the fuzzy rules, the third layer represents the output variables, and the (fuzzy) connection weights are created from the fuzzy sets. Some methods employ five layers, with the fuzzy sets stored in the units of the second and fourth layers; these models, however, can be converted into a three-layer architecture [8]. A neuro-fuzzy framework can be portrayed as a set of fuzzy rules, depending on the characteristics of the inputs and outputs together with prior knowledge of fuzzy principles. By combining a fuzzy framework with neural systems, the resulting system learns from examples while its functionality remains easy to interpret [9]. There are several ways to develop a hybrid neuro-fuzzy system; the benefit is that different models can be compared and their structural differences interpreted. Common neuro-fuzzy architectures include: fuzzy adaptive learning control network (FALCON), adaptive-network-based fuzzy inference system (ANFIS), generalized approximate reasoning-based intelligent control (GARIC), neuronal fuzzy controller (NEFCON), fuzzy inference and neural network in fuzzy inference software (FINEST), fuzzy net (FUN), self-constructing neural fuzzy inference network (SOFIN), fuzzy neural network (NFN), and dynamic/evolving fuzzy neural networks. ANFIS implements a fuzzy inference system on an adaptive network to develop fuzzy rules with suitable membership functions for the inputs and outputs; a single node producing a crisp value forms the output model in the fifth layer, and hybrid adaptive learning is generally chosen to train the system. When designing a fuzzy logic controller with ANFIS, the following arrangements are implemented: a Sugeno-type system must be established for the controller; input–output pairs are needed for the system to reach a better optimal solution; the ANFIS architecture and membership functions can be built and modified in MATLAB; and each rule must operate on a single output membership function [10].
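As a minimal sketch of the Sugeno-type rule evaluation that ANFIS tunes, the code below fuzzifies two inputs with Gaussian memberships, fires two rules with linear consequents, and returns a crisp weighted output; all membership parameters and consequent coefficients are invented for illustration.

```python
import numpy as np

def gauss(x, center, sigma):
    """Gaussian membership degree of input x in a fuzzy set."""
    return np.exp(-0.5 * ((x - center) / sigma) ** 2)

def sugeno(x1, x2):
    # Layers 1-2: fuzzify inputs, compute rule firing strengths (product AND).
    w1 = gauss(x1, 0.2, 0.3) * gauss(x2, 0.8, 0.3)
    w2 = gauss(x1, 0.8, 0.3) * gauss(x2, 0.2, 0.3)
    # First-order Sugeno consequents: linear functions of the inputs.
    f1 = 1.0 * x1 + 0.5 * x2 + 0.1
    f2 = -0.5 * x1 + 1.5 * x2 + 0.2
    # Layers 3-5: normalize firing strengths and return the crisp output.
    return (w1 * f1 + w2 * f2) / (w1 + w2)

print(sugeno(0.3, 0.7))  # crisp output for one input pair
```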
• Application of ANN in pre-formulation studies
ANN models have been utilized in pre-formulation studies to determine the physicochemical properties of amorphous polymers. Ebube and coauthors [11] showed that an ANN model accurately depicted the water uptake characteristics, water vapor sorption, viscoelastic properties, and glass transition temperatures of various amorphous hydrophilic polymers and their physical blends, with low prediction errors (0–8%), showcasing the use of ANN as a pre-formulation tool. The model helps predict the relationships between polymer blend composition, hydration capacity, polymer solution viscosity, and glass transition temperature. Onuki and coauthors [12] built a database of tablet properties for designing tablet formulations; in such design it is vital to consider the impact of the active pharmaceutical ingredient (API) on the tablet itself, especially where different forms of an API are available. Using ANNs, their study mapped 14 kinds of API blends with self-organizing maps onto a quick-release database to highlight the effect of APIs on tablet properties. Chin and coauthors [13] affirmed that ANNs can estimate rheological wall slip velocity: an advanced multilayer perceptron model with shear stress, particle size, concentration, and temperature as input variables produced wall slip velocity as output, and the study concluded that this is a proficient artificial-intelligence method for analyzing rheological wall slip velocity. Furthermore, ANNs have been effectively used to develop stable formulations for a variety of active ingredients, including rifampicin and isoniazid microemulsions. ANNs are unquestionably valuable in pre-formulation design and could help lower the cost and duration of the investigation (as shown in Figure 4).
• ANN in the development of nanocarriers
While anticancer medications seek targeted action, the side effects shown by the drugs cannot be ignored. Qaderi and coauthors [14] studied a nanocarrier form of the drug artemisinin for breast cancer to reduce harmful effects on healthy cells; the study used an ANN design to capture highly non-linear behavior, mainly for predicting toxicity, and showed that the nanoliposomal formulation was less toxic than the PEGylated formulation. For early diagnosis and imaging-based treatment of disease, nanoparticles of various sizes and shapes need to be developed. Boso and coauthors [15] therefore optimized particle size and shape for effective action while reducing the number of experiments: they developed two networks, ANN231 and ANN2321, to predict the number of adhering nanoparticles from inputs such as shear rate and particle diameter, demonstrating the optimal particle diameter in a parallel-plate flow chamber apparatus. This application of ANNs is a heuristic approach to minimizing the number of experiments needed to predict nanoparticle action at a disease site without compromising the study's accuracy. They also compared response-surface and ANN modelling with a genetic-algorithm approach in the preparation of agar microspheres; the multivariate nature of drug-loaded nanosphere manufacturing is a monotonous, expensive task because of the many interacting factors. Koletti and coauthors [16] tested quality-by-design and ANN approaches for systemically administered NSAID gelatine nanocarriers; compared to multiple linear regression (MLR), the ANN showed substantially improved prediction ability, and the verification test demonstrated strong agreement between the ANN predictions and the experimental findings. In the study by Shahbanzadeh and coauthors [17], ANNs were used to evaluate silver nanoparticle size in biocomposite substrates; a multilayer feed-forward ANN expressed silver nanoparticle size as the output from correlated inputs. Hashad and coauthors [18] demonstrated the use of ANNs to optimize formulation parameters for a better yield of chitosan–tripolyphosphate nanoparticles; the ANN predicted not only the particle size but also the percentage yield of nanoparticles, a step toward customizing targeted delivery. Shalaby and coauthors [19] used ANNs to determine the particle size and entrapment capacity of noscapine in PLG/PEG nanoparticles by varying different factors; molecular weight had the greater effect on particle size, while the polymer–drug ratio influenced the drug entrapment efficiency. Esmaeilzadeh-Ghardeghi and coauthors [20] used ANNs to determine the effects of processing parameters on the particle size of ultrasound-prepared chitosan nanoparticles, since pharmacokinetic action depends on particle size; the structured ANN results showed that four input factors affect the size of the prepared chitosan nanoparticles. Baharifar and coauthors [21] used a comprehensive ANN model with three input variables (polymer concentration, stirring time, and pH) and, in a second study, showed that the toxicity of chitosan/streptokinase nanoparticles is influenced by particle size; smaller particles showed more toxicity, regardless of the input parameters. Imanparast and coauthors [22] studied the preparation, optimization, and characterization of simvastatin nanoparticles by electrospraying, focusing on size using ANNs with three input variables (polymer concentration, salt, and solvent flow rate) and nanoparticle dimension as the output variable. Tejedor-Estrada and coauthors [23] studied the subcellular localization of photosensitizers for photodynamic therapy (PDT) of solid tumours; an ANN classification method distinguished photosensitizers located in mitochondria from those in lysosomes, making ANN-based virtual screening of drugs for PDT helpful for predicting target action in solid tumours while removing false assignations. Sarajlic and coauthors [24] predicted microsphere surface area and nanoparticle size using ANNs; varying three inputs in a two-layer feed-forward network with a sigmoid transfer function yielded accurate outputs for nanoparticle size and micropore surface area. The preparation of nanostructured lipid carriers is a sophisticated task, and Rouco and coauthors [25] applied artificial-intelligence tools to determine the design space of nanostructured lipid carrier materials, evaluating the main parameters that vary each formulation property and establishing a pioneering approach to nanoparticles as lipid carriers. Baghaei and coauthors [26] used artificial intelligence to control the particle size and release behavior of biodegradable PLGA nanoparticles; among the variables studied, PLGA molecular weight played an important role in the PLGA particle size and initial burst, as proposed by a genetic algorithm. Bozuyuk and coauthors [27] used ANNs to study the key parameters of PEGylation of bioadhesive chitosan nanoparticles; ANNs served as a modelling tool to optimize the inherent properties of nanomedicines, helping predict characteristics such as size, zeta potential, and cell-surface adhesion of PEGylated chitosan nanoparticles. El Menshawe and coauthors [28] developed terbutaline sulphate-loaded liposomes and formulated a transdermal gel employing ANN modelling; the findings suggested the prospective use of bilosome-loaded gel for the treatment of asthma. Rizkalla and coauthors [29] used ANN models to determine the nanoparticle size and micropore surface area of particles developed by the double-emulsion method, using two commercial ANN packages, Neuro Cell Predictor and NeuroSolutions; optimizing various characteristics by varying different properties showed that these models estimated the characteristics better than a statistical model. It was concluded that ANN tools are efficient for analyzing different physical properties and processes.
ANNs in the development of emulsion formulations
Kundu and coauthors [30], using ANNs with a genetic algorithm, studied the development of petroleum emulsion formation with a heuristic approach to stability; they found that ANN-GA with response surface methodology optimizes the various parameters needed for the stability of petroleum emulsions. Gupta and coauthors [31] proposed a study of the extraction of the NSAID diclofenac through an emulsion liquid membrane supported by ANNs, which optimized factors such as surfactant concentration, homogenizer speed, stirring speed, and stripping-phase concentration; the results depicted ANNs as a prolific model for simulating the process. Monazzami and coauthors [32] determined the rheological behaviour of β-cyclodextrin-loaded Pickering-type oil-in-water emulsions by applying ANNs with three input variables and shear stress as the output variable; a feed-forward backpropagation network was trained with three different methods (quasi-Newton, conjugate gradient, and gradient descent), and the results showed that quasi-Newton was the most prolific method for determining shear stress. Messikh and coauthors [33] studied the extraction efficiency and breakage percentage for copper removal using an emulsion liquid membrane process; a radial-basis-function neural network optimized factors for efficient copper extraction such as emulsification time, ultrasonic power, mixing speed, sulphuric acid concentration, extractant and surfactant concentrations, internal phase/organic phase volume ratio, emulsion/external phase volume ratio, and copper concentration. Fang and coauthors [34] studied the extraction of L-phenylalanine from sodium chloride solutions by an emulsion liquid membrane process; a backpropagation neural network improved by a genetic algorithm simulated the L-phenylalanine concentration in the external phase and the extraction efficiency under various operational conditions. Elkatatny and coauthors [35] predicted drilling-fluid rheological properties during drilling operations, expressing crucial factors such as yield point, apparent viscosity, and plastic viscosity while removing problems related to property changes; an ANN black-box design was converted into a white box to create a mathematical model determining the rheological properties of the drilling fluid from Marsh funnel viscosity, solid content, and density, and the same white-box ANN mathematical model was applied to the rheology of inverse-emulsion-based mud during drilling. Mahdi and coauthors [36] studied the operational parameters for microdroplet size prediction in microfluidic systems using ANN modelling of water-in-oil emulsion formulations; the study showed the relative importance of the inputs to microdroplet size using the Garson algorithm. Amasya and coauthors [37] worked on a QbD approach to study fluorouracil-loaded lipid nanoparticles in a W/O/W double emulsion, using an ANN model to evaluate the obtained data and optimize the various parameters that need to be stabilized in the preparation, with an experimental review maintaining the design space of both inputs and outputs. Vu and coauthors [38] proposed a research study to refine a rosuvastatin-containing self-nano-emulsifying drug delivery system (SNEDDS) formula and determine its physicochemical characteristics. The solubility and compatibility of rosuvastatin were tested in surfactants, cosurfactants, and oil excipients; a D-optimal experimental design produced with JMP 15 software was used to study the effects of excipients on physicochemical properties and to refine the rosuvastatin SNEDDS formula, and the nanoemulsions produced from the SNEDDS were characterized by droplet size, polydispersity index, and entrapment efficiency. Izadi and coauthors [39] proposed a study of o/w emulsions dispersing phytosterol particles using ANNs and a multivariate model; since the main factors in an emulsion are viscosity and rheological properties, the ANN study evaluated consistency, particle size, and phytosterol paraffinization in the o/w emulsion, and the results showed that the ANN model was more accurate than multivariate regression. Wei and coauthors [40] inspected the stability of emulsions prepared with various emulsifiers by centrifugation using ANNs; mixtures such as Span 80 and Tween 80 gave W/O/W emulsions with delicate sensitivity, and polymeric surfactants such as Arlacel P135 were used for long-term stability of the emulsion particles.

ANNs in the development of suspensions

Heidari and coauthors [41] proposed a study to design a suspension PID controller via a backpropagation neural network (BPN) to simplify one-dimensional spring-damper problems; the BPN method was the most appropriate here, with the best results obtained using the Levenberg–Marquardt training algorithm with 10 neurons and one hidden layer, and with various inputs it was found that BPN was helpful for tuning the gain of a PID controller. Bash and coauthors [42] studied the dynamic response of an arm-based suspension using ANN techniques; with three inputs and three outputs, the network detected the maximum dynamic displacement, and regression analysis was performed between finite-element results and the values produced by the neural network model. It was found that the radial-basis-function neural network was the prominent choice, diminishing the time and effort required to evaluate the dynamic displacement response in the suspension-arm technique. Moran and coauthors [43] showed that neural networks can be used for optimal control analysis and identification of nonlinear vehicle suspensions; the neuro-vehicle models effectively identified the dynamic qualities of the actual vehicle suspension, and proper investigation of the front suspension allowed its dynamic behavior to inform the tuning of the rear suspension.
ANNs in the development of tablets
Shao and coauthors [44] compared experimental data on immediate-release tablets using ANNs and neuro-fuzzy models, which proved prominent models for tablet tensile strength and dissolution profiles; ANNs showed superior capability to predict unseen data, whereas neuro-fuzzy models generated rules for the cause–effect relationships depicted in the experimental data. Chen and coauthors [45] proposed a controlled-release structure to reproduce pharmacokinetic properties using ANNs; the ANN model captured various interactions and reproduced pharmacokinetic properties as well as dissolution and bioavailability profiles, assisting in the development of complex dosage forms. Sun and coauthors [46] studied the modelling of controlled-release drug delivery systems using ANNs; various problems arise with the previously used response-surface methodology, and ANNs overcame these limitations, optimizing the data and setting a heuristic approach to developing controlled-release delivery systems. Barmpalexis and coauthors [47] developed nimodipine floating tablets of solid dispersion optimized by ANNs and genetic programming (GP), which showed good floating and controlled-release characteristics; ANNs and GP proved effective tools for optimizing the characteristics, and the GP equations maintained a globally optimal formulation. Stanojevic and coauthors [48] explored the feasibility of adapting drug release rates from immediate to extended release by varying tablet thickness and drug processing, and designed predictive ANN models of atomoxetine (ATH) release rates from DLP 3D-printed tablets; the successful manufacture of a series of tablets with doses ranging from 2.06 mg to 37.48 mg, displaying immediate and adjusted release profiles, demonstrates the promise of this technology for on-demand dosage forms, with the prospect of modifying the dose and release behavior by changing drug loading and tablet dimensions. Xie and coauthors [49] studied ANN optimization and evaluation of sustained- and immediate-release tablets; the ANN model was helpful for optimizing various release properties and complex combination forms, and it also determined the influence of dependent and independent variables, prolifically expressing the release profile. Turkoglu and coauthors [50] modelled roller compaction while varying the binder agent, type, and concentration by applying ANNs and genetic algorithms; both approaches optimized the data analysis for determining the binder, and the study found that the genetic algorithm predicted tablet characteristics better than the ANN models. Colbourn and coauthors [51] studied gene expression programming (GEP) to design different formulations in a comparative study with ANNs; GEP is a newly developed computing approach that models data and automatically derives equations plotting the cause–effect relationship, and these techniques, compared with neural models, are now widely used, with the output showing GEP to be an effective and efficient way of structuring the data. Chen and coauthors [52] compared the in-vitro dissolution profiles of controlled-release tablets using four commercial ANN packages: NeuroShell v3.0, BrainMaker v3.7, CAD/Chem v5.0, and NeuralWorks Professional II/Plus. These ANN programs handled diverse data and maintained designs demonstrating the heuristic methodology required for dissolution of controlled-release tablets, and the resulting data indicated that NeuroShell gave the best prediction of in-vitro dissolution data among the software used in the study. Ilic and coauthors [53] developed a relevant IVIVC model for osmotic-release nifedipine tablets based on a mechanistic simulation of the gastrointestinal tract and ANN analysis, and examined their usefulness in the biopharmaceutical characterization of a drug; both the GIS and ANN models were sensitive to the applied in-vitro kinetic profiles and were employed in in vitro–in silico–in vivo development using different in-silico approaches. Matas and coauthors [54] studied in vitro–in vivo correlations for delivery by nebulizer using ANNs; the model predicted the lung bioavailability of salbutamol formulations from nebulizers, and the study concluded that ANNs were the most prolific tool for pharmacokinetic performance related to lung disposition. Chansanroj and coauthors [55] studied the characteristics of drug release from directly compressed sucrose ester matrix tablets; multilayer perceptron and self-organizing map neural networks were used for accurate prediction of drug release from the sucrose ester matrix tablets. Colbourn and coauthors [56] proposed defined approaches of neural networks and evolutionary computing for pharmaceutical formulation; these models provided a distinct basis for optimizing parameters with visualization instruments, and the investigation deduced that both models have far-reaching use in pharmaceutical formulation compared with traditional statistical methods. Aksu and coauthors [57] optimized different characteristics of orally disintegrating ondansetron tablets using ANN models, mainly neuro-fuzzy logic and gene expression programming; the study stated that ANN programs are useful tools for detecting development characteristics in R&D, benefiting raw-material costs and development time for ondansetron orally disintegrating tablets. Hussain and coauthors [58] studied mucoadhesive buccal tablets containing flurbiprofen and lidocaine hydrochloride to relieve dental pain; ANNs were used to optimize the different characteristics of the tablet formulations, providing a heuristic approach toward stable formulations. Takayama and coauthors [59] simulated different optimization techniques to determine controlled-release theophylline tablets based on ANNs; working on various approaches, the ANNs produced good agreement between the resulting release parameters and the predicted results. Plumb and coauthors [60] investigated experimental-design strategies to model film-coating formulations using ANNs with six input and two output nodes and a single hidden layer of five nodes, comparing Box–Behnken, central composite, and pseudo-random designs to train a multilayer perceptron; they concluded that extensive internal mapping requires the prolific use of ANN models to determine the basic design of tablet coatings. Ghennam and coauthors [61] used an intelligent model (ANN-GA), an artificial neural network paired with a genetic algorithm, to predict the in-vitro kinetic release profiles of ibuprofen from two types of formulations, microcapsules and multiple-unit pellet system (MUPS) tablets based on soy protein and its two derivatives; the ANN-GA model showed very good success in predicting the kinetic release of ibuprofen from all formulations and the impact of pH and the mechanism of vegetal protein alteration on drug release in simulated gastric and intestinal media. Simon and coauthors [62] optimized drug delivery rate characteristics of formulations using ANNs with a related algorithm, depicting the estradiol release from ethylene-vinyl acetate membranes; the study concluded that different formulations can be custom-designed for the desired release-rate property. Sovany and coauthors [63] studied the raw-material parameters that determine the effectiveness of formulation characteristics and the effect of compression forces on the breaking parameters of tablets; ANNs were used to optimize the data analysis and model the desired properties, and the ANNs statistically revealed that the subdivision of scored tablets is influenced by diverse parameters and the composition of the powder blends.
ANNs in the development of transdermal formulations
A few researchers worked on ketoprofen hydrogel transdermal formulations containing O-ethylmenthol, optimized simultaneously with ANNs. The ANN tools were used to determine the lag time, rate of penetration, and irritation score, removing tedious trial work; the study concluded that there is a nonlinear relationship between causal factors and response variables and that the responses were well predicted by ANNs. Onuki and coauthors [64] worked on adhesive dermatological patches of photo-crosslinked polyacrylic acid hydrogel modified with 2-hydroxyethyl methacrylate, using various optimization tools such as a quadratic regression model, artificial neural networks, and multivariate spline interpolation; they worked on multiple variables to correctly predict the characteristics of the developed hydrogel dermatological patches. El Menshawe and coauthors [28] devised, optimized, and evaluated a transdermal gel of terbutaline sulphate (TBN)-loaded bilosomes (BLS) compared with traditional oral TBN and free TBN-loaded transdermal gel to prevent hepatic first-pass metabolism; the pharmacokinetic analysis found that the enhanced TBN-CTS-BLS formulation strengthened TBN bioavailability by approximately 2.33-fold relative to the oral solution and increased t1/2 to approximately 6.21 ± 0.24 h. These results support the prospective use of BLS as an active and safe transdermal carrier of TBN in the treatment of asthma. Tergic and coauthors [65] studied various data approaches to determine the suitability of passive absorption for transdermal application; ANNs optimized various physicochemical, pharmacokinetic, and biological characteristics, proving prolific for transdermal applications, and the study concluded that ANNs are the best method for handling stochastic problems in heuristic ways.
ANNs in the development of microemulsion formulations
Fatemi and coauthors [66] used ANNs to model and predict microemulsion electrokinetic chromatography; after training, the ANNs optimized the different parameters, and comparative examinations against the experimental values and the obtained regression models demonstrated the optimization effectiveness of the ANN model, which proved superior to the regression models. Glass and coauthors [67] considered formulation details of fixed-dose combinations of rifampicin, isoniazid, and pyrazinamide in microemulsions; the ANN model, with stability and sensitivity analysis, was applied to advance the selection of formulations, and the investigation concluded that the model improves the selection of solvents, solubilizing agents, and surfactants prior to formulating the microemulsion, advancing the diverse qualities that help limit the experiments and decrease the costs of pre-formulation examinations. Richardson and coauthors [68] affirmed that the phase behaviour of microemulsion systems can be envisioned using an ANN model; the model used backpropagation and feed-forward networks to predict the phase behaviour, and the study concluded that it is a valuable tool for the development of microemulsion drug delivery systems. Kustrin and coauthors [4] worked on a combination oral delivery formulation of rifampicin and isoniazid using ANN methodology; the ANN model showed its potential by varying the different ingredients that determine the stability of formulations, and the observed phase behavior was studied with a radial-basis-function network, which identified the experimentally most successful model as a percentage of success. Gasperlin and coauthors [69] aimed to predict the internal structure of microemulsions using ANNs combined with a genetic algorithm; the ANN model with the combined algorithm maintained composition stability factors, and the findings showed that ANN models with a genetic algorithm reduce research and development costs for characterizing microemulsion properties, offering another route to colloidal drug delivery systems. Klampfl and coauthors [70] worked on suntan lotion preparations, focusing on separating UV filters by applying microemulsion electrokinetic chromatography; the ANN model was used to optimize the best possible composition of microemulsions with the structured analytes with respect to separation factors. Djekic and coauthors [71] studied the phase boundaries of microemulsions utilizing ANNs; this model highlighted different methodologies that are productive instruments for determining the qualities of microemulsion phase boundaries and the most relevant components, decreasing the experimental cost. Ferrer and coauthors [72] made an effort to evaluate the percolation temperature of different AOT microemulsions in the presence of various additives (crown ethers, glymes, and polyethylene glycols) developed in the laboratory; three predictive models based on artificial neural networks were presented, and it can be inferred that the models demonstrated strong percolation-temperature predictive ability. Nevertheless, the modification obtained for the crown-ether model suggests that learning new input variables, increasing the number of instances, and using other training algorithms and methods would be convenient.
Advantages of ANN in pharmaceutical product development:
1. ANNs provided a useful tool for the development of microemulsion-based drug-delivery systems in which experimental effort was minimised. ANNs were used to predict the phase behaviour of quaternary microemulsion-forming systems consisting of oil, water, and two surfactants.
2. ANNs can identify and learn correlative patterns between input and output data pairs. Once trained, they may be used to predict outputs from new sets of data. One of the most useful properties of artificial neural networks is their ability to generalise. These features make them suitable for solving optimization problems in pharmaceutical formulation development (see the sketch after this list).
3. ANN models showed better fitting and predicting abilities in the development of solid dosage forms in investigations of the effects of several factors (such as formulation and compression parameters) on tablet properties (such as dissolution).
4. ANN models were used to determine the structure–activity relationships of compounds.
5. ANNs were used to detect microbiological activity in a group of heterogeneous compounds.
6. Neural networks produced useful models of aqueous solubility within series of structurally related drugs using simple structural parameters; topological descriptors were used to link the structures of compounds with their aqueous solubility.
7. A three-layer feed-forward neural network has been developed for the prediction of human intestinal absorption (HIA%) of drug compounds from their molecular structure.
8. A four-layer genetic neural network (GNN) model was used to predict the degree of drug transfer into breast milk.
9. ANNs could accurately predict PD profiles without requiring any information regarding the active metabolite, since structural details are not required.
10. ANNs are widely used for medical applications in various disciplines of medicine, especially cardiology; they have been extensively applied in diagnosis, electronic signal analysis, medical image analysis, and radiology.
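As a sketch of advantages 2 and 3 — learning an input–output pattern from formulation data and generalising to unseen sets — the code below trains a small network with scikit-learn's MLPRegressor on synthetic data; the relation between compression force, binder level, disintegrant level, and dissolution is invented purely for illustration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

# Synthetic formulation data: [compression force, binder, disintegrant]
# mapped to a hypothetical dissolution response with noise.
rng = np.random.default_rng(42)
X = rng.uniform(0, 1, size=(200, 3))
y = 80 - 25 * X[:, 0] - 15 * X[:, 1] + 20 * X[:, 2] + rng.normal(0, 2, 200)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)
print(f"R^2 on unseen formulations: {model.score(X_te, y_te):.2f}")
```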
Fuzzy logic in the development of tablets
Rebouh and coauthors [73] used ANFIS to study ibuprofen release with different cellulose derivatives (CMC, HPC, HPMC, and MC) and checked the release rate using a fuzzy logic system; the results showed effective prediction capability for designing and testing new formulations. Mesut and coauthors [74] designed alfuzosin HCl tablets using neuro-fuzzy system software, considering the polymer type and concentration, compression force, and lubricant concentration used in tablet formation; the results showed good compatibility between the inputs and the outputs obtained from the system. Tan and coauthors [75] formulated clopidogrel bisulphate as a controlled-release tablet to moderate its bioavailability; the drug release was examined with computer programs (INForm v.3.7 and FormRules), and the results showed that the controlled-release tablet was more efficient for an extended period of use. Wafa and coauthors [76] proposed a new method combining a particle swarm optimization algorithm with a fuzzy logic scheme to introduce a paradigm that systematically decides the right-first-time output of granules and tablets; through this control technique, the optimal operating conditions to manufacture the necessary granules and tablets can be established and the waste and recycling ratios reduced, and both systems were successfully validated in actual laboratory-scale tests involving measurement tolerances. Hyseni and coauthors [77] used PLGA to prepare monodisperse microspheres; the controlled release rate depends on drug porosity, which may yield zero-order release kinetics of the encapsulated molecules, and the results showed high efficacy when the release converted from tri-phasic to continuous release of the drug in the body.
Fuzzy logic in the development of nanocarriers
Heidari and coauthors [78] used nanoparticles such as ZIKV nanocarriers to help radiation cross the blood-brain barrier for human cancer treatment. Fuzzy logic was used for nano-fullerene molecules under synchrotron radiation to control the production conditions applied to the process, such as surfactant condition and reaction temperature, without rudimentary bonding changes. Kazemipoor and coauthors [79] studied the anti-obesity effect of Carum carvi using an adaptive neuro-fuzzy inference system, and the experimental results were compared with support vector regression with the aid of the root-mean-square error (RMSE) and R². The results demonstrated that ANFIS helped improve forecasting precision. Kumar and coauthors [80] studied nanoparticles for their biocompatible, degradable, and stable nature; biopolymeric (polysaccharide/protein) carriers have revolutionized the drug delivery environment. They also discussed the existing difficulties in delivering insulin. These polysaccharide-based insulin nanocarriers can be used for selective delivery of insulin with greater bioavailability, non-toxicity, and efficacy by combining fuzzy logic with insulin pump technology.
Fuzzy logic in the development of suspensions
Jara and coauthors [81] produced uniform polymeric nanoparticles by nano-precipitation, used for filtration and other separation applications. Using fuzzy logic, a uniform, mono-modal yield can be produced with polymethacrylate derivatives. Through these results, uniform nanoparticles can be achieved, offering an optimal production method with biological potential for drug delivery systems. Kisi and coauthors [82] investigated the accuracy of adaptive neuro-fuzzy computing by collecting monthly suspended-sediment data from assorted sites and comparing fuzzy models against ANN models and sediment rating curves using the root mean square error, mean square error, and coefficient of correlation. The results showed that fuzzy models can successfully estimate monthly suspended sediment. Azadeh and coauthors [83] applied adaptive neuro-fuzzy and partial least squares methods to a controlled drying process, predicting the particle size of granules and producing a non-linear model. The results can be used to predict drying processes using ANN, ANFIS and PLS formulations, which were easy to apply. Arab and coauthors [84] used a computational approach based on a fuzzy inference system (FIS) for peptic ulcer treatment; the efficiency of the FIS was assessed with a ROC curve, giving 90% accuracy for FCM and 85% for ANFIS when comparing the two methods. Such a fuzzy expert system could theoretically improve the accuracy and efficacy of diagnostic procedures for peptic ulcer disease, moving towards more precise medicine and care.
Fuzzy logic in the development of emulsions
Fingas and coauthors [85] formulated w/o emulsions using adaptive neuro-fuzzy approaches. Factors such as viscosity, density, and resin content affect water-in-crude-oil emulsions, and most regression models cannot capture such non-linear relations. The results demonstrated that ANFIS can be used to anticipate the stability of w/o emulsions from the SARA fractions (saturates, aromatics, resins, and asphaltenes). A neural network and a neuro-fuzzy model were also applied to ketoprofen solid dispersions (SD) and physical mixtures (PM), with the neuro-fuzzy model used as an input to qualitative and quantitative examination of the SD and PM; the results showed a transformation in the neural model that improved performance in the sensitivity analysis. Lu and coauthors [86] employed AlexNet to automatically identify, quantify and classify three different mechanisms of emulsions. Knowledge entropy determined the degree of disorder in the feature images of each mechanism, and the highest activations showed that the proposed networks learn suitable characteristics. These observations thus lead, from the viewpoint of deep learning, to a greater understanding of emulsion physics. Hussain and coauthors [58] used ultrasonic power extensively in the preparation of stable o/w emulsions. ANFIS modelling was applied, and specialized emulsion properties were identified from the results, which showed that the droplet size range decreased with increased sonication time; various gums such as pectin and xanthan were used to enhance the stability of the emulsion.
Fuzzy logic in developing novel drug delivery systems
Fatouros and coauthors [87] used a dynamic lipolysis model and a neuro-fuzzy network to examine the in vitro-in vivo correlation (IVIVC). An oil solution and two self-micro/nano-emulsifying drug delivery systems were tested against a liposomal model, and the results were compared; the oil solution showed a less extended release rate than the SMEDDS and SNEDDS. The outcomes were thus useful for anticipating IVIVC with a dynamic lipolysis model. Azar and coauthors [88] used an adaptive neuro-fuzzy system to predict post-dialysis urea rebound using 30-60 samples; accuracy was compared in predicting equilibrated urea (Ceq). The results were highly promising for the comparison of urea kinetic models. Amuthameena and coauthors [89] used a proportional-integral-derivative fuzzy logic controller (PID-FLC) to estimate errors between the setpoint and the measured variables. The fuzzy-controlled system takes inputs between 0 and 1 and was compared against linear quadratic regulation; PID-FLC was observed to be more robust for daily insulin delivery. Alvarez and coauthors [90] used fuzzy logic in the production of isomerized hop pellets, whose isomerization requires heating while keeping the pellets stable at 50 °C; using a fuzzy controller, a high yield of product was robustly delivered. Arauzo-Bravo and coauthors [91] used adaptive neuro-fuzzy systems in penicillin production, using a soft sensor for internal model controllers (IMC) with different modules; the results showed that good accuracy and high penicillin production could be achieved. Wan and coauthors [92] studied supramolecular nanocapsules with highly specific molecular recognition. These capsules were derived from hyperbranched polyethylenimine (HPEI) as selective hosts, with a fuzzy mechanism used to promote specific molecular interactions; the results demonstrated readily defined macromolecules and the potential of highly specific hosts with the fuzzy mechanism. Karar and coauthors [93] introduced a new closed-loop fuzzy logic controller for regulating intravenous delivery of anti-cancer drugs; the controller was based on intuitionistic fuzzy sets and invasive weed optimization algorithms. Shahidi and coauthors [94] performed an extraction of polyphenols from flixweed seeds using a fuzzy-logic-based method, with extraction yield, time, and total phenolic content as the selected fuzzy logic inputs.
Advantages of fuzzy logic
1. It is a robust system in which no precise inputs are required.
2. These systems can accommodate several types of inputs, including vague, distorted or imprecise data.
3. If a feedback sensor stops working, the system can be reprogrammed according to the situation.
4. Fuzzy logic algorithms can be coded using less data, so they do not occupy a huge memory space.
5. As it resembles human reasoning, these systems can solve complex problems where ambiguous inputs are available and take decisions accordingly.
6. These systems are flexible, and the rules can be modified.
7. The systems have a simple structure and can be constructed easily.
8. System costs can be reduced, as inexpensive sensors can be accommodated.
9. In medicine, fuzzy logic is used to control arterial pressure during anaesthesia, in diagnostic radiology and diagnostic support systems, and in the diagnosis of prostate cancer and diabetes.
10. Neuro-fuzzy models have also proved a useful alternative IVIVR tool for drugs with complicated pharmacokinetics, where the relations between input and output variables are complex and nonlinear and our mathematical understanding of the system is incomplete.
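To make points 1, 2 and 6 concrete, the sketch below hand-codes a two-rule Mamdani-style inference step. The membership functions, rule base, and variable names are invented for illustration only, not taken from any study cited above.

```python
# Minimal Mamdani-style fuzzy inference: map an imprecise polymer
# concentration (%) to a predicted drug-release rate. Membership
# functions and rules are illustrative assumptions only.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def release_rate(polymer_pct):
    """Crisp release estimate for polymer concentrations roughly in 0-50%."""
    u = np.linspace(0.0, 100.0, 1001)               # output universe: % released
    low = tri(polymer_pct, -20.0, 0.0, 20.0)        # degree "polymer is low"
    high = tri(polymer_pct, 10.0, 30.0, 50.0)       # degree "polymer is high"
    # Rule 1: IF polymer low THEN release fast; Rule 2: IF polymer high THEN release slow.
    fast = np.minimum(low, tri(u, 60.0, 80.0, 100.0))   # clipped consequents
    slow = np.minimum(high, tri(u, 0.0, 20.0, 40.0))
    agg = np.maximum(fast, slow)                    # aggregate the rule outputs
    return float(np.sum(u * agg) / np.sum(agg))     # centroid defuzzification

print(release_rate(5.0), release_rate(25.0))        # less polymer -> faster release
```

Modifying the rule base (point 6) amounts to editing the two rule lines; imprecise inputs (points 1-2) are absorbed by the overlapping membership functions.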
CONCLUSION
It has been shown that ANN and neuro-fuzzy-based computational techniques are powerful tools in pharmaceutical product development. Essential basic knowledge of these techniques is required before their implementation in pharmaceutical processes. In pre-formulation studies, predictive analysis can be performed with these techniques. In pharmaceutical product development activities such as the optimization of various processes and parameters, the relationship between dependent and independent variables, the generation of responses, and compatibility studies, ANN and neuro-fuzzy-based models can be applied. The generation of understandable and reusable knowledge demonstrates the value of the information extracted using ANN and neuro-fuzzy-based computational techniques.
Figure 2. A three-layered artificial neural network model with a single hidden layer.
Figure 3. Multilayer perceptron model with one hidden layer.
Figure 4. Applications of ANNs and Fuzzy logics in the pharmaceutical field.
|
2023-07-12T06:12:04.027Z
|
2023-07-03T00:00:00.000
|
{
"year": 2023,
"sha1": "33b828258e9a88307c03cad4ab33f24a95ebf751",
"oa_license": "CCBYNC",
"oa_url": "https://www.scielo.br/j/babt/a/yRsBXg4ns5bJqSJGYGLkpxn/?format=pdf&lang=en",
"oa_status": "GOLD",
"pdf_src": "Anansi",
"pdf_hash": "d88c72fe6e25c451d456756c0764ba52c8c10e79",
"s2fieldsofstudy": [
"Medicine",
"Computer Science"
],
"extfieldsofstudy": []
}
|
10956246
|
pes2o/s2orc
|
v3-fos-license
|
Hodge gauge fixing in three dimensions
A progress report on experiences with a gauge fixing method proposed in LATTICE 94 is presented. In this algorithm, an SU(N) operator is diagonalized at each site, followed by gauge fixing the diagonal (Cartan) part of the links to Coulomb gauge using the residual abelian freedom. The Cartan sector of the link field is separated into the physical gauge field $\alpha^{(f)}_\mu$ responsible for producing $f^{\rm Cartan}_{\mu\nu}$, the pure gauge part, lattice artifacts, and zero modes. The gauge transformation to the physical gauge field $\alpha^{(f)}_\mu$ is then constructed and performed. Compactness of the lattice fields entails issues related to monopoles and zero modes which are addressed.
The Method
While gauge fixing is a central tool of lattice simulations, the effect of lattice artifact "Gribov copies" remains a delicate issue, particularly in chiral fermion models, which often rely essentially on gauge fixing.
In [1] we proposed a method of gauge fixing to a 't Hooft-like gauge, which we called gauge fixing by Hodge decomposition. We work here on a spatial torus in 3 dimensions; thus this presentation is for Coulomb gauge (the algorithm is applied in parallel on each time slice). Landau gauge generalizes similarly. The method is as follows.
• Diagonalize some operator which transforms adjointly, $O'_x = G^\dagger_x O_x G_x$, at each site. (The operator used here is the spatial sum of plaquette clovers and their adjoints at each site.)
• Define an Abelian $\alpha_\mu$ field from the links.
For $O \in SU(2)$ or $su(2)$, only one such gauge transformation is required. For $SU(3)$ operators, about 10 iterations are required to get the off-diagonal elements to $|a_{ij}| < 10^{-7}$.
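A minimal per-site sketch of the first step, under the assumption that the clover sum has been Hermitized so that a single exact diagonalization replaces the iteration mentioned above; the helper names are ours.

```python
# Sketch: diagonalize an adjointly transforming operator at one site,
# O'_x = G_x^dagger O_x G_x. Assumes O_x has been Hermitized
# (e.g., clover sum plus its adjoint), so numpy's eigh applies.
import numpy as np

def site_gauge_matrix(O_x):
    """Return a unitary G so that G^dagger @ O_x @ G is diagonal."""
    _, G = np.linalg.eigh(O_x)              # columns of G are eigenvectors
    return G

def cartan_angles(link):
    """Angles beta_i of diagonal elements e^{i beta_i} of an SU(3) link
    (used for the residual Abelian fields in the next section)."""
    return np.angle(np.diag(link))

# Toy example: random Hermitian 3x3 "clover sum".
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
O_x = A + A.conj().T                        # Hermitize
G = site_gauge_matrix(O_x)
print(np.round(G.conj().T @ O_x @ G, 10))   # diagonal up to rounding
```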
The Residual Abelian Fields
We must define residual $U(1)$ fields, which represent the part of $A^a_\mu$ in the Cartan subgroup in the continuum limit. For $SU(2)$ we use the phase of the diagonal link element; for $SU(3)$, where a link has diagonal elements $\{e^{i\beta_1}, e^{i\beta_2}, e^{i\beta_3}\}$, the angles $\beta_i$ are suitable Cartan fields.
Hodge Decomposition of $\alpha_\mu$
Any lattice vector field $\alpha_\mu(x)$ can be uniquely decomposed into the following parts:
• $\alpha^{(f)}_\mu$ is the physical part of the field and solely responsible for producing $f_{\mu\nu}$.
• $H_\mu$ is the lattice harmonic part responsible for Dirac string loops (and is the major source of Gribov copies).
• $w_\mu$ is the continuum harmonic part for a torus, i.e. a constant.
The decomposition is formal at this stage, but we remark that much of the work in solving for the lattice $\alpha^{(f)}_\mu$ is in identifying the necessary parts of $H_\mu$ which must be kept.
Solving for $\alpha^{(f)}_\mu$
The defining relation for $\alpha^{(f)}_\mu$ can be solved directly or in Fourier space, where $\hat k_\mu$ is the lattice momentum $2\sin(\pi a n_\mu/L_\mu)$. It is thus very easy to find $\alpha^{(f)}_\mu$ from $f_{\mu\nu}$ by FFT; however, we must build the correct $f_{\mu\nu}$ in stages.
Since we want $\alpha^{(f)}_\mu$ to minimally reproduce the plaquette angles $f^{\rm plq}_{\mu\nu}$:
• We first set $f_{\mu\nu} = f^{\rm plq}_{\mu\nu}$.
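As a concrete illustration of the Fourier-space solve, the sketch below recovers $\alpha^{(f)}_\mu$ from a given $f_{\mu\nu}$ on a 2D periodic slice. The signs, the use of forward differences, and the zero-mode handling are our own conventions; $|D_\mu|^2$ reduces to the lattice momentum squared, $(2\sin(\pi n_\mu/L))^2$, quoted above.

```python
# Sketch: solve for alpha^(f)_mu from f_{mu nu} by FFT on one 2D periodic
# slice; difference conventions are our own assumption.
import numpy as np

L = 16
rng = np.random.default_rng(2)
alpha = rng.normal(size=(2, L, L))            # test field; direction mu = array axis mu

def fwd(a, mu):                               # forward lattice difference
    return np.roll(a, -1, axis=mu) - a

f01 = fwd(alpha[1], 0) - fwd(alpha[0], 1)     # f_{01} = D_0 alpha_1 - D_1 alpha_0

n0, n1 = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")
D0 = np.exp(2j * np.pi * n0 / L) - 1          # Fourier symbol of the forward difference
D1 = np.exp(2j * np.pi * n1 / L) - 1
k2 = np.abs(D0) ** 2 + np.abs(D1) ** 2        # = sum_mu (2 sin(pi n_mu / L))^2
k2[0, 0] = 1.0                                # harmless: F vanishes at k = 0

F = np.fft.fft2(f01)
a0 = np.fft.ifft2(-np.conj(D1) * F / k2).real  # transverse (Landau-type) alpha^(f)_0
a1 = np.fft.ifft2(np.conj(D0) * F / k2).real   # transverse (Landau-type) alpha^(f)_1

# The w_mu zero mode is not fixed here; it is added afterwards (Sect. 3.2).
print(np.max(np.abs(fwd(a1, 0) - fwd(a0, 1) - f01)))  # ~1e-15: f reproduced
```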
Monopoles
For compact gauge fields, $\alpha^{(f)}_\mu$ must also reproduce the gauge-invariant monopoles, which are the ends of open lengths of Dirac string; thus we need to find all monopole sites on the dual lattice. Since the Dirac strings connecting these monopoles are gauge variant, we define a monopole-anti-monopole pairing which minimizes the string length between pairs. This can be done by simulated annealing, for instance, but since this pairing is not unique, this step is a source of Gribov ambiguity. We could also connect pairs in the order of finding them (violating rotational invariance), which is however fast and unique. Then:
• We next set $f_{\mu\nu} = f^{\rm plq}_{\mu\nu} + f^{\rm mnple}_{\mu\nu}$; in other words, for each plaquette pierced by a Dirac string (as given by our minimal monopole pairing), we add $\pm 2\pi$ to $f_{\mu\nu}$.
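A sketch of one simple fast heuristic for the pairing step (greedy nearest-neighbour matching with taxicab torus distances); the coordinates and the distance convention are our own illustrative choices, and this is neither the annealing nor the order-of-discovery scheme in its exact form.

```python
# Sketch: greedy monopole/anti-monopole pairing on a periodic lattice,
# a fast (non-unique) alternative to simulated annealing.
import numpy as np

def torus_dist(p, q, L):
    d = np.abs(np.array(p) - np.array(q))
    return int(np.sum(np.minimum(d, L - d)))   # taxicab length of the Dirac string

def greedy_pairing(monopoles, antimonopoles, L):
    pairs, anti = [], list(antimonopoles)
    for m in monopoles:                        # connect each monopole to the
        j = min(range(len(anti)), key=lambda i: torus_dist(m, anti[i], L))
        pairs.append((m, anti.pop(j)))         # nearest remaining anti-monopole
    return pairs

L = 8
mono = [(0, 1, 2), (4, 4, 4)]
anti = [(7, 1, 2), (4, 5, 4)]
print(greedy_pairing(mono, anti, L))           # first pair wraps the torus: length 1
```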
3.1. Zero-modes of $f_{\mu\nu}$: globally wrapping string
Due to compactness again, $f_{\mu\nu}$ may have a zero-mode. This zero-mode can be viewed as a length of non-contractable Dirac string, i.e. one that stretches across the torus.
A non-vanishing zero-mode indicates the occurrence of $c_\rho$ global non-contractable Dirac strings in the $\rho$-direction in the initial configuration, which must be added to $f_{\mu\nu}$. We are at liberty to place these strings wherever we like, either randomly for pseudo translational invariance, or along some particular axis so that we know where they are. Along these strings we add $\pm 2\pi$ to the $f_{\mu\nu}$ perpendicular to the path, so that $f_{\mu\nu} = f^{\rm plq}_{\mu\nu} + f^{\rm mnple}_{\mu\nu} + f^{\rm string}_{\mu\nu}$. We are now ready to solve eq. 6, which will give the minimal $\alpha^{(f)}_\mu$ that reproduces the plaquette angles, monopoles, and global strings, and which is in Landau gauge.
3.2. One more zero-mode $w_\mu$
There is one more zero mode that is undetermined by eq. 6, which is responsible for producing the correct Wilson loops. After solving for $\alpha^{(f)}_\mu$ we must add to it the correct constant $w_\mu$ in order that Wilson loops are preserved mod $2\pi$. This is the zero-mode of the original $\alpha_\mu$ field, and does not contribute to $f_{\mu\nu}$.
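One way to detect $c_\rho$, under our own normalisation: the plaquette angles summed over one full plane perpendicular to the $\rho$-direction count $2\pi$ times the net number of strings piercing it.

```python
# Sketch: count global wrapping strings from the zero-mode of f_{mu nu}.
# Assumes the convention that one full perpendicular plane of plaquette
# angles sums to 2*pi times the net string number.
import numpy as np

def wrapping_count(f_plane):
    """f_plane: (L, L) array of f angles in one plane perpendicular to rho."""
    return int(np.rint(f_plane.sum() / (2.0 * np.pi)))
```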
The Gauge Transformation
Because of the zero-modes, the only way to find the gauge transformation taking us from $\alpha_\mu \longrightarrow \alpha^{(f)}_\mu + w_\mu$ is by constructing a tree, or in other words integrating the defining equation for the transformation site by site. Figure 2 shows three extremization gauge-fixed copies derived from the initial configuration in figure 1. In figure 3, the only two copies obtained by the Hodge method are displayed. The SU(2) starting configuration is relatively smooth though, generated at $\beta = 44$, followed by a random gauge transformation.
Figure 1. Initial gauge field configuration and string topology, on slice t = 0. The "sum of clovers" of an SU(2) field was first diagonalized according to the introduction, then a random gauge transformation was applied.
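A minimal sketch of the tree construction on one 2D slice, under our own conventions: the scalar gauge function phi is obtained by integrating the link differences outward from the origin along a breadth-first spanning tree. Consistency around closed loops holds only because the integrand is pure gauge up to the zero modes discussed above.

```python
# Sketch: integrate dalpha_mu = alpha_mu - alpha^(f)_mu - w_mu along a
# breadth-first spanning tree to obtain the abelian gauge function phi(x).
# 2D periodic slice; conventions are our own assumption.
from collections import deque
import numpy as np

def integrate_on_tree(dalpha, L):
    """dalpha[mu, x0, x1]: difference on the forward link in direction mu."""
    phi = np.full((L, L), np.nan)
    phi[0, 0] = 0.0                            # root of the tree
    queue = deque([(0, 0)])
    while queue:
        x0, x1 = queue.popleft()
        for mu, step in enumerate([(1, 0), (0, 1)]):
            y0, y1 = (x0 + step[0]) % L, (x1 + step[1]) % L
            if np.isnan(phi[y0, y1]):          # unvisited site: extend the tree
                phi[y0, y1] = phi[x0, x1] + dalpha[mu, x0, x1]
                queue.append((y0, y1))
    return phi

# Self-test: a pure-gauge dalpha is recovered up to a constant.
L = 4
rng = np.random.default_rng(5)
phi_true = rng.normal(size=(L, L))
dalpha = np.stack([np.roll(phi_true, -1, axis=0) - phi_true,
                   np.roll(phi_true, -1, axis=1) - phi_true])
phi = integrate_on_tree(dalpha, L)
print(np.max(np.abs((phi - phi[0, 0]) - (phi_true - phi_true[0, 0]))))  # ~0
```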
Conclusions
• The method seems to work reasonably well; it returns a uniquely gauge fixed configuration, up to the connectivity of monopole pairs. However, the Cartan sector of SU(N) fields at typical β values is fairly rough, and thus the monopole density is also relatively high.
• Closed (contractable) loops of Dirac string are removed, which are a primary source of Gribov copies in extremization methods.
• For high monopole densities, simulated annealing seems to give poorer connectivity for the large number of monopole pairs than extremization, i.e. extremization uses less string to connect monopoles.
|
2014-10-01T00:00:00.000Z
|
1996-08-23T00:00:00.000
|
{
"year": 1996,
"sha1": "4845eef7b1bc0231a3125376b57ebaefb0752420",
"oa_license": null,
"oa_url": "http://arxiv.org/pdf/hep-lat/9608126",
"oa_status": "GREEN",
"pdf_src": "Arxiv",
"pdf_hash": "4845eef7b1bc0231a3125376b57ebaefb0752420",
"s2fieldsofstudy": [
"Physics"
],
"extfieldsofstudy": [
"Physics"
]
}
|
250940376
|
pes2o/s2orc
|
v3-fos-license
|
The association of CD4 lymphocyte count with drug hypersensitivity reaction to highly active antiretroviral therapy, trimethoprim sulfamethoxazole, and antitubercular agents in human immunodeficiency virus patients
Background The introduction of highly active antiretroviral therapy (HAART) and antibiotic regimens for the treatment of human immunodeficiency virus (HIV) and its concomitant opportunistic infections, respectively, significantly improves the morbidity and mortality of infected patients. However, these drugs commonly cause drug hypersensitivity reactions (DHRs) in patients with acquired immunodeficiency syndrome. The proposed reasons are multifactorial and include immune hyperactivation, changes in drug metabolism, patient cytokine profiles, oxidative stress, genetic predisposition, and effects on the principal target in HIV patients, the CD4+ lymphocytes. Objective This study determined the association of CD4 count and DHRs to first-line HAART, trimethoprim sulfamethoxazole, and antitubercular agents among HIV patients. Methods This is a retrospective analytical study in which a review of charts was done. The demographic and clinical profiles were summarized with descriptive statistics such as mean and standard deviation for quantitative data and frequency and percentage for categorical data. Chi-square and Fisher exact tests were used to measure the degree of the relationship between CD4 count and DHRs. Results A total of 337 eligible patients were included. There was a 25% incidence of hypersensitivity reactions. However, the prevalence of DHRs across the different CD4 groups was not statistically significant (p = 0.167). Likewise, the study found no significant association between the CD4 count and DHRs to first-line HAART, trimethoprim sulfamethoxazole, and antitubercular agents (p = 0.311). The most common DHR was morbilliform rash, and nevirapine was the most reported antiretroviral drug causing DHRs. Conclusion There was no association between the CD4 count and DHRs to first-line HAART, trimethoprim sulfamethoxazole, and antitubercular agents. Hence, regardless of the baseline CD4 lymphocyte count, the physician should be vigilant in monitoring hypersensitivity reactions. Patient education on common DHRs is very important upon diagnosis of HIV and/or initiation of treatment.
INTRODUCTION
In the year 2020, there were 37.7 million people living with human immunodeficiency virus (PLHIV), with 1.5 million new HIV infections and 680,000 deaths from acquired immunodeficiency syndrome (AIDS)-related causes. Of these, 10.2 million people were not on HIV treatment [1]. In the Philippines, there were a total of 83,755 confirmed HIV-positive individuals; however, only 48,314 PLHIV were on antiretroviral therapy (ART) as of January 2021 [2].
The utilization of highly active antiretroviral therapy (HAART) has had a significant impact on the course and treatment of the disease and disease-related morbidity of HIV-infected patients. Its main goal is to provide and maintain viral load suppression to stop disease progression and, generally, to increase patients' life span and quality of life. The success of HAART in slowing the progression of HIV disease and in increasing the life expectancy of HIV patients is unfortunately accompanied by some significant downsides [3-9]. These disadvantages are principally related to a higher incidence of adverse drug reactions (ADRs), including drug hypersensitivity reactions (DHRs), which are more frequent in HIV patients than in the general population [8,10]. There has been a reported global incidence of 11% to 35.9% of ADRs to antiretroviral (ARV) drugs, and as high as 54% in the presence of opportunistic infections (OIs) such as tuberculosis, pneumocystis pneumonia (PCP) and toxoplasmosis [6-8]. A study by Davis and Shearer stated that the frequency of drug hypersensitivity among patients with HIV infection ranges from 3%-20% [9,11]. The reasons proposed are multifactorial, and include immune hyperactivation, changes in drug metabolism, patient cytokine profiles, oxidative stress and genetic predisposition [9]. Additionally, there is a reduction in the number of CD4+ lymphocytes (T helper/Th cells), the principal target in HIV patients, and interference in the homeostasis and function of other cells in the immune system. This will further lead to disruption of cellular and humoral immune functions, causing a wide clinical spectrum of diseases such as OIs, autoimmune reactions, and hypersensitivity reactions [8,10].
These hypersensitivity reactions in HIV-infected patients vary in severity and clinical manifestations [6-8]. They range from cutaneous reactions, liver injury, and anaphylaxis to drug-induced anemia, neutropenia, thrombocytopenia, and other systemic manifestations [7]. The determination of DHRs in HIV patients is indeed challenging since these patients are on multiple drug regimens that are used to prevent and/or treat OIs.
Previously published studies have suggested that a lower CD4 count is associated with an increased prevalence of toxicities amongst those on HAART, antituberculosis drugs, and trimethoprim sulfamethoxazole [12-17]. Other studies have not found any association [9,11,18]. If such an association exists, this could prompt patients and clinicians to exert more vigilance in detecting hypersensitivity reactions during treatment.
So far, there is a paucity of data in the Philippines on DHRs to ARV drugs and to commonly used concurrent medications such as trimethoprim sulfamethoxazole and antitubercular agents. The objective of this study was to determine the association of the CD4 lymphocyte count with the prevalence and severity of DHRs to the first-line ARV drugs tenofovir (TDF), lamivudine (3TC), efavirenz (EFV), zidovudine (AZT), and nevirapine (NVP), to trimethoprim sulfamethoxazole, and to antitubercular agents (isoniazid [INH], rifampicin [RIF], ethambutol [ETH], and pyrazinamide [PZA]). Demographic and clinical profiles were also compared among patients with different baseline CD4 counts. Findings of the study will greatly contribute to raising awareness, identification, and characterization of patients at risk, and to the early recognition of and education on the signs and symptoms of various hypersensitivity reactions, hence guiding patients to seek immediate treatment. Pertinent information or results may be used for treatment guideline reviews, pharmaceutical planning, and clinicians' decision-making prior to the initiation of medications.
Design
This was a single-center, retrospective observational study conducted from January 2012 to June 2018.
Setting and participants
The study was conducted at the HIV/AIDS Core Team (HACT) Clinic of Southern Philippines Medical Center (SPMC), Davao City, the Philippines. SPMC is a government primary treatment hub and referral center for HIV in the region, with a large number of HIV patients. All adult patients aged more than 18 years with confirmed HIV, including those with HIV-tuberculosis (TB) coinfection, were included in the study. This criterion was based on the fact that patients of that age can give a plausible report to health providers, unlike children, whose reports of hypersensitivity reaction(s) depend on their caregivers.
The following are the inclusion and exclusion criteria of the study:
Inclusion criteria
• Confirmation of HIV infection by enzyme-linked immunosorbent assay and/or Western blot.
• HIV-positive patients who had received first-line or alternative first-line fixed-dose combination ARV therapy, as recommended by the Department of Health (DOH), for at least 1 month.
• HIV-TB coinfected patients on antituberculosis medications such as INH, RIF, ETH, and PZA.
• Patients should have a baseline CD4 count, complete blood count, fasting blood sugar, triglyceride level, total cholesterol, alanine aminotransferase, creatinine, estimated glomerular filtration rate (EGFR) based on the chronic kidney disease-epidemiology collaboration equation, syphilis test, hepatitis B surface antigen, anti-hepatitis C virus, GeneXpert, sputum acid-fast bacilli, and chest x-ray prior to the initiation of ART.
Exclusion criteria
• Patients switched to other ARV drugs due to ARV drug resistance
• All HIV patients started on second-line ARV agents as initial treatment
Study process
The study protocol was approved by the Department of Health XI Cluster Ethics Review Committee (DOH XI CERC), Davao City, the Philippines (CERC No. P18033001). Prior to enrollment, the investigator discussed the following with the patient: DHRs to HAART, antitubercular agents, and cotrimoxazole; the role and importance of the CD4 lymphocyte count; and the intent of the study. Written informed consent was obtained from the patients prior to their inclusion in the study. The primary investigator utilized chart reviews of patients' records. A standardized data collection tool was used to record clinical information from the patients. All identifying data of the patients remained anonymous; an assigned HACT nurse prepared the charts to be reviewed, and the identifying data of the patients in the charts were covered prior to data gathering. The anonymization of the data was done without the supervision of the primary researcher to ensure confidentiality.
DHRs were identified based on patients' complaints and the symptoms and signs noted by the resident physician on the charts. DHRs due to ARV, cotrimoxazole and antitubercular agents were considered as such if they were absent prior to the initiation of the above-said drugs.
Variables
The independent variables were the patient's CD4 lymphocyte count, ARV drugs, antitubercular agents, and cotrimoxazole, and the demographic, clinical, and other biochemical parameters. The dependent variable was the patient's hypersensitivity reactions.
Sample size calculation
The sample size for this study was computed using the calculator at https://select-statistics.co.uk/calculators/sample-size-calculator-population-proportion/. The following assumptions were used in the calculation: (1) There were around 2,848 cases of confirmed HIV-AIDS enrolled in the SPMC HACT clinic from 1999 to June 30, 2018, and among these, the total number of currently alive PLHIV on ART was 2,491. (2) The rate of hypersensitivity reactions was assumed to be 50%. In a study by Davis and Shearer, the incidence of DHRs in HIV patients was reported as 3%-20% [9]; however, since there is a paucity of data on the incidence of DHRs in the Philippines, 50% was used. (3) The significance level of the test was 0.05. (4) The total sample size needed for this study was 337.
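For transparency, the calculation can be reproduced approximately with the standard finite-population formula for estimating a proportion; the online calculator's exact rounding may differ, which likely explains the small gap from 337.

```python
# Worked check of the sample-size calculation (finite-population-corrected
# formula for a proportion). The authors used an online calculator, so
# rounding conventions may differ slightly.
import math

N = 2848          # enrolled HIV/AIDS cases (population size)
p = 0.50          # assumed DHR rate
z = 1.959964      # two-sided 5% significance
e = 0.05          # margin of error

n0 = z**2 * p * (1 - p) / e**2        # infinite-population sample size (~384)
n = n0 / (1 + (n0 - 1) / N)           # finite-population correction
print(math.ceil(n))                   # ~339, close to the 337 reported
```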
Data handling and analysis
R software was used for the data analysis. Descriptive statistics such as mean, standard deviation, frequency, and percentages were used to summarize the demographic and clinical profiles of the respondents. The parametric analysis of variance test and the nonparametric Kruskal-Wallis test, and the parametric t test and its nonparametric counterpart, the Mann-Whitney U test, were used for the analysis of continuous and categorical data, respectively, to determine the differences between the groups of patients. All tests used the 5% level of significance; a p value of <0.05 was considered statistically significant. Means and standard deviations were also used to determine the temporal relationship of the time to onset of DHRs from the initiation of the drugs. To identify the association of CD4 count and hypersensitivity reactions in HIV patients, chi-square and Fisher exact tests were used to measure the degree of the relationship.
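A minimal sketch of the association tests named above, here in Python with scipy; the contingency counts are synthetic placeholders, not study data, and the 2x2 collapse for Fisher's exact test (which requires a 2x2 table) is our own illustrative choice.

```python
# Sketch: chi-square test on a CD4-group x DHR contingency table, plus
# Fisher's exact test on a 2x2 collapse. Counts are invented placeholders.
import numpy as np
from scipy.stats import chi2_contingency, fisher_exact

# rows: CD4 groups 1-5; columns: (DHR, no DHR)
table = np.array([[29, 84], [31, 85], [14, 68], [8, 62], [1, 25]])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi-square p = {p:.3f} (dof = {dof})")

low_vs_high = np.array([[60, 169], [23, 155]])   # CD4 <=200 vs >200, collapsed
odds, p_f = fisher_exact(low_vs_high)
print(f"Fisher exact p = {p_f:.3f}, OR = {odds:.2f}")
```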
RESULTS
A total of 337 patients were included in the study. The patients were grouped according to their baseline CD4 count: group 1 (CD4 <50 cells/mm³), group 2 (CD4 51-200 cells/mm³), and groups 3-5 covering progressively higher CD4 counts. Table 1 shows the comparison of the demographic, clinical profile, and biochemical findings of the patients included in the study.
Demographic, clinical profile, and biochemical findings
The mean age at diagnosis was 28 years (range, 18-56 years), and 322 patients (95.55%) were male. Only 11 female patients (3.26%) were included, and among these, 3 patients were pregnant. The mean CD4 count was 180 cells/mm³, with a median CD4 count of 99 cells/mm³.
The majority of the patients were symptomatic at the time of diagnosis (n = 225, 66.8%). Of these, 112 (96.6%) had CD4 counts of less than 50 cells/mm³, and 83 patients (71.6%) had CD4 counts of 51-200 cells/mm³. The incidence of tuberculosis coinfection was only 31% (n = 103), and it was most frequently noted in patients with CD4 counts of less than 50 cells/mm³. Other commonly reported concurrent infections were syphilis, hepatitis B, oral candidiasis, and PCP pneumonia. However, their occurrence showed no significant difference among the patients. The differences in the patients' hemoglobin, white blood cell, neutrophil and lymphocyte counts were statistically significant. However, hematocrit, monocytes, eosinophils, basophils, platelet count, and EGFR showed no significant difference across the different groups.
There were 107 subjects (32%) who had HIV-TB coinfection, and these patients were on HRZE treatment. Additionally, there were 201 patients (60%) who were on INH prophylaxis.
Cotrimoxazole, the mainstay of treatment for PCP pneumonia, is currently recommended by the World Health Organization to be initiated in PLHIV with a CD4 count of <200 cells/mm³, and is only discontinued once the CD4 count improves to >200 cells/mm³ on at least 2 CD4 count determinations and after 3 months of HAART [3]. In this study, there were 208 patients (62%) on cotrimoxazole for the treatment and/or prophylaxis of PCP pneumonia.
The mean CD4 counts of the patients before and after 6 months of ARV treatment are shown (Table 2).
It was noted that there was a statistically significant increase in the CD4 count of the patients after the different fixed-dose combination ARV drugs were initiated.
The incidence of DHRs to ARV, cotrimoxazole, and antituberculosis drugs was 25% (n = 83). As shown in Table 3, DHRs were observed across the different CD4 groups and were dominantly seen in patients with CD4 counts of less than 200 cells/mm³. Specifically, DHRs occurred in 35% of patients in group 1, 37% in group 2, 17% in group 3, 10% in group 4, and only 1% in group 5. There was no significant difference in the occurrence of hypersensitivity reactions among the patients across the different CD4 groups. Furthermore, there was no significant difference in the drugs that caused the hypersensitivity reactions across the different CD4 groups.
Morbilliform rash (47%) was noted to be the most common hypersensitivity reaction regardless of the CD4 count (Table 4). This was followed by hemolytic anemia (13%), erythema multiforme (8%), and lastly urticaria and thrombocytopenia with an incidence of 6% each. Other reported hypersensitivity reactions include angioedema, bronchospasm, anaphylaxis, granulocytopenia, dyslipidemia, hepatitis, gynecomastia, serum sickness, and Stevens-Johnson syndrome (SJS). These hypersensitivity reactions occurred across the CD4 groups but were dominantly present in patients with CD4 counts of less than 200 cells/mm³. There was no significant difference in the association of CD4 count and the occurrence of the hypersensitivity reactions.
The mean number of days from the intake of the antituberculosis drugs (INH, HRZE, PZA) and cotrimoxazole to the onset of DHRs was <10 days (Table 5). The ARV drugs NVP, EFV, 3TC, and AZT + 3TC + EFV had an average of 21-60 days to the onset of DHRs. Other fixed-dose combination ARVs, including 3TC + TDF + EFV and AZT + 3TC + NVP, had a longer time to onset of DHRs, with an average of 100-160 days.
Among the subjects who had DHRs, 17 patients needed hospitalization while 66 patients were managed in the outpatient department. All patients had improved upon discharge, and no mortalities were reported. Of the patients with DHRs, 72 patients had the drugs discontinued and were shifted to other ARV medications, while 11 patients continued the drug. Those patients whose ARVs were not shifted were given an individual preparation of the drug instead of the fixed-dose combination ARVs. After shifting the ARVs, only 2 patients had another DHR, specifically to EFV and the fixed-dose combination 3TC + TDF + EFV; both presented with morbilliform rash.
DISCUSSION
HIV patients show an increased incidence of drug eruptions when compared to non-HIV individuals [16,19]. Clinically, hypersensitivity reactions in the HIV population are similar to those in other patients, being generally manifested as a combination of fever, rash, and internal organ involvement within 6 weeks of drug initiation [20]. The pathophysiology of drug hypersensitivity in HIV patients is multifactorial and relates to changes in drug metabolism, dysregulation of the immune system (immune hyperactivation, patient cytokine profile), oxidative stress, genetic predisposition, and viral factors [13,21,22].
The main target of HIV infection is the CD4+ lymphocyte, a central regulator of the immune system. CD4+ lymphocytes comprise two types, T helper-1 (Th-1) and T helper-2 (Th-2), which are differentiated by the cytokines they release. Th-1 cells produce interferon gamma (IFN-γ) and interleukin (IL)-2, which are important mediators of the cellular immune response, while Th-2 cells produce IL-4, IL-5, IL-6 and IL-10, important mediators of the humoral immune response that help B lymphocytes produce antibodies [23]. There is immune dysregulation in HIV patients which plays a significant role in the progression of the disease: once infected by HIV, changes in cytokine profiles appear, specifically increasing the production of IL-4 and IL-5 and decreasing the production of IFN-γ. Early in the course of HIV infection, the cytokines produced by Th-1 and Th-2 are in balance. However, as the disease progresses, the production of cytokines by Th-2, such as IL-4, increases and the production of the IL-2 cytokine by Th-1 decreases. This shift causes an increased serum level of IgE associated with a reduction in the CD4+ T cell count (less than 200 cells/mm³) [23]. This condition leads to a loss of appropriate immune responses [24].
The pathway by which drugs are presented in vivo is still unclear. However, two hypotheses have been proposed: the hapten-dependent and hapten-independent (p-i, or pharmacologic interaction) pathways [13,19]. First, the hapten theory states that the culprit drugs or their reactive metabolites are chemically inert but become immunogenic through metabolism to reactive intermediates, which then covalently bind to, or haptenate, endogenous peptides, forming an antigenic hapten-carrier complex. The hapten-carrier complex is presented to the human leukocyte antigen (HLA) molecule and then recognized by the T-cell receptor (TCR), resulting in the induction of drug-specific cellular or humoral immune responses. The second theory, the hapten-independent or pharmacological interaction with immune receptor (p-i) concept, states that the parent drug itself may directly, reversibly, and noncovalently bind to the HLA and/or TCR protein and bypass the classic antigen-processing pathway in antigen-presenting cells [13,25]. This study revealed a higher incidence of DHRs to ARV, antituberculosis, and cotrimoxazole drugs, at 25% (n = 83), compared to prior studies [9,26]. On the other hand, subjects with HIV-TB coinfection had a 26.4% incidence of DHRs. This incidence was similar to a study done by Lehloenya et al., in which the frequency of DHRs, including those on antituberculosis drugs, had an estimated incidence of 27% [23,26]. In HIV patients, Vanker and Rhode [22] stated that DHRs occur 100 times more commonly than in the general population. In South, East, and Southeast Asia, the incidence of drug allergy/hypersensitivity among HIV patients ranges from 3%-20% [9]. Currently in the Philippines, there are no published data on the incidence of drug hypersensitivity.
It was observed in this study that DHRs occurred across the different CD4 groups, but the prevalence was not statistically significant (p = 0.104). This result may be explained by the small population of the study and the unequal number of patients across CD4 groups. At present, there is no study that compares the incidence of DHRs among HIV patients across different CD4 counts. However, in a study by Tatiparthi and Mamo (2015) [27] on the prevalence of ADRs and associated factors of ARV treatment in HIV patients at JUSH, most of the patients with CD4 counts between <200-400 cells/mm³ had a 60%-80% occurrence of adverse reactions. Moreover, a prospective study by Bhuvana et al. [28] on ADRs in HIV patients reported a 65.82% incidence of ADRs. However, these previous studies could not be used as comparisons since they generally included all ADRs and not merely hypersensitivity reactions.
The results of this study showed that NVP was the most common cause of DHRs, with an incidence of 35%, followed by AZT (17%), cotrimoxazole (13.5%), EFV (13%), and the fixed-dose combination of 3TC + TDF + EFV, comprising 8% of patients with DHRs. In addition, NVP commonly caused morbilliform rash, seen in 75% of cases (17 of 31). Other reported reactions to NVP included urticaria (4 of 31, 13%), erythema multiforme (3 of 31, 10%), serum sickness (3 of 31, 10%), gynecomastia (2 of 31, 6%), and 1 case each (3%) of SJS and thrombocytopenia. These NVP-associated reactions were observed in patients across the different CD4 groups, not only in patients with CD4 counts of less than 200 cells/mm³ but also in HIV patients with higher CD4 counts. In contrast, a study by Chaponda and Pirmohamed (2011) [29] on hypersensitivity reactions showed that NVP hypersensitivity reactions occurred in 17%-32% of patients, 13% of these being mild rashes. Severe cutaneous reactions such as drug rash with eosinophilia and systemic symptoms (DRESS) and SJS have also been reported in 0.37% of NVP-associated reactions, and these reactions were also noted to occur in patients with higher CD4 counts [13]. NVP-induced hypersensitivity was also found to occur in healthy patients receiving the drug for post-exposure prophylaxis [30].
AZT, the drug with the second most reported DHRs in this study, mainly caused anemia, in 67% (10 of 15) of patients with AZT-related reactions. The other DHRs included thrombocytopenia (4 of 15, 27%) and morbilliform rash (1 of 15, 6%). This finding was comparable to a study by Chowta et al. (2018) [31], in which 33% (26 of 79) had reactions to AZT; among these patients, 34% had anemia, 34% neutropenia, and 31% thrombocytopenia, although no cutaneous reactions were reported. The drop in hemoglobin reported in this study was not significant since the majority of the patients had a normal to high baseline hemoglobin; hence, the decrease was not large enough to cause overt anemia.
In this study, there were 12 patients (13.5%) who had hypersensitivity reactions to cotrimoxazole, all of whom had cutaneous DHRs. Of these, 8 of 12 patients were observed to have morbilliform rash. The other reported reactions included urticaria, erythema multiforme, bronchospasm, and anaphylaxis, with 1 reported case each. In a similar study by Yunihastuti (2014) [8], cutaneous drug reactions were commonly reported with cotrimoxazole, with maculopapular rash being the most reported cutaneous reaction. Other reactions vary from urticaria, eczematous and fixed drug eruptions, and erythema multiforme to severe cases of SJS and TEN [8]. These reactions were observed to occur within 7 days after the initiation of therapy. The incidence of DHRs to cotrimoxazole was in contrast with previous studies that reported a higher incidence of 40%-80%, compared to 3%-5% in healthy subjects [13,32].
Hypersensitivity reactions to the antituberculosis drugs INH, PZA, and RIF were rarely reported in this study. There were only 2 cases of RIF-related hypersensitivity reactions, 1 case of PZA-induced DHR, and 3 cases of INH-induced hypersensitivity reactions. These antituberculosis drugs commonly presented with morbilliform rash; INH caused anaphylaxis and bronchospasm in this study. Previous studies similarly reported that rash was the most common DHR among HIV patients on antituberculosis medications, occurring within 16 weeks of treatment [33-35]. A cross-sectional study in Kenya by Nunn et al. (1992) [36] reported cases of severe cutaneous reactions such as SJS. However, in this study, there was no reported SJS caused by any of the antituberculosis medications.
In this study, the most commonly reported DHRs were morbilliform rash (46.5%), mainly to NVP and EFV, followed by hemolytic anemia (12.5%) from AZT, then erythema multiforme (9.1%), and urticaria and thrombocytopenia with 6.82% each. These findings were in line with previous studies in which cutaneous drug reactions were the most common manifestations of drug hypersensitivity [13,15,16,31].
It was observed in this study that DHRs occurred in 37% of patients in group 2 (CD4 of 51-200 cells/mm³), followed by group 1 (35%), group 3 (17%), group 4 (8%), and group 5 (1%). However, this study found no association between CD4 count and hypersensitivity reactions to ARV, antituberculosis, and cotrimoxazole drugs (p = 0.311). Previous studies have reported that a reduction in CD4 count leads to immune dysregulation, predisposing patients with advanced HIV to DHRs [13,22,23], whereas others have not observed such a relationship [16,37,38]. The findings in this study were in contrast to some studies which found that those with a lower baseline CD4 count were more likely to experience DHRs [4,39].
Apart from the immunologic mechanisms, other risk factors have been identified that may predispose HIV patients to DHRs, which include chemical and drug-related factors, host-related factors, genetics, and concomitant infections [6,11]. Specifically, the chemical and drug administration factors that can predispose patients to DHRs include a large molecular mass, specific immunologic structural moieties, reactive metabolites, parenteral and topical administration, a longer duration of exposure, and frequent repetitive courses of therapy [40,41].
Host-related factors include gender and older age [6,37,40]. In this study, males principally had a higher incidence of DHRs compared to females. This may be explained by the increased incidence of HIV in homosexual males in the region, with predominantly male-to-male sexual transmission. On the contrary, a study by Srikanth et al. (2012) [4] observed that male gender was a risk factor.
The study of medical genetics in recent years has focused on HLA genotypes and their associations with severe drug hypersensitivity. The association of abacavir-induced hypersensitivity reaction with HLA-B*57:01 was first discovered in 2002. The positive predictive value of HLA-B*57:01 for abacavir rechallenge hypersensitivity reactions has been reported to be 55% in Caucasians [42,43]. NVP, meanwhile, has been associated with NVP-induced hypersensitivity or DRESS in patients with HLA-DRB1*01:01 in Western Australia, HLA-B*35:05 in Thailand, and HLA-Cw8 in Japan [25].
Additional risk factors include concomitant infections; hypersensitivity reactions may be induced by other pathogens, including Mycoplasma pneumoniae, or viral infections such as human herpesvirus-6 (HHV-6) reactivation in patients with DRESS/DIHS (drug-induced hypersensitivity syndrome). HHV-6 reactivation is found to increase T-cell activity after the initiation of the drug eruption and to induce the synthesis of proinflammatory cytokines, including tumor necrosis factor-α and IL-6, which may in turn modulate T-cell-mediated responses [40,44]. A study by Shiohara and Kano (2007) [45] on the associations between viral infections and drug rashes revealed that, aside from HHV-6 reactivation, other herpesviruses such as HHV-7, Epstein-Barr virus, and cytomegalovirus were also found to be coincident with clinical symptoms of DHRs. Chung et al. (2013) [46] also reported that a new variant of coxsackievirus A6, acting as the causative agent, can provide exogenous peptides for direct presentation and participate in HLA/drug/TCR interactions, thereby inducing widespread mucocutaneous blistering reactions mimicking the features of erythema multiforme major or severe cutaneous adverse reactions (SCAR). White et al. (2015) [47] recently proposed that some patients may acquire primary infections via HHVs or other pathogens that in turn induce drug hypersensitivity. The presence of HHV peptides in patients with high-risk HLA alleles may trigger the activation of cytotoxic T cells, thereby resulting in the development of SCAR [48-50]. The pathogenic factors underlying the unusual presentations of drug hypersensitivity related to viral infections need to be further investigated.
In conclusion, the prevalence of DHRs across the different CD4 groups was not statistically significant (p = 0.104). Also, the analyses found no significant association between the CD4 count and DHRs to ARV (NVP, 3TC, EFV, and the fixed-dose combinations 3TC + TDF + EFV, AZT + 3TC + NVP, and AZT + 3TC + EFV), antituberculosis (INH, RIF, PZA, and fixed-dose HRZE), and cotrimoxazole drugs. Regardless of the baseline CD4 of the patient, the physician should be vigilant in monitoring for hypersensitivity reactions. Patient education on common DHRs to these drugs is very important once the patient has been diagnosed with HIV/AIDS.
The institution had no uniform electronic access to both in- and out-patient records. The records that were gathered were based only on the outpatient records of the subjects. HIV patients with DHRs admitted to the hospital were not included because their records were not available in the outpatient clinic. In addition, only a small number of patients were included. All of these affected the estimate of the true incidence of DHRs in HIV patients, most specifically the data on DHRs to antituberculosis drugs. Furthermore, the accuracy of the data may have been affected since this was only a retrospective study based on chart review. This resulted in a disparity in the data and underreporting of the true incidence of DHRs in this institution. There were no published studies on the incidence of DHRs in Philippine databases such as HERDIN, the Philippine Journal of Internal Medicine, and Acta Medica Philippina. This paucity of data on the incidence of DHRs in the Philippines and in Asia limited the comparison of the results of this study with related studies in the region.
RECOMMENDATIONS
A similar prospective study of 6-18 months' duration, including both in- and out-patient registries, is recommended. Also, an ideal sample size should be computed based on the true incidence of DHRs to first-line HAART, trimethoprim sulfamethoxazole, and antitubercular agents.
|
2022-07-22T15:12:49.093Z
|
2022-07-01T00:00:00.000
|
{
"year": 2022,
"sha1": "6687343b5b0631ece9fd24c03bb7daa80d6408a4",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5415/apallergy.2022.12.e26",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "d3e02026ae3fe22352b2aec04b729ff1299b0f0e",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": []
}
|
55589182
|
pes2o/s2orc
|
v3-fos-license
|
Direct imaging of a massive dust cloud around R Coronae Borealis
We present recent polarimetric images of the highly variable star R CrB using ExPo and archival WFPC2 images from the HST. We observed R CrB during its current dramatic minimum, where it decreased by more than 9 mag due to the formation of an obscuring dust cloud. Since the dust cloud is only in the line-of-sight, it mimics a coronagraph, allowing the imaging of the star's circumstellar environment. Our polarimetric observations surprisingly show another scattering dust cloud at approximately 1.3" or 2000 AU from the star. We find that to obtain a decrease in the stellar light of 9 mag, with 30% of the light being reemitted at infrared wavelengths (from R CrB's SED), the grains in R CrB's circumstellar environment must have a very low albedo of approximately 0.07%. We show that the properties of the dust clouds formed around R CrB are best fitted using a combination of two distinct populations of grain sizes. The first are the extremely small 5 nm grains, formed in the low density continuous wind, and the second population of large grains (~0.14 μm) is found in the ejected dust clouds. The observed scattering cloud not only contains such large grains, but is exceptionally massive compared to the average cloud.
Introduction
R Coronae Borealis (R CrB) stars are one of the most enigmatic classes of variable stars. They frequently show irregular declines in visual brightness of up to 9 magnitudes due to the production of thick dust clouds. Such minima typically last for several months, with the exception of R CrB's current minimum, which has lasted for more than 3.5 years (since July 2007; AAVSO). The photometric minima, caused by the obscuration of starlight by a dust cloud, are characterised by a rapid decline in brightness and a gradual return to normal brightness levels. Colour variations are also present, with a reddening of the star during the decline, followed by the star becoming bluer during the phase of minimum brightness and then becoming redder during the rise to normal brightness levels. Variations in colour during a minimum have also been observed (e.g. Efimov 1988) and are attributed to pulsations, which are also present in R CrB's light curve during maximum light.
The colour variations during obscuration can be attributed to the presence of a cool dust shell, which has been confirmed by observations of a strong IR excess (Kilkenny & Whittet 1984; Walker 1986). The cool dust shell has a temperature of ≈500-1000 K, and L-band infrared photometric observations show that it still emits in the infrared even when the line-of-sight is obscured by a dust cloud (Feast 1986; Yudin et al. 2002). Feast (1979) notes that the L-band observations mirror the pulsation frequencies of the photosphere. The cool dust shell is assumed to be continuously replenished by randomly emitted dust clouds. The extinction of the ejected dust is different from that of the ISM, with a UV absorption peak at 2500 Å as opposed to 2175 Å. Laboratory measurements and theoretical models show that this is due to small glassy or amorphous carbon grains (i.e. soot) formed in a hydrogen-poor environment (Hecht et al. 1984).
The formation of dust clouds on the R CrB star RY Sgr has been observed by de Laverny & Mékarnia (2004) with NACO at the VLT. These observations show that the dust clouds are bright, contributing up to 2% of the stellar flux in the infrared (L-band, 4.05 µm), and are located in any direction at several hundred stellar radii from RY Sgr. de Laverny & Mékarnia (2004) also note that the dust clouds might be very dense and optically thick close to the star. Leão et al. (2007) used mid-infrared interferometric observations of RY Sgr, with VLTI/MIDI, to show that the central star is surrounded by a circumstellar envelope with one bright dust cloud at 100 stellar radii (30 AU). This is notably the closest observed dust cloud around an R CrB star. More recently, Bright et al. (2011) also used VLTI/MIDI to probe the circumstellar environments of RY Sgr, V CrA and V854 Cen at very small spatial scales (50 mas / 400 R⋆). They find that their observations are consistent with a scenario of random dust ejection around the star, which over time creates a halo of dust.
The dust formation mechanism in these stars is not well understood. However, Crause et al. (2007) show that there is a correlation between the onset of brightness declines and R CrB's pulsation period, linking the expulsion of dust clouds and mass loss to internal stellar processes. This has also been shown to occur on other R CrB-type stars such as V854 Cen (Lawson et al. 1992) and RY Sgr (Pugach 1977; Clayton et al. 2003). They also note that the P Cygni profile is present during both maximum light and brightness decline. For R CrB, the dust grains are assumed to be formed away from the star, at approximately 20 R*, because at the stellar surface the temperature greatly exceeds the grain condensation temperature for carbon (Feast 1986; Fadeyev 1988). The dust clouds are then driven away from the star by radiation pressure. An alternative model proposes that the dust is formed close to the star and moves quickly away due to radiation pressure (originally proposed by Payne-Gaposchkin 1963). However, the conditions close to the star are far from the thermodynamic equilibrium necessary to form dust grains, though as discussed by Clayton (1996) and Donn (1967), the condensation temperature of carbon in a hydrogen-deficient environment, such as on an R CrB star, is much higher than when hydrogen is present.
To further understand the circumstellar environment of R CrB we have used imaging polarimetry observations with ExPo, the EXtreme POlarimeter (Rodenhuis et al. 2012, in preparation), to detect scattered starlight during its current minimum. This method is advantageous because starlight becomes polarised when it is scattered by circumstellar dust; particularly during obscuration, this can enable the detection of close-in circumstellar dust. In this paper we combine images of the dust cloud in scattered light, using imaging polarimetry, with archival images taken by the Hubble Space Telescope (HST) to determine the properties of the grains in the obscuring and scattering dust clouds.
In this paper we first describe the observations of R CrB using ExPo and the HST in Sect. 2 and 3. In Sect. 4, we summarise the observational facts that we derive from the observations, which we use in Sect. 5 to model the properties of the dust grains. Our results are discussed in Sect. 6.
ExPo: Instrumental setup
The design of ExPo is based on the principle that light reflected by circumstellar material becomes linearly polarised and can be easily separated from the unpolarised light that originates from the central star. One of the design concepts of ExPo is the combination of fast modulation of polarisation states with dual-beam imaging. This setup significantly reduces the impact of flat field and seeing effects on the polarimetric sensitivity. ExPo is a regular visitor instrument at the 4.2m William Herschel Telescope. It has been designed and built at Utrecht University and has a 20"×20" field-of-view and a wavelength range of 500nm to 900nm. The fast modulation of polarisation states is achieved with the combination of a Ferroelectric Liquid Crystal (FLC), a cube beamsplitter and an EM-CCD camera. The FLC modulates the incoming light into two polarisation states separated by 90°, the "A" and "B" states. The beam splitter separates each of these frames into two beams, which are imaged on the EM-CCD as A_left and A_right, followed by B_left and B_right.
ExPo: Observations and data analysis
Observations of R CrB were secured at the 4.2m William Herschel Telescope on La Palma, as part of a larger observing run, from 22 to 27 May 2010. To obtain an image we take typically 4000 frames with an exposure time of 0.028s per FLC angle, i.e. 0°, 22.5°, 45° and 67.5°. These observations are summarised in Table 1. A set of flat fields is taken at the beginning and end of each night, and a set of darks is taken at the beginning of each observation.
All of the images are dark subtracted and corrected for flat fielding, bias and cosmic ray effects. Once the polarization images are free of instrumental effects, they are carefully aligned and averaged. Polarised images are obtained after applying a double-difference approach to the two beams and to the alternating "A" and "B" frames.
Special care was taken in aligning the R CrB images, since the polarised image shows one clear dust cloud rather than an extended polarised source such as a disk or a shell. To minimise guiding effects, the images were first aligned with a template. Secondly, to minimise intensity gradient effects, the images were aligned with an accuracy of a third of a pixel. This results in the cloud structure showing more detail than with other techniques, such as aligning the images with respect to the brightest speckle. Finally, the reduced images are calibrated using the method of Rodenhuis et al. (2012, in preparation) to produce Stokes Q and U images. The polarised intensity is defined as P_I = √(Q² + U²), the degree of polarisation as P = P_I/I, where I is the total intensity, and the polarisation angle P_Θ = 0.5 arctan(U/Q) defines the orientation of the polarisation plane. The data analysis is described in more detail by Canovas et al. (2011). The design of ExPo includes a polarisation compensator which reduces the instrumental polarisation of ∼3% to the order of 10−3; the residual is removed in the data analysis by assuming that the central star is unpolarised.

Figure 1. ExPo image of the circumstellar environment of R CrB in linear polarisation, taken at visible wavelengths (500 nm - 900 nm). The scattering dust cloud (Cloud S) is located to the south-east (left) of the image at 1.3" from the star. North is upwards and the scale of the image is in arcseconds.
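As a minimal illustration of the Stokes definitions above (our sketch, not the actual ExPo pipeline; it assumes already reduced and aligned Stokes images held in numpy arrays):

import numpy as np

def polarisation_maps(I, Q, U):
    # I, Q, U: 2-D arrays of the same shape (reduced, aligned Stokes images).
    # Returns polarised intensity, degree of polarisation and angle (degrees).
    P_I = np.hypot(Q, U)                                  # P_I = sqrt(Q^2 + U^2)
    P = np.divide(P_I, I, out=np.zeros_like(P_I), where=I > 0)
    P_theta = 0.5 * np.degrees(np.arctan2(U, Q))          # robust form of 0.5*arctan(U/Q)
    return P_I, P, P_theta

# Example with synthetic data:
rng = np.random.default_rng(0)
I = 1.0 + rng.random((64, 64))
Q, U = 0.01 * I, 0.02 * I
P_I, P, P_theta = polarisation_maps(I, Q, U)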
The image of R CrB in linear polarisation is shown in Figure 1. To our surprise, the image shows one clearly defined dust cloud, detected at 15σ, to the south-east (left) of the star, which itself appears as a speckled blob. The cloud to the left is the scattering cloud, Cloud S, which shows clearly defined polarisation vectors. The systematic/noise error on the orientation of these vectors is 10±2°. The black contour line at the centre of the image indicates the FWHM of the stellar PSF.
HST Observations
HST images of R CrB were retrieved from the MAST data archive. These images were secured on 14 April 2009 using HST/WFPC2 (Wide Field and Planetary Camera 2) imaging in broad-band V (F555W) and I (F814W) filters. These observations are summarised in Table 1. The individual exposures taken with each filter were aligned and combined with the STSDAS task gcombine with a cosmic ray rejection algorithm. There were 5 images for the V filter and 4 for the I filter, giving total exposure times of 487 s and 496 s, respectively. Only the Planetary Camera (PC) part of the images, where R CrB is centred, is used. The spatial scale for the PC is 0.046″ pixel−1 and its field of view is 37″ × 37″. The HST observations from April 2009 show an extended dust cloud (Cloud S) at the same location as in the ExPo images. The processed images for both filters are shown in Figure 2, magnified to match the field-of-view of ExPo.
Observational Facts
The magnitude of R CrB at the epochs of the ExPo and HST observations is shown in Table 2. At the time of observation R CrB was clearly in a minimum state, with M_V = 15.0, which started in July 2007. The overall reduction in flux, from M_V = 5.91 to 15.00, is approximately a factor of 4000. If the obscuring cloud acts like a natural coronagraph, completely obscuring the star, then the only contribution is from light scattered by the dust in the surrounding halo. To obtain a drop of ≈9 magnitudes, the scattered flux must be less than 1/4000 of the stellar flux.
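The quoted factor of ~4000 follows directly from the magnitude difference via the standard relation F_max/F_min = 10^(0.4 Δm) (a quick check, using the magnitudes quoted above):

m_max, m_min = 5.91, 15.00                 # V magnitudes at maximum and minimum
flux_ratio = 10 ** (0.4 * (m_min - m_max))
print(round(flux_ratio))                   # ~4300, i.e. approximately 4000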
Photometric variations
The photometric light curve of R CrB has been monitored for more than 150 years. During obscuration its light curve is characterised by a sharp decline followed by a gradual return to maximum light levels. There are also significant colour changes during obscuration. Plots of V versus B − V typically show (e.g. Figure 5 from Efimov 1988) a very steep decline at the onset of obscuration followed by reddening and then a shift towards very blue B−V values just before the lowest light level.
On the return to maximum light levels the light curve first reddens and then becomes bluer before reaching its normal light level. Colour declines that vary from the standard model will be discussed later in Sect. 5.
Observed dust cloud properties
The ExPo and HST observations, in scattered light, show that there are two dust clouds around R CrB. The first, Cloud O, is the obscuring cloud, which is responsible for the decrease in R CrB's brightness. The second, Cloud S, the scattering cloud, is detected by HST and, in linearly polarised light, by ExPo. Additionally, in this analysis we consider a third dust population in the halo surrounding the star, referred to in this work as the Halo dust. The locations of these dust populations are shown schematically in Figure 3.
Cloud S
The properties of Cloud S are determined via aperture photometry (with a radius of 0.47") of the ratio of the flux in Cloud S to the total flux of the star. From the ExPo observations, the ratio (total stellar intensity)/(polarised Cloud S intensity) = 235.
Similarly, for the HST images, the F555W filter gives a star/Cloud S ratio of 42 and the F814W filter a ratio of 56.
Halo dust
The Halo dust is defined as the halo of dust surrounding R CrB, whose presence is confirmed observationally by L-band fluxes that stay constant during obscuration. Previous measurements of the infrared IRAS fluxes of R CrB (Lambert et al. 2001) show that 30% of the stellar flux is reprocessed by absorption of stellar light by the halo grains and subsequent reemission at infrared wavelengths. In the optically thin approximation, the total amount of reprocessed light in the infrared is related to the absorption optical depth through the equation

f_IR = 1 − exp(−τ_abs),

where τ_abs is the mean optical depth for absorption averaged over the stellar spectrum, and f_IR is the fraction of the light reprocessed in the infrared. For R CrB, f_IR = 0.3 and consequently τ_abs = 0.36.
The fraction of light that is reprocessed by scattering, f_scatt, must be extremely small, as the integrated intensity of the system, which is the sum of the starlight and the scattered light, can drop 9.1 magnitudes in the visual when the star is obscured. The fraction of scattered light follows analogously:

f_scatt = 1 − exp(−τ_scatt) ≈ τ_scatt,

where τ_scatt is the scattering optical depth. The observed decrease requires f_scatt < 1/4000 (≈9 magnitudes), which means that τ_scatt < 2.5 · 10−4. The single scattering albedo of the grains must then obey

ω = τ_scatt / (τ_scatt + τ_abs) < 7 · 10−4.
Consequently, the albedo of the grains in the visual is extremely small, i.e. of the order of 0.07%.
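The chain of numbers above can be reproduced as follows (a short consistency check; exact values depend on rounding):

import numpy as np

f_IR = 0.3                                  # fraction reprocessed in the IR
tau_abs = -np.log(1.0 - f_IR)               # f_IR = 1 - exp(-tau_abs)  ->  0.36

f_scatt_max = 1.0 / 4000.0                  # bound from the ~9 mag drop
tau_scatt_max = -np.log(1.0 - f_scatt_max)  # ~ f_scatt for small tau  ->  2.5e-4

albedo_max = tau_scatt_max / (tau_scatt_max + tau_abs)
print(tau_abs, tau_scatt_max, albedo_max)   # 0.36, 2.5e-04, 7.0e-04 (0.07%)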
Cloud O
The properties of Cloud O are derived from the wavelength-dependent extinction as the brightness of R CrB decreases.
Models of Dust Clouds
In this section we use the observed properties of the dust clouds to deduce the properties of the dust grains in Cloud O, Cloud S and in the Halo dust surrounding R CrB. The dust grains are assumed to be composed of amorphous carbon, using the refractive index from Preibisch et al. (1993) and the particle shape model from Min et al. (2005). Cosmic dust grains are not perfect homogeneous spheres. When dust grains are modelled using Mie theory, i.e. assuming the grains are perfect homogeneous spheres, the scattering phase function and degree of polarisation show sharp resonances at certain scattering angles. Furthermore, the overall behaviour is very different from that of more irregularly shaped grains, and even the sign of the polarisation is often wrong (see e.g. Muñoz et al. 2004, Fig. 10). Exact computations of the scattering properties of realistically shaped grains are computationally demanding and beyond the scope of this paper. Fortunately, breaking the perfection of a homogeneous sphere in even the simplest way suppresses the resonance effects significantly, e.g. using the model by Min et al. (2005). For the computation of the optical properties we use this model for the grain shape with f_max = 0.8, representing rather irregularly shaped grains. This parameter is varied to estimate the error on the derived parameters. Throughout the analysis we fix the shape and composition of the dust grains and vary only their sizes. Since we consider irregularly shaped grains, we refer to the volume-equivalent radius.
Halo dust
To determine the grain properties of Cloud O, it is first necessary to disentangle the contribution from the surrounding Halo dust to the photometric light curve. As explained in Sect. 4.2.2, the albedo of a dust grain is very sensitive to its size, with small grains having a low albedo. Consequently, it is necessary to make the grains very small, i.e. smaller than 5 nm, to obtain an albedo of 0.07%. This implies that the grains in the surrounding halo are much smaller than those in Cloud S.
Fitting the SED
As previously discussed, to match a dust reprocessing of 30% and a flux decrease of a factor of 4000 at visible wavelengths, it is necessary to assume that these particles do not scatter very efficiently and consequently are very small. To model the SED of R CrB these grains are set to be 5 nm in radius, since R CrB's infrared excess is dominated by its Halo dust. For the star we assume a blackbody with T_eff = 6750 K and R∗ = 73.4 R⊙, at a distance of 1400 parsec (from Yudin et al. 2002; Asplund et al. 1997). A homogeneous, low-density dusty wind is included in the models as a steady outflow with a velocity of 200 km s−1 (Clayton et al. 2003). The IRAS fluxes from Walker (1985) are fitted by varying the mass loss rate. A dust mass loss rate of 7.5×10−9 M⊙ yr−1 provides the best fit, as shown in Figure 4. This result is similar to that obtained by Yudin et al. (2002), though they use a much slower wind (45 km s−1) and consequently derive a lower dust mass loss rate (3.1 × 10−9 M⊙ yr−1). The best fit model reproduces the global properties of the ISO spectrum of Walker et al. (1999).
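For orientation, the adopted blackbody parameters correspond to the following stellar luminosity (a back-of-the-envelope sketch; the SED fit itself requires a radiative transfer model):

T_sun = 5772.0                              # solar effective temperature [K]
T_eff, R_star = 6750.0, 73.4                # adopted T_eff [K] and radius [R_sun]
L_over_Lsun = R_star ** 2 * (T_eff / T_sun) ** 4
print(round(L_over_Lsun))                   # ~1e4 L_sun, of which ~30% is re-emitted in the IR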
Obscuring cloud
A characteristic component of the photometric light curve of R CrB is that there are significant colour changes during obscuration, which can be used to constrain the grain properties of the obscuring dust cloud.
Characteristic light curve properties
R CrB's light curve typically shows first a sharp decrease in its V magnitude while B − V remains constant. This is followed by a reddening of the star as the brightness declines further, and then by the star becoming bluer until it reaches its minimum. On the return to normal brightness levels the light curve first reddens before becoming bluer (see e.g. Efimov 1988, Figure 5, for the 1983/4 minimum). The colour variations are found to depend on the depth of the minimum: shallower minima show the characteristic drop in V at constant B − V and the subsequent red colour variations, but are not followed by the starlight becoming bluer before minimum. An example is the shallow minimum of January 1999, where M_V decreased to only 9.5. For deep obscurations the general behaviour begins with a sharp decrease in V while B − V remains constant, and it is only just before the minimum light level that the light curve becomes bluer, before returning to maximum brightness levels following the standard behaviour, i.e. first becoming redder and then bluer.
Models of colour variations
To understand how the sizes of the dust grains in an obscuring 'dust puff' cause the observed colour changes, we have modelled an obscuring dust cloud from formation to dissipation. The characteristic sharp drop in V magnitude, while B − V remains constant, indicates the formation of large grey grains while the dust cloud is still thick and dense. This means that the grain sizes are at least 0.2 µm, which is the smallest possible size that can give grey extinction. As the cloud expands, and consequently has a lower density, only small particles (i.e. 5 nm) can form, resulting in the onset of the reddening in the V versus B − V plot. When the star reaches minimum light, the cloud is then assumed to dissipate before R CrB returns to maximum light levels. The slope of the curve on return to maximum brightness is determined by the size distribution of the grains.
The obscuring dust cloud is modelled with an optical depth in large grains alone equal to 1, 3 and 10. The small grains are added with an optical depth ratio of 3 small grains for every large grain. This small:large ratio of 3:1 was chosen to roughly reproduce the light curve of Efimov (1988). Consequently, the total optical depths at minimum light are τ = 12, 4 and 40, shown respectively in the top, middle and lower panels of Figure 5. For the model that uses τ=4, the minimum brightness level only reaches M_V = 9.9 and is comparable to the obscuration in January 1999, where M_V only decreased to 9.5. As with all models, large particles are formed at the beginning, with a sharp vertical decrease in V versus B − V, and then small particles are formed, leading to a reddening of the curve. In this case, for low optical depth, there is no impact from the surrounding Halo dust cloud. For τ=12, the curve begins with the same characteristic behaviour but shows an onset of reddening by small grains beginning at ≈ M_V = 9, followed by a bluing due to scattering from the Halo dust at ≈ M_V = 12.5. The V magnitude then reaches a minimum of M_V = 14 before reddening significantly on its return to maximum brightness levels. For the model with the highest optical depth, i.e. τ=40, the minimum light can be reached by using only very large grains (Figure 5, lowest panel). As with the other models discussed here, the return to maximum brightness is characterised by a reddening. This is caused by the dispersal of the dust cloud, which at this time would also contain a significant fraction of very small grains.
Comparison to observed colour variations
The V versus B − V light curve of the current decline is shown in Figure 6. At minimum V magnitude there are many colour variations of up to B − V = 0.5, which is an intrinsic property of R CrB stars (see Sect. 1). Although the light curve is missing several data points at the start of the obscuration, the models of the colour variations are in general agreement with the observed colour variations. The best-fitting model is for τ=40, which is also plotted in Figure 6. To investigate the impact of multiple scattering we computed a full radiative transfer model for several points on the V versus B − V plot using MCMAX (Min et al. 2009). The results show that the inclusion of multiple scattering has the effect of making the initial sharp drop in V bluer. To fit the observations, it is necessary to compensate for this by assuming a smaller grain size of approximately 0.13 µm.
Scattering cloud
Observations in scattered light of Cloud S were taken by both HST and ExPo. In general, the colour of the scattered light is highly dependent on the scattering angle, as is the degree of polarisation. The light scattering properties of the dust in Cloud S are determined using the models of Min et al. (2005). To fit the observations it is necessary to find a grain size and a scattering angle that simultaneously match the polarised intensity of the ExPo images and the colour of the HST images.
The degree of polarisation is derived to be > 15% from the estimate of the polarised intensity in the ExPo image. As the ExPo image is not photometrically calibrated, we estimate the polarised intensity from the ratio between the total intensity of the star from the HST image and the polarised intensity of the cloud. In order to derive the properties of the cloud we computed the scattering properties of particles with sizes ranging from 5 nm up to 0.2 µm, with irregularity parameters f_max = 0.2 − 0.8. The irregularity parameter was varied to determine its impact on the derived particle size. Assuming a single size for the dust grains, we infer that the grains are ∼0.14±0.02 µm in diameter. The error on this value is derived from varying the irregularity parameter and shows that it has little impact on the derived particle size. A significantly smaller grain size results in a colour for Cloud S that is much bluer than in the HST images, while a larger grain size produces a result that is too grey. The scattering angle that matches all of the observations is about θ = 94 ± 33° (i.e. Cloud S is located approximately in the plane of the sky containing the star). From our models we can also derive the mass of the cloud, which depends on the scattering efficiency (i.e. particle grain size and scattering angle). We derive a dust mass of the cloud of 3 (+4/−1) · 10−4 M⊕ and an intrinsic degree of polarisation of the cloud of ∼ 20 ± 5%.
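The >15% degree of polarisation can be checked against the two flux ratios quoted earlier (a rough estimate; it neglects bandpass differences between ExPo and the F555W filter):

star_over_pol_cloud = 235.0     # ExPo: total star / polarised Cloud S intensity
star_over_cloud_V = 42.0        # HST F555W: total star / total Cloud S flux

P_cloud = star_over_cloud_V / star_over_pol_cloud
print(round(100 * P_cloud))     # ~18%, consistent with the derived >15%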
Summary of models
It is evident from the SED that there must be a significant dust mass surrounding R CrB, since roughly 30% of the light is reprocessed as thermal emission. The very low scattering efficiency of these grains, as inferred from R CrB's minimum brightness, requires that these grains are very small. In contrast, we find from the light scattered by the dust cloud seen in the HST and ExPo images that the grains in the dust clouds must be much larger, with a size of ∼0.14±0.02 µm. Also, the grey onset of the obscuration is evidence for the formation of very large grains when the dust clouds are dense.
Discussion
Grain sizes

Halo dust

The V-band magnitude of R CrB was 14.45 and 15.0, respectively, at the epochs of the ExPo and HST observations. Since R CrB is obscured by a dust cloud, this magnitude is assumed to be the brightness of the surrounding Halo dust. To simultaneously fit the visible magnitude decrease and the SED, it was necessary to use 5 nm grain sizes, since their albedo is very low, making them very inefficient at scattering light. A larger grain size would have a larger albedo, and it would not be possible to obtain the observed decrease in visual brightness. The 5 nm grain size is at the lower end of the MRN (Mathis, Rumpl, & Nordsieck; Mathis et al. 1977) size distribution. An extremely small particle size for the Halo dust of R CrB has also been found by Yudin et al. (2002). This result is consistent with the observations of V854 Cen by Whitney et al. (1992), who state that the scattered flux may arise in the same clouds contributing to the observed IR flux if the albedo is low.
However, Ohnaka et al. (2001), from interferometric observations, show that the visibility function and spectral energy distribution can be simultaneously fitted with a model of an optically thin dust shell at maximum light but not at minimum light. They attribute the discrepancy to thermal emission from a newly formed dust cloud, but the results from this paper and from Yudin et al. (2002) suggest that the discrepancy can be resolved by assuming very small particle sizes, which are very inefficient at scattering starlight.
Obscuring cloud
Currently, Cloud O totally obscures the star and we only see scattered light from the low-density halo, making it impossible to probe the cloud's full size distribution. As previously described, the best-fitting model to the observed colour changes is for τ=40, where total obscuration can be reached with only large grains; this model is overplotted in Figure 6.
Scattering cloud
Both the ExPo and HST images of the circumstellar environment of R CrB clearly show the presence of a large and extended dust cloud, Cloud S. The cloud is surprisingly elongated and could indicate how dust clouds interact with their surroundings. The constituent dust is inferred to be 0.14±0.02 µm in diameter, from combining the light scattering properties of the dust (Min et al. 2005) with the observed colour and degree of polarisation. The derived sizes of grains in Cloud O and Cloud S are in good agreement. Cloud S is significantly older than the recently ejected Cloud O, having been ejected at least 50 years ago, assuming a constant outflow speed of 200 km s−1 (see Sect. 6.3). This indicates that there is no significant grain evolution due to, for example, the high velocity wind that contains many small particles. Additionally, from the combination of the ExPo image with the two HST images, we infer that the scattering angle of the dust is approximately θ = 94 ± 33°, i.e. the scattering direction is almost perpendicular to the line of sight.
From our results we derive Halo dust grain sizes of 5 nm, Cloud S grain sizes of 0.14±0.02 µm and Cloud O grain sizes of 0.13 µm. To investigate the impact of using slightly different grain sizes, we added a size distribution of particles, modelled by delta functions at 5 nm and 0.2 µm. This changes the derived dust mass in Cloud S, since it is still necessary to include a large number of these very small grains. The derived mass of Cloud S clearly depends on the size of the constituent dust grains. For the case where the grain size is 0.14±0.02 µm, the dust cloud mass is 3 (+4/−1) · 10−4 M⊕. This is a minimum mass, as these grains are the most efficient at scattering while still fitting the colour information from the ExPo and HST images. The mass of Cloud S when composed of grains that follow the size distribution can only be greater than the mass derived for 0.14 µm grains, because this distribution contains many more grains that scatter less efficiently. However, the increased mass of the model using the size distribution is perhaps more realistic.
Why are there different grain sizes?
The different grain sizes in Cloud S and O and in the Halo dust could be explained by models of the formation of dust-driven winds around late-type carbon stars (Gail & Sedlmayr 1987). According to this model, the dust is dominated by very large grains close to the star, but further out the atoms can only form very small nuclei, due to the decreased density. For the case of R CrB, large grains could form for as long as the dust cloud stays sufficiently dense. If a significant fraction of the carbon remains once the cloud begins to disperse, the remaining carbon will be in a low-density environment and consequently will only be able to form very small nuclei. This is also what happens in the low-density halo, where only very small grains can form.
Stellar Wind
The stellar wind of R CrB has been measured by Clayton et al. (2003) to be 200 km s−1 from an analysis of the He i λ10830 line. Notably, R CrB shows a P Cygni or asymmetric blueshifted profile at all times, i.e. during both minimum and maximum light, indicating that the stellar wind is independent of the ejection of 'dust puffs'. This is consistent with the model presented in this paper. The nature of the stellar wind is highly likely to be dust driven. According to the theoretical models of Gail & Sedlmayr (1986), a dust-driven stellar wind is possible in the case of cool carbon stars with a mass loss rate of the order of 10−6 M⊙ yr−1 and a non-negligible dust production rate. The computed dust mass loss rate for R CrB is 0.9 × 10−6 M⊙ yr−1 (from Sect. 5.2.1), and the conditions for a dust-driven wind on R CrB are enhanced since it is a hydrogen-deficient star and because there is enormous radiation pressure on the dust grains.
Polarimetric observations could indicate the presence of permanent clumpy non-spherical dust shells (Clayton et al. 1997; Yudin et al. 2002). Indeed, if the 'dust puffs' are ejected from R CrB at a frequency equal to R CrB's 50-day pulsation period (Crause et al. 2007), at a distance of 2 R∗, there should be a very large number of them between the star and the edge of the Halo dust. This frequent ejection of dust puffs, in combination with the dusty stellar wind, is considered to be the dust feeding mechanism for the circumstellar Halo dust.
Dust clouds
The observations in this paper confirm the 'dust puff ejection' model of R CrB first proposed by Loreta (1934) and O'Keefe (1939). The HST and ExPo images clearly show a dust cloud at a detection signal-to-noise of 33σ. Observations of another R CrB star, RY Sgr (de Laverny & Mékarnia 2004; Bright et al. 2011), show many dust clouds likely to be randomly ejected from the stellar surface. As noted by de Laverny & Mékarnia (2004), over the last 50 years the number of brightness declines for R CrB is much greater than that of RY Sgr, implying that we should be seeing many more dust clouds in the circumstellar environment of R CrB.
Tests in the laboratory have shown that ExPo can reach contrast ratios of up to 10−5, which is much fainter than the Cloud S:star ratio of 10−2. To determine whether fainter structures are also present in the ExPo images, we have smoothed the data using a Gaussian kernel filter with a width of 1" to reduce the contribution of noise. The resulting image is shown in Figure 7, where the plotted vectors are scaled to the degree of polarisation. In addition to Cloud S, clearly seen in the ExPo images shown in Figure 1, there is a tentative detection of two coherent structures which could be two additional dust clouds. These clouds are located just left of centre at the top of the image and just right of centre at the bottom of the image. The tentative detection is plausible because the dust cloud/star contrast ratio of 10−5 is within ExPo's demonstrated sensitivity, the structures have been detected in all of the ExPo observations, and the polarisation vectors are correctly aligned. They are unlikely to be due to spurious instrumental artifacts and do not appear in ExPo images of stars without circumstellar matter. Future instrumentation such as SPHERE at the VLT and HiCIAO on Subaru will be able to confirm this result.
By assuming a 'dust puff' formation radius of 2 R∗ and a velocity of 200 km s−1, we derive that Cloud S was ejected about 50 years ago. Surprisingly, it still remains intact and easily detectable, meaning that it must have been related to an exceptionally large ejection of dust. If the cloud has a higher density, due to its abnormal mass, it could form a much bigger fraction of large, and consequently high-albedo, grains. If the same age calculation is applied to the current obscuring cloud, Cloud O, and assuming that it will continue to travel along the line of sight, it could imply that R CrB will remain obscured for many decades.
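The ejection age follows from the projected separation, the adopted distance and the wind speed (a sketch; this is effectively a lower limit, since only the projected separation is known):

AU_KM = 1.495978707e8           # astronomical unit [km]
YEAR_S = 3.156e7                # year [s]

sep_au = 1.3 * 1400.0           # 1.3" at 1400 pc (small-angle approximation) [AU]
t_years = sep_au * AU_KM / 200.0 / YEAR_S   # travel time at 200 km/s
print(round(t_years))           # ~43 yr, i.e. of order 50 yr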
Conclusions
We conclude that there are two distinct grain populations in the circumstellar environment of R CrB. The first is a population of small 5 nm grains that comprise the low-density stellar wind, and the second is a population of large (∼0.14 µm) grains that are formed in the ejected dust clouds. Our polarimetric images, together with archival HST images, surprisingly reveal one exceptionally massive dust cloud composed of large grains. The current minimum of R CrB is also noteworthy and lends additional support to the presence of large dust grains in ejected dust clouds.
Developing a management and finance training for future public health leaders
Public health leaders are increasingly being asked to address adaptive challenges in the context of finite and often limited resources. Budgets and their associated resources create the financial framework within which public health agencies and organizations must operate. Yet, many public health professionals expected to undertake roles requiring this foundational knowledge and skills are not trained in the fundamentals of public finance and are ill-equipped for managing and monitoring funds. Graduate courses in schools of public health most often are focused on health care management and finance or private sector finance. To meet the needs of future public health leaders, it is critical that academic content builds capacity in management and finance focused on public health practice. This paper describes the development of a Doctor of Public Health program management and finance course designed to prepare future public health leaders. The course aims to build the knowledge and skills of doctoral-level students to recognize the inherent challenges of public health finance and the importance of cultivating and managing resources to improve public health practice and achieve strategic public health goals.
Introduction
Public health management and finance training is critical to the preparation of public health professionals and leaders. Public health leaders are increasingly being asked to address adaptive challenges in the context of finite and often limited resources. Budgets and their associated resources create the financial framework within which public health agencies and organizations must operate. Yet many public health professionals expected to undertake roles requiring this foundational knowledge and these skills are ill-equipped to manage and monitor funds because they were not trained in the fundamentals of public finance (1). A 2008 analysis found that there were no U.S. graduate public health programs that required a finance course for public health students outside management concentrations (1). In addition, most of those who had taken a management and finance course were trained in health care management and finance or private sector finance, neither of which prepares students for public health practice (2). Based on a 2009 national survey, Honoré and Costich detailed a list of public health finance competencies covering three domains: financial management, analysis, and assessment; organization and program planning and policy development; and administrative skills (2). The gap between what is taught in graduate public health training programs and the requirements for effectively and
efficiently allocating and managing resources in public health practice is especially magnified for those assuming management and leadership roles. Strengthening the skills and knowledge of graduate students based on these competencies is essential preparation for public health practice. The Doctor of Public Health (DrPH) degree was developed to provide doctoral-level training to prepare public health leaders for applied public health practice. In contrast to the Doctor of Philosophy (PhD) degree in public health, which is targeted to preparing students for careers in research, DrPH training is designed to prepare "a transformative leader with expertise in evidence-based public health practice and research who is able to convene diverse partners, communicate to affect change across a range of sectors and settings, synthesize findings, and generate practice-based evidence" (3). The development of DrPH programs grew from the acknowledgment that many leadership positions in public health are held by medical doctors and others without the necessary training or skills to lead agencies or organizations focused on population health (3). A 2014 report from the Association of Schools and Programs of Public Health (ASPPH) includes fiscal responsibility as critical content for DrPH programs to stay relevant in the 21st Century (4). Competencies closely aligned with public health management and finance comprise three of the twenty foundational DrPH competencies required for Council on Education for Public Health (CEPH) accreditation, including the ability to propose human, fiscal, and other resources to achieve a strategic goal; cultivate new resources and revenue streams to achieve a strategic goal; and propose interprofessional and/or intersectoral team approaches to improving public health. Fiscal stewardship, including sustainable and flexible funding, is also a core element of Public Health 3.0, a forward-looking model for public health leadership and practice that recognizes the critical importance of cross-sector collaboration to address upstream social determinants of health (5). For public health to evolve in a way that both learns from the experiences of the COVID-19 pandemic and meaningfully prioritizes advancing health equity, practicing professionals will need to learn innovative strategies and models of funding to stabilize revenue streams and address social determinants of health (5). This will require training diverse and skilled leadership prepared to blend and braid funds to support multi-sectoral collaboration (6), advocate for sustainable public health funding, and match resources to need, targeting those communities experiencing the greatest inequities. This paper describes the development of a DrPH Public Health Finance and Management course. We provide a description of the development of the curriculum, lessons learned after the first year of implementation, and a discussion of the practical applications for the field of public health.
Pedagogical framework
We took a three-pronged approach to developing a course on public health management and finance targeted to DrPH students. This included reviewing the literature on course design and adult learning, reviewing the current literature on public health management and finance, and finally assessing other schools' doctoral-level curricula on management and finance.
We reviewed the literature on course design, learning objectives, and approaches to learning and teaching. Because of the diverse backgrounds of DrPH students, including educational backgrounds, work experience, and levels of experience in finance, the literature on higher education allowed us to bring together theory and practice to strengthen academic content for adult learners. Two theoretical constructs, Light and Cox's Learning Gap Framework and Bloom's taxonomy of educational objectives, were used to inform pedagogical approaches for adult learners. The Learning Gap Framework acknowledges the approaches to learning and the strategies needed for adults, who learn different things in different ways (7). Given the overarching goal of preparing public health leaders for applied public health practice, the class structure, activities, and assignments created the space for creativity and innovation in lecturing. Developing the interpersonal construct through peer learning embeds the Learning Gap Framework, closing the gap between learning content and achieving ongoing change (8). Students are placed into groups of 3 to 5 based on a self-assessment of their own experience and background in management and finance. Groups are formed to encourage contributing to and learning from others, given students' diverse exposure to management and finance, and the group size guarantees that all students have a good opportunity to contribute to the lesson or activity. The learning objectives of each lesson also aim to strengthen skill sets through Bloom's learning spectrum of knowledge, comprehension, application, analysis, synthesis, and evaluation, with an emphasis on higher-level application and synthesis (9). These taxonomies supported the development of the curriculum and delivery approaches and help course instructors conduct real-time assessments of students' knowledge and skill levels.
To inform course content, we conducted an online search of the literature, web sites, and presentations. Using the search terms "public health finance; public health management and finance; health finance; public policy management; nonprofit management; public health budget; government finance; government budget," we reviewed results from PubMed, Google Scholar, and Google. From this online scan, we compiled publications, presentations, roundtable discussions, and a training bootcamp specifically on public health management and finance. Many of the materials we found were products of a 2009 effort supported by the Robert Wood Johnson Foundation to optimize public health management and finance training and education (10). This work generated a significant number of resources that greatly informed this curriculum. We also examined graduate-level management and finance courses and their descriptions in schools of public policy and public health in the United States. Schools of public policy were included because of their academic attention to public finance. We reviewed the course catalogs and syllabi for the top 20 schools of public health with DrPH programs and the top 10 public policy schools in the U.S. as rated by U.S. News and World Report in 2020 (11). A search for management and finance courses in the schools' course catalogs was performed using the same search terms as the online search above. Of the 30 schools, four did not have course catalogs accessible online and an additional four did not yield any courses using our search terms. From the remaining 22 schools we identified 37 applicable courses. Nine of the 37 courses
reviewed were relevant to public management and finance but were not specific to public health, and 13 courses contained the search terms but either a course description was not available or the course was primarily focused on health care management and finance. The remaining 15 courses had a course description or learning objective related to public management and finance; of these, 24% originated from public policy schools and 16% from schools of public health. These courses offered the fundamentals of accounting principles and an introduction to financial management and managerial accounting, and examined public finance policy and theoretical concepts in health economics. While this content is an important foundation for DrPH students, it almost exclusively focused on application in health care settings, not specifically in public health. While acknowledging the value of accounting skills and the intersections of health care and public health, we developed a curriculum targeted to preparing practicing public health professionals and leaders across a range of settings and roles. As reflected in the foundational competencies, DrPH training should prepare public health professionals to engage in leadership-level budget discussions and provide both critical input and analysis. The course learning objectives are based on the CEPH DrPH foundational competencies, and each lesson is associated with an overall course objective, including:
• #12: Propose human, fiscal and other resources to achieve a strategic goal.
• #13: Cultivate new resources and revenue streams to achieve a strategic goal.
• #17: Propose interprofessional and/or intersectoral team approaches to improving public health.
The purpose of the course is to prepare public health practitioners for real-world application of management and finance in public health agencies and organizations. This course provides an overview of basic accounting concepts to build skills in preparing, analyzing, and implementing budgets and applies management decision-making strategies to inform public health programs and policies. The intended learning outcomes are to share an array of perspectives of finance and management at the local, state, federal, and global levels, build the knowledge base on accounting principles, and critically apply team building and management skills.
The lessons and learning objectives from the draft curriculum were shared through an online survey with 16 selected experts in the field of public health management and finance to validate course content. Experts were identified by reviewing authorship of peer-reviewed publications and advocates of public health management and finance. Lastly, an online survey was completed by 20 of the 30 currently enrolled Georgia State University DrPH students to assess their backgrounds and needs relative to the content areas. Feedback was provided by both experts and students and incorporated into the curriculum.
Although consensus was reached on the lessons, no suitable textbook or book chapter on public health management and finance was identified. We therefore conducted an online scan using key terms from the learning objectives to identify relevant peer-reviewed publications, contemporary news articles, webinars, and videos. A variety of modes and relevant, contemporary resources were selected to support adult learning.
Learning environment
This course was initially implemented in a remote synchronous format during the 2021-2022 academic year with 23 DrPH students. An adjunct faculty member, a recent DrPH program graduate with an educational background in accounting and economics and over 10 years of public health experience, taught this course in Georgia State University School of Public Health's DrPH program. The learning objectives across 13 lessons aimed to provide a foundation for public health management and finance with real-world application. The course also intentionally integrated a health equity lens into both the concepts and the examples related to the allocation of resources and the targeting of funds to socially, economically, and/or environmentally disadvantaged populations experiencing health inequities, and it incorporated principles of blending and braiding funds to support cross-sector collaboration and address upstream social determinants of health. Table 1 lists the sessions, learning objectives, and corresponding CEPH competencies. Lessons 1-4 build foundational skills, recognizing that most DrPH students do not have backgrounds in business or accounting and need to learn basic principles and terminology. Lessons 5-13 were informed by the literature on public health management and finance. The full curriculum, found in Appendix A, provides the description, learning objectives, and readings.
Results
While a robust assessment and presentation of evaluation data is beyond the scope of this paper, students were provided the space and opportunity to provide feedback on the class content and materials. This allowed for valuable input that we intend to incorporate as we update and improve both course content and delivery. Feedback received to date included:
• Guest lecturers with both subject matter expertise and experience in public health finance should be engaged. Because public health finance is such a broad topic area, it is difficult for any one person to bring expertise and experiences relevant to the range of topics and public health settings. Including outside experts to provide guest lectures and/or serve as "the expert in the room" to facilitate content-related discussions provides an opportunity for students to learn from a range of experts with diverse experiences and knowledge.
• A stronger emphasis on management practices and tools should be provided. The current curriculum has a substantial amount of content on budgeting and accounting practices. Exposure to management approaches and examples on how to manage and administer funds can be a valuable contribution to the course.
Table 1. Lessons, learning objectives, and corresponding CEPH competencies.

Introduction (CEPH competency 12)
• Summarize fundamental theories, concepts, and definitions of public health finance.
• Apply strategies for public health financing in Public Health 3.0.
• Self-evaluate personal experiences and expectations to improve knowledge and skills in finance and management.

Planning and budgeting (CEPH competency 12)
• Discuss the Congressional appropriation process in theory and practice.
• Analyze differences in U.S. state public health funding levels.
• Critique funding mechanisms and types of revenue sources for federal, state, and local public health.

Operating budgets (CEPH competency 12)
• Evaluate the overall planning process and develop an operating budget.
• Design a budget using different types of costs and budget assumptions.
• Analyze a planning and operating budget.

Financial statements (CEPH competency 12)
• Calculate break-even analysis and discounted cash flow analysis.
• Evaluate financial statements and key players in financial statement regulation.
• Design a planning budget for a public health program.

Taxation (CEPH competency 13)
• Evaluate taxation as a revenue source to support public health goals.
• Compare and contrast states with different tax structures and uses of revenue.
• Design an operational budget for a public health program.

Revenue generation (CEPH competency 13)
• Explain social impact bonds and develop a real-world example.
• Critically examine sustainability and utility of new economic models.
• Design a budget that blends and braids funds to address upstream social determinants of health.

Spending and return on investments (CEPH competency 17)
• Discuss evidence-based resource allocation processes.
• Examine real-world resource allocation processes.
• Apply equity in a resource allocation decision-making process.

Fiscal stewardship and transparency (CEPH competency 13)
• Examine public health expenditure data sources.
• Assess the relationship between resource allocation and health equity goals.
• Critique funding policy using the Tobacco Master Settlement Agreement as an example.

Partnerships (CEPH competency 13)
• Discuss fiscal stewardship, accountability, and transparency of public health funds.
• Critique public health programming and closing the finance gap.
• Compare and contrast state-level expenditures in social services and social determinants of health.

Decision-making strategies
• Analyze Medicaid's role in partnering with public health.
• Evaluate the intersection between public health and safety net programs to improve social determinants of health.
• Assess public health agencies' ability to blend funds to support people living with disabilities.

Ethics (CEPH competency 17)
• Examine the influence of philanthropic and private industry funds in public health.
• Evaluate the intersection of pharmaceutical research and public health research and practice.
• Critique the decision-making process in accepting funds to address a public health concern.

Public health emergency (CEPH competency 13)
• Evaluate decision-making to allocate resources during a public health emergency.
• Critique the public's perception of a common good during an emergency.
• Integrate equity in public health emergency resource allocation.

Global perspective (CEPH competency 13)
• Apply strategies in global health finance.
• Critique universal health coverage financing strategies.
• Examine universal financial protection policies in LMICs.
• Content addressing funding for tribes and territories should be included, given the unique contexts in which tribes and territories operate.
• Leveraging the expertise in the room should be highlighted. As adult learners and practicing professionals, DrPH students may have applicable experiences as well as examples that help highlight the relevance of the course content. Providing an open space for discussion, dialogue, and constructive debate between students helps to reinforce the content.
• Given the political nature of the funding process, it is important to include news articles so that students stay abreast of current events and the political dialogue. Materials should also span various modes, such as news articles, podcasts, and webinars, to suit different learning styles.
Discussion
Many public health agencies do not carry out systematic financial analysis as part of their planning or review process. This threatens public health agencies' ability to quantify their fiscal condition and the sustainability and state of readiness of the public health system when an outbreak or natural disaster occurs (12). An analytical tool to measure financial performance in local health departments was recently introduced to help mainstream and apply analytical concepts in the practice of public health management (12). While measuring financial performance is not a requirement, its benefits have implications for policy and practice, including the creation of a uniform chart of accounts to help assess the financial condition of the public health system. This enables public health
managers and leaders to forecast national trends, make informed decisions about the continuation or elimination of programs, increase financial accountability, and advocate for additional funds (12). Until public health agencies prioritize financial performance, the field will continue to experience what the World Bank penned as the "cycle of panic and neglect" (13). This term describes the reactionary response of political leaders, who invest in public health systems during a time of need until the urgency fades (13). Public health professionals work and adapt in this unpredictable environment. System change will require preparing future public health leaders for these inherent challenges and for innovative ways to efficiently manage funds and advocate for sustainable public health funding.
Based on our review, current academic content for doctoral-level students does not meet the practical demands public health professionals face in the workplace. The current academic content in public health finance falls short of providing the necessary skills and strategies to effectively mobilize and manage funds in public health. Course content is disproportionately health care-focused and fails to meaningfully address finance policies and management strategies unique to public health. Most current curricula are inadequate and do not properly prepare DrPH students to assume leadership and management functions, including identifying sources of public and private funds and aligning programming and allocation to meet the organization's needs and mission.
Honoré and colleagues outlined potential impediments to developing a management and finance course for public health. These included the limited number of credit hours available in the curricula, a lack of dedicated faculty with the academic credentials, experience, and time to develop a specific public health finance course, and the lack of teaching materials dedicated to public health finance (2). When dedicated faculty are not feasible, they suggest using adjunct faculty to conserve full-time faculty resources. For DrPH students in particular, having faculty engaged in applied practice provides a valuable opportunity to include real-world and real-time case studies. This paper and the corresponding appendix help to alleviate many of these barriers related to course development and relevant teaching materials.
Evaluation feedback from the initial piloting of this course in Georgia State University's School of Public Health during the 2021-2022 academic year showed early signs of success in achieving course objectives. We recognize that continuous improvements to the course curriculum are not only valuable but needed to keep up with current events. We plan to incorporate feedback into the next iteration of the curriculum and to continue receiving informal and formal feedback through post-course evaluations.
Preparing and strengthening public health leadership in management and finance through a doctoral-level course will not alleviate the systemic challenges facing public health funding and infrastructure. However, aligning academic competencies and training with professional responsibilities is critical to positioning public health organizations and agencies for greater impact and improved population health outcomes. Preparing public health leaders to participate more effectively in budget, financing, and funding strategies and discussions also positions them to justify and advocate for additional public health resources with decision makers and appropriators.
Constraints
The largest constraint to the development and implementation of a public health-focused doctoral-level finance and management course was finding relevant course materials, including textbooks, articles, assignments, and curricular content such as learning objectives and lesson plans pertinent to applied public health practice. Another practical constraint was that the course was initially administered during the 2021-2022 academic year, coinciding with the COVID-19 pandemic. As a result, classes were held in a synchronous remote learning environment. While this mode helps with engagement of guest lecturers outside the geographic area, it inherently limits the faculty's ability to encourage dialogue, debate, and discussion between students. Moving forward, we plan to administer the class using a hybrid approach, with half the classes meeting in person to facilitate open dialogue and discussion and the other half using a synchronous remote environment to facilitate engagement of outside guest lecturers. Lastly, a major constraint that other schools of public health may face in adapting this curriculum is finding dedicated faculty with the academic credentials, experience, and time to teach this course. As noted above, identifying adjunct faculty with the requisite background in public health finance is challenging, but ideal for optimal course delivery.
Conclusion
Preparing the next generation of public health leaders, not only for the demands of current public health practice, but for the essential roles public health must play in collaborating to address upstream social determinants of health is a critical charge for DrPH training programs. Improving population health and advancing health equity require core competencies in both the cultivation and deployment of public health financial resources. This paper describes the development of a public health management and finance course designed to strengthen finance and management knowledge and skills for future public health leaders. The course aims to prepare doctoral-level students to recognize the inherent challenges of public health finance while also providing finance and management knowledge and skills essential for successful public health leadership and practice.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary material, further inquiries can be directed to the corresponding author.
Quasitriangular chiral WZW model in a nutshell
We give the bare-bone description of the quasitriangular chiral WZW model for the particular choice of the Lu-Weinstein-Soibelman Drinfeld double of the affine Kac-Moody group. The symplectic structure of the model and its Poisson-Lie symmetry are completely characterized by two $r$-matrices with spectral parameter. One of them is ordinary and trigonometric and characterizes the $q$-current algebra. The other is dynamical and elliptic (in fact Felder's one) and characterizes the braiding of $q$-primary fields.
1. Introduction. In reference [9], we have constructed a one-parameter deformation of the standard WZW model. It is a theory possessing a huge Poisson-Lie symmetry generated by the left and right q-deformed current algebras. The goal of this paper is to offer the simplest possible description of the q-deformed chiral WZW theory for the particular choice of the affine Lu-Weinstein-Soibelman double. We address those readers who want to obtain a first acquaintance with the q-deformed WZW model by working with a representative simple example. Nothing will be derived or proved here, and only those results will be presented which do not require any preliminary understanding of the Poisson-Lie world. In particular, we do not wish to bother the reader with more general choices of the Drinfeld double, nor with the natural origin of the model from the symplectic reduction of a simpler system living on the centrally extended double.
2. Language. The classical actions of dynamical systems enjoying the Poisson-Lie symmetry typically look forbiddingly complicated when written in some of the standard parametrizations of simple Lie groups. This is the case already for toy systems with few degrees of freedom, let alone field theories. It is therefore necessary to develop a language suited for dealing effectively with such models.
We can describe a classical dynamical system in more or less three different languages:
a) By defining a classical action on a space of fields. This way is best suited for the path integral quantization.
b) By identifying a symplectic manifold and the Hamiltonian function on it. This is good for the geometric quantization.
c) By picking a representative set of (coordinate) functions and a Hamiltonian and defining their mutual Poisson brackets. This is the starting point for the canonical quantization.
We stress that at the classical level all three languages are fully equivalent though in studying some particular feature of the system one of them can turn out to be more convenient than the others. The standard WZW model [13] was invented in the language a). The language b) was then extensively used for the description of the (finite) Poisson-Lie symmetry of the chiral WZW model [8]. Finally, the language c) has been also developed in papers devoted to the canonical quantization of the (chiral) WZW model [1,3,4,2].
The quasitriangular WZW model [9] was conceived by thinking and working in language b). However, the most explicit (by a physicist's taste) and economical description of this theory can be offered in language c), and this is what we shall do here. Language a) also exists in principle in the quasitriangular case. Indeed, the interrelation between a) and b) is well known: for a couple (symplectic form ω, Hamiltonian H) we can immediately write down a first-order classical action of the form (1). The reader will agree, however, that the formula (1) contains no additional insight with respect to b). Moreover, as we have already mentioned, the coordinate description of (1) is awful. We therefore do not expect the path integral method to be useful in the quasitriangular WZW story.
3. Review of the standard chiral WZW model. We first review the definition of the standard WZW model [3] in language c). Consider a compact, simple, connected and simply connected group G. Recall that the Weyl alcove A_+ is the fundamental domain of the action of the Weyl group on the maximal torus of G. The points of the phase space P of the standard chiral WZW model are the maps m : R → G fulfilling the monodromy condition m(σ + 2π) = m(σ)M.
Here the monodromy M sits in the Weyl alcove A_+, and it is convenient to parametrize it by coordinates a_µ on the alcove A_+ corresponding to the choice of an orthonormal basis H_µ in the Cartan subalgebra of Lie(G).
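As a numerical aside (not part of the original paper), the monodromy condition m(σ + 2π) = m(σ)M can be realized explicitly by multiplying any 2π-periodic G-valued map by a quasi-periodic exponential factor built from a logarithm of M. A minimal sketch for G = SU(2), with an arbitrary illustrative choice of the alcove coordinate a and of the periodic factor:

```python
import numpy as np
from scipy.linalg import expm

# Pauli matrices: a basis (up to factors of i) for su(2)
sigma3 = np.array([[1, 0], [0, -1]], dtype=complex)
sigma1 = np.array([[0, 1], [1, 0]], dtype=complex)

# Monodromy M = exp(i * a * H) with H in the Cartan subalgebra (here sigma3/2);
# 'a' plays the role of the alcove coordinate a_mu in the text (illustrative value).
a = 0.7
M = expm(1j * a * sigma3 / 2)

def m(sigma):
    """A map R -> SU(2) obeying m(sigma + 2*pi) = m(sigma) M.

    Built as (2*pi-periodic part) * exp(sigma * log(M) / (2*pi)); the
    periodic factor is an arbitrary example, not a field of the model.
    """
    periodic = expm(1j * np.sin(sigma) * sigma1 / 2)   # any 2*pi-periodic factor
    quasi = expm(1j * a * sigma3 / 2 * (sigma / (2 * np.pi)))
    return periodic @ quasi

# Numerical check of the monodromy condition at a sample point
s = 1.3
print(np.allclose(m(s + 2 * np.pi), m(s) @ M))   # True
```

Both exponential factors are generated by the same Cartan element, so they commute, which is what makes the check pass identically in σ.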
The matrix Poisson brackets (2) and (3) (written in some representation of G) completely characterize the symplectic structure of the standard non-deformed chiral WZW model [3]. Here κ is the level, and η(σ) is the function defined in terms of [σ/2π], the largest integer less than or equal to σ/(2π). An important Lie(G)-valued observable is the chiral Kac-Moody current; it generates the Hamiltonian action of (the central extension of) the loop group LG on the phase space P. This is reflected in the matrix Poisson brackets (4) and (5), which follow from (2) and (3) and in which C is the Casimir element. The Poisson bracket (5) corresponds to the non-deformed current algebra, and (4) can be interpreted as the statement that m(σ) is the Kac-Moody primary field.¹ Finally, the relation (3) can be viewed as the classical version of the Knizhnik-Zamolodchikov equation [10].

¹ Our conventions fix the normalization of the step generators E_α of Lie(G); the element α∨ in the Cartan subalgebra of Lie(G^C), defined in terms of the root α, is called the coroot of α.
The Hamiltonian of the non-deformed chiral theory is given by the Sugawara formula; it leads to the simple time evolution [m(σ)](τ) = m(σ − τ) in the phase space.

4. q-current algebra. The concept of the q-deformation of the current algebra (5) was apparently first introduced in [11], where the complex case was worked out. A detailed discussion of the real case can be found in [9]. Here we shall need only the classical (Poisson bracket) story, which is based on the concept of a meromorphic classical r-matrix r(σ) ∈ Lie(G) ⊗ Lie(G) fulfilling the ordinary classical Yang-Baxter equation with spectral parameter.
The q-current L(σ) is given by the classical version of the q-KZ equation. From (11) and (12) follow the relations (13) and (14). The relation (13) can be interpreted as saying that m(σ) is the q-primary field, and (14) is nothing but the defining relation (8) of the q-current algebra.
Inserting this into (13) and (14), we obtain, in the lowest order in ε, the desired relations (4) and (5). It turns out that the flow [m(σ)](τ) = m(σ − τ) on P is Hamiltonian also for the q-deformed symplectic structure (11). Its generator H_qWZ is the Hamiltonian of the quasitriangular chiral WZW model. The explicit formula for it is given by relation (1.20) in [9]. We do not list it here in order not to break the basic promise made in the introduction: reading this paper requires no preliminary knowledge of the Poisson-Lie world.
Bitter Taste Receptors (TAS2Rs) in Human Lung Macrophages: Receptor Expression and Inhibitory Effects of TAS2R Agonists
Background Bitter-taste receptors (TAS2Rs) are involved in airway relaxation but are also expressed in human blood leukocytes. We studied TAS2R expression and the effects of TAS2R agonists on the lipopolysaccharide (LPS)-induced cytokine release in human lung macrophages (LMs). Methods Lung macrophages were isolated from patients undergoing surgery for carcinoma. We used RT-qPCR to measure transcripts of 16 TAS2Rs (TAS2Rs 3/4/5/7/8/9/10/14/19/20/31/38/39/43/45 and 46) in unstimulated and LPS-stimulated (10 ng.mL–1) LMs. The macrophages were also incubated with TAS2R agonists for 24 h. Supernatant levels of the cytokines TNF-α, CCL3, CXCL8 and IL-10 were measured using ELISAs. Results The transcripts of all 16 TAS2Rs were detected in macrophages. The addition of LPS led to an increase in the expression of most TAS2Rs, which was significant for TAS2R7 and 38. Although the promiscuous TAS2R agonists, quinine and denatonium, inhibited the LPS-induced release of TNF-α, CCL3 and CXCL8, diphenidol was inactive. Partially selective agonists (dapsone, colchicine, strychnine, and chloroquine) and selective agonists [erythromycin (TAS2R10), phenanthroline (TAS2R5), ofloxacin (TAS2R9), and carisoprodol (TAS2R14)] also suppressed the LPS-induced cytokine release. In contrast, two other agonists [sodium cromoglycate (TAS2R20) and saccharin (TAS2R31 and 43)] were inactive. TAS2R agonists suppressed IL-10 production – suggesting that this anti-inflammatory cytokine is not involved in the inhibition of cytokine production. Conclusion Human LMs expressed TAS2Rs. Experiments with TAS2R agonists suggested the involvement of TAS2Rs 3, 4, 5, 9, 10, 14, 30, 39 and 40 in the inhibition of cytokine production. TAS2Rs may constitute new drug targets in inflammatory obstructive lung disease.
INTRODUCTION
Inflammatory lung diseases such as asthma and chronic obstructive pulmonary disease (COPD) are characterized by airway obstruction and airflow limitation. The bitter taste receptors (TAS2Rs) constitute a family of around 25 G-protein coupled receptors (GPCRs) initially thought to be located exclusively on the tongue, where their activation enables our perception of a bitter taste. However, TAS2Rs are now known to be expressed in the human bronchus (Deshpande et al., 2010;Grassin-Delyle et al., 2013), airway epithelial cells (Shah et al., 2009;Jaggupilli et al., 2017), mast cells (Ekoff et al., 2014) and blood leukocytes (Orsmark-Pietras et al., 2013;Malki et al., 2015).
In human airway smooth muscle, TAS2R stimulation induces relaxation (Deshpande et al., 2010;Grassin-Delyle et al., 2013); inhaled bitter tastants also decreased airway obstruction in a mouse model of asthma (Deshpande et al., 2010). The motile cilia of human airway epithelial cells express TAS2Rs, and bitter compounds increase the ciliary beat frequency as part of an airway defense mechanism (Shah et al., 2009). In sinonasal epithelial cells, activation of TAS2R38 stimulates an increase in nitric oxide production, which in turn increases mucociliary clearance and directly kills bacteria Cohen, 2017). The polymorphisms that underlie TAS2R38's functionality appear to be involved in susceptibility to upper respiratory bacterial infections Cohen, 2017). In IgE-receptor-activated primary human mast cells, agonists known to bind to the expressed TAS2Rs were found to inhibit the release of histamine and prostaglandin D 2 (Ekoff et al., 2014). Furthermore, blood leukocytes from children with severe asthma display elevated TAS2R expression levels and two TAS2R agonists inhibited the release of several pro-inflammatory cytokines and eicosanoids in whole blood from adults (Orsmark-Pietras et al., 2013). In general, TAS2R agonists may have both anti-inflammatory properties and bronchodilatory activities.
The objectives of the present study were to (i) characterize TAS2R expression in human LMs, (ii) describe the inhibitory effects of various TAS2R agonists on lipopolysaccharide (LPS)-induced cytokine production, (iii) infer the subtypes of TAS2R involved, and (iv) determine whether IL-10 acts as a potential mediator of the TAS2R agonists' inhibitory activities on LPS-induced cytokine production.
Preparations of Human Lung Macrophages and Explants
Experiments on human tissue were approved by the regional independent ethics committee (Comité de Protection des Personnes Île de France VIII, Boulogne-Billancourt, France). In line with the French legislation on clinical research and as approved by the independent ethics committee, patients gave their verbal informed consent for the use of resected lung tissue for in vitro experiments. Lung tissue samples were obtained from 28 patients [18 males and 10 females; smoker/ex-smoker/non-smoker: 11/16/1; mean ± standard deviation (SD) age: 65.4 ± 8.1 years; FEV1 = 80.2 ± 20.9%; pack-years: 41 ± 21; FEV1/FVC ratio: 0.77 ± 0.1; 7 with COPD (FEV1/FVC < 0.7; airflow limitation severity: GOLD 1 for 4 patients and GOLD 2 for 3)] undergoing surgical resection for lung carcinoma who had not received prior chemotherapy. The LMs were isolated from macroscopically normal lung parenchyma (obtained from sites distant from the tumor), dissected free of pleura, visible airways and blood vessels, and then chopped into 3-5 mm³ fragments, as previously described (Jeyaseelan et al., 2005; Buenestado et al., 2010, 2012; Abrial et al., 2015; Victoni et al., 2017).
Briefly, the fluid collected from several washings of the minced peripheral lung tissue was centrifuged (2000 rpm for 10 min). The cell pellet was resuspended in RPMI supplemented with 10% heat-inactivated fetal calf serum, 2 mM L-glutamine, and antibiotics. Resuspended viable cells (10^6 per mL) were then aliquoted into either a 12-well plate (for transcriptional assays) or a 24-well plate (for cytokine assays). Following incubation for at least 2 h at 37°C (in a 5% CO2 humidified atmosphere), non-adherent cells were removed by gentle washing. The remaining cells were maintained at 37°C and 5% CO2 overnight. It has been shown that the adherence step does not significantly influence overall transcriptional changes in alveolar macrophages, relative to flow-cytometry-based cell sorting (Shaykhiev et al., 2009). As described in previous reports from our group (Jeyaseelan et al., 2005; Buenestado et al., 2010, 2012; Abrial et al., 2015; Victoni et al., 2017), the adherent cells (about 2 × 10^5 cells per well for a 24-well plate) were >95% pure macrophages, as determined by May-Grünwald-Giemsa staining and CD68 immunocytochemistry (data not shown). Cell viability exceeded 90%, as assessed in a trypan blue dye exclusion assay. Culture plates with adherent macrophages were washed with warm medium. One mL of fresh medium supplemented with 1% fetal calf serum was added per well, and the culture plates were incubated overnight at 37°C in a 5% CO2 humidified atmosphere.
Treatment of Lung Macrophages
On the day after isolation, macrophages or explants were washed twice, and 1 mL of RPMI was added per well. The LMs were exposed for 24 h to LPS (10 ng.mL−1). This LPS concentration was selected as being submaximal on the basis of previous time-response and concentration-response data (Buenestado et al., 2012). TAS2R agonists were added to the culture medium 1 h before exposure to LPS. After a 24 h incubation in RPMI at 37°C (in a 5% CO2 humidified atmosphere), culture supernatants from the LMs and the explants were collected and stored at −80°C for subsequent cytokine assays. Although more than 100 molecules have been described as TAS2R agonists, some TAS2Rs are still orphan receptors that lack a cognate agonist (Meyerhof et al., 2010; Di Pizio and Niv, 2015). In an initial series of experiments, the preparations were exposed to a maximal concentration (1 mM) of promiscuous TAS2R agonists acting on at least 8 different TAS2Rs (diphenidol, quinine and denatonium) (Table 1) (Meyerhof et al., 2010; Di Pizio and Niv, 2015). In a second series of experiments, the preparations were exposed to a range of concentrations of more selective TAS2R agonists (chloroquine, dapsone, strychnine, sodium cromoglycate, ofloxacin, phenanthroline, erythromycin, and carisoprodol) to infer the TAS2R subtypes involved.
Cytokine Assays
The cytokine concentrations in the supernatant (TNF-α, IL-10, CCL3, and CXCL8) were measured with ELISAs (R&D Systems), according to the manufacturer's instructions. The assays' limits of detection were 4 pg.mL−1 for CCL3, 8 pg.mL−1 for TNF-α, 30 pg.mL−1 for IL-10, and 32 pg.mL−1 for CXCL8. The supernatants were diluted with RPMI as appropriate, and the optical density was determined at 450 nm using a microplate reader (MRX II, Dynex Technologies, Saint-Cloud, France). Cytokine concentrations were expressed in pg.10−6 LMs, unless otherwise stated. Cytotoxicity was determined by measuring the lactate dehydrogenase (LDH) activity in LM supernatants (LDH assay, Cayman Chemical, Montigny-le-Bretonneux, France) after 24 h of exposure to the vehicle or the TAS2R agonists.
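The paper states only that the ELISAs were run according to the manufacturer's instructions. Back-calculating concentrations from OD450 readings is typically done by fitting a four-parameter logistic (4PL) standard curve and inverting it; the following sketch assumes hypothetical standards and readings (none of the numbers are from the study):

```python
import numpy as np
from scipy.optimize import curve_fit

def four_pl(x, a, b, c, d):
    """Four-parameter logistic: a standard model for ELISA calibration curves."""
    return d + (a - d) / (1.0 + (x / c) ** b)

# Hypothetical standards (pg/mL) and their OD450 readings -- not data from the paper
std_conc = np.array([31.25, 62.5, 125, 250, 500, 1000, 2000])
std_od = np.array([0.08, 0.15, 0.27, 0.50, 0.90, 1.55, 2.30])

params, _ = curve_fit(four_pl, std_conc, std_od, p0=[0.05, 1.0, 300.0, 2.5], maxfev=10000)
a, b, c, d = params

def od_to_conc(od, dilution_factor=1.0):
    """Invert the 4PL fit to recover concentration, then correct for dilution."""
    return c * ((a - d) / (od - d) - 1.0) ** (1.0 / b) * dilution_factor

print(round(od_to_conc(0.70, dilution_factor=10), 1))  # pg/mL in the undiluted supernatant
```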
Reverse Transcriptase -Quantitative Polymerase Chain Reaction (RT-qPCR) Analysis
RT-qPCR experiments were performed as previously described, with some modifications (Grassin-Delyle et al., 2013). Lung macrophages were stimulated (or not) for 24 h with LPS, and total RNA was prepared using TRIzol® reagent (Life Technologies, Saint Aubin, France). The amount of RNA extracted was estimated by spectrophotometry at 260 nm (Biowave DNA; Biochrom, Cambridge, United Kingdom), and its quality was assessed in a microfluidic electrophoresis system (RNA Standard Sensitivity kits for Experion®, Bio-Rad, Marnes-la-Coquette, France). After treatment with DNase I (Life Technologies, Saint Aubin, France), 1 µg of total RNA was subjected to reverse transcription (SuperScript® III First-Strand SuperMix kit, Life Technologies). The resulting cDNA was then used for quantitative real-time PCR experiments with TaqMan® chemistry (Life Technologies). The amplification was carried out using 20 ng cDNA (Gene Expression Master Mix, Life Technologies) in a StepOnePlus thermocycler (Life Technologies). The conditions were as follows: initial denaturation at 95°C for 10 min, followed by 40 cycles of annealing/extension (95°C for 15 s and then 60°C for 1 min). Fluorescence was measured at each cycle, and the real-time PCR's threshold cycle (Ct) was defined as the point at which a fluorescence signal corresponding to the amplification of a PCR product was detected. The reaction volume was set to 10 µL. The expression of transcripts of the 16 TAS2R-encoding genes (TAS2R3, TAS2R4, TAS2R5, TAS2R7, TAS2R8, TAS2R9, TAS2R10, TAS2R14, TAS2R19, TAS2R20, TAS2R31, TAS2R38, TAS2R39, TAS2R43, TAS2R45, and TAS2R46) in the LMs was analyzed using a specific TaqMan® array with predesigned reagents (Assay-on-Demand®, Life Technologies). In order to confirm the extraction of intact cellular mRNA and to standardize the quantitative data, the reference gene coding for hypoxanthine phosphoribosyltransferase (HPRT1) was amplified at the same time.
Statistical Analysis
Values in the text and figures are expressed as the mean ± standard error of the mean (SEM), unless otherwise stated, from experiments with n independent donors. The quantitative data obtained from RT-qPCR experiments were expressed as the relative expression 2^(−ΔCt), where ΔCt is the difference between the target gene Ct and the mean Ct of the reference gene.
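The relative-expression formula above translates directly into code. A minimal sketch with hypothetical Ct values, taking HPRT1 as the reference gene as in the study:

```python
import numpy as np

def relative_expression(target_ct, reference_cts):
    """Relative expression 2**(-dCt), with dCt = Ct(target) - mean Ct(reference).

    reference_cts: Ct values for the reference gene (HPRT1 in the study).
    """
    delta_ct = target_ct - np.mean(reference_cts)
    return 2.0 ** (-delta_ct)

# Hypothetical Ct values for illustration only
hprt1_cts = [24.1, 24.3, 24.0]
print(relative_expression(target_ct=29.5, reference_cts=hprt1_cts))  # ~0.024
```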
Data were evaluated using either a one-way analysis of variance for repeated measures (followed by Dunnett's post hoc test for multiple comparisons) or a paired Student's t-test, as appropriate. The threshold for statistical significance was set to p < 0.05.

TABLE 1 | Threshold concentrations and EC50 values (µM) of the TAS2R agonists (table body not reproduced; legible fragment: erythromycin, 300; carisoprodol, 100). The most recent nomenclature was adopted (Andres-Barquin and Conte, 2004; Sainz et al., 2007; Dotson et al., 2008; Meyerhof et al., 2010; Alexander et al., 2011; Soares et al., 2013). Unless otherwise stated, single values (i.e., lacking ± SEM) represent the threshold concentration, i.e., the lowest concentration (µM) to have elicited a calcium response in transfected HEK-293T cells, whereas values presented as the mean ± SEM represent the EC50 (µM) (Meyerhof et al., 2010). When both a threshold concentration and the EC50 were available, both values are reported (in that order). When no quantitative indicators were available, the symbol "+" indicates agonism. The TAS2R numbers in bold type and those underlined correspond to the TAS2Rs found to be expressed in human LMs. The TAS2R numbers in italics correspond to those not expressed in human LMs. The TAS2R numbers in standard type correspond to receptors not assessed in the RT-qPCR assays. *Values from Liu et al. (2018) and **values from Jaggupilli et al. (2019).
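To illustrate the paired, within-donor design described under Statistical Analysis, here is a sketch of a paired Student's t-test on hypothetical cytokine values; Dunnett's post hoc test would require a dedicated multiple-comparison routine and is omitted:

```python
import numpy as np
from scipy import stats

# Hypothetical paired cytokine measurements (pg per 10**6 LMs) from the same donors:
# each position is one donor, measured under LPS alone and under LPS + agonist.
lps_alone = np.array([63000, 48000, 71000, 55000, 60000], dtype=float)
lps_agonist = np.array([21000, 18000, 30000, 19000, 25000], dtype=float)

# Paired Student's t-test, matching the within-donor design described in the text
t_stat, p_value = stats.ttest_rel(lps_alone, lps_agonist)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}, significant: {p_value < 0.05}")
```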
Expression of Bitter Taste Receptor Gene Transcripts in Lung Macrophages
We analyzed the expression patterns of 16 of the 25 known human TAS2Rs, based on our previous results in human bronchi (Grassin-Delyle et al., 2013) and other literature data on human airway smooth muscle (Deshpande et al., 2010), airway epithelial cells (Shah et al., 2009; Jaggupilli et al., 2017) and blood leukocytes (Orsmark-Pietras et al., 2013; Malki et al., 2015). Transcripts of genes coding for 15 bitter taste receptors were identified in all the preparations (n = 5 to 12), whereas TAS2R43 transcripts were found in LMs from only 4 of 6 patients. Exposure to LPS was generally associated with an increase in TAS2R expression; the increase was statistically significant for TAS2R7 and TAS2R38 (Figure 1).
Effects of TAS2R Agonists on Cytokine Production by Human Lung Macrophages
Lipopolysaccharide induced a ∼300-fold increase in TNF-α release (from 209 ± 56 pg.10−6 LMs in the absence of LPS to 63151 ± 12561 pg.10−6 LMs in the presence of LPS; n = 6 paired preparations). In the same preparations, the LPS-induced increases in chemokine release were lower: an ∼80-fold increase for CCL3 (from 3467 ± 1009 pg.10−6 LMs in the absence of LPS to 276235 ± 71937 pg.10−6 LMs in the presence of LPS) and a 34-fold increase for CXCL8 (from 37 ± 9 pg.10−6 LMs in the absence of LPS to 1279 ± 347 pg.10−6 LMs in the presence of LPS); these findings are in agreement with our previous results (Buenestado et al., 2012; Abrial et al., 2015; Victoni et al., 2017; Grassin-Delyle et al., 2018). The basal release of these cytokines in unstimulated macrophages and the release caused by LPS are shown in Figure 2.
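The fold increases quoted above follow directly from the reported mean values; a quick arithmetic check:

```python
# Reported mean cytokine levels (pg per 10**6 LMs): (unstimulated, LPS-stimulated)
means = {
    "TNF-alpha": (209.0, 63151.0),
    "CCL3": (3467.0, 276235.0),
    "CXCL8": (37.0, 1279.0),
}
for cytokine, (basal, lps) in means.items():
    print(f"{cytokine}: ~{lps / basal:.0f}-fold increase")
# ~302-fold, ~80-fold and ~35-fold, matching the ~300-, ~80- and 34-fold
# figures quoted in the text (the last differs only by rounding).
```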
The first set of experiments was carried out with three promiscuous TAS2R agonists (diphenidol, quinine, and denatonium) at a concentration of 1 mM (in order to cover the widest possible range of receptors; Table 1). Quinine almost completely abrogated the LMs' LPS-induced production of TNF-α, CCL3 and CXCL8 (inhibition ≥98%), and denatonium significantly inhibited the production of TNF-α and CCL3 (but not of CXCL8) to a much lesser degree (28% and 43%, respectively). Diphenidol was inactive (Figure 3). Quinine is known to activate all of its nine cognate TAS2Rs in the same relatively low concentration range (∼10 µM; Table 1) (Meyerhof et al., 2010; Liu et al., 2018; Jaggupilli et al., 2019). In marked contrast, the activation ranges for denatonium and diphenidol spanned five and three orders of magnitude, respectively (Table 1) (Meyerhof et al., 2010). Given the panels of TAS2Rs activated by each of these three agonists and the respective activation concentrations, this first set of experiments suggested that the inhibitory effects of quinine and denatonium are almost certainly not mediated by TAS2Rs 1, 7, 13, and 31, and probably not mediated by TAS2Rs 14, 16, 20, 30, 38, 43, and 46. To further determine the TAS2R subtypes involved in the modulation of cytokine release by LMs, relatively selective agonists (Table 1) were used at concentrations ranging from 1 µM to 1 mM. The significant inhibitory activity of dapsone at 0.3 mM (increasing up to 1 mM) suggests the involvement of TAS2Rs 4, 10, and 40 (Figure 4).
The involvement of TAS2R4 is also indicated by the inhibitory effects of colchicine. Moreover, the fact that partial inhibition (38-57%) by strychnine occurred at a much higher concentration (1 mM) than its EC50 value for TAS2R46 expressed by transfected HEK-293T cells (Table 1) suggests that the latter receptor is not involved but does not rule out the involvement of TAS2R10. Chloroquine was fully active at a concentration of 10−4 M, suggesting the involvement of TAS2R3 and (to a lesser degree) TAS2R39, although off-target activities could also explain the compound's inhibitory activity; solid evidence of the involvement of this subtype is hard to obtain.

FIGURE 2 | Production of TNF-α, CCL3, and CXCL8 in unstimulated human lung macrophages from 6 patients and in LPS (10 ng.mL−1)-stimulated human lung macrophages from 14 to 15 patients. The results are presented as individual plots and as the mean ± SD level for the group.

Phenanthroline is a selective TAS2R5 agonist; indeed, it is the only TAS2R5 agonist to have been described to date (Meyerhof et al., 2010). The compound's potent inhibitory effect in the present study suggests the involvement of this TAS2R subtype. Ofloxacin is selective for TAS2R9 and was associated with significant inhibition of LPS-induced CCL3 and CXCL8 release, which also suggests the involvement of this TAS2R subtype. Carisoprodol is reportedly selective for TAS2R14, with an activation threshold concentration of 0.1 mM. Its significant inhibition of the LPS-induced release of the three cytokines suggests the involvement of TAS2R14 and is consistent with quinine's inhibitory profile here (Figure 5).
Lastly, sodium cromoglycate (a selective TAS2R20 agonist, active at 1 mM) and saccharin (an agonist of both TAS2R31 and 43, active at 1 mM) were devoid of activity (data not shown; n = 7 and 8, respectively); this finding rules out the involvement of these three TAS2R subtypes, as had already been indicated by our results with the promiscuous agonists. None of the TAS2R agonists caused a significant increase in LDH release, with the exception of a ∼2-fold increase in LDH release for chloroquine and colchicine at their highest concentrations (1 mM and 0.1 mM, respectively); this corresponds to small increases (1.6% and 1.9%, respectively) above the level of basal LDH release by LMs exposed to LPS.
Overall, cross-tabulation of the effects on LPS-induced cytokine release suggests the highly probable involvement of TAS2Rs 3, 4, 5, 9, 10, 30, 39 and 40 and (with a degree of inconsistency) TAS2R14. The high expression level of TAS2R14 further supports the involvement of this subtype. In contrast, the involvement of TAS2Rs 20, 31 and 43 can be ruled out. Lastly, the results obtained with the present set of agonists do not preclude the involvement of other TAS2Rs (particularly the orphan receptors TAS2Rs 19, 42, and 50). It is noteworthy that the maximal level of inhibition (≥90%) observed with quinine, chloroquine, phenanthroline and erythromycin was similar to that seen for budesonide at 10−8 M (Table 2).
Effects of TAS2R Agonists on IL-10 Production by Human Lung Macrophages
Interleukin-10 is an immunomodulatory cytokine with potent anti-inflammatory activity; as such, it could be an essential mediator of the TAS2R agonists' ability to inhibit the LPS-induced production of pro-inflammatory cytokines (Berkman et al., 1995; Armstrong et al., 1996). All the TAS2R agonists that inhibited the production of TNF-α, CCL3 and CXCL8 also inhibited (to much the same degree) the LPS-induced production of IL-10. The compounds that did not inhibit the LPS-induced production of TNF-α, CCL3 and CXCL8 did not alter IL-10 production either (Figure 6). Budesonide (10−8 M) also markedly reduced IL-10 production (by 95%; Table 2).
DISCUSSION
Our present results demonstrated that (i) TAS2Rs are indeed expressed in human LMs, and (ii) TAS2R agonists inhibit LPS-induced cytokine release by LMs, a process that is not mediated by the release of IL-10.
TAS2Rs were upregulated in blood leukocytes from patients with severe, treatment-resistant asthma. Overall, TAS2R expression levels are higher in T lymphocytes than in monocytes in blood from adult patients with asthma (Orsmark-Pietras et al., 2013). The relatively weak expression of TAS2Rs in monocytes versus lymphocytes (i.e., B, T and natural killer cells) has been confirmed in healthy donors. The most highly expressed TAS2Rs in LMs were TAS2Rs 8, 14, 19, 39 and 46, and TAS2Rs 31, 38 and 45 after exposure to LPS, which highlights differences in expression patterns between human monocytes and LMs.

FIGURE 4 | Concentration-response curves for relatively selective TAS2R agonists (colchicine, chloroquine, strychnine, and dapsone) with regard to the LPS-induced production of the pro-inflammatory cytokines TNF-α, CCL3, and CXCL8. Lung macrophages from 5 to 9 patients were stimulated with LPS (10 ng.mL−1) in the absence or presence of TAS2R agonists. *p < 0.05, **p < 0.01, ***p < 0.001 compared with LPS alone.

TABLE 2 | Human LMs were incubated for 1 h in the presence of budesonide (10 nM) or vehicle before being stimulated with LPS (10 ng.mL−1) for 24 h. The concentrations of the cytokines in the culture supernatant were measured using ELISAs. The data are reported as the mean ± SEM of five independent paired experiments. *Indicates a significant difference relative to LPS alone (**p < 0.01 and ***p < 0.001).

Since almost all the lung tissue donors were smokers or ex-smokers, we cannot rule out the possibility that smoking had altered the expression of TAS2Rs by LMs. The present work was limited by the fact that it did not investigate TAS2R expression at the protein level (using flow cytometry or Western blotting, for example). Due to the very limited availability of suitable antibodies, studies of the cell surface expression of TAS2Rs on human blood monocytes or monocyte-derived macrophages have been mainly restricted to TAS2R38 and TAS2R43/31 (Malki et al., 2015; Maurer et al., 2015; Gaida et al., 2016; Tran et al., 2018). Nevertheless, our present results suggest that these two TAS2Rs are not involved in the inhibitory effects of the TAS2R agonists on the LPS-induced production of cytokines.
There are very few specific antagonists for use as chemical probes of TAS2R function. It has been reported that one antagonist (GIV3727) fully suppressed the activation of six TAS2Rs. GIV3727 resembles bitter agonists in that it acts on several receptors (Behrens and Meyerhof, 2013). Some 6-methoxyflavanones have been described as reversible, insurmountable antagonists of TAS2R39 (Roland et al., 2014). However, this TAS2R subtype is unlikely to be involved in inhibition of the LPS-induced production of cytokines by LMs. Given the current lack of appropriate specific TAS2R antagonists, we sought to infer the receptor subtypes involved in the inhibitory effect of the bitter taste compounds by cross-tabulation of the results for the effect of various TAS2R agonists on LPS-induced cytokine production. In Meyerhof et al.'s (2010) extensive work on HEK cells expressing the different hTAS2Rs, the researchers described the molecular receptive ranges of the 25 human TAS2Rs vs. 104 natural or synthetic bitter compounds. The "threshold concentrations" (defined as the minimum concentrations that elicit a response) and the potencies (EC 50 ) determined for some compounds using calcium imaging analysis of transfected HEK cells may not be transposable to the LMs. However, Meyerhof et al.'s (2010) pioneering work was the basis for our choice of the different TAS2R agonists used in the present study (Table 1).
One of the strengths of the present study is its use of a broad panel of TAS2R agonists. One of the limitations relates to potential non-TAS2R-mediated (off-target) activities of some of the agonists, which might have interfered with our analysis and interpretation of the results. For example, the lysosomotropic compound chloroquine inhibits lysosomal functions and also reportedly causes a concentration-dependent suppression of the LPS-induced release of TNF-α, IL-1β, and IL-6 in human monocytes and in monocyte/macrophage cell lines (U937, THP-1, and RAW 264.7) (Jang et al., 2006; Schierbeck et al., 2010). The decrease in TNF-α release has been attributed to chloroquine's ability to block the conversion of pro-TNF-α to mature TNF-α, whereas the decrease in IL-1β and IL-6 release has been attributed to destabilization of the corresponding mRNAs (Jang et al., 2006; Schierbeck et al., 2010). Furthermore, chloroquine reportedly reduces cell viability in vitro at concentrations above 100 µM (Jeong and Jue, 1997; Jang et al., 2006; Schierbeck et al., 2010). However, chloroquine caused only a very weak increase in LDH release in the present study; moreover, the inhibition of LPS-induced cytokine production was significant at 100 µM in the previous studies (Jeong and Jue, 1997; Jang et al., 2006; Schierbeck et al., 2010) and was substantially complete in the present study, thus ruling out a cytotoxic effect. Macrolides have much the same cationic and lysosomal properties as chloroquine. In the murine monocyte/macrophage cell line J774, four macrolide antibiotics (including erythromycin and azithromycin) reduced the LPS-stimulated production of TNF-α, IL-1β, and IL-6 in a concentration-dependent manner (up to 80 µM) (Ianaro et al., 2000). The impairment of lysosomal functions by azithromycin and chloroquine deregulates TLR4 recycling/signaling and phospholipase activation, leading to an anti-inflammatory phenotype in LPS-stimulated J774 cells (Munic et al., 2011; Nujic et al., 2012). In human monocytes, azithromycin (50 µM) precisely mirrored the effects of chloroquine on LPS-induced cytokine production, with lower levels of some cytokines (CCL22 and CXCL11), higher levels of others (CCL2 and CCL18), and no changes in TNF-α and IL-6 levels (Vrancic et al., 2012). The effects of azithromycin and chloroquine may not be entirely due to impairments in lysosomal functions or in signaling pathways related to NF-κB activation, cellular accumulation and phospholipid binding (Vrancic et al., 2012). To the best of our knowledge, only one study has shown that certain macrolides (clarithromycin and azithromycin, but not erythromycin) can inhibit cytokine production by human alveolar macrophages; clarithromycin was more effective than azithromycin at ∼10 µM (Cai et al., 2013). In the present study, erythromycin suppressed LPS-induced cytokine production (TNF-α, CCL3, and CXCL8) at a ∼10-fold higher concentration in LMs than in J774 cells. This effect is probably related (at least in part) to activation of TAS2R10 (Meyerhof et al., 2010).
In monocytes and macrophages, cytoskeletal microtubules are involved in a number of cell activities, including cytokine production. Relative to microtubules in monocytes, alveolar macrophage microtubules are longer, more numerous and much more stable, and LPS increases the number and stability of monocyte microtubules still further (Allen et al., 1997). These differences might explain why treatment with 25 µM colchicine (a microtubule-depolymerizing drug) gave a relative increase in LPS-induced IL-1β release and a relative decrease in LPS-induced TNF-α release by human monocytes but had much weaker effects on human alveolar macrophages (Allen et al., 1991). In the present study, however, colchicine was associated with concentration-dependent inhibition of the production of TNF-α and CCL3; this inhibition was relatively weak at 10 µM but significant at 100 µM. These concentrations suggest that colchicine's effect is exerted (at least in part) through TAS2R activation.

FIGURE 6 | Concentration-response curves for TAS2R agonists with regard to the LPS-induced production of IL-10. Lung macrophages from 4 to 5 patients were stimulated with LPS (10 ng.mL−1) in the absence or presence of TAS2R agonists. *p < 0.05, **p < 0.01.
In human blood monocytes, murine monocyte cell lines, and bone marrow-derived dendritic cells stimulated with LPS, chloroquine and azithromycin significantly enhance IL-10 release (Sugiyama et al., 2007; Murphy et al., 2008; Vrancic et al., 2012). Since IL-10 has broad anti-inflammatory properties (Berkman et al., 1995; Armstrong et al., 1996), we assessed the effects of TAS2R agonists on the production of this cytokine by LMs. TAS2R agonists that inhibited the production of TNF-α, CCL3, and CXCL8 also inhibited IL-10 production to a similar extent. Budesonide also markedly reduced IL-10 production. Hence, in contrast to the situation in monocytes, IL-10 is not involved in the TAS2R agonists' inhibitory effects in LMs. The molecular mechanisms underlying the anti-inflammatory effects of TAS2R activation in human lung macrophages should be investigated as soon as selective, potent TAS2R agonists and antagonists become available. Taken as a whole, these results suggest that a battery of selective TAS2R agonists and antagonists will be needed to confirm our findings and then to establish the full list of receptors involved in the inhibition of LPS-induced cytokine release. With respect to drug efficacy, some of the active TAS2R agonists (quinine, chloroquine, phenanthroline, and erythromycin) led to essentially complete inhibition of LPS-induced cytokine production by LMs (to much the same extent as an optimal concentration of budesonide), which illustrates the potent anti-inflammatory activity of these compounds and the potential therapeutic value of inhaled TAS2R agonists in obstructive pulmonary diseases. Interestingly, chloroquine has been reported to have beneficial effects in asthma (Charous et al., 1998). Given the potential TAS2R-mediated effects observed here, it is time to review the actions of chloroquine and related compounds in airway diseases.
Another potential study limitation relates to our use of LMs harvested from lung tissues resected from current smokers or ex-smokers. Isolation from minced lung tissue provides the large number of cells required to perform paired series of experiments, such as those described here; this work would hardly be possible with macrophages obtained from bronchoalveolar lavages. It should be noted that our preparations might contain a small proportion of interstitial resident macrophages among the alveolar macrophages. However, exposure to bacterial products (including LPS) and inhaled TAS2R agonists is not restricted to the alveolar compartment, and the use of freshly isolated human LMs, derived mainly from the alveolar spaces but perhaps also from the lung tissue, might reflect the clinical response more closely.
About a quarter of the lung tissue donors studied here had COPD, and almost all were smokers or ex-smokers. The impact of smoking status and COPD on LPS-induced cytokine release by LMs varies markedly from one study to another and from one cytokine to another. Some researchers have reported that LPS-stimulated cytokine production by alveolar macrophages was higher in COPD patients and smokers than in healthy non-smokers (Hodge et al., 2011). However, there were no significant differences in cytokine secretion between current smokers with COPD and non-smokers with COPD (Hodge et al., 2011). In contrast, other studies have found that smoking reduces cytokine production by alveolar macrophages upon stimulation with LPS (Chen et al., 2007) or that current smoking status had no effect (i.e., the dose-response curve for any of the cytokines stimulated by LPS was similar in current smokers and ex-smokers) (Chen et al., 2007; Armstrong et al., 2009, 2011). Furthermore, the inhibitory effect of corticosteroids on the LPS-induced release of cytokines from LMs or alveolar macrophages isolated from bronchoalveolar lavages was similar (i) in non-smokers, current smokers and COPD patients and (ii) after a short (1 h) vs. long (16 h) plate adherence step in the isolation procedure (Armstrong et al., 2011; Plumb et al., 2013; Higham et al., 2014, 2015). Hence, smoking status and COPD impair LPS-induced cytokine release to a variable extent but do not influence the inhibitory effect of corticosteroids on LMs. Therefore, the inhibitory effects of TAS2R agonists observed in the present study are probably not restricted to LMs from ex-smokers or current smokers, and are likely to accurately reflect the in vivo responsiveness of human LMs. In conclusion, we demonstrated TAS2R transcript expression in human LMs and identified the TAS2R subtypes that are most likely to be involved in the inhibition of LPS-induced cytokine production. Furthermore, the potential value of TAS2Rs as drug targets for the treatment of chronic obstructive lung diseases is enhanced by the ability of TAS2R agonists to relax airway smooth muscle (Deshpande et al., 2010; Grassin-Delyle et al., 2013), even when β2-adrenergic receptors (the current cornerstone target of bronchodilators) are subject to tachyphylaxis.
DATA AVAILABILITY STATEMENT
All datasets generated for this study are included in the manuscript/supplementary files.
ETHICS STATEMENT
Experiments on human tissue were approved by the regional independent ethics committee board (Comité de Protection des Personnes Île de France VIII, Boulogne-Billancourt, France).
AUTHOR CONTRIBUTIONS
SG-D and HS conceived the study, performed the experiments, analyzed the data, and critically revised the manuscript. NM helped to draft the manuscript and critically revised it. CA, MB, and EN performed the experiments and analyzed the data. CF and L-JC analyzed the data and critically revised the manuscript. PD managed the study, analyzed the data, performed the statistical analysis, and drafted the manuscript. All authors read and approved the final manuscript.
FUNDING
The research was funded by the French Ministry of Higher Education (Chancellerie des Universités de Paris, Legs Gaston Poix).
Estrogen receptor β inhibits breast cancer cells migration and invasion through CLDN6-mediated autophagy
Background Estrogen receptor β (ERβ) has been reported to play an anti-cancer role in breast cancer, but the regulatory mechanism by which ERβ exerts this effect is not clear. Claudin-6 (CLDN6), a tight junction protein, acts as a tumor suppressor gene in breast cancer. Our previous studies have found that 17β-estradiol (E2) induces CLDN6 expression and inhibits MCF-7 cell migration and invasion, but the underlying molecular mechanisms are still unclear. In this study, we aimed to investigate the role of ERβ in this process and the regulatory mechanisms involved. Methods Polymerase chain reaction (PCR) and western blot were used to characterize the effect of E2 on the expression of CLDN6 in breast cancer cells. Chromatin immunoprecipitation (ChIP) assays were carried out to confirm the interaction between ERβ and CLDN6. Dual luciferase reporter assays were used to detect the regulatory role of ERβ on the promoter activity of CLDN6. Wound healing and Transwell assays were used to examine the migration and invasion of breast cancer cells. Western blot, immunofluorescence and transmission electron microscopy (TEM) were performed to detect autophagy. Xenograft mouse models were used to explore the regulatory effect of the CLDN6-beclin1 axis on breast cancer metastasis. Immunohistochemistry (IHC) was used to detect ERβ/CLDN6/beclin1 expression in breast cancer patient samples. Results Here, E2 upregulated the expression of CLDN6, which was mediated by ERβ. ERβ regulated CLDN6 expression at the transcriptional level. ERβ inhibited the migration and invasion of breast cancer cells through CLDN6. Interestingly, this effect was associated with CLDN6-induced autophagy. CLDN6 positively regulated the expression of beclin1, which is a key regulator of autophagy. Beclin1 knockdown reversed CLDN6-induced autophagy and the inhibitory effect of CLDN6 on breast cancer metastasis. Moreover, ERβ and CLDN6 were positively correlated, and the expression of CLDN6 was positively correlated with beclin1 in breast cancer tissues. Conclusion Overall, this is the first study to demonstrate that the inhibitory effect of ERβ on the migration and invasion of breast cancer cells was mediated by CLDN6, which induced the beclin1-dependent autophagic cascade. Electronic supplementary material The online version of this article (10.1186/s13046-019-1359-9) contains supplementary material, which is available to authorized users.
Background
Estrogen plays an important role in hormone-dependent breast cancer progression and metastasis. The effects of estrogen are primarily mediated through the estrogen receptors (ERs), ERα and ERβ [1]. The contribution of ERα to the normal development of the mammary gland and to the tumorigenesis and progression of breast cancer is essential [2]. ERα is expressed in approximately 10% of normal breast epithelial cells but in 50-80% of breast cancer cells [3]. Loss of ERα in breast cancer patients indicates poor prognosis, and ERα has been the principal biomarker for endocrine therapy in breast cancer [4]. However, only 70% of ERα-positive breast cancers respond to tamoxifen (an ER antagonist) treatment, and 30-40% of patients relapse during treatment and become resistant to endocrine therapy [5]. ERβ has the same structural domains as ERα, but its function is not exactly the same as that of ERα. The role of ERβ in breast cancer remains elusive, and ERβ is currently not used in the diagnosis or treatment of breast cancer patients [6]. Although a few studies claim that ERβ expression promotes the invasion and metastasis of breast cancer and that a high ERβ level is linked with poor prognosis [7], multiple studies have demonstrated that ERβ is an anti-oncogene in breast cancer. In contrast to ERα, clinical studies showed that ERβ levels are high in mammary epithelial tissues and decrease during tumor progression [3]. In triple negative breast cancer (TNBC), high expression of ERβ was significantly associated with good clinical outcome in patients treated with tamoxifen [8]. In vitro studies showed that ERβ expression inhibited the proliferation and the migratory and invasive properties of breast cancer cells [9,10]. Therefore, ERβ may be a potential target for novel therapeutic avenues in breast cancer. Nevertheless, the molecular mechanisms underlying the inhibitory effects of ERβ on breast cancer remain unidentified and need to be explored.
Recent studies have suggested that ERβ could also trigger autophagy [11,12]. Autophagy plays a key role in the maintenance of cellular homeostasis [13]. Dysregulation of autophagy has been implicated in cancers. A recent study reported that ERβ activation could inhibit breast cancer cell proliferation by reducing the G2/M phase as well as triggering autophagy [11]. ERβ-induced damage regulated autophagy modulator 2 (DRAM2)-mediated autophagy has been associated with a reduction of cancer cell proliferation in Hodgkin lymphoma (HL) cells [12]. However, few studies have reported that the inhibitory role of ERβ on migration and invasion is directly related to the modulation of autophagy in breast cancer. Furthermore, the regulatory mechanism of ERβ induced-autophagy is still unclear. Intriguingly, in this study, we found that ERβ induced autophagy and inhibited migration and invasion through claudin-6 (CLDN6) in breast cancer cells.
CLDN6 is a tight junction (TJ) protein that belongs to a family of transmembrane proteins with 4 transmembrane domains, and 27 members of this family have been identified [14,15]. As an important component of TJs, CLDNs not only play important roles in the classic barrier and fence functions of TJs but are also involved in regulating cellular communication and signaling [16]. CLDNs possess a carboxy-terminal PDZ-binding motif. This domain allows CLDNs to interact with cytoplasmic scaffolding proteins (PDZ domain-containing proteins), such as zonula occludens (ZO-1) and afadin, which are important for CLDNs to communicate with a multitude of signaling proteins [17]. In previous studies, our group first cloned the CLDN6 gene from mammary epithelial cells of COP rats and identified CLDN6 as a breast cancer suppressor gene [18,19]. We have reported that CLDN6 expression induces apoptosis and inhibits tumor growth, migration and invasion in breast cancer cells [20][21][22]. Moreover, in our recent studies, we found that CLDN6 overexpression not only strengthened the tight junctions in breast cancer cells but also induced a large number of autophagic vacuoles observed under transmission electron microscopy (TEM). A series of subsequent experiments demonstrated that CLDN6 induced autophagy, whereas the relationship between CLDN6-induced autophagy and breast cancer remains poorly investigated.
Our previous studies have shown that 17β-estradiol (E2) upregulates CLDN6 expression and hinders MCF-7 cell migration and invasion, but the molecular mechanisms are still unclear. Interestingly, in this study, we found that the expression of CLDN6 was increased and migration and invasion were hindered in both MCF-7 (ERα+/ERβ+/ GPR30+) and MDA-MB-231 (ERα−/ERβ+/GPR30-) cells after E2 treatment. Thus, we supposed that this E2-induced effect was not ERα-dependent, and we wanted to explore the role of ERβ in this process. In view of the abovementioned reports, we presumed that ERβ regulated CLDN6 expression and that ERβ-induced autophagy affected migration and invasion in breast cancer cells. In addition, little is known about the role of CLDN6 in ERβ-induced autophagy. Hence, the current study aimed to explore the regulatory role of ERβ on CLDN6 expression in breast cancer cells and the mechanism related to its biological functions.
In this study, we demonstrated a novel finding that ERβ induced autophagy and inhibited migration and invasion via CLDN6 in breast cancer cells. We also analyzed the molecular mechanisms underlying this effect in some details.
Wound healing assay
The cells from each group were seeded onto 6-well culture plates with the corresponding treatments. When the cells had grown to full confluence, wounds were created through the monolayers using a sterile pipette tip. The wounded areas were imaged at 0 h and 24 h with an inverted microscope (Olympus, Japan). ImageJ software (NIH, USA) was used to measure the wound widths from the images.
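The wound widths measured in ImageJ are commonly converted to a percent-closure metric; the exact metric is not specified in the text, so the following sketch shows one standard choice, with hypothetical widths:

```python
def wound_closure_percent(width_0h, width_24h):
    """Percent wound closure from widths measured in ImageJ at 0 h and 24 h."""
    return 100.0 * (width_0h - width_24h) / width_0h

# Hypothetical widths (arbitrary ImageJ units), one (0 h, 24 h) pair per imaged field
pairs = [(820, 410), (790, 350), (845, 460)]
closures = [wound_closure_percent(w0, w24) for w0, w24 in pairs]
print([round(c, 1) for c in closures])          # per-field closure (%)
print(round(sum(closures) / len(closures), 1))  # mean closure (%)
```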
Transwell assay
Transwell chambers (Costar, USA) coated with Matrigel were used to perform cell invasion assays. An equal number of cells (1 × 10^5) was loaded into Matrigel (BD Biosciences, USA)-precoated chambers in 200 μl of serum-free medium. NIH/3T3 conditioned medium, as the chemoattractant, was placed in the lower compartment of the chamber. After 24 h of incubation, cells were fixed in 4% paraformaldehyde and stained with 0.1% crystal violet. Non-invading cells were removed with cotton swabs. Images were photographed in three randomly selected fields of view at 200× magnification. NIH/3T3 conditioned medium was prepared by culturing NIH/3T3 fibroblasts for 48 h in serum-free medium [25]. The supernatants were collected and stored at −80°C.
Transmission electron microscopy (TEM)
Cells were exposed to DPN (100 nM) or vehicle for 24 h. The treated cells were fixed with 4% glutaraldehyde and post-fixed in 1% OsO4. Samples were dehydrated through a graded series of ethanol solutions and embedded in Eponate 12 epoxy resin. Ultrathin sections were counterstained with uranyl acetate and lead citrate. Observation and photography were carried out by transmission electron microscopy (FEI Tecnai Spirit, USA).
Nuclear/cytosol fractionation assay
2 × 10^6 MDA-MB-231 cells were seeded in 10-cm culture plates and treated with DMSO (control) or DPN (100 nM) for 24 h. The nuclear and cytosolic fractions were isolated with a Nuclear/Cytosol Fractionation Kit (Beyotime Biotechnology, Shanghai, China) according to the manufacturer's instructions. The subcellular protein extracts were then analyzed by western blot.
Dual luciferase reporter assay
MDA-MB-231 cells were seeded in 6-well culture plates and transfected with the pGL3-CLDN6 plasmid, which contains the CLDN6 promoter fragment (−2000/+250 bp), and the Renilla luciferase reporter plasmid (pRL-TK). At 48 h after transfection, cells were treated with DPN. Luciferase activities were measured using the dual-luciferase reporter assay system (Promega, San Luis Obispo, CA, USA) according to the manufacturer's protocol. Firefly luciferase activity was normalized to Renilla luciferase activity. Reporter plasmids were purchased from GeneChem Co., Ltd. (Shanghai, China).
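The normalization described above (firefly activity divided by the Renilla transfection control) reduces to an element-wise ratio. A short sketch with hypothetical luminescence readings:

```python
import numpy as np

def normalized_luciferase(firefly, renilla):
    """Firefly activity normalized to the Renilla (pRL-TK) transfection control."""
    return np.asarray(firefly, dtype=float) / np.asarray(renilla, dtype=float)

# Hypothetical raw luminescence readings, triplicate wells per condition
control = normalized_luciferase([12000, 11500, 12800], [9000, 8700, 9400])
dpn = normalized_luciferase([25500, 27100, 24800], [9100, 8800, 9300])

fold_activation = dpn.mean() / control.mean()
print(f"CLDN6 promoter activity: {fold_activation:.2f}-fold vs. control")
```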
Co-immunoprecipitation (co-IP) assay
After DPN treatment, cells were harvested and lysed with NP-40 lysis buffer. Protein lysates were incubated with the anti-UVRAG/anti-ZO-1 antibody or normal rabbit IgG and rotated overnight at 4°C to form the immune complex. The reaction mixture was then incubated with protein A/G plus-agarose beads (Santa Cruz Biotechnology, Santa Cruz, CA, USA) for 3 h at 4°C. The agarose beads were washed five times with cold washing buffer and heated in 40 μl of 1× SDS buffer at 100°C for 5 min. The samples were analyzed by western blot.
Immunohistochemistry (IHC)
Human breast cancer tissue specimens (HBreD070CS02) were purchased from Shanghai Outdo Biotech Co. The tissues (n = 70) included 11 paracancerous tissues, 4 intraductal carcinomas and 55 invasive ductal carcinomas. Sections were incubated with primary antibodies against ERβ (ab288, 1:200, Abcam), CLDN6 (V118, 1:200, Bioworld Technology) and beclin1 (D40C5, 1:200, CST) at 4°C overnight in a humidified container. After washing three times with phosphate-buffered saline (PBS), the sections were treated with the UltraSensitive™ SP (Mouse/Rabbit) IHC Kit according to the manufacturer's instructions (KIT-9710, MXB Biotechnology, Fuzhou, China). 3,3′-Diaminobenzidine (DAB) was used for color development. Immunostaining was evaluated by two pathologists using a blinded protocol. For ERβ, CLDN6 and beclin1, each tissue sample was scored according to its staining intensity (0, none; 1, weak; 2, moderate; 3, strong) multiplied by a point for the percentage of stained cells (≤25% of the cells: 1; 26-50%: 2; 51-75%: 3; >75%: 4). The range of this calculation was 0-12. The median value of the scores was employed to determine the cut-off. Cancers with scores above the cut-off value were considered to have high expression of the indicated molecule, and vice versa.
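The scoring rule described above (intensity 0-3 multiplied by a percent-positive point 1-4, with the median score as the high/low cut-off) is straightforward to encode. A sketch with hypothetical scores:

```python
import statistics

def ihc_score(intensity, percent_positive):
    """Staining score = intensity (0-3) x percent-positive point (1-4); range 0-12."""
    if percent_positive <= 25:
        point = 1
    elif percent_positive <= 50:
        point = 2
    elif percent_positive <= 75:
        point = 3
    else:
        point = 4
    return intensity * point

# Hypothetical cohort of (intensity, % positive) readings; the median score
# is used as the high/low expression cut-off, as described in the text.
scores = [ihc_score(i, p) for i, p in [(2, 60), (3, 80), (1, 20), (2, 30), (3, 55), (0, 10)]]
cutoff = statistics.median(scores)
labels = ["high" if s > cutoff else "low" for s in scores]
print(scores, cutoff, labels)
```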
Animal studies
An experimental model of lung metastasis was used to investigate the effects of the CLDN6-beclin1 axis on breast cancer metastasis. Three groups of five female Balb/c nude mice (4-6 weeks old, 18-20 g) were maintained in a standardized barrier system at the Experimental Animal Center of Jilin University. One group was injected with MDA-MB-231/NC cells, the second group with MDA-MB-231/CLDN6 cells and the third group with MDA-MB-231/CLDN6-sh-beclin1 cells. For each nude mouse, 1 × 10^6 cells in 100 μl of PBS were injected via the tail vein. After 4 weeks of observation, the mice were sacrificed. The lungs and major organs were removed and examined with a fluorescence imager (IVIS Spectrum, Caliper Life Sciences). These organs were then fixed, embedded, and sectioned, and metastasis was assessed by hematoxylin and eosin (H&E) staining.
Statistical analysis
All statistical analyses were performed using SPSS 13.0 (SPSS Inc., USA) and GraphPad Prism 7.0 (GraphPad, USA). Statistical significance was analyzed by one-way ANOVA or Student's t-test. The data are presented as the mean ± standard deviation (SD) from at least three independent experiments. Categorical data were analyzed by Fisher's exact test or the chi-square test. Correlations were analyzed using Pearson's correlation coefficients. P < 0.05 was considered statistically significant.
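As a concrete example of the correlation analysis mentioned above, Pearson's coefficient for hypothetical paired IHC scores (e.g., ERβ vs. CLDN6 in the same tumors) can be computed with scipy:

```python
import numpy as np
from scipy import stats

# Hypothetical paired IHC scores (0-12) for ERbeta and CLDN6 in the same tumors
erb_scores = np.array([2, 4, 6, 6, 8, 9, 12, 3, 5, 10])
cldn6_scores = np.array([1, 3, 6, 4, 9, 8, 12, 2, 6, 11])

r, p = stats.pearsonr(erb_scores, cldn6_scores)
print(f"Pearson r = {r:.2f}, p = {p:.4f}")  # positive correlation if r > 0 and p < 0.05
```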
E2 upregulates CLDN6 expression and inhibits the migration and invasion of breast cancer cells
Estrogen signaling pathways are classified as genomic and nongenomic. Genomic pathways depend on transcriptional modulation of target genes by ERs, and nongenomic pathways mediate rapid activation of signaling cascades, partly via the membrane-bound G protein-coupled estrogen receptor 1 (GPER1/GPR30) [27-30]. To evaluate a potential functional link between E2 and CLDN6 in breast cancer cells, we treated MCF-7 (ERα+/ERβ+/GPR30+) and MDA-MB-231 (ERα−/ERβ+/GPR30−) cells with DMSO or E2 (from 5 nM to 100 nM) for 24 h. The expression of CLDN6 was measured using semiquantitative RT-PCR and western blot. E2 treatment significantly upregulated the mRNA and protein expression of CLDN6 in a dose-dependent manner in MCF-7 cells, with 50 nM E2 showing the highest upregulation (Fig. 1a). Similar results were observed in MDA-MB-231 cells (Fig. 1b). However, the E2-induced effect on CLDN6 was not observed at the mRNA level in SK-BR-3 (ERα−/ERβ−/GPR30+) cells (Fig. 1c), and 50 nM E2 did not exert significant effects on CLDN6 protein levels in SK-BR-3 cells (Fig. 1f). These results indicated that E2 does not regulate CLDN6 through GPR30. We then treated MCF-7 and MDA-MB-231 cells with the nonselective estrogen receptor antagonist ICI 182,780 (ICI). ICI cotreatment counteracted the E2-induced effects on CLDN6 mRNA and protein levels (Fig. 1d-g), suggesting a direct involvement of the ERs in the regulation of CLDN6. Since we had previously demonstrated that CLDN6 overexpression leads to lower migration and invasion of breast cancer cells [20,22], we investigated whether E2-induced CLDN6 expression was involved in reducing breast cancer cell migration and invasion. In agreement with previous studies, E2 treatment caused a reduction of migration and invasion in MCF-7 cells as well as in MDA-MB-231 cells (Fig. 1h, i).
E2 regulates CLDN6 expression through ERβ
In this study, we found that the expression of CLDN6 was enhanced in MCF-7 and MDA-MB-231 cells by E2. Moreover, when MCF-7 and MDA-MB-231 cells were treated with 50 nM E2, the expression of ERβ was increased in both cell lines, whereas ERα expression was not changed in MCF-7 cells (Fig. 2a, b). Therefore, this E2-induced effect was not ERα-dependent, and we focused our attention on ERβ. To demonstrate that the increased expression of CLDN6 was mediated by ERβ, we knocked down ERβ in MDA-MB-231 cells. Three different ERβ short hairpin RNAs (shRNAs) were tested, and ERβ shRNA-1 was used in the following experiments (Fig. 2c). After depletion of ERβ, E2 treatment no longer increased CLDN6 expression in MDA-MB-231 cells (Fig. 2d). Thus, the inductive effect of E2 on CLDN6 is achieved through ERβ. To examine the regulatory role of ERβ on CLDN6, MDA-MB-231 cells were treated with various concentrations of diarylpropionitrile (DPN), a selective ERβ agonist. The effect of DPN on CLDN6 expression was similar to that of E2, and the most effective concentration of DPN was 100 nM (Fig. 2e). The immunoreactivity of CLDN6 was prominent along the edges of MDA-MB-231 cells upon DPN treatment, as demonstrated by immunofluorescence (Fig. 2f). By TEM, we observed that the tight junctions between cells were prominent in DPN-treated MDA-MB-231 cells (Fig. 2g). Moreover, the migration and invasion abilities of MDA-MB-231 cells treated with DPN were decreased, consistent with the effect of E2 (Fig. 2h). Next, we asked whether the inhibitory effect of ERβ on the migration and invasion of breast cancer cells was associated with CLDN6. We measured the effects of CLDN6 knockdown on the migration and invasion of MDA-MB-231 cells treated with DPN. The results showed that the DPN-induced reduction in migration and invasion could be rescued by transfection of CLDN6 shRNA into MDA-MB-231 cells (Fig. 2i). The CLDN6 knockdown efficiency was verified by western blot (Additional file 1: Figure S1A). To further explore the regulation of CLDN6 by ERβ, we used PHTPP (a selective ERβ antagonist) and ERβ shRNA. The results showed that depletion of ERβ abolished the DPN-induced CLDN6 expression in MDA-MB-231 cells (Fig. 2j, k). Conversely, overexpression of ERβ induced CLDN6 upregulation in SK-BR-3 and MDA-MB-231 cells after treatment with DPN (Fig. 2l, m). We also observed that the depletion or upregulation of ERβ did not affect the expression of CLDN6 in the absence of DPN (Fig. 2j-m). The ERβ transfection efficiency was verified by western blot (Additional file 1: Figure S1B). These results suggest that ERβ regulates CLDN6 expression in a ligand-dependent manner.
ERβ regulates CLDN6 expression at the transcriptional level
In the genomic pathway, ERs act as transcription factors and regulate the transcription of target genes directly, by binding to estrogen response elements (EREs) in the promoter region of genes, or indirectly, by interacting with other transcription factors such as stimulating protein 1 (Sp1) and activating protein-1 (AP1) [27,29]. Nuclear and cytoplasmic proteins were extracted and examined for ERβ expression. Our results demonstrated that nuclear ERβ protein was increased, whereas cytoplasmic ERβ protein was decreased, in DPN-treated MDA-MB-231 cells (Fig. 3a). Immunofluorescence experiments showed that DPN induced the translocation of ERβ from the cytoplasm to the nucleus in MDA-MB-231 cells (Fig. 3b). Next, we analyzed the CLDN6 promoter sequence through the UCSC (http://genome.ucsc.edu/) and JASPAR (http://jaspardev.genereg.net/) databases. We were not able to identify a conventional ERE [31], 5′-GGTCAnnnTGACC-3′, in the CLDN6 promoter. However, bioinformatics analysis in JASPAR predicted potential ERβ-binding sites (half-EREs) and Sp1 binding sites (GC-boxes). To examine whether ERβ or Sp1 could bind to the CLDN6 promoter region in cells, we performed ChIP assays. The primers for the ChIP assays covered the potential half-ERE and Sp1 binding sites (Fig. 3c). The results indicated that ERβ and Sp1 bound to the CLDN6 promoter in DPN-treated cells (Fig. 3d, e). To further confirm that ERβ regulated CLDN6 promoter activity, we performed dual luciferase reporter assays using a CLDN6 promoter fragment (−2000/…; Fig. 3c). Both ERβ and Sp1 are expressed in MDA-MB-231 cells. After transfection, the cells were treated with 100 nM DPN for 24 h. We found that luciferase activity was significantly increased in DPN-treated pGL3-CLDN6 cells compared with untreated pGL3-CLDN6 cells (Fig. 3f). These data provide evidence that ERβ regulates CLDN6 at the transcriptional level.
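As an illustration of this kind of promoter motif scan, a minimal Python sketch; the promoter string below is a made-up placeholder, not the actual CLDN6 sequence, which in practice would be retrieved from UCSC and scanned with the JASPAR matrices:

```python
import re

# Hypothetical promoter fragment (placeholder, not the real CLDN6 promoter).
promoter = "ATGGGTCAGCATGACCTTAGGGCGGTTGGTCACCA"

# Consensus ERE: 5'-GGTCAnnnTGACC-3' ('n' = any nucleotide).
full_ere = re.compile(r"GGTCA[ACGT]{3}TGACC")
# Half-ERE: a single arm of the palindrome.
half_ere = re.compile(r"GGTCA|TGACC")

print("full EREs at:", [m.start() for m in full_ere.finditer(promoter)])
print("half-EREs at:", [m.start() for m in half_ere.finditer(promoter)])
```

A position-weight-matrix scan, as JASPAR performs, additionally scores imperfect matches instead of requiring an exact consensus hit.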
ERβ induces autophagy and suppresses the migration and invasion of breast cancer cells
Previously, we observed the ultrastructure of MDA-MB-231 cells treated with DPN by TEM. Surprisingly, we found that a large number of autophagic vacuoles appeared in the cytoplasm of DPN-treated cells compared with control cells (Fig. 4a). It is known that the amount of LC3B-II represents the number of autophagosomes [32,33]. In this study, LC3B-II-labeled puncta formation was also observed through fluorescence microscopy in DPN-treated cells, but fewer puncta appeared in untreated cells (Fig. 4b). We analyzed the protein expression level of LC3B by western blot, and we found that the ratio of LC3-II/I was significantly increased in MDA-MB-231 cells treated with DPN (Fig. 4c). To further prove that DPN-activated ERβ induces autophagy, we detected LC3B expression in MDA-MB-231 cells with ERβ knockdown after DPN treatment. The western blot results showed that with ERβ knocked down, DPN did not induce an increase in the LC3-II/I ratio in MDA-MB-231 cells (Additional file 2: Figure S2A). Immunofluorescence assays showed that LC3B-II-labeled puncta were rarely observed in DPN-treated ERβ-knockdown cells (Additional file 2: Figure S2B). These results suggest that ERβ might be involved in the induction of autophagy. In fact, LC3-II can accumulate due to increased autophagosome formation or a block in the autophagosome-lysosome fusion process. To distinguish between these two possibilities, we assayed the LC3-II level in the presence of chloroquine (CQ), which blocks autophagosome-lysosome fusion and leads to the accumulation of autophagosomes. After treatment with DPN and CQ, the ratio of LC3-II/I was increased 2.5 times compared with that in MDA-MB-231 cells treated with DPN alone (Fig. 4d). Moreover, when MDA-MB-231 cells were treated with DPN and 3-methyladenine (3-MA) (an inhibitor of early phases of autophagy), the DPN-induced increase was dramatically reduced (Fig. 4d). These results suggest that ERβ is involved in the early steps of autophagosome formation. To understand the biological significance of ERβ-induced autophagy, we tested the migration and invasion of breast cancer cells treated with autophagy inhibitors. We found that both CQ and 3-MA effectively hindered the effects of DPN on migration and invasion (Fig. 4e, f).
ERβ induces autophagy via the CLDN6-mediated increase in beclin1
To identify the mechanisms underlying the ERβ activation-induced effects described above, we used western blot to detect biomarkers of the early stage of autophagosome formation. The results showed that beclin1, atg5, atg16 and LC3-II were significantly increased in MDA-MB-231 and SK-BR-3/ERβ (ERβ-overexpressing SK-BR-3) cells after DPN treatment (Fig. 5a, b). The expression of CLDN6 is shown in Additional file 3: Figure S3A. In our previous work, we found that CLDN6 induced autophagy in MCF-7 cells. Thus, we wanted to examine the relationship between CLDN6 and ERβ-induced autophagy. We knocked down CLDN6 in DPN-treated cells, and western blot analyses showed that beclin1, atg5, atg16 and LC3-II expression was decreased (Fig. 5a, b). Consistent changes in these autophagy markers were observed in CLDN6-overexpressing cells (Fig. 5c). The expression of CLDN6 is shown in Additional file 3: Figure S3B. Given that CLDN6 is strongly associated with autophagy in breast cancer cells, we wanted to explore the role of CLDN6 in autophagy. The above results showed that CLDN6 overexpression or CLDN6 knockdown led to an increase or decrease in beclin1 expression, respectively (Fig. 5a, b). The key autophagy regulator beclin1 might therefore be a downstream molecule of CLDN6. Hence, we presumed that CLDN6 regulated autophagy through beclin1. To verify this hypothesis, we depleted beclin1 in DPN-treated MDA-MB-231 and SK-BR-3/ERβ cells by using lentiviral vectors. Beclin1 knockdown efficiency was verified via western blot. We found that the expression levels of atg5, atg16 and LC3-II were significantly decreased in both cell lines (Fig. 5a, b). Similar regulatory roles of beclin1 in autophagy were confirmed in MDA-MB-231/CLDN6, SK-BR-3/CLDN6 and MCF-7/CLDN6 cells (Fig. 5c). Taken together, these results showed that the regulation of autophagy by CLDN6 was beclin1-dependent. Next, we addressed the question of how CLDN6 regulates beclin1. It is noteworthy that UV-radiation resistance associated gene (UVRAG) is a positive regulator of beclin1 [34]. UVRAG forms a complex with beclin1 and is involved in autophagosome formation [34]. Theoretically, UVRAG can bind to proteins containing Src homology 3 (SH3) domains [35].
Interestingly, ZO-1 contains a C-terminal SH3 domain [36]. ZO-1 is a scaffold protein that directly binds to PDZ motifs at the extreme C-terminus of CLDNs [37]. Thus, we speculated that ZO-1 and UVRAG might function as bridge molecules for the CLDN6-beclin1 interaction. Western blot analyses showed that the expression levels of ZO-1 and UVRAG were increased in both DPN-treated and CLDN6-overexpressing cells (Fig. 5d). Because endogenous CLDN6 expression in MDA-MB-231 cells was low, we used DPN-treated cells to perform co-IP assays. The results revealed that ZO-1 and UVRAG indeed had binding affinities for CLDN6 and beclin1 in DPN-treated cells (Fig. 5e, f). These results indicated that CLDN6 and ZO-1/UVRAG/beclin1 formed complexes with other autophagy proteins to regulate autophagosome formation in breast cancer cells.
CLDN6 inhibits migration, invasion and metastasis of breast cancer through beclin1 in vitro and in vivo
When beclin1 was silenced in DPN-treated MDA-MB-231 cells (with DPN-induced CLDN6 expression) and in MDA-MB-231/CLDN6 cells (overexpressing exogenous CLDN6), the migration and invasion abilities of both cell lines increased, and the inhibitory effects of CLDN6 were abrogated (Fig. 6a, b). These results suggested that CLDN6 inhibited the migration and invasion of breast cancer cells in vitro through beclin1. In vivo, IVIS imaging and H&E staining of lung sections showed that lung metastasis was reduced in the MDA-MB-231/CLDN6 group, and this reduction was abolished by beclin1 knockdown (Fig. 6c-e). Figure 6c shows representative H&E staining of lung tissues from mice in the three groups. Surprisingly, we also observed liver metastasis in the three groups. There were 4 cases of liver metastasis in both the MDA-MB-231/NC and MDA-MB-231/CLDN6-sh-beclin1 groups, whereas only 1 case was found in the MDA-MB-231/CLDN6 group (Fig. 6f-h). Our results showed that CLDN6 inhibited breast cancer metastasis through beclin1 in vivo.
The clinical correlation analyses of ERβ, CLDN6 and beclin1 expression and prognosis in breast cancer patients
We next evaluated the expression of ERβ, CLDN6 and beclin1 in tumor samples from 70 breast cancer patients by immunohistochemical (IHC) staining of the tissue microarray. The expression of ERβ, CLDN6 and beclin1 in each IHC sample was classified as either high (score > 8) or low (score ≤ 8), using the median of the IHC scores as the cut-off value. The relationship between ERβ, CLDN6 and beclin1 expression and clinicopathologic features was further evaluated and the results are summarized in Table 1.
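For concreteness, a minimal sketch of the dichotomization and one of the association tests summarized in Table 1, with simulated scores standing in for the real IHC data (scipy assumed; the original analysis software is not specified here):

```python
import numpy as np
from scipy.stats import chi2_contingency

rng = np.random.default_rng(0)

# Hypothetical IHC scores for 70 patients (stand-ins for the real data).
ihc_scores = rng.integers(0, 13, size=70)
high = ihc_scores > 8            # cut-off from the text: high if score > 8

# Hypothetical binary clinicopathologic feature (e.g., small tumor size).
small_tumor = rng.integers(0, 2, size=70).astype(bool)

# 2x2 contingency table of expression level vs. feature.
table = np.array([
    [np.sum(high & small_tumor), np.sum(high & ~small_tumor)],
    [np.sum(~high & small_tumor), np.sum(~high & ~small_tumor)],
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")
```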
We found that high expression of beclin1 was significantly associated with smaller tumors (P = 0.026) and that low expression of ERβ was significantly associated with early TNM stage (TNM, P = 0.023). No association existed between the expression of ERβ, CLDN6 or beclin1 and patient age, lymph node metastasis or pathological grade. Figure 7a shows representative IHC staining of ERβ, CLDN6 and beclin1 in breast tissues. Pearson correlation analyses indicated that ERβ and CLDN6 were positively correlated and that the expression of CLDN6 was positively correlated with beclin1 in breast cancer tissues (Fig. 7b, c). There was no correlation between the expression of ERβ and beclin1 (P = 0.243). In addition, we further assessed the correlation between ERβ, CLDN6 and beclin1 expression and the overall survival (OS) and disease-free survival (DFS) of breast cancer patients in the Kaplan-Meier plotter database (http://www.kmplot.com) by Kaplan-Meier analyses. The results showed that breast cancer patients with high beclin1 expression had significantly longer OS than patients with low beclin1 expression, and patients with high ERβ, CLDN6 and beclin1 expression levels showed longer DFS than the respective low expression groups (Fig. 7d). Taken together, these findings suggest that high expression of ERβ, CLDN6 or beclin1 indicates a better prognosis for breast cancer patients.
[Fig. 6 legend: c IVIS imaging of lung metastatic sites and representative H&E staining of lung sections from the three groups (n = 5). d Number of mice with lung metastasis in each group. e Number of lung nodules in each group. f Representative images of liver metastasis and H&E staining in the three groups (n = 5); black arrowheads indicate metastatic sites. g Number of mice with liver metastasis in each group. h Number of liver nodules in each group. Data are presented as mean ± SD and are representative of three independent experiments. *P < 0.05, **P < 0.01.]
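A minimal sketch of the correlation and survival analyses, with simulated data standing in for the cohort (scipy and lifelines assumed; the actual analyses used the tissue microarray scores and the Kaplan-Meier plotter database):

```python
import numpy as np
from scipy.stats import pearsonr
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

rng = np.random.default_rng(1)

# Simulated paired IHC scores (the paper reports r = 0.4652 for ERβ/CLDN6).
erb = rng.normal(6, 2, 67)
cldn6 = 0.5 * erb + rng.normal(0, 2, 67)
r, p = pearsonr(erb, cldn6)
print(f"Pearson r = {r:.3f}, p = {p:.4f}")

# Simulated survival times (months) and event indicators per expression group.
t_high, e_high = rng.exponential(60, 40), rng.integers(0, 2, 40)
t_low, e_low = rng.exponential(40, 40), rng.integers(0, 2, 40)

kmf = KaplanMeierFitter()
kmf.fit(t_high, event_observed=e_high, label="high expression")

res = logrank_test(t_high, t_low,
                   event_observed_A=e_high, event_observed_B=e_low)
print(f"log-rank p = {res.p_value:.3f}")
```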
Discussion
Our previous studies have shown that E2 upregulated CLDN6 expression and hindered MCF-7 cell migration and invasion, but the molecular mechanisms remained unclear. In this study, we found that the regulation of CLDN6 by E2 and its biological effects were mediated by ERβ and were not ERα-dependent. Furthermore, the inhibitory effects of ERβ on the migration and invasion of breast cancer cells were mediated through CLDN6-induced autophagy. To our knowledge, this is the first study to link the inhibitory role of ERβ in migration and invasion with autophagy modulation in breast cancer cells.
Estrogen has pivotal roles in the development and progression of breast cancer. The biological effects of estrogen are mediated by the intracellular ERs (ERα and ERβ) and the cell-membrane receptor GPR30. GPR30 is a member of the G-protein-coupled receptor (GPCR) superfamily. Estrogen mediates rapid nongenomic actions and cell biological functions partly through GPR30 [38,39]. Our study found that E2 induced the expression of CLDN6 in MCF-7 (ERα+/ERβ+/GPR30+) and MDA-MB-231 (ERα−/ERβ+/GPR30−) cells but not in SK-BR-3 (ERα−/ERβ−/GPR30+) cells. Moreover, cotreatment with ICI (a nonselective estrogen receptor antagonist) counteracted the E2-induced effects. These results indicated that the regulation of CLDN6 by E2 was not mediated through GPR30. ERα is a key target of endocrine therapy and induces proto-oncogene expression to stimulate cell proliferation in breast cancer [40]. Recently, a few studies have reported that ERα suppresses the migration and invasion of breast cancer cells by upregulating cytoskeleton protein expression [41-43]. Our results showed that the expression of CLDN6 was enhanced and migration and invasion were hindered in both MCF-7 and MDA-MB-231 cells treated with E2. Therefore, this E2-induced effect was not ERα-dependent, and we wanted to explore the role of ERβ in this process. The role of ERβ in cancer cells is still poorly understood. Growing evidence supports the idea that ERβ is a tumor suppressor. ERβ has been shown to increase integrin α1β1 levels and inhibit the migration of breast cancer cells [44]. In addition, ERβ decreased basal-like breast cancer cell invasion by promoting degradation of the epidermal growth factor receptor (EGFR) [45]. Our study found that the expression of ERβ was increased in MCF-7 and MDA-MB-231 cells treated with E2. E2 did not induce CLDN6 expression in MDA-MB-231 cells with ERβ knockdown. DPN is a selective ERβ agonist, and DPN-mediated ERβ activation could also upregulate CLDN6 expression and reduce the migration and invasion of MDA-MB-231 cells. When ERβ was knocked down in breast cancer cells, the expression of CLDN6 was not increased after DPN treatment. The effect of DPN on CLDN6 was similar to that of E2 in breast cancer cells. These results indicated that ERβ played a pivotal role in the regulation of CLDN6 by E2 and its effect on biological behavior. Although many studies have supported the idea that ERβ expression inhibits the migration and invasion of breast cancer cells, Zoi et al. found that ERβ knockdown decreased the expression of matrix metalloproteinases (MMPs) and promoted mesenchymal-epithelial transition (MET) to suppress MDA-MB-231 cell migration and invasion [7]. Hence, the precise role of ERβ and its regulatory mechanism in breast cancer still need to be elucidated.
Our previous study demonstrated that CLDN6 is a breast cancer suppressor gene. CLDN6 overexpression suppressed the migration and invasion of breast cancer cells by reversing epithelial-mesenchymal transition (EMT) [20,22,46]. Then, we examined whether the ERβ-induced effect on biological behavior was related to CLDN6. Here, our results showed that CLDN6 knockdown rescued the effects of ERβ on the migration and invasion of MDA-MB-231 cells. Furthermore, we investigated the regulatory effect of ERβ on CLDN6. ERβ is a member of the nuclear receptor superfamily and functions as a hormone-dependent transcription factor [1]. The classical model of ERβ activation is that after ligand binding to the receptor, ligand-ER complexes directly bind to EREs in the promoter regions of target genes or indirectly interact with other transcription factors (Sp1 or AP1) to activate the transcription of target genes [28,29]. When the ligand is absent, ERβ can still be activated by growth factor receptors (insulin-like growth factor 1 receptor (IGF1R) and EGFR), which can stimulate protein kinase cascades that phosphorylate and activate the transcriptional activity of ERβ [47,48]. In this study, we found that the depletion or upregulation of ERβ did not affect the expression of CLDN6 in the absence of DPN. However, CLDN6 expression in breast cancer cells overexpressing ERβ was increased after DPN treatment. Therefore, we believe that the regulation of CLDN6 by ERβ is a ligand-dependent pathway in breast cancer cells. Burek et al. reported that E2-induced CLDN5 upregulation was mediated by binding to the EREs and Sp1 sites of the CLDN5 promoter in brain endothelial cells [49]. Furthermore, we analyzed the promoter region of CLDN6 and identified imperfect EREs and potential Sp1 transcription factor binding sites. ChIP assays showed that the regulation of the CLDN6 promoter could take place either directly by interaction with ERβ or indirectly by interaction with Sp1. Dual luciferase reporter assays showed that ERβ regulated CLDN6 promoter activity. The exact molecular mechanism and promoter elements responsible for ERβ regulation require further investigation. We report for the first time, to our knowledge, that the inhibitory effects of ERβ on migration and invasion are mediated by CLDN6 and that CLDN6 is a target gene of ERβ in breast cancer.
[Fig. 7 legend: Clinical correlation analyses between ERβ, CLDN6 and beclin1 expression and prognosis in breast cancer patients. a Representative IHC images of ERβ, CLDN6 and beclin1 from the tissue microarray (×400); in breast cancer tissues and adjacent tissues, ERβ was observed in the nucleus and cytoplasm, CLDN6 in the cytoplasm and plasma membrane, and beclin1 mainly in the cytoplasm; "low" and "high" indicate low and high expression. b Correlation between ERβ and CLDN6 expression in the tissue microarray (Pearson correlation test, n = 67, r = 0.4652, P < 0.0001). c Correlation between CLDN6 and beclin1 expression (Pearson correlation test, n = 44, r = 0.3677, P = 0.0141). d Kaplan-Meier analysis of overall survival and disease-free survival of breast cancer patients by ERβ, CLDN6 and beclin1 expression in the Kaplan-Meier plotter database; statistical differences determined by log-rank test. Data are presented as mean ± SD.]
Autophagy is a basic catabolic process in which unnecessary or dysfunctional intracellular materials are degraded and recycled [50]. Autophagy is reported to play an oncosuppressive role in the tumor initiation step [51]. Surprisingly, our data indicated that ERβ induced autophagy in breast cancer cells. While previous studies have reported that ERβ triggers autophagy and inhibits cancer cell proliferation [12,52], few studies have investigated the role of ERβ-induced autophagy in the migration and invasion of breast cancer cells. In our study, autophagy inhibitors reversed the inhibitory effect of ERβ on the migration and invasion of breast cancer cells. These results indicated that the inhibitory role of ERβ on the migration and invasion of breast cancer cells was associated with ERβ-induced autophagy. Furthermore, we found that ERβ induced autophagy through CLDN6. CLDN6 is one of the key proteins in the formation of tight junctions (TJs) and plays important roles in maintaining the epithelial barrier, polarity and signal delivery. There are few studies on the relationship between CLDNs and autophagy. Recently, it has been reported that the autophagosome marker Atg16L colocalizes with CLDN5 in endocytosed vesicles transported across cells, indicating that the process of tight junction remodeling involves the regulation of autophagy [53]. Zhao Z. et al. discovered that CLDN1 regulated drug resistance by promoting autophagy, which was mediated by ULK1 phosphorylation, in non-small cell lung cancer [54]. In addition, J. Kim et al. reported that CLDN1 functions as an autophagy stimulator to increase autophagy flux and accelerate the degradation of SQSTM1/p62 [55]. In our previous investigation, we observed by TEM that CLDN6 overexpression resulted in the appearance of numerous autophagic vacuoles. Western blot analysis, IF and acridine orange (AO) staining demonstrated that CLDN6 induced autophagy in breast cancer cells. Nevertheless, none of the above studies has clarified the regulatory mechanism of CLDNs on autophagy.
[Fig. 8 legend: The proposed model for ERβ-induced autophagy inhibiting breast cancer cell migration and invasion. When ERβ binds its ligand (DPN), DPN-ERβ complexes bind directly to the ERE of the CLDN6 promoter and enhance CLDN6 expression; in addition, activated ERβ can interact with Sp1 and bind the Sp1 transcriptional regulation domains of the CLDN6 promoter to induce CLDN6 expression. ZO-1 and UVRAG act as bridge molecules for the CLDN6-beclin1 interaction. CLDN6 and ZO-1/UVRAG/beclin1 form complexes that serve as a platform for recruiting other autophagy regulatory proteins (atg5, atg16 and LC3-II) and induce autophagy to suppress the migration and invasion of breast cancer cells.]
Next, we wanted to unveil the mechanism by which the tight junction protein CLDN6 regulates autophagy. Various proteins participate in autophagy modulation. Beclin1, the first identified mammalian autophagy gene, is a haploinsufficient tumor suppressor and plays an indispensable role in the initiation of autophagy [56][57][58]. Strikingly, we found that the expression of beclin1 was consistent with that of CLDN6 in breast cancer cells. We also found that the expression of CLDN6 was positively correlated with beclin1 expression in breast cancer tissues. Moreover, silencing beclin1 reduced autophagy and reversed the inhibitory effect of ERβ/CLDN6 on the migration and invasion of breast cancer cells and attenuated the inhibitory effect of CLDN6 on metastasis in vivo. Our results are in line with previous evidence that autophagy inhibition upon beclin1 knockdown stimulates the migration and invasion of GL15 glioma cells [59]. Deletion of beclin1 has been shown to promote the invasion and metastasis of breast cancer cells by increasing the phosphorylation of AKT and ERK [60]. Furthermore, in this study, we found that the expression of ZO-1 and UVRAG was increased in CLDN6-overexpressing cells. Co-IP assays revealed that ZO-1 and UVRAG had binding affinities for CLDN6 and beclin1. Our results demonstrated that the scaffold protein ZO-1 and the autophagy regulatory protein UVRAG functioned as bridge molecules for the CLDN6-beclin1 interaction. This is the first study to report that CLDN6 and ZO-1/UVRAG/beclin1 form complexes with other autophagy proteins to regulate autophagosome formation in breast cancer.
Conclusion
Our study reports the new finding that CLDN6 is a target gene of ERβ. Mechanistically, we demonstrated that the inhibitory effects of ERβ on the migration and invasion of breast cancer cells were mediated by CLDN6, which induced a beclin1-dependent autophagic cascade (Fig. 8). These findings provide fresh insight into the mechanism underlying the inhibitory effects of ERβ on breast cancer. Moreover, high ERβ, CLDN6 or beclin1 expression predicted a favorable prognosis in breast cancer patients. ERβ agonists and CLDN6 may offer novel therapeutic approaches for the treatment of breast cancer. More in vivo experiments are needed to validate these findings.
Feasibility of the Short Hospital Stays after Laparoscopic Appendectomy for Uncomplicated Appendicitis
Purpose The aim of this study was to evaluate the feasibility of short hospital stays after laparoscopic appendectomy for uncomplicated appendicitis. Materials and Methods The records of 142 patients who underwent laparoscopic appendectomy for uncomplicated appendicitis from January 2010 to December 2012 were analyzed retrospectively. Patients were allocated to an early (<48 hours) or a late (>48 hours) group by postoperative hospital stay. Postoperative complications and readmission rates in the two groups were evaluated and compared. Results Overall mean patient age was 50.1 (±16.0) years, and mean hospital stay was 3.8 (±2.8) days. Fifty-four patients (group E, 38.0%) were discharged within 48 hours of surgery, and 88 patients (group L, 62.0%) stayed more than 48 hours. Overall complication rates were similar in the two groups (14.8% vs. 21.6%, p=0.318), and wound complications (13.0% vs. 12.5%), postoperative bowel obstruction (1.9% vs. 2.3%), and abdominal pain (1.9% vs. 3.4%) were not significantly different. Conclusion Patients that undergo laparoscopic appendectomy due to uncomplicated appendicitis may be safely discharged within 48 hours. Further study should be conducted to determine the optimal length of hospital stay after laparoscopic appendectomy to reduce hospital costs.
INTRODUCTION
Laparoscopic surgery for appendicitis is widely performed worldwide and is now regarded as the gold standard. Compared with open appendectomy, laparoscopic appendectomy has the advantages of less postoperative pain, better cosmesis, and shorter hospital stays. Furthermore, laparoscopic surgery probably reduces hospital costs by shortening hospital stay and providing early postoperative recovery. Recently, many efforts have been made to reduce hospital costs, and the topic of shortening hospital stay after appendectomy has attracted some interest, especially in the context of one-day surgery, [1-4] because hospital stays after appendectomy usually exceed 3 days. [5-9] The reports issued on this topic have cited hospital stays of from 2.2 to 6 days after uncomplicated appendectomy. [6-11] With regard to reducing costs, non-operative management has been tried and reported to be both safe and feasible for uncomplicated appendicitis. [12] However, surgical intervention remains the treatment standard.
This study was approved by the Severance Hospital Institutional Review Board (Institutional Review Board approval no. 4-2013-0785).
Characteristics of the study subjects
One hundred forty-two patients were included in this study. Mean patient age was 50.1 (±16.0) years, and 72 patients (50.7%) were men. Overall mean hospital stay was 3.8 (±2.8) days, and the median time elapsed from entering the emergency room to surgery was 479.5 minutes (IQR 353.0-708.0). Fifty-four patients (38.0%) were discharged within 48 hours (group E), and 88 stayed more than 48 hours (group L). Patient baseline characteristics are summarized in Table 1. Time to diet and duration of antibiotic use after surgery differed significantly between the two groups (one vs. two days, and two vs. four days, respectively; p<0.001 for both).
Patient outcomes according to hospital stay
Complication rates were similar in the two study groups (p=0.318), and no intergroup difference was found with respect to individual complications, that is, wound complications, obstruction, postoperative abdominal pain, pneumonia, intra-abdominal abscess, urinary retention, or re-admission (p=0.936, p=1.000, p=1.000, p=1.000, p=0.288, and p=0.525) (Table 2).
The aim of this study was to evaluate the feasibility of short hospital stays after laparoscopic appendectomy for uncomplicated appendicitis.
MATERIALS AND METHODS
The medical records of patients who underwent laparoscopic appendectomy for uncomplicated appendicitis from January 2010 to December 2012 at Severance Hospital were reviewed retrospectively. In total, 538 patients were diagnosed as having appendicitis and all underwent appendectomy during the study period. Laparoscopic appendectomy was performed in 362 patients and open appendectomy in 43. Uncomplicated appendicitis was defined as an acutely inflamed appendix with no evidence of perforation or generalized peritonitis on computed tomography or based on operative findings. All patients were admitted through the emergency room and received surgical treatment and postoperative management at the physician's discretion. Only patients aged >18 years were included, and patients with a peri-appendiceal abscess, perforation, or drain insertion were excluded (Fig. 1). After applying these criteria, 142 patients were enrolled and allocated to one of two groups according to the duration of postoperative hospital stay, that is, to an early group (group E, <48 hours, n=54) or to a late group (group L, >48 hours, n=88). The discharge criteria used were as follows: 1) tympanic temperature <38.3°C, 2) a visual analogue scale (VAS) pain score of <4, 3) no aggravated physical signs, and 4) toleration of diet (>50%). However, these discharge criteria were not applied to all patients, because of the primary physician's or the patient's preference. Patient medical records were reviewed, and baseline characteristics, hospital stays, types and durations of antibiotics used, and postoperative complications, such as wound complications, intra-abdominal abscess development, urinary retention, pneumonia, postoperative bowel obstruction, abdominal pain, and readmission rate, were collated. Wound complications were defined as postoperative wound problems, including seroma and infection; wound seroma and infection were not differentiated because of a lack of records. Postoperative bowel obstruction was defined as the presence of gastrointestinal symptoms, such as nausea, vomiting, and abdominal pain with a VAS score of >5 on postoperative day 2 or later. Abdominal pain was defined as intolerable pain at the time of discharge. Complication rates were analyzed according to the length of hospital stay, types of antibiotics used, and the relation between complication occurrence and duration of antibiotic use. Statistical analysis was performed using the chi-square test, the t-test, and the Mann-Whitney U test in SPSS ver. 20 (SPSS Inc., Chicago, IL, USA). Statistical significance was accepted for p-values <0.05. Continuous variables are presented as means (±standard deviations) or as medians and interquartile ranges.
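To make the statistical comparisons concrete, a minimal Python sketch with simulated stand-in data (scipy equivalents of the SPSS tests named above; only the complication counts mirror the reported values):

```python
import numpy as np
from scipy.stats import chi2_contingency, mannwhitneyu, ttest_ind

rng = np.random.default_rng(2)

# Simulated data for the early (n=54) and late (n=88) groups.
age_e, age_l = rng.normal(50, 16, 54), rng.normal(50, 16, 88)
stay_e, stay_l = rng.exponential(1.5, 54), rng.exponential(4.0, 88)
compl_e, compl_l = 8, 19   # complications: 14.8% of 54, 21.6% of 88

# Normally distributed continuous variable: t-test.
print("t-test p =", ttest_ind(age_e, age_l).pvalue)
# Skewed continuous variable: Mann-Whitney U test.
print("Mann-Whitney p =", mannwhitneyu(stay_e, stay_l).pvalue)
# Categorical variable: chi-square test on the 2x2 table.
table = [[compl_e, 54 - compl_e], [compl_l, 88 - compl_l]]
print("chi-square p =", chi2_contingency(table)[1])
```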
DISCUSSION
It is widely known that laparoscopic surgery results in less postoperative pain, earlier recovery, and shorter hospital stays than open surgery. The selective application of outpatient laparoscopic cholecystectomy was advocated as early as 1990, [11] and has now become a recognized postoperative strategy after cholecystectomy. [12-14] Furthermore, an outpatient approach has been suggested after laparoscopic adrenalectomy and laparoscopic splenectomy. [15,16] However, no standards have been issued regarding the duration of hospitalization or antibiotic usage after laparoscopic appendectomy for uncomplicated appendicitis, probably because appendicitis is an acute inflammatory infectious disease, and postoperative complications and recovery are therefore largely dependent on underlying patient condition. Furthermore, hospital stay after laparoscopic appendectomy varies by country, surgeon's preference, and culture. In Korea, postoperative hospital stays after appendectomy for uncomplicated appendicitis range from 3.6 to 6.06 days, [6-9] compared with 2.2 to 3 days in other countries. [11,12] These longer hospital stays in Korea probably reflect a generous national public medical insurance system.
Several authors have concluded that outpatient laparoscopic appendectomy for uncomplicated appendicitis is feasible, [11,12,17] and reported no increase in complication rates among outpatients. However, the adoption of outpatient laparoscopic appendectomy would not be straightforward in some countries.

In the present study, we analyzed the medical records of 142 patients who underwent laparoscopic appendectomy for uncomplicated appendicitis. Patients were dichotomized by duration of hospitalization after appendectomy about a cutoff of 48 hours. To develop a consensual protocol and suggest a standard length of stay after laparoscopic appendectomy for uncomplicated appendicitis, we examined the safety of short hospital stays after laparoscopic appendectomy. Our analysis revealed no significant differences between the E and L groups in terms of age, body mass index, American Society of Anesthesiology score, or time from emergency room admission to surgery. However, members of the E group resumed an oral diet significantly sooner (1 day vs. 2 days). Traditionally, an oral diet is started after flatus passage, but early diet initiation has recently been attempted before flatus passage in uncomplicated situations. Furthermore, an early enteral diet has been recommended by several parenteral and enteral nutrition societies, [18,19] and has been actively adopted after colon surgery. Unfortunately, due to the retrospective nature of the present study, no information was available about time to first flatus. Nevertheless, we have tried to start early feeding within 1 day of appendectomy regardless of flatus or bowel motility, because patients who tolerate an early diet can be discharged earlier. In the present study, the E and L groups showed similar complication rates, and no significant increase in the postoperative obstruction rate was observed in the E group (1.9% vs. 2.3%, p=1.00). These findings suggest the feasibility of early discharge, which we attribute to an early diet after laparoscopic appendectomy in patients with uncomplicated appendicitis.

Interestingly, a significant difference was observed between the two study groups in terms of duration of antibiotic use (1 day vs. 2 days in the E and L groups, respectively; p<0.001), but no difference was observed for wound complications (13% vs. 12.5%). One case of pneumonia developed in the L group in a patient who had undergone lung lobectomy for lung cancer and was thus at high risk of developing a postoperative lung complication.

However, no significant difference was found between patients who developed a complication and those who did not in terms of duration of antibiotic use (p=0.133), and no significant difference was observed between patients prescribed or not prescribed oral antibiotics with respect to complication rate (p=0.654), suggesting that long-term antibiotic treatment might be unnecessary for preventing postoperative complications.

A major limitation of this study is that it was performed retrospectively without randomization. Therefore, our results are prone to selection bias, particularly regarding discharge. In particular, the initiation of diet was subject to surgeons' preference, and because of a lack of consensus, not all patients were provided an early diet. In addition, discharge times depended on patients' and surgeons' preferences despite meeting discharge criteria.

In conclusion, the present study suggests that discharge within 48 hours of laparoscopic appendectomy for uncomplicated appendicitis is probably safe and feasible and does not increase complications. Further studies should be undertaken to determine the optimal hospital stay after laparoscopic appendectomy for uncomplicated appendicitis, with the objective of reducing treatment costs.
Combining glucose and high-sensitivity cardiac troponin in the early diagnosis of acute myocardial infarction
Glucose is a universally available, inexpensive biomarker, which is increased as part of the physiological stress response to acute myocardial infarction (AMI) and may therefore help in its early diagnosis. To test this hypothesis, glucose, high-sensitivity cardiac troponin (hs-cTn) T, and hs-cTnI were measured in consecutive patients presenting with acute chest discomfort to the emergency department (ED) and enrolled in a large international diagnostic study (NCT00470587). Two independent cardiologists centrally adjudicated the final diagnosis using all clinical data, including serial hs-cTnT measurements, cardiac imaging and clinical follow-up. The primary diagnostic endpoint was index non-ST-segment elevation MI (NSTEMI). Prognostic endpoints were all-cause death, and cardiovascular (CV) death or future AMI, all within 730 days. Among 5639 eligible patients, NSTEMI was the adjudicated final diagnosis in 1051 (18.6%) patients. Diagnostic accuracy quantified using the area under the receiver-operating characteristics curve (AUC) for the combination of glucose with hs-cTnT and glucose with hs-cTnI was very high, but not higher than that of hs-cTn alone (glucose/hs-cTnT 0.930 [95% CI 0.922-0.937] versus hs-cTnT 0.929 [95% CI 0.922-0.937]; glucose/hs-cTnI 0.944 [95% CI 0.937-0.951] versus hs-cTnI 0.944 [95% CI 0.937-0.951]). In early presenters, a dual-marker strategy (glucose < 7 mmol/L and hs-cTnT < 5/hs-cTnI < 4 ng/L) provided very high sensitivity, comparable to that of slightly lower hs-cTn concentrations (cTnT/I < 4/3 ng/L) alone, and possibly even higher efficacy. Glucose was an independent predictor of 730-day endpoints. Our results showed that a dual-marker strategy of glucose and hs-cTn did not increase diagnostic accuracy when the markers were combined as continuous variables. However, a cutoff approach combining glucose and hs-cTn may provide diagnostic utility for patients presenting ≤ 3 h after symptom onset, while also providing important prognostic information.
Clinical Assessment
Routine clinical assessment and patient management have been described in detail previously [1]. The estimated glomerular filtration rate (eGFR) was determined using the chronic kidney disease epidemiology collaboration (CKD-MDRD) formula [2].
Blood sampling and laboratory methods
Glucose levels were measured from routine blood samples obtained at ED presentation on the clinical chemistry platform of each participating hospital. Blood sampling and the methods for the determination of hs-cTnT (Elecsys) and hs-cTnI (Architect) concentrations have been reported previously [1,3-5].
Follow-up
Patients were interviewed by telephone or in written form after 3, 12 and 24 months.
Contact was established with the patient and the family physician. Information regarding mortality was also obtained from the national death registries, the electronic medical record of the hospital, or the family physician's records.
Statistical analysis
To evaluate whether the presence of diabetes could be an effect modifier of glucose, an interaction between glucose and diabetes was fitted. Considering the low number of events in each subgroup (diabetic/non-diabetic patients) at 30 days (short term), and to avoid overfitting, the interaction was only assessed for 2-year outcomes. To compute the hazard ratios (Y-axis), the reference was a glucose value of 5.6 mmol/L in a non-diabetic patient.
The multivariable model had the same covariates used for the main analysis; the only difference was the interaction term. Hence, a likelihood ratio test for nested models was used to evaluate the interaction between glucose and diabetes. In addition, effect modification by diabetes was assessed visually with dose-response plots.
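A minimal sketch of the interaction analysis with simulated data (lifelines assumed; the covariates are reduced to glucose and diabetes for brevity, whereas the actual model contained the full covariate set of the main analysis):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from scipy.stats import chi2

rng = np.random.default_rng(3)
n = 500

# Simulated stand-in cohort.
df = pd.DataFrame({
    "glucose": rng.normal(6.5, 2.0, n),
    "diabetes": rng.integers(0, 2, n),
    "time": rng.exponential(600, n).clip(max=730),  # follow-up days
    "event": rng.integers(0, 2, n),
})
df["glucose_x_diabetes"] = df["glucose"] * df["diabetes"]

# Nested models: without and with the interaction term.
m0 = CoxPHFitter().fit(df[["glucose", "diabetes", "time", "event"]],
                       duration_col="time", event_col="event")
m1 = CoxPHFitter().fit(df, duration_col="time", event_col="event")

# Likelihood ratio test for nested models (1 extra parameter).
lr = 2 * (m1.log_likelihood_ - m0.log_likelihood_)
print("LR statistic =", lr, ", p =", chi2.sf(lr, df=1))
```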
Both prognostic outcomes (all-cause mortality and the composite of cardiovascular death and AMI) were plotted as Kaplan-Meier curves for 30-day and 730-day follow-up according to baseline hs-cTn and glucose concentrations. The log-rank test was used to assess differences between groups:
• Group 1: hs-cTn concentrations below the 99th percentile and glucose concentrations below 5.6 mmol/L.
• Group 2: hs-cTn concentrations below the 99th percentile and glucose concentrations of 5.6 mmol/L or above.
• Group 3: hs-cTn concentrations over the 99th percentile and glucose concentrations below 5.6 mmol/L.
• Group 4: hs-cTn concentrations over the 99th percentile and glucose concentrations of 5.6 mmol/L or above.
[Supplementary Figure S1 legend abbreviations: NSTEMI = non-ST-elevation myocardial infarction; SBP = systolic blood pressure; CPO = chest pain onset; DBP = diastolic blood pressure; CAD = coronary artery disease; AMI = acute myocardial infarction; ACE-inhibitor = angiotensin-converting-enzyme inhibitor; ARB = angiotensin receptor blocker; GFR-MDRD = glomerular filtration rate, modification of diet in renal disease equation.]

Supplementary Figure S2. ESC 0/1h hs-cTn algorithm and concept outlining how the 0h glucose concentration could be used in combination with the ESC 0/1h-algorithm. Patients meeting neither rule-out nor rule-in criteria are assigned to the observe zone. hs-cTnT indicates high-sensitivity cardiac troponin T; hs-cTnI, high-sensitivity cardiac troponin I.
ESC 0/1h-algorithm (hs-cTnT): rule-out if 0h hs-cTnT < 5 ng/L*, or if 0h hs-cTnT < 12 ng/L and delta hs-cTnT 0-1h < 3 ng/L; rule-in if 0h hs-cTnT ≥ 52 ng/L, or if delta hs-cTnT 0-1h ≥ 5 ng/L.
Glucose-extended algorithm (hs-cTnT): rule-out if 0h hs-cTnT < 5 ng/L and 0h glucose < 5.6 mmol/L*, or if 0h hs-cTnT < 12 ng/L, delta hs-cTnT 0-1h < 3 ng/L and 0h glucose < 5.6 mmol/L; rule-in if 0h hs-cTnT ≥ 52 ng/L, or 0h glucose ≥ 11.1 mmol/L, or delta hs-cTnT 0-1h ≥ 5 ng/L.
ESC 0/1h-algorithm (hs-cTnI): rule-out if 0h hs-cTnI < 4 ng/L*, or if 0h hs-cTnI < 5 ng/L and delta hs-cTnI 0-1h < 2 ng/L; rule-in if 0h hs-cTnI ≥ 64 ng/L, or if delta hs-cTnI 0-1h ≥ 6 ng/L.
Glucose-extended algorithm (hs-cTnI): rule-out if 0h hs-cTnI < 4 ng/L and 0h glucose < 5.6 mmol/L*, or if 0h hs-cTnI < 5 ng/L, delta hs-cTnI 0-1h < 2 ng/L and 0h glucose < 5.6 mmol/L; rule-in if 0h hs-cTnI ≥ 64 ng/L, or 0h glucose ≥ 11.1 mmol/L, or delta hs-cTnI 0-1h ≥ 6 ng/L.
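Expressed as code, the hs-cTnT arm of the glucose-extended algorithm becomes a simple triage function; a sketch using the thresholds above (the asterisked chest-pain-onset qualifier for direct rule-out is omitted for brevity):

```python
def triage_hs_ctnt(ctnt_0h, delta_0_1h, glucose_0h):
    """Glucose-extended ESC 0/1h triage, hs-cTnT arm (Supplementary
    Figure S2). Units: hs-cTnT in ng/L, glucose in mmol/L."""
    if ctnt_0h >= 52 or glucose_0h >= 11.1 or delta_0_1h >= 5:
        return "rule-in"
    if glucose_0h < 5.6 and (ctnt_0h < 5 or (ctnt_0h < 12 and delta_0_1h < 3)):
        return "rule-out"
    return "observe"

print(triage_hs_ctnt(3, 1, 5.0))    # rule-out
print(triage_hs_ctnt(60, 2, 5.0))   # rule-in
print(triage_hs_ctnt(20, 2, 5.0))   # observe
```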
Table 2. Baseline characteristics of patients with and without NSTEMI. Continuous variables (not normally distributed) were compared with the Mann-Whitney U test and are expressed as medians and interquartile ranges (IQR); categorical variables were compared with the chi-square test and are expressed as numbers and percentages.
Advanced Automatic Code Generation for Multiple Relaxation-Time Lattice Boltzmann Methods
The scientific code generation package lbmpy supports the automated design and efficient implementation of lattice Boltzmann methods (LBMs) through metaprogramming. It is based on a new, concise calculus for describing multiple relaxation-time LBMs, including techniques that enable the numerically advantageous subtraction of the constant background component from the populations. These techniques are generalized to a wide range of collision spaces and equilibrium distributions. The article contains an overview of lbmpy's front-end and its code generation pipeline, which implements the new LBM calculus by means of symbolic formula manipulation tools and object-oriented programming. The generated codes have only a minimal number of arithmetic operations. Their automatic derivation rests on two novel Chimera transforms that have been specifically developed for efficiently computing raw and central moments. Information contained in the symbolic representation of the methods is further exploited in a customized sequence of algebraic simplifications, further reducing computational cost. When combined, these algebraic transformations lead to concise and compact numerical kernels. Specifically, with these optimizations, the advanced central moment- and cumulant-based methods can be realized with only little additional cost compared with the simple BGK method. The effectiveness and flexibility of the new lbmpy code generation system is demonstrated by simulating Taylor-Green vortex decay and by the automatic derivation of an LBM algorithm to solve the shallow water equations.
Introduction.
Modern scientific computing is characterized by growing complexity on several fronts. On one side, the mathematical models become more advanced and powerful, so that the corresponding numerical methods and algorithms grow increasingly intricate and involved. At the same time, contemporary computer architectures become more difficult to program. This is especially true in high-end parallel computing, which is characterized by rapidly evolving new hardware architectures, such as general-purpose graphics processing units (GPUs).
Thus the central step of scientific computing, i.e., the step from a model to executable software, faces increasing challenges. The developers of algorithms and scientific software are squeezed in from two sides: increasingly complex simulation models must be realized in software for increasingly complex advanced supercomputer systems. We note also that legacy software often lacks performance portability. For example, it may present an almost prohibitive effort to port a legacy flow solver to a modern multi-GPU system. It may be speculation, but enormous computational resources are likely underused worldwide, since outdated software underperforms on the computer systems on which it runs.
At the same time, we observe that modern mathematical methods have evolved to a state where their derivation and analysis depend increasingly on researchers making use of symbolic formula manipulation packages. For the present article, we will consider the class of lattice Boltzmann methods (LBMs), whose most advanced variants could hardly be derived without using symbolic mathematical software. Given this situation, it is a natural step to integrate the algebraic manipulations routinely into the workflow of designing and implementing these methods. A first step in this direction was presented for the lbmpy package by Bauer et al. in [5].
We emphasize that the effect is twofold: In this approach, the derivation of the methods is consolidated by giving method developers a powerful tool to describe, analyze, and experiment with a range of method variants. But at the same time, the automated symbolic derivations can be used beyond just deriving the discrete approximations. Conventionally, this would then only be another set of mathematical formulas that still have to be transformed into software manually. Here, instead, the symbolic manipulation will be taken a step further to produce operational code directly. Through this automatic code generation approach, numerical kernels can be produced targeting different architectures (such as CPUs or GPUs), using different data structures, and also different memory layouts.
Furthermore, symbolic manipulation offers the possibility to optimize the codes in various ways, both mathematically by simplifying expressions, but also with hardware aware transformations, e.g., with the goal of providing an automatic vectorization. A specific advantage here is that the optimizations can go much beyond traditional optimizing or vectorizing compilers. The domain-specific code generator may leverage information contained in the numerical method's symbolic form to apply highly specific simplifying transformations. This enables automatic optimizations that would otherwise be accessible only to an expert programmer who is at the same time a highly educated model developer -a situation which in our experience is rarely found in scientific computing practice. In lbmpy this rare synthesis of expertises is encapsulated within the automatic code generation package itself.
The LBM is a mesoscopic method for the simulation of transport phenomena, originally developed as an extension of lattice gas automata [7]. More recently, it has been derived via discretization of the Boltzmann equation [28]. It discretizes the continuum of a fluid on a Cartesian lattice, modelling its state by the local distribution of particle velocities at the lattice sites. The first application of the LBM was as a solver for the Navier-Stokes equations (NSE), but it has since been applied to a variety of problems. These include the advection-diffusion equation [28], thermal flows [10], the shallow water equations [54,43,49], as well as multiphase and multicomponent flows [28,14,26,21,47].
At the core of the LBM algorithm lies the collision step, where the local particle distributions are relaxed towards a given equilibrium distribution. The equilibrium state depends on the macroscopic physics that the method is meant to simulate, and thus differs significantly between applications. The classical variant of the relaxation process is the discrete BGK collision operator [6,7,28], which employs a single relaxation rate. It has been generalized to multiple relaxation-time (MRT) collision operators, relaxing different statistical moments of the distributions with individual rates, thus improving the representation of simulated physics, as well as numerical stability [9,20,15,19,28]. Originally employing raw moments as a collision space, MRT methods were later extended to central moment [15,39,44] and cumulant [19] space.
As a complex numerical method, the LBM is subject to floating-point roundoff error, which may pollute solutions when the floating-point precision is insufficient or not properly controlled; the accuracy of the simulation may then deteriorate [48,16,33]. Early in the development of LBMs it was realized that, in hydrodynamic simulations, populations deviate only little from a background distribution typically given by the lattice weights [48]. Since this background distribution is invariant under collision, it may be subtracted from the population vector, thus significantly increasing the number of digits available for an accurate floating-point representation of the populations. We use the term zero-centered storage format for this algebraic transformation. It has been found to improve the LBM's numerical accuracy substantially [19,16,33].
Implementing all relevant variants of collision operators and application use cases in a single, generic software framework poses a significant challenge. The challenge becomes even greater when different hardware architectures require different optimizations. The creation of the modular and object-oriented software design of frameworks such as Palabos [31], OpenLB [27], TCLB [29], and waLBerla [3,45,46] is a key step toward solving this problem. However, as elsewhere, the generality of a software framework often inhibits specific performance optimizations. An automatic code generation approach, as described above, constitutes a potentially more flexible alternative to an extensive but rigid hand-coded framework. This idea has been successfully employed in the context of classical finite element discretizations, e.g. in [40,35], or for general stencil codes [36,41,23,34]. Metaprogramming techniques can also be found in some of the aforementioned LBM frameworks [29,27]. The original version of lbmpy adopts this paradigm in the form of a symbolic domain-specific language based on the raw moment MRT formalism. Its key functionality is the automatic generation of optimized implementations of single relaxation-time (SRT) [7], two relaxation-time (TRT) [20] and raw-moment MRT methods from their symbolic description.
The aim of this work is the extension of lbmpy to go beyond the existing functionality and support both central moment and cumulant collision spaces, to integrate the zero-centered storage format, and to provide the transformations needed to generate LBM codes for various applications beyond the Navier-Stokes equations. During the development process, we found the original structure of lbmpy to pose severe restrictions. Therefore it was necessary to re-design the original architecture of lbmpy. The outcome of this process is presented in this article. It has three essential components: First, we introduce a consolidated theoretical framework for modelling MRT methods. This defines a generalized formalism to specify the collision spaces and the equilibrium. Additionally, as a novel feature, this framework integrates the zero-centered representation of the distribution functions. In particular, we develop and present a generalization of zero-centered storage to collision spaces, and formalize its application to the equilibrium distribution.
Second, we present the front-end of lbmpy as an application programming interface (API) in the Python programming language for modelling LBMs on an abstract level. This constitutes a domain-specific language that closely reflects the theoretical framework of LBMs. It is implemented by means of the computer algebra package SymPy [37].
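To give a flavor of this front-end, a sketch of how a method description might be created; the function name and keyword arguments here are assumptions that may differ between lbmpy versions:

```python
# Illustrative only: names and keywords are assumptions and may vary
# across lbmpy versions.
from lbmpy.creationfunctions import create_lb_method

# Symbolically derive a D2Q9 cumulant method with a given viscous
# relaxation rate; the returned object carries the full symbolic
# description (collision quantities, relaxation rates, equilibria).
method = create_lb_method(stencil='D2Q9', method='cumulant',
                          relaxation_rate=1.9, compressible=True)
print(method)
```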
Third, we describe the modularized automatic procedure for symbolically deriving and optimizing the collision rule. This includes the presentation of two novel Chimera transforms between populations and the raw and central moment spaces, designed to minimize the number of arithmetic operations for the mappings between these spaces. We furthermore elaborate on an extensive collection of LBM-specific algebraic simplifications. Due to these improvements, our new code generator can produce efficient implementations of raw moment, central moment, and cumulant LBMs. As a surprising key result, we find that the latter of these advanced LBMs require only little arithmetic overhead compared to simple SRT and raw moment MRT methods. Thus, in many cases the advanced methods are expected to execute almost as fast as the simpler ones.
In summary, our article presents a versatile and flexible framework for the modelling and prototyping of complex LBM methods, combined with the techniques for the automatic generation of highly optimized computational kernels for these methods. The software framework itself is open-source and freely available under the GNU AGPLv3 license. The remainder of this paper is structured as follows. In section 2, we introduce our theoretical framework and present our generalization of zero-centered storage. Section 3 presents the modelling front-end of lbmpy. Section 4 elaborates on the automatic derivation procedure for collision rules, including our novel raw and central moment Chimera transforms. The same section introduces the extensive simplification procedure applied by lbmpy and shows its effectiveness for optimizing collision kernels. Finally, we demonstrate the versatility of the revised framework for modelling advanced LBMs, its applicability to their rapid implementation and testing, and the numerical advantages of zero-centering in MRT methods in section 5. Section 6 concludes the paper.
Theoretical Background.
This section presents the theoretical framework that serves as the basis of our code generation system. To this end, we first introduce our formalized approach to modelling populations and the zero-centered storage format. The multiple relaxation-time LBM collision process is introduced and generalized to arbitrary collision spaces. In particular, we discuss raw moment, central moment and cumulant spaces, as well as the reflection of zero-centered storage therein. Finally, we introduce a novel abstract interpretation of the equilibrium distributions.
Discrete Lattice Structure and Storage Format.
The lattice Boltzmann method acts on a d-dimensional Cartesian lattice, where a local particle distribution vector f with q entries is stored at each lattice site. Entry f_i is understood to model the relative number of particles moving at velocity ξ_i ∈ {−1, 0, 1}^d. We write ī for the index of the population f_ī moving in the direction opposite to f_i. The lattice structure is specified by the common DdQq stencil notation. The two- and three-dimensional first-neighborhood stencils found most prevalently in the literature are D2Q9, D3Q15, D3Q19 and D3Q27 [28]. Our work will also focus on these stencils. For illustration, we employ D2Q9 throughout this paper. D2Q9 comprises the full set of first-neighborhood velocities ξ_i ∈ {−1, 0, 1}^2; while their exact ordering is not relevant to our discussion, we fix ξ_0 = 0.
The lattice Boltzmann algorithm comprises the streaming step, where populations are advected according to their velocities, and the collision step, rearranging every node's populations by some collision operator Ω. Often, the algorithm also includes some type of source term f^F, which might be a momentum source modelling body forces acting on the fluid [22,25], or a mass source in an advection-diffusion setting [28]. The LBM update equations read

    (2.1)    f_i(x + ξ_i, t + 1) = Ω_i(f(x, t)) + f^F_i(x, t),    i = 0, …, q−1.

In the remainder of this paper, we shall focus on the collision step; the streaming step will only be mentioned briefly in section 3.
The original and major application of the LBM is as an algorithm to simulate incompressible, Newtonian fluid flow as modelled by the NSE. In such simulations, the local distribution vector is found to deviate only little from a certain background distribution [48,24,19,33]. We denote this background distribution f 0 ; its exact form will be discussed in subsection 2.4. This fact can be exploited to improve the LBM's numerical accuracy: instead of storing the absolute values of the population vector, only its deviation component δf = f − f 0 can be stored per site. As noted in section 1, we denote this as the zero-centered storage format, to set it apart from the classical approach, which we dub absolute storage format. The zero-centered storage format may be less useful in non-hydrodynamic LBMs whenever the assumption of small deviations from the background distribution does not hold.
From the discrete distribution vector, a set of macroscopic quantities can be computed on each lattice site. The sum over a site's populations yields the local particle density ρ, while their weighted mean becomes the macroscopic velocity u:

    (2.2)    ρ = Σ_{i=0}^{q−1} f_i,
    (2.3)    u = (1/ρ) Σ_{i=0}^{q−1} ξ_i f_i.

If a background distribution is given, the density can be decomposed as ρ = ρ_0 + δρ, with δρ = Σ_{i=0}^{q−1} δf_i.
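A small SymPy sketch of (2.2) and (2.3), reusing the velocity list from the previous snippet; the symbol names are illustrative and not lbmpy's conserved-quantity machinery.

    import sympy as sp

    q = len(velocities)            # 9 for D2Q9
    f = sp.symbols(f"f_0:{q}")     # one symbol per population of a site

    rho = sum(f)                                                   # (2.2)
    u = [sum(xi[a] * f_i for xi, f_i in zip(velocities, f)) / rho  # (2.3)
         for a in range(d)]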
The Collision Step. The site-local LBM collision rule constitutes a relaxation process of the population vector against an equilibrium state. In the MRT paradigm, relaxation occurs in some collision space isomorphic to R^q, wherein the equilibrium state is given by a vector q^eq. The population vector is transformed to and from collision space by some bijective mapping T. Moreover, the source term may also be represented in collision space, denoted by q^F [28,11,12,21]. This leads to the general MRT collision equation

    (2.4)    f* = T^{−1}( T(f) + S (q^eq − T(f)) + q^F ),

where S = diag(ω_0, …, ω_{q−1}) is the relaxation matrix, specifying relaxation rates separately for each quantity q_i. These quantities, and their equilibrium counterparts, usually correspond to different properties of the fluid, and their distinct relaxation is meant to independently model different physical processes [28,19].
To take full advantage of zero-centered storage, we aim to express (2.4) in deviation components only. This is only straightforwardly possible if T is linear. In this case, the decomposition of f can be reflected in the MRT collision space by q = q_0 + δq, with q_0 = T(f_0) and δq = T(δf). Subtracting the background component, we also obtain the deviation component δq^eq = q^eq − q_0 of the equilibrium vector. We thus arrive at the deviation-only update equation

    (2.5)    δf* = T^{−1}( δq + S (δq^eq − δq) + q^F ).

By definition, a source term cannot affect the constant background component; thus we can include q^F in the deviation-only update without alteration.
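In symbolic form, the deviation-only relaxation (2.5) is only a few lines; the following sketch (our own helper, not part of lbmpy) uses an explicit SymPy matrix for the linear map T.

    import sympy as sp

    def collide_deviation(delta_f, T_mat, relaxation_rates, delta_q_eq, q_F):
        """Deviation-only MRT collision (2.5) for a linear transform T."""
        delta_q = T_mat * delta_f                      # to collision space
        S = sp.diag(*relaxation_rates)                 # relaxation matrix
        delta_q_post = delta_q + S * (delta_q_eq - delta_q) + q_F
        return T_mat.inv() * delta_q_post              # back to populations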
For nonlinear T, the background-deviation decomposition cannot be transformed into collision space in this manner. Instead, (2.4) is modified. The absolute population vector is re-obtained by adding the background distribution before collision, so that it must be subtracted again afterwards:

    (2.6)    δf* = T^{−1}( T(f_0 + δf) + S (q^eq − T(f_0 + δf)) + q^F ) − f_0.

In the following, we will introduce the common MRT collision spaces relevant to this work, and discuss the shape T takes therein.
Collision Spaces. Assuming a DdQq stencil, we consider q-dimensional collision spaces spanned by either raw moments m, central moments κ, or cumulants C of the discrete population vector. We continue to use the symbol q in place of m, κ and C whenever we discuss general properties of these statistical quantities. We furthermore adopt a method of specifying statistical quantities using polynomials in the variables x, y and z. This notation is reflected exactly within the lbmpy transformation system. Given such a polynomial p, the respective quantity is denoted q_p. For clarity, we denote monomial quantities, defined by a monomial x^α y^β z^γ, as q_αβγ. A collision space based on a specific type of quantity is now defined through its basis, where a basis is a sequence (p_i)_{i=0,…,q−1} of linearly independent polynomials.
Raw Moments. Originally, the MRT methods based on raw moments were developed to overcome deficiencies of the discrete BGK operator [28,9]. Among their advantages are the possibility to separate shear from bulk viscosity, and the ability to tune higher-order relaxation rates in order to improve stability and accuracy [28].
The raw moment-generating function M of a distribution f is defined using the multidimensional Laplace transform L as

    M(Ξ) = L{f}(−Ξ) = ∫ exp(Ξ · ξ) f(ξ) dξ.

Since L is an integral operator, f must be integrable. Here, the discrete distribution vector is made integrable by means of the Dirac delta distribution as

    f(ξ) := Σ_{i=0}^{q−1} f_i δ(ξ − ξ_i).

The monomial raw moments m_αβγ of f are now defined as the mixed derivatives of M, evaluated at zero:

    m_αβγ = ∂^α_{Ξ_x} ∂^β_{Ξ_y} ∂^γ_{Ξ_z} M(Ξ) |_{Ξ=0}.

In the discrete case, this evaluates to a simple summation over the entries of the population vector. Polynomial raw moments m_p are obtained as linear combinations of their monomial components. This linear combination can be combined with the sum obtained from the generating function. Together, monomial and polynomial quantities may be computed as

    m_p = Σ_{i=0}^{q−1} p(ξ_i) f_i.

For a given basis (p_i)_i, the transformation to discrete raw moment space can now be represented as an invertible matrix M. Thus, m = T(f) = M f is a linear mapping, allowing us to reflect the background-deviation decomposition in raw moment space.
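The discrete evaluation m_p = Σ_i p(ξ_i) f_i translates directly into SymPy; this sketch mirrors the polynomial notation above with our own helper function, reusing the symbols from the previous snippets.

    x, y = sp.symbols("x y")

    def raw_moment(p):
        """m_p = sum_i p(xi_i) f_i for a polynomial p in x and y."""
        return sum(p.subs({x: xi[0], y: xi[1]}) * f_i
                   for xi, f_i in zip(velocities, f))

    m_00 = raw_moment(sp.Integer(1))   # zeroth moment: the density
    m_20 = raw_moment(x**2)            # monomial moment m_20
    m_tr = raw_moment(x**2 + y**2)     # a polynomial moment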
Central Moments. LBMs based on raw moments violate Galilean invariance. To correct for this, MRT methods based on relaxing central moments of the discrete distribution function have been developed [15,13,39,8]. In contrast to raw moments, central moments are taken with respect to the co-moving frame of reference, given by the macroscopic velocity u.
The velocity shift is reflected in Laplace frequency space by multiplication with exp(−Ξ · u), which we use to define the central moment-generating function

    K(Ξ) = exp(−Ξ · u) M(Ξ).

Its derivatives, in turn, give rise to the monomial central moments κ_αβγ, from which polynomial central moments κ_p are again obtained as linear combinations. The shifted frame of reference appears again in the explicit equations via subtraction:

    κ_p = Σ_{i=0}^{q−1} p(ξ_i − u) f_i.

These transform equations are again linear, giving rise to the central moment matrix K which defines T for the basis (p_i)_i.
Cumulants. Cumulants, in contrast to moments of any kind, characterize a distribution's shape independently of any frame of reference [19]. They thus share the velocity-independence of central moments, but additionally introduce mutual statistical independence. Similar to moments, monomial cumulants c_αβγ are the mixed derivatives of the cumulant-generating function at the origin. The cumulant-generating function is defined as the natural logarithm of the moment-generating function,

    (2.12)    C(Ξ) := log M(Ξ).

We follow Geier et al. [19] in defining the rescaled cumulants C_αβγ := ρ c_αβγ. We give no explicit equations for monomial or polynomial cumulants in terms of populations; instead, we discuss a more practical approach to their computation in section 4. Due to the cumulant transform's nonlinearity, no reflection of the population's decomposition in cumulant space is possible.
Equilibrium and Background Distributions. The basis of most LBMs is a variant of the Maxwell-Boltzmann distribution [28] that defines the equilibrium. Depending on density ρ, velocity u, and the speed of sound parameter c_s, its continuous form reads

    Ψ(ρ, u)(ξ) = ρ (2π c_s²)^{−d/2} exp( −‖ξ − u‖² / (2 c_s²) ).

Since, in hydrodynamic simulations, the fluid density deviates only little from a background density ρ_0, we take as a background distribution the fluid rest state at ρ = ρ_0, u = 0. We hence denote Ψ_0 = Ψ(ρ_0, 0) and define the deviation component of the equilibrium distribution as δΨ = Ψ − Ψ_0. The background density is typically set to unity. Figure 2.1 illustrates the relation between Ψ and its background and deviation components, with density and velocity chosen from the typical simulation range. We observe a large difference in magnitude between background and deviation component; overall, the background component dominates the distribution.
Representations of Ψ and Ψ_0 in raw moment, central moment, and cumulant space can be obtained by algebraically computing and differentiating the respective generating functions. In the case of raw and central moments, this reduces to the well-known moment integrals. The deviation component δΨ has representations in both moment spaces, but not in cumulant space, since the cumulants of δΨ are undefined at the singularity ρ = ρ_0, where its area under the curve is zero. This can be verified by expanding cumulants in terms of raw moments: applying the chain rule to evaluate (2.12) produces equations containing divisions by δρ.
Given e.g. a vector m_0 of q independent raw moments of Ψ_0, the background distribution in population space may be obtained as f_0 = M^{−1} m_0. In the case of the standard hydrodynamic equilibrium, this vector reduces to the lattice weights w.
Alternatively, the equilibrium state may also be specified as a discrete distribution f^eq. In this case, its representation in collision space is simply obtained as q^eq = T(f^eq).
MRT Method Modelling Front-End. The main component of our code generation system is an automatic procedure that takes an abstract LBM specification and derives from it a sequence of symbolic equations implementing one of the collision rules (2.4)-(2.6). This specification is formulated using a flexible Python API. Both the modelling API and the derivation system itself rely on the computer algebra package SymPy [37] to represent and manipulate all components of an LBM in symbolic, mathematical form. The automatic derivation, and thus the software performing these transformations, follows closely the theory of section 2. This section introduces the user front-end for modelling, while the next section will focus on the derivation and code generation procedures.
The parameter space of lbmpy's abstract method specification is shown in Figure 3.1. The lattice structure is defined by selecting a stencil and a storage format (absolute or zero-centered). Next, the collision space is specified by fixing one type of statistical quantity and stating its basis as a sequence of polynomials. Additionally, corresponding relaxation rates must be specified. Each relaxation rate can be given either as a fixed numerical value or in symbolic form, allowing its value to remain undetermined until runtime of the generated code.
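To make this parameter space concrete, here is a usage sketch of the front-end to our best understanding of the lbmpy 1.x API; the exact parameter names and defaulting behavior (e.g. zero_centered, or passing a single relaxation_rate to an MRT-style method) are assumptions and should be checked against the lbmpy documentation.

    import sympy as sp
    from lbmpy import LBMConfig, LBStencil, Stencil, Method, create_lb_method

    omega = sp.Symbol("omega")  # symbolic relaxation rate, fixed at runtime

    config = LBMConfig(
        stencil=LBStencil(Stencil.D2Q9),
        method=Method.CENTRAL_MOMENT,   # collision space: central moments
        relaxation_rate=omega,          # shear rate; others default (assumption)
        zero_centered=True,             # storage format from section 2
        compressible=True,
    )
    method = create_lb_method(lbm_config=config)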
The definition's final three components require a significantly more elaborate structure. To model equilibrium distributions, the computation of macroscopic quantities, and force models, lbmpy provides specific hierarchies of Python classes. Abstract base classes define their respective interfaces. These components not only encapsulate information about the method, but also take an active role during the code generation process.
(Figure 3.1: Method parameters are split into three groups. The first group governs the structure and storage format of the discrete lattice, the second group describes the method's collision space, and the third group comprises the components governing the actual collision process.)

For instance, the equilibrium object must not only produce an algebraic form (discrete or continuous) of its distribution; it must also distinguish between the absolute form and the delta-equilibrium and provide the background distribution in the latter case. Furthermore, it must provide methods to compute its raw moments, central moments, and cumulants. These methods are invoked during the symbolic derivation of the collision rule. The same holds for force models and conserved quantity computation; both components will provide parts of the equations making up the collision rule later on.
Apart from functional requirements, there is another significant advantage to using classes. While certain components are already implemented in the current version of lbmpy, such as the Maxwellian equilibrium, the Guo [22] and He [25] force models, etc., this structure makes lbmpy flexible and extensible. In particular, a developer using lbmpy may use the interfaces of the respective base classes in order to implement arbitrary custom equilibrium distributions, force models, or computation procedures for the macroscopic quantities. An example of this will be shown in subsection 5.2.
The user may freely combine various options for different method parameters, with some restrictions. For instance, deviation-only equilibrium instances may only be used in conjunction with zero-centered storage. Furthermore, as already discussed in previous sections, cumulant space is incompatible with deviation-only equilibria. As we will demonstrate in subsection 5.2, this interface is versatile and flexible both for implementing common hydrodynamic LBMs and for rapidly constructing more specialized methods that are not natively realized within lbmpy.
Derivation Procedure and Code Generation. The front-end API described in the previous section can be used to specify a concrete instance of an LB method. This specification serves as input to our automatic derivation and code generation pipeline. This pipeline comprises several steps, ultimately emitting an optimized numerical kernel implementing the collision rule for either CPU or GPU architectures. It relies on the domain-specific language of our code generation framework pystencils [4], which in turn is based on SymPy. In the following, we shall describe the stages of this pipeline as implemented in version 1.1 of lbmpy.
The Collision Rule. The first stage of the pipeline takes the abstract definition to derive the equations constituting the collision rule. Depending on the combination of storage and equilibrium format, an implementation of one of (2.4)-(2.6) is derived in symbolic form. Manipulating the collision equations on the mathematical level, we can leverage all information available about the method to simplify and optimize the equations. The derivation system is modular, combining equations generated by several components. Among these are the equilibrium and force model instances, providing algebraic expressions of their respective representations in the given collision space, as well as the conserved quantity computation, which produces equations for density (or its analogues) and velocity (cf. (2.2) and (2.3)).
The equations forming the collision space transformation T are provided by a set of dedicated classes within lbmpy. Apart from the simple symbolic matrix-vector multiplication, lbmpy provides two types of highly efficient Chimera-based transforms for raw and central moment space, which we will discuss in the following subsections.
The equations produced by these various components are combined to produce the collision rule as a sequence of assignments of mathematical expressions to symbolic variables. This assignment collection is passed on to lbmpy's sophisticated simplification procedure, described in subsection 4.3, and will ultimately be transformed into a numerical kernel (cf. subsections 4.4 and 4.5).
Raw Moment Transform. To compute raw moments from pre-collision populations, we first implement a Chimera transform to obtain monomial raw moments, which are afterward recombined to polynomials. Chimera transforms were first introduced in [19] for transforming to central moment space. Using intermediate 'chimera' quantities m_{x|βγ} and m_{xy|γ}, our raw-moment Chimera transform sums out one velocity coordinate at a time. Due to its recursive nature, the Chimera transform minimizes the number of arithmetic operations, since each possible combination of populations and velocities is evaluated exactly once. To transform from raw moments back to populations, we first decompose the polynomial moments into their monomial components. We then derive f* = M^{−1} m* by symbolic matrix-vector multiplication. Those equations we simplify by splitting the expressions for populations f*_i and f*_ī moving in opposite directions into their symmetric and antisymmetric parts f^s_i and f^a_i, such that

    f*_i = f^s_i + f^a_i,    f*_ī = f^s_i − f^a_i.

This split roughly cuts the number of arithmetic operations in half.
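The following plain-Python sketch (our own, for a {−1, 0, 1}² stencil) illustrates the Chimera idea of summing out one velocity coordinate at a time, so that every population enters each partial sum exactly once; lbmpy's actual transform emits symbolic assignments instead.

    from collections import defaultdict

    def chimera_raw_moments(f_by_velocity, max_order=2):
        # first stage: m_{alpha|j} = sum_i i**alpha * f_{(i,j)}
        stage1 = defaultdict(int)
        for (i, j), f_ij in f_by_velocity.items():
            for alpha in range(max_order + 1):
                stage1[(alpha, j)] += i**alpha * f_ij
        # second stage: m_{alpha,beta} = sum_j j**beta * m_{alpha|j}
        m = defaultdict(int)
        for (alpha, j), partial in stage1.items():
            for beta in range(max_order + 1):
                m[(alpha, beta)] += j**beta * partial
        return dict(m)

    f_by_velocity = dict(zip(velocities, f))
    moments = chimera_raw_moments(f_by_velocity)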
Central Moment Transform. In lbmpy, central moments are not transformed from and to populations directly, but using monomial raw moments as intermediates. To find an efficient transform between monomial raw and central moments, we observe that they are bidirectionally related through binomial expansions:

    κ_αβγ = Σ_{a=0}^{α} Σ_{b=0}^{β} Σ_{c=0}^{γ} C(α,a) C(β,b) C(γ,c) (−u_x)^{α−a} (−u_y)^{β−b} (−u_z)^{γ−c} m_{abc},
    m_αβγ = Σ_{a=0}^{α} Σ_{b=0}^{β} Σ_{c=0}^{γ} C(α,a) C(β,b) C(γ,c) u_x^{α−a} u_y^{β−b} u_z^{γ−c} κ_{abc},

where C(n,k) denotes the binomial coefficient. In both directions we may separate the nested sums, introducing them as Chimera quantities. This leads us to the binomial Chimera transforms between raw and central moments. Again, as every combination of moments and velocities is evaluated exactly once, this results in expressions that require only a minimal number of arithmetic operations. Polynomial central moments are combined from monomial quantities after the forward transform and decomposed before the backward transform.
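A direct transcription of the forward binomial shift, as a sketch with our own helper names; it reuses SymPy and the monomial raw moments from the previous snippet.

    from math import comb

    u_x, u_y = sp.symbols("u_x u_y")

    def central_from_raw(m, alpha, beta):
        return sum(comb(alpha, a) * comb(beta, b)
                   * (-u_x) ** (alpha - a) * (-u_y) ** (beta - b)
                   * m[(a, b)]
                   for a in range(alpha + 1) for b in range(beta + 1))

    kappa_11 = sp.expand(central_from_raw(moments, 1, 1))
    # the backward direction uses the same expansion with +u_x and +u_y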
Cumulant Transform. Similar to their original formulation in [19], cumulants in lbmpy are obtained from central moments. To derive their nonlinear relation, we re-express the cumulant-generating function (2.12) in terms of the central moment-generating function, to obtain the bidirectional relations

    C(Ξ) = Ξ · u + log K(Ξ),    K(Ξ) = exp( C(Ξ) − Ξ · u ).

We then employ SymPy's symbolic differentiation capabilities to obtain expressions for the derivatives of C in terms of derivatives of K, and vice versa. Substituting monomial central moments and cumulants for these derivatives, we arrive at the equations for both transform directions. It must be noted that the equations obtained this way contain logarithms and exponential functions, whose presence is undesirable in a numerical kernel. However, they are only associated with the zeroth-order cumulant C_000, allowing us to eliminate them in a global simplification step.
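The generating-function route can be reproduced in a few lines of SymPy; in this sketch, K is left as an undefined function, and the relation C = Ξ · u + log K follows the reconstruction above.

    import sympy as sp

    X1, X2 = sp.symbols("Xi_1 Xi_2")
    ux, uy = sp.symbols("u_x u_y")
    K = sp.Function("K")(X1, X2)

    # C = Xi . u + log K, mirroring the relation stated above
    C = X1 * ux + X2 * uy + sp.log(K)

    # the mixed derivative yields an expression in derivatives of K, i.e. in
    # central moments; evaluating at Xi = 0 gives the monomial cumulant c_11
    c_11 = sp.together(sp.diff(C, X1, X2))
    print(c_11)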
A Posteriori Simplifications. Once the derivation of the collision kernel is complete, there are still several possibilities to simplify and optimize the resulting kernel by symbolic manipulations. Our simplification procedure in lbmpy combines generic algebraic simplifications with optimizations designed specifically for LBMs, utilizing information about the method available during code generation. In particular, lbmpy applies the following simplification steps to the generated sequence of assignments; advanced users of lbmpy may additionally disable or customize these transformations.

- Conserved Quantity Rewriting: Equations computing ρ, u, etc. from populations directly are replaced by equations involving zeroth- and first-order raw moments.
- Collapsing Conserved Central Moments: We collapse the equations for zeroth- and first-order central moments, e.g. obtaining κ_000 = ρ and κ_100 = −F_x/2.
- Propagation of Logarithms: To simplify logarithmic and exponential expressions in cumulant-based collision rules, we propagate certain assignments containing logarithms to their usages, thus canceling them with the corresponding exponential functions.
- Common Subexpression Elimination: We use SymPy's common subexpression elimination (CSE) feature to extract common terms into separate assignments, as previously described in [5].
- Expression Propagation: Assignments whose right-hand sides are constant, single symbols, products of macroscopic quantities, or multiples of body force components, are propagated to their usages.
- Unused Subexpression Elimination: A simple dependency analysis based on the static single assignment (SSA) form allows us to eliminate any assignments whose left-hand side symbols are never used.

While all of these simplifications can help in reducing kernel complexity, we found the combination of the final two to have the largest impact. Their strength hinges on the fact that SymPy, as a computer algebra system, automatically evaluates any constant terms, and attempts to cancel equal but opposite terms in any constructed expression. Our propagation steps trigger that behaviour automatically. Apart from minor eliminations occurring throughout the equations, this has two major effects.
First, quantities that are invariant under relaxation are propagated through the relaxation step and inserted directly into the backward transform equations. This, in turn, leads to an automatic simplification of these equations. Apart from reducing the overall operation count, this step also serves to eliminate algebraically idempotent operations. To give an example: without this propagation, relaxation and the backward binomial Chimera transform would include a chain of redundant assignments (4.6), which the propagation steps described above collapse, leaving us with just

    (4.7)    m*_{10|0} = ρ u_x + F_x/2.

The second and more significant effect of alias and constant propagation occurs whenever some relaxation rates are set to unity. In this case, propagation enables the elimination of large portions of the forward collision space transform, as well as significant algebraic simplification of the backward transform. Substituting ω = 1, a relaxation equation q*_p = q_p + ω (q^eq_p − q_p) is immediately simplified to q*_p = q^eq_p. The forward transform equations for q_p are then no longer required, allowing us to drop them. This has the potential to massively decrease the overall operation count, since the transform equations, especially for higher-order quantities, constitute a significant portion of the collision kernel. This will be discussed in further detail below.
If q^eq_p is a simple expression, it can be propagated into the backward transform equations. The effectiveness of this process is improved by the fact that many statistical quantities of common equilibrium distributions are zero. Propagating a zero into the backward transform equations will typically eliminate only a small number of summands; in fact, for the backward Chimera transform, only a single term is removed from one innermost Chimera. Still, the more quantities are relaxed with ω_i = 1, the more significant this effect becomes. Propagation of zeroes is most effective with the cumulant backward transform, due to its nonlinearity. The equations computing higher-order post-collision central moments from post-collision cumulants are by far the most complex derived anywhere within our framework. They do, however, contain a significant number of products of post-collision cumulants. Substituting those cumulants' equilibrium values, which are mostly equal to zero, several such multiplications may be eliminated.
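The interplay of automatic evaluation and CSE can be seen in plain SymPy; the following toy sketch is not lbmpy's pipeline, but shows the two mechanisms the final simplification steps rely on.

    import sympy as sp

    a, b, omega = sp.symbols("a b omega")
    expr1 = (a + b) ** 2 + omega * ((a + b) ** 2 - a)
    expr2 = sp.sqrt((a + b) ** 2) * (1 - 1)   # auto-evaluates to 0

    subexpressions, reduced = sp.cse([expr1, expr2])
    print(subexpressions)   # e.g. [(x0, (a + b)**2)]
    print(reduced)          # e.g. [omega*(-a + x0) + x0, 0]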
To illustrate the efficacy of our simplification strategy, we count the total number of operations in the collision kernels generated by lbmpy from a number of different method definitions. All methods employ zero-centered storage and the default continuous Maxwellian equilibrium, whose moments are truncated at the second order in velocity. No force model is employed. Kernels are derived for non-weighted orthogonal MRT (denoted O-MRT) and weighted orthogonal MRT (WO-MRT) in raw moment space [28], as well as polynomial central moment (CM) [15,42,44] and polynomial cumulant (K) [19] methods. We compare these with implementations of the SRT and TRT collision operators, also derived by lbmpy using the original procedure published in [5]. The prefix 'R-' denotes a regularized method; all but the relaxation rates governing shear viscosity are therein set to unity. Such regularization is permissible since higher-order 'ghost' modes do not affect simulated physics [19,17]; further, it is a common measure to improve a method's stability [30,16,8]. Otherwise, every relaxation rate is represented by a dedicated symbol ω_i (i = 0, …, q−1). The population and raw moment space methods use the delta-equilibrium for relaxation, while the absolute equilibrium is used in central moment and cumulant space. Table 4.1 lists the arithmetic operation counts of kernels generated for those methods, with three different levels of simplification intensity: none at all; full simplification without CSE; and full simplification including CSE.

(Table 4.2, with columns 'Method' and 'Operations': Savings in arithmetic operations due to regularization for several methods on the D3Q27 stencil. We display the savings relative to the operation count of the kernel variant with purely symbolic relaxation rates. 'Higher Order Regularized' implies that only the relaxation rates for statistical quantities of order 5 and 6 are set to unity. 'Fully Regularized' methods have all but their shear relaxation rates set to one. All kernels were simplified as far as possible, without applying CSE.)

Table 4.1 shows that lbmpy is capable of producing kernel implementations for complex and advanced lattice Boltzmann methods with only little overhead when compared to simple SRT or TRT kernels. In fact, the regularized WO-MRT kernels are even visibly smaller in size than the basic SRT kernels. This observation holds also in comparison with carefully hand-crafted implementations, as in [52], where Wellein et al. report about 200 operations for the D3Q19 SRT kernel. Note that, for non-regularized raw moment MRT, we observe vast improvements (almost 50% fewer operations) compared to the results originally published for lbmpy in [5]. Furthermore, considering the cumulant-based LBM, we surpass the carefully optimized implementation of Geier et al., who in [16] report 432 arithmetic operations just for the forward transform. Note, however, that these numbers describe implementations in a high-level programming language or, in our case, in symbolic form. The actual number of floating-point operations in the compiled code may differ due to transformations applied by modern optimizing compilers.
Comparing the kernel sizes of methods using purely symbolic relaxation rates with their regularized variants, the effectiveness of our expression propagation steps becomes apparent. Table 4.2 lists the savings in operation counts with respect to the non-regularized version for orthogonal raw moment, central moment and cumulant methods on the D3Q27 stencil. We compare both the fully regularized methods, whose operation counts are shown in Table 4.1, and the only partially (higher-order) regularized methods, with only the relaxation rates for quantities of order ≥ 5 set to unity. The reduction in operation count achieved through regularization grows with the complexity of the collision equations. The largest part of these savings comes from the elimination of transform equations. In the central moment method's kernel, the forward transform equations for the fifth- and sixth-order moments alone account for roughly eight percent of the total operation count, and these can simply be omitted with regularization. The largest potential for simplification exists for cumulant-based methods: on D3Q27, ten percent of arithmetic operations can be attributed to just the four equations computing the monomial pre-collision cumulants C_122, C_212, C_221 and C_222 from central moments.
Streaming Patterns and Update Rule. The collision rule generated by stage one constitutes the relaxation process for a single cell, whose populations are represented purely symbolically. In the next stage, we employ the field abstraction of pystencils to map the rule over the cells of a d-dimensional field data structure holding q values per cell. Therein, the iteration over the field is abstracted through field accesses: special kinds of symbols that model accesses relative to the current cell. A field access reading f[x, y, z](i) corresponds to the i-th entry of the local cell's neighbor with integer offset (x, y, z). The correspondence between pre- and post-collision populations and relative field accesses is governed by the streaming pattern. At the time of writing, lbmpy supports the classical pull and push patterns, which require separate arrays for reading and writing to avoid data conflicts [53], as well as four in-place streaming patterns, with which the double data structure can be avoided: the AA-pattern [2], Esoteric Twist [18], Esoteric Pull, and Esoteric Push [32]. The field update rule is constructed from the collision rule by substituting field accesses for symbolic populations, according to the specified streaming pattern.
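A sketch of the field abstraction for the pull pattern, using the pystencils API as we understand it; the index and offset conventions here are illustrative.

    import pystencils as ps

    src, dst = ps.fields("src(9), dst(9): double[2D]")

    # population 3 is assumed here to have velocity (1, 0); under the pull
    # pattern it is read from the western neighbor and written to the center
    assignment = ps.Assignment(dst(3), src[-1, 0](3))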
4.5. Code Generation. As a last step, the update rule is passed on to the pystencils code generator [4]. It transforms the symbolic equations into compilable C or CUDA code, wrapping them either in nested loops over the field or prepending GPU indexing code. Furthermore, the code generation system may optionally apply acceleration techniques such as OpenMP parallelization [38], loop splitting, or loop blocking [4]. Finally, the generated kernel can be compiled and loaded directly within the Python environment, or it can be written to a file to be integrated into separate code bases. The former option allows for rapid development and testing of LB methods, while the latter enables integration with large-scale simulation frameworks like waLBerla [5,3,26].
Applications. In this section, we present a numerical benchmark involving Taylor-Green vortex decay to assess the impact of zero-centered storage. As a second example, we employ lbmpy to generate an LBM for the shallow water equations to illustrate its extensibility to novel application fields.
Taylor-Green Vortex. In order to assess the effectiveness of our generalization of zero-centered storage to moment spaces in improving numerical accuracy and reducing round-off error, we replicate a test case recently published in [33] involving the Taylor-Green vortex flow. Therein, a periodic box of vortices with velocity magnitude u_0 is initialized, their transient decay is simulated, and the result is compared to the known analytic solution. The analytic solution, which also specifies the initial flow field, reads

    u_x(x, y, t) = u_0 cos(κx) sin(κy) exp(−2νκ²t),
    u_y(x, y, t) = −u_0 sin(κx) cos(κy) exp(−2νκ²t).

The system is initialized at t = 0 with u_0 = 0.25. We set the kinematic shear viscosity to ν = 1/6, which results in the viscosity-governing relaxation rate ω = 1. Furthermore, κ = 2π/L, where L = 256 is the side length of the square domain. The initial state of the flow field is displayed in Figure 5.1.
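A NumPy sketch of this initialization, under the parameters quoted above; the array layout and helper names are our own.

    import numpy as np

    L, u0, nu = 256, 0.25, 1.0 / 6.0
    kappa = 2 * np.pi / L
    x, y = np.meshgrid(np.arange(L), np.arange(L), indexing="ij")

    def analytic_velocity(t):
        decay = np.exp(-2 * nu * kappa**2 * t)
        ux = u0 * np.cos(kappa * x) * np.sin(kappa * y) * decay
        uy = -u0 * np.sin(kappa * x) * np.cos(kappa * y) * decay
        return ux, uy

    ux0, uy0 = analytic_velocity(0)  # initial flow field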
The kinetic energy E(t) is calculated by summing the local kinetic energy density over all lattice sites, and we denote the initial kinetic energy as E_0 = E(0).

(Figure 5.2: Relative energy E(t)/E_0 over simulation time steps for various D3Q27 LB methods. The PDFs are either stored in the absolute format (circular marker) or the zero-centered format, the latter relaxed against the absolute (x marker) or delta-equilibrium (square marker). Only the range of time steps where deviation from the analytical solution occurs is shown.)
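Correspondingly, the energy monitor is a one-liner; this sketch assumes a uniform unit background density, so the density factor is dropped, and it reuses the analytic_velocity helper from the previous snippet.

    def kinetic_energy(ux, uy):
        # kinetic energy summed over all lattice sites (assuming rho ~ rho_0 = 1)
        return 0.5 * np.sum(ux**2 + uy**2)

    E0 = kinetic_energy(*analytic_velocity(0))
    relative_energy = kinetic_energy(*analytic_velocity(1000)) / E0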
We simulated the vortex decay using the SRT, R-WO-MRT, R-CM and R-K methods (cf. subsection 4.3), with all admissible combinations of storage and equilibrium format. In Figure 5.2, the analytical solution of the kinetic energy is compared to simulated results. The vortex velocity, and thus the kinetic energy, is expected to decay exponentially due to viscous friction. This is matched by the simulation until, at some point, the simulated energy stagnates on a plateau. The reason for this phenomenon is that lower velocities can no longer be represented due to the truncation error caused by the floating-point number format. The truncation error for the IEEE-754 double precision format [1] is ε = 2.2 · 10^−16. Due to the squared velocity entering the kinetic energy, the plateau is expected to occur at around ε². Comparing different compute kernels derived by lbmpy, we observe that some are able to reach the predicted plateau, while others stagnate visibly earlier. The relative energy E(t)/E_0 after 200 000 time steps is also shown in Table 5.1:

Table 5.1: Relative energy E(t)/E_0 after 200 000 time steps.

    Collision space | absolute storage | zero-centered, f^eq | zero-centered, δf^eq
    SRT             | 1.9 · 10^−26     | n/a                 | 5.4 · 10^−33
    R-WO-MRT        | 1.7 · 10^−29     | 4.4 · 10^−31        | 6.1 · 10^−33
    R-CM            | 2.4 · 10^−29     | 2.7 · 10^−33        | 1.7 · 10^−34
    R-K             | 9.1 · 10^−27     | 1.1 · 10^−32        | n/a

We consistently attain much lower levels of the plateau using zero-centered storage in combination with the delta-equilibrium. Relaxing zero-centered populations against the absolute equilibrium, the R-CM and R-K methods still yield very high precision, while R-WO-MRT stagnates earlier. For both the SRT and the cumulant method, the difference between absolute and zero-centered storage is the most drastic.
Shallow Water LBMs. The shallow water equations (SWE) are an approximation to the Navier-Stokes equations for flow regimes where the horizontal characteristic length scale is significantly larger than the vertical length scale, making it permissible to neglect vertical flow phenomena [51]. They are obtained by integrating the NSE in the z-direction; fluid density is replaced by the water column height h, and velocity is averaged across the third dimension [54]. The SWE are therefore purely two-dimensional. A number of LBM solvers for the SWE have been proposed [54,43,51] and successfully applied to problems of hydraulic engineering [50]. Here, we consider the central moment-based shallow water LBM by de Rosis [43] and the cumulant-based method presented by Venturi et al. [51,50]. We show how these methods may be implemented using the lbmpy modelling framework introduced in section 3 in just a few lines of Python code, automatically generating a highly optimized kernel implementation of the respective collision operators. We then present simulation results obtained with the generated kernels.
The method of de Rosis is based on the discrete shallow water equilibrium proposed by Zhou [54] for the D2Q9 stencil. In this equilibrium, g denotes gravitational acceleration in lattice units, and λ_i = 1 if ‖ξ_i‖₁ = 1, otherwise λ_i = 1/4. Listing 1 shows how a central moment-based method using this equilibrium may be set up within lbmpy. In lines 3 to 13, we create an equilibrium instance from the discrete equations, which enters the abstract representation of the shallow water method in lines 15 to 20. We specify the same polynomial basis as used in [43], fix central moments as the collision space, and define a regularized set of relaxation rates. Finally, line 22 invokes the code generation pipeline introduced in section 4, generating and compiling a C implementation of the collision kernel, which is made available to the user as a Python function. The method of Venturi et al. [51] may be recreated even more compactly, using the continuous Maxwellian with a suitably chosen squared speed of sound parameter c_s².

Listing 1 (excerpt):

    1   x, y, g, h, u_x, u_y, ω_s = sp.symbols(...)
    2
    3   def f_eq(ξ):
    4       ξ_sum = sum(abs(ξ_i) for ξ_i in ξ)
    5       if ξ_sum == 0:

We put the kernels thus generated to work in simulating a circular dam break scenario, using the same setup as in [43]. We place a water column of radius 2.5 m and height 2.5 m at the center of a square domain with side length 40 m, which is otherwise filled with 0.5 m of water on top of an even and frictionless bed. The domain is discretized with 100² lattice cells and delimited by periodic boundary conditions; the simulated time step is ∆t = 0.05 s. The kinematic viscosity is set to unity; this yields a shear viscosity-governing relaxation rate of ω_s ≈ 0.696. The entire simulation is set up very rapidly in Python code using the additional facilities of pystencils and lbmpy. The full Python code for this setup is available as part of lbmpy's online documentation. Figure 5.3 shows the water column height in a cross-section of the domain at y = 20 m, at 1, 2 and 3 seconds of simulated time. While both methods agree qualitatively, the cumulant-based method shows a visibly deeper trough and a more spread-out wave front. The wave front after two seconds, as predicted by the central moment-based method, is visualized in Figure 5.4.
Conclusion and Outlook. This article presents lbmpy as a sophisticated software architecture that supports the algebraic modelling of advanced MRT lattice Boltzmann methods. Its key feature is the ability to generate efficient computer code automatically from abstract specifications of the methods. The methodology is based on an extensive theoretical framework for the concise description of MRT collision rules. The framework formalizes the zero-centered storage format as a decomposition of the populations into background and deviation components. We generalize this decomposition to raw and central moment space, applying it also to the equilibrium distribution. Thus we arrive at three general collision equations, relaxing absolute populations against the absolute equilibrium, and zero-centered populations against both the absolute and the delta-equilibrium.
We present the front-end of lbmpy as a Python-based software incarnation of this theoretical calculus. Utilizing a computer algebra system, the software supports manipulating LBMs on the purely symbolic level. We further describe how object-oriented programming can be used to represent complex components, like the equilibrium distribution and force models. This approach makes lbmpy flexible and extensible.
We discuss in detail the way optimized collision rules are derived within lbmpy through symbolic manipulation. Our derivation system generates efficient implementations for collision rules in raw moment, central moment, and cumulant space, including all permissible combinations of storage and equilibrium formats. We provide specialized derivations for the collision space transforms, since they constitute the most expensive computations. The linear mappings between populations, raw moments, and central moments are realized using two novel Chimera transforms: one recursively decomposing the calculation of discrete raw moments, and one being a split-up version of a threefold binomial expansion. Both are designed to leverage the respective transforms' recursive nature in order to minimize the number of arithmetic operations. We furthermore introduce a method of deriving the transforms between central moments and cumulants by symbolic differentiation of the respective generating functions.
Combining domain-specific knowledge with information contained in the symbolic equations, we succeed in significantly reducing arithmetic operation counts of complex LBM kernels. We observe that especially the common step of regularization permits aggressive optimizations leading to drastically reduced computational cost. This serves to produce remarkably low operation counts; in fact, we were able to produce cumulant-based kernels with only between thirteen (on D3Q19) and forty percent (on D3Q27) more operations than the respective SRT kernel.
We conduct a test case involving decaying Taylor-Green vortex flow, showing the effectiveness of the generalized zero-centered storage format in improving numerical accuracy. Finally, we illustrate lbmpy's versatility and fitness for rapid prototyping by setting up an LB solver for the shallow water equations in just a few lines of Python code.
The reduction of arithmetic complexity in implementations of advanced LBMs is just one first step toward efficient large-scale fluid dynamics simulations. In the past, lbmpy-generated kernels have already been shown to be performant, both individually on single CPUs, as well as powering massively parallel simulations on clusters of CPUs and GPUs [5,26]. With the present revision of the framework, we provide a central ingredient for more time-and energy-efficient implementations of sophisticated lattice Boltzmann solvers employing the promising central moment and cumulant methods. Therein, we aim to minimize time and energy consumed by both software developers and computing hardware. Therefore, the evaluation of the performance characteristics of the generated kernels on diverse modern hardware architectures as well as the assertion of their fitness for latest peta-and exascale supercomputers shall be subjects of future work.
Results of using different breed studs in commercial fine wool sheep breeding
This work presents the material of research and production experiments on the crossing of stud rams of the North Caucasian meat and wool breed and of the Australian meat merino with fine wool ewes of a commercial herd. It was found that the use of North Caucasian rams on fine wool ewes increases the fertilizing ability and milk production of the breeding stock and improves the viability and resistance of young animals, as confirmed by the superior morphometric characteristics of the placentas of crossbred offspring. Crossbred offspring from semi-fine wool rams had a higher live weight at birth, at 21 days, and at 4 months of age. The use of Australian sires did not have a significant effect on the above indicators of economically useful traits in comparison with purebred breeding.
Introduction
One of the most important tasks in the development of the industry today is to increase the competitiveness of sheep breeding production. Changes in social and economic conditions and the resulting decrease in demand for most sheep breeding products have led to a sharp decline in the number of animals and their productivity, and to changes in the breed composition and structure of breeds of different production directions. Nevertheless, the main products of fine wool sheep, wool and meat, have not lost their value, although their relative importance has shifted toward mutton. This type of product has been of great importance at all stages of the industry's development. Specifically, in the period from 1985 to 2005, new types of sheep with improved indicators of meat production were bred in commercial fine wool herds. An important element contributing to an increase in mutton production is the development of techniques and methods for increasing the fertilizing ability and fertility of ewes, and the resistance and viability of young animals, especially in the first periods of life, including the period before separation from their mothers. The study of these issues is the main objective of our research.
Materials and Methods
To achieve the objectives, the reproductive qualities of Caucasian breed ewes were evaluated when they were crossed with North Caucasian meat and wool rams (group 1), Australian meat merino rams (group 2), and purebred Caucasian studs (group 3), along with several indicators of productivity, resistance and viability of offspring of different origin. To do this, a flock of 425 fine wool ewes kept by senior shepherd M.M. Magomedov at the agricultural enterprise "Novomaryevskoe" was inseminated by rams of the above breeds in approximately equal proportions, with daily detection of ewes in heat. In the first group, 140 animals were inseminated; in the second group, 130 animals; and in the third group, 155 animals. The reproductive qualities of rams and ewes and the viability, resistance and live weight of the produced offspring, including some morphometric characteristics of the placenta, were studied. To this end, we recorded the volume and quality of sperm production of stud rams of the different breeds, the fertility of mothers as the number of lambs per 100 lambed ewes, the mortality of lambs and their live weight from birth to 4 months of age, some morphometric characteristics of the placenta, and blood serum bactericidal (SBA) and lysozyme (SLA) activity, according to existing zootechnical and biological methods. The feeding level of ewes, stud rams and the produced offspring corresponded to established norms.
Results and Discussion
Two-year-old studs, which met the requirements of the elite class for economically useful traits and had features typical of the breeds listed above, were selected from the existing stud ram flock to carry out the research and production experiments. Fresh semen of the Australian meat merino was obtained from the experimental station of the Stavropol Research Institute of Livestock and Fodder Production. The ejaculate volume in the Caucasian breed was 1.42 ml, with a motility score of 9.5 points; in the North Caucasian breed and the Australian meat merino, these indicators were 1.53 ml, 9.5 points and 1.12 ml, 9.0 points, respectively.
An important biological peculiarity of different sheep breeds, which significantly affects economic efficiency and the overall profitability of the industry, is the fertility of the ewes and the viability of the produced offspring. Researchers [2,4,5,6-11] have pointed out that the fertility of ewes and the viability of young animals can be increased by different methods, including targeted selection of multiple-bearing ewes and rams, as well as interbreeding of sheep of different productivity directions. They also pointed out that ewes with twins and triplets produced more meat, by total body weight, and teg of finer assortments than ewes with a single lamb, despite the lower average weight of each lamb.
Our experimental studies, with individual recording of the results of insemination and lambing of fine wool ewes inseminated with the sperm of rams of different breeds, showed significant differences between the compared animals, including the resulting offspring. Table 1 shows information on fertilization rate, ewe reproduction capability, and viability and live weight of young animals. The analysis of the information in Table 1 indicates that the ewes inseminated by the North Caucasian rams had the best fertilizing ability, while this indicator was somewhat worse for the Caucasian and Australian meat merino rams. At the same time, 30 animals, or 19.4% of the ewes inseminated in the first heat by rams of the Australian meat merino, remained barren and were re-inseminated by rams of the same group. Five ewes each from the rams of the North Caucasian breed and from the purebred Caucasian studs, or 3.6% and 3.8% respectively, remained barren. The fertility of the ewes of the compared groups was 114.0% in group 1, and 103.3% and 103.7% in groups 2 and 3, respectively; that is, the ewes of group 1 exceeded their peers of groups 2 and 3 by 11.3% and 11.7%.
An important indicator that determines the efficiency of the breeding process and the profitability of the sheep breeding industry is the resistance of the produced offspring and its subsequent viability. Resistance arises from the complex protective reactions of the body and ensures the viability of young sheep of different genotypes in postnatal ontogenesis [1,2,10,12,13,14].
In our studies, the protective potential of the experimental groups of animals was estimated by monitoring the activity of humoral factors (serum bactericidal activity, SBA; serum lysozyme activity, SLA).
The analysis of the obtained results, shown in Table 2, revealed a number of patterns attributable not only to the age characteristics of the produced offspring, but also to their breed origin.
Experimental lambs in the early postnatal period had the lowest rates of humoral immunity. At birth, the levels of serum bactericidal and lysozyme activity (SBA, SLA) in young animals of different origins varied from 35.94% to 37.10% and from 22.26% to 23.71%, respectively. At the same time, SBA indicators were higher in the crossbred offspring of North Caucasian rams by 0.78 abs.% and 1.16 abs.% relative to the peers of groups 2 and 3. In the level of blood serum lysozyme activity, the lambs of group 1 also exceeded the animals of groups 2 and 3; the difference reached 0.23 abs.% and 1.45 abs.% in favor of the offspring of group 1, respectively. By the age of two months, the indicators of the protective potential of the experimental animals had increased (Table 2). It can be assumed that the sharp increase in the level of blood serum bactericidal activity in lambs of the experimental groups in this age period relates to the gradual formation and development of the immune system, which provides the protective potential of the growing organism, as evidenced by the growth rate of the young animals. At the same time, the increase in bactericidal activity was higher in the offspring of North Caucasian rams than in the lambs of groups 2 and 3. Regarding the level of blood serum lysozyme activity, no significant differences were found between the experimental groups in this indicator of humoral immunity in the studied period of ontogenesis. The reactivity of the organism gradually improves as the young animals of the experimental groups grow older, and by the age of 4 months a decrease in serum bactericidal activity and an increase in lysozyme activity are observed.
Against the general tendency of age-related changes in the indices of natural resistance in the experimental lambs, the superiority of the offspring of North Caucasian rams over their peers of groups 2 and 3 in the level of activity of the humoral factors (SBA, SLA) was evident. At the age of 4 months, this advantage in favor of the lambs of group 1 was 1.4 abs.% and 2.2 abs.% in bactericidal activity and 1.8 abs.% and 3.6 abs.% in lysozyme activity.
Consequently, a comparative analysis of the results revealed the advantage in protective potential of the offspring of semi-fine wool rams of the North Caucasian meat and wool breed, in all periods of postnatal ontogenesis, over animals of the other variants of selection, according to the level of humoral factors of natural defense. At the same time, the amplitude of the revealed changes was within the physiological norm.
The better protective properties of the offspring of the compared groups had a certain effect on the viability of young animals from birth to separation from the ewes, which is one of the major indicators determining the economic efficiency of the industry. In our studies, the offspring produced by North Caucasian rams turned out to be the most viable from birth to separation. This indicator reached 93.4%, which is 3.8% and 4.5% more than in the animals of groups 2 and 3.
Sampled placentas, 8 from each group of ewes mated with rams of the different breeds, were subjected to laboratory studies to determine some morphometric indicators. It was found that the weight of the placenta from the ewes of the first group averaged 276 g, while in animals of the second and third groups it was 245 g and 234 g, or 12.7 and 17.9 percent less, respectively. Counting the cotyledons in the placentas of the ewes of the compared groups revealed the following patterns. While the ewes inseminated by the North Caucasian rams had 76.7 cotyledons, the peers of groups 2 and 3 had 5.5 and 6.5 fewer, or 7.7% and 9.3% less. Significant differences were also found in the size of the cotyledons in the placentas of the ewes of the compared groups. Thus, the size of the cotyledons in the placentas of ewes inseminated by semi-fine wool rams of the North Caucasian breed was 2.2/3.1 cm, while for the ewes of group 2 this indicator was 1.8/2.2 cm. The ewes of group 3 were in last place by this indicator, with a cotyledon size of 1.6/2.0 cm. The distance between cotyledons in animals of groups 1, 2 and 3 was 1.8/3.1 cm, 2.9/3.7 cm and 3.2/4.3 cm, respectively. These results indicate that the density of cotyledons in the placentas of ewes mated with North Caucasian rams is much higher than in the peers of groups 2 and 3, which means that the efficiency of fetal nutrition in this group of animals was higher. The observed patterns indicate that insemination of ewes by rams of the North Caucasian meat and wool breed contributes to better development of the placenta during pregnancy and, as a result, to better development of the offspring during the embryonic period, which is confirmed by our experimental data.
Individual weighing of lambs at birth (Table 1) showed that the lambs produced by the ewes of group 1 exceeded the peers of groups 2 and 3 in live weight by 0.4 kg, or 10.5% (P > 0.05). The milk production of the ewes, calculated as the product of the lambs' average live weight gain by 21 days of age and the coefficient 5, was highest in the animals of the first group. This advantage over the ewes of groups 2 and 3 was 8.3% and 10.2%, respectively (P > 0.05). Weighing of lambs at separation from the ewes showed a similar pattern between the compared groups of animals. While the average live weight of the lambs of the first group was 24.1 kg, in the offspring of the second and third groups it was 2.2 kg and 2.6 kg less, or 10.0% and 12.1% (P > 0.05). The calculation of economic efficiency indicators based on the methodology in [3] showed that, with the higher fertility of ewes and the higher viability and live weight of lambs in the first group of animals, the level of profitability was 20.3% and 21.7% higher than for the offspring of groups 2 and 3.
Conclusion
Consequently, the use of stud rams of the North Caucasian meat and wool breed in commercial fine wool sheep breeding makes it possible to increase the fertility and milk production of ewes, the viability and live weight of the offspring, and the profitability of young-stock rearing, in comparison with the use of the Australian meat merino and with purebred breeding.
Rotavirus and Coronavirus Vaccines
and mortality rates in neonatal calves, without the use of enteric disease vaccines, by instituting proper sanitation and management practices. Conversely, vaccination programs for neonatal enteric disease are rarely successful in the absence of reasonably good sanitation and management, because heavy exposure to causative agents can overwhelm vaccinal resistance and because of problems with cryptosporidiosis, salmonellosis, and other enteric infections for which vaccines are either not available or minimally effective. Vaccination programs are generally unsuccessful when calves are taken from their dams and raised on a milk replacer diet, which does not contain the antibodies that are found in the dam's milk.17
Rotavirus and Coronavirus Vaccines
Bovine rotaviruses and bovine coronavirus cause acute malabsorptive diarrheal disease, primarily in calves less than 3 weeks of age.1-3 Both agents attack the small intestinal villus enterocytes, causing their wholesale desquamation into the intestinal lumen.1,4 This is accompanied by a drastic reduction in the numbers and size of small intestinal villi.1,4 A malabsorption syndrome results from a reduction in surface area for absorption and from a lack of brush border disaccharidase enzymes.1,4 Affected calves have watery diarrhea and may become hypovolemic, dehydrated, hyponatremic, hypochloremic, acidotic, and hyperkalemic.1
General Considerations
Considerable evidence is available to support a conclusion that neither of the two commercial bovine rotavirus-coronavirus vaccine products that are presently available in the United States is capable of providing effective control of bovine rotavirus (BRV) and bovine coronavirus infections in calves, under the constraints prevailing in most commercial cattle production systems. Two approaches have been used in an attempt to protect calves against BRV and coronavirus infections.
The first approach involves vaccination of neonatal calves with an MLV vaccine (Calf Guard, Norden Laboratories, Lincoln, NE 68521), administered by the oral route. The objective is to stimulate a cell-mediated immune response and local secretory IgM and IgA antibody production by the intestinal mucosa.5 Calves begin producing detectable levels of local secretory (intestinal) IgM antibodies within 4 to 6 days after successful oral vaccination.6 This is followed, within 2 to 6 more days, by the appearance of detectable levels of locally produced intestinal IgA antibodies. Calves are resistant to challenge from the initial appearance of local IgM antibodies.6 In order to consistently elicit an effective immune response, the vaccine must be administered orally, immediately after birth, and before the calf has nursed. Ordinarily, this is practical only with dairy calves. Because the colostrum of most heifers and cows contains detectable levels of viral neutralizing antibodies,1,3,4,6-8 administration of colostrum should be delayed for several hours following vaccination, in order to avoid inactivation of vaccine virus. If the calf has nursed before it can be vaccinated, it is recommended that vaccination be delayed until 6 hours after nursing, and that the calf not be fed colostrum again until 6 hours after vaccine is administered.
In double-blind vaccine evaluation studies in which a portion of the calves on a farm or ranch were left unvaccinated, Calf Guard was conclusively shown to be ineffective.9-11 Apparently, the resistance induced by vaccination is easily overwhelmed by exposure to large amounts of virus shed by unvaccinated calves. However, when all calves were either vaccinated or not vaccinated in sequential comparisons, morbidity and mortality rates from neonatal enteric disease were significantly reduced by vaccination.9,10 Nonetheless, the design and statistical validity of these latter kinds of trials have been questioned.12 Under practical conditions, in the experience of the author and others,13 this vaccination regimen has seldom resulted in dramatic improvements in calf health. Under commercial conditions, few owners or employees will administer vaccine within minutes after birth or effectively regulate the intake of colostrum in relation to the time of vaccination. Therefore, it is likely that many calves are exposed to infection before they can be vaccinated and that vaccine virus is often neutralized by ingested colostral antibodies.4,6,14 Consequently, it is likely that relatively few calves are actually immunized under commercial conditions.
The second approach involves intramuscular vaccination of pregnant cows and heifers, in order to raise colostral antibody concentrations.16 In addition, vaccination of pregnant cows (at least with an experimental vaccine containing live rotavirus and water-in-oil adjuvant) results in the production of antigen-specific colostral lymphocytes. Ingestion of these sensitized colostral lymphocytes by 1-day-old calves confers partial protection against challenge with virulent BRV.8 Colostrum, from most vaccinated cows and from some unvaccinated cows, is sufficiently high in virus-neutralizing (VN) antibodies that it is highly protective during the immediate period when it is being consumed by the calf.1,3,6-8,13,14,16,17 Since most calves become relatively resistant to the adverse clinical effects of BRV and coronavirus infections prior to reaching 3 to 4 weeks of age, these diseases are readily preventable in production systems in which it is feasible to hand-feed colostrum from vaccinated cows throughout the first 3 to 4 weeks of life.3,18 Even seronegative calves that do not receive colostral BRV antibodies within 24 hours after birth are solidly protected as long as they are fed colostrum or colostrum:milk mixtures that have a minimum BRV VN antibody titer of 1:1024 or higher.3 The major unresolved problem area, with respect to prevention of BRV and coronavirus infections by vaccination, is the suckling beef calf. Concentrations of BRV and coronavirus VN antibodies in the milk of vaccinated cows fall below protective levels by 3 to 7 days following parturition.1,2,14,15 Ideally, beef calves from vaccinated dams will develop subclinical enteric viral infections within the first few days after birth, while either milk VN antibody concentrations or (serum-derived) intestinal IgG1 VN antibody concentrations are still partially protective.1,4,5,18 However, some calves may not be exposed to BRV and coronavirus until after milk antibody concentrations fall below protective levels. Other calves fail to mount an immune response to viral challenge while protected by lactogenic immunity and remain susceptible to a subsequent exposure.18,19 Finally, many calves exposed to BRV while protected by lactogenic immunity will begin to shed virus and manifest signs of mild enteric disease within 1 to 9 days after VN antibodies in milk have fallen to nonprotective levels.18 Unfortunately, in lieu of complete protection, the manifestations of passive immunity to BRV that are often noted are (1) a delay of a few days in the onset of clinical signs,3,13,17-19 and/or (2) reduced severity of clinical signs,3,17,18 and/or (3) a reduction in the length of the period of viral shedding associated with infection.3,8,18 Although there are reports of successful field trials involving BRV/BRV-coronavirus-vaccinated cows,13,14,17,21-26 negative results12 have also been reported.
One of the major shortcomings of Calf Guard is its relative inefficiency for boostering serum and colostral antibody titers of seropositive cows.2-4,12,13,20,27,28 The ranges in BRV VN colostral antibody titers that have been reported in nonvaccinated cows, Calf Guard-vaccinated cows, and Scourguard 3 (K)-vaccinated cows are 1:32 to 1:3200, 1:501 to 1:4395, and 1:2896, respectively (Table 24). Experimental vaccines utilizing live BRV, either in Freund's incomplete adjuvant27 or in water-in-oil emulsions of mineral oil containing mannide oleate,8 stimulate much higher BRV VN antibody titers than those obtained with present commercial vaccines (see Table 24). Vaccination with an experimental vaccine of this type resulted in concentrations of BRV VN antibodies in milk of 1:1680 at 30 days after parturition.27 This is above the levels regarded as being protective.14 This vaccine was administered by intramuscular injection 8 to 10 weeks before the anticipated calving date and readministered 2 weeks later by infusion into the involuted mammary glands.27 At present, there is only one known serotype of bovine coronavirus.1,5,13 However, at least two serotypes of BRV are known to exist in the United States29 and Japan,30 and at least three BRV serotypes exist in Great Britain.22 Both passive and active immunity to BRV infections are serotype-specific.4,22,31 Some serotypes of human rotavirus (HRV) are pathogenic for calves,1 and a high prevalence of serum VN antibodies to three different serotypes of HRV has been reported in British cattle.22 Both of the bovine rotavirus-coronavirus vaccines that are commercially available in the United States (Calf Guard and Scourguard 3 [K], Norden Laboratories) are prepared utilizing (only) the original Nebraska calf diarrhea rotavirus isolate, which has been designated BRV-1. The implication of this for vaccinal efficacy is, as yet, unclear. Although BRV-1 is thought to be the most common serotype affecting cattle in the United States,29 some vaccine "breaks" have been found to have resulted from herd infections with heterologous virus.32 Vaccination with a BRV-1 vaccine of cows seropositive to BRV-1, BRV-2, HRV-1, HRV-2, and HRV-3 resulted in significant increases in serum VN antibody titers to all five agents.22 This indicates that effective monovalent vaccines could be useful for control of all BRV and HRV serotypes that are present in a herd at the time of booster vaccination. Recommendations for use of Scourguard 3 (K) in pregnant cows and heifers are summarized (see Table 8). Two doses of vaccine should be administered by intramuscular injection to pregnant cows and heifers at a 2-week interval. The second (immunizing) dose should be given 2 to 3 weeks before the anticipated calving date. Cows that have not calved within 40 days after administration of the immunizing dose should be revaccinated with a single dose. A single annual booster dose should be administered, 2 to 3 weeks prior to each subsequent calving.
This vaccine has a disadvantage in that calves that do not consume adequate quantities of colostrum shortly after birth may not be protected. Newborn calves should be closely observed. Those that do not suckle vigorously within 2 hours after birth should be hand-fed or tube-fed with 50 to 80 ml/kg body weight of fresh or frozen colostrum.33,34 Dairy calves raised on fresh cow's milk are dramatically less susceptible to neonatal enteric disease than those raised on milk replacers (which do not contain lactoglobulins).13,26,33 Continuous feeding of fresh, frozen, or fermented colostrum from vaccinated cows throughout the first 2 to 4 weeks of life is a highly effective means of preventing rotaviral and coronaviral infections in dairy calves.3,7,13,14,18 High-titered colostrum, resulting from vaccination with experimental vaccines, can be preventative when mixed with milk and fed in concentrations as low as 1%.18 Colostral BRV VN antibody titers achieved with commercial vaccines (see Table 24) should permit successful utilization of colostrum:milk mixtures containing 25 to 50% colostrum.
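The dilution arithmetic behind these recommendations can be sketched as follows; the 1:1024 protective threshold comes from the text, while the 1:4096 colostral titer is a hypothetical value within the vaccinated-cow range reported in Table 24:

```python
# Hedged sketch of the colostrum:milk dilution arithmetic implied above.
# Titers are expressed as reciprocals (4096 means 1:4096); mixing
# colostrum at fraction f with antibody-free milk scales the titer
# roughly by f. The 1:4096 colostral titer is hypothetical.

PROTECTIVE_TITER = 1024  # minimum BRV VN titer (1:1024) cited in the text

def mixture_titer(colostrum_titer: float, colostrum_fraction: float) -> float:
    """Approximate VN titer (reciprocal) of a colostrum:milk mixture."""
    return colostrum_titer * colostrum_fraction

for f in (0.25, 0.50):
    t = mixture_titer(4096, f)
    status = "protective" if t >= PROTECTIVE_TITER else "below threshold"
    print(f"{f:.0%} colostrum -> ~1:{t:.0f} ({status})")
```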
CALF GUARD ROTAVIRUS AND CORONAVIRUS VACCINATION PROGRAMS
Calf Guard is an MLV rotavirus and coronavirus combination vaccine that is recommended by the manufacturer for administration either to pregnant heifers and cows or to newborn calves, but not to both. General recommendations for use of this vaccine in calves and in cows and heifers are summarized (see Table 8).
Calves
Use in calves was previously discussed, under General Considerations.
Cows and Heifers
Two doses of vaccine should be administered by intramuscular injection at a 3- to 6-week interval. The second dose should be administered within 30 days of calving. Cows that do not calve during the first 60 days of the calving season should be given a booster vaccination. An identical regimen is recommended during subsequent pregnancies.
This vaccine does not produce milk and colostral VN antibody titers that are as high and as persistent as those resulting from vaccination with Scourguard 3 (K), mainly because it is relatively ineffective for boostering humoral and colostral antibody titers of seropositive cows; consequently, it cannot be recommended for this use.
Editorial: Multi-Omics Technologies for Optimizing Synthetic Biomanufacturing
Integrative Omics Group, Biological Sciences Division, Pacific Northwest National Laboratory, Richland, WA, United States, Department of Energy, Agile BioFoundry, Emeryville, CA, United States, Biological Systems and Engineering Division, Lawrence Berkeley National Laboratory, Berkeley, CA, United States, Department of Biology and Biological Engineering, Chalmers University of Technology, Gothenburg, Sweden, Functional and Systems Biology Group, Environmental Molecular Sciences Division, Pacific Northwest National Laboratory, Richland, WA, United States
Industrial manufacturing endures as an essential human activity yielding a variety of useful products; it plays a significant role in the global economy, with major impacts on everyday life. However, manufacturing requires the consumption of various raw materials (especially petroleum derivatives), generates a variety of harmful waste products, causes pollution, and is energetically inefficient. Biological manufacturing from sustainable, affordable, and scalable feedstocks could potentially displace the entire portfolio of products currently made by industrial processes, enabling the manufacture of renewable and eco-friendly products (Clomburg et al., 2017). Thus, the successful development of a robust biomanufacturing strategy and technology platform, based on the latest advances in synthetic biology and chemical catalysis, will decrease both cost and production time compared with existing manufacturing processes. Development of biomanufacturing processes on a synthetic biology platform requires multidisciplinary efforts across science and engineering fields, including molecular biology, microbiology, genetic engineering, informatics, metabolic modeling, and chemical or process engineering (El Karoui et al., 2019).
In this research topic, Amer and Baidoo discussed the importance of using multi-omic analytical approaches to monitor and improve the biomanufacturing process. These approaches include genomics, transcriptomics, proteomics, metabolomics and fluxomics (Figure 1). The multi-omics data acquired from the biomanufacturing process not only provide potential solutions to low production efficiency by identifying underlying metabolic bottlenecks or pathway sinks, but also guide the understanding of how these modified biological systems function. Furthermore, such multi-omics technologies are constantly innovated and improved to expand molecular detection coverage, obtain data with increased accuracy using new or novel analytical instruments, achieve better computational algorithms, and create wider and deeper databases to support a growing variety of biological host systems. Roy et al. described a combined computational tool to optimize the DBTL (Design-Build-Test-Learn) cycle in the biomanufacturing process by collecting, visualizing, and utilizing large multi-omics datasets from various biological systems, and emphasized the importance of these datasets for downstream metabolic engineering with machine learning approaches.
Gao et al. compared microflow and nanoflow liquid chromatography-selected reaction monitoring (LC-SRM) methods for analysis of hundreds of targeted peptides associated with 132 proteins in major pathways of Pseudomonas putida, a versatile bacterial host for production of bioproducts and biofuels via metabolic engineering. The increased throughput and accuracy of protein measurement will not only reduce the DBTL cycle time in future applications, but is, in addition, easily applied to other biomanufacturing host organisms. Fletcher and Baetz reviewed the toxicity of phenolic compounds which are produced from pretreatment or hydrolysis of natural lignocellulosic biomass based on functional genomics and transcriptomics approaches, especially to the important model organism and industrial bioproduction host strain, Saccharomyces cerevisiae. Information regarding physiological tolerances of toxic phenolic compounds may be applied and evaluated in other host strains for future improvement. In that regard, Garcia et al. developed the genome-scale metabolic model of Clostridium thermocellum for efficient conversion of lignocellulosic biomass which has unique preference for its anaerobic and thermophilic growth attributes. This model will provide a useful tool to understand physiological and metabolic parameters associated with potential future biomanufacturing process.
Pinheiro et al. studied xylose metabolism in Rhodosporidium toruloides, an oleaginous yeast with significant emerging potential in industrial applications, using a detailed physiological characterization interpreted with absolute proteomics and genome-scale metabolic models. Kim et al. performed a multi-omics analysis on R. toruloides, and the transcriptomics, proteomics, metabolomics and RB-TDNA sequencing data improved the current genome-scale model, making it a more exhaustive and accurate metabolic network model.
Pomraning et al. integrated high-throughput proteomics and metabolomics data as part of a DBTL cycle focused on improving production efficiency of 3-hydroxypropionic acid (3HP) in engineered Aspergillus pseudoterreus strains. This was the first report of 3HP production in a filamentous fungus amenable to industry-level biomanufacturing of organic acids at high titer and low pH. Chroumpi et al. studied another filamentous fungus Aspergillus niger for better understanding of pentose catabolic pathways by deletion of the key genes. The high-throughput multi-omics data (i.e., transcriptome, metabolome and proteome) generated on the mutant strains revealed that these genes are critical for metabolic pathways but not as critical for growth of A. niger on more complex biomass substrates, which raises fundamental questions on nutrient acquisition during growth on various carbon sources.
Wu et al. investigated the metabolic potential of Zymomonas mobilis for conversion of glucose and xylose to 2,3-butanediol. This study used calculated thermodynamic and kinetic parameters to generate insights of Z. mobilis metabolism. They also performed pathway and dynamic flux balance analysis to understand metabolic potential and production efficiency for future industrial applications. Nitta et al. acquired metabolomics and transcriptomics data on antibiotic producing strain, Streptomyces coelicolor to understand the functional connections between the production of antibiotic, actinorhodin and the level of cAMP. They found that higher levels of cAMP improved cell growth and production of actinorhodin, which was confirmed by the metabolomic and transcriptomic data.
We conclude by emphasizing that high-throughput multi-omics data play a critical role in unraveling the complexities of metabolic engineering to improve the production efficiency and product titer of a variety of industrial microbes. In addition, the generation of multi-omics datasets accelerates the adoption and subsequent application of artificial intelligence approaches such as machine learning to the design of improved microbial bioproduction host systems (Lawson et al., 2021). From a technological perspective, enhanced high-throughput measurement and improved coverage of multi-omics analyses with higher accuracy will not only shorten DBTL cycle times for the metabolic engineering process but also lead to an improved fundamental understanding of engineered biosystems. Refined tools and analytical platforms will benefit the manipulation, modification, and reshaping of potential host systems. The long-term outcomes of these efforts will impact the world and our future by decarbonizing current manufacturing processes in an environmentally friendly manner.
AUTHOR CONTRIBUTIONS
Y-MK, CP, EK, and SB served as co-editors for the Research Topic: Multi-omics technologies for optimizing synthetic biomanufacturing. Y-MK conceived of the idea for the research topic, and all the authors contributed to writing the editorial.
Electric Current Generation by Increasing Sucrose in Papaya Waste in Microbial Fuel Cells
The accelerated increase in energy consumption by human activity has intensified the search for new energies that do not pollute the environment; in this context, microbial fuel cells have emerged as a promising technology. The objective of this research was to observe the influence of sucrose at different percentages (0%, 5%, 10% and 20%) on the generation of bioelectricity from papaya waste in microbial fuel cells (MFCs). Voltage and current peaks of 0.955 V and 5.079 mA were generated by the cell with 20% sucrose, which operated at an optimal pH of 4.98 on day fifteen. Likewise, the internal resistance values of all the cells were influenced by the increase in sucrose: the cell without sucrose showed 0.1952 ± 0.00214 kΩ, while the cell with 20% sucrose showed 0.044306 ± 0.0014 kΩ. The maximum power density was 583.09 mW/cm2 at a current density of 407.13 A/cm2 and a peak voltage of 910.94 mV, while phenolic compounds showed the greatest presence in the FTIR (Fourier transform infrared spectroscopy) absorbance spectrum. We were able to molecularly identify the species Achromobacter xylosoxidans (99.32%), Acinetobacter bereziniae (99.93%) and Stenotrophomonas maltophilia (100%) present on the anode electrode of the MFCs. This research demonstrates a novel use of sucrose to increase the energy output of microbial fuel cells, improving on existing values and providing an environmentally friendly way of generating electricity.
Introduction
The exponential growth of society has generated two main problems: the need for new sources of electricity generation and the need to reuse the waste produced by human consumption. The latter has become a burden for the collection centers of municipalities in large and small cities [1,2]. In 2016, waste production exceeded 2.02 billion tons, and it was estimated that by 2050 it would reach approximately 3.4 billion tons, raising the cost of waste management from 1.61 million dollars in 2020 to 2.50 million by 2030 [3,4]. Part of this waste comes from agricultural production, whose processes of sowing, harvesting, sale and consumption generate different types of waste that in recent years have begun to be reused in order to give them a second use and take advantage of them in other activities of society [5]. One of the waste products with the highest production in Latin America, and one consumed worldwide, comes from papaya (Carica papaya L.). In its different presentations, this product represents approximately 15.36% (11.22 metric tons) of the tropical fruit produced each year; approximately 24 countries produce this type of product [6]. The high consumption of this fruit is mainly due to its high amounts of vitamins A, C and E, besides being a natural diuretic. In the last decade, its production has increased by 85% in South American countries, mainly in their tropical zones [7]. On the other hand, new technologies have emerged to generate electricity in a sustainable way, for example, microbial fuel cells (MFCs). These devices generate electricity through the oxidation and reduction processes that occur inside them, converting chemical energy into electrical energy [8,9]. These systems are generally composed of two chambers (anodic and cathodic). In the chamber where the anodic electrode is placed, the microorganisms oxidize the organic matter, producing electrons, which travel through an external circuit to the cathodic chamber, where the reduction reaction occurs. This flow of electrons produces electricity [10-12].
There is a wide variety of substrates used as fuels in MFCs, with biological factors being a fundamental characteristic because the substrate serves as an environment for microbial growth and other metabolic activities; therefore, the composition and degradability of the substrate promote the rate of electron-generating activity, which translates into better performance of the MFCs [13-15]. Various investigations have reported the use of a wide range of substrates, from domestic, industrial and municipal wastewater to simpler substrates such as glucose, which serves as a carbon source in MFCs for electricity production [16,17]. The concentration of glucose as a substrate establishes the maximum amount of chemical energy available to convert into electrical energy, so a substrate with a high content of sugars can improve the generation of electrons in MFCs [18]. In their research, Kamau et al. (2020) used waste from avocados, tomatoes, bananas, watermelons and mangos as substrates, mainly monitoring the voltage and current values generated in MFCs; they reported that tomato produced the highest voltage (0.702 V), while the current values increased linearly over time for all residues. That study also indicated that moisture content and carbohydrate level were the main factors influencing electricity generation [19]. Likewise, Kalagbor and Akpotayire (2020) evaluated the generation of electricity from tropical fruit residues (watermelon and papaya) in single-chamber MFCs. The cells were monitored for a period of four weeks, and the maximum voltage generated was 139.5 mV from the watermelon substrate and 222.9 mV from the papaya. The power density of the watermelon substrate was 0.2452 mW/cm2, and for the papaya substrate, the values of dissolved oxygen (DO) and biological oxygen demand (BOD) showed that the medium was conducive to the proliferation of microorganisms [20,21]. These results demonstrated that single-chamber MFCs are capable of generating electricity from tropical fruit residues, so the use of these systems was recommended as a sustainable alternative, since they represent an option to increase the electricity supply in urban and rural areas [22,23]. Likewise, Utami and Yenti (2018) studied the generation of electrical energy from papaya peel waste in MFCs. In the anaerobic anode compartment, this peel substrate was used as an electron donor, while in the cathode compartment, KMnO4 was used as an electron acceptor. The resulting power density was 121 mW/m2, with a current of 179 mA and a voltage of 1.095 V [24]. It has thus been proven that carbohydrates rich in glucose and fructose are of essential importance for the generation of electricity in MFCs, and sucrose is a natural component present in all natural juices.
In this sense, the main objective of this research was to evaluate the generation of electricity using papaya residues as a substrate by adding different concentrations of sucrose (0%, 5%, 10%, and 20%) in a single-chamber microbial fuel cell manufactured at low cost with Zn-Cu electrodes, monitoring their voltage, current, current density, power density and pH for 30 days. Thus, the values of the internal resistance and absorbance spectrum were also measured by FTIR (Fourier transform infrared spectroscopy). Figure 1 shows the influence of sucrose concentrations on the generation of voltage, current and pH, with the concentration of sucrose at 20% being the one with the highest value when used as a substrate, reaching a maximum value (0.955 V) on day 15. From the first day, the measurements were remarkable, with the highest voltage values when the sucrose concentration was 20% generating 0.19 V more than the MFCs with 10% and 0.26 V with the MFCs used as blank (Figure 1a). While in Figure 1b it is observed that the MFCs with 20% sucrose generated a higher electric current with a peak value of 5.079 mA on the sixteenth day, all electrical current values have their maximum peaks between the eleventh and sixteenth day. The main reason for the increase in current parameters, according to Fujimura et al. (2022), is because sucrose, being a disaccharide (glucose and fructose), has been used for fermentation, releasing electrons in the process, generating electric current [24]. On the other hand, when glucose is coupled in respiratory chains, it is oxidized to gluconate by glucose dehydrogenase and is subsequently oxidized to 2-cetagluconate by gluconate dehydrogenase [25]. While microorganisms consume glucose as a source of carbon electrons and protons, previous research mentions that 24 mol of electrons and hydrogen ions are generated by the oxidation of one mole of glucose under anaerobic conditions [26]. On the other hand, the pH values increased from the first day of monitoring to the last, as shown in Figure 1c [27]. The pH values influence the generation of voltage and current in MFCs mainly because the microorganisms present in each cell need the ideal conditions for their growth and acclimatization [28]. Figure 2 shows the values of the electrical resistance obtained from the microbial fuel cells at different percentages of sucrose, where the experimental data is adjusted to Ohm's law (V = RI), where the x-axis is the current (I) and the y axis is the voltage (V), for which the slope of the linear fit is the internal resistance (R int. ) of the cells. The R int. values found were 0.044306 ± 0.0014, 0.03572 ± 0.00716, 0.02269 ± 0.0015 and 0.1952 ± 0.00214 KΩ and stop the MFCs with 0%, 5%, 10% and 20% sucrose. As clearly observed in Figure 3, the values decrease with increasing sucrose concentration. According to Ueda et al. (2022), the time required for the decomposition of the substrates has a dependence on the resistance of the microbial fuel cells. It is known that when the resistance is low, the electrons flow more freely, generating a greater electric current, and it is probable that this affects the microbes in the electrode biofilm [29,30]. 
Figure 3 shows the values of power density (PD) and maximum voltage as a function of current density (CD). The MFC with 20% sucrose generated the highest PD, 583.09 mW/cm2 at a CD of 407.13 A/cm2 and a peak voltage of 910.94 mV; this was 36% higher than that of the MFC used as the blank (0% sucrose), which generated a PD of 427.14 mW/cm2 at a CD of 4.920 A/cm2 with a peak voltage of 537.72 mV. These values are higher than those reported by Mohamed et al. (2020), who used kitchen wastewater and photosynthetic algae as fuel in dual-chamber MFCs, generating maximum PD peaks of 31.6 ± 0.5 mW/cm2 at a CD of 172 mA/cm2 with a maximum voltage of 600 mV [31]. Similarly, Kondaveeti et al. (2019) used citrus peels as fuel in single-chamber MFCs, generating PD peaks of 63.4 mW/cm2 at a CD of 280.56 mA/cm2 and a peak voltage of 0.478 V [32]. According to Yaqoob et al. (2020), the high PD values obtained in this research are due to the metallic electrodes used, whose good electrical properties keep current losses low in the energy generation process [33].
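The power- and current-density quantities plotted in Figure 3 follow directly from the cell voltage, the external load, and the electrode area; a minimal sketch is shown below (the voltage readings are placeholders, and in practice the polarization curve is typically traced by varying the external resistance):

```python
# PD = V*I/A and CD = I/A, computed from cell voltages measured across
# the 100 Ω external resistance, normalized by the 18 cm^2 electrode
# area given in the fabrication section. Voltage readings are
# illustrative placeholders.
import numpy as np

AREA_CM2 = 18.0    # electrode window area (cm^2)
R_EXT_OHM = 100.0  # external circuit resistance (Ω)

voltage_V = np.array([0.54, 0.72, 0.91])  # hypothetical cell voltages
current_A = voltage_V / R_EXT_OHM         # I = V/R through the load
cd_mA_cm2 = 1e3 * current_A / AREA_CM2    # current density (mA/cm^2)
pd_mW_cm2 = 1e3 * voltage_V * current_A / AREA_CM2  # power density (mW/cm^2)

for v, c, p in zip(voltage_V, cd_mA_cm2, pd_mW_cm2):
    print(f"V={v:.2f} V  CD={c:.3f} mA/cm^2  PD={p:.3f} mW/cm^2")
```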
Figure 4 shows the absorbance spectrum of the compounds present in the different substrates (0%, 5%, 10% and 20% sucrose). The most intense peak, at 3289 cm−1, belongs to the N-H stretch, O-H groups, phenols, and carboxylic acids, while the peaks at 2904 and 2848 cm−1 are associated with the C-H stretching of alkanes, aldehydes and ketones. Likewise, the peak at 1756 cm−1 belongs to alkane C-H stretching, while that at 1660 cm−1 is associated with C=C stretching, N-H primary amine, C=N stretching and amide stretching. The 1545 cm−1 peak indicates the presence of alkane C-H stretching, alkene C=C stretching, C=N stretching, primary and secondary amine C-N stretching and amide; finally, the peaks at 1255 and 1030 cm−1 indicate alkane C-H stretching, alkene C=C stretching, C=N stretching, primary and secondary amine C-N stretching and amide [34-36]. It has been shown that a high content of phenols releases large amounts of electrons, which travel through the external circuit to the cathode electrode, thus generating a higher electrical current output [37,38]. Figure 4. FTIR spectrophotometry of the papaya residues with sucrose.
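Peak positions like those assigned in Figure 4 can be located programmatically; below is a sketch using SciPy's generic peak finder on a synthetic spectrum built from the reported band positions (real spectra would come from the instrument's exported wavenumber/absorbance data):

```python
# Locate FTIR absorbance peaks with scipy.signal.find_peaks. The
# spectrum below is synthetic: Gaussian bands placed at the peak
# positions reported in the text, standing in for instrument data.
import numpy as np
from scipy.signal import find_peaks

wavenumber = np.linspace(4000, 600, 3400)  # cm^-1 axis
centers = [3289, 2904, 2848, 1756, 1660, 1545, 1255, 1030]
absorbance = sum(np.exp(-((wavenumber - c) / 25.0) ** 2) for c in centers)

peaks, _ = find_peaks(absorbance, height=0.5)
print("Detected peaks (cm^-1):", sorted(int(round(w)) for w in wavenumber[peaks]))
```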
Table 1 shows the regions sequenced and analyzed with the BLAST program, in which identity percentages of 99.32%, 99.93% and 100.0% were obtained, corresponding to the species Achromobacter xylosoxidans, Acinetobacter bereziniae and Stenotrophomonas maltophilia, respectively. Figure 5 shows the dendrogram, built using the MEGA program, which relates and groups sequences of species [39]. These bacteria are ubiquitous; they are found in soil, water, air, plants and animals. They transfer electrons to the anode via external-loop carrier proteins, such as cytochrome c, or via membrane appendages called nanowires [40,41]. An essential factor in the production of electric current is the formation of biofilms on the anode electrode. These consist of two types of microorganisms, fermentative and electrogenic: the former hydrolyze organic compounds, and the metabolites they secrete are used as substrates by the electrogenic bacteria to generate electrons, protons and CO2 through oxidative processes [42]. Figure 6 shows the electricity generation process through microbial fuel cells, where MFCs with 5%, 10% and 20% sucrose connected in series generated a voltage of 2.09 V, enough to turn on a red LED bulb. This shows that papaya residues have great potential for the generation of bioelectricity. Recent research has shown the importance of other residues in other settings, which contributes to the sustainability of these types of products [43,44].
Fabrication of Single-Chamber Microbial Fuel Cells
For the chambers of the microbial fuel cells (three in total), 400 cm3 polyethylene terephthalate cubic containers were used, in which an 18 cm2 hole was made on one of the faces where the cathodic electrode (zinc, Zn) was placed, while the anodic electrode (copper, Cu) was placed inside the container; both electrodes were joined by means of an external circuit with a resistance of 100 Ω. As a proton exchange membrane, 10 mL of a solution obtained from 6 g of KCl and 14 g of agar in 400 mL of H2O was used (see Figure 7). The sucrose treatments were prepared at 0% (blank), 5%, 10%, and 20% from a 50% sucrose stock solution and papaya residue extract, with a final working volume of 200 mL.
Collection of Papaya Waste
Three decomposing papayas (approximately 5 kg) were collected from La Hermelinda market, Trujillo, Peru, in hermetic bags and transferred to the laboratory, where they were washed three times with distilled water to remove any impurities (sand, dust or insects). These wastes were ground in an extractor (Labtron, LDO-B10, USA) until the substrate was homogeneous and then stored in a 1000 mL bottle at 20 ± 2 °C until used in the microbial fuel cells.
Molecular Identification of Microorganisms by Sequencing the 16S rRNA Genes
Molecular identification was carried out by the Analysis and Research Center of the "Biodes Laboratories". DNA was extracted from pure (axenic) bacterial cultures using the CTAB method and analyzed molecularly by amplification of the 16S rRNA gene [46]. The genetic sequences were evaluated with the bioinformatic program MEGA-X to generate consensus sequences and develop phylogenetic trees. The identification of the microbial species was carried out using the GenBank databases and the programs Nucleotide BLAST (Basic Local Alignment Search Tool) and EzBioCloud [47,48]. Molecular analysis was performed only on the MFC with papaya waste and 20% sucrose.
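A hedged sketch of the BLAST identification step, using Biopython's remote BLAST client against NCBI's nt database (the FASTA filename is a placeholder; the MEGA-X tree building and EzBioCloud queries are separate tools not shown here):

```python
# Query a 16S rRNA consensus sequence against NCBI nt with Biopython's
# web BLAST client and report percent identity for the top hits.
# "consensus_16S.fasta" is a hypothetical input file.
from Bio import SeqIO
from Bio.Blast import NCBIWWW, NCBIXML

record = SeqIO.read("consensus_16S.fasta", "fasta")
handle = NCBIWWW.qblast("blastn", "nt", record.seq)  # remote query (slow)
result = NCBIXML.read(handle)

for alignment in result.alignments[:3]:  # three best-scoring hits
    hsp = alignment.hsps[0]
    identity = 100.0 * hsp.identities / hsp.align_length
    print(f"{alignment.title[:60]}  identity={identity:.2f}%")
```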
Conclusions
Bioelectricity was successfully generated using papaya waste with sucrose at different percentages (0%, 5%, 10%, and 20%) as fuel in laboratory-scale microbial fuel cells using zinc and copper electrodes. The cell with the highest percentage of sucrose (20%) showed the best electrical parameters, generating an electrical voltage and current of 0.955 V and 5.079 mA, respectively, with an optimal operating pH of 4.98 on the fifteenth day. Likewise, the internal resistance of the cells decreased as sucrose increased, with the maximum internal resistance being 0.1952 ± 0.00214 kΩ and the minimum being 0.044306 ± 0.0014 kΩ, belonging to the cells with 0% and 20% sucrose, respectively. The maximum power density was 583.09 mW/cm2 at a current density of 407.13 A/cm2 with a peak voltage of 910.94 mV, belonging to the cell with 20% sucrose. Finally, the absorbance peaks demonstrate the presence of phenols, which helps explain the high current and voltage values. The species Achromobacter xylosoxidans, Acinetobacter bereziniae and Stenotrophomonas maltophilia were identified with 99.32%, 99.93% and 100% identity, respectively, from the anode electrode of the MFC with 20% sucrose. For future work, replicates (at least three) of each MFC should be made; the optimal pH value (4.98) found in this research should be used to standardize the pH; and the metal electrodes should be covered with a chemical compound that is not harmful to the microorganism species found in the substrates (Achromobacter xylosoxidans, Acinetobacter bereziniae and Stenotrophomonas maltophilia), in order to improve the efficiency of the microbial fuel cells.
Impact of Different Fertilizer Sources under Supplemental Irrigation and Rainfed Conditions on Eco-Physiological Responses and Yield Characteristics of Dragon’s Head (Lallemantia iberica)
The effects of the irrigation regime and different fertilizer sources on the eco-physiological responses and yield characteristics of dragon’s head were explored in a factorial experiment based on a randomized complete block design with 12 treatments and 3 replications in the 2019 growing season. The treatments included six different fertilizer sources (animal manure, vermicompost, poultry manure, biofertilizer, chemical fertilizer, and control) and two irrigation regimes (rainfed and supplemental irrigation). The results indicated the positive effects of supplementary irrigation and the application of vermicompost, poultry manure, and animal manure by increasing the absorption of nutrients (phosphorus and potassium) and improving relative water contents, chlorophyll and carotenoid contents, and the fixed oil percentage of dragon’s head. The activities of catalase, ascorbate peroxidase, and superoxide dismutase decreased in the rainfed plants, whereas organic fertilizer application increased the antioxidant enzyme activity. The highest grain yield (721 kg ha−1), biological yield (5858 kg ha−1), total flavonoids (1.47 mg g−1 DW), total phenol (27.90 mg g−1 DW), fixed oil yield (200.17 kg ha−1), and essential oil yield (1.18 kg ha−1) were noted in plants that were treated with vermicompost under supplemental irrigation. Therefore, it is recommended that organic fertilizers such as vermicompost and poultry manure be used to substitute chemical fertilizers. These practices can help popularize organic crops using rainfed and supplementary irrigation.
Introduction
The increasing attention to the adverse effects of chemical pharmaceuticals, compared with the fewer side effects associated with organic herbal medications, has sparked interest in the cultivation of medicinal plants [1]. Among the most common medicinal plants, dragon's head (Lallemantia iberica, Lamiaceae family) is an important flowering species in Iran [2] with antioxidant, antibacterial, and analgesic effects [3]. Dragon's head is mainly grown as a rainfed crop and can tolerate mild drought conditions. However, its yield is drastically reduced when drought conditions worsen.
In arid and semi-arid environments, water is the factor most limiting agricultural crop production [4]. In rainfed farming, plants' responses to water-deficit stress are affected by the varying frequency of dry and wet periods, the patterns of soil and atmospheric water deficits, and the degree and timing of droughts [5]. Dragon's head is mainly cultivated under rainfed conditions and is moderately drought-tolerant, but its yield sharply decreases with rising drought conditions [6]. Rahimzadeh and Pirzad [7] indicated that water-deficit stress decreased chlorophyll content, catalase, ascorbate peroxidase, and superoxide dismutase activities, grain yield, biological yield, and seed oil content of flax in comparison to the control. Indeed, supplemental irrigation ensures the optimal use of rainfall, where a locale's limited water stores can supply the moisture requirements of the plants at the appropriate times [8]. Accordingly, supplemental irrigation, which consumes little water, is helpful during the essential plant development and growth stages, and it increases yield relative to the amount of water used [9]. It can decrease the effect of water stress on crop growth, development, and production and improve crop water productivity, mainly where irrigation supplements natural precipitation [10]. Mehrabi et al. [11] stated that the application of supplemental irrigation and biofertilizers to cumin plants can ameliorate the negative effects of water scarcity by boosting leaf water potential, the efficiency of carbon dioxide use, the transpiration rate, nutrient availability, and the water supply to the roots. Moreover, cumin plants' growth, development, and dry-weight yield can also improve under water-deficit stress.
Although biological and organic fertilizers have been used in agriculture for decades to achieve clean production and sustainable cultivation [12,13], they have recently attracted renewed consideration because of the acknowledged damaging impacts of industrial fertilizers and the growing interest in organic and sustainable farming [14]. Without a serious focus on soil biodiversity, achieving the aims of organic and sustainable agriculture will not be easy [3]. Thus, in agricultural systems, the application of microorganisms is of strategic interest to reduce chemical fertilizer usage and improve environmental sustainability [12,15]. A biofertilizer is described as a substance that includes living microorganisms. It is recognized for promoting root growth and development, besides playing a pivotal role in increasing germination rate and vigor in young plants, resulting in improved plant growth, development, and assimilation under water-shortage stress [8,15]. Animal manures supply the plant with its macro- and micronutrient requirements while increasing organic matter in the soil, the relative C:N balance, and plant nutrient absorbability, all of which contribute to higher plant growth [16,17]. Another potential advantage of animal manure is its ability to reduce water-deficit stress. In such cases, plants alleviate the effects of water-deficit stress by adjusting osmotic pressure, which is achieved by regulating water uptake and cellular swelling [18]. Two fundamental properties of poultry manure are its ability to release nutrients slowly and its residual effect on crops. The fixed nutrients increase soil humus build-up and nutrient supplies in the soil, which improves the uptake and transport of required nutrients within the plant's vascular system during water-shortage stress [19]. Vermicompost provides various benefits to agricultural soil, such as increasing its nutrient-holding capacity, retaining moisture, improving soil structure, increasing soil porosity, maintaining average soil temperature, improving nutrient content, increasing microbial activity, and enhancing antioxidant traits in plants and their tolerance to environmental stress [20]. Therefore, organic fertilizers improve soil chemical attributes such as CEC, pH, and nutrient accessibility, boosting the level of organic matter in the field soil and hence its fertility [17]. Heydarzadeh et al. [5] stated that smooth vetch plants inoculated with biofertilizers improved non-enzymatic and enzymatic antioxidants, reducing water-stress damage. Maddahi et al. [12] showed that dragon's head's growth and nutritional conditions could affect its antioxidant properties; as a result, the winter-sown plants fertilized with vermicompost and biofertilizer had higher antioxidant properties. Darakeh et al. [21] noted that bio-organic fertilizers increased the phenol and flavonoid content of black cumin seeds. Ahmadian et al. [22] showed that the application of 20 tons of animal manure per hectare reduced the negative effects of drought stress by providing nutrients such as N, P, and K and improved the quality of cumin essential oil. Keshavarz et al. [23] indicated that mint plants treated with vermicompost and chemical fertilizers presented a higher fixed oil percentage, fixed oil yield, essential oil, essential oil yield, and growth characteristics compared with the control.
It has been reported that the use of organic fertilizers reduced the use of chemical fertilizers, supplied nutrients in a way entirely appropriate for the natural nutrition of plants, helped to preserve the environment, improved the fertility of agricultural lands, and increased the yield of cumin plants [11]. In addition, organic fertilizers improved crop growth and dry weight and increased the resistance of plants to water-shortage conditions, diseases, and pests [22].
Rainfall fluctuations and pollution from industrial fertilizers and pesticide residues are the two significant challenges in producing medicinal plants, whose production has significantly decreased in Iran. Hence, supplying such crops with organic and biological fertilizers is vital for improving their qualitative and quantitative yields under water stress [24]. Furthermore, introducing such organic and biofertilizers to crop growers and providing advice on substituting chemical fertilizers with environment-friendly ones can help organic crops expand under rainfed and supplementary irrigation conditions. Therefore, this research aimed to evaluate a novel aspect: the impact of organic, biological, and chemical fertilizers on the physiological responses, antioxidant activity, and yield of dragon's head, a plant largely used in traditional medicine, under supplemental irrigation and rainfed conditions. The findings of this study can be used to improve the management approaches currently applied in the cultivation of dragon's head to enhance its production.
Photosynthetic Pigments, Relative Water Content (RWC), and Antioxidant Enzyme Activity
The photosynthetic pigment content (chlorophyll a (Chl a), chlorophyll b (Chl b), chlorophyll a+b (Chl a+b), and carotenoids (Car)) and relative water content (RWC) of dragon's head were only influenced by the simple effects of the irrigation conditions and the different fertilizer source treatments (Table 1). Specifically, the photosynthetic pigments and RWC showed higher values under supplemental irrigation conditions (Table 1). Concerning the fertilization source, these traits were significantly promoted by vermicompost and poultry manure (Table 1). The activities of CAT, SOD, and APX were influenced by the irrigation conditions, the different fertilizer sources, and their interaction (Table 1). Supplemental irrigation increased enzyme activity regardless of the type of fertilizer (Table 1). Finally, among the different fertilizers, vermicompost determined the highest CAT, SOD, and APX activities (Table 1). Different fertilizer sources significantly increased the photosynthetic pigment content compared to control plants in both rainfed and supplemental irrigation conditions (Figure 1a-d). The content of chlorophyll (a, b), chlorophyll a+b, and carotenoids was higher under supplemental irrigation than in rainfed conditions (Figure 1a-d). Plants under supplemental irrigation exhibited a higher RWC than those under rainfed conditions (Figure 2). The highest RWC was recorded under supplemental irrigation in plants treated with vermicompost and poultry manure (Figure 2). Conversely, the lowest values were found under rainfed conditions in the biofertilized and control plants (Figure 2). Table 1 footnote: ns (not significant); ** (significant at p < 0.01); df (degrees of freedom); Chl a (chlorophyll a); Chl b (chlorophyll b); Chl a+b (chlorophyll a+b); Car (carotenoids); RWC (relative water content); CAT (catalase activity); APX (ascorbate peroxidase activity); SOD (superoxide dismutase activity).
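A minimal sketch of the two-factor analysis summarized in Table 1, assuming plot-level data in a CSV with block, irrigation, fertilizer, and trait columns (the filename and column names are hypothetical), using statsmodels:

```python
# Two-way ANOVA for the factorial RCBD: irrigation x fertilizer with
# block as an additive term. Column and file names are placeholders.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("dragons_head_traits.csv")  # hypothetical plot-level data

model = smf.ols("rwc ~ C(block) + C(irrigation) * C(fertilizer)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))  # main effects and interaction terms
```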
Total Flavonoids and Total Phenol Content, DPPH Radical Scavenging, Superoxide Radical Scavenging, and Chain-Breaking Capacity
Irrigation conditions, fertilizer sources, and their interactions significantly influenced the total flavonoid and phenol content. TPC and TFC were promoted under supplemental irrigation ( Table 2). Among fertilizers, the highest values were achieved in plants treated with vermicompost.
DPPH (2,2-diphenyl-1-picrylhydrazyl) radical scavenging capacity, superoxide radical scavenging capacity, and chain-breaking capacity were only influenced by the simple effects of the irrigation conditions and the different fertilizer treatments. Supplemental irrigation promoted DPPH RS, SRS, and CBC more than rainfed conditions (Table 2). Among all fertilizer treatments, these traits were most stimulated by the application of vermicompost and poultry manure (Table 2).
The highest content of total flavonoids (1.47 mg g−1 DW) and total phenol (27.90 mg g−1 DW) was recorded in plants treated with vermicompost under supplemental irrigation. The application of fertilizer sources significantly increased the percentages of DPPH radical scavenging capacity, superoxide radical scavenging capacity, and chain-breaking capacity versus the control (Figure 5a-c). The highest percentages of DPPH radical scavenging, superoxide radical scavenging, and chain-breaking capacity (54.27, 47.74, and 4.21%, respectively) were observed in plants treated with vermicompost under supplemental irrigation conditions, while the lowest ones (26.74, 35.90, and 2.23%, respectively) were observed in control plants under rainfed conditions (Figure 5a-c). The application of poultry and animal manures did not affect the DPPH and superoxide radical scavenging capacities compared to the chemical fertilizers under supplemental irrigation (Figure 5a,b). Compared to chemical fertilizers, the animal manure treatment showed the same effect on chain-breaking capacity under supplemental irrigation conditions (Figure 5c). In addition, animal manure under rainfed conditions did not produce any effect on superoxide radical scavenging capacity compared to poultry manure (Figure 5b). Table 2 footnote: ns (not significant); ** (significant at p < 0.01); df (degrees of freedom); TPC (total phenol content); TFC (total flavonoid content); DPPH RS (DPPH radical scavenging); SRS (superoxide radical scavenging); CBC (chain-breaking capacity).
Content of Elements, Biological and Grain Yields, Fixed Oil Percentage and Fixed Oil Percentage Yield, and Essential Oil and Essential Oil Yield
The irrigation conditions, different fertilizer source treatments, and their interaction influenced the grain yield, the biological yield, and the essential oil yield (Table 3). On the other hand, the fixed oil percentage, fixed oil yield, essential oil, and the content of phosphorus and potassium were only influenced by the simple effects of the irrigation conditions and different fertilizer source treatments (Table 3). Supplemental irrigation promoted BY, GY, FO, FOY, EOY, and P and K content compared to rainfed conditions, while an opposite trend was observed for EO (Table 3). As regards fertilizer sources, plants receiving vermicompost exhibited the highest values of BY, GY, FO, FOY, EOY, EO, and P and K content, while the lowest values were obtained in control plants (Table 3). ns (not significant); ** (significant at p < 0.01); df (degrees of freedom); EO (essential oil); EOY (essential oil yield); FO (fixed oil); FOY (fixed oil yield); BY (biological yield); GY (grain yield).
The phosphorus and potassium contents in irrigated plants were higher than in rainfed plants (Figure 6a,b). The highest phosphorus and potassium contents were 0.75 and 19.56 mg g −1 DW, respectively, recorded in irrigated plants treated with vermicompost, while their lowest values (0.50 and 14.28 mg g −1 DW, respectively) were found in rainfed control plants (Figure 6a,b). No significant difference in phosphorus in supplemental irrigation conditions between vermicompost and poultry manure was found (Figure 6a).
Biological and grain yields in supplemental irrigated plants were significantly greater than those under rainfed conditions (Figure 7a,b). Moreover, the different fertilizer sources applied under irrigation conditions considerably increased BY and GY compared to the control plots (Figure 7a,b). The highest BY and GY (5858 and 721 kg ha −1 , respectively) were measured in plants treated with vermicompost and supplemental irrigation (Figure 7a,b). Conversely, their respective lowest values (1781 and 377 kg ha −1 ) were found in rainfed control plants (Figure 7a,b).
Fixed oil percentage and fixed oil yield of dragon's head seeds improved considerably in plants treated with supplemental irrigation (Figure 8a,b). The highest essential oil yield was obtained in plants treated with vermicompost under supplemental irrigation (Figure 7b). In contrast, the lowest EOY (0.50 kg ha−1) was recorded in rainfed control plants (Figure 7b). Finally, EOY did not show significant differences between plants treated with vermicompost and poultry manure under rainfed conditions (Figure 7b).
Discussion
Based on our findings, as water-shortage stress intensified in dragon's head plants, chlorophyll a and b, chlorophyll a+b, and carotenoids decreased at the flowering stage. Nevertheless, they improved in response to the different fertilizer sources. The degradation of such pigments, or the reduction in their synthesis associated with the reduced activity of the enzymes involved in their synthesis, causes chlorophyll decline in crops exposed to water-deficit stress, resulting in reduced assimilates and, thus, performance losses [25]. Because chlorophyll and carotenoids are always bound to proteins, fertilizer application delivers the nitrogen required by photosynthetic pigments and plant cell proteins, leading to an enhancement in the amount of these pigments in crops [25,26]. As a result of this increase, crop productivity rises.
Darakeh et al. [21] maintain that the leaf RWC of dragon's head diminishes as water stress increases. The decrease in tissue turgor and leaf RWC can be the first effect of water-deficit stress, which can naturally influence the development, growth, and ultimate size of cells [27]. Organic fertilizer enhances water uptake in the host plant by altering root architecture and spreading the plant's root system [28]. Indeed, organic fertilizers can alleviate the negative influences of water-deficit stress on plants by enhancing the leaf moisture potential, the transpiration rate, the photosynthetic efficiency, and the rate of CO2 use. Moreover, they can promote the absorption of nutrients, enhancing growth and plant production [28].
Under rainfed and supplemental irrigation conditions, the different sources of fertilizers enhanced antioxidant enzyme activity. In agreement with the current findings, Ebrahimi et al. [29] reported improved antioxidant enzyme activity in eggplants under deficit irrigation when treated with organic fertilizers. The low CAT activity, particularly under water-deficit stress, may also be attributed to its inactivation. Moreover, as noted by Hameed et al. [30], such a decrease in CAT activity could result from the inhibition of photorespiration and photosynthesis under water-deficit stress. Figure 3 shows that APX activity decreased remarkably in plants grown under rainfed conditions. Drought tolerance is associated mainly with a plant's ability to suitably modulate antioxidant production, supporting the current results [28]. Previous studies have indicated that during water-deficit stress, excessive concentrations of H2O2 may inhibit or down-regulate such antioxidant enzymes [5,7]. Therefore, this suggests that bio-organic fertilizers under rainfed conditions could lessen ROS damage in plants. In particular, in inoculated plants, SOD activity converts O2− to H2O2 [7]. Under rainfed conditions, SOD activity improves considerably in plants treated with bio-organic fertilizers such as vermicompost, poultry manure, and animal manure.
The application of different sources of fertilizers raised the content of total flavonoids and total phenols in dragon's head plants under both irrigation systems. In addition, vermicompost fertilizers have been reported to improve antioxidant compound synthesis in peppermint plants under irrigated conditions [31]. Plant flavonoids and phenols are thought to play a significant function as protective compounds against different reactive oxygen species (ROS) and free-radical damage [12]. Generally, biofertilizers could change the metabolite profile through gibberellins or cytokinins [21]. This enhancement can depend on greater nutrient availability and uptake by plants following the decrease in pH after applying biofertilizers [32]. Furthermore, it was found that fertilizer treatments were efficient in DPPH radical suppression, with the highest DPPH antioxidant activity obtained using vermicompost [21,33]. As noted by Kumar et al. [34], free radicals, such as superoxide, can oxidize numerous metabolic pathways and harm the physiological function of the plant. Such free radicals can be disposed of using antioxidant compounds that function as free radical scavengers [33]. It is worth mentioning that most of the antioxidant compounds that can counteract and scavenge ROS appear (in plant extracts) in the polar phase [34]. Breaking the chain reactions of such free radicals is a primary strategy at low concentrations of phenolic compounds, metabolites, or other antioxidant compounds. Such phenolic compounds appear to be active plant substances that execute this function [35]. The antioxidant characteristics of dragon's head were studied using organic fertilizers such as vermicompost and poultry and animal manure, and it was shown that organic fertilizers enhanced the DPPH radical scavenging percentage, total phenol content, and chain-breaking activity in comparison with the control in both irrigation regimes. Such increases may be attributed to the functional role of bio-organic fertilizers in enhancing antioxidant compounds that inhibit the ROS associated with oxidative stress. The application of bio-organic fertilizers under irrigation conditions subsequently contributed to an increased rate of net photosynthesis in plants and an increased activity of the enzymes implicated in the biosynthesis of starch and protein and in the synthesis of secondary metabolites [17,31]. Because carbohydrates are the building blocks required to biosynthesize phenolic compounds, an enhancement in their concentration increases the substrate available for phenolic compounds. This increment may be related to the assignment of additional carbon to the shikimate route and may contribute significantly to the synthesis of flavonoids and phenols [31].
As the water deficit increases, nutrient availability usually decreases [36]. Therefore, given the lower nutrient content of the seeds under rainfed conditions, it can be concluded that nutrient uptake potential was low in rainfed plants due to the rise in water deficit. Such results align with the observation made by Kulak et al. [37] that nutrient solubility decreases with declining soil moisture. Furthermore, diminished moisture absorption reduces transpiration and photosynthesis from a physiological standpoint [8]. Additionally, active mobilization processes are disrupted under such conditions to conserve biological energy. All these mechanisms result in a substantial decrease in root assimilation capacity, leading to a subsequent reduction in nutrient absorption [38]. It has been previously noted that organic and biological fertilizers enhance plant development and growth, nutrient absorption, and photosynthetic efficiency considerably due to the increased activity of alkaline phosphatase and acid phosphatase [39]. The application of bio-organic fertilizers impacts physiological processes by promoting enzymes and the transfer of photosynthetic products, as well as by stimulating the division and elongation of cells, which lead to increased growth and mineral content in leaves [8,17].
In this study, applying different fertilizer sources improved the biological and grain yields of dragon's head plants compared to those not subjected to fertilization under both irrigation systems. Supplemental irrigation at either the pod-filling or flowering phase of cumin improved biological yield by positively affecting plant height and the growth of extra branches [11]. Mirzamohammadi et al. [31] reported that vermicompost application improved the development and production of peppermint plants under water-shortage stress by increasing nutrient accessibility and absorption. Rahimi et al. [17] reported that applying bio-organic fertilizers improved the efficacy of photosynthesis and enhanced the growth of Syrian cephalaria in rainfed conditions. Hosseinzadeh et al. [18] reported an enhancement in the biological and grain yield of purslane following the application of animal manure and biofertilizer. The authors attributed such results to the positive effect exerted by the treatments on vegetative growth, chlorophyll synthesis, and photosynthetic capacity, especially under water-shortage stress [29]. The beneficial influence of manure and vermicompost may be related to increased soil organic matter content and a regulated availability of nutrients in the agricultural soil, which directly influence plant photosynthesis and vegetative growth [17].
Given that fixed oil yield is the product of fixed oil percentage and grain yield, any factor that affects these two traits can affect fixed oil yield. Similar findings indicating a reduced fixed oil percentage under water-deficit stress were reported by Heydari and Pirzad [6]. Accordingly, the use of bio-organic fertilizers not only increases the leaf water potential, the transpiration rate, the carbon dioxide uptake, and the production of growth stimulants, but also promotes root development, enhancing water uptake during water-shortage stress and increasing the fixed oil percentage of dragon's head [6,21]. Rahimi et al. [17] noted that using vermicompost and animal manure improved the uptake of water and nutrients in Syrian cephalaria plants, leading to an increase in seed fixed oil percentage and fixed oil yield. It has been reported that organic and biological fertilizers positively impact photosynthetic products, mainly carbohydrates. The export of carbohydrates from photosynthesizing leaves to seeds may also have increased the fixed oil percentage, given that carbohydrates are precursors of fatty acid biosynthesis pathways [21,37].
Hosseinzadeh et al. [18] reported that applying organic and biological fertilizers improved the essential oil content and yield of purslane plants under deficit irrigation. Additionally, such treatments may enhance the extent of the essential oil-producing glands in the flowers and leaves of the dragon's head plant under rainfed conditions. Essential oils are terpenoids, requiring ATP, acetyl-CoA, and NADPH for their synthesis [40]. Therefore, essential oil biosynthesis is entirely dependent on the supply of plant mineral nutrients [41]. In the current investigation, organic fertilizers such as vermicompost and poultry manure may have enhanced nutrient uptake and improved plant water relations compared to control plants, leading to an essential oil increment. It has been stated that bio-organic and chemical fertilizers and vermicompost enhanced the amount and quality of the essential oils produced in coriander and Moldavian balm [11,41]. Under water-scarcity stress, the yield of essential oils may increase or decrease, depending on the balance between the raised essential oil percentage and the diminished grain yield [40]. The employment of organic fertilizers such as vermicompost represents an environmentally friendly strategy to improve the sustainable production of bioactive molecules in dragon's head plants.
Experimental Design
The experiment was carried out at the research farm of the Medicinal Plants and Drugs Research Institute of Urmia University, located in West Azerbaijan, Iran (44°58′12.42″ E, 37°39′24.82″ N; 1338 m elevation). The experiment was performed in a factorial arrangement based on a randomized complete block design with six different fertilizer sources (animal manure, vermicompost, poultry manure, biofertilizer, chemical fertilizer, and control) and an irrigation regime at two levels (rainfed and supplemental irrigation), with three replicates, in the 2019 growing season. The plants were planted with a row-to-row distance of 40 cm and an intra-row seed spacing of 3 cm. Seeds with 99% purity and 98% vigor were planted in mid-March. At the beginning of the experiment, the soil at the upper depth of 0-0.3 m had a loam-clay texture with an N content of 0.54%, a pH of 7.9, and K and P contents of 407 and 10.1 mg kg−1, respectively. The climatic conditions of the experimental site were characterized by an average temperature for the planting period (March-July) of 15.7 °C and a total rainfall of 146.7 mm. Most of the rainfall was well distributed for the early vegetative growth of the dragon's head plants.
Plant Material
Before sowing, dragon's head seeds were inoculated in the shade with a bacterial population of 10^8 CFU (colony-forming units) mL−1 of the nitrogen-fixing bacterium Azotobacter chroococcum and with Fertile Phosphate-2 (containing two types of phosphate-solubilizing bacteria, Bacillus lentus and Pseudomonas putida) at a rate of 2 L ha−1 [42]. Then, the seeds were dried in the shade for half an hour before sowing. Chemical fertilizer treatments were applied based on plant needs and soil analysis results. First, 120 kg ha−1 of triple superphosphate and 60 kg ha−1 of urea fertilizers were added at the planting stage. At the first stage of stem formation, the plants were supplied with 60 kg ha−1 of urea fertilizer as a top dressing. The vermicompost (at 15 t ha−1), animal manure (at 20 t ha−1), and poultry manure (at 10 t ha−1) were added to the selected plots during land preparation practices and thoroughly mixed with the soil. The characteristics of the organic fertilizers used in the study are displayed in Table 4.
Table 4. Some characteristics of poultry and animal manure and vermicompost used in the experiment.
At flower initiation, supplemental irrigation was supplied using the method of Benami and Ofen [43]. The amount of irrigation water needed to reach field capacity (FC) was 600 m3 ha−1. Leaves were sampled at the end of the flowering stage to determine physiological parameters. Fresh leaf samples were covered with aluminum foil, placed in liquid nitrogen tanks, and then stored in a freezer at −80 °C. All cultivation practices were carried out uniformly across all experimental treatments. All experimental treatments were harvested individually after reaching full maturity (in mid-July), recording grain yield and biological yield from 10 plants per plot. The samples were dried in an oven for 24 h at 70 °C before being analyzed.
Photosynthetic Pigment Content
To determine the photosynthetic pigment content, namely chlorophyll a and b and carotenoids, 0.5 g of fresh leaves was ground in liquid nitrogen, blended with 10 mL of 80% acetone, and separated into supernatant and solid parts of the leaf sample (precipitate or pellet) by centrifugation at 4000 rpm for 15 min. Chlorophyll a and b and carotenoid content was measured using a spectrophotometer at the full flowering stage [44].
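The equations of reference [44] are not reproduced in the text; assuming the widely used Arnon-type equations for 80% acetone extracts (an assumption about the cited method, not a quotation from [44]), the chlorophyll concentrations (in mg L−1 of extract, with A denoting absorbance at the indicated wavelength) would be computed as:

\[
\mathrm{Chl}\,a = 12.7\,A_{663} - 2.69\,A_{645}, \qquad
\mathrm{Chl}\,b = 22.9\,A_{645} - 4.68\,A_{663}, \qquad
\mathrm{Chl}\,(a{+}b) = 20.2\,A_{645} + 8.02\,A_{663}
\]

Carotenoids are commonly estimated in the same extracts from the absorbance near 470 nm using Lichtenthaler-type equations.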
Relative Water Content
Relative water content of leaf samples was measured following the method proposed by Khosravi et al. [45] (Equation (1)).
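Assuming the conventional gravimetric definition of RWC (with FW, TW, and DW denoting fresh, turgid, and dry leaf weights; the symbol names are ours, not those of [45]), Equation (1) reads:

\[
\mathrm{RWC}\,(\%) = \frac{FW - DW}{TW - DW} \times 100
\]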
Antioxidant Enzyme Extraction and Assays
For quantification of antioxidant enzyme activity, fresh material (100 mg) was ground in 2 mL of 0.1 M KH2PO4 containing 5% polyvinylpyrrolidone (PVP) and buffered at pH 6. Thereafter, the extracts were centrifuged at 3 °C for 30 min at 15,000 rpm, and the activity of the enzymes was estimated from the clear supernatant [46].
Catalase (CAT)
Catalase (CAT) activity was determined at 240 nm based on the variation in the concentration of hydrogen peroxide (H2O2). In this case, the reaction mixture contained 1.9 mL of 50 mM K3PO4 buffered at pH 7.0. Enzymatic activity was then read over 60 s per mg of protein based on the absorption variations [47].
Superoxide Dismutase (SOD)
Superoxide dismutase (SOD) activity was assessed at 560 nm by monitoring the inhibition of the photochemical reduction of nitroblue tetrazolium (NBT), as noted by Beyer and Fridovich [48]. In this study, one unit of SOD was taken as the quantity of enzyme that inhibits the photoreduction of NBT by 50%.
Ascorbate Peroxidase (APX)
By employing the Nakano and Asada method [49], ascorbate peroxidase (APX) activity was measured with a reaction mixture containing 1 mL of 0.5 mM ascorbic acid, 1 mL of 100 mM K3PO4 buffered at pH 7, 100 µL of enzyme extract, and 0.1 mL of 0.1 mM H2O2. The absorption was then read at 290 nm using an extinction coefficient of 2.8 mM−1 cm−1.
Antioxidant Compounds
Fresh leaves of dragon's head were cut into small pieces, dried in the shade at room temperature, and powdered. The methanolic extraction was performed by adding 25 mL of solvent to a 2 g sample and shaking for 60 min at 1000 rpm. Then, the extract was passed through Whatman filter paper No. 1 (Whatman Ltd., Maidstone, UK). The solutions were then stored at 4 °C until the experiments. Light exposure was avoided during the extraction process [50].
Total Phenolic Content (TPC)
The total phenolic content was evaluated using the Folin-Ciocalteu technique [51,52]. A total of 1600 µL of distilled water and 10 µL of methanolic extract were mixed and treated with 200 µL of Folin-Ciocalteu reagent (10% v/v, prepared in distilled water) for 5 min at 25 °C. Thereafter, 200 µL of Na2CO3 (7.5%) was added, and the mixture was kept at 25 °C in the dark for 30 min. For quantitative examination of TPC, the sample's absorbance was measured using a UV/Visible spectrophotometer (DB-20/DB-20S) at 760 nm. TPC was calculated as mg of gallic acid (3,4,5-trihydroxybenzoic acid) g−1 dry weight using gallic acid as an external standard.
Total Flavonoid Content (TFC)
The aluminum chloride-based colorimetric method was used to evaluate the total flavonoid content of the methanolic extracts. In brief, 150 µL of sodium nitrite (5% w/v) was mixed with 30 µL of the methanolic extract and, after waiting for 5 min, 3 mL of aluminum chloride hexahydrate (10% w/v) was added and the mixture was incubated for 5 min. Thereafter, 1 mL of NaOH (1.0 M) was added, followed by dilution of the mixture with distilled water to the mark. After incubation in the dark for 30 min at 25 °C, the solution's absorbance was measured in a spectrophotometer at 510 nm. The external standard for TFC quantification was quercetin (QE), and TFC was reported as mg QE g−1 dry weight [51,52].

DPPH (2,2-diphenyl-1-picrylhydrazyl) Radical Scavenging Activity

The DPPH radical scavenging activity of the samples was measured using the colorimetric process outlined by Brand-Williams et al. [53]. A total of 2.0 mL of DPPH solution was mixed with 15 µL of methanolic extract, and the mixture was incubated in the dark for 30 min at 20 °C. The solution's absorbance was evaluated at 517 nm. The DPPH inhibition was computed using Equation (2).
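Assuming the standard percentage-inhibition form used with this assay, Equation (2) is:

\[
\mathrm{DPPH\ inhibition}\,(\%) = \frac{Ab_{\mathrm{control}} - Ab_{\mathrm{sample}}}{Ab_{\mathrm{control}}} \times 100
\]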
where Ab_control and Ab_sample are the absorbances of the control and the sample, respectively.
Superoxide Radical Scavenging Activity
For measurement of superoxide radical scavenging activity [54], 1 mL of extract was added to 9 mL of 5 mM Tris-HCl buffer (pH 8.2). Thereafter, 40 µL of 4.5 mM pyrogallol was added to the same mixture. After shaking for 3 min, the solution's absorption at 420 nm was evaluated using a spectrophotometer. Superoxide radical scavenging activity was computed, as indicated in Equation (3), by comparing the degree of oxidation of the test group with that of the control.
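A sketch of Equation (3), assuming the pyrogallol autoxidation rates of the control and the test sample (change in A420 per unit time; symbol names are ours) are compared:

\[
\mathrm{SRS}\,(\%) = \frac{\Delta A_{\mathrm{control}} - \Delta A_{\mathrm{sample}}}{\Delta A_{\mathrm{control}}} \times 100
\]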
Chain-Breaking Capacity
Chain-breaking capacity was measured using the DPPH reagent and the method of Brand-Williams et al. [53]. A total of 10 µL of the extract was combined with 1.9 mL of DPPH methanolic solution (0.004%). The absorbances at 0 min and after 30 min of incubation at room temperature in darkness were evaluated at 515 nm in a spectrophotometer. The reaction rate was calculated according to the formula in Equation (4).
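Assuming first-order decay of the DPPH absorbance (a kinetic assumption; the exact model of [53] may differ), Equation (4) can be sketched as:

\[
k = \frac{1}{t}\,\ln\!\left(\frac{A_{bs0}}{A_{bs}}\right)
\]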
where A_bs0 is the initial absorbance, A_bs is the absorbance at increasing time t, and the reaction rate is expressed as k. Antioxidant activity was reported as −A_bs × 10−3/min/mg extract.
Nutrients of Potassium and Phosphorus
To determine the nutrient content of the leaf samples, dried leaves were milled and ashed by combustion (4 h at 500 °C). The leaf ash (5 mg) was digested in 1 mL of 2 N HCl, and the extracts obtained were filtered through Whatman grade 42 filter paper. The phosphorus (P) content was then determined colorimetrically using the vanado-molybdate method. The method is based on observing the yellow color of the unreduced vanado-molybdo-phosphoric heteropoly acid suspended in HNO3 medium. The color intensity was determined at 470 nm using a Spectronic 20 colorimeter [55,56]. The amount of potassium (K) was measured by a flame photometer [55,56].
Seed and Biological Yield Characteristics
All experimental treatments were harvested individually after reaching full maturity, recording grain yield and biological yield from 10 plants per plot. The plant samples were oven-dried for two days at 72 °C and then reweighed to determine dry weights.
Content of Fixed Oil and Essential Oil
The essential oil content of dragon's head grain was quantified using the Clevenger apparatus [57], whereas the fixed oil content of dragon's head grain was extracted following the Soxhlet technique [58] using a methanol/chloroform organic solvent. Essential oil yield and fixed oil yield were computed using Equations (5) and (6).
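Assuming the usual definition of oil yield as the product of oil content and grain yield (a reconstruction, not the verbatim formulas), Equations (5) and (6) take the form:

\[
\mathrm{EOY} = \frac{\mathrm{EO}\,(\%)}{100} \times \mathrm{GY}, \qquad
\mathrm{FOY} = \frac{\mathrm{FO}\,(\%)}{100} \times \mathrm{GY}
\]

with EOY and FOY in kg ha−1 when the grain yield GY is expressed in kg ha−1.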
Data Analyses
The data generated in this study were analyzed using the SAS 9.1 software. The effects of two independent factors, i.e., different fertilizer applications (F) and irrigation conditions (I), as well as their possible interaction on physiological processes, antioxidant enzyme activity, antioxidant compounds, element content, and plant yield, were assessed by two-way ANOVA. Means were compared by Tukey's HSD at the p < 0.05 level. The graphs were drawn in Excel.
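The analysis was run in SAS 9.1; purely as an illustration, an equivalent two-way factorial ANOVA with Tukey's HSD could be sketched in Python as follows (the file name and the column names "fertilizer", "irrigation", and "grain_yield" are hypothetical):

import pandas as pd
from statsmodels.formula.api import ols
from statsmodels.stats.anova import anova_lm
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("dragons_head.csv")  # hypothetical long-format data file

# Fit the factorial model F x I; C() marks the categorical factors
model = ols("grain_yield ~ C(fertilizer) * C(irrigation)", data=df).fit()
print(anova_lm(model, typ=2))  # Type II sums of squares

# Tukey HSD over the fertilizer-by-irrigation treatment combinations
df["treatment"] = df["fertilizer"] + "/" + df["irrigation"]
print(pairwise_tukeyhsd(df["grain_yield"], df["treatment"], alpha=0.05))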
Conclusions
The evaluation of the physiological and biochemical characteristics of the dragon's head plant subjected to supplemental irrigation conditions and different fertilizer sources showed that the application of vermicompost and poultry and animal manure was more effective compared to chemical fertilizers in improving the grain, fixed oil, and essential oil yields of the dragon's head plant. The best plant performance under supplemental irrigation conditions was achieved through the improvement in nutrient absorption (phosphorus and potassium) and the relative water content, as well as an increase in photosynthetic pigment content, enzyme activity, and the percentage of inhibition of antioxidant radicals.
Such a result was the primary goal of our experiment, given the importance of the use of dragon's head plants for nutraceutical and curative purposes. Our data demonstrated that, even though the experimentation period was just one year, soil fertilization with vermicompost and with poultry and animal manure under both rainfed and supplemental irrigation conditions is strongly recommended to increase the nutraceutical and physiological traits of the dragon's head plant. This practice also effectively reduces the environmental pollution caused by the overuse of chemical fertilizers, contributing to sustainable agricultural goals, and allows the cultivation of the dragon's head plant with low water requirements, reducing water consumption.
Even though further investigations are needed to assess the effects that different fertilizer sources may exert over the long term, our results are promising for the implementation of dragon's head cultivation in arid and semi-arid regions.
|
2023-04-20T15:05:30.218Z
|
2023-04-01T00:00:00.000
|
{
"year": 2023,
"sha1": "e8de4d0790996ff087d437178aec9be9597b6ca0",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2223-7747/12/8/1693/pdf?version=1681809898",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "ad18b2334db7fadd5f2c2dc3f95240b9d6df8867",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": []
}
|
2083657
|
pes2o/s2orc
|
v3-fos-license
|
A Variant in the BACH2 Gene Is Associated With Susceptibility to Autoimmune Addison’s Disease in Humans
Context: Autoimmune Addison's disease (AAD) is a rare but highly heritable condition. The BACH2 protein plays a crucial role in T lymphocyte maturation, and allelic variation in its gene has been associated with a number of autoimmune conditions. Objective: We aimed to determine whether alleles of the rs3757247 single nucleotide polymorphism (SNP) in the BACH2 gene are associated with AAD. Design, Setting, and Patients: This case-control association study was performed in two phases using Taqman chemistry. In the first phase, the rs3757247 SNP was genotyped in 358 UK AAD subjects and 166 local control subjects. Genotype data were also available from 5154 healthy UK controls from the Wellcome Trust (WTCCC2) for comparison. In the second phase, the SNP was genotyped in a validation cohort comprising 317 Norwegian AAD subjects and 365 controls. Results: The frequency of the minor T allele was significantly higher in subjects with AAD from the United Kingdom compared to both the local and WTCCC2 control cohorts (58% vs 45 and 48%, respectively) (local controls, P = 1.1 × 10−4; odds ratio [OR], 1.68; 95% confidence interval [CI], 1.29–2.18; WTCCC2 controls, P = 1.4 × 10−6; OR, 1.44; 95% CI, 1.23–1.69). This finding was replicated in the Norwegian validation cohort (P = .0015; OR, 1.41; 95% CI, 1.14–1.75). Subgroup analysis showed that this association is present in subjects with both isolated AAD (OR, 1.53; 95% CI, 1.22–1.92) and autoimmune polyglandular syndrome type 2 (OR, 1.37; 95% CI, 1.12–1.69) in the UK cohort, and with autoimmune polyglandular syndrome type 2 in the Norwegian cohort (OR, 1.58; 95% CI, 1.22–2.06). Conclusion: We have demonstrated, for the first time, that allelic variability at the BACH2 locus is associated with susceptibility to AAD. Given its association with multiple autoimmune conditions, BACH2 can be considered a "universal" autoimmune susceptibility locus.
of regulatory T cells (1). It is therefore a key modulator of inflammation, keeping immune system activation in check and controlling the balance between immunity and tolerance. The BACH2 gene is located on chromosome 6q15. In murine models, disruption of the BACH2 gene results in mice that are phenotypically normal at birth, but which develop fatal autoimmune disease in the first months of life (1). In humans, a number of common variants in linkage disequilibrium (LD) at the BACH2 locus have been consistently associated with autoimmune conditions, including type 1 diabetes (2), celiac disease (3,4), autoimmune thyroid disease (5,6), Crohn's disease (7), multiple sclerosis (8), vitiligo (9), and rheumatoid arthritis (10). Variants in the BACH2 gene have yet to be studied in autoimmune Addison's disease (AAD), which is a rare but highly heritable, organ-specific autoimmune condition with a prevalence in the European Caucasian population of 110–220 cases per million (11,12).
We aimed to determine whether the intronic single nucleotide polymorphism (SNP) rs3757247 is associated with AAD in two independent cohorts from the United Kingdom and Norway. This SNP was selected for investigation because it has previously been associated with type 1 diabetes and generalized vitiligo, with odds ratios of 1.13 and 1.21 for the T allele conferring disease susceptibility, respectively (9,13), and it is in tight LD (r2 > 0.93) with SNPs rs11755527 and rs619192, which have been associated with type 1 diabetes (2), and rs72928038, which has been associated with rheumatoid arthritis (10) (Figure 1). In addition, it is in moderate LD (r2 = 0.4) with rs10806425, which has been associated with celiac disease (3).
Subjects and Methods
This study was carried out with approval of the Leeds East ethics committee (Ref. 05/Q1206/144) and the Regional Committee for Medical and Health Research Ethics (Ref. 2013/1504/REK vest).
The UK AAD cohort comprised 365 Caucasian individuals, of whom 285 were female. A diagnosis of AAD was confirmed by the subjects having either a low basal cortisol level with a high ACTH level or a subnormal response to the short synacthen test (250 µg parenteral synthetic ACTH(1-24)). Patients with primary adrenal failure due to adrenal gland infiltration or infection, with secondary adrenal failure, or with autoimmune polyglandular syndrome type 1 (on the clinical grounds of mucocutaneous candidiasis, hypoparathyroidism, and/or ectodermal dystrophy) were excluded. The median age at diagnosis was 39 years (range, 10–83 years). In 197 (54%) patients, an additional autoimmune condition (termed autoimmune polyglandular syndrome type 2 [APS2]) was present, whereas 168 (46%) patients had AAD alone (termed isolated AAD [iAAD]). For comparison, a local matched control cohort comprising 183 individuals was available. In addition, genotype data from 5159 healthy individuals were available through the Wellcome Trust Case Control Consortium 2 (WTCCC2), as previously described (14).
We used 330 Norwegian AAD subjects, of whom 215 were female, and 384 matched controls as an independent replication cohort. The diagnosis of AAD was made using the criteria described above.
Genomic DNA was extracted from venous blood for each subject in the AAD and local control cohorts. The rs3757247 SNP was then genotyped using Taqman chemistry (Life Technologies, Thermo Fisher Scientific) according to the manufacturer's instructions. Ten percent of the samples were genotyped in duplicate to ensure accuracy of results. Genotype results were available for 358 of 365 UK AAD subjects (genotyping call rate, 98%) and for 166 of 183 local UK controls (genotyping call rate, 91%). Genotyping call rates in the Norwegian AAD subjects and the Norwegian control cohorts were 96% (317 of 330) and 95% (365 of 384), respectively. Genotypes were checked for Hardy-Weinberg equilibrium in both control cohorts (threshold P > .05) before analysis.
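As an illustration of the HWE check for a biallelic SNP (a Python sketch with placeholder genotype counts, not the study's data; PLINK performs an analogous test):

from scipy.stats import chi2

def hwe_chi2_p(n_CC: int, n_CT: int, n_TT: int) -> float:
    """One-df chi-square goodness-of-fit test for Hardy-Weinberg equilibrium."""
    n = n_CC + n_CT + n_TT
    p = (2 * n_CC + n_CT) / (2 * n)              # frequency of the C allele
    q = 1.0 - p                                  # frequency of the T allele
    expected = (p * p * n, 2 * p * q * n, q * q * n)
    observed = (n_CC, n_CT, n_TT)
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected))
    return chi2.sf(stat, df=1)

print(hwe_chi2_p(50, 80, 36))   # retain the SNP when p > .05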
Statistical analysis was performed using PLINK, a freely available association analysis engine (15). A heterogeneity analysis was performed to compare genotype and allele frequencies between the UK control cohort and the WTCCC2 controls to ensure that they were comparable. Association analysis was then performed between case and control cohorts, followed by a subgroup analysis comparing genotype and allele frequencies between the iAAD and APS2 subgroups from the United Kingdom compared to the WTCCC2 controls, and between the iAAD and APS2 subgroups from Norway and the Norwegian controls.
Results
The genotype frequencies in the control cohorts were in Hardy-Weinberg equilibrium (P > .05). There was no significant difference in genotype and allele frequencies between the UK local control cohort and the WTCCC2 controls (P = .24 and .21, respectively), and therefore these control cohorts were deemed comparable (Table 1).
Comparing the UK AAD cohort to the local controls, the frequency of the TT genotype was higher, whereas the frequency of the CC genotype was lower in AAD subjects (P = .0009) (Table 2). The minor T allele accounted for 413 alleles (58%) in the AAD cohort compared to 149 alleles (45%) in the control cohort (P = .00011; odds ratio [OR], 1.68; 95% confidence interval [CI], 1.29–2.18) (Table 2).
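As a sanity check, the allele-based OR and its Woolf (log) confidence interval can be reproduced from the reported allele counts (413 T / 303 C alleles in cases, 149 T / 183 C alleles in local controls); this is an illustrative Python sketch, not the PLINK analysis actually used:

import math

def allele_or_ci(a_case, b_case, a_ctrl, b_ctrl, z=1.96):
    """Odds ratio and Woolf 95% CI from minor/major allele counts."""
    or_ = (a_case * b_ctrl) / (b_case * a_ctrl)
    se = math.sqrt(1 / a_case + 1 / b_case + 1 / a_ctrl + 1 / b_ctrl)
    lo, hi = (math.exp(math.log(or_) + s * z * se) for s in (-1, 1))
    return or_, lo, hi

print(allele_or_ci(413, 303, 149, 183))  # ~ (1.68, 1.29, 2.18), as reported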
Comparing the AAD cohort to the larger WTCCC2 controls, a similar association was seen. The frequency of the TT genotype was higher, whereas the frequency of the CC genotype was lower in AAD cases (P = 9.98 × 10−7) (Table 2). The minor T allele accounted for 413 alleles (58%) in the AAD cohort compared to 4986 (48%) in the control cohort (P = 1.4 × 10−6; OR, 1.44; 95% CI, 1.23–1.69) (Table 2).
Discussion
This is the first report of a BACH2 variant being associated with susceptibility to AAD. The association observed in the UK cohort was replicated in an independent cohort from Norway. BACH2 variants have previously been associated with the more common autoimmune endocrinopathies, type 1 diabetes and autoimmune thyroid disease. We have found that the intronic SNP, rs3757247, is associated with both iAAD and APS2 in the UK cohort, suggesting that this variant is independently associated with AAD in the UK population and that the association observed is not simply due to the fact that this cohort is enriched for those with additional autoimmune comorbidities. In the Norwegian cohort, however, there was association with the cohort as a whole and, in the subgroup analysis, with APS2. This suggests some genetic heterogeneity between these two European cohorts, as has previously been reported (16).
To increase the power of this study, the UK AAD cohort was compared to the genotype data available from the WTCCC2 controls. The samples included in the WTCCC2 were genotyped using a SNP array. To control for the use of different genotyping platforms and to ensure that the WTCCC2 results were comparable to those derived from local controls, we compared genotypes and allele frequencies derived from a local UK control cohort to those from the WTCCC2 controls. This analysis demonstrated no significant differences between those two control groups.
How BACH2 variants influence autoimmune disease susceptibility is not yet understood. However, possible mechanisms can be proposed on the basis that the BACH2 protein has a number of crucial functions in modulating inflammation and immunity. The BACH2 protein is known to play an important role in regulating lymphocyte differentiation. It has been shown to repress genes that are important for both Th1 and Th2 lineage differentiation, including GATA3. GATA3 is the master Th2 lineage transcription regulator, and polymorphisms in the GATA3 gene have previously been associated with AAD (16-18). Indirect evidence, although limited, supports the role of IFN-γ, the Th1-derived cytokine, in adrenocortical destruction in AAD. The expression of major histocompatibility complex class II molecules on adrenocortical cells has been shown to be highly up-regulated during the active phase of AAD; this effect is likely mediated by IFN-γ (19,20). In addition, T cells derived from AAD patients demonstrate higher IFN-γ production after stimulation with 21-hydroxylase compared with healthy controls (21). Our finding that genetic variability in the BACH2 gene is associated with susceptibility for AAD further implicates this immune pathway in the etiopathogenesis of this condition.
This study has identified that a BACH2 polymorphism is associated with AAD in two independent European cohorts. The rs3757247 SNP is a common intronic variant, has no known functional consequences, and therefore is not causative of autoimmune disease. It is likely that this SNP is in LD with a causative variant located elsewhere in the locus.
The role of rare variants, with minor allele frequencies of <0.01, in complex traits has been increasingly recognized (22). A number of rare variants have been reported in the BACH2 gene, and some of these are predicted to be potentially deleterious. A review of SNP data available from the 1000 Genomes project (23), visualized on the freely available Ensembl genome browser (release 85) (24), has shown that five rare missense variants at the BACH2 locus are predicted to be deleterious by both SIFT and PolyPhen analysis. These are located in a 20-kb region of the BACH2 gene, with three located in exon 7 and one located in each of exons 8 and 9. These rare missense variants are located over 250 kb from the intronic SNP analyzed in this study and are not predicted to be in tight LD with rs3757247. A single-point analysis of the rare variants in the BACH2 gene in our cohort would be underpowered because of the low copy number of the minor allele; however, they warrant further investigation. The exact mechanism by which BACH2 variants influence autoimmune disease risk requires further research, and a systematic analysis of variation in the region is now required in large cohorts of patients. Our results add to the growing literature demonstrating that the BACH2 protein is a crucial regulator of both immune function and dysfunction. Autoimmune diseases are known to share a common genetic architecture, with some loci, such as CTLA4, PTPN22, and the HLA, conferring susceptibility to multiple autoimmune diseases. The finding of association of a BACH2 variant with a further autoimmune condition, AAD, supports the hypothesis that variation in BACH2 may be a permissive immune system factor that is implicated in many or most organ-specific autoimmune conditions.
|
2018-04-03T03:18:21.434Z
|
2016-09-28T00:00:00.000
|
{
"year": 2016,
"sha1": "1d86f4e58a01aaf06f088cedfd097707d13c0f2c",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/jcem/article-pdf/101/11/3865/20287771/jcem3865.pdf",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "4fc033d9cbf7d5082c9431b9acfdba5f8b61f901",
"s2fieldsofstudy": [
"Biology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
53569526
|
pes2o/s2orc
|
v3-fos-license
|
Performance Evaluation of a Quality of Service Control Scheme in Multi-Hop WBAN Based on IEEE 802.15.6
The performance of a quality of service (QoS) control scheme in a multi-hop wireless body area network (WBAN) based on IEEE Std. 802.15.6 is evaluated. In medical Internet of Things systems, WBANs are an important technology. In a previous study, an optimal quality of service control scheme that employs a multiplexing layer for priority scheduling and a decomposable error control coding scheme for WBANs was proposed. However, the two-hop extension supported by IEEE Std. 802.15.6 was not considered. Here, the two-hop extension is applied. Then, the packet error ratio, number of transmissions, and energy efficiency of our previously proposed system are compared to a standard scheme under several conditions. Also, novel evaluations based on communication distance are conducted. Numerical results demonstrate that our proposed scheme, in which coding rates change relative to channel conditions, outperforms standard schemes in many aspects. Furthermore, those systems show the best performance when the communication distance of the first hop equals that of the second hop, and this result is clarified theoretically.
Introduction
Health monitoring systems that employ wearable vital sign sensors and wireless communication (referred to as medical Internet of Things (m-IoT) systems) have received significant attention recently [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16]. Wireless body area networks (WBAN) are an important key technology in the m-IoT field. WBAN sensors can sample, monitor, process, and communicate a significant amount of various vital data [14]. In addition, they can provide real-time feedback [14]. It is expected that WBANs will be implemented to monitor patient health. In particular, they are expected to monitor elderly people in hospitals, nursing homes, and their own homes [15].
Recently, there have been various attempts to develop standards for WBAN systems [17,18]. In 2011, IEEE Std. 802.15.4a was issued as a standard for wireless personal area networks (WPAN) assuming a wide range of applications [19]. However, IEEE Std. 802.15.4a has not been optimized for medical and healthcare applications. Therefore, IEEE Std. 802.15.6 was published as a standard specialized mainly for implant and wearable WBAN assuming medical-healthcare uses (but not limited to them). The main contributions of this paper are as follows:

1. The performance of our previously proposed QoS control scheme is improved by appropriately determining the coding rate using channel estimation. With this improvement, data packets can be relayed to the hub with a small number of transmissions even when the maximum number of retransmissions is limited by a two-hop extension.

2. Novel performance evaluations are conducted as a function of the distance between transmitter and receiver assuming a real environment, which were not considered in our previous work [20-22]. Through these evaluations, we confirm that our proposed QoS control method is effective even in a realistic environment. It is also shown that the two-hop extension outperforms the one-hop case when the distance between transmitter and receiver in each hop is set appropriately. In addition, this paper contributes to theoretically clarifying the relevant optimum setting.
The remainder of this paper is organized as follows. Section 2 introduces the related research in the field of this manuscript. In Section 3, the relevant parts of IEEE Std. 802.15.6 are explained. Section 4 describes our previously proposed error control method. The system model is described in Section 5. Computer-simulated performance evaluations and a theoretical analysis of the results are presented in Section 6. Section 7 concludes the paper.
Related State of the Art Research
This section introduces the latest research related to this manuscript. WBANs extended to multi-hop communication have been studied to increase their lifetime. Many studies on multi-hop WBANs have focused on energy-efficient MAC or routing protocols [23][24][25][26][27][28][29][30]. For example, previous studies have focused on a cross-layer technique that includes a MAC layer to reduce delay and improve energy efficiency [23][24][25]. In [23], a network tree in a distributed manner has been used to guarantee collision free access to the medium and to route data towards the sink. Computer simulation results have shown that the protocol offers low delay and good resilience to mobility. The proposed solution of [24] extended the cooperation at the MAC layer to a cross-layered gradient based routing solution that allows interaction between WBAN and environmental sensors to ensure data delivery from WBANs to a distant gateway. The MAC layer of [25] provided the network layer with local information about the quality of on-body links to enable the WBAN to identify the most reliable links in a distributed manner. Results of [25] have shown the effectiveness of the proposed design which takes advantage of dynamic scheduling and multi-hop relays as warranted by the link conditions. An energy-efficient and low-delay relay selection method for multi-hop WBANs has also been discussed [26,27]. In [26], a game-theory approach has been proposed to investigate the problem of relay selection and power control with QoS constraints in multiple-access WBANs. Reference [27] considered adaptive power control and routing in multi-hop WBANs, and developed a low overhead energy-efficient routing scheme. The proposed routing protocol has established an energy-efficient end-to-end path as well as adaptively choosing transmission power for sensor nodes. Path loss models have also been considered to evaluate the energy efficiency of multi-hop WBAN topologies in [28]. Ref. [28] has discussed the propagation channel between two half-wavelength dipoles at 2.45 GHz placed near a human body, and then presented an application for cross-layer design to optimize the energy consumption of different topologies. In addition, a security scheme based on PHY characteristics for multi-hop WBANs has been described [29,30]. The game-theory framework of [29] was proposed, wherein wearable sensor devices interact in the presence of wiretappers and under fading channel conditions to find the most secure multi-hop path to the hub, while adhering to the end-to-end delay requirements. Reference [30] proposed MASK-BAN, a lightweight fast authenticated secret key extraction scheme for intra-WBAN communication. However, those studies did not focus on an error control scheme.
On the other hand, analytical expressions for energy efficiency and packet error ratio (PER) have been formulated for two-way relay cooperative communication in [31], which is similar to our work. Then, [31] introduced a hybrid system, which allows switching between the proposed two-way relay, multi-stage one-way relay, and direct link to maximize energy efficiency, and a joint network-channel coding scheme using convolutional and BCH codes. However, [31] did not consider a hybrid automatic repeat request (ARQ), which is applied in our method. In addition, path loss, shadowing, and additive white Gaussian noise (AWGN) have been taken into account in [31], but multipath and flat fading have not. On the other hand, this manuscript considers all of them.
UWB PHY
The IEEE Std. 802.15.6 defines three PHY layers: narrowband (NB), UWB, and human body communications (HBC). This study focused on the impulse radio ultra-wideband PHY layer (IR-UWB PHY), which offers high-data-rate transmission, low energy consumption, powerful multipath resolution, good coexistence with other wireless communication systems, and so on.
The UWB PHY frame format is formed by the synchronization header (SHR), the physical layer header (PHR), and the physical layer service data unit (PSDU), respectively, as shown in Figure 1 [17]. The PSDU contains the MAC protocol data unit (MPDU) and the BCH parity bits. The PSDU can therefore be regarded as the payload. The information contained in the PHR includes the data rate of the PSDU and the length of the MAC frame body, while the SHR contains the preamble used for timing synchronization, packet detection, and other purposes, and the start-of-frame delimiter (SFD) for frame synchronization. This research mainly focused on the performance of the payload (PSDU).
Figure 1. Ultra-wideband physical layer frame format. SFD, SHR, PHR, and PSDU are abbreviations of start-of-frame delimiter, synchronization header, physical layer header, and physical layer service data unit, respectively.
Two-Hop Extension
An example smart health care monitoring system that includes a WBAN with a two-hop extended star network topology is shown in Figure 2. As can be seen, vital information obtained by WBAN nodes is displayed on a monitoring unit through a WBAN hub.
In IEEE Std. 802.15.6, a node and a hub can utilize the two-hop extension to exchange frames through another node, except in the medical implant communication service band. In Figure 2, the terminal, intermediate nodes, and the hub function as relayed nodes, relaying nodes, and the target hub of a relayed node, respectively. Here, a relayed node or the target hub can initiate a two-hop extension at times determined by the initiator. Note that a relaying node can exchange its frames with the hub directly.
A relayed node shall not send its frames to a relaying node in contended allocations provided by the target hub [17]. Thus, a scheduled access phase can only be utilized in the case of a two-hop extension. Therefore, the managed access phase as defined in IEEE Std. 802.15.6 is only used in this study.
Previously Proposed Error Control Method
In a previous study, an optimal QoS control scheme that employs decomposable error control coding and Weldon's ARQ scheme was proposed [20][21][22]. As an example of the decomposable code, punctured convolutional code (constraint length K = 3; coding rates r c are 8/9 to 1/16) is used. The r c = 8/9 punctured code patterns (codeword 1 and codeword 1') are generated from a convolutional code whose generator polynomial is [5,7] and the coding rate is r c = 1/2 as shown in Figure 3. 1. Firstly, the information bit sequence is encoded via the punctured convolutional code, and codeword 1 is transmitted. 2. If bit errors are detected after decoding codeword 1, the receiver stores the transmitted codeword 1, and the transmitter re-sends the sub-codeword of codeword 1′ times if 1 ≤ ≤ 3. At the receiver, the received sub-codeword and stored codeword are combined, and the reconstructed codeword is decoded. 3. After the third retransmission, codeword 1 is sent times and combined with a buffered codeword at the receiver. If bit errors are detected after decoding reconstructed codeword, the codeword 1 is buffered in the receiver, and codeword 1′ is transmitted times and combined with a stored codeword. 4. After that, codeword 1 and 1′ are sent alternately times and stored. Then, a receiver reconstructs and decodes low-rate decomposable codes by changing the number of data copies in Weldon's ARQ protocol. At this time, a buffered old codeword is updated to a transmitted new codeword. 5. This operation continues until no bit errors are detected or the maximum number of transmissions is achieved. Figure 5 shows a flowchart of the protocol of our proposed error correcting scheme. This scheme has the following advantages [21,22]. The first one is that the coding rate is very wide. Hence, bit (or packet) errors can be sufficiently eliminated by the coding rate of = 8/9 under very good channel conditions, while very low coding rates can remove bit errors under bad channel conditions. As for the second advantage, in the case of a small number of retransmissions, it is sufficient to transmit a small number of redundant bits. This characteristic leads to improvement of energy efficiency and reduction of transmission delay on retransmission. Finally, combining characteristics of Weldon's ARQ protocol makes it possible to perform wider QoS control. That is, by controlling the number of data copies in Weldon's ARQ protocol for transmission, the error correcting capability can be changed even if an error correcting code with the same coding rate is used. 1.
1. Firstly, the information bit sequence m is encoded via the punctured convolutional code, and codeword 1 is transmitted.
2. If bit errors are detected after decoding codeword 1, the receiver stores the transmitted codeword 1, and the transmitter re-sends the sub-codeword of codeword 1' n_i times if 1 ≤ i ≤ 3. At the receiver, the received sub-codeword and the stored codeword are combined, and the reconstructed codeword is decoded.
3. After the third retransmission, codeword 1 is sent n_4 times and combined with the buffered codeword at the receiver. If bit errors are detected after decoding the reconstructed codeword, codeword 1 is buffered in the receiver, and codeword 1' is transmitted n_5 times and combined with the stored codeword.
4. After that, codewords 1 and 1' are sent alternately n_i times and stored. Then, the receiver reconstructs and decodes low-rate decomposable codes by changing the number of data copies n_i in Weldon's ARQ protocol. At this time, the buffered old codeword is updated to the newly transmitted codeword.
5. This operation continues until no bit errors are detected or the maximum number of transmissions q is reached.

Figure 5 shows a flowchart of the protocol of our proposed error correcting scheme. This scheme has the following advantages [21,22]. The first is that the range of coding rates is very wide. Hence, bit (or packet) errors can be sufficiently eliminated by the coding rate r_c = 8/9 under very good channel conditions, while very low coding rates can remove bit errors under bad channel conditions. The second is that, for a small number of retransmissions, only a small number of redundant bits needs to be transmitted. This characteristic improves energy efficiency and reduces transmission delay on retransmission. Finally, combining the characteristics of Weldon's ARQ protocol makes it possible to perform wider QoS control. That is, by controlling the number of data copies n_i in Weldon's ARQ protocol for transmission, the error correcting capability can be changed even if an error correcting code with the same coding rate is used. A transmit-side sketch of this control flow is given below.
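The following is a minimal sketch of the retransmission control flow in steps 1-5 above. The channel and decoder details are abstracted into a single residual-failure probability, and the round schedule and copy counts below are illustrative assumptions, not the authors' exact implementation or the values of Table 4.

```python
# Sketch of the hybrid-ARQ control flow (steps 1-5); decoder abstracted away.
import random

def hybrid_arq(n, q_max, p_decode_fail):
    """Return the number of transmissions used, or None on failure.

    n             -- list of data-copy counts n_i for the Weldon's ARQ rounds
    q_max         -- maximum number of transmissions q
    p_decode_fail -- per-round residual decoding-failure probability
                     (a stand-in for the real combining + Viterbi decoder)
    """
    tx_used = 0
    buffered = []                      # codewords stored for combining
    for i, n_i in enumerate(n):
        # Rounds alternate between codeword 1 and codeword 1' (steps 2-4);
        # each round sends n_i copies that are combined with the buffer.
        codeword = "cw1" if i % 2 == 0 else "cw1p"
        tx_used += n_i
        if tx_used > q_max:
            return None
        buffered.append((codeword, n_i))
        # Combining lowers the effective coding rate, so decoding failure
        # becomes less likely as more rounds accumulate (heuristic model).
        if random.random() > p_decode_fail / (i + 1):
            return tx_used             # decoded with no bit errors
    return None                        # maximum number of transmissions hit

# Example with dummy copy counts (not the Table 4 schedule):
print(hybrid_arq(n=[1, 1, 1, 1, 2, 2], q_max=8, p_decode_fail=0.5))
```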
System Model and QoS Requirement
It is assumed that a sensor node (N1) includes multiple sensors that produce different data types that are transmitted via a relaying node (N2) to the target hub (H) (Figure 6). Here, tr_{A→B} is the number of transmissions from node A to node B and q_{A→B} is the maximum number of transmissions from node A to node B. If bit errors are detected, the system retransmits until the maximum number of retransmissions is reached. The transmission is considered to have failed if the data from a sensor node do not reach the target hub. The average number of transmissions from node A to node B, tr_{A→B}, is expressed as follows:

tr_{A→B} = Σ_{i=1}^{q_{A→B}} i (1 − P_{fail,i}) Π_{j=1}^{i−1} P_{fail,j} + q_{A→B} Π_{j=1}^{q_{A→B}} P_{fail,j}. (1)

Here, P_{fail,i} is the probability of transmission failure in the ith transmission. In this study, P_{fail,i} [21] is the same as the PER, because packet collisions in the MAC layer are not considered; only bit errors due to noise and multipath fading are taken into account. Then, the average number of transmissions in the two-hop case is expressed as follows:

tr_{2hop} = tr_{N1→N2} + (1 − P_{fail,1st}) tr_{N2→H}. (2)

Here, P_{fail,1st} is the probability of transmission failure at the first hop. A numerical sketch of these averages is given below.
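As a numerical sketch of Equations (1) and (2) in the form reconstructed above (the ith attempt occurs only if all earlier attempts failed, truncated at q transmissions), the hop and two-hop averages can be computed as follows; the failure probabilities in the example are arbitrary:

```python
# Average transmissions per hop (Eq. (1)) and over two hops (Eq. (2)).
def avg_transmissions(p_fail, q):
    """Average number of transmissions over one hop, truncated at q.
    p_fail[i-1] is the failure probability of the ith attempt (it may
    differ per attempt because the effective coding rate changes)."""
    assert len(p_fail) >= q
    tr, prob_all_prev_failed = 0.0, 1.0
    for i in range(1, q + 1):
        tr += i * (1.0 - p_fail[i - 1]) * prob_all_prev_failed
        prob_all_prev_failed *= p_fail[i - 1]
    tr += q * prob_all_prev_failed        # the hop failed after q attempts
    return tr, prob_all_prev_failed       # second value = hop failure prob.

# Two-hop case: the second hop is attempted only if the first succeeded.
tr1, p_fail_1st = avg_transmissions([0.3, 0.2, 0.1, 0.05], q=4)
tr2, _          = avg_transmissions([0.4, 0.3, 0.2, 0.10], q=4)
tr_2hop = tr1 + (1.0 - p_fail_1st) * tr2
print(tr1, tr2, tr_2hop)
```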
In this study, two data types (Data A and Data B) with different QoS requirements are considered. It is assumed, as an example, that a low PER is desired for Data A and high energy efficiency is important for Data B [20][21][22]. The first reason for selecting these QoS targets is that this paper focuses on an error control scheme utilizing a hybrid ARQ, for which PER and energy efficiency are the key evaluation parameters. The second is that these two parameters involve a trade-off, which we also aim to show in some of the evaluations. Data A is assumed to be a physiological parameter with a low data rate, for example blood pressure, SpO2, or temperature, and Data B a waveform, such as an ECG output [20][21][22]. The transmission order and error control process of the different data packets depend on these QoS requirements. The characteristics of the different data types are summarized in Table 1 [13,14,22].
Then, each q_{A→B} is set as shown in Table 2. The maximum number of retransmissions is four in the high QoS mode of the IR-UWB (impulse radio ultra-wideband) PHY of the standard. However, the default mode of the IR-UWB PHY, the narrowband PHY, and the Human Body Communication PHY in IEEE Std. 802.15.6 do not define a maximum number of retransmissions. Thus, in the current study, this parameter was set according to the QoS requirements of the data, as in our previous work [20][21][22]. Figure 7 shows examples of each tr_{2hop}. P_{fail,1st,i} and P_{fail,2nd,i} denote the probability of transmission failure at the ith transmission of the first and second hop, respectively. With Data A, tr_{2hop} increases steeply under high P_{fail,2nd,i} conditions, especially in low P_{fail,1st,i} cases, because tr_{N2→H} increases towards q_{max} as P_{fail,2nd,i} increases. On the other hand, tr_{2hop} increases gradually with Data B because q_{N1→N2} and q_{N2→H} are constant.
Two Proposed Schemes
In this study, two proposed schemes are considered. The first scheme (Scheme 1) transmits data depending on preset parameters, as used in our previous study [20][21][22]. In the second scheme (Scheme 2), the coding rates are varied with the SNR estimated using a preamble signal according to each QoS requirement (e.g., the desired bit error ratio (BER)); this scheme is introduced for the first time in this manuscript. An operation example is shown in Figure 8. Firstly, the channel SNR is estimated by using the preamble of the beacon or the T-Poll received from the hub or the relaying node. Next, the relayed node or the relaying node determines the coding rate according to the estimated channel SNR and transmits data to the relaying node or the hub. If a bit error is detected, elements of the encoded data (codeword) are transmitted to increase the error correcting capability after receiving a negative acknowledgement (NACK). Then, if data are transmitted successfully, the channel SNR is estimated by using the returned acknowledgement (ACK) preamble, the coding rate is determined, and the next data are sent. Since Scheme 2 uses an existing preamble, no extra overhead is required.
Then, the channel SNR is estimated using the following equations [32]:

Γ = ρ² / (1 − ρ²), (3)

ρ² = |x^H r|² / (x^H x · r^H r). (4)

Here, Γ is the estimated SNR, ρ is a correlation coefficient, x is a preamble signal with noise η, and r is a preamble signal that consists of a signal s and an unknown constant c without noise or interference. Then, we explain why Equation (3) becomes the SNR. Substituting Equation (4) into Equation (3) gives

Γ = |x^H r|² / (x^H x · r^H r − |x^H r|²).

Here, |x^H r|² can be expanded as follows:

|x^H r|² = (s^H r + η^H r)(r^H s + r^H η) = |s^H r|²,

where η^H r = r^H η = 0 since noise and a preamble signal are uncorrelated. Then, x^H x · r^H r can be expanded as follows:

x^H x · r^H r = s^H s · r^H r + η^H η · r^H r.
For the same reason, η^H s = s^H η = 0. Combining these expansions, the denominator reduces to η^H η · r^H r, and with r = c s the estimator indeed yields the SNR:

Γ = |s^H r|² / (η^H η · r^H r) = s^H s / η^H η = P_s / P_η,

where P_s and P_η are the signal power and noise power, respectively. In Scheme 2, the criterion to determine the coding rate is expressed as follows:

BER_desired = 1 − (1 − PER_desired)^(1/L_info), (14)

where L_info is the length of the information bits. Hence, the desired BER is calculated from the desired PER, such as in Table 1, and L_info; the coding rate is then determined from this target and the estimated SNR using Figure 9. The reason for using the desired PER and L_info is that the desired BER for determining the coding rate can be obtained accurately from Equation (14), since the required QoS (the desired PER) is used. For example, in a case where the desired PER is 10^−2 and L_info is 400 bits, the desired BER is calculated as 2.5 × 10^−5. Here, if the estimated SNR is 5 dB, the coding rate is determined to be r_c = 1/2, as shown in Figure 9. Numerical sketches of the SNR estimator and of this rate-selection rule follow.
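The following is a minimal sketch of the preamble-based estimator of Equations (3)-(4) as reconstructed above, with x = s + η (received preamble) and r = c·s (clean reference); the preamble, the constant c, and the true SNR are arbitrary illustration values:

```python
# Correlation-based SNR estimation from a known preamble.
import numpy as np

rng = np.random.default_rng(0)
L = 1024                                    # preamble length
s = rng.choice([-1.0, 1.0], size=L)         # known preamble (BPSK-like)
c = 0.7                                     # unknown constant at the receiver
true_snr_db = 5.0
noise_power = np.mean(np.abs(s) ** 2) / 10 ** (true_snr_db / 10)
eta = rng.normal(scale=np.sqrt(noise_power), size=L)

x = s + eta                                 # noisy received preamble
r = c * s                                   # reference without noise

# rho^2 = |x^H r|^2 / (x^H x * r^H r);  Gamma = rho^2 / (1 - rho^2)
rho2 = np.abs(np.vdot(x, r)) ** 2 / (np.vdot(x, x).real * np.vdot(r, r).real)
gamma = rho2 / (1.0 - rho2)                 # estimated linear SNR
print("estimated SNR: %.2f dB" % (10 * np.log10(gamma)))
```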
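The rate selection in Scheme 2 can then be sketched as follows. The BER target comes from Equation (14) as reconstructed above; the SNR-to-BER curves below are hypothetical stand-ins for the curves of Figure 9, not measured values:

```python
# Coding-rate selection from the desired PER, L_info, and estimated SNR.
def desired_ber(desired_per, l_info):
    """Per-bit error target from a packet error target over l_info bits."""
    return 1.0 - (1.0 - desired_per) ** (1.0 / l_info)

def select_rate(est_snr_db, ber_target, ber_curves):
    # Pick the highest (least protective) rate meeting the BER target.
    for rate, ber_at in ber_curves:
        if ber_at(est_snr_db) <= ber_target:
            return rate
    return ber_curves[-1][0]          # fall back to the lowest coding rate

ber_curves = [("8/9",  lambda g: 10 ** (-(g - 4.0))),   # illustrative only
              ("1/2",  lambda g: 10 ** (-g)),
              ("1/16", lambda g: 10 ** (-(g + 4.0)))]

target = desired_ber(1e-2, 400)       # = 2.5e-5, as in the text
print(target, select_rate(5.0, target, ber_curves))   # -> rate "1/2"
```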
Performance Evaluation by Computer Simulation
In this section, the proposed and standard schemes with the two-hop extension are evaluated with respect to communication distance by computer simulations. The computer simulator was built by us with MATLAB. The main simulation parameters are listed in Table 3 and follow our previous work [20][21][22]. Table 4 shows the preset parameters of Weldon's ARQ protocol at the ith transmission for Scheme 1 [20][21][22]. The computer simulation assumes that there is no error in the SHR and PHR; that is, only the characteristics of the PSDU are evaluated. In the computer simulations of the compared schemes, Data A was transmitted using the default mode with the (63, 51) BCH code in IEEE Std. 802.15.6, and using the error control scheme utilizing the (63, 55) Reed-Solomon code in IEEE Std. 802.15.4a with ordinary ARQ. Data B was transmitted using the high QoS mode with the (126, 63) shortened BCH code and type-II hybrid ARQ, and using the error control scheme utilizing the concatenated code consisting of the (63, 55) Reed-Solomon code and the convolutional code whose constraint length is three and coding rate is 1/2 in IEEE Std. 802.15.4a with ordinary ARQ [17,19]. In these computer simulations, the IEEE model CM3 is applied as the channel model, which targets wearable WBAN and includes multipath fading [33]. Then, a hospital room case of the IEEE model CM3 is utilized as the path loss model [33]. The path loss is expressed as follows:

PL(d) = a log10(d) + b + N [dB].

Here, a and b are linear fitting coefficients, d is the communication distance (millimeter, mm) between a transmitter and a receiver, and N is a normally distributed variable with zero mean and standard deviation σ_N. Details about these parameters can be found in the literature [33]. Using PL(d), the signal to noise ratio (SNR) at a receiver can be expressed as follows:

SNR = P_t − PL(d) − P_n, (17)
P_n = N_thermal + (NF)_dB + I_dB, (18)

where P_t is the transmission power and N_thermal is the thermal noise. The average path loss is shown in Figure 10. It is assumed that the channel condition does not change until the two-hop relay is completed or the two-hop relay fails beyond the maximum number of retransmissions. A small link-budget sketch is given below.
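The following is a minimal dB-domain sketch of the link budget of Equations (15)-(18) as reconstructed above. The fitting coefficients a, b, σ_N and the power figures are placeholders; the actual CM3 hospital-room values are given in [33] and Table 3:

```python
# Link budget: path loss with log-normal shadowing, then SNR at the receiver.
import numpy as np

a, b, sigma_n = 19.2, 3.4, 4.4        # placeholder fit coefficients (dB, d in mm)
p_t_dbm = -10.0                        # transmission power P_t
n_thermal_dbm, nf_db, i_db = -101.0, 10.0, 0.0

rng = np.random.default_rng(1)

def path_loss_db(d_mm):
    return a * np.log10(d_mm) + b + rng.normal(0.0, sigma_n)

def snr_db(d_mm):
    p_n = n_thermal_dbm + nf_db + i_db            # Equation (18)
    return p_t_dbm - path_loss_db(d_mm) - p_n     # Equation (17)

for d_mm in (100.0, 1000.0, 1500.0):              # 10 cm, 1 m, 1.5 m
    print(d_mm, "mm ->", round(snr_db(d_mm), 1), "dB")
```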
In addition, each case of the proposed scheme at each hop is summarized in Table 5. Then, the energy efficiency η is derived from our previous work [22] as follows:

E_link,A→B = (T_TOT + N_tx T_ACK)(P_tx,RF + P_tx,circ + P_rx) + N_tx (ε_enc + ε_dec).

Here, E_link,A→B is the energy consumption of the communication link at each hop, P_succ is the transmission success ratio, T_TOT is the total duration of packet transmission, T_ACK is the duration of the ACK, L_PSDU,i is the length of the PSDU, N_tx is the number of transmissions, P_tx,RF is the transmitter RF power consumption, P_tx,circ is the transmitter circuitry power consumption, P_rx is the receiver power consumption, and ε_enc and ε_dec are the encoding and decoding energies, respectively [34][35][36][37]. A small sketch of this energy bookkeeping follows.
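The following is a minimal sketch of the link-energy bookkeeping around E_link,A→B above. The efficiency definition eta below (successfully delivered payload bits per joule) is an assumption for illustration; see [22] for the authors' exact form, and note that all numeric inputs are dummies:

```python
# Link energy per the E_link expression above, plus an assumed efficiency.
def e_link(t_tot, t_ack, n_tx, p_tx_rf, p_tx_circ, p_rx, eps_enc, eps_dec):
    return (t_tot + n_tx * t_ack) * (p_tx_rf + p_tx_circ + p_rx) \
           + n_tx * (eps_enc + eps_dec)

def eta(p_succ, l_psdu_bits, energy_j):
    return p_succ * l_psdu_bits / energy_j   # assumed bits-per-joule form

e = e_link(t_tot=2e-3, t_ack=0.2e-3, n_tx=2, p_tx_rf=1e-3,
           p_tx_circ=2e-3, p_rx=3e-3, eps_enc=1e-6, eps_dec=5e-6)
print(e, eta(0.99, 400, e))
```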
In this scenario, the performance in the range in which the WBAN mainly operates (10 cm to 1.5 m) and at the limits of the WBAN system (1.5 m to 2.3 m) is evaluated. PDFR means the ratio at which the two-hop relay failed beyond the maximum number of retransmissions. As can be seen, the proposed scheme satisfies the QoS requirements for Data A and B shown in Table 1, while IEEE Std. 802.15.6 and 15.4a do not. Hence, the proposed method improves the PER of Data A more, while it improves the energy efficiency and the number of transmissions of Data B more. Conversely, Data B shows better performance with respect to both standard schemes. The reason is that the standard schemes are not designed so that arbitrary QoS requirements can be satisfied; their results simply reflect the inherent performance of each mode of IEEE Std. 802.15.6 and of the error control schemes of IEEE Std. 802.15.4a, which is one of the problems of these standard schemes. Cases 2 and 3 show better energy efficiency and average number of transmissions than Case 1, because the coding rates of Cases 2 and 3 are set appropriately for the channel SNR and the number of retransmissions is reduced by utilizing Scheme 2, while Case 1 uses only Scheme 1 and requires a larger number of retransmissions. However, there is not a large difference between Cases 2 and 3, because d_2nd is short and the error correcting capability of coding rate r_c = 8/9 at the first transmission can reduce bit errors sufficiently. That is, there is no large difference between Schemes 1 and 2 with respect to the second hop.

Figures 14-16 show the performance results for a fixed communication distance over two hops, d_2hops = d_1st + d_2nd (1.5 m), while varying the d_1st and d_2nd values. For d_1st = 1.5 m, data are transmitted using only a single hop. The proposed scheme satisfies the QoS requirements for Data A and B, while both standard schemes do not, as in the first scenario. Also, when comparing the standard schemes and the proposed scheme, the performances of both standards are worse than those of the proposed one. For example, Data A of the proposed scheme satisfies PDFR < 10^−2, while that of both standards does not satisfy even PDFR < 10^−1.
This is because the correcting capability of the error correcting codes used in those standards is lower than that of the proposed scheme. In other words, the standard schemes do not have sufficient correcting capability on a hop with poor channel conditions. Comparing Cases 1 and 2, Case 2 has the better characteristics. The reason is that Case 2 can select a coding rate suitable for the channel condition by using Scheme 2 at the second hop, whereas Case 1 uses Scheme 1 at both hops and is therefore strongly affected by a hop with a bad channel condition. Then, Case 3 shows the best performance, because Scheme 2 is used at both hops. In addition, all systems except Case 2 of the proposed scheme show their best performance when the communication distance of the first hop equals that of the second hop, because otherwise d_1st or d_2nd becomes long and the long-distance communication dominates the performance.
Theoretical Analysis of Constant d_2hops
Here, we present a theoretical analysis for the case where d_2hops is fixed, because this scenario appears to show an optimal point in Figures 14-16. The reason for the optimized performance, except for Case 2, when d_1st = d_2nd = d_2hops/2 is described in this section. The communication distance over the two hops is defined as d_2hops = d_1st + d_2nd, and tr_2hops is differentiated with respect to d_1st; setting the derivative to zero and solving shows that the stationary point lies at d_1st = d_2nd. A numerical sketch of this argument is given below.
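The following is a minimal numerical sketch of the constant-d_2hops analysis: sweep the split d_1st + d_2nd = d_2hops and locate the split that minimizes tr_2hops. The PER-versus-distance model per_of() below is a monotone placeholder, not the simulated PHY, so only the qualitative location of the optimum is meaningful:

```python
# Sweep of the hop split under a fixed total distance d_2hops.
import numpy as np

def per_of(d_m):                        # placeholder: PER grows with distance
    return min(0.99, 0.02 * np.exp(1.8 * d_m))

def tr_hop(per, q):                     # Equation (1) with a constant PER
    tr, fail = 0.0, 1.0
    for i in range(1, q + 1):
        tr += i * (1 - per) * fail
        fail *= per
    return tr + q * fail, fail

d_2hops, q = 1.5, 4

def tr_2hops(d1):
    tr1, pfail1 = tr_hop(per_of(d1), q)
    tr2, _ = tr_hop(per_of(d_2hops - d1), q)
    return tr1 + (1 - pfail1) * tr2     # Equation (2)

splits = np.linspace(0.1, 1.4, 131)
best = min(splits, key=tr_2hops)
print("best d_1st ~ %.2f m (d_2nd = %.2f m)" % (best, d_2hops - best))
```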
Conclusions
In this paper, the performance of our proposed QoS control scheme in the case of the two-hop extension was evaluated. The PDFR, number of transmissions, and energy efficiency of our previously proposed system, IEEE Std. 802.15.6, and 15.4a were evaluated for this case. Also, two schemes (Schemes 1 and 2) were compared for the proposed method. The numerical results show that the proposed scheme outperforms the standard schemes in terms of the PDFR, number of transmissions, and energy efficiency. In addition, Case 3 (i.e., the coding rates change depending on the channel condition at both hops) showed better performance than the other cases. When d_2hops was fixed, computer simulations and theoretical analysis showed that performance became optimal when d_1st = d_2nd (except for Case 2). This result is expected to contribute greatly to optimizing the arrangement of nodes and hubs when designing a WBAN.
In the future, an effective error control scheme for multi-hop WBANs should be considered. In addition, mainly PHY evaluation indexes were considered here; the system delay and throughput at the network layer should also be evaluated for multi-hop cases. As an extension of IEEE Std. 802.15.6, cases with three or more hops should also be evaluated and analyzed theoretically.
Author Contributions: K.T. and H.T. conceived and designed the study. K.T. performed the computer simulations. K.T. provided the theoretical analysis of the proposed method. K.T. wrote the manuscript. H.T., C.S., K.S. and R.K. reviewed and edited the manuscript. All authors read and approved the manuscript.
Some comparisons between the Variational rationality, Habitual domain, and DMCS approaches
The"Habitual domain"(HD) approach and the"Variational rationality"(VR) approach belong to the same strongly interdisciplinary and very dispersed area of research: human stability and change dynamics (see Soubeyran, 2009, 2010, for an extended survey), including physiological, physical, psychological and strategic aspects, in Psychology, Economics, Management Sciences, Decision theory, Game theory, Sociology, Philosophy, Artificial Intelligence,.... These two approaches are complementary. They have strong similarities and strong differences. They focus attention on both similar and different stay and change problems, using different concepts and different mathematical tools. When they use similar concepts (a lot), they often have different meaning. We can compare them with respect to the problems and topics they consider, the behavioral principles they use, the concepts they modelize, the mathematical tools they use, and their results.
changeable spaces (DMCS) problems. Innovation dynamics also examine cover-discover (CD) problems (Larbani, Yu, 2012). A covering problem is a competency set expansion (CSE) problem. It refers to "how to transform a given competence set CS* into a set that contains a targeted competence set CS". Discovering refers to the following problem: "Given a competence set, what is the best way to make use of it to solve unsolved problems or to create value? The process underlying this problem solving or value creation involves discovering. A discovering process can be defined as identifying how to use available tangible and intangible skills, competences, and resources to solve an unsolved problem or to produce new ideas, concepts, products, or services that satisfy some newly-emerging needs of people. ... Discovering contributes to reducing the charge level or relieving the pain of some targeted people" (Larbani, Yu, 2012); E) decision making with changeable spaces (DMCS) games with changing minds (2009b, 2011).
The "Variational rationality" (VR) approach For an extended presentation of this model, see Soubeyran (2009. This approach focus attention on a single very general problem, the famous self regulation problem, which includes a lot of different aspects in different disciplines, using different terminologies. Among other more specific topics, it considers: A') behavioral stability and change principles; B') habit formation and break (HFB) problems at the individual level, as well as routine formation and break (RFB) problems at the organizational level; C') exploration-exploitation (EEL) learning dynamics; D') adaptive self regulation (SR) problems (goal setting, goal striving, goal revision and goal pursuit) which belong to the general class of decision making with changeable spaces, moving goals and variable preferences (DMCSMG) problems. Such self regulation processes modelize the interrelated dynamics of motivational (desiring and willing), cognitive (perception and knowledge acquisition) and emotional aspects of human behavior. (Bento, Cruz Neto, Soubeyran, 2013), local search inexact proximal algorithms , inexact descent methods (Bento and alii, 2013), equilibrium problems (Bento and alii, 2013), and dual equilibrium problems (Moreno and alli, 2012), variational inequalities (Attouch, Soubeyran, 2006, Luc andalii, 2010), trust region methods (Villacorta and alii, 2013), Tabu search algorithms (Martinez-Legaz, , sequential decision making (Martinez-Legaz, ,.... use the same variational principles as VR do. These algorithms can be seen as reduced forms of the VR self regulation model.
Behavioral principles
HD behavioral principles. HD theory is based on eight behavioral principles (Yu, Chen, 2010). Four hypotheses capture the basic workings of the brain: the Circuit pattern hypothesis H1, the Unlimited capacity hypothesis H2, the Efficient restructuring hypothesis H3 and the Analogy/association hypothesis H4. Four other hypotheses summarize how our mind works: the Goal setting and state evaluation hypothesis H5 (a basic function of our mind), the Charge structure and attention allocation hypothesis H6 (how we allocate our attention to various events), the Discharge hypothesis H7 (a least resistance principle that humans use to release their charges) and the Internal and external information inputs hypothesis H8.
VR behavioral principles. The VR approach is based on, at least, nine behavioral principles (Soubeyran, 2009, 2010): K1) agents are bounded rational (Simon, 1955). They do not optimize, except for simple problems. As soon as a problem is complex, they satisfice within a consideration set. They consider, each step, only a limited subset of alternatives related to the behavioral chain "means and capabilities --> actions --> performances --> goals --> desires". This consideration set changes from one period to the next; K2) human activities follow a succession of temporary stays and changes; K3) each period, the agent is satisfied, or remains unsatisfied, relative to different domains of his life; K4) the agent's problem, each step, is to choose between temporarily staying and changing ("should I stay, should I go?"); K5) the agent balances, each step, advantages and inconveniences of changing or staying and, more generally, motivation and resistance to change or to stay. He considers worthwhile temporary stays or changes; K6) an agent is engaged in a "stop and go" course pursuit of stays and changes, between setting, each period, the same old desired ends or (and) new ones, and finding related feasible means to reach or approach each of them; K7) an agent is partially able to self regulate this possibly interrupted course pursuit. He can set goals, strive for them, and revise them (having reached some goals, resetting the same goals is a possibility); K8) experience matters and partially determines the current behavioral chain. Then, almost all things, including preferences, can change. There is a course pursuit between changing preferences, actions, capabilities and beliefs. A current action is chosen on the basis of the current preference, which changes the current preference, which in turn changes the choice of the future action, which in turn...; K9) goal pursuit stops when the agent reaches a behavioral trap, which is worthwhile to reach, starting from the initial position, and where he prefers to stay rather than to move again.
Comparisons. The Circuit pattern hypothesis H1 relates to our resistance to change concept (repetitions of thoughts, ideas and actions reinforce circuits, making them difficult to abandon). The Unlimited capacity hypothesis H2 supposes that "practically, every normal brain has the capacity to encode and store all thoughts, concepts and messages that one intends to". The Efficient restructuring hypothesis H3 supposes that "the encoded thoughts, concepts and messages H1 are organized and stored systematically as data bases for efficient retrieving. Furthermore, according to the dictation of attention they are continuously restructured so that relevant ones can be efficiently retrieved to release charges". The Analogy/association hypothesis H4 supposes that "the perception of new events, subjects or ideas can be learned primarily by analogy and/or association with what is already known. When faced with a new event, subject or idea, the brain first investigates its features and attributes in order to establish a relationship with what is already known by analogy and/or association. Once the right relationship has been established, the whole of the past knowledge (preexisting memory structure) is automatically brought to bear on the interpretation and understanding of the new event, subject or idea". All these hypotheses are in accordance with bounded rationality (Simon, 1955) and with our resistance to change concept (see K5), where new knowledge and past knowledge are not mixed immediately. For example, as a lot of experiments on habit formation and break have shown in Psychology, many attitudes and beliefs temporarily resist change. The Goal setting and state evaluation hypothesis H5 supposes that "each one of us has a set of goal functions and for each goal function we have an ideal state or equilibrium point to reach and maintain (goal setting). We continuously monitor, consciously or subconsciously, where we are relative to the ideal state or equilibrium point (state evaluation). Goal setting and state evaluation are dynamic, interactive, and are subject to physiological forces, self-suggestion, external information forces, current data bank (memory) and information processing capacity". The VR approach supposes (see K6) variable ideal states (moving aspirations and desires) and an adaptive course pursuit between where we are (an uncomfortable moving status quo) and where we want to be (moving aspirations, or desirable ends).
The Charge structures and attention allocation hypothesis H6 supposes that "each event is related to a set of goal functions. When there is an unfavorable deviation of the perceived value from the ideal, each goal function will produce various levels of charge. The totality of the charges by all goal functions is called the charge structure and it can change dynamically. At any point in time, our attention will be paid to the event which has the most influence on our charge structure". The Discharge hypothesis H7 supposes that "to release charges, we tend to select the action which yields the lowest remaining charge (the remaining charge is the resistance to the total discharge) and this is called the least resistance principle". The VR approach agrees with these two hypotheses, which are very similar to the famous "discrepancy reduction" principle in Psychology. They represent the first side of a self regulation process (see K6). The other side is a "discrepancy production" process (goal setting, goal revision, goal pursuit).
The Information input hypothesis H8 supposes that "humans have innate needs to gather external information. Unless attention is paid, external information inputs may not be processed". The VR approach agrees with this hypothesis, which can be related to the general concept of a consideration set (see K1).
Concepts, variables and parameters
HD concepts. To save space, let us list these concepts with respect to only three different HD problems: i) the stabilization of an habitual domain problem. Following an infinite sequence of periods (a transition, in the parlance of the VR approach), the agent considers, each period t, his potential domain at time t, his actual domain at time t, his activation probability at time t, his reachable domain at time t, and his attention and activation levels at time t; ii) the competency set expansion problem. Staying within a given period of time, defined as a single change in the VR approach (the case of a transition, where the agent follows an infinite sequence of periods, is not examined), the agent considers his given competency set, true or perceived, his given acquired skill set, true or perceived, a given table of the costs needed for acquiring a given skill directly from another given skill, his cost of acquiring a new skill, a chosen optimal expansion process, and decision traps, ...; iii) decision making in changeable spaces (DMCS) problems. They can be represented by a time dependent list including, each time t, i) a list of changing or changeable decision elements (a subset of alternatives, a subset of criteria, an outcome measured in terms of the criteria, and a preference), and ii) a list of changing or changeable decision environmental facets (information inputs, an habitual domain, a subset of other involved decision making agents, and a subset of unknowns). All the elements of this list can change or can be changed with time.
Innovation problems, like successions of competency set expansion problems and cover-discover problems, as well as DMCS game problems, are very important DMCS problems which will not be examined here, because our paper considers only a single agent.
VR concepts. They are all relative to the self regulation problem. More precisely, given a transition (an infinite sequence of periods), the agent follows a succession of single temporary stays or changes. Given his current experience, which changes with time, he considers, each period, using his current changeable consideration set: his current changeable aspiration gap, defined as the current discrepancy between where he is, the status quo (defined as his most recent past action and situation), and where he wants to be (which represents his current changeable aspiration level); his current changeable goals, which can help to fill a portion of the current aspiration gap and to approach his current aspiration level; a future chosen action to be done, repeating the old action or doing a new action; the regeneration of old capabilities to be able to repeat the old action; the deletion of old capabilities and the acquisition of new capabilities and means to be able to do the new action; his expected new performance and payoff generated by this action; his expected costs to be able to stay and his expected costs to be able to change; and his expected advantages to change, inconveniences to change, motivation to change, and resistance to change. The distinction can be made between an ex ante perceived and an ex post realized concept, variable or parameter. All these elements can change or can be changed over time. Then, worthwhile changes and variational traps are the key variational rationality concepts.
The same concepts can be used to examine the functioning of variational games (for a reduced form, see inertial games: Attouch, Redont, Soubeyran, 2007, Flores-Bazán, Luc, Soubeyran, 2012). To save space we do not examine them in this paper, which focuses on a single agent.
Comparisons.
1) In the parlance of the VR approach, stability and change dynamics involve two dynamics: an intra period dynamic (a single temporary stay or change, made of a succession of elementary stay or change operations) and an inter periods dynamic (a succession of periods, named a transition). The Chan, Yu (1985) paper on stable habitual domains examines transitions, while a competency set expansion process (Shi, Yu, 1999) refers to an intra period dynamic (a single change); 2) HD theory examines decision making with changeable spaces (DMCS) problems, while the VR approach considers decision making with changeable spaces and moving goals (DMCSMG) problems. However, (DMCS) problems can choose goals as well. VR self regulation processes focus attention on i) goal setting and goal revision (hence changing aspiration levels and goals) and ii) goal striving (discrepancy or charge reduction); 5) the two concepts of competency sets and capabilities, while having some similarities, differ. For the HD theory a competency set is a subset (a collection of resources and skills). For the VR approach a capability is a path of operations, including a script and a timing, and related means (physiological, physical, cognitive, motivational and emotional, tangible and intangible ingredients, and downstream and upstream tools and machines used to perform these operations, following the given script). However, both approaches can consider, as alternative formulations, subsets and paths to model competency sets and capabilities.
6) The HD definition of the asymmetric costs of acquiring a new skill directly from a given skill (see, for example, Shi, Yu, 1996) is a particular and reduced form of the VR definition of costs to be able to change, defined as the costs of acquiring, directly and indirectly, the capability to do a new action, starting from having the capability to do an old action again. VR costs to be able to change refer to direct and indirect costs. They include the direct and indirect costs of deleting some old elementary capabilities, which will not be used anymore and pollute, the costs of regenerating some other old capabilities, which will be used again, and the costs of acquiring new elementary capabilities. HD costs to acquire a new skill from an old one are direct costs. They seem to include only acquisition costs, and they represent the infimum of the costs to be able to change (see Soubeyran, 2009); 7) the VR resistance to change concept (see Soubeyran, 2009, 2010) differs from the HD resistance to change concept (Larbani, Yu, 2012). For the HD theory, the discharge hypothesis supposes that "to release charges, we tend to select the action which yields the lowest remaining charge (the remaining charge is the resistance to the total discharge) and this is called the least resistance principle" (Larbani, Yu, 2012). In the VR theory, resistance to change is the disutility of the inconveniences of having to change capabilities.
8) In the HD approach, motivation to change is modeled in terms of charges and discharges. The closest VR concept would be the utility-disutility of charges and discharges (as tensions). The VR motivation-resistance to change balance can be compared with the excitation-inhibition functions (Chan, Yu, 1985); 9) the definitions of traps differ. The HD theory (Larbani, Yu, 2012) says that a decision maker is in a decision trap at time t if his competence set is trapped in some area and cannot expand to fully cover the targeted competence set. ... Then "discovering is getting out of a decision trap. The resolution of challenging problems generally involves covering and discovering or dis/covering. Discovering requires a target to cover, while the covering process requires discovering when it falls in a decision trap". The VR approach defines a variational trap with respect to an initial situation (an action, a doing, or a state, some having or being). It is a situation which is worthwhile to reach, starting from this initial situation, but, being there, not worthwhile to leave. Optima, equilibria, decision traps, habits, routines, rules and norms represent specific cases. For game situations, win-win outcomes are examples of variational traps.
Rationality of a single agent.
Rationality and the HD theory. In this paper we consider an isolated agent; the comparisons between HD and VR game situations will be examined elsewhere. HD theory examines three different problems and proposes three different behavioral models for a single agent, who can be fully or bounded rational, depending on the model.
i) The stabilization of an habitual domain problem. This situation is modeled by a stabilization of activation propensities model, which represents a reduced form of the stabilization of an habitual domain problem (Chan, Yu, 1985). This model is a differential equation, a variant of the famous global pattern formation model (Cohen, Grossberg, 1983). The authors examine its convergence (weak and strong global stability). In this case an agent does not optimize; he is bounded rational. ii) The competency set expansion problem. In this case the main focus is on an optimal "problem solving" approach. More precisely, an agent has a given problem E to solve. To succeed in solving this problem, he must own a collection of skills defined as the competency set Tr(E) (true or perceived) related to the full resolution of the problem. The agent starts the resolution equipped with a given competency set, the acquired skill set Sk (true or perceived), defined as the collection of skills he owns at the beginning, before starting the resolution. The problem is to find an optimal path of expansion of his competency set, from the initial position Sk to the final position Tr(E), which represents a fixed given goal. A lot of different algorithms have been used to find the optimal solution for intermediate and compound skills and asymmetric cost functions (tree expansion processes, deduction graphs, spanning trees, ...; see Li, Chiang, Yu, 2000). In this case the agent is fully (substantively) rational. The opposite case of a bounded rational agent who uses a satisficing process to solve a competency set expansion problem remains an interesting open problem in this area of research.
iii) Decision making and optimization in changeable spaces (DMOCS) problems. In this new setting, Larbani, Yu (2012) use some dedicated optimization methods and suggest searching for other new optimizing methods to solve them. Cover-discover problems belong to this class of new optimization problems. However, Larbani, Yu (2012) say that "the operator Min in the models (6)-(8) should be understood in the sense of satisfaction not in the sense of absolute minimum". They notice (Larbani, Yu, 2012, p. 742) that optimization must be understood in terms of reducing the charge level of the decision maker to a satisfactory or acceptable level. This is in accordance with the satisficing principle (Simon, 1955).
Bounded rationality and the VR approach. VR theory proposes a unified model for human behavior, focusing on worthwhile temporary stays and changes, variational traps, and self regulation processes (goal setting, goal striving, goal revision, goal pursuit processes). It is well adapted to complex, changing and high stakes decision making problems (see Kunreuther and alii, 2002), where agents cannot be fully rational. In a complex and changing world, full optimization, each step, is too costly and not even economizing, because situations (spaces of feasible means and capabilities, actions, performances, payoffs, intermediate and final goals, desires and aspirations) change each step. Then, an optimal solution at time t can be irrelevant at time t+1. An agent tries to reach, each step, a moving satisficing level (not a fixed one). He considers, each step, worthwhile temporary stays or changes, which include, as special cases, adaptive, satisficing, local, approximate, and inexact solutions (and optimal solutions as limit cases).
5 Behavioral stability issues: "how habits and routines form and break".
Different mathematical tools for stability issues. The HD theory uses, at least, three main mathematical tools to examine stability issues (convergence to a stable and desirable final situation) and innovation problems (reaching a targeted competency set, starting from a given initial one): i) the dynamics of pattern formation (Grossberg, 1973, 1978, 1980, Cohen, Grossberg, 1983) as the main tool to examine the dynamics of activation propensities (Chan, Yu, 1985); ii) mathematical programming and different graph methods to study competency set formation and innovation problems (see, among many other papers, Shi, Yu, 1996); iii) Markov chains to examine the convergence of second order DMCS games to desirable and stable issues (2011). In contrast, the VR approach (Soubeyran, 2009, 2010) starts from variational rationality principles in Behavioral Sciences and offers, as immediate applications, a lot of famous mathematical principles of variational analysis (the Ekeland theorem, the Bronsted lemma, and other equivalent principles; see Flores-Bazán and alii, 2012, Luc, Soubeyran, 2013). In turn, it uses a lot of well known variational algorithms (proximal algorithms, descent methods, variational inequalities, trust region methods, equilibrium problems, ...) to help refine the VR approach relative to stability and innovation issues, for isolated and interacting agents (VR games). While strongly related to the VR approach, competency set expansion problems do not refer to stability issues but to innovation issues, while the dynamics of pattern formation (activation propensities) and second order DMCS games focus mainly on stability issues. To save space, and because our inexact proximal algorithm paper chooses, as an application, habit and routine formation at the individual and organizational levels (a benchmark stability issue), we will only compare how the HD and VR approaches solve, in different ways, this very difficult problem of "how habits and routines form and break". A comparison of the HD and VR approaches relative to DMCS problems, competency set expansion problems, innovation problems, and second order DMCS games will be examined elsewhere. However, a very preliminary step is taken, later, for the comparison of second order DMCS games and VR games.
Let us compare how the HD theory, using the dynamics of pattern formation (Grossberg, 1973, 1978, 1980, 1983), and the VR approach model and explain how habits and routines form and break, on two grounds: i) explaining convergence to a final issue; ii) explaining why this final issue is desirable and stable.
Notice that the pattern formation model and the variational rationality model both use a balance principle. Worthwhile changes balance motivation and resistance to change. Changes in pattern allocations balance excitation and inhibition inputs and signals.
HD theories of habit and routine formation and break. The HD intuition is the following (Chan, Yu, 1985): "... the existence of stable HD, based on a set of hypotheses, is described. Roughly, as each human being learns, his HD grows with time, but at a decreasing rate, because the probability for an arriving idea to be new with respect to HD becomes smaller as HD gets larger. Thus, unless unexpected extraordinary events arrive, HD will reach its stable state. If extraordinary events do not arrive very often, habitual ways of thinking and action will prevail most of the time. This observation is the main motivation to use 'habitual' as the adjective." More formally, the HD theory of habit and routine formation considers the convergence of the allocations of time and effort (activity propensities) to different activities up to a final pattern (an habitual pattern of time and effort allocations), using a variant of the famous pattern formation model (Grossberg, 1973, 1978, 1980, 1983). Notice that this model of progressive pattern formation does not explain why this final allocation is desirable and stable. However, as said before, in the context of DMCS problems, win-win situations refer to desirable and stable final outcomes (2011). VR theories of habit and routine formation and break. The VR intuition is very different. Agents make worthwhile changes and stop changing when there is no way to be able to consider and make a new worthwhile change. The convergence can be in finite or infinite time (see Bento, Soubeyran, 2014, Flores Bazan, Luc, Soubeyran, 2012, Bento, Cruz Neto, Soares, ...). This model explains why, and under which conditions, this final issue is desirable and stable. This is the case when it is a variational trap. Moreover, the formalized VR theory of habit and routine formation fits very well (see below) the main non formalized experimental findings of the different theories of "how habits and routines form and break", both in Psychology and Management Sciences, within a bounded rationality approach, and, to some degree, in Economics, in a perfect rationality context.
Habit formation in Psychology and Economics. In Psychology, habits represent "learned sequences of acts that have become automatic responses to specific cues, and are functional in obtaining certain goals or end states". For Duhigg (2012), a habit is an automatized action (mental or physical experience), a more or less fixed way of thinking, willing, feeling and doing which follows an automatized three step pattern: a given trigger which activates the action, a process (or script) that the action follows, and a reward (benefit or gain). Hence habits are learned automatic behaviors. Repetition in a similar recurrent context is a necessary condition for habits to develop. The frequency of past behavior and context stability, like internal cues (moods and goals) and external cues (partners and external goals), determine habit strength. Habits represent a form of automaticity (Bargh, 1994). They are more or less conscious and intentional (wanted, i.e., the perception of contexts is more or less goal directed, triggered by goals or by other cues). They are learned in a progressive way. They can be difficult to control, hard to form, because they follow a progressive learning process, and more or less hard to break, given some weak or strong motivation and resistance to change, as the vestige of past behavior. Then, they can resist change. Habits can be good (mentally efficient, saving on deliberation efforts). Habits can also be bad (addictions, behavioral traps, ...).
In Economics, agents are perfectly rational and habits are defined as stocks of past experiences. A current habit is modeled as a stock of past behaviors which determines the present preference of the agent with respect to present consumption. In standard models of addiction (see Becker, 1988) and habit formation (see Abel, 1990, Carroll, 2000), preferences have the given current numerical representation U_n = U(c_n, h_n), where the current state h_n represents a stock of habits, c_n stands for current consumption, and n indexes time. The habit persistence hypothesis implies that instantaneous utility does not only depend on current consumption, but also on a stock of habits, h_n. A small sketch of this bookkeeping is given below.
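The following is a minimal sketch of the habit-stock bookkeeping: a standard law of motion h_{n+1} = rho * h_n + c_n (geometrically discounted past consumption) is assumed here purely for illustration; Abel (1990) and Carroll (2000) use variants of this form:

```python
# Habit stock as geometrically discounted past consumption (assumed form).
def habit_path(consumption, rho=0.8, h0=0.0):
    h, path = h0, []
    for c in consumption:
        path.append(h)          # h_n enters U(c_n, h_n) in period n
        h = rho * h + c         # the stock accumulates past behavior
    return path

print(habit_path([1.0, 1.0, 1.5, 1.5, 1.5]))
```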
Routine formation in Management Sciences. In this discipline, routines are defined at the organizational level as collective patterns of interactions. An enormous literature considers routines as organizational habits in the context of the stability and change dynamics of organizations. An excellent survey is Becker (2004), which lists the main points that characterize routines: patterns of interactions, collective activities, mindlessness vs. effortful accomplishments, processes (ways of doing, scripts), context dependence (embeddedness and specificity), path dependence, and triggering by related actors and external cues. Routines have several effects. They favor coordination, control, truce and stability. They also economize on cognitive resources, store knowledge, and reduce uncertainty.
Stability issues in VR and N person second order games. Although our proximal algorithm paper considers only an agent and not a game situation, the VR approach includes the examination of variational games which follow, using reduced formulations, the VR list of nine principles given above (Attouch and alii, 2007, Attouch and alii, 2008, Flores-Bazán, Luc, Soubeyran, 2012, Flam and alii, 2012, Cruz Neto and alii, 2013). Here, the nine VR principles will not be repeated. However, to allow a possibly more complete comparison of HD and VR stability issues (possible convergence to some stable final situation), let us summarize the content of N person second order games (2011). They incorporate human psychology in formulating games as people play them, using the habitual domain theory and the theory of Markov chains. A Markov chain models the evolution of the states of mind of players over time as transition probabilities over them. States of mind determine the outcomes of the so called two or N person second order game. The final issues are not Nash equilibria, but focal mind profiles, which are desirable to reach and globally stable solutions of the game, and win-win profiles, which are focal and absorbing profiles, while Nash equilibria are not. These games can predict the average number of steps needed for a game to reach a focal or win-win mind profile where both players declare victory. Given some hypotheses, the "possibility theorem" states that it is always possible to reach a win-win mind profile, restructuring (reframing) the data of the game, the set of players, the payoff functions, and the set of strategies of the initial game. In this reframing context, the HD information input hypothesis H8 plays a major role. In such games, players are not fully rational. They follow the rules of the HD theory. They refer to DMCS problems, where the structure of the game can be restructured (it is changeable), and information inputs help to reach a win-win profile. To summarize, in a conflict situation, second order players try to reach a focal profile (which exists) and, once it is reached, they try to make it stable, as a win-win profile. A small numerical sketch of the step-count prediction is given below.
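The following is a minimal sketch of the Markov chain machinery behind such predictions: with an absorbing win-win mind profile, the expected number of steps to absorption from each transient profile is t = (I − Q)^{-1} 1, where Q is the transient-to-transient block. The 3-state chain below is hypothetical, not taken from the cited games:

```python
# Expected steps to reach an absorbing win-win mind profile.
import numpy as np

# States 0 and 1 are transient mind profiles; state 2 is absorbing (win-win).
P = np.array([[0.6, 0.3, 0.1],
              [0.2, 0.5, 0.3],
              [0.0, 0.0, 1.0]])
Q = P[:2, :2]                                    # transient block
t = np.linalg.solve(np.eye(2) - Q, np.ones(2))   # t = (I - Q)^{-1} 1
print("expected steps to win-win:", t)           # from profiles 0 and 1
```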
Variational rationality, changeable payoffs and decision sets, and the inexact proximal algorithm
The Variational rationality approach considers the rationality of agents when many things change or are changeable, and other things must temporarily stay. Let us summarize our findings. First, the inexact proximal algorithm (with relative resistance to change) is a reduced form of the variational rationality model. Second, it is not a repeated optimizing problem in changeable spaces, as an exact proximal algorithm can be. In this section, we will show that it represents a worthwhile to stay and change dynamic. Then, it is an adaptive and satisficing course pursuit problem, with changeable payoffs, goals, and decision spaces.
Exact proximal algorithms as repeated optimization problems with variable payoffs and changeable spaces. Let us consider, first, exact proximal algorithms, which are benchmark cases of their inexact versions. They are not satisficing models of human behavior. However, they can be considered as repeated optimization problems with variable payoffs and changeable decision sets (as worthwhile to change sets, see the Lemma). The fact that they consider variable payoffs has been shown above. Let X = R^n be an action space, f : X → R ∪ {+∞} a proper, lower semicontinuous function bounded from below, and consider the following problems.
• The fixed payoff and decision set optimization problem (1):

PROBLEM 1: inf { f(y) : y ∈ X }.

This problem is the substantive (global) rationality minimization problem.
• The fixed payoff and fixed decision set exact proximal algorithm (2):

PROBLEM 2: inf { f(y) + λ_k Γ[q(x_k, y)] : y ∈ X },

where x_k ∈ X and λ_k ∈ R_{++} are given for each k ∈ N, q : X × X → R_+ is a quasi-distance and Γ : R → R represents the relative resistance to change. This problem is the exact version of our inexact proximal problem with relative resistance to change. It is a repeated optimization problem. A small numerical sketch is given below.
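The following is a minimal numerical sketch of the exact proximal iteration of PROBLEM 2 on a discretized action space: x_{k+1} minimizes f(y) + λ_k Γ[q(x_k, y)]. The asymmetric quasi-distance q (moving right is costlier than moving left) and the choice Γ(u) = u are illustrative, not the paper's:

```python
# Exact proximal point iteration with a quasi-distance and resistance Gamma.
import numpy as np

X = np.linspace(-3, 3, 601)                       # discretized action space
f = lambda y: (y - 1.0) ** 2 + 0.3 * np.sin(5 * y)
q = lambda x, y: np.where(y >= x, 2.0 * (y - x), x - y)  # quasi-distance
Gamma = lambda u: u                               # relative resistance to change

x, lam = -2.5, 1.0
for k in range(20):
    vals = f(X) + lam * Gamma(q(x, X))            # current proximal payoff
    x = X[np.argmin(vals)]                        # exact proximal step
print("trap reached near x =", x)
```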
• The variable payoffs and fixed decision set problem (2'): here the variable payoff function is P_{λ_k}(x_k, y) = f(y) + λ_k Γ[q(x_k, y)], which shows that this problem is equivalent to Problem 2. It represents a variable and experience dependent payoff problem with a fixed decision set. It allows us to define a course pursuit problem with variable preferences.
• Variable payoffs and variable decision set problems. Here, the current worthwhile to change set is W_{λ_k}(x_k) = { y ∈ X : f(x_k) − f(y) ≥ λ_k Γ[q(x_k, y)] }, i.e., the set of actions worth moving to from x_k.
Proof. Take x_2^k ∈ argmin_{y ∈ X} { f(y) + λ_k Γ[q(x_k, y)] }. Taking into account that Γ[q(x_k, x_k)] = 0, from the definition of W_{λ_k}(x_k) it follows immediately that x_2^k ∈ W_{λ_k}(x_k). Now, take any y ∈ W_{λ_k}(x_k); it is easy to see that f(x_2^k) + λ_k Γ[q(x_k, x_2^k)] ≤ f(y) + λ_k Γ[q(x_k, y)]. On the other hand, since ..., the result follows.
Remark 2. This lemma shows that our inexact proximal algorithm is a changeable payoff and decision set process which belongs to the class of "decision making and satisficing (not necessarily optimizing) problems in changeable spaces", where the changeable payoff is P_{λ_k}(x_k, y) = f(y) + λ_k Γ[q(x_k, y)] and the changeable decision set is the current worthwhile to change set W_{λ_k}(x_k).
Inexact proximal algorithms as adaptive satisficing dynamics with variable payoffs and changeable spaces. If the agent follows an inexact proximal algorithm, he chooses to perform, at each step k, a worthwhile change x^k ↷ x^{k+1} ∈ W_{λ_k}(x^k). This inexact proximal algorithm (with relative resistance to change) introduces a number of simplifications, as a reduced form of the variational rationality model.
Here η_k > 0 is an adaptive worthwhile-to-change satisficing ratio, which can be changed from period k to period k + 1. If we write λ_k = η_k / v(E^k), then a worthwhile change x^k ↷ y is defined by g(y) − g(x^k) ≥ λ_k Γ[q(x^k, y)], in terms of separable, experience-dependent unsatisfied need functions. The topic of our paper is not exact proximal algorithms, but inexact ones. The inexact proximal algorithms examined in this paper represent adaptive satisficing dynamics (dealing with changeable satisficing levels), variable and experience-dependent preferences, and changeable decision sets, which belong to the class of decision making problems with changeable spaces and changeable goals, denoted DMCSCG problems. Let us show this important point, which helps the comparison with habitual domain theory and DMCS decision making problems with changeable spaces (Larbani, Yu, 2012). Our VR point of view is the following. An inexact proximal algorithm is a specific instance of a VR worthwhile-to-stay-and-change dynamic x^{k+1} ∈ W_{e^k, ξ_{k+1}}(x^k), k ∈ N. This dynamic is both satisficing and considers changeable goals and decision sets (a numerical sketch follows this list). i) The dynamic is satisficing because the worthwhile-to-change condition generalizes the Simon (1955) definition in a dynamical context, balancing satisfactions to change with sacrifices to change. The moving goal is, each period, the chosen worthwhile-to-change satisficing level ξ_{k+1} > 0.
ii) The dynamic is adaptive and considers changeable decision sets W_{e^k, ξ_{k+1}}(x^k). Each period, the agent can choose how worthwhile a change must be (the size of ξ_{k+1}) in order to accept to change. Then the worthwhile-to-change set is chosen each period.
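A minimal sketch of one adaptive satisficing step, under the same illustrative f, q, and Γ as in the previous sketch: a proposed move is accepted as soon as it is worthwhile, with no global optimization.

import random

def f(y): return (y - 3.0) ** 2 + 1.0
def q(x, y): return max(y - x, 0.0) + 2.0 * max(x - y, 0.0)
def gamma(c): return c ** 2

def worthwhile_step(x_k, lam_k, scale=1.0, tries=50):
    """Accept the first sampled move whose advantage to change covers
    the weighted resistance to change; otherwise stay (temporary stay)."""
    for _ in range(tries):
        y = x_k + random.gauss(0.0, scale)
        if f(x_k) - f(y) >= lam_k * gamma(q(x_k, y)):
            return y          # a worthwhile (inexact, non-optimal) change
    return x_k                # no worthwhile change found this period

x = 0.0
for k in range(200):
    x = worthwhile_step(x, lam_k=0.5)
print(x)  # drifts toward the minimizer without ever optimizing globally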
Local inexact proximal algorithms. For each fixed k ∈ N, let X^k ⊂ X, x^k ∈ X, λ_k ∈ R_{++}, and consider the problem inf{f(y) + λ_k Γ[q(x^k, y)] : y ∈ X^k}. This is a variable preference proximal problem with a changeable feasibility space X^k ⊂ X. In Attouch, Soubeyran (2010) it is a moving ball X^k = B_r(x^k) of constant radius r > 0. We can define the indicator function I_{X^k} and consider the changeable payoffs and decision spaces proximal problem inf{f(y) + I_{X^k}(y) + λ_k Γ[q(x^k, y)] : y ∈ X}.
Bento et al. (2013) have also examined an exact local search multiobjective proximal problem, where X^k is a lower contour set of the multiobjective function. Now let us consider "fixed payoff and variable decision set problems", namely PROBLEM 5: inf{f(y) : y ∈ W_{λ_k}(x^k)}. Let {x_5^k} be a sequence generated by this iterative process; by definition, the agent follows, at each step, a worthwhile change.
Lemma 3. Let {x_2^k} be a sequence generated by the iterative process of PROBLEM 2 such that {f(x_2^k)} converges to f̄ := inf{f(y) : y ∈ X}. Then the sequence {x_5^k} is a minimizing sequence for PROBLEM 1.
Proof. Take k ∈ N arbitrary. Note that x_2^k ∈ W_{λ_k}(x^k) (see the proof of Lemma 2). It is easy to see that f(x_5^k) ≤ f(x_2^k), i.e., each solution of PROBLEM 2 gives a majorizing estimate f(x_2^k) of the minimal payoff f(x_5^k) of PROBLEM 5. Now, from the definition of f̄, we have f̄ ≤ f(x_5^k) ≤ f(x_2^k). (1) Therefore, the desired result follows from (1) together with the fact that {x_2^k} is a minimizing sequence for PROBLEM 1.
Mathematical stability issues. 1) Stability of variational traps. Let λ_k = η_k / v(E^k) be the stage-k proximal ratio, where η_k > 0 is the stage-k worthwhile-to-change ratio and v(E^k) > 0 is the experience rate of influence at stage k. Let λ* = η*/v(E*) be the limit proximal ratio. It is easy to show that if x* ∈ X is a variational trap for the proximal ratio λ*, then x* is also a variational trap for any higher proximal ratio λ > λ* (the implication is written out below). A higher proximal ratio λ = η/v(E) > λ* = η*/v(E*) means a higher worthwhile-to-change ratio η > η* or (and) a lower experience rate of influence v(E) < v(E*). It can also mean higher costs to be able to change parameters and, each period, a longer exploitation phase or a shorter exploration phase (Soubeyran, 2009). For example, stability issues with respect to adaptive parameters like λ_k ≥ λ_∞ > 0, where λ_k represents a changing preference, resistance to change, costs to be able to change, and the length of the exploitation period relative to the exploration period, will be examined elsewhere. 2) Stability of worthwhile-to-change trajectories. For an exact quadratic proximal problem this is a well-known result: the proximal mapping p is well defined (recall that f was assumed to be bounded from below) and, if f is convex, it is nonexpansive, i.e., ||p(x) − p(x′)|| ≤ ||x − x′|| for all x, x′ ∈ X. For an inexact proximal algorithm with a non-quadratic regularization term and a nonconvex f, this is a difficult open question.
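The implication behind the trap monotonicity can be written out as follows; this is a hedged reconstruction using the separable worthwhile-to-change condition f(x) − f(y) ≥ λ Γ[q(x, y)] from this paper.

% If no change is worthwhile at x^* for the ratio \lambda^*,
% none is worthwhile for any higher ratio \lambda > \lambda^*.
\[
\Big( f(x^*) - f(y) < \lambda^* \,\Gamma\!\big[q(x^*, y)\big]
      \ \ \forall\, y \neq x^* \Big)
\;\Longrightarrow\;
\Big( f(x^*) - f(y) < \lambda \,\Gamma\!\big[q(x^*, y)\big]
      \ \ \forall\, y \neq x^* \Big)
\quad \text{for all } \lambda > \lambda^*,
\]
since $\Gamma[q(x^*, y)] > 0$ for $y \neq x^*$ (resistance to change is positive for genuine moves).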
Habit and Routine Formation: Inexact Proximal Processes with Weak Resistance to Change
In this section (see the arXiv version of our paper, Bento, Soubeyran, 2013), we detail how our inexact proximal algorithm models habit/routine formation and breaking, using the VR modeling of resistance to change.
Comparing worthwhile change and stay processes, inexact proximal algorithms, and habituation-routinization processes. As an application, we will consider habit/routine formation as an inexact proximal algorithm in the context of weak resistance to change. Because of its strongly interdisciplinary aspect (mathematics, psychology, economics, management), this application needs several steps to be carefully justified, because we need to compare three different processes: first, a general worthwhile change and stay process; then, as two specific instances, an inexact proximal algorithm and a habituation/routinization process. The comparison must consider three aspects: a dynamical system (which one?), which converges (how?), to an end point (which one?). • Step 1. At the mathematical level, an exact proximal algorithm represents a dynamical system which, at each step, minimizes a proximal payoff f + λ_k q(x^k, ·)² over the whole space X. The perturbation term is λ_k q(x^k, y)², λ_k > 0, y ∈ X, while the payoff function is f. An inexact proximal algorithm represents a dynamical process which, at each step, uses a descent condition and a temporary stopping rule. In both cases, the problem is to give conditions under which the process converges to a limit point. Then an exact or inexact proximal algorithm considers three points: i) a dynamical process, representing the proximal subproblem, which can be the minimization of the current proximal payoff (for an exact proximal algorithm) or a descent condition, i.e., a sufficient decrease of the proximal payoff (in the inexact case); ii) the existence of some end points, which may or may not be critical points, local minima, or global minima of f; iii) the convergence of the process towards an end point. This convergence can be linear, quadratic, etc.
• Step 2. At the behavioral level, in the context of his variational rationality approach, Soubeyran (2009) has examined "worthwhile change and stay" processes. Such dynamical processes refer to successions of temporary worthwhile changes and worthwhile stays. The end points are variational traps. The convergence of the process materializes in small steps whose length goes to zero. Recall that, in this behavioral context, end points represent traps (reachable, i.e., more or less easy to reach, but difficult to leave). Worthwhile changes balance, at each step, motivation and resistance to change forces. The motivation force is the utility U[A(x, y)] of the advantages to change function A(x, y) = f(x) − f(y), where f represents an unsatisfied need to be minimized. The resistance to change force is the disutility D[I(x, y)] of the inconveniences of becoming able to change, I(x, y) = q(x, y), where q(x, y) is a quasi-distance. In the context of this paper, the variational approach considers the relative resistance to change, or aversion to change, function Γ[q(x, y)] = U⁻¹[D[q(x, y)]]. This shows that the perturbation term of an exact or inexact proximal algorithm is a specific instance of a relative resistance to change function Γ[q(x, y)]. Moreno et al. (2011) considered the specific quadratic case of weak resistance to change, where Γ[q(x, y)] = q(x, y)². Then, at the behavioral level, two main concepts, among others, drive a "worthwhile changes and stays" process: i) the unsatisfied need function f (which materializes the motivation to change) and ii) inertia (the relative resistance to change function Γ[q(x, y)]). • Step 3. Habituation/routinization processes. A very short survey has been given in the previous section on behavioral stability issues: how habits and routines form and break. These processes consider i) a repetitive process where an action is repeated again and again in the same recurrent context; ii) a final stage where this action becomes a permanent habit; iii) the convergent process, which describes a slow habituation learning process where this action, repeated again and again in the same recurrent context, becomes gradually automatized. • Step 4. It then becomes clear that habituation/routinization processes are specific instances of worthwhile change and stay processes. Unsatisfied needs and inertia play a major role. • Step 5. In our paper we have shown how both exact and inexact proximal algorithms are specific instances of "worthwhile change and stay" processes, because: i) minimization of the current proximal payoff, descent conditions, and current stopping rules are special cases of worthwhile changes and marginal worthwhile stays; ii) critical points and local and global minima are specific representations of variational traps; iii) the convergence of the proximal algorithm shows how proximal worthwhile changes converge, depending on the shape of the payoff function (which can be convex, lower semicontinuous, or satisfy a Kurdyka-Łojasiewicz inequality, ...) and on the shape of the perturbation term (linear, convex, ..., with respect to a distance or quasi-distance).
Why resistance to change matters. The benchmark case of lower semicontinuous unsatisfied need functions f and strong resistance to change functions Γ[q(x, y)] (where costs to be able to change are higher than a quasi-distance) has been examined by Soubeyran (2009), who considered worthwhile change and stay processes. It has been shown that when a worthwhile change and stay process converges to a variational trap, this variational formulation offers a model of habit/routine formation: a permanent habit/routine is the end point of a convergent path of worthwhile changes and temporary habits where, for a moment, there is no way to perform any other worthwhile change, except repetitions. The opposite case of weak resistance to change was left open. In the context of an exact proximal algorithm, Moreno et al. (2011) examined a specific case of weak resistance to change, namely the quadratic case Γ[q(x, y)] = q(x, y)². However, exact proximal algorithms represent a very specific case of worthwhile changes where, at each step, the descent condition is optimal. This means that, at each step, the process minimizes the proximal payoff f + λ_k Γ[q(x^k, ·)] over the whole space X. Such optimizing worthwhile changes are not step-by-step economizing behaviors, because they require exploring the whole state space again and again at each step. The present paper considers the generalized weak resistance to change case in the context of an inexact proximal algorithm instead of an exact one. In both papers the unsatisfied need function f satisfies a Kurdyka-Łojasiewicz inequality. How the strength of resistance to change impacts the speed of habit/routine formation is, as an application, the topic of a related paper.
To summarize, we have compared an inexact generalized proximal algorithm with a worthwhile change and stay process with respect to three aspects: A) as a dynamical system, B) with an end point, C) which converges to that end point. To end this paper, it remains to compare an inexact generalized proximal algorithm with a habituation and routinization process, as described in psychology and management sciences, using the same three criteria.
Inexact generalized proximal algorithms and habituation/routinization processes are dynamical systems.
• Habituation/routinization processes. They represent the repetition of an action of a given kind (some activity related to a given goal) in order to satisfy a recurrent unsatisfied need in a stable context. The repetition concerns the action, and what becomes more and more the same is "the way of doing it" (the script). This repetition follows a succession of worthwhile changes and stays. An inexact proximal algorithm represents a step-by-step process, a succession of moves meant to "decrease enough" some proximal payoff function. Usually, both dynamical processes are unable to reach the goal in one step. At each step, the level of satisfaction of the recurrent need increases, but some dissatisfaction remains.
A habituation process is driven by two balancing forces: a motivation to change function M[A(x, y)] (a habit/routine must serve us) and a resistance to change function D[I(x, y)] (habits/routines are hard to form and hard to break because learning and unlearning are costly). An inexact proximal algorithm is driven by the two terms f(y) and Γ[q(x, y)] of its proximal payoff f(y) + λ_k Γ[q(x, y)], where q(x, y) = C(x, y). This balance describes the goal-habit interface.
The rationality of a habituation process is to improve by repetition the way of doing a similar action in the same context. The agent improves while facing costs to change. He satisfices, performing worthwhile changes without exploring too much at each step (local consideration and exploration; see Soubeyran (2009) for this important aspect). An inexact proximal algorithm follows, at each step, some descent condition and marginal stopping rule, without optimizing at each step.
The influence of the past differs from one process to the other. For a habituation process the impact of the past can be very important (the past sequence matters much). For an inexact proximal algorithm it is as if only the last action matters: the influence of the past is minimal (as if the agent has a short memory). The influence of the future seems identical in both cases: myopia seems to be the rule, and only the next action matters. Agents' behaviors driven by habits/routines are not forward-looking; for more forward-looking worthwhile-to-change behaviors, see Soubeyran (2009). Convergence: see the paragraph above ("Why resistance to change matters"). End points: our inexact proximal algorithm converges to a critical point, which is not an end point unless it can be shown that the critical point is a variational trap (as is done in our paper). A variational trap is worthwhile to reach and not worthwhile to leave. A habituation process ends in a permanent habit/routine which is hard to form and hard to break; it represents the vestige of past repeated behavior.
Making the assumptions of the proximal algorithms clear in behavioral terms
We have to show how, in behavioral sciences, our three proximal algorithms model, at least in reduced form, how habits form and break. A first step was given before, in Section 7. This has been done in five steps. We have shown that inexact proximal algorithms and habitual processes are dynamical systems, we have compared their end points (critical points or variational traps), we have examined how the strength of resistance to change influences their ability to converge, and we have linked resistance to change to loss aversion, a famous behavioral concept.
A second step is to detail the behavioral content of all the hypotheses that drive our three proximal algorithms. There are general behavioral hypotheses common to the three proximal algorithms, and specific hypotheses relative to each of them. More explicitly, the three algorithms suppose that costs to be able to change are quasi-distances. They consider, at each step, worthwhile and marginally worthwhile changes (descent conditions), and suppose weak resistance to change as well as a marginal stopping rule. The first algorithm is targeted to converge to a critical point; the second and third algorithms are targeted to converge to a variational trap.
General behavioral hypotheses
• H.1. Costs to be able to change C are modeled as quasi-distances (a numerical illustration of these axioms follows the list of hypotheses below). For an agent, the cost to be able to change C(x, y) = q(x, y) refers to the infimum of the costs of changing his capabilities, from having the capability to do an action x to having the capability to do an action y. Then C(x, x) = 0 means that if the agent is able to do an action x, he can keep this capability at no cost. The condition C(x, y) = 0 ⟺ y = x means that if the agent is able to move at no cost, then he can only repeat the same action, provided he is able to do it. The triangle inequality C(x, z) ≤ C(x, y) + C(y, z) for all x, y, z ∈ X means that the infimum cost of changing capabilities from the initial capability to do an action x to the final capability to do an action z is lower than the cost of first changing from the capability to do x to the intermediate capability to do an action y and then from this intermediate capability to the final capability to do z, because changing successively through an intermediate capability is an indirect way of changing from the initial to the final capability.
• H.2. The unsatisfied need function f : R^n → R ∪ {+∞} is a proper, lower semicontinuous function. This is a regularity assumption which supposes no free lunch: the agent cannot reduce his unsatisfied need by a given small amount without changing his action enough (no downward jumps are allowed).
• H.3. Advantages to change A(x, y) = f (x) − f (y) refer to separable advantages to change functions, linked, when positive, to a decrease in unsatisfied needs.
• H.4. The ratio ξ_k > 0 models how worthwhile a change must be (the adaptive satisficing case). This ratio can change from period to period, meaning that the agent can adapt his degree of satisficing at each step, with or without delay. In this paper he adapts with a one-period delay (writing ξ_k > 0 instead of ξ_{k+1} > 0, to fit the proximal formulation).
• H.5. The utility function U[·] is invertible with U[0] = 0. This is the case for a strictly increasing utility function of the advantages to change (the usual case).
• Condition (4) supposes that Γ(C) = U⁻¹[D[C]] is twice differentiable with respect to C. It is a regularity condition on the relative resistance to change function.
• Condition (7) means that the relative resistance to change function is "flat enough in the small" (the "weak enough resistance to change" case). It supposes that the marginal relative resistance to change is low enough relative to the mean relative resistance to change, at least for low enough costs to be able to change.
• Assumption 3.1 supposes that costs to be able to change are high (low) enough iff the old and new actions are different (similar) enough.
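The numerical illustration of the quasi-distance axioms in H.1 announced above, for an assumed asymmetric cost in which moving "up" is cheaper than moving "down" (all concrete values are illustrative):

import itertools, random

# Check that q(x, y) = alpha*max(y - x, 0) + beta*max(x - y, 0),
# with alpha, beta > 0, satisfies the quasi-distance axioms of H.1.
def q(x, y, alpha=1.0, beta=3.0):
    return alpha * max(y - x, 0.0) + beta * max(x - y, 0.0)

pts = [random.uniform(-5, 5) for _ in range(50)]
assert all(q(x, x) == 0.0 for x in pts)                        # C(x, x) = 0
assert all(q(x, y) > 0.0 for x in pts for y in pts if x != y)  # separation
assert all(q(x, z) <= q(x, y) + q(y, z) + 1e-12                # triangle inequality
           for x, y, z in itertools.product(pts, repeat=3))
print("axioms hold; note the asymmetry q(0, 1) != q(1, 0):", q(0, 1), q(1, 0))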
Hypotheses relative to Algorithm 3.1. Algorithm 3.1 supposes three conditions, (20), (21), and (22). Condition (20) imposes worthwhile changes (a sufficient descent assumption) along the process. Condition (21) defines subgradients for the unsatisfied need and costs to be able to change functions. Condition (22) is, at each step, a stopping rule, where the norm of the marginal decrease of the unsatisfied need is lower than the norm of the marginal relative resistance to change. The Kurdyka-Łojasiewicz inequality (Definition 3.2) refers to a curvature property of the unsatisfied need function near a critical point: the unsatisfied need must be lower than some increasing function of the marginal unsatisfied need close to a critical point. Assumption 3.2 supposes that the marginal relative resistance to change function is bounded by a power function near the origin; it is a curvature property.
Condition (53) supposes that the worthwhile-to-change satisficing ratio λ_k > 0 converges along the process.
Hypotheses relative to Algorithm 4.1. This algorithm supposes three conditions, (54), (55), and (56). Condition (54) is a modified worthwhile-to-change assumption. Condition (55) defines subgradients for the unsatisfied need and costs to be able to change functions. Condition (56) is the stopping rule condition (22). This algorithm adopts all the hypotheses of Algorithm 3.1.
Normality preserving operations for Cantor series expansions and associated fractals part I
It is well known that rational multiplication preserves normality in base $b$. We study related normality preserving operations for the $Q$-Cantor series expansions. In particular, we show that while integer multiplication preserves $Q$-distribution normality, it fails to preserve $Q$-normality in a particularly strong manner. We also show that $Q$-distribution normality is not preserved by non-integer rational multiplication on a set of zero measure and full Hausdorff dimension.
Introduction
Let N(b) be the set of numbers normal in base b and let f be a function from R to R. We say that f preserves b-normality if f (N(b)) ⊆ N(b). We can make a similar definition for preserving normality with respect to the regular continued fraction expansion, β-expansions, the Lüroth series expansion, etc.
For a real number r, define real functions π_r and σ_r by π_r(x) = rx and σ_r(x) = r + x. In 1949, D. D. Wall proved in his Ph.D. thesis [29] that for non-zero rational r the function π_r is b-normality preserving for all b, and that the function σ_r is b-normality preserving for all b whenever r is rational. These results were also independently proven by K. T. Chang in 1976 [7]. D. D. Wall's method relies on the well-known characterization, which he also proved in his Ph.D. thesis, that a real number x is normal in base b if and only if the sequence (b^n x) is uniformly distributed mod 1. D. Doty, J. H. Lutz, and S. Nandakumar took a substantially different approach from D. D. Wall and strengthened his result. They proved in [8] that for every real number x and every non-zero rational number r, the b-ary expansions of x, π_r(x), and σ_r(x) all have the same finite-state dimension and the same finite-state strong dimension. It follows that π_r and σ_r preserve b-normality. It should be noted that their proof uses different methods from those used by D. D. Wall, and their result is unlikely to be provable using similar machinery.
Research of the authors is partially supported by the U.S. NSF grant DMS-0943870. We would like to thank Samuel Roth for posing the problem that led to Theorem 3.6 to the second author at the 2012 RTG conference Logic, Dynamics and Their Interactions, with a Celebration of the Work of Dan Mauldin, in Denton, Texas. He asked whether x ∈ N(Q) ∩ DN(Q) implies that nx ∈ N(Q) for all natural numbers n. We thank Martin Sleziak for pointing us in a direction that led to the paper [22]; this paper helped us prove a stronger version of Theorem 3.6. C. Aistleitner generalized D. D. Wall's result on σ_r. Suppose that q is a rational number and that the digits of the b-ary expansion of z are non-zero on a set of indices of density zero. In [4] he proved that the function σ_qz is b-normality preserving. We will show, as a consequence of Theorem 3.1, that C. Aistleitner's result does not generalize to at least one notion of normality for some of the Cantor series expansions.
There are still many open questions relating to the functions π_r and σ_r. For example, M. Mendès France asked in [20] whether the function π_r preserves simple normality with respect to the regular continued fraction for every non-zero rational r. The authors are unaware of any theorems stating that either π_r or σ_r preserves any form of normality other than b-normality.
We will focus on the normality preserving properties of π_r for the Q-Cantor series expansion, as well as two other related functions. We will show that while π_r is Q-distribution normality preserving for every non-zero integer r, the set of x where π_r(x) is not Q-distribution normal has full Hausdorff dimension whenever r ∈ Q∖Z and Q is infinite in limit. Our main theorem will show that the function π_r is so far from preserving Q-normality that there exist basic sequences Q and real numbers x that are Q-normal and Q-distribution normal where π_r(x) is not Q-normal for every integer r ≥ 2. In the sequel to this paper [3], the authors and J. Vandehey prove that for a class of basic sequences Q, the set of real numbers x where π_r(x) is Q-normal for all non-zero rationals r but where x is not Q-distribution normal has full Hausdorff dimension.
Cantor series expansions
The study of normal numbers and other statistical properties of real numbers with respect to large classes of Cantor series expansions was first done by P. Erdős and A. Rényi in [9] and [10] and by A. Rényi in [23], [24], and [25] and by P. Turán in [27].
The Q-Cantor series expansions, first studied by G. Cantor in [6], are a natural generalization of the b-ary expansions.¹ Let N_k := Z ∩ [k, ∞). If Q ∈ N_2^N, then we say that Q is a basic sequence. Given a basic sequence Q = (q_n)_{n=1}^∞, the Q-Cantor series expansion of a real number x is the (unique)² expansion of the form

x = E_0 + Σ_{n=1}^∞ E_n / (q_1 q_2 ... q_n),    (2.1)

where E_0 = ⌊x⌋ and E_n is in {0, 1, ..., q_n − 1} for n ≥ 1 with E_n ≠ q_n − 1 infinitely often. We abbreviate (2.1) with the notation x = E_0.E_1E_2... w.r.t. Q. A block is an ordered tuple of non-negative integers, a block of length k is an ordered k-tuple of integers, and a block of length k in base b is an ordered k-tuple of integers in {0, 1, ..., b − 1}. Let N_n^Q(B, x) denote the number of occurrences of the block B among the first n digits of the Q-Cantor series expansion of x.

¹ Cantor's motivation to study the Cantor series expansions was to extend the well-known proof of the irrationality of the number e = Σ 1/n! to a larger class of numbers. Results along these lines may be found in the monograph of J. Galambos [13].
² Uniqueness can be proven in the same way as for the b-ary expansions.
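As a concrete illustration of expansion (2.1), the following short sketch computes Q-Cantor digits; the sample x and basic sequence Q are assumptions chosen only for illustration.

from fractions import Fraction

# Compute the first digits E_1, E_2, ... of the Q-Cantor series expansion
# of x in [0, 1). Exact rational arithmetic avoids floating-point drift.
def q_cantor_digits(x, Q):
    """Yield E_1, E_2, ... with x = sum_n E_n / (q_1 q_2 ... q_n)."""
    x = Fraction(x)
    for q in Q:
        x *= q
        E = int(x)      # floor, since x stays in [0, 1) before scaling
        yield E
        x -= E

Q = [n + 2 for n in range(10)]              # q_n = n + 1, starting at q_1 = 2
print(list(q_cantor_digits(Fraction(1, 3), Q)))   # [0, 2, 0, 0, ...]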
A. Rényi [24] defined a real number x to be normal with respect to Q if for all blocks B of length 1,

lim_{n→∞} N_n^Q(B, x) / (Σ_{j=1}^n 1/q_j) = 1.    (2.2)

If q_n = b for all n and we restrict B to consist only of digits less than b, then (2.2) is equivalent to simple normality in base b, but not equivalent to normality in base b. We let N_k(Q) be the set of numbers that are Q-normal of order k. The real number x is Q-distribution normal if the sequence (T_{Q,n}(x)) is uniformly distributed mod 1, where T_{Q,n}(x) = q_1 q_2 ... q_n x mod 1; DN(Q) denotes the set of Q-distribution normal numbers. It follows from a well-known result of H. Weyl [30, 31] that DN(Q) is a set of full Lebesgue measure for every basic sequence Q. We will need the following result of the second author [19] later in this paper.
Note that in base b, where q_n = b for all n, the corresponding notions of Q-normality and Q-distribution normality are equivalent. This equivalence is fundamental in the study of normality in base b.
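A quick empirical companion to the ratio in (2.2): the sketch below counts occurrences of a single digit for a toy x and a constant Q (both assumptions). Since the base-3 digits of 1/7 repeat 0, 1, 0, 2, 1, 2, each digit occurs with frequency 1/3, so the ratio approaches 1.

from fractions import Fraction
from itertools import islice

def q_cantor_digits(x, Q):
    x = Fraction(x)
    for q in Q:
        x *= q
        E = int(x)
        yield E
        x -= E

def normality_ratio(x, Q, b, n):
    """Estimate N_n^Q(b, x) / sum_{j<=n} 1/q_j for the single digit b."""
    count = sum(1 for E in islice(q_cantor_digits(x, Q), n) if E == b)
    return count / sum(1.0 / q for q in Q[:n])

n = 2000
Q = [3] * n                      # constant base q_n = 3
print(normality_ratio(Fraction(1, 7), Q, b=0, n=n))  # close to 1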
Another definition of normality, Q-ratio normality, has also been studied. We do not introduce this notion here, as this set contains the set of Q-normal numbers, and all results in this paper that hold for Q-normal numbers also hold for Q-ratio normal numbers. The complete containment relation between the sets of these normal numbers and pairwise intersections thereof is proven in [18]. The Hausdorff dimensions of difference sets such as RN(Q) ∩ DN(Q)∖N(Q) are computed in [2].
A surprising property of Q-normality of order k is that we may not conclude that N_k(Q) ⊆ N_j(Q) for all j < k, as we may for the b-ary expansions. In fact, it was shown in [17] that for every k there exist a basic sequence Q and a real number witnessing that N_k(Q)∖N_j(Q) is non-empty for j < k. Thus, rather than showing that some functions do not preserve Q-normality of order k, we will show that they do not preserve Q-normality of any order. We will always demonstrate numbers not Q-normal of any order that either have at most finitely many copies of the digit 0 or of the digit 1 in their Q-Cantor series expansion.
Results
We note the following theorem, which may be stated in terms of Q-normality preserving functions; we present it in its current form for simplicity.
Theorem 3.1. Suppose that x = E_0.E_1E_2... w.r.t. Q is Q-normal. Then there exists a real number y = F_0.F_1F_2... w.r.t. Q with E_n ≠ F_n only on a set of indices of density zero and y ∉ ⋃_{j=1}^∞ N_j(Q). Theorem 3.1 may be proven by changing all the digits of a Q-normal number that are equal to 0 to 1. This shows that C. Aistleitner's result does not generalize to Q-normality for all Q-Cantor series expansions: let q = 1, let z = 0.G_1G_2... w.r.t. Q, where G_n is equal to 1 along each of these indices and 0 otherwise, and set y = x + z.
Together, Theorem 3.1 and Theorem 3.2 suggest that Q-distribution normality is a far more robust notion than Q-normality. One might hope to give an example demonstrating merely that distribution normality is not preserved in all bases by rational multiplication; however, a much stronger result holds. Theorem 3.5. For all basic sequences Q, Q-distribution normality is preserved by non-zero integer multiplication. If Q is infinite in limit and r ∈ Q∖Z, then the set of x ∈ DN(Q) with π_r(x) ∉ DN(Q) has zero Lebesgue measure (3.3) and full Hausdorff dimension (3.4). We define an equivalence relation ∼ on the set of basic sequences as follows: if P = (p_n) and Q = (q_n) are basic sequences, then we write P ∼ Q if p_n ≠ q_n only on a set of density zero.
Suppose that Q is infinite in limit and fully divergent. A nonempty subset S_Q ⊆ N(Q)∖DN(Q) was shown to exist in Theorem 3.12 of [18]. The members of S_Q have the following property: if x ∈ S_Q, then for any integer n ≥ 2, the real number π_n(x) is not Q-distribution normal and not Q-normal of any order. Since Q-distribution normality is preserved by integer multiplication, it is natural to ask whether there are basic sequences Q and real numbers x that are Q-normal and Q-distribution normal and such that for every integer n ≥ 2 the number π_n(x) is not Q-normal of any order. The following theorem answers this question.
Theorem 3.6. Let k ∈ N. If P = (p_n) is eventually non-decreasing, infinite in limit, and k-divergent (resp. fully divergent), then there exist a basic sequence Q = (q_n) and a real number x for which the following hold.
(1) The basic sequence Q is infinite in limit, k-divergent (resp. fully divergent), and P ∼ Q.
(2) The real number x is Q-normal of all orders 1 through k (resp. Q-normal) and Q-distribution normal. (3) For every integer n ≥ 2, the real number π n (x) is not Q-normal of any order.
For a sequence of real numbers X = (x_n) with x_n ∈ [0, 1) and an interval I ⊆ [0, 1], define A_n(I, X) = #{i ≤ n : x_i ∈ I}. We will first prove Theorem 3.2. To do this we will need the following standard definition and lemma, which we quote from [16].
Definition 3.7. Let X = (x_1, ..., x_N) be a finite sequence of real numbers. The number

D_N(X) = sup_{0 ≤ a < b ≤ 1} | A_N([a, b), X)/N − (b − a) |

is called the discrepancy of the sequence X.
It is well known that a sequence X is uniformly distributed mod 1 if and only if D N (X) → 0.
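Since Definition 3.7 is awkward to evaluate directly, the following sketch computes the closely related star discrepancy D*_N via the standard sorted-sample formula; since D*_N ≤ D_N ≤ 2 D*_N, either quantity detects uniform distribution mod 1. The van der Corput sequence is used as a test input (an illustrative choice).

import numpy as np

def star_discrepancy(xs):
    """D*_N of a point set in [0, 1), from the sorted-sample closed form."""
    xs = np.sort(np.asarray(xs, dtype=float))
    N = len(xs)
    i = np.arange(1, N + 1)
    return max(np.max(i / N - xs), np.max(xs - (i - 1) / N))

def van_der_corput(n, base=2):
    """n-th van der Corput point: digit reversal of n in the given base."""
    v, f = 0.0, 1.0 / base
    while n > 0:
        n, r = divmod(n, base)
        v += r * f
        f /= base
    return v

pts = [van_der_corput(n) for n in range(1, 513)]
print(star_discrepancy(pts))   # small, consistent with u.d. mod 1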
Proof of Theorem 3.2. Set ε_i = 1/i and define the sets S(ε_i) of exceptional indices. Note that N_i is well defined, since the density of the sets S(ε_i) must be 0 by (3.2). Then for any n > N_i, Lemma 3.8 bounds the difference of discrepancies |D_n(x_1, ..., x_n) − D_n(y_1, ..., y_n)| by a quantity that vanishes as i → ∞. Thus lim_{n→∞} |D_n(x_1, ..., x_n) − D_n(y_1, ..., y_n)| = 0. This implies that y is Q-distribution normal if and only if x is.
For the remainder of this paper, let E_{R,i}(ξ) be the i-th digit of the R-Cantor series expansion of ξ.
Proof of Theorem 3.5. The first part is trivial, as any non-zero integer multiple of a uniformly distributed sequence is uniformly distributed mod 1. It is well known that locally Lipschitz functions preserve null sets. Clearly π_r is locally Lipschitz, so (3.3) holds.
Let r = a/b ∈ Q∖Z for relatively prime integers a and b. We now wish to show (3.4). Let (x_n) be a sequence of real numbers that is uniformly distributed mod 1. Set

f(n) = min{log q_n, log(q_1 q_2 ... q_{n−1})} / log q_n,

and let ω(n) be an associated integer sequence with lim_{n→∞} ω(n)/q_n = 0. Define intervals I_n of about ω(n) multiples of b/q_n around the points x_n, and consider the set Φ_{Q,b} of real numbers whose digits satisfy E_n/q_n ∈ I_n with b | E_n for all n. Note that for any x ∈ Φ_{Q,b} we have lim_{n→∞} (E_n/q_n − x_n) = 0, since ω(n)/q_n → 0. This implies that (E_n/q_n) is uniformly distributed mod 1, so the sequence (T_{Q,n}(x)) is as well. Therefore Φ_{Q,b} ⊆ DN(Q). Furthermore, every digit of x ∈ Φ_{Q,b} is divisible by b. Let c ≡ a (mod b). The digits of π_r(x) are then constrained modulo b in a way that prevents (T_{Q,n}(π_r(x))) from equidistributing, so π_r(x) ∉ DN(Q). Following the notation of [11], Φ_{Q,b} is a homogeneous Moran set with c_k = 1/q_k and n_k = ω(k). By Theorem 2.1 in [11], we have dim_H(Φ_{Q,b}) = 1. We may now turn our attention to Theorem 3.6. Let (P, Q) ∈ N_2^N × N_2^N and suppose that x = E_0.E_1E_2... w.r.t. P. We define ψ_{P,Q} : R → [0, 1] by

ψ_{P,Q}(x) = Σ_{n=1}^∞ E_n / (q_1 q_2 ... q_n).

The following theorem of [18] will be critical in proving Theorem 3.6.
Lemma 3.11. Let L be a real number and let (a_n)_{n=1}^∞ and (b_n)_{n=1}^∞ be two sequences of positive real numbers such that Σ_{n=1}^∞ b_n = ∞ and lim_{n→∞} a_n/b_n = L. Then lim_{n→∞} (Σ_{k=1}^n a_k)/(Σ_{k=1}^n b_k) = L.
Proof of Theorem 3.6. Let P be a basic sequence that is infinite in limit and k-divergent; the proof for P fully divergent follows similarly. Let y be a real number that is P-distribution normal and such that ny is P-normal for all natural numbers n; such a y exists by Theorem 2.2. Define an increasing sequence (ℓ_n) and set M(n) = min{c : ℓ_c < n}. We have that M(n) tends to infinity, since (ℓ_n) grows slowly and P is infinite in limit. Furthermore, for any i ≤ M(N) and n ≥ N, we have i ≤ M(n). Construct Q = (q_n) as follows: if there is an i ∈ {1, ..., M(n)} such that E_{P,n}(π_i(y)) = 1, then set q_n = M(n)p_n and q_{n+1} = M(n)p_{n+1}; otherwise set q_n = p_n. If both E_{P,n}(π_i(y)) = 1 and E_{P,n−1}(π_j(y)) = 1 for some i, j ≤ M(n), put q_n = M(n)p_n. Put x = ψ_{P,Q}(y). We claim that at position n we have E_{Q,n}(π_m(x)) ≠ 1 for m ∈ {2, 3, ..., M(n)}. Note that for E_{Q,n}(π_m(x)) = 1 we would need

m E_{P,n}(y)/q_n + m E_{P,n+1}(y)/(q_n q_{n+1}) + ... ∈ [1/q_n, 2/q_n),

that is,

m E_{P,n}(y)/(M(n)p_n) + m E_{P,n+1}(y)/(M(n)² p_n p_{n+1}) + ... ∈ [1/(M(n)p_n), 2/(M(n)p_n)).
But for this to happen, we must have E_{P,n}(y) = 0 and E_{P,i}(y) = p_i − 1 for all i > n. This cannot happen, since y = E_{P,0}(y).E_{P,1}(y)E_{P,2}(y)... w.r.t. P is the P-Cantor series expansion of y. Thus we must have E_{Q,n}(π_m(x)) ≠ 1. Then for all m ∈ N_2, the limit lim_{n→∞} N_n^Q(1, π_m(x)) is finite. Therefore π_m(x) is not Q-normal of any order for any m ∈ N_2.
Let A ⊆ N be the set of indices where q_n = p_n. Since y is P-distribution normal and the modified indices are sparse, the sequence (T_{Q,n}(x)) is uniformly distributed mod 1 by Lemma 3.8. Since Q is infinite in limit, we have that x is Q-distribution normal.
Using the notation of Theorem 3.10, set a_n = χ_A(n), w_n = 1/(p_i ... p_{i+j−1}), and s_n = 1. Since P is non-decreasing and k-divergent, we can apply Theorem 3.10 to the counts N_n^Q(B, x), and we conclude that x is Q-normal of orders 1, 2, ..., k.

Further problems

We consider the following two conditions on a basic sequence Q and a real number x:

x ∈ DN(Q) ∩ ⋂_{j=1}^k N_j(Q) and nx ∉ ⋃_{j=1}^∞ N_j(Q) for all n ∈ N_2;    (4.1)

x ∈ DN(Q) ∩ N(Q) and nx ∉ ⋃_{j=1}^∞ N_j(Q) for all n ∈ N_2.    (4.2)

Problem 4.2. Is it true that for all Q that are k-divergent and infinite in limit there exists a real number x satisfying (4.1)? If Q is fully divergent and infinite in limit, must there exist an x satisfying (4.2)? If not, what must we assume about Q?
We note that the use of Theorem 2.2 in the proof of Theorem 3.6 means that we have not given any explicit examples of the basic sequence Q or the real number x mentioned in Theorem 3.6. There exist basic sequences Q where the set DN(Q) does not contain any computable real numbers; see [5]. Thus, it is reasonable to ask the following question. Problem 4.3. Give an example of a computable basic sequence Q and a computable real number x that satisfies condition (4.1) or (4.2). Can this be done for every computable basic sequence Q?
Another presentation of even orthogonal Steinberg groups
We use the pro-group approach to prove that StO(2l, R) admits van der Kallen’s “another presentation” for any commutative ring R and l ≥ 3. Moreover, we construct an analog of ESD-transvections in even orthogonal Steinberg pro-groups under some assumptions on their parameters.
Introduction
In [7] W. van der Kallen proved that the linear Steinberg group St(n, R) over a commutative ring R is a central extension of the elementary linear group E(n, R) provided that n ≥ 4. More precisely, he showed that the Steinberg group admits a more invariant presentation and actually is a crossed module over the general linear group GL(n, R). This was generalized by M. Tulenbaev in [6] for linear groups over almost commutative rings R.
For symplectic groups the same result was proved by A. Lavrenov in [1]. He used essentially the same approach: there is another presentation of the symplectic Steinberg group StSp(2l, R) over a commutative ring R for l ≥ 3 such that it is obvious that this group is a central extension of the elementary symplectic group ESp(2l, R). Together with S. Sinchuk he also proved centrality of the corresponding K 2 -functors for Chevalley groups of types D l for l ≥ 3 and E l in [2,5] using a different method.
In [8] we reproved that St(n, R) is a crossed module over GL(n, R) using pro-groups. This more powerful method allowed us to generalize the result to isotropic linear groups over almost commutative rings and to matrix linear groups over non-commutative rings with a local stable rank condition. The same result for odd unitary groups (including the Chevalley groups of types A_l, B_l, C_l, and D_l) was proved in [9]. Finally, together with Lavrenov and Sinchuk, we generalized this result to all the remaining simple simply connected Chevalley groups of rank at least 3. For Chevalley groups of rank 2 there are counterexamples; see [10].
The pro-group approach does not by itself give any "another presentation" of the corresponding Steinberg group. In this paper we prove the following: Theorem. Let R be a commutative ring and l ≥ 3 an integer. Then the even orthogonal Steinberg group StO(2l, R) is isomorphic to the abstract group StO*(2l, R) generated by symbols X*(u, v), where u, v ∈ R^{2l} are vectors, u is a column of an elementary orthogonal matrix, and u ⊥ v. The relations on these symbols are
• X*(u, vr) = X*(v, −ur) if v is also a column of an elementary orthogonal matrix;
• X*(u, ur) = 1.
Actually, instead of columns of elementary orthogonal matrices it is possible to take columns of arbitrary orthogonal matrices. Using this variant of "another presentation", it is obvious that StO(2l, R) is a crossed module over O(2l, R).
During the proof we also give a general definition of ESD-transvections X(u, v) ∈ StO(2l, R) for isotropic unimodular u ∈ R 2l and u ⊥ v. These elements lift ordinary ESD-transvections T (u, v) ∈ SO(2l, R) and satisfy various identities, but their existence for completely general u is unclear.
The author wants to express his gratitude to Nikolai Vavilov, Sergey Sinchuk and Andrei Lavrenov for motivation and helpful discussions.
Orthogonal Steinberg pro-groups
We use the group-theoretical notation ^g h = ghg^{-1} and [g, h] = ghg^{-1}h^{-1}. If a group G acts on a group H by automorphisms, we denote the action by ^g h.
Let R be a commutative unital ring and l ≥ 3 an integer. We consider the free R-module R^{2l} with the basis e_{−l}, ..., e_{−1}, e_1, ..., e_l. This module carries the split quadratic form q given by q(v) = Σ_{i=1}^l v_{−i} v_i; let ⟨u, v⟩ = q(u + v) − q(u) − q(v) be the associated symmetric bilinear form. The even orthogonal group O(2l, R) consists of all g ∈ GL(2l, R) such that q(gv) = q(v) for all vectors v.
Consider any vectors u, v ∈ R^{2l} such that q(u) = ⟨u, v⟩ = 0. In this case the operator

T(u, v): w ↦ w + u⟨v, w⟩ − v⟨u, w⟩ − u q(v)⟨u, w⟩

is called an Eichler-Siegel-Dickson transvection (or an ESD-transvection) with parameters u and v; see [4] for details in a more general context. The following lemma summarizes the well-known properties of these operators; all of them may be checked by direct computation. Lemma 1. Each ESD-transvection lies in the special orthogonal group SO(2l, R). Moreover, • T(u, ur) = 1 for q(u) = 0 and r ∈ R. Recall that the elementary orthogonal transvections are the elements T_{ij}(r) = T(e_i, e_{−j}r) for i ≠ ±j. The orthogonal Steinberg group StO(2l, R) is the abstract group generated by elements x_{ij}(r) for r ∈ R and i ≠ ±j, subject to the standard Steinberg relations. There is a group homomorphism st: StO(2l, R) → SO(2l, R), x_{ij}(r) ↦ T_{ij}(r). We would like to find analogs of ESD-transvections in the Steinberg group StO(2l, R). In order to do so, we use results from [3] or [9]. Recall that there is a "forgetful" functor from the category of pro-groups Pro(Grp) to the category of group objects in the category of pro-sets Pro(Set). This functor is actually fully faithful, so we identify pro-groups with the corresponding pro-sets, and similarly for non-unital pro-R-algebras. The projective limits in Pro(Set) are denoted by lim←^{Pro}, and various pro-sets are labeled with an upper index (∞), such as X^{(∞)}. We also use the following convention from [3, 8, 9]: if a morphism between pro-sets is given by a first-order term (possibly many-sorted), then we add the upper index (∞) to the formal variables, for example for a pro-ring R^{(∞)} and its pro-module M^{(∞)}. The domains of such variables are usually clear from the context. For any s ∈ R, the s-homotope of R is the commutative non-unital R-algebra R^{(s)} = {r^{(s)} | r ∈ R}, with operations r^{(s)} + r'^{(s)} = (r + r')^{(s)} and r^{(s)} · r'^{(s)} = (srr')^{(s)}. Note that in the definition of StO(2l, R) we do not need the unit of R; hence we define StO^{(s)}(2l, R) = StO(2l, R^{(s)}) by the same generators and relations, but with parameters in R^{(s)}. If s, s' ∈ R, then there is a homomorphism R^{(ss')} → R^{(s)}, r^{(ss')} ↦ (s'r)^{(s)}, of commutative non-unital R-algebras, and an obvious induced homomorphism StO^{(ss')}(2l, R) → StO^{(s)}(2l, R). Lemma 2. Two homomorphisms from R^{(∞,S)} to a group object coincide if their compositions with the canonical projections π_m coincide for all maximal ideals m disjoint from S. Proof. It suffices to consider an ordinary group G. Consider group homomorphisms f, g: R^{(s)} → G for some s ∈ S such that f ∘ π_m = g ∘ π_m : R^{(∞,m)} → G for all maximal ideals m disjoint from S. The set a = {r ∈ R | f|_{R^{(sr)}} = g|_{R^{(sr)}}} is an ideal of R. By assumption, it is not contained in any such m, so S^{-1}a = S^{-1}R. In other words, s' ∈ a for some s' ∈ S. This means that f|_{R^{(ss')}} = g|_{R^{(ss')}}.
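Returning to the ESD-transvections, a quick numerical sanity check of the transvection formula given above (itself a reconstruction of the standard Eichler-Siegel-Dickson operator) can be run as follows; the random test vectors are illustrative.

import numpy as np

l = 3
def q(v):
    # split form on the basis e_{-l}, ..., e_{-1}, e_1, ..., e_l
    return sum(v[i] * v[2 * l - 1 - i] for i in range(l))

def B(u, w):
    # associated symmetric bilinear form <u, w> = q(u + w) - q(u) - q(w)
    return q(u + w) - q(u) - q(w)

def T(u, v):
    # T(u, v) w = w + u<v, w> - v<u, w> - u q(v) <u, w>
    def apply(w):
        return w + u * B(v, w) - v * B(u, w) - u * q(v) * B(u, w)
    return apply

rng = np.random.default_rng(0)
e = np.eye(2 * l)
u = e[2 * l - 1]                     # u = e_l, an isotropic vector
v = rng.integers(-3, 4, 2 * l).astype(float)
v -= e[0] * B(u, v)                  # enforce <u, v> = 0 via <e_l, e_{-l}> = 1
assert q(u) == 0 and abs(B(u, v)) < 1e-12

for _ in range(5):
    w = rng.standard_normal(2 * l)
    assert abs(q(T(u, v)(w)) - q(w)) < 1e-9   # T(u, v) preserves q
print("T(u, v) lies in O(2l, R)")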
Take a multiplicative subset S ⊆ R. Each a/s^n ∈ S^{-1}R gives an endomorphism of the pro-group R^{(∞,S)} by a/s^n : b^{(s^n s')} ↦ (ab)^{(s')}. This endomorphism is denoted by the term (a/s^n) r^{(∞)} with a free variable r^{(∞)}. By results from [3] or [9], for any multiplicative subset S ⊆ R the local Steinberg group StO(2l, S^{-1}R) acts on the corresponding Steinberg pro-group StO^{(∞,S)}(2l, R) by automorphisms. Note that this action is not given by a morphism StO(2l, S^{-1}R) × StO^{(∞,S)}(2l, R) → StO^{(∞,S)}(2l, R) of pro-sets; instead we have a homomorphism StO(2l, S^{-1}R) → Aut_{Pro(Grp)}(StO^{(∞,S)}(2l, R)) of abstract groups. The action is given by the obvious formulas on the generators when there is a corresponding Steinberg relation: ^{x_{ij}(a/s^n)} x_{kl}(r^{(∞)}) = x_{kl}(r^{(∞)}) if such Steinberg generators commute (in the ordinary orthogonal Steinberg group), and ^{x_{ij}(a/s^n)} x_{jk}(r^{(∞)}) = x_{ik}((a/s^n) r^{(∞)}) x_{jk}(r^{(∞)}) for i ≠ ±k. Here both formulas are identities of morphisms R^{(∞,S)} → StO^{(∞,S)}(2l, R) of pro-groups.
These actions are extranatural in S. More precisely, if S ⊆ S' are two multiplicative subsets, g ∈ StO(2l, S^{-1}R), and g' is its image in StO(2l, S'^{-1}R), then the corresponding square with canonical vertical arrows commutes. By Theorem 2 of [3], for any maximal ideal m of R the action of StO(2l, R_m) on StO^{(∞,m)}(2l, R) factors through Spin(2l, R_m). The main result of [3] actually says that there is an action of Spin(2l, R) on StO(2l, R) such that st: StO(2l, R) → Spin(2l, R) is a crossed module (in particular, the action of StO(2l, R) on itself factors through this action of the spin group). Moreover, this action of the spin group is extranatural by formula (5.2) from [3]: if g ∈ Spin(2l, R), then the induced automorphisms of StO(2l, R) and StO^{(∞,m)}(2l, R) give a commutative square as above.
The similar results from [9] are slightly more powerful: we may everywhere use O(2l, R) instead of Spin(2l, R).
Transvections in orthogonal Steinberg groups
We need the following well-known result: Lemma 3. Suppose that R is local and the rank l ≥ 1 is arbitrary. Then EO(2l, R) = Im(st : St(2l, R) → SO(2l, R)) acts transitively on the set of isotropic unimodular vectors (i.e. v ∈ R 2l such that R = i Rv i and q(v) = 0). Moreover, it acts transitively on the set of pairs (u, v), where u, v are orthogonal isotropic vectors and u ∧ v is unimodular in Λ 2 (R 2l ).
Proof.
Let v ∈ R^{2l} be an isotropic unimodular vector. If v_l or v_{−l} is invertible, then v' = T(e_1, ae_{±l})v satisfies v'_1 ∈ R* for a suitable a ∈ R. In this case let j = 1. Otherwise there is an index j ≠ ±l such that v_j ∈ R*, so we take v' = v. In any case, v' may be mapped to e_l by elementary transvections, which proves the first claim. To prove the second claim, take such a pair (u, v). Since u is unimodular and isotropic, we may assume that u = e_l. Then v_{−l} = 0, l ≥ 2, and v − v_l e_l is a unimodular isotropic vector in R^{2l−2}. All matrices from EO(2l − 2, R) fix u, hence by the above we may assume that u = e_l and v = e_{l−1} + v_l e_l. It remains to apply the transvection T(e_l, −v_l e_{−(l−1)}) to get the pair (e_l, e_{l−1}).
Let S ⊆ R be a multiplicative subset and u ∈ S^{-1}R^{2l} an isotropic unimodular vector, so that v ↦ ⟨u, v⟩ is a well-defined split epimorphism. The subgroup {g ∈ Spin(2l, R_m) | ge_l ∈ e_l R_m} coincides with the standard maximal parabolic subgroup P_1 ≤ Spin(2l, R_m). It is generated by the standard maximal torus of the spin group and the elementary transvections stabilizing e_l, since R_m is local. It is easy to see that ^g X(e_l, v^{(∞)}) = X(e_l, g v^{(∞)} (ge_l)_{e_l}) for every such generator g ∈ P_1: the torus acts by roots on both sides (see formula (4.2) in [3]), and for the elementary transvections this follows directly from the definitions. Here (ge_l)_{e_l} denotes the element of R*_m such that ge_l = e_l (ge_l)_{e_l}. Now let u ∈ R_m^{2l} be any isotropic unimodular vector. By Lemma 3, there is g ∈ Spin(2l, R_m) such that u = ge_l. Let X(u, v^{(∞)}) = ^g X(e_l, g^{-1} v^{(∞)}); this morphism is independent of g and satisfies properties 1, 2, and 5.
To prove the third property, note that there is h ∈ Spin(2l, R) such that he_l = e_l r and h lies in the maximal torus. Hence X(e_l r, v^{(∞)}) = ^h X(e_l, h^{-1} v^{(∞)}) = X(e_l, v^{(∞)} (he_l)_{e_l}) = X(e_l, v^{(∞)} r), and the third property for other vectors u follows from the definition of X(u, v^{(∞)}).
The last property follows by applying various permutation matrices from O(2l, R) to x_{l,l−1}(r^{(∞)}).
Finally, in order to prove the fourth property we again use Lemma 3. Without loss of generality, u = e_l and v = e_{l−1}; the property then follows by a direct computation with the generators. Theorem 2. In the last case, ^g X(u, v) = X(gu, gv) ∈ StO(2l, R) for g ∈ O(2l, R), u ∈ O(2l, R)e_l, and u ⊥ v. Also, X(u, v) ∈ StO(2l, R) maps to T(u, v) ∈ O(2l, R).
Proof. The first case is Lemma 4, the second is Theorem 1. If S = {1} and u = ge_l for some g ∈ O(2l, R), then let X(u, v) = ^g X(e_l, g^{-1}v) for all vectors v ⊥ u. This morphism satisfies the definition from Theorem 1 by extranaturality of the action. The last claim follows from the uniqueness of X(u, v) and the definitions.
Finally, we prove that the even orthogonal Steinberg group admits "another presentation" in the sense of van der Kallen. Note that the third property of X(u, v) from theorem 1 is not needed.
Theorem 3. Let StO*(2l, R) be the abstract group generated by symbols X*(u, v) for vectors u, v ∈ R^{2l} such that u ∈ O(2l, R)e_l and v ⊥ u. The relations are the following: • X*(u, vr) = X*(v, −ur) if v also lies in O(2l, R)e_l; • X*(u, ur) = 1.
Then the canonical morphism StO(2l, R) → StO*(2l, R), x_{ij}(r) ↦ X*(e_i, e_{−j}r), is an isomorphism, and the preimage of X*(u, v) is the element X(u, v) from Theorem 2. We may also write Spin(2l, R)e_l or EO(2l, R)e_l everywhere instead of O(2l, R)e_l.
The group O(2l, R) (or Spin(2l, R), or EO(2l, R)) acts on StO*(2l, R) by ^g X*(u, v) = X*(gu, gv). Clearly, StO*(2l, R) is generated by the image of F and its conjugates under this action (here we need that u lies in the orbit of e_l). Since StO(2l, R) is perfect, it follows that StO*(2l, R) is perfect. The canonical homomorphism StO*(2l, R) → O(2l, R), X*(u, v) ↦ T(u, v), has central kernel by the second relation; hence the kernel of G is also central. Now G: StO*(2l, R) → StO(2l, R) is a split perfect central extension. It is well known that such a homomorphism is necessarily an isomorphism, and its splitting F is the inverse.
High-quality positron acceleration in beam-driven plasma accelerators
Acceleration of positron beams in plasma-based accelerators is a highly challenging task. To realize a plasma-based linear collider, acceleration of a positron bunch with high efficiency is required, while maintaining both a low emittance and a subpercent-level energy spread. Recently, a plasma-based positron acceleration scheme was proposed in which a wake suitable for the acceleration and transport of positrons is produced in a plasma column by means of an electron drive beam [Diederichs et al., Phys. Rev. Accel. Beams 22, 081301 (2019)]. In this article, we present a study of beam loading for a positron beam in this type of wake. We demonstrate via particle-in-cell simulations that acceleration of high-quality positron beams is possible, and we discuss a possible path to achieve collider-relevant parameters.
I. INTRODUCTION
Plasma-based particle accelerators potentially enable compact linear electron-positron colliders due to their large acceleration gradients [1]. In a plasma wakefield accelerator (PWFA), an ultrarelativistic, high-charge-density particle beam expels all plasma electrons from its propagation axis and an ion cavity is formed [2,3]. The cavity, also referred to as the bubble or blowout, features a region with a large longitudinally accelerating gradient and a transversely linear restoring force for relativistic electrons. Whereas high-energy-gain, high-efficiency [4,5], and stably beam-loaded [6] electron acceleration has been demonstrated experimentally in PWFAs, stable and quality-preserving positron acceleration remains a challenge. Identifying a positron acceleration scheme that fulfills the requirements imposed by a particle collider, namely the stable and efficient acceleration of high-charge positron bunches while maintaining both a low emittance and a low energy spread, has been an outstanding challenge, and previously proposed positron acceleration concepts have not been able to meet all the necessary requirements. For instance, utilizing hollow-core electron drive beams showed only a per-mille-level driver-to-witness energy conversion efficiency [7]. PWFAs driven by a positron beam have been investigated in Ref. [8]. While this scheme demonstrated high-efficiency acceleration of the positron witness beam, the nonlinear nature of the transverse focusing fields, and their variation as the drive beam evolves, renders the preservation of the witness beam emittance challenging. Hollow-core plasma channels have been proposed as potential plasma targets for positron acceleration [9,10]. However, owing to the lack of any focusing field for the beam in a hollow channel, this scheme suffers from severe beam breakup instability [9,11].
In a recent article, a novel method for positron acceleration was proposed that uses an electron beam as driver and a plasma column as the acceleration medium [12]. For a plasma column with a column radius smaller than the blowout radius, the transverse wakefields are altered, resulting in an elongation of the trajectories of background plasma electrons returning toward the axis. This creates a long, high-density electron filament, leading to the formation of a wake phase region which is suitable for acceleration and transport of positron beams. Despite the nonlinear nature of the transverse wakefields, it was shown that quasimatched propagation of positron beams is possible. Due to the non-uniformity of the accelerating field created in these structures, the energy spread was found to be at the percent level, which is too high for application in a plasma-based linear collider. Another study has investigated beam loading of simple Gaussian beams in these plasma structures [13]. Despite achieving higher efficiency than the one reported in Ref. [12], the emittance was not preserved.
In this article, we investigate beam loading of a positron bunch in the nonlinear wake formed in a plasma column, with the goal of minimizing the energy spread of the bunch while maintaining both a low emittance and a high charge. Beam loading was first described for linear wakes in Ref. [14]. In the nonlinear blowout regime, beam loading of electron beams was studied in Ref. [15], where an analytical expression for the longitudinal witness-beam current profile that eliminates the energy spread was obtained. Owing to the different nature of the wakefield structure, this type of analytic result is not valid for the nonlinear positron accelerating fields considered in this study. Here, beam loading is studied by means of a numerical algorithm that reconstructs, slice-by-slice and self-consistently, the longitudinal current profile of an optimal witness beam which flattens the accelerating field within the bunch. We further discuss the transport of the positron witness bunch and its optimization with the goal of minimizing the energy spread and preserving the emittance, both crucial parameters for the employment of this acceleration scheme in a future plasma-based linear collider.
Lastly, we assess a possible path to achieve collider-relevant parameters.
II. NONLINEAR WAKEFIELDS FOR POSITRON ACCELERATION
The generation of positron-beam focusing and accelerating wakes using plasma columns was first described in [12]. Using an electron drive beam and a plasma column with a radius smaller than the blowout radius leads to the formation of a wide longitudinal electron filament behind the blowout bubble. This elongated region of high electron density provides accelerating and focusing fields for positron beams. This is illustrated in Figs. 1 and 2, which show two-dimensional maps of the accelerating field E_z/E_0 and focusing field (E_x − B_y)/E_0, respectively. The fields are normalized to the cold, nonrelativistic wave-breaking limit E_0 = ω_p m c / e, where c denotes the speed of light in vacuum, ω_p = (4π n_0 e²/m)^{1/2} the plasma frequency, n_0 the background plasma density, and e and m the electron charge and mass, respectively. In this example, we consider a plasma column with a radius k_p R_p = 2.5 and a Gaussian electron drive beam with sizes k_p σ^{(d)} and peak current given in units of the Alfvén current I_A ≈ 17 kA. The modeling was performed using the quasistatic particle-in-cell (PIC) code HiPACE [16]. To reduce the high computational cost of the modeling imposed by the required numerical resolution, the wakefields were computed using an axisymmetric cylindrical solver based on the one implemented in the quasistatic version of the code INF&RNO [17], while the particles are advanced in full 3D. Denoting by k_p = ω_p/c the plasma wave number, the dimensions of the computational domain are 12 × 12 × 20 k_p^{-3} in the coordinates x × y × ζ, where x and y are the transverse coordinates and ζ = z − ct is the longitudinal co-moving coordinate, with z and t being the longitudinal coordinate and the time, respectively. The resolution is 0.0056 × 0.0056 × 0.0075 k_p^{-3}. The background electron plasma was modeled with 25 constant-weight particles per cell. The drive beam was sampled with 10^6 constant-weight particles.
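For orientation, the normalized quantities above can be converted to physical units with a few lines; the density value below is an illustrative assumption, and the formulas are the standard CGS expressions quoted in the text.

import math

e_cgs = 4.80320425e-10      # electron charge, statcoulomb
m_cgs = 9.1093837e-28       # electron mass, g
c_cgs = 2.99792458e10       # speed of light, cm/s

def k_p(n0_cm3):
    """Plasma wave number k_p = omega_p / c for density n0 in cm^-3."""
    omega_p = math.sqrt(4.0 * math.pi * n0_cm3 * e_cgs**2 / m_cgs)
    return omega_p / c_cgs   # in 1/cm

n0 = 1.0e17                  # cm^-3, a typical PWFA density (assumption)
print(1.0 / k_p(n0) * 1e4)   # plasma skin depth 1/k_p in microns (~17 um)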
The positron focusing and accelerating phase is located between −14 ≲ k_p ζ ≲ −9. The accelerating field has its peak at k_p ζ ≈ −11.5. Unlike in the blowout regime, E_z has a transverse dependence. The inset of Fig. 1 shows E_z/E_0 along the transverse coordinate x at three different longitudinal locations, denoted by the dashed (k_p ζ = −12.5), solid (k_p ζ = −11.5), and dotted (k_p ζ = −10.5) lines in Fig. 1. At all three locations, E_z(x) has an on-axis maximum and decays for increasing distances from the propagation axis. Notably, the transverse gradient of the accelerating field is smaller further behind the driver. The nonuniformity of E_z will lead to a ζ-dependent uncorrelated slice energy spread, since particles that remain closer to the axis experience a larger accelerating gradient compared to those further off axis. This effect will be investigated more thoroughly in Sec. III C.
The transverse behavior of the focusing field, (E_x − B_y)/E_0, is depicted in the inset of Fig. 2, where we show transverse lineouts of the focusing wakefields for the same three longitudinal locations used in Fig. 1. We see that the transverse wakefield decays almost linearly for increasing distances from the propagation axis, and that the decrease of the field is smaller further behind the driver. As shown in Ref. [12], the field becomes almost a step function when sufficiently loaded by a positron bunch.
III. SELF-CONSISTENT BEAM LOADING TO MINIMIZE THE ENERGY-SPREAD
In many beam-driven plasma wakefield accelerator applications, both the driver and the witness beams are highly relativistic and evolve on a much longer time scale than the background plasma. In this case the quasistatic approximation [18], which allows treating the plasma and the relativistic beams separately, can be used. In the quasistatic approximation, the wakefields generated by a given beam are determined by initializing a slice of unperturbed plasma ahead of the beam and then following its evolution as the slice is pushed through the beam from head to tail along the negative ζ direction (here ζ can be interpreted as a fast "time" that parametrizes plasma-related quantities), while the beam is assumed to be frozen. This implies that to calculate the fields at some longitudinal position ζ, only the information upstream of this point is required.
We used this feature of the quasistatic solution to design an algorithm that recursively constructs, slice-by-slice and starting from the head, the optimal current profile of a witness bunch such that the accelerating field along the bunch is constant and equal to a set value. This leads to a reduced energy spread of the accelerated particles. The algorithm is described in detail in the Appendix. We considered a (radially symmetric) bunch initially described by a density of the form (1), where g_∥(ζ) and g_⊥(ζ, r) denote the longitudinal and transverse density profiles, respectively. We require that, for any ζ behind the location of the bunch head ζ_head, the transverse profile is normalized such that the bunch current density profile only depends on g_∥(ζ). For simplicity, we first consider bunches with transverse profiles that are radially Gaussian and longitudinally uniform, i.e., g_⊥(ζ, r) = exp[−r²/(2σ_r²)], where σ_r is the (longitudinally constant) rms bunch size. At every longitudinal location ζ (bunch slice), the algorithm performs an iterative search for the optimal bunch current, determined via g_∥(ζ), that flattens the accelerating field in that particular slice. The procedure is repeated recursively for all the slices, going from the head to the tail of the bunch. Note that, besides a constant accelerating field along the bunch, other field configurations yielding an energy chirp during acceleration are possible. In order for a solution to be found, the positron bunch has to be located in a phase of the wake where ∂_ζ E_z < 0. To take into account the fact that, in general, E_z varies in the transverse plane across the beam, the figure of merit considered by the algorithm is a transversally weighted accelerating field ⟨E_z⟩, defined in Eq. (2). In the case of a transversally uniform accelerating field, e.g., as in the blowout regime, the averaged accelerating field simply reduces to the on-axis accelerating field.
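Since Eq. (2) itself is not reproduced in the extracted text, the following minimal Python sketch assumes the natural choice for the weighting: the radial average of E_z weighted by the slice's transverse charge density g_⊥. The grid, field profile, and spot size below are illustrative values, not the paper's.

```python
import numpy as np

def weighted_ez(ez_of_r, r, sigma_r):
    """Transversally weighted accelerating field <E_z> for one slice.

    Assumed weight: the slice's radial charge density
    g_perp(r) = exp(-r^2 / (2 sigma_r^2)) on an axisymmetric grid, i.e.
    <E_z> = int E_z(r) g_perp(r) r dr / int g_perp(r) r dr.
    """
    g = np.exp(-r**2 / (2.0 * sigma_r**2))
    num = np.trapz(ez_of_r * g * r, r)
    den = np.trapz(g * r, r)
    return num / den

# Toy usage: a field that decays linearly off axis (cf. inset of Fig. 1).
r = np.linspace(0.0, 0.5, 200)      # k_p r
ez = 0.2 - 0.1 * r                  # E_z / E_0, illustrative only
print(weighted_ez(ez, r, sigma_r=0.0163))
```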
A. Optimization of the witness bunch position
The choice of the location of the witness bunch head, ζ_head, sets the amplitude of the accelerating gradient and determines the shape of the bunch current profile. In the following, we study the effect of different witness head positions for a bunch in the wake described in Sec. II. To fulfill the requirement that the bunch head be located in a wake phase such that ∂_ζ E_z < 0, and to achieve a reasonable acceleration gradient, we chose −11.5 ≲ k_p ζ_head ≲ −10. Also, we consider a witness bunch with an emittance such that k_p ε_x = 0.05, and a bunch size k_p σ_r = 0.0163. Numerical results for the current profiles and their corresponding loaded averaged accelerating fields, ⟨E_z⟩, for four values of the witness bunch head position are depicted in Fig. 3. Interestingly, placing the bunch head in a more forward position in the wake, corresponding to a lower accelerating gradient, does not necessarily increase the charge of the witness bunch. This can be seen in Table I, where we show the witness charge, Q_w, as a function of the bunch head position. Values of the charge have been computed assuming a background density of n_0 = 5 × 10^17 cm^−3. For this density the charge of the drive beam is Q_d = 1.5 nC.
The driver-to-beam efficiency, η, can be calculated from the charge of the witness beam, Q_w, its energy gain rate, E_w^+, the charge of the drive beam, Q_d, and its energy loss rate, E_d^−. Values of the energy gain for the witness bunch and of the efficiency as a function of the witness head position are given in Table I.
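As a minimal illustration of this bookkeeping, the snippet below computes η under the natural definition η = (Q_w E_w^+)/(Q_d E_d^−); both this formula and the numbers are assumptions for illustration, since the precise expression and the driver's energy loss rate are not quoted in the extracted text.

```python
def driver_to_witness_efficiency(q_w, e_gain_w, q_d, e_loss_d):
    """Assumed definition: witness energy gained over driver energy lost,
    eta = (Q_w * E_w_plus) / (Q_d * E_d_minus)."""
    return (q_w * e_gain_w) / (q_d * e_loss_d)

# Illustrative numbers only: 52 pC witness, 1.5 nC driver; the gain and
# loss rates (eV/m) here are placeholders, not values from the paper.
print(driver_to_witness_efficiency(q_w=52e-12, e_gain_w=30e9,
                                   q_d=1.5e-9, e_loss_d=35e9))
```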
The results show that the efficiency peaks around the position −10.8 ≲ k_p ζ_head ≲ −10.6. As shown in Sec. II, the accelerating field is transversely flatter for more negative head positions; therefore the case k_p ζ_head = −10.8 is preferable, since this witness position results in a smaller energy spread while maintaining close-to-maximum efficiency. We recall that for this witness position the charge of the bunch is 52 pC and the efficiency η ≈ 3%. This is less than what was achieved with a simple Gaussian density profile in [12], which featured a witness bunch charge of Q_w = 84 pC and an efficiency of η ≈ 4.8%.
However, energy-spread minimization was not taken into consideration in that study, which led to an energy spread at the few-percent level.
Choosing bunch head positions that are closer to the driver, i.e., k_p ζ_head ≥ −10.2, yields complex (e.g., multi-peaked) bunch current profiles. In this case, the positron beam significantly alters the background plasma electron trajectories, resulting in the formation of a second on-axis electron density peak behind the blowout region. This, in principle, allows for the loading of a second positron beam or an increase of the length of the first, as can be seen in Fig. 3. In fact, for k_p ζ_head = −10.4 (dotted line) we see that ⟨E_z⟩ has a local maximum behind the bunch which is higher than the value within the bunch, allowing for further beam loading. We did not investigate such forward starting positions further, because we consider the resulting complex bunch structures difficult to realize experimentally.
B. Minimizing the correlated energy-spread
Using the weighted accelerating field ⟨E_z⟩ from Eq. (2) as the figure of merit in the proposed algorithm yields a bunch current profile that eliminates the correlated energy spread only under the assumption that the bunch size does not change during acceleration. However, this assumption is generally not true. First, if the spot size is not matched to the focusing field at some position along the bunch, due to, e.g., the slice-dependent nature of the transverse wakefields, it will evolve until it is matched. Second, due to the acceleration, the matched spot size adiabatically decreases with increasing particle energy. Both effects must be taken into account in order to eliminate the correlated energy spread entirely. Eliminating the mismatch requires performing a slice-by-slice matching of the beam, i.e., introducing a slice-dependent bunch size, σ_r(ζ). Note that this also leads to a ζ-dependence of g_⊥. In our algorithm, the calculation of the self-consistent, slice-dependent bunch size can be done numerically while the optimal bunch is generated. As a desirable side effect, the slice-by-slice matching also minimizes the emittance growth [19]. To take into account the change of σ_r(ζ) due to acceleration, the spot size averaged over the acceleration distance should be used when calculating ⟨E_z⟩ in Eq. (2). We recall that for the step-like wakes considered here, with a field strength α, the matching condition for a given emittance ε_x is σ³_(r,matched) ≃ 1.72 ε_x²/(k_p α γ), so the matched spot size is expected to scale with the energy as σ_(r,matched) ∝ (1/γ)^(1/3), where γ is the bunch relativistic factor [12].
The averaged bunch size over the acceleration distance, σ̄_r(ζ), can then be estimated via Eq. (4), where γ_init and γ_final refer to the initial and final bunch energy, respectively. Note that calculating σ̄_r(ζ) requires the final beam energy γ_final. We also notice that including the slice-by-slice matching and the energy-averaged bunch size when computing the optimal bunch profiles does not significantly alter the current profiles and charges discussed in Sec. III A. Changes to the optimal beam loading algorithm that include slice-by-slice matching and the averaged spot size are described in the Appendix. The efficacy of the slice-by-slice matching and of the averaged spot size is demonstrated in Fig. 4, where we show the mean energy of each slice for a positron witness bunch that accelerates from 1 GeV to ≈5.5 GeV over a distance of 15 cm. The blue line refers to the algorithm flattening ⟨E_z⟩ with a longitudinally uniform σ_r. The red and green lines refer to additionally applying slice-by-slice matching and averaging of the bunch spot size over the acceleration distance, respectively. In this example, the location of the bunch head was k_p ζ_head = −10.8 and the bunch had an initial emittance such that k_p ε_x = 0.05; all other parameters were as before (see Sec. II). To mitigate the computational cost, these results were obtained with a frozen-field approximation (i.e., the particles of the witness bunch are pushed in a non-evolving wakefield). This approach has shown reasonable agreement with full quasistatic PIC simulations for both the energy spread and the emittance of the witness bunch; the agreement is facilitated by the slice-by-slice matching, which mitigates the witness-beam evolution. We see that the bunch obtained without slice-by-slice matching and using the initial, longitudinally uniform σ_r (blue line in Fig. 4) shows a range of mean-energy variation of ΔE ≈ 100 MeV. Using the slice-by-slice matching (red line) reduces the amplitude of the variation to ΔE ≈ 60 MeV. Finally, by using the energy-averaged bunch size σ̄_r(ζ) in the calculation of the optimal bunch (green line), the correlated energy spread is essentially removed (ΔE ≈ 3 MeV).
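The matched-size relation is simple enough to evaluate directly. The sketch below computes σ_(r,matched) from the formula above, plus an energy-averaged size; since Eq. (4) is not reproduced in the extracted text, the plain average of σ_(r,matched) over γ used here is an assumption (energy gain taken roughly linear in distance).

```python
import numpy as np

def sigma_matched(eps_x, kp, alpha, gamma):
    """Matched rms size in a step-like wake: sigma^3 ~= 1.72 eps^2/(kp*alpha*gamma)."""
    return (1.72 * eps_x**2 / (kp * alpha * gamma)) ** (1.0 / 3.0)

def sigma_avg(eps_x, kp, alpha, gamma_init, gamma_final, n=1000):
    """Assumed stand-in for Eq. (4): plain average of sigma_matched over gamma."""
    g = np.linspace(gamma_init, gamma_final, n)
    return np.trapz(sigma_matched(eps_x, kp, alpha, g), g) / (gamma_final - gamma_init)

kp = 1.33e5                # 1/m for n0 = 5e17 cm^-3 (k_p = omega_p / c)
eps_x = 0.05 / kp          # = 0.38 um, as in the text
print(sigma_matched(eps_x, kp, alpha=0.6, gamma=1957))            # ~0.12 um at 1 GeV
print(sigma_avg(eps_x, kp, alpha=0.6, gamma_init=1957, gamma_final=10763))
```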
C. Minimizing the uncorrelated energy-spread
Whereas the correlated energy-spread can be completely eliminated, the uncorrelated energy-spread can only be reduced, as it arises from the transverse nonuniformity of E z . We did not identify a strategy to reduce the transverse gradient of E z by loading the wake with a positron bunch. However, we explored two possible solutions to minimize the impact of such gradient and reduce the uncorrelated energy spread. First, one can position the witness bunch in a region of the wake where E z is transversally as flat as possible, and second, one can use a transversally smaller witness beam.
As described in Sec. II, E_z flattens transversally further behind the driver (i.e., for more negative ζ). Therefore, it is favorable to choose the starting position of the bunch as far behind the driver as a reasonable efficiency allows. According to this criterion and the results from Sec. III A, the optimal starting position is k_p ζ_head = −10.8.
In the following, we study the dependence of the uncorrelated energy spread on the witness bunch emittance. A matched bunch with a smaller emittance will have a smaller transverse extent, will sample a smaller domain of E_z and, hence, will acquire a smaller uncorrelated energy spread. For a flat beam, and assuming that in the vicinity of the axis the accelerating field can be modeled as E_z(x) = E_(z,0) − β|x|, where β describes the transverse gradient of E_z (see inset of Fig. 1), we expect from geometric considerations the relative slice (uncorrelated) energy spread at saturation to scale as σ_γ/γ ∼ β σ_r/E_(z,0), so that σ_γ/γ → 0 in the limit of a small bunch. We note that it is not possible with our current numerical tools to model collider-relevant low-emittance witness beams, owing to the required high resolution and the associated computational costs. To overcome this limitation, we use a reduced model to assess the scaling of the energy spread under these conditions. Since the correlated energy spread can be eliminated with the procedure discussed in the previous section, we consider a single slice of the beam in the reduced model. We chose the slice at the peak current of the positron bunch, which we have found to represent the total energy spread of the bunch reasonably well. Using the previous example with a starting position of k_p ζ_head = −10.8, the peak of the current is located at k_p ζ_peak = −11.45. We reuse the simulation that included the slice-by-slice matching and the averaging over the acceleration distance. Assuming the same density of n_0 = 5 × 10^17 cm^−3 as for the efficiency considerations, the emittance of the beam is ε_x = 0.05 k_p^−1 = 0.38 μm. In the reduced model, we generate test particles, which we advance with a second-order-accurate particle pusher in the radial fields provided by the simulation. High-resolution simulations with the cylindrically symmetric PIC code INF&RNO indicate that the focusing field converges toward a step function [12]. Likewise, we model the focusing field in the reduced model with a piecewise constant function, (E_x − B_y)/E_0 = −α sign(x), where α = 0.6 for our example. We have found the model to be in reasonable agreement with HiPACE simulations in terms of energy spread, emittance, and bunch-size evolution. This is shown for the energy spread in Fig. 5. The black dashed line and the blue solid line describe the energy spread at an emittance of ε_x = 0.38 μm obtained from the HiPACE simulation and the reduced model, respectively. The energy spread of the peak-current slice of the beam is ≈0.65% for both the simulation (dashed line) and the reduced model (blue line). The final energy spread and the emittance growth of the whole bunch in the PIC simulation are ≈0.7% and ≈2%, respectively (both not shown in Fig. 5). Under the assumption that a smaller-emittance beam with the same charge does not significantly change the wake structure, we can decrease the beam emittance in the reduced model to previously numerically inaccessible values. The results, shown in Fig. 5, indicate that for emittances well below 0.1 μm we can achieve energy spreads below 0.1%. The red and green lines denote the energy spreads for initial emittances of ε_x = 0.19 μm and ε_x = 0.08 μm, respectively; their corresponding final energy spreads are 0.3% and 0.1%. This indicates a path to possible collider-relevant parameters.
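A minimal Python version of such a reduced model is sketched below, assuming a leapfrog (second-order) pusher, the step-function focusing force −α sign(x), and an illustrative local field E_z(x) = e_z0 − β|x|; e_z0, β, the time step, and the step counts are placeholders, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def reduced_model(n_part=2000, eps_x_um=0.38, alpha=0.6, ez0=0.2, beta=0.1,
                  gamma0=1957.0, n_steps=20000, dt=0.05):
    """Sketch of the single-slice test-particle model (normalized units:
    lengths in 1/k_p, times in 1/omega_p, fields in E_0).

    Assumptions: leapfrog push, focusing force -alpha*sign(x), and an
    assumed accelerating field ez0 - beta*|x| near the axis.
    """
    kp_inv_um = 7.5                              # 1/k_p in um at n0 = 5e17 cm^-3
    eps_x = eps_x_um / kp_inv_um                 # normalized emittance k_p*eps
    sigma = (1.72 * eps_x**2 / (alpha * gamma0)) ** (1.0 / 3.0)  # matched size
    x = rng.normal(0.0, sigma, n_part)
    ux = rng.normal(0.0, eps_x / sigma, n_part)  # p_x / (m c)
    gamma = np.full(n_part, gamma0)
    for _ in range(n_steps):
        x += 0.5 * dt * ux / gamma               # half drift
        ux += -dt * alpha * np.sign(x)           # focusing kick
        gamma += dt * (ez0 - beta * np.abs(x))   # acceleration
        x += 0.5 * dt * ux / gamma               # half drift
    return gamma.std() / gamma.mean()            # slice sigma_gamma / gamma

print(f"relative slice energy spread: {reduced_model():.3%}")
```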
However, this model does not capture the change of the wake structure due to a reduced witness bunch spot size. Eventually, when the on-axis density of the positron bunch exceeds the density of the background electrons, we expect a significant disruption of the positron accelerating wake structure. Additionally, a finite initial background plasma temperature can smooth the piecewise constant focusing field, possibly affecting the results presented here. These effects will be the topic of further research and require extensive development of simulation tools to enable detailed studies.
IV. CONCLUSION
High-quality positron acceleration with sub-percent-level energy spread is possible in beam-driven plasma wakefield accelerators. Utilizing an electron drive beam and a narrow plasma column allows for high-charge and low-emittance positron beams. By shaping the longitudinal density profile of a transversally Gaussian witness beam, the energy spread can be controlled and kept at the sub-percent level; in particular, the correlated energy spread can be completely eliminated. The uncorrelated energy spread scales with the transverse beam spot size, and our results indicate that using collider-relevant beam emittances might yield energy spreads as low as 0.1%. Further research will aim to strengthen this result. Additionally, the efficiency might be increased by properly shaping the drive beam [20,21], by optimizing the transverse plasma profile [12], or by using the technique proposed here to generate longitudinally chirped bunches. Extending these results to higher efficiencies will pave the path to a plasma-based collider.
ACKNOWLEDGMENTS
APPENDIX: OPTIMAL BEAM LOADING ALGORITHM

The algorithm used in this study calculates, by exploiting the quasistatic approximation, the longitudinal current profile of a witness bunch that maintains the average accelerating gradient over the full bunch length. The average accelerating gradient is set at the bunch head, ⟨E_(z,head)⟩. The bunch is constructed recursively by stacking infinitesimal longitudinal slices of charge, one after the other, starting from the head and going toward the tail of the bunch. For each slice, the calculation of the optimal current is done using an optimized bisection procedure.

The steps of the algorithm are as follows. First, for any generic longitudinal slice i (i = 0 represents the bunch head; slices are counted starting from the head), the algorithm computes the weighted accelerating field right behind the current slice, assuming zero charge in the slice. We denote this quantity by ⟨E_(z,i)⟩. We then check whether |⟨E_(z,i)⟩| > |⟨E_(z,head)⟩|. The absolute value is used so that the algorithm works for both electron and positron witness bunches. If this condition is not fulfilled, then no further beam loading is possible and the recursive procedure terminates (i.e., the bunch tail is reached). On the other hand, if the condition is satisfied, then beam loading is possible and the algorithm initializes the optimized bisection procedure to determine the current in the slice. We recall that the current is set via the g_∥ function in Eq. (1). In order for the bisection procedure to converge, values of the current lower (g_(∥,min)) and higher (g_(∥,max)) than the optimal one need to be determined. Since we know that with no charge in the i-th slice we have |⟨E_(z,i)⟩| > |⟨E_(z,head)⟩|, we can set g_(∥,min) = 0. Determining g_(∥,max) requires a trial-and-error procedure where, starting from, e.g., g_(∥,max) = 1.2 g_(∥,i−1), the value of the current in the slice is progressively increased in a geometric way (i.e., typically multiplying the current by a factor of 10) until overloading of the wake is reached, i.e., until the condition |⟨E_(z,i)⟩| < |⟨E_(z,head)⟩| is satisfied. Note that every time the value of the current in the slice is changed, a solution of the quasistatic field equations for the slice is required in order to determine the current value of the weighted accelerating field behind the slice. Once g_(∥,min) and g_(∥,max) are known, the optimized bisection procedure begins. A new value of the current is computed according to

g_∥ = w_g g_(∥,min) + (1 − w_g) g_(∥,max),  (A1)

where w_g = (|⟨E_(z,head)⟩| − |⟨E_(z,min)⟩|)/(|⟨E_(z,max)⟩| − |⟨E_(z,min)⟩|), and where ⟨E_(z,min)⟩ and ⟨E_(z,max)⟩ are the averaged field values behind the slice corresponding to g_(∥,max) and g_(∥,min), respectively. The bisection procedure terminates, and the algorithm advances to the next slice (i + 1), if the averaged field computed with g_∥ converges to ⟨E_(z,head)⟩ within a predetermined tolerance; otherwise g_(∥,min) and g_(∥,max) are updated and a new optimized bisection step is performed. We note that by using Eq. (A1) instead of the classical bisection step, i.e., g_∥ = 0.5(g_(∥,min) + g_(∥,max)), the number of iterations required to reach convergence is significantly reduced.
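A compact sketch of this optimized bisection follows, with the quasistatic field solve abstracted into a user-supplied callable ez_behind(g) (one solve per evaluation); the function name, tolerance, and iteration cap are illustrative choices, not the paper's.

```python
def optimal_slice_current(ez_behind, ez_head, g_prev, tol=1e-4, max_iter=100):
    """Optimized bisection for the slice current g_par, following Eq. (A1).

    ez_behind(g) must return the weighted field <E_z,i> behind the current
    slice when the slice carries current g. Returns None when the bunch
    tail is reached (no further loading possible).
    """
    g_min = 0.0
    e_at_gmin = ez_behind(g_min)                 # = <E_z,max> in Eq. (A1)
    if abs(e_at_gmin) <= abs(ez_head):
        return None                              # tail reached
    g_max = 1.2 * g_prev if g_prev > 0 else 1.0  # trial upper bound
    e_at_gmax = ez_behind(g_max)                 # = <E_z,min> in Eq. (A1)
    while abs(e_at_gmax) >= abs(ez_head):        # grow geometrically until overloaded
        g_max *= 10.0
        e_at_gmax = ez_behind(g_max)
    for _ in range(max_iter):
        # Eq. (A1): weighted interpolation between the bracketing currents.
        w = (abs(ez_head) - abs(e_at_gmax)) / (abs(e_at_gmin) - abs(e_at_gmax))
        g = w * g_min + (1.0 - w) * g_max
        e = ez_behind(g)
        if abs(abs(e) - abs(ez_head)) < tol:
            return g
        if abs(e) > abs(ez_head):                # still underloaded
            g_min, e_at_gmin = g, e
        else:                                    # overloaded
            g_max, e_at_gmax = g, e
    return g
```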
Algorithm modifications for slice-by-slice matching and average bunch size

Incorporating the slice-by-slice matching procedure into the algorithm requires the following modification. At each slice i, the matched spot size σ_(r,matched) needs to be determined. This is done by exploiting a fixed-point method. We generate a Gaussian test-particle distribution with some rms size, σ_r(i). As an initial guess, the spot size from the previous slice, σ_r(i − 1), is used. Then, the test particles are evolved in time, without acceleration, in the focusing field given by (E_x − B_y)(i − 1) using a second-order-accurate particle pusher, until the second-order spatial moment of the distribution has saturated. The value of the moment is used to set a new value for σ_r(i), and the whole process is repeated until the sequence of values of σ_r(i) has converged. Note that the focusing field of slice i − 1 is used to compute σ_r(i) under the assumption that the longitudinal resolution is high enough that the focusing field changes only marginally between two adjacent slices. To further take into account the spot-size reduction due to acceleration of the particles, the averaged matched spot size σ̄_(r,matched) can be calculated via Eq. (4). Finally, σ_(r,matched) or σ̄_(r,matched) can be used to calculate the average accelerating field ⟨E_z⟩ via Eq. (2). It should be noted that σ̄_(r,matched) is only used to calculate ⟨E_z⟩; the bunch is still generated with a spot size of σ_(r,matched).
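The fixed-point search could be sketched as below, assuming the same leapfrog pusher as earlier and a user-supplied focusing force; particle counts, step counts, and tolerances are illustrative.

```python
import numpy as np

def matched_size_fixed_point(focus_field, eps_x, gamma, sigma0,
                             tol=1e-3, max_iter=50, n_part=2000,
                             n_steps=4000, dt=0.05):
    """Fixed-point search for the slice-matched rms size sigma_r(i).

    focus_field(x) -> normalized transverse force; for the step-like wake
    above, focus_field = lambda x: -alpha * np.sign(x). Test particles are
    evolved without acceleration until the second spatial moment saturates;
    its saturated value seeds the next iterate.
    """
    rng = np.random.default_rng(1)
    sigma = sigma0                      # initial guess, e.g. sigma_r(i - 1)
    for _ in range(max_iter):
        x = rng.normal(0.0, sigma, n_part)
        ux = rng.normal(0.0, eps_x / sigma, n_part)
        stds = []
        for step in range(n_steps):
            x += 0.5 * dt * ux / gamma
            ux += dt * focus_field(x)
            x += 0.5 * dt * ux / gamma
            if step % 100 == 0:
                stds.append(x.std())
        new_sigma = np.mean(stds[len(stds) // 2:])   # saturated moment
        if abs(new_sigma - sigma) < tol * sigma:
            return new_sigma
        sigma = new_sigma
    return sigma
```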
New Bounds for RIC in Compressed Sensing
This paper gives new bounds for the restricted isometry constant (RIC) in compressed sensing. Let Φ be an m × n real matrix and k be a positive integer with k ≤ n. The main results of this paper show that if the restricted isometry constant of Φ satisfies δ_{8ak} < 1 and

δ_{k+ak} < 3/2 − (1 + √((4a+3)² − 8))/(8a)

for a > 3/8, then a k-sparse solution can be recovered exactly via l_1 minimization in the noiseless case. In particular, when a = 1, 1.5, 2 and 3, we have δ_{2k} < 0.5746 and δ_{8k} < 1, or δ_{2.5k} < 0.7046 and δ_{12k} < 1, or δ_{3k} < 0.7731 and δ_{16k} < 1, or δ_{4k} < 0.8445 and δ_{24k} < 1.
Compressed sensing (CS) aims at recovering some original n-dimensional but sparse signal/image from linear measurements of dimension far fewer than n. Recently, a large number of researchers, including applied mathematicians, computer scientists and engineers, have begun to pay attention to this area owing to its wide applications in signal processing, communications, astronomy, biology, medicine, seismology and so on; see, e.g., the survey papers [1,2,19] and the monograph [14].
The fundamental problem in compressed sensing is reconstructing a high-dimensional sparse signal from a remarkably small number of measurements. We aim to recover a sparse solution x ∈ R^n of the underdetermined system Φx = y, where y ∈ R^m is the available measurement and Φ ∈ R^{m×n} is a known measurement matrix (with m ≪ n). The mathematical model is to minimize the number of nonzero components of x, i.e., to solve the following l_0-norm optimization problem:

min ‖x‖_0 subject to Φx = y, (1)

where ‖x‖_0 is the l_0-norm of the vector x ∈ R^n, i.e., the number of nonzero entries in x (this is not a true norm, as ‖·‖_0 is not positively homogeneous). A vector x with at most k nonzero entries, ‖x‖_0 ≤ k, is called k-sparse. However, (1) is combinatorial and computationally intractable, and one popular and powerful approach is to solve it via l_1 minimization (its convex relaxation):

min ‖x‖_1 subject to Φx = y. (2)

One of the most commonly used frameworks for sparse recovery via l_1 minimization is the Restricted Isometry Property (RIP) introduced by Candès and Tao [9]. For some integer k ∈ {1, 2, ..., n}, the k-restricted isometry constant (RIC) δ_k of a matrix Φ is the smallest number in (0, 1) such that

(1 − δ_k)‖x‖_2² ≤ ‖Φx‖_2² ≤ (1 + δ_k)‖x‖_2² (3)

holds for all k-sparse vectors x. We say that Φ has the k-RIP if there is a k-RIC δ_k ∈ (0, 1) such that the above inequalities hold. Furthermore, if for integers k_1, k_2, ..., k_s there exist δ_{k_1}, δ_{k_2}, ..., δ_{k_s} ∈ (0, 1) such that the corresponding inequalities hold, we say that Φ has the {k_1, k_2, ..., k_s}-RIP. Here, δ_k ∈ (0, 1) is often used in the literature (see, e.g., [13,14]), and δ_k is monotone in k (see, e.g., [3,4]), i.e., δ_{k_1} ≤ δ_{k_2} for k_1 ≤ k_2. Thus, Φ having the {k_1, k_2, ..., k_s}-RIP is the same as Φ having the max{k_1, k_2, ..., k_s}-RIP.
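Problem (2) is a linear program after the standard split −t ≤ x ≤ t; the sketch below solves it with scipy.optimize.linprog on a small random Gaussian instance (matrix sizes chosen for illustration only).

```python
import numpy as np
from scipy.optimize import linprog

def l1_minimize(Phi, y):
    """Basis pursuit: min ||x||_1 s.t. Phi @ x = y, as an LP in (x, t)."""
    m, n = Phi.shape
    c = np.concatenate([np.zeros(n), np.ones(n)])   # minimize sum(t)
    A_ub = np.block([[np.eye(n), -np.eye(n)],       #  x - t <= 0
                     [-np.eye(n), -np.eye(n)]])     # -x - t <= 0
    b_ub = np.zeros(2 * n)
    A_eq = np.hstack([Phi, np.zeros((m, n))])       # Phi x = y
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=y,
                  bounds=[(None, None)] * n + [(0, None)] * n)
    return res.x[:n]

rng = np.random.default_rng(0)
m, n, k = 40, 100, 4
Phi = rng.normal(size=(m, n)) / np.sqrt(m)          # random Gaussian matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.normal(size=k)
x_hat = l1_minimize(Phi, Phi @ x_true)
print(np.max(np.abs(x_hat - x_true)))               # exact up to solver tolerance
```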
In addition, if k + k′ ≤ n, the (k, k′)-restricted orthogonality constant (ROC) θ_{k,k′} is the smallest number that satisfies

|⟨Φx, Φx′⟩| ≤ θ_{k,k′} ‖x‖_2 ‖x′‖_2

for all k-sparse x and k′-sparse x′ with disjoint supports. Candès and Tao [9] showed the link between RIC and ROC: θ_{k,k′} ≤ δ_{k+k′}.
By the definition (3), one observes that

δ_k = max_{|T| ≤ k} ‖Φ_T^⊤ Φ_T − I‖,

where ‖·‖ denotes the spectral norm of a matrix (see, e.g., [18]) and Φ_T is the submatrix formed by the columns of Φ indexed by T. Clearly, it is hard to compute RICs for a given matrix Φ, because it essentially requires that every subset of columns of Φ with certain cardinality approximately behaves like an orthonormal system. Moreover, as shown by Zhang [20], for a nonsingular matrix (transformation) Q ∈ R^{n×n}, the RIP constants of Φ and QΦ can be very different. However, a widely used technique for avoiding checking the RIP condition directly is to generate the matrix randomly and to show that the resulting random matrix satisfies the RIP with high probability [17].
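The combinatorial nature of δ_k can be made concrete: the brute-force evaluation below enumerates all size-k supports (by monotonicity the maximum is attained at |T| = k) and is feasible only for tiny matrices, which is exactly the point.

```python
import numpy as np
from itertools import combinations

def exact_ric(Phi, k):
    """Exact k-RIC: delta_k = max over |T| = k of || Phi_T^T Phi_T - I ||_2.
    Cost grows as C(n, k), so this is only usable for tiny matrices."""
    n = Phi.shape[1]
    delta = 0.0
    for T in combinations(range(n), k):
        eig = np.linalg.eigvalsh(Phi[:, T].T @ Phi[:, T])
        delta = max(delta, np.abs(eig - 1.0).max())  # spectral norm of G - I
    return delta

rng = np.random.default_rng(1)
Phi = rng.normal(size=(12, 20)) / np.sqrt(12)        # normalized Gaussian matrix
print(exact_ric(Phi, 2))
```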
Although the RIP condition is difficult to check, it is of independent interest to study the bounds for the RIC in CS, since l_1-norm minimization can recover a sparse signal under various conditions on δ_k, δ_{2k} and θ_{k,k′}, such as the condition δ_k + θ_{k,k} + θ_{k,2k} < 1 in [9], δ_{2k} + θ_{k,2k} < 1 in [10], and δ_{1.25k} + θ_{k,1.25k} < 1 in [4].
Many previous results in compressed sensing refer to δ_{2k}, probably because it implies that k-sparse signals remain well separated in the measurement space. The first major result of this sort was established by Candès [6]; namely, δ_{2k} ≤ √2 − 1 is sufficient for k-sparse signal reconstruction. Recently, Cai and Zhang [5] obtained the sufficient condition δ_{2k} < 1/2. To the best of our knowledge, the bound on δ_{2k} for sparse recovery has gradually improved from √2 − 1 (≈ 0.4142) to 0.5 in recent years. The details are listed in Table 1 below.
The main contribution of the present paper is to give new bounds for the RIC in CS in the following theorem. Here, for x ∈ R^n, we define the best k-sparse approximation x^{(k)} ∈ R^n as the vector obtained from x by setting all but the k largest entries (in absolute value) to zero.

Theorem 1 Let x be a feasible solution to (1) and let x^{(k)} be the best k-sparse approximation of x. If the following inequalities hold:

δ_{8ak} < 1 (8)

and

δ_{k+ak} < 3/2 − (1 + √((4a+3)² − 8))/(8a) (9)

for a > 3/8, then the solution x̂ to the l_1 minimization problem (2) satisfies the error bound (10), with some positive constant C_0 < 1 given explicitly by (16). In particular, if x is k-sparse, the recovery is exact.
From Theorem 1, when a = 1, 1.5, 2 and 3, we get δ_{2k} < 0.5746, δ_{2.5k} < 0.7046, δ_{3k} < 0.7731 and δ_{4k} < 0.8445, each under the corresponding assumption δ_{8ak} < 1. Observing Table 1, under the extra assumption δ_{8ak} < 1, our conditions are all weaker than the ones known in the literature. Note that the k-RIP condition implies that every subset of columns of Φ with cardinality at most k approximately behaves like an orthonormal system. In the context of (large-scale) sparse optimization, it is often assumed that k ≪ n. Recently, Candès and Recht [7] showed that a k-sparse vector in R^n can be efficiently recovered from 2k log n measurements with high probability, i.e., m = O(2k log n). In this case, 8ak should be less than m for smaller a. Thus, 8ak < m and 8ak ≪ n make sense, and our extra assumption is meaningful and valuable in large-scale sparse optimization.
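The numerical values quoted above follow directly from (9); a few lines of Python reproduce them:

```python
import numpy as np

def ric_bound(a):
    """Right-hand side of (9): 3/2 - (1 + sqrt((4a+3)^2 - 8)) / (8a), a > 3/8."""
    return 1.5 - (1.0 + np.sqrt((4.0 * a + 3.0) ** 2 - 8.0)) / (8.0 * a)

for a in (1.0, 1.5, 2.0, 3.0):
    # Prints 0.5746, 0.7046, 0.7731, 0.8445 for the four values of a.
    print(f"a = {a}: delta_(1+a)k < {ric_bound(a):.4f}, requires delta_{int(8 * a)}k < 1")
```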
The organization of this paper is as follows. In the next section, we establish some key inequalities. In Sect. 3, we prove our main result. In Sect. 4, we conclude this paper with some remarks.
Key Inequalities
In this section, we will give some inequalities, which play an important role in improving the RIC bound for sparse recovery in this paper.
We begin with the following interesting and important inequality, which states the connection between the norms l_0, l_1, l_2, l_∞ and l_{−∞}. Here, we define ‖x‖_{−∞} := min_i |x_i| (in fact, l_{−∞} is not a norm, since the triangle inequality does not hold). For convenience, we call (11) the Norm Inequality; it is essentially (6) in [3].
Proposition 1 (Norm Inequality)
For any x ∈ R^n with x ≠ 0, the inequality (11) holds; furthermore, a more general inequality of the same type can be obtained.

Throughout the paper, let x̂ be a solution to the minimization problem (2), and let x ∈ R^n be a feasible one, i.e., Φx = y. Clearly, ‖x̂‖_1 ≤ ‖x‖_1. We let x^{(k)} ∈ R^n be defined as above. Without loss of generality we assume that the support of x^{(k)} is T_0.
Denote h = x̂ − x, and let h_T be the vector equal to h on an index set T and zero elsewhere. We decompose h into a sum of vectors h_{T_0}, h_{T_1}, h_{T_2}, ..., where T_1 corresponds to the locations of the ak largest coefficients of h_{T_0^C} (with T_0^C = T_1 ∪ T_2 ∪ ···); T_2 to the locations of the 4ak largest coefficients of h_{(T_0 ∪ T_1)^C}; T_3 to the locations of the next 4ak largest coefficients of h_{(T_0 ∪ T_1)^C}; and so on. This decomposition is recorded in (12). Here, the sparsity of h_{T_0} is at most k, the sparsity of h_{T_1} is at most ak, and the sparsity of each h_{T_j} (j ≥ 2) is at most 4ak.
In order to get a new bound on the RIC, for the above decomposition (12) we define the quantity ρ in (13). Obviously, ρ ∈ [0, 1], and the auxiliary bounds (14) and (15) follow. Applying the Norm Inequality, we can give several inequalities for h which are very useful in the proof of our main results.
At the end of this section, we give two lemmas which connect the norms of Φh_{T_j} and h_{T_j}.
Lemma 2
Let h_{T_0} and h_{T_1} be given by (12); then the bound stated in the lemma holds.

Proof Since the supports T_0 and T_1 are disjoint, ‖h_{T_0} + h_{T_1}‖_2² = ‖h_{T_0}‖_2² + ‖h_{T_1}‖_2², and the claimed chain follows, its second inequality being derived from (11).
Lemma 3
Let h_{T_0}, h_{T_1}, h_{T_2}, ..., and ρ be given by (12) and (13), respectively; then the bound stated in the lemma holds.

Proof By direct calculation one obtains a chain of estimates in which the first inequality holds by the triangle inequality, the second holds due to (3) and (6), the third follows from (14) and (15), and the first equality follows from the decomposition (12). Hence, the desired result follows.
Proof of the Main Result
In this section, we prove our main result. For simplicity, we first define a quadratic function f(ρ) of the variable ρ. Clearly, it is strictly concave, and its maximum value is easily obtained by setting its derivative to zero; the resulting constant C_0 is given in (16). Before proving our main result, we show that the RIP bound in (9) is a sufficient condition for C_0 < 1.

Lemma 4 If (8) and (9) hold, then C_0 < 1.

Proof From (9), an equivalent inequality is easily verified. Since 0 ≤ δ_{8ak} ≤ 1, and by (16), we conclude that if (9) holds, then C_0 < 1.
Now we begin to prove our main result.
Proof of Theorem 1 The proof proceeds in two steps, which is a common approach in the literature [4,6]. The first step is to prove (17); the second step shows that ‖x̂ − x‖_1 is appropriately small. For the first step, we note that Φh = 0. From Lemmas 2 and 3, the stated inequality chain holds, where the first inequality is derived from (13); combining it with (16), we get (17).
For the second step, the first inequality of the corresponding chain follows from (12) in [6]; together with (17), this yields the claimed bound, which completes the proof of (10).
In particular, if x is k-sparse, then x − x^{(k)} = 0, and hence x = x̂ by (10).
Conclusion
In this paper, we have shown that, when a > 3/8, conditions (8) and (9) yield several interesting RIC bounds for measurement matrices, such as δ_{2k}, δ_{2.5k}, δ_{3k}, and δ_{4k}. For an intuitive analysis, we draw the curve relating t (:= a + 1) to the bound for δ_{tk} (Fig. 1). From Fig. 1, it is easy to see that the bounds for δ_{tk} increase fast for 1.75 ≤ t ≤ 3 and are larger than 0.9 when t ≥ 6. In addition, Davies and Gribonval [11] have given detailed counterexamples showing that the bound on δ_{2k} cannot exceed 1/√2 ≈ 0.7071. Since 0.5746 < 0.7071, we wonder whether there is a better way to improve the bound 0.5746 for δ_{2k} without the extra assumption δ_{8k} < 1. Further research topics are therefore to remove the extra assumption δ_{8ak} < 1 and to reduce the gap between 0.5746 and 0.7071.
Profile of information and communication technologies (ICT) skills of prospective teachers
ICT skills are skills that prospective teachers must possess. Since all prospective teachers own smartphones, these devices can be employed in ICT-based blended learning. The purpose of this study was to determine the profile of ICT skills of prospective teachers through ICT-based learning on the concept of cell communication. ICT skills, framed by the ETS framework (Binkley, 2012), were measured with two instruments, observation and self-assessment, applied to prospective teachers given an article-review assignment. The study was conducted on 104 second-semester prospective teachers at one of the universities in West Java. The results show that prospective teachers are at the level of mastering independently for all ICT skill indicators. The ICT skill indicators cover basic knowledge, download, search, navigate, classify, integrate, evaluate, cooperate and create. It can be concluded that prospective teachers have ICT skills as pedagogical skills.
Introduction
The development of the world from the industrial era of the 20th century to the digital era of the 21st century has increased awareness of the importance of skills that support success in 21st-century life. Implementing mobile-based technology in learning is recommended as a shift away from conventional learning. Educational aspects of conventional learning, including cooperative, contextual, constructivist and authentic learning, are also attractive to apply in ICT-based learning. ICT-based learning facilitates learning that takes place over time and is not limited by space, and information technology is used effectively to improve several skills as outcomes of the learning process. The use of technology in the learning process means applying ICT-based learning, which involves several dimensions: easy access to computers, the use of the internet as a learning resource, and training in information-technology skills within the learning process. The rapid development of information technology is turning the learning process into internet- and mobile-based learning.
The use of information-technology networks is effective for communicating and for finding reference sources to support the learning process. ICT skills, which are among the 21st-century skills, need to be provided and trained in ICT-based learning so that students come to possess them. Educators must be literate in information technology so that they can guide their students in learning with internet facilities [1, 2], where interaction between educators and students uses information and communication technology to facilitate the educator's role in the learning process [3][4][5]. In some countries, the use of information and communication technology as a learning tool has been integrated into the curriculum [1], and portable technology, such as smartphones, is even used in the learning process [3]. The role of prospective teachers in the 21st century makes it important to provide ICT skills: the modernization of education requires a pedagogical model for science learning that situates subject matter within blended-learning methods [5] and treats ICT skills as pedagogical skills in digital-based learning [4]. Skills, competencies, and attitudes toward information technology need to be developed and trained in prospective teachers [6], considering that future learning will use online-based information and communication technology. ICT-based learning exposes learners to many online learning environments [7] and offers an opportunity to improve the academic abilities of prospective teachers.
Methods
This study aims to determine the ICT skills of prospective teachers. The research used a self-assessment questionnaire and an observation sheet to measure ICT skills, with qualitative descriptive data analysis to understand the ICT skills of prospective teachers in ICT-based cell-biology (Biosel) learning. The population was 104 second-semester prospective biology teachers taking the Biosel course at one of the State Islamic University colleges in West Java in the 2017/2018 academic year. Data were collected with an observation sheet during the ICT-based learning process on the concept of cell communication and with a self-assessment questionnaire after the learning ended. In the ICT-based learning process, prospective teachers were asked to individually review articles on the theme of cell communication. ICT skills were measured using the ICT skills framework from ETS [8], which includes basics, downloading, searching, navigating, classifying, integrating, evaluating, cooperating, and creating. Data were gathered through observation with an observation rubric and through an independent assessment questionnaire using a Likert scale. ICT skills comprise four levels of criteria: 1) requires intensive guidance, 2) masters by learning with others, 3) masters independently, and 4) proficient with development. Data analysis measured ICT skill levels and then calculated the percentage of prospective teachers at each ICT skill level. The ICT skills framework according to ETS, used to measure ICT skills in teacher candidates, is described in Table 1.

Table 1. ICT skills framework according to ETS [8] (No, Category, Skill):
1. Basic: Can open software, sort and store information on a computer, and use other simple skills in operating computers and other software.
2. Download: Can download various types of information from the internet.
3. Search: Knows how to get access and information.
4. Navigate: Able to orient oneself in digital networks; learning strategies for using the internet.
5. Classify: Able to organize information according to a certain classification scheme.
6. Integrate: Can compare and collect various types of information related to multimodal texts.
7. Evaluate: Identifies and then chooses a representative website address as a source of information when browsing files, images, animations and videos.
8. Identifies and then chooses the right computer application to process information data to produce ICT-based products.
9. Cooperate: Can take part in internet-based learning interactions and utilize digital technology to work together and participate in internet networks.
10. Create: Creates an animation model, a series of images, a chart or flowchart that visualizes understanding of a concept, using the appropriate application program to create mind maps or videos.
11. Makes e-mails, sends e-mails and knows how e-mail works.

From the ICT skill scores, the percentage for each indicator was computed; an ANOVA test was then performed to examine the equality of variances of average ICT skills and the ICT skill level profiles of prospective teachers as obtained through observation and self-assessment.
Result and Discussion
The ICT skills of prospective teachers were measured through observation and self-assessment questionnaires during the ICT-based learning process on the concept of cell communication; the results are described in Figures 1 and 2. Figures 1 and 2 illustrate the percentage of ICT skills of prospective teachers, through both observation and self-assessment, for all indicators: on average, prospective teachers are at level three, mastering independently. This shows that prospective teachers have mastered ICT independently and use technology as a tool and a learning resource. Because ICT skills were measured through both observation and self-assessment, a test of the equality of variances between the two was carried out; the result is given in Table 2. From the analysis it can be seen that F_count = 1.98 < F_crit = 3.99, so H_0 is accepted, meaning that the variance of ICT skills of prospective teachers measured by observation does not differ from that measured by self-assessment. ICT skills are pedagogical skills for 21st-century science teacher candidates, able to bridge the educational context through blended-learning approaches [5], and they are pedagogical practices reflected in digital-based learning [4]. E-learning that is easy for students to access and apply in the learning process can improve ICT literacy [9], and ICT-based network learning is 21st-century learning [10] that facilitates the transfer of mental cognition [11]. Science learning in the 21st century invites students to learn and work well using 21st-century skills, unpacking the career and life-skills domain of the new learning paradigm [12] through the development and application of m-learning and e-learning methods in the learning environment [13]. Teaching with cellular and network technology holds potential [14]; learning together in teamwork, using technology, is a pragmatic model for effective 21st-century team-based learning [15]; and using the internet as a learning resource can improve students' critical thinking, as an implication of navigating and downloading the right files as part of scientific reading literacy through internet media [16]. Blended learning, combining conventional learning in the classroom with e-learning outside it, is a multiliteracy learning approach [17]; utilizing internet information technology in the learning process poses a problem-solving question that differs between media studies and science-and-technology studies [18].
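A minimal version of this variance comparison, here with scipy.stats.f_oneway on hypothetical score arrays (the study's raw data are not reproduced in the text), would look as follows:

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
# Hypothetical level scores (1-4 scale) for 104 participants per instrument.
obs = np.clip(rng.normal(3.0, 0.4, 104).round(), 1, 4)
self_assess = np.clip(rng.normal(3.05, 0.4, 104).round(), 1, 4)

F, p = f_oneway(obs, self_assess)
print(f"F = {F:.2f}, p = {p:.3f}")  # F below F_crit (3.99 here) keeps H0
```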
Skills, competencies, and attitudes toward information technology, together with educator support, are keys to the success of online learning programs [6], so the ICT skills profile of prospective teachers needs to be known before applying e-learning; academic success in e-learning has mostly been studied in distance education that sociologically explores online learning environments [7]. In this study, the ICT skills profile of prospective teacher students was found to be at the level of mastering independently, which means they are able to apply e-learning in the learning process of any subject as a strategy to improve 21st-century skills by mastering digital literacy in higher education [3].
The practice of ICT-based learning need not rely only on computers; it can also use the smartphones already owned by all prospective teacher students. Digital literacy can be trained through ICT-based learning [19], regardless of social status, economics, ethnicity and gender [20], because smartphone devices are owned by everyone, so implementing ICT-based learning is a social and pedagogical practice applicable at any level of education, even in secondary schools [1]. Learning with a smartphone [2], by developing mobile applications for learning practice, is a social and ethical practice applied in learning [21], with the hope of improving the 21st-century skills required by employment demanding technological knowledge [22]. Thus, e-learning is not only intended for distance education but also plays a role in habituating the training of various 21st-century skills, so that m-learning can be applied in learning practice [23] and learning objectives focused on the outcomes of interaction in the learning process can be achieved [24].
Good ICT skills in prospective teachers provide a great opportunity to deliver learning that builds good ICT skills in their students. Prospective teachers with a good level of ICT skills can apply mobile learning using smartphone applications oriented toward online discussion forums, which can improve critical thinking skills [25] and other 21st-century skills. Mobile learning has been attempted, but support for interaction between students through online discussion forums, and for fostering thinking skills such as the 21st-century skills, is still lacking.
Conclusion
The ICT skills profile of prospective teachers, at the level of mastering independently for all ICT skill indicators, shows that prospective teachers have good skills for learning through an ICT approach, utilizing the various smartphone applications capable of developing various skills, such as those needed in the 21st century.
Bifurcations and Chaos in Current-Driven Induction Motor
In this paper, a model of a PI-speed-controlled, current-driven induction motor based on indirect field oriented control (IFOC) is addressed. To assess the complex dynamics of the system, different dynamical properties, such as the stability of equilibrium points, bifurcation diagrams, the Lyapunov exponents spectrum, and phase portraits, are characterized. It is found that the induction motor model exhibits chaotic behavior when its parameters fall into a certain region. Small variations of the PI parameters and the load torque affect the dynamics and stability of this electric machine. A chaotic attractor has been observed, and the speed of the motor oscillates chaotically. Numerical simulation results validate the theoretical analysis.
I. INTRODUCTION
Chaos, also called complex dynamics, is currently one of the most active topics in the study of nonlinear dynamical systems [1]. The nonlinear interactions within a system give rise to chaos, which is very sensitive to the system parameters and the initial conditions of the states: small variations of these parameters result in great changes in the system dynamics [2]. Nowadays, engineers and industries pay much attention to the results of such research on real systems. Bifurcation is a subfield of nonlinear dynamical systems and provides a quantitative measure; observation of bifurcation diagrams enables qualitative and quantitative deductions about the behavior and dynamics of a system. Many theoretical and numerical studies have been carried out on qualitative problems of this kind. In practice, bifurcation analysis has been applied as a helpful method for exploring the behavior of such systems, which exhibit varieties of bifurcations and chaotic oscillations following the period-doubling route to chaos [3][4][5]. The indirect field oriented controlled (IFOC) induction motor is widely used in industry due to its high-torque operation. The parameters of the induction motor may change with temperature, aging, estimation errors due to weaknesses in the estimation algorithms, and other environmental causes [6][7][8][9][10][11][12][13]. The IFOC induction motor dynamics is extremely susceptible to variations in the motor configuration: rotor resistance, mutual inductance, stator resistance, inertia, PI controller parameters, and load torque. Steady-state performance may be violated by these variations, which also affect the dynamics of the induction motor drive system and may lead to bifurcations in the motor dynamics and, consequently, to speed and current fluctuations or even damage to the motor [14,15].

[Fig. 1: Speed control scheme of induction motor drive with IFOC system.]

A quantitative bifurcation analysis can be used for obtaining: 1) the stability and robustness; 2) the limiting control gains; 3) the limiting variations in the rotor time constant; 4) the dangerous bifurcations to avoid; and 5) the design of efficient controllers for different applications [16][17][18]. The main objective of this brief is to explore the bifurcations in the reduced model of the current-fed induction motor arising from changes in the PI controller gains and the load torque. Bifurcation values of the parameters are obtained from the linearization of the system model near the stationary point. The locus of the equilibrium point as a function of load torque is also presented, and the Lyapunov exponents spectrum is included to confirm the chaotic behavior. The following observations are obtained from the analysis: 1) the stationary-point locus depends on the load torque and the rotor resistance estimation; 2) necessary and sufficient conditions for uniqueness of the stationary point are given; 3) a Hopf bifurcation is notable for a specific range of load torque; and 4) period-doubling bifurcation can be avoided with higher proportional and lower integral control parameters in the speed feedback. The remainder of this paper is arranged as follows. In Section II, the induction motor adopting IFOC with current feeding is modeled; based on this model, the stationary points and stability boundaries are discussed.
Section III presents the bifurcation diagrams obtained by numerical simulations under the effect of the PI parameters and the load torque. Finally, Section VI draws the conclusions.
II. IFOC INDUCTION MOTOR MODEL
A general IFOC scheme for the current-fed induction motor with speed regulation is shown in Fig. 1. The control scheme has a PI controller for speed regulation, whose output is used to produce the torque reference component. The model of the IFOC current-driven induction motor can be characterized by the rotor-flux and mechanical equations (1)-(3) [14], where the subscripts d and q represent the corresponding direct-axis and quadrature-axis quantities, respectively; ψ_rd and ψ_rq are the components of the rotor flux; ω is the mechanical rotor speed; ω_sl is the slip speed; and 1/τ_r is the inverse of the rotor time constant. The system parameters are identified in Table 1. The state variables of the IFOC induction motor are defined as x_1 = ψ_rq, x_2 = ψ_rd, x_3 = (ω_ref − ω), and x_4 = (k_p + k_i ∫ dt)(ω_ref − ω). Following Fig. 1 and (1)-(3), the model of the IFOC current-driven induction motor can be written as the fourth-order system (4)-(7), where k_p and k_i are the speed PI controller gains and κ is the ratio between the estimated and the real inverse rotor time constants. In general, the IFOC induction motor is fully fluxed by exciting it with a constant direct-axis current, while the torque is regulated by controlling the quadrature-axis current.
A. Equilibria
It is essential to characterize the locus, uniqueness, and stability of the equilibrium points of the system (4)-(7) and their dependence on the load torque. An equilibrium point is a solution x* = [x_1* x_2* x_3* x_4*] of the system equations with vanishing right-hand sides. The flux equilibrium values can be obtained from (8) and (9) as (10)-(11), in terms of a dimensionless variable proportional to x_4*. From (10) and (11), and collecting (12)-(14), the equilibrium point can be deduced as in (15); this gives a nonunique equilibrium point which depends on the dimensionless variable. The variable satisfies the third-order polynomial equation (17), whose dimensionless coefficients depend on the tuning gain and the load torque. The real roots of (17) give the equilibrium-point values for any given gain and any given load torque. Note that (17) has at most three real solutions, which can be obtained by applying the root-locus method to an appropriate form of (17). As shown in Fig. 2b, the equilibria are nonunique for κ > 3: three distinct real equilibrium points exist for a definite range of the load torque.
B. Local Stability Analysis
The stability of an equilibrium point is investigated via the eigenvalues λ of the characteristic equation det(λI − J) = 0, where I is the identity matrix and J is the Jacobian matrix of the system (4)-(7) evaluated at the equilibrium point (x_1*, x_2*, x_3*, x_4*), with the expression given by (20). The fourth-order characteristic polynomial has four possible root configurations: two pairs of complex-conjugate roots, two real roots plus a pair of complex-conjugate roots, or four real roots. The numerical analysis is focused on the influence of varying the proportional controller gain. Simulation results are observed for the different load-torque cases listed in Table 2. The dynamics undergo qualitative changes as the parameter varies: when the load torque is 2.3 N·m and the gain is 0.002, a stable limit cycle is observed; however, for a gain of 0.008, the limit cycle disappears and a stable equilibrium point occurs.
It is evident that a complex-conjugate eigenvalue pair crosses the imaginary axis, causing a Hopf bifurcation to occur.
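The eigenvalue test can be sketched as below; the 4×4 Jacobian entries are placeholders, since the expression (20) is not reproduced above.

```python
import numpy as np

def classify_equilibrium(J, atol=1e-6):
    """Classify an equilibrium from the eigenvalues of its 4x4 Jacobian J.

    A Hopf bifurcation is signalled by a complex-conjugate pair crossing
    the imaginary axis while the remaining eigenvalues stay in Re < 0.
    """
    lam = np.linalg.eigvals(J)
    if np.all(lam.real < -atol):
        return "asymptotically stable", lam
    if np.any(np.isclose(lam.real, 0.0, atol=atol) & (lam.imag != 0)):
        return "Hopf boundary (pure imaginary pair)", lam
    return "unstable", lam

# Hypothetical Jacobian, for illustration only (not the paper's (20)).
J = np.array([[-1.0,  5.0,  0.0,  0.1],
              [-5.0, -1.0,  0.2,  0.0],
              [ 0.0,  0.5, -0.1,  1.0],
              [ 0.0,  0.0, -1.0, -0.2]])
print(classify_equilibrium(J)[0])
```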
C. Bifurcations
In this subsection, the various dynamical behaviors obtained with respect to the system parameters (the PI gains k_p and k_i, the load torque, and the ratio κ) are characterized, covering the bifurcation behaviors that the IFOC current-driven induction motor can exhibit. System (4)-(7) is solved numerically using the fourth-order Runge-Kutta method with a fixed time step Δt = 0.002 s for each set of parameters used in this paper, integrating the model for a long time. The bifurcation diagram is used to identify the type of scenario giving rise to complex dynamics, and the dynamics of the system can also be identified from its spectrum of Lyapunov exponents λ_1 ≥ λ_2 ≥ λ_3 ≥ λ_4: for fixed points, λ_4 < λ_3 < λ_2 < λ_1 < 0; for periodic orbits, λ_1 = 0 and λ_4 < λ_3 < λ_2 < 0; for quasiperiodic orbits, λ_1 = λ_2 = 0 and λ_4 < λ_3 < 0; and for chaotic dynamics, λ_1 > 0, λ_2 = 0, and λ_4 < λ_3 < 0. While the main bifurcation parameter is varied, the remaining system parameters are kept at the values defined in the previous subsection.
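The bifurcation-diagram construction described here can be sketched as follows; since Eqs. (4)-(7) are not reproduced above, the vector field is left as a user-supplied callable rhs_factory(ki), and the step counts are illustrative.

```python
import numpy as np

def rk4_step(f, x, dt):
    """One classical fourth-order Runge-Kutta step for an autonomous system."""
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def bifurcation_points(rhs_factory, ki_values, x0, dt=0.002,
                       n_transient=50000, n_sample=50000):
    """Sweep a gain, integrate past the transient, and collect local maxima
    of the speed difference x3: one column of the bifurcation diagram each.
    rhs_factory(ki) must return the vector field of Eqs. (4)-(7)."""
    diagram = []
    for ki in ki_values:
        f = rhs_factory(ki)
        x = np.array(x0, dtype=float)
        for _ in range(n_transient):            # discard the transient
            x = rk4_step(f, x, dt)
        maxima, x3_pp, x3_p = [], x[2], x[2]
        for _ in range(n_sample):
            x = rk4_step(f, x, dt)
            if x3_p > x3_pp and x3_p >= x[2]:   # local maximum of x3
                maxima.append(x3_p)
            x3_pp, x3_p = x3_p, x[2]
        diagram.append((ki, maxima))
    return diagram
```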
For a load torque of 2 N·m, a proportional gain of 0.55, and κ = 3, the bifurcation diagram and the corresponding Lyapunov exponents spectrum for varying integral gain k_i are depicted in Figs. 3a and 3b, respectively. The bifurcation diagram is obtained by capturing and plotting the maximum points of the speed difference as a function of k_i in the range 0 < k_i ≤ 0.003. According to Figs. 3a and 3b, the following bifurcation sequence emerges as the bifurcation parameter is increased. For k_i near zero, the system displays chaotic dynamics. Increasing the bifurcation parameter beyond the critical value k_i ≈ 10^−3, a period-8 oscillation appears. When k_i is increased up to ≈ 1.2 × 10^−3, period-4 behavior becomes clear; thus, a period-doubling sequence leads toward quasi-periodic oscillation. The bifurcation diagram and the Lyapunov exponents spectrum agree well.
When the parameter is increased further, up to approximately 1.2 × 10^-3, a period-4 orbit becomes clear; a period-doubling sequence thus develops, eventually leading to a quasi-periodic oscillation. The bifurcation diagram and the Lyapunov-exponent spectrum agree well. The numerical phase portraits and the corresponding speed-difference waveforms in Fig. 4 justify these bifurcation sequences, showing clearly that the speed difference goes to chaos through a period-doubling route as the gain parameter decreases gradually. As can be seen from Fig. 4a, the trajectory settles onto a stable solution; below the critical value of approximately 0.006, the IFOC induction motor exhibits limit-cycle operation (i.e., oscillations). The corresponding behavior is depicted in Fig. 4b, where the speed difference oscillates with constant amplitude for a gain of 0.0025. As the bifurcation parameter decreases further, the speed difference exhibits stable period-2 orbits, as shown in Fig. 4c, and the onset of the period-4 solution is predicted for a gain of 0.00129 (Fig. 4d). The collision between the period-4 limit cycle and the unstable equilibrium point takes place at a gain of 0.0005 (Fig. 4e), giving rise to the chaotic attractor. The sensitivity of the IFOC induction motor dynamics to the control parameters, the load torque, and the rotor-resistance ratio is analyzed through the bifurcation diagrams of Fig. 5, which highlight symmetry, periodic windows, and period-doubling bifurcations. According to the bifurcation diagram of Fig. 5b, a range of multiple attractors can be identified near 0.76; for values in this domain, the system dynamics depend strongly on the initial conditions of the states, and the system exhibits the notable phenomenon of multiple coexisting attractors. Two different attractors can be obtained for different initial states (see Fig. 6): the chaotic attractor of Fig. 6a is generated from the initial conditions x1(0) = 0, x2(0) = 0.4, x3(0) = 150, and x4(0) = 1, while a period-3 attractor (Fig. 6b) is obtained from the initial values x1(0) = 0, x2(0) = 0.4099, x3(0) = 119.99, and x4(0) = 1. It is important to note that this multistability, involving the coexistence of two different attractors, had not previously been reported for an IFOC induction motor drive system; it therefore constitutes a useful contribution to the study of the drive's dynamics. Moreover, the appearance of coexisting multiple attractors is undesirable and calls for control. A sketch of the bifurcation-diagram construction is given below.
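A hedged sketch of how such a bifurcation diagram can be assembled, reusing the rk4 routine sketched above; make_f(p) is a placeholder returning the model right-hand side for parameter value p, and coord selects the state whose peaks are plotted (e.g., the speed difference):

import numpy as np

def bifurcation_diagram(make_f, param_values, x0, dt=0.002,
                        n_transient=100_000, n_sample=100_000, coord=3):
    # For each parameter value, integrate, discard the transient, and
    # record the local maxima of one state coordinate; plotting the
    # (parameter, peak) pairs reproduces diagrams like Figs. 3a and 5.
    points = []
    for p in param_values:
        traj = rk4(make_f(p), x0, dt, n_transient + n_sample)
        s = traj[n_transient:, coord]
        mid = s[1:-1]
        peaks = mid[(mid > s[:-2]) & (mid > s[2:])]
        points.extend((p, y) for y in peaks)
    return np.array(points)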
VI. CONCLUSION
In this brief, a dynamical analysis of the IFOC current-fed induction motor system model has been carried out. The uniqueness and nonuniqueness of the equilibria have been investigated as functions of the estimated rotor resistance ratio.
The uniqueness of the equilibrium point is ensured as long as the estimated rotor resistance ratio remains below roughly 300%; for higher values of the ratio, three equilibrium points are created, two of which are asymptotically stable while the third is unstable. The effect of the PI controller gains and of the load torque on the bifurcations has been addressed in detail. The bifurcation analysis shows that complex dynamics and nonlinear oscillations can arise in the IFOC induction motor system through period-doubling bifurcations. It is also clear that the drive exhibits the uncommon behavior of coexisting multiple attractors over a certain range of the integral gain of the PI controller.
The Directional Bias Helps Stochastic Gradient Descent to Generalize in Kernel Regression Models
We study the Stochastic Gradient Descent (SGD) algorithm in nonparametric statistics, kernel regression in particular. The directional bias property of SGD, which is known in the linear regression setting, is generalized to kernel regression. More specifically, we prove that SGD with moderate and annealing step-size converges along the direction of the eigenvector that corresponds to the largest eigenvalue of the Gram matrix. In addition, Gradient Descent (GD) with a moderate or small step-size converges along the direction that corresponds to the smallest eigenvalue. These facts are referred to as the directional bias properties; they help explain why an SGD-computed estimator can have a smaller generalization error than a GD-computed estimator. The application of our theory is demonstrated by simulation studies and a case study based on the FashionMNIST dataset.
Introduction
Stochastic Gradient Descent (SGD) is a popular optimization algorithm with a wide range of applications, including generalized linear models in statistics and deep Neural Networks in machine learning. One main advantage of SGD is its computational scalability due to the low cost per iteration. Recent work also indicates that SGD might lead to outcomes that possess nice statistical properties under the linear regression framework; see [19].
In this paper, we study the statistical properties of the SGD under nonparametric regression models. We focus on the Reproducing Kernel Hilbert Space (RKHS) model, which is popular in both statistics and machine learning communities and is often simply referred to as the "kernel trick," see [2,27]. The kernel method can be applied in various domains such as image processing [24] and text mining [11].
Our main approach is to analyze the directional bias of the SGD algorithm under the RKHS model, which can help us to explain why the outcome of SGD has good generalization properties. Directional bias, also referred to as implicit bias, means that an algorithm generates a solution path that is biased towards a certain direction, and it is also closely related to implicit regularization in deep learning [10]. Directional bias also means that the algorithm prefers some directions over the others even though they may have the same objective function value. For example, paper [29] shows the directional bias of SGD and GD in the linear regression model, and analyzes the relationship between the directional bias and the generalization error in such a setting. There is no directional bias result in kernel regression, but one may expect it to exist and to explain the generalization performance of the outcome of SGD in the kernel regression model.
Problem Formulation
In Subsection 2.1 we define the kernel regression; in Subsection 2.2, we present the SGD and GD algorithms; in Subsection 2.3, we state our assumption for later analysis. We also provide a simple example to justify the assumption.
Kernel Regression
Suppose that we have n data pairs {x_i, y_i}_{i=1}^n, where y_i ∈ R is associated with x_i ∈ X ⊂ R^p through an unknown model f(x_i). The goal is to estimate the unknown model f from the data. One solution is to minimize the empirical risk function

min_f (1/n) Σ_{i=1}^n ℓ(y_i, f(x_i)),   (1)

where ℓ is the loss function. A popular choice for the regression task is the squared loss ℓ(y, f(x)) = (1/2)(y − f(x))^2.
One can see that problem (1) is not well-defined, as there are infinitely many solutions to ∀i : f(x_i) = y_i, and some of them do not generalize to new test data. One way to fix this is to restrict f ∈ H and penalize ‖f‖_H for smoothness, where H is an RKHS with reproducing kernel K(·, ·) and ‖·‖_H is the Hilbert norm. Adding these restrictions and applying the Representer Theorem, problem (1) with the squared loss becomes

min_{α∈R^n} (1/2n) Σ_{i=1}^n (y_i − K_i^T α)^2,   (2)

where K_i^T is the ith row of K := K(X, X) = (K(x_i, x_j))_{i,j}. For a parameter α, the corresponding estimator in H is f(·) = Σ_{i=1}^n α_i K(x_i, ·) := α^T K(·, X). Now, when K is invertible, it is immediate that any algorithm on objective function (2) converges to the unique minimizer α̂ = K(X, X)^{-1} y, so the RKHS functional estimator is

f̂(x) = K(x, X)^T K(X, X)^{-1} y,   (3)

where K(x, X)^T = (K(x, x_1), . . . , K(x, x_n)). Estimator (3) is the minimum-norm interpolant, i.e., arg min_{f∈H} {‖f‖_H : f(x_i) = y_i, i = 1, . . . , n}, whose properties are studied in [17].
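A minimal sketch of estimator (3), assuming an invertible Gram matrix:

import numpy as np

def kernel_interpolant(K_train, y, K_cross):
    # Minimum-norm RKHS interpolant: f_hat(x*) = K(x*, X) K(X, X)^{-1} y.
    # K_train is the n x n Gram matrix K(X, X); K_cross is the m x n
    # cross-kernel matrix whose rows are K(x*, X) for m test points.
    alpha_hat = np.linalg.solve(K_train, y)  # alpha_hat = K^{-1} y
    return K_cross @ alpha_hat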
In this work, we compare the convergence directions of SGD and GD towards α̂. Specifically, we consider a two-stage SGD with a phase transition from a larger step-size to a decreased step-size. Note that this matches the training scheme commonly used in practice for SGD algorithms: decreasing the step-size after training for a few epochs. For that purpose, in the following sections, we define the one-step SGD/GD update and state our assumptions and notation for the analysis.
One step SGD/GD update
For objective function (2), denote the parameter estimate at the tth step as α_t; then SGD updates α_{t+1} as

α_{t+1} = α_t + η_t (y_{i_t} − K_{i_t}^T α_t) K_{i_t},

where i_t is uniformly sampled from {1, . . . , n}.
GD updates α_{t+1} as

α_{t+1} = α_t + (η/n) Σ_{i=1}^n (y_i − K_i^T α_t) K_i = α_t + (η/n) K (y − K α_t).
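A minimal sketch of the two update rules, under the reconstructed forms above:

import numpy as np

rng = np.random.default_rng(0)

def sgd_step(alpha, K, y, eta):
    # One SGD step on objective (2): sample one row of the Gram matrix
    # uniformly and step along the corresponding stochastic gradient.
    i = rng.integers(len(y))
    return alpha + eta * (y[i] - K[i] @ alpha) * K[i]

def gd_step(alpha, K, y, eta):
    # One full-gradient step on objective (2).
    n = len(y)
    return alpha + (eta / n) * (K @ (y - K @ alpha))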
Assumptions and Notations
We state our assumption on the Gram matrix as follows.

Assumption 1 (Diagonally dominant Gram matrix). Denote by K = K(X, X) the Gram matrix; we assume that K is diagonally dominant. Specifically, suppose w.l.o.g. that K_{1,1} ≥ K_{2,2} ≥ . . . ≥ K_{n,n} > 0; then for a small value τ the off-diagonal entries satisfy max_{i≠j} |K_{i,j}| ≤ τ.

Remark 1. A diagonally dominant Gram matrix is common in kernel learning. Mathematically, one can justify that a Gram matrix is diagonally dominant by imposing proper assumptions on the kernel function K(·, ·) and the data distribution; see Appendix B for examples.
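A quick numerical check of this condition, a sketch assuming the reconstructed form of the bound above:

import numpy as np

def off_diagonal_tau(K):
    # Smallest tau with |K_ij| <= tau for all i != j, to be compared
    # against the smallest diagonal entry of the Gram matrix.
    off = K - np.diag(np.diag(K))
    return np.abs(off).max()

# Example: an RBF Gram matrix on near-orthogonal high-dimensional data
# is strongly diagonally dominant.
rng = np.random.default_rng(1)
X = rng.standard_normal((10, 500))
X /= np.linalg.norm(X, axis=1, keepdims=True)
sq_dists = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
K = np.exp(-5.0 * sq_dists)
print(off_diagonal_tau(K), np.diag(K).min())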
Remark 2. Thinking of the kernel function as the inner product of high-dimensional features, the resulting Gram matrix is diagonally dominant when the high-dimensional features are sparse. This happens in many practical problems [23,28]: for example, linear or string kernels applied to text data [11], domain-specific kernels applied to image retrieval [25] and bioinformatics [22], and the Global Alignment kernel applied to most datasets [9,8].
Proposition 1 (Lemma 1 in [29]). Consider the bilinear kernel K(x, x′) := ⟨x, x′⟩. Assume the data x_i, i = 1, . . . , n, are i.i.d. uniformly distributed on the unit sphere S^{d−1}, where d ≫ n, and that d ≥ 4 log(2n²/δ) for some δ ∈ (0, 1). Then, with probability at least 1 − δ, the pairwise inner products |⟨x_i, x_j⟩|, i ≠ j, are uniformly small, so the Gram matrix is diagonally dominant.

Though it commonly exists, diagonal dominance is undesirable in classification and clustering tasks: it indicates that the data points are dissimilar to each other, which means there is not enough information for classification/clustering. There have been some efforts to address the issue of diagonal dominance in these cases; see, for example, [11,16]. For the regression task, however, the diagonal dominance, in other words the dissimilarity of data points, may have benefits. One can find similar conditions, such as the Restricted Isometry Property and s-goodness, that describe linearly dissimilar features in the regression literature [5,7]; such conditions are required for proving minimax optimality or exact recovery of a sparse signal in many settings. In our case, we adopt the dissimilarity concept and apply it to data points in a high-dimensional nonlinear feature space. Later we will see that the directional bias drives SGD to select a solution that generalizes well among all solutions with the same level of empirical loss; in this way, our SGD estimator benefits from the diagonal dominance.
Main result
The main results are presented in two subsections: Subsection 3.1 states the directional bias results of SGD and GD estimators, respectively; Subsection 3.2 shows that certain directional bias leads to good generalization performance, and applies this result to show that an outcome from SGD potentially generalizes better than an outcome from GD.
Directional bias
By our assumption, K is full rank, so (S)GD on (2) converges to α̂ = K^{-1} y. We are interested in the direction in which α_t converges to α̂, i.e., the quantity b_t := α_t − α̂.
With Assumption 1 that the Gram matrix is diagonally dominant, we prove that a two-stage SGD has b t converge in the direction that is aligned with the eigenvector associated with the largest eigenvalue of the Gram matrix K.
Theorem 1 (Directional bias of an SGD-based estimator). Suppose Assumption 1 holds, and run a two-stage SGD with a fixed step-size for each stage: stage 1 with step-size η₁ for steps 1, . . . , k₁, and stage 2 with step-size η₂ for steps k₁ + 1, . . . , k₂, with the step-sizes satisfying bounds determined by constants C₁, C₂, C₃. For a small ε > 0 such that nτ < poly(ε), there exist k₁ = O(log(1/ε)) and k₂ such that b_{k₂}^{SGD} is close to the direction of the eigenvector corresponding to the largest eigenvalue of K.
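A minimal sketch of the two-stage schedule in Theorem 1, reusing sgd_step from the sketch above; the step-size values are left as inputs because the theorem's constants C₁, C₂, C₃ are not reproduced here:

import numpy as np

def two_stage_sgd(K, y, eta1, eta2, k1, k2, alpha0=None):
    # Stage 1: moderate step-size eta1 for steps 1..k1;
    # stage 2: decreased step-size eta2 for steps k1+1..k2.
    alpha = np.zeros(len(y)) if alpha0 is None else np.array(alpha0, dtype=float)
    for t in range(k2):
        alpha = sgd_step(alpha, K, y, eta1 if t < k1 else eta2)
    return alpha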
Remark 3. One should assume τ in Assumption 1 to be small enough for ε to be very small if one would like the resulting estimator b_{k₂}^{SGD} to be aligned with the direction corresponding to the largest eigenvalue of K. Later we will see that if one only wants SGD and GD estimators to have different directional biases, a moderate ε suffices, so the assumption on τ is not that strong.
The proof for Theorem 1 is in Appendix E. Next, we see the different convergence direction of GD.
Theorem 2 (Directional bias of a GD-based estimator). Suppose Assumption 1 holds, and run GD with a fixed step-size η satisfying the stated bound. For an ε > 0, let k = O(log(1/ε)); then the GD estimator after k steps satisfies the stated alignment bound. That is, b_k^{GD} is close to the direction that corresponds to the smallest eigenvalue of K.
Remark 4. The assumption (on τ) is mild for differentiating the directional biases of SGD and GD. Comparing Theorems 1 and 2, when γ_n < (1 − 2ε)γ₁, taking k large enough separates the two alignment bounds. That is, one may expect b_{k₂}^{SGD} to lie in the direction of a larger eigenvalue compared with b_k^{GD}. In the following subsection, we will see that a directional bias towards a larger eigenvalue of the kernel is good for generalization; that is, the directional bias helps an SGD-based estimator to generalize.
One can see the detailed proof of Theorem 2 in Appendix F. Though Assumption 1 appears in Theorem 2, it is only used to bound the step-size so that GD converges; the diagonally dominant structure of K is not required. Moreover, the choice of ε is independent of τ, so for an arbitrarily small ε > 0, running GD long enough makes the theorem apply: the estimator b_k^{GD} can be arbitrarily close to the eigenvector that corresponds to the smallest eigenvalue. (This is illustrated by the simulation in Fig. 1: SGD indeed converges in the direction of a larger Rayleigh quotient, matching Theorems 1 and 2, and the third panel shows the prediction error of the solution paths, where SGD attains lower prediction error than GD even though GD reaches a smaller training loss, supporting Theorem 4.)
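The alignment statements can be probed numerically through the normalized Rayleigh quotient; a minimal sketch:

import numpy as np

def normalized_rayleigh_quotient(K, b):
    # b^T K b / (gamma_1 * ||b||^2): equals 1 when b is aligned with the
    # top eigenvector of K and gamma_n / gamma_1 when aligned with the
    # bottom one, quantifying the directional bias of b = alpha - alpha_hat.
    gammas = np.linalg.eigvalsh(K)  # ascending eigenvalues
    return (b @ (K @ b)) / (gammas[-1] * (b @ b))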
Effect of directional bias
In this subsection, the estimator that is biased towards the largest eigenvalue of the Hessian is shown to be the best for parameter estimation; see Theorem 3. Later we define a realizable problem setting of kernel regression where the generalization error depends on the parameter estimation error, and in this way the directional bias helps an SGD-based estimator to generalize.

Theorem 3. Consider minimizing the quadratic loss function L(w) = ‖Aw − y‖₂². Assume there is a ground truth w* such that y = Aw*. For a fixed level of the quadratic loss, the parameter estimation error ‖w − w*‖₂² has a lower bound: for all w ∈ {w : L(w) = a}, ‖w − w*‖₂² ≥ a/‖AᵀA‖₂. Moreover, equality is attained when w − w* is in the direction of the eigenvector that corresponds to the largest eigenvalue of the matrix AᵀA.
Remark 5. Theorem 3 implies that a directional bias towards the largest eigenvalue is good for parameter estimation. As discussed in Remark 4, the SGD-based estimator is biased towards a larger eigenvalue compared to the GD-based estimator, so by Theorem 3 the SGD estimator potentially estimates the true parameter better and thus generalizes better, which we formalize later.
For an algorithm output f_alg, we decompose its generalization error into an estimation error and an approximation error, where H_s is the hypothesis class that the output of the algorithm is restricted to; H_s is determined by formulation (2). We define the a-level set ν_a of the training loss and denote Δ*_a := inf_{f∈ν_a} Δ(f). Note that the approximation error cannot be improved unless we change the hypothesis class, which in our case means changing the problem formulation; so we just minimize the estimation error over estimators in the a-level set. One can check that the estimation error is a function of b = α − α̂; see the details in Appendix G.2. Similarly to Theorem 3, the estimation error is minimized when b is in the direction of the largest eigenvalue of K, so the directional bias towards a larger eigenvalue helps generalization in kernel regression. We compare the estimation errors of SGD and GD in the following theorem.
Theorem 4 (Generalization performance). Following Theorems 1 and 2, with ε any small positive constant and a the training loss of the GD estimator, the estimation-error bounds stated in the appendix hold. Combining the bound from Theorem 1 with Theorem 2, which gives an alignment error tending to 0 as k → ∞, the required condition 1 + 4ε ≤ M^{1/2} holds, and therefore Δ(f_SGD) < Δ(f_GD) with high probability. This establishes our claim that SGD generalizes better than GD.
Numeric Study
Simulation. We simulate data from a nonlinear regression model with additive Gaussian noise, y_i = Σ_{j=1}^{100} sin(x_{i,j}) + ε_i, where x_{i,j} ∼ N(0, 1) and ε_i ∼ N(0, 0.01). We fit kernel regression using the polynomial kernel K(x₁, x₂) = (⟨x₁, x₂⟩ + 0.01)² on 10 training data points and test the estimator on 5 testing data points. We run both SGD and GD under two step-size schemes: a small step-size, and a moderate annealing step-size. The results are in Fig. 1.
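A minimal sketch of this data-generating process and kernel (the normalization step described in Appendix H.1 is omitted here):

import numpy as np

rng = np.random.default_rng(42)

def simulate(n, p=100, noise_sd=0.1):
    # y_i = sum_j sin(x_ij) + eps_i with x_ij ~ N(0,1), eps_i ~ N(0, 0.01).
    X = rng.standard_normal((n, p))
    y = np.sin(X).sum(axis=1) + noise_sd * rng.standard_normal(n)
    return X, y

def poly_kernel(A, B):
    # Polynomial kernel K(x, x') = (<x, x'> + 0.01)^2.
    return (A @ B.T + 0.01) ** 2

X_tr, y_tr = simulate(10)
X_te, y_te = simulate(5)
K_train, K_cross = poly_kernel(X_tr, X_tr), poly_kernel(X_te, X_tr)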
Real data experiment. We run a 6-layer ResNet [13] on FashionMNIST; the network structure is given in Appendix H.2. We run SGD and GD under two step-size schemes, similar to our simulation, with 1,500 training data points and 10,000 testing data points. The result is in Fig. 2. In (a), we follow [29] and use the Relative Rayleigh Quotient (RRQ) as the measurement of the convergence direction. SGD with a moderate step-size has a higher RRQ than GD with either a moderate or a small step-size, which supports the theory in Theorems 1 and 2. It is interesting to observe that SGD with a small step-size has a different directional bias from SGD with a moderate step-size, indicating that the directional bias studied in this work does not hold for a general SGD. In (b), we plot the testing accuracy from 20 repetitions of the experiment; the test accuracy of SGD with moderate step-size is higher than in the other cases, and a one-sided Wilcoxon signed-rank test confirms that the difference is significant at the 0.01 level. The test accuracy validates Theorem 4. For more details of the experiments, the rank test, and additional experiments, see Appendix H.2.
Remark 7. The purposes of the experiment using a Neural Network (Fig. 2) are twofold: first, the Neural Network results support our findings on kernel regression, since Neural Networks are related to kernel regression through NTK theory [15]; second, the experiment indicates that our result may hold empirically in the more general deep learning framework [3].
Discussion and Further Work
We advance one more step towards understanding the directional bias of SGD in kernel learning. We discuss some implications of our results.
Implication for the SGD scheme: Our result shows that the directional bias holds for SGD with annealing step-size. Specifically, the first stage of SGD with moderate step-size should run long enough; then, in the second stage, decreasing the step-size yields the directional bias towards the largest eigenvalue of the Hessian, which helps in obtaining a better generalization error bound. This explains a learning-rate tuning technique adopted in practice: start with a large step-size, run long enough until the error plateaus, then decrease the step-size [13]. Although this technique is usually employed to speed up convergence, we show that it also improves predictive power.
Implication for deep learning: The assumption in our analysis implies certain structures for deep learning models. Per our discussion in Remark 2, our assumption holds when the feature space is high-dimensional and/or the features are sparse. This matches the deep learning scenario, where the model is highly overparameterized and the trained parameter estimator becomes sparse. In addition, considering that some deep learning tasks can be approximated by kernel learning [15], our results help explain why an SGD-based estimator can perform better in an overparameterized deep learning setting.
Just as stated in [3], to understand deep learning one needs to understand kernel learning. This work improves our understanding in kernel learning, and can possibly lead to deep understanding of practices in deep learning.
A Background on RKHS
This section details the background on RKHS in two subsections. The first subsection includes notations, theorems, and an example of RKHS, the second section reduces the kernel regression in RKHS from infinite dimension to finite dimension, which gives our objective function (2) .
A.1 Nonparametric model in RKHS
In this subsection, we give the definition and notations for our model in RKHS, and its associated norms, basis, etc. The definitions are similar to those in [21].
Given data pairs {x_i, y_i}_{i=1}^n, where x_i ∈ X ⊂ R^p and y_i ∈ R, assume that the y_i are associated with the x_i through some unknown function f in the reproducing kernel Hilbert space (RKHS) of functions X → R; our goal is to estimate the function f(·) from the data.
Denote the RKHS where f lives as H, with reproducing kernel K : X × X → R⁺ (which is known to us). We associate the functions in H with a probability measure Q, and assume w.l.o.g. that ∫_{R^p} f(a) dQ(a) = 0. By Mercer's theorem, K has an eigen-expansion in terms of its eigenvalues and eigenfunctions, and H carries an associated inner product ⟨·, ·⟩_H. The reproducing property of the RKHS says that for all f ∈ H, ⟨f, K(x, ·)⟩_H = f(x).

Cubic splines form an RKHS. We go over an example of an RKHS for better understanding. Consider the one-dimensional cubic spline; one can show that the space of cubic splines is an RKHS. One can also find the cubic spline example in [12]; for more details on the relationship between polynomial smoothing splines and RKHSs, see Section 1.2 of [27]. The cubic spline f on X is continuous, has a continuous first-order derivative, and has a square-integrable second-order derivative. By Taylor's theorem with remainder, f admits the corresponding representation, with an associated inner product on the space.
A.2 Optimization problem considered
This subsection gives the problem formulation of kernel regression. Given data pairs {x_i, y_i}_{i=1}^n and an RKHS H, consider a loss function ℓ, selected according to how y is connected with f(x); we may estimate the model by minimizing the empirical loss, denoted objective (6). Examples of ℓ include:
• squared error loss ℓ(y, f) = (y − f)², usually used in regression;
• 0-1 loss ℓ(y, f) = 1(y·f ≤ 0), for binary classification;
• logistic loss ℓ(y, f) = log(1 + exp(−y·f)), also a classification loss, which can be considered a surrogate for the 0-1 loss and coincides with the negative log-likelihood in logistic regression.
Returning to the nonparametric model: to control the model smoothness, the usual practice is to add a penalty to objective (6). A popular choice of pen(f) is ‖f‖²_H, or any strictly increasing function of ‖f‖²_H. Such a method explicitly controls the model smoothness, and by the Representer Theorem the solution has the finite-dimensional form f(·) = Σ_{i=1}^n α_i K(x_i, ·), denoted (7). Plugging (7) into objective (6), the problem reduces to a finite-dimensional one, which gives formulation (2) under the loss function ℓ(y, f(x)) = (y − f(x))²/2.
B Diagonal Dominance of Some Popular Kernels
In this section, we justify Assumption 1 by describing a problem setting in which some popular kernels give a diagonally dominant Gram matrix. For simplicity, we assume throughout this section that the data x_i ∈ R^d, i = 1, . . . , n, are normalized as specified in assumption set (8). Given assumption set (8), we can bound the inner products of data points x_i, x_j with high probability as follows.

Lemma 1 (Lemma 1 in [29]). Under assumption set (8), let d ≥ 4 log(2n²/δ) for some δ ∈ (0, 1). Then, with probability at least 1 − δ, the pairwise inner products |⟨x_i, x_j⟩|, i ≠ j, are bounded by a small quantity τ̃.

Proof. See the proof of Lemma 1 in [29].
The bound on the inner product ⟨x_i, x_j⟩ induces a bound on K(x_i, x_j) for some popular kernels. We show the diagonal dominance for two groups of kernels in the following propositions and list some example kernels in each group.
Proposition 2 (Inner product kernel). An inner product kernel is defined as a smooth transformation of the inner product, K(x, x′) = g(⟨x, x′⟩). Assume assumptions (8) hold, and assume the function g : [−1, 1] → R satisfies the stated smoothness conditions. Then, with probability 1 − δ, where δ and τ̃ are the same as in Lemma 1, the off-diagonal entries satisfy |K(x_i, x_j)| ≤ g′(0)τ̃ + (L/2)τ̃². When g′(0)τ̃ + (L/2)τ̃² ≪ g(1) for a small enough τ̃, the Gram matrix is diagonally dominant.
Proof. With probability at least 1 − δ, by Lemma 1, every off-diagonal element of K obeys the bound above.

Remark 8. We list some examples of inner product kernels that give diagonally dominant kernel matrices:
• Bilinear kernel: K(x, x′) = ⟨x, x′⟩, for which the off-diagonal entries are bounded by τ̃ directly;
• a tanh-type (sigmoid) kernel of the form tanh(α⟨x, x′⟩ + c): when tanh(ατ̃ + c) ≪ tanh(α + c) (which is the case when α is large and c, τ̃ are small enough), we have |K(x_i, x_j)| ≪ K(x_n, x_n) and the Gram matrix is diagonally dominant.
Proposition 3 (Radial Basis Function (RBF) kernel). A Radial Basis Function kernel depends on two data points through their distance, K(x, x′) = exp(−γ‖x − x′‖²). Assume assumptions (8) hold; when γ = −c₀ log(τ̃) for a constant c₀, we have with probability 1 − δ, where δ and τ̃ are the same as in Lemma 1, that the off-diagonal entries are uniformly small. That is, the Gram matrix is diagonally dominant.
Proof. Bound the off-diagonal terms of K by Lemma 1.

Remark 9. We note some popular kernels related to the Radial Basis Function kernel and show that they lead to diagonal dominance:
• Gaussian kernel: K(x, x′) = exp(−‖x − x′‖²/(2σ²)). One can see that the Gaussian kernel reparameterizes the RBF kernel by γ = 1/(2σ²); thus the Gaussian Gram matrix is diagonally dominant when σ² ∼ O(−1/log(τ̃)).
• Laplace kernel: K(x, x′) = exp(−‖x − x′‖/σ) for σ > 0. The Laplace kernel is very similar to the Gaussian kernel, and one can check by similar steps that when σ ∼ O(−1/log(τ̃)), the Laplace Gram matrix is diagonally dominant.
C Lemmas
This section includes two useful lemmas for characterizing the eigenvalues of a symmetric matrix.
Lemma 2 (Gershgorin circle theorem, restated for symmetric matrices). Let A ∈ R^{n×n} be a symmetric matrix, and let A_{ij} be the entry in the ith row and jth column. Let R_i := Σ_{j≠i} |A_{ij}|. The eigenvalues of A lie in the union of the Gershgorin discs G(A) := ∪_{i=1}^n {z : |z − A_{ii}| ≤ R_i}. Furthermore, if the union of k of the n discs that comprise G(A) forms a set G_k(A) that is disjoint from the remaining n − k discs, then G_k(A) contains exactly k eigenvalues of A, counted according to their algebraic multiplicities.
Lemma 3 (Cauchy interlacing theorem, restated for symmetric matrices). Let B ∈ R^{m×m} be a symmetric matrix, let y ∈ R^m and a ∈ R, and let A = [B, y; yᵀ, a]. Then the eigenvalues of B interlace those of A: λ_{i+1}(A) ≤ λ_i(B) ≤ λ_i(A) for each i.

Proof. See [14], Chap. 4.3, Theorem 4.3.17.
D Spectrum of Gram matrix
This section analyzes the eigen structure of the Gram matrix.
For P₋₁K^{1/2}: the matrix K₋₁ᵀK₋₁ has all eigenvalues in [γ_n², γ_1²] by the Cauchy interlacing theorem (Lemma 3); that is, all singular values of K₋₁ lie in [c₁, c₂] by our assumption. For P₁K^{1/2}, we have the following:
• 0 is an eigenvalue of H₋₁, and the corresponding eigenspace is the column space of P₁;
• restricted to the column space of P₋₁, the eigenvalues of H₋₁ all lie in the stated interval.

Proof. The first claim is by the construction of P₁ and P₋₁. For the second claim, note that H₋₁ has the same eigenvalues as a similar matrix whose diagonal and off-diagonal entries can be computed explicitly. Applying the Gershgorin circle theorem to these entries, the first Gershgorin disc does not intersect the others (their radii are bounded by nτ), so the remaining n − 1 nonzero eigenvalues lie in the stated interval.
E Directional bias of SGD with moderate step size
This section gives the formal proof of Theorem 1 and specifies the constants. The proof is done in four steps: Lemma 8 analyzes one update of SGD; Lemma 9 uses Lemma 8 to bound the first-stage updates of SGD with moderate step size; Lemma 10 again uses Lemma 8 and bounds the second-stage updates of SGD with small step size; finally, Theorem 5 combines Lemmas 9 and 10 to formalize the directional bias of SGD; it is the same as Theorem 1, restated using the constants defined therein.
For inequalities (19) and (20), note that P₁K_i and P₁b_t lie in the same one-dimensional linear space; the first inequality follows from the upper bounds (17) and (18), and the second from nτ ≤ O(1). Plugging term (24) into inequalities (22) and (23) and taking expectations on both sides yields claims (19) and (20). For inequality (21), the last inequality follows from Jensen's inequality; combining all three inequalities above and taking the expectation with respect to b_t gives claim (21).
Lemma 9 (Long-run behavior of SGD with moderate step size). Assume b₀ is bounded away from 0 and nτ is small, where c₅, c₆ are constants satisfying the stated conditions. Consider the first k₁ steps of SGD updates with step size η. Fix a β₀ ≤ A₀; then for 0 < ε < 1 and 0 < β < β₀ such that √(nτ) ≤ poly(εβ), a suitable k₁ exists.

Proof. For this choice of η, denote q₁ = q₁(η), q₋₁ = q₋₁(η), ξ = ξ(η). By Lemma 4, the stated recursion holds. Decompose the coefficient matrix accordingly, and assume w.l.o.g. that sin θ ≥ 0 (since otherwise we can take θ → θ + π). We claim that inequalities (25) hold, which we check later. Using inequalities (25), we can upper bound B_{k₁}; in addition, the bound holds for k = 0, . . . , k₁. We now lower bound A_k by mathematical induction: A₀ ≥ β₀, and assuming A_{k−1} > β₀, the induction step follows. All claims of the lemma are thus proved. It remains to check inequalities (25); note first that our choice of the upper bound on η guarantees q₋₁ < 1.
which is true by our choice of η.
which is true by our lower bound on η.
Checking inequality (26b): with (26), we can prove the lemma by mathematical induction. Suppose B_{k−1} ≤ β and A_{k−1} ≤ A_{k₁} ≤ B; then the inductive step follows. We recap Theorem 1 using the notation of the previous lemmas as follows.

Theorem 5 (Directional bias of the two-stage SGD). Use the two-stage SGD scheme defined in Lemmas 9 and 10. Assume nτ < poly(ε); then there exist k₁ = O(log(1/ε)) and k₂ such that the stated alignment bound holds, where γ₁ is the largest eigenvalue of K.
Proof. In Lemma 9, let β = β₀; then for k₁ = O(log(1/ε)) we have B_{k₁} ≤ β₀. For the second stage, by Lemma 10 we can stop early at a k₂ such that A_{k₂} ≥ β₀ and A_{k₂+1} < β₀. The claimed bound then follows, and the upper bound in the theorem holds by the definition of γ₁.
F Directional bias of GD with moderate or small step size
This section includes the proof of Theorem 2. We first rewrite the GD updates as a linear combination of eigenvectors; the theorem is then proved using the transformed variables and finally transformed back to the original parameters. The directional bias of GD does not require a diagonally dominant Gram matrix.

Reloading notation. Denote the eigendecomposition of K as K = GΓGᵀ, where the eigenvectors g_i are orthogonal. Denote w_t := Gᵀ(α_t − α̂); then the GD update can be rewritten in terms of w_t. We recap Theorem 2 for easier reading as follows.

Theorem 6 (Directional bias of GD). Assume α₀ is bounded away from 0 and λ_n + 2nτ < λ_{n−1}, and run GD with the stated step size. For a small ε > 0, take k = O(log(1/ε)); then the stated alignment bound holds.

Proof. For i = 1, . . . , n, denote q_i = 1 − ηγ_i²/n; then 0 < q₁ ≤ . . . ≤ q_n < 1 by the step-size condition. Since λ_n + nτ < λ_{n−1} − nτ, we have γ_n < γ_{n−1} by Lemma 3, and it follows that q_n > q_{n−1}. Denote q = q_{n−1}/q_n < 1; the claimed rate follows, and the lower bound of the theorem holds by the definition of γ_n.
G Effect of directional bias
In this section, we provide the proofs of the theorems in Section 3.2. There are two theorems there, so we split this section into two subsections. Subsection G.1 proves Theorem 3: for a general squared-error-minimization setting, the theorem gives a straightforward understanding of why a directional bias towards the largest eigenvalue of the Hessian is good for generalization. Subsection G.2 proves Theorem 4 by giving concrete generalization bounds for the SGD and GD estimators in kernel regression.
G.1 Proof of Theorem 3
Denote v = w − w* and rewrite the objective function as L(w) = ‖Av‖₂². The equality is achieved when v is in the direction of q₁, with ρ₁ = ‖AᵀA‖₂. Taking L(w) = ‖Av‖₂² = a, the theorem holds.
G.2 Proof of Theorem 4
To calculate Δ*_a: denote f* = α̂ᵀK(·, X) + f̄; then for an f ∈ H_s with f = αᵀK(·, X), let b = α̂ − α. A direct computation expresses the estimation error in terms of b, and we claim that the resulting bound holds with equality when b is in the direction of the largest eigenvector of K. To see this, we check that ‖Kb‖₂² ≤ γ₁ bᵀKb. Recall the eigendecomposition K = GΓGᵀ, where G = [g₁, . . . , g_n] has orthogonal columns and Γ = diag(γ₁, . . . , γ_n); expanding b in this basis gives the inequality, which finishes our claim on Δ*_a.
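The key inequality ‖Kb‖₂² ≤ γ₁ bᵀKb is easy to sanity-check numerically; a minimal sketch:

import numpy as np

rng = np.random.default_rng(2)
M = rng.standard_normal((6, 6))
K = M @ M.T                       # a random PSD Gram-like matrix
b = rng.standard_normal(6)
gamma1 = np.linalg.eigvalsh(K)[-1]
lhs = np.linalg.norm(K @ b) ** 2
rhs = gamma1 * (b @ (K @ b))
assert lhs <= rhs + 1e-9          # ||Kb||^2 <= gamma_1 * b^T K b

Expanding b in the eigenbasis shows why: ‖Kb‖₂² = Σ γ_i² w_i² ≤ γ₁ Σ γ_i w_i² = γ₁ bᵀKb.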
H Experiments
We list the implementation details of the experiments in Section 4 and include additional experimental results. For better presentation, we split this section into two subsections: Subsection H.1 covers the simulation; Subsection H.2 covers the NN experiment on FashionMNIST, including the data description, network structure, and algorithm details, together with results not listed in Section 4 due to the page limit.
H.1 Simulation
This subsection corresponds to Figure 1.

Data generation. The training data are simulated as follows: set n = 10 and p = 100; simulate X ∈ R^{n×p} with i.i.d. N(0, 1) entries; denote the ith row of X by x_i and normalize x_i so that its squared ℓ₂ norm lies in [0.49, 1]; set y_i = Σ_{j=1}^p sin(x_{i,j}) + ε_i, where the ε_i are i.i.d. N(0, 0.01). The testing data are simulated in exactly the same way, except that we simulate only n = 5 testing points.
Kernel function. We set the kernel function to be the polynomial kernel K(x₁, x₂) = (⟨x₁, x₂⟩ + 0.01)².

SGD and GD implementation. Both SGD and GD are run with small and moderate step sizes. The moderate step-size scheme for SGD is η₁ = 0.1 for the first 50 steps and η₂ = 0.01 for the next 1000 steps; for GD it is η₁ = 0.5 for the first 50 steps and η₂ = 0.05 for the next 1000 steps. The small step-size scheme is η = 0.01 for 1050 steps for SGD and η = 0.05 for 1050 steps for GD. Note that the step size for SGD is a fraction of that for GD; this matches Theorems 1 and 2, where the step size of GD is of magnitude n/2 times that of SGD.
H.2 Neural Network on FashionMNIST
This subsection is corresponding to Figure 2.
Dataset. The original FashionMNIST consists of 60,000 training and 10,000 testing images. We randomly sample 1,500 images from the original training data for training and use all 10,000 original testing images for testing. All data entries are normalized to [0, 1].
Network structure. We use a 6-layer ResNet-like [13] Neural Network. The residual blocks follow Figure 7.6.3 in [30] (without the 1 × 1 convolution); each residual block contains two 3 × 3 convolutional layers, which gives the stated total number of layers.
Algorithm. We minimize the cross-entropy loss objective L(w) = (1/n) Σ_{i=1}^n l_i(w), where l_i(w) is the loss at the ith sample. One SGD step updates w using the gradient averaged over a randomly sampled subset I of {1, . . . , n} (uniform sampling without replacement); we choose the batch size |I| to be 25. One GD step uses the full gradient. Both SGD and GD are run under two step-size settings. The moderate setting is η_t = 0.2 for t = 1, . . . , 5000 and η_t = 0.02 for t = 5001, . . . , 20000; the small setting is η_t = 0.02 for t = 1, . . . , 20000.
Comparison of convergence direction. Since the loss surface is nonconvex and the Hessian varies, we follow [29] and measure the convergence direction by the Relative Rayleigh Quotient (RRQ), which normalizes the Rayleigh quotient of the gradient by the maximum eigenvalue of the Hessian:

RRQ(w) = (∇L(w)ᵀ ∇²L(w) ∇L(w)) / (‖∇L(w)‖₂² · λ_max(∇²L(w))),

where L(w) is the loss function on the whole training set. A high RRQ indicates that the convergence direction of w is close to a larger eigenvector of the Hessian.
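A minimal sketch of this quantity, assuming a Hessian-vector-product routine (e.g., from autodiff) is available; hessian_vec and lam_max are placeholders:

import numpy as np

def rrq(grad, hessian_vec, lam_max):
    # Relative Rayleigh Quotient of the gradient direction:
    # (g^T H g) / (||g||^2 * lam_max), with hessian_vec(v) returning H @ v.
    g = np.asarray(grad, dtype=float)
    return float(g @ hessian_vec(g)) / (float(g @ g) * lam_max)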
Comparison of test accuracy. We set 20 different random seeds. For each seed, we run SGD with moderate step size, GD with moderate step size, SGD with small step size, and GD with small step size. For each algorithm, we evaluate its test accuracy once every 500 steps and use the average of the last 5 values as its test accuracy. We list the test accuracies in Table 1. We also use a one-sided Wilcoxon signed-rank test to check whether the test accuracies of the different algorithms are significantly different; the results are in Table 2. All p-values are significant at the 0.01 level, so we reject the null hypotheses and conclude that SGD with moderate step size has a test accuracy significantly higher than all other algorithms.
Table 2. Null hypotheses on test accuracy and p-values: SGD + moderate LR ≤ GD + moderate LR, p = 9.54 × 10⁻⁷; SGD + moderate LR ≤ SGD + small LR, p = 9.54 × 10⁻⁷; SGD + moderate LR ≤ GD + small LR, p = 9.54 × 10⁻⁷.

Additional experiments. We conduct more experiments using different step sizes. The initial step size is taken from {1, 0.5, 0.2, 0.1, 0.02, 0.01, 0.005, 0.001}, and the step size is divided by a factor of 10 after 5000 steps. The test accuracies are shown in Figure 3: SGD with step size 0.2 has the best test accuracy, and GD with step size 0.5 performs better than GD with any other step size but is still worse than the best SGD.
Longitudinal study of a SARS-CoV-2 infection in an immunocompromised patient with X-linked agammaglobulinemia
Antibody-dependent enhancement (ADE) of infection is a safety concern for vaccine strategies. In a recent publication, Li et al. (Cell 184:4203-4219, 2021) reported that infection-enhancing antibodies directed against the N-terminal domain (NTD) of the SARS-CoV-2 spike protein facilitate virus infection in vitro, but not in vivo. However, that study was performed with the original Wuhan/D614G strain. Since the Covid-19 pandemic is now dominated by Delta variants, we analyzed the interaction of facilitating antibodies with the NTD of these variants. Using molecular modeling approaches, we show that enhancing antibodies have a higher affinity for Delta-variant NTDs than for the Wuhan/D614G NTD. We show that enhancing antibodies reinforce the binding of the spike trimer to the host cell membrane by clamping the NTD to lipid raft microdomains. This stabilizing mechanism may facilitate the conformational change that induces the demasking of the receptor binding domain. As the NTD is also targeted by neutralizing antibodies, our data suggest that the balance between neutralizing and facilitating antibodies in vaccinated individuals is in favor of neutralization for the original Wuhan/D614G strain. In the case of the Delta variant, however, neutralizing antibodies have a decreased affinity for the spike protein, whereas facilitating antibodies display a strikingly increased affinity. Thus, ADE may be a concern for people receiving vaccines based on the original Wuhan strain spike sequence (either mRNA or viral vectors). Under these circumstances, second-generation vaccines with spike protein formulations lacking structurally conserved ADE-related epitopes should be considered.
In this journal, Walsh and colleagues 1 recently reviewed the evidence supporting that immunocompromised patients may remain positive for Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) infection for long periods of time, up to 20 days. Here, we report the case of an immunocompromised patient with clinically diagnosed X-linked agammaglobulinemia (XLA) ( Supplementary Material ) who was persistently infected with SARS-CoV-2 for almost five months. He was admitted to the hospital on the 14th April 2020 with left bilobar pneumonia, reporting cough, chronic diarrhoea, and fever over the previous four days, and a nasopharyngeal (NP) sample tested positive for SARS-CoV-2 by RT-qPCR ( Fig. 1 A). In the hospital, he was treated with hydroxychloroquine, two courses of remdesivir, lopinavir/ritonavir, antibiotics, antifungal treatments, and glucocorticoids. Infectious SARS-CoV-2 was successfully cultured from a bronchoalveolar lavage (BAL) sample on day 50, showing that the virus was actively replicating in the lower respiratory airways ( Supplementary Material ). On day 133, he was treated with hyperimmune serum from a convalescent patient. Despite treatments and two coronavirus disease 2019 test negativizations, the patient stayed in the hospital most of the time and died in the intensive care unit from multiorgan failure and shock on day 149 (10th September 2020) ( Fig. 1 A). See the Supplementary Material for further details.
Throughout the period described, 26 respiratory samples were collected from the patient (22 NP swabs and 4 BAL samples) (Fig. 1A). A urine, faeces, and peripheral blood sample (on day 44), and another peripheral blood sample (on day 87), were collected, but viral genome was not detected in any of their RNA extractions. SARS-CoV-2 viral genomes from a subset of 13 NP and 3 BAL samples were sequenced using alternative methodologies (Supplementary Material). All genomes were assigned to the PANGO lineage A.2 (Clade 19B), which was predominant in Spain during the early months of the pandemic. 2 Assignation of the sequences to the same lineage suggests that the patient had a single viral infection event. Synonymous and non-synonymous mutations accumulated throughout the course of the infection in NP and BAL samples (Spearman correlation, r = 0.77, p = 0.00072) (Fig. 1B). Different constellations of mutations were observed in the sequences isolated from NP and BAL samples, suggesting compartmentalization of viral subpopulations evolving independently. The median mutation rate (accumulated mutations per day since diagnosis) was 0.09 mutations/day, higher than that originally estimated for SARS-CoV-2 (0.06 mutations/day 3 ) (one-sample Wilcoxon test, p = 0.005), indicating an accelerated mutation rate during infection. There was no significant difference in the mutation rate calculated for NP and BAL samples (Mann-Whitney U test, p = 0.18).
On day 0, viral genome sequences harboured the characteristic mutational pattern of lineage A.2 (ORF1a:F3701Y, ORF3a:G196V, ORF8:L84S, N:S197L) in addition to two other substitutions and four synonymous mutations (Fig. 1B). In particular, the spike (S) gene sequence was characterized by the I197V substitution and one synonymous mutation; these were the only two mutations observed in the S gene throughout the course of this five-month infection period in NP samples. However, we observed a different evolutionary pattern in BALs. On day 50, the A653V substitution was observed; this was found as part of the mutational pattern of two variants spreading in France 4 and Germany 5 at the beginning of 2021. On day 87, P384L in the receptor-binding domain emerged but disappeared together with A653V by day 136, five days after treatment with hyperimmune serum, when R158S and N501T emerged. Strikingly, N501T is associated with an increased binding affinity of the S protein to the human angiotensin-converting enzyme 2 (ACE2) receptor and has been identified as an escape mutation against anti-SARS-CoV-2 neutralizing antibodies (NAbs). 6 Interestingly, position R158 of the S protein is part of the N-terminal domain (NTD) antigenic supersite, a region recognized by all known NAbs directed to the NTD, 7 and R158S has been included among the escape mutations of anti-SARS-CoV-2 monoclonal NAbs targeting the NTD of the S protein. 8 Besides, we highlight the emergence, on day 50, of G204R in the nucleocapsid (N) gene, a mutation characteristic of the P.2 lineage, and, on day 136, of K1795Q in ORF1a and P67S in the N gene, which are distinctive signatures of the P.1 and B.1.617.3 lineages, respectively.
This study describes an XLA-immunocompromised patient with prolonged SARS-CoV-2 infection, supporting evidence that these patients undergo viral shedding for long periods of time. 9,10 The patient presented RT-qPCR-negative NP samples at different time intervals throughout the course of infection that either matched a positive BAL sample or were followed by a positive RT-qPCR sample; this indicates that a negative RT-qPCR result in NP samples may not imply remission from infection. 9 Viral genome sequencing revealed an accelerated intra-host viral evolution. Different mutations accumulated in samples collected from NPs and BALs throughout the course of infection, which may point to viral adaptation to the upper and lower respiratory airways. Several host factors may account for this phenomenon, such as temperature and immune-response disparities and/or differences in ACE2 expression. Furthermore, it is worth noting that the mutations emerging in the lower respiratory tract were not detected by sequencing NP samples; thus, the emergence of potentially worrying viral variants may be underestimated by sequencing standards focusing on NP samples. (Fig. 1B caption: graphical representation of SARS-CoV-2 whole-genome consensus sequences with synonymous (blue asterisks) and non-synonymous (orange asterisks) mutations identified relative to the Wuhan-Hu-1 reference sequence (NC_045512.2); only non-synonymous mutations are labelled with amino acid changes; on day 50, in the N gene, the substitution S197L is replaced by S197T.) The emergence of substitutions linked to immune evasion in the BAL sample collected three days after treatment with hyperimmune serum is remarkable. Of note, the presence of the same or other mutations of interest in the NP samples in the days after hyperimmune serum treatment could not be ruled out; in fact, we were able to sequence only one NP sample, 24 h after treatment, which may not be enough time to observe a possible shift in the viral population in these samples. One limitation is that we have no data on the antibody composition and SARS-CoV-2 neutralizing activity of the hyperimmune serum used. Lastly, the emergence of mutations distinctive of currently circulating SARS-CoV-2 variants of concern (VOCs) supports the hypothesis of long-term viral shedding in immunocompromised patients as one possible mechanism for the emergence of VOCs.
1. Walsh K.A., Spillane S., Comber L., Cardwell K., Harrington P., Connell J., et al. The duration of infectiousness of individuals infected with SARS-CoV-2. J Infect. 2020 Dec 1;81(6).

Because poultry show no clinical symptoms when infected with H10 subtype viruses, their eradication had not been a priority for the control of zoonotic diseases in China. However, H10 subtype viruses have continually contributed to zoonotic spillover events. In 2010, a number of human cases of H10N7 infection were reported in Australia 2 ; subsequently, China reported the first human case of H10N8 infection, which resulted in a death in 2013, and the recently emerged human-infecting H10N3 virus in 2021. 1,3 The H10 subtype AIV therefore poses a continuing public health concern. To that end, we systematically analyze the evolutionary dynamics and dissemination pathways of H10 subtype AIV in China.
In the present study, to elucidate the evolutionary process of H10 subtype influenza viruses, we first examined the HA genes of global H10 subtype viruses by performing multiple sequence alignment and phylogenetic analysis. 3,4 H10 subtype viruses have divided into two lineages, a North American lineage and a Eurasian lineage (Fig. 1A). We observed that the H10 subtype viruses in the Eurasian lineage were more complex, carrying different neuraminidases, while the H10N7 viruses were concentrated in the North American lineage (Fig. 1A). All of the H10 subtype viruses isolated from China derived from the Eurasian lineage. It is interesting to note that 78.5% of the H10 subtype viruses (N2-N9) were isolated in Jiangxi province (Fig. 1B-D), indicating that Jiangxi province is the epicenter of the H10 subtype viruses in China. In addition, we found that the number of H10 subtype viruses increased during 2000-2015 and subsequently decreased after 2016 (Fig. 1B). The H10N3 and H10N8 influenza viruses exhibited the highest numbers among H10 subtype viruses in China: H10N8 became the dominant subtype in poultry during 2011-2015, whereas the number of H10N3 viruses increased during 2016-2021 (Fig. 1B), indicating a switch of the dominant H10 subtype in China. A previous study showed that the genetic diversity of H10N8 viruses was much higher prior to the human spillover event. 2 In this study, we found that the genetic diversity of human-origin H10N3 viruses was lower than that of H10N8 viruses (Supplementary Fig. 1), and that the genotype of the human-infecting H10N3 virus was the same as that of A/chicken/Jiangsu/0110/2019(H10N3) and A/chicken/Jiangsu/0104/2019(H10N3). These findings support the view that the avian-to-human transmission of H10N3 virus might have occurred recently, without further reassortment.
To estimate the population dynamics of H10 subtype influenza viruses in China, we inferred the demographic history of the HA genes using Bayesian Skyride plots (Supplementary Fig. 2). From 2012 to 2014, the decreasing effective population size indicates that the diversity of H10 subtype viruses declined, while genetic diversity increased dramatically from 2014 to 2015 and then remained stable (Fig. 2A and B). Subsequently, we analyzed the dissemination pathways of H10 subtype viruses across the sampling locations in China based on the HA phylogenetic tree. We found that Jiangxi province acted as the epicenter of viral spread (Fig. 2C): specifically, Jiangxi was linked with three locations (Hunan, Hubei, and Hebei), and Zhejiang was linked with the nearby province of Jiangsu (Fig. 2C). A limitation is that sampling bias might have affected these results. Among our findings, the transmission routes of H10 subtype viruses were primarily concentrated in Jiangxi province. Previous studies have highlighted the role of wild birds in the dispersal of AIVs during seasonal migration. 5,6,7 In the case of Jiangxi province, Poyang Lake, with its excellent ecology and vast wetlands, has become a world-famous wintering hub for migratory birds, which increases close contact between wild birds carrying H10 subtype viruses and poultry, accelerating reassortment and mammalian-adaptive mutations.
The emerged H10N3 and H10N8 influenza viruses have caused human infections in China, posing a public health threat. In a previous study, we found that key substitutions in the PB2 protein of H10N8 viruses, including I292V, A588V, and T598M, were associated with mammalian adaptation. 8,9 Thus, a key concern is whether the H10 subtype viruses in poultry have acquired mammalian adaptation in recent years. In the present study, we found that the proportion of I292V, A588V, and T598M substitutions in the PB2 protein of H10 subtype viruses in poultry increased sharply from 2013 to 2021 (Fig. 2D), and the human-infecting H10N3 and H10N8 viruses all harbored these amino acid substitutions, indicating that H10 subtype viruses in China pose an increasing threat to human health. H10 subtype viruses have been detected in migratory birds in China with an increasing trend, and H10 subtype viruses are well adapted to mice and chickens 2,8 , which signals a very real risk that wild bird migration might introduce H10 subtype viruses carrying mammalian-adaptive mutations to other countries, posing global public health concerns.
In conclusion, our findings suggest that H10N3 and H10N8 AIVs have been co-circulating in the poultry population in China, and that many molecular markers in the PB2 protein associated with mammalian adaptation have been detected in bird- and human-origin H10 subtype AIVs. Notably, we also found that Jiangxi province is the main output region in the dissemination of H10 viruses, indicating that eastern China is the potential epicenter of H10 influenza viruses. Given that H10 subtype AIVs could spread to other countries and circulate via wild bird migration, comprehensive surveillance is warranted to guard against potential future influenza pandemics.
Declaration of Competing Interest
All authors have no potential conflicts of interest to disclose.

We agree with Tan and colleagues, who described the situation of nontuberculous mycobacteria (NTM) pulmonary diseases (NTM-PDs) in China 1 . NTM-PD remains difficult to manage even with several guidelines 2 . This is because NTM-PD infections usually share some symptoms with pulmonary tuberculosis (TB), and most NTM species are resistant to first-line anti-TB agents. In addition, mycobacterial culture and identification of NTM are not routinely performed in most areas, such as China. These factors make the diagnosis and treatment of NTM disease more complicated and may contribute to poor outcomes.
Certainly, rapid identification would help improve the management of NTM-PD patients. According to a recently issued guideline, positive culture results from at least two separate expectorated sputum samples constitute one of the microbiologic criteria for the diagnosis of NTM-PD 3 . Stools, as an alternative to sputum, have been well characterized in the diagnosis of pulmonary TB, especially in children [4][5][6]. However, the role of stool samples in the diagnosis of NTM-PD remains unclear.
In this retrospective study, we hypothesized that stool samples may be useful for NTM-PD diagnosis. To test this hypothesis, we reviewed our experience with stool cultures for NTM species over the last decade. Some interesting findings emerged, which can be taken as indirect evidence for NTM-PD diagnosis using stool samples.
This study was conducted at the Shandong Public Health Clinical Center and conformed to the Helsinki Declaration. The study protocol was approved by the Ethical Committee of the Shandong Public Health Clinical Center. Due to the retrospective nature of the study and the anonymity of the data analyzed, the requirement for written informed consent was waived by the Ethical Committee. The Shandong Public Health Clinical Center is a provincial referral hospital for infectious diseases (including TB), with approximately 2000 beds.
Between January 2012 and August 2020, consecutive patients with positive mycobacterial cultures were included for analysis. Mycobacterial culture was performed using the Löwenstein-Jensen medium method; when NTM disease was suspected, further identification was performed using the Mycobacteria Identification Array Kit (CapitalBio, Beijing, China) or sequencing of 16S rDNA. NTM disease was diagnosed according to the ATS/IDSA criteria 7 . Continuous data are presented as mean ± standard deviation (SD) and categorical data as counts (percentages).
Between May 2012 and June 2021, mycobacterial strains were isolated from 25,617 samples. Of them, 1107 (4.3%) strains were identified as NTM species, including 110 strains isolated from outpatients. Therefore, a total of 997 strains isolated from inpatients were included in the final analysis.
The 11 stool samples were collected from 10 patients; their mean age was 42.8 ± 18.2 years, and 60% were male. The 11 strains comprised M. intracellulare (n = 6), M. kansasii (n = 4), and M. avium (n = 1). Of the ten patients, eight met the ATS/IDSA criteria for NTM-PD, including clinical symptoms, radiologic features, and microbiologic evidence obtained from at least two separate sputum samples (n = 9); one had an intestinal infection confirmed by tissue culture; and the remaining one had confirmed NTM-PD with possible intestinal infection. Therefore, this study provides indirect evidence for NTM-PD diagnosis using stool samples: NTM strains isolated from stool samples supported the diagnosis of NTM-PD. The initial symptoms at admission included cough (n = 5), fever (n = 4), hemoptysis (n = 3), chest pain (n = 3), diarrhea (n = 2), and dyspnea (n = 1).
In the current study, the results of stool culture showed a high level of agreement with sputum culture-based determination of NTM-PD, indicating that the stool culture assay reliably identified NTM species. This finding implies that stool is an alternative to sputum for NTM-PD diagnosis, and demonstrates that stool culture is a useful method for detecting swallowed NTM species in patients with NTM-PD.
Diagnosis of NTM-PD usually requires the isolation of NTM species from respiratory samples (such as sputum) 2 . However, the diagnosis may become difficult when patients cannot produce sputum, particularly young children or HIV-positive patients 8 , 9 . Owing to relative immunodeficiency, sputum production may be reduced by a diminished inflammatory response in these subjects 10 . Specimens collected by invasive procedures, such as bronchial brushing and BALF, may then be employed; however, these specimens typically have a low bacterial burden, resulting in modest detection by currently available assays. Although collecting multiple specimens improves the detection yield, it is costly and requires consecutive days 11 , 12 .
Most sputum is swallowed, and NTM strains can survive within the intestinal tract 13 . Therefore, stool samples can be used to detect NTM species originating from the lungs. In addition, stool culture may serve as a tool for evaluating responses to empirical treatment for TB 5 , because test results from follow-up samples have considerable clinical relevance.
All patients in this study had proven NTM-PD, and only two of them had gastrointestinal symptoms suggesting a coexistent diagnosis of NTM intestinal infection. Stool culture is known to support the diagnosis of NTM intestinal infection, and in such cases this may contribute to the presence of NTM species in the stool 14 , 15 . However, one of the two patients also had pulmonary symptoms (such as cough and sputum), and the other was confirmed by tissue culture and had only gastrointestinal symptoms. In general, the strong association that we identified between sputum and stool findings among patients with pulmonary symptoms strongly suggests that strains detected by stool culture could originate from swallowed sputum, and that intestinal disease is not the only source.
Our study has several limitations. First, owing to its retrospective nature, sample selection bias may have affected the final estimates. Second, the cause of false-negative stool culture results was not investigated. Third, a single stool sample was collected for mycobacterial culture; successive daily samples might have improved the direct diagnosis of NTM-PD. In addition, we still need to determine whether stool culture provides additional diagnostic value in patients who are unable to expectorate, especially children and HIV-positive patients.
The data presented herein indicate that stool culture is favourable for the diagnosis of NTM-PD, and that patients who are unable to expectorate sputum, such as younger patients and those with impaired mental or immunocompromised status, could benefit from this alternative method. However, stool culture cannot currently be recommended to replace sputum culture for NTM-PD diagnosis. Further prospective studies are required to investigate its role.
Declaration of Competing Interest
The authors declare no conflict of interest.
Epidemiological characteristics of varicella before and after the implementation of two-dose vaccination schedule in Chaoyang District, Beijing, 2007-2019
Dear editor , We read with interest the paper by Quinn and colleagues 1 in this Journal. They concluded that Australia's program has reduced the burden of varicella disease, but that the effectiveness of a single vaccine dose against varicella hospitalizations is only moderate. In October 2012, the Beijing center for disease control and prevention issued "the technical guideline of varicella vaccine uses in Beijing", recommending a two-dose vaccination schedule (at 18 months and 4 years) 2 . In order to evaluate the impact of the two-dose vaccination schedule on the incidence of varicella, we compared the epidemiological characteristics before and after its implementation in Chaoyang District.
For surveillance purposes, varicella is defined as a clinically diagnosed illness with acute onset of a generalized vesicular rash, excluding other causes. In February 2007, Beijing required varicella cases to be reported according to the requirements for Class C infectious diseases. Clinical practitioners must report cases of varicella within 24 h through the National Notifiable Disease Reporting System (NNDRS). Epidemiological and population data in this paper were collected from the NNDRS, with 2007-2012 as the pre-implementation period and 2013-2019 as the post-implementation period of the two-dose vaccination schedule.
During 2007-2019, a total of 34,122 cases of varicella were reported in Chaoyang District, Beijing, with annual incidence ranging from 53.91 to 108.12 per 100,000 population and showing a decreasing trend. The average annual incidence after the implementation of the two-dose vaccination schedule was 60.24 per 100,000 population, 39.02% lower than the pre-implementation average of 98.80 per 100,000 population. Varicella occurs year-round, with peak months from May to June and from November to December. Compared with pre-implementation levels, the average annual numbers of cases in May-June and November-December after implementation dropped by 42.62% and 34.41%, respectively.
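As a worked check of the reported decline (a sketch, using only the two averages quoted above):

```python
# Average annual incidence per 100,000 population, as quoted in the text.
pre_implementation = 98.80   # 2007-2012
post_implementation = 60.24  # 2013-2019

reduction = (pre_implementation - post_implementation) / pre_implementation * 100
print(f"relative reduction: {reduction:.2f}%")
# ~39.03%, matching the reported 39.02% up to rounding
```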
The incidence data from Chaoyang District show that the implementation of the 2-dose vaccination schedule further reduced the incidence of varicella. Firstly, the annual average reported incidence of varicella declined markedly. Secondly, the annual average number of cases in the peak months (May-June and November-December) dropped substantially. Thirdly, the incidence of varicella in the 0-4, 5-9 and 20-24 year age groups decreased significantly, especially in the target age group (5-9 years) for the second dose of varicella vaccine.
The switch from a one-dose to a two-dose vaccination schedule could further reduce the incidence of varicella, which has also been confirmed by data from other regions and countries. [3][4][5][6] For countries or areas preparing to include varicella vaccine in their immunization programs, the WHO Position Paper 7 recommends that effective surveillance systems be established to assess the varicella burden prior to introduction, with continued monitoring after introduction. Beijing carried out varicella surveillance as early as five years before the implementation of the 2-dose schedule, which ensured that the impact of 2-dose vaccination on the incidence of varicella could be assessed, thus providing experience for other regions and countries developing varicella vaccination schedules. In addition, it should be noted that vaccination can significantly reduce the incidence of varicella, but coverage must be maintained consistently at a certain level. 8 , 9

After the implementation of the 2-dose vaccination schedule, the age of infection tended to shift to older age groups in Chaoyang District. Firstly, the median age of onset showed a year-by-year upward trend, from 8 years in 2007 to 22 years in 2019. Secondly, the incidence in the 35-39 and 40-44 year age groups increased after implementation. It is thought that widespread childhood vaccination reduces the risk of infection, leading to an increase in the average age of exposure to Varicella-Zoster Virus (VZV). 10 This upward shift in the age of infection deserves attention, because adult patients with varicella have more severe symptoms and a higher risk of death and complications compared with children. 7

Another change in the epidemiology of VZV after implementation is that the incidence among 1-year-old children is significantly higher than before implementation. This was not observed in previous studies 3 , 4 , because those studies analyzed age-specific data before and after implementation using age intervals of 3-5 years. It should be noted that the initial age for the first dose after implementation is 18 months per the technical guideline, whereas before implementation it was 12 months per the manufacturers' instructions. Further studies are needed to determine whether this change is associated with increased morbidity in the 1-year-old group. It also suggests that children should receive varicella vaccine on time to reduce the risk of infection.
To sum up, the implementation of the recommended two-dose varicella vaccination guideline has led to a dramatic decline in varicella incidence in Chaoyang District, especially among the target group (5-9 years old) for the second dose. However, there are two noteworthy changes in the epidemiological characteristics after implementation: the median age of varicella infection has shifted toward adults, and the incidence rate among 1-year-old children has increased.
Emergence of novel recombinant type 2 porcine reproductive and respiratory syndrome viruses with high pathogenicity for piglets in China
Dear editor : Several studies in this journal have reported genetic diversity of African swine fever virus (ASFV) genomes and suggested that ASFV is a potential threat to unaffected countries in Asia. [1][2][3] Porcine reproductive and respiratory syndrome (PRRS), caused by porcine reproductive and respiratory syndrome virus (PRRSV), is another swine viral disease causing huge economic losses to the global swine industry. 4 PRRSV is an enveloped, positive-strand RNA virus of the family Arteriviridae. The virus is divided into two major types, the North American type 2 and the European type 1, with VR-2332 and Lelystad as the prototypical strains, respectively. 5 The emergence of novel PRRSV strains has caused many outbreaks of severe PRRS. 6 Since 2013, several PRRSV NADC30-like strains, sharing a unique discontinuous deletion of 131 aa in non-structural protein 2 (nsp2), have emerged in China and caused outbreaks. 7 In 2020, a novel PRRSV variant with a 142-aa deletion in nsp2 emerged in north China. In this letter we report the unique genetic characteristics of this novel PRRSV variant and its pathogenicity for piglets.
From July to October 2020, severe outbreaks of PRRS with 100% morbidity and approximately 70% mortality were observed on different pig farms in the Wuqing, Beichen, and Baodi districts of Tianjin, China. The infected nursery piglets showed high fever (41-42 °C) and respiratory disorders characterized by coughing, dyspnea, and tachypnea. Moreover, the affected swine farms showed a significant increase in secondary bacterial infections, including Haemophilus parasuis, Streptococcus suis, and Mycoplasma hyopneumoniae.
Using real-time PCR, serum samples collected from the affected swine farms in the Wuqing district of Tianjin were confirmed to be positive for PRRSV and negative for African swine fever virus (ASFV). The serum samples were then inoculated into porcine alveolar macrophage (PAM) cells for virus isolation, as described previously. 8 The strain, designated TJwq2020, was subsequently isolated and its full-length genomic sequence was determined. The genome of TJwq2020 is 14,987 nucleotides long (excluding the poly(A) tail) and shares 84.8% and 60.7% sequence identity with VR-2332 and LV, respectively, indicating that TJwq2020 belongs to type 2 PRRSV. Amino acid alignment of the nsp2 of TJwq2020 revealed a new deletion pattern of "111-aa + 11-aa + 1-aa + 19-aa" in its nsp2-coding region (aa322-aa432, aa466-aa476, aa483, aa504-aa522) compared with the sequence of VR-2332. This nsp2 deletion is novel compared with those of previous PRRSV isolates.
To establish the genetic relationships between TJwq2020 and other PRRSV strains, phylogenetic trees were constructed using the neighbor-joining method. 9 The results showed that TJwq2020 clusters in lineage 1 (Chsx1401-/NADC30-/MN184-like) based on the full-length genome, but in lineage 8 (JXA1-/HB-1/CH-1a-like) based on the 5' UTR and nsp4-8. These results indicate that mosaic recombination events may have occurred in the genome of the TJwq2020 isolate. To test this hypothesis, software-based detection of possible recombination events within TJwq2020 was performed.
Similarity comparisons were performed using SimPlot v3.5.1 software to identify possible recombination events. 10 Based on a set of complete genome sequences of various PRRSV strains, including lineage 1 strains (Chsx1401, NADC30, MN184C) and lineage 8 strains (JXA1, HB-1, CH-1a), the SimPlot graph clearly showed that TJwq2020 is closer to NADC30 than to any other strain. However, there were two narrow zones showing disproportionately low levels of similarity between the two strains compared with other regions ( Fig. 1 ). Notably, these two narrow zones of TJwq2020 had high levels of similarity with HB-1 (a lineage 8.1 strain isolated in Hebei, China, in 2002). These results indicate that TJwq2020 is a recombinant, possibly between NADC30-like and HB-1-like parental strains. The recombination mechanism was further analyzed by Bootscan.
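For readers unfamiliar with SimPlot, the sketch below illustrates the sliding-window identity scan that underlies such similarity plots. The function and its parameters are illustrative; real input would be whole-genome alignments of TJwq2020 against each candidate parental strain.

```python
def window_identity(query: str, reference: str, window: int = 200, step: int = 20):
    """Percent identity between two pre-aligned sequences in sliding windows."""
    assert len(query) == len(reference), "sequences must be aligned to equal length"
    points = []
    for start in range(0, len(query) - window + 1, step):
        q, r = query[start:start + window], reference[start:start + window]
        pairs = [(a, b) for a, b in zip(q, r) if a != "-" and b != "-"]  # skip gaps
        matches = sum(a == b for a, b in pairs)
        points.append((start + window // 2, 100 * matches / max(len(pairs), 1)))
    return points

# Running the query genome against NADC30 and HB-1 and overlaying the two
# identity curves would reveal the crossover zones described above.
```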
To determine the pathogenicity of TJwq2020, ten 4-week-old piglets were randomly divided into two groups (n = 5 each). Piglets in the challenge group were inoculated intramuscularly (1 ml) and intranasally (2 ml) with TJwq2020-F3 (2 × 10^4.0 TCID50 in 3 ml per pig). The negative control group was mock infected with 3 ml of DMEM. The inocula were free of bacteria, Mycoplasma, ASFV, CSFV, PCV2, and SIV. Rectal temperature and clinical signs of the piglets were recorded daily. Serum samples were collected on 0, 3, 5, 7, 10, 14 and 21 days post inoculation (dpi) for PRRSV N protein antibody detection (IDEXX, USA; S/P cut-off value 0.4). Piglets that died or were euthanized at 21 dpi were necropsied for histopathologic examination.
The rectal temperature of piglets in the TJwq2020-inoculated group showed a short fever (40.2 °C) at 1-3 dpi and then remained normal for 5 consecutive days. However, the TJwq2020-inoculated piglets subsequently showed high and continuous fever from 9 dpi to 14 dpi ( Fig. 2 A), with severe signs of disease progression such as coughing, anorexia, and red discoloration of the body. Antibody was detected in 40.0% (2/5) of TJwq2020-inoculated piglets at 3 dpi, at lower levels than at 10 and 14 dpi, and reverted to negative at 5 and 7 dpi. By 10 dpi, 100% (5/5) of serum samples from TJwq2020-inoculated piglets tested antibody positive ( Fig. 2 B). Severe interstitial pneumonia with extensive and marked pulmonary edema, hemorrhage and consolidation was observed in TJwq2020-inoculated piglets, and fibrinous pneumonia and pericarditis were also found at necropsy of dead piglets (data not shown). All (5/5) piglets died within 14 days of TJwq2020 inoculation ( Fig. 2 C). No obvious clinical signs were observed in the control group during the entire experimental period.
In summary, we isolated TJwq2020, a novel PRRSV variant with a unique discontinuous 142-aa deletion in nsp2, in China. We demonstrated that TJwq2020 is a natural recombinant between NADC30 (a lineage 1 strain) and HB-1 (a lineage 8 strain). Experiments in piglets demonstrated that TJwq2020 is a highly virulent strain. Our study suggests that recombination might be responsible for the varying pathogenicity of type 2 PRRSV strains, and highlights the importance of monitoring highly virulent recombinant PRRSV strains.
Declaration of Competing Interest
The authors declare that there are no conflicts of interest.
Funding
This work was supported by the Natural Science Foundation of Tianjin (19JCYBJC29800), the Project of Tianjin "131" Innovative Talent Team (JRC2018044), and the Technical System of the Pig Industry of Tianjin (TTPRS2021008).
Infection-enhancing anti-SARS-CoV-2 antibodies recognize both the original Wuhan/D614G strain and Delta variants. A potential risk for mass vaccination?
Dear editor , The aim of the present study was to evaluate the recognition of SARS-CoV-2 Delta variants by infection-enhancing antibodies directed against the NTD. The antibody studied is 1052 (pdb file #7LAB), which was isolated from a symptomatic Covid-19 patient 1 . Molecular modeling simulations were performed as previously described 2 . Two currently circulating Delta variants, B.1.617.1 and B.1.617.2, each carrying a distinct mutational pattern in the NTD, were investigated. Each mutational pattern was introduced into the original Wuhan/D614G strain, submitted to energy minimization, and then tested for antibody binding. The energy of interaction (ΔG) of the reference pdb file #7LAB (Wuhan/D614G strain) in the NTD region was estimated at −229 kJ.mol−1. In the case of the Delta variants, the energy of interaction was strengthened to −272 kJ.mol−1 (B.1.617.1) and −246 kJ.mol−1 (B.1.617.2). Thus, these infection-enhancing antibodies not only still recognize Delta variants but even display a higher affinity for these variants than for the original SARS-CoV-2 strain.
The global structure of the trimeric spike of the B.1.617.1 variant in the cell-facing view is shown in Fig. 1 A. As expected, the facilitating antibody bound to the NTD (in green) is located behind the contact surface, so that it does not interfere with virus-cell attachment. Indeed, a preformed antibody-NTD complex could perfectly bind to the host cell membrane. The interaction between the NTD and a lipid raft is shown in Fig. 1 B, and a whole raft-spike-antibody complex in Fig. 1 C. Interestingly, a small part of the antibody was found to interact with the lipid raft, as further illustrated in Figs. 1 D-E. More precisely, two distinct loops of the antibody heavy chain, encompassing amino acid residues 28-31 and 72-74, stabilize the complex through a direct interaction with the edge of the raft ( Fig. 1 F). Overall, the energy of interaction of the NTD-raft complex was strengthened from −399 kJ.mol−1 in the absence of the antibody to −457 kJ.mol−1 in its presence. By clamping the NTD and the lipid raft, the antibody reinforces the attachment of the spike protein to the cell surface and thus facilitates the conformational change of the RBD, which is the next step of the virus infection process 2 .
This notion of a dual NTD-raft recognition by an infection enhancing antibody may represent a new type of ADE that could be operative with other viruses. Incidentally, our data provide a mechanistic explanation of the FcR-independent enhancement of infection induced by anti-NTD antibodies 1 . The model we propose, which links for the first time lipid rafts to ADE of SARS-CoV-2, is in line with previous data showing that intact lipid rafts are required for ADE of dengue virus infection 3 .
Neutralizing antibodies directed against the NTD have also been detected in Covid-19 patients [4][5] . The 4A8 antibody is a major representative of such antibodies 5 . The epitope recognized by this antibody on the flat NTD surface is dramatically affected in the NTD of Delta variants 2 , suggesting a significant loss of activity in vaccinated people exposed to Delta variants. More generally, it can reasonably be assumed that the balance between neutralizing and facilitating antibodies may differ greatly according to the virus strain ( Fig. 2 ).
Current Covid-19 vaccines (either mRNA or viral vectors) are based on the original Wuhan spike sequence. As long as neutralizing antibodies overwhelm facilitating antibodies, ADE is not a concern. However, the emergence of SARS-CoV-2 variants may tip the scales in favor of infection enhancement. Our structural and modeling data suggest that this might indeed be the case for Delta variants.
In conclusion, ADE may occur in people receiving vaccines based on the original Wuhan strain spike sequence (either mRNA or viral vectors) who are then exposed to a Delta variant. Although this potential risk was cleverly anticipated before the massive use of Covid-19 vaccines 6 , the ability of SARS-CoV-2 antibodies to mediate infection enhancement in vivo has never been formally demonstrated. Although the results obtained so far have been rather reassuring 1 , to the best of our knowledge ADE by Delta variants has not been specifically assessed. Since our data indicate that Delta variants are especially well recognized by infection-enhancing antibodies targeting the NTD, the possibility of ADE should be further investigated, as it may represent a potential risk for mass vaccination during the current Delta variant pandemic. In this respect, second-generation vaccines 7 with spike protein formulations lacking structurally conserved ADE-related epitopes should be considered.
Dear editor,
Poole and colleagues demonstrated the impact of the coronavirus disease 2019 (COVID-19) pandemic on the epidemiology of common respiratory viruses. 1 The prevalence of non-SARS-CoV-2 viruses during the COVID-19 epidemic has not been widely reported in China. We therefore analyzed surveillance data on respiratory pathogens to explore the impact of public health measures against COVID-19 on the prevalence of non-SARS-CoV-2 pathogens in China.
All 41,630 cases were tested for 11 respiratory tract pathogens, and 13,630 (32.74%, 95% CI 32.29% to 33.19%) had at least one positive result. Before the COVID-19 epidemic, the most common pathogen among positive cases was influenza virus (IFV), followed by Mycoplasma pneumoniae (MP), human parainfluenza virus (HPIV), human rhinovirus (HRV), enterovirus (EV), respiratory syncytial virus (RSV), seasonal human coronavirus (HCoV), human metapneumovirus (HMPV), human adenovirus (HAdV), human bocavirus (HBoV) and Chlamydia pneumoniae (CP); during the COVID-19 epidemic this changed to seasonal HCoV, followed by HRV, HPIV, IFV, RSV, EV, HBoV, HMPV, MP, HAdV and CP ( Fig. 1 C-F and Tables S1-S3). SARS-CoV-2 was not detected. For children, the top five pathogens were MP, IFV, HPIV, EV and RSV before the COVID-19 epidemic, changing to HPIV, seasonal HCoV, HRV, RSV and IFV. For adults, the top five pathogens were IFV, MP, HRV, HPIV and seasonal HCoV before the COVID-19 epidemic, changing to seasonal HCoV, HRV, IFV, HPIV and EV. All other pathogens showed distinct increases in proportion except for IFV.

[Fig. 1. A, distribution of the 35 sentinel hospitals. B, monthly distribution of samples from ARTI during the COVID-19 epidemic versus the mean, maximum and minimum monthly numbers before the epidemic, together with monthly COVID-19 case counts. C-D, pathogenic spectra of acute respiratory tract infections before and during COVID-19. E-F, pathogenic spectra for children and adults, with proportions during COVID-19 marked as decreased or increased relative to before. The period 1 February 2015 to 31 January 2020 served as the pre-COVID-19 control period and 1 February 2020 to 31 January 2021 as the COVID-19 period.]
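The overall positivity rate and its confidence interval can be reproduced from the two counts above; a sketch, assuming statsmodels is available (the normal-approximation method matches the reported interval):

```python
from statsmodels.stats.proportion import proportion_confint

positive, tested = 13630, 41630
rate = positive / tested
low, high = proportion_confint(positive, tested, alpha=0.05, method="normal")
print(f"{100 * rate:.2f}% (95% CI {100 * low:.2f}% to {100 * high:.2f}%)")
# -> 32.74% (95% CI 32.29% to 33.19%)
```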
After the public health emergency response (PHER) was activated in Beijing on 24/01/2020, the overall positive rates of all pathogens changed according to the implementation of different levels of PHER ( Fig. 2 E-P). The positive rates decreased sharply during the COVID-19 outbreaks in the Xinfadi Market and in Shunyi District, Beijing. These results indicate that the prevalence of pathogens was closely related to public health measures against COVID-19.

[Fig. 2. A, positive rates of the pathogens overall and in children and adults; stars mark significant differences between positive rates before and during COVID-19 by chi-square test; gray columns show pre-COVID-19 proportions, blue and red columns show proportions during COVID-19 that decreased or increased, respectively. B-D, differences (upper half) and difference proportions (lower half) between positive rates before and during COVID-19 overall, in children, and in adults; green marks decreases and purple marks increases. The difference in positive rate was defined as the rate during COVID-19 minus the rate before; the difference proportion (%) as the difference divided by the pre-COVID-19 detection rate.]
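A minimal sketch of the before/during comparison by chi-square test; the 2×2 counts below are hypothetical placeholders, not the study data:

```python
from scipy.stats import chi2_contingency

#                 positive  negative  (hypothetical counts for one pathogen)
table = [[4200, 8300],   # before COVID-19
         [1100, 5600]]   # during COVID-19
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, df = {dof}, p = {p:.3g}")
```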
Four groups were classified according to the dynamic trend of the positive rate of each pathogen. The positive rates of seasonal HCoV and HPIV were lower in the early stage of the COVID-19 epidemic and increased in the late stage, even exceeding their previous peaks. For HRV, EV and HBoV, positive rates returned to their previous levels in the late stage of the COVID-19 epidemic. The positive rates of IFV, MP, HMPV, HAdV and CP decreased to lower levels and remained low until January 2021; IFV was detected only in February and March 2020. COVID-19 had a relatively smaller effect on the positive rate of RSV than on other pathogens.
Public health measures against COVID-19 and changes in people's daily behavior potentially affected the prevalence of other respiratory pathogens. [2][3][4][5][6] The decrease in IFV is closely related to influenza vaccination as well as to the public health measures; greater influenza vaccination uptake may have effectively reduced the spread of IFV. The COVID-19 epidemic and the corresponding protective measures have also changed people's health-seeking behavior, with significant decreases in non-COVID-19 inpatient and outpatient visits.
Seasonal HCoV became the leading pathogen in ARTIs during the COVID-19 epidemic. Anton et al. reported that the detection rates of HCoV and HPIV increased after the H1N1 epidemic in Catalonia, Spain. 7 The four seasonal HCoVs show biennial incidence peaks in winter, with alternating peak seasons for 229E and NL63, and for OC43 and HKU1. Studies have shown a biennial trend in the peak prevalence of seasonal HCoV in Beijing. 8,9 Therefore, the incidence of seasonal HCoV may have peaked in 2020/2021 and might have been even higher in the absence of public health measures for COVID-19. This suggests that monitoring of HCoV should be further strengthened to prevent a co-epidemic of seasonal HCoV and SARS-CoV-2, especially as control measures for COVID-19 become normalized.
In general, public health measures against COVID-19 substantially reduced the prevalence of other respiratory pathogens. IFV fell from the most common to the fourth most common pathogen during the COVID-19 epidemic. Surveillance and control of seasonal HCoV, which became the leading pathogen of ARTIs, should be strengthened to prevent co-circulation with SARS-CoV-2.
These findings indicate the additional benefit of the public health measures implemented for COVID-19 in reducing the spread of other respiratory diseases.
Acknowledgments
The study was supported by the National Major Science and Technology Project for Control and Prevention of Major Infectious Diseases in China ( 2017ZX10103004 ). We thank all participants, including study volunteers enrolled in the study. We thank the staff of the 35 sentinel hospitals composed the Respiratory Pathogen Surveillance System in Beijing, who enrolled the study participants and the staff members of all district-level CDCs for their assistance on respiratory pathogen surveillance.
Supplementary materials
Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.jinf.2021.08.013 .
Preserved C-reactive protein responses to blood stream infections following tocilizumab treatment for COVID-19
Dear editor, In this Journal, Rossotti and colleagues provided early data on the utility of tocilizumab in COVID-19 1 , later confirmed in randomised studies 2 . Tocilizumab-mediated inhibition of IL-6 signalling can decrease CRP concentrations 1 , potentially confounding the diagnosis of bacterial co-infections in COVID-19, which occur more frequently following longer hospital stays and admissions to the intensive care unit (ICU) [3][4][5] .
In inflammatory arthritides, serial tocilizumab dosing variably attenuates CRP responses to bacterial infections 6 , but the effect of single-dose use in COVID-19 is not defined 2 , 7 . In a small COVID-19 cohort with blood stream infections (BSIs) that had received tocilizumab, CRP was reduced but remained detectable at the time of BSI diagnosis 8 . However, CRP kinetics around BSI were not assessed, so the utility of CRP to guide antibiotic prescribing in this context remains unknown 5 , 9 . We addressed this question by testing the hypothesis that a single dose of tocilizumab for COVID-19 preserves CRP responses to bacterial infections, as modelled by BSIs.
We identified patients admitted to Royal Free Hospital (RFH) between 01/03/2020 and 01/02/2021, aged > 18 years and diagnosed with COVID-19 by RT-PCR detection of SARS-CoV-2 from nasopharyngeal swabs. Tocilizumab use originated from routine clinical care delivery or randomised clinical trials after unblinding. COVID-19 associated BSIs were defined by isolation in blood cultures of any bacteria, excluding coagulase negative staphylococci, between 14 days prior to and 60 days after COVID-19 diagnosis. We excluded patients that developed BSIs prior to receiving tocilizumab. To assess dynamic CRP responses, we included only patients with blood parameter measurements performed at least 3 days prior to the onset of BSIs. Clinical, laboratory and drug data extraction, and statistical analyses were performed as previously described 5 . The study was approved by the Research and Innovation Group at RFH, which stated that as this was a retrospective review of routine clinical data, formal ethics approval was not required.
Among the COVID-19 patients who met our inclusion criteria, 107 had received tocilizumab, 17 of whom then developed a BSI during their hospital admission ( Table 1 ). A separate cohort of 55 COVID-19 patients developed a BSI but had not received tocilizumab ( Table 1 ). Tocilizumab use preceding BSIs was more commonly associated with ICU admission, but the BSI organisms were comparable between the groups ( Table 1 ). In the first week after tocilizumab administration we observed a rapid fall in CRP ( Fig. 1 A), but not in total white cell, neutrophil or lymphocyte counts ( Fig. 1 A & fig S1). The CRP reduction following tocilizumab was short-lived, with CRP concentrations rising within 21 days of tocilizumab receipt ( Fig. 1 A). To exclude confounding by bacterial co-infection, a sensitivity analysis of 90 patients who did not develop a BSI following tocilizumab also showed an early reduction followed by a rebound in CRP (fig S2A). A similar pattern was evident in patients who developed a BSI, although CRP concentrations showed less attenuation and greater heterogeneity within the 21-day period after tocilizumab administration ( fig S2B).
To test the hypothesis that CRP would rise following a BSI independent of prior tocilizumab administration, we compared CRP responses in the 17 patients who had received tocilizumab prior to a BSI with the 55 patients who had not received tocilizumab. Strikingly, in both cohorts, BSIs resulted in clear CRP elevations ( Figs. 1 B & 1 C). We calculated the change in CRP across the time of BSI onset to compare this CRP rise quantitatively. As blood samples were not collected daily in all patients, we derived paired sampling by calculating the maximal CRP value 2 or 3 days prior to collection of the BSI-detecting blood culture and the maximal CRP up to 2 days after BSI. This approach revealed an increase in CRP following BSI in 76.5% and 75.0% of patients who had or had not received tocilizumab, respectively ( Fig. 1 D). Moreover, there was no difference in CRP increase between the groups (median CRP change +88 mg/L vs +76 mg/L respectively, p = 0.67 by Mann-Whitney test).
As patients developed BSIs at varying times after receiving tocilizumab, we tested the hypothesis that the BSI-induced CRP increment would be proportional to the interval between tocilizumab administration and BSI onset. However, in the 17 patients who both received tocilizumab and subsequently developed a BSI, no relationship was observed between the length of the tocilizumab-BSI interval and the change in CRP ( r = 0.1069, p = 0.6811 by Spearman rank correlation) ( Fig. 1 E).
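A sketch of this paired CRP analysis, assuming a hypothetical long-format DataFrame `crp` with one row per measurement (columns: patient, day relative to the BSI-detecting blood culture, and crp in mg/L); the column names and the commented usage are illustrative:

```python
import pandas as pd
from scipy.stats import mannwhitneyu, spearmanr

def crp_change(df: pd.DataFrame) -> pd.Series:
    """Per-patient max CRP up to 2 days post-BSI minus max CRP 2-3 days pre-BSI."""
    pre = df[df["day"].between(-3, -2)].groupby("patient")["crp"].max()
    post = df[df["day"].between(0, 2)].groupby("patient")["crp"].max()
    return (post - pre).dropna()

# delta_toci = crp_change(crp[crp["tocilizumab"]])
# delta_none = crp_change(crp[~crp["tocilizumab"]])
# mannwhitneyu(delta_toci, delta_none)      # group comparison (p = 0.67 above)
# spearmanr(interval_days, delta_toci)      # tocilizumab-BSI interval vs. increment
```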
By inhibiting IL-6 signalling, tocilizumab may affect CRP-guided antibiotic prescribing decisions 5 , 9 . However, we demonstrate that prior administration of a single dose of tocilizumab does not attenuate CRP responses following a BSI, retaining the utility of this biomarker to diagnose bacterial co-infections associated with COVID-19. These findings have important implications for tocilizumab-treated COVID-19 patients: first, clinically indicated antibiotic prescriptions are unlikely to be delayed, and second, low CRP levels alone are not an indication for continued prescription of unnecessary antibiotics, supporting stewardship efforts. Nevertheless, BSI onset did not initiate CRP elevations in all patients, irrespective of prior tocilizumab use, emphasising that CRP is only one contributor to diagnosing incipient bacterial infections.
Despite preserved CRP responses to BSI, tocilizumab transiently reduced baseline CRP levels, which mostly recovered within 21 days. Furthermore, BSI-associated CRP increments were unrelated to time since tocilizumab, indicating that single tocilizumab dosing may not completely neutralise IL-6 responses 10 , although a role for IL-6-independent CRP stimuli cannot be excluded. Measuring IL-6 signalling activity in vivo may predict attenuation of CRP responses and also inform the need for further tocilizumab dosing in COVID-19 7 .
Our study is limited by its single-centre and retrospective nature, constraining patient numbers and precluding correction for potential confounders. Nevertheless, the increased frequency of corticosteroid use in tocilizumab recipients could only have further attenuated CRP responses, counter to our observations. BSIs provided a standardised definition for bacterial infections but limit extrapolation to non-BSI settings, an area of future work required to confirm the generalisability of our findings.
In conclusion, we show that tocilizumab use in severe COVID-19 preserves elevations in CRP concentration following the onset of a confirmed bacterial co-infection, as modelled by BSIs. Use of tocilizumab should not negate judicious, CRP-guided use of antibiotics in COVID-19.
Contributions
EQW, IB, SB and GP conceived the study. EQW, CB, AN, BOF, JP, ML, SY, SH, DM, MS and GP collected and analysed the data. EQW, SB and GP drafted the manuscript. All authors reviewed and approved the final version of the manuscript.
Declaration of Competing Interest
All authors declare that they have no conflicts of interest.
Funding
No external funding supported this work.
Supplementary materials
Supplementary material associated with this article can be found, in the online version, at doi:10.1016/j.jinf.2021.08.017 .
Dear editor ,
We read with great interest the recently published letter in the Journal of Infection by Afshar et al., who suggested that the proportions of severe cases and mortality among cancer patients infected with COVID-19 were higher than those of COVID-19 patients without cancer, owing to abnormal immune function. 1 Meanwhile, Liang et al. collected and analyzed 2007 COVID-19 cases admitted for treatment at 575 hospitals in China and found that 1% of COVID-19 patients had a history of cancer, much higher than the incidence of cancer in the general population (0.29%). 2 However, the COVID-19 susceptibility of cancer patients remains a subject of considerable controversy, and observational studies with different samples have even reached opposite conclusions. Gallo et al. suggested that the impaired immune response of cancer patients might be protective against the cytokine storm caused by COVID-19. 3 Therefore, it is necessary to investigate whether susceptibility genes for COVID-19 play a critical role in lung cancers.
In this study, we comprehensively analyzed the genetic alterations, mRNA expression, protein expression, prognostic value, immune infiltration and lung cancer correlation of COVID-19 susceptibility genes (SLC6A20, LZTFL1, CCR9, FYCO1, CXCR6, XCR1, ABO, RPL24, FOXP4, TMEM65, OAS1, KANSL1, TAC4, DPP9, RAVER1, PLEKHA4 and IFNAR2) in lung adenocarcinoma (LUAD) and lung squamous cell carcinoma (LUSC). 4 Among the 17 susceptibility genes, the genetic alteration rate of RPL24 was as high as 6.37% in LUSC and that of TMEM65 was as high as 5.48% in LUAD, suggesting that alterations in these genes strongly affect the occurrence of lung cancer. Most COVID-19 susceptibility genes were differentially expressed in samples from TCGA lung cancer datasets, and the differential expression of LZTFL1, TMEM65, OAS1, DPP9, RAVER1 and IFNAR2 was replicated in three cohort studies ( Table 1 ; only genes considered differentially expressed are shown, with statistically significant differential expression defined as P < 0.05 and fold change > 1.5). Using immunohistochemical staining of lung cancer tissues and normal lung tissues in the Human Protein Atlas to validate protein expression, we observed that six genes, SLC6A20, FYCO1, FOXP4, TMEM65, XCR1 and OAS1, had significantly different protein expression levels ( Fig. 1 A). Subsequently, we used a Cox regression model and the log-rank test to estimate the impact of COVID-19 susceptibility gene expression on overall survival (OS). Under both the Cox model and the log-rank test, high expression of FYCO1, CXCR6, XCR1 and TAC4 was protective in LUAD, whereas TMEM65 and OAS1 were risk factors for LUAD ( Fig. 1 B). Finally, we found that COVID-19 susceptibility genes were strongly related to almost all six main immune cell types (B cells, CD8+ T cells, CD4+ T cells, macrophages, neutrophils and dendritic cells) in lung cancer, further confirming the close interaction of COVID-19 with immune responses in tumors. We also explored the association between COVID-19 susceptibility genes and known lung cancer markers (TP53 and KRAS); ABO, DPP9, FOXP4, FYCO1, LZTFL1, RAVER1, TAC4 and XCR1 were strongly correlated with both markers ( Fig. 1 C).
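A sketch of the survival analysis described above, assuming the lifelines package and a hypothetical per-patient DataFrame with overall-survival time, an event indicator, and a binary high/low expression flag for each gene; all names are placeholders:

```python
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

def assess_gene(df, flag: str):
    """Hazard ratio (Cox model) and log-rank p-value for one expression flag."""
    cph = CoxPHFitter().fit(df[["os_months", "death", flag]],
                            duration_col="os_months", event_col="death")
    hi, lo = df[df[flag] == 1], df[df[flag] == 0]
    lr = logrank_test(hi["os_months"], lo["os_months"],
                      event_observed_A=hi["death"], event_observed_B=lo["death"])
    return cph.hazard_ratios_[flag], lr.p_value
```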
In conclusion, we verified the finding of the COVID-19 Host Genetics Initiative that mutations in two lung cancer-related genes, DPP9 and FOXP4, increase the risk of severe COVID-19 4 . Furthermore, we propose XCR1, TMEM65 and OAS1 as independent prognostic markers for lung cancer, and support a potential partial genetic overlap between COVID-19 and lung cancer. We provide new insights and research directions for the diagnosis, treatment and management of lung cancer patients during the COVID-19 pandemic.
Declaration of Competing Interest
All the authors declare that there are no conflicts of interest.
Long-lasting immune response to a mild course of PCR-confirmed SARS-CoV-2 infection: A cohort study
Dear editor , We read with great interest the paper by Wells et al. 1 , who investigated the seroprevalence of SARS-CoV-2 antibodies in a cohort of 431 non-hospitalized UK twins (mean age = 48.38, SD = 28; 85.1% female) who systematically reported the presence or absence of COVID-19 symptoms on many occasions over time via an ad-hoc study app. Their results indicated that 51 of the 431 individuals (11.8%) were seropositive, with IgG responses to both N and S proteins 4-fold above the background of the assay. Within the group of seropositive individuals with complete symptom data (n = 48), 35 participants (72.9%) had core symptoms (defined as anosmia, cough and fever) and 13 (27.1%) did not. Among these 13, 9 (18.7%) participants were fully asymptomatic. This finding is particularly relevant, as it shows that even participants with an asymptomatic course of the disease developed antibodies against SARS-CoV-2.

While the longitudinal assessment of symptoms with the app allowed tracking of the clinical course of the disease, a major limitation of the study by Wells et al. 1 is that SARS-CoV-2 infection was not confirmed by real-time polymerase chain reaction (RT-PCR) at symptom onset. Therefore, the time interval between infection and sample collection was unknown, an important source of variability that likely affected the estimated percentage of seropositive individuals. Moreover, the cohort comprised related individuals (twins), whose immune responses to SARS-CoV-2 exposure were probably correlated to some extent. 2 It follows that observations were not independent, which probably also affected the results. Nevertheless, the idea of studying the immune response against SARS-CoV-2 in individuals who underwent a mild course of COVID-19 is of pivotal importance, because mild cases are the most typical manifestation of this disease and many infections are even asymptomatic. 3 , 4 Previous studies described the longitudinal stability of immunity against SARS-CoV-2 mostly in hospitalized COVID-19 patients with mixed clinical profiles (mild, moderate or severe symptoms), using different laboratory methods for the analysis of antibodies (see for example [5][6][7][8][9]). Therefore, knowledge about the longitudinal stability of the immune response in mild and asymptomatic SARS-CoV-2 infections is limited.

We present recent findings from a cohort of COVID-19 convalescent individuals that partly replicate and extend the evidence by Wells et al. 1 We collected blood plasma samples to determine pan-Ig antibody titres in a cohort of 326 non-hospitalized volunteers (median age = 42 years, IQR = 31-52; 61.7% female) who tested positive for SARS-CoV-2 by RT-PCR between February 2020 and January 2021. Samples of serum, lithium heparin plasma, sodium citrate plasma, EDTA plasma and EDTA buffy coat were collected at the baseline visit after remission of SARS-CoV-2 infection (median = 66 days, IQR = 42.0-111.8 between infection and baseline visit) and 1, 2, 5 and 12 months after the baseline visit. Clinical symptoms of COVID-19, demographic characteristics, lifestyle, and comorbidities were recorded by study physicians via an electronic questionnaire at the baseline visit.
Antibody levels were determined with the Elecsys® Anti-SARS-CoV-2 assay (a qualitative pan-Ig test against the viral nucleocapsid protein, positive if ≥ 1 COI) and the Elecsys® Anti-SARS-CoV-2 S assay (a quantitative pan-Ig test against the receptor-binding domain of the viral spike protein S1, positive if ≥ 0.8 U/ml) (Roche Diagnostics, Rotkreuz, Switzerland). Data and sample collection for late timepoints is still ongoing. Samples collected after the date of first vaccination were censored.
Overall, 300 (92.0%) participants experienced COVID-19 core symptoms (at least one among fever, cough, dyspnea, ageusia and anosmia during the course of the disease) and 26 (8.0%) did not; 88 participants (27.0%) presented with comorbidities, which were equally distributed among individuals with and without core symptoms. Results of the antibody tests revealed very high positivity rates (qualitative assay positive at the baseline visit in 303 of 321 individuals, 94.4%; quantitative assay positive at the baseline visit in 310 of 320 individuals, 96.9%), and these increased over time, reaching 96.7% and 100% positivity by visit 4 for the qualitative and quantitative assays, respectively.

[Table 1 note: Visits 2, 3 and 4 occurred 1, 2 and 5 months after the baseline (visit 1), respectively. Five observations were censored due to vaccination already at baseline (visit 1); the value of the quantitative antibody assay at the first visit was missing for one participant.]
To account for variability in the time lag between the first PCR test and the baseline visit (visit 1), participants were subdivided into two groups: individuals whose baseline visit occurred within 42 days of the first PCR test (early baseline visit group) and individuals whose baseline visit occurred more than 42 days after the first PCR test (late baseline visit group). Table 1 shows the median (IQR) of the quantitative antibody assay in the whole cohort (see also Fig. 1 ) and in the different subgroups. We modeled the longitudinal variation of quantitative antibody titres over time with linear mixed-effects models, adding a random intercept and a random slope to account for the random variation of individual time trajectories (a model-fitting sketch follows). Results revealed a significant positive association between log-transformed quantitative antibody titre and time since the first PCR test (estimate = 1.07, p < 0.001). The effect of baseline visit group was also significant: the late group generally had higher antibody levels than the early group (estimate = 1.38, p < 0.001). The presence of core symptoms (estimate = 0.52, p < 0.001) and of comorbidities (estimate = 0.30, p < 0.001) was related to higher antibody levels. The interaction between time and the early/late baseline visit group was significant (estimate = −0.61, p < 0.001), indicating that the increase in antibody levels was steeper in the early than in the late baseline group.
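A sketch of such a mixed model, assuming statsmodels and a hypothetical long-format DataFrame `d` (columns: id, log_titre, time since first PCR test, late_group, core_symptoms, comorbidity); all names are placeholders:

```python
import statsmodels.formula.api as smf

model = smf.mixedlm(
    "log_titre ~ time * late_group + core_symptoms + comorbidity",
    data=d,
    groups=d["id"],      # random intercept per participant
    re_formula="~time",  # plus a random slope for time
)
result = model.fit()
print(result.summary())  # fixed effects correspond to the estimates reported above
```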
Taken together, our results indicate a strong and persistent immune response against SARS-CoV-2 infection in individuals who recovered from a mild course of COVID-19, for up to 8 months post infection. Importantly, our findings are in line with a recent study 10 that used the same quantitative anti-RBD antibody assay and reported a robust and persistent antibody response six months post infection. The titre at the baseline visit and its change over time were modulated by the time lag between infection and sampling. In line with Wells et al., we found that individuals without core symptoms developed immunity against COVID-19, although to a lower degree than individuals with core symptoms, and that it persisted over time.
In the framework of this ongoing pandemic, our study highlights the importance of high-quality research data arising from the patient and sample collections of biorepositories.
Declaration of Competing Interest
The authors declare no conflict of interest.
Ethics approval
The study was performed in accordance with the latest version of the Declaration of Helsinki and was approved by the local ethics committee of the Medical University of Graz (ethics vote: 32-423 ex 19/20). All participants signed a Biobank informed consent and a study-specific informed consent.

Paradoxical neuroimaging changes include worsening leptomeningeal enhancement (76%), new parenchymal brain lesions (60%), white matter lesions (56%) and new or enlarging cryptococcomas (20%), among other findings. Neuroimaging deterioration occurs 30 to 60 days after VPS, and improvement of paradoxical neuroimaging lesions takes 60 days to 12 months (median 4 months). Representative paradoxical neuroimaging changes in patient 11 are shown in Fig. 1 . Paradoxical deterioration of cerebrospinal fluid (CSF) after VPS includes increased protein levels (88%), decreased glucose levels (40%), leukocytosis (56%) and/or a shift towards polymorphonuclear pleocytosis (40%) ( Table 1 ). These changes usually peak within 2-4 weeks after VPS. In patient 11, paradoxical CSF changes lasted for more than 22 months after the cryptococcal culture became negative, but his clinical manifestations improved 2 months after VPS.
In the IRIS group, 14 patients (63.6%) improved completely, while 7 patients (31.8%) had a moderate outcome and 1 patient (4.5%) had a poor outcome. Five patients with severe IRIS were misdiagnosed with alternative central nervous system infections, such as bacterial or tuberculous infection, after VPS. Nine patients with severe IRIS received prednisone (0.75-1.5 mg/kg/day) or dexamethasone (10-20 mg/day) for 7-10 days, followed by a gradual taper of oral low-dose prednisone over 7 days. Seven patients improved only after receiving corticosteroid therapy. Two patients received prolonged prednisone treatment for persistent headache, significant elevation of CSF protein and diffuse brain lesions: 22 months for patient 11 and 4 months for patient 23.
Discussion
Reports have shown that the incidence of ART-related C-IRIS is 13%-30%. 1 , 6 However, in our study, 88% of non-HIV CM patients had paradoxical IRIS after VP shunt. The most important finding of our study is that VPS-related IRIS is very common. Among paradoxical IRIS manifestations, CSF paradoxical reactions are more common than paradoxical neuroimaging changes and paradoxical clinical manifestations. Most studies have shown that ART-related CM-IRIS in HIV patients occurs 1-2 months after the initiation of ART. 1 , 7 In our study, clinical paradoxical changes usually occurred within a few days after VP shunt. Paradoxical neuroimaging deteriorations occurred 30 to 60 days after VP shunt, and improved from 60 days to 12 months (median 4 months) after VP shunt. Paradoxical CSF changes usually peaked 2-4 weeks after VPS, and could persist for more than 4 months after CSF cryptococcal culture became negative. In patient 11, the paradoxical neuroimaging and CSF abnormalities lasted more than 22 months after the cryptococcal culture became negative, although he responded clinically to antifungal and corticosteroid therapy. These three paradoxical reactions do not necessarily occur or resolve simultaneously; paradoxical neuroimaging and CSF changes last longer than paradoxical clinical reactions. This inconsistency means that IRIS after VP shunt may be misdiagnosed as treatment failure or an alternative CNS infection.
The pathogenesis of CM-IRIS in HIV and non-HIV patients is still poorly understood. The pathogenesis of ART-related CM-IRIS in HIV patients is related to immune restoration after ART. Antifungal-induced CM-IRIS in immunocompetent hosts has been shown to involve an immune response similar to that of ART-related IRIS: IRIS occurs when antifungal therapy reverses the immunosuppressive state mediated by a Th2 phenotypic response. 8 In our study, the timing, the changes in CSF and neuroimaging, and the good response to steroid therapy indicate that an immune response underlies VPS-related IRIS. The main risk factor may be a high cryptococcal load before VPS with a rapid decline after VPS. Compared with the non-IRIS group, the IRIS group had much higher quantitative CSF cryptococcal counts before shunting (median 15,141 vs. 2603 per mL, p = 0.016) and a shorter time to CSF culture negativity after shunting (median 39.9 vs. 150 days, p < 0.001). These results suggest that the pathogenesis of VPS-related IRIS is similar to the immune restoration syndrome of ART-related IRIS. The immune pathogenesis and prediction of CM-IRIS after VPS remain unclear and merit further study. There are no controlled clinical trials for the management of CM-IRIS. Recent reports indicate a good response to corticosteroids in ART-related and antifungal-induced CM-IRIS. 9 , 10 Our results also show that corticosteroid therapy is a good choice for VPS-related IRIS and does not cause CM recurrence. The dose and duration of corticosteroids are still inconclusive and are usually individualized based on clinical response. VP shunt placement is an effective method to continuously relieve uncontrolled intracranial pressure (ICP) and is therefore essential to improve the survival and neurological function of CM patients. IRIS related to VPS is very common. Recognizing that these reactions are the result of an immune response, not treatment failure or an alternative diagnosis, is important to avoid inappropriate changes in treatment.
Declaration of Competing Interest
On behalf of all authors, the corresponding author states that there is no conflict of interest.
Funding
Funding was not obtained for this study.
Availability of data and materials
All the data supporting the findings is contained within the manuscript.
Genesis, evolution and host species distribution of influenza A (H10N3) virus in China
Dear Editor, A recent article by Wang in the Journal of Infection 1 confirmed the first case of human infection with H10N3 avian influenza virus (AIV), in Zhenjiang, Jiangsu Province, China, in April 2021. Transmission of H10 subtype AIVs from birds to humans is uncommon but has occurred before: the first reported human infections with an H10 subtype influenza virus occurred in Egypt in 2004. 2 In subsequent surveillance, cross-species transmission of H10 subtype influenza virus has occasionally been detected; most notably, three patients were infected with H10N8 subtype influenza virus in China in 2013, two of whom died. 3 Wang also analyzed the whole genome of the first human-origin H10N3 isolate, showing that this virus is an avian-origin reassortant with the hemagglutinin (HA) and neuraminidase (NA) genes from H10N3 viruses and six internal genes from H9N2 viruses. 1 WHO considered this human case a sporadic transmission of H10 from avian hosts to humans. However, no avian-origin H10N3 viruses of the same genotype had been reported until now.
During our surveillance for AIVs in China, ten H10N3 viruses ( Table 1 ) were isolated from chicken tracheal and cloacal swab samples in live bird markets (LBMs) between December 2019 and May 2021. We sequenced all eight gene segments of these ten viruses to trace their origin and clarify their genetic properties. The results showed that these avian-origin isolates have high homology with the human-origin strain A/Jiangsu/428/2021. Phylogenetic analysis of sequences available in the Global Initiative on Sharing All Influenza Data (GISAID, https://platform.gisaid.org ) showed that H10Nx HA genes are divided into North American and Eurasian lineages ( Figure 1 ), similar to other HA subtypes of AIVs. In addition, viruses of the Eurasian lineage appear more divergent and have formed several sub-lineages. The ten avian-origin viruses and the one human-origin virus belonged to the Eurasian lineage and formed an independent sub-lineage, Group 5 ( Figure 1 ), at a large evolutionary distance from other H10 strains. The human H10N8 viruses isolated in Jiangxi province in 2013 and 2014 belonged to the Group 2 sub-lineage ( Figure 1 ). HxN3 NA genes are divided into three sub-lineages: North American, Eurasian, and mixed (North American and Eurasian). As shown in Supplementary Figure 1, the NA of these novel reassortant H10N3 strains belonged to the Eurasian lineage and also formed an independent sub-lineage, like the HA segment. Phylogenetic analysis suggested that the HA and NA of these H10N3 viruses may have evolved independently in poultry for a considerably long time.
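A minimal sketch of neighbor-joining tree construction, assuming Biopython and a hypothetical HA alignment file (MEGA was used for the actual analysis; see Figure 1):

```python
from Bio import AlignIO
from Bio.Phylo.TreeConstruction import DistanceCalculator, DistanceTreeConstructor

alignment = AlignIO.read("h10_ha.aln", "clustal")  # hypothetical alignment file
distances = DistanceCalculator("identity").get_distance(alignment)
tree = DistanceTreeConstructor().nj(distances)     # neighbor-joining topology

# Bootstrap support (1,000 replicates in the text) can be estimated by
# rebuilding trees on resampled alignments, e.g. with Bio.Phylo.Consensus.
```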
Homology analysis of the internal genes of these avian-origin H10N3 viruses showed that the polymerase basic protein 2 (PB2), polymerase basic protein 1 (PB1), polymerase acidic protein (PA), nucleocapsid protein (NP), matrix protein (M) and nonstructural protein (NS) genes were all derived from H9N2 AIVs. Interestingly, H9N2 AIVs have been dominant in LBMs in China in recent years and have donated internal genes to human-infecting isolates such as H7N9, H10N8 and H5N6. [3][4][5] AIVs carrying H9N2-derived internal genes have therefore attracted the attention of the general public and health professionals in recent years.
We next analyzed the important molecular markers of these avian-origin and human-origin H10N3 strains. As shown in the Supplementary Table, these strains all harbored the G228S mutation at the receptor binding site of the HA protein, indicating a stronger capacity for binding to human-like receptors. 6 In addition, all strains harbored the PB2-A588V, PB1-I368V and PA-K356R mutations, which could enhance polymerase activity, viral replication, and virulence of AIVs in mammals. [7][8][9] Quite unexpectedly, although the human-origin strain and six (6/10) avian-origin H10N3 strains contained the avian marker PB2-627E, four (4/10) avian-origin isolates carried the PB2-E627V mutation, which was previously found to function as an intermediate between 627E and 627K in the H7N9 genetic background. 10 These mammalian molecular markers suggest that these H10N3 viruses may have an even higher capacity for mammalian adaptation, especially the strains containing PB2-627V.
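As a minimal illustration of this kind of marker screen (the sequences below are placeholders, not the study's isolates; positions follow the numbering used above), one could scan aligned protein sequences position by position:

```matlab
% Hypothetical marker screen; 'X' placeholder sequences stand in for real
% curated alignments whose numbering matches the positions cited above.
markers = struct('protein', {'HA','PB2','PB1','PA','PB2'}, ...
                 'pos',     {228, 588, 368, 356, 627}, ...
                 'res',     {'S','V','V','R','V'});  % mammalian-associated residues
seqs = struct('HA', repmat('X',1,600), 'PB2', repmat('X',1,760), ...
              'PB1', repmat('X',1,760), 'PA', repmat('X',1,720));
for k = 1:numel(markers)
    m = markers(k);
    residue = seqs.(m.protein)(m.pos);
    if residue == m.res
        tag = 'mammalian-adaptation marker';
    else
        tag = 'other residue';
    end
    fprintf('%s-%d: %c (%s)\n', m.protein, m.pos, residue, tag);
end
```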
Table 1. Strains and hosts of the ten avian-origin H10N3 isolates.

Figure 1. Phylogenetic trees of H10Nx-HA genes of AIVs isolated from poultry and downloaded from the GISAID database. The ten avian-origin H10N3 isolates are highlighted in red; the human-origin H10N3 virus is highlighted in blue; the human-origin H10N8 viruses are highlighted in green. The tree was constructed using the neighbor-joining method with the maximum composite likelihood model in MEGA version 7.0 ( http://www.megasoftware.net ) with 1,000 bootstrap replicates.
other hosts including chicken (6.17%, 10/162) and humans (0.62%, 1/162). Interestingly, the H10N3 viruses from chicken were all isolated by our laboratory after December 2019 and analyzed in this study. The above data show that the host of this novel reassortant H10N3 has shifted from waterfowl to terrestrial poultry.
In conclusion, these avian-origin H10N3 isolates are highly homologous with the human-origin H10N3 strain, and some of them were isolated in Jiangsu Province before the human infection, indicating that the human case was transmitted from poultry to human. The fact that the internal genes of these H10N3 isolates were all derived from H9N2 viruses should arouse our attention, because reassortants of this type, such as H7N9, H5N6 and H10N8, have caused human infections and deaths in recent years. Because low pathogenic AIVs usually do not cause overt symptoms in poultry, this virus might have circulated for a long time before it was discovered, as supported by our phylogenetic analysis. In addition, these avian-origin H10N3 isolates carry several mammalian molecular markers, suggesting an even higher capacity for mammalian adaptation. Our findings indicate the importance of continuous surveillance for the emergence and evolution of novel influenza viruses in poultry and of the potential threat to public health.
Declaration of Competing Interest
No potential conflict of interest was reported by the authors.
Supplementary materials
Supplementary material associated with this article can be found, in the online version, at doi: 10.1016/j.jinf.2021.08.021 .
Visual-reward driven changes of movement during action execution
Motor decision-making is often described as a sequential process, beginning with the assessment of available options and leading to the execution of a selected movement. While this view is likely to be accurate for decisions requiring significant deliberation, it would seem unfit for choices between movements in dynamic environments. In this study, we examined whether and how non-selected motor options may be considered post-movement onset. We hypothesized that a change in reward at any point in time implies a dynamic reassessment of options, even after an initial decision has been made. To test this, we performed a decision-making task in which human participants were instructed to execute a reaching movement from an origin to a rectangular target to attain a reward. Reward depended on arrival precision and on the specific distribution of reward presented along the target. On a third of trials, we changed the initial reward distribution post-movement onset. Our results indicated that participants frequently change their initially selected movements when a change is associated with an increase in reward. This process occurs more quickly than overall average reaction times. Finally, changes in movement are not only dependent on reward but also on the current state of the motor apparatus.
zero in the center of the rectangle, maximum at the right and/or left rectangle vertices and increasing linearly in between. To test our hypothesis, we changed the original distribution at different times after movement onset on approximately one third of trials. Our results show that participants mainly altered their initially selected movement when, according to the new distribution, a better prospect was offered along a path different from their original choice. Furthermore, changes of movement were more frequent for slow movements and occurred on average more quickly than the overall reaction time. In summary, this supports the theory that the system simultaneously considers several motor actions post decision-making and strongly suggests the existence of an interaction between motor and reward systems.
Results
To test our hypothesis, 15 participants performed a decision-making task. The aim of the task was to maximize visual reward by making planar and highly precise reaching movements from an origin cue to a position along the length of a rectangular target (Fig. 1C). We explained to each participant that the amount of reward obtained would depend on their choice and reaching precision as well as on the distribution of visual reward (DoVR) presented on each trial. The DoVR was briefly shown at trial start and could be one of three bimodal distributions: 3-3 (even reward on each side), 1-5 (more reward on the right), or 5-1 (more reward on the left); see Fig. 1C. There were two possible trial types: baseline (2/3 of all trials, Fig. 1A) or change of movement trials (CoM; 1/3 of all trials, Fig. 1B), which were randomly interspersed. The only difference between the two types was that a second DoVR, replacing the initial one, was shown sometime after movement onset on CoM trials (see "Methods" section).
Baseline movements.
A set of typical trajectories for baseline trials is shown in Fig. 2A (Participant #2). In this figure, we show trajectories from the origin cue to a position along the wide side of the rectangular target, for all three baseline DoVRs shown in Fig. 1C (3-3; 1-5; 5-1). Consistent with instruction, the trajectories favor the side of the rectangle that offers the largest reward in the 5-1 and 1-5 distributions, and evenly favor both sides in the case of the 3-3 distribution. These observations are also reinforced by the distribution of arrival positions and related rewards shown in Fig. 2B for three participants (P5, P6, and P12). Tangential velocity profiles are shown in Fig. 2A for all three baseline cases, aligned on movement onset. The profiles exhibit a fast rise to peak and a longer deceleration phase until target arrival, consistent with the need for a slower, more controlled movement during the interval immediately preceding target arrival (and subsequent reward delivery). This notion is reinforced by a positive effect of value on the overall movement time (MT; F(15,1) = 6.95; p = 0.018) and on deceleration time (TTPD; F(15,1) = 5.04; p = 0.043); group average regression coefficients are shown in Fig. 2C. This suggests the dynamics of a speed-accuracy trade-off, in which the subject's primary drive for reward is counterbalanced by the concern for accurate aiming 15,19,20 . Also, Fig. 2E shows scatter plots of the mean and standard error of the TTPD and MT for each individual subject, color-coded as a function of the amount of reward aimed for and the tracking apparatus (see "Methods" section).

Figure 1. (A,B) Time-course of baseline and change of movement (CoM) trials. Both trial types started with the presentation of a blank screen during 500 ms. Next, a circular, pale blue origin cue (1 cm diameter circle) was presented on the bottom right of the screen. One cycle (~ 10 ms) after the end-point entered the origin cue, the cue changed to green and the rectangular target (10 cm long, 1 cm wide) was presented on the top left of the screen, 15 cm away from the center of the origin cue and rotated 135°. Simultaneously, the distribution of visual reward was presented in the form of two right-angled triangles, centered in the middle of the length of the rectangle and peaking on either side. One of three possible distributions was presented: 1-5, 3-3 or 5-1. The GO signal was given 100 ms after the presentation of the distribution, by turning the origin cue white (the background color). On CoM trials, some time post-movement onset, a second distribution of visual reward was presented for a 250 ms duration. Upon arrival at the target, the rectangle color changed from blue to green to signal correct entry. A red, horizontal bar provided visual feedback related to arrival precision; its length was proportional to greater reward (and therefore increased precision). (C) Geometrical arrangements associated with the distributions of visual reward used in this experiment: each DoVR always appeared on top of the rectangular target. One of three possible distributions was presented: 3-3, 1-5, 5-1, shown from left to right respectively.

Change of movement.
Figure 3A shows a set of trajectories during change of movement (CoM) trials, as a function of their initial/final distributions of reward (3-3/1-5; 3-3/5-1; 1-5/3-3; 5-1/3-3; 1-5/5-1; 5-1/1-5). The trajectories confirm that subjects followed the instructions and that their goal was to gain reward, given that target arrival position was most frequently close to the largest reward. However, we identified two major behavioral strategies to attain the desired arrival position. First, similar to baseline trials, the most popular strategy (14/15 participants) consisted of an initial reaching movement towards the side associated with the largest reward, and then altering that ongoing movement if, after the appearance of the second DoVR, the other side now offered a better reward. The second strategy (1/15 participants), for DoVRs where there was a strong imbalance (e.g., 1-5/5-1), consisted of initiating a trajectory towards the least appealing side and changing the motor path only if the second DoVR confirmed the lesser reward. In a way, the first strategy assumes that the initial distribution will not change (there will be no second DoVR), while the second one hopes for a change as the movement progresses. Figure 3B shows a set of typical tangential and radial velocities for all six cases of CoM trials. The velocities are aligned at the time of CoM (vertical black line), which effectively partitions the movement into two distinct phases: first, a ballistic phase, tangential from the origin cue, and second, a radial phase towards the opposite target side.
Changes of trajectory during ongoing movements. We hypothesized that participants would adjust their reaching movement whenever an alternative option offered some gain in reward with respect to the original choice. Consistent with this, an observation of the trajectories (Fig. 3A) indicated that participants most frequently changed their movement when the second DoVR revealed a better prospect at the other side of the target. Furthermore, to test the potential influence of time and velocity on CoMs, we classified trials as Early/Late and Slow/Fast by performing median splits on the distribution of presentation times of the second DoVR and on the distribution of peak velocities preceding the CoM, respectively. Next, we fitted a binomial distribution to the proportion of CoMs experimentally observed for each of the 6 Gain × 2 Times × 2 Velocities cases at the single-participant level, obtaining p-parameter values for separate analyses of Gains, Times and Velocities (see "Methods" section).
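A minimal sketch of the median-split labeling just described (T2 and PV below are placeholder per-trial vectors, not the study's data):

```matlab
% Label trials Early/Late and Slow/Fast via per-participant median splits.
T2 = [0.06 0.09 0.15 0.07 0.12]';   % second-DoVR presentation times (s)
PV = [0.31 0.52 0.44 0.27 0.60]';   % peak velocities before the CoM (m/s)
isLate = T2 > median(T2);           % 0 = Early, 1 = Late
isFast = PV > median(PV);           % 0 = Slow, 1 = Fast
```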
To analyze the dependence of the Probability of a Change of Movement (PCoM) on the experimental conditions, we first calculated the binomial p-parameter for all six CoM trial cases and participants (see distribution on Fig. 4A). We also obtained this parameter for each combination of the three experimental factors related to CoM trials: reward Gain (G) associated with the CoM, the time (Early/Late) of presentation of the second DoVR, and the peak velocity (Slow/Fast) prior to the tCoM. Furthermore, to assess statistical dependence, we performed a full GLM of the p-parameter as a function of each of the factors and their covariates within-subjects, including a binary variable for group that signaled the movement tracking method (Computer Mouse/Optitrack) for each subject (see "Methods" section).
The grand average regression coefficients across participants are shown in Fig. 4B. Results from the F-tests show two main group effects on PCoM: a strong positive effect of Gain (F(1,15) = 120.32, p = 1.37E−8*), and a negative effect of Velocity, consistent with CoMs being more frequent for slower movements.

The time of change of movement (tCoM). We also calculated the time of CoM (tCoM) for every trial on which a CoM took place. The tCoM is defined as the time interval between the presentation onset of the second DoVR and the hard bend of the trajectory, indicating a change of movement towards the side of the rectangle opposite the initial choice. Consistent with the participants' intent to maximize reward through precise arrival at the target, subjects displayed increased MTs and TTPDs as a function of increasing reward on baseline trials. We hypothesized that the reward to be gained by changing target side and the time of presentation of the second DoVR would influence the subjects' urgency to adjust their motor trajectory and consequently their tCoM. To assess this, we performed a full GLM on each participant's tCoM as a function of three factors: the gain (G) associated with the CoM, the time of presentation of the second DoVR, and the peak tangential velocity (V). Figure 5A shows the grand average GLM regression coefficients across participants. A subsequent F-test on each coefficient across subjects yielded a group effect of Gain on tCoM (F(1,15) = 7.30, p = 0.016*). In other words, the larger the Gain, the later the CoM.

We further reasoned that CoMs may depend on the current state of the motor system. Thus, we expected the kinematic markers to be different prior to the presentation of the second DoVR on CoM trials as compared to baseline trials. To test this hypothesis, we ran a full GLM on the kinematic markers during the pre-CoM interval (Peak Acceleration, Peak Velocity) as a function of the initially aimed-at visual reward (V) based on the first DoVR, its time (T) of presentation, a binary variable indicating a CoM, and its covariates (Fig. 6A,B). Figure 6A,B shows that both the PA and the PV were significantly smaller on trials where a change of trajectory occurred (PA: p = 0.021; PV: p = 0.031). Indeed, although the driving force underlying a CoM is a change in the location of the reward, our results would suggest that the initial state of the motor system, as characterized by the peak velocity and peak acceleration, is significantly different between CoM and non-CoM trials.
Finally, to provide a quantitative characterization of the relationship between the kinematic markers and CoMs, we performed a regression of kinematic markers, pre- and post-CoM (see "Methods" section), as a function of Gain (G), the presentation time of the second DoVR (T), and a binary variable CoM. The results are shown in Fig. 7A,C,D. In conclusion, these results strongly suggest that, although the trajectory bend occurs ~ 300 ms after the second DoVR presentation (Fig. 5), on trials where CoMs occur more frequently, movements tend to be longer and less energetic. Finally, the gain to be obtained by altering the ongoing movement is the major cause underlying CoMs.
Discussion
The goal of this study was to investigate how changes in visual reward influence decision-making once commitment to an option has been made and the movement response is ongoing. Although previous evidence indicates that activity in the pre-motor cortex may encode several actions simultaneously 1,3,22 , reasonable doubt remains as to how that encoding extends to the execution phase, after a specific option has been selected and a movement is in progress. Here, we focused on decisions between motor trajectories where the option offering the greatest reward is most often selected, while equalizing all other factors. In this context, we hypothesized that a change in the distribution of reward at any point in time should dynamically adjust the desirability of each option, putatively resulting in a change of movement trajectory. To test this hypothesis, we performed an experiment in which human participants were instructed to freely select a reaching path trajectory from an origin to a wide rectangular target. The amount of reward attained was contingent upon the distribution of reward along the rectangle side as well as the end-point position upon arrival at the target. Reward distributions were altered, post-movement onset, on one third of all trials. Our results show that participants were most likely to alter their initially selected movement, even while that movement was ongoing, when the prospect of reward along a different path was better. Furthermore, changes of movement were more frequent when the initial movement was slower, and required, on average, a duration shorter than the reaction time. Finally, the short latency of CoMs, together with the fact that the early time-to-peak-acceleration exhibited significant differences between CoM and non-CoM trials, strongly suggests that CoMs greatly depend on the current state of the motor apparatus.
Baseline effect of visual reward. First, we examined the influence of visual reward on kinematic parameters during baseline trials. Several studies have shown that larger rewards tend to increase movement vigor 23,24 , energy 25 , and frequency 26 . In contrast, our results were consistent with a concern for precision, expressed by an increase in the overall movement time and duration of the deceleration phase (Fig. 2D). It is important to keep in mind that reward in this task was contingent on precision: responses hitting the extremes of the rectangle length were awarded a close to maximum reward, while those that missed the rectangle, received no reward. Thus, being conservative would be adaptive when there is a large reward at stake 27 .
Gain and noise to change your motor plan. This study aimed at evaluating how online changes in reward distribution affect decision-making during the execution phase. Our results show that PCoM increased when the second DoVR resulted in a larger reward on the opposite side to the initial DoVR and when the initial movement was slow. Remarkably, PCoM in the absence of Gain occurs on 10-15% of all trials (Fig. 4A), implying that although most decisions aim at the largest reward prospect, subjects sometimes opt for lesser gain. Although this does not invalidate the main principle of seeking reward, it may be interpreted as an effect of neural noise, interfering with the commitment for a specific action in the presence of multiple options 3 . In a similar fashion, tCoM exhibits the same increasing sensitivity to Gain as PCoM, a mild decrease with respect to the time of second DoVR presentation, and is insensitive to velocity. Consistent with the same hypothesis of neural noise, the mean tCoM is shorter when the Gain is zero than when the Gain is large (Fig. 5). This would imply that CoMs, whenever there is no reward to gain, are either made in the absence of proper processing of reward or guided by the concern of losing the reward offered by the alternative side 28,29 , and are biased by neural noise. This is also consistent with the fact that CoMs often occur close to the presentation of the second DoVR, rendering a hypothetical pre-frontal analysis of the second distribution unlikely. Moreover, this would be consistent with the fact that changing your movement when there is nothing to gain is counterproductive. Finally, tCoM occurs sooner when the presentation of the second DoVR occurs later (Fig. 5C), suggesting an increased urgency for change.
Visual reward and ongoing behavior.
Our analyses have also shown that the way CoMs occur during reaching movements is consistent with an interplay between two sequential movements: the first between the onset of movement and the offset of tangential velocity, and the second from the onset of radial velocity to the final offset. Importantly, the influence of the first DoVR on movement is constrained to a longer deceleration towards the target, consistent with a concern for precision and maximizing reward. By contrast, the second DoVR exerts a much broader influence on the specifics of movement kinematics, extending overall movement duration (MT, TTPV, TDEC, tCoM) and weakening movement intensity (PA, PV, PD), before and after the CoM (Figs. 6 and 7), shaping both the acceleration and deceleration phases and the way in which the movement is executed. Furthermore, our analyses of kinematics have also shown that, as early as the first peak acceleration, the movement exhibits significant differences between those trials where a CoM occurs and those where there is no CoM. This implies that, although the hard bend in the trajectory occurs ~ 300 ms after the presentation of the second DoVR, the conditions necessary for the CoM to take place are already present around the initial PA (mean = 107 ms, std = 63 ms), shortly after the second DoVR has been presented. This is consistent with previous studies examining unpredictable target perturbations 30 , and, together with the fact that PCoM is sensitive to movement velocity, strongly suggests that the state of the motor apparatus exerts a significant effect on PCoM. These very short latencies are also consistent with the extension of the affordance competition hypothesis to the execution phase 2,3,31 .
Conclusion
The results from the analyses on PCoM and tCoM support the notion that visual reward exerts an influence not only on the choice itself but also on the way in which the movement is executed. These results also provide evidence for the role of goal-directed behavior when planning and executing motor decisions 2,32 , in line with previous studies. Participants changed their minds and adapted their trajectories based on reward, and these adjustments were enacted post-movement onset. Furthermore, reward exerted an influence on online feedback processes in several ways. First, there was a modulation of the time it takes to reprocess a new reward and change path trajectory (tCoM) as a function of the interplay between the currently selected path trajectory and reward, and the time of DoVR change. Thus, we change our mind if given sufficient time, and if the reward associated with a second option outweighs the current one. Second, in the context of voluntary movements, the motor system takes into consideration not only a variety of environmental factors and intrinsic biomechanical and external costs 3,7,33 , but also the perceived (cognitive) reward of the target, supporting the notion that decision-making models should factor in implementations of cost/benefit trade-offs. Third, the fact that changes of mind occur on average more quickly (~ 300 ms) than the RT prior to movement onset (~ 400 ms) supports the hypothesis that several movements may be planned in parallel, given that these adjustments must be made in a relatively small time-frame and require a rapid response. Fourth, feedback corrections appear to share the sophistication of the motor system for planning and executing motor actions; if the new distribution of visual reward shows that the alternative motor option offers a larger reward and there is enough time to adjust, then we are likely to change our mind.
Limitations and future directions. This study focused on the influence of visual reward on decision-making between reaching movements during ongoing movements. Specifically, we focused on the subjective perception of reward that results from performing a movement as a function of a visual (non-monetary) reward. Under these conditions, our results yielded a tCoM twice as long as the RTs recorded for decisions where reward was absent and no monetary rewards were given, therefore suggesting the involvement of the reward system during deliberation. At a more methodological level, we acknowledge the potential influence of the reward distribution discontinuities at the target sides on the choice of movement parameters. In a preliminary fashion, we controlled for this by analyzing the target arrival distributions, showing that the distributions were centered off the edge of the rectangle (Fig. 2B). This shows that the participants were concerned with gaining reward from the distribution presented, and that the concern for failure exerts a symmetrical influence on the arrival points. This symmetry is ensured by design, as the geometrical arrangement of the target with respect to the origin has been designed to equalize motor costs and potential risks between target sides. This makes its influence on motor choices a minor concern.
Methods
Participants. Fifteen right-handed individuals (5 M and 10 F; mean age = 24.4 years, SD = 5.8) participated in this research study. Nine other participants served as pilots to test the experimental setup, the recording process, and the custom scripts controlling the task flow; their data were not considered in further analyses. All participants had normal or corrected-to-normal vision and hearing and did not suffer from any known neurological disorders. Informed consent was obtained following the guidelines established by the local ethics committee and all participants received monetary compensation (10€/h) for their participation, regardless of completion. The ethics protocol for this study was approved by the Clinical Research Ethics Committee (CEIC-Parc Salut MAR) of the Pompeu Fabra University-Hospital del Mar, with Reference #2015/6085/I. All methods were carried out in accordance with relevant guidelines and regulations.

Experimental setup and task apparatus. Participants were seated in a comfortable chair, facing the experimental table, with their chest approximately 10 cm from the table edge and both lower arms resting on its surface. The table defined the plane where reaching movements were to be performed. On the same table, approximately 60 cm away from the participant's sitting position, we placed a vertically-oriented, 24″ Acer G245HQ computer screen (1920 × 1080). This monitor was connected to an Intel i5 (3.20 GHz, 64-bit OS, 4 GB RAM) computer that ran custom-made scripts which controlled task flow, programmed using OpenFrameworks v.0.9.8 software. The screen was used to show the geometrical arrangements and related stimuli on each trial. A small cross (1 × 1 cm), whose position was synchronized with the planar coordinates of the end-point as it slid along the horizontal plane (table), was used to show the participant's corresponding movement in the vertical plane on the screen. As part of the experiment, subjects had to respond by performing overt movements with their arm along the table plane. Due to operational laboratory constraints, their movements were recorded with two end-point tracking methods: (1) for participants 1-5, with an Optitrak motion tracking system (Optitrak, Inc), which tracked the position of a spherical marker placed on top of the nail of the right-hand index finger as it slid across the table; (2) for participants 6-15, with a light computer mouse (Logitech, Inc) that tracked hand position. The weight of the mouse (~ 0.1 kg) was less than 1% of the total weight of the arm (~ 5 kg). A comparison of both methods indicated that the trajectories and velocity profiles recorded with both methods were virtually identical (see Supplemental Figs. S1 and S2). Subjects were instructed to apply minimal pressure but maintain end-point contact with the table surface at all times. They were permitted to minimally lift their elbow off the table to diminish this effect. The monitor was placed vertically while movements were performed horizontally (along the table plane), with the on-screen cross providing the correspondence between the two planes.

Experimental task. We designed a reaction time task, consisting of 630 trials, performed in a single session. The participant was asked to perform a reaching movement from a circular origin cue (diameter: 1.5 cm) to a wide rectangular target (width: 10 cm; depth: 1 cm), placed 15 cm apart and rotated 135° counter-clockwise (Fig. 1), with the goal of maximizing reward.
We explained to the subject that reward in this task was dependent upon the arrival position relative to the long rectangle side, and upon the distribution of visual reward. The orientation of the target was selected in order to equalize motor costs for movements towards the right and left side of the rectangle, as it coincides with the direction of minimal arm mobility 34 . In other words, movements towards either side of the rectangle implied the same motor cost 31 . Furthermore, since the goal of the task was to assess the influence of reward, at the beginning of all trials, we showed one of three bimodal distributions of visual reward (DoVR): 3-3, 1-5, 5-1. These DoVRs peaked at the right/left edges of the rectangle's long side and decreased towards zero when approaching its center (Fig. 1C). They were also equal to zero off the right/left sides of the rectangle, meaning that reaching movements that missed the target would not yield any reward. The subject was instructed to make a reaching movement from the origin cue to a freely selected position along the long side of the rectangular target while attempting to maximize reward. The reward obtained on each trial was dependent on arrival position along the length of the rectangle as well as the final DoVR. The task contained two types of trials: baseline and change of movement (CoM) trials, which were randomly interspersed. Each participant performed 7 blocks of 90 trials in a single session (~ 1 h 15 min). Each block consisted of 72 baseline trials (24 repetitions of each DoVR) and 18 CoM trials. There were 18 types of CoM trials, as a function of the change of DoVR (3-3/1-5; 3-3/5-1; 1-5/3-3; 1-5/5-1; 5-1/3-3; 5-1/1-5) and the time at which the DoVR occurred, measured from the onset of movement (Early (E; t < 80 ms), Medium (M; 80 ms < t < 140 ms) or Late (L; t > 140 ms ± 30 ms)). Each block contained one trial of each possible CoM type. Trial order was counterbalanced and randomized both within and across blocks. The second DoVR was shown for a duration of 200 ms.
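For concreteness, a hedged sketch of the reward rule described above (the function name, sign conventions and units are our own assumptions, not the authors' code):

```matlab
function r = dovrReward(x, peakL, peakR)
% Reward for arrival position x (cm) along the 10 cm target side, under a
% DoVR with edge peak values peakL/peakR (e.g., 1-5, 3-3, 5-1). Reward peaks
% at the edges (x = -5, +5), falls linearly to zero at the center (x = 0),
% and is zero off the rectangle.
halfLen = 5;                        % half of the 10 cm rectangle length
if abs(x) > halfLen
    r = 0;                          % missed the target: no reward
elseif x < 0
    r = peakL * (-x) / halfLen;     % linear ramp towards the left edge
else
    r = peakR * x / halfLen;        % linear ramp towards the right edge
end
end
```

For example, under a 1-5 distribution, arriving 4 cm right of center yields dovrReward(4, 1, 5) = 4.0.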
Real-time, visual feedback of hand position was provided during the trial by a 1 cm cross shown on the screen, synchronized with the position of the participant's right hand on the experimental table. The time-course of each trial type can be seen in Fig. 1A,B. A baseline trial began when the origin was shown on the screen and the subject entered the cue. Approximately 1 s later, the rectangular target and the initial DoVR were shown (Fig. 1A,B). After a 1 s observation interval, a GO signal was given by making the origin cue disappear. The participant was instructed to perform their selected path trajectory towards the position along the side of the rectangle they deemed most rewarding. If the subject left the origin before the GO signal was given, the trial was invalidated, the experimental arrangement disappeared, and the participant was shown a blank screen where they had to wait until the regular trial duration of 7 s elapsed. Correct target entry resulted in the rectangle turning green. After 500 ms of holding position at the target, the reward associated with the specific end position was shown for a duration of 500 ms. The screen turned white for an interval that equalized the entire trial duration to a fixed overall duration of 7 s. The CoM trials followed the same time-course as the baseline trials, except for the appearance of a second DoVR, which changed 80-200 ms post-movement onset. At the beginning of each block, subjects were verbally reminded that their goal remained to maximize reward and that, on CoM trials, they may have to change their movement to attain that goal. Feedback was provided at the end of each trial in the form of a red horizontal bar, shown for 1000 ms. The length of the bar was proportional to the reward related to their reaching movement. The inter-trial interval was modulated to maintain a fixed 7 s, to prevent participants from increasing their speed to maximize reward. For the following analyses, we discarded trials with a response time greater than 7 s, as well as trials where the subject entered the target rectangle through the sides or top, and/or where the participant left the origin before the GO signal.
Statistical analyses. The probability of a change of movement (PCoM).
We calculated the probability of a change of movement (PCoM; Fig. 4) by counting the times each subject changed their movement on CoM trials, and fitting a binomial distribution to the proportion of changes over the total number of trials with the binofit function provided by MATLAB. A binomial distribution is characterized by a single p-parameter, which in our case captures the PCoM.
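A minimal sketch of this step (placeholder counts, not the study's data):

```matlab
% Per-participant binomial fit of PCoM with MATLAB's binofit.
nChanges = [12 9 15 7 11];     % hypothetical CoMs observed per participant
nTrials  = [42 40 44 39 41];   % hypothetical CoM trials per participant
pCoM = zeros(size(nChanges));
pCI  = zeros(numel(nChanges), 2);
for s = 1:numel(nChanges)
    [pCoM(s), pCI(s, :)] = binofit(nChanges(s), nTrials(s));  % MLE + 95% CI
end
```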
To specifically analyze changes of movement, we operationally defined the notion of prospect gain (G) as the difference in reward between the currently aimed-at rectangle side and its alternative, after the second DoVR onset. In other words, G quantifies the reward to be gained if the trajectory were to be re-directed to the opposite side of the target, versus the reward to be obtained if the subject sticks to the original choice. Furthermore, we also defined a binary variable (Tracking, TR), which indicated the movement tracking device for each participant (0-Optitrak, 1-Mouse). We then calculated the binomial p-parameter for each participant for each possible combination of experimental factors: as a function of G (−4, 0, 4), for early/late presentation times of the second DoVR (T), and for slow/fast movements, according to the peak velocity (PV) of the behavioral response. Both T and PV were classified as Early/Late and Slow/Fast using median splits within the T and PV distributions of each individual participant. We then used a mixed-effects model and fitted a full GLM of the resulting binomial p-parameter against three factors, G, T and PV, and their covariates for each individual subject. We also incorporated the Tracking (TR) binary variable to make a global fit on the entire dataset and to measure the influence of TR on PCoM. Group significance was established by Bonferroni-corrected F-/t-tests on each of the regression coefficients (Fig. 4B; signaled by a *), and a permutation test of the influence of the tracking device on group PCoM, with p < 0.05. For presentation purposes, we fitted a parametric sigmoidal curve to the p-parameter obtained as a function of prospect gain (G) for each individual subject, Eq. (1).
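A hedged sketch of the GLM step (synthetic placeholder data; the real analysis fitted per-subject coefficients within a single global regression, which is not reproduced here):

```matlab
% Full GLM of the binomial p-parameter against G, T, PV and their covariates.
rng(1);
n    = 60;                            % e.g., 5 participants x 12 cells
vals = [-4; 0; 4];
G    = vals(randi(3, n, 1));          % prospect gain
T    = randi([0 1], n, 1);            % 0 = Early, 1 = Late (median split)
PV   = randi([0 1], n, 1);            % 0 = Slow, 1 = Fast (median split)
p    = min(max(0.15 + 0.05*G/4 - 0.08*PV + 0.05*randn(n, 1), 0), 1);

X = [G, T, PV, G.*T, G.*PV, T.*PV];   % factors and pairwise covariates
[b, ~, stats] = glmfit(X, p, 'normal');   % glmfit adds the intercept itself
disp([b, stats.p])                    % coefficients and their p-values
```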
The time of change of movement (tCoM).
To test our hypothesis, the analysis of the temporal dynamics of the CoM was crucial. Thus, we placed specific emphasis on the analysis of the onset of the second DoVR and that of the change of motor trajectory. To this end, we calculated the time interval between the onset of the second DoVR and the hard bend in the trajectory, which coincided with the moment at which tangential and radial velocities were minimal. We tested whether the time of change of movement (tCoM) was dependent on three factors: reward gain (G), the time at which the second DoVR was presented (T), and the peak velocity of the ongoing movement (PV), which we used as proxies of the motor apparatus state during movement. As for the PCoM, we used a mixed-effects model approach by regressing a full GLM on the three aforementioned variables and their covariates, as well as the TR variable indicating the tracking device for that subject. We performed an individual fit per subject within a single global regression, which yielded a set of regression coefficients per subject and a global regression coefficient for the TR variable (0-Optitrak, 1-Mouse). We calculated the average and standard error of the beta regression coefficients for G, T, PV and their covariates (Fig. 5A). Statistical significance was assessed via Bonferroni-corrected F-/t-tests for G, T, PV and covariates (signaled by a *). A permutation test was performed to assess the influence of the TR variable on tCoM. We also calculated the histograms of tCoMs for G = 0 and G = 4 across subjects (Fig. 5D), and plotted the influence of significant effects (Gain and Time) on tCoM at the group level (Fig. 5B,D and E) and for each individual participant (Fig. 5C,F).
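A toy sketch of the bend detection (synthetic bell-shaped speed profiles; the detection rule, searching for the joint speed minimum between the two submovement peaks, is our reading of the definition above):

```matlab
fs = 200;  t = (0:1/fs:1.2)';
vt = exp(-((t - 0.25)/0.12).^2);   % toy tangential speed (first submovement)
vr = exp(-((t - 0.75)/0.12).^2);   % toy radial speed (second submovement)
tDoVR2 = 0.30;                     % assumed second-DoVR onset time (s)

[~, iPk1] = max(vt);               % peak of the initial, ballistic phase
[~, iPk2] = max(vr);               % peak of the re-directed, radial phase
[~, iRel] = min(vt(iPk1:iPk2) + vr(iPk1:iPk2));   % hard bend: both minimal
iBend = iPk1 + iRel - 1;
tCoM  = t(iBend) - tDoVR2;         % here ~0.20 s
```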
Kinematic markers. In addition to PCoM and tCoM, which are fundamental metrics underlying changes in movement, we considered it of interest to explore other metrics underlying CoM such as those found within the movement trajectories, velocities, and accelerations. We used custom-made MATLAB scripts to this end.
To characterize the dynamics of CoM, we also analyzed the sequence of kinematic metrics typical of a reaching movement (Figs. 6 and 7). For each trial, we calculated the following kinematic markers from the tangential velocity: peak acceleration (PA), time-to-peak acceleration (TTPA), peak velocity (PV), time-to-peak velocity (TTPV), peak deceleration (PD), time-to-deceleration (TDEC), and overall movement time (MT). On CoM trials, we also calculated kinematic markers as a function of the radial velocity after the CoM, including: peak radial velocity (PRV), time-to-peak radial velocity (TTPRV), the overall switch movement time from the trajectory hard bend to the movement offset (MTCoM), and the time of deceleration, from the TTPRV to the movement offset (Fig. 3C).
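A self-contained sketch of the tangential-velocity markers (toy trace; the 5%-of-peak onset/offset rule is an assumption, not stated in the paper):

```matlab
fs = 200;  t = (0:1/fs:1.0)';
vt  = exp(-((t - 0.35)/0.15).^2);   % toy bell-shaped tangential speed
acc = gradient(vt, 1/fs);           % tangential acceleration

[PA, iPA] = max(acc);  TTPA = t(iPA);   % peak acceleration and its time
[PV, iPV] = max(vt);   TTPV = t(iPV);   % peak velocity and its time
PD = min(acc);                          % peak deceleration (most negative)

moving = vt > 0.05 * max(vt);           % assumed movement on/offset rule
tOn  = t(find(moving, 1, 'first'));
tOff = t(find(moving, 1, 'last'));
MT   = tOff - tOn;                      % overall movement time
TDEC = tOff - TTPV;                     % deceleration time (TTPV to offset)
```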
Color Doppler ultrasound and contrast-enhanced ultrasound in the diagnosis of lacrimal apparatus tumors
Color Doppler ultrasound and contrast-enhanced ultrasound (CEUS) in the diagnosis of lacrimal apparatus tumors were investigated. In total, 48 patients undergoing preoperative two-dimensional and color Doppler ultrasound and CEUS examinations were included in this study. Conventional ultrasound and CEUS characteristics of the 48 patients pathologically and clinically diagnosed with lacrimal apparatus tumors were retrospectively analyzed. Conventional ultrasound of the 29 cases with pleomorphic adenoma of lacrimal gland showed moderate-hypoechogenic solid masses in the lacrimal gland; CEUS displayed two enhancement modes: high, fast-developed slow-extinct and overall uniform enhancement (20/29, 68.97%) and high, fast-developed slow-extinct, centripetal, uniform or non-uniform enhancement (9/29, 31.03%); after enhancement, the mass edge was clear without changes in size. Conventional ultrasound of the 6 cases with adenoid cystic carcinoma of lacrimal gland showed hypoechogenic solid masses with unclear edge, irregular form, non-uniform echo, and abundant blood flow signals; CEUS displayed high, fast-developed fast-extinct and overall uniform enhancement; after enhancement, the mass edge was unclear and the masses were larger than in two-dimensional ultrasound. Conventional ultrasound of the 10 cases with lacrimal sac cyst showed non-uniform, hypoechogenic masses or cystic-solid mixed masses with clear edge but no blood flow signal; CEUS displayed peripheral circular enhancement and no enhancement inside. Conventional ultrasound of the 3 cases with adenocarcinoma of lacrimal sac showed hypoechogenic solid masses with unclear edge, irregular form, non-uniform echo inside, and abundant blood flow signals in the lacrimal sac; CEUS displayed high, fast-developed fast-extinct and overall uniform enhancement; after enhancement, the masses had irregular shapes and were obviously larger than in two-dimensional ultrasound. CEUS shows the microcirculation of tumors and surrounding tissues. Combining CEUS with two-dimensional and color Doppler ultrasound can improve the preoperative qualitative diagnosis of tumors and provide references for the selection of operation methods and determination of the tumor resection scope.
Introduction
The lacrimal apparatus includes the lacrimal gland and the lacrimal passage; the lacrimal passage includes the lacrimal punctum, ductule, sac and nasolacrimal duct. The lacrimal gland exerts the secretion function, while the lacrimal passage exerts the excretory function. Lacrimal gland masses are rare, and masses in the palpebral part are even rarer than those in the orbital part; however, tumors have the highest incidence among lacrimal gland diseases. Inflammatory lesions are common in lacrimal passage diseases, but malignant tumors are rare. Color Doppler ultrasound applied in ophthalmic diseases can not only clearly show the two-dimensional structures of the eyeballs and surrounding orbital wall tissues, but also display normal and abnormal blood flow conditions. Contrast-enhanced ultrasound (CEUS) is an imaging technique in which an ultrasonic contrast agent containing microbubbles is injected intravenously to enhance the back-scattered blood flow signal in the human body; it can depict the microvascular perfusion of tissues in a real-time and dynamic way, thereby providing more ultrasound diagnostic information. At present, CEUS has been widely applied in the diagnosis of liver tumors. However, the application of CEUS in superficial organ tumors, especially eye tumors, is still in the exploratory stage. Moreover, reports on CEUS applied to lacrimal apparatus tumors are rare. This study investigated the application values of two-dimensional, color Doppler ultrasound and CEUS in the diagnosis of lacrimal apparatus tumors.
Patients and methods
Objects of study. A total of 48 patients pathologically and clinically diagnosed with lacrimal apparatus tumors were included in this study.

The average diameter of the contrast-agent microbubbles was 2.5 µm, and the contrast agent (59 µg) was used to prepare a suspension (SF6 concentration, 5 mg/ml) with 5 ml normal saline. After intravenous bolus injection of the contrast agent, 5 ml normal saline was injected, and patients were continuously observed for 3-5 min. Dynamic images were saved for later analysis. Before the contrast examination, the patients signed an informed consent form. In a supine position, patients closed both eyes gently, and the eyelids were coated with disinfectant coupling agent, followed by axial and non-axial scanning to observe the site, shape, edge, size and echo of the mass; color Doppler ultrasound was used to observe the blood supply and other features of the mass. Then the optimal section of the mass (with the mass and some normal choroid shown in one section as far as possible) was selected, and the contrast agent was injected under the contrast mode; at the same time, the built-in timer of the ultrasonic apparatus was started, and the mass and surrounding tissues were observed in real time for ~3-5 min; the whole process was videotaped.
According to the CEUS features of the 48 masses, the enhancement degree was divided into three types: i) uniform enhancement, uniform filling of contrast agent in the mass; ii) non-uniform enhancement, non-uniform filling of contrast agent in the mass, with a non-filling area visible inside; and iii) no enhancement, no filling of contrast agent in the mass. There were three types of enhancement modes: i) overall enhancement, overall filling of contrast agent in the mass; ii) centripetal enhancement, gradual enhancement of filling of contrast agent in the mass from the peripheral area to the inner area; and iii) circular enhancement, circular enhancement of contrast agent only around the mass. With the filling and extinction times of contrast agent in the normal retina and choroid near the mass as control, the filling and extinction times of contrast agent in the mass were observed: i) rapid enhancement of the mass, named fast-developed: the filling time of contrast agent in the mass is the same as or shorter than that in the surrounding normal retina and choroid; ii) rapid extinction of the mass, named fast-extinct: the extinction time of contrast agent in the mass is shorter than that in the surrounding normal retina and choroid; and iii) slow extinction of the mass, named slow-extinct: the extinction time of contrast agent in the mass is longer than or the same as that in the surrounding normal retina and choroid (1).
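As a toy illustration of these timing criteria (placeholder times, not patient data; 'slow-developed' is our naming for the complementary wash-in case, which the criteria above do not label):

```matlab
% Classify a lesion's wash-in/wash-out against the adjacent normal
% retina/choroid reference; times in seconds from contrast injection.
lesion = struct('fillTime', 11.2, 'washoutTime', 48.0);
ref    = struct('fillTime', 12.0, 'washoutTime', 60.0);

if lesion.fillTime <= ref.fillTime
    washin = 'fast-developed';
else
    washin = 'slow-developed';
end
if lesion.washoutTime < ref.washoutTime
    washout = 'fast-extinct';   % e.g., the adenoid cystic carcinoma pattern
else
    washout = 'slow-extinct';   % e.g., the pleomorphic adenoma pattern
end
fprintf('Lesion pattern: %s, %s\n', washin, washout);
```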
Results
The 48 pathologically diagnosed masses comprised 29 cases of pleomorphic adenoma of lacrimal gland, 6 cases of adenoid cystic carcinoma of lacrimal gland, 10 cases of lacrimal sac cyst, and 3 cases of adenocarcinoma of lacrimal sac (Table I).
Conventional ultrasound of the 29 cases with pleomorphic adenoma of lacrimal gland showed moderate-hypoechogenic solid masses in the lacrimal gland. On CEUS, 20 cases (20/29, 68.97%) had high and overall uniform enhancement of the masses; after enhancement, the mass edge was clearer and regular in shape, and the contrast agent within the mass became extinct slowly; the enhancement mode was 'high, fast-developed slow-extinct and overall uniform enhancement'. Nine cases (9/29, 31.03%) showed gradual enhancement of the masses from the periphery to the center; 6 cases showed uniform enhancement and 3 cases showed non-uniform enhancement, with a non-enhancement area inside; after enhancement, the mass edge was clear and regular in shape, and the contrast agent within the mass became extinct slowly; the enhancement mode was 'high, fast-developed slow-extinct, centripetal, uniform or non-uniform enhancement' (Fig. 1).
Conventional ultrasound of the 6 cases with adenoid cystic carcinoma of lacrimal gland showed hypoechogenic solid masses with unclear edge, irregular form and non-uniform echo inside the mass. Color Doppler ultrasound displayed abundant blood flow signals inside the mass, and all 6 cases had thick and tortuous vasa vasorum in the mass (Fig. 2A). After two-dimensional and color Doppler ultrasound, CEUS was immediately performed for the 6 cases with adenoid cystic carcinoma of lacrimal gland. All 6 cases (6/6, 100%) had strong and overall uniform enhancement of the masses; in the early stage of enhancement, thick vasa vasorum could be seen around and inside the mass; after enhancement, the mass was larger than that in two-dimensional ultrasound, and the contrast agent within the mass became extinct quickly; the enhancement mode was 'high, fast-developed fast-extinct, overall uniform enhancement' (Fig. 2B; Table II).
The ten cases with lacrimal sac cyst had a clear mass edge; 7 cases (7/10, 70%) showed moderate, non-uniform hypoechogenicity with dot-strip strong echoes inside, and 3 cases (3/10, 30%) showed cystic-solid mixed echoes with a wavy shape at the junction of the mass and bone wall. All ten cases (10/10, 100%) had posterior acoustic enhancement behind the mass. Color Doppler ultrasound displayed no blood flow signal inside the mass in all ten cases (10/10, 100%), and 3 cases (3/10, 30%) had dotted blood flow signals around the mass. After two-dimensional and color Doppler ultrasound, CEUS was immediately performed for the 10 cases with lacrimal sac cyst. There was no enhancement in the mass, mild enhancement could be seen around the mass, and the enhancement mode was 'circular enhancement' (Fig. 3A and B; Table III).
Conventional ultrasound of the 3 cases with adenocarcinoma of lacrimal sac showed hypoechogenic solid masses: superficial subcutaneous masses with unclear edge, irregular form, non-uniform echo inside, scattered dotted strong echoes, and incomplete periosteum on the edge. Color Doppler ultrasound displayed abundant blood flow signals inside the mass. After two-dimensional and color Doppler ultrasound, CEUS was immediately performed for the 3 cases with adenocarcinoma of lacrimal sac. The appearance of enhancement in the mass was obviously earlier than that in the surrounding normal retina and choroid, displaying overall uniform high enhancement; after enhancement, the mass was larger than that observed in two-dimensional ultrasound, and the contrast agent within the mass became extinct quickly; the enhancement mode was 'high, fast-developed fast-extinct, overall uniform enhancement' (Fig. 4A and B).
Discussion
The unrestricted growth of malignant tumors depends on internal neovascularization. Ultrasound contrast agent is an intravascular contrast agent, which can display well the distribution of microcirculation inside the tumor and reflect the abundance of neovascularization inside the tumor. Thus, CEUS can be used to evaluate blood perfusion inside the tumor. Moreover, this feature of CEUS also compensates for the shortcomings of traditional two-dimensional and color Doppler ultrasound. Color Doppler flow imaging is sometimes affected by the blood flow direction within the lesion and by the poor sensitivity of the instrument to low-speed blood flow, and false-negative blood flow is also observed in some lesions. Therefore, the absence of a blood flow signal on color Doppler ultrasound does not indicate the absence of blood supply in the mass (2). The application of CEUS imaging makes the ultrasound diagnostic instrument display the microcirculation within the lesion more clearly, providing a new method for the diagnosis of ophthalmic space-occupying lesions (3,4).
Yang et al applied ultrasound contrast agent to the human eye after relevant animal experiments, and confirmed that its application in the eye did not cause local or systemic side-effects and had no significant impact on retinal function and structure (5).
Lacrimal apparatus tumor includes lacrimal gland and passage tumor. The most common lacrimal gland tumor is the primary epithelial lacrimal gland tumor, among which pleomorphic adenoma of lacrimal gland is common in benign tumors, and adenoid cystic carcinoma of lacrimal gland is common in malignant tumors (6).
Pleomorphic adenoma of lacrimal gland is the most common lacrimal gland epithelial tumor, accounting for ~50% of cases. It is a benign tumor composed of epithelial and interstitial components. In the past, pleomorphic adenoma of lacrimal gland was called benign mixed tumor. It occurs frequently in a single eye in adults, and its histopathological components mainly include epithelial cells or interstitial components. It is composed of a large number of tubular structures and cell nests of different shapes formed by differentiated epithelial cells, with scattered transparent myxoid and cartilage-like structures and diversified forms and arrangements of tumor cells (7). Ultrasound shows a round or oval solid mass above the orbit; the majority of masses have a clear edge and dense, uniform internal echo, but a small number have an unclear edge and non-uniform internal echo with scattered calcification; the mass is not compressible and shows a small number of blood flow signals. CEUS shows rapid filling of contrast agent within the mass, most of which show strong overall uniform enhancement, and a few of which show centripetal uniform or non-uniform enhancement; after enhancement reaches the peak, the contrast agent becomes extinct slowly. In this study, it was found that masses containing more epithelial and mucinous tissue showed high and uniform enhancement on CEUS, while masses containing more cartilage-like structures showed high and non-uniform enhancement on CEUS, with a flaky non-enhancement area visible within the mass; the proportion of cartilage-like structure was directly proportional to the extent of the non-enhancement area within the mass. Differential diagnosis of the disease is as follows: i) Inflammatory pseudotumor of lacrimal gland: it often occurs in both eyes, showing eyelid congestion and edema; hormone therapy is effective but the recurrence rate is high. Ultrasound examination shows lacrimal gland enlargement in a flat or amygdaloid shape, which can extend forward or towards the orbital apex, often accompanied by adjacent extraocular muscle thickening. CEUS shows that, after enhancement, the mass edge is unclear, the mass is larger than that observed in two-dimensional ultrasound, and the lesion generally has no enhancement area; ii) Lymphoma of lacrimal gland: orbital lymphoma often occurs in the lacrimal gland. Ultrasound examination shows low and non-uniform echo in the lesion with scattered cord-like strong echoes. Doppler detection shows that the mass has abundant blood flow signals in a dendritic pattern. Spectral Doppler shows a high-speed, high-resistance arterial spectrum, which is an imaging feature of orbital lymphoma and is of significance in the early diagnosis and timely treatment of lymphoma. Moreover, CEUS shows that most of these tumors develop and disappear quickly and show high uniform enhancement; after enhancement, the mass edge is unclear, the mass is larger than that observed in two-dimensional ultrasound, and the posterior edge of the lesion often shows an 'inverted-triangle' shape (1,8).
Adenoid cystic carcinoma of lacrimal gland is the most common malignant epithelial tumor of the lacrimal gland; the mass is solid with no capsule and has a high degree of malignancy. It often invades surrounding tissues, and relapse and metastasis occur easily. This disease is considered the most biologically destructive and unpredictable tumor of the head and neck (8). Ultrasound shows a solid hypoechogenic mass in a spindle shape with unclear edge and irregular form, without a capsule surrounding the mass; the internal echo is not uniform and is slightly attenuated in the rear; abundant blood flow signals and multiple thick vasa vasorum can be detected in the mass (9,10). On CEUS, the contrast agent in the mass fills rapidly, showing high overall uniform enhancement; after enhancement, the mass is enlarged, and the contrast agent becomes extinct quickly after enhancement reaches the peak, showing fast-developed fast-extinct signs, which differ from pleomorphic adenoma and tuberculosis of lacrimal gland (fast-developed slow-extinct). However, the number of cases with such a sign is small, so it cannot be used as a criterion to distinguish benign from malignant lacrimal gland masses, but it may provide references for the diagnosis of malignant tumors. The differential diagnosis of this disease is the same as that of pleomorphic adenoma of lacrimal gland; however, it is hard to distinguish from inflammatory lesions by two-dimensional ultrasound and CEUS, so clinical manifestations, signs and past medical history should be combined for comprehensive analysis.
Stupp et al (11) performed ultrasound examination for 17 patients with lacrimal sac mass, and the results showed that ultrasound could display the lacrimal sac mass well. The disease should be distinguished from frontal and ethmoidal sinus cysts, which are easily misdiagnosed as ophthalmic diseases.
Lacrimal passage tumors are rare, and malignant lacrimal passage tumors are even rarer; the main pathological types are squamous and transitional cell carcinoma, followed by adenocarcinoma. Typical clinical manifestations of lacrimal passage tumors are masses in the inner canthus or lacrimal sac, with only overflow of tears and an unblocked lacrimal passage in the early stage. Clinical diagnosis of this disease is difficult. With the progression of disease, pain, overflow of bloody mucus from the lacrimal punctum, and bulging of the lacrimal sac area will occur; in the late stage, it may lead to epistaxis, proptosis and other complications. Ultrasound of malignant lacrimal passage tumors shows no specificity, displaying only a solid non-uniform mass in the lacrimal sac. In addition, different pathological types of tumors show different strengths of blood flow signal; generally, the blood flow signal of squamous cell carcinoma is low, while poorly-differentiated adenocarcinoma shows more abundant signals. The sample size in this study was small, and sensitivity, specificity and accuracy were not assessed, so the results cannot be used as criteria to distinguish benign from malignant lacrimal apparatus tumors. Therefore, future studies with a larger number of samples are needed to further confirm the conclusions.
In conclusion, traditional two-dimensional and color Doppler ultrasound may be combined with CEUS to determine mass size, edge and shape. This technology may facilitate the identification of tumor extent and improve diagnostic accuracy.
Decentralized Motion Control for Omnidirectional Wheelchair Tracking Error Elimination Using PD-Fuzzy-P and GA-PID Controllers
The last decade has seen a significant research effort directed towards the maneuverability and safety of mobile robots such as smart wheelchairs. The conventional electric wheelchair can be equipped with motorized omnidirectional wheels and several sensors serving as inputs for the controller to achieve smooth, safe, and reliable maneuverability. This work uses a decentralized algorithm to control the motion of omnidirectional wheelchairs. In the body frame of the omnidirectional wheeled wheelchair there are three separate independent components of motion, namely rotational motion, horizontal motion, and vertical motion, which can be controlled separately; so each component can have its own sub-controller with minimum tracking error. The present work aims to enhance the mobility of wheelchair users by utilizing an application to control the motion of their attended/unattended smart wheelchairs, especially in narrow places and at hard detours such as 90° corners and U-turns, thereby improving the quality of life of disabled users by facilitating their wheelchairs' maneuverability. Two artificial-intelligence-based controllers (PD-Fuzzy-P and GA-PID) are designed to optimally enhance the maneuverability of the system. MATLAB software is used to simulate the system and calculate the Mean Error (ME) and Mean Square Error (MSE) for various scenarios of both approaches; the results showed that the PD-Fuzzy-P controller has faster convergence in trajectory tracking than the GA-PID controller. Therefore, the proposed system can find application in many areas, including transporting individuals with locomotor disabilities and geriatric people, as well as automated guided vehicles.
Introduction
A large number of people around the world live with disabilities. In the United States, according to a study published by the Christopher and Dana Reeve Foundation, one person out of fifty is paralyzed [1]. In the United Kingdom, statistical surveys reported 40,000 people with paralysis, increasing by one every eight hours [2]. People diagnosed with paralysis, paraplegia, hemiplegia, spinal cord injuries, and lower-limb muscular disorders live with a degraded quality of life due to the concomitant immobility [3]. Therefore, several mobility aids have been introduced to the market, such as conventional and electric wheelchairs, exoskeletons, orthoses, and scooters, to facilitate everyday activities and, consequently, raise productivity. Electric wheelchairs are preferred over conventional ones because the majority of users cannot apply the physical effort needed to drive a conventional wheelchair. Milenković et al. [4] developed a health-monitoring Android application utilizing a smartphone equipped with built-in sensors to record, capture, and process the physical activities of conventional wheelchair users. In another study, Kouzehgar et al. [22] developed a deep-learning-based crack-detection approach for a modular facade-cleaning robot; a real robot was equipped with an on-board camera to load a live video. The authors in [23] presented an integrated system that provided detailed semantic 3D models of buildings; the system could scan and reconstruct large scenes at a high level of detail, passing through five semantic levels, to generate a detailed semantic model of the building. In another study [24], a robot tower-crane system was developed by studying the feasibility of a laser-technology-based lifting-path tracking system to improve productivity by 9.9%-50%. Huang et al. [25] developed path tracking for two vision-guided tractors of a robotic vehicle so that the two robotic vehicles move along a guide path accurately and smoothly.
Many wheelchair users seek independent self-support; they try to avoid asking others for help in moving their attended/unattended wheelchairs. This research was first motivated by a junior disabled female student at Jordan University of Science and Technology (JUST). She used to store her wheelchair inside a building considerably far from her parking spot, and she usually had to get help from her sophomore sister to move her unattended wheelchair from and to the parking spot almost every day, in addition to the help at home. When we asked her and some other wheelchair users about the features needed in an electric wheelchair, they showed a lot of concern about being self-reliant, in addition to the fact that most electric wheelchairs have conventional wheels, which limit maneuverability, especially when a rotation is needed. A design for a smart wheelchair was developed by a group of researchers at JUST [3]. That design was tested and evaluated by the disabled female student and was one step toward helping users with similar cases. The authors in [3] designed and implemented a wireless motion-control system for conventional electric wheelchairs; moreover, a Wi-Fi module was equipped to provide remote control using an Android mobile application. The system showed easy and effective remote motion control experimentally.
In this paper, a decentralized algorithm for motion control of an omnidirectional wheeled robot is developed to overcome the complexity and difficulty of applying conventional methods to a real system. The method is expressed in the body frame, where the omnidirectional mobile robot has three separate independent components of motion, namely rotational motion, horizontal motion, and vertical motion, which can be controlled separately with independent controllers; this makes it simple to apply and independent of an accurate mathematical model of the controlled object [26]. Two control methods are tested on the three components of motion, and the better one for each component is chosen separately to obtain the best overall result. Thus, this study presents the design, implementation, and control of a uniquely reliable system that controls the motion of an electric wheelchair utilizing omnidirectional wheels. Furthermore, the developed system is designed to be upgradable for potential future versions. The authors believe such a system will be a substantial contribution to the community. This paper is organized as follows: Section 2 introduces the modeling and the decentralized algorithm of the omnidirectional wheelchair. Section 3 presents the trajectory-tracking controller designs, including both the PD-Fuzzy-P and the GA-optimized PID controllers, based on the decentralized algorithm. Section 4 summarizes and discusses the simulation results. The paper is concluded in the final section.
Modeling and Decentralized Algorithm of Omnidirectional Wheelchair
In this section, the nonlinear kinematic and dynamic equations of motion governing the omni-wheel system are derived, and the decentralized motion control is presented, in the following subsections. Details of the derived equations of omnidirectional motion are adopted from [26].
Omnidirectional Wheelchair Modeling
The proposed omnidirectional wheelchair has three omnidirectional wheels spaced 120° from one another and is assumed to have a single rigid-body chassis with a center of geometry denoted by C. Figure 1 shows the configuration of the model, where Oxy represents the global coordinate frame and Cx0y0 represents the body coordinate frame of the wheelchair body.
The distance from each wheel's center to the center of geometry is L, and each of the three wheels has radius r and is driven by a separate motor. The rotation matrix from the body frame to the global frame is [26]:

R(θ) = [ cos θ  −sin θ  0 ;  sin θ  cos θ  0 ;  0  0  1 ]   (1)

Each wheel has a position vector P_Ci (i = 1, 2, 3) with respect to the body frame (Equation (2)). Likewise, the unit vectors D_i (i = 1, 2, 3) that specify the wheels' drive directions with respect to the body frame are given by Equation (3). The wheels' position vectors P_i and their linear velocity vectors v_i, expressed in the global frame, are obtained from Equations (4) and (5).
The angular velocity of wheel i is then obtained by projecting v_i onto the wheel's drive direction and dividing by the wheel radius r (Equation (6)). Substituting v_i from Equation (5) into Equation (6) yields Equation (7). The angular velocity vector of the three wheels is denoted by ω = [ω_1 ω_2 ω_3]^T, and the vector q = [x y θ]^T describes the posture of the wheelchair in the global frame. Rearranging Equation (7) gives the kinematic equation expressed in the global frame (Equation (8)), where q̇ is the velocity vector of the omnidirectional wheelchair in the global frame.

The motion of the omnidirectional wheeled wheelchair in the decentralized algorithm should be presented in the body frame. Figure 2 shows the three separate components of the omnidirectional wheeled wheelchair's motion in the body frame: the rotational motion ω_C, the horizontal motion H, and the vertical motion V. These three components form the velocity vector of the wheelchair with respect to the body frame (Equation (9)). It is easier to transform the system from the global frame to the body frame as q̇_b = R(θ)^(−1) q̇ (Equation (10)), where R(θ) is the rotation matrix of Equation (1). Rearranging Equations (8) and (10) yields the wheel velocities in terms of the body-frame components (Equation (11)).
Decentralized Algorithm of Omnidirectional Wheelchair for Motion Control
The omnidirectional wheeled wheelchair has more powerful movement than a conventional wheeled wheelchair. It is a holonomic robot: at any given instant it has three controllable degrees of freedom (DOF) in its motion plane. The decentralized algorithm for motion control of the omnidirectional wheelchair is used in the body frame to control the three DOF separately, which makes it simple to apply and independent of an accurate mathematical model of the controlled object. Figure 3 shows the four moving manners of the omnidirectional wheeled wheelchair: rotational motion (a), horizontal motion (b), vertical motion (c), and translational motion (d). Details of the decentralized algorithm can be found in Reference [26].

For the first case, a pure rotational motion is attained when ω_C ≠ 0 and H = V = 0. Similarly, a pure horizontal motion is attained when H ≠ 0 and ω_C = V = 0, and a pure vertical motion is attained when V ≠ 0 and ω_C = H = 0. Figure 3d shows the wheelchair moving in a combined manner (horizontal plus vertical motion), called pure translation, where the motion direction is specified by the sum of the V and H vectors; this motion is attained when V ≠ 0, H ≠ 0 and ω_C = 0.
The moving translation angle is given by Equation (12), the direction of the resultant of the H and V components. The last case, in which none of the three components of the omnidirectional wheeled wheelchair's motion is zero, is called simultaneous translation-rotation and can also be obtained using Equation (12). The motion of the omnidirectional wheeled wheelchair obtained using the decentralized algorithm is more flexible and effective than that of a conventional wheeled wheelchair.
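A small sketch of how the four moving manners can be distinguished from the body-frame command follows; the threshold eps and the atan2-based translation angle are illustrative choices, since the paper's Equation (12) is not reproduced in the extracted text.

import numpy as np

# Classify a body-frame command (H, V, w_C) into the moving manners of
# Figure 3. The translation angle is taken as the direction of the
# resultant of H and V; atan2 avoids division by zero when H == 0.
def motion_mode(H, V, w_C, eps=1e-9):
    h, v, w = abs(H) > eps, abs(V) > eps, abs(w_C) > eps
    if w and not (h or v):
        return "pure rotation"
    if h and not (v or w):
        return "pure horizontal"
    if v and not (h or w):
        return "pure vertical"
    if (h or v) and not w:
        angle = np.degrees(np.arctan2(V, H))   # moving translation angle
        return "pure translation at %.1f deg" % angle
    return "simultaneous translation-rotation"

print(motion_mode(1.0, 1.0, 0.0))   # -> pure translation at 45.0 deg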
Trajectory Tracking Controllers Design for the Omnidirectional Wheelchair
In this section, two types of controllers are designed for the omnidirectional wheeled wheelchair, namely, PD-Fuzzy-P and GA-PID controllers. Each of them has three sub-controllers for the three motion components (ω_C, H and V) of the wheelchair, instead of a single multi-input multi-output (MIMO) controller. A comparison between the two approaches is then made for each component in order to choose the best approach for each motion. In this paper, the decentralized algorithm is used to control the motion of the wheelchair under consideration, so each input can be controlled independently. The tracking error in the body coordinate frame can be represented as e_b = [e_H e_V e_θ]^T. Many industrial plants are very complicated for different reasons, including time delays, high order, and nonlinearities; consequently, it is not easy to tune PID controller gains accurately. To overcome these complexities, modified conventional PID controllers have been produced to improve the conventional methods of tuning the PID controller parameters [27,28]. Thus, two approaches are presented in this paper to tune the controller gains: a PD-fuzzy scheme that tunes a proportional controller, and a Genetic Algorithm (GA) that tunes a PID controller. These two controllers are developed in the following subsections.
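As a minimal sketch of the decentralized error computation (assuming the body-frame error is the global posture error rotated by the inverse of the rotation matrix, consistent with Equation (10); this is an illustration, not the authors' code):

import numpy as np

# Rotate the global-frame posture error into the body frame, where its
# components (e_H, e_V, e_theta) feed the three independent sub-controllers.
def body_frame_error(q_ref, q):
    """e_b = R(theta)^{-1} (q_ref - q), with q = [x, y, theta]."""
    theta = q[2]
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s, 0.0],
                  [s,  c, 0.0],
                  [0.0, 0.0, 1.0]])
    e_global = np.asarray(q_ref, float) - np.asarray(q, float)
    return R.T @ e_global        # R is orthogonal, so R^{-1} = R^T

e_H, e_V, e_theta = body_frame_error([1.0, 0.0, 0.0], [0.0, 0.0, np.deg2rad(30)])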
PD Fuzzy-P Controller (PD Fuzzy for Proportional Adaptation)
The PID controller is well known and widely used in industry, but it does not achieve reasonable performance over a wide range of process conditions. Fuzzy logic is a control technique based on if-then rules; its concept was proposed by Zadeh [29]. It is insensitive to parametric uncertainty and to load and parameter fluctuations, and it can handle inaccurate data. Adapting PID gains using fuzzy logic (Fuzzy-PID) has been used in the literature [30,31] because of its clear advantage over conventional PID. Thus, in this study a proportional-derivative fuzzy (PD-fuzzy) controller is used to tune a proportional controller for trajectory tracking of the omnidirectional wheeled wheelchair. For this system, three separate PD-fuzzy sub-controllers are united to produce the overall controller. As shown in Figure 4, the first sub-controller develops the gain of the proportional controller of the horizontal motion: its inputs are the horizontal error e_H and its derivative, and its output is the change in the horizontal proportional gain dKp. The vertical and rotational motion controllers are built similarly.

The fuzzy rules update the proportional controller gain based on the error and its derivative at each step. Each sub-controller has two inputs and one output, all normalized to the range −1 to 1. Figure 5 shows the membership function plots for the horizontal and vertical motions, and Figure 6 shows the membership function plot for the rotational motion. Table 1 shows the fuzzy rules used for the horizontal and vertical controllers, and Table 2 shows the rules of the rotational controller, in if-then form. For example: if E is NL and DE is NL, then dKp is PL, where NL represents the Negative Large range, NM Negative Medium, NS Negative Small, Z Zero, PS Positive Small, PM Positive Medium, and PL Positive Large.
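The sketch below is a deliberately simplified stand-in for the PD-fuzzy gain adapter: it quantizes the normalized error and error derivative onto the seven linguistic levels (NL ... PL) and reads a rule table of the kind shown in Tables 1-2. A real Mamdani controller would use the overlapping membership functions of Figures 5-6 and defuzzification; the anti-diagonal table here is illustrative and is only guaranteed to honor the one rule quoted above (E = NL, DE = NL -> dKp = PL).

import numpy as np

# Quantized stand-in for the PD-fuzzy gain adapter (not the authors' design).
LEVELS = np.linspace(-1.0, 1.0, 7)           # NL NM NS Z PS PM PL
# Illustrative anti-diagonal rule table: large negative error and error
# derivative -> large positive gain change (row 0, col 0 evaluates to PL).
RULE_TABLE = np.array([[-(i + j - 6) / 6.0 for j in range(7)]
                       for i in range(7)]).clip(-1.0, 1.0)

def d_kp(error, d_error):
    """Gain increment dKp from (error, d_error), both normalized to [-1, 1]."""
    i = int(np.argmin(np.abs(LEVELS - np.clip(error, -1, 1))))
    j = int(np.argmin(np.abs(LEVELS - np.clip(d_error, -1, 1))))
    return RULE_TABLE[i, j]

kp = 1.0
kp += 0.1 * d_kp(-1.0, -1.0)   # E = NL, DE = NL -> dKp = PL (gain increases)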
GA-PID Controller
The Genetic Algorithm (GA) is an artificial intelligence method belonging to the evolutionary algorithms; it is a stochastic global search method that mimics the process of natural evolution. It efficiently determines global minima/maxima of linear or nonlinear problems by relying on bio-inspired operators such as mutation, crossover and selection. The function to be minimized is called the objective function and contains n variables. First, an initial population of individual solutions (each a vector of n variables) is generated; the next generation is then created from the current population through three principles, listed as follows:
1. Selection: chooses the individuals (parents) that produce the population of the next generation.
2. Crossover: creates new children for the next generation by merging two parents.
3. Mutation: forms new children by applying random modifications to individuals.
Each individual of the new generation is evaluated by the fitness function; the individuals that attain the best fitness values have a higher chance to survive. The old generation passes away, producing a new generation of the same size. In this study, GA is used to tune the PID controller gains (Kp, Ki and Kd), which are the three variables of the objective function; the objective function is formulated in terms of the tracking error of the three motion components accumulated over the trajectory, where n is the number of wheelchair steps.
The GA convergence criterion is a user-defined specification (e.g., a solution fitness threshold or a maximum number of generations). The parameters used in this study are specified in Table 3.

Table 3. Parameters of the GA used for the GA-PID controller.
Characteristics        Items
Population type        Double vector
Population size        50
Selection              SUS
Crossover fraction     0.8

Figure 7 shows the conventional PID sub-controllers with their parameters optimized by GA. The GA concept is easy to understand; it handles multi-objective optimization and performs well in noisy environments. However, it has no guarantee of finding global minima.
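A toy sketch of the GA-PID idea follows (Python): each individual is a double vector [Kp, Ki, Kd], the fitness accumulates the squared tracking error of a simple first-order plant over n steps, and the population size (50) and crossover fraction (0.8) mirror Table 3. The plant, the generation count, and the selection scheme (plain truncation here, rather than the paper's SUS) are assumptions for illustration, not the authors' implementation.

import numpy as np

rng = np.random.default_rng(0)

def fitness(gains, n=200, dt=0.01):
    """Accumulated squared error of a PID loop around a first-order plant
    dy/dt = -y + u tracking a unit step (toy stand-in for the wheelchair
    simulation)."""
    kp, ki, kd = gains
    y = integ = prev_e = 0.0
    cost = 0.0
    for _ in range(n):
        e = 1.0 - y
        integ += e * dt
        u = kp * e + ki * integ + kd * (e - prev_e) / dt
        prev_e = e
        y += dt * (-y + u)
        if abs(y) > 1e6:               # diverged: penalize heavily
            return np.inf
        cost += e * e
    return cost

pop = rng.uniform(0.0, 10.0, size=(50, 3))              # population size 50
for _ in range(40):                                     # generations
    scores = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argsort(scores)[:10]]                # selection (truncation)
    children = []
    while len(children) < 40:
        a, b = elite[rng.integers(10, size=2)]
        child = np.where(rng.random(3) < 0.8, a, b)     # crossover fraction 0.8
        child = np.abs(child + rng.normal(0.0, 0.1, 3)) # mutation, gains >= 0
        children.append(child)
    pop = np.vstack([elite, children])
print("best gains [Kp, Ki, Kd]:", pop[0])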
Results and Discussion
This section presents the performance of the two approaches on different paths: squared, circular, and rose-shaped. The simulation tests of the design and implementation of the tuned-PID-based path tracking are discussed. The omnidirectional wheelchair design consists of three wheels spaced 120° from one another. MATLAB software is used for programming the system.
The wheelchair initial conditions at the starting point are x = 0 m, y = 0 m, θ = 30 deg. Figure 8 shows the simulated path without controller and the system response using the PD-Fuzzy-P and GA-PID approaches for the squared, circular, and rose paths in the global coordinate frame, where each side of the square is 1.5 m, the radius of the circle is 1 m, and the rose has 4 petals. The performance shows that the omnidirectional wheelchair can overcome the complexity of maneuvering sharp curves, which improves the quality of life of disabled users by facilitating their wheelchairs' maneuverability, especially in the complex regions mentioned before. In the case of the square-shaped movement shown in Figure 8a, the zoomed-in view of the square's corner shows how easily the omnidirectional wheelchair can track a 90° corner with zero-radius turning. Likewise for the circular and rose-shaped paths: the zoomed-in view of the rose-shaped path's curve in Figure 8c shows how easily the omnidirectional wheelchair can track a U-turn with quite a small error. The PD-Fuzzy-P and GA-PID controllers' parameters are tabulated in Table 4; both methods effectively yield a proportional controller.
The parameters of the GA used for the GA-PID controller were given in Table 3, and the parameters of the fuzzy controller were given in Figure 6. Figures 9-11 show the tracking errors of the omniwheeled wheelchair in the rose-shaped path. Figure 9 shows the horizontal motion tracking error without controller, with PD-Fuzzy-P and with GA-PID; the overall error of both approaches is about zero. PD-Fuzzy-P has the best performance, with error within ±1 × 10^−16 m around zero, while the GA-PID error is within ±2 × 10^−16 m around zero. Figure 10 shows the vertical tracking error without controller, with PD-Fuzzy-P and with GA-PID; again, the overall error of both approaches is about zero. PD-Fuzzy-P has the best performance, with error within ±6 × 10^−17 m around zero, while the GA-PID error is within ±2 × 10^−16 m around zero. Figure 11 shows the rotational tracking error without controller, with PD-Fuzzy-P and with GA-PID, where the overall error of both approaches is zero.
The angular velocities of the three wheels in the rose-shaped path are shown in Figure 12, where the angular velocities obtained from the two approaches are nearly identical to the desired angular velocities.
The results show that the PD-Fuzzy-P controller has the best performance. For further quantification, Table 5 gives the Mean Error (ME) and the Mean Square Error (MSE) of the system, obtained as follows:

ME = (1/n) Σ_{i=1..n} NE_i,   MSE = (1/n) Σ_{i=1..n} NE_i²,

where the norm of the error (NE) at step i is the norm of the body-frame tracking-error vector, NE_i = ‖e_b(i)‖.
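A short sketch of how these metrics can be computed (assuming, as above, that NE is the Euclidean norm of the per-step body-frame error; the exact norm used by the authors is not shown in the extracted text):

import numpy as np

# ME and MSE from a history of body-frame tracking errors.
def error_metrics(errors):
    """errors: (n, 3) array of [e_H, e_V, e_theta] per step."""
    ne = np.linalg.norm(errors, axis=1)   # norm of the error (NE) per step
    return ne.mean(), (ne ** 2).mean()    # (ME, MSE)

me, mse = error_metrics(np.array([[1e-16, 0.0, 0.0],
                                  [0.0, 2e-16, 0.0]]))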
Conclusions
This work used the decentralized algorithm to control the motion of omnidirectional wheelchairs. In the body frame of the omnidirectional wheeled wheelchair there are three separated independent components of motion including rotational motion, horizontal motion, and vertical motion, which can be controlled separately. So, each component can have its different sub-controller with a minimum tracking error. As shown in the results, the performance of this algorithm itself without controller was excellent. Two approaches of artificial intelligent-based controllers (PD-Fuzzy-P and GA-PID controllers) were designed to optimally enhance the maneuverability of the system. MATLAB software was used to simulate the system and calculate the Mean Error (ME) and Mean Square Error (MSE) for various scenarios of motion in both approaches.
The proposed system was tested in different trajectory scenarios, including squared, circular and rose trajectories. The results showed that the PD-fuzzy proportional controller has faster convergence in trajectory tracking than the GA-PID controller in all scenarios. In addition, the omnidirectional robot showed the ability to overcome the complexity of sharp-curve maneuverability, such as a 90° corner in the square shape and a U-turn in the rose-shaped path, with infinitesimal error under both approaches, the PD-Fuzzy-P controller giving the best overall performance.
THE COMPLEX CONCEPT OF PLAGIARISM: UNDERGRADUATE AND POSTGRADUATE STUDENT PERSPECTIVES
The prevalence of plagiarism in university students' academic writing is well documented. Its complex and multifaceted nature has made it difficult to reduce or manage. The literature reveals a lack of significant understanding of plagiarism and related concepts to be due to a poor or an absence of education, and it advocates for extensive and explicit education in what constitutes plagiarism at higher education level. In this review article we explore the literature on undergraduate and postgraduate student perspectives of plagiarism and related concepts in a global context. These perspectives are discussed under the following themes: students' understanding of plagiarism and related literacy practices such as referencing, the reasons contributing to why students plagiarise intentionally or unintentionally, students' understandings and views of the seriousness of plagiarism and students' views on how to curb plagiarism. We believe that through a deeper understanding of students' perspectives of plagiarism, we could start to develop an all-encompassing strategy to deal with plagiarism at university level.
INTRODUCTION: THE COMPLEXITY OF PLAGIARISM AND REFERENCING
Plagiarism is a complex subject, integrally connected to different intricate literacy practices such as accurate referencing skills and academic reading and writing skills. Several scholars have attempted simple definitions (e.g. Lathrop & Foss, 2000; Neville, 2007). However, others have identified complications with these (Angelil-Carter, 2000; Hu & Lei, 2015; Neville, 2007). For example, Neville (2007) acknowledges the difficulty of defining plagiarism, as the elements of unintentional plagiarism are contextually complex. Hu and Lei (2015:234) state that "plagiarism is often associated with such judgmental labels as deception, cheating, academic crime, intellectual dishonesty, and moral failure". These terms suggest that plagiarism is a simple issue with a clear-cut boundary.

Referencing has been highlighted as one of the key skills needed in order to avoid plagiarism. It is, however, a highly intricate skill to master. In most cases, referencing is simply defined as acknowledging others' work in one's writing. However, the deeper purpose and meaning of referencing is often poorly conveyed to students. For example, the importance of concepts related to the referencing process, such as citation, bibliography and academic reading and writing, is seldom explained in detail. As a result, as the literature shows, students usually grasp only the superficial definition of plagiarism and not the benefits of referencing and related concepts (Ashworth, Bannister & Thorne, 1997; Greenwood, Walkem & Shearer, 2014). The deeper benefits of the referencing process encompass proper/exact citation and formulating an accurate reference list or bibliography in an integrated academic reading and writing process.

Scholars such as Gilbert (1977), Hutchings (2013), Neville (2010) and Samraj (2013) identify several benefits of referencing often not grasped by students. Hutchings (2013:312) mentions that "in referring certain ideas to certain sources the writer is equipped with the ability to distinguish voices of others, and therefore allow for the establishment of their voice". Thus, when reading vast amounts of information in preparation for academic writing, one needs to be able to create an argument using others' voices as well as one's own voice. This can be achieved by referring to others' texts and through practice. Using one's own voice is one of the crucial skills needed in academic discourse, and referencing is a crucial part of this process. Samraj (2013) suggests that this intertextual interlinking (synthesising text from different sources) of the student's own voice with those of others in academic writing is the hardest part of constructing an academic text, and that making a critical and credible connection between what a student did and what has already been done by other writers is not an easy task; it requires an ability to look at "the bigger picture". This is a difficult skill to master, and if students disregard the proper citation part of referencing, they miss an opportunity to learn how to cite and how to use others' texts effectively in support of their own arguments/voices. Gilbert (1977) explained the significance of referencing in scientific/academic writing; he considers a scientific/academic paper a "tool for persuasion".
He explains this concept as follows: scholars who conduct research believe their results are important; they have to persuade the scholars in the field of the importance of these results by relating their findings to the current literature in their field, to provide evidence or create a persuasive argument for the audience
(other scholars in the field) that their work has a level of validity and is theoretically based. Referencing forms a crucial part of this process. Similarly, Neville (2012) argues for the crucial role of referencing in the collective development and transmission of academic knowledge.
Referring to the work of other scholars encourages a critical thinking approach through an intelligent selection of quality research findings, reviewing and analysing these findings and presenting them to support a chosen argument. Referencing is the practical manifestation of this engagement (Neville, 2012).
This article argues that the simplistic view of the problem of plagiarism held by higher education institutions and educators could itself form part of the problem. Universities and educators thus need to be mindful of the multifaceted and complex nature of plagiarism and its interlinking with other complex skills that need mastering, such as referencing and academic reading and writing. It is easy, as educators, to label learners as lazy, thieves or incompetent. Thus, we must begin to look at this field with more intensity and sensitivity and teach it in depth. This is not yet happening, as the norm persists for educators to assume that university students already possess the skills for avoiding plagiarism. We have purposefully chosen to include both undergraduate and postgraduate students' perspectives of plagiarism in a global context, as this helps to create a more global, cross-cultural picture of students' perspectives. By including both perspectives, one can begin to see that the struggles are widespread, from students' early years of university into postgraduate levels. With this in mind, it is important that we change how we view and address plagiarism education.
This review of the literature on perspectives of plagiarism explores these perspectives under the following topics: students' understanding of plagiarism and related literacy practices, reasons why students plagiarise intentionally or unintentionally, students' understandings of the seriousness of plagiarism and students' views on how to curb plagiarism.
The database used to select articles for inclusion in this literature review was Google Scholar. Articles published in scholarly, reliably peer-reviewed journals were included. Keywords used for searching include "undergraduate students", "postgraduate students", "plagiarism", "referencing" and "perspective". Most of the articles focus on research done over the last 20 years or more in order to provide a historic perspective of the persistence of the plagiarism problem over time. The articles were also drawn from multiple contexts to highlight the complexity and the commonality of the problem. Forward and backward snowballing was used to obtain more relevant articles. Several articles address the issue of plagiarism in connection with other related concepts, such as referencing, in order to highlight the multidimensional nature of plagiarism.
STUDENTS' UNDERSTANDING OF PLAGIARISM AND RELATED CONCEPTS
Scholars have shown that, except for verbatim parroting and simple definitions of plagiarism and referencing, undergraduate and postgraduate students lack clarity on what fully constitutes referencing and plagiarism (Ashworth et al., 1997;Branch & Iran, 2013;Du, 2019;Greenwood et al., 2014;Gu & Brooks, 2007;Gullifer & Tyson, 2010;Heckler & Forde, 2014;Lea & Street, 1998;Neville, 2009;Perry, 2010;Selemani, Chawinga & Dube, 2018;Sentleng & King, 2012;Theart & Smit, 2012). Different, but interlinked themes emerged regarding students' levels of understanding of plagiarism and related concepts. These themes include: common trends of superficial understandings of plagiarism, struggles with referencing and its conventions, struggles with intertextuality or text borrowing, difficulty recognising the relevance of referencing when building an argument, difficulty identifying one's voice and others' voices in an argument, difficulty understanding the concept of ownership of ideas, difficulty recognising the relationship between plagiarism and referencing in academic writing and misunderstanding the purpose of literature when developing an argument. Ashworth et al. (1997) found that students demonstrated a lack of understanding of the concept of plagiarism. These students indicated that they battled with the concept of intertextuality and found the concept of ownership of ideas difficult. This could mean that they battled with differentiating their own voice from the voices of others. This could also mean that creating a new argument through the synthesis of texts and integrations of one's' authoritative voice, is not easy. These students were therefore concerned that they might plagiarise unintentionally. This trend of battling with intertextuality is evident in other studies. For example, undergraduate students in a study by Gullifer and Tyson (2010) had a superficial understanding of plagiarism: They understood that word-for-word copying of text without referencing was plagiarism. They were, however, confused about intertextuality and understanding the relevance of citation and attribution when developing an argument built on evidence from existing literature.
Similarly, students in Gu and Brooks' (2007) study misunderstood the purpose of previous literature in the development of their argument. One student said, "[w]riting English essays is really easy, one has their own idea, one gets someone really powerful to support your idea". This demonstrates that some students battle with the concept of intertextuality, trying to minimise its role. Another student in Gu and Brooks' (2007) study thought that referencing is a technical task that could be easily included at the end of the project and expected the lecturer to concentrate on the content rather than on the referencing conventions. Students in Gu and Brooks' study (2007) also acknowledged their confusion about distinctions between common knowledge, original sources and their own creativity.
Regarding the understanding of plagiarism and of referencing and its conventions (paraphrasing, summarising, use of quotation marks and proper citation), in most instances students considered that they fully understood these concepts. However, when they were later asked to explain or apply the concepts, they were unable to do so and battled with understanding the link between these concepts. For example, most of the students in Theart and Smit (2012) and Selemani et al.'s (2018) studies indicated that they knew what plagiarism was and demonstrated fairly good explanations. However, students in the Selemani et al. (2018) study thought that paraphrasing, summarising and acknowledging sources was a form of plagiarism. Some students in their study admitted to paraphrasing, summarising and using quotation marks without proper citation and acknowledgment. This trend is also evident in Heckler and Forde's (2014) study, where all students included knew how to define plagiarism and internet plagiarism in simple terms. For example, they knew that reproducing or including another's work into one's own without proper attribution was a form of plagiarism. However, in the same study, some students indicated that they did not know how to cite others. This suggests students in studies such as these may be unsure of what fully constitutes proper referencing conventions.
Concerning the issue of voice, students in the Lea and Street (1998) study did not feel that they were ready or able to use their own voice in an authoritative text and were concerned
about textual borrowing practices. Gullifer and Tyson (2010) observed similar results. One student asked, "if I think it is my brilliant idea, do I have to actually go somewhere and check whether I have stolen it from someone else?" This clearly illustrates the confusion around identifying the student's own voice and others' voices in a text. As a result of all this confusion, students appeared concerned that they might plagiarise unintentionally or accidentally because they lacked the skill of borrowing others' texts appropriately. The complexity of intertextuality is also highlighted in Du's (2019) study. After attending a six-hour instructional block of training on referencing over three consecutive weeks, students could recognise improper source use and significantly reduced blatant and subtle plagiarism. However, the students did not understand that textual synthesis without acknowledging the source was plagiarism, and they relied heavily on the language of the original source. Also, in the case of Du (2019), due to language differences, the students felt they needed more time to master the skill of referencing.
Students of all levels (first-year to postgraduate) in Perry's (2010) study had plagiarised in some way. However, first-year undergraduate students were more likely to plagiarise than postgraduate students, with 71% of the first-year students not aware that word-for-word copying without acknowledging the author was plagiarism. Students at all levels seemed to have an issue with intertextuality and the identification of voice, as well as a poor understanding of referencing conventions. For example, overall, students were unsure whether copying one or two sentences into their assignments without acknowledgment was acceptable. According to Perry (2010), undergraduate students were more likely to fabricate references (14%) than postgraduate students (4%). Branch and Iran (2013) identified a poor understanding of plagiarism as the leading reason why many Iranian postgraduate students plagiarise. Their study showed the crucial importance of students being well trained and groomed throughout their university years to develop these skills.
POSSIBLE REASONS WHY STUDENTS PLAGIARISE INTENTIONALLY OR UNINTENTIONALLY
The reasons for plagiarism are shown to be exceptionally broad and multidimensional. Some of the reasons reflect intentional plagiarism, while others are unintentional. The reasons reported in the literature include:

- Provision of plagiarism and referencing documents without explanation.
- Lack of guidance and support by educators.
- Unrealistic expectations by educators that university students will read and understand plagiarism documents and quickly transform themselves into good academic writers without explicit guidance.
- Use of legalistic, threatening and authoritative techniques to deter students from plagiarising.
- The assessment methods used (e.g. assignments that are unclear, too heavy or too difficult; group assignments; easily duplicatable assignments).
- Inconsistencies among educators: some interested in addressing plagiarism, others ignoring it.

Poor underlying/supporting skills:
- Poor language proficiency.
- Poor writing skills.
- The inability to manage large amounts of text (intertextuality).

Lack of institutional guidelines:
- Lack of transparent and explicit guidelines and strategies on what to do if a student plagiarises.
- Non-existent or overly demanding monitoring systems.

Lack of institutional support:
- Classes being too large for an academic staff member to handle effectively.
- Heavy workloads for students and academics.

Lack of student effort/commitment:
- Students' lack of interest in a specific topic.
- The need for immediate gratification (competition, convenience and good grades).

Other factors:
- Financial issues.
- External pressures (family or society, diverse cultural and educational backgrounds).
- Time management issues.
In Sentleng and King's (2012) study, students' reasons for unintentionally plagiarising included poor writing skills, lack of referencing skills and never having been taught how to reference properly. These are similar to those identified in Perry's (2010) study, where students' reasons were a lack of understanding of what constitutes academic misconduct; some undergraduate and postgraduate students did not recall being taught about referencing or plagiarism. These reasons suggest that if these students had been afforded an opportunity to learn and acquire the skill, they would be less likely to plagiarise. Students in the Heckler and Forde (2014) study blamed their faculty: the faculty did not explain the assignments clearly enough, faculty expectations were too high, and the classes were too big, resulting in monitoring systems being difficult or non-existent. Some students felt the professors did not care about this issue, thus providing the freedom for students to continue
plagiarising. They also blamed lecturers' lack of effective strategies when dealing with plagiarism within departments. Ashworth et al. (1997) identified various reasons for plagiarism: heavy workload, poor understanding of the content and students paying little attention to content they felt would not benefit them in the future. Students identified some of the institutional factors they thought contributed to plagiarism. These included a lack of guidelines from the university regarding what plagiarism entails and vague consequence and penalty guidelines. Students also felt that collaborative work encouraged plagiarism, as students could easily collude and steal others' work.
According to students in Gullifer and Tyson's (2010) study, they were given no formal introduction to, or training in, the scholarship of academic writing when they entered university. They felt that when they first entered the university they were bombarded with information on plagiarism and related academic concepts and given no time to absorb the information. They believed that simply providing online access to the plagiarism documents was not enough, as it did not promote an in-depth understanding of academic writing and plagiarism concepts or how to apply these in practice. Exposing students to documents that merely warn them about the consequences of plagiarism might not be beneficial for novice students who have not yet begun to understand why referencing is important for them personally. Undergraduate students who are new to the academic culture of reading and writing would not necessarily be aware of the seriousness of plagiarism and referencing unless someone takes time to induct them into these discourses in a supportive learning environment. According to Gullifer and Tyson (2010), students also need time to absorb and process new information and put it into practice without being subject to threatening behaviour from the institution. Students felt that the university was not doing enough to make all aspects of plagiarism more explicit. Branch and Iran (2013) identified several reasons for plagiarism. Their students mentioned poor writing skills, poor language competencies, difficulty and heaviness of assignments and projects, convenience, external pressures from family and society and financial reasons. One could argue that, out of all these reasons, only three could lead to unintentional plagiarism. The rest of the reasons are influenced by students' choice. For example, students found it convenient to bypass the time and effort involved in learning to synthesise vast amounts of information and learning to identify their own authoritative voice and the voices of others. Students chose to plagiarise as they felt technology made it easy for them and thus one could argue that these students were looking for the immediate gratification of getting good grades with no future benefits in mind resulting from their academic writing development.
Some students in Heckler and Forde's (2014) study took responsibility and accountability for their unacceptable intentional plagiarising. They sometimes plagiarised because of poor time management, or because they wanted good grades or the immediate gratification of obtaining good grades. In this 2014 study, intentional plagiarism was also attributed to USA cultural value systems. Students identified the two values contributing most to plagiarism: individualism and freedom. The notion of individualism was made clear by one student: "Our culture tells us to do whatever it takes to be successful even if it means cheating" (Heckler & Forde, 2014:68). Another student mentioned the culture of intense competitiveness to achieve individual success at all costs; a culture that influences the criteria by which one's academic performance is judged: There is a stronger emphasis on making sure that you are one step ahead of everyone else; hard work and self-knowledge has taken a back burner. Instead of valuing the process by which an education is obtained, and information learned through the process, our culture value of individual achievements means that rewards are based only on grades (Heckler & Forde, 2014:68).
The notion of freedom was also reported as one to be valued. The students felt that, being no longer under the control of their parents, provided freedom to do whatever they liked. It is, however, surprising to see that the same values contributing to cheating and plagiarism for some students were seen as counter-productive by other students. For example, the freedom to make a morally responsible choice was valued by some students. Individualism was also seen as a negative and harmful value by some students, who criticised those students who emphasised and valued individual merit. Based on the students' views, Heckler and Forde (2014) suggested that: …current emphasis on faculty grantsmanship, research, and publication should be balanced with teaching function responsibilities that include faculty vigilance on academic dishonesty issues. The challenge of doing this is significant as institutions opt for larger classes, more online instructions, and use of non-tenure-track contract instructors to carry heavy enrolment loads (Heckler & Forde, 2014:70).
Other students who took some responsibility for intentional plagiarism were those surveyed by Sentleng and King (2012) in South Africa. These students acknowledged that some of them lost track of where the information came from. This deficit could be improved when students understand the purpose and benefits of referencing and practise the skill.
Diverse cultural and educational background factors have also been identified as possible contributors to plagiarism and poor referencing conventions. For example, Chinese students interviewed by Gu and Brooks (2007) felt that the British university they attended concentrated excessively on citation and referencing relative to the focus in their background education, which was mainly on the profound collective knowledge a student presented in an essay. Guo (2011) also mentions that a limited command of English could trap students from different language backgrounds into plagiarising. Lea and Street (1998) argued that top-down authoritative behaviour towards students by tutors or academic staff when trying to reinforce or enforce plagiarism rules does not yield the desired results. Their study illustrates how authoritative behaviour, used without proper guidelines, detracted from the beneficial teaching and learning that takes place between tutors and students. The students were not happy with unclear and intimidating ways of enforcing referencing and required explicit teaching in exactly what constitutes plagiarism. In agreement with Lea and Street (1998), one scholar argues that, as teaching academics in higher education, we should position ourselves with our students in the periphery, where they are relegated by the institution, and with those students who do not understand the value of referencing or a particular literacy skill, and make explicit to them the value of the academic literacy skill. Other scholars, such as Gravett and Kinchin (2018), who examined the challenges experienced by students when developing referencing skills, reported that undergraduate students felt that the use of scare tactics or threatening speech about what could happen to them if they plagiarise was terrifying and paralysing. The emphasis on punishment and on authoritative speech made the research development journey a difficult one for students and failed to take into account the foundation
Mbutho & Hutchings
The complex concept of plagiarism of good academic practice. This stress is not beneficial to students, especially when lecturing staff are in the process of trying to initiate students into university life. Gravett and Kinchin (2018) add that the focus on punishment and penalties puts the lecturer in a non-beneficial position of power over these students who are trying to adapt to a new environment and become accustomed to new and unfamiliar academic conventions.
The literature also shows evidence of the use of intimidation in institutional documents such as study guides, notice boards and assessment marking rubrics. For example, in Lea and Street's (1998) study, the institutional documents used to caution students against plagiarism focused mainly on the legal consequences and disciplinary actions and were couched in "legalistic discourse". This mode of delivering information about plagiarism is not unique to the Lea and Street (1998) study.
According to Manathunga and Goozee (2007), there appears to be an unspoken, "common sense" assumption amongst lecturers and tutors that graduates can transform themselves magically into independent critical thinkers and academic writers with minimal pedagogical input or support. However, studies mentioned in this review have found that students need to be introduced to any new discourse gradually and be fully supported. The alternative is that unsupported students will not have a full grasp or a uniform in-depth understanding of the concepts of academic writing and critical thinking, or the ability to apply them in their own writing. Hamilton (2016) argues that academic staff should not expect first-year students to have mastered referencing and citation conventions by the first semester. Using the threshold conceptual framework, Hamilton (2016) argues that students need to be given enough time to acquire the skill of referencing. Kiley and Wisker (2009) describe crossing the conceptual threshold as involving a transformation from an old understanding of a concept to a new way of thinking. Without this transformative, non-linear and, what authorities might see as, messy process, students cannot progress to advanced levels of understanding and insight about a discipline. Hamilton (2016) sees attribution as a conceptual gateway that new students need to be given time to cross. Once they cross this, "they may change from experiencing confusion over why the acknowledgment of sources is given so much emphasis in assessments of their writing, to understanding the fundamental role of attribution and referencing in academic writing" (Hamilton, 2016:44). Hamilton (2016) and Manathunga and Goozee's (2007) arguments are similar. However, Hamilton was referring to first-year undergraduate students ("novice" academic writers) and Manathunga and Goozee (2007) were referring to graduate students proceeding into the postgraduate level of education and doing research. Both studies suggest that attribution and referencing skills may take time to master and that students need to be realistically awarded this time, taking into consideration their individual backgrounds. Furthermore, in some cases, Master's and PhD students are referred to as novice writers, implying that they too are in the process of mastering attribution and referencing skills. For example, a survey conducted by Adika (2014) of 125 Master's students attending graduate programmes at the University of Ghana showed that although the majority of the students declared themselves to have a substantial level of training in referencing styles, the researcher identified a gap in their practical knowledge of the referencing format. Furthermore, these students indicated that while some lecturers required them to reference, others did not require them to explicitly acknowledge their use of source material to develop their own argument. These inconsistencies among lecturers send mixed messages to students regarding the importance of referencing.
THE SERIOUSNESS OF PLAGIARISM
Contradictory views were found in the literature regarding students' perspectives on the seriousness of plagiarism. While some students thought penalties for plagiarism should be implemented, others felt they were disproportionately severe, and some felt they were simply vague and not transparent enough. Some complained that plagiarism was not monitored effectively in their faculties, while others did not care about students plagiarising. Some students felt plagiarism was unacceptable, others that it was acceptable in some instances. For example, some students who do their best to follow the right procedures might find it unfair when others plagiarise and pass with no effort and no consequences.
In the Ashworth et al. (1997) study, students felt that plagiarism was unfair, wrong and bad practice. However, some thought plagiarism was unfair only if it impacted their peers' work, and felt that if the plagiarising was from a book, it was not as bad, as the individuals who wrote the book were not on the same academic level or of the same status as the student (Ashworth et al., 1997). This sentiment is in itself an indication of the extent to which students are not aware of, or are confused about, the benefits of grasping the referencing/plagiarism concept and of being able to use this skill in their academic writing and critical thinking for their own benefit. Students in the Ashworth et al. (1997) study also felt that premeditated cheating in an examination was the most serious offence, while plagiarism in course work assignments was less serious. This kind of thinking demonstrates how a lack of understanding of referencing and plagiarism could hinder student development in academic writing. In other words, if students are not referencing and are plagiarising, how can they improve the skills of intertextuality or understand the concept of the ownership of ideas?
In a study by Greenwood et al. (2014), most of the students felt that referencing was important. However, they were not fully confident about referencing and felt inadequately prepared for the required academic writing standards. Therefore, it seems that, for the students in this study, plagiarism was caused by a lack of skill rather than an unwilling attitude. Gravett and Kinchin's (2018) study further demonstrated students' awareness of the importance of referencing. For example, students in this study were very anxious about academic referencing skills and scared they might fail due to unintentional plagiarism. As a result, some of these students enquired about the skill from individuals such as lecturers and librarians. Students in Gullifer and Tyson's (2010) study perceived intentional plagiarism as serious. However, they felt that for unintentional plagiarism, the penalties the university imposed were disproportionately severe. Similarly, some students in Theart and Smit's (2012) study considered punishment for plagiarism unfair, while some did not care if others plagiarised. The majority also indicated that they would not report plagiarism incidents. Most of the students thought that severe punishment for plagiarism and the introduction of a code of conduct were needed to prevent students from plagiarising.
It could therefore be argued that, in general, some students do not see the long-term benefits of referencing; instead they value the immediate gratification of high marks. Sentleng and King's (2012) South African study investigating plagiarism among 139 undergraduates found that 41% of the students felt plagiarism was to be taken seriously. However, plagiarism was still evident amongst students within the department under study. Some students in the
Gullifer and Tyson (2010) study felt that the penalties for unintentional plagiarism were too severe. However, while the students in this study were aware that plagiarism is a serious offence, unfair and unethical as a practice in terms of acknowledging other scholars' work, they chose to plagiarise in spite of this view.
STUDENT VIEWS TO CURB PLAGIARISM
The literature regarding undergraduate and postgraduate perspectives on plagiarism and related concepts indicates that resolving issues of plagiarism and improper referencing in academic writing is a daunting task requiring a variety of strategies. Various studies have raised a considerable number of concerns from students and university staff. The recommendations below represent a combination of suggestions from students and researchers from the various studies reviewed.
To address the issue of lack of training, Guo (2011) recommends the inclusion of a module in the curriculum design that addresses plagiarism, referencing and academic writing conventions in all accounting departments. It is, however, clear from the various studies that this suggestion could benefit all faculties in a university. Students in the Selemani et al. (2018) study advocated for advanced training in information literacy to be offered to postgraduate students. Guo (2011) proposes that new students should be integrated thoroughly, and in a variety of practical and experiential ways, into their new academic life. Selemani et al. (2018) suggest that academic literacy training for postgraduates should include advanced academic writing skills, paraphrasing, summarising and synthesising of texts, as well as referencing skills. This kind of training would also be beneficial to undergraduate students. Branch and Iran (2013) suggest that academic lecturers inform students about plagiarism and the severe punishments incurred. However, several other scholars see the use of intimidating and authoritative messages about punishment by lecturers and in institutional documents as creating unnecessary anxiety: students need explicit teaching in what constitutes plagiarism and related concepts. Gullifer and Tyson (2010) and McKenna (2010) also suggest that information regarding plagiarism and other literacy practices needs to be made explicit in an environment that promotes learning, and that lecturers' common-sense assumption that students access the information on plagiarism on their own is unfounded. Guo (2011) points out that students who perceive themselves to be poorly integrated into academic life are more likely to engage in plagiarism, and hence appropriate integration is important. Guo (2011) also mentions that raising students' awareness of plagiarism and its implications is important for preventing its incidence. This suggests the importance of formal training and continuous awareness campaigns.
Concerning poor writing skills, Selemani et al. (2018) and Branch and Iran (2013) suggest that academic staff strengthen students' writing and research skills. They advocate for students to be taught how to summarise, paraphrase and synthesise information, as well as how to cite articles. Students also felt that papers must be planned and written in a step-by-step way under the supervision of an academic member of staff. Branch and Iran (2013) also suggest students be thoroughly informed about the use of software and search engines that can detect plagiarism. Land et al. (2005), cited in Hamilton (2016), recommend a supportive liminal space in which it is acknowledged that transformation needs time, as it is a period of uncertainty during which individual novice academic writers move towards becoming more experienced writers. Heckler and Forde (2014) suggest that imbalances in academic staff workloads, such as overloaded classes or a high teaching load, could be contributing to the following factors: poor and shortened referencing training; the handing out of referencing and plagiarism documents without proper guidelines to the students; inconsistencies whereby some lecturers enforce referencing and others do not; and poor or absent feedback. Therefore, they propose that support for academic staff plays a crucial role and should not be taken lightly in developing students' referencing skills.
Students in Perry's (2010) study recommended the annual design of new assignments that have fewer generic solutions available on the internet, challenging students suspected of plagiarism, and giving out positive messages about accurate referencing for solid future scholarship. Furthermore, students in Branch and Iran's (2013) study contributed valuable suggestions and views on allocated assignments. They felt that, firstly, academic staff should select reasonably challenging topics that students are knowledgeable about and interested in. Secondly, academic staff should employ and encourage students in time-management techniques and give them realistic time frames to manage an assignment. Thirdly, the students felt academic staff should familiarise the students with plagiarism prevention techniques. However, as mentioned earlier, this kind of support required by the students would put pressure on lecturers, and academic staff would need institutional assistance. Recommendations or studies on how academic staff can be supported to deliver more efficiently, without burnout, would be beneficial to their students. Guo (2011) recommends that the issue of cultural diversity be considered. Cultural and educational backgrounds can give rise to a variety of issues, including language incompetency and a complete lack of prior exposure to expected academic literacies when students come to the university.
CONCLUSION
This literature review revealed undergraduate and postgraduate students' lack of an in-depth understanding of plagiarism and related concepts. Although many students were found to be able to define plagiarism and referencing, most were unable to see the multidimensional nature of these conventions. The reasons provided for plagiarism in this literature were mostly related to a lack of in-depth education; a lack of support from the institution and educators; a lack of understanding of the value and relevance of these conventions; a lack of appropriate training for educators, who in many cases are also postgraduate students; and inadequate time for students to transition and build the required academic literacy skills in a gradual and incremental way. A lack of understanding and of pre-university education could be part of the reason why some students do not take plagiarism and related concepts seriously.
Based on these conclusions, we advocate for the following to be addressed by universities and educators: universities should refrain from providing shorthand training to students. We support the views of Gu (2012) on the inclusion of a course that encompasses all of the literacy practices highlighted in this literature. Each year of study, especially at the undergraduate level, should include a course that addresses literacy practices in depth and in practical ways. When such a course is developed, it should undergo a rigorous process to ensure that it covers the kind of in-depth knowledge suited to that level of study. We consider this process to be crucial to the development of this academic writing conventions course. Furthermore, this process would ensure that during department quality audits this
course is assessed in terms of its appropriateness, comprehensiveness and its delivery and assessment methods. Additionally, the importance given by undergraduate and postgraduate students to the need for the extensive involvement of teachers/academic staff in the solution to the problem of plagiarism should be taken seriously. For this suggestion to be put into practice, a university would need to play a significant, active and ongoing role in equipping and supporting academic staff. We would hope that the institution comes to realise that educators are also part of the system that has failed to educate students in the nature and implications of plagiarism and that it behoves the institution to train students and educators more appropriately and thoroughly in this area.
We have argued that the educator's role should be a mindful one: educators should not take the simplistic route when educating learners or merely issue instructions about plagiarism and related concepts and the penalties for plagiarising. They need to gauge the extent of students' prior knowledge before making assumptions about their understanding. Educators should also create a positive learning environment, one that encourages students to share their concerns about plagiarism and related concepts, and one where they can be supported to incrementally build the skills they need to become competent writers of academic discourse. Ideally an educator would guide the student into an understanding of the complex and interlinking nature of plagiarism, referencing and academic writing. University educators also need to acknowledge that a lack of understanding of plagiarism and related concepts is not a first-year student problem but one that runs across all years and levels of study. This acknowledgment is particularly important in the South African context, where language diversity and educational inequalities further complicate the issue, undermining the confidence and the futures of our students.
|
2021-08-27T17:00:57.916Z
|
2021-01-01T00:00:00.000
|
{
"year": 2021,
"sha1": "2d0f2e5ec4f4a6d3dc2d42d389e7639089ba9937",
"oa_license": "CCBY",
"oa_url": "https://journals.ufs.ac.za/index.php/pie/article/download/4399/4088",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "29a58c8709156cb89d74d2937c3339b105dd716a",
"s2fieldsofstudy": [
"Education"
],
"extfieldsofstudy": []
}
|
253996230
|
pes2o/s2orc
|
v3-fos-license
|
Adverse Health Effects and Mercury Exposure in a Colombian Artisanal and Small-Scale Gold Mining Community
The aim of this study was, first of all, to examine the association between mercury (Hg) concentrations and the respiratory function of gold miners in the artisanal and small-scale gold mining (ASGM) environment of San Martín de Loba, Colombia. We carried out a cross-sectional study using a survey whereby we collected basic demographic information and occupational medical history, and applied two validated questionnaires (Q16 and SF-36). We measured Hg levels in all volunteers using direct thermal decomposition-atomic absorption spectrometry. Univariate and bivariate statistical analyses were carried out for all variables, performing logistic regression to assess the effect of ASGM on health outcomes. Volunteers enrolled (n = 124) were between the ages of 20 and 84 years (84% miners and 79% males). No changes were found in the systolic blood pressure, diastolic blood pressure, and heart rate of the ASGM miners, in crude and adjusted statistical analyses. ASGM miners had 8.91 (95% confidence interval, 1.55–95.70) times the risk of presenting more than six neurotoxic symptoms. Concentrations of total whole blood mercury (T-Hg) in all participants ranged from 0.6 to 82.5 µg/L with a median of 6.0 µg/L. Miners had higher T-Hg concentrations than non-miners (p-value = 0.011). Normal and abnormal respiratory spirometry patterns showed significant differences in the physical role and physical function quality-of-life scales (p-values of 0.012 and 0.004, respectively). The spirometry test was carried out in 87 male miners, with 25% of these miners showing abnormalities. Of these, 73% presented a restrictive spirometry pattern, and 27% an obstructive spirometry pattern. The ASGM population had higher Hg concentrations and worse neurotoxic symptomatology than non-miners of the same community.
Introduction
Artisanal and small-scale gold mining (ASGM) directly and indirectly employs more than 100 million people in over 70 countries around the world [1,2]. An estimated 10-15 million people work directly in ASGM activities worldwide, out of which 4-5 million are women and children [3][4][5], and this number is increasing in Latin America every year. ASGM produces 12 to 15% of the gold in the world, yet it uses more than 1400 tons/year of elemental mercury (Hg), which is released into the air, water, sediments, land and food [3]. Elemental mercury released into bodies of water after ASGM activities can be oxidized into soluble species (Hg²⁺), which can then be deposited in the sediments and converted by natural processes into methylmercury (MeHg), the most toxic of the Hg species. MeHg can be bioaccumulated and biomagnified in fish via the food web, and then eaten by the human population [4][5][6].
ASGM is characterized as an informal job that requires limited technical skills and few resources, but it involves the use of hazardous substances, such as elemental Hg for gold extraction.

Miners working in ASGM in San Martin de Loba (Bolivar, Colombia) are exposed to mercury through the extraction of gold from excavations of sinkhole mines, which is linked to health hazards related to lung function impairment, chronic lung disease, and cognitive dysfunction in terms of neurological effects. Therefore, the aim of this study was to determine lung function, total Hg levels in whole blood, neurotoxicity (Q16) and quality of life in ASGM miners and non-miners from this municipality [30,31].
Design and Study Site
A cross-sectional study was designed and 124 volunteers of both sexes were enrolled in November 2018, including 98 men and 26 women, aged 20-84 and 25-71, respectively. With the approval of the authorities and miners' associations of the municipality of San Martin de Loba, volunteers who met the study's selection criteria were invited to participate. People were eligible if they had been living for at least two years in San Martin de Loba, Colombia. This municipality is located in northern Colombia (8°56′12″ N, 74°2′20″ W) at an altitude of 30 m above sea level, with an average temperature of 27 °C and an extension of 414 km² (Figure 1). The selection criteria for the analysis included miners and non-miners (housekeepers, cooks, miner assistants, and additional participants involved in other activities). After the entire written informed consent was read to explain the purpose of the study, all participants were free to ask questions and then signed the consent form. Before sampling, the institutional ethical review board of the University of Cartagena approved this study by minute No. 75, 2014.
Demography and Health Surveys
Participants completed a brief written survey so we could collect basic demographic information. An occupational medical history was taken, and two validated questionnaires (Q16 and SF-36) were applied to evaluate the quality of life and the possible neurological impacts on the participants. The health-related quality of life questionnaire (SF-36) is a standardized instrument of 36 items grouped into 8 measurable scales: physical functioning (PF), physical role (PR), bodily pain (BP), general health (GH), vitality (VT), social functioning (SF), emotional role (RE) and mental health (MH) [30]. Each dimension is scored from 0 to 100, from worst to best quality of life. The standardized Q16 questionnaire was used to screen for possible neurotoxic symptoms among the population exposed to hazardous substances such as Hg [28]. If a participant reported more than six negative symptoms on the Q16, the patient was considered as requiring evaluation by a health professional for possible neurotoxic effects [31,32].
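To make the two scoring conventions concrete, here is a minimal Python sketch of the rules just described; the function names are illustrative and the 0-100 linear rescaling for SF-36 is the standard transformation assumed here, not code from the study:

```python
def q16_needs_referral(responses):
    """True when more than six of the 16 Q16 items are negative
    (the results sections of the study also quote this as >= 6).
    `responses` is a list of 16 booleans, True = symptom reported."""
    return sum(responses) > 6

def sf36_scale_score(raw, raw_min, raw_max):
    """Rescale a raw SF-36 dimension to 0-100, worst to best quality
    of life (standard linear transformation, assumed here)."""
    return 100.0 * (raw - raw_min) / (raw_max - raw_min)

# Example: a participant reporting 8 negative Q16 symptoms is referred.
print(q16_needs_referral([True] * 8 + [False] * 8))  # True
```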
Spirometry as a Performed Pulmonary Function Test (PFT)
The spirometry test was applied to the miners involved in this study, since miners working in sinkholes are exposed to PM and may display respiratory conditions. Volunteers who completed the medical history and health surveys underwent a spirometry test using the SpiroBank II (MIR, Rome, Italy) [21]. Spirometry is a breathing test that measures the volume of air (mL) that individuals can blow out of their lungs. It is a widely used pulmonary function test (PFT) because it is simple and reproducible, and it is useful for diagnosing different types of pulmonary disorders. The PFT took approximately 15 min. The subjects performed three acceptable maneuvers in which the two best measurements of forced vital capacity (FVC) and forced expiratory volume in one second (FEV1) differed by less than 150 mL. The result was chosen from the curve in which the sum of the FVC and FEV1 was highest. An obstructive pattern was indicated by an FEV1/FVC ratio of less than 70%, while a restrictive pattern was indicated by a low FVC in the presence of a normal FEV1/FVC ratio.
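The classification rule above is compact enough to express directly. A sketch follows, assuming an 80%-of-predicted FVC threshold for "low FVC", which is a common convention the paper does not spell out:

```python
def spirometry_pattern(fev1, fvc, fvc_predicted):
    """Classify a spirometry result (volumes in mL or L, consistently).

    Obstructive: FEV1/FVC < 70% (as defined above).
    Restrictive: low FVC (here, < 80% of predicted -- an assumed
    convention) with a normal FEV1/FVC ratio.
    """
    ratio = fev1 / fvc
    if ratio < 0.70:
        return "obstructive"
    if fvc < 0.80 * fvc_predicted:
        return "restrictive"
    return "normal"

# Example: FEV1 = 2.1 L, FVC = 3.5 L -> ratio = 0.60 -> obstructive
print(spirometry_pattern(2.1, 3.5, 4.5))
```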
Blood Sample Collection
A blood sample of 4 mL was extracted by venipuncture and collected in Vacutainer® tubes with EDTA as an anticoagulant (Becton Dickinson, Oxford, UK), which were labeled with the information of the subjects. Tubes with whole blood samples were transported to the laboratory in a refrigerated container at 4 °C and stored at −20 °C until analysis. We assigned the blood samples a consecutive code for each volunteer, so that no researcher could identify the results of any person in the population. Only the medical personnel interviewing patients knew the identity of each subject.
Instrumental Analysis
The determination of T-Hg concentrations was carried out in whole blood samples using a thermal decomposition atomic absorption spectrometer (TD-AAS) with the Zeeman-effect background correction technique, an RA-915M equipped with the PYRO-915+ pyrolysis accessory for solid sample analysis (Lumex Instruments, St. Petersburg, Russia). T-Hg concentrations were measured directly in the whole blood samples without any digestion or sample preparation before analysis. Blood samples (~100 mg) were weighed into a quartz boat and heated at 800 °C to complete combustion in order to release the Hg gas, which was measured based on the absorbance at a wavelength of 253.7 nm in the enclosed system. After thermal release, the quantitative signal for Hg is given by the total area under the peak. Before every measurement, the quartz boat was cleaned and heated once again to obtain the instrument baseline.
Quality Assurance and Quality Control
A series of analytical quality assurance and quality control procedures were carried out in order to maintain the traceability of the mercury concentration results. Each sample was analyzed in duplicate; if the coefficient of variation (CV) was above 20%, a confirmation reading was carried out. The limit of detection (LOD) and limit of quantification (LOQ) were 1.4 and 4.36 µg/L, respectively. Calibration curves were created directly using whole blood Standard Reference Materials (SRMs) with known mercury concentrations provided by the Wadsworth Center, New York State Department of Health [33]. For this study, the instrument was calibrated using the SRMs BE12-12, BE12-14, and BE12-15, corresponding to three different concentrations of 2.39 µg/L, 3.21 µg/L and 11.35 µg/L, respectively. Different masses of the SRMs were weighed in a quartz boat to create a five-point calibration curve (3, 7, 9, 34 and 42 µg/L). The accuracy and precision of the method were obtained by measuring the BE12-12, BE12-14, and BE12-15 SRMs five times, with accuracies of 97.1% (2.32 ± 0.22 µg/L), 87.2% (2.8 ± 0.29 µg/L) and 109% (12.37 ± 0.64 µg/L), respectively. In all cases, the precision of the SRMs, expressed as %CV, was <5%. Calibration curves were obtained with a regression coefficient ≥ 0.995. The calibration curves were created on a daily basis with the SRMs, verifying the measurement with an SRM every ten readings as a control during the analysis.
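These acceptance criteria are easy to automate; below is a minimal sketch assuming ordinary least squares and illustrative signal values (the real instrument software handles this internally):

```python
import numpy as np

def calibration_curve(conc, signal, r2_min=0.995):
    """Fit a linear calibration curve and enforce the acceptance
    criterion R^2 >= 0.995 used in this study."""
    slope, intercept = np.polyfit(conc, signal, 1)
    pred = slope * np.asarray(conc) + intercept
    ss_res = np.sum((np.asarray(signal) - pred) ** 2)
    ss_tot = np.sum((np.asarray(signal) - np.mean(signal)) ** 2)
    r2 = 1.0 - ss_res / ss_tot
    if r2 < r2_min:
        raise ValueError(f"calibration rejected: R^2 = {r2:.4f}")
    return slope, intercept

def duplicate_ok(reading1, reading2, cv_max=0.20):
    """Accept duplicate readings when their coefficient of variation
    is at most 20%; otherwise a confirmation reading is required."""
    pair = np.array([reading1, reading2])
    return pair.std(ddof=1) / pair.mean() <= cv_max

# Five-point curve at the concentrations quoted above (signals illustrative)
slope, intercept = calibration_curve([3, 7, 9, 34, 42],
                                     [0.31, 0.70, 0.93, 3.40, 4.21])
print(f"Hg (ug/L) for a signal of 1.5: {(1.5 - intercept) / slope:.1f}")
```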
Statistical Analysis
Univariate and bivariate statistical analyses were carried out for all variables. The normal distribution of the data was assessed using the one-sample Kolmogorov-Smirnov test. Due to the lack of normality of some continuous variables, such as T-Hg concentrations, the Wilcoxon-Mann-Whitney (WMW) test was used to compare two groups, whereas the Kruskal-Wallis test was used to find differences between three or more independent groups. Possible associations between T-Hg concentrations and the independent variables were separately explored using Spearman correlation analysis. Due to the uncertainties in exposure measures and individual variability in response, the precise minimum concentration of mercury in blood that may cause toxic effects in humans could not be determined. However, a cut-off concentration of 5 µg/L in whole blood, based on the US-EPA reference dose (RfD), can be used [27]. The population of miners was divided into two subgroups as dichotomous outcomes in descriptive analyses: participants with low exposure (≤5 µg/L of Hg in blood) made up the first group, and those with high exposure (>5 µg/L of Hg in blood) formed the second group.
A logistic regression analysis was used to assess the effect of mining exposure on the items of the Q16 questionnaire. In the multivariable model the confounder variables included were age, sex, systolic blood pressure, diastolic blood pressure, heart rate, and body-mass index. Non-significant variables in crude analyses were deemed non-confounders and were not included in the final model. The body-mass index entered the model as a second-degree polynomial to account for non-linearity. We report adjusted odds ratios (OR) with 95% confidence intervals (CI). All comparisons among groups and correlations used a significance level of p ≤ 0.05. All statistical analyses were carried out with R, version 4.2.1 (R Core Team, 2022).
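Although the analyses were run in R 4.2.1, the pipeline is straightforward to reproduce. Here is a hedged Python sketch using scipy and statsmodels, with hypothetical column names (miner, t_hg, q16_ge6, age, sex, bmi) standing in for the study's actual variables:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from scipy.stats import mannwhitneyu, spearmanr

def analyze(df: pd.DataFrame) -> None:
    """df holds one row per participant with the hypothetical columns:
    miner (0/1), t_hg (ug/L), q16_ge6 (0/1), age, sex (0/1), bmi."""
    # Wilcoxon-Mann-Whitney: compare T-Hg between miners and non-miners
    stat, p = mannwhitneyu(df.loc[df["miner"] == 1, "t_hg"],
                           df.loc[df["miner"] == 0, "t_hg"],
                           alternative="two-sided")
    print(f"WMW p-value: {p:.3f}")

    # Spearman correlation of T-Hg with an independent variable, e.g. age
    rho, p_s = spearmanr(df["t_hg"], df["age"])

    # Dichotomize exposure at the 5 ug/L US-EPA-based cut-off
    df["high_hg"] = (df["t_hg"] > 5.0).astype(int)

    # Logistic regression for >= 6 Q16 symptoms, adjusted for confounders;
    # BMI enters as a second-degree polynomial, as described above.
    X = df[["miner", "high_hg", "age", "sex", "bmi"]].copy()
    X["bmi2"] = X["bmi"] ** 2
    X = sm.add_constant(X)
    fit = sm.Logit(df["q16_ge6"], X).fit(disp=0)
    ci = np.exp(fit.conf_int())
    print(pd.DataFrame({"OR": np.exp(fit.params),
                        "2.5%": ci[0], "97.5%": ci[1]}))
```

Exponentiating the fitted coefficients and their confidence limits is what turns the logit estimates into the adjusted odds ratios with 95% CI reported in the tables.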
Population Characteristics
The study was conducted among 124 participants who signed the written informed consent and completed the full questionnaires. The participants' sociodemographic characteristics are summarized in Table 1. The participants were 84% miners and 79% males (Table 1). The average age of participants was 46.1 ± 13.0 years (mean ± SD), ranging from 20 to 84 years; miners had a mean age of 45.5 ± 12.4 years, and non-miners, of 49.3 ± 16.2 years. There were no statistical differences between the ages of the participants of the two groups (t-test, p = 0.26). A total of 46% of the participants had attended elementary school; 38%, middle/high school; and 5% had obtained an undergraduate degree. On average, participants had a body-mass index (BMI) of 24.3 ± 3.9, ranging from 16.9 to 32.3 kg/m². Ten percent of participants surveyed were overweight (BMI ≥ 30). No ASGM miners were found to have changes in systolic blood pressure, diastolic blood pressure, or heart rate in crude and adjusted analyses (Table 2). A total of 80% of the entire female population (miners and non-miners) reported joint pain; 43%, fatigue; 53% had undergone one or more abortions; and 26% reported irregular menstrual bleeding. The quality of life in male and female miners is shown in Figure 2.
Table 1 notes: The percentages are calculated taking the fraction of the parameter in each column (miners and non-miners). Bold p-value numbers are significant at 0.05. SD: Standard deviation.
Concentrations of T-Hg
Concentrations of T-Hg in all participants presented a median of 6.0 µg/L and ranged from 0.6 to 82.5 µg/L. Miners had higher T-Hg concentrations than non-miners, with a statistical difference between the two groups: the median T-Hg concentration for the miners was twice that of the non-miners (median: 6.2 versus 3.1 µg/L, p-value = 0.011) (Table 1 and Figure 3). Concentrations of T-Hg in women miners were not statistically different (p-value of 0.06). A total of 58.9% of all the participants had T-Hg concentrations above the environmental exposure threshold (>5 µg/L), and 10.4% were above the occupational exposure threshold (≥15 µg/L).
Figure 3. Box-plot of whole blood T-Hg concentrations in miners and non-miners, stratified by sex, indicating variability outside the upper and lower quartiles. Note: One outlier with a whole blood T-Hg level value >50 µg/L was excluded from the box plot (a male exposed to mining).
Neurotoxicity
Neurotoxicity factors obtained from the miners were associated with Hg exposure (low and high) (Table 2). ASGM activities are related to increased neurotoxicity, including loss of understanding, problems with usual activities, fatigue, oppression in the chest, painful tingling, and loss of strength and sensitivity in the arms and legs (Table 3). In addition, miners in ASGM increased the risk of having more than 6 neurotoxic abnormalities by 8.9 times. The logistic regression analysis was adjusted for demographic factors (age and sex), Hg exposure, BMI, and occupation (Table 3) [34].
Table 3. Crude and adjusted effects of mercury levels as a risk factor for worse neurotoxic symptoms (Q16) in artisanal and small-scale miners from San Martin de Loba, Colombia.
Table 3 notes: Low concentration T-Hg (Low): Hg < 5 µg/L; high concentration T-Hg (High): Hg ≥ 5 µg/L. * Odds ratios in the logistic regression represent the likelihood that a neurotoxic effect (Q16) will take place, adjusted by T-Hg concentration, age, sex, body-mass index, and occupation. Bold p-value numbers are significant at ≤0.05.
Quality of Life (SF-36)
For participants, the lowest quality of life was noted in the physical role (50%) and general health (57%), while the highest quality of life was in social function (87.5%) and emotional role (86%). Both mining and non-mining women had a lower quality of life than men (Figure 2). The physical role and physical function quality-of-life scales differed between normal and abnormal respiratory spirometry patterns (p < 0.05). Physical function, pain, general health, vitality, social function, emotional role and mental health differed between low and high mercury concentrations in the male and female populations (p < 0.05) (Table 4).
Spirometry Test and Lung Function in Miners
The results of the spirometry test were classified into different patterns according to lung function: normal, abnormal, obstructive, and restrictive. The spirometric patterns were associated with the results of the quality of life questionnaire (SF-36) for a better interpretation (Table 5). Of the 87 male miners evaluated, 25% showed abnormal lung function; of these, 73% presented a restrictive and 27% an obstructive spirometry pattern (Table 5).
Neurotoxic Symptoms (Q16)
Neurotoxic symptoms were measured in 124 participants, out of which 58.2% reported at least 6 negative responses. Table 3 shows that 57% of the miners with high blood mercury (Hg ≥ 5 µg/L) reported six or more neurotoxic symptoms, against 45.5% of the miners with low blood mercury (Hg < 5 µg/L). Among women, 84.6% showed 6 or more neurotoxic symptoms, compared to 51.5% of men (Table 3).
Discussion
Recent studies of ASGM workplaces in Latin America have shown high mercury concentrations in biological samples (hair, blood, and urine) taken from miners, due to the use of mercury [26,[34][35][36][37][38][39][40][41][42]] (Table 6). Several studies have shown a significant impact of the use of Hg on the environment and on the health of miners and people living in the vicinity of ASGM sites [8,25,36]. Reported environmental Hg concentrations are elevated at ASGM sites. Most of these studies were published in Colombia, Peru, Ecuador and Mexico, reflecting the rise of ASGM in Latin America and the lack of knowledge regarding Hg management in extractive processes. Although women are part of this practice, the number of women surveyed is generally low in these recent studies (Table 6).
Our results concurred with the findings of previous studies. Mean concentrations of T-Hg in blood samples of the miners from San Martin de Loba were similar to or greater than values from miners or communities near ASGM in other studies in Colombia [43][44][45][46], yet lower than those reported by Cruz-Esquivel et al., 2019 [26] and Calao-Ramos et al., 2021 [45] (Table 6). In general, mean concentrations of T-Hg in blood samples taken from female miners in this study were greater than those from male miners, although there were no significant statistical differences between them. The lack of statistical difference between T-Hg concentrations of males and females may be due to the small sample size in this study. This result was similar to other studies carried out in Colombia [26,43,45] (Table 6). The results for quality of life (SF-36) were adjusted at low and high T-Hg concentrations; therefore, statistical differences were found between females and males in 7 of the 8 evaluated parameters (Table 4).
Table 6 notes: This study carried out an exhaustive review of the literature, initially finding 221 studies. After refining the search, 15 studies of Hg exposure in artisanal miners in the Americas were chosen. * This is the median estimation of Hg levels. ** This is the interquartile range. *** µg/g creatinine (urine sample).
The largest concentrations of Hg in whole blood were found in ASGM in departments such as Choco [41], a poor region of the country, similar to San Martin de Loba (the present study). These socioeconomic inequalities are correlated with ASGM [4,39] and have the potential to interact with Hg exposure by reducing access to health care for the consequences of toxicity and being associated with lower education levels. In addition, some of these settings are more susceptible to the illegal mercury trade, and violence is used to control the territory. These three factors (Hg toxicity, poverty, and violence) interact synergistically with each other, increasing the adverse effects of Hg on these populations and the environment [47,48].
ASGM sites have been considered hazardous workplaces for miners because they involve rudimentary and semi-automatic tools, and miners usually work without following safety standards such as wearing personal protective equipment (PPE), which could include gloves, safety glasses, shoes, earplugs, hats, respirators, or body suits. Miners from ASGM in San Martin de Loba do not wear PPE or have poor PPE compliance, greatly increasing the risk of mercury exposure. Therefore, these miners had high mercury concentrations in blood samples because they have been directly exposed to mercury vapor produced during the amalgamation and burning processes, as well as through eating fish contaminated with methylmercury. Previous studies report that the exposure of miners to mercury in order to extract gold can cause immune, sensory, neurological, motor, and behavioral dysfunctions similar to neuronal diseases [32,33]. In a systematic review of recent studies reporting Hg concentrations in male and female miners from ASGMs in Latin America, Colombia is the country with the most publications, followed by Mexico, Ecuador, and Peru (Table 6).
The results of this study showed that the male miners with a normal spirometry had a better quality of life than miners with an abnormal spirometry, specifically in physical function and physical role [1]. These results suggested that the miners from San Martin de Loba have worked in ASGM and performed activities in tunnels, extracting rocks containing gold-rich ores and later using hammers and ball mills to reduce the mineral to a fine powder. In addition, the miners use excavation techniques such as explosives and pneumatic and manual excavation tools, and work in poorly ventilated environments associated with dust or PM (Figure 4). Airborne PM has a strong association with lung capacity, reduced lung function, and pneumoconiosis (Figure 4). Moreover, exposure to dust or PM with an aerodynamic diameter ≤ 2.5 µm (PM2.5) may have a long-term impact on lung function [49,50].
Figure 4. Top left, a miner reduces the particle size of the raw minerals extracted from the mine; top right, the mills; bottom left, the rudimentary system with which miners descend up to 40 m, turning the crank manually; in the same quadrant, an impromptu ventilation system used to exchange air in the tunnels; bottom right, a paper cartridge used to build handmade explosive charges to advance in the tunnels.
In this study, 60% of the miners from San Martin de Loba had neurotoxic abnormalities (≥6) according to the Q16 neurological toxicity test. In addition, miners had 8.9 times more neurotoxic abnormalities (≥6) than non-miners. Therefore, miners with neurotoxic abnormalities (≥6) should be referred to higher levels of health care. These results were significant and were also found in adjusted analyses, despite our small sample size. The present analysis also shows significant increases of some neurological parameters in miners, such as loss of understanding of TV/radio, problems with usual activities, fatigue, oppression in the chest, painful tingling, loss of strength in the arms and legs, loss of sensitivity in the arms and legs, and having more than 6 abnormalities, which guides us to advance with more specific neurological studies.
In Latin America, ASGM activities play an important role in affecting the quality of life of many communities. Dysfunctional households end up being a burden for women in these settings [8]. In San Martin de Loba, ASGM is the main economic activity, starting at an early age; it may cause school dropouts and increase addiction in the young population. Diet is also a relevant factor, depending on the consumption of fish from the Magdalena basin adjacent to this town [4]. This body of water receives mercury contamination from ASGM activities, and the nearby communities are affected through the consumption of methylmercury-contaminated fish. An earlier study in San Martin de Loba showed that 90% of the participants eat fish approximately three to five times a week [24], potentially increasing the amount of Hg ingested at these mining sites. Based on the results of this study, further research is needed to evaluate how ASGM impacts lung function by measuring airborne particles such as PM2.5 and PM10, with a larger sample size and more detailed surveys to assess multiple stressors on respiratory health. Additionally, studies with larger sample sizes are needed to evaluate the health impact on miners as well as on their communities.
Conclusions
ASGM is the primary source of employment or income for mining communities in Colombia and other countries, although it has raised concerns due to the effect of mercury pollution on the environment and on the health of miners and their families, including children and pregnant women. The exposure of miners to vapor and dust in ASGM activities has impacts on public health and on the environment. This study showed evidence of the association between mercury concentrations in blood samples from miners and neurotoxicity outcomes. In addition, a significant association between spirometry results and quality of life in miners was observed, which could be caused by the exposure to dust produced by limestone mining. This is the first report in Latin America relating the pulmonary function of miners to work at ASGM sites. Spirometry patterns, as a pulmonary function test, may be used as a bioindicator of health status in an ASGM workplace and can be an important tool for health surveillance programs. Governments, via their environmental and health agencies, must pay close attention to the mining population, because ASGM is characterized by health risks and by a lack of safety equipment and training for miners.
|
2022-11-27T16:22:48.551Z
|
2022-11-25T00:00:00.000
|
{
"year": 2022,
"sha1": "4b9a76bbd0dcd1d8706936cc3189f6d3d0c7c685",
"oa_license": "CCBY",
"oa_url": "https://www.mdpi.com/2305-6304/10/12/723/pdf?version=1669365214",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "df443fa9805d6e5d86298ec69135de920f543638",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
14793768
|
pes2o/s2orc
|
v3-fos-license
|
The discovery of the fourth family at the LHC: what if?
The first evidence of new strong interactions may be a sufficiently massive fourth family observed at the LHC. The fourth family masses, of the leptons in particular, are constrained by the electroweak precision data, and this leads to signatures at the LHC that may imply early discovery. We study the implications of this discovery from a bottom-up perspective, where effective 4-fermion operators model the dominant effects of the new dynamics. We identify simple approximate symmetries of these operators that may be required for realistic masses of the third and fourth families. The large top mass for instance is related to the structure of these operators.
The fourth family and strong interactions
The Higgs scalar of the standard model unitarizes the scattering of massive gauge bosons, thus saving the theory from breaking down above about 1.8 TeV [1]. But the Higgs by itself is not a complete solution. Additional physics must be introduced to cancel quadratically diverging contributions to the Higgs mass m_H, in particular from a top quark loop. The mass scale of this additional physics must be less than about 3.5 m_H to avoid fine tuning [2]. For a light Higgs this necessitates new physics well below 1.8 TeV. Among the reasons why this type of picture is very popular are the following two. The first is that the Higgs sector and the required additional physics can all be weakly interacting, thus allowing the perturbative regime to extend to energy scales far above 1.8 TeV. The second is that the new physics that is required should be very accessible, most notably at the LHC.
On the other hand nature may have chosen a less contrived method of ensuring unitarity, one where the scale of the would-be breakdown of unitarity is the scale of new physics. This new physics would not only be responsible for electroweak symmetry breaking, but it could also be quite closely associated with the physics of flavor and fermion mass. This is in contrast to the light Higgs picture where the origin of the observed pattern of fermion masses, as encoded in a set of Yukawa couplings, is pushed to extremely high and inaccessible energies.
The theory of the Goldstone bosons of electroweak symmetry breaking may be the weak coupling description dual to a strongly coupled theory involving different degrees of freedom. In this case a simple Goldstone description only holds up to an ultraviolet cutoff, beyond which it makes more sense to use the dual description. Given that such a duality is already known to exist, relating as it does the chiral Lagrangian and quark-gluon descriptions of QCD, and given the prevalence of the duality concept in modern theoretical developments, it is curious that another manifestation of weak-strong duality is not widely anticipated to show up at the LHC.
Of course the QCD analogy for electroweak symmetry breaking has been quite well explored, as reviewed for example in [3]. In technicolor theories one expects a ρ-like resonance to be associated with the unitarization of the Goldstone boson scattering amplitudes. A naive scaling up in mass of the QCD ρ puts the new ρ-like state at about 2 TeV. Besides other problems with this classic technicolor picture, this broad resonance is not something that will be quickly and easily probed at the LHC. A more accessible variant is low-scale technicolor [3,4], where a new ρ-like state becomes both lighter and narrower. But this involves increasing the number of technifermions, and thus leads to a tension with the electroweak correction parameter S that typically increases with the number of new fermions [5]. For a small S to emerge the theory would have to be distinctly non-QCD-like, in the sense that a constituent-quark-like approximation would have to be very poor.
We shall relax a different assumption in the original QCD analogy, namely that the new fermions involved with dynamical symmetry breaking are confined. The new fermions certainly have to feel a sufficiently attractive interaction in some channel to cause chiral symmetry breaking, but confinement is not necessary. If gauge interactions are responsible then they may be broken gauge symmetries, broken through the same dynamical fermion masses that break electroweak symmetries and/or by some other agency. This makes possible a very economical picture as far as the new fermionic degrees of freedom are concerned; a sequential fourth family with standard model quantum numbers is all that is needed.¹ The idea that a fourth family is related to electroweak symmetry breaking has some history [7,8].²,³ It may seem that a further replication of the family structure, already triplicated in nature, would be the most unimaginative type of new physics that could be postulated. But a fourth family has quite profound implications if the new quarks have mass above about 550 GeV. In this case the Goldstone bosons are strongly coupled to the heavy quarks, as the classic analysis [11] of partial wave unitarity shows. This precludes the perturbative description of the Goldstone modes at this energy scale and above, as would have been implied by a light Higgs. In fact the heavy quark masses would serve as the order parameters for electroweak symmetry breaking, and the new strong interactions would be expected to produce these condensates dynamically. Thus the discovery of a heavy fourth family would eliminate the raison d'être of both the light Higgs and the associated physics needed to protect the Higgs mass. The discovery of a fourth family could potentially come quite early. The fourth family quarks and leptons are free to have mass mixing (CKM mixing) with the lighter fermions, and thus tree-level charged-current decays. We will discuss some processes of this type that should be quite accessible at the LHC. The only source of missing energy in these events is due to light neutrinos originating from weak interactions; this is a feature of known physics, but it is not a feature of many popular scenarios for physics beyond the standard model.

¹ This does not preclude the possibility that there are also other new fermions on which a new unbroken gauge symmetry continues to act. If such fermions are confined but are light or massless then their contributions to S may be minimized [6]. We will ignore this possibility here.
² A fourth family has also been considered for other reasons [9].
³ For a more general review of new types of fermions see [10].
There are constraints on a fourth family. From the strong constraint on the number of light neutrinos we know that the fourth family neutrino is heavy. The S parameter is sensitive to a fourth family, but the experimental limits on S have been evolving over the years in such a way that the constraint on a fourth family has lessened. In addition the masses of the fourth family leptons may be such as to produce negative S and T. As we discuss in the next section, the constraints from S and T do not prohibit the fourth family, but instead serve only to constrain the mass spectrum of the fourth family quarks and leptons [12,13]. The implied masses for the fourth family leptons should make them particularly accessible at the LHC, with neutrino pair production providing the most interesting signatures.
We have mentioned that the dynamical symmetry breaking of electroweak symmetries should also be quite closely associated with the physics of flavor and fermion mass. This linkage quite generally introduces some challenging issues, with the prime example being the generation of the top quark mass in a manner consistent with electroweak precision data. After the next section we shall explore such issues in the context of a heavy fourth family. Although we will not follow a top-down approach here, a sequential fourth family is theoretically attractive because it makes it possible that a theory of flavor is related to the breakdown of a simple family gauge symmetry. In contrast new fermions not having standard model quantum numbers would be more surprising and difficult to understand.
Constraints and Signatures
Constraints on the masses of the fourth family fermions t′, b′, τ′ and ν′_Lτ are obtained from their contributions to the electroweak correction parameters S and T. As discussed in the following sections the dynamical mass of all these fermions can arise in a similar way, including the Majorana mass for the fourth left-handed neutrino. The one-loop contributions may be approximated by the expressions given in [12], where f = 246 GeV. These expressions assume that the masses are sufficiently above the Z mass; note also that g(m1, m2) → (4/3)(m1 − m2)² for m1 ≈ m2. The presence of an ultraviolet cutoff Λ_ν′ reflects the dynamical nature of the ν′_τ mass; namely that the mass function will fall to zero in the ultraviolet. We see that the lepton sector can make negative contributions to both S and T. The Majorana nature of ν′_τ is responsible [12] for the negative term in T and the reduction of S by 1/12π. The origin of the mass-dependent term in S is described in [14]. For the values of masses that are of most interest it turns out the electroweak correction parameter U is quite small, and we will ignore it henceforth. The use of these one-loop results assumes that the effects of the strong interactions are largely accounted for by using the dynamically generated masses in the loops, while ignoring momentum dependence of the masses themselves. This approximation should be more appropriate in our case of a broken gauge theory dynamics than it is for technicolor or QCD.
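Though the explicit expressions are not reproduced above, the qualitative behaviour of T is easy to check numerically. Below is a minimal Python sketch assuming the standard one-loop doublet contribution ΔT = N_c g(m1, m2)/(16π s_W² c_W² m_Z²), with g(m1, m2) normalized so that it reproduces the degenerate limit quoted above; the prefactor is the textbook Peskin-Takeuchi result, not necessarily the exact normalization used in [12].

```python
import math
from scipy.optimize import brentq

# Electroweak inputs (PDG-like values, assumed here for illustration)
MZ  = 91.1876           # Z mass in GeV
SW2 = 0.2312            # sin^2(theta_W)
CW2 = 1.0 - SW2

def g(m1, m2):
    """Mass-splitting function, normalized so that
    g -> (4/3)(m1 - m2)^2 in the degenerate limit quoted above."""
    if abs(m1 - m2) < 1e-9 * m1:
        return (4.0 / 3.0) * (m1 - m2) ** 2
    x, y = m1 ** 2, m2 ** 2
    return x + y - 4.0 * x * y * math.log(m1 / m2) / (x - y)

def delta_T(m1, m2, nc=3):
    """One-loop T from a single doublet with Dirac masses m1, m2 (GeV),
    using the textbook prefactor N_c/(16 pi s_W^2 c_W^2 m_Z^2)."""
    return nc * g(m1, m2) / (16.0 * math.pi * SW2 * CW2 * MZ ** 2)

# Splitting of a heavy quark doublet that yields one unit of T:
dm = brentq(lambda d: delta_T(600.0 + d, 600.0) - 1.0, 1.0, 400.0)
print(f"Delta T = 1 for a quark splitting of about {dm:.0f} GeV")  # ~135 GeV
```

Running this reproduces the statement below that a unit of T from the quarks corresponds to roughly a 130 GeV mass splitting.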
Since T from the leptons can be negative, there can be some degree of cancellation between this and the positive contribution from the quarks. If we remove the light Higgs from the standard model (or set its mass to 1 TeV) then current data requires a new physics contribution to T in the range 0.25 ≲ ∆T ≲ 0.55 at 68% CL. (This is based on the plot at [15].) The edge of the allowed region in the m_τ′-m_ν′ plane in Fig. (1) corresponds to lepton masses that provide the maximum contribution ∆T = 0.55 along with a vanishing contribution from degenerate quarks. Within the allowed region, the leptons can provide progressively smaller and eventually negative contributions which can cancel against the progressively more positive quark contribution. Going too far into the allowed region implies more of a tuning in this cancellation, since the quark contribution to T increases by one from one contour to the next.
For S the constraints are such that new physics (again with the light Higgs removed) can contribute −0.2 ≲ ∆S ≲ 0.11 at 68% CL. We show the lines corresponding to the 1σ and 2σ upper bounds on S along with the S = 0 line on the plots. Thus S also limits how far one can go into the allowed region. But acceptable ranges of masses remain, and this is even before allowing for the uncertainties in the theoretical estimates due to strong interactions. All these considerations show that a fourth family is quite compatible with present precision data.
Taking Fig. (1) seriously would suggest that m_τ′/3 ≲ m_ν′ ≲ m_τ′/2. We might also expect m_τ′ ≲ m_q′ due to the lack of a QCD contribution to the dynamics in the lepton sector, a contribution that would tend to enhance the masses of quarks [16]. It then appears plausible that m_ν′ could be in the 150-300 GeV range, with m_τ′ in the 400-600 GeV range. m_b′ and m_t′ may be in the 550-800 GeV range, with a mass splitting probably not much larger than 100 GeV. Much larger mass splitting would require more tuning in the canceling contributions to T. But note that the total new physics contribution can be as large as half a unit of T, while a unit of T from the quarks corresponds to about a 130 GeV quark mass splitting, so even that much splitting would not constitute a fine tuning. In section 4 we shall argue that m_b′ > m_t′.

Figure 1: From the total fourth family contribution to T we shade in yellow the allowed region for the τ′ and ν′_τ masses (in TeV). The successive higher contours correspond to increasing the quark contribution to T by 1. The three straight red lines from bottom to top indicate when the total contribution to S is 0, 0.11, 0.22, where the latter two values are 1σ and 2σ away from the central measured value. The left and right figures have Λ_ν′ = 1.5 m_ν′ and 2 m_ν′ respectively.
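As a rough numerical cross-check of the numbers quoted above, the following is a minimal sketch using the standard one-loop Peskin-Takeuchi expressions for a heavy Dirac doublet; it is not the full set of expressions from [12] — in particular it omits the Majorana-neutrino terms and the Λ_ν′ cutoff dependence — and the function g here is the standard splitting function whose degenerate limit g(m_1, m_2) → (4/3)(m_1 − m_2)^2 is quoted in the text.

```python
import math

SW2 = 0.231   # sin^2(theta_W), an assumed input value
MZ = 91.19    # Z mass in GeV

def g_split(m1, m2):
    """g = m1^2 + m2^2 - 2 m1^2 m2^2 ln(m1^2/m2^2) / (m1^2 - m2^2);
    reduces to (4/3)(m1 - m2)^2 for m1 ~ m2, as noted in the text."""
    if abs(m1 - m2) < 1e-9 * max(m1, m2):
        return 0.0
    x, y = m1 * m1, m2 * m2
    return x + y - 2.0 * x * y * math.log(x / y) / (x - y)

def delta_T_doublet(m1, m2, Nc):
    """One-loop T from one heavy Dirac doublet (masses well above mZ)."""
    return Nc * g_split(m1, m2) / (16.0 * math.pi * SW2 * (1.0 - SW2) * MZ**2)

def delta_S_degenerate_family():
    """S from a full degenerate Dirac fourth family: 2/(3 pi) ~ 0.21,
    before the Majorana reduction of 1/(12 pi) noted in the text."""
    return (3 + 1) / (6.0 * math.pi)

# A quark splitting equal to the top mass gives Delta T ~ 1.6, consistent
# with the ~1.7 quoted above (the small difference reflects input choices):
print(delta_T_doublet(773.0, 600.0, Nc=3))   # ~1.6
# One unit of T corresponds to roughly a 130 GeV quark mass splitting:
print(delta_T_doublet(733.0, 600.0, Nc=3))   # ~0.95
print(delta_S_degenerate_family())           # ~0.21
```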
The first signal of a fourth family may involve the fourth family leptons. ν′_τ ν′_τ production is more interesting than τ′τ′ production, both because ν′_τ is expected to be lighter and because its decay modes are more interesting. The decay ν′_τ → ℓW with ℓ = (τ or µ or e) leads to several distinct ℓℓWW final states from ν′_τ ν′_τ production. These processes should be quite accessible at the LHC, although serious studies seem to be lacking. The first of these processes can have same-sign leptons due to the Majorana nature of the ν′_τ. (This and other properties of Majorana neutrino pair production are discussed in [17].) The last process may be similar to the production and decay of a pair of neutralinos, but the presence of the other two processes should make the distinction between neutrinos and neutralinos clear.
The expected heavy quark decays are t′ → bW, which would look like a heavy t decay, and b′ → tW. But if the associated CKM mixing is small then b′ → t′W^(*) can dominate instead. Even if the W has to be virtual due to a small mass difference (thus implying phase space suppression), the b′ → t′W^(*) process could still be significant when the mixing is small enough. Thus a process of interest is b′b̄′ → t′t̄′WW → bb̄WWWW, in which the b jets can be particularly hard and isolated, and appropriate cuts can help to reduce the background from tt̄ production. This has been used in a study of the pp → t′t̄′ → bb̄WW process at the LHC [18]. The b′b̄′ process has two extra W's, aiding further the discrimination from background. One of the resulting signals involves two same-sign leptons and missing energy along with the jets.
Flavor Physics
Starting with a massless gauge theory of fermions, we suppose that mass and flavor emerges through the breakdown of some of the gauge symmetries. At scales 100 to 1000 TeV some interactions are most likely both strong and chiral, and we assume that they lead in some economical manner to their self-breaking at these scales. The effects of this flavor physics dynamics on lower scales will be carried by a set of effective operators. We expect that all possible operators allowed by the unbroken symmetries are generated, even those that can only be generated nonperturbatively. These manifestations of nonperturbative physics will be important in the following. The only masses allowed by the unbroken SU(2) L ×U(1) symmetry are right-handed neutrino masses; all other fermions are protected from receiving a flavor scale mass and at lower scales will only be affected by the flavor physics through multi-fermion and other nonrenormalizable operators. The mass of the top quark will certainly be well within an order of magnitude of the t ′ , b ′ masses, and this suggests that the physics origin of these three masses should be somehow related. We will take this as a strong hint to consider the possibility that the third family also experiences 4-fermion interactions of the same form and similar magnitude as the interactions involving the fourth family. This leads to the picture where the original flavor gauge symmetry breaks in such a way that the first two families are singlets under an unbroken remnant. This remnant gauge symmetry acts on the third and fourth families and may only break closer to the TeV scale. It will contribute to the anomalous scaling of the various operators, and it may ensure that certain operators remain significant at the TeV scale, even though they are generated at the flavor scale. In particular we assume that the theory exhibits near conformal scaling for some range of scales above a TeV, in which case ψψ has an effective scaling dimension close to 2 [19]. This makes natural the possibility that some 4-fermion operators, at least those that are composed of two such scalars, are close to being relevant operators (close to scaling dimension 4). The role of enhanced operators of this form in theories of flavor has been noted before [7,20,21,22]. In the following we shall focus on operators of the scalar-scalar form and composed of third and fourth family fermions.
We notice how the same fermions, four standard model families, remain the fundamental degrees of freedom throughout the range of energy scales, even though they experience strong interactions at various scales. The light fermions only feel the strong interactions at the flavor scale, while the heavy families also feel strong interactions down to the TeV scale. These latter interactions become strong enough for the fourth family masses to form at the TeV scale. And even then, since the fermions do not become confined, it is still useful to describe the physics of interest at the LHC in terms of the massive fermion degrees of freedom. We note that a massive constituent quark description works quite well in QCD, even though the quarks in that case are confined. The massive quark picture should be even more appropriate in our case.
We are thus led to a phenomenological description of the dynamics responsible for a condensate ⟨q̄′q′⟩ of the fourth family quarks q′ ≡ (t′, b′). The Nambu-Jona-Lasinio (NJL) model provides a minimal framework, where this dynamics is described by a 4-fermion interaction of the LRRL form in (5), (g^2/Λ^2) q̄′_L q′_R q̄′_R q′_L, where Λ represents a cutoff above which a softening of this interaction should occur in a more realistic description. For g above some critical value g_c a condensation occurs. Without invoking a fine-tuning of g close to g_c, the resulting dynamical mass m_q′ should not be too far below Λ. To get a sense of the fine-tuning needed for m_q′ ≪ Λ, we note that a light composite scalar emerges with mass ≈ 2m_q′ in this case [23]. Then contributions of order Λ^2 to the scalar mass must be fine-tuned away, and thus the degree of fine-tuning is ≈ 4m_q′^2/Λ^2. We believe that fine tuning does not naturally occur and that m_q′ is not much below Λ/2.
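To make the fine-tuning measure concrete, here is a trivial numeric sketch of the estimate ≈ 4m_q′^2/Λ^2 quoted above; the sample values are illustrative only.

```python
def njl_tuning(m_q, cutoff):
    """Degree of fine-tuning for a dynamical mass m_q << cutoff, estimated
    from the composite-scalar mass ~ 2 m_q against O(cutoff^2) corrections."""
    return 4.0 * m_q**2 / cutoff**2

print(njl_tuning(750.0, 1500.0))  # 1.0: no tuning needed for m_q ~ cutoff/2
print(njl_tuning(750.0, 7500.0))  # 0.04: few-percent tuning for m_q << cutoff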
There is a relation between m_q′ and Λ and the electroweak symmetry breaking scale v = 246 GeV, which is given in a one-loop approximation by the Pagels-Stokar formula, v^2 ≈ (N_c/8π^2) Σ_{q′=t′,b′} m_q′^2 ln(Λ^2/m_q′^2). For example m_q′ ≈ 750 GeV with Λ roughly twice that would imply a suitable v from this formula. But ambiguities in matching the phenomenological NJL model to the underlying theory imply that m_q′ as low as 500 GeV may also be acceptable. This is in line with the unitarity analysis [11].
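The quoted numbers can be checked directly. The sketch below assumes the simplest one-loop Pagels-Stokar form for a roughly degenerate heavy doublet, as written above; the precise form used in the underlying analysis may differ by subleading terms, but this version reproduces the stated example.

```python
import math

def pagels_stokar_v(masses, cutoff, Nc=3):
    """Electroweak scale v from condensing heavy quarks, in the simplest
    one-loop Pagels-Stokar approximation (an assumed form; see lead-in)."""
    v2 = Nc / (8.0 * math.pi**2) * sum(
        m * m * math.log(cutoff**2 / (m * m)) for m in masses)
    return math.sqrt(v2)

# m_q' ~ 750 GeV for both t' and b', with a cutoff about twice that:
print(pagels_stokar_v([750.0, 750.0], 1500.0))   # ~243 GeV, a suitable v
```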
In the next two sections we wish to explore the naturalness of finding the third and fourth family masses emerging in this type of picture. Our task will be to understand not only the origin of top mass, but also the smaller masses of the other members of the third family. Rather than trying to specify more precisely what the flavor interactions are, as in [22], we will continue with a bottom-up approach, and try to find a minimal set of constraints on the 4-fermion operators that could allow for realistic masses. Constraints that can be expressed in terms of approximate symmetries have some chance of being realizable by some underlying flavor dynamics.
Approximate Symmetries
There are two anomaly-free U(1) family symmetries of the third and fourth families that we could consider. The generators have charges (+, +, −, −) and (+, −, −, +) for the fields (ψ′_L, ψ′_R, ψ_L, ψ_R), where ψ and ψ′ denote members of the third and fourth families respectively. They are chosen so that they are vector-like and axial-like respectively with respect to the fermion mass eigenstates, and either or both may correspond to gauge symmetries of the high scale flavor physics. Both symmetries must be broken. Of the two, the axial one is of more interest for constraining the operators that are relevant for producing masses; we will label it by Q. We no longer consider the possible vector-like symmetry. Notice that Q is broken at the very least by the ⟨q̄′q′⟩ condensate, and if there is no other much larger contribution to its breaking then it will be a useful approximate symmetry to constrain operators. In particular it will help us to understand the b to q′ mass ratio.
We can also consider another axial charge, (+, −, +, −), labeled by Q̃. This is not anomaly-free and so could not be gauged, and we take it to be a more badly broken symmetry than Q. The operators that respect Q̃ include those that can be generated by gauge boson exchange diagrams, while those that violate Q̃ are purely nonperturbative. Since the two classes of operators are generated by distinctly different physics it is not unnatural to assume that Q̃-violating operators are somewhat suppressed relative to the Q̃-invariant ones. This suppression will give rise to the t to q′ mass ratio.
The quark operators we consider are the products of color-singlet, Lorentz scalars of the type q̄′_L q′_R, q̄_L q_R, q̄′_L q_R and q̄_L q′_R (the top row), together with their conjugates (the second row), where the products are constructed to preserve SU(2)_L×U(1).
These scalars are either Q-charged or Q-neutral, but we only consider products that are Q-invariant.
The product of a scalar in the top row with a scalar in the second row produces a Q̃-invariant 4-fermion operator, with the LRRL structure as in (5). The product of two scalars within a row produces a Q̃-violating operator with the LRLR structure, for example q̄_L q_R q̄′_L q′_R, which is SU(2)_L×SU(2)_R invariant. But as we shall see, SU(2)_R violation must manifest itself in the Q̃-violating operators, reflecting the SU(2)_R breaking that must originate in the associated nonperturbative dynamics. Depending on the signs and strengths of all these interactions we assume that condensates form. It is then a question of vacuum alignment as to whether the Q-charged or Q-neutral condensates form. We have already assumed the former; more precisely we have assumed that some approximate symmetry exists, labelled by Q, which is axial with respect to the mass eigenstate basis.
The dynamics that produces Q-charged condensates is represented by the 4-fermion operators that involve the Q-charged scalars. There are only two such operators that are both Q-invariant and Q̃-invariant, namely the pair in (9): q̄′_L q′_R q̄′_R q′_L and q̄_L q_R q̄_R q_L. It is important to note that q̄′_L q′_R q̄_R q_L is not Q-invariant. Although these two operators may have similar (running) coefficients we assume (in the absence of a symmetry) that they are not identical. Then we can assume that the first operator develops an effective coupling above the critical value, while the second operator does not. Alternatively or in addition there may be an effective coupling between the two channels that discourages both condensates from forming simultaneously. This type of coupling between channels could be represented by the multi-quark operator q̄′_L q′_R q̄′_R q′_L q̄_L q_R q̄_R q_L with the appropriate sign.
Thus if these Q̃-invariant operators respect SU(2)_R, and if we continue to ignore the Q̃-violating operators, we can have the result ⟨q̄′_L q′_R⟩ ≠ 0 while ⟨q̄_L q_R⟩ = 0. Then to obtain a t mass from a Q-invariant operator we must turn to Q̃-violating operators. The operator of interest, (12), is of the form q̄_L t_R q̄′_L b′_R. This type of operator must involve both families to be Q-invariant, and we see that it feeds mass from b′ to t. Thus we see that the t to b′ mass ratio is a measure of the amount of Q̃ violation.
There is a corresponding operator that feeds mass from t′ to b, and thus that operator must be significantly smaller. The dominance of the operator in (12) indicates that there must be a close to maximal breakdown of SU(2)_R in the Q̃-violating sector of the underlying dynamics (footnote 7). The b mass could also be produced by a Q̃-invariant but Q-violating operator of the form q̄_L b_R b̄′_R q′_L. Thus the b to b′ mass ratio puts an upper bound on the amount of Q violation in the quark sector.
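The Q and Q̃ bookkeeping above is mechanical enough to be checked by a few lines of code. The sketch below encodes the charge assignments quoted earlier — Q = (+, −, −, +) and Q̃ = (+, −, +, −) for (ψ′_L, ψ′_R, ψ_L, ψ_R) — and classifies a few representative operators; the operator labels are ours, and the specific forms shown for (9) and (12) are the reconstructions used in the text.

```python
# Field charges follow the text; a bilinear (bar a) b carries charge
# -q(a) + q(b), and an operator is a product of bilinears.
Q      = {"pL": +1, "pR": -1, "L": -1, "R": +1}   # axial-like, labeled Q
QTILDE = {"pL": +1, "pR": -1, "L": +1, "R": -1}   # more badly broken Qtilde

def charge(op, q):
    """op = list of (a, b) pairs, each representing the bilinear (bar a) b."""
    return sum(-q[a] + q[b] for a, b in op)

ops = {
    "q'_L q'_R q'_R q'_L (9a)":        [("pL", "pR"), ("pR", "pL")],
    "q_L q_R q_R q_L (9b)":            [("L", "R"), ("R", "L")],
    "q'_L q'_R q_R q_L (forbidden)":   [("pL", "pR"), ("R", "L")],
    "q_L t_R q'_L b'_R (12 type)":     [("L", "R"), ("pL", "pR")],
}
for name, op in ops.items():
    # (9a) and (9b) come out invariant under both charges; the "forbidden"
    # operator violates Q; the (12)-type operator is Q-invariant but
    # Qtilde-violating, as stated in the text.
    print(f"{name}: Q = {charge(op, Q):+d}, Qtilde = {charge(op, QTILDE):+d}")
```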
Obtaining a large enough t mass has often proven to be difficult in models of dynamical symmetry breaking. This is because the operator responsible for the t mass has typically been taken to be generated by a simple gauge boson exchange. In our context this would correspond to the Q̃-invariant operator q̄_L t_R t̄′_R q′_L. The trouble is that if this operator was generated by the exchange of a relatively light gauge boson (it cannot be in our context, because it is both Q and SU(2)_R violating) then the operators in (13) could also be generated through closely related gauge boson exchanges. The first of these operators would give rise to a mass splitting in the (t′, b′) doublet of the same order as the top mass itself. A splitting equal to the top mass produces a shift ∆T ≈ 1.7, which is significantly larger than what is currently allowed. For a more detailed analysis of this problem in the technicolor context see [24]. The second operator implies a correction to the Zbb vertex that is similarly too large [25]. These basic problems have motivated many different types of model building efforts such as non-commuting extended technicolor, multiscale technicolor, topcolor, topcolor-assisted technicolor and topcolor seesaw models (for a review and references see [3]) (footnote 8). These models generally involve complicating the gauge structure and/or adding new gauge dynamics coupling to the t quark.

Footnote 7: A toy scalar potential was considered in the appendix of the second reference in [22] that illustrates such a maximal breakdown of SU(2)_R.
Footnote 8: The same problems also require special attention in the Higgsless models of higher dimensions [26].
Here we are pointing out that it is not strictly necessary to invoke such complications, given the possibility that the operator (12) gives the dominant contribution to the t mass.
The point is that the side-effects of the Q̃-violating operator (12) are not so severe [21]. It can give rise to effects similar to those in (13) (which are Q̃-invariant) only by inserting it twice in a loop. Thus an operator similar to the first operator of (13) is generated with a suppression of m_t^2/m_q′^2 along with the extra loop suppression. This effect breaks the SU(2)_R invariance of the operators q̄′_L q′_R q̄′_R q′_L that are responsible for the t′ and b′ masses, giving rise to a mass splitting with m_b′ > m_t′. The contribution to T, proportional to (m_b′ − m_t′)^2, is then suppressed at least by m_t^4/m_q′^4 in comparison to the quadratic suppression in models with only gauge-exchange operators (footnote 9). An operator that can affect the Zbb vertex, like the second operator in (13), can only be generated by a loop with two insertions of operator (12), and is correspondingly suppressed (footnote 10). (The second operator in (13) itself is not generated.) In conclusion we see how the corrections to T and the Zbb vertex are more shielded from top mass generation because of the Q̃-violating nature of the top mass operator.

Footnote 9: For an operator to contribute directly to T would require four insertions of operator (12) and three loops.
Footnote 10: The SU(2)_R invariant operators in (9), and those closely related to them such as q̄_L q′_R q̄′_R q_L, neither contribute to T nor correct the Zbb vertex.
Leptons
We first turn to the charged lepton sector. For τ and τ′ we can suppose similar 4-fermion dynamics as in the quark sector, with the same approximate Q symmetry constraining the dynamics. Thus we can again suppose that the Q and Q̃ invariant operators (the analogs of (9) with τ and τ′ replacing q and q′) generate ⟨τ̄′τ′⟩ ≠ 0 while ⟨τ̄τ⟩ = 0. The τ mass can arise similarly to the b mass, and in particular the following SU(2)_R and Q̃ violating, but Q-invariant operators of (14) can feed mass from t′ to b and τ: q̄_L b_R q̄′_L t′_R and ℓ̄_L τ_R q̄′_L t′_R. Here we see our first instance of an operator with both quarks and leptons. (In the Appendix we consider a different choice of the approximate symmetries that results in a different structure for the mixed operators.)

Neutrinos are more special. We are supposing that all fermions, including the right-handed neutrinos, participate in the strong flavor interactions at the flavor scale. If SU(2)_L×U(1) is the only exact chiral symmetry remaining below the flavor scale, then there is nothing to protect the right-handed neutrinos from receiving mass from the strong interactions. In fact right-handed neutrino condensates serve as excellent order parameters not only for the breakdown of flavor symmetries, but also for the breakdown of enlargements of the electroweak symmetry such as those involving SU(2)_R×U(1)_{B−L} and/or Pati-Salam-like gauge interactions. With their masses at the flavor scale the right-handed neutrinos are absent in the theory below the flavor scale, and this in turn is important for understanding why the small left-handed neutrino masses are so dramatically different from other fermion masses.
But first we consider ν′_Lτ, where we see that its mass (again Q-violating) can arise in a similar way to other fourth family members. Again there are only two Q and Q̃ invariant operators of interest, of the schematic form ℓ′_L ℓ′_L (ℓ′_L ℓ′_L)† and ℓ_L ℓ_L (ℓ_L ℓ_L)†. (Operators such as ℓ′_L ℓ′_L ℓ_L ℓ_L can be Q and SU(2)_L×U(1) invariant, but they don't involve four neutrinos.) Thus by the same reasoning as before we can assume that ⟨ν′_Lτ ν′_Lτ⟩ ≠ 0 while ⟨ν_Lτ ν_Lτ⟩ = 0. We are then left with the three light neutrinos (ν_Lτ, ν_Lµ, ν_Le). Now the question is whether ν_Lτ can receive a mass in a manner similar to other third family fermions. The answer is no, since in this case there are no Q-invariant operators that can feed down mass from the fourth family. The Q-violating operator ℓ′_L ℓ′_L (ℓ_L ℓ_L)† can yield a ν_Lτ mass, and thus the relatively tiny value of this mass implies that the Q symmetry must be very well preserved by the effective operators in the left-handed lepton sector.
There are also operators that arise by integrating out the right-handed neutrinos at the flavor scale. The resulting lepton number violating operators necessarily involve six fermions, and they can generate Majorana masses for ν Lµ and ν Le as well as ν Lτ . These 6-fermion operators are naively suppressed by three more powers of the flavor scale compared to 4-fermion operators, thus providing a natural mechanism for the suppression of neutrino masses. This could be thought of as a type of see-saw mechanism, but the right-handed neutrino mass in the see-saw is now set by the flavor scale, of order 1000 TeV. Once again we see how the absence of a Higgs brings down a mass scale of interest.
There are many different 6-fermion operators that can contribute. If they are to feed down mass from the heaviest fermions then they can be constructed by taking Lorentz invariant products of any pair of the available 3-fermion operators, all of which transform as SU(2)_L×U(1) invariant Lorentz spinors.
We see that each element of the 3×3 Majorana neutrino mass matrix has many possible contributions from the various combinations. The relative size of these contributions depends on the detailed structure of the flavor interactions and their breakdown. By dimensional analysis the resulting neutrino masses are probably no less than (600 GeV)^6/(1000 TeV)^5 ≈ 5 × 10^-5 eV. This is likely an underestimate since it ignores possible anomalous scaling enhancement of the 6-fermion operators.
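The dimensional-analysis numbers here and in the see-saw comparison below are easy to verify; a quick sketch (values in GeV, converted to eV):

```python
# Check of (600 GeV)^6 / (1000 TeV)^5 and of the see-saw-like estimate
# m^2 / M with m^2 ~ m_e * m_mu, as discussed in the text.
m_heavy = 600.0          # GeV, heaviest-fermion scale in the 3-fermion ops
flavor_scale = 1.0e6     # GeV, ~1000 TeV

print(m_heavy**6 / flavor_scale**5 * 1e9)    # ~5e-5 eV, as stated

m_e, m_mu = 0.511e-3, 0.1057                 # GeV
print(m_e * m_mu / flavor_scale * 1e9)       # ~5e-2 eV, a reasonable mass
```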
One is also tempted to use the see-saw estimate of the form m^2/M, where m is some Dirac mass and M is the right-handed neutrino mass, but this assumes that the anomalous scaling contained in the value of m^2 is the same as that of the 6-fermion operator. This is certainly incorrect for the case of ν_Lτ but it may be more appropriate for ν_Lµ and ν_Le. Reasonable masses seem entirely possible (for example if m^2 ≈ m_e m_µ). In addition we see that the structure of the 3×3 neutrino mass matrix is quite unrelated to the quark and charged lepton mass matrices, and can have significant off-diagonal terms and thus large mixings [22].
Further remarks
We return to the question of the CKM mixing in the quark sector, responsible for the decays t′ → bW and b′ → tW. The off-diagonal tt′ or bb′ mass elements would require Q violation, thus making this CKM mixing naturally small. Alternatively these off-diagonal elements could arise as described in the Appendix. As another possibility, [27] shows that kinetic-term mixing effects may be a source of CKM mixing along with CP violating phases. Flavor physics could also generate flavor-changing neutral current decay modes of the heavy quarks [28]. But these vertex-type mixing effects are probably smaller than the mass mixing effects, due to less anomalous scaling enhancement of the relevant operators, and thus we expect the charged current decays to dominate. Pair production of the fourth family fermions could exhibit a resonance structure associated with the physics near the cutoff of our effective theory. For example there could be a broken U(1) gauge boson that mixes with the Z and which couples strongly to the fourth (and third) families. Alternatively the strong interactions may imply unconfined bound states of the heavy fermions. And finally if the CKM mixing is small enough then even QCD bound states of the heavy quarks could show up as resonances.
There may also be approximate global symmetries that are broken by the fourth family condensates leading to pseudo-Goldstone bosons, similar to technipions of technicolor theories and coupling to fermions in similar ways. But the masses of such states are so extremely model dependent that we consider them no further. We note though that our practice of assuming the existence of all possible multi-fermion operators generally eliminates the concern over unwanted light or massless pseudo-Goldstone bosons, especially if the original underlying theory has no global symmetries to begin with.
We have been concerned with a Q invariance of operators involving only the third and fourth family fermions. This is only an approximate symmetry of flavor physics in particular because, if the light fermions are Q neutral, it cannot be a symmetry of operators that are needed to feed mass to the light fermions. It may be possible to extend the Q generator to also act on light fermions and thus find an approximate symmetry of a larger set of operators and the full mass matrices. This would lead to the consideration of more complete models, where the full particle content of the theory and the assumed pattern of symmetry breaking are both specified. Such a top-down approach was taken in [22], and there it may be seen that the Q generator and its extension to the light families is a gauge generator of the complete underlying theory. One comment about such a picture is that the hierarchy between the third and fourth family masses may lead in turn to the hierarchy between the first two families. We have chosen in this work to focus in a more model independent fashion on the heavy families, since this is where the more serious issues typically arise.
In summary, a sufficiently massive fourth family points towards an extension of the standard model that treats the Goldstone bosons of electroweak symmetry breaking as the weak coupling dual description of a more fundamental strongly coupled theory. Although we have not specified the fundamental interactions of the fourth (and third) families, we have modeled them phenomenologically via 4-fermion operators. This has enabled us to find some minimal approximate U(1) symmetries of the fundamental interactions that help to explain the range of masses of the third and fourth families. This makes it more likely that such interactions can exist. The fourth family forms part of the fundamental degrees of freedom, and it may constitute all of the new fermionic degrees of freedom. The fourth family quark masses are fixed (up to theoretical uncertainties) by the scale of electroweak symmetry breaking, and then the masses of the fourth family leptons are constrained by the T (and S) parameters. This is analogous to the Higgs picture where the vacuum expectation value v is fixed and there is another parameter, the Higgs mass, that must be adjusted small enough to obtain the correct T . Additional new physics is required to protect the Higgs mass. It is exciting to realize that within a few years we will know which picture of new physics comes closer to describing reality.
Appendix: Alternative choice of approximate symmetries
We have seen how quark masses can affect the lepton mass matrix, and vice versa, but the structure of these mixed operators may be different than described above. To see this we reconsider the possible anomaly free symmetries of the third and fourth families, now generalizing to generators that do not act identically on quarks and leptons.
These generators — Q^q_V, Q^q_A, Q^ℓ_V and Q^ℓ_A, acting vector-like or axial-like on the quarks and leptons separately — cannot all be independent approximate symmetries, since that would suppress any mixed operator, such as the second operator in (14). Thus far we have only needed to assume that Q^q_A + Q^ℓ_A (which we labeled simply as Q) is an approximate symmetry. But an interesting alternative is to assume that the following two are approximate symmetries: Q^q_A + Q^ℓ_V and Q^q_V + Q^ℓ_A. The effect on the pure quark or pure lepton operators of interest to mass formation would be the same as before. But the mixed operator in (14) would not be allowed, and instead there could be operators that give rise to off-diagonal mass elements in the charged lepton mass matrix, which along with the τ′ mass would produce a τ mass in a see-saw manner. Similarly there could be new off-diagonal elements in the quark mass matrix, for example from the operator τ̄′_L τ′_R t̄_L t′_R, thus creating new sources of CKM mixing [22].
Origin of Star-Forming Rings around Massive Centres in Massive Galaxies at $z\!<\!4$
Using analytic modeling and simulations, we address the origin of an abundance of star-forming, clumpy, extended gas rings about massive central bodies in massive galaxies at $z\!<\! 4$. Rings form by high-angular-momentum streams and may survive in galaxies of $M_{\rm star} \!>\! 10^{9.5}\, M_\odot$ where merger-driven spin flips and supernova feedback are ineffective. The rings survive above this threshold mass after events of compaction to central nuggets. Ring longevity seemed to conflict with the expected inward mass transport driven by torques from violent disc instability. However, by evaluating the torques from a tightly wound spiral structure, we find that the timescale for transport per orbital time is $\propto\! \delta_{\rm d}^{-3}$, with $\delta_{\rm d}$ the cold-to-total mass ratio interior to the ring. A long-lived ring forms when the ring transport is slower than its replenishment by accretion and the interior depletion by SFR, both valid for $\delta_{\rm d} \!<\! 0.3$. The central mass that lowers $\delta_{\rm d}$ and maintains the ring is made of compaction-driven bulge and/or dark-matter, aided by the lower gas fraction at lower $z$. The ring could be Toomre unstable for clump and star formation. Mock images of simulated rings through dust indicate qualitative consistency with observed rings about bulges in massive $z\!\sim\!0.5\!-\!3$ galaxies, in $H_{\alpha}$ and in deep HST imaging. ALMA mock images indicate that $z\!\sim\!0.5\!-\!1$ rings should be detectable. We quote expected observable properties and abundances of rings and their central blue or red nuggets for comparison to observations.
INTRODUCTION
High-redshift galaxies are predicted to be fed by cold gas streams from the cosmic web (Birnboim & Dekel 2003; Kereš et al. 2005; Dekel & Birnboim 2006; Kereš et al. 2009; Dekel et al. 2009). According to cosmological simulations, these streams enter the dark-matter (DM) halo with high angular momentum (AM), which they lose in the inner halo as they spiral in to form an extended gas ring (Danovich et al. 2015), as seen in Fig. 1.
This ring, like the inner disc, is at a constant risk of being disrupted by a major merger of galaxies at nodes of the cosmic web, which typically involves a change in the pattern of AM-feeding streams. In Dekel et al. (2020), we showed that in haloes below a critical virial mass of M_v ∼ 10^11 M_⊙, the merger-driven spin flips are indeed disruptive, as they tend to be gas rich and more frequent than the disc/ring orbital frequency. In more massive haloes, the mergers are less frequent, thus possibly allowing the rings/discs to survive for many orbital times. The additional disruptive effects of supernova feedback, which could be strong below a similar critical mass where the potential well is shallow compared to the energy deposited by supernovae in the ISM (Dekel & Silk 1986), are also expected to be weak above this threshold mass, where the gas binding energy is higher. Another process that could in principle disrupt discs even above the mass threshold is the inward mass transport associated with violent disc instability (VDI). When the gas fraction is high and the bulge is not massive, this process has been estimated to be efficient, such that the disc or ring were expected to be evacuated inwards in a few orbital times (Noguchi 1999; Immeli et al. 2004; Bournaud, Elmegreen & Elmegreen 2007; Genzel et al. 2008; Dekel, Sari & Ceverino 2009). In contrast, as we will see below, the simulations (indicated already in Ceverino, Dekel & Bournaud 2010; Genel et al. 2012) and observations (below and in §6.2) show many long-lived rings in massive galaxies, thus posing a theoretical puzzle that is our main concern here.
In parallel to our analytic modeling, we utilize a suite of VELA zoom-in hydro-cosmological simulations, which are described in appendix §A, Table A1 and in earlier papers (e.g. Ceverino et al. 2014; Zolotov et al. 2015). The simulations are based on the Adaptive Refinement Tree (ART) code (Kravtsov, Klypin & Khokhlov 1997; Ceverino & Klypin 2009). The suite consists of 34 galaxies that were evolved to z ∼ 1, with a unique maximum spatial resolution ranging from 17.5 to 35 pc at all times. The dark-matter halo masses at z = 2 range from 10^11 to 10^12 M_⊙, thus avoiding dwarf galaxies at z < 4. The galaxies were selected at z = 1 such that their dark-matter haloes did not suffer a major merger near that epoch, which turned out to eliminate less than ten percent of the haloes.
Besides gravity and hydrodynamics, the code incorporates physical process relevant for galaxy formation such as gas cooling by atomic hydrogen and helium, metal and molecular hydrogen cooling, photoionization heating by the UV background with partial self-shielding, star formation, stellar mass loss, metal enrichment of the ISM and stellar feedback. Supernovae and stellar winds are implemented by local injection of thermal energy, and radiation-pressure stellar feedback is implemented at a moderate level. In general, the feedback as implemented in this suite is on the weak side of the range of feedback strengths in common cosmological simulations, and no AGN feedback is incorporated.
In the analysis of the simulations, the disc plane and dimensions are determined iteratively, as detailed in Mandelker et al. (2014), yielding a disc radius R_d and half-height H_d (listed in Table A1 at z = 2) that contain 85% of the cold (T < 1.5 × 10^4 K) gas mass out to 0.15R_v, where R_v is the halo virial radius. The level of "disciness" is measured by the kinematic ratio of rotation velocity to velocity dispersion, V_rot/σ, or similarly by R_d/H_d. Rings are identified and their properties are quantified as described in §3.3 and in appendix §B1. Mock images of simulated galaxies as observed through dust are generated (based on Snyder et al. 2015) for a preliminary comparison to galaxies observed in deep fields of the HST-CANDELS survey. Corresponding mock ALMA images are also generated and mock Hα properties are computed.
There are robust observational detections in Hα of abundant star-forming rings about massive central bodies at z ∼ 1−2 (Genzel et al. 2014, 2017), which seem to be qualitatively matched by the simulated rings described here. Furthermore, contrary to earlier impressions from HST-CANDELS images, similar star-forming rings about massive bulges are being detected in non-negligible abundances in massive galaxies at z ∼ 0.5 − 3 when properly focusing on the deepest fields (Ji et al., in prep.). Possibly related is the abundance of rings about massive bulges in low-redshift S0 galaxies entering the "Green Valley" (Salim, Fang & Rich 2012), also beautifully seen in IR images of nearby galaxies with massive bulges such as M31 and the Sombrero galaxy. Towards the end of this paper, we attempt very preliminary comparisons between the theoretical and observed high-z rings, and quote certain predicted observable ring properties for more rigorous comparisons with observations, to be performed beyond the scope of this theory paper.
The paper is organized as follows. In a second introductory section, §2, we elaborate on the formation of rings from the cosmic-web streams, the threshold mass for long-lived discs/rings due to merger-driven spin flips, and the expected disc disruption by inward mass transport driven by VDI. In §3 we demonstrate using the simulations the effect of compaction to a blue nugget on the generation of long-lived extended discs and then rings above the threshold mass. In particular, in §3.3, we quantify the ring properties and demonstrate their correlation with the compaction events. In §4, the heart of this paper, we attempt to understand the stabilization of an extended ring by a massive central body via an analytic derivation of the torques exerted on the ring by a perturbed disc with a tightly wound spiral-arm structure. The condition for a long-lived ring is evaluated by comparing the inward transport rate to the rates of accretion and star formation in §4.2, and the model is tested against the simulations in §4.5. In §6 we make first steps of comparing the simulated rings to observations, where we show example mock images and the corresponding profiles from the simulations, both in the HST bands and for ALMA, and show a sneak preview of rings plus bulges detected in deep CANDELS fields. In §7 we summarize our conclusions. Certain more technical matters are deferred to appendices. In §A we describe the VELA simulations.

Figure 1: Ring buildup by a high-angular-momentum stream from the cosmic web. Shown is the face-on projected gas density (color) in a VELA simulated galaxy (V07 at z = 1.08) and the 2D velocity field (arrows). The virial radius is marked by a white circle. Left: Zoom-out on the stream extending out to > 300 kpc with an initial impact parameter comparable to the virial radius. Right: Zoom-in on the spiraling-in to a ring at ∼ 10−20 kpc. As illustrated in a cartoon in Fig. 20 of Danovich et al. (2015), the AM gained outside the halo by tidal torques from the cosmic web is gradually lost in the inner halo by torques from friction, other streams and the tilted disc, causing the buildup of a ring. The question is what prevents the ring from contracting further.
In §B we elaborate on how we measure ring properties and present the distributions of certain properties. In §C we evaluate the possible torques from a prolate central body. In §D we bring complementary images of rings in the simulations and observations.
RING FORMATION
This more detailed introductory section elaborates on the background and motivation for the analysis of compaction-driven rings and their longevity.
Ring formation from cosmic-web streams
The buildup of an extended ring is a natural result of the feeding of high-redshift galaxies by streams of cold gas that ride the dark-matter filaments of the cosmic web into its nodes. At sufficiently high redshifts, even in massive haloes the streams can penetrate cold through the halo virial radius without being heated by a stable virial shock, because their higher density induces efficient post-shock cooling that does not allow pressure support for the shock against gravitational collapse (Birnboim & Dekel 2003; Kereš et al. 2005; Dekel & Birnboim 2006; Cattaneo et al. 2006; Ocvirk, Pichon & Teyssier 2008; Kereš et al. 2009; Dekel et al. 2009; Danovich et al. 2012). The evolution of cold-gas AM leading to the buildup of a ring is described in four stages in Danovich et al. (2015), as summarized in a cartoon in their Fig. 20. Figure 1 demonstrates the buildup of a ring by a high-AM stream, focusing on one dominant stream in an example galaxy from the VELA suite of zoom-in cosmological simulations. The streams acquire excessive AM by tidal torques from the cosmic web while outside the halo (White 1984), expressed in terms of a velocity comparable to the virial velocity and an impact parameter that could be on the order of the virial radius. As the stream penetrates into the halo it spirals in and settles into an extended ring at ∼ 0.15R_v. The significant AM loss is by torques due to friction against the circum-galactic medium (CGM) and disc gas as well as by torques from the central galaxy and other streams. As the virial radius is growing with time, the initial impact parameter and the resultant ring radius become more extended in time. Results of related nature were obtained from other simulations (Pichon et al. 2011; Codis et al. 2012; Stewart et al. 2013). Preliminary observational kinematic studies of cold gas, via Lyman-alpha absorption along the line of sight to a background quasar, or Lyman-alpha emission that is typically stimulated by a nearby quasar, indeed indicate detections of cold inflowing gas with high AM, consistent with the simulation predictions (e.g. Martin et al. 2019). A recent observed system at z = 2.9, that is not illuminated by a nearby quasar, also indicates three cold inflowing streams (Daddi et al., in prep.). These observations provide preliminary confirmation for the natural buildup of an extended ring, as seen in the simulations.

Figure 2: Disc disruption below a characteristic mass. Shown is the degree of disciness in terms of V_rot/σ in the VELA simulations. Left: V_rot/σ (color) is averaged within bins in the mass-redshift plane. The black curve refers to the upper limit for effective supernova feedback at a virial velocity V_v = 120 km s^-1. The cyan curves refer to the Press-Schechter νσ peaks, for ν = 1, 2, 3, 4 from left to right, respectively. Right: V_rot/σ as a function of halo mass M_v for all the snapshots of all the 34 evolving galaxies. Each point refers to a snapshot, with the median and 1σ scatter (16% and 84% percentiles) marked in bins of M_v. The color refers to the alternative measure of disciness by shape using R_d/H_d, showing consistency between the two alternative measures of disciness. In both figures we see a marked transition from non-discs to discs or rings at a threshold mass of M_v ∼ 10^11 M_⊙, with negligible redshift dependence, as predicted analytically in Dekel et al. (2020).
Mass threshold for discs by merger-driven spin flips
The extended rings that form are expected to be fragile. In Dekel et al. (2020) we used the simulations and analytic estimates to explore how discs and rings populate the M_v − z plane. The disc disruption below a characteristic mass is shown in Fig. 2, which displays the distribution of a kinematic measure of gas disciness, V_rot/σ, in the VELA simulations. On the left, this ratio (color) is averaged over the simulation snapshots in bins of M_v and z. On the right, this ratio is shown as a function of halo mass for every snapshot. We see a systematic gradient of disciness with mass, and a division between the zones of non-disc and disc dominance at a critical mass of M_v ∼ (1−2) × 10^11 M_⊙, where V_rot/σ ∼ 2. No significant redshift dependence is seen. A measure of disciness by shape reveals similar results, with a transition at R_d/H_d ∼ 2.5. In particular, major mergers are expected to disrupt rotation-supported systems if the orbital AM and the spin of the merging galaxy are not aligned with the spin of the galaxy. This is expected to be the case in mergers of high-sigma nodes of the cosmic web, when the pattern of feeding streams drastically changes. Figures 4 and 5 of Dekel et al. (2020) demonstrate that the disruption below the critical mass is largely due to merger-driven spin flips in less than an orbital time. The mass threshold is derived by a simple analytic model, contrary to the naive expectation of a redshift threshold based on halo merger rates, where the time between mergers with respect to the halo orbital time is t_mer/t_orb ∝ (1 + z)^-1 (Neistein & Dekel 2008). This turns into a mass threshold when taking into account the increase of the ratio of baryonic galaxy mass to its halo mass with mass and redshift. While the external inflow (and merger) rate of mass and AM that could damage a disc is primarily determined by the total halo mass, the AM of the existing central galaxy that it is affecting is increasing with its baryonic mass, so the damaging relative change in AM is expected to be larger for lower-mass galaxies and at lower redshifts. This introduces a strong mass dependence in the disc survivability and weakens its redshift dependence, as seen in the simulations.
We thus learn in Dekel et al. (2020) that above a threshold mass the discs and rings are expected not to suffer disruptions by merger-driven spin flips on timescales comparable to their orbital times.
Given this threshold mass, the expected abundance of gas discs/rings at a given redshift can be estimated by the number density of haloes above the threshold mass. For the LCDM cosmology the Press-Schechter formalism yields a comoving number density of n > 10^-2 Mpc^-3 in the redshift range z = 0 − 2, and n > 2.8 × 10^-3, 5.2 × 10^-4, 3.2 × 10^-5 Mpc^-3 at z ∼ 4, 6, 10 respectively.
Rapid inward mass transport driven by violent disc instability
In addition to mergers, another risk for the long-term survival of an extended disc or ring is the inward mass transport associated with VDI. In a gravitationally unstable gas disc, the non-cylindrically-symmetric density perturbations exert torques on the rest of the disc, which typically cause transport of AM outward. Then AM conservation, or the angular Euler equation (eq. 6.34 of Binney & Tremaine 2008, hereafter BT), implies an associated mass transport inward, in terms of clump migration and gas inflow through the disc (e.g. Noguchi 1999; Gammie 2001; Bournaud, Elmegreen & Elmegreen 2007; Elmegreen, Bournaud & Elmegreen 2008; Dekel, Sari & Ceverino 2009; Ceverino, Dekel & Bournaud 2010; Bournaud et al. 2011; Forbes, Krumholz & Burkert 2012; Ceverino et al. 2012; Goldbaum, Krumholz & Forbes 2015, 2016). Considering mutual encounters between the giant clumps in a VDI disc, Dekel, Sari & Ceverino (2009, eq. 21) evaluated the evacuation time of the disc, where α = 0.2 α_0.2 is the instantaneous fraction of the cold disc mass in clumps and Q ∼ 1 is the Toomre parameter. The most meaningful variable here is δ_d, the mass ratio of cold disc to total mass within the sphere of radius r where the timescale is evaluated, δ_d = M_d/M_tot(r). The disc mass M_d refers to the "cold" mass that participates in the gravitational instability. In principle it includes the cold gas and the young stars, but it can be approximated to within a factor of two by the cold gas, as the young stars typically contribute about half of the gas mass. The total mass includes also the "hot" stars in the disc and bulge and the dark-matter mass within r. This quantity, δ_d, will turn out to also play a major role in our analytical modeling of rings in §4 below. An alternative estimate of the inflow time has been obtained in Dekel, Sari & Ceverino (2009, eq. 24), based on the shear-driven mass-inflow rate of Shakura & Sunyaev (1973) and the maximum dimensionless AM flux density obtained from simulations by Gammie (2001), yielding a lower limit. With Q between unity and 0.68, appropriate for marginal instability of a thick disc (Goldreich & Lynden-Bell 1965), this is comparable to the estimate in eq. (1) despite the opposite dependence on Q.
In a VDI disc with δ_d ∼ 0.3−0.5, we thus expect an inward mass transport within a few orbital times. With such a rapid inflow rate, one would not expect the extended discs to survive for a long time the way they do for massive, post-compaction galaxies in the simulations (see below), given that the expected average timescale for accretion into the ring is much longer, t_acc ∼ 20 (1 + z)^-1 t_orb, as estimated in eq. (26) below. This puzzling low inflow rate of the post-compaction rings in the simulated galaxies has been a long-standing theoretical challenge, which we seek to solve here. It turns out that the same quantity, δ_d, will play a major role also in our analytical modeling of rings in §4.
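A schematic way to see the competition: the spiral-torque result quoted in the abstract gives a transport time per orbital time scaling as δ_d^-3, while replenishment by accretion takes ∼ 20(1+z)^-1 orbital times. The sketch below encodes only these scalings; the normalization C of the transport time is a placeholder, not a value derived here, so only ratios between different δ_d values are meaningful.

```python
def t_transport_over_torb(delta_d, C=1.0):
    """Spiral-torque transport time per orbital time, ~ C * delta_d**-3.
    C is a hypothetical prefactor; only ratios are meaningful here."""
    return C * delta_d**-3

def t_acc_over_torb(z):
    """Average ring-replenishment time by accretion, ~ 20 (1+z)^-1."""
    return 20.0 / (1.0 + z)

# Lowering delta_d from 0.4 to 0.2 (e.g. by growing a compaction-driven
# bulge and dark-matter interior) slows the transport by a factor of
# (0.4/0.2)**3 = 8 at fixed C, which is how a ring can come to outlive
# its replenishment time:
print(t_transport_over_torb(0.2) / t_transport_over_torb(0.4))  # 8.0
print(t_acc_over_torb(z=2.0))                                    # ~6.7
```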
POST-COMPACTION DISCS AND RINGS IN SIMULATIONS
The mass threshold for survival of discs and rings, which we interpreted in Dekel et al. (2020) as largely due to merger-driven spin flips on an orbital timescale, is apparently associated with another physical process that tends to occur near a similar characteristic mass, that of a major compaction event. We describe here how long-lived discs and rings tend to appear in the simulations after such a compaction event, once a massive bulge has formed.
Compaction to a blue-nugget
Cosmological simulations show that most galaxies evolve through a dramatic wet-compaction event, which tends to occur at its maximum strength when the galaxy mass is near or above the golden value, M_v ∼ 10^11.5 M_⊙ and M_s ∼ 10^9.5 M_⊙, especially at z = 1 − 5 when the gas fraction is high (Zolotov et al. 2015; Tacchella et al. 2016b; Tomassetti et al. 2016; Dekel, Lapiner & Dubois 2019). The wet compaction is a significant gaseous contraction into a compact central star-forming core within the inner 1 kpc, termed "a blue nugget" (BN). The gas consumption by star formation and the associated gas ejection by stellar and supernova feedback trigger central gas depletion and inside-out quenching of star-formation rate (SFR) (Tacchella et al. 2016a). The cartoon in Fig. 3 illustrates the main features of this sequence of events as seen in the simulations via the evolution of gas mass, stellar mass and SFR within the inner kiloparsec. In order to more directly compare to observations, the right panel of Fig. 3 shows simulated evolution tracks of galaxies in the plane of specific SFR (sSFR) versus compactness as measured by the stellar surface density within 1 kpc, termed Σ_1. A compaction at a roughly constant sSFR turns into quenching at a constant Σ_1, generating an L-shape evolution track with the "knee" marking the blue-nugget phase. This characteristic L-shape evolution track has been confirmed observationally (e.g. Barro et al. 2017a, Fig. 7). Figure 4 illustrates through images of gas and stellar surface density the evolution through the compaction, blue-nugget and post-blue-nugget phases in an example VELA galaxy (V07), to be discussed below. Figure D1 in appendix §D shows a more detailed sequence of the evolution.

Figure 3: Left: Evolution of the gas mass, stellar mass and SFR within the inner 1 kpc through a compaction event (following Zolotov et al. 2015). The compaction is the steep rise of gas mass (blue), by an order of magnitude during ∼ 0.3 t_Hubble, reaching a peak as a blue nugget (BN), and soon after declining as the central gas is depleted by star formation and outflows with no replenishment. The SFR (magenta, in log(M_⊙ yr^-1)) follows closely, demonstrating post-BN compaction-triggered central quenching. The central stellar mass (red) is rising accordingly during the compaction, and it flattens off post-BN. The inner 1 kpc is dominated by dark matter (green) pre compaction and by baryons (stars, red) post compaction. The "disc" kinematics is dispersion-dominated pre-BN and rotation-dominated post-BN (Fig. 7). The time of the major BN event is typically when the galaxy is near the golden mass, M_s ∼ 10^10 M_⊙, separating between the pre-compaction supernova phase and the post-compaction hot-CGM phase. The black-hole growth (black), which is suppressed by supernova feedback pre compaction, is growing during and after the compaction in the hot-CGM phase above the golden mass. The onset of rapid black-hole growth is driven by the compaction event (Dekel, Lapiner & Dubois 2019). Right: The universal L-shape evolution track of eight VELA simulated galaxies in the plane of sSFR and stellar surface density within 1 kpc, Σ_1, which serves as a measure of compactness (following Lapiner, Dekel et al., in preparation). The compactness is growing at a roughly constant sSFR (horizontally) before and during the compaction event, turning over at the blue-nugget phase (the "knee", marked by a square symbol) to quenching at a constant Σ_1 (vertically). A similar behavior is seen observationally (Barro et al. 2017a, Fig. 7), with the value of Σ_1 at the BN phase weakly increasing with redshift. Note that this phenomenon is not caused by AGN feedback.
Observationally, it became evident that the massive, passive galaxies, which are already abundant at z ∼ 2−3, are typically compact, encompassing ∼ 10^10 M_⊙ of stars within 1 kpc, termed "red nuggets" (van Dokkum et al. 2008; Damjanov et al. 2009; Newman et al. 2010; van Dokkum et al. 2010; Damjanov et al. 2011; Whitaker et al. 2012; Bruce et al. 2012; van Dokkum et al. 2014, 2015). Their effective radii are typically one percent of their halo virial radii, which is smaller than one would expect had the gas started in the halo with a standard spin of λ ∼ 0.035 and conserved AM during the infall. This indicates dissipative inflow associated with AM loss, namely a wet compaction (Dekel & Burkert 2014), and it implies the presence of gaseous blue nuggets as the immediate progenitors of the red nuggets. Indeed, star-forming blue nuggets have been convincingly observed, with masses, structure, kinematics and abundance consistent with being the progenitors of the red nuggets (Barro et al. 2013, 2014a; Williams et al. 2014; Barro et al. 2015; van Dokkum et al. 2015; Williams et al. 2015; Barro et al. 2016a,b, 2017b). In particular, a machine-learning study, after being trained on mock dusty images of the blue nuggets as identified in the simulations, recognized with high confidence similar blue nuggets in the CANDELS-HST multi-color imaging survey of z = 1−3 galaxies (Huertas-Company et al. 2018).
The AM loss leading to compaction is found in the simulations to be caused either by wet mergers (∼ 40% by major plus minor mergers), by colliding counterrotating streams, by recycling fountains or by other processes (in prep.), and to be possibly associated with VDI (Dekel & Burkert 2014). These processes preferentially occur at high redshifts, where the overall accretion is at a higher rate and more gaseous, leading to deeper compaction events.
The compaction events mark drastic transitions in the galaxy structural, compositional, kinematic and other physical properties, which translate to pronounced changes as a function of mass near the characteristic mass for major blue nuggets (Zolotov et al. 2015; Tacchella et al. 2016a,b). The compaction triggers inside-out quenching of star formation, to be maintained by a hot CGM in massive haloes, possibly aided by AGN feedback. This is accompanied by a structural transition from a diffuse and largely amorphous configuration to a compact system, possibly surrounded by an extended gas-rich ring and/or a stellar envelope. The kinematics evolves accordingly from pressure to rotation support (Fig. 7). Especially important for our purpose here is that the blue nuggets favor a characteristic mass. According to the simulations, minor compaction events may occur at all masses in the history of a star-forming galaxy (SFG). Indeed, repeated episodes of minor compactions and subsequent quenching attempts can explain the confinement of SFGs to a narrow Main Sequence (Tacchella et al. 2016b). However, the major compaction events, those that involve an order-of-magnitude increase in central density, cause a transition from central dark-matter dominance to baryon dominance, and trigger significant and long-lasting quenching, are predicted by the simulations to occur near a characteristic halo mass about M_v ∼ 10^11.5 M_⊙, see Tomassetti et al. (2016, Fig. 8) and Zolotov et al. (2015, Fig. 21). This has been confirmed by the deep-learning study of VELA simulations versus observed CANDELS galaxies (Huertas-Company et al. 2018), which detected a preferred stellar mass for the observed blue nuggets near the golden mass, M_s ∼ 10^9.5-10 M_⊙. The significance of this finding is strengthened by the fact that the same characteristic mass has been recovered after eliminating from the training set the direct information concerning the mass, through the galaxy luminosity. One may suspect that the compaction events are especially pronounced in galaxies near the critical mass primarily due to the fact that supernova feedback, which weakens compactions in lower masses, becomes inefficient near and above this mass (Dekel, Lapiner & Dubois 2019). The characteristic mass for compaction events, being in the ball park of the mass threshold for discs seen in Fig. 2, may indicate that the compaction events also have a major role in the transition from non-discs to discs, to be addressed in this paper.

Figure 6: Evolution of profiles through compaction and ring formation. Shown are surface-density radial profiles of gas, SFR, stars and metallicity for three of the VELA galaxies that develop pronounced rings. The profiles for each galaxy are shown at the four phases of evolution (see Fig. 4), namely pre compaction (red), during compaction (yellow), early post compaction (cyan) and late post compaction (magenta), at the redshifts indicated. The profiles are smoothed with a Gaussian of standard deviation 0.05 in log r. The triangles mark the disc radii R_d. The shaded area marks the ±1σ ring width in gas at the late post-compaction phase, and the wedge marks r_0. The gas and SFR profiles show the post-compaction appearance of a ring. It is associated with a growth in the stellar mass encompassed by the ring (note the different vertical axes for gas and stars). The metallicity in the ring is low, reflecting the fact that it is largely made of freshly accreted gas.
Post-compaction discs & rings in simulations
3.2.1 Discs and rings about a massive bulge

Figure 4 displays the evolution of one VELA galaxy through the compaction and blue-nugget events and the post-compaction phases. It shows face-on images of projected gas density, which can serve as a proxy for the associated SFR following the Kennicutt-Schmidt relation.
The compaction phase (top-left panel) leads to a blue nugget (top-middle) that is characterized by the central blob of high gas density. Immediately after (top-right), a highly turbulent rotating disc develops and grows in extent (bottom-left). It shows a pronounced spiral-arm pattern and irregular perturbations including giant clumps. This is a VDI phase, in which the giant clumps and the gas between them migrate inwards (Noguchi 1998; Bournaud, Elmegreen & Elmegreen 2007; Dekel, Sari & Ceverino 2009; Ceverino, Dekel & Bournaud 2010; Bournaud et al. 2011; Ceverino et al. 2012). Then, the central gas is depleted into star formation and outflows (in comparable roles, Zolotov et al. 2015), and an extended clumpy ring forms, continuously fed by incoming cold streams, showing tightly wound spiral arms and giant clumps (bottom-middle). The ring is maintained at its extended form for several Gigayears, with no significant inward migration (bottom-right).

[Figure 7 caption] Kinematic evolution through compaction into rotation-supported discs/rings. Shown is the evolution of rotation velocity V_rot and velocity dispersion σ_r in VELA galaxies. Left: The median velocities (and 1σ scatter), with respect to the virial velocity V_v, are shown as a function of time (expansion factor a = (1 + z)^-1) with respect to the blue-nugget event (left) and, quite similarly, as a function of stellar mass M_s (right). We see that the rotation velocity is increasing drastically during the compaction and blue-nugget phase, while the velocity dispersion remains roughly constant across this event. This argues that angular momentum is a key to understanding the emergence of post-BN discs, and hints at extended rings as they naturally possess high AM. Right: The disciness measure V_rot/σ, for all VELA snapshots, as a function of time with respect to the BN event. The symbol color marks M_v. Symbols with error bars are medians in bins of a. We see a relatively tight correlation between the transition from non-discs to discs and the compaction-driven blue-nugget event where the galaxy mass is near a threshold of ∼ 2 × 10^11 M_⊙, hinting that the formation of a massive compact bulge is an important driver of disc or ring formation and longevity.
The complementary stellar-density maps in the lower six panels show how a compact stellar system forms following the gas compaction process into the blue-nugget phase (top-middle), and how it remains massive and compact as it quenches to a passive red nugget. The compaction process thus results in a massive central bulge, which soon becomes surrounded by a gaseous disc that develops into an extended ring. We will argue below that this massive bulge is a key for ring longevity.

Figure 5 indicates that the ring phenomenon is robust. It shows examples of images of face-on projected gas density in several simulated post-compaction galaxies with extended gas rings, three at z > 1 and one at z ∼ 4. These examples will serve as our fiducial pronounced rings in the simulations. These cases illustrate the robustness of rings about massive bulges in post-compaction galaxies, above the critical mass for major compaction events.
In order to explore the buildup of the rings through the different phases, Fig. 6 shows the evolution of surface-density radial profiles in the disc plane for (mostly cold) gas as well as SFR, stars and metallicity. Three VELA galaxies that develop pronounced rings are shown, each at four phases of evolution corresponding to pre-compaction, compaction, early post-compaction and late post-compaction, as seen in Fig. 4, with the corresponding redshifts marked. Inspecting the evolution of the gas profiles, one can see the growth of gas density inside r ∼ 1 kpc during the compaction phase, followed by gas depletion in the inner few kpc and the development of a long-term ring at r ∼ 10 kpc post compaction. The SFR density profiles roughly follow the gas density profiles, obeying the KS relation, showing the ring as well.
The stellar profiles show the associated post-compaction growth of the stellar mass within the sphere encompassed by the ring, which can be translated to a decrease in the quantity δ_d of eq. (2) that determines the inward mass transport rate via eq. (24) below. The ring itself is hardly detectable in most stellar mass profiles. The metallicity is decreasing with radius, being ∼ 0.4 dex lower in the ring compared to the inner disc. This indicates that the ring is being built by freshly accreted gas.
Based on the robust occurrence of clumpy post-compaction discs and rings in the simulations, we propose that they can be identified with a large fraction of the observed massive and extended star-forming rotating and highly turbulent "discs" showing giant clumps (Genzel et al. 2008, 2014; Guo et al. 2015, 2018; Förster Schreiber et al. 2018). The long-term survival of these gravitationally unstable rings, in the simulations and in the observed galaxies, seems to be in apparent conflict with the expected rapid inward migration of VDI discs discussed in §2.3 based on Dekel, Sari & Ceverino (2009), and it thus poses a theoretical challenge which we address in §4 below.
Compaction-driven transition to rotating discs/rings
The kinematic transition through the blue-nugget event is of particular relevance to our current study of extended discs and rings. In order to see this, Fig. 7 shows the transition of kinematic properties through the blue-nugget event and, almost equivalently, through the crossing of the threshold mass, in the VELA simulations (see similarly the evolution of spin in Jiang et al. 2019, Fig. G1). The galaxies evolve from pressure to rotation support, with the median V_rot/σ growing from near unity to about 4 ± 1. The important fact to note is that it is the rotation velocity that is dramatically growing during the compaction process, from a small V_rot/V_v = 0.4±0.1 pre compaction to V_rot/V_v = 1.4±0.1 post compaction. During the same period, over which the normalizing factor V_v does not vary significantly, the velocity dispersion remains roughly constant. We learn that the transition from pressure to rotation support is not due to a significant change in the stirring of turbulence, but rather due to an abrupt increase in the gas AM. This indicates that the inflowing high-AM gas is prevented from forming a long-lived disc in the pre-compaction phase because of an efficient loss of AM, largely due to the merger-driven flips discussed in §2.2 based on Dekel et al. (2020). In turn, the gas seems to retain its incoming high AM in the post-compaction phase. This should guide our effort in §4 below to understand the post-compaction survival of extended rings by means of AM exchange. Also shown in Fig. 7 is the measure of disciness, V_rot/σ, for all VELA snapshots as a function of the time with respect to the major BN event, as well as the virial mass (color). The visual impression is that the correlation with the blue-nugget event is rather tight, showing a clear transition from non-discs to discs near the BN event, where the halo mass is near a threshold of ∼ 2 × 10^11 M_⊙. In particular, there are only a small number of cases (except very massive ones) where the galaxies are non-discs significantly after the blue-nugget phase. This is yet another possible hint that the formation of a massive central bulge, in the mass range where mergers are infrequent, is in most cases sufficient for disc or ring longevity.
We thus have a hint that the simulations show a correlation between the compaction to a blue nugget and the development of an extended gas disc or ring. In §3.3, we quantify the ring properties and establish this correlation explicitly for the rings, and in §4 we attempt to understand the origin of this correlation.
Ring detection and properties
For crude estimates of expected ring properties, we read from the surface-density profiles of the VELA galaxies with pronounced rings (Fig. 5 and Fig. 6) that the galaxies of M_s ∼ 5×10^10 M_⊙ have mean gas surface densities in the ring of Σ_gas ∼ 5×10^7 M_⊙ kpc^-2, while much of the SFR occurs in clumps of Σ_gas ∼ 5×10^8 M_⊙ kpc^-2. In a ring of radius r = 10 kpc r_10 and width ∆r/r = 0.33 η_0.33, the total gas mass in the ring is M_gas ∼ 10^10 M_⊙ η_0.33 r_10^2. The average gas number density in the ring is n ∼ 1 cm^-3. The average SFR density in these pronounced rings is Σ_sfr ∼ Σ_gas/t_sfr, which for t_sfr ∼ 0.5 Gyr gives Σ_sfr ∼ 0.1 M_⊙ yr^-1 kpc^-2. The total SFR in the ring is therefore SFR ∼ 20 M_⊙ yr^-1 η_0.33 r_10^2.
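To make this arithmetic easy to reproduce, here is a minimal sketch of the order-of-magnitude estimates in Python; the fiducial numbers (Σ_gas, r, η, t_sfr) are the values quoted above, and the ring-area approximation 2πr·∆r is our assumption.

# Minimal sketch of the order-of-magnitude ring estimates quoted above.
# The numbers are the fiducial values from the text, not VELA outputs.
import numpy as np

sigma_gas = 5e7      # mean gas surface density in the ring [Msun/kpc^2]
r_ring    = 10.0     # ring radius [kpc]
eta       = 0.33     # relative ring width, Delta_r / r
t_sfr     = 0.5e9    # depletion time [yr]

area      = 2.0 * np.pi * r_ring**2 * eta   # ring area ~ 2*pi*r*(eta*r) [kpc^2]
m_gas     = sigma_gas * area                # total ring gas mass [Msun]
sigma_sfr = sigma_gas / t_sfr               # KS-like Sigma_SFR [Msun/yr/kpc^2]
sfr       = sigma_sfr * area                # total SFR in the ring [Msun/yr]

print(f"M_gas ~ {m_gas:.1e} Msun, Sigma_SFR ~ {sigma_sfr:.2f} Msun/yr/kpc^2, SFR ~ {sfr:.0f} Msun/yr")

This reproduces M_gas ∼ 10^10 M_⊙ and SFR ∼ 20 M_⊙ yr^-1 for the fiducial ring.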
In order to measure ring properties more quantitatively for all the simulated galaxies, we compute for each the radial gas surface-density profile Σ(r), projected onto the disc plane defined by the instantaneous angular momentum, and fit to it a function that captures the main ring with a Gaussian shape in linear Σ versus r, as described in more detail in appendix §B1. Three examples are shown in Fig. 8 to illustrate the profiles and fits. In the case of a single dominant ring (left and middle panels), which covers ∼ 80% of the cases where the galaxies have rings, we fit a Gaussian on top of a constant background, with four free parameters. In the case where a smoothed version of the profile indicates two well-separated rings, which occurs in ∼ 20% of the ringy galaxies, we fit a sum of two Gaussians with the same Σ_0 (right panel). If one of the rings is at least three times as massive as the other, which happens in about half the galaxies with double rings, we choose it as the dominant ring. Otherwise, in ∼ 10% of the cases, we combine the two rings into one, assigning to it the average contrast and a combined radius, width and mass, as specified in appendix §B1.
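As an illustration of the single-ring case, the following sketch fits a Gaussian on a constant background (the four-parameter form of eq. 4) to a mock profile; the function name, the mock data and the initial guesses are our assumptions, not the actual pipeline of appendix §B1.

# A minimal sketch of the single-ring fit: a Gaussian on a constant
# background with four free parameters (Sigma_0, Sigma_G, r_0, sigma).
import numpy as np
from scipy.optimize import curve_fit

def ring_profile(r, sigma0, sigmaG, r0, sig):
    """Sigma(r) = Sigma_0 + Sigma_G * exp(-(r - r_0)^2 / (2 sig^2))."""
    return sigma0 + sigmaG * np.exp(-0.5 * ((r - r0) / sig) ** 2)

# Mock profile: background 1e7, ring of amplitude 5e7 at r_0 = 10 kpc.
r = np.linspace(0.5, 25.0, 100)
truth = (1e7, 5e7, 10.0, 1.7)
rng = np.random.default_rng(0)
sigma_obs = ring_profile(r, *truth) * (1 + 0.05 * rng.standard_normal(r.size))

p0 = (sigma_obs.min(), sigma_obs.max(), r[np.argmax(sigma_obs)], 2.0)
popt, _ = curve_fit(ring_profile, r, sigma_obs, p0=p0)
sigma0, sigmaG, r0, sig = popt
print(f"delta_ring = Sigma_G/Sigma_0 = {sigmaG / sigma0:.2f}, r_0 = {r0:.1f} kpc")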
The ring is characterized by its contrast with respect to the background in its interior, δ_ring = Σ_G/Σ_0, ranging from 0 for no ring to δ_ring → ∞ for an ultimate ring with an empty interior. An alternative measure of ring strength is the mass excess, the ratio between the gas mass in the ring and the total in the disc including the background,

µ_ring = M_ring/M_gas,tot .

The ring mass is taken to be the mass above Σ_0 in the range r_0 ± 2σ, and the total mass is measured from r = 0 to r_0 + 2σ. The mass fraction µ_ring thus ranges from zero for no ring to unity for a pure ring with otherwise no disc component. In order to evaluate the quality of the two measures of ring strength, Fig. 9 plots the two against each other. For the more significant rings the two measures are tightly correlated, spread about the line log µ_ring = log δ_ring - 0.5, except for the very strong rings where δ_ring becomes ≫ 1. The scatter is larger for the more minor rings. We note that a threshold in µ_ring implies a meaningful threshold in δ_ring. For example, log µ_ring > -1.0, -0.7, -0.5, -0.3 automatically implies log δ_ring > -0.7, -0.5, -0.3, 0.1, respectively. On the other hand, a threshold in δ_ring allows a large range of low µ_ring values. We conclude that µ_ring is a more robust measure of ring strength, and adopt it in our analysis below.

[Figure 8 and 9 captions] Three examples of gas surface brightness profiles and the best-fit Gaussian fitting functions, with either one or two Gaussians above a constant background, eq. (4). The four best-fit parameters are quoted for each ring. The ring is characterized by its contrast δ_ring = Σ_G/Σ_0 and by the gas mass excess in the ring, µ_ring = M_ring/M_gas,tot, where the ring mass is measured within r_0 ± 2σ and the total gas is inside r_0 + 2σ. In the case of two Gaussians, if one ring is three times as massive as the other, the massive one is chosen (∼ 10% of the ringy galaxies). Otherwise, the two rings are combined (∼ 10% of the ringy galaxies). For the significant rings the two measures are tightly correlated, roughly about the line log µ_ring = log δ_ring - 0.5. We find µ_ring to be more useful because a threshold in µ_ring implies a threshold in δ_ring, but not vice versa.

Figure B1 in appendix §B2 shows the distributions of certain ring properties among the simulated galaxies with significant rings, µ_ring > 0.3. Other relevant properties will be discussed below in different specific contexts.
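A sketch of how the mass-excess measure defined above can be evaluated from the best-fit parameters, integrating 2πrΣ numerically; the integration scheme and step count are our choices, not necessarily those of appendix §B1.

# Sketch of mu_ring: gas mass above the background Sigma_0 within
# r_0 +/- 2 sigma, over the total gas mass (background + ring) inside
# r_0 + 2 sigma, both as integrals of 2*pi*r*Sigma(r).
import numpy as np

def mu_ring(sigma0, sigmaG, r0, sig, n=2000):
    r_in, r_out = max(r0 - 2.0 * sig, 0.0), r0 + 2.0 * sig
    # Ring mass: excess above the background between r_0 - 2 sigma and r_0 + 2 sigma.
    rr = np.linspace(r_in, r_out, n)
    excess = sigmaG * np.exp(-0.5 * ((rr - r0) / sig) ** 2)
    m_ring = np.sum(2.0 * np.pi * rr * excess) * (rr[1] - rr[0])
    # Total gas mass from r = 0 out to r_0 + 2 sigma, including the background.
    rr_tot = np.linspace(0.0, r_out, n)
    sigma_tot = sigma0 + sigmaG * np.exp(-0.5 * ((rr_tot - r0) / sig) ** 2)
    m_tot = np.sum(2.0 * np.pi * rr_tot * sigma_tot) * (rr_tot[1] - rr_tot[0])
    return m_ring / m_tot

print(f"mu_ring = {mu_ring(1e7, 5e7, 10.0, 1.7):.2f}")  # a strong ring in this example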
Compaction-driven rings
The causal connection between the compaction to a blue nugget and the presence of a ring can now be quantified using the ring properties measured from the simulations.

[Figure 10 caption] Ring mass excess as a function of mass and redshift. Shown is the distribution of µ_ring, averaged within bins in the M_v-z plane. The corresponding distribution of the fraction of galaxies with significant rings is shown in Fig. 12. We recall from Fig. 2 that long-lived discs or rings are expected to dominate above a threshold mass of M_v ∼ 2 × 10^11 M_⊙ at all redshifts. We see that significant rings dominate above the threshold mass at z < 4, while there are weaker or no rings at much lower masses and higher redshifts. The weakness or absence of rings in intermediate-mass galaxies at very high redshifts indicates that a process other than spin flips also has a role in ring survival.
[Figure 11 caption] Left: Ring mass excess µ_ring as a function of expansion factor with respect to the blue-nugget time, a/a_BN, for all simulated galaxies. Symbol color marks stellar mass, which is strongly correlated with a/a_BN. Symbols where µ_ring < 0.01 are put at 0.01. The medians at given mass bins for all simulated galaxies are shown in black, with the 25% and 75% percentiles marked by the shaded area. The medians limited to the galaxies that show rings are marked in blue. The magenta curve and the right axis refer to the fraction of galaxies with significant rings of µ_ring > 0.3. We see a pronounced transition near the major compaction events to blue nuggets at M_s ∼ 10^10 M_⊙, from a tendency to have no rings to a dominance of high-contrast, massive rings. The ring excess is gradually growing with galaxy mass and time with respect to the blue-nugget phase. Right: The distribution of gas ring mass excess µ_ring in three different stages of evolution with respect to the blue-nugget event, roughly corresponding to different mass bins with respect to M_s ∼ 10^10 M_⊙. Pre BN (black), early post BN (blue) and late post BN (red) refer to log(a/a_BN) prior to -0.05, between 0.00 and 0.15, and after 0.15, respectively. The medians are marked by vertical dashed lines. Pre BN only a small fraction of the galaxies show rings, all weak, while late post BN most of the galaxies show rings, mostly significant ones.

Recalling that the major blue-nugget events tend to occur at a characteristic mass in a broad redshift range, we start by exploring the typical ring strength as a function of mass and redshift. For this, Fig. 10 shows
the ring mass excess, averaged within square bins in the M_v-z plane. Shown below in Fig. 12 is the associated distribution of f_ring, defined as the fraction of galaxies with significant rings obeying µ_ring > 0.3. We recall from Fig. 2 (based on Dekel et al. 2020) that M_v ∼ 2×10^11 M_⊙ is the threshold for long-lived discs, as discs in lower-mass haloes are disrupted by merger-driven spin flips on timescales shorter than the orbital times, as well as by supernova feedback. We learn that a significant fraction of the galaxies of halo masses M_v > 10^11.3 M_⊙ at z = 1-4, where discs dominate, actually have significant rings. In this redshift range, weaker rings show up also in a fraction of the haloes somewhat below the threshold mass, where they are expected to be less long-lived. At lower masses, M_v < 10^10 M_⊙, or at higher redshifts, z > 6, there are almost no rings. This is not surprising given that there are no long-lived discs there, as seen in Fig. 2. An interesting feature is the absence of rings at high redshifts near and even slightly above the threshold mass for long-lived discs, which does not follow the insensitivity to redshift seen for discs in general in Fig. 2. This indicates that the formation and survival of rings is not solely determined by the infrequent merger-driven spin flips that provide a necessary condition for long-term disc survival. The galaxies develop long-lived outer rings only after they undergo certain stages of evolution, possibly after their major compaction events, and, as we shall see in §4, they should also obey another condition, associated with the quantity δ_d of eq. (2), that provides a sufficient condition for long-term rings.
In order to better establish the causal connection between a compaction to a BN and the development of a ring, the left panel of Fig. 11 shows the ring mass excess µ_ring as a function of time (expansion factor) with respect to the blue-nugget event, a/a_BN, for all snapshots of all galaxies. Almost equivalently, the symbol color shows these quantities as a function of stellar mass M_s. The distributions of these ring properties at given mass bins are indicated by the medians and the 25% and 75% percentiles for all galaxies (black, shade). The medians are also shown for the sub-sample of snapshots that show rings, with non-vanishing µ_ring values (blue). Also shown (magenta) is the fraction of galaxies that have significant rings, with µ_ring > 0.3.
When inspecting the whole sample of simulated galaxies, we see a pronounced transition near the major compaction events to blue nuggets at a critical mass M_s ∼ 10^10 M_⊙. Prior to the BN, the vast majority of the galaxies show no rings, while sufficiently after the BN, most of the galaxies show rings. Focusing on the galaxies with rings, we see a gradual strengthening with mass, where the median mass excess grows from µ_ring ∼ 0.1 to 0.5. Well above the critical mass, post blue nugget, the most pronounced rings approach µ_ring ∼ 1, namely pure rings with no gas in the interior disc. The fraction of galaxies with significant rings rises from ∼ 10% pre BN and below the BN mass to ∼ 65% well after the BN time and above the BN mass. The exact quoted fractions are to be taken with a grain of salt, as the VELA suite is not a statistically representative sample of galaxies in terms of mass function.
To further establish the correlation between rings and the post-compaction phase, the right panel of Fig. 11 displays the probability distributions of ring mass excess in three different phases of evolution with respect to the BN event. The pre-BN phase is defined here by log(a/a BN ) < −0.05, to avoid the late compaction phase near the BN. The early-post-BN phase is limited to 0.0 < log(a/a BN ) < 0.15 and the late-post-BN phase is defined as log(a/a BN ) > 0.15. These phases roughly correspond to different mass bins with respect to the characteristic BN mass of M s ∼ 10 10 M . We read from the histograms that pre BN less than ∼ 20% of the galaxies show rings, all weak with a median µ ring ∼ 0.2 for the rings. In contrast, late post BN ∼ 70% of the galaxies show rings, mostly significant with a median above µ ring ∼ 0.3. We read that ∼ 24% of the galaxies are expected to have pronounced rings of µ ring > 0.5, and ∼ 50% are significant rings with µ ring > 0.3, while ∼ 30% have no sign of a ring. Figure 12 shows the distribution of ring fraction in the M v −z plane for significant rings of µ ring > 0.3. Figure B2 in appendix §B2 shows similar maps for all rings of µ ring > 0.01 and for the pronounced rings of µ ring > 0.5. Figure B3 in appendix §B2 shows the same map for significant rings µ ring > 0.3 but with M v replaced by stellar mass M s , to make the comparison with observations more straightforward. This complements the map of ring strength shown in Fig. 10. We see that a high fraction of rings is detected above the threshold mass, M v > 10 11 M , where discs survive spin flips (Fig. 2), and at z < 4. Focusing on massive galaxies, we read that the fraction of strong rings, µ ring > 0.5, is ∼ 30% at z ∼ 1, while it drops to the ∼ 10% level at z = 1.5−3.5. The fraction of significant rings is ∼ 50% at z ∼ 1 and 30−40% at z = 2−4. The fraction of all rings, including marginal minor ones, is ∼ 70% at z = 1 − 3, ∼ 50 − 60% at z = 3 − 4 and ∼ 20 − 40% at z = 4 − 6. In comparison, according to Fig. 10, the average ring strength in massive galaxies is just below µ ring = 0.5 at z ∼ 1, and is µ ring = 0.3 at z = 1.5−4. These fractions and ring strengths are to be compared to observations, where preliminary indications for qualitative agreement are discussed in §6.2.2.
The correlation of ring strength and time or mass with respect to the blue-nugget phase, as seen in Figs. 10, 11, and 12, strengthens our earlier impression that ring formation is correlated with the postcompaction phase of evolution, which was based on visual inspection of rings in gas images and profiles in the different evolution phases (Figs. 4, 5 and 6), combined with the post-compaction appearance of discs (or rings) above the corresponding mass thresholds (Figs. 2 and 7). We next attempt to understand the origin of the post-compaction longevity of the rings.
RING STABILIZATION BY A CENTRAL MASS: AN ANALYTIC MODEL
Having established in the simulations the ring formation and survival after compaction to a massive central body, and given the conflicting expectation for a rapid inward transport in a gaseous VDI disc summarized in §2.3, we now proceed to an analysis that reveals the conditions for slow inward mass transport of an extended ring, compared to ring buildup by high-AM accretion and interior depletion by star formation.
4.1 Inward transport by torques from spiral structure
The torque
In order to estimate the rate of inward transport of ring material [through AM conservation, or the angular Euler equation (eq. 6.34 of Binney & Tremaine 2008, hereafter BT)], we wish to compute the relative AM change in the outer disc outside radius r due to torques exerted by the perturbed disc inside r, during one disc orbital time. For an order-of-magnitude estimate, we follow Chapter 6 of BT assuming a tightly wound spiral-arm pattern in a razor-thin disc. The z-component of the torque per unit mass at a position (r, φ) in the disc plane is

τ_z(r, φ) = -∂Φ/∂φ ,

where Φ is the gravitational potential, and r and φ are the usual spherical coordinates. The relevant part of the potential exerting the torque is due to the disc, Φ_d, which is related to the density in the disc ρ_d via the Poisson equation. The total torque on the ring outside r is obtained by an integral over the volume outside r,

T(r) = -∫_{r'>r} ρ_d (∂Φ_d/∂φ) d³x .

This is a simplified version of the more explicit expression in eq. 6.14 of BT, and after some algebra it yields a general expression for the torque. Next, assuming a tightly wound spiral structure with m arms, the small pitch angle is given by

tan α = m/(|k| r) ,   (10)

with k the wavenumber (positive or negative for trailing or leading arms, respectively). One assumes a thin disc, in which the spiral structure is described by a surface density (eq. 6.19 of BT)

Σ_s(r, φ) = Σ_1(r) cos[mφ + f(r)] ,

where Σ_1(r) is the spiral perturbation, assumed to vary slowly with r, the shape function f(r) obeys df/dr = k, the spirals are tightly wound, |k| r ≫ 1, and m > 0. The corresponding potential is

Φ_s(r, φ) = -(2πG/|k|) Σ_1(r) cos[mφ + f(r)] .

The torque then follows (eq. 6.21 of BT). Thus, trailing arms (k > 0) exert a positive torque on the outer part of the disc and therefore transport AM outwards (we omit the sign of the torque hereafter). Using eq. (10), the torque can be expressed in terms of the pitch angle. We should comment that spiral arms are indeed usually trailing. Across the trailing spiral, the gas rotates faster than the spiral pattern (§6.1.3.d of BT), namely the ring in this region is well inside the corotation radius of the spiral. The pattern speed of the spiral pattern may not be unique, at different radii or at different times, because of the varying potential due to the growth of the bulge. Nevertheless, the torque from the trailing spirals transfers AM from inside to outside the corotation radius, and consequently gas in the ring flows inwards.
Angular-momentum transport and mass inflow rate
In order to compute the relative change of AM in the ring during an orbital time, we write for a ring at radius r the torque per unit mass as τ(r) = T(r)/M_r, where M_r is the ring gas mass,

M_r = 2π η_r r² Σ_r ,   (15)

where Σ_r is the average surface density in the ring and η_r = ∆r/r is the relative width of the ring. The specific AM in the ring is j = Ω r², and the orbital time is t_orb = 2π Ω^-1, where Ω is the circular angular velocity at r. Substituting in eq. (14), we obtain the relative change of AM during one orbit, ∆j/j, where ∆_r = Σ_r/Σ_d is the ring contrast with respect to the disc, which we first assume to be ≳ 1, and where the amplitude of the spiral surface-density pattern is assumed to be a fraction A_m of the axisymmetric density,

Σ_1 = A_m Σ_d .

For reference, this amplitude is known to be in the range 0.15-0.60 in observed spiral galaxies. It turns out that the quantity that governs the relative AM change and simplifies the above expressions for a ring is the same mass ratio of cold disc to total that governs the rapid inflow of a VDI disc, eq. (2), namely δ_d = M_d/M_tot, where M_d is the cold mass in the disc. Using eqs. (2) and (15), and approximating the circular velocity by (Ωr)² = GM_tot(r)/r, the surface density in the disc can be expressed in terms of δ_d,

Σ_d = δ_d Ω² r/(π G) ,   (18)

which can be inserted in eq. (16) for ∆j/j. Next, the pitch angle can also be related to the key variable δ_d by appealing to the local axisymmetric Toomre instability, which yields a critical wavenumber for the fastest-growing mode of instability (eq. 6.65 of BT),

k_crit = κ²/(2π G Σ_d) = [ψ²/(2 δ_d)] r^-1 .   (20)

Here κ is the epicycle frequency, κ² = r dΩ²/dr + 4Ω². This gives κ = ψ Ω, where ψ = 1 for Keplerian orbits about a point mass, ψ = √2 for a flat rotation curve, ψ = √3 for the circular velocity of a self-gravitating uniform disc, and ψ = 2 for solid-body rotation. The second equality made use of eq. (18). Adopting |k| = k_crit in eq. (10) for the pitch angle, we obtain

tan α = m/(k_crit r) = (2m/ψ²) δ_d ,   (21)

which can be inserted in eq. (19) to obtain the relative AM change per orbit, eq. (22). The rate at which AM is transported out is actually driven by the sum of the gravitational torques computed above and the advective current of AM. The advective transport rate is given by the same expression as in eq. (13) times a factor of order unity or smaller, explicitly [v_s² |k|/(πGΣ) - 1], where v_s is the speed of sound (BT eq. 6.81, based on appendix J). Thus, the advective transport is generally comparable in magnitude to, or smaller than, the transport by gravitational torques. We therefore crudely multiply the AM exchange rate of eq. (22) by a factor of two.
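As a numerical illustration of the reconstructed pitch-angle relation, eq. (21), the following sketch evaluates the pitch angle for a few values of δ_d, showing how the arms become tightly wound as δ_d decreases; the fiducial m and ψ are the values quoted in the text.

# Sketch of eq. (21) as reconstructed above: tan(alpha) = 2 m delta_d / psi^2,
# from |k| = k_crit with k_crit * r = psi^2 / (2 delta_d).
import numpy as np

def pitch_angle_deg(delta_d, m=2, psi=np.sqrt(2.0)):
    tan_alpha = 2.0 * m * delta_d / psi**2
    return np.degrees(np.arctan(tan_alpha))

for delta_d in (0.5, 0.2, 0.05):
    print(f"delta_d = {delta_d:4.2f} -> pitch angle ~ {pitch_angle_deg(delta_d):5.1f} deg")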
The inverse of ∆j/j in an orbital time is the desired timescale for the ring mass to be transported inwards, in units of the orbital time. With the fiducial values ψ = √2, m = 2, ∆_r = 1, A_m = 0.5, and η_r = 0.5 we finally obtain

t_inf ≃ 6 δ_{d,0.3}^{-3} t_orb ,   (24)

where δ_{d,0.3} = δ_d/0.3. We learn that, for a fixed value of η_r that does not strongly depend on δ_d, the inward mass transport rate is very sensitive to the value of δ_d. A value near unity implies a rapid inflow, while for δ_d ≪ 1 the inflow rate is very slow. With δ_d ≲ 1, e.g., corresponding to a bulge-less, very gas-rich disc at radii where it dominates over the dark matter, eq. (24) indicates a significant AM loss, corresponding to inward mass transport in a few orbital times, as estimated in Dekel, Sari & Ceverino (2009) for a VDI disc, eq. (1) above. In this case, the pitch angle is not necessarily very small, and the spiral pattern can cover a large fraction of the disc with a significant radial component. In contrast, with δ_d ≪ 1, eq. (21) indicates that the pitch angle is very small, making the above calculation valid and practically confining the spiral arms to a long-lived ring, with a negligible mass transport rate.
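The sensitivity to δ_d can be illustrated with a few numbers, using the fiducial form of eq. (24) as reconstructed above; the coefficient is order-of-magnitude only.

# Sketch of the delta_d sensitivity of eq. (24) as reconstructed above:
# t_inf ~ 6 (delta_d / 0.3)^-3 t_orb for the fiducial parameter values.
def t_inf_over_t_orb(delta_d):
    return 6.0 * (delta_d / 0.3) ** -3

for delta_d in (1.0, 0.3, 0.1):
    print(f"delta_d = {delta_d:3.1f} -> t_inf ~ {t_inf_over_t_orb(delta_d):7.1f} t_orb")

A value of δ_d ∼ 1 yields inflow in a fraction of an orbit, while δ_d ∼ 0.1 yields over a hundred orbital times.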
A low value of δ_d is expected when M_tot is dominated either by a central massive bulge and/or by a large dark-matter mass within the ring radius. The former is inevitable after a compaction event. The latter is likely when the dark-matter halo is cuspy and when the incoming streams enter with a large impact parameter and therefore form an extended ring that encompasses a large dark-matter mass, as may be expected at late times. A small δ_d is also expected when the gas fraction in the accretion is low, as expected at late times.
From disc to long-lived ring
The formation and fate of the disc and ring is determined by the interplay between three timescales. An extended ring originates from high-AM spiraling-in cold streams (§2), which accrete mass on a timescale t_acc. The ring gas is transported inwards by torques from the perturbed disc, on a timescale t_inf, filling up the interior disc. This disc gas is depleted into stars (and outflows) on a timescale t_sfr. As long as t_inf < t_acc, the ring is evacuated into the disc before it is replenished. As long as t_inf < t_sfr, the disc remains gaseous. These two conditions would lead to a gas disc. These conditions are expected to be fulfilled for high values of δ_d. On the other hand, a ring would form and survive if t_inf > t_acc, when the ring is replenished before it is transported in, and if t_inf > t_sfr, when the interior disc would be depleted of gas. The latter are expected to be valid for low values of δ_d. We next quantify these conditions, assuming for simplicity an EdS cosmology, approximately valid at z > 1.
The orbital time of the extended ring is on average a few percent of the Hubble time for all galaxy masses,

t_orb ≃ 1.5 Gyr λ_{0.1} (1 + z)^{-3/2} ,   (25)

where λ = R_ring/R_v is the contraction factor from the virial radius to the extended gas ring, here in units of 0.1, which we adopt as our fiducial value. The accretion timescale, the inverse of the specific accretion rate, is on average

t_acc ∼ 30 Gyr (1 + z)^{-5/2} ∼ 20 t_orb (1 + z)^{-1} ,   (26)

with a negligible mass dependence across the range of massive galaxies. The second equality is based on eq. (25). Comparing eqs. (26) and (24), we obtain net ring replenishment, t_inf > t_acc, for

δ_d < δ_c ≃ 0.2 (1 + z)^{1/3} .   (27)

The redshift dependence is weak, e.g., δ_c ≃ 0.29 at z = 2. For the depletion time, it is common to assume that the gas turns into stars on a timescale

t_sfr = t_ff/ε_ff ∼ 5 t_orb ,   (28)

where ε_ff is the efficiency of SFR in a free-fall time and t_ff is the free-fall time in the star-forming regions (e.g. Krumholz, Dekel & McKee 2012). We adopted above the observed standard value of ε_ff ∼ 0.01 (e.g. Krumholz 2017), and t_ff ∼ 0.3 t_dyn ∼ 0.05 t_orb, assuming that stars form in clumps that are denser than the mean density of baryons in the ring by a factor of ∼ 10 (Ceverino et al. 2012). Comparing eqs. (28) and (24), we obtain net inner disc depletion, t_inf > t_sfr, for

δ_d ≲ 0.3 .   (29)

From eqs. (27) and (29) we learn that the two conditions are in the same ball park, though at z < 3 the ring replenishment condition is somewhat more demanding, while at higher redshifts the disc depletion condition is a little more demanding. We conclude that ring formation and survival is crudely expected below δ_d ∼ 0.2, give or take a factor of ∼ 2 due to uncertainties in the values of the fiducial parameters.
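A sketch that puts the three timescales together, in units of t_orb, and classifies the expected outcome; the scalings are those of eqs. (24), (26) and (28) above, with the numerical coefficients as reconstructed, so the exact location of the boundary should not be over-interpreted.

# Sketch of the ring-survival conditions of Sec. 4.2, in units of t_orb:
# t_acc ~ 20 (1+z)^-1 t_orb, t_sfr ~ 5 t_orb, and the reconstructed
# t_inf ~ 6 (delta_d/0.3)^-3 t_orb.
def ring_survives(delta_d, z):
    t_inf = 6.0 * (delta_d / 0.3) ** -3
    t_acc = 20.0 / (1.0 + z)
    t_sfr = 5.0
    # Replenishment (t_inf > t_acc) and interior depletion (t_inf > t_sfr).
    return t_inf > t_acc and t_inf > t_sfr

for z in (1, 2, 4):
    for delta_d in (0.1, 0.2, 0.3, 0.5):
        tag = "ring" if ring_survives(delta_d, z) else "disc"
        print(f"z = {z}, delta_d = {delta_d:3.1f} -> {tag}")

The boundary falls near δ_d ∼ 0.2-0.3, mildly increasing with redshift, as in eqs. (27) and (29).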
As the ring develops once δ_d becomes smaller than the threshold, it becomes over-dense with respect to the disc, ∆_r > 1. Then, in eq. (24), the inflow timescale becomes longer accordingly, so the ring can continue to grow in a runaway process.
Long-lived high-contrast rings
The estimate of t_inf in §4.1.2 was valid for the stage of transition from disc to ring, namely when the disc is still gas rich with a spiral structure that exerts torques on the outer ring, i.e., when the ring contrast ∆_r is not much larger than unity. Once the ring develops and is long-lived, under δ_d ≪ 1, it can become dominant over the disc. To evaluate the inflow rate in this situation, we now consider the limiting case of a pure ring, in which the spiral structure exerts torques on other parts of the ring. M_d of the previous analysis is now replaced by M_r, Σ_d in eq. (18) is replaced by Σ_r = (2η_r)^{-1} Σ_d, while ∆_r is unity. Now ∆j/j in eq. (19) and tan α in eq. (21) are both divided by 2η_r, so ∆j/j in eq. (22) is divided by (2η_r)³. For otherwise the fiducial values of the parameters, and in particular for the value of η_r fixed at 0.5, the inflow timescale is the same as it was in eq. (24).
However, in the case of a tightly wound strong ring, one may relate the relative ring width to the pitch angle, assuming that the width is the "wavelength" of the spiral arm, namely the radial distance between the parts of the arm separated by 2π,

η_r = ∆r/r = 2π/(|k| r) = (2π/m) tan α ,   (30)

where we used eqs. (10) and (21). Solving for η_r, with tan α from eq. (21) divided by 2η_r, we obtain

η_r = (2π/ψ²)^{1/2} δ_d^{1/2} ≃ 1.8 δ_d^{1/2} ,   (31)

where the second equality is for ψ = √2.

[Figure 12 caption] Fraction of significant rings. Shown is the 2D distribution of the fraction of rings with µ_ring > 0.3 in the M_v-z plane. Similar maps for all rings with µ_ring > 0.01 and for pronounced rings with µ_ring > 0.5 are shown in Fig. B2 in appendix §B2. This complements the distribution of ring strength in Fig. 10. We see that a high fraction of rings is detected above the threshold mass, M_v > 10^11 M_⊙, where discs survive spin flips (Fig. 2), and at z < 4.
Then eq. (22), divided by (2η_r)³ with η_r from eq. (31), gives the AM exchange rate for a high-contrast ring. After multiplying by two for the advective contribution, the inflow timescale becomes

t_inf ≈ 26 δ_d^{-1} t_orb ,   (33)

where the fiducial values of m = 2 and A_m = 0.5 were assumed. This is well longer than the Hubble time and the accretion and depletion timescales, implying that once a high-contrast ring forms, it is expected to live long.
Ring Toomre instability to clump formation
[Figure 13 caption] Analytic model versus simulations in the M_v-z plane. Shown is the average δ_d = M_d/M_tot in bins within the plane. This quantity is expected to govern the inward mass transport rate and thus tell where rings are expected in this plane, to be compared to the ring fraction map in Fig. 12. We see that a high fraction of rings is detected above the threshold mass, M_v > 10^11 M_⊙, where discs survive spin flips (Fig. 2), and at z < 4. This regime is indeed where on average δ_d < 0.3, as predicted in eqs. (27) and (29). The redshift dependence of δ_d, also reflected in the ring fraction in Fig. 12, is largely due to the general increase of gas fraction with redshift. Below the threshold mass δ_d is not too meaningful because the galaxies are dominated by non-discs (Fig. 2). The low values of δ_d at low masses could be due to gas removal by supernova feedback below the critical potential well of V_v ∼ 100 km s^-1 (black curve; Dekel & Silk 1986), but they do not lead to long-term rings because of spin flips.

The rings in the simulations, as seen in Fig. 5, are clumpy and star forming, as observed (e.g. Genzel et al. 2014), with the clump properties analyzed in Mandelker et al. (2014, 2017). This indicates that the rings are gravitationally unstable. Can this be consistent with the rings being stable against inward mass transport? The Toomre Q parameter can also be expressed in terms of the cold-to-total mass ratio δ_d (Dekel, Sari & Ceverino 2009),

Q ≃ ψ (σ/V) δ_d^{-1} ,

where ψ is of order unity (mentioned in the context of eq. (20)). This implies that a ring with σ/V ≲ 0.2 (Fig. 7) could be Toomre unstable with Q ∼ 1 if δ_d ∼ 0.2,
which is in the regime of long-lived rings based on eqs. (27) and (29), and certainly in the high-contrast-ring case, eq. (33). Based on Figs. 13 and 14, a significant fraction of the rings have δ_d ∼ 0.2, especially at z ∼ 2-5.
In fact, it has been shown using the simulations that clumps may form even with Q ∼ 2-3, as a result of an excess of compressive modes of turbulence due to compressive tides during mergers or flybys (Inoue et al. 2016). This would allow clumpy rings even when δ_d ≲ 0.1. The ring could thus fragment to giant clumps and form stars while it is stable against inward mass transport.
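For concreteness, a short evaluation of the δ_d form of the Toomre parameter quoted above, using our reconstruction Q ≃ ψ(σ/V)/δ_d:

# Sketch of the delta_d form of the Toomre parameter, Q ~ psi (sigma/V) / delta_d,
# as reconstructed from the definitions above.
import numpy as np

def toomre_Q(sigma_over_V, delta_d, psi=np.sqrt(2.0)):
    return psi * sigma_over_V / delta_d

print(f"Q = {toomre_Q(0.2, 0.2):.1f}")   # ring with delta_d ~ 0.2: marginally unstable
print(f"Q = {toomre_Q(0.2, 0.05):.1f}")  # region with very low delta_d: Q well above unity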
At the same time, the value of the Q parameter in the disc region encompassed by the ring can be much higher than unity, leading to "morphological quenching" (Martig et al. 2009). This is because δ_d in this region is low, both because of the high central mass and because of the low density of gas, which has been depleted into stars and outflows.

[Figure 14 caption] Shown for each galaxy is the ring gas mass excess µ_ring versus δ_d, the cold-to-total mass ratio interior to the ring, predicted to control the ring inward mass transport. The medians of µ_ring in bins of δ_d are shown for all galaxies (black, shade) and for the galaxies with rings (blue). Symbol color marks time with respect to the BN event (correlated with mass with respect to the characteristic BN mass). Also shown is the fraction of galaxies with rings of µ_ring > 0.5, 0.3, 0.01 in bins of δ_d (magenta, right axis, label). We see an anti-correlation between ring strength and δ_d, as predicted by the analytic model. The fraction of rings with µ_ring > 0.3 ranges from ∼ 64% at δ_d ∼ 0.03 to ∼ 9% at δ_d ∼ 0.5. A fraction of 50% in all rings is obtained near δ_d ∼ 0.2, in general agreement with the model prediction, eqs. (27) and (29). The fraction of strong rings with µ_ring > 0.5 is ∼ 20%, obtained for δ_d ≲ 0.1, and ∼ 10% at δ_d ∼ 0.15. The δ_d dependence of the ring fraction is expected to be similar to that of t_inf, and, indeed, the inverse linear relation t_inf ∝ δ_d^{-1} of eq. (33), predicted for significant rings, is roughly reproduced overall. The slight steepening of f_ring at higher δ_d, and the absence of rings at δ_d > 0.5, are consistent with the prediction for weaker rings in eq. (24). The flattening of f_ring at low δ_d is consistent with saturation of the ring population when t_inf is very long compared to all other timescales.
Model versus simulations via the cold fraction
We turn to the simulations again, now for the variable δ_d = M_d/M_tot that is predicted to control the ring formation and longevity according to the analytic model derived in the previous subsections. First, Fig. 13 shows in bins within the M_v-z plane the averaged values of δ_d for all galaxies. The cold mass M_d is computed in the disc within R_d, including the gas and young stars, with the latter typically contributing about 20-25% of the cold mass. It is divided by the total mass, including gas, stars and dark matter, in the sphere interior to the ring radius r_0. In the absence of a ring, the total mass is computed within the median ring radius in the galaxies with rings, r_0 ≃ 0.5 R_d (Fig. B1 in appendix §B2).
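A schematic of this measurement might look as follows; the array names, the 100 Myr age cut for "young" stars, and the toy data are our assumptions rather than the actual VELA analysis pipeline.

# Schematic of the delta_d measurement described above: cold mass (gas +
# young stars) within the disc radius R_d, over the total mass (gas, stars
# and dark matter) in the sphere interior to the ring radius r_0.
import numpy as np

def delta_d(r_gas, m_gas, r_star, m_star, age_star, r_dm, m_dm, R_d, r0):
    young = age_star < 0.1  # assumed 'young' star cut of 100 Myr (in Gyr)
    m_cold = m_gas[r_gas < R_d].sum() + m_star[(r_star < R_d) & young].sum()
    m_tot = (m_gas[r_gas < r0].sum() + m_star[r_star < r0].sum()
             + m_dm[r_dm < r0].sum())
    return m_cold / m_tot

# Toy particle data, only to demonstrate the call signature.
rng = np.random.default_rng(1)
N = 10000
r_gas, m_gas = 20 * rng.random(N), np.full(N, 1e6)
r_star, m_star, age_star = 20 * rng.random(N), np.full(N, 1e6), 5 * rng.random(N)
r_dm, m_dm = 50 * rng.random(N), np.full(N, 5e6)
print(f"delta_d = {delta_d(r_gas, m_gas, r_star, m_star, age_star, r_dm, m_dm, 10.0, 5.0):.2f}")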
This map of δ_d is to be compared to the maps of f_ring in Fig. 12 for rings of different strengths, and to the distribution of ring strength in Fig. 10. We learned there that a high fraction of rings is detected above the galaxy threshold mass, M_v > 10^11 M_⊙, where discs or rings survive merger-driven spin flips according to Fig. 2 and Dekel et al. (2020), and typically at z < 4. This ring-dominated range in the M_v-z diagram is indeed where on average δ_d < 0.3, consistent with the analytic predictions in eqs. (27) and (29). These low values of δ_d are largely due to the post-compaction massive bulges that appear above a similar threshold mass.
Near the threshold mass we see an increase of δ_d with redshift, roughly as δ_d ∝ (1 + z). This mostly reflects the general increase of gas fraction with redshift. We learn from eqs. (24) and (26) that the quantity that characterizes a pronounced ring is expected to depend on redshift as t_inf/t_acc ∼ 0.3 (1 + z) δ_{d,0.3}^{-3}, so it is predicted to decrease with redshift as ∝ (1 + z)^{-2}. This explains the absence of rings at high redshifts even above the mass threshold for discs, as seen in the distribution of f_ring.
Well below the threshold mass, δ_d is not too meaningful for ring survival because these galaxies are dominated by irregular non-disc gas configurations (Fig. 2), as discs/rings are frequently disrupted by merger-driven spin flips (Dekel et al. 2020, Fig. 4). The relatively low values of δ_d ∼ 0.1-0.2 at low masses and moderate redshifts partly reflect little gas, possibly due to gas removal by supernova feedback. Indeed, the upper limit for effective supernova feedback at V_v ∼ 100 km s^-1 (Dekel & Silk 1986), marked in the figure by a black curve, roughly matches the upper limit for the region of low δ_d values, with a similar redshift dependence of M_v. Another contribution to the low δ_d values in this regime may come from the pre-compaction central dominance of dark matter (Tomassetti et al. 2016). However, these low values of δ_d do not lead to long-lived rings because the discs are disrupted by merger-driven spin flips (Fig. 2).
To further explore the match of the analytic model with the simulations, Fig. 14 shows more explicitly the relation between the ring strength and the variable δ_d that governs the model. Shown for each galaxy is the ring gas mass excess µ_ring versus δ_d, as well as the median values of µ_ring in bins of δ_d, for all galaxies (black, shade) and limited to the galaxies with rings (blue). Most interesting are the fractions of galaxies with rings of µ_ring > 0.01, 0.3 or 0.5, shown in bins of δ_d (magenta symbols and lines, as labeled, right axis). We see an anti-correlation between ring strength and δ_d, as predicted by the analytic model. The fraction of rings with µ_ring > 0.3 ranges from ∼ 64% at δ_d ∼ 0.03 to ∼ 9% at δ_d ∼ 0.5. It is encouraging to note that for all rings (µ_ring > 0.01) the fraction is f_ring = 0.5 near δ_d ∼ 0.2, in general agreement with the model prediction in eqs. (27) and (29). We see that the fraction of strong rings with µ_ring > 0.5 is ∼ 20%, obtained for δ_d ≲ 0.1. This fraction is ∼ 10% at δ_d ∼ 0.15.
The way f_ring depends on δ_d may be expected to crudely resemble that of t_inf. Indeed, the inverse linear relation t_inf ∝ δ_d^{-1} predicted in eq. (33) for strong rings is crudely reproduced overall. The steepening at higher values of δ_d, and in particular the absence of rings at δ_d > 0.5, are consistent with the prediction in eq. (24) for rings in their earlier growth phase. The flattening of the ring fraction at very low δ_d values is consistent with saturation of the ring population when t_inf is very long compared to the Hubble time and all other timescales.
Another way to test the validity of the analytic model is via the ring width. Figure 15 shows the relative ring width η_r = ∆r/r versus δ_d for all simulated galaxies. The width is deduced from the Gaussian fit, η_r = 2σ/r_0. Also shown by color is the ring contrast. We see a correlation between η_r and δ_d, which for high-contrast rings is well modeled by the relation η_r ∼ 1.8 δ_d^{1/2} predicted in eq. (31).
We conclude that the survival of rings about massive central masses, as seen in the simulations post compaction (Fig. 11), can be understood in terms of the analytic model, eqs. (27) and (29), as well as eq. (33) with eq. (31).
Baryons versus dark matter
It would be interesting to figure out the contributions of the different mass components to the central mass that determines the low values of δ_d and thus leads to long-lived rings. First, Fig. 16 shows the distribution of B/DM, the mass ratio of baryons to dark matter interior to the ring. In the case of a ring, the masses are computed within the ring radius r_0, and in the case of no ring they are measured within 0.5 R_d, the average ring radius when there is a ring. Shown is the average of B/DM in bins within the M_v-z plane.

[Figure 16 caption] Nature of the central mass. Shown in bins within the M_v-z plane is the average of the baryon to dark-matter ratio within the ring radius. For no-rings the radius is chosen to be r_0 = 0.5 R_d, the typical ring radius. The dark matter dominates below the characteristic mass of ∼ 10^11 M_⊙, while the baryons dominate above it, reflecting the transition due to the major compaction event. Nevertheless, even post compaction the two components are comparable.

We see that
the dark matter tends to dominate below the threshold mass near M_v ∼ 10^11 M_⊙, while the baryons tend to dominate above it. This reflects the major compactions to blue nuggets near this characteristic mass. We note that in the regime that tends to populate rings, namely above the threshold mass and at z < 4, the average contributions of baryons and dark matter are comparable. We learn that the average post-compaction baryon dominance is typically marginal, by a factor of order unity and ×2.5 at most.
To find out how dark matter may contribute to low values of δ_d and the associated high ring strength, Fig. 17 shows B/DM versus δ_d and versus µ_ring for all the simulated galaxies that show rings. We learn that the low values of δ_d < 0.3, which are supposed to lead to rings, as well as the significant rings themselves, with µ_ring > 0.1, say, could be associated with central bodies that are dominated either by baryons or by dark matter. Typically the contributions of the two components are comparable, with a slight preference for the dark matter. However, for massive galaxies B/DM ranges from 0.4 to 2.5. This implies that long-lived rings could appear even in cases where the central mass is dominated by dark matter with a negligible bulge. This is to be compared to observations. As mentioned in §6.2.1, both cases of baryon dominance and dark-matter dominance are detected (Genzel et al., in prep.).
The important contribution of the dark matter to the mass interior to the ring, even after a major wet compaction event, is partly because the wet compaction of gas induces an adiabatic contraction of stars and dark matter, and partly because the ring radius is significantly larger than the 1 kpc, or the effective radius R_e, within which the post-compaction baryons are much more dominant.

[Figure 17 caption] Baryons versus dark matter interior to the rings. Shown is the baryon-to-DM ratio versus δ_d (left) and µ_ring (right) for all galaxies with rings, with the median marked (blue). Low δ_d values (e.g. < 0.3) and the associated significant rings (e.g. µ > 0.1 in this case) are obtained for a range of ratios, from baryon dominance by a factor ×2.5 to DM dominance by a similar factor for massive galaxies, and by ×10 for low-mass galaxies. In the median there is slightly more DM than baryons.
Nuggets: naked versus ringy, blue versus red
It is interesting to address the interplay between the strength of the ring and the nature of the mass in the central 1 kpc. Figure 18 refers to this in terms of being a nugget, and if so, whether it is an early star-forming blue nugget or a late quenched red nugget. We learn from the left panel that significant rings, e.g., µ_ring > 0.3, predominantly surround nuggets at their centers, defined by Σ_s1 > 10^9 M_⊙ kpc^-2 (see Fig. 3). A larger fraction of the weaker rings have no central nuggets (though they may still have large masses interior to the ring radius, which is typically several kpc, leading to a small δ_d).
These no-nuggets are typically star-forming, pre-compaction and DM dominated. Focusing on the galaxies with central nuggets, the right panel of Fig. 18 shows the distribution of µ_ring for the nuggets only. We learn that ∼ 40% of the nuggets are naked, while ∼ 60% are surrounded by rings, with ∼ 40% having significant rings of µ_ring > 0.3.
The red histogram refers to the fraction of red nuggets among the nuggets, defined by log(sSFR_1/Gyr^-1) < -1. We learn that among the significant rings, the nuggets are divided roughly half and half between quenched red nuggets and star-forming blue nuggets, which tend to appear later and earlier after the compaction phase, respectively.
These predicted fractions of rings with no central nuggets, of naked nuggets, and of blue versus red nuggets in galaxies that show rings, are to be compared to observations (see Ji et al., in prep., and a preliminary discussion in §6).
Torques by a prolate central body
To complement the analytic model for disruptive mass transport, we note that it may be aided by torques exerted by a central body, stars or dark matter, if it deviates from cylindrical symmetry. Indeed, as can be seen in Fig. C1, the VELA simulated galaxies tend to be triaxial, prolate pre compaction and oblate post compaction, showing a transition about the critical mass for blue nuggets (Tomassetti et al. 2016). A similar transition has been deduced for the shapes of observed CANDELS galaxies (Zhang et al. 2019). The prolate pre-compaction bulge may exert non-negligible torques that could possibly disrupt the disc. In appendix §C, we learn from a very crude estimate that an extreme prolate central body could have a non-negligible effect on the survival of a disc around it.
The central bulge, which tends toward an oblate shape (Fig. C1), does not exert torques on masses orbiting in the major plane of the oblate system, but it does exert torques off this plane. An extreme oblate system, namely a uniform disc, yields values of ∆j/j ∼ 0.1 per quadrant of a circular orbit in a plane perpendicular to the disc and close to it (Danovich et al. 2015, Figure 16). This implies that the post-compaction central oblate body, above the critical mass for blue nuggets, is not expected to significantly affect the AM of the disc and disrupt it.

[Figure 18 caption] Nuggets and rings. Left: Stellar surface density within the inner 1 kpc, Σ_s1, versus ring gas mass excess, µ_ring, for all simulated galaxies. Nuggets are characterised by Σ_s1 > 10^9 M_⊙ kpc^-2. Color marks the sSFR within 1 kpc, with blue and red nuggets separated at log(sSFR_1/Gyr^-1) ∼ -1. Right: The probability distribution of µ_ring for nuggets, obeying Σ_s1 > 10^9 M_⊙ kpc^-2 (blue, shaded). The fraction of red nuggets, log sSFR_1 < -1, is marked by the red histogram, with the difference between the blue and red histograms referring to the blue nuggets. We see on the left that pronounced rings tend to surround nuggets, while a significant fraction of the weaker rings have no central nuggets. Among the nuggets, on the right, ∼ 40% have significant rings of µ_ring > 0.3, while ∼ 40% are naked with no rings. The nuggets inside rings are roughly half blue and half red nuggets.

[Figure 19 caption] Mock images of simulated rings. Shown are three-color rgb images showing blue rings about red massive bulges as "observed" for the four simulated galaxies seen in Fig. 5. The corresponding mock images in the three filters F606W, F850LP and F160W are shown in Fig. D2 of appendix §D. Dust is incorporated using Sunrise and the galaxy is observed face-on through the HST filters, using the HST resolution and the noise corresponding to the CANDELS survey in the deep GOODS-S field.
PRELIMINARY COMPARISON TO OBSERVATIONS
6.1 Mock observations of simulated rings
For crude estimates relevant for HST imaging, we note that for a source at z ∼ 1 the F606W band of HST falls near 3000 Å and thus traces the rest-frame UV luminosity, which can serve as a proxy for star formation in the gaseous rings. Using the relation between UV luminosity and SFR based on Kennicutt (1998), corrected for a Chabrier (2003) IMF
by a factor of ×1.8, the SFR surface densities in the rings and the clumps, Σ_SFR ∼ (0.1-1) M_⊙ yr^-1 kpc^-2 respectively, yield crudely estimated UV magnitudes of (23.4-20.9) mag arcsec^-2, without taking extinction into account. The fainter magnitudes may be more representative of the average surface brightness in the most pronounced rings in the simulations. Dust extinction in the UV is expected to lie between zero and 3 magnitudes (e.g., Buat & Xu 1996; Freundlich et al. 2013).

[Figure 20 caption] Rings in SFR versus luminosity from mock images of simulated galaxies. Shown are profiles of three example simulated galaxies with pronounced rings. Left: Face-on projected SFR surface density profiles from the simulations. Right: Light density profiles in the F606W filter from the mock images shown in Fig. 19. The ring SFR density of ∼ 0.1 M_⊙ yr^-1 kpc^-2 shows as ∼ 24 mag arcsec^-2, indicating weak dust extinction, but the predicted contrast between the ring and the interior is significantly smaller.
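The UV magnitude estimate can be sketched as follows; the Kennicutt (1998) calibration SFR = 1.4 × 10^-28 L_ν with the ×1.8 Chabrier correction, a flat-spectrum K-correction, a Planck cosmology and zero dust are our assumed conventions, and the result is only expected to match the quoted magnitudes at the ∼0.5-1 mag level.

# Rough sketch: SFR surface density -> observed AB UV surface brightness.
import numpy as np
from astropy.cosmology import Planck15 as cosmo
import astropy.units as u

def uv_sb(sigma_sfr, z):
    """sigma_sfr in Msun/yr/kpc^2 -> AB mag per arcsec^2 (no dust)."""
    scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec)    # kpc per arcsec
    sfr_per_arcsec2 = sigma_sfr * scale.value**2                   # Msun/yr per arcsec^2
    L_nu = sfr_per_arcsec2 * 1.8 / 1.4e-28                         # erg/s/Hz, Chabrier IMF
    D_L = cosmo.luminosity_distance(z).to(u.cm).value
    f_nu = L_nu * (1 + z) / (4 * np.pi * D_L**2)                   # flat-spectrum K-correction
    return -2.5 * np.log10(f_nu) - 48.6

for s in (0.1, 1.0):
    print(f"Sigma_SFR = {s} Msun/yr/kpc^2 -> ~{uv_sb(s, 1.0):.1f} mag/arcsec^2 at z = 1")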
Mock "Candelized" Images
In order to make more quantitative observable predictions for rings based on the simulated galaxies, we use mock 2D images that are generated for each given VELA simulated galaxy at a given time to mimic HST CANDELS images (CANDELization). As described in Snyder et al. (2015) and Simons et al. (2019) (see MAST archive.stsci.edu/prepds/vela/), the stellar light is propagated through dust using the code sunrise. A spectral energy distribution (SED) is assigned to every star particle based on its mass, age and metallicity. The dust density is assumed to be proportional to the metal density with a given ratio and grain-size distribution. Sunrise then performs dust radiative transfer using a Monte Carlo ray-tracing technique. As each emitted multi-wavelength ray encounters dust mass, its energy is probabilistically absorbed or scattered until it exits the grid or enters the viewing aperture, selected here to provide a face-on view of the gas disc. The output of this process is the SED at each pixel. Raw mock images are created by integrating the SED in each pixel over the spectral response functions of the CANDELS ACS and WFC3 filters (F606W, F850LP and F160W) in the observer frame, given the redshift of the galaxy. These correspond at z ∼ 1-2 to rest-frame UV, B to U, and R to V, respectively. Thus, the first two are sensitive to young stars, while the third refers to old stars. The images are then convolved with the corresponding HST PSF at a given wavelength. Noise is added following Mantha et al. (2019), including random real noise stamps from the CANDELS data, to ensure that the galaxies are simulated at the depth of the real GOODS-S data and that the correlated noise from the HST pipeline is reproduced.

Figure 19 shows the resultant mock rgb images for the four example VELA galaxies with rings, whose gas densities are shown in Fig. 5 and the corresponding profiles in Fig. 6. Figure D2 in appendix §D further shows the images of the same simulated galaxies in each filter separately. These are all post-compaction galaxies with M_s ≳ 10^10.4 M_⊙. At z ∼ 1-1.5 the images show extended blue rings encompassing massive red bulges. The rings could be described as tightly-wound spiral arms, which are sometimes not concentric about the bulge, and they tend to show giant clumps. With the added noise the ring structure becomes less obvious in the rgb picture of the z = 3.55 galaxy, though it is seen pretty clearly in the F606W filter (Fig. D2).
In order to make the predictions a bit more quantitative, Fig. 20 shows the SFR surface density profiles of the three VELA galaxies with clear rings at z ∼ 1-1.5. These are compared to the light density profiles in the F606W filter from the mock images shown in Fig. 19. The ring SFR density of ∼ 0.1 M_⊙ yr^-1 kpc^-2 shows as a surface brightness of µ_606 ∼ 24 mag arcsec^-2, indicating only weak dust extinction. However, the predicted contrast in light between the ring and the interior is significantly smaller in the mock images. These are consistent with the crude estimates made at the beginning of this section.
Predictions for ALMA CO emission
The gas rings could be observable near the peak of star formation through their CO line emission. In particular, the CO(2-1) line at ν_rest = 230.538 GHz is the lowest-excitation line observable with NOEMA and ALMA at z ≲ 2. We assume (a) the Galactic value for the CO(1-0) conversion factor of molecular gas mass to luminosity, α_CO = 4.36 M_⊙/(K km s^-1 pc²), (b) a conservative CO(2-1)/CO(1-0) line ratio r_21 = 0.77 (e.g., Daddi et al. 2015), and (c) the standard relation of Solomon et al. (1997) for converting intrinsic CO(2-1) luminosity into integrated flux (cf. also Freundlich et al. 2019). Then, the typical ring and clump molecular gas surface densities of Σ_ring = 5×10^7 and 5×10^8 M_⊙ kpc^-2, respectively (assuming that the ring is dominated by molecular gas), yield integrated CO(2-1) line fluxes of 0.04 and 0.4 Jy km s^-1 arcsec^-2 at z = 1, and 0.10 and 1.0 Jy km s^-1 arcsec^-2 if the galaxy is put at z = 0.5.
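These flux estimates can be reproduced with a short sketch, inverting the Solomon et al. (1997) relation L'_CO = 3.25×10^7 S∆v ν_obs^-2 D_L² (1+z)^-3 [K km s^-1 pc²]; the cosmology choice is ours.

# Sketch of the CO(2-1) flux estimate: Galactic alpha_CO, r21 = 0.77, and
# the inverted Solomon et al. (1997) luminosity-flux relation.
import numpy as np
from astropy.cosmology import Planck15 as cosmo
import astropy.units as u

ALPHA_CO, R21, NU_REST = 4.36, 0.77, 230.538  # Msun/(K km/s pc^2), line ratio, GHz

def co21_flux_per_arcsec2(sigma_gas, z):
    """sigma_gas in Msun/kpc^2 -> integrated CO(2-1) flux in Jy km/s per arcsec^2."""
    scale = cosmo.kpc_proper_per_arcmin(z).to(u.kpc / u.arcsec).value
    # CO(2-1) luminosity per arcsec^2, in K km/s pc^2 (1 kpc^2 = 1e6 pc^2).
    Lp = (sigma_gas / ALPHA_CO) * R21 * scale**2 * 1e6
    nu_obs = NU_REST / (1 + z)                                  # GHz
    D_L = cosmo.luminosity_distance(z).to(u.Mpc).value
    return Lp * nu_obs**2 * (1 + z) ** 3 / (3.25e7 * D_L**2)

for z in (1.0, 0.5):
    print(f"z = {z}: ring {co21_flux_per_arcsec2(5e7, z):.2f}, "
          f"clump {co21_flux_per_arcsec2(5e8, z):.2f} Jy km/s per arcsec^2")

This recovers the quoted ∼ 0.04 (0.10) Jy km s^-1 arcsec^-2 for the ring at z = 1 (0.5).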
We estimate that at z = 1, practical ALMA observation times per galaxy of < 10 h would enable a detection of the dense clumps of such rings, but probably not the lower-density regions between the clumps. ALMA would need about 50 h to detect the mean surface density of the rings with integrated SNR = 6, assuming a line width of 200 km s^-1 resolved by three channels. In order to visualize the detectability of rings by ALMA, Fig. 21 presents simulated CO(2-1) ALMA observations of the gas ring of V07 shown in Fig. 5. We see that if the galaxy is put at z = 1, parts of its ring would be traceable with 10 h of ALMA, but with a low signal-to-noise ratio. At z = 0.5, the ring is detectable using 2 h of ALMA (60 h of NOEMA), and can be mapped with a higher signal-to-noise ratio with 10 h of ALMA.

Genzel et al. (2017), utilizing rotation curves for 40 massive star-forming galaxies at z = 0.6-2.7 based on data from the 3D-HST/KMOS^3D/SINFONI/NOEMA surveys, verify the common existence of extended gas rings. They report that some of the rings surround massive compact bulges, typically with little dark-matter mass within one or a few effective radii, while other rings surround less massive bulges but preferably with higher central dark-matter masses. This is qualitatively consistent with the model in §4, addressing the crucial role of a massive central body, which could be either a post-compaction massive bulge and/or a centrally dominant massive dark-matter halo. It is also qualitatively consistent with our findings in the simulations (§5.1, Fig. 17) that the mass interior to significant rings in massive galaxies ranges from baryon dominance by up to a factor of ×2.5 to dark-matter dominance by up to a similar factor or more.
In terms of the ring properties, in Genzel et al. (2014) the reported gas densities in the rings (e.g. their Fig. 23) are Σ_gas ∼ 10^8.6−9.0 M_⊙ kpc⁻². This is higher than the average in the simulated rings and similar to the peak densities in the giant clumps within the simulated rings. The corresponding SFR densities deduced from the observed rings are Σ_SFR ∼ (1−2) M_⊙ yr⁻¹ kpc⁻². This is again higher than the average across the simulated rings and comparable to the SFR densities in the simulated giant clumps. This difference may be partly due to the fact that the observed galaxies of M_s ∼ 10^10.0−11.5 M_⊙ are systematically more massive than the simulated galaxies with significant rings, where M_s ∼ 10^9.5−11.0 M_⊙ (Fig. B1 in appendix §B). It may also reflect the imperfection of the VELA-3 suite of simulations used here, which tend to underestimate the gas fractions (Fig. B1). This is largely due to the relatively weak feedback incorporated, which leads to overestimated SFR at high redshifts.
In CANDELS
The general impression from crude visual inspections of CANDELS-HST galaxies used to be that they do not show star-forming rings, leading to a common notion that blue and red nuggets tend to be "naked". Naked nuggets were reported at z ∼ 1−2 (e.g. Williams et al. 2014; Lee et al. 2018). In contrast, deeper CANDELS images did indicate some rings around massive bulges (e.g., J. Dunlop, private communication). Recall that in §5.2, we predicted based on the simulations that while ∼ 40% of the nuggets are expected to be totally naked, another ∼ 40% of the nuggets are expected to have significant rings of µ_ring > 0.3.
Indeed, an ongoing search in the deeper GOODS fields (Ji, Giavalisco, Dekel et al., in prep.) reveals many star-forming rings about massive bulges. The rings are visually identified in the F606W bandpass, sensitive to young stars at z ∼ 1, and the images are then studied in the complementary F850LP and F160W filters, the latter capturing older stars. Our preliminary inspections indicate that, among the galaxies of M_s > 10^10.5 M_⊙ at z = 0.5−1.2, a non-negligible fraction of order ∼ 10% clearly show blue, star-forming, clumpy rings, typically surrounding a massive bulge, which is either star forming or quenched. Clearly, this detected fraction of rings is a far lower limit, restricted to galaxies of low inclinations and to sufficiently extended, high-contrast and bright rings that are more easily detected at z ≲ 1. Furthermore, a large fraction of the galaxies were eliminated from the ring search based on a pronounced spiral structure, while our theoretical understanding in §4 is that the spiral structure is intimately linked to the presence of a ring. In our simulations, we read from Fig. 11 that ∼ 24% of the galaxies are expected to have pronounced rings of µ_ring > 0.5. Focusing on massive galaxies at z ∼ 1, we read from the colors in Fig. 12 that the fraction relevant to the range of masses and redshifts where the observations were analyzed may be closer to ∼ 30%. At z ≳ 1.4, this fraction is reduced to ∼ 10%. According to Fig. 10, the average ring strength in massive galaxies at z ∼ 1 is indeed just below µ_ring = 0.5. Given the underestimate in the observational detection of rings, the numbers in the simulations and the observations may be in the same ballpark.
Visualization of such observed rings is provided by Fig. 22, which displays four preliminary example rgb images of CANDELS galaxies showing rings, with masses M_s > 10^10.8 M_⊙ at z = 0.58−1.22, as marked in the figure. Figure D3 in appendix §D shows the same galaxies in the three filters separately. These observed images show rings that qualitatively resemble the mock images from the simulated galaxies shown in Fig. 19. The average F606W surface brightness in the rings is 24.6, 24.7, 25.6 and 25.5 mag arcsec⁻² in rings id 9485, 13601, 16616 and 18419, respectively. These are in the ballpark of the ∼ 24 mag arcsec⁻² of the most pronounced mock rings shown in Fig. 20, given the uncertainties in the simulations, their mock images and the way the ring surface brightness is estimated, as well as in the surface brightness deduced for the observed rings. Both the simulations and the observations show massive bulges, though two of the observed bulges are blue while all four displayed simulated bulges are redder. We recall from §5.2 that among the significant simulated rings with central nuggets, about one half are red nuggets, consistent with what is indicated observationally.
These pictures of observed high-redshift rings in CANDELS are just a sneak preview of a detailed analysis. The challenge to be addressed is to evaluate the effect of dust on the appearance of rings and bulges in these images. If these rings are real and not artifacts of dust absorption in the inner regions, they would be qualitatively compatible with the Hα observations of Genzel et al. and in line with our theoretical understanding of extended star-forming gas rings about blue or red nuggets. Complementary spectroscopic studies would explore the ring kinematics and dynamics.
At low redshifts
Interestingly, Salim, Fang & Rich (2012) found that most low-redshift "Green Valley" galaxies, at the early stages of their quenching process, or S0-type galaxies, consist of massive quenched bulges surrounded by star-forming rings. Even closer to home, M31 and the Sombrero galaxy are known to show very pronounced dusty rings in the IR surrounding a massive stellar bulge and disc. While our simulations refer more explicitly to high-redshift, stream-fed gaseous galaxies, in which the conditions for ring formation may be different from those in z = 0 galaxies, we note that the high-z rings tend to appear after the major compaction events, which are the early stages of quenching, namely in the Green Valley (Tacchella et al. 2016b). Our general analysis in §4 of ring stabilization by a central mass through a low δ_d, and the considerations based on t_inf versus t_acc and t_sfr, may, with some modifications, also be relevant for explaining the longevity of the low-redshift rings.
CONCLUSION
In Dekel et al. (2020) we argued, analytically and using simulations, that galactic gas discs are likely to be long-lived only in dark-matter haloes of mass above a threshold of ∼ 2 × 10^11 M_⊙, corresponding to a stellar mass of ∼ 10^9 M_⊙, with little dependence on redshift. In haloes of lower masses, the gas does not tend to settle into an extended long-lived rotating disc, as several different mechanisms act to drastically change the angular momentum and thus disrupt the disc. First, the AM is predicted to flip on a timescale shorter than the orbital timescale due to mergers associated with a pattern change in the cosmic-web streams that feed the galaxy with AM. Second, in this pre-compaction low-mass regime (e.g. Zolotov et al. 2015), violent disc instability exerts torques that drive AM out and mass in, thus making the disc contract in a few orbital times (e.g. Dekel, Sari & Ceverino 2009). Third, in this regime the central dark-matter and stellar system tend to be prolate (e.g. Tomassetti et al. 2016) and thus capable of producing torques that reduce the AM of the incoming new gas.
Furthermore, supernova feedback is expected to have a major role in disrupting discs below the critical mass. This emerges from a simple energetic argument that yields an upper limit of V_v ∼ 100 km s⁻¹ for the dark-matter halo virial velocity (i.e. potential well) within which supernova feedback from a burst of star formation could be effective in significantly heating up the gas (Dekel & Silk 1986). Supernova feedback stirs up turbulence that puffs up the disc, and it suppresses the supply of new gas with high AM (Tollet et al. 2019), possibly even ejecting high-AM gas from the disc outskirts. As argued in section 3 of Dekel et al. (2020), supernova feedback determines the mass dependence of the stellar-to-halo mass ratio that enters the merger rate and thus affects the frequency of disc disruption by spin flips. Finally, supernova feedback has a major role in confining the major compaction events to near or above the golden mass (Dekel, Lapiner & Dubois 2019), and, as shown in the current paper, these compaction events are responsible for the formation and longevity of extended rings.
Above this golden mass, the disruptive mergers are less frequent and are not necessarily associated with a change in the pattern of the feeding streams, allowing the discs to survive for several orbital times. In parallel, the effects of supernova feedback are reduced due to the depth of the halo potential well.
The main issue addressed in this paper is the post-compaction formation of long-lived rings above a critical mass, similar to the golden mass for supernova feedback and merger-driven disc flips. We showed using the simulations that in the post-compaction regime, typically after z ∼ 4, the inflowing high-AM streams from the cosmic web settle into extended discs that evolve into long-lived rings. Using measures of ring strength in each simulated galaxy, such as contrast and mass fraction, we quantified the tendency of the rings to appear after the major compaction events and above the corresponding mass threshold, and showed that their strength is growing with time and mass with respect to the blue-nugget phase.
In order to understand the ring longevity, we have worked out the torques exerted by a tightly wound spiral structure on the disc outskirts. We found that the timescale for inward mass transport for a ring of constant relative width is roughly t_inf ∼ 6 δ_{d,0.3}^{−3} t_orb, and the spiral pitch angle is given by tan α ∼ δ_d, where δ_d is the cold-to-total mass ratio interior to the ring. By comparing this to the timescales for external accretion and interior SFR, t_acc and t_sfr, requiring ring replenishment, t_inf > t_acc, and depletion of the interior, t_inf > t_sfr, we learned that a ring forms and survives when δ_d < 0.3. The required low values of δ_d are most naturally due to the post-compaction massive bulge. A similar extended long-lived ring would appear about a massive dark-matter-dominated central region, which could be another reason for a reduced δ_d. Once the ring develops a high contrast, the inward transport time becomes longer than the Hubble time and all other relevant timescales. The ring remains intact, but it gradually weakens due to the weakening accretion rate with cosmological time and the gradual ring depletion into stars. The long-lived ring could be Toomre unstable, with giant clumps forming stars, as long as it is fed by high-AM cold gas streams.
In order to allow first crude comparisons of the simulated rings-about-bulges to observations, we generated mock images from ringy simulated galaxies that mimic multi-color HST images in CANDELS deep fields, including dust extinction. The pronounced rings at r ∼ 10 kpc are expected to form stars at a surface density of Σ_SFR ∼ (0.1−1) M_⊙ yr⁻¹ kpc⁻². This corresponds at z ∼ 1 to an average surface brightness of ∼ 24 mag arcsec⁻² in the F606W filter, corresponding to rest-frame UV, with weak dust extinction. We also showed mock ALMA images of CO(2-1) emission, indicating that z ∼ 1 rings would be detectable, but at a low signal to noise, with 10h of ALMA observations. A ring at z ∼ 0.5 would be detectable at a higher signal to noise even with a few-hour exposure.
Observational studies including Hα kinematics (Genzel et al. 2014, 2017, and work in progress) show gaseous star-forming clumpy rings around massive bulges or dark-matter-dominated centers in a significant fraction of z ∼ 1−2 galaxies above the threshold mass. This is qualitatively consistent with our theoretical understanding that a massive central mass is expected to support an extended ring for long times. It is also along the lines of the prediction of major compaction events that generate massive bulges typically above a similar threshold mass. In our simulations we find that for massive galaxies with pronounced rings, the baryon-to-DM ratio interior to the ring ranges from 0.4 to 2.5.
We provided a sneak preview of an ongoing study of rings in the deepest fields of the HST-CANDELS multi-color imaging survey (Ji et al., in preparation). The sample galaxies shown qualitatively resemble the mock images from the simulations, with star-forming clumpy rings about massive bulges. Our preliminary results indicate that, indeed, when observed deep enough, a non-negligible fraction of the galaxies of M_s > 10^10.5 M_⊙ at z ∼ 0.5−3 show blue rings about massive bulges.
In our simulations, strong rings typically have nuggets in their 1 kpc centers, of which roughly half are star-forming blue nuggets and the other half are quenched red nuggets. Among the nuggets, about half are naked, and the other half are surrounded by significant rings. There are preliminary indications that these predictions are in the ballpark of the observed ring and nugget populations, but this is a subject for future studies.

Table A1. Relevant global properties of the VELA 3 galaxies. The quantities are quoted at z = 2 (a = 0.33) or at the final timestep a_fin when it is < 0.33 (marked by stars). M_v is the total virial mass. The following four quantities are measured within 0.1R_v: M_s is the stellar mass, M_g is the gas mass, SFR is the star formation rate, and R_e is the half-stellar-mass radius. The disc outer cylindrical volume, as defined in Mandelker et al. (2014), is given by R_d and H_d, the disc radius and half height that contain 85% of the gas mass within 0.15R_v. V_rot and σ are the rotation velocity and the radial velocity dispersion of the gas. e and f are the shape parameters of the gas distribution, representing the "elongation" and "flattening" as defined in Tomassetti et al. (2016). a_fin is the expansion factor at the last output.

(Ceverino, Dekel & Bournaud 2010; Ceverino et al. 2012; Mandelker et al. 2014). Like in other simulations, the treatments of star formation and feedback processes are rather simplified. The code may assume a realistic SFR efficiency per free fall time on the grid scale, but it does not follow in detail the formation of molecules and the effect of metallicity on SFR. As shown in Ceverino et al. (2014), the star formation rates, gas fractions, and stellar-to-halo mass ratio are all in the ballpark of the estimates deduced from observations.
A2 The Galaxy Sample and Measurements
The virial and stellar properties of the galaxies are listed in Table A1. The virial mass M_v is the total mass within a sphere of radius R_v that encompasses an overdensity ∆(z), which depends on the cosmological parameters Ω_Λ(z) and Ω_m(z) (Bryan & Norman 1998; Dekel & Birnboim 2006). The stellar mass M_s is the instantaneous mass in stars within a radius of 0.2R_v, accounting for past stellar mass loss. We start the analysis at the cosmological time corresponding to expansion factor a = 0.125 (redshift z = 7). As can be seen in Table A1, most galaxies reach a = 0.50 (z = 1). Each galaxy is analyzed at output times separated by a constant interval in a, ∆a = 0.01, corresponding at z = 2 to ∼ 100 Myr (roughly half an orbital time at the disc edge). The sample consists of ∼ 1000 snapshots in total, in the redshift range z = 7−0.8, from 35 galaxies that at z = 2 span the stellar mass range (0.2−6.4) × 10^11 M_⊙. The half-mass sizes R_e are determined from the M_s that are measured within a radius of 0.2R_v, and they range over R_e ≃ 0.4−3.2 kpc at z = 2.
The SFR for a simulated galaxy is obtained by SFR = ⟨M_s(t_age < t_max)/t_max⟩_{t_max}, where M_s(t_age < t_max) is the mass at birth in stars younger than t_max. The average ⟨·⟩_{t_max} is obtained by averaging over all t_max in the interval [40, 80] Myr in steps of 0.2 Myr. The t_max in this range are long enough to ensure good statistics. The SFR ranges from ∼ 1 to 33 M_⊙ yr⁻¹ at z ∼ 2.
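In code, the averaging reads roughly as follows (a minimal sketch; the array names are ours):

```python
import numpy as np

def sfr(mass_birth, t_age_myr):
    """SFR = <M_s(t_age < t_max)/t_max> averaged over t_max in [40, 80] Myr
    in steps of 0.2 Myr; masses in Msun, ages in Myr, result in Msun/yr."""
    tmax_grid = np.arange(40.0, 80.0 + 1e-9, 0.2)
    rates = [mass_birth[t_age_myr < tmax].sum() / (tmax * 1e6)
             for tmax in tmax_grid]
    return float(np.mean(rates))
```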
The instantaneous mass of each star particle is derived from its initial mass at birth and its age using a fitting formula for the mass loss from the stellar population represented by the star particle, according to which 10%, 20% and 30% of the mass is lost after 30 Myr, 260 Myr and 2 Gyr from birth, respectively. We consistently use here the instantaneous stellar mass, M_s, and define the specific SFR by sSFR = SFR/M_s.
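A crude stand-in for the fitting formula, interpolating the three quoted anchor points linearly in log age (the exact functional form used in the simulations is not reproduced here):

```python
import numpy as np

# Anchor points: 10%, 20%, 30% of the mass lost after 30 Myr, 260 Myr, 2 Gyr.
_LOG_T = np.log10([30.0, 260.0, 2000.0])   # Myr
_FRAC = np.array([0.10, 0.20, 0.30])

def instantaneous_mass(m_birth, age_myr):
    """Instantaneous stellar mass given birth mass and age. Note that
    np.interp extrapolates flat, so very young ages get the 10% floor,
    a simplification of the actual formula."""
    loss = np.interp(np.log10(np.maximum(age_myr, 1.0)), _LOG_T, _FRAC)
    return m_birth * (1.0 - loss)
```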
The determination of the centre of the galaxy is outlined in detail in appendix B of Mandelker et al. (2014). Briefly, starting from the most bound star, the centre is refined iteratively by calculating the centre of mass of stellar particles in spheres of decreasing radii, updating the centre and decreasing the radius at each iteration. We begin with an initial radius of 600 pc, and decrease the radius by a factor of 1.1 at each iteration. The iteration terminates when the radius reaches 130 pc or when the number of stellar particles in the sphere drops below 20.
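A minimal sketch of this shrinking-spheres iteration (function and variable names are ours):

```python
import numpy as np

def galaxy_centre(pos, mass, start, r0=0.6, rmin=0.13, shrink=1.1, nmin=20):
    """Refine the centre iteratively: centre of mass of star particles in
    spheres of decreasing radius (kpc), shrinking by a factor 1.1 per
    iteration, stopping at r < 0.13 kpc or fewer than 20 particles."""
    centre = np.asarray(start, dtype=float)
    r = r0
    while r >= rmin:
        sel = np.linalg.norm(pos - centre, axis=1) < r
        if sel.sum() < nmin:
            break
        centre = np.average(pos[sel], axis=0, weights=mass[sel])
        r /= shrink
    return centre
```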
The disc plane and dimensions are determined iteratively, as detailed in Mandelker et al. (2014). The disc axis is defined by the AM of cold gas (T < 1.5 × 10⁴ K), which on average accounts for ∼ 97% of the total gas mass in the disc. The radius R_d is chosen to contain 85% of the cold gas mass in the galactic mid-plane out to 0.15R_v, and the half-height H_d is defined to encompass 85% of the cold gas mass in a thick cylinder where both the radius and half-height equal R_d. The ratio R_d/H_d is used below as one of the measures of gas disciness.
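The two mass-quantile measurements can be sketched as follows (assuming the cold-gas cell positions have already been rotated into the disc frame):

```python
import numpy as np

def mass_quantile(x, m, frac=0.85):
    """Smallest x containing a fraction `frac` of the total mass."""
    order = np.argsort(x)
    cum = np.cumsum(m[order])
    return x[order][np.searchsorted(cum, frac * cum[-1])]

def disc_dimensions(r_cyl, z_cyl, m_cold, r_v):
    """R_d contains 85% of the cold gas mass out to 0.15 R_v; H_d contains
    85% of the cold gas mass in the cylinder of radius and half-height R_d."""
    inside = r_cyl < 0.15 * r_v
    r_d = mass_quantile(r_cyl[inside], m_cold[inside])
    thick = (r_cyl < r_d) & (np.abs(z_cyl) < r_d)
    h_d = mass_quantile(np.abs(z_cyl[thick]), m_cold[thick])
    return r_d, h_d
```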
Another measure of disciness is the kinematic ratio of rotation velocity to velocity dispersion, V_rot/σ. The rotation velocity and the velocity dispersion are computed by mass-weighted averaging over cells inside a cylinder whose minor axis is along the AM direction of the cold gas (T < 4 × 10⁴ K) within a sphere of radius 0.1R_v. The cylinder radius is 0.1R_v and its half-height is 0.25R_e, where R_e is the cold-gas half-mass radius (more details in Kretchmer et al., in prep.). The radial velocity dispersion is measured with respect to the mean radial velocity within the cylinder.
Relevant global properties of the VELA 3 galaxies at z = 2 are listed in Table A1 and explained in the caption. The table includes the global masses and sizes of the different components, as well as the shape and kinematic properties.
We attempt to identify the major event of compaction to a blue nugget for each galaxy. This is the one that leads to a significant central gas depletion and SFR quenching, and marks the transition from dark-matter to baryon dominance within R e . Following Zolotov et al. (2015) and Tacchella et al. (2016a), the most physical way to identify the compaction and blue nugget is by the steep rise of gas density (and SFR) within the inner 1 kpc to the highest peak, as long as it is followed by a significant, long-term decline in central gas mass density (and SFR). The onset of compaction can also be identified as the start of the steep rise of central gas density prior to the blue-nugget peak. An alternative identification is using the shoulder of the stellar mass density within 1 kpc where its rise due to starburst associated with the compaction turns into a plateau of maximum long-term compactness slightly after the blue-nugget peak of gas density. This is a more practical way to identify blue nuggets in observations (e.g. Barro et al. 2017a).
Major mergers in the history of each galaxy, for the limited purpose they are used here, are identified in a simplified way by following sudden increases in the stellar mass.

B1 Measuring ring properties

In order to measure ring properties for all the simulated galaxies, we compute for each the gas surface density profile Σ(r) in the plane perpendicular to the angular momentum, in radial bins spaced by 100 pc, and fit to it a function with free parameters that captures the main ring with a Gaussian in linear Σ versus linear r. Three examples are shown in Fig. 8, which illustrates the fits. We consider the radius range between r_min = 1.5 kpc and the outer disc radius R_d (which is typically ∼ 10 kpc or ∼ 0.1R_v), but limit the outer radius to be < 0.15R_v.
We first evaluate whether there is no ring, one ring or two rings. For this we temporarily smooth the profile with a Gaussian window of standard deviation σ = max{0.005R_v, 7.5∆x}, where ∆x is the minimum cell size (varying between 17.5 and 35 pc), giving values of σ = (0.2−1) kpc. We then search for maxima that are separated by more than max{4 kpc, 0.033R_v}. This also provides initial guesses for the ring radius r_in, the peak level Σ_max and the minimum level interior to the ring Σ_min.
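This smoothing-and-peak-search step can be sketched in a few lines (a hedged sketch with scipy; the actual implementation details may differ):

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d
from scipy.signal import find_peaks

def ring_candidates(r, sigma_prof, r_v, dx, dr=0.1):
    """Smooth Sigma(r) with a Gaussian window of std max(0.005 R_v, 7.5 dx)
    and keep maxima separated by more than max(4 kpc, 0.033 R_v).
    r, r_v, dx, dr are in kpc; dr is the 100 pc radial bin width."""
    width = max(0.005 * r_v, 7.5 * dx)
    smooth = gaussian_filter1d(sigma_prof, width / dr)
    peaks, _ = find_peaks(smooth, distance=max(4.0, 0.033 * r_v) / dr)
    return r[peaks], smooth[peaks]
```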
In the case of a single ring, we fit to the raw profile a Gaussian with a constant background, with four free parameters. The fit is performed by minimizing the sum of residuals in radii spaced by 100 pc. The radius r_0 is searched for in the range r_in ± 1.5 kpc.

Figure B1. Distribution of ring properties for the significant rings in the simulations, µ_ring > 0.3. From left to right, top to bottom: galaxy stellar mass M_s and redshift z for these galaxies with rings; ring radius r_0 with respect to the disc radius, and the relative ring width η_r = 2σ/r_0; then, for the ring (within r_0 ± 1σ) and the interior (r < r_0 − 2σ), gas fraction f_gas, surface density of stellar mass Σ_s, specific SFR sSFR, surface density of SFR, and gas metallicity Z_g.
The standard deviation σ is allowed to range from 0.008R_v to 0.016R_v (motivated by the values eventually obtained for the ring full widths of 4σ). In the cases of combined double rings (see below) the value of σ can become as large as 0.033R_v. The background level Σ_0 is not allowed to be smaller than the minimum value Σ_min in the interior of the ring, such that it will not be biased low by the background level at the exterior of the ring.
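A minimal version of this constrained single-ring fit (a sketch using scipy; the actual minimization details may differ):

```python
import numpy as np
from scipy.optimize import curve_fit

def ring_model(r, sigma0, amp, r0, sig):
    """Constant background plus a Gaussian ring, in linear Sigma versus r."""
    return sigma0 + amp * np.exp(-0.5 * ((r - r0) / sig) ** 2)

def fit_single_ring(r, prof, r_in, sigma_min, r_v):
    """Four-parameter fit on the raw profile: r0 within r_in +/- 1.5 kpc,
    sigma in (0.008-0.016) R_v, background bounded below by Sigma_min."""
    p0 = [1.05 * sigma_min + 1e-6, max(prof.max() - sigma_min, 1e-6),
          r_in, 0.012 * r_v]
    lo = [sigma_min, 0.0, r_in - 1.5, 0.008 * r_v]
    hi = [np.inf, np.inf, r_in + 1.5, 0.016 * r_v]
    popt, _ = curve_fit(ring_model, r, prof, p0=p0, bounds=(lo, hi))
    return popt   # Sigma_0, amplitude, r_0, sigma
```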
The ring at r_0 with a width ±σ can be characterized by its contrast δ_ring, ranging from δ_ring = 0 for no ring, through δ_ring < 1 for a low-contrast ring and δ_ring > 1 for a high-contrast ring, to δ_ring → ∞ for an ultimate ring with an empty interior.
The gas mass of the ring M_ring is determined by integrating the density profile in (r_0 − 2σ, r_0 + 2σ) and subtracting the background mass of surface density Σ_0. The corresponding measure of ring strength is the gas mass excess µ_ring, ranging from µ_ring ≪ 1 for a negligible ring to µ_ring → 1 for an ultimate ring with an empty interior.

Figure B2. 2D distributions of ring fractions for the simulated galaxies in bins within the M_v−z plane, complementing Fig. 12. Left: all rings with µ_ring > 0.01. Middle: significant rings with µ_ring > 0.3. Right: pronounced rings with µ_ring > 0.5. This complements the distribution of ring strength in Fig. 10. We see that a high fraction of rings is detected above the threshold mass, M_v > 10^11 M_⊙, where discs survive spin flips (Fig. 2), and at z < 4.

Figure B3. Same as Fig. 12, the distribution of ring fraction for significant rings with µ_ring > 0.3, but in the plane M_s−z, for easier comparison to observations.
In the case of two rings, we fit a sum of two Gaussians, with the same Σ_0. If one of the rings is at least three times more massive than the other, we choose it as the dominant ring. Otherwise, for about 10% of the rings, we combine the two rings into one. The contrast is set to the average of the two contrasts. The radius r_0 is set to the average of the two radii, r_01 < r_02. When computing µ_ring, for non-overlapping rings the ring mass is the sum of the two ring masses, and for overlapping rings (within 2σ) the ring mass is integrated between r_01 − 2σ_1 and r_02 + 2σ_2, with the denominator integrated to r_02 + 2σ_2. The ring width, which is 2σ for each of the rings, is set to the sum of the widths in the case of non-overlapping rings, and to half the interval from r_01 − 2σ_1 to r_02 + 2σ_2 in the case of overlapping rings.
Finally, we remove rings with r_0 smaller than 400 pc, corresponding to four radial bins (four is the number of free parameters in the fit). We also remove rings with µ_ring ≃ 0.
B2 More ring properties
To complement the presentation and discussion of ring properties, especially in §3.3, Fig. B1 shows the probability distribution of certain other ring properties of interest, as measured by the Gaussian fits in the sample of VELA galaxies with significant rings, µ_ring > 0.3.
The stellar masses of the galaxies with significant rings are mostly in the range log M_s/M_⊙ ≃ 10.25 ± 0.75. This reflects the preferred occurrence of rings above the mass threshold for discs set by the frequency of merger-driven spin flips, Fig. 2, and the characteristic mass for major compaction into blue nuggets, Figs. 3 and 7 (Zolotov et al. 2015; Tomassetti et al. 2016). The main redshift range is z ∼ 2.6, with rings also found out to z ∼ 4. This is largely determined by our sample of galaxies, which grow in time and stop near z = 1, and reflects the tendency of rings to appear above a threshold mass. It also reflects the decrease of δ_d in time due to the overall decrease in gas fraction, Figs. 12 and 13.
The ring radii are r_0 ≃ (0.5 ± 0.3) R_d. They can define the outer edge of the disc, but in some cases they lie at inner radii. The relative ring widths are η_r = 2σ/r_0 ≃ 0.67 ± 0.4. This means that the rings can be narrow, but in some cases they are rather broad. Some of these broad rings are actually made of two rings.
The gas fraction in the ring ranges from 0.1 to 0.8, with the median at f_g ≃ 0.32, while in the interior it is typically below 0.1. This reflects the accumulation of gas in the ring while the interior has been depleted. The typical gas fraction in the ring is somewhat lower than observed for more massive galaxies at z ∼ 2 (Tacconi et al. 2010; Genzel et al. 2014; Tacconi et al. 2018), partly because the simulated galaxies are systematically smaller, as seen in the distribution of M_s, partly because the simulated rings are at z ∼ 1, and partly because the gas fractions are underestimated in this suite of simulations because of the weak feedback that allows too high SFR at higher redshifts. The stellar surface density in the rings is log Σ_s ≃ 7.5 ± 0.8, significantly lower than in the interior, where it is log Σ_s ≃ 8.9 ± 0.8 due to the massive bulge.
The sSFR in the ring is log sSFR ≃ −0.1 ± 0.5, typical of the Main Sequence of star-forming galaxies at z ∼ 1−2. In the interior it is log sSFR ≃ −1.0 ± 1.8, corresponding to both quenched red nuggets and star-forming blue nuggets. This is also seen in the SFR surface density Σ_SFR, which could be higher than in the ring for blue nuggets and lower for red nuggets. In the rings it is log Σ_SFR ≃ −1.6 ± 1.0 in units of M_⊙ yr⁻¹ kpc⁻².
Complementing the metallicity profiles shown in Fig. 6, the gas metallicity in the ring is log Z ≃ −0.5 ± 0.3. This is significantly lower than the interior metallicity of log Z ≃ −0.1 ± 0.2, consistent with the notion that the ring gas is mostly accreted gas. Figure B2, complementing Fig. 12 for rings of different strengths, shows the 2D distributions of ring fractions for the simulated galaxies in bins within the M_v−z plane: all rings with µ_ring > 0.01 (left), significant rings with µ_ring > 0.3 (middle), and pronounced rings with µ_ring > 0.5 (right). This also complements the distribution of ring strength in Fig. 10. Figure B3, complementing Fig. 12, shows the distribution of ring fraction for significant rings with µ_ring > 0.3, but in the plane M_s−z instead of M_v−z, to allow a more straightforward comparison to observations.
APPENDIX C: TORQUES BY A PROLATE CENTRAL BODY
As mentioned in §2.3, torques exerted by a VDI disc in the pre-compaction stage below the critical mass for major compaction may cause AM loss and thus shrinkage of the disc. Another mechanism that could help disrupt a pre-compaction disc in a similar way involves torques exerted by a central stellar system that is not cylindrically symmetric, e.g., if it has a prolate shape. Indeed, as can be seen in Fig. C1, the VELA simulated galaxies tend to be prolate pre-compaction and oblate post-compaction, showing a transition about the critical mass for blue nuggets (Tomassetti et al. 2016). The 3D ellipsoidal shape can be measured by the parameters of "elongation", e = [1 − (b/a)²]^{1/2}, and "flattening", f = [1 − (c/a)²]^{1/2}, where a ≥ b ≥ c are the principal axes of the mass distribution. The figure shows the difference e−f, which is useful for characterizing the shape from extremely oblate at e−f = −1, through pure triaxial at e−f = 0, to extremely prolate at e−f = +1. A similar transition has been deduced for the shapes of observed CANDELS galaxies (Zhang et al. 2019). The transition in shape can be explained by the transition from central dark-matter dominance to baryon dominance as a result of the compaction to a blue nugget (Tomassetti et al. 2016). The prolate pre-compaction bulge may exert non-negligible torques that could make a significant relative change in AM and thus help disrupt the gas disc, as we very crudely estimate below.
For a very crude estimate, consider a central body of mass γM, exerting a torque on the mass outside it at a position (r, θ, φ). The factor γ ≤ 1 is the fraction of mass in the body exerting the torque with respect to the total mass M within the sphere of radius r. The torque per unit mass can be written as

τ = µ(r, θ, φ) (GγM/r²) r,   (C1)

where µ depends on the shape of the central body and on the position where the torque is evaluated. The change in specific AM caused by this torque on a circular orbit of velocity V at radius r in a plane of φ = const., acting from time t_1 to t_2, is ∆j = ∫_{t_1}^{t_2} τ dt. Writing the specific AM in the orbit as j = V r, approximating V² ≃ GM/r, and using dt = (r/V) dθ to relate time and angle, with the corresponding angles θ_1 and θ_2, we obtain

∆j/j = γ ∫_{θ_1}^{θ_2} µ(r, θ) dθ.   (C2)
Note that this expression does not depend on M; it depends only on γ, the mass fraction in the body that exerts the torque. In order to obtain an upper limit for the possible effect, we consider an extreme prolate system, a dumbbell, made of two equal point masses separated by a distance 2d along the z axis, and consider a circular orbit of radius r in a plane that includes the dumbbell, where φ is constant and θ is varying. We obtain (for r in units of d)

µ(r, θ) = 0.5 r² sin θ [ (1 + r² − 2r cos θ)^{−3/2} − (1 + r² + 2r cos θ)^{−3/2} ].   (C3)

The torque is periodic, flipping sign in every quadrant of the orbit. When evaluated from 0 to π/2, the integral gives

γ⁻¹ ∆j/j = ∫₀^{π/2} µ(r, θ) dθ = r²/(r² − 1) − r/(r² + 1)^{1/2}.   (C4)
If the dumbbell also dominates the potential, γ = 1, example values are ∆j/j = 5.02, 0.97, 0.44, 0.18 at r/d = 1.1, 1.5, 2, 3 respectively. We learn that the relative change of AM during a quadrant of a circular orbit about a dumbbell, before it flips sign in the following quadrant, can be significant out to r ∼ 2d and beyond. This crude estimate indicates that the effect of a very prolate central body could in principle have a non-negligible effect on the survival of a disc. However, this may be a severe over-estimate for the effect of a more realistic prolate ellipsoid, which should be computed properly for a general ellipsoid.

Figure C1. Evolution of shape (e − f, see text) of the stellar system (blue) as a function of stellar mass for stacked VELA simulated galaxies (median and 1σ scatter). It shows a transition from prolate (> 0) to oblate (< 0) at the critical mass for blue nuggets. A similar transition is seen when plotted against time with respect to the blue-nugget event. Also shown is the baryon-to-dark-matter ratio within the effective radius (red). It demonstrates that the transition in shape is associated with a transition from dark matter to baryon central dominance as a result of the compaction to a blue nugget (Tomassetti et al. 2016).
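As a numerical sanity check (ours, not from the original analysis), the closed form in equation (C4) can be verified against direct quadrature of (C3), reproducing the example values quoted above:

```python
import numpy as np
from scipy.integrate import quad

def mu(theta, r):
    """Equation (C3): dumbbell torque shape factor, with r in units of d."""
    return 0.5 * r**2 * np.sin(theta) * (
        (1 + r**2 - 2 * r * np.cos(theta))**-1.5
        - (1 + r**2 + 2 * r * np.cos(theta))**-1.5)

for r in (1.1, 1.5, 2.0, 3.0):
    numeric, _ = quad(mu, 0.0, np.pi / 2, args=(r,))
    closed = r**2 / (r**2 - 1) - r / np.sqrt(r**2 + 1)
    print(r, round(numeric, 2), round(closed, 2))  # 5.02, 0.97, 0.44, 0.18
```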
Post compaction, the central body tends toward an oblate shape. Such a body does not exert torques on masses orbiting in the major plane of the oblate system, but it does exert torques off this plane. An extreme oblate system, namely a uniform disc, yields values of ∆j/j ∼ 0.1 per quadrant in a plane perpendicular to the major plane of the body (Danovich et al. 2015, Figure 16). This implies that the post-compaction central oblate body, above the critical mass for blue nuggets, does not significantly affect the AM of the disc, and is not capable of disrupting it.
APPENDIX D: COMPLEMENTARY FIGURES
Here we show complementary relevant images from the VELA simulations. Figure D1 is a more detailed extension of Fig. 4, showing more stages in the evolution, with emphasis on the post-compaction discs and rings. Figure D2 shows the mock images of the four simulated galaxies shown in Fig. 19, now presenting the images in the three HST filters F606W, F850LP and F160W. Figure D3 shows the same for the four observed galaxies shown in Fig. 22.

Figure D1. Evolution from compaction through a blue nugget to a post-blue-nugget disc and ring, complementing Fig. 4. Shown are the face-on projected densities of gas (top) and stars (bottom) of the simulated galaxy V07. The first panels at a = 0.22−0.23 show the pre-compaction phase and the compaction process, leading to a blue nugget at a = 0.24. The following few panels show the post-compaction VDI disc. Already at a = 0.27 we see the onset of central depletion, and the panels from a = 0.29 and on display a long-lived extended ring, fed by high-AM cold streams. A compact stellar bulge forms during and soon after the compaction and it remains compact and massive as the stars fade to a red nugget surrounded by a stellar envelope.

Figure D2. Mock HST three-color images of simulated galaxies. Complementing Fig. 19, this figure presents the images in three HST filters separately. They show blue rings about red massive bulges as "observed" from the four simulated galaxies seen in Fig. 5. Dust is incorporated using Sunrise and the galaxy is observed face-on through the HST filters using the HST resolution and the noise corresponding to GOODS-S.

Figure D3. Observed rings in three colors. Complementing Fig. 22, this figure presents the images in three HST filters separately, for galaxies observed in the deepest GOODS-S fields of CANDELS. The images display extended blue rings about massive bulges, two blue and two red.
The R-mAtrIx Net
We provide a novel Neural Network architecture that can: i) output the R-matrix for a given quantum integrable spin chain, ii) search for an integrable Hamiltonian and the corresponding R-matrix under assumptions of certain symmetries or other restrictions, iii) explore the space of Hamiltonians around already learned models and reconstruct the family of integrable spin chains which they belong to. The neural network training is done by minimizing loss functions encoding the Yang-Baxter equation, regularity and other model-specific restrictions such as hermiticity. Holomorphy is implemented via the choice of activation functions. We demonstrate the work of our Neural Network on the two-dimensional spin chains of difference form. In particular, we reconstruct the R-matrices for all 14 classes. We also demonstrate its utility as an Explorer, scanning a certain subspace of Hamiltonians and identifying integrable classes after clusterisation. The last strategy can be used in future to carve out the map of integrable spin chains in higher dimensions and in more general settings where no analytical methods are available.
Introduction
Neural Networks and Deep Learning have recently emerged as a competitive computational tool in many areas of theoretical physics and mathematics, in addition to their several impressive achievements in computer vision and natural language processing [1]. In String Theory and Algebraic Geometry, for instance, the application of these methods was initiated in [2][3][4][5]. Since then, deep learning has seen several interesting and remarkable applications in the field, both on the computational front [6][7][8][9][10][11][12][13][14][15] as well as towards the explication of foundational questions [16]. These methods have also appeared in the context of Conformal Field Theory, critical phenomena, spin systems and Matrix Models [17][18][19][20][21][22][23]. More generally, deep learning has found interesting applications in mathematics, ranging from the solution of partial nonlinear differential equations [24,25], to symbolic calculations [26] and to hypothesis generation [27,28].
Interestingly, deep learning is starting to play an increasingly important role in symbolic regression, i.e. the extraction of exact analytical expressions from numerical data [29]. While it is difficult to pinpoint any one solitary reason for this confluence of several fields into deep learning, there are some important themes that do seem to play a recurring role. Firstly, deep neural networks are a highly flexible parametrized class of functions and provide us an efficient way to approximate various functional spaces and scan over them [30][31][32][33]. The same neural network, as we shall shortly see, can learn a Jacobi elliptic function as easily as it does a trigonometric or an exponential function. Such approximations train well for a variety of loss landscapes, including non-convex ones. Secondly, over the previous many years, robust frameworks for the design and optimization of neural networks have been developed, both as an explication of best practices [34][35][36][37][38][39] and the development of standardized software for implementation [40][41][42]. This has made it possible to reliably train increasingly deeper networks which are optimized to carry out increasingly sophisticated tasks, such as the direct computation of Ricci-flat metrics on Calabi-Yau manifolds [12][13][14][15] and the solution of differential equations without necessarily providing the neural network data obtained from explicitly sampling the solution. Further, in recent interesting developments, deep learning has been applied to analyze various aspects of symmetry in physical systems, ranging from their classification to their automated detection [19], [43][44][45][46][47].
The profound role played by symmetry in theoretical physics and mathematics is hard to overstate.
Probably its most compelling expression in theoretical physics is found in the bootstrap program, which rests on the idea that a theory may be significantly or even fully constrained just by the use of general principles and symmetries, without analysis of the microscopic dynamics. For example, the S-matrix Bootstrap bounds the space of allowed S-matrices relying only on unitarity, causality, crossing, analyticity and global symmetries [48][49][50]. This provides rigorous numerical bounds on the coupling constants and significantly restricts the space of self-consistent theories [51][52][53]. This line of consideration finds its ultimate realization in two dimensions once applied to integrable theories. Integrable Bootstrap complements the aforementioned constraints with one extra functional Yang-Baxter equation, manifesting the scattering factorization, which allows us to fix the S-matrix completely [54]. The same Yang-Baxter (YB) equation appears in the closely related context of integrable spin chains. Now, instead of the S-matrix, it restricts the R-matrix operator, whose existence allows one to construct a commuting tower of higher charges and prove integrability. Practically, one has to solve the functional YB equation in a certain functional space. There is no known general method to do so, and all existing approaches are limited in the scope of application and fall into three groups. The first class of methods is algebraic in nature, exploiting the symmetry of the R-matrix [55,56]. The second approach aims to directly solve the functional equation or the related differential equation [57]. The third alternative utilizes the boost operator to generate higher charges and impose their commutativity [58][59][60].
In this paper we shall demonstrate how neural networks and deep learning provide an efficient way to numerically solve the Yang-Baxter equation for integrable quantum spin chains. On an immediate front, we are motivated by recent interesting work on classical integrable systems using machine learning [43][44][45][61]. The approach taken in the work [61] of learning classical Lax pairs for integrable systems by minimization of loss functions encoding a flatness condition has a particularly close parallel to our approach. However, to the best of our knowledge, the present work is the first attempt to apply machine learning to quantum integrability, the analysis of R-matrices and the Yang-Baxter equation.
Our analysis utilizes neural networks to construct an approximator for the R-matrix and thereby solve the functional Yang-Baxter equation, while also allowing for the imposition of additional constraints.
We look into a sub-class of all possible R-matrices, namely those that are regular and holomorphic, and incorporate the Yang-Baxter equation into the loss function. Upon training for the given integrable Hamiltonian, we successfully learn the corresponding R-matrix to a prescribed precision. Using spin chains with a two-dimensional local space as the main playground, we reproduce all R-matrices of difference form, which were recently classified in [58]. Moreover, this Solver can be turned into an Explorer which scans the space (or a certain subspace) of all Hamiltonians looking for integrable models, which in principle allows us to discover new integrable models inaccessible to other methods. Below we provide a summary of the Neural Network and its training, as well as an overview of the paper.
Summary of Neural Network and Training:
The functional Yang-Baxter equation, see Equation (2.3) below, is holomorphic in the spectral parameter u ∈ C and, as such, holds over the entire complex plane. In this paper, we shall restrict our training to the interval Ω = (−1, 1) on the real line, but design our neural network so that it analytically continues to a holomorphic function over the complex plane. Each entry of the R-matrix is separately modeled by a multi-layer perceptron (MLP) with two hidden layers of 50 neurons each, taking as input the variable u ∈ Ω. More details are available in Section 3.2 and Appendix B. All the neurons are swish activated [62], except for the output neurons which are linear activated. Training proceeds by optimizing the loss functions that encode the Yang-Baxter equation (3.7), regularity (3.12), and constraints on the form of the spin chain Hamiltonian, for instance via (3.13). Hermiticity of the Hamiltonian, if applicable, is imposed by the loss (3.15). Optimization is done using Adam [63] with a starting learning rate of η = 10⁻³, which is annealed to η = 10⁻⁸ in steps of 10⁻¹ by monitoring the Yang-Baxter loss (3.7) on validation data for saturation.
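A minimal PyTorch sketch of one such entry network (class and variable names are ours; nn.SiLU is PyTorch's implementation of swish):

```python
import torch
import torch.nn as nn

class REntry(nn.Module):
    """One R-matrix entry R_ij(u) = a_ij(u) + i b_ij(u); each real function
    is an MLP with two hidden layers of 50 swish units and a linear output."""
    def __init__(self, width=50):
        super().__init__()
        def mlp():
            return nn.Sequential(
                nn.Linear(1, width), nn.SiLU(),
                nn.Linear(width, width), nn.SiLU(),
                nn.Linear(width, 1))
        self.re, self.im = mlp(), mlp()

    def forward(self, u):                     # u: (batch, 1) real tensor
        return torch.complex(self.re(u), self.im(u)).squeeze(-1)

# One network per non-zero entry of a 4x4 R-matrix, trained with Adam
# at the starting learning rate quoted above.
entries = nn.ModuleList(REntry() for _ in range(16))
opt = torch.optim.Adam(entries.parameters(), lr=1e-3, betas=(0.9, 0.999))
```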
Adam's hyperparameters β₁ and β₂ are fixed to 0.9 and 0.999 respectively. In the following, we will refer to this learning rate policy as the standard schedule. We apply this framework to explore the space of R-matrices using the following strategies:

1. Exploration by Attraction: The Hamiltonian loss (3.13) is imposed by specifying target numerical values for the two-particle Hamiltonian, or some ansatz/symmetries instead (like 6-vertex, 8-vertex, etc.). We also formally include here the ultimate case of general search, when no restrictions are imposed on the Hamiltonian at all. This strategy is predominantly used in our Section 4.1.
2. Exploration by Repulsion: We can generate new solutions by repelling away from an ansatz or a given spin chain. This requires us to activate the loss function (3.17) for a few epochs in order to move away from the specific Hamiltonian. This strategy is employed in Section 4.2.
Further, we also have two schemes for initializing training.
1. Random initialization: We randomly initialize the weights of the neural network using He initialization [64]. This samples the weights from either a uniform or a normal distribution centered around 0, but with a variance that scales as the inverse power of the layer width.

2. Warm-start: We use the weights and biases of an already learnt solution.
A brief overview of this paper is as follows. In section 2 we quickly introduce the R-matrix and other key concepts from the quantum integrability of spin chains relevant to this paper. Particularly in subsection 2.1, we review the classification program of 2-D spin chains of difference form through the boost automorphism method [58]. Section 3 contains a review of neural networks with a view towards machine learning the R-matrix given an ansatz for the two-particle Hamiltonian. Our methodology for this computation is provided in Section 3.2. We then present our results in section 4. Section 4.1 focuses on hermitian XYZ and XXZ models (section 4.1.1), and prototype examples from the 14 gauge-inequivalent classes of models in [58] (section 4.1.2). The latter sub-section also contrasts training behaviour for integrable and non-integrable models. Section 4.2 presents a preliminary search strategy for new models which we illustrate within a toy-model setting: rediscovering the two integrable subclasses of 6-vertex Hamiltonians. Section 5 discusses ongoing and future research directions.
An Overview of Spin Chains and Quantum Integrability
Quantum integrability, like its classical counterpart, hinges on the presence of a tower of conserved charges in involution, i.e. operators that mutually commute. In this paper we will consider quantum integrable spin chains, and the goal of this section is to introduce such systems and provide a brief overview of the R-matrix construction in their context.
The Hilbert space of the spin chain is an L-fold tensor product ℋ = V_1 ⊗ V_2 ⊗ · · · ⊗ V_L. The Hamiltonian H of a spin chain with nearest-neighbour interaction is a sum of two-site Hamiltonians H_{i,i+1},

H = Σ_{i=1}^{L} H_{i,i+1},   (2.1)

where we assume periodic boundary conditions: H_{L,L+1} ≡ H_{L,1}. The chrestomathic example of an integrable spin chain is the spin-1/2 XYZ model,

H = Σ_{i=1}^{L} Σ_α J_α S_i^α S_{i+1}^α,   (2.2)

where α = {x, y, z} and the S_i^α are Pauli matrices acting in the two-dimensional space V_i = C² of the i-th site. In the particular case J_x = J_y it reproduces the XXZ model, while in the case of three equal coupling constants J_x = J_y = J_z = J the Hamiltonian reduces to the XXX spin chain. These famous magnet models are just a few examples of integrable spin chains, and now we turn to the general construction.
The central element for the whole construction and proof of quantum integrability is the R-matrix operator R_{ij}(u), which acts in the tensor product V_i ⊗ V_j of two spin sites and satisfies the Yang-Baxter equation

R_12(u − v) R_13(u) R_23(v) = R_23(v) R_13(u) R_12(u − v),   (2.3)

where the operators on the left and right sides act in the tensor product V_1 ⊗ V_2 ⊗ V_3. The R-matrix is assumed to be an analytic function of the spectral parameter u. Further, in order to guarantee locality of the interaction in (2.1), it must reduce to the permutation operator P_{ij} when evaluated at u = 0, i.e.

R_{ij}(0) = P_{ij}.   (2.4)
This condition will be referred to as regularity in the following sections. We next turn to defining the monodromy matrix T_a(u). This matrix acts on the spin chain plus an auxiliary spin site labeled by a, with Hilbert space V_a ≃ C^d. It is defined as a product of R-matrices R_{a,i}(u) acting on the auxiliary site and one of the spin chain sites, and is given by

T_a(u) = R_{a,L}(u) R_{a,L−1}(u) · · · R_{a,1}(u).   (2.5)

The transfer matrix T(u) ∈ End(ℋ) is obtained by taking a trace over the auxiliary vector space V_a:

T(u) = tr_a T_a(u).   (2.6)

From the Yang-Baxter equation one can derive the following RTT relation constraining the monodromy matrix entries:

R_{ab}(u − v) T_a(u) T_b(v) = T_b(v) T_a(u) R_{ab}(u − v).   (2.7)

This condition can be used to prove that the transfer matrices commute at different values of the spectral parameter:

[T(u), T(v)] = 0.   (2.8)

The above condition implies that the transfer matrix T(u) encodes all the commuting charges Q_i as a series expansion in u:

log T(u) = Σ_i Q_i u^{i−1}.   (2.9)

Hence we have

Q_i ∝ (d^{i−1}/du^{i−1}) log T(u) |_{u=0}.   (2.10)

The Hamiltonian density H_{i,i+1} introduced earlier in equation (2.1) can be generated from the R-matrix using

H_{i,i+1} = P_{i,i+1} (d/du) R_{i,i+1}(u) |_{u=0},   (2.11)

where P_{i,i+1} is the permutation operator between sites i, i+1. Also, we emphasize that while the charges are conventionally computed in Equation (2.10) at u = 0, this computation can equally well be done at generic values of u to extract mutually commuting charges. The only difference is that we no longer recover the Hamiltonian directly as one of the commuting charges.
The Yang-Baxter equation (2.3) should be supplemented with certain analytical properties of the R-matrix.
For example, as was already mentioned, we assume that the R-matrix is a holomorphic function of the spectral parameter u and equal to the permutation matrix at u = 0 (2.4). Furthermore, one can impose extra physical constraints like braided unitarity, R_12(u) R_21(−u) ∝ 1, crossing symmetry, and possibly additional global symmetries. We shall also impose restrictions on the form of the resulting Hamiltonian. These restrictions may follow from requirements such as hermiticity and from symmetries of the spin chain. In addition, given a solution of the Yang-Baxter equation, one can generate a whole family of solutions by acting with the following transformations:

1. Similarity transformation:

R(u) → (Ω ⊗ Ω) R(u) (Ω ⊗ Ω)⁻¹,

where Ω ∈ Aut(V) is a basis transformation. It transforms the commuting charges as Q_n → Ω^{⊗L} Q_n (Ω^{⊗L})⁻¹.
It transforms the commuting charges as 2. Rescaling5 of the spectral parameter : u → cu, ∀ c ∈ C.This leads to a scaling in the charges as Multiplication by any scalar holomorphic function f (u) preserving regularity condition : R(u) → f (u)R(u), f (0) = 1.This degree of freedom can be used to set one of the entries of R-matrix to one or any other fixed function.
4. Permutation, transposition and their composition: P R(u) P, R(u)^T, P R(u)^T P. They transform the commuting charges as well. The Hamiltonian H is transformed to P H P, P H^T P, H^T respectively.
In general, one should always be careful of these redundancies when comparing a trained solution against analytic results. Following [58], we shall fix the above symmetries when presenting our results in section 4.1.2 and appendix A. We look at gauge-equivalent solutions as well, by introducing similarity transformations in 4.1.2.
Reviewing two-dimensional R-matrices of the difference form
We will illustrate the work of our neural network using two-dimensional spin chains as a playground.
The regular difference-form integrable models in this context have recently been classified using the Boost operator in [58]. Here, we present a brief overview of the methods and results of that work.
The boost automorphism method allows one to find integrable Hamiltonians by reducing the problem to a set of algebraic equations. Let us focus on spin chains with two-dimensional local space V = C² and nearest-neighbour Hamiltonian (2.1). One formally defines the boost operator B [65] as

B = Σ_i i H_{i,i+1},

which generates the higher charges Q_n, n ≥ 3, from the Hamiltonian Q_2 via action by commutation:

Q_{n+1} ∝ [B, Q_n].

This was used in [58] to successfully classify all 2-dimensional integrable Hamiltonians by solving the system of algebraic equations arising from imposing vanishing conditions on commutators between the Q_i, up to some finite value of i. Surprisingly, it turns out that for the considered models, the vanishing of the first non-trivial commutator [Q_2, Q_3] = 0 is a sufficient condition to ensure the vanishing of all other commutators. Then, making an ansatz for the R-matrices and solving the Yang-Baxter equation in the small-u limit, the authors constructed the corresponding R-matrices and confirmed the integrability of the discovered Hamiltonians. The solutions can be organized into two classes: XYZ-type and non-XYZ type, distinguished by the non-zero entries appearing in the Hamiltonian. A concrete numerical illustration of the boost construction is sketched below.
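As our own minimal check (not code from [58]), one can verify [Q_2, Q_3] = 0 numerically for the periodic XXX chain, where the boosted charge reduces to Q_3 ∝ Σ_i [H_{i,i+1}, H_{i+1,i+2}]:

```python
import numpy as np
from functools import reduce

L = 6  # periodic chain length; Hilbert space dimension 2^L = 64
paulis = [np.array(m, dtype=complex) for m in
          ([[0, 1], [1, 0]], [[0, -1j], [1j, 0]], [[1, 0], [0, -1]])]
id2 = np.eye(2, dtype=complex)

def site_op(op, i):
    """Embed a one-site operator at site i (mod L) into the full chain."""
    ops = [id2] * L
    ops[i % L] = op
    return reduce(np.kron, ops)

def h_pair(i):
    """XXX density H_{i,i+1} = sum_a S^a_i S^a_{i+1}, with Pauli S^a."""
    return sum(site_op(s, i) @ site_op(s, i + 1) for s in paulis)

Q2 = sum(h_pair(i) for i in range(L))
Q3 = sum(h_pair(i) @ h_pair(i + 1) - h_pair(i + 1) @ h_pair(i)
         for i in range(L))
print(np.abs(Q2 @ Q3 - Q3 @ Q2).max())   # ~1e-13, i.e. [Q2, Q3] = 0
```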
Generically, all non-zero entries would be complex valued. Hermiticity, for the actual XYZ model and its XXZ and XXX limits, places additional constraints. Integrability also imposes additional algebraic constraints between the non-zero entries, none of which involve complex conjugation, in contrast to hermiticity. Amongst the XYZ-type models, there are 8 distinct solutions, each corresponding to some set of algebraic constraints among the matrix elements of H.

Figure 1. Schematic of a multi-layer perceptron: the input is fed via the Input layer (green) to a series of three Fully Connected layers (purple) containing four neurons each and finally feeds into the Output layer of three neurons (orange). Every neuron in a given layer receives inputs from all neurons in the preceding layer, and in turn, its output is passed as input to all neurons in the next layer.
Neural Networks for the R-matrix
This section reviews several essential facts about neural networks before presenting our own network design for deep learning and the associated custom loss functions. Further details regarding the network architecture and training schedule can be found in Appendix B.
An overview of Neural Networks
The central computation in this paper is the utilization of neural networks to construct R-matrices that correspond to given integrable spin chain Hamiltonians. We therefore furnish a lightning overview of neural networks in this section, along with the details of our implementation of the neural network solver for Yang-Baxter equations. We will focus on dense neural networks, also known as multi-layer perceptrons (MLPs), schematically displayed in Figure 1. These networks consist of an input layer a_in ∈ R^{n_0}, followed by a series of fully connected layers, and terminate in an output layer a_out ∈ R^{n_{L+1}}. Data is read in to the network at the input layer and the output is collected at the output layer. There are L fully connected layers in this network, where the l-th layer contains n_l neurons. Each neuron a_m^{(l)} in the l-th fully connected layer receives inputs from all the neurons in the previous (l−1)-th layer, and the output of the neuron is in turn fed as an input to the neurons in the succeeding layer:

a^{(l)} = h( w^{(l)} a^{(l−1)} + b^{(l)} ),   l = 1, . . . , L,   (3.1)
where w^{(l)} ∈ M(n_l, n_{l−1}, R) is a weight matrix, b^{(l)} ∈ R^{n_l} a bias vector, and h(z) is in general a non-linear, non-polynomial function known as the activation function, acting component-wise:

h((z_1, . . . , z_n)) = (h(z_1), . . . , h(z_n)).

In (3.1) we also identify a^{(0)} = a_in and a^{(L+1)} = a_out with the input and output layers respectively. Introducing shorthand notation for the affine transformations in equation (3.1) as A^{(l)}(a^{(l−1)}) ≡ w^{(l)} a^{(l−1)} + b^{(l)}, the neural network a_out(a_in) : R^{n_0} → R^{n_{L+1}} can be expressed as a composition of affine transformations and activation functions:

a_out = A^{(L+1)} ∘ h ∘ A^{(L)} ∘ · · · ∘ h ∘ A^{(1)} (a_in).

The output function of the neural network is tuned by tuning the weights and biases. It is by now well established that such neural networks are a highly expressive framework capable of approximating extremely complex functions, and indeed there exists a series of mathematical proofs which attest to their universal approximation property, e.g. [30][31][32][66][67]. This property, along with the feature learning capability of deep neural networks, is the key driver of the automated search for R-matrices which we have implemented here.
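Concretely, the composition above with swish activations is just a few lines of code (a toy sketch of ours, with He-initialized random weights):

```python
import numpy as np

def swish(z):
    return z / (1.0 + np.exp(-z))

def forward(a_in, weights, biases):
    """a_out = A^(L+1) o h o A^(L) o ... o h o A^(1) (a_in), with
    h = swish on the hidden layers and a linear output layer."""
    a = a_in
    for w, b in zip(weights[:-1], biases[:-1]):
        a = swish(w @ a + b)
    return weights[-1] @ a + biases[-1]

rng = np.random.default_rng(0)
sizes = [1, 50, 50, 1]                     # input u -> two hidden -> output
ws = [rng.normal(0.0, np.sqrt(2.0 / m), (n, m))
      for m, n in zip(sizes, sizes[1:])]   # He initialization
bs = [np.zeros(n) for n in sizes[1:]]
print(forward(np.array([0.3]), ws, bs))
```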
In the standard supervised learning setup, a network is trained on a dataset {(x_i, y_i)} by minimizing a loss function that penalizes the deviation of a_out(x_i) from y_i while allowing for the possibility of outliers. A popular class of loss functions is

L_q = (1/N) Σ_{i=1}^{N} | y_i − a_out(x_i) |^q,

where q = 1 corresponds to the mean average error and q = 2 to the mean square error, respectively. We will shortly see that, in contrast to this classic supervised learning set-up, our loss functions impose constraints on the neural network output functions rather than training on a dataset of explicit input/output values for the functions R_{ij}(u); we directly sample the network's output at various values of u for training.
Machine Learning the R-Matrix
We are now ready to describe our proposed methodology for constructing R-matrices R(u) by optimizing a neural network using appropriate loss functions. An R-matrix has elements R_{ij}(u), at least some of which are non-zero. In the following, we shall focus solely on the R_{ij}(u) which are not identically zero as functions of u. We also restrict the training to real values of the spectral parameter u ∈ Ω = (−1, 1) and exclusively use holomorphic activation functions in order to guarantee the holomorphy of the resulting R-matrix. We have decomposed each matrix element as

R_{ij}(u) = a_{ij}(u) + i b_{ij}(u)

in order to learn complex-valued functions R_{ij} while training with real MLPs on the real interval Ω. In this paper, purely for uniformity, we have modeled each such a_{ij}(u) and b_{ij}(u) using an MLP containing two hidden layers of 50 neurons each and one linear activated output neuron. We emphasize that the identification of a_{ij}(u) and b_{ij}(u) with the real and imaginary parts of R_{ij}(u) is only valid over the real line, and these functions separately continue into holomorphic functions over the complex plane whose sum R_{ij}(u) is holomorphic by construction. Now, R_{ij}(u) is required to solve the Yang-Baxter equation (2.3) subject to (2.4). We may also place constraints on the corresponding two-particle H given by (2.11). These criteria are encoded into loss functions which the R-matrix R_{ij}(u) aims to minimize by training. For example, in order to impose the Yang-Baxter equation (2.3) for all values of the spectral parameter u from the set Ω ⊂ C, we introduce the following loss function:

L_YB = Σ_{u,v ∈ Ω} ‖ R_12(u−v) R_13(u) R_23(v) − R_23(v) R_13(u) R_12(u−v) ‖².   (3.7)

In practice of course, one cannot scan across the space of all functions and is restricted to a hypothesis class. Here the hypothesis class is implicitly defined by the design of the neural network, the choice of the number of layers and the number of neurons in each layer, as well as the activation function. Varying the weights and biases of the neural network allows us to scan across this hypothesis space. While in general the exact R-matrix may not belong to this hypothesis class, and the loss function would then be strictly positive, deep learning may allow the learned function to approach the desired R-matrix to a high degree of accuracy. In summary, if we restrict to a hypothesis class which does not include an actual solution of the Yang-Baxter equation, then the minimum of the loss is some ε > 0, where ideally ε would be small, indicating that we have obtained a good approximation to the true solution. We expect that scanning across wider and wider hypothesis classes would bring ε closer and closer to zero. Further, while the RTT equation (2.7) follows from the Yang-Baxter equation (2.3), it can also be imposed separately as a loss function on the network in order to improve the training:

L_RTT = Σ_{u,v ∈ Ω} ‖ R_{ab}(u−v) T_a(u) T_b(v) − T_b(v) T_a(u) R_{ab}(u−v) ‖².

Next, we have constraints that must be imposed on the R-matrix at u = 0. Following equation (2.4) and equation (2.11) in the previous section, we require that⁷

R(0) = P,   H = P R′(0),

where H is the two-particle Hamiltonian. They both can be encoded in the loss functions

L_reg = ‖ R(0) − P ‖²,   (3.12)
L_H = ‖ P R′(0) − H ‖².   (3.13)

Here, we should mention that we have some flexibility in the manner in which we implement the Hamiltonian constraint L_H.
Firstly, one can fix the exact numerical values for the entries of H and learn the corresponding R-matrix. We will also consider extensions of this loss function where we supply only algebraic constraints, restricting the search space for target Hamiltonians to those with certain symmetries or belonging to certain gauge-equivalence classes. In general, such Hamiltonian constraints give us the requisite control to converge to the different classes of integrable Hamiltonians, and we will refer to this regime as exploration by attraction.
In the same spirit, when working with the XYZ spin chain or its XXZ and XXX limits, we also require that the two-particle Hamiltonian computed from R(u) is hermitian, i.e., H = H†. We impose this condition by means of the loss function

    L_† = Σ_{i,j} | H_ij − (H†)_ij | ,  (3.15)

where H_ij are the matrix elements of H. We shall therefore train our neural network with the total loss function, of the schematic form

    L = L_YBE + λ_reg L_reg + λ_RTT L_RTT + λ_H L_H + λ_† L_† ,  (3.16)

where putting any of the coefficients λ_α, for α ∈ {RTT, H, †}, to zero removes the corresponding loss term from being trained.
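To make the structure of these losses concrete, the following is a minimal PyTorch sketch of (3.7), (3.12) and (3.13) for a two-dimensional local space. The helper names, the use of PyTorch, and the H = P R′(0) convention are our illustrative assumptions, not the paper's actual code.

```python
import torch

I2 = torch.eye(2, dtype=torch.cfloat)
# Two-site permutation operator P on C^2 x C^2.
P = torch.tensor([[1, 0, 0, 0],
                  [0, 0, 1, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1]], dtype=torch.cfloat)

def R12(R):  # embed a 4x4 two-site R on sites (1,2) of three sites
    return torch.kron(R, I2)

def R23(R):  # sites (2,3)
    return torch.kron(I2, R)

def R13(R):  # sites (1,3): conjugate the (1,2) embedding by the (2,3) swap
    P23 = torch.kron(I2, P)
    return P23 @ R12(R) @ P23

def norm1(A):  # ||A|| = sum of |entries|, as in the text
    return A.abs().sum()

def ybe_loss(R, u, v):
    # Yang-Baxter defect at a single sampled pair (u, v); in practice this
    # is averaged over the mini-batch of sampled spectral parameters.
    lhs = R12(R(u - v)) @ R13(R(u)) @ R23(R(v))
    rhs = R23(R(v)) @ R13(R(u)) @ R12(R(u - v))
    return norm1(lhs - rhs)

def u0_losses(R, dR, H_target):
    # Regularity (3.12) and Hamiltonian (3.13) constraints at u = 0;
    # dR is a callable returning R'(u), e.g. obtained by autograd.
    zero = torch.tensor(0.0)
    L_reg = norm1(R(zero) - P)
    L_H = norm1(P @ dR(zero) - H_target)  # assumes H = P R'(0)
    return L_reg, L_H
```

In use, one would average ybe_loss over the sampled (u, v) points and add the weighted constraint terms as in (3.16).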
The loss function (3.16) produces a very complicated landscape, and the NN should approach its minimum during the training. Usually, this search is performed with gradient-based optimization methods.
One might be skeptical about getting stuck in some local minimum instead of finding the global minimum of such a complicated loss function in a very high-dimensional hypothesis space. However, recent analysis revealed that deep NNs end up having all their local minima at almost the same value as the global one [68], [69]. In other words, there are many different configurations of weights and biases resulting in a function of similar accuracy as the one corresponding to the global minimum. There are also many saddle points, some of which have large plateaus and only a small fraction of descent directions, making them practically indistinguishable from local minima. However, most of their losses are close to the global minimum as well. Those with significantly higher losses have a larger number of descent directions and can thus be escaped during the learning [68], [69].

7 It is tempting to consider a variation of our method which involves residual learning à la the ResNet family of networks [64]. As opposed to learning deviations from identity, which is typically the approach adopted in the ResNet architecture, we may define R(u) = P + R̃(u), where R̃(u) is the target function of the neural network, which we design to identically output R̃(0) = 0. While this is possible in principle, in practice it turns out that, since the neural network is learning a function in the vicinity of P, which trivially minimizes the Yang-Baxter equation and all other constraints imposed, it almost invariably collapses to the trivial solution and learns R̃(u) = 0 across all u. It would nonetheless be interesting to identify architectures that successfully learn non-trivial R-matrices in this way, and this is in progress.
We find that the training converges to yield simultaneously low values for each of the above losses, as applicable. Further, while the hyper-parameters {λ} are tunable experimentally, setting them all to 1 is a useful default. However, for fine-tuning the training it is also useful to adjust these parameters to reflect the specific task at hand. We provide the requisite details in Section 4, where we discuss specific training methodologies and the corresponding results. There we will also discuss a new loss function which is useful for fine-tuning the training to access new integrable Hamiltonians H in the neighbourhood of previously known integrable Hamiltonians H_o; we will call this regime exploration by repulsion.
As a final observation on the choice of activation functions, we note that at the level of the discussion above, any holomorphic activation function such as sigmoid, tanh, or sinh would suffice. In practice we find that the training converges faster and more precisely using the swish activation [62].
This is given by

    swish(x) = x · sigmoid(x) = x / (1 + e^(−x)) .  (3.18)

We have provided some comparison tests across activation functions in Appendix B.2.
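In code, this is a one-liner (our illustrative PyTorch rendering, not taken from the paper):

```python
import torch

def swish(x):
    # swish(x) = x * sigmoid(x); its natural complex extension is
    # holomorphic away from the isolated poles of the sigmoid
    return x * torch.sigmoid(x)
```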
Results
We present our results for learning R-matrices within the restricted setting of two-dimensional spin chains of difference form. Our analysis is divided into three parts. First, we learn the hermitian XYZ model and its well-known XXZ and XXX limits, comparing our deep-learning results against the analytic plots. Then we remove hermiticity and reproduce all 14 classes of solutions from [58]. The last set of experiments demonstrates how our neural network in the Explorer mode can search for integrable models by exploring the space of Hamiltonians.
Specific integrable spin chains
In this sub-section we look at specific physical models by imposing tailored conditions on the Hamiltonian derived from the trained R-matrix. This includes constraints on the Hamiltonian entries at u = 0, and hermiticity of the Hamiltonian.
Hermitian models: XYZ spin chain and its isotropic limits
Imposing hermiticity on the 8-vertex Hamiltonian, we learn the classic XYZ integrable spin chain and its symmetric XXZ limit. We start with an 8-vertex model ansatz for the R-matrix, with non-zero entries only on the diagonal and the anti-diagonal, and impose the loss functions for the YBE, the Hamiltonian constraint, regularity, and hermiticity (see equation (3.16)). The target Hamiltonians comprise a 2-parameter family H_XYZ(J_x, J_y, J_z) of the standard form

    H_XYZ = J_x σ^x ⊗ σ^x + J_y σ^y ⊗ σ^y + J_z σ^z ⊗ σ^z ,

where we have set J_x + J_y equal to 2. The symmetric limit, the XXZ model, is realised for J_x = J_y = 1.
A useful reparametrisation for these models is in terms of parameters (η, m) [70]. The analytic solution for the XYZ R-matrix is then given in terms of Jacobi elliptic functions, with ω = 2 sn(2η | m), where m is the elliptic modular parameter.8 Our model consistently learns the R-matrices for the XYZ model for generic values of the free parameters η, m. Figure 2 gives the time evolution of the different loss terms during training. Figure 3 plots the R-matrix component ratios with respect to R_12 as functions of the spectral parameter and compares them with the corresponding analytic functions for a generic choice of deformation parameters η = π/3 and m = 0.6. Letting m = 0, we recover the XXZ models for generic values of η.
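When fixing target values of (η, m), the elliptic quantity ω entering the analytic solution can be evaluated numerically; a small sketch using SciPy (the choice of tooling here is ours, not the paper's):

```python
import numpy as np
from scipy.special import ellipj

def omega(eta, m):
    # omega = 2 sn(2*eta | m); scipy's ellipj takes the parameter m directly
    sn, _, _, _ = ellipj(2 * eta, m)
    return 2 * sn

print(omega(np.pi / 3, 0.6))  # the parameter choice used for Figure 3
```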
Two-dimensional classification
Here, we lift the hermiticity constraint on the Hamiltonian, thus allowing for more generic integrable models. As we shall see below, the neural network successfully learns all 14 classes [58] of difference-form integrable (not necessarily hermitian) spin chain models with a 2-dimensional space at each site. The R-matrices corresponding to each of these classes are written down explicitly in appendix A. Towards the end of this sub-section, we also present results for learning solutions in a generic gauge, obtained by similarity transformations of integrable Hamiltonians from the aforementioned 14 classes. We discuss the results in two parts: XYZ type models and non-XYZ type models.
The first set of Hamiltonians under consideration are generalisations of the XYZ model (discussed in the previous sub-section), with at most 8 non-zero elements in the Hamiltonian density, where the coefficients can take generic complex values. The XYZ model corresponds to a particular subset of these coefficients. As discussed in section 2.1, these models can be further sub-divided into four, six, seven and eight vertex models. On the other hand, there are 6 distinct classes of non-XYZ type solutions. Here we discuss the training results for one example each from the XYZ and non-XYZ type models, since the training behaviour is similar within these two types; the rest of the models are presented in Appendix A. Figure 4 plots the R-matrix components as ratios with respect to R_00 for a generic 6-vertex model with d_1 = d_2 = 0 and a_1 = a_2. The figure also includes the absolute and relative errors with respect to the corresponding analytic R-matrix (see equation (A.2)).
From the non-XYZ classes, we focus on a family of 5-vertex Hamiltonians whose couplings must satisfy an additional integrability condition. Training with the Hamiltonian constraint (3.13) for generic values a_1 = 0.5, a_2 = 0.3, a_3 = 0.9, a_4 = 1.5, a_5 = 0.4 satisfying this integrability condition, we reach an accuracy of the order of 0.1% after training over ∼100 epochs.
Figure 5 plots the trained R-matrix components and the absolute errors with respect to the analytic R-matrices in equation (A.20), for the above choice of target Hamiltonian.

We have also surveyed more general solutions beyond the representative solutions of the 14 classes à la [58], obtained by changing the gauge,

    R_Ω(u) = (Ω ⊗ Ω) R(u) (Ω ⊗ Ω)^(−1) .  (4.9)

If R(u) satisfies the Yang-Baxter equation, so does R_Ω(u). A generic similarity matrix

    Ω = ( v_11  v_12 )
        ( v_21  v_22 )   (4.10)

with non-zero off-diagonal entries v_12, v_21 ≠ 0 results in conjugated R-matrices and Hamiltonians with all 16 entries non-zero. We trained 16-vertex Hamiltonians resulting from the XYZ model in the general gauge and recovered the corresponding R-matrix with a relative error of order O(0.1%). Generic XYZ type models, as well as non-XYZ type models, gave similar results for different gauges. Figure 6 plots the learnt R-matrix components for the XXZ model with η = π/3 conjugated by the matrix Ω. For comparison with analytic formulae, we normalised our results by taking ratios with respect to a fixed component R_00, i.e. we plot R_ij/R_00. As a result of starting from the XXZ model, the R-matrix in the general gauge has a highly symmetric form, and thus we only plot the entries R_00, R_01, R_03, R_10, R_11, R_12, R_30. Since there exists an overall normalisation ambiguity, we should only compare ratios of R-matrix entries with the analytic solution written in the same gauge.
These are the models with Hamiltonians H_6v,1, H_6v,2 in appendix A. Figure 7a compares the training for a generic Hamiltonian with coefficients satisfying none of the above conditions against the training for the H_6v,1-type model. We see that while the Hamiltonian constraint (3.13) saturates to similarly low values in both cases, the Yang-Baxter loss saturates at a value approximately one order of magnitude higher. Similar behavior holds for the non-XYZ type models as well: the training for a generic class-4 Hamiltonian with coefficients a_1 = 0.5, a_2 = 0.3, a_3 = 0.4, a_4 = 0.9 (see Equation (A.19)) and for a non-integrable deformation of it is shown in Figure 7b. One can further discriminate between integrable and non-integrable models by checking the pointwise values of the Yang-Baxter losses in the two cases. Let us define the metric (4.13), which measures the relative error in the approximate solution of the Yang-Baxter equation at each point (u, v). This metric is evaluated for the trained R-matrix for both integrable and non-integrable models in Figure 8 (for the H_6v,1 model) and Figure 9 (for the H_class−4 model). We see that the normalized error can be up to two orders of magnitude larger in the non-integrable case. Note that, irrespective of the choice of Hamiltonian, there are two lines, along u = v and v = 0, on which the Yang-Baxter equation is trivially satisfied due to regularity. This metric can also detect anomalous situations in which the learned solution, having once satisfied the Hamiltonian constraint at u = 0, quickly evolves towards some other true solution of the Yang-Baxter equation, producing a relatively small YB loss (3.7). In this case we see a large spike in (4.13) around zero, which indicates that the found solution is spurious.
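Since the exact form of (4.13) is not reproduced above, the following sketch shows one natural normalization of the pointwise defect (our illustrative choice, in plain NumPy; Rfun stands for the trained R-matrix):

```python
import numpy as np

I2 = np.eye(2)
P = np.array([[1, 0, 0, 0],
              [0, 0, 1, 0],
              [0, 1, 0, 0],
              [0, 0, 0, 1]])

def R12(R): return np.kron(R, I2)
def R23(R): return np.kron(I2, R)
def R13(R):
    P23 = np.kron(I2, P)
    return P23 @ R12(R) @ P23

def yb_err(Rfun, u, v, eps=1e-12):
    # Relative Yang-Baxter defect at (u, v); an illustrative normalization,
    # not necessarily identical to the paper's Eq. (4.13).
    A = R12(Rfun(u - v)) @ R13(Rfun(u)) @ R23(Rfun(v))
    B = R23(Rfun(v)) @ R13(Rfun(u)) @ R12(Rfun(u - v))
    return np.abs(A - B).sum() / (np.abs(A).sum() + np.abs(B).sum() + eps)

# Evaluating yb_err on a (u, v) grid and plotting log10 of the result
# produces maps of the kind shown in Figures 8 and 9.
```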
The above considerations show that one can define metrics which together indicate the closeness of a given system to an integrable Hamiltonian. However, a final conclusion in the binary form "integrable/non-integrable" regarding a given spin chain can be made only asymptotically: increasing the number of neurons, the density of sample points, and the training time, the normalized YB loss (4.13) decreases uniformly to zero for integrable Hamiltonians, while in the non-integrable case it remains bounded from below by some positive value. Let us also stress that this problem is specific to the solver mode, where we stick to a given Hamiltonian; in the case of relaxed Hamiltonian restrictions, as we will see in the next section, the neural network simply moves to a true solution of the Yang-Baxter equation.
Explorer: new from existing
In this section we present two kinds of experiments that illustrate how the neural network described above can be used to scan the landscape of two-dimensional spin chains for integrable models. The training schedule adopted in this section is visualized in Figure 10 and relies essentially on two new ingredients which distinguish it from the previous solver framework: warm-start and repulsion.
We will illustrate each by an example. In the first case we shall simply use warm-start, and in the second, we shall combine warm-start with repulsion. Finally, we shall use unsupervised learning methods such as t-SNE and Independent Component Analysis to identify distinct classes of Hamiltonians within the set of integrable models thus discovered. Collectively, these strategies make up our explorer framework.
The first key new ingredient is warm-start initialization. As mentioned previously, the standard solver framework of the previous section uses He initialization [64] to instantiate the weights and biases of the neural network. In warm-start initialization, we use the knowledge of integrable systems previously discovered by the neural network to find new systems in their vicinity. The idea, at least intuitively, is that it should be possible to find new integrable systems more efficiently than with random initialization by exploring the vicinity in weight-space of previously determined solutions using an iterative procedure such as gradient-based optimization. On doing so, we find a significant acceleration in training convergence, with new solutions being discovered typically in about 5 epochs of training after warm-start initialization.
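In code, warm-starting amounts to cloning a converged model and resuming optimization at a reduced learning rate; a minimal sketch (PyTorch; loss_fn is a hypothetical closure over the shifted target Hamiltonian, and all names are illustrative):

```python
import copy
import torch

def warm_start(trained_model, loss_fn, lr=1e-4, epochs=5):
    # Clone a previously converged solver model and retrain briefly
    # in the vicinity of the old solution.
    model = copy.deepcopy(trained_model)
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model)
        loss.backward()
        opt.step()
    return model
```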
For definiteness, we consider the hermitian XYZ model discussed earlier in Section 4.1.1. This has a two-parameter family of solutions, corresponding to independent choices for the parameters η and m of the Jacobi elliptic function, as seen from Equation (4.4). The XXZ model is embedded into this space as the m = 0 subspace of solutions.
We now describe how the above strategy can be used to quickly generate a cluster of XYZ R-matrices starting from a particular one, which we choose from the XXZ subclass. We begin with pre-training our neural network using the solver mode of the previous section, but with the learning rate of the Adam optimizer set to 10^(−3). The pre-training is stopped when all losses saturate below O(10^(−3)), which typically requires about 50 epochs of training. We carried out this pre-training setting arbitrary reference values of η, but with m fixed to zero; the results shown here correspond to η = π/4. The weights thus obtained correspond to our warm-start values. We then shift the target Hamiltonian values to correspond to η → η + δη, where the δη are randomly chosen O(10^(−1)) numbers, and m can take on non-zero values as well. We then retrain the model with a smaller learning rate, 10^(−4), for a few epochs until all loss terms fall to O(10^(−4)), which typically takes about 5 epochs, upon which we update the target Hamiltonian by updating η and m and continue training. This strategy generates about 10 XYZ models within the same time-scale (i.e. about 100 to 200 epochs of training) as we earlier needed for a single model. For best results, while we randomly update η, we systematically anneal the modular parameter m upwards.

Our next key new ingredient for the Explorer mode is repulsion, which is added to the previous strategy of warm-start initialization. In principle, it should allow us to rediscover all 14 classes of integrable spin chains. However, for the sake of simplicity, we illustrate it here with a toy-model example and return to the general analysis later [71]. Namely, we consider the class of 6-vertex Hamiltonians with unrestricted a_1 and a_2. It includes both integrable 6-vertex classes H_6v,1, H_6v,2 (A.2, A.4) as well as non-integrable models. In order to mimic the general situation, where all integrable classes intersect at zero, we begin by pre-training the neural network to a Hamiltonian belonging to the intersection of the classes H_6v,1 and H_6v,2, i.e. one whose matrix elements satisfy the constraints a_1 = a_2 and a_1 + a_2 = b_1 + b_2 simultaneously. The results mentioned in this paper correspond to one particular such setting. Having arrived at this model, we would like to navigate to neighboring models, not by specifying target values of the Hamiltonian, but by scanning the neighborhood of the current model. To do so, we employ a two-step strategy. First, we navigate to two9 new 6-vertex integrable Hamiltonians by randomly scanning the vicinity of the current model without giving specific target values. We use these new models as our warm-start points. From each of them, we navigate away by using the repulsion loss term (3.17) for 1 epoch, followed by training for another 5 epochs. Note that in this step we still train within the restricted class of 6-vertex models by fixing the corresponding entries of the R-matrix to zero. We repeat this schedule 25 times starting from either of the saved models. In this way, we generate fifty 6-vertex integrable Hamiltonians with accuracy at the 1% level.10 The training curve displaying how the Yang-Baxter loss evolves is shown in Figure 12.

9 We stop the scanning once we have found a representative from each of the two classes, because we know that there are only two integrable families here. In the general case one should of course generate sufficiently many points in order to find all classes. We will return to this subtle point in [71].

10 If we further train the individual models for more epochs, we can improve the accuracy of the obtained solutions to levels similar to those obtained in the examples presented in Section 4.1.
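The exact form of the repulsion term (3.17) is not reproduced above; purely as an illustration of the mechanism, one simple choice that penalizes proximity of the learned two-site Hamiltonian to a previously found one looks as follows (PyTorch, our hypothetical rendering):

```python
import torch

def repulsion_loss(H, H_ref, eps=1e-6):
    # Illustrative repulsion term in the spirit of Eq. (3.17): the loss
    # grows as the learned Hamiltonian H approaches the reference H_ref,
    # pushing the training towards genuinely new models.
    return 1.0 / ((H - H_ref).abs().sum() + eps)
```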
The learnt models are classified into two classes using clustering methods, as shown in Figure 13.
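The unsupervised methods named earlier are available off the shelf in scikit-learn; a minimal sketch of the clustering step (the array Hs and its shape are illustrative stand-ins for the flattened Hamiltonians found by the Explorer):

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.manifold import TSNE

# Hs: (n_models, n_features) array of flattened two-site Hamiltonians,
# real and imaginary parts stacked; a random stand-in is used here.
Hs = np.random.randn(50, 32)

emb_ica = FastICA(n_components=2).fit_transform(Hs)
emb_tsne = TSNE(n_components=2, perplexity=10).fit_transform(Hs)
# Scatter-plotting either embedding separates the two 6-vertex families.
```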
Conclusions and Future directions
In this paper we constructed a neural network for solving the Yang-Baxter equation (2.3) in various contexts. Firstly, it can learn the R-matrix corresponding to a given integrable Hamiltonian, or search for an integrable spin chain and the corresponding R-matrix within a certain class specified by imposed symmetries or other restrictions. We refer to this as the solver mode. Next, in the explorer mode, it can search for new integrable models by scanning the space of Hamiltonians.
We demonstrated the use of our neural network on two-dimensional spin chains of difference form.
In the solver mode, the network successfully learns all fourteen distinct classes of R-matrices identified in [58] to accuracies of the order of 0.01-0.1%. We also demonstrated the work of the Explorer mode, restricting the search to the space of spin chains containing both classes of 6-vertex models as well as non-integrable Hamiltonians. Starting from the Hamiltonian at the intersection of the two classes, the Explorer found 50 integrable Hamiltonians which, after clustering, clearly fall into two families corresponding to the two integrable classes of the 6-vertex model. Working in the explorer mode, we find that warm-starting our training from the vicinity of a previously learnt integrable model greatly speeds up convergence, allowing us to identify typically about 50 new integrable models in the same time that random initialization takes to converge to a single model.
The main focus of this paper was creating the neural network architecture and demonstrating its robustness in various solution-generating frameworks, using known integrable models as a testing ground.
However, we expect that this program can be extended to various scenarios, such as the exploration and classification of the space of integrable Hamiltonians in dimensions greater than two. This would be of great interest, since the general classification of models is currently limited to two dimensions. Our experiments with exploration and clustering are a promising starting point in this regard. In our setup the strategy is quite straightforward [71]: because all integrable families of Hamiltonians can be multiplied by an arbitrary scalar, we only need to scan the Hamiltonians on the unit sphere, which is compact. Scanning over a sufficiently dense set of points on the sphere will allow us to identify integrable Hamiltonians from various classes (a minimal sampling sketch is given below). Then we can use the Explorer to reconstruct the whole of the corresponding families and perform clustering in order to identify them. On another footing, it would also be interesting to extend our study to R-matrices of non-difference form, as these are particularly relevant to the AdS/CFT correspondence [72-75].
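The sphere sampling itself is elementary; a minimal sketch (our illustration, with dimensions corresponding to a flattened real parametrization of a two-site Hamiltonian):

```python
import numpy as np

def sample_hamiltonians(n, dim=16, seed=0):
    # Sample candidate two-site Hamiltonians uniformly on the unit sphere,
    # exploiting the overall-scale invariance of integrable families.
    rng = np.random.default_rng(seed)
    X = rng.normal(size=(n, dim))
    return X / np.linalg.norm(X, axis=1, keepdims=True)
```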
While our network learns a numerical approximation to the R-matrix, it can also be useful for the reconstruction of analytical solutions using symbolic regression [29,76]. Alternatively, one may try to use the learnt numerical solution for the reconstruction of the symmetry algebra, such as the Yangian, and then arrive at the analytical solution. Remarkably, machine learning is already proving helpful in the analysis of symmetry in physical systems. In particular, one may verify the presence of a conjectured symmetry or even automate its search using machine learning [19,43-47,77]. It would be very interesting to explicate the interplay of our program with this broader line of investigation.
In addition, the flexibility of our approach would also allow us to implement various additional symmetries or other restrictions, both at the level of the R-matrix and of the Hamiltonian. It would therefore be very interesting to develop an 'R-matrix bootstrap' in the spirit of the two-dimensional S-matrix bootstrap and analyze the interplay between various symmetries. For example, all 14 families of R-matrices considered in this paper satisfy the condition of braided unitarity (2.12), and it would be interesting to rediscover them from the use of braided unitarity and other symmetries without imposing the Yang-Baxter condition, similar to how integrable two-dimensional S-matrices have been identified in the S-matrix bootstrap approach [51,53,78].
With mild modifications, we can adapt our architecture to the analysis of the Yang-Baxter equation for integrable S-matrices in two dimensions. The only new feature to implement is the analytic structure in the s-plane, which can be naturally realized with the use of holomorphic networks.
Learning solutions for different classes with the same architecture, we noticed that the number of epochs needed to reach the same precision varies between classes, while being roughly the same for Hamiltonians from the same class. Thus, it would be very tempting to use the training losses to define a notion of complexity of spin chains. Ideally, we should be able to go beyond the class of integrable models and see that they sit at the minima of complexity, matching the common belief that integrable models are the "simplest" ones.
A Classes of 2D integrable spin chains of difference form
In this appendix, we list the 14 gauge-inequivalent integrable Hamiltonians of difference form and the corresponding R-matrices. Amongst the XYZ type models, the simplest solution is a diagonal 4-vertex model, with Hamiltonian and R-matrix given below. In 6-vertex models, we have two distinct classes, depending on whether the Hamiltonian entries H_00 and H_33 are equal or not. In the first case, the R-matrix R_6v,1(u) and its associated Hamiltonian H_6v,1 are given below; Figure 4 gives a representative training vs actual plot for this class.
For the second case, the R-matrix R_6v,2(u) is given next. In 7-vertex models, there are similarly two classes of solutions, distinguished by the Hamiltonian entries H_00, H_33 being equal or not. In the first case, Figure 17 plots the predicted R-matrix components as ratios with respect to the (12) component against the analytic results, together with their differences, for a generic choice of parameters. In the second case, Figure 18 plots the same comparison for a generic choice of parameters.

The first class of 8-vertex XYZ-type solution (A.11) has Hamiltonian coefficients given in terms of the free parameters b_1, η, m, δ_1, δ_2. Figure 19 plots the predicted R-matrix components as ratios with respect to the (12) component against the analytic results, together with their differences, for a generic choice of parameters. The second class of 8-vertex XYZ-type solution has Hamiltonian H_8v,2 and R-matrix R_8v,2(u), with the Hamiltonian coefficients given accordingly. The third class of 8-vertex XYZ-type solution has Hamiltonian H_8v,3 and R-matrix R_8v,3(u). The six non-XYZ type classes follow; in the class-2 solution, the non-zero R-matrix components are written out explicitly.
B Designing the Neural Network
This appendix contains an extensive overview of the architecture of our neural network solver, as well as details of the hyperparameters with which the network is trained. Our starting point is the close analogy between our problem of machine learning R-matrices by imposing constraints and the design of Siamese neural networks [79,80]. These were designed to function in settings where the canonical supervised learning approach of (3.5) for classification becomes infeasible due to the large number of target classes {y} and the paucity of training examples {x_α} corresponding to each class y_α. In such a situation, one may instead define a similarity relation x_1 ∼ x_2 and train the neural network to learn a function φ(x) : R^D → R^d such that the Euclidean distance between the representatives φ(x) of two input vectors x_1, x_2 that are similar to each other is small, while the distance between dissimilar data is large. Schematically,

    ||φ(x_1) − φ(x_2)|| small if x_1 ∼ x_2 , large otherwise.  (B.2)

This is visualized in Figure 25.
There are many loss functions by which such networks may be trained; see for example [79-82]. For definiteness, we mention the contrastive loss function of [79,80], given by

    L(x_1, x_2) = Y ||φ(x_1) − φ(x_2)||^2 + (1 − Y) max(0, r_o − ||φ(x_1) − φ(x_2)||)^2 ,  (B.3)

where Y = 1 if x_1 ∼ x_2 and Y = 0 otherwise. Clearly this loss function causes the network to learn a function φ such that similar inputs x are clustered together while dissimilar inputs are pushed at least a distance r_o apart. This therefore realizes our naive criterion for φ laid out in Equation (B.2). We also see very explicitly that the loss function in Equation (B.3) does not directly depend on the values y, in contrast to Equation (3.5). Instead, the network is trained to learn a function φ(x) which obeys a property that is not given point-wise for each input x but is instead expressed as a non-linear constraint (B.2) on φ(x) evaluated at two points x_1 and x_2.
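For concreteness, a compact PyTorch rendering of this standard contrastive loss, with names of our own choosing:

```python
import torch

def contrastive_loss(phi1, phi2, Y, r0=1.0):
    # Pull similar pairs (Y=1) together, push dissimilar pairs (Y=0)
    # at least a distance r0 apart, as in (B.3).
    d = torch.norm(phi1 - phi2, dim=-1)
    return (Y * d**2 + (1 - Y) * torch.clamp(r0 - d, min=0)**2).mean()
```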
B.1 The Neural Network Architecture and Forward Propagation
We now provide some more details about our implementation of R(u) and the training done to converge to solutions of the Yang-Baxter equation (2.3), consistent with additional requirements such as regularity (2.4). As already mentioned in Section 3.2, each matrix element R_ij is decomposed into the sum a_ij + i b_ij, where a_ij and b_ij are individually modeled by MLPs. In principle each MLP is independent of the rest and can be individually designed. We shall however take all MLPs to contain two hidden layers of 50 neurons each, followed by a single output neuron which is linearly activated.11 The possible activation functions for the hidden layers are compared in Section B.2. During the forward propagation we also evaluate R(0) and R′(0), thus completing the forward pass. Next, we compute the losses (3.7), (3.12) and (3.13), as well as possibly (3.15). The loss function is minimized using the Adam optimizer [63], with an initial learning rate of 10^(−3) which is annealed to 10^(−8) in steps of 10^(−1) by monitoring the saturation in the Yang-Baxter loss computed for the validation set over 5-10 epochs.

11 One might also construct an alternate formulation of the neural network where a single MLP of the kind shown in Figure 1 accepts a real input u and outputs all the requisite real scalar functions that comprise R(u). So far we have observed that such a network does not perform as well as our current formulation of independent neural networks for each real function. Nonetheless, it is possible that this formulation may eventually prove competitive with our current one, and the question remains under investigation.
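The per-entry networks and the evaluation of R(0) and R′(0) can be sketched as follows (PyTorch; our illustrative rendering of the architecture described above, not the paper's code):

```python
import torch
import torch.nn as nn

def swish(x):
    return x * torch.sigmoid(x)

class EntryMLP(nn.Module):
    # One real scalar function of u (an a_ij or b_ij): two hidden layers
    # of 50 neurons each and a linearly activated output, as in the text.
    def __init__(self):
        super().__init__()
        self.l1 = nn.Linear(1, 50)
        self.l2 = nn.Linear(50, 50)
        self.out = nn.Linear(50, 1)

    def forward(self, u):
        return self.out(swish(self.l2(swish(self.l1(u)))))

# R(0) and R'(0), needed for the regularity and Hamiltonian losses, can be
# obtained by automatic differentiation (illustrative):
f = EntryMLP()
u0 = torch.zeros(1, 1, requires_grad=True)
value = f(u0)
(deriv,) = torch.autograd.grad(value.sum(), u0, create_graph=True)
```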
The effect of this annealing of the learning rate is also visible in the training histories in Figures 2 and 27, where the step-wise falls in the losses correspond to the drops in the learning rate. Across the board, training converges in about 100 epochs and is terminated by early stopping.
B.2 Comparing different activation functions
We now turn to a brief comparison of the performance of different activation functions within the above set-up. Again for uniformity, we use one activation throughout for all the MLPs a_ij and b_ij, except for the output neuron, which is linearly activated. We then compared the performance of this neural network architecture while learning the Hamiltonian (B.4), which is 6-vertex Type 1 in the classification of [58]; see Equation (A.2) above. The neural network was trained with the loss functions (3.7), (3.13) and (3.12), setting λ_H and λ_reg to 1 each. The training was carried out for 200 epochs, on observing that the networks did not perform better on training for longer. Further, we set a batch size of 16 and optimized using Adam with a starting learning rate of 10^(−3), which was annealed to 10^(−8) using the saturation in the Yang-Baxter loss over the validation set as the criterion, as mentioned above. We conducted this training using the activations sigmoid, tanh and swish, all of which are holomorphic, as well as elu and relu; the last two are not holomorphic but have been included for completeness. The evolution of the Yang-Baxter and the Hamiltonian losses for all these activations is shown in Figure 27 and Table 1. On the whole, we see that the swish activation tends to outperform the others quite significantly: it converges sooner and to lower losses, and the precise numbers are given in Table 1. While these are the results of a single run, we found that the result is consistent across several runs and tasks, leading us to adopt the swish activation uniformly across the board for all the analyses shown in this paper.
B.3 Proof of Concept: Training with a single hidden neuron
As a final observation, we present a simple proof of concept of our approach of solving the Yang-Baxter equation along with other constraints by optimizing suitable loss functions. Here, instead of attempting to deep-learn the solution, we use a single layer of neurons for each R-matrix function and pick an activation function matching the form of the known analytic solution. In effect, rather than rely on the feature-learning properties of a deep MLP, as we have done in the rest of our paper, we ourselves provide activation functions which furnish a natural basis in which to express the known analytic solutions.

For definiteness, consider the XXZ model at η = π/3. The non-zero entries in this R-matrix are

    R_00(u) = sin(u + η) , R_11(u) = sin(u − η) = R_22(u) , R_12(u) = sin(η) = R_21(u) ,  (B.5)

as may be observed by setting m = 0 in Equation (4.4). We define the networks a_ij and b_ij to have a single hidden layer consisting of a solitary neuron activated by the sin function. This means that the functions learnt by the network are simply of the form

    a_ij = W̃ ∘ sin(W ∘ u) ,  (B.6)

and similarly for the b_ij. Here W and W̃ collect the weight and bias of the hidden and the output neuron respectively, and the composition W ∘ u is shorthand for the affine transformation w u + b. Next, we train the network imposing the losses (3.7), (3.12), (3.13) and (3.15), each with weight λ = 1, and the Adam optimizer with our standard learning-rate scheduling. Figure 28 plots the trained XXZ R-matrix components for u lying in the range (−10, 10). Note that since we trained with an activation function that presupposed our knowledge of the exact solution (in effect, the true R-matrix lay within our hypothesis class), the model trained to losses of the order of 10^(−8), which is several orders of magnitude below the typical end-of-training losses we observed in the standardized framework. Further, we also obtain excellent performance even outside the domain of training, which is usually not the case in machine learning.
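A minimal rendering of such a single-neuron network (our illustrative PyTorch code):

```python
import torch
import torch.nn as nn

class SinNeuron(nn.Module):
    # a_ij(u) = affine(sin(affine(u))): a single sin-activated hidden
    # neuron followed by an affine output, cf. Eq. (B.6).
    def __init__(self):
        super().__init__()
        self.hidden = nn.Linear(1, 1)
        self.out = nn.Linear(1, 1)

    def forward(self, u):
        return self.out(torch.sin(self.hidden(u)))
```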
Figure 2: The evolution of training losses for the XYZ model, shown on the log scale. The losses tend to fall in a step-wise manner, which corresponds approximately to the learning rate schedule the network is trained with.
Figure 6: (a) Predicted R-matrix component ratios w.r.t. R_00 for the conjugated XXZ model with η = π/3.
Figure 7: Comparing the training history of the XYZ type and non-XYZ type models against corresponding non-integrable Hamiltonians. There is approximately an order of magnitude difference between the Yang-Baxter losses for the integrable case vs the non-integrable case after the training saturates, indicated by the gray region in the graph. The step-wise drops in the loss functions approximately correspond to the learning rate schedule. The presented Hamiltonians are the same as in Fig. 8 and Fig. 9.
Figure 9: (a) The normalized Yang-Baxter error (4.13), plotted on the logarithmic scale at the end of training, for the Hamiltonian H_class−4 from (A.19), with a_1 = 0.5, a_2 = 0.3, a_3 = 0.4, a_4 = 0.9. (b) A non-integrable deformation with the same Hamiltonian parameters as in the integrable case, except for H_13 = −0.9.
Figure 10: Visualizing the Explorer scheme. We start with random initializations, marked by lightning symbols, and perform solver learning, represented by red curved arrows. Once we reach a submanifold of integrable Hamiltonians, we explore it using repulsion to identify new integrable models.
(a) Evolution of Yang-Baxter Loss (b) Evolution of Hamiltonian Loss
Figure 11: The convergence to XYZ models from XXZ models trained with different parameters. XXZ was trained for 50 epochs at η = π/3 and m = 0. Then, it was trained for 5 more epochs at η = π/4 and η = π/6, still with m = 0. After that, 5 non-zero values of m were used for each XXZ model, and we trained for another 15 epochs. Loss spikes occurred when the target Hamiltonian values were reset. The final training was run in parallel for convenience, but it can be run sequentially.
Figure 12: Time evolution of the Yang-Baxter loss as the neural network explores the space of integrable Hamiltonians of the 6-vertex models H_6v,1, H_6v,2 by repulsion. The loss evolves together until the 50th epoch, after which it fragments slightly as the training converges to the two warm-start points on the 60th epoch. For the remaining epochs the losses fragment completely as the neural network seeks out different new Hamiltonians, and training is terminated when the loss reaches the neighborhood of 1 × 10^(−4).
Figure 13: (FastICA) Clustering of Hamiltonians from the two classes of gauge-inequivalent 6-vertex models obtained by the Explorer using repulsion from the solution at the intersection of both classes.
Figure 14: The 6-vertex models learnt by exploration. The graph visualizes the obtained Hamiltonians by plotting their values along the a_1 + a_2 − b_1 − b_2 and the a_1 − a_2 axes. The models H_6v,1 lie along the y-axis and the models H_6v,2 along the x-axis, with an error margin of order 10^(−3), as shown in the telescoped inset plots.
Figure 14 plots the trained models in terms of coordinates defined by the integrability conditions of the Hamiltonians H_6v,1, H_6v,2. Models lying near the two axes were classified correctly into the two classes in Figure 13 with 100% accuracy.
Figure 21 plots the predicted R-matrix components as ratios with respect to the (12) component against the above analytic results, and their differences, for a generic choice of parameters a_1 = 1, b_1 = −0.45, d_1 = 0.6, d_2 = 0.75.
Amongst the above non-XYZ type models, we have already looked into the training for the Class 1 model in section 4.1. Figures 22, 23 and 24 plot the training vs actual R-matrix components for classes 2, 3, 4, class 5, and class 6 respectively, with generic Hamiltonian parameters. Also we note that allowing for complex parameters results in generically complex R-matrices. We compare the predictions against the actual formulae by taking ratios with respect to the real part of the (00) component for classes 2-5, and the (12) component for class 6.
Figure 26: Visualizing the forward propagation of the neural network R(u). This has a very strong parallel to Figure 25, with the function R(u) playing the role of the map φ. The only difference is that R(u) also has additional constraints on R(0) and R′(0), which are unique to our problem.
Table 1: Performance of different activation functions on learning the Hamiltonian (B.4). The saturation epoch is the approximate epoch after which the model did not train further. The final values of the Yang-Baxter and Hamiltonian losses after saturation are also given.
Figure 27: The evolution of the Yang-Baxter loss (left) and the Hamiltonian loss (right) for a variety of activation functions when training for 200 epochs. The swish activation tends to outperform the others.
Figure 28: The figure on the left shows the XXZ model R-matrix with η = π/3 obtained by training on a single sin-activated hidden neuron in the range u ∈ (−1, 1), shown in gray. The solution remains valid outside the training domain as well. The figure on the right shows the corresponding training curves.
Sanctioning Chile’s Public Health Care System for Not Providing Basic Services to the Elderly
Abstract This paper analyzes the Inter-American Court of Human Rights’ ruling in the case of Poblete Vilches et al. v. Chile. Poblete Vilches, a senior citizen, died in February 2001 due to septic shock and bilateral bronchopneumonia after being treated in a public hospital in Chile. The ruling held the state of Chile responsible for a number of human rights violations. The paper evaluates the interpretation of the American Convention on Human Rights as carried out by the Inter-American Court of Human Rights. It concludes that the sentence explicitly developed criteria in relation to informed consent as a derivation of the right to health and implicitly recognized, from a gerontological perspective, a manifestation of structural abuse toward older persons and their supportive environments. The gerontological gaze brings new challenges for the development of older persons’ rights. The ruling is unique in the inter-American human rights system, as recognized by the court itself.
Introduction
This paper analyzes the reasoning of the Inter-American Court of Human Rights in its judgment in Poblete Vilches et al. v. Chile, issued on March 8, 2018, which interpreted the human rights of older persons in the context of medical and social care. 1 The court pronounced on the right to life and personal integrity and the right to health for older persons based on the recognition of economic, social, cultural, and environmental rights, invoking for the first time the Inter-American Convention on Protecting the Human Rights of Older Persons.
The Inter-American Court of Human Rights is an autonomous judicial institution tasked with the application and interpretation of the American Convention on Human Rights (ACHR). Decisions of the court are justified and final, and they may not be appealed. States parties to the convention, as in the European human rights system, are obligated to comply with the court's judgments. 2 The ACHR establishes and limits the jurisdiction of the Inter-American Court. According to the facts of the case, Mr. Vinicio Poblete Vilches (hereinafter Poblete) was admitted to the Sótero del Río public hospital on January 17, 2001, due to severe respiratory failure. On January 22, he was moved to the surgical intensive care unit, where doctors performed a surgical intervention while he was unconscious, without the prior consent of his relatives. On February 2, Poblete was discharged without further instructions. 3 Three days later, on February 5, Poblete was admitted for a second time to the same hospital, where he remained in the intermediate care unit due to a lack of available beds, despite the clinical record specifying his admission to the intensive care unit. Poblete died on February 7, 2001, at the age of 76, due to septic shock and bilateral bronchopneumonia. 4 According to this sequence of events, there was a failure to comply with the medical care standards regarding informed consent required under lex artis.
Unlike the European and universal human rights systems, the inter-American human rights system has a binding instrument for the protection of the human rights of older persons (the Inter-American Convention on Protecting the Human Rights of Older Persons). Therefore, the Inter-American Court's references to the European Court of Human Rights in this ruling can only be partial, because the reference frame is different.
The objective of this paper is to analyze the arguments used by the Inter-American Court in its application of the Inter-American Convention on Protecting the Human Rights of Older Persons, identifying the gerontological elements in the Poblete case.
Experts in gerontology have increasingly turned to the courts in their battles to protect the human rights and health of older persons. Yet while a significant literature analyzes legal mobilization on these issues, it tends to focus predominantly on domestic legislation and cases. This paper analyzes the effect of these issues when they reach the Inter-American Court. It begins by describing the court's ruling in Poblete Vilches et al. v. Chile, which offers an authoritative interpretation of older persons' rights to life, personal integrity, health, and autonomy. As our analysis demonstrates, the court balanced medical, ethical, and legal considerations in its judgment. The paper then considers how rulings such as this one can drive legal reforms to protect and promote the rights of older persons on the American continent.
To date, Poblete Vilches et al. v. Chile is the only case concerning the rights of older persons to reach the Inter-American Court, and it shows the trajectory from domestic jurisdiction to the regional human rights system, and vice versa. 5 On October 1, 2012, through Law 20.584, Chile changed its domestic legislation on the rights of the patient; this was before the Inter-American Court issued its ruling in Poblete.
Development of the topic
In its ruling, the Inter-American Court developed two rights. The first of these is the right to life and personal integrity of older persons, and the second is the right to health, which encompasses the right to health of older persons and the right to informed consent. It should be noted that this case represents the court's first-ever recognition of the right to health as an autonomous right. 6 The ruling also represents the court's first decision on the rights of older persons in matters of health. 7
Right to life and personal integrity of older persons
The court held that the right to life constitutes a "prerequisite for the enjoyment of all other rights." 8 The court used a systematic argument (whereby laws regarding the same matter must be construed with a reference to each other; what is clear in one statute may be called in aid to explain what is doubtful) and referred to a previous ruling of the European Court of Human Rights. 9 According to the court, a state holds international responsibility for death in a medical context when the following conditions are met: (1) a treatment is denied to a patient in a situation of medical emergency or essential medical care, despite the risk that this denial poses for the life of the patient; (2) there is serious medical negligence; and (3) there is a causal link between the act and the damage suffered by the patient. Verification of the state's international responsibility must also consider any situation of special vulnerability of the affected person (in this case, the patient's status as an older person) and any measures taken to avoid that situation. 10 The court also quoted article 5(1) of the ACHR, noting that the protection of the right to personal integrity requires the regulation and implementation of health services. It further noted that states "must establish an adequate regulatory framework that regulates the provision of health services, establishing standards of quality for both public and private institutions." 11 In the particular case at hand, the court held that repeated omissions in the care provided to Poblete and the failure to treat his specific health conditions contributed to the deterioration of his health. 12
Right to health of older persons
The court argued that civil and political rights and economic, social, cultural, and environmental rights are interdependent and without hierarchy and that they must be understood integrally. 13 It made direct reference to the observations made by Chile in 1969, during the drafting of the ACHR, in which the state considered that a certain legal obligation must exist with regard to economic, social, and cultural rights. 14 On this same point, the court ended with a comment of a teleological nature, referring to the international and national corpus iuris. 15 The court also argued that article 26 of the ACHR creates two types of obligations: progressive and immediate. Progressive obligations mean that states have a concrete and constant obligation to proceed as expeditiously and efficiently as possible toward the full effectiveness of economic, social, cultural, and environmental rights; they also imply an obligation of non-retrogressivity with regard to the rights that have been realized. Meanwhile, immediate obligations "consist [of] adopting effective measures in order to guarantee access, without discrimination, to the benefits recognized for each right." 16 The court cited several international tools, such as the Charter of the Organization of American States, the American Declaration, and the international corpus iuris on the right to health. 17 With regard to situations of medical emergency, the court referred to General Comment 14 of the United Nations Committee on Economic, Social and Cultural Rights, noting the minimum standards of quality, accessibility, availability, and acceptability. 18 Quality is understood as the "adequate infrastructure required to meet basic and emergency needs," including life support devices and qualified human resources. 19 Accessibility is understood in its "overlapping dimensions of non-discrimination, physical accessibility, economic accessibility, and information accessibility." 20 Availability implies sufficient material and human resources and the coordination of facilities and networks. 21 Acceptability refers to the fact that health services "must respect medical ethics and culturally appropriate criteria … [and] include a gender perspective, as well as the conditions of the patient's life cycle. The patient must be informed of his diagnosis and treatment and, in this regard, his wishes must be respected." 22 Lastly, the court held that "older persons have the right to increased protection, [which] requires the adoption of differentiated measures." It upheld "the right to a dignified old age and consequently the measures required to this end." 23 Again, with a systematic argument, the court quoted the Committee on Economic, Social and Cultural Rights, namely its General Comment 6 and General Comment 14, which guide states to maintain measures of prevention and rehabilitation in order to preserve the functional capacities of older persons, thereby reducing costs in health care and social services. 24 In the ruling, some relevant concepts, such as functionality, autonomy, care, chronic patients, and patients in the terminal stage of life, appeared but were not defined.
Right to informed consent of older persons and their family members
Regarding older persons in the health care context, the court noted the existence of several factors that increase their vulnerability, such as physical limitations, limited mobility, economic status, and severity of a disease. It further noted that due to frequent imbalances in the doctor-patient relationship, it is essential that the patient be provided with the information needed to understand their diagnosis and possible treatments. 25 In this regard, it pointed out that "informed consent forms part of the accessibility of information … and, therefore, of the right to health (Article 26 [of the ACHR])," establishing the right to information in article 13 of the ACHR as an instrument to ensure and to respect the right to health. 26 The court interpreted informed consent according to international standards in health care. 27 It noted that health providers must, at a minimum, provide information to the patient on the following: (1) "an evaluation of the diagnosis"; (2) "the purpose, method, probable duration, and expected benefits and risks of the proposed treatment"; (3) "the possible adverse effects"; (4) "treatment alternatives"; and (5) the progression of the treatments. 28 In addition, the court held that informed consent by representation is granted when the patient is unable to make a decision regarding their own health. 29 The court concluded with a teleological element, "dignity" (article 11 of the ACHR), which is linked to autonomy, stating that dignity consists of "the possibility of all human beings for self-determination and to freely choose the options and circumstances that give a meaning to their existence, based on their own choices and convictions." 30 This is related to the protection of the family (article 17 of the ACHR), which plays a central role in the existence of a person and in society in general.
Conclusions from the interpretative argumentation
Recent decades have seen profound transformations in international human rights law, motivated by considerations of international ordre public, which confirm that human rights are applicable to all people irrespective of where they live. At the beginning of the 21st century, the "reason of humanity" took primacy over the reason of the state, inspiring the historical process of humanization of international law. 31 As a consequence, we can see explicit ethical guidelines, improved domestic laws, and international legal norms. 32 In addition, the Inter-American Court's ruling reflects the constant process of improvement in interpretive legal techniques. The establishment of human rights in societies does not occur automatically; rather, it implies states' acceptance of the restriction of the power they exercise over citizens, as well as the acceptance of the jurisdiction of international institutions in a very sensitive area. 33 Most of the arguments embraced in the court's ruling were systematic. But the court also used precedent and teleological elements.
Concerning the right to health, the court used precedents from other rulings, as well as a genetic argument in relation to Chile's position regarding the legal applicability of the right to health, which the state had expressed during the drafting of the ACHR.
Finally, the ruling held the Chilean state responsible for the violation of Poblete's rights to health, to life, and to personal integrity; the violation of Poblete's and his family members' right to informed consent and access to information on health-related matters; and the violation of his family members' right to personal integrity. 34

Issues from a gerontological perspective

Among the many issues relevant to gerontology, the Inter-American Court's ruling proposed overcoming stereotypes and stigma against older persons in the social and health care spheres. It is clear that a cultural and social structural change, as well as a new way of relating to and with elderly people, is required. 35 It is necessary to undertake a paradigm change toward older persons, who have the right to assistance benefits, viewing them as subjects of law who can make demands of the state. 36 In this sense, a person's age is not an indicator of medical diagnosis or prognosis, unlike other areas, such as functionality, to which the ruling did not refer. 37 The Inter-American Convention on Protecting the Human Rights of Older Persons refers to prejudices and stereotypes, requiring states parties to "create and strengthen mechanisms for the participation and social inclusion of older persons in an environment of equality that serves to eradicate the prejudices and stereotypes that prevent them from fully enjoying those rights." 38 It is worth noting that the Poblete case is an example of structural abuse, where social stereotypes form the basis of abuse and directly affect the rights to life and to integrity. According to the National Service for the Elderly in Chile, structural abuse is "that which occurs from and within the structures of society through legal, social, cultural, [and] economic norms that act as a background for all other forms of existing abuse." 39 The court's ruling referred to events that occurred in Chile in 2001, when the national and international normative standard was lower in matters of health care for older persons. 40 Today, a similar case would likely be resolved with a more demanding standard. At the time of the events, the World Health Organization had not coined the term "healthy aging," which is based on the pillars of health, safety, and participation, and there was no recognition of older persons' autonomy in health-related matters. 41 On the other hand, the Madrid International Plan of Action on Ageing promotes the idea of considering the increase in life expectancy as an opportunity. 42 According to the plan of action, older persons should enjoy the right to security, including health benefits and care; it also recognizes older persons' rights to participation, autonomy, and informed consent. These standards were not implemented by the court because they are not mandatory.
The aforementioned instruments generate changes at the level of sociocultural and legal standards. At the international level, this includes guidelines on good clinical practices in geriatrics that encourage integrated care for older persons and the Inter-American Convention on Protecting the Human Rights of Older Persons, among others. 43 At the national level, it includes Chile's Universal Access Plan to Explicit Guarantees in Health, in force since 2005, which promoted the enactment of Law 20.584 on the rights and duties that people have concerning actions related to their health care, replacing the paradigm of biomedical paternalist care with a model of autonomy. 44 In this sense, an evaluation of the clinical situation of health care for older persons should incorporate a comprehensive view of the individual that considers not just biomedical aspects but also the person's social, biographical, functional, affective, and cognitive characteristics. Care for older persons should be continuous and integrated and should seek to enhance their functionality and prevent iatrogenesis, regardless of the level of care at which they are being treated. This care must pay special attention to the prevention of risks associated with hospitalization, particularly for those who are frail. The opinion of older persons must be incorporated into decision-making; to this end, a competence evaluation must be performed, and advance care planning should be a priority. Good communication tools among the different actors are encouraged, benefitting not only the relationship between patient, family, and medical team but also the patient's transitions between types of care. Regarding the communication and delivery of information, special attention should be paid to the level of health literacy of those involved, and communication strategies should be adapted so that people can actively participate in their health care. This requires a commitment from states both in the training of human resources at the undergraduate and postgraduate levels and in the continuous review and adjustment of existing practices and resources, all of which are key for the non-repetition of violations. 45 With regard to the right to life, in relation to the denial of emergency medical treatment by medical personnel, the Inter-American Court found that Chile did not adopt necessary, basic, and urgent measures to guarantee Poblete's right to life. The state could not justify its denial of basic emergency services. The court argued that such assistance would have at least prolonged Poblete's life and thus concluded that the omission of basic health benefits affected his right to life. 46 Health care teams must provide technically viable and justified alternatives in light of the clinical condition of older patients. Still, the court's decision does not constitute a vote in favor of therapeutic obstinacy, which would ultimately imply unnecessary suffering for the patient, as well as the misuse of health resources. In this sense, it is the duty of the health care team to consider death as a part of life, and, consequently, the team should offer appropriate support and consolation to relatives after the patient's death. 47 Regarding informed consent, such consent is part of the growing recognition of the autonomy of older persons.
This implies considering informed consent as a principle that allows for existential and practical choices that arise from one's personal identity, life history, and environmental conditions. In general, the term "autonomy of the will" is understood as the ability of legal subjects to establish rules of conduct for themselves and in their relationships with others within the limits established by law. 49 The term "autonomy and individual responsibility," in turn, is understood as the autonomy of persons to make decisions, while taking responsibility for those decisions and respecting the autonomy of others. For persons who are not capable of exercising autonomy, special measures are to be taken to protect their rights and interests. 50 The Inter-American Convention on Protecting the Human Rights of Older Persons maintains that independence and autonomy constitute general principles for the interpretation of the convention. 51 An important dimension of autonomy occurs in the health field, where decision-making capacity and responsibility constitute guiding principles for the relationship between the patient and the health care team, in an effort to avoid verticalization and asymmetry of information. In technical terms, the Universal Declaration on Bioethics and Human Rights defines these concepts as the power to make decisions about one's own life, assuming responsibility for those decisions, and respecting others. 52 Regarding people who lack decision-making capacity, special measures must be taken to protect their rights and interests. This declaration unites the concepts of autonomy and responsibility, moving away from a conception of freedom that exalts the individual. The obligation to take "special measures" does not fall exclusively on health service users but instead applies to other subjects as well, since these special measures must protect "their rights and interests." At the same time, the autonomy of the subject is appreciated because it is essential for the integration of decision-making processes, such as informed consent. Consent (agreement of wills) relates not to the narcissistic satisfaction and autonomy of the patient but to the realization of their possible therapeutic wellness. 53 Therefore, we must incorporate more demanding standards associated with clinical practice.

The ruling of the Inter-American Court of Human Rights in Poblete Vilches et al. v. Chile marks an important milestone regarding the recognition of the rights of older persons, especially in the spheres of life and health. Further, it emphasizes the importance of ensuring that older patients' wishes are heard and that guidelines are in place concerning how to proceed in cases where a person is unable to express their wishes. The principle of informed consent is not irrelevant with regard to older people. Since the tragic events that happened to Poblete and his family, national and international legal instruments have taken a positive turn, moving toward greater recognition of the rights of older persons, with dignity as their guiding light.
Conclusion
First, the Poblete case is important for its effective application of the Inter-American Convention on Protecting the Human Rights of Older Persons. This is a critical development in the international context, since the Organization of American States differentiates the legal protection of older persons from that of disabled people. The ruling is a major step forward in terms of the promotion of positive stereotypes of older persons, as embraced by the World Health Organization-namely active aging, positive aging, and healthy aging.
Second, international public order and the Inter-American Court of Human Rights in particular have made efforts to move forward in the recognition of older persons' rights. The Inter-American Court declared this case groundbreaking, and for this reason a greater specialization in older persons' rights can reasonably be expected over time, in which a person's biographical identity is accepted as an ethical and gerontological core of reflection. 55 Third, regarding the court's argument, specifically the systematic element, the inter-American human rights system requires that the arguments used by the Inter-American Court to interpret the ACHR be legal and within the framework of a previously enshrined right. Therefore, the Inter-American Court is not acting within its jurisdiction if it uses extra-systemic arguments, such as quoting the European Court of Human Rights. This bad practice of the Inter-American Court does not comply with the international standards of the system or with the cultural realities of the continent.
Finally, this ruling applies some of the same principles enshrined in the Inter-American Convention on Protecting the Human Rights of Older Persons, among them dignity, autonomy (expressed through informed consent), solidarity and empowerment of family and community protection, and effective judicial protection. 56 These legal principles will bring new perspectives in future trials in the region.
Numerical investigation on water-entry impact load of TMA based on CEL algorithm
This work presents comprehensive numerical research on the impact load of the trans-medium aircraft (TMA) using the coupled Eulerian-Lagrangian (CEL) method during a water-entry event. The water-entry velocity, angle, and attitude play a significant role in the impact load characteristics along the water-entry trajectory. In this paper, a numerical model of a typical TMA structure is established to study the water-entry load for velocities of 0, 2, 4, 6, and 8 m/s, angles of 90°, 80°, 70°, 60°, and 50°, and attitudes of 90°, 80°, 70°, 60°, and 50°. Subsequently, the variation laws of the impact load with water-entry velocity, angle, and attitude are analyzed. The results obtained from this investigation can provide a useful reference for the structural design of the TMA.
Introduction
Trans-medium aircraft (TMAs) have become a popular study topic because of their range of activities, asymmetric advantage in engagement, access to complex structures, and relatively low cost. During the trans-medium maneuvering process, the motion of the TMA causes the stationary liquid to accelerate suddenly, inducing a tremendous force associated with momentum transfer. Most previous investigations of the water-entry process have focused on two aspects: the impact load and the dynamics of the cavity behind the body. A simplified theoretical model using a numerical FSI method for a single rigid wedge was presented by Carcaterra et al [1]. An experiment measuring the velocity of the projectile after water entry, together with an empirical formula, was carried out to study the water-entry phenomenon by Shi [2]. The hydrodynamic problem of twin wedges entering water vertically at constant speed was analyzed based on velocity potential theory by Wu [3]. A review of work undertaken in the field of water entry between 1929 and 2003, together with future directions for research, was presented by Seddon and Moatamedi [4]. Xu et al [5] investigated the hydrodynamic problem of a two-dimensional asymmetrical wedge at constant speed based on velocity potential theory. A preconditioned Navier-Stokes (NS) method was developed for multiphase flow with application to the water-entry problem of moving bodies by Nguyen [6][7]. Addressing the lack of research on high-speed water entry of large projectiles, Sun et al [8] studied the high-speed water entry of a hemispherical-nosed projectile using a fluid-solid coupling method. A CFD approach was utilized to discuss the water entry and exit of axisymmetric bodies by Nair and Bhattacharyya [9]. An integrated experimental and theoretical framework to isolate the effects of oblique and asymmetric impact on the water entry of a rigid wedge was proposed by Russo et al [10]. Additionally, the violent water entry of spheres and inclined cylinders was described by Dyment [11] and Xia [12], respectively. Further numerical studies of related water-entry problems have also been reported [13][14][15][16]. An experimental study on the influence of the angle of attack on cavity evolution and surface load during the water entry of a cylinder was conducted by Sun [17]. Zheng et al [18] extended the CIP-based model for the treatment of water entry of two-dimensional structures with complex geometry.
It can be inferred that it is difficult to solve the impact load problem of the TMA during the water-entry process. In this paper, the CEL numerical method is adopted to investigate the impact load characteristics of a typical TMA structure. The accelerations of the TMA under different water-entry velocities, angles, and attitudes are analysed and discussed. The remainder of the paper is organized as follows: i) the theoretical description is given in Section 2; ii) Section 3 provides the numerical setup details; iii) the simulated results are discussed and reported in Section 4. Finally, the conclusions drawn from the results are summarized in Section 5.
Flow equations
The water is modelled with the Eulerian approach, which treats the water as an approximately incompressible viscous laminar flow. The volume response of the water medium is described by the U_s-U_p form of the Mie-Grüneisen equation of state. The common form of the Mie-Grüneisen equation of state is

p - p_H = Γρ(E_m - E_H),

where p_H is the Hugoniot pressure, E_m is the specific internal energy, E_H is the Hugoniot specific energy, which depends only on the density, Γ = Γ₀ρ₀/ρ is the Grüneisen ratio, and η = 1 - ρ₀/ρ is the nominal volumetric compressive strain. Further, we obtain

E_H = p_H η / (2ρ₀).

The Hugoniot curve-fitting relationship for common materials is

p_H = ρ₀c₀²η / (1 - sη)²,

where the linear shock wave velocity U_s is defined as

U_s = c₀ + sU_p.

The Hugoniot form of the Mie-Grüneisen equation can then be rewritten as

p = [ρ₀c₀²η / (1 - sη)²](1 - Γ₀η/2) + Γ₀ρ₀E_m.

Thus, the volumetric strength and the hydrostatic volume response characteristics are described. Assuming that the shear response and the volume response are uncoupled, the viscous shear response is defined by the Newtonian shear model as

τ = μγ̇,

where μ is the dynamic viscosity and γ̇ is the shear strain rate.
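To make the equation of state concrete, here is a minimal numerical sketch. The parameter values (ρ₀, c₀, s, Γ₀ for water) are typical literature figures and are not taken from Table 2 of this paper, so they should be treated as illustrative assumptions:

```python
def mie_gruneisen_pressure(rho, e_m, rho0=1000.0, c0=1480.0, s=1.92, gamma0=0.1):
    """Hugoniot (U_s-U_p) form of the Mie-Gruneisen equation of state.

    rho   : current density [kg/m^3]
    e_m   : specific internal energy [J/kg]
    rho0, c0, s, gamma0 : reference density, bulk sound speed, linear
        Hugoniot slope, and Gruneisen ratio (illustrative water values).
    Returns the pressure [Pa].
    """
    eta = 1.0 - rho0 / rho                              # nominal volumetric compressive strain
    p_h = rho0 * c0 ** 2 * eta / (1.0 - s * eta) ** 2   # Hugoniot pressure
    return p_h * (1.0 - gamma0 * eta / 2.0) + gamma0 * rho0 * e_m

# Example: ~1% compression of water with negligible internal-energy change
print(f"{mie_gruneisen_pressure(rho=1010.0, e_m=0.0):.3e} Pa")  # about 2.25e7 Pa
```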
Structural equations
The equation of motion for the structural part can be written as the momentum balance

ρ_s (du_s/dt) = ∇·σ_s + ρ_s b,

where u_s is the structural velocity, ρ_s is the solid density, σ_s is the Cauchy stress tensor, and b is the body force per unit mass. The kinematics are described through

F = ∂x/∂X,

where F is the deformation gradient tensor mapping the reference configuration X to the current configuration x. In the dynamic case, the total of the strain energy and the kinetic energy of the body is provided by

E = ∫_V W(F) dV + (1/2) ∫_V ρ_s u_s·u_s dV,

where W(F) is the strain-energy density.
FSI solution algorithms
In the CEL algorithm, based on the immersed boundary method, the FSI is realized and solved by capturing the Euler-Lagrangian contact. By establishing the analysis function of the Lagrangian part's motion within the Euler part, the Euler elements and the Lagrangian elements are correlated and simulated in a coupled manner.
Geometry model presentation
Typical two-dimensional and three-dimensional structures of the TMA are adopted to investigate the water-entry characteristics. The geometrical sizes of the models are shown in Figure 1. For the two-dimensional model, the width is 45.826 mm and the height is 30 mm; for the three-dimensional model, the radius is 22.913 mm and the height is also 30 mm. In particular, the three-dimensional model is a shell-type structure with a thickness of 3 mm.
Figure 1. Geometry models.
The material of the solid structure is 7075-T6 aluminum alloy, and the corresponding material parameters are given in Table 1. The U_s-U_p model parameters of the water are shown in Table 2.
Definition of feature units
In the study of the impact loads of the TMA water-entry problem, it is necessary to evaluate the force characteristics using a physical variable. Therefore, the acceleration variable is adopted in the output results. Considering the computing cost, the solid structure is modelled as a rigid body, and the output point is located at the geometrical center. The unit of the acceleration variable is g, the value of which is 9.8 m/s².
Computational domain and grid
The simulation is carried out by the explicit finite element method based on a multi-medium coupling method. The water is simulated in the Eulerian formulation, whereas the motion of the TMA and the free surface are tracked using the Lagrangian formulation. The structure is modelled with Lagrangian elements and the water is meshed with Eulerian elements. The element size of the TMA structure is 1 mm, and the element size of the water field is 3 mm. The dimensions of the water field are 240 mm × 600 mm (2D case) and 240 mm × 240 mm × 600 mm (3D case). The computational domain and the meshing of the numerical finite element model are displayed in Figure 2. The VTF tool is used to define the initial water field. Owing to the large simulation cost, the computing codes were run on a cluster with one management node and five computing nodes; each node has 112 cores and 512 GB RAM, i.e., 560 cores were used in this work.
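As a back-of-envelope check on model size, the quoted element size and domain dimensions imply the following Eulerian grid counts (a rough sketch that ignores any local refinement):

```python
# Rough Eulerian element counts from the quoted water-field sizes
elem = 3.0  # element size of the water field [mm]

n2d = (240 / elem) * (600 / elem)
print(f"2D water field: {int(n2d):,} elements")     # 80 x 200 = 16,000

n3d = (240 / elem) * (240 / elem) * (600 / elem)
print(f"3D water field: {int(n3d):,} elements")     # 80 x 80 x 200 = 1,280,000
```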
Simulation conditions
Zero-velocity boundary conditions are imposed on the water domain, and the gravity effect is considered. In order to study the maximum acceleration loads during the water-entry process under different water-entry velocities, angles, and attitudes, 125 cases of the 2D model and 5 cases of the 3D model are simulated. The water-entry velocity V, the water-entry angle, and the water-entry attitude are defined in Figure 3. The simulation time is set to 0.5 s, and nonlinear geometrical deformation is included in the computation. The simulated combinations of velocity, angle, and attitude are listed in Table 3.
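The 125 two-dimensional cases form a full factorial sweep of the three parameters; the following is a minimal sketch of how such a case matrix could be enumerated (e.g., for batch job generation; the printed format is purely illustrative):

```python
from itertools import product

# Parameter sweep for the 2D model (125 cases), as described in Table 3
velocities = [0.0, 2.0, 4.0, 6.0, 8.0]   # water-entry velocity V [m/s]
angles     = [90, 80, 70, 60, 50]        # water-entry angle [deg]
attitudes  = [90, 80, 70, 60, 50]        # water-entry attitude [deg]

cases = list(product(velocities, angles, attitudes))
assert len(cases) == 125

for i, (v, ang, att) in enumerate(cases, start=1):
    print(f"case {i:03d}: V = {v} m/s, angle = {ang} deg, attitude = {att} deg")
```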
Results and discussions
The maximum impact acceleration (MIA) performances under different water-entry velocities, waterentry angles and water-entry attitudes are analyzed and discussed in this section. The results of the 125 2D-model cases and the 5 3D-model cases are characterized. The water-entry velocity V varies from 0m/s to 8m/s, meanwhile the water-entry angle and water-entry attitude vary from 90° to 50°.
Effect of water-entry velocity
The variation of the MIA of the 2D model with the water-entry velocity V under different water-entry angles and water-entry attitudes is given in Figure 4 and Figure 5, respectively. The results of the MIA for the 3D model, which relate to the water-entry velocity V at a water-entry angle of 90° and a water-entry attitude of 90°, are displayed in Figure 6. From the results, we can see that the MIA increases with increasing water-entry velocity and is approximately linear in the square of the water-entry velocity. The MIA ranges from about 30 g at 0 m/s to about 180 g at 8 m/s.
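Since the text reports a near-linear dependence of the MIA on the square of the velocity, the relation MIA ≈ a + b·V² can be calibrated from the two quoted end points (about 30 g at 0 m/s and 180 g at 8 m/s). The following is a rough interpolation sketch, not a fit to the full data set:

```python
# Calibrate MIA ~ a + b*V^2 from the two quoted end points (illustrative only)
a = 30.0                      # MIA at V = 0 m/s [g]
b = (180.0 - 30.0) / 8.0**2   # slope in the square of velocity [g/(m/s)^2]

def mia_estimate(v):
    """Rough interpolant of the maximum impact acceleration, in units of g."""
    return a + b * v**2

for v in (0.0, 2.0, 4.0, 6.0, 8.0):
    print(f"V = {v} m/s -> MIA ~ {mia_estimate(v):.1f} g")
```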
Effect of water-entry angle
The variation of the MIA of the 2D model with the water-entry angle under different water-entry velocities and water-entry attitudes is given in Figure 7 and Figure 8, respectively. As seen from Figure 7 and Figure 8, the MIA shows no clear monotonic trend with the water-entry angle.
Effect of water-entry attitude
The variation of the MIA of the 2D model with the water-entry attitude under different water-entry angles and water-entry velocities is shown in Figure 9 and Figure 10, respectively. It can be concluded from Figure 9 and Figure 10 that the MIA follows the same trend as the previous results. In most cases, the MIA first increases, then decreases, and then increases again. In summary, the MIA depends strongly on the water-entry velocity, whereas no obvious law governs its dependence on the water-entry angle or the water-entry attitude.
Conclusions
The CEL-based numerical model is extended to simulate the water-entry impact loads of the TMA for two-dimensional and three-dimensional structures. First, the theoretical details are supplied. Then the numerical model is established for the water-entry process of the TMA structures, and more than 125 cases are carried out. Finally, these numerical investigations provide evidence for the influence of the water-entry velocity, angle, and attitude on the MIA, which can help us to understand and evaluate the MIA of typical TMA structures and further predict the structural strength.
Genotypic x Environment Interaction and Stability Analysis in Turmeric (Curcuma longa L.)
Seventeen genotypes of turmeric (Curcuma longa L.) were grown during two consecutive seasons, 2007-08 and 2008-09, at two locations, i.e., the Main Experiment Station, Department of Vegetable Science, Narendra Deva University of Agriculture and Technology, Kumarganj, Faizabad, U.P., and Krishi Vigyan Kendra, Masodha, Faizabad, U.P., for stability analysis of yield and yield components: weight of mother rhizome (g), weight of primary rhizomes per plant (g), weight of fresh rhizome per plant (g), rhizome yield (q/ha), dry matter (%), curcumin (%), and oleoresin (%). Mean squares due to environment and the linear component were highly significant for all the traits except curcumin and oleoresin per cent. Linear components of the genotype × environment interaction were important for weight of mother rhizome, weight of primary rhizome per plant, and dry matter. Genotype × environment interactions were significant for all the characters except rhizome yield, curcumin, and oleoresin. The pooled deviation was significant for weight of mother rhizome and weight of fresh rhizome per plant. The genotype NDH-88 was most desirable over a wide range of environments for weight of primary rhizome per plant, whereas NDH-118, NDH-98, and NDH-79 were suited to adverse environmental conditions, and NDH-88, NDH-45, NDH-9, and NDH-74 produced the highest yield under favourable environmental conditions. Prabha was the most desirable variety over a wide range of environments for curcumin per cent, whereas NDH-7 and Rajendra Sonia were suited to poor environmental conditions, and NDH-18, NDH-14, and NDH-98 produced the highest curcumin under favourable environments. Two characters, namely weight of primary rhizome and curcumin per cent, were characterized as stable traits over the four environments.
Turmeric rhizomes contain curcumin (up to about 9%) and oleoresin (3-13%). Curcumin is gaining importance in food industries, pharmaceuticals, preservatives, and cosmetics. The ban on artificial colours has prompted the use of curcumin as a food colourant. In pharmaceuticals it is valued for its anticancerous, anti-inflammatory, antiseptic, antimicrobial, and anti-proliferative activities (Srimal, 1997) (10). Turmeric is a tropical crop and needs a warm and humid climate with an optimum temperature of 20 to 30 °C for normal growth and satisfactory production. It thrives best on sandy, loamy, or alluvial, loose, friable, and fertile soil rich in organic matter and having a pH range of 5.0 to 7.5. The crop cannot withstand waterlogging. The occurrence of genotype-environment interaction has posed a major challenge to obtaining a full understanding of the genetic control of variability. The study of genotype-environment interaction in its biometrical aspects is thus important not only from the genetical and evolutionary point of view, but is also very relevant to the production problems of agriculture in general and plant breeding in particular (Breese, 1969) (1). A variety having wide or good adaptability is one which consistently gives superior performance over a wide range of environments (Frey, 1964) (5). This combination of stability and performance is very important. It is a common practice in stability trials involving varieties and breeding lines to grow a series of genotypes in a range of different environments.
Therefore, the present investigation was conceived with the objective of studying the genotype x environment interaction and identifying the most productive and stable genotypes and environments.
Materials and Methods
The present experiment was carried out over two consecutive seasons, 2007-08 and 2008-09, at two locations: the Main Experiment Station, Department of Vegetable Science, Narendra Deva University of Agriculture and Technology, Kumarganj, Faizabad, U.P., and KVK, Masodha, Faizabad, U.P. The experiment comprised seventeen genotypes, including Prabha, and was laid out in a randomized complete block design with three replications. The soil of the Main Experiment Station, Department of Vegetable Science, Narendra Deva University of Agriculture and Technology, Kumarganj, Faizabad, U.P., was silty loam in texture, slightly sodic in reaction with pH 8.6, low in organic carbon (0.36%) and available nitrogen (118.7 kg/ha), and medium in available P (14.9 kg/ha) and K (224.2 kg/ha); the soil of the KVK, Masodha, Faizabad, U.P. farm was sandy loam in texture, slightly saline in reaction with pH 7.4, medium in organic carbon (0.50%), low in available nitrogen (165.4 kg/ha), and medium in available P (16.4 kg/ha) and K (245.2 kg/ha). Rhizomes of each genotype were planted in the month of June at a spacing of 30 x 20 cm and harvested in the month of February. The crop was grown with the recommended package of practices. Observations were recorded from five randomly selected plants for weight of mother rhizome (g), weight of primary rhizomes per plant (g), weight of fresh rhizome per plant (g), rhizome yield (q/ha), dry matter (%), curcumin (%), and oleoresin content (%). The pooled data were subjected to statistical analysis following the method of Eberhart and Russell (1966) (2).
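Since the stability parameters reported below all derive from the Eberhart and Russell (1966) model, a minimal computational sketch may help fix ideas. The function below is an illustrative implementation, assuming a matrix of genotype-by-environment means; the pooled-error correction term (s²e/r) for S²di is omitted because it requires replication-level data:

```python
import numpy as np

def eberhart_russell(X):
    """Eberhart-Russell (1966) stability parameters.

    X : (g, e) array of genotype means, g genotypes x e environments.
    Returns the per-genotype mean Xi, regression coefficient bi, and the
    uncorrected deviation mean square S2di.
    """
    X = np.asarray(X, dtype=float)
    g, e = X.shape
    I = X.mean(axis=0) - X.mean()              # environmental index, sums to zero
    Xi = X.mean(axis=1)                        # genotype means over environments
    bi = (X * I).sum(axis=1) / (I ** 2).sum()  # linear response to environment
    fitted = Xi[:, None] + np.outer(bi, I)     # regression prediction per cell
    S2di = ((X - fitted) ** 2).sum(axis=1) / (e - 2)
    return Xi, bi, S2di

# Illustrative call with 4 genotypes x 4 environments of made-up means
Xi, bi, S2di = eberhart_russell(np.random.default_rng(0).normal(100, 10, (4, 4)))
```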
Results and Discussion
A highly significant difference was observed among the genotypes for all the characters under observation. The differences among the environments were also significant for all characters except rhizome yield, curcumin, and oleoresin. Genotype × environment interactions were highly significant for weight of mother rhizome, weight of primary rhizomes per plant, weight of fresh rhizomes per plant, and dry matter, and significant at a lower probability level for plant girth. The environment + (genotype × environment) interaction was highly significant for weight of mother rhizome, weight of primary rhizomes per plant, weight of fresh rhizomes per plant, and dry matter, and non-significant for rhizome yield, curcumin (%), and oleoresin (%) content.
The linear component of environment was highly significant for all the characters except rhizome yield, curcumin, and oleoresin. The linear component of the genotype × environment interaction was highly significant for all the characters except rhizome yield, curcumin, and oleoresin. The pooled deviations were highly significant for weight of mother rhizome and weight of fresh rhizomes per plant; the remaining characters showed a non-significant response. Three stability parameters, viz., mean (Xi), linear sensitivity coefficient (bi), and non-linear sensitivity coefficient (S²di), were estimated for rhizome yield and the important yield-contributing traits along with the quality traits. These characters were weight of mother rhizome, weight of primary rhizomes per plant, weight of fresh rhizome per plant, rhizome yield, oleoresin, and dry matter per cent (Table 2). The mean performance over environments showed that weight of mother rhizome varied from 22.39 (R. Sonia) to 192.64 g (NDH-98). Six genotypes, namely NDH-98, NDH-7, NDH-9, NDH-68, NDH-118, and NDH-45, had significantly higher means for weight of mother rhizome than the general mean. However, the genotypes NDH-108, NDH-88, NDH-86, Prabha, NDH-18, NDH-8, NDH-53, NDH-14, NDH-74, R. Sonia, and NDH-79 were significantly inferior to the general mean (Table 1). Out of seventeen genotypes, NDH-98 and NDH-45 exhibited bi values greater than one. Fifteen genotypes had a lower linear response. The non-linear sensitivity coefficient (S²di) was significant for NDH-7, NDH-108, NDH-86, NDH-45, NDH-18, NDH-8, NDH-68, NDH-9, and NDH-98. The rest of the genotypes were characterized by S²di = 0. Two genotypes, viz., NDH-98 and NDH-45, had high mean (Xi), bi > 1, and S²di = 0, which indicated that these genotypes were highly responsive to favourable environments. The genotypes NDH-7, NDH-8, NDH-64, and NDH-118 had high mean values (Xi), bi < 1, and S²di = 0, which indicated that these genotypes were stable in unfavourable environments. The mean performance of the genotypes over environments (Xi), the linear coefficient (bi), and the deviation from linearity (S²di) for this character are presented in Table 1. Weight of primary rhizomes per plant ranged from 45.27 (NDH-68) to 157.72 g (NDH-98). Seven genotypes, viz., NDH-88, NDH-45, NDH-79, NDH-98, NDH-9, NDH-74, and NDH-118, had significantly higher means for weight of primary rhizomes per plant than the general mean. Ten genotypes, namely NDH-7, NDH-108, R. Sonia, NDH-86, Prabha, NDH-18, NDH-8, NDH-68, NDH-53, and NDH-14, had lower mean values for this trait. Out of seventeen genotypes, seven genotypes, namely NDH-88, NDH-45, NDH-9, NDH-14, NDH-18, NDH-74, and R. Sonia, showed regression values greater than one (bi > 1). The non-linear deviation from the regression coefficient was significantly different from zero for seven entries, namely NDH-86, NDH-45, NDH-68, NDH-9, NDH-14, NDH-79, and R. Sonia. Three genotypes, NDH-98, NDH-79, and NDH-118, had high mean performance (Xi), bi < 1, and S²di = 0, which indicated that these genotypes were stable in unfavourable environments, and three genotypes, viz., NDH-9, NDH-74, and NDH-45, had high mean performance, bi > 1, and S²di = 0, which indicated that these genotypes were suitable only for favourable environments. NDH-88 showed high mean performance, bi = 1, and S²di = 0, which indicated that this genotype was stable over a wide range of environments. On the basis of mean performance over environments, the weight of fresh rhizome per plant varied from 176.76 (NDH-86) to 937.94 g (NDH-98).
Out of seventeen genotypes, three genotypes, i.e., R. Sonia, NDH-118, and NDH-98, had significantly higher mean values than the general mean, while the rest of the genotypes had lower mean values than the general mean for weight of fresh rhizomes per plant (Table 2). The regression coefficient (bi) was greater than one for eight genotypes, while three genotypes showed bi values less than one. The deviation from regression (S²di) was significantly greater than zero in fourteen genotypes, viz., NDH-108, NDH-88, NDH-86, NDH-45, NDH-18, NDH-68, NDH-53, NDH-9, NDH-14, NDH-74, NDH-79, NDH-98, NDH-118, and R. Sonia, while the rest of the genotypes had non-significant S²di. Two genotypes, namely NDH-98 and R. Sonia, had high mean values, bi > 1, and S²di = 0, which indicated that these genotypes are suitable for favourable environments. One genotype, namely NDH-118, showed high mean performance, bi < 1, and S²di = 0, which indicated that this genotype was suitable for unfavourable environments. The mean performance over all the environments showed that rhizome yield ranged from 232.59 (NDH-86) to 421.79 q/ha (NDH-98). Nine genotypes, namely NDH-118, NDH-98, NDH-79, NDH-74, NDH-14, NDH-9, NDH-68, NDH-8, and NDH-18, had significantly higher means for rhizome yield than the general mean (Table 2). Out of seventeen genotypes, four genotypes had regression coefficients greater than one, while eight genotypes had regression coefficients less than one. The non-linear sensitivity coefficient (S²di) was significant for NDH-14, whereas the rest of the genotypes were characterized by S²di = 0. The genotypes NDH-79, NDH-98, and NDH-18 had high mean (Xi), bi > 1, and S²di = 0, which indicated that these genotypes were more responsive to favourable environments. Six genotypes, viz., NDH-118, NDH-74, NDH-14, NDH-9, NDH-68, and NDH-8, had high mean values, bi < 1, and S²di = 0, which indicated that these genotypes were suitable for unfavourable environments. The mean performance across the environments showed that dry matter per cent varied from 17.97 (NDH-8) to 28.23% (NDH-68). Out of seventeen genotypes, six genotypes, namely NDH-88, Prabha, NDH-68, NDH-53, NDH-79, and NDH-98, had high mean values for this trait compared to the general mean (Table 3). The regression coefficient (bi) was higher than unity for three genotypes, while nine genotypes showed bi values less than one. The deviation from regression was significant in three genotypes, viz., NDH-108, Prabha, and NDH-68, while the rest of the genotypes had non-significant S²di. The genotypes Prabha, NDH-68, and NDH-53 had high mean (Xi), bi > 1, and S²di = 0, which indicated that these genotypes were suitable for favourable environments. The genotypes NDH-88, NDH-79, and NDH-98, with high mean values, bi < 1, and S²di = 0, were most responsive in unfavourable environments. The curcumin per cent ranged from 3.16% (NDH-68) to 8.43% (NDH-98) with a general mean of 5.49%. Out of seventeen genotypes, six genotypes showed significantly higher and six genotypes significantly lower mean performance values for curcumin per cent (Table 3). The regression coefficient (bi) was higher than unity for two genotypes, while eleven genotypes showed bi values less than one. The deviations from regression for the genotypes NDH-18 and NDH-98 were significant, and for the rest of the genotypes they were non-significant. NDH-53 had high mean, bi = 1, and S²di = 0, and was thus considered a stable genotype over a wide range of environments.
Four genotypes, viz., Prabha, NDH-18, NDH-14, and NDH-98, had high mean (Xi), bi > 1, and S²di = 0, which indicated that these genotypes were suitable for favourable environments. The genotypes NDH-7 and R. Sonia, with high mean values, bi < 1, and S²di = 0, were most responsive in unfavourable environments. The oleoresin per cent in the various genotypes ranged from 3.05% (NDH-108) to 13.59% (NDH-14) with a general mean of 8.63%. Out of seventeen genotypes, nine genotypes showed significantly higher mean performance, while two genotypes showed lower mean values for oleoresin per cent (Table 3). The regression coefficient (bi) was higher than unity for two genotypes, while six genotypes showed bi values less than one. The deviation from regression was non-significant except for NDH-8. The non-linear sensitivity coefficient (S²di) was significant for NDH-8 and NDH-79, whereas the rest of the genotypes were characterized by S²di = 0. The genotype NDH-86 had high mean (Xi), bi > 1, and S²di = 0, which indicated that this genotype was suitable for favourable environments. The genotypes Prabha, NDH-18, NDH-9, NDH-14, and NDH-118 had high mean (Xi), bi < 1, and S²di = 0, which indicated that these genotypes were suitable for unfavourable environments.
Hepatocellular carcinoma recurrence in living and deceased donor liver transplantation: a systematic review and meta-analysis
Supplemental Digital Content is available in the text
Introduction
Early stage hepatocellular carcinoma (HCC) has become one of the major indications for liver transplantation since successful transplantations for HCC were initially reported by Mazzaferro et al. [1] Liver transplantation has also been reported to be beneficial to patients with relatively advanced HCC. [2,3] However, the shortage of deceased liver donors has limited the supply and therefore the application of deceased donor liver transplantation (DDLT). As a result, living donor liver transplantation (LDLT), which provides an alternative for patients waiting for DDLT, has markedly increased of late, especially in East Asia. Although a number of technical problems and donor safety issues associated with LDLT have been resolved, one problematic finding was an increased risk of HCC recurrence in LDLT, as reported in some initial clinical studies. [4,5] While it has been speculated that differences in HCC staging or features prior to transplantation may contribute to this recurrence of HCC following LDLT, this issue has yet to be resolved, and concerns regarding the impact of LDLT on HCC recurrence remain.
During LDLT, the main branches of the recipient's portal vein and hepatic artery, as well as the hepatic vein and the retrohepatic segment of the inferior vena cava, are typically preserved. This procedure may increase the potential for HCC residue and dissemination. Moreover, the relatively small grafts usually obtained from living donors grow quickly after LDLT, with the result that HCC colonization or growth may be accelerated under such conditions. There are data from animal studies which support such a conclusion. For example, Picardo et al [6] reported increases in HCC growth and cytokine growth factor expression after partial hepatectomy in a rat model. Similar results were reported by Yang et al [7] in a rat orthotopic liver transplantation model. Therefore, the high recurrence of HCC in LDLT may result from these surgical techniques and the use of a small graft in LDLT. As a result of these findings from clinical and animal studies, serious concerns remain regarding LDLT in patients with HCC. Such concerns are reflected in clinical practice: patients with tumors close to the main branches of vessels were only offered DDLT for liver transplants at the Toronto General Hospital [8]; and only patients with HCC at United Network for Organ Sharing (UNOS) T2 stage were considered candidates for LDLT at the Niguarda Ca' Granda Hospital. [9] Further, the issue of LDLT in patients with HCC raises many ethical and clinical questions. For example, should this risk be explained to the donor and the ethics committee? Should special criteria for LDLT be established? Should changes in surgical techniques or graft size be implemented to reduce the HCC recurrence risk in LDLT?
Currently, no randomized clinical trial comparing donor types in transplantation has been conducted. Only findings from retrospective studies currently supply evidence that can be used to address this issue. Though systematic reviews and/or meta-analyses [10][11][12] exist, the variables associated with these reports lack sufficient description. While cohort studies with large samples can provide important information on the relationships between observed factors and events and can control for major confounding factors, the complicated nature of liver transplantation limits their utility. Factors including patient selection, donor preservation, surgical technique, post-transplantation treatment, and anti-HCC therapies all contribute to the complexities involved in liver transplantation. Moreover, discrepancies in selection criteria, death, and dropout prior to liver transplantation have not received sufficient consideration, and characteristics correlating with HCC staging should also be included in these analyses.
Donor livers are allocated according to different national regulations, which vary among countries or regions. In most countries/regions, priority is afforded to patients with relatively small HCC. [13,14] Thus, patients receiving a DDLT comprise a special population of patients with HCC, often in early stages of the disease, with relatively high model for end-stage liver disease (MELD) scores, long waiting periods, and favorable prognoses. As recipients in most cases of LDLT are designated by the living donor, the criteria for LDLT in HCC are not the same as those for DDLT. Patients with advanced HCC are more likely to receive LDLT. In clinical practice, characteristics indicating high risks of HCC recurrence are taken into consideration for each patient selected, which may be more precise than a stratified HCC staging criterion. Thus, discrepancies in HCC features between LDLT and DDLT may remain, even after adjusting for HCC staging. Therefore, detailed characteristics of patient selection should be discussed in any meta-analysis.
We conducted the present meta-analysis after selecting articles based on a rationale emphasizing inter-group comparability. Articles with significant differences in HCC staging and post-transplantation anti-tumor therapies between groups were excluded. Several items were used to grade articles as supplements to the Newcastle-Ottawa scale (NOS), including adjuvant therapy, MELD score, non-tumor factors, patient selection, recurrence rate of HCC, waiting period, patient survival, and methods used to determine HCC staging and screen for tumor recurrence. Results from multivariate analyses were combined and discussed together with univariate analyses. Taken together, we found that a higher incidence of HCC recurrence was observed in LDLT as compared with that of DDLT after adjusting for HCC staging.
Literature review
"Cochrane Library," "PubMed," and "Embase" databases were reviewed and included the period from databases build up until October 1, 2017. Search strategies included the keywords "Liver Transplantation," "Hepatocellular Carcinoma," "Living Donor," "Recurrence Rate," and their synonymous terms [Supplementary Table S1, http://links.lww.com/CM9/A45]. After removal of duplicate articles, titles and abstracts were independently reviewed and assessed by two authors (HZ, YS). Further reviews of full texts were conducted in the same manner to establish whether details of articles met inclusion criteria. Bibliographies from all reviews and reports were examined to identify additional studies for potential inclusion in our analysis.
Evidence quality assessment
After identifying articles for inclusion in the review, the risk of bias was assessed with use of the NOS [Supplementary Table S2, http://links.lww.com/CM9/A45]. [15] Articles with fewer than four stars were regarded as low quality and excluded. A further bias assessment was conducted based on items suggested by transplant experts from the National Clinical Research Center for Digestive Disease, addressing factors that may have a serious impact on the results of the analyses. Three items in each category were generated to assess biases in "selection," "comparability," and "precision" [Table 1]. According to these inclusion criteria, studies involving only comparisons of unadjusted cumulated tumor recurrences between groups with non-comparable baseline data (HCC staging or post-transplantation anti-HCC therapies) were excluded. As multivariate analysis and propensity score matching were employed in some studies, baseline discrepancy and statistical matching were both taken into consideration. In univariate analyses, comparisons of HCC recurrence were required to be conducted between LDLT and DDLT groups with similar HCC staging. In the analysis of HCC recurrence, the type of donor (LDLT vs. DDLT) and clinical/pathology information on HCC staging were required for inclusion in the multivariate models.
Inclusion and exclusion criteria
Inclusion criteria: (1) reports written in English; (2) cohort studies (prospective and retrospective) with HCC recurrence information in LDLT and DDLT groups.
Exclusion criteria: (1) duplicated reports; (2) case reports and studies with fewer than five cases in the LDLT or DDLT group; (3) reviews without original data; (4) studies including tumors other than HCC (eg, cholangiocarcinoma) in which the required HCC data were not shown separately; (5) HCC staging in the LDLT or DDLT group missing and HCC staging not adjusted for in the analysis; (6) data overlapping with any included report; (7) reports receiving fewer than four stars according to the NOS; (8) unadjusted statistically significant difference (P < 0.05) in HCC staging at the time of transplantation or in anti-HCC therapies after transplantation between the LDLT and DDLT groups; (9) unadjusted discrepancy in patient selection criteria of HCC characteristics between the LDLT and DDLT groups.
Outcome measurement
The hazard ratio (HR) based on accumulated HCC recurrence rates after liver transplantation served as the only outcome measure in the current analysis. Adjusted HRs from multivariate analyses were extracted for the meta-analysis. Unadjusted HRs from univariate analyses were estimated based on the number of HCC recurrences and the P value for accumulated recurrence rates between groups. When only HCC recurrences were defined as the event used in the estimation of recurrence/relapse-free survival (RFS), the P value calculated in analyses of RFS comparisons could also be used. However, HCC recurrence and patient death were set as a combined endpoint in the calculation of disease-free survival (DFS). Results of comparisons of DFS were therefore screened out and eliminated from the analysis. O-E and variance were calculated following methods described previously [16] and were combined using Review Manager 5.3 software (Cochrane Collaboration, UK).
O-E and variance were calculated from the HR and its 95% confidence interval using the following equations [16]:

var(ln(HR_i)) = [(ln(CI_i,upper) - ln(CI_i,lower)) / (2 × 1.96)]²,

V_i = 1 / var(ln(HR_i)),    (O-E)_i = ln(HR_i) × V_i.

O-E and variance were calculated based upon the number of events and P values using the following equations [16]:

(O-E)_i = O_ri - E_ri,    |O-E|_i = √(V_ri) × Φ⁻¹(1 - P_i/2),

where O_ri denotes the observed number of events in the research group; O_ci denotes the observed number of events in the control group; E_ri denotes the log-rank expected number of events in the treated group; E_ci denotes the log-rank expected number of events in the control group; and 1/V_ri denotes the Mantel-Haenszel variance of the log HR.
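To make the HR/confidence-interval route concrete, the following is a minimal sketch (with made-up study values, not the paper's data) of deriving O-E and V for each study and pooling them with a fixed-effect estimator of the kind used by Review Manager:

```python
import math

def oe_and_variance(hr, ci_low, ci_high):
    """O-E and variance of one study from its HR and 95% CI (Tierney-style)."""
    var_ln_hr = ((math.log(ci_high) - math.log(ci_low)) / (2.0 * 1.96)) ** 2
    v = 1.0 / var_ln_hr              # inverse-variance weight V_i
    o_minus_e = math.log(hr) * v     # (O-E)_i = ln(HR_i) * V_i
    return o_minus_e, v

# Illustrative (HR, lower CI, upper CI) triples -- NOT the paper's data
studies = [(1.8, 1.1, 2.9), (1.4, 0.9, 2.2), (2.1, 1.2, 3.6)]
oes, vs = zip(*(oe_and_variance(*s) for s in studies))

# Fixed-effect (Peto-style) pooling: ln(HR_pooled) = sum(O-E) / sum(V)
pooled_hr = math.exp(sum(oes) / sum(vs))
print(f"pooled HR = {pooled_hr:.2f}")
```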
Results
A summary of the processes involved with retrieval and screening of articles is presented in Figure 1. After removing duplicated articles, a total of 641 articles were screened by title and abstract and 22 full texts of these articles were reviewed. Seven articles with partially overlapping data and two articles with indiscriminate liver cancers (HCC and cholangiocarcinoma) were excluded. In the report of Wan et al, [17] one case of intra-hepatic cholangiocarcinoma (ICC) was included in the DDLT group and one case of combined HCC and cholangiocarcinoma (cHCC-CC) was included in the LDLT group. This report was not excluded, as the proportions of the ICC or cHCC-CC were very low.
Discrepancies in baseline data of HCC staging were present in three articles. For these articles, only univariate analyses were conducted. One report by Sandhu et al [8] noted that "patients with tumors abutting the inferior vena cava, hepatic veins, or porta hepatis were offered only DDLT to prevent the compromising of oncological margins during the course of LDLT." Although the data of this report were subjected to multivariate analysis, the analysis did not adjust for the factor "tumor abutting vessel." In another report, Di Sandro et al [9] indicated that "patients with stage II HCC were proposed for LDLT" and only univariate analysis was conducted. After discussing the data of these reports within our group, these five articles were excluded [Table 2]. [8,9,[18][19][20] Finally, eight articles remained for our analysis. One article from the mainland of China, including 6471 cases of DDLT and 389 cases of LDLT, only reported DFS between the groups. The impact of this large sample study was examined in a sensitivity analysis, in which the HR for DFS was combined with the HR for HCC recurrence after replacing articles with data overlapping with this study. The quality of evidence was assessed by the NOS and the items suggested by experts [Table 3]. [5,17,[21][22][23][24][25][26] Characteristics of the eight studies used in our analysis are presented in Table 4. Potential confounding factors in the baselines, which showed little variation, are listed for each study. HCC within the Milan criteria was 40% to 75% in each study. Characteristics correlating with HCC staging and invasion were similar in five studies. [5,17,21,22,25] Significant differences in microvascular invasion, number, or diameter of HCC were adjusted with use of the Cox regression model in three other studies, [23,24,26] including the study for DFS. [26] Waiting periods, which were much shorter in the LDLT group, were reported in four of the articles. Differences in factors such as surgery duration, graft-to-recipient weight ratio (GRWR), and graft size showed essential distinctions between LDLT and DDLT patients, and were regarded as part of the grouping factors. Thus, differences in risk for HCC recurrence between groups could be a direct or indirect result of any of these factors.
RFSs were relatively lower in the mainland of China (70.5-70.9%) as compared with that of other countries/regions (87.9-100%) and considerable heterogeneity was present between countries/regions. In the DFS study of the mainland of China, the observed high recurrence risk in the DDLT group (P < 0.001) was reversed in the adjusted comparison (P = 0.281). While in the three multivariate studies from other countries/regions, similar results were obtained from the univariate and multivariate analyses. Thus, in the studies from the mainland of China, discrepancies in baseline may reduce HCC recurrence in the LDLT group.
The seven articles included in the meta-analysis [Figure 2] consisted of three articles using multivariate analyses and four using univariate analyses. In one univariate analysis, the P value was only reported as "P < 0.05," and it was set at 0.05 in the calculation of the HR for the current meta-analysis. Overall, a salient result that emerged from the seven studies was a significantly increased risk of HCC recurrence in the LDLT group (P = 0.01). A high level of heterogeneity was present (I² = 48%) among the studies, which were grouped by univariate and multivariate analyses [Figure 3]. A high level of heterogeneity was found among the univariate studies and a very low level among the multivariate studies (I² = 61% and I² = 0%, respectively). The presence of a high risk in the LDLT groups was only supported by the results obtained with multivariate studies.
Again, the HRs of the two studies from the mainland of China showed apparent differences compared with those of the other studies. Therefore, a subgroup analysis was conducted after studies were grouped as the mainland of China vs. the other countries/regions [Supplementary Figure S1, http://links.lww.com/CM9/A45]. Heterogeneity disappeared in the analyses performed within each subgroup (I² = 0% and I² = 0%). The increased risk in the LDLT groups was only supported by the results from reports of countries/regions other than the mainland of China (P = 0.0002). Studies were further grouped by the details of policies for organ allocation. The most significant increase in HR was found in studies where organs tended to be allocated to non-tumor patients [Figure 4].
A sensitivity analysis was performed by removing the study with the maximum weight in the meta-analysis [Supplementary Figure S2, http://links.lww.com/CM9/A45]. Another sensitivity analysis was performed by introducing the study [26] for DFS, after two studies [17,22] containing overlapping data were removed [Supplementary Figure S3, http://links.lww.com/CM9/A45]. In these sensitivity analyses, the overall effect and the combined HR in each subgroup were not significantly changed.
Discussion
The current meta-analysis was conducted in an attempt to isolate the specific impact of the surgical procedure and graft size of LDLT on HCC recurrence after reducing confounding effects from tumor features. For this analysis, the combined data of articles from different countries/regions were included. As noted above, differences in patient selection criteria and death/dropout prior to liver transplantation may result in discrepancies in HCC staging prior to liver transplantation. Such factors can exert a serious impact on HCC recurrence. After scrutinizing the full texts of related studies, five articles were excluded from the meta-analysis for uncontrolled confounding in HCC staging or patient selection [Table 2]. No statistically significant differences were found for HCC recurrence in these excluded studies. In two studies, relatively low HCC recurrence was observed in the LDLT group. Only three cases in the LDLT group were included in one study, and a relatively high proportion of small HCC in the LDLT group (80.0% vs. 68.8% conforming to the Milan criteria) was found in another study. Thus, none of the excluded articles would be likely to show effects opposite to the combined result of the current meta-analysis.
Tumor size/stage along with macro/microvascular invasion are considered important prognostic factors. Though no statistically significant differences were found between baseline HCC size/staging and macro/microvascular invasion as determined using univariate analyses, differences in HCC staging or other characteristics of invasion still existed within four of the univariate studies included in our analysis. In two studies, the proportions of patients with vascular invasion were quite different between the LDLT and DDLT groups (Chen et al [22]: 40.9% vs. 30.7%; Lo et al [5]: 34.9% vs. 17.6%). Data on vascular invasion were not reported in one article [25], and HCC staging was only approximately matched in another. [17] The macro/microvascular invasion variable was adjusted in all three multivariate analyses, together with tumor size/stage. The prognostic values of these factors were also clearly shown in these models. Thus, among the studies included in our analysis, results from multivariate analyses were more reliable than those of univariate analyses. The combined HR of the meta-analysis for multivariate models showed a high risk for HCC recurrence in the LDLT group, which was concurrent with the overall results of our meta-analysis. Such findings most likely reflect a high HCC recurrence in LDLT. While we identified two meta-analysis studies on similar topics, only unadjusted results were combined in these studies. [11,12] Potential discrepancies in baselines weakened the reliability of the results of these studies. No statistically significant differences in HCC recurrence were found in one of these meta-analysis studies. [12] In the other, only a difference in DFS was found and HCC recurrence was not discussed separately. [11] The results of the current meta-analysis show a consistent high risk of HCC recurrence in LDLT.
According to the experience of liver transplant experts at our institution, preserving intra-hepatic branches of vessels or bile ducts may potentially increase HCC residue and dissemination, especially when the HCC abuts these structures. In one report, not included in our analysis, it was noted that "patients with tumors abutting the inferior vena cava, hepatic veins, or porta hepatis were offered only DDLT to prevent the compromising of oncological margins during the course of LDLT." [8] Under conditions where similar baselines were present, 5-year cumulated recurrence rates were nearly identical between the LDLT (15.4%) and DDLT (17%) groups. This finding partially reveals the impact of surgical methods. In one study of LDLT, a decreased RFS was found in the GRWR <0.8 group as compared with the GRWR >0.8 group (P = 0.17), and this difference in RFS was statistically significant in the subgroup of patients with HCC beyond the Milan criteria (P = 0.047). [27] Thus, an impact of graft size on HCC recurrence may also exist and contribute to the high rates of HCC recurrence in LDLT.
In the UNOS system of the United States, patients receive deceased donated livers on the bases of "urgency" and "utility." As patients with HCC who would benefit from transplantation should have a prognosis compatible with that of patients with benign disease, relatively strict criteria for transplantation were imposed upon patients with HCC. In 2002, MELD scores were employed for donor liver allocation in the United States. Initially, a priority score was assigned to patients with HCC meeting the Milan criteria (one lesion smaller than 5 cm; or up to three lesions, each smaller than 3 cm; no extrahepatic manifestations; no evidence of gross vascular invasion). Although this priority score policy was revised, a priority score of 22 was still given to patients with HCC meeting the UNOS T2 criteria. Thus, transplants were more frequently performed in patients with smaller HCCs. Similar strategies for liver transplantation in HCC with UNOS criteria have been employed elsewhere. [13] However, in Korea [28] and Hong Kong, China, [29] no priority policy exists for HCC, and donor livers are allocated based only on the severity of the liver disease (MELD score). In the mainland of China, the establishment of the national system was relatively late, and before that most patients had received liver transplantations primarily based upon their date of registration. Different policies for DDLT may result in discrepancies in HCC features between groups and studies. Organ allocation policies were either identical or similar to that of UNOS in three studies, which were from the United States and France. Relatively high rates of HCC recurrence in LDLT were obtained in these studies, similar to those found in two studies from Korea and Hong Kong, China. The priority for transplant in patients with HCC was based upon the diameter of HCC, the number of lesions, and vascular invasion, with all of these characteristics being discussed or adjusted in these studies. Therefore, the potential confounding effects associated with this organ policy were reduced, and similar results were found among the studies.
Discrepancies in other characteristics of HCC may still have an impact on the results. For example, a long waiting period may increase patient dropout rates associated with HCC progression and reduce the number of patients with invasive HCC. Under such conditions, patients with relatively advanced HCC may be more likely to select LDLT. As no priority is allocated to patients with HCC in Korea and Hong Kong, China, waiting periods can be quite long. In one study from Hong Kong, China, it was reported that approximately 80% of HCC candidates expired during the waiting period for a DDLT. [29] In the current meta-analysis, the highest HRs were found in two reports. The risk of patient dropout from waiting lists for DDLT and the need for LDLT were usually determined using multiple features of HCC, while only one to three of these features were adjusted in these studies. In this way, discrepancies in other baseline features of HCC may still increase HCC recurrence in LDLT.
One included report from the United States, [24] reviewing data from January 1998 to August 2009, found that a high recurrence of HCC in LDLT was significant before 2002 (in 2002 a priority for small HCCs was employed) and became non-significant after 2002. The number of recurrences decreased in LDLT patients with T2-stage HCC (P = 0.026) while recurrence numbers increased in DDLT patients with T3-stage HCC (P = 0.29). More donor livers were allocated to patients with HCC in the era when MELD scores were employed. Thus, reduced waiting times in patients with HCC may result in relatively increased HCC recurrence rates in DDLT patients. An approximate trend for decreased HRs with time can also be found in the forest graph of the current meta-analysis [ Figure 2]. Improvements in surgical techniques and waiting periods were both proposed as explanations for these results.
No mandatory allocation policy had been implemented in the mainland of China before 2010, though tumor stage and MELD scores were typically used in decisions of organ allocation. The mean and median waiting times of 45 to 47 days for DDLT patients in the mainland of China were considerably shorter than those in other countries/regions. As a result, the impact of waiting time was significantly reduced. However, the features of HCCs in LDLT and DDLT were not similar. A large sample study from the mainland of China showed significantly reduced DFS in LDLT patients (HR = 0.650, 0.514-0.823) as determined using univariate analysis but slightly increased DFS (HR = 1.418, 0.794-2.538) by Cox analysis. HCC features tended to reduce the recurrence risk in the LDLT group. Stresses in medical practice and requirements for approval by government agencies have resulted in surgeons within the mainland of China usually selecting DDLT for relatively advanced HCC. Such a tendency may then balance the impact of factors like surgical methods and graft size, resulting in nearly equal risks in LDLT and DDLT after combining the HRs in this subgroup.
In conclusion, the results of our analysis indicate an overall increased risk of HCC recurrence in LDLT as compared with DDLT. Though biases in patient selection and waiting periods may reduce the reliability of this finding, the results of the meta-analysis of adjusted studies support the conclusion that the increase in HCC recurrence in LDLT was due to factors other than discrepancies in HCC staging by current systems. The relatively shorter preoperative observation window in LDLT may allow fewer cases of HCC with invasive features to be screened out, which may provide a possible explanation for the high rates of HCC recurrence. The impact of surgical methods and graft size cannot be confirmed as a contributing factor. Further studies are required to establish the exact roles of adjustment for HCC staging, patient selection, waiting periods, and perioperative treatments.
Carboxymethyl Chitosan-Functionalized Polyaniline/Polyacrylonitrile Nano-Fibers for Neural Differentiation of Mesenchymal Stem Cells
Electroconductive scaffolds based on polyaniline (PANi)/polyacrylonitrile (PAN) were fabricated and surface-functionalized with carboxymethyl chitosan (CMC) as efficient scaffolds for nerve tissue regeneration. The results of scanning electron microscopy (SEM), Fourier-transform infrared (FTIR) spectroscopy, and water contact angle measurements confirmed the successful fabrication of the CMC-functionalized PANi/PAN-based scaffolds. Human adipose-derived mesenchymal stem cells (hADMSCs) were cultured on the scaffolds for 10 d in the presence or absence of β-carotene (βC, 20 µM) as a natural neural differentiation agent. The MTT and SEM results confirmed the attachment and proliferation of hADMSCs on the scaffolds. The expression of MAP2 at the mRNA and protein levels showed the synergic neurogenic induction effect of CMC-functionalization and βC for hADMSCs on the scaffolds. The CMC-functionalized nanofibrous PANi/PAN-based scaffolds are potential candidates for nerve tissue engineering.
Introduction
Electroconductive and biocompatible polymers such as polyaniline (PANi), polypyrrole, polythiophene, and their derivatives have found wide applications in biomedical fields including bioactuators, biosensors, neural implants, drug delivery systems, and tissue engineering [1]. Tissue engineering uses proper scaffolds to help regenerate/repair missing/damaged tissues. Biocompatibility, biodegradability, three-dimensional (3D) structure, and interconnected porosity of scaffolds are important parameters to support cell attachment, proliferation, and differentiation [2,3]. The electroconductivity of scaffolds is important in tissue engineering since it improves the propagation of bioelectrical signals to stimulate cell attachment and tissue formation [1]. Electroconductive polymers are used as biomaterials to stimulate muscle, bone, cardiac, and nerve tissues, although their in vivo applications may be constrained by their non-degradability [4][5][6].
PANi is one of the most promising electroconductive polymers, with a wide range of applications in bioengineering and biomedicine. PANi is simple to synthesize and displays a variety of electroconductivities, biocompatibility, good environmental stability, and a distinctive and simple doping method [7,8]. Despite all these positive qualities, PANi has low mechanical strength, hindering its applications, especially as tissue engineering scaffolds [7]. The mechanical stability of PANi scaffolds can be improved via combination with another polymer of high mechanical strength such as polyacrylonitrile (PAN) [8][9][10][11]. PAN is a cost-effective and thermally and chemically stable engineering polymer generally used for textile purposes. Using the electrospinning process, PANi/PAN nanofibrous scaffolds provided adequate support for cell adhesion, proliferation, and differentiation into muscle-like cells [8][9][10].
Electrospinning is a promising technique to fabricate nanofibrous scaffolds from a wide range of natural/synthetic polymers, including electroconductive polymers [12]. The high porosity and high surface-to-volume ratio, as well as the appropriate mechanical stability of electrospun nanofibers, make them attractive scaffolds for neural tissue engineering. The nanofibrous nature of electrospun scaffolds mimics the structure and composition of collagen fibers within the native ECM, which facilitates the migration, attachment, and proliferation of cells as well as the transport of nutrients and biochemical factors [13]. Meanwhile, the composition, morphology, structure, and 3D architecture of electrospun scaffolds can be easily controlled by choosing adequate materials and processing parameters to regulate their biological functions and specific light/electric/magnetic properties [14]. Nanofibers with various diameters (40-2000 nm) can be fabricated by choosing a suitable combination of polymers and solvents [13].
Beta-carotene (βC) is a plentiful provitamin A carotenoid found in dark green, yellow, and orange fruits and vegetables [20,[25][26][27]. Because of its conjugated double bonds, βC has electrical activity [28,29]. The ability of βC to induce neurogenesis in stem cells has been reported recently [30][31][32][33][34]. This research aimed to fabricate electroconductive nanofibers based on PANi/PAN via the electrospinning process and then surface-functionalize them with CMC. βC was used as a natural bioactive component to induce neural differentiation of stem cells. Here, we investigated how a combination of CMC-functionalized PANi/PAN nanofibers and βC might imitate the structure of the extracellular matrix (ECM) and facilitate cell proliferation and neural differentiation of human adipose-derived mesenchymal stem cells (hADMSCs).
Synthesis of CMC
Carboxymethyl chitosan (CMC) was synthesized according to our previous reports [21][22][23]. Briefly, CTS (1 g) was purified by dissolving in acetic acid solution (1%, 40 mL) at room temperature, precipitating with sodium hydroxide solution (1 M, 50 mL), and washing with deionized water and then isopropanol. The purified chitosan was added to sodium hydroxide solution in isopropanol (0.1 g/mL, 20 mL) and mechanically stirred for 5 h to dissolve completely. Afterward, a monochloroacetic acid solution in isopropanol (0.4 g/mL, 5 mL) was added dropwise to the chitosan solution and stirred at room temperature for 8 h. Finally, the resulting precipitate was filtered, washed with a mixture of ethanol/deionized water (1/3, v/v) three times, and dried in a vacuum oven at room temperature.
Fabrication of Nanofibers
PANi (0.01 g) and CSA (0.01 g) were added to DMF (3 mL), while PAN (0.39 g) was separately added to DMF (2 mL) and stirred at room temperature overnight for complete dissolution. The PANi/CSA solution was filtered, mixed with the PAN solution, and stirred at room temperature for 2 h. The electrospinning process was performed on a Nano Spinner apparatus (Iran) operating at a voltage of +15 kV, a flow rate of 0.2 mL/h, and a nozzle-to-collector distance of 16 cm. The PANi/PAN nanofibers were collected on a drum covered with aluminum foil. The electrospun mats were cut into 0.5 × 0.5 cm^2 pieces before use.
CMC-Functionalization of Nanofibers
The PANi/PAN mats (0.5 × 0.5 cm^2) were immersed in a solution of ethanol/deionized water (1/1, v/v) for 3 h and rinsed with a large volume of deionized water to eliminate any contamination. The washed mats were dried under the fume hood. CMC was added to deionized water at three concentrations (1%, 2%, and 3% w/v) and stirred for 24 h to dissolve completely. The PANi/PAN mats were placed in the CMC solutions for 1 h, immersed in glutaraldehyde solution (0.5%, w/v) at 37 °C for 2 h, and rinsed with a large volume of deionized water to remove the uncrosslinked CMC. The surface-functionalized mats (PANi/PAN/CMC1%, PANi/PAN/CMC2%, and PANi/PAN/CMC3%, respectively) were dried in a vacuum oven at 40 °C for 24 h.
Characterization of Scaffolds
The morphology and fiber diameter of the mats were studied by scanning electron microscopy (SEM, Tescan, Mira3, Czech Republic). All samples were coated with a gold layer before microscopy. The diameter of the fibers was measured using ImageJ software (version 1.41). Fourier-transform infrared (FTIR) spectroscopy (Equinox 55, Bruker, Germany) was employed to characterize the chemical structure of the mats.
Cell Seeding on Scaffolds
Human adipose-derived mesenchymal stem cells (hADMSCs) obtained from the Iranian Biological Resource Center (Tehran, Iran) were cultured in DMEM medium supplemented with FBS (10%) in a humidified incubator at 37 °C under CO2 (5%). Each side of the scaffolds was sterilized under UV radiation for 20 min. The hADMSC suspension (100 µL, 10^4 or 10^6 cells) was placed on the scaffold (0.5 × 0.5 cm^2) in a 96-well tissue culture plate in triplicate and incubated at 37 °C under CO2 (5%) for 2 h to allow cell attachment. Finally, the seeded scaffolds were cultured in nutrient DMEM (1 mL) with or without βC (20 µM) in an incubator at 37 °C under CO2 (5%). The media were replaced every three days. A tissue culture plate (TCP) without any scaffold was used as a control.
Biocompatibility Assays
The morphological changes of the seeded cells (10^4 cells/0.5 × 0.5 cm^2 scaffold) over 72 h of incubation were observed under an optical microscope (Olympus, Japan). The proliferation of the seeded cells (10^4 cells/0.5 × 0.5 cm^2 scaffold) over up to 10 d of incubation was assessed by MTT assay. For this purpose, the medium of each sample (n = 3) was replaced with 200 µL of MTT solution (5 g/L), followed by incubation at 37 °C for 3 h. The formazan crystals formed were dissolved in DMSO, and the optical density (OD) of the solution was measured at 570 nm on an ELISA reader (Bio-Tek ELx 800).
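As an illustration of how such OD readings translate into relative viability values, the following is a minimal sketch with invented numbers; the blank-correction step and the normalization to the TCP control are our assumptions about the standard workflow, not details stated in the text.

```python
import numpy as np

# Sketch of relative viability from MTT optical densities (570 nm);
# all OD values below are illustrative placeholders.
od_blank   = 0.06                              # medium-only well
od_control = np.array([0.82, 0.79, 0.85])      # TCP wells (n = 3)
od_sample  = np.array([0.88, 0.91, 0.86])      # scaffold wells (n = 3)

viability = ((od_sample - od_blank).mean()
             / (od_control - od_blank).mean() * 100)
print(f"Relative viability: {viability:.1f}% of TCP control")
```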
The morphology and attachment of the cells (10^4 cells/0.5 × 0.5 cm^2 scaffold) cultured for 72 h were studied by SEM. To this end, the cells were washed with PBS, fixed in a glutaraldehyde solution (2.5%) for 2 h, dehydrated in a series of ethanol solutions (60%, 70%, 80%, 90%, and 100%), and dried at room temperature.
Neural Differentiation Assays
The expression of the microtubule-associated protein 2 (MAP2) gene, a specific neural marker, at the mRNA level in the cells (10^6 cells/0.5 × 0.5 cm^2 scaffold) cultured for 10 d was evaluated by quantitative reverse transcription-polymerase chain reaction (qRT-PCR). After washing twice with PBS, the total RNA of the cells was extracted with a total RNA isolation kit (Ribo-Ex, GeneAll, South Korea) and converted to cDNA using an easy cDNA synthesis kit (Parstous Biotechnology, Iran) according to the manufacturers' protocols. Finally, the RT-PCR process was performed on a thermal cycler (Rotor-Gene Q, Qiagen, USA) using a master mix reagent (2X PCR Master Mix, Biofact Co., South Korea). The glyceraldehyde 3-phosphate dehydrogenase (GAPDH) gene was evaluated as a reference gene. The expression of the MAP2 gene was quantified by normalization to the expression of the GAPDH gene. The primers designed for RT-PCR were as follows: MAP2, F: 5'-TAAGGATCAAGGCGGAGCAG-3', R: 5'-AGACACCTCCTCTGCTGTTTC-3'; GAPDH, F: 5'-CTCATTTCCTGGTATGACAACG-3', R: 5'-CTTCCTCTTGTGCTCTTGCT-3'.
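Relative expression from such Ct values is conventionally computed with the 2^-ΔΔCt method; the sketch below assumes that convention (the text does not name the exact quantification model) and uses placeholder Ct values.

```python
import numpy as np

# Sketch of relative MAP2 expression by the 2^-ddCt method, with GAPDH
# as the reference gene; Ct values are illustrative placeholders.
ct = {
    "treated": {"MAP2": np.array([24.1, 24.3, 24.0]),
                "GAPDH": np.array([18.2, 18.0, 18.1])},
    "control": {"MAP2": np.array([27.5, 27.8, 27.6]),
                "GAPDH": np.array([18.1, 18.3, 18.2])},
}
d_ct_treated = (ct["treated"]["MAP2"] - ct["treated"]["GAPDH"]).mean()
d_ct_control = (ct["control"]["MAP2"] - ct["control"]["GAPDH"]).mean()
fold_change = 2 ** -(d_ct_treated - d_ct_control)   # 2^-ddCt
print(f"MAP2 fold change vs. control: {fold_change:.2f}")
```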
The expression of MAP2 protein in the cells (10^4 cells/0.5 × 0.5 cm^2 scaffold) cultured for 10 d was assessed by immunocytochemistry (ICC) assay. The scaffolds were washed with PBS, fixed with paraformaldehyde solution (4%) at 4 °C for 20 min and then at room temperature for 5 min, washed with PBS, permeabilized with Triton X-100 (4%) for 10 min, and washed again with PBS. The permeabilized cells were immersed in FBS for 45 min to block nonspecific binding sites, incubated with the primary antibody against MAP2 (Santa Cruz Biotechnology, USA) at 4 °C overnight, washed with PBS, incubated with goat serum (5%) for 45 min, incubated with a FITC-conjugated secondary antibody (Abcam, USA) for 30 min, and washed again with PBS. Finally, the nuclei of the cells were stained with DAPI, washed twice with PBS, and observed under a fluorescence microscope (Olympus LS, IX53, Japan).
Statistical Analysis
The results, expressed as mean ± SD, represent at least three independent experiments. Differences between groups were analyzed using one-way ANOVA after testing for homogeneity of variances with the PASW Statistics package (version 19, SPSS Inc., USA). Statistical significance was assigned as * for p ≤ 0.05, ** for p ≤ 0.01, and *** for p ≤ 0.001.
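A minimal sketch of the group comparison described above, using SciPy's one-way ANOVA on invented OD readings:

```python
from scipy import stats

# One-way ANOVA across three illustrative groups (placeholder values).
group_a = [0.82, 0.79, 0.85]
group_b = [0.95, 0.98, 0.93]
group_c = [1.10, 1.05, 1.12]

f_stat, p_value = stats.f_oneway(group_a, group_b, group_c)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
# Significance would then be flagged as * (p <= 0.05), ** (p <= 0.01),
# or *** (p <= 0.001), following the convention stated above.
```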
Fabrication of Scaffolds
PANi exists in three different redox forms: the fully reduced leucoemeraldine, the half-oxidized emeraldine, and the fully oxidized pernigraniline. Emeraldine is the most widely used form of PANi due to its high stability at room temperature and its high electroconductivity in the protonated form (emeraldine salt) [8]. Here, the emeraldine base was protonated/doped with CSA, which also makes it soluble in common organic solvents such as DMF. Meanwhile, a high level of room-temperature electroconductivity (300 ± 30 S/cm) can be obtained [35].
Here, PANi/PAN (2.6/100, wt./wt.) fibrous mats were fabricated via the electrospinning process. In PANi/PAN nanofibers, hydrogen bonding between the amine groups of PANi and the nitrile groups of PAN is possible [11]. SEM images (Fig. 1) showed that the electrospun PANi/PAN mats consist of bead-free, randomly oriented nanofibers with an average diameter of 183 ± 37 nm, presenting a highly micro-porous structure with large interconnected cavities suitable for providing appropriate biological conditions for the migration, attachment, and proliferation of cells. A similar nanofibrous PANi/PAN (2.1/100, wt./wt.) scaffold developed by Hosseinzadeh et al. [8] showed an elastic modulus of 146.19 ± 7.22 MPa, a maximum stress of 1.69 ± 0.11 MPa, and a strain at break of 56.3 ± 3.4%.
Fig. 1 SEM images of virgin and CMC-functionalized PANi/PAN nanofibers
The surface of the PANi/PAN nanofibers was functionalized with CMC. For this purpose, the water-soluble CMC was coated on the surface of the nanofibers and crosslinked with glutaraldehyde. The crosslinking occurred through the reaction of the amine groups of CMC with the aldehyde groups of glutaraldehyde [17,21]. The surface of the PANi/PAN nanofibers was coated using CMC solutions at three concentrations (1%, 2%, and 3% w/v), resulting in the scaffolds PANi/PAN/CMC1%, PANi/PAN/CMC2%, and PANi/PAN/CMC3%, respectively, to examine the effect of the surface-immobilized CMC content on the attachment, proliferation, and differentiation of cells.
Fig. 2 ATR-FTIR spectra of PANi/PAN nanofibers, CMC, and PANi/PAN/CMC2% nanofibers
SEM images of the CMC-functionalized PANi/PAN nanofibers (Fig. 1d) displayed the immobilization of CMC on and between the fibers. The content of surface-immobilized CMC increased when CMC solutions with higher concentrations were employed. The water contact angle decreased from 49° for PANi/PAN nanofibers to 45° for PANi/PAN/CMC2% nanofibers.
Biocompatibility
The biocompatibility of tissue engineering scaffolds is essential for cell attachment and subsequent proliferation. Thus, hADMSCs were first cultured on the scaffolds PANi/PAN/CMC1%, PANi/PAN/CMC2%, and PANi/PAN/CMC3% for 10 d in the absence of βC. TCP and the scaffold PANi/PAN were used as negative controls to see how the presence of a nanofibrous electroconductive PANi/PAN substrate and surface-immobilized CMC moieties affects the proliferation of hADMSCs. Daily observation under the optical microscope (Fig. 3a) showed that the hADMSCs on the scaffolds, cultured in a nutrient medium without any growth factor, were viable and grew rapidly during the first 72 h of incubation. The spindle-shaped hADMSCs were oriented along the scaffolds and demonstrated strong cell-cell adhesion. Furthermore, cell growth on the scaffolds was similar to that on TCP as a control.
The MTT assay was carried out to assess the viability of the seeded cells (Fig. 3b). The MTT results showed that the cells cultured on all scaffolds were viable and proliferated continuously over 10 d of incubation. Compared to TCP as a control, both the CMC-functionalized and virgin PANi/PAN-based scaffolds were able to support cell proliferation up to 10 d, and no cytotoxic effect was observed (p > 0.5). The biocompatibility of PANi-based scaffolds has been reported earlier [8,10,[37][38][39]. Increasing the surface-immobilized CMC content did not result in any significant change in the proliferation of hADMSCs up to 7 days of incubation. However, after 10 days, the proliferation of hADMSCs displayed a gradual increase with rising surface-immobilized CMC content.
Fig. 3 (a) Optical microscopy images of hADMSCs (10^4 cells/0.5 × 0.5 cm^2 scaffold) cultured on scaffolds for 72 h without any growth factor. (b) The cell viability of hADMSCs cultured up to 10 d obtained from MTT assay (*: p < 0.05). (c) SEM images of hADMSCs cultured for 72 h
SEM images (Fig. 3c) displayed good spreading and attachment of hADMSCs on all scaffolds after 72 h of incubation, where the cells had become more elongated due to well-established cell-cell and cell-scaffold interactions with actin filaments (invadopodia and filopodia). The surface-functionalization of PANi/PAN nanofibers with CMC resulted in higher cell coverage on the scaffolds, demonstrating boosted cell proliferation, in agreement with the OD values obtained from the MTT assay (Fig. 3b). These results confirmed that the fabricated scaffolds are biocompatible and can support the attachment, proliferation, and later differentiation of hADMSCs and mimic the ECM environment for nerve tissue engineering applications.
Neural Differentiation
Stem cells can differentiate into a variety of cell types. Growth factors such as cytokines and hormones are usually required for the differentiation of stem cells into specialized cells. βC is a precursor of retinol that can be metabolized to retinoic acid and act in retinoic acid signaling pathways by transmitting cell-cell signals during the progression of developmental processes [40]. Retinoic acid enters the cell nucleus and binds to the target gene via nuclear receptors [41]. Therefore, βC increases neural differentiation by improving the phosphorylation of extracellular signal-regulated kinases (ERK) [34]. The ability of βC to induce neurogenesis in stem cells has been reported recently [30][31][32][33][34].
The neurogenic induction of the differentiated cells was quantified by assessing the expression of the MAP2 gene at the mRNA level through qRT-PCR, normalized to the expression of the GAPDH gene as a control (Fig. 5). The MAP2 gene is one of the most significant in vitro markers for mature neurons investigated in studies of the neurogenesis process [33]. The employment of βC and CMC-functionalization improved the level of MAP2 expression for the differentiated cells on the scaffolds PANi/PAN + βC and PANi/PAN/CMC3%, respectively, compared to the scaffold PANi/PAN and TCP. Meanwhile, the combination of PANi incorporation, CMC-functionalization, and βC displayed a synergic effect on neurogenic induction, with a significantly higher level of MAP2 expression (p < 0.001) recorded for the differentiated cells on the scaffold PANi/PAN/CMC3%+βC compared to the others. Finally, the expression of MAP2 at the protein level in the differentiated cells was studied by ICC assay. MAP2 is a neuronal phosphoprotein that regulates the structure and stability of microtubules, neuronal morphogenesis, cytoskeleton dynamics, and organelle trafficking in axons and dendrites. Isoforms of MAP2 are expressed in the perikarya and dendrites of neurons. The fluorescence microscopy images (Fig. 6a) demonstrated the expression of MAP2 protein on the scaffolds PANi/PAN + βC, PANi/PAN/CMC3%, and PANi/PAN/CMC3%+βC. The presence of βC and CMC provided a neuro-inductive environment for the differentiation of hADMSCs into neuron-like cells. Meanwhile, the cells resembled nerve cells in morphology. DAPI staining showed the healthiness of the cells' nuclei. Similar to the qRT-PCR results, the combination of βC and CMC-functionalization resulted in a synergic neurogenic induction effect, with a significantly higher level of MAP2 protein expression (p < 0.001) detected for the differentiated cells on the scaffold PANi/PAN/CMC3%+βC (83%) compared to the others (Fig. 6b).
Conclusion
The surface of PANi/PAN nanofibers can be functionalized with CMC to fabricate scaffolds for nerve tissue engineering applications. The FTIR, SEM, and water contact angle results showed a successful functionalization process. The microscopy and MTT data confirmed the biocompatibility of the fabricated scaffolds, which support the attachment, proliferation, and differentiation of seeded hADMSCs. The CMC-functionalized PANi/PAN-based scaffolds could provide a suitable microenvironment to induce neurogenesis in hADMSCs in the presence of βC as a natural neural differentiation agent. The qRT-PCR and ICC results exhibited the synergic neurogenic induction effect of CMC-functionalization and βC. The neuroinductivity of the surface-immobilized CMC on the scaffolds was concentration-dependent.
As a research limitation, witnessing the complete neurogenesis of hADMSCs into mature neurons, oligodendrocytes, or astrocytes was not possible in this in vitro study, although the expression of MAP2 (a marker of mature neurons) confirmed the neurogenic induction in the cells cultured on the scaffold PANi/PAN/CMC3%+βC for 10 d. The neural differentiation could be further quantified by investigating the expression of other genes such as myelin basic protein (MBP, a marker of oligodendrocytes) and glial fibrillary acidic protein (GFAP, a marker of neural progenitor cells, NPCs, and astrocytes). This study can be further extended by an in vivo study, by exploring the mechanisms of the neurogenesis process induced via surface-immobilized CMC moieties, or by comparing the results with a generally used substrate such as fibronectin/poly-L-lysine as a positive control.
Acetate-buffered crystalloid infusate versus infusion of 0.9% saline and hemodynamic stability in patients undergoing renal transplantation
Summary Background Infusion therapy is one of the most frequently prescribed medications in hospitalized patients. Currently used crystalloid solutions have a variable composition; they may therefore influence acid-base status, intracellular and extracellular water content and plasma electrolyte composition, and have a major impact on organ function and outcome. The aim of our study was to investigate whether use of an acetate-based balanced crystalloid leads to better hemodynamic stability compared to 0.9% saline. Methods We performed a sub-analysis of a prospective, randomized, controlled trial comparing the effects of 0.9% saline and an acetate-buffered, balanced crystalloid during the perioperative period in patients with end-stage renal disease undergoing cadaveric renal transplantation. Need for catecholamine therapy and blood pressure were the primary measures. Results A total of 150 patients were included in the study, of which 76 were randomized to 0.9% saline while 74 received an acetate-buffered balanced crystalloid. Noradrenaline for cardiocirculatory support during surgery was administered significantly more often in the normal saline group, given earlier and with a higher cumulative dose compared to patients receiving an acetate-buffered balanced crystalloid (30% versus 15%, p = 0.027; 68 ± 45 min versus 75 ± 60 min, p = 0.0055; and 0.000492 ± 0.002311 µg/kg body weight/min versus 0.000107 ± 0.00039 µg/kg/min, p = 0.04, respectively). Mean minimum arterial blood pressure was significantly lower in patients randomized to 0.9% saline than in patients receiving the balanced infusion solution (57.2 [SD 8.7] versus 60.3 [SD 10.2] mm Hg, p = 0.024). Conclusion The use of an acetate-buffered, balanced infusion solution results in a reduced need for catecholamines, a lower cumulative catecholamine dose for hemodynamic support, and less arterial hypotension in the perioperative period. Further research in the field is strongly encouraged.
Introduction
Infusion therapy is one of the most frequently prescribed medications in hospitalized patients. The importance of research on fluid therapy went unrecognized for a long time, with normal saline being the most popular infusate [1][2][3][4]. During recent years, newer balanced infusates containing lactate or acetate have become increasingly popular in many European countries, whereas in the United States 0.9% saline is still the most widely used infusate in the perioperative and intensive care setting [4]. Currently used crystalloid solutions have a variable composition and may therefore influence acid-base status, intracellular and extracellular water content and plasma electrolyte composition [5], and have a major impact on organ function and outcome [6]. Despite continuing evaluation, no superiority of one particular type of fluid has been established so far [7][8][9][10]. Nonetheless, during recent years it has been shown that the theoretically more physiological balanced buffered infusion solutions may have a considerable advantage in terms of patient morbidity. A study in healthy human volunteers found a balanced chloride-reduced infusion solution to be associated with better mean renal artery flow velocity, renal cortical tissue perfusion and urine output than infusion of 2 l of 0.9% saline [11]. In a further study including elderly surgery patients, gastric mucosal perfusion was reduced in patients receiving 0.9% saline compared to those receiving chloride-reduced infusion solutions [12]. These data, together with data from rodents with experimental sepsis, which showed significantly lower mean arterial pressure levels with chloride-rich infusion solutions compared to lactated Ringer's, suggest an effect of the crystalloid fluid used on a patient's hemodynamic status [13].
In the present study we performed a sub-analysis using data from our previously performed prospective randomized controlled trial comparing an acetate-buffered balanced infusion solution with 0.9% saline in patients with end-stage renal disease receiving cadaveric renal transplantation [14]. Our aim was to investigate whether use of an acetate-buffered, chloride-reduced balanced infusion solution would result in (A) less need for catecholamine use than 0.9% saline and (B) better hemodynamic stability, as expressed by the mean arterial blood pressure.
Setting
The study was conducted at the Clinic of General Anesthesiology, Intensive Care and Pain Medicine of the Medical University of Vienna.
Patients, randomization and study fluids
All patients with end-stage renal disease undergoing cadaveric renal transplantation were included in the study. Patients younger than 18 years were excluded, as well as patients with a preoperative potassium concentration of more than 5.5 mmol/l. Enrollment started on 1 June 2010 and terminated on 28 February 2013. Prior to enrollment the study was registered at clinicaltrials.gov (NCT01075750).
Computer-based randomization was performed at the time of transfer to the preoperative care unit of the Department of Anesthesiology. Patients received either normal saline (osmolality 308 mOsm/kg, base excess -24 mmol/l, Na+ 154 mmol/L, Cl- 154 mmol/L) or a chloride-reduced, acetate-buffered balanced crystalloid (Elomel Isoton®, Fresenius Kabi Austria GmbH, Graz; osmolality 302 mOsm/kg, base excess 0 mmol/L, Na+ 140 mmol/L, K+ 5 mmol/L, Cl- 108 mmol/L, Mg++ 1.5 mmol/L, Ca++ 2.5 mmol/L, acetate 45 mmol/L). No intravenous fluid was given prior to randomization. The following data were obtained from patients: age, sex, height, actual weight, dry weight, residual daily urine output, number of prior renal transplantations, total fluid administered intraoperatively, total fluid administered at the start of catecholamines, use of catecholamines, dose of catecholamines per kg body weight per min, time on catecholamines, and cumulative vasopressor dose per kg body weight per min.
Anesthesia
Details on the anesthesia protocol are described elsewhere [14]. After intubation patients were fitted with a central venous line as well as a peripheral venous line.
Hemodynamic management
Intraoperatively, a fixed infusion rate of 4 ml per kg of ideal body weight per hour (4 ml/kg/h) was used. Hypotension was defined as a mean arterial pressure of less than 60 mm Hg. If hypotension occurred, a fluid challenge with 250 ml of infusate was conducted. Additional fluid boluses of 250 ml were administered depending on volume reactivity (increase in mean arterial pressure and/or central venous pressure). If no reaction to the volume challenge was seen, a vasopressor (either phenylephrine or etilefrine) was administered. A maximum dose of 0.1 mg of phenylephrine or 2 mg of etilefrine was given at one time. If more than ten applications of vasopressor were necessary per hour, or were likely to exceed ten applications per hour (severe refractory hypotension), noradrenaline infusion was commenced. Additionally, repeated fluid boluses were administered if considered necessary by the anesthesiologist.
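The escalation protocol above can be summarized as a small decision function. The sketch below encodes the stated thresholds (MAP < 60 mm Hg, 250 ml boluses, ten vasopressor applications per hour); the function names and return strings are our own.

```python
# Hedged sketch of the intraoperative decision logic described above.

def maintenance_rate_ml_per_h(ideal_body_weight_kg):
    return 4 * ideal_body_weight_kg             # 4 ml/kg/h maintenance infusion

def next_step(map_mmhg, volume_responsive, vasopressor_doses_last_hour):
    if map_mmhg >= 60:
        return "continue maintenance infusion"
    if volume_responsive:
        return "fluid bolus 250 ml"             # repeat while MAP/CVP respond
    if vasopressor_doses_last_hour >= 10:       # severe refractory hypotension
        return "start noradrenaline infusion"
    return "vasopressor bolus (phenylephrine <= 0.1 mg or etilefrine <= 2 mg)"

print(maintenance_rate_ml_per_h(70))            # 280 ml/h
print(next_step(55, False, 10))                 # start noradrenaline infusion
```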
Ethics
The study was approved by the local institutional review board (EK 1048/2009 Oct 2009 and EK 1828/2014 Oct 2014, Chairman Prof. E. Singer), of the Medical University of Vienna, Austria, and registered at a clinical trials registry (NCT01075750). Written informed consent was obtained from every patient included in the study.
Statistics
The sample size calculation for the original study is described in detail elsewhere [14]. Statistical analysis was performed with SPSS version 17.0 (Chicago, IL). The distribution of interval variables was assessed using normal plots. Interval variables with a normal distribution are presented as means ± standard deviation (SD). Non-normally distributed interval variables and ordinal variables are presented as medians with interquartile ranges (IR). Comparisons of normally distributed interval variables between the saline group and the acetate-buffered balanced crystalloid group were performed using Student's t-test. Comparisons of non-normally distributed interval variables and ordinal variables between the saline group and the acetate-buffered balanced crystalloid group were performed using non-parametric tests. In order to test whether mean arterial blood pressure differed between the saline group and the acetate-buffered balanced crystalloid group, we used a generalized estimating equation assuming a normal probability distribution and a first-order exponential correlation matrix for repeated observations in one patient. In order to test for differences in the incidence of vasopressor administration between the two study groups, we used a log-rank test and a Kaplan-Meier plot for visualization. In order to test whether vasopressor use was anteceded by hypotension, we used a Cox regression with vasopressor use as the dependent variable and mean arterial blood pressure as a time-varying predictor variable. For all analyses statistical significance was defined by a two-sided P < 0.05.
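As an illustration of the log-rank comparison of time to vasopressor use, the following sketch uses the lifelines library on synthetic durations and event indicators; none of the numbers are trial data.

```python
import numpy as np
from lifelines.statistics import logrank_test

# Synthetic placeholders: minutes until noradrenaline (or end of surgery,
# censored) and event flags for two hypothetical study arms.
rng = np.random.default_rng(0)
t_saline   = rng.exponential(90, 40);  e_saline   = rng.random(40) < 0.30
t_balanced = rng.exponential(120, 40); e_balanced = rng.random(40) < 0.15

result = logrank_test(t_saline, t_balanced,
                      event_observed_A=e_saline,
                      event_observed_B=e_balanced)
print(f"log-rank p = {result.p_value:.3f}")
```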
Results
A total of 150 patients were included in the study; 76 patients were randomized to normal saline and 74 patients to an acetate-buffered balanced crystalloid. No differences between the groups were found for the following parameters: age, sex, height, actual and dry weight, residual daily urine output, and number of prior renal transplantations. For an overview of baseline characteristics see Table 1. The CONSORT flow chart is given in Fig. 1.
In the 0.9% saline group the mean volume of fluid received during surgery was 1691 ± 664 ml, compared to 1798 ± 679 ml in the balanced acetate-based infusate group (p = 0.34). Noradrenaline for cardiocirculatory support during surgery was administered significantly more often in the normal saline group than in patients receiving an acetate-buffered balanced crystalloid (30% versus 15%, p = 0.027). Patients receiving normal saline needed noradrenaline significantly earlier, and their cumulative noradrenaline dose was significantly higher, than patients receiving an acetate-based crystalloid solution (68 ± 45 min versus 75 ± 60 min, p = 0.0055 and 0.000492 ± 0.002311 µg/kg body weight/min versus 0.000107 ± 0.00039 µg/kg/min, p = 0.04, respectively). No difference in the necessity for vasopressors or the cumulative dose of vasopressors was seen between the groups (p = 0.47 and p = 0.08, respectively) (see Table 2; Fig. 2).
Mean minimum arterial blood pressure was significantly lower in patients randomized to 0.9% saline than in patients receiving the balanced infusion solution (57.2 [SD 8.7] versus 60.3 [SD 10.2] mm Hg, p = 0.024). Blood pressure levels over the course of surgery were also significantly lower in patients randomized to 0.9% saline. Patients randomized to 0.9% saline over time had a mean arterial blood pressure that was 4 mm Hg lower than in patients receiving an acetate-buffered balanced solution during the whole surgery (95% CI: 1-7 mm Hg; p = 0.009). Fig. 3 gives mean and systolic blood pressure levels during surgery for patients receiving either 0.9% saline or an acetate-buffered balanced infusion solution. The Cox regression analysis showed that patients receiving catecholamines had significantly lower blood pressure levels before administration of catecholamines than patients without catecholamines (see Fig. 4). There was no difference in heart rate (normal saline 73 ± 12.6 beats per minute versus acetate-buffered solution 74 ± 12.2 beats per minute, p = 0.66) or central venous pressure (normal saline 9 ± 10 mmH2O, acetate-buffered solution 8 ± 5 mmH2O, p = 0.34) between the groups. Peak chloride levels were significantly higher in patients randomized to the normal saline group. Acid-base disturbances are described in more detail elsewhere [14].
Discussion
These are the first data from a prospective, randomized, controlled study comparing the effects of 0.9% saline and an acetate-buffered balanced crystalloid solution on hemodynamics in the perioperative period. We showed that perioperative use of a balanced crystalloid as a maintenance fluid is associated with better hemodynamic stability in patients with end-stage renal disease receiving cadaveric renal transplantation. These findings are expressed by a significantly lower need for catecholamines for hemodynamic support and higher systolic and mean arterial blood pressure levels.
The strength of the current study is explained by the study collective: all patients had end-stage renal disease and received cadaveric renal transplantation. This collective of patients is especially vulnerable to effects of infusion solutions, since they lack, to a variable degree, the capacity of the kidneys to rapidly compensate for electrolyte and acid-base derangements. Theoretically, this makes even slight effects of the infusion solution apparent that would not be seen in patients with normal kidney function, or only after administration of far larger volumes of fluid.
Fig. 4 Kaplan-Meier analysis: probability of needing norepinephrine
We found a significantly lower need for catecholamines, a longer time to catecholamines and a lower cumulative catecholamine dose in the acetate-buffered group. The significantly better hemodynamic stability in patients receiving a balanced acetate-based infusate compared to normal saline might be related to (A) acetate itself and (B) hyperchloremia induced by normal saline.
So far no study has compared the hemodynamic effects of normal saline to an acetate-buffered balanced crystalloid; however, there is some older evidence that the use of sodium acetate may have an influence on the cardiovascular system. In 1978 Liang and Lowenstein infused acetate and pyruvate into anesthetized dogs to measure their impact on circulation [15]. They found that increased acetate levels were associated with a significant increase in cardiac output [15]. Even though myocardial oxygen consumption increased during acetate infusion, the decrease in myocardial oxygen uptake and the increase in coronary sinus blood oxygen saturation suggest that an active coronary vasodilation, which was not a result of the increased cardiac work, takes place [15]. Acetate infusion also increased blood flow to the gastrointestinal tract, kidneys, intercostal muscles and diaphragm [15]. In another study by Conahan et al. on resuscitation fluid composition and myocardial performance during burn shock in guinea pigs, the authors were able to show that treatment with Ringer's acetate significantly improved cardiac output and contractility when compared to normal saline and Ringer's lactate [16]. In the same study Ringer's acetate was found to have better blood pressure stabilizing effects than Ringer's lactate and normal saline [16]. Concerning blood pressure stabilization, three studies on sodium acetate found that such an infusion may have a positive effect on cardiac output but lowers peripheral vascular resistance [17][18][19], a finding that we cannot confirm; however, it has to be borne in mind that data on acetate and acetate-buffered crystalloid solutions and hemodynamics remain scarce and are mostly experimental. There is a lack of randomized controlled trials in humans that compare currently available crystalloid solutions with respect to hemodynamics, and further research should certainly be encouraged.
In 2004 the study group of John Kellum showed in a model of experimental sepsis that infusion of chloride-rich crystalloids led to a decrease in arterial blood pressure in rodents compared to infusion of lactated Ringer's [13]. In another crossover study in 12 healthy volunteers, Chowdhury et al. found that a colloid embedded in a balanced crystalloid solution led to increased renal cortical tissue perfusion, while the same concentration of colloid embedded in 0.9% saline did not result in improved renal perfusion [20]. A similar trial by this group, in which 2 l of 0.9% saline or a chloride-reduced, balanced crystalloid was administered to healthy volunteers, showed that administration of 0.9% saline even resulted in a decline in renal blood flow velocity and renal cortical tissue perfusion [11]. Given this evidence, together with other recent studies on this subject [2,21,22], it is plausible that hyperchloremia induced by infusion of chloride-rich solutions is the direct trigger for the unfavorable hemodynamic effects seen in our current study. Only recently, large-scale studies and meta-analyses showed that use of chloride-rich infusion solutions might be associated with adverse outcomes [2,23,24]; however, a Cochrane review from 2012 still came to the conclusion that although balanced crystalloids are associated with less occurrence of hyperchloremia and concurrent metabolic acidosis, use of conventional solutions (i.e., 0.9% saline) can be considered safe in the perioperative period [25]. Additionally, a recently published randomized controlled trial comparing 0.9% saline to an acetate-buffered crystalloid solution found no differences in terms of acute kidney failure and renal replacement therapy [26]. Given the current data together with previously published studies in the field, the need for prospective randomized controlled trials is obvious. While studies comparing the currently available crystalloid solutions with mortality as the primary endpoint will be hard to perform due to the large patient numbers needed and probable issues with financing such a study, it seems feasible to perform studies focusing on patient hemodynamics or renal function.
Limitations
Our study has several limitations. The major drawback of this study is the lack of invasive hemodynamic monitoring such as esophageal Doppler, pulmonary artery catheter or thermodilution techniques. Additionally, blood pressure was monitored non-invasively, which was done in order to preserve blood vessels in patients potentially in need of dialysis shunts in the future. It also has to be borne in mind that the original study was designed to investigate incidence rates of perioperative hyperkalemia and other electrolyte and acid-base disorders, not hemodynamics. Therefore, hemodynamic management followed hospital standard operating protocols (SOPs) and not a hemodynamic protocol specially designed for the present study. Additionally, the study was not blinded, so investigator bias is possible but not very likely, as hemodynamics were not an outcome of the first study and hospital SOPs were followed. Nonetheless, the results of the present study should be interpreted with some caution. We see our results as a first indication that the choice of crystalloid may impact hemodynamic stability, but certainly further research needs to be done before an overall conclusion can be reached.
Conclusion
In conclusion, the results of this prospective, randomized, controlled trial suggest that use of an acetate-buffered, balanced infusion solution results in less occurrence of arterial hypotension and a reduced need for catecholamines and cumulative catecholamine dose for hemodynamic support in the perioperative period. The more favorable hemodynamic outcome of patients receiving an acetate-buffered crystalloid solution may be attributed to increased cardiac output related to acetate as well as a lower susceptibility to hyperchloremia and concurrent metabolic acidosis when compared to normal saline. Future randomized, controlled trials are certainly desirable to clarify the effects of current crystalloid infusates on hemodynamics in the perioperative field and the critically ill.
Application of Raman spectroscopy in Andrology: non-invasive analysis of tissue and single cell
As a fast, label-free and non-invasive detection method, Raman spectroscopy has been widely used for the interrogation of biological tissues; alterations in molecular structure and chemical composition during pathological processes can be identified and revealed via differences in the Raman spectrum. In the clinic, Raman spectroscopy has great potential to provide real-time scanning of living tissues and fast diagnosis of diseases, such as the discrimination of various carcinomas. A portable Raman spectroscopy system combining a Raman spectrometer with an optical fiber probe has also been developed and proved able to provide intraoperative assistance in both human studies and animal models. In Andrology, interest in Raman spectroscopy has only recently emerged. In this review, we summarize the progress in the utility of Raman spectroscopy in Andrology; the literature was gathered from the PubMed and Ovid databases using MeSH terms associated with prostate, testis, seminal plasma and single sperm cells. We also highlight the serious challenges that remain before the final clinical application of the Raman technique. In conclusion, research in Raman spectroscopy may herald a new era for Andrology.
Introduction
The famous quote by Sydney Brenner, "Progress in science relies on new techniques, new discoveries and new ideas, probably in that order," implies the significance of introducing new technology or research tools for scientific progress (1): for instance, X-rays for body scanning, the advent of endoscopic surgery, and microarrays for DNA sequencing analysis. Raman spectroscopy is an optical technology that relies on the principle of inelastic scattering of light photons by biological tissues. When photons of light interact with the molecules in a material, different light-scattering patterns arise. In most cases, the emitted photons keep the same energy and wavelength as the incident light (Rayleigh scattering), but occasionally they donate or receive energy through molecular interactions, a process known as inelastic scattering or "Raman shift" (2). In 1930, the Indian physicist C.V. Raman was awarded the Nobel Prize for his contribution to discovering the Raman effect, which lays the foundation of Raman spectroscopy (3).
Raman frequency shifts are conventionally measured in wavenumbers (cm^-1) (4); depending on the atomic masses and molecular bonds of the specimen, all the chemical information involved is presented in a Raman spectrum that can be further interpreted and analyzed using statistical, chemical, and morphological methods. Raman enables subtle analysis of molecular structure and biological composition, and as a non-invasive, non-destructive and even non-contact detection method, it has been employed in various medical fields. Today, four main types of Raman spectroscopy are in use: resonance Raman spectroscopy (RRS), coherent anti-Stokes Raman spectroscopy (CARS), stimulated Raman spectroscopy (SRS) and surface-enhanced Raman scattering (SERS) (5). Recently, a new technique called the Raman optical tweezer has emerged; it combines Raman spectroscopy with an optical tweezer, which facilitates the trapping and analysis of micrometer-sized particles (6). The Raman optical tweezer levitates the trapped cells above the substrate to reduce fluorescence effects and the Brownian motion from untrapped cells.
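In this convention, the Raman shift is the difference between the reciprocal wavelengths of the incident and scattered light. A minimal sketch of the conversion follows, using an illustrative 785 nm excitation.

```python
# Raman shift (cm^-1) from incident and scattered wavelengths in nm.

def raman_shift_cm1(lambda_incident_nm, lambda_scattered_nm):
    # 1e7 converts reciprocal nanometers to reciprocal centimeters.
    return 1e7 / lambda_incident_nm - 1e7 / lambda_scattered_nm

# A 785 nm laser photon scattered at 892 nm corresponds to ~1528 cm^-1.
print(f"{raman_shift_cm1(785.0, 892.0):.0f} cm^-1")
```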
The application of Raman spectroscopy in human studies is marked by its capacity to distinguish normal and pathological tissues. Skin is one of the human tissues most studied in the early days of Raman spectroscopy; Emma et al. [1977] first examined skin lesions and compared their Raman spectra with those of normal skin (7). In the ensuing years, numerous promising results were witnessed, including the detection of different parts of in vivo and in vitro skin, identification of the skin pigment melanin, and discrimination of skin cancers (8)(9)(10)(11). Raman spectroscopy also contributes to the diagnosis and even prognostic evaluation of cancers of the breast, lung, stomach, cervix uteri, brain, larynx, etc. (12)(13)(14)(15)(16)(17).
In this review, we summarize the advances in the utility of Raman spectroscopy in Andrology; articles pertaining to the mechanism of Raman spectroscopy and its applications were gathered from the PubMed and Ovid databases using MeSH terms associated with prostate, testis, seminal plasma and single sperm cells (Figure 1). We also report on the feasibility of Raman spectroscopy in real-time clinical practice.
Prostate
Special interest in the use of Raman spectroscopy in Urology has emerged recently. It has been used in vitro to detect renal and bladder cancers, identify renal lithiasis, study different layers of the bladder wall, and distinguish malignant cells in the urinary system (18)(19)(20)(21). In an article by Amos Shapiro and his colleagues, the Raman system discovered a distinct peak at a 1,584 cm^-1 wave shift that separated benign from different grades of malignant bladder tissue, with an excellent sensitivity of 92% and specificity of 91% (22). Here, we focus on the field of Andrology: the prostate and testis.
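Sensitivity and specificity figures like these come directly from a confusion matrix of spectroscopy-based calls against histopathology. The sketch below shows the computation on invented counts, not the data of the cited study.

```python
# Diagnostic performance from a confusion matrix (counts are invented).

def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # fraction of malignant samples detected
    specificity = tn / (tn + fp)   # fraction of benign samples cleared
    return sensitivity, specificity

sens, spec = diagnostic_metrics(tp=46, fn=4, tn=45, fp=5)
print(f"sensitivity {sens:.0%}, specificity {spec:.0%}")   # 92%, 90%
```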
In the prostate, Raman spectroscopy is mostly used to differentiate benign prostatic hypertrophy (BPH) and prostate adenocarcinoma (CaP). Crow et al. were the first to record the Raman signal of prostatic tissue, and successfully discovered variations in glycogen and nucleic acid content between BPH and CaP (23). The promising results propelled them to further examine three different grades of adenocarcinoma as classified by Gleason score: GS <7, GS =7, and GS >7 (24). By constructing a diagnostic algorithm, the Raman spectroscopy was able to distinguish each pathological type with an overall accuracy of 89%. Given its high sensitivity and specificity, they suggested that Raman spectroscopy has the potential to guide prostate biopsy in vivo and help with the determination of tumor resection margins during radical prostatectomy. A meaningful study using Raman spectroscopy was conducted by Patel et al.
[2011] (25), who compared benign prostatic tissue between populations at high risk (UK) and low risk (India) of CaP, attempting to figure out the biological foundations for their different susceptibilities to disease. The results indicated secondary protein structure variations (involving amide I/II: 1,582, 1,551, 1,667, 1,541 cm^-1 in glandular epithelium; amide I/II: 1,663, 1,624, 1,761, 1,782, 1,497 cm^-1 for adjacent stroma) as pivotal biomolecular markers segregating the two cohorts. Raman spectroscopy was also employed to study the relationship between the spectral signal of Gleason 7 prostate cancers and their prognosis after radical prostatectomy, as well as the diagnosis and prognosis of castration-resistant prostate cancer (26).
Different parts of the prostate were also examined. Patel and Martin [2010] used Raman spectroscopy to interrogate normal, cancer-free prostate zones, trying to explore the underlying reasons for the zone-specific susceptibility to pathology, as CaP mainly arises in the peripheral zone (PZ), prostatic hypertrophy (BPH) arises in the transition zone (TZ), while the central zone (CZ) has a relative immunity to disease (27). The results highlighted 781 cm^-1 (cytosine/uracil) and 787 cm^-1 (DNA) as the key factors differentiating PZ from TZ and CZ epithelia, and identified 1,459 cm^-1 (lipids and proteins) and 1,003 cm^-1 (phenylalanine) as the determinant features discriminating TZ from CZ epithelia. The authors supposed that the increased amounts of nucleic acid in PZ were probably due to differential gene expression profiles or DNA adduct formation involving phase I/II metabolising enzymes capable of activating carcinogens, while the elevated levels of proteins and lipids in TZ could be explained by variations in hormones like testosterone, dihydrotestosterone or epidermal growth factors leading to BPH and TZ CaP. In that study, they also examined the stromal zones against the glandular elements, as characterized by more protein/lipid (1,459 and 1,100 cm^-1) and less DNA/RNA (781 and 787 cm^-1).
PNT1A (an immortalized normal prostate cell line) and LNCaP (a malignant cell line derived from prostate metastases) were mapped using Raman spectroscopy by Taleb et al. (28). Applying the spectral processing methods of partial least-squares discriminant analysis (PLS-DA) and adjacent band ratios (ABRs), the two cell lines were perfectly distinguished from each other: bands indicating the A-DNA/RNA conformation (711, 813, 1,100, 1,243, and 1,572 cm^-1) were mainly found in LNCaP, while bands corresponding to the B-DNA conformation (733, 798, and 1,091 cm^-1) were more intense in PNT1A. Crow et al. compared two well-differentiated, androgen-sensitive cell lines (LNCaP and PCa 2b) with two poorly differentiated, androgen-insensitive cell lines (DU145 and PC 3), and demonstrated that Raman was able to identify CaP samples of varying biological aggressiveness with an overall sensitivity of 98% and a specificity of 99% (29). Other work in this area involved the detection of complexed forms of prostate-specific antigen (PSA) in blood samples using SERS; this low-level detection of the biomarker could be used for diagnosis and early-stage prediction of prostate cancer (30).
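PLS-DA is typically implemented as a PLS regression onto class labels followed by thresholding. The sketch below illustrates this on synthetic "spectra" with scikit-learn; it is not a reconstruction of the cited analysis.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# Minimal PLS-DA sketch on synthetic spectra: one class gets an extra
# band so the latent components can separate the two groups.
rng = np.random.default_rng(1)
X = rng.normal(size=(40, 600))        # 40 spectra x 600 wavenumber bins
X[:20, 300] += 2.0                    # class-specific band in the first group
y = np.array([1] * 20 + [0] * 20)     # 1 = malignant-like, 0 = normal-like

pls = PLSRegression(n_components=2).fit(X, y)
pred = (pls.predict(X).ravel() > 0.5).astype(int)
print(f"training accuracy: {(pred == y).mean():.2f}")
```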
Testis
In comparison to other tissues, the use of Raman spectroscopy has lagged behind for the interrogation of testicular function and composition, probably because of the scarcity of testicular samples available. De Jong et al. [2004] first reported research on the testis: they performed Raman mapping of frozen sections of testicular microlithiasis and demonstrated that the microliths were composed of hydroxyapatite and were located within the seminiferous tubules (31). When microliths were found surrounded by glycogen, the testicular tissues were usually associated with malignant germ-cell neoplasms. Raman spectroscopy might also be used to evaluate intact human seminiferous tubules, provided that spectral differences between tubules with complete and incomplete spermatogenesis can be found.
Seminal plasma
Actually, Raman spectroscopic analysis of seminal plasma was initially carried out for forensic purposes rather than out of concern for male fertility, since precise determination and identification of body fluids at a crime scene would certainly benefit forensic investigations (32)(33)(34). Lednev et al. reported three principal components that satisfactorily represented the Raman spectra of semen, involving tyrosine, proteins (amide I/III, albumin, choline) and spermine phosphate hexahydrate (SPH). The combination of the three spectral components can be used to identify an unknown fluid as semen, and even semen from different species can be differentiated; for instance, the peaks at 536, 829, and 1,452 cm^-1 were stronger in human semen than in canine semen, and the 1,342 and 1,004 cm^-1 peaks were found exclusively in the human samples. However, there were no significant differences in the Raman spectra of semen acquired from different human donors.
Since seminal plasma is composed of the fluids secreted by the seminal vesicles, prostate, and other reproductive glands, it provides the nutritive and protective environment for sperm and is related to semen quality. Huang et al. examined the Raman spectra of seminal plasma from normal and abnormal semen samples, and found that the two could be clearly distinguished according to the peak ratio between 1,449 and 1,418 cm^-1 (35). The left-handed polarized SERS spectroscopy delivered the best diagnostic result, with a sensitivity of 95.8% and a specificity of 64.9% (36).
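A band-intensity ratio such as I(1,449)/I(1,418) can be read off a spectrum with a few lines of code. In the sketch below, the spectrum is synthetic and the band-window width is our assumption.

```python
import numpy as np

# Synthetic spectrum with two Gaussian bands near 1,449 and 1,418 cm^-1.
wavenumbers = np.linspace(400, 1800, 1400)
spectrum = (np.exp(-((wavenumbers - 1449) / 8) ** 2) * 1.3
            + np.exp(-((wavenumbers - 1418) / 8) ** 2) * 1.0)

def band_intensity(wn, spec, center, half_width=5.0):
    """Peak intensity within +/- half_width cm^-1 of a band center."""
    mask = np.abs(wn - center) <= half_width
    return spec[mask].max()

ratio = (band_intensity(wavenumbers, spectrum, 1449)
         / band_intensity(wavenumbers, spectrum, 1418))
print(f"I(1449)/I(1418) = {ratio:.2f}")
```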
Sperm
Kubasek et al. [1986] evaluated the DNA of salmon sperm and found it had a B-type conformation very similar to that of a synthesized model B-DNA oligomer (37). This was the first attempt to examine sperm by Raman spectroscopy. Since then, however, no other Raman investigations of sperm were reported until, very recently, scientists tried to find a non-invasive method to evaluate human sperm. Several approaches have been described regarding the study of human sperm by Raman spectroscopy (38)(39)(40): semen samples were generally washed and smeared directly onto quartz coverslips and then examined by Raman spectroscopy, except for one case in which the sperm membranes and tails were removed in order to obtain spectra of the pure nuclear DNA-protein complex (41). Though these studies reported some similar results, fundamental discrepancies existed. Huser et al. reported that the 785 cm-1 peak strength was indicative of the efficiency of the DNA packaging process, and that variations in the ratio of 785/1,092 cm-1 or 1,442/1,092 cm-1 could be employed to discriminate sperm with normal, pear-shaped, small and double heads, a relationship that was not found in other studies. Besides, the Raman spectra in that study showed almost perfect similarity across the tail, mid-section and acrosomal region, a finding repeated in only one article, by our team, in which Raman spectroscopy was used to distinguish sperm that can bind to the zona pellucida (40) (Figure 2). We found two regions (800-900 and 3,200-4,000 cm-1) that were elevated at the acrosome region of ZP-bound sperm compared with unbound sperm. Since the ZP naturally selects sperm of normal DNA integrity and morphology, ZP-bound sperm are more likely to fertilize oocytes than ZP-unbound sperm; using Raman spectroscopy to distinguish ZP-bound sperm could therefore, in theory, enhance the outcomes of ICSI. In that article, we also described a novel scanning method for a single sperm cell according to its anatomic structure (Figure 3). In other studies, however, distinct Raman spectra of different parts of sperm were demonstrated: for example, Meister et al. found that the sperm nucleus was dominated by nucleic acid bands around 788 cm-1, while the middle piece was represented by the breathing mode of the adenine and guanine bases near 1,575 cm-1. Even for the same sperm regions, consensus between studies was not reached; for example, Mallidis et al. described a unique 1,092 cm-1 peak in the distal head segment, indicating the PO4 backbone of DNA. These discrepancies are probably due to the different Raman systems used, including different laser wavelengths, power, and scanning times, and the Raman system should be calibrated before each use. Fortunately, Raman spectroscopy has been demonstrated to successfully detect sperm DNA damage, mitochondrial status, and fertilization potential. Our group has also used it to distinguish Sertoli cells from the testes of obstructive azoospermia and non-obstructive azoospermia (42).
Beyond single-point analysis, confocal Raman microspectroscopy (CRM), which combines Raman spectroscopy with a confocal optical microscope, allows chemical maps of single sperm cells to be constructed, with locations sharing the same spectral feature assigned a specific color (38,39). Raman imaging enables unambiguous identification of different parts of the sperm; even very small irregularities can be distinguished, such as vacuoles in the sperm head. In a recent study by Huang et al. (43), using a custom-written image analysis system, Raman spectroscopy was able to identify normal sperm with both morphological and biochemical information, and the intensity ratio of 1,055 to 1,095 cm-1 emerged as a potential biomarker for assessing sperm DNA integrity.
Further directions
Further application of Raman spectroscopy as a feasible clinical tool still requires additional improvements in the technology. Considering the sheer size of Raman spectrometers, specially designed optical fiber probes need to be developed for intraoperative assistance; such probes should be able to be incorporated into catheters, laparoscopes, endoscopes, or cannulas. Encouragingly, progress has already been made. Huang et al. developed a Raman endoscopy system by inserting a 1.8 mm Raman endoscopic probe into the working channel of an endoscope during gastroscopy, with which they gathered a total of 2,748 in vivo gastric tissue spectra (2,465 normal and 283 cancer) from 305 patients and developed a diagnostic algorithm with a predictive accuracy of 80.0%, sensitivity of 90.0% and specificity of 73.3% (44). They also applied this technique to assess the Raman spectral properties of nasopharyngeal and laryngeal tissues through transnasal, image-guided Raman endoscopy (17) (Figure 4A,B). Mohs et al. developed a hand-held spectroscopic pen device called the SpectroPen (45) (Figure 4C,D). In in vivo studies using a mouse model bearing breast tumors, the pen device was able to precisely detect tumor margins preoperatively and intraoperatively, and was demonstrated to have almost the same signal-to-noise ratio, spectral resolution, and accuracy as conventional Raman spectroscopy. Also, in the report by Mosier-Boss et al., two commercially available, portable Raman systems were evaluated (46).
A serious issue concerns the safety of Raman spectroscopy. Though many studies describe the Raman laser as a non-invasive detection method, almost none of them examined its safety, such as possible DNA damage caused by the laser in tissues or cells. Taking sperm as an example, the aim of our Raman studies on single sperm cells is to identify morphologically normal sperm with intact DNA for infertile patients and to use them for intracytoplasmic sperm injection (ICSI). If Raman spectroscopy were to damage sperm DNA integrity or cause other subtle changes in chromosome structure leading to fertilization failure, abortion, or unpredictable congenital diseases in infants, the harm would be on our conscience. Thus, additional investigations are required before its final application in the clinic. Finally, it is often hard to establish standard criteria for Raman spectral signals, as illustrated by the discrepancies among the sperm studies. The outcomes of Raman scanning are vulnerable to various external influences, including the working parameters of the Raman system and the methods by which samples are prepared; even the ambient temperature and humidity around the spectrometer can affect the outcomes.
Though these limitations need to be addressed seriously, Raman spectroscopy still has great potential to serve as a clinical tool for real-time scanning of living tissues in vivo and for fast diagnosis of disease. Use of Raman spectroscopy avoids unnecessary biopsies and delivers immediate pathological diagnosis during real-time intraoperative cancer resections. In the future, Raman spectroscopy could also help guide testicular biopsy, raise sperm retrieval rates, and reduce postoperative complications.
Conclusions
As a novel technique, Raman spectroscopy provides fast, convenient and non-invasive interrogation of biological tissues and reveals changes in molecular structure and chemical composition. Application of Raman spectroscopy to the real-time detection of living tissues and the diagnosis of disease offers potential intraoperative assistance, but still requires substantial technical improvement and strict safety investigation. Future research in Raman spectroscopy may herald a new era for andrology.
Approach of Solving Multi-objective Programming Problem by Means of Probability Theory and Uniform Experimental Design
INTRODUCTION
Multi-objective programming (MOP) is a branch of mathematical programming that studies the optimization of more than one objective function [1]. The idea of multi-objective programming sprouted in 1776 in the study of utility theory in economics. In 1896, the economist Pareto first posed the multi-objective programming problem in the study of economic equilibrium and gave a simple idea that was later called the Pareto optimal solution. In 1947, von Neumann and Morgenstern mentioned the multi-objective programming problem in their work on game theory, which attracted much more attention to the problem. In 1951, Koopmans proposed the multi-objective optimization problem in the activity analysis of production and distribution, and introduced the concept of the Pareto optimal solution for the first time. In the same year, Kuhn and Tucker gave the concept of the Pareto optimal solution of the vector extremum problem from the perspective of mathematical programming and studied necessary and sufficient conditions for the existence of such solutions. Debreu's discussion of valuation equilibrium in 1954 and Hurwicz's research on multi-objective optimization problems in topological vector spaces in 1958 laid the foundation for the establishment of this discipline. In 1968, Johnsen published the first monograph on multi-objective decision-making models. By the 1970s and 1980s, the basic theory of multi-objective programming was finally established through the efforts of many scholars, making it a new branch of applied mathematics [2].
There are generally two families of methods for solving multi-objective programming. One transforms the multiple objectives into a single objective that is easier to solve, e.g., the main objective method, the linear weighting method, and the ideal point method. The other is the hierarchical sequence method: the targets are ordered according to their importance, and each time the optimal solution for the next target is sought within the optimal solution set of the previous target, until the common optimal solution is obtained.
The main target method takes a certain f_1(x) as the main target, and the other p − 1 objectives as non-main targets; the main target is then maximized while the remaining targets are required to meet certain conditions. The linear weighting method assigns a weight coefficient ω_j to each of the objective functions f_1(x), f_2(x), ..., f_p(x) and performs a linear weighted sum to obtain a new evaluation function f(x) = Σ_{j=1}^{p} ω_j f_j(x); the multi-objective problem then becomes a single-objective problem, although normalization is required when the dimensions differ. For a linear programming problem with multiple objectives, the decision maker hopes to achieve these goals in turn under the constraints by minimizing the total deviation from the target values, which is the problem addressed by goal programming [1]. In practical engineering systems, such as the many nonlinear, multi-variable, multi-constraint and multi-objective optimization problems in power systems, existing mathematical methods have limited ability to optimize these problems, and the obtained solutions are not satisfactory [2].
The above discussion indicates that normalization and the introduction of subjective factors are indispensable in the above "additive" algorithms for transferring diverse criteria into a "unique criterion", and the final result depends significantly on the normalization process [3]. Different normalization methods can yield completely different results. Besides, beneficial and unbeneficial performance indexes are treated in non-equivalent or inconsistent manners in some algorithms. In addition, from the viewpoint of set theory, the "additive" algorithm in multi-objective optimization corresponds to the form of a "union". The above algorithms can therefore be seen as semi-quantitative approaches in some sense.
Recently, a probability-based method for multi-objective optimization (PMOO) was proposed to resolve the intrinsic problems of subjective factors in previous multi-objective optimizations [3][4][5]. A brand-new idea of favorable probability was proposed to reflect the favorable degree of a performance index in the optimization. The PMOO treats the simultaneous optimization of multiple objectives from the viewpoint of probability theory. In this methodology, all performance utility indicators of the alternatives are preliminarily divided into two types, i.e., beneficial or unbeneficial, according to their functions and preferences in the optimization; each performance utility indicator of an alternative contributes a partial favorable probability quantitatively. Moreover, the product of all partial favorable probabilities gives the total favorable probability of an alternative, which thus transfers the multi-objective optimization problem into a single-objective one rationally. This paper formulates a rational approach to multi-objective programming by means of probability theory, discrete uniform experimental design, and a sequential algorithm for optimization. Examples illustrating this approach are also given.
NEW APPROACH OF SOLVING MULTI-OBJECTIVE PROGRAMMING PROBLEM
The rational approach to multi-objective programming combines probability theory, discrete uniform experimental design, and a sequential algorithm for optimization in an integrated way.
The probability-based method for multi-objective optimization is used to convert the multi-objective optimization problem into a single-objective one from the viewpoint of probability theory.
Discrete uniform experimental design is used to supply efficient sampling to simplify the conversion, which is especially important when the goal functions in the multi-objective programming problem are continuous. The sequential algorithm for optimization is employed to carry out further optimization.
Probability Theory Based Treatment
From the viewpoint of probability theory, the event "simultaneous optimization of multiple objectives" corresponds to the product of the individual objectives (events). Therefore, the usual term "the higher the better" for the utility index of a performance indicator needs to be expressed quantitatively in terms of probability theory, which motivates the search for a proper quantitative expression of "the higher the better". The brand-new idea of "favorable probability" was proposed in [3][4][5] to interpret the preference degree of a candidate in the selection, i.e., the term "favorable probability" characterizes the preference degree of the utility index of a performance indicator quantitatively in the optimization.
In the multi-objective programming problem, each goal is indeed an objective of the PMOO. All performance utility indicators of the alternatives are preliminarily divided into two types, i.e., beneficial or unbeneficial, according to their functions and preferences in the optimization; thus, the subsequent PMOO process can be employed rationally.
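A plausible form of Eqs. (1) and (2), following the standard PMOO formulation in [3][4][5] and the variable definitions given below, is (this is a reconstruction, not a verbatim quotation of the original equations):

```latex
% Eq. (1): partial favorable probability of a beneficial utility index
P_{ij} = \alpha_j X_{ij}, \qquad \alpha_j = \frac{1}{m\,\overline{X}_j}
% Eq. (2): partial favorable probability of an unbeneficial utility index
P_{ij} = \beta_j \left( X_{j\max} + X_{j\min} - X_{ij} \right), \qquad
\beta_j = \frac{1}{m \left( X_{j\max} + X_{j\min} - \overline{X}_j \right)}
```

With these normalizations, the partial favorable probabilities over all m alternatives sum to one for each indicator j.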
In Eqs. (1) and (2), X_ij is the value of the utility index of the performance indicator; n is the number of performance indicators; m is the number of alternatives in the evaluation. X̄_j represents the arithmetic average of the utility index values X_ij over index i for a specific j; X_jmax and X_jmin indicate the maximum and minimum values of X_ij over index i for a specific j, respectively.
Moreover, the total / overall favorable probability of an alternative is written as the product of its partial favorable probabilities, P_i = Π_{j=1}^{n} P_ij (Eq. (3)). Eq. (3) thus transfers the multi-objective optimization problem into a single-objective one from the viewpoint of probability theory, enabling the rational simultaneous optimization of multiple objectives.
Discrete Uniform Experimental Design and Sequential Algorithm for Optimization
Since the goal functions in a multi-objective programming problem are usually continuous, discretization can be used to simplify the treatment.
As stated in [6], the methodologies of good lattice points (GLP) and uniform experimental design (UED) make the discretization possible and practical. The methodologies of GLP and UED are based on number theory and can supply an effective assessment of a definite integral with finitely many sampling points [6,7]. The finite sampling points are uniformly distributed within the integration domain with low discrepancy [8,9]. This property of the uniformly distributed point set makes convergence much faster than Monte Carlo sampling [8,9], which makes it a very good algorithm for approximate calculation, commonly known as the quasi-Monte Carlo method. Fang specially developed uniform design and uniform design tables for the proper use of UED [10]. Sequential uniform design, or the sequential algorithm for optimization (SNTO), can be used to carry out further optimization of the multi-objective programming problem owing to its similarity to the multi-objective optimization problem [6,8].
Finally, the multi-objective programming problem is solved by combining the probability-based multi-objective optimization with discrete uniform experimental design in a straightforward way.
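For readers who want to reproduce the sampling step, the sketch below generates a good-lattice-point set and maps it onto a rectangular search domain. The centred-lattice formula and the generating vector are illustrative assumptions; in practice, generating vectors are taken from published uniform design tables such as Fang's [10].

```python
import numpy as np

def glp_points(n, h):
    """Good-lattice-point set: n points in [0,1)^s from generating vector h.

    Centred variant: x_kj = ((k * h_j) mod n + 0.5) / n, k = 0..n-1.
    Each component of h should be coprime with n.
    """
    h = np.asarray(h)
    k = np.arange(n).reshape(-1, 1)
    return ((k * h) % n + 0.5) / n

def scale_to_domain(u, lower, upper):
    """Map unit-cube points onto a rectangular search domain."""
    lower, upper = np.asarray(lower), np.asarray(upper)
    return lower + u * (upper - lower)

# Illustrative generating vector for 24 points in 2-D (an assumption, not
# taken from a published table): (1, 7), with gcd(7, 24) = 1.
pts = scale_to_domain(glp_points(24, [1, 7]), [0.0, 0.0], [30.0, 40.0])
```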
APPLICATIONS
In this section, two examples are given to illustrate the application of the proposed approach to solving multi-objective programming problems by means of probability theory and discrete uniform experimental design.
Production with Maximum Profits and Least Pollutions
A factory produces two kinds of products, α and β, during the planning period. Each product consumes three different resources, A, B, and C [1]. The unit consumption of resources for each product, the limits of the various resources, and the unit price, unit profit and unit pollution caused by each product are shown in Tab. 1 [1]. Assume that all products can be sold. The problem is how to arrange production so as to maximize profit and output value while causing the least pollution. Solution: Let the outputs of products α and β be x_1 and x_2, respectively. The mathematical model of the problem, with its s.t. (constraint) conditions, is:

Max f_1(x) = 70x_1 + 120x_2,
Max f_2(x) = 400x_1 + 600x_2,
Min f_3(x) = 3x_1 + 2x_2,
s.t. 9x_1 + 4x_2 ≤ 240, 4x_1 + 5x_2 ≤ 200, 3x_1 + 10x_2 ≤ 300, x_1 ≥ 0, x_2 ≥ 0.

Since this problem has two input variables, x_1 and x_2, according to the literature [6,10] at least 17 uniformly distributed sampling points are needed to conduct the discretization with uniform experimental design within the working domain. Here we employ the uniform table U*24(24^9) to perform the discretization; the consequences are shown in Tab. 2.
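A minimal sketch of the discretised PMOO treatment of this example is given below, using the reconstructed Eqs. (1)-(3). Plain random sampling stands in for the U*24(24^9) table, so the selected point will only approximate the tabulated results.

```python
import numpy as np

# Objectives: maximise f1 (profit) and f2 (output value), minimise f3.
def f1(x): return 70 * x[:, 0] + 120 * x[:, 1]
def f2(x): return 400 * x[:, 0] + 600 * x[:, 1]
def f3(x): return 3 * x[:, 0] + 2 * x[:, 1]

def feasible(x):
    """Resource constraints (the s.t. conditions of the example)."""
    return ((9 * x[:, 0] + 4 * x[:, 1] <= 240) &
            (4 * x[:, 0] + 5 * x[:, 1] <= 200) &
            (3 * x[:, 0] + 10 * x[:, 1] <= 300))

def partial_prob(vals, beneficial):
    """Partial favorable probability per the reconstructed Eqs. (1)-(2)."""
    m = len(vals)
    if beneficial:
        return vals / (m * vals.mean())
    shifted = vals.max() + vals.min() - vals
    return shifted / (m * shifted.mean())

# Stand-in sampling of the working domain (the paper uses U*24(24^9)).
rng = np.random.default_rng(1)
x = rng.uniform([0.0, 0.0], [27.0, 30.0], size=(400, 2))
x = x[feasible(x)]

# Eq. (3): total favorable probability = product of the partial ones.
p_t = (partial_prob(f1(x), True) *
       partial_prob(f2(x), True) *
       partial_prob(f3(x), False))
best = x[np.argmax(p_t)]
print("best sampled point:", best)  # sequential refinement drives this toward (0, 30)
```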
From Tab. 2, it can be seen that 5 sampling points are excluded due to the restraint of the s.t. conditions, and 19 sampling points lie within the working domain of the s.t. conditions, which meets the requirement of at least 17 uniformly distributed sampling points within the domain. In this problem, the goal functions f_1(x) and f_2(x) are both beneficial indexes, while the goal function f_3(x) is an unbeneficial index. Tab. 3 shows the results of the assessments with PMOO; P_f1, P_f2 and P_f3 represent the partial favorable probabilities of functions f_1, f_2 and f_3 at the corresponding discretized sampling points, respectively, and P_t expresses the total / overall favorable probability of each alternative. From Tab. 3, it can be seen that sampling point No. 2 exhibits the maximum value of the total favorable probability. Therefore, further optimization using sequential uniform design is conducted around sampling point No. 2 of Tab. 2.
Tab. 4 shows the results of the assessments using sequential uniform design for further optimization, in which c(t) = (Max P_t(i−1) − Max P_t(i)) / Max P_t(i−1) expresses the relative error of the maximum total favorable probability at the i-th sequential step. If we assign a pre-set value δ = 2% for c(t), then the final optimal consequences for this multi-objective optimization problem are f_1Opt. = 3591.927, f_2Opt. = 17962.24 and f_3Opt. = 59.9609 at the 5th step, with "coordinates" x_1* = 0.0521 and x_2* = 29.9023. Obviously, x_1* and x_2* approach 0 and 30 in the ultimate limit, respectively, which corresponds to the optimum values f_1Opt. = 3600, f_2Opt. = 18000 and f_3Opt. = 60.

Table 4. Results of the assessments by using sequential uniform design with U*24(24^9) (columns: step, domain, optimum "coordinates", value of goal, maximum total favorable probability P_t × 10^5).
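The sequential step can be sketched as follows: contract the domain around the incumbent best point, resample, and stop once the relative change c(t) falls below δ. This is a schematic under two assumptions: feasibility filtering happens inside `total_prob`, and maximum probabilities from successive sample sets of equal size are treated as comparable (as in Tab. 4); it is not the exact SNTO of [6,8].

```python
import numpy as np

def snto(total_prob, lower, upper, n=24, shrink=0.5, delta=0.02, seed=0):
    """Sketch of sequential refinement around the incumbent best point.

    total_prob maps an (n, d) array of points to total favorable
    probabilities; lower/upper bound the starting search domain.
    """
    rng = np.random.default_rng(seed)
    lower = np.asarray(lower, float)
    upper = np.asarray(upper, float)
    best_x, best_p = None, -np.inf
    while True:
        x = rng.uniform(lower, upper, size=(n, lower.size))
        p = total_prob(x)
        prev = best_p
        i = int(np.argmax(p))
        if p[i] > best_p:
            best_x, best_p = x[i], p[i]
        # stop once the relative change c(t) drops below delta (= 2% here)
        if prev > 0 and (best_p - prev) / best_p < delta:
            return best_x, best_p
        # contract the search domain around the current best point
        half = (upper - lower) * shrink / 2
        lower = np.maximum(lower, best_x - half)
        upper = np.minimum(upper, best_x + half)
```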
Production with Maximum Profit and Output of One Product
A factory produces two products: A and B. The profit of producing each piece of A is 4 ¥RMB, and the profit of producing each piece of B is 3 ¥RMB. The processing time of each piece of A is twice as long as that of each piece of B; if the whole working time is used to process B, 500 pieces of B can be produced per day. The factory's daily supply of raw materials is only enough to produce a total of 400 pieces of A and B. Besides, product A is in high demand and sells very well. The problem is how to arrange the daily outputs of A and B so that the factory obtains the maximum profit under the existing conditions. Solution: First set x_1 = daily output of product A and x_2 = daily output of product B [11]. This gives the following mathematical model:

Max f_1(x) = 4x_1 + 3x_2,
Max f_2(x) = x_1,
s.t. 2x_1 + x_2 ≤ 500, x_1 + x_2 ≤ 400, x_1 ≥ 0, x_2 ≥ 0.

Since this problem has two input variables, x_1 and x_2, again at least 17 uniformly distributed sampling points are used to conduct the discretization with uniform experimental design within the working domain [6,10]. Here we use the uniform table U*31(31^10) to perform the discretization; the consequences are shown in Tab. 5. From Tab. 5, it can be seen that 14 sampling points are excluded due to the restraint of the s.t. conditions, and 17 sampling points fall within the domain of the s.t. conditions, which satisfies the requirement of at least 17 uniformly distributed sampling points within the domain. In this problem, both goal functions f_1(x) and f_2(x) are beneficial indexes. Tab. 6 shows the results of the assessments with PMOO; P_f1 and P_f2 indicate the partial favorable probabilities of functions f_1 and f_2 at the corresponding discretized sampling points, and P_t represents the total / overall favorable probability of each alternative. From Tab. 6, it can be seen that sampling point No. 25 exhibits the maximum value of the total favorable probability. Therefore, further optimization using sequential uniform design is conducted around sampling point No. 25 of Tab. 5. Tab. 7 shows the results of the assessments using sequential uniform design for further optimization. Again, with a pre-set value δ = 2% for c(t), the final optimal consequences for this multi-objective optimization problem are f_1Opt. = 1000.56 and f_2Opt. = 249.597 at the 6th step, with "coordinates" x_1* = 249.597 and x_2* = 0.7258. Analogously, x_1* and x_2* tend to 250 and 0 in the ultimate limit, respectively, which leads to the optimum values f_1Opt. = 1000 and f_2Opt. = 250.

Table 7. Results of the assessments by using sequential uniform design with U*31(31^10) (columns: step, domain, optimum coordinates, value of goal, maximum total favorable probability).
DISCUSSION
In the past, the multi-objective programming problem was usually solved using the "linear weighting method" [1,2], i.e., an "additive" algorithm transferring the multiple objectives into a single one, which from the viewpoint of probability theory corresponds to a "union" of events and is intrinsically problematic in principle [3]. Some approaches even took certain objectives as constraint conditions to solve the multi-objective programming problem [1,2], which obviously deviates from the original intention of multi-objective programming, namely the "simultaneous optimization of multiple objectives".
In contrast, the probability-based method for multi-objective optimization treats the simultaneous optimization of multiple objectives from the viewpoint of probability theory, which is the proper methodology for multi-objective optimization [3][4][5]. The consequences of the previous approaches are therefore not comparable to the results of the probability-based method, owing to their intrinsic problems.
CONCLUSION
By using probability-based multi-objective optimization for the simultaneous optimization of multiple objectives, discrete uniform experimental design for simplification, and the sequential algorithm for further optimization, the multi-objective programming problem can be solved rationally. The approach properly takes the simultaneous optimization of each objective of the multi-objective programming problem into account, which reflects the essence of multi-objective programming naturally and opens a new way forward.
Conflict Statement
There is no conflict of interest.
The extended avian urban phenotype: anthropogenic solid waste pollution, nest design, and fitness
• Human presence and urbanisation positively covary with solid waste pollution.
• Urban solid waste pollution covaries with avian nest design and fitness.
• Human presence positively covaries with solid waste in great tit nests.
• The more anthropogenic materials in nests, the less fur and feathers.
• Anthropogenic nest materials negatively covary with blue tit breeding success.
Abstract
Solid waste pollution (garbage discarded by humans, such as plastic, metal, paper) has received increased attention given its importance as a global threat to biodiversity. Recent studies highlight how animals incorporate anthropogenic materials into their life-cycle, for example in avian nest construction. While increasingly monitored in natural areas, the influence of solid waste pollution on wildlife has been seldom explored in the urban habitat. There is limited data on the relationship between anthropogenic solid waste pollution, nest design, and reproductive success in an urban context. We address this knowledge gap (i) by investigating the presence of environmental solid waste pollution in the breeding habitats of great tits Parus major and blue tits Cyanistes caeruleus reproducing in a gradient of urbanisation, and (ii) by quantifying the contribution of different anthropogenic materials in their nests. We further examine potential drivers of solid waste pollution by inferring three distinct properties of the urban space: environmental solid waste pollution on the ground, human presence, and the intensity of urbanisation (e.g. impervious surfaces) in nestbox vicinity. Finally, (iii) we explore the relationship between anthropogenic nest materials and reproductive success. We found that environmental solid waste pollution was positively associated with human presence and urbanisation intensity. There was also a positive relationship between increased human presence and the amount of anthropogenic materials in great tit nests. Interestingly, in both species, anthropogenic nest materials covaried negatively with nest materials of animal origin (fur and feathers). We suggest that fur and feathers, key insulating materials in nest design, may be scarcer in areas with high levels of human presence, and are consequently replaced with anthropogenic nest materials. Finally, we report a negative relationship between anthropogenic nest materials and blue tit reproductive success, suggesting species-specific vulnerability of urban birds to solid waste pollution.
Introduction
Humans are waste producers on a massive and global scale; the rate of solid waste production is rapidly increasing, and is estimated to double by 2050 relative to current estimates (Kaza et al., 2018). At least 37% of the current c. 2 billion tonnes of waste generated annually ends up in landfills or natural areas, constantly accumulating in the environment (Kaza et al., 2018). Among all anthropogenic materials contributing to waste, plastic emerges as a durable, versatile material that does not biodegrade, but breaks up into smaller pieces instead, dispersing easily in the environment (Ter Halle et al., 2016). Due to these inherent properties, plastic pollution became a global threat to biodiversity (UNEP, 2014), and interacts with other global change drivers such as global warming, landscape use change or biological invasions (Malizia and Monmany-Garzia, 2019). In this context, while a growing number of studies investigated the impact of plastic pollution on marine ecosystems, little is known about the effects of plastic pollution inland, where it is mainly produced (Jâms et al., 2020; MacLeod et al., 2021). Additionally, previous studies have largely focused on microplastics, leaving an important knowledge gap in our understanding of the effects of macroplastics on the environment (Malizia and Monmany-Garzia, 2019).
Anthropocene: plastic pollution & urbanisation effects on wildlife
Several studies have highlighted the effects of solid waste pollution on free-living organisms, for example by altering their behaviour and physiology (Suarez-Rodriguez et al., 2012). Solid waste pollution, and specifically plastic pollution, has been shown to increase individual mortality due to ingestion, entanglement or entrapment (Gall and Thompson, 2015; Santos et al., 2021). Birds, one of the most affected groups globally (Gall and Thompson, 2015; Wilcox et al., 2015), are known to incorporate anthropogenic materials such as plastic strings or plastic foil pieces into their nests (Jagiello et al., 2019). Plastic strings can cause entanglement of growing chicks, leading to increased mortality rates at this developmental stage (Townsend and Barker, 2014). Other anthropogenic materials used in nest building, such as cigarette butts, can cause genotoxicity in nestling blood cells, presumably decreasing nestling survival due to their toxicity (Suárez-Rodríguez and Macías Garcia, 2014). However, knowledge of the temporal and spatial variability of the inclusion of anthropogenic materials in nests is very limited, as is information on the impact of such materials on nest design and avian fitness (Jagiello et al., 2019; Reynolds et al., 2019; Tavares et al., 2016; Antczak et al., 2010).
Nests are the cornerstone of avian reproduction, providing a secure place for the development of offspring and maintaining stable thermal and humidity conditions (reviewed in Deeming and Reynolds, 2016). Nests are also considered an extended phenotype, defined as the non-bodily characteristics of the individual who constructs it (Schaedelin and Taborsky, 2009). Extended phenotypes are often expected to play a key role in sexual signaling, by carrying information on individual fitness and reproductive investment (Järvinen and Brommer, 2020; Schaedelin and Taborsky, 2009) in both natural and human-modified environments (Sergio et al., 2011). It is thus surprising that the effect of solid waste pollution on nest building, behaviour and fitness has been seldom explored, particularly in an urban context (Reynolds et al., 2019).
Urban areas are considered key producers of environmental solid waste pollution (Forman, 2014). Radical landscape transformation is required to create cities; urban ecosystems therefore also constitute a major global threat to biodiversity (McKinney, 2002; Grimm et al., 2008). Many studies have described urban-induced behavioural, physiological, ecological and evolutionary effects on wildlife (e.g. Forman, 2014; Szulkin et al., 2020). However, research to date on the biological impact of urban pollution has largely focused on atmospheric pollution (i.e., gases, light and noise) rather than on solid waste (e.g. Isaksson, 2010; Halfwerk et al., 2011; Dominoni et al., 2013). Surprisingly, few studies have quantified the impact of solid waste pollution across the urban landscape. Rare exceptions are the work of Radhamany et al. (2016) and Wang et al. (2009), who report a higher use of anthropogenic materials in nest construction by house sparrows Passer domesticus and Chinese bulbuls Pycnonotus sinensis with increasing urbanisation in Asia. These findings, in addition to the rapid expansion of urban areas worldwide (Seto et al., 2012) and the potential negative effects of solid waste pollution on avian reproduction (e.g. Suárez-Rodríguez and Macías Garcia, 2014), highlight the urgent need to assess the relationship between cities and the use of anthropogenic materials in nest building and design (Reynolds et al., 2019).
Why do birds incorporate anthropogenic materials in their nests?
The first reported observation of anthropogenic materials recorded in avian nests dates back to 1933, when Warren (1933) recorded metal wire in a pied crow Corvus scapulatus nest. Since then, the number of such observations has considerably increased, reflecting the pervasive nature of human activities at a global scale (Jagiello et al., 2019). Three main (and non-exclusive) hypotheses have been proposed to explain the incorporation of anthropogenic materials into avian nests: i) availability, ii) age and iii) adaptive/functional hypothesis (reviewed in Reynolds et al., 2019).
The availability hypothesis predicts an increased amount of anthropogenic material in nests as the result of human activities (e.g. transformation of land, alterations of ecosystems) and a consequent reduction of natural materials such as plants, animal hair or feathers originally used in nest construction (Antczak et al., 2010; Lee et al., 2015). It implies that in more polluted environments, birds are most likely to use anthropogenic materials to build their nests, as these are more accessible and ubiquitous than natural materials (Lee et al., 2015; Radhamany et al., 2016). To properly test this hypothesis, it is necessary to measure anthropogenic materials both in the nest and in the surrounding environment at the time of nest construction.
The age hypothesis refers to an association between the use of anthropogenic materials and the age of breeding individuals, and assumes a causal relationship between age and individual experience (Sergio et al., 2011). Previous studies conducted in two long-lived species, the black kite Milvus migrans and the white stork Ciconia ciconia, reported that older and more experienced individuals are more likely to incorporate anthropogenic materials into their nests (Sergio et al., 2011; Jagiello et al., 2018). Anthropogenic nest materials in those species were likely to serve as an extended phenotype and a sexual signal expressing builder quality (Sergio et al., 2011; Jagiello et al., 2018).
The third, adaptive/functional hypothesis links individual behaviour (the incorporation of anthropogenic materials in nest building) to possible associated reproductive benefits and, as such, can also be considered an extended phenotype (Sergio et al., 2011): for example, cigarette butts may act as an ectoparasite repellent (Suarez-Rodriguez et al., 2012), durable plastic strings may serve to reinforce the structure of the nest (Antczak et al., 2010), and anthropogenic materials may modify nest insulation properties (Reynolds et al., 2019; Corrales-Moya et al., 2021). However, studies demonstrating clear links between individual behaviour in nest building (e.g. the inclusion of anthropogenic materials) and individual fitness are scarce to date (Reynolds et al., 2019). Suárez-Rodríguez and Macías García (2017) showed that anthropogenic nest materials (cigarette butts) act beneficially on the fledging success of house finches (Carpodacus mexicanus), but it is also possible that the cost of such exposure (due to toxicity) only appears in post-fledgling life. Overall, the adaptive potential of the inclusion of anthropogenic nest materials, viewed as a trait in the extended phenotype framework (Sergio et al., 2011), remains poorly understood.
Study aims
We here studied the association between environmental solid waste pollution, avian nest design and fitness. We specifically focused on two urban adapters, great tits Parus major and blue tits Cyanistes caeruleus, breeding in a gradient of urbanisation in one of the largest European cities (Warsaw, Poland). First, we examined factors associated with urban environmental solid waste pollution on the ground and in nestbox vicinity. Second, we investigated mechanisms underlying the use of anthropogenic materials in nest design. We specifically tested hypotheses focusing on (1) solid waste availability, (2) parental age, and (3) the adaptive role of solid waste in terms of reproductive success.
Study sites
Data on environmental solid waste pollution, inferred on the ground and from avian nests, was collected in 2020 during the breeding season of great tits and blue tits (March-June) in the capital city of Warsaw, Poland, and its surroundings. We used eight environmentally heterogeneous study sites arranged in a gradient of urbanisation starting in forested areas outside of the city and ending in the city center (Fig. 1). Specifically, two study sites corresponded to city outskirts (Fig. 1A and B), while the remaining six sites offered different intensities of urbanisation within Warsaw city borders (Fig. 1C-H). The study sites constitute a mixture of habitat patches (suburban village, natural forest, urban parks, residential areas, office areas) representative of the urban mosaic (Forman, 2014). A brief description of each study site is provided below. More details can also be found in earlier studies (Corsini et al., 2019; Szulkin et al., 2020).
Suburban village (number of nestboxes (N) = 47)
Palmiry (20°46′48.9748″E, 52°22′11.3382″N) is a suburban village with c. 370 inhabitants: the area is mainly characterized by residential homes with gardens interconnected by tree-lined avenues. A large commercial centre located close to a highway and two small stores are also present in the area.
Urban forest (N = 65)
Las Bielanski Natural Reserve (20°57′33.3″E, 52°17′38.2842″N) is the only remnant of the Mazovia Primeval Forest. This deciduous forest is mainly characterized by the presence of oaks (Quercus spp.), hornbeams (Carpinus spp.) and maples (Acer spp.). This study site stands as an island of wilderness in the city: with walking paths and resting areas, it attracts visitors all year round.
Residential area II (N = 52)
The Osiedle Olszyna neighbourhood (20°57′39.37097″E, 52°16′23.71883″N) is a block of flats intermixed with green areas, but also schools, groceries and recreational facilities for families. It is located in close proximity to the urban woodland "Las Olszyna" (Site E).
Office area (N = 28)
The "Ochota" Campus (20°59′8.85224″E-52°12′43.77676"N) is the University of Warsaw science campus, largely designated for students and university researchers.The area is composed of office buildings, laboratories, dormitories and canteens for students.(Poland).Red dots correspond to study site locations, which include: a suburban village (A), a natural forest (B), an urban forest (C), residential areas II (D) and I (G), an urban woodland (E), an office area (F), an urban park (H).The black dot stands for Warsaw city centre.Each piechart shows solid waste categories within each study site (in %).While study sites vary in terms of solid waste composition (as reported on the figure), the amount of solid waste items also varied between study sites; details of this variation are reported in Tables S5. 2.1.7.Residential area I (N = 46) The "Muranow" district (20°59′5.74332″E-52°14′52.17925"N) is a residential area, similar in structure and use to Residential area II (Site D).
Urban Park (N = 105)
Pole Mokotowskie (21°0′6.98321″E, 52°12′46.66874″N) is a large urban park. It includes a combination of habitat patches such as meadows, tree-covered areas and recreational structures (i.e. playgrounds and numerous sport facilities). It constitutes a centrally-located recreational area for urban dwellers.
Environmental solid waste pollution survey
To estimate environmental solid waste pollution in the vicinity of avian nestboxes, we used the standardised protocol of the CSIRO Global Leakage Baseline Project, specifically applying the protocol section designed for inland sites (Schuyler et al., 2018; access: https://research.csiro.au/marinedebris/resources/). Briefly, this protocol establishes three random, 12.5 m long transects for each sampling location (here, an avian nestbox). To adapt the protocol to our study, we established transects within a radius of 25 m from a nestbox as its central point. This distance of 25 m around the nestbox is well within the typical territory size of c. 1 ha (a territory of 100 × 100 m) for these species (Krebs, 1971; Wilkin et al., 2006). Nestboxes in our study area were spaced 50 m from each other, thus avoiding the overlap of transects corresponding to different nestboxes. As soon as nest building started in a particular nestbox, we located and categorised all solid waste items found along each of the three transects attributed to that nestbox by following the protocol guidelines (Schuyler et al., 2018). The main categories of anthropogenic materials detected in the environment included paper, plastic, glass, metal, cloth, rubber and "other". Data from these ground transects, collected for every nestbox where active nest building was taking place, were further used as a proxy for the environmental solid waste pollution birds were exposed to during the nest-building stage.
Life history data
The eight study sites monitored in this study are home to 474 nestboxes specifically designed for great tit and blue tit breeding (Schwegler woodcrete nestboxes; type 1b, with a 32 mm entrance hole). Starting from the end of March 2020, all nestboxes were checked weekly to identify those occupied by tits, and to record the following life-history traits: egg laying date (1st of April = 1), clutch size and hatching date. Only first broods, defined as broods that started no later than 30 days after the very first brood in a given site, were included in the analyses (Van Balen, 1973). When nestlings were 10 days old, adults were trapped in the nestbox whilst feeding young. Both adults and nestlings were ringed with an alphanumeric metal ring, and basic biometrics were taken. Adults were also aged based on their wing plumage, which allowed us to distinguish first-year breeders from older birds (second-year breeders or older). Only female age was further considered in the analyses, as in both species females are the sex that builds the nests (Mainwaring, 2017). All chicks were ringed 15 days after hatching. Finally, individual fledging success was assessed by visiting all active nestboxes c. 25 days after hatching to record the number of birds that successfully left the nestbox.
Nest collection and dissection
We collected 100 great tit and blue tit nests (43 and 57 nests, respectively) once they became inactive, independently of whether they were successful (N = 76; at least one offspring fledged) or unsuccessful (N = 24; at least one egg was laid but no chick fledged). Successful nests were collected up to 5 days after the fledglings left the nestbox. We excluded predated nests from our analyses, as predators often destroy and/or remove part of the nest (Z.J., pers. obs.), thereby preventing the correct quantification of nest materials. Tit nests were gently removed from nestboxes and stored in cardboard boxes at ambient temperature in the field. Within the next 72 h, nests were transported to the University of Warsaw, where they were stored at −80 °C for at least 24 h to halt the process of nest material biodegradation (Mainwaring et al., 2014).
Once in the lab and after freezing, we measured total nest weight using an electronic scale to the nearest 0.0005 g. We further dissected nests following Mainwaring et al. (2014). Natural nest material categories were based on Hanmer et al. (2017), but were adjusted to reflect the content found in great tit and blue tit nests (Table S1). Soft elements of animal origin (fur, human hair and feathers) were combined into the animal origin category; these materials were mostly limited to the lining of the nest cup (Z.J., pers. obs.). Moss, grass and other (natural elements) categories followed, the latter including bark, needles and twigs. Finally, we also introduced the compost category, which included all plant-based elements that were impossible to assign elsewhere due to decomposition. Anthropogenic nest materials were categorised with the same protocol that was used for ground transect quantification (Schuyler et al., 2018, Table S2), with one adjustment: on ground transects, the sampling unit for recording environmental solid waste pollution was counts (e.g. the number of items in each anthropogenic solid waste category recorded along a transect). This approach was not feasible when quantifying the contribution of anthropogenic materials to nest design, as tits tear solid waste items into many small pieces (Z.J., pers. obs.). Therefore, the amount of anthropogenic nest materials was weighed rather than counted (Fig. S1).
Human presence and urbanisation intensity
We used a readily computed, repeatable estimate of human presence around each nestbox following the methodology detailed in Corsini et al. (2019). Briefly, the human presence index reports the number of humans and dogs detected in a 15 m radius around each nestbox during a 30 s long count (Corsini et al., 2019), averaged over 20 counts performed during the day and across the breeding season. The total observation time per nestbox during the breeding season was 10 min. The estimate of human presence was found to be repeatable over time (Corsini et al., 2019), and humans, recorded as pedestrians and bikers, contributed 93% to the dog and human presence index in this dataset (Table S3).
Urbanisation intensity was computed as the percentage of Impervious Surface Area (ISA) in a 100 m radius around each nestbox, as described in Szulkin et al. (2020). Briefly, ISA was calculated in QGIS using a 20 m pixel resolution map of ISA processed via satellite imagery from 2015 (Copernicus Land Monitoring Services, https://land.copernicus.eu/sitemap). This index includes all types of built-up areas, such as infrastructural networks (roads), parking lots and buildings.
Weather information
Temperature data was included in all null models because nest design in tits has readily been found to be associated with local temperature in the timeframe preceding clutch initiation (i.e., lay date; see Deeming et al., 2012). Since rainfall is a strong determinant of breeding success in cavity-nesting birds (Radford and Du Plessis, 2003), it was also included in all null models. 2020 weather data was obtained from the Polish Institute of Meteorology and Water Management (IMGW-PIB). Daily temperature and rainfall were averaged from two stations, Warsaw Okęcie and Legionowo, referring respectively to sites situated within (sites C-H) and outside (sites A and B) Warsaw city borders, to provide fine-scale data on climatic conditions in urban and non-urban sites. Temperature and rainfall were further averaged for each nest over a seven-day period prior to, and including, the laying date, following Deeming et al. (2012).
Statistical analyses
All analyses were performed with the open-source R computing environment (version 4.0.2). All plots were visualised using the R package ggplot2 (v.3.1.0) (Wickham, 2011) and further assembled with the open-source Inkscape software (v.1.0.2) (https://inkscape.org; Oualline and Oualline, 2018). Analyses were performed on great tits and blue tits separately, as (i) these two species may respond to urbanisation differently in terms of anthropogenic nest materials (Hanmer et al., 2017) and (ii) nest structural characteristics are also known to be species-specific (Mainwaring, 2017).
Statistical analyses were performed in a five-step process based on two different datasets: (1) Transect Data (abbreviated as "TD"; N = 100 sampling locations (nestboxes), covering 300 transects of environmental solid waste pollution from the ground), and (2) Nest Data, which includes species-specific data on nest components resulting from nest dissection (abbreviated as "ND"; N = 100 nests, of which 43 are great tit nests and 57 blue tit nests).
Environmental solid waste pollution in a gradient of urbanisation (TD)
For each nestbox, 3 ground transects were surveyed for environmental solid waste pollution. Information on environmental solid waste pollution was collected along 300 ground transects, corresponding to 100 sampling locations surrounding nestboxes. Data from each of the 3 transects per nestbox were summed, and further analyses were run at the nestbox level. Variation in environmental solid waste pollution driven by urbanisation was inferred at two levels: both in terms of (i) the total amount of environmental solid waste pollution and (ii) the composition of environmental solid waste pollution, partitioned into solid waste type categories (see Section 2.4). Each sampling location (e.g. each nestbox) was defined as located either below or above the median value of (i) human presence and (ii) Impervious Surface Area (ISA) for the entire transect dataset, thereby generating two contrasted levels for each variable (low/high human presence and low/high ISA, respectively). Changes in environmental solid waste composition in low/high human presence or ISA were tested using chi-square tests of independence (χ2).
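A minimal sketch of this median split and test is given below; the counts are hypothetical, not the values reported in Table S6.

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical item counts per solid waste category
# (rows: low/high human presence; columns: paper, plastic, glass, metal).
table = np.array([[120,  90,  60, 20],
                  [310, 240, 190, 85]])
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")

# Median split used to assign each nestbox to the low/high group.
human_presence = np.array([0.0, 0.1, 0.3, 1.2, 2.5, 0.6])  # per nestbox
group = np.where(human_presence > np.median(human_presence), "high", "low")
```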
To illustrate the urban environmental differences occurring in low/high environments, we report that for great tits, the average (± SD) number of humans around each nestbox in 30 s long counts was 0.06 (± 0.12) and 1.60 (± 0.99) humans for low and high human presence areas, respectively. Values were equivalent for blue tits, whose nests in areas of low and high human presence were surrounded by 0.19 (± 0.19) and 2.15 (± 1.82) humans, respectively. Great tits breeding in low and high ISA environments were surrounded by an average (± SD) of 3.6% (± 3.17) and 21.1% (± 9.62) ISA, respectively. Similarly, blue tit nests in this study were characterised by an average of 2.7% (± 2.27) and 17.9% (± 10.06) ISA in low and high ISA environments, respectively (see summary statistics in Table S4).
Interspecific variation in nest design (ND)
We ran a multivariate analysis of variance (MANOVA) to test for differences in great tit and blue tit nest components (modelled in terms of weight, in grams), and a principal component analysis (PCA) using prcomp (https://stat.ethz.ch/R-manual/R-devel/library/stats/html/prcomp.html) in R, visualized by ggord (Marcus, 2017), to analyse variation within and between nest components in the two target species. Explanatory variables in the MANOVA included the weights of: anthropogenic materials, compost, dry grass, feather, fur, moss and other natural materials. Prior to PCA, the weights of all categories of nest components were mean-centered using the scale function (where z = (x_i − x_mean)/sd) in R.
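The scaling and PCA steps can be sketched in Python as follows; the weight matrix is hypothetical and merely mirrors the seven component categories named above.

```python
import numpy as np

# Hypothetical nest-component weight matrix: rows = nests, columns =
# (anthropogenic, compost, dry grass, feather, fur, moss, other).
weights = np.random.default_rng(0).gamma(2.0, 1.0, size=(100, 7))

# Z-score standardisation, as done with R's scale(): (x - mean) / sd.
z = (weights - weights.mean(axis=0)) / weights.std(axis=0, ddof=1)

# PCA via SVD (a numpy analogue of R's prcomp on standardised data).
u, s, vt = np.linalg.svd(z, full_matrices=False)
scores = u * s                    # principal component scores per nest
explained = s**2 / np.sum(s**2)   # proportion of variance per component
print(explained[:2])              # variance captured by PC1 and PC2
```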
Association between environmental solid waste pollution, human presence, and urbanisation intensity on the weight and proportion of anthropogenic materials in the nest (ND)
Contrasted levels of environmental solid waste pollution, human presence or urbanisation (ISA) were defined as below or above the median of the variable of interest, calculated for each species separately. We first assessed whether these contrasted levels of solid waste pollution, human presence and urbanisation influenced species-specific variation in nest composition using t-tests.
We further investigated in detail the extent to which species-specific environmental solid waste (see below), human presence and urbanisation (ISA) influenced the distribution of anthropogenic materials in the nest. For these analyses, we only included those categories of anthropogenic solid waste that were found both in the nests and in the environment (based on transect data). Indeed, not all solid waste items found in the environment were used by great tits or blue tits during nest construction. Before running such models, we generated a new variable reporting environmental solid waste found on the ground, termed "species-specific environmental solid waste", which only includes solid waste items found in species-specific nests (for precise information about species-specific solid waste items, see Supplementary information, Table S2).
We further examined variation in (A) the proportion of anthropogenic materials in the nest, fitted as the ratio of anthropogenic materials relative to total nest weight (in grams), and (B) the total weight of anthropogenic materials in the nest. We tested whether this ratio varies depending on (i) the number of environmental solid waste items found in the surroundings of nestboxes based on transect data, but also (ii) human presence and (iii) urbanisation intensity. Models were tested with Generalised Linear Mixed Effects models (GLMMs, function glmmTMB in the R package glmmTMB, Brooks et al., 2017) in a model averaging framework. We also assessed the covariation between the amount of anthropogenic materials in the nests (fitted as either a proportion relative to total nest weight or as the total weight of anthropogenic materials in the nest) and the following explanatory variables: species-specific environmental solid waste (fitted as the number of solid waste items identified on transects), human presence, ISA (urbanisation intensity proxy), the proportion/weight of components of animal origin (specifically feather and fur, as only these materials, together with anthropogenic nest materials, line the nest cup), as well as temperature and rainfall. This was modelled in a linear mixed model framework using the lmer function in the R package lme4 (Bates et al., 2015). We used a Z-score function to standardise the explanatory variables. The proportion/weight of anthropogenic nest materials was fitted as the response variable after applying a linear beta transformation (Smithson and Verkuilen, 2006). Study site was included as a random effect to control for the non-independence of broods belonging to the same study location (8 categories). As variance inflation factors (VIF) for all explanatory variables included here were below 2, model structures were not subject to multicollinearity issues (Zuur et al., 2009). A set of models including all possible combinations of fixed effects was subsequently generated from the global model detailed above (R package MuMIn v. 1.43.15, see Bartoń, 2018). Models were ranked according to Akaike's information criterion (AICc) to identify those with the best fit (Burnham and Anderson, 2004), and model-averaged coefficients for a subset of models (ΔAICc < 2) were further obtained. Because some Akaike weights of the best models were below 0.9 and high model selection uncertainty existed, we applied full model averaging (Symonds and Moussalli, 2011). Finally, we extracted the upper and lower bounds of 95% confidence intervals (CI) for each variable kept in the best fitting model.
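The AICc-weighting and full-averaging logic, which the paper runs in R with MuMIn, can be sketched in plain numpy as follows; the candidate-model estimates below are hypothetical.

```python
import numpy as np

def aicc(loglik, k, n):
    """Second-order Akaike information criterion for k parameters, n obs."""
    return -2 * loglik + 2 * k + 2 * k * (k + 1) / (n - k - 1)

def full_average(estimates, aicc_values, delta_max=2.0):
    """Full model averaging over the ΔAICc < delta_max candidate subset.

    `estimates`: (n_models, n_terms) array with 0 where a term is absent,
    which is exactly what 'full' (as opposed to conditional) averaging uses.
    """
    a = np.asarray(aicc_values, float)
    keep = a - a.min() < delta_max
    w = np.exp(-0.5 * (a[keep] - a[keep].min()))
    w /= w.sum()                                   # Akaike weights
    return w @ np.asarray(estimates, float)[keep]

# Hypothetical candidate set: 3 models, 2 fixed-effect terms.
betas = np.array([[0.40, 0.00],    # term 2 absent from model 1
                  [0.35, 0.10],
                  [0.00, 0.12]])
print(full_average(betas, aicc_values=[100.2, 100.9, 104.0]))
```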
Effect of female age on the presence/absence of anthropogenic nest materials in tit nests (ND)
We carried out additional analyses to test whether there is a relationship between female age and the presence of anthropogenic materials in tit nests. These analyses were performed on a reduced dataset since some nests failed before adults could be caught, and a few age records were missing for some females caught at the nest (thus, N great tits = 33; N blue tits = 52). We used a similar procedure for model building as described in section 2.7.3, but with an additional fixed factor: female age, coded as first-year bird vs adult breeder (2 years or older).
Anthropogenic nest materials and reproductive success (ND)
In a final analytical step, we inferred the relationship between anthropogenic nest materials in avian nest design (i.e. the extended phenotype) and reproductive success (i.e. avian fitness). Fitness was here defined in terms of reproductive success assessed at two life-history stages of the offspring, and measured in terms of number of hatchlings (i.e. the number of chicks that hatched in the brood) and in terms of number of fledged birds (i.e. the number of chicks that successfully fledged from the nestbox). We used Linear Mixed Effects models in a model averaging framework as described above. Analyses were performed at a species level, on nests where at least one hatchling hatched. The total number of hatchlings per breeding event was fitted as a Gaussian-distributed response variable, while the following parameters were fitted as fixed predictors in the models: species-specific environmental solid waste, proportion of anthropogenic nest materials, human presence, ISA, temperature and rainfall. Variance inflation factors (VIF) for all explanatory variables here included were below 2, so model structure was not affected by multicollinearity issues. The categorical variable "Study site" was fitted as a random effect to control for the non-independence of nests sampled within the same area and to control for site heterogeneity. We further built an equivalent model with an analogous structure, but with the number of fledged birds fitted as the response variable (using Gaussian residuals). For significant effects, we calculated and visualised predicted marginal effects quantifying effect sizes of the percentage of anthropogenic nest materials on the number of hatchlings and fledglings using the ggeffects package (Lüdecke, 2018).
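The fitness models can be sketched along the same lines; again, the column names (n_hatchlings, prop_anthro, etc.) are hypothetical placeholders.

library(lme4)
library(MuMIn)
library(ggeffects)

# Global fitness model: number of hatchlings as a Gaussian response
fitness <- lmer(n_hatchlings ~ prop_anthro + env_waste + human_presence +
                  isa + temperature + rainfall + (1 | site),
                data = nests, na.action = na.fail, REML = FALSE)

# Full model averaging over the delta AICc < 2 subset
fitness_avg <- model.avg(subset(dredge(fitness), delta < 2))
summary(fitness_avg)

# Predicted marginal effect of anthropogenic nest material on hatching success
plot(ggpredict(fitness, terms = "prop_anthro"))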
Variation in ground solid waste pollution across the urban mosaic (TD)
We characterized 300 transects, located within a 25 m buffer zone from 100 nestboxes (3 transects/nestbox), in terms of solid waste pollution on the ground (Fig. 1 and Table S5). A total of 2317 solid waste items were recorded in the study system. The majority of solid waste items were identified as paper (30.6%, N = 709), followed by plastic (25.5%), glass (21.4%), cloth (2.46%) and rubber (0.56%) (see also Fig. 2).
Residential Area I (site G) was the urban area with the highest number of solid waste items detected (Table S5), with paper and plastic solid waste emerging as the dominant categories (Fig. 1). On the opposite end of the pollution spectrum, the lowest incidence of solid waste pollution was found in the Natural forest (site B), with a c. hundred-fold lower number of solid waste items compared with Residential Area I (Table S5).
When splitting the sampling locations (nestboxes) into two equal groups reflecting high and low levels of human presence and urbanisation (N = 50 nestboxes for high and N = 50 nestboxes for low levels), Chi-squared tests of independence (χ2) revealed significant differences between certain solid waste categories found in the environment (Fig. 2, Table S6). Specifically, in areas characterized by higher levels of human presence, the numbers of solid waste items attributed to paper, glass, metal and other categories were significantly higher than in areas with lower levels of human presence (p < 0.005, Fig. 2a, Table S6). When looking at contrasted levels of urbanisation related to impervious surfaces (ISA), we recorded a significantly higher number of solid waste items in high-ISA environments for paper, glass and metal (p < 0.005, Fig. 2b, Table S6), and a significantly lower number of cloth items in high-ISA environments (p < 0.005, Fig. 2b, Table S6).
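The per-category comparison can be sketched as below, assuming a hypothetical item-level data frame waste with columns category and presence_level ("low"/"high"); each test contrasts the counts of one category against all others across the two groups.

# One 2 x 2 chi-squared test of independence per solid waste category
for (categ in unique(waste$category)) {
  tab <- table(waste$category == categ, waste$presence_level)
  res <- chisq.test(tab)
  print(c(category = categ,
          X2 = round(unname(res$statistic), 2),
          p  = signif(res$p.value, 3)))
}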
Interspecific variation in nest design (ND)
A multivariate analysis of variance (MANOVA) revealed clear-cut, significant differences in nest composition between species (F7,95 = 10.871, Pillai = 0.44, p < 0.005), as visualised in Fig. 3. The weights of moss, dry grass, feathers and compost were significantly higher in blue tit nests than in great tit nests (Table S7). The amount of fur (in grams) was significantly higher in great tit nests relative to blue tit nests (Table S7). Other nest materials (both natural and anthropogenic) did not differ between species. These results were visualised in a principal component analysis (Fig. 3): the first three principal component axes (PC) explained 29.3%, 20.4% and 15.2% of the total variance, respectively, contributing a total of 64.9% of the variance in nest components (Fig. 3). Nest weight components such as moss, dry grass and feathers were negatively correlated with PC1. Compost, fur, feathers and other natural materials correlated positively with PC2, while the weight of anthropogenic materials correlated negatively with PC2 (Fig. 3a). PC3 was related positively to the weight of anthropogenic materials in the nest, and negatively to moss weight (Fig. 3b).
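A sketch of the species comparison and ordination in R is given below, assuming the component weights sit in hypothetical columns of the nests data frame.

# Matrix of nest component weights (grams); column names are placeholders
comp <- as.matrix(nests[, c("moss", "dry_grass", "feather", "fur",
                            "compost", "anthro", "other_natural")])

# MANOVA across species, reporting Pillai's trace
summary(manova(comp ~ species, data = nests), test = "Pillai")

# PCA on scaled components; inspect variance explained and loadings
pca <- prcomp(comp, scale. = TRUE)
summary(pca)          # proportion of variance per axis
pca$rotation[, 1:3]   # loadings on PC1-PC3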
Drivers of urban nest design variation
3.3.1. Species-specific nest composition in the context of environmental solid waste pollution, contrasted levels of human presence and urbanisation

3.3.1.1. Great tit. There was a c. 3-fold increase in anthropogenic materials in great tit nests from nestboxes surrounded by high levels of environmental solid waste pollution (relative to nests from low levels of environmental solid waste pollution; p < 0.005; Fig. 4a, Table S8). Great tit nests also significantly differed in nest composition in contrasted levels of human presence (Fig. 4b, Table S8). Specifically, in great tit nests characterised by a higher human presence in their vicinity, we observed a significant, 1.46-fold increase in moss, a 0.6-fold decrease in animal origin material, and an impressive 6.8-fold increase in anthropogenic nest materials (Fig. 4b, Table S8). Interestingly, variation in urbanisation modelled in terms of impervious surfaces (low vs. high ISA) did not influence great tit nest design.

Fig. 2. Ground environmental solid waste pollution in contrasted levels of human presence and Impervious Surface Area (ISA) (Transect Data). Total number of solid waste items detected in the environment by surveying ground transects and grouped by contrasted levels of human presence (a) and Impervious Surface Area (b); N = 100 nestboxes, corresponding to 300 ground transects. Low (mean ± se, 0.29 ± 0.03) and high (2.18 ± 0.23) levels of human presence each included 50 nestboxes. Low (mean ± se, 1.03 ± 0.2) and high (mean ± se, 24.7 ± 2.25) levels of ISA each included 50 nestboxes. Chi-squared tests of independence (χ2) were run for each solid waste category ("Paper", "Plastic", "Glass", "Metal", "Cloth", "Rubber" or "Other") to compare contrasted levels (reported as "Low" versus "High") of human presence and urbanisation (ISA). Groups were characterised by the same number of sampling locations (i.e., N = 50 nestboxes per group). Only significant outputs are indicated (p ≤ 0.001***).
3.3.1.2. Blue tit. In blue tits, attributes of the urban space (solid waste pollution, human presence, ISA) did not strongly covary with nest design. We nonetheless recorded a higher contribution of fur in nests with higher ground environmental solid waste pollution (and, correspondingly, a lower contribution of fur in nestboxes surrounded by lower levels of environmental pollution; p = 0.005; Fig. 4), a higher contribution of feathers in areas with higher human presence (though the proportion of animal origin components in the nest was not statistically different), and a higher contribution of compost in more urbanised areas (ISA; Fig. 4, Table S8).
Anthropogenic materials in nest design
In both great tits and blue tits, we report a significant and negative relationship between the weight of anthropogenic materials and the weight of materials of animal origin. In other words, the more anthropogenic materials were found in the nest, the fewer the materials of animal origin such as fur and feathers (Table 1 and S9). In addition, the proportion of anthropogenic nest materials in great tit nests increased with higher values of human presence in the nestbox surroundings (Tables S9 and S10). Interestingly, none of the other environmental parameters retained in the final models (such as rainfall, human presence, species-specific environmental solid waste in great tits, and ISA in blue tits) were associated with the weight of anthropogenic materials in the nest (Table 1, S9 and S10).
Age effects
While female age was retained in the final model inferring variation in anthropogenic nest materials in great tits, the confidence intervals for this variable overlapped with zero (Table S11). Our results thus did not support any association between female age and the amount of anthropogenic nest materials in the nest, whether modelled as a proportion or as a weight (Tables S12 and S13).

Great tit

In great tits, there was no relationship between the amount of anthropogenic materials in the nest and the number of hatchlings or fledglings (Table 2, S14-S16).
Blue tit
We detected a significant, negative relationship between the proportion of anthropogenic nest materials and blue tit numbers of hatchlings and fledglings (Tables 2, S14-S16, Fig. 5). Thus, a 10% increase in the proportion of anthropogenic materials in the nest was associated with a decrease in brood size of 2.2 hatchlings (5.04-7.57, 95% CI), which is equivalent to a c. 30% decline in reproductive success (Fig. 5a). Similar estimates were found at the fledging stage, where a 10% increase in the proportion of anthropogenic materials in the nest was associated with a decrease in fledging success of 2.0 fledglings (3.88-6.93, 95% CI), which is equivalent to a 27% decline in reproductive success (Fig. 5b).
Discussion
Our study demonstrates that the strength of environmental solid waste pollution in the city is unequivocally and positively associated with human presence and urbanisation intensity measured as a percentage of impervious surface area (Fig. 2). We also found a positive relationship between human presence and the amount of anthropogenic materials in great tit nests (Fig. 4 and Table S8). Crucially, avian nest design was altered in both urban great tits and blue tits, as we demonstrated a negative relationship between the amount of anthropogenic materials in the nest and those of animal origin, such as fur and feathers (Table 1). Equally importantly, anthropogenic materials (i.e. solid waste pollution) in the nest were found to have a strong, negative relationship with reproductive success in blue tits, but not great tits (Table 2).
Factors associated with environmental solid waste pollution
Environmental solid waste pollution was higher in locations with higher human activity and with higher urbanisation (measured by ISA; Fig. 2). Our study is one of the first empirical studies of solid waste pollution in the urban space, here reported at a fine spatial scale (but see Schuyler et al., 2021). Importantly, our data confirm the findings of Schuyler et al. (2021), who reported that the number of visible humans when measuring solid waste pollution in the environment was positively associated with the number of solid waste items detected. Our findings also imply that measuring human presence with appropriate protocols designed to maximise repeatability whilst reducing time spent on the ground (Corsini et al., 2019) is an insightful tool to infer the distribution of solid waste pollution inland, especially when comparing fragments of the urban mosaic which are distinct in anthropogenic use.

Fig. 3. Composition of great tit and blue tit nests (Nest Data, N = 100 nests, N great tits = 43; N blue tits = 57). PCA visualisation of species-specific nest components in blue tits (red dots reflecting nests and ellipse reflecting empirical approximate 95% confidence region) and great tits (blue dots and ellipse) for PCA axes 1 and 2 (A) and PCA axes 1 and 3 (B).
We also report a significant, positive relationship between urbanisation and environmental solid waste pollution in our system (Fig. 2). Our work provides an additional information layer on environmental solid waste pollution variation in the urban space. Specifically, thanks to the evaluation of 300 ground transects of urban environmental pollution, this study is a valuable reference for small-scale variation in environmental solid waste pollution that is usually difficult to obtain when only working with socioeconomic datasets. Poor predictive properties of socioeconomic datasets may be caused by the fact that, to date, most studies of environmental solid waste pollution use indirect evidence based on highly aggregated global socioeconomic datasets, such as population density or Gross Domestic Product (GDP) (Barnes et al., 2009; Eriksen et al., 2014; Lebreton et al., 2017). By definition, aggregated datasets are of considerably lower spatial resolution, thereby preventing the same level of precision as reported here when exploring environmental solid waste variation in a biological context. Our work also confirms previous work run on a global dataset (Hardesty et al., 2021), where the authors demonstrate the crucial role of infrastructural networks, national wealth, and artificial light at night on increased levels of solid waste pollution inland. The same work highlights that solid waste pollution is heterogeneous at a sub-national, local scale.

Fig. 4. Species-specific nest composition in the context of contrasted (a) environmental solid waste pollution, (b) human presence and (c) Impervious Surface Area (ISA) (Nest Data, N = 100 nests, N great tits = 43; N blue tits = 57). Barplots reporting the proportion of nest components relative to total nest weight in great tits and blue tits in contrasted levels of (a) ground environmental solid waste pollution, (b) human presence, (c) impervious surface area (i.e. urbanisation intensity). Welch two-sample t-test results for great tits and blue tits are reported in Table S8. Significant p-values are indicated in bold (p ≤ 0.005*, p ≤ 0.01**, p ≤ 0.001***). Note that the category "% animal origin" includes the categories "Fur" and "Feathers" combined.
Data collection for this study took place in 2020, at the very start of the COVID-19 pandemic crisis. In Poland, the strictest lockdown measures prohibiting citizens from attending urban green areas occurred in April, and lasted for a 20-day period (Dziennik Ustaw Rzeczypospolitej, 2019, Legislation nr 566 and nr 697). This timeframe largely overlapped with our data collection on environmental solid waste pollution, which started at the end of March and finished by the first week of May 2020 (at the same time, note that our measures of human presence, sampled multiple times and in a repeatable manner, were made prior to 2020; see Corsini et al., 2019). Unsurprisingly, solid waste items were still detected in the environment despite the marked lack of human activity in the field during this short period: this indicates that short-term restrictions on human activities do not neutralise the consequences of human activities accumulated in the environment (such as the presence of solid waste pollutants) in the long run. Importantly, as human presence emerged as a temporally stable and generally repeatable dimension of the urban mosaic (Corsini et al., 2019), our findings further highlight the pervasive role of humans in the distribution of solid waste pollutants across human-dominated landscapes.
This study confirms that the urban habitat offers a unique opportunity to study and understand our impact on nature through the disposal of solid waste. As we are currently facing a global pollution crisis, more research conducted on the distribution and accumulation of solid waste inland, specifically in understudied habitats such as urban areas, is timely and much needed (e.g. Hardesty et al., 2021).
Species-specific nest composition
Nest dissection revealed significant differences in nest composition between great tits and blue tits (Fig. 3); these are in agreement with previous findings on the topic, reporting a greater amount of feathers and anthropogenic material in blue tit than in great tit nests (Britt and Deeming, 2011; Hanmer et al., 2017). Importantly, this study reports an important change in urban nest design pertaining to blue tits and great tits: in both species, there was a negative relationship between the amount of (i) dry grass, moss, feathers and fur, and (ii) anthropogenic materials in the nest (Fig. 3). These findings are further discussed in the context of the availability hypothesis below.
Availability hypothesis
Interestingly, environmental solid waste pollution detected on ground transects in nestbox vicinity was not retained in final models of anthropogenic materials variation in the nest (Table 1). This suggests that tits selectively pick anthropogenic materials for nest building, as only some solid waste categories found in the environment were incorporated by tits into nests. In both tit species, the anthropogenic materials most commonly found in nests included cloth insulation materials, cloth threads, and plastic strings. It is possible that tits selectively choose these materials for their function (e.g. insulation, structure; Reynolds et al., 2019). Indeed, past work suggests that birds do not pick nesting material randomly (Bailey et al., 2014; Briggs and Mainwaring, 2019). In an experiment where artificially dyed wool was provided as nest material to four different species of tits (blue tit, great tit, coal tit Periparus ater, and marsh tit Poecile palustris), some great tit individuals flew considerably larger distances (>200 m) than other members of the population to select this specific material (Surgey et al., 2012). Further work is needed to confirm whether the proactive selection of anthropogenic materials generates an adaptive outcome, or whether it acts as an ecological trap at the reproductive level (Reynolds et al., 2019). Importantly, our study clearly shows a negative relationship between the amount of anthropogenic nest materials and reproductive success in blue tits (see Section 4.3.3 for more details).

Table 1. Negative relationship between the weight of anthropogenic nest materials and those of animal origin in great tits and blue tits (in grams; Nest Data). In both species, the weight of anthropogenic materials in the nest increases with decreasing weights of animal origin components. Model: averaged summary statistics of Linear Mixed Effects Models (LMMs) testing the effect of ISA, human presence and environmental solid waste on the mass of anthropogenic materials (fitted with a Gaussian distribution) in great tit and blue tit nests. All global models included the following predictors: Impervious Surface Area (ISA, % of built-up areas in a 100 m radius), human presence, species-specific environmental solid waste identified on transects (transect solid waste), animal origin components (fur and feather mass in grams), rainfall and temperature. Study sites were fitted as a random effect. Parameters with confidence intervals not overlapping 0 are reported in bold.
A clear signal related to urban nest design that emerged for both species investigated here was the significant, negative association between the amount of anthropogenic nest materials and that of animal origin (fur and feathers). Although we did not measure the availability of fur and feathers in the environment, it is possible that their availability is reduced in places with high levels of human presence, and that they are consequently replaced with anthropogenic nest materials. In nests collected from the forest, fur mostly originated from wild animals, such as wild boars and deer (Z.J., pers. obs.), which are not that abundant in Warsaw city. It is worth noting that fur and feathers are the most insulating materials used in natural nest construction (Deeming et al., 2020), which suggests that, in urban tits, anthropogenic nest materials may play a crucial insulating function. Yet, this requires further investigation, and we encourage further work to refute or confirm these trends, which would also strongly benefit from an experimental approach.
There are also some possible limitations to our results, as we quantified solid waste pollution only on the ground, whilst tits may also collect nesting elements in other places: from buildings, trash bins or backyard clothing lines. Such sources are all undeniably difficult to quantify. Also, we were only able to detect items visible to the human eye from a standing position, in order to comply with the CSIRO protocol guidelines for the Global Leakage Baseline Project. Detection efficiency is also likely to vary depending on the substrate, as it was easier to identify items on sand, asphalt, or bare ground and less so in thick vegetation such as tall grass or nettles.
Age hypothesis
Despite the fact that earlier studies showed a positive association between age and the presence of anthropogenic nest materials in long-lived avian species (Jagiello et al., 2018; Sergio et al., 2011), the age hypothesis was not supported by our findings. There are at least two reasons that may explain these results. First, great tits and blue tits are short-lived species in the wild, with a mean life expectancy of c. 2 years (Payevsky, 2006); thus, it is more likely that the age hypothesis gains greater functional meaning in long-lived species (Jagiello et al., 2018; Sergio et al., 2011) than in shorter-lived ones. Indeed, experience plays a crucial role in collecting solid waste during the nest-building phase (Sergio et al., 2011). Second, such a lack of association between nest composition and age may also be the result of the limited age partitioning possible for great tit and blue tit females. Indeed, age could only be established based on feather moulting characteristics, which only allows two age groups to be distinguished (first-year breeder or older females); this contrasts with the more diverse and continuous age variable used in earlier studies (Jagiello et al., 2018; Sergio et al., 2011).
Adaptive hypothesis: effect of anthropogenic nest materials on reproductive success
Our study revealed a significant, negative relationship between the amount of anthropogenic nest materials and blue tit reproductive success, but the causal relationship behind it requires further investigation. Crucially, this is the first study reporting reduced reproductive success (measured both in terms of number of hatchlings and number of fledged birds) in nests polluted with anthropogenic materials (Fig. 5, Tables 2 and S15). These results also improve our understanding of the use of novel nesting materials of human origin (Reynolds et al., 2019).
Interestingly, nests with a higher contribution of anthropogenic materials were also poorer in terms of feathers and fur (Table 1). Our results suggest that feathers may be replaced by anthropogenic materials, which ultimately results in a negative reproductive outcome in the blue tit, as confirmed at two consecutive offspring developmental stages. From a functional perspective, feathers enhance thermal insulation (Windsor et al., 2013), protect against microbial infections (Ruiz-Castellano et al., 2016, 2019), and act as an ectoparasite repellent (Deeming and Reynolds, 2016). Interestingly, Järvinen and Brommer (2020) reported that blue tit nestlings raised in feather-rich nests had a higher chance of recruitment to the breeding population. The authors suggest that in blue tits, it is more likely that feathers in the nest are used as an extended phenotype and signal of female quality. Thus, the process we report here, where blue tit nestlings are less likely to survive in nests rich in anthropogenic material (and poor in feathers; Fig. 5), is in agreement with the reduced blue tit reproductive success reported by Järvinen and Brommer (2020). At the same time, this pattern also highlights the anthropogenic interference driven by cities that is acting on the extended phenotype of birds.
Consequently, valuable further work is required to explore the relationship between parental quality, expressed phenotypically in terms of the presence of anthropogenic materials in the nest, and fitness. Specifically, this study also highlights the need for an experimental framework to improve our understanding of these associations. More generally, these findings confirm considerable scope for exciting work on the impact of solid waste pollution on extended phenotypes and sexual selection in the urban space.
Anthropogenic materials used by great tits and blue tits in their nests largely consisted of pieces of cloth, straps or plastic strings (Table S2), and are irrefutable evidence of living in the Anthropocene. In all the nests examined here (n = 100), we did not record any nestling unequivocally harmed by anthropogenic nest materials, except for one case where a nestling had its wing entangled in human hair, and was gently detangled (Z.J., pers. obs.). In contrast to earlier reports (Suárez-Rodríguez and Macías Garcia, 2014), the inclusion of cigarette butts was limited to only 10 nests (that is, 10% of all investigated nests), and their amount, relative to other anthropogenic nest materials, was marginal. Therefore, we do not expect any considerable toxic effects from the c. 400 harmful substances present in cigarette butts (Suárez-Rodríguez and Macías Garcia, 2014) to have interacted with and/or enhanced the reported reduction in reproductive success in our blue tit population (Fig. 5). Future studies are needed to gain a finer understanding of the possible functions of anthropogenic nest materials in nest construction and nest microclimate. Moreover, establishing causal links between anthropogenic nest materials, parental quality and avian fitness in the urban space will further allow assessing the possible role of these materials in generating ecological and/or evolutionary traps in the long run.
Conclusions
In this study, we showed a pervasive, negative relationship between environmental solid waste pollution and reproductive success. Specifically, environmental solid waste pollution driven by human presence emerged as a key factor associated with anthropogenic materials in avian nests. Thus, in territories characterised by higher levels of human presence (undeniably the proximate cause for increased environmental solid waste pollution), great tits and blue tits included more anthropogenic materials in their nests, and fewer materials of animal origin (i.e. fur and feathers). Finally, anthropogenic nest materials were negatively associated with reproductive success in blue tits, but not in great tits, which highlights species-specific sensitivity to environmental pollution (Fig. 6). Further work targeting the functional role and fitness consequences of anthropogenic materials in avian nest design across a broader range of taxa and environments is crucial to determine the role of macroplastics in urban wildlife biology and to better inform waste policy management in light of the acute global plastic pollution crisis we are currently facing.
Declaration of competing interest
The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Fig. 1. Study sites and environmental solid waste categories (Transect Data). Study site locations in Warsaw (Poland). Red dots correspond to study site locations, which include: a suburban village (A), a natural forest (B), an urban forest (C), residential areas II (D) and I (G), an urban woodland (E), an office area (F), an urban park (H). The black dot stands for Warsaw city centre. Each pie chart shows solid waste categories within each study site (in %). While study sites vary in terms of solid waste composition (as reported on the figure), the amount of solid waste items also varied between study sites; details of this variation are reported in Table S5.
Fig. 5. Blue tit reproductive success decreases with increasing proportion of anthropogenic material in the nest (Nest Data). Predicted effect of an increasing proportion of anthropogenic nest materials on the number of hatchlings (a) and fledglings (b) in blue tits.
Fig. 6. Summary. An increasing number of humans are moving into cities, driving urban areas to expand in size. We demonstrate that human presence and urbanisation (modelled as impervious surfaces), assessed at a fine spatial scale, significantly covary with solid waste pollution in the environment. Moreover, we demonstrate a positive relationship between human-driven environmental solid waste pollution and the contribution of anthropogenic nest materials in great tit nest design. Importantly, we also report a clear, negative relationship between anthropogenic nest materials (nest design) and blue tit reproductive success. Green and red arrows report significant positive and negative relationships demonstrated in this study, respectively. The black arrow reflects the positive relationship between human presence and urbanisation as reported in Szulkin et al., 2020.
Table 2. Relationship between anthropogenic nest materials and fitness (Nest Data). Blue tit reproductive success decreases with an increasing proportion of anthropogenic nest materials in the nest. Model: averaged summary statistics of Linear Mixed Effects Models (LMMs) testing the effect of species-specific environmental solid waste on avian fitness. All global models included the following predictors: Impervious Surface Area (ISA, % of built-up areas in a 100 m radius), human presence (Human Presence), proportion of anthropogenic materials in the nest, species-specific environmental solid waste, animal origin components (proportion of fur and feathers), rainfall and temperature. Study sites were fitted as a random effect. Parameters with confidence intervals not overlapping 0 are reported in bold.
Connection between continuous and digital n-manifolds and the Poincare conjecture
We introduce LCL covers of closed n-dimensional manifolds by n-dimensional disks and study their properties. We show that any LCL cover of an n-dimensional sphere can be converted to the minimal LCL cover, which consists of 2n+2 disks. We prove that an LCL collection of n-disks is a cover of a continuous n-sphere if and only if the intersection graph of this collection is a digital n-sphere. Using a link between LCL covers of closed continuous n-manifolds and digital n-manifolds, we find conditions where a continuous closed three-dimensional manifold is the three-dimensional sphere. We discuss a connection between the classification problems for closed continuous three-dimensional manifolds and digital three-manifolds.
Introduction
A digital approach to geometry and topology plays an important role in analyzing n-dimensional digitized images arising in computer graphics as well as in many areas of science including neuroscience, medical imaging, industrial inspection, geoscience and fluid dynamics. Concepts and results of the digital approach are used to specify and justify some important low-level image processing algorithms, including algorithms for thinning, boundary extraction, object counting, and contour filling [1-3, 5-9, 13, 17, 22, 23].

We use an approach in which a digital n-surface (digital normal n-dimensional space) is considered as a simple undirected graph of a specific structure. Properties of n-surfaces were studied in [5-9]. Paper [8] analyzes the local structure of the digital space Z^n. It is shown that Z^n is an n-surface for all n>0. In paper [9], it is proven that if A and B are n-surfaces and A⊆B, then A=B. This paper presents conditions which guarantee that every digitization process preserves certain topological and geometrical properties of continuous closed two-surfaces. In papers [5-7], X. Daragon, M. Couprie and G. Bertrand introduce and study the notion of frontier order, which allows defining the frontier of any object in an n-dimensional space. In particular, they investigate a link between abstract simplicial complexes, partial orders and n-surfaces. In the framework of abstract simplicial complexes, they show that n-dimensional combinatorial manifolds are n-surfaces, that n-surfaces are n-dimensional pseudomanifolds, and that the frontier order of an object is the union of disjoint (n-1)-surfaces if the order to which the object belongs is an n-surface.

A digital n-manifold, which we regard in this paper, is a special case of a digital n-surface. It seems desirable to consider properties of digital n-manifolds in a fashion that more closely parallels the classical approach of algebraic topology, in order to find out, on the one hand, how far the fundamental distinction between continuous and digital spaces due to different cardinality restricts a direct modification of continuous tools to digital models, and, on the other hand, how effectively the digital approach can be applied to solve classical topology problems. As an example, we consider the Poincaré conjecture about the characterization of the 3-dimensional sphere amongst 3-dimensional manifolds. A review of some of the major results obtained in attempts to prove the Poincaré conjecture may be found in [15]. Recently, three groups have presented papers that claim to complete the proof of the Poincaré conjecture. The results of these papers are based upon earlier papers by G. Perelman [19-21]. In May 2006, B. Kleiner and J. Lott posted a paper [16] on the Arxiv; they claim to fill in the details of Perelman's proof of the Geometrization conjecture. In June 2006, H-D. Cao and X-P. Zhu published a paper [4] claiming that they give a complete proof of the Poincaré and the geometrization conjectures. In July 2006, J. Morgan and G. Tian posted a paper [18] on the Arxiv in which they claim to provide a detailed proof of the Poincaré Conjecture.

Our approach to the characterization of the 3-dimensional sphere amongst 3-dimensional manifolds is different from previous attempts. It is based on the connection between LCL covers of closed n-manifolds and digital n-manifolds. In section 2, we describe computer experiments which provide a reasonable background for introducing digital spaces as simple graphs.
Then we recall basic definitions and results related to digital n-dimensional spaces (n-spaces) (section 3). In section 4, we study properties of digital n-disks and n-spheres, which are similar to properties of their continuous counterparts. We introduce disk transformations of digital n-manifolds, which retain their basic features. It is proven that a digital n-sphere converts into the minimal one by disk transformations and that a digital n-sphere without a point is homotopic to a point. In section 5, we study properties of compressed digital n-manifolds. In section 6, we introduce LCL collections of n-dimensional continuous disks. We consider a decomposition of a closed continuous n-manifold into an LCL union of n-disks and study properties of the cover. We find conditions under which an LCL collection is a cover of a continuous n-dimensional sphere. We prove that a given continuous closed n-manifold is an n-dimensional sphere if any LCL cover of this manifold can be converted to the minimal one, consisting of 2n+2 elements, by the merging of n-disks. The results of sections 4, 5 and 6 are based on results obtained in [10] and [11]. We find a link between intersection graphs of LCL covers of continuous closed n-manifolds and digital n-manifolds (section 7). Sections 8 and 9 apply the obtained results to find conditions, including the Poincaré conjecture, for the characterization of continuous 2- and 3-dimensional spheres amongst closed continuous 2- and 3-dimensional manifolds. Finally, we discuss ways which can help in treating the classification problem for closed 3-dimensional manifolds. Throughout the paper, by a continuous n-manifold we mean a closed (compact and without boundary) path-connected n-manifold, and digital spaces all have a finite number of points.
Computer experiments as the basis for digital spaces.
An important feature of this approach to the structure of digital spaces is that it is based on computer experiments whose results can be applied to computer graphics and animations. The following surprising fact is observed in computer experiments modeling digital spaces.

Proposition 3.3 [9]. If G^n and H^n are normal n-dimensional spaces and H^n is a subspace of G^n, then H^n = G^n.

Proposition 3.4 [14].
• The cone v⊕G of any space G is a contractible space.
• Let G be a contractible graph and H be its contractible subgraph. Then G can be converted into H by contractible deleting of points in any suitable order.
• Let G be a contractible graph. Then for any point v belonging to G, the subgraphs O(v) and G-v are homotopic, and G-v can be converted into O(v) by contractible deleting of points in any suitable order.

Proposition 3.5. Let G={v_1,v_2,...,v_t} be a graph and F={G_1,G_2,...,G_n} a family of its non-empty subgraphs G_k={v_k(i)}, k=1,2,...,n, with the following properties.
• Family F is a cover of G: G=G_1∪G_2∪...∪G_n.
4. Properties of digital n-spheres and n-disks.
Since in this and the following sections we consider only digital spaces, we will use the word space for digital space if no confusion can result. In order to make this section self-contained, we will use the necessary information from paper [10].
A 0-ball and a 0-disk are a single point v; a 0-sphere S^0(a,b) is a disconnected graph with just two points a and b. A 1-sphere S^1 is a connected graph such that for each point v of S^1, the rim O(v) of v is a 0-sphere S^0 (fig. 4.1). A 1-disk D^1 is a connected graph S^1-v obtained from a 1-sphere S^1 by the deleting of a point v.
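The 1-sphere condition is easy to check mechanically. The following sketch, written in R with the igraph package purely for illustration, builds the four-point cycle (which lemma 4.1 below identifies as the minimal 1-sphere) and verifies that the rim of every point is a 0-sphere, i.e. two non-adjacent points; the vertex names are arbitrary.

library(igraph)

# S^0(a,b) joined with S^0(c,d): every point of one pair is adjacent
# to every point of the other pair, giving a 4-cycle
s1 <- graph_from_literal(a - c, a - d, b - c, b - d)

for (v in V(s1)$name) {
  rim <- induced_subgraph(s1, neighbors(s1, v))
  stopifnot(vcount(rim) == 2, ecount(rim) == 0)  # rim is a 0-sphere
}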
Lemma 4.1. The minimal 1-sphere S^1_min consists of four points, S^1_min=S^0(a,b)⊕S^0(c,d). The minimal 1-disk D^1_min consists of three points, D^1_min=a⊕S^0(b,c). Any 0-sphere S^0 belonging to a 1-sphere S^1 divides S^1 into two separated parts: if S^1 is a 1-sphere and S^0 is a 0-sphere belonging to S^1, then S^1 is the union A∪S^0∪B, where the subspaces A and B are separated and the subspaces A∪S^0 and B∪S^0 are 1-disks. The properties are checked directly.

To define n-disks and n-spheres, we will use a recursive definition. Suppose that we have defined k-disks and k-spheres for dimensions 1≤k≤n-1. Obviously, the boundary ∂N of N is an (n-1)-sphere.

Remark 4.1. Any n-manifold is a normal n-dimensional space, but a normal n-dimensional space is not necessarily an n-manifold. For example, the join A=S^0(a,b)⊕P^2 of a 0-sphere S^0 and a 2-dimensional projective plane P^2 is a normal 3-dimensional space, but A is not a 3-manifold because the rims of the points a and b are not 2-spheres.

Definition 4.2. ( a ) A connected space D is called an n-disk if it has the following properties:
• D is a contractible graph (that is, D can be converted to a point by contractible transformations).
• D can be represented as the union D=∂D∪IntD of two non-empty subspaces such that if a point v belongs to ∂D, then the rim O(v) of v is an (n-1)-disk, and if a point v belongs to IntD, then O(v) is an (n-1)-sphere.
• The boundary ∂D of D is an (n-1)-sphere.
( b ) Let D and C be n-disks such that ∂D and ∂C are isomorphic, ∂D=∂C. The space D#C obtained by identifying each point in ∂D with its counterpart in ∂C is called an n-sphere (fig. 4.2).
Obviously, S is the connected sum of D and C over ∂D. The join S^n_min=S^0_1⊕S^0_2⊕…⊕S^0_(n+1) of (n+1) separated copies of the 0-sphere S^0 is called the minimal n-sphere [9]. The join of the minimal (n-1)-sphere S^(n-1)_min and a point v not belonging to S^(n-1)_min is called the minimal n-disk, D_min=v⊕S^(n-1)_min (fig. 4.3).

Remark 4.3. As follows from definitions 4.2 and 4.3, the minimal n-sphere is an n-sphere and an n-manifold.

Lemma 4.2. Let S be an n-sphere and v be a point of S. Then ( a ) S-v is an n-manifold with boundary, and ( b ) S-v is an n-disk.

Proof. To prove ( b ), note that S-v is an n-manifold with boundary by ( a ). Therefore, we have to prove that S-v is contractible. Let us use a double induction. For n=1, the lemma is plainly true. Assume that the lemma is valid for dimensions n<k. Suppose that n=k. Note that for S=S^n_min, the lemma is obvious. Assume that the lemma is valid for S with a number of points |S|=r≤t. Let r=t+1. By definition 4.2, S is the connected sum D#C of n-disks D and C over ∂D. With no loss of generality, suppose that the point v belongs to the interior of D, v∈IntD, and |IntC|>1. Suppose that a point x is separated from S; connect the point x with every point belonging to C and delete all points belonging to IntC. Obviously, this is a sequence {g_1,…,g_m} of contractible transformations and the obtained space N=S+x-IntC is homotopic to S. By construction, N is a connected sum, N=E#D, where E=x⊕∂C, |N|<t+1. Therefore, N is an n-sphere by definition 4.2. Hence, N-v=F is an n-disk by the assumption, i.e., F is a contractible space. Obviously, S-v can be converted to F=N-v by the same sequence {g_1,…,g_m} of contractible transformations. Therefore, S-v is homotopic to F=N-v. Since F=N-v is contractible, then S-v is contractible.

Figure 4.4 shows a 2-sphere S and a 2-dimensional projective plane P: S-v is a 2-disk, which is homotopic to a point; P-v is not a 2-disk, it is homotopic to a 1-sphere.
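The minimal n-sphere can likewise be generated and checked by machine. The sketch below (R/igraph, for illustration only) builds S^n_min as 2n+2 points in which each point is adjacent to all others except its antipode, and confirms for n=2 that the rim of a point is the minimal 1-sphere.

library(igraph)

# Minimal n-sphere: join of n+1 separated copies of S^0
minimal_sphere <- function(n) {
  m <- 2 * n + 2
  adj <- matrix(1, m, m) - diag(m)          # complete graph on 2n+2 points...
  for (k in seq_len(n + 1))                 # ...minus each antipodal pair
    adj[2 * k - 1, 2 * k] <- adj[2 * k, 2 * k - 1] <- 0
  graph_from_adjacency_matrix(adj, mode = "undirected")
}

s2 <- minimal_sphere(2)                     # minimal 2-sphere: 6 points
rim <- induced_subgraph(s2, neighbors(s2, 1))
stopifnot(vcount(rim) == 4, ecount(rim) == 4)  # rim is the minimal 1-sphere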
The following corollary is an easy consequence of lemma 4.2. Suppose that the rim O(v) of a point v of M is the minimal (n-1)-sphere with points {v 1 ,v 2 ,…v 2n } where point v 2k+1 is adjacent to all points except v 2k+2 , k=0,1,…n-1. Consider the rim of point v 1 . O(v 1 ) contains points {v 3 ,v 4 ,…v 2n ,v,u} where a point u is adjacent to all points except v. Consider the rim of point v 3 . O(v 3 ) contains points {v 1 ,v 2 ,v 5 ,v 6 ,…v 2n ,v,u}. Since points u and v are non-adjacent, then u must be adjacent to v 2 . Therefore, subspace G of M consisting of points {v 1 ,v 2 ,…v 2n ,v,u} is the minimal n-dimensional sphere. Since G⊆M, then G=M according to proposition 3.3. We say that points v 1 ,v 2 ,…v k can be merged (into one point) if there is an n-disk D belonging to N such that v 1 ,v 2 ,…v k belong to IntD. In fact, both operations are the replacings of n-disks by n-disks. The replacing of ndisks in an n-manifold N is an application of contractible transformations [14] of digital spaces to n-manifolds. d-Transformations are represented by a sequence of contractible transformations of digital spaces that retain such properties of digital spaces as the Euler characteristic and the homology groups. Lemma 4.5. Any d-transformation is a sequence of contractible transformations. Proof, To prove that the d-transformation (D,v) is a sequence of contractible transformations, suppose that N is an n-manifold, D is an n-disk belonging to N and a point v does not belong to N. Since D is a contractible space, then point v can be glued to N in such a manner that O(v)=D according to definition 3.2. This is a contractible transformation converting N into N+v. Suppose that a point u belongs to IntD. Denote by O(u) the rim of u in N. Then the rim A(u) of u in N+v is the join v⊕O(u) of v and O(u), A(u)=v⊕O(u). According to proposition 3.4, A(u) is a contractible space. Therefore, point u can be deleted from N+v. In the same way, any point belonging to IntD can be deleted from N+v. Hence, the obtained space M=N+v-IntD is homotopic to N. In the same way, it is easy to prove that the d-transformation (v,D) is too a sequence of contractible transformations. Lemma 4.6. An n-sphere S is equivalent to the minimal n-sphere S min . Proof. Suppose that S=D#C is the connected sum of n-disks D and C and points v and u are separated and not belonging to S. Replace IntD and IntC by points v and u according to definition 4.4. Then S converts into the n-sphere S 0 (v,u)⊕∂D. Since ∂D is an (n-1)sphere, then for the same reason as above, ∂D can be converted to S 0 (a,b)⊕∂E where ∂E is an (n-2)-sphere. Hence, S can be turned in S 0 (v,u)⊕S 0 (a,b)⊕∂E. Repeat the above transformations until we obtain the minimal n-sphere. If a point y does not belong to D, then its rim does not change and remains an (n-1)sphere. The rim of point v is an (n-1)-sphere ∂D. Therefore, M is an n-manifold by definition 4.1. Suppose that the boundary ∂D of an n-disk D is isomorphic to the rim O(v) of some point v of N. In the same way as above, we can prove that M=N-v+IntD obtained by identifying any point in ∂D with its counterpart in O(v) and deleting point v is an nmanifold.
Lemma 4.8. d-Transformations convert an n-sphere into an n-sphere. Proof. Let N=gS be the space obtained from an n-sphere S by an d-transformation g. We have to prove that N can be represented as the connected sum of n-disks A and B over ∂A. Suppose that S is an n-sphere and v is a point belonging to S. Let D be an n-disk separated from S such that ∂D is isomorphic to O(v). Suppose that N=gS=S-v+IntD is the space obtained by identifying any point in ∂D with its counterpart in O(v) and deleting point v. Clearly, N is the connected sum of A=S-v and D over ∂D. Since A is an n-disk by lemma 4.2, then N=A#D is an n-sphere.
Suppose that S is an n-sphere and D be an n-disk belonging to S and a point v be separated from S. Let the space N=S+v-IntD is obtained by joining point v with any point in ∂D and deleting from S points belonging to IntD according to definition 4.4. Then N is the connected sum Of A=S-IntD and B=v⊕∂D. Since A is an n-disk by lemma 4.3 and B is an n-disk by definition 4.2, then N=A#B is an n-sphere by definition 4.2.
The following theorem summarizes the previous results. We say that C is embedded in D if there is an n-disk D such that IntC⊆IntD (and C⊆D). If ∂C∩IntD≠∅, then after the merging of all interior points of D into a point v, D is collapsed into a set of points which is not an m-disk. If ∂C⊆∂D, then after the merging of all interior points of D into a point v, C converts into an m-disk v⊕∂C with only one interior point v.
An m-sphere S is not necessarily the boundary of some (m+1)-disk D. For example, a 1-sphere S in a 3-manifold M may be a knot for which there exists no 2-disk D in M such that S=∂D.
There is an open problem: suppose that an n-manifold M is homotopic to an n-manifold N (M can be turned into N by contractible transformations). Does it follow that M and N are equivalent? This problem is linked with a similar problem arising in the study of LCL covers of closed continuous n-manifolds.
Compressed spaces.
Although in this section we deal with n-manifolds, most of the results can be applied to n-spaces in which the rim of a point is not necessarily an (n-1)-sphere. As we have already mentioned, the main difference between digital and continuous n-manifolds is that a digital n-manifold has a finite or countable number of points, while a continuous n-manifold has the cardinality of the continuum. If a digital n-manifold has a finite number of points, it can be reduced by d-transformations, while this is impossible for continuous spaces. This is essential for our further study, because n-manifolds with a small number of points are easier to analyze. In the rest of the paper, we consider n-manifolds with n>0.

Proof. Note first that the ball U(v) of any point v of N is an n-disk. Take an n-disk D belonging to N and different from the ball of any point of N. Then D contains more than one interior point. Introduce connections between a point v belonging to IntD and all points belonging to ∂D, and delete all points belonging to IntD except for v. Then N moves to an n-manifold M equivalent to N. Repeat this procedure until every n-disk is the ball of some point. If N contains a finite number of points, the number of replacings is finite. This completes the proof.

In the following lemma, we prove some properties of compressed n-manifolds which will be used further.

Suppose that there is no connection between points belonging to IntA and IntB. Then A#B is an (n-1)-sphere and C is an n-disk by definition 4.4. This is a contradiction, because C is not an n-disk. Therefore, there are adjacent points w∈IntA and a∈IntB. Hence, {v,u,a,w} is a 1-sphere.
To prove ( c ), suppose that u and v are non-adjacent points such that the intersection O(u)∩O(v) of their rims is an (n-1)-disk. Then the subspace U(v)+u containing the point u and all points belonging to U(v) is an n-disk. From this contradiction, we conclude that O(u)∩O(v) is not an (n-1)-disk.

Let N be a compressed n-manifold. If N is an n-sphere, then N is necessarily the minimal n-sphere S_min.

Proof.
Since N is an n-sphere, then N-v is an n-disk, where v is a point belonging to N, according to lemma 4.2. Since N is compressed, then N-v is the rim of some point u belonging to N, N-v=O(u). Therefore, O(v)=O(u) and N=S^0(v,u)⊕O(v) is the join of a 0-sphere S^0(v,u) and an (n-1)-sphere O(v). Take any point v_1 belonging to O(v). For the same reason as above, the joint rim of the adjacent points v and v_1 is an (n-2)-sphere. Acting in the same way, we finally obtain that N=S^0(v_n,u_n)⊕S^0(v_(n-1),u_(n-1))⊕…⊕S^0(v_1,u_1)⊕S^0(v,u).
is not an n-disk, then by lemma 5.1, there is a 1-sphere S(4) containing the points v and u and consisting of four points. Therefore, S(4) is necessarily {v,u,a,b}, where the points a and b are adjacent. Hence, A=U(v)∪U(u)=S^0(a,u)⊕O(u)=S^0(a,u)⊕S^0(b,v)⊕S is the minimal n-sphere. Since A⊆N, then A=N by proposition 3.3. Hence, N is the minimal n-sphere. Assertion ( b ) can be proven similarly.
The process of compression of an n-manifold by a sequence of d-transformations (which can be applied in various orders) can give a family of compressed spaces G_1,G_2,…,G_k which are not isomorphic to each other. However, if N is an n-sphere, then the process of compression always converts N to the minimal n-sphere.

6. Locally centered and lump covers of continuous closed n-manifolds and their properties.
In order to make this section self-contained, we will use the necessary information from paper [11]. Suppose that a map h is a homeomorphism from R^n to itself. If a set D is homeomorphic to a closed n-dimensional ball B^n in R^n, then D is called a closed n-disk. If a set S is homeomorphic to an n-dimensional sphere S^n in R^(n+1), then S is called an n-sphere. We denote the interior and the boundary of an n-disk D by IntD and ∂D respectively.
Since in this paper we use only closed n-disks, we say n-disk to abbreviate closed n-disk if no confusion can result.
Recall that collections of sets W={D_1,D_2,…} and U={C_1,C_2,…} are isomorphic (homotopic) if the intersection graphs G(W) and G(U) of W and U are isomorphic (homotopic).
Facts about n-disks and spheres that we will need in this paper are stated below.
Fact 6.1. If D_1 and D_2 are n-disks such that D_1∩D_2=D^(n-1) is an (n-1)-disk, then D_1∪D_2=B is an n-disk.

( c ) Let V={H_1,H_2,…,H_r} be a collection of n-disks such that every H_k=D_k1∪D_k2∪… is the union of n-disks belonging to W, and if D_i⊆H_k, then D_i⊄H_p, p≠k. Then V is the lump collection of n-disks.
Corollary 6.1. The union of any number of n-disks belonging to W is an n-disk, D_i1∪D_i2∪…∪D_ip=D.
Helly's theorem [24] states that if a collection of convex sets in E^n has the property that every (n+1) members of the collection have nonempty intersection, then every finite subcollection of those convex sets has nonempty intersection. In application to digital modeling, this concept was studied in a number of works. In paper [23], a collection of convex n-polytopes possessing this property was called strongly normal (SN). One of the results was that if SN holds for every n+1 or fewer n-polytopes in a set of n-polytopes in R^n, then the entire set of n-polytopes is SN. In paper [9], a collection of sets with a similar property was called continuous. It was shown that the continuity of covers is necessary for digital models to be homotopic. In classical topology, a collection of sets W is centered if every finite subcollection of W has a point in common. This definition implies an infinite collection of sets. In this paper, we use only finite collections of sets. Since the word "normal" has already been used in the definition of a normal digital space [9], we define a locally centered collection (LCL collection) as follows.
Obviously, a lump collection is locally centered. The following proposition is an easy consequence of definition 6.2.

Proposition 6.2. Let a collection W={D_1,D_2,…,D_s} of n-disks be locally centered. Then:
• Any subcollection of W is locally centered.
We have already mentioned that in this paper we consider continuous n-manifolds which can be covered by a finite LCL collection of n-disks. By a continuous n-manifold, we mean a continuous closed (compact and without boundary) path-connected n-manifold. Since for a compact space each of its open covers has a finite subcover, then for a closed continuous n-manifold there is an LCL cover of it, i.e., a closed continuous n-manifold can be decomposed as the union of n-disks belonging to its LCL cover.

Proposition 6.4. Let an LCL collection W={D_1,D_2,…,D_t} of n-disks be a cover of a closed n-manifold M. ( a ) Suppose V={D_2,D_3,…,D_p} is the collection of all n-disks intersecting D_1 and U={E_2,E_3,…,E_p} is an LCL collection of (n-1)-disks such that E_i=D_1∩D_i, i=2,3,…,p. Then U is an LCL cover of the boundary ∂D_1 of D_1, the collections U and V are isomorphic, and C=D_1∪D_2∪…∪D_p is an n-disk.

For any n>0, there is an LCL cover W={D_1,D_2,…,D_t} of an n-sphere S by n-disks such that t=2n+2. Let us give an example of such a cover. Let U^(n+1) be an (n+1)-dimensional cube in the Euclidean (n+1)-dimensional space R^(n+1). Obviously, the n-dimensional faces F^n_k, k=1,2,…,2n+2, of U^(n+1) form an LCL collection W={F^n_1,F^n_2,…,F^n_(2n+2)} of n-dimensional disks. W is the minimal cover of an n-dimensional sphere, which is the boundary ∂U^(n+1) of U^(n+1).
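This face cover is small enough to enumerate directly. In the sketch below (R/igraph, as an illustration), each facet of the (n+1)-cube is encoded as a pair (axis, side); two facets intersect exactly when they are not the opposite pair, so the intersection graph of the cover coincides with the graph of the minimal digital n-sphere of section 4.

library(igraph)

# Intersection graph of the 2n+2 facets of the (n+1)-cube [0,1]^(n+1)
facet_graph <- function(n) {
  facets <- expand.grid(axis = seq_len(n + 1), side = c(0, 1))
  m <- nrow(facets)                                  # m = 2n + 2
  adj <- matrix(0, m, m)
  for (a in seq_len(m - 1)) for (b in (a + 1):m) {
    opposite <- facets$axis[a] == facets$axis[b]     # same axis, other side
    adj[a, b] <- adj[b, a] <- as.integer(!opposite)
  }
  graph_from_adjacency_matrix(adj, mode = "undirected")
}

g <- facet_graph(2)   # n = 2: the octahedron graph, the minimal digital 2-sphere
stopifnot(vcount(g) == 6, ecount(g) == 12)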
LCL covers of 1- and 2-spheres are depicted in fig. 6.3 and 6.4. The minimal LCL cover of a 1-sphere consists of four 1-disks; the minimal LCL cover of a 2-sphere consists of six 2-disks. Figure 6.5 shows examples of an LCL tiling of a 2-plane and an LCL tessellation of Euclidean 3-space. An LCL cover of a 2-dimensional torus is depicted in fig. 6.6.

Proposition 6.5. Suppose that a collection W={D_0,D_1,D_2,…,D_t} of n-disks is an LCL cover of an n-sphere S and V={D_1,D_2,…,D_p} is the subcollection of all n-disks intersecting D_0. Then the collection U={D_0,D_1,D_2,…,D_p,C}, where C=D_(p+1)∪D_(p+2)∪…∪D_t, is an LCL cover of S by n-disks such that if D_i(1)∩D_i(2)∩…∩D_i(m)≠∅, D_i(k)∈V, k=1,2,…,m, then C∩D_i(1)∩D_i(2)∩…∩D_i(m)≠∅. (Proof: see appendix.) Notice that the intersection graphs G(A) and G(B) of the collections A={D_0,D_1,D_2,…,D_p} and B={D_1,D_2,…,D_p,C} are isomorphic.

Definition 6.4. Suppose that a collection W={D_0,D_1,D_2,…,D_t} of n-disks is an LCL cover of an n-sphere S and V={D_1,D_2,…,D_p} is the collection of all n-disks intersecting D_0. Then the collection U={D_1,D_2,…,D_t} is called a segmented n-disk, the collection {D_1,D_2,…,D_p} is called the boundary ∂U of U, and the collection {D_(p+1),D_(p+2),…,D_t} is called the interior IntU of U (fig. 6.7).

Proposition 6.6. Suppose that a collection U={D_1,D_2,…,D_t} of n-disks is a segmented n-disk, the collection ∂U={D_1,D_2,…,D_p} is the boundary of U and the collection IntU={D_(p+1),D_(p+2),…,D_t} is the interior of U. Then: ( a ) A=D_1∪D_2∪…∪D_t is an n-disk. ( b ) If D_k∈IntU, then ∂A∩D_k=∅. ( c ) If D_k∈∂U, then ∂A∩D_k=E_k is an (n-1)-disk and the collection U={E_1,E_2,…,E_p} is an LCL cover of the boundary ∂A of A. ( d ) The collection V={D_1,D_2,…,D_p,C}, where C=D_(p+1)∪D_(p+2)∪…∪D_t, is a segmented n-disk with only one interior n-disk C.

Proof. Suppose that the collection W={D_0,D_1,D_2,…,D_t} is an LCL cover of an n-sphere S according to definition 6.4. To prove ( a ), notice that S=D_0∪D_1∪…∪D_t. Then assertion ( a ) follows from fact 6.3. To prove ( b ) and ( c ), notice that ∂A=∂D_0. According to proposition 6.4, if D_k∈IntU, then D_0∩D_k=∅. If D_k∈∂U, then ∂A∩D_k=D_0∩D_k=E_k is an (n-1)-disk and the collection U={E_1,E_2,…,E_p} is an LCL cover of the boundary ∂A=∂D_0. Assertion ( d ) follows from proposition 6.5.

Definition 6.5. Suppose that a collection U={D_1,D_2,…,D_t} of n-disks is a segmented n-disk, the collection ∂U={D_1,D_2,…,D_p} is the boundary of U and the collection IntU={D_(p+1),D_(p+2),…,D_t} is the interior of U. Denote by V={D_1,D_2,…,D_p,C} the LCL collection of n-disks, where C=D_(p+1)∪D_(p+2)∪…∪D_t. The replacing of the collection U by the collection V is called the merging of the interior of U, or the merging of U. The replacing of the collection V by the collection U is called the splitting of the interior of V, or the splitting of V. In this paper, this merging and splitting are called disk transformations or d-transformations of covers. Obviously, A=D_1∪D_2∪…∪D_t=D_1∪D_2∪…∪D_p∪C is an n-disk.
The merging and splitting of segmented n-disks belonging to LCL covers of a closed n-manifold M change the number of elements in LCL covers of M. By merging segmented n-disks, we can reduce the number of elements of a cover to the level where any segmented n-disk contains only one interior element. It is important that d-transformations do not change M itself; they change only the number of elements in an LCL cover of M.

Definition 6.6. Suppose that W={D0,D1,D2,…Dt} and U={C0,C1,C2,…Cs} are LCL covers of a closed n-manifold M. W and U are called equivalent if W can be converted into U by d-transformations.
Theorem 6.1. (a) Let an LCL collection W={D0,D1,D2,…Ds} of n-disks be a cover of an n-sphere S. Then any sequence of mergings necessarily converts W into the minimal LCL cover of S with 2n+2 elements. (b) Let an LCL collection W={D0,D1,D2,…Ds} of n-disks be a cover of a closed n-manifold M. Then any sequence of d-transformations of W converts W into another LCL cover of M.

Proof. (a) Suppose that the collection W={D0,D1,D2,…Dt} of n-disks is an LCL cover of an n-sphere S and V={D1,D2,…Dp} is the collection of n-disks intersecting D0. Then the collection U={D1,D2,…Dt} is a segmented n-disk with boundary ∂U={D1,D2,…Dp} and interior IntU={Dp+1,Dp+2,…Dt}. Let us convert U to V1={D1,D2,…Dp,C} by replacing the collection of interior disks by C=Dp+1∪Dp+2∪…∪Dt. According to proposition 6.5, the collection W1={D0,D1,D2,…Dp,C} is an LCL cover of S by n-disks. Take the element D1 instead of D0 and apply the same operation to U1={D0,D2,…Dp,C}. Proceeding in this way, we finally obtain the minimal LCL cover of S consisting of 2n+2 elements, according to remark 6.1.

(b) Suppose that W={D1,D2,…Dt} is an LCL cover of a closed n-manifold M, U={D1,D2,…Ds} is a segmented n-disk, A=D1∪D2∪…∪Ds is an n-disk, ∂U={D1,D2,…Dm} is the boundary of U (the subcollection of U containing all n-disks intersecting the boundary ∂A of A) and IntU={Dm+1,Dm+2,…Ds} is the interior of U (the subcollection of U containing all n-disks not intersecting ∂A). Merge all n-disks belonging to IntU into C=Dm+1∪Dm+2∪…∪Ds. Then the collection V={D1,D2,…Dm,C,Ds+1,…Dt} is a cover of M. Note that V1={D1,D2,…Dm,C} is an LCL collection by proposition 6.5, and V2={Ds+1,…Dt} is an LCL collection as well, according to proposition 6.3. Since C∩Di=∅, i=s+1,s+2,…t, V is an LCL collection. The proof is complete.
An irreducible LCL cover of a 2-dimensional torus is illustrated in fig. 6.6.

Problem 6.1. There is an open problem similar to problem 4.1. Suppose that W={D0,D1,D2,…Dt} and U={C0,C1,C2,…Cs} are LCL covers of the same closed n-manifold M. Can W be converted to U by a sequence of d-transformations? The answer is known to be yes only for a continuous n-sphere.
7. A connection between digital spaces and LCL covers of continuous n-manifolds.
For technical convenience, we call a collection of sets W={u1,u2,…} contractible if the intersection graph G(W) of W is contractible; we call two collections W and U homotopic if G(W) and G(U) are homotopic, and isomorphic if G(W) and G(U) are isomorphic.
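For finite collections the intersection graph is immediate to compute. A minimal sketch (ours; Python with networkx, with finite sets standing in for disks, and assuming each element supports an intersection test):

import itertools
import networkx as nx

def intersection_graph(sets):
    # G(W): one vertex per member of the collection, an edge between two
    # members whenever their intersection is nonempty.
    g = nx.Graph()
    g.add_nodes_from(range(len(sets)))
    g.add_edges_from((i, j)
                     for i, j in itertools.combinations(range(len(sets)), 2)
                     if sets[i] & sets[j])
    return g

# Four arcs covering a circle, each overlapping only its two neighbours,
# give the 4-cycle: the minimal LCL cover of a 1-sphere (fig. 6.3).
arcs = [frozenset({0, 1}), frozenset({1, 2}),
        frozenset({2, 3}), frozenset({3, 0})]
print(sorted(intersection_graph(arcs).edges()))  # [(0,1),(0,3),(1,2),(2,3)]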
A part of this section, with the technical results leading to the proof of theorem 7.1, is placed in the appendix.

Theorem 7.1. Let W={D1,…Ds} be an LCL collection of n-disks and A=D1∪D2∪…∪Ds. The intersection graph G(W) of W is contractible if and only if A is an n-disk (fig. 7.1).

Lemma 7.1. Let an LCL collection W={D0,D1,D2,…Ds} of n-disks be a cover of a closed n-manifold M. Then the intersection graph G(W) of W is a digital n-manifold.

Proof. The proof is by induction. For n=1, the lemma is plainly true for s≥4 (fig. 6.3). Assume that the lemma is valid whenever n<a+1, and let n=a+1. For definiteness, consider the n-disk D0, the subcollection O(D0)={D1,D2,…Dr} of all n-disks intersecting D0 (without D0 itself) and the collection V={C1,C2,…Cr}, where Ci=D0∩Di. By proposition 6.3, V is an LCL collection of (n-1)-disks and the collections V and O(D0) are isomorphic. Obviously, V is a cover of the (n-1)-sphere ∂D0 such that ∂D0=C1∪C2∪…∪Cr. Therefore, the intersection graph G(V) of V is a digital (n-1)-manifold (an (n-1)-sphere) by the induction hypothesis. Since the collections V and O(D0) are isomorphic by proposition 6.3, the intersection graphs G(O(D0)) and G(V) are isomorphic. Therefore, G(O(D0)) is a digital (n-1)-manifold. Hence, the intersection graph G(W) of the collection W is a digital n-manifold by definition 4.1. This completes the proof.

Lemma 7.2. Let W={D0,D1,D2,…Ds} be an LCL collection of n-disks. If the intersection graph G(W) of W is a digital n-manifold, then W is an LCL cover of the closed continuous n-manifold M=D0∪D1∪…∪Ds. The proof is similar to the proof of lemma 7.1 and hence omitted.
The following theorem summarizes the results of lemmas 7.1 and 7.2.

Theorem 7.2. Let W={D0,D1,…Dt} be an LCL collection of n-disks. W is a cover of a closed n-manifold M=D0∪D1∪…∪Dt if and only if the intersection graph G(W) of W is a digital n-manifold.

Lemma 7.3. Let W={D1,D2,…Ds} be an LCL collection of n-disks. If W is a segmented n-disk, then the intersection graph G(W) of W is a digital n-disk.

Proof. Suppose that W is a segmented n-disk. Then A=D1∪D2∪…∪Ds is an n-disk according to proposition 6.6, and the intersection graph G(W) of W is contractible according to theorem 7.1. According to definition 6.4, there is an LCL collection U={D0,D1,…Ds} of n-disks such that S=D0∪D1∪…∪Ds is an n-sphere. Suppose that G(U)={v0,v1,…vs} is the intersection graph of U, where f(Dk)=vk, k=0,1,…s. According to theorem 7.2, G(U) is a digital n-manifold. Then G(W)=G(U)-v0 is a digital n-manifold with boundary. Therefore, G(W) is a digital n-disk according to definition 4.2.
The following lemma is an easy consequence of lemma 7.2.

Lemma 7.4. Let W={D1,D2,…Ds} be an LCL collection of n-disks. If the intersection graph G(W) of W is a digital n-disk, then W is a segmented n-disk.

Let the collection W={D1,D2,…Ds,…} be an LCL tiling of n-dimensional Euclidean space R^n by n-disks. Then for the collection O(Di)={Di(1),Di(2),…Di(k)} of all disks intersecting any given n-disk Di (without Di itself), the intersection graph G(O(Di)) of this collection is a digital (n-1)-sphere. Therefore, the intersection graph G(W)=Y^n of W is a digital n-manifold. Y^n is a digital model of continuous n-dimensional Euclidean space. Y^n can be constructed in a variety of ways depending on the choice of the tiling W.

Theorem 7.3. Let W={D1,D2,…Ds} be an LCL collection of n-disks. W is a segmented n-disk if and only if the intersection graph G(W) of W is a digital n-disk. (The theorem summarizes lemmas 7.3 and 7.4.)

Theorem 7.4. Let W be an LCL cover of a closed n-manifold M. Then (a) d-transformations of W generate d-transformations of the intersection graph G(W), and (b) d-transformations of G(W) generate d-transformations of W.

Proof. 1. To prove (a), suppose that W={D0,D1,D2,…Dt} is an LCL cover of M, V={D0,D1,D2,…Ds} is a segmented n-disk, X={D0,D1,D2,…Dm} is the boundary of V and Y={Dm+1,Dm+2,…Ds} is the interior of V. Then by theorems 7.2 and 7.3, G(W)={v0,v1,v2,…vt} is a digital closed n-manifold, f(Di)=vi, i=0,1,2,…t, G(V)={v0,v1,v2,…vs} is a digital n-disk, G(X)={v0,v1,v2,…vm} is the boundary of G(V) and G(Y)={vm+1,vm+2,…vs} is the interior of G(V). The merging of all n-disks belonging to Y into an n-disk C=Dm+1∪Dm+2∪…∪Ds according to definition 6.5 generates the replacing of all points belonging to G(Y) by one point u which is adjacent to all points belonging to G(X). This replacing is a d-transformation of G(W). 2. (b) can be proved by applying the same procedure in the reverse order.

Lemma 7.5. Suppose that closed continuous n-manifolds M and N are not homeomorphic. Then no two LCL covers W={D0,D1,D2,…Ds} of M and U={C0,C1,C2,…Ct} of N are isomorphic.

Proof. Assume that there is an LCL cover U={C0,C1,C2,…Cs} of N such that G(U)=G(W). Then a homeomorphism can be established between D0∪D1∪D2∪…∪Ds and C0∪C1∪C2∪…∪Cs. Since M=D0∪D1∪D2∪…∪Ds and N=C0∪C1∪C2∪…∪Cs are not homeomorphic, the assumption is not valid.
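The graph-level content of theorem 7.4(a) is easy to express in code. The sketch below (our illustration, not the paper's; networkx is assumed) performs the merging of definition 6.5 directly on an intersection graph: the interior points of a digital n-disk are deleted and replaced by a single new point adjacent to all of the disk's boundary points.

import networkx as nx

def merge_interior(g, interior, new_point):
    # Merging on the intersection graph: remove the interior points and
    # add one new point adjacent to every former neighbour of the
    # interior outside the interior itself (the boundary of the disk).
    interior = set(interior)
    boundary = {v for w in interior for v in g.neighbors(w)} - interior
    h = g.copy()
    h.remove_nodes_from(interior)
    h.add_node(new_point)
    h.add_edges_from((new_point, v) for v in boundary)
    return h

# Example: a wheel (hub joined to a 6-cycle) plays the role of a digital
# 2-disk with a single interior point; merging that interior is the
# identity up to renaming the hub.
w = nx.wheel_graph(7)            # node 0 is the hub, nodes 1..6 the rim
m = merge_interior(w, {0}, "u")
assert nx.is_isomorphic(w, m)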
Let us introduce a construction which will be used in further proofs.

Definition 7.1. Let W={D0,D1,D2,…Ds} be an LCL collection of n-disks. W is called a segmented k-disk, k-sphere or k-manifold, k≤n, if the intersection graph G(W) of W is a digital k-disk, k-sphere or k-manifold respectively (fig. 7.2). If W is a segmented k-disk, then the boundary ∂W of W is the segmented (k-1)-sphere belonging to W such that G(∂W) is the boundary of G(W). The interior IntW of W is defined as W-∂W. Obviously, if k=n, then a segmented n-disk by definition 7.1 is a segmented n-disk by definition 6.4.

Remark 7.1. As in the digital approach, d-transformations of an LCL cover W of a closed n-manifold M can change segmented m-spheres and m-disks belonging to W, m<n. Suppose that the collection W={D0,D1,D2,…Ds} of n-disks is an LCL cover of a closed n-manifold M, S is a segmented m-sphere belonging to W and C is a segmented m-disk belonging to W, m<n. We say that S is embedded in a segmented n-disk D if S⊆D. If S∩IntD≠∅, then after the merging of all interior elements of D into an element w, S is collapsed into a set of elements which is not a segmented m-sphere. If S⊆∂D, then after the merging of all interior elements of D into an element w, S belongs to the boundary of a segmented n-disk D1 with only one interior element w (fig. 4.6). We say that C is embedded in a segmented n-disk D if IntC⊆IntD (and C⊆D). If ∂C∩IntD≠∅, then after the merging of all interior elements of D into an element w, C is collapsed into a set of elements which is not a segmented m-disk. If ∂C⊆∂D, then after the merging of all interior elements of D into an element w, C converts into a segmented m-disk w⊕∂C with only one interior element w. Obviously, a segmented m-sphere S (a segmented m-disk C) is embedded in a segmented n-disk D if and only if the intersection graph G(S) (respectively G(C)) is embedded in the intersection graph G(D). Further, for technical convenience, the n-disks belonging to an LCL collection W are called elements of W.

Remark 7.2. Suppose that W={w1,w2,…wt} is an LCL cover of a closed 3-manifold M. For a segmented 1-sphere S={w1,w2,…wp} belonging to W there is a continuous closed curve C belonging to M with the following properties: C⊆w1∪w2∪…∪wp=A, and wk∩C, k=1,2,…p, is a closed 1-disk. We say that C is a continuous analog of S and S is a segmented analog of C (fig. 7.3). Similarly, for a segmented 2-disk U={w1,w2,…wp} belonging to W there is a continuous 2-disk D belonging to M with the following properties: D⊆w1∪w2∪…∪wp=A, and wk∩D, k=1,2,…p, is a closed 2-disk. We say that D is a continuous analog of U and U is a segmented analog of D (fig. 7.3).
8. Properties of a continuous 2-sphere.
Obviously, a digital normal 2-space is necessarily a digital 2-manifold. Let us apply the previous results to 2-manifolds and consider conditions characterizing the continuous 2-sphere amongst continuous closed 2-manifolds using LCL covers and their intersection graphs, which are digital 2-manifolds. As follows from the previous results, any LCL cover of a closed continuous n-manifold can be converted to an irreducible cover by merging elements of the cover. If a closed continuous n-manifold M is an n-sphere, then any LCL cover of M can necessarily be converted to the minimal cover, which is the collection W={F1,F2,…F2n+2} of n-dimensional faces Fk, k=1,2,…2n+2, of an (n+1)-dimensional cube U. Let us first prove a digital theorem whose results will be used in further proofs.

Theorem 8.1. Let H be a digital 2-manifold and W(H) be the collection of digital 2-manifolds equivalent to H. If any digital 1-sphere S belonging to any G∈W(H) is the boundary of some digital 2-disk belonging to G, then H is a digital 2-sphere.

Proof. Let S⊆H be a 1-sphere, D be a 2-disk such that S=∂D and v be a point not belonging to H. Replace D with the 2-disk v⊕∂D by deleting all points belonging to IntD and connecting the point v with every point of ∂D. This is a d-transformation which converts D into v⊕S. Repeat this procedure until any 1-sphere S is the rim of some point v and any 2-disk is the join of a point v and a 1-sphere S. Denote the obtained space by G. Note that G is a compressed digital 2-manifold equivalent to H. Suppose {v1,v2,…vs} are the points of G. According to lemma 5.2, for any two adjacent points, say v1 and v2, there is a digital 1-sphere, say S1={v1,v2,v3,v4}, containing four points. Since any digital 1-sphere is the rim of some point, there is a point, say v5, adjacent to v1, v2, v3 and v4 (figs. 5.1, 8.1). For the same reason, the adjacent points v1 and v5 belong to a digital 1-sphere S2 consisting of four points, and S2 is the rim of some point vi. Obviously, the point vi is either v2 or v4. Let vi be v2. Then S2={v1,v5,v3,v6}. Applying the above arguments to the adjacent points v5 and v4, we see that the points v4 and v6 must be adjacent. Obviously, the subspace G1⊆G containing the points {v1,v2,…v6} is the minimal digital 2-sphere, G1=S2min. According to proposition 3.3, if G1⊆G, then G1=G. Therefore, H is a digital 2-sphere by theorem 4.1. The proof is complete.
Theorem 8.2. Let M be a closed continuous 2-manifold. If for any LCL cover W of M by 2-disks any segmented 1-sphere belonging to W is the boundary of some segmented 2-disk belonging to W, then M is a continuous 2-sphere.

Proof. Suppose that W={w1,w2,…wt} is an LCL cover of M by 2-disks and S is a segmented 1-sphere (fig. 8.1). According to theorems 7.2 and 7.4, the intersection graph G(W) is a digital 2-manifold. Since for any segmented 1-sphere S (fig. 8.1) belonging to W there is a segmented 2-disk D={w1,w2,…wp} belonging to W such that S⊆D, for any digital 1-sphere X=G(S) belonging to G(W) there is a digital 2-disk Y=G(D) belonging to G(W) such that X⊆Y. Therefore, G(W) satisfies the conditions of theorem 8.1. By theorem 8.1, G(W) is equivalent to the minimal 2-sphere S2min with points {v1,v2,…v6}. According to theorems 7.2 and 7.4, there is an LCL cover U={u1,u2,…u6} of M equivalent to W and such that the intersection graph G(U) of U is S2min. Obviously, S2min is the intersection graph G(F) of the collection F={F1,F2,…F6} of 2-faces Fk, k=1,2,…6, of a continuous 3-cube U (fig. 8.1). There is an obvious homeomorphism between M=u1∪u2∪…∪u6 and the boundary ∂U=F1∪F2∪…∪F6 of U. Hence, M is a continuous 2-sphere. The proof is complete.
We can change the conditions of theorems 8.1 and 8.2 as follows.

Theorem 8.3. Let H be a digital 2-manifold and W(H) be the collection of digital 2-manifolds equivalent to H. If for any digital 1-disk L belonging to any G∈W(H) there is a digital 2-disk D belonging to G such that IntL⊆IntD, then H is a digital 2-sphere.

Proof. As in the proof of theorem 8.1, convert H by d-transformations into a compressed digital 2-manifold G in which any 2-disk is the ball of some point. For any 1-disk L there is a digital 2-disk D belonging to G such that IntL⊆IntD. Since any 2-disk is the ball of some point, D=U(a1) and L consists of three points x, y and a1. Therefore, O(v)=Smin=S(x,y)⊕S(u,a1). Applying the above arguments to any other point in G, we see that the rim of any point is Smin consisting of four points. According to lemma 4.4, G is the minimal 2-sphere. Therefore, H is a digital 2-sphere.
Theorem 8.4. Let M be a closed continuous 2-manifold. If for any LCL cover W of M by 2-disks and for any segmented 1-disk L belonging to W there is a segmented 2-disk D belonging to W such that IntL⊆IntD, then M is a continuous 2-sphere. The proof is similar to the proof of theorem 8.2 and is omitted.
It can be checked directly for a digital 2-dimensional projective plane P that there are digital 1-disks belonging to P which cannot be contracted to a point (fig. 4.5). An LCL cover of a 2-dimensional continuous torus is depicted in fig. 6.6. Any segmented 1-disk containing 5 elements of the cover cannot be contracted to a point.
9. Properties of a continuous 3-sphere.
First, let us briefly review some topological notions related to the Poincaré conjecture.
Here is the standard form of the Poincaré conjecture: Every simply connected closed 3-manifold is homeomorphic to a 3-sphere.
A closed 3-manifold M is called simply connected if and only if M is path-connected and the fundamental group of M is trivial, i.e. consists only of the identity element.
Loosely speaking, if the fundamental group of M is trivial, then any closed curve belonging to M can be continuously shrunk to a point.
Our approach is based on a decomposition of a closed 3-manifold M into an LCL collection of 3-disks with certain properties. We study an LCL cover of a manifold instead of studying the manifold itself. We do not use a mapping from a circle or a 2-disk into a closed 3-manifold. We introduce segmented 1-, 2- and 3-disks and spheres (according to definition 7.1), which are segmented analogs of continuous 1-, 2- and 3-disks and spheres belonging to a closed 3-manifold. We have to find conditions which guarantee that a closed continuous 3-manifold is a continuous 3-sphere. It is important to emphasize that d-transformations of an LCL cover of M do not change the manifold itself; they change only a cover of M by converting one LCL cover into another LCL cover.
Let us first prove a digital theorem whose results will be used in the further proof.
Theorem 9.1. Let H be a digital 3-manifold and W(H) be the collection of digital 3-manifolds equivalent to H. If for any digital 2-disk D belonging to any G∈W(H) there is a digital 3-disk U belonging to G such that IntD⊆IntU, then H is a digital 3-sphere.

Proof. Let U be a digital 3-disk belonging to H. Suppose that a point v belongs to IntU. Delete all points belonging to IntU except the point v and connect the point v with all points belonging to ∂U. This is a d-transformation that converts U into U1=v⊕∂U. Repeat this procedure until any digital 3-disk U is the ball of some point, according to lemma 5.1. Denote by G the obtained digital 3-manifold. Clearly, G is equivalent to H because all replacings are d-transformations. For a point v∈G, take some point u belonging to the rim O(v) of v. Note that O(v) is a digital 2-sphere (fig. 9.1). Then O(v,u)=S is a digital 1-sphere and O(v)-u is a digital 2-disk D according to lemma 4.2. Therefore, there is some digital 3-disk U such that IntD belongs to IntU. Since any digital 3-disk is the ball of a point of G, there is a point u2 such that IntU={u2}. Since IntD⊆IntU(u2), then IntD={u2}, v∈O(u2) and O(v,u2)=S. Therefore, O(v)=S(u,u2)⊕S. Take a point v1 belonging to O(v,u)=S and apply to v1 the same arguments as above. We obtain that O(v)=S(u,u2)⊕S(v1,v2)⊕S(w1,w2). Therefore, O(v) is the minimal digital 2-sphere S2min. Since the point v was chosen arbitrarily, the rim of any point in G is S2min. Then, according to lemma 4.4, G is the minimal digital 3-sphere S3min with eight points {u,u2,v1,v2,w1,w2,v,p} (fig. 9.1). Hence, H is a digital 3-sphere according to theorem 4.1. The proof is complete.
In this theorem, a digital loop is present only implicitly, as the boundary of a digital 2-disk.

Theorem 9.2. Let M be a closed continuous 3-manifold. If for any LCL cover W of M by 3-disks and for any segmented 2-disk D belonging to W there is a segmented 3-disk U belonging to W such that IntD⊆IntU, then M is a continuous 3-sphere.

Proof. Suppose that W={w1,w2,…wt} is an LCL cover of M by 3-disks and G(W) with points {v1,v2,…vt} is the intersection graph of W, where wi corresponds to vi. According to theorem 7.2, G(W) is a digital 3-manifold satisfying the conditions of theorem 9.1. Therefore, G(W) can be transformed to the minimal digital 3-sphere S3min with points {v1,v2,…v8} by d-transformations. According to theorem 7.4, W can be transformed to an LCL cover W1={b1,b2,…b8} consisting of eight elements and such that the intersection graph G(W1) of W1 is S3min. According to proposition 6.4 and remark 6.1, the minimal digital 3-sphere S3min is the intersection graph G(F) of the collection F={F1,F2,…F8} of 3-dimensional faces of the unit continuous four-dimensional cube U4. There is an obvious homeomorphism between M=b1∪b2∪…∪b8 and ∂U4=F1∪F2∪…∪F8. Therefore, M is a continuous 3-dimensional sphere. The proof is complete.
A geometrical sense of this theorem is intuitively clear. Suppose that M is a closed path-connected (continuous) 3-manifold and W={w1,w2,…wt} is an LCL cover of M by 3-disks. Suppose that M is a continuous 3-sphere and D is a closed 2-disk belonging to M. If there is a segmented 2-disk U containing D, then there is a segmented 3-disk V containing U and D. Suppose now that M is not homeomorphic to a 3-sphere. Then there exists a segmented 2-disk U such that there is no segmented 3-disk containing U. Therefore, for a continuous 2-disk D belonging to M and such that D is a continuous analog of U, there is no segmented 3-disk containing D. U and D are simply too large to be contained in a segmented 3-disk: there are not enough elements left in W to form a segmented 3-disk containing U and D. Therefore, there is a continuous closed curve C (for example, the boundary of D, C=∂D) such that its segmented analog, a segmented 1-sphere, does not belong to any segmented 3-disk belonging to W. This property resembles the condition used by Bing, who showed that a simply-connected closed 3-manifold with the property that every loop is contained in a 3-ball is homeomorphic to the 3-sphere. As is seen from theorem 9.2, we do not use segmented loops explicitly and, therefore, we do not need to impose any restrictions or requirements on them. Implicitly, a segmented closed curve is present only as the boundary of a segmented 2-disk.

10. A connection between the classification problem for closed continuous 3-manifolds and digital 3-manifolds.
Possibly, the approach presented in this paper can help in treating the problem of classification of compact 3-dimensional manifolds. The advantage of this approach is that a continuous closed 3-manifold can be presented as a digital 3-manifold and, therefore, investigated by means of computers. Suppose that W={w1,w2,…wt} is an LCL cover of a closed 3-manifold M. By the merging or splitting of 3-disks belonging to W, the number of elements of W can be reduced or increased. According to theorems 7.2 and 7.4, the intersection graph G(W) of W is a digital 3-manifold, and d-transformations of W generate d-transformations of G(W). Conversely, if G(W) is a digital n-manifold, then the LCL collection W is a cover of a closed (continuous) 3-manifold. Therefore, if we can classify digital 3-manifolds, then this classification can be applied to continuous closed 3-manifolds. As a first step, digital 3-manifolds can be distinguished by the number of points contained in their compressed versions. Obviously, for any digital 3-manifold there always exist one or several compressed versions with an equally small number of points. Suppose that E(H) is the family of all digital 3-manifolds equivalent to a digital 3-manifold H and {G1,G2,…Gk} is the family of 3-manifolds belonging to E(H) with the minimal number p of points among the manifolds belonging to E(H).

Appendix.

Proposition 6.4. Let an LCL collection W={D1,D2,…Dt} of n-disks be a cover of a closed n-manifold M. (a) Suppose V={D2,D3,…Dp} is the collection of all n-disks intersecting D1 and U={E2,E3,…Ep} is an LCL collection of (n-1)-disks such that Ei=D1∩Di, i=2,3,…p. Then U is an LCL cover of the boundary ∂D1 of D1, the collections U and V are isomorphic and C=D1∪D2∪…∪Dp is an n-disk. (b) For any Di there exists Dk such that Di∩Dk=∅. (c) For any Di and Dk such that Di∩Dk≠∅ there exists Dp such that Di∩Dp=∅ and Dk∩Dp≠∅. (d) t≥2n+2.

Proof. Assertion (a) follows directly from proposition 6.3. To prove (b) and (c), suppose that the subcollection V={D1,D2,…Dk} contains all n-disks intersecting D1, including D1. Then the union Ck=D1∪D2∪…∪Dk is an n-dimensional disk by proposition 6.3. Therefore, V is not a cover of M and there is at least one n-disk which does not intersect D1. Suppose that U={D1,D3,D4,…Dm,Dp+1,Dp+2,…Dp+h} is the collection of all n-disks belonging to W and intersecting D2, where Di∈V, i=3,4,…m, and Di∉V, i=p+1,p+2,…p+h. Then X={E1,E3,E4,…Em,Ep+1,Ep+2,…Ep+h}, where Ei=D2∩Di, Di∈U, is an LCL cover of the (n-1)-sphere ∂D2 by (n-1)-disks according to corollary 6.1. According to proposition 6.3, E1∪E3∪…∪Em is an (n-1)-disk. Therefore, h>0 and at least Ep+1=D2∩Dp+1≠∅. Therefore, D1∩Dp+1=∅. To prove (d), use induction. For n=1,2, the proposition is checked directly (figs. 6.3, 6.4). Assume that the proposition is valid whenever n<p, and let n=p. Suppose that V={D2,D3,…Dm} is the collection of n-disks intersecting D1. Then U={E2,E3,…Em}, Ek=Dk∩D1, is an LCL cover of the (n-1)-sphere ∂D1 by (n-1)-disks according to proposition 6.3. Then the number x of elements in U is greater than or equal to 2n, x≥2n, by the assumption. Since there is at least one n-disk not intersecting D1 (proposition 6.4(b)), t≥2n+2. This completes the proof.

Proposition 6.5. Suppose that the collection W={D0,D1,D2,…Dt} of n-disks is an LCL cover of an n-sphere S and V={D1,D2,…Dp} is the collection of all n-disks intersecting D0.
Then the collection U={D0,D1,D2,…Dp,C}, where C=Dp+1∪Dp+2∪…∪Dt, is an LCL cover of S by n-disks such that if Di(1)∩Di(2)∩…∩Di(m)≠∅, Di(k)∈V, k=1,2,…m, then C∩Di(1)∩Di(2)∩…∩Di(m)≠∅.

Proof. Obviously, U is a cover of S. According to proposition 6.3, the union A=D0∪D1∪D2∪…∪Dp is an n-disk. Hence, C=S-IntA is an n-disk. By proposition 6.4, Hi=C∩Di≠∅, i=1,2,…p. For n=1,2, the proposition is verified directly. Assume that the proposition is valid whenever n≤s, and let n=s+1. Suppose that X1={D0,D2,D3,D4,…Dm,Dp+1,Dp+2,…Dp+h} is the collection of all n-disks belonging to W and intersecting D1, where Di∈V, i=2,3,…m, and Di∉V, i=p+1,p+2,…p+h. As in the proof of proposition 6.4(b), h>0, and Y1={E0,E2,E3,…Em,Ep+1,Ep+2,…Ep+h}, Ei=D1∩Di, Di∈X1, is an LCL cover of the (n-1)-sphere ∂D1 by (n-1)-disks according to proposition 6.3. Then the collection Z1={E0,E2,E3,…Em,H1}, where H1=Ep+1∪Ep+2∪…∪Ep+h=C∩D1, is an LCL cover of ∂D1 by (n-1)-disks according to the assumption. Since D1 was taken arbitrarily, Zk, k=1,2,…p, is an LCL cover of ∂Dk by (n-1)-disks. Obviously, ZC={H1,H2,…Hp} is a cover of ∂C.
Further, for technical convenience, we call a collection of sets W={u1,u2,…} contractible if the intersection graph G(W) of W is contractible, and we call the rim of uk the collection O(uk) of all sets belonging to W and intersecting uk.

Proposition 7.1. Suppose that W={u1,u2,…} is a tiling of n-dimensional Euclidean space R^n into a family of n-cubes with edge length L, B is an n-box in R^n and U={u1,u2,…us} is the family of n-cubes intersecting B. Then the intersection graph G(U) of U is contractible.

Proof. Obviously, U is a cover of B. For a small number s, the assertion is checked directly. With no loss of generality, suppose that the edges of B are parallel to the coordinate axes, L is much smaller than the length r of the shortest edge of B, L<<r, and if B∩uk≠∅, then IntB∩Intuk≠∅ (fig. 10.1). Let U={u11…1,…ump…q}. Obviously, for any cube u1a…b there is a cube u2a…b such that u2a…b is adjacent to all other cubes belonging to the rim O(u1a…b). Therefore, the rim O(u1a…b) is contractible and all cubes u1a…b can be deleted. In the same way, all cubes u2a…b, u3a…b, …uma…b can be deleted except for the cube ump…q. The proof is complete.
Note that G(U)=G(V), where V={e1,e2,…es} is the cover of B by ek=B∩uk.

Proposition 7.2. Suppose that W={u1,u2,…} is a tiling of n-dimensional Euclidean space R^n into a family of n-cubes with edge length L, D is a finite convex n-disk in R^n and U={u1,u2,…us} is the family of n-cubes intersecting D. Then the intersection graph G(U) of U is contractible.
Proof.
To simplify the proof, consider dimension two (fig. 10.2). Suppose that a point (x,y) belongs to the cube ukp if x0+kL≤x≤x0+(k+1)L and y0+pL≤y≤y0+(p+1)L, k,p∈Z. Let U={ukp} be the cover of a convex finite 2-disk D such that for any ukp∈U the intersection ekp=D∩ukp is a closed 2-disk. Denote by Xk the collection of cubes belonging to the cover U whose first coordinate equals k, and by Yp the collection of cubes belonging to the cover U whose second coordinate equals p. Call Xk a boundary level if Xk+1 (or Xk-1) is empty. […] of two n-disks such that their intersection C=D1∩B is an (n-1)-disk. Therefore, P is an n-disk. The following statement is an easy consequence of propositions 7.5 and 7.6.

Theorem 7.1. Let W={D1,…Ds} be an LCL collection of n-disks and P=D1∪D2∪…∪Ds. The intersection graph G(W) of W is contractible if and only if P is an n-disk (fig. 7.1).

Figure captions: S is a 2-sphere; S-v is a 2-disk, which is homotopic to a point; S is not compressed; the union U(v)∪U(u) of balls is a 2-disk. P is a 2-dimensional projective plane; P-v is homotopic to a 1-sphere S. Figure 10.2. The digital model G(U) of a convex 2-disk D can be converted into a point by deleting the cubes belonging to the layers Y6, Y5, Y1, Y2 and Y3.
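The cubical covers used in propositions 7.1 and 7.2 are easy to generate programmatically. A minimal sketch (ours, in Python with networkx) relies on the elementary observation that closed unit cubes with integer corners intersect exactly when their index vectors differ by at most one in every coordinate, which gives a "king graph" on the index box.

import itertools
import networkx as nx

def box_cover_graph(shape):
    # Intersection graph of the closed unit cubes indexed by
    # 0..shape[k]-1 in coordinate k (Chebyshev distance <= 1).
    cubes = list(itertools.product(*(range(s) for s in shape)))
    g = nx.Graph()
    g.add_nodes_from(cubes)
    g.add_edges_from((a, b) for a, b in itertools.combinations(cubes, 2)
                     if max(abs(x - y) for x, y in zip(a, b)) <= 1)
    return g

# A 6x3 patch of a 2-dimensional tiling, as in fig. 10.2: deleting a
# whole boundary level keeps the graph connected, mirroring the
# level-by-level deletion argument of proposition 7.2.
g = box_cover_graph((6, 3))
g.remove_nodes_from([v for v in list(g) if v[1] == 2])  # delete one level
assert nx.is_connected(g)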
Analytical Modelling and Simulation of Photovoltaic Panels and Arrays
In this paper, an analytical model for PV panels and arrays based on extracted physical parameters of solar cells is developed. The proposed model has the advantage of simplifying mathematical modelling for different configurations of cells and panels without losing efficiency of PV system operation. The effects of external parameters, mainly temperature and solar irradiance, have been considered in the modelling. Due to their critical effects on the operation of the panel, the effects of series and shunt resistances were also studied. The developed analytical model has been easily implemented, simulated and validated using both Spice and Matlab packages for different series and parallel configurations of cells and panels. The results obtained with these two programs are in total agreement, which makes the proposed model very useful for researchers and designers for quick and accurate sizing of PV panels and arrays.
Introduction
Photovoltaic (PV) systems have been used worldwide for the last three decades. Their early applications were mainly concentrated in remote areas and in uses where other types of energy are either very expensive or not feasible. However, with the reduction in their fabrication costs, PV systems have seen a tremendous increase in different applications. The maximum efficiency of utilization of solar panels is obtained at the maximum power operating point, which is a function of panel physical characteristics and fabrication parameters, solar irradiation, and operating temperature. To design a PV-based system, a model of the panel to be used in simulation prior to implementation is often required. Some PV panel equivalent circuit approximations based on a single-diode model have been studied (Ouennoughi, Z. and Cheggar, M., 1999; Lee, J.I., Brini, J. and Dimitriadis, C.A., 1998; Araujo, G.L., Sanchez, E. and Marti, M., 1982; Gottschalg, R. et al. 1999; Kaminski, A. et al. 1997). The simple models presented give acceptable results only for single-crystalline cells. For polycrystalline cells, which are cost effective, the models presented are not accurate enough, and hence the extended two-diode model gives better results (Araujo, G.L., Sanchez, E. and Marti, M., 1982). In addition, a precise determination of the internal physical parameters of cells and panels is not always possible, and the errors introduced on these parameters during the parameter extraction process induce large errors in the models of solar panels.

*Corresponding author's e-mail: hadj@squ.edu.om
In this paper, analytical models for the equivalent circuit parameters of solar panels and arrays are derived from the basic cell models used to build them. The effects of the equivalent series and shunt resistances on the panel and array characteristics are studied, as their effects are important for the operation of PV systems. Spice and Matlab are used for model validation and for sensitivity evaluation of the series and shunt resistances. The approach presented is valid for the single-diode model as well as for the two-diode model, provided the specific parameters for a given model are considered, as will be shown in a subsequent section of the paper.
Analytical Model for Panel and Array
Solar panels and arrays may be described in terms of a set of electric and optical parameters that represent solar cell properties, in addition to the number of cells and panels connected in series and/or in parallel. Most of the solar cell models available in the literature represent the solar cell by a single diode in parallel with a current source, as shown in Fig. 1-a (Ouennoughi, Z. and Cheggar, M., 1999; Lee, J.I., Brini, J. and Dimitriadis, C.A., 1998). However, in order to have a more generalized and accurate model, the two-diode equivalent circuit model shown in Fig. 1-b has been developed (Araujo, G.L., Sanchez, E. and Marti, M., 1982; Gottschalg, R. et al. 1999; Kaminski, A. et al. 1997). This model consists of an ideal current source, which represents the optical irradiation, connected in parallel with two different diodes D1 and D2 and a shunt resistance R_SH. All these elements are connected to a series resistance R_S. Note that diode D1 models the generated photocurrent in the space charge region, which dominates the total current at low diode voltages, whereas diode D2 models the recombination photocurrent outside the space charge region. This photocurrent is more dominant at high diode voltages.
The shunt and series resistances are two important parameters in the operation of solar cells: the shunt resistance affects mainly the panel power output, while the series resistance affects the efficiency as well as the fill factor (Lee, J.I. et al.; McMahon, T.J. et al.). Therefore, the accuracy in determining these two parameters, and knowledge of the errors involved in their determination, is a key point for a better simulation of the panel and array characteristics. Note also that the absolute values of R_SH are very important in cell qualification testing, module performance testing and failure analysis (McMahon, T.J. et al.).
With reference to Fig. 1-b, for a generalized panel equivalent circuit, the I-V characteristics of solar cells can be expressed in terms of physical and electrical parameters as

I_L = I_ph - I_D1 - I_D2 - I_SH    (1)

where I_ph is the total photogenerated current, I_L the load current, I_D1 and I_D2 the equivalent diode currents and I_SH the net current through the shunt resistance R_SH. For a single solar cell, or a single panel that can also be considered as a cell, the currents I_D1, I_D2 and I_SH and the photocurrent I_ph may be expressed by the following equations (Gottschalg, R. et al. 1999):

I_D1 = I_SD1 [exp(q(V_L + I_L R_S)/(n_1 k T)) - 1]    (2)
I_D2 = I_SD2 [exp(q(V_L + I_L R_S)/(n_2 k T)) - 1]    (3)
I_SH = (V_L + I_L R_S)/R_SH    (4)
I_ph = (C_0 + C_1 T) G    (5)

In the above equations, R_S and R_SH are the series and shunt resistances respectively, I_SD1 and I_SD2 are the diffusion and recombination saturation currents respectively, n_1 and n_2 are the diffusion and recombination diode ideality factors, k is Boltzmann's constant, q is the electronic charge, T is the temperature in Kelvin, G is the solar irradiation, and C_0 and C_1 are empirical constants modeling the temperature and irradiation dependence of the photocurrent. Note that in the case of the single-diode model shown in Fig. 1-a, I_D2 must be removed from Eq. (1).
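Because Eq. (1) is implicit in I_L (the diode voltage depends on I_L through the R_S drop), it has to be solved numerically at each load voltage. The following sketch does this in Python rather than in the Matlab/Spice environments used by the paper; the diode parameters repeat the typical cell values quoted in the Results section, while I_ph and T are illustrative assumptions.

import numpy as np
from scipy.optimize import brentq

q, k = 1.602e-19, 1.381e-23      # electron charge (C), Boltzmann constant (J/K)
T = 300.0                        # cell temperature, K (assumed)
Iph = 1.0                        # photogenerated current, A (assumed)
ISD1, ISD2 = 2.4e-9, 5.5e-5      # saturation currents, A
n1, n2 = 1.99, 1.9               # diode ideality factors
Rs, Rsh = 2.5e-2, 200.0          # series / shunt resistance, ohm

def load_current(v):
    # Solve Eq. (1) with Eqs. (2)-(4) for I_L at load voltage v.
    def residual(i):
        vd = v + i * Rs          # voltage across the diodes and R_SH
        return (Iph
                - ISD1 * (np.exp(q * vd / (n1 * k * T)) - 1.0)
                - ISD2 * (np.exp(q * vd / (n2 * k * T)) - 1.0)
                - vd / Rsh
                - i)
    return brentq(residual, -10.0, 10.0)

volts = np.linspace(0.0, 0.5, 51)
amps = np.array([load_current(v) for v in volts])
print("peak cell power: %.3f W" % (volts * amps).max())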
A typical connection configuration of cells and panels forming a PV array used for power system applications to feed a resistive load is shown in Fig. 2. For this typical array configuration with m horizontal units and n vertical units, the cells or panels connected in series are numbered P/C_i1 to P/C_im for a row i, where i varies from 1 to n, whereas the panels or cells connected in parallel in a given column j are numbered P/C_1j to P/C_nj, where j varies from 1 to m. Note that P/C represents a cell or panel unit.
As indicated above and reported in the literature, the extracted values of both the series resistance R_S and the shunt resistance R_SH have significant errors (Jervase, J., Bourdoucen, H. and Al-Lawati, 2001). It is therefore important to take this into consideration and study the effects of these two parameters on the panel and array characteristics. Based on the models developed elsewhere (Araujo, G.L. et al.; Gottschalg, R. et al.; Kaminski, A. et al.) and by taking into account typical changes of R_S and R_SH from one cell to another, an improved and useful analytical panel model is proposed. The expressions to be used for the model, together with Eq. (1), are given in a subsequent section. Typical figures for the errors on R_S and R_SH have been suggested, and their effects are studied by simulation using the Matlab/Simulink and Spice software packages. For demonstrating the validity of the analytical model developed, panels having different numbers of cells and different sizes have been considered. However, typical results will be shown for a panel of 72 cells (6x12), with six cells connected in series and twelve in parallel.
Assuming the basic cell parameters are similar except for R_S and R_SH, and making an appropriate change of variables on the saturation currents and ideality factors, expressions for the total currents, ideality factors and equivalent resistances R_S and R_SH can be formulated analytically. This is supported by practical considerations, where the values of these parameters are affected by the way external connections of cells and panels are made. Based on the models shown in Fig. 3 for a set of two ideal cells D1 and D2 connected in series and in parallel, one can write a set of equations for each type of configuration: parallel, series, and combined parallel-series connections.
Parallel connection: with reference to the model shown in Fig. 3-a, one can write I_1 = I_S1 [exp(v_P q/(n_1 k T)) - 1] for D1 and D2. Assuming D1 and D2 have equivalent characteristics, the parallel equivalent circuit composed of the two diodes can be represented by a current source I_T = 2I_1 and a diode D, as shown in the figure. The I-V expression of this circuit can be expressed as

I_T = 2 I_S1 [exp(v_P q/(n_1 k T)) - 1]    (6)

For m cells connected in parallel, the saturation current of the equivalent diode is to be multiplied by m, and hence the total current becomes

I_T = m I_S1 [exp(v_P q/(n_1 k T)) - 1]    (7)

A similar analysis can be done to find the equivalent series and shunt resistances. For the series resistance of a parallel connection of two cells, the current through each resistance is half the total current I_T, while the voltage drop stays unchanged. Thus, the equivalent series resistance R_SE of two cells in a parallel circuit equals half the single-cell series resistance R_S. Hence, for m cells connected in parallel, R_SE = R_S/m. The same approach applies for the shunt resistances, and hence the equivalent shunt resistance for m cells is R_SHE = R_SH/m.

Series connection: with reference to the circuit models of Fig. 3-b, one can write

i = I_S1 [exp(v_S q/(n_1 k T)) - 1]    (8)

for D1 and D2.
The voltage v_S for a single cell can be expressed as

v_S = (n_1 k T/q) ln(i/I_S1 + 1)    (9)

Since the two cells connected in series are assumed to have equivalent characteristics, the voltage across the equivalent cell D is 2v_S; one therefore needs to multiply the ideality factor n_1 in Eqs. (8) and (9) by 2. For the case of n equivalent cells connected in series, the value of n_1 is to be multiplied by n. Thus, the equivalent voltage and current for a panel of n cells can be expressed as

v = (n n_1 k T/q) ln(i/I_S1 + 1)    (10)
i = I_S1 [exp(v q/(n n_1 k T)) - 1]    (11)

A similar analysis can also be done to find the equivalent series and shunt resistances. The equivalent series resistance of the two cells equals twice the single-cell series resistance. Hence, for n cells connected in series the total series resistance is R_SE = nR_S. For the shunt resistance the same approach applies, and the equivalent shunt resistance for n cells is R_SHE = nR_SH.
Note that the above approach has also been used to deduce the equivalent panel parameters for the two-diode cell model.
Combined series-parallel connections: in the case of a panel or array with combined series and parallel connections of many solar cells, a change of variables can be done on the physical parameters of a single cell described by Eqs. (1-5) to determine the model of the panel. Hence, for an array of n cells in series and m strings in parallel, the expressions of the equivalent parameters follow from the two rules derived above:

I_phE = m I_ph    (12)
I_SDE1 = m I_SD1,  I_SDE2 = m I_SD2    (13)
n_1E = n n_1,  n_2E = n n_2    (14)
R_SE = n R_S/m,  R_SHE = n R_SH/m    (15)

Note that since R_S and R_SH also vary with optical irradiance and ambient temperature, the equivalent array series and shunt resistances R_SE and R_SHE will also vary with these two parameters. These are considered in the simulations shown in the following section. A formulation of this dependency for one single cell can be found in (Veissid, N., De-Andrade, A.M., 1991).
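A direct transcription of these scaling rules is given below (our Python sketch, not the paper's implementation; the cell values repeat the typical parameters used in the Results section, and I_ph is an assumption).

def panel_parameters(cell, n_series, m_parallel):
    # Equivalent two-diode parameters of an array of identical cells:
    # series rules (ideality factors and resistances scale with n) and
    # parallel rules (currents scale with m, resistances divide by m).
    n, m = n_series, m_parallel
    return {
        "Iph":  m * cell["Iph"],
        "ISD1": m * cell["ISD1"],
        "ISD2": m * cell["ISD2"],
        "n1":   n * cell["n1"],
        "n2":   n * cell["n2"],
        "Rs":   n * cell["Rs"] / m,
        "Rsh":  n * cell["Rsh"] / m,
    }

cell = {"Iph": 1.0, "ISD1": 2.4e-9, "ISD2": 5.5e-5,
        "n1": 1.99, "n2": 1.9, "Rs": 2.5e-2, "Rsh": 200.0}
print(panel_parameters(cell, n_series=6, m_parallel=12))  # the 72-cell panel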
To validate the proposed analytical model of panels and arrays to be used for designing PV systems, simulations were done using Matlab and Spice. The following sections describe how the above models were implemented using these two packages.
Matlab/Simulink Simulation
A Matlab/Simulink program was used to validate the developed solar panel model. Figure 4 shows the solar panel model as implemented in Simulink based on the previous equations. The inputs to the model are the cell temperature T in K and the solar irradiation G in W/m^2. The output is a voltage that drives a resistive load R_L. Note that the solar panel current can be varied by changing the load resistance, and both the voltage and current at the output of the solar panel can be tracked and measured. The model of the panel was based on the implementation of Eqs. (1), (5) and (12-15), with the assumption that all cells' parameters are interrelated according to these expressions.
Spice Simulation
Circuit models for the cell and panel configurations shown in Fig. 2 have been implemented by generating models using the model editor in Spice. This has been done for basic solar cells having actual parameters, as well as for panels and arrays modeled using the developed analytical approach. The results obtained were compared with the ones obtained using Matlab/Simulink for similar configurations. These are presented in a subsequent section.
It is worth noting that the analytical model presented above is very useful for the simulation and modeling of PV-based systems. It simplifies the mathematical computations and the design of solar panels without altering the physical parameters that are very well known for solar cells. Hence, the derived parameters, even though purely mathematical, give an accurate physical behavior of the PV panel and array.
Results
Typical parameters of the solar cells used to evaluate the developed analytical model are given in (Gottschalg, R. et al. 1999). These are n_1 = 1.99, n_2 = 1.9, I_SD1 = 2.4x10^-9 A and I_SD2 = 5.5x10^-5 A. The series and shunt resistances are kept variable to see their effect on the output power of the simulated arrays. However, their values for the basic cell units are R_SH = 200 Ω and R_S = 2.5x10^-2 Ω.
Variations of the output power as a function of the load voltage, for different values of the cell and panel equivalent series resistance R_S and shunt resistance R_SH, for the 72-cell panel configuration are shown in Figs. 5 and 6, respectively. The percentage changes in the values of R_S (relative to the typical value R_S = 2.5x10^-2 Ω) used to obtain these characteristics are 50, 100, 150, and 200%. The values of the shunt resistance R_SH, in percentage relative to the typical value R_SH = 200 Ω, used to obtain these characteristics are also 50, 100, 150, and 200%. Note the significant effect of series resistance fluctuations on the output power (refer to Fig. 5); the shunt resistance, however, has a practically insignificant effect. These significant fluctuations of the output power versus the output voltage occur at low load resistances, as they are more affected by R_S than by R_SH, which is dominated by the two diodes connected in parallel with it.
On the other hand, the main external parameters that affect the output current and voltage of the modeled panel have been considered: the solar irradiation G of the site and the temperature T of the panel. To illustrate their effects, the circuit model of Fig. 4 was simulated at different temperatures and solar irradiations using a 72-cell panel. Figure 7 shows the simulation results. Notice that if the temperature of the panel decreases, the output voltage increases. For instance, when the temperature of the panel is 20°C and the irradiation is 1000 W/m^2, the open circuit voltage (V_oc) is about 6.3 V. However, if the temperature is decreased to 0°C and the irradiation is kept constant, the open circuit voltage becomes about 6.6 V. Notice also that reducing the irradiation results in decreasing the output voltage of the panel: if the irradiation is decreased from 1000 W/m^2 to 500 W/m^2, the open circuit voltage decreases from about 6.3 V to 5.8 V at a constant temperature of 20°C.
Conclusions
An analytical model for PV panels and arrays based on extracted physical parameters of solar cells has been presented in this paper. The proposed approach has the advantage of simplifying the mathematical modeling of different cell and panel configurations without losing the necessary accuracy of system operation. The effects of temperature and solar irradiance have been considered in the modeling. The developed analytical model has been simulated and validated using both the Matlab and Spice packages for different cells and panels connected in series and parallel. This makes the proposed model very useful for researchers and system designers, as it allows a quick and accurate sizing of PV panels and arrays.
A sensitivity analysis of the output power with changing R_SE and R_SHE for a panel of 72 cells has also been carried out. Errors on R_SE and R_SHE of up to ±40% of the nominal values (0.05 Ω and 400 Ω, respectively) were considered. The results of the simulations have shown that the deviations of output power due to a 40% change in R_SE and R_SHE were less than 7% and 1%, respectively.
Figure 3. Parallel (a) and series (b) connections of cells, and the corresponding equivalent circuits used to build the analytical model of panels and arrays.
Figure 4. Solar panel model implemented with Matlab/Simulink.
Figure 5. Output power as a function of load voltage for different values of the panel equivalent series resistance R_S. The values of R_S in % of the typical value 2.5x10^-2 Ω are: 50, 100, 150, and 200.
Figure 6. Output power as a function of load voltage for different values of the panel equivalent shunt resistance R_SH. The values of R_SH in % of the typical value 200 Ω are: 50, 100, 150, and 200.
3D-Printed Gastric Resident Electronics
Long-term implantation of biomedical electronics into the human body enables advanced diagnostic and therapeutic functionalities. However, most long-term resident electronics devices require invasive procedures for implantation as well as a specialized receiver for communication. Here, a gastric resident electronic (GRE) system that leverages the anatomical space offered by the gastric environment to enable residence of an orally delivered platform of such devices within the human body is presented. The GRE is capable of directly interfacing with portable consumer personal electronics through Bluetooth, a widely adopted wireless protocol. In contrast to the passive day-long gastric residence achieved with prior ingestible electronics, advancement in multimaterial prototyping enables the GRE to reside in the hostile gastric environment for a maximum of 36 d and maintain ≈15 d of wireless electronics communications as evidenced by the studies in a porcine model. Indeed, the synergistic integration of reconfigurable gastric-residence structure, drug release modules, and wireless electronics could ultimately enable the next-generation remote diagnostic and automated therapeutic strategies.
GRA folding force measurement: A funnel test apparatus was used to simulate the passage of the GRA through the pylorus; the experiment was set up as described in the prior publications by Bellinger et al. [19] and Kirtane et al. [20] (see Figure S3C of [19] and Supplementary Figure 2 of [20] for detailed information on the set-up). In these experiments, the GRA prototype was pushed by an aluminum rod in a mechanical tester (Instron) to a maximum displacement of 13 mm, and the maximum folding force was measured. As described in Figure S1, the folding force evaluated with the GRA ranges from an average of 7.7 N (at the first cycle) to 7.0 N (after 10000 cycles). This folding force from a three-arm architecture is larger than the prior folding force measurement of 3.5 N for the six-arm gastric residence device developed by Bellinger et al. [19] (see Supporting Information Figure S3C of [19]). The GRA also maintained its folding force after 10000 cycles, with a relatively smaller reduction of folding force (10%) in comparison to the result shown by Kirtane et al. [20]

Figure S1: Maximum folding force measured with the funnel test apparatus to simulate the passage of the GRA through the pylorus. The measurement was repeated for 10000 cycles to evaluate the fatigue properties. Note that only 1 out of 100 points is plotted in this graph to clearly illustrate the standard deviation.
PCL-PLA GRA: We demonstrate the ability to incorporate PCL-based polyurethane by first co-printing PLA with water-soluble polyvinyl alcohol (PVA) as a supporting structure. 3D Computer Aided Design (CAD) models of the GRA as shown in Figure 2A were first created with Solidworks 2016 (Dassault Systèmes) as described previously, with the exception of co-printing with PVA instead of thermoplastic polyurethane (orange region of Figure 2A). Stereolithography (STL) files were then digitally sliced and converted to a print path in G-code (3D Slicer). The converted and optimized G-code was then 3D printed with a multi-material Fused Deposition Modeling (FDM) 3D printer (Ultimaker 3, Ultimaker). PLA and PVA filaments (Ultimaker) with a diameter of 2.85 mm were used to create the stiff and supporting components respectively, where the PVA is then removed with water. The PCL elastomer is synthesized by first mixing a 6:1.3:0.027:9.5 molar ratio of PCL diol (MW 530, Sigma Aldrich), PCL triol (MW 900, Sigma Aldrich), linear PCL (MW 45,000, Sigma Aldrich), and hexamethylene diisocyanate (Sigma Aldrich), as described in [19]. The prepolymer is then cast into the removed PVA structure (orange region of Figure 2A) of the 3D printed model resting on a negative mold to create the PCL-PLA based GRA structure. The PCL-PLA GRA structure demonstrates a folding force from an average of 6.7 N (at the first cycle) to 6.5 N (after 3000 cycles), as described in Figure S2. This is of the same order as the folding force described for the GRA in Figure S1. The folding force decreases from 6.5 N after 3000 cycles to 3.1 N after 5980 cycles, which is due to the weakening and subsequent fracture of one of the gastric residence arms. This is likely due to a crack propagated from a microscopic bubble in the cast PCL elastomer. Future work can improve the materials synthesis process to improve the fatigue resistance of the device, for instance by developing a PCL elastomer 3D printing strategy to replace the casting process. In summary, we show that the fabrication procedure of the GRA and GRE can be modified to incorporate thermoset plastics that cannot be directly 3D printed through FDM. This can potentially enable the incorporation of FDA-approved materials and novel responsive materials, such as enteric polymers, that can further minimize potential clinical complications.

Figure S2: Maximum folding force measured with the funnel test apparatus to simulate the passage of the PCL-PLA GRA through the pylorus. The measurement was repeated for 6000 cycles to evaluate the fatigue properties. Note that only 1 out of 70 points is plotted in this graph to clearly illustrate the standard deviation.
Remote triggering demonstrations: Two proof-of-concept experiments were performed in vivo to demonstrate the ability to achieve remote triggering with the GRE in the stomach of a large animal model. Specifically, an electroactive adhesive was developed to achieve GRA separation from the electronics as well as remote delivery of drugs.
Synthesis of electroactive adhesive: First, a low-melting-temperature, electrically conductive nanocomposite was synthesized. Specifically, poly(ε-caprolactone) (Sigma Aldrich) and 10 wt% carbon nanotubes (Sigma Aldrich) were mixed by twin-screw micro-compounding (Xplore Instruments, Netherlands) to create a 3D printable filament with an average diameter of 1.75 mm and an electrical conductivity of 100 S/m. The electroactive adhesive was electrically connected to a microcontroller switch in the GRE via printed conductive traces. The electroactive adhesive was used to compress a spring within the 3D printed PLA structures. Upon wireless triggering with an Android tablet, Joule heating melts the composite matrix, weakening the adhesive strength and allowing the stored elastic energy in the spring to cause structural separation. We show that such triggering can be achieved in vivo in the gastric cavity, as shown in the following endoscopy image sequences. We anticipate that future work, which is beyond the scope of this paper, will include a long-term in vivo assessment of the robustness of this interface and optimization of power consumption to reduce the device footprint.
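A back-of-the-envelope Joule-heating estimate shows why the measured conductivity is in a useful range. Only the 100 S/m figure is taken from the text; the adhesive geometry and the drive voltage in this Python sketch are illustrative assumptions.

sigma = 100.0    # conductivity of the PCL-CNT composite, S/m (measured)
length = 5e-3    # current path through the adhesive, m (assumed)
area = 0.5e-6    # cross-section, m^2 (assumed 0.5 mm^2)

resistance = length / (sigma * area)   # R = L / (sigma * A) = 100 ohm
voltage = 2.0                          # drive voltage, V (assumed)
power = voltage ** 2 / resistance      # dissipated Joule power, W

print(f"R = {resistance:.0f} ohm, P = {power * 1e3:.0f} mW")
# ~40 mW: within the 45 mW system budget quoted below, and enough to
# warm a millimetre-scale joint past the ~60 C melting point of PCL
# within seconds.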
(1) In vivo triggered GRA separation: To demonstrate the ability to achieve device separation, a GRA was bonded with electroactive adhesive (see "Synthesis of electroactive adhesive" above for the detailed synthesis) to a "head" of a 3D printed GRE. The device was delivered to the stomach of a pig (see "In vivo experiments" in the Experimental Methods for a detailed description of the in vivo procedure). To help capture the separation process, the GRA arms were tied. As described in Figure S3A, the device was initially intact. The separation was then triggered via an Android tablet, and the GRA separated after a minute. A slight movement of the separated structure shows that the GRA was completely detached from the "head" of the GRE. Figure S3C shows the separated device, where the compression system (an embedded spring) can be observed.
Figure S3: Triggerable GRE device separation. (A) Prior to triggering, the GRA is bonded to the "head" of the GRE electronics with electroactive adhesive. (B) Upon triggering, the GRA is separated as the adhesive fails. (C) A slight movement of the separated structure shows that the GRA was completely detached from the "head" of the GRE, where the compression system (an embedded spring) of the device is exposed.
(2) In vivo wireless triggerable release of a drug-reservoir cover: Gold, which is otherwise inert in an acidic environment, can be electrochemically corroded by shifting the electrochemical potential. We have previously shown that, using gold as the drug release membrane, ingestible electronics can be used to power the release of micrograms of a model drug (methylene blue) from a reservoir (2 mm × 1 mm × 1.5 mm) sealed with a 300 nm thick gold membrane. During corrosion, the maximum power consumption required is 0.8 mW, which is well within the maximum power affordable by the GRE system (45 mW). To demonstrate the ability to achieve wireless large-volume drug delivery, a 3D printed PLA drug window (4 mm × 4 mm × 0.5 mm) encapsulated a doxycycline powder reservoir 3D printed at the "head" of the 3D printed GRE, where the electroactive adhesive compressed a spring (see "Synthesis of electroactive adhesive" above for the detailed synthesis). As shown in Figure S4A, the device was delivered to the stomach of a pig using the procedure described earlier (see "In vivo experiments" in the Experimental Methods). Upon triggering, Joule heating of the electroactive adhesive causes the release of the reservoir cover, allowing the infiltration of gastric fluid to dissolve the encapsulated drugs, as shown in Figure S4B. We note that the triggered opening of the reservoir cover was successful despite the mucosal coverage of the delivery site (see the attached Supplementary Videos). Figure S4C shows the compression system after the mucus covering the triggered well was removed by injecting water through the endoscope. This experiment was repeated in two different pigs with two other devices, and all were successful. We hypothesize that the infiltration of gastric fluid into the opened drug cover will dissolve the water-soluble doxycycline. Here, we have demonstrated the ability to achieve the wireless release of a drug-reservoir cover. Such a system should be compatible with storing ingestible pills for delivery. We note that further detailed in vivo experiments with pharmacokinetic studies are needed to fully demonstrate the efficacy of drug delivery for the specific drug of interest.
In summary, we demonstrate that on-demand mechanical and structural changes can be achieved with the GRE chipset, which can be used for releasing drug-containing reservoirs in vivo, among other potential applications. Supplementary movies S1, S2: Supporting videos show the in vivo triggerable release of drug-reservoir covers of two representative devices in a porcine stomach.
Electroacupuncture alleviates morphine-induced hyperalgesia by regulating spinal CB1 receptors and ERK1/2 activity
Electroacupuncture (EA), a traditional Chinese therapeutic technique, is considered an effective method for treating certain painful neuropathies induced by various neuropathological damage. The current study investigated the effect of EA on intrathecal (IT) morphine-induced hyperalgesia (MIH) and examined the hypothesis that activation of cannabinoid receptor 1 (CB1) could enhance the antinociceptive effect of EA on MIH via regulation of the extracellular signal-regulated kinase 1/2 (ERK1/2) signaling pathway. Using a rat model of IT MIH, mechanical and thermal hyperalgesia were evaluated by an electronic von Frey filament and hotplate at baseline (1 day before IT administration) and at days 1, 3, 5 and 7 after IT administration. Rats received IT normal saline, IT morphine or IT morphine + EA at ST36-GB34. The protein levels of ERK1/2, phosphorylated (p)-ERK1/2 and CB1 in the spinal cord were assayed by western blotting. Furthermore, the effect of IT injection of the CB1 agonist WIN 55,212-2 and the CB1 antagonist SR141716 on the antinociceptive effect of EA in rats with MIH was investigated. Nociceptive behavior and ERK1/2, phosphorylated (p)-ERK1/2 and CB1 protein levels were evaluated as mentioned above. The results revealed that chronic IT injections of morphine induced a significant decrease in mechanical withdrawal threshold (MWT) and thermal withdrawal latency (TWL) accompanied with remarkable upregulation of p-ERK1/2 in the spinal cord, which could be attenuated by EA at the ST36-GB34 acupoints. In the rat model of MIH, IT injection of WIN 55,212-2 combined with EA induced a significant increase in MWT and TWL accompanied with a significant decrease in p-ERK1/2 and a significant increase in CB1 protein level compared with EA alone, while SR141716 induced the opposite results. The present study suggests that EA alleviates hyperalgesia induced by IT injection of morphine partially through the inhibition of ERK1/2 activation. Activation of the CB1 receptor enhances the antinociceptive effect of EA in rats with MIH partly through the regulation of the spinal CB1-ERK1/2 signaling pathway.
Introduction
Morphine-induced hyperalgesia (MIH) is a type of classic opioid-induced hyperalgesia (OIH) characterized by increased sensitivity to noxious stimuli, or even a painful response to previously non-noxious stimuli (allodynia), induced by long-term use of morphine (1). It has been suggested that paradoxical pain can be elicited by chronic opioid exposure in humans and in animal models (2). Certain neuroplastic adaptations, including increased expression of calcitonin gene-related peptide, substance P and various nociceptive receptors, are considered possible mechanisms underlying OIH (3,4). The activation of mitogen-activated protein kinase (MAPK) in the central and peripheral nervous systems has been indicated as a possible signaling pathway in morphine-induced neuroplastic adaptations by a series of findings from different laboratories (4). Extracellular signal-regulated protein kinase 1/2 (ERK1/2), a member of the MAPK family, serves an important role in OIH; it is activated by phosphorylation and mediates the synthesis and expression of downstream neuropeptides. The phosphorylated form of ERK1/2 (p-ERK1/2) reflects the activation of ERK1/2 and transfers the signal to the nucleus, causing cell damage (5)(6)(7). Cannabinoid receptors, which comprise two subtypes, CB1 and CB2, belong to the G protein-coupled receptor superfamily and are involved in the modulation of pain sensation (8)(9)(10). CB1, which is mainly expressed in the central nervous system (CNS), is considered to mediate pain sensation in the CNS (11). CB1-knockout mice exhibited reduced locomotor activity and hypoalgesia in hot plate and formalin tests (12). In addition, upregulation of the CB1 receptor, primarily within the ipsilateral superficial spinal cord dorsal horn, was revealed in a sciatic nerve injury (chronic constriction injury)-induced hyperalgesia model in rats, which was partially attributed to ERK activation (13). A previous study suggested that the CB1-selective cannabinoid receptor antagonist AM251 completely reversed the peripheral antinociception induced by the µ-opioid receptor agonist morphine but not by agonists of δ- or κ-opioid receptors, which indicated that CB1 is involved in the analgesic mechanism of morphine (14). The CB2 receptor is expressed mainly in the immune system and in hematopoietic cells (15). A previous study indicated that co-administration of a selective CB2 agonist (AM1241) attenuates the thermal hyperalgesia and tactile allodynia mediated by chronic intraperitoneal morphine exposure in rats, which is partially due to attenuated immunoreactivity of spinal astrocyte and microglial markers and of the pro-inflammatory mediators interleukin-1β and tumor necrosis factor-α (16). However, the exact association between MIH and CB1/2 in the CNS is poorly understood. Electroacupuncture (EA) has been demonstrated to effectively mitigate hyperalgesia induced by chronic constriction injury (CCI) and cancer pain caused by intraplantar injection of Walker 256 carcinoma cells in rats (17,18). In addition, EA combined with a sub-threshold dose of morphine (2.5 mg/kg) enhanced the anti-inflammatory hyperalgesia effect compared with that produced by each component alone in rats, which indicated a synergistic association between EA and morphine in this regard (19).
However, whether EA can attenuate the hyperalgesia induced by chronic morphine exposure is still unknown. Another study revealed that EA inhibited zymosan-induced hypernociception in rats. The CB1-selective antagonist AM251 and the CB2-selective antagonist AM630 significantly reversed the antinociceptive and anti-inflammatory effects of EA, respectively, suggesting that CB1 and CB2 are involved in the mechanism of EA (20).
As mentioned above, ERK1/2, a classic member of the MAPK family, is involved in the mechanism of OIH. However, ERK1/2 is also considered a mediator between EA effects and CBs (21). In the Freund's complete adjuvant-induced hind paw pain model in rats, thermal hyperalgesia and ERK phosphorylation in the ipsilateral dorsal horn of the L4-5 segments were inhibited by EA stimulation (22). Notably, nocifensive behavior and activation of ERK1/2 in the lumbar dorsal spinal cord were also observed following intrathecal (IT) injection of a CB1 receptor antagonist, AM251, both of which were inhibited by IT injection of a MAPK/ERK kinase inhibitor, U0126 (23). However, it is unknown whether ERK1/2 is involved in mediating EA's effect through CB1/2 in the spinal cord following MIH.
The present study hypothesized that EA could ameliorate MIH and that this effect is partially mediated by CBs via the ERK1/2 signaling pathway. The present study aimed to evaluate the effect of EA on nociceptive behavior, as well as on the activation of ERK1/2, and to investigate the effect of CB1 activation or inhibition in regulating the effect of EA via the ERK1/2 activation state in rats undergoing MIH.
Materials and methods
Animals. All experimental protocols were approved by the Animal Experimental Ethics Committee of Tianjin Medical University General Hospital (Tianjin, China). A total of 128 adult male Sprague-Dawley rats, weighing 240±20 g each, were obtained from the Laboratory Animal Center of the Military Medical Science Academy (Beijing, China). For 1 week before the experiments, all animals were housed in cages (5 rats per cage) at room temperature (20-22˚C) with 30-70% humidity on a 12 h light-dark cycle, were fed a standard diet and had access to water.
IT morphine delivery. The rats were anesthetized with 3% sevoflurane plus 60% oxygen, and catheterization of the rat spinal subarachnoid space was performed on anesthetized rats 3 days before morphine administration, as described by Yaksh and Rudy (24). Briefly, rats were implanted with a PE-10 polyethylene catheter (8 cm) into the lumbar subarachnoid space. Rats with no postoperative neurological deficits were kept for the experiments. Animals showing postoperative neurological dysfunction, such as paralysis, were immediately sacrificed using carbon dioxide. Following surgery, rats were kept in individual cages for 3 days before morphine administration.
EA. Rats received EA stimulation (16 rats/group; 2 Hz, 1.5 mA, 30 min) 20 min after each administration, as described by Yu et al (18). Briefly, rats without any anesthetic drug received acupuncture with two pairs of stainless steel acupuncture needles connected to two pairs of electrodes. Each pair of needles was inserted perpendicularly ~6 mm into the ipsilateral acupoints on the hind legs of the rats. Acupoints were located according to the Zusanli (ST36) and Yanglingquan (GB34) acupoints in humans. In rats, ST36 is located at the proximal 1/5 point on the line from the depression lateral to the patella ligament, while GB34 is located at the depression anterior and inferior to the fibular head (19). A total of 2 non-acupoints were located 0.5 cm horizontally and laterally to the ST36 and GB34 acupoints, respectively, at non-meridian points. A constant electronic pulse (2 Hz, 1.5 mA) was administered by an electroacupuncture stimulator (SDZ-II; Suzhou Medical Appliance Factory, Suzhou, China), which was connected to the other end of the electrodes. When EA started, current was delivered from one acupoint (ST36 or GB34) to the corresponding non-acupoint. Rats in the control group (n=16) received the acupuncture needles at the same points as the rats in the EA group but without EA treatment. These rats were kept in tubular acrylic holders for 30 min and served as controls.
Experimental protocol. Experiment 1: Effects of EA on MIH. The animals were randomly divided into 3 groups (n=16 rats/group): the control group (C); the chronic morphine group (M); and the morphine + EA at ST36-GB34 group (ME). Animals in the M and ME groups were IT administered 15 µg (10 µl) morphine twice daily, at 8 am and 6 pm, for 8 days. Animals in the C group were IT treated daily with 10 µl saline at the same times as the M group for 8 days. The animals in the ME group received EA stimulation (2 Hz, 1.5 mA, 30 min, twice/day) at the Zusanli-Yanglingquan acupoints (ST36-GB34) 20 min after morphine or saline administration every day. The mechanical withdrawal threshold (MWT; n=8 rats/group) and thermal withdrawal latency (TWL; n=8 rats/group) were determined at baseline (24 h before IT administration, day -1) and at the same time following the second treatment on days 1, 3, 5 and 7 after drug administration. On the 8th day after drug administration, 6 rats in each group were randomly selected and the L4-6 segments of the spinal cord were collected for determination of the levels of ERK1/2, p-ERK1/2 and CB1 in the intumescentia lumbalis of the spinal cord (as shown in Fig. 1).
Experiment 2: Role of CB1 in the protective effects of EA against MIH. The CB1 agonist WIN 55,212-2 and antagonist SR141716 were used in this experiment. The animals were randomly divided into 5 groups (n=16 rats/group): i) the C group; ii) the M group; iii) the ME group; iv) the morphine + EA treatment + CB1 agonist WIN 55,212-2 group (MEW); and v) the morphine + EA treatment + CB1 antagonist SR141716 group (MES). Saline, morphine and EA treatment were administered as in Experiment 1. WIN 55,212-2 (Cayman Chemical Company, Ann Arbor, MI, USA; 30 µg) (25) and SR141716 (Cayman Chemical Company; 30 µg) (26), which were dissolved in 5% dimethyl sulfoxide (10 µl; Sigma-Aldrich; Merck KGaA, Darmstadt, Germany), were IT administered to the MEW and MES groups, respectively, following morphine administration, followed by 10 µl normal saline. The animals in the C, M and EA groups received the same volume of vehicle under identical conditions. MWT (n=8 rats/group) and TWL (n=8 rats/group) were determined at baseline (24 h before IT administration) and at the same time after the second treatment on days 1, 3, 5 and 7 post-administration. On day 8 after administration, 6 rats in each group were randomly selected and the L4-6 segments of the spinal cord were collected to detect the levels of ERK1/2, p-ERK1/2 and CB1 in the intumescentia lumbalis of the spinal cord (as shown in Fig. 1).
Mechanical hyperalgesia. On days 1, 3, 5 and 7 post-administration, 8 rats per group were chosen for the mechanical hyperalgesia test. Mechanical hyperalgesia was assessed using an electronic von Frey filament (BSEVF3; Harvard Apparatus, Holliston, MA, USA), as described previously (18). Animals were placed in individual wire cages (20x20x30 cm) and allowed to acclimatize for 1 h before testing. Mechanical allodynia was determined by calculating the mean value of 3 MWT measurements with an interval of 5 min between each measurement. A cut-off pressure of 60 g was used to prevent tissue damage.
Thermal hyperalgesia. On days 1, 3, 5 and 7 post-administration, another 8 rats per group (not the same rats used for the mechanical hyperalgesia test) were chosen for the thermal hyperalgesia test. Thermal hyperalgesia was determined with intelligent hot plate equipment (YLS-6B; Zhenghua Biologic Apparatus Facilities Ltd., Co., Hefei, China), as described previously (18). Animals were allowed to habituate to the environment for 1 h before testing. Animals were placed on the hot plate (50˚C) until a positive response (a clear paw withdrawal) was observed, and the elapsed time was recorded as the TWL. The mean TWL was obtained from 3 measurements of TWL with an interval of 5 min between each. A cut-off time of 30 sec was used to prevent tissue damage.
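For clarity, the measurement rule used in both tests (the mean of three readings, each capped at a safety cut-off) can be written as a small helper; this is a hypothetical illustration with made-up readings, not code or data from the study.

```python
def withdrawal_value(readings, cutoff):
    """Mean of repeated withdrawal measurements, each capped at the
    safety cut-off used to prevent tissue damage."""
    capped = [min(r, cutoff) for r in readings]
    return sum(capped) / len(capped)

# Mechanical withdrawal threshold: three von Frey readings, 60 g cut-off.
mwt = withdrawal_value([42.5, 51.0, 47.3], cutoff=60.0)   # grams (example data)

# Thermal withdrawal latency: three hot-plate readings, 30 sec cut-off.
twl = withdrawal_value([12.1, 14.8, 13.5], cutoff=30.0)   # seconds (example data)

print(f"MWT = {mwt:.1f} g, TWL = {twl:.1f} s")
```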
Protein analysis by western blotting. Tissues from the lumbar spinal cord (n=6 rats/group) were quickly removed under anesthesia on day 8 after administration and immediately frozen in liquid nitrogen at -196˚C. Tissues were homogenized in immunoprecipitation buffer (Thermo Fisher Scientific, Inc., Waltham, MA, USA) containing protease inhibitor cocktail (Sigma-Aldrich; Merck KGaA) and centrifuged at 15,000 x g for 5 min at 4˚C. Protein concentration was determined by the bicinchoninic acid assay method (Pierce; Thermo Fisher Scientific, Inc.). Equal quantities of protein (30 µg) were used to determine the protein expression of ERK1/2, p-ERK1/2, CB1 and β-actin. Samples were separated using SDS-PAGE (8-10% gradient gels) and then transferred to nitrocellulose membranes. The membranes were blocked with 5% non-fat milk for 1 h at room temperature and incubated with anti-ERK1/2 (1:500; cat. no. 4696; Cell Signaling Technology, Inc., Danvers, MA, USA), anti-p-ERK1/2 (1:500; cat. no. 5726; Cell Signaling Technology, Inc.) or anti-CB1 antibody.

Figure 1. Experimental design. Male Sprague-Dawley rats (weighing 240±20 g each) randomly received C, M or ME treatment. Animals in the M and ME groups were IT administered 15 µg (10 µl) morphine twice daily at 8 am and 6 pm for 8 days, while animals in the C group were IT treated with 10 µl saline at the same times as the M group daily for 8 days. Furthermore, the animals in the ME group also received EA stimulation (2 Hz, 1.5 mA, 30 min) at the Zusanli-Yanglingquan acupoints (ST36-GB34), 20 min after morphine or saline administration every day. In order to verify the key role of CB1, the CB1 agonist (WIN 55,212-2) and antagonist (SR141716) were IT administered, respectively, following morphine, while animals in the other groups were injected with the same volume of saline. Mechanical withdrawal threshold and thermal withdrawal latency were determined at baseline (24 h before IT administration, day -1) and at the same time after the second treatment on days 1, 3, 5 and 7 after administration. On the 8th day after administration, the L4-6 segments of the spinal cord were collected for determining the levels of ERK1/2, phosphorylated ERK1/2 and CB1 in the intumescentia lumbalis of the spinal cord. IT, intrathecally; EA, electroacupuncture; CB1, cannabinoid receptor 1; ERK1/2, extracellular signal-regulated kinase 1/2; C, control; M, chronic morphine; ME, morphine + EA at ST36-GB34.
Statistical analysis. All data are reported as the mean ± standard deviation. An unpaired Student's t-test was used if the values had a Gaussian distribution, while the Mann-Whitney test was used if they did not, to analyze differences between 2 groups; one-way analysis of variance with Bonferroni comparison was employed to analyze interactions among multiple groups. P<0.05 was considered to indicate a statistically significant difference. Significance testing was 2-tailed. Statistical analysis was performed using GraphPad Prism software (version 5.0; GraphPad Software, Inc., La Jolla, CA, USA) and SPSS statistical software (version 16.0; SPSS, Inc., Chicago, IL, USA).
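The decision rule described above (normality check, then t-test or Mann-Whitney for two groups, and one-way ANOVA across several groups) can be sketched with SciPy as follows; this is an illustrative re-implementation with simulated data, not the Prism/SPSS analysis actually used.

```python
import numpy as np
from scipy import stats

def compare_two_groups(a, b, alpha=0.05):
    """Unpaired t-test if both samples look Gaussian (Shapiro-Wilk),
    otherwise Mann-Whitney U, mirroring the rule described above."""
    def is_normal(x):
        _, p = stats.shapiro(x)
        return p > alpha
    if is_normal(a) and is_normal(b):
        return stats.ttest_ind(a, b)                        # Student's t-test
    return stats.mannwhitneyu(a, b, alternative="two-sided")

# Simulated MWT data (grams) for groups of n=8 rats -- example values only.
rng = np.random.default_rng(0)
group_c = rng.normal(50, 5, 8)   # control
group_m = rng.normal(38, 5, 8)   # chronic morphine
print(compare_two_groups(group_c, group_m))

# For more than two groups: one-way ANOVA; a Bonferroni comparison then
# multiplies each pairwise p-value by the number of comparisons made.
group_me = rng.normal(46, 5, 8)  # morphine + EA
print(stats.f_oneway(group_c, group_m, group_me))
```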
Results
EA at ST36-GB34 acupoints attenuates IT MIH. Compared with the control animals (group C), IT administration of morphine (group M) significantly decreased MWT and TWL on days 5 and 7 (P<0.05; Fig. 2A and B), which indicated that the animal model of IT morphine-induced hyperalgesia was successfully established. In addition, compared with the M group, combined EA at the ST36-GB34 acupoints (group ME) induced a significant increase in MWT and TWL on days 5 and 7 (P<0.05; Fig. 2A and B), which indicated that EA at the ST36-GB34 acupoints significantly reduced the mechanical and thermal hyperalgesia induced by IT administration of morphine.
Inhibition of the spinal cord ERK1/2 activation and increased expression of CB1 caused by EA at the ST36-GB34 acupoints may be involved in the protective effects of EA against MIH.
Compared with the control group (C), repeated IT treatment with morphine did not affect the protein levels of the CB1 receptor in the spinal cord. However, in comparison with the repeated morphine administration group (M), repeated morphine plus EA increased the expression of the CB1 receptor in the spinal cord (P<0.05; Fig. 3A and B). Furthermore, there was also a significant increase in p-ERK1/2 (P<0.05; Fig. 3C and D) but not in ERK1/2 (P>0.05; Fig. 3C and D) levels in the spinal cord of animals in the chronic morphine (M) group compared with the C group. EA at the ST36-GB34 acupoints (ME) induced a significant decrease in p-ERK1/2 (P<0.05; Fig. 3C and D) but not ERK1/2 (P>0.05; Fig. 3C and D) levels compared with the M group, which indicated that EA stimulation at acupoints attenuated the level of ERK1/2 activation caused by chronic IT administration of morphine.
IT administration of WIN 55,212-2/SR141716 enhances/attenuates the inhibitory effects of EA on MIH at the ST36-GB34 acupoints. As mentioned above, IT administration of morphine significantly decreased MWT and TWL (P<0.05 in group M vs. group C; Fig. 4A and B). EA at the ST36-GB34 acupoints induced a significant increase in MWT and TWL (P<0.05 in the ME group vs. the M group; Fig. 4A and B). However, compared with the ME group, IT administration of WIN 55,212-2 (group MEW) significantly increased MWT and TWL on days 3-7 (P<0.05 for MWT on day 3 and TWL on day 7; P<0.01 for MWT on days 5 and 7 and for TWL on days 3 and 5; Fig. 4A and B). Compared with the ME group, IT administration of SR141716 (group MES) significantly decreased MWT and TWL on days 3 to 7 (P<0.01; Fig. 4A and B). These results demonstrated that activation of CB1 significantly enhanced the inhibitory effect of EA on IT morphine-induced mechanical and thermal hyperalgesia at the ST36-GB34 acupoints, while inhibition of CB1 attenuated the inhibitory effect of EA.

Figure 2. Effects of EA on the behavioral tests of morphine-induced hyperalgesia. On day -1 (1 day before IT administration) and on days 1, 3, 5 and 7 after drug administration, rats received IT normal saline, IT morphine or IT morphine + EA at ST36-GB34. Next, (A) mechanical hyperalgesia and (B) thermal hyperalgesia were evaluated by electronic von Frey filament and hot plate, respectively. Data are expressed as the mean ± standard deviation (n=8 rats/group for the mechanical hyperalgesia test and n=8 rats/group for the thermal hyperalgesia test). *P<0.05 vs. the C group, #P<0.05 vs. the M group. IT, intrathecal; EA, electroacupuncture; C, control; M, chronic morphine; ME, morphine + EA at ST36-GB34.
Inhibition of ERK1/2 activation induced by CB1 overexpression may enhance the inhibitory effects of EA on MIH at the ST36-GB34 acupoints. There were significant increases in p-ERK1/2 levels in the spinal cord of the animals in the chronic morphine (M) group compared with those in the C group (P<0.05 M vs. C; Fig. 5C and D). These increases were significantly attenuated by EA at the ST36-GB34 acupoints (group ME; P<0.05; Fig. 5C and D). Compared with the ME group, IT administration of the CB1 agonist WIN 55,212-2 combined with EA significantly increased the CB1 levels but decreased the p-ERK1/2 levels in the spinal cord of rats with IT MIH (P<0.01; Fig. 5). On the contrary, there was a significant decrease in CB1 protein level and a significant increase in p-ERK1/2 level in the spinal cord of rats that received IT administration of the CB1 antagonist SR141716 combined with EA (P<0.05 compared with the ME group; Fig. 5A). There was no significant difference in total ERK1/2 levels across all groups (P>0.05; Fig. 5C and D). These results indicated that EA at the ST36-GB34 acupoints may have a protective effect against MIH through upregulating CB1 and downregulating ERK1/2 activation.

Figure 3. Effects of EA on CB1, ERK1/2 and p-ERK1/2 expression in morphine-induced hyperalgesia. Spinal cord tissue from different groups was collected 8 days after intrathecal treatment with saline, morphine and EA. (A) CB1 and (C) ERK1/2 and p-ERK1/2 levels were detected by western blotting. Quantitative analyses of (B) CB1 and (D) ERK1/2 and p-ERK1/2 are shown as the ratio of protein relative density to β-actin. Data are expressed as the mean ± standard deviation (n=6 rats/group). *P<0.05 vs. the C group, #P<0.05 vs. the M group. EA, electroacupuncture; CB1, cannabinoid receptor 1; ERK1/2, extracellular signal-regulated kinase 1/2; p, phosphorylated; C, control; M, chronic morphine; ME, morphine + EA at ST36-GB34.

Figure 4. Effects of EA upon administration of WIN 55,212-2 or SR141716 on the behavioral tests of morphine-induced hyperalgesia. On day -1 (1 day before IT administration) and on days 1, 3, 5 and 7 after drug administration in rats that received IT normal saline, IT morphine, IT morphine + EA at ST36-GB34, IT morphine + EA treatment + CB1 agonist WIN 55,212-2 or IT morphine + EA treatment + CB1 antagonist SR141716, (A) mechanical hyperalgesia and (B) thermal hyperalgesia were assessed by electronic von Frey filament and hot plate, respectively. Data are expressed as the mean ± standard deviation (n=8 rats/group for the mechanical hyperalgesia test and n=8 rats/group for the thermal hyperalgesia test). *P<0.05 vs. the C group, #P<0.05 vs. the M group, ##P<0.01 vs. the M group, &P<0.05 vs. the ME group. IT, intrathecal; EA, electroacupuncture; CB1, cannabinoid receptor 1; C, control; M, chronic morphine; ME, morphine + EA at ST36-GB34; MEW, morphine + EA treatment + CB1 agonist WIN 55,212-2 group; MES, morphine + EA treatment + CB1 antagonist SR141716 group.
Discussion
The present study used a rat model of chronic MIH to investigate the antinociceptive mechanism of EA and to explore the role of CB1 in this mechanism. The results of the present study indicated that 2-Hz EA at the ST36-GB34 acupoints attenuated MIH, which was accompanied by an increase in CB1 levels and a decrease in p-ERK1/2 levels. The present results also revealed that IT administration of the CB1 agonist WIN 55,212-2 combined with EA at the above acupoints enhanced the antinociceptive effect of EA on MIH and induced an increase in CB1 levels and a decrease in p-ERK1/2 levels compared with administration of EA alone, while the CB1 antagonist SR141716 had the opposite effect. These data indicated that EA at the ST36-GB34 acupoints may have a protective effect against MIH through upregulating CB1 and downregulating ERK1/2 activation. As a type of traditional Chinese therapy, EA is used to treat various diseases. There is increasing evidence that EA may have clinical potential in attenuating certain types of chronic pain, including adjuvant arthritis, CCI and cancer-associated pain (17,18,27). Frequency is regarded as an important parameter in EA treatment, with 2, 15 and 100 Hz being the most commonly used frequencies for analgesic therapy. Among these frequencies, 2-Hz EA has been demonstrated to have a better analgesic effect for neuropathic pain; thus, this frequency was selected in the present study (18). The data from the present study revealed that 2-Hz EA at the ST36-GB34 acupoints, but not at non-acupoints, attenuated mechanical and thermal hyperalgesia caused by IT administration of morphine, which is similar to the findings of previous studies regarding the antinociceptive effect of EA at the ST36 and GB34 acupoints (17,18,28). Activation of ERK1/2 within spinal neurons by various peripheral noxious stimulations has been reported to be involved in generating pain hypersensitivity (29). Activation of ERK1/2 induces short-term functional changes by non-transcriptional processing and long-term neuronal plastic changes by increasing the gene transcription of hyperalgesia-associated downstream neuropeptides (30). Furthermore, there are accumulating data on the roles of ERK in mediating the neuronal plasticity that contributes to MIH (4). Previous evidence has suggested that activation of ERK in the spinal cord is implicated in the formation of MIH (7,31). It was reported that IT injection of morphine for 7 days induced a remarkable increase in p-ERK1/2 levels in the spinal cord of rats, which contributed to morphine tolerance and associated hyperalgesia (7,32). Inhibition of ERK1/2 activation by IT injection of the ERK1/2 inhibitor U0126 or knockdown of spinal ERK1/2 by antisense oligonucleotides attenuated withdrawal-induced mechanical allodynia in rats (31,33). Consistent with previous studies, the results of the present study indicated that p-ERK1/2 levels in the spinal cord were significantly increased by IT injection of morphine (15 µg, twice a day) for 7 days in rats with MIH. Recent studies suggested that EA at acupoints attenuated hyperalgesia caused by peripheral noxious stimulation and decreased the activation of ERK1/2 (18,28). The results of the present study revealed that, accompanied by attenuated mechanical and thermal hyperalgesia, EA at the ST36-GB34 acupoints induced a decrease in p-ERK1/2 levels in the spinal cord of rats that received IT morphine for 7 days. These data demonstrated that inhibition of ERK1/2 activation is at least partially involved in the EA treatment of MIH. CB1 is highly expressed in regions involved in pain transmission and modulation, including the majority (76-83%) of nociceptive neurons of the dorsal root ganglia, the dorsal horn of the spinal cord, the thalamus and the periaqueductal grey (34,35). In the spinal cord, the results revealed that CB1 levels were slightly increased by IT morphine, and EA administration could greatly increase the expression of CB1 under IT morphine administration. Upregulation of CB1 partially participated in the antinociceptive effect of EA, and CB1 may serve a major role in this process at the level of the spinal cord, which is in agreement with a previous study (36).

Figure 5. Spinal cord tissue from different groups was collected 8 days after intrathecal treatment with saline, morphine, EA, WIN 55,212-2 and SR141716. (A) CB1 and (C) ERK1/2 and p-ERK1/2 were detected by western blotting. Quantitative analyses of (B) CB1 and (D) ERK1/2 and p-ERK1/2 are shown as the ratio of protein relative density to β-actin. Data are expressed as the mean ± standard deviation (n=6 rats/group). *P<0.05 vs. the C group, #P<0.05 vs. the M group, ##P<0.01 vs. the M group, &P<0.05 vs. the ME group. EA, electroacupuncture; CB1, cannabinoid receptor 1; ERK1/2, extracellular signal-regulated kinase 1/2; p, phosphorylated; C, control; M, chronic morphine; ME, morphine + EA at ST36-GB34; MEW, morphine + EA treatment + CB1 agonist WIN 55,212-2 group; MES, morphine + EA treatment + CB1 antagonist SR141716 group.
As aforementioned, upregulation of the CB1 receptor in the spinal cord was observed in a sciatic nerve injury-induced hyperalgesia model in rats (13). Another study suggested that the CB1 antagonist AM251 completely reversed the peripheral antinociception induced by morphine, which demonstrated that CB1 is involved in the analgesic mechanism of morphine (14). Consistent with earlier studies (13,14), the results of the present study revealed that chronic IT injection of morphine significantly increased CB1 protein levels along with the onset of hyperalgesia. In a pain model of L5 spinal nerve ligation, intraperitoneal injection of the CB1 agonist WIN 55,212-2 significantly attenuated mechanical hyperalgesia and thermal allodynia, while co-administration of the CB1 antagonist SR141716, but not the CB2 antagonist SR144528, prevented this effect, suggesting that this effect of WIN 55,212-2 is mediated via the CB1 receptor (37). Additionally, CB1 has also been considered to be involved in the mechanism of EA treatment. The CB1-selective antagonist AM251 significantly reversed the antinociceptive and anti-inflammatory effects of EA in a rat model of zymosan-induced hypernociception (20). The results of the present study revealed that IT injection of the CB1 agonist WIN 55,212-2 enhanced the antinociceptive effect of EA and induced a significant increase in CB1 protein levels in a rat model of MIH, while IT injection of the CB1 antagonist SR141716 induced the opposite results. These data demonstrated that the CB1 receptor system is partially involved in the mechanism of EA treatment. Various studies have suggested that the ERK signaling pathway is involved in the antinociceptive mechanism of the CB1 receptor system (23,38). Katsuyama et al (23) demonstrated that IT injection of the CB1 antagonist AM251 induced a remarkable activation of ERK1/2 in the spinal cord along with nocifensive behavior in mice, while the CB1 agonist ACEA and the MAPK/ERK inhibitor U0126 reversed these results. A previous study suggested that the ERK1/2 signaling pathway may be involved in EA pretreatment-induced cerebral ischemic tolerance via the cannabinoid CB1 receptor in rats (21). The results of the present study revealed that the CB1 agonist WIN 55,212-2 combined with EA decreased the p-ERK1/2 levels compared with EA treatment alone, while the CB1 antagonist SR141716 induced the opposite results. These data demonstrated that the enhancement produced by the CB1 agonist WIN 55,212-2 of the effect of EA in attenuating MIH was partially mediated by inhibition of ERK1/2 activation. However, the results of the current study revealed that EA treatment alone increased the CB1 protein levels and decreased the p-ERK1/2 levels in rats with MIH, which indicated that other mechanisms probably participate in the inhibition of ERK1/2 activation.
In conclusion, the present study suggests that EA produces an antinociceptive effect on IT morphine-induced hyperalgesia partially through the inhibition of ERK1/2 activation. Activation of the CB1 receptor enhanced the EA-mediated antinociceptive effect on MIH partially through regulation of the spinal CB1-ERK1/2 signaling pathway. The current study may contribute to the understanding of the antinociceptive mechanism of EA and to developing novel methods for the treatment of MIH.
Bats possess the anatomical substrate for a laryngeal motor cortex
Summary: Cortical neurons that make direct connections to motor neurons in the brainstem and spinal cord are specialized for fine motor control and learning [1, 2]. Imitative vocal learning, the basis for human speech, requires the precise control of the larynx muscles [3]. While much knowledge on vocal learning systems has been gained from studying songbirds [4], an accessible laboratory model for mammalian vocal learning is highly desirable. Evidence indicative of complex vocal repertoires and dialects suggests that bats are vocal learners [5, 6]; however, the circuitry that underlies vocal control and learning in bats is largely unknown. A key feature of vocal learning animals is a direct cortical projection to the brainstem motor neurons that innervate the vocal organ [7]. A recent study [8] described a direct connection from the primary motor cortex to the medullary nucleus ambiguus in the Egyptian fruit bat (Rousettus aegyptiacus). Here we show that a distantly related bat, Seba's short-tailed bat (Carollia perspicillata), also possesses a direct projection from the primary motor cortex to nucleus ambiguus. Our results, in combination with Wirthlin et al. [8], suggest that multiple bat lineages possess the anatomical substrate for cortical control of vocal output. We propose that bats would be an informative mammalian model for vocal learning studies to better understand the genetics and circuitry involved in human vocal communication.
Main Text:
Speech learning in humans is largely based on imitation of an adult tutor [9]. This vocal learning process is thought to depend on cortical circuits and is disrupted by a number of genetic disorders ranging from FOXP2-related speech and language disorder to autism spectrum disorder [10,11]. To fully understand the neural mechanisms of vocal learning and the underlying causes of related disorders, a mammalian vocal learning model is necessary. For decades, songbird species like zebra finches have been the laboratory model of choice to study vocal learning because they undergo a neurodevelopmental song learning process with several parallels to human speech acquisition [4]. While much has been learned about vocal learning behavior and related circuitry in songbirds, its translation to humans is not without limitations. Human and songbird lineages split over 300 million years ago and have evolved different cortical organization [12]. A mammalian system for studying vocal learning may more closely parallel human speech acquisition and related circuits, and serve as a means by which communication disorders can be modeled.
To date, there is limited evidence for vocal learning in non-human mammals. A number of studies have examined whether mice are vocal learners, as they are highly amenable to laboratory experiments and genetic manipulation [13-15]. While mice may be valuable models for studying vocal communication [16], they do not learn their social vocalizations [17-20]. In contrast, bats exhibit multiple behavioral hallmarks of vocal learning [6], including disruption in vocalizations from acoustic isolation [5], evidence of dialects [21], and babbling [22]. The babbling behavior in bats exhibits many features seen in humans, where it is a key component of language development [23]. Despite the behavioral studies suggesting vocal learning, little is known about the neural circuits that would enable this trait in bats. To further validate the use of bats as a model for vocal learning, it is necessary to determine whether bats have the neural circuitry required for this complex behavior.
Control of vocal output via direct motor cortex projections to brainstem motor neurons is critical to vocal learning in both humans and songbirds. In humans, the laryngeal motor cortex (LMC) projects to nucleus ambiguus [7], which contains laryngeal motor neurons (Figure 1A), and in songbirds, the robust nucleus of the arcopallium (RA) projects to vocal motor neurons in a subdivision of the hypoglossal nucleus [24]. A recent study revealed that a region of the primary motor cortex in the Egyptian fruit bat (Rousettus aegyptiacus; Yinpterochiroptera, Pteropodidae) makes a monosynaptic projection to the vicinity of cricothyroid-projecting motor neurons [8] and is active when the bats are vocalizing. While this represents the first report of a laryngeal representation within a bat motor cortex, bats are highly speciated, with distantly related families showing behavioral evidence of vocal learning.
In the present study we explored whether Seba's short-tailed bat (Carollia perspicillata; Yangochiroptera, Phyllostomidae, Figure 1B) possesses the anatomical substrate for a laryngeal motor cortex. C. perspicillata are gregarious bats with a complex social repertoire [25]. While C. perspicillata itself has not been studied for vocal learning capabilities, multiple species within Phyllostomidae exhibit this trait [26,27]. In contrast to R. aegyptiacus, which echolocate using tongue clicks, C. perspicillata produce echolocation calls laryngeally [28], suggesting they may have precise control over laryngeal muscles. These behavioral characteristics support C. perspicillata as an informative mammalian model for understanding the neural mechanisms underlying vocal learning and associated communication disorders.
To search for evidence of an LMC in C. perspicillata, we first utilized in situ hybridization to identify the primary motor cortex based on expression of Gpm6a. While this gene was reported to share convergent downregulation in human LMC and songbird RA [29], we have previously shown that Gpm6a downregulation in songbirds is not limited to RA but also includes the adjacent arcopallial motor region, analogous to the non-laryngeal representation of the primary motor cortex in mammals [30]. The boundaries of primary motor cortex (M1) in C. perspicillata could be delineated with the Gpm6a expression pattern, noting the downregulation in deep layers (Figure 1C).
We then investigated whether C. perspicillata possesses a direct projection from motor cortical neurons to nucleus ambiguus, a characteristic feature of vocal learners [7]. We injected an anterograde tracer targeted to the deep layers of the primary motor cortex area identified by gene expression (Figure 1D, upper left panel), and found labeled fibers in nucleus ambiguus, as well as in the periaqueductal grey (PAG) and reticular formation (Figure 1D, bottom panels). Upon staining for choline acetyltransferase (ChAT) to label brainstem motor neurons, we found that the anterogradely labeled fibers from the motor cortical injection were in close proximity to ChAT+ cells in nucleus ambiguus (Figure 1D, upper right panel). To confirm this projection, we injected a retrograde tracer into the nucleus ambiguus (Figure 1F, left panel), which resulted in labeled cell bodies both in the primary motor cortex, overlapping the Gpm6a downregulation zone, and in the ventrolateral PAG (Figure 1F, middle and right panels), noting the tracer was not restricted to nucleus ambiguus. These results provide supportive evidence of a direct projection from the primary motor cortex to vocal motor neurons in nucleus ambiguus in C. perspicillata (Figure 1E). While the fibers terminating in nucleus ambiguus represent a direct cortical projection that may support fine motor control of laryngeal muscles, the apparent projections from motor cortex to PAG and reticular formation, and from these areas to nucleus ambiguus, suggest that C. perspicillata may also possess an indirect cortical vocal pathway, which could possibly be involved in non-learned vocalizations [31]. Overall, these results support the conclusion that C. perspicillata possesses the neural circuitry necessary for precise control of vocalizations.
A direct projection from the motor cortex to the nucleus ambiguus is not present in non-vocal learning animals including non-human primates, cats, rats, and non-vocal learning birds [7], though there is a reported sparse projection from motor cortex to nucleus ambiguus in the mouse [32]. However, mouse vocalizations are innate, not learned [17, 18], and mice produce ultrasonic vocalizations (USVs) through a different mechanism than humans produce speech [33], suggesting that mouse USVs may not be under the same degree of cortical control. Recent assessments have proposed that vocal learning is not binary, where a species either can or cannot learn via imitation, but rather that vocal learning is on a spectrum [34] and may involve several modules that could reflect partial components of vocal learning systems [35]. It is unclear as of yet where bats fall along this spectrum of vocal learners, how many bat species are capable of vocal learning, or whether different species of bats exhibit a greater degree of vocal learning than others [36].
Bats likely produce echolocation calls and vocalizations using different laryngeal muscles [37], but duplication and specialization of the echolocation pathway may serve as a possible origin for vocal learning abilities. In combination with Wirthlin et al. [8], our findings point to Pteropodidae and Phyllostomidae as the first non-human mammalian groups shown to exhibit both behavioral hallmarks and brain anatomy for vocal learning. While this suggests that the anatomical projection was present in the common ancestor of all bats, with the less parsimonious explanation being multiple independent gains, characterizing more bat species is needed to further clarify the origin of bat vocal learning.
In summary, this study adds to the evidence that bats possess the anatomical substrate to produce learned vocalizations. Bats are the only non-human mammal known to exhibit both the behavior and neural circuitry for vocal learning. Our results, in combination with the sociality of bats, suggest that bats are an accessible model to study mammalian imitative vocal learning, and subsequently diseases and disorders that affect vocal communication in humans.
Methods:
Animal and tissue preparation: We used adult Seba's short-tailed bats (Carollia perspicillata) bred in our colony. All animals lived in a flight room under natural harem social structures and had free access to food and water. All care and procedures were in accordance with the guidelines of the National Institutes of Health and were approved by the Washington State University Institutional Animal Care and Use Committee.
In situ hybridization:
Prior to tissue collection, bats (n=4, males) were isolated overnight in a custom-built sound-proof chamber to reduce auditory stimulation. A microphone was placed near the cage to ensure the bats were not vocalizing for 1-2 hours prior to tissue collection. Bats were sacrificed after isoflurane inhalation and brains were rapidly removed, blocked coronally, and frozen with OCT (Tissue-Tek) in a dry ice/isopropyl alcohol slurry. Brains were sectioned coronally at 10 μm with a cryostat (Leica) and stored at -80°C until use. Nissl staining was performed on sections adjacent to those used for in situ hybridization. We used a cDNA clone from the Mammalian Gene Collection for GPM6A (Human, Clone ID #BI670057). The protocol for in situ hybridization has been described previously [38]. Briefly, the clone was digested and purified to recover the cDNA insert. Digoxygenin (DIG)-labeled antisense riboprobes were generated from the cDNA template using T7 RNA polymerase and a DIG-UTP RNA labeling kit (Roche). Brain sections were fixed in 3% phosphate-buffered paraformaldehyde for 5 min followed by a rinse in phosphate-buffered saline. Sections were then acetylated for 10 min (0.25% acetic anhydride in 1.4% triethanolamine and dH2O), washed in 2X SSPE, and dehydrated through an ethanol series. Sections were incubated in hybridization solution (50% formamide, 2X SSPE, 2 µg/µL tRNA, 1 µg/µL BSA, 1 µg/µL Poly A, and 2 µL of riboprobe per slide), coverslipped, and incubated overnight in mineral oil at 62°C. The following day, slides were washed in chloroform and de-coverslipped in 2X SSPE. Slides were then washed in post-hybridization washes at 62°C, first in 50% formamide in 2X SSPE for 70 min, then two washes in 0.1X SSPE for 30 min with agitation every 10 min. Slides were then transferred to a solution of 0.3% Triton X-100 in a Tris-HCl, NaCl buffer, sections were framed with a PAP pen, and blocked with 8.3 ng/µL of BSA in a Tris-HCl, NaCl buffer (TNB) for 30 min. Slides were then incubated in alkaline phosphatase-labeled anti-DIG antibody (Roche; 1:600) in TNB for 2 h. Slides were washed in Tris-HCl, NaCl, and MgCl2 buffer, placed in slide jars, and incubated in 5-bromo-4-chloro-3-indolylphosphate/nitroblue tetrazolium (BCIP/NBT; Perkin-Elmer) at room temperature overnight. Following chromogen incubation, slides were washed in dH2O and fixed in 3% phosphate-buffered paraformaldehyde for 5 min. Slides were air-dried and coverslipped with VectaMount AQ (Vector Labs).
Tract tracing and immunohistochemistry:
For tracer injections, bats were anesthetized by isoflurane inhalation in a custom stereotaxic apparatus. A 1 mm × 1 mm craniotomy was made over the primary motor cortex. A tracer deposit via iontophoretic injection (5 μA, 10 min) at varying depths was made targeting the deep layers of the primary motor cortex using 10% biotinylated dextran amine (BDA; 10,000 MW) (Life Technologies) (n=4, males) or 1% cholera toxin subunit B (CTB, List Biological Laboratories) in nucleus ambiguus/brainstem (n=2, males). We then covered the craniotomy with petroleum jelly and bone wax to prevent the brain from dehydrating, applied lidocaine and Neosporin to the exposed tissue, and returned the bat to a home cage. After one week, the bat was deeply anesthetized with isoflurane and transcardially perfused with 10% buffered formalin. The brains were dissected and cryoprotected overnight in 20% sucrose in 0.1 M phosphate buffer (PB). We sectioned coronally at 40 μm using a Leica freezing microtome. The primary antibody solution consisted of goat anti-ChAT (Millipore AB144P, 1:200) and anti-CTB (List Biological Laboratories, 1:10,000) antibodies, 3% normal donkey serum (Millipore), and 0.4% Triton X-100 (Sigma-Aldrich) in 0.1 M PB. Primary antibodies were visualized using Alexa Fluor 488- and 568-conjugated secondary antibodies (1:500; Life Technologies), and BDA was visualized using Alexa Fluor 568-conjugated streptavidin (Life Technologies; 1:250). Sections were then mounted on slides, dehydrated and cleared, and coverslipped with DPX (Electron Microscopy Sciences). Label was observed using a Leica TCS SP8 confocal microscope. Brightness and contrast adjustments were used to enhance images.
Numerical Study on Enhancing Heat Transfer of the Belt-type Resistance Heater in the Hypersonic Wind Tunnel
A numerical study of heat transfer enhancement for a belt-type electric-resistance heater is reported in this paper. The effect of different belt arrangements is studied in detail. Results show that, when changing m and n, the spacings between adjacent belts in the vertical and horizontal directions, the comprehensive performance evaluation criterion (PEC) increases as belt spacing decreases. For the model with several added baffles, the PEC reaches 1.60, with Nu/Nu0 and f/f0 reaching 3.38 and 9.51, respectively. For the staggered belt arrangement, the PEC attains 1.18 and the flow field uniformity improves significantly compared with the aligned arrangement.
Introduction
The hypersonic wind tunnel, a major piece of aerodynamic ground-test equipment, is widely used to simulate the critical parameters of the real hypersonic flight environment. Of particular significance for the hypersonic wind tunnel, the heater is installed before the settling chamber to increase the airflow's stagnation temperature and thereby prevent air condensation at high Mach numbers as the flow passes through the nozzle. Considering its advantages of rapid heating and introducing no impurities into the airflow, a belt-type resistance heater is adopted in the CARDC Ф0.5 m hypersonic wind tunnel. However, the heating resistor elements are easily burned out at high Mach numbers, and the repair process is time-consuming and expensive, yet few studies have touched on this area. In contrast, shell-and-tube heat exchangers (STHEs), which are closely similar to the heater discussed here, have been the subject of extensive and thorough research on efficiency improvement over the past decades.
In general, ways to enhance heat transfer can be classified into two types: passive and active techniques. This paper is concerned with passive methods, which rely on convection alone and need no additional external power, such as extended surfaces and displaced enhancement devices.
A typical structure of STHEs is shown in Figure 1 [1]. Baffles are widely used in STHEs because they improve heat transfer by forcing the shell-side fluid to flow in a tortuous, zigzag manner across the tube bundle, which enhances turbulence and lengthens the shell-side flow channel [2]. Muley and Manglik [3-6] experimentally derived formulas for calculating the heat transfer coefficient of STHEs with various types of baffles, and pointed out that the non-uniform distribution of the fluid is the main factor affecting performance. The tubes have two kinds of arrangement: staggered and in-line bundles. According to Zhukauskas' work [7], compared with the in-line bundle, the heat transfer performance of a heat exchanger with a staggered tube bundle is 20% higher, and the friction is 40% lower. OuYang and Xiong compared the thermal performance of aligned and staggered arrangements and concluded that the aligned arrangement is more appropriate under conditions of large flow rate or when the fluid friction is strictly limited.
Figure 1. Diagram of a typical STHE [1]
With the rapid development of computer science, numerical simulation has become popular in the study of STHEs [9-11]. Taher et al. [12] simulated heat exchangers with helical baffles using FLUENT, and found that the heat transfer efficiency decreases with the increase of baffle spacing.
Inspired by the research above, we propose three ways to strengthen the belt-type heater's heat transfer efficiency: changing the spacing between belts, inserting baffles, and using a staggered belt bundle. Numerical simulation is used to verify the effectiveness of these methods.
Performance evaluation criteria
The enhancement of heat transfer and the accompanying change in drag losses need to be considered at the same time.

The Nusselt number Nu is used to characterize the strength of convective heat transfer, and f stands for the friction factor, which represents the flow resistance of the airflow. Webb [13-15] used the ratio

PEC = (Nu/Nu0) / (f/f0)^(1/3)    (5)

as the performance evaluation criterion for heat transfer enhancement (the subscript 0 stands for the benchmark model); this form is consistent with the values reported later in this paper. The PEC is widely adopted in the field of heat transfer enhancement, such as for enhanced surfaces [16-18], micro-channel heat transfer [19, 20], and tubes [21, 22].

Consequently, the performance evaluation criterion (PEC) of heat transfer enhancement is adopted in this paper.
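A minimal implementation of the criterion, assuming the Webb form reconstructed in equation (5), is given below; the input ratios used in the example are the baffled-case values reported later in this paper.

```python
def pec(nu_ratio: float, f_ratio: float) -> float:
    """Performance evaluation criterion for heat transfer enhancement,
    assuming the Webb form PEC = (Nu/Nu0) / (f/f0)**(1/3)."""
    return nu_ratio / f_ratio ** (1.0 / 3.0)

# Baffled model reported below: Nu/Nu0 = 3.38, f/f0 = 9.51.
print(f"Baffled case: PEC = {pec(3.38, 9.51):.2f}")   # ~1.60, matching the text
```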
Model
This research takes the Ф0.5 m belt-type resistance heater at CARDC as the prototype. The structure of a heating box is shown in Figure 2. When the heater is working, the belt-type resistances reach a high temperature rapidly after being energized, while the airstream travels through the box axially and is heated. To improve the computational efficiency and make it more convenient to study the different enhancement methods, a two-dimensional model is built which mainly concerns the typical flow characteristics in the axial direction. The physical model is at 1:1 scale with the Φ0.5 m heater and, due to the symmetry of the model, half of the original heater is modeled, as shown in Figure 3. The direction of airflow is along the x axis.
Numerical Method
The simulation makes the following assumptions: 1) the heat transfer mode is forced convection, and heat radiation is neglected; 2) air gravity effects are ignored; 3) the heater shell is an adiabatic wall. The RNG k-ε turbulence model and the SIMPLEC algorithm are applied after selection. The grid number is 82,000 for the whole computational domain. Boundary conditions of mass flow inlet and pressure outlet are used, and the surface of the resistance belt is set as a heat flux wall.
Numerical method validation
In order to validate the numerical method and considering the symmetry, a quarter 3D model at 1:1 scale is built for numerical simulation, as shown in Figure 4, to be compared with the experimental results of the Φ0.5 m heater. Here the air flows in the opposite direction, along the Z axis. Table 1 shows four different calculation states. After simulation, the outputs are compared with data from the Φ0.5 m tunnel's experiments at the same states, as shown in Table 2. As seen from the table, the numerical results are overall lower than the experimental results; the largest absolute error is 7.65 K. Possible causes of the error are: 1) the error caused by model simplification; 2) inside the heater, the thermocouple measures the temperature at a single point, while the numerical calculation takes the average temperature of the outlet; 3) the heat flux released by each layer of the electric heater should differ, but it was set to be equal in the calculation.
On the whole, the numerical result is close to the experimental result at each state, and the relative error is less than 3%, so the numerical simulation method is considered reliable.
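The validation metric quoted above is simply the relative deviation of the computed outlet temperature from the measured one; a minimal sketch follows, with hypothetical temperatures since the table values are not reproduced here.

```python
def relative_error(t_sim_k: float, t_exp_k: float) -> float:
    """Relative error of simulated vs. measured outlet temperature (K)."""
    return abs(t_sim_k - t_exp_k) / t_exp_k

# Hypothetical example: a 7.65 K under-prediction at a ~450 K outlet state
# stays well under the 3% bound quoted above.
print(f"{relative_error(442.35, 450.0):.2%}")   # -> 1.70%
```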
Analysis of different heat transfer enhancement methods
Using the following three heat transfer enhancement methods, the heat transfer efficiency of various internal arrangements of the heater is investigated with the two-dimensional model.
Changing spacing between resistance belts
Here we define the gap between two adjacent layers of resistance belts as m and the gap between belts within the same layer as n; a sketch is given below. The effect of changing m and n on heat transfer is discussed. The ratio f/f0 is lower than 1 when m<8 mm, and generally increases as m increases. Combining the two aspects, the curve of PEC versus m indicates that PEC is larger than 1 when m<12 mm, and that PEC increases as m drops, reaching its maximum at m=2 mm. That is, as m gets smaller, PEC gets greater and the heat transfer enhancement effect gets better.
Changing n.
To study the effect of the variable n, the spacing is varied from 10 mm to 25 mm with an interval of 3 mm, whereas inside the benchmark model n is unequally spaced. The simulation results are as follows. In summary, changing the belt spacing does affect the heater's performance: the smaller the values of m and n, the better the heat transfer performance. However, the surface temperature of the resistance belts can reach up to thousands of degrees, so placing belts too close together carries risks for the heater.
Adding baffles
Adding baffles is a method frequently used in heat exchangers to improve heat transfer capability, because it lengthens the flow channel and changes the direction of the air flowing through the resistance belts. To examine whether this method is applicable to the heater used in the hypersonic tunnel, a two-dimensional baffled model is established for calculation, as shown in Figure 8, in which the height of the baffle plates is as follows:

A complete model is built because the structural symmetry is lost, and the direction of the flow is from left to right. The pressure is severely unevenly distributed and descends along the flow direction. Specifically, a blockage exists at the entrance where the flow-field width narrows, and the pressure reaches about 5 MPa at the entrance, which is 250% higher than the initial pressure. The velocity distribution over the whole domain, particularly at the exit, is also uneven. Meanwhile, vortices of various sizes form at the corners of the flow path and on the leeward side of the resistances.
Specifically, the calculation gives the following results. The heat transfer efficiency and the resistance factor both increase in the baffled model, and the resistance factor reaches more than nine times its original value (f/f0 = 9.51); that is, the baffles bring more energy loss while improving the heat transfer efficiency. The Performance Evaluation Criterion attains 1.60: compared with the original heater, the heater with baffles produces many vortices when working, and the vortices remarkably improve the heat transfer but cause energy dissipation as well. Furthermore, the airflow uniformity becomes worse because the baffles narrow the flow channel.
Staggered arrangement heater
Aligned and staggered arrangements are classic layouts widely used in shell-and-tube heat exchangers. The Φ0.5m heater uses the aligned type, so a staggered model is established here, shown below, to be compared with the aligned one. The displacement S of the resistance bands in the even rows can be expressed as follows:

Figure 11. The two-dimensional computational domain of the heater with staggered arrangement

The calculated temperature contours of the aligned and staggered models are shown in Figure 12. The air flows from top to bottom. In comparison with the aligned one, the heater with the staggered arrangement has more flow paths among the resistance belts; thus, the airflow is heated evenly and the flow-field uniformity of temperature improves obviously.

PEC = (Nu/Nu0) / (f/f0)^(1/3)    (11)

As seen from the above, both Nu and f for the staggered arrangement are bigger than for the aligned one, and the PEC reaches 1.18; namely, the staggered arrangement has a certain effect on enhancing heat transfer. Because the area of airflow being heated is larger in the staggered heater than in the aligned model, the disturbance of the airflow is strengthened, which is helpful to heat transfer enhancement.
Conclusion
The study found the following: 1) For the spacing m and n, the smaller the interval, the greater the PEC and the better the heat transfer enhancement effect. However, when the heater is working, high-temperature resistance belts placed too close to each other may be harmful to the heater, for example by reducing the resistors' service life and causing circuit faults.
2) When baffles are added, the flow channel becomes S-shaped and the long edges of the belts become the windward side, which leads to plenty of vortices. As a result, the PEC reaches 1.60. Meanwhile, the resistance factor increases to more than nine times that of the benchmark model; that is, the energy loss increases greatly. Moreover, the sudden pressure increase at the entrance of the heater is dangerous when the wind tunnel is working, and the non-uniform flow field in the heater is detrimental to the wind tunnel's overall flow-field quality.
3) The staggered arrangement of resistance belts provides more flow channels, a larger heated airflow area, and a strengthened disturbance of the airflow. On the whole, the staggered arrangement has a certain effect on enhancing heat transfer, with a PEC of 1.18; besides, the flow-field uniformity improves obviously.
Thus, future work will focus on experimental validation of the staggered arrangement.
Efficacy of endoscopic surveillance in the detection of local recurrence after radical rectal cancer surgery is limited? A retrospective study
Background Rectal cancer, one of the most common neoplasms, is characterized by an overall survival rate exceeding 60%. Nonetheless, local recurrence (LR) following surgery for rectal cancer remains a formidable clinical problem. The aim of this study was to assess the value of postoperative endoscopic surveillance (PES) for the early detection of LR in rectal cancer after radical anterior resection with sigmoid-rectal anastomosis. Methods We performed an anterior resection in 228 patients with stages I-III rectal cancer who had undergone surgery from 2001 to 2008 in the Oncology Center in Bydgoszcz, Poland. Of these patients, 169 had perioperative radiotherapy or radiochemotherapy. All patients underwent PES with abdominal and pelvic imaging (abdominal ultrasound, computed tomography, magnetic resonance) and clinical examination. Sensitivities, specificities, positive likelihood ratios, negative likelihood ratios, and receiver operating characteristic curves were calculated to compare the value of colonoscopy versus imaging techniques for the diagnosis of LR. Results During the 5-year follow-up, recurrences occurred in 49 (21%) patients; of these, 15 (6%) had LR, which was most often located outside the intestinal lumen (n = 10, 4%). Anastomotic LR occurred in 5 (2%) patients. The mean time to anastomotic LR was 30 months after initial surgery, similar to that of other locations (29 months). Both imaging and endoscopy were shown to be efficient techniques for the diagnosis of LR in anastomotic sites. In the study group, endoscopy did not provide any additional benefit in patients who were receiving radiation therapy. Conclusions The benefit of PES for the detection of LR after curative treatment of rectal cancer is limited and not superior to imaging techniques. It remains a useful method, however, for the histopathological confirmation of suspected or confirmed recurrence. Supplementary Information The online version contains supplementary material available at 10.1186/s12957-021-02413-0.
treatment for the disease; they represent the third largest group of long-term cancer survivors [2]. At least 30% of CRCs are located in the rectum [3].
In 2017, 5617 Polish patients were diagnosed with cancer of the rectum and rectosigmoid junction [4]. Poland belongs to the group of countries with a medium risk of CRC, and the number of cases and cancer-related deaths is constantly increasing [1,4].
Standard treatment of rectal cancer usually involves surgery, systemic therapies, and radiotherapy (RT) or chemoradiotherapy (CRT). The 5-year survival rates of patients undergoing radical therapy can reach 60% or more in developed countries regardless of stage at diagnosis; the prognosis varies significantly, however, depending on the initial stage of the disease [5][6][7]. The locoregional recurrence rate has decreased from about 30-50% to 5-10% as a result of precise qualification methods based on modern imaging, treatment that incorporates RT, and improved surgery with techniques such as total mesorectal excision (TME) [8][9][10][11]. Currently, most rectal cancer recurrence is systemic, not local. Nevertheless, local recurrence (LR) is still a major diagnostic and therapeutic problem in patients after radical treatment of rectal cancer, as LR significantly reduces the patient's chances for long-lasting recovery. Diagnostic and therapeutic possibilities depend largely on the localization of the LR. Although there is no universally accepted classification of LRs according to their location, 4 typical LR zones are frequently distinguished: central/axial (anastomotic site, perianal region, rest of the mesorectum tissue), lateral (lateral pelvic sidewall: iliac vessels, lateral pelvic lymph nodes, sidewall musculature), anterior (genitourinary region, pubic bone), and posterior (presacral zone) (Fig. 1).
Although radical resection is commonly accepted as the best option to effectively treat LR of rectal cancer, this treatment is feasible (with a curative intent) in only a minority of patients [12,13]. Oncological follow-up in patients after radical treatment aims to detect disease recurrence early and, theoretically, improve radical resection rates of non-advanced recurrence. However, the impact of intensive endoscopic and imaging surveillance on improvement of survival in patients with rectal cancer has not yet been unequivocally proven [14][15][16]. Moreover, the methods used for recurrence surveillance are still under discussion, and currently not enough data are available to support their efficacy [17].
In this study, we aimed to clarify the clinical value of postoperative endoscopic surveillance (PES) for the early detection of LR in rectal cancer after radical surgery.
Patient selection
Between 2001 and 2008, 228 adult patients with pathological TNM (pTNM) stages I-III of sporadic cancer of the rectum [18] underwent radical anterior resection with primary anastomosis in the Oncology Center-Prof. Franciszek Łukaszczyk Memorial Hospital (Bydgoszcz, Poland). Patients were eligible for perioperative treatment according to the established principles described in Table 1. This retrospective study was approved by the Bioethical Committee at the Collegium Medicum Nicolaus Copernicus University.
Treatment and follow-up
Patients with a tumor of the lower and middle part of the rectum underwent TME, whereas a partial mesorectal excision was performed for those with more proximal tumors (upper third of the rectum). All surgeries were performed with open procedures by experienced teams (7 senior and 3 junior surgeons supervised by senior). A protective stoma was not performed as standard procedure; rather, it was created only if an anastomotic leak was suspected following anastomosis (2 ileostomy, 2 colostomy). The overall 30-day perioperative mortality rate was 1.3% (3 patients).
A total of 169 patients (74%) received perioperative RT, of whom 149 (65%) had preoperative RT or CRT. One-fourth of the total group did not receive irradiation because of various patient-related factors, including previous RT for the pelvic region and lack of consent for RT. Short-course RT (sRT; 5 × 5 Gy) was the most common treatment approach, followed by immediate surgery (< 10 days from the first RT fraction). Ninety-one patients (40%) had stage III (ypTNM) disease at presentation. The characteristics of the patients who underwent surgery are presented in Table 2. After treatment, all patients remained under surveillance according to the scheme described in Table 3.
We established a diagnosis of LR based on confirmation of at least one of the following major criteria: (a) histological confirmation, (b) clear bone destruction, and (c) positron emission tomography/computed tomography (PET/CT) indicating local recurrence, and at least one of the following minor criteria: (a) progressive tissue mass, (b) infiltration of adjacent organs, (c) subsequent growth of tumor markers, and (d) typical appearance of recurrence on endoscopic ultrasound, CT, or magnetic resonance imaging (MRI) [19].
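The diagnostic rule above (at least one major and at least one minor criterion) can be stated compactly. The following snippet is only a formalization of that logic with illustrative dictionary keys, not part of the study's software.

```python
def local_recurrence_diagnosed(major: dict, minor: dict) -> bool:
    """LR is diagnosed when >=1 major AND >=1 minor criterion is met."""
    return any(major.values()) and any(minor.values())

major = {"histological_confirmation": False,
         "clear_bone_destruction": False,
         "pet_ct_indicating_lr": True}
minor = {"progressive_tissue_mass": True,
         "infiltration_of_adjacent_organs": False,
         "subsequent_growth_of_tumor_markers": False,
         "typical_appearance_on_eus_ct_mri": False}

print(local_recurrence_diagnosed(major, minor))  # True
```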
In the case of recurrence, patients were restaged in order to develop an appropriate treatment plan.
Statistical analysis
Statistical analysis was conducted using the Statistica version 13.3 software package (TIBCO Software Inc., www.statistica.io). Qualitative and continuous variables are described with the usual descriptive statistics: numbers and percentages, or medians with range (minimum-maximum) and mode with interquartile range, respectively. Sensitivities, specificities, positive likelihood ratios, negative likelihood ratios, and areas under receiver operating characteristic (ROC) curves were calculated to compare the diagnostic value of colonoscopy versus that of imaging techniques. All analyses assumed a non-parametric distribution of predictors and a cut-off value equal to 1, meaning presence of recurrence. The significance level in the analyses was P ≤ 0.05.
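For reference, these diagnostic-accuracy measures follow directly from a 2 × 2 table of test result against confirmed recurrence; the sketch below uses made-up counts, not the study's data.

```python
def diagnostic_accuracy(tp: int, fp: int, fn: int, tn: int):
    """Sensitivity, specificity, and likelihood ratios from a 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    lr_pos = sensitivity / (1.0 - specificity)   # positive likelihood ratio
    lr_neg = (1.0 - sensitivity) / specificity   # negative likelihood ratio
    return sensitivity, specificity, lr_pos, lr_neg

# Made-up example loosely echoing the setting: 5 anastomotic recurrences,
# 4 detected by colonoscopy, 2 false positives among 223 without recurrence.
sens, spec, lrp, lrn = diagnostic_accuracy(tp=4, fp=2, fn=1, tn=221)
print(f"sens={sens:.2f} spec={spec:.3f} LR+={lrp:.1f} LR-={lrn:.2f}")
```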
Post hoc sample size calculations for the ROC analyses were performed using MedCalc® Statistical Software version 20 (MedCalc Software Ltd., Ostend, Belgium; https://www.medcalc.org; 2021), assuming alpha (significance) = 0.05, beta (1 - power) = 0.2, the calculated area under the curve, and the observed negative/positive ratio. A null hypothesis value equal to 1 was considered equivalence of colonoscopy vs. imaging examinations. In all analyses except local recurrences in the RTH+ group, the sample size was sufficient to reject the null hypothesis of equivalence of both examinations. Detailed results are presented in the Supplementary data.
The effectiveness of treatment
At 5-year follow-up, recurrences of any type were detected in 49 (21%) patients, 15 (6%) of whom had LR (Table 4). In the group of patients who had preoperative sRT, LR was detected in only 2 (2%) of them within 5 years of resection of rectal cancer. Distant metastases were confirmed in 41 (18%) patients, 8 of whom had distant metastases associated with LR. In total, LR affected 11 (6%) of the patients treated with RT.
Detection of LR
In most cases, LR was not available for endoscopic examination (10 of 15; 67%). In these patients, the diagnosis of recurrence was made on the basis of imaging. After we analyzed the medical records, we found that endoscopic examination allowed for histopathological verification in 4 (of 5; 80%) patients with recurrence in the anastomosis. Only in 1 case was endoscopy the first examination to indicate the presence of LR; in the remaining 4 patients, endoscopy was performed after abnormal imaging results (imaging in these cases being the first indication of the presence of a recurrence) (Fig. 2). Time to diagnosis of recurrence from primary surgery did not differ between the intraluminal and non-intraluminal recurrence groups (30 months vs. 29 months). The use of imaging techniques and endoscopy in the analyzed material was of similar effectiveness in diagnosing LR, although the results of these methods were not completely consistent. The specificity of colonoscopy was satisfactory (> 98% in all groups); however, its sensitivity was much lower than that of imaging (46.7% for all recurrences and 80% for anastomotic recurrences).
The effectiveness of imaging modalities in detecting recurrent rectal cancer did not differ significantly between groups of patients who did or did not undergo RT. Our analysis showed that colonoscopy was a good method for diagnosing recurrent rectal cancer in the anastomosis (area under the receiver operating characteristic curve > 0.8); however, it did not provide advantages over other diagnostic methods for diagnosing LR in patients who did not receive RT (P > 0.05 for both types of recurrences; Table 5).
Discussion
Gastrointestinal endoscopy is used in the surveillance of patients after radical treatment of rectal cancer to identify and verify LR in order to increase the ultimate success rate. This method also enables clinicians to identify and remove metachronous tumors and precancerous lesions. Current guidelines recommend this examination as one of the foundations of surveillance. However, much of the evidence that forms the basis of these recommendations originates from outdated literature reported when patients were treated with various treatment regimens.
Colon cancer, rectal cancer
The vast majority of published studies on postoperative surveillance have included patients with 2 separate entities: colon cancer and rectal cancer [15,17,20,21]. Differences between these cancers include anatomical location (rectal cancer: retroperitoneal), diagnostic requirements (MRI, transrectal ultrasound), and therapies used (RT), which in turn may affect the diagnostic and therapeutic processes of the LR. Recurrent tumors located up to 8 cm from the sphincters are usually accessible by digital rectal examination and, above all, they show earlier clinical symptoms (altered bowel habits, hematochezia, abdominal pain).
Risk of local recurrence
In most cases, colorectal LRs are localized outside the anastomosis [22][23][24][25], which may suggest the use of imaging methods for diagnosis, such as CT colonography [23]. Fuccio et al. [26] showed in a meta-analysis that the incidence of intraluminal LR in rectal cancer is 2 times higher than that in colon cancer. The authors reported that anastomotic LR virtually did not appear beyond 60-72 months after surgical intervention.
Currently, less than 10% of patients who undergo radical treatment experience LR [27][28][29], owing to the use of an appropriate surgical technique (TME), the radical nature of the procedures (R0, circumferential resection margins: negative), and the combined treatments based on the RT schedule delivering a biologically effective dose above 30 Gy [30,31]. Several studies have shown that about half of LRs are isolated, with no distant metastatic lesions [32,33].
The risk of LR is associated with the following factors (among others): more advanced disease stage (American Joint Committee on Cancer/TNM), more distal location of the tumor, and the perioperative treatment used. Preoperative RT reduces LR by approximately 50-70% and postoperative RT by approximately 30-40% in all locations of the rectum [34,35]. This effect may be enhanced by the use of concurrent chemotherapy [36,37], but it is not observed if adjuvant chemotherapy is used after radiotherapy and radical resection [38,39]. Some studies reported a significant reduction in the risk of LR in the anastomosis after anterior rectal resection with the use of preoperative 5 × 5 Gy sRT [35].
Synchronous and metachronous lesions
In patients with CRC, the estimated risk of the presence of synchronous neoplastic lesions is 2-4% [40,41]. Epidemiological data show that after radical treatment, patients with CRC have a 1.5- to 2-fold increased risk of developing metachronous lesions compared with a healthy population, as well as an increased risk (1-2%) of developing a second primary CRC [42][43][44], especially in the first years after resection [41,43]. The risk of developing metachronous adenoma after CRC resection can be estimated at less than 10% [45,46], which is similar to that of developing adenomatous changes after polypectomy in the general population [47,48].
Postoperative surveillance
Improvement of overall survival in patients under postoperative surveillance after resection was confirmed in studies in which carcinoembryonic antigen testing, imaging (such as CT, MRI, PET/CT, or PET/MRI), and clinical visits were regularly performed in addition to endoscopic examination [49].
The advantage of MRI over CT is that it allows better differentiation of recurrent tumor tissue from fibrosis, postoperative changes, and changes after RT, with a sensitivity of 80-90% and a specificity of 100%, as has been described in the literature [51,52]; this also applies to other neoplasms [53].
Close monitoring of asymptomatic cancer patients allows for earlier detection of recurrence than does a diagnosis based solely on the presence of suspicious symptoms [52]. Nevertheless, the importance of extensive postoperative surveillance for recurrence after rectal cancer resection remains controversial. More recent publications indicate that intensified surveillance after surgery does not improve treatment outcomes [54][55][56]. PES remains only part of a multidisciplinary approach. A few studies that have investigated the effects of intensified follow-up endoscopy have consistently shown that, despite more frequent detection of asymptomatic recurrences and thus more frequent qualification for radical treatment, there was no improvement in overall survival in groups subjected to frequent endoscopic examinations [57].
Guidelines
Earlier guidelines for post-rectal cancer surveillance included frequent endoscopic checkups, at least once every 6-12 months [58,59]. The currently recommended schemas, based on current publications on surveillance after radical treatment of rectal cancer, advocate that examinations be done at least 2-3 times over a 5-year follow-up period [60,61], that is, much less often than previously recommended (Table 6). However, taking into account the clinical conditions that affect the likelihood of LR, such as the use of RT or the quality/radicality of resection, it is possible to distinguish a group with a higher risk of intraluminal LR, which could allow individualized indications for intensifying PES. Identification of such groups is beyond the scope of our study and will require separate analysis of a larger amount of data, preferably coming from multicenter studies.
Our study has limitations because of its single-center and retrospective nature. However, the fact that patients were analyzed in one center contributes to the standardization of therapeutic and diagnostic procedures. The percentage of LRs in our analysis, including those located directly in the anastomosis, remained low (6.5%, 5 patients with anastomosis) and is similar to that reported by other studies [62][63][64]. The low recurrence rates are not conducive to reliable statistical analyses, although endoscopic examination is known to have a low sensitivity in detecting recurrences. Nonetheless, high specificity and the ability to sample biological material make endoscopy the preferred method for confirming the presence of recurrent lesions and verifying them histopathologically. Diagnosis of relapse is most often based on physical or imaging examinations (CT, MRI). Factors that increase the value of regular imaging tests as an alternative to endoscopy are the possibility of a simultaneous diagnosis of a lesion located outside the intestinal lumen and distant (systemic) lesions, as well as the diagnosis of possible consequences of radical treatment: postoperative fistulas, radiation-induced changes, and pelvic insufficiency fractures [65]. In addition, the invasiveness of endoscopic examinations should be taken into account, as they often result in poor patient tolerance associated with an increased risk of serious complications (including gastrointestinal perforation) [66]. Although small doses of radiation from X-rays that patients receive during imaging examinations (CT) have an impact on the body, the levels are too low to contraindicate even frequent examinations [56].
Our results do not confirm the advantage of PES in detecting recurrences in patients who are not receiving RT. This finding may have resulted from the small number of LRs detected (although a low rate of LR is the current standard). Given the results of other studies, however, a higher percentage of LRs, including those located in the anastomosis, can be suspected in this group of patients [34]. Although on the one hand the use of RT reduces the number of LRs, on the other hand it is recommended in more advanced tumors, in patients who are generally characterized by a higher risk of LR, frequently located outside the bowel lumen. Thus, it remains debatable whether diagnostic indications for endoscopy in postoperative surveillance after rectal cancer treatment depend on the use of RT.
Conclusions
Endoscopy of the gastrointestinal tract in patients under multidisciplinary surveillance after radical treatment for rectal cancer remains a useful diagnostic test that allows for histopathological confirmation of LR. However, because most recurrences are located outside the intestinal lumen and because of the higher sensitivity of imaging examinations such as CT or MRI, the role of endoscopy seems to be limited. Both our own results and the updated recommendations of oncological associations confirm this hypothesis, also taking into account the risk of the presence of metachronous lesions, which are better diagnosed with modern imaging techniques. We conclude that imaging studies in the follow-up of patients with rectal cancer should play a leading role, whereas endoscopy, although necessary, should be regarded as an additional and supplementary modality limited mainly to intraluminal inspection and verification of imaging-diagnosed lesions.
Anti-obesity Effect of Fermented Whey Beverage using Lactic Acid Bacteria in Diet-induced Obese Rats
High-protein fermented whey beverage (FWB) was manufactured using whey protein concentrate (WPC) and Lactobacillus plantarum DK211 isolated from kimchi. This study was designed to evaluate the anti-obesity activity of FWB in male rats fed a high-fat diet. Male Sprague-Dawley rats were randomly assigned to three groups (n=8 per group). The three groups differed in their diet; one group received a normal diet (ND), another, a high-fat diet (HD), and the third, a HD plus fermented whey beverage (HDFWB), for 4 wk. Supplementation with FWB (the HDFWB group) prevented weight gain and body fat accumulation. The food intake in the HDFWB group was significantly lower (p<0.05) than that of the HD group. The HDFWB group also showed a significant decrease in organ weights (p<0.05), except for the weight of the testis. There was a significant decrease in total cholesterol, LDL-cholesterol, and triglycerides in the HDFWB group compared with the HD group (p<0.05), but there was no significant difference in serum HDL-cholesterol levels among the experimental groups. Rats ingesting FWB (the HDFWB group) also showed a significant decrease in blood glucose levels, and plasma levels of insulin, leptin, and ghrelin compared to the HD group (p<0.05). These results indicate that FWB has beneficial effects on dietary control, weight control, and reduction in fat composition and serum lipid level; consequently, it may provide anti-obesity and hypolipidemic activity against high-fat diet-induced obesity in rats.
Introduction
To maintain body weight in a stable condition, there must be an energy balance: energy intake has to be equal to energy expenditure. However, when the energy balance is disturbed, this may eventually lead to sustained weight problems, such as obesity (Klok et al., 2007). Obesity is primarily considered a disorder of lipid metabolism as well as a disarray of energy balance. It is one of the major public health problems in the world due to its association with an increased risk of various chronic diseases, including cardiovascular diseases, type 2 diabetes, hypertension, dyslipidemia, and various cancers (Jin et al., 2013). Chronic consumption of a high-fat and high-cholesterol diet may induce hyperlipidemia, hepatic lipid accumulation, lipid peroxidation, and hepatotoxicity.
There are many different treatments to help control obesity, including diet, exercise, behavior modification, and medication (You et al., 2014). Among these treatments, reducing excessive energy intake might be one of the major ways to correct obesity, but compliance with energy-restricted diets is not easy (Yu et al., 2009). Therefore, dietary manipulations maximizing satiety, or consumption of food components with adipogenesis-suppressive ability, may be obvious applications for the future treatment of the overweight and obese (Yu et al., 2009; Xu et al., 2013). Consuming more frequent, higher-protein meals is a commonly practiced dietary strategy to increase healthy eating and promote weight loss, as well as to prevent weight regain following weight loss (Douglas et al., 2013). It is well known that dietary protein, compared with high-fat and/or high-carbohydrate diets, has a higher appetite-control and satiety effect, shown by its ability to suppress the subject's caloric intake in subsequent meals (Douglas et al., 2013; Yu et al., 2009). Dairy proteins, especially whey proteins, have been suggested to have a better effect on appetite control than other protein sources, such as egg and casein, and to play a key role as a mediator of beneficial metabolic effects (Pal et al., 2010). Several prospective observational studies and clinical studies have been conducted to examine the relationship between dairy product consumption and weight gain or overweight/obesity status (Martinez-Gonzalez et al., 2014). Dairy product intake has shown beneficial effects on metabolic risk factors in overweight and obese individuals (Pal et al., 2010; Shi et al., 2012). It has been thought that the slenderizing effects of yogurt are due to a probiotic bacteria-mediated mechanism. The relationship between the intestinal environment and host health has been extensively studied since the importance of lactic acid bacteria for good health and longevity was suggested by Metchnikoff (Ikeda et al., 2014). Lactic acid bacteria are important members of the normal intestinal microflora and are reported to exert beneficial effects not only on bowel function but also on many biological functions or expressions of diseases, such as serum cholesterol, lipid metabolism, blood pressure, the immune system, and obesity (Ikeda et al., 2014; Lee et al., 2013). Therefore, dietary probiotic consumption alters gut microbiota and may be another effective strategy for weight control.
Recently, consumers' interest in ready-to-drink (RTD) protein drinks has increased in the market for sports nutrition and medical and therapeutic beverages, because of their unique nutritional value, especially the abundant supply of essential amino acids for protein synthesis, and their excellent functional properties in food products (Rittmanic, 2010; Sanmartin et al., 2013). In a previous study (Cho et al., 2015), we developed a functional fermented whey beverage (FWB) manufactured using whey protein concentrate 80 (WPC 80) and Lactobacillus plantarum DK211 isolated from a traditional Korean fermented food, and evaluated its functionality and sensory properties. The aim of the present study was to investigate the potential anti-obesity and metabolic effects of FWB through measurements of body weight gain, organ weights, serum lipids, blood glucose, insulin, and appetite-related hormones in high-fat diet-induced obese rats.
Materials
The strain Lactobacillus plantarum DK211 was isolated from kimchi samples and maintained in glycerol stocks at -20°C. Lactococcus lactis (L. lactis R704, Christian Hansen, Denmark) was provided by Jupiter Int. Co. (Korea). Whey protein concentrate (WPC 80) and skim milk were purchased from Sung Poon Co. (Korea) and Seoul Milk (Korea), respectively. Sucrose was obtained from CJ Co. (Korea). MRS broth was purchased from Difco (USA). All other chemicals and reagents used were of analytical grade and were purchased from Sigma-Aldrich (USA). The general and high-fat diets were purchased from Daehan Biolink Co. Ltd. (Korea), and their formulas and nutritional compositions are shown in Table 1 and Table 2, respectively.
Manufacture of fermented whey beverage
Fermented whey beverage was prepared using 11% WPC 80, 2% skim milk powder, and 10.3% sugar. The mixed culture of Lactobacillus plantarum DK211 and Lactococcus lactis was previously sub-cultured in triplicate in MRS broth at 37°C. All dry ingredients were dissolved in sterile water and homogenized with a homomixer (IKA, Korea) at 10,000 rpm. This mixture was then pasteurized at 70°C for 30 min, cooled to about 40°C, inoculated with the culture at a rate of 20 mL·L⁻¹ (10⁹ CFU·mL⁻¹), and fermented at 37°C for 10 h until the desired pH of 4.5 had developed. The nutritional composition of the final FWB product is shown in Table 2.
Experimental animals and diets
Four-week-old male Sprague-Dawley (SD) rats (n=24) with an average body weight of 156.04±11.74 g were purchased from Daehan Biolink Co. Ltd. (Korea). The rats were housed in stainless steel cages in a room with controlled temperature (22±2°C), humidity (65±5%), and lighting (12 h alternating light-dark cycle). Following one week of acclimatization with a pelletized normal diet, the rats were randomly divided into three groups as follows: normal diet (ND), high-fat diet (HD), and HD plus FWB (HDFWB). Each experimental group consisted of 8 animals. All rats in the HD and HDFWB groups received a high-fat diet. The rats in the HDFWB group were given the FWB (3,000 mg·day⁻¹) by oral gavage once a day, while the rats in the ND and HD groups were given the same amount of saline solution via oral gavage. During the experimental period (4 wk), the rats were given free access to food and water. Food intake was monitored daily, and body weights were measured once per week. Weight gain was determined from body weight. The experimental design was approved by the Experimental Animals Ethics Committee of Dankook University, and the rats were maintained in accordance with its guidelines.
Preparation of organ samples and body fat pads
At the end of the experimental period, following 12 h of fasting, the rats were anesthetized with ethyl ether. Blood was collected by cardiac puncture, and serum was separated by centrifugation at 10,000 g for 10 min. The serum was stored at -80°C until analysis. The liver, spleen, kidney, and testis were removed to measure the change in weight. The body fat pads of the abdominal and epididymal areas were collected, washed with saline, and weighed.
Measurement of serum glucose, insulin and appetite-related hormone levels

The concentrations of blood glucose were measured by an enzymatic kinetic assay, using the enzyme kit GLU (Germany). Serum insulin concentrations were measured by an enzyme-linked immunosorbent assay (ELISA) using the Insulin-Rat/Mouse ELISA Kit (Millipore, USA). The major appetite-related hormones, leptin and ghrelin, were measured by ELISA using the Mouse Leptin ELISA Kit and the Rat/Mouse Ghrelin (Total) ELISA (Millipore, USA), respectively.
Statistical analysis
For statistical analysis, the Statistical Analysis System (SAS) program was used, and data are expressed as means ± standard deviation (SD). The significance of differences among the three groups was assessed by Duncan's multiple range test. Results were considered significantly different when P values were <0.05.
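To make the group comparison concrete, the sketch below reproduces this kind of analysis on simulated data (means and SDs loosely based on the body weight results reported below). The authors used SAS with Duncan's multiple range test; since Duncan's test has no standard SciPy/statsmodels implementation, Tukey's HSD is shown here as a comparable post-hoc procedure.

```python
import numpy as np
from scipy.stats import f_oneway
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(1)
nd = rng.normal(274.8, 10.5, 8)      # ND final body weights, g (simulated)
hd = rng.normal(316.8, 16.2, 8)      # HD
hdfwb = rng.normal(259.3, 19.2, 8)   # HDFWB

f_stat, p = f_oneway(nd, hd, hdfwb)  # one-way ANOVA across the 3 groups
print(f"ANOVA: F={f_stat:.2f}, p={p:.4f}")

weights = np.concatenate([nd, hd, hdfwb])
groups = ["ND"] * 8 + ["HD"] * 8 + ["HDFWB"] * 8
print(pairwise_tukeyhsd(weights, groups, alpha=0.05))  # pairwise comparisons
```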
Results
Effects of the FWB on body weight changes and food intake

After the experimental period, the final body weight in the HD group was 316.76±16.15 g (Table 3); consequently, rats in the HD group showed a significant body weight gain compared to the ND and HDFWB groups (p<0.05). There was no significant difference in the final body weights between the ND group (274.78±10.54 g) and the HDFWB group (259.28±19.23 g), but the body weight gain in the HDFWB group was lower than that of the ND group. Similarly, the food intake in the HD group was significantly higher than in the other two groups (p<0.05).
Effects of the FWB on the weight organs and body fat pads
The HDFWB group showed significantly lower liver, spleen, and kidney weights compared to the HD group (Table 4). However, no significant increase was observed with the high-fat diet on the weight of the kidney. In accordance with body weight, the weight of the abdominal and epididymal fat pads was also markedly higher in the HD group compared to the ND and HDFWB groups. The difference between the ND and HDFWB groups with respect to the weights of the liver and fat pads did not reach statistical significance.
Effects of the FWB on serum lipid profiles

Rats supplemented with FWB (the HDFWB group) for 4 consecutive weeks showed a significant decrease in the levels of serum triglycerides (65.63±1.78 mg·dL⁻¹), total cholesterol (73.75±2.05 mg·dL⁻¹), and LDL-cholesterol (56.88±2.94 mg·dL⁻¹) compared to the HD group (p<0.05) (Table 5). LDL-cholesterol levels in the HDFWB group were even significantly lower than those of the ND group (p<0.05). No significant differences in HDL-cholesterol levels among the three groups were observed in this study.
Effects of the FWB on blood glucose and serum insulin levels

As shown in Fig. 1, the fasting blood glucose level and the serum insulin level were significantly increased in the HD group compared to the ND and HDFWB groups after 4 wk of the experimental period (p<0.05). There was no significant difference between the ND and HDFWB groups.
Effects of the FWB on serum appetite-related hormones

Fig. 2 shows the effects of FWB on the levels of the appetite-related hormones leptin and ghrelin. The leptin and ghrelin levels in the ND and HDFWB groups were not significantly different, but those in the HD group were significantly higher than those in the ND and HDFWB groups (p<0.05).
Discussion
Obesity is an important topic in the world of public health and preventive medicine. It is characterized by an increase in the number and size of adipocytes at the cellular level in humans and animals (Pal et al., 2010). In the present study, the effect of FWB on obesity in male rats fed a high-fat diet was evaluated.
As in other animal studies, this study proved that a high-fat diet for 4 wk resulted in a significant increase in the rats' body weights. Daily food intake was also significantly increased in the HD group compared to the ND and HDFWB groups. This observation indicates that the increase in body weight is related to the amount of food consumption. Supplementation with FWB caused a remarkable reduction in body weight compared to the HD group, showing the anti-obesity action of FWB. Shi et al. (2011) showed that a diet based on whey protein enhanced weight loss and decreased body fat content in high-fat-diet-fed mice compared with casein. The major whey protein fractions are α-lactalbumin, β-lactoglobulin, and lactoferrin. β-Lactoglobulin is a member of the lipocalin family, capable of trapping hydrophobic molecules such as fatty acids and cholesterol, and consequently it may participate in reducing intestinal lipid absorption (Shi et al., 2012). It has also been recognized that the microbiota and obesity or cardiovascular diseases are related. Probiotics, such as Lactobacillus and Bifidobacterium, that come from dairy products have been suggested to positively modulate the gut microbiota and consequently may help to reduce the risk of being overweight or obese (Martinez-Gonzalez et al., 2014). Poutahidis et al. (2013) discovered, in their mouse model study, that feeding probiotic yogurt together with general chow or with 'fast food' style chow entirely inhibited the mice's fat accumulation and body weight gain. They noted that both lactic acid bacteria and the resident microbiome affect the host immunity, which in turn affects obesity. Martinez-Gonzalez et al. (2014) also found that a higher consumption of yogurt was associated with a lower risk of being overweight or obese in their cohort study.
The inter-meal interval and the number of meals are considered to be indicators of satiety, which indicates prolonged fullness after meals and a reduced motivation to initiate the next meal (Yu et al., 2009). The results showed that dietary supplementation with the fermented whey beverage had a significant effect on daily food intake compared to the HD and ND groups (p<0.05). The decrease in the number of meals in the HDFWB group indicates that the diet reduced spontaneous meal frequency during the experimental period. Yu et al. (2009) also reported that the inter-meal interval of mice fed a whey protein diet was the longest and their number of meals was the lowest among the tested diets. These results indicate that the whey protein diet had a potent satiety effect.
Obesity is characterized not just by an increase in body weight, but also by changes in body composition; in particular, an increase in body fat is a key indicator of obesity (Bocarsly et al., 2010). Compared to the HD and ND groups, the weights of the spleen and kidney were significantly decreased in the HDFWB group. Also, there were significant decreases in the weight of the liver and the body fat pads in the abdominal and epididymal areas of the HDFWB group compared to those of the HD group. Poutahidis et al. (2013) observed that abdominal fat and subcutaneous fat accumulations were significantly reduced in mice eating purified probiotics. Martinez-Gonzalez et al. (2014) stated that the potential anti-inflammatory actions of the probiotics contained in yogurt may contribute to reducing the risk of overweight/obesity, possibly as a result of their ability to reduce lipopolysaccharide production. As mentioned earlier, whey protein also contributes to the reduction of body fat accumulation.
Dyslipidemia is another important hallmark in the pathogenesis of obesity, characterized by hypertriglyceridemia with increased levels of LDL and VLDL cholesterol (Bais et al., 2014). To investigate the effects of FWB on the improvement of lipid disorders in rats, the levels of triglycerides, total cholesterol, HDL-cholesterol, and LDL-cholesterol in serum were measured. Supplementation with FWB significantly attenuated the serum levels of triglycerides, total cholesterol, and LDL-cholesterol. These results are in accordance with those reported by Jacobucci et al. (2001), where blood serum and liver cholesterol were significantly decreased in rats on a whey protein concentrate (WPC) diet. According to Shi et al. (2012), β-lactoglobulin, one of the major whey protein fractions, is a member of the lipocalin family, capable of trapping hydrophobic molecules such as fatty acids, cholesterol, retinol, and vitamin D. Therefore, it is possible that β-lactoglobulin participates in reducing intestinal lipid absorption. Pal et al. (2010) also stated that whey protein supplementation can significantly improve metabolic risk factors associated with chronic diseases in overweight and obese individuals.
Insulin resistance is associated with a number of metabolic disorders such as obesity, hyperlipidemia, and hypertension (Bais et al., 2014). Several animal studies (Bais et al., 2014; Fan et al., 2014) indicated that high-fat diets result in disturbed glucose metabolism and impaired glucose tolerance. Similar to those studies, the present study demonstrated a significant increase in blood glucose and serum insulin levels in the HD group, while the HDFWB group showed the lowest levels (Fig. 1). These findings suggest that FWB can reduce blood glucose levels and improve insulin resistance. These results coincide with other animal studies (Park et al., 2008; Shi et al., 2011) in which whey protein had a significant effect on the decrease in blood glucose and serum insulin. Pal et al. (2010), in their clinical study, also indicated that fasting insulin levels and homeostasis model assessment of insulin resistance scores were significantly decreased in the whey group. They explained that the improvement of insulin sensitivity in the whey protein diet group was due to the reduction in visceral fat, because visceral obesity is strongly correlated with insulin resistance.
Leptin and ghrelin are two hormones that have been recognized to have a major influence on energy balance through the regulation of food intake and body weight (Klok et al., 2007). Leptin, a hormone mainly produced in adipocytes, is a mediator of the long-term regulation of energy balance, suppressing food intake and thereby inducing energy expenditure and weight loss (Fradinho et al., 2014; Klok et al., 2007; Kondoh and Torii, 2008). On the other hand, ghrelin, an endogenous peptide hormone known as a ligand of growth hormone secretagogue receptors (GHS-Rs), is a fast-acting hormone and plays a role in meal initiation (Klok et al., 2007). It has been found that the serum leptin level is positively correlated with body fat mass (Klok et al., 2007; Kondoh and Torii, 2008). This correlation was observed in this study, as the serum leptin levels in the HDFWB group (1.46±0.36 ng·mL⁻¹) were significantly lower than those in the HD group (2.79±0.69 ng·mL⁻¹). In general, the blood level of ghrelin increases during fasting and decreases after food intake (Klok et al., 2007; Sim et al., 2014), and ghrelin has been found to be reduced in obese humans. However, our study showed conflicting results, as the serum ghrelin levels were significantly higher in the HD group than in the other groups (p<0.05) and decreased in the HDFWB group. This result agrees with that of Park et al. (2008), in which the serum ghrelin level was significantly decreased in rats fed whey protein. This suggests that the ghrelin increase observed in the obese rats can be reduced by feeding FWB.
Conclusion
The present study showed that FWB could suppress body weight gain, organ weight gain, and white adipose tissue formation, reduce the levels of serum lipids and appetite-related hormones, and improve insulin sensitivity in obese rats fed a high-fat diet. Both probiotics and whey protein used for the fermentation of the FWB product might contribute to the anti-obesity and lipid lowering effects.
Pre-diagnostic circulating concentrations of fat-soluble vitamins and risk of glioma in three cohort studies
Few prospective studies have evaluated the relation between fat-soluble vitamins and glioma risk. Using three cohorts—UK Biobank (UKB), Nurses' Health Study (NHS), and Health Professionals Follow-Up Study (HPFS)—we investigated associations of pre-diagnostic concentrations of fat-soluble vitamins D, A, and E with incident glioma. In 346,785 participants (444 cases) in UKB, associations with vitamin D (25-hydroxyvitamin D [25(OH)D]) were evaluated by Cox proportional hazards regression. In NHS (52 cases, 104 controls) and HPFS (32 cases, 64 controls), associations with 25(OH)D, vitamin A (retinol), and vitamin E (α- and γ-tocopherol) were assessed using conditional logistic regression. Our results suggested plasma concentrations of 25(OH)D and retinol were not associated with glioma risk. Comparing the highest to lowest tertile, the multivariable hazard ratio (MVHR) for 25(OH)D was 0.87 (95% confidence interval [CI] 0.68–1.11) in UKB and the multivariable risk ratio (MVRR) was 0.97 (95% CI 0.51–1.85) in NHS and HPFS. In NHS and HPFS, the MVRR for the same comparison for retinol was 1.16 (95% CI 0.56–2.38). Nonsignificant associations were observed for α-tocopherol (MVRR tertile 3 vs 1 = 0.61, 95% CI 0.29–1.32) and γ-tocopherol (MVRR tertile 3 vs 1 = 1.30, 95% CI 0.63–2.69) that became stronger in 4-year lagged analyses. Further investigation is warranted on a potential association between α- and γ-tocopherol and glioma risk.
Materials and methods
Study population and design. UK Biobank. UKB is a population-based cohort of 502,536 participants living in the United Kingdom (UK), who were aged 40-69 years at recruitment in 2006 to 2010. Participants were identified from National Health Service patient registries and completed questionnaires that included information on demographics, lifestyle, diet, and health and medical history/conditions. At study enrollment, the majority of participants also provided a 45 mL blood sample. Blood samples were collected and transported overnight at 4 °C to a central laboratory, where they were processed and stored as aliquots at − 80 °C 14 until analyzed. Participants provided written consent at recruitment, and data were downloaded from UK Biobank Resource under approved protocol 16944.
Nurses' Health Study and Health Professionals Follow-Up Study. NHS, established in 1976, included 121,701 female nurses aged 30-55 years 15 . HPFS began in 1986 and enrolled 51,529 male health professionals aged 40-75 years 16 . All participants completed baseline questionnaires on demographics, lifestyle factors, and medical and other health-related information. Both cohorts have been followed up biennially, with follow-up compliance exceeding 90% 17 . Blood samples were returned by 32,826 NHS participants from 1989-1990 and by 18,018 HPFS participants from 1993 to 1995, through overnight mail. On arrival at a centralized laboratory, the samples were centrifuged to separate plasma from the buffy coat and red cells and were stored in liquid nitrogen freezers until analyzed. Over 95 percent of samples arrived within 26 h of phlebotomy 18,19 . Among participants with available blood samples, two controls were randomly selected for each glioma case via incidence-density sampling among participants who were still alive and free of cancer at the date of the case diagnosis 20 . Controls were individually matched to the cases on fasting status (fasting vs. non-fasting), year of birth (± 1 year; 2 years was used for 10 matched sets), month of blood collection (± 1 month), and race (white vs. non-white).
The study protocol was approved by the Institutional Review Boards of the Brigham and Women's Hospital and Harvard T.H. Chan School of Public Health, as well as participating registries (as required).
In NHS and HPFS, glioma cases were identified initially through self-report with additional cases confirmed by vital status or medical review post-death. Written consent for medical record review was collected from participants or next of kin. Data on tumor subtype (GBM and non-GBM as described above for UKB) was extracted directly from medical records for all cases.
Statistical analysis.
Statistical analyses were performed independently for UKB and NHS/HPFS, as different study designs were implemented and only 25(OH)D was measured in all three studies. In addition, 25(OH)D was measured using different methodologies in UKB compared to NHS and HPFS, and 25(OH)D concentrations have been shown to differ across assay methods 23. In UKB, we excluded participants with a history of cancer at baseline, those genetically related, and those without a 25(OH)D measurement, leaving 346,812 participants and 444 incident glioma cases in the final analysis. In NHS and HPFS, we included 84 cases (52 from NHS and 32 from HPFS) and 168 controls (104 from NHS and 64 from HPFS) in the final analysis.
To adjust for seasonal variation in each study, 25(OH)D was regressed on the week of blood draw in the full cohort in the UKB and in the control participants in NHS/HPFS using sine-cosine functions 24. In each study, a season-standardized value was calculated for each participant by adding their residual (the difference between their observed concentration and the predicted concentration from the regression model) to the predicted average 25(OH)D concentration over the entire year from the regression model in that study 24. The 25(OH)D concentrations were categorized in two ways: tertiles, using cutoffs from the full cohort for UKB and from control participants in NHS/HPFS, and cut-points based on the Institute of Medicine (IOM) guidelines for bone health: < 50 nmol/L (deficiency/insufficiency), 50 to < 75 nmol/L (sufficiency), and ≥ 75 nmol/L (above sufficiency) 25.

In UKB, Cox proportional hazards regression was used to calculate multivariable hazard ratios (MVHRs) and 95% confidence intervals (CIs) for the associations between 25(OH)D and glioma risk. Follow-up consisted of the time from enrollment to cancer diagnosis, death, or last linkage to the National Health Service (October 31, 2015), whichever came first. Multivariable models adjusted for the matching factors used in NHS and HPFS, including age (continuous), fasting status (fasting vs. non-fasting), race (white, non-white), month of blood draw (continuous), and sex (men, women), as well as body mass index (BMI, continuous) and smoking status (never, past, current smoker). Analyses were performed for all glioma cases and separately for GBM and non-GBM subtypes. In the GBM analysis, non-GBM cases were censored at the date of diagnosis and vice versa. Analyses were performed using the "SURVIVAL" package in R version 3.5.0 (Vienna, Austria).
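A minimal sketch of this season-standardization, assuming a single annual sine-cosine harmonic (the exact functional form used by the authors is not specified here) and using simulated data; the tertile assignment at the end mirrors the categorization described above:

```python
import numpy as np

rng = np.random.default_rng(0)
week = rng.integers(1, 53, size=1000).astype(float)     # week of blood draw
vitd = 60 + 10 * np.sin(2 * np.pi * week / 52) + rng.normal(0, 15, 1000)

# Regress 25(OH)D on sine-cosine terms of the week of blood draw
X = np.column_stack([np.ones_like(week),
                     np.sin(2 * np.pi * week / 52),
                     np.cos(2 * np.pi * week / 52)])
beta, *_ = np.linalg.lstsq(X, vitd, rcond=None)

# Season-standardized value = residual + predicted year-round average
residual = vitd - X @ beta
weeks_all = np.arange(1, 53, dtype=float)
X_all = np.column_stack([np.ones_like(weeks_all),
                         np.sin(2 * np.pi * weeks_all / 52),
                         np.cos(2 * np.pi * weeks_all / 52)])
standardized = residual + (X_all @ beta).mean()

# Tertile cut-points from the standardized distribution
tertile_cuts = np.quantile(standardized, [1 / 3, 2 / 3])
tertile = np.digitize(standardized, tertile_cuts) + 1   # 1, 2, or 3
print(tertile_cuts, np.bincount(tertile)[1:])
```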
NHS and HPFS, unless otherwise specified, were analyzed as a combined dataset using conditional logistic regression to estimate multivariable risk ratios (MVRRs) for the associations between 25(OH)D, retinol, and α- and γ-tocopherol and glioma risk. The distribution in the control participants was used to assign tertile cut-points for each vitamin. In addition to being conditioned on matching factors, multivariable models were additionally adjusted for BMI (continuous) and smoking status (never, past, current smoker) reported on the questionnaire completed prior to blood draw. For retinol and α- and γ-tocopherol, we also adjusted for plasma total cholesterol (continuous). Analyses were performed for all glioma and by subtype. Spearman correlation coefficients (r) were calculated between the four fat-soluble vitamins and cholesterol in the control participants. Analyses were performed using the SAS statistical package, version 9.4 for UNIX (SAS Institute, Cary, NC).
In all three cohorts, sensitivity analyses were conducted by sex, excluding current smokers, and excluding the cases diagnosed within four years of blood collection. All statistical tests were two-sided with P values < 0.05 considered statistically significant. All methods were carried out in accordance with relevant guidelines and regulations.
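The authors fitted Cox models in R's "survival" package. Purely as an illustration of the model type, a minimal Python sketch with the lifelines library on simulated data follows (all variable names, rates, and distributions are invented, not UKB values):

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "age": rng.normal(55.7, 8.1, n),
    "vitd_tertile": rng.integers(1, 4, n),              # 1..3
    "bmi": rng.normal(27.0, 4.5, n),
    "followup_years": rng.exponential(6.7, n).clip(0.1, 10.0),
    "glioma": rng.binomial(1, 0.01, n),                 # simulated rare event
})

cph = CoxPHFitter()
cph.fit(df, duration_col="followup_years", event_col="glioma")
cph.print_summary()  # hazard ratios (exp(coef)) per covariate
```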
Results
In UKB, the average follow-up time was 6.7 (SD: 0.9) years; 444 glioma cases (330 GBM and 114 non-GBM) were identified. In NHS and HPFS combined, 84 glioma cases (54 GBM and 30 non-GBM) were diagnosed, on average, 8.6 (SD: 4.3) years after blood collection. The mean (SD) age at blood collection was 55.7 (8.1) years in UKB, and 60.2 (7.9) years in NHS and HPFS. The NHS and HPFS had a higher proportion of multivitamin users than the UKB (38.1% vs. 17.5%). Only a small proportion of participants were current smokers (UKB: 10.2%, NHS and HPFS: 7.1%); the proportion of never smokers was slightly higher in the UKB (56.1%) than in NHS and HPFS (52.4%) (Table 1). Irrespective of source cohort, women and men shared similar characteristics, except that participants in NHS were younger than those in HPFS (Supplementary Table S1). In the UKB, participants who were diagnosed with glioma during follow-up tended to be older at study enrollment and were less likely to be women compared to the overall study population. By design, in the NHS and HPFS, age and sex were comparable in glioma cases and controls. In the UKB, glioma cases were less likely to be never smokers at study enrollment compared to the entire study population. In NHS and HPFS, glioma cases were also less likely to be never smokers than controls. Circulating 25(OH)D modeled in tertiles was not significantly associated with glioma risk in either UKB or NHS/HPFS (Table 2). Likewise, no significant associations were observed in UKB or NHS/HPFS when 25(OH)D was modeled using categories based on IOM guidelines for bone health or continuously (Table 2). Results were similar for women and men (Supplementary Table S2), and after excluding current smokers or the cases diagnosed within four years of blood collection (Supplementary Table S3).
Retinol and α- and γ-tocopherol.
In NHS and HPFS, among the controls, retinol concentrations and their correlation with total cholesterol are summarized in Supplementary Table S4. Overall, we observed no significant associations for circulating retinol. The MVRRs were 1.16 (95% CI 0.56-2.38) comparing the highest vs. lowest tertile and 1.05 (95% CI 0.50-2.21) per SD increment (0.5 μmol/L) in retinol concentration (Table 3). We observed similar results for women and men (Supplementary Table S5), and after excluding current smokers and the matched pairs where the case was diagnosed within four years of blood collection (Supplementary Table S6).
In NHS and HPFS, among the controls, the mean (SD) concentration was 44.9 (19.4) μmol/L for α-tocopherol and 5.8 (3.0) μmol/L for γ-tocopherol. For the controls, we observed a stronger correlation between α-tocopherol and total cholesterol (Spearman r: 0.55 in NHS and 0.41 in HPFS) than between γ-tocopherol and total cholesterol (0.24 in NHS and 0.31 in HPFS) (Supplementary Table S4). Compared with the lowest tertile, participants in the highest tertile of α-tocopherol had a 39% lower glioma risk (MVRR = 0.61, 95% CI 0.29-1.32), whereas those in the highest tertile of γ-tocopherol had a 30% elevated glioma risk (MVRR = 1.30, 95% CI 0.63-2.69), although neither association was statistically significant. Results from the continuous analyses generally aligned with the tertile results, with the findings for γ-tocopherol being marginally significant (per SD increment; Table 3). Associations were similar when mutually adjusting α- and γ-tocopherol (data not shown). When cross-classifying by the median concentration of the two tocopherols, we found a 40% reduced risk in the higher α- and lower γ-tocopherol group compared with the lower α- and lower γ-tocopherol group (Supplementary Table S7). Results were similar for women and men (Supplementary Table S5) and after excluding current smokers (Supplementary Table S6). Stronger associations were observed after excluding cases (and their matched controls) diagnosed within the first four years after blood collection (MVRR tertile 3 vs. 1: 0.46 for α-tocopherol and 1.42 for γ-tocopherol), though estimates remained nonsignificant (Supplementary Table S6). Given that the association between γ-tocopherol and GBM differed from that for total glioma, we analyzed its association with non-GBM and found a twofold higher risk per SD increment in γ-tocopherol (MVRR = 2.10, 95% CI 1.12-3.93, n = 30 cases).

Table 2 footnotes. 1 Adjusted for sex (male, female), age (continuous), race (nonwhite, white), month of blood collection (continuous), body mass index (continuous), and smoking status (never, past, current). 2 In addition to conditioning on matching factors in the NHS and HPFS (year of birth, fasting status, month of blood collection, and race), adjusted for body mass index (continuous) and smoking status (never, past, current). 4 The p-value for linear trend corresponds to the p-value of the ordinal variable constructed by assigning the median value of each category to all participants in that category. 5 Results for the continuous analyses are given for a 25 nmol/L increment.
Discussion
In three prospective cohorts (UKB from the UK, and NHS and HPFS from the US), statistically significant associations were not observed between circulating 25(OH)D and risk of glioma. In two of these cohorts, NHS and HPFS, retinol and α- and γ-tocopherol were also investigated and were not significantly associated with glioma risk. However, the associations for α-tocopherol and γ-tocopherol were in opposite directions (inverse for α-tocopherol and positive for γ-tocopherol).
Most studies investigating potential roles of fat-soluble vitamins in glioma development have examined associations based on self-reported dietary intake. However, fat-soluble vitamin status in particular27 can be affected by numerous factors including bioavailability and specific intrinsic and extrinsic factors, such as food content, health, and genetic characteristics of the population in question28,29. Moreover, dietary assessment of fat-soluble vitamin status can be confounded by other methodological issues. For instance, depending on the time of year, sun-induced vitamin D synthesis in the skin may be another major source of vitamin D. In addition, vitamin E intake typically does not reflect actual levels of various tocopherol isoforms in the body because γ-tocopherol is disproportionately metabolized and excreted30. Hence, dietary assessment studies could yield different nutrient-disease associations than studies investigating circulating levels.

Table 3. Association between circulating retinol, α- and γ-tocopherol and glioma risk in the Nurses' Health Study and Health Professionals Follow-Up Study. CI confidence interval, RR risk ratio. 1 In addition to conditioning on matching factors in the NHS and HPFS (year of birth, fasting status, month of blood collection, and race), adjusted for body mass index (continuous), smoking status (never, past, and current), and circulating cholesterol (continuous). 2 Median plasma retinol for each tertile in the NHS and HPFS: 1.74, 2.16, and 2.74 μmol/L. 3 The p-value for linear trend across tertiles was the p-value of the ordinal variable constructed by assigning medians to all participants in the tertiles. 4 Results for continuous analyses for each standard deviation increment. 5 Median plasma α-tocopherol for each tertile in the NHS and HPFS: 29.66, 41.32, and 57.50 μmol/L. 6 Median plasma γ-tocopherol for each tertile in the NHS and HPFS: 2.94, 5.60, and 8.59 μmol/L.

There is a paucity of studies investigating vitamin D and glioma risk, and results are inconsistent. In an ecological study, the prevalence of brain cancer was elevated in the southern region of the US31. Although this geographical variation could be related to different distributions of genetic and occupational risk factors, it is plausible that geographic differences related to sunlight exposure and vitamin D synthesis could also contribute to the differences in glioma prevalence by region32. A nested case-control study from the Janus Serum Bank in Norway33 found no overall association between vitamin D status and glioma risk (OR quintile 5 vs. 1 = 1.04, 95% CI 0.73-1.47). Although the authors reported nonsignificant associations in opposite directions in younger (n = 210 cases) and older (n = 185 cases) male participants, the same pattern was not observed in the current study (data not shown). Consistent with a recent Mendelian randomization study that found no evidence for a causal relationship between 25(OH)D and glioma risk34, the current analysis, which used directly measured 25(OH)D, also found an overall null association between 25(OH)D and glioma risk.
Vitamin A has been shown to inhibit proliferation and induce differentiation in various cell types, mainly through binding to the nuclear retinoic acid receptors (α, β, γ), which are transcriptional and homeostatic regulators whose functions are often compromised early in neoplastic transformation11,12. Therefore, vitamin A has been hypothesized to lower the risk of glioma. Intake studies of vitamin A have shown inconsistent results. A previous meta-analysis of eight case-control studies suggested a significant inverse association between vitamin A intake and glioma risk35. However, there was statistically significant heterogeneity in the study-specific results, as well as the potential for dietary recall and selection bias. A null association between serum retinol and brain cancer risk was observed in the Alpha-Tocopherol, Beta-Carotene Cancer Prevention (ATBC) study (HR quintile 5 vs. 1 = 1.03, 95% CI 0.47-2.25, n = 78 cases)6. In NHS and HPFS, we also observed a weak, nonsignificant association for retinol and glioma risk. Failure to identify an association in our study, as well as in the ATBC study, may be attributed to small sample size or the tight regulation of circulating retinol36.
In the present study, we observed a nonsignificant inverse association between circulating α-tocopherol, but a nonsignificant positive association for γ-tocopherol, with risk of glioma. These findings are in line with some37,38, but not all, prior studies39. In a prospective nested case-control study from the ATBC Study with 64 glioma cases, circulating α-tocopherol was inversely associated with glioma risk (per SD increment, OR 0.65, 95% CI 0.44-0.96); γ-tocopherol was not investigated38. Further, a case-control study of 34 GBM cases also reported that prediagnostic α-tocopherol, but not γ-tocopherol, was inversely associated with glioma risk37. However, in the Janus Serum Bank study of 110 GBM cases, high α- and γ-tocopherol concentrations were both positively associated with risk of GBM, and the associations were stronger based on samples collected at least 10 years prior to diagnosis39. Although we were unable to apply such a long lag due to our limited sample size, we noted a stronger though still nonsignificant positive association for γ-tocopherol in a 4-year lagged analysis, supporting the possibility of a potentially adverse influence of γ-tocopherol on glioma risk. Results from vitamin E dietary intake studies are not consistent with those utilizing biomarkers. A meta-analysis of 12 studies including 3180 glioma cases found that intake of vitamin E from foods, or from foods and supplements, was not associated with risk of glioma40. Dietary recall or selection bias issues may have contributed to the different findings, as the meta-analysis mainly included retrospective studies; only one of the studies was prospective in design. Among vitamin E intervention trials41-52, only one study has reported results for cancers of the central nervous system, with similar numbers of cases observed in the treatment (11 cases) and placebo (8 cases) arms; glioma was not reported separately48.
Strengths of the current study include the prospective design and availability of pre-diagnostic blood samples, which reduced the possibility of established glioma affecting the measured concentrations of circulating fat-soluble vitamins. In addition, the entire UKB cohort had circulating 25(OH)D measurements, allowing for robust evaluation of potential associations with glioma risk. Although results cannot be directly compared between the UKB and NHS/HPFS studies due to the use of different 25(OH)D assessment methodologies, which could lead to differences in reported 25(OH)D concentrations26, we note that neither study supported a role for vitamin D in glioma development. The primary limitations of this study include the small sample size, particularly for the analyses of retinol and α- and γ-tocopherol (84 cases), and the use of a single blood sample for each participant. However, in plasma samples collected at two time points 1-2 years apart in NHS, we have shown high reproducibility for 25(OH)D (intraclass correlation coefficient [ICC] = 0.71), retinol (ICC = 0.87), and α-tocopherol (ICC = 0.86); γ-tocopherol was not assessed in that study53. In addition, the population was mainly Caucasian with limited ethnic diversity. The non-statistically significant inverse association observed for α-tocopherol and the non-statistically significant positive association observed for γ-tocopherol require further examination in larger, more ethnically heterogeneous cohorts.
In conclusion, our results suggest circulating 25(OH)D (vitamin D) and retinol (vitamin A) are not associated with glioma risk. Further investigation is however warranted to evaluate the potential associations we observed between vitamin E isoforms, α-and γ-tocopherol, and glioma risk.
Data availability
The work is based on the UK Biobank Resource under application number 16944. For the NHS and HPFS cohort data, we provide access through an NIH approved data enclave. Instructions for how to access the data are publicly available through their respective cohort websites.
Exploring criteria for university mathematics teachers’ selection of calculus textbooks
This study delves into the nuanced process of calculus textbook selection by university mathematics lecturers. Drawing from a conceptual framework that encompasses various selection criteria, including culture-driven, student-driven, resource-driven, teacher-driven, mathematics-driven, and constraints-driven factors, the research examines the predominant criteria guiding lecturers’ choices. Data from participating lecturers reveal a multifaceted decision-making process. “Culture-driven selection” emerges as a significant criterion, indicating the influence of curriculum committees. Additionally, educators prioritize “student-driven selection,” valuing accessibility and contextualization, while also favoring textbooks with abundant resources and problems. The study underscores the complexity of textbook selection, shedding light on how instructors navigate educational, commercial, and student-centered considerations. These findings offer insights beneficial for textbook authors and publishers, useful for the creation of resources aligned with educators’ preferences.
Introduction
The influence that textbooks have on the teaching and learning of mathematics is a recognized fact in the field of mathematics education, which has stimulated the development of a research area focused on mathematics textbooks. Although studies on mathematics textbooks began to be published at the end of the 1970s (e.g., Kepner & Koehn, 1977), it was not until the 1980s that this type of study became more common in our discipline (see, for instance, Pinker, 1981; Turnau, 1980). Today, mathematics textbook research is a well-established area within the field of mathematics education.
Despite being a relatively new area of study, some articles have been developed that provide an overview of the development of this research area (e.g., Fan, Zhu & Miao, 2013; Rezat, Fan & Pepin, 2021; Schubring & Fan, 2018). In particular, Fan (2013) develops an analysis of contemporary issues on mathematics textbook research discussed in specialized conferences and proposes a framework for textbook research divided into three areas:
1. Textbooks as the subject of research. The focus is on studying how textbooks are structured and their qualities. For example, how do they represent mathematical knowledge, reflect a specific pedagogy, and portray cultural and social values.
2. Textbooks as a dependent variable. The focus here is on how mathematics textbooks are affected by other factors. For instance, how textbooks are developed in one or more countries, what role different agents (the government, mathematicians, curriculum specialists, teachers) play in the development of textbooks, and how research informs the contents of textbooks.
3. Textbooks as an independent variable. Here, it is studied how textbooks affect other factors. For example, how and to what extent textbooks affect the learning and teaching of mathematics; how textbooks are used by students and teachers and why; how textbooks serve to transmit socio-cultural and political norms and values.
One of the least studied aspects within the third area of research proposed by Fan (2013) is the textbook selection process by mathematics teachers. In particular, little is known about the criteria applied and the characteristics of textbooks valued by university mathematics teachers when selecting a textbook. When a university lecturer has the possibility of choosing a textbook as a support to teach her own class, what aspects or characteristics does she look for to select one textbook over another? What are the selection criteria she uses? The selection of textbooks is a critical process not only because of the influence that these instructional materials have on the teaching and learning of mathematics, but also due to the economic implications (related to the access, sales, and distribution of textbooks) that this process can generate within school mathematics. As noted by Zeringue, Spencer, Mark and Schwinden (2010, p. 3), "The selection process is not just a process of curriculum decision-making; it is also a purchasing process, a human resources process, and a political process." Such economic and political dimensions tend to be overlooked in mathematics education research (Pais, 2014).
The study reported in this article contributes to clarifying some of the questions raised above. In particular, an exploration of the selection criteria applied by university lecturers when choosing a calculus book for their own classes is reported. The purpose of this study is to investigate how calculus teachers select textbooks for their own courses. In particular, some of the characteristics that teachers look for in a calculus textbook are identified. The research question guiding this exploration is the following: What selection criteria do university lecturers apply when selecting a calculus textbook for their own courses?
The article structure is as follows. First, an overview of research related to mathematics textbook selection processes is presented. Then, a conceptual framework on the selection of resources by mathematics teachers is introduced. Subsequently, the method implemented to conduct the research is explained. Finally, the findings of the study and their discussion are presented.
Research on mathematics textbooks selection processes by mathematics teachers
Although textbook selection processes have been studied in areas other than mathematics education (e.g., Makgato & Ramaligela, 2012) or with a focus not centered on mathematics teachers but on other populations such as mathematics teacher educators (Harkness & Brass, 2017), the processes of textbook selection by mathematics teachers have received limited attention. Nevertheless, some existing studies can serve as references for the research presented in this paper. These prior studies can be categorized into descriptive studies and normative studies.
Descriptive studies on the selection of textbooks by mathematics teachers
Descriptive studies refer to those investigations that look into the preferences and criteria that come into play when mathematics teachers have the opportunity to select a mathematics textbook to teach their students.
The first study we identified is that of Shield (1989), who investigates the preferences in mathematics textbook characteristics of 28 teachers teaching junior mathematics (teaching at lower secondary schools). He argues that institutional and commercial writers of textual materials work with little knowledge of teacher and student preferences about mathematics textbooks, and there is a lack of knowledge of how the textual materials are actually used by both groups. As part of his research design, Shield provided teachers with a 13-item list of textbook characteristics they prefer, which they were asked to rate on a scale from 10 (very important) to 1 (of no importance). Some examples of these items are "It provides real problems to make students think," "It contains a lot of exercises," and "The sequence of topics in the book can be varied." Shield found that the items rated as most important are that the textbook provides: (a) real problems to make students think; (b) the correct terms; (c) new words and symbols; (d) readable explanations for students; and (e) important ideas, rules and definitions. These results show an emphasis on students' needs; in fact, Shield (1989) concludes that mathematics teachers consider the textbook "as primarily a source of exercises and problems for students to do both in class and at home. The textbook is also regarded as a reference for students for revision and to overcome difficulties with homework" (p. 14).
The studies by Barry, Gay, Pelkey and Rothrock (2017) and Gay, Barry, Rothrock and Pelkey (2020) focus on exploring the type of textbook that student teachers prefer to teach mathematics. Using an online survey, they posed the following question to ninety middle and upper secondary school student teachers in 16 different states in the United States: "If you had the opportunity to choose a type of text for your students, which type of text would you choose?" (Barry et al., 2017, p. 3). To answer this question, student teachers were provided with a list of options that included:
• Digital: text with interactive components and multimedia content like video clips or animation
• Print: traditional printed math textbook
• eText: image of a print text on a digital device without interactive components
The majority of student teachers (52%) declared a preference for the digital text with interactive components, followed by 36% who declared a preference for a traditional printed mathematics textbook. When asked to explain the reasons for their choice, student teachers who stated that they preferred a digital textbook claimed that because "students these days love to use technology" (p. 4), the digital book could be more interesting, relevant, and engaging for them. Some respondents also highlighted the independence of study that a digital book can provide, for example: "With a print book, the only resource for teaching is myself … With a digital book, students who want to work at a faster pace have resources to study on their own and students who have trouble understanding my instruction could watch videos." (p. 4). In turn, the student teachers who stated that they preferred a printed mathematics book highlighted that its accessibility did not depend on access to technology, for instance: "I prefer to use a print textbook because they are available to all of the students. The library checks out textbooks to the students for each of their classes. When textbooks are only provided online, students are required to have access to computers and/or internet, which is not always possible" or "Most of my students do not have access to a computer" (p. 4). Here again, prospective instructors seem to be concerned about the needs of their students when choosing a mathematics textbook.
We are aware that not all studies on mathematics textbook selection processes focus on exploring the individual preferences of mathematics instructors. Zeringue et al.'s (2010) work focused on exploring textbook selection at the district level in the United States. Based on in-depth interviews with over 150 K-12 mathematics curriculum decision-makers representing districts in eight states, Zeringue et al. identify "key factors that matter most when districts are selecting mathematics instructional materials" (p. 7). These key factors are as follows:
• The degree of alignment of the candidate materials with state standards and tests;
• The anticipated level of teacher acceptance of the materials under consideration;
• The advocacy of a curriculum leader(s) for a particular approach or set of materials;
• A committee's evaluation of the quality of the materials being reviewed (usually based on an established set of criteria); and
• Additional information about the materials, such as data about student performance, reviews from trusted sources, and advice from neighboring districts.
We can see that other types of criteria are involved in these district selection processes that do not make explicit reference to the students' needs.
Normative studies on the selection of mathematics textbooks
Normative studies on textbook selection provide frameworks or guidelines that can guide mathematics teachers when selecting a mathematics textbook.Two such studies are presented in this overview.
The first normative study identified is that of Tarr, Reys, Barker and Billstein (2006). Due to the central role that textbooks have in the teaching of mathematics, these authors provide a general framework for reviewing and selecting mathematics textbooks. The framework is constituted by questions that must be asked in order to evaluate a textbook. Such questions are organized around three key dimensions:
• Mathematics content emphasis (e.g., is there an appropriate balance of skill development, conceptual understanding, and mathematics processes? Are mathematics ideas connected and interwoven across strands instead of studied in isolation? Do contextual problems engage students and, where appropriate, give rise to mathematics ideas?) (p. 51).
• Instructional focus (e.g., to what degree do activities foster the development of mathematics as a human endeavor and a way of thinking? Do lessons promote classroom discourse by explicitly requiring students to share their thinking or strategies? Where appropriate, do lessons involve the use of instructional technology, manipulatives, or other tools so that students can visualize complex concepts, acquire and analyze information, and communicate solutions?) (p. 52).
• Teacher support (e.g., what assessment tools are provided for assessing student learning and informing instructional decision-making? Do the materials provide opportunities for teachers to increase their own understanding of the mathematics ideas that students are studying? Do the materials provide a rich source of problems, exercises, and projects that can be used for homework?) (p. 53).
The second normative study identified is that of Czeglédy and Kovács (2008). This study was motivated by the possibility for Hungarian mathematics teachers to choose textbooks for their teaching practice from among an increasing number of texts. The purpose of the study was to develop a textbook evaluation system based on the opinions of 41 mathematics teachers who taught grades 1 to 12 (primary, lower and upper secondary school). Thus, as a result of their study, Czeglédy and Kovács (2008) propose a textbook evaluation system focused on the following five desirable aspects in a mathematics textbook:
• The textbook is appropriate for individual learning.
• The textbook is well applicable in the teaching process.
• The logical orientation of the textbook is inductive.
• The textbook makes talent improvement possible.
• The textbook has supplementary materials (a problem-solving book, a collection of exercises, test sheets, etc.).
This paper presents a descriptive study on the process of selecting textbooks by university mathematics instructors. The study aims to expand our understanding of the selection practices of this under-researched population. Research on the criteria used by mathematics teachers, particularly university teachers, in textbook selection has been largely overlooked in mathematics textbook research, and this is reflected in the few studies found and reported in this section. The study described in this article was conducted through a survey and interviews with mathematics teachers, and the data collected were analyzed using the conceptual framework outlined in the next section of the paper.
Conceptual framework
To establish a conceptual framework for this study, we rely on research on mathematics textbooks and teachers' resources, particularly on research focused on teachers' selection of resources (Siedel & Stylianides, 2018). The notion of "resource" in mathematics education can be understood broadly, including elements such as textbooks, curricular guidelines, student worksheets, webpages, etc. (Pepin, Gueudet & Trouche, 2013). Siedel and Stylianides (2018) define a resource as "any assets teachers might draw on to support any stage of their everyday teaching practice" (p. 121). These authors argue that "the proliferation of instructional resources and the potential impact of teachers' resource selection on students' learning opportunities create a need for research on teachers' selection of resources" (p. 119). One contribution of these authors to the research on teachers' selection of resources is the development of a "pre-disposition taxonomy," which can serve as a "generic template for categorizing teacher choices, helping the mathematics education community to examine the selection process and determine, for example, whether teachers' pre-dispositions influence their choices in significant ways" (Siedel & Stylianides, 2018, p. 120). To develop this taxonomy, the authors reviewed existing resource selection studies. They developed an interview study on resource selection (including mathematics textbooks) involving 36 mathematics teachers at six secondary schools in England. According to the authors, the cultural context from which these teachers come provides a large pool of possibilities for resource selection. This facilitates the exploration of resource selection criteria by mathematics teachers.
By conducting a qualitative content analysis of the responses from the interviewed teachers, Siedel and Stylianides (2018) identify six themes related to the reasons for teachers' resource choice: student-driven selection, teacher-driven selection, mathematics-driven selection, constraints-driven selection, resource-driven selection, and culture-driven selection. As noted by the authors, the first three pertain to classroom interactions, while the last three are related to influences other than the classroom context. Each of these themes is briefly explained below.
• Student-Driven Selection. This theme refers to resource selection driven by considering students' needs. For example, selecting a resource because it promotes student engagement or adds value to students' mathematical learning.
• Teacher-Driven Selection. This resource selection is driven by the way the teacher prefers to teach. For instance, selecting a resource because it aligns with their preferred teaching style or their convictions about the nature of mathematics teaching and learning.
• Mathematics-Driven Selection. In this mathematics-driven resource selection, some teachers associate the selection of resources with specific mathematical subjects. For example, selecting a resource because they believe it is particularly good for a specific mathematical topic but may not be used for any other topic.
• Constraints-Driven Selection. In this case, resource selection is influenced by accessibility constraints. Often, resource selection may be constrained by a need for specific elements, such as time or money, that restrict access to resources. For instance, a teacher may argue that they prefer to use a ready-made resource for classroom use to save time.
• Resource-Driven Selection. This resource selection is affected by the features of the resources themselves. Such features may include "ease of use," "adaptability," "flexibility," "esthetics," "accessibility," "reliability," and "fit for purpose."
• Culture-Driven Selection. In this case, the resource selection is influenced by the culture of each department or school district. This institutional influence on resource selection could be reflected in a teacher choosing not to use textbooks in their class because the school they work at discourages textbook use and emphasizes teacher autonomy.
This "pre-disposition taxonomy" was used as a conceptual framework to analyze the responses of university mathematics teachers to a questionnaire and interview related to their textbook selection processes.More details about this methodological process are provided in the following section.
Method
In this section of the article, the implemented research method is detailed.Specifically, it describes the population participating in the study, the instruments used, and the procedure followed.
Population participating in the study
Sixty calculus teachers from an engineering department within a public university in Ciudad Juarez, Chihuahua, on the northern border of Mexico, south of El Paso, Texas, were invited to participate in this study anonymously. This Mexican university has an estimated student population of 36,524 from this geographic area, with socioeconomic backgrounds ranging from medium to low. Out of the total number of teachers invited, nine agreed to participate in the study, during which they completed a questionnaire and underwent the interview described below. Prior to the commencement of the study, written authorization was obtained from the participants using a format provided by the University's Ethics Committee.
Description of the instruments used in the study
The study was divided into two stages: in the first stage, a questionnaire was used, while in the second stage, an interview was conducted.
The questionnaire used consisted of 11 open-ended questions. However, only some of the questions are directly related to the selection of calculus textbooks. Some questions refer to general information about the teachers (e.g., briefly describe your professional or academic background). In contrast, other questions sought to obtain information related to other topics of interest to the researchers (e.g., how do you use the textbook to prepare or teach your calculus classes? do you use other support materials, besides textbooks, to prepare and teach your calculus lessons?). The questionnaire's questions directly related to the teachers' textbook selection processes are the following three:
1. Write the title and author of the textbooks you use to prepare and teach your calculus lessons.
2. Why did you choose the textbooks referred to in your previous answer? That is, how do you select a textbook for preparing and teaching your calculus lessons? Please explain your selection in detail.
3. What characteristics are most important when selecting the textbooks to prepare and teach your calculus lessons? Explain the characteristics in a detailed manner.
As part of the questionnaire, teachers were asked if they would be willing to participate in a subsequent semi-structured interview to further elaborate on their responses. All nine teachers who responded to the questionnaire agreed to participate in the interview. A guide was designed for the semi-structured interview, consisting of an initial prompt and four open questions, described below.
Initial prompt: Let us start from the premise that you could select the textbooks for your calculus lessons.
Open questions:
• What textbooks would you choose?
• Why would you choose those textbooks?
• What factors do you consider when selecting a calculus textbook?
• What characteristics or features make you select one calculus textbook over another?
Implementation of the instruments and data collection
The study presented in this paper was conducted during the lockdown amidst the COVID-19 global pandemic. Consequently, the implementation of instruments and data collection was executed entirely online.
In the case of the questionnaire, it was implemented in August 2021 using the Microsoft Forms application. The nine participating teachers received an email link to access the questionnaire and had 7 days to complete it. Thanks to the application employed, the teachers' responses to the questionnaire were automatically recorded in a spreadsheet, which was later downloaded for further analysis.
Regarding the interviews, they were conducted in January 2022, following the questionnaire's administration. Each of the nine teachers was individually interviewed through synchronous meetings on the Microsoft Teams platform. Each interview lasted approximately 30 minutes. All interviews were audio-recorded for subsequent analysis.
Data analysis
The empirical data generated through the implementation of the questionnaire and individual interviews were analyzed using the pre-disposition taxonomy previously introduced (Siedel & Stylianides, 2018). Furthermore, investigator triangulation (Rothbauer, 2008) was implemented for data analysis. Specifically, the two authors of this article individually analyzed the empirical data, first identifying the criteria for selecting calculus textbooks used by teachers and subsequently classifying them according to the themes proposed in the pre-disposition taxonomy. Subsequently, the authors met to share their analyses, discuss them, and reach a consensus on any discrepancies in the interpretation and classification of the empirical data. The method of percentage agreement was used to assess the inter-rater reliability of this analysis. Each author independently assessed 29 items, and there was a discrepancy on only one; thus, the percentage of agreement was 96.55%.
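The percentage-agreement figure quoted above follows from a simple ratio (28 matching classifications out of 29 items). Below is a minimal sketch of that computation; the theme labels are illustrative stand-ins, not the study's actual item-level codings.

```python
# Percentage agreement between two raters; the labels below are illustrative.
def percent_agreement(rater_a, rater_b):
    """Share of items assigned the same taxonomy theme by both raters."""
    assert len(rater_a) == len(rater_b)
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return 100.0 * matches / len(rater_a)

# 29 hypothetical codings with a single disagreement, as in the study
a = (["culture"] * 8 + ["student"] * 7 + ["resource"] * 6
     + ["teacher"] * 4 + ["math"] * 3 + ["constraints"])
b = list(a)
b[-1] = "resource"  # the one discrepancy later resolved by consensus

print(round(percent_agreement(a, b), 2))  # -> 96.55
```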
In the case of the questionnaire, the empirical data were directly obtained from the spreadsheet generated by the Microsoft Forms application. For the interviews, a tape-based analysis (Onwuegbuzie, Dickinson, Leech & Zoran, 2009) was employed: we initially familiarized ourselves with the data, by repeatedly listening to the interviews, in order to identify the selection criteria used by the teachers. Subsequently, we transcribed only the parts of the interviews that were relevant to answer the research question.
Codes were assigned for each of the defined themes to classify the teachers' selection criteria according to the pre-disposition taxonomy. These codes were used to label segments of the empirical data that corresponded to those themes. Table 1 shows the themes used, their corresponding codes, and an excerpt of the data illustrating each theme.
Results
After the data analysis process, we are now in a position to answer the research question posed (What selection criteria do university lecturers apply when selecting a calculus textbook for their own courses?) in terms of the pre-disposition taxonomy used as the conceptual framework for analysis. To illustrate the results, we have chosen responses from the teachers' questionnaires that represent the utterances identified in both the questionnaire and the interviews. This methodological decision was made because the interviews with the teachers confirmed their questionnaire responses, which tended to be better articulated than their oral utterances.
The first question of the questionnaire regarding the selection of calculus textbooks asked teachers to write the title and author of the textbooks they used to prepare and teach their calculus lessons. The participating teachers mentioned four textbooks: five teachers mentioned Larson and Edwards (2010a), three teachers mentioned Larson and Edwards (2010b), three teachers referred to Hughes-Hallet and Gleason (2013), and three teachers made reference to Stewart (2012).
When questioned about the selection criteria they apply when choosing a calculus book for their own classes, the calculus teachers report a variety of responses that encompass all the themes of the pre-disposition taxonomy. However, some themes are more prominent than others. In the following, we explain and illustrate each selection criterion identified during the analysis, starting with the most frequently mentioned themes. The frequency of responses for each theme is included to provide the reader with a more precise idea of how recurrent each theme was.

Table 1. The six themes that constitute the pre-disposition taxonomy and illustrative excerpts of the empirical data.
• Student-driven selection: "The way in which topics are presented should be clear, concise, and have an appropriate level for the student."
• Teacher-driven selection: "[I pay attention to] the quantity of examples presented for each topic. The more examples the book provides, the better it is for the teacher to be able to teach the lesson properly."
• Mathematics-driven selection: "Particularly, Larson's book seems to contain the fundamental mathematical topics for an engineering-level course, the order in which the course content appears, and how each topic is presented looks pretty appropriate to me."
• Constraints-driven selection: "It is the newest edition that I have found in Spanish and PDF, in addition to having a solutions manual."
• Resource-driven selection: "I prefer textbooks that contain different situations in problems or various problems on the same topic."
• Culture-driven selection: "I use the mandatory or complementary books indicated in the course syllabus. I consider them to be a tool to achieve homogenization of the courses taught by the group of teachers."
Culture-driven selection
This was the selection criterion that most frequently appeared in the teachers' answers, with eight occurrences. The teachers' responses suggest that it is important for them to select textbooks recommended in the course syllabus or selected by the "curriculum committee," which is a committee of mathematics teachers appointed by their colleagues to decide which textbooks to recommend collectively. The following responses from the teachers illustrate this criterion:
"[I selected that textbook] because it is the main textbook specified in the course syllabus."
"[I select those textbooks] because their content aligns with the curriculum, and the subject committee chose them."
"[I selected those textbooks] because they are suggested in the mandatory and supplementary bibliography of the course syllabus."
"I followed the textbook specified in the course syllabus to avoid controversies regarding the content to be covered in class."
Student-driven selection
Among the teachers' responses, this selection criterion appeared the second most frequently, with a total of seven occurrences. In this case, teachers state that they look for textbooks accessible to students in terms of the language used and the academic level, but also contextualized in a reality close to the students. The following excerpts illustrate this criterion:
"I look for a textbook that is easy for students to understand, in case they want to review the topics individually."
"[The textbook] should have little theory or 'simple' theory with language accessible to the students' level."
"In my opinion, there should be a calculus textbook suitable for the interests and objectives of engineering, focused on the reality of students who have completed secondary education in Chihuahua."
"The way [mathematical] applications are approached, meaning the contextualization given to the topics, should be situated in the students' daily environment."
Resource-driven selection
This selection criterion is another of the most frequently mentioned among the teachers' responses, occurring six times. In this case, teachers refer to specific characteristics of the textbook they seek, mainly focusing on the exercises, problems, and examples it contains. The teachers state that they look for textbooks that provide various exercises with solutions and problems contextualized in different situations. This category of responses is illustrated as follows:
"[I look for a textbook] that has a Spanish edition and a solutions manual."
"It must contain a large number of exercises, with an appropriate level of difficulty."
"I should like the exercises and the explanation it provides."
Teacher-driven selection
In this case, there were four utterances where teachers expressed a selection criterion apparently related to, and compatible with, their way of using resources for teaching or their perspectives on how mathematical topics should be presented. The following excerpts illustrate this criterion:
"I don't usually follow a single textbook throughout the semester; I simply create my course material based on various bibliographies."
"For the textbook selection, I analyze the topic, review how it is presented in different books, and then choose what I believe is most appropriate."
Mathematics-driven selection
In this category, three teachers' utterances were included that expressed selection criteria distinguishing between the mathematical approach to calculus required for engineering students and that required for students in other areas, such as pure mathematics. The excerpt included in Table 1 and the following excerpt illustrate this criterion:
"It should be considered that [the textbook for engineering students] should not be the same Calculus course as for the mathematics degree."
Constraints-driven selection
Only one mathematics teacher referred to a selection criterion determined by accessibility constraints. In this case, the teacher stated that he selected a textbook for his calculus courses because it is the latest version he could find in a digital format and in his native language. He mentioned, "It is the newest edition I have found in Spanish and PDF, and it also comes with a solutions manual."

Figure 1 shows a graphical representation of the distribution of the selection criteria that the university lecturers participating in this study declare to apply when choosing a calculus textbook for their courses, in terms of the pre-disposition taxonomy.
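As a cross-check on the counts reported above (8 + 7 + 6 + 4 + 3 + 1 = 29 coded utterances, matching the 29 items assessed for inter-rater reliability), the distribution underlying Figure 1 can be reproduced from the tallies alone. A minimal sketch:

```python
# Tally of coded utterances per taxonomy theme, as reported in the Results.
from collections import Counter

theme_counts = Counter({
    "Culture-driven selection": 8,
    "Student-driven selection": 7,
    "Resource-driven selection": 6,
    "Teacher-driven selection": 4,
    "Mathematics-driven selection": 3,
    "Constraints-driven selection": 1,
})
total = sum(theme_counts.values())  # 29 coded utterances in total
for theme, n in theme_counts.most_common():
    print(f"{theme:<30} {n:>2}  ({100 * n / total:4.1f}%)")
```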
Concluding discussion
The results of this research paint a complex picture of the criteria and processes teachers utilize in selecting calculus textbooks. It is evident that various factors, from institutional influences to personal preferences, play a role in this decision-making process.
The results show that the teachers' most frequently mentioned selection criterion for calculus textbooks is "culture-driven selection." This can be interpreted as an indication of the importance and influence of the curriculum committee in the textbook selection processes of the participating teachers in this study. This apparent relevance of the curriculum committee in the process of selecting university mathematics textbooks raises new questions, such as: What are the collective textbook selection processes that take place within the curriculum committee? What selection criteria are applied?
It is intriguing to consider how these collective processes within curriculum committees differ from, or even conflict with, the individual preferences of educators. The dynamics of textbook selection within curriculum committees probably involve selection criteria not considered in Siedel and Stylianides' (2018) pre-disposition taxonomy, which would require an expansion of this theoretical tool (Figure 2 shows a graphical representation of this extended pre-disposition taxonomy). In particular, we refer to the commercial pressures these academic entities are subject to, which could give rise to a "commercial-driven selection" criterion. We hypothesize this based on anecdotal evidence (the first author of this article works in the same academic department as the participating teachers in the study) indicating that publishing houses have approached the curriculum committee of the engineering department and individual teachers to offer them discounts and free copies of textbooks. When questioned about the textbooks used for his calculus lessons, one of the participating teachers in this study stated, "The promoter of the book publishing company has provided us with certain textbooks." This statement illustrates the influence of commercial entities on textbook selection processes. We emphasize here that the collective processes of textbook selection within an academic committee are likely different from the individual textbook selection processes experienced by teachers. Exploring the nature of these collective and individual processes and their interrelationships would give us a clearer picture of the functioning and complexity of textbook selection processes in undergraduate mathematics education.
Regarding "Student-driven selection," the research underscores a pivotal facet of teaching: placing student needs at the forefront.Recognizing the preferences and requirements of students is not only commendable but also pivotal for effective instruction.As pedagogical practices evolve and student demographics and needs change, textbook selection processes must remain flexible and adaptable.The results of this study reveal that the second most frequently mentioned selection criterion for calculus textbooks among teachers is "Student-driven selection."This finding is consistent with other descriptive studies on the selection of textbooks by mathematics teachers.For example, in his study on teachers' preferences in mathematics textbook characteristics, Shield (1989) found that the participating teachers emphasized the needs of the students in their statements about which textbook characteristics they prefer.Similarly, the study by Barry et al. (2017) provides evidence that several prospective instructors participating in his study seem to be concerned about the needs of their students when choosing a mathematics textbook.These data suggest that mathematics teachers, when selecting mathematics textbooks, place significant emphasis on considering the needs and preferences of their students.
The results also show that the third most frequently mentioned selection criterion is "Resource-driven selection." Teachers value textbook characteristics such as exercises with solutions and explanations, and problems contextualized in different situations. A teacher also stated that it is crucial for him that the textbook be available in his native language. The "Resource-driven selection" criterion highlights the practical aspects teachers consider. While content is paramount, the format, language, and additional resources, such as exercises and solutions, are equally important. Publishers must recognize and cater to these needs. Additionally, the transition from traditional print textbooks to digital resources becomes pertinent with the rise in technology integration in classrooms and the increasing use of e-learning platforms. The features and interactivity offered by digital textbooks could influence educators' preferences, pushing "Resource-driven selection" to evolve in ways we might not yet fully grasp.
Figure 1. Graphical representation of the distribution of the selection criteria for textbooks that the participating teachers declared using.
Figure 2. Graphical representation of an extended pre-disposition taxonomy including a "commercial-driven selection" criterion.
Over-the-Counter Hearing Aids Challenge the Core Values of Traditional Audiology
Purpose: Regulatory changes in the United States introduced over-the-counter (OTC) hearing aids with the goal of increasing the accessibility and affordability of hearing health care. It is critical to understand the values inherent to hearing health care systems to evaluate their effectiveness in serving people with hearing difficulty. In this study, we evaluated the relative importance of values across service delivery models and the extent to which the introduction of OTC hearing aids represents a values shift relative to traditional audiology. Method: We performed a qualitative content analysis of two document categories: critique documents that motivated the creation of OTC hearing aids and regulatory documents that defined OTC hearing aids. Team members coded portions of text for the values they expressed. In total, 29,235 words were coded across 72 pages in four documents. Rank-order analyses were performed to determine the prioritization of values within each category of documents and subsequently compare values between OTC and traditional audiology documents analyzed in a previous study. Results: Critique and regulatory documents both prioritized values related to reducing barriers to hearing aid access and use, but the lack of a significant correlation in the rank order of values in these documents was evidence of inconsistency between the motivation and implementation of OTC hearing aids. Differences in the rank order of values in the OTC documents compared to traditional audiology were consistent with a values shift. Conclusions: The introduction of OTC as a solution to low hearing aid use represents a values shift, challenging the values of traditional audiology. This research demonstrates a need to establish the values of hearing health care service delivery through a consensus of stakeholders, including individuals from diverse backgrounds underserved by the traditional model.
Hearing loss is the third most common chronic health condition among Americans, affecting approximately two-thirds of adults over 70 years (Bainbridge & Wallhagen, 2014). Despite the high prevalence of hearing loss and its associated consequences, less than 30% of adults who could benefit from amplification devices seek them (Dammeyer et al., 2017). Hearing aid adoption is a complex issue that may be affected by a variety of individual factors, such as self-perceived degree of hearing impairment, technology commitment, socioeconomic status, psychomotor function, and self-reported health status (Jorgensen & Novak, 2020; Nixon et al., 2021; Tahden et al., 2018). Systemic factors also mediate the use of hearing aids among those who could benefit, and the National Institute on Deafness and Other Communication Disorders made it a goal to understand and eliminate these systemic barriers (Donahue et al., 2010). Two high-profile expert analyses of hearing health care in the United States identified broader systemic forces that contribute to untreated hearing loss, published by the President's Council of Advisors on Science and Technology (PCAST, 2015) and the National Academies of Sciences, Engineering, and Medicine (NASEM, 2016). Both reports provided an evidence-based critique of current hearing health care practices in the United States and identified several issues that might contribute to the apparent preference for nonuse of hearing aids. The reports concluded that lack of access to a hearing health care professional and the high cost of devices are the most significant barriers to adopting hearing aids. To overcome issues related to access and affordability, the expert analyses recommended that the Food and Drug Administration (FDA) introduce a new class of hearing aids that can legally be sold direct to consumer. Legislation was subsequently passed in 2017 that introduced a new class of amplification devices to be sold over the counter (OTC; Over-the-Counter Hearing Aid Act of 2017, passed as part of the FDA Reauthorization Act HR 2430, §934, of U.S. Congress, 2017). In 2022, the FDA issued a finalized rule establishing the legalization of OTC hearing aids (U.S. FDA, 2022). Beginning October 17, 2022, adults over the age of 18 years with perceived mild-to-moderate hearing loss could independently purchase hearing aids without consulting a hearing health care professional.
OTC hearing aids were explicitly justified as an effort to alleviate issues identified in the PCAST and NASEM reports (Warren & Grassley, 2017), but they are not a panacea. Multifaceted individual and systemic factors contribute to the low rates of hearing aid use by people who could benefit from them, and OTC hearing aids address a subset of the systemic barriers identified in the reports. OTC hearing aids are an opportunity for the market to produce effective solutions. The development of effective solutions leveraging the OTC model will require further examination of the objectives that motivated the regulatory changes. We can use values as a framework to evaluate and critique novel hearing health care solutions, whether those solutions reflect the OTC model or the traditional model of hearing health care, where traditional audiology refers to hearing health care provided by a hearing health care professional in the United States. Furthermore, values can be used to understand how the introduction of the OTC model represents a challenge to the values of traditional audiology, which, as the dominant model of hearing health care, was the source of many systemic barriers identified by PCAST and NASEM. Although access and affordability are valued in traditional audiology, accuracy, safety, efficacy, and other values drive the provision of hearing health care in that model (Menon et al., 2023). By gaining a deeper understanding of the values that underlie different hearing health care models, we can potentially reduce unintended adverse consequences that major regulatory changes may have on patients and consumers.
This study leverages value-sensitive design (VSD), an approach for the design of technologies that embody specific values (Friedman et al., 2002, 2013). Values are principles or qualities appreciated as important by an individual or system. VSD assumes that by eliciting values of different stakeholders and identifying values in products and services, we can design solutions that better reflect stakeholder values. In recent years, VSD has been applied in health care research to inform both the development of ambulatory therapy technologies for patients in their homes and information and communication technologies for patients with chronic diseases (Dadgar & Joshi, 2018; Mueller & Heger, 2018). Our work is the first to apply this methodology to hearing health care (Menon et al., 2023). The goal of this work is to leverage values to encourage the millions of people who could benefit from amplification to obtain the services they need or to reduce the delay in getting those services (Simpson et al., 2019). This will be accomplished by matching the values of hearing health care products and services to the values of underserved patient populations. First, it is necessary to understand the values inherent to the systems of hearing health care that are designed to serve people with hearing difficulty.
Our recent work developed a comprehensive list of values in traditional audiology through an iterative textual analysis of documents representing the ethics, procedures, and outcome measures used in audiology clinical practice (Menon et al., 2023). In this study, we analyzed documents representing traditional, clinical audiology in order to identify the values instantiated by the current dominant model of hearing health care in the United States. Three categories of documents representative of traditional audiology were selected: questionnaires, clinical practice guidelines, and codes of ethics. These documents were selected because they represent dimensions of intended and enacted values in traditional audiology (Shilton et al., 2013). Questionnaires represent enacted audiology values, because audiologists use them to assess outcomes and determine treatment success. Clinical practice guidelines are a record of what audiologists do, following best practice recommendations where available, and, therefore, document both dimensions of intended and enacted values in clinical practice. Codes of ethics represent intended values that define moral behavior for audiologists, as determined by professional organizations. Although the documents chosen for analysis did not encompass all aspects of audiology clinical practice, it is unlikely that including additional documents would have expanded the list of values because of the methodology used (Curtis et al., 2001). The final list of values in traditional audiology was divided into three categories: instrumental, patient use, and moral values. Instrumental values included accuracy, cost, design, efficiency, evidence-based, objective benefit, and safety. Patient use values included comfort, ease of use, health, satisfaction, self-efficacy, and subjective benefit. Moral values included access to care, autonomy, privacy, equity, and professional duties. The values identified in traditional audiology documents were then rank ordered according to the number of times each value appeared across documents. The current study applied VSD to hearing health care by identifying values in documents representing the introduction of the OTC model and contrasting them with values in traditional audiology.
Our central hypothesis was that the introduction of OTC hearing aids reflects the prioritization of access and affordability over all other values and the deprioritization of core values of traditional audiology that could compete with access and affordability. This was evaluated with two research questions. The first research question was whether the implementation of the OTC model shares the same values as the critique documents that motivated its creation. This was evaluated by comparing the rank order of values in documents representing the critical motivation to create OTC to that in the regulatory documents that defined OTC hearing aids. The similarity of values rankings between these documents was used to determine the extent to which the introduction of OTC hearing aids represented a coherent values framework that could be contrasted with the values of traditional audiology. The second research question was whether the introduction of OTC hearing aids represents a challenge to the values of traditional audiology. This was evaluated by comparing the rank order of values between OTC documents and traditional audiology documents. A positive correlation between the values in OTC and traditional audiology would reject the central hypothesis that the introduction of OTC hearing aids represents a targeted reprioritization of values in hearing health care.
Method
A summative content analysis (Hsieh & Shannon, 2005; Morse & Field, 1995) was performed to identify the values expressed by the text in two critique documents representing the motivation to create an OTC model (the PCAST and NASEM reports) and two regulatory OTC documents representing the implementation of OTC hearing aids in the United States (the final FDA regulations establishing OTC hearing aids and the OTC Hearing Aid Act of 2017). The rank order of values found in documents representing OTC hearing aids was compared to the rank-ordered values found in traditional audiology documents reported in Menon et al. (2023). The complete list of the 33 documents representing traditional audiology can be found in the Appendix of Menon et al. (2023). The PCAST and NASEM reports represent intended values in hearing health care, because they describe the motivation to create a new hearing health care model. The OTC Law and FDA regulations are our best representations of enacted values in hearing health care, since they describe principles of the model that have been translated into concrete laws, regulations, and policies. It is critically important to analyze the enacted values in OTC hearing aids to ensure that the intended values driving the creation of the system manifest in the actual implementation of the model.
Two team members independently read materials and methodically assigned meaning to text data by tagging relevant passages with the appropriate value(s), a process referred to as coding (Locke et al., 2020; Saldaña, 2009). The codebook of values in hearing health care developed in Menon et al. (2023) was used to guide our analysis. The codebook contains a comprehensive list of values that includes a definition, a short description, and examples of how that value appears in the text. Four documents (listed in Table 1) were analyzed using NVivo 12 Pro (QSR International Pty Ltd., 2018), a software program that facilitates the textual analysis of unstructured documents. This tool was used to identify sections of content in which the meaning of the text corresponds to one or more values as defined in the codebook. Following independent coding, the data were merged via consensus. Passages with nontrivial disagreement between the two coders were discussed until a consensus was reached. The use of multiple coders allows for broader possibilities for interpreting and understanding data (Fonteyn et al., 2008), and ensuring intercoder agreement improves consistency and accuracy of results (Fernald & Duclos, 2005). After coding was complete, quantitative data were extracted from the software to compare the relative importance of values within and between documents. There are multiple ways to quantify the results of the coding process. For this study, we calculated the frequency of each value by counting the total number of words coded to that value. A total word count was used instead of a proportional index, because the documents differed greatly in length and scope, making a proportional index weigh too highly the values relevant to the scope of the shorter documents. We operationally defined the priority of a value as the frequency of coding references to that value in the text. By defining coding frequency, and thus importance, as the number of words coded, we are assuming that the authors of the documents dedicated more text to issues that they valued more.
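As a rough illustration of this quantification step, the sketch below tallies the words coded to each value and converts the totals to a rank order. The value tags and passages are hypothetical stand-ins, not the study's actual NVivo output.

```python
from collections import defaultdict

# Hypothetical coded passages: (value tag, text of the coded passage).
# In the study, tags came from NVivo coding; these are invented stand-ins.
coded_passages = [
    ("access to care", "devices may be sold without a professional visit"),
    ("safety", "output limits protect users from overamplification harm"),
    ("safety", "warnings against unsafe use must accompany the device"),
    ("cost", "lower prices for consumers"),
]

# Frequency of a value = total number of words coded to it
# (the operational definition of priority used in the study).
word_counts = defaultdict(int)
for value, passage in coded_passages:
    word_counts[value] += len(passage.split())

# Rank values from most to least coded words (rank 1 = highest priority).
ranking = sorted(word_counts, key=word_counts.get, reverse=True)
for rank, value in enumerate(ranking, start=1):
    print(f"{rank}. {value}: {word_counts[value]} words")
```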
Statistical Analysis
Kendall's rank correlation was used to test the similarity of the rank order of values. The correlation between rankings was determined at the alpha criterion of α = .05. Kendall's rank distance was used to quantify the magnitude of the difference between documents in the importance of values. Rank distance counts pairwise differences between ranked lists. If one value is ranked above another value in a document, that pair of values would add to the ranked distance if they appeared in the opposite order in a comparison document. A normalized rank distance of zero indicates that the two lists are identical, and one indicates that the lists are in reverse order.
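For concreteness, a minimal sketch of both statistics, assuming rankings stored as dictionaries, is shown below. Kendall's τ comes from scipy.stats.kendalltau; the normalized rank distance is simply the fraction of discordant pairs. The example rankings are invented.

```python
from itertools import combinations
from scipy.stats import kendalltau

def normalized_kendall_distance(rank_a, rank_b):
    """Fraction of value pairs ordered oppositely in the two rankings.

    rank_a, rank_b: dicts mapping each value name to its rank
    (1 = highest priority). Returns 0 for identical orderings and
    1 for completely reversed orderings.
    """
    values = list(rank_a)
    discordant = sum(
        1
        for v, w in combinations(values, 2)
        if (rank_a[v] - rank_a[w]) * (rank_b[v] - rank_b[w]) < 0
    )
    n = len(values)
    return discordant / (n * (n - 1) / 2)

# Hypothetical rankings of four values in two document sets.
doc1 = {"access": 1, "cost": 2, "safety": 3, "accuracy": 4}
doc2 = {"access": 2, "cost": 4, "safety": 1, "accuracy": 3}

tau, p = kendalltau(list(doc1.values()), list(doc2.values()))
print(f"tau = {tau:.3f}, p = {p:.3f}")
print(f"normalized K_d = {normalized_kendall_distance(doc1, doc2):.3f}")
```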
Results
The first goal of this study was to evaluate if the documents that represent the implementation of OTC share the same values as the critique documents that motivated its creation. Figure 1 shows the frequency of values in each of the analyzed OTC documents. Table 2 shows the rankings of the number of coding references for each value in critique documents and OTC regulatory documents. Kendall's rank correlation revealed no statistically significant correlation between the ranks of referenced values in regulatory and critique documents (τ = .294, p = .096). The normalized Kendall rank distance (K_d) was calculated to evaluate the number of misaligned pairs between these two ranked lists. The result was K_d = .353, indicating that approximately 35% of code pairs were in the reverse order in the critique documents compared to the OTC regulatory documents. The second goal of this study was to evaluate whether documents representing the OTC model share the same values as documents that represent the traditional audiology model. The results of the first analysis showed that the critique and regulatory documents prioritized values differently, so each set of documents was compared to traditional audiology separately. Figure 2 shows the number of words coded to each value in OTC and traditional audiology documents, and Table 3 shows the ranked order of values found in OTC regulation documents compared to the ranked order of values found in documents that represent traditional audiology (data from Menon et al., 2023). Calculation of Kendall's τ revealed no statistically significant correlation between the ranks of referenced values in OTC regulatory and traditional audiology documents (τ = .072, p = .709). The normalized K_d was also calculated to evaluate the number of misaligned pairs between these two ranked lists. We found that K_d = .464, indicating that approximately 46% of code pairs were in the reverse order in the OTC regulatory documents compared to traditional audiology documents. The PCAST and NASEM reports were evaluated to determine whether the critique of audiology that motivated the development of OTC shared the same values as traditional hearing health care. Figure 3 shows the number of words coded to each value in critique documents and traditional audiology documents, and Table 4 shows the ranked order of values found in critique documents compared to the ranked order of values found in documents that represent traditional audiology (data from Menon et al., 2023). Calculation of Kendall's τ revealed no statistically significant correlation between the ranks of referenced values in audiology and critique documents (τ = −.137, p = .454). The normalized K_d was also calculated to evaluate the number of misaligned pairs between these two ranked lists. We found that K_d = .569, indicating that approximately 57% of code pairs were in the reverse order in the critique documents compared to traditional audiology documents.
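To make the normalization concrete: with 18 ranked values there are 18 × 17 / 2 = 153 value pairs, so K_d = .353 corresponds to roughly 0.353 × 153 ≈ 54 reversed pairs, K_d = .464 to about 71, and K_d = .569 to about 87.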
Comparison of Values in Critique Documents and OTC Regulations
We applied the codebook developed in Menon et al. (2023) to systematically assess the values in source material relating to the justification and implementation of OTC hearing aids. Our central hypothesis was that the development of an OTC regulatory category represents a values shift relative to the values of traditional audiology, prioritizing access and affordability over all other values in hearing health care. The first objective of this study was to compare the frequency of values referenced in critique documents (the PCAST and NASEM reports) and regulatory OTC documents (the OTC Law and final FDA regulations) to determine whether the implementation of the OTC model shared the same values as the critique documents that motivated its creation. Although there was no significant correlation between the ranks of referenced values, Kendall's rank distance test showed that approximately two-thirds of values were in the same rank order in both documents (35% misaligned), indicating moderate agreement between the two lists of rankings. The motivation for the critique of audiology reported by PCAST and NASEM was to evaluate cost and access to care as barriers to hearing aid use (Donahue et al., 2010). The OTC regulations prioritized these values, although not as highly as the critique documents. In the OTC regulatory documents, access to care is ranked 4/18 and cost is ranked 7/18, compared to 1/18 and 3/18, respectively, in the critique documents. This is evidence that OTC regulation generally addressed the values that motivated its creation.
The top three most frequently coded values in the critique documents were those that supported the creation of OTC hearing aids: access to care, autonomy, and cost. In the critique documents, the introduction of OTC hearing aids was intended to improve access and affordability and to support autonomy by making it possible for people to self-administer hearing health care. This contrasts with the three values most coded in the OTC regulations: ease of use, accuracy, and safety. Ease of use arguably supports autonomy consistent with the values of the critique documents, reflecting the need for devices to be used independently without a hearing health care professional. However, accuracy and safety reflect concerns that are not aligned with the critique documents' focus on reducing barriers to hearing aid use. In the OTC regulations, the high ranking of accuracy was due to the amount of text dedicated to electroacoustic requirements and technical specifications that must be met by a device prior to its registration as an OTC hearing aid. Examples from the final FDA regulations include limits on maximum output (Output Sound Pressure Level 90, OSPL90), the full-on gain value, output distortion control, and self-generated noise levels. The high ranking of safety likely reflects the FDA's core responsibility of establishing the safety of a medical device. According to the FDA, "there is reasonable assurance that a device is safe when it can be determined, based upon valid scientific evidence, that the probable benefits to health from use of the device for its intended uses and conditions of use, when accompanied by adequate directions and warnings against unsafe use, outweigh any probable risks" (21 CFR 860.7). Ensuring safety and accuracy in the implementation of OTC hearing aids is an obvious benefit to consumers but will add to the cost of the devices. Safety and accuracy are values that do not directly support the goal of reducing barriers to hearing aid use, and prioritizing safety and accuracy above access and affordability reflects differential value prioritization between critique and regulatory documents.
The prioritization of equity and evidence-based practice was elevated in critique documents relative to regulatory documents. In the context of hearing health care, equity is defined as "fairness in treatment or outcomes" (Menon et al., 2023). Thus, the high prioritization of equity in the critique documents reflects the goal of ensuring that individuals receive effective hearing health care regardless of their background or circumstances. Equity was ranked 5/18 in critique documents, contrasted with 10/18 in the regulatory documents. The low priority of equity in OTC regulatory documents, on the other hand, represents a potentially important values conflict between the intentions of the authors of the critique and regulatory documents. Devices and services consistent with the values of the regulatory documents could reinforce existing inequities in the hearing health care system.
Evidence-based practice was ranked 4/18 in critique documents, which promoted expanding hearing health care research and the translation of this research to patient care. In contrast, OTC regulations ranked evidence-based practice 12/18. Large sections of the NASEM report (e.g., Chapter 3: Health Care Services: Improving Access and Quality) were devoted to exploring ways in which evidence-based practice can improve quality in the provision of care by hearing health care professionals, stating "evidence-based clinical practice guidelines and standards of practice can be used to educate health professionals, inform practice patterns, and facilitate widespread adherence to best practices." The PCAST report listed the lack of adherence to evidence-based practice by hearing health care professionals as a reason for eliminating the need to obtain hearing aids through a provider, stating "studies of dispensers have found that average dispensing rates of various hearing-aid features do not follow evidence-based practice (EBP) guidelines." Both documents criticized the poor adherence to evidence-based practice by hearing health care professionals and used this argument to support the introduction of an OTC model. Deprioritizing evidence-based practice risks suboptimal hearing health care outcomes but may support the broader goal of getting a large quantity of affordable products on the market. Disagreement between critique and regulatory documents reflects difficulty balancing the cost and importance of evidence-based practice.
A Challenge to the Values of Traditional Audiology
The second objective of this study was to determine if the introduction of OTC hearing aids represents a challenge to the values of traditional audiology. Overall, there was agreement between the values found in documents representing traditional audiology and the values found in documents that represent OTC hearing aids. Seventeen of the 18 values identified in traditional audiology documents were also identified in OTC documents, suggesting a fundamental similarity in values between the two models.
After the initial analysis revealed that the rank order of values in OTC critique and regulatory documents differed, rankings for each set of documents were compared separately to traditional audiology documents. Calculation of Kendall's τ revealed no statistically significant correlation between the rank order of values in traditional audiology documents and either critique or regulatory documents, consistent with a difference in rank order but not a reversal (negative correlation). Kendall's rank distance test showed that 50% of all code pairings were misaligned between rank order lists of values found in traditional audiology and OTC regulatory documents, consistent with a random reshuffling of the lists, and 65% of code pairings were misaligned between traditional audiology and critique documents, indicating greater disagreement than would be expected from random chance. Although they shared most of the same values, results were consistent with a reprioritization of values rankings for traditional audiology compared to both sets of OTC documents.
The goal of creating the OTC hearing aid category was to improve the accessibility and affordability of amplification devices. The values of cost and access to care were central to critique documents, which recommended the introduction of OTC hearing aids to address barriers for individuals who may have previously been hesitant or unable to seek care. In contrast, traditional audiology ranked access (15/18) and cost (17/18) among the least important values. By shifting access and cost from near the bottom of the list in traditional audiology to the top of the list, the critique documents represent a direct challenge to the values of traditional audiology.
The most striking difference between the values of traditional audiology and OTC was in values related to patient benefit. Subjective benefit, the benefit from treatment that is perceived by the patient, ranked 1/18 in traditional audiology documents. Objective benefit, the benefit from treatment measured as improved performance on a task, was ranked toward the top at 6/18 in traditional audiology documents. The critique documents clearly did not prioritize these values, ranking subjective benefit 11/18 and objective benefit 16/18. In OTC regulatory documents, subjective benefit was in the bottom three values (14/18) and objective benefit was the only value with zero coding references (18/18). The stated intent of the FDA in creating the OTC regulations was to ensure safety and efficacy (U.S. FDA, 2021). When the efficacy of an intervention is evaluated in traditional audiology, the metrics used typically measure subjective or objective benefit, or both, as reflected in the high ranking of these values in audiology clinical practice guidelines and questionnaires (Menon et al., 2023). Subjective metrics typically assess changes in perceived activity limitation, participation restriction, or quality of life, and objective metrics typically assess speech perception in noise. One major implication of the shift away from patient benefit is that the efficacy of OTC hearing aids will not necessarily be evaluated based on the same criteria used to evaluate audiology interventions. OTC hearing aids may instead be evaluated by the values prioritized by either the critique documents or both critique and regulatory documents, such as cost, access to care, and ease of use. One can argue that barriers to access and affordability have been effectively reduced if there are readily available devices that people can afford, even if those devices provide little subjective or objective benefit. The shift away from prioritizing subjective and objective benefit raises concerns about the impact of OTC hearing aids on the overall quality of hearing health care.
OTC hearing aids are only relevant to adults with perceived mild-to-moderate hearing difficulty. Adults between the ages of 50 and 69 years who have mild hearing difficulty represent the largest population of American adults with unmet hearing health care needs (Humes, 2023). Thus, an OTC model targeting this specific patient population could provide an alternative hearing health care choice for millions of Americans with mild-to-moderate hearing difficulty. However, individuals with severe hearing loss, children, and those who cannot reasonably expect to self-fit hearing aids will not benefit from any improvement in access and affordability from the OTC model. Although the reprioritization of values in the OTC model may meet the needs of some individuals, addressing issues of access and affordability for all individuals will, according to the audiology critique documents, require reprioritizing these values within traditional audiology. Access and affordability (cost) are values in traditional audiology and, although they are low in the values prioritization, there are ongoing efforts targeting these values. Access and affordability are being addressed by other changes in audiology that are occurring parallel to the introduction of OTC hearing aids, such as the expanded use and availability of telehealth solutions (Bush & Sprang, 2019; Muñoz et al., 2021) and proposed changes to Medicare coverage (Lin et al., 2022). The success of these changes may depend on the priorities of the methods used to evaluate them: whether metrics of access and affordability are used rather than the metrics of individual benefit traditionally used in audiology. Outside of the OTC model and the large, albeit limited, population it targets, the issues with hearing health care service delivery identified by the critique documents can be addressed by targeting the values of access and affordability.
Limitations
A limitation of the current study is the operational definition of values prioritization from the frequency of coded text references. We operationalized the rank order priority of values by counting the number of coded references to each value in the texts and verified that other methods of ordering coding references did not change the rank order. The limitation of this approach is that it is possible that some values are a high priority and yet not mentioned frequently in the documents. The act of ranking values in a definitive order is inherently challenging. Different methodologies, such as conducting surveys among audiologists or other stakeholders, might produce distinct rank orders based on their perspectives and priorities.
The purpose and motivation of the selected documents can significantly influence the resulting rank order. For example, FDA policy dictates many issues that must be addressed in establishing a medical device, so the authors of that document were not free to elaborate on issues they considered most important, unlike the authors of the critique documents. Nevertheless, FDA policy reflects a values system that directly influenced the regulatory definition of OTC hearing aids. The values prioritization in the FDA document reported in this study was consistent with the fact that the document was the product of a regulatory process with multiple constraints rather than the opinion of its authors. In reference to the authors of the critique documents, the PCAST group was largely independent of the field of hearing health care, while the NASEM group incorporated diverse stakeholders representing audiology, academia, industry, individuals who are deaf or hard of hearing, and others. Their document content, topic selection, and language reflect their values regarding hearing health care. Nevertheless, the validity of this work depends on the selection of specific documents and the rank order of values in these documents as representative of the prevailing values of the associated model.
Future Directions
OTC hearing aids represent an opportunity to expand the market for hearing aids for adults with perceived mild-to-moderate hearing loss. Previous research shows that consumers tend to prefer engaging with services and products that align with their values (Vinson et al., 1977). The results of this study indicate that the OTC hearing aid model challenges the values of the traditional audiology model, which may provide an alternative mode of treatment that respects and aligns with a wider range of personal values and preferences. For example, OTC hearing aids may be a desirable hearing health care solution for consumers who highly value ease of use, safety, and accuracy. The critique and regulatory documents analyzed here were foundational to the OTC model but do not represent the present and future implementation of OTC hearing aids. Moving forward, traditional audiology and OTC hearing aids may work synergistically to reduce barriers to hearing health care. Audiologists may incorporate OTC hearing aids into their clinical practice by offering them as a cost-effective and accessible point of entry for patients with mild-to-moderate hearing impairment. As OTC and traditional audiology continue to advance in tandem and intersect, new values prioritizations may emerge to better serve a diverse range of individuals with different hearing abilities. Future work should focus on patient-centered outcomes by eliciting values from all stakeholders, including those who could benefit from amplification but have not yet decided to seek care. Mismatches between the values of patients and the systemic values of hearing health care could explain the apparent nonuse of resources among those who could benefit. Leveraging VSD methodology will allow us to better understand the values of those who experience hearing difficulty and to develop hearing health care solutions that reflect the values of specific populations, including groups that are underserved by traditional audiology. OTC hearing aid regulations facilitate the use of VSD to bring products and services to market that are consistent with the values of underserved patients.
Conclusions
This study tested the hypothesis that the introduction of the OTC hearing aid model represents a challenge to the values of traditional audiology. We found that the documents representing the introduction of the OTC model highly prioritized values consistent with reducing barriers to access and affordability of self-administered hearing health care, in contrast to the low priority of these values in traditional audiology. Elevating these values was consistent with the goal of reducing barriers to the use of hearing aids. Values highly prioritized by traditional audiology, subjective and objective benefit, were downgraded to a low priority in OTC documents. Although the reprioritization of values may benefit some people who are underserved by the current model, it is important to consider the critique of traditional audiology more broadly and explore other solutions to address the remaining barriers to access and affordability. Further research is needed to develop new solutions that align with the values of underserved patients, parallel to or facilitated by the OTC model.
Figure 1. Frequency of each coding reference representing values in critique documents (President's Council of Advisors on Science and Technology, 2015, and National Academies of Sciences, Engineering, and Medicine, 2016) and regulatory OTC documents (OTC Law 2017 and Food and Drug Administration OTC Regulations, 2022), sorted by the frequency of values coded in the critique documents. OTC = over the counter.
Figure 2. Frequency of each coding reference representing values in traditional audiology documents (clinical practice guidelines, codes of ethics, and questionnaire) and regulatory OTC documents (OTC Law 2017 and Food and Drug Administration OTC Regulations, 2022). OTC = over the counter.
Figure 3. Frequency of each coding reference representing values in traditional audiology documents (clinical practice guidelines, codes of ethics, and questionnaire) and critique documents (President's Council of Advisors on Science and Technology and National Academies of Sciences, Engineering, and Medicine reports).
Table 1. Source material for qualitative content analysis.
Note. OTC = over the counter; FDA = Food and Drug Administration; NASEM = National Academies of Sciences, Engineering, and Medicine; PCAST = President's Council of Advisors on Science and Technology.
Table 2. Rankings of the number of coding references for each value in critique documents and OTC regulatory documents.
Note. Bolded values indicate total rankings across document groups. PCAST = President's Council of Advisors on Science and Technology; NASEM = National Academies of Sciences, Engineering, and Medicine; OTC = over the counter; FDA = Food and Drug Administration.
Table 3. Rankings of the number of coding references for each value in traditional audiology and over-the-counter (OTC) regulatory documents.
Note. Bolded values indicate total rankings across document groups.
Table 4. Rankings of the number of coding references for each value in traditional audiology documents and critique documents.
Note. Bolded values indicate total rankings across document groups.
|
2024-02-09T06:17:36.219Z
|
2024-02-01T00:00:00.000
|
{
"year": 2024,
"sha1": "dbbdf7c4e9146afd3f8142109a8fab8f1a603a2f",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1044/2023_jslhr-23-00306",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "3dee6b6701a7ef80f23358e11cae5c432197aaa1",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
229513539
|
pes2o/s2orc
|
v3-fos-license
|
Optimization of Aqueous and Alcoholic Extraction of Phenolic and Antioxidant Compounds From Caper (Capparis spinosa L.) Roots Assisted by Ultrasound Waves
Background: Herbal plants are rich in effective compounds such as phenolics and antioxidants. Various methods have been developed to extract these compounds, including Soxhlet, maceration, microwave, and ultrasound. The extraction method affects the quantity and quality of the extracted materials. Objectives: The current study aimed to investigate the effect of ultrasound on the extraction of phenolic and antioxidant compounds from Caper roots. Methods: Response surface methodology (RSM) and a Box-Behnken design were used to optimize the two extraction parameters, extraction time (10, 25, and 40 min) and ultrasound power (40%, 70%, and 100%), with aqueous and alcoholic solvents. Results: Based on the results, ultrasound power was more effective than the extraction time. A direct association was observed between ultrasound power and extraction time and the total extraction. The optimum aqueous and alcoholic extraction conditions for phenolic and antioxidant compounds were as follows: an extraction time of 36 min and an ultrasound power of 91%. The total phenolic content was 14.96 mg/g with the aqueous solvent and 17.24 mg/g with the alcoholic solvent, and the IC50 was 52.17 µg/mg with the aqueous solvent and 40.20 µg/mg with the alcoholic solvent. Conclusions: Overall, alcoholic extracts had more phenolic and antioxidant compounds than aqueous extracts.
Background
Traditional medicine heavily relies on herbal extracts and active ingredients (1). Phenolic compounds have valuable properties such as anti-allergic, anti-inflammatory, antimicrobial, and antioxidant activities, which have resulted in their extensive use in the pharmaceutical, nutritional, cosmetic, and agricultural fields (2). One of the beneficial effects of phenolic compounds is rooted in their antioxidant properties (3). Phenolic compounds have antioxidant properties due to the free hydroxyl groups on their aromatic rings, and their antioxidant activity depends on the number of hydroxyl groups (4). Antioxidants can inhibit and control the oxidation process by eliminating free radicals. Besides, antioxidants act as reducing, chelating, or singlet oxygen-scavenging agents. Antioxidants are classified into two major groups, synthetic and natural. In general, synthetic antioxidants are phenolic compounds that contain varying amounts of alkyl substituents, whereas natural antioxidants can be phenolic compounds such as quinones and lactones (5). Phenolic compounds are divided into simple phenols, phenolic acids, coumarins, flavonoids, stilbenes, condensed tannins (procyanidins), lignans, and lignins (6).
Various factors contribute to the extraction of phenolic compounds. The extraction process can be performed either through traditional techniques (e.g., Soxhlet and maceration) or new technologies (e.g., microwave or ultrasound) (7). Ultrasound extraction is one of the most important methods for extracting valuable compounds from plant sources and can be implemented at all scales. Two common systems for using ultrasound are probe and bath systems. Ultrasonic baths not only greatly reduce the size of the particles, but also increase their solubility (8). The ultrasonic bath is an efficient method for extraction from dried and powdered samples at industrial and large scales (9). In the ultrasonic probe system, the plant sample is in direct and continuous contact with the probe (ultrasonic waves), so it has a greater impact on plant tissues, but it has low repeatability, and its application is limited to low-volume samples. Besides, sample contamination and foam production are more common than with the ultrasonic bath. An ultrasonic bath can be applied to a wide spectrum of samples simultaneously, and its repeatability is high. It is therefore preferred over the ultrasonic probe system (10).
Caper is a perennial plant that bears rounded, fleshy leaves and large white to pinkish-white flowers (11). Caper has about 250 species, most of which are wild and can grow in arid and semi-arid environments with adaptability to drought conditions (12, 13). Phytochemical studies reported that this plant contains several bioactive factors, including saccharides, glycosides, flavonoids, alkaloids, indoles, phenolic acids, terpenoids, volatile oils, fatty acids, vitamin C, vitamin E, and steroids (14, 15). The root of the Caper contains pectin, saponin, a very small amount of essential oil, resinous substances, aminoglycosides, and capparirutine (16, 17). Also, its root skin contains stachydrine and a volatile substance with a garlic aroma (18). Caper has anti-diabetic and blood lipid-lowering properties (19-21). Caper has been widely used in traditional medicine due to its diuretic, antihypertensive, and vasodilator effects (22).
Besides, it has been reported, based on biological and chemical tests, that aqueous and alcoholic extracts from the roots of this plant have antioxidant activity (23). Najafi et al. (24) investigated the chemical constituents of Caper fruit essential oil in the Sistan region and optimized the extraction conditions of antioxidant compounds of the fruit extract using the microwave method. The analysis of the essential oil extracted by water distillation using gas chromatography and gas chromatography-mass spectrometry (GC/MS) revealed 33 compounds, of which the main ingredients were thymol (24.1%) and isothiocyanate (29.2%) (24). A study showed that the ethanolic extract of the Caper fruit has the highest antimicrobial activity against Staphylococcus aureus. Analysis of the aqueous and ethanolic extracts revealed that the leaves and fruit of the Caper contain the highest levels of antioxidant properties (23).
Objectives
Due to the increasing tendency towards using natural compounds of medicinal plants in the treatment of diseases, the current study was designed to investigate the optimum conditions for aqueous and alcoholic extraction of phenolic and antioxidant compounds obtained from Caper root.
Identification and Preparation of the Plant
The Caper plant was collected from Anbarabad farms in Kerman Province in March 2018 and was identified in the Herbarium and Systematic Laboratory of the Islamic Azad University of Jiroft. The roots of the plant were dried in the shade at room temperature. They were then powdered by a laboratory mill and passed through a 40-mesh sieve.
Ultrasound Extraction
Initially, 50 g of the powdered root sample was mixed with 200 mL of 70% ethanol solvent (for the alcoholic extract) and 50 g of the powdered sample with 200 mL of distilled water (for the aqueous extract). Solvent and sample containers were placed in an ultrasonic bath (JK-DUC-8200LHC, China) with a temperature control system and a circulation system at a constant frequency of 35 kHz. Ultrasound treatment levels consisted of three extraction-time levels (10, 25, and 40 min) and three levels of sound intensity (40%, 70%, and 100%) (16). After the ultrasound treatments, the extracts were centrifuged at 7800 rpm for 30 minutes, and then the supernatant was separated and filtered (MF-Millipore
Membrane Filter, 0.45 µm pore size). A vacuum rotary evaporator at 40°C and 200 rpm was used to separate the alcoholic solvent. To remove the residual alcoholic solvent and the aqueous solvent, the extracts were spread on a plate and placed in a vacuum oven at 40°C. All extracts were stored in a freezer at -18°C until the tests (25).
Estimation of Total Phenolic Contents (TPC)
The total phenolic contents of each extract were determined by the Folin-Ciocalteu micro-method (26). Briefly, 100 µL of extract solution was mixed with 1 mL of distilled water and 500 µL of Folin-Ciocalteu reagent, followed by the addition of 1.5 mL of Na2CO3 solution (20%) after 1 min. Subsequently, the mixture was stored in a dark place at room temperature for 120 min, and its absorbance was measured at 760 nm. Gallic acid was used as a standard for the calibration curve. The phenolic content was expressed as gallic acid equivalents using linear equations based on the calibration curves; Equations 1 and 2 give the gallic acid calibrations for the aqueous and ethanol solvents, respectively.
Where X is the absorbance and Y is the concentration as gallic acid equivalents (mg/g).
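Since the fitted calibration equations were not preserved above, the short sketch below only illustrates the implied linear form Y = aX + b; the slope and intercept values are placeholders, not the study's coefficients.

```python
# Hypothetical gallic acid calibration: Y = a * X + b, where X is
# the absorbance at 760 nm and Y the concentration in mg/g gallic
# acid equivalents. These coefficients are placeholders, NOT the
# study's fitted values.
A_AQUEOUS = (10.5, 0.2)   # (slope a, intercept b) -- assumed
A_ETHANOL = (9.8, 0.1)    # (slope a, intercept b) -- assumed

def gallic_acid_equivalents(absorbance, calibration):
    a, b = calibration
    return a * absorbance + b

print(gallic_acid_equivalents(0.85, A_AQUEOUS))  # mg/g GAE, aqueous solvent
```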
Determination of Anti-Radical Activity
The anti-radical activity was determined by the DPPH test using the 2,2-diphenyl-1-picrylhydrazyl reagent (27). Briefly, 5 mL of DPPH solution (0.004%) was added to 50 µL of different concentrations of the extract prepared with ethanol and aqueous solvents. After mixing at room temperature, the mixtures were stored in a dark place for 30 min. The absorbance of the sample and control was read at 517 nm with a spectrophotometer (UV/Visible Spectrophotometer AQUARIUS, CE7500, UK). The anti-radical activity was calculated using Equation 3:
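The expression itself is not reproduced above; the standard DPPH inhibition formula consistent with the described sample and control absorbances is presumably:

$$\text{Inhibition}\,(\%) = \frac{A_{\text{control}} - A_{\text{sample}}}{A_{\text{control}}} \times 100 \qquad (3)$$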
Statistical Analysis
Response surface methodology (RSM) and Design Expert software (version 11) were used to investigate the effect of the studied variables (extraction time and sound intensity) on the amount of phenolic compounds and the radical scavenging power of aqueous and alcoholic extracts. Based on the response surface design, the Box-Behnken model was selected to investigate the two variables at three levels, and 13 runs were performed to evaluate the extraction process and to determine the optimal conditions (Table 1). As the data were not normally distributed, the Mann-Whitney U test was used to compare aqueous and alcoholic extracts of the plant root. Data were analyzed using SPSS version 16. Statistical significance was considered when P value < 0.05.
Selecting the Best Model
The most appropriate model was selected using the adjusted R-squared, if it was greater than 0.80, and a nonsignificant lack-of-fit test. The quadratic model of the response surface method (RSM) was used for the statistical analysis of the tests. After selecting the best model, to determine the overall equation according to the ANOVA analysis, parameters with a nonsignificant F test (P > 5%) were removed from the model. Then, the general equation was obtained using the given coefficients for each parameter. The model defined for each response is shown in Equation 4:

$$Y = b_0 + \sum_i b_i x_i + \sum_i b_{ii} x_i^2 + \sum_{i<j} b_{ij} x_i x_j \qquad (4)$$

where Y is the predicted response, $b_0$ the constant coefficient, $b_i$ the linear effects, $b_{ii}$ the quadratic effects, $b_{ij}$ the interaction effects, and $x_i$ and $x_j$ the coded independent variables.
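A minimal least-squares sketch of fitting Equation 4 for the two coded factors (x1 = extraction time, x2 = sound intensity) is given below; the design points and response values are invented for illustration and are not the study's data.

```python
import numpy as np

# Coded factor levels (x1 = extraction time, x2 = sound intensity)
# and a made-up response column Y (e.g., total phenolics, mg/g).
x1 = np.array([-1, -1,  1, 1, -1, 1,  0, 0, 0, 0])
x2 = np.array([-1,  1, -1, 1,  0, 0, -1, 1, 0, 0])
Y  = np.array([8.1, 10.2, 9.0, 12.5, 8.9, 10.8, 8.5, 11.9, 10.1, 10.0])

# Design matrix for Y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])

# Ordinary least squares fit of the quadratic response surface.
coeffs, *_ = np.linalg.lstsq(X, Y, rcond=None)
for name, c in zip(["b0", "b1", "b2", "b11", "b22", "b12"], coeffs):
    print(f"{name} = {c:+.3f}")
```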
Effect of Extraction Time and Sound Intensity on Total Phenolic Compounds of Aqueous and Alcoholic Root Extracts
The simultaneous effect of extraction time and sound intensity on the total phenolic compounds of the root is shown in Figure 1. The amounts of total phenolic compounds extracted from Caper root with ethanolic and aqueous solvents were 10.93 mg/g and 9.35 mg/g, respectively. The mean total phenolic compounds of alcoholic and aqueous root extracts of Caper did not show any significant difference (P > 0.05). The high R² coefficient between the actual and predicted values in this study indicates a very good correlation between the results obtained from the experimental method and the values of the total phenolic compounds predicted by statistical methods (Tables 2 and 3).
According to the parameters which were significant in the alcoholic and aqueous extraction process of total phenolic compounds from the root of the Caper plant (Tables 2 and 3), the general equations can be reported as follows. Equations 5 and 6 present the general formulas for the extraction of total phenolic compounds from the plant root with ethanol and aqueous solvents, respectively, where Y is the total phenolic compounds (mg/g), X1 the extraction time (min), and X2 the sound intensity (%). According to Equations 5 and 6, the sound intensity (X2) was the most effective factor for the extraction of total phenolic compounds from the roots of Caper. At optimum conditions (36 min and 91% sound intensity), the total phenolic compounds extracted from Caper by alcoholic and aqueous solvents were 17.24 mg/g and 14.96 mg/g, respectively, indicating that the alcoholic solvent resulted in higher extraction of total phenolic compounds than the aqueous solvent (Table 4).
The Effect of Extraction Time and Sound Intensity on IC50 of Aqueous and Alcoholic Extracts of Root
The effect of extraction time and sound intensity on the IC50 of aqueous and alcoholic extracts of the root is shown in Figure 2. As a measure of a substance's potency to inhibit a particular function, the IC50 indicates the concentration that can inhibit up to 50% of free radicals. Therefore, the extract with the highest antioxidant activity has the lowest IC50. The mean IC50s of alcoholic and aqueous extracts were 61.70 µg/mg and 71.11 µg/mg, respectively, which showed a higher antioxidant activity for the alcoholic solvent than the aqueous solvent. The IC50 values for alcoholic and aqueous extracts predicted by the model showed statistically significant correlations with the experimental results (Tables 5 and 6). The results revealed an inverse association between extraction time and sound intensity and IC50. According to the parameters which were significant in the variance analysis of the alcoholic and aqueous extraction process of antioxidant compounds from the plant root (Tables 5 and 6), the general equations can be reported as follows. Equations 7 and 8 are the general formulas for the determination of the IC50 of ethanolic and aqueous extracts, respectively, where Y is the IC50 (µg/mg), X1 the extraction time (min), and X2 the sound intensity (%). According to Equations 7 and 8, sound intensity (X2) was the most effective factor for the extraction of antioxidant compounds from the roots of Caper. At optimum conditions (36 min extraction time and 91% sound intensity), the IC50s of ethanolic and aqueous extracts were 40.20 and 52.17 µg/mg, respectively.
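As a sketch of how an IC50 can be read off a dose-response series, the code below linearly interpolates the concentration at 50% inhibition; the inhibition values are invented, not measurements from this study.

```python
import numpy as np

# Hypothetical inhibition (%) measured at increasing extract
# concentrations (ug/mL); real values come from the DPPH assay.
conc = np.array([10, 25, 50, 100, 200])
inhibition = np.array([12.0, 28.0, 47.0, 68.0, 85.0])

# IC50: concentration giving 50% inhibition, by linear interpolation
# on the (inhibition, concentration) curve. Requires inhibition to be
# monotonically increasing, as here.
ic50 = np.interp(50.0, inhibition, conc)
print(f"IC50 ~ {ic50:.1f} ug/mL")
```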
Discussion
This study demonstrated a direct association between extraction time and sound intensity and the amount of total phenolic compounds extracted from the root of Caper. Gu et al. (28), who used the ultrasonic technique to extract catechins and caffeine from tea, indicated a direct association between these compounds and the extraction time. The extraction time factor increases the mass transfer rate. Also, sound intensity, due to the high energy content of the waves, can cause shear forces that break and disintegrate cell walls, increasing the release of plant contents to the extraction medium and improving mass transfer (29). Dehghan Tanha et al. (16) have optimized the extraction of [...]. This difference in the amount of total phenolic compounds extracted can be due to differences in the environmental conditions of plant growth and in experimental conditions (30). Arrar et al. (31) reported that the total phenolic compounds extracted from Caper root with methanol solvent and distilled water, by the maceration method, were 9.2 mg/g and 15.5 mg/g, respectively. Compared to the optimum value reported in the present study, although the total phenolic compounds of the aqueous extract were higher, the amount of total phenolic compounds in the methanol solvent was lower than that in the ethanol solvent of the present study. This difference in the rate of alcohol extraction of phenolic compounds may be due to the effect of different solvents in the extraction process and the efficiency of ultrasonic extraction of phenolic compounds over the traditional maceration method. In optimum conditions, the efficacy of ethanolic extracts in removing DPPH radicals was higher than that of aqueous extracts, which was directly related to the amount of total phenolic compounds in this extract: the higher the phenolic content, the higher the antioxidant activity and the lower the IC50. Rezzan et al. (30) investigated phenolic compounds, antioxidant activity, and mineral analysis of Capparis spinosa using ultrasonic bath extraction. They reported that the mean plant compounds had an inhibitory concentration of 0.32 mg/mL, which showed less antioxidant activity than that of the present study (30). Mahboubi et al. (23) reported IC50 values for ethanolic and aqueous extracts of 88 µg/mL and above 2000 µg/mL, respectively; compared to the optimal level of the present study, these IC50s were higher, but, consistent with the present study, the IC50 of the alcoholic extracts was lower than that of the aqueous extracts.
This difference in IC50 content can be attributed to the effective role of ultrasound in extracting antioxidant compounds from the root of the plant compared to the traditional maceration method (23). In another study, Arrar et al. (31) reported that the IC50 contents of methanolic and aqueous extracts of Caper root by the maceration method were about 1.8 mg/g and 1.6 mg/g, respectively; compared with the results of the present study, the IC50 of the alcoholic and aqueous extracts was higher. This difference in IC50 may be due to the efficiency of ultrasound in the extraction of antioxidant compounds over the traditional maceration method (31).
Conclusions
Ultrasound extraction is one of the fastest and most efficient methods currently available. This study demonstrated a direct association between extraction time and sound intensity and the extraction of phenolic and antioxidant compounds from the roots of the Caper plant using alcoholic and aqueous solvents. Besides, according to the findings, sound intensity was the most effective factor in extraction. The optimum conditions for the extraction of total phenolic and antioxidant compounds were 36 minutes and a sound intensity of 91%. Generally, alcoholic extracts had more phenolic and antioxidant compounds than aqueous extracts.
|
2020-12-03T09:05:47.211Z
|
2020-10-31T00:00:00.000
|
{
"year": 2020,
"sha1": "969edf70d6e3c77151edca87e4165df41f959a11",
"oa_license": "CCBYNC",
"oa_url": "https://doi.org/10.5812/zjrms.100747",
"oa_status": "GOLD",
"pdf_src": "Adhoc",
"pdf_hash": "2f59382a3c978467fc510638ef4558be4b7ed629",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
134805419
|
pes2o/s2orc
|
v3-fos-license
|
Seasonal and Regional Manifestation of Arctic Sea Ice Loss
The Arctic Ocean is currently on a fast track toward seasonally ice-free conditions. Although most attention has been on the accelerating summer sea ice decline, large changes are also occurring in winter. This study assesses past, present, and possible future change in regional Northern Hemisphere sea ice extent throughout the year by examining sea ice concentration based on observations back to 1950, including the satellite record since 1979. At present, summer sea ice variability and change dominate in the perennial ice-covered Beaufort, Chukchi, East Siberian, Laptev, and Kara Seas, with the East Siberian Sea explaining the largest fraction of September ice loss (22%). Winter variability and change occur in the seasonally ice-covered seas farther south: the Barents Sea, Sea of Okhotsk, Greenland Sea, and Baffin Bay, with the Barents Sea carrying the largest fraction of loss in March (27%). The distinct regions of summer and winter sea ice variability and loss have generally been consistent since 1950, but appear at present to be in transformation as a result of the rapid ice loss in all seasons. As regions become seasonally ice free, future ice loss will be dominated by winter. The Kara Sea appears as the first currently perennial ice-covered sea to become ice free in September. Remaining on currently observed trends, the Arctic shelf seas are estimated to become seasonally ice free in the 2020s, and the seasonally ice-covered seas farther south to become ice free year-round from the 2050s.
Introduction
The rapid decline of Arctic sea ice is one of the clearest indicators of ongoing climate change (Serreze and Barry 2011). Along with reduced sea ice cover in both extent and thickness (Kwok and Rothrock 2009; Cavalieri and Parkinson 2012), the multiyear ice cover is decreasing (Maslanik et al. 2007; Nghiem et al. 2007), the melt season is extending (Stroeve et al. 2014), and drift speeds and deformation rates are increasing (Rampal et al. 2009). The current Arctic sea ice transition from a thick, strong ice pack to a thinner, more fragile ice cover affects the marine Arctic ecosystems, possibly alters weather conditions and climate (e.g., Honda et al. 2009; Francis and Vavrus 2012), and increases the interest for commercial maritime activity in the Arctic (Emmerson and Lahn 2012). Understanding the changing Arctic sea ice cover is thus of scientific and practical urgency.
The loss of Arctic sea ice has been linked to a variety of atmospheric and oceanic processes, and can be explained by a combination of internal climate variability and anthropogenic forcing (IPCC 2013). As a consequence of a warmer Arctic atmosphere, the summer melt season has become longer, the ocean has absorbed more heat, and the winter freezing has been delayed (e.g., Stroeve et al. 2012b). The recent sea ice loss is also consistent with warmer oceanic conditions in the Barents Sea (Årthun et al. 2012), Fram Strait (Beszczynska-Möller et al. 2012), Bering Strait (Woodgate et al. 2006; Shimada et al. 2006), and the eastern Eurasian basin (Polyakov et al. 2017), as well as changes in atmosphere, ocean, and sea ice circulation (e.g., Rigor et al. 2002; Lindsay and Zhang 2005; Comiso et al. 2008; Ogi and Wallace 2012; Smedsrud et al. 2017).
Despite a large focus on the changing Arctic sea ice cover, studies tend to be concerned with the summer sea ice decline, with less attention on differences and similarities between seasons and regions. Change and variability in freeze-up over different Arctic regions may be used for winter climate predictions in middle and high northern latitudes (Koenigk et al. 2016), and various regions contribute differently to large-scale atmospheric circulation anomalies (Screen 2017). Enhanced knowledge about regional and seasonal sea ice extent similarities and differences is thus needed.
The Arctic sea ice extent is shrinking in all seasons, but the largest trends are currently found in summer, at the end of the melt season (e.g., Fig. 1; Serreze et al. 2007; Cavalieri and Parkinson 2012). In recent years the reduction in summer sea ice extent has accelerated (Stroeve et al. 2012b); the summer minima have for the last two decades consistently been below the minima inferred from observations beyond the satellite era back to 1850 (Walsh et al. 2017). The largest summer sea ice extent loss has occurred within the Arctic Ocean and is currently largest along the North American and Russian coasts (e.g., Fig. 1b; Comiso et al. 2008; Stroeve et al. 2012b). The loss of sea ice is likely to persist (Kay et al. 2011; Notz and Marotzke 2012; Notz and Stroeve 2016), and a seasonally ice-covered Arctic Ocean is expected by the middle of this century (Notz and Stroeve 2016). The first question this study addresses is the following: How large, and where, is the recent summer sea ice extent loss, and when may regional seas become seasonally ice free? While the largest observed sea ice changes have occurred in summer and within the Arctic Ocean, sea ice extends well into both the Pacific and Atlantic domains in winter (Fig. 1). The overall decline in winter sea ice extent is smaller than the summer decline, but has increased since the 2000s and is now statistically significant (Meier et al. 2005; Cavalieri and Parkinson 2012). There is, however, to date little loss of winter sea ice extent inside the Arctic basin (Fig. 1c). As the Arctic transits toward an ice-free summer, a further decrease in sea ice extent must increasingly be carried by the winter and first-year ice. The second question we address is the following: How large, and where, is the recent winter sea ice extent loss, and to what extent can an increasing weight carried by the winter be identified at present?
Most studies assessing changes in the Arctic sea ice cover do not go beyond the satellite era. The Arctic sea ice cover displays, however, multidecadal low-frequency oscillations (Vinje 2001; Polyakov et al. 2003; Divine and Dick 2006), and it expanded for instance in summer from the 1950s to 1980s (e.g., Walsh and Johnson 1979; Mahoney et al. 2008; Gagné et al. 2017), prior to the recent sea ice decline. To evaluate whether the summer and winter contrasts in the satellite era, presented here, are consistent in a longer time frame, we examine the gridded synthesis based on historical observations from 1950 to date provided by Walsh et al. (2015). The overall question we face is the following: What are the regional variations in observed summer and winter sea ice extent loss, and how will they play out in the future?
Data and methods
This study concerns the Northern Hemisphere (NH) sea ice cover. To address regional variations, the NH is separated into 13 different regional seas, mostly the Arctic shelf seas and those in the northern Atlantic, North American, and Pacific domains (Fig. 2a). The regional seas and geographical boundaries are consistent with the definitions by the National Snow and Ice Data Center (NSIDC; available online from ftp://sidads.colorado.edu/DATASETS/NOAA/G02186/ancillary/). The Arctic shelf seas are the Beaufort Sea, Chukchi Sea, East Siberian Sea, Laptev Sea, Kara Sea, and Barents Sea. These seas border the polar-most region, the central Arctic. The other regions farther south are the Canadian Archipelago and Hudson Bay (North American domain), Greenland Sea and Baffin Bay/Gulf of St. Lawrence (Atlantic domain), and Bering Sea and Sea of Okhotsk (Pacific domain). We use the term "NH sea ice" when the entire Northern Hemisphere region is assessed. To describe sea ice variability and trends in different seasons, we often exemplify by discussing the sea ice extremes in March (NH sea ice maximum) and September (NH sea ice minimum).
Monthly NH sea ice concentration from 1979 to 2016 is obtained from NSIDC (Cavalieri et al. 1996), derived from the Nimbus-7 Scanning Multichannel Microwave Radiometer (SMMR), the U.S. Defense Meteorological Satellite Program (DMSP) Special Sensor Microwave Imager (SSM/I), and Special Sensor Microwave Imager/Sounder (SSMIS). The sea ice concentration is gridded to a horizontal resolution of approximately 25 km × 25 km, on a polar stereographic projection. The NASA Team sea ice algorithm and the method used to derive a consistent dataset are described in Cavalieri et al. (1999) and references therein.
To examine the NH sea ice cover before the satellite era, we use the NSIDC gridded monthly sea ice extent and concentration, 1850 onward, version 1 dataset (hereinafter referred to as the Walsh data; Walsh et al. 2015, 2017), providing gridded (0.25° latitude × 0.25° longitude) midmonth sea ice concentrations from 1850 to 2013. The Walsh data build on the commonly used dataset by Chapman and Walsh (1991a, b; cf. e.g., Maslanik et al. 1999; Titchner and Rayner 2014), but add additional historical sources, extend the time series, and refine the merging of the data sources. In total, 16 different data sources contribute to the dataset, including historical ship observations, compilations by naval oceanographers, and national ice services. Satellite passive microwave data, calculated by combining output from the bootstrap and NASA Team algorithms (Meier et al. 2013), constitute the data coverage since 1979. It thus constitutes the longest NH sea ice concentration record based on observations.
We use the historical dataset as provided, but restrict our analysis to the relatively well-observed period onward from 1950 as documented below. Detailed uncertainty estimates are unfortunately not provided with the dataset, presumably owing to the synthesis of multiple sources required to produce a hemisphere-scale gridded dataset. Gagné et al. (2017) for instance find that the Walsh data in the eastern Arctic are associated with less variance and trends than the Russian Arctic and Antarctic Research Institute (AARI) sea ice concentration dataset since 1950. The Walsh data should therefore be approached with this caveat in mind.

FIG. 2. (a) The Northern Hemisphere regional seas, clockwise from 90°N: 1) central Arctic, 2) Canadian Archipelago, 3) Beaufort Sea, 4) Chukchi Sea, 5) East Siberian Sea, 6) Laptev Sea, 7) Kara Sea, 8) Barents Sea, 9) Greenland Sea, 10) Baffin Bay/Gulf of St. Lawrence, 11) Hudson Bay, 12) Bering Sea, and 13) Sea of Okhotsk. Contribution from each regional sea to the (b) September and (c) March Northern Hemisphere sea ice extent trends, 1979-2016.
However, Walsh et al. (2015) do detail the temporal and regional coverage of the different data sources in their synthesis. The data coverage can be summarized as follows. The Arctic-wide data record is essentially continuous since 1953 (Walsh and Johnson 1979). The presatellite data availability is typically larger in summer than winter, and there is direct observational coverage in practically all regions in all years in both summer and winter. In the Arctic shelf seas, the observations of the Russian AARI contribute 40%-90% data coverage. On the hemisphere scale, the analog Walsh and Johnson ice concentration data contribute the largest coverage in March and September (typically 60%-80%). The AARI database is the second largest contributor to general NH data coverage in these months (typically 10%-20%) for 1950-78. The Walsh data also include observations from, for example, Naval Oceanographic Office sea ice maps (mainly in the North American, Atlantic, and Pacific domains; typically less than 20%), Arctic Climate System Study data (Atlantic domain; 10%-80%), and Japan Meteorological Agency charts (sporadically up to 30% in the Pacific domain). Analog filling added by Walsh et al. (2015) amounts to less than 10% of the data. We refer the reader to Walsh et al. (2017), and references therein, for further details.
In addition to restricting our use of the historical data in time, we furthermore consider sea ice extent rather than sea ice area or concentration; most historical observations are of the sea ice edge, and hence most directly represent extent. Sea ice extent is beneficial also in the satellite era because sea ice concentration derived from passive microwaves can be biased during melt (Cavalieri et al. 1992). Sea ice extent is calculated as the cumulative area of all grid cells having at least 15% sea ice concentration. In the northernmost region sea ice extent is furthermore beneficial over concentration or area as observations are lacking near the pole throughout the satellite record. We assume the unobserved area to be at least 15% ice covered. Normalized sea ice extent is the regional monthly sea ice extent divided by the regional maximum sea ice extent (for 8 out of 13 regions, this is the full domain). We note that when considering the Walsh data we do not go beyond 2013, as the merging of the combined bootstrap and NASA Team product with the NASA Team product causes inconsistency in the time series in some regions (not shown).
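To make the extent definition concrete, the following is a minimal sketch (in Python with NumPy; the array names and the cell-area input are illustrative, not from the paper) of how sea ice extent and normalized extent can be computed from a gridded concentration field, including the assumption that the unobserved pole hole counts as ice covered.

```python
import numpy as np

def sea_ice_extent(conc, cell_area, pole_hole_mask=None, threshold=0.15):
    """Sea ice extent: cumulative area (km^2) of all grid cells with at
    least `threshold` (15%) sea ice concentration.

    conc           : 2-D array of fractional sea ice concentration (0-1)
    cell_area      : 2-D array of grid-cell areas (km^2), same shape
    pole_hole_mask : optional boolean array marking unobserved cells near
                     the pole, assumed to be at least 15% ice covered
    """
    covered = conc >= threshold
    if pole_hole_mask is not None:
        covered = covered | pole_hole_mask
    return float(np.sum(cell_area[covered]))

def normalized_extent(extent, regional_max_extent):
    """Regional monthly extent divided by the regional maximum extent
    (the full domain area for 8 of the 13 regions)."""
    return extent / regional_max_extent
```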
Correlation r and variance explained r² are calculated for detrended time series, unless otherwise stated. The F statistics are used to test the significance of linear trends, with no trend as the null hypothesis. Statistical significance is associated with a 95% confidence level. Linear trends are calculated using the least squares approach.
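As an illustration of this procedure, the sketch below (a minimal Python/SciPy version; the variable names are ours) fits a least squares trend, tests it at the 95% level, and computes the detrended correlation between two series. For a single-predictor regression the F test is equivalent to the two-sided t test on the slope, so the p-value returned by `linregress` can be compared directly against 0.05.

```python
import numpy as np
from scipy import stats

def linear_trend(years, extent):
    """Least squares linear trend and its significance at the 95% level
    (null hypothesis: no trend). For simple linear regression the F test
    is equivalent to the two-sided t test on the slope."""
    res = stats.linregress(years, extent)
    return res.slope, res.pvalue < 0.05

def detrend(years, series):
    """Remove the least squares linear trend from a time series."""
    fit = np.polyval(np.polyfit(years, series, 1), years)
    return np.asarray(series, dtype=float) - fit

def detrended_correlation(years, x, y):
    """Correlation r and variance explained r^2 of two detrended series."""
    r = np.corrcoef(detrend(years, x), detrend(years, y))[0, 1]
    return r, r ** 2
```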
3. Recent Northern Hemisphere sea ice variability and trends

The annual mean NH sea ice extent has decreased by 2.0 × 10⁶ km² since 1979 (Fig. 1a). The sea ice extent loss is significant throughout the year (Table 1; Cavalieri and Parkinson 2012), and with the largest changes occurring in summer in agreement with previous studies (see also, e.g., Cavalieri and Parkinson 2012; Serreze et al. 2007). Updated through 2016, the sea ice extent has decreased by 45% in September, 35% in August, and 26% in July, relative to 1979. The smallest changes, still significant, occurred in winter and early spring and are close to 9% in March, April, and May.
The NH sea ice extent changes are, however, characterized by large regional variations. The summer sea ice extent loss is widespread throughout the Arctic Ocean (Fig. 1b; Stroeve et al. 2012b) with largest trends along the North American and Russian coasts, whereas the winter loss generally occurs farther south (Fig. 1c and Table 1). Although the overall NH sea ice extent decline is twice as large in September as in March, regional winter trends (e.g., in the Barents Sea) are equal to the largest summer trends of other regions. The Bering Sea has positive trends throughout the winter (Table 1). To further assess seasonal and regional differences in NH sea ice variability and change, we first examine trends in September and March. We note that there may be regional differences within the regional seas assessed here; however, our qualitative results do not appear fundamentally concerned with the specific choice of regions. For a more detailed description of timing and geographical distribution of onset of rapid sea ice decline, the reader is referred to Close et al. (2015).
The September NH sea ice extent has decreased by 3.2 × 10⁶ km² since 1979, to which the East Siberian Sea (contributing 22%), Chukchi Sea (17%), Beaufort Sea (16%), Laptev Sea (14%), and Kara Sea (9%) contribute the most (Table 1 and Fig. 2b). These regions typically have large interannual summer variability and trends, but they are fully ice covered in winter (Figs. 3 and 4). Combined, these five perennial ice-covered seas account for 89% of the interannual variance in the NH September sea ice extent since 1979 (not shown). The recent large summer sea ice extent loss has resulted in very little sea ice being left in September in the Chukchi, East Siberian, Laptev, and Kara Seas. The central Arctic and Canadian Archipelago together contribute to 14% of the NH September loss.
The seasonally ice-covered seas farther south dominate NH winter variability and change (Figs. 3 and 4). The largest contributors to the NH March sea ice extent loss (1.5 × 10⁶ km² since 1979) are the Barents Sea (27%), Sea of Okhotsk (27%), Greenland Sea (23%), and Baffin Bay/Gulf of St. Lawrence (22%) (Table 1 and Fig. 2c). The four seas also account for 81% of the interannual variance in the NH March sea ice extent since 1979 (not shown). These regions are practically ice free in summer (Fig. 3).
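The regional percentages quoted in this section follow from comparing each regional trend to the hemispheric one; the sketch below (Python; illustrative names, and the assumption that a contribution is simply the ratio of the regional to the NH trend) expresses that bookkeeping.

```python
def regional_contributions(regional_trends, nh_trend):
    """Percentage contribution of each regional sea's linear extent trend
    to the Northern Hemisphere trend (as summarized in Figs. 2b,c).

    regional_trends : dict mapping region name -> trend, in the same
                      units as nh_trend
    """
    return {name: 100.0 * t / nh_trend for name, t in regional_trends.items()}
```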
Within the Arctic Ocean (i.e., the Beaufort, Chukchi, East Siberian, Laptev, Kara, and Barents Seas and the central Arctic) the winter sea ice extent variability and loss have almost exclusively occurred in the Barents Sea (Table 1). The other regions are essentially fully ice covered in winter, contributing to neither interannual variability nor trends. The Barents Sea has contributed 95% of the observed March sea ice extent loss in the Arctic Ocean since 1979, and has also carried the interannual variability in Arctic Ocean sea ice extent (r² = 0.99; standard deviations are 0.17 × 10⁶ and 0.18 × 10⁶ km² in the Barents Sea and Arctic Ocean, respectively). The Barents Sea has thus carried the variability and trend in the winter Arctic Ocean sea ice extent to date (Onarheim et al. 2015), and will continue to do so until other Arctic seas may get open-water areas in winter in the future.
4. Long-term Northern Hemisphere sea ice variability (1950-2013)
The NH sea ice extent displays pronounced interannual variability both in summer and winter over the longer time period since 1950 (Fig. 5; Walsh et al. 2017). The sea ice extent increases slightly from the 1950s to the 1970s, particularly in summer, and is followed by the rapid sea ice loss in recent years (Fig. 5a), consistent with findings by, for example, Polyakov et al. (2003), Mahoney et al. (2008), and Gagné et al. (2017). The 15 smallest annual mean NH sea ice extents since 1950, updated through 2016, all appear during the last 15 years, with the 10 smallest extents within the last 12 years. Both interannual and multidecadal variations in the Walsh data are more prominent in summer than winter, but are associated with large regional differences (Fig. 5; Walsh et al. 2017).
To examine long-term sea ice extent changes regionally, we first contrast the time periods 1950-99 and 2000-13, as the largest sea ice extent minima occurred after year 2000 (Fig. 5a). Summer and winter variability is exemplified by assessing sea ice variability in September and March. Figure 5b shows a September sea ice cover that is generally within the Arctic Ocean throughout the 1950-2013 time period, but with a present poleward contraction of the sea ice edge. The Arctic Ocean is mainly completely ice covered throughout the 1950-99 period, except for parts of the Barents, Kara, and Laptev Seas (Fig. 5b). The NH September sea ice extent loss since 1950 has thus predominantly occurred in the satellite era (Fig. 5a; Mahoney et al. 2008).
The March sea ice edge generally extends beyond the Arctic Ocean, with the Barents Sea as the prominent exception (Fig. 5c). The recent poleward shift of the March sea ice edge is generally smaller than the September change. In the Greenland and Barents Seas, however, the winter sea ice edge is distinctly farther north in recent years compared to the 1950-99 period (Fig. 5c). Divine and Dick (2006) have suggested that recent sea ice retreat in the Greenland and Barents Seas may be part of a multidecadal oscillation, superimposed on a long-term sea ice retreat since the second half of the nineteenth century (Vinje 2001).
It was shown that the Beaufort, Chukchi, East Siberian, Laptev, and Kara Seas carry the recent NH September sea ice extent loss (e.g., Fig. 2b). These regions also account for 88% of the interannual variance in NH September sea ice extent between 1950 and 2013 (Fig. 6a). The Greenland and Barents Seas largely account for the remaining variability, and contribute to the multidecadal variability (not shown). We note that these conclusions are drawn from an incomplete observational basis; however, most of the Arctic shelf seas have 40%-90% data coverage in sea ice concentration in September (not shown), and we only consider sea ice extent.
The NH March sea ice extent variability and trends are explained by sea ice changes in the Barents Sea, Greenland Sea, Baffin Bay/Gulf of St. Lawrence, and Sea of Okhotsk, and these seas carry 86% of the interannual NH March sea ice extent variability throughout the observational record (Fig. 6b). The winter sea ice cover in the Barents Sea, Greenland Sea, and Baffin Bay/Gulf of St. Lawrence is relatively well observed since the 1950s (typically 20%-80% observational data coverage prior to 1979; not shown), whereas the Sea of Okhotsk is in general sparsely observed before the satellite era. We find from the Walsh data that the distinct spatial differences between the NH September and March sea ice extent variability (i.e., summer variability in the north and winter variability generally farther south) appear consistent over 60 years in the observation-based data (Figs. 5 and 6).
5. Summer, winter, and transition modes
Recent NH sea ice extent appears unprecedentedly small both in summer and winter based on the available observations (Figs. 1 and 5; Walsh et al. 2017). The overall NH trends for 1979-2016 (Fig. 4) are statistically significant in all months and approximately twice as large in summer as in winter (Table 1). The matrixed data of Fig. 7 offer a concise summary of the monthly and regional sea ice extent trends since 1979. The most poleward regions are characterized by loss restricted to a few summer months and most strongly so in September; this may be unsurprising, but only to the extent that a perennial sea ice cover remains. These northern regions have their largest sea ice extent variability and trend in summer (Fig. 3), and are therefore hereafter described as being in a "summer mode." The regions in summer mode at present are the central Arctic, Canadian Archipelago, the Beaufort, Chukchi, East Siberian, Laptev, and Kara Seas, and Hudson Bay. Moving clockwise and southward (toward the right in Fig. 7), the extent of change tends to broaden from summer toward winter, eventually to the extreme that summer sea ice is practically absent, a seasonal ice cover is realized, and change is more pronounced in winter.
The regions with largest sea ice extent variability and trend in winter are hereafter referred to as being in a "winter mode." The regions in winter mode at present are generally the seas outside the Arctic Ocean, but importantly also the Barents Sea (cf. Fig. 7; Figs. 1c and 5c); that is, it characterizes the Barents Sea, Greenland Sea, Baffin Bay/Gulf of St. Lawrence, Bering Sea, and Sea of Okhotsk. Where the summer and winter modes spatially connect, a "transition mode" can be associated. The Kara, Barents, and Greenland Seas and Hudson Bay could thus also be placed in the transition mode (cf. Figs. 1b and 5b and Figs. 1c and 5c). The transition mode is characterized by sea ice extent variability and change in both summer and winter (similar to the Greenland Sea in Fig. 3), or largest variability and change in spring and fall (similar to Hudson Bay in Fig. 3). The NH sea ice cover as a whole is in transition mode, with large variability and trends in both summer and winter (Figs. 3 and 4).
The summer mode regions are typically completely ice covered in winter, whereas winter mode regions are typically ice free in summer. The transformation from summer to winter mode thus implies a substantial change in the regional sea's seasonal cycle: from a perennial to a seasonal sea ice cover. The gradual transformation between the different modes suggests that regions may change from one mode to another in the future, or that they may have done so in the past. Implicit to the summer mode's larger retreat in summer than winter is an increasing range of the seasonal cycle, and the seasonality, calculated as the difference between the annual sea ice extent maximum and minimum divided by the annual mean, increases in all summer mode regions. The largest amplification of the seasonal cycle has occurred in the East Siberian Sea (Fig. 3). The winter mode regions experience, in contrast, a decreased range of the seasonal cycle, by going toward ice-free conditions year-round.
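The seasonality metric used here is simple enough to state as a one-line computation; the sketch below (Python; the input layout is our assumption) applies the definition given above to one year of monthly extents.

```python
import numpy as np

def seasonality(monthly_extent):
    """Seasonality of one year of monthly sea ice extents:
    (annual maximum - annual minimum) / annual mean."""
    m = np.asarray(monthly_extent, dtype=float)
    return (m.max() - m.min()) / m.mean()
```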
a. Past variability in modes
Here we examine whether some regions may have transformed from one mode to another according to the observation-based record back to 1950. By assessing monthly mean sea ice extent between 1950 and 2013, we find that the Barents Sea and Baffin Bay/Gulf of St. Lawrence used to have a partial summer sea ice cover until the recent decades, whereas they are presently in winter mode and essentially ice free in summer (Fig. 3). These seas have thus completed the transition mode, with sea ice variability and trends in both summer and winter, and entered the winter mode, with only winter variability and trends.
The remaining seas, we find, have remained in the same mode since 1950 according to the Walsh data. We note, however, that the Kara Sea and Hudson Bay are currently becoming ice free in summer, and thus entering transition mode. Section 7 assesses possible future transformations between modes in a climate that continues to warm.
b. Present state
The NH sea ice extent decline is currently largest in summer (e.g., Fig. 4; Serreze et al. 2007). Figure 8 depicts the regional changes in normalized extent: moving clockwise and southward from the central Arctic (toward the right in Fig. 8; same order as in Fig. 7), regions typically become gradually less ice covered (they get smaller normalized extent), and summer and winter trends (length of vertical arrows) decrease and increase, respectively.
As regions become ice free in summer (reach the hatched area in Fig. 8), they enter transition mode.
Figure 8 summarizes how the summer mode regions have large changes in the minimum sea ice extent (Fig. 8, top), in contrast to the largest changes occurring in the maximum extent in the winter mode regions (Fig. 8, bottom). The summer sea ice extent has also decreased in most winter mode regions since the 1950s, and summers are now ice free, except for the Greenland Sea. Overall, there is an approximate 2:1 ratio of September versus March trend for the Northern Hemisphere sea ice extent loss (cf. also NH in Fig. 4).
We note that the linear trend in Bering Sea winter sea ice extent is positive since 1979 (Table 1), but negative when considering the longer time frame since 1950 (Fig. 8). The recent positive trend may be linked to internal variability (Gagné et al. 2017). The slightly increasing winter sea ice extent estimated for the Sea of Okhotsk since the 1950s may be influenced by the regional sparsity of winter observations before the 1970s (not shown; Walsh et al. 2017), and is contrasted by the rapid decline since 1979 (e.g., Fig. 4).
6. Seasonal asymmetry in summer and winter modes
We identified the current summer and winter mode regions based on their sea ice extent variability and trends in summer versus winter since 1979 (e.g., Fig. 7).
At present, there are also other fundamental differences between the two modes; we find seasonal asymmetry in extent and trends comparing the melt and freezing seasons, particularly for the present summer mode regions (Figs. 3 and 4; the melt season is considered to be the months between the sea ice extent maximum in March and the minimum in September, and similarly the months between September and March define the freezing season). The seasonal cycle of the NH sea ice extent as a whole broadly follows a sinusoid (Fig. 3), but with the largest monthly trends in September and the smallest in May (Fig. 4) (i.e., the trends are not symmetric around September). The sea ice loss is predominantly carried by the months July, August, and September, and the NH loss is thus slightly larger in the melt season compared to the freezing season (Fig. 4).
The individual present summer mode regions also have larger trends in the months prior to the late melt season than in the early freezing season (Fig. 4). In, for example, the Laptev Sea, there is retreat from June to October, with the trend being larger in August and July than in June and October. The Laptev Sea is fully ice covered within two months after the September sea ice extent minimum, and has thereby no trend in November (Fig. 4). The asymmetry, with larger trends in the melt season compared to the freezing season, demonstrates that sea ice melt is enhanced in spring and that melt occurs earlier, but that the sea ice cover refreezes rapidly after the sea ice extent minimum in September. The ocean typically refreezes completely within two months after the sea ice extent minimum in the central Arctic, Canadian Archipelago, Beaufort Sea, East Siberian Sea, and Laptev Sea, whereas it refreezes by December in the Chukchi Sea, and by January in the Kara Sea (Fig. 3).

FIG. 8. Regionally normalized sea ice extent change in (top) September and (bottom) March. Arrows represent regional change in sea ice extent between the 1950-99 mean (arrow tail) and the 2000-13 mean (arrowhead). Dots indicate no change in regional max and min sea ice extent between the two periods. A normalized extent of 1 (0) indicates complete (no) sea ice cover. The hatched area illustrates the transition mode between summer and winter variability; that is, a region reaching no summer sea ice [0 in (top)] enters the transition mode (hatched region); when the ice cover starts to decrease in winter, the region enters winter mode at (bottom).
In contrast to the summer mode, seasonal trends in the winter mode regions are essentially symmetric (Fig. 4). Large trends in fall indicate that the freeze-up is reduced or delayed, and that open-water areas persist further into winter (Fig. 3). Negative trends in the melt season indicate that sea ice melt is enhanced in spring.
The current asymmetry (symmetry) in the summer (winter) mode regions is not implicit to the proposed modes, and may be due to regional settings. We note that the present summer mode regions are tightly linked regionally, whereas the winter mode regions are generally more disconnected, largely separated by continents and with the summer mode Arctic Ocean between the Atlantic and Pacific sectors. The regional climatic forcing can thus differ substantially (e.g., Cavalieri and Parkinson 1987; Smedsrud et al. 2013).
As areas of open water develop earlier in the melt season, the upper ocean absorbs more solar radiation, basal and lateral sea ice melt increase, and the ocean absorbs more heat (e.g., Perovich et al. 2007). Large negative sea ice extent trends in the melt season, particularly in the summer mode regions (Fig. 4), thus appear accelerated by the ice-albedo feedback (Stroeve et al. 2014) (i.e., caused by more melting). We note that parts of the large open-water regions in the Arctic shelf seas may also be due to sea ice divergence along the coasts, a more mobile sea ice cover (Rampal et al. 2009), and above-normal sea ice export in Fram Strait (Williams et al. 2016; Smedsrud et al. 2017).
Despite decreasing summer sea ice extent minima and a warmer ocean at the end of the melt season (Steele and Dickinson 2016), the summer mode regions still refreeze rapidly in fall and reach a complete sea ice cover (Fig. 3). The rapid refreeze indicates that the ocean efficiently loses its heat to the atmosphere in fall.
One way to evaluate the freeze-up is to estimate how quickly new ice forms. We estimate the tendency for large areas to freeze up by quantifying rapid ice growth events (RIGEs; Fig. 9), defined here as an increase in sea ice extent of at least 10⁶ km² over a weeklong period. We find that RIGEs primarily occur during the month of October (although they may also occur as late as December; not shown). All the RIGEs are observed in the Arctic Ocean within the current summer mode regions (not shown). As seen in Fig. 9, the number of RIGEs has increased in recent decades following record minima in Arctic sea ice extent. For example, in 2007 and 2008 there were 15 consecutive 7-day periods in October through early November where more than 10⁶ km² of sea ice formed. Similarly, in 2012, the lowest September minimum recorded to date, the 12 RIGEs all occurred in October. The summer mode regions' small trends in the freezing season can thus be related to the increasing number of RIGEs.
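The RIGE definition lends itself to a direct computation; the following is a minimal sketch (Python; the daily-extent input and the exact windowing are our assumptions, since the paper computes the events from a weekly running mean) that counts 7-day windows over which extent grows by at least 10⁶ km².

```python
import numpy as np

def count_riges(daily_extent, threshold_km2=1.0e6, window_days=7):
    """Count rapid ice growth events (RIGEs): weeklong periods over which
    sea ice extent increases by at least `threshold_km2`.

    daily_extent : 1-D array of daily sea ice extents (km^2) covering one
                   freezing season.
    """
    ext = np.asarray(daily_extent, dtype=float)
    # Extent growth over every 7-day window in the record.
    growth = ext[window_days:] - ext[:-window_days]
    return int(np.sum(growth >= threshold_km2))
```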
The increasing number of RIGEs in the summer mode regions may be related to those regions' strong salinity stratification. The Arctic shelves receive strong river runoff in summer (Rudels 2015; Rawlins et al. 2010), and the consequent strong stratification suggests that the increased amount of heat absorbed by the ocean in summer generally accumulates in the upper ocean. When the ocean surface cools in fall, and thereby densifies, the strong salinity stratification limits vertical displacements, and thereby inhibits warmer waters from being brought up from below. Consequently, only the upper layer has to cool to the freezing point before sea ice formation starts. Sea ice generally forms quickly within the Arctic Ocean once temperatures drop below freezing (Stroeve et al. 2014).
In contrast to the summer mode, the winter mode regions experience large trends of sea ice retreat also in the months following the annual sea ice extent minimum (Fig. 4), with delayed and more gradual freeze-up in fall (Fig. 3). Consistently, no RIGEs are observed to compensate for decreasing sea ice extent in the winter mode regions. The winter mode regions are more climatologically heterogeneous and more geographically separated. The regional conditions appear generally less favorable for RIGEs than in the Arctic proper, as alluded to below.

FIG. 9. Number of yearly RIGEs, that is, an increase in sea ice extent of at least 10⁶ km² over a weeklong period, 1979-2016. The RIGEs are computed as a weekly running mean, so the number of occurrences can imply either a consecutive number of weeklong periods where at least 10⁶ km² of sea ice form in a row, or it can also occur at different times during winter. Note that there were no RIGEs between 1979 and 1987.
Changes in the Atlantic domain exert a dominant role on winter sea ice loss in the Barents Sea (Årthun et al. 2012), Nansen basin (Onarheim et al. 2014), and the eastern Eurasian basin (Polyakov et al. 2017). The Atlantic heat inhibits sea ice freezing (Årthun et al. 2012), and maintains a warmer, less stratified water column, inhibiting RIGEs in the freezing season.
Moving westward, Arctic sea ice is generally advected to and through the Greenland Sea (Kwok 2009), where, up to the 1980s, there also used to be local sea ice formation in the so-called Odden ice tongue (Wadhams and Comiso 1999; Comiso et al. 2001). The Norwegian Atlantic Current inhibits local sea ice formation and keeps the eastern parts of the Greenland Sea ice free year-round. The current, by lateral mixing, also provides the heat given up through open-ocean convection in the central basin, thus maintaining a weakly stratified water column (Rudels et al. 1989; Dickson et al. 1996) and scarce sea ice formation. Variations in the Greenland Sea ice cover are, however, largely atmospherically driven (Fang and Wallace 1994; Deser et al. 2000).
Also the Baffin Bay/Gulf of St. Lawrence receives sea ice exported from the Arctic Ocean (Kwok 2005). Sea ice also forms locally (e.g., Close et al. 2018), and the sea ice variability depends largely on atmospheric conditions including the North Atlantic Oscillation (NAO; Fang and Wallace 1994; Deser et al. 2000). We note that this region extends far southward into the North Atlantic Ocean with generally higher air temperatures and solar radiation providing heat throughout the year.
The Bering Sea and Sea of Okhotsk are also relatively southern, both being south of the Arctic Circle, and particularly the Sea of Okhotsk is practically disconnected from the Arctic Ocean (cf. Fig. 2). Both regions have atmospherically driven sea ice formation in the north, and the ice melts when it drifts southward into warmer water (Muench and Ahlnas 1976; Martin et al. 1998; Kimura and Wakatsuchi 1999). Whereas the Bering Sea is more characterized by variance than trends, the latter partly positive (Fig. 4), there is reduced sea ice formation in the Sea of Okhotsk in fall linked to increasing air temperatures (Kashiwase et al. 2014).
We have demonstrated that the northernmost (summer mode) regions currently refreeze rapidly in fall, contrasting with the regions farther south (winter mode). Although actual drivers of regional and seasonal sea ice distribution are not extensively assessed herein, we submit from the above that the asymmetry between the melt and freezing seasons (and between the summer and winter modes) reflects a region's climatological preconditioning, and more specifically that the combination of a freezing polar night and freshwater stratification sustains RIGEs in the present summer mode regions.
7. Future perspectives
If the Northern Hemisphere sea ice loss persists, the Arctic Ocean will become seasonally ice free, and further reduction of sea ice extent will have to be increasingly concerned with wintertime change. Summer mode regions will accordingly shift via transition mode into winter mode. A winter mode region can in this perspective be understood to be at a later stage in the transformation from a complete sea ice cover to ice-free conditions than a summer mode region.
a. Space for time?
Different seas are at different stages (the modes) as part of one overall transformation in time (i.e., the general 2:1 ratio of September vs March retreat of total NH sea ice extent; cf. Figs. 4 and 8). A geographical region's transformation in time from a complete perennial sea ice cover to none (i.e., sequentially going through the summer, transition, and winter modes) implies that Fig. 7 offers a possible "space for time" perspective on the evolution of Arctic sea ice extent as current change moves poleward. Figure 7 then represents, looking right, a regional sea's further sequence of change moving forward in time.
Keeping in mind the more disconnected nature of the Pacific regional seas (cf. section 6), the above concept is probably most relevant from an Atlantic perspective (cf. also the general spatial trend patterns in Figs. 7 and 8). We further note that the regional ordering of Figs. 7 and 8 from the central Arctic (summer mode) and clockwise along the Arctic shelf seas toward the Atlantic domain farther south (winter mode) is generally against the poleward flow of Atlantic water. The closer to the inflow of Atlantic water, the generally larger the winter trends and the smaller the summer trends. The Barents and Kara Seas are accordingly at the cusp of recent and present Arctic transformation from summer to winter mode (cf. Figs. 7 and 8), and from there future transformation can progress farther into the Arctic (toward the left in Fig. 7). The increasing "Atlantification" of the Barents Sea, Nansen basin, and eastern Eurasian basin (Årthun et al. 2012; Polyakov et al. 2017) is observational evidence of transformation and consequently of a seasonal ice cover moving poleward.
b. Future transformations from current change
A quantitative, if approximate, assessment of the future in line with the above space-for-time concept is the extrapolation of satellite-era regional trends for a representative summer mode (Chukchi Sea) and winter mode (Barents Sea) region. Linear trends since 1979 are considered (Fig. 7), the same as for the sea ice extent trends updated monthly by the NSIDC (e.g., NSIDC 2016). We find that with the currently observed Chukchi Sea summer loss rate (−0.15 × 10⁶ km² decade⁻¹; Fig. 4), the Chukchi Sea becomes ice free in summer during the 2020s and thus enters the transition mode (not shown). The estimate for the Chukchi Sea is also representative for the Laptev, Beaufort, and East Siberian Seas, as they have similar sea ice extent and trends (Figs. 3 and 4). These Arctic shelf seas may thus become seasonally ice free within the next decade, consistent with model projections of Arctic summer sea ice (e.g., Wang and Overland 2009).
We acknowledge that estimates based on extrapolation of linear trends are generically uncertain, particularly in a system that is rapidly changing (Meier et al. 2007). However, extrapolation of 1979-present trends can be considered a nonextreme estimate of future change; present shorter-term trends are distinctly larger (Fig. 5; Stroeve et al. 2012b), whereas climate models generally underestimate observed trends (Stroeve et al. 2012a). It is currently debated whether the latter mismatch reflects the models' imperfection in simulating externally forced change (from global warming), or the contribution of internal variability to observed trends (e.g., Li et al. 2017; Onarheim and Årthun 2017).
Given continued future warming, all the summer mode regions will eventually enter the transition mode and thus be ice free for parts of the year. Hudson Bay has currently approached the transition mode: it has practically no summer sea ice left, and is still completely ice covered in winter (Fig. 3). The corresponding monthly sea ice extent trends are thus large in fall and spring, but zero in summer and winter (Fig. 4) as the ocean is either completely ice free or fully ice covered. Also the Kara Sea has practically been ice free in recent summers and may thus be considered to be in a transition mode. The September trend in the Kara Sea (Fig. 4) is smaller than in the months prior to and after September. This indicates that there is limited September sea ice left to lose and that with continued warming the September sea ice extent trend decreases toward zero, as in the Hudson Bay and current winter mode regions.
Negative winter trends in the present winter mode regions will also persist in a climate that warms further, until these regions become ice free year-round. Here we assess the transformation from a winter ice-covered region to an ice-free sea, exemplified by the Barents Sea (−0.11 × 10⁶ km² decade⁻¹; Fig. 4). As the winter sea ice extent loss progresses, larger open-water areas appear and the ice-free season lengthens. The Barents Sea becomes ice free year-round around 2050 if the currently observed winter trend in sea ice extent persists (not shown). This is in agreement with results from four CMIP5 models including a large ensemble simulation in a strong climate forcing scenario (Onarheim and Årthun 2017). The scenario of winter Barents Sea ice extent loss is also representative for the Greenland Sea, as it has similar sea ice extent and trends (Figs. 3 and 4), whereas the Sea of Okhotsk and Baffin Bay/Gulf of St. Lawrence are estimated to become ice free in the 2080s (not shown). We again note that the different regions are affected by different atmospheric and oceanic forcing; the partial sea ice cover in the Greenland Sea is for instance likely to persist as long as sea ice continues to be advected into the Greenland Sea from the Arctic Ocean (Kwok 2009).
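The trend-extrapolation estimate used here reduces to solving a linear model for the year at which extent reaches zero; the sketch below (Python; the function and argument names are illustrative) expresses that calculation, with the caveats about linear extrapolation noted above applying equally to the code.

```python
def ice_free_year(current_extent, trend_per_decade, current_year):
    """Estimate the year when sea ice extent reaches zero, assuming a
    persistent linear trend.

    current_extent   : recent mean extent (e.g., in 10^6 km^2)
    trend_per_decade : linear trend in the same units per decade
                       (negative for declining extent)
    """
    if trend_per_decade >= 0:
        raise ValueError("extent is not declining; no ice-free year")
    years_to_zero = 10.0 * current_extent / (-trend_per_decade)
    return current_year + years_to_zero
```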
8. Summary and conclusions
The NH sea ice cover has decreased dramatically over the past few decades, but with large seasonal and regional variations (Fig. 1). We note that only sea ice extent is considered herein, but that the observed decline in sea ice volume is also projected to continue toward 2100 (e.g., Gregory et al. 2002; Arzel et al. 2006). The observed NH sea ice extent variability and trend have here been assessed regionally and seasonally for the 1950-2016 period. Updated through 2016, changes are overall largest in summer and smallest in winter (e.g., Fig. 1) in agreement with previous studies (e.g., Cavalieri and Parkinson 2012). If the NH sea ice extent loss is to persist, summer trends will decrease as areas become ice free in summer, and trends toward the winter season will increase and spread to larger areas. On this background, we posed three questions initially (section 1): How are regional NH summer and winter sea ice extent variability and trends at present? How were they in the past? How may they be in the future?
Based on satellite observations, we propose two dominant patterns of NH sea ice extent variability and change, the summer and winter modes. The NH summer variability and trends dominate in the Arctic shelf seas (e.g., Figs. 2b and 5b). These regions are completely ice covered in winter but have large sea ice variability and trends in summer, and are thus classified to be in a summer mode. The seas recently in summer mode are the central Arctic, Canadian Archipelago, the Beaufort, Chukchi, East Siberian, Laptev, and Kara Seas, and Hudson Bay. Current summer mode regions are characterized by larger trends in the melt season compared to the freezing season (Fig. 4), indicating that melt starts earlier whereas freeze-up happens relatively quickly. We find that the Arctic Ocean appears to refreeze particularly quickly, with rapid ice growth events, in years with a small sea ice extent minimum (Fig. 9).
The recent NH winter sea ice extent loss is significant and increasing, but still less extensive than in summer (Fig. 1; Cavalieri and Parkinson 2012). The winter variability and loss generally take place in the seas farther south, in the Barents Sea, Greenland Sea, Baffin Bay/Gulf of St. Lawrence, and Sea of Okhotsk (Fig. 2c). We classify these seas to be in a winter mode as they have largest sea ice extent variability and trends in winter. The Bering Sea is also placed within the winter mode, but has positive winter trends for the 1979-2016 period (Fig. 7). In contrast to the summer mode regions, the winter mode regions display similar trends in the freezing season compared to the melt season, and no rapid ice growth events are observed in fall.
Observations since 1950, although limited prior to the satellite era, indicate that the distinct summer (winter) mode regions explain large parts (>85% of the variance) of the NH summer (winter) sea ice extent variability (Fig. 6). With continued global warming and associated sea ice loss, however, regional seas may change from one mode to another. The summer mode regions may lose their summer sea ice, thereby transforming into a transition mode, and thereafter to the winter mode with sea ice variability and trends only in winter. The Kara Sea and Hudson Bay are currently about to leave the summer mode because they have lost nearly all summer sea ice (Figs. 3 and 9). By extrapolating current linear trends we find that the Arctic shelf seas in summer mode may become completely ice free in summer during the 2020s. Present winter variability and trend in sea ice extent within the Arctic Ocean occur exclusively in the Barents Sea (Fig. 2c). If the sea ice loss continues, the winter sea ice extent in the current summer mode regions may start decreasing and winter trends will then become increasingly important. Winter sea ice extent loss in the current winter mode regions will also persist with continued warming, until these regions become completely ice free throughout the year, possibly onward from the 2050s.
This work improves our understanding of past and present seasonal and regional NH sea ice extent variability by providing a unifying framework: the summer and winter modes. The modes highlight the ongoing transformation and mark possible stages for the future seasonally ice-free Arctic Ocean.
FIG. 1. (a) March (blue), September (red), and annual mean (black) Northern Hemisphere sea ice extent, 1979-2016. Shaded regions indicate plus and minus one standard deviation. Linear sea ice concentration trends (% decade⁻¹) in (b) September and (c) March, 1979-2016. Black contours show the mean sea ice edge.
km²) for 1979-2016. The three largest monthly contributors to the Northern Hemisphere sea ice loss are in italic. Boldface values indicate trends that are statistically significant.
FIG. 3. Monthly sea ice extent for the Northern Hemisphere and its individual seas in successive 10-yr periods from 1950 to 2013 (Walsh et al. 2015). The seasonal cycles are shown from March to March, centered around September. Months before (to the left of) September represent the melt season, whereas months after (to the right of) September represent the freezing season. The three thin red lines indicate the sea ice extent in 2014-16 (Cavalieri et al. 1996).
FIG. 4. Monthly trends in sea ice extent for the Northern Hemisphere and its individual seas, 1979-2016. The trends are shown from March to March, centered around September. Months before (to the left of) September represent the melt season, whereas months after (to the right of) September represent the freezing season. Bars indicate 95% confidence intervals.
FIG. 6. Anomalous Northern Hemisphere sea ice extent (blue) in (a) September and (b) March, 1950-2013 (Walsh et al. 2015). Anomalous sea ice extent (red) in the Beaufort, Chukchi, East Siberian, Laptev, and Kara Seas in (a) (summer mode regions) and the Barents Sea, Greenland Sea, Baffin Bay/Gulf of St. Lawrence, and Sea of Okhotsk in (b) (winter mode regions).
|
2019-04-26T03:27:12.790Z
|
2018-05-31T00:00:00.000
|
{
"year": 2018,
"sha1": "5d416b82fb76f00436b23d41838bee374b439b53",
"oa_license": "CCBY",
"oa_url": "https://doi.org/10.1175/jcli-d-17-0427.1",
"oa_status": "HYBRID",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "a5cd043ca492e87643074e886af638f88869902e",
"s2fieldsofstudy": [
"Environmental Science"
],
"extfieldsofstudy": [
"Environmental Science"
]
}
|
224858807
|
pes2o/s2orc
|
v3-fos-license
|
Palynological evidence supporting widespread synchronicity of Early Jurassic silicic volcanism throughout the Transantarctic Basin
Throughout the Transantarctic Mountains, Early Jurassic silicic magmatism preceding the emplacement of the Ferrar flood-basalt province (Heimann et al. 1994) is documented by the increasing input of silicic ash into otherwise epiclastic, fluviolacustrine deposits of the Beacon Supergroup (see Elliot et al. 2017). Vertebrate biostratigraphy and radiometric analyses indicate a Sinemurian to Pliensbachian age span for silicic volcaniclastic deposits in the central Transantarctic Mountains (CTMs) (Elliot et al. 2017). For northern Victoria Land (NVL), radiometric geochronology and palynostratigraphy revealed that explosive silicic volcanism began with minor pulses during the early Sinemurian (c. 195 Ma) and reached a peak phase beginning in the middle Pliensbachian (c. 187 Ma) (Bomfleur et al. 2014). A basin-wide correlation of these widely separated age frameworks has so far been hampered by the scarcity of data on coeval deposits in southern Victoria Land (SVL). Here, we present new palynostratigraphic data from mixed epiclastic–volcaniclastic deposits in the Prince Albert Mountains that provide supporting evidence for the widespread synchronicity of silicic volcanic episodes preceding Ferrar magmatism.
Samples were collected during the 13th German Antarctic North Victoria Land Expedition (GANOVEX XIII 2018-19) from a raft of sedimentary deposits exposed between Ferrar dolerite sills at the southern tip of McLea Nunatak, Prince Albert Mountains (76.00849°S, 159.61997°E; Fig. 1a & b). The ∼40 m-thick sedimentary section consists of trough-cross-bedded, medium- to coarse-grained sandstone, carbonaceous mudstone and coal, with intercalations of up to >1 m-thick beds of silicic tuff (Fig. 1b & c). Sandstones contain abundant zeolite cement and altered volcanic rock and glass fragments (Bernet & Gaupp 2005). Similar mixed epiclastic and silicic-volcaniclastic deposits in the Convoy Range further south have been informally referred to as 'Jurassic beds' (Elliot & Grimes 2011). Based on lithology and sedimentary features, they can be correlated with the upper Section Peak Formation of NVL (e.g. Schöner et al. 2011) and with the lower Hanson Formation of the CTMs (e.g. Elliot et al. 2017).
Two palynological samples were taken from the middle and upper parts of a 1 m-thick succession of carbonaceous mudstone, coal and thin beds of tuff near the base of the section. Following standard palynological processing, both samples yielded well-preserved pollen-and-spore assemblages strongly dominated (82% and 85%, respectively) by Classopollis grains (Fig. 1d & e), mainly intrastriate forms (Fig. 1d), with subordinate occurrences of bryophyte and lycophyte spores (Fig. 1f) and with consistent occurrences of Podosporites variabilis (Fig. 1g; see Supplemental Material).
Compared to the palynostratigraphic framework for eastern Australia, this composition indicates correlation with the Ischyosporites punctatus Association Subzone of the Sinemurian-Toarcian Classopollis Abundance Zone (de Jersey & McKellar 2013; see Supplemental Material). As additional, younger index taxa (e.g. Nevesisporites vallatus) are absent and as the proportion of Classopollis is still very high, we suggest an assignment to the basal part of this subzone (see Supplemental Material), equivalent to unit APJ21 of Price, which is late Sinemurian in age (see Bomfleur et al. 2014). Beacon Supergroup deposits in the southern Prince Albert Mountains have thus far been considered to range from Permian to Triassic in age, based on finds of typical Glossopteris and Dicroidium fossils from Beta Peak and Benson Knob, respectively (Capponi et al. 2002).
Our results facilitate the correlation of Early Jurassic sedimentary successions that reflect the transition from epiclastic sedimentation and coal formation (typical of the underlying Beacon Supergroup) to silicic-volcaniclastic sedimentation over a distance of >1200 km across the Transantarctic Basin. Our age assignment also narrows the succeeding time interval for the peak phase of silicic ash input (recorded in SVL so far only in the form of isolated rafts of massive silicic tuff within the overlying Mawson Formation; e.g. see Elliot et al. 2017) to be essentially Pliensbachian in age. Taken together, there is increasing evidence to suggest that two phases of silicic volcanism are recorded with similar characteristics and synchronous timing throughout the Transantarctic Mountain Range: an early phase characterized by pulsed input of silicic ash lasting at least throughout the Sinemurian, represented by the lower Hanson Formation (see Elliot et al. 2017), upper Section Peak Formation (see Schöner et al. 2011) and the deposits described herein, and a peak phase of activity with massive ash input during the Pliensbachian, represented by the upper Hanson Formation (see Elliot et al. 2017), Shafer Peak Formation in NVL (see Bomfleur et al. 2014) and clasts of silicic tuff in the Mawson Formation (see Elliot et al. 2017).
|
2020-07-09T09:15:19.454Z
|
2020-07-07T00:00:00.000
|
{
"year": 2020,
"sha1": "c8596d5687a4bb5047fc473d81548ebe298b7a0e",
"oa_license": "CCBY",
"oa_url": "https://www.cambridge.org/core/services/aop-cambridge-core/content/view/A2171858C005A707EF321300343B95EF/S0954102020000346a.pdf/div-class-title-palynological-evidence-supporting-widespread-synchronicity-of-early-jurassic-silicic-volcanism-throughout-the-transantarctic-basin-div.pdf",
"oa_status": "HYBRID",
"pdf_src": "Cambridge",
"pdf_hash": "4c82447b57d78b177df40ac62e06539d41782f35",
"s2fieldsofstudy": [
"Geology"
],
"extfieldsofstudy": [
"Geology"
]
}
|
139297025
|
pes2o/s2orc
|
v3-fos-license
|
Recycled Natural Rubber Latex Gloves Filled CR Rubber: The Effects of Filler Size and Loading
The effects of recycled natural rubber latex gloves (rNR-G) as a filler of different sizes and loadings on the curing characteristics (scorch time, cure time, minimum and maximum torque) and physical properties (swelling, crosslink density and hardness) of rNR-G/chloroprene rubber (CR) compounds were investigated. Two filler sizes (fine and coarse) were utilized at different loadings (0, 5, 10, 15, 20 and 25 phr) for each size. The results indicated that the optimum cure characteristics for the rNR-G/CR compounds were obtained at 5 phr of the fine size; meanwhile, for the physical properties, the hardness increased with increasing filler loading and gave higher values in the case of the fine size.
Introduction
Rubber is one of the most important materials in the world. Its versatility has attracted the attention of researchers worldwide. Nowadays there are many advancements in rubber technology, and progress is still being made. However, even progress has its price. Rubbers are polymers with densely cross-linked structures; because of these strong bonds, rubbers are difficult to decompose [1]. This causes problems worldwide, as there are no fully effective methods of disposing of rubber waste. Polychloroprene (CR) [poly(2-chloro-1,3-butadiene)] is one of the most important rubbers, with an annual consumption of nearly 300 000 tons worldwide. CR is used in diverse technical areas, primarily in the rubber industry, but is also important as a raw material for adhesives (both solvent-based and water-based) and has various latex applications, for example dipped articles (e.g. gloves), molded foam and the modification of bitumen. Unfortunately, CR, like other types of rubber, becomes stiff and fragile after long-term use under high-temperature conditions [2,3]. Nowadays, the natural rubber glove (NRG) is important in many areas, such as health applications. After use, NRG becomes waste rubber that is not degradable, because the polyisoprene chains of NR are crosslinked either by a sulphur or a peroxide system [4,5]. With the heavy use of NRG, in parallel with the spread of disease and population growth, the problem of NRG waste has been growing. In addition to NRG waste from hospitals and other toxic hazardous waste, more than 11% of NRG products are rejected or defective during latex rubber processing. These huge quantities of waste rubber might cause serious environmental problems and great health risks to the public [6]. To solve this problem, much research has investigated the reuse of NRG waste in several applications, such as mats and large tires [6]. Recently, rNR-G has started to be used blended with virgin synthetic rubber and NR [7,8]. In the present work, rNR-G waste was blended with CR to study the effects of filler size and loading on the cure characteristics and physical properties.
Materials
The materials used in this research are listed in Table 1, along with their functions and suppliers. The rNR-G, obtained from Top Glove Group Corp. Malaysia, requires further processing before it can be used as a compounding material. Table 1. List of materials and chemicals used in this study.
Materials | Function | Supplier
Chloroprene rubber | Elastomer or rubber matrix | RRIM Guthrie Group Sdn. Bhd.
Zinc oxide | Activator | Anchor Chemical Co. (M) Ltd.
rNR-G | Raw materials for fillers | Top Glove Corp. Malaysia
Formulation and Testing of CR/rNR-G Compounds
rNR-G waste was cut and masticated using a two-roll mill (X(S)K-160x320) at room temperature for 30 minutes to break down the crosslinks of the latex gloves so that the material could be ground. After the grinding process, the rNR-G was sieved to obtain filler with a particle size of 300-600 µm (fine size), and the rest was considered coarse size filler. After preparing the glove waste in suitable sizes, the compounding was done as shown in Table 2.
Characterizations and Testing of CR/rNR-G Compounds
After preparing all the specimens, the cure characteristics were determined using a Monsanto MDR 2000 rheometer. From the cure test, the minimum torque (ML), maximum torque (MH), cure time (t90), and scorch time (t2) were studied. In addition, tensile tests were carried out according to the ASTM D412 method using an Instron 5582 universal testing machine; tensile strength, elongation at break and the modulus at 100% elongation were investigated. Scorch time (t2), cure time (t90), minimum torque (ML) and maximum torque (MH) of the recycled natural rubber latex glove filled chloroprene rubber were studied as cure characteristics.
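Although the study reports t2 and t90 directly, a common derived measure of vulcanization speed is the cure rate index, CRI = 100/(t90 − t2); the sketch below (Python; not part of the original study, offered only as an illustration) computes it from the rheometer times.

```python
def cure_rate_index(t90_min, t2_min):
    """Cure rate index (CRI), a standard measure of vulcanization speed:
    CRI = 100 / (t90 - t2), with both times in minutes."""
    if t90_min <= t2_min:
        raise ValueError("t90 must exceed the scorch time t2")
    return 100.0 / (t90_min - t2_min)
```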
Cure Characteristics
3.1.1 Scorch time (t2). Figure 1 shows the scorch time (t2) for CR filled with different loadings of rNR-G of different filler sizes. From the graph, the scorch time showed an increasing trend as the loading of rNR-G increased. The increase in scorch time can be attributed to the filler, which hindered the crosslinking process. However, the t2 of both coarse and fine size fillers at 5 phr of filler loading was lower compared to CR without the addition of rNR-G, but after the incorporation of rNR-G at 10 phr and above, the t2 at each respective filler loading was higher than that of unfilled CR. This indicates that 5 phr of rNR-G did not suffice to delay the t2 of the CR, whereas rNR-G loadings of 10 phr and above were more than sufficient to delay the t2 of CR. Additionally, the coarse size filler showed a slower t2 compared to the fine size filler. The coarse size filler has a smaller surface area and a larger diameter than the fine size filler; the bigger particles get in the way of the formation of crosslinks, delaying the t2 of the coarse-filled compounds, so the fine size filler has the lower t2 [9].
Cure time (t90)
The graph illustrated in Figure 2 shows the cure time of CR with different levels of filler loading and different filler sizes. The results revealed that the cure time increased as the filler loading increased; it is evident that the presence of filler within the matrix inhibits the crosslinking process, thereby delaying the cure time as the level of filler within the matrix increases. Based on Figure 2, the cure time at 5 phr of rNR-G loading was the lowest, yet at 10 phr loading and above, the cure times at the respective filler loadings were higher than for CR without any filler present. The cure time at 5 phr of rNR-G loading was the fastest owing to the decrease of matrix as the filler loading increased. Even though rNR-G was incorporated, the results suggest that at 5 phr of filler loading, that level was not ample enough to influence the cure time of CR. Inversely, at 10 phr and above, the cure times of the filled CR were higher compared to the unfilled CR, because at 10 phr the level of rNR-G was enough to cause an influence (Banerjee, 2012). Thus, further increases of filler loading beyond 10 phr increase the cure time even more. In addition, although the cure time increased as the filler loading increased, the cure time of the fine size filler was relatively lower compared to that of the coarse size filler. As described in the previous section, the size of filler plays a role in the onset of the crosslinking process: coarse size filler disrupts the crosslinking process by getting in the way of the formation of the crosslink bonds, thus delaying the cure time. Then again, fine filler will also disrupt the crosslinking process, but to a lesser extent given its smaller particle size.
Minimum torque (ML)
The graph in Figure 3 illustrates the minimum torque of series 1 at different rNR-G loadings and filler sizes. The ML showed an increasing trend as the rNR-G loading increased, showing that the filler raised the torque, and hence the viscosity, of the CR. However, at 5 phr the ML of both the coarse and the fine filler was lower than that of CR without rNR-G, whereas at 10 phr and above the ML at each respective loading was higher than the ML of CR with no rNR-G. This indicates that 5 phr of rNR-G was not sufficient to increase the viscosity of the CR, while loadings of 10 phr and above were more than sufficient [10].

Maximum torque (MH)

Figure 4 shows the maximum torque of CR filled with different loadings of rNR-G of different filler sizes. The MH showed an increasing trend as the rNR-G loading increased, which can be attributed to the filler raising the overall viscosity of the compounds; a higher MH indicates greater difficulty in processing the compound. Therefore, it is clear that the increase in rNR-G levels raised the viscosity of the CR, which increased the maximum torque. Based on Figure 4, the MH at 5 phr of rNR-G loading was the lowest for both the coarse and the fine filler, yet at 10 phr and above the MH at each respective loading was higher than that of CR without any filler. The low MH at 5 phr reflects the reduced amount of the CR matrix; even with rNR-G incorporated, that loading was not ample enough to influence the viscosity of CR. Conversely, at 10 phr and above the rNR-G level was enough to exert an influence, and further increases in filler loading raised the MH even more.
Conclusion
According to the results obtained, it can be concluded that the addition of recycled natural rubber glove (rNR-G) filler raised the cure characteristic values of CR. Increasing the rNR-G loading lengthened the processing time and reduced the ease of processing: the minimum and maximum torques increased with filler loading, and the scorch time and cure time increased as well. Moreover, it was determined that the swelling percentage of CR increased with the incorporation of more rNR-G filler, suggesting that the rNR-G filler reduced the crosslink density of CR.
Modeling of the stress-strain state for a disk and a shaft from different Ni-based alloys during welding under shear pressing
The stress-strain state in a disk and a shaft made of different heat-resistant Ni-based alloys during pressure welding is analyzed by the finite element method. The disk and the shaft have a complementary cone shape, and during welding the shaft is inserted into the disk or inserted and simultaneously rotated. In the first case, a single-component shear deformation is realized, while in the second case one has a two-component shear deformation in the welding zone. A cone angle of 5° can be recommended, because in this case the plastic deformation in the welding zone is about 3%, which is typically sufficient for obtaining a reliable joint.
Introduction
Improving the manufacturing technology of rotor structures designed for the 5th generation gas turbine engine is a very important problem [1][2][3][4][5][6][7][8][9][10][11][12][13][14][15][16]. The use of bimetallic parts, such as blisk, disk-disk and disk-shaft, made of heat-resistant nickel and titanium alloys, reduces the weight of the engine. Of particular interest for the engine is a disk fixed on the turbine shaft. When creating such units, it is necessary to ensure a reliable connection between the welded surfaces, since the presence of defects in the weld of such structures can lead not only to the destruction of the part, but also to the destruction of the entire engine. Pressure welding (PW) is widely used to obtain permanent joints in such structures. In the process of PW, a solid-phase joining takes place under the relatively small applied pressure between the parts being welded, which provides plastic deformation of the contact surfaces and stimulates diffusion processes. This approach has several advantages, e.g., it is easy to execute, it allows welding of similar materials and materials with very different properties, it allows controlling the structure and properties of the joint. However, this method should be applied with caution, because in some cases the joining can be accompanied by significant pore formation, which leads to heterogeneity of the welded joint and to the deterioration of the quality of welding. To avoid such drawbacks, a combined PW can be used, in which together with pressing, the shear deformation is present in the welding zone. The application of shear during welding helps to increase plastic deformation in the layer close to the surfaces to be welded and does not affect strongly the rest of the material, which is very important for such parts of the aircraft engine as disks and shafts.
In this paper, we consider a method for producing a welded joint by PW, carried out by inserting and/or rotating a shaft in contact with a disk. With the help of finite element simulation, the stress-strain state in the disk and the shaft made of the heat-resistant nickel-base alloys EP975 and EK79 is investigated with the aim of estimating the magnitude of shear deformation at the contact surface. It is well known that shear deformation in the welding zone improves the quality of welding.
Finite element model
Computer simulation is performed using the DEFORM-2D software package. Pressure welding of the shaft and the disk is modeled in the axisymmetric setting. A schematic view of the disk and the shaft is presented in figure 1a. Different forms of the shaft and the disk are considered. The shaft is a cylinder with a diameter of 12 mm and a height of 12 mm, coupled with a truncated cone with a height of 20 mm having a cone angle of 0°, 0.5° or 5°. In the case of a cone angle of 0°, the shaft has a simple cylindrical shape. The hole in the disk has a narrowing with the same cone angle as that of the shaft, so that the shaft and the disk have complementary shapes. The disk height is 14 mm and the diameter is 28 mm. The finite element model is shown in figure 1b. When conducting the computer simulation, the deforming tool (the upper moving traverse) and the supporting body (the lower fixed traverse) have the properties of an absolutely rigid body. The deformable bodies (the disk and the shaft) are assumed to be elastic-plastic. The disk and the shaft are made of the ultrafine-grained superalloys EP975 and EK79, respectively. The material properties were determined from experimental stress-strain curves obtained under uniaxial compression of the alloys at the welding temperature. Welding was carried out in isothermal conditions at a temperature of 1100 °C.
Deformable bodies were divided into twenty-node isoparametric finite elements with a quadratic approximation of the displacement fields. The number of elements is 2500 for the disk and 1700 for the shaft. For welding, the disk was mounted on a fixed traverse and the conical part of the shaft was placed into the disk hole. Contact conditions at the boundaries of the traverse-shaft and disk-shaft pairs are described by the Siebel friction model. The value of the friction coefficient is assumed to be 0.3. The deformation of the shaft and the disk in the course of PW, carried out during the insertion and/or rotation of the shaft in contact with the disk, was simulated. Two different loading schemes are considered: Scheme 1: the shaft is inserted into the disk by the motion of the upper traverse down by 3 mm with a strain rate of 10⁻³ s⁻¹; Scheme 2: a combination of the insertion of the shaft into the disk and simultaneous rotation of the shaft about the z axis. The shaft is inserted into the disk by 3 mm with a strain rate of 10⁻³ s⁻¹ and rotates at a speed of 3 rpm.
Simulation results
When analyzing the results of the computer simulation, the distribution of the shear deformation is considered, since the normal deformation components have little effect on the quality of the welded joint. The simulations show that, in welding according to Scheme 1, a single component of shear deformation prevails, namely ε_rz, while ε_rθ is much smaller. In this case one can speak of a one-component shear deformation. On the other hand, welding according to Scheme 2 results in both shear deformation components, ε_rz and ε_rθ, being comparable, i.e., a two-component shear deformation takes place in this case.
The distribution of ε_rz in the disk and the shaft after welding according to Scheme 1 is shown in figure 2 for the cone angles a) 0°, b) 0.5°, and c) 5°. It can be seen that the shear deformations in the contact zone increase with an increase in the cone angle. In the cases shown in b) and c) the shear deformation is distributed more evenly, while in a) it is concentrated close to the upper and lower surfaces of the disk. The maximum values of ε_rz increase with increasing cone angle: in b) they are 10 times greater than in a), and in c) 100 times greater than in a).
The distributions of ε_rz and ε_rθ in the disk and the shaft at the end of welding according to Scheme 2 are presented in figures 3 and 4, respectively. Both components of shear deformation are of the same order of magnitude in this case. The component ε_rz is distributed similarly to the case of Scheme 1. The distribution of ε_rθ is somewhat different from the distribution of ε_rz, but in both cases the shear deformation is localized in the welding zone. As the cone angle increases, the maximum values of shear deformation also increase, similarly to the case of Scheme 1. In Scheme 2, the area of maximum values is distributed more evenly with increasing cone angle.
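As a hypothetical illustration of the one- versus two-component distinction (our own sketch, not part of the DEFORM-2D run itself), the two shear components extracted at contact-surface nodes can be combined into a resultant shear strain, with their ratio indicating the loading scheme; the nodal values below are placeholders.

```python
import numpy as np

# Placeholder nodal values at the contact surface (illustrative only).
eps_rz = np.array([0.030, 0.028, 0.025])   # insertion-induced shear component
eps_rt = np.array([0.027, 0.024, 0.022])   # rotation-induced shear (Scheme 2 only)

gamma = np.sqrt(eps_rz**2 + eps_rt**2)     # resultant shear strain magnitude
ratio = np.abs(eps_rt) / np.abs(eps_rz)    # ~0: one-component; ~1: two-component
print(gamma.round(4), ratio.round(2))
```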
Conclusions
In our work we came to the following conclusions: 1) To create a permanent connection between the disk and the shaft, it is preferable to use pressure welding, which is carried out with a combination of insertion and rotation of the shaft in the disk. In this case, a two-component shear deformation is provided, which, as is well known, improves the quality of the welded joint.
2) To improve the quality of the welded joint, the shaft and the disk should have a complementary cone shape. The cone angle of 0.5° seems to be too small because it produces only 0.3% of shear strain. An angle of about 5° can be recommended because in this case a plastic shear deformation of about 3% is achieved, which is typically sufficient for obtaining a reliable joint.
Simple Self-Assembly Strategy of Nanospheres on 3D Substrate and Its Application for Enhanced Textured Silicon Solar Cell
Nanomaterials and nanostructures provide new opportunities to achieve high-performance optical and optoelectronic devices. Three-dimensional (3D) surfaces commonly exist in those devices (such as light-trapping structures or intrinsic grains), and this poses requirements for nanoscale control over nanostructures on 3D substrates. In this paper, a simple self-assembly strategy of nanospheres for 3D substrates is demonstrated, featuring controllable density (from sparse to close-packed) and controllable layer number (from a monolayer to multi-layers). Taking the assembly of wavelength-scale SiO2 nanospheres as an example, it was found that the textured 3D substrate promotes close-packing of SiO2 spheres compared to the planar substrate. The distribution density and the number of layers of the SiO2 coating can be well controlled by tuning the assembly time and repeating the assembly process. With such a versatile strategy, the enhancement effects of the SiO2 coating on textured silicon solar cells were systematically examined by varying the assembly conditions. It was found that the close-packed SiO2 monolayer yielded a maximum relative efficiency enhancement of 9.35%. Combining simulation and macro/micro optical measurements, we attribute the enhancement to the nanosphere-induced concentration and anti-reflection of incident light. The proposed self-assembly strategy provides a facile and cost-effective approach for engineering nanomaterials at 3D interfaces.
Introduction
The development of nanomaterials and nanostructures provides new opportunities for performance boosting of optical and optoelectronic devices [1][2][3][4][5]. For example, dielectric nanostructures are used to enhance the transmission and brightness of different transparent windows or display screens [6][7][8][9], and plasmonic or dielectric nanocoatings are widely proposed and applied to enhance the efficiency of photovoltaic devices or sensitivity of photodetectors [1,10]. In many optoelectronic devices, there are complex 3D surfaces [11,12]. For example, micron and nanoscale heterogeneities exist in various photovoltaic devices [13], such as the intrinsic grains in polycrystalline solar cells [14][15][16] and textured surfaces in the commercial silicon solar cells for light-trapping [17].
In this context, there has been a drive for designing and fabricating nanostructures on 3D surfaces with the desired control. On planar substrates or interfaces, both top-down and bottom-up self-assembly strategies enable precise control over distribution density and geometrical shapes [8,[18][19][20][21][22][23]. Although substantial advances have been made, control over nanofabrication on complex 3D surfaces is still very challenging [7]. Top-down strategies, such as electron beam lithography and focused ion beam, show some difficulties in the fabrication on 3D substrates, limited by the inherent process characteristics of photoresist coating or beam focus [24]. Self-assembly methods, using colloidal nanoparticles as building blocks, show great potential in assembling nanostructures on 3D substrates [18]. Previous reports have demonstrated self-assembly results on 3D fabrics and textured wafers [25][26][27]. However, the nanostructures usually accumulate and form multi-layers at the bottom of valleys [25,26], while precise control over the distribution density and layer numbers on the 3D substrate has remained elusive.
We emphasize the importance of controllable nanosphere assembly for understanding the mechanism of nanosphere-enhanced textured photovoltaics. For photovoltaic device applications, wavelength-scale dielectric nanospheres have attracted much attention. Nanosphere arrays support Whispering Gallery Modes (WGM), which can be coupled with the guided modes in photovoltaic thin films to improve the light absorption of the devices [10,28]. Some studies have shown that dielectric microspheres can also enhance planar bulk-material cells [29,30], mainly due to the anti-reflection effect of close-packed SiO 2 spheres. Besides, SiO 2 arrays enable colorful photovoltaic devices without sacrificing device performance [31]. In 2020, Bek et al. proposed that in planar cells, coexisting anti-reflection and light concentration effects are the reason for the nanosphere enhancement [32]: the former contributes to an enhanced photocurrent, and the latter contributes to an enhanced fill factor of the devices. When it comes to textured solar cells, it is highly desirable to control the nanosphere distribution density with specific layer numbers to explore the optimized parameters and discern the underlying mechanism.
In this work, we propose a simple assembly strategy on 3D substrates featuring controllable distribution density and coating layers. The influence of gravitational sedimentation on assembly behavior was analyzed by comparing substrates with different surface features (planar and textured) and under different orientations (upward and inverted) in the SiO 2 colloid. The effect of SiO 2 microsphere coating on the device performance under different assembly conditions was further explored. For the first time, the nanosphere anti-reflection and light concentration mechanisms were analyzed on the textured solar cells with numerical simulation, macro and micro-region optical characterization.
Synthesis of SiO 2 Nanosphere
The ~550 nm diameter SiO 2 nanospheres were synthesized according to previous work [33]. Briefly, 25 mL ultrapure water, 62 mL ethanol, and 9 mL NH 4 OH were mixed and stirred at a moderate speed for 30 min. Then, 4.5 mL TEOS was added into the mixed solution and reacted for 3 h at 30 °C. The resulting products were centrifuged three times using absolute ethanol and dried in an oven at 200 °C for an hour to obtain SiO 2 nanosphere powders for future use. For the sake of a fast self-assembly process, the SiO 2 nanospheres were redispersed in absolute ethanol at a concentration of 50 mg/mL; we call this the SiO 2 assembly solution.
Self-Assembly Process of SiO 2 Nanosphere
The four-step self-assembly process is outlined schematically in Figure 1. We first immersed the substrate into the poly-L-lysine solution (pH value about 6.2) for 4 min. The ionic form of the ε-amino group in the poly-L-lysine solution depends primarily on the pH [34,35]. In an acid environment, the ε-amino group is in a positively charged state (-NH3+), widely used in biology and chemistry [34,36]. In such a dip process, the absorption of poly-L-lysine on the textured silicon substrate yields a positively charged surface. Then we used ultra-pure water to remove the redundant poly-L-lysine. Following this, the substrates were dipped into the SiO 2 assembly solution for different time durations. It has been reported that solution-synthesized SiO 2 nanospheres have an isoelectric point of IEP = 2.0 [37]. At a pH value higher than 2, the SiO 2 nanospheres are abundant with OH− groups and negatively charged. The negatively charged SiO 2 surface promotes the electrostatic attraction to the positive textured surface while resisting the absorption of multi-layer SiO 2 . At last, the silicon solar cell was rinsed with ultra-pure water a second time to remove the unabsorbed SiO 2 particles, which are not adhesive to the devices. It should be noticed that if the first two steps are not adopted, the SiO 2 nanospheres cannot be maintained on the surface after washing with ultra-pure water. When the assembly process is repeated, another layer of poly-L-lysine will be absorbed upon the top surface of the SiO 2 nanocoating assembled on the textured surface, forming a positive surface and enabling a layer-by-layer assembly process.
Characterization of Textured Silicon Solar Cells
Brightfield reflection and darkfield scattering optical images were taken with a Nikon (Ti-U) inverted microscope under the illumination of a halogen lamp. The grayscale transformation and statistical analysis of the optical micro-region images were conducted using a MATLAB code. A long working distance 100×/0.80 NA objective lens (Nikon Plan Fluor ELWD 100×) was used to collect brightfield and darkfield images. The reflection spectra were taken with a fiber spectrometer (Nova, Ideaoptics Co., Ltd., Shanghai, China) coupled with an integrating sphere (IS-30-6-R). Electrical properties of the solar cells were measured with a high-precision source-meter-unit (SMU, Keithley 2651) under the illumination of a 1000 W xenon lamp equipped with an AM1.5 filter (Crowntech Inc., Indianapolis, IN, USA).
Modeling of Silicon Solar Cell with SiO 2 Nanospheres
The three-dimensional finite element method (COMSOL software) was used to simulate the light-concentrating performance of the silica nanospheres, and the simulation process is similar to previously reported work [32]. In our simulation model, a single silica nanosphere or a vertically aligned dimer is placed on the surface of the silicon substrate. The simulation width of the device is 2000 nm, the size of the nanoparticle is 550 nm, and the thickness of the air layer is 1500 nm. A linearly polarized plane wave is used as the incident excitation, and its amplitude is 1 V/m. In addition, to eliminate the unwanted reflection of the interface, the boundary of the device region was delimited by a perfectly matched layer (PML). The optical constants of SiO 2 and Si are obtained by linear interpolation from the optical handbook [38].
Results and Discussion
Self-Assembled SiO 2 Nanocoating on Planar and Textured 3D Substrates

In order to fully demonstrate the assembly characteristics of the SiO 2 nanosphere coating on the 3D textured surfaces, we compared the assembly results on the flat surface and on the textured Si wafer surface for the same assembly time. As shown in Figure 2a, on the surface of the flat substrate, the assembly of SiO 2 nanospheres has two characteristics. Firstly, some of the SiO 2 nanospheres become aggregated during the assembly process. Similar aggregates, also found in previously reported gold or dielectric nanocoatings via electrostatic assembly [39,40], can be attributed to the capillary force [41] (originating from water bridges formed between particles when the substrate is taken out of the solution). Secondly, the covered area of nanospheres is estimated at less than 35% according to SEM analysis (using the ImageJ software package). This situation has also been widely studied in previous electrostatic assemblies [39,42]. In the absence of external force, the assembly process of spherical nanoparticles usually conforms to the random sequential adsorption (RSA) model [43], where particles are assumed to be fixed on the substrate and cannot move after adsorption. Under this model, the surface coverage of particles cannot exceed the jamming limit (54.7%) [39]. Furthermore, the repulsion between charged particles will further reduce the surface coverage. Therefore, in a self-assembled coating on a planar substrate without external forces, the surface coverage of nanospheres usually does not exceed 40% [39].
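A minimal Monte Carlo sketch of the RSA picture invoked above (our own illustration, ignoring edge effects) shows why a planar assembly without external forces saturates well below close packing:

```python
import numpy as np

rng = np.random.default_rng(0)
d, n_attempts = 0.05, 20_000   # disc diameter (unit-square units), trial count
placed = []

for _ in range(n_attempts):
    p = rng.random(2)          # random landing position on the unit square
    # Stick only if no overlap with any previously adsorbed disc.
    if not placed or np.min(np.linalg.norm(np.asarray(placed) - p, axis=1)) >= d:
        placed.append(p)

coverage = len(placed) * np.pi * (d / 2) ** 2
print(f"simulated coverage ~ {coverage:.1%}")   # saturates near the ~54.7% jamming limit
```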
Under the same assembly time, we observed that the particle coverage of silica nanospheres on the 3D substrate is significantly larger than that on the two-dimensional substrate, as shown in Figure 2b. Except for the sharp features, a single layer of densely arranged SiO 2 nanospheres is almost formed on the entire substrate. Note that this assembly result is significantly different from the multi-layer accumulation located only at the bottom of the valley obtained by spin-coating or dip-coating [25,26]. In addition, our dense arrangement result is similar to the coating effect formed by the Langmuir-Blodgett liquid-liquid assembly with an external force and the following transfer method [31]. However, our assembly method is simple and does not require the introduction of other external forces. We infer that the dense nanosphere assembly originates from gravitational sedimentation [44]. For ~550 nm particles, the gravitational potential energy of the particle is comparable to the thermal energy accounting for thermal motion [45]. Therefore, the sedimentation of nanospheres in micron-scale valleys may lead to dense assembly behavior. Previously, the literature also proposed that even if there is repulsion between particles, nanoparticles (100-1000 nm) will still undergo sedimentation in the solvent [45].
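A back-of-the-envelope estimate (ours, with assumed textbook densities for SiO2 and ethanol) supports this picture: the buoyant weight of a ~550 nm sphere corresponds to a sedimentation length of only a few microns, comparable to the texture scale, so gravity matters inside the valleys even though thermal motion is not negligible.

```python
import numpy as np

k_B, T, g = 1.380649e-23, 300.0, 9.81
d = 550e-9                               # sphere diameter, m
rho_sio2, rho_etoh = 2000.0, 789.0       # assumed densities, kg/m^3

V = np.pi * d**3 / 6                     # sphere volume
F = (rho_sio2 - rho_etoh) * V * g        # buoyant weight in ethanol, N
print(f"E(one diameter)/kT = {F * d / (k_B * T):.2f}")            # ~0.14
print(f"sedimentation length kT/F = {k_B * T / F * 1e6:.1f} um")  # ~4 um
```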
To verify the influence of gravitational sedimentation in the assembly process, the textured substrates were set at different orientations in the SiO 2 colloid. As shown in Figure 3, the assembly procedures with the substrate upward (Figure 3a) and inverted (Figure 3d) were performed under the same assembly time. The resultant assembly characteristics changed dramatically with the orientation of the substrate. A relatively dense assembly is achieved by an upward substrate (shown in Figure 3b,c). However, when the substrate was placed invertedly in the SiO 2 colloid, the resultant surface coverage of SiO 2 nanospheres was obviously reduced (shown in Figure 3d,e). The assembly time was increased to 30 min for further investigation of the assembly characteristics. As shown in Figure S1, for substrates with different assembly orientations, the surface coverages of SiO 2 nanospheres were similar to those at 10 min assembly time. It can be concluded that gravitational sedimentation contributes to a dense-packed nanocoating on the textured solar cell when the substrate is placed upward. Notably, gravitational sedimentation is a natural phenomenon, and therefore the dense-packed assembly can be achieved without other external forces such as electric field force or pressure [8,19,31]. The assembly behavior of nanoparticles on 3D texturing substrates (textured Si solar cells) under different timelines (10 s, 30 s, 10 min) and different assembly rounds was further investigated. The samples for different assembly times were named sample 1 (S1), sample 2 (S2), and sample 3 (S3) for 10 s, 30 s, and 10 min, respectively. Figure 4a-i shows one round of self-assembly SEM images and schematic diagrams for different assembly times. It can be found that, in the initial stage of assembly (10 s), the particles first deposit at the valley bottom, which indicates that the gravity of the nanoparticles is greater than the particle-substrate electrostatic attraction in this stage. As the assembly process proceeds, the silica nanospheres gradually assemble along the sidewalls of the valley. Subsequently, we repeated the assembling process twice, and the SEM images are shown in Figure 4j,k. As the schematic diagram in Figure 4l shows, multiple layers appeared on the surface of the textured substrate, especially inside the valley. The sample for the two-round assembly was named sample 4 (S4).
The Performance Analysis of Textured Si Solar Cells with SiO 2 Nanosphere Coatings
We compared the effects of different nanosphere coatings (different distribution densities and numbers of layers) on the electrical performance of the textured Si solar cells to obtain optimized parameters, as shown in Table 1 and Figure 5. Figure 5a shows a structural diagram of the textured Si solar cell with nanocoatings. The polycrystalline silicon solar cell thickness is ~180 µm, and the junction depth is about 500 nm. The SiO 2 nanosphere coating is assembled on the textured upper surface of the device. As shown in Table 1, we tested the electrical properties of the devices under a solar simulator with an irradiance of 1000 W/m². In the case of a single-layer SiO 2 coating, the relative enhancement of device efficiency gradually increases with the distribution density. When the nanospheres are close-packed, the maximum efficiency enhancement reaches 9.35%. With a two-layer SiO 2 coating (S4), the improvement is lower than that of the single-layer close-packed situation. So, we achieve an optimized self-assembly distribution for the SiO 2 coating on textured Si solar cells. Further optimizations in particle size or dielectric constant of the nanospheres may yield a higher efficiency enhancement, as discussed in planar devices [10].

Table 1. Relative enhancement of electrical properties of samples before and after SiO 2 nanosphere assembly.
Further, we discuss the possible enhancement mechanisms by analyzing the changing trends of the electrical properties for the different nanocoatings, as shown in Figure 5b,c. We first noticed that the relative enhancement of the short-circuit current density (Jsc) follows the trend of the enhancement of the solar cell efficiency, and the maximum relative enhancement is achieved in the single-layer close-packed situation. More importantly, it is noteworthy that the efficiency enhancement is more significant than the Jsc enhancement. The results differ from previous results on thin-film solar cells, where the efficiency enhancements were almost the same as the Jsc enhancement [28]. Therefore, we further compared the open-circuit voltage (Voc) and fill factor of the devices, shown in Figure 5c. It is also found that the Voc has a slight increase after coating with SiO 2 nanospheres due to the logarithmic relation between Voc and Jsc.
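The magnitude of that Voc increase is consistent with the ideal-diode logarithmic relation; a rough estimate (our own, assuming an ideality factor n = 1, T = 300 K and a typical multicrystalline-Si Voc of ~0.6 V, values not taken from the measurements) gives only a couple of millivolts for the measured Jsc gain:

```python
import numpy as np

kT_over_q = 0.02585      # thermal voltage at 300 K, V
n_ideality = 1.0         # assumed diode ideality factor
jsc_gain = 0.0676        # measured 6.76% relative Jsc gain (close-packed case)

# Voc ~ (n*kT/q) * ln(Jsc/J0)  =>  dVoc = (n*kT/q) * ln(1 + dJsc/Jsc)
dVoc = n_ideality * kT_over_q * np.log(1.0 + jsc_gain)
print(f"expected dVoc ~ {dVoc * 1e3:.1f} mV (~{dVoc / 0.6:.2%} of an assumed 0.6 V Voc)")
```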
Next, we focused on the interesting changing trend of the fill factor after adding nanosphere coatings. When the density increased under single-layer conditions (S1, S2, and S3), the fill factor of the device gradually increased, similarly to the changing trend of the current. However, although the Jsc enhancement of S4 (with two layers of SiO 2 coating) is very close to that of S2 (single-layer, non-close-packed), their fill factor enhancements are significantly different (0.46% compared to 1.79%). This phenomenon indicates that multiple mechanisms account for the enhancement of the Jsc and fill factor of the textured solar cells. In the applications of dielectric nanospheres for enhanced photovoltaics, the Jsc enhancement relates to more light being coupled into the photovoltaic device, increasing the generation rate of the photo-generated carriers [28][29][30]. Moreover, the increase in the fill factor was recently discovered and elucidated in planar solar cells [32]; it was attributed to the nanosphere concentrating effects. Here, we also found direct evidence for fill factor enhancement on textured solar cells, indicating that the nanosphere's light concentration can also enhance the efficiency of textured solar cells. Our results suggest that the nanosphere's concentrating effect enables further efficiency enhancement, even under similar light absorption enhancement (i.e., similar Jsc enhancement).
In order to clarify the light concentration effects for S2 and S4, we simplify the model to a single nanostructure (a single SiO 2 sphere or two vertically arranged SiO 2 spheres forming a dimer) on a flat Si substrate. In the model, the transmission direction of the incident light is along −Z, and the polarization direction is along the x-axis. From Figure 6a, it can be observed that a single SiO 2 nanosphere focuses the incident electromagnetic energy into the solar cell region. As the incident light is transmitted into the device, the electric field gradually diverges, similar to the focusing phenomenon of a single macroscopic lens. As shown in Figure 6b, when two vertically arranged nanospheres are located on the surface of the device, they do not show apparent focusing and divergence effects. We further calculated the electric field distribution on different Z planes, as shown in Figure 6c,d. It is found that the electric field distribution caused by a single nanosphere at a distance of 15 nm from the surface in a silicon cell is similar to that of the vertically arranged dimer. By contrast, a single nanosphere causes a more intensive electric field near the junction area. Therefore, the single-layer ~550 nm SiO 2 nanospheres will concentrate more light energy into the junction region of the device, where the extraction efficiency of photo-generated carriers is largest, thereby effectively increasing the output power.
The Influence of Poly-L-Lysine on the Performance of Solar Cells
Another question we want to discuss is the influence of the poly-L-lysine on the solar cells. Both optical and electrical properties are considered. The reflection spectra and the J-V characteristics of the original S3, S3 treated with poly-L-lysine, and S3 with a 10 min SiO 2 nanosphere coating are shown in Figure 7a,b. Both the reflectance spectrum and the J-V characteristic curve of S3 overlap with those of S3 treated with poly-L-lysine. Besides, S3 with the 10 min SiO 2 nanosphere coating shows a noticeable reduction in reflection and an increased current density compared to the original S3. That is to say, the influence of the poly-L-lysine on the performance of the solar cells can be neglected.
Optical Analysis of Textured Si Solar Cells with SiO 2 Nanosphere Coatings
In order to analyze the influence of the SiO 2 nanocoating on the optical properties of the surface-textured Si solar cells, we systematically characterized the optical response of the devices.
First, we tested the macro-reflectance spectra of the uncoated and coated solar cells. As shown in Figure 8a, when there is only a single layer of SiO 2 coating on the surface of the device (one round with an assembly time of 10 s, 30 s, or 10 min), the reflectivity of the device is reduced over a broad spectrum. Moreover, the larger the particle distribution density, the more significant the decrease in reflectivity. When the assembly time is 10 min, that is, when the close-packed condition is reached, the reflectance of the device is the lowest. On the contrary, the reflection increases (especially in the 700 nm to 850 nm region) when a two-layer or multi-layer nanocoating is applied (also shown in Figure S2). The change in the reflection spectra is well matched with the variation of the photocurrent of the device. We noticed no narrow-band peaks (the signature of WGM) in the reflection spectra of any textured device: on the textured surfaces, the heights of the nanospheres differ, so the WGM effect caused by planar close-packed photonic crystals no longer exists [28][29][30].

To further analyze the reflectance variation at different wavelengths, we normalized the reflectance of the device with SiO 2 nanosphere coating to that of the uncoated device. As shown in Figure 8b, there are two dips in the normalized reflectance spectra. The first dip is in the 550-600 nm band, and the second is in the 800 to 850 nm band. Moreover, as the particle distribution density increases, both dips show a redshift. Macroscopically, the equivalent refractive index can be used for analysis: when the particle density increases, the equivalent refractive index of the nanosphere coating increases. Similar to an anti-reflective coating on the device, the increase in refractive index causes a redshift of the reflection spectrum [29].

Then, brightfield and darkfield optical microscopy imaging techniques were used to analyze the influence of the nanocoatings with different assembly times and rounds on the optical reflection and scattering of the textured solar cells. A wide-spectrum halogen tungsten lamp was used as the white light source, and a scientific-grade color camera was used to collect micro-region brightfield (Figure 9a-e) and darkfield images (Figure 9k-o). The color images were transformed into grayscale images by a grayscale transformation code (MATLAB), shown in Figure S3. The gray image divides the intensity into 256 levels. We analyzed the number of pixels at the different gray levels in the form of histograms, shown in Figure 9f-j for the brightfield images and Figure 9p-t for the darkfield images. In order to analyze the intensity and contrast of the images, we further calculated the average and standard deviation (std. in Table 2) of the intensity of the brightfield and darkfield grayscale images, as shown in Table 2. Brightfield images are usually used to reflect the specular reflectance of the devices [46,47]. By adding a single-layer SiO 2 coating, the specular reflectance of the textured solar cell is obviously reduced. When the assembly is carried out for two rounds, the average reflection intensity increases. Therefore, the one-round 10 min assembly yields the lowest average reflection intensity. Darkfield images reflect the large-angle scattering ability of the textured solar cells [48]. It can be found that the textured surface of the polycrystalline Si solar cell exhibits strong backward light scattering, especially in the sharp regions.
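The image statistics in Table 2 were computed in MATLAB; an assumed Python equivalent of the grayscale transformation and histogram step would look as follows (the file name is hypothetical):

```python
import numpy as np
from PIL import Image

def grayscale_stats(path):
    """8-bit grayscale conversion, 256-level histogram, mean and std."""
    gray = np.asarray(Image.open(path).convert("L"), dtype=float)
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    return gray.mean(), gray.std(), hist

# Hypothetical usage on one of the micrographs:
# mean_bf, std_bf, hist_bf = grayscale_stats("brightfield_S3.png")
```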
By adding SiO 2 nanospheres, the backscattering of these sharp features can be weakened. At the same time, the SiO 2 nanospheres can also enhance the scattering of the original silicon devices in areas with extremely weak scattering. The statistics show that the average backscattering of the device surface was weakest when the 10 min one-round assembly was conducted; fewer particles or multiple layers of particles increase the backscattering. The brightfield and darkfield analyses verify that the specular reflection and backscattering of the textured solar cell can be simultaneously suppressed when coated with a single-layer close-packed SiO 2 nanocoating. In addition, the standard deviations of the brightfield and darkfield scattering intensities are gradually reduced with increasing SiO 2 nanosphere distribution density. The trend still holds after a five-round assembly (Figure S4). That is to say, besides the reduced reflection and scattering, the SiO 2 nanocoating also makes the reflection and scattering intensity more uniform across the textured Si surface.
Conclusions
In this work, a simple assembly scheme on a 3D substrate with controllable distribution is demonstrated. By employing this strategy, for the first time, we realize the controllable assembly of SiO 2 nanospheres on the surface of textured silicon. The distribution density can be varied from sparse to dense, and the number of layers can be varied from single-layer to multi-layer. This assembly method allows us to study the effect of the SiO 2 nanosphere arrangement on the performance parameters of textured Si solar cells. The optimized enhancement was achieved by close-packed single-layer nanospheres: the efficiency increased by 9.35%, the current by 6.76%, and the fill factor by 1.96%.
In comparison with the planar substrate, it is found that the 3D substrate promotes dense packing due to gravitational sedimentation effects. The quantitative mechanism analysis still needs further study, including the morphology of the 3D substrate, the combined effect of gravitational sedimentation, the attractive and repulsive forces between particles, and those between the substrate and the particles. The resulting enhanced fill factor of the textured Si solar cells is evidence of the nanosphere concentration effect. Reduced surface reflection and backward scattering play a vital role in the photocurrent enhancement. Morphology engineering of the nanospheres may further increase the concentration effects. This research provides heuristic guidelines for nanofabrication and photon management on complex 3D surfaces at the nanoscale.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/nano11102581/s1, Figure S1: Self-assembly nanocoating under different substrate orientations in the SiO 2 assembly colloid with an assembly time of 30 min. (a-c) Schematic illustration and SEM graphs of the assembly with an upward substrate, (d-f) Schematic illustration and SEM graphs of the assembly with an inverted substrate. Figure S2: Optical reflection measurement of the textured silicon solar cell before (black line) and after (green line) the five-round repeated SiO 2 assembly process. Figure S3: Grayscale image transformation of the micro-region optical images. Bright field grayscale image for (a) reference solar cell, (b) after one round 10 s assembly, (c) after one round 30 s assembly, (d) after one round 10 min assembly, and (e) after two round 5 min + 5 min assembly. Dark field grayscale image for (f) reference solar cell, (g) after one round 10 s assembly, (h) after one round 30 s assembly, (i) after one round 10 min assembly, and (j) after two round 5 min + 5 min assembly. Figure S4
Long-Term Monitoring of Diel and Seasonal Rhythm of Dentex dentex at an Artificial Reef
Behavioral rhythms are a key aspect of species fitness, since they optimize the ecological activities of animals in response to a constantly changing environment. Cabled observatories enable researchers to collect long-term biological and environmental data in real time, providing relevant information on coastal fishes' ecological niches and their temporal regulation (i.e., phenology). In this framework, the platform OBSEA (an EMSO Testing-Site in the NW coastal Mediterranean) was used to monitor the 24-h and seasonal occurrence of an ecologically iconic (i.e., top-predator) coastal fish species, the common dentex (Dentex dentex). By coupling image acquisition with oceanographic and meteorological data collection at a high frequency (30 min), we compiled an 8-year time series of fish counts, which showed daytime peaks in waveform analysis. Peaks of occurrence followed the photophase limits, an indication of photoperiodic regulation of behavior. At the same time, we evidenced a seasonal trend of count variations in the form of significant major and minor increases in August and May, respectively. A progressive multiannual trend of increasing counts was also evidenced, in agreement with the NW Mediterranean expansion of the species. In GLM and GAM modeling, counts showed significant correlations not only with solar irradiance but also with water temperature and wind speed, providing hints on the species' reaction to projected climate change scenarios. Grouping behavior was reported mostly at daytime. Results are discussed assuming a possible link between count patterns and behavioral activity, which may influence video observations at different temporal scales.
INTRODUCTION

Diel (i.e., 24-h) and seasonal biological processes of species inhabiting temperate regions are synchronized to changes in photoperiod length and overall levels of environmental illumination (Foster and Kreitzman, 2010; Visser et al., 2010; Helm et al., 2013; Kronfeld-Schor et al., 2013). In marine coastal fishes, the photoperiod and light intensity are among the most important environmental variables controlling biological rhythms and overall phenology (Naylor, 2010). For example, environmental illumination determines the timing of activity of predators and prey, which perform their ecological tasks according to a trade-off between maximum opportunities for visual-based feeding and minimum mortality risk (Daan, 1981; Reebs, 2002; Brierley, 2014; Mittelbach et al., 2014). However, the exposure of marine coastal ecosystems to solar light produces a seasonal co-variation of photoperiod length with other habitat variables that also affect biological rhythms. For example, temperature can have strong effects on fishes at day-night and seasonal scales (Reebs, 2002; López-Olmeda et al., 2006). Combined photoperiod length and temperature cycles regulate physiological processes over the day-night alternation, resulting in global growth and reproduction patterns at a seasonal level (Falcón et al., 2010; Bulla et al., 2017; Cowan et al., 2017). Nevertheless, many marine species can also follow the lunar or tidal cycle to carry out their biological processes within the lunar day of 24.8-h (Naylor, 2010). In particular, tidal rhythms in marine species have been related to locomotion and reproduction (Wagner et al., 2007). The interaction of the activity rhythms of all species within a marine community may affect the estimation of its overall biodiversity. This is particularly significant for ecologically important species, such as top predators, that play a critical role in maintaining the structure and stability of communities and affect ecosystem functioning (Heithaus et al., 2008, 2012; Byrnes et al., 2021). Sampling should be repeated at a frequency sufficient to grasp the whole alternation between consecutive peaks and troughs in population abundances as a product of massive rhythmic displacement (Aguzzi et al., 2015b). Moreover, that sampling has to be repeated in association with concomitant data collection to understand how photoperiod length, light intensity and other environmental variables modulate behavioral responses (Aguzzi et al., 2020d). Similar temporal effects exist on fish grouping behavior (Rodriguez-Pinto et al., 2020), whose strategy can be related to foraging, spawning and predator evasion (Ford and Swearer, 2013; Makris et al., 2019; Lear et al., 2021). Moreover, environmental modulation of the grouping behavior of fish has been observed in association with photoperiod changes (Meager et al., 2012; Georgiadis et al., 2014). Changes in grouping behavior, driven by human activities such as fishing, could affect ecosystem functioning and have repercussions for biodiversity conservation and fisheries management strategies (Sbragaglia et al., 2021).
Data on the phenology of marine fishes, as a product of variations in local abundances, can be studied with cabled observatories thanks to their capability to perform high-frequency, continuous and long-lasting imaging along with concomitant multiparametric oceanographic data acquisition (Snelgrove et al., 2014; Danovaro et al., 2017; Aguzzi et al., 2019, 2020a; Rountree et al., 2020). In particular, cabled systems have the capacity to host many environmental sensors at high resolution, collecting many habitat variables and thus giving a better instrumental field approach to fishes' ecological niches (Hutchinson, 1957). Stand-alone or lander-based cameras are also good tools to study those aspects of species (e.g., Langlois et al., 2020; Drazen et al., 2021), but, owing to energy constraints, a limited set of environmental variables is usually acquired. Each environmental variable measured by the installed sensors (e.g., essential environmental variables) can add habitat information for each imaged species (Aguzzi et al., 2020b). Time-lapse imaging studies with that technology have been efficiently used to describe diel and seasonal patterns in fish counts as a proxy for behavioral rhythms, resulting in projected abundance changes at all depths of the continental margin (e.g., Juniper et al., 2013; Doya et al., 2014; Matabos et al., 2014, 2015; Milligan et al., 2020). In fact, in the marine three-dimensional scenario of the seabed and the water column, day-night and seasonal shifts in population bathymetric distributions, displacement ranges, and overall activity influence the number of collectable animals within our sampling windows (e.g., Scapini, 2014; Chatzievangelou et al., 2021). A variation in counted animals produces changes in the estimated abundance of a species in comparison to all the others (i.e., evenness; Aguzzi et al., 2015b). When rhythmic abundance changes are not carefully considered at sampling, their effect transcends to the computed biodiversity (Doya et al., 2017).
The use of cabled observatories for the monitoring of economically or ecologically important fish species is of relevance for the international conservation strategy agendas (Aguzzi et al., 2020c). Here, we used a coastal cabled observatory to video-monitor the 24-h and seasonal occurrence of a top predator, the common dentex (Dentex dentex; hereafter referred to as Dentex), at an artificial reef at high frequency over almost a decade. This species represents an iconic study case also for its value in commercial and recreational fisheries (Marengo et al., 2014; Sbragaglia et al., 2020), and a previous time-lapse study at the same artificial reef using the same cabled observatory suggested a relationship of fish presence with temperature, salinity and photoperiod length (Sbragaglia et al., 2019). Here, we moved a step forward and attempted to measure the association of count patterns over the 24-h with the photoperiod length, scaling this phenomenon over the whole seasonal cycle (i.e., photoperiodism). In doing so, we evaluated which of the measured oceanographic and meteorological variables most affected the reported count patterns. At the same time, we innovatively quantified the occurrence and the temporal dynamics of grouping behavior, also relating this phenomenon to the environmental variation.
The OBSEA Platform Location and Equipment
The coastal Seafloor Observatory (OBSEA 1 ) is a cabled observatory platform located 4 km off Vilanova i la Geltrú (Catalonia, Spain) at 20 m depth within the Colls i Miralpeix Natura 2000 area (Del Rio et al., 2020; Figures 1A,B). The observatory is equipped with an OPT-06 Underwater IP Camera (OpticCam), which can acquire images/footage of the surrounding environment with a resolution of 640 × 480 pixels. OBSEA is also equipped with two custom-developed white LEDs (2,900 lumen; color temperature of 2,700 K), located beside the camera (with an angle of 120°) at 1 m distance from each other, to allow image acquisition at night. A procedure controlling the ON-OFF status of the lighting immediately before and after image acquisition was implemented to limit the artificial photic footprint on species (e.g., Matabos et al., 2011). The lights were switched ON and OFF (lasting for 3 s) by a LabView application that also controlled their white balance.
Image Acquisition, Fish Counting and Environmental Data Processing
We acquired 70,254 images in a 30 min time-lapse mode, continuously over 8 years (2012-2019), preserving the same field of view, centered on the artificial reef 3.5 m in front of the OBSEA (see Figure 1C). Individuals of Dentex were manually counted in each image (Figure 1D) by a trained operator, following the procedures of Condal et al. (2012) and Aguzzi et al. (2013).
Temperature (°C), salinity (PSU), and depth (m) were measured by the CTD probe installed beside the camera. Furthermore, we collected data on air temperature (°C), wind speed (km/h), and wind direction (deg.) from the meteorological station located on the SARTI (Development Center of Remote Acquisition and Information Processing Systems) rooftop in Vilanova i la Geltrú. We also gathered solar irradiance (W/m²) and rain (mm) from the Catalan Meteorological Service station in San Pere de Ribes (6 km away from the OBSEA). Time series for all the environmental data were compiled by selecting and extracting only readings contemporary to the timing of all acquired images.
We applied range filters to the fluctuations of the environmental variables in order to remove outlier data (i.e., due to instrument malfunctioning). Guillén et al. (2018) was referenced for water temperature and salinity (i.e., ranges of 11-28 °C and 36.80-39.67 PSU, respectively), since the authors provide a 10-year time series of readings from a nearby station in Barcelona (Spain). For air temperature and wind speed (ranges of 3-31 °C and 0-60 km/h, respectively) we used an online website 2 with 30 years of modeled hourly weather data. Rain and solar irradiance were not filtered since they were downloaded from an institutional and already filtered source 3 .
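A minimal sketch of this filtering step (in Python rather than the processing chain actually used; column names are assumed) replaces out-of-range readings with NaN so that the 30 min time base stays regular:

```python
import numpy as np
import pandas as pd

# Validity ranges quoted in the text; column names are our own assumption.
RANGES = {
    "water_temp": (11.0, 28.0),    # deg C, after Guillen et al. (2018)
    "salinity":   (36.80, 39.67),  # PSU
    "air_temp":   (3.0, 31.0),     # deg C
    "wind_speed": (0.0, 60.0),     # km/h
}

def apply_range_filters(df: pd.DataFrame) -> pd.DataFrame:
    """Mask out-of-range readings as NaN instead of dropping rows."""
    out = df.copy()
    for col, (lo, hi) in RANGES.items():
        out.loc[~out[col].between(lo, hi), col] = np.nan
    return out
```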
Multivariate Analysis
Prior to the multivariate analysis, we transformed the number of individuals of Dentex per photo into a nominal presence/absence response variable (Zuur et al., 2007). In order to obtain an optimized model for fish presence/absence, we then executed a correlation analysis on the environmental variables, grouping the highly correlated ones and removing the less representative ones from further analyses (Zuur et al., 2007). We used a Generalized Linear Model (GLM) and a Generalized Additive Model (GAM) with a binomial distribution to identify which selected environmental variables most affected fish presence/absence, and compared the results between the two analyses. We tested both methods because we did not have an a priori reason for preferring a particular model.
We proceeded with the same multivariate analyses to describe the grouping behavior of Dentex. In order to do so, we first ranked the images according to the number of pictured individuals (i.e., starting from 1). The frequency of groups of individuals was compiled into a frequency histogram. Then, we transformed the number of individuals into a nominal variable for grouping or non-grouping behavior (i.e., "0" when there was only one individual in the photo, and "1" when there was more than one). Then, we added this column of values to the temporal variables (i.e., hours, months and years), to detect any temporal pattern in this social behavior and to identify which environmental variables affect it. We interpreted the data based on the common ethological use of the term (as per the general definition of Pitcher, 1983). Thus, we consider the occurrence of grouping behavior as the co-presence of fishes in the same field of view of the camera.
The correlation analysis was carried out with the library "PerformanceAnalytics", and the GLM and GAM models were fitted using the libraries "gdata" and "mgcv" of the R software.
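For readers working outside R, an equivalent binomial GLM can be sketched in Python with statsmodels; the data frame below is synthetic and the predictor names are illustrative, not those of the actual OBSEA dataset:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Synthetic stand-in for the filtered 30-min dataset (illustrative only).
rng = np.random.default_rng(1)
n = 500
df = pd.DataFrame({
    "irradiance": rng.uniform(0, 900, n),    # W/m^2
    "water_temp": rng.uniform(11, 28, n),    # deg C
    "wind_speed": rng.uniform(0, 60, n),     # km/h
})
logit = -3 + 0.004 * df["irradiance"] + 0.1 * (df["water_temp"] - 18)
df["presence"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

# Binomial GLM for presence/absence, analogous to the R fit described above.
glm = smf.glm("presence ~ irradiance + water_temp + wind_speed",
              data=df, family=sm.families.Binomial()).fit()
print(glm.summary())
```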
Time Series Analysis
In order to obtain a global overview of Dentex diel and seasonal behavioral rhythms across consecutive years, we first plotted the 8-year visual count time series, computing mean and standard error (SE) values for each month of the time series. Temporal gaps in image acquisition were evidenced by line discontinuity. Time series analysis was performed separately for the time series of fish counts and for each environmental variable relevant for the presence/absence of Dentex as evidenced by GLM and GAM modeling (see previous Section). All graphic outputs were again plotted in local time.
Waveform analysis was carried out to describe the diel and seasonal patterns of the activity rhythm of the species. Waveforms were computed as follows: the time series of visual counts was subdivided into consecutive 24-h segments of 30 min values (i.e., 48 values per segment), and the segments were averaged together over a standard 24-h period. A consensus averaged fluctuation over that standard 24-h period was thus obtained by averaging all values of the different segments at the corresponding timings. The resulting means (± SE) were plotted to identify peaks and troughs in the waveform profile. The temporal extension of the peaks (i.e., the phase) was then computed according to the Midline Estimating Statistic of Rhythm (MESOR) method (Aguzzi et al., 2006), by re-averaging all waveform averages; the resulting value was represented as a horizontal threshold line superimposed onto the waveform plot. The onset and offset timings of activity (delimiting peak intervals) were estimated by considering the first and the last waveform values above the MESOR. The peak was considered continuous if no more than 3 values occurred below the MESOR (Aguzzi et al., 2020d). All waveform analyses were carried out using the library "ggplot2" of the R software.
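The core of the waveform computation can be expressed compactly in R, as in the hedged sketch below; counts is a hypothetical data frame with a POSIXct time column and a count column n, not the actual analysis script.

```r
# Minimal waveform sketch: average the 30 min counts over a standard 24-h day.
library(dplyr)

waveform <- counts %>%
  mutate(tod = format(time, "%H:%M")) %>%   # 48 half-hour time-of-day bins
  group_by(tod) %>%
  summarise(mean_count = mean(n, na.rm = TRUE),
            se = sd(n, na.rm = TRUE) / sqrt(sum(!is.na(n))))

# MESOR: re-average the 48 waveform means, used as the peak threshold
mesor <- mean(waveform$mean_count)
```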
That waveform analysis was first conducted on the 8-year fish count time series and on solar irradiance data, to visualize the general peaks as a proxy for solar-driven, behaviorally induced changes in abundance (i.e., the photic character of the species' ecological niche). Then, the same waveform analysis was repeated for each month and each season, by joining time series counts for winter (i.e., December, January, and February), spring (i.e., March, April, and May), summer (i.e., June, July, and August), and autumn (i.e., September, October, and November), to assess peak timings and amplitude variations as markers of the photoperiodic regulation of behavioral rhythms. Moreover, to better describe the seasonal behavior of Dentex and its relation with the photoperiod, we plotted the mean values (± SE) and MESORs of the number of counts and of solar irradiance for each month of the year. Finally, the same waveform analysis was performed for those environmental variables selected by the models of presence/absence data (see previous Section).
Additionally, we assessed the average values of those environmental variables selected by GLM and GAM modeling for presence/absence data (see previous Section) at the points where the Dentex waveform crosses the MESOR (see above), in order to add information on the species' multidimensional niche (sensu Pocheville, 2015). At the same time, to better describe the environmental and temporal patterns of grouping behavior, we additionally plotted conditional densities of the environmental variables selected by the GLM and GAM models of grouping or not grouping behavioral data.
An integrated chart depicting the temporal relationships of waveform peaks (i.e., the phases) in fish counts and solar irradiance was created month by month over the whole 8 years of data acquisition (Aguzzi et al., 2015a). The values of each monthly waveform were compared with the respective MESOR through an inequality function in Excel (i.e., each 30 min waveform value automatically resulted as "major" or "minor" in relation to the MESOR). All waveform values identified as greater than the MESOR (i.e., the peak duration) were then plotted as a continuous horizontal bar for each month. That operation was repeated for solar irradiance.
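The same comparison performed in Excel can be scripted, as in the sketch below, which flags waveform values above the MESOR and applies the 3-value tolerance rule described earlier; waveform and mesor are assumed to come from the previous sketch.

```r
# Sketch of onset/offset estimation against the MESOR threshold.
above  <- waveform$mean_count > mesor
onset  <- waveform$tod[which(above)[1]]        # first value above the MESOR
offset <- waveform$tod[tail(which(above), 1)]  # last value above the MESOR

# Peak treated as continuous if no more than 3 consecutive values between
# onset and offset fall below the MESOR
runs <- rle(above[which(above)[1]:tail(which(above), 1)])
continuous_peak <- all(runs$lengths[!runs$values] <= 3)
```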
RESULTS
A total of 140,257 photos should have been obtained during the 8 years of monitoring (i.e., one per 30 min, from 2012 to 2019), but due to several malfunctioning problems creating gaps in the time series, we were able to analyze only 70,254 photos (50.09% of the total expected photos). 95.99% of the analyzed images (67,438 photos) contained no Dentex (i.e., had "zero" as count value), and only a few observations had high abundance (e.g., in only 5 photos there were more than 8 individuals; 0.18%).
We counted a total of 3,747 individuals of Dentex. Among the months with the highest numbers of individuals (Table 1) were July, with 615 counts (16.41%), and October, with 494 individuals counted (13.18%). By compiling this time series into monthly estimates (± SE), we observed a consistent seasonal trend in Dentex counts (Figure 2). A major peak occurred in spring-summer, and its height progressively increased over the consecutive years.
Multivariate Statistics
From the correlation analysis among the environmental variables, we observed a significant relationship between water and air temperature (Correlation Index = 0.69) (Figure 3). Accordingly, we removed air temperature as an explanatory variable from further analyses. We did not eliminate water temperature because it was considered a more biologically relevant variable for Dentex.
We observed that in both the GLM and GAM models on presence/absence data all the variables were significant at the 5% level, except for salinity, wind direction, and rain (Supplementary Table 1). Both approaches gave a model in which water temperature, wind speed, and solar irradiance were selected (Table 2 and Supplementary Table 2). We therefore selected these variables for the subsequent time series analysis.
To study the grouping behavior of Dentex, we computed its percentage on the total number of images where it was present (2,816 photos; Figure 4). Mostly, it was observed as solitary (2,231 photos; 79.23%), but more rarely it appeared in pairs or in larger groups. In particular, in 395 photos (14.03%) it occurred in pairs, and in 169 photos (6%) it occurred in groups of 3-5 individuals. Finally, it was observed in groups of 6-8 or more individuals (i.e., 16 and 5 photos, respectively, equal to 0.57 and 0.18%). The maximum number of individuals in a single photo was detected on 27th July 2019 at 8:00, with a group of 11 individuals.

FIGURE 3 | Correlation chart among the environmental variables. The name of each variable is shown on the diagonal. Below the diagonal, the bivariate scatter plots with the fitted line in red are displayed. Above the diagonal, the value of the correlation is shown together with the significance level as stars: p-values of 0, 0.001, 0.01, 0.05, 0.1, and 1 correspond respectively to "***", "**", "*", ".", and " ".
Afterward, we carried out a correlation analysis between environmental and temporal variables, observing a significant relationship between water and air temperature (Correlation Index = 0.69) (Supplementary Figure 1). As before, we removed air temperature as an explanatory variable.
Then, we performed GLM and GAM models on the grouping or not grouping behavioral data (respectively, when Dentex was observed alone or in groups of two or more individuals) with the selected environmental and temporal variables. In both models all the variables were significant at the 5% level, except for wind speed and direction, solar irradiance, and rain (Supplementary Table 3). We observed in both approaches that water temperature, solar irradiance, hours, months, and years were selected as relevant variables for the grouping behavior of Dentex (Table 3 and Supplementary Table 4). It has to be noted that solar irradiance and month were slightly less significant than the other variables in terms of p-values (Pr(>|z|) = 1.27 × 10^−2 and Pr(>|z|) = 3.82 × 10^−3, respectively). We decided to report only GAM over GLM results for both presence/absence and grouping or not grouping behavioral data, even though the two methods gave the same outputs, because GAMs can be considered an extension of GLMs.
Diel and Seasonal Fish Count Patterns
The waveform analysis on the 8-year time series showed the occurrence of a robust diurnal peak, indicating an increase in occurrence during the light hours (Figure 5). The waveform analysis repeated at the seasonal level (Figure 6) evidenced the photoperiodic regulation of occurrence, with transient uni- and bimodality in count peaks: crepuscular and diurnal peaks during short and long photophases, respectively (i.e., autumn-winter versus spring-summer). Peak temporal limits also followed irradiance temporal limits.
The plot of mean monthly Dentex counts vs. mean solar irradiance, depicting the overall seasonal fluctuation in local abundance, evidenced a general increase from winter to summer, with two peaks: a major one in August and a minor one in May (Figure 7). The increase in solar irradiance follows a similar pattern but with a peak in June (Figure 7). Accordingly, the monthly waveform MESOR values for Dentex (see Table 1) increase from February, when this average value is at its minimum, to August, when it is at its maximum (i.e., 0.007 and 0.122 individuals per photo, respectively). At the same time, the MESOR values for solar irradiance increase from a minimum in December to a maximum in June (i.e., from 76.23 to 297.95 W/m², respectively) (Table 4).
In the integrated chart comparing Dentex waveform peaks (i.e., mean values higher than the MESOR, shown as a continuous horizontal band) (Figure 8 and Supplementary Figures 2, 3), we could observe count increases from December to June, with onset and offset timings that shift from 8:00 and 16:30 to 5:00 and 19:00, respectively (see Table 1). For solar irradiance, peak amplitude also varied from December to May, with onset and offset at 8:00/15:00 and 7:00/16:30, respectively (see Table 4). The integrated chart (see Figure 8) indicated that Dentex counts followed the solar irradiance pattern, with onset and offset values that could anticipate or lag the irradiance onset by a maximum of 2 and 2.5 h, respectively.
In order to describe the photic niche of Dentex, we noted the average irradiance values at which averaged Dentex counts start rising (i.e., the species becomes active) at the peak onset; these are between 3 and 76.3 W/m² (see Table 1 and Supplementary Figures 2, 3). Inactivity (i.e., the offset) occurs at the average values reported in Table 1 and Supplementary Figures 2, 3.
Environmental Cycles
In Table 4, we report MESOR values and onset and offset timings for each month of the year for the environmental variables previously selected by the GLM and GAM models for presence/absence data (i.e., water temperature, wind speed, and solar irradiance). The temporal dynamics of these variables are described below, except for solar irradiance, which was already described (see previous Section).
The water temperature cycle (Supplementary Figures 4, 5) had a phase shift toward earlier hours from January, with onset and offset at 15:30 and 6:30, respectively, to December, with onset and offset at 0:00 and 17:00, respectively. Furthermore, water temperature had minimum and maximum MESOR values in February and August (i.e., 13.11 °C and 23.14 °C, respectively). One should notice that those two months also correspond to the minimum and maximum for Dentex.
The wind speed cycle (Supplementary Figures 6, 7) followed the same pattern as solar irradiance and fish visual counts (see previous Section), with its onset anticipating its timing across the months (see Table 4). In order to describe the ecological niche of Dentex, we annotated the average values of the relevant variables from the statistical models when Dentex started and finished its active phase (i.e., onset and offset, respectively) (see Table 1 and Supplementary Figures 2-7). Indeed, when counts of this species started to rise (i.e., the species becomes active) at the peak onset, the values of water temperature and wind speed were, respectively, between 13.1-22.96 °C and 2.47-5.06 km/h (see Table 1). Instead, inactivity (i.e., the offset) occurs, in average values, between 13.12-23.3 °C water temperature and 4.01-8.5 km/h wind speed.
Plotting conditional densities for the most important explanatory variables of the grouping or not grouping behavioral data of Dentex (i.e., the distribution of the nominal variable for the grouping behavior of Dentex given a certain value of an environmental or temporal driver) (Supplementary Figure 8), we observed that Dentex forms groups during the day or at dusk and dawn, but not during the night. Moreover, the frequency of grouping increased along the years of observation. No particular seasonal pattern across the months of the year was observed for grouping behavior. Furthermore, we could not obtain particular information on the relationship between grouping and the environmental variables selected by the models for this behavior.
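Conditional density plots of this kind can be produced with base R's cdplot(), as in the hedged sketch below; the data frame and column names are placeholders for the variables selected by the models, and hour is assumed to be numeric.

```r
# Minimal sketch of the conditional density plots (grouping vs drivers).
dat$grouping <- factor(dat$grouping, labels = c("solitary", "group"))

cdplot(grouping ~ water_temp, data = dat, xlab = "Water temperature (deg C)")
cdplot(grouping ~ hour, data = dat, xlab = "Hour of day")
```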
DISCUSSION
We described the occurrence of diel and seasonal behavioral patterns in a coastal marine top predator, Dentex, by analyzing 8 years of high-frequency and continuous time series of visual counts plus concomitant multiparametric oceanographic and meteorological data. First, we detected a relationship between fish counts and solar irradiance as a proxy for rhythmic activity. Then, a seasonal variation in video counts was evidenced, with a major peak in August and a minor one in May, suggesting local abundance changes possibly linked to population dynamics (e.g., seasonal migration). Also, the species' counts were significantly correlated with water temperature and wind speed. Finally, we detected the occurrence of grouping behavior correlated with solar irradiance and water temperature, suggesting an effect of the environment as a regulator of grouping behavior.
Limitations in Cabled Observatory Monitoring Strategies
Cabled observatories provide spatially limited data acquisition (a single platform can provide a relatively narrow field of view of a few m²). Another problem is that with this methodology it is not possible to separate the influence of abundance variation from activity variation, and the former certainly affects the estimation of the latter. Nevertheless, general inferences on activity rhythms can be made with spatially limited sampling windows (Hansteen et al., 1997; Refinetti et al., 2007; Bu et al., 2016; Gaudiano et al., 2021). Even trawling, which is the more spatially representative tool, is still limited in comparison to the real extent of marine species distributions (Cama et al., 2011; Sonnewald and Türkay, 2012; Ünlüoglu, 2021). Furthermore, Campos-Candela et al. (2018) recently reviewed some methods to infer abundance from visual counts with cameras, stating that averaged estimates of animal density do not show any substantial improvement after an adequate sampling effort (i.e., number of cameras and deployment time).
In our monitoring, fish were observed during daytime, and this could cause more observations per day in summer than in winter, with the photoperiodic difference between months increasing the probability of observing fishes on a summer day rather than on a winter day. In any case, there are diurnal species that are sampled more in winter for reasons related to an increase in their abundance rather than to the possible effect of an increasing photoperiod (Condal et al., 2012; Aguzzi et al., 2015a). This abundance/activity/photoperiod confound remains difficult to disentangle with the present methodology.
In order to acquire more representative results on rhythmic movements and habitat use of fishes at the scale of species distribution, better spatial coverage in monitoring would be needed (Holt, 2009). Networks of cameras with synchronous image acquisition routines may be required to track species movements across different levels of habitat heterogeneity (e.g., Doya et al., 2017; Aguzzi et al., 2020b; Rountree et al., 2020). Such synchronous image acquisition could clarify whether the peaks in video counts of Dentex in different areas are associated with different habitat uses (e.g., preying vs. resting), and this information could then be related to the activity rhythms. Inspiration on how to set up the network monitoring may be drawn from spatially extended surveys with camera traps aiming at the visual census of fauna in terrestrial environments (e.g., Beaudrot et al., 2016; Norouzzadeh et al., 2018).
OBSEA data collection could be complemented with other actions within the monitoring area, such as classic visual census sampling by divers (Samoilys and Carlos, 2000; Grane-Feliu et al., 2019), and collected data could be cross-checked with information provided by telemetry. This technology allows the tracking of particular individuals over large periods of time (Hussey et al., 2015; Villegas-Ríos et al., 2017; Brownscombe et al., 2020; Matley et al., 2021). Acoustic telemetry could help achieve continuous long-term tracking of single individuals to study the habitat use of fish species (Hussey et al., 2015; Dominoni et al., 2017; Lennox et al., 2017), overcoming the spatial and temporal bias of fixed-point video monitoring for a reliable evaluation of population demography and local biodiversity (Aguzzi et al., 2020a,b). It is impossible with fixed-camera imaging technologies to demonstrate fish "site fidelity" when this area specificity is not a clear life trait of the species (e.g., territoriality, burrowing, etc.). We have no morphological tools to identify individuals, whose position and orientation change within the field of view. For this reason, acoustic tagging coupled with imaging may be needed to support such a site-specificity study.
Cabled observatory imaging equipment can have some monitoring footprint in coastal areas due to the introduction of light for nocturnal image sampling, which can induce behavioral disturbance in the local fauna (e.g., Davies et al., 2015; Kurvers et al., 2018; Czarnecka et al., 2019; Lucena et al., 2021). Nevertheless, in our case it is unlikely that the OBSEA lighting system, active every 30 min for about 3 s, affected the reported Dentex count patterns, since individuals of this species were absent at nighttime all year long (see previous Section). However, the environmental footprint of future long-term monitoring could be reduced with the use of acoustic multi-beam cameras (Aguzzi et al., 2019).
Despite the evidenced monitoring limitations, we would like to stress that one positive aspect of cabled observatory use is its low-invasive character of data collection. For example, visual census carried out by divers implies a factor of disturbance on the organisms through human presence (Harmelin-Vivien and Francour, 1992; Januchowski-Hartley et al., 2011; Assis et al., 2013; Azzurro et al., 2013; Emslie et al., 2018; Pais and Cabral, 2018).
How to Interpret Day-Night Rhythms in Dentex Visual Counts
Count peak timing and amplitude followed the photophase. In our case, the interpretation of video-count peaks in terms of increased or decreased activity should be carried out with precaution. A similar precaution is adopted when evaluating the ecological meaning of species peaks in catches or visual census; i.e., animal captures or sightings are provoked by their increased availability in the sampling area, either for resting or because of their activity (Aguzzi and Bahamon, 2009). Notwithstanding, many species of fishes display activity rhythms (e.g., Eriksson, 1978; Muller, 1978; Helfman, 1986) that drive changes in abundance between day and night in coastal areas, as detected by different sampling systems and methodologies (e.g., Aguzzi et al., 2013; Hawley et al., 2017; Schalm et al., 2021). Diurnal, nocturnal, and crepuscular activity is often described as a product of fish behavioral responses to solar irradiance variations (Helfman, 1986; Coles, 2014).
In this scenario, almost no Dentex was detected at nighttime, consistently over several consecutive years. This observation suggests that video-count peaks are the product of increased activity at daytime. Laboratory data on fish behavior and physiology may provide a first insight into this phenomenon, assuming a link between visual counts and activity. Photoperiodic regulation of fish physiology and swimming behavior occurs through the modulation that light intensity and temperature exert on the production of hormones (e.g., Pavlidis et al., 1999; Cowan et al., 2017; Sánchez-Vázquez et al., 2019). Fish melatonin tracks environmental light levels and, as a result, variable rates of swimming occur (Saha et al., 2019).
A daytime activity increase for Dentex, resulting in increased video-spotting at the OBSEA, can be postulated for the following reasons. First, animals rest at nighttime within Posidonia seagrass beds (Zabala et al., 1992). Second, the species has a home range of less than 1 km² in specific periods of the year, with the exception of moments in which a migration may follow bathymetric changes related to optimal water temperature (Aspillaga et al., 2017; see next Section). Third, Dentex is a visual predator whose prey spotting is optimized during light hours (Marengo et al., 2014).
Seasonal Fluctuation in Fish Video-Counts
Here, we reported a seasonal rhythm in visual counts of Dentex, with a significant increase in August and a second, minor peak in May, consistent across multiple years. In the past study of Sbragaglia et al. (2019) at the OBSEA, the major peak in Dentex counts in August was detected, but not the minor one in May. This points out the strategic importance of prolonged monitoring activity at the OBSEA. That seasonal pattern has also been detected with recreational fishing data for the Italian coasts (Sbragaglia et al., 2020).
We interpreted the large peak of August as the product of thermocline regulation of fish behavior. Dentex shows a preference for warm suprathermoclinal waters, whose shallowest depths (i.e., between 20-30 m) are usually reached in our monitoring geographic zone (i.e., the NW Mediterranean) in July and August (Aspillaga et al., 2017). The OBSEA is placed within that depth range, and this fact may explain the count increase in summer.
Another explanation could be that the seasonal increase in Dentex counts is synchronized with the maximum abundances of its prey (see previous Section), which increase in the OBSEA area in spring-summer; e.g., D. vulgaris from June to October, O. melanura in May and June, and S. maena from May to July (Aguzzi et al., 2015a). Seasonally synchronous abundance changes may occur between fish predators and prey (Fox and Bellwood, 2011; Bustos et al., 2015; Mishra et al., 2020; Liu et al., 2021). Possibly, the presence of artificial reef structures near the OBSEA attracts fish prey and consequently concentrates the presence of Dentex as well.
We observed a second, minor peak of Dentex counts in May that can be discussed in relation to the phenology of breeding. If, on one side, the species migrates deeper to reproduce in areas at 40-100 m depth from March to June (Marengo et al., 2014; Grau et al., 2016), on the other we did not observe a temporally concomitant drop in counts at the OBSEA location in May (as an indication of a deeper migration of individuals in that period). Possibly, some individuals that have finished reproducing (or with no mature gonads) return (or stay) at shallower depths for foraging. In fact, regressing ovaries in females and late-developing testes of Dentex have already been reported during May (Grau et al., 2016).
Species Relationship With Water Temperature and Wind Speed
The oceanographic and meteorological monitoring was dedicated to understanding the species' tolerance to certain ranges of variation in selected measured habitat variables, as an indication of the effects that climate change may exert on the fish's phenology (Stevenson et al., 2015). Those ranges have a practical value for ecological monitoring, since they indicate a roadmap to develop smart sampling procedures for marine species: i.e., the optimum time window in which to expect a maximum presence of individuals, according to the fluctuation status of key environmental drivers (Aguzzi et al., 2020b).
Here, counts of Dentex were related to water temperature, with the seasonal peak always reported above an averaged threshold of 13.1 °C. A past study at the OBSEA with a 3-year time series detected an increase in Dentex counts above 20 °C (Sbragaglia et al., 2019). This highlights the importance of pursuing the monitoring activities at the OBSEA to better characterize the environmental preferences of this species.
The importance of water temperature as an environmental driver has already been described for many fish species (Vinagre et al., 2016; Van Der Walt et al., 2021). Temperature deeply affects fish presence (or absence), because it directly influences species' physiological performance (Cussac et al., 2009; Freitas et al., 2016; Day et al., 2018; Waldock et al., 2019). Dentex can cope with temperature ranges above our reported threshold, as also indicated by the current trend of geographic expansion in the North Mediterranean (Orozco et al., 2011; García-Rubies et al., 2013). We confirmed that trend through a progressive increase in counts over the years (i.e., see Figure 2), which would possibly continue in the next decade, when temperature is expected to grow in the NW Mediterranean (Bahamon et al., 2020). This indicates the value of cabled-observatory assets to disclose the occurrence of progressive trends in population shifts beyond more contingent seasonal dynamics due to climate forcing.
We found a significant relationship between Dentex counts and wind speed. This variable affects the population distribution of some fish species (Daskalov, 2003; Teo et al., 2007; Bakun and Weeks, 2008; Selleslagh and Amara, 2008; Brander, 2010; Kuparinen et al., 2010), based on upwelling nutrient inputs (Bakun and Weeks, 2008; Bellido et al., 2008; Brander, 2010), although this phenomenon is not relevant in a shallow coastal area such as the one where the OBSEA is deployed.
Changes in wind speed and direction could also indirectly affect other environmental variables that consequently affect the marine biota. For example, it was observed that changes in wind affected salinity in the North Sea and in the Baltic Sea (Schrum, 2001), which had negative consequences on cod recruitment in both areas (Brander, 2010). In our case, salinity was not significantly associated with counts of Dentex nor with wind. Hence, the dynamic reported for cod recruitment in the North Sea and Baltic Sea may not be valid in our case. Notwithstanding, wind may resuspend and mix seabed and water column nutrients at periods of blowing, hence influencing the coastal food web, with a consequent overall increase of trophism at all predator levels of the trophic food web (Bellido et al., 2008). Given the overall increase in prey abundance, Dentex counts may consequently increase at moments of wind blowing.
The Grouping Behavior of the Species
We reported data on the grouping behavior of Dentex, which showed a clear 24-h modulation. Here, the formation of groups of Dentex occurred significantly more during daytime (including twilight hours) than nighttime, in line with the broad phase relationship between all visual counts and solar irradiance as a proxy for diurnal activity rhythms (see previous Section). Differently, no peaking was reported over the different seasons. A seasonality of Dentex grouping behavior was described in rocky coastal areas for juveniles during summer (Chemmam-Abdelkader et al., 2004; Sahyoun et al., 2013). We did not observe this phenomenon, but we could not resolve whether our video monitoring captured individuals at this stage of development, since no tools for body sizing (e.g., lasers) were present beside the camera; however, we can assume that the majority of individuals were adults. Indeed, for adult Dentex, groups of individuals may be detected during the spawning season in spring, between 40 and 100 m depth (Marengo et al., 2014), but, given the shallower depth of the OBSEA deployment, we did not observe this phenomenon (see previous Section).
The grouping behavior of Dentex was associated with solar irradiance and water temperature. Grouping has already been broadly correlated with environmental variation in previous works on different fish species (Félix-Hackradt et al., 2010; Meager et al., 2012; Georgiadis et al., 2014; Palacios-Fuentes et al., 2020). In particular, the formation of fish groups has been related to light intensity (Meager et al., 2012; Georgiadis et al., 2014) and to water temperature (Power et al., 2000; Davoren et al., 2006; Meager et al., 2012; Palacios-Fuentes et al., 2020). Also, the weak increase in grouping behavior reported across consecutive years of observation (see Supplementary Figure 8) is likely the result of the increasing abundance of this species in the OBSEA area (see also previous Section).
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/Supplementary Material, further inquiries can be directed to the corresponding author/s.
ETHICS STATEMENT
Ethical review and approval was not required for the animal study because we did not actively sample, manipulate, or kill any animals. We only performed video imaging.
AUTHOR CONTRIBUTIONS
MF: conceptualization, formal analysis, investigation, writing-original draft, and writing-review and editing. VS: conceptualization and formal analysis. ET, JAn, IM and JP: formal analysis. JdRF: funding acquisition. MNC and DMT: data curation. JAg: conceptualization, formal analysis, and writing-review and editing. All authors gave final approval for publication.
ACKNOWLEDGMENTS
We want to thank the members of the Development Center of Remote Acquisition and Information Processing Systems (SARTI) for the maintenance of the OBSEA seafloor platform. In particular, we want to thank DMT, who is included in the RESBIO project, and MNC, Matias Carandell, and Enoc Martinez, who performed the tasks for the maintenance of the OBSEA structure. We also acknowledge financial support from the Spanish Ministry of Science and Innovation (Juan de la Cierva Incorporación Research Fellowship to VS #IJC2018-035389-I), plus funding from the Spanish government through the 'Severo Ochoa Centre of Excellence' accreditation (CEX2019-000928-S).
2-Aminomethylene-5-sulfonylthiazole Inhibitors of Lysyl Oxidase (LOX) and LOXL2 Show Significant Efficacy in Delaying Tumor Growth
The lysyl oxidase (LOX) family of extracellular proteins plays a vital role in catalyzing the formation of cross-links in fibrillar elastin and collagens, leading to extracellular matrix (ECM) stabilization. These enzymes have also been implicated in tumor progression and metastatic disease and have thus become an attractive therapeutic target for many types of invasive cancers. Following our recently published work on the discovery of aminomethylenethiophenes (AMTs) as potent, orally bioavailable LOX/LOXL2 inhibitors, we report herein the discovery of a series of dual LOX/LOXL2 inhibitors, as well as a subseries of LOXL2-selective inhibitors, bearing an aminomethylenethiazole (AMTz) scaffold. Incorporation of a thiazole core leads to improved potency toward LOXL2 inhibition via an irreversible binding mode of inhibition. SAR studies have enabled the development of a predictive 3DQSAR model. Lead AMTz inhibitors exhibit improved pharmacokinetic properties and excellent antitumor efficacy, with significantly reduced tumor growth in a spontaneous breast cancer genetically engineered mouse model.
■ INTRODUCTION
The lysyl oxidase (LOX) family of copper-dependent extracellular proteins comprises the founder member enzyme, LOX, and four LOX-like enzymes (LOXL1−4). 1−6 While there is greater than 50% sequence identity between the isoforms, which includes a conserved C-terminal catalytic domain across the family containing the copper binding site and the lysine tyrosylquinone (LTQ) cofactor, the enzymes can be divided into two subgroups based on differences in the N-terminal structure. Indeed, LOX and LOXL1 contain a variable N-terminal propeptide that undergoes proteolytic cleavage to form the active enzyme extracellularly. LOXL2−4 differ in that they do not possess this propeptide region and instead contain four scavenger receptor cysteine-rich (SRCR) domains at the N-terminus, which are thought to mediate protein−protein interactions in the ECM. In the case of LOXL2, the protein undergoes proteolytic cleavage of the first two SRCR domains upon secretion; however, unlike LOX, processing is not required for catalytic activation. 7,8 The most widely studied function of the LOX enzymes is their ability to form cross-links in fibrillar elastin and collagens through oxidative deamination of specific lysyl residues, thus stabilizing the ECM. 3 However, recent reports suggest that these enzymes have a multitude of biological functions, which include cell proliferation and epithelial−mesenchymal transition (EMT). 3,5,9 Consequently, the more widely studied LOX and LOXL2 isoforms have been implicated in tumor progression, where they are highly expressed and actively involved in remodeling the tumor microenvironment. 10−16 The LOX family has thus become an attractive therapeutic target for the treatment of many types of invasive cancers, particularly those with poor patient outcomes. Targeting LOXs with small molecule inhibitors is very challenging owing to the lack of crystal structures useful for drug design for any of the isoforms (the only reported LOXL2 structure is a precursor state without the cofactor formed) 17 and difficulties associated with isolating several of the enzymes in an active form. Nevertheless, in recent years several LOXL2-selective inhibitors have been reported, including haloallylamine-based inhibitors PXS-S1A and the highly potent PXS-S2A (full structures not disclosed), 18 as well as dual LOXL2/LOXL3 inhibitors PXS-5153A (1) 19 and aminomethylenepyridine 2 (Figure 1). 20 Our dual LOX/LOXL2 inhibitor CCT365623 (3a) 21,22 is an orally efficacious aminomethylenethiophene (AMT) based inhibitor, which has been used to help elucidate mechanisms by which LOX drives tumor progression. These novel inhibitors offer significant advantages over the prototypical pan-LOX inhibitor β-aminopropionitrile (BAPN), 23,24 whose lack of sites amenable to chemical modification precludes preclinical optimization.
We recently reported the discovery of AMT inhibitor CCT365623 (3a) following a significant medicinal chemistry campaign to elucidate the structure−activity relationship (SAR) of this class of compound with respect to LOX inhibition. 22 Systematic modifications were made to a hit compound identified following a high-throughput screen (HTS), leading to the development of submicromolar half maximal inhibitory concentration (IC50) inhibitors possessing desirable selectivity and pharmacokinetic (PK) properties.
During the course of these studies, the 2,5-substituted thiophene core was replaced with various other five-membered heterocyclic rings to ascertain the importance of this moiety on activity. Of those assessed, only a 2-aminomethylene-5-sulfonyl thiazole core retains activity, with naphthalenesulfonyl-substituted thiazole 6 showing comparable levels of LOX inhibition to the analogous thiophene compound 4 ( Figure 2). 22 By contrast, thiazole regioisomer 5 is relatively inactive. Interestingly, in the case of the active thiazole compound (6) we also observe a modest increase in potency toward LOXL2 inhibition, with 6 proving equipotent against both isoforms. These observations prompted parallel investigations into the development of 2-aminomethylene-5-sulfonylthiazoles (AMTz) as dual LOX/LOXL2 inhibitors, and given the commercial availability of purified LOXL2 enzyme, these studies were carried out using LOXL2 enzyme, both as a target and as a surrogate to assess LOX activity.
■ RESULTS AND DISCUSSION
Time-Dependent Inhibition. Initial studies concerned the enzyme−inhibitor preincubation time in our biochemical assay, whereby we assessed whether our compounds inhibit enzyme activity in a time-dependent manner. Dual LOX/LOXL2 inhibitors 4 and 6, along with the less active 5-aminomethylene-2-sulfonyl thiazole regioisomer 5, were assessed using longer preincubation times of 1 and 3 h, and the activity was compared to previously obtained 20 min preincubation data (Table 1). The results of this study demonstrate that longer preincubation times result in increased levels of enzyme inhibition, with the greatest difference in effect observed when the time is increased from 20 min to 1 h, upon which up to a 5-fold increase in potency is observed. Further increasing the time to 3 h has a smaller positive effect on activity while remaining within 2-fold of the 1 h data. On the basis of these findings, we decided to employ a 1 h preincubation time for the remainder of the studies described herein, which provides an optimal determination of enzyme inhibition at a physiologically relevant time point.
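As an illustration of how an IC50 can be extracted from dose−response data at a fixed preincubation time, the following R sketch fits a four-parameter log-logistic curve with the drc package; the data frame assay_1h and its activity/conc columns are hypothetical placeholders, not the actual assay data.

```r
# Minimal sketch, assuming 'assay_1h' holds residual LOXL2 activity (%)
# measured at several inhibitor concentrations after a 1 h preincubation.
library(drc)

# Four-parameter log-logistic dose-response fit
fit <- drm(activity ~ conc, data = assay_1h, fct = LL.4())

# Effective dose giving 50% response, i.e., the IC50 for this time point
ED(fit, 50)
```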
Thiophene vs Thiazole Core. Optimization studies began with our recently published AMT inhibitor (3a), 22 whereby we modified the core to the analogous 1,3-thiazole compound (7a) and assessed inhibition of LOXL2 activity (Table 2, compare entries 1 and 4). Pleasingly, this gives a modest increase in potency, with an IC50 of 0.086 μM. We then assessed whether this increase in activity, attributed to the presence of a nitrogen atom in the ring, could be mimicked through the incorporation of an electronegative halogen atom on the thiophene ring, which could also engender the formation of an intramolecular hydrogen bond to improve ligand binding: 25 3-chlorothiophene (entry 2) was found to be less potent by >10-fold compared to the parent compound (entry 1), while a fluorine substituent was 4-fold less active (entry 3). A second matched pair was synthesized to confirm this trend, and again the potency achieved with the thiazole analogue was slightly greater than that of the thiophene compound (compare entries 6 and 7). This suggests that the presence of a nitrogen atom in the heterocyclic core is advantageous, providing additional stabilization to the protein−inhibitor complex through either resonance mechanisms or noncovalent interactions.
In order to confirm that the aminomethylene group was still essential for inhibition, we assessed the effect of an N-methyl substituent group (entry 5). As expected, this results in significant loss of activity, which is consistent with previous conclusions that the aminomethylene group is required to form a stable Schiff base between the inhibitor and the LTQ cofactor that is a feature of both LOX and LOXL2.
Modification of Thiazole C-5 Group. We next assessed the C-5 substituent group of the thiazole-based inhibitors to determine whether the same SAR trends were observed as in the thiophene series and whether the lead scaffold remained optimal. An aryl sulfonyl group is preferred to either an alkyl or cycloalkyl group (Table 3, compare entries 1−3). Focusing on the aryl substituents, we find that monosubstitution with either a methane sulfonyl (entry 4) or a phenyl group (entry 6) is tolerated since these are equipotent with the unsubstituted
example (entry 3), but a disubstituted aryl group (entry 8) remains preferred for anti-LOXL2 activity.
We assessed the effect of the oxidation level of the sulfonyl linker and found that use of a sulfoxide or sulfide linker results in less active compounds (compare entries 8−10). This is expected based on previous studies; however, it is noted that the impact resulting from a decrease in oxidation state is less significant in the AMTz series herein than it was in the AMT series. 22 Removal of the sulfonyl linker is tolerated but results in partial loss of activity, with the most significant impact observed in the biaryl systems (compare entries 4 and 5; 6 and 7; 8 and 11). As explored later (Table 6), this increased tolerance to a range of linkers appears to be a feature of the thiazole core compared to the thiophene core, whereby the presence of the nitrogen atom in the ring increases its electron-withdrawing properties, 26,27 thus mitigating the need for an electron-withdrawing linker.
Variation of the Phenylsulfonyl Ring Substituents. Incorporation of a bis-sulfonyl biaryl C-5 group is advantageous for activity and was previously found to improve oral in vivo PK exposure in the AMT series of LOX inhibitors. 22 We subsequently looked to ascertain if there was scope to optimize the substituent effects further (Table 4). Varying the methanesulfonyl group (R 2 ) to an ethyl or isopropyl group is tolerated, albeit resulting in a drop in potency; however, a larger tert-butyl group proves detrimental to activity (compare entries 1−4). A methoxy substituent is similarly well tolerated (entry 5), while a methylamino group appears to be less favorable (entry 6).
Small modifications to the aryl substituent (R1) do not result in a significant change in potency. Inhibitors bearing either an electron-donating p-methyl group or an electron-withdrawing p-fluoro or p-trifluoromethyl substituent demonstrate comparable levels of activity to the parent compound (compare entries 7−9 to entry 1). Replacing the phenyl group with an N-methyl pyrazolyl group is found to be favorable in conjunction with an ethyl group (compare entries 2 and 11), though it does not appear to confer additional potency when R2 is a methanesulfonyl group (compare entries 1 and 10). Further modification of the ethyl group of 21b to an electron-withdrawing trifluoromethyl or chloro group has a slightly negative effect on activity, though a fluoro substituent is well tolerated (compare entries 11−14). Overall, from these results we conclude that the parent inhibitor 7a is well optimized, while gaining further understanding of the SAR and identifying additional scaffolds for further study.
Activity of AMTz Regioisomers. Final SAR studies concerned the substitution pattern around the thiazole core, whereby we compared the anti-LOXL2 activity of selected 2,5-substituted AMTz inhibitors to that of their 2,4-substituted regioisomers (Table 5). Although there is a trend favoring 5-substitution vs 4-substitution, it is again less pronounced than that previously observed in the thiophene series against LOX 22 (compare entries 1 and 2; 3 and 4; 5 and 6; 7 and 8).
Selectivity Studies. We assessed the selectivity profiles of our AMTz inhibitors against LOX and LOXL3 isoforms ( Table 6) and against common amine oxidases and the potassium ion channel hERG (Table 7).
Activity Profiles against LOX and LOXL3 Isoforms. While the use of LOXL2 as a surrogate for LOX was a practical and informative approach, it was necessary to assess the activity of a range of AMTz inhibitors against LOX to confirm our belief that these are in fact dual LOX/LOXL2 inhibitors. A diverse set of compounds was selected, encompassing 2-aminomethylene-5-sulfonyl- (7a, 7e, 21a, and 21b) and 2-aminomethylene-4-sulfonyl- (22b, 22c, and 22d) AMTz regioisomers, sulfide-linked 7d, and 17 with no linker, as well as thiophene 3a and BAPN. From this study we were able to confirm that 2-aminomethylene-5-sulfonyl thiazoles 7a, 21a, and 21b, thiophene 3a, and BAPN exhibit good anti-LOX activity that is in accordance with previously obtained data. 21,22,28 A slight decrease in potency toward LOX inhibition is observed with compound 7e, which is consistent with its LOXL2 activity. Removal of the sulfonyl linker (17) is detrimental to activity; however, a sulfide linker (7d) is well tolerated, which, as discussed previously, can be attributed to the presence of a nitrogen atom in the thiazole core, which presumably increases the ability of these compounds to form a stable covalent bond upon binding. Interestingly, 2,4-AMTz regioisomers (22b, 22c, and 22d) are found to be inactive against LOX, which follows a similar trend to the thiophene series, in which 2,4-regioisomers were demonstrated to be 15-fold less potent. 22 As seen previously, all AMTz inhibitors demonstrate increased potency toward LOXL2 inhibition versus LOX.
As discussed previously, literature compounds 1 and 2 are in fact dual LOXL2/LOXL3 inhibitors, 19,20 while selectivity data concerning LOXL isoforms have not been reported for other literature inhibitors. As a member of the LOX family, LOXL3 is known to modulate the ECM, 29−31 though there is tissue expression variance compared to other LOX proteins, and it has been shown to play a significant role in muscular, skeletal, and lung development in mice. 32−34 More recently, studies have demonstrated an involvement of LOXL3 in cancer and metastasis, suggesting it to be a potential therapeutic target for malignant disease. 35,36 With regard to LOXL3 inhibition, it is interesting to observe that our AMT and AMTz compounds exhibit moderate to high selectivity toward LOXL2 in all cases (Table 6), unlike the nonselective LOX-family inhibitor BAPN and reported literature compounds 1 and 2 (activity within 3-fold for LOXL2 and LOXL3 in all cases). 19,20 2,5-AMTz inhibitors 7a, 7e, 21a, and 21b demonstrate ≥10-fold selectivity toward LOXL2, with IC50 values of approximately 1 μM; compound 7e is consistently less potent against all isoforms. Modification of the sulfonyl linker to a sulfide (7d) or a directly aryl-linked compound (17) results in a decrease in potency comparable to that observed against LOXL2. Increased selectivity is observed with 2,4-AMTz regioisomers (22b−d), in particular compound 22d, which does not possess a bis-sulfonyl group. AMT inhibitor 3a is found to be a weak inhibitor of LOXL3, providing selectivity in excess of 100-fold. This study demonstrates that 2-aminomethylene-5-sulfonyl thiazoles are potent inhibitors of three LOX isoforms, while their selectivity toward LOXL2, particularly for 2,4-AMTz regioisomers, supports their use as valuable tool compounds to study the biology and functions of LOXL2. Owing to the lack of availability and
difficulties involved in obtaining other LOXL enzymes in an active form, 20 we have been unable to assess the selectivity of our compounds over other LOX-family members.

Selectivity over Common Amine Oxidases and hERG. A selection of compounds, including our most potent AMTz inhibitors, was assessed for selectivity over the flavin-containing monoamine oxidases (MAO) A and B, copper-containing diamine oxidase (DAO) and semicarbazide-sensitive amine oxidase (SSAO), and the hERG channel (Table 7). In general, our inhibitors show excellent selectivity over MAO-A and -B and DAO. Bis-sulfonyl compounds 7a, 18, and 21a, along with biphenyl compound 7e, demonstrate moderate SSAO inhibition. N-Methyl pyrazolyl inhibitors 21b and 21e show improved selectivity, and further studies carried out with 21b indicate that it is not a substrate of SSAO, unlike 3a. 22 Bis-sulfonyl compounds 7a and 18 are also found to be moderate inhibitors of the hERG channel, while the other compounds assessed show good selectivity. On the basis of the in vitro activity and selectivity profiles, 2-aminomethylene-5-sulfonylthiazole inhibitors 6, 7e, 21b, and 21e, along with 7a for direct comparison with thiophene 3a and 22d as an exemplar of the 2,4-AMTz subseries, were advanced to metabolic stability assessment and in vivo mouse PK studies.
Pharmacokinetic Evaluation. All compounds assessed in vivo demonstrate good metabolic stability against mouse liver microsomes (MLM) and inhibitor exposure following oral administration in mouse PK studies (Table 8). Naphthalenesulfonyl compound 6 shows low to moderate plasma exposure (AUC = 1−10 μM·h), as do 3-ethyl-5-phenyl inhibitor 7e and 21e, which bears N-methyl pyrazolyl and fluoro substituents. Pleasingly, a number of the compounds assessed demonstrate desirable PK profiles, achieving greater plasma exposure levels than our previously published inhibitor (3a) 22 and excellent oral bioavailability. Indeed, compounds 7a, 21b, and 22d have AUCs above 18 μM·h, achieving Cmax concentrations of up to 32 μM. These compounds also exhibit good permeability in the Caco-2 assay, used to model absorption of orally administered drugs in the small intestine, and present lower efflux levels than those seen previously for thiophene 3a. Given our observations that these compounds demonstrate time-dependent inhibition (see Tables 1 and 9), it is likely that efficacy would be driven by Cmax.
Further PK studies were carried out using AMT compound 3a and AMTz 21b in a rat model. Metabolic stability against rat liver microsomes (RLM) is moderate for 3a and good for 21b; however, a significant difference in oral bioavailability is observed between the compounds, with 3a demonstrating very poor levels of plasma exposure. In contrast, 21b exhibits comparable AUCs of 18 μM·h between the species and maintains a good Cmax concentration (6.5 μM) and oral bioavailability (68%). On the basis of the balance of potency, selectivity profile, PK, and permeability, compound 21b was determined to have the best profile overall and was thus chosen for further in vivo antitumor efficacy evaluation.
Table 8 notes: a Mouse liver microsome (MLM) stability values represent the percentage of compound remaining after 30 min. b Rat liver microsome (RLM) stability values represent the percentage of compound remaining after 30 min. Mouse plasma PK parameters were determined following a single po dose at 50 mg/kg or iv dose at 10 mg/kg. Rat plasma PK parameters were determined following a single po dose at 20 mg/kg or iv dose at 4 mg/kg.

Evaluation of Anti-Tumor Efficacy. In vivo efficacy studies involving compound 21b were carried out using a genetically engineered mouse model (GEMM) that functions as a LOX-driven spontaneous breast cancer model. 21 Mice were dosed daily via oral gavage (70 mg/kg) from around 60 days after birth, once primary tumors become palpable. Inhibitor 21b was very well tolerated, with no observed body weight loss. Pleasingly, we observe a delay in primary tumor development and a significant reduction in tumor growth rate in the 21b-treated group compared to that of the controls (Figure 3a), with no deaths necessitated by tumor volume reaching ethical size limits in the inhibitor-treated group during the course of this study (Figure 3b).
Potency and Mode of Inhibition (MOI) of AMT and AMTz Inhibitors. We assessed the time-dependent inhibitory activity of select AMT and AMTz compounds (Table 9) and investigated the mode of enzyme inhibition for key series examples (Figure 4).
Time-Dependent Inhibition of Lead Compounds. We previously demonstrated that series exemplars of AMT and AMTz compounds inhibit LOXL2 in a time-dependent manner (Table 1); we now sought to ascertain if this trend remained true for the series in general. As such, a range of AMTz inhibitors, including potent 2-aminomethylene-5-sulfonyl compounds 7a, 21a, and 21b and 2,4-regioisomers 22b, 22c, and 22d, along with AMT compound 3a and BAPN, were assessed for anti-LOXL2 activity following different enzyme preincubation times (Table 9). All compounds were found to exhibit time-dependent inhibition, with the most significant effect again observed on increasing the time from 20 min to 1 h, whereupon up to a 7-fold increase in activity is observed; further increasing the preincubation time to 3 h provides a small additional increase in activity, with IC50 < 0.1 μM attained for all 2-aminomethylene-5-sulfonylthiazoles (7a, 21a, and 21b) assessed.
MOI Studies. Given the time-dependent inhibition observed with these compounds, we wanted to clarify the mechanism of LOXL2 inhibition; as such, we set up a jump dilution assay, which readily distinguishes between reversible and irreversible modes of enzyme inhibition (Figure 4). 37 Leading AMT and AMTz inhibitors 3a and 21b, respectively, along with BAPN, were assessed, whereby the enzyme was preincubated for 1 h with 10 × IC50 of these compounds, at which concentration we see complete inhibition of the enzyme in all cases. The enzyme/inhibitor mixture was then diluted 100-fold into a solution containing all enzyme reaction components, and the activity of the enzyme was assessed.
The resulting curve obtained for 3a displays about 80% recovery of activity following dilution, as compared to the DMSO control, which suggests that this compound behaves as a reversible inhibitor under these assay conditions, unlike BAPN which can be characterized as an irreversible inhibitor. In
the case of 21b, a more modest regain in activity of around 30% is observed following the jump dilution. This is suggestive of either a slowly reversible compound or an irreversible inhibitor whereby a residual amount of enzyme has not committed to forming a stable covalent bond during the two-step inhibition process. The results obtained from this study suggest that the electronic nature of the heterocyclic core affects the binding mechanism of these inhibitors, with the AMTz inhibitors better able to form stable, irreversible Schiff base intermediates following initial reversible binding to the LTQ cofactor. 20,38 One possible explanation for this is that the initial Schiff base formed is more susceptible to hydrolysis in the case of 3a compared with 21b, which is better able to rearrange to form a more stable intermediate as a result of either increased resonance stabilization or noncovalent electrostatic interactions. Alternatively, this could also be the result of differences in enzyme−inhibitor kinetics, with initial rates of binding affecting the ability of these inhibitors to form covalent bonds.
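The percent recovery quoted above can be computed directly from the post-dilution progress curves. The R sketch below is illustrative only; the slope variables are hypothetical stand-ins for steady-state rates measured after the 100-fold dilution.

```r
# Minimal sketch, assuming steady-state reaction rates (slopes of the
# post-dilution progress curves) have already been estimated.
recovery_pct <- function(rate_inhibited, rate_control) {
  100 * rate_inhibited / rate_control  # % activity regained vs DMSO control
}

recovery_pct(slope_3a,   slope_dmso)  # ~80%: reversible under these conditions
recovery_pct(slope_21b,  slope_dmso)  # ~30%: slowly reversible/irreversible
recovery_pct(slope_bapn, slope_dmso)  # ~0%:  irreversible
```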
Pharmacophore and QSAR Modeling. Binding Mode/Pharmacophore Hypothesis Generation. With the dearth of protein−ligand crystal structures for LOXL2 in its active form, we decided to embark upon a ligand-based approach to propose a legitimate binding conformation for SAR analysis and modeling. FieldTemplater 39,40 was used to derive a pharmacophore model by comparing conformational ensembles of molecules using their electrostatic and hydrophobic characters to identify common motifs. The field point pattern for a conformer of 7a is shown in Figure 5.
The field-based alignment can be independent of chemical structure, allowing alignment of molecules from different series. The compounds reported in this article were thus augmented with a set of LOXL2 inhibitors obtained from the literature, 20 along with in-house inhibitors not explicitly described in this article, with only those compounds with pIC50 > 5 included (see Supporting Information). The collection of 54 molecules (with IC50 data) was visually inspected, and the four most active, most structurally diverse compounds were selected for pharmacophore modeling (7a, pIC50 = 7.07; 21e, pIC50 = 6.84; JMC2017-31, 20 pIC50 = 6.55; and JMC2017-33, 20 pIC50 = 6.51).
The FieldTemplater experiment was run using Normal (large molecules) conformation hunt settings, with a group constraint placed on the cationic NH3+ groups to force their alignment in the templated result, since these compounds are all believed to follow the same mechanism of inhibition via covalent binding of the aminomethylene group to the LTQ cofactor. The best scoring template consisting of all four compounds was taken forward to Forge, 40,41 and the 7a and 21e conformations were used as references for the field-based alignment of the other molecules in the data set.
Structure−Activity Relationship Modeling. We attempted to calculate statistically relevant mathematical models to predict activities of new compounds. Pleasingly, we were rewarded with two predictive, complementary models as described below.
Field-Based 3DQSAR. 40,41 Cresset's approach to 3DQSAR is similar to traditional CoMFA; 42,43 however, there are some striking differences in how sampling points are selected and in the use of irregular grids, such that calculation speed is greatly improved. The 54-compound data set was randomly partitioned to put 15% in the test set (8 molecules), leaving 46 in the training set, upon which the model was built; it was tested with 50 y-scrambles, followed by leave-one-out (LOO) cross-validation. The resulting three-component model features an r² of 0.856, a q² of 0.665, an RMSE of 0.147, and a Kendall's tau 44 value of 0.760, a model we believe to be statistically relevant 45 and able to predict the rank order of activity.
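For readers who wish to reproduce this style of validation, the statistics above can be computed from leave-one-out predictions as in the short R sketch below; y and y_loo are hypothetical vectors of observed and LOO-predicted pIC50 values, not the actual model output.

```r
# Minimal sketch, assuming observed (y) and leave-one-out predicted (y_loo)
# pIC50 vectors of equal length.
press <- sum((y - y_loo)^2)                 # predictive residual sum of squares
tss   <- sum((y - mean(y))^2)               # total sum of squares
q2    <- 1 - press / tss                    # cross-validated q^2
rmse  <- sqrt(mean((y - y_loo)^2))          # root-mean-square error
tau   <- cor(y, y_loo, method = "kendall")  # rank-order agreement
```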
Forge can be used to visualize the field/steric contributions to predicted activities, as demonstrated in Figure 6c; these plots were helpful in rationalizing the SAR observed for the reported compounds. The field-based 3DQSAR model is shown in Figure 6a and Figure 6b, with full model details included in the Supporting Information.
As a follow-up and to build confidence in the model and binding mode hypothesis, an alternative set of machine-learning regression models (k-nearest neighbor (kNN), random forest (RF), support vector machine (SVM), and relevance vector machine (RVM)) was used to model the data. 41 The statistics for the RVM full model on the training set were r2 = 0.852, RMSE = 0.150, and Kendall's tau = 0.757. For the cross-validation, the model statistics were q2 = 0.649, RMSE = 0.230, and Kendall's tau = 0.638. This model is similarly predictive to the field-based 3DQSAR model described earlier, and the calculation of multiple predictive models from a set of molecules aligned to a common binding mode hypothesis lends support to our having derived a sensible binding mode in the absence of protein−ligand crystallographic information. While not reported here, it is noteworthy that the other machine learning models also have statistics that suggest that they are predictive.
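A sketch of this kind of cross-check using scikit-learn counterparts of three of the learners is shown below (the RVM is not part of scikit-learn and is omitted here); again the data are random stand-ins for the aligned field descriptors.

```python
# Illustrative cross-check with scikit-learn regressors; hyperparameters
# are generic defaults, not the values used in the study.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_predict
from sklearn.neighbors import KNeighborsRegressor
from sklearn.svm import SVR

models = {
    "kNN": KNeighborsRegressor(n_neighbors=5),
    "RF": RandomForestRegressor(n_estimators=500, random_state=0),
    "SVM": SVR(C=10.0, gamma="scale"),
}

rng = np.random.default_rng(1)
X = rng.normal(size=(46, 10))
y = X[:, 0] + 0.1 * rng.normal(size=46)

for name, model in models.items():
    y_cv = cross_val_predict(model, X, y, cv=KFold(5, shuffle=True, random_state=0))
    q2 = 1.0 - np.sum((y - y_cv) ** 2) / np.sum((y - y.mean()) ** 2)
    print(f"{name}: cross-validated q2 = {q2:.3f}")
```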
Activity Atlas Analysis: Activity Cliff Summary. In tandem with the predictive 3DQSAR model, Activity Atlas/Activity Miner 40,41 was used to conduct a qualitative assessment of the SAR for the data set. The summary plots of the calculated electrostatic and steric activity cliffs are shown in Figure 7 and compare favorably to the 3DQSAR model visualizations. This information is built up from a pairwise analysis of all molecules in their aligned conformation, automatically examining activity cliffs: highly similar pairs where there is a large change in activity. The most potent and least potent compounds are shown in the context of the activity cliff summary.
The predicted SAR around the thiazole moiety (Figure 7a) is consistent with the observed decrease in potency when the thiazole nitrogen is replaced by the larger CX groups (3b, 3c), suggesting that this position is sterically restricted and requires negative electrostatics. In addition, the requirement for negative electrostatics is consistent with the moderate decrease in potency of the thiophene analogues (3a) and the 2,4-substituted thiazoles (22b). Examination of the SAR around the thiazole C-5 substituent using Activity Atlas suggests that the meta-position of the phenyl group (R1) is predicted to favor large groups; in 21b, replacement of phenyl with an N-methyl pyrazolyl substituent meets both the desired steric and electrostatic conditions near the 5-position. With regard to the 3-position, the activity cliff summary (Figure 7b) can be used to rationalize the drop in potency on varying the methanesulfonyl group of 7a to an ethyl (7e) or tert-butyl (7g) group, which increases the number of cliff violations (i.e., a mismatching of field points with the activity cliff summary): In 7a, the 3-sulfonyl group has favorable electrostatics and is able to thread the needle of a sterically unfavorable region; a larger tert-butyl group has unfavorable electrostatics and multiple steric clashes that result in diminished activity. Further, in the context of the model, the central phenyl group is predicted to have a less electron-rich π-cloud, in line with the observed improvement in potency when small electron-withdrawing groups are present in the 3-position.
We have developed predictive field-based 3DQSAR and visual qualitative models of activity based on the LOXL2 inhibition data acquired during the development of the aminomethylenethiazole inhibitors. This will be a useful tool to aid further development of the series as well as to design new chemical inhibitors in the future.
Figure 7. Activity Atlas/activity cliff summary plots for LOXL2 activity shown with (a) 7a around the thiophene/thiazole core, (b) 7a, 7e, and 7g to illustrate steric and electrostatic contributions (using field points, as described above) that rationalize the observed SAR. All isosurfaces are shown at ≥2.0 confidence level.
■ SYNTHETIC CHEMISTRY
Sulfone-linked AMTz analogues were synthesized from N-Boc-protected (5-bromothiazol-2-yl)methanamine 23 and the appropriate thiol, using palladium-catalyzed coupling methods, as shown in Scheme 1. In the case of aryl thiols, tris(dibenzylideneacetone)dipalladium catalyst and XantPhos ligand were used in conjunction with sodium tert-butoxide base in a 4:1 solvent mixture of toluene/tert-butanol. In the case of alkyl thiols, DIPEA was found to be preferable as a base and toluene was used as the solvent. Oxidation of the sulfide was achieved using m-CPBA to afford the desired sulfone, and subsequent Boc-removal using 4 M HCl in dioxane gave rise to the target AMTz inhibitors.
Substituted aryl thiols that were not commercially available were synthesized according to the method shown in Scheme 2. 46 Palladium-catalyzed coupling of an aryl halide with 2-ethylhexyl 3-mercaptopropanoate afforded a thiol surrogate that also functions as a thiol protecting group for further chemical modification. The thiol surrogate could then be used directly in a palladium-catalyzed coupling reaction, whereby deprotection was achieved in situ, or the protecting group can be removed using sodium ethoxide to afford the desired aryl thiol.
AMTz analogues bearing aryl groups directly attached to the thiazole ring were synthesized according to the method shown in Scheme 3. Suzuki reaction of N-Boc-protected (5-bromothiazol-2-yl)methanamine (23) with a boronic acid or ester, followed by acid-mediated deprotection, afforded the desired phenyl-linked target compounds.
Synthesis of the thiol surrogate used in the synthesis of 7a and close analogues was achieved starting from 1,3-dibromo-5-(methylsulfonyl)benzene (Scheme 4). Palladium cross-coupling with 2-ethylhexyl 3-mercaptopropanoate installed the protected thiol group, and Suzuki coupling with phenylboronic acid at the remaining bromo-position completed the synthesis of the required thiol reactant. In situ deprotection followed by coupling with a 5-bromo-2-AMTz intermediate (23 or 24) resulted in the corresponding sulfide-linked compound. Direct N-Boc-deprotection provided the sulfide-linked final compound 7d, or oxidation using either 1 or 2 equiv of m-CPBA prior to deprotection resulted in the sulfoxide- or sulfone-linked inhibitors, 7c and 7a or 7b, respectively.
Palladium coupling was then carried out with the thiol surrogate, and m-CPBA oxidation of the resulting bis-sulfide followed by treatment with HCl afforded the desired halogenated thiophene analogues.
■ CONCLUSION
Following our recent discovery of a potent, selective, and orally bioavailable LOX inhibitor, we observed that replacement of the thiophene core with a 2-aminomethylene-5-sulfonyl thiazole core leads to potent, irreversible LOXL2 inhibitors. SAR investigations revealed trends similar to those seen in the analogous AMT series, with potencies of <0.1 μM achieved, and enabled development of a predictive LOXL2 3DQSAR model. Selectivity studies concerning LOX and LOXL3 isoforms revealed a modest selectivity toward LOXL2 in our main series inhibitors, while 2-aminomethylene-4-sulfonyl thiazole regioisomers exhibited excellent selectivity for LOXL2 and thus have the potential to be used as probe compounds. Further selectivity and ADME assessment led to the discovery of 21b, which has an improved PK profile and demonstrates excellent antitumor efficacy in a LOX-driven GEMM breast cancer model.
■ EXPERIMENTAL SECTION
Synthesis of Inhibitors. All chemicals, reagents, and solvents were purchased from commercial sources and were used as received. Flash chromatography was performed on a Biotage Isolera or Combiflash Rf + UV−vis flash purification system using prepacked silica gel cartridges (Biotage) with HPLC grade solvents. Thin layer chromatography (TLC) analysis was performed using silica gel 60 F-254 thin layer plates and visualized using UV light (254 nm) and/or developed with vanillin stain. LCMS and HRMS analyses of chemical compounds were performed on an Agilent 1200 series HPLC and diode array detector coupled to a 6210 time-of-flight mass spectrometer with a multimode ESI source; or a Waters Acquity UPLC or I-class UPLC with a diode array detector coupled to a Waters G2 QToF, SQD, or QDa mass spectrometer fitted with a multimode ESI/APCI source. 1H, 19F, and 13C NMR spectra were recorded using a Bruker Avance 500, 400, or 300 MHz spectrometer using an internal deuterium lock. Chemical shifts are expressed in parts per million (ppm), and splitting patterns are indicated as follows: br, broad; s, singlet; d, doublet; t, triplet; q, quartet; p, pentet; h, hextet; m, multiplet. All coupling constants (J) are reported in hertz (Hz). All final inhibitors submitted for biological evaluation were at least 95% pure by HPLC−MS. Synthesis of inhibitors 3a, 4, 5, 6, and 8 has previously been described in the literature. 22 Below is a representative synthesis of compound 7a and analytical data for all final inhibitors. All tested inhibitors have purity of >95% (LCMS/UV).
were then added with stirring, and the solution was bubbled with nitrogen for a further 5 min before sealing the flask and heating to 110 °C, with stirring for 18 h. After cooling to room temperature, the reaction mixture was diluted with ethyl acetate (60 mL) and washed with water (50 mL) and brine (50 mL), dried over anhydrous magnesium sulfate, filtered, and concentrated to give the crude product, which was purified using flash column chromatography (5−100% EtOAc in PE) to give the title compound (664 mg, 72% purity) as a clear pale yellow oil, which was used in the subsequent transformation as an impure mixture.
(333 mg, 10 mol %) and K2CO3 (796 mg, 5.76 mmol) were then added with stirring, and the mixture was bubbled with nitrogen for a further 5 min before sealing the flask and heating at 100 °C with stirring for 18 h. After cooling to room temperature the reaction mixture was diluted with ethyl acetate (60 mL) and washed with brine (50 mL), dried over anhydrous magnesium sulfate, filtered, and concentrated to give the crude product, which was purified using flash column chromatography (5−80% EtOAc in PE) to give the title compound (998 mg, 72% purity) as a clear pale yellow oil, which was used in the subsequent transformation as an impure mixture.
, with stirring for 18 h. After cooling to room temperature, the reaction mixture was diluted with ethyl acetate (60 mL) and washed with water (50 mL) and brine (50 mL), dried over anhydrous magnesium sulfate, filtered, and concentrated to give the crude product, which was purified using flash column chromatography (5−100% EtOAc in PE) followed by reversed-phase chromatography (C18 silica, 5−95% CH3CN in water) to give the title compound (108 mg, 17%) as a yellow oil.
In Silico Modeling. All compounds were imported to Forge and subjected to a field-based alignment to the reference structure (the FieldTemplater model described in the article), using maximum common substructure to guide the alignment; a similarity score was calculated as the average of the field and shape similarity scores. 40 The calculated alignments were visually inspected to ensure the best alignment and adjusted as required.
PAINS Assessment. To identify reactive compounds that might exhibit interference in biochemical assays, the PAINS filters as described by Baell and Holloway 50 were curated as SMARTS and scripted as a flagging protocol deployed in Vortex (version 2018.09.76561.53-s, 2018, https://www.dotmatics.com/products/ vortex) and Pipeline Pilot (Dassault Systemes BIOVIA, BIOVIA Pipeline Pilot, release 2018, San Diego, Dassault Systemes, 2018). The 480 patterns were used to recognize structures that may result in nonspecific binding to multiple biological targets by virtue of comprising one or more fragments established to be of concern. No LOXL2 inhibitor in this study showed any potential PAINS liability when screened against this PAINS filter.
LOX Protein Preparation and Enzyme Assays. LOX enzyme was extracted from pig skin by the method of Shackleton and Hulmes. 51 LOXL2 and LOXL3 were purchased from R&D Systems. LOX, LOXL2, and LOXL3 catalytic activities were determined using the Promega ROS-Glo assay kit with cadaverine dihydrochloride as substrate, BAPN as the reference inhibitor control, a preincubation time of 20 min, 1 h, or 3 h, and nine dilutions from a top concentration of 10 μM or 100 μM.
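For reference, IC50 values from such a nine-point dilution series are conventionally obtained by fitting a four-parameter logistic curve; the sketch below (with made-up response data, and a generic fit rather than the assay kit's own analysis software) illustrates this.

```python
# A minimal sketch of fitting an IC50 from a nine-point 1:3 dilution
# series starting at a 10 uM top concentration, using the standard
# four-parameter logistic model. The response data are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def four_pl(conc, bottom, top, ic50, hill):
    """Fractional activity as a function of inhibitor concentration."""
    return bottom + (top - bottom) / (1.0 + (conc / ic50) ** hill)

conc = 10e-6 / 3.0 ** np.arange(9)            # nine 1:3 dilutions from 10 uM
true = four_pl(conc, 0.02, 1.0, 2e-7, 1.0)    # hypothetical dose-response
resp = true + np.random.default_rng(2).normal(scale=0.02, size=9)

popt, _ = curve_fit(four_pl, conc, resp, p0=[0.0, 1.0, 1e-7, 1.0])
print(f"fitted IC50 = {popt[2] * 1e6:.3f} uM")
```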
LOXL2 Jump Dilution Assay. LOXL2 catalytic activity was determined using the Amplex Red hydrogen peroxide assay kit with cadaverine dihydrochloride as substrate. LOXL2 at 100-fold final assay concentration and compound at 10 × IC50 were preincubated for 1 h. The enzyme/inhibitor mixture was then diluted 100-fold into a solution containing substrate and detection reagents and read kinetically every 5 min.
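The expected behavior in this assay follows from a back-of-the-envelope calculation: assuming simple, rapidly reversible inhibition with fractional activity 1/(1 + [I]/IC50), dilution from 10 × IC50 to 0.1 × IC50 should restore roughly 91% of control activity, so a substantially smaller regain (as seen for 21b above) points to slow reversibility or covalent commitment. A minimal sketch:

```python
# Back-of-the-envelope check of the jump-dilution logic, assuming simple
# rapidly reversible inhibition: fractional activity = 1 / (1 + [I]/IC50).
# Preincubation at 10x IC50, then a 100-fold dilution to 0.1x IC50.
def fractional_activity(i_over_ic50: float) -> float:
    return 1.0 / (1.0 + i_over_ic50)

print(f"during preincubation: {fractional_activity(10.0):.1%} activity")
print(f"after 100-fold dilution: {fractional_activity(0.1):.1%} activity")
# ~9% -> ~91%: a fully reversible inhibitor should recover close to 91%
# of control activity, whereas an irreversible one stays inhibited.
```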
Amine Oxidase Assays. All amine oxidase assays were performed with concentrations as above. MAO-A and MAO-B enzymes were purchased from Sigma. The catalytic activity of MAO-A and MAO-B was determined using the Promega MAO-Glo assay kit (substrate included), with clorgyline and deprenyl as reference inhibitor controls, respectively. DAO was purchased from Sigma, and the catalytic activity was determined using the Promega ROS-Glo assay kit, with aminoguanidine as the reference inhibitor control. SSAO was purchased from Sigma. SSAO catalytic activity was determined using the Promega MAO-Glo assay kit, with mofegiline as the reference inhibitor control.
Assessment of Compound 21b as a Substrate for Amine Oxidases. The catalytic activities of MAO-A, MAO-B, and SSAO with compound 21b as a substrate were determined using the respective enzymes described above, and the hydrogen peroxide produced was quantified using an Amplex red monoamine oxidase assay kit. p-Tyramine was used as the positive substrate control for MAO-A and MAO-B, and benzylamine was used for SSAO.
MLM Stability Assay. Mouse liver microsomes (CD1 female; M1500) and rat liver microsomes (Sprague Dawley female; R1500) were purchased from Tebu-bio, and the assay was performed by methods previously described. 21 Inhibitors at 10 μM concentration incubated with the microsomes were assessed at 0, 15, and 30 min. Control samples containing no microsomes and no cofactors were also assessed at 0 and 30 min. Samples were extracted by protein precipitation, and centrifugation for 20 min in a refrigerated centrifuge (4°C) at 3700 rpm. The supernatant was analyzed by LC−MS/MS for % metabolized over time.
Animal Procedures. All procedures involving animals were performed under licenses PPL-70/7635, 70/7701, and PE3DF1A5B and National Home Office regulations under the Animals (Scientific Procedures) Act 1986. Procedures were approved by the Animal Welfare and Ethical Review Bodies (AWERB) of the CRUK Manchester Institute and the Institute of Cancer Research and reported in accordance with ARRIVE guidelines. All mice and rats were maintained in pathogen-free, ventilated cages in the Biological Resources Unit at Cancer Research UK Manchester Institute and the Biological Services Unit at The Institute of Cancer Research. All mice and rats were allowed free access to irradiated food and autoclaved water in a 12 h light/dark cycle with room temperature at 21 ± 2°C. All cages contained wood shavings, bedding, and a cardboard tube for environmental enrichment. PyMT-driven breast cancer model mice (FVB background) were bred in a specific pathogen-free facility at The University of Manchester (U.K.) under a Home Office approved license.
In Vivo PK. Female Balb/C or CD1 nude mice (Charles River Laboratories) at 6 weeks of age were used for the mouse PK analyses. The mice were dosed orally by gavage (50 mg kg−1 in DMSO/water 1:19 v:v; n = 21) or intravenously in the tail vein (10 mg kg−1 in DMSO/Tween20/saline 10:1:89 v:v:v; n = 24). Blood samples were taken at seven (po) or eight (iv) time points between 5 min and 24 h after a single dose of the inhibitor. Three mice were used per time point per route; average values are reported. They were placed under halothane or isoflurane anesthesia, and blood for plasma preparation was taken by terminal cardiac puncture into heparinized syringes. Female Sprague Dawley rats (Charles River Laboratories) weighing between 170 g and 200 g were used for the rat PK analyses. The rats were dosed orally by gavage (20 mg kg−1 in DMSO/water 1:19 v:v; n = 21) or intravenously in the tail vein (4 mg kg−1 in DMSO/Tween20/saline 10:1:89 v:v:v; n = 24). Blood samples were taken at five (po and iv) time points between 5 min and 8 h after a single dose of the inhibitor. One rat was used per route, with serial bleeds taken through the time points. They were placed in a heated box for 10 min prior to sampling to increase vasodilation, and blood for plasma preparation was taken by tail vein bleed into heparinized tubes. Plasma samples, obtained after blood was spun at 1300 rpm for 3 min, were pipetted into cryovials, immediately snap frozen in liquid nitrogen, and then stored at −80 °C prior to analysis.
In Vivo Antitumor Efficacy. LOX inhibitor treatment was carried out in a genetically engineered MMTV-PyMT driven mouse breast cancer model where female mice were randomized as described previously. 21 Mice were treated daily by oral gavage with 70 mg/kg compound 21b (n = 3) in vehicle (5% DMSO/2.5% Tween20 in water), and controls (n = 5) received vehicle alone or were untreated. Oral administration of 21b was initiated at day 57 (n = 1) and day 61 (n = 2) with all treatments continuing for 34 days. The spontaneous breast tumors arising in the model were measured twice weekly. All of the controls bar one were culled due to large tumor size by day 95. All of the treated were culled at the end of treatment at day 91 (n = 1) and day 95 (n = 2); none were culled due to reaching license limit tumor volumes. Statistical significance was calculated using Welch's t test on day 91 (*p = 0.0367), utilizing the final measured tumor volume of the culled control mice and linear interpolation for the two remaining control mice (between measurements made on days 89 and 92). All animals allocated to the study were used.
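As an illustration of the statistical comparison described above, the sketch below runs Welch's t test with scipy and linearly interpolates the day-91 volumes for control mice measured on days 89 and 92; the tumor volumes shown are placeholders, not the study measurements.

```python
# Sketch of the day-91 statistics with placeholder tumour volumes (mm^3);
# Welch's t test via scipy (equal_var=False), with linear interpolation
# between day-89 and day-92 measurements for surviving control mice.
import numpy as np
from scipy.stats import ttest_ind

def interp_day91(v89: float, v92: float) -> float:
    return float(np.interp(91.0, [89.0, 92.0], [v89, v92]))

control = np.array([1500.0, 1650.0, 1400.0,          # culled: final volumes
                    interp_day91(1200.0, 1350.0),
                    interp_day91(1100.0, 1250.0)])
treated = np.array([600.0, 750.0, 700.0])            # n = 3, hypothetical

t, p = ttest_ind(treated, control, equal_var=False)  # Welch's t test
print(f"t = {t:.2f}, p = {p:.4f}")
```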
Commercial ADME-T Services. hERG inhibition was determined using the "hERG human potassium ion channel cell based antagonist Qpatch assay" by Eurofins Ltd. Cell permeability was determined using the "Caco-2 permeability assay" by Cyprotex Ltd.
Supporting Information
The Supporting Information is available free of charge on the ACS Publications website at DOI: 10.1021/acs.jmedchem.9b01112.
Experimental procedures and characterization data for all synthetic intermediates; NMR and LCMS/UV spectra for lead inhibitors (PDF) Compound alignment data set used to derive pharmacophore/binding mode hypothesis; calculated field-
Influence of Genetic Polymorphisms of Tumor Necrosis Factor Alpha and Interleukin 10 Genes on the Risk of Liver Cirrhosis in HIV-HCV Coinfected Patients
Objective Analysis of the contribution of genetic factors (single nucleotide polymorphisms (SNP) at positions -238 and -308 of the tumor necrosis factor alpha (TNF-α) and -592 of the interleukin 10 (IL-10) promoter genes) and of classical factors (age, alcohol, immunodepression, antiretroviral therapy) to the risk of liver cirrhosis in human immunodeficiency virus (HIV)-hepatitis C virus (HCV) coinfected patients. Patients and Methods Ninety-one HIV-HCV coinfected patients (50 of them with chronic hepatitis and 41 with liver cirrhosis) and 55 healthy controls were studied. Demographic characteristics, risk factors for HIV-HCV infection, HIV-related (CD4+ T cell count, antiretroviral therapy, HIV viral load) and HCV-related (serum ALT concentration, HCV viral load, HCV genotype) characteristics, and polymorphisms at positions -238 and -308 of the TNF-α and -592 of the IL-10 promoter genes were studied. Results The evolution time of the infection was 21 years in both patient groups (chronic hepatitis and liver cirrhosis). The group of patients with liver cirrhosis showed a lower CD4+ T cell count at inclusion in the study (but not at diagnosis of HIV infection), a higher percentage of individuals with previous alcohol abuse, and a higher proportion of patients with the genotype GG at position -238 of the TNF-α promoter gene; the polymorphism at -592 of the IL-10 promoter gene approached statistical significance. Serum concentrations of profibrogenic transforming growth factor beta 1 (TGF-β1) were significantly higher in healthy controls with genotype GG at -238 of the TNF-α promoter gene. Linear regression analysis demonstrated that the genotype GG at -238 of the TNF-α promoter gene was the independent factor associated with liver cirrhosis. Conclusion These results stress the importance of immunogenetic factors (the TNF-α polymorphism at position -238), over other previously accepted factors (age, gender, alcohol, immunodepression), in the evolution to liver cirrhosis among HIV-infected patients with established chronic HCV infection.
Introduction
Chronic infection with hepatitis C virus (HCV) is characterized by a broad spectrum of clinical manifestations that can culminate in decompensated cirrhosis. An estimated 20-30% of infected individuals will develop cirrhosis while others largely remain asymptomatic [1].
Liver fibrosis is the most important prognostic factor in chronic HCV-infected patients [2]. The hepatic stellate cell is the major cell responsible for fibrosis in the liver, with activation of these cells being a key fibrotic event [3,4]. The influence of inflammatory mediators on this liver process has been theorized [5]: impaired intestinal permeability and microbial translocation favour the presence of increased serum endotoxin or lipopolysaccharide (LPS) concentrations in patients with chronic hepatopathies [6]. After being recognised by a toll-like receptor (toll-like receptor 4, TLR4), endotoxin signalling triggers a cascade that leads to proinflammatory cytokine production, including tumour necrosis factor alpha (TNF-α) synthesis [7,8]. TLR4 can also detect endogenous ligands, many of which are abundant during tissue injury, such as hyaluronan, fibronectin and heat shock proteins [7].
TNF-α can potentially affect liver fibrogenesis by stimulating hepatic stellate cells [9]. The pathogenic importance of TNF-α in liver disease has been previously demonstrated: besides the increased concentration of TNF-α in the liver of patients with chronic hepatitis C [10], it has been observed that serum levels of this cytokine correlate with the histological grading score of hepatitis [11]; moreover, patients with increased serum levels of TNF-α or its receptors showed reduced survival [12].
A wide range of TNF-α production has been observed and can be attributed to polymorphisms in the TNF-α promoter and their corresponding extended HLA haplotypes [13]. In particular, two common biallelic variants at the -308 (G or A) and -238 (G or A) positions of the TNF-α promoter have been the first to receive attention [14]. Polymorphisms at the -308 and -238 positions of the TNF-α promoter have been implicated in the variability of the histological severity of chronic hepatitis C infection [15,16,17,18,19]. A possible explanation for the variable progression of liver fibrosis was provided by Wilson et al [20] with the demonstration that carriage of the -308 A allele, a much stronger transcriptional activator than the -308 G allele in reporter gene assays, has direct effects on TNF-α gene regulation, which may be responsible for the association with higher constitutive and inducible levels of TNF-α. However, a meta-analysis of 11 different studies on this topic did not detect an association between this polymorphism and the risk of liver cirrhosis [21]. The functional consequences of the -238 A allele, compared with the -238 G allele, are not yet clear [22].
Other cellular cytokine genes in which genetic variation has been examined within the context of fibrotic disease include interleukin 10 (IL-10). IL-10 is an anti-inflammatory cytokine that downregulates the synthesis of pro-inflammatory cytokines, including TNF-α, and has a modulatory effect on hepatic fibrogenesis [15]. IL-10 levels differ widely between individuals, possibly because of polymorphisms in the promoter region of the IL-10 gene [23]. IL-10 polymorphisms have been studied in the context of hepatic fibrosis, with controversial results [24,25,26,27,28].
In addition to the possible contribution of genetic factors, evolution to cirrhosis in HCV-induced chronic hepatitis depends on the presence of coinfection by hepatitis B virus, alcohol ingestion, or immunodeficiency (immunomodulation to prevent graft rejection, or HIV coinfection), among others [29]. Indeed, liver fibrosis progression to cirrhosis is faster in HIV-HCV coinfected individuals than in HCV-monoinfected subjects [30,31]. In fact, complications of hepatitis C virus (HCV) infection are one of the main causes of death in human immunodeficiency virus (HIV)-coinfected patients [32]. Thus, we would like to address the question of whether the possible genetic contribution to liver fibrosis progression might be overruled by more vigorous stimuli, such as HIV coinfection. With this objective, we have assessed the influence of TNF-α and IL-10 single nucleotide polymorphisms on the progression of HCV-induced chronic hepatitis to cirrhosis in HIV-coinfected patients. Likewise, an analysis of the serum concentrations of pro-inflammatory (TNF-α, interleukin 6 (IL-6)), anti-inflammatory (IL-10) and fibrogenic (transforming growth factor beta 1 (TGF-β1)) molecules was performed in healthy controls and in patients with the diverse genotypes.
Design
This was a cross-sectional population association study.
Patients
All subjects were consecutively recruited from a prospectively collected cohort of HIV-infected patients treated at the HIV outpatient clinics of a university hospital. Caucasian patients with chronic HIV and HCV infection were studied. Patients were classified into those with cirrhosis (n = 41) and those with chronic hepatitis (n = 50). For the control group we studied a sample of healthy subjects recruited from voluntary hospital workers (n = 55), whose age and gender were comparable with the patients.
All were 18-70 years old. All patients had serum negative for hepatitis B surface antigen and for antinuclear, anti-smooth muscle and antimitochondrial antibodies. None had hemochromatosis.
Definitions
Positive serum antibody to HIV was required for the diagnosis of HIV infection. Patients were classified according to the 1993 Centers for Disease Control and Prevention classification of HIV infection. Spanish Group for AIDS Study guidelines (www.gesida.es) were used to indicate the antiretroviral treatment (HAART). Plasma HIV RNA load was quantified by polymerase chain reaction (PCR) assay; a value lower than 50 copies/ml was considered an undetectable HIV load. CD4+ T cell counts were determined by standard flow cytometry; values obtained at diagnosis of HIV infection and at the time of inclusion were considered.
Positive serum antibody to HCV and persistent (more than 6 months) HCV RNA were required for the diagnosis of chronic HCV infection. Diagnosis of chronic hepatitis or cirrhosis was established according to histological criteria when liver biopsy was performed [33], or by transient elastography, performed according to a standardized technique by one trained operator (JAGG); according to data validated in HIV-HCV coinfected patients using liver biopsy as reference, patients with a liver stiffness > 14.6 kPa were classified as individuals with cirrhosis [34]. Duration of hepatitis C infection was estimated using an interviewer-assisted questionnaire assessing risk factors for HCV infection. The earliest exposure was designated as the point of acquisition [35].
Table 3. Single nucleotide polymorphisms of TNF-α and IL-10 genes studied in patients with HIV-HCV coinfection with chronic hepatitis or liver cirrhosis.
Alcohol abuse was considered present when intake exceeded 50 g/day for at least 5 years.
Laboratory determinations
HIV-1 infection was diagnosed using an EIA (Abbott Laboratories, North Chicago, IL, USA) and confirmed by New Lav Blot I (Bio-Rad, Marnes La Coquette, France). Plasma HIV-1 viral load was determined by the Cobas Amplicor HIV Monitor (Roche Diagnostics, Basel, Switzerland); the cutoff for undetectable viral load was 50 copies/ml. Blood CD4+ T-cell counts were determined by flow cytometry (FACScan, Becton Dickinson Immunocytometry Systems, San Jose, CA, USA).
TNF-α and IL-10 polymorphisms
The -308 TNF-α polymorphism (rs1800629) consists of a G to A substitution at position -308 in the proximal promoter of the TNF-α gene. The -238 TNF-α polymorphism (rs361525) consists of a G to A substitution at position -238 in the proximal promoter of the TNF-α gene. The IL-10 polymorphism (rs1800872) consists of a C to A substitution at position -592 in the proximal promoter of the IL-10 gene. Each polymorphism was genotyped by predesigned Taqman assays (Applied Biosystems, Foster City, CA, USA), following the manufacturer's instructions.
Intracellular expression and LPS-stimulated secretion of cytokines
Blood samples from healthy controls with genotype GG (n = 10) and with genotype GA (n = 10) at -238 of the TNF-α promoter were collected in pyrogen-free heparinized tubes (Biofreeze, Costar). In a separate set of experiments, whole-blood samples were mixed 1:1 with RPMI 1640 (Gibco, Germany) and lipopolysaccharide (LPS; Escherichia coli 0111:B4, Difco, Detroit, MI) was added to a final concentration of 1000 ng/mL; cells were stimulated for 4 h at 37 °C under a 5% CO2 atmosphere. In addition, unstimulated baseline samples were obtained to act as a control. Concentrations of TNF-α, IL-6, TGF-β1 and IL-10 in the culture supernatants were assayed after 24 h of incubation.
Table 4. Intracellular expression of tumor necrosis factor alpha, interleukin 6, transforming growth factor beta 1 and interleukin 10 by monocytes from healthy individuals distributed as a function of the -238 promoter gene polymorphism.
Concentration of pro- and anti-inflammatory and fibrogenic molecules
Serum and culture supernatant concentrations of TNF-α, IL-6 and IL-10 were analyzed using the Milliplex MAP High Sensitivity Human Cytokine kit (Millipore, Billerica, MA, USA). Serum TGF-β1 levels were detected by Quantikine Human Immunoassay (R&D, Minneapolis, MN, USA).
Statistical analysis
Descriptive data were expressed as the median (25th-75th percentile interquartile range, IQR) or as absolute number (percentage). Qualitative variables, including genotype distribution, were compared by the chi-square test or Fisher's exact test when necessary. Quantitative variables were compared using the Mann-Whitney U test or ANOVA when necessary. The Pearson's correlation coefficient was used to analyse the association between quantitative variables. The independent predictive value of the genotypes and of the other variables associated with liver cirrhosis in the bivariate analysis was analyzed by linear regression analysis. A p value <0.05 was considered significant. The statistical analysis was carried out using the SPSS 15.0 statistical software package (SPSS Inc., Chicago, IL, USA).
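To make these tests concrete, the sketch below runs Fisher's exact test on a 2 × 2 genotype table and the Mann-Whitney U test on a quantitative variable using scipy; the counts and values are placeholders for illustration only, not the study data.

```python
# Hedged sketch of the bivariate comparisons described above, with
# placeholder data. scipy provides both tests used in the paper.
import numpy as np
from scipy.stats import fisher_exact, mannwhitneyu

# Genotype GG vs GA/AA in chronic hepatitis vs cirrhosis (placeholder):
table = np.array([[34, 16],   # chronic hepatitis: GG, GA/AA
                  [37, 4]])   # cirrhosis: GG, GA/AA
odds_ratio, p_geno = fisher_exact(table)

# CD4+ T cell counts at inclusion (placeholder values, cells/uL):
cd4_hepatitis = [520, 610, 480, 700, 540]
cd4_cirrhosis = [310, 420, 280, 390, 350]
_, p_cd4 = mannwhitneyu(cd4_hepatitis, cd4_cirrhosis)

print(f"genotype: OR = {odds_ratio:.2f}, p = {p_geno:.3f}; CD4: p = {p_cd4:.3f}")
```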
Ethical aspects
This study was performed according to the Helsinki Declaration. The project was approved by the hospital ethical research committee. Informed consent was obtained from each participant.
Results
Ninety-one HIV-HCV coinfected patients and fifty-five healthy controls were studied. Table 1 shows the genotype distribution and allele frequencies of the TNF-α -238 G to A, TNF-α -308 G to A and IL-10 -592 C to A gene promoter single nucleotide polymorphisms for the control group and the HIV-HCV coinfected patients. Observed genotypic frequencies approximated those expected based on allele frequency calculations, and thus conformed to Hardy-Weinberg equilibrium.
Bivariate comparisons between HIV-HCV patients with chronic hepatitis and cirrhosis
The two patients' groups, with and without cirrhosis, were comparable in terms of age, gender, risk factors for the infection, HCV viral load and viral genotypes. The frequency of alcohol use was significantly higher in patients with cirrhosis. More importantly, the estimated duration of HCV infection was comparable in patients with chronic hepatitis and those with cirrhosis (Table 2). Analysis of HIV-related immune characteristics demonstrated that the CD4+ T cell count at inclusion in the study was significantly lower in patients with cirrhosis, although the CD4+ T cell count at diagnosis of HIV infection showed no significant differences. The percentages of patients with undetectable HIV viral load due to antiretroviral therapy were similar in both groups. HCV-related characteristics demonstrated that there were no significant differences between the serum ALT concentration, HCV genotype, and HCV viral load of the two groups.
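The Hardy-Weinberg check mentioned above amounts to comparing the observed genotype counts with those expected from the estimated allele frequency; a minimal sketch with hypothetical counts (the study's actual counts are in Table 1, which is not reproduced here) is:

```python
# Sketch of a Hardy-Weinberg equilibrium test: observed genotype counts
# vs counts expected from the allele frequency, one-degree-of-freedom
# chi-square (3 genotype classes - 1 - 1 estimated allele frequency).
import numpy as np
from scipy.stats import chi2

def hardy_weinberg_p(n_aa: int, n_ab: int, n_bb: int) -> float:
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)          # frequency of allele A
    expected = np.array([p * p, 2 * p * (1 - p), (1 - p) ** 2]) * n
    observed = np.array([n_aa, n_ab, n_bb], dtype=float)
    stat = np.sum((observed - expected) ** 2 / expected)
    return float(chi2.sf(stat, df=1))

print(f"p = {hardy_weinberg_p(45, 9, 1):.3f}")  # hypothetical -238 counts
```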
Different distributions were noted in the single nucleotide polymorphisms among the patient groups (Table 3). Specifically, the TNF-α -238 A allele was significantly more common in patients who suffered from chronic hepatitis than in those with cirrhosis. Differences in the distribution of IL-10 genotypes approached significance. These differences persisted when only those patients diagnosed by liver biopsy (chronic hepatitis, n = 32; liver cirrhosis, n = 31) were considered (data not shown).
Analyses of the association between epidemiological, clinical, HIV- or HCV-related characteristics and TNF-α -238 genotypes did not show any significant differences. In brief, age, sex, alcohol ingestion, CD4+ T cell count at diagnosis and at inclusion, percentage of patients with undetectable HIV load, HCV genotypes, HCV load and serum ALT concentration were similar in patients with genotype GG or GA/AA of TNF-α -238 (p > 0.05 in each case).
Relation among the detected polymorphisms and the serum concentrations of pro- and anti-inflammatory and profibrogenic molecules
Serum concentrations of TNF-α, IL-6, IL-10 and TGF-β1 were measured separately in healthy controls and in patients with HIV-HCV coinfection, distributed in two groups, those without and those with liver cirrhosis. In healthy controls, increased serum concentrations of TGF-β1 were detected in individuals with genotype GG at the -238 position of the TNF-α promoter gene compared with those with genotype GA or AA. The remaining comparisons showed no significant differences among individuals with the different genotypes, both in healthy controls and in HIV-HCV coinfected patients (Figure 1).
Due to the relation between the polymorphism at the -238 position of the TNF-α promoter gene and the serum concentration of TGF-β1, stimulation experiments were performed. Ten healthy controls with the genotype GG at the -238 position of the TNF-α promoter gene and ten with the genotype GA were selected. No significant difference was detected when comparing the intracellular expression of TNF-α, IL-6, IL-10 and TGF-β1 in peripheral blood monocytes from both groups. LPS-stimulated secretion of these cytokines was analyzed in the culture supernatants. Peripheral blood mononuclear cells from healthy controls with the genotype GG at the -238 position of the TNF-α promoter gene synthesized significantly higher concentrations of TGF-β1 than those with the genotype GA. Concentrations of TNF-α, IL-6 and IL-10 were similar in the culture supernatants from individuals with genotype GG and GA (Table 4).
Multivariate analysis of parameters associated with cirrhosis
We evaluated, using a linear regression analysis, those factors independently associated with cirrhosis. The evaluated parameters were those whose differences between patients with chronic hepatitis and those with liver cirrhosis had p values lower than 0.1 in the bivariate analysis (age, alcohol abuse, CD4+ T cell count at inclusion, TNF-α -238 and IL-10 -592 polymorphisms) and others with possible clinical significance (CD4+ T cell count at diagnosis of HIV infection) (Table 5). Continuous variables were categorized as a function of their median values. In the resultant model, the polymorphism of TNF-α at -238 was the only factor associated with cirrhosis. Kaplan-Meier curves as a function of the TNF-α -238 polymorphism are shown in Figure 2.
Discussion
Several factors, including age, alcohol abuse, duration of the HCV infection and the presence of immunodepression, influence the natural history of HCV-induced chronic hepatitis to cirrhosis [1,2,29,30,31]. However, the contribution of genetic factors to this evolution has rarely been examined. Moreover, the independent contribution of genetic modifications to the evolution to liver cirrhosis in HIV-HCV coinfected patients has scarcely been studied, with the exception of the IL28B polymorphism [36].
The present work has analyzed the influence of genetic parameters on the risk of cirrhosis in a series of HIV-HCV coinfected patients. Selected genetic markers were those influencing the expression or secretion of molecules implicated in innate or acquired immunity, such as TNF-α and IL-10. In this cohort of Caucasian patients, we found that the TNF-α -238 polymorphism was involved in the risk of liver cirrhosis. Our data also indicated that differences in disease outcome among HIV-infected patients with established chronic HCV infections could have more to do with immunogenetic factors than with other previously accepted factors (age, gender, alcohol consumption, immunodepression), although several of them were associated with cirrhosis in the univariate analysis. This association persisted when only those patients with a histologic diagnosis of chronic hepatitis or liver cirrhosis were considered.
The A allele at the -238 position of the TNF-α promoter gene has been associated with stricter virological control in HIV-infected patients [37]. Moreover, several studies in HCV-monoinfected patients have reported that the A allele at the -238 position of the TNF-α promoter gene was associated with the development of chronic active hepatitis C, advanced fibrosis progression or a higher risk of cirrhosis [16,17], although others have not demonstrated any association [18]. Our study in HIV-HCV coinfected patients has demonstrated precisely the opposite finding: those patients with a GG genotype showed an increased association with cirrhosis. The discrepancy could be explained by the consideration in our study, and not in the others, of the several factors which can be implicated in the evolution (age, sex, alcohol abuse), those related with immunodepression (CD4+ T cell count at diagnosis or at inclusion), as well as the evaluation of the independent influence of each one in a multivariate analysis.
The A allele at the -238 position of the TNF-α promoter gene has been associated with more intense inflammatory activity [22]; however, data about its influence on parameters associated with fibrosis are lacking. After TNF-α activation, Kupffer cells secrete TGF-β1 [9], the most important fibrogenic molecule [38]. Levels of serum TGF-β1 have been correlated with the degree of liver fibrosis in patients with HCV-induced chronic hepatitis [39].
In patients with HCV-induced chronic hepatitis or cirrhosis, increases in several of these cytokines have been demonstrated, related to the liver inflammatory activity, to immune system activation and/or to the increased intestinal permeability [5,10,11,12,18]. Consequently, serum levels of these cytokines are not a reliable parameter of the innate ability of inflammatory/immune cells to secrete them in patients with hepatopathies. Thus, it is important to stress that in our study the association between the -238 TNF-α polymorphism and the serum concentration of TGF-β1 was observed in healthy individuals, in whom other factors influencing TGF-β1 secretion are presumably not involved. Our study has demonstrated that LPS-stimulated monocytes from healthy controls with the genotype GG synthesized significantly higher concentrations of TGF-β1 than those with a non-GG genotype and that the serum concentration of TGF-β1 is increased in healthy controls with the genotype GG in comparison with those with genotypes GA or AA. The other proinflammatory (TNF-α, IL-6) and anti-inflammatory (IL-10) molecules analyzed showed no significantly different levels in healthy controls. This is the first article in which the constitutive ability of monocytes to secrete fibrogenic factors as a function of the TNF promoter gene polymorphism has been assayed. Three explanations can be proposed: 1) Linkage disequilibrium with TNF-α genes is not probable because TNF-α is encoded on chromosome 6 [40] and TGF-β1 on chromosome 19 [41]. 2) The presence of polymorphisms of the TGF-β1 gene with influence on the secretion of this cytokine; this topic has not been studied in the present work.
3) The gene encoding TNF-α is located within the human leukocyte antigen (HLA) class III region, a region positioned between the HLA class I and class II regions. About 40% of the genes encoded within the HLA region are involved in immune processes [40]. Owing to the location of the TNF-α gene within the HLA region, a relation between HLA haplotypes and TNF-α polymorphisms, including an influence on disease severity or on the evolution of chronic hepatitis to cirrhosis, is possible. As a factor implicated in the evolution of inflammatory diseases, an influence of the TNF-α polymorphism on a complex inflammatory and immune cascade, inducing an increased ability to secrete TGF-β1, can be hypothesized. In any case, the more intense fibrogenic activity detected in patients with genotype GG could be influencing collagen secretion by hepatic stellate cells, favoring fibrosis progression in the liver.
The influence of the GG genotype at the -238 position of the TNF-α promoter gene was observed after a prolonged period: in fact, Kaplan-Meier analysis demonstrated that the curves of development of cirrhosis as a function of the presence of genotypes GG or GA and AA at the -238 position of the TNF-α promoter gene begin to be clearly different after 20 years of evolution (Figure 2). IL-10 is a cytokine that downregulates the inflammatory response and modulates liver fibrogenesis [42]. The polymorphism at the -592 position of the IL-10 promoter gene has been associated with more accelerated progression of HIV infection [43] and with persistent HCV infection [28]. Data about its influence on liver fibrosis progression are controversial [24,25,26,27]. In our series, the genotype CA at the -592 position of the IL-10 promoter gene was associated with a nearly significant higher frequency of liver cirrhosis in the bivariate analysis. This haplotype has been correlated with a decreased synthesis of IL-10 [23], suggesting that a lower anti-inflammatory activity could be implicated in the progression of liver fibrosis. However, serum concentrations of IL-10 were similar in healthy controls with the different IL-10 genotypes in our study. Moreover, the polymorphism at the -592 position of the IL-10 promoter gene was not associated with cirrhosis in the multivariate analysis.
Several characteristics support the validity of our results: a) No significant difference existed between the distribution of TNF-α and IL-10 genotypes in the healthy control and HIV-HCV groups. b) Populations of Spanish healthy controls and HIV- or HCV-infected patients show similar frequencies of the genotypes tested [37,44]. c) Data were similar when only those patients diagnosed with chronic hepatitis or cirrhosis by liver biopsy were considered. d) The evolution time from the infection until the moment of inclusion was similar in HIV-HCV patients with chronic hepatitis and those with liver cirrhosis, excluding the possibility that cirrhosis developed exclusively as a consequence of a longer duration of infection. e) Patients were carefully evaluated for each of the known factors influencing the evolution to cirrhosis (age, sex, ethanol abuse, CD4+ T cell count at diagnosis and at inclusion, antiretroviral therapy, undetectable HIV viral load).
In conclusion, a GG genotype at the -238 position of the TNF-α promoter gene influences the risk of liver cirrhosis in HIV-HCV coinfected patients, even more than the classically accepted factors influencing the progression of liver fibrosis. The implications of this polymorphism include the need for stricter vigilance and counseling about other risk factors in carriers.
Obtaining insights from high-dimensional data: sparse principal covariates regression
Background Data analysis methods are usually subdivided in two distinct classes: There are methods for prediction and there are methods for exploration. In practice, however, there often is a need to learn from the data in both ways. For example, when predicting the antibody titers a few weeks after vaccination on the basis of genomewide mRNA transcription rates, also mechanistic insights about the effect of vaccinations on the immune system are sought. Principal covariates regression (PCovR) is a method that combines both purposes. Yet, it misses insightful representations of the data as these include all the variables. Results Here, we propose a sparse extension of principal covariates regression such that the resulting solutions are based on an automatically selected subset of the variables. Our method is shown to outperform competing methods like sparse principal components regression and sparse partial least squares in a simulation study. Furthermore, good performance of the method is illustrated on publicly available data including antibody titers and genomewide transcription rates for subjects vaccinated against the flu: the genes selected by sparse PCovR are highly enriched for immune related terms and the method predicts the titers for an independent test sample well. In comparison, no significantly enriched terms were found for the genes selected by sparse partial least squares and out-of-sample prediction was worse. Conclusions Sparse principal covariates regression is a promising and competitive tool for obtaining insights from high-dimensional data. Availability The source code implementing our proposed method is available from GitHub, together with all scripts used to extract, pre-process, analyze, and post-process the data: https://github.com/katrijnvandeun/SPCovR.
Background
Traditionally, data analysis methods are divided in two classes with different goals: Methods for prediction (or, supervised learning) and methods for exploration (or, unsupervised learning). An example of the former is assessing whether someone is at risk for breast cancer; in this case the aim is to use currently available information to predict an unseen (often future) outcome. On the other hand, the goal of exploratory methods is to gain an understanding about the mechanisms that cause structural variation and covariation in the available information. For example, exploration of gene expression data collected over time after addition of serum gave not only insight in the transcriptional program but also in processes related to wound repair [1]. There are many cases, however, where it is of interest to reach both objectives and to predict an outcome of interest while simultaneously revealing the processes at play. This is for example the case in the study of [2]: The gene expression response soon after vaccination and the antibody titers much later in time were measured with the aim of both predicting immunogenicity and revealing new mechanistic insights about vaccines.
To reveal the underlying mechanisms, component- or matrix-decomposition-based methods can be used. Well known examples are principal component analysis (PCA) and the singular value decomposition [3]. Yet, another frequent use of such methods is in the context of prediction with many covariates: A popular approach is to first reduce the covariates to a limited number of components and to subsequently use these for prediction. This is known as principal components regression (PCR; see [4]). A drawback of this two-step approach is that the components are constructed with no account of the prediction problem and hence may miss the components that are relevant in predicting the outcome. This is especially true when the number of predictor variables is huge and represents a large diversity of processes, as is for example the case with genomewide expression data. Sparse regression approaches like the lasso [5] and elastic net [6], on the other hand, only focus on modeling the outcome with no account of the structural variation underlying the covariates. Hence, approaches are needed that find components that simultaneously reveal the underlying mechanisms and model the outcome of interest. Partial least squares (PLS; see for example [7]) and principal covariates regression (PCovR) [8] are such methods. Yet, partial least squares may have too strong a focus on prediction [9], while principal covariates regression can be flexibly tuned to balance between prediction of the outcome and reduction of the covariates to a few components.
Apart from implementation issues (meaning that existing PCovR software can only be used on data with a modest number of variables), a shortcoming of PCovR is that the components are based on a linear combination of all variables. This is undesirable when working with a large set of variables, both from a statistical and an interpretational point of view. First, the estimators are not consistent in the p > n case [10], and second, the interpretation of components based on a high number of variables is infeasible. Furthermore, components that are based on a limited set of selected variables better reflect the fact that many biological processes are governed by a few genes only. To overcome such issues in partial least squares, sparse methods have been developed [10,11]. Likewise, we propose here a sparse and efficient version of principal covariates regression. The proposed method offers a flexible and promising alternative to sparse partial least squares.
The paper is organized as follows. First we propose the PCovR method and its sparse extension (SPCovR), and we discuss its relation to (sparse) PLS. The (comparative) performance of SPCovR is evaluated in a simulation study and in an application to genomewide expression data collected for persons vaccinated against the flu [2]. The implementation of the SPCovR algorithm is available online (https://github.com/katrijnvandeun/ SPCovR) together with the scripts used for analyzing the data.
Sparse principal covariates regression
We will make use of the following notation: matrices are denoted by bold uppercase letters, the transpose by the superscript T, vectors by bold lowercase letters, and scalars by lowercase italics. Furthermore, we will use the convention to indicate the cardinality of a running index by the capital of the letter used to run the index (e.g., this paper deals with J variables with j running from 1 to J); see [12].
Formal model
The data consist of a block of predictor variables X and a block of outcome variables Y. We will assume all variables to be centered and scaled to sum of squares equal to one. Now, consider the following decomposition of the I × J_x matrix of covariates X, together with the following rule to predict the J_y outcome variables Y:

X = XWP_x^T + E_x = TP_x^T + E_x,    (1)
Y = XWP_y^T + E_y = TP_y^T + E_y,    (2)

with T = XW the I × R matrix of component scores, P_x the J_x × R matrix of variable loadings, P_y the J_y × R matrix of regression weights, and E_x, E_y the residuals. Note that we consider the general problem of a multivariate outcome, hence R regression coefficients for each of the J_y outcome variables. The component scores are often constrained to be orthogonal: T^T T = I, with I an identity matrix of size R × R. Despite this restriction there still is rotational freedom and the PCovR coefficients are not uniquely defined. The data model represented in (1) and (2) is one that summarizes the predictor variables by means of a few components and uses these as the predictors of the outcome. Note that the same model also underlies principal components regression and partial least squares.
Objective function
Principal covariates regression [8] differs from the former methods in the objective function used: minimize over W, P_x, and P_y, such that T^T T = I and with 0 ≤ α ≤ 1,

L(W, P_x, P_y) = α ‖X − XWP_x^T‖² / ‖X‖² + (1 − α) ‖Y − XWP_y^T‖² / ‖Y‖².    (3)

The parameter α is a tuning parameter giving either more weight to the prediction of the outcome (α close to 0) or to the reconstruction of the predictor variables (α close to 1). In fact, α = 1 corresponds to principal components regression while α = 0 corresponds to ordinary regression. Let R²_X denote the percentage of variance in X accounted for by T and R²_Y the percentage of variance in Y. It can be seen then that the criterion is equivalent to maximizing

α R²_X + (1 − α) R²_Y.    (4)

A solution to (3) based on the singular value decomposition of X was proposed by [13]. An efficient implementation that accounts for large data (either I or J large) can be found in the online code.
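As an illustration, a minimal numpy sketch of an SVD/eigendecomposition-based PCovR solution is given below. It assumes (as in the solution referenced above) that the orthonormal scores T are the leading eigenvectors of a weighted sum of XX^T and the projection of YY^T onto the column space of X; it is a didactic sketch, not the authors' released implementation.

```python
# Minimal PCovR sketch: scores T are the top-R eigenvectors of
# G = alpha * XX^T/||X||^2 + (1-alpha) * H YY^T H/||Y||^2, where
# H = X X^+ projects onto col(X). Then W = X^+ T, P_x = X^T T,
# P_y = Y^T T, so that T = XW is orthonormal and (3) is minimized.
import numpy as np

def pcovr(X, Y, R, alpha):
    ssx, ssy = np.sum(X ** 2), np.sum(Y ** 2)
    Xpinv = np.linalg.pinv(X)
    H = X @ Xpinv                              # projection onto col(X)
    G = alpha * (X @ X.T) / ssx + (1 - alpha) * (H @ Y @ Y.T @ H) / ssy
    vals, vecs = np.linalg.eigh(G)             # symmetric eigendecomposition
    T = vecs[:, np.argsort(vals)[::-1][:R]]    # top-R eigenvectors: scores
    W = Xpinv @ T                              # component weights
    Px, Py = X.T @ T, Y.T @ T                  # loadings, regression weights
    return W, Px, Py

rng = np.random.default_rng(3)
X = rng.normal(size=(20, 50))
Y = X[:, :3] @ rng.normal(size=(3, 1)) + 0.1 * rng.normal(size=(20, 1))
W, Px, Py = pcovr(X, Y, R=2, alpha=0.5)
print(np.round((X @ W).T @ (X @ W), 6))        # ~identity: T^T T = I
```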
Partial least squares is based on the optimization of the following criterion [8,10]:

max_{w_r} cov²(Xw_r, Y)    (5)

for r = 1, ..., R, such that w_r^T w_r = 1 for all r = 1, ..., R and w_r^T X^T X w_{r'} = 0 for r ≠ r'. Note that this is equivalent to maximizing

var(Xw_r) corr²(Xw_r, Y)    (6)

under the same restrictions. Criterion (6) is approximately equal to R²_X R²_Y and can be compared to criterion (4) to obtain some intuition about the similarities and differences between both methods. Given that the PLS and PCovR criteria are different, it can be expected that the obtained estimates are different as well. Whereas PLS cannot be expressed as a special case of PCovR with a particular value of the tuning parameter α, it has been shown to be a special case of continuum regression with the continuum regression parameter set equal to 0.5 [14].
A drawback of the principal covariates regression model is that the components are based on a linear combination of all the predictor variables. Having components that are characterized by a few variables only is easier to interpret and often a better reflection of biological principles. This motivates the introduction of a sparseness restriction on the component weights w_jr: minimize

α ‖X − XWP_x^T‖² / ‖X‖² + (1 − α) ‖Y − XWP_y^T‖² / ‖Y‖² + λ_1 |W|_1 + λ_2 |W|_2²,    (7)

with |W|_1 = Σ_{j,r} |w_jr| the lasso penalty and |W|_2² = Σ_{j,r} w_jr² the ridge penalty. λ_1 (λ_1 ≥ 0) and λ_2 (λ_2 ≥ 0) are tuning parameters for the lasso and ridge penalties respectively. The effect of the lasso is that it shrinks the coefficients, some (or many, for high λ_1) to exactly zero, thus performing variable selection. Note that the lasso penalty in Eq. (7) is imposed only on the component weights and not on the loadings P_x nor on the regression weights P_y. The penalty implies that some or many of the component weights will become zero; because the loadings and regression weights are not subject to the lasso penalty, these are not affected by the penalty. The ridge also introduces shrinkage and is included here for two reasons: to introduce stability in the estimated coefficients and to allow for more than I non-zero coefficients; this combination of penalties is known as the elastic net. Both the lasso and the elastic net are known to over-shrink the non-zero coefficients [15,16]. One way to undo the shrinkage of the non-zero coefficients is to re-estimate them using an ordinary least squares approach [17]. When α = 1, the objective function (7) reduces to the sparse PCA criterion [18] and the resulting estimates can also be obtained under a sparse principal components regression approach. When α = 0 and R = 1, the elastic net regression formulation is obtained [6] and the two problems are equivalent. Note that the introduction of the sparseness restriction eliminates the rotational freedom and that, under suitable conditions, the problem has a unique solution.
Similarly, sparse PLS approaches have been proposed that are based on the same penalties, i.e., maximization of the covariance criterion (5) with lasso and ridge penalties imposed on the weights:

$$\max_{w_r} \operatorname{cov}^2(Xw_r, Y) - \lambda_1 |w_r|_1 - \lambda_2 \|w_r\|_2^2. \quad (8)$$

This sparse PLS criterion is different from the SPCovR criterion (7) over the whole range of α. The two methods can be expected to yield different estimates.
Algorithm
The procedure that we will use to estimate the model parameters is one that estimates all R components simultaneously and not, as is often the case in the literature, one by one. The main benefit is that this gives control over the constraints that are imposed on the parameter estimates. More specifically, we offer the choice to constrain the loadings $P_x$ either to be orthogonal or length restricted ($\operatorname{diag}(P_x^T P_x) = \mathbf{1}$). The former is the usual constraint used in sparse PCA approaches [18]; the latter is more flexible and allows for correlated component loadings. Note that the length constraint is needed to avoid trivial solutions where very small component weights that satisfy the penalty are compensated by very high loadings. To solve the optimization problem in (7) under these constraints, we rely on a numerical procedure and alternate between conditional estimation of W given fixed values for P and of P given fixed values for W. For the moment we assume the number of components R and the values of the tuning parameters α, $\lambda_1$, and $\lambda_2$ to be given; how to tune these meta-parameters is discussed in the next subsection.
The conditional estimation of the weights W is based on a coordinate descent procedure and of the loadings on a restricted least-squares routine; both procedures are detailed in the Appendix. Using these routines, the loss is guaranteed to be non-increasing. Furthermore, because the loss is bounded from below by zero the algorithm converges to a stationary point (for suitable starting values).
To deal with the problem of local optima, a multistart procedure can be used. We recommend to use a combination of both a rational and several random starting configurations. A rational start may be obtained from the non-sparse principal covariates regression analysis. Note that often sparse multivariate approaches are initialized by a rational start only which does not account for the problem of local optima.
Algorithm 1 SPCovR
Input: Data X and Y, values for the tuning parameters R, $\lambda_1$, $\lambda_2$, α, the type of constraint on the loadings, a maximum number of iterations T, and some small convergence tolerance $\epsilon > 0$
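The alternating scheme can be sketched as follows in Python. This is our reconstruction, not the authors' implementation: the conditional update of W is done with a simple proximal-gradient step instead of the coordinate descent routine of the Appendix, the loadings update uses a plain length normalization, and the step size `eta` is a crude default.

```python
import numpy as np

def soft(Z, g):
    """Soft-thresholding (proximal operator of the lasso penalty)."""
    return np.sign(Z) * np.maximum(np.abs(Z) - g, 0.0)

def spcovr(X, y, R, alpha, lam1, lam2, T_max=200, eps=1e-8):
    I, J = X.shape
    sx, sy = np.sum(X ** 2), np.sum(y ** 2)
    W = np.random.default_rng(1).standard_normal((J, R)) * 0.01
    eta = 0.5 / (np.linalg.norm(X, 2) ** 2 * (alpha / sx + (1 - alpha) / sy)
                 + lam2)                       # crude step size
    loss_old = np.inf
    for _ in range(T_max):
        T = X @ W
        # Loadings and regression weights given W (length-restricted LS).
        Px = np.linalg.lstsq(T, X, rcond=None)[0].T
        Px /= np.maximum(np.linalg.norm(Px, axis=0), 1e-12)  # diag(Px'Px)=1
        py = np.linalg.lstsq(T, y, rcond=None)[0]
        # Proximal-gradient update of W given the loadings.
        grad = ((2 * alpha / sx) * X.T @ (X @ W @ Px.T - X) @ Px
                + (2 * (1 - alpha) / sy) * np.outer(X.T @ (X @ W @ py - y), py)
                + 2 * lam2 * W)
        W = soft(W - eta * grad, eta * lam1)
        loss = (alpha * np.sum((X - X @ W @ Px.T) ** 2) / sx
                + (1 - alpha) * np.sum((y - X @ W @ py) ** 2) / sy
                + lam1 * np.abs(W).sum() + lam2 * (W ** 2).sum())
        if loss_old - loss < eps:              # loss has stabilized
            break
        loss_old = loss
    return W, Px, py
```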
Tuning and model selection
The sparse PCovR model is estimated with fixed values for the weighting parameter α, the number of components R, the Lasso tuning parameter λ 1 , and the ridge parameter λ 2 . The problem that we consider here, is how to tune these meta-parameters. Cross-validation is frequently recommended in the literature but this requires data that are rich in the number of observations. In addition, the computational cost of cross-validation for the SPCovR model is considerable (because all possible combinations of the values for each of the tuning parameters need to be considered). Furthermore, in the context of PCovR, simulation studies showed that this is not a superior model selection strategy compared to strategies that rely on a stepwise approach [9]. Hence, we propose to use a stepwise strategy.
First, α is determined using the so-called maximum likelihood approach [19]:

$$\alpha_{ML} = \frac{\|X\|^2}{\|X\|^2 + \|Y\|^2 \frac{\sigma_x^2}{\sigma_y^2}} \quad (9)$$

with $\sigma_x^2$ and $\sigma_y^2$ the variance of the error on the predictor and outcome variables respectively. In the case of a large number of predictor variables J, $\|X\|^2$ will dominate the expression and we can assume that α will be almost, but not exactly, equal to one without having to estimate the size of the error variances. It is important to keep α strictly smaller than one, for example α = .99, and to use PCovR instead of a PCR approach [19].
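In code, and assuming estimates of the error variances are available, this tuning rule is a two-line computation (a sketch with our names, following the formula as reconstructed above):

```python
import numpy as np

def alpha_ml(X, y, sigma2_x, sigma2_y):
    """Maximum-likelihood weighting parameter: approaches 1 as the
    total sum of squares of X grows with the number of predictors J."""
    ssx, ssy = np.sum(X ** 2), np.sum(y ** 2)
    return ssx / (ssx + ssy * sigma2_x / sigma2_y)
```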
Second, we fix the number of components by a so-called scree test that is based on a visual display of the value of the loss function (3) against the number of components r in the model for r = 1, ..., R. In this display, we look for the point where the plot levels off and select the number of components just before this point.
Next we tune the ridge penalty. We recommend to set λ 2 equal to a small value to have more emphasis on variable selection by the lasso (for example, 5% of the value of the lasso). This small value should be sufficient to stabilize the estimates and to encourage grouping of strongly correlated variables [20].
The final metaparameter to tune is the lasso parameter $\lambda_1$. A straightforward and often used procedure to find a proper value for $\lambda_1$ is cross-validation [6]. In the more recent literature it has been established that cross-validation results in selecting a superset of the correct predictors, and thus in false positives (see for example the retrospective paper on the lasso and the included discussions [21]). One proposal to address this issue of falsely selecting variables is the use of a stability selection procedure [22] which allows to control the false positive error rate. Stability selection is a general method that can be easily applied (in adapted form) with our SPCovR procedure; an outline is given in Algorithm 2. In brief, it is a resampling based procedure that is used to determine the status of the coefficients (zero or non-zero) over a range of values for the tuning parameter $\lambda_1$. The original procedure has been proposed for a single set of variable coefficients. Here, we have R such sets due to the fact that we estimate the weights of all components simultaneously. The order of the components between different solutions may change (permutational freedom of the components) and this has to be taken into account.
To explain the stability selection procedure, we start with the for loop in Algorithm 2: Given a fixed value $\lambda_1$, N resamples are created by drawing with replacement a fraction f of the observations (with $.50 \leq f < 1$). The resampled data are subjected to a SPCovR analysis and the resulting N matrices of component weights W are used as follows: For each coefficient $w_{jr}$ the proportion of occurrences for which it is non-zero in the N resamples is recorded in the probability matrix $\Pi^{(\lambda)}$. Note that between samples, permutation of the components may occur. We account for this by permuting the components to maximal agreement in the component scores as measured by Tucker's coefficient of congruence; the component score matrix resulting from the SPCovR analysis of the original (non-resampled) data is used as the reference for congruence.
Next, we turn to the while loop in which $\lambda_1$ is decreased until the condition $q > q_R$ is met, with q the number of non-zero coefficients over the range of $\lambda_1$ values considered so far and $q_R$ a value that results from controlling the expected number of falsely non-zero coefficients V. From [22] we have that the expected number of non-zero coefficients q is bounded by

$$q \leq \sqrt{E(V)(2\pi_{thr} - 1)J}. \quad (10)$$

Note that this is the expression for a single component; to obtain the upper bound $q_R$ for R components we use

$$q_R = R\sqrt{E(V)(2\pi_{thr} - 1)J}. \quad (11)$$

Hence, by fixing E(V), e.g. to one, and the probability threshold $\pi_{thr} = 0.90$ [22], an upper bound on the number of non-zero coefficients is obtained. For a range $\Lambda$ of $\lambda_1$ values, the non-zero probability is given by $\Pi^{(Stable)} = \max_{\lambda \in \Lambda} \Pi^{(\lambda)}$ and the set of non-zero coefficients by those for which $\pi^{(Stable)}_{jr} \geq \pi_{thr}$. If $q \leq q_R$ the procedure continues by extending the range of $\lambda_1$ values with the next value. The values of $\lambda_1$ are taken from the set $\Lambda = \{\lambda_{max}, \ldots, \lambda_{min}\}$ with $\lambda_{min} = 10^{-4}\lambda_{max}$ and the remaining values equally spaced and arranged in decreasing order between $\log_2(\lambda_{max})$ and $\log_2(\lambda_{min})$; see [23].
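The resampling core of Algorithm 2 can be sketched in Python as follows. The names are ours and `spcovr_fit` stands in for the estimation routine sketched earlier; the congruence-based matching of components is done by a brute-force search over column permutations, which is cheap for small R.

```python
import numpy as np
from itertools import permutations

def stability_probabilities(X, y, lam1, spcovr_fit, T_ref, N=500, f=0.5):
    """One matrix Pi^(lambda): per coefficient, the proportion of the
    N resamples in which it is estimated as non-zero."""
    I, J = X.shape
    R = T_ref.shape[1]
    counts = np.zeros((J, R))
    rng = np.random.default_rng(0)
    for _ in range(N):
        idx = rng.choice(I, size=int(f * I), replace=True)
        W, Px, py = spcovr_fit(X[idx], y[idx], lam1)
        T = X @ W
        # Permute columns of W to maximal agreement with the reference
        # component scores (sum of absolute Tucker congruences).
        best, best_perm = -np.inf, None
        for perm in permutations(range(R)):
            phi = sum(abs(T[:, p] @ T_ref[:, r])
                      / (np.linalg.norm(T[:, p]) * np.linalg.norm(T_ref[:, r])
                         + 1e-12)
                      for r, p in enumerate(perm))
            if phi > best:
                best, best_perm = phi, perm
        counts += (W[:, list(best_perm)] != 0)
    return counts / N

def q_bound(J, R, pi_thr=0.90, EV=1.0):
    """Upper bound q_R on the number of stable non-zero coefficients."""
    return R * np.sqrt(EV * (2 * pi_thr - 1) * J)
```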
Results
To compare the performance of sparse principal covariates regression with competing methods, we make use of both synthetically created data in a simulation study and empirical data resulting from a systems biology study on the flu vaccine.
Simulation study
In a first study on the performance of SPCovR, we make use of artificially generated data. The main aim is to study the behavior of SPCovR, also in relation to competing methods, as a function of the strength of the components in relation to the covariates on the one hand and in relation to the outcome on the other hand. Therefore, the following factors were chosen to set up the simulation experiment based on a model with two components (see [9] for a similar setup): the proportion of variance accounted for by the components in the covariates (VAFX = 0.01, 0.40, or 0.70), the proportion of variance accounted for by the components in the outcome (VAFY = 0.02, 0.50, or 0.80), and the relative strength of the two components (the first component weaker than, equally strong as, or stronger than the second one). All factors were crossed, resulting in a simulation experiment with 3 × 3 × 3 = 27 conditions. The number of observations and variables was fixed to I = 100 and J = 200 respectively, 80% of the component weight coefficients were set equal to zero (this is 320 of the in total 400 coefficients), and the regression weights for the first and second component were set equal to $b_1 = 1$ and $b_2 = -0.02$, implying that the first component is much more associated to the outcome than the second one (for equally strong components).
We expect SPCR to perform well, in terms of recovering the components, in all conditions where the components account for a considerable amount of variation in the covariates (VAFX = 0.40/0.70) but not when the components are submerged in the noise (VAFX = 0.01). In terms of prediction, SPCR can be expected to perform well when the components not only account for variation in the covariates but also in the outcome; when VAFY = 0.02 predictive performance can be expected to be bad for any method, including SPCR. For SPLS, we expect good performance when the components account for quite some variation both in the covariates and the outcome (VAFX = 0.40/0.70 and VAFY = 0.50/0.80) but poor performance when either VAFX or VAFY is low. Lastly, we expect SPCovR to perform well in terms of recovering the components when either VAFX or VAFY is considerable but not when both are low (VAFX = 0.01 and VAFY = 0.02). In terms of prediction, performance of SPCovR will be bad when VAFY = 0.02.
To generate data under a sparse covariates regression model with orthogonal loadings, the setup briefly described here was used. Full details can be found in the online available implementation: https://github.com/katrijnvandeun/SPCovR. An initial set of weights $W^{(0)}$ was obtained by taking the first two right singular vectors obtained from an I × J matrix $X^{(0)}$ generated by random draws from a standard normal distribution. Sparsity was created by setting 320 values, chosen at random, to zero. Next, the resulting sparse weight vector was rescaled according to the relative strength of the components in the condition considered. These initial component weights $W^{(0)}$ and the fixed regression weights $b_1 = 1$ and $b_2 = -0.02$ were used to calculate an initial outcome vector,

$$y^{(0)} = X^{(0)} W^{(0)} [b_1\ b_2]^T. \quad (12)$$

Note that the initial matrix $X^{(0)}$ was not generated under a SPCovR model. To obtain data that perfectly fit such a model, a principal covariates regression analysis with fixed zero weights was performed to yield sparse component weights W and orthogonal loadings P. Again, the weights were rescaled and a block of covariates $X_{TRUE} = X^{(0)}WP^T$ and the outcome $y_{TRUE} = X_{TRUE}W[b_1\ b_2]^T$ were calculated on the basis of the scaled component weights and the loadings resulting from the SPCovR analysis. These are data with no noise; in a final step noise was added by sampling from the normal distribution with mean zero and variance set in correspondence to the level of the proportion of variation accounted for by the components in the covariates and the outcome, yielding data X and y. For each of the 27 conditions, 20 replicate data sets were generated, resulting in 540 data sets in total. The code used to generate the data, including the seed used to initialize the pseudo random number generator, can be found on GitHub: https://github.com/katrijnvandeun/SPCovR.
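For illustration, a condensed version of this generation scheme in Python (ours; the component-strength rescaling and the PCovR refit with fixed zeros are simplified, so this reproduces the spirit rather than the exact pipeline of the linked implementation):

```python
import numpy as np

def generate_data(I=100, J=200, n_zero=320, vafx=0.40, vafy=0.50,
                  b=(1.0, -0.02), seed=0):
    """Condensed sketch of the simulation design: sparse component
    weights from the right singular vectors of a random matrix, a
    rank-2 'true' data block, and noise scaled to match the requested
    proportions of explained variance."""
    rng = np.random.default_rng(seed)
    X0 = rng.standard_normal((I, J))
    _, _, Vt = np.linalg.svd(X0, full_matrices=False)
    W = Vt[:2].T.copy()                         # J x 2 initial weights W^(0)
    flat = W.reshape(-1)                        # view; assignments propagate
    flat[rng.choice(W.size, size=n_zero, replace=False)] = 0.0
    T = X0 @ W                                  # component scores
    P = np.linalg.lstsq(T, X0, rcond=None)[0].T # loadings (refit stand-in)
    X_true = T @ P.T                            # noiseless covariate block
    y_true = T @ np.asarray(b)                  # noiseless outcome
    # Scale the noise so the signal accounts for vafx / vafy of the variance.
    Ex, Ey = rng.standard_normal((I, J)), rng.standard_normal(I)
    X = X_true + Ex * np.sqrt((1 - vafx) / vafx * X_true.var() / Ex.var())
    y = y_true + Ey * np.sqrt((1 - vafy) / vafy * y_true.var() / Ey.var())
    return X, y, W
```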
Each dataset was subjected to five analyses: sparse principal components regression (SPCR), sparse partial least squares (SPLS), and three SPCovR analyses with different values of α, namely 0.01, 0.50, and 0.99. For the SPCR analysis, we used the elasticnet R package that implements the sparse PCA approach in [18]. The elasticnet R package uses a least angle regression (LARS) [17] procedure and hence allows to find a solution with a defined number of zeros for each of the components. We set this number equal to the exact number of zero coefficients occurring in W. For the SPLS analysis, the R package RGCCA was used that allows to (approximately) set the total number of zero coefficients over the components; the analysis options were set to result in (approximately) 320 zero coefficients. For the SPCovR analyses, we used stability selection with the upper bound on the number of non-zero coefficients q set equal to 80. Hence all analyses were tuned such that they had exactly or approximately the same number of zeroes as present in the true underlying component weight matrix. This makes the interpretation of the results easier in the sense that performance of the methods is not dependent upon proper tuning of the sparseness penalty. In the comparison of the methods, we will consider their performance in recovering the underlying components and how well they predict a new set of test data (generated under the same model, this is, with the same W, regression weights, and the same error level for the covariates and outcome).
The results with respect to the recovery of the components are shown in Fig. 1. These boxplots display the Tucker congruence between the true component scores $T = X_{TRUE}W$ and those obtained from the analyses, $\hat{T} = X\hat{W}$. Tucker congruence, φ, is defined as [24]

$$\varphi = \frac{\operatorname{vec}(T)^T \operatorname{vec}(\hat{T})}{\|\operatorname{vec}(T)\| \|\operatorname{vec}(\hat{T})\|}, \quad (13)$$

this is the cosine of the angle between the vectors $\operatorname{vec}(T)$ and $\operatorname{vec}(\hat{T})$, with higher values indicating more similarity between the components. Values over 0.95 indicate that the components can be considered equal while values in the range [0.85 − 0.94] correspond to a fair similarity [24]. In Fig. 1 the Tucker congruence of the 20 replicate data sets is shown for the 27 conditions and the three methods (SPCovR with α = 0.99, SPCR, and SPLS). For each combination of the variance accounted for in the covariates and in the outcome (e.g., VAFX = 0.01 and VAFY = 0.02 at the left of the left panel), three boxplots are shown for the three methods. These are the three levels of the relative strength factor, with the boxplots at the left referring to the conditions where the first component is weaker than the second one, the boxplots in the middle referring to the conditions where they are equally strong, and the boxplots at the right to the conditions where the second component is stronger than the first one.

To assess the predictive performance of the methods, the squared prediction error (PRESS) was calculated on test data as follows,

$$PRESS = \frac{\sum_{i}(y_i - \hat{y}_i)^2}{\sum_{i} y_i^2}, \quad (14)$$

where $\hat{y}_i$ was obtained with one of the three considered models and we normalized with respect to the total variation in the observed scores. The lower the PRESS, the better the model performs in terms of prediction. Figure 2 displays the PRESS values for the 27 conditions for each of the three methods. Clearly, as could be expected, when the outcome is submerged in the noise (VAFY = 0.02) all methods perform badly (PRESS ≥ 1). Another striking feature is that SPLS has the largest prediction error in all conditions. When it comes to the relative predictive performance of SPCovR and SPCR in the conditions where the components account for the variation in the outcome (VAFY = 0.50/0.80), the methods seem to perform equally well. Only in the conditions where the components account for almost no variation in the covariates (VAFX = 0.01) but a lot of variation in Y (VAFY = 0.80 and equal strength of the components or more strength of the predictive component) does SPCovR outperform SPCR.
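For reference, the two evaluation measures are direct to compute (a minimal sketch, with our function names):

```python
import numpy as np

def tucker_congruence(T_true, T_hat):
    """Tucker's coefficient of congruence between two sets of component
    scores: the cosine between vec(T) and vec(T_hat), Eq. (13)."""
    a, b = T_true.ravel(), T_hat.ravel()
    return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

def press(y_test, y_pred):
    """Normalized squared prediction error on test data, Eq. (14)."""
    return np.sum((y_test - y_pred) ** 2) / np.sum(y_test ** 2)
```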
SPCovR was run with three levels of the weighting parameter α: namely α = 0.01, α = 0.50, and α = 0.99. For both performance measures and in all conditions, SPCovR with α = .99 yields the best results. Hence, it seems that little weight should be given to fitting the outcome in order to obtain good results both in terms of recovering the components and of predicting the outcome. Note that giving no weight at all to the outcome in modeling the components, that is, an SPCR analysis, leads to worse recovery in general. For prediction, on the other hand, the gain of using SPCovR is limited to a few conditions and, when the noise in the covariates is considerable, SPCovR is prone to overfitting while SPCR is not.
Systems biology study of the flu vaccine
We will illustrate SPCovR and compare with SPLS using data that result from a systems biology study on vaccination against influenza [2]. The general aim of this study was to predict vaccine efficacy with micro-array gene expression data obtained soon after vaccination and to gain insight in the underlying biological mechanisms. First we will give a general description of the data and how these were pre-processed, then we will discuss the SPCovR and SPLS analyses and results.
The authors made data for two seasons, 2007 and 2008, publicly available on the NCBI Gene Expression Omnibus (https://www.ncbi.nlm.nih.gov/geo/) with accession numbers GSE29614 and GSE29617. For both seasons, a micro-array analysis was performed on the genome-wide expression in peripheral blood mononuclear cells collected just before and 3 days after vaccination for all participants (26 in the 2008 season). The expression scores obtained before vaccination were considered as the baseline and subtracted from the data at Day 3. For each variable (probeset), the difference scores were centered and scaled to sum-of-squares equal to one. These scaled difference scores form the set of predictor scores X in the SPCovR and sparse PLS analyses.
To assess the efficacy of the vaccine, three types of plasma hemagglutination inhibition (HAI) antibody titers were assessed just before and 28 days after vaccination. As described by [2] vaccine efficacy was measured by subtracting the log-transformed antibody titers at baseline from the log-transformed antibody titers 28 days later and taking the maximum of these three baseline-corrected outcomes (to reduce the influence of subjects who started with high antibody concentrations due to previous infection). These maximal change scores were centered, resulting in the scores used as the outcome variable y in the SPCovR and sparse PLS analyses.
We start with a principal component analysis of the gene expression data. The variance accounted for by each component is displayed in Fig. 3. We see that the first two components stand out and this is the number of components that we will use in the PCovR and PLS analyses. To appreciate the flexibility of the weighting of $R^2_X$ versus $R^2_Y$ in PCovR, we first consider the non-sparse analyses. The fit measures for the two components resulting from PCovR (with equally spaced values α = .01, .02, . . . , .99) are compared to those resulting from the PLS analysis using the RGCCA R package [11]: Fig. 4 displays the variance accounted for by the components in the block of predictor variables as well as the squared correlation between observed and modeled outcome scores for the two seasons. As could be expected, the variance accounted for in the block of predictor variables is highest for PCovR with high values of α. Because these solutions with high values for α give little weight to explaining the variance in the outcome, low $R^2_Y$ values for the 2008 data are observed. PLS, on the other hand, seems to behave as the other extreme, with values similar to those observed for α → 0. When it comes to the use of the components obtained for the 2008 season to predict the outcome in the 2007 season, better results are obtained with the PCovR components obtained with α close to one, this is, giving more importance to explaining the variance in the predictor data than in the outcome variable.
Next we turn to the analyses with imposed sparseness on the component weights. The metaparameters of the SPCovR model were set using the proposed stepwise model selection procedure. Hence, based on Fig. 3 a model with two components was selected, α was set to a value close to one (α = .99), and the ridge penalty was set equal to $.05\lambda_1$. In the stability selection procedure, we used N = 500 resamples, the threshold $\pi_{thr}$ was set equal to 0.90, and E(V) = 1. This results in $q_R = 416$. We compare with the sparse PLS results from two R packages, RGCCA [11] and spls [10], also using R = 2 components and tuned such that approximately 416 non-zero component weights were obtained in order to have similar sparseness of the sparse PCovR and sparse PLS solutions. spls [10] uses a univariate soft thresholding approach, this is, $\lambda_2 \to \infty$. The SGCCA function in RGCCA [11] was used with the default option for tuning the ridge penalty.
The fit of the solutions to the observed data is summarized in Table 1. The first column shows the variance accounted for by the components in the block of covariates. The SPCovR components account for 19% of the variance while this is much less for the sparse PLS approach as implemented in SGCCA. For the spls package, we could not include such a measure of fit because this package reports fit values only with respect to the outcome variable. On the other hand, the fit of the modeled outcome for the 2008 flu season, which was used to derive the model parameters, is almost perfect for the sparse PLS solutions and low for the sparse PCovR solution. Yet, the predicted antibody titers for the 2007 data, using the estimated component and regression weights of the 2008 analysis, have the highest correlation with the observed antibody titers when the estimates resulting from SPCovR are used ($r^2(\hat{y}_{2007}, y_{2007}) = 0.79$ compared to 0.55 and 0.53 for spls and SGCCA respectively).
The percentage of variance accounted for by each of the individual components can be found in Table 2. From these numbers it appears that the first SPCovR component contributes almost exclusively to the variance accounted for in the transcriptomics data while the second component contributes both to the variance accounted for in the transcriptomics data and in the antibody titers. Hence, the first SPCovR component is important for reconstructing the transcription rates in the gene expression data while the second SPCovR component is important both for fitting the transcriptomics data and for predicting the antibody titers. The sparse PLS components resulting from the SGCCA analysis are both focused more towards predicting the antibody titers, the first SGCCA component having the strongest contribution.
Another criterion that is important when comparing the different solutions is related to the interpretation of the solution: Do the components reflect a common biological theme that gives insight into the mechanisms that underlie vaccine response? To answer this question, a functional annotation based on the strength of association of the genes with the components can be performed. The SPCovR and SGCCA results contain such information in two ways, namely in the component weights and in the loadings [25]. First, we performed a functional annotation of genes associated to the probesets with non-zero component weights using the publicly available annotation tool of PANTHER [26]. A list containing the official gene symbols for the probesets with a non-zero component weight, together with the value of these weights on the two SPCovR and SGCCA components, can be found online: https://github.com/katrijnvandeun/SPCovR. We performed the functional analysis of these gene lists using the statistical test of over-representation in PANTHER. This means that, for each functional class, the number of genes belonging to that class and present in our list of selected genes was compared to the number of genes for that class in the whole genome. A test of over-representation was conducted for each class. An overview of significantly over-represented functional classes is given in Table 3: Bonferroni correction for multiple testing was used and only classes significant at the 0.05 level are reported. The first SPCovR component was significantly enriched for rRNA methylation; the second component was significantly enriched for leukocyte activation (and also for its parents, cell activation and immune system process), immune effector process, and negative regulation of metabolic process. Clearly, the second component reflects biological processes that are important in establishing immunity. This is also the component explaining most of the variance in the outcome and having the highest regression weight: $p_{y2} = 0.02$ compared to $p_{y1} = 0.004$. Notably, the gene encoding for Calcium/calmodulin-dependent kinase IV (Camk4) was included as an active predictor in the set. This gene was singled out in the original study of [2] and further validated as an important player in the regulation of the antibody response using knockout experiments. Also, the BACH2 (Transcription regulator protein BACH2) gene, which is a known transcription factor necessary for immunity against influenza A virus [27], was included with a very high weight on this component. No significantly over-represented terms were found for the genes underlying the non-zero component weights for the two sparse PLS components obtained with SGCCA. In fact, there was very little overlap in the genes selected by SPCovR and SGCCA. Except for one probeset, shared non-zero weights were obtained only between the second SPCovR component and the two SGCCA components. Remarkably, the probesets selected by the first SGCCA component are a subset of those selected by the second SGCCA component. In the list of non-zero weights (available from https://github.com/katrijnvandeun/SPCovR) it can be seen that only 32 probesets have non-zero weights both for SPCovR and SGCCA, corresponding to 19 unique gene symbols. Relatively high weights in both analyses were obtained for SMUG1 (Single-strand-selective monofunctional uracil-DNA glycosylase 1), which has a role in antibody gene diversification, and PPP1R11 (Protein phosphatase 1 regulatory inhibitor subunit 11), known to affect NF-κB activity [28].
Also for the genes associated to the probesets selected by the sparse PLS analysis performed with the spls package [10], no significantly over-represented terms were found. Second, we performed an enrichment analysis based on the loadings. The loadings reflect the strength of association of a gene with the component, with higher loadings indicating that the gene is more important for the process at play. Both PANTHER and GSEA [29] accept as input lists of genes together with a value that indicates the importance of the gene². Output resulting from the enrichment analyses can be found online (https://github.com/katrijnvandeun/SPCovR); here we summarize the main results. A first result of interest is that the same kind of processes are recovered from the enrichment analyses of the loadings as obtained previously when looking for over-represented classes in the gene lists with non-zero component weights. Also here, the annotation of the loadings obtained with SPCovR shows evidence of immune related processes while such evidence is weak for the SGCCA loadings. Notably, some immune related gene ontology terms are found in the enrichment analyses of the SGCCA loadings. In fact, overall more terms are recovered from the enrichment analyses of the loadings. This could be expected given the small number of genes involved in the lists obtained from the non-zero component weights.
Taken together, the results suggest that SPCovR, by putting more emphasis on accounting for the structural variation in the gene expression data when building the prediction model, catches the processes that are important in establishing the immune response to the vaccine. This pays off in the sense that a more stable prediction model is obtained that has better generalizability (and thus better prediction for the held out sample).
Discussion
Often a large number of variables is measured with a double goal: predicting an outcome of interest and obtaining insight into the mechanisms that relate the predictor variables to the outcome. In the high-dimensional setting this comes along with a variable selection problem. Principal covariates regression is a promising tool to reach this double goal; we extended this tool to the high-dimensional setting by introducing a sparse version of the PCovR model and offering a flexible and efficient estimation procedure.
In this paper we showed through simulation that sparse PCovR can outperform sparse PLS as it allows to put less emphasis on modeling the outcome: By putting more weight on accounting for the variation in the covariates, more insight in the processes that underlie the data may be obtained and this, in turn, results in better out-of-sample prediction. The benefit of this was illustrated for publicly available data: clearly a meaningful annotation of the selected genes was obtained with SPCovR while no enriched terms were found for the genes selected by sparse PLS. At the same time, the SPCovR analysis resulted in a much better out-of-sample prediction.

Endnotes

1 [30] proposed a so-called sparse principal components regression method that in fact is a sparse covariates regression method. As this method, implemented in the spcr R package, gave an out-of-memory failure on the illustrative example, we do not consider it further.

2 The reason to also consider GSEA and not only PANTHER is that the latter only allowed to use a very limited set of gene ontology terms in the enrichment analysis, unlike in the over-representation analysis.
Appendix

With the component scores T fixed, the orthogonality-constrained loadings minimizing the loss are given by $P = VU^T$, with U and V the left and right singular vectors of $T^T Z$ (Z being the data block approximated by $TP^T$). When the number of variables is much larger than the number of observations, a more efficient procedure is to calculate the eigenvalue decomposition of the R × R matrix $T^T ZZ^T T$ and to use the resulting eigenvectors and eigenvalues to obtain $VU^T$ (see the implementation for details).
Constrained de Bruijn Codes: Properties, Enumeration, Constructions, and Applications
The de Bruijn graph, its sequences, and their various generalizations, have found many applications in information theory, including many new ones in the last decade. In this paper, motivated by a coding problem for emerging memory technologies, a set of sequences which generalize the sequences in the de Bruijn graph is defined. These sequences can also be defined and viewed as constrained sequences. Hence, they will be called constrained de Bruijn sequences and a set of such sequences will be called a constrained de Bruijn code. Several properties and alternative definitions for such codes are examined and they are analyzed as generalized sequences in the de Bruijn graph (and its generalization) and as constrained sequences. Various enumeration techniques are used to compute the total number of sequences for any given set of parameters. A construction method of such codes from the theory of shift-register sequences is proposed. Finally, we show how these constrained de Bruijn sequences and codes can be applied in constructions of codes for correcting synchronization errors in the $\ell$-symbol read channel and in the racetrack memory channel. For this purpose, these codes are superior in size to previously known codes.
I. INTRODUCTION
The de Bruijn graph of order m, $G_m$, was introduced in 1946 by de Bruijn [7]. His target in introducing this graph was to find a recursive method to enumerate the number of cyclic binary sequences of length $2^k$ such that each binary k-tuple appears as a window of length k exactly once in each sequence. It should be mentioned that in parallel also Good [31] defined the same graph and hence it is sometimes called the de Bruijn-Good graph. Moreover, it did not take long for de Bruijn himself to find out that his discovery was not novel. In 1894 Flye-Sainte Marie [24] had proved the enumeration result of de Bruijn without the definition of the graph. Nevertheless, the graph continues to carry the name of de Bruijn, as well as the related sequences (cycles in the graph) which he enumerated. Later, in 1951, van Aardenne-Ehrenfest and de Bruijn [1] generalized the enumeration result for any arbitrary alphabet of finite size σ greater than one, using a generalized graph for an alphabet Σ of size σ. Formally, the de Bruijn graph $G_{\sigma,k}$ has $\sigma^k$ vertices, each one represented by a word of length k over an alphabet Σ with σ letters. The in-degree and the out-degree of each vertex in the graph is σ. There is a directed edge from the vertex $(x_0, x_1, \ldots, x_{k-1})$ to the vertex $(y_1, y_2, \ldots, y_k)$, where $x_i, y_j \in \Sigma$, if and only if $y_i = x_i$, $1 \leq i \leq k-1$. This edge is represented by the (k + 1)-tuple $(x_0, x_1, \ldots, x_{k-1}, y_k)$. The sequences enumerated by de Bruijn are those whose length is $\sigma^k$ and in which each k-tuple over Σ appears in a window of k consecutive (cyclically) symbols in the sequence. Such a sequence enumerated by de Bruijn is represented by an Eulerian cycle in $G_{\sigma,k-1}$, where each k consecutive symbols represent an edge in the graph. This sequence is also a Hamiltonian cycle in $G_{\sigma,k}$, where each k consecutive symbols represent a vertex in the graph. Henceforth, we assume that the cycles are Hamiltonian, i.e., with no repeated vertices.
Throughout the years since de Bruijn introduced his graph and the related sequences, there have been many generalizations for the graph and for the sequences enumerated by de Bruijn. Such generalizations include enumeration of sequences of length $\ell$, where $\ell < \sigma^k$, in $G_{\sigma,k}$ [48] or coding for two-dimensional arrays in which all the n × m sub-arrays appear as windows in exactly one position of the large array [19]. The interest in the de Bruijn graph, its sequences, and their generalizations, is due to their diverse important applications. One of the first applications of this graph was in the introduction of shift-register sequences in general and linear feedback shift registers in particular [28]. These will have an important role also in our research. Throughout the years de Bruijn sequences, the de Bruijn graph, and their generalizations, e.g. for larger dimensions, have found a variety of applications. These applications include cryptography [25], [41], and in particular linear complexity of sequences, e.g. [10], [20], [22], [27], [38], interconnection networks, e.g. [4], [23], [55], [58], [61], VLSI testing, e.g. [3], [42], two-dimensional generalizations, e.g. [6], [19], [45] with applications to self-locating patterns, range-finding, data scrambling, mask configurations, robust undetectable digital watermarking of two-dimensional test images, and structured light, e.g. [35], [49], [50], [54], [57]. Some interesting modern applications are combined with biology, like the genome assembly as part of DNA sequencing, e.g. [9], [16], [36], [44], [52], [65] and coding for DNA storage, e.g. [11], [26], [39], [53]. This is a small sample of examples for the wide range of applications in which the de Bruijn graph, its sequences, and their generalizations, were used.
The current work is not different. Motivated by applications to certain coding problems for storage, such as the ℓ-symbol read channel and the racetrack memory channel, we introduce a new type of generalization of de Bruijn sequences, the constrained de Bruijn sequences. In these sequences, no k-tuple repeats within any window of b consecutive k-tuples. This generalization is quite natural and, as the name hints, it can be viewed as a type of constrained sequence. The goal of this paper is to study this type of sequences in all natural directions: enumeration, constructions, and applications.
Our generalization is motivated by the need to combat synchronization errors (which are shift errors, known also as deletions and sticky insertions) in certain memories. These types of synchronization errors occur in some new memory technologies, mainly in racetrack memories [12], [13], and in other technologies which can be viewed as an ℓ-symbol read channel [8], [14], [63]. By using constrained de Bruijn sequences to construct such codes we will be able to increase the rate of codes which correct such synchronization errors. But, we believe that constrained de Bruijn sequences and codes (sets of sequences) are of interest in their own right from both practical and theoretical points of view. The newly defined sequences can be viewed as constrained codes and as such they pose some interesting problems. This is the reason that the new sequences and the related codes will be called constrained de Bruijn sequences and constrained de Bruijn codes, respectively.
The rest of this paper is organized as follows. In Section II we introduce some necessary concepts which are important in our study, such as the length and the period of sequences in general and of de Bruijn sequences in particular. We will also introduce some elementary concepts related to shift-register sequences. In Section III we define the new type of sequences, the constrained de Bruijn sequences, and their related codes. There will be a distinction between cyclic and acyclic sequences. We consider the concept of periodicity for acyclic and cyclic sequences and define the concept of forbidden patterns. The main result in this section will be a theorem which reveals that by the given definitions, three types of codes defined differently form exactly the same set of sequences for an appropriate set of parameters for each definition. In Section IV the enumeration of the number of constrained de Bruijn sequences with a given set of parameters will be considered. We start with a few very simple enumeration results and continue to show that for some parameters the rates of the newly defined codes approach 1, and that there are other parameters with one symbol of redundancy. Most of the section will be devoted to a consideration of the newly defined codes as constrained codes. Enumeration based on the theory of constrained codes will be applied. These considerations yield also efficient encoding and decoding algorithms for the newly defined codes. In Section V, a construction based on shift-register sequences will be given. This construction will be our main construction for codes with large segments in which no repeated k-tuples appear. The next two sections are devoted to applications of constrained de Bruijn sequences in storage memories. In Section VI, the application to the ℓ-symbol read channel is discussed. This application yields another application for another new type of storage channel, the racetrack channel, which is considered in Section VII. We conclude in Section VIII, where we also present a few directions for future research.
II. PRELIMINARIES
In this section we will give some necessary definitions and notions concerning cyclic and acyclic sequences, paths and cycles in the de Bruijn graph, and shift-register sequences, in particular those which are related to primitive polynomials and are known as maximum length shift-register sequences. In Section II-A we discuss the length and period of sequences as well as cyclic and acyclic sequences. Shift-register sequences are discussed in Section II-B.
A. Paths and Cycles in the de Bruijn Graph
A sequence s = (s 1 , s 2 , . . . , s n ) over an alphabet Σ is a sequence of symbols from Σ, i.e., s i ∈ Σ for 1 ≤ i ≤ n. The length of such a sequence is the number of its symbols n and the sequence is considered to be acyclic. A cyclic sequence s = [s 1 , s 2 , . . . , s n ] is a sequence of length n for which the symbols can be read from any position in a sequential order, where the element s 1 follows the element s n . This means that s can be written also as [s i , s i+1 , . . . , s n , s 1 , . . . , s i−1 ] for each 2 ≤ i ≤ n. Definition 1. A cyclic sequence s = [s 1 , . . . , s n ], where s i ∈ Σ and σ = |Σ|, is called a weak de Bruijn sequence (cycle) of order k, if all the n windows of consecutive k symbols of s are distinct. A cyclic sequence s = [s 1 , . . . , s n ] is called a de Bruijn sequence of order k, if n = σ k and s is a weak de Bruijn sequence of order k.
The connection between a weak de Bruijn cycle of length n (as a cycle in the de Bruijn graph) and a weak de Bruijn sequence of length n in the graph is very simple. The sequence is generated from the cycle by considering the first digit of the consecutive vertices in the cycle. The cycle is generated from the sequence by considering the consecutive windows of length k in the sequence. This is the place to define some common concepts and some notation for sequences. An acyclic sequence s = (s 1 , . . . , s n ) has n digits read from the first to the last with no wrap around. If all the windows of length k, starting at position i, 1 ≤ i ≤ n − k + 1, are distinct, then the sequence corresponds to a simple path in G σ,k . The period of a cyclic sequence s = [s 0 , s 1 , . . . , s n−1 ] is the least integer p, such that s i = s i+p , where indices are taken modulo n, for each i, 0 ≤ i ≤ n − 1. It is well known that the period p divides the length of a sequence n. If n = rp then the periodic sequence s is formed by a concatenation of r copies of the first p entries of s. It is quite convenient to have the length n and the period p equal if possible. There is a similar definition for a period of acyclic sequence with slightly different properties, which is given in Section III. In this paper we will assume (if possible) that for a given cyclic sequence s = [s 1 , . . . , s n ] the period is n as the length. Hence, usually we won't distinguish between the length and period for cyclic sequences. We will elaborate more on this point in Section II-B. Finally, the substring (window) (s i , s i+1 , . . . , s j ) will be denoted by s[i, j] and this substring will be always considered as an acyclic sequence, no matter if s is cyclic or acyclic. In addition, s[i] will be sometimes used instead of s i and the set {1, 2, . . . , n} is denoted by [n].
B. Feedback Shift Registers and their Cycle Structure
The theory on the sequences in the de Bruijn graph cannot be separated from the theory of shift-register sequences developed mainly by Golomb [28]. This theory, developed fifty years ago, was very influential in various applications related to digital communication [29], [30]. A short summary on the theory of shift-registers taken from [28] which is related to our work, is given next.
A characteristic polynomial (for a linear feedback shift register defined in the sequel) c(x) of degree k over $\mathbb{F}_q$, the finite field with q elements, is a polynomial given by

$$c(x) = x^k - \sum_{i=1}^{k} c_i x^{i-1},$$

where $c_i \in \mathbb{F}_q$. For such a polynomial a function f on k variables from $\mathbb{F}_q$ is defined by

$$f(x_1, x_2, \ldots, x_k) = \sum_{i=1}^{k} c_i x_i.$$

For the function f we define a state diagram whose set of vertices is the set of all $q^k$ k-tuples over $\mathbb{F}_q$. If $x_{k+1} = f(x_1, x_2, \ldots, x_k)$ then an edge from $(x_1, x_2, \ldots, x_k)$ to $(x_2, \ldots, x_k, x_{k+1})$ is defined for the related state diagram. A feedback shift register of length k has $q^k$ states corresponding to the set $Q_k$ of all $q^k$ k-tuples over $\mathbb{F}_q$. The feedback function $x_{k+1} = f(x_1, x_2, \ldots, x_k)$ defines a mapping from $Q_k$ to $Q_k$. The feedback function can be linear or nonlinear. The shift register is called nonsingular if its state diagram consists of disjoint cycles. Any such state diagram is called a factor in $G_{q,k}$, where a factor in a graph is a set of vertex-disjoint cycles which contains all the vertices of the graph. There is a one-to-one correspondence between the set of factors in the de Bruijn graph $G_{q,k}$ and the set of state diagrams for the nonsingular shift registers of order k over $\mathbb{F}_q$. It is well-known [28] that a binary feedback shift register is nonsingular if and only if its feedback function has the form

$$f(x_1, x_2, \ldots, x_k) = x_1 + g(x_2, \ldots, x_k),$$

where $g(x_2, \ldots, x_k)$ is any binary function on k − 1 variables. A similar representation also exists for nonsingular feedback shift registers over $\mathbb{F}_q$.
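A small Python sketch (ours) makes the cycle-structure view concrete: it decomposes the state diagram of a nonsingular binary feedback shift register into its disjoint cycles, i.e., a factor in the de Bruijn graph.

```python
from itertools import product

def lfsr_cycles(k, feedback):
    """Decompose the state diagram of a NONSINGULAR binary feedback
    shift register of length k into its disjoint cycles.
    `feedback` maps a k-bit state tuple to the new bit x_{k+1}."""
    unseen = set(product((0, 1), repeat=k))
    cycles = []
    while unseen:
        state = next(iter(unseen))
        cycle = []
        while state in unseen:          # follow the trajectory of the state
            unseen.remove(state)
            cycle.append(state)
            state = state[1:] + (feedback(state),)
        cycles.append(cycle)
    return cycles

# Example: the primitive polynomial x^4 + x + 1 over F_2 (recurrence
# x_{k+1} = x_1 + x_2) gives one m-sequence cycle of length 2^4 - 1
# plus the all-zero cycle.
cycles = lfsr_cycles(4, lambda s: s[0] ^ s[1])
print(sorted(len(c) for c in cycles))   # [1, 15]
```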
A Hamiltonian cycle in $G_{q,k}$ is a de Bruijn cycle which forms a de Bruijn sequence. There are $(q!)^{q^{k-1}}/q^k$ distinct such cycles in $G_{q,k}$ and there are many methods to generate such cycles [21], [25]. One important class of sequences in the graph, related to de Bruijn sequences, are the so-called m-sequences, or maximal length linear shift-register sequences. A shift register is called linear if its feedback function $f(x_1, x_2, \ldots, x_k)$ is linear. An m-sequence is a sequence of length $q^k - 1$ generated by a linear shift register associated with a primitive polynomial of degree k over $\mathbb{F}_q$. Each primitive polynomial is associated with such a sequence. In such a sequence all windows of length k are distinct and the only one which does not appear is the all-zero window. The following theorem is well known [28].
Theorem 1. The number of m-sequences of order k over $\mathbb{F}_q$, which equals the number of primitive polynomials of degree k over $\mathbb{F}_q$, is $\frac{\varphi(q^k - 1)}{k}$, where φ is the Euler function.
The exponent of a polynomial f (x) is the smallest integer e such that f (x) divides x e − 1. The length of the longest cycle, of the state diagram, formed by the shift-register associated with the characteristic polynomial f (x) is the exponent of f (x). The length of the cycles associated with a multiplication of several irreducible polynomials can be derived using the exponents of these polynomials and some related algebraic and combinatorial methods. This theory leads immediately to the following important result.
Theorem 2. The state diagram associated with a multiplication of two distinct characteristic primitive polynomials f(x) and g(x) of degree k over $\mathbb{F}_q$ contains $q^k + 1$ cycles of length $q^k - 1$ and the all-zero cycle. Each possible (2k)-tuple over $\mathbb{F}_q$ appears exactly once as a window in one of these cycles. The m-sequences related to f(x) and g(x) are two of these cycles.
III. CONSTRAINED DE BRUIJN CODES, PERIODS, AND FORBIDDEN PATTERNS
In this section we give the formal definitions for constrained de Bruijn sequences and codes. We present a definition for a family of sequences with a certain period and a definition for a family of sequences avoiding certain substrings. Three different definitions will be given and it will be proved that with the appropriate parameters the related three families of sequences contain the same sequences. Each definition will have later some role in the enumeration of constrained de Bruijn sequences which will be given in Section IV.
A. Constrained de Bruijn Codes
Definition 2.
• A sequence s of length n over Σ is called a (b, k)-constrained de Bruijn sequence if there are no two indices i and j, $i < j \leq i + b - 1$, such that $s[i, i+k-1] = s[j, j+k-1]$. In other words, in each substring of s whose length is b + k − 1 there is no repeated k-tuple, i.e. each subset of b consecutive windows of length k in s contains b distinct k-tuples.
• A set of distinct (b, k)-constrained de Bruijn sequences of length n is called a (b, k)-constrained de Bruijn code. The set of all (b, k)-constrained de Bruijn sequences of length n will be denoted by C DB (n, b, k).
The alphabet Σ and its size σ should be understood from the context throughout our discussion.
• A cyclic sequence s = [s_1, . . . , s_n] over Σ is called a cyclic (b, k)-constrained de Bruijn sequence if there are no two equal (cyclic) windows of length k starting at positions i and j such that i < j and either $j - i \leq b - 1$ or $i - j + n \leq b - 1$. In other words, in each cyclic substring of s whose length is b + k − 1 there is no repeated k-tuple. Note, that two sequences which differ only in a cyclic shift are considered to be the same sequence.
• A set of distinct cyclic (b, k)-constrained de Bruijn sequences of length n is called a cyclic (b, k)-constrained de Bruijn code. The set of all cyclic (b, k)-constrained de Bruijn sequences of length n will be denoted by $C^*_{DB}(n, b, k)$. We note that by the definition of a cyclic constrained de Bruijn sequence two codewords in a cyclic (b, k)-constrained de Bruijn code cannot differ only in a cyclic shift, but in some applications one might consider these cyclic shifts of codewords as distinct codewords.
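A small checker (ours, for illustration) makes Definition 2 concrete for both the acyclic and the cyclic case:

```python
def is_cdb(s, b, k, cyclic=False):
    """Check whether s is a (cyclic) (b, k)-constrained de Bruijn
    sequence: no k-tuple repeats among any b consecutive windows."""
    n = len(s)
    if cyclic:
        windows = [tuple(s[(i + t) % n] for t in range(k)) for i in range(n)]
    else:
        windows = [tuple(s[i:i + k]) for i in range(n - k + 1)]
    m = len(windows)
    for i in range(m):
        for d in range(1, b):
            j = (i + d) % m if cyclic else i + d
            if j == i or (not cyclic and j >= m):
                continue
            if windows[i] == windows[j]:
                return False
    return True

# [0,0,1,1] is the cyclic binary de Bruijn sequence of order 2, hence a
# cyclic (4, 2)-constrained de Bruijn sequence; 01010 violates (3, 2).
assert is_cdb([0, 0, 1, 1], b=4, k=2, cyclic=True)
assert not is_cdb([0, 1, 0, 1, 0], b=3, k=2)
```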
de Bruijn sequences form a special case of constrained de Bruijn sequences, as asserted in the following theorem which is readily verified from the definitions.
Theorem 3.
• The cyclic sequence s of length $q^k$ over $\mathbb{F}_q$ is a de Bruijn sequence if and only if s is a cyclic $(q^k, k)$-constrained de Bruijn sequence.
• The (acyclic) sequence s of length $q^k + k - 1$ over $\mathbb{F}_q$ is a de Bruijn sequence if and only if s is a $(q^k, k)$-constrained de Bruijn sequence.
Theorem 3 introduces the connection between de Bruijn sequences and constrained de Bruijn sequences. But there are a few main differences between de Bruijn sequences of order k and (b, k)-constrained de Bruijn sequences.
• A de Bruijn sequence is usually considered and is used as a cyclic sequence while a constrained de Bruijn sequence will be generally used as an acyclic sequence.
• A de Bruijn sequence contains each possible k-tuple exactly once as a window of length k, while in a (b, k)-constrained de Bruijn sequence, each possible k-tuple can be repeated several times or might not appear at all.
• A de Bruijn sequence contains each possible k-tuple exactly once in one period of the sequence, while in a (b, k)-constrained de Bruijn sequence, each possible k-tuple can appear at most once among any b consecutive k-tuples in the sequence.
• The length (or equivalently period in this case) of a de Bruijn sequence is strictly q k , while there is no constraint on the length and the period of a (b, k)-constrained de Bruijn sequence.
These differences between de Bruijn sequences and constrained de Bruijn sequences are important in the characterization, enumeration, constructions, and applications of the two types of sequences.
B. Periods and Forbidden Subsequences
There are two concepts which are closely related to constrained de Bruijn sequences, the period of an acyclic sequence and avoided patterns. Let u and s be two sequences over Σ. The sequence s avoids u (or s is u-avoiding) if u is not a substring of s. Let F be a set of sequences over Σ and s a sequence over Σ. The sequence s avoids F (or s is F-avoiding) if no sequence in F is a substring of s. Let A(n; F) denote the set of all σ-ary sequences of length n which avoid F. A subset of A(n; F), i.e. a set of F-avoiding sequences of length n, is called an F-avoiding code of length n.
The second concept is the period of a sequence. For cyclic sequences the length and the period of the sequence either coincide or have a very strict relation, where the period of the sequence divides its length. This is not the case for acyclic sequences.
Definition 3. An acyclic sequence $s = (s_1, \ldots, s_n) \in \Sigma^n$ has period p if p is the smallest positive integer such that $s_i = s_{i+p}$ for each i, $1 \leq i \leq n - p$. (Note, that the definition for periods of a cyclic sequence coincides with this definition.)
• A sequence $s \in \Sigma^n$ is called m-limited length for period p substrings if any substring of s with period p has length at most m.
• A set of m-limited length for period p sequences from $\Sigma^n$ is called a code of m-limited length for period p substrings. The set of all such sequences of length n is denoted by $C_{LP}(n, m, p)$.
The first lemma is a straightforward observation.
Lemma 1.
If F is the set of all sequences of length m + 1 and period p, then for each i, 1 ≤ i ≤ m, the set C LP (n, i, p) is an F-avoiding code.
Proof. Recall, that for each i, the set of sequences in C LP (n, i, p), have length n. If s is a sequence in C LP (n, i, p), 1 ≤ i ≤ m, then each substring of s with period p has length at most i, where i is a positive integer strictly smaller than m + 1. The set F contains the sequences of length m + 1 and period p and hence a sequence of F cannot be a substring of s. Thus, C LP (n, i, p) is an F-avoiding code.
C. Equivalence between the Three Types of Codes
Let $F_{p,p+k}$ be the set of all period p sequences of length p + k, for any given p, $1 \leq p \leq b - 1$, and let $F = \bigcup_{p=1}^{b-1} F_{p,p+k}$. The following result implies a strong relation between constrained de Bruijn codes, codes of limited length for period p substrings, and F-avoiding codes. By the related definitions of these concepts we have the following theorem.
Theorem 4. For all given admissible n, b, and k,

$$C_{DB}(n, b, k) = A(n; F) = \bigcap_{p=1}^{b-1} C_{LP}(n, p + k - 1, p).$$

Proof. We will prove that $A(n; F) \subseteq C_{DB}(n, b, k)$, that $C_{DB}(n, b, k) \subseteq \bigcap_{p=1}^{b-1} C_{LP}(n, p+k-1, p)$, and that $\bigcap_{p=1}^{b-1} C_{LP}(n, p+k-1, p) \subseteq A(n; F)$. These three containment proofs will imply the claim of the theorem.

First, we prove that if c is a sequence in A(n; F) then c is a sequence in $C_{DB}(n, b, k)$. Let $c = (c_1, c_2, \ldots, c_n)$ be any sequence in A(n; F), and assume on the contrary that $c \notin C_{DB}(n, b, k)$. This implies that there exist two indices i and j, $i < j \leq i + b - 1$, such that $c[i, i+k-1] = c[j, j+k-1]$. Hence, the substring $c[i, j+k-1]$ has period $p = j - i \leq b - 1$ and length p + k, i.e., it is a sequence of $F_{p,p+k} \subseteq F$, a contradiction.

Next, we prove that if c is a sequence in $C_{DB}(n, b, k)$ then c is a sequence in $\bigcap_{p=1}^{b-1} C_{LP}(n, p+k-1, p)$. Let $c = (c_1, c_2, \ldots, c_n)$ be any sequence in $C_{DB}(n, b, k)$, and assume on the contrary that $c \notin C_{LP}(n, p+k-1, p)$ for some p, $1 \leq p \leq b - 1$. Then c contains a substring of period p whose length is at least p + k, and hence there exists an index i such that $c[i, i+k-1] = c[i+p, i+p+k-1]$ with $p \leq b - 1$, contradicting the fact that $c \in C_{DB}(n, b, k)$.

Finally, if c is a sequence in $\bigcap_{p=1}^{b-1} C_{LP}(n, p+k-1, p)$, then each substring of c with period p, $1 \leq p \leq b - 1$, has length at most p + k − 1. Hence, no sequence of $F_{p,p+k}$, whose period is p and whose length is p + k, is a substring of c, i.e., $c \in A(n; F)$.

Corollary 1. For all given admissible n, b, k, and any code $C \subseteq \Sigma^n$, the following three statements are equivalent: C is a (b, k)-constrained de Bruijn code; C is an F-avoiding code; and C is a code of (p + k − 1)-limited length for period p substrings for each p, $1 \leq p \leq b - 1$.

The set F of all forbidden patterns of an F-avoiding code C is in general not minimal, i.e., there may exist another set F′ such that F′ ⊂ F and C is also an F′-avoiding code. This implies that there can exist two different sets, $F_1$ and $F_2$, where $F_1 \subset F_2$, such that $A(n; F_1) = A(n; F_2)$. But, there always exists one such set F whose size is smaller than the sizes of all the other sets. To see that, let F be a set of forbidden sequences over Σ. As long as F contains σ sequences of length ℓ + 1 which form the set S = {uα : α ∈ Σ}, where u is a string of length ℓ, these σ sequences of F which are contained in S can be replaced by the sequence u. When this process comes to its end, instead of the original set F of forbidden patterns we have a set F′ of forbidden patterns for the same code. The set of sequences F′ will be called the forbidden reduced set of F.
Lemma 2.
If F is a set of forbidden sequences and F ′ is its forbidden reduced set, then A(n; F) = A(n; F ′ ).
Proof. The proof is an immediate observation from the fact that all the sequences of length n which do not contain the patterns in S = {uα : α ∈ Σ}, where n is greater than the length of u, as substrings do not contain the pattern u as a substring. Hence, A(n; F) ⊆ A(n; F ′ ).
Clearly, all the sequences of length n which do not contain u as a substring do not contain any pattern from S as a substring. Hence, A(n; F ′ ) ⊆ A(n; F).
IV. ENUMERATION OF CONSTRAINED DE BRUIJN SEQUENCES
In this section we consider enumeration of the number of sequences in $C_{DB}(n, b, k)$. Note, that we are considering only acyclic sequences. A σ-ary code C of length n is a set of σ-ary sequences of length n, that is, $C \subseteq \Sigma^n$. For each code C of length n, we define the rate of the code C to be $R(C) = \log_\sigma(|C|)/n$, and the redundancy of the code C to be $r(C) = n - \log_\sigma(|C|)$, where |C| is the size of the code C. We define the maximum asymptotic rate of (b, k)-constrained de Bruijn codes to be

$$R_{DB}(b, k) = \limsup_{n \to \infty} \frac{\log_\sigma |C_{DB}(n, b, k)|}{n}.$$
A. Trivial Bounds
In this section trivial bounds on the number of constrained de Bruijn sequences are considered. The number of (cyclic) de Bruijn sequences of length $\sigma^n$ over an alphabet of size σ is $(\sigma!)^{\sigma^{n-1}}/\sigma^n$ [1], which implies the following simple result (for acyclic sequences).
Theorem 5. For any given positive integer σ ≥ 2, a positive integer k, and $n \geq \sigma^k + k - 1$,

$$|C_{DB}(n, \sigma^k, k)| = (\sigma!)^{\sigma^{k-1}}.$$

Corollary 2. For any given positive integer σ ≥ 2 and a positive integer k, $R_{DB}(b, k) = 0$ for each $b \geq b_0$, where $b_0$ is an integer whose value depends on σ and k. The reason that the rates are zero for so many values is that once we have a long simple path of length b in $G_{\sigma,k}$ (and the number of such paths is very large [48]), to continue the path into a long sequence which is a (b, k)-constrained de Bruijn sequence, the sequence must be almost periodic (with a possibility of some small local changes). In the case of Theorem 5, where the path is a de Bruijn sequence, there is only one way to continue the path without violating the constraint. There are some intriguing questions in this context. The first one is to be specific about the value of $b_0$. Another question is to find a good bound on $R_{DB}(b, k)$, where b is large compared to k and $R_{DB}(b, k) > 0$. Such a bound will be given in Section V, but we have no indication how good it is. The other extreme case is when b = 1, and hence the sequence is not constrained, and we have the following trivial result.

Theorem 6. For any positive integer k, $|C_{DB}(n, 1, k)| = \sigma^n$ and hence $R_{DB}(1, k) = 1$.
Except for these cases, finding the rates of constrained de Bruijn sequences, where b > 1, is not an easy task. When the rate approaches one, we are interested in the redundancy of $C_{DB}(n, b, k)$. Unfortunately, finding the redundancy of $C_{DB}(n, b, k)$ is even more difficult than finding the rate. Fortunately, for small values of b we can use the theory of constrained coding, and for k large enough we can even show that the redundancy is one symbol.
The last trivial case is the (b, 1)-constrained de Bruijn codes. The sequences of such a code will be used for error correction of synchronization errors in racetrack memories in Section VII-A. Since this constraint implies that any b consecutive elements in the sequence are distinct, we must have σ ≥ b to obtain any valid sequence. If σ ≥ b, then we have to choose b distinct elements from the σ alphabet letters to start the sequence, and in any other position we can choose any of the alphabet letters except for those in the previous b − 1 positions of the sequence. Hence, we have

Theorem 7. For any b ≥ 1 and any alphabet of size σ ≥ b we have

$$|C_{DB}(n, b, 1)| = \frac{\sigma!}{(\sigma - b)!}(\sigma - b + 1)^{n-b}.$$

Corollary 3. For any b ≥ 1 and any alphabet of size σ ≥ b we have that

$$R_{DB}(b, 1) = \log_\sigma(\sigma - b + 1).$$

We observe that the asymptotic rate $R_{DB}(b, k)$ gets close to 1 when b is fixed and k tends to infinity. In this case, we are interested in the redundancy of the code.
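The count of Theorem 7 is easy to confirm by brute force for small parameters; a quick sketch (ours):

```python
from itertools import product
from math import perm

def count_cdb_b_1(n, b, sigma):
    """Brute-force count of (b, 1)-constrained sequences: any b
    consecutive symbols must be pairwise distinct."""
    ok = lambda s: all(len(set(s[i:i + b])) == b for i in range(n - b + 1))
    return sum(ok(s) for s in product(range(sigma), repeat=n))

# sigma = 4, b = 3, n = 6: the formula gives (4!/1!) * 2^3 = 192.
assert count_cdb_b_1(6, 3, 4) == perm(4, 3) * (4 - 3 + 1) ** (6 - 3)
```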
For a subset A of a set U, the complement of A, denoted A^c, is the subset consisting of all the elements of U which are not contained in A. For a code C of length n over Σ, C^c = Σ^n \ C. The following simple lemma will be used in the next theorem.
Proof. It is well known from set theory that

Combining this with the trivial assertion that for any two sets A and B, |A ∪ B| ≤ |A| + |B|, we have that

Theorem 8. For all σ, n and b < k,

In particular, for k ≥ ⌈log_σ n + log_σ(b − 1)⌉ + 1, the redundancy of C_DB(n, b, k) is at most a single symbol.
Combining this fact with Lemma 3 we have that

A word is in C_LP(n, i + k − 1, i)^c if and only if it has length n and a substring of period i whose length is at least i + k. There are at most n − (i + k − 1) possible starting positions for such a substring. Once the first i symbols of this substring of length i + k are known, its other k symbols are uniquely determined. The remaining n − (i + k) symbols of the word of length n can be chosen arbitrarily (note that some choices create other substrings of the same period and larger length in this word, and they might create other substrings with period larger than i, so the computation which follows has many repetitions and counts some sequences that need not be counted). Hence, the number of words in C_LP(n, i + k − 1, i)^c is upper bounded by

and hence, using (1), we have

In particular, for k ≥ ⌈log_σ(n) + log_σ(b − 1)⌉ + 1, we obtain

Thus, the redundancy of C_DB(n, b, k) is at most a single symbol.
A weaker bound than the one in Theorem 8 for σ = 2 was given in [13] (Theorem 13). Finally, to encode the (b, k)-constrained de Bruijn code efficiently with only a single symbol of redundancy, we may use sequence replacement techniques [62].
C. Representation as a Graph Problem
The key to the enumeration of the number of (b, k)-constrained de Bruijn sequences of length n is a representation of the enumeration as a graph problem. The de Bruijn graph G_{σ,k} was the key to the enumeration of the cyclic (and hence also the acyclic) de Bruijn sequences of length σ^k, i.e., the number of (σ^k, k)-constrained de Bruijn sequences of length n ≥ σ^k + k − 1. This number is equal to the number of Hamiltonian cycles in the graph and also to the number of Eulerian cycles in G_{σ,k−1}. This number was found, for example, by a reduction to the number of spanning trees in the graph [25]. The (not necessarily simple) paths in G_{σ,k} (where vertices are considered as the k-tuples) represent the (1, k)-constrained de Bruijn sequences, i.e., sequences with no constraints.

Can the de Bruijn graph, or a slight modification of it, represent other constraints? The answer is definitely yes. If we remove from G_{σ,k} the σ self-loops, then the paths (represented by the vertices) in the new graph represent all the (2, k)-constrained de Bruijn sequences. This can be generalized to obtain a graph for the (3, k)-constrained de Bruijn sequences and, in general, for the (b, k)-constrained de Bruijn sequences, when b is a fixed integer.

For the (3, k) constraint, we form from G_{σ,k} a new graph G′_{σ,k} as follows. First, we remove all the self-loops from G_{σ,k} to accommodate the (2, k) constraint. Next, we consider all the cycles of length two in G_{σ,k}. These cycles have the form x → y → x, where x = (x_1, x_2, . . . , x_k) and y = (y_1, y_2, . . . , y_k), x_i = y_j = a for odd i and even j, and x_i = y_j = b for even i and odd j, where a and b are two distinct letters in Σ. In G′_{σ,k}, x and y are replaced by four vertices: x1, x2 instead of x, and y1, y2 instead of y. All the in-edges and out-edges to and from x (except for the ones from and to y) are duplicated and are also in-edges and out-edges to and from x1 and x2. Similarly, all the in-edges and out-edges to and from y (except for the ones from and to x) are duplicated and are also in-edges and out-edges to and from y1 and y2. There is also one edge from x1 to y1 and one from y2 to x2. The paths (related to the vertices) in the constructed graph G′_{σ,k} represent the (3, k)-constrained de Bruijn sequences. The number of cycles of length two in G_{σ,k} is σ(σ − 1)/2, and hence the number of vertices in G′_{σ,k} is σ^k + σ(σ − 1). An example of the representation of the graph (i.e., state diagram) for the (3, 3)-constrained de Bruijn sequences via the de Bruijn graph is presented in Figure 1.
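As a sketch of this representation (all names ours), one can build the adjacency matrix of G_{σ,k}, drop the σ self-loops to accommodate the (2, k) constraint, and count the (2, k)-constrained sequences of length n as paths with n − k edges:

```python
import numpy as np
from itertools import product

def debruijn_adj_no_loops(sigma, k):
    # Adjacency matrix of G_{sigma,k} with the sigma self-loops removed.
    # A self-loop sits at each constant k-tuple (a run of k+1 equal symbols).
    verts = list(product(range(sigma), repeat=k))
    idx = {v: i for i, v in enumerate(verts)}
    A = np.zeros((len(verts), len(verts)), dtype=np.int64)
    for v in verts:
        for a in range(sigma):
            w = v[1:] + (a,)
            if w != v:                      # skip the self-loop
                A[idx[v], idx[w]] = 1
    return A

# Number of (2,k)-constrained sequences of length n = number of paths
# with n-k edges, summed over all start/end vertices.
sigma, k, n = 2, 3, 4
A = debruijn_adj_no_loops(sigma, k)
count = int(np.linalg.matrix_power(A, n - k).sum())
assert count == 14   # binary words of length 4 avoiding 0000 and 1111
```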
A second representation uses the theory of constrained coding [47]. One has to construct the state diagram of the constrained system. From the state diagram one generates the related adjacency matrix and computes the rate of the constrained de Bruijn code via the largest eigenvalue of the adjacency matrix. If λ is the largest eigenvalue of the adjacency matrix, then asymptotically there are λ^n sequences and the rate of the code is log_σ λ. An example will be given in the next subsection.
While the first representation is related to the code C_DB(n, b, k), the second representation is related to the code A(n; F), where F consists of all the substrings which are forbidden in a (b, k)-constrained sequence. By Theorem 4 these two codes are equal. Each vertex in the state diagram (of the constrained code representation) represents a substring which is not in F. When the last vertex in the path generated by the current prefix s′ of the sequence s is the vertex v, it implies that the substring represented by vertex v is the longest suffix of s′ which is a prefix of a forbidden pattern in F. Each one of the σ out-edges exists in v if and only if it does not lead to a forbidden substring in F.

Eventually, both representations are equivalent and one can be derived from the other by using a reduction of the state diagram [47]. However, each representation has some advantages over the other. The advantage of using the state diagram of the constraint, based on the forbidden substrings, is a simple definition of the state diagram, from which the theory of constrained codes can be used. The advantage of using a graph like the de Bruijn graph is that each point of the sequence is related to a specific k-tuple and is identified by a vertex (note that some k-tuples are represented by a few vertices) which represents this k-tuple. An example of the state diagram for the (3, 3)-constrained de Bruijn sequences via the forbidden patterns is depicted in Figure 2.
D. State Diagrams and Rates based on the Forbidden Subsequences
As a sequence of a constrained code, the (b, k)-constrained de Bruijn sequence has several forbidden patterns, s_1, s_2, . . . , s_ℓ. W.l.o.g. we assume that s_i is not a prefix of s_j for i ≠ j. The state diagram has an initial vertex x, from which there are ℓ paths. The ith path has length smaller by one than the length of the ith forbidden pattern s_i. Paths share a prefix if their related sequences share a corresponding prefix. Each vertex in the state diagram, except for the ones at the end of the ℓ paths, has out-degree σ, one edge for each possible symbol of Σ that can be seen at the corresponding read point of the constrained sequence. The edges are labelled by the related symbols. The last symbol of s_i does not appear on an edge from the last vertex of the related path. A vertex in the state diagram is labelled by the prefix of the path which it represents. The initial vertex x is labelled with the sequence of length zero. Thus, to construct the state diagram we have to determine all the forbidden substrings of the (b, k)-constrained de Bruijn sequence. This representation coincides with the representation of the constrained system, and the related example for the (3, 3)-constrained de Bruijn sequences is depicted in Figure 2.
To determine exactly the maximum asymptotic rates of (b, k)-constrained de Bruijn codes, we use the well-known Perron-Frobenius theory [47]. When b and k are given, by using the de Bruijn graph for the given constraint, or the related state diagram, we can build a finite directed graph with labelled edges such that the paths in the graph generate exactly all (b, k)-constrained de Bruijn sequences. For example, when (b, k) = (3, 3), the largest eigenvalue of the adjacency matrix is λ ≈ 1.73459. Hence, the capacity of this constrained system, which is the maximum asymptotic rate of a (3, 3)-constrained de Bruijn code, is log_2 λ ≈ 0.7946. Similarly, we can compute the maximum asymptotic rates of (b, k)-constrained de Bruijn codes for other values of b and k. Table I presents some values of the asymptotic rates of the constrained systems for small parameters. The asymptotic rate can be evaluated for infinitely many pairs (b, k), as proved in the next theorem.
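The 10-state adjacency matrix for the (3, 3) constraint is not reproduced above; as a sketch, the quoted capacity can be checked with the smaller rate-equivalent state diagram of Example 1 below (avoiding {000, 1111}), whose states, in our labelling, track the trailing run of the sequence:

```python
import numpy as np

# States: z1, z2 = one or two trailing zeroes; o1, o2, o3 = one, two or
# three trailing ones. Appending a symbol that would create 000 or 1111
# is forbidden, so those edges are absent.
A = np.array([
    [0, 1, 1, 0, 0],   # z1 -> z2 (append 0), z1 -> o1 (append 1)
    [0, 0, 1, 0, 0],   # z2 -> o1 (appending 0 would create 000)
    [1, 0, 0, 1, 0],   # o1 -> z1, o1 -> o2
    [1, 0, 0, 0, 1],   # o2 -> z1, o2 -> o3
    [1, 0, 0, 0, 0],   # o3 -> z1 (appending 1 would create 1111)
], dtype=float)

lam = max(abs(np.linalg.eigvals(A)))
print(lam, np.log2(lam))   # ~1.73459 and ~0.7946, matching the text
```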
Theorem 9. For any positive integer k > 1, the maximum asymptotic rate of a binary (3, k)-constrained de Bruijn code is log_2 λ, where λ is the largest root of the polynomial.

Proof. Recall that F_{p,p+k} is the set of all sequences of period p and length p + k. Let F_1 = F_{1,k+1} ∪ F_{2,k+2}, i.e., F_1 contains the all-zero word of length k + 1, the all-one word of length k + 1, and the two words of length k + 2 in which any two consecutive positions have distinct symbols. By Corollary 1, C_DB(n, 3, k) = A(n; F_1), i.e., a binary (3, k)-constrained de Bruijn code of length n is an F_1-avoiding code of length n.
Let F_2 be the set which contains the all-zero word of length k and the all-one word of length k + 1. Consider the D-morphism, defined first in [40], D : B^n → B^{n−1}, B = {0, 1}, where D(x) = D(x_1, x_2, . . . , x_n) = y = (y_2, . . . , y_n), with y_i = x_i + x_{i−1}, 2 ≤ i ≤ n. It was proved in [40] that the mapping D is a 2-to-1 mapping.
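A minimal sketch of the D-morphism and a brute-force check of its 2-to-1 property:

```python
from itertools import product
from collections import Counter

def D(x):
    # The D-morphism of [40]: y_i = x_i + x_{i-1} (mod 2), 2 <= i <= n.
    return tuple((x[i] + x[i - 1]) % 2 for i in range(1, len(x)))

# Every image has exactly two preimages (a word and its complement).
pre = Counter(D(x) for x in product((0, 1), repeat=6))
assert set(pre.values()) == {2}
```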
A constant run of length k + 1 in x corresponds to a run of k zeroes in D(x), and an alternating run of length k + 2 in x corresponds to a run of k + 1 ones in D(x), so D maps A(n; F_1) onto A(n − 1; F_2) and, being 2-to-1, |A(n; F_1)| = 2|A(n − 1; F_2)|. Hence, A(n; F_1) and A(n; F_2) have the same maximum asymptotic rate, and the rate of A(n; F_2) can be computed instead of that of A(n; F_1).
Let A_0(n; F_2) be the set of all F_2-avoiding words of length n which start with a zero, and let A_1(n; F_2) be the set of all F_2-avoiding words of length n which start with a one. Clearly, the asymptotic rates of A(n; F_2), A_0(n; F_2), and A_1(n; F_2) are equal. Let Φ_1 be the mapping for which Φ_1(x = (0, x_2, . . . , x_n)) = (x_{i+1}, . . . , x_n) ∈ A_1(n − i; F_2), where i is the smallest index such that x_{i+1} = 1. Since A_0(n; F_2) avoids the all-zero sequence of length k, it follows that the mapping Φ_1 is a well-defined bijection. Therefore,

Similarly, we can define the bijection and obtain the equality

Equations (2) and (3) imply that

It is again easy to verify that the maximum asymptotic rates of A_0(n − ℓ; F_2) for all 2 ≤ ℓ ≤ 2k − 1 are equal. Let λ be the corresponding growth rate. The recursive formula can now be solved for λ, and the maximum asymptotic rate of A_0(n; F_2) is log_2 λ, where λ is computed as the largest root of the polynomial.

Using the recursive formulas in Equations (2) and (3), we can compute the exact size of A(n; F_2) efficiently. Hence, we can rank/unrank all words in A(n; F_2) efficiently using enumerative techniques [17].
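Equations (2) and (3) are not reproduced above; the following dynamic program is our reading of the proof's structure, with the convention a0[0] = a1[0] = 1 for the empty word: a0[n] = Σ_{i=1}^{k−1} a1[n − i] and a1[n] = Σ_{j=1}^{k} a0[n − j]. For k = 3 the implied growth rate λ satisfies λ⁵ = λ³ + 2λ² + 2λ + 1 (our derivation), i.e., λ ≈ 1.7346, consistent with the (3, 3) capacity computed earlier:

```python
def avoiding_counts(n_max, k):
    # Count F_2-avoiding binary words (no 0^k, no 1^(k+1)) by first symbol:
    # a0[n] words start with 0, a1[n] words start with 1; index 0 is a
    # formal base case for the empty word.
    a0 = [1] + [0] * n_max
    a1 = [1] + [0] * n_max
    for n in range(1, n_max + 1):
        a0[n] = sum(a1[n - i] for i in range(1, min(k - 1, n) + 1))
        a1[n] = sum(a0[n - j] for j in range(1, min(k, n) + 1))
    return a0, a1

a0, a1 = avoiding_counts(60, 3)
total = [x + y for x, y in zip(a0, a1)]
print(total[60] / total[59])   # -> ~1.7346, so the rate is log2(1.7346) ~ 0.7946
```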
Computing the largest eigenvalue of the adjacency matrix is the key to computing the asymptotic rate. The size of the matrix and the number of its nonzero entries are important for reducing the complexity of the computation. Fortunately, we can evaluate some of these parameters to get an idea of the parameters for which it is feasible to compute the largest eigenvalue. For lack of space, and since this is mainly a combinatorial problem, we omit this computation and leave it for future work. Finally, we note that we can use the idea in the proof of Theorem 9 to reduce the number of forbidden patterns by half, while keeping the same asymptotic rate. This is done in the following example.
Example 1. Consider the (3, 3)-constrained de Bruijn sequences. The set of patterns which should be avoided
is {0000, 1111, 01010, 10101}. As was illustrated in Figure 1 and Figure 2, 10 nodes are required to represent a graph of constrained sequences avoiding these four patterns. Using Theorem 9, it can be observed that the asymptotic size of this code is the same as that of the code avoiding all patterns in the set {000, 1111}. Hence, only 5 nodes are required to represent the state diagram of the constrained sequences avoiding these two patterns, as depicted in Figure 3.
V. CODES WITH A LARGE CONSTRAINED SEGMENT
After considering in Section IV enumerations and constructions for (b, k)-constrained de Bruijn sequences which are mainly efficient for small b compared to k, we will use the theory of shift registers for a construction which applies to considerably larger b compared to k. Such constructions for (σ^k, k)-constrained de Bruijn codes are relatively simple, as each sequence in the code is formed by concatenations of the same de Bruijn sequence. The disadvantage is that the rate of the related code is zero. In this section we present a construction of codes in which b is relatively large compared to k and the rate of the code is greater than zero.
Let P_k be the set of all primitive polynomials of degree k over F_q. By Theorem 1, there are φ(q^k − 1)/k polynomials in P_k. Each polynomial is associated with a linear feedback shift register and a related m-sequence of length q^k − 1. Let S_{P_k} be this set of m-sequences. In each such sequence, each nonzero k-tuple over F_q appears exactly once as a window in one period of the sequence. In particular, each such sequence contains a unique run of k − 1 zeroes and a unique run of length k of each other symbol; such a run of the same symbol is the longest one in the sequence. However, there is another window property shared by all the sequences in S_{P_k}.

Lemma 4. Each (2k)-tuple over F_q appears at most once as a window in the sequences of S_{P_k}.
Proof. Let f_1(x) and f_2(x) be two distinct primitive polynomials in P_k, whose state diagrams contain the cycles C_1 and C_2 of length q^k − 1, respectively. By Theorem 2, the polynomial f_1(x)f_2(x) has degree 2k and its state diagram contains the two cycles C_1 and C_2. Each (2k)-tuple appears exactly once in a window of length 2k in one of the cycles of the state diagram related to f_1(x)f_2(x). Hence, each such (2k)-tuple appears at most once in either C_1 or C_2.
The windows of length k are distinct within each sequence, and the windows of length 2k are distinct between any two sequences, which implies the claim of the lemma.
We are now in a position to describe our construction of constrained de Bruijn codes with a large constrained segment.
Construction 1. Let S_{P_k} be the set of all m-sequences of order k over F_q. Each sequence is considered as an acyclic sequence of length q^k − 1, where the unique run of k − 1 zeroes is at the end of the sequence. We construct the following code:
C = {(s_1 0^{ε_1}, s_2 0^{ε_2}, . . . , s_{ℓ−1} 0^{ε_{ℓ−1}}, s_ℓ 0^{ε_ℓ}) : s_i ∈ S_{P_k}, 0 ≤ ε_i ≤ k + 1, 1 ≤ i ≤ ℓ}.
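A minimal sketch of Construction 1 for q = 2 and k = 3 (the tap sets and helper names are ours; the two tap sets realise the two primitive polynomials x³ + x + 1 and x³ + x² + 1):

```python
def lfsr_mseq(taps, k):
    # Binary m-sequence of period 2**k - 1: the next bit is the XOR of
    # state[i] for i in taps.
    state = [1] + [0] * (k - 1)
    out = []
    for _ in range(2 ** k - 1):
        out.append(state[0])
        state = state[1:] + [sum(state[i] for i in taps) % 2]
    return out

def end_with_zero_run(s, k):
    # Rotate the cyclic sequence so its unique run of k-1 zeroes is the suffix.
    n = len(s)
    for i in range(n):
        if all(s[(i + j) % n] == 0 for j in range(k - 1)):
            cut = (i + k - 1) % n
            return s[cut:] + s[:cut]

def min_gap(s, w):
    # Smallest distance between two occurrences of the same length-w window.
    last, gap = {}, len(s)
    for i in range(len(s) - w + 1):
        t = tuple(s[i:i + w])
        if t in last:
            gap = min(gap, i - last[t])
        last[t] = i
    return gap

k = 3
S = [end_with_zero_run(lfsr_mseq(t, k), k) for t in ((0, 1), (0, 2))]
word = S[0] + [0] + S[1] + [0, 0]            # l = 2, eps = (1, 2)
assert min_gap(word, 2 * k) >= 2 ** k - 1    # the (q^k - 1, 2k) constraint
```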
Theorem 10. The code C contains (q^k − 1, 2k)-constrained de Bruijn sequences, each of length at least ℓ(q^k − 1) and at most ℓ(q^k + k). The size of C is M^ℓ, where M = φ(q^k − 1)(k + 2)/k and k ≥ 3.
Proof. We first prove that each codeword of C is a (q^k − 1, 2k)-constrained de Bruijn sequence. Let s = 0^{ε_{i−1}} s_i 0^{ε_i}, 1 ≤ i ≤ ℓ, be a substring of a codeword in C. We first note that, except for the windows of length 2k having more than k − 1 zeroes at the start or at the end of s, all the other windows of length 2k contained in s appear in the cyclic sequence s_i. These new windows (having more than k − 1 zeroes), as well as the windows which contain the run of zeroes 0^{ε_i+k−1}, appear only between two sequences s_j and s_{j+1}, and as such they are separated by at least q^k − k symbols. Moreover, each window of length 2k containing zeroes from one such run is clearly unique. This implies that each sequence of C is a (q^k − 1, 2k)-constrained de Bruijn sequence (as repeated windows of length 2k can occur only between different s_i's). Since each s_i has length q^k − 1 and 0 ≤ ε_i ≤ k + 1, it follows that the length of a codeword is at least ℓ(q^k − 1) and at most ℓ(q^k + k). The number of sequences that can be used for the substring s_i 0^{ε_i} is φ(q^k − 1)(k + 2)/k, since s_i can be chosen in φ(q^k − 1)/k ways (see Theorem 1) and 0^{ε_i} can be chosen in k + 2 distinct ways. This implies that |C| = M^ℓ.
The codewords of the code C obtained via Construction 1 can have different lengths. We now construct a similar code in which all codewords have the same length. Let C′ be the code which contains all the prefixes of codewords from C, and let C_1 be the code obtained by extending each codeword of C to the required fixed length with a prefix of a codeword from C.

Proof. The codewords in C_1 are formed from the codewords in C by lengthening them to the required length with prefixes of codewords in C. The lengthening does not change the structure of the codewords, it just changes their length. Hence, the proof that the codewords are (q^k − 1, 2k)-constrained de Bruijn sequences is the same as in the proof of Theorem 10. The number of codewords is the same as in C if there is exactly one way to lengthen each codeword of C. Since there is usually more than one such way, it follows that the code will be of a larger size.
Corollary 4. The rate of the code C_1 is k/(q^k + k).
Construction 1 yields acyclic sequences. Can we find a related construction for cyclic constrained de Bruijn sequences with similar parameters? The answer is definitely positive. Construction 1 can be viewed as a construction for cyclic sequences of different lengths. To have a code with cyclic (q^k − 1, 2k)-constrained de Bruijn sequences of the same length (as the acyclic sequences in C_1), we can restrict the values of the ε_i's. For example, we can require that Σ_{i=1}^{ℓ} ε_i = ⌊ℓ(k + 1)/2⌋. This implies similar results for the cyclic code as for the acyclic code.
The code C_1 can be slightly improved in size, but as this does not make a significant increase in the rate, we ignore the possible improvements. The same construction can be applied with a larger number of cycles whose length is σ^k − 1 (σ not necessarily a prime power) and with the same window length or even a smaller one. We do not have a general construction for such cycles, but we outline a simple method to search for them.
Let D(σ, k) be the set of de Bruijn sequences in G_{σ,k}. Let δ be a positive integer. Construct a graph whose vertices are all the de Bruijn sequences in D(σ, k). Two vertices (de Bruijn sequences) are connected by an edge if they share a window of length greater than k + δ. Now let T be an independent set in this graph. Two sequences related to this independent set do not share a window of length greater than k + δ, so we can apply Construction 1 and its variants, where 0 ≤ ε_i ≤ δ. Note that in Construction 1, the set of sequences S_{P_k} forms an independent set with δ = k.
A computer search over the 2048 binary de Bruijn sequences of length 32 was performed. An independent set of size 8 was found for δ = 5, compared to the 6 m-sequences of length 31. Furthermore, for δ = 4 an independent set of size 4 was found, and for δ = 2 the size of the independent set that was found was only 2. This implies that this direction of research can be quite promising.
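The search can be sketched as follows; for feasibility this brute-force version uses the 16 de Bruijn sequences of order k = 4 rather than the 2048 of order 5 used in the text, and the greedy step only lower-bounds the maximum independent set (all names ours):

```python
from itertools import product

def debruijn_cycles(k):
    # All binary de Bruijn sequences of order k, one representative per
    # rotation class (fixed to start with k zeroes). Feasible for k <= 4.
    n = 2 ** k
    found = []
    for tail in product((0, 1), repeat=n - k):
        s = (0,) * k + tail
        if len({tuple(s[(i + j) % n] for j in range(k)) for i in range(n)}) == n:
            found.append(s)
    return found

def share_window(s, t, w):
    # Do the cyclic sequences s and t share a window of length w?
    n = len(s)
    ws = {tuple(s[(i + j) % n] for j in range(w)) for i in range(n)}
    return any(tuple(t[(i + j) % n] for j in range(w)) in ws for i in range(n))

k, delta = 4, 2
seqs = debruijn_cycles(k)                # 16 sequences for k = 4
indep = []
for s in seqs:                           # greedy independent set
    if all(not share_window(s, t, k + delta + 1) for t in indep):
        indep.append(s)
print(len(seqs), len(indep))
```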
VI. APPLICATION TO THE ℓ-SYMBOL READ CHANNEL
In this section we show that cyclic (b, k)-constrained de Bruijn codes can be used to correct synchronization errors in the ℓ-symbol read channel [8], [14], [63]. Previously, only substitution errors were considered in this channel. Each cyclic (b, k)-constrained de Bruijn sequence which forms a codeword in the channel can be used to correct a limited number of synchronization errors which might have occurred. The correction does not depend on the other codewords of the code. The mechanism used in this section will be of help in correcting such errors in racetrack memory, as discussed in the next section. We will consider F_q as our alphabet, although the method can be applied to alphabets of any size; the reason is that other error-correcting codes used for this purpose are defined over F_q.

Definition 4. Let x = (x_1, x_2, . . . , x_n) ∈ F_q^n be a q-ary sequence of length n. In the ℓ-symbol read channel, if x is a codeword then the corresponding ℓ-symbol read vector of x is π_ℓ(x) = ((x_1, . . . , x_ℓ), (x_2, . . . , x_{ℓ+1}), . . . , (x_n, x_1, . . . , x_{ℓ−1})).
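Definition 4 translates directly into code (a sketch, function name ours):

```python
def pi_ell(x, ell):
    # The (cyclic) ell-symbol read vector of x from Definition 4.
    n = len(x)
    return [tuple(x[(i + j) % n] for j in range(ell)) for i in range(n)]

# e.g. pi_ell((1, 0, 2, 1), 2) -> [(1, 0), (0, 2), (2, 1), (1, 1)]
```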
VII. APPLICATIONS TO RACETRACK MEMORIES
Racetrack memory is an emerging non-volatile memory technology which has attracted significant attention in recent years due to its promising ultra-high storage density and low power consumption [51], [59]. The basic information storage element of a racetrack memory is called a domain, also known as a cell. The magnetization direction of each cell is programmed to store information. The reading mechanism is operated by many read ports, called heads. In order to read the information, each cell is shifted to its closest head by a shift operation. Once a cell is shifted, all other cells are also shifted in the same direction and in the same speed. Normally, along the racetrack strip, all heads are fixed and distributed uniformly [64]. Each head thus reads only a block of consecutive cells which is called a data segment.
A shift operation might not work perfectly. When the cells are not shifted (an under shift), the same cell is read again by the same head. This event causes a repetition (or sticky insertion) error. When the cells are shifted by more than a single cell location (an over shift), one cell or a block of cells is not read by each head. This event causes a single deletion or a burst of consecutive deletions. We note that the maximum number of consecutive deletions is limited; in other words, a burst of consecutive deletions has limited length. An experimental result shows that the cells are over shifted by at most two locations with extremely high probability [64]. In this paper, we study both kinds of errors and refer to them as limited-shift errors.
Since limited-shift errors can be modeled as sticky insertions and bursts of consecutive deletions of limited length, sticky-insertion/deletion-correcting codes can be applied to combat these limited-shift errors. Although there are several known sticky-insertion-correcting codes [18], [37], [46], deletion-correcting codes [5], [34], [43], single-burst-deletion-correcting codes [15], [56], and multiple-burst-deletion-correcting codes [33], there is a lack of knowledge on codes correcting a combination of multiple bursts of deletions and sticky insertions. Correcting these types of errors is especially important in racetrack memories. In this section, motivated by the special structure of having multiple heads in racetrack memories, we study codes correcting multiple bursts of deletions and sticky insertions. To correct shift errors in racetrack memories with only a single head, Vahid et al. [60] recently studied codes correcting two shift errors of deletions and/or insertions.
Another approach to combat limited-shift errors is to leverage the special feature of racetrack memories that it is possible to add extra heads to read the cells. If there is no error, the information read by these extra heads is redundant. However, if there are limited-shift errors, this information is useful for correcting them. Recently, several schemes have been proposed to leverage this feature [12], [13], [64] in order to tackle this problem. However, in [12], [13] each head needs to read all the cells, while in our model each head only needs to read a single data segment. Our goal in this section is to present several schemes to correct synchronization errors in racetrack memories, all of them based on constrained de Bruijn sequences and codes. Some of these schemes add extra heads and some do not.
A. Correcting Errors in Racetrack Memories without Extra Heads
Our first goal in this subsection is to construct q-ary b-limited t_1-burst-deletion-correcting codes to combat synchronization errors in racetrack memories. Such a code can correct t_1 bursts of deletions if the length of each burst is at most b, i.e., at most b deletions occurred in each burst, and any two of these bursts are separated by symbols which are not in error.

Proof. Assume that ∆− = {i_1, i_2, . . . , i_t}, where i_1 < i_2 < · · · < i_t, is the set of the t locations of all t deleted symbols. Since i_1 is the leftmost index at which a deletion has occurred, it follows that s[1, i_1 − 1] = s(∆−)[1, i_1 − 1], and i_1 can be found as the first position where s and s(∆−) differ. To correct the first error, we insert the symbol s[i_1] into the i_1-th position of s(∆−) and obtain the vector s(∆−_1). Similarly, we can now continue to determine, one by one, the other positions where deletions have occurred, using the words s and s(∆−_1). Thus, the set ∆− can be determined from s and s(∆−), and the lemma is proved.
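The lemma's argument suggests the following greedy sketch (ours): under the (b, 1) constraint, the first position where the received word disagrees with s identifies the leftmost deletion.

```python
def recover_deletions(s, received):
    # Greedy recovery: scan for the first position where the received word
    # disagrees with s, reinsert s[i] there, and repeat until lengths match.
    r = list(received)
    positions = []
    while len(r) < len(s):
        i = next((j for j in range(len(r)) if r[j] != s[j]), len(r))
        r.insert(i, s[i])
        positions.append(i)
    return positions, r

s = [0, 1, 2, 0, 1, 2, 0, 1]        # a (3,1)-constrained sequence over {0,1,2}
pos, rec = recover_deletions(s, [0, 2, 0, 1, 2, 0, 1])  # symbol at index 1 deleted
assert rec == s and pos == [1]
```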
We are now ready to present a construction of q-ary b-limited t_1-burst-deletion-correcting codes. We will show that the maximum rate of these codes is close to the maximum rate of codes correcting multiple erasures, especially when q is large. A q-ary t-erasure-correcting code of length ℓ is a set of q-ary words of length ℓ in which one can correct any set of t erasures, i.e., t positions whose locations are known but whose values are unknown.
Theorem 14. Given 0 < δ, ε < 1, there exists a q-ary b-limited t_1-burst-deletion-correcting code of length ℓ such that its rate R satisfies

The next goal is to study error-correcting codes that combat both under-shift and limited-over-shift errors. For this purpose we start by generalizing Lemma 5.

Lemma 6. Let s be a (b, 1)-constrained de Bruijn sequence over an alphabet of size q. Let s(∆−, ∆+) be the sequence obtained from s after deleting the symbols in the locations specified by ∆− = {i_1, . . . , i_t} and inserting symbols in the locations specified by ∆+. Assume further that i_1 < · · · < i_t and that there are at most b − 2 consecutive numbers in ∆−. Then the sets ∆− and ∆+ are uniquely determined from s and s(∆−, ∆+).
Proof. Since s is a (b, 1)-constrained de Bruijn sequence, it follows that between any two equal symbols there are at least b − 1 different symbols. Hence, all the sticky insertions can be located and corrected, which determines ∆+. Once the sticky insertions are removed, Lemma 5 can be applied to find the positions of the deletions, i.e., to determine ∆−.
It is now straightforward to generalize Theorem 13. A q-ary b-limited t_1-burst-deletion sticky-insertion-correcting code is a code over F_q which corrects any number of sticky insertions and t_1 bursts of deletions, where each burst has length at most b.
Corollary 5. Given 0 < δ, ε < 1, there exists a q-ary b-limited t_1-burst-deletion sticky-insertion-correcting code of length ℓ, where t_1 · b = δ · ℓ, whose rate R satisfies

It is clear that an upper bound on the maximum rate of our codes is 1 − δ. Since ε is arbitrarily small, when b is small and q = 2^m is large, the rates of our codes are close to the upper bound, and hence they are asymptotically optimal.
B. An Acyclic ℓ-symbol Read Channel

In this subsection, we consider a slight modification of the ℓ-symbol read channel, where the symbols are not read cyclically. This acyclic ℓ-symbol read channel will be used in subsection VII-C to correct synchronization errors in racetrack memories with extra heads.
In this channel, as in the cyclic ℓ-symbol read channel, two types of synchronization errors can occur: bursts of deletions, where the length of each burst is restricted to at most b − 2, and sticky insertions. The proof of Theorem 16 is identical to the proof of Theorem 12 with only one distinction. In the cyclic channel we used the relation between the last ℓ-read and the first ℓ-read to determine where they overlap. This cannot be done in the acyclic case. Instead, we assumed that the first ℓ-read has no error. Hence, we can start generating the codeword from the first symbol and thus complete its decoding.
C. Correcting errors in Racetrack Memories with Extra Heads
In this subsection, we present our last application for (b, k)-constrained de Bruijn codes in the construction of codes correcting shift-errors in racetrack memories.
Let N, n, m be three positive integers such that N = n · m. The racetrack memory comprises N cells and m heads which are uniformly distributed. Each head reads a segment of n cells. For example, in Fig. 4, the racetrack memory contains 15 data cells and three heads, placed initially at the positions of cells c_{1,1}, c_{2,1}, and c_{3,1}, respectively. Each head reads a data segment of length 5.
In general, if c = (c_1, c_2, . . . , c_N) is the stored data, then the output of the i-th head is c_i = (c_{i,1}, . . . , c_{i,n}), where c_{i,j} = c_{(i−1)·n+j} for 1 ≤ i ≤ m and 1 ≤ j ≤ n. Hence, an output matrix describing the output from all m heads (without error) is:

When an under-shift error (a sticky-insertion error) occurs, one column is added to the above matrix by repeating the related column of the matrix. When over-shift errors (a deletion error or a burst of b consecutive deletions) occur, one or a few consecutive columns of the matrix are deleted. Our goal is to combat these shift errors in racetrack memories. We note that each column of the above matrix can be viewed as a symbol in an alphabet of size q = 2^m. In particular, let Φ_m : F_2^m → F_q be any bijection. For each column c_j = (c_{1,j}, . . . , c_{m,j})^T, Φ_m(c_j) = v_j ∈ F_q. Hence, by Corollary 5, it is straightforward to obtain the following result.

Another way to combat these types of errors is to add some consecutive extra heads next to the first head. For example, in Fig. 4 there are two extra heads next to the first head. We assume in this section that there are ℓ − 1 extra heads. Since there are two types of heads, we call the ℓ − 1 extra heads secondary heads, while the first m uniformly distributed heads are the primary heads. Hence, ℓ heads read the first data segment together: the first primary head and all the ℓ − 1 secondary heads. For 2 ≤ i ≤ m, each other primary head reads one data segment individually. The output from the last m − 1 primary heads is

It is readily verified that the output of the ℓ heads reading the first data segment is the acyclic ℓ-symbol read sequence π_ℓ(c[1, n]) of the first data segment c[1, n] = (c_{1,1}, . . . , c_{1,n}). This motivates the following construction.

Proof. The size of the code is an immediate observation from the definition of the code C_3(N, t_1, b − 2). The first data segment of length n consists of any sequence s from C_DB(n, b, k). By Theorem 16, we can recover the sequence in the first data segment when there is any number of sticky insertions and bursts of deletions of length at most b − 2. Moreover, we can also determine the locations of these errors. In the output from the last m − 1 heads, all sticky insertions can be corrected easily, and all deletions become erasures, since we know the locations of these errors. There are at most t = t_1 · (b − 2) erasures, and the decoding procedure of the code C_{q_2}(n, t) can be applied to correct them. Thus, the sequence s can be recovered, and the code C_3(N, t_1, b − 2) can correct all sticky insertions and at most t_1 bursts of deletions of length at most b − 2.
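A sketch of this read model and of the two shift-error types (function names ours):

```python
import numpy as np

def head_outputs(c, m, n):
    # Error-free model: head i reads data segment c[(i-1)n : in], giving an
    # m x n output matrix whose columns can be viewed as 2^m-ary symbols.
    return np.asarray(c).reshape(m, n)

def under_shift(M, j):
    # An under shift (sticky insertion) repeats column j in every head's read.
    return np.insert(M, j + 1, M[:, j], axis=1)

def over_shift(M, j, width):
    # An over shift deletes 'width' consecutive columns starting at column j.
    return np.delete(M, range(j, j + width), axis=1)

M = head_outputs(list(range(15)), m=3, n=5)
assert under_shift(M, 2).shape == (3, 6)
assert over_shift(M, 1, 2).shape == (3, 3)
```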
Corollary 6. Consider a racetrack memory comprising N = m · n cells and m primary heads which are uniformly distributed. Using ℓ − 1 extra secondary heads, it is possible to construct a code correcting a combination of any number of sticky insertions and t_1 bursts of deletions whose length is at most b − 2, such that its asymptotic rate satisfies

where ℓ − b + 2 = h, t_1 · (b − 2) = δ · n and 0 < δ, ε < 1.
Since q_2 = 2^{m−1} and N = mn, we have

Moreover, the asymptotic rate of the constrained de Bruijn code is

Therefore, by (4), (5), and Theorem 17, the rate of the code C_3(N, t_1, b − 2) in Construction 3 can be computed as follows:

Corollary 6 can be compared with the result in Proposition 1. When b = h = 3, by Table I we have that R_DB(3, 3) ≈ 0.7946. Hence, using two extra heads, the asymptotic rate of the related code is ((m − 1)/m)(1 − δ − ε) + 0.7946/m = 1 − δ − ε + (0.7946 − 1 + δ + ε)/m. We note that, without using extra heads, the maximal asymptotic rate is 1 − δ. Hence, when 1 − δ < 0.7946, using two extra heads, the asymptotic rate of our constructed code is higher than the maximal asymptotic rate of codes without extra heads.
VIII. CONCLUSIONS AND OPEN PROBLEMS
We have defined a new family of sequences and codes named constrained de Bruijn sequences and codes. This family generalizes the family of de Bruijn sequences. These newly defined sequences have constraints on the possible appearances of the same k-tuples in substrings of bounded length. As such, these sequences can also be viewed as constrained sequences, and the related codes as constrained codes. Properties and constructions of such sequences, their enumeration, and encoding and decoding for the related codes were discussed. We have demonstrated applications of these sequences for combating synchronization errors in new storage media such as the ℓ-symbol read channel and racetrack memories. The newly defined sequences raise many questions for future research, of which we outline a few. 1) Find more constructions for constrained de Bruijn codes with new parameters and with larger rates.
In particular, we are interested in (b, k)-constrained de Bruijn codes for which b is about q^t and k = c · t, where c is a small constant, and the rate of the code is greater than zero also when k goes to infinity.
2) Find better bounds (lower and upper) on the rates of constrained de Bruijn codes with various parameters. In particular, we want to find the exact rates for infinite families of parameters, where each family itself has an infinite set of parameters. 3) What is the largest number of de Bruijn sequences of order k over F_q such that the longest substring any two sequences share has length at most k + δ, where 2 ≤ δ ≤ k − 1? 4) Find more applications for constrained de Bruijn sequences and constrained de Bruijn codes.
Enablement and empowerment among patients participating in a supported osteoarthritis self-management programme – a prospective observational study
Background: In Sweden, core treatment for osteoarthritis is offered through a Supported Osteoarthritis Self-Management Programme (SOASP), combining education and exercise to provide patients with coping strategies in self-managing the disease. The aim was to study enablement and empowerment among patients with osteoarthritis in the hip and/or knee participating in a SOASP. An additional aim was to study the relation between the Swedish version of the Patient Enablement Instrument (PEI) and the Swedish Rheumatic Disease Empowerment Scale (SWE-RES-23).
Methods: Patients with osteoarthritis participating in a SOASP in primary health care were recruited consecutively from 2016 to 2018. The PEI (score range 0–12) was used to measure enablement and the SWE-RES-23 (score range 1–5) to measure empowerment. The instruments were answered before (SWE-RES-23) and after the SOASP (PEI, SWE-RES-23). A patient partner was incorporated in the study. Descriptive statistics, the Wilcoxon signed rank test, effect size (r), and Spearman's rho (rs) were used in the analysis.
Results: In total, 143 patients were included in the study, 111 (78%) were women (mean age 66, SD 9.3 years). At baseline the reported median value for the SWE-RES-23 (n = 142) was 3.6 (IQR 3.3–4.0). After the educational part of the SOASP, the reported median value was 6 (IQR 3–6.5) for the PEI (n = 109) and 3.8 (IQR 3.6–4.1) for the SWE-RES-23 (n = 108). At the three months follow-up (n = 116), the reported median value was 6 (IQR 4–7) for the PEI and 3.9 (IQR 3.6–4.2) for the SWE-RES-23. The SWE-RES-23 score increased between baseline and three months (p < 0.001). The analysis showed a positive correlation between the PEI and the SWE-RES-23 after the educational part of the SOASP (rs = 0.493, p < 0.001, n = 108) and at the follow-up at three months (rs = 0.507, p < 0.001, n = 116).
Conclusions: Patients reported moderate to high enablement and empowerment and an increase in empowerment after participating in a SOASP, which might indicate that the SOASP is useful to enable and empower patients at least in the short term. Since our results showed that the PEI and the SWE-RES-23 are only partly related, both instruments can be of use in evaluating interventions such as the SOASP.
Trial registration: ClinicalTrials.gov NCT02974036. First registered 28/11/2016, retrospectively registered.
Supplementary Information: The online version contains supplementary material available at 10.1186/s12891-022-05457-9.
Keywords: Physiotherapist, Patient education, Osteoarthritis, Patient partner, Enablement, Empowerment, Primary health care
Background
Worldwide, osteoarthritis (OA) is a common joint disease that causes pain, disability and decreased health-related quality of life [1]. Prevalence is expected to increase since the number of aging and obese people is growing [2,3] and since there is currently no cure for OA [4,5]. The global burden of this disease is huge [3] and the costs for health care are increasing [2,6]. First-line evidence-based treatment for OA is patient education, exercise and, if needed, weight loss [7-9].
The World Health Organization (WHO) recommends patient education as part of the management of all patients with chronic disease, including OA [10]. Patient education is defined as "helping patients acquire or maintain the competencies they need to manage as well as possible their lives with a chronic disease. It is an integral and continuing part of patient care. It comprises organised activities, including psychosocial support, designed to make patients aware of and informed about their disease and about health care, hospital organization and procedures, and behavior related to health and disease, so that they (and their families) understand their disease and their treatment, collaborate with each other and take responsibility for their own care as a means of maintaining or improving their quality of life" ([10], p. 17). In Sweden, patients with OA in the hip and/or knee are offered first-line treatment through a Supported Osteoarthritis Self-Management Programme (SOASP) [11,12]. Self-management can be defined as "the individual's ability to manage the symptoms, physical treatment, psychological consequences, and lifestyle changes inherent in living with a chronic condition" ([13], p. 547), and the SOASP aims to provide the patients with coping strategies and knowledge to support them in self-managing the disease [11,14]. Swedish OA patients participating in a SOASP are offered to report data in a national quality register called "Better Management of patients with Osteoarthritis" (BOA) [14]. The BOA register evaluates, for example, pain, health-related quality of life and physical activity, which supports improvement of treatment for patients with OA in the hip and/or knee [11]. Today, to our knowledge, patients' ability to cope with and self-manage their disease is not routinely evaluated after participation in a SOASP, nor is it reported in the BOA register.
The WHO has recognised that health care should make more effort to enable and empower patients with chronic disease [10,15,16]. Enablement and empowerment are closely related concepts [17-19]. As evaluated by the Patient Enablement Instrument (PEI), patient enablement is defined as patients' ability to understand and cope with their illness after a consultation in health care [20,21]. The WHO defines empowerment as "a process through which people gain greater control over decisions and actions affecting their health" and "Individual empowerment refers primarily to the individuals' ability to make decisions and have control over their personal life" ([22], p. 6). Empowerment can be measured by the Swedish Rheumatic Disease Empowerment Scale (SWE-RES-23) [23].
Taking all these aspects into account, it seems important to increase knowledge about how patients experience their ability to self-manage their OA. To our knowledge, there are no studies about enablement or empowerment in relation to patients with OA after participating in a SOASP. This study can contribute more knowledge on relevant evaluation methods. Therefore, the aim was to study enablement and empowerment among patients with OA in the hip and/or knee participating in a SOASP. An additional aim was to study the relation between the Swedish version of the PEI and the SWE-RES-23.
Design and setting
We conducted a prospective observational study, using data from patients with OA in the hip and/or knee participating in a SOASP in primary health care (PHC). The study was approved by the Regional Ethical Review Board in Lund, Sweden (2015/918). The study was reported in accordance with the STROBE checklist [24] and was registered with ClinicalTrials.gov (NCT02974036), first registered 28/11/2016, retrospectively registered.
The supported osteoarthritis self-management programme
According to existing national guidelines, patients diagnosed with OA are to be offered participation in a SOASP in relation to getting diagnosed. The programme combines education and exercise and is often provided by a physiotherapist (PT) in PHC [12]. The SOASP usually consists of two to three educational sessions once a week, providing the patients with information about OA, risk factors, symptoms, treatment, coping strategies and self-management [12]. After the educational part of the programme, patients are offered an individually adapted exercise programme that they can practice either at home or in a group training, supervised by a PT for about 6 to 8 weeks [12]. The SOASP has been described in more detail elsewhere [12].
Participants, data collection and measurements
Data was collected in PHC in two health care regions in southern Sweden: Region Skåne (five PHC centres, n = 87) and Region Blekinge (two PHC centres, n = 56), between April 2016 and June 2018. The inclusion criteria were having hip and/or knee OA, understanding Swedish and participating in the SOASP. There were no exclusion criteria. Patients participating in a SOASP were recruited consecutively and asked to participate in the study by the PT responsible for the SOASP at the PHC centre in question. All patients who were interested in participating were given written and verbal information about the study and gave their written informed consent prior to the start of the study. A flowchart of the inclusion of participants for analysis in the study is presented in Fig. 1.
Patient reported outcome measures (PROMs) were answered at baseline (SWE-RES-23), after the educational part of the SOASP and at the three months follow-up (PEI and SWE-RES-23). The Patient Enablement Instrument (PEI) (Additional file 1) was used to measure enablement [20,21,25]. The PEI was developed in the 1970s with the aim of measuring a patient's ability to understand and cope with their disease after a consultation [20,21,25]. The PEI consists of six questions relating to an introductory sentence, "As a result of your visit to the doctor today, do you feel you are…". In our study, the introductory sentence was changed to "As a result of your participation in the SOASP, do you feel you are…". Each question had four alternative answers, i.e., much better (scored 2), better (scored 1), same or less (scored 0), not applicable (scored 0), resulting in a total consultation score from 0 to 12 [20,21,25]. A higher total score indicates higher enablement [20,21,25]. There is no baseline data reported for the PEI, as the instrument is based on the patients' own perception of change in enablement after a consultation [20].
Empowerment was measured with the Swedish Rheumatic Disease Empowerment Scale (SWE-RES-23) [23] (Additional file 2), which has been developed from the Swedish Diabetes Empowerment Scale [23]. The SWE-RES-23 consists of 23 questions divided into five subscales. Questions 1 to 3 start with "In terms of how I take care of my rheumatic disease, I…", questions 4 to 7 start with "In terms of my rheumatic disease, I…", questions 8 to 11 start with "In terms of my rheumatic disease, I…", and questions 12 to 23 start with "In general, I think I…". In our study, the words "rheumatic disease" were replaced with the word "osteoarthritis". Each question is scored on a five-point Likert scale ranging from strongly disagree (1 point) to strongly agree (5 points). The total score is calculated by summing the score of each question and dividing the sum by 23, resulting in a total score from 1 to 5 points, where a higher score indicates higher empowerment [23].
Both the PEI and the SWE-RES-23 have been translated into Swedish and have been tested for reliability [23,26] and validity [23,27]. The PEI has shown high internal consistency and moderate to good reliability [26], whereas content validity, construct validity and internal consistency were fair [27]. The SWE-RES-23 has shown acceptable psychometric properties in terms of construct validity and internal consistency reliability [23].
Patient partner
To enhance the patient perspective, a patient partner (PP) from the Swedish Rheumatism Association was involved in the study process from the beginning. The PP contributed feedback on the aim of the study, the feasibility of the study approach and the PROMs used in the study, and assisted in interpreting the results. We first met with the PP face to face at a three-day network event to plan the study, discuss the aim and the feasibility of the study approach, and to practically test the PROMs, estimate the time needed to answer them and discuss their relevance in the context of the SOASP. We met once more physically when the study was ongoing to discuss preliminary results. Thereafter, we met digitally three times, and we kept in contact through email. The GRIPP-2 checklist was used when reporting the PP's involvement in the study process (Additional file 3) [28,29].
Statistical analysis
Descriptive statistics (median and interquartile range (IQR)) were used to describe the degree of enablement and empowerment patients with OA in the hip and/or knee report after the educational part of the SOASP and at three months follow-up after participating in a SOASP.
The Wilcoxon's signed rank test was used when analysing the significance of the change in the SWE-RES-23 from baseline to three months follow-up. The effect size for the change in the SWE-RES-23 from baseline to three months follow-up, based on the Wilcoxon's signed rank test, was computed according to the formula r = Z / √ N [30,31] and categorized as small (0.1), medium (0.3) or large (0.5) [32].
The Spearman's rho (r s ) was used when analysing the relation between the PEI and the SWE-RES-23 after the educational part of the SOASP and at three months follow-up. The correlation values were categorised as weak (0.1-0.3), moderate (0.3-0.5) or strong (0.5 or more) [32]. The median, IQR and non-parametric tests were used in the analysis since the PEI and SWE-RES-23 scales were treated as ordinal scales. A sample size calculation showed that to be able to detect a correlation coefficient between 0.3 to 0.5 with a power of 0.80 at a chosen significance level of 0.05, 110 participants were needed. The calculation was performed in SAS Enterprise Guide 6.1 for Windows (SAS Institute Inc., Cary, NC, USA). Data from 143 participants were collected to compensate for potential missing data. No imputation was made for missing values [33].
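For illustration only, the analyses described above can be reproduced on synthetic data with standard libraries (the data below are randomly generated, not the study's):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical paired SWE-RES-23 scores (baseline vs. three months).
baseline = rng.uniform(2.5, 4.5, 115)
followup = baseline + rng.normal(0.15, 0.4, 115)

w, p = stats.wilcoxon(baseline, followup)   # Wilcoxon signed rank test
z = stats.norm.isf(p / 2)                   # |Z| from the two-sided p-value
r = z / np.sqrt(len(baseline))              # effect size r = Z / sqrt(N)
rho, p_rho = stats.spearmanr(baseline, followup)   # Spearman's rho
print(p, r, rho)
```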
Results
In total, 143 patients agreed to participate and were included in the study, 111 (78%) were women (mean age 66, SD 9.3 years). Demographic data for the study cohort (n = 143) are presented in Table 1.
Information was collected from 143 patients at baseline, from 109 patients after the educational part of the SOASP and from 120 patients at the three months follow-up (Fig. 1). Ten patients (7%) dropped out after baseline for unknown reasons.
The Wilcoxon signed rank test revealed a statistically significant increase in empowerment from baseline to the three months follow-up, Z = -4.07, p < 0.001 (n = 115), with an effect size close to medium (r = 0.27). The analysis showed a positive correlation between the PEI and the SWE-RES-23 both after the educational part of the SOASP (rs = 0.493, p < 0.001, n = 108) (Fig. 2) and at the follow-up at three months (rs = 0.507, p < 0.001, n = 116) (Fig. 3). Both correlations were close to the cut-off point (0.5) for strong correlation.
Discussion
Our study shows that patients with OA report having moderate to high enablement and empowerment both after the educational part of the SOASP and at three months follow-up after participating in a SOASP. Moreover, there was a statistically significant increase in empowerment at three months after participation in the SOASP. In addition, the relation between the PEI and the SWE-RES-23 was close to the cut-off point for strong correlation both after the educational part of the SOASP and at three months follow-up.
To our knowledge, neither the PEI nor the SWE-RES-23 has been used before in a similar setting, i.e., patient education for OA [25]. However, a Swedish validation study of the PEI included three groups of patients with musculoskeletal disorders [27]. One group included patients with chronic pain who were referred to a multimodal rehabilitation programme [27]. The programme lasted 6 to 8 weeks and included pain education and exercise [27]. The authors analysed the median value of the PEI after the programme, which was reported to be 3 [27]. This is considerably lower than in our study, where the median PEI value at the three months follow-up was 6. Similarly low PEI scores have been reported for patients with severe pain [34] and for patients with three or more chronic diseases [35].
The patients reported moderate to high values on both the PEI and the SWE-RES-23 at both measuring points. Participants in a SOASP might be more informed and motivated per se, since they seek health care for their problems and accept to participate in an active intervention such as the SOASP. Thus, there might be a selection bias regarding which patients with OA participate in the SOASP in comparison to the total population with hip and/or knee OA in Sweden, an issue that has been raised in previous studies on patients with OA in relation to the SOASP [36,37]. This perspective was supported by the PP incorporated in the study, who also reflected that patients who seek health care might be more willing to change and to do something about their problems.
In our study, the PEI score was maintained at the same level at the follow-up, which is not in line with the results of a study by Rööst et al. [26], where the PEI decreased at follow-ups two days and two weeks after a consultation in PHC. However, the divergence between our results and those of Rööst et al. might be due to differences in the intervention, the length of the intervention, which health profession the patient consulted, and the time point of the follow-up. These thoughts were highlighted by the PP and are supported by other researchers [27,38]. In addition, there is a risk of recall bias, since the PEI is based on the patients' perception of change in enablement, which the patients might not recall when answering the questions [27,39]. Thus, it might be argued that it is not known how enabled the patient is [27].
It is not surprising that the scores from both the PEI and the SWE-RES-23 seem to be moderate to high even at the three months follow-up, since patients who have participated in supervised exercise have probably been continuously encouraged with information and reminded of coping strategies and the importance of exercising. However, comparison with other studies is challenging, since there is no global consensus on what constitutes a high value in context, either on the PEI [40] or on the SWE-RES-23. For the PEI, a total score of 6 or more has been suggested to be high [21]. In a recently published study where the SWE-RES-23 was used, a score of ≥ 4.05 was considered to be high [41]. In relation to these studies, we therefore believe that the results of our study indicate relatively high scores. Moreover, patient education itself is a patient-centered learning process, and self-management is about what patients themselves decide to do to manage their treatment and prevent complications [10]. It takes time for an individual to adapt to a new health condition [10]. Therefore, three months follow-up is a short time when it comes to a chronic disease like OA, and it would be interesting to follow the development of enablement and empowerment after participating in a SOASP in the long term. In the future, it would also be interesting to study more closely those who report lower values on the PEI and/or the SWE-RES-23, since it might be important to identify these patients as early as possible to optimise their support and care.
Our study showed that empowerment significantly increased after participating in the SOASP, which is encouraging. However, whether the increase is due to participating in the SOASP needs to be further studied, as well as whether the increase is sustainable. To our knowledge, there are no other studies evaluating empowerment in relation to the SOASP; thus, the results from our study can be used as a comparison in future studies.
The PP pointed out that it might be difficult for newly diagnosed patients to answer the PEI, and above all question number 4, i.e., "able to keep yourself healthy", directly after the educational sessions. According to national guidelines, all patients should be offered participation in a SOASP at diagnosis, but that is not always the case in practice. Therefore, some patients might have had their diagnosis for some time, sometimes many years, before participating in a SOASP. The reasons for this might vary (patients not wanting to see a PT before, patients being referred not to a PT but to a doctor, not all health care professionals following the guidelines, and so on) [42]. This delay in participation in the SOASP would affect patients' adaptation to, understanding of and coping with their illness, which would impact their PEI score and their enablement. So, patients participating in a SOASP who have had their diagnosis for a long time might already have some knowledge about coping and might answer the PEI and the SWE-RES-23 differently than a newly diagnosed patient. This is in accordance with other studies [26,27] that have raised the idea that patients might answer the PEI differently depending on how long they have had their disease. In the future, it would be interesting to study the time relationship between diagnosis and self-reported enablement and empowerment.
Our results showed that the relation between the PEI and the SWE-RES-23 was close to the cut-off point for strong correlation at both measuring points; thus, the instruments only partly measure the same entity. Therefore, we believe that this relationship needs to be further investigated. However, the results suggest that both the PEI and the SWE-RES-23 could be useful when evaluating the SOASP, which was supported by the PP incorporated in the study. One might argue that the instrument that the patients find most relevant and valuable should be the one used for evaluation. However, the PP thought that the large amount of data collected in this study shows that it is feasible to use both instruments, i.e., patients seem to think that it is acceptable to answer them both.
There seems to be some confusion about the concepts of enablement and empowerment in the literature, and the concepts are sometimes used interchangeably [17-19, 39, 43]. This makes comparison between different studies challenging. Enablement occurs after an intervention or consultation in health care [21,25,44,45], while empowerment can be achieved both through education and by oneself [46]. Today, enablement and empowerment are not routinely evaluated in relation to the SOASP, and the PEI and/or the SWE-RES-23 could possibly be used both in the clinic and in the BOA register in the future to ensure evaluation of these relevant outcomes. However, more research is needed before it can be concluded which of the two outcomes is the most relevant to measure in this context.
Strengths and weaknesses
A strength of the study was that data was collected by PTs accustomed to collecting PROMs in connection with the SOASP, which might explain the high response rate and the small amount of missing data. Another strength was that a PP was included in the study process from the planning phase to the interpretation of the results. In the planning phase, the PP gave feedback on the PEI and the SWE-RES-23 regarding the feasibility of answering them after participating in a SOASP and estimated the time needed to answer them. The PP also provided valuable input regarding the interpretation of the results and clinical implications. The results and implications were validated by the PP, who also added new perspectives based on experiential knowledge of living with OA. Moreover, the PP gave valuable suggestions for future research. Engaging a PP in research was not common in Sweden when this study was planned (2015), and not many PPs with adequate education were available at the time. In future studies, we hope to incorporate more than one PP, since we believe that this would enhance the research process considerably.
There are some limitations to our study. The results are difficult to compare with other studies for several reasons. We analysed the median value, since both the PEI and the SWE-RES-23 can be considered Likert scales and thus ordinal data. Other studies have analysed the PEI and the SWE-RES-23 using mean values [23, 25, 40, 47], and there are not many studies using the SWE-RES-23 [23]. In addition, PEI score outcomes vary between countries [25, 48-50], which makes comparison between studies challenging. In our study, we used the SWE-RES-23 to measure empowerment. Developed in 2012 and thus a relatively new instrument, it has not been much used or studied. Therefore, one might argue that we should have used another instrument to measure empowerment. However, the SWE-RES-23 was developed for rheumatic disease and was tested by patients with OA during its development, which we considered valuable when planning the study. In this study, we compared a generic instrument, the PEI, to a disease-specific instrument, the SWE-RES-23 (rheumatic diseases). Generic instruments are developed for measurements in a broad range of populations with or without chronic illness, while disease-specific instruments are designed for measuring concerns relevant to a particular disease [51]. Unfortunately, as this was a clinic-based study conducted in routine care, we have no information about the reasons for dropouts. However, since distributing the questionnaires after the educational part of the SOASP was added to the clinical routine, we believe that some PTs might have forgotten to do so. Moreover, no control group was included, and causal relationships cannot be assessed in our observational study. These limitations are something to keep in mind when interpreting the results.
Implications
Even though the main objective of the SOASP is to support patients' ability to cope with and self-manage their disease, this is not routinely evaluated after participation in a SOASP today. We find it important to evaluate patient enablement and empowerment after participation in a SOASP, and therefore we suggest using the PEI and/or the SWE-RES-23 together with the PROMs that are currently used.
We believe that including a PP in the study process, from the planning phase to the interpretation of the results, enhances the constructive learning experience that health care professionals and researchers draw from the study, and we highly recommend that other researchers incorporate a PP in their studies.
Conclusions
Patients reported moderate to high enablement and empowerment and an increase in empowerment after participating in a SOASP, which might indicate that the SOASP is useful for enabling and empowering patients with OA in the hip and/or knee, at least in the short term. Since our results showed that the PEI and the SWE-RES-23 are only partly related, we believe that both instruments can be of use in evaluating interventions such as the SOASP, depending on the outcome of interest.
NOVEL TOPICAL, REGIONAL & TRANSDERMAL DOSAGE FORMS USING NANOTECHNOLOGY: A REVIEW
Purpose: To summarize the main findings on topical or transdermal preparations delivered to the skin or eye region using novel nanotechnology. Method: Narrative review of relevant papers from peer-reviewed, high-impact journals. Results: The delivery systems of pharmaceutical preparations for the skin and eyes have now been elevated to novel aerosolized gels, foams, sprays and other forms delivered using nano-pharmaceutical dynamics such as Ag nanoparticles, impregnated urethanes, liposomes, nanoemulsions, polymers and nanopolymers. These trends provide elevation beyond the benefits and limitations of conventional systems, such as enhanced spread and permeation, better bioavailability, prolonged efficacy, reduced toxicity and superior stability. Conclusion: An improved delivery system was made possible by novel discoveries using nanotechnology. More and more studies in pharmaceutics are centered on using these technological dynamics to formulate and reformulate drugs that are delivered topically or transdermally.
The skin is considered one of the most complex organs in the body. Characterizing the skin must take into consideration its specialized cells and multi-functional structures. One character the skin possesses is its protective function as a primary defense against mechanical, chemical or microbiological invasion. Its morphology is one of the reasons why dosage forms applied to this organ may be delivered in various ways: topical, regional or transdermal. With these modes arise challenges of dermatological and pharmaceutical maneuvering of products, such that the active ingredients as well as the excipients are delivered to the target site of action (Pando et al., 2013).
Topical application is delivery of the active ingredient directly onto the surface of the affected skin. This can take various forms, some semi-solid and some liquid in consistency (Pando 2013; Chuo 2009). Some are delivered to a local area or a deeper region; these are considered regional applications, such as in the case of muscle or joint pain. There are emollients, gels, liniments and poultices with essential oils and other active anti-inflammatory or analgesic agents, and even counter-irritants, contained in patches, creams and lotions to provide localized treatment (Crommelin et al., 2003; Garg et al., 2013). The more complex drug skin delivery is via transdermal application, whereby the active component permeates through the dermal layer of the skin, avoiding the first-pass effect. All these various dosage forms applied through the skin face a variety of challenges from pharmaceutical design to bioavailability. Topical dosage forms, for example, need capsule enclosures of the active ingredient for better bioavailability (Andersson et al., 2013). Others have to have a suitable base in the formulation that will aid in dissolution, dispersion and delivery (Date et al., 2006), as in the case of ointment bases with hydrocarbon matrices (Huckzo et al., 1999; Parnami et al., 2013). Accelerants and absorption promoters are essential adjuncts, especially when penetration, sensitivity and phase separation are considered (Jain et al., 2015; Ting et al., 2004; Walfg 2005). Globule size and rheological and even thermal stability are important factors in the design and reformulation of topical and transdermal products (Patel et al., 2012; Kumar et al., 2012; Tanwar 2012). With these considerations and more, advancements have been rapid and massive, to the extent that globule size, penetrability and stability of topical agents are no longer such an issue, owing to the availability of current approaches to optimizing the delivery of active components to the site of action in different delivery forms. Technologies like nano-deliveries (Bala et al., 2004; Karen K et al., 2004), enhancing agents like polymers such as sodium hyaluronate (John VT et al., 2002; Castelvetro et al., 2004), liposomal agents in spheronized forms, and mechanically aided delivery using novel vehicles such as gel formulations, aerosolized forms, nano-emulsions, thermal energy, ultrasonography or iontophoresis are some of the latest advances elevating the conventional modes of delivery.
An example of cutting-edge technology in topical and transdermal applications is silver-based formulations using novel delivery and nanotechnology. Initially there was a clamor for silver as a good antibacterial agent, and it was thus incorporated into topical preparations; however, the conventional way of just mixing silver into regular ointment and cream bases yielded no significant result in wound healing. After a decade, the shift to employing a novel technology to deliver silver to the target site of action proved crucial, and pharmaceutical maneuvers paved the way to making novel silver delivery a breakthrough.
In the case of diabetes, complications include the development of foot ulcers, which are effects of poor blood perfusion, injury or deformity. Wound care and healing are then essential but are made extra difficult by the elevated blood sugar condition, which slows the healing process and exposes the patient to microbial contamination. Hence, three controlled clinical studies were made on topical solutions and topical foams, where findings showed that silver-containing foams and solutions did not promote a faster healing process after a month of follow-up (Vermeulen et al., 2007; Bergin et al., 2006), though one of the clinical studies reported that the size of the ulcer was decreased with the silver-containing preparation (Vermeulen et al., 2007). In the case of burn patients, it is thought that topical silver application may help hasten the healing process and retard further infection. But a review of twenty (20) trials involving approximately 2000 participants compared dressings with and without silver, and the results showed that there is not enough evidence to support the addition of silver in dressings and creams for burn treatment (Versloot et al., 2010).
Topically applied dermal and transdermal delivery systems find their advantages in skipping the first-pass effect, degradation and frequent dosing. This paper reviews current novel topical and transdermal delivery of active components utilizing novel drug delivery systems and nanotechnology.
Methods:-
The review was conducted using journal databases in PubMed, Elsevier, ScienceDirect, ResearchGate and MDPI. Search strategies with and without limitations, such as Boolean operators (with, and, or) and year of publication, were considered.
Keywords such as Novel, Novel Nanoparticles, Novel Nanotechnology and Pharmaceutical Nanotechnology were used, plus the qualifiers topical, transdermal and regional delivery system. The narrative review criteria were: (1) articles on novel delivery systems focused on topical and transdermal application; (2) topical and transdermal application in the regions of the skin and eye only; (3) novel dosage deliveries that are products of recent nanotechnologies and development.
Novel Pharmaceutical Nano-Technology Maneuvers: Novel Delivery of Silver (Ag) using Gel or Aerosolized Foams

Aerosol foams are usually known for their spreadability, texture and higher bioavailability advantages. The latest pharmaceutical maneuver is elevated in such a way that anti-microbials are delivered in gels and foams using an impregnation mechanism of active ingredients in polymers and silicones.
Ag in a gel formulation time-kill kinetics
With the aforementioned reviews being inconclusive on the benefits of silver in wound healing, the new era in drug formulation of topical products paved the way for the use of novel technological designs, such as the time-kill kinetics of a novel silver-containing gel formulation for post-surgical wounds, diabetic wounds and other skin ulcers. The gel form should be administered within a two (2) week period to avoid toxicity and was found to be highly effective in the treatment of infection. An anti-microbial wound gel (CelaCare Technologies) containing a low dose of a proprietary silver salt combined with acemannan demonstrated immense healing of ulcers and skin infections (Lee et al., 2015).
Ag impregnated polyurethane
Recent advances include silver chloride impregnated in a novel hydrophilic polyurethane (PU) foam. The elution of silver over 168 hours in a simulated wound, and the effect of the foam against opportunistic pathogens such as Staphylococcus aureus (MSSA), Acinetobacter baumannii, Candida albicans, and antibiotic-resistant strains (methicillin-resistant S. aureus [MRSA] and vancomycin-resistant Enterococci [VRE]), resulted in a high zone of inhibition of 16 mm after a day and a reduced viability by four logs in the direct kill assay. Overall, the technology of delivering silver as a salt and integrating it into the polyurethane yields an effective wound dressing arresting highly opportunistic pathogens that invade and deter the healing of wounds. A logarithmic kill value of six (6) against S. aureus and E. coli within a 16-hour span contributes to the array of clinical management options for wound infection (Percival 2018).
Ag containing foam in soft silicone technology
Evidence has grown that silver really does possess anti-microbial activity, provided that there is a mechanism to carry the component to the site of action. Another notable technology which makes silver an adjunct in the treatment of skin infections caused by micro-organisms is the Safetac technology. This employs soft silicone mixed with a silver-impregnated foam dressing developed by Mölnlycke Health Care. This novel mechanism of wound management provides a fast and sustained anti-microbial effect against common opportunistic skin pathogens. This has extended the utilization of the Ag-impregnated soft silicone foam technology to the treatment of acute and long-term burn wounds, severe diabetic ulcers and other forms of chronic and cancerous wounds (Davies et al., 2017).
Liposomes
Controlling the delivery of pharmaceutical actives, as well as optimizing the topical and transdermal effect on the different layers of the skin, are factors to consider in ensuring that skin preparations address injuries, wounds and infection. Conventional preparations may have the actives applied on the surface, but the lipid bilayer can be a challenge for perfusion and action. Thus, liposomes are employed to address this, as they create spherical vesicles with an amphiphilic environment (Schafer-Korting et al., 1994; Brisaert 2001).
Diclofenac delivery by transcutol containing liposomes
Novel delivery using a liposomal mechanism is achieved via penetration enhancer-containing vesicles (PEVs). Minoxidil's application for hair growth is renowned; however, regional delivery to the thick scalp layers poses a challenge in ensuring that minoxidil will elicit its hair-growth function. In recent research, a new liposomal mechanism employs PEVs using a blend of the penetration enhancers cineole, Labrasol® (caprylocaproyl macrogol-8 glyceride) and Transcutol® (2-(2-ethoxyethoxy)ethanol) with soy lecithin. In-vitro diffusion research showed that the PEV dermal delivery provided effective penetration into the skin layers with the novel enhancers compared to vesicles without them. Fourier Transform Infra-red Spectrophotometry (FTIR) spectra of the vesicle formulation give clear evidence that the liposomal penetration enhancers are efficiently resuspended and diffused in the layers of the skin (Mura et al., 2013). In another study, Labrasol improved minoxidil delivery in the cutaneous layer of the scalp, as evidenced by accumulation in the area (Caddeo et al., 2012), while Transcutol PEVs loaded at 5-10% resulted in better fluidity than regular liposomes. What is important to note is that the zeta potential showed better stability for minoxidil (Mura et al., 2011).
Nanoemulsions
The morphology of the skin, especially its selective permeability, makes pre-formulation and formulation studies difficult, particularly in securing that the active ingredient is well dispersed in either water-in-oil or oil-in-water media. The novelty of nano-emulsions lies in ensuring an ultra-small droplet mix where the thermodynamics is controlled in a special surfactant-assisted manufacturing process. While stability is a major concern in traditional emulsions, a nanoemulsion's stability is established through its nano droplets. Another unique feature of nanoemulsions in skin delivery is that they do not cause clogging of pores but instead provide benefits like making the skin surface firm and elastic (Yilmaz 2006).
Besifloxacin-Loaded Ocular Nanoemulsion
One of the trends in nanoemulsions involves a simple yet highly efficient emulsification at relatively low energy, using Cremophor and Transcutol as surfactant and co-surfactant, respectively. The besifloxacin-loaded nanoemulsion showed comparable efficacy with the conventional product, but it is notable that the besifloxacin-loaded NE shows higher permeation and does not cause corneal tissue damage (Kassaee 2021).
Acyclovir Nanoemulsion

Viral infections of the skin and eyes are usually treated with acyclovir. The preparations range from topical to region-based, like ophthalmic or otic preparations. Drug efficacy has been an issue given concerns about solubility, absorption in the superficial layers of the skin, and isotonicity of solutions. Difficulties associated with ophthalmic delivery of acyclovir were addressed with nanoemulsion technology, whereby acyclovir pre-formulation solubility testing using Tween 20, Triacetin and Transcutol® as surfactants and co-surfactants provided the loading media for the acyclovir nanoemulsion. Acyclovir penetration from the nanoemulsion in excised bovine cornea showed sustained drug release and action, while the chorioallantoic membrane test on hens' eggs as well as the Draize test showed that ocular administration of the nanoemulsion is safe (Mohammadi et al., 2021). A similar result, with a slight difference in process, is the promise of better acyclovir delivery and action as a nanoemulsion: an acyclovir-loaded albumin formulation prepared by the desolvation method, without the aldehyde glutaraldehyde, yielded better permeation through transcorneal human epithelial tissue (Suwannoi et al., 2017).
Fluconazole pH triggered Nanoemulsion
Skin and eye fungal infections are a primary concern to medical practitioners because fungal infections are difficult to treat, owing to the structure of the fungal cell wall, which makes it impenetrable, and because recurrence of infection happens if treatment is not thorough. Such concerns are believed to be addressed by nanoemulsion gels. A study developed fluconazole as a novel nano-emulsified in-situ gel whose release and action are triggered by pH. The formulation includes emulsification with a Capmul oil phase, a Tween surfactant and the usual Transcutol secondary surfactant. A polymer solution such as Carbomer 934 created a sol which served as the nanoemulsified medium for loading the active ingredient fluconazole. The research provided a good foundation for further work, with results showing intensive permeation of the fluconazole pH-triggered nanoemulsion, a long residence time of the emulsion in the cornea and a sustained action for a designated time (Pathak et al., 2013). The use of Carbopol, a powerful polymer with biological adhesive properties, is primarily for increasing the contact time of the active ingredient with the eye. The gel-core carbosome formulation showed a nano-size of 339 +/- 5.50 nanometers and an entrapment efficiency of 62%. Compared to a regular fluconazole suspension, the residence time of the nano formulation of fluconazole was optimized at 18 hours, which is deemed promising for the ophthalmic delivery of anti-fungal agents (Moustafa et al., 2018). As in the case of a similar active ingredient integrated in a nanolipoemulsion, a prolonged effect was seen when fluconazole was integrated with hyaluronic acid; the hydrogel and liposome showed more prolonged and sustained corneal permeability than the conventional preparation (Moustafa et al., 2017). In a fungal keratitis case, a niosomal gel and nanoemulsion using a Span surfactant, cholesterol and 1% Carbopol had better ocular bioavailability than solutions, and the niosomes outperformed in permeation (Soliman et al., 2019).
Polymers and Nanopolymers
Polymers, though macromolecules, present a versatile feature that is widely used in skin formulation industries. Microcells entrapped in an acrylic acid polymer-water complex allow ease of application and faster yet prolonged release with good product stability. Moreover, polymer gels, when incorporated with other compounds such as moisturizers and other emollients, may have enhanced dermatologic benefits. Such exists in anti-pimple actives like benzoyl peroxide, which now has high efficacy due to its improved gel-based formula (Rouse et al., 2007). PEGylated nanopolymers are the currently marketed nanostructures with an array of applications (Farjadian et al., 2018).
Anti-microbial Peptide Loaded in Biodegradable Nanopolymer
The wound healing process is a time-series process. Any delay in wound healing exposes an individual to further microbial contamination by opportunistic pathogens. Hence, a formulation with sustained wound-healing action and good bioavailability could arrest contamination. A novel poly(lactic-co-glycolic acid) (PLGA) nanopolymer system was entrapped with growth factors and anti-bacterial polymers via a solvent diffusion method. Using carbodiimide chemistry, the delivery system was optimized. This resulted in high-efficiency penetration, enhanced angiogenesis and a high degree of anti-bacterial action against gram-positive and gram-negative strains (Vijayan et al., 2019).
Polymeric Nano-Particle Delivery for Dermatologic Problems
Permeation enhancement, both active and passive, is necessary in skin conditions such as psoriasis, contact dermatitis and the like. Natural polymers like chitosan in nanoparticle delivery are enriched by the presence of synthetic and degradable aliphatic polymers. Nanospheres are created from these polyesters or polyacrylates derived from metabolites, such as the tyrosine-derived nanospheres. They act as nanocarriers, so delivery of the active ingredients through selectively permeable skin is enhanced at an ultra-nanosize of 40 nm, deep into the layers of the epidermis (Zhang et al., 2013).

Polymeric Micelle Nanocarrier of Tacrolimus

The conventional delivery of topical solutions and creams in skin conditions such as atopic dermatitis and psoriasis has been extremely difficult due to poor cutaneous absorption, thereby affecting bioavailability as a whole. The biocompatible methoxy-poly(ethylene glycol)-dihexyl-substituted polylactide (MPEG-dihexPLA) diblock copolymer creates a micellar biodegradable formulation of about 1% that was kept stable for 7 months. The micelles, when analyzed using UHPLC-MS/MS, showed that tacrolimus had better deposition than ointments, while bioavailability profiling showed tacrolimus distribution reaching 400 µm deep into the tissue. This validates the improvement in the degree of efficacy tacrolimus may have due to enhanced action and deeper cutaneous penetration of the micellar polymer-loaded drug.
Micellar Nanocarriers of Ciclosporin A
Ciclosporin A management of plaque psoriasis presents difficulty in the treatment process due to the systemic nephro- and hepato-toxicity ciclosporin presents. Transdermal permeation is a good way to avoid those toxicities. The MPEG-dihexPLA diblock copolymer plus the active ciclosporin A creates a micellar formulation which was visualized by confocal microscopy and statistically shown to deliver spherical nanometer-scale micelles deep into human skin. The increased solubility is key, with a 518-fold increase in solubility rate. Cutaneous delivery was enhanced, and images of micelles in corneocytes and inter-corneal regions are evidence of penetration to the stratal layers (Lapteva et al., 2014a,b).
Micelle Formulation and Azole Anti-fungals
Anti-fungal azole preparations such as clotrimazole, fluconazole and econazole can now potentially be delivered while bypassing systemic toxicity effects. A novel amphiphilic preparation uses MPEG-hexPLA block copolymers developed into a micelle solution. Using fluorescein dye, the penetration pathways and micellar distribution showed good micellar loading, as visualized by confocal scanning microscopy. Overall, MPEG-dihexPLA micelles showed superior cutaneous drug bioavailability and clinical efficacy, even in hair follicles, compared to branded conventional preparations (Bachhav et al., 2011).
Conclusion:-
Nanotechnologies have truly addressed the pharmacokinetic and pharmacodynamic challenges of the past. Reformulation of our conventional treatments for various skin and eye infections has been elevated into better products by maneuvering the formulation via impregnation in nanocarriers, use of nanoparticles, embedment in nanoemulsions, and trapping actives in gel polymers, micellar spheres and other nano-technological dynamics. These are great innovations that increase drug efficacy by enhancing absorption, targeting regional sites of action, prolonging action and stabilizing preparations.
MoSE: Modality Split and Ensemble for Multimodal Knowledge Graph Completion
Multimodal knowledge graph completion (MKGC) aims to predict missing entities in MKGs. Previous works usually share a relation representation across modalities. This results in mutual interference between modalities during training, since for a pair of entities, the relation from one modality probably contradicts that from another modality. Furthermore, making a unified prediction based on the shared relation representation treats the input in different modalities equally, while their importance to the MKGC task should differ. In this paper, we propose MoSE, a Modality Split representation learning and Ensemble inference framework for MKGC. Specifically, in the training phase, we learn modality-split relation embeddings for each modality instead of a single modality-shared one, which alleviates the modality interference. Based on these embeddings, in the inference phase, we first make modality-split predictions and then exploit various ensemble methods to combine the predictions with different weights, which models the modality importance dynamically. Experimental results on three KG datasets show that MoSE outperforms state-of-the-art MKGC methods. Codes are available at https://github.com/OreOZhao/MoSE4MKGC.
Introduction
Multimodal knowledge graphs (MKGs) organize multimodal facts in the form of entities and relations, and have been successfully applied to various knowledge-driven tasks (Marino et al., 2019; Zhang et al., 2018; Zhu et al., 2022). To address the inherent incompleteness problems in MKGs, multimodal knowledge graph completion (MKGC) has been proposed (Xie et al., 2016, 2017), which utilizes auxiliary visual or text information to help predict missing entities. As shown in Figure 1a, given the head entity Friends and the relation country_of_origin, the task is to predict the tail entity The United States of America. It can be observed that the descriptions attached to entities provide supplementary information for entity prediction.
Existing MKGC methods usually share a common relation embedding across all modalities for a pair of entities, which tightly couples multiple relations from different modalities. We define this paradigm of MKGC as Tight-Coupling Relation (TCR). As shown in Figure 1b, according to the way that the relations from different modalities are coupled, existing methods can be roughly divided into two categories: Implicit TCR (I-TCR) methods and Explicit TCR (E-TCR) methods. I-TCR methods (Wang et al., 2019, 2021) usually first fuse the multimodal information of an entity into a single embedding, and then learn a unified relation representation based on that embedding. E-TCR methods (Mousselly-Sergieh et al., 2018; Xie et al., 2016, 2017) directly model the relationship between the separate multimodal information of entities without fusion. They usually learn a single relation embedding to simultaneously represent all intra-modal and inter-modal relations. Although existing MKGC methods have achieved promising results, they are limited by TCR in two ways: (1) Modality relation contradiction. TCR usually represents multiple relations from different modalities simultaneously with only a single embedding. However, for a pair of entities, the relation from one modality probably contradicts that from another modality. For example, as shown in Figure 1a, the description "American sitcom" of the entity Friends mentions the entity The United States of America, while the images do not. The inherent contradiction of TCR results in modality interference during representation learning in MKGs. (2) Modality difference ignorance. Based on TCR, existing methods usually treat the input in different modalities equally and make a unified prediction, which ignores the difference in modality importance. However, different modalities vary in data quality and entity coverage, and should contribute to the final prediction in varying degrees.
To overcome the above limitations, we propose a Modality Split learning and Ensemble inference framework, MoSE. As shown in Figure 1b, in the training phase, MoSE decouples the TCR and learns multiple modality-split relation embeddings instead of a single modality-shared one, which alleviates mutual interference between modalities. In the inference phase, MoSE first makes predictions for each modality separately based on the modality-split embeddings, and then merges them into the final prediction. We explore the best combination of modality predictions with various ensemble methods, and model modality importance by modulating the modality weights dynamically. Experimental results and analysis on three widely used datasets show that MoSE outperforms state-of-the-art methods for the MKGC task.
Overall, the contributions of this paper can be summarized as follows:

• To the best of our knowledge, we are the first to deal with the modality contradiction of relation representation and to discuss modality importance in the MKGC task.
• We propose a modality-split learning and ensemble inference framework, MoSE, for MKGC, which decouples the tight-coupling relation embedding into modality-split ones in the training phase and modulates modality importance adaptively in the inference phase.
• Experimental results on three datasets demonstrate that MoSE outperforms 9 baselines and obtains state-of-the-art performance in the MKGC task. The results also show that the text modality, rather than the visual modality, is a useful complement for MKGC.
Related Work
Existing research on MKGC mainly focuses on extending unimodal knowledge graph embedding (KGE) models to further exploit multimodal information. We notice that for a pair of entities in an MKG, existing multimodal KGE methods all exploit a modality-shared relation embedding which tightly couples multiple relations from different modalities, which we call Tight-Coupling Relation (TCR). We divide existing methods into two categories: implicit tight-coupling relation (I-TCR) methods and explicit tight-coupling relation (E-TCR) methods.
Implicit TCR Methods
I-TCR methods (Wang et al., 2019, 2021) fuse multiple modalities into a unified entity embedding and utilize a shared relation representation, as shown in Figure 1b. Thus the learned relation implicitly fuses multimodal relations. TransAE (Wang et al., 2019) extends TransE with an auto-encoder fusing visual and text information into the entity representation. Recently, RSME (Wang et al., 2021) noticed the noise in the visual modality, proposed a forget gate to adjust the fusion rate of images into entity embeddings, and reached state-of-the-art (SOTA) performance. Though I-TCR methods show promising improvements, they neglect modality contradictions in the modality-shared relation representation. Moreover, they make unified predictions without assessing whether the modality information is relevant to the final predictions. In the SOTA RSME, the fusion ratio of visual information is determined by the image information itself, i.e., its similarity, regardless of the modality's importance to the final prediction.

Explicit TCR Methods

E-TCR methods (Mousselly-Sergieh et al., 2018; Xie et al., 2016, 2017) utilize a shared relation embedding which tightly couples multiple relations between intra-modal and inter-modal entities. E-TCR methods learn representations with an overall score across all modalities: structure-structure, structure-visual/text, visual/text-structure, visual/text-visual/text, all connected by a single modality-shared relation embedding, as shown in Figure 1b, which explicitly tightly couples multiple relations. DKRL (Xie et al., 2016) and IKRL (Xie et al., 2017) extend TransE with the text and visual modalities respectively. MKB (Mousselly-Sergieh et al., 2018) extends IKRL (Xie et al., 2017) from the visual modality to visual-text multimodality. Although E-TCR methods project multimodal features into a common latent space, the inherent semantic contradiction of relations between different modalities is not eliminated. Moreover, they utilize a weighted concatenation of multimodal entities to make a unified prediction and do not consider modality importance either.
Preliminaries
In this section, we introduce the notation used in this paper and formulate the MKGC task. KGC task. A knowledge graph is a collection of factual triples G = {(h, r, t)}, where the head and tail entities h, t ∈ E and the relation r ∈ R. A KGE model (1) represents entities and relations as vectors h, r, t, and (2) utilizes a score function f(h, r, t): E × R × E → R to decode the plausibility of a triple into a score. For a particular query q = (h, r, ?), the KGC task aims at ranking all possible entities and obtaining a prediction preference.
MKGC task. In MKGs, each entity e ∈ E has multimodal embeddings e_m, m ∈ M = {S, V, T}, which denote the structure, visual and text modalities respectively. We use e_s, e_v, e_t to denote the corresponding entity embeddings, where the visual and text embeddings are projections of features extracted by pretrained encoders. TCR methods. I-TCR methods design a fusion mechanism Φ({e_m}), m ∈ M, to get a fused embedding of multimodal entities and extend the score function as f(Φ({h_m}), r, Φ({t_m})); E-TCR methods instead score every intra-modal and inter-modal entity pair f(h_m, r, t_m'), m, m' ∈ M, with the same relation embedding. It is worth noting that both ways utilize a modality-shared relation representation.
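As an illustration (not part of the original formulation), the following is a minimal PyTorch sketch of how a KGE score function ranks candidate tails for a query (h, r, ?). It uses the ComplEx form Re(⟨h, r, conj(t)⟩) of Trouillon et al. (2016), the decoder adopted later in this paper; the dimensions and tensor names are illustrative only.

```python
import torch

def complex_score(h, r, t):
    """ComplEx score Re(<h, r, conj(t)>) for complex-valued embeddings.
    h, r: (d,) query embeddings; t: (num_entities, d) candidate tails."""
    return torch.real(torch.sum(h * r * torch.conj(t), dim=-1))

# Rank every entity as a candidate tail for a query (h, r, ?):
# a higher score means a more plausible triple.
d, num_entities = 128, 10000
h = torch.randn(d, dtype=torch.cfloat)
r = torch.randn(d, dtype=torch.cfloat)
candidates = torch.randn(num_entities, d, dtype=torch.cfloat)
scores = complex_score(h, r, candidates)              # (num_entities,)
preference = torch.argsort(scores, descending=True)  # prediction preference
```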
Overview
Figure 2 shows our Modality Split learning and Ensemble inference framework, MoSE, for the multimodal knowledge graph completion task. We first decouple the TCR into modality-split relation embeddings corresponding to each modality. With the decoupled TCR, we construct modality-split triple representations for each modality to prevent modality interference in the representations. Through the KGE score function, the modality-split representations are decoded into corresponding score distributions. In the training phase, we train the modality-split entity and relation representations with intra-modal scores simultaneously. Considering that the visual and text modalities usually embody more contradictory and uncertain noise than the structure modality, we apply confidence-constrained training objectives for these two modalities. In the inference phase, we exploit ensemble inference to combine the modality-split predictions and obtain the final predictions. We explore three kinds of ensemble inference methods aimed at modeling modality importance.
Modality-Split MKG Construction
In our paper, we assume that the TCR embedding used in existing methods represents multiple contradictory relations simultaneously and results in modality interference. Thus we propose to decouple the TCR and construct a modality-split MKG. By decoupling the contradictory relations from a single modality-shared embedding into multiple modality-split relation embeddings, MoSE alleviates modality interference in relation representations. Formally, we construct decoupled modality-split relation embeddings for each relation type. In this paper, we construct structure, visual and text modality relation embeddings r_s, r_v and r_t for each relation r. Since the relation representation is decoupled, we also avoid modality fusion in entities, which prevents interference within entity representations. Together with the modality-split entities and relations, we form a modality-split KG consisting of multiple unimodal KGs which have identical topology but different entity and relation representations.
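A minimal sketch of the modality-split parameterization described above: one entity table and one relation table per modality, so the three unimodal KGs share topology but not parameters. The sizes are illustrative (FB15K-237-like), and in MoSE the visual and text entity embeddings are projected from pretrained encoder features rather than learned from scratch.

```python
import torch.nn as nn

# Illustrative sizes only (FB15K-237-like).
num_entities, num_relations, dim = 14541, 237, 256
modalities = ("S", "V", "T")  # structure, visual, text

# One entity table and one relation table per modality: the three unimodal
# KGs share the same topology but have independent parameters, i.e. the
# decoupled relation embeddings r_s, r_v, r_t instead of a shared TCR one.
entity_emb = nn.ModuleDict(
    {m: nn.Embedding(num_entities, dim) for m in modalities})
relation_emb = nn.ModuleDict(
    {m: nn.Embedding(num_relations, dim) for m in modalities})
```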
KGC Decoder
With the modality-split KG construction, the score function is also separated into multiple scores decoded per modality, f_m(h, r, t) for m ∈ M. With this modality-split architecture, MoSE is able to present a score distribution for each modality. For each query triple (h, r, t), the decoder gives different scores depending on the learned representations, which intuitively reflects the strengths and limitations of each modality for entity prediction.
Training
We utilize the multi-class cross-entropy (CE) loss for training, following Lacroix et al. (2018). Given a query (h, r, ?), we construct corrupted triples by replacing the tail entity with randomly selected entities in E. We also construct a reverse triple (?, r^{-1}, h) for each triple in the training set and apply the same setting. For all triples, the KGC decoder provides the corresponding probability of truth p_m(t|(h, r)) = softmax(f_m(h, r, t)), computed with a softmax applied to the output of the score function. We denote p_m(t|(h, r)) as the probability obtained in modality m. The CE loss of modality m can be calculated as Equation (1):

L_m = -log p_m(t|(h, r)).    (1)

Confidence-constraint Training. Visual and text modalities usually embody contradictory information due to data complexity and diversity. We notice that the contradiction lies in the fact that the modality information of an entity is not always relevant to the knowledge of the factual triple. In consequence, the visual and text modalities usually present uncertainty. To ease this uncertainty, we train the visual and text modality KGC in a confidence-constrained manner with a temperature-scaling technique (Guo et al., 2017). Since the predicted probability can approximately represent the confidence score of predictions, we simply compact the probability distribution by adding a temperature parameter T to the output of the KGC decoder as in Equation (2) and extend the CE loss to a confidence-constrained form as in Equation (3):

p_m^T(t|(h, r)) = softmax(f_m(h, r, t) / T),    (2)

L_m^cc = -log p_m^T(t|(h, r)).    (3)

In this way, the confidence of predictions is constrained and the distribution is softened while the prediction results remain the same. Formally, we denote the confidence-constraint losses for the visual and text modalities as L_v^cc and L_t^cc respectively.
Overall Objective. In the training phase, we simultaneously train the three modalities to learn intra-modal representations separately under the overall objective L_KGC. The overall KGC objective is the sum of the modality losses, as in Equation (4):

L_KGC = L_s + L_v^cc + L_t^cc.    (4)
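A minimal PyTorch sketch of the training objective in Equations (1)-(4): plain cross-entropy for the structure modality and the temperature-scaled, confidence-constrained variant for the visual and text modalities. The temperature value, batch size, and score tensors are illustrative placeholders, not the paper's actual hyperparameters.

```python
import torch
import torch.nn.functional as F

def modality_ce_loss(scores, true_tail, temperature=1.0):
    """scores: (batch, num_entities) raw decoder outputs f_m(h, r, .);
    true_tail: (batch,) gold tail indices. T > 1 softens the softmax
    distribution, constraining prediction confidence (Eq. 2-3)."""
    return F.cross_entropy(scores / temperature, true_tail)

# Illustrative placeholder scores; in MoSE they come from the per-modality
# KGC decoders over the modality-split embeddings.
batch, num_entities, T = 32, 10000, 4.0
scores_s = torch.randn(batch, num_entities)
scores_v = torch.randn(batch, num_entities)
scores_t = torch.randn(batch, num_entities)
tails = torch.randint(0, num_entities, (batch,))

# Overall objective (Eq. 4): L_KGC = L_s + L_v^cc + L_t^cc.
loss = (modality_ce_loss(scores_s, tails)
        + modality_ce_loss(scores_v, tails, temperature=T)
        + modality_ce_loss(scores_t, tails, temperature=T))
```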
Inference
In the inference phase, we explore mechanisms for combining the modality predictions by modeling modality weights. Modalities have strengths and limitations due to data quality and entity coverage, and these are often complementary to each other. Appropriately adjusting the modality weights to fully exploit the complementary strengths leads to better prediction performance.
Ensemble Inference. Inspired by Chen et al. (2020), we exploit ensemble inference to obtain the final predictions. We propose to directly combine scores instead of ranks, since information from the score distributions may get lost during the ranking process. For each query, we obtain three score distributions f_m(h, r, t), m ∈ M = {S, V, T}, from the three modalities, which directly reflect the strengths and limitations of each modality for entity prediction. The scores are combined as Equation (5):

f(h, r, t) = Σ_{m ∈ M} w_m · f_m(h, r, t).    (5)
Next, we propose three variants of MoSE: MoSE-AI, MoSE-BI and MoSE-MI, which vary in how the modality weight w_m is calculated. We utilize a small amount of unbiased meta-set data to learn modality weights that transfer well to the test set. We choose the validation set as the meta-set.
Equal-importance Average Inference. We utilize equal modality weights, without considering modality importance, as the baseline MoSE-AI. For all modalities, we average the scores to obtain the final prediction as Equation (6):

f(h, r, t) = (1/|M|) Σ_{m ∈ M} f_m(h, r, t).    (6)
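A small sketch of the score-level ensemble in Equations (5)-(6). Scores are combined before ranking so that no information from the score distributions is lost to an intermediate ranking step; the weights shown are placeholders (equal weights reproduce MoSE-AI, while per-relation weights w_m(r) would reproduce MoSE-BI, described next).

```python
import torch

def ensemble_scores(scores_by_modality, weights):
    """scores_by_modality: dict m -> (num_entities,) scores f_m(h, r, .);
    weights: dict m -> scalar w_m. Returns the combined scores (Eq. 5)."""
    return sum(w * scores_by_modality[m] for m, w in weights.items())

num_entities = 10000
scores = {m: torch.randn(num_entities) for m in ("S", "V", "T")}

# MoSE-AI (Eq. 6): equal weights. MoSE-BI would instead look up learned
# per-relation weights w_m(r) here.
combined = ensemble_scores(scores, {"S": 1 / 3, "V": 1 / 3, "T": 1 / 3})
prediction = combined.argmax()
```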
Relation-aware Boosting Inference. We find that entity-relevant triples are sparse, and it is thus hard to capture an accurate correlation between entities and modality importance. In this paper, we assume that the relevance level of each modality varies by relation. So we propose to learn modality weights at the relation level to adjust modality importance for the final predictions. We divide the meta-set by relation type and upgrade the RankBoost (Freund et al., 2003) mechanism to generate modality weights w_m(r) corresponding to relation r, combining the modality scores as Equation (7):

f(h, r, t) = Σ_{m ∈ M} w_m(r) · f_m(h, r, t).    (7)

The MoSE-BI algorithm is illustrated in Appendix A.
Instance-specific Meta-Learner Inference. However, for KGs with fewer relations, such as WN9 (Xie et al., 2017) with only 9 relations, MoSE-BI is limited by its coarse-grained relation-level weight learning. Thus we propose to train a meta-learner to find optimal weight functions for each triple instance. Following Shu et al. (2019), we exploit an MLP (multilayer perceptron) with only one hidden layer as a meta-learner to combine the scores and approximate the true predictions. For a triple (h, r, t), we use the concatenation of the three scores F(h, r, t) = [f_m(h, r, t)], m ∈ M, as input, and train the weighted scores to fit the final predictions. The final prediction is obtained as Equation (8), where the weight parameter Θ is trained on the meta-set and transferred to the test set:

f(h, r, t) = MLP(F(h, r, t); Θ).    (8)
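A sketch of the MoSE-MI meta-learner in Equation (8): a one-hidden-layer MLP mapping the concatenated per-modality scores to a single combined score. The hidden size is an assumed example value; the paper specifies only that the MLP has a single hidden layer.

```python
import torch
import torch.nn as nn

class MetaLearner(nn.Module):
    """One-hidden-layer MLP (Eq. 8): maps the concatenated per-modality
    scores F(h, r, t) = [f_S, f_V, f_T] to a single combined score."""

    def __init__(self, num_modalities=3, hidden=64):  # hidden size assumed
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(num_modalities, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, stacked_scores):
        # stacked_scores: (batch, num_entities, num_modalities)
        return self.mlp(stacked_scores).squeeze(-1)  # (batch, num_entities)

# Trained on the meta-set (validation split) with cross-entropy against
# the gold tail (Eq. 9, below), then transferred to the test set.
meta = MetaLearner()
batch, num_entities = 8, 10000
stacked = torch.randn(batch, num_entities, 3)
final_scores = meta(stacked)
```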
The optimal weight Θ is obtained with the CE loss over the meta-set, as in Equation (9):

L(Θ) = -log p_Θ(t|(h, r)), with p_Θ(t|(h, r)) = softmax(MLP(F(h, r, t); Θ)).    (9)

Datasets. To evaluate the proposed model, we conduct experiments on three widely used KGC datasets: FB15K-237 (Toutanova et al., 2015), WN18 (Bordes et al., 2013) and WN9-IMG (Xie et al., 2017). The former two are unimodal KGC datasets with only the structure modality, and the latter contains both structure and visual modalities. We follow previous studies (Wang et al., 2021; Xie et al., 2016; Yao et al., 2019) to augment the text and visual modality information of each dataset. The dataset statistics are shown in Table 1.
Implementation details. To evaluate MoSE, four metrics are used, i.e., Hits@K, K = 1, 3, 10, representing accuracy in the top-K predictions, and Mean Rank (MR). Higher Hits@K and lower MR indicate better performance. We use PyTorch 1.11.0 to implement MoSE. The operating system is Ubuntu 18.04.5. We use a single NVIDIA A6000 GPU with 48 GB of RAM.
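For reference, a small sketch of the evaluation metrics: given the rank of the gold tail for each test query, Hits@K is the fraction of queries ranked within the top K, and MR is the mean rank. The example ranks are arbitrary.

```python
import torch

def hits_at_k(ranks, k):
    """ranks: (num_queries,) 1-indexed ranks of the gold tail entity."""
    return (ranks <= k).float().mean().item()

def mean_rank(ranks):
    return ranks.float().mean().item()

ranks = torch.tensor([1, 3, 12, 2, 7])  # arbitrary example ranks
metrics = {f"Hits@{k}": hits_at_k(ranks, k) for k in (1, 3, 10)}
metrics["MR"] = mean_rank(ranks)
```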
We report the results of three MoSE variants, which vary in the inference method used at evaluation. We exploit ComplEx (Lacroix et al., 2018) as the KGC decoder. In this paper, we mainly focus on contradiction in relation embeddings. Thus, we employ SOTA pretrained encoders to extract the visual and text features of entities, i.e., ViT (Dosovitskiy et al., 2020) following RSME (Wang et al., 2021) for the visual modality and BERT (Kenton and Toutanova, 2019) for the text modality. We use Adagrad (Duchi et al., 2011) to optimize the model. The hyperparameters are selected with the best Hits@10 on the validation set.

Baselines. We compare MoSE with several baselines to demonstrate the advantage of our framework. We mainly compare MoSE with KGE methods, which can be grouped into two categories: (1) unimodal KGE methods, including TransE (Bordes et al., 2013), DistMult (Yang et al., 2015), ComplEx (Trouillon et al., 2016) and RotatE (Sun et al., 2018); (2) multimodal KGE methods, including a) the E-TCR method IKRL (Xie et al., 2017), and b) the I-TCR methods TransAE (Wang et al., 2019) and RSME (Wang et al., 2021). We also list the results of pre-trained language models (PLMs) for KGC, i.e., KG-BERT (Yao et al., 2019) and MKGformer (Chen et al., 2022).
Comparison to the Baselines
The experimental results in Table 2 show that MoSE obtains the best performance compared to all 9 baselines, which demonstrates its superiority. Compared to the unimodal KGE methods, MoSE outperforms the best unimodal method, RotatE, while the other multimodal methods do not. Compared to the multimodal KGE methods, MoSE achieves 2%-10% improvements in Hits@10 and 13-216 improvements in MR over the best existing methods. It is worth noting that even compared to the pre-trained language model methods, MoSE outperforms KG-BERT and MKGformer in all metrics on the FB15K-237 and WN18 datasets.
Q1: Does MoSE succeed in avoiding modality interference? Compared with the corresponding base models, while other multimodal methods face a certain level of performance decline, MoSE achieves consistent improvements in all metrics. For example, the Hits@1 and Hits@3 of IKRL drop compared to those of TransE, and the Hits@3 of TransAE drops compared to that of TransE on the FB15K-237 dataset. Even the SOTA RSME faces a slight drop on the WN9 dataset in terms of Hits@1 and Hits@3 compared to ComplEx. This reveals that MoSE can steadily enhance unimodal models with auxiliary modality information and successfully avoids modality interference with the structure modality.
Q2: Is it necessary to assess modality importance? We explored three inference methods treating modality importance in different ways. MoSE-AI treats each modality equally and does not consider modality importance at all, while MoSE-BI considers modality importance at the relation level and MoSE-MI at the instance level. As shown in Table 2, MoSE-BI performs the best on FB15K-237, while
MoSE-MI performs the best on WN18 and WN9.
All the best inference methods on the three datasets outperform MoSE-AI, which demonstrates the necessity of assessing modality importance for MKGC.

Q3: How to choose a suitable inference method? As we can observe, different inference methods excel under different KG characteristics. The relation-aware inference MoSE-BI performs better in complex KGs with extensive relation types, such as FB15K-237 (237 relations), and fails in KGs with fewer relation types, such as WN18 and WN9 (18 and 9 relations respectively), while the instance-specific inference MoSE-MI performs the opposite way. The possible reason is that the inference methods differ in their capability to approximate the optimal combination of modalities. MoSE-BI is easy to scale to KGs with more relations and is able to achieve relatively better performance. Though MoSE-MI performs the best on two datasets, we believe that the single-layer MLP may still limit its fitting capability.
Effectiveness of Relation Decoupling
Since the TCR baselines in Table 2 vary in KGC decoder and modality types, we further investigated different TCR variations of MoSE under the same setting to demonstrate the effectiveness of relation decoupling. The results are presented in Table 3. For both the I-TCR and E-TCR variations, we replace the modality-split relation embeddings in MoSE with a single modality-shared relation embedding, which for E-TCR still yields three prediction scores. For the I-TCR variation, we further fuse the multimodal entities with weighted concatenation, which yields a unified prediction.
As shown in Table 3, MoSE outperforms all its E-TCR variations under the same inference method. As for the I-TCR method, the best performance of MoSE exceeds I-TCR in all metrics. This demonstrates the necessity of modality relation decoupling. We also notice that I-TCR exceeds MoSE-AI in Hits@1/3 and MoSE-BI in Hits@1 on WN18. The possible reason is that the modality information in WN18 has much mutual semantics, so modality fusion brings accurate entity representations. However, I-TCR obtains a large MR score, indicating it is not as stable as MoSE for MKGC.
Modality Ablation
To demonstrate how each modality supports the final predictions, we conduct a modality ablation. Table 4 shows the experimental results obtained by (1) ensemble inference of three structure-only unimodal models, Str-Str-Str-AI/BI/MI, and (2) the modality-split predictions obtained by the KGC decoder, MoSE-Str/Vis/Text.
The improvement of Str-Str-Str over MoSE-Str is insignificant compared to that of MoSE-BEST over MoSE-Str. This reveals that MoSE improves the base unimodal model by effectively utilizing modality information, not merely by performing ensemble inference. For the modality-split predictions MoSE-Str/Vis/Text, no single one of the three exceeds MoSE-BEST. This demonstrates that the modalities in MoSE effectively enhance each other and successfully avoid mutual modality interference. The modality-split predictions also indicate the quality of each modality for assisting MKGC: the structure modality, which is directly learned from the KGs, remains the best-performing on all datasets, while the visual modality has erratic performance and the text modality consistently provides the best MR metric.
Case Study
To demonstrate the intuitive ability of MoSE to assess modality importance, we conduct case studies with MoSE-BI, which provides modality weights for each modality corresponding to each relation. Figure 3 shows the modality weights of Equation (7) used to combine predictions from multiple sources.
Modality Importance. Figure 3a shows the average modality weights on each dataset obtained by MoSE-BI. We can observe that the text modality provides the greatest contribution on WN18 and WN9, while the visual modality provides the minimum on all datasets. This demonstrates that the text modality provides valuable information supporting knowledge predictions, while the visual modality does the opposite. The possible reason is that descriptions often mention relevant entities, while images are only highly related to the entity itself.
Relation Cases. Figure 3b presents some examples of how much each modality contributes to relation learning on FB15K-237. A higher level of modality importance often stems from more relation-relevant modality information. For example, for the relation country_of_origin (abbr. country) shown in Figure 1a, the text modality provides more relevant information than the visual modality. As shown in Figure 3b, the text modality presents an importance of up to 70% while the visual modality presents 0%. The results also demonstrate that MoSE-BI is able to identify which modality is more credible and then assign it a higher weight at a fine-grained relation level.
Uncertainty in MKGs
To investigate the uncertainty of MKG predictions, we adjust the temperature parameter as shown in Figure 4. We use MoSE-AI to rule out the impact of ensemble inference. We vary the temperature T in Equation (2) from 2^{-1} to 2^5 with exponential growth. As the temperature increases, the performance tends to grow and converge to a stable level. When T = 2^{-1}, the confidence in the visual and text modalities is enlarged and MoSE faces a great performance decline. We can also observe that MoSE with larger T always outperforms T = 2^0 = 1, in which the confidence is not constrained. This supports our assumption about the uncertainty of the visual and text modalities.
Conclusion
In this paper, we proposed a novel modality-split learning and ensemble inference framework for multimodal knowledge graph completion, called MoSE. MoSE first decouples the modality-shared relation embedding into modality-split relation embeddings and performs modality-split representation learning in the training phase, aiming at overcoming modality relation contradiction. Then, MoSE exploits three ensemble inference techniques to combine the modality-split predictions by assessing modality importance. Experimental results demonstrate that MoSE outperforms state-of-the-art methods for the MKGC task on three widely used datasets.
Limitations
Despite the gains MoSE achieves through modality-split learning and ensemble inference, it still has the following limitations. First, MoSE does not fully exploit the visual modality. Since an entity's image is highly self-relevant and covers little information about other related entities, we reduce the visual modality's importance during ensemble inference to cater to the MKGC task, which heavily relies on the relationships between entities. Nevertheless, we believe there are other ways to exploit the visual modality suitably.
Second, for a fair comparison, we follow the SOTA method RSME (Wang et al., 2021) and utilize a single-image setting. We believe that under a multiple-image setting, the problem of modality relation contradiction still holds. Intuitively, even with more images, an image of the entity "The United States of America" in Figure 1a is unlikely to involve the entity "Friends". Quantitatively, the similarity of multiple images from the same entity is up to 99.250% on FB15K-237 and 99.255% on WN18. Therefore, there is little difference between the single-image and multiple-image settings in our work. However, more images may introduce more side information, such as related entities, from which an MKGC model may benefit.
Virus-Like Particle Vaccine Confers Protection against a Lethal Newcastle Disease Virus Challenge in Chickens and Allows a Strategy of Differentiating Infected from Vaccinated Animals
ABSTRACT In this study, we developed Newcastle disease virus (NDV) virus-like particles (VLPs) expressing NDV fusion (F) protein along with influenza virus matrix 1 (M1) protein using the insect cell expression system. Specific-pathogen-free chickens were immunized with oil emulsion NDV VLP vaccines containing increasing dosages of VLPs (0.4, 2, 10, or 50 μg of VLPs/0.5-ml dose). Three weeks after immunization, the immunogenicity of the NDV VLP vaccines was determined using a commercial enzyme-linked immunosorbent assay (ELISA) kit, and a lethal challenge using a highly virulent NDV strain was performed to evaluate the protective efficacy of the NDV VLP vaccines. NDV VLP vaccines elicited anti-NDV antibodies and provided protection against a lethal challenge in a dose-dependent manner. Although the VLP vaccines containing 0.4 and 2 μg of VLPs failed to achieve high levels of protection, a single immunization with NDV VLP vaccine containing 10 or 50 μg could fully protect chickens from a lethal challenge and greatly reduced challenge virus shedding. Furthermore, we could easily differentiate infected from vaccinated animals (DIVA) using the hemagglutination inhibition (HI) test. These results strongly suggest that utilization of NDV VLP vaccine in poultry species may be a promising strategy for the better control of NDV.
Newcastle disease virus (NDV), also known as avian paramyxovirus serotype 1 (APMV-1), is an enveloped nonsegmented negative-strand RNA virus which is a member of the genus Avulavirus in the family Paramyxoviridae (1). On the basis of their pathogenicity for chickens, specifically by the intracerebral pathogenicity index (ICPI), NDVs are classified as asymptomatic, lentogenic, mesogenic, or velogenic (virulent) strains (2). Among infections by these NDV strains, only infection by a virulent NDV strain is defined as Newcastle disease (ND) (1, 2), which is one of the most devastating diseases in the poultry industry. Since its first recognition in 1926 at Newcastle-upon-Tyne in England, ND has become endemic across large geographical regions, causing enormous economic losses worldwide (1). ND outbreaks often result in approximately 100% mortality in fully susceptible poultry species, and due to the serious impact of outbreaks, ND was previously on the World Organisation for Animal Health's (OIE) list A of diseases (http://www.oie.int/en/animal-health-in-the-world/the-world-animal-health-information-system/old-classification-of-diseases-notifiable-to-the-oie-list-a/), and it is now categorized as a disease notifiable to the OIE (http://www.oie.int/animal-health-in-the-world/oie-listed-diseases-2014/). Currently, in order to keep ND under control, stringent vaccination policies are being maintained in many countries worldwide. Classically, inactivated or live NDV vaccines have been frequently used to control ND. These vaccines have been used routinely for decades, especially in regions where ND is endemic, and have acted as effective control measures in ND outbreaks (2). However, despite their contributions to the control of ND, these vaccines have significant limitations on differentiating infected from vaccinated animals (DIVA), and this necessitates the development of novel vaccines that allow a DIVA strategy with solid protective efficacy.
Virus-like particles (VLPs), which morphologically resemble authentic virus structures (from which they were given their name), have been suggested as a novel vaccine antigen against several viral pathogens (3)(4)(5). VLPs have been produced in prokaryotic and eukaryotic expression systems (5,6), and VLP vaccines have been shown to confer high levels of protective efficacy against various viral pathogens. The safety, immunogenicity, protective efficacy, and mode of host immune response stimulation have been described well in recent studies (7)(8)(9)(10)(11). The formation of NDV VLPs was first characterized using an avian cell line (12), and the immunogenicity of the avian-cell-expressed NDV VLPs has been evaluated using a mouse model (13). A review article written by Trudy G. Morrison describes the development and the immunogenicity of NDV VLPs in detail (11). However, the protective efficacy of NDV VLP vaccine against a lethal NDV challenge in chickens has not been studied.
In this study, we developed NDV VLPs expressing NDV fusion (F) protein along with influenza virus matrix 1 (M1) protein using insect cell lines for the first time and evaluated the immunogenicity and protective efficacy against a lethal NDV challenge in specific-pathogen-free (SPF) chickens. Furthermore, the DIVA test was performed to differentiate VLP-vaccinated chickens from vaccinated and then infected chickens.
Cloning of NDV F and influenza virus M1 genes.
For amplification of the NDV F gene, viral RNA was extracted from the virulent NDV strain Kr-005/00 (kindly provided by the animal and plant quarantine agency of Korea; GenBank accession no. AY630423) using the Viral Gene-Spin viral DNA/RNA extraction kit (iNtRon Biotechnology, Republic of Korea) according to the manufacturer's instructions. For cDNA synthesis, reverse transcription (RT) was performed on extracted viral RNA using Superscript III (Invitrogen, USA) with random hexamers. From the cDNA, the NDV F gene was PCR amplified using a previously described primer pair (14), F-forward and F-reverse (see Table S1 in the supplemental material).
Viral RNA was extracted from influenza virus strain A/Puerto Rico/8/1934 (H1N1; GenBank accession no. KC866600.1) as described above, and the influenza virus M1 gene was amplified as previously described (15) using the primer pair M-1 and M-1027R (see Table S1 in the supplemental material).
PCR-amplified NDV F and influenza virus M1 genes were cloned into the TA cloning vector pGEM-T (Promega, USA), and each gene sequence was determined by DNA sequencing. The two resulting plasmid vectors containing the NDV F and influenza virus M1 genes were designated vF and vM1, respectively.
Generation of recombinant baculoviruses and production of NDV VLPs. The F and M1 genes were further amplified from vF and vM1 by PCR using primer pairs EcoRI-F-forward/HindIII-F-reverse and EcoRI-M1-F/HindIII-M1-R, respectively (see Table S1 in the supplemental material), and cloned into the pFastBacT1 (Invitrogen) bacmid transfer vector. The resulting transfer vectors containing the F and M1 genes were designated pF and pM1, respectively. MAX Efficiency DH10Bac competent Escherichia coli cells (Invitrogen) were transformed with one of the constructed transfer vector plasmids, pF and pM1, to generate recombinant bacmids according to the manufacturer's instructions. For generating recombinant baculovirus (rBV), the recombinant bacmids were transfected into Sf9 cells seeded in 6-well plates at a density of 8 × 10^5 cells/well, using Cellfectin reagent (Invitrogen). The two resulting rBVs encoding the NDV F and influenza virus M1 proteins were designated rBV F and rBV M1, respectively.
Production and characterization of NDV VLP and preparation of VLP vaccines. For NDV VLP production, Sf9 cells were coinfected with rBV F and rBV M1, both at a multiplicity of infection (MOI) of 5, for 72 h. The culture medium containing VLPs was collected and clarified by low-speed centrifugation (2,000 × g, 30 min, and 4°C) to remove large cell debris, and VLPs from the clarified supernatants were pelleted (30,000 × g, 1.5 h, and 4°C). The pellet was resuspended in phosphate-buffered saline (PBS) solution (pH 7.2), loaded onto a 20%-35%-50% (wt/vol) discontinuous sucrose density gradient, and sedimented by ultracentrifugation (150,000 × g, 1.5 h, and 4°C). After sedimentation, two major visible bands at the top of the 35% and 50% sucrose density gradients were collected. The protein concentration of each band was determined by Bradford protein assay (Pierce, USA), and 2 μg of each band was analyzed by Western blotting. Expression of the NDV F protein was detected with chicken polyclonal sera against NDV and a horseradish peroxidase (HRP)-conjugated rabbit anti-chicken IgG secondary antibody (Bethyl, USA). Influenza virus M1 protein was detected with rabbit anti-M1 polyclonal antibodies (Immune Technology, USA) and an HRP-conjugated goat anti-rabbit IgG secondary antibody (Merck, Germany). The presence of VLPs was observed by transmission electron microscopy (TEM; Tecnai G2 Spirit, FEI, Netherlands, installed at Korea Basic Science Institute) using a negative staining method as previously described (16). Different concentrations of NDV VLPs collected at the top of the 35% sucrose density gradient were emulsified in the oil adjuvant Montanide ISA70 (SEPPIC, France) at a ratio of 30:70 (vol/vol) to obtain escalating doses of VLP vaccines (0.4, 2, 10, and 50 μg of VLPs/0.5-ml dose).
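As a small worked example of the formulation arithmetic above, the following Python sketch (the function name and loop are illustrative; the 30:70 antigen-to-adjuvant ratio and the 0.5-ml dose volume are from the text) computes the concentration the aqueous VLP phase would need so that each emulsified dose carries the target VLP mass:

def vlp_stock_concentration(target_ug_per_dose, dose_volume_ml=0.5, antigen_fraction=0.30):
    """Required VLP concentration (ug/ml) of the aqueous phase so that a dose
    of the 30:70 (vol/vol) antigen:adjuvant emulsion carries the target VLP
    mass. Only the antigen fraction of the dose volume contains VLPs."""
    antigen_volume_ml = dose_volume_ml * antigen_fraction
    return target_ug_per_dose / antigen_volume_ml

for dose_ug in (0.4, 2, 10, 50):
    print(f"{dose_ug} ug/dose -> {vlp_stock_concentration(dose_ug):.1f} ug/ml aqueous phase")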
Immunization of animals and determination of immunogenicity.
A total of 50 6-week-old SPF White Leghorn chickens (Namduck Sanitec, Republic of Korea) were divided into five groups (10 chickens per group) and marked individually. Four groups of chickens were intramuscularly immunized (0.5 ml per chicken) with escalating doses (0.4, 2, 10, and 50 μg of VLPs/chicken) of NDV VLP vaccine. As a mock-vaccinated control group, another 10 SPF chickens were injected with an emulsified solution of PBS with ISA70.
All animal procedures performed in this study were reviewed, approved, and supervised by the Institutional Animal Care and Use Committee (IACUC) of Konkuk University. Three weeks after a single immunization, sera were collected for determination of serum NDV-specific antibody levels using a commercially available enzyme-linked immunosorbent assay (ELISA) kit (Median Diagnostics, Republic of Korea), which is precoated with NDV strain LaSota (GenBank accession no. JF950510.1). ELISA was performed according to the manufacturer's instructions, with minor modifications. Briefly, serum samples were diluted 50-fold and added into the wells. After 30 min at 20°C, the plate was washed, and HRP-conjugated anti-chicken IgG antibody was added to each well. The plate was washed after 30 min at 20°C, the substrate tetramethylbenzidine peroxidase (TMB) was added, and the plate was incubated for 15 min at 20°C. The reaction was stopped by the addition of stop solution, and the optical density at 450 nm (OD450) was measured using an ELISA reader (Tecan, Switzerland) for calculation of the sample-to-positive (S/P) value of each sample [determined by the equation (OD450 sample − mean OD450 negative)/(mean OD450 positive − mean OD450 negative), where OD450 negative is the value for the negative control provided by the manufacturer and OD450 positive is the value for the positive control provided by the manufacturer].
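The S/P calculation above is a simple normalization; a minimal Python sketch (the OD readings below are hypothetical, not taken from the study) makes the arithmetic explicit:

def sp_value(od_sample, od_neg_mean, od_pos_mean):
    """Sample-to-positive (S/P) ratio for an ELISA plate, normalizing the
    sample OD450 against the kit's negative- and positive-control means."""
    return (od_sample - od_neg_mean) / (od_pos_mean - od_neg_mean)

# Hypothetical OD450 readings for illustration
print(sp_value(od_sample=0.85, od_neg_mean=0.10, od_pos_mean=1.20))  # ~0.68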
Lethal NDV challenge and assessment of protection. Three weeks after a single immunization, chickens were intramuscularly challenged with 1 ml of 10^5.5 50% egg infective doses (EID50)/ml of the virulent NDV strain Kr-005/00. To assess the protective efficacy of the NDV VLP vaccines, mortality and clinical symptoms were observed daily for 14 days postchallenge (dpc). Clinical symptoms were classified as follows: normal (score of 0), mild depression (score of 1), neurological signs and/or severe depression (score of 2), or death (score of 3). The average clinical scores for each group were calculated daily.
To determine the challenge virus shedding, oropharyngeal and cloacal swab samples were collected and suspended in 1 ml of PBS supplemented with gentamicin (400 μg/ml) at 3, 5, 7, and 10 dpc. Viral RNA was extracted from 150 μl of this suspension using the Viral Gene-Spin viral DNA/RNA extraction kit (iNtRon Biotechnology), and the amount of NDV RNA was quantified by the cycle threshold (CT) value using M gene-based real-time reverse transcription-PCR (rRT-PCR), as previously described (17). For extrapolation of CT values of rRT-PCR to infectious units, serial dilutions of oropharyngeal and cloacal swab samples from the mock-vaccinated control group at 3 dpc were calculated as the EID50/ml. In parallel, the corresponding virus doses were analyzed by rRT-PCR, and the EID50 of virus was plotted against the CT values of the viral dilutions. The resulting calibration curves (see Fig. S1 in the supplemental material) were highly correlated (r² > 0.99) and used for converting CT values to EID50.
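One plausible way to implement such a calibration is to regress the known log-titers of the serial dilutions on their CT values and invert the fit; the NumPy sketch below uses made-up calibration points and assumes the approximately linear CT-log10(EID50) relationship implied by the reported r² > 0.99:

import numpy as np

# Hypothetical calibration data: serial dilutions with known titers
log10_eid50 = np.array([5.5, 4.5, 3.5, 2.5, 1.5])      # log10 EID50/ml
ct_values = np.array([16.2, 19.6, 23.1, 26.4, 29.9])   # matched CT values

# Fit log10(EID50) as a linear function of CT
slope, intercept = np.polyfit(ct_values, log10_eid50, 1)

def ct_to_eid50(ct):
    """Convert a swab sample's CT value to an estimated titer (EID50/ml)."""
    return 10 ** (slope * ct + intercept)

print(ct_to_eid50(21.0))  # titer implied by a CT of 21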
DIVA. For serological differentiation of NDV VLP-vaccinated chickens from vaccinated and then infected chickens, the hemagglutination inhibition (HI) test was used because the NDV VLPs developed in this study do not contain hemagglutinin-neuraminidase (HN) protein, and thus HI antibodies were expected to be raised only after NDV infection. Serum samples were collected at 0 dpc, which is 3 weeks postvaccination (wpv), and at 14 dpc from all surviving chickens in the groups vaccinated with 2 μg and 50 μg of VLP vaccine. Additionally, five 6-week-old SPF chickens were intramuscularly immunized with commercial inactivated oil emulsion ND vaccine containing inactivated LaSota antigen (KBNP, Republic of Korea) and challenged with the virulent NDV strain Kr-005/00 at 3 wpv. From these five SPF chickens, serum samples were taken at 0 and 14 dpc. Collected serum samples were analyzed for the presence of HI antibodies according to the OIE standard method (2) using NDV LaSota antigen.
Statistical analysis. Analysis of variance (ANOVA) with a Tukey-Kramer post hoc test was performed for comparison of serum antibody titers between groups. Fisher's exact test was used for statistical analysis of the mortality and morbidity results. To compare differences of HI titers between the groups vaccinated with 2 and 50 μg of VLP vaccine, an unpaired t test was used. Results with P values of <0.05 were considered to be statistically significant.
Characterization of NDV VLPs.
Both bands positioned at the top of the 35% and 50% sucrose density gradients were found to contain F and M1 proteins by Western blotting (Fig. 1A), which identified bands at 48 kDa and 28 kDa, indicating the NDV F1 and influenza virus M1 proteins, respectively. Identification of the F1 protein, the cleavage product of F protein, indicates that F protein expressed in Sf9 cells could be cleaved by host proteases. The band positioned at the top of the 35% sucrose density gradient was found to contain both F and M1 proteins, with higher concentrations than that of the 50% sucrose density gradient (Fig. 1A), and was used for this study. Examination of negatively stained preparations by TEM revealed the presence of VLPs with a diameter of approximately 70 nm (Fig. 1B).
Immune responses to NDV VLP vaccines. Three weeks after a single immunization, as shown in Fig. 2, VLP-vaccinated groups showed significantly increased antibody responses, except for the group immunized with 0.4 μg of VLP vaccine. Antibody responses to the VLP vaccine between groups showed dose-dependent increases up to 10 μg. Groups immunized with 10 or 50 μg showed antibody responses superior to those of other groups, with statistical significance (P < 0.001).
Protection against lethal NDV challenge and reduced viral shedding in VLP-vaccinated chickens. As shown in Fig. 3 and in Table S2 in the supplemental material, mock-vaccinated chickens showed severe clinical signs and 100% mortality within 4 days after challenge, which ensures that a proper challenge was accomplished. The group immunized with 0.4 μg of VLP vaccine showed 100% mortality with severe clinical signs, and the group vaccinated with 2 μg showed partial protection (20% mortality) with moderate clinical signs. Importantly, two surviving chickens from the group vaccinated with 2 μg showed neurological signs (score of 2), including head tilt and posterior paresis, indicating that the 2 μg of NDV VLP vaccine was not effective enough to confer high levels of protective efficacy against lethal NDV infection in chickens. However, chickens vaccinated with 10 or 50 μg of vaccine were fully protected from mortality and morbidity, except one chicken from each group which showed only moderately reduced activity (clinical score of 1), as shown in Fig. 3B and in Table S2 in the supplemental material. Moreover, chickens vaccinated with 10 and 50 μg of vaccine not only survived but also showed significantly lower levels of challenge virus shedding than did mock-vaccinated chickens from both the oropharynx (Table 1) and cloaca (Table 2). Titers of the viral shedding from individual chickens are described in Fig. S2 in the supplemental material. Although the group vaccinated with 2 μg showed significantly lower levels of challenge virus shedding than the mock-vaccinated group, it showed higher levels of viral shedding than did chickens vaccinated with 10 or 50 μg. The low level of protective efficacy in the group vaccinated with 2 μg was in accordance with the finding that this group showed significantly lower levels of antibodies than the groups vaccinated with 10 and 50 μg (Fig. 2) and showed 20% mortality after the challenge (Fig. 3A). Interestingly, as shown in Fig. 4, the group vaccinated with 2 μg showed significantly higher levels of seroconversion, as measured by the HI test, than did the group vaccinated with 50 μg at 14 dpc, reinforcing that the suppression of challenge virus replication was greater in the chickens vaccinated with 50 μg than in those vaccinated with 2 μg. All chickens immunized with 0.4 μg of VLP vaccine not only succumbed to death but also could not reduce challenge virus shedding.
These results suggest that a single immunization of chickens with 10 or 50 μg of NDV VLP vaccine could fully protect chickens after a lethal NDV challenge and could effectively reduce challenge virus shedding.
DIVA.
As expected, HI antibody responses were not observed in prechallenge sera from VLP-vaccinated chickens. However, as shown in Fig. 4, HI antibodies were detected after NDV challenge in all VLP-vaccinated chickens, which allowed the DIVA. In contrast, the sera from chickens vaccinated with inactivated vaccine (n = 5) were all positive for HI antibodies both pre- and postchallenge. Therefore, VLP vaccine and the companion HI test could allow the utilization of the DIVA strategy, which is not applicable to the commercial inactivated ND vaccine.
DISCUSSION
The insect cell expression system has been recognized as a versatile recombinant protein expression tool, and it is now widely accepted as a proven technology in the industry. The key advantage of this recombinant protein manufacturing platform is that it can perform most of the posttranslational modifications (e.g., glycosylation, disulfide bond formation, and phosphorylation) and thus can produce biologically active proteins while offering the potential for low manufacturing costs. Moreover, insect cell expression systems have been widely used for the development of various VLP vaccines against both human and animal viruses (15,16,(18)(19)(20).
In the present study, we generated NDV VLPs using the insect cell expression system and evaluated their possible use as a DIVA vaccine in SPF chickens for the first time. To the best of our knowledge, this is the first study reporting the generation of NDV VLPs using the insect cell expression system. Even a single immunization with 10 or 50 μg of NDV VLP vaccine elicited significant levels of antibodies against NDV, fully protected chickens from a lethal NDV challenge, and strongly reduced challenge virus shedding. Moreover, we could differentiate VLP-vaccinated chickens from VLP-vaccinated and then infected chickens using the standard HI test, which is one of the most commonly utilized methods for detection of antibodies against NDV.
For the generation of VLPs in the insect cell expression system, adoption of a core protein from a well-established VLP production system is a frequently used strategy, resulting in so-called chimeric VLPs. For example, Haynes et al. used the murine leukemia virus gag protein for influenza VLP generation (21), and both Quan et al. and Wang et al. used influenza virus M1 protein for the generation of respiratory syncytial virus (RSV) VLPs (22) and porcine reproductive and respiratory syndrome virus (PRRSV) VLPs (23), respectively, in insect cell expression systems. In this study, we adopted influenza virus M1 as a core protein for the generation of NDV VLPs incorporating NDV F protein. In accordance with previous studies on the generation of RSV or PRRSV VLPs using influenza virus M1 protein (22,23), coexpression of NDV F and influenza virus M1 protein in insect cells resulted in the generation of spherical particles of uniform size, indicating successful formation of NDV VLPs. These results may support the further utilization of influenza virus M1 as a core protein for the development of VLP vaccines in the insect cell expression system against various viruses.
Induction of complete prevention of virus infection, so-called sterilizing immunity, is an ideal goal for vaccination. Although 10 or 50 μg of NDV VLP vaccine in this study fully protected chickens from a lethal NDV challenge and strongly reduced challenge virus shedding, sterilizing immunity could not be achieved, since chickens vaccinated with 10 or 50 μg showed detectable levels of challenge virus shedding, especially at 7 dpc. It is expected that increasing the immunogenicity of the NDV VLP vaccine might confer sterilizing immunity against lethal NDV infection, and further studies on enhancing the immunogenicity of the NDV VLP vaccine (e.g., incorporation of a molecular adjuvant into VLPs) will be required. Especially, application of a priming-boosting vaccination regimen combining viral vector vaccines expressing the F gene (24) with the NDV VLP vaccine developed in this study is strongly expected to confer sterilizing immunity while still allowing DIVA, although this needs to be studied. Vaccination is considered an attractive control measure to prevent animal diseases. For effective vaccination programs, adequate surveillance in vaccinated animals is essential to determine whether the field virus is circulating in vaccinated animals (25). Therefore, vaccine development should be accompanied by development of proper DIVA strategies, which is often time-consuming and troublesome because of technical challenges. The NDV VLP vaccine developed in this study did not require the development of a companion DIVA test and enabled DIVA without performing complicated and expensive laboratory tests, since the HI test was successfully adopted as a DIVA test in NDV VLP-vaccinated chickens. Previously, Kumar et al. reported elicitation of HI antibodies even after immunization with recombinant APMV-3 as a vector vaccine expressing NDV F protein (26). Although this result by Kumar et al. is not in accordance with other previous studies (14,27) or our study, it may be explained by differences in experimental approaches, since different vaccination strategies (e.g., differences in vaccine vector) were used. The successful DIVA test performed in this study supports the possible application of VLP vaccine as part of an NDV control strategy.
Cost of vaccine is one of the main concerns that might limit the practical use of VLP vaccines for poultry. Based on our analysis, the production cost of 10 μg of NDV VLP vaccine developed in this study was approximately 3 times higher than that of conventional inactivated whole-virus NDV vaccine produced in SPF eggs. Since this cost analysis of NDV VLP production was performed based on a laboratory-scale production process, optimization of a large-scale manufacturing process is expected to significantly reduce the production cost of NDV VLP vaccine. Moreover, as performed in our previous VLP study (15), omitting the costly purification step and the use of crude NDV VLP antigen might also be applicable for the reduction of NDV VLP vaccine production cost. For the industrialization of veterinary VLP vaccines with economic feasibility, dedicated studies on reducing VLP vaccine production cost are required.
In conclusion, we developed an NDV VLP vaccine using the insect cell expression system for the first time, which was highly immunogenic and fully protected chickens from lethal NDV infection, with strongly reduced challenge virus shedding. Furthermore, DIVA was successfully performed with a simple serological test, the HI test. These results strongly suggest that utilization of NDV VLP vaccine in poultry species may be a promising strategy for the better control of NDV.
|
2018-04-03T04:34:59.941Z
|
2014-01-08T00:00:00.000
|
{
"year": 2014,
"sha1": "946a7f0cf3ddd3495bbf2fe4150e134444bca147",
"oa_license": null,
"oa_url": "https://doi.org/10.1128/cvi.00636-13",
"oa_status": "GOLD",
"pdf_src": "Highwire",
"pdf_hash": "9f95de54d454da086c2a1eaf735f05af412fa3c3",
"s2fieldsofstudy": [
"Agricultural And Food Sciences"
],
"extfieldsofstudy": [
"Medicine",
"Biology"
]
}
|
254615914
|
pes2o/s2orc
|
v3-fos-license
|
Substance Use Among Young Adult Survivors of Childhood Cancer With Cognitive Impairment: An Analysis of the Project Forward Cohort
PURPOSE: Young adult childhood cancer survivors (YACCSs) are often impacted by cancer-related cognitive impairment (CRCI) and psychological distress. Using the Project Forward Cohort, we evaluated the relationship between CRCI and substance use behaviors. METHODS: YACCSs were surveyed between 2015 and 2018 (N = 1,106, female = 50.8%, Hispanic = 51.5%, median age = 25.5 years). Associations between CRCI and substance use (tobacco, binge drinking, marijuana, prescription drug misuse, and e-cigarette/vaporizer) were examined in multivariate logistic or log-binomial regressions, adjusting for child at diagnosis (0-14 years), years since diagnosis, sex, race/ethnicity, cancer type, and treatment intensity. Mediation analysis was performed to determine opportunities for interventions. RESULTS: CRCI was reported by 144 (13.0%) survivors. The highest prevalence was observed in CNS cancers (25.4%) and leukemia (13.3%) survivors. After covariate adjustment, CRCI was associated with 2.26 times the odds of prior 30-day vaping (95% CI, 1.24 to 4.11; P = .007). Mediators with significant indirect effects in the CRCI-vaping relationship include depressive symptoms (Center for Epidemiological Studies Depression Scale) and having two or more cancer-related late effects (P < .05). CONCLUSION: CRCI among YACCSs was associated with reports of vaping. Oncologists should screen for vaping behavior if CRCI is apparent. Increasing access to long-term follow-up clinics, addressing physical and mental health issues, and monitoring and educating on vaping and other substance use behaviors is recommended to improve the long-term health of YACCSs.
INTRODUCTION
The reported prevalence of cancer-related cognitive impairment (CRCI) ranges between 10% and 40% across different neurocognitive domains among childhood, adolescent, and young adult patients with cancer, [1][2][3] and it is more prominent among those diagnosed with central nervous system (CNS) malignancies and leukemia. 1,4 Key insults leading to CRCI have been identified as cancer itself, especially with CNS tumors, as well as anticancer treatments such as cranial radiation and intrathecal chemotherapy. 4 Concurrent emotional and social dysfunction is observed with cognitive impairment, with afflicted survivors reporting higher risks of unemployment, not attaining a college degree, and dependent living. 1,2,5 As CRCI may persist up to 20 years after cancer diagnosis among childhood cancer survivors, 6 young adult childhood cancer survivors (YACCSs, 15-39 years) are at risk of experiencing developmental problems compared with their peers. While survivorship care providers monitor neurocognitive issues during follow-up care visits, there remains a lack of effective interventions to manage CRCI in cancer survivorship. [7][8][9] Alcohol use, cigarette use, and drug use are risky lifestyle behaviors that are recommended for monitoring during survivorship care of young cancer survivors as they are frequently linked to poor health outcomes. 9,10 Known predictors of smoking initiation and alcohol consumption include psychological and emotional distress, which are associated with CRCI. 7,11,12 Hence, there is potential for higher risks of substance use among CRCI-affected survivors, which may indicate a greater need for close monitoring of these behaviors during follow-up on the basis of recommendations from the Children's Oncology Group. 9 Substance use behaviors may also have a negative impact on cognition, suggesting the possibility of a mutually reinforcing cycle of worsening in the cognition-substance use relationship. 13,14 To our knowledge, however, this relationship is yet to be explored in YACCSs.
By using a hypothesis-generating approach, this study investigated the association between CRCI and substance use behaviors in the Project Forward Cohort, 15 a population-based and ethnically diverse sample of YACCSs diagnosed in Los Angeles County. Through a secondary analysis of existing data, correlates of CRCI were examined to verify the classification of participants with CRCI in the study data set. Additionally, we performed a mediation analysis to identify important health and psychosocial mediators that could be targeted by interventions to reduce the substance use behaviors associated with CRCI.
Data Source
Potential participants were identified from the Los Angeles Cancer SEER Cancer Registry. Inclusion criteria included (1) diagnosis with any cancer (stage II or greater and all stages for the brain) during 1996-2010, (2) age 0-19 years at diagnosis, (3) residence in Los Angeles County, and (4) at least 5 years having passed since diagnosis. All recruited subjects provided active consent for participation, and the research protocol was approved by the California State Committee for the Protection of Human Subjects, the California Cancer Registry, and the Institutional Review Board at the University of Southern California (No. HS-14-00817). The study procedures have been described previously. 15 A total of 1,106 subjects completed the survey between 2015 and 2018 and were included in the analysis. Key measures were identified from both self-reported survey data and cancer registry data. The sources (survey or registry) with survey questions (if applicable) for each measure are summarized in the Data Supplement (online only).
Measures
Self-reported CRCI. We defined participants as having self-reported CRCI (yes, no) if they reported having difficulties with learning and memory as a problem at the time of survey.
Substance use behavior. Substance use (cigarette use, binge drinking, marijuana, prescription drug misuse, and e-cigarette/vaporizer use) was defined by any reported use (yes, no) in the prior 30 days. Binge drinking was defined by having five or more drinks of alcohol on a single occasion (within a couple of hours). Prescription drug misuse was determined by the use of (any) prescription drugs not prescribed by a physician. The e-cigarette/vaporizer use question was added after initial study launch (with 71 participants missing this item).
Demographic and clinical factors. Demographic information included age at survey, age at diagnosis, years since diagnosis, sex, race/ethnicity, education level, insurance, employment, and socioeconomic status (SES). Quintiles of SES were estimated with an area-based composite index computed using socioeconomic factors (education, occupation, employment, household income, poverty, rent, and house valuations) from census sources. [16][17][18] Cancer registry data contributed the SES, cancer type, age at diagnosis, and ethnicity.
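As a sketch of how such quintiles are commonly derived once the composite index exists (pandas; the column names are hypothetical, and the construction of the index itself follows the cited census-based methodology):

import pandas as pd

# df["ses_index"] is assumed to hold the area-based composite SES score
df = pd.DataFrame({"ses_index": [0.12, 0.45, 0.78, 0.33, 0.91,
                                 0.07, 0.56, 0.64, 0.22, 0.88]})
df["ses_quintile"] = pd.qcut(df["ses_index"], q=5, labels=[1, 2, 3, 4, 5])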
Treatment intensity was determined using the Intensity of Treatment Rating Scale 3.0 with cancer registry data such as cancer diagnosis and initial therapy received. 19,20 The validation of the methodology against chart-abstracted data has been published elsewhere. 19 There were four levels of intensity, from level 1 (least intensive) to level 4 (most intensive). 20 Participants were asked (yes/no) whether they were experiencing cancer-related late effects at the time of survey (inability to have children, heart problems, second cancer, weight gain, liver damage, hearing problems, lung problems, poor eyesight, sexual functioning problems, and bone fractures). A summative score ranging from 0 to 10 was generated for each participant by summing the reported late effects.
Psychosocial variables. Depressive symptoms were assessed with the 20-item Center for Epidemiological Studies Depression Scale (CES-D). 21 The questionnaire queries about the frequency of experiencing depressive symptoms in the previous week on a 4-point Likert scale. Post-traumatic growth was evaluated with an 11-item modified Post-Traumatic Growth Inventory that has been previously administered in patients with cancer. 22 Items ask about the degree of positive and negative changes in different aspects of life as a result of cancer (eg, priorities in life, self-appreciation, compassion for others, handling difficulties, and spirituality), using a 5-point scale (1 = highly negative change, 2 = somewhat negative change, 3 = no change, 4 = somewhat positive change, and 5 = highly positive change). A total mean score was calculated, with higher scores representing more post-traumatic growth (α = .891).
Health care self-efficacy was determined using three items adapted from the Stanford Patient Education Research Center Chronic Disease Self-Efficacy Scales. 23 These questions evaluate patients' confidence in asking physicians about things that concern them, scheduling and attending doctor appointments when needing care, and receiving cancer-related follow-up care over the next 2 years. Responses comprised a 3-point Likert scale ranging from not at all confident to somewhat confident and totally confident. A summed score ranging from 0 to 6 was calculated, with higher scores indicating greater confidence in navigating the health care system for cancer-related care (α = .715).
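A minimal scoring sketch for the three instruments as described above (Python; item column names are hypothetical, and any reverse-coding the standard CES-D requires is assumed to have been applied already):

import pandas as pd

def score_scales(df: pd.DataFrame) -> pd.DataFrame:
    """Score CES-D (sum of 20 items), post-traumatic growth (mean of 11
    items), and health care self-efficacy (sum of 3 items)."""
    cesd_items = [f"cesd_{i}" for i in range(1, 21)]  # each item 0-3
    ptg_items = [f"ptg_{i}" for i in range(1, 12)]    # each item 1-5
    hse_items = [f"hse_{i}" for i in range(1, 4)]     # each item 0-2

    df["cesd_total"] = df[cesd_items].sum(axis=1)  # higher = more depressive symptoms
    df["ptg_mean"] = df[ptg_items].mean(axis=1)    # higher = more post-traumatic growth
    df["hse_total"] = df[hse_items].sum(axis=1)    # 0-6; higher = more confident
    return df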
Cancer-related follow-up care. Participants were asked if they had attended cancer-related follow-up care in the previous 2 years.
Statistical Analysis
The prevalence of self-reported CRCI was determined for each cancer type and reported with the number of events and 95% CIs. We tested for significant differences in characteristics between subjects with CRCI and those without it using the Wilcoxon rank-sum test for continuous variables due to non-normality of data. For categorical variables, depending on the proportion of cells with counts of < 5, Pearson's chi-squared test (< 20%) or Fisher's exact test (≥ 20%) was used. Univariate and multivariate logistic (if outcome is rare, ≤ 15%) or log-binomial (if outcome is not rare, > 15%) regression models were generated to determine the associations between self-reported CRCI and substance use. Adjusted confounders, including child at diagnosis (0-14 years), years since diagnosis, sex, race/ethnicity, cancer type, and cancer treatment intensity, were selected as these were sociodemographic variables and childhood cancer-related characteristics that remained unchanged before the presentation of the outcomes. Education and employment outcomes were then included in the model to verify the robustness of the findings with nonbaseline characteristics that had unclear temporal relationships with substance use.
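The prevalence-based choice between the two regression families reduces to a change of link function; a sketch using statsmodels (variable names are hypothetical; y is assumed to be a 0/1 outcome array and X a numeric design matrix of confounders):

import statsmodels.api as sm

def fit_outcome_model(y, X):
    """Logistic regression for rare outcomes (prevalence <= 15%),
    log-binomial (binomial GLM with log link) otherwise, mirroring
    the rule described in the text."""
    X = sm.add_constant(X)
    if y.mean() <= 0.15:
        return sm.Logit(y, X).fit()
    family = sm.families.Binomial(link=sm.families.links.Log())
    return sm.GLM(y, X, family=family).fit()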
Substance use behavior(s) that was significantly associated with CRCI was brought forward for mediation analysis conducted with the paramed package in Stata to determine the natural direct effect (NDE), natural indirect effect (NIE), and proportion mediated effect corresponding to each mediator. 24,25 NDE is defined as the average change in substance use behavior when CRCI is present as compared with when CRCI is absent while fixing the mediator to a level that naturally occurs in the absence of CRCI. NIE is defined as the average change in substance use behavior in the presence of CRCI, but the level of the mediator is changed from the level it would take if CRCI were absent to the level it would take if CRCI were present. 24,25 Determining NDE and NIE allows decomposition of the total effect (TE) of CRCI on substance use behaviors into direct and indirect components for a specific mediator. 26,27 For binary outcomes, TE is equal to NDE × NIE. Hence, the proportion mediated effect is equal to (NDE × [NIE − 1])/(TE − 1) × 100%, whereby TE is replaced by NDE × NIE. 26,27 Mediators of interest included attendance at cancer-related follow-up care within the previous 2 years, number of late effects, and psychosocial variables (depressive symptoms, post-traumatic growth, and health care self-efficacy). These mediators were selected as they represent actionable opportunities to manage substance use behaviors in both cancer and noncancer populations. [28][29][30][31] Each mediator was examined independently of other mediators. The same set of confounders was included for mediation analysis. Referring to Figure 1, we reasoned that this set of confounders was necessary to control for pathways A, B, and C to accurately quantify NDE and NIE. All statistical tests were two-sided, and P < .05 was considered statistically significant. As this was a secondary data analysis with a hypothesis-generating objective, adjustment for multiple testing was not conducted. Stata/SE version 16.1 was used to perform all analyses.
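On the ratio scale used for binary outcomes, the decomposition above reduces to a one-line computation; a sketch with hypothetical NDE and NIE values:

def proportion_mediated(nde, nie):
    """Proportion mediated (%) for a binary outcome, where the total
    effect decomposes as TE = NDE * NIE on the odds-ratio scale."""
    te = nde * nie
    return nde * (nie - 1) / (te - 1) * 100

print(proportion_mediated(nde=1.5, nie=1.3))  # hypothetical values -> ~47%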
Factors Associated With CRCI
Of the 1,106 participants available for analysis, 144 (13%; 95% CI, 11 to 15%) reported problems with learning and memory. The highest prevalence of CRCI by cancer type (Data Supplement) was observed among brain/CNS cancer (25.4%) and leukemia (13.3%). A more detailed breakdown of cancer sites by self-reported CRCI can be found in the Data Supplement.
Participants self-reporting CRCI were younger at diagnosis, reported a lower education level, had public insurance, were more likely to be unemployed or disabled, and reported a larger number of cancer-related late effects than those without CRCI (P < .05; Table 1). They were also characterized by more depressive symptoms, less post-traumatic growth, and poorer health care self-efficacy, which may have influenced a higher attendance rate for cancer-related follow-up care in the prior 2 years (P < .05; Table 1). After adjusting for potential confounders, self-reported CRCI was associated with having one more cancer-related late effect (b = 1.34; 95% CI, 1.17 to 1.51; P < .001; Data Supplement). Post hoc logistic regression analysis revealed that CRCI was associated with statistically higher odds of individual cancer-related late effects (Data Supplement).
CRCI and Current Substance Use
In the Project Forward Cohort (N = 1,106), the proportions of substance use behaviors were 32.0% for binge drinking (n = 354), 18.6% for marijuana use (n = 206), 11.4% for cigarette use (n = 126), 7.1% for e-cigarette/vaporizer use (n = 79), and 4.9% for prescription drug misuse (n = 54). Among participants with self-reported CRCI, there was a significantly larger proportion of current e-cigarette/vaporizer users (12.5% v 6.3%, P = .023) and fewer binge drinking participants (24.3% v 33.2%, P = .021) than among those without cognitive problems (Table 2). No significant difference was observed for cigarette use, marijuana use, and prescription drug misuse (Table 2). After confounder adjustment, self-reported CRCI was associated with 2.26 times the odds of current e-cigarette/vaporizer use (95% CI, 1.24 to 4.11; P = .007; Table 2), and this remained significant after including education and employment outcomes in the regression model (odds ratio, 2.42; 95% CI, 1.31 to 4.47; P = .005). As missingness was > 10% for current e-cigarette/vaporizer use, comparisons were made between those with (n = 986) and without (n = 120) e-cigarette/vaporizer use information. We found that certain characteristics differed, notably a higher proportion of participants with skin cancer (14.2% v 2.4%, P < .001) and a lower prevalence of CRCI (3.3% v 14.2%, P < .001) among those who did not report vaping behavior (Data Supplement). Among the 120 participants with missing e-cigarette/vaporizer use information, 71 were not provided with the question during the initial study phase. Thus, sensitivity analysis excluding these 71 participants was conducted and showed that the CRCI and substance use associations were robust (data not reported). Considering these results, we proceeded with mediation analysis for e-cigarette/vaporizer use.
Mediation Analysis for CRCI and e-Cigarette/Vaporizer Use
Among the five mediators, on the basis of the NIE point estimates, 95% CIs, and P values, only depressive symptoms (CES-D) and number of late effects demonstrated a significant indirect pathway from CRCI to e-cigarette/vaporizer use (Data Supplement). The proportion mediated effect was the largest for late effects (82.6%), followed by depressive symptoms (48.5%), post-traumatic growth (22.5%), and health care self-efficacy (1.8%; Data Supplement). For recent cancer-related follow-up care, the proportion mediated effect could not be computed as this mediator has an opposing indirect effect on e-cigarette/vaporizer use when compared with CRCI, albeit without reaching statistical significance (Data Supplement).
DISCUSSION
Learning and memory problems were self-reported by one in eight YACCSs in the Project Forward Cohort, especially among survivors of brain/CNS cancer and leukemia, which is consistent with the current literature. 1,4 Addressing our research question, self-reported CRCI was associated with higher odds of vaping, and this relationship was significantly mediated by depressive symptoms and late effects. Those reporting CRCI had lower education levels, higher rates of unemployment and disability, poorer psychosocial outcomes, and more cancer-related late effects, all characteristics consistent with prior descriptions of CRCI-afflicted YACCSs. 1,2,5 Our findings suggest that YACCSs face substantial challenges in coping with their cognitive and related complications as well as poor mental health, potentially leading to self-medication with vaping to improve concentration.
Information regarding vaping can be misleading or equivocal. 32 A common example of misinformation is the claimed utility of vaping as a smoking cessation tool, which is opposed by existing smoking cessation guidelines. [32][33][34] The long-term health effects of vaping are also inconclusive due to the recency of the phenomenon 32; thus, there is a need for prospective trials and cohort studies. 35 At least 23 chemicals, including nicotine, have been found in the liquid contents and emissions of vaping, and some were found to have carcinogenic effects. 35,36 There have also been multiple reports of vaping-associated acute lung injuries requiring hospitalization, intensive care, and mechanical intubation. 37 The available evidence is unable to substantiate claims that e-cigarettes/vaporizers are a safer alternative to tobacco and other substances in the short and long term, which should be emphasized to YACCSs.
The current study is limited by its design as a secondary data analysis of a cross-sectional data set. The question for determining CRCI status in the study was brief compared with the gold standard of using a robust psychometric tool (eg, PROMIS Cognitive Function Short Form 8a or Functional Assessment of Cancer Therapy-Cognitive Function) together with neuropsychological cognitive batteries, 47,48 although our findings on CRCI prevalence and correlates agreed with the current literature 1-5 and provided confidence in this classification approach. Anxiety, a key mediator of substance use, was not assessed in the original cohort. 49 Our questions regarding substance use behaviors are also less detailed than those of other substance use questionnaires such as the National Survey on Drug Use and Health 50 and the National Epidemiologic Survey on Alcohol and Related Conditions. 51 For instance, prescription drug misuse could be further subdivided by its indications (pain relief, stimulant, or depressant), and female binge drinking behavior should have been defined as 4 or more drinks in a single occasion instead of 5 or more drinks. 50 This may have led to nondifferential misclassification of substance use behaviors with bias toward the null for cigarette use, binge drinking, prescription drug misuse, and marijuana use. Data regarding other behaviors of clinical significance, such as misuse of illicit drugs, were not explored as participants were not asked about them. We recommend that researchers continue to explore the relationship between substance use behaviors and CRCI, not limited to vaping only, in future studies. Due to the cross-sectional design, causal inference cannot be established. The high proportion of missing e-cigarette/vaporizer use data further limited the interpretability of the results. Additionally, because the racial and ethnic composition of Los Angeles County differs from the demographic breakdown of YACCSs in the United States, 52 our prevalence estimates of substance use behaviors may not be applicable in other states and countries. However, our observed associations between CRCI and vaping are likely applicable in other U.S. states as race/ethnicity was controlled for in the analysis, but we would encourage research to be conducted in other states and countries to account for state- and country-level differences in legality and societal standards. Nonetheless, the association between CRCI and vaping has not been previously investigated. This paper serves as preliminary evidence for future vaping-associated studies in YACCSs and highlights the importance of such studies to better educate young cancer survivors on the benefits and risks of vaping.
In conclusion, we have demonstrated preliminary evidence that there are higher odds of vaping among patients with self-reported CRCI in a cohort of YACCSs. Cancer-related follow-up visits present opportunities for oncologists and other clinicians to correct misconceptions and address physical and mental health issues that may facilitate the uptake of vaping behavior. Interventions that encourage engagement in long-term cancer-related follow-up care visits through a survivor-focused care model that targets unmet health and psychosocial needs of survivors will also help with reducing vaping and other substance use behaviors. Future research is needed to confirm our findings with longitudinal studies, investigate reasons for vaping among YACCSs, determine the long-term health effects of vaping, evaluate the relationship between CRCI and substance use behaviors (other than vaping) with detailed measures, and develop new interventions or validate existing ones to increase cancer-related follow-up rates.
|
2022-12-14T16:13:21.415Z
|
2022-12-12T00:00:00.000
|
{
"year": 2022,
"sha1": "3110f44c209fa0b3b4f3410ec987bf533d6ac887",
"oa_license": null,
"oa_url": null,
"oa_status": "CLOSED",
"pdf_src": "WoltersKluwer",
"pdf_hash": "b2aa8a3a1578c753483f6f81ba91ce01b9e96982",
"s2fieldsofstudy": [
"Medicine",
"Psychology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
204034456
|
pes2o/s2orc
|
v3-fos-license
|
Health and Demographic Surveillance Systems Within the Child Health and Mortality Prevention Surveillance Network
Abstract Health and demographic surveillance systems (HDSSs) provide a foundation for characterizing and defining priorities and strategies for improving population health. The Child Health and Mortality Prevention Surveillance (CHAMPS) project aims to inform policy to prevent child deaths through generating causes of death from surveillance data combined with innovative diagnostic and laboratory methods. Six of the 7 sites that constitute the CHAMPS network have active HDSSs: Mozambique, Mali, Ethiopia, Kenya, Bangladesh, and South Africa; the seventh, in Sierra Leone, is in the early planning stages. This article describes the network of CHAMPS HDSSs and their role in the CHAMPS project. To generate actionable health and demographic data to prevent child deaths, the network depends on reliable demographic surveillance, and the HDSSs play this crucial role.
Demographic surveillance systems (DSSs) are commonly used to monitor populations and their health over time within geographically defined demographic surveillance areas (DSAs). DSSs are longitudinal data collection platforms that track births, deaths, migrations, and socioeconomic and health circumstances over time in places where vital statistics are not reliably collected. When disease surveillance, either passive, through hospital-based surveillance, or active, through community surveillance, is attached to the DSS, the whole platform is called a health and demographic surveillance system (HDSS).
A key objective of the Child Health and Mortality Prevention Surveillance project is to define population-based rates for definitive causes of death through diagnostic and laboratory methods nested within population surveillance [1]. The CHAMPS HDSSs provide this platform, supplying demographic data for estimating population-based mortality rates and contextual information for understanding factors associated with the deaths of children <5 years of age. The CHAMPS investments since 2016 have led to the modification and expansion of HDSSs that already existed in Mozambique, Mali, Ethiopia, and Kenya and to the establishment of new sites in South Africa and Bangladesh and, in the future, in Sierra Leone.
HEALTH AND DEMOGRAPHIC SURVEILLANCE SYSTEMS IN CHAMPS
HDSSs monitor population dynamics and population health, including demographic events, social and economic conditions, health-seeking behaviors, pregnancies, and disease outbreaks. HDSSs also provide demographically characterized sampling frames from which representative samples can be selected; this is a platform for conducting surveys, demonstration projects, and effectiveness studies, implementing and evaluating interventions, and carrying out investigational trials for new products with potential public health value [2][3][4]. Longitudinal population monitoring is accomplished through enumeration of all residents of a geographically defined area and routine visits at least 2 times per year to all homesteads to collect data on all births and other pregnancy outcomes, deaths, migrations into, out of, and within the DSA, and additional information relevant to health [5][6][7][8]. HDSSs also obtain cause of death information through verbal autopsies; this involves interviews with family members of deceased individuals about the symptoms and circumstances surrounding the death, which are analyzed through computer-based algorithms and physician adjudication panels [5,9]. Many HDSSs also geocode homesteads, infrastructure, and community landmarks, such as clinics and hospitals.
CHAMPS uses age- and sex-specific population counts and person-years lived at each site to calculate rates, including infant mortality rates, under-5 mortality rates, and stillbirth rates. This information is also used to assess the proportion of child deaths that undergo minimally invasive tissue sampling (MITS) procedures and the proportion of deaths for which the cause was identified through verbal autopsies or clinical records [1,10]. The information collected by the HDSSs on households' demographic and socioeconomic characteristics, access to healthcare, and health facilities in the DSA provides contextual information for child mortality, helping the CHAMPS project to identify factors contributing to mortality and opportunities for interventions.
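To make the rate definitions concrete, here is a minimal Python sketch (the counts are hypothetical; the person-year denominator shown is one common HDSS convention, alongside birth-based denominators for stillbirth rates):

def under5_mortality_rate(deaths_under5, person_years_under5):
    """Deaths among children <5 years per 1,000 person-years observed."""
    return deaths_under5 / person_years_under5 * 1000

def stillbirth_rate(stillbirths, total_births):
    """Stillbirths per 1,000 total births (live births + stillbirths)."""
    return stillbirths / total_births * 1000

print(under5_mortality_rate(deaths_under5=120, person_years_under5=15000))  # 8.0
print(stillbirth_rate(stillbirths=45, total_births=3000))                   # 15.0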
Given that HDSSs have long been in operation in several CHAMPS sites, there is some variability in the instruments and methodology used. At the same time, they share similarities in core conceptual, methodological, and community engagement procedures. The CHAMPS network balances the need for standardized, comparable data collection with recognition that each site has different circumstances and needs to contribute to local priorities, circumstances, and research and policy interests. Sites maintain well-trained and supervised staff, use documented field methods, implement data monitoring and data management procedures, and engage in data analysis and dissemination. They emphasize ethical, respectful data collection, reporting methods, and community engagement. Table 1 shows the data elements used in CHAMPS surveillance. For CHAMPS, the priority HDSS data are demographic characteristics, specifically, age- and sex-specific population and the demographic events that change the population: births, deaths, and in- and out-migrations. A secondary priority is contextual information about children's circumstances that may be related to their health and survival. This includes characteristics of mothers, households and homesteads, and healthcare utilization. Thus, many HDSSs collect information on household members' education and marital status; mothers' pregnancy history, antenatal care, and delivery; children's immunization history; household assets, water and sanitation practices, and malaria control measures; and homestead building materials of dwellings and number of rooms. As HDSSs become well established and explore more diverse approaches to understanding child health and mortality, additional relevant indicators are considered, such as anthropometric measurements, anemia assessment, HIV testing, and 24-hour dietary recall.
CHAMPS maintains a Program Office, which monitors and evaluates procedures and data quality at each HDSS, works with HDSSs to assess their capacity to collect quality data, and identifies needs and priorities for capacity improvements. Each HDSS reports aggregate data after each data round and also reports final, cleaned data at the end of each year. At the first stage, these are aggregate data on demographic events (eg, births, deaths) and population counts for each sex and age group. On-site assessments and technical assistance are provided when a site or the Program Office identifies a need. When needed, CHAMPS partners with local and regional demographic surveillance experts to build capacity. Knowledge-sharing across CHAMPS HDSSs is supported through regular teleconferences, site exchange visits, and in-person meetings.
HDSS SITE PROFILES
Characteristics of the CHAMPS HDSS network are presented below and summarized in Table 2. Table 3 provides an overview of the data collected currently across the HDSSs.
Table 1. Data elements used in CHAMPS surveillance

Core indicators. The minimum data elements essential for calculating mortality rates: age- and sex-specific population size, age- and sex-specific numbers of deaths, sex-specific number of births, and number of in- and out-migrations.

Household and individual indicators. These data elements are important for contextualizing CHAMPS results, and provide opportunities for examining the relationships between environmental factors and child mortality. • Households: water and sanitation, cooking fuel, socioeconomic status, distance and access to healthcare, household composition. • Mothers/adults: age, education, reproductive history. • Children: antenatal, delivery, and postnatal care, breastfeeding, and immunizations.

Biomarkers and nutrition indicators. These are elements considered for future expansion of data collection, as they can provide new directions for understanding maternal and child health.

Manhiça HDSS, Mozambique

The longest-running HDSS participating in the CHAMPS network was established in 1996 by the Manhiça Health Research Center (CISM). It is one of the founding members of the International Network for the Demographic Evaluation of Populations and their Health (INDEPTH) Network of HDSSs. It is located in Manhiça district, about 85 km north of Maputo City, the capital of Mozambique. It initially covered an area of 100 km² with 32 000 people [2,11]. The HDSS, with a staff of 71 personnel, was expanded in 2002 to an area of 450 km², in 2005 to an area of 500 km², and in 2014 to cover the entire district of Manhiça (2380 km² and 186 000 people). Initially, three rounds of data collection were conducted each year until 2001, when they were reduced to two, and then to one per year during the period 2013 to 2016; rounds increased again to two per year from 2017 onward. Each round includes modules to update the household and individual data and, where applicable, modules for migration, pregnancies, pregnancy outcomes, and fertility histories. Other modules collect data on water and sanitation, household assets, malaria prevention information, and immunization of children <5 years of age. All deaths are documented, including date, place, and details about cause of death using verbal autopsy methods. A link between demographic and ongoing childhood morbidity surveillance is made through individual demographic identification cards printed from the HDSS databases and distributed to the households for each child <15 years old. Key informants selected in each community provide alerts about deaths, births, marriages, and new households to HDSS supervisors, who visit them every week. Some modifications in procedures were made to adapt the HDSS for CHAMPS. Verbal autopsy forms were upgraded to the CHAMPS/World Health Organization (WHO) 2016 version in 2017 [12]. To meet CHAMPS needs, the period for contacting families for verbal autopsy interviews has been shortened from 3 months to 2-4 weeks after death. In addition, the site implemented a call center, which the key informants can contact to report demographic events in the community immediately, 24 hours a day, 7 days a week. More information about the Manhiça HDSS and its findings has been published elsewhere [11,13].
Bamako Site, Mali
An HDSS was established in 2006 in 2 low-income urban communities, Djicoroni Para and Banconi, in Bamako, the capital of Mali. The HDSS, with a staff of 143 personnel, covers an area of 11.15 km² with 227 219 people. An average of two rounds of demographic data collection has been conducted annually. The relationship with CHAMPS began in 2017. A network of informants provides alerts about births, deaths, and pregnancies in the community. The HDSS has also served as a platform for multiple studies, including serologic surveys to assess the impact of the introduction of Haemophilus influenzae type b vaccine [14] and the prevalence of echocardiogram-diagnosed rheumatic heart disease, healthcare utilization surveys [15], and randomized selection for case-control studies for the Global Enteric Multicenter Study (GEMS) [16] and the Vaccine Impact on Diarrhea in Africa (VIDA) study. In 2017, the verbal autopsy forms were updated to collect information on stillbirths and to correspond with the CHAMPS/WHO 2016 version. The HDSS is exploring methods for early pregnancy detection through community-based last menstrual period tracking with the assistance of a network of midwives.
Kersa and Harar Sites, Ethiopia
There are 2 HDSSs associated with the CHAMPS site in Ethiopia: 1 in the rural region of Kersa, established in 2007, and the other in the urban region of the Harari Regional State, established in 2012. At the onset, the system in Kersa included 12 subdistricts (kebeles) with a total population of 52 000, and the system in Harar included 6 kebeles with 34 000 inhabitants. In 2014, other kebeles were added, doubling the catchment area of each site. Currently, the Kersa HDSS has 24 kebeles and covers 353 km² and a population of 131 431. The Harar HDSS has 12 kebeles, covering 25.4 km² and a population of 60 044. There are 80 and 40 HDSS staff members at Kersa and Harar, respectively. Two data rounds are conducted per year, during which demographic and health-related information is collected, including immunization of children, morbidity, family planning, and verbal autopsies [17,18].

[Table 3 notes: "√" indicates that the HDSS data instruments include 1 or more questions on the topic. There are differences across HDSS sites in question wording and response options. Some HDSSs will begin collecting additional variables in 2020. Abbreviation: HDSS, health and demographic surveillance system.]
Soweto and Thembelihle Site, South Africa
An HDSS was initiated for CHAMPS in Soweto and Thembelihle in 2017-2018. Soweto is an urban township with around 1.3 million individuals covering an area of 200 km 2 in about 100 subplaces; Thembelihle Local Municipality and its adjoining areas are informal urban settlements with a population of >20 000. The HDSS catchment areas cover 8 mostly noncontiguous subplaces of low socioeconomic status, with an area of 17.7 km 2 in Soweto; Thembelihle and its surrounding informal settlements have an area of 19.0 km 2 . The population under surveillance in 2018 was 123 225 individuals in 35 302 households. The HDSS conducts 2 rounds of household visits per year with a staff of 40 members, collecting demographic and socioeconomic information, geographic indicators, pregnancy history, pregnancy outcomes, child health, and migration.
Bombali Shebora and Bombali Siari, Sierra Leone
The CHAMPS surveillance area in Sierra Leone does not have an HDSS yet, but is in the early stages of planning for one. The HDSS is expected to comprise 2 chiefdoms, with a population of 161 000 in an area of 281.7 km 2 : Bombali Shebora, which includes Makeni City, and Bombali Siari. CHAMPS is working with local partners to assess local capacity for establishing and maintaining an HDSS and engaging government and other possible national and international stakeholders.
DISCUSSION
The CHAMPS network of HDSSs is a collaborative undertaking of independent, yet connected, research groups. They share the common goal of collecting high-quality data that can be used to characterize and prevent child mortality; each also has methods, priorities, collaborations, and challenges specific to the community and country within which it operates. As the network of HDSSs develops, several opportunities and challenges emerge.
Challenges
HDSS capacity at some sites was established prior to engagement with CHAMPS, while at other sites an HDSS is being newly established through CHAMPS financial and technical support. As a result, instruments, field methods, and data processes differ across sites and are not standardized. CHAMPS Program Office experts work with scientific staff and leadership at each site to systematically assess needs pertaining to data collection protocols, fieldwork procedures, data entry and management, and technology and software. They ensure that the data meet CHAMPS requirements and that site-to-site differences are understood and considered when calculating indices and interpreting results. CHAMPS requires extensive community engagement and trust. The constant presence of the HDSS in communities, through visits to all households every few months for data rounds, can be a platform for CHAMPS to build the rapport and trust needed to work on the sensitive subject of child mortality.
Some sites are conducting HDSS activities in particularly challenging settings. For example, several sites cover urban areas, where individuals and households frequently move and change circumstances, making them difficult to track over time. A few HDSSs have noncontiguous DSAs, which make it difficult to ensure that the catchment areas are well demarcated and to track the population without double-counting movers; noncontiguity also lengthens travel to households for data collection and supervision.
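As a purely illustrative sketch of the kind of bookkeeping that avoids double-counting movers, the Python fragment below keys residency episodes to a permanent individual identifier, so that a person who moves between (possibly noncontiguous) surveillance subareas remains a single individual with several episodes rather than appearing as several people. The identifier scheme, field names, and data here are hypothetical, not the system of CHAMPS or of any site.

from collections import defaultdict
from datetime import date

# permanent individual ID -> list of residency episodes
episodes = defaultdict(list)

def record_episode(permanent_id, subarea, start, end=None):
    # end=None marks an episode that is still open
    episodes[permanent_id].append({"subarea": subarea, "start": start, "end": end})

record_episode("ID-000123", "subarea A", date(2017, 1, 1), date(2018, 5, 31))
record_episode("ID-000123", "subarea B", date(2018, 6, 1))  # internal migration

def resident_ids(day):
    # a population count on a given day counts each ID at most once
    return {pid for pid, eps in episodes.items()
            if any(e["start"] <= day and (e["end"] is None or day <= e["end"])
                   for e in eps)}

print(len(resident_ids(date(2018, 6, 15))))  # prints 1, not 2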
For CHAMPS sites that are newly developing, there are challenges in creating standard operating procedures, establishing cause of death-related activities and HDSS activities simultaneously, training new staff, and building collaborative teams. In some countries, there is limited local availability of demographic expertise; in some communities, hiring and specialized training are needed to develop a team of qualified staff for data collection and data management.
Tracking large populations over time is intensive and expensive, as it requires sufficient qualified locally based staff with long-term commitment to the study area. Developing and maintaining a quality HDSS is also scientifically challenging, requiring expertise in demography and epidemiology, survey and fieldwork methodology, data processing, analysis, and scientific writing. Sustained financial investments are required to support the staff and infrastructure necessary to visit tens or hundreds of thousands of individuals multiple times per year and to collect, manage, and analyze the resulting data. However, funds for ongoing surveillance are often difficult to attract and maintain. Yet, in low-resource settings with limited or no vital registration, HDSSs, like the ones operating within the CHAMPS network, are the only source of much-needed information on population health.
Lessons Learned
The CHAMPS network provides opportunities for HDSSs to share and exchange expertise, survey instruments and indicators, and methodologies for dealing with both routine and unusual research circumstances. As with any scientific collaboration, each site must meet its multiple priorities, including research, programmatic, publication, and sustainability goals. One theme in the challenges and successes has been the necessity of community acceptance for successful implementation of surveillance. In-country expertise in demography and field methodology is an asset, as is the local establishment of standard operating procedures. Technology such as tablets can be useful for data collection, but it is neither required for quality data nor a guarantee of it.
Future Directions
The CHAMPS project requires HDSS data rounds at least twice yearly, striking a balance between resource constraints and the need to collect complete data on births, deaths, and population characteristics. Some HDSSs conduct 3 or more data rounds per year to achieve better tracking of pregnancies and migrations, which are needed for accurate estimates of population and mortality. A key value of HDSSs is the longitudinal tracking of populations, which makes it possible to document temporal trends. HDSSs contribute to filling knowledge gaps about child mortality by providing population-based enumeration of children and of deaths in a well-characterized population; these data are additionally valuable when linked with cause of death data [20]. Systematic surveillance of vital events in HDSSs helps MITS to be performed within the necessary short timeframe of 24 hours after death; it also provides data on mortality by age group and on the household and community contexts of child mortality.
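As a back-of-the-envelope illustration of how such longitudinal records support mortality estimates, the sketch below computes person-years of under-5 observation and a crude death rate per 1,000 person-years. The records and field layout are hypothetical, and a real analysis would additionally handle in- and out-migration, censoring, and calendar edge cases such as leap-day birthdays.

from datetime import date

# each record: (date of birth, observation start, observation end, died?)
records = [
    (date(2015, 3, 1), date(2015, 3, 1), date(2018, 6, 30), False),
    (date(2016, 7, 15), date(2016, 7, 15), date(2017, 1, 10), True),
    (date(2014, 1, 5), date(2015, 1, 1), date(2018, 12, 31), False),
]

def person_years_under5(dob, entry, exit_):
    # years contributed while aged <5 and under observation
    # (naive fifth-birthday arithmetic; fails for 29 February births)
    fifth_birthday = date(dob.year + 5, dob.month, dob.day)
    start, end = max(entry, dob), min(exit_, fifth_birthday)
    return max((end - start).days, 0) / 365.25

py = sum(person_years_under5(dob, entry, exit_)
         for dob, entry, exit_, _ in records)
deaths = sum(1 for *_, died in records if died)
print(f"{deaths} death(s) over {py:.1f} person-years = "
      f"{deaths / py * 1000:.0f} per 1,000 person-years")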
As the network develops, a priority is to specify scientifically the extent to which the mortality data apply to populations outside the DSA. We are exploring statistical methods by which mortality rates, contextualized using HDSS data, can yield results applicable to broader geographic areas.
High-quality data on child health and mortality can provide actionable evidence for developing strategies and interventions to prevent child deaths. For this purpose, the CHAMPS network depends on HDSSs to conduct reliable demographic surveillance and provide data gathered and maintained using appropriate methods.
|
2019-09-16T23:09:50.996Z
|
2019-10-09T00:00:00.000
|
{
"year": 2019,
"sha1": "65871da310b1b23de14bf5437fcccc773643610c",
"oa_license": "CCBY",
"oa_url": "https://academic.oup.com/cid/article-pdf/69/Supplement_4/S274/33391794/ciz609.pdf",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "194efd1e51515cd1af644b32ef78c6864a1cbbe8",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
248620706
|
pes2o/s2orc
|
v3-fos-license
|
Accidental ingestion of magnetic foreign body in a pediatric patient: A potentially fatal attraction
Foreign body ingestion is one of the most common pediatric emergencies. As part of their cognitive development, infants and toddlers are extremely curious and constantly explore their surroundings through their senses, notably taste. The ubiquity of toys containing magnetic elements has consequently meant an increase in cases of children ingesting such magnets. While most ingested foreign bodies, including a single magnet, will spontaneously traverse the gastrointestinal tract without problems, some may give rise to grave and potentially life-threatening complications; the latter is often seen in the presence of 2 or more magnets or paramagnetic material. The diagnosis of magnetic foreign body ingestion remains a challenge, given its often ambiguous history and presentation; nonetheless, the frequency of such ingestions and the gravity and preventability of their complications should render physicians vigilant and keep their threshold of suspicion low.
Introduction
Foreign body ingestion is among the most common presenting complaints to pediatric emergency departments. Most cases occur between the ages of 6 months and 3 years; however, there have been instances reported in children older than 3 years of age [5]. Older children may swallow such objects, particularly if suffering from a behavioral or mental problem [1]. Foreign body ingestion is often benign and self-limiting; however, on the rare occasions when the ingested toys or objects contain magnetic elements, serious complications may ensue.

Case report

A young girl presented with abdominal pain and repeated vomiting of 1-day duration. History was negative for recent illness or travel. The parents denied sick contacts or ingestion of contaminated food. There was no fever or change in bowel habits. Physical examination revealed an afebrile and vitally stable child with a soft and nondistended abdomen. A provisional diagnosis of acute gastroenteritis was made, and the patient was sent to the radiology department for an abdominal ultrasound. Ultrasound examination revealed a whirlpool configuration of the small bowel, seen predominantly at the level of the ileal loops (Fig. 1), in addition to small amounts of free fluid and surrounding lymphadenopathy. The patient did not demonstrate any tenderness or guarding throughout the examination. The ultrasound findings prompted a plain radiographic examination of the abdomen, which had not been performed previously. It revealed multiple radiopaque, adherent metallic beads surrounded by distended bowel loops (Fig. 2), resembling the configuration seen on ultrasound. The appearance of small bowel volvulus around the foreign body raised the suspicion that it was of magnetic origin.
The patient was referred to pediatric surgery and urgently taken for laparoscopic removal of the foreign body. Intraoperative findings included distended ileal loops and adhesions resulting from the attractive magnetic forces. Four of the eleven beads were successfully retrieved laparoscopically; consequently, conversion to an open laparotomy ensued. Following retrieval of the remaining magnetic beads, the involved bowel segments were examined. Erosive changes to the walls of the duodenum, jejunum, and mesentery were noted, in addition to fistula formation between the duodenum and jejunum through the mesentery, with involvement of the distal ileum (Figs. 3-7). The postoperative course was unremarkable.
Discussion
The increasing availability of toys and objects containing magnets has been paralleled by an increase in reported cases of accidental ingestion of such magnetic objects [1,3,7]. As with most cases of foreign body ingestion, a single object containing a magnetic element may pass through the gastrointestinal tract spontaneously without causing much damage. However, owing to the attractive forces between magnets, the presence of 2 or more such objects, or of paramagnetic objects, increases the risk of gastrointestinal complications, which may range from intestinal obstruction and ischemia to necrosis, fistula formation, and perforation [2,6,8].
Human cells contain sodium, potassium, and chloride ions, rendering them conductors of the microscopic currents that give rise to magnetic forces. This adhesive force acts through the gastrointestinal tract: if undetected and untreated, magnetic foreign bodies gradually erode the bowel wall, causing pressure necrosis of its mucosa and subsequent perforation. Therefore, early diagnosis and treatment are of utmost importance. In cases where a parent has witnessed the child swallowing such a foreign object, or the child is old enough to report it, the history is clear. While the clinical presentation may be nonspecific, close attention ought to be paid to children exhibiting obstructive symptoms, such as nausea, vomiting, and abdominal pain, which in turn should prompt radiographic examination. Magnetic beads or toys often have a characteristic bonded appearance.

Fig. 6 - Ileo-ileal internal fistula formed by the magnetic foreign body.
Due to the often negative or vague history, the diagnosis of magnetic foreign body ingestion may be delayed. However, with the increasing availability and popularity of toys with magnetic elements, one must maintain a high index of suspicion, especially considering the serious complications that may ensue. In this particular patient, the configuration and looping of the bowel walls around characteristic beady structures strongly indicated the presence of a magnet. Owing to its length, the small bowel is the part of the gastrointestinal system most commonly affected by complications; next most common are fistulas between the small bowel and colon, and between the stomach and jejunum [4].
Intervention may be endoscopic, laparoscopic, or by open surgical exploration. The selection of a treatment modality is guided by the history, physical examination, and radiological investigations. In this patient, only four of the eleven magnetic beads were successfully retrieved laparoscopically; the remaining beads were suctioned deep into the bowels, which ultimately necessitated conversion to an open laparotomy.
Conclusion
The rising availability of magnetic toys and objects has led to an increase in cases of ingestion of such foreign bodies in the pediatric population. Unlike most cases of foreign body ingestion, the presence of magnetic and paramagnetic elements poses a risk of serious injury to the gastrointestinal tract. While the history and clinical presentation are often ambiguous, it is paramount that physicians carefully investigate the patient; a simple plain abdominal radiograph may suffice to reveal any radiopaque foreign bodies of metallic density. Moreover, parents and caregivers ought to be made aware of the risks of magnetic toys, particularly those small enough to be swallowed. Early diagnosis and management may significantly decrease and even prevent the complications and morbidity associated with magnetic foreign body ingestion.
Patient consent
Given that the patient is a minor, consent for the documentation and publication of this case was obtained from the legal guardian accompanying her at the time (her father).
The information provided about this patient maintains her anonymity and honors her privacy.
|
2022-05-10T15:27:44.568Z
|
2022-05-06T00:00:00.000
|
{
"year": 2022,
"sha1": "8d9ea2fbec3c361601b2af3a3dae02197958ce55",
"oa_license": "CCBYNCND",
"oa_url": "https://doi.org/10.1016/j.radcr.2022.04.007",
"oa_status": "HYBRID",
"pdf_src": "PubMedCentral",
"pdf_hash": "f4ae257efc9dc73ff876f7a198b099073584fc47",
"s2fieldsofstudy": [
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|