Kidney Stone Composition in Various Countries Around the World
To compare urinary stone composition patterns in different populations around the world in relation to the structure of their population, dietary habits, and climate, 1204 adult patients with urolithiasis and stone analysis were included. International websites were searched to obtain data. We observed 710 (59%) patients with calcium oxalate, 31 (1%) calcium phosphate, 161 (13%) mixed calcium oxalate/calcium phosphate, 15 (1%) carbapatite, 110 (9%) uric acid, 7 (<1%) urate, 100 (9%) mixed uric acid/calcium oxalate, 56 (5%) struvite and 14 (1%) cystine stones. Calcium stones were the most common in all countries (up to 91%), with the highest rates in Canada and China. Oxalate stones were more common than phosphate or mixed phosphate/oxalate stones except in Egypt and India. The rate of uric acid stones was higher in Egypt, India, Pakistan, Iraq, Poland, and Bulgaria. Struvite stones occurred in less than 5% of cases except in India (23%) and Pakistan (16%). Cystine stones occurred in 1%. The frequency of different types of urinary stones varies from country to country. Calcium stones are prevalent in all countries. The frequency of uric acid stones seems to depend mainly on climatic factors, being higher in countries with desert or tropical climates; dietary patterns can also increase it. Struvite stones are decreasing in most countries.
Introduction
Over the past 50 years, the prevalence of urinary calculi has continued to increase, both in adults and in the pediatric population [46]. The increase was initially observed in Western countries, but in more recent years the prevalence of kidney stones has also tended to increase in many other countries around the world. In the US, the prevalence increased from 3.2% in 1976-80 to 8.8% in 2007-10 [1]. Similar trends have also been observed in Europe and the Far East [2][3][4][5]. More recently, increased rates of stone prevalence were described in regions [6] where the prevalence was previously lower.
Increased prevalence of urinary calculi has been associated with changes in diet, lifestyle, and climate, and is related to the increased prevalence of other non-communicable diseases such as diabetes, hypertension, obesity, and osteoporosis. The effect of these environmental changes overlaps with individual genetic predisposition to stone formation and adds to genetically determined forms of stone disease (cystinuria, primary hyperoxaluria, and other renal tubular defects). The increase in the prevalence of urinary calculi has a significant impact on health systems due to the costs of diagnosing and treating stones and the loss of working hours due to the disease [7]. The annual cost of kidney stone disease in the United States was estimated at 2.81 billion dollars in 2000 and at 3.79 billion dollars in 2012, after adjustment for inflation to 2014, and it was estimated that, with the current trend of population growth and the increasing prevalence of obesity and diabetes, it will increase by 1.24 billion dollars by 2030 [8]. The economic burden of the disease is greater in Western countries but is increasing in developing countries where the prevalence of kidney stones was lower in the past.
The chemical composition of urinary stones depends on the eating habits and lifestyle of each population, but also on the climatic conditions of the region. The comparison of the patterns of composition of urinary stones in different populations can give information on the pathogenesis of different types of stones.
The patterns of composition of urinary stones have been described in numerous studies from different regions of the world, although most data have been collected in the Western world [9][10][11][12][13][14].
In Western countries, calcium oxalate and/or calcium phosphate stones are prevalent, while uric acid-containing stones are observed in a smaller number of cases. In the past, infection stones accounted for up to 15% of all stones, but their frequency decreased over time as a result of more effective and less invasive methods for the treatment of stones and of the prevention and more effective treatment of urinary tract infections. Stones of cystine, dihydroxyadenine and xanthine represent a smaller share of all stones that tends to be constant across series, as they are caused by genetic defects present in all populations. Similar spectra of stone composition are observed in developing countries, although the rates of uric acid and infection stones may vary in different populations depending on diet, climate, and efficiency of the health system.
Although several studies have evaluated series of urinary stones analyzed with different methods around the world, no study is yet available that compares the patterns of stone composition in patients from different countries in relation to the gender and age structure of their populations, dietary habits, and climatic conditions. U-merge, an association gathering urologists from all over the world, is the ideal platform for this task. For this reason, the scientific office of U-merge launched a study to evaluate the results of urinary stone analyses in the countries of its members in order to compare them in relation to the structure of the populations, dietary habits and climatic conditions.
Materials And Methods
U-merge membership is open to academic urologists, nephrologists, other specialists and researchers who are keen to network and participate in academic activities worldwide. Applicants are evaluated on their record of academic activities in urology and related fields. Currently, U-merge includes members from 66 countries spread across all 5 continents. The Scientific Office of U-merge launched a call among all its members to retrospectively review the results of the stone analyses of patients attending their stone clinics for the treatment of urinary stones.
Participating members were asked to retrospectively review the charts of adult patients (>18 years) with renal or ureteral stones assessed in previous years until the number of patients needed for the study was reached. All patients for whom a stone analysis was available were considered eligible cases, and informed consent was obtained. Duplicate cases were excluded. No experimental protocols were applied in this study. Sex, age, country, and stone composition of each patient were recorded in an Excel database. Any method of stone analysis was accepted, but the methodology had to be known and registered. A minimum number of 30 patients per center was required. Stones analyzed by wet chemical methods were classified as calcium oxalate (CaOx) (unspecified), calcium phosphate (CaP) (unspecified), mixed calcium oxalate/calcium phosphate (CaOx/CaP), struvite, uric acid (UA), mixed uric acid/calcium oxalate (UA/CaOx) and cystine.
The pattern of stone composition was expressed as the frequency of each stone composition, that is, the ratio of the number of stones with a given composition to the total number of stones (e.g. number of calcium oxalate stones/total number of stones). The assessment of prevalence or incidence would require studying a sample of the general population. At present, no epidemiological study has ever assessed the prevalence (or incidence) of a given type of stone composition in the general population, because such a study would require having the result of the stone analysis for the stone-forming subjects identified in the general population (which in most cases is not available). However, a rough estimate of the prevalence of a given type of stone can be obtained by multiplying the frequency of the stone composition in a series by the known prevalence of urinary stone disease (in general) in the same country or region. Data on the prevalence of urinary calculi in the general population were available for 6 out of the 10 countries that participated in the study: Argentina 5.14% [15], Canada 8.2% [16], China 7.0% [6], Southern India 2.6% [17], Italy 7.5% [18], and Pakistan 12% [19].
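As a worked illustration of this rough estimate, the sketch below multiplies a stone-type frequency by a country's overall stone prevalence; the numerical values are taken from this section, while the function and variable names are our own.

```python
def estimated_type_prevalence(type_frequency: float, overall_prevalence: float) -> float:
    """Rough estimate: prevalence(type) ~ frequency(type in series) * prevalence(all stones).

    Both arguments are fractions, e.g. 0.59 and 0.082.
    """
    return type_frequency * overall_prevalence

# Example using values reported in the text: the pooled calcium oxalate
# frequency (59%) combined with the Canadian overall prevalence (8.2% [16]).
print(f"{estimated_type_prevalence(0.59, 0.082):.2%}")  # -> about 4.84%
```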
International institutional statistical websites were searched to obtain data on population structure, dietary habits and climate in the different locations where the series included in the study were collected.
The spectrum of stone composition by gender and age showed that calcium oxalate monohydrate (COM) stones were more frequent than calcium oxalate dihydrate (COD) stones. The frequency of calcium oxalate stones was equal in women and men (58% vs 59%), whereas the frequency of uric acid-containing stones was lower in women than in men (13% vs 21%), and the frequencies of calcium phosphate and mixed calcium phosphate/calcium oxalate stones (21% vs 14%) and of struvite stones were higher in women. The frequency of COM stones tended to be higher in men than in women (78% vs 71%) and to increase with age (18-39 years = 78%, 40-59 years = 80%, >60 years = 85%). The frequency of uric acid stones was higher in males and tended to increase with age. The distribution of the different types of stones in renal stone formers (RSFs) in different countries is described in Table 3. Calcium-containing stones were the most common in all countries. Among calcium-containing stones, calcium oxalate stones were more frequent in all countries except Egypt and India, where the frequency of calcium phosphate or mixed calcium phosphate/calcium oxalate stones was 74% and 53%, respectively. Among calcium oxalate stones, the rate of COM stones was 100% in Egypt, 83% in Italy, 81% in Bulgaria, 75% in China, and 69% in Iraq. The rate of uric acid-containing stones ranged from 4 to 34% in most countries, with the highest rates observed in Egypt, India, Poland and Bulgaria. Struvite stones accounted for less than 5% in all countries except India (23%) and Pakistan (16%). Cystine stones accounted for less than 2% (Table 5). Mean values of daily dietary energy intake and the shares of total energy from carbohydrates, proteins and fats, as well as the percentages of the total population >65 years, are shown in Table 6.
Discussion
In the present study, calcium-containing stones were the most frequent, followed by uric acid-containing stones, while struvite and cystine stones were less frequent. In accordance with previous reports [32], uric acid-containing stones were more frequent in males and at older ages, whereas phosphate stones were more frequent in women.
The average age of RSFs varies between countries, but these differences reflect those observable in the general populations of their countries, whose average age was about 20 years lower. On the other hand, the average ages observed in our series overlapped with those previously reported in other series of patients with kidney stones from the same countries [6,[21][22][23][24][25][26][27][28][29]. The male to female (M/F) ratio differs between countries, being balanced between men and women or slightly in favor of men in the countries of North America, Europe, South America and in China, but heavily in favor of men in Egypt, Pakistan, India and Iraq.
This finding confirms the tendency toward an increase in stone formation among women in Western countries [32] and, more recently, in China [6], while in Egypt, Pakistan, India and Iraq the ratio of males to females is still similar to what was observed in Western countries forty years ago [33]. This trend may be explained by the so-called nutrition transition, that is, the change in dietary habits across the world, with a convergence towards increased consumption of unhealthy foods, which is the cause of the increase in non-communicable diseases in almost all regions of the world in both sexes [34]. Consumption of unhealthy foods is still limited in some regions of North Africa and South Asia that maintain dietary patterns with a lower risk of urinary stone formation. Moreover, in some countries the characteristics of family structure and cultural rules still impose a nutritional disadvantage on women [35].
The spectrum of composition of urinary stones is quite variable across countries. Differences could be attributable to the different age and gender characteristics of the populations studied, reflecting the distribution by age and gender in the general population of each country. On the other hand, the modality of stone analysis and reporting in the different centers may be a confounding factor [36]. For this reason, the most robust data are those comparing the rates of calcium-containing stones with those of uric acid-containing stones, whereas it is less meaningful to compare the results of different countries in relation to the specific crystallographic composition, which should be compared between patients whose stones have been analyzed and reported by the same laboratory.
Calcium-containing stones were the most common in most countries, with a rate ranging from 52 to 91%. The highest rates of calcium-containing stones were observed in North America, South America, China, and some European countries. In most countries, calcium oxalate stones (in particular COM stones) were the most frequent calcium-containing stones, whereas calcium phosphate and mixed calcium oxalate/calcium phosphate stones were more frequent than pure calcium oxalate stones in some countries such as Egypt and India. This trend is in agreement with previous observations in North America, where an increase in oxalate stones and a decrease in phosphate stones has been reported during the last two decades [10].
The highest frequencies of uric acid-containing stones were observed in Iraq, Pakistan, India, Egypt, Poland and Bulgaria. In general, uric acid-containing stones are expected to be more frequent in older male patients, but surprisingly, in our study the highest rates of uric acid-containing stones were observed in two countries with the lowest mean age, namely Egypt and Iraq. The impact of environmental factors may be decisive, considering that high temperatures and high humidity cause a decrease in urinary volumes and urinary pH values, resulting in an increase in urinary uric acid saturation and in the incidence of uric acid stones [37,38]. In fact, the highest rates of uric acid-containing stones were observed in countries with high mean temperatures [21] and tropical or hot desert climates such as Egypt, India, Pakistan and Iraq. Our data confirm previous evidence in the literature showing a high rate of uric acid-containing stones in Pakistan, Egypt, and Iraq [23,24,39]. In the present study, the prevalence of uric acid-containing stones was also high in Southern India, in accordance with previous reports. In fact, the frequency of uric acid-containing stones was reported as low (<1%) in North Western India [25,26], but higher in Southern India [17]. This difference can be explained by different regional eating habits: in the Northern and Western regions, a more traditional vegetarian diet is consumed, with exclusive consumption of fruit, vegetables and legumes, whereas in the Southern regions the consumption of sweets, snacks and pork meat is common [40]. On the other hand, in our study the lowest rate of uric acid-containing stones was observed in Canada, the country with the lowest mean temperature. Intermediate rates were observed in countries with a temperate climate, such as China and Italy.
In some countries, the high frequency of uric acid-containing stones may be explained by the effect of dietary factors that contribute to the risk of uric acid stone formation [41].
Although in contrast with previous findings showing lower rates of uric acid stones in a series of stones analyzed by infrared spectroscopy [27], the high rate of uric acid stones in Poland may be explained by the high obesity rate of the population (45%) and unfavorable dietary patterns [42]. In fact, adherence to the traditional Polish dietary pattern, characterized by high intake of refined grains, potatoes, sugar and sweets, is associated with a higher risk of abdominal obesity and hypertriglyceridemia [43]. Similarly, in Bulgaria the frequency of uric acid-containing stones is associated with an unhealthy nutritional pattern characterized by high consumption of fatty meats and meat products, high-fat milk and high alcohol intake [44].
The rate of struvite stones is generally lower than described in the past, due to improved health conditions and early diagnosis and treatment of urinary tract infections by urease-producing organisms, although in some countries, such as Pakistan and India, they still account for up to a quarter of cases.
In some areas of these countries, the diagnosis and treatment of urinary infections is still inadequate and can result in chronic infections and scarring of the urinary tract promoting the formation and growth of staghorn infection stones [45].
Cystine stone rates are similar in all countries, and similar to those reported in the literature.
A strength of this study is that it compared series from different countries using the same evaluation parameters, but it has some limitations. One limitation was the use of wet chemical analysis of the stones in 3 out of 12 centers that participated in the study. In fact, chemical analysis of stones has limitations in identifying all stone components and distinguishing their crystalline forms, which is why most guidelines recommend analysis by infrared spectroscopy or X-ray diffractometry [36]. Unfortunately, these methods are not available in all centers, so, to extend our survey to as many countries as possible, we decided to also include centers where the stones were analyzed by wet chemical analysis. For this reason, data on the stones analyzed in Argentina, Canada and Poland may be less reliable and should be evaluated with caution.
Another possible limitation of this study was the use of the frequency of the different types of stones as the parameter for comparing the pattern of stone composition in different countries. This parameter should be corrected based on the prevalence of urinary calculi (all types included) in the population of each country. Unfortunately, the prevalence of urinary calculi in the general population is known only for a limited number of countries.
When we calculated the specific prevalence of different types of stones, we were able to confirm the high prevalence of uric acid stones in Pakistan and, to a lesser extent, in India, while we could not calculate prevalence rates of uric acid stones for Iraq, Egypt, Poland and Bulgaria, where epidemiological studies assessing the prevalence of urinary stones in the general population have never been carried out. The significance of the high frequencies of uric acid-containing stones in these countries remains uncertain.
In fact, the frequency of a type of stone is not a measure of its prevalence but the result of the relative prevalences of the different types of urinary stones. In other words, a high frequency of uric acid stones may be due to an increase in the prevalence of uric acid stones or, alternatively, to a lower prevalence of other types of stones (e.g. calcium oxalate).
In conclusion, the frequency of different types of urinary stones varies from country to country. Calcium-containing stones are the most frequent in all countries, with frequencies of up to 90%. The frequency of uric acid-containing stones seems to depend mainly on climatic factors, being higher in warmer countries with desert or tropical climates, although dietary patterns, in association with high obesity rates, can also increase it. Struvite stones are decreasing in most countries except India and Pakistan.
Figure 1. Average age in renal stone formers (RSFs) and general population.
Quantum Matrix Pairs
The notion of quantum matrix pairs is defined. These are pairs of matrices with non-commuting entries, which have the same pattern of internal relations, q-commute with each other under matrix multiplication, and are such that products of powers of the matrices obey the same pattern of internal relations as the original pair. Such matrices appear in an approach by the authors to quantizing gravity in 2 space and 1 time dimensions with negative cosmological constant on the torus. Explicit examples and transformations which generate new pairs from a given pair are presented.
Introduction
The notion of quantum matrix pairs arose in the context of our recent work [1] on quantum gravity in 2 space and 1 time dimensions with negative cosmological constant on the torus. As shown by Witten [2], this model is equivalent to a Chern-Simons theory with non-compact structure group SL(2, R) × SL(2, R). After imposing the constraints, the classical geometry may be encoded, up to equivalence, by two commuting SL(2, R) matrices U_1 and U_2 (together with an identical second pair for the other SL(2, R) factor of the structure group), which represent the holonomies of a flat connection around two generating cycles of the (abelian) fundamental group of the torus. The usual approach to quantizing this theory, e.g. [3], is to work with gauge-invariant variables, namely the traces of the holonomies, but in [1] we chose instead to use the gauge-covariant holonomy matrices themselves as variables. There we also argued that, after quantization, these matrices obey a q-commutation relation

U_1 U_2 = q U_2 U_1    (1)

where q → 1 in the classical limit. Using the gauge covariance to put the matrices in standard form, the simplest solution of (1) is to take both matrices to be diagonal, in which case the diagonal elements of U_1 and U_2 obey standard q-commutation relations. However, when studying more general solutions with both matrices of upper-triangular form, we found matrices which had non-trivial internal relations, in addition to the "mutual" relations involving elements of both matrices. This feature of having internal relations is characteristic of quantum groups, and indeed the algebraic structure which emerges has many similarities with quantum groups, as we will explain below. The main purpose of this article is to describe this new algebraic structure, and present some examples. As mathematical objects, quantum matrix pairs may be thought of as a simultaneous generalization of two familiar notions of "quantum mathematics", namely the quantum plane and quantum groups. To make this statement more precise we first recall the quantum plane (over the field k), described by two non-commuting coordinates x and y, satisfying the relation

xy = qyx    (2)

for some invertible q ∈ k, q ≠ 1. The algebra of polynomial functions on the quantum plane is then given by k{x, y}/(xy − qyx), where k{x, y} is the free algebra with coefficients in k and (xy − qyx) is the ideal generated by xy − qyx. This algebra is a deformation of the algebra of polynomial functions in two commuting variables k[x, y].
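Relation (2) has a well-known concrete realization when q is a root of unity, via the finite-dimensional "clock" and "shift" matrices. The sketch below is our own numerical illustration, not taken from the paper: it checks the q-commutation relation and the q^{nm} behaviour of powers that recurs throughout this article.

```python
import numpy as np

N = 5
omega = np.exp(2j * np.pi / N)          # primitive N-th root of unity, playing the role of q
C = np.diag(omega ** np.arange(N))      # "clock" matrix
S = np.roll(np.eye(N), 1, axis=0)       # "shift" matrix: S e_j = e_{(j+1) mod N}

# The q-commutation relation, cf. (1)/(2): C S = omega * S C
assert np.allclose(C @ S, omega * (S @ C))

# Powers q-commute with q replaced by q^{nm}: C^n S^m = omega^{nm} S^m C^n
n, m = 2, 3
Cn = np.linalg.matrix_power(C, n)
Sm = np.linalg.matrix_power(S, m)
assert np.allclose(Cn @ Sm, omega ** (n * m) * (Sm @ Cn))
print("clock/shift realization of the q-commutation relations verified")
```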
Similarly, a 2 × 2 quantum matrix is a matrix U = (a b; c d) whose non-commuting entries satisfy the internal relations

ab = qba, ac = qca, bc = cb, bd = qdb, cd = qdc, ad − da = (q − q^{-1})bc    (R)

for some invertible q ∈ k, q ≠ 1. The algebra of polynomial functions of these entries, denoted M_q(2), is given by the quotient of the free algebra k{a, b, c, d} by the ideal generated by the relations (R), and is a deformation of the algebra of polynomial functions in the commuting variables a, b, c, d.
The pattern of relations (R) seems somewhat arbitrary at first sight, although there are deep reasons for this particular form. One might also ask why the symbols a, b, c, d are displayed as entries of a matrix, instead of, say, as components of a 4-vector belonging to some non-commutative 4-dimensional space. A very nice, and not widely known, explanation of the 2 × 2 matrix form was given by Vokos, Zumino and Wess [7], who showed that the internal relations are preserved under matrix multiplication in the following sense: 1. If U is as above, the entries of U^n satisfy the relations (R) with q substituted by q^n, for all positive integers n.
This result can be extended to all integers by making a minor modification. The quantum determinant D_q = ad − qbc is central in M_q(2). Formally adjoining a new generator D_q^{-1}, which commutes with a, b, c, d and satisfies the relation (ad − qbc) D_q^{-1} = 1, gives rise to an algebra, denoted GL_q(2), in which U has a matrix inverse: 2. If U is as above with entries belonging to GL_q(2), the entries of U^n satisfy the relations (R) with q substituted by q^n, for all integers n.
Finally, suppose that U′ = (a′ b′; c′ d′) is a second matrix with entries satisfying the same relations as those of U, denoted (R′), and commuting with the entries of U. Make U and U′ invertible by adjoining the generators D_q^{-1} and (D′_q)^{-1}, as above. Thus the entries of U and U′ belong to the quotient of the free algebra k{a, b, c, d, a′, b′, c′, d′} by the ideal generated by (R), (R′), commutativity relations between primed and unprimed generators, commutativity relations between D_q^{-1}, (D′_q)^{-1} and all other generators, and the relations (ad − qbc) D_q^{-1} = 1 and (a′d′ − qb′c′)(D′_q)^{-1} = 1: 3. If U and U′ are as above, the entries of U^n U′^n satisfy the relations (R) with q replaced by q^n, for all integers n.
These multiplicative properties of 2 × 2 quantum matrices are very striking, and provide the inspiration for the definition of quantum matrix pairs which follows.
Suppose, then, that we have two invertible matrices whose entries take values in a non-commutative algebra A over k, and satisfy relations of two different types: internal relations (I), involving the entries of one matrix only and having the same structure for both matrices, and mutual relations (M), involving the entries of both matrices at the same time. Both types of commutation relation may involve q, as well as possibly other scalar parameters.
If a) the two matrices q-commute under matrix multiplication,

U_1 U_2 = q U_2 U_1,

for some invertible q ∈ k, q ≠ 1, where q acts by scalar multiplication on the right-hand side, and b) the entries of U_1^n U_2^m obey internal relations with the same structure (I), for all integers n and m, up to possible substitution of scalar parameters, we call (U_1, U_2) a quantum matrix pair.
If condition a) holds, but condition b) only holds for U_1^n U_2^n, for all integers n, we call (U_1, U_2) a restricted quantum matrix pair.
Requirement a) is a natural generalization of the quantum plane relation (2), with x and y being replaced by the 2 × 2 matrices U_1 and U_2. Requirement b), in either its restricted or unrestricted form, is analogous to the multiplicative properties of quantum matrices described above.
We will proceed to give examples of quantum matrix pairs, both restricted and unrestricted. These examples, based on our previous work [1], are relatively simple in that both matrices are upper triangular, but still display novel features not found in diagonal examples. In particular, the internal relations are not standard Weyl-type relations like (2), but involve three matrix entries simultaneously. We expect that quantum matrix pairs for groups other than SL(2, R) can be found and will have an important role to play in the context of Chern-Simons theory.
In any case, quantum matrix pairs constitute an interesting algebraic structure, worthy of study in its own right. The fact that they closely resemble quantum groups could lead to novel insights and perspectives on the latter subject. Here we should point out that there is a similar construction which arises in the context of Majid's braided matrices [8,Section 10.3]. He gives examples of pairs of 2 × 2 matrices with internal and mutual relations between their entries, such that the product of the first matrix with the second obeys the same internal relations (but not the other way round). By comparison the internal relations in our examples seem very robust, since they carry over to the product of the original matrices in either order, as well as to other monomials in the two matrices or their inverses. Furthermore, two different products of the original matrices may themselves form a new quantum matrix pair.
Perhaps the most intriguing feature about our examples is that their geometric origin reemerges from the algebra, when we find an action of the modular group on spaces of quantum matrix pairs. A fuller understanding of the role of quantum matrix pairs in physical models will undoubtedly involve notions of noncommutative non-local geometry.
Our material is organized as follows. In section 2 the three types of example of quantum matrix pairs are given, and their individual features are discussed. In section 3 several ways of obtaining new quantum matrix pairs from a given one are described for the examples of section 2, which then leads to an action of the modular group on spaces of quantum matrix pairs. Section 4 contains some comments.
Examples of quantum matrix pairs
As pointed out in the introduction, the holonomy matrices are not gauge invariant. Under gauge transformations they transform by simultaneous conjugation with an element of SL(2, R). Classically this means that the matrices can be simultaneously diagonalized, when they are diagonalizable, but there is also a sector where both matrices are (upper) triangular in form. We will concentrate on the corresponding sector in the quantum theory, since it has much more interesting behaviour than the diagonal case.
Thus from now on we will take both matrices U_1 and U_2 to be of upper-triangular form and will denote their entries as follows:

U_i = (α_i β_i; 0 γ_i).

Also, for brevity, we will henceforth adopt the following convention: when the index i appears in a statement, i = 1, 2 is understood. Since we require both matrices U_i to be invertible, we take α_i and γ_i to be invertible, which is formally achieved by adjoining new generators α_i^{-1}, γ_i^{-1}, and corresponding relations, to the free algebra generated by α_i, β_i and γ_i. The inverse of U_i is then given by

U_i^{-1} = (α_i^{-1} −α_i^{-1} β_i γ_i^{-1}; 0 γ_i^{-1}).

In accordance with the definition of a quantum matrix pair, we must specify internal (I) and mutual (M) relations between the generators. These we will subdivide further into diagonal (D) relations, when they involve only diagonal entries, and non-diagonal (ND) relations, when they also involve non-diagonal entries. We will list various options for the relations below, and different combinations of these options will furnish the three types of example we want to present.
For the internal diagonal relations there are two choices, namely

α_i γ_i = γ_i α_i = 1    (ID1) (5)
α_i γ_i = γ_i α_i    (ID2)

The first choice implies γ_i = α_i^{-1}, whereas the second merely requires the diagonal entries to commute. For the non-diagonal internal relations there are also two choices:

α_i β_i = β_i γ_i    (IND1) (6)
α_i β_i = r β_i γ_i    (IND2) (7)

where in the second choice r is an invertible element of k, r ≠ 1. Of course, r and q need not be independent. For instance, they may be equal, or r may be a power of q.
The mutual diagonal relations appear in the following two forms:

α_1 α_2 = q α_2 α_1    (MD1)
α_1 α_2 = q α_2 α_1,  α_1 γ_2 = q^{-1} γ_2 α_1,  α_2 γ_1 = q γ_1 α_2,  γ_1 γ_2 = q γ_2 γ_1    (MD2)

where q ∈ k is the parameter of the fundamental q-commutation relation (1). In practice these two choices are the same, since we will always combine (MD1) with (ID1), which implies (MD2), according to Proposition 2 below. Finally, we restrict ourselves to a single choice of mutual non-diagonal relations:

α_1 β_2 = q β_2 γ_1,  β_1 γ_2 = q α_2 β_1    (MND) (8)

Before giving the examples, we need to derive a few results, starting with a simple but important proposition, which underlies all the subsequent calculations.
Proposition 1. Let A be an algebra over the field k, and let α, β and γ be elements of A, with α and γ invertible. Let q and r be invertible elements of k. Then
a) αγ = qγα ⇒ α^n γ^m = q^{mn} γ^m α^n, for all n, m ∈ Z;
b) αβ = rβγ ⇒ α^n β = r^n β γ^n, for all n ∈ Z.
To analyse requirement b) of the definition we need expressions for powers of the matrices U_i. These depend on the choice of internal non-diagonal relations.
for (IND1) and (IND2) respectively, using (4) and Proposition 1. Since negative powers are obtained from the inverse, U^{-n} = (U^{-1})^n, one derives the corresponding formulae for all integers. As a corollary we obtain formulae for the entries of U_1^n U_2^m for n, m ∈ Z. We remark that, in view of the q-commutation relation (1) and Proposition 1, any word in U_1, U_2 and their inverses is proportional to U_1^n U_2^m for some n, m ∈ Z. Setting

U_1^n U_2^m = (α(n, m) β(n, m); 0 γ(n, m))    (10)

we obtain the following formulae. With these preliminary calculations out of the way, we are in a position to present our three types of example, and prove that they are quantum matrix pairs. We do this in the form of a theorem (Theorem 1).

For type I, U_1^n U_2^m, for all n, m ∈ Z, has internal relations:

α(n, m) γ(n, m) = γ(n, m) α(n, m) = q^{nm}    (11)
α(n, m) β(n, m) = β(n, m) γ(n, m).    (12)

For type II, U_1^n U_2^m, for all n, m ∈ Z, has internal relations:

α(n, m) γ(n, m) = γ(n, m) α(n, m)    (13)
α(n, m) β(n, m) = β(n, m) γ(n, m).    (14)

For type III, U_1^n U_2^n, for all n ∈ Z, has internal relations:

α(n, n) γ(n, n) = γ(n, n) α(n, n)    (15)
α(n, n) β(n, n) = r^n β(n, n) γ(n, n).    (16)

In each case these internal relations have the same structure as those of the corresponding U_i, with 1 in (5) replaced with q^{nm} in (11) for type I, and r in (7) replaced with r^n in (16) for type III. Whilst these three types of example are very similar, each of them exhibits some special feature distinguishing it from the others. In the type I case, both U_1 and U_2 have determinant 1, but mixed products of the U_i have non-unit determinant, as a result of the non-commutativity of the algebra. In the type II case, despite the non-commutativity, the property of having commuting diagonal entries propagates to all products of the U_i. Finally, the type III case has internal relations involving a parameter, a feature which is reminiscent of quantum groups. We remark that this parameter cannot simply be removed by a rescaling α_i → r^{1/2} α_i, γ_i → r^{-1/2} γ_i, since the (MND) relations are not preserved under these replacements. The parameter propagates to powers of U_i in a manner again reminiscent of quantum groups (cf. the second Vokos et al result in the introduction). By (9), all three types satisfy requirement a) of the definition of a quantum matrix pair. All that remains is to show that the relations (11)-(16) hold. Equations (11), (13) and (15) follow from the formulae for the diagonal entries together with Proposition 1.
Generating new quantum matrix pairs
The examples of the previous section showed how the internal relations are preserved under multiplication of the matrices belonging to a quantum matrix pair. However, there is another aspect to these examples. By taking two different products of the U_i, in some circumstances it is possible to generate a new quantum matrix pair of the same or a similar type, as we will see in this section. Furthermore, the transformations amongst quantum matrix pairs may preserve the type, so that we can regard them as acting on the space of all quantum matrix pairs of a certain type. We show how this can give rise to representations of a discrete group, namely SL(2, Z) (the modular group), on the spaces of quantum matrix pairs of type I and type II. We start with a trivial first result in this direction.
Proposition 4. Let (U_1, U_2) be a quantum matrix pair of any of the three types described in the previous section. Then the pair (Ũ_1, Ũ_2) with entries α̃_i = α_i, γ̃_i = γ_i and β̃_i = c_i β_i, where c_i ∈ k are arbitrary constants, is a new quantum matrix pair of the same type as the original pair.
Proof: The non-diagonal relations (6), (7) and (8) are linear in β_1 or β_2. ✷

The main result of this section is stated in the following theorem:

Theorem 2. a) Let (U_1, U_2) be a quantum matrix pair of type I. Then (q^{-nm/2} U_1^n U_2^m, q^{-st/2} U_1^s U_2^t) is a quantum matrix pair of type I, with q replaced with q^{nt-ms}, for all n, m, s, t ∈ Z. b) Let (U_1, U_2) be a quantum matrix pair of type II. Then (U_1^n U_2^m, U_1^s U_2^t) is a quantum matrix pair of type II, with q replaced with q^{nt-ms}, for all n, m, s, t ∈ Z. c) Let (U_1, U_2) be a quantum matrix pair of type III. Then (U_1^n, U_2^n) is a quantum matrix pair of type III, with q replaced with q^{n^2} and r replaced with r^n, for all n ∈ Z.
Proof: (The statements in the proof hold for all n, m, s, t ∈ Z.) First we prove the internal relations. For a), the (ID1) relations follow from (11), since this equation implies that q^{-nm/2} α(n, m) and q^{-nm/2} γ(n, m) are each other's inverses. The (IND1) relations follow from (12), since all matrix entries are multiplied by the same factor. For b), the internal relations are equations (13), (14) of the previous section. For c), using the notation and result of Proposition 3 b), α_i γ_i = γ_i α_i (ID2) for U_i implies α_i(n) γ_i(n) = γ_i(n) α_i(n), and α_i β_i = r β_i γ_i (IND2) for U_i, together with Proposition 1, implies α_i(n) β_i(n) = r^n β_i(n) γ_i(n).
To simplify the proof of the mutual relations, we use the relation

(U_1^n U_2^m)(U_1^s U_2^t) = q^{nt-ms} (U_1^s U_2^t)(U_1^n U_2^m),

which follows from the q-commutation relation (1) satisfied by the U_i and Proposition 1. This implies the equations

α(n, m) α(s, t) = q^{nt-ms} α(s, t) α(n, m)    (17)
γ(n, m) γ(s, t) = q^{nt-ms} γ(s, t) γ(n, m)    (18)
α(n, m) β(s, t) + β(n, m) γ(s, t) = q^{nt-ms} (α(s, t) β(n, m) + β(s, t) γ(n, m))    (19)

where we are using the notation of (10). Now, starting with b), the first and fourth (MD2) relations, with q replaced with q^{nt-ms}, are equations (17) and (18), and the second and third (MD2) relations

α(n, m) γ(s, t) = q^{-(nt-ms)} γ(s, t) α(n, m),  α(s, t) γ(n, m) = q^{nt-ms} γ(n, m) α(s, t)

follow from the second and third (MD2) relations for U_i and Proposition 1. In view of (19), it is enough to show the first of the (MND) relations. For a), the (MD1) relation follows from (17), and the (MND) relation is proved as for case b), after multiplying the entries by the factors q^{-nm/2} or q^{-st/2}. For c), the (MD1) relations, with parameter q^{n^2}, are shown as for b), after setting t = n, m = s = 0. Again, in view of (19), it is enough to show the first of the (MND) relations:

α_1(n) β_2(n) = [n]_r α_1^n β_2 γ_2^{n-1} = [n]_r q^n β_2 γ_1^n γ_2^{n-1} = [n]_r q^{n^2} β_2 γ_2^{n-1} γ_1^n = q^{n^2} β_2(n) γ_1(n). ✷

The modular group SL(2, Z) has a presentation in terms of two generators S and T, with relations S^4 = (ST)^3 = 1. We have a representation of SL(2, Z) if we can find automorphisms, S and T, of a space X which satisfy these relations. There are natural SL(2, Z) representations associated with spaces of quantum matrix pairs, as the following theorem shows.
Theorem 3. Let QMP1 and QMP2 be the spaces of all quantum matrix pairs of type I and II respectively, with entries in an algebra A, and with a fixed parameter q. Then the definition S(U_1, U_2) = (U_2, U_1^{-1}), together with a T given by a product transformation as in Theorem 2, gives rise to representations of SL(2, Z) on QMP1 and QMP2. Proof: From a) and b) of the previous theorem, the transformations S and T map type I or II quantum matrix pairs into quantum matrix pairs of the same type, and with the same q parameter. S^2(U_1, U_2) = S(U_2, U_1^{-1}) = (U_1^{-1}, U_2^{-1}), and thus S^4(U_1, U_2) = (U_1, U_2). For the type I case, the second relation (ST)^3 = 1 is proved by a direct calculation. For the type II case, set q = 1 in this calculation. ✷
Final comments
Quantum matrix pairs combine the preservation of internal relations under multiplication, a quantum-group-like feature, with the fundamental q-commutation relation which holds between the two matrices. We have presented three types of example of this construction, all involving upper-triangular matrices, but with slightly differing features.
It is interesting to make some comparisons between quantum matrix pairs and quantum groups. The internal relations in our examples of quantum matrix pairs differ in structure from the Weyl-type q-commutation relations normally found in quantum groups, as they involve three matrix elements at the same time. Related to this is the fact that the entries of each matrix do not commute in the limit q → 1, which also distinguishes them from Majid's braided matrices [8]. Nonetheless, when the internal relations depend on a parameter, as in the third type of example, quantum integers with that parameter appear in the powers of the matrices, which is a feature strongly reminiscent of quantum groups.
An obvious question for further study is to see whether other examples can be found, e.g. 2 × 2 matrices but not of triangular form, or examples involving other groups.
According to theorem 2, not only do products of powers of the matrices have the same structure of internal relations, but taking two different products gives rise to new quantum matrix pairs of the same type. This shows that, in a sense, it is the whole quantum matrix pair structure, rather than just the internal relations, which is preserved under multiplication in these examples.
It is striking that the action of the modular group on pairs of commuting matrices extends to quantum matrix pairs. This reveals that the construction, which could be taken on a purely algebraic level, actually has a geometric interpretation as well. In future work we hope to arrive at a deeper understanding of quantum matrix pairs in terms of non-local non-commutative geometry.
Visualization of atherosclerosis as detected by coronary artery calcium and carotid intima-media thickness reveals significant atherosclerosis in a cross-sectional study of psoriasis patients in a tertiary care center
Psoriasis is a chronic inflammatory disease of the skin and joints that may also have systemic inflammatory effects, including the development of cardiovascular disease (CVD). Multiple epidemiologic studies have demonstrated increased rates of CVD in psoriasis patients, although a causal link has not been established. A growing body of evidence suggests that sub-clinical systemic inflammation may develop in psoriasis patients, even from a young age. We aimed to evaluate the prevalence of atherosclerosis and identify specific clinical risk factors associated with early vascular inflammation. We conducted a cross-sectional study of a tertiary care cohort of psoriasis patients using coronary artery calcium (CAC) score and carotid intima-media thickness (CIMT) to detect atherosclerosis, along with high sensitivity C-reactive protein (hsCRP) to measure inflammation. Psoriasis patients and controls were recruited from our tertiary care dermatology clinic. Presence of atherosclerosis was defined using validated numeric values within CAC and CIMT imaging. Descriptive data comparing groups were analyzed using Welch's t test and Pearson Chi square tests. Logistic regression was used to analyze clinical factors associated with atherosclerosis, and linear regression to evaluate the relationship between psoriasis and hsCRP. 296 patients were enrolled, with 283 (207 psoriatic and 76 controls) having all data for the hsCRP and atherosclerosis analysis. Atherosclerosis was found in 67.6% of psoriasis subjects versus 52.6% of controls. Psoriasis patients were found to have a 2.67-fold higher odds of having atherosclerosis compared to controls [95% CI (1.2, 5.92); p = 0.016], after adjusting for age, gender, race, BMI, smoking, HDL and hsCRP. In addition, a non-significant trend was found between hsCRP and psoriasis severity, as measured by PASI, PGA, or BSA, again after adjusting for confounders. A tertiary care cohort of psoriasis patients has a high prevalence of early atherosclerosis and increased hsCRP, and psoriasis remains a risk factor for the presence of atherosclerosis even after adjustment for key confounding clinical factors. Psoriasis may contribute to an accelerated systemic inflammatory cascade resulting in increased risk of CVD and CV events.
Background
Psoriasis is a chronic inflammatory disease of the skin and joints that may also have systemic inflammatory effects, including the development of cardiovascular disease (CVD) [1]. While the cutaneous manifestations of psoriasis wax and wane, the systemic inflammatory effects may incite continuous, progressive development of CVD and atherosclerosis [2][3][4][5][6]. Multiple epidemiological studies have demonstrated elevated rates of cardiovascular events in psoriasis patients when compared to controls [7][8][9][10]. From McDonald and Calabresi's study of psoriasis and occlusive vascular disease in 1978 to Gelfand's 2006 landmark study, many studies have also linked psoriasis with increased mortality specifically related to CVD [11,12]. This finding has traditionally been explained by the higher prevalence of CVD risk factors in psoriasis patients, such as the components of the metabolic syndrome, tobacco, and alcohol abuse [13][14][15]. These confounding factors have led to debate as to whether or not psoriasis incurs independent risk for the development or progression of cardiovascular disease. There have been several studies that demonstrate no independent association between psoriasis and the development of atherosclerosis [15][16][17]. Proponents of the association between psoriasis and CVD support the concept that CVD risk factors favor inflammation and atherogenesis, and when combined with the proinflammatory state of psoriasis, a synergistic effect may result [4,6,18]. Indeed, murine models of psoriasiform skin have demonstrated that chronic skin inflammation can lead to vascular inflammation and increased rates of thrombosis, suggesting that chronic inflammation exacerbates cardiovascular complications [19]. Now, in a separate study, these observations have been extended to a second skin-contained transgenic mouse model, demonstrating that chronic, but not acute, skin inflammation promotes arterial thrombosis [20].
Of utmost concern in psoriasis patients is the possibility of developing significant CVD at a relatively young age that potentially results from this synergistic proinflammatory milieu and the duration of exposure to this milieu. Large population-based studies demonstrate an increased incidence of CVD including stroke, especially among younger, severe psoriasis patients [21][22][23][24]. Several studies have even reported an increased risk of CV mortality with psoriasis [10,25,26]. Thus, these concerns have led to studies investigating the link between psoriasis and sub-clinical CVD using special imaging techniques. These techniques include coronary artery calcium scoring (CAC), carotid intimal media thickness (CIMT), and brachial artery flow-mediated dilation (FMD) amongst others. A systematic review by Shaharyar et al. [27] evaluated multiple studies using these techniques to assess sub-clinical atherosclerosis and concluded that in general, psoriasis patients had higher CIMT and CAC burden as well as endothelial dysfunction compared to controls. However, these studies have individual limitations that invite further investigation into a potential causal link between psoriasis and CVD. One other common link in the inflammatory cascade of psoriasis and CVD may be C-reactive protein (CRP). Extensively studied for its implication in CVD, CRP is regulated in the acute phase by IL-1, IL-6, and TNF-α [28,29]. Synthesized primarily by the liver, CRP is also produced by coronary artery smooth muscle cells in response to inflammatory stimuli, and provides integration of overall cytokine activation [18,28]. CRP has been shown to be associated with all-cause mortality in chronic immune-mediated inflammatory disease, including psoriasis [30] and to correlate with risk of cardiovascular events in patients who have instituted aggressive statin therapy [31]. Some studies suggest that CRP levels can predict prognosis in those with a cardiovascular event, and the high sensitivity test of CRP (hsCRP) may be predictive of cardiovascular events in asymptomatic, healthy populations [32][33][34][35][36], although other studies do not support this contention [37]. While its possible role in the genesis of CV events has not been elucidated, CRP may be an important indicator of cardiovascular risk.
Our aim was to evaluate the prevalence of atherosclerosis in a tertiary care psoriasis cohort and the association between psoriasis, atherosclerosis and inflammatory markers while controlling for major potential confounders. This objective was completed via a cross-sectional study using multi-modal vascular evaluation of the carotid arteries for the presence of carotid plaque, carotid intima-media thickness (CIMT), computed tomography (CT) of the coronary arteries for calcification (CAC), measurement of hsCRP, and measurement of CVD risk factors and psoriasis.
Methods
Patients seen throughout the year in our tertiary care clinics (psoriasis diagnosed by NJK or KDC), aged ≥18 years, were invited to participate.
Psoriasis patients
Inclusion criteria
Psoriasis patients with and without a history of psoriatic arthritis were eligible. Psoriatic arthritis was assessed by asking patients if they had experienced joint pain or swelling, morning stiffness, had been diagnosed with psoriatic arthritis, or had been treated by a rheumatologist for psoriatic arthritis.
Exclusion criteria
Patients with a known or suspected history of systemic inflammatory diseases, with the exception of psoriatic arthritis, were excluded.
Healthy control volunteers
Inclusion criteria
Controls were either: (1) Individuals recruited from the same dermatology clinics who were being seen for common dermatologic complaints including seborrheic keratoses, warts, nevi, and actinic keratosis; or (2) Non-genetically-related subjects residing with psoriasis patients.
Exclusion criteria
Control patients with a history of atopic dermatitis, contact dermatitis, acne, connective tissue disease or autoimmune-blistering diseases were excluded, as these conditions are known systemic inflammatory diseases (n = 62).
The control cohort is comparable to the psoriasis cohort as both populations are from the same geographical area and seek treatment for a dermatologic complaint, or reside with psoriasis patients. Approximately 20% of screened psoriasis patients and healthy control volunteers declined to participate, citing time constraints or unwillingness to be involved in a research protocol.
Eligible patients participated in a cross-sectional study visit having fasted for 8 h, refrained from exercise for 6 h, and refrained from vitamins, oral antioxidants, tobacco, caffeine and antihypertensive medication on the study date. We measured height, weight, blood pressure (after the subject had been seated for 5 min, on both arms, then averaged), hip and waist circumference, psoriasis area and severity index (PASI), Physician's Global Assessment (PGA), and body surface area (BSA). Carotid plaque, CIMT and CAC were measured. Fasting venous blood was obtained for hsCRP, total cholesterol, high density lipoprotein (HDL), triglycerides, low density lipoprotein (LDL) as calculated by the Friedewald equation, and blood glucose, which were quantified by the UHCMC clinical laboratory. In addition, a medical history was completed, including evaluation of known risk factors for CAD such as smoking history, history of hypertension, gender, and race. These were included as binary variables and analyzed as potential confounders, except for race, which was evaluated as a categorical variable. A medical history of atherosclerotic disease was obtained, including a history of coronary or peripheral vascular angioplasty or stenting, as well as history of angina or evidence of peripheral ischemia, and transient ischemic attack or stroke. These patients were included in the analysis of prevalence. All patients who agreed to participate were included in the study.
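For reference, the Friedewald calculation used for LDL follows the standard formula (lipids in mg/dL); the sketch below is ours, not the laboratory's implementation.

```python
def friedewald_ldl(total_chol: float, hdl: float, triglycerides: float) -> float:
    """LDL estimated by the Friedewald equation (all values in mg/dL).

    The formula is conventionally restricted to triglycerides < 400 mg/dL.
    """
    if triglycerides >= 400:
        raise ValueError("Friedewald equation is unreliable for TG >= 400 mg/dL")
    return total_chol - hdl - triglycerides / 5.0

# Example: TC 200, HDL 50, TG 150 -> LDL = 200 - 50 - 30 = 120 mg/dL
print(friedewald_ldl(200, 50, 150))
```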
Clinical measures
Ultrasound was performed using a Toshiba Nemio XG ultrasound machine (model #SSA-580A). We used a 9 MHz linear probe with a depth of 4 cm, a dynamic range set at 70 dB, and 32 frames/s. For CIMT, the right and left distal common carotid arteries (CCA) were scanned at the anterior, lateral, and posterior angles. Each angle was imaged at 3 consecutive R-waves, for a total of 9 images on each side. We used the Carotid Analyzer for Research software (Medical Imaging Applications, LLC, Coralville, IA, USA). Mean CIMT was measured from a 1 cm segment of the far wall of the CCA proximal to the bulb. The values of the 3 images at each angle were averaged, and the 3 angle means were then averaged, yielding the mean of means for each side. Carotid plaque, defined as encroachment into the vessel lumen ≥1.5 mm or thickening greater than 50% relative to the adjacent segments, was assessed by the same technique, scanning the length of both internal and external carotid arteries.
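A minimal sketch of the mean-of-means computation described above; the array layout and variable names are our own assumptions.

```python
import numpy as np

# cimt[angle, image]: one side's far-wall CIMT measurements (mm),
# 3 angles x 3 consecutive R-wave images.
cimt = np.array([
    [0.62, 0.64, 0.63],   # anterior angle
    [0.66, 0.65, 0.67],   # lateral angle
    [0.61, 0.60, 0.62],   # posterior angle
])

per_angle_means = cimt.mean(axis=1)     # average the 3 images at each angle
mean_of_means = per_angle_means.mean()  # average across the 3 angles for this side
print(f"mean CIMT for this side: {mean_of_means:.3f} mm")
```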
Ultrasound procedures were performed by one sonographer (RLF) to eliminate interpersonal technique differences and all measurements were completed by RLF and audited by an experienced vascular medicine specialist.
CAC scoring was assessed with a non-contrast-enhanced technique on one of two CT scanners in the Department of Radiology at UHCMC, using the following parameters: Somatom Sensation 16 (Siemens Inc., Malvern PA, USA): 140 kV, 30 mAs, B35f filter, 12 × 1.5 mm collimation, 3 mm reconstruction, 0.36 s scan time, 3.15 mGy average dose; Somatom Sensation 64: 120 kV, 40 mAs, B35f filter, 30 × 0.6 mm collimation, 3 mm reconstruction, 0.36 s scan time, 2 mGy average dose. Electrocardiogram leads were placed and heart rates were monitored. As long as the heart rate was <100 beats/min, the coronary arteries could be studied. Images were interpreted by one of three board-certified radiologists in the cardiothoracic imaging section of the Department of Radiology, UHCMC. CAC scoring was calculated according to the method of Agatston. The CT attenuation threshold for detection of coronary artery calcification was 130 Hounsfield units.
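As a rough illustration of Agatston scoring as described, the sketch below uses the standard density weights and the 130 HU threshold; it is a simplified version of our own that assumes lesion segmentation has already been done, not the radiology workstation implementation.

```python
def agatston_density_weight(peak_hu: float) -> int:
    """Standard Agatston weighting factor from a lesion's peak attenuation (HU)."""
    if peak_hu < 130:
        return 0          # below the calcification threshold used in this study
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions) -> float:
    """lesions: iterable of (area_mm2, peak_hu) per calcified focus per slice.

    Conventionally only foci with area >= 1 mm^2 are counted.
    """
    return sum(area * agatston_density_weight(peak) for area, peak in lesions)

# Example with two hypothetical calcified foci:
print(agatston_score([(4.5, 250), (2.0, 410)]))  # 4.5*2 + 2.0*4 = 17.0
```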
Evidence of atherosclerosis
We defined atherosclerosis as a CAC score ≥1, or right or left CIMT >75th percentile, or carotid plaque. Patients with coronary stent(s) were defined as having a positive CAC score. Our definition uses validated techniques to diagnose atherosclerosis [38]. CIMT >75th percentile was determined by plotting the mean of the mean values of CIMT against data from the Atherosclerosis Risk in Communities study, which adjusts for age, gender, and race [39,40].
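The composite definition above translates directly into a boolean rule; the sketch below uses illustrative field names of our own.

```python
def has_atherosclerosis(cac_score: float,
                        cimt_pct_right: float,
                        cimt_pct_left: float,
                        carotid_plaque: bool,
                        coronary_stent: bool = False) -> bool:
    """Composite study definition: CAC score >= 1, either CIMT above the
    75th percentile (age/gender/race-adjusted, per ARIC data), carotid
    plaque, or a coronary stent (counted as a positive CAC score)."""
    if coronary_stent:
        return True
    return (cac_score >= 1
            or cimt_pct_right > 75
            or cimt_pct_left > 75
            or carotid_plaque)
```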
Statistical analysis
Demographics and disease characteristics of psoriasis patients and controls were summarized using means and standard deviations for continuous variables and frequencies and proportions for categorical variables. We used a two-sample Welch's t test to compare mean levels and a Pearson's Chi-squared test to compare proportions between the two groups. The prevalence of atherosclerosis in the psoriasis cohort relative to controls was assessed using logistic regression models to estimate odds ratios.
To model the association between psoriasis and the prevalence of atherosclerosis, logistic regression models were used to estimate the odds ratio of atherosclerosis comparing psoriasis to controls. Multivariable logistic regression models adjusted for confounders including age, gender, race, BMI (weight in lb/(height in in.)² × 703), current smoking status, history of hypertension (systolic ≥140 mm Hg, or diastolic ≥90 mm Hg, or current use of anti-hypertensive medication), serum HDL, and hsCRP. Interactions were not evaluated. Sub-group analysis was performed to evaluate both patients with psoriasis and psoriatic arthritis. Adequacy of the models was assessed using the Hosmer and Lemeshow goodness-of-fit test.
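The adjusted odds ratio can be reproduced with any standard logistic regression routine; the sketch below uses Python/statsmodels rather than the SAS the authors used, with hypothetical column names.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# One row per subject; column names are illustrative, not from the study data.
df = pd.read_csv("cohort.csv")

model = smf.logit(
    "atherosclerosis ~ psoriasis + age + C(gender) + C(race) + bmi"
    " + current_smoker + hdl + hscrp",
    data=df,
).fit()

# Exponentiated coefficient on `psoriasis` = adjusted odds ratio vs controls
odds_ratio = np.exp(model.params["psoriasis"])
ci_low, ci_high = np.exp(model.conf_int().loc["psoriasis"])
print(f"adjusted OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}, {ci_high:.2f})")
```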
Separate linear regression models were used to estimate the association between psoriasis and hsCRP. hsCRP was right-skewed, so a log transformation was used to model the association between psoriasis and hsCRP. Multivariable linear regression models were used to estimate the geometric mean ratio of hsCRP comparing psoriasis to controls, adjusting for age, gender, race, BMI, current smoking status, history of hypertension, serum HDL, and the presence of atherosclerosis. Residuals were examined for model adequacy.
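Because hsCRP is modeled on the log scale, the exponentiated psoriasis coefficient is a geometric mean ratio; a sketch continuing the hypothetical setup above.

```python
import numpy as np
import statsmodels.formula.api as smf

# Linear model on log(hsCRP); `df` as in the previous sketch.
lin = smf.ols(
    "np.log(hscrp) ~ psoriasis + age + C(gender) + C(race) + bmi"
    " + current_smoker + hypertension + hdl + atherosclerosis",
    data=df,
).fit()

# exp(beta) = ratio of geometric mean hsCRP, psoriasis vs controls
gmr = np.exp(lin.params["psoriasis"])
print(f"geometric mean ratio of hsCRP: {gmr:.2f}")
```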
A similar modeling strategy was followed, using only psoriasis patients, to investigate the association between various measures of psoriasis severity and the prevalence of atherosclerosis, or hsCRP. A sensitivity analysis that excluded those patients with a medical history of cardiovascular disease was conducted. No adjustment for multiple comparisons was made. All analyses were based on a two-sided significance level of 0.05 and were performed using SAS 9.2 and 9.4 (SAS Institute, Cary, NC, USA). No a priori sample size calculations were performed.
Study cohort
295 subjects were enrolled. 283 (207 psoriatic and 76 controls) had all data for the hsCRP and atherosclerosis analysis. The control and psoriasis cohorts had similar age, gender, and race distribution, while differences in BMI, smoking status, hypertension, dyslipidemia, LDL, HDL, and hsCRP were present (Table 1). Psoriasis patients had a lengthy history of disease, 19.4 ± 13.3 years, and an average BSA of 14.3 ± 18.8 (Table 1). Psoriatic arthritis affected 24.6 % of the enrolled patients, which is representative of the frequency of psoriatic arthritis in most tertiary care centers [41]. Greater than 90 % of participating patients had plaque type psoriasis with the remainder having either palmo-plantar, inverse or localized pustular psoriasis. There were no patients in this study with generalized pustular psoriasis, erythrodermic psoriasis or with psoriatic arthritis in the absence of any cutaneous disease. Topical treatments, though not necessarily exclusive, were the most frequently utilized therapy ( Table 1). The frequency of systemic treatment was 36 % (not shown).
Both the psoriasis and control cohorts with a history of hypertension had a similar frequency of non-treatment (Additional file 1: Table S1), defined as not taking any anti-hypertensive medications. Similar findings were present with regard to patients with dyslipidemia (Additional file 1: Table S1), defined as not receiving lipid-altering medication (statins, bile acid sequestrants, nicotinic acid, or fibric acid). To ascertain whether our results could be driven by the presence of psoriatic arthritis, analyses were performed with all patients as well as after removing the psoriatic arthritis patients. We found that in subjects with psoriasis only, both untreated hypertension and untreated dyslipidemia were significantly different between psoriasis patients and controls (Additional file 1: Table S2). We categorized subjects with dyslipidemia into high total cholesterol, high LDL, or low HDL sub-cohorts using thresholds defined by the National Cholesterol Education Program Adult Treatment Panel III (NCEP-ATP III) (Additional file 1: Table S3). There was no evidence of a difference in untreated dyslipidemia between each of the psoriasis and control sub-cohorts, but when we performed this analysis after removing the patients with psoriatic arthritis, patients with an untreated HDL <40 differed from the control patients (Additional file 1: Table S4).
Psoriasis patients showed higher prevalence of atherosclerosis
Ninety-two psoriasis patients had positive CAC scores, 83 had carotid plaque, 87 had a CIMT >75th percentile, and 5 had coronary stents, while 26 controls had positive CAC scores, 23 had carotid plaque, 25 had a CIMT >75th percentile, and 1 had a coronary stent. One or more of these findings was considered evidence of atherosclerosis; thus, the number of psoriasis patients and controls with atherosclerosis was 140/207 (67.6 %) and 40/76 (52.6 %), respectively (Fig. 1a). The unadjusted odds of atherosclerosis were 88 % higher in psoriasis patients than controls (95 % CI 1.10, 3.21; p = 0.0210; Table 4). Models that further controlled for additional variables within the PGA:hsCRP relationship maintained the association: adjusted geometric mean ratio 1.28 (p = 0.0052) when controlling for FBG, and 1.27 (p = 0.0054) when controlling for LDL and current statin therapy. The Hosmer and Lemeshow goodness-of-fit tests for the logistic models yielded values ranging between 0.10 and 0.99, thus providing no indication of lack of fit of any of the models. Residuals did not suggest important departures from linearity. A sensitivity analysis that excluded the patients with a medical history of cardiovascular disease (n = 15) yielded results consistent with those presented here (data not shown).
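The unadjusted odds ratio can be checked directly from the reported counts; a quick sketch:

```python
# Verifying the quoted 88 % higher unadjusted odds from the counts above:
# 140/207 psoriasis patients and 40/76 controls had atherosclerosis.
pso_with, pso_without = 140, 207 - 140   # 67 psoriasis patients without
ctl_with, ctl_without = 40, 76 - 40      # 36 controls without

odds_ratio = (pso_with / pso_without) / (ctl_with / ctl_without)
print(round(odds_ratio, 2))  # -> 1.88, i.e., 88 % higher odds
```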
Discussion
We have shown that, even after adjusting for multiple confounding factors, psoriasis patients in a tertiary care cohort have a 2.67-fold higher odds of having atherosclerosis compared to controls. The most striking finding was that, after age stratification, almost half (49 %) of patients with psoriasis aged 30-39 years had evidence of subclinical atherosclerosis, as compared to 15 % of controls. These findings suggest that psoriasis may contribute to the inflammatory cascade of atherosclerosis and that the younger a patient develops psoriasis, the higher the CVD risk. These findings are consistent with recent observations from preclinical model systems, wherein chronic, but not acute, skin-contained inflammation promoted the acceleration of arterial thrombosis [19,20]. In addition, other investigators have also reported observational studies demonstrating relatively higher measures of association with CV events in younger individuals [25,42-44]. Psoriasis sufferers with long-standing and more severe disease, as well as those with joint involvement, likely have a greater systemic inflammatory burden that may increase the likelihood of distant effects on the vascular system. Inflammation plays a crucial role in the initiation and promotion of atherosclerosis [18,45,46]. The immunologic commonalities of inflammation linking psoriasis and atherosclerosis include infiltrating T-cells, macrophages, monocytes, dendritic cells, and mast cells in psoriatic plaques, and a similar composition of cells in atherosclerotic plaques [47-49]. A similar pattern of CD4+ T-cell activation through antigen-presenting dendritic cells stimulates the proliferation of CD8+ T-cells, with the T-helper 1 phenotype of inflammation prevailing in both psoriasis and atherosclerosis [2,4,47,48,50,51]. This may lead to cyclic inflammation through continuous activation and re-activation of T-cells and macrophages and their ensuing cytokines, resulting in systemic inflammation mechanisms common to psoriasis and atherosclerosis [2,4,48]. Further support for this idea comes from results demonstrating that effective treatment of psoriasis may improve endothelial cell function [52]. We did not find evidence of an association between severity of psoriasis, as measured by PASI, PGA, or BSA, and the presence of atherosclerosis in this study, where psoriasis patients were included without regard to their current treatment type, dosage, or duration. We did not observe a causal role for psoriatic arthritis in our findings; when patients with psoriatic arthritis were removed from our analyses, there were more patients with psoriasis only with untreated hypertension and untreated HDL <40 than observed in the control cohort.
Although other studies have examined vascular imaging techniques such as CIMT, FMD, or CAC scores to evaluate CVD and reported associations with psoriasis [53][54][55], our study utilized a more comprehensive technique for assessing atherosclerosis, with a more analogous control cohort, and adjusted for more confounding factors than previous studies. We used a multi-vessel, multi-site, cumulative approach to detect the spectrum of atherosclerosis from the sub-clinical origins through clinical interventions in all subjects, while utilizing a control cohort recruited from the same clinic. Two other studies have examined CAC scores in psoriasis patients. Both of these studies, like ours, demonstrated increased odds of CAC in psoriasis patients versus controls (after CVD risk factor adjustment), although our study had an even higher percentage of controls with CAD (34 versus 28 versus 4 %) [54,56]. However, these studies examined smaller numbers of patients than our current study. In addition, only one other study has used multiple imaging modalities in the same patient population [56].
Several other studies have evaluated CIMT in psoriasis patients. All of these studies demonstrated increased CIMT in psoriasis versus controls, which is consistent with our results. In several of these studies, IMT correlated with psoriasis severity [53,57]. Studies involving only patients with psoriatic arthritis, which has a greater burden of systemic inflammation than psoriasis, have established an association with atherosclerosis through evaluation of CIMT and carotid plaque [58][59][60]. These surrogate markers have been validated as useful, noninvasive techniques for evaluating atherosclerosis and stratifying risk of future CVD events [38-40, 61, 62].
While our data demonstrated a trend toward elevated hsCRP in psoriasis patients compared to controls, several other studies have demonstrated statistically significant elevations of hsCRP levels in psoriasis patients [63,64]. The findings of Troitzsch et al. [63] may be limited by their reliance on a population-based approach that lacks details regarding psoriasis severity or duration, and the Usta et al. [64] report reflects a small study that included patients with very mild disease. This suggests that systemic inflammation in psoriasis patients is multi-factorial and that its driving influences, as measured by hsCRP, are shared. Inflammation plays a central role in atherosclerosis, and CRP is a critical factor in inflammation [18,28]. We show that serum CRP, detected by the high-sensitivity test (hsCRP, one marker of systemic inflammation), is elevated in psoriasis patients compared to controls even after adjusting for several variables including age, gender, race, BMI, and current smoking status. However, the association was lost after adjusting for hypertension, HDL, and the presence of atherosclerosis. These factors individually influence inflammation and hsCRP levels [65-69]. Furthermore, the cross-sectional design of this study, which included stable, well-controlled psoriasis patients receiving potent treatments, may have decreased levels of systemic inflammation and potentially underestimated the significance of this relationship. Additionally, the study may not have been sufficiently powered to detect significant differences between psoriasis severity and atherosclerosis.
Finally, we have recently demonstrated elevated circulating intermediate monocytes (CD14++, CD16+; Mon2) in psoriasis patients, along with increases in the formation of monocyte aggregates; this cell population may thus provide an additional, more refined surrogate outcome measure for predicting CVD in a psoriasis patient cohort [70]. These cells correlated with PASI and have previously been demonstrated to be elevated in high-risk non-psoriasis CVD patients [71-74]. Whether this population of cells may be capable of serving as a surrogate marker of skin-mediated promotion of adverse cardiac events in psoriasis patients, and how it changes with successful resolution of psoriasis following systemic treatment, remains to be determined. However, it is clear that current measures of psoriasis severity may lack the precision to relate the severity of psoriasis to the prevalence of atherosclerosis. Thus, developing alternative measures of monitoring patient response to therapy is worthwhile.
Limitations
This is a cross-sectional cohort study of a tertiary care group of psoriasis patients and is subject to the limitations of this design. Since these patients are recruited from a tertiary care center, they may represent a more severe psoriasis cohort than average. In particular, patients were eligible for enrollment regardless of disease severity, duration, treatment type or duration. Compared to controls, our psoriasis patients had higher BMI, LDL, triglycerides, prevalence of smoking, prevalence of hypertension, and lower HDL. These variables could be confounders of the association between psoriasis (or psoriasis severity) and hsCRP as well as psoriasis (or psoriasis severity) and atherosclerosis. The methods of assessing psoriasis severity may be problematic because of floor and ceiling effects. Although multivariable analyses attempted to adjust for these confounders, our results may be biased due to residual confounding. In addition, some data elements were missing and these subjects were excluded. Consequently, any inference could be biased due to potential selection bias stemming from missing data. However, because missing data occurred in less than 5 % of our patients, this is less likely to be cause for concern. Additionally, our psoriasis patients tended to have more severe disease and therefore our results may be more generalizable to patients with more severe disease. Finally, it should be noted that a cross sectional design cannot establish direction of an association.
Conclusions
A tertiary care cohort of psoriasis patients has a high prevalence of early atherosclerosis and elevated levels of serum CRP, detected by the high-sensitivity test (hsCRP, one marker of systemic inflammation), and psoriasis was a risk factor for the presence of atherosclerosis even after adjustment for key confounding clinical factors. Psoriasis may contribute to the development of atherosclerosis through an accelerated systemic inflammatory cascade, resulting in increased risk of CVD and cardiovascular events. Future research should focus on whether effective treatment of psoriasis reduces the risk of atherosclerosis, CVD, morbidity, and mortality.
CULTIVATION AND INGESTIBILITY OF ANGOLE PEAS (CAJANUS CAJAN) IN SAHEL GOAT IN NIGER
This study was conducted on the experimental site of the Faculty of Agriculture of Abdou Moumouni University of Niamey. The objective of the work was to study the cultivation of the species Cajanus cajan and to determine its palatability for the Sahel goat. A germination test was carried out in petri dishes. The experimental set-up consisted of 360 pockets distributed in three blocks, each containing four plots, of which three plots received urea, three received NPK, three received manure, and three were kept as controls. The time between placing the grains in petri dishes and the first appearance of the radicle was one day, and the staggering of germination was four days. The emergence of seedlings was observed 7 days after sowing in pockets. The seeds had a germination rate of 67.14%. At the first harvest, biomass production was 78.84, 89.42, and 79.87 kg/ha for plants treated with urea, NPK 15-15-15, and manure, respectively. Production of the control plants was 50.45 kg/ha. The species Cajanus cajan had a higher palatability rate (80%) than the species Leucaena leucocephala and Gliricidia sepium, which had 66.66% and 46.66%, respectively.
Introduction
Pastoral areas represent more than 38% of the territory of Sahelian countries such as Mali, Mauritania, Niger, Senegal, Chad, and Burkina Faso (Bernard, 2013). Niger, through the diversity of its environments and terroirs, constitutes an immense reservoir of various plants, in particular of pastoral and fodder interest. Nevertheless, agricultural pressure on the land leads to the reduction of pastoral areas, which increases with demographic pressure (Cotula et al., 2009; Chauveau et al., 2006; Jouve, 2006; Cotula et al., 2006; Mathieu and Tabutin, 1996). The possibilities for developing forage crops and improving pastoral production are enormous but remain dependent on certain determinants such as seed production, land management, and land development, especially in pastoral areas. Niger also faces a long dry period, which means that animal production, and animal feeding in particular, depends on natural resources that cannot meet the needs of the animals. It is therefore necessary to find alternatives that allow breeders to feed their herds by producing good quantities of fodder while reducing dependence on the fodder production of natural pastures. One such alternative is forage cropping, and among forage plants the legume family is the most appreciated by breeders. The species Cajanus cajan, commonly known as pigeon pea (Johnson and Raymond, 1964, quoted by Chrysostome et al., 1998), is one of the most used, both in the intertropical zone and in West Africa. The pigeon pea (Cajanus cajan) is very popular with pastoralists in West Africa for its tolerance to drought, its contribution to soil fertilization (Siambi et al., 1992), its richness in nutrients, its adaptability to climatic conditions, its ability to regenerate soils, and its multiple uses for humans and animals (Grâce et al., 2009); it is therefore important to deepen its potential in terms of available forage. Thus, this study aims to evaluate the leaf biomass yield of pigeon pea and to determine its ingestibility in goats of the Sahel.
Study Area
The study was conducted at the experimental park of the Faculty of Agronomy of Abdou Moumouni University, located at 13°30′ N latitude and 2°08′ E longitude, at an altitude of 216 m (Mani, 2013).
The climate is Sahelian, with high temperatures between April and June and low temperatures between December and January. Annual cumulative rainfall recorded from 2009 to 2018 varied from 350 to 650 mm, with the exception of 2012, when the cumulative total exceeded 700 mm.
Biological and Technical Material
The biological material consists of Cajanus cajan seeds produced during the 2017 rainy season. The technical equipment included graduated stakes for measuring plant heights, a pruner for cutting biomass, and petri dishes and filter paper for the germination test.
Germination Test
The germination test was carried out in 7 petri dishes. The procedure consisted of disinfecting pigeon pea seeds with alcohol and moistening the filter papers before placing about twenty seeds in each dish. The petri dishes were then placed in an incubator at a temperature of 25 °C.
Observations of germination began one day after the start of the experiment and continued until the third day after the last germination. The emergence of the radicle was retained as the criterion of germination. The parameters used to evaluate germination were the germination time, which reflects the time elapsed between sowing and the appearance of the radicle; the duration or staggering of germination, indicating the delay between the germination of the first seed and that of the last; and the germination rate.
Assessment of Plant Emergence
The emergence evaluation test was done on the experimental ground. It consisted of sowing pigeon pea grains in the plots and then watering them daily until the end of the trial, which lasted 15 days. Observations made every 2 days focused on seedling development.
Establishment of the Cultivation of Cajanus Cajan
The experiment took place at the experimental park of the Faculty of Agronomy of Abdou Moumouni University during the 2018 rainy season. The experimental device consists of three (3) blocks, each a repetition comprising four (4) plots or treatments, i.e., 12 plots of 4 m by 5 m; each plot has 30 pockets, for a total of 360 pockets.
Three plots received urea and three received NPK (15-15-15), each at a dose of 10 g per pocket. Three other plots received manure at a dose of 50 g per pocket, and the last three plots were maintained as controls. The depths of the pockets vary from 2.5 to 5 cm. The spacing between the crop rows and between the pockets is 1 m, giving a density of 10,000 plants per hectare (the arithmetic is sketched below). Sowing was carried out on July 10, 2018. The plots were weeded regularly to prevent the invasion of weeds. Observations were made on possible insect attacks that may have caused pathologies: throughout the study, the insect populations present on the species were identified and the damage they caused was evaluated.
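The stated density follows directly from the spacing; a quick arithmetic check:

```python
# Density implied by the stated 1 m x 1 m spacing between rows and pockets.
row_spacing_m = 1.0
pocket_spacing_m = 1.0

area_per_pocket_m2 = row_spacing_m * pocket_spacing_m  # 1 m^2 per pocket
pockets_per_hectare = 10_000 / area_per_pocket_m2      # 1 ha = 10,000 m^2
print(pockets_per_hectare)  # -> 10000.0, matching the stated density
```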
Harvesting and Drying of Biomass
The biomass harvest was carried out two months after sowing. It consisted of cutting all the plants in the plots at 20 cm above the collar. The biomass was dried in the shade for 7 days before weighing.
Regeneration Test of Cajanus Cajan
The Cajanus cajan recovery capacity test was also carried out at the experimental park of the Faculty of Agronomy.
Ingestibility Test
The ingestibility test was carried out on the farm of the Faculty of Agronomy. It consisted first of harvesting and mixing three kilograms of each of three different plant species, namely Gliricidia sepium and Leucaena leucocephala, which have been the subject of several studies around the world, and Cajanus cajan. These quantities were then offered to a batch of 6 Sahel goats for three successive days. Observations were made on the ingestion of each forage species by evaluating the quantities refused.
Data Processing
Statistical analyses were made using GenStat 12th edition software to compare the means of the data collected for the different variables, namely germination, emergence, plant height, plant leaves, biomass, ingestibility, and the recovery capacity of the plant after cutting.
Germination of Seeds and Emergence of Seedlings
Time, duration, and rate of seed germination and the emergence of seedlings are the important parameters for characterizing pigeon pea growth, as shown in Table 1. The observed germination rate of 67.14% approaches the threshold recommended by the International Seed Testing Association as adequate for good seeds, as reported by INRAN (1998). Quenum et al. (2016) recorded germination rates ranging from 77.5% to 85% for various varieties of pigeon pea, despite traditional conditions of seed storage and conservation at the producers. The seedling emergence rate was 57.5% 15 days after sowing; emergence was observed 7 days after sowing, and emergence was spread over 10 days after sowing. The emergence rate is low compared to that of Jafar (2017), who obtained an average rate of 88.7% under irrigation. This difference could be due to rotting and abortion of germinated seeds, attack by pathogens, or scarcity of rain during the sowing period. According to Louwaars and Marrewijk (1995), seed vigor depends on genetic makeup, environment, parent plant nutrition, maturity at harvest, size, weight and density of the seed, mechanical integrity, deterioration, and aging.
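The reported germination rate is consistent with the test setup described earlier (7 petri dishes of about twenty seeds each); the seed counts in the sketch below are inferred from those figures, not stated explicitly in the text:

```python
# Inferred check of the 67.14% germination rate: 7 dishes x ~20 seeds = 140
# seeds in total, of which 94 germinating gives exactly that rate.
seeds_total = 7 * 20   # assumed: about twenty seeds per dish
germinated = 94        # inferred from the reported rate

rate_pct = germinated / seeds_total * 100
print(round(rate_pct, 2))  # -> 67.14
```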
Plant Development
Height Development
Four weeks after emergence, plant elongation averaged 45.2 ± 46 cm. The analysis of variance of the results presented in Figure 2 shows that there is no significant difference (P > 0.05) between the different treatments.
Development of Twigs
The analysis of variance for the number of twigs showed no significant difference (P > 0.05) between fertilized and control plants. However, plants treated with NPK (15-15-15) appeared to be the most branched. Figure 2 shows that plants treated with NPK (15-15-15) have more leaves than plants treated with urea or manure and the controls; however, the analysis of variance shows no statistically detectable difference between treatments (P > 0.05). Follow-up of the Cajanus plants showed that the initial growth of the seedling is particularly slow, and that growth becomes fast in the second month. Niyonkuru (2002) showed that the rapid acceleration of pigeon pea growth can be explained not only by the phenological growth stage of the plants but also by the considerable amount of nitrogen they fix, especially at a young age; the assimilation of this nitrogen contributes greatly to rapid growth. As for the phase in which growth slows, this could be because the plants are adapting to the ecological conditions of the environment. Observations made on the experimental plots showed that flowering started on the 48th day after sowing. Plants fertilized with urea, NPK, and manure showed no significantly greater growth than the control plants.
Leaf Development
These results are consistent with those of Niyonkuru (2002), who notes that the species Cajanus cajan generally does not need manure, whether organic or mineral, except for crops associated with it. This result is also consistent with that obtained by Ido (2016), who showed that Cajanus cajan flowered between the second and third months, with at least 50% of plants flowering. During the trial and field observations, no pathology or pest attack was observed on the plants. The explanation is that the species Cajanus cajan is attacked only when the plant is fruiting, in very humid periods and especially in humid regions.
Production of Aboveground Biomass
The production of aerial biomass from treated plants is greater than that of the controls, as shown in Figure 3.
Recovery Capacity of Cajanus Cajan
Regarding the recovery of pigeon pea, no sign of degeneration was found on any plant after cutting and fertilizer input; the pigeon pea is a plant that tolerates cutting. At the first cut, the fresh weight of the forty plants was 12.25 kg, or 3062.5 kg/ha (the conversion is sketched below). Three (3) weeks after cutting, biomass yield varied with treatment (Figure 4).
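Scaling the first-cut harvest to the hectare figure quoted above is straightforward:

```python
# Per-hectare conversion of the first-cut fresh weight:
# 12.25 kg from 40 plants, at the stated density of 10,000 plants/ha.
fresh_weight_kg = 12.25
n_plants = 40
density_per_ha = 10_000

yield_kg_per_ha = fresh_weight_kg / n_plants * density_per_ha
print(yield_kg_per_ha)  # -> 3062.5 kg/ha, as stated in the text
```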
Figure 4: Biomass yield versus treatments three weeks after cutting on 1-year-old plants
The analysis in Figure 4 shows that plants treated with manure, urea, and NPK (15-15-15) had the highest yields, while the controls had the lowest. Recovery was good for all plants, and no difference was observed between fertilized and control plants. These results agree with those of Niyonkuru (2002), who notes that the pigeon pea plant can be harvested gradually, with one harvest every two and a half months, providing a well-distributed supply of fodder over the year.
Ingestibility
With regard to ingestibility, Figure 5 presents the variations in consumption of the forage species offered to the animals. The analysis in Figure 5 shows that pigeon pea foliage was the most appreciated by the Sahel goat, with a palatability of 80% in the first test, 46.66% in the second, and 60% in the third, whereas Leucaena leucocephala had rates of 66.66% in the first test and 43.33% in the second and third tests. Gliricidia sepium was the least palatable, with rates of 46.66% in the first, 33.33% in the second, and 16.66% in the last test. This ingestibility test showed that pigeon pea is a better forage plant than Gliricidia sepium and Leucaena leucocephala. This result agrees with Kouakou (2011), who states: "In animal feed, cut foliage, fresh or preserved, is a good feed used to feed livestock." The leaves are rich in protein (21-25% DM) and fiber (30-35% crude fiber/DM). Note that only tender (non-lignified) stems are palatable.
Conclusion and Recommendations
Pigeon pea is a species well adapted to dry areas that produces quality biomass for livestock feed while limiting labor and input investment. The ingestibility test of this species against Gliricidia sepium and Leucaena leucocephala showed that pigeon pea is very popular with animals. The results of the evaluation of production and of the recovery of the species one year after establishment are also encouraging for the establishment of a forage crop. The production of leaf biomass in pigeon pea is not strongly related to fertilizer input; this forage legume seems able to make good use of the few resources available in the soil. With such production and ease of cultivation, this plant could be used to address the chronic forage deficit in our livestock systems through the establishment of fodder banks.
The Creation of True Two-Dimensional Silicon Carbide
This paper reports the successful synthesis of true two-dimensional silicon carbide using a top-down synthesis approach. Theoretical studies have predicted that 2D SiC has a stable planar structure and is a direct band gap semiconducting material. Experimentally, however, the growth of 2D SiC has challenged scientists for decades because bulk silicon carbide is not a van der Waals layered material. Adjacent atoms of SiC bond together via covalent sp3 hybridization, which is much stronger than the van der Waals bonding in layered materials. Additionally, bulk SiC exists in more than 250 polytypes, further complicating the synthesis process and making the selection of the SiC precursor polytype extremely important. This work demonstrates, for the first time, the successful isolation of 2D SiC from hexagonal SiC via a wet exfoliation method. Unlike many other 2D materials, such as silicene, that suffer from environmental instability, the created 2D SiC nanosheets are environmentally stable and show no sign of degradation. 2D SiC also shows interesting Raman behavior, different from that of the bulk SiC. Our results suggest a strong correlation between the thickness of the nanosheets and the intensity of the longitudinal optical (LO) Raman mode. Furthermore, the created 2D SiC shows visible-light emission, indicating its potential applications for light-emitting devices and integrated microelectronic circuits. We anticipate that this work will have a disruptive impact across various technological fields, ranging from optoelectronics and spintronics to electronics and energy applications.
Introduction
Two-dimensional silicon carbide has received significant attention recently, and the structure and fundamental properties of 2D SiC and related materials have been investigated in various theoretical studies. This rapidly increasing interest comes from the immense potential and promise that such a material holds for the future. As a wide-bandgap semiconducting material with high thermal capability, SiC is a key material for various technological fields, ranging from high-power electronics and photonic devices to high-temperature devices and quantum information processing. The 2D form of SiC will naturally benefit from these overall SiC properties. Furthermore, as a result of reduced dimensionality and quantum confinement, 2D SiC is predicted to exhibit exotic optical and electronic properties that are very useful for various applications [1-5]. Structurally, 2D SiC is predicted to have a graphene-like honeycomb structure consisting of alternating Si and C atoms. In monolayer SiC, the C and Si atoms bond through sp 2 hybridization to form the SiC sheet, Figure 1a.
One of the most fascinating features of monolayer SiC is that it has a direct wide bandgap, Figure 1b. This feature will address countless drawbacks associated with the gapless nature of graphene and the indirect band gap of bulk SiC, Figure 1c. The band gap opening in 2D SiC is related to the electronegativity difference between silicon and carbon atoms, which induces electron transfer from the valence electrons of Si to the nearest C, causing a band gap to emerge [6-8]. Based on density functional theory, monolayer SiC has a direct band gap of about 2.58 eV [9-13]. The indirect-direct band gap transition in 2D SiC is similar to the feature previously reported in other 2D materials, such as 2D transition metal dichalcogenides (TMDs).

[Figure 1 caption (partial): republished from [22] with the permission of RSC; (c) electronic band structures of 6H-SiC, reproduced from [31] with the permission of AIP Publishing; (d,e) a schematic of the phase transformation in 2D SiC; (f) a schematic of the unit cell of 2D SiC, showing the in-plane lattice constant, a, and the atomic bonding of 2D SiC.]
Theoretical studies have also found that 2D SiC has very rich optical properties, such as strong photoluminescence, non-linear optical properties, and excitonic effects as a result of quantum confinement effects [4,[14][15][16]. In terms of mechanical properties, 2D SiC is one of the toughest and stiffest 2D materials. Only graphene and h-BN are stiffer than monolayer SiC [10,17].
The thermodynamic and kinetic stability of bilayer and multilayer SiC have also been investigated. Various theoretical studies have found that only monolayer SiC has a stable graphene-like structure; multilayer and even bilayer SiC are unable to form planar graphene-like structures. Multilayer SiC resembles bulk SiC in structure and properties. Like bulk SiC, both multilayer and bilayer SiC have indirect band gaps [18-20]. A detailed discussion of the structure, properties, and applications of 2D SiC is provided in our recent review paper [5].
Although the concept of sp 2 bonding in 2D SiC might look challenging, as opposed to the naturally occurring sp 3 bonding in bulk SiC, all theoretical studies have confirmed that 2D SiC is 100% planar [14,18,19,21-24]. Using first-principles calculations (DFT), Freeman et al. [19] predicted that as wurtzite structures, such as ZnO and 6H-SiC, become only a few atoms thick, they adopt a graphitic structure, as it is the most stable structure for these ultrathin materials. In fact, 2D SiC is not the first structure to contain Si=C bonding; a variety of Si=C-containing compounds, known as "silenes", have been reported in the past [25,26]. Thus, monolayer SiC can stabilize itself by adopting a planar sp 2 structure. However, given that bulk SiC has a tetrahedral sp 3 structure, the phase transformation from sp 3 to sp 2, Figure 1d,e, must take place if a top-down approach is to be used for the isolation of 2D SiC. The Si-C bond length in 2D SiC is predicted to be 1.79 Å, which is shorter than the 1.89 Å found in bulk SiC. The predicted lattice constant, a, for monolayer SiC is 3.1 Å, which is larger than that of bulk SiC, as shown in Figure 1f [18,19,27-30]. Table 1 lists the structural characteristics of 2D SiC and related materials. In addition to 2D SiC, other compositions of silicon carbide, i.e., 2D SixCy, are also energetically favorable, and the thermodynamic and kinetic stability of 2D SixCy compositions has been investigated [3,23,32-37]. All theoretical studies confirm that the 1:1 stoichiometry, i.e., SiC, is the most stable composition. Unlike other 2D materials, such as silicene, the 2D analog of silicon, or few-layer black phosphorus (BP), which stabilize themselves through buckling, zero buckling is predicted for 2D SiC [22,40,41]. Thus, indeed like graphene and h-BN, monolayer SiC has a stable planar structure. Practically, however, there are only a few published experimental works on multilayer SiC nanosheets [42-44]. These previous works used chemical vapor deposition, chemical exfoliation, and hydrothermal methods to prepare SiC nanosheets; however, a graphene-like SiC structure has not been reported yet. Here we report the first successful synthesis of graphene-like 2D SiC via a wet exfoliation process.
Methods and Materials
To form 2D SiC, hexagonal bulk SiC (BeanTown Chemical, Hudson, NH, USA) was exfoliated in isopropyl alcohol or N-methyl-2-pyrrolidone (Sigma-Aldrich, St. Louis, MO, USA) using bath sonication for 24 h (Branson Ultrasonic Corporation, Danbury, CT, USA). The average concentration of the SiC dispersion was 1 mg/mL. The dispersion was then centrifuged at an average rate of 1000 rpm for about 5 min (Eppendorf AG, Hamburg, Germany). TEM images were recorded using STEM and HRTEM modes (JEOL 2010F operating at 200 kV), with the samples on top of a holey carbon grid. The Raman spectra were collected with an excitation laser beam of 532 nm (WITec Alpha300R).
For most of the characterization tests, e.g., Raman, XRD, SEM, AFM, and PL, drops of a SiC dispersion were placed on different substrates, such as a holey carbon grid or a silicon substrate, and then dried at ambient conditions prior to characterization. SEM images were collected using an FEI Quanta 3D FEG SEM instrument. XRD tests were performed using a Rigaku SmartLab instrument (Rigaku, Japan) equipped with Cu Kα radiation. The AFM measurements were conducted in tapping mode using a Dimension 3100 instrument (Veeco) and a silicon tip (NSC15/AL BS, Micromasch). PL measurements were performed at room temperature using a UV micro-photoluminescence system and a sapphire substrate. PL samples were excited by a 269 nm excitation source. Optical microscopy images were recorded using a BX53MRF-S Olympus microscope (Olympus, Japan).
Absorption spectra were obtained using a UV-2600i UV-Vis Shimadzu spectrophotometer (Shimadzu Corp, Japan). To obtain the absorption spectra, the samples from the supernatant were placed in quartz cuvettes and measured against a reference sample with only the respective solvent in the cuvette.
Results and Discussion
Silicon carbide nanosheets were produced via wet exfoliation of bulk hexagonal SiC in isopropyl alcohol (IPA) or N-methyl-2-pyrrolidone (NMP) solvents, Figure 2a. The exfoliation process consists of two steps: (i) wet exfoliation of the SiC precursor via bath sonication and (ii) centrifugation. Throughout the sonication process, the bath temperature was kept below 50 °C, Figure S1. After the completion of the centrifugation step, the samples were transferred onto SiO2/Si or other substrates for various characterizations. The thickness of the SiO2 layer is 300 nm. Figure 2 shows the results from optical microscopy, atomic force microscopy (AFM), and scanning electron microscopy (SEM) measurements. The optical image, Figure 2b, shows nanosheets with different sizes in the range of 200 nm-2 µm. Figure S2 presents more optical microscopy images. Figure 2c shows AFM images of two SiC nanosheets. The height profiles of these nanosheets are presented in Figure 2d. As shown, the average height of the red and blue flakes is 0.28 and 0.25 nm, respectively. Thus, we can conclude that the nanosheets shown in Figure 2c are monolayers, as the interlayer spacing in bulk SiC is about 0.25 nm (the dominant plane). Figure S3 presents the related histograms. Figure 2e-h shows an SEM image of another SiC nanosheet and the associated elemental analysis. The chemical composition and purity of the liquid-exfoliated nanosheets are confirmed by X-ray elemental mapping by SEM. As shown in Figure 2g,h, the produced nanosheets are 100% silicon carbide; only silicon and carbon were observed in the elemental maps. Figure 3 presents the results from transmission electron microscopy (TEM) imaging. Figure 3a,b shows the overall morphology of the SiC nanosheets. The contrast variation within the nanosheets could be related to thickness changes or folded edges. The composition of the SiC flakes was initially confirmed by EDX measurements, as shown in Figure 3c. As can be seen from the EDX spectrum, the sample in Figure 3b contains only Si and C, with an atomic ratio of ~1; thus, the produced nanosheets are pure silicon carbide. Figure 3d is a high-resolution TEM image of the created 2D SiC. This HRTEM image clearly indicates the highly ordered crystalline structure of the exfoliated SiC nanosheets. All HRTEM images, including those from the edges of the nanosheets, revealed that the created nanosheets have a highly crystalline structure. Figure 3e is a magnified image of the circled region shown in Figure 3d. The graphene-like hexagonal structures shown in Figure 3e confirm the creation of 2D SiC. Figure 3f is a magnified image of the hexagonal structure shown in Figure 3e. This image further confirms the creation of stable graphene-like 2D SiC in this study, as the honeycomb structure is easily observed in these HRTEM images.
We then calculated the lattice constant of the created 2D SiC using the ImageJ software [45] (ImageJ, US National Institutes of Health). Figure 3g shows the relative profile intensity for the 2D SiC shown in Figure 3e (along the green line). We obtained an average lattice constant of 3.1 ± 0.01 Å. This value agrees well with the theoretical prediction and further confirms the SiC nature of these materials. The height similarity of the peaks in Figure 3g indicates similar electron densities in the hexagonal rings, which further confirms the monolayer nature of the shown structure. Figure 3h is a high-resolution TEM image of another 2D SiC sample, and Figure 3i is a magnified picture of the circled region in Figure 3h. These images are very interesting, as they show an extended graphene-like SiC lattice. Although some portions of the hexagonal lattice in this image look distorted because of the light contrast, there is still one lattice structure. These results confirm that these images indeed belong to monolayer SiC. Figure S4 shows an intensity profile for the sample shown in Figure 3i. The intensity variation of the line profile further reveals that the nanosheet shown in Figure 3i belongs to monolayer SiC. Only monolayer SiC sheets show such a significant variation in intensity between neighboring atoms, without any contribution from the background. Figure S5 shows more TEM images from the exfoliated SiC nanosheets.
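The lattice-constant measurement can be sketched as peak-spacing analysis of an intensity line profile. The profile below is synthetic (a noisy cosine with a 3.1 Å period standing in for an HRTEM line scan); in practice the profile would be exported from ImageJ.

```python
import numpy as np
from scipy.signal import find_peaks

# Synthetic stand-in for an HRTEM intensity line profile with a 3.1 A period.
rng = np.random.default_rng(1)
x = np.linspace(0, 5 * 3.1, 500)                      # position in angstroms
profile = np.cos(2 * np.pi * x / 3.1) + 0.05 * rng.normal(size=x.size)

# Locate intensity maxima and average their spacing.
peaks, _ = find_peaks(profile, distance=20)
lattice_constant = np.mean(np.diff(x[peaks]))
print(round(lattice_constant, 2))                     # ~3.1 angstroms
```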
It is worth emphasizing that since bulk SiC, including the 6H-SiC precursor, has well-known polytype crystal structures, HRTEM can be used efficiently to differentiate between single-layer SiC and multilayer or bulk SiC. For example, 6H-SiC has an ABCACB stacking sequence, 2H-SiC has AB order, and 3C-SiC has an ABC stacking structure. In all these polytypes, the second layer is arranged differently than the top layer, and there will be contrast differences in their TEM images. For example, in the AB stacking sequence, the second layer (along the c-axis) is shifted parallel with respect to the first one. Thus, the absence of these stacking sequences or layer contrast in our TEM results further confirms the successful isolation of 2D SiC. Figure 4a,b shows and compares the Raman results of the SiC precursor powder and the exfoliated SiC nanosheets. In the case of the SiC precursor, the Raman modes observed at 780, 790, and 970 cm−1 are consistent with the Raman modes of 6H-SiC. The peak around 790 cm−1 is the characteristic peak of silicon carbide and corresponds to the transverse optical (TO) phonon vibrational mode of the Si-C bond. The other peak, around 780 cm−1, also belongs to the TO mode of the Si-C bond. The peak around 970 cm−1 can be indexed to the longitudinal optical (LO) A1 phonon in silicon carbide. Figure 4b shows the Raman spectrum of the 2D SiC. Figure S6 shows the Raman spectrum of 2D SiC in a wider range (600-3000 cm−1). This spectrum further confirms the SiC nature of the created nanosheets, as both peaks belong to silicon carbide.
Similar to the precursor, 2D SiC shows an intense TO peak around 790 cm−1, which is characteristic of all SiC materials, and also one LO peak. Despite these similarities, there are significant differences between the Raman spectra of the SiC precursor and 2D SiC. As can be seen from Figure 4b, 2D SiC has a much weaker and broader LO peak, around 930 cm−1, than the SiC precursor. This asymmetric broadening and downshifting of the LO peak in 2D SiC is a direct result of quantum confinement effects. Furthermore, the disappearance of the TO peak in the low-frequency regime, 780 cm−1 (the shoulder), reveals some structural changes during the exfoliation process of 6H-SiC. These changes are a direct result of the evolution/transition of electronic bands and phonons with the number of layers. The weakening of the LO mode in 2D SiC could be related to the surface conversion from sp 3 to sp 2 hybridization. However, it is important to emphasize that the Raman measurements were performed on a drop-casted dispersion of SiC nanosheets, and not on pure monolayer SiC. As such, a full understanding of these transitions requires comprehensive characterization tests of both monolayer and few-layer SiC. This will be investigated in our next paper.
We also used X-ray diffraction (XRD) to further characterize the exfoliated nanosheets, as shown in Figure S7. When only 10 or even 20 drops of the dispersion were placed on the substrate, no peak was observed in the XRD scans. The XRD scan shown in Figure S7 was collected using parallel beam optics at a grazing angle of 1 degree. In this case, more than 50 drops were deposited on a silicon substrate; thus, the sample is no longer pure 2D SiC. Only one very weak and broad peak, which belongs to SiC, was observed around 35.8 degrees. This peak reveals a d-spacing of about 0.25 nm, which agrees well with the interlayer spacing of silicon carbide crystallographic planes. The absence of peaks in the XRD scans is related to the atomic thickness of the produced nanosheets; only stacked thick films can show intense, sharp peaks in XRD.
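The quoted d-spacing follows from Bragg's law; a quick check using the Cu Kα wavelength:

```python
import math

# Bragg's law, n*lambda = 2*d*sin(theta), for the ~35.8 degree (2-theta)
# peak with Cu K-alpha radiation (lambda = 0.15406 nm), n = 1.
wavelength_nm = 0.15406
two_theta_deg = 35.8

theta_rad = math.radians(two_theta_deg / 2)
d_spacing_nm = wavelength_nm / (2 * math.sin(theta_rad))
print(round(d_spacing_nm, 3))  # -> 0.251 nm, consistent with the text
```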
Thus, as our TEM and Raman results showed, 2D SiC is attainable. It can be realized via a wet exfoliation process of 6H-SiC. The successful exfoliation of SiC nanosheets from bulk SiC could be related to the solvents used in this study and their efficient interaction (both physical and chemical) with the SiC precursor. In fact, parameters such as the bulkiness of the NMP solvent and its high surface tension, or the hydrogen bonding in IPA and its ability to modify the charge carrier density in materials (by acting as an electron acceptor), could be among the main driving forces behind the successful isolation of 2D SiC [46-48]. Earlier studies on silicon/carbon double bonds showed that the sp 3 to sp 2 transitions in Si=C-containing materials could be stabilized via mechanisms such as surface depolarization, electronic effects, and steric protection by large substituents [25,26,49-51]. Thus, both NMP and IPA could contribute positively to, and even initiate, such mechanisms. However, a comprehensive understanding of the isolation mechanism demands more studies. Figure 4c presents the absorbance spectrum from a solution dispersion of 2D SiC. As shown in Figure 4c, the created SiC nanosheets absorb photons in the visible light range. The absorption spectrum has two peaks, around 2.14 and 2.58 eV, and one small peak around 2.3 eV, corresponding to π→π* transitions. Our absorption data from various solutions of 2D SiC dispersions (in the visible region) reveal that all tested samples have discrete peaks in the range of 2.13-2.7 eV, with the 2.13 eV peak being the most pronounced in the visible region. The formation of these discrete peaks could be attributed to the strongly bound exciton in 2D SiC, quantum confinement effects, and even surface defects. Since we tested solution dispersions of SiC nanosheets, as opposed to pure monolayer SiC, more work needs to be conducted before assigning one specific electronic structure/transition or process to each peak. Figure 4d shows the photoluminescence (PL) result from drop-casted SiC nanosheets on a sapphire substrate. The excitation wavelength was 266 nm. Our experimental results from different 2D SiC samples showed that, unlike bulk SiC, 2D SiC has strong PL properties; bulk SiC is known to have poor PL properties. As shown in Figure 4d, 2D SiC has a visible emission band at about 2.58 eV (or 480 nm).
The good match between the position of the PL peak and one of the absorption peaks (at 2.58 eV) is very interesting and is a strong indication of a direct band gap transition in the created 2D SiC nanosheets. These results indicate that 2D SiC may be used for blue-green luminescent devices, e.g., light-emitting diodes, as well as in integrated micro/nano-electronic circuits, such as LED-integrated computer chips, and for biolabeling and biosensing. Our absorption and PL data from other samples suggest that the position and intensity of the PL peaks are also affected by the substrate, excitation wavelength, solvent type, and synthesis parameters. Ideally, PL tests should be performed on suspended, pure monolayer SiC samples. A detailed and comprehensive analysis of the PL performance of 2D SiC is beyond the scope of this paper.
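The stated equivalence between the 2.58 eV peak and ~480 nm emission follows from the standard photon conversion E (eV) ≈ 1239.84 / λ (nm):

```python
# Photon energy-wavelength conversion, E (eV) = 1239.84 / lambda (nm).
hc_ev_nm = 1239.84

print(round(hc_ev_nm / 2.58))      # -> 481 nm for the 2.58 eV PL peak
print(round(hc_ev_nm / 480, 2))    # -> 2.58 eV for 480 nm emission
```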
Conclusions
In conclusion, we have performed the first successful exfoliation of true 2D SiC from bulk SiC. Our TEM and Raman results showed that monolayer SiC has a stable graphene-like structure, and that the produced nanosheets are of high purity and crystallinity. We have also analyzed the optical properties of the SiC nanosheets. The results showed that the nanosheets have strong emission in the visible range. With a direct band gap at the monolayer limit, 2D SiC represents an interesting platform for the next generation of electronics and optoelectronics technologies.
Unlike bulk SiC, which exists in more than 250 polytypes, monolayer SiC does not have any polytypes. Thus, the application of 2D SiC would be less complicated than that of existing SiC materials. We envision that successive breakthroughs in 2D SiC and related SixCy materials could usher in a new era of semiconducting materials with exciting applications in optoelectronics and electronics, bioimaging and sensing, and computing.
Supplementary Materials: The following are available online at https://www.mdpi.com/article/10.3390/nano11071799/s1. Figure S1: Photographs of the synthesis process. Figure S2: Optical microscopy images of the exfoliated SiC nanoflakes. Figure S3: AFM and height histograms of the created 2D SiC. Figures S4 and S5: TEM images. Figures S6 and S7: Raman and XRD results, respectively.
Repetition and Emotive Communication in Music Versus Speech
Music and speech are often placed alongside one another as comparative cases. Their relative overlaps and disassociations have been well explored (e.g., Patel, 2008). But one key attribute distinguishing these two domains has often been overlooked: the greater preponderance of repetition in music in comparison to speech. Recent fMRI studies have shown that familiarity – achieved through repetition – is a critical component of emotional engagement with music (Pereira et al., 2011). If repetition is fundamental to emotional responses to music, and repetition is a key distinguisher between the domains of music and speech, then close examination of the phenomenon of repetition might help clarify the ways that music elicits emotion differently than speech.
MUSIC'S REPETITIVENESS IS SPECIAL
Ethnomusicologist Nettl (1983) identifies musical repetition as a rare cultural universal, a characteristic exhibited by the music of every known human culture. Although some traditions, for example certain strands of contemporary art music in the West, explicitly eschew repetition, they do so in conscious response to a tendency toward musical repetition that exists elsewhere in the culture. Evolutionary biologist Fitch (2006) goes so far as to call repetition a "design feature" of music, essentially constitutive of the communicative form. This repetition can happen within a piece, or across multiple hearings.
Speech, by contrast, features a much lower incidence of repetition, and although the specifics are challenging to quantify, aspects of this distinction are plainly evident. For example, music features a litany of symbols instructing the player to repeat, from repeat signs to da capo indications (Kivy, 1993), whereas written language possesses no such lexicon for repetition. In a plea to abolish the practice of "part-repetition," a tradition in eighteenth-century music whereby performers would repeat large sections of the piece during performance, Ferdinand Praeger appeals to the unpalatability such a practice would have in speech: Would ever a poet think of repeating half of his poem; a dramatist a whole act; a novelist a whole chapter? Such a proposition would be at once rejected as childish. Why should it be otherwise with music? . . . Since any whole part-repetition in poetry would be rejected as childish, or as the emanation of a disordered brain, why should it be otherwise with music? (Praeger, 1882-1883).
Yet the fact remains not only that sections within musical pieces are often repeated, but also that entire pieces are listened and relistened to hundreds of times, often voluntarily and even enthusiastically. Garcia (2005) explores the ways that repetition's perceived affiliations with childishness, regression, and insanity (well exemplified by Praeger's remarks) have prevented scholars from acknowledging, let alone investigating its function in music (with notable exceptions, such as Ockelford, 2005). They've preferred instead to emphasize music's connections with language, long recognized as a legitimate domain of inquiry. But insight into the parallels between music and language has sometimes come at the cost of insight into music's more unique qualities, like repetitiveness. So closely affiliated with music is this quality, in fact, that its use within speech can actually serve to engender a perceptual shift whereby an acoustic stimulus first perceived as speech comes to be perceived as music. This phenomenon, the speech-to-song illusion (Deutsch et al., 2011), documents the way that the temporally regular repetition of a particular clause can trigger a startling effect on replay of the entire utterance whereby the speaker, at the start of the clause in question, is heard to suddenly break into song. That the simple act of repetition can so dramatically musicalize speech illuminates its special role in delineating these two communicative domains.
ARE THERE FUNCTIONAL COMMONALITIES UNDERLYING DIFFERENT KINDS OF REPETITION?
Johnstone's (1994) two-volume edited collection explores a variety of special cases where language is used repetitively, asking fundamentally whether there are things "repetition always does" (p. 12). By way of an answer, Johnstone observes that: The function of repetition in general is to point, to direct a hearer back to something and say, "Pay attention to this again. This is still salient; this still has potential meaning; let's make use of it in some way." This accounts, for example, for the cognitive utility of repetition to learners, getting the learner's attention on a token of input for a second round in order to have something to work with. We can also call attention to the fact that we're getting one's attention, and we can take that one step further, when awareness of the ability to manipulate allows us to play with attention. Immediacy may be poetic. . . . Repetition is a mode of focusing attention. . . . Repetition focuses attention on the makeup of both the repeated discourse and the earlier discourse. Repetition puts the utterance in brackets, making it impossible to treat the language as if it were transparent, by forcing hearers to focus on the language itself. In that sense repetition is metalinguistic (p. 13).
Repetition in speech, in other words, encourages a listener to orient differently to the repeated element, to shift attention down to its constituent sounds on the one hand, or up to its contextual role on the other. For example, if a mob boss in a gangster movie says "take care of it," and is answered by a quizzical look, he might repeat "take care of it!" underscoring for his henchman the darker meaning of the instruction.
The speech-to-song illusion can be understood similarly as a shift to a different level of understanding, in this case, to the utterance's lower-level prosodic aspects. Semantic satiation (Severance and Washburn, 1907), the well-known phenomenon whereby repeatedly speaking a word causes it to shed its semantic associations and devolve into nonsense, can also be understood as a result of an attentional shift down to the word's lower-level phonemic content.
Recent work in music has suggested that in addition to engendering a downward shift, repetition can also trigger an attentional shift up, toward progressively higher levels of the musical structure (Margulis, 2012). When participants were reexposed to the same piece four times in a row and asked during each iteration to press a button each time they heard something repeat, having been previously informed that the repeating thing could range in scope from a two-note motive to a phrase or section, they generally identified repetitions of smaller-scale elements (such as motives) on the first hearing, and then progressively larger-scale elements (such as phrases) across additional exposures. This change can be interpreted as evidence of a shift in orientation from lower-level aspects of the musical structure to higher-level aspects of the musical structure. Although repeated exposures seemed to engender an attentional shift upward for these pieces, I hypothesize that for repertoires with less rich hierarchic structuring repeated exposures might push attention down to attributes like microtiming and microdynamics.
In speech, then, repetition may be useful in specialized circumstances where a speaker wants a listener to attend to some different, non-obvious aspect of the utterance: its previously unseen relevance to some larger situational context, for example, or its prosodic or lexical content. But speech normally functions to relay a particular semantic meaning; once the message has been conveyed, the particular words used to convey it are no longer relevant. This condition has been explored within the context of the fuzzy trace theory (Reyna and Brainerd, 1995), which posits a distinction between gist and verbatim memory. Speech is normally associated with gist memory; if asked to recount a story, for example, people use different words to describe the events -they've invested in the meaning rather than in the particular words used to encode it. Music, by way of contrast, is associated with comparatively keener verbatim memory (Calvert, 1991, 2001). Recent work by Krumhansl (2010) shows that listeners can identify songs remarkably well from clips shorter than half a second, suggesting extremely acute verbatim encoding. Music doesn't serve as a discardable vessel for conceptual meaning in the same way that ordinary uses of speech can; rather, its surface, verbatim content retains communicative significance across repetitions. Moving up and down within its structure across rehearings can yield satisfying varieties of engagement with a piece, revealing a stark contrast in the kind of thing sought after by a listener when hearing a piece of music versus an excerpt of speech.
REPETITION AND EMOTIONAL RESPONSE
To gain a foothold in the relatively underexplored domain of the emotional impact of musical repetition, it's helpful to examine a better-explored domain that features repetitive behavior, such as ritual. Like music, ritual features unusual degrees of voluntarily undertaken repetition, and also like music, ritual is capable of eliciting strong emotional response. Boyer and Liénard (2006) adopt a framework for event hierarchies from Zacks et al. (2001) to characterize the special behaviors associated with ritual. Within this framework, gestures (on the order of a few seconds) combine to form episodes (such as tying your shoes) which combine in turn to form scripts (such as getting ready for school or eating dinner at a restaurant). It's typically most natural to recall events in terms of episodes, and excessive focus on the lower gestural level can indicate pathologies such as frontal lobe damage or schizophrenia (Janata and Grafton, 2003). But ritual expressly drives attention down to this level, inducing, Boyer and Liénard claim, a special mental state focusing on low-level properties of actions. Associated with the repeated gestures comes a general effect of goal demotion, where the uses to which the gestures are typically put recede and the constituent movements themselves rise to prominence. The excessive repetition also serves as a powerful signal of intentionality, revealing both the internal commitment of the ritual's participant and her ties to a social community that has defined these particular gestures as significant. Shifts in attention, then, of the sort chronicled by the studies reviewed in the first part of this article, might underlie the capacity for a special kind of emotional engagement. Margulis (2013) arbitrarily inserted repetitions into excerpts of contemporary art music by renowned composers Elliott Carter and Luciano Berio, and everyday listeners without special training or experience with the genre rated the repetition-hacked examples as more likely to have been composed by a human artist and the original versions as more likely to have been randomly generated by a computer. Repetition in music, like repetition in ritual, then, can serve to signal intentionality, and this recognition of intentionality might facilitate the capacity to engage with sounds as emotionally communicative.
INTERNAL IMAGERY, EXTERNAL SOUND, AND STRONG EXPERIENCES OF MUSIC
One consequence of the prevalence of musical repetition is the phenomenon of the earworm. Liikkanen (2008) reports that over 90% of people report experiencing earworms at least once a week, and more than 25% say they suffer them several times a day. Brown (2006), a neuroscientist at McMaster, has reported extensively on his own "perpetual music track": tunes loop repeatedly in his mind on a near constant basis. Brown observes that the snippets usually last between 5 and 15 s, and repeat continuously -sometimes for hours at a time -before moving to a new segment.
The experience in each of these cases, the earworm and the perpetual music track, is very much one of being occupied by music, as if a passage had really taken some kind of involuntary hold on the mind, and very much one of relentless repetitiveness. The seat of such automatic routines is typically held to be the basal ganglia (Boecker et al., 1998; Nakahara et al., 2001; Lehéricy et al., 2005). Graybiel (2008) has identified episodes where neural activity within these structures becomes locked to the start and endpoints of well-learned action sequences, resulting in a chunked series that can unfurl automatically, leaving only the boundary markers subject to intervention and control. Vakil et al. (2000) showed that the basal ganglia underlie sequence learning even when the sequences lack a distinct motoric component. And, critically, Grahn and Brett (2007, 2009) used neuroimaging to demonstrate the role of the basal ganglia in listening to beat-based music; Grahn and Rowe (2012) showed that this role relates to the active online prediction of the beat, rather than the mere post hoc identification of temporal regularities.
The circuitry that underlies habit formation and the assimilation of sequence routines, then, also underlies the process of meter-based engagement with music. And it is repetition that defines these musical routines, fusing one note to the next increasingly tightly across multiple iterations. DeBellis (1995) offers this telling example of the tight sequential fusing effected by familiar music: ask yourself whether "oh" and "you" are sung on the same pitch in the opening to The Star-Spangled Banner. Most people cannot answer this question without starting at the beginning and either singing through or imagining singing through to the word "you." We largely lack access to the individual pitches within the opening phrase -we cannot conjure up a good auditory image of the way "you" or "can" or "by" sounds in this song, but we can produce an excellent auditory image of the entire opening phrase, which includes these component pitches. The passage, then, is like an action sequence or a habit; we can duck in at the start and out at the end, but we have trouble entering or exiting the music midphrase. This condition contributes to the pervasiveness of earworms; once they've gripped your mind, they insist on playing through until a point of rest. The remainder of the passage is so tightly welded to its beginning that conscious will cannot intervene and apply the brakes; the music spills forward to a point of rest whether you want it to or not.
Reencountering a passage of music involves repeatedly traversing the same imagined path until the grooves through which it moves are deep, and carry the passage easily. It becomes an overlearned sequence, which we are capable of executing without conscious attention. Yet in the case of passive listening, this movement is entirely virtual; it's evocative of the experience of being internally gripped by an earworm, and this parallel forms a tantalizing link between objective, external and subjective, internal experience. This sense of being moved, of being taken and carried along in the mode of a procedural enactment, when the knowledge was presented (by simply sounding) in a way that seemed to imply a more declarative mode, can be exhilarating, immersive, and boundary-dissolving: all characteristics of strong experiences of music as chronicled by Gabrielsson and Lindström's (2003) survey of almost 1000 listeners. Most relevant to the present account are findings that peak musical experiences tended to resist verbal description; to instigate an impulse to move; to elicit quasi-physical sensations such as being "filled" by the music; to alter sensations of space and time, including out-of-body experiences and percepts of dissolved boundaries; to bypass conscious control and speak straight to feelings, emotions, and senses; to effect an altered relationship between music and listener, such that the listener feels penetrated by the music, or merged with it, or feels that he or she is being played by the music; to cause the listener to imagine him or herself as the performer or composer, or to experience the music as executing his or her will; and to precipitate sensations of an existential or transcendent nature, described variously as heavenly, ecstatic, or trance-like.
These sensations can be parsimoniously explained as consequences of a sense of virtual inhabitation of the music engendered by repeated musical passages that get procedurally encoded as chunked sequences, activating motor regions and getting experienced as lived/enacted phenomena, rather than heard/cognized ones. It is repetition, specifically, that engages and intensifies these processes, since it takes multiple repetitions for something to be procedurally encoded as an automatic sequence. This mode of pleasure seems closely affiliated with and even characteristic of music, but less so for speech, where emotions are more typically elicited by the listener's relationship to the semantic meaning conveyed by the utterance. This paper argues that the difference in the appetite for repetition between musical and speech-based modes of communication is fundamentally linked with differences in the means by which these modes of communication elicit emotion. Margulis (forthcoming) explores this hypothesis in detail.
Clinical benefits and potential risks of adalimumab in non‐JIA chronic paediatric uveitis
Abstract Purpose To describe the treatment results with adalimumab in chronic paediatric uveitis, not associated with juvenile idiopathic arthritis (JIA). Methods Medical records of children with non‐JIA‐uveitis were reviewed retrospectively. Children without an underlying systemic disease were pre‐screened with brain magnetic resonance imaging (MRI) to exclude white matter abnormalities/demyelination. Results Twenty‐six patients were pre‐screened with brain MRI, of whom adalimumab was contraindicated in six patients (23%) with non‐anterior uveitis. Forty‐three patients (81 eyes) were included. Disease inactivity was achieved in 91% of the patients after a median of three months (3–33). Best‐corrected visual acuity (BCVA) improved from 0.16 ± 0.55 logarithm of the minimum angle of resolution (logMAR) at baseline to 0.05 ± 0.19 logMAR at 24 months (p = 0.015). The median dosage of systemic corticosteroids was reduced to 0 mg/day at 24 months of follow‐up (versus 10 mg/day at baseline; p < 0.001). Adalimumab was discontinued in thirteen children due to ineffectiveness (n = 8), side effects (n = 1), long‐term inactivity of uveitis (n = 3) or own initiative (n = 1). Relapse of uveitis occurred in 19 (49%) patients, 5 (26%) of them without an identifiable cause. Conclusion Adalimumab is effective in the treatment of non‐JIA‐uveitis in paediatric patients by achieving disease inactivity in the majority of the patients, improving BCVA and decreasing the dose of corticosteroids. Adverse events and side effects are limited. Pre‐screening with MRI of the brain is recommended in paediatric patients with intermediate and panuveitis.
Introduction
Paediatric uveitis is a severe inflammatory eye disease, which can lead to visual loss in one-third of the patients, and eventually to permanent blindness (Cunningham 2000). The pathophysiology of non-infectious uveitis still remains mostly unknown. Uveitis in childhood can be related to systemic conditions, of which juvenile idiopathic arthritis (JIA) is by far the most common and well-studied (Angeles-Han et al. 2013; Moradi et al. 2014; Siiskonen et al. 2021). It is considered to be a multifactorial autoimmune disorder with various (epi)genetic predisposing factors and involvement of pro-inflammatory cytokines such as tumour necrosis factor α (TNFα) (Lee & Dick 2012; Cordero-Coma & Sobrin 2015).
The goals of treatment in paediatric uveitis are to preserve vision and prevent ocular complications, by controlling the inflammation and achieving a stable remission as early as possible (Sood & Angeles-Han 2017). Corticosteroids are traditionally used as first-line treatment. However, they can lead to severe ocular side effects such as cataracts and glaucoma, and carry potentially severe risks for the child's general health (Simonini et al. 2010; Sood & Angeles-Han 2017). Therefore, switching to systemic immunomodulatory therapy (IMT) in the early course of uveitis is the state of the art nowadays (Simonini et al. 2010; Sood & Angeles-Han 2017).
Adalimumab, a recombinant human anti-TNFα monoclonal antibody, has been shown to be an effective and safe therapeutic in JIA-uveitis regarding the control of inflammation, improvement of visual acuity and reduction of glucocorticoid use (Ramanan et al. 2017; Sood & Angeles-Han 2017; Horton et al. 2019). Despite the efficacy of adalimumab in treating uveitis, precautions are needed, since there is upcoming evidence suggesting an association between TNFα-blocking agents and demyelination of the nervous system (Kemanetzoglou & Andreadou 2017; Olsen & Frederiksen 2017; Suhler et al. 2018; Cunha et al. 2020). Intermediate uveitis has been associated with multiple sclerosis in 7-10% of cases, and patients with a history of optic neuritis or papillitis could also be at increased risk (Cunningham et al. 2017). Although less common than in intermediate uveitis, children with other types of uveitis may have pre-existent and asymptomatic white matter abnormalities and/or demyelination as well, which could potentially progress under anti-TNFα treatment (Olsen & Frederiksen 2017).
Studies involving the clinical efficacy of adalimumab in non-JIA paediatric uveitis are scarce (Nguyen et al. 2016; Leal et al. 2019). Currently, adalimumab is used off-label in Europe in paediatric patients with intermediate, posterior and panuveitis. Therefore, the objective of this study is to investigate retrospectively the efficacy and potential risks of treatment with adalimumab in chronic paediatric non-JIA-uveitis.
Patient identification
The medical records from the department of ophthalmology, University Medical Centre Utrecht, the Netherlands, were reviewed to identify all patients with paediatric uveitis. Patients and/or their parents/caretakers were asked to give informed consent to review their medical records. The study adhered to the tenets of the Declaration of Helsinki and was conducted after ethical approval for data collection by the Medical Ethical Committee Utrecht (protocol number: 18-751/C).
Patients were eligible for treatment with adalimumab if they were diagnosed with refractory non-JIA-uveitis and/or if the necessary dosage of IMT and/or oral corticosteroids was leading to complications. Uveitis was defined as refractory when patients did not achieve a 2-step decrease in the level of inflammation in the anterior chamber and/or vitreous humour despite the use of systemic IMT and/or corticosteroids within at least six months. The reasons for starting adalimumab did not change significantly over the period of the study.
Before starting the additional treatment with adalimumab, children without an associated systemic disease were screened with magnetic resonance imaging (MRI) of the brain, or where indicated with visual evoked potentials (VEP), to exclude the risk of demyelinating disease (Kemanetzoglou & Andreadou 2017). The children were also screened for tuberculosis and hepatitis.
Inclusion criteria were as follows: treatment with adalimumab (Humira®; AbbVie Inc., Ludwigshafen, Germany), diagnosis of non-JIA-uveitis, onset of uveitis before 16 years of age and a minimum follow-up of six months. The patients had been treated between January 2008 and March 2020. Diagnosis and treatment of uveitis were performed by an ophthalmologist specialized in paediatric uveitis and a paediatric immunologist according to the National Guidelines Uveitis. The diagnostic criteria for uveitis were defined according to the Standardization of Uveitis Nomenclature (SUN) criteria (Jabs et al. 2005).
Participants received adalimumab at a dose of 20 mg in patients weighing <30 kg, or 40 mg in patients weighing ≥30 kg, administered as a subcutaneous injection every two weeks. The dose and the interval could in some cases be adjusted according to the level of disease activity.
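This weight-based regimen is simple enough to restate programmatically; the sketch below merely encodes the rule given above (function and variable names are our own illustrations, not from any clinical system).

```python
# Sketch of the weight-based adalimumab regimen described above:
# 20 mg for patients weighing under 30 kg, 40 mg otherwise, every two weeks.
# Names are illustrative; in the study, dose and interval could be adjusted
# according to the level of disease activity.
def adalimumab_dose_mg(weight_kg: float) -> int:
    """Return the subcutaneous dose in mg for a given body weight."""
    return 20 if weight_kg < 30 else 40

DOSING_INTERVAL_DAYS = 14  # every two weeks, unless adjusted

assert adalimumab_dose_mg(25) == 20
assert adalimumab_dose_mg(42) == 40
```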
Data collection
Data were retrieved retrospectively from visits at our outpatient ophthalmology clinic at standard time intervals: before the start of adalimumab (baseline); at 3, 6, 9 and 12 months; and then yearly after starting adalimumab.
Data extracted from the electronic patient files included the following: demographics, uveitis characteristics, underlying systemic disease, date of adalimumab start, use of other medication, side effects and complications. The following side effects and complications were registered: cataract requiring surgery, secondary glaucoma, cystoid macular oedema (CME), papillitis, strabismus, amblyopia and abnormal laboratory results (liver enzymes, kidney function, haemoglobin and leucocytes). Secondary glaucoma was defined as a glaucomatous visual field defect in combination with intraocular pressure (IOP) higher than 21 mmHg, and glaucomatous cupping of the optic nerve or progressive thinning of the ganglion cell layer and retinal nerve fibre layer (RNFL) on optical coherence tomography (OCT) (European Glaucoma Society 2014). CME was defined as the presence of macular thickening with cyst formation observed by OCT, or late-phase leakage on fluorescein angiography (FA) scored according to the Angiography Scoring for Uveitis Working Group (ASUWOG) (Tugal-Tutkun et al. 2010). Papillitis was defined as blurring of the optic disc margins and/or as optic disc hyperfluorescence on FA scored according to the ASUWOG criteria.
Anti-adalimumab antibodies were measured on indication, when patients did not achieve a 2-step decrease in the level of inflammation according to the SUN criteria, or when the physician had doubts about the therapy compliance of the patient (Jabs et al. 2005). This policy did not change significantly over the period of the study.
Main outcome measures
The primary endpoint was the proportion of patients with disease inactivity or improvement from baseline for a minimum period of three months and with no development or worsening of any new ocular complications. Disease inactivity and improvement were defined according to the SUN criteria (Jabs et al. 2005). Disease inactivity was defined as an anterior chamber cell count of <1+ cells in anterior uveitis and panuveitis, and <1+ cells in the vitreous humour in intermediate, posterior and panuveitis, in the absence of papillitis, CME and/or vasculitis. Disease improvement was defined as a two-grade decrease in the anterior chamber and/or vitreous cavity.
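For clarity, the sketch below encodes this inactivity criterion as a function; the field names and flat numeric grades are simplified stand-ins for the clinical SUN grading, not a validated implementation.

```python
# Sketch of the SUN-based disease-inactivity criterion defined above.
# Grades mimic the numeric SUN cell grades (0, 0.5, 1, 2, ...); field names
# are illustrative simplifications, not a clinical data standard.
from dataclasses import dataclass

@dataclass
class EyeExam:
    uveitis_type: str          # "anterior", "intermediate", "posterior", "panuveitis"
    ac_cell_grade: float       # anterior chamber cell grade
    vitreous_cell_grade: float
    papillitis: bool
    cme: bool                  # cystoid macular oedema
    vasculitis: bool

def is_inactive(e: EyeExam) -> bool:
    """Inactive: <1+ AC cells (anterior/pan), <1+ vitreous cells
    (intermediate/posterior/pan), and no papillitis, CME or vasculitis."""
    if e.papillitis or e.cme or e.vasculitis:
        return False
    if e.uveitis_type in ("anterior", "panuveitis") and e.ac_cell_grade >= 1:
        return False
    if (e.uveitis_type in ("intermediate", "posterior", "panuveitis")
            and e.vitreous_cell_grade >= 1):
        return False
    return True
```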
Secondary endpoints were as follows: time to inactivity of uveitis, improvement of visual acuity by two Snellen lines, taper of topical or systemic corticosteroids, time to first uveitis relapse on adalimumab therapy (excluding activity of uveitis up to three months after an eye surgery), improvement of CME on FA and/or OCT by ≥20%, and a total decrease of the FA score by ≥25% (Tugal-Tutkun et al. 2010).
Statistical analysis
Descriptive statistics were used to report demographic and clinical features of the patients. The McNemar test was used to analyse linked dichotomous variables. The Wilcoxon signed-rank test for paired samples was used to analyse means of non-normally distributed linked samples. Cox proportional hazards (PH) regression was used to identify factors associated with outcomes. The PH assumption was verified visually by examining the hazard plots, and statistically by including the interaction between the logarithm of time and the covariate of interest in a univariate model. The PH assumption was considered suspect if the interaction between time and the covariate of interest was significant at the 0.1 level. Variables that changed over time were evaluated as time-updated variables. Correction for the analysis of paired eyes was performed using generalized estimating equations (GEE). p-values of less than 0.05 were considered statistically significant. All significances are two-tailed. All statistical analyses were performed with SPSS 25 (SPSS Inc., Chicago, Illinois, USA).
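For readers wishing to reproduce this style of analysis, a minimal sketch of the core tests is given below; it assumes a hypothetical per-patient table and uses scipy, statsmodels and lifelines as open-source stand-ins for the SPSS procedures named above (all column names are placeholders).

```python
# Minimal sketch of the statistical pipeline described above:
# Wilcoxon signed-rank for paired non-normal measures, McNemar for paired
# dichotomous variables, and Cox PH regression for time-to-event outcomes.
import pandas as pd
from scipy.stats import wilcoxon
from statsmodels.stats.contingency_tables import mcnemar
from lifelines import CoxPHFitter

df = pd.read_csv("followup.csv")  # hypothetical per-patient table

# Paired comparison, e.g. systemic steroid dose at baseline vs 24 months.
stat, p = wilcoxon(df["steroid_mg_baseline"], df["steroid_mg_24m"])

# Paired dichotomous variable, e.g. active uveitis yes/no at two visits.
table = pd.crosstab(df["active_baseline"], df["active_24m"])
mcnemar_result = mcnemar(table)

# Cox proportional hazards for time to disease inactivity.
cph = CoxPHFitter()
cph.fit(df[["months_to_inactivity", "inactive_event", "male", "age_years"]],
        duration_col="months_to_inactivity", event_col="inactive_event")
cph.print_summary()
```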
Results
Fifty children with non-JIA-uveitis were eligible for treatment with adalimumab. Twenty-six children were screened before treatment with MRI of the brain, of whom seven (27%) were diagnosed with anterior uveitis, seven (27%) with intermediate uveitis and twelve (46%) with panuveitis. Adalimumab was relatively contraindicated in six of them (23%) based on pre-existing white matter abnormalities of the brain, and in one patient based on an abnormal VEP (2%) (Fig. 1). Five patients with pre-existing white matter abnormalities of the brain had panuveitis and one patient had intermediate uveitis. Papillitis was seen in five of the six patients with abnormalities on MRI. For subsequent analyses, we included 43 children (81 affected eyes) with non-JIA-uveitis who started additional treatment with adalimumab and had a minimum follow-up of six months.
Characteristics of children before treatment with adalimumab

Table 1 summarizes the general patient characteristics before additional treatment with adalimumab (baseline). Fifteen (35%) of the patients treated with adalimumab were diagnosed with anterior uveitis and 28 children (65%) with non-anterior uveitis. Median age at baseline was eleven years (range 4-17 years). Median duration of uveitis at baseline was two years (range 0-9 years). Topical corticosteroids were used in 91% of the children (n = 70 affected eyes of 39 children), of whom 44% (n = 17) used topical corticosteroids >3 drops daily. Thirty-one children (72%) were treated with systemic corticosteroids. Forty-one children had active uveitis at baseline. In the other two patients, adalimumab was initiated to reduce the high dosage of topical and systemic corticosteroids, which was needed to maintain disease inactivity. Table 2 shows the clinical disease activity at baseline. Active inflammation in the anterior chamber was present in 27% (n = 22) of the affected eyes, in the vitreous humour in 19% (n = 15) of the affected eyes, and in both the anterior chamber and the vitreous humour in 26% (n = 21) of the affected eyes. Twenty-eight per cent of the affected eyes (n = 23) had no active inflammation in either the anterior chamber or the vitreous humour at baseline.
Primary and secondary outcomes
Median duration of study follow-up was 2.5 years (range 0.5-11.3 years). Disease inactivity of uveitis was achieved in 91% of the patients after a median of three months (range 3-33) of treatment, p < 0.001 (Fig. 2). The median dosage of systemic corticosteroids was reduced from 10 mg/day at baseline (range 0-60 mg/day) to 5 mg/day at three months (range 0-20 mg/day) (p < 0.001), and to 0 mg/day at 24 months (range 0-15 mg/day) (p < 0.001) (Fig. 2). Best-corrected visual acuity (BCVA) improved from 0.16 ± 0.55 logarithm of the minimum angle of resolution (logMAR) at baseline to 0.08 ± 0.26 logMAR at 9 months (p < 0.001), and to 0.05 ± 0.19 logMAR at 24 months (p < 0.05) of follow-up (Table 2). Within the first year of adalimumab use, 35% of the patients (n = 15) had an improvement of visual acuity by two Snellen lines in at least one eye. An OCT scan was performed in 63 of the 81 affected eyes (78%) before the start of adalimumab. Fifteen of the scanned eyes had CME with an increased CMT on OCT. A CMT reduction of ≥20% on OCT occurred in 53% of the eyes with CME (n = 8). Six eyes showed this reduction in the first three months. Thirty-one patients (72%) had undergone a FA in the first year after the start of adalimumab therapy, of whom eight children had undergone a FA in the second year. In seven of these patients (88%), an improvement of ≥25% of the FA score was achieved in the second year of follow-up. After 24 months the median FA score had improved significantly to 2.0, compared with 13.5 at baseline (p < 0.001).
Relapse of uveitis
Relapse of uveitis occurred in 19 (49%) of the 39 patients who achieved disease inactivity, after a median of 14 months (range 3-40 months). In 13 patients (68%), a relapse occurred within 12 months of disease remission, and in six patients (32%) after 12 months. Fourteen patients (74%) had an identifiable cause of relapse: dose reduction or discontinuation of medication (n = 7), lack of therapy compliance (n = 6) or anti-adalimumab antibodies (n = 1). In the group without an identifiable cause of relapse, the median duration of adalimumab controlled remission until the relapse was 5.5 months (range 3-16 months).
Comorbidities and complications
Comorbidities and complications during follow-up, whether or not related to treatment with adalimumab, included three infections (scarlet fever, pharyngitis and influenza), each causing a short interruption in the use of adalimumab of at most two doses and none leading to hospitalization. During follow-up, no laboratory abnormalities that could be related to adalimumab were identified. Seven patients (16%) had elevated liver enzymes up to two times the normal value. In all six patients (14%) who underwent cataract surgery during adalimumab therapy, the cataract was already present before adalimumab treatment. Two patients were diagnosed with secondary glaucoma.
Adalimumab was discontinued in thirteen children (30%) due to ineffectiveness (n = 8, after 8-105 months, of whom one patient had positive anti-adalimumab antibodies), long-term disease inactivity (n = 3, after 31-44 months), side effects (urge incontinence) (n = 1, after 4 months) or own initiative (n = 1, after 75 months). The dose of adalimumab was adjusted to weekly administration in eleven children (26%), of whom six discontinued adalimumab due to ineffectiveness (n = 5) or own initiative (n = 1). Relapse of uveitis occurred after a median of eight months (range 7-9 months) in all three patients who discontinued adalimumab due to long-term disease inactivity. Two of them used methotrexate, with one child also using topical corticosteroids. One child used no additional medication.
Regression analysis
Univariate analysis of predictive factors for inactivity of uveitis during adalimumab treatment showed that boys achieved disease inactivity faster, with a hazard ratio (HR) of 3.34 (95% CI 1.54-7.27, p = 0.002). This was not the case for the anatomic classification of uveitis, age, or the type of IMT used. Multivariate analysis confirmed these results, with male gender being independently associated with a faster achievement of disease inactivity (HR 3.12, 95% CI 1.39-6.99, p = 0.006) (Fig. 3; Supplemental information).
Discussion
This retrospective cohort study focuses on the additional treatment with adalimumab used off-label in paediatric uveitis not associated with JIA. We demonstrate that treatment with adalimumab led to prompt inactivity of uveitis in the vast majority of the children (91%). Until now, the effectiveness of adalimumab was only shown for paediatric uveitis associated with JIA (SYCAMORE trial), adults with JIA-uveitis (ADJUVITE trial) or for non-infectious uveitis in adults (VISUAL III trial) (Ramanan et al. 2017; Quartier et al. 2018; Suhler et al. 2018). These trials showed a treatment response of 56-73% in the participants of the adalimumab group (Ramanan et al. 2017; Quartier et al. 2018; Suhler et al. 2018). Two systematic reviews reported a pooled response rate of 87% (95% CI 75-98%) in chronic paediatric uveitis with mostly JIA (Simonini et al. 2014; Ming et al. 2018). Although these studies include patients with different types of uveitis or adults, the results are in accordance with our findings. We also found that 20 patients achieved inactivity without the occurrence of a relapse during the variable follow-up. This suggests that within 6 to 24 months of follow-up, 50% of those achieving an early response will maintain a persistent state of inactivity. Long-term follow-up of these patients is rarely reported, but the rates of steroid reduction over two years in this study suggest that weaning of treatment in a significant number of patients is possible.

A growing number of studies describe central and peripheral nervous system demyelinating events during the use of TNFα inhibitors, suggesting a possible relation between the use of anti-TNFα agents and demyelination (Scheinfeld 2005; Zhu et al. 2016; Kemanetzoglou & Andreadou 2017; Suhler et al. 2018; Cunha et al. 2020). The literature regarding pre-screening with brain MRI is ambiguous, as there are no clear guidelines (Ali & Laughlin 2017; Kemanetzoglou & Andreadou 2017). We performed our pre-screening with MRI of the brain in children without an underlying systemic disease and detected white matter abnormalities in twenty-three per cent of them. Although no diagnosis of MS or other central nervous system condition could be made, we did not take the risk of treating these patients with adalimumab, as other therapeutic options for refractory uveitis without this risk of demyelination now emerge, such as IL-6 inhibitors (e.g. tocilizumab) (Wennink et al. 2020). Interestingly, only one of our patients with abnormalities on MRI had intermediate uveitis, which is the most reported presentation of uveitis in relation to multiple sclerosis and white matter abnormalities on brain MRI (Cunningham et al. 2017; Olsen & Frederiksen 2017). The other five children were diagnosed with panuveitis, which emphasizes that treating physicians need to be aware of this potential risk in other forms of uveitis as well. Remarkably, five of the six patients with abnormalities on MRI were diagnosed with papillitis. This possible association should be investigated in future research. An initially normal brain MRI was repeated in four patients during adalimumab treatment and showed no signs of demyelination. Also, no clinical signs of demyelinating events were observed during treatment. Based on our results, we recommend screening with MRI of the brain before the start of adalimumab in case of intermediate or panuveitis.
To preserve vision in paediatric uveitis, it is crucial to achieve disease inactivity as fast as possible with limited use of topical and systemic corticosteroids, and to prevent the development of secondary vision-threatening complications of uveitis. One of these complications is cataract, which develops more rapidly when topical corticosteroids are dosed at >3 drops daily (Thorne et al. 2010). In our study, we observed a relatively quick reduction in the use of topical corticosteroids during adalimumab treatment, reaching statistical significance after nine months of therapy. Equally important, we observed an impressive reduction of systemic corticosteroids during adalimumab treatment that reached statistical significance after six months. Chronic use of systemic corticosteroids carries serious health risks in paediatric patients, such as ocular hypertension, Cushing's syndrome, hyperglycaemia, osteoporosis and growth retardation (Simonini et al. 2010; Sood & Angeles-Han 2017). Prior to the use of adalimumab, at least three children had clinical signs of Cushing's syndrome, and one child developed osteoporosis due to long-term use of systemic corticosteroids. Therefore, it is essential that the (long-term) use of systemic corticosteroids is minimized.
Although good effectiveness of adalimumab in reaching remission has been shown in ours and other studies, a relapse of uveitis still occurs in a significant number of patients (Ramanan et al. 2017; Suhler et al. 2018; Horton et al. 2019; Al-Janabi et al. 2020). In our study, uveitis flared up in five of the children (13%) without an identifiable cause during adalimumab treatment and in 14 (33%) with an identifiable cause. Drug immunogenicity is one of the few described risk factors linked to the failure of anti-TNFα treatment in non-infectious uveitis (Skrabl-Baumgartner et al. 2019). Hence, anti-adalimumab antibodies could play a role in the occurrence of uveitis relapse. In our study, three patients had developed anti-adalimumab antibodies, resulting in discontinuation of the drug in one patient. Two of these patients had chronic anterior uveitis (one had ANA-positive serology), and one was diagnosed with panuveitis (ANA-negative). Antibodies were found in 16-26% of the patients in studies with JIA patients. Results of a Finnish study show that anti-adalimumab antibody levels of ≥12 AU/ml were associated with a higher grade of activity of uveitis, a higher failure to reach disease remission and a lack of concomitant methotrexate therapy in JIA-associated patients (Wang et al. 2013; Skrabl-Baumgartner et al. 2015; Leinonen et al. 2017). Unfortunately, data regarding non-JIA-uveitis are unknown. Our clinical experience is that after increasing the frequency from every other week to weekly, the antibodies can disappear and control of inflammation can be attained. In some cases, it is possible to set the frequency back to every other week later on. In practice, in the case of treatment failure or ineffectiveness, serum anti-adalimumab antibodies can be analysed together with the serum blood level of adalimumab.
Our study found a positive association between male gender and the time to achieve disease inactivity of uveitis. To our knowledge this finding has not been described in the literature before, although there is emerging evidence of gender differences in paediatric uveitis, for example in the risk of uveitis and in the severity of the inflammation (Kalinina Ayuso et al. 2010; Yeung, Popp & Chan 2015; Haasnoot et al. 2018).
Recent studies show that increasing adalimumab administration to weekly in patients with inadequate inflammatory control on every-other-week dosing is a reasonable treatment option (Correll et al. 2018; Lee et al. 2020; Liberman et al. 2020). We suggest considering weekly administration of adalimumab when there is no improvement in disease activity or in the presence of anti-adalimumab antibodies. An increase to weekly administration took place in one-fourth of our patients. Disease inactivity was reached in fifty per cent of these children with weekly administration of adalimumab. Treatment of these anti-TNFα-resistant patients can be challenging. Recent literature suggests switching to another type of anti-TNFα agent, to tocilizumab, or to more experimental agents in paediatric uveitis such as abatacept, rituximab or a Janus kinase inhibitor (Thau et al. 2018; Maccora et al. 2020; Wennink et al. 2020).
Seven patients (16%) had elevated liver enzymes up to two times the normal value. In all of them, the elevated liver enzymes normalized after dose adjustment of the conventional IMT. After dose adjustment, only one patient had a relapse of uveitis. In response, the dose of the conventional IMT was increased and the disease activity improved. The dose of adalimumab was not adjusted in these patients. Therefore, our opinion is that these elevated liver enzymes were due to the use of the conventional IMT. We cannot completely exclude a role of adalimumab, although we do not think it was significant.
No major side effects or adverse events occurred in our study, allowing the safe use of adalimumab at home, in contrast to other biological agents such as infliximab, which can only be administered intravenously in the hospital. The multidisciplinary team of ophthalmologists and paediatricians needs to make a trade-off between potential vision loss and the potential risk of steroid-related complications on the one hand, and the risk of infection during adalimumab treatment on the other. This emphasizes the importance of a multidisciplinary approach in the treatment of paediatric uveitis by an ophthalmologist and a paediatric immunologist.
The study is limited by its retrospective design. Nonetheless, all data were structurally noted in the electronic patient files with a low inter-observer variability between two paediatric uveitis specialists, using the same protocolled treatment strategies and a uniform way of registration in the expert centre for paediatric uveitis in the Netherlands.
Conclusions
This study shows the effectiveness of anti-TNFα therapy with adalimumab in the treatment of chronic paediatric non-JIA-uveitis. Adalimumab therapy leads to disease inactivity, improvement of BCVA and a decrease in the use of corticosteroids. We recommend MRI of the brain as screening before starting adalimumab in intermediate and panuveitis.
Theory for Swap Acceleration near the Glass and Jamming Transitions for Continuously Polydisperse Particles
SWAP algorithms can shift the glass transition to lower temperatures, a recent unexplained observation constraining the nature of this phenomenon. Here we show that SWAP dynamics is governed by an effective potential describing both particle interactions as well as their ability to change size. Requiring its stability is more demanding than for the potential energy alone. This result implies that stable configurations appear at lower energies with SWAP dynamics, and thus at lower temperatures when the liquid is cooled. The magnitude of this effect is predicted to be proportional to the width of the radii distribution, and to decrease with compression for finite-range purely repulsive interaction potentials. We test these predictions numerically and discuss the implications of our findings for the glass transition. These results are extended to the case of hard spheres, where SWAP is argued to destroy metastable states of the free energy coarse-grained on vibrational timescales. Our analysis unravels the soft elastic modes responsible for the speed-up induced by SWAP, and allows us to predict the structure and the vibrational properties of glass configurations reachable with SWAP. In particular, for continuously polydisperse systems we predict the jamming transition to be dramatically altered, as we confirm numerically. A surprising practical outcome of our analysis is a new algorithm that generates ultrastable glasses by a simple descent in an appropriate effective potential.
I. INTRODUCTION
Understanding the mechanisms underlying the slowing down of the dynamics near the glass transition is a long-standing challenge in condensed matter [1,2]. Unexpectedly, swap algorithms [3,4] (in which particles of different radii can swap, in addition to the usual moves of particle positions) were recently shown to allow for equilibration of liquids far below the glass transition temperature T_g [5-8]. For a judicious choice of poly-dispersity, one finds that: (i) the glass transition is shifted to lower temperatures: with swaps, the α-relaxation time at T_g is only two or three orders of magnitude slower than in the liquid, instead of 15 orders of magnitude for regular dynamics. The slowing down of the dynamics occurs at a lower temperature, which we refer to as T_0^swap. (ii) The spatial extent of dynamical correlations, which is significant near T_g, is greatly reduced with swap, and significant correlations only occur near T_0^swap. (iii) The mean square displacement of the particles on vibrational time scales is increased significantly in this temperature range [7]. These observations constrain theories of the glass transition. In particular, current formulations of theories based on a growing thermodynamic length scale appear inconsistent with these observations [9]. A theory of the glass transition should explain both swap and non-swap dynamics. Goldstein [10] proposed that the glass transition is initiated by a transition in the free energy landscape: at high temperature, the system resides near saddles, whereas below some temperature T_0 the dynamics can only occur by activation (whose nature is debated), and is thus much slower. In mean-field models of structural glasses such a transition in the landscape is predicted [11-14] and corresponds to a Mode Coupling Transition (MCT) where the relaxation time diverges. It was suggested in [9] that the MCT transition would be shifted to lower temperature with swap dynamics, as proven and confirmed numerically in a mean-field model of glasses [15]. Yet understanding the real-space mechanisms underlying the speed-up induced by swap in finite dimensions (where the relaxation time cannot diverge), as well as the nature of the very stable glassy configurations swap can reach, remains a challenge.
In this work, we tackle these questions by first reviewing the equilibrium statistical mechanics theory of polydisperse systems [16,17], to show that they are equivalent to a system of identical particles that can individually deform according to a chemical potential µ(R), where R is the particle radius. In the (practically important) case where poly-dispersity is continuous, µ(R) is smooth, allowing us to define normal modes of the generalised Hessian that includes radii as degrees of freedom. We prove that requiring its stability is strictly more demanding than for the usual Hessian. Second, we show that these results stringently constrain the glassy states generated by swap algorithms. We illustrate this point by studying the jamming transition in soft repulsive particles, which we prove must be profoundly altered: hyperstaticity is found, with an excess number of contacts with respect to the Maxwell bound δz ∼ √α > 0, where α characterises the width of the radii distribution ρ(R). Although we find that the vibrational spectrum of the generalised Hessian is marginally stable with respect to soft extended modes near jamming, these modes are gapped in the regular Hessian, unlike for packings obtained with regular dynamics [18-20]. These results are verified numerically by introducing a novel algorithm performing a steepest descent in the generalised potential energy that includes µ(R), which can generate extremely stable glasses without any activation. Third, we investigate the glass transition. We show that the inherent structures obtained after a rapid quench with the regular dynamics are unstable with respect to this new algorithm, which reaches significantly smaller energies. This result indicates that metastable states appear at lower energies with swap, and therefore at lower temperatures when the liquid is equilibrated. Thus the Goldstein transition must be shifted to a lower temperature with swap dynamics, suggesting a natural explanation for its speed-up which specifies the collective modes facilitating the dynamics for T_0^swap < T < T_g. We predict this shift to be proportional to α in general, and to be inversely proportional to the distance to jamming for sufficiently compressed soft spheres. Lastly, we argue that these results apply to hard spheres as well, if the energy is replaced by a free energy coarse-grained on vibrational timescales, as previously studied in [19, 21-23]. We use this approach to provide a simple phase diagram where the Goldstein transition and the emergence of marginality [18] (referred to as a Gardner transition in infinite dimension [23]) can be related to structure for both swap and non-swap dynamics.
A. Effective potential
We now show that a poly-disperse system can be described by an effective potential that includes the radius as a degree of freedom, an idea already present in the early works of [16,17]. We consider a system of N particles with continuous polydispersity ρ(R), of width α = (⟨R²⟩ − ⟨R⟩²)^{1/2}/⟨R⟩. Here {R} indicates the set of particle radii and {r} their positions. In what follows we denote ⟨R⟩ ≡ R₀, and U({r},{R}) the total potential energy of the system. We define the partition function Z({r}) annealed over the particle radii:

Z({r}) = Σ_P exp[−βU({r}, P({R}))],   (S.1)

where the sum is on all the permutations P({R}) of the particle radii. In the thermodynamic limit, a grand-canonical formulation is equivalent, in which particles of different radii correspond to different species. The associated partition function writes:

Z_GC = ∫ Π_i dR_i ∫ Π_i dr_i exp[−β(U({r},{R}) + Σ_i µ(R_i))],   (S.2)

where µ(R) is the chemical potential at radius R. It is chosen such that, in the thermodynamic limit, the distribution of radii that follows from Eq. (S.2) is ρ(R). A key remark is that the integrand of Eq. (S.2) defines a Gibbs measure for the coupled degrees of freedom {r} and {R} with an effective energy functional:

V({r},{R}) = U({r},{R}) + Σ_i µ(R_i).   (S.4)
B. Mechanical stability under swap
Let us consider first an athermal system. In the thermodynamic limit, mechanical stability under swap dynamics requires V to be at a minimum. Beyond the usual force balance condition, it implies:

µ'(R_i) = −Σ_j f_ij,   (S.5)

where f_ij are the contact forces between particle i and j. Averaged over particles, the sum of the contact forces acting on one particle is of order pR₀^{d−1}, where p is the pressure and d the spatial dimension. To achieve a distribution of radii of width α, µ'(R) must thus vary by an amount of that order over the width αR₀ of the distribution, so the stiffness k_R ≡ ⟨µ''(R_i)⟩ acting on each particle radius must be of order:

k_R ∼ pR₀^{d−2}/α,   (S.6)

where the average is made on all particles i.
C. Generalised vibrational modes
Stability also requires the Hessian H_swap (the matrix of second derivatives of V) to be positive definite. Since there are now N(d+1) degrees of freedom, H_swap is an N(d+1) × N(d+1) symmetric matrix, with eigenvalues ω²_swap. It contains a block of size Nd × Nd which is the regular Hessian, H_ij = ∂²U/∂r_i∂r_j; we denote by ω² its eigenvalues. Because hybridisation with additional degrees of freedom can only lower the minimal eigenvalues of the Hessian, H_swap has lower eigenvalues than H (as quantified below), implying that mechanical stability is more stringent with swap dynamics.
Let us illustrate this result perturbatively when k_R ≫ k, where k is the characteristic stiffness of the interaction potential U. In general, the eigenvalues of H are functions of the set of stiffnesses {k_ij}, but also of the interaction forces {f_ij} [24]. We first ignore the effects induced by such pre-stress. Moving along a normal mode of H by a distance x (while leaving the radii fixed) leads to an elastic energy ∼ ω²x² and changes the forces by a characteristic amount δf satisfying δf²/k ∼ x²ω². Because of such a change, Eq. (S.5) is not satisfied anymore. Thus the potential V can be reduced further, by an amount of order δf²/k_R ∼ ω²x²k/k_R, if the radii are allowed to adapt. This reduced energy can approximately be written as x²ω²_swap, where ω²_swap is the eigenvalue associated with that mode in the effective Hessian. We thus obtain:

ω²_swap ∼ ω²(1 − C₀ k/k_R),   (S.7)

where C₀ is a numerical constant.
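To make the hybridisation argument concrete, the following toy example (ours, not from the paper) diagonalises the generalised Hessian in the simplest possible setting: two particles sharing one harmonic contact in one dimension, with a stiffness k_R on each radius. The softest non-trivial eigenvalue of H_swap lies below the corresponding eigenvalue 2k of the regular Hessian and approaches it as k_R/k grows, consistent with Eq. (S.7).

```python
# Toy check of Eq. (S.7): two particles on a line sharing one harmonic
# contact, with the two radii included as extra degrees of freedom of
# stiffness k_R. In coordinates (x1, x2, R1, R2) the contact energy is
# (k/2) * (R1 + R2 - (x2 - x1))^2, whose gradient direction is v = (1,-1,1,1).
import numpy as np

def hessians(k, k_R):
    v = np.array([1.0, -1.0, 1.0, 1.0])
    H_swap = k * np.outer(v, v) + np.diag([0.0, 0.0, k_R, k_R])
    return H_swap[:2, :2], H_swap   # (regular Hessian block, generalised Hessian)

k = 1.0
for k_R in (5.0, 50.0, 500.0):
    H_pos, H_swap = hessians(k, k_R)
    w_pos = np.linalg.eigvalsh(H_pos)            # 0 (translation) and 2k
    w_swap = np.linalg.eigvalsh(H_swap)
    lam = min(w for w in w_swap if w > 1e-10)    # softest non-trivial swap mode
    # For k_R >> k this approaches 2k*k_R/(2k + k_R) = 2k(1 - 2k/k_R + ...),
    # i.e. Eq. (S.7) with C0 = 2 for this particular geometry.
    print(f"k_R/k = {k_R:5.0f}   regular: {w_pos[-1]:.3f}   "
          f"swap: {lam:.3f}   asymptote: {2*k*k_R/(2*k + k_R):.3f}")
```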
III. SOFT SPHERE SYSTEMS
To illustrate these ideas, we consider soft spheres with half-sided harmonic interactions, so that:

U({r},{R}) = (k/2) Σ_{i<j} (R_i + R_j − r_ij)² Θ(R_i + R_j − r_ij),   (S.8)

where r_ij is the distance between particles i and j, and Θ(x) is the Heaviside step function.
A. Jamming transition for soft spheres under swap
When materials with such finite-range interactions are quenched to zero temperature, they can jam into a solid or not, depending on their packing fraction. At the jamming transition separating these two regimes, vibrational properties are singular [25,26], and the effects of swap are expected to be important, as we now show. The vibrational spectrum of the regular Hessian is strongly affected by the excess number of constraints δz with respect to the Maxwell threshold, where the numbers of degrees of freedom and constraints match. Effective medium theory [27] or a variational argument [28] imply that in the absence of pre-stress, soft normal modes must be present in the Hessian above a frequency scale ω* with:

ω*² ∼ k δz².   (S.9)

For swap with a small poly-dispersity α ≪ ∆, where we introduced the dimensionless particle overlap ∆ ≡ pR₀^{d−2}/k, Eq. (S.6) gives k_R ≫ k and Eq. (S.7) applies. It implies that soft normal modes will be present at lower eigenvalues, ω*_swap² ∼ δz²k(1 − C₀α/∆), where C₀ is a numerical constant. Pre-stress can be shown to shift eigenvalues of the Hessian by some amount ≈ −C₁k∆ [18,29], leading to eigenvalues satisfying ω⁰_swap² ∼ δz²k(1 − C₀α/∆) − C₁k∆. Mechanical stability requires positive eigenvalues and we obtain:

δz² ≥ C₁∆/(1 − C₀α/∆).   (S.10)

Eq. (S.10) indicates that away from jamming, the relative effects of swap on the structure are proportional to α/∆. Certain materials are marginally stable, corresponding to the saturation of inequalities of the kind of Eq. (S.10). As we shall see below, we provide numerical evidence that this is also the case when swap is used, at least near jamming.
Here this assumption gives an expression for δz which is above, but very close to in the limit α ≪ ∆, the bound for non-swap dynamics of [18], recovered by setting α = 0. Thus in this limit we expect very small changes of structure in the glass phase between swap and non-swap dynamics.
For swap with a large poly-dispersity α ≫ ∆, the situation is completely different. We then have k_R ≪ k: in this regime the strong interactions correspond to interactions between particles in contact. As far as the low-frequency end of the spectrum is concerned, these interactions can be considered to be hard constraints (i.e. k = ∞), whose number is Nz/2. The dimension of the vector space satisfying such hard constraints is N(d+1) − Nz/2 = N(1 − δz/2). These modes gain a finite frequency due to the presence of the weaker interactions of strength k_R associated with the change of radius. Importantly, the number of these weaker constraints is simply the number of particles N. If δz is small, the number of degrees of freedom N(1 − δz/2) is just below the number of constraints N: for this vector space we are close to the "isostatic" or Maxwell condition where the numbers of constraints and degrees of freedom match. Thus we can use the same results for the spectrum, valid near the jamming transition, introduced above. They also apply in that situation, with the only difference that the stiffness scale k is replaced by k_R. In particular, if pre-stress is not accounted for, a plateau of soft modes must appear above some frequency given by Eq. (S.9):

ω*_swap² ∼ k_R δz² ∼ (k∆/α) δz².   (S.11)

This plateau survives up to the characteristic frequency ω_i ∼ √k_R. When pre-stress is accounted for, eigenvalues of the Hessian are again shifted by ∼ −k∆. Mechanical stability then implies (∆/α)δz² > C₂∆ and:

δz ≥ (C₂α)^{1/2}.   (S.12)

In this regime, marginal stability (the saturation of the stability bound of Eq. (S.12)) corresponds to a coordination independent of pressure, with δz ∼ √α and ω*_swap² ∼ k∆. We thus predict that swap dynamics destroys isostaticity and significantly affects structure and vibrations. For sufficiently large α, this regime will include the entire glass phase, and vibrational properties and stability will be affected in the vicinity of the glass transition (which sits at a finite distance from the jamming transition [30]) as well.
Note that these predictions apply to algorithms that allow for swap moves up to the jamming threshold. This is not the case, e.g., in [31], where swaps are used to generate dense equilibrated liquids, which are then quenched without swap toward jamming. We also expect isostaticity to be restored in algorithms for which the set of particle radii is strictly fixed, but only below some pressure p_N that vanishes as N → ∞, above which our predictions should apply.
B. Numerical Model
As shown in Eq. (S.4), swap dynamics is equivalent to a system of interacting particles which can individually deform. To test our predictions, we consider soft spheres as defined in Eq. (S.8), whose radii follow an internal potential µ(R_i) of characteristic stiffness k̃_R, chosen to diverge as R_i → 0 to avoid particles shrinking to zero size. To avoid crystallisation, we further considered that particles are of two types, whose internal potentials are centred on two different preferred radii. This choice leads to a bimodal distribution of sizes ρ(R), as shown in Fig. S1. Our model corresponds to a swap dynamics where swap is allowed only between particles of the same type. Note that broad mono-modal distributions can be optimised to make swap more efficient while avoiding crystallisation [7], which would be similar to having a very large α in our theoretical description. The spatial dimension is d = 2 in our simulations and k = 1 is our unit stiffness, leading to the simple relation ∆ = p.
To study the jamming transition, we consider a pressure-controlled protocol at zero temperature, described in the S.I. The internal potential must evolve with pressure to maintain a fixed polydispersity. As shown in Fig. S1(B), this can be achieved with great accuracy simply by imposing that k̃_R = p/ᾱ, where ᾱ is a parameter that controls the width α, as shown in the inset of Fig. S1. For this bimodal distribution, α is defined as α = (α₁ + α₂)/2, where α₁ and α₂ are the relative widths of the two peaks in ρ(R). In the limit where the non-swap dynamics is recovered (which happens when α → 0), α and ᾱ are proportional.
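As a concrete illustration of what such a generalised descent looks like, the sketch below is our own simplified stand-in, not the paper's implementation: the internal potential is taken harmonic, µ(R) = (k̃_R/2)(R − R₀)², rather than the diverging potential used here, and N, the box size, stiffnesses and step size are illustrative placeholders. It relaxes positions and radii together by plain gradient descent on V = U + Σ_i µ(R_i) for 2D harmonic discs in a periodic box.

```python
# Minimal sketch of a generalised steepest descent on the effective potential
# V = U + sum_i mu(R_i) for 2D harmonic soft discs in a periodic box.
# Simplifications (ours): mu(R) is harmonic (it does not diverge at R -> 0,
# unlike the paper's internal potential), the neighbour search is O(N^2),
# and all parameters are illustrative placeholders.
import numpy as np

rng = np.random.default_rng(0)
N, L, k = 64, 8.0, 1.0          # particles, box length, contact stiffness
k_R, R0 = 0.5, 0.55             # radius stiffness and preferred radius
pos = rng.uniform(0.0, L, size=(N, 2))
R = np.full(N, R0)

def gradients(pos, R):
    """Gradients of V with respect to positions and radii."""
    gpos = np.zeros_like(pos)
    gR = k_R * (R - R0)                      # d(mu)/dR for the harmonic stand-in
    for i in range(N):
        d = pos - pos[i]
        d -= L * np.round(d / L)             # minimum-image convention
        r = np.linalg.norm(d, axis=1)
        r[i] = np.inf                        # exclude self-interaction
        overlap = R[i] + R - r
        for j in np.nonzero(overlap > 0.0)[0]:
            f = k * overlap[j]               # harmonic contact force magnitude
            gpos[i] += f * d[j] / r[j]       # dU/dpos_i (points from i toward j)
            gR[i] += f                       # dU/dR_i; at a minimum this balances
                                             # mu'(R_i), as in Eq. (S.5)
    return gpos, gR

step = 0.05
for _ in range(500):                         # plain fixed-step descent
    gpos, gR = gradients(pos, R)
    pos = (pos - step * gpos) % L
    R = R - step * gR
```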
C. Structure and stability
Our central prediction is that under swap dynamics materials must display a larger coordination to enforce stability. This prediction is verified in Fig. S2(A), which shows δz vs. p for various values of ᾱ. Isostaticity is indeed lost and the coordination converges to a plateau as ∆ decreases. Strikingly, we find for the plateau value δz ∼ √α, consistent with a saturation of the stability bound of Eq. (S.12). This scaling behaviour is implied by the scaling collapse in Fig. S2(B), which also confirms that the characteristic overlap below which swap affects the dynamics scales as ∆ ∼ α. Overall, these results support that the numerical curves δz(∆) in Fig. S2(A) correspond to the marginal stability lines under swap dynamics, shown for different polydispersities (see more on that below).
D. Packing fraction
For traditional dynamics, polydispersity tends to have very limited effects on the value of the jamming packing fraction φ_J. We have confirmed this result in the S.I., by showing that although our model can generate very different distributions ρ(R), the values we obtain for φ_J cannot be distinguished if jamming is investigated using non-swap dynamics. However, for swap dynamics we expect the situation to change dramatically: since stability requires much more coordinated packings, they presumably need to be denser too. We denote the jamming packing fraction for swap φ_c ≡ lim_{∆→0} φ(∆). The inset of Fig. S3.A confirms that φ_c increases significantly as ρ(R) broadens. To quantify this effect we consider φ(∆, ᾱ), as shown in the main panel. Assuming a scaling form φ(∆, ᾱ) − φ_J ∼ ᾱ^β f(∆/ᾱ), where f is some scaling function, and requiring that it satisfies the known result φ − φ_J ∼ ∆ for the jamming transition when ∆ ≫ ᾱ (so that the ᾱ-dependence drops out at large argument), implies β = 1. Since the coordination does not change for ∆ ≪ ᾱ, we expect this to be true for the structure overall and for φ, implying that f(x) ∼ x⁰ as x → 0. These predictions are essentially confirmed in Fig. S3.B. Note however that the best scaling collapse is found for β = 0.83 < 1. These deviations are likely caused by finite-size effects, known to be much stronger for φ than for the coordination or vibrational properties [32], and which may thus be present for our systems of N = 484 particles.
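The collapse itself is a mechanical operation; a minimal sketch (with synthetic placeholder data, since the measured φ(∆, ᾱ) is not reproduced here) is:

```python
import numpy as np

# Sketch of the scaling collapse phi(Delta, abar) - phi_J ~ abar**beta * f(Delta/abar).
# phi_J and the synthetic "data" below are placeholders; beta = 0.83 is the
# best-collapse value quoted in the text (beta = 1 is the naive prediction).
phi_J, beta = 0.842, 0.83            # phi_J is illustrative, not a measured value
deltas = np.logspace(-5, -1, 9)
curves = []
for abar in (1e-3, 1e-2, 1e-1):
    phi = phi_J + abar**beta * (1.0 + deltas / abar)   # stand-in for phi(Delta, abar)
    x = deltas / abar                                  # collapsed abscissa
    y = (phi - phi_J) / abar**beta                     # collapsed ordinate
    curves.append(np.interp(1.0, x, y))                # sample the master curve at x = 1
print(curves)   # identical values: all curves lie on one master curve f(x)
```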
E. Vibrational properties
We computed the Hessian H_swap and diagonalized it (see details in the S.I.) to extract the density of states D(ω), as shown in Fig. S4.A for different pressures at fixed polydispersity. As expected, at low particle overlap ∆ two bands appear in the spectrum. The lowest-frequency band presents a plateau above some frequency scale ω*_swap which satisfies ω*_swap ∼ √∆, as shown in the inset, as expected if the structure were marginally stable. As shown in the S.I., in the absence of pre-stress the minimal eigenvalues of the Hessian increase manyfold, again a signature of marginal stability [18]. Further evidence appears in Fig. S4.B, showing D(ω) at fixed ∆ = 10⁻⁴ for varying polydispersity. ω*_swap essentially does not depend on ᾱ, as shown in the inset, as expected for marginal packings if the pressure is fixed. The cut-off frequency ω_i of the low-frequency plateau scales as ω_i ∼ √k_R ∼ 1/√ᾱ, as predicted above.
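For reference, extracting D(ω) from a computed Hessian amounts to a diagonalization and a histogram; a minimal sketch is:

```python
import numpy as np

def density_of_states(H, nbins=60):
    """Extract D(omega) from a symmetric Hessian: diagonalize, take
    omega = sqrt(eigenvalue) for the stable directions, and histogram."""
    lam = np.linalg.eigvalsh(H)
    omega = np.sqrt(lam[lam > 1e-12])       # drop zero and unstable modes
    hist, edges = np.histogram(omega, bins=nbins, density=True)
    return 0.5 * (edges[1:] + edges[:-1]), hist
```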
IV. GLASS TRANSITION
We now turn to the glass transition, which always takes place at a sizeable distance from the jamming transition [30]: for example, for hard discs, φ_g ≈ 0.78 and φ_c ≈ 0.85. A similar difference of packing fraction occurs by compressing soft spheres at overlap ∆ ≈ 0.05, as illustrated in Fig. S5(d). From the arguments above, we expect that if the poly-dispersity is sufficiently large, vibrational properties will be strongly affected even far away from jamming, in particular near the glass transition.
The direct consequence of this fact is that the energy landscape will be affected by swap, which will in turn affect the glass transition. At high energy, configurations are unstable (they are saddles with many unstable directions), whereas below some characteristic energy minima appear. However, since stability is strictly more demanding with swap, this characteristic energy must be reduced when swap is allowed for. We prove this point in Fig. S5(a,b), where inherent structures of energy U∞ are obtained after using a steepest descent for non-swap dynamics. These configurations are not stable for our generalised steepest descent that lets particles deform, which leads to configurations of energy Uα < U∞. This effect is stronger near jamming in relative terms, as shown in Fig. S5(c), but remains significant away from jamming if the poly-dispersity is broad enough. It corresponds, for example, to a reduction of energy of 25% at ∆ = 0.05 for our α = 0.06. We show in the inset of that panel that the relative shift of energy induced by swap (U∞ − Uα)/Uα is proportional to α and inversely proportional to ∆ when ∆ is large enough, consistent with what we found for the structure in Eq. (S.10).

FIG. S5: (a,b) Energies of inherent structures obtained without swap, U∞, and with swap, Uα (open symbols), as a function of the dimensionless pressure ∆ imposed during the quenches, for the values of ᾱ shown in the legend. (c) The ratio U∞/Uα: the effect is stronger near jamming but remains significant even far from jamming if the system is sufficiently poly-disperse. Inset: in relative terms, the shift of the energy of inherent structures (U∞ − Uα)/Uα is proportional to α and inversely proportional to ∆ when ∆ is large enough. (d) The packing fraction φc obtained after swap is turned on is larger than φ∞ obtained for non-swap dynamics, an effect that is stronger near jamming.
Thus, as the temperature is lowered in these liquids, the Goldstein temperature where activation sets in will be smaller when swap is allowed for. This analysis thus predicts an entire range of temperature where the non-swap dynamics is slowed down by activation, whereas the swap dynamics can flow along unstable modes. More quantitatively, we predict the shift of glass transition temperature ∆T_g/T_g induced by swap to be proportional to α, consistent with the observation that very broad distributions lead to large swap effects [7]. We also predict that ∆T_g/T_g is inversely proportional to the distance to jamming ∆ when this quantity is well-defined (e.g. for soft spheres, but also to some extent for Lennard-Jones potentials [26,33]) and large enough. In real space, the unstable modes that render activation useless involve both translational degrees of freedom and the swelling or shrinking of the particles. We show an example of such a mode in Fig. S6, corresponding to the softest mode of the generalised Hessian obtained with parameters α = 0.06 and ∆ = 10⁻². It illustrates that the particle displacements are not necessarily divergence-free when swap is allowed, since the system can locally compress or expand by changing the particle sizes.
This interpretation of swap acceleration is consistent with the observation that the dynamics is less collective with swap at the temperature where the non-swap dynamics is activated, since the system can rearrange locally without jumping over barriers if there are enough unstable modes. Collective dynamics is expected only when these modes become less abundant at lower temperatures. Likewise, we expect the Debye-Waller factor to be larger with swap, since the vibrational spectrum is softer. Note that these arguments are not restricted to finite-range interactions. We expect them to apply as well to Lennard-Jones potentials, for example, where the abundance of degrees of freedom vs. the number of strong interactions is also known to affect the vibrational spectrum [26,33].
V. HARD SPHERE SYSTEMS
Our arguments above consider the energy landscape. For interaction potentials which are very sharp, nonlinearities induced by thermal fluctuations are important, and the vibrational properties of a glassy configuration at finite temperature T can differ significantly from those of its inherent structure obtained by quenching it rapidly. Here we consider the extreme case of hard spheres, where the energy is always zero and cannot be used to define vibrational modes. Instead, by averaging on vibrational time scales within a glassy configuration, a local free energy can be defined [21-23], where particles that are colliding within that state interact with a logarithmic potential. This description is exact near jamming and systematic deviations are expected away from it [34]. However, in practice, the Hessian defined from this free energy captures well the fluctuations of particle positions and the vibrational dynamics throughout the glass phase [22]. (This procedure can be pursued to include thermal effects in soft spheres as well [35].)
Stability and vibrational properties can be computed in terms of this Hessian for non-swap dynamics [21,22]. Salient results are shown in the simplified diagram of Fig. S7. Once again, two key determinants of stability are the typical gap between interacting particles, ∆̃ ≡ T/(p R₀^d), relative to the particle radius, and the excess coordination δz, where the coordination is defined from the network of particles that are colliding within a glassy state. A marginal stability line separates stable and unstable configurations, as illustrated in Fig. S7, whose asymptotic behaviour follows δz ∼ ∆̃^((2+2θ)/(6+2θ)) where θ ≈ 0.41 [35]. (Strictly speaking, this line will depend slightly on the system preparation, but this dependence is expected to be modest, and is irrelevant for the present discussion.)
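Plugging θ ≈ 0.41 into the exponent gives a number that is easy to keep in mind:

```latex
\delta z \sim \tilde{\Delta}^{\,\frac{2+2\theta}{6+2\theta}}, \qquad
\frac{2+2\theta}{6+2\theta}\bigg|_{\theta=0.41} = \frac{2.82}{6.82} \approx 0.41,
```

so the marginal line grows roughly as the 0.41 power of the gap.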
Under a slow compression the system follows a line (in red) in the (∆̃, δz) plane. Mechanical stability is reached only for some ∆̃ < ∆̃₀, a characteristic onset gap where the dynamics crosses over to an activated regime in which vibrational modes become stable, consistent with Goldstein's proposal. In these materials, deeper in the glass phase the system eventually returns to the stability line, and undergoes a sequence of buckling events that leave it marginally stable [18,19,22]. Marginal stability implies the presence of soft elastic modes (distinct from Goldstone modes) up to nearly zero frequency, and fixes the scaling properties of both structure and vibrations as jamming is approached [18,19]. These results, valid in finite dimensions, have been quantitatively confirmed in infinite-dimension calculations [20,23,36]. In that case, the point where buckling sets in was argued to be a sharp transition, coined the Gardner transition, where the free energy landscape fractures in a hierarchical way [36], as supported by numerical studies [37]. For very rapid quenches, it was argued that the entire glass phase should be marginal [22,36].
How is this picture affected by swap? Our arguments for the generalised Hessian of the soft sphere system essentially go through unchanged for the generalised Hessian of the free energy in the hard sphere system. Once again, stability becomes more demanding with swap, and the marginal stability line is shifted to higher coordination in the (∆̃, δz) plane, as represented in the right panel of Fig. S7.

FIG. S7: Log-log representation of the stability diagram in the coordination δz = z − z_c and ∆̃ plane for continuously poly-disperse thermal hard spheres with non-swap (Left) and swap (Right) dynamics. Note that for hard spheres ∆̃ is independent of temperature and vanishes at jamming. In the Left panel, the blue line separates mechanically stable and unstable configurations. The red line indicates the trajectory of a system under a slow compression. When ∆̃ decreases toward the onset gap ∆̃₀, metastable states appear and the dynamics becomes activated and spatially correlated. In the glass phase, the red trajectory will depend on the compression rate, but will eventually reach the blue line at some (rate-dependent) ∆̃_G. When this occurs, a buckling or Gardner transition takes place where the material becomes marginally stable, leading to a power-law relation between ∆̃ and δz. Right: for swap dynamics, stability is more demanding and is achieved only on the green line, which differs strictly from the blue one. Thus the onset gap decreases to some value ∆̃₀ˢ < ∆̃₀: the dynamics become activated and correlated at larger densities, shifting the position of the glass transition. Marginality is still expected beyond some pressure ∆̃_Gˢ, but leads to a plateau value for the coordination, indicating that isostaticity is lost.

Thus the glass transition is shifted
toward higher packing fractions. At smaller gap ∆̃ (corresponding to the approach of jamming), stability implies δz ≥ α^((2+2θ)/(6+2θ)), again implying that isostaticity is lost with swap. We conjecture that, just as for soft particles, marginal stability is reached in the glass phase, which would correspond to a Gardner transition in infinite dimensions.
VI. CONCLUSION
In swap algorithms, the dynamics is governed by an effective potential V({r}, {R}) that describes both the particles' interactions and their ability to deform. As a result, we have shown that vibrational and elastic properties are softened when swaps are allowed for, while thermodynamic quantities are strictly preserved (when thermal equilibrium is reached). This result supports that the cross-over temperature T₀ where mechanical stability appears and dynamics becomes activated must be reduced with swap, with T₀^swap < T₀, leading to a natural explanation as to why the glass transition then occurs at a lower temperature T_g^swap < T_g. Secondly, swap must strongly affect the structure of the glass phase. This is particularly striking near the jamming transition that occurs in hard and soft spheres, where we predict that well-known key properties such as isostaticity must disappear. We have confirmed these predictions numerically, and found that for rapid quenches the effective potential V({r}, {R}) appears to be marginally stable throughout the glass phase.
Concerning the glass transition, our work does not specify the mechanism by which activation occurs in glasses, but it does support that swap delays the temperature where activation is required to relax, which potentially explains several previous observations of swap algorithms [5][6][7][8]. Possible theories to describe the mechanism by which activation occurs in glasses include elastic [38] and facilitation models [39]. We do think however that theories based on a growing thermodynamic length will be hard to reconcile with the notion that some collective modes do not see any barriers at all.
Our analysis also makes additional qualitative testable predictions. By increasing continuously the width α of the radii distribution ρ(R), we predict that T_g^swap(α) will smoothly decrease, while T_g(α) should be essentially unchanged, with (T_g(α) − T_g^swap(α))/T_g(α) ∝ α and, more specifically, ∝ α/∆ for soft spheres. Furthermore, many studies have analyzed correlations between dynamics and vibrational modes, see e.g. [12,13,22,40], which can be repeated to relate the swap dynamics to the spectrum of the effective potential V({r}, {R}). Near T_g, we predict the latter to have more abundant modes at low or negative frequencies than the much-studied Hessian of the potential energy, and its softest modes to be better predictors of further relaxation processes. Lastly, the present analysis suggests that adding additional degrees of freedom (such as changing the shape of the particles, and not only their size) will increase even further the difference between swap and non-swap dynamics.
Finally, we have shown that ultra-stable glasses can be built on the computer, simply by descending along the effective potential V({r}, {R}). As illustrated in Fig. S7, these configurations must sit strictly inside the stable region of the regular dynamics (i.e. at a finite distance from the blue line). As a consequence, the usual potential energy landscape U({r}) around the obtained configurations does not display excess soft anomalous modes at very low frequency, even near the jamming transition: these modes are gapped. This result must hold for the ground state too (which must be stable toward swap) and, by continuity, also for low-temperature equilibrated states. It may explain why marginal stability (and the Gardner transition leading to it) could not be observed in protocols where a thermal quench was used from swap-generated configurations [41]. It would be very interesting to see if other well-known excitations of low-temperature glassy solids are also gapped in these configurations, including two-level systems, reported to be almost absent in experimental ultra-stable glasses [42].

Model details and packing-generation protocol

The pairwise potential term reads

V_pair({r}, {R}) = Σ_{i<j} (k/2) (R_i + R_j − r_ij)² Θ(R_i + R_j − r_ij),   (S.2)

where k is a stiffness, set to unity, r_ij is the distance between the i-th and j-th particles, and Θ(x) is the Heaviside step function. The chemical potential associated with the radii is given by Eq. (S.3), where k̃_R is the stiffness of the potential associated with the radii {R}, which serves as a parameter in our study and is set as described below. R_i⁽⁰⁾ denotes the intrinsic radius of the i-th particle; in each configuration we randomly assigned the intrinsic radii R_i⁽⁰⁾, with half of the particles belonging to each of the two types.

Configurations in mechanical equilibrium at zero temperature and at a desired target pressure p₀ were generated as follows: we start by initializing systems with random particle positions at packing fraction φ = 1.2, and set the initial radii to R_i = R_i⁽⁰⁾. We then minimize the total potential energy V({r}, {R}) at a target dimensionless pressure ∆₀ = 10⁻¹ using a combination of the FIRE algorithm [43] and the Berendsen barostat [44] (see further discussion about the latter below). Each packing is then used as the initial condition for sequentially generating lower-pressure packings, as demonstrated in Fig. S8. Following this protocol, we generated 1000 independent packings at each target dimensionless pressure, ranging from ∆₀ = 10⁻¹ down to ∆₀ = 10⁻⁵. For each target pressure, we set the stiffness k̃_R of the chemical potential of the radii according to k̃_R = p₀/ᾱ, and vary ᾱ systematically between 3 × 10⁻⁴ and 1. During minimizations we calculate a characteristic net-force scale F_typ ≡ (Σ_i ||F⃗_i||²/N)^(1/2), where F⃗_i = −∂V/∂r⃗_i is the net force acting on the i-th particle, whose coordinates are denoted by r⃗_i. A packing is considered to be in mechanical equilibrium when F_typ drops below 10⁻⁸ ∆₀.

FIG. S8: Dimensionless pressure ∆ as a function of iteration number for a packing-generating simulation starting from one particular initial condition. Each step of the staircase-shaped signal corresponds to the production of a packing at some desired target pressure. The criterion for convergence to mechanical equilibrium at each pressure is explained in the text. We produced packings ranging from ∆ = 10⁻¹ to ∆ = 10⁻⁵.

Berendsen barostat parameter: The FIRE algorithm [43] features equations of motion which are to be integrated as in conventional MD simulations. We exploit this feature and embed the Berendsen barostat [44] in our Verlet integration scheme [45].
This amounts to scaling the simulation cell volume at each step by a factor χ computed from the mismatch between the instantaneous and target pressures, where δt is the (dynamical) integration time step and ξ is a parameter that determines how quickly the instantaneous dimensionless pressure converges to the target dimensionless pressure [45]. Fig. S9 shows the ξ-dependence of the convergence of the instantaneous dimensionless pressure ∆ to the target value ∆₀. Below ξ = 0.01, the behavior of ∆ as a function of iteration number is similar; we therefore set ξ = 0.01 throughout this work.
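A minimal sketch of one barostat step and of the convergence test described above follows. The linear form of χ is our assumption, consistent with the standard Berendsen scheme (the paper's exact expression is not reproduced); the F_typ criterion follows the text:

```python
import numpy as np

def berendsen_step(box_length, pos, delta, delta0, xi=0.01, dt=0.01):
    """One Berendsen-barostat volume rescaling (sketch). chi < 1 shrinks
    the box when the instantaneous pressure is below target, raising it."""
    chi = 1.0 - xi * dt * (delta0 - delta)       # assumed linear form of chi
    s = chi ** (1.0 / pos.shape[1])              # per-dimension length factor
    return box_length * s, pos * s

def converged(forces, delta0, tol=1e-8):
    """Mechanical-equilibrium test: F_typ = sqrt(sum_i |F_i|^2 / N) < tol * Delta_0."""
    f_typ = np.sqrt(np.sum(forces**2) / forces.shape[0])
    return f_typ < tol * delta0
```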
Computation of Hswap
The total potential energy V({r}, {R}) of our model system is spelled out in Eqs. (S.1)-(S.3). We next work out the expansion of V in terms of small displacements δr⃗_i of the particle positions and small fluctuations δR_i of the radii, about a mechanical equilibrium configuration with energy V₀. To quadratic order the expansion can be written in bra-ket notation as δV = ½⟨δℓ|H_swap|δℓ⟩, where |δℓ⟩ is a (d+1)N-dimensional vector which concatenates the spatial displacements δr⃗_i and the fluctuations of the radii δR_i: (δr⃗₁, δr⃗₂, ..., δr⃗_N, δR₁, ..., δR_N). The operator H_swap is built from a positional block H_{Nd,Nd}, a radius block Q_{N,N}, and a coupling block T_{N,Nd}. The elements of the submatrix H_{Nd,Nd} are rank-2 tensors (d = 2); for our harmonic interactions they take the standard form

(H_{Nd,Nd})_{ij} = c_ij [−k n̂_ij ⊗ n̂_ij + (f_ij/r_ij) n̂⊥_ij ⊗ n̂⊥_ij] + δ_{i,j} Σ_l [k n̂_il ⊗ n̂_il − (f_il/r_il) n̂⊥_il ⊗ n̂⊥_il],

where n̂_ij is a unit vector connecting the i-th and j-th particles, n̂⊥_ij is a unit vector perpendicular to n̂_ij, ⊗ is the outer product, c_ij = 1 when particles i and j are in contact (and zero otherwise), δ_{i,j} is the Kronecker delta, and the sum runs over all particles l in contact with particle i. The elements of the submatrix Q_{N,N} are scalars, given by Eq. (S.7). The matrix T_{N,Nd} is not diagonal, and each of its elements can be expressed as a vector with two components. The eigenvectors of H_swap are the normal modes of the system, and the eigenvalues are the squared vibrational frequencies ω². The distribution of these frequencies is known as the density of states D(ω).

FIG. S11: Packing fraction φ as a function of dimensionless pressure ∆ measured for packings in which radii are not allowed to fluctuate, and whose distribution of radii is borrowed from swap packings generated at ∆ = 10⁻⁴ and at different values of ᾱ, as indicated by the legend. The inset shows a zoom into very small dimensionless pressures, demonstrating that the value of φ_c depends very weakly on the borrowed distribution of radii of the swap packings, as determined by the parameter ᾱ.

Effect of the pre-stress on the vibrational modes: When a system of purely repulsive particles is at mechanical equilibrium, forces f_ij are exerted between particles in contact. These forces give rise to a term in the expansion of the energy of the form −½ Σ_⟨ij⟩ (f_ij/r_ij) (δr⃗⊥_ij)²,
often referred to as the "pre-stress term". For plane waves, the energy contributed by this term can be shown to be very small. However, for the soft modes present when the system is close to the marginal stability limit, this term reduces the energy of the modes by a quantity proportional to the pressure [18].
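The contribution of a single contact to the positional block, with the pre-stress part made explicit, can be sketched as follows. This is the standard harmonic-sphere expression written out for illustration (the paper's own equation is not reproduced in the text); in d = 2 the transverse projector reduces to n̂⊥ ⊗ n̂⊥:

```python
import numpy as np

def contact_block(n_ij, k=1.0, f_ij=0.0, r_ij=1.0, prestress=True):
    """Diagonal Hessian contribution of one contact (positional sector).
    The stiff part acts along the contact normal n_ij; the pre-stress part,
    proportional to the contact force f_ij, acts transversely and softens
    the modes (hence the minus sign)."""
    d = len(n_ij)
    block = k * np.outer(n_ij, n_ij)
    if prestress:
        block -= (f_ij / r_ij) * (np.eye(d) - np.outer(n_ij, n_ij))
    return block
```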
Marginal stability corresponds to a buckling transition where the destabilising effect of pre-stress exactly compensates the stabilising effect of being over-constrained.
In this scenario where the two effects compensate each other, the eigenvalue of the softest (non-Goldstone) modes of the Hessian in the absence of pre-stress, ω̃², must be much larger than ω*² computed when pre-stress is present. To demonstrate this, we have calculated the density of states for systems while including and excluding the pre-stress term. The results are shown in Fig. S10, where it is found that near jamming ω*²/ω̃² ≈ 5%, which is consistent with what was previously found for the traditional jamming transition [19] and supports that the system is very close to (but not exactly at) marginal stability.
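Rather than assembling the analytic blocks of H_swap, the full (d+1)N × (d+1)N operator can also be obtained numerically by central finite differences of the energy; the following sketch does exactly that, assuming an `energy` callable (such as the swap_energy sketch above, with the concatenated vector unpacked inside):

```python
import numpy as np

def numerical_hessian(energy, x, h=1e-5):
    """Central finite-difference Hessian of a scalar energy function.
    x concatenates all particle positions and all radii, so the result
    has the same (d+1)N x (d+1)N structure as H_swap."""
    n = len(x)
    H = np.zeros((n, n))
    for a in range(n):
        for b in range(a, n):
            ea = np.zeros(n); ea[a] = h
            eb = np.zeros(n); eb[b] = h
            H[a, b] = (energy(x + ea + eb) - energy(x + ea - eb)
                       - energy(x - ea + eb) + energy(x - ea - eb)) / (4 * h * h)
            H[b, a] = H[a, b]
    return H
```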
Packing fraction
In the main text we have shown that the jamming packing fraction φ_c generated using the swap dynamics increases when ρ(R) broadens, i.e. for larger values of the parameter ᾱ that controls the stiffness of the potential energy associated with the radii. Here we compare the dependence of the packing fraction on pressure as measured for systems in which the radii are not allowed to fluctuate. In this test, we borrow the distribution of radii from swap packings generated at ∆ = 10⁻⁴ and at different values of ᾱ; as shown in Fig. S11, the resulting value of φ_c depends only very weakly on the borrowed distribution.
From meniscal resection to meniscal repair: a journey of the last decade
The last decade has shed some light on the darkness surrounding the treatment of meniscal injuries. A significant amount of work has been done in order to provide a more scientific approach to the treatment of the injured meniscus.
Degenerative meniscal lesions and traumatic meniscal tears differ in terms of aetiology and pathology and require differentiated diagnostic algorithms and treatments. A new terminology has, therefore, been defined by the ESSKA meniscus consensus project. A traumatic meniscal tear is caused by an acute and sufficiently serious trauma to the knee. In contrast, a degenerative meniscal lesion occurs due to repetitive minor injuries and lacks a sufficiently serious single trauma. The second European Consensus has studied the epidemiology, diagnosis and treatment of traumatic meniscal tears [14]. It follows the first consensus on the management of degenerative meniscal lesions, which was published in 2017 [3]. Both consensus reports combined basic science and knowledge of the clinical experience of more than 80 knee experts throughout Europe [3,14,23].
There are major differences in terms of the management of acute traumatic meniscal tears and degenerative meniscal lesions. While magnetic resonance imaging (MRI) should be performed early in traumatic tears for a satisfactory assessment of the pathology, there is no need for an immediate MRI when a degenerative meniscal lesion is suspected. An MRI will not only provide information about the location, type and size of the meniscal tear but also about the cartilage and ligament integrity, which is important for correct surgical planning.
Complete meniscal resection was the primary treatment option for any type of meniscal tear in the past. The orthopaedic mindset has changed markedly over the last decade. The importance of the meniscus in terms of shock absorption, knee stability, load distribution, lubrication, proprioception and neuromuscular function has been well recognised over the years [1,4,17]. Based on these findings, there is general agreement about preserving the meniscus whenever possible [21,22]. Studies have shown that the preservation of traumatic meniscal tears does in fact reduce the risk of early osteoarthritis [18,26]. Scientific evidence and a better understanding of meniscal pathology on the one hand and improvement in the surgical technique on the other have shifted the pendulum towards meniscal preservation. New suture devices enable fast, safe and easy techniques for handling meniscal repair using an all-inside technique when compared with outside-in or inside-out techniques and have become the standard for many surgeons. In the same way, a behavioural change with respect to meniscal treatment has occurred among many orthopaedic surgeons. In France, meniscal resection has decreased by 21.4%, while there was a threefold increase in meniscal repairs between 2005 and 2017 [10]. The reduction in meniscal resection was especially apparent in patients below 40 years of age. The meniscus repair ratio also increased in Japan from 9 to 25% between 2011 and 2016 [13]. This might be a direct result of the improved understanding of meniscal pathologies.
Acute meniscal tears are more frequent than previously thought. These tears are mainly longitudinal in nature, including bucket handle tears or radial and some types of root tear. They are often specifically associated with ligament injuries. Our awareness and understanding of these specific injury types have increased. Meniscal pathologies such as ramp lesions or root tears, for instance, have a high incidence and have recently attracted more interest. It has been reported that ramp lesions occur in up to 25% of anterior cruciate ligament (ACL) ruptures, with a higher incidence in contact injuries when compared with noncontact injuries [24,25]. Ramp lesions are a good example when it comes to illustrating the increased awareness of meniscal injuries. This type of lesion is often missed when only the standard anteromedial and anterolateral portals are used [27]. In fact, most of them only become apparent if a notch view or an additional posteromedial portal is used to probe the meniscus. Increased anterior translation or delay in anterior cruciate ligament surgery are the main factors causing ramp lesions [27]. Likewise, lateral posterior root tears are also common in conjunction with ACL injuries. Although they are more easily recognised than ramp lesions, they were mostly neglected in the past. During the last decade, a novel description of specific classification systems and the development of new fixation techniques for these lesions were introduced [15].
Meniscal repair is a clinically successful procedure in more than 85% of patients; however, not all the repaired menisci heal completely [2,12,19]. These data show that there is still a need for a further understanding of meniscal anatomy, biology and healing [28]. This will remain an important field of research over the coming years. Initial reports on the use of platelet-rich plasma (PRP) or more specific combinations of growth factors or stem cells have revealed promising results in improving meniscal healing, especially in the avascular zone [11,20]. However, the risk of misusing biological treatments in a poorly regulated environment is real and we are therefore obliged to be very careful with general recommendations with regard to these techniques.
In the past, meniscal repair was predominantly recommended for younger patients. This has changed, and nowadays the patient's biological age, as opposed to chronological age, is gaining momentum in the decision-making process of repair versus resection. A specific injury in middle-aged patients is a posterior root tear of the medial meniscus. This is common in this age group and its natural history has shown rapid progression of osteoarthritis or the development of subchondral insufficiency fractures and osteonecrosis. Early reports on root repair have shown clinical improvement; however, meniscal extrusion was not reduced and the progression of osteoarthritis remains the subject of debate [8].
A recent study of a small group was able to show less osteoarthritic progression after posterior medial root repair in patients with an average age of 47 years and Kellgren and Lawrence stage II [6]. A literature review also reported a decrease in the incidence of osteoarthritis when medial meniscus root tears were repaired [9]. More research is definitively needed in this field in the near future before any final conclusion can be drawn.
More recently, meniscal repair has also been used for meniscal lesions of a degenerative nature. This is the case with horizontal and complex lesions, which frequently extend into the meniscal periphery. In the past, the principles of arthroscopic surgery recommended resecting meniscal tissue until a stable peripheral rim was obtained. Today, new concepts, which aim only to remove the loose parts of the tear and stabilise the periphery by vertical suture repair, are emerging [7]. No difference in Lysholm or Knee Osteoarthritis Outcome Scores has been reported when comparing the repair of horizontal and longitudinal meniscal tears in 40-year-old patients after 35 months of follow-up [16].
Preserving not only the meniscal periphery but also as much meniscal tissue as possible sounds logical, given the importance of the meniscus for femorotibial load transmission [5].
Hence, if arthroscopy is performed, how much meniscal tissue should be resected and how much of it is suitable for repair? As yet, this question cannot be answered and the concept of combined resection and repair needs to be further examined in large-scale clinical studies.
This editorial shows that the scope for meniscus repair is greater than before and there is still a need for both more basic science and clinical research in order to identify the best practice when treating different meniscal pathologies.
Funding Open Access funding enabled and organized by Projekt DEAL.
Immune analysis of expression of IL-17 relative ligands and their receptors in bladder cancer: comparison with polyp and cystitis
Background Bladder cancer, cystitis and bladder polyp are among the most common urinary system diseases worldwide. Our former research results show that IL-17A and IL-17 F contribute to the pathogenesis of benign prostatic hyperplasia (BPH) and prostate cancer (Pca), while IL-17E interacting with IL-17RB might have an anti-tumor effect. Results Using immunohistochemistry, we systematically compared immunoreactivity of ligands (IL-17A, E and F) and receptors (IL-17RA, IL-17RB and IL-17RC) of the IL-17 family, infiltration of inflammatory cells and changes of structural cells (fibroblast cells, smooth muscle and vascular endothelial cells) in sections of bladder tissues from subjects with bladder cancer, cystitis and bladder polyp. Compared with subjects with cystitis, immunoreactivity for IL-17A, IL-17 F and IL-17RC was significantly elevated in the group of bladder cancer (p < 0.01), while immunoreactivity of IL-17E, IL-17RA and IL-17RB, and the infiltrating neutrophils, were decreased (p < 0.05). The numbers of infiltrating lymphocytes and phagocytes and CD31+ blood vessels, and the immunoreactivity of CD90+ fibroblasts, were also elevated in patients with bladder cancer compared with those with cystitis. The patterns of IL-17 ligands and receptors, inflammatory cells and structural cells varied among cystitis, bladder polyp and bladder cancer. In bladder cancer, immunoreactivity of IL-17E and IL-17 F was positively correlated with smooth muscle and lymphocytes, respectively. In addition, immunoreactivity of IL-17A and IL-17E was positively correlated with their receptors IL-17RA and IL-17RB, respectively. Conclusions The data suggest that changed patterns of expression of IL-17 cytokine family ligands and receptors might be associated with infiltration of inflammatory cells and changes in structural cells (CD90+ fibroblasts and CD31+ blood vessels), which might also contribute to the occurrence and development of bladder cancer. Electronic supplementary material The online version of this article (doi:10.1186/s12865-016-0174-8) contains supplementary material, which is available to authorized users.
Background
There are about 400,000 individuals with bladder cancer in the world, and 150,000 patients die from the disease every year [1,2]. In the United States bladder cancer is the 5th most common type of cancer, with 72,500 new cases and 15,200 deaths occurring in 2013 [3], while cystitis and polyp are considered high-risk conditions for bladder cancer [4]. It has been shown that multiple bladder polyps and cystitis can progress to malignant disease, depending on their extent and duration [5]. Although many factors may be associated with the pathogenesis and mechanisms of the above diseases, some cytokines, including tumor necrosis factor-α (TNF-α), the interleukin-17 (IL-17) cytokine family and interferon (IFN), are considerably involved in the occurrence and development of cystitis, bladder polyp and bladder cancer [6,7].
The IL-17 cytokine family includes six ligands (IL-17A to IL-17 F) and five receptors (IL-17RA to IL-17RE). Because of their unique and distinct biological functions, most studies have focused on IL-17A and IL-17E in tumors [8,9]. In addition, IL-17 F has also been studied because of its high molecular homology and similar biological functions with IL-17A [10]. Previous studies have shown that both IL-17A and IL-17E can bind to their receptors IL-17RA and IL-17RB, while IL-17 F can bind to its own receptor IL-17RC and/or to IL-17RA to fulfill their biological functions. It has been indicated that IL-17A and IL-17 F are key proinflammatory cytokines in the pathogenesis of inflammatory and autoimmune diseases [11]. Compared with IL-17A/IL-17 F, IL-17E plays an important role in the pathogenesis of asthma and atopic diseases through binding to the heterodimeric complex of IL-17RA/IL-17RB [12]. On the other hand, some studies have also indicated the paradoxes of the pro-tumor or anti-tumor activity of IL-17 family ligands [11,13]. Previous data have shown that macrophages secreting IL-17 family cytokines may play important roles in the pathogenesis of malignant tumors [14]. For example, it has been shown that IL-17A transcripts in peripheral blood mononuclear cells [15] and serum concentrations of IL-17A [16] were significantly higher in subjects with bladder cancer than in controls. In animal experiments, it has been reported that IL-17A promoted bladder cancer growth [17], while IL-17-producing γδ T cells possibly play a key role in the Bacillus Calmette-Guérin (BCG)-induced recruitment of neutrophils to the bladder, which is essential for the antitumor activity against bladder cancer [18]. However, the expression and location of other IL-17 family cytokines and their receptors, and their relationships to bladder disease progression, inflammatory cellular infiltration and structural changes, are still largely unclear in cystitis, bladder polyp and bladder cancer.
In this study we expanded our previous observations in prostate cancer [19] and hypothesized that in bladder tissues, both infiltrating inflammatory cells and structural cells can express IL-17 family cytokines and relevant receptors, and that such expressions can affect not only tissue remodelling but also angiogenesis, which are associated with disease severity and tumorigenesis. We compared expression and location of IL-17 cytokine family IL-17A, IL-17E and IL-17 F and their receptors IL-17RA, IL-17RB and IL-17RC in tissues derived from subjects with cystitis, bladder polyp and bladder cancer in parallel. We also analyzed the relationships between expression of these IL-17 family cytokines and their receptors, infiltration of inflammatory cells and changes of structural cells in these diseases.
Patients and specimens
The study was approved by the Hospital Ethics Committees of the Urinary System Diseases Prevention and Treatment Research Centre of the Affiliated Hospital of Beihua University, Jilin City, Jilin Province, People's Republic of China (approval reference: 2012BH006), as formulated in the World Medical Association Declaration of Helsinki. Written informed consent was provided by all participating subjects, including 23 patients who were biopsied and diagnosed with cystitis and 6 patients with hyperplastic bladder polyps between January 2012 and December 2014. Tumor tissues were collected from 80 patients with transitional cell carcinoma during surgical resection over the same period.
The above diseases were diagnosed as previously described [20,21]. The clinical characteristics of subjects involved in this study are summarized in Table 1.
Immunohistochemistry
Immunohistochemistry was applied to paraffin sections (4 μm thickness) to evaluate the expression and location of IL-17 family ligands (IL-17A, IL-17E and IL-17 F) and their receptors (IL-17RA, IL-17RB and IL-17RC), the infiltration of inflammatory cells (T lymphocytes, macrophages, mast cells and neutrophils) and relevant structural cells (CD90 + fibroblast cells, smooth muscle and CD31 + vascular endothelial cells), as described previously [19]. The sources of antibodies and their optimal dilutions are indicated in Additional file 1: Table S1. A DAB kit (diaminobenzidine, ZhongShan Golden Bridge Biological Company, Beijing, China) was used to visualise positive immunostaining, while an irrelevant, isotype-matched IgG substituted for the primary antibody served as negative control. All slides were analysed blindly by two observers using an Olympus microscope connected to a computer running Image Pro Plus 6.0 software (Media Cybernetics, Maryland, USA). Global immunoreactivity of IL-17 family cytokines as well as CD90 + cells was quantified as the percentage staining of the total stainable area of the sections defined by the haematoxylin counterstaining, while inflammatory cells (such as lymphocytes, macrophages, neutrophils and mast cells) and CD31 + blood vessels were quantified as the numbers of immunoreactive cells per unit area of the entire sections [18].
Statistical analysis
All DAB-stained areas in the immunohistochemistry procedure were measured based on the principle of RGB color deconvolution, using the NIH ImageJ plugin. Depending on section size, 6-20 fields at 200× original magnification were analyzed, and the data were digitized and transferred to NIH ImageJ. Data were analyzed with a statistical package.
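For illustration, the colour-deconvolution step can be reproduced with open tools; the sketch below uses scikit-image's built-in haematoxylin/eosin/DAB separation. The file name and the threshold value are illustrative assumptions, not values from the study:

```python
import numpy as np
from skimage import io
from skimage.color import rgb2hed

# Sketch of DAB-positive area quantification by colour deconvolution,
# in the spirit of the ImageJ/RGB-deconvolution procedure described above.
img = io.imread("section_200x.png")[..., :3]   # hypothetical RGB field at 200x
hed = rgb2hed(img)                             # haematoxylin / eosin / DAB channels
dab = hed[..., 2]
positive = dab > 0.02                          # empirical threshold (assumed)
area_fraction = positive.mean()                # fraction of the analysed area
print(f"DAB-positive area: {100 * area_fraction:.1f}%")
```

The area fraction computed this way corresponds to the "percentage staining of the total stainable area" used above for the global immunoreactivity measures.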
Immunoreactivity and location of IL-17A and IL-17RA
Immunohistochemical staining analysis showed that global IL-17A expression was significantly elevated in tissue sections from bladder polyp and bladder cancer compared with cystitis (Fig. 1a and b, p = 0.001 and p = 0.001, respectively), while there was no significant difference in global IL-17A immunoreactivity between bladder polyp and bladder cancer (p = 0.170). IL-17A expression was located mainly in mononuclear cells, transitional epithelial cells, malignant cells and vascular endothelial cells in bladder cancer (Fig. 1a). In contrast to IL-17A, global IL-17RA immunoreactivity was significantly elevated in cystitis and polyp compared with bladder cancer (Fig. 1c and d, p = 0.001 and p = 0.001, respectively), and was significantly higher in polyps than in cystitis (p = 0.015). The location of IL-17RA immunoreactivity was similar to that of IL-17A, being found mainly in transitional epithelial cells, mononuclear cells in the interstitium, and cancerous cells (Fig. 1c).
Immunoreactivity and location of IL-17E and IL-17RB
Global IL-17E expression was significantly elevated in cystitis compared with bladder cancer (Fig. 2a and b). Global IL-17RB immunoreactivity was significantly higher in the tissue sections from cystitis than in those from bladder cancer (Fig. 2c and d, p = 0.025). IL-17RB was located mainly in mononuclear cells, vascular endothelial cells, smooth muscle cells and some cancerous cells (Fig. 2c).
Immunoreactivity and location of IL-17 F and IL-17RC
Global expression of IL-17 F was significantly greater in the tissue sections from bladder cancer and polyp compared with cystitis (Fig. 3a and b, p = 0.001 and p = 0.008, respectively), but there was no significant difference between bladder cancer and polyp (Fig. 3a and b, p = 0.294). IL-17 F was expressed mainly in mononuclear cells, transitional epithelial cells, vascular endothelial cells and some malignant cells (Fig. 3a). Because IL-17 F can bind not only to IL-17RA but also to IL-17RC to exert biological effects, we also evaluated global immunoreactivity for IL-17RC in all the sections. Global expression of IL-17RC was significantly increased in polyp and bladder cancer compared with cystitis (Fig. 3c and d, p = 0.007 and p = 0.002, respectively), while there was no significant difference between polyp and bladder cancer (p = 0.127). Similar to IL-17 F, IL-17RC immunoreactivity was located principally in mononuclear cells, transitional epithelial cells, vascular endothelial cells and some malignant cells in bladder cancer (Fig. 3c).
Infiltration of T lymphocytes and macrophages
Compared with cystitis, the numbers of CD3 + lymphocytes increased in both polyps and bladder cancers but did not achieve statistical significance (Fig. 4a and b, p = 0.273 and p = 0.384, respectively). These cells were located predominantly in the stroma of polyp and bladder cancer. Like CD3 + lymphocytes, CD68 + macrophages were also located mainly in the stroma of the bladder tissues. The median number of CD68 + macrophages was significantly elevated in bladder cancers compared with cystitis and polyps (Fig. 4c and d, p = 0.001 and p = 0.006, respectively). However, there was no obvious difference between cystitis and polyp (p = 0.3774).
Infiltration of neutrophils and mast cells
To explore the infiltration of other inflammatory cells, we further detected elastase + neutrophils and tryptase + mast cells in the sections from bladder tissues. Both mast cells and neutrophils were located mainly in the stroma. The median numbers of mast cells were slightly higher in cystitis and bladder cancer, but showed no statistical difference compared with polyps (Fig. 5a and b, p = 0.227 and p = 0.123, respectively), and no significant difference was observed between cystitis and bladder cancer (p = 0.440) (Fig. 5a and b). The median number of neutrophils in cystitis was significantly elevated compared with polyp and bladder cancer (Fig. 5c and d, p = 0.023 and p = 0.001, respectively), while there was no obvious difference between polyp and bladder cancer (p = 0.884) (Fig. 5c and d).
Changing of structural cells
We also evaluated changes in several types of structural cells in the tissues, such as α-actin + smooth muscle, CD90 + fibroblast cells and CD31 + vascular endothelial cells. The median expression of smooth muscle cells was slightly decreased in bladder cancer, but there were no statistical differences among the three groups (Fig. 6a and b). The median immunoreactivity of CD90 + cells per unit area of tissue was dramatically elevated in bladder cancer compared with cystitis and polyp (Fig. 6c and d, p = 0.006 and p = 0.038, respectively), but no statistical difference was observed between cystitis and polyps (Fig. 6c and d, p = 0.439). The median number of CD31 + blood vessels was significantly elevated in the tissue sections from bladder cancer compared with those of cystitis (Fig. 7a and b, p = 0.001). Although the median number of CD31 + blood vessels also increased in polyps, this did not achieve statistical significance compared with cystitis (Fig. 7a and b, p = 0.064).
The relationship of expression for IL-17 cytokine family ligands and their receptors, infiltration of inflammatory cells and changes in structural cells in bladder cancer
In order to explore the correlations between IL-17 family cytokines and inflammatory as well as structural cells, we further analyzed correlations between global expression of IL-17 family ligands and their receptors and the relevant cell phenotypes in bladder cancer. Of great interest, global immunoreactivity for IL-17RA correlated significantly with its ligand IL-17A (Fig. 8a, r = 0.298, p = 0.005), while IL-17E correlated significantly with its receptor IL-17RB (Fig. 8b, r = 0.409, p = 0.0001). In addition, global immunoreactivity for α-actin + smooth muscle was significantly correlated with IL-17E (Fig. 8c, r = 0.301, p = 0.001), while the numbers of CD31 + blood vessels correlated significantly with IL-17 F in bladder cancer tissues (Fig. 8d, r = 0.301, p = 0.013).
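The reported r and p values correspond to pairwise correlation tests across sections; a minimal sketch of such a test (with dummy data, and using a rank correlation since the excerpt does not name the exact test) is:

```python
from scipy.stats import spearmanr

# Sketch of the correlation analysis between marker immunoreactivities,
# e.g. IL-17A vs. IL-17RA per bladder-cancer section. The arrays below are
# placeholders for the per-section measurements, not study data.
il17a  = [12.1, 8.3, 15.0, 9.7, 11.2]    # % stained area per section (dummy)
il17ra = [10.4, 7.9, 13.2, 8.8, 10.1]
r, p = spearmanr(il17a, il17ra)
print(f"r = {r:.3f}, p = {p:.3g}")
```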
Discussion
Although a number of previous studies appear to show that IL-17 family cytokines have the capability to promote the occurrence and development of cystitis, polyp and bladder cancer [22][23][24][25], there has been no systematic comparison of IL-17 family ligands and their corresponding receptors in these urinary system diseases. Here we analyzed the IL-17 family cytokines IL-17A, E and F and their receptors IL-17RA, RB and RC in bladder tissues from patients with cystitis, polyp and bladder cancer. We also simultaneously examined inflammatory cellular infiltration, structural changes and angiogenesis in these urinary disorders. Accumulation of infiltrating inflammatory cells is a typical feature of cystitis, and possibly of bladder cancer as well. However, proinflammatory responses are a double-edged sword, with both protective and tumorigenic roles. On the one hand, accumulated phagocytes might cause further inflammation in bladder tissues, which possibly exerts an anti-tumor activity through generating more inflammatory mediators, including IL-17 family cytokines. On the other hand, these mediators and cytokines might, in turn, promote and enhance the abnormal cellular proliferation associated with polyp and bladder cancer, through generating various mediators, including inflammatory cytokines such as interleukin-6 (IL-6) and transforming growth factor-β (TGF-β) [23,25,26]. Our data showed that expression of IL-17 cytokine family ligands and their relevant receptors was accompanied by markedly distinct profiles of infiltrating inflammatory cells in the various urinary disorders. For example, elevated expression of IL-17A, IL-17 F and IL-17RC and increased numbers of macrophages were observed in bladder cancer. One possible explanation is that IL-17A, IL-17 F and IL-17RC can be expressed by these macrophages in bladder cancer. However, expression of IL-17A, IL-17RA and IL-17RC also increased in polyps, despite decreased numbers of macrophages. This suggests that macrophages might not be the major source of these IL-17 family members in polyps.
Additionally, changes in other structural cells, such as endothelial cells and fibroblasts, were also observed in bladder cancer, which might contribute to the increased immunoreactivity for IL-17A, IL-17 F and IL-17RC. It is known that IL-17A and IL-17 F can bind to both IL-17RA and IL-17RC. For example, IL-17 F exerts effects on angiogenesis, while the increased vasculature in turn contributes to elevated IL-17A or IL-17 F production in bladder cancer [27], forming a positive feedback loop that promotes malignant proliferation. In the present study, expression of IL-17A and IL-17 F increased but that of IL-17RA decreased in bladder cancer, suggesting that IL-17A and IL-17 F are possibly expressed by different cellular populations and act mainly through binding IL-17RC to exert their biological effects.
Although we observed some immunoreactivity for IL-17A and IL-17 F located in malignant cells, there is thus far a lack of systematic studies on whether malignant cells in bladder cancer express IL-17A or IL-17 F. Clearly, further experiments remain to be performed in this regard.
IL-17E, through binding to its own receptor IL-17RB or to IL-17RA, shows different properties from IL-17A and IL-17 F in the tumor field. Our data showed that IL-17E and IL-17RB expression was elevated in cystitis but reduced in bladder cancer, possibly indicating that IL-17E might also be involved in the pathogenesis of benign urinary diseases. On the one hand, IL-17E might participate in the pathogenesis of inflammation through acting on its receptor IL-17RB expressed on inflammatory and structural cells, resulting in inflammation and alteration of structural cells. On the other hand, the reduced expression of IL-17E and IL-17RB in bladder cancer is interesting. Such reduction is possibly because damaged or abnormal structural cells and malignant cells express less IL-17E and its receptor IL-17RB, which might impair the anti-tumor effect of IL-17E in bladder cancer. Again, however, the details and underlying mechanisms of IL-17E in the pathogenesis of bladder cancer remain to be explored.
Our results also show that there were more vascular endothelial cells in bladder cancer compared with cystitis. It is well known that in bladder cancer the over-proliferation of cancer cells requires an increased blood supply, while these malignant cells and the increased angiogenesis might affect the proliferation of other cells. On the other hand, vascular endothelial cells can express IL-17 family cytokines, which might attract more inflammatory cells into malignant tissues and promote the occurrence and development of bladder cancer. Apart from blood vessels, increased immunoreactivity for CD90 + fibroblasts was observed in bladder cancer, suggesting that these fibroblasts might also contribute to the expression of IL-17 family ligands and receptors, and possibly to the pathogenesis of bladder cancer through enhancing tumor growth [28,29].
Since an inflammatory microenvironment is a feature of urinary tract diseases, while members of the IL-17 cytokine family are closely linked with inflammation, we also examined the status of common inflammatory cell types in the present study. The increased numbers of CD3 + T lymphocytes and CD68 + macrophages in bladder cancer suggest that these cells might participate in the disease process. The macrophage is an important component of the inflammatory and tumor microenvironment and plays a key role in the progression of tumors. It has been shown that macrophages have a dual function depending on tumor type [30]; their function is, however, extremely complex and has not yet been fully elucidated. Neutrophil infiltration is a typical characteristic of acute inflammation. We also observed more neutrophils in cystitis tissue than in polyp and cancer, accompanied by increased IL-17RA, IL-17E and IL-17RB expression. Although neutrophils might not express IL-17E, these cells do express IL-17RA and/or IL-17RB.
In addition, the close correlations between expression of IL-17A-IL-17RA and IL-17E-IL-17RB suggest that these distinct signaling axes might play important roles in bladder cancer. It is known that smooth muscle can express IL-17E. Thus, it is reasonable to presume that these cells might be a major cellular source of IL-17E, which may partly explain the decreased IL-17E expression as a consequence of the reduction of smooth muscle in bladder cancer. At the same time, a slight but significant correlation between the immunoreactivity of IL-17 F and the numbers of CD3 + T lymphocytes was observed in bladder cancer, suggesting that CD3 + T lymphocytes might also be a major cellular source of IL-17 F. Whether these IL-17 F + CD3 + T cells are a subgroup of Th17 cells remains to be investigated.
Obviously, our study has some limitations. Firstly, it is still uncertain whether the specific markers assessed by immunohistochemistry have been accurately identified; clearly, other experiments need to be done to confirm the roles of IL-17A and IL-17E signaling in bladder cancer occurrence and development. Secondly, there were no entirely normal bladder specimens as controls, which might affect the comparison among the groups.
Conclusion
Taken together, our data indicate that changes in inflammatory and structural cells might be associated with the variable expression of IL-17 family cytokines, while increased blood endothelial cells and fibroblasts might be associated with bladder cancer occurrence and development.
Additional file
Additional file 1: Table S1. Antibodies used in the present study.
Josephson junction microwave modulators for qubit control
We demonstrate Josephson junction based double-balanced mixer and phase shifter circuits operating at 6-10 GHz, and integrate these components to implement both a monolithic amplitude/phase vector modulator and a quadrature mixer. The devices are actuated by flux signals, dissipate no power on chip, exhibit input saturation powers in excess of 1 nW, and provide cryogenic microwave modulation solutions for integrated control of superconducting qubits.
I. INTRODUCTION
Control of superconducting qubits has, to date, relied almost exclusively on room-temperature generated signals. While state-of-the-art room temperature control techniques have been tremendously successful, 1,2 the engineering challenges associated with the delivery of high-bandwidth microwave signals to the qubit chip, including thermal management, signal integrity, 3 packaging, wiring, 4 and device layout, are poised to become key bottlenecks in larger quantum information systems.
A nascent strategy for alleviating the room-to-cryo bandwidth bottleneck is to integrate microwave multiplexing, routing, and modulation circuits in the cryogenic environment alongside the qubits. 5 Several groups have recently demonstrated Josephson junction based amplifiers 6,7 and circulators, 8,9 as well as switches for on-chip routing; 10-12 however, there is still a need for microwave modulation technologies 13,14 that can meet the dissipation and power requirements associated with integrated qubit control. Here, we describe a double-balanced mixer and a phase shifter that are built in a superconducting integrated circuit with Josephson junction active elements. We use these components to implement a monolithic Josephson junction vector modulator, as well as an I/Q quadrature modulator, a device that is used ubiquitously to generate shaped microwave pulses for qubit control.
The devices operate in the 6-10 GHz band with no on-chip power dissipation, greater than 1 nW saturation power, greater than 25 dB LO/RF isolation, and with a DC-850 MHz IF bandwidth.
II. DOUBLE-BALANCED MIXER
The prototypical room-temperature double-balanced mixer is built with four diodes arranged in a bridge configuration, with the LO and RF ports of the mixer coupled respectively to the common and balanced modes of the bridge. 15 When operated as a modulator, an IF signal biases the diodes pairwise to un-balance the bridge so that a positive (negative) IF voltage causes a portion of the LO signal to appear in-phase (180° out of phase) across the RF port, while zero IF voltage leaves the bridge completely balanced and the RF port isolated by symmetry. We implement a superconducting version of the double-balanced mixer by relying on the flux-tunable inductance of Josephson junctions in place of the voltage-tunable resistance of the diodes. To this end, our design must address two challenges that are common to the implementation of microwave devices with Josephson junctions rather than semiconducting components. First, the impedance presented by the superconducting circuit embedding the junctions is typically low and inductive, and requires proper matching to the 50 Ω environment. We address this challenge by embedding the junctions in a bandpass filter network that takes advantage of the junctions' inductive impedance. Second, operation of Josephson junction devices in the several-GHz frequency range typically calls for junctions with critical currents on the order of ℏω0/2eZ0 ∼ 1 µA (e.g. in a Z0 = 20 Ω circuit operating at ω0/2π = 10 GHz); this limits the saturation power in these devices to picowatt levels. By using junction arrays in our devices instead of single junctions, we can increase their saturation power to the nanowatt scale, which makes them relevant to qubit control applications.
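As a quick numerical check of the critical-current scale quoted above (our own sketch, using only the example values given in the text), the following Python snippet evaluates ℏω0/2eZ0 for a Z0 = 20 Ω circuit at ω0/2π = 10 GHz:

```python
import math

hbar = 1.054571817e-34  # reduced Planck constant (J*s)
e = 1.602176634e-19     # elementary charge (C)

def junction_critical_current(f0_hz, z0_ohm):
    """Characteristic critical current hbar*omega0 / (2*e*Z0)."""
    omega0 = 2 * math.pi * f0_hz
    return hbar * omega0 / (2 * e * z0_ohm)

ic = junction_critical_current(10e9, 20.0)
print(f"I_c ~ {ic * 1e6:.2f} uA")  # ~1.03 uA, consistent with the ~1 uA scale quoted
```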
To design the Josephson double-balanced mixer we first construct a coupled-resonator band-pass embedding network following the procedure outlined in Ref. 10. We use a 4-pole network with Chebyshev response, a center frequency of ω0/2π = 8 GHz, and a target bandwidth of 4 GHz. The resulting network, shown in Fig. 1(a), has four parallel LC resonators R1-R4, coupled with admittance inverters {J_ij} whose values, having units of 1/Ω, are calculated from the filter prototype coefficients {g_i}. 16 We implement the first and last inverters, J01 and J45, using capacitors as in Ref. 10, and use inductive transformers T1 and T2 to implement inverters J12 and J34. The remaining inverter, J23, is inductive and can be replaced by a Josephson junction whose critical current is set to I_c = (ℏ/2e) ω0 J23.
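To make the inverter-to-junction mapping concrete, here is a minimal sketch (ours, not the authors'; the J23 value below is illustrative, chosen to reproduce the 0.72 µA per-junction critical current quoted in the next paragraph):

```python
import math

hbar = 1.054571817e-34  # J*s
e = 1.602176634e-19     # C

def ic_from_inverter(f0_hz, j23_per_ohm):
    """Critical current realizing an inductive admittance inverter J23 at omega0.

    A single junction acts as an inductor L_J = hbar/(2*e*I_c); setting
    L_J = 1/(omega0*J23) gives I_c = (hbar/2e) * omega0 * J23.
    """
    omega0 = 2 * math.pi * f0_hz
    return (hbar / (2 * e)) * omega0 * j23_per_ohm

# Illustrative inverter value only: J23 = 0.0435 / Ohm at the 8 GHz design center
print(f"I_c = {ic_from_inverter(8e9, 0.0435) * 1e6:.2f} uA")  # ~0.72 uA
```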
The circuit of Fig. 1(a), with a Josephson junction inserted for J23, is functionally similar to the microwave switch described in Ref. 10. Here, however, the transformer coupling of the filter's inner section via T1,2 allows us to balance this section, as shown in Fig. 1. The double-balanced Josephson junction mixer operates as follows. When the polarity of the flux induced in the bridge by the IF current is the same as that induced by the DC flux bias, the resulting circulating currents cancel on junctions J1 and J2 while adding up on junctions J3 and J4. This establishes a low-inductance, impedance-matched direct signal path through J1 and J2, while junctions J3 and J4 are in a high-inductance state which suppresses the transmission along the crossed path. In this state, which we call the 'on' state, the LO signal propagates directly to the RF port. When the polarity of the IF current is opposite to that of the DC flux bias, in what we call the 'inverted' state, the induced currents sum on J1 and J2 instead, leaving the crossed path through J3 and J4 matched. In this state the LO signal propagates to the RF port with an additional 180 degree phase shift. When the IF current is zero, in the 'off' state, the bridge is balanced and no signal propagates to the output. While the mixer can be constructed with only four junctions, the critical current of each of the junctions in the design above would be I_c = 0.72 µA, limiting the saturation power to P_sat < −90 dBm: a sufficient power level in qubit readout applications, but much lower than the −60 dBm level typically desired for qubit control. A common method for increasing the saturation power of Josephson devices is to replace the single junction with a series array of N junctions, each having a critical current of N·I_c. 22 This arrangement, however, is susceptible to phase slips 23 and cannot sustain the relatively large phase bias, a significant fraction of Φ0/2 per junction, required to operate the mixer. To stabilize the junction array against phase slips, we use the configuration shown in Fig. 2, which renders the individual loops (essentially low-inductance rf-SQUIDs) mono-stable for all phase biases.
The array, therefore, does not have a lower energy state that can be reached by a phase slip event. In total, our mixer, with each of J1-J4 replaced by an 80-junction array, contains 320 junctions.
If the array is sufficiently long that edge effects can be neglected, we can approximate its inductance by assuming translational invariance 24 and finding the equilibrium phase drop across each of the junctions, δ0, as a function of an applied phase bias, φ_ext, across each stage of the array. The total inductance of the array is then found from the second derivative of u, the potential energy per array stage, with respect to the applied phase. Evaluating the derivatives in Eq. (2), we obtain the flux-dependent array inductance that we discuss below; a numerical sketch of this recipe follows after this paragraph. We have measured a total of four chips from two different wafers with comparable results. All measurements were performed at 4.2 K in liquid helium.
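The displayed equations did not survive extraction, so purely as an illustration, here is a minimal numerical sketch of the translational-invariance recipe under one plausible stage model (our assumption: each stage is a low-inductance rf-SQUID with energy u(δ) = −E_J cos δ + E_L (δ − φ_ext)²/2): solve u′(δ0) = 0 for the equilibrium phase, then read the stage inductance off the curvature of u.

```python
import math
from scipy.optimize import brentq

PHI0_BAR = 1.054571817e-34 / (2 * 1.602176634e-19)  # reduced flux quantum hbar/2e (Wb)

def stage_inductance(ic_amp, l_loop_h, phi_ext):
    """Inductance of one rf-SQUID-like array stage at phase bias phi_ext.

    Assumed stage energy: u(d) = -E_J*cos(d) + E_L*(d - phi_ext)^2 / 2, with
    E_J = PHI0_BAR*I_c and E_L = PHI0_BAR^2 / L_loop. The equilibrium phase d0
    solves u'(d0) = 0; the inductance follows from the curvature,
    L_stage = PHI0_BAR^2 / u''(d0).
    """
    ej = PHI0_BAR * ic_amp
    el = PHI0_BAR**2 / l_loop_h
    # u'(d) = E_J*sin(d) + E_L*(d - phi_ext); mono-stable when E_L > E_J,
    # which is the array-stabilization requirement described in the text.
    dU = lambda d: ej * math.sin(d) + el * (d - phi_ext)
    d0 = brentq(dU, phi_ext - math.pi, phi_ext + math.pi)
    d2U = ej * math.cos(d0) + el  # curvature of u at equilibrium
    return PHI0_BAR**2 / d2U

# Illustrative numbers (not from the paper): 80 stages, 10 uA junctions, 5 pH loops
n = 80
L = n * stage_inductance(10e-6, 5e-12, phi_ext=0.3 * math.pi)
print(f"array inductance ~ {L * 1e12:.1f} pH")
```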
We first characterize the operation of the mixer under static bias conditions. Fig. 3(a) shows the transmission, S21, of the mixer as a function of frequency and IF voltage as applied to the chip via a room-temperature 1 kΩ resistor. The input power to the device was −76 dBm; we observed no difference in the device behavior at lower input powers. The raw data in the figure represent S21 at the reference plane of the network analyzer, and include a −50 dB fixed attenuator at the device's input and a +26 dB amplifier at its output. Next, we characterize the response of the mixer to a sinusoidal signal applied to its IF port. We fed a 7.5 GHz, −76 dBm carrier tone to the mixer LO port and monitored the output from its RF port with a spectrum analyzer; the spectra are shown in Fig. 4(a). The modulation is balanced: the carrier modulates through zero and inverts in the negative portions of the IF cycle, as also illustrated in Fig. 3(c). When the carrier power is increased beyond approximately −76 dBm we start observing spurious sidebands at multiples of the IF frequency; we have not, however, characterized the mixer's nonlinearity or intermodulation products. In Fig. 4(b) we trace the magnitude of the upper (blue) and lower (red) sidebands, as well as that of the carrier (green), as we vary the frequency ω_m/2π of the IF signal from 200 MHz to 1 GHz. The data show that the carrier remains suppressed throughout the whole modulation frequency range, and that the magnitude of the sidebands rolls off with a −3 dB point at 850 MHz, somewhat lower than the designed 1 GHz cut-off frequency of the on-chip low-pass filter on the IF port.
Because the mixer is non-dissipative and has no gain, it is not expected to contribute its own noise to the signal. However, noise associated with the DC flux and IF control lines will, in general, result in both amplitude- and phase-modulation noise imprinted on the RF output. From the data in Fig. 3(c) we can calculate the mixer sensitivity to control noise, and if the DC and IF lines are both matched and thermalized to 4 K then we expect both amplitude and phase noise power density to be less than approximately −157 dBc/Hz, referenced to the LO input power. Flux noise in the balanced bridge and critical current noise in the junctions will make an additional contribution to the overall modulation noise on the signal.
III. PHASE SHIFTER
Having demonstrated the operation of a Josephson junction double-balanced mixer, we continue by describing a Josephson junction based phase shifter, a second component that we used in our implementation of a vector modulator. A schematic of the phase shifter is presented in Fig. 5(a). Two over-coupled flux-tunable LC resonators, each containing a 66-junction array similar to Fig. 2(a) (indicated by a junction symbol in the schematic), are connected to the 0° and 90° ports of an on-chip coupled-line 90-degree hybrid. The input signal splits evenly between the two arms of the hybrid, reflects off the two resonators, and re-combines constructively at the 'isolated' port of the hybrid, which serves as the output port for the device. If the reflections off the two resonators have the same magnitude and phase, none of the reflected power reaches the input port, resulting in unity transmission between the input and the output ports. By applying flux to the two resonators in tandem we change their frequency with respect to that of the input signal and therefore the phase of the reflected power. We designed the resonators with capacitances and inductances of 1.48 pF and 820 pH respectively, and the inductance of the junction array at zero flux was set to 205 pH.
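To illustrate the reflection-type phase-shifting principle, the following sketch (ours, not the authors'; it models each resonator as a lossless parallel LC directly terminating the 50 Ω line, with the flux-tunable array inductance in series with the geometric inductance, a simplification of the actual coupled circuit) computes the reflection phase as the array inductance is tuned:

```python
import numpy as np

Z0 = 50.0        # line impedance (Ohm)
C = 1.48e-12     # resonator capacitance from the text (F)
L_GEO = 820e-12  # geometric inductance from the text (H)

def reflection_phase(f_hz, l_array_h):
    """Phase (degrees) of the reflection off a lossless parallel-LC load."""
    w = 2 * np.pi * f_hz
    y = 1j * w * C + 1.0 / (1j * w * (L_GEO + l_array_h))  # tank admittance
    gamma = (1.0 / y - Z0) / (1.0 / y + Z0)                # |gamma| = 1 (lossless)
    return np.angle(gamma, deg=True)

# Probe at the zero-flux resonance of this simplified model and tune the array
f_probe = 1.0 / (2 * np.pi * np.sqrt((L_GEO + 205e-12) * C))
for l_arr in (165e-12, 205e-12, 245e-12):
    print(f"L_array = {l_arr*1e12:.0f} pH -> phase {reflection_phase(f_probe, l_arr):+.1f} deg")
```

Since the tank is lossless, the reflection magnitude stays at unity and only its phase moves as the array inductance (and hence the resonance) is flux-tuned, which is exactly the behavior the phase shifter exploits.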
IV. MICROWAVE MODULATORS
The device shown in Fig. 6(a) is a monolithic vector modulator, constructed by concatenating the phase shifter described above and the balanced mixer of Fig. 2(b). The device is controlled by an IF flux that modulates the amplitude of the carrier, and a phase-control flux that modulates the carrier phase, as shown in the inset. The amplitude/phase vector modulator can ideally have zero conversion loss, as compared to a standard I/Q quadrature modulator that must have at least 6 dB of conversion loss, a potential advantage in applications where signal losses must be minimized. In Fig. 6 we also observe the effects of impedance mismatches between the output of the phase shifter and the input of the mixer.
In a final experiment, we used the vector modulator, Fig. 6(a), and the balanced mixer, Fig. 2(b), co-located on the same chip, together with a pair of room-temperature Wilkinson power splitters/combiners to implement an I/Q quadrature modulator as shown in Fig. 7(a).
The phase shifter portion of the vector modulator, controlled here by a DC voltage source in series with a 30 dB attenuator, was used in this experiment to adjust the relative phase between the 7.5 GHz LO signals feeding the mixers. We drove the I and Q ports of the chip with ω_m/2π = 200 MHz, −30 dBm sinusoidal signals using two signal generators with a fixed relative phase of 90 degrees, and monitored the output of the modulator using a spectrum analyzer. The 90 degree phase relation between the I and Q baseband signals allows us to perform single-sideband modulation of the carrier: we can select the lower sideband or the upper sideband by setting the relative LO phase to +90 or −90 degrees, respectively, using the on-chip phase control (a numerical sketch of this sideband-selection arithmetic is given below).

We further demonstrated a monolithic vector modulator and an I/Q quadrature mixer built with these components. These devices operate with no on-chip power dissipation, 4 GHz LO and RF bandwidth centered at 8 GHz, and DC-850 MHz IF bandwidth. We have shown that by using Josephson junction arrays in place of single junctions we can significantly increase the saturation power of this type of device, from the typical picowatt levels common in single-junction devices to the nanowatt level in the devices presented here. These devices provide microwave modulation solutions that operate in the cryogenic environment, can be integrated with or near a quantum processor, and join a growing family of microwave switches, amplifiers, and circulators that enables integration of qubit control and readout functionality in the cryogenic space.
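As the numerical illustration promised above (our sketch of ideal I/Q upconversion, independent of the measured device; the sign convention of the ±90 degree mapping may differ from the experimental one), the following snippet checks which sideband survives for each relative LO phase:

```python
import numpy as np

fs = 100e9                      # sample rate (Hz), well above the carrier
t = np.arange(0, 2e-6, 1 / fs)  # 2 us of signal
fc, fm = 7.5e9, 200e6           # carrier and IF frequencies from the experiment

def ssb(lo_phase_deg):
    """Ideal I/Q upconversion with a given relative LO phase between mixers."""
    i_bb, q_bb = np.cos(2 * np.pi * fm * t), np.sin(2 * np.pi * fm * t)
    phi = np.deg2rad(lo_phase_deg)
    return i_bb * np.cos(2 * np.pi * fc * t) + q_bb * np.cos(2 * np.pi * fc * t + phi)

for phase in (+90, -90):
    spec = np.abs(np.fft.rfft(ssb(phase)))
    freqs = np.fft.rfftfreq(len(t), 1 / fs)
    peak = freqs[np.argmax(spec)]
    side = "lower" if peak < fc else "upper"
    print(f"LO phase {phase:+d} deg -> dominant tone at {peak/1e9:.2f} GHz ({side} sideband)")
```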
|
Being and becoming a teacher: Professional Wellbeing Amidst COVID-19 Pandemic
Covid-19 has had a significant impact on higher education worldwide, not just in the Philippines. This study described the professional quality of life of college teachers in terms of compassion satisfaction and compassion fatigue, and ascertained the relationship between the socio-demographic profile of the college teachers and their professional quality of life. This study used a quantitative-descriptive method of research with convenience sampling. A total of 76 college instructors were surveyed (44 from public schools and 32 from private schools). Based on the results of the study, it is concluded that the college teachers have high compassion satisfaction: they derive satisfaction from being able to teach people, are proud of what they can do as teachers, and are happy with their chosen profession. These college teachers experience work-related secondary traumatic stress at a moderate degree level, feeling worn out because of their work as teachers and overwhelmed by their teaching load and other work-related activities. However, the secondary traumatic stress of the college teachers did not result in a high level of burnout; the teachers still experience burnout, but only at a low degree level.
INTRODUCTION
Covid-19 has had a significant impact on higher education worldwide, not just in the Philippines. The fact is that lockdown and working and studying from home have a substantial and detrimental effect on the learning process in school (Burgess & Sievertsen, 2020). Further, it was verified that teachers' quality of life deteriorated throughout the pandemic, which according to some studies resulted in negative effects on teachers' mental and physical health (Lizana et al., 2021). On the contrary, despite the moderate to high threat posed by COVID-19, teachers appear to have coped with the virus's impact, as evidenced by the mild impact on the mental-health dimension of their QoL nearly six months after the country's huge lockdown (Rabacal et al., 2020).
The development of learning disorders throughout the learning process as a result of the lack of school activities has a negative influence on vulnerable groups, such as young pupils, the socially disadvantaged, those with learning disabilities, and families whose work arrangements make it difficult to supervise children studying at home (Brown et al., 2020). As suggested, the government's participation in education, through policies, enables students and instructors to have access to the internet and acquire literacy skills (Supriyanto et al., 2020).
To accomplish their objectives, healthy companies must utilize their human resources; they must integrate their programs and empower their workers (Santos & Nocum, 2020). Further, it is recommended that a company create a self-care program for contract employees who are suffering from burnout (Santos & De Jesus, 2020).
In light of the foregoing, the researchers would like to assess the college instructors' professional quality of life during the COVID-19 scenario. Additionally, the researchers sought to make recommendations based on the obtained data to assist college faculty and their respective institutions in coping with the pandemic scenario.
II. OBJECTIVES OF THE STUDY
This study described the professional quality of life of the college teachers in terms of compassion satisfaction and compassion fatigue, and ascertained the relationship between the socio-demographic profile of the college teachers and their professional quality of life.
III. METHODOLOGY
This study used a quantitative-descriptive research technique, which entails the description, recording, analysis, and interpretation of a real-world condition. The descriptive technique is appropriate when obtaining information on the current state of affairs (Creswell, 2014). The study adopted the Professional Quality of Life: Compassion Satisfaction and Fatigue questionnaire. Convenience sampling was used in this study. A total of 76 college instructors were surveyed (44 from public schools and 32 from private schools). Survey research was employed in this study because it incorporates scientific techniques through critical examination and assessment of source materials, data analysis and interpretation, and generalization and prediction (Salaria, 2012). The first domain, compassion satisfaction, is about the pleasure a teacher can experience from being able to help others and to make a positive difference in the world.
IV. RESULTS AND DISCUSSIONS
Based on the results, the compassion satisfaction of the college teachers has a sum of weighted means of 42.63, corresponding to a high degree level. Item 1 got the highest mean of 4.56, with a verbal description of 'very often', among the items in this domain. This item suggests that these college teachers very often get satisfaction from being able to teach people. This is further supported by Items 8 and 10, which each got a mean rating of 4.42 with a verbal description of 'very often'; these describe how the college teachers are proud of what they can do as teachers and how happy they are to have chosen teaching as their work. For the burnout domain, the sum of the weighted means is 21, corresponding to a low degree level. This implies that the college teachers report low feelings of hopelessness and few difficulties in dealing with work or in doing their job effectively.
However, it can be noticed that Item 8 (mean = 3.21; verbal interpretation = 'sometimes') has the highest mean among the items in this domain. This item describes the feeling of being overwhelmed because the teaching load seems endless, which can be attributed to the other schoolwork of the college teachers aside from teaching.
Other reasons that the college teachers have a low degree level of burnout can be found in their responses to Items 1, 2, 5 and 10, all with the verbal description 'very often'. These illustrate that the college teachers are happy (mean = 1.63), feel connected to others (mean = 1.53), have beliefs that sustain them (mean = 1.79), and are very caring persons (mean = 1.63). In the secondary traumatic stress domain, Item 1 got the highest mean of 3.79 with a verbal description of 'often'. This item describes how the college teachers are often preoccupied with more than one person they teach. Further, they jump or are startled by unexpected sounds (mean = 3.42, verbal interpretation = 'often'), an experience usually observed in people under traumatic stress. Based on Table 4, the professional quality of life of the college teachers combines a high degree level of compassion satisfaction (mean = 42.63), a low degree level of burnout (mean = 21), and an average degree level of secondary traumatic stress (mean = 26.11).
This implies a positive result: despite the secondary traumatic stress the college teachers had experienced, it resulted only in low burnout. In fact, these college teachers find satisfaction in their work and in being able to teach others; they experience happy thoughts, feel successful, are happy with the work they do, want to continue doing it, and believe they can make a difference.
* Correlation is significant at the 0.05 level (2-tailed).
Results show that highest educational attainment is positively related to secondary traumatic stress (r = .475, p < .05). This indicates that college teachers with higher educational attainment have higher secondary traumatic stress, although the computed r value indicates only a moderate correlation. Furthermore, secondary traumatic stress is positively related to burnout (r = .650, p < .01), a strong correlation. This implies that college teachers with high exposure to secondary traumatic stress are more likely to have a high level of burnout.
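For readers who wish to reproduce this type of analysis, a minimal sketch follows (the score arrays are hypothetical placeholders for illustration, not the study's data); it computes the Pearson correlation coefficient and its two-tailed p-value:

```python
from scipy.stats import pearsonr

# Hypothetical per-teacher scale totals, for illustration only (not the study's data)
stress = [22, 30, 26, 35, 24, 29, 31, 27, 33, 25]
burnout = [18, 25, 20, 28, 19, 24, 26, 21, 27, 20]

r, p = pearsonr(stress, burnout)  # two-tailed p-value by default
print(f"r = {r:.3f}, p = {p:.4f}")
```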
V. CONCLUSIONS AND RECOMMENDATIONS
Based on the results of the study, it is concluded that the college teachers have high compassion satisfaction: they derive satisfaction from being able to teach people, are proud of what they can do as teachers, and are happy with their chosen profession. These college teachers experience work-related secondary traumatic stress at a moderate degree level, feeling worn out because of their work as teachers and overwhelmed by their teaching load and other work-related activities. However, the secondary traumatic stress of the college teachers did not result in a high level of burnout; the teachers still experience burnout, but only at a low degree level.
Left and right ventricular global longitudinal strain assessment together with biomarker evaluation may have a predictive and prognostic role in patients qualified for hematopoietic stem cell transplantation due to hematopoietic and lymphoid malignancies – a pilot study description
The hematopoietic stem cell transplantation (HSCT) procedure is considered a cardiovascular burden. This is due to the potentially cardiotoxic cytostatic agents used beforehand and the risks associated with peri-transplant procedures. We designed a pilot study to determine the clinical utility of the new ST2 marker alongside routinely assessed cardiac parameters in HSCT recipients. Based on previous cardio-oncology experience in lung and prostate cancer, we can confirm the prognostic and predictive value of classic cardiac biomarkers and modern echocardiography parameters such as global longitudinal strain of the left and right ventricle. After conducting this pilot study we can create a predictive and prognostic model for patients undergoing HSCT. This will greatly enrich our clinical practice, especially in treating older people.
Introduction
The procedure of bone marrow transplantation (HSCT, hematopoietic stem cell transplantation), due to the cytostatic agents used and the risks associated with peri-transplant procedures, is considered a cardiovascular burden. Therefore, assessing the cardiopulmonary fitness and the risk factors for cardiovascular complications of patients qualified for this procedure is important. The most recent European Society of Cardiology (ESC) recommendations on cardio-oncology from August 2022, prepared jointly with the European Hematology Association, recommend echocardiography in patients before HSCT, but there is no information as to which echocardiographic parameters may have prognostic significance in this specific patient population [1]. An earlier expert document published in 2020 by the Cardio-Oncology Study Group of the Heart Failure Association of the European Society of Cardiology highlighted the importance of baseline cardio-oncologic risk stratification and described extensively the aspects of anticancer therapy, but completely omitted the issue of patients qualified for HSCT [2].
The European Society of Cardiology's recommendations on heart failure, published in August 2021, present a list of anticancer drugs that can cause heart failure [3]. Several of these drugs are used in hematology, including, but not limited to, patients undergoing a subsequent bone marrow transplant procedure. Earlier, in 2020, the results of an international European registry (CARDIO-TOX registry) had been published, which showed a significant association between the diagnosis of severe cardiotoxicity of anticancer drugs and the risk of premature death and significantly shorter overall survival [4]. The next part of the registry revealed that the co-occurrence of classic risk factors (e.g., hypertension, diabetes, older age) has additional negative implications for the prognosis of these patients [5]. A new definition of cardiotoxicity proposed by the International Cardio-Oncology Society was published online in December 2021 [6]. For the first time, an attempt was made to standardize diagnoses for the degree of myocardial damage caused by oncology or hematology drugs. The document attempts to reach a consensus between the positions of various cardiovascular societies (the European and American ones); moreover, it refers to the existing evaluation criteria in oncology clinical trials proposed by the US National Cancer Institute (Common Terminology Criteria for Adverse Events, CTCAE). The newly proposed diagnostics are largely based on left ventricular ejection fraction (LVEF) assessment and global longitudinal strain (GLS) analysis of the left ventricle. Currently, the role of GLS assessment in patients undergoing HSCT is unknown.
In 2020, various ESC expert groups also proposed rules for echocardiographic and biomarker-based monitoring of patients receiving potentially cardiotoxic anticancer treatment [7,8].However, the monitoring rules in these documents were not dedicated to patients undergoing HSCT.
The determination of troponins and the N-terminal fragment of pro-B-type natriuretic peptide (NT-proBNP) is commonly used in cardiovascular risk assessment, but both markers have their limitations. A useful new marker in cardio-oncology may be the ST2 protein, encoded by the IL1RL1 gene. So far, it has been shown to be involved in the immune response. An increase in its concentration is observed in response to myocardial stretch (which may be relevant in various situations related to the infectious complications we observe after bone marrow transplantation). Data published to date indicate that high ST2 levels may correlate with adverse outcomes in myocardial infarction, acute coronary syndrome, and worsening heart failure [9][10][11]. The unfavorable significance of high ST2 levels before the HSCT procedure has been demonstrated in the pediatric population [12]. Thus, ST2 may have predictive and clinical relevance in the qualification of patients for HSCT, as an additional parameter to the risk parameters recognized so far, i.e. age, the HCT-CI risk score, and the intensity of conditioning, as well as new potential markers to optimize the outcomes of this procedure [13][14][15][16].
Research hypothesis
Changes in left and right ventricular GLS assessment and biomarker concentrations may reflect even subclinical cardiovascular damage and play both a predictive and prognostic role during anticancer treatment and HSCT procedures.
High ST2 and vascular inflammation mediator levels could correlate with the degree of myocardial damage during anticancer treatment and HSCT procedures. In addition, high ST2 levels in the serum of patients undergoing HSCT may constitute an adverse immune-related factor during the HSCT procedure.
Study design
In order to confirm the research hypothesis, we designed a pilot study to determine the clinical utility of modern echocardiography, classic cardiac biomarkers, and the new ST2 marker, among other immunological markers. The study will be carried out in two stages (Fig. 1):
STAGE I - diagnostic
Cardiac echocardiography with assessment of left and right ventricular GLS, left atrial strain, and determination of biomarkers (troponin T, NT-proBNP, ST2) will be performed in patients with a diagnosis of hematologic malignancy (1) as part of the qualification for the bone marrow transplant procedure and (2) three months after HSCT.
The study will include one hundred patients in the project's first two years. Patient material (serum and EDTA plasma) will be banked consecutively.
The design of the study makes it possible for samples to be shipped from other centers to the central laboratory within 24 h under refrigerated conditions. The biomarker determinations (including troponin T, NT-proBNP and ST2) will be performed by ELISA and capillary nanoimmunoelectrophoresis (CNIA) using, among others, a specific antibody against ST2, as the material is obtained. As a reference, levels of troponin T and NT-proBNP will be determined in a medical diagnostic laboratory using certified IVD procedures. For CNIA evaluation, immunoglobulins and albumin will be removed from the serum portions, followed by labeling of the samples with fluorescent markers and separation on a matrix in capillaries, where the specific determination of the protein under study will also occur. Moreover, using flow cytometry and the LEGENDplex™ Human Vascular Inflammation Panel 2, the following molecules will be quantified: sST2, sRAGE, TIE-2, sCD40L, TIE-1, sFlt-1, LIGHT, TNF-α, PlGF, IL-6, IL-18, IL-10, and CCL2 (MCP-1).
STAGE II - observational
The results of tests performed at baseline will be correlated with the incidence of complications during the HSCT procedure. Additionally, the results of tests performed before the HSCT procedure will be compared with echocardiography findings and biomarker levels after the HSCT procedure, then correlated with the type of prior anticancer treatment, and finally related to the prognosis of the patients (effectiveness of bone marrow transplantation, cardiovascular complications, subsequent hospitalizations for any cause including cardiac hospitalizations, and cardiovascular and all-cause mortality).
Clinical utility
The implementation of this two-stage design should allow the objectives of the project to be fulfilled: (1) to determine the relevance of the diagnosis of left and right ventricular GLS abnormalities and of selected biomarkers (including troponin T, NT-proBNP, ST2 and other relevant proteins included in the panel) to the clinical course of HSCT; (2) to determine the relationship between the type of prior hemato-oncologic treatment and the risk of severe, moderate and mild CTRCD (cancer therapy-related cardiac dysfunction) in patients qualified for HSCT; (3) to identify echocardiographic criteria and changes in the concentrations of the above biomarkers that allow early identification of patients at risk of cardiovascular complications of the HSCT procedure, including subclinical myocardial damage, vascular disorders (thromboembolic events, hypertension, etc.) and different types of arrhythmias.
The possibility of shipping samples to the central laboratory where the biomarkers will be assessed opens an opportunity for this pilot experiment to evolve into a multicenter study. We will consider doing so after the preliminary results of the pilot stage have been analyzed.
Characteristics of the first 30 enrolled patients
It was originally assumed that the study would be a single-center experience, conducted in the Polish reference hospital for hematology. The basic goal was to test the initial hypothesis, i.e. that modern echocardiographic parameters can play a predictive and prognostic role in HSCT. The clinical characteristics of the patients before HSCT are presented in Table 1.
The clinical cardiovascular risk profile observed among the included patients seems to be prognostically more favorable than the risk profile observed among patients with newly diagnosed hematological malignancies [17]. Patients with concomitant cardiovascular diseases, or even only risk factors for these diseases, are less likely to be considered for HSCT. That is a further reason why prognostic markers should be sought among sensitive echocardiographic parameters and biomarkers, as they can identify patients even with subclinical myocardial damage, regardless of its ischemic, toxic, or immune-related etiology.
The echocardiographic characteristics of the included patients before HSCT are presented in Table 2. It should be noted that LV GLS is missing in 7 patients and RV GLS in 9 patients, mainly due to image quality. In 14 patients LA GLS is missing, mainly due to insufficient visibility and the very high variability of this parameter.
A shortfall of the current stage of the study was the fact that among the included patients there were some cases in whom a satisfactory image of the left ventricular endocardium was not obtained. This prevented reliable assessment of myocardial strain. The main reasons for this study limitation were obesity and tachycardia. Nevertheless, we plan to attempt to evaluate GLS in the following stages of our study. If the preliminary results confirm that the assessed modern echocardiographic parameters can play a role in predicting short- and long-term outcomes in HSCT, the next phase of the study will involve multicenter cooperation. The Echocardiography Laboratory at the Institute of Hematology and Transfusion Medicine in Warsaw will remain the central CORE ECHO LAB. One of the reasons is the fact that said Laboratory is a unique Polish center that has the status of a Cardio-Oncology Center of Excellence with the highest designation, GOLD, confirmed by the International Cardio-Oncology Society.
Conclusion
Based on previous cardio-oncology experience in lung and prostate cancer, we can confirm the prognostic and predictive value of classic cardiac biomarkers (D-dimer, NT-proBNP, troponin), modern biomarkers, and echocardiography parameters such as GLS of the left and right ventricle [18][19][20][21]. After conducting this pilot study, we can create a predictive and prognostic model for patients undergoing HSCT. This will greatly enrich our clinical practice, especially in treating older people.
Fig. 1 Design of the study
Table 1 Clinical characteristics of the first 30 patients with hematological malignancies before HSCT included in the study
Table 2 Baseline echocardiography characteristics of the 30 included patients. *Estimated in 23 patients / **estimated in 21 patients / ***estimated in 16 patients
A closed-form approximation for pricing geometric Istanbul options
The Istanbul options were first introduced by Michel Jacques in 1997. These derivatives are considered as an extension of the Asian options. In this paper, we propose an analytical approximation formula for a geometric Istanbul call option (GIC) under the Black-Scholes model. Our approximate pricing formula is obtained in closed-form using a second-order Taylor expansion. We compare our theoretical results with those of Monte-Carlo simulations using the control variates method. Finally, we study the effects of changes in the price of the underlying asset on the value of GIC.
Introduction
The Istanbul option (IO) is an exotic option whose payoff depends on whether the price of the underlying asset has reached a certain previously fixed threshold, named the barrier. If this barrier is reached before maturity, an Asian option (AO) is activated and the average is calculated from the first moment when the price of the underlying asset reaches the barrier until maturity. However, if the barrier is not reached, a standard European option (EO) is activated at maturity. The IO can therefore be seen as a hybrid option that has the characteristics of both an AO and an EO. This option is also similar to the AO with barrier studied by Forsyth and Vetzal (1999) and Hsu et al. (2012), the main difference being that in the IO the calculation of the average is activated from the first hitting time of the barrier and not from the acquisition date of the contract.
If we consider a Black and Scholes (1973) model, the valuation of products such as arithmetic Asian options (AAOs) becomes very difficult, since the hypothesis that the underlying asset price is a geometric Brownian motion does not allow one to obtain a closed-form pricing formula: the distribution of a sum of log-normal random variables is not known in theory. However, the price of an AAO can be approximated in practice by Monte-Carlo (MC) simulations with variance reduction techniques (see Zhang (2009), Mehrdoust (2015) and Lu et al. (2019)). It is also possible to approach the price of an AAO with a Taylor expansion as in Ju (2014).
For the geometric Asian option (GAO), the pricing formula is known in closed-form (see Kemna and Vorst (1990) for the call and Angus (1999) for more examples of payoffs). Recently, the GAOs with barrier have been studied by Aimi and Guardasoni (2017) and Aimi et al. (2018). The price of this type of options has no closed-form expression and is increasingly the subject of financial research. The options involving a geometric average are also studied in the context of stochastic volatility (for examples, see Wong and Cheung (2004) and Hubalek and Sgarra (2011)).
In Jacques (1997), the arithmetic Istanbul call option (AIC) is studied in continuous and discrete time trading. The price of the AIC is obtained through a log-normal approximation with the moment-matching method (for more details on this approach, see Levy (1992)). In this article, we focus our attention on the pricing problem of the geometric Istanbul option (GIO) in continuous time trading. We consider only the case of a call option with an up-barrier and a fixed strike price. We also suppose that the terms of the contract do not guarantee any payment of dividend or rebate at maturity. This article is organized as follows. In section 2, we describe the continuous-time economic model chosen for our study and its theoretical properties. In section 3, we show with the strong Markov property that the price formula of the GIC can be written in semi-closed form. Then, we propose an analytic approximation formula for this price using a second-order Taylor expansion. In section 4, we compare our theoretical results with those of MC simulations using the control variates (CV) method to reduce the variance of the MC estimator.
We also compare the price of GIC with AIC, and analyze the price sensitivity of GIC to changes in the price of the underlying asset. Finally, in section 5, we conclude with a summary of the main results obtained in this article.
Financial model description
We consider a standard Black and Scholes model of frictionless markets with no arbitrage opportunities, in which the risk-free interest rate r and the volatility σ > 0 are constant. The underlying stock price S_t follows a geometric Brownian motion, S_t = S_0 exp(μt + σW_t), where [0, T] is the trading period, S_0 > 0 is the initial stock price, μ = r − σ²/2 is the risk-neutral drift rate, and W_t is a one-dimensional standard Brownian motion under the risk-neutral probability P.
In this article, the constant B (> S_0) is an up-barrier fixed in the terms of the contract. The first hitting time of B by the process S_t is a random variable denoted τ_B^S and defined as τ_B^S = inf{t ∈ [0, T] : S_t ≥ B}. We also use the following notations:
• μ̄ = μ/σ and b = log(B/S_0)/σ.
• φ(x) and Φ(x) are the Gaussian density and distribution functions, respectively.
The payoff of the GIC at maturity T can be written as (G_T − K)^+, where K is the strike price and G_T is a random variable defined, in line with the verbal description in the introduction, as the geometric average of the asset price between the first hitting time of the barrier and maturity if the barrier is reached, and as the terminal price otherwise:
G_T = exp( (1/(T − τ_B^S)) ∫ from τ_B^S to T of log S_t dt ) if τ_B^S < T, and G_T = S_T otherwise.
From definitions (2) and (3), we can see that the prices of the geometric Istanbul and geometric Asian call options coincide in the limit S_0 → B. Note that the geometric Istanbul put option, whose payoff at T equals (K − G_T)^+, is not priced here. As we will see, our analytical approximation method can be perfectly applied in the case of a put option.
Pricing of geometric Istanbul options
According to the risk-neutral pricing formula in continuous time 1, the price (or premium) at time 0 of the GIO corresponds to the expected value of its discounted payoff at maturity; for the call option this price will be denoted GIC_B. Thus, we have GIC_B = e^{−rT} E_P[(G_T − K)^+], where E_P is the expectation operator under the P-measure.
The probability distribution of G_T is essential in order to obtain an analytical formula for GIC_B. We notice that this distribution is known when B is not reached before T; in this case, it corresponds to the joint distribution of the geometric Brownian motion and its first hitting time of B. So, only the distribution when B is reached before T is unknown and needs to be calculated.
For x > 0, we study the distribution function P(G_T ≤ x) on the event {τ_B^S < T}. Let us introduce a process Z_t, t ∈ [0, T], defined by Z_t = W_{τ_B^S + t} − W_{τ_B^S}. According to the strong Markov property, on the event {τ_B^S < T} the process Z_t is a standard Brownian motion under the P-measure. This process is started at zero and is completely independent of the stopping time τ_B^S.
Now we can write the distribution on {τ_B^S < T} as an integral against the density
f_τ(t) = (b/√(2πt³)) exp(−(b − μ̄t)²/(2t)), t > 0,
which is the probability density function of the first hitting time of B by the process S_t (see formula 2.0.2 in Borodin and Salminen (2002)). Note that the equality (6) follows from the fact that the random variable ∫ from 0 to T − t of Z_s ds is Gaussian for 0 ≤ t < T with zero mean and a variance equal to (T − t)³/3 (see Zhang et al. (2015)).
The distribution of G_T when B is reached before T is then given by formula (8), and differentiating (8) with respect to x gives the density (9), where dx is an infinitesimal quantity.
Remark 1. The integral in (9) does not admit a closed-form expression; however, it is possible to approximate it numerically with Gaussian quadrature methods (see Brass and Petras (2011)). As illustrated in Figure 1, the quantity µ²/8 is very small for a wide range of parameters r and σ. This observation will allow us to obtain an analytical approximation of (9) using a Taylor series expansion around zero. Figure 1: Values of µ²/8 for r from 1% to 8% and σ from 10% to 50%.
Lemma 1. For α ≥ 0, γ and T > 0, if β is around zero, then we have the second-order expansion used in the proofs of Theorems 1 and 2 below. Proof. See Appendix C.
Theorem 1. Suppose that K ≥ B. Then the approximation formula (10) holds.
Proof of Theorem 1. We start by rewriting formula (4) as the sum of two terms (formula (11)), where UOC_B is the price of an up-and-out barrier call option at time 0. The first term in (11) is expanded using Lemma 1, which gives the desired approximation. Since K ≥ B, UOC_B has zero value, so it is sufficient to calculate the quantities A and B from their defining formulas.
Theorem 2. Suppose that K < B. Then the approximation formula (13) holds, where UOC_B is the price of an up-and-out barrier call option at time 0.
Proof of Theorem 2. The proof is similar to that of Theorem 1; it should just be noted that the price UOC_B is nonzero when K < B. Its value for S_0 ≤ B is known in closed form (see formula (7.3.19) in Shreve (2004)).
Numerical analysis
In this section, we compare our analytical approximation formulas (10) and (13) with MC simulations.
In our simulation procedure, we use the CV as a variance reduction technique for the estimator obtained by the crude MC method. We analyze two types of simulation errors, namely the standard error and the relative error, denoted S.E. and R.E., respectively. Our calculation algorithms are implemented with R software version 3.5.1 on a PC, Dell, Intel(R) Core(TM) i3, 1.70 GHz, running under Windows 8. To simulate the price (4), we start by discretizing the interval [0, T] into n = 2500 points, 0 = t_0 < t_1 < ... < t_n = T, with the discretization step Δt = T/n. The simulation of model (1) is given by the recursion formula S_{t_{i+1}} = S_{t_i} exp(μΔt + σ√Δt · Y_{i+1}), where Y_1, Y_2, ..., Y_n are n i.i.d. standard Gaussian random variables. In order to obtain a realization of the random variable G_T, the time integral in (3) is approximated with the trapezoidal rule, where t_B^S = inf{t_i, i ∈ {0, 1, ..., n−1} | S_{t_i} ≥ B} is the discrete version of the first hitting time τ_B^S. The number of paths used in our MC simulations is 10000. We take as a control variate the payoff of a geometric Asian call option (GAC), since the payoff of this option depends on S_{t_0}, S_{t_1}, ..., S_{t_n}, which gives a high correlation with the payoff of our option. Our controlled estimator for GIC_B is GIC_B^CV = GIC_B^MC + θ(GAC − GAC^MC), where GIC_B^MC and GAC^MC are the crude MC estimators for GIC_B and GAC respectively, and θ is a parameter that minimizes the variance of GIC_B^CV. 2 In Table 1, we provide a comparison between the approximate price (10) and the one obtained by MC simulations with the CV technique for different input parameters. The results obtained show that our approximation is efficient and could be applied in finance, since the relative errors do not exceed 1.33%. The results in Table 1 also show that the option price increases as K approaches B. Similarly, for Table 2, the relative errors obtained with formula (13) are all strictly less than 1.35%. This confirms once again that the price we provide for the GIC is stable to changes in the input parameters. We also observe from the results in Table 2 that the option price decreases as K approaches B. It remains to be noted that in both Tables 1 and 2 the option price increases for a longer expiration date, which is expected because the price of any type of option depends directly on its time-value. Notes: The input parameters are taken as follows: r = 0.05 and σ = 0.3. We denote by "Approx." the price of the geometric Istanbul call option obtained with formula (10) and by "MCV" the Monte-Carlo estimator of the price of the same option using the control variates method. We also denote by "S.E." the standard error of MCV and by "R.E." the relative error, which is given in percentage. Notes: The input parameters are taken as follows: r = 0.05 and σ = 0.3. We denote by "Approx." the price of the geometric Istanbul call option obtained with formula (13) and by "MCV" the Monte-Carlo estimator of the price of the same option using the control variates method. We also denote by "S.E." the standard error of MCV and by "R.E." the relative error, which is given in percentage. In Table 3, we analyze the robustness of the approximation formulas (10) and (13) when the maturity date is long. Our analysis consists in adopting the same MC simulation strategy while increasing the maturity each time and fixing all the other inputs. The results thus obtained show that the relative errors do not exceed 1.5%, which means that our analytical approximations remain both stable and efficient for long-term contracts.
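To make the procedure concrete, the following self-contained Python sketch (our illustration, not the authors' R code; it uses a smaller n and path count for speed, and the Kemna-Vorst closed-form geometric Asian call price as the known control-variate mean) simulates the GIC payoff with the discrete hitting time and a trapezoidal geometric average, then applies the control variate:

```python
import numpy as np
from scipy.stats import norm

def gac_price(s0, k, r, sigma, T):
    """Closed-form Kemna-Vorst price of a continuously monitored geometric Asian call."""
    m = np.log(s0) + (r - 0.5 * sigma**2) * T / 2.0  # mean of log of the geometric average
    v = sigma * np.sqrt(T / 3.0)                     # std dev of log of the geometric average
    d2 = (m - np.log(k)) / v
    d1 = d2 + v
    return np.exp(-r * T) * (np.exp(m + 0.5 * v**2) * norm.cdf(d1) - k * norm.cdf(d2))

def gic_mc_cv(s0, k, b, r, sigma, T, n=500, paths=20000, seed=0):
    """Monte-Carlo GIC price with a geometric-Asian control variate."""
    rng = np.random.default_rng(seed)
    dt = T / n
    steps = (r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * rng.standard_normal((paths, n))
    logs = np.hstack([np.full((paths, 1), np.log(s0)), np.log(s0) + np.cumsum(steps, axis=1)])

    s = np.exp(logs)
    hit = s >= b
    hit_any = hit.any(axis=1)
    first = np.argmax(hit, axis=1)  # index of the discrete first hitting time

    # G_T: geometric average of S from the hitting time to T (trapezoidal rule on log S);
    # if the barrier is never hit, the European payoff applies, i.e. G_T = S_T.
    g = s[:, -1].copy()
    for p in np.nonzero(hit_any)[0]:
        seg = logs[p, first[p]:]
        if seg.size > 1:
            g[p] = np.exp(np.trapz(seg, dx=dt) / (dt * (seg.size - 1)))

    disc = np.exp(-r * T)
    x = disc * np.maximum(g - k, 0.0)                       # GIC payoff
    avg_all = np.trapz(logs, dx=dt, axis=1) / T             # log geometric average on [0, T]
    yv = disc * np.maximum(np.exp(avg_all) - k, 0.0)        # GAC payoff (control variate)

    c = np.cov(x, yv)
    theta = c[0, 1] / c[1, 1]                               # variance-minimizing coefficient
    cv = x + theta * (gac_price(s0, k, r, sigma, T) - yv)
    return cv.mean(), cv.std(ddof=1) / np.sqrt(paths)

price, se = gic_mc_cv(s0=100, k=100, b=110, r=0.05, sigma=0.3, T=1.0)
print(f"GIC price ~ {price:.4f} (S.E. {se:.4f})")
```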
Notes: The maturities are taken in years (first row); we consider contracts with a lifetime ranging from 2 to 6 years, priced with formulas (10) and (13). In Figure 2, we compare the price of the GIC to the AIC. As in Jacques (1997), we use the log-normal approximation method to estimate the price in the arithmetic case. For the geometric case, we use our analytical approximation formulas (10) and (13). The numerical results show that a GIC is relatively cheaper than an AIC. We also observe, on the left side of Figure 2, that the price of the Istanbul call option rises when the barrier is close to the current price for both types of averages. This observation is explained by the fact that the closer the barrier is to the current price, the higher the probability that it will be reached, thus increasing the theoretical value of the option. Furthermore, on the right side of Figure 2, we can see that the price of the Istanbul call option decreases as the strike price moves away from the current price; this is due to the fact that the probability of the option expiring in-the-money becomes progressively lower as the strike price becomes higher than the current price. We end this section with a study of the sensitivity of the price of a GIC to changes in the price of the underlying asset. For this purpose, we analyze an important risk measure, the Delta (∆). This theoretical quantity is used by options traders to develop good investment strategies. 3 In our case, the ∆ of a geometric Istanbul call option corresponds to the partial derivative of (4) with respect to S_0. In Figure 3, on the left side, we calculate the ∆ values relative to the price of the underlying asset while increasing the volatility at each plot. On the right side, we fix the underlying asset and calculate the ∆ values relative to the strike price while increasing the maturity at each plot. As shown in Figure 3, the value of ∆ depends on three main factors: moneyness, volatility, and maturity. It should be noted that ∆ is constantly changing during the trading period and therefore does not predict the maturity value of the underlying asset price.
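A common way to estimate such a ∆ numerically (our sketch, reusing the gic_mc_cv routine from the previous block; the bump size h is a typical choice, not taken from the paper) is a central finite difference with common random numbers:

```python
def gic_delta(s0, k, b, r, sigma, T, h=0.5, **kw):
    """Central finite-difference Delta with common random numbers.

    Using the same seed for both bumped valuations cancels most of the
    Monte-Carlo noise in the difference quotient.
    """
    up, _ = gic_mc_cv(s0 + h, k, b, r, sigma, T, seed=42, **kw)
    dn, _ = gic_mc_cv(s0 - h, k, b, r, sigma, T, seed=42, **kw)
    return (up - dn) / (2 * h)

print(f"Delta ~ {gic_delta(100, 100, 110, 0.05, 0.3, 1.0):.3f}")
```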
Conclusion
In this paper, we addressed the pricing problem of geometric Istanbul options under the standard Black-Scholes model. A closed-form analytical approximation formula has been proposed for the price of a call option with a fixed strike price. The numerical results obtained by Monte-Carlo simulations using the control variates method have shown that our analytical approximation is very efficient for a wide range of input parameters and can therefore be used in finance. In addition, we have shown through a comparative study that geometric Istanbul call options have a more attractive price compared to those with an arithmetic average treated by Michel Jacques in 1997. Finally, we illustrated, graphically, the price sensitivity of a geometric Istanbul call option to changes in the price of the underlying asset.
Future research on Istanbul options could follow two directions. The first would be to make changes to the input parameters, such as studying the case of a floating strike price, the adoption of a down-barrier or studying the case of a harmonic average. The second interesting approach would be to extend the concept of Istanbul options to more complex economic models such as the exponential Levy model, the CEV model, the Heston model, etc.
Genetic influences on nicotinic α5 receptor (CHRNA5) CpG methylation and mRNA expression in brain and adipose tissue
Introduction The nicotinic α5 receptor subunit, encoded by CHRNA5, harbors multiple functional single nucleotide polymorphisms (SNPs) that affect mRNA expression and alter the encoded protein. These polymorphisms are most notably associated with drug-taking behaviors and cognition. We previously identified common SNPs in a distant regulatory element (DRE) that increase CHRNA5 mRNA expression in the human prefrontal cortex (PFC) and confer risk for nicotine dependence. Genome-wide epigenetic studies in PFC and adipose tissue find strong effects of the DRE SNPs on CpG methylation. However, it is unclear whether the DRE SNPs influence CpG methylation en route to modulating CHRNA5 mRNA expression. It is also unclear whether these polymorphisms affect expression in other brain regions, especially those mediating drug-taking behaviors. Results By measuring total and allelic CHRNA5 mRNA expression in human habenula and putamen autopsy tissues, we found that CHRNA5 DRE variants considerably increase mRNA expression, by up to 3.5-fold, in both brain regions. Our epigenetic analysis finds no association between CpG methylation and CHRNA5 mRNA expression in the PFC or adipose tissues. Conclusions These findings suggest that the mechanisms responsible for the genetic modulation of CpG methylation and mRNA expression are independent, despite the DRE SNPs being highly associated with both measures. Our findings support a strong association between the DRE SNPs and mRNA expression or CpG methylation in the brain and periphery, but the independence of the two measures leads us to conclude that environmental factors affecting CpG methylation do not appear to directly modulate gene expression.
Introduction
Previous studies found that allelic variation in the α5/α3/β4 neuronal nicotinic acetylcholine receptor (nAChR) subunit gene cluster on chromosome region 15q25.1 significantly increases risk for addiction to multiple classes of drugs [1][2][3][4][5][6][7][8][9], but confers a protective effect for cocaine addiction [8,10]. This region also confers risk for lung cancer and chronic obstructive pulmonary disease (COPD) [2,[11][12][13]. A non-synonymous single nucleotide polymorphism (SNP), rs16969968, in the gene encoding the α5 subunit (CHRNA5) is most commonly implicated in this gene cluster. Functional analysis of this SNP suggests it reduces ligand-mediated signaling [14,15]. In addition to rs16969968 affecting protein function, SNPs in a cis-acting distal regulatory element (DRE), located ~15 kb upstream of CHRNA5, increase mRNA expression in the prefrontal cortex (PFC) up to 4-fold [9]. This DRE harbors a cluster of six SNPs (rs7164030, rs1979905, rs1979906, rs1979907, rs880395, and rs905740) in near-complete linkage disequilibrium (LD). Joint analysis of rs880395 in the DRE with rs16969968 in the Collaborative Genetic Study of Nicotine Dependence (COGEND) finds increased risk for nicotine dependence compared to the risk associated with either SNP alone [9], suggesting that both SNPs can influence phenotypes associated with this genomic region.
Knockout mouse studies examining the behavioral effects of habenular and ventral tegmental area (VTA) Chrna5 mRNA expression find that mice with a null mutation for Chrna5 significantly increase nicotine intake [15,16] and exhibit attenuated nicotine-induced locomotion [17]. Re-expressing Chrna5 in the medial habenula (MHb) reduces nicotine consumption to wild-type levels [16], suggesting that α5 nAChR mRNA expression in the MHb mediates negative reward signaling through the habenulo-interpeduncular pathway. Expression of the α5 receptor subunit in GABAergic neurons of the interpeduncular nucleus (IPN) was found to further modulate this MHb output to serotonergic brain regions [18]. The medial and lateral habenula are also connected to brain regions classically associated with drug-taking behaviors that express CHRNA5 mRNA. This includes afferent connections from the nucleus accumbens and efferent connections to the VTA and substantia nigra, which go on to innervate the PFC and striatum, respectively [19]. In the VTA, Chrna5 modulates the sensitivity of dopaminergic neurons to acute nicotine [15] but not ethanol administration [20]. Furthermore, rs16969968 interacts with a splicing SNP in the dopamine D2 receptor gene (DRD2), also implicated in addiction [21], to affect multiple aspects of prefrontal cortex physiology and behavior [22]. Together, these results demonstrate a pervasive functional profile for CHRNA5 in brain regions central to addiction and cognition. Despite strong evidence for altered Chrna5 expression in the rodent habenula affecting addiction phenotypes, and the association of regulatory DRE SNPs with nicotine addiction, it is unknown whether the DRE SNPs affect CHRNA5 mRNA expression in the human habenula. However, evidence that they modulate expression in the PFC, amygdala, and nucleus accumbens [5,6,9,23] suggests the DRE exerts influence in cortical and subcortical brain regions.
CpG methylation in the CHRNA5 locus is strongly influenced by the DRE polymorphisms according to genome-wide scans of cis-methylation quantitative trait loci (cis-mQTLs) in the prefrontal cortex [24] and biopsied adipose [25] tissue. Moreover, specific CpG sites within the CHRNA5 promoter are hypermethylated in response to adverse childhood events [26]. These same adverse events confer risk for nicotine dependence, even exhibiting a genotype × environment interaction specifically for rs16969968 [27]. While it is reasonably hypothesized that environmental factors affect methylation, which then influences expression, this relationship has not been formally tested. Thus, the relationship between genotype, methylation, and expression remains unclear and needs to be resolved in order to identify the mechanisms underlying substance abuse with respect to CHRNA5.
Here, we have tested whether the DRE variants modulate CHRNA5 mRNA expression in the human habenula and putamen, by measuring total and allelic CHRNA5 mRNA expression. We also compared expression and CpG methylation across DRE genotypes using publicly available genome-wide datasets from BrainCloud [28], BrainCloudMethyl [24], and the Multiple Tissue Human Expression Resource [25,29]. We find the DRE SNPs modulate expression in the putamen and habenula. However, in PFC and adipose tissues where we have measures of both expression and methylation from the same individuals, we find that methylation does not appear to directly influence CHRNA5 expression.
Tissue samples
Twenty-one human habenula autopsy samples were dissected by a trained neuropathologist (CRH) or obtained from the NICHD Brain and Tissue Bank for Developmental Disorders, while 57 human posterior putamen autopsy samples were obtained through the University of Miami Brain Endowment Bank. Demographics for these human tissues are presented in Table 1. Post-mortem tissue collection was performed in accordance with local Institutional Review Board approvals. The overall study described here was performed in accordance with the Institutional Review Board of The Ohio State University.
Nucleic acid isolation & complementary DNA (cDNA) synthesis
Genomic DNA (gDNA) was isolated from all human tissues using a 'salting out' method adjusted for lipid-rich brain tissue, as previously described [9]. Total RNA was isolated by homogenizing the tissues in TRIzol and precipitating the RNA from the aqueous phase using isopropanol. We further purified the RNA using RNeasy Mini Kit spin columns (Qiagen, Germantown, MD) and digested latent gDNA on the column with recombinant DNaseI, as previously described [9]. cDNA preparations were made using 0.5 μg total RNA for each sample. Gene-specific primers (25 nM) supplemented with Oligo-dT (5 μM) were used to prime the reverse transcription reaction.
Sample genotyping
SNPs rs16969968, rs615470, and rs7164030 were genotyped by restriction fragment length polymorphism (RFLP) methods. rs16969968 and rs615470 serve as marker SNPs for measuring allelic mRNA expression; they were chosen because of their high minor allele frequencies, high likelihood of being present in the mature mRNA, and an LD pattern suggesting their minor alleles are present on different haplotypes. rs7164030 serves as the representative marker of the DRE, which includes five additional SNPs in high LD (rs1979907, rs1979906, rs1979905, rs880395, and rs905740). The gDNA regions surrounding these SNPs were amplified using primers tagged with a fluorophore (6-FAM or HEX) and the resultant amplicons were cut with restriction enzymes (rs16969968-Taq1a; rs615470-CviQI; rs7164030-Tsp509I) that recognize only one of the two alleles resulting from the presence of the polymorphism. Fragments were resolved on an ABI 3730 DNA Analyzer (Life Technologies) or by standard gel electrophoresis (1.5 % agarose).
Total and allelic mRNA expression measurement
Total CHRNA5 and β-actin (ACTB) mRNA expression was measured in all human and mouse tissues by qPCR using an ABI 7500 Fast Sequence Detection System (Life Technologies). In addition, we measured the expression of two highly-enriched habenula markers, POU4F1 and CHRNB3, in the habenula samples to determine the purity of the dissections. The relative quantity of CHRNA5 mRNA was normalized within each sample to ACTB mRNA expression for statistical analysis. The influences of available covariates (age, sex, race, post-mortem interval, RNA integrity number, nicotine use, or habenula purity) were tested on ACTB-normalized total CHRNA5 mRNA expression in each brain region using stepwise linear regression.
We quantified allelic mRNA expression in habenula and putamen samples heterozygous for rs16969968 or rs615470 using a fluorescent primer extension method (SNaPshot), as previously described [9]. Fluorescently labeled primer extension fragments, representing the two different alleles of rs16969968 or rs615470, were resolved on an ABI 3730. The fluorescent peak heights for each allele, determined using GeneMapper 4.0 Software (Life Technologies), were used to calculate relative allelic expression ratios (ancestral/variant allele). For each sample, at least two separate measurements were used to calculate allelic expression imbalance (AEI). Allelic ratios for cDNA were normalized against the overall average ratio calculated for gDNA for each marker SNP. We subsequently compared the absolute magnitude of the allelic expression in samples heterozygous for rs7164030 versus homozygotes for either allele.
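The normalization just described can be illustrated with a small sketch. The Python snippet below uses made-up peak heights rather than real GeneMapper output: it computes cDNA allelic ratios, normalizes them by the mean gDNA ratio (which corrects for dye and assay bias, since gDNA carries one copy of each allele), and reports the absolute fold magnitude of AEI. All values and names are illustrative.

```python
def allelic_ratio(peak_major, peak_minor):
    """Ratio of fluorescent peak heights (ancestral/variant allele)."""
    return peak_major / peak_minor

# Replicate cDNA measurements for one heterozygous sample (hypothetical values)
cdna_ratios = [allelic_ratio(5200.0, 1610.0), allelic_ratio(4980.0, 1540.0)]

# gDNA ratios across heterozygotes; their mean has an expected value of 1,
# so deviations reflect assay bias rather than biology
gdna_mean = sum([allelic_ratio(3100.0, 3010.0),
                 allelic_ratio(2890.0, 2950.0)]) / 2

normalized = [r / gdna_mean for r in cdna_ratios]
aei = sum(normalized) / len(normalized)      # mean allelic expression imbalance
fold = aei if aei >= 1 else 1 / aei          # absolute magnitude (fold difference)
print(f"AEI = {aei:.2f}, absolute magnitude = {fold:.2f}-fold")
```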
Genome-wide expression and methylation datasets

Briefly, samples were genotyped with a variety of Illumina arrays (HumanHap300, HumanHap610Q, HumanHap650Y, Human 1M-Duo, and Human 1.2M-Duo) and imputed to 1000 Genomes populations using IMPUTE2. From the imputed data, we used rs7164030 as a surrogate marker of the DRE to test genetic effects on expression and methylation. Samples were also measured for genome-wide CpG methylation, using the Infinium HumanMethylation450k BeadChip assay, and transcriptome-wide mRNA expression, using Illumina 49K Oligo Arrays (BrainCloud) or HumanHT-12 v3 BeadChips (MuTHER). For analyses, we used CpG probes cg22563815 and cg17108064, which measure methylation at CpG sites 913 and 802 nucleotides upstream of the annotated CHRNA5 gene (hg19 chr15:78856949 and chr15:78857060), respectively. Although these CpG probes are ~12 kb downstream from the DRE SNPs, they are among the highest scoring mQTLs for the DRE SNPs in the CHRNA5 gene region. For expression, we used probes hHC002196 (BrainCloud) and ILMN_1770044 (MuTHER), which hybridize to CHRNA5 mRNA in the 3′ untranslated region and exon 5, respectively.
Statistical analyses
Statistical analyses were performed in R (x64 v.3.1.0) with standardized β-coefficients calculated by the QuantPsyc package. For all datasets, we used the interquartile range (IQR) to exclude extreme outliers, defined as data points below Q1 − 3×IQR or above Q3 + 3×IQR. Next, we identified significant covariates using stepwise linear regression and AIC, using the [step] function to reach a minimal adequate model. We subsequently included significant covariates in analyses of rs7164030 genotype on expression and methylation or as interaction terms in linear regression models of expression and methylation. Potential covariates in our putamen and habenula expression datasets included age, sex, race, smoking history, cocaine use, and post-mortem interval. Potential covariates in the BrainCloud data included sex, age, race, and an estimate of neuron enrichment that was specific to methylation data [30]. Age and batch-specific effects were considered as potential covariates in the MuTHER dataset. We tested for an overall effect of methylation on expression using linear regression across the entire BrainCloud or MuTHER sample populations and report the standardized β-coefficient for the methylation measure. We further tested this relationship within each rs7164030 genotype group to identify any genotype-specific effect that could be obscured when examining the population as a whole.
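As an illustration of the outlier rule and the covariate-adjusted genotype test, the sketch below gives a hypothetical Python rendition (the original analyses were done in R) with simulated data; the column names, data, and the use of statsmodels are all assumptions, not the study's code.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def drop_extreme_outliers(s: pd.Series, k: float = 3.0) -> pd.Series:
    """Exclude points below Q1 - k*IQR or above Q3 + k*IQR (k = 3 as in the text)."""
    q1, q3 = s.quantile(0.25), s.quantile(0.75)
    iqr = q3 - q1
    return s[(s >= q1 - k * iqr) & (s <= q3 + k * iqr)]

# Hypothetical table: normalized expression, genotype (count of G alleles), covariate
rng = np.random.default_rng(0)
df = pd.DataFrame({
    "expr": rng.lognormal(0.0, 0.4, 55),
    "genotype": rng.choice([0, 1, 2], 55),
    "sex": rng.choice(["M", "F"], 55),
})
df = df.loc[drop_extreme_outliers(df["expr"]).index]

# Genotype effect on expression with sex as covariate (mirrors the putamen model)
fit = smf.ols("expr ~ C(genotype) + C(sex)", data=df).fit()
print(fit.summary())
```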
Results
Total and allelic CHRNA5 mRNA expression in putamen and habenula

Two putamen samples were excluded from analyses due to poor RNA quality, as indicated by ACTB expression >2 standard deviations above the mean of the remaining samples, leaving 55 total putamen samples. Stepwise linear regression revealed sex as a significant covariate of CHRNA5 mRNA expression measured via qPCR in the putamen. We found a significant effect of the representative DRE SNP rs7164030 genotype on putamen expression (n = 55, F = 28.90, p = 1.82 × 10⁻⁶; Fig. 1), whereby homozygous minor "G" allele samples expressed 3.5-fold more CHRNA5 mRNA than homozygous major "A" allele samples, consistent with our previous findings in PFC [9]. Examining the influence of rs7164030 on habenular CHRNA5 mRNA expression via qPCR, with race as a significant covariate, revealed no significant effect of genotype, although the direction of the genotypic effect is consistent with our findings in the putamen and PFC.
We also noted that the purity of the habenula dissection, as determined by POU4F1 or CHRNB3 expression, did not influence CHRNA5 mRNA expression. A comparative analysis of habenula CHRNA5 mRNA expression with previously measured PFC expression found no enrichment in the habenula relative to the PFC in humans, consistent with previous reports of generally low expression in these areas in rodents [18,31]. We measured allelic mRNA expression in 21 of 55 putamen samples heterozygous for either rs16969968 or rs615470 (9 co-heterozygous) and 9 of 21 habenula samples heterozygous for rs16969968. The low expression of CHRNA5 in both tissues required us to average the allelic ratio measurements at the two marker SNPs, as done for the putamen, or take an increased number of measurements at the same SNP, as done for the habenula. Thus, the averaged data is only presented for the 9 co-heterozygous putamen samples, while the data for all habenula samples is presented for marker SNP rs16969968. Samples heterozygous for rs7164030 exhibited AEI ranging from 2.1- to 6.5-fold differences between the expression of the two alleles, while samples homozygous for either allele of rs7164030 all displayed AEI of <2-fold, consistent across both brain regions. We observed greater expression for the major allele of rs16969968 relative to the minor allele in all but one sample exhibiting >2-fold AEI, consistent with the major allele for rs16969968 residing on the high-expressing DRE haplotype. Comparing the absolute magnitude of AEI across rs7164030 genotype, we find heterozygotes exhibit significantly greater AEI versus homozygotes (F = 7.99, p = 0.012; Fig. 2), supporting the hypothesis that the DRE SNPs exert their function in both the habenula and putamen.
Methylation in the BrainCloud dataset significantly differed across rs7164030 genotype in the PFC for both CpG probes (cg22563815: F = 109.17, p = 5.87 × 10⁻²¹; race, age, and neuron enrichment estimate as significant covariates; cg17108064: F = 191.06, p = 1.77 × 10⁻³¹; race as a significant covariate). Here, samples homozygous for the variant G allele of rs7164030 had significantly greater CpG methylation at both probes relative to the ancestral A allele homozygotes (Fig. 3c and e). Methylation was also significantly higher across both probes for the variant G allele carriers in the MuTHER dataset (cg22563815: F = 1703.48, p = 5.54 × 10⁻¹⁷²; bisulfite conversion gDNA concentration and efficiency included as covariates; cg17108064: F = 614.33, p = 3.73 × 10⁻⁹²; batch and bisulfite conversion efficiency included as covariates; Fig. 3d and f).
Evident in each of our linear models is the confounding influence of genotype on both methylation and expression (Fig. 4). Although we accounted for this influence statistically in the model examining all samples, we subsequently tested whether it was still possible for methylation to influence expression on specific genetic backgrounds (i.e. if one were to carry the DRE SNPs), which could be obscured in the full linear model. Thus, we performed linear regression within each of the rs7164030 genotype groups, finding no evidence that methylation at either probe affects expression in any of the genetic backgrounds defined by the DRE SNPs (Table 2).
Discussion
Our findings reveal pervasive influence of the DRE SNPs on CHRNA5 mRNA expression and methylation in brain and adipose tissue, whereby the minor DRE alleles are associated with greater expression and methylation. This genotypic difference is consistent with our findings in the PFC for both the BrainCloud dataset and in our previous study [9]. The habenula did not show a main effect of the DRE SNPs on total CHRNA5 expression, but we observed strong allelic differences in the habenula that perfectly correlate with the DRE SNPs, consistent with the interpretation that they modulate CHRNA5 expression in the habenula. While significant, the influence of the DRE SNPs on CHRNA5 expression is not as strong in adipose tissue. We previously reported no influence of the DRE SNPs in lymphoblastoid cell lines (LCLs), but other studies with larger sample sizes have found cis-eQTLs for CHRNA5, implicating the DRE SNPs in peripheral whole blood [32], monocytes [33], and lung [34]. Thus, it is likely that the DRE SNPs are modulating expression of CHRNA5 in peripheral tissues, but exert less influence relative to their impact in the brain.

[Fig. 2 caption: Absolute allelic expression imbalance in putamen (filled markers) and habenula (open markers) compared across rs7164030 genotype. Samples heterozygous for rs7164030 exhibit significantly greater AEI than samples homozygous for rs7164030 (ANOVA p = 0.012), consistent with the expectation that AEI is observed in samples heterozygous for the functional allele. Samples homozygous for either DRE allele are not expected to exhibit AEI.]
The location and epigenetic histone markings in the CHRNA5 locus harboring the DRE SNPs previously led us to propose they act in an enhancer [9]. Data from the ENCyclopedia Of DNA Elements (ENCODE) Project [35] viewed on the UCSC Genome Browser [36] show histone modifications in a lymphocyte cell line (GM12878) consistent with enhancers, including histone 3 lysine 4 monomethylation (H3K4Me1) and light trimethylation (H3K4Me3), and H3 lysine 27 acetylation (H3K27Ac). However, when a portion of the DRE containing rs880395, rs905740, and rs7164030 was sub-cloned into a vector upstream of a minimal promoter, it acted as a repressor, with no significant expression differences between DRE haplotypes [23]. Given these contradictory results, we find it possible that the DRE contains both enhancer and repressor elements. A dual enhancer/repressor mechanism is not novel. Perhaps the most well-known example is the RE-1 Silencing Transcription Factor (REST), which silences neuronal genes in the periphery [37], but has the ability to enhance gene expression in the brain [38,39]. Evolutionary studies of cis-acting enhancer elements support the possibility that multiple variants affecting enhancer function can arise together within a population to high frequency [40], sometimes co-opting cryptic or existing regulatory sequences to derive their new functions [41], as we would assume occurred for the DRE SNPs in CHRNA5. A more thorough analysis of the evolutionary constraints on the CHRNA5 locus could provide clues about the adaptive evolution of regulatory elements in humans.

[Fig. 3 caption: Expression and methylation across rs7164030 genotype in BrainCloud and MuTHER. The major "A" allele of rs7164030 was significantly and consistently associated with lower CHRNA5 mRNA expression in the BrainCloud (a) and MuTHER (b) datasets. The "A" allele was also associated with lower CpG methylation measured at two different probes (cg22563815 and cg17108064) in the BrainCloud prefrontal cortex (c and e) and the MuTHER adipose tissue (d and f).]
In addition to the epigenetic histone modifications present in the CHRNA5 locus, CpG methylation is strongly associated with CHRNA5 SNPs. Since CpG methylation can repress transcription [42], we examined CHRNA5 CpG methylation and mRNA expression using BrainCloud and MuTHER. Because the DRE SNPs were strongly associated with increased methylation and expression in both datasets, we expected methylation to be positively correlated with expression, thus providing mechanistic evidence linking epigenetic modulation of the CHRNA5 locus and gene expression. Instead, we found that methylation and expression were independent when accounting for DRE genotype and other significant covariates. Evidence that CHRNA5 expression and methylation are independent is important for delineating mechanisms underlying drug addiction associated with this gene locus, since both methylation and expression apparently influence addiction risk. One explanation that unifies expression and methylation, and that also serves as a caveat of this study, is that methylation influences the expression of CHRNA5 in a way that was not detected here. Such scenarios could include changes in alternative splicing or transcription start site usage which do not change the overall levels of CHRNA5 mRNA, but alter the makeup of the mRNA. The GENCODE project has annotated an alternatively spliced transcript (ENST00000559554.1), but it has only been observed as an expressed sequence tag in a neuroblastoma cell line.

[Fig. 4 caption: Scatterplots for CpG methylation and CHRNA5 mRNA expression in BrainCloud and MuTHER. CHRNA5 expression is moderately correlated with CpG methylation measured in the BrainCloud prefrontal cortex data by probes cg22563815 (a) and cg17108064 (b). Similar results were obtained for the same probes in the MuTHER adipose tissue (c and d). However, the correlation between expression and CpG methylation is explained by rs7164030 genotype, apparent from the stratification of the genotype groups in the scatterplots (A/A = red circles, A/G = blue squares, G/G = black diamonds). Furthermore, linear regression performed within each genotype group finds no significant relationship between methylation and expression, arguing against direct modulation of expression by methylation.]
Methylation at the CHRNA5 locus is sensitive to environmental factors, as demonstrated by childhood adverse events (CA). CA results in hypermethylation and increased risk for drug dependence [26,27]. Males carrying rs16969968 who experience CA are at greater risk for dependence relative to those without rs16969968 [27]. In the context of our findings, we do not find it likely that hypermethylation changes mRNA expression, although we cannot rule out that CA-induced hypermethylation can be much greater than observed in our samples and subsequently affect expression. Reanalyzing existing CA studies to include the DRE SNPs in addition to rs16969968 could shed some light on the relationships between CA, methylation, and smoking risk. However, a study examining environmental factors, CpG methylation, mRNA expression, and drug dependence would be ideal for resolving the risk conferred by 15q25.1.
Finally, the strong impact of Chrna5 expression in mouse MHb and VTA on nicotine consumption, despite low levels of mRNA expression in mice and humans, signifies the importance of the specific cell types on which these receptors are expressed. The MHb afferents that express the α5 subunit project to the interpeduncular nucleus [19], which also contains GABAergic neurons expressing the α5 subunit [18], modulating aversiveness associated with nicotine intake [16] and withdrawal [43]. Dopaminergic neurons in the VTA express the α5 receptor subunit [44] and project to multiple addiction-related brain regions, including the cortex and insula via the mesocortical pathway and limbic areas, the nucleus accumbens, lateral habenula, and amygdala through the mesolimbic pathway [45]. Finding ways to modulate the firing of the cells expressing CHRNA5, directly or indirectly, without targeting nicotinic α5-containing receptors could provide avenues for treating addictive behaviors that circumvent the inherent challenges of developing small molecules for nicotinic receptors. Identifying promising new targets will require a firm understanding of addiction neurocircuitry and of genetic expression within specific cell types in the habenula, IPN, and VTA, in order to exploit the aversive signaling properties of these cells in the context of drug abuse.
Conclusions
Our findings support pervasive but independent influence of the CHRNA5 DRE SNPs on mRNA expression and CpG methylation in the brain and periphery. With evidence that environmental influences modulate CpG methylation in this region, we advocate for future studies to incorporate environmental, epigenetic, and genetic factors in the pathogenesis of addiction associated with this locus.
Application of Computer Simulation for Productivity Improvement of Welding Unit in a Heater Manufacturing Industry: A Case Study Based on Arena
Firms' efficiency and competitiveness are two important challenges in today's global market that have motivated many manufacturing firms to adopt novel manufacturing management strategies. Nowadays, simulation models are used to assess different aspects of manufacturing systems. This paper takes the welding unit of a heater production line as a case study and demonstrates a basic application of the Arena software. The main goal of this paper is to increase the productivity of the production line using computer simulation. To achieve this goal, three different scenarios are developed and compared to obtain the best improvement in productivity.
Introduction
In the manufacturing industry, managers and engineers seek methods to eliminate common problems in manufacturing systems, such as bottlenecks and waiting times [1], because all such problems impose extra costs on companies [2]. In addition, manufacturing companies strive to sustain their competitiveness by improving the productivity, efficiency, and quality of manufacturing, for instance through high throughput and high resource utilization [3]. Managers and engineers define a planning horizon for these aims; among the operative aims, one of the most challenging is dealing with bottlenecks, and companies try to identify and eliminate bottlenecks in the production line [2]. Simulation is the computer-modeled emulation of a real system for improving the evaluation of system performance. By using computer simulation, the real system is transferred into a controlled environment in which its behavior can be studied under different conditions in a cost-effective manner and with the lowest risk [3]. Computer simulation has a significant effect on financial and operational parameters by saving the monetary cost of investment, decreasing process cycle time, increasing resource utilization, and enhancing throughput [2]. The benefits of simulation modelling are [4]: 1. dealing with large and complicated decision problems that cannot be handled with other approaches; 2. finding answers to "what-if...?" questions, since simulation experiments help to assess different decision alternatives and scenarios. This paper therefore aims at improving the productivity of the welding unit of a manufacturing system in a heater company as a case study using computer simulation. To achieve this goal, three alternatives are developed and compared to obtain the best improvement in productivity.
Literature review
Computer simulation is one of the most effective approaches for dealing with operational difficulties and increasing productivity in different fields, such as production lines [5], the port and transportation industry [6], supply chain management [7], healthcare systems [8], and the construction industry [9], all of which are not easy to model. Many studies have used simulation to evaluate manufacturing systems. Basler et al. [10] discussed the application of artificial intelligence approaches and simulation to enhance productivity in the wood industry. Furthermore, Ramis et al. [11] applied simulation to recognize and reduce bottlenecks in a sawmill. Qayyum and Dalgarni [12] created a simulation model that takes system constraints and process times into consideration; they modified the manufacturing process, system limitations, and capital investment to enhance system capacity. Hatami et al. [13] assessed the importance of different parameters on a production line using simulation and design of experiments (DOE) to improve productivity. In another study, the statistical Taguchi method and computer simulation were combined to investigate the impacts of main and uncontrollable parameters on the overall production output of a paint factory [14]. Dengiz et al. [15] showed how the combination of regression meta-modeling techniques and simulation modeling can be applied to design and improve a real automotive manufacturing system. Based on these investigations, computer simulation has improved the productivity of manufacturing processes and reduced the trial and error needed to find the best solution [16].
Case study
In this paper, one heater factory was selected as the case study. This factory has four sections: welding, framing, painting, and assembly. Based on the managers' and engineers' comments, the welding unit was chosen for simulating and evaluating the production process. In this station, the main frame of the heater fount is produced and then transported to the assembly station. Table 1 shows the numbers of equipment items and operators used in this unit. It should be noted that there is one operator at the source preparation test station and three at the coal grinding stations; the total number of operators thus reaches 26 people.
Building simulation model
One of the most significant steps in developing a computer simulation is collecting the required data. The data needed for this paper were gathered in the factory during the manufacturing process, and the stopwatch method was applied to collect some of them.
After collecting the data on the durations of all activities, a probability distribution function was fitted to each activity because of the variability of the activities. Having determined the different resources involved in the manufacturing process, their relationships and duties, and the fitted probability distribution of each sample of activity durations, the simulation model of the manufacturing system was developed. The simulation software Arena 13.9 was selected to construct the model. Figure 1 shows the logic view of the simulation model.
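As a minimal illustration of this fitting step (the paper itself relies on Arena's input analysis tools), the following Python sketch fits several candidate distributions to a hypothetical sample of stopwatch timings and picks the one with the smallest Kolmogorov-Smirnov statistic; all numerical values are illustrative, not the factory's data.

```python
import numpy as np
from scipy import stats

# Hypothetical stopwatch measurements of one welding activity (minutes)
durations = np.array([4.1, 3.8, 4.6, 5.0, 4.3, 3.9, 4.8, 4.4, 4.2, 4.7])

# Candidate distributions; pick the best by Kolmogorov-Smirnov statistic
candidates = {"norm": stats.norm, "gamma": stats.gamma, "lognorm": stats.lognorm}
best_name, best_stat, best_params = None, np.inf, None
for name, dist in candidates.items():
    params = dist.fit(durations)                     # maximum-likelihood fit
    ks_stat, _ = stats.kstest(durations, name, args=params)
    if ks_stat < best_stat:
        best_name, best_stat, best_params = name, ks_stat, params

print(f"best fit: {best_name}, KS statistic = {best_stat:.3f}")
```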
Simulation model validation
As shown in Table 2, the results obtained from the simulation agree with the actual data to an accuracy of approximately 90%.
Improvement
After simulating the welding unit of the production line, three different scenarios were developed to analyze and improve the productivity of the production line.
Scenario 1
Due to the long queues at the welding machine stations and the second assembly line, which create consecutive bottlenecks that reduce the amount of final product, this scenario proposes adding two workers and two welding machines to relieve the stations with long queues. The results of applying this scenario, for the output of final product and for the station queues, are shown in Tables 3 and 4, respectively.
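The scenario logic can also be sketched outside Arena. The following Python/SimPy toy model, with purely hypothetical arrival and service rates, compares the throughput of a welding station before and after adding two machines, mirroring the idea of scenario 1; it is not a reconstruction of the paper's Arena model.

```python
import random
import simpy  # assumed installed: pip install simpy

def run_welding(n_machines: int, sim_minutes: float = 8 * 60) -> int:
    """Count frames completed by a welding station with n_machines servers."""
    env = simpy.Environment()
    machines = simpy.Resource(env, capacity=n_machines)
    done = [0]

    def frame(env):
        with machines.request() as req:
            yield req
            yield env.timeout(random.expovariate(1 / 4.5))  # ~4.5 min per weld
            done[0] += 1

    def arrivals(env):
        while True:
            yield env.timeout(random.expovariate(1 / 1.5))  # a frame every ~1.5 min
            env.process(frame(env))

    env.process(arrivals(env))
    env.run(until=sim_minutes)
    return done[0]

random.seed(1)
for n in (3, 5):  # base configuration vs. scenario 1 (two machines added)
    print(f"{n} welding machines -> {run_welding(n)} frames per shift")
```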
Scenario 2
As can be seen from scenario 1, the production rate increased and the average queues at the welding machines decreased significantly. However, this scenario may cause long queues at the stations for water heating supply testing, tank pipe testing, and the first assembly line; this problem is addressed in scenario 2. In scenario 2, due to the long queues at the test stations, adding one testing operator and one testing compressor is proposed to relieve the test stations. The results indicate a significant reduction in the assembly and test queues and a corresponding increase in output product in the model (Table 5). Table 6 summarizes the comparison of the average queue lengths at the cited stations.

Table 5. Comparison of the rate of output product in the main model with scenarios 1 and 2

Row | Scenario 2 | Scenario 1 | Main model
1   | 12506      | 11333      | 9509

Table 6. Comparison of the average number of parts waiting in line in the main model with scenarios 1 and 2, for the stations: test of water heating tank, test of tank pipe, and first line of assembly. [Values not recoverable.]
Scenario 3
In this scenario, the number of coal grinding workers is reduced from 4 to 3, since their average utilization is low; these workers also assist each other in production. Tables 7, 8, and 9 compare the rate of output product, the average utilization of the coal grinding operators, and the average waiting time in line between scenarios 2 and 3, respectively. The average waiting time in line at these stations has increased.

Table 7. Comparison of the rate of output product in scenarios 2 and 3

Row | Scenario 3 | Scenario 2
1   | 13009      | 12506

Table 8. Comparison of the average utilization of the coal grinding operators in scenarios 2 and 3

Row | Scenario 3 | Scenario 2
1   | 0.7445     | 0.5479

Table 9. Comparison of the average waiting time in line in scenarios 2 and 3. [Values not recoverable.]
Discussion
In this paper, different scenarios were assessed using the Arena software. In scenario 1, based on reports of crowded queues at the welding workstations, adding 2 welding machines and 2 welding operators was suggested, which led to a considerable reduction of the queue at the weld stations. In scenario 2, seeking to improve on scenario 1 and reduce the queues at the test stations, adding one compressor machine and one testing operator was proposed, which reduced the queues and increased production. In scenario 3, to improve on scenario 2, idle times were reduced by cutting the number of coal grinding operators (specific operators at each station) from 4 to 3 people who assist each other, thereby increasing the rate of output product. Based on the final results, the following recommendations were made to the company managers:
1. Increase the number of welding operators as well as welding machines.
2. Increase the number of testing operators and compressor machines in order to reduce the queue, particularly at the assembly station.
3. Improve the ergonomic condition of the operators' worktables and workplace.
4. Increase operator training so that operators can assist other stations with high component traffic.
5. Use fixtures at the welding stations.
Conclusion
This case study presented the details of a production system simulated using the Arena simulation software, and a better design of the production system at the company was proposed. This was achieved by adding 2 welding machines and 2 welding operators, which led to a considerable reduction of the queue at the weld stations. Moreover, adding one compressor machine and one testing operator was suggested, which reduced the queues and increased production, and idle times were reduced by cutting the number of coal grinding operators (specific operators at each station) from 4 to 3 people who assist each other, thereby increasing the rate of output product. This paper demonstrated an approach to modelling and designing a production system that others can follow. As a future study, it is proposed to use other simulation software, such as Witness, ShowFlow, etc., and compare the results with those obtained from Arena.
Separable Schmidt modes of a non-separable state
Two-photon states entangled in continuous variables such as wavevector or frequency represent a powerful resource for quantum information protocols in higher-dimensional Hilbert spaces. At the same time, there is a problem of addressing separately the corresponding Schmidt modes. We propose a method of engineering two-photon spectral amplitude in such a way that it contains several non-overlapping Schmidt modes, each of which can be filtered losslessly. The method is based on spontaneous parametric down-conversion (SPDC) pumped by radiation with a comb-like spectrum. There are many ways of producing such a spectrum; here we consider the simplest one, namely passing the pump beam through a Fabry-Perot interferometer. For the two-photon spectral amplitude (TPSA) to consist of non-overlapping Schmidt modes, the crystal dispersion dependence, the length of the crystal, the Fabry-Perot free spectral range and its finesse should satisfy certain conditions. We experimentally demonstrate the control of TPSA through these parameters. We also discuss a possibility to realize a similar situation using cavity-based SPDC.
Introduction.
Entanglement of two-photon states (biphotons) in continuous variables such as frequency or wavevector suggests the use of biphotons as a quantum-information resource in higher-dimensional Hilbert spaces [1,2]. The dimensionality of the Hilbert space is determined in this case by the Schmidt number, the effective number of Schmidt modes, which can reach several hundred [2-4]. But in order to realize any protocols with multimode states, it is necessary to address single Schmidt modes separately. For wavevector variables, any single Schmidt mode can be filtered out using a single-mode fibre and a spatial light modulator [6]. This filtering, in principle, can be lossless, which is crucial for experiments with twin-beam squeezing [7-12]. For frequency variables, it is far more difficult to losslessly select a single Schmidt mode. Attempts are being made in homodyne detection [13]; in direct detection experiments the procedure is more difficult. For instance, methods based on nonlinear frequency conversion have been proposed, but they are technically complicated to realize with high efficiency [14]. Here we propose and demonstrate a method of engineering a two-photon state in such a way that it contains non-overlapping Schmidt modes, each of which can be filtered losslessly using a spectral device.
Frequency entanglement of biphotons. A two-photon state generated via SPDC can be written in the form

$$|\Psi\rangle = \int d\omega_s\, d\omega_i\, F(\omega_s,\omega_i)\, \hat{a}^\dagger_s(\omega_s)\, \hat{a}^\dagger_i(\omega_i)\, |\mathrm{vac}\rangle, \qquad (1)$$

where $\hat{a}^\dagger_s, \hat{a}^\dagger_i$ are the creation operators of the signal and idler photons, and $F(\omega_s,\omega_i)$ represents the two-photon spectral amplitude (TPSA) [15]. The TPSA fully characterizes the spectral properties of a biphoton state, and its physical meaning is the joint spectral probability amplitude of the down-converted photons in signal and idler modes with frequencies $\omega_s$ and $\omega_i$, respectively. The TPSA is used to determine the degree of frequency entanglement [1,4,16-18] and plays a central role in the heralded generation of pure single-photon states [19].
The TPSA depends on both the pump spectrum $F_p(\omega)$ and the phase matching in the nonlinear crystal,

$$F(\omega_s,\omega_i) \propto F_p(\omega_s+\omega_i)\, \mathrm{sinc}\!\left(\frac{\Delta k_z L}{2}\right), \qquad (2)$$

where $\omega_p, \omega_s, \omega_i$ are the pump, signal, and idler frequencies, $\mathrm{sinc}(x) \equiv \sin x / x$, $\Delta k_z \equiv \Delta k_z(\omega_s,\omega_i)$ is the longitudinal mismatch, and $L$ the crystal length.
If the TPSA is not factorable, $F(\omega_s,\omega_i) \neq F_s(\omega_s) F_i(\omega_i)$, then the state (1) is entangled. Nevertheless, it can always be written as a sum of factorable states (the Schmidt decomposition),

$$F(\omega_s,\omega_i) = \sum_n \sqrt{\lambda_n}\, f^s_n(\omega_s)\, f^i_n(\omega_i), \qquad (3)$$

where $f^{s,i}_n(\omega)$ are the Schmidt modes for the signal and idler photons and $\lambda_n$ are the Schmidt coefficients, $\sum_n \lambda_n = 1$. The Schmidt number $K \equiv [\sum_n \lambda_n^2]^{-1}$ represents a measure of entanglement [20].
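Numerically, the Schmidt decomposition of a discretized TPSA is simply its singular value decomposition: the singular vectors give the Schmidt modes and the normalized squared singular values give the coefficients λ_n. The Python sketch below uses an illustrative double-Gaussian TPSA with arbitrary widths; the parameter values are assumptions, not those of the experiment.

```python
import numpy as np

# Discretize a model TPSA on a frequency grid (arbitrary units)
w = np.linspace(-5, 5, 400)
ws, wi = np.meshgrid(w, w, indexing="ij")
sigma_p, sigma_c = 0.5, 2.0  # pump and phase-matching widths (hypothetical)
F = (np.exp(-(ws + wi) ** 2 / (2 * sigma_p ** 2))
     * np.exp(-(ws - wi) ** 2 / (2 * sigma_c ** 2)))

# Schmidt decomposition = singular value decomposition of the discretized TPSA
u, s, vh = np.linalg.svd(F)
lam = s ** 2 / np.sum(s ** 2)   # Schmidt coefficients, sum to 1
K = 1.0 / np.sum(lam ** 2)      # Schmidt number
print(f"Schmidt number K = {K:.2f}")  # columns of u / rows of vh are the modes
```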
Separable Schmidt modes. In most cases, filtering out a single Schmidt mode $f^{s,i}_n$ is challenging. However, the task becomes simple if the modes of different orders do not overlap in frequency, $\forall\omega\; f^s_n(\omega) f^s_m(\omega) \sim \delta_{nm}$. (Due to the orthogonality of the Schmidt modes a weaker condition is always fulfilled, $\int d\omega\, f^s_n(\omega) f^{s*}_m(\omega) = \delta_{nm}$.) Then, with the help of an appropriate spectral device, the simplest one being a prism followed by a slit, any of the signal Schmidt modes $f^s_n(\omega)$ can be filtered, in principle, losslessly, and similarly for the idler modes. As an example, consider SPDC from a pump with a comb-like spectrum. Such a pump can be obtained using standard pulse shaping methods; as a proof-of-principle demonstration we will consider a simple one, based on passing a laser beam through a Fabry-Perot (FP) cavity (Fig. 1). If the phase matching leads to a TPSA stretched in the $\omega_s + \omega_i$ direction (Fig. 1a), the introduction of an FP with an appropriate free spectral range will result in a TPSA given by separate maxima (Fig. 1b). If these maxima give non-overlapping projections on both horizontal and vertical axes, each of them represents a product of Schmidt modes $f^s_n(\omega_s) f^i_n(\omega_i)$ [21]. In order to realize this situation in experiment, one can notice that the tilt of the TPSA (Fig. 1a) can be changed by using different phase matching conditions. Indeed, the tilt $\alpha$ is given by $\tan\alpha = -\gamma_s/\gamma_i$, where $\gamma_{s,i} \equiv 1/v_p - 1/v_{s,i}$, with $v_p$, $v_{s,i}$ being the group velocities of the pump, signal, and idler photons [22]. In most cases, the tilt is negative, but for frequency-degenerate type-II phasematching in a KDP crystal from a 415 nm pump, $\gamma_s = 0$, hence a zero tilt can be realized [19]. The possibility of a positive tilt, first discussed in Ref. [23], is realized in KDP at higher pumping wavelengths. The tilt is 45° if KDP is pumped at 532 nm (Fig. 1c). The shape of a single TPSA maximum in the presence of a comb-like pump spectrum can then be changed by changing the length of the crystal and the width of a single maximum in the 'comb'.
Gaussian model. Consider now a single 'spot' of the TPSA distribution in Fig. 1. Its shape can be obtained from Eq. (2) by assuming that the pump spectrum is given by a single peak of the 'comb'. For simplicity, let us first describe the shape of this peak as a Gaussian and replace the sinc function by a Gaussian function as well. Then, the cross-section of the TPSA for a single 'spot' in Fig. 1b is represented by an ellipse, which will be oriented horizontally or vertically if [21]

$$\sin(2\alpha) = \frac{2\sigma_c^2}{\sigma_p^2}, \qquad (4)$$

where $\sigma_c$ is related to the crystal length $L$, and $\sigma_p$ is given by the width of a single maximum in the pump spectrum. This way of obtaining a single-mode TPSA is similar to the one used in Ref. [24]. Clearly, condition (4) can only be satisfied for $\alpha > 0$. The required value of $\alpha$ is the smaller, the longer the crystal and the broader the width of a single 'comb' maximum. This provides additional possibilities to engineer the state. For the maxima to overlap neither in the signal frequency nor in the idler one, they should be sufficiently well separated. This imposes an additional requirement on the distance between the 'comb' maxima $\Delta\omega$. For $\alpha < 45°$, the condition is $\sin^4\alpha/\sigma_c^2 + \sin^2\alpha/(2\sigma_p^2) \gg 1/(\Delta\omega)^2$ [21] and it can be satisfied for a sufficiently large $\Delta\omega$.
Numerical calculation. An exact numerical calculation of the TPSA has been performed for the case of a type-II KDP crystal pumped by 160 fs pulses with the central wavelength varying from 370 nm to 450 nm, transmitted through an FP interferometer with a thickness of 100 µm. A typical pump spectrum is shown in Fig. 2. By tilting the FP (middle and right panels), one can change the finesse and hence the width of a single maximum. Calculated TPSA shapes for different pump wavelengths are shown in Fig. 3a-c. One can see that the tilt of the TPSA changes, being negative for wavelengths below 415 nm and positive otherwise. The FP orientation is assumed to be normal and the crystal length is 15 mm. One can observe that even in panel c, separate 'spots' of the TPSA do not represent single-mode states as they are stretched in a tilted direction. The only ways to make them single-mode are either to increase the crystal length or to broaden the FP transmission peaks. This possibility is demonstrated in Fig. 3d-f showing the case of the FP tilted by 45° and the crystal length 13 mm (d), 15 mm (e), and 17 mm (f). The TPSA shown in Fig. 3e has the desired feature: each separate maximum represents a single-mode state. The small overlap of the neighboring maxima, partly caused by the background of the FP transmission spectrum, can be eliminated by simultaneously increasing the FP finesse and the crystal length, which was technically impossible in our experiment. In the case of pumping at 532 nm this problem does not arise (Fig. 1c) as the 'spots' are well separated in both dimensions.
For an isolated single 'spot' of the TPSA in Fig. 1c, exact numerical calculation of the Schmidt number gives K = 1.23 [21]. The deviation from unity is caused by the pump Lorentzian shape and the side lobes of the 'sinc' function. Shaping the pump spectrum as a comb of Gaussian peaks would give K = 1.06.

Experimental setup. In the preparation part of our setup (Fig. 4) we use a Ti-Sapphire mode-locked laser, with a pulse duration of 160 fs and a Gaussian spectrum with the central wavelength tunable around 800 nm with FWHM bandwidth of 10 nm, frequency-doubled to get a FWHM bandwidth of 2.8 nm. Into the frequency-doubled beam we put an FP cavity with air spacing of 100 µm in order to shape the spectrum as in Fig. 2. The beam is then focused, with an f = 1 m lens, into a single 5 mm BBO crystal or a pair of 5 mm KDP crystals cut for collinear degenerate type-II phasematching. In order to reduce the effect of transverse walk-off, the crystals have optic axes tilted symmetrically with respect to the pump direction. After the crystals, the pump is eliminated using a dichroic mirror and a red-glass filter.
In the registration part of the setup, the TPSA is measured using the effect of frequency-to-time Fourier transformation in the course of two-photon light propagation through a dispersive medium. This effect, first studied for cw biphotons [25-27], was later applied to the spectroscopy of single photons [28] and next to the measurement of TPSA for femtosecond-pulsed biphotons [24,29,30]. The signal and idler photons are sent through different optical fibres, and then their arrival times with respect to the pump pulse are analyzed [24]. After a fibre of length $l$, the joint probability distribution amplitude of the arrival times for signal and idler photons $\tilde F(t_s,t_i)$ takes the shape of the TPSA $F(\Omega_s,\Omega_i)$, with the frequency arguments rescaled [30],

$$\Omega_{s,i} = \frac{t_{s,i}}{k''_{s,i}\, l}, \qquad (5)$$

the tilde denoting the Fourier transformation and $k''_{s,i}$ given by the group-velocity dispersion (GVD) of the fibre. In our measurement setup, the two photons of the same pair are split on a polarizing beam splitter and fed into two identical Nufern 780-HP fibres of 1 km length, with the GVD changing from −120 ps/nm/km at 800 nm to −90 ps/nm/km at 900 nm. At the fibre outputs, two silicon-based single photon avalanche diodes (SPADs) with 50 ps time jitter, connected to a three-channel time-to-digital converter, measure the distribution of the arrival times of the signal and idler photons with respect to the trigger signal of the pump pulse provided by a fast photodiode inside the laser housing. The measurement yields a histogram proportional to the squared TPSA, with a resolution of 1.5 nm. As an example, the inset to Fig. 4 shows the distribution obtained for a 5 mm BBO crystal pumped by a 404 nm pump. The arrival times are recalculated into wavelengths according to Eq. (5). Separate maxima of the TPSA are clearly seen but they do not represent single-mode states, due to the overall negative tilt of the TPSA. In order to obtain the positive tilt, we made the measurements with KDP crystals and the pump wavelength varying within the range 395−440 nm. The results of the measurement with the KDP crystals are presented in Fig. 5 together with the distributions calculated with an account for the finite time resolution. As the pump wavelength changes from 395 nm to 440 nm, the TPSA tilt changes from negative to positive. One clearly sees the 'spot' structure caused by the FP transmission maxima. The number of distinct transmission maxima is reduced due to the FP tilt (see Fig. 2). Despite the experimental non-idealities (spreading of each 'spot' due to the detectors' jitter, high background caused by low FP finesse and the reflections at the crystal surfaces, and large statistical error caused by weak signals), several important observations can be made. The distribution with the negative TPSA tilt (Fig. 5a), similarly to the inset to Fig. 4, shows the multi-mode structure of separate spots. The distribution in Fig. 5b shows nearly single-mode structure of separate spots, but in this case the whole TPSA is factorable and thus represents a single-mode state. Finally, in the positive-tilt distributions (Fig. 5c,d) each spot, according to the theoretical predictions, has single-mode structure and, at the same time, the whole TPSA is not factorable.

[Fig. 4 caption: Experimental setup. The pump beam, spectrally shaped by the FP as in Fig. 2, pumps a type-II nonlinear crystal (BBO or KDP). A dichroic mirror (DM) removes the residual pump beam. Photons of the same pair are separated by a polarizing beam splitter and fed into two identical fibers of 1 km length. Finally, photons are detected by two SPADs and coincidence electronics is used to measure their arrival times with respect to the trigger from the laser. Inset: the measured distribution, recalculated into wavelengths, for a 5 mm BBO crystal.]
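The recalculation of arrival times into wavelengths amounts to dividing the time offset by the product of the GVD and the fibre length. A minimal arithmetic sketch, using the GVD magnitude quoted above (the sign and the exact dispersion curve are ignored here):

```python
def time_to_wavelength_offset(dt_ps: float, gvd_ps_per_nm_km: float,
                              length_km: float) -> float:
    """Wavelength offset (nm) corresponding to an arrival-time offset dt (ps)."""
    return dt_ps / (gvd_ps_per_nm_km * length_km)

# With |GVD| ~ 120 ps/nm/km near 800 nm and a 1 km fibre (values from the text),
# a 180 ps delay corresponds to a 1.5 nm wavelength shift
print(time_to_wavelength_offset(180.0, 120.0, 1.0))  # -> 1.5
```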
Cavity SPDC. A TPSA with non-overlapping Schmidt modes can be realized in a different way, in which it is not the pump but the signal and idler photons that have a 'comb-like' spectrum. This is the case in cavity-enhanced SPDC [31,32] with type-I phase matching, a negative TPSA tilt, and the cavity resonant for signal/idler radiation. The cavity will then select narrowband Lorentzian maxima at the same signal and idler frequencies. Provided that the pump spectrum is considerably broader than these maxima (but still much narrower than the cavity FSR), the TPSA will consist of single-mode maxima displaced in the direction $\omega_s + \omega_i = \mathrm{const}$ [33]. For the experimental parameters of Ref. [31], this situation will be realized for a few-nanosecond pump pulses and will result in maxima of about 100 MHz width. Note that a spectrum with a similar structure has been considered in Ref. [34]; such states were called mode-locked two-photon states. The issue of separable Schmidt modes, however, was not discussed. In many current experiments, a single maximum can be filtered out [35], and it can be a Schmidt mode if more broadband pumping is used.
Conclusion. We have shown that it is possible to prepare a biphoton state with a multimode frequency-temporal structure containing non-overlapping Schmidt modes. Such a state, provided that the number of modes is large, can be used for higher-dimensional encoding of quantum information, with the possibility to address separate modes. Although in our proof-of-principle experiment there are only a few Schmidt modes, a highly multimode state can be obtained by shaping the pump pulse using more advanced techniques. A similar method of creating non-overlapping Schmidt modes can be applied to the spatial TPSA; then the pump spectrum can be shaped by a diffraction grating. Separable Schmidt frequency modes can also be obtained in cavity-enhanced SPDC with a sufficiently broadband pump and type-I phasematching. Our results, demonstrating the possibility of achieving a set of non-overlapping Schmidt modes forming a non-separable state, are expected to have a high impact on quantum state engineering.
SUPPLEMENTAL INFORMATION: SEPARABLE SCHMIDT MODES OF A NON-SEPARABLE STATE
Here we provide additional information on the theory and experiment presented in the main paper. The first part contains the calculation of Schmidt modes for a two-photon spectral amplitude (TPSA) given by a set of Gaussian functions. The second part analyzes the shape of a single maximum.
Schmidt modes of a multi-peak TPSA.
Consider the Schmidt modes for a TPSA given by a sum of $M$ double-Gaussian peaks,

$$F(x,y) = N \sum_{i=1}^{M} \exp\!\left[-\frac{(x-x_i)^2}{2\sigma^2}\right] \exp\!\left[-\frac{(y-y_i)^2}{2\sigma^2}\right],$$

where we denoted the signal and idler frequencies by $x$ and $y$, $N$ is a normalization constant, and we assumed for simplicity that all Gaussians have the same width.
The case where the peaks overlap in both $x$ and $y$ is not interesting for the current work as it does not allow the lossless filtering of a single peak. Therefore, assume that the peaks do not overlap in $x$, i.e., $|x_i - x_j| \gg \sigma$ for $i \neq j$, but partially overlap in $y$, so that the overlap coefficients

$$c_{ij} \equiv \int dy\, \exp\!\left[-\frac{(y-y_i)^2}{2\sigma^2}\right] \exp\!\left[-\frac{(y-y_j)^2}{2\sigma^2}\right]$$

are in general nonzero for $i \neq j$. In order to find the Schmidt modes, we need to solve the integral equation

$$\int dx'\, K(x,x')\, f(x') = \lambda f(x)$$

with the kernel [1]

$$K(x,x') = \int dy\, F(x,y)\, F^*(x',y),$$

which, for the TPSA above, is a double sum of Gaussians weighted by the coefficients $c_{ij}$. If the Schmidt mode is searched as one of the orthogonal normalized Gaussian functions, the resulting equation is impossible to satisfy in the general case. Therefore, this situation does not provide separable Schmidt modes.
In the special case where the Gaussians do not overlap in $y$ as well, $c_{ij} = c\,\delta_{ij}$, and the eigenvalue equation is satisfied, with each Gaussian peak providing a separate Schmidt mode. This special case is at the focus of the present paper as it enables lossless selection of a single-mode state out of an entangled one.
A single peak of TPSA.
Assume first that the shape of a single TPSA peak in Fig. 1b of the main text is given by a double Gaussian function,

$$F(\omega_s,\omega_i) \propto \exp\!\left[-\frac{(\omega_s+\omega_i)^2}{2\sigma_p^2}\right] \exp\!\left[-\frac{(\omega_s \sin\alpha - \omega_i \cos\alpha)^2}{2\sigma_c^2}\right],$$

where $\sigma_c$ is related to the crystal length $L$, $\sigma_p$ is given by the width of a single peak in the pump spectrum, $2\sqrt{2\ln 2}\,\sigma_p$ being the full width at half maximum (FWHM), and $\Delta\omega$ is the distance between the peaks. The cross-section of this double Gaussian function, say, at half-maximum height, will be an ellipse oriented horizontally or vertically if the terms with the products $\omega_s\omega_i$ disappear, i.e., if condition (4) of the main text holds. This alone does not ensure that the peak corresponds to a single term of the Schmidt decomposition for the whole state. Another necessary condition is that different Gaussian functions overlap neither in $\omega_i$ nor in $\omega_s$. This imposes two additional requirements on $\Delta\omega$. In the most common case of $\alpha < 45°$, the second condition is stronger. It can be written in the form

$$\frac{\sin^4\alpha}{\sigma_c^2} + \frac{\sin^2\alpha}{2\sigma_p^2} \gg \frac{1}{(\Delta\omega)^2}$$

and can be satisfied if $\Delta\omega$ is large enough.
Finally, consider a single TPSA maximum with an account for our real experimental conditions. The TPSA has the form (2) of the main text, with the pump spectral amplitude given by the Fabry-Perot transmission in the vicinity of a single maximum,

$$F_p(\omega) \propto \frac{1-R}{1 - R\, \exp\!\left(2 i \omega d \cos\varphi / c\right)},$$

where $R$ is the reflectance of each mirror (the mirrors are assumed to be similar), $d$ the FP spacing, $\varphi$ the tilt of the plates and $c$ the speed of light.
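For a quick numerical check of the comb parameters, the sketch below evaluates the textbook Airy intensity transmission of a tilted FP and its free spectral range; the mirror reflectance value is hypothetical, and the Airy form is the standard expression rather than a quantity taken from this paper.

```python
import numpy as np

c = 2.998e8           # speed of light (m/s)
d = 100e-6            # FP air spacing from the text (m)
phi = np.deg2rad(45)  # plate tilt (45 degrees, as in the calculations)
R = 0.7               # mirror reflectance -- hypothetical value

def airy_transmission(freq_hz):
    """Textbook Airy intensity transmission of a symmetric two-mirror FP."""
    delta = 4 * np.pi * freq_hz * d * np.cos(phi) / c  # round-trip phase
    coef = 4 * R / (1 - R) ** 2                        # coefficient of finesse
    return 1.0 / (1.0 + coef * np.sin(delta / 2) ** 2)

fsr_hz = c / (2 * d * np.cos(phi))  # free spectral range
print(f"FSR = {fsr_hz / 1e12:.2f} THz, "
      f"transmission at one FSR = {airy_transmission(fsr_hz):.2f}")
```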
Mixing effects of $\eta-\eta'$ in $\Lambda_b\rightarrow \Lambda \eta^{(')}$ decays
We perform a thorough analysis of the $\eta-\eta'$ mixing effects on the $\Lambda_b\rightarrow \Lambda \eta^{(')}$ decays based on the perturbative QCD (PQCD) factorization approach. Branching ratios, up-down and direct $CP$ asymmetries are computed by considering four popular mixing schemes, namely the $\eta-\eta'$, $\eta-\eta'-\eta_c$, $\eta-\eta'-G$, and $\eta-\eta'-G-\eta_c$ mixing formalisms, where $G$ represents the physical pseudoscalar glueball. The PQCD predictions with the four mixing schemes do not change much for the $\eta$ channel but change significantly for the $\eta'$ one. In particular, the value of $\mathcal{B}(\Lambda_b\rightarrow \Lambda \eta^{'})$ in the $\eta-\eta'-G-\eta_c$ mixing scheme exceeds the present experimental bound by a factor of 2, indicating that the related mixing angles may be overestimated. Because of the distinctive patterns of interference between $S$-wave and $P$-wave amplitudes, the predicted up-down asymmetries for the two modes differ significantly. The obvious discrepancies among different theoretical analyses should be clarified in the future. The direct $CP$ violations are predicted to be at the level of a few percent, mainly because the tree contributions of the strange and nonstrange amplitudes suffer from color suppression and CKM suppression. Finally, as a byproduct, we investigate the $\Lambda_b\rightarrow \Lambda \eta_c$ process, which has a large branching ratio of order $10^{-4}$, promising to be measured by the LHCb experiment. Our findings are useful for constraining the mixing parameters, comprehending the $\eta^{(')}$ configurations, and instructing experimental measurements.
In Ref. [25], three different mixing schemes for the η − η ′ system were taken into account and the PQCD calculations for the B → η ( ′ ) K decays were improved to the next-to-leading-order (NLO) level. It was found that the NLO PQCD predictions in the η − η ′ − G mixing scheme provide a nearly perfect interpretation of the measured values.
The η − η ′ − G mixing scheme was also applied to the B → J/ψη ( ′ ) decays [30] in PQCD. A large gluon contribution was advocated from the analysis of relative probabilities of the B s → J/ψη ′ and B s → J/ψη decays. However, the subsequent measurements from LHCb [31,32] hint at a small gluonic component in the η ′ meson. It is then worthwhile to examine whether these mixing schemes can well explain the measurements in the baryon reactions involving η or η ′ mesons, such as Λ b → Λη ( ′ ) decays.
Our purpose in the present paper is to probe the η − η ′ mixing in the Λ b → Λη ( ′ ) decays by employing the PQCD approach at leading-order accuracy. Four available mixing schemes for the η − η ′ system, namely the η − η ′ , η − η ′ − G, η − η ′ − G − η c , and η − η ′ − η c mixing schemes, are taken into account. The effect of radial mixing is neglected due to the absence of Λ b → η ( ′ ) form factors in the concerned processes [38], and the mixing with the pion is also not considered here under the isospin symmetry. Within these mixing schemes, we calculate the branching ratios, up-down asymmetries, and direct CP violations for Λ b → Λη ( ′ ) and investigate the scheme dependence of the theoretical predictions.
The paper is organized as follows. In Sec. II, we first discuss the four mixing schemes as well as the related mixing angles and review the hadronic light-cone distribution amplitudes (LCDAs). Then we briefly present the effective Hamiltonian and kinematics for the PQCD calculations. We show the PQCD predictions for the branching ratios, up-down and direct CP asymmetries of the concerned decays with four different mixing schemes in Sec. III. A summary will be given in the last section. The appendices are devoted to details of the computation of the decay amplitudes within PQCD.
II. THEORETICAL FRAMEWORK
A. η − η′ mixing phenomenon

This section is devoted to the phenomenological aspects of η − η′ mixing. In this work we consistently use the quark-flavor mixing basis rather than the singlet-octet basis, since fewer two-parton twist-3 meson distribution amplitudes need to be introduced [26]. Following the analysis of Refs. [3,6,25,29], we first introduce the four current η − η′ mixing schemes. In the conventional Feldmann-Kroll-Stech (FKS) scheme [3,4] for the η − η′ mixing, the physical neutral pseudoscalar mesons η(′) are represented as superpositions of the isosinglet states,

η = c_φ η_q − s_φ η_s ,   η′ = s_φ η_q + c_φ η_s ,

with the shorthand (c, s) ≡ (cos, sin) and φ being the mixing angle. Here η_q = (uū + dd̄)/√2 and η_s = ss̄ are the so-called nonstrange and strange quark-flavor states, respectively. The presence of only one mixing angle in this case is due to the neglect of the Okubo-Zweig-Iizuka (OZI) suppressed contributions [14].
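To make the angle bookkeeping concrete, the following minimal Python sketch builds the FKS rotation and reads off the η_q and η_s contents of the physical states; the value φ ≈ 39.3° is the commonly quoted FKS fit and is used here purely for illustration.

```python
import numpy as np

# FKS quark-flavor mixing: one angle phi rotates (eta_q, eta_s) into the
# physical (eta, eta').  phi ~ 39.3 deg is the commonly quoted FKS value,
# used here purely for illustration.

phi = np.radians(39.3)
U = np.array([[np.cos(phi), -np.sin(phi)],
              [np.sin(phi),  np.cos(phi)]])

eta_content, etap_content = U            # rows: (eta_q, eta_s) content
print("eta  = %+.3f eta_q %+.3f eta_s" % tuple(eta_content))
print("eta' = %+.3f eta_q %+.3f eta_s" % tuple(etap_content))
```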
Alternatively, allowing for a heavy-quark charm cc̄ component in the η and η′ mesons, the conventional FKS formalism can be generalized naturally to the η − η′ − η_c trimixing in the qq̄ − ss̄ − cc̄ basis. The physical states are related to the flavor states via the 3 × 3 rotation of Eq. (2) [3], where θ_c and θ_y are two new mixing angles related to the charm decay constants of the η(′) mesons.
In QCD, gluons may form a bound state, called gluonium, that can mix with neutral mesons [10]. By including a possible pseudoscalar glueball state η_g in the η(′) mesons [13,29], the FKS mixing scheme can be extended to the η − η′ − G mixing formalism, where G denotes the physical pseudoscalar glueball. Using the quark-flavor basis, we can write the corresponding mixing matrix [29,30], where θ_i = 54.7° is the ideal mixing angle between the octet-singlet and the quark-flavor states in the SU(3) flavor-symmetry limit [16], θ is related to φ by θ = φ − θ_i, and φ_G is the mixing angle for the gluonium contribution. It is assumed that the glueball mixes only with the flavor-singlet η_1, not with the flavor-octet η_8, so the two mixing angles φ and φ_G are sufficient to describe the mixing matrix in Eq. (3). It has been verified that the contribution from the gluonic distribution amplitudes in the η(′) meson is negligible for B meson transition form factors [26]. Hence, we still suppose that the η and η′ mesons are produced via their nonstrange (strange) components in the baryon decays under the η − η′ − G mixing.
In Ref. [6], the authors combined the above two trimixing schemes into the η − η′ − G − η_c tetramixing, which is described by a 4 × 4 mixing matrix. It was assumed that the heavy-flavor state mixes only with the pseudoscalar glueball; the transformation is given in Eq. (4), where the new angle φ_C is the mixing angle between the glueball and η_c components. It can easily be seen that the η − η′ − G − η_c tetramixing formalism reduces to the η − η′ − G and FKS schemes in the φ_C → 0 and φ_{C,G} → 0 limits, respectively. As the mixing of η and η′ is still not completely understood, these states may also mix with radial excitations, leading to more complicated mixing formalisms; in the following analysis, we ignore such possible admixtures. In addition, since we assume isospin symmetry to be exact (m_u = m_d ≪ m_s), the mixing with the pion, such as the π − η mixing [39], the π − η − η′ trimixing [40,41], and the π − η − η′ − η_c tetramixing [42], is not considered here.
B. Light-cone distribution amplitudes
The hadronic light-cone distribution amplitudes (LCDAs) are important inputs to the PQCD calculations; they describe the momentum-fraction distributions of the valence quarks inside a hadron. Various models of the Λ_b and Λ baryon LCDAs are available in the literature [43][44][45][46][47][48][49][50][51][52][53][54]. In this work, we adopt the exponential-model LCDAs for the Λ_b baryon [45] and the Chernyak-Ogloblin-Zhitnitsky (COZ) model for the Λ [49], whose explicit expressions can be found in previous work [55][56][57] and shall not be repeated here. It has been confirmed that the models employed lead to reasonable numerical results for the Λ_b → Λ form factor with fewer free parameters [55].
Two-parton quark components for the η_{s,c} mesons are defined via the nonlocal matrix elements [26,58,59] of Eq. (5), where N_c is the number of colors. The η_q component can be obtained by replacing the s quark with d in Eq. (5) and multiplying by a factor of 1/√2. The two light-cone vectors n = (1, 0, 0_T) and v = (0, 1, 0_T) satisfy n · v = 1. m_0^{q,s} are the chiral enhancement scales associated with the twist-3 LCDAs, which can be expressed in terms of the decay constants f_{q,s} and the mixing angles. Their values can be fixed by solving the mass matrix in the different mixing schemes [3] and will be given in the next section.
The twist-2 LCDA of the η_c meson is modeled following Ref. [61] as a function of u = 2y − 1, with the shape parameter ω = 0.6 GeV taken from [61]. f_{η_c} and m_{η_c} are the decay constant and mass of the η_c meson, respectively, and the two normalization constants N_{v,s} are fixed by the normalization conditions of [61].

In the PQCD picture, a decay amplitude is calculated as the convolution of the nonperturbative, universal LCDAs with the perturbative hard scattering amplitude. Having defined the nonperturbative LCDAs in the last subsection, we are ready to calculate the decay amplitudes at leading order in the strong coupling constant. The topological diagrams responsible for the considered decays are presented in Fig. 1. The labels T, C, E, and B refer to the external W emission, internal W emission, W exchange, and bow-tie topologies, respectively. The subscript q(s) of E corresponds to the contribution from the nonstrange (strange) component of the η(′) mesons. Exchanging two identical quarks in the final-state baryon and meson of the E- or B-type diagram, one obtains a new topology, denoted by B′ and exhibited in the last diagram of Fig. 1. We draw only one representative Feynman diagram for each topology here; for the complete set of Feynman diagrams we refer to our previous work [55,56,63].
In the Λ_b rest frame, we choose the Λ_b (Λ) baryon momentum p (p′) and the meson momentum q in light-cone coordinates. The effective weak Hamiltonian separates the long-distance from the short-distance physics. The four-quark operators O_i, which describe the hard electroweak process in b-quark decays, involve a sum over q′ running over the quark fields that are active at the scale µ = O(m_b). The decay amplitudes of Λ_b → Λη_q, Λ_b → Λη_s, and Λ_b → Λη_c, namely the nonstrange M_q, strange M_s, and charm M_c amplitudes, are given by sandwiching H_eff between the initial and final states, and can be further expanded in terms of the Dirac spinors, where M_S and M_P correspond to the parity-violating S-wave and parity-conserving P-wave amplitudes, respectively. Their generic factorization formula can be written as Eq. (17), with a summation extending over all possible diagrams R_ij. a^σ_{Rij} denotes the product of the CKM matrix elements and the Wilson coefficients, where the superscripts σ = LL, LR, and SP refer to the contributions from the (V − A)(V − A), (V − A)(V + A), and (S − P)(S + P) operators, respectively. Note that an overall factor of 8 from a^σ_{Rij} has been absorbed into the coefficient in Eq. (17) for convenience. The explicit forms of the Sudakov factors S_{Rij} can be found in [56]. H^σ_{Rij} is the numerator of the hard amplitude, which depends on the spin structure of the final state. Ω_{Rij} is the Fourier transformation of the denominator of the hard amplitude from k_T space to its conjugate b space. The impact parameters b, b′, and b_q are conjugate to the parton transverse momenta k_T, k′_T, and q_T, respectively. The integration measure of the momentum fractions is defined with δ functions enforcing momentum conservation. The quantities associated with a specific diagram, such as H_{Rij}, a_{Rij}, [Db]_{Rij}, and t_{Rij}, are collected in Appendix A. The branching ratio, up-down asymmetry, and direct CP asymmetry of the concerned decays are given as in Refs. [63,65], cf. Eq. (19), where |P| is the magnitude of the three-momentum of the Λ baryon in the rest frame of the Λ_b baryon.
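Since the explicit observable formulas of Refs. [63,65] are not reproduced above, the short Python sketch below only illustrates schematically how the three observables are assembled from the S- and P-wave amplitudes; the kinematic weight kappa and the phase-space prefactor are placeholders, not the exact factors of the cited formulas.

```python
import numpy as np

# Schematic assembly of the observables from S- and P-wave amplitudes.  The
# kinematic weight kappa and the phase-space prefactor are placeholders for
# the exact factors of the formulas cited as Refs. [63,65] in the text.

def observables(S, P, Sbar, Pbar, kappa=1.0, phase_space=1.0):
    """S, P: complex S-/P-wave amplitudes; Sbar, Pbar: CP conjugates."""
    gamma = phase_space * (abs(S)**2 + kappa**2 * abs(P)**2)
    gammabar = phase_space * (abs(Sbar)**2 + kappa**2 * abs(Pbar)**2)
    # Up-down asymmetry: S-P interference over the total rate.
    alpha = 2.0 * kappa * (S * np.conj(P)).real / (abs(S)**2 + kappa**2 * abs(P)**2)
    acp = (gamma - gammabar) / (gamma + gammabar)   # direct CP asymmetry
    return gamma, alpha, acp
```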
III. NUMERICAL ANALYSIS
In this section, we perform the numerical analysis of the branching ratios, up-down and direct CP asymmetries within the various mixing schemes. As discussed before, there are four available mixing schemes for the η − η′ system, denoted as S1, S2, S3, and S4, with the scheme-dependent input parameters given in Eqs. (20)-(23):

• η − η′ mixing (S1) [3], Eq. (20),
• η − η′ − G mixing (S2) [29], Eq. (21),
• η − η′ − G − η_c mixing (S3) [6], Eq. (22),
• η − η′ − η_c mixing (S4) [3], Eq. (23).

To obtain the chiral enhancement scales m_0^{q,s}, we also need the light quark masses as inputs. Because the meson distribution amplitudes are defined at 1 GeV, we take m_q(1 GeV) = 5.6 MeV [66] and m_s(1 GeV) = 0.13 GeV [25]. The relevant masses (in GeV) and the CKM parameters in the Wolfenstein parametrization are taken from the Particle Data Group [17]. The lifetime of the Λ_b baryon is taken as 1.464 ps. For the pion decay constant we use f_π = 0.131 GeV [60], and for the Gegenbauer moments we choose a_2 = 0.44 ± 0.22 and a_4 = 0.25 [27]. We neglect the scale dependence of the chiral scales and the Gegenbauer moments in the default calculation.

As stressed before, the PQCD calculations are performed in the quark-flavor basis. The contribution of the various topological diagrams to a specific process is determined by the quark flavor composition of the particles involved in the decay. For example, the nonstrange amplitude M_q receives contributions from all five topological diagrams, while the strange one, M_s, receives no contributions from the C- and B-type diagrams. Note that the W exchange bu → su transition contributes to M_q and M_s through the E_q and E_s diagrams, respectively. As for the Λ_b → Λη_c decay, besides the dominant T-type diagrams contributing to M_c, it can proceed via B′ diagrams if one replaces the ss̄ pair with cc̄ in the last diagram of Fig. 1.
According to Eq. (17), we first give, in Table I, the numerical results for the contributions of the various topologies to the S-wave and P-wave amplitudes of the Λ_b → Λη_{q,s,c} processes within the four mixing schemes. The differences among these solutions can be ascribed to the chiral enhancement scales, which are related to the decay constants and the mixing angles. The drastic sensitivity of the chiral enhancement scales to the choice of mixing scheme is reflected in the spread of our predictions for the decay amplitudes. Referring to Eqs. (20) and (21), we see that the chiral scale m_0^q in S2 is almost twice as large as that in S1, causing distinct nonstrange amplitudes for the two schemes. Analogously, the values of m_0^s in S1 and S4 are almost equal, resulting in a tiny variation of the strange amplitudes. Likewise, since the same mixing parameters are used in S2 and S3, as shown in Eqs. (21) and (22), the calculated amplitudes are exactly the same. The numerical difference between S3 and S4 for the charm amplitude arises from the different choices of the mass and decay constant of the η_c meson, as shown in Eqs. (22) and (23). One may wonder why the T-type nonstrange amplitudes yield the same results in S1 and S4, since after all the parameter m_0^q differs between the two schemes. The interpretation is not trivial. The chiral scales multiply the twist-3 meson LCDAs, which do not contribute to the nonstrange amplitude via the external W emission diagram at the current theoretical accuracy (see the expression of H^σ_{Tc7} in Table VII, for example), because only the external W emission from the (V − A)(V − A) and (S − P)(S + P) operators survives for the b → suū and b → sdd̄ transitions. Nevertheless, the strange amplitude receives additional twist-3 contributions from the W emission diagrams through the b → sss̄ transition with (V − A)(V + A) operators inserted. As a result, the m_0^s term appears in the strange amplitude, rendering the T-type strange amplitudes different between S1 and S4.
We now proceed to discuss the relative sizes of the various topological amplitudes. From Table I, we observe that the Λ_b → Λη_q process is dominated by T and C. As the interference between T and C is destructive, the contributions from the exchange diagrams, such as B and E, are in fact important and nonnegligible. Similar to the case of Λ_b → Λφ [56], the Λ_b → Λη_s decay is governed by T and E, which are of similar size. The Λ_b → Λη_c process is dominated by the T-type diagrams, and its amplitudes are larger than the (non)strange ones by one order of magnitude. The contributions from the B′-type exchange diagrams are predicted to be negligibly small in all three processes.
It is worth underlining that the penguin operators can be inserted into the diagrams in Fig. 1 via Fierz transformations. We do not distinguish between the tree and penguin contributions in Table I. The tree contributions to the strange and nonstrange amplitudes are expected to be small, because they are CKM suppressed compared to the penguin ones; if we turn off the penguin contributions, the total amplitudes decrease by one or two orders of magnitude. This feature is different for the charm amplitude, which is triggered by the quark decay b → scc̄. The large CKM matrix element V_cb V*_cs involved enhances the tree contributions, which dominate over the penguin ones. Utilizing the values of Table I in conjunction with the various mixing formalisms, one can calculate the S- and P-wave amplitudes for the Λ_b → Λη(′) decays,
whose numerical results are displayed in Table II, which lists the magnitudes of the S- and P-wave amplitudes (in units of 10^−10), the branching ratios (in units of 10^−6), the up-down asymmetries, and the direct CP violations of the Λ_b → Λη(′) decays for the different mixing schemes; the branching ratios, up-down asymmetries, and direct CP asymmetries are shown in the last three columns. There are four uncertainties for each entry. The first quoted uncertainty is due to the shape parameter ω_0 in the Λ_b LCDAs, varied by 10%. The second is caused by the variation of the Gegenbauer moment a_2 = 0.44 ± 0.22 in the leading-twist LCDAs of η_{q,s}; since a_2 is not yet well determined, the possible 50% variation leads to large changes in our predictions. The third refers to the uncertainty of the mixing angles φ (φ_G), as shown in Eqs. (20)-(23); note that the chiral enhancement scale m_0^q changes rapidly with the mixing angles, so this uncertainty can be classified as a hadronic-parameter uncertainty. The last one comes from the hard scale t, varied from 0.8t to 1.2t; this scale-dependent uncertainty can be reduced only if the next-to-leading-order contributions in the PQCD approach are included. One can see that these large hadronic-parameter uncertainties have a crucial influence on the PQCD calculations. The up-down asymmetries suffer large theoretical errors, especially the values in S4, due to complex interference effects, which will be detailed later. It is interesting that B(Λ_b → Λη′) in S3 is more sensitive to φ_G. This can be ascribed to the sizable η_c mixing effect in S3. According to Eq. (4), the charm amplitude for the η′ mode suffers the suppression from the mixing factor cos θ sin φ_G sin φ_C = 0.039^{+0.040}_{−0.042}, where the large uncertainty is due to the angle φ_G varying in the conservative range φ_G = 12° ± 13°. Moreover, the large Λ_b → Λη_c amplitude indicated in Table I can compensate for this suppression and have a sizable impact on the Λ_b → Λη′ decay. For the η mode, the corresponding mixing factor is −sin θ sin φ_G sin φ_C = −0.0076^{+0.0078}_{−0.0082}, which is smaller by a factor of 5; it follows that it does not have much impact on the decay rates involving η.
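As a back-of-envelope check of the quoted η_c admixture factors, the following sketch evaluates cos θ sin φ_G sin φ_C and −sin θ sin φ_G sin φ_C; the values of θ and φ_C below are hypothetical, tuned only to reproduce the quoted central values, since the fitted S3 angles are not listed in this excerpt.

```python
import numpy as np

# Back-of-envelope check of the eta_c admixture factors quoted above for S3.
# theta and phi_C below are hypothetical values tuned to reproduce the quoted
# central values; the fitted S3 angles are not listed in this excerpt.

theta = np.radians(11.0)    # theta = phi - 54.7 deg (illustrative)
phi_G = np.radians(12.0)    # central value quoted in the text
phi_C = np.radians(11.0)    # hypothetical

f_etap = np.cos(theta) * np.sin(phi_G) * np.sin(phi_C)    # eta' mode: ~0.039
f_eta = -np.sin(theta) * np.sin(phi_G) * np.sin(phi_C)    # eta  mode: ~-0.0076
print(f_etap, f_eta, abs(f_etap / f_eta))                 # ratio ~5
```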
We now discuss the sensitivity of the branching ratios of Λ_b → Λη(′) to the different mixing schemes. From Table II, one can see clearly that the results for the η mode are less sensitive to the mixing schemes, suggesting small gluonic and η_c components in the η meson. The marginal differences among the various schemes can be more or less traced to the different chiral enhancement scales, as already emphasized. However, the various schemes lead to very different branching ratios for the η′ channel: for example, the central values vary from 1.68 × 10^−6 for S2 to 5.67 × 10^−6 for S3. The largest branching ratio, from S3, is ascribed to the fact that in the η − η′ − G − η_c mixing scheme the tree-dominated Λ_b → Λη_c amplitude, induced by the b → scc̄ transition, can contribute to the Λ_b → Λη(′) decays through the mixing matrix, as indicated in Eq. (4). As stated above, the large charm amplitude can compensate for the tiny mixing factor, which implies that the η_c component of the η′ meson is important. Our results indicate that the obtained branching ratios for Λ_b → Λη and Λ_b → Λη′ are comparable in magnitude. This observation differs from the pattern of B(B → Kη) and B(B → Kη′), where the former is about an order of magnitude smaller than the latter.
In Ref. [67], the authors pointed out that few-percent OZI-violating effects, neglected in the FKS scheme, could enhance the chiral scale m_0^q sufficiently to accommodate the dramatically different data on the B → Kη(′) branching ratios in the PQCD approach. It is therefore interesting to see whether this effect can modify the pattern of the Λ_b → Λη(′) branching ratios and improve the agreement with the current data. It should be noted that the inclusion of the OZI-violating effects implies that two additional twist-2 meson distribution amplitudes, associated with the OZI-violating decay constants, need to be considered, but their contributions turn out to be insignificant [67]. Hence, we can simply concentrate on the effect of the modified parameter set. Using the central values f_q = 1.10 f_π, f_s = 1.46 f_π, φ = 36.84°, m_0^q = 4.32 GeV, and m_0^s = 1.94 GeV from [67] as inputs, we derive the corresponding results. We will see later that, with the OZI-violating effects included in S1, B(Λ_b → Λη) tends to be larger while B(Λ_b → Λη′) tends to be smaller, as favored by the experiments.

For the up-down asymmetries of the η mode, all four solutions exhibit a basically similar pattern in size and sign. The observation is different for the η′ channel: from Table II we see that S2 and S3 give large negative asymmetries, while the central values in S1 and S4 are small in magnitude but of opposite signs. These features can be understood as follows. From Eq. (19) we learn that α describes the interference between the S-wave and P-wave amplitudes. According to the mixing matrices described in the previous section, both the S-wave and P-wave amplitudes of the Λ_b → Λη(′) decays can be written as linear superpositions of the strange and nonstrange amplitudes through the mixing angle. Note that the nonstrange contents of the η and η′ mesons have the same sign in S1, while the strange ones are opposite in sign. This means the interference between the strange and nonstrange amplitudes is always destructive in Λ_b → Λη but constructive in Λ_b → Λη′. In addition, compared to the strange amplitude, the nonstrange amplitude acquires additional sizable contributions from the internal W emission diagrams shown in Fig. 1, which leads to different patterns of the S-wave and P-wave contributions in the strange and nonstrange amplitudes. Numerically, one can see from Table II that the P-wave component dominates over the S-wave one in the η′ mode, whereas they are comparable in the η mode. These combined effects cause the imaginary parts of the S-wave amplitudes of Λ_b → Λη and Λ_b → Λη′ to have the same sign, while the P-wave ones have opposite signs, as exhibited in Table II. Consequently, the up-down asymmetries of Λ_b → Λη and Λ_b → Λη′ are of opposite sign. The feature of the S2 scheme can be explained in a similar way. The interference pattern is more complicated with the inclusion of the charm content of the η(′) mesons in S3 and S4. We have learned from Eqs. (4) and (2) that the charm contents of the η(′) meson in the two mixing schemes are opposite (same) in sign. This difference has very little effect on the η mode because of the strong suppression from the mixing factors; however, it has an important influence on the η′ mode, as discussed before. We predict a large and negative α(Λ_b → Λη′) in S3, but a small and positive one in S4.
Since the decays under consideration are dominated by the penguin contributions and the tree amplitudes are color and CKM suppressed, their direct CP asymmetries are not large, below 10%. Although the additional tree amplitudes stemming from the b → scc̄ transition are included in S3 and S4, the weak phase of V_cb V*_cs is zero at order λ², the same as that of the penguin combination V_tb V*_ts. The enhancement arising from the charm content in fact leads to a smaller tree-over-penguin ratio, and thus the direct CP asymmetries of the η′ process in S3 and S4 are further reduced to below one percent.
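The statement about the vanishing weak phase can be checked directly in the Wolfenstein parametrization, as in the following sketch; the numerical CKM inputs are illustrative PDG-like values.

```python
import numpy as np

# At O(lambda^2) in the Wolfenstein parametrization, both the tree
# combination V_cb V_cs^* and the penguin combination V_tb V_ts^* are real,
# so their weak-phase difference vanishes at this order.  The weak phase
# first enters V_ts at O(lambda^4).
lam, A = 0.225, 0.826                          # illustrative PDG-like values

V_cb, V_cs = A * lam**2, 1.0 - lam**2 / 2.0    # both real at this order
V_tb, V_ts = 1.0, -A * lam**2                  # both real at this order

print(np.imag(V_cb * np.conjugate(V_cs)))      # 0.0 -> no weak phase (tree)
print(np.imag(V_tb * np.conjugate(V_ts)))      # 0.0 -> no weak phase (penguin)
```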
The comparisons with different theoretical models and with the experimental data are presented in Table III. There is a wide spread in the branching ratios predicted by the various model calculations, ranging from 10^−8 to 10^−5. The LFQM calculations [34] give the lowest predictions for the branching ratios because only the contributions from the tree operators were considered there, which implies that the penguin contributions play the leading roles in the concerned processes. In the absence of the penguin contributions, our central values of the branching ratios for the η and η′ modes in S1 are reduced to 8.92 × 10^−8 and 4.14 × 10^−8, respectively, which are comparable with the LFQM results [34]. The two solutions of Ref. [37] were evaluated using two different Λ_b → Λ form factors; the branching ratios obtained with the form factors calculated in the pole model (PM) agree with our PQCD predictions in the S1 and S4 mixing schemes. The two results from GFA [36,68] are basically consistent with each other and close to our values in S2. It is also observed from Table III that most of the approaches give predictions of the same order of magnitude for the decay rates of the two modes, except for Ref. [35], in which the branching ratio of the η′ mode is larger than that of the η mode by one order of magnitude due to an additional enhancement factor for the η′ mode.
Unlike the branching ratios, the up-down and CP asymmetries of the concerned decays have received little theoretical and experimental attention. The estimates based on QCDF give α(Λ_b → Λη) = 0.24^{+0.19}_{−0.12} and α(Λ_b → Λη′) = 0.99^{+0.00}_{−0.03} [35], while the LFQM calculations yield α(Λ_b → Λη) = α(Λ_b → Λη′) = −1 [34]. The available theoretical predictions for the up-down asymmetries thus differ from each other, even in sign. Hence, an accurate measurement of the up-down asymmetry would enable us to discriminate between the different models. For the direct CP violation, nearly all current theoretical predictions are small, below 10% in magnitude.
From the experimental data shown in Table III, it is clear that LHCb's measurement of B(Λ_b → Λη) [33] is generally larger than the theoretical expectations. Although the prediction of Ref. [37] is in accordance with the central value of the data, its value of B(Λ_b → Λη′) exceeds the present experimental bound by a factor of 3. The PQCD results for B(Λ_b → Λη′) based on the S1 and S3 mixing schemes are also large compared to the experimental upper limit; in particular, the latter is larger by a factor of 2, which indicates that the mixing angles φ_G and/or φ_C may be overestimated. Of course, the measurement was performed in 2015 and its experimental error is quite large. Moreover, there are still no data on the up-down and direct CP asymmetries at the moment. We look forward to further experimental efforts to improve the accuracy of the relevant measurements.
As a by-product, we have predicted the branching ratio and up-down asymmetry of the Λ_b → Λη_c mode using the values of the charm amplitudes in Table I; the first and second sets of error bars are due to the shape parameter ω_0 = 0.40 ± 0.04 GeV in the Λ_b baryon LCDAs and to the hard scale t varying from 0.8t to 1.2t, respectively. Our branching ratio is much larger than the values of (2.47^{+0.33+0.42+0.67}_{−0.19−0.47−0.23}) × 10^−4 in QCDF [35] and (1.5 ± 0.9) × 10^−4 in GFA [69]. This is not surprising, because the PQCD prediction for the branching ratio of the J/ψ mode presented in [55] is also generally larger than the corresponding values from [35,69], due to the significant nonfactorizable contributions. In any case, all of these theoretical predictions are of order 10^−4, which is accessible to the LHCb experiment. The predicted up-down asymmetry is nearly 100% and negative, consistent with the value of −0.99 ± 0.00 obtained in [35]. Since both the tree and penguin amplitudes have no weak phase up to order λ², the direct CP violation of the Λ_b → Λη_c process is predicted to be zero.
IV. CONCLUSION
Decays of b-hadrons to two-body final states containing an η or η′ meson are of great phenomenological importance. These processes can provide useful information about the η − η′ mixing and the structure of the η and η′ mesons, which remains a long-standing question in the literature. In this work, we have carried out a systematic study of the penguin-dominated Λ_b → Λη(′) decays in the PQCD approach. The calculations are performed in the quark-flavor mixing basis, in which we first give the PQCD predictions for the nonstrange, strange, and charm amplitudes, including the various topological contributions. It is observed that the nonstrange amplitude is dominated by the T- and C-type diagrams; as the interference between T and C is destructive, the contributions from the exchange diagrams, such as B and E, are in fact important and nonnegligible. The strange amplitude is governed by T and E, which are of similar size. The charm amplitude is dominated by the T-type diagrams and is larger than the (non)strange ones by one order of magnitude. Furthermore, the contributions from the B′-type exchange diagrams are predicted to be negligibly small for all three amplitudes.
Utilizing the four currently available mixing schemes for the η − η′ system, namely the η − η′, η − η′ − G, η − η′ − G − η_c, and η − η′ − η_c mixing, denoted respectively as S1, S2, S3, and S4, we have evaluated the branching ratios, up-down and direct CP asymmetries of the Λ_b → Λη(′) decays and investigated the scheme dependence of our theoretical predictions. It is found that the results for the η mode are less sensitive to the mixing schemes, implying small gluonic and η_c components in the η meson. However, the various schemes lead to quite different predictions for both the branching ratio and the up-down asymmetry of the channel involving η′. For instance, B(Λ_b → Λη′) increases by a factor of 3 from S2 to S3, while α(Λ_b → Λη′) varies from −0.046 in S1 to −0.847 in S3 and even flips sign in S4. The large discrepancy among these solutions suggests that the η′ mode is very useful for discriminating between the various mixing schemes.
We have considered theoretical uncertainties arising from the shape parameter ω_0, the Gegenbauer moment a_2, the mixing angles φ (φ_G), and the hard scale t. The PQCD calculations show a nontrivial dependence on the nonperturbative hadronic parameters, which are poorly determined at present. In particular, B(Λ_b → Λη′) is extremely sensitive to the variation of φ_G and is thus a good candidate for constraining the mixing parameters, once it is measured with sufficient accuracy. The scale dependence also gives a large uncertainty in the branching ratios, which can be reduced only when the next-to-leading-order contributions in the PQCD approach are known.
We have also compared our results with the predictions of other theoretical approaches as well as with the existing experimental data. The various model estimates of the branching ratios span a fairly wide range, from 10^−8 to 10^−5. Our branching ratios in the S2 scheme are consistent with the GFA calculations, while the S4 ones are close to the PM results. The predicted central values of B(Λ_b → Λη) in the various schemes are generally lower than the LHCb measurement. The inclusion of OZI-violating effects can enhance B(Λ_b → Λη) by a factor of 2.5 and improve the agreement with the current data. The PQCD results for B(Λ_b → Λη′) based on the S1 and S3 mixing schemes exceed the present experimental bound; note, however, that the measured branching ratios also have large uncertainties. In general, the values in S4 appear to be preferred by the current data among these solutions. For the up-down asymmetries, there are considerable deviations among the PQCD, QCDF, and LFQM estimates, which should be clarified in the future. On the other hand, since the tree contributions suffer from color suppression and CKM suppression, the obtained direct CP asymmetries are below 10%, in line with the numbers from QCDF and GFA. For the moment, there is experimental information neither on the up-down asymmetries nor on the direct CP asymmetries. It will be interesting to see updated measurements of the two decay modes.
Finally, we have also explored the Λ_b → Λη_c decay. The estimated branching ratio is at the 10^−4 level, with an up-down asymmetry close to −1, which may guide future measurements.
Following the conventions of Ref. [63], we provide some details about the factorization formulas in Eq. (17) for the nonstrange amplitude, which were not given before. As the strange and charm processes have the same decay topologies as the Λ_b → Λφ and Λ_b → ΛJ/ψ modes, respectively, the relevant formulas can be found in our previous work [55,56]. The combinations of the Wilson coefficients a^σ_{Rij} and the b-dependent quantities [Db]_{Rij} and Ω_{Rij} are gathered in Table IV and Table V, respectively, where the auxiliary functions h_{1,2,3} and the Bessel function K_0 can be found in [63].
The hard scale t for each diagram is chosen as the maximal virtuality of the internal particles, including the factorization scales, in the hard amplitude; the expressions of t_{A,B,C,D} are listed in Table VI. The factorization scales w, w′, and w_q are defined in terms of the impact parameters b, b′, and b_q, with the other b_l obtained by permutation. Here, we present only the results of [Db]_{Rij}, Ω_{Rij}, and t_{A,B,C,D} for the C, B, and E_q diagrams; the remaining ones are the same as those for Λ_b → Λφ and can be found in [56].
In Table VII, we give the expressions of H^σ_{Rij} for a representative set of diagrams of each type shown in Fig. 1; those for the others can be derived in an analogous way. The expressions of a^{LL} and a^{SP} for the Λ_b → Λη_q decay are also listed there. For convenience, we have extracted an overall coefficient 8, which is absorbed into the prefactor in Eq. (17).
Improvement of a near wake model for trailing vorticity
A near wake model, originally proposed by Beddoes, is further developed. The purpose of the model is to account for the radially dependent time constants of the fast aerodynamic response and to provide a tip loss correction. It is based on lifting line theory and models the downwash due to roughly the first 90 degrees of rotation. This restriction of the model to the near wake allows for using a computationally efficient indicial function algorithm. The aim of this study is to improve the accuracy of the downwash close to the root and tip of the blade and to decrease the sensitivity of the model to temporal discretization, both regarding numerical stability and quality of the results. The modified near wake model is coupled to an aerodynamics model, which consists of a blade element momentum model with dynamic inflow for the far wake and a 2D shed vorticity model that simulates the unsteady buildup of both lift and circulation in the attached flow region. The near wake model is validated against the test case of a finite wing with constant elliptical bound circulation. An unsteady simulation of the NREL 5 MW rotor shows the functionality of the coupled model.
Introduction
The dynamic effects of trailed vorticity behind a wind turbine blade on the induced velocities at the blade are considered with a focus on the influence on the aeroelastic behavior. In many state of the art codes for wind turbine aeroelasticity, the unsteady aerodynamics are computed using a blade element momentum (BEM) model with several additions, such as tip loss correction and dynamic stall model, cf. Madsen et al. [1]. In a BEM model, the momentum equation is solved at different radial sections of the rotor independently, ensuring that the induced velocities are in balance with the forces at the blades. Unsteady effects are usually also modeled for each section independently, such as dynamic inflow, which takes into account that the turbine wake development delays the response to changes in wind speed or pitch, or unsteady airfoil aerodynamics, which model the faster time lags in the change of lift due to airfoil motion, turbulence and flow separation.
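Dynamic inflow, mentioned above, can be illustrated minimally as a first-order lag on the induced velocity; both the form and the time constant here are generic placeholders, not the specific model of [1].

```python
import math

# Minimal illustration of dynamic inflow as a first-order lag: the induced
# velocity w relaxes toward its quasi-steady value w_qs with time constant
# tau.  Both the form and tau are generic, not a specific published model;
# actual implementations use radius-dependent, multi-time-constant filters.

def dynamic_inflow_step(w_old, w_qs, dt, tau):
    return w_qs + (w_old - w_qs) * math.exp(-dt / tau)
```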
In reality, the flow at different radial sections is coupled through the wake, which can be modeled as trailed and shed vorticity. According to Leishman [2], the effects of the shed wake are mostly local, and the overall aerodynamics along the blade depend mainly on the trailed wake, where the most important contribution comes from the tip vortex. The influence of the trailed vortices is often computed with a BEM model combined with a tip loss model to account for the increased induction at the tip due to the finite number of blades. Due to the assumption of radial independence in the BEM formulation, and because the dynamic inflow time constants are a function only of radius, the present modeling does not accurately predict the time-varying trailed vorticity along the blade caused by turbulence or blade vibrations.
Beddoes [3] developed a near wake model that accounts for the unsteady trailed vorticity. It is based on a lifting line model, which is restricted to the first quarter revolution behind a single blade. This restriction makes it possible to use exponential functions to model the decreasing induction from trailed vortex filaments as the blade moves away from them. Madsen and Rasmussen [6] implemented the model for wind turbine aeroelasticity and demonstrated the model's basic capability to compute the aerodynamic damping as a function of mode shape and not only as a function of radius. The original model did not include downwind convection of the vortex filaments away from the rotor plane, but Wang and Coton [4] included the influence of the tilt angle of the trailed vortices on the axial induction. Andersen [5] added an optimization method for the exponential functions to reduce errors in the approximation of the induction, compared to the exact evaluation of the Biot-Savart law.
Madsen and Rasmussen [6] suggested coupling the near wake model with a BEM model for the far wake. This BEM model would not include a tip loss correction, because that is implicitly included in the near wake model. The thrust coefficient, on which the computation of the induction in a BEM model is based, is reduced by a coupling factor. In their work, the bound vorticity for the near wake model was determined using Joukowski's law, which states that the steady circulation is proportional to the steady lift.
In this work, the core algorithm of the near wake model has been altered to ensure that the downwash due to trailed vortex elements is calculated with the same precision independent of their size and distance from the blade section. The influence of the spatial discretization of the blade on the results is also investigated, using three different point distributions. An iterative solution scheme with a relaxation factor is introduced to ensure the stable behavior of the near wake model, especially as part of a coupled aerodynamics model. Joukowski's law, the proportionality of lift and circulation, does not hold for unsteady calculations; therefore, the unsteady circulation is determined separately in the attached flow region, analogous to the unsteady lift in the dynamic stall model by Hansen et al. [7].
With these additions, the modeling of the development of the downwash on a wing with a constant elliptical circulation becomes independent of the time step and converges consistently to the analytical steady state solution. It is shown that the altered trailing algorithm is also faster than the original version. To show the capabilities of the coupled model to handle the unsteady case, it has been used to simulate the aerodynamics of the NREL 5 MW reference turbine. The steady induction agrees well with results from a code comparison [1]. This paper is structured as follows: First, the near wake model and its modifications are described. Then the coupled model is introduced. Finally, results from both near wake model and coupled model are presented.
Description of the near wake model
In this section the near wake model is introduced, starting with the original model by Beddoes and followed by a modified version of the vortex trailing algorithm, which is less time step dependent. Furthermore, different point distribution methods and a stabilization of the model through an iterative computation of the downwash are presented. In the last part of the section, a brief outline of the coupling to a far wake model is given.
Original model by Beddoes, modified by Madsen and Rasmussen
Based on the Biot-Savart law, the induced downwash of a vortex filament with length ds and vortex strength ∆Γ, which is trailed from radius r at a point on the blade and stays in the rotor plane, is given by Equation (1), where h is the distance between the vortex trailing point and the calculation point where the downwash is evaluated. The value of h is negative when the vortex is closer to the blade root than the section. The angle β = Ωt determines the position of the infinitesimal vortex element on the circular arc, i.e., the angle the rotor has rotated with the constant angular velocity Ω since the vortex filament was trailed from the blade. The downwash from the circular arc could be evaluated by numerically integrating Equation (1) from 0 to 90 degrees. To avoid these time consuming integrals, Beddoes derived an equation, Equation (2), that gives the decrease of the induction by a vortex filament dw compared to its original induction dw_0, when it has just left the lifting line; this equation is approximated using two exponential functions, where Φ is a geometric factor depending on the positions of the vortex trailing point and the calculation point. Madsen and Rasmussen [6] replaced the term 1 + h/(2r) by 0.75 for cases where h/(2r) is smaller than -0.25, which increases the accuracy of the exponential approximation when the vortex trailing point lies further inboard than the calculation point for the induced downwash. This modification is used in the calculations presented here. The computational effort can be reduced dramatically by using the exponential functions. The downwash W can then be split into two contributions [3], where the index i denotes the time step and X_w and Y_w are state variables that represent the slowly and quickly decreasing components of the induction from the near wake according to Equation (2). The second terms on the right-hand side of Equation (5) contain the induction due to the new finite-length vortex filament D_w, which has been trailed during the time step. This induction D_w is computed using the Biot-Savart law (Equation (6)), assuming the newest element is a straight vortex with length ∆s, perpendicular to the lifting line [3], where ∆Γ is the strength of the vortex. The contributions from X_w and Y_w to the induction D_w from the newest element depend on its length, because Y_w decreases four times faster with increasing angle β than X_w, cf. Figure 2. In Equation (5), D_w is therefore multiplied not only by 1.359 and −0.359, but also by the respective exponential factors corresponding to the middle of the element, to take this length dependence into account. As shown in Figure 2, this gives not only the desired approximation of the contributions from X_w and Y_w, but also leads to an underestimation of the induction due to the newest element, because the exponential factors do not add up to 1 for a finite element length. The error due to this underestimation grows with increasing time step and decreasing distance between calculation point and vortex trailing point.
New formulation of the trailing algorithm
The purpose of the modification explained in the following is to ensure a time step independent behavior of the trailing algorithm in the case of a prescribed, constant circulation. Because the circulation is constant, this time step independence means that trailed vortex elements are evaluated correctly independent of their size, which varies with the time step. The decrease of the induction from the old part of the wake, the first terms on the right-hand side of Equation (5), is already time step independent, because e^x e^x = e^{2x}. Therefore, to make the whole algorithm time step independent, both the value of the initial downwash D_w and the way it is split into X_w and Y_w have to be corrected.

Figure 2. Illustration of the exponential trailing functions. The black mark indicates the underestimation of the induction due to the multiplication of D_w by the exponential factors in Equation (5) for an element with length ∆β.

Instead of calculating a D_w for the whole first time step, the ∆s = ∆βr in Equation (6) is replaced by ∆s = β̄r, where β̄ is a constant, very small angle for which the induction is approximately constant along the vortex filament: dw/dw_0 ≈ 1 for β ∈ [0; β̄]. In the simulations presented here, β̄ = 10^{−10} rad has been used. The induction D_w for the first time step ∆t can then be approximated using the average value <dw/dw_0> of dw/dw_0, given in Equation (2), over the whole length of the newest element. This average can be obtained by integration. For small values of β̄, which can be chosen independently of the time step, this is a good approximation of the downwash induced by the first vortex filament. The error due to calculating D_w based on a straight vortex filament is replaced by the error caused by using Beddoes' functions. As opposed to the previous way of obtaining D_w, it can now be consistently split into X_w and Y_w, which leads to a modified version of Equation (5). The implementation of the new algorithm shows another advantage: in addition to Φ and h, the factors D_X and D_Y for the induction from the new vortex elements are also constant for each combination of calculation point and vortex trailing point. Therefore, they can be computed once, at the initialization of the model, which was not possible in the original algorithm, as ∆s in Equation (6) for D_w is not a constant. Furthermore, only two instead of four evaluations of exponential functions are necessary per time step.
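A compact sketch of this update for one pair of calculation point and vortex trailing point is given below; the exponent convention exp(−β/Φ) for the slow state and exp(−4β/Φ) for the fast state is an assumption, since the exact exponential approximation of Equation (2) is not reproduced in this excerpt.

```python
import numpy as np

# Sketch of the modified indicial update for one pair of calculation point
# and vortex trailing point.  The decay rates 1/Phi and 4/Phi and the split
# 1.359 / -0.359 follow the two-exponential approximation described above;
# the exact exponent convention of the (elided) trailing functions is assumed.

def precompute_weights(Phi, dbeta):
    """Element-averaged trailing functions for the newest filament D_X, D_Y."""
    ax = np.expm1(-dbeta / Phi) / (-dbeta / Phi)          # <exp(-beta/Phi)>
    ay = np.expm1(-4.0 * dbeta / Phi) / (-4.0 * dbeta / Phi)
    return 1.359 * ax, -0.359 * ay

def step(Xw, Yw, dw0, Phi, dbeta, DX, DY):
    """Advance the slow (Xw) and fast (Yw) downwash states by one time step."""
    Xw = Xw * np.exp(-dbeta / Phi) + DX * dw0             # decay old wake + add new
    Yw = Yw * np.exp(-4.0 * dbeta / Phi) + DY * dw0
    return Xw, Yw, Xw + Yw                                # total downwash W
```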
Influence of helical pitch angle, Wang and Coton
So far it has been assumed, following Beddoes, that the vortices are trailed in the rotor plane. In reality, the vortices move in the helical wake. Wang and Coton [2] took this into account by including the pitch angle ϕ of the vortex path in the calculation of the axial induction. For an inflow perpendicular to the rotor plane and assuming a constant helical pitch, that angle is defined in terms of the free stream velocity V∞, the axial and tangential inductions w_a and w_t (assumed constant in the annular tube), and the rotational speed Ω. The axial part of D_w then follows. This calculation of D_{w,a} does not account for the increasing distance of the vortex filaments from the blade sections as they move downwind.
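Since the defining equation is not reproduced above, the following sketch shows one plausible form of the pitch angle built from the named quantities; the exact expression in Wang and Coton's formulation may differ.

```python
import math

# Assumed form of the helical pitch angle at radius r (the defining equation
# is not reproduced in this excerpt): ratio of the axial to the tangential
# velocity seen by the trailed vortex, with induction constant in the annulus.

def pitch_angle(V_inf, w_a, w_t, Omega, r):
    return math.atan2(V_inf - w_a, Omega * r + w_t)
```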
Discretization of the blade
The algorithm used in this paper distinguishes between two kinds of points: calculation points, at which the induced downwash is evaluated, and vortex trailing points, from which the vortex filaments are trailed.
Numerical stability
The near wake model can become numerically unstable if the downwash induced by the vortex filaments trailed in one time step is so large that the predicted induction starts to diverge. The downwash then leads to a negative lift of larger absolute value than the positive lift in the previous time step, and the resulting trailed filaments, with a vortex strength of the opposite sign, induce an even larger negative downwash. If this is the case, the circulation and downwash can reach unphysical values within a few time steps. The instability occurs especially for larger time steps and close positions of calculation point and vortex trailing point, for example close to the tip when a cosine or full cosine distribution is used; then the influence of the wake of the previous time step, which would stabilize the algorithm, decays quickly. The 2D shed vorticity effects on lift and circulation also stabilize the model. The problem of instability can be solved by running an iterative version of the NWM with a relaxation factor r, where j denotes the iteration. This iterative process is used in the following coupled model:

Algorithm 1. Coupled model iteration loop (as recovered from the listing):
    AOA = AOA(W_iter), vrel = vrel(W_iter)
    3: calculate quasisteady lift and circulation
    4: apply unsteady airfoil aerodynamics model to lift and circulation
    5: call BEM(CT * k_FW)
    6: call NWM(unsteady circulation)
    7: W_iter = W_lastiter_NW * r + W_iter_NW * (1 - r) + W_iter_FW
    8: if abs(W_iter - W_lastiter) < ε then convergence = true
    9: end while

2.6. Coupling to a far wake model

Because the near wake model only takes a fraction of one rotor revolution into account, it has to be coupled with a far wake model that calculates the induction from the missing part of the wake. A BEM model is used for this purpose. To account for the fraction of the induction that is computed by the near wake model, the thrust coefficient from the BEM model is multiplied by the coupling factor k_FW [5,6], which depends on the operating conditions of the turbine. It is defined as the ratio k_FW = C_{T,FW} / C_{T,BEM}, where C_{T,BEM} denotes the thrust coefficient obtained from the momentum balance of induced velocities without the near wake model and C_{T,FW} is the reduced thrust coefficient used when the far wake BEM is coupled to the near wake model. The coupling factors used in this work are the result of simulations in which the integral thrust coefficient of the coupled model has been matched with a BEM model with tip loss correction for the investigated operating conditions, as suggested by Andersen [5]. This matching ensures that steady state results from the coupled model agree with the classical BEM model for different combinations of wind speed, rotational speed and pitch angle. The structure of the coupled model is shown in Algorithm 1. It includes the unsteady effects on circulation and on lift, the NWM, and the BEM model with reduced thrust coefficient for the far wake.
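A minimal sketch of the resulting fixed-point loop is given below; bem() and nwm() are hypothetical stand-ins for the far wake and near wake model evaluations.

```python
# Sketch of the relaxed near wake iteration (step 7 of Algorithm 1) with the
# far wake thrust coefficient reduced by k_FW.  bem() and nwm() are
# hypothetical stand-ins for the BEM and near wake model evaluations.

def coupled_downwash(bem, nwm, CT, k_FW, r=0.5, tol=1e-6, max_iter=100):
    W_nw, W = 0.0, 0.0
    W_new = W
    for _ in range(max_iter):
        W_fw = bem(CT * k_FW)                 # far wake induction (reduced C_T)
        W_nw = r * W_nw + (1.0 - r) * nwm(W)  # relaxed near wake update
        W_new = W_nw + W_fw                   # total downwash for next iterate
        if abs(W_new - W) < tol:
            break
        W = W_new
    return W_new
```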
Results
The modifications of the NWM are investigated for a wing with elliptical circulation. Simulations of the NREL 5 MW reference turbine [9] show the capabilities of the coupled model.
3.1. Near wake model: elliptical wing with prescribed circulation

A wing with a prescribed elliptical circulation has been used to investigate the influence of the spatial and temporal discretization. It is modeled as a 10 m long section at the end of a 10 km long blade to approximate a parallel free stream, similar to the case presented by Madsen and Rasmussen [6]. The prescribed elliptical circulation at radius r results in a constant downwash of 1.5 m/s along the wing according to lifting line theory. The blade rotates at 0.03359 rpm, which is equivalent to a free stream velocity at the wing of about 35 m/s. To investigate the effect of the different spatial discretizations introduced in Section 2.4, Figure 5 shows the downwash of the near wake model in steady state for a 175 m long wake. The blade is discretized with 40 (left plot) or 80 (right plot) calculation points, corresponding to 41 and 81 vortex trailing points. While all spatial distributions perform well in the middle sections, the equidistant and cosine distributions lack accuracy close to the edges of the wing; they even lead to negative downwash in the outer sections of the blade.
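For reference, the prescribed circulation and the implied analytical downwash can be set up as follows, assuming the textbook lifting-line result w = Γ0/(2b) for an elliptic distribution over a span b.

```python
import numpy as np

# Prescribed elliptical circulation over the 10 m wing.  Assuming the
# textbook lifting-line result w = Gamma0 / (2 b) for an elliptic
# distribution, the quoted 1.5 m/s downwash implies Gamma0 = 30 m^2/s.

b, w_target = 10.0, 1.5
Gamma0 = 2.0 * b * w_target                       # 30 m^2/s
y = np.linspace(-b / 2.0, b / 2.0, 41)            # e.g. 41 vortex trailing points
Gamma = Gamma0 * np.sqrt(1.0 - (2.0 * y / b)**2)  # elliptic bound circulation
```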
To show the time step independence of the proposed method for obtaining the downwash of the newest trailed filament, described in Section 2.2, Figure 6 shows the buildup of the downwash induced by the wake trailed from the wing with prescribed elliptical circulation. At the beginning of the simulation, t = 0, there is no trailed wake and therefore no downwash. The curves of each color represent the distribution of the downwash after t = 0.01 s, t = 0.05 s, and t = 5 s. A full cosine distribution with 40 sections has been used to discretize the wing. The new formulation proposed in Section 2.2 is compared to the original algorithm, Equation (5).
Coupled model: NREL 5 MW reference turbine blade
Steady results of the coupled model for the NREL 5 MW reference turbine [9] are shown in Figure 7. The wind speed in the computation is 8 m/s, perpendicular to the rotor plane, and the rotational speed is 9.21 rpm. The blade pitch angle is 0°. Over the first 10 meters of radius, where there are no aerodynamic profiles, all of the induction due to the root vortex is accounted for by the near wake model. Along large parts of the blade, the relatively constant induction from the BEM model dominates, but closer to the tip the ability of the near wake model to capture the tip vortex becomes apparent. The combined axial induction from near and far wake agrees well with the code comparison results of Madsen et al. [1].
The thrust coefficients due to the near wake, the far wake, and the complete wake for an unsteady simulation are shown in Figure 8. The wind speed and rotational speed are identical to those in the steady simulation. For simplicity, the dynamic inflow model used for the far wake has a dimensionless time constant of 1, independent of the radial position. After 100 seconds, the blades start to perform synchronous prescribed vibrations with an amplitude of 0.25 m and a frequency of 1 Hz perpendicular to the rotor plane. The shape of these vibrations is assumed to be the mode shape of a clamped-free prismatic beam. It can be seen that the unsteady aerodynamic effects due to these vibrations are mostly captured by the near wake model. At 110 s, a pitch step of 4° within 0.5 seconds is performed, where the coupled model captures both the fast and the slow parts of the response.
Conclusions
The sensitivity of a near wake model, originally proposed by Beddoes, to the spatial discretization has been investigated. It was found that the case of a prescribed elliptic circulation could be modeled with better accuracy using a distribution based on equi-angle increments between calculation points and vortex trailing points, as opposed to an equidistant distribution.
To overcome the high sensitivity of the model to the time step, a new formulation of the trailing wake algorithm based on integration of the trailing functions has been developed. This formulation makes the calculation of the downwash time step independent for a constant trailed vorticity, which means that trailed vortex elements are evaluated with the same accuracy, independent of their size. Therefore, the modified algorithm can be used with bigger time steps. In addition, each time step is computed faster because larger parts of the algorithm can be computed once, at the initialization of the program. The model has been stabilized by introducing an iterative solution of the downwash from the near wake at each time step. The near wake model is coupled to the traditional BEM model of the far wake induction by sharing the total induction through a coupling factor. The coupled model includes unsteady shed vorticity effects for both lift and circulation in the region of attached flow and a blade element momentum model for the far wake. The steady and unsteady behavior of the model has been illustrated for the NREL 5 MW reference turbine. The coupled model agrees well with established models with regard to the distribution of axial induction and is capable of modeling the unsteady aerodynamic effects at different time scales.
Fast Pixel-Matching for Video Object Segmentation
Video object segmentation, which aims to segment the foreground objects given the annotation of the first frame, has been attracting increasing attention. Many state-of-the-art approaches have achieved great performance by relying on online model updating or mask-propagation techniques. However, most online models require high computational cost due to model fine-tuning during inference, and most mask-propagation based models are faster but have relatively low performance due to their failure to adapt to object appearance variation. In this paper, we aim to design a new model that strikes a good balance between speed and performance. We propose a model, called NPMCA-net, which directly localizes foreground objects based on mask propagation and a non-local technique, by matching pixels in reference and target frames. Since we bring in information from both the first and the previous frames, our network is robust to large object appearance variation and can better adapt to occlusions. Extensive experiments show that our approach achieves new state-of-the-art performance with fast speed at the same time (86.5% IoU on DAVIS-2016 and 72.2% IoU on DAVIS-2017, with a speed of 0.11 s per frame) under the same-level comparison. Source code is available at https://github.com/siyueyu/NPMCA-net.
Introduction
Video object segmentation (VOS) has been attracting increasing attention in recent years due to its significance in video understanding. The aim of this task is to track the target object from the first frame to the end of the video sequence and to segment all the pixels belonging to the tracked target object, which faces the problems of object occlusion and appearance variation.
To tackle these problems, some studies adopted an online-training mechanism [1,2,3,4]. Given the ground-truth mask of the first frame in a test video, they used it to fine-tune the model to capture the object appearance. In the subsequent inference process, they used the predicted masks to further fine-tune their models. With fine-tuning, the models can adapt to object appearance change, though the online learning process is time-consuming and inefficient.
Recently, boosted by the rapid development of mask-propagation based VOS models [5,6,7], a better balance between speed and accuracy has been reached. The core idea of these methods is to use the estimated mask of the previous frame to guide the model in making the segmentation prediction for the current frame. For example, Perazzi et al. [4] proposed to use the previously predicted mask as guidance for the network to learn mask prediction, with a combination of offline and online training: they first used static image datasets for offline training and then used the first frame of a test video sequence to fine-tune the model. Oh et al. [5] proposed a Siamese encoder-decoder network guided by the previous mask to produce the target object probability map. Johnander et al. [6] offered an appearance module that utilized a class-conditional mixture of Gaussians to model the foreground object appearance for mask prediction. Sun et al. [8] considered both the mask of the previous frame and the optical flow to predict the target mask. These approaches are usually faster than online-training based VOS methods, but they are less adaptive to object appearance variation.
Both online-training and mask-propagation based VOS models have limitations, so a balance between segmentation accuracy and running speed is crucial for VOS. Early mask-propagation based networks use the current frame with the previous estimated mask [4], or additionally the first frame with its provided mask as reference information [5], to directly predict the segmentation mask of the current frame. Additionally, Sun et al. [8] used optical flow to build the relationship between the previous and the current frames. Different from these methods, we design an attention-based pixel-matching module to find the pixels belonging to the target object in the current frame based on the feature similarity between the current frame and the reference frames. In order to capture the object feature without interference from the background, we mask it out and discard the background pixels. However, the target object varies frame by frame, so this process alone would suffer from large object appearance variation. Therefore, we use both the first frame and the previous frame as references to provide object information for our pixel-matching module.
With the target object's appearance information, we need to determine the target object location, in terms of a mask, in the current frame. We design our model based on mask propagation to keep efficiency, where the non-local structure [9] is adopted to generate the object mask using the obtained target object appearance information. Specifically, we design a video object segmentation model called Non-local Pixel-Matching network with Channel Attention (NPMCA-net), which includes a newly designed pixel-matching module and a channel attention module. The pixel-matching module is designed to match pixels between the target frame and the reference frames with the given ground-truth or estimated masks. The channel attention module is used to augment the matched feature map to achieve better decoding. Extensive experiments have shown that our network can achieve a new state-of-the-art performance without loss of efficiency. To better display the accuracy and speed trade-off, we plot our IoU score versus speed in Fig. 1. Our NPMCA-net can achieve both high performance and high efficiency at the same time. Our main contributions are summarized as follows:
• We propose a video object segmentation model (NPMCA-net) that strikes a good balance between accuracy and running speed. The model does not rely on an online fine-tuning technique, so as to lower the computational demands, yet it can adaptively catch the target object's appearance variation by using both image and predicted mask information from the previous frame.
• Our proposed non-local pixel-matching module can effectively predict the target object mask by aggregating multi-frame information. Moreover, the proposed model also provides high-level interpretability by visualizing the obtained feature maps.
Related Works
Video Object Segmentation. Different from static image tasks [10,11,12,13,14], VOS only considers segmenting the moving object, without class reference or prediction. VOS research can be divided into two main categories: unsupervised methods and semi-supervised methods. Unsupervised methods, such as [15,16,17], try to segment the foreground objects without any given labels. Semi-supervised methods aim to segment the objects in a video with a given ground-truth mask in the first frame. For example, some approaches [1,2,3,4] used online fine-tuning to make the model robust to object appearance variation. Some studies [5,6,7] based on mask propagation relied solely on offline training for this task, making the models more efficient. Others [18,19] took advantage of Mask R-CNN [20] to predict the corresponding box of each object and then conduct segmentation. Additionally, Sun et al. [21] used reinforcement learning to choose better proposals for the target object bounding box and then conduct segmentation. However, offline methods generally perform worse than online ones. In this paper, we focus on mask-propagation based methods under semi-supervision and try to design a fast and high-performance model.
Embedding Based Networks. Embedding based networks use an embedding vector to represent each pixel. They have been successfully employed in many vision tasks [22,23,24]. Many successful VOS approaches are also based on embedding. PML [25] employed an embedding vector to represent each pixel, and the embedding vectors in the reference frame were then matched with those of the target frame using a triplet loss. VideoMatch [26] proposed a matching based algorithm for VOS, which learned to match extracted features to a provided template without memorizing the appearance of target objects. Besides, Ci et al. [27] attempted to predict the foreground object by learning location-sensitive embeddings. FEELVOS [28] proposed a semantic pixel-wise embedding with a global and local matching mechanism for this task, and Yoon et al. [29] utilized features from different depth layers, through combinations of convolution, max pooling, and rectified linear units, to distinguish the target area from the background. However, most embedding based networks need guidance information to tell which pixels belong to the foreground and which belong to the background. In this paper, we directly match the features with our proposed mechanism (directly computing the similarity of pixels) without the separation of positive and negative pools.
Cosegmentation. Some methods utilize cosegmentation to discover video objects. VODC [30] was proposed to distinguish which frames contained the target object, and segmentation was then conducted on these corresponding frames. They designed a spatio-temporal auto-context model to obtain a superpixel label for each frame, and a multiple-instance boosting algorithm with spatial reasoning was then deployed to synchronously detect whether a frame contained the target object and predict the segmentation map. Besides, Wang et al. [31] proposed an energy optimization framework which combined intra-frame saliency, inter-frame consistency, and across-video similarity. They used saliency and spatio-temporal SIFT flow to detect initial pixels for the common object; the spatio-temporal SIFT was then used to refine the coarse object regions generated in the prior step. Additionally, Li et al. [32] designed a robust ensemble clustering scheme to predict object-like proposals for the unsupervised cosegmentation task. Once the proposals were generated, unary and pairwise energy potentials were minimized with the α-expansion to train the model. Although these methods have achieved satisfactory results, they are designed to detect the common object among different video sequences. In our task, we aim to track the same object marked in the first frame of a specific video sequence.
Channel Attention Networks. Channel attention modules have become increasingly popular in different computer vision tasks. A multi-channel attention selection mechanism was proposed in SelectionGAN [33] to refine the coarsely generated output on a target image. A residual channel attention network was designed in RCAN [34] to learn the interdependencies of features among channels for image super-resolution. SCA-CNN [35] leveraged channel attention to select semantic attributes corresponding to the sentence context. Additionally, Qiu et al. [36] proposed learning multiple attention maps to obtain hierarchical context information for object detection. All the above methods show that the attention mechanism can help models learn better representations of the corresponding targets. Therefore, we use the attention module to help our network learn a better feature representation of the target object to be tracked and segmented.
Non-local Networks. The non-local operation is mainly treated as a self-attention mechanism to compute the relationships of the pixels through a global view in the network. Wang et al. [9] proposed a non-local operation for capturing long-range dependencies in video classification and static image recognition. DANet [37] plugged the non-local operation, as a position attention module and a channel attention module, into scene segmentation. In this paper, we introduce the non-local mechanism as a pixel-matching operation to match target pixels and reference pixels, realizing the localization of the target object in the target frame.
Method
Our motivation is to make a VOS model adaptive to object appearance variation and occlusion while keeping a high efficiency at the same time. Therefore, we design a new mechanism that matches the pixels in the target frame and the reference frames (the first and previous frames) to acquire the predicted mask for the target frame.
Channel Attention based Non-Local Pixel-Matching Mechanism
Video Object Segmentation Architecture
Given a video with an annotated mask for the first frame, we need to segment the remaining frames according to the given mask. In VOS, the object appearance often changes frame by frame. Thus, it is not sufficient to only consider the object appearance in the first frame, especially when large object appearance variation occurs in the middle of the video.
As illustrated in Fig. 2, we provide three different kinds of data to the three encoders: the target frame encoder takes the current frame with the estimated labels of the previous frame as a 4-channel input [5]; two parameter-shared reference frame encoders take the first frame and the previous frame as input, respectively. Note that when providing data for the reference frame encoders, background pixels from the first frame and the previous frame are removed using the ground-truth mask (first frame) or the estimated mask (previous frame). Whilst for the target frame encoder, background pixels are not masked out, since the masks for the current and previous frames are different. Then, the feature maps of the reference and target frames are extracted by the respective encoders. In this way, we can obtain the changing object appearance information and the target frame features.
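As a concrete illustration of this input assembly, the following PyTorch-style sketch shows one plausible way to build the 4-channel target input and the masked reference inputs; all function and tensor names here are our assumptions, not the authors' released code.

import torch

def prepare_inputs(curr_frame, prev_frame, first_frame, prev_mask, first_mask):
    # Hypothetical input assembly for the three encoders.
    # curr_frame/prev_frame/first_frame: (B, 3, H, W) RGB tensors
    # prev_mask/first_mask:              (B, 1, H, W) masks in [0, 1]
    #
    # Target encoder input: current frame stacked with the previous
    # (estimated) mask -> 4-channel input; background is NOT masked out.
    target_in = torch.cat([curr_frame, prev_mask], dim=1)   # (B, 4, H, W)
    # Reference encoder inputs: background pixels removed with the
    # ground-truth (first frame) or estimated (previous frame) mask.
    ref_first = first_frame * first_mask                    # (B, 3, H, W)
    ref_prev = prev_frame * prev_mask                       # (B, 3, H, W)
    return target_in, ref_first, ref_prev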
Following that, the feature maps are input into our non-local pixel-matching module. The target feature map is matched with the feature maps from the two references using our newly designed non-local pixel-matching module to localize the target objects. In this process, the target feature is matched with the two references one by one, individually. Therefore, there are two output feature maps: one is the matched feature map of the target frame with the first frame, and the other is the matched feature map of the target frame with the previous frame. With the help of the previous frame, our network can adapt to object appearance variation, since the gap between the current and previous frames is smaller than that between the current and first frames. On the other hand, if we only considered the previous frame, then in the occlusion case the model would lose the initial object appearance for frames after the occlusion.
After that, the channel attention module is applied to strengthen the features by allocating different weights to each feature channel. Once the features are matched and enhanced, the two obtained feature maps are concatenated, and a 3 × 3 convolution layer is used to fuse them. Finally, the fused feature map is decoded by the decoder to predict and output the target object masks. Our method can be viewed as an encoder-decoder process, which can directly obtain the segmentation mask of the current frame without any post-processing.
Non-Local Pixel Matching with Channel Attention
Our NPMCA-net contains two parts: a non-local pixel-matching module (NLPMM) and a channel attention module (CM). The CM is in series with the NLPMM. The NLPMM is a non-local structure which can match pixels over the whole feature map, and the CM conducts self-attention through the channel dimension instead of the spatial dimension to strengthen the feature representation. With the combination of these two modules, our network can obtain feature representations of the foreground objects for the target frame. The details are discussed as follows.
Non-Local Pixel-Matching Module. The non-local pixel-matching module is one main module of our NPMCA-net, which is used to obtain the object appearance of the target frame and localize the target object simultaneously by matching the feature maps of the reference frames and the target frame. Different from the matching process using convolution layers [29] or using metric learning to pull in similar embedding vectors and push away different embedding vectors [28,25], we directly compute similarities between pixels. The framework of the NLPMM is illustrated in Fig. 4(a). The inputs of this module are the feature map of the reference frame and the feature map of the target frame (defined as P ∈ ℝ^{H×W×C} and Q ∈ ℝ^{H×W×C}, where H, W, C are the height, width, and channel number, respectively) extracted from the respective encoders. In order to reduce memory and improve efficiency, once the feature maps are fed into the module, a 3 × 3 convolution layer with padding is used to reduce the channel number of the input feature maps from C to C/4; the reduced feature maps are P̂ ∈ ℝ^{H×W×C/4} and Q̂ ∈ ℝ^{H×W×C/4}, respectively.

After that, the two reduced feature maps are reshaped to P̂ ∈ ℝ^{N×C/4} and Q̂ ∈ ℝ^{N×C/4}, where N = H × W. The similarity between pixels in the two feature maps is computed as

S(i, j) = P̂(i) · Q̂(j),   (1)

with S(i, j) measuring the similarity between the i-th position on the reference feature map and the j-th position on the target feature map. The similarity of each pixel is calculated in a non-local way, where all positions of the two feature maps are included. Meanwhile, it computes the relation between two spatial pixels from two temporal frames because the inputs are from a temporal sequence; therefore, it is a spatio-temporal similarity calculation. After that, instead of directly using the calculated result, we apply softmax to normalize the non-local similarity map S and obtain S′ (S′ ∈ ℝ^{N×N}, N = H × W), with its element value S′(i, j) being

S′(i, j) = exp(S(i, j)) / Σ_{i=1}^{N} exp(S(i, j)).   (2)

With Eq. (1) and Eq. (2), we can generate the relations between any two pixels in the target feature map and the reference feature map. A pixel pair with a large similarity value has a high probability of belonging to the same pixel of one foreground object. In this case, we can not only match the object appearance but also localize the object. Finally, the new matched feature map h is calculated by a matrix multiplication between the transpose of the reduced reference feature map and the non-local similarity map S′:

h = P̂^T S′.   (3)

Finally, the matched feature map is reshaped back to h ∈ ℝ^{H×W×C/4}. The coarse mask of the target frame can be obtained by the matrix multiplication between the reference feature map and the similarity map; namely, we can use Eq. (3) to obtain the pixels of foreground objects in the target frame. To more intuitively understand the matching and localization process, we show the process in Fig. 3: Fig. 3(a) shows how the similarity map is computed, and Fig. 3(b) displays how the matching process can also accomplish the localization. Therefore, we can obtain the foreground object appearance and its location at the same time. Besides, a visualization of the output of our non-local pixel-matching module is shown in Fig. 4(b). It can be found that this matching module is able to localize the object and mask the target object appearance. The highlighted part (warm color) in "matched with frame T-1" better demonstrates the matched pixels for the target object. When there is only frame 0 to be referred to, it is difficult for the network to find the pixels of the moving object in the case of large appearance variation.
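A minimal PyTorch-style sketch of Eqs. (1)-(3) is given below for concreteness; the module and variable names are ours, and the softmax axis follows the reconstruction above (normalization over the reference positions), which is an assumption.

import torch
import torch.nn as nn
import torch.nn.functional as F

class NonLocalPixelMatching(nn.Module):
    # Sketch of the non-local pixel-matching module, Eqs. (1)-(3).
    def __init__(self, channels):
        super().__init__()
        # 3x3 convolutions reduce the channel number from C to C/4.
        self.reduce_ref = nn.Conv2d(channels, channels // 4, 3, padding=1)
        self.reduce_tgt = nn.Conv2d(channels, channels // 4, 3, padding=1)

    def forward(self, feat_ref, feat_tgt):
        b, c, h, w = feat_ref.shape
        p = self.reduce_ref(feat_ref).flatten(2)   # (B, C/4, HW), reduced reference
        q = self.reduce_tgt(feat_tgt).flatten(2)   # (B, C/4, HW), reduced target
        # Eq. (1): similarity between every reference pixel i and target pixel j.
        s = torch.bmm(p.transpose(1, 2), q)        # (B, HW, HW)
        # Eq. (2): softmax over the reference positions (dim=1 is assumed).
        s = F.softmax(s, dim=1)
        # Eq. (3): matched feature map, reference features weighted by s.
        matched = torch.bmm(p, s)                  # (B, C/4, HW)
        return matched.view(b, c // 4, h, w)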
Channel Attention Module. We adopt a channel attention module after the non-local pixel-matching module to strengthen the feature representation of the foreground object. The details of our channel attention module are illustrated in Fig. 5(a). The input for this module is the output feature map of the non-local pixel-matching module, i.e., A = h with A ∈ ℝ^{H×W×C/4}. In order to compute the inter-dependencies between different channels, A is first reshaped into A ∈ ℝ^{N×C/4}, where N = H × W. Then the channel attention map X ∈ ℝ^{(C/4)×(C/4)} is computed by

G = A^T A,   (4)
X(i, j) = exp(G(i, j)) / Σ_{i=1}^{C/4} exp(G(i, j)),   (5)

where X(i, j) measures the relationship between the i-th channel and the j-th channel of A. Then matrix multiplication is applied to get the strengthened feature map; mathematically, the strengthened feature is

D = A X.   (6)

Then the strengthened feature map is reshaped back into the size of the input feature map, i.e., D ∈ ℝ^{H×W×C/4}. The final output of the channel attention module is the weighted sum of the strengthened feature map D and the module input feature map A:

E = γ D + A,   (7)

where γ ≥ 0 is a learned parameter. We do not apply any convolution layer in the channel attention module. The channel attention module is placed in series with the non-local pixel-matching module to strengthen the representation of the feature map, instead of adopting the parallel mode in [37]. Some visualizations of the output feature map of the channel attention module are displayed in Fig. 5(b).
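Continuing the sketch above (and reusing its imports), the channel attention module of Eqs. (4)-(7) might be implemented as follows; the normalization axis and the zero initialization of γ are our assumptions.

class ChannelAttention(nn.Module):
    # Sketch of the channel attention module, Eqs. (4)-(7).
    def __init__(self):
        super().__init__()
        # Learned scalar gamma (non-negative in the paper), initialized at 0.
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):                          # x: (B, C', H, W)
        b, c, h, w = x.shape
        a = x.flatten(2)                           # (B, C', HW)
        # Eqs. (4)-(5): channel Gram matrix normalized with softmax.
        attn = F.softmax(torch.bmm(a, a.transpose(1, 2)), dim=1)  # (B, C', C')
        # Eq. (6): strengthened feature map.
        d = torch.bmm(attn, a).view(b, c, h, w)
        # Eq. (7): weighted sum of strengthened and input feature maps.
        return self.gamma * d + x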
Two-stage Training Method
We adopt two-stage training for our network. Firstly, we pre-train our NPMCA-net on static images. Then, we use video object segmentation datasets to fine-tune the model. We use the IoU loss from [7,38] and the Adam [39] optimizer with randomly cropped patches of resolution 256 × 432 for both pre-training and fine-tuning. All experiments are run on one NVIDIA GeForce 2080 Ti GPU.
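The paper adopts the IoU loss of [7,38]; the snippet below shows one common soft-IoU formulation, which we assume is close to the one used (the exact definition may differ).

def soft_iou_loss(pred, target, eps=1e-6):
    # pred:   (B, 1, H, W) probabilities in [0, 1]
    # target: (B, 1, H, W) binary ground-truth masks
    inter = (pred * target).sum(dim=(1, 2, 3))
    union = (pred + target - pred * target).sum(dim=(1, 2, 3))
    return (1.0 - (inter + eps) / (union + eps)).mean()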
Pre-training on static images. Pre-training on static images for video object segmentation has become popular recently, since it can help the network adapt to different foreground object appearances. We follow several successful practices [5,4,40] to pre-train our network by applying random affine transformations to static images. We use the saliency datasets MSRA10K [41] and ECSSD [42] and the segmentation datasets Pascal VOC [43] and COCO [44]. In this way, the network can adapt to different object appearances and categories, so as to avoid easy over-fitting. For pre-training, we set a fixed learning rate of 1e-5.
Fine-tuning on videos. Then, we fine-tune the pre-trained model on a video object segmentation dataset. We only use the DAVIS-17 [45] training set for fine-tuning. During training, we sample three frames in temporal order to obtain temporal information. In order to acquire large variation of object appearance over a long time, we randomly skip frames during sampling. The maximum random skip is 5, and the learning rate for fine-tuning is set to 1e-6.
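The random-skip sampling can be sketched as follows; the exact index logic is our assumption, based only on the description above (three frames in temporal order, with a maximum skip of 5).

import random

def sample_triplet(num_frames, max_skip=5):
    # Two random gaps of at most max_skip frames between consecutive samples.
    g1 = random.randint(1, max_skip)
    g2 = random.randint(1, max_skip)
    start = random.randint(0, num_frames - 1 - g1 - g2)  # assumes a long enough clip
    return start, start + g1, start + g1 + g2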
Inference
Our network is based on the assumption that the ground-truth mask of the first frame is given for semi-supervised video object segmentation. In other words, the first frame is set as the reference frame for all the remaining frames. Therefore, to make our network efficient, we only compute the feature map of the first frame once for a test video clip. Following the architecture of our approach, we use the previous frame with its predicted segmentation mask as another reference frame. We also follow [5] in setting three different scale sizes and computing their average as the final output.
Multi-object case. We use softmax aggregation [5] to softly combine multiple objects. Finally, the output probability map is computed by

p_{i,m} = ( p̂_{i,m} / (1 − p̂_{i,m}) ) / Σ_{j=0}^{M} ( p̂_{i,j} / (1 − p̂_{i,j}) ),   (8)

where p_{i,m} is the output probability of instance m at position i, p̂_{i,m} is the network output for instance m, m = 0 is for the background, and M is the total number of instances. We use Eq. (8) to compute the probability map of multiple objects and apply it to the next frame's inference.
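Equation (8) follows the soft aggregation of [5]; a minimal tensor implementation could look like this (the clamping is our addition, for numerical stability).

def softmax_aggregation(probs, eps=1e-7):
    # probs: (M+1, H, W) per-object probability maps; index 0 is background.
    probs = probs.clamp(eps, 1.0 - eps)
    logits = probs / (1.0 - probs)                 # p / (1 - p)
    return logits / logits.sum(dim=0, keepdim=True)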
Implementation
Encoder. We design three encoders based on ResNet-50 [46] for the three inputs (two references and one target). Like [5], the target frame encoder takes a 4-channel input, and the two reference frame encoders take 3-channel inputs. Instead of using res5 as in [5], we take res4 as the final encoded feature map, whose channel number is 1024. This is because the feature map of res5 has a low resolution, making it inaccurate for small objects. On the other hand, three res5 encoders would cause a large memory occupation.
Decoder. After the fusion layer, the fused feature map is finally fed into the decoder. Similar to [5], the decoder also takes the encoder stream, through skip-connections, as input to produce the mask. With the help of the skip-connections, the high-resolution features can replenish the missing information. Finally, the feature map is gradually upsampled by a factor of two until it reaches the same size as the input.
Experimental Results
We evaluate our network on the video object segmentation datasets DAVIS-2017 [45], DAVIS-2016 [47], and SegTrack-v2 [48]. The evaluation metrics include the mean intersection-over-union (IoU) of the predicted mask and the ground truth (J), the contour accuracy between contour points on the predicted mask and the ground truth (F), and the average of the two metrics (J&F).
DAVIS-2017. DAVIS-2017 is a multi-object dataset. There are 90 videos in total, 60 for training and 30 for validation. We evaluate our method on its validation set. The comparison results with recent state-of-the-art approaches are shown in Table 1. The results are listed from the lowest J score to the highest. The upper part is from approaches with online learning or with optical flow. It can be found that our method achieves comparable scores with the best performing ones. Our score is slightly lower than PReMVOS [49], but PReMVOS needs a longer running time than all other approaches because both online learning and optical flow require expensive computation. We reach the best performance among all methods without online learning or optical flow. This demonstrates that our NLPMM can find out where the target object is in the current frame. Further, we directly use the masked-out object as the reference input, making our model less sensitive to the influence of the background while focusing on the object itself; by doing this, our method can capture enough object features. Besides, using the masked-out objects of the first frame and the previous frame as references provides enough information for handling appearance variation.
[Table 1: evaluation on the DAVIS-2017 validation set; columns: Method, OL, OF, J (%), F (%), J&F (%), Time (s).]

DAVIS-2016. DAVIS-2016 contains 50 videos (30 for training and 20 for validation) for single-object video object segmentation. We report comparison results on the validation set in Table 2. It can be found that our approach achieves better performance than the methods using pixel matching or metric learning, such as PLM [29], PML [25], FEELVOS [28], and RGMP [5]. We also obtain higher scores than other methods without online learning. For the J metric, our method is 1.7% higher than STM [40], whilst for the contour accuracy, our method is 0.8% lower than STM [40]; this might be caused by the adopted IoU loss. Moreover, our results are competitive with online-learning based methods. According to the running times listed in Table 2, our approach achieves a good balance between accuracy and efficiency. This demonstrates that our NLPMM is able to localize moving objects with masked-out object references. Additionally, pre-training with static images also helps the network adapt to different object classes. In this way, our approach does not rely on online training to learn the object information of the current video.
SegTrack v2. We also evaluate our network on the SegTrack v2 [48] dataset. The results are shown in Table 3. It can be found that our network also achieves competitive performance on the SegTrack v2 dataset under the same level of comparison. Therefore, our network has competitive generalization ability. Our performance even surpasses MSK [4] and MaskRNN [55], where online training is used. We set the same training dataset as DMM-net; it can be seen that our method obtains comparable results with DMM-net. However, we obtain lower performance than DyeNet. This may be caused by the fact that DyeNet uses template matching, which predicts the bounding box of the target object first and then conducts segmentation; in this way, much background noise can be reduced. In the SegTrack v2 dataset, there are several videos whose background is very similar to the target object. In such cases, the template can better decrease the disturbance of the background. However, for other datasets, such as DAVIS-2017 and DAVIS-2016, such conditions are not satisfied, and the performance of DyeNet is lower than ours, as reported in Table 1 and Table 2.
Qualitative Results
Qualitative results on the two DAVIS datasets are shown in Fig. 6. For each displayed video, we choose 5 frames with cases of large object appearance variation or occlusion. It can be found that our model can handle different challenges. For example, our model performs well with large object appearance variation, as in rows 2 and 3 of Fig. 6(a) and row 1 of Fig. 6(b). Besides, our model can also segment each object when occluded by the background, as shown in row 1 of Fig. 6(a) and rows 2 and 3 of Fig. 6(b). The qualitative comparison between our model and other methods is shown in Fig. 6(c).
Ablation Studies
Two-stage training method. We first conduct the ablation study for the two-stage training method, and the results are displayed in Table 4. It is surprising to find that the performance of the pre-train-only case is much better than that of the fine-tune-only case. Both the intersection-over-union score (J) and the contour accuracy (F) of pre-train-only are almost 25% higher than those of fine-tune-only. This proves that two-stage training is necessary: if we only train on DAVIS-2017, the categories are far too few. It can also be found that our approach performs better when more categories are used for training. The combination of pre-training and fine-tuning achieves the best performance, because pre-training helps our model adapt to a large number of categories, and fine-tuning helps our model obtain temporal information and adapt to video sequences.

Different Modules. We also conduct ablation experiments with some components disabled or removed, and the results are displayed in Table 5. We test three different combinations of the channel attention module and the use of the predicted mask from the previous frame. If we remove our channel attention module, the IoU score and the contour accuracy are 3.4% and 3.7% lower than the full combination, respectively. Therefore, we can conclude that the channel attention module strengthens the feature representation and helps our network better adapt to foreground pixels. On the other hand, if we take out the predicted mask from the previous frame, the IoU score and the contour accuracy are 5.3% and 4.8% lower than the full combination, respectively, which proves that the predicted mask from the previous frame can guide our network to segment the foreground object. Overall, the full NPMCA-net achieves the best performance. It demonstrates that the channel attention module and the use of the predicted mask from the previous frame benefit from each other.
Encoder Setting. Finally, we conduct the ablation study on the setting of the encoders, training only with the DAVIS-2017 dataset. We conduct the experiment to show the necessity of the parameter-shared encoder for the two references, as reported in Table 6. 'One encoder' denotes using the same encoder for the three inputs, and 'Two encoders' denotes the parameter-shared setting. It can be found that with only one encoder, the result is almost 5% lower than with the two-encoder setting. VOS aims to segment the target object from the first frame to the end. To capture consistent reference object feature information, we set a parameter-shared encoder for the first frame and the previous frame (where the background is masked out). Parameter sharing maps the input reference features into the same representation space, so that the two reference frames' information can be treated equally. Additionally, parameter sharing reduces the number of parameters for training. If we used just one encoder for the first, the previous, and the target frames, the network would be confused, because the encoder for the current frame needs to encode both image and previous predicted mask information, where the background is not masked out. However, for the first and the previous frames, the background is masked out, and we only use the foreground pixels of the frames.
Table 6
Encoder settings analysis on the DAVIS-2017 validation set. 'One encoder' denotes using the same encoder for all the inputs; 'Two encoders' denotes the setting of parameter sharing only for the reference frames.
Limitations
Some failure cases of our model are shown in Fig. 7. When foreground objects overlap, our model tends to produce incorrect segmentation for the occluded objects, especially when the overlapped objects are of the same category. Nevertheless, if the foreground objects are well separated afterwards, our model can recover the correct tracking and segmentation status thanks to the use of the first-frame information, as in row 1 of Fig. 7; this example shows that our method can catch the target object back after occlusion. However, when there is occlusion among multiple objects, especially when the targets are of the same category, our method can be confused and lose the target (as in the second row of Fig. 7). To overcome this limitation, in the future we consider generating prototypes to represent each object and pushing their feature distances apart to make the network sensitive to different objects.
Conclusion
In this work, we have proposed a new video object segmentation network, NPMCA-net, which combines a non-local pixel-matching module and a channel attention module in series connection. Our network achieves state-of-the-art performance on both the DAVIS-2017 and DAVIS-2016 validation sets. Additionally, our NPMCA-net has good generalization ability. Moreover, our network does not need any post-processing, so it keeps a good balance between accuracy and efficiency. In the future, we consider generating prototypes to represent each object and pushing their feature distances apart to make the network sensitive to different objects.
Figure 1: The IoU score versus running time per frame for various VOS approaches on the DAVIS-2016 validation set. Our model keeps a good balance between performance and efficiency.
Figure 2: The framework of our NPMCA-net. It consists of three encoders, where the encoders for the two reference frames are shared. NPMCA-net contains a non-local pixel-matching module, a channel attention module, a fusion module, and a decoder.
Figure 4: (a) Framework of the non-local pixel-matching module (NLPMM). Our NLPMM has two inputs, the reference feature map and the target feature map; the output is the matched feature map. (b) Visualization of the output feature map from the NLPMM. The matched feature map can coarsely acquire the foreground object appearance and its location.
Figure 5: (a) Framework of the channel attention module (CM). The input of the CM is the output of the NLPMM (the matched feature map), and it outputs the strengthened feature map. (b) Visualization of the output feature map from the CM; the CM is able to strengthen the feature representation.
(a) The visual results of our NPMCA-net on DAVIS-2016.
(b) The visual results of our NPMCA-net on DAVIS-2017.
(c) The visual comparison with other approaches on DAVIS-2017.
Figure 6: We display frames with large appearance variation or before and after occlusion, and the comparison between our approach and other approaches.
Figure 7: Failure cases of our network.
Table 2
Evaluation on the DAVIS-16 validation set. 'OL' denotes online learning; 'OF' means using optical flow. Our NPMCA-net can even achieve slightly higher performance than methods with online learning.
Table 4
Training method analysis on the DAVIS-2017 validation set. The two-stage training method helps our NPMCA-net better adapt to different categories. With only the DAVIS-2017 training set, the network easily over-fits.
Table 5
Network module analysis on the DAVIS-2017 validation set. 'CM' denotes the channel attention module, and 'PM' denotes that the current-frame input includes the predicted mask from the previous frame.
Optimal Control Implementation with Terminal Penalty Using Metaheuristic Algorithms
Optimal control problems can be solved by a metaheuristic-based algorithm (MbA) that yields an open-loop solution. The receding horizon control mechanism can integrate an MbA to produce a closed-loop solution. When the performance index includes a term depending on the final state (terminal penalty), the prediction time may exceed a sampling period. This paper aims to avoid predicting the terminal penalty. The sequence of the best solution's state variables becomes a reference trajectory; this one is used by a tracking structure that includes the real process, a process model (PM), and a tracking controller (TC). The reference trajectory must be followed up as much as possible by the real trajectory. The TC makes a one-step-ahead prediction and calculates the control inputs through a minimization procedure; therefore, the calculation of the terminal penalty is avoided. An example of a tracking structure is presented. The TC may also use an MbA for its minimization procedure. The implementation is presented in two versions: using a simulated annealing algorithm and an evolutionary algorithm. The simulations have proved that the proposed approach is realistic. The tracking structure does or does not work well, depending on the PM's accuracy in reproducing the real process.
Optimal control problems can be solved by a metaheuristic based algorithm (MbA) that yields an open-loop solution. The receding horizon control mechanism can integrate an MbA to produce a closed-loop solution. When the performance index includes a term depending on the final state (terminal penalty), the prediction’s time possibly surpasses a sampling period. This paper aims to avoid predicting the terminal penalty. The sequence of the best solution’s state variables becomes a reference trajectory; this one is used by a tracking structure that includes the real process, a process model (PM) and a tracking controller (TC). The reference trajectory must be followed up as much as possible by the real trajectory. The TC makes a one-step-ahead prediction and calculates the control inputs through a minimization procedure. Therefore the terminal penalty’s calculation is avoided. An example of a tracking structure is presented. The TC may also use an MbA for its minimization procedure. The implementation is presented in two versions: using a simulated annealing algorithm and an evolutionary algorithm. The simulations have proved that the proposed approach is realistic. The tracking structure does or does not work well, depending on the PM’s accuracy in reproducing the real process.
Introduction
In process engineering, many applications involve the optimal control of a dynamic system. Sometimes the structural properties of a dynamic system and the nature of the optimal control problem lead to theoretical control laws that are relatively easy to implement without significant computational complexity. On the other hand, there are situations in which the optimal evolution of the system requires important computational effort within a time interval. That is why metaheuristic-based algorithms (MbAs) (see [1][2][3]) have been used for over two decades in control engineering (see [4][5][6]).
The closed-loop control structure able to integrate an MbA is the receding horizon control (RHC) mechanism (see [7][8][9]). The controller of this structure makes optimal predictions using a process model (PM). Model predictive control is a particular case of RHC (see [10]). The integration of MbAs into RHC is systematically described in [7,9,11]. The prediction horizon depends on the nature of the optimal control problem (OCP)'s performance index. A large computational complexity occurs when the performance index includes a Mayer-type term that measures the quality of the trajectory at its final extremity. This term is usually called the terminal penalty. In this case, the final times of the prediction and control horizons are identical. Unfortunately, in the first sampling periods, the prediction horizon covers all or nearly all of the control horizon. Therefore, the prediction's calculation takes a long time that possibly does not fit inside a sampling period.
This paper proposes an alternative control structure to avoid the big computational complexity of predicting the terminal penalty. The new approach also aims to give a closed-loop solution to the OCP in question. We suppose that an MbA has already been developed for solving this OCP. An execution series will produce the best solution for the addressed problem. The sequence of the best solution's state variables is called the reference trajectory in our approach.
The control structure proposed in this paper also includes a model of the real process (PM) and has, as data input, the reference trajectory produced by the MbA. The closed-loop structure has the following main objective: the reference trajectory must be followed up as much as possible by the sequence of its state variables (the real trajectory). That is why we may consider the closed-loop as a tracking structure. Section 3 states the tracking problem as the new problem that has to be solved by the tracking structure. Neither the OCP in question nor the specific MbA is involved in this statement. The reference trajectory contains intrinsically both. The new control structure has a tracking controller that aims to approach the reference and real trajectories. The tracking controller will use only one-step-ahead predictions to minimize the distance between trajectories as much as possible. Hence, the computational complexity will be much diminished.
For its minimization task, the tracking controller may use, in turn, a metaheuristic based algorithm, denoted by MA in the sequel. Let us note that the MA is different from MbA. Section 4 gives an example of how to solve a benchmark OCP using a tracking structure. The tracking controller's implementation is presented in two versions: the first includes a simulated annealing algorithm and the second an evolutionary algorithm.
The tests have proved that the tracking structure is a pragmatic approach to solving the OCP in question when we already have a reference trajectory (possibly yielded by an MbA). The price to pay is the fact that the performance index is approximated with an error of a few percent. The key factor is the accuracy of the PM in reproducing the real process. Before a real-time implementation with a real process, the tracking structure's designer can simulate it and analyze the degradation of the process' quasi-optimal behavior.
Optimal Control Problem: Hypotheses
Let us consider an OCP regarding an invariant nonlinear dynamic system with differential and algebraic equations as a PM:

Ẋ = f(X(t), U(t)),   (1)

X = [x_1, ..., x_n]^T; U = [u_1, ..., u_m]^T; X(t) ∈ R^n; U(t) ∈ R^m.

The control horizon is [t_0, t_N], with discrete moments t_i = t_0 + i·T, i = 0, ..., N, where T is the sampling period and t_0 is the initial moment (t_0 = 0). If the initial value X_0 = X(t_0) and the sequence of control inputs U_0 = U(t_0), U_1 = U(t_1), ..., U_{N−1} = U(t_{N−1}) are known, then the sequence of the state variables X_1, ..., X_{N−1}, X_N can be calculated using the PM. Solving an OCP implies calculating the control sequence that maximizes or minimizes the performance index J(t_0, t_N, X(t_0), U) on the control horizon, starting from X(t_0).
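As an illustration, the PM (1) can be simulated over the control horizon with a fixed-step Runge-Kutta integrator; the Python sketch below is ours, with the number of integration substeps chosen arbitrarily.

import numpy as np

def simulate(f, x0, controls, T, substeps=10):
    # f:        callable f(x, u) -> dx/dt (NumPy array), right-hand side of (1)
    # x0:       initial state X_0
    # controls: U_0, ..., U_{N-1}, held constant over each sampling period
    # T:        sampling period
    # Returns the state sequence X_0, ..., X_N.
    states = [np.asarray(x0, dtype=float)]
    h = T / substeps
    for u in controls:
        x = states[-1]
        for _ in range(substeps):
            # Classical 4th-order Runge-Kutta step.
            k1 = f(x, u)
            k2 = f(x + 0.5 * h * k1, u)
            k3 = f(x + 0.5 * h * k2, u)
            k4 = f(x + h * k3, u)
            x = x + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
        states.append(x)
    return states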
Besides the initial conditions, there are also a certain number of algebraic and differential constraints imposed on the control inputs and state variables.
In this work, we have adopted some working hypotheses that cover a lot of the processes involved in many OCPs.
Working hypotheses:
• The field f = [f_1, ..., f_n]^T has all the smoothness properties (continuity and differentiability) needed by the sequel's calculations.
• The dynamic system described by (1) corresponds to slow nonlinear systems, such as chemical batch processes. The controller needs a sampling time, T, large enough to calculate the current optimal control input (denoted U(k) in the sequel).
• For this kind of process, the initial state X(t_0) can be known. When masses, volumes, and concentrations compose the initial state, this is a realistic hypothesis because it can be regenerated.
• The constraints refer only to the control inputs:

U(t) ∈ Ω, t_0 ≤ t ≤ t_N,

where Ω ⊂ R^m is an open set.
Closed-Loop Control Structure
In the majority of papers that address an OCP using a specific MbA, the solution is limited to the calculation of the control input values that optimize the performance index with the concerned metaheuristic. Emphasis is placed on the metaheuristic's structure and on the parameter tuning needed for good effectiveness. For a closed-loop control structure with a real process (not a process model), the sequence of optimal control inputs mentioned before is useless. The closed-loop implementation is a connected problem, important and not easy to treat.
The receding horizon control (RHC) structure is a well-known method to achieve a closed-loop control structure that can be a solution for this kind of problem. We have already proposed a systematic design procedure in [7] that uses RHC as a closed-loop structure. This one includes the MbA adapted to play the role of the controller.
Because the theoretical framework of the RHC is already well known, we recall hereafter only the elements that define the RHC strategy:
• The controller calculates the next control input value by looking ahead for a number of steps regarding a given performance index. The control input is implemented for only one step.
• The controller predicts over the number of steps taken into account using a dynamic process model (PM).
• The implementation result is checked, and a new decision is made by taking updated information into account and looking ahead for the same number of steps.
• The prediction horizon "recedes" at each sampling period but keeps the final extremity (we consider here the most difficult situation, when the objective function has a terminal penalty term). Hence, its length decreases by one unit at each sampling period.
The receding horizon controller is, roughly speaking, the MbA, slightly modified to be integrated into the closed-loop. It makes a prediction of the evolution of the process over the so-called prediction horizon.
Correspondence between Performance Index and Prediction Horizon
Usually, the performance index can be expressed, for the sake of simplicity, in its continuous general form as

J(t_0, t_N, X(t_0), U) = ∫_{t_0}^{t_N} L(X(t), U(t)) dt + M(X(t_N)).

The first part is a Lagrange-type term that measures the quality along the trajectory of the dynamic system. The second part is a Mayer-type term that measures the quality of the trajectory at its final extremity. The Mayer-type term will be called the terminal penalty in the sequel. The structure of the performance index is decisive for the strategy of RHC related to the prediction horizon.
The prediction horizon can have different positions inside the control horizon [0, H] (see [11]). Figure 1 shows the situation when the prediction horizon includes the final moment H of the control horizon. Accordingly, the prediction horizon's length is variable, having the value h = H − k, where k is the current sampling time. Because k evolves from 0 to H−1, it holds h = H, . . . , 1. Generally, this scheme is compulsory when the performance index includes a terminal penalty because this must be calculated, and consequently, the prediction horizon must include the final states. It meets the elements that define the receding horizon control (RHC) strategy.
The prediction horizon "recedes" at each sampling period but keeps the final extremity. Hence, its length decreases by one unit at each sampling period. Unfortunately, in the first sampling periods, the prediction horizon covers all or near all control horizon. Therefore, the prediction's calculation takes a long time that possibly does not enter inside a sampling period. A favorable situation would be when there would be an estimation of the terminal penalty. However, generally speaking, there is no such estimation for each process. That is why we propose another approach in this paper, which avoids calculating the terminal penalty.
Tracking of the Quasi-Optimal Trajectories
This paper proposes a method to implement a closed-loop control structure devoted to the situation in which an MbA has already been developed, aiming to solve an optimal control problem having a terminal penalty. For example, the OCP described in Section 4 has been solved using an evolutionary algorithm, denoted EA1. The latter, which is a stochastic algorithm, yields a single solution after a single execution. Therefore, it generates a single realization of a stochastic process. The designer of EA1 has made the appropriate choices to ensure the convergence of the stochastic process. A solution yielded by EA1 is a sequence of control inputs that optimizes the performance index and allows simulation of the open-loop system's evolution over the control horizon. Practically, one must carry out EA1 many times (e.g., 30-40) to obtain quasi-optimal solutions. After a simulation series, the best quasi-optimal solution is available. Four elements can express this one:
• the initial state: X_0;
• the sequence of quasi-optimal control inputs: U*_0, U*_1, ..., U*_{N−1};
• the sequence of quasi-optimal state variables:

X*_0, X*_1, ..., X*_N;   (2)

• the value of the objective function: J*.
The simulated state evolution having the best performance index will be called in the sequel the reference trajectory.
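The best-of-several-runs selection described above can be sketched in Python as follows; run_mba and the maximization convention are illustrative assumptions.

def best_reference_trajectory(run_mba, n_runs=30):
    # run_mba() is assumed to return (controls, states, J) for one execution;
    # J is maximized here (flip the comparison for minimization problems).
    best = None
    for _ in range(n_runs):
        controls, states, J = run_mba()
        if best is None or J > best[2]:
            best = (controls, states, J)
    return best   # (U*, reference trajectory X*, J*)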
Remark 1:
- The solution found by EA1 is an open-loop solution. Its control input sequence is useless for a control structure that includes a real process; it cannot be used directly by such a structure. Generally speaking, the MbA has to be slightly modified and integrated into the controller of an eventual closed loop (see [7,9,11]).
- If the OCP under consideration has a terminal penalty, there is an additional difficulty in integrating the MbA into the controller: the prediction horizon must include the final time for each sampling period. This fact involves a big computational complexity; the prediction's calculation takes a long time that possibly does not fit inside a sampling period. Therefore, the control structure could not be implemented.
- This paper proposes a new method to achieve the closed-loop control structure in which the calculation of the terminal penalty is also avoided. Figure 2 presents the proposed control structure, which includes a tracking controller (an appellation that will be justified in the sequel). The reference trajectory is used by the tracking controller to yield the control inputs for the real process. These control inputs will determine a quasi-optimal behavior of the process (in closed loop) if the tracking controller reproduces the reference trajectory to a certain extent.
This time the quasi-optimal controller does not incorporate the MbA itself, as in other closed-loop solutions. It uses only the state trajectory (2) of its best evolution (in open loop), over a specific control horizon starting from a particular initial state.
The performance index J(t_0, t_N, X(t_N)) is a Mayer-type term, called the terminal penalty in the sequel (e.g., J = x_1(t_N)·x_5(t_N)). The lack of a Lagrange-type term does not aim to reduce the difficulty of the treated OCP. For a closed-loop control structure using an MbA, the terminal penalty is more difficult to treat because it needs an estimation of the final state, which is, generally speaking, unavailable (see [7]). On the other hand, the solution proposed in this paper is devoted to this kind of OCP, having only a terminal penalty. That is the case, for example, of chemical batch processes.
Tracking Problem
In this section, we propose a statement for a new problem that will allow us to show what it means to reproduce the state trajectory (2) to a certain extent. Let us note that, because of the terminal penalty, our desideratum concerns only the final state of the process. It is suitable that the latter is nearly identical to X*_N; consequently, the OCP's performance index J will be almost equal to J*. However, this objective is not reachable in most applications. It may need an N-step-ahead prediction, which is a huge, time-consuming process. In this case, the best we can do is maintain the entire state trajectory in the reference trajectory's neighborhood, as close as possible. The controller will use only one-step-ahead predictions. That is why we may call the control structure from Figure 2 a tracking structure. The input data are a reference trajectory that must be followed up as much as possible by the sequence of its state variables. This pursuit is generated by the tracking controller (TC), which acts in a manner adapted to this purpose. The control inputs' values are optimally determined for all sampling periods, as explained below.
To avoid confusion, we will denote the reference trajectory (2) by

Y_0, Y_1, ..., Y_N.   (3)

Figure 3 shows symbolically how the tracking structure would work. The variables have the following significance:
• U_k, 0 ≤ k ≤ N−1, representing the control inputs' values for the sampling period [k, k+1];
• X_k, 1 ≤ k ≤ N, the state vector at the moment k estimated by the TC, using the known PM (the state vector at the end of the evolution over the sampling period [k−1, k]);
• X′_k, 1 ≤ k ≤ N, the real state vector "received" from the process at the moment k; in a real implementation, X′_k is measured and/or estimated;
• X_0, the process' initial state (X_0 is Y_0 slightly perturbed).
The tracking process is defined by several aspects listed below:
1. At the moment k, the TC "reads" X′_k and considers it the initial state for the current sampling period, [k, k+1].
2. The TC calculates U_k and X_{k+1}. According to the PM, the constant control input U_k determines the following transfer over [k, k+1]:

X′_k → X_{k+1}.

3. The TC calculates the constant vector U_k through an optimization procedure such that

U_k = argmin_{U∈Ω} ‖X_{k+1} − Y_{k+1}‖.   (4)

4. The real process evolves according to the control input U_k and determines the following transfer:

X′_k → X′_{k+1}.

Aspects #2 and #3 actually define the one-step-ahead prediction, which the TC has to make to approach the reference trajectory.
Considering the control structure's objective, the TC has to solve the tracking problem (TP) stated as below:
TP: If the reference trajectory (3) and the initial state X_0 are settled, one has to determine the sequence of control inputs' values

U_0, U_1, ..., U_{N−1}   (5)

that meets the optimum criterion (4), for 0 ≤ k ≤ N − 1.
Some remarks can be made:
A. Solving the TP means that the reference trajectory is only followed up "as much as possible" using one-step-ahead prediction. Let us denote by d_k the minimum distance predicted at step k−1:

d_k = min_{U∈Ω} ‖X_k − Y_k‖.

B. Of course, the best tracking process would require an N−k step-ahead prediction, because this one involves the final state X_N, which the terminal penalty depends on. In other words, the true objective would be to minimize only the distance ‖X_N − Y_N‖ (which is not the same thing as minimizing ‖X_k − Y_k‖, k = 0, 1, ..., N−1). However, in this way, we come back to a receding prediction horizon. Sometimes, this is unacceptable due to the big computational complexity and a relatively small sampling period: the controller would not have enough time to make the computation.
C. On the other hand, the one-step-ahead prediction may be satisfactory from the complexity's perspective. The computation can be done inside one sampling period.
D. When the N-step-ahead prediction cannot be carried out inside one sampling period, the best we can do is to resort to the one-step-ahead prediction. This strategy allows the possibility to implement the quasi-optimal control structure satisfactorily.
Tracking Controller
A solution of the TP is the sequence of the control inputs' values (5). The TC constructs such a solution during the control horizon [0, N] using the following input data:
• the PM, usually given by Equation (1);
• the reference trajectory (3);
• the initial state X_0;
• the real state vectors X′_k, 1 ≤ k ≤ N; these values are received, in real time, from the process.
We can make some remarks concerning the tracking strategy presented before.
(I). Neither the OCP nor the MbA (used to generate the reference trajectory) is involved in the TP statement. The reference trajectory intrinsically contains both.
(II). Action #3 from Section 3.1 can be, in principle, very efficient when X_{k+1} = Y_{k+1}. This particular case depends on the controllability of the system. Even if this property is satisfied, the access process can involve more sampling periods, and accordingly, the calculation would take more time. Moreover, only in particular cases can the controllability be proved.
(III). Actions #3 and #2 will bring X_k and Y_k as close together as possible while the constraint U(t) ∈ Ω is met. Therefore, the two trajectories are near at the moment k. On the other hand, Y_{k+1} can be accessed from Y_k in only one step, using the control value U*_k (see reference trajectory). According to the PM, the transfer Y_k → Y_{k+1} is produced by U*_k. Therefore, there is an increased chance of having a small distance between the next pair of state variables, X_{k+1} and Y_{k+1}. The calculation of the input U_k, using Equation (4), transforms the tracking process into a greedy-type algorithm.
(IV). The key factor that makes the state trajectory close to the reference trajectory is the PM's accuracy in reproducing the dynamics of the real process.
At the moment k, the process is in the state X_k, and the TC searches for the constant control input U_k that minimizes the performance index
d_{k+1} = ||X_{k+1} − Y_{k+1}||. (6)
In this way, we have another local optimal control problem, whose objective is the minimization of the Euclidean distance from X_{k+1} to Y_{k+1}. Action #2 of the tracking process shows that the minimization (6) makes sense.
We now propose a particular implementation of the TC that yields an approximated solution for the tracking problem. An effective way to determine the vector U k ∈ R m that minimizes the performance index (6) is to use a metaheuristic-based algorithm, denoted by MA in the sequel. Let us note that the MbA used a priori to generate the reference trajectory is different from MA.
The call of this algorithm may be represented by the following instruction:
[U_k, X_{k+1}, d_{k+1}] ← MA(X_k, Y_{k+1}).
The MA has a fitness function that includes the numerical integration of the PM over the sampling period [k, k + 1]. We give hereafter the outline of the TC, which is called for every single sampling period.
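As an illustration, the fitness function of such an MA could look as follows in MATLAB; this is a minimal sketch, assuming the PM dynamics are available as a function handle pmOde (a hypothetical name, not taken from the paper's scripts), with the states stored as column vectors:

function d = trackingFitness(Uk, Xk, Yk1, pmOde, h)
% Performance index (6): distance between the state predicted by the PM
% under the constant input Uk and the reference state Y_{k+1}.
%   pmOde - process model dynamics, dx/dt = pmOde(t, x, u)
%   h     - length of one sampling period
    [~, X] = ode45(@(t, x) pmOde(t, x, Uk), [0 h], Xk); % integrate PM over [k, k+1]
    Xk1 = X(end, :).';                                  % predicted next state X_{k+1}
    d = norm(Xk1 - Yk1);                                % distance d_{k+1}
end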
Outline of Tracking Controller
1. Get the current value of the state vector X_k.
2. [U_k, X_{k+1}, d_{k+1}] ← MA(X_k, Y_{k+1}).
3. Send the control input U_k towards the process.
4. Return.
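In MATLAB terms, one TC invocation could be sketched as below; the names trackingFitness and MA are illustrative, MA standing for any minimization routine (SAA or EA2 in this paper):

function Uk = trackingController(Xk, Yk1, pmOde, h)
% One TC invocation per sampling period (steps 1-3 of the outline above).
    objective = @(U) trackingFitness(U, Xk, Yk1, pmOde, h); % performance index (6)
    Uk = MA(objective);  % step 2: local minimization of (6) by the chosen MA
end                      % step 3: the returned Uk is sent towards the process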
In this paper, two MAs will be exemplified: the well-known simulated annealing algorithm (SAA), which is very simple to implement because it uses a single solution within its iterative searching process, and the evolutionary algorithm (EA) that works with a population of solutions.
The simulation tests have proved (see Section 4) that the tracking process, as described before, has practical relevance.
Quality of the Proposed Solution
This strategy does not guarantee that d_N would be small enough such that the terminal penalty would have an acceptable value. The control structure designer may require that d_N be inferior to a pre-established value. Equivalently, the relative error of the performance index, ε_r, must be less than a maximum value:
ε_r = |J* − J| / J* ≤ ε_max. (7)
Constraint (7) can be verified by simulation of the closed loop presented in Figure 1 (at least at the actual theoretical stage of our research). This simulation needs a real process model (RPM) that is different from the PM. The RPM is obtained by adding unmodeled dynamics and noises to the PM. The RPM construction is a difficult task in itself, which the designer has to solve before making the simulation. If the TP's solution is satisfactory under the adopted hypotheses, the control structure designer may pass to the closed-loop implementation with a real process.
Remark (I) from Section 3.2 recalls that the OCP's quasi-optimal solution results from an MbA, behind which lies a convergent stochastic process. When choosing the specific MbA as an optimization tool, its convergence has already been studied.
In the presented TP, we are not faced with a convergence problem, first of all because the TC works with two finite series of states: X_0, X_1, ..., X_N and Y_0, Y_1, ..., Y_N. The two series are finite because the control horizon is finite. On the other hand, the real process can perturb the tracking process significantly through its state variables. The perturbation must be realistically modeled and included in the TP statement to obtain a framework that could generate theoretical results.
Example of Using a Quasi-Optimal Trajectory
This section's main objective is to illustrate how a specific OCP can be solved through the intermediary of the quasi-optimal trajectory generated by an MbA. We are not aiming to solve a specific OCP and compare our solution with other authors' approaches, but rather to exemplify the newly proposed method on a benchmark problem.
We have considered a well-known problem described in many articles, among which we recall here [12]. The OCP is called the Park-Ramirez bioreactor and concerns a nonlinear dynamic system. This problem is stated in Appendix A of this article. One can see that it holds n = 5, m = 1, and N = 15, i.e., the process has five state variables, a scalar control input, and a control horizon of 15 sampling periods. In the framework devoted to solving OCPs through metaheuristics, we have implemented and used an evolutionary algorithm that solves this problem. We will call it EA1 in the sequel. EA1 has produced a quasi-optimal solution whose state trajectory is given by (A7). Some details concerning EA1 and the reference trajectory are given in Appendix B. Now, we want to implement a tracking structure using this reference trajectory. The initial state of the process in the closed-loop structure is given by (A8).
The implementation of the TC and the simulation of the closed loop were carried out using the MATLAB language and environment. To simulate the real process evolution, we have used the dedicated functions for integrating differential equations.
TC's Implementation Using Simulated Annealing Algorithm
This section presents the TC's implementation and the closed-loop simulation using the well-known simulated annealing algorithm (SAA). Let us consider SAA as a function that is symbolically called as below: [U_k, X_{k+1}, d_{k+1}] ← SAA(X_k, Y_{k+1}).
The searching process refers to U_k, which has to minimize the performance index. It is naturally coded as a real vector having the components of the control input. SAA evaluates U_k by calling an objective function (scripts: SAh1, eval_SA; see the folder TC_SA) that calculates the performance index (6). This function performs two actions:
- the numerical integration of the process over the interval [k, k+1] to calculate the next state X_{k+1} using the current U_k;
- the computation of the distance d_{k+1}.
Only a candidate solution meeting the constraint (8) is accepted before applying the Metropolis rule, which will set the iterative process' current solution. SAA has two ways to stop the solution's improvement. The first one is the convergence of the searching process, which can be declared when two conditions are met:
- the objective function has had no improvement for a certain number of iterations, and
- the annealing temperature is less than a minimum value.
The second corresponds to the situation when the convergence cannot be ascertained after a pre-established large number of iterations.
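A minimal MATLAB sketch of such an SAA is given below; the cooling schedule and the numerical constants are assumed for illustration and are not those of the scripts SAh1/eval_SA:

function [Ubest, dBest] = saaTracking(objective, Umin, Umax, T0)
% Simulated annealing for minimizing the performance index (6).
    alpha   = 0.95;    % geometric cooling factor (assumed)
    Tmin    = 1e-4;    % minimum annealing temperature (assumed)
    maxIter = 5000;    % pre-established large iteration bound
    stall   = 200;     % iterations without improvement => convergence test
    U = Umin + rand(size(Umin)).*(Umax - Umin);   % feasible initial solution
    d = objective(U);
    Ubest = U;  dBest = d;  T = T0;  noImprove = 0;
    for it = 1:maxIter
        Ucand = U + T*randn(size(U));             % candidate in the neighbourhood
        Ucand = min(max(Ucand, Umin), Umax);      % enforce the bound constraint (8)
        dCand = objective(Ucand);
        if dCand < d || rand < exp((d - dCand)/T) % Metropolis rule
            U = Ucand;  d = dCand;
        end
        if d < dBest
            Ubest = U;  dBest = d;  noImprove = 0;
        else
            noImprove = noImprove + 1;
        end
        T = alpha*T;                              % annealing temperature update
        if noImprove >= stall && T < Tmin         % both convergence conditions met
            break;
        end
    end
end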
The first simulation series' objective was to evaluate the TC's behavior when the initial state is different from that considered in the OCP statement (see "influenceX01.m" inside the folder TC_SA). In our example, the initial state is X_00 (see (A3)). X_0 is obtained from X_00, to which a normally distributed noise is added:
X_0 = X_00 + normrnd(Mean, StDev, 1, 5). (9)
The noise is generated by a function normrnd (the same name as the MATLAB function) that generates a vector of random numbers chosen from a normal distribution with mean "Mean" and standard deviation "StDev". For the real process, we have considered an RPM identical to the PM. Table 1 presents the simulation's results for six noises having different mean values. These values are quite big compared to the values of the variables x_1 and x_2, for example, on the interval k = 0, ..., 13. It is important to assess the tracking precision. The absolute error at the moment k can be calculated using the following formula:
Aerr(k) = ||X_k − Y_k||,
where ||·|| is the Euclidean norm. The relative error at the moment k can be expressed as follows:
Rerr(k) = ||X_k − Y_k|| / ||Y_k|| × 100%. (10)
The columns of Table 1 give the following values, respectively: the minimum, average, and maximum relative tracking error, the relative error in the final state (Rerr(N)), and the reference and current performance indices. In lines #1 and #2, the performance index determined by the TC with SAA has values superior to J*. The explanation is related to the reference trajectory's quasi-optimal character, namely that the value J* is not the true optimum. Figure 4 presents comparatively the state evolution in the reference case and in the closed-loop control system with the TC using SAA. The mean value of the noise is Mean = 0.1, which is very big for our process, especially for the variables x_1 and x_2. The relative error of the final state, which is more important for the terminal penalty, is only 0.76%. This fact is reflected in the relative error of the performance index, ε_r = 3.81%. The control input's value is depicted in Figure 5 in the two cases. It is interesting to see that the control input given by the TC has relatively the same pattern. Figure 6 shows the evolution of the relative tracking error over the control horizon. Let us note that the tracking error is already near 5% initially, which is a big value.
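For completeness, the error measures above translate directly into MATLAB; the sketch below assumes the closed-loop and reference states are stored row-wise in (N+1)-by-n matrices X and Y, and that normrnd (Statistics and Machine Learning Toolbox) is available:

% Perturbed initial state, cf. Equation (9) (five state variables here)
X0 = X00 + normrnd(Mean, StDev, 1, 5);
% Absolute and relative tracking errors over the control horizon
Aerr = vecnorm(X - Y, 2, 2);             % ||X_k - Y_k||, Euclidean norm per row
Rerr = 100 * Aerr ./ vecnorm(Y, 2, 2);   % Equation (10), in percent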
To simulate the real process, we have considered that the RPM is equivalent to the PM, to which a normally distributed noise is added. The latter is a random variable having a normal distribution like in Equation (9). In this way, the state variables X_k and X'_k, 1 ≤ k ≤ N, are different:
X'_k = X_k + normrnd(Mean, StDev, 1, 5). (11)
We have implemented a similar program as before to simulate the closed-loop functioning with the TC and RPM. The only difference is that the state variables are adjusted using Equations (9) and (11) (see TC_SA_h1_MEAN0_04.m). The values characterizing the normal noise are Mean = 0.04 and StDev = 0.01. The resulting simulation program has a stochastic character; therefore, we cannot conclude after a single execution. We have to repeat the execution several times to give consistency to the statistical parameters in such a situation.
That is why the second simulation series repeats the closed-loop simulation program 30 times. Table 2 gives some statistical parameters characterizing the 30 executions of the closed-loop simulation program. For the relative error of the performance index (ε_r, see Equation (7)), the minimum, average, and maximum values and the standard deviation are indicated. The relative tracking error in the final state, Rerr(N), is computed according to Equation (10). Table 2 shows its minimum, average, and maximum values over the simulation series. The particular simulation among the 30 whose performance index is the closest to the average can be considered the typical execution. Its performance index is 29.96, which means the relative error has an acceptable value of 7.1%. Remark 1: We can make a particular simulation of the closed loop, taking X_0 = X_00 and RPM ≡ PM (without noise). Obviously, the theoretical solution is trivial: the sequence of control input values calculated by the TC is just U*_0, U*_1, ..., U*_{N−1}, and the sequence of the process' states is just Y_0, Y_1, ..., Y_{N−1}, Y_N. This solution is found on the condition that the MA used for local optimization (SAA in our case) works very well and finds X_k ≡ Y_k, k = 1, ..., N. Under these conditions, the simulation (see TC_SA_X00_NONOISE.m) has proved that our SAA implementation works very well. Generally speaking, this is a method to test the correctness of the MA's implementation.
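The statistics in Table 2 can be gathered with a simple repetition loop; runClosedLoop below is a hypothetical wrapper around one execution of the simulation program:

nRuns = 30;
epsR = zeros(nRuns, 1);
for r = 1:nRuns
    [J, Jstar] = runClosedLoop();            % one execution with fresh noise
    epsR(r) = 100*abs(Jstar - J)/Jstar;      % relative error of (7), in percent
end
fprintf('eps_r (%%): min %.2f, mean %.2f, max %.2f, std %.2f\n', ...
    min(epsR), mean(epsR), max(epsR), std(epsR));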
TC's Implementation Using an Evolutionary Algorithm
The TC's implementation can use another MA. In this section, we use an evolutionary algorithm called EA2 that is different from EA1.
NB: EA1 and EA2 are two distinct implementations of the evolutionary metaheuristic. EA1 searches for the optimal solution of the OCP described in Appendix A, while EA2 solves the TP presented before. Two different elements characterize the two OCPs:
• the performance index is (A5) for EA1 and (6) for EA2;
• the control horizon is [0, ..., N] for EA1 and [k, k + 1], 0 ≤ k ≤ N − 1, for EA2.
EA1 is executed one time to solve its problem, while EA2 is called for every sampling period to solve local optimization problems.
These basic elements further involve many other implementation differences, such as solution encoding and genetic operators. Let us consider EA2 as a function that is symbolically called as below:
[U_k, X_{k+1}, d_{k+1}] ← EA2(X_k, Y_{k+1}).
The optimization process refers to U_k, which has to minimize the same performance index (6). It is naturally coded as a real vector having the components of the control input. In our case, U_k is a scalar. EA2 evaluates U_k using an objective function (implemented inside the program TC_EA_h1_NOISE.m; see the folder TC_EA2) that calculates the performance index (6). This function performs two actions: the numerical integration of the process using the current U_k and the computation of the distance d_{k+1}.
The constraint (8) is implemented when the initial population is generated and any time the solution's value is modified (e.g., inside the mutation operator).
EA2 stops the solution's search after a certain number of generations (40 in our implementation) while the population evolves. This number was tuned according to the preliminary tests of the program.
For our particular TP, every chromosome of the solution's population encodes the value of U_k. So, the length of the solution vector is h = 1 (control horizon). EA2, having the parameters given by Table 3, uses a direct encoding with real (non-binary) values. It has the usual characteristics listed below (a MATLAB sketch of the mutation-step adaptation follows the program outline further below):
• The population of each generation has µ individuals.
• NGen is the number of generations in which the population is evolving.
• The selection strategy is based on Stochastic Universal Sampling using the rank of individuals, which is scaled linearly using selection pressure.
• There is no crossover operator, because each chromosome encodes a single control input value (whatever the value m is). In our particular TP, the solution vector has a single component, i.e., a scalar.
• The mutation operator uses global variance adaptation [1,2] of the mutation step. The adaptation is made according to the "1/5 success rule".
• The replacement strategy: the offspring replace the λ worst parents of the generation.
We have simulated the closed-loop system using a program whose MA is EA2. A sketch of the simulation program is given in Table 4.
Table 4. Simulation program's pseudo-code list.
Outline of TC_EA_h1_NOISE
Nt ← 15; set the other program's parameters.
For k = 0, ..., Nt − 1:
1. Get the current value of the state vector X_k.
2. [U_k, X_{k+1}, d_{k+1}] ← EA2(X_k, Y_{k+1}).
3. Send the control input U_k towards the process.
End for
Display data and graphics.
Nt denotes the length of the control horizon as a number of sampling periods. Instructions #1 and #3 suggest the connection with the real process, in real time. The state variable is read or estimated, and the control input is sent towards the process. In our simulation, instruction #3 means that U_k is memorized for further simulations.
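As announced above, the "1/5 success rule" used by EA2's mutation operator can be sketched in MATLAB as follows; this is a fragment as it would appear inside the mutation operator, with the adaptation constant c and the variable names assumed for illustration, not taken from the actual implementation:

% Global variance adaptation of the mutation step ("1/5 success rule"):
% widen the step when mutations often succeed, narrow it otherwise.
c = 0.9;                               % adaptation constant (assumed), 0 < c < 1
successRate = nSuccess/nMutations;     % fraction of improving mutations so far
if successRate > 1/5
    sigma = sigma/c;                   % too many successes: widen the search
elseif successRate < 1/5
    sigma = sigma*c;                   % too few successes: narrow the search
end
offspring = min(max(parent + sigma*randn(size(parent)), Umin), Umax); % respect (8)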
The RPM has been implemented, like in the previous section, using a normally distributed noise. For example, we have chosen Mean = 0.04 and StDev = 0.01. The results of a single run of this program are presented in Table 5. Remark 2: Although the results from Table 5 are a bit better than those from Table 2, one can assert that the two TCs (using SAA or EA2) lead practically toward similar results.
The discrete control input yielded by the TC is depicted in Figure 7. The evolution's pattern is similar to a large extent to the one presented in Figure 5.
We have also simulated the closed-loop system for different mean values, using lots of 30 executions. The results are presented in Table 6, where the lines correspond to the means of the different noises. For each mean value, a lot of 30 program executions was carried out. The data from the last four columns describe only the typical execution of the lot. In lines #2 and #3, the performance index determined by the TC with EA2 has values superior to J*. The explanation is the same as that given for lines #1 and #2 of Table 1. Line #1 corresponds to the particular simulation of the closed loop, when X_0 = X_00 and RPM ≡ PM (without noise; see TC_EA_h1_NONOISE.m). This simulation is the test that we mentioned in the previous section. It proves that our EA's implementation works very well, because the sequence of the process' states is just Y_0, Y_1, ..., Y_{N−1}, Y_N and the sequence of control input values is just U*_0, U*_1, ..., U*_{N−1}.
Remark 3: Figures 4 and 8 have the same look, a fact that is comprehensible because the two algorithms (SAA and EA2) correctly execute their local optimization task and do not leave their mark on the system evolution.
Discussion
This work proposes a new closed-loop control structure to solve an OCP with a terminal penalty. Section 4 gives an example of how to solve a benchmark OCP using a tracking structure. Emphasis is not placed on the problem but on how to implement the TC. The proposed method is a possible answer to the question: How can we pass from the paper solution computed by an MbA to a closed-loop solution including a real process?
The TC's minimization task involves the employment of an optimization procedure according to Equation (4). We have proposed to use an MA (SAA or EA2), and we have carried out simulations with both algorithms. Why have we presented the two versions?
The first reason is to emphasize that any choice for the optimization procedure is good, as long as it fulfils the minimization task. Actually, not only an MA may be used: any other minimization method (deterministic or not) may be employed. Remark 2 and Remark 3 from Section 4.2 underline this idea. If the optimization procedure works well, it will find the best control input U_k inside a sampling period, whatever the computational complexity would be. In our case, the two MAs execute their task correctly and do not leave their mark on the system evolution. One can eventually notice a difference at the computational complexity level. This difference may be accepted between certain limits.
The second reason is related to Remark 2 as well. The two TCs (using SAA or EA2) lead practically toward similar results. This happens because the RPMs are very much alike in the two simulations: a normal noise having identical parameters added to the PM. In other words, the two RPMs are similar. However, this similarity has its limits because the results from Table 5 are a bit better than those from Table 2. The explanation resides in the noise's stochastic character: the noise's stochastic realizations are not identical.
Therefore, one can state that similar RPMs will produce similar results; the TC has no influence (provided that it works well, that is, it fulfils its minimization task).
What happens when the RPMs are different? In our work, the difference among the real processes is modelled through additive noises. When the RPMs are not similar, the systems' evolutions are different despite the same reference trajectory. The evidence is the data from Table 6. Every line is devoted to a simulation lot with specific noise parameters. The increase of the noise mean implies the decrease of the performance index. The final state's relative tracking error increases as well. Hence, we can notice a degradation of quasi-optimal behavior in comparison with the reference trajectory.
In this paper, we have made a study through simulations of the proposed control structure. The designer of the real-time system has to decide whether this approach would give good results or not. How can the designer make this decision? The key factor is the PM, which must replicate, to a large extent, the real process. The construction of a PM must be done with the desired dynamic accuracy in mind. The PM may be enriched with perturbation models or unmodeled dynamics to emulate the real process as accurately as possible. However, this can be a difficult technical task.
With a realistic PM, the tracking structure simulation can help the real-time control system designer take the right decision. Anyhow, a certain degradation of the quasi-optimal behavior will be noticed. The closed-loop does or does not work well, depending on whether the performance index value is acceptable.
Future work on this topic may be focused on some theoretical directions regarding the distance between the real and the reference trajectory. The following property results from Remark 1 (Section 4.1):
(RPM ≡ PM and X_0 = Y_0) ⇒ X_k ≡ Y_k, k = 1, ..., N.
Due to the robustness of the structural properties (e.g., the controllability), it is likely to have the following property:
(RPM ≈ PM and X_0 ≈ Y_0) ⇒ X_k ≈ Y_k, k = 1, ..., N.
This property could be useful to the TC's algorithm in correlation with the accuracy of the PM. Another direction for future research is to find a theoretical method to minimize the performance index (6) for the local optimal control problem stated in Section 3.2.
Funding: This research received no external funding.
Conflicts of Interest:
The author declares no conflict of interest.
Appendix A The Elements of the Optimal Control Problem
Process model: the Park-Ramirez fed-batch bioreactor dynamics, a nonlinear system with five state variables and a scalar control input.
Initial conditions: the nominal initial state X_00 (A3).
Bound constraints: the control input is bounded, U(t) ∈ Ω.
Performance index: J, given by (A5).
The maximum performance index: J*.
The quasi-optimal state trajectory produced by EA1 is given by (A7). The perturbed initial state of the process used in the closed-loop structure is:
X_0 = X_00 + normrnd(0.08, 0.01, 1, 5). (A8)
Appendix B Evolutionary Algorithm EA1
The particular OCP described in Appendix A has been solved using the evolutionary algorithm EA1. Every chromosome of the solution's population encodes the control profile, that is, the series of values U_0, U_1, ..., U_{N−1}. So, the length of the solution vector is h = 15 (the control horizon). EA1 uses a direct encoding with real (non-binary) values and has the usual characteristics listed below:
• The population of each generation has µ individuals;
• The offspring population has λ individuals (λ < µ);
• NGen is the number of generations in which the population is evolving;
• The selection strategy is based on Stochastic Universal Sampling using the rank of individuals, which is scaled linearly using selection pressure (s);
• A usual one-point crossover operator;
• The mutation operator uses global variance adaptation [1,2] of the mutation step. The adaptation is made according to the "1/5 success rule";
• The replacement strategy: the offspring replace the λ worst parents of the generation.
EA1 is implemented by the script "EA_h15_A5.m" (see folder EA1 and ReadMe.txt). This program was executed 30 times in order to calculate some statistical parameters. The best solution's state evolution is the reference trajectory used in this paper to implement the closed-loop control structure.
Genome-Wide Single Nucleotide Polymorphism Analysis Elucidates the Evolution of Prunus takesimensis in Ulleung Island: The Genetic Consequences of Anagenetic Speciation
Of the two major speciation modes of endemic plants on oceanic islands, cladogenesis and anagenesis, the latter has recently been emphasized as an effective mechanism for increasing plant diversity in isolated, ecologically homogeneous insular settings. As the only flowering cherry occurring on Ulleung Island in the East Sea (also known as the Sea of Japan), Prunus takesimensis Nakai has been presumed to be derived through anagenetic speciation on the island. Based on morphological similarities, P. sargentii Rehder, distributed in adjacent continental areas and islands, has been suggested as the purported continental progenitor. However, the overall genetic complexity and resultant non-monophyly of closely related flowering cherries have hindered the determination of their phylogenetic relationships as well as the establishment of concrete continental progenitor and insular derivative relationships. Based on extensive sampling of wild flowering cherries, including P. takesimensis and P. sargentii from Ulleung Island and its adjacent areas, the current study revealed the origin and evolution of P. takesimensis using multiple molecular markers. The results of phylogenetic reconstruction and population genetic structure analyses based on single nucleotide polymorphisms detected by multiplexed inter-simple sequence repeat genotyping by sequencing (MIG-seq) and complementary cpDNA haplotypes provided evidence for (1) the monophyly of P. takesimensis; (2) clear genetic differentiation between P. takesimensis (insular derivative) and P. sargentii (continental progenitor); (3) uncertain geographic origin of P. takesimensis, but highly likely via single colonization from the source population of P. sargentii in the Korean Peninsula; (4) no significant reduction in genetic diversity in the anagenetically derived insular species, i.e., P. takesimensis, compared to its continental progenitor P. sargentii; (5) no strong population genetic structuring or geographical patterns in the insular derivative species; and (6) the utility of the MIG-seq method as an effective tool to elucidate the complex evolutionary history of plant groups.
INTRODUCTION
The genus Prunus comprises approximately 200 species of shrubs and trees (Rehder, 1940; Kalkman, 2004), including many economically important fruit trees (e.g., almonds, apricots, cherries, peaches, and plums) as well as ornamental, medicinal, and timber species (Ingram, 1948; Elias, 1980; Zomlefer, 1994; Potter, 2011). Flowering cherries are one of the most popular ornamentals and cultivated trees worldwide, and are classified under the subgenus Cerasus of the genus Prunus, which is native to temperate Asia, Europe, and North America (Li and Bartholomew, 2003). Many forms of ornamental flowering cherries with diverse origins and traits have been cultivated from a wide range of wild flowering cherry species growing in the forests of eastern Asia. Prunus takesimensis Nakai is a wild flowering cherry that is endemic to Ulleung Island, South Korea (see Figure 1A for its location). Given its sole representation on the island, P. takesimensis has purportedly originated via anagenetic speciation from a continental progenitor species (Sun and Stuessy, 1998; Stuessy et al., 2006). Ulleung Island is located in the East Sea/Sea of Japan between 130°47′ E-131°52′ E longitude and 37°33′ N-37°14′ N latitude, with the shortest distance of 137 km from the east coast of the Korean Peninsula. The island is of volcanic origin and approximately 1.8 million years (Myr) old; it has never been connected to the adjacent continental land mass (Kim, 1985). Despite its relatively small size (total area of approximately 73 km², with the highest peak being 984 m above sea level), Ulleung Island is rich in flora with approximately 500 native vascular plant species, of which approximately 37 are endemic (Lee and Yang, 1981). Most of these endemic species are single representatives of diverse vascular plant families that might have derived anagenetically from continental progenitors in adjacent source areas (Sun and Stuessy, 1998).
A different mode of evolution, anagenetic speciation, which describes lineages changing over time without splitting events on islands, has been reported, explaining ca. 25% of insular endemic angiosperm species diversity (Stuessy et al., 2006; Takayama et al., 2015). In anagenetic speciation, a founder population arriving on an oceanic island proliferates in a favorable uniform environment and spreads over the island, gradually accumulates genetic variation through mutation and recombination in isolated environments, and eventually diverges from continental source populations in genetic composition and morphological characteristics (Stuessy et al., 2006, 2014; Stuessy, 2007; Takayama et al., 2015). Unlike adaptive radiation, the investigation of anagenetic speciation has been limited to a few geographical regions, primarily to the Juan Fernández Islands in the Pacific Ocean (López-Sepúlveda et al., 2013; Takayama et al., 2015) and Ulleung Island in East Asia (Pfosser et al., 2005; Takayama et al., 2012, 2013). The expected genetic outcomes of speciation via cladogenesis or anagenesis are different. In cladogenesis, morphological or ecological divergence among species is often notable owing to dispersal into different environments and strong selection, but the overall genetic differentiation among populations is generally low within a complex of closely related species. In contrast, lack of geographic partitioning of genetic variation maintains high genetic diversity levels in anagenetic speciation, and no or weak geographical genetic structure is found among the island populations of anagenetically derived species (Stuessy et al., 2006, 2014; Takayama et al., 2015).
Ulleung Island is of particular interest to evolutionary biologists and phytogeographers. It is known for an exceptionally high level of anagenetic speciation; 88% of the total endemic plants are anagenetically derived. In comparison, rates of anagenetic speciation among endemic plants on other oceanic islands, such as Hawaii, the Bonin Islands, and St. Helena, are 7, 53, and 53%, respectively (Stuessy et al., 2006). Ulleung Island is young (approximately 1.8 Myr old) (Kim, 1985), of low elevation (<1000 m), and relatively ecologically uniform (Yim et al., 1981), and these factors are known to be correlated with a high frequency of anagenetic speciation. Only a few endemic species on Ulleung Island have been investigated to determine continental progenitor species and better understand the genetic consequences of anagenetic speciation. Recently, three endemic plants, Rubus takesimensis Nakai (Yang et al., 2019), Campanula takesimana Nakai (Cheong et al., 2020), and Phedimus takesimensis (Nakai) 't Hart (Seo et al., 2020), have been investigated based on maternally inherited plastid DNA sequences, whereas Dystaenia takesimana (Nakai) Kitag. (Pfosser et al., 2005), Acer okamotoanum Nakai (Pfosser et al., 2002; Takayama et al., 2012), and Acer takesimense Nakai (Pfosser et al., 2002; Takayama et al., 2013) have been previously investigated based on nuclear microsatellite and amplified fragment length polymorphism markers. Their respective genetic patterns in geographic source areas and genetic variations appear to be complex. It has been demonstrated that D. takesimana, A. takesimense, and A. okamotoanum show higher or slightly lower levels of genetic variation than their continental progenitor species (Pfosser et al., 2002, 2005; Takayama et al., 2012, 2013). However, the populations of R. takesimensis on Ulleung Island show significantly lower genetic diversity than its continental progenitor, R. crataegifolius, without geographical population structuring on the island (Yang et al., 2019). Similarly, C. takesimana is substantially less genetically diverse than its continental progenitor, C. punctata, sampled from the Korean Peninsula, but it shows significant population genetic structuring (Cheong et al., 2020). Phedimus takesimensis also shows apparent genetic structuring within Ulleung Island, presumably owing to its limited seed dispersal mechanism, without apparent reduction in genetic diversity (Seo et al., 2020). To further assess the evolutionary importance and emerging patterns of anagenetic speciation on oceanic islands, it is necessary to explore more diverse anagenetically derived endemic species on Ulleung Island using variable molecular markers.
Of the endemic species purportedly derived via anagenesis on Ulleung Island, Prunus takesimensis Nakai is an exceptionally challenging taxon for elucidating origin and evolution. Its study suffers from difficulties in species delimitation and the unresolved phylogenetic relationships of flowering cherries owing to morphological continuity, lack of diagnostic morphological features, limited informative genetic polymorphisms from appropriate molecular markers, and frequent hybridization and introgression among congeneric species (Bortiri et al., 2001, 2002; Potter, 2011; Chin et al., 2014; Cho et al., 2014; Cho and Kim, 2019). P. takesimensis is a deciduous tree species growing up to 20 m high with coetaneous flowering in April and fruiting in late May through June. It is commonly found in wild forests and also cultivated as a popular ornamental tree in residential landscapes, gardens, parks, and streets on Ulleung Island. P. takesimensis was described by Nakai (1918) based on several diagnostic features, such as umbellate inflorescence with 2-5 flowers, absence of hairs on leaves, pedicels, and petioles, and coetaneous phenology. However, most of these features are shared with the congeneric species Prunus sargentii Rehder, which is currently considered the most likely candidate progenitor of P. takesimensis. Prunus sargentii is usually found in the high mountains of the eastern Korean Peninsula along the Baekdudaegan Mountain Range, Jeju Island, Hokkaido and Honshu in northern Japan, and the Russian Far East (Ohwi, 1984; Chang et al., 2004; Kim, 2009). Using multivariate morphometric analyses of morphological characteristics, Chang et al. (2004) identified P. takesimensis as a cohesive group distinct from P. sargentii based on smaller flower size (diameter: 26-32 mm vs. 34-48 mm, respectively) and higher flower numbers per umbellate inflorescence (3-5 flowers vs. 2-3 flowers, respectively) (Figures 1B-F). Additionally, P. takesimensis has entire and erect (or spreading) calyx lobes and lacks hair on the bud scale, inflorescence, leaf, petiole, and pedicel; however, rare individuals with hair on the pedicel can be found (Cho, M.-S., personal observation; Figure 1D).
Despite its morphological distinction from congeneric flowering cherries, P. takesimensis has never been resolved as a monophyletic lineage in any prior molecular phylogenetic analyses. In addition, no comprehensive study has been conducted to determine its progenitor-derivative relationship or population genetic diversity and structure. For example, with the inclusion of limited samples of P. takesimensis, phylogenetic analyses of Prunus/Cerasus have been conducted to determine primary interspecific relationships (Bortiri et al., 2001;Jung and Oh, 2005;Cho and Kim, 2019). Simple sequence repeat (SSR) genotyping of the collections of ornamental Prunus germplasm at the United States National Arboretum (USNA) suggested genetic closeness between P. takesimensis and P. sargentii, although both species were not retrieved as monophyletic (Ma et al., 2009). Cho et al. (2014) and Cho and Kim (2019), based on nuclear ribosomal DNA (nrDNA) internal (ITS) and external transcribed spacer (ETS) and chloroplast non-coding regions, attempted to resolve interspecific relationships within Prunus in South Korea. However, the resulting phylogenetic trees were poorly resolved since most nodes in the trees were not well-supported. Several phenomena, such as recent speciation and cross-compatibility among species, historical and contemporary gene flow, and incomplete lineage sorting of ancestral polymorphisms, further complicate our understanding of the evolution of flowering cherries (Ohta et al., 2007).
Given the abovementioned methodological challenges for the study of the evolutionary processes of flowering cherries in a phylogenetic framework, we performed a genome-wide single nucleotide polymorphism (SNP) analysis using multiplexed inter-SSR (ISSR) genotyping by sequencing (MIG-seq) in addition to Sanger-derived sequences for nrDNA (ITS and ETS) and seven concatenated cpDNA non-coding regions. MIG-seq is a polymerase chain reaction (PCR)-based next-generation sequencing (NGS) method, which has been recently developed as an effective method for the discovery of genome-wide SNPs from low-quantity or low-quality DNA (Suyama and Matsuki, 2015). It has been used successfully and effectively to clarify persistent taxonomic difficulties or issues for several plant groups with complex evolutionary history based on genome-wide SNPs (Binh et al., 2018; Gutiérrez-Ortega et al., 2018; Yoichi et al., 2018; Park et al., 2019, 2020; Takata et al., 2019; Strijk et al., 2020; Nakamura et al., 2021; Onosato et al., 2021). In addition to determining the interspecific relationships based on the genome-wide SNPs, we compared the genetic diversity and population genetic structure between the insular derivative, P. takesimensis, and the purported continental ancestor, P. sargentii, using chloroplast DNA (cpDNA) as a maternally inherited marker. It has been hypothesized that P. takesimensis on Ulleung Island originated from its continental progenitor species, P. sargentii, according to their common morphological characteristics (Chang et al., 2004). However, this hypothesis has never been rigorously tested based on a broad and suitable sampling strategy using highly variable molecular markers. Therefore, in the present study, we extensively sampled P. takesimensis from Ulleung Island and all closely related flowering cherry species from adjacent regions [14 populations (189 accessions) of P. takesimensis and 161 accessions of other species in the subgenus Cerasus]. In particular, the purported continental progenitor species, P. sargentii, was sampled from likely source areas, such as the Korean Peninsula, including Jeju Island, Japan, and the Russian Far East, to determine the geographical source area. The primary objectives of this study were to (1) test the monophyly of P. takesimensis on Ulleung Island and resolve its sister group relationship for identifying its continental progenitor species and source populations in a phylogenetic framework employing genome-wide MIG-seq analysis as well as Sanger sequencing for the nrDNA regions ITS and ETS and seven concatenated cpDNA regions. Secondly, we performed population genetic analyses to (2) evaluate the patterns of genetic variation within the insular populations of P. takesimensis and compare them with those of its continental progenitor species using MIG-seq and cpDNA data. These objectives allow us to better understand the origin of Ulleung Island endemic plants as well as the genetic consequences of anagenetic speciation in the East Sea.
Plant Material and DNA Isolation
This study followed the most widely accepted Rehder's classification (1940), where the genus Prunus is broadly interpreted and divided into five subgenera. Leaves collected mostly from natural populations were dried with silica gel and used as DNA sources. DNA was extracted using the DNeasy Plant Mini Kit (Qiagen, Carlsbad, CA, United States). We included extensive samples in analyses, a total of 350 accessions belonging to 15 Prunus species in the subgenus Cerasus (Supplementary Table 1). First, 189 accessions from 14 populations of P. takesimensis were sampled from Ulleung Island in addition to 161 accessions of other flowering cherries sampled from Jeju Island, the Korean Peninsula, the Russian Far East, and Japan. Twelve taxa belonging to the subgenus Cerasus, section Pseudocerasus, were sampled; P. spachiana f. ascendens (four accessions: three from South Korea and one from Japan), P. sargentii Rehder (65 accessions: 36 from South Korea, seven from the Russian Far East, and 22 from Japan), P. sargentii var. verecunda (Koidz.) Chin S. Chang (four accessions from South Korea), P. serrulata var. spontanea (Max) Wilson (21 accessions: 16 from South Korea and five from Japan), P. serrulata var. quelpaertensis (Nakai) Uyeki (10 accessions from Jeju Island, South Korea), P. serrulata var. pubescens (Makino) Nakai (17 accessions: 16 from South Korea and one from Japan), P. yedoensis var. angustipetala Kim and Kim (one accession from Jeju Island), P. hallasanensis Kim & Kim (two accessions from Jeju Island), P. longistylus Kim and Kim (one accession from Jeju Island), P. speciosa (Koidz.) Ingram (28 accessions: three cultivated accessions from Jeju Island, South Korea, and 25 wild accessions from Japan), P. incisa Thunb. (one accession from Japan), and P. apetala (Siebold and Zucc.) Franch and Sav. (three accessions from Japan). Two taxa belonging to the subgenus Cerasus, section Phyllomahaleb and section Eucerasus were also included: P. maximowiczii Ruprecht (three accessions from South Korea) and P. avium (L.) L. (one accession from Japan).
Of a total of 350 accessions, 123 accessions representing 13 Cerasus species were used for phylogenetic analyses based on nrDNA ITS and ETS as well as seven concatenated cpDNA non-coding regions. In total, 262 accessions from 13 species, including P. takesimensis (162 accessions), P. sargentii (46 accessions), and other species (54 accessions), were used to determine the phylogenetic position and genetic structure of P. takesimensis by MIG-seq SNP analysis. Ninety-nine accessions of P. takesimensis and P. sargentii sampled at the population level were used for the analysis of the cpDNA haplotype network (Supplementary Table 1). Voucher specimens were deposited at the Ha Eun Herbarium, Sungkyunkwan University (SKK), South Korea.
nrDNA and cpDNA Sequences
Nuclear ITS and ETS DNA regions and seven highly variable non-coding regions of chloroplast DNA (petA-psbJ, petD-rpoA, ndhF-rpl32, trnQ-rps16, trnV-ndhC, rpl16 intron, and trnL-rpl32) (Shaw et al., 2007) were amplified for phylogenetic analyses of 123 accessions of 14 Cerasus species (Supplementary Table 1). These datasets were part of our earlier studies (Cho et al., 2014; Cho and Kim, 2019), which were limited to the ranges of the species belonging to the P. serrulata/P. sargentii complex and other closely related species. Four accessions of P. spachiana f. ascendens collected from Japan and South Korea were included as an outgroup. To gain additional insights into the phylogenetic relationships between continental progenitor and insular derivative species pairs, we produced a population-level cpDNA data matrix (including five to eight accessions per population) for P. takesimensis and P. sargentii using five non-coding regions (petA-psbJ, petD-rpoA, trnQ-rps16, rpl16 intron, and trnL-rpl32; Shaw et al., 2007). The dataset included 13 populations of P. takesimensis from Ulleung Island and five populations of P. sargentii collected from its adjacent areas, comprising a total of 99 accessions (see Table 1 for the sampling localities and numbers of each population). All primer pairs used for amplification were as specified in our previous studies (Cho et al., 2014; Cho and Kim, 2019). The thermal cycler program was run as follows: one cycle of 95 °C for 2 min (initial denaturation), 35 cycles of 20 s at 95 °C (denaturation), 40 s at 52 °C (annealing), 1 min at 72 °C (extension), and finally 5 min at 72 °C (final extension). All PCR products were purified using the Inclone Gel & PCR Purification Kit (InClone Biotech Co., Seoul, South Korea). Direct sequencing of the purified PCR products was carried out using the BigDye Terminator v3.1 Cycle Sequencing Kit (Applied Biosystems, Foster City, CA, United States) at the Geno Tech Corp. (Daejeon, South Korea). Contig assembly was made using Mafft ver. 7.017 (Katoh et al., 2002) with default parameters (Auto Algorithm, 200PAM/k = 2 scoring matrix, 1.53 gap open penalty, and 0.123 offset value), and editing was performed manually using Geneious ver. 8.1.7 (Kearse et al., 2012).
Genotyping of MIG-seq SNPs
The MIG-seq library was constructed by two-step amplification, as detailed by Suyama and Matsuki (2015), using 288 samples representing 14 Cerasus species, including P. takesimensis and P. sargentii. The first PCR was performed from genomic DNA using designated primers to target ISSRs, followed by a second PCR to add the complementary sequences for the binding sites of the Illumina sequencing flow cell and indices (barcodes), as specified by Suyama and Matsuki (2015), to the first PCR amplicons. After purification, the amplified products were pooled and sequenced on an Illumina MiSeq Sequencer (Illumina, San Diego, CA, United States) for the selected sizes of 350-800 base pairs (bp) using the MiSeq Reagent Kit v3 (150 cycles PE, Ref. 15043893). Low-quality ends of reads, SSR primer regions, anchors, and index-tags were removed from the obtained NGS data using the FASTX Toolkit (http://hannonlab.cshl.edu/fastx_toolkit/) and TagDust 1.12 (Lassmann et al., 2009). A total of 26 samples with extensive missing data were removed after checking with vcftools (Danecek et al., 2011) to maintain optimal dataset quality. The processed reads were analyzed to discover SNPs using STACKS v1.48 (Catchen et al., 2013), generating output data matrices in the multiple formats of STRUCTURE, Phylip, and variant call format (VCF) for phylogenetic and population genetic analyses. In STACKS, we used the program 'ustacks' to assemble and pile stacks de novo using the default value of the minimum depth of coverage required (-m 3) and the maximum distance allowed between stacks (-M 2). Deleveraging (d) and removal (r) algorithms were enabled. The program 'cstacks' was used to create a catalog, and 'ustacks' products were searched against the catalog in the program 'sstacks.' The SNP data matrix, including 262 samples, was generated by the program 'populations' using the parameters optimized for the greatest robustness in the phylogenetic analysis after testing variable parameters (-r = 0.5, 0.75; -p = 2, 8, 16, 24, 32, and 40), as shown in Supplementary Table 2. The applied parameters were: a minimum percentage of individuals across populations of at least 75% (-r 0.75), a minimum of eight populations present to process a locus (-p 8), a minimum minor allele frequency at a locus of 0.05 (--min_maf 0.05), and a maximum observed heterozygosity at a locus of 0.95 (--max_obs_het 0.95).
Phylogenetic Analysis, Network Construction, and Population Structure
To determine the phylogenetic position of P. takesimensis among other Cerasus species, we first conducted maximum likelihood (ML) analyses of the nrDNA ITS/ETS and the concatenated seven non-coding regions of the cpDNA datasets for 123 samples using W-IQ-TREE (Trifinopoulos et al., 2016), an intuitive and user-friendly web interface and server for IQ-TREE (Nguyen et al., 2015). An IQ-TREE was also constructed from genomewide 5899 SNPs in the MIG-seq dataset for 262 Cerasus samples using P. maximowiczii Ruprecht from the section Phyllomahaleb Koehne as an outgroup taxon. Ultrafast bootstrap support (BS) was calculated from 1000 bootstrap replicates for the robustness of the clades (Hoang et al., 2018). Best-fit substitution models were determined according to the Bayesian information criterion using ModelFinder (Kalyaanamoorthy et al., 2017) implemented in IQ-TREE: HKY + F + I for nrDNA ITS/ETS, F81 + F + I + G4 for cpDNA seven non-coding regions, and TVM + F + ASC + G4 for MIG-seq dataset. Additionally, a SVDQuartets bootstrap consensus tree (Kubatko and Degnan, 2007) was generated based on MIG-seq dataset partitioned for species and populations (for P. takesimensis and P. sargentii) from default setting of 100,000 random quartets with QFM quartet tree search algorithm and bootstrapping of 100 replicates in PAUP 4.0a169 (Swofford, 2020).
For the population-level analysis of insular endemic P. takesimensis and the purported continental progenitor P. sargentii, we constructed a cpDNA haplotype network using TCS version 1.21 (Clement et al., 2000) from five concatenated cpDNA datasets for 99 accessions of both species (Table 1). Gaps were treated as missing data, and the probability of parsimony was set to 95% of the connection limit in accordance with Hart and Sunday (2007). Using the same dataset, genetic variation between both species, and among and within populations was N/I, number of individuals in population. *Same locality as NAR, but individuals with hairs on pedicel. Population excluded from cpDNA network analysis. **Samples were obtained from multiple spots within specified localities, thus exact GPS data are not provided.
evaluated by analysis of molecular variance (AMOVA) using ARLEQUIN ver. 3.5 (Excoffier et al., 2005). The genetic diversity and structure were also analyzed using the MIG-seq dataset based on genome-wide SNPs. Population genetic structure was estimated by the Bayesian model-based genetic clustering method in STRUCTURE ver. 2.3 (Pritchard et al., 2000) based on SNP matrices generated in STACKS using "-write_single_snp" command for avoiding the potential bias from linked SNPs in the same loci. The STRUCTURE computation was run for both datasets; i.e., the population-based dataset of P. takesimensis (14 populations) and P. sargentii (5 populations), including 200 samples (see Table 1 for population information excluding minor samples from various localities), and the dataset including a total of 262 samples of all Cerasus species used in MIG-seq analysis (Supplementary Table 1). The optimal K value for each analysis was estimated by the maximum value of K following the Evanno method (Evanno et al., 2005), implemented in STRUCTURE HARVESTER (Earl and vonHolt, 2012). Genetic diversity in the populations of both species was measured using the program 'populations' in STACKS v1.48 (Catchen et al., 2013). To examine genetic similarities and relationships between individuals, we conducted a principal component analysis (PCA) using the glPCA command of R statistical software, R 4.0.2, in RStudio based on the STACKS output file in VCF format from the MIG-seq analysis of 262 Cerasus samples.
Phylogenetic Tree Derived From SNPs of MIG-seq Analysis
Phylogenetic analyses of 123 accessions of Cerasus species did not resolve the phylogenetic relationship between P. takesimensis and P. sargentii because of the lack of resolution in the ML tree of nrDNA ITS and ETS (956 aligned sites; see data matrices available in Dryad under doi: 10.5061/dryad.x3ffbg7jn) (Supplementary Figure 1) and the non-monophyly of both species in the seven concatenated cpDNA ML tree (5414 aligned sites) (Supplementary Figure 2). It was apparent that all but one species, P. maximowiczii (section Phyllomahaleb), were poorly resolved with low nodal support. However, the MIG-seq phylogeny based on genome-wide SNPs (262 accessions and 5899 total SNPs) provided much greater resolution (Figures 2A,B). Specifically, two endemic flowering cherry species with relatively narrow distributions, P. takesimensis on Ulleung Island, South Korea, and P. speciosa on the Izu Islands and Izu Peninsula, Japan, were primarily monophyletic, although several other Cerasus species remained unresolved. For the first time, the species boundary of P. takesimensis was delimited at the molecular level, and more importantly, its origin could be assessed through the relationship between continental progenitor and insular derivative species suggested by the MIG-seq phylogeny. P. takesimensis was nested within one lineage (CLADE A; 97% BS value) of its purported continental progenitor species P. sargentii, mainly sampled from populations of the Korean Peninsula, Russia, and Miyagi, Japan. P. takesimensis was found to be monophyletic; a total of 162 accessions formed a highly supported monophyletic group (100% BS value) except for one accession (SIB969-005), which was nested in the group of P. sargentii accessions collected from Mt. Odaesan in the central part of the Korean Peninsula. The clade comprising all but one accession of P. takesimensis shared the most recent common ancestor with P. sargentii JRS355-54 collected from Mt. Jirisan in the southern part of the Korean Peninsula (Figure 2A). Within P. takesimensis on Ulleung Island, no geographical patterns were observed, including the hairy population (NRH) (Figure 2B). Contrary to P. takesimensis, the purported continental progenitor P. sargentii was not monophyletic; accessions collected from Jeju Island and Okayama, Japan were closely related to P. serrulata. However, the topology of the MIG-seq phylogenetic tree revealed the geographic structure of P. sargentii (Figure 2A). Within CLADE A, the accessions of P. sargentii collected from the geographically connected continental areas of the Korean Peninsula and Russia clustered together, except for one accession (P. sargentii RSS71206, nested in the Miyagi population). In contrast, the accessions of P. sargentii collected from Okayama, Japan were distantly related to P. takesimensis and were embedded within the lineage of P. serrulata var. spontanea, comprising the accessions collected from the southern islands of South Korea (i.e., Geoje Island, Geomun Island, Bogil Island, and Jeju Island). P. sargentii accessions collected from Jeju Island were nested in the group, which included other Cerasus species from Jeju Island and the Korean Peninsula and was in a sister relationship to CLADE A (Figure 2A). The SVDQuartets bootstrap consensus tree strongly reconfirmed the monophyly of P. takesimensis (100% BS). However, the geographic origin of P. takesimensis could not be inferred from the SVDQuartets analysis, as the other clades resolved in the ML tree were highly unresolved, providing no clue to the origin of P. takesimensis. Clade A of the ML tree, including P. takesimensis and the populations of P. sargentii from the Korean Peninsula, Russia, and Miyagi, Japan, was not resolved in the SVDQuartets bootstrap consensus tree. Prunus takesimensis shared the most recent common ancestor with the lineage comprising P. sargentii and P. avium collected from Miyagi, Japan, but with a very weak support value (<50% BS) (Figure 3).
FIGURE 2 | (A) Maximum likelihood tree based on MIG-seq SNPs. Color codes for species and geographical regions: red for P. takesimensis from Ulleung Island, blue for P. sargentii from Korean Peninsula, orange for P. sargentii from Russia, light green for P. sargentii from Miyagi, Japan, purple for P. sargentii from Jeju Island, South Korea, dark green for P. sargentii from Okayama, Japan, and pink for P. speciosa. (B) The part of the ML tree for the collapsed accessions of P. takesimensis in the red triangle.
FIGURE 3 | SVDQuartets bootstrap consensus tree. Populations are labeled with the color codes specified in Table 1: red for P. takesimensis from Ulleung Island, blue for P. sargentii from Korean Peninsula, orange for P. sargentii from Russia, light green for P. sargentii from Miyagi, Japan, purple for P. sargentii from Jeju Island, South Korea, dark green for P. sargentii from Okayama, Japan, and pink for P. speciosa. Numbers above major branches indicate bootstrap support (BS) percentages.
cpDNA Haplotype Network and Relationship Between P. takesimensis and P. sargentii

The aligned sequence length of the five concatenated chloroplast non-coding regions used for the construction of the TCS haplotype network was 3,809 characters (data matrices available in Dryad under doi: 10.5061/dryad.x3ffbg7jn): petA-psbJ (1-931; 931 sites), petD-rpoA (932-1,222; 291 sites), rpl16 intron (1,223-2,076; 854 sites), trnL-rpl32 (2,077-3,217; 1,141 sites), and trnQ-rps16 (3,218-3,809; 592 sites). The TCS haplotype network contained 24 haplotypes in total: 11 haplotypes for P. takesimensis (69 accessions from 13 populations) and 14 haplotypes for P. sargentii (30 accessions from five populations). Of the 24 haplotypes, 10 were exclusive to P. takesimensis, 13 were exclusive to P. sargentii, and only one (H7) was shared by both species, occurring in nine accessions of P. sargentii and one accession of P. takesimensis (SIB2115-6), as shown in Figure 4. The number of haplotypes and polymorphic sites within each population is given in Table 2. Given the number of populations and the haplotypes within each population, haplotype diversity was markedly higher in the populations of P. sargentii than in those of P. takesimensis, despite the considerably broader sampling of P. takesimensis. The number of haplotypes within populations of P. sargentii ranged from two (ODS) to five (JJ and MYG), with higher gene diversity and nucleotide diversity (mean 0.7428 and 0.0081, respectively) than in the populations of P. takesimensis.

FIGURE 4 | Map of the haplotypes found in the populations of the insular derivative, P. takesimensis, and its continental progenitor species, P. sargentii, based on five non-coding regions of cpDNA. Populations are labeled with the color codes specified in Table 1. Sampling locations are denoted in different colors: red for P. takesimensis and blue for P. sargentii. Different colored portions in each pie chart represent the haplotype frequencies.
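The gene (haplotype) diversity and nucleotide diversity statistics reported here follow standard population-genetic formulas (Nei's h and π). As a minimal illustrative sketch, not the dedicated software pipeline used in this study, the following Python computes both statistics from a set of aligned haplotype sequences; the example sequences are hypothetical, and haplotype identity is taken here simply as sequence identity:

```python
from itertools import combinations
from collections import Counter

def gene_diversity(haplotype_ids):
    """Nei's unbiased gene (haplotype) diversity:
    h = n/(n-1) * (1 - sum(p_i**2))."""
    n = len(haplotype_ids)
    freqs = [c / n for c in Counter(haplotype_ids).values()]
    return n / (n - 1) * (1 - sum(p * p for p in freqs))

def nucleotide_diversity(seqs):
    """pi = average pairwise proportion of differing sites."""
    pairs = list(combinations(seqs, 2))
    def prop_diff(a, b):
        return sum(x != y for x, y in zip(a, b)) / len(a)
    return sum(prop_diff(a, b) for a, b in pairs) / len(pairs)

# Hypothetical aligned cpDNA haplotypes sampled within one population.
sample = ["ACGTACGT", "ACGTACGT", "ACGAACGT", "ACGAACGC", "ACGTACGT"]
print(gene_diversity(sample))        # diversity of haplotype classes
print(nucleotide_diversity(sample))  # per-site pairwise diversity
```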
The genealogical relationships among cpDNA haplotypes are shown in Figure 5. In general, the haplotypes of P. takesimensis and P. sargentii were separated from each other in the network; however, the distinction between the two species was incomplete for a few haplotypes. Haplotype H7, found primarily in P. sargentii (nine accessions) and in one accession of P. takesimensis, formed a ring-like network structure with the closely related haplotypes H4, H8, and H9. Haplotype H11 was found only in a single accession of P. takesimensis (SIB2115-1) and was derived by one mutational step from a missing haplotype within another ring structure composed of the P. sargentii haplotypes H13, H10, and H7. Three haplotypes of P. takesimensis, H22 (SIB population), H23 (JRG population), and H24 (DDJ population), were derived from H21, found exclusively in P. sargentii (ODS). The two dominant haplotypes in the populations of P. takesimensis were H4 (shared by 32 accessions) and H2 (shared by 27 accessions), while the other haplotypes were represented by only a single accession (seven haplotypes: H1, H5, H8, H11, H22, H23, and H24) or two accessions (one haplotype: H3). In contrast, no dominant haplotypes were found in the P. sargentii populations. With regard to the partitioning of genetic variation across both species, the majority of the variation (59%) existed within populations, while approximately 30% and 10% of the variation existed between species and among populations within species, respectively (Table 3). Within the continental progenitor species, P. sargentii, the majority of the variation (86.4%) occurred within populations, while the remaining variation (13.6%) existed among populations. A similar level of within-population genetic variation (83.7%) was found for the insular derivative species, P. takesimensis.

FIGURE 5 | TCS haplotype network constructed from five non-coding regions of cpDNA, showing relationships among the 24 haplotypes found in P. takesimensis on Ulleung Island (circles) and its continental progenitor, P. sargentii, from adjacent areas (squares). Small dots represent either missing or inferred haplotypes, and the size of each circle and square is proportional to the population size.
Genetic Diversity and Population Structure From MIG-seq Analysis
Based on SNP matrices generated by STACKS, genetic structure was estimated first for all 262 accessions of Cerasus species (8,527 loci; data matrices available in Dryad under doi: 10.5061/dryad.x3ffbg7jn) using STRUCTURE 2.3.4 (Pritchard et al., 2000; Figure 6A). A further comparison between the insular endemic species and its purported continental progenitor was made for 200 accessions, comprising 14 populations of P. takesimensis and five populations of P. sargentii (7,038 loci; data matrices available in Dryad under doi: 10.5061/dryad.x3ffbg7jn) (Figure 6B). The best K value was identified as three clusters (K = 3) for both datasets, based on the rate of change in the log probability of the data between successive K values (Evanno et al., 2005) computed in STRUCTURE HARVESTER (Earl and vonHoldt, 2012). The summary of partitioned K = 3 bar plots for all Cerasus species showed that the two endemic flowering cherry species, P. takesimensis on Ulleung Island, South Korea, and P. speciosa on the Izu Islands and Izu Peninsula, Japan, were distinct from the other Cerasus species in the composition of genetic clusters (Figure 6A). Specifically, P. takesimensis was differentiated from the purported continental progenitor species, P. sargentii, and was represented by its own distinct genetic cluster (Figure 6B).
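The Evanno et al. (2005) ΔK criterion used above is a simple second-order statistic over STRUCTURE's estimated log probabilities of the data, which STRUCTURE HARVESTER automates. A minimal sketch of the computation follows, assuming hypothetical replicate log-probability values (not values from this study):

```python
import statistics

def delta_k(log_probs_by_k):
    """Evanno et al. (2005) Delta-K.

    log_probs_by_k: dict mapping K -> list of ln Pr(X|K) over replicate runs.
    Returns dict mapping interior K -> Delta-K, where
        Delta-K = |L(K+1) - 2*L(K) + L(K-1)| / sd(L(K))
    and L(K) is the mean over replicates.
    """
    ks = sorted(log_probs_by_k)
    mean_l = {k: statistics.mean(v) for k, v in log_probs_by_k.items()}
    sd_l = {k: statistics.stdev(v) for k, v in log_probs_by_k.items()}
    return {
        k: abs(mean_l[k + 1] - 2 * mean_l[k] + mean_l[k - 1]) / sd_l[k]
        for k in ks[1:-1]
    }

# Hypothetical replicate log probabilities for K = 1..5.
runs = {1: [-9800, -9810], 2: [-9300, -9290], 3: [-9000, -9010],
        4: [-8990, -8985], 5: [-8988, -8992]}
dk = delta_k(runs)
print(max(dk, key=dk.get), dk)  # best K is 3 for these hypothetical values
```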
Geographic genetic structure among the populations of P. takesimensis was very weak in the K = 3 and K = 4 analyses, with somewhat stronger geographic patterning between the southwestern and northeastern parts of the island at K = 5. The southwestern populations (HGM, HPR, THR, JRG, NAM, SAD, SIB, and DDJ) and northeastern populations (NSJ, MAL, CBU, CHU, NAR, and NRH) differed slightly in the proportions of the inferred clusters. One outlier accession (SIB969-005) was similar to P. sargentii from the ODS, RSS, and MYG populations. PCA based on the 262-accession MIG-seq dataset in VCF format (data matrices available in Dryad under doi: 10.5061/dryad.x3ffbg7jn) further confirmed the genetic relationships among accessions of Cerasus species. P. takesimensis and P. speciosa were distinct from the other species in the PCA scatter plot of PC1 versus PC2 (Supplementary Figure 3). The other species overlapped on both the PC1 and PC2 axes and were not separated from each other. Genetic diversity was estimated from the 200-accession SNP dataset of P. takesimensis (14 populations, 159 accessions) and P. sargentii (five populations, 41 accessions) using STACKS. The expected and observed heterozygosity were higher in the populations of P. takesimensis (mean 0.1543 and 0.1905, respectively) than in those of P. sargentii (mean 0.1241 and 0.1658, respectively) (Table 4).
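PCA on a genome-wide SNP matrix of this kind is conceptually simple: genotypes are coded numerically, centered per locus, and decomposed. As a hedged sketch (the study used its own VCF-based pipeline; the tiny matrix below is hypothetical), plain NumPy suffices:

```python
import numpy as np

# Hypothetical genotype matrix: rows = accessions, columns = SNP loci,
# coded as alternate-allele counts (0, 1, 2).
geno = np.array([
    [0, 1, 2, 0, 1],
    [0, 1, 2, 0, 2],
    [2, 0, 0, 1, 0],
    [2, 0, 0, 2, 0],
])

# Center each locus (mean imputation of missing data would go here),
# then take principal components from the SVD of the centered matrix.
centered = geno - geno.mean(axis=0)
u, s, vt = np.linalg.svd(centered, full_matrices=False)
pcs = u * s                      # accession coordinates on the PC axes
explained = s**2 / np.sum(s**2)  # proportion of variance per PC

print(pcs[:, :2])    # PC1/PC2 scatter coordinates, as in a plot like Supplementary Figure 3
print(explained[:2])
```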
Taxonomic Distinction and Origin of Ulleung Island Flowering Cherry
In this study, our first aim was to examine the taxonomic identity of P. takesimensis as distinct from its presumptive continental progenitor, P. sargentii, as well as from other flowering cherry species. Morphologically, P. takesimensis has been recognized as a species distinct from other flowering cherries occurring on the adjacent continent and other islands (Chang et al., 2004). However, its monophyly and taxonomic distinction had never been confirmed in previous phylogenetic analyses. Cerasus species of flowering cherries are often delimited based on several diagnostic features (e.g., inflorescence type, degree of pubescence on flowers and leaves, and phenology), but such morphological delineation has not been supported by genetic distinction in molecular phylogenetic analyses. For example, the P. serrulata/P. sargentii complex (Cho et al., 2014; Cho and Kim, 2019), which provides a clue regarding the origin of P. takesimensis, has been shown to be non-monophyletic because of insufficient sequence variation in Sanger sequencing-based approaches, lineage sorting, introgression, and hybridization. Therefore, the phylogenetic relationships among Cerasus species could not be inferred in previous studies, which hindered identification of the origin and evolution of P. takesimensis, as well as its relationship with the purported continental progenitor, P. sargentii.
To the best of our knowledge, this is the first study to provide convincing evidence for the monophyly of P. takesimensis and to identify it as a discrete taxonomic entity on Ulleung Island, South Korea. Its taxonomic distinction and phylogenetic position were established using multiple lines of evidence, including cpDNA haplotype network analysis and robust phylogenetic and population genetic analyses using genome-wide MIG-seq SNPs. Based on the haplotype network, P. takesimensis (the insular derivative) was differentiated from P. sargentii (the purported continental progenitor), as the former possessed species-diagnostic haplotypes almost exclusively (with one exceptional haplotype, H7) (Figures 4, 5). However, the haplotype relationships did not provide additional phylogenetic inferences about species relationships. The MIG-seq SNP data provided well-resolved phylogenetic relationships and strongly supported the monophyly of P. takesimensis. In both the ML tree and the SVDQuartets bootstrap consensus tree, P. takesimensis, with the exception of one outlier (in the ML tree), formed a 100% BS-supported monophyletic group, suggesting its genetic cohesiveness on Ulleung Island (Figures 2A, 3). The genetic differentiation of P. takesimensis from its purported continental progenitor, P. sargentii, as well as from other flowering cherries, was further identified unambiguously by the genetic structure and PCA analyses; P. takesimensis showed unique genomic profiles that are strongly differentiated from other Cerasus species (Figure 6 and Supplementary Figure 3).
In addition to establishing the monophyly of P. takesimensis, the MIG-seq SNP phylogenetic reconstruction revealed the continental progenitor-insular derivative relationship between P. sargentii and P. takesimensis, which provided clues to the origin of P. takesimensis. The MIG-seq method successfully resolved the species delimitation and phylogenetic relationships of genetic lineages confined to relatively narrow (or isolated) distribution areas, such as P. speciosa (endemic to the Izu Islands and Izu Peninsula, Japan) and P. takesimensis on Ulleung Island. Furthermore, three geographic groups of P. sargentii are recognized in the MIG-seq ML phylogeny, that is, OKA (Okayama, Japan), JJ (Jeju Island, South Korea), and the combined clade (CLADE A) of MYG (Miyagi, Japan), ODS/SBK/HBK/JRS (the Korean Peninsula), and RSS (Russia), which facilitates tracing the geographic origin of the continental progenitor of P. takesimensis. The strongest phylogenetic evidence for detecting progenitor-derivative species pairs comes from insular endemic populations of one species (the derivative) being nested within source populations of another species (the progenitor) (Crawford, 2010). In the ML tree constructed by IQ-TREE, all accessions of P. takesimensis were nested within the populations of P. sargentii (CLADE A), reconfirming the continental progenitor-insular derivative relationship between P. sargentii and P. takesimensis that was initially presumed based on morphological similarities. All but one accession of P. takesimensis (161 accessions) formed a monophyletic group (100% BS), sharing the most recent common ancestor with one accession (JRS355-54) of P. sargentii collected from Mt. Jirisan in the southern part of the Korean Peninsula (86% BS) (Figure 2A). This suggests that P. takesimensis could originate from a single colonization event, most likely from a continental progenitor population of P. sargentii on the Korean Peninsula. Given the abundance and wide range of P. takesimensis on the island, and the geographical proximity of Ulleung Island to possible source areas, a single origin was not necessarily expected. However, the single origin of P. takesimensis is concordant with other anagenetically derived endemic species on Ulleung Island, such as Acer takesimense (Sapindaceae; Takayama et al., 2013), Acer okamotoanum (Sapindaceae; Takayama et al., 2012), Campanula takesimana (Campanulaceae; Cheong et al., 2020), Dystaenia takesimana (Apiaceae; Pfosser et al., 2005), and Phedimus takesimensis (Crassulaceae; Seo et al., 2020). Currently, a single origin is considered the norm for anagenetically originated endemic plants on Ulleung Island (Seo et al., 2020), including P. takesimensis in the present study, although there are a few exceptions with multiple origins, such as Rubus takesimensis (Rosaceae; Yang et al., 2019) and Scrophularia takesimensis (Scrophulariaceae; Gil et al., 2020). Therefore, P. takesimensis represents an additional endemic taxon anagenetically derived on Ulleung Island, most likely evolved via a single introduction from a source population of P. sargentii in the southern part of the Korean Peninsula, according to the ML phylogeny. Nevertheless, the origin of P. takesimensis remains somewhat uncertain, even though a single origin from a P. sargentii population on the Korean Peninsula is highly plausible (Figure 2A). The SVDQuartets analysis found that a high proportion of quartets (ca.
43%) were incompatible with the tree, which may indicate incomplete lineage sorting or other processes, such as introgression or paralogous sequences in the alignment. Frequent hybridization and introgression among congeneric species of flowering cherries have been documented in previous studies, making species relationships rather uncertain (Bortiri et al., 2001, 2002; Ohta et al., 2007; Potter, 2011; Cho et al., 2014; Cho and Kim, 2019). Unlike the ML phylogeny, the SVDQuartets analysis did not provide a decisive clue to the origin of P. takesimensis. Prunus takesimensis shared the most recent common ancestor with P. avium and P. sargentii from Miyagi, Japan, but this relationship was very weakly supported (<50% BS) (Figure 3). In addition, depending on the number of SNPs generated by different parameter settings in the STACKS analysis (ML trees not shown), the phylogenetic position of P. takesimensis differed slightly relative to the source populations of P. sargentii within CLADE A. Furthermore, we cannot rule out the possibility of multiple origins, taking into account the genetic and phenotypic variation found in the populations of P. takesimensis. One accession of P. takesimensis (SIB969-005) was nested within the ODS population of P. sargentii (Mt. Odaesan in the central part of the Korean Peninsula) instead of within the conspecific monophyletic group in the MIG-seq phylogeny. Another accession of P. takesimensis (SIB2115-6), which was not included in the MIG-seq analysis, shared the same haplotype (H7) as P. sargentii accessions. These two genetic outliers within the same population (SIB; Mt. Seonginbong, Ulleung Island) potentially represent independent dispersal events and convergent or parallel evolution.
While the monophyly of P. takesimensis (100% BS) and the identification of the clade containing P. sargentii populations from mainland South Korea, the Russian Far East, and Miyagi (northern Japan) as sister (97% BS) to P. takesimensis on Ulleung Island are reasonably well established (Figure 2A), this study raises an issue for the species concept of P. sargentii and its close relatives. The ML tree suggests that P. sargentii sampled from Jeju Island, South Korea, does not cluster with other conspecific populations, but is embedded within the clade comprising primarily P. serrulata (i.e., P. serrulata var. spontanea, P. serrulata var. pubescens, and P. serrulata var. quelpaertensis) and P. sargentii var. verecunda (Figure 2A). In addition, P. serrulata and its three infraspecific taxa are shown to be non-monophyletic. Some diagnostic features for the delimitation of these infraspecific taxa can be quite variable, if not subtle, and thus further detailed morphological study and examination of geographic patterns would be beneficial for revising the species concept of P. serrulata and its infraspecific taxa. Although the inflorescence type based on peduncle length, i.e., umbel or corymb, can be useful for delimiting the two closely related species P. sargentii and P. serrulata (Chang et al., 2004), this study also suggests that the species concept of P. sargentii may require further revision. The non-monophyly of P. sargentii revealed in this study raises the possibility that the CLADE A populations distributed in the high mountains of South Korea (excluding Jeju Island), northern Japan, and the Russian Far East may constitute the species P. sargentii as originally described (Rehder, 1908). Prunus sargentii populations on Jeju Island, South Korea, and in southern Japan (e.g., Okayama) may represent P. serrulata or the results of introgression between the two closely related taxa. Future morphological and fine-scale molecular phylogenetic studies are required to further unravel species delimitation, as well as the potential gene flow histories of flowering cherries in East Asia.
Genetic Consequences of Anagenesis in P. takesimensis
Theoretically, the expected genetic consequences for species originating via anagenetic speciation are high levels of genetic variation without geographic partitioning within the island populations, together with a clear genetic distinction between the continental progenitor and the insular derivative species. This is because the established founding population increases in size over time in a favorable, uniform environment on the island, accumulating genetic diversity in the isolated island populations through mutation and recombination, while the absence of geographic partitioning of genetic variation further maintains high levels of diversity (Stuessy et al., 2006, 2014; Takayama et al., 2015). Several species pairs on Ulleung Island showed the expected trends, with higher (Dystaenia takesimana vs. D. ibukiensis) or slightly lower (Acer takesimense vs. A. pseudosieboldianum, and Acer okamotoanum vs. A. mono) genetic variation in island populations relative to continental progenitors, without geographic partitioning (Pfosser et al., 2002, 2005; Takayama et al., 2012, 2013). The tree species endemic to the Juan Fernández Islands in the Pacific Ocean, Drimys confertifolia, Myrceugenia fernandeziana, and Myrceugenia schulzei, also showed genetic patterns compatible with the hypothesis of anagenesis, that is, levels of genetic diversity similar to their respective continental progenitors (Drimys winteri, Drimys andina, and Myrceugenia colchaguensis), along with no geographical partitioning of those variations across the islands (López-Sepúlveda et al., 2013). Previous studies mostly employed nuclear microsatellites, in addition to cpDNA sequences and AFLP analysis, to investigate the genetic consequences of anagenesis in Acer (nine loci), Myrceugenia (six loci), and Drimys (nine loci) species. Prunus species, specifically those belonging to the P. serrulata/P. sargentii complex, appeared genetically more complicated than those species, as they were not clearly distinguished by genotyping of 11 nuclear microsatellite loci (Cho, M.-S., unpublished data). In this study, we assessed the genetic consequences of anagenetic speciation in P. takesimensis using different types of molecular data, that is, maternally inherited cpDNA haplotypes and genome-wide SNPs from MIG-seq analysis. No geographic partitioning of genetic variation was found within the populations of P. takesimensis in either analysis. The genetic diversity of P. takesimensis was lower than that of P. sargentii in haplotype richness, polymorphic sites, gene diversity, and nucleotide diversity based on cpDNA haplotypes, but was slightly higher in heterozygosity (He) and nucleotide diversity (π) based on the much more extensive genome-wide SNP analysis (Tables 2, 4). This is consistent with other endemic species derived by anagenesis on Ulleung Island and the Juan Fernández Islands. Generally, oceanic island populations are expected to lose genetic variation at foundation, as shown by a significant majority of island populations (165 of 202 comparisons) having less allozyme variation (29% average reduction) than their mainland counterparts, since they typically have smaller population sizes than the usually larger and more broadly distributed mainland progenitor populations (Frankham, 1997). Pfosser et al. (2005) argued that D. takesimana regained genetic diversity by accumulating genetic variation through mutation, recombination, and drift during or after anagenetic speciation, along with an increase in population size.
In the case of P. takesimensis, the genome-wide SNPs analyzed by MIG-seq showed a slight increase in genetic diversity, with the expected and observed heterozygosity of the insular derivative P. takesimensis (mean 0.1543 and 0.1905, respectively) exceeding those of the continental progenitor P. sargentii (mean 0.1241 and 0.1658, respectively) (Table 4). We cannot, however, rule out the possibility that these results are due to uneven sampling of the two species, and further study is required.
The absence of (or very weak) geographic structuring among populations of P. takesimensis across Ulleung Island is clearly confirmed by the phylogenetic and population genetic structure analyses based on genome-wide SNPs, which is consistent with the pattern expected for anagenetically derived endemic species (Figures 2B, 6). The lack of spatial structure was also corroborated by the AMOVA of maternally inherited cpDNA haplotypes, which showed much lower among-population genetic variation for P. takesimensis (16.30%) than reported for other endemics: 69.52% for Phedimus takesimensis (Seo et al., 2020), 51.90% for Campanula takesimana (Cheong et al., 2020), and 56.86% for Rubus takesimensis (Yang et al., 2019). Gene flow through the dispersal of seeds and pollen is a fundamental determinant of spatial genetic structure in natural tree populations at different spatial scales (Roser et al., 2017). Given the absence of spatial genetic structure in P. takesimensis, constant gene flow presumably plays a relevant role in reducing genetic differentiation among populations. Flowering cherries show a tremendous capacity for pollen-mediated gene movement, facilitated by characteristics such as genetic bridging capacity, inter- and intra-specific genetic compatibility, a high frequency of open pollination, a perennial habit, a tendency to escape from cultivation, and the abundance of ornamental and roadside cherries (Cici and Van Acker, 2010). Moreover, it is highly conceivable that genetic exchange by long-distance, bird-mediated seed dispersal may also occur frequently among the populations of P. takesimensis on Ulleung Island.
Other progenitor-derivative species pairs involving endemics on Ulleung Island displayed variable genetic patterns depending on the species, as described previously. Genetic variation on islands is determined by various factors, such as loss at foundation, subsequent loss caused by finite population size since foundation, and gains arising from secondary immigration and new mutations (Jaenike, 1973; Frankham, 1997). The species pair of the endemic shrub Rubus takesimensis and its continental progenitor R. crataegifolius showed significantly higher values for a few genetic diversity statistics (e.g., number of haplotypes and nucleotide diversity) in the continental progenitor than in the insular derivative, even though R. takesimensis does not have low levels of genetic variation, as it has experienced multiple introductions from geographically and genetically diverse source populations (Yang et al., 2019). The two herbaceous endemic plants Campanula takesimana and Phedimus takesimensis contrasted with each other in genetic diversity statistics: there was no apparent reduction in genetic diversity in Phedimus takesimensis compared with its continental progenitor, P. kamtchaticus, whereas Campanula takesimana had substantially lower genetic diversity than C. punctata on the Korean Peninsula. In terms of the partitioning of genetic variation on the island, both endemic plants were clearly structured, unlike other herbaceous or woody endemic plants anagenetically derived on Ulleung Island. Campanula takesimana showed substantial genetic structuring and a very narrow geographical source area, that is, Bonghwa on the Korean Peninsula, via the plausible stepping stone of Dokdo Island (Cheong et al., 2020). The seed-splash mechanism is responsible for the population genetic structure and differentiation among populations of Phedimus takesimensis, owing to the limited seed-mediated gene flow by raindrops with relatively short dispersal distances (Seo et al., 2020). All these compilations of the genetic consequences of anagenetically derived endemic species on Ulleung Island are based on the limited number of studies conducted to date. Given the important role of anagenesis in the origin of Ulleung Island endemic plants, it is critical to investigate a much broader set of progenitor-derivative relationships to gain insights into speciation on oceanic islands.
CONCLUSION
We provide the most extensive genetic data to date revealing the monophyly of, and clues to the geographical origin of, the wild flowering cherry endemic to Ulleung Island, P. takesimensis, based on SNPs detected by MIG-seq and complementary cpDNA haplotype analyses. Based on the continental progenitor-insular derivative relationship, P. takesimensis appears to have been derived anagenetically, most likely via a single introduction, from an as-yet-uncertain source population of P. sargentii on the Korean Peninsula, in Russia, or in Miyagi, Japan, although the possibility of multiple introductions cannot be completely ruled out. The genetic differentiation of P. takesimensis from its continental progenitor P. sargentii is corroborated by comparative analyses of population genetic structure and phylogeny based on MIG-seq data. The emerging patterns of the genetic consequences of anagenesis in P. takesimensis correspond to theoretical predictions, as shown for other woody endemics. Generally, higher levels of genetic diversity are found in populations of the continental progenitor P. sargentii than in the island populations of P. takesimensis. The absence of strong population genetic structuring or geographical patterns in P. takesimensis, consistent with our expectations, is possibly caused by gene flow assisted by bird-mediated seed dispersal across Ulleung Island. This study also highlights the effectiveness of the MIG-seq method for species delimitation and for unraveling the complex evolutionary history of island endemic plant groups. However, the other taxa within the P. serrulata/P. sargentii complex remain unresolved by the MIG-seq method, which is not sufficient to distinguish the genetic lineages of sympatric wild flowering cherry species that presumably exchange genetic material constantly through hybridization. Considering that MIG-seq utilizes only ISSR regions and provides fewer SNPs than ddRAD-seq (e.g., ∼1,000 vs. ∼100,000 SNPs) (Peterson et al., 2012; Suyama and Matsuki, 2015), future studies will require additional data to overcome the potential limitations of the MIG-seq method and to clarify the complex evolutionary history of wild flowering cherries.
DATA AVAILABILITY STATEMENT
The datasets presented in this study can be found in online repositories: Dryad (doi: 10.5061/dryad.x3ffbg7jn). Raw reads are available in the Sequence Read Archive (SRA) at NCBI under BioProject number PRJNA742508, titled "262 Prunus MIG-Seq".
Archetype relational mapping - a practical openEHR persistence solution
Background: One of the primary obstacles to the widespread adoption of openEHR methodology is the lack of practical persistence solutions for future-proof electronic health record (EHR) systems as described by the openEHR specifications. This paper presents an archetype relational mapping (ARM) persistence solution for archetype-based EHR systems to support healthcare delivery in the clinical environment.

Methods: First, the data requirements of the EHR systems are analysed and organized into archetype-friendly concepts. The Clinical Knowledge Manager (CKM) is queried for matching archetypes; when necessary, new archetypes are developed to reflect concepts that are not encompassed by existing archetypes. Next, a template is designed for each archetype to apply constraints related to the local EHR context. Finally, a set of rules is designed to map the archetypes to data tables and provide data persistence based on the relational database.

Results: A comparison study was conducted to investigate the differences among the conventional database of an EHR system from a tertiary Class A hospital in China, the generated ARM database, and the Node + Path database. Five data-retrieving tests were designed based on clinical workflow to retrieve exams and laboratory tests, and two patient-searching tests were designed to identify patients who satisfy certain criteria. The ARM database achieved better performance than the conventional database in three of the five data-retrieving tests, but was less efficient in the remaining two; the difference in query execution time between the ARM database and the conventional database was less than 130%. The ARM database was approximately 6-50 times more efficient than the conventional database in the patient-searching tests, while the Node + Path database required far more time than the other two databases to execute both the data-retrieving and the patient-searching tests.

Conclusions: The ARM approach is capable of generating relational databases using archetypes and templates for archetype-based EHR systems, thus successfully adapting to changes in data requirements. ARM performance is similar to that of conventionally-designed EHR systems, and it can be applied in a practical clinical environment. System components such as ARM can greatly facilitate the adoption of openEHR architecture within EHR systems.
Background
Currently, an electronic health record (EHR) system is essential to support clinical practice with information technology in healthcare environments. An EHR is a repository of information regarding the health status of a subject of care in computer-processable form [1]. However, healthcare data is generally too complicated, flexible, and changeable to be captured by a universal, comprehensive, and stable schema of information, which is the foundation of the entire EHR architecture. Highly specialized and complex EHR systems cannot adapt to the evolution of healthcare data requirements, which demand a dynamic, state-of-the-art, rapidly evolving information infrastructure [2]. In order to protect EHR systems from changes in the healthcare domain, openEHR [3] has published a series of specifications to guide the development of future-proof EHR systems. The solutions span from information modelling to system architecture, to meet the continually evolving needs of EHR systems.
The concept of openEHR focuses on the systems and tools necessary for the computation of complex and constantly evolving health information at a semantic level, according to the following three paradigms: separation of information models, domain content models, and terminologies; separation of responsibilities; and separation of viewpoints [4]. Among these three paradigms, the separation of information models, domain content models, and terminologies promotes a significant shift from the single-level modelling approach of information system development to a two-level modelling approach. In the single-level modelling approach, domain concepts that are processed by the EHR system are hard-coded directly into the application and database models. In two-level modelling, the semantics of information and knowledge are separated into a small, comprehensible, non-volatile reference model (RM), which is used to build information systems and knowledge models; archetypes are used as formalisms and structures to express the numerous and volatile domain concepts [5].
The RM represents the general features of health record components, their method of organization, and the contextual information necessary to satisfy both the ethical and legal requirements of the health record. The RM encompasses the stable features of the health record by defining the set of classes that composes the building blocks which in turn constitute the record. Archetypes define entire, coherent informational concepts from the clinical domain [6]. An archetype is a hierarchical combination of components from the RM, with restrictions placed on names, possible data types, default values, cardinality, etc. These structures, although sufficiently stable, may be modified or replaced by others as clinical practice progresses and evolves [7]. Archetypes are deployed at runtime via templates that specify particular groups of archetypes to be used for a particular purpose, often corresponding to a screen form. A template is a specification that creates a tree structure of one or more archetypes, each constraining instances of various RM types such as composition, section, and entry subtypes. Templates typically correspond closely to screen forms, printed reports, and generally complete units of information to be captured or sent at the application level; they may therefore be used to define message content [4]. By utilizing the two-level modelling approach, EHR systems can be built on a stable RM as a general framework, using archetypes as the domain information model to achieve greater flexibility and stability, particularly in situations in which the domain concepts are vast in number, have complex relationships, and evolve continuously. The responsibilities of specialists from the information technology domain and the healthcare domain are disentangled: developers focus only on the technical components of EHR systems, while domain specialists develop the structural model based on domain concepts and archetypes. This enables domain specialists to participate directly in the production of the artefacts that are interpreted by EHR systems, to organize and present healthcare information, and to control the EHR system without intervention from the system supplier or re-programming.
Syntactic and semantic interoperability between different EHR systems is facilitated by the use of archetypes. However, agreement between data content and the information models is necessary for real integration and semantic interoperability [8]. In the single-level modelling approach, the domain concepts are implicitly contained within the EHR systems. No consensus healthcare domain models exist to which EHR systems may conform; each system may model different aspects and granularity levels of domain concepts, resulting in heterogeneity. In the two-level modelling approach, the RM ensures that EHR systems can always send information to other systems and receive readable information in return, thus ensuring data interoperability. Archetypes can be used as a common knowledge repository to share evolving clinical information that can be processed by the receiving systems, thus enabling semantic interoperability [9]. openEHR maintains the Clinical Knowledge Manager (CKM) as an official archetype repository to support the governance of international domain knowledge.
Specifications of openEHR can be used to create and sustain a flexible EHR ecosystem that consists of many integrated services, which is too complex to be accurately processed by single applications [10]. The general architectural approach can be considered as five layers: persistence (data storage and retrieval), back-end services (including EHR, demographics, terminology, archetypes, security, record location, etc.), virtual EHR (a coherent set of APIs to the various back-end services and an archetype-and-template-enabled kernel responsible for creating and processing archetype-enabled data), application logic (user applications or other services, such as a query engine), and presentation (the graphical interface of the application) [4]. The archetypes share the openEHR innovation of adaptability because they are external to the software, while key components of the software are derived from the archetypes [11]. Much current research has been devoted to the use of archetypes to drive the persistence, accessibility, and presentation of healthcare information systems [12][13][14][15][16].
Most EHR systems in the healthcare field are built according to the single-level modelling approach, despite the many advantages of two-level modelling. One of the primary reasons is that the persistence layer is inadequate to meet the requirements of clinical practice. As the foundation of EHR systems, the persistence layer determines the EHR system architecture and can also function as a performance bottleneck. openEHR promotes a Node + Path persistence solution that serializes sub-trees of fine-grained data into blobs or strings based on object or relational systems [17]. The data can be serialized at different granularity levels, from top-level information objects down to the lowest leaf nodes. In essence, the Node + Path solution is an entity-attribute-value (EAV) approach that takes advantage of the semantic paths in openEHR data to improve on the serialized-blob design. The greatest advantages of the Node + Path approach are flexibility and simplicity; all data nodes are serialized, and their paths are recorded adjacently in a two-column table of <node path, serialized node value>. However, the simplicity of the data storage structure induces complex data retrieval logic, which strains the performance of data insertion and query [18]. Some researchers have reported similar performance of conventional and EAV models in processing entity-centered queries, but found EAV models to be approximately 3-5 times less efficient in processing attribute-centered queries [19]. Evaluations of persistence solutions using an XML database also indicate that XML databases were considerably slower and required much more space than the relational database [20]. There has also been similar research into the data persistence of alternative approaches based on two-level modelling, such as EN13606 [21] or HL7 Version 3 [22]. In a proof-of-concept work on EN13606 [7], the data storage was developed by applying Object Relational Mapping (ORM) [23] to the RM of EN13606; this approach was investigated by the authors in earlier work [24]. The deep inheritance and complicated relationships of the EN13606 RM induce excessive JOIN operations during data query. Additionally, classes near the top of the hierarchy become heavily overloaded with data. For example, DATA_VALUE is the base class of all data types and contains the common attributes of all instances of all data classes, making it unable to operate in real time [7]. The IBM Clinical Genomics medical research project developed a hybrid data model based on the HL7 Version 3 Reference Information Model (RIM) [25]. The hybrid data model combines elements of both the ORM and EAV approaches, which is well-suited to the sparse and flexible data of a medical research data warehouse. However, it demonstrates problems similar to those of other ORM and EAV approaches, in that it does not improve performance enough to support effective clinical transactions.
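To make the Node + Path (EAV-style) trade-off concrete, the following sketch builds the two-column <node path, serialized node value> layout in SQLite and shows why an attribute-centered query must filter on paths and cast serialized values rather than read one typed, indexed column. The table layout and paths are hypothetical simplifications, not the openEHR reference layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE node_path (record_id TEXT, path TEXT, value TEXT)")

# Two blood-pressure observations flattened into path/value rows.
conn.executemany(
    "INSERT INTO node_path VALUES (?, ?, ?)",
    [
        ("obs-1", "/data/events/systolic/magnitude", "142"),
        ("obs-1", "/data/events/diastolic/magnitude", "91"),
        ("obs-1", "/protocol/cuff_size", "adult"),
        ("obs-2", "/data/events/systolic/magnitude", "118"),
        ("obs-2", "/data/events/diastolic/magnitude", "77"),
    ],
)

# Attribute-centered query: records with systolic > 140. The numeric
# comparison needs a CAST because every value is stored as a string,
# and each additional attribute condition would need another self-join.
rows = conn.execute(
    """
    SELECT s.record_id
    FROM node_path AS s
    WHERE s.path = '/data/events/systolic/magnitude'
      AND CAST(s.value AS REAL) > 140
    """
).fetchall()
print(rows)  # [('obs-1',)]
```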
The relational database is still of primary importance to the data persistence of EHR systems, and its excellent performance has already been well proven in many successful EHR projects and widely accepted EHR products. The primary challenge in applying a two-level modelling approach to a relational database is that the domain information model is hard-wired into the database model; when the domain information model changes, the relational database must be redesigned to accommodate the new domain information model. There are two approaches to support the adaptation of the persistence layer to changes in archetypes. One approach is to use a general data storage structure that is independent of archetypes, such as Node + Path, which has been well investigated. The other approach is to generate a database model from the archetypes, designing a persistence layer driven by archetypes.
This work presents an archetype relational mapping (ARM) persistence solution based on a relational database that can achieve similar performance to the conventional database in practical clinical environments. This work extends the basic ARM method introduced in a previous proof-of-concept work [24] by providing a more sophisticated mapping approach based on templates and archetypes in order to map archetypes to relational tables. Performance optimization rules and details are then provided. First, the ARM approach is introduced in detail, including archetype modelling, template definition, and mapping rules. Then, the ARM is applied to the EHR data requirements of a tertiary hospital in China. A comparison of the generated ARM database, the conventional database deployed in the hospital, and the openEHR official Node + Path database is conducted. After analysis, the challenges encountered during ARM development are discussed, and conclusions are provided.
Methods
An underpinning principle of openEHR is the use of archetypes and templates, which represent formal models of domain content that are used to control data structure and content during creation, modification, and querying [26]. The ARM approach employs archetypes and templates to generate a persistence model and provide data access based on a relational database. The ARM is intended to fulfill several functionalities:
1) Effective adaptation to changes in archetypes. Archetypes reflect the changing realities of EHR, and existing archetypes are updated over time. Archetype changes pose several challenges to ARM, such as applying new archetypes to the relational database and handling potentially incompatible versions among archetypes.
2) Generation of customized persistence models for various EHR requirements. Archetypes define widely reusable components of information, and templates encapsulate the local usage of archetypes and relevant preferences. To apply to the local EHR context, ARM employs templates to customize the data persistence model.

There are three steps to employing ARM in the implementation of the persistence layer: archetype modelling, template definition, and database mapping.
Archetype modelling
Archetype modelling selects and expands existing archetypes from the public archetype repository, or defines new ones in order to meet all data requirements. Archetype modelling has been well-developed, and widely applied in previous studies [27][28][29]. First, all data items must be determined by collecting and analysing the data requirements in detail, such as dataset specifications or database schemas. The data items must then be merged into coherent and meaningful clinical concepts. Then, existing archetypes must be reused as much as possible.
Keywords, including the concept name and core data items, are used to search the CKM for matching archetypes. Concepts are identified as fully covered, partially covered, or not covered by existing archetypes. An archetype is reused directly if it fully covers a concept and extended if it partially covers a concept; new archetypes are designed for concepts not covered by any existing archetype.
Template definition
Archetypes describing general healthcare concepts are adapted to local EHR data requirements by designing templates. One corresponding template is designed for each archetype in order to add constraints, such as local optionality, archetype chaining, tightened constraints, default values [26], and ARM constraints. ARM constraints attempt to achieve better performance by aligning the concept-focused archetype model with the data-focused relational model. The ARM constraints are designed as follows:
1) Assign the identification data item. An identification data item is used to uniquely identify instances of each archetype; it can be of any basic data type (Table 1) and has an occurrence of 0..1 or 1..1. Only one identification data item can exist for each archetype.
2) Assign query data items. Some data items may always be used as query conditions, particularly those with characteristics identical to the identification data item. These data items can be categorized as query data items, and an archetype may have multiple query data items to facilitate data query.
Collection data structures such as CLUSTER, ITEM_TREE, ITEM_LIST, and archetype slots cannot be used as identification data items or query data items. As basic units of internal structures of archetypes, collection data structures group related data items, and can thus be viewed as embedded archetypes with their own identification data items and query data items.
3) Define mappings between generalized archetypes and specialized archetypes to facilitate data query. If a domain contains many concepts and data items with similar structures, a general concept with a generic data item can be used to store the name and all fields related to these concepts. It is impossible to define the vast number of concepts included in EHR systems all at once, and such definitions are difficult to maintain due to the continuous development of new concepts. Archetypes can clearly represent the general concept and the specific concepts through specialization. The name and fields of data items in specialized archetypes are mapped to a subset of fields of the corresponding data item in the generalized archetype. Then, data stored as a generalized archetype instance can be queried with specialized archetypes, and vice versa, using the mappings. (A sketch of how these constraints could be represented in code follows below.)
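As an illustrative sketch of how these template-level ARM constraints could be represented in code (the class names and structures below are hypothetical, not part of the openEHR specification), an archetype description might carry its identification and query data items like this:

```python
from dataclasses import dataclass, field

@dataclass
class DataItem:
    name: str
    rm_type: str              # archetype basic data type, e.g. "DV_TEXT"
    occurrence: str = "0..1"  # upper bound 1 -> single column; * -> own table
    is_identification: bool = False  # at most one per archetype
    is_query: bool = False           # indexed; several allowed

@dataclass
class ArchetypeTemplate:
    archetype_id: str
    items: list = field(default_factory=list)

    def validate(self):
        ids = [i for i in self.items if i.is_identification]
        assert len(ids) <= 1, "only one identification data item per archetype"
        for i in ids + [q for q in self.items if q.is_query]:
            assert "*" not in i.occurrence, \
                "identification/query items must be single-occurrence"

# Hypothetical template for a lab-test archetype.
lab = ArchetypeTemplate(
    "openEHR-EHR-OBSERVATION.lab_test.v1",
    [DataItem("test_id", "DV_IDENTIFIER", "1..1", is_identification=True),
     DataItem("test_name", "DV_TEXT", "1..1", is_query=True),
     DataItem("result", "DV_QUANTITY", "0..*")],
)
lab.validate()
```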
Database mapping
Database mapping generates a relational database schema in order to automatically persist the data represented by archetype instances into relational databases, using archetypes, templates, and ARM constraints. A set of mapping rules is designed as follows (a sketch applying a subset of these rules to a toy archetype is given after rule 8):
1) Map each archetype to a table. According to the archetype semantic relationships specified in the openEHR specifications (Table 2) [30], new and old versions of the same archetype can be organized into a single data table.
2) Map the basic data items represented by the archetype basic data types. If the upper bound of the data item occurrence is 1, this indicates a single-occurrence data item, which is mapped as a common column. If the upper bound of the data item occurrence is *, this indicates a multiple-occurrence data item, which is mapped into a standalone table with two columns: one is a foreign key column referring to the identification data item of the current archetype, and the other is a common column mapped from the data item. Table 1 lists the archetype basic data types utilized in ARM and their corresponding mapping rules. For fields with non-primitive types within each archetype basic data type, the corresponding SQL type is noted as "#", and the mappings must be looked up in Table 1.
3) Constrain the identification data item as the key column. Unique and clustered index constraints are mapped from the identification data item and added to the column. If there is no identification data item in an archetype, a data item named "id" is generated and used as the identification data item to identify records in the database table; the generated "id" is invisible and cannot be accessed through the archetype.
4) Constrain the query data items as indexed columns. Non-clustered index constraints mapped from query data items are added to the columns in order to accelerate data query.
5) Map archetype slots. If the upper bound of the archetype slot occurrence is *, the current archetype and the target archetype exhibit a one-to-many relationship, which is mapped as a foreign key column in the target archetype table, referring to the identification data item of the current archetype. If the upper bound of the archetype slot occurrence is 1, the current archetype and the target archetype exhibit a one-to-one relationship, so the data items of the target archetype are embedded into the table of the current archetype, thus embedding the target archetype into the current archetype.
6) Map collection data items according to the collection data structure. If the upper bound of the collection data item occurrence is 1, this indicates a single-occurrence collection data item, so the data items contained in this collection data item can be mapped into the table of the current archetype, i.e., flattened. If the upper bound of the collection data item occurrence is *, this indicates a multiple-occurrence collection data item, which is mapped into a standalone table with one foreign key column referring to the identification data item of the current archetype.
7) Propagate query data items. An efficient method to reduce deep recursion through the archetype hierarchy tree when querying a leaf archetype is to store the most frequently queried data items of the ancestors in the descendants. For multiple-occurrence archetype slots and collection data items, the query data items of the current archetype can be mapped, in the same way as the identification data item, into the target archetype data table (for archetype slots) or the standalone data table (for collection data items) as foreign key columns.
8) Naming. Naming rules vary slightly because each relational database product, such as Microsoft SQL Server or Oracle, has unique restrictions for naming tables and columns. The general principles are as follows. The archetype name is used as the table name for tables mapped from archetypes. The archetype name concatenated with the data item name is used as the table name for tables mapped from collection data items and multiple-occurrence data items; the version portion of the archetype name is removed, since all versions of an archetype are mapped into the same database table.
The path of the data item concatenated with the field name is used as the column name for a column mapped from a data item field; this combination of path and field name is unique, but the human readability of the path is poor. One alternative to achieve better readability is to use the textual name of the data item provided within the archetype ontology section rather than the path. However, the uniqueness of that textual name is not guaranteed. Finally, if the generated names for tables and columns are so long as to violate the naming restrictions of the relational database product, they should be shortened in a consistent manner so that they remain unique.
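To make the mapping rules concrete, here is a hedged sketch that applies a simplified subset of them (rules 1-4 and the version-stripping part of rule 8) to a hypothetical archetype description, emitting SQL DDL. The type mapping is illustrative only and does not reproduce the paper's Table 1:

```python
# Illustrative subset of a basic-type mapping (assumed, not exhaustive).
SQL_TYPES = {"DV_IDENTIFIER": "VARCHAR(64)", "DV_TEXT": "NVARCHAR(255)",
             "DV_QUANTITY": "FLOAT"}

# Hypothetical archetype: (item name, RM type, occurrence, is_id, is_query).
ARCHETYPE_ID = "openEHR-EHR-OBSERVATION.lab_test.v1"
ITEMS = [("test_id", "DV_IDENTIFIER", "1..1", True, False),
         ("test_name", "DV_TEXT", "1..1", False, True),
         ("result", "DV_QUANTITY", "0..*", False, False)]

def table_name(archetype_id):
    """Rule 8: archetype name as table name; drop the version suffix so
    all versions of an archetype share one table (rule 1)."""
    return archetype_id.rsplit(".v", 1)[0].replace("-", "_").replace(".", "_")

def map_archetype(archetype_id, items):
    main, ddl, cols = table_name(archetype_id), [], []
    for name, rm_type, occ, is_id, is_query in items:
        col = f"{name} {SQL_TYPES[rm_type]}"
        if is_id:
            col += " PRIMARY KEY"  # rule 3: identification item -> key column
        if occ.endswith("*"):
            # Rule 2: multiple-occurrence item -> standalone table with a
            # foreign key back to the archetype's identification item.
            ddl.append(f"CREATE TABLE {main}_{name} (\n"
                       f"  {main}_id VARCHAR(64) REFERENCES {main},\n"
                       f"  {col}\n);")
        else:
            cols.append("  " + col)
        if is_query:  # rule 4: query item -> non-clustered index
            ddl.append(f"CREATE INDEX ix_{main}_{name} ON {main} ({name});")
    ddl.insert(0, f"CREATE TABLE {main} (\n" + ",\n".join(cols) + "\n);")
    return "\n".join(ddl)

print(map_archetype(ARCHETYPE_ID, ITEMS))
```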
Database comparison
A performance comparison of the ARM approach and an EHR system used in real clinical practice is conducted. The compared EHR system has been deployed in a tertiary class A hospital in China for three years. Several legacy systems have been integrated into the EHR system, including HMIS (hospital management information system), LIS (laboratory information system), RIS (radiology information system), PACS (picture archiving and communication system), PMS (pharmacy management system), and OMS (operation management system). Two information systems, CPOE (Computerized Physician Order Entry) and IV (Integrated Viewer), support the order-centered clinical workflow for all clinicians from all departments within the hospital. The IV information system allows clinicians to view the demographic, imaging examination, and laboratory test data of a patient scattered across heterogeneous silo systems in one application, rather than being forced to access each kind of patient data in its corresponding system. The IV represents a typical centralized data-reporting application in a hospital that presents patient information from all examination departments and encompasses most EHR data, thus serving as an ideal candidate against which to apply the ARM approach and conduct the comparison.
Ethics
This study was approved by the Review Board of Shanxi Dayi Hospital under project 2012AA02A601. A database specialist from the hospital's information technology department exported and de-identified the necessary data from the IV database.
ARM mapping
The schema of the IV database has been analysed in detail and extracted into concepts. Fig. 1 depicts the overview of the IV concepts and their relationships.
The primary purpose of this investigation is to explore the performance of the ARM approach. Existing matching archetypes in CKM are selected without modification to facilitate clear interpretation of comparisons. A total of 17 archetypes are selected to encompass the IV concepts. Fig. 2 depicts an overview of the selected archetypes and their relationships.

Fig. 1 IV concepts overview. In all figures, PK stands for primary key; FK stands for foreign key and indicates the data item to which the current data item relates; SLOT indicates the target archetype which conforms to the current data item; solid lines indicate foreign key relationships between archetypes; dashed lines indicate composition relationships through slots between archetypes; IDI stands for identification data item; QDI stands for query data item; CI stands for clustered indexed; NCI stands for non-clustered indexed; data items in italic type are not covered by archetypes.

Fig. 2 Archetypes overview. Legend as in Fig. 1.
A total of 16 are existing archetypes; one archetype is newly designed: openEHR-EHR-OBSERVATION.lab_test-general.v1. Due to the different granularity and reusability of archetypes and IV concepts, data items belonging to one IV concept are commonly scattered across several archetypes, and vice versa. For example, the Patient concept is mapped to four archetypes, represented by slots in Fig. 3. The archetype openEHR-EHR-INSTRUCTION.request-imaging_exam.v1 is mapped to three IV concepts, as shown in Fig. 4, and the archetype openEHR-EHR-OBSERVATION.imaging_exam.v1 is mapped to two IV concepts, as shown in Fig. 5.
Another common situation is the distinction between metadata-level and data-level modelling [29]. For example, there are many specific lab test result archetypes, such as blood gases, full blood count, liver function, etc., while the number of lab test result items in IV is greater than 200; additionally, more result items will be generated with new technologies and instruments. Since the lab test result items all exhibit a similar data structure, it is convenient to define a generalized archetype, openEHR-EHR-OBSERVATION.lab_test-general.v1, specialized from the archetype openEHR-EHR-OBSERVATION.lab_test.v1 with three additional multiple-occurrence data items (Test Item, Result, and Result Unit) according to the Lab Test Data concept in IV (Fig. 6), and to use it along with the specialized archetypes to represent these flexible lab test results.

Fig. 3 Mapping of openEHR-DEMOGRAPHIC-PERSON.person-patient.v1. Legend as in Fig. 1.
The subject data item in every archetype is used to represent the patient. Table 3 lists the templates and the ARM constraints defined according to the IV data requirements.
Finally, the ARM mapping rules are applied to the archetypes, templates, and ARM constraints to generate the final relational database schema, as shown in Fig. 7.
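As an illustration of what the mapping rules emit, the following is a minimal sketch of two generated tables (the table names follow the style used in the benchmark discussion below; the column names and types are assumptions for illustration, not the paper's actual generated schema): an archetype becomes a table, and a slot/composition relationship becomes a child table carrying a foreign key to its parent.

    -- Hypothetical sketch of mapping-rule output (not the actual Fig. 7 schema).
    CREATE TABLE OImagingExam (
        id            BIGINT      NOT NULL PRIMARY KEY,  -- PK of the archetype table
        exam_item_id  VARCHAR(64) NOT NULL,              -- a query data item (QDI)
        exam_datetime DATETIME2   NULL
    );

    CREATE TABLE OImagingExamImageDetails (
        id              BIGINT        NOT NULL PRIMARY KEY,
        OImagingExam_id BIGINT        NOT NULL
            REFERENCES OImagingExam (id),                -- FK derived from the slot
        image_uri       NVARCHAR(400) NULL
    );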
Database preparation
To determine whether its performance can meet the requirements of clinical practice, a performance comparison is conducted among the generated ARM database, a conventionally-designed IV database, and the official Node + Path database.
The schema of the test IV database is shown in Fig. 8. Data items not encompassed by archetypes are removed from it to keep the test IV database comparable. The Node + Path database schema is shown in Fig. 9. Although the Node + Path approach requires only one table to store all data, here one table is assigned to each concept to make it practical and to greatly improve its performance.
A dataset is extracted directly from the online IV database, with dates ranging from 2014-01-01 to 2014-12-31, containing 103320 imaging tests, 8573157 images, 654213 laboratory tests, and 4846688 laboratory test result items for 29743 patients. All data have been de-identified by removing all patient names, patient phonetic names, patient birth dates, patient death dates, and doctor names. The dataset is imported into three clean instances of the test IV database, the ARM database, and the Node + Path database. The IV database requires 1.60 gigabytes on the hard disk, the ARM database requires 2.90 gigabytes, and the Node + Path database requires 43.87 gigabytes, far more than the other two databases. Although the Node + Path approach is much more efficient at storing sparse data, it redundantly stores the path of every archetype data item, which greatly inflates its size.
Fig. 4 Mapping of openEHR-EHR-INSTRUCTION.request-imaging_exam.v1.
Query benchmark
Tests are conducted on a Dell M4700 running the Windows 8.1 Enterprise 64-bit operating system and Microsoft SQL Server 2014 Enterprise Edition, with an Intel Core i5-3340M processor, 16 gigabytes of memory, and a 5400-RPM hard disk.
Clinicians use the IV every day to monitor patient imaging exams and laboratory tests in order to make further decisions. The IV presents a patient list to each clinician; when a patient is selected, all correlated imaging exams and laboratory tests are displayed on two pages, and the clinician can click on each imaging exam or laboratory test to review all of its results, images, and reports in detail. The IV updates all information in real time so that clinicians can respond to patient situations with minimal delay.
Five data-retrieving tests (tests 1-5, discussed below) are designed from this workflow scenario. In addition to these five data-retrieving tests, two patient-searching tests are designed to evaluate performance in finding patients who satisfy certain criteria. Finding similar patients is integral to evidence-based care delivery and helps clinicians make further decisions. Because IV defines many concepts at the data level, such searches are not efficient to implement with the conventionally-designed IV database; with the archetype approach, however, concepts are explicitly expressed as archetypes and can be mapped to standalone tables.
Fig. 5 Mapping of openEHR-EHR-OBSERVATION.imaging_exam.v1.
Test 6: Find all patients with PaO2 >= 129 mmHg in blood gas tests (Query 6.1). Find all patients with PaO2 >= 129 mmHg, PaCO2 >= 27 mmHg, and arterial pH >= 7.3 in blood gas tests (Query 6.2). Find all patients with PaO2 >= 129 mmHg, PaCO2 >= 27 mmHg, arterial pH >= 7.3, SaO2 >= 99 %, and CaO2 >= 17 % in blood gas tests (Query 6.3).
Test 7: Find all patients with PaO2 >= 229 mmHg in blood gas tests, red cell count >= 2 × 10^12/L in full blood count tests, and alkaline phosphatase >= 50 IU/L in liver function tests (Query 7.1). Find all patients with PaO2 >= 229 mmHg in blood gas tests, red cell count >= 2 × 10^12/L in full blood count tests, alkaline phosphatase >= 50 IU/L in liver function tests, thyroid stimulating hormone >= 0.3 μIU/mL in thyroid tests, and sodium >= 140 mmol/L in urea-electrolyte tests (Query 7.2).
Fig. 6 Mapping of openEHR-EHR-INSTRUCTION.request-lab_test.v1 and openEHR-EHR-OBSERVATION.lab_test-general.v1.
Table 5 lists the benchmark results of the test queries for each database. (Note to Table 5: the percentage values in the IV and ARM columns indicate how much more time the slower database spent on each query than the faster one; the Node + Path database is not included in this calculation.) All queries are composed of multiple simple SQL clauses, avoiding table joins and clause nesting, which yields better performance in clinical practice. Each query was executed ten times, and the average time was calculated. The database cache is turned off to avoid caching effects of the selected database product.
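For concreteness, a hedged sketch of how the test 6 queries might be written against the ARM database follows (the table name OLabTestBloodGases appears in the benchmark discussion; the column names are assumptions for illustration):

    -- Hypothetical Query 6.1: one simple clause on the standalone
    -- blood gas archetype table (column names assumed).
    SELECT DISTINCT patient_id
    FROM   OLabTestBloodGases
    WHERE  PaO2 >= 129;            -- PaO2 in mmHg

    -- Hypothetical Query 6.2: the same single-table scan with added conditions.
    SELECT DISTINCT patient_id
    FROM   OLabTestBloodGases
    WHERE  PaO2 >= 129 AND PaCO2 >= 27 AND arterial_pH >= 7.3;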
The performances of the ARM database and the IV database were very similar in the data-retrieving tests. ARM performed better in tests 1, 2, and 4, while IV performed better in tests 3 and 5. The detailed reasons for differences in absolute execution time are complex, because these systems are affected by many external factors, such as background tasks of the Windows operating system and the hard disk cache. There are also database factors that contribute to the differences in execution time between the ARM database and the IV database.
In test 1, both the ARM and IV databases were queried with one SQL clause, using the patient id as the condition. The patientIdentifier_identifier_id column of the DPersonPatient table in the ARM database was clustered indexed, while the Patient Identifier column of the Patient table in the IV database was non-clustered indexed, which requires additional key lookup operations and thus more time.
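The index difference can be made concrete with a small, hypothetical DDL sketch (the actual generated schemas are those of Fig. 7 and Fig. 8):

    -- ARM: a clustered index means the table rows themselves are ordered
    -- by the key, so a lookup by patient identifier reads the rows directly.
    CREATE CLUSTERED INDEX CI_DPersonPatient_identifier
        ON DPersonPatient (patientIdentifier_identifier_id);

    -- IV: a non-clustered index stores key plus row locator, so every
    -- match costs an extra key lookup into the base table.
    CREATE NONCLUSTERED INDEX NCI_Patient_identifier
        ON Patient ([Patient Identifier]);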
In test 2, table IRequestImagingExam in the ARM database was queried using one SQL clause; however, in the IV database, three corresponding tables (Imaging Exam Requester, Imaging Exam Filler, and Imaging Exam Item) must be queried with three SQL clauses.
In test 3, two tables (OImagingExam and OImagingExamImageDetails) were queried in the ARM database, and three tables (Imaging Exam Filler, Imaging Exam Report, and Imaging Exam Image) were queried in the IV database. However, the OImagingExamImageDetails table, containing 8573157 image records in the ARM database, is non-clustered indexed, resulting in extra key lookup operations and slower performance than the corresponding Imaging Exam Image table in the IV database, which is clustered indexed.
In test 4, one table (IRequestLabTest) was queried in the ARM database, while two tables (Lab Test Requester and Lab Test Filler) must be queried in the IV database.
(Note on the ARM constraint definition: the <eav> node indicates the target generalized archetype to which the current specialized archetype is mapped. In the <eavAttributeName> node, the attribute "name" specifies the full path of the source data item, the attribute "set" indicates which textual name within the archetype ontology section is used, since multiple languages are provided, and the value is the full path of the target data item. In the <eavAttributeField> node, the attribute "name" specifies the full path of one data field in the source data item, and the value is the full path of the target data item.)
Fig. 7 ARM database schema.
Fig. 8 Test IV database schema.
In test 5, both the ARM and IV databases were queried on one table with one SQL clause, using the patient id as the condition. The OLabTestGeneral_patient_value column of the OLabTestGeneralStructureResult table in the ARM database is non-clustered indexed, while the Patient Identifier column of the Lab Test Data table in the IV database is clustered indexed, which favors the IV database here.
The Node + Path database requires more time for all tests, even when a query returns few results, due to unavoidable full table scans; thus, it is not practical in a clinical workflow. All three databases, ARM, IV, and Node + Path, show a similar trend: as a query returns more data, the test takes more time to execute.
In the patient-searching tests, the series of lab test result archetypes were directly mapped to standalone tables in the ARM database and the Node + Path database. Each table stores only the data related to its archetype, so the number of records per table decreases dramatically. In the IV database, by contrast, all lab test result data are stored in a single EAV-style table.
In test 6, only one table is queried with different conditions in each database: table OLabTestBloodGases in the ARM database, table Lab Test Data in the IV database, and table OLabTestBloodGases in the Node + Path database; therefore, the increase in query time from Query 6.1 to Query 6.3 is trivial in all three databases. The ARM database was the fastest, and the IV database was slower than the Node + Path database, since its table contains many more records.
In test 7, the query time increased greatly for the ARM database and the Node + Path database, in which more than one table was queried, namely tables OLabTestBloodGases, OLabTestFullBloodCount, OLabTestLiverFunction, OLabTestThyroid, and OLabTestUreaAndElectrolytes in both databases. In the IV database, however, only one table (Lab Test Data) was queried. Even so, the Node + Path database was slower than the IV database, since the lab test results table in the IV database is not purely EAV and thus performs much better.
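A hedged sketch of how Query 7.1 might be expressed against the per-archetype ARM tables follows (table names as above; column names are assumptions):

    -- Hypothetical Query 7.1: intersect three small per-archetype scans,
    -- one simple clause per table, with no joins.
    SELECT patient_id FROM OLabTestBloodGases     WHERE PaO2 >= 229
    INTERSECT
    SELECT patient_id FROM OLabTestFullBloodCount WHERE red_cell_count >= 2
    INTERSECT
    SELECT patient_id FROM OLabTestLiverFunction  WHERE alkaline_phosphatase >= 50;

In the IV database, the same criteria would all be evaluated against the single Lab Test Data table.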
Discussion
This paper presents an ARM persistence solution for archetype-based EHR systems. While the ARM approach is designed to generate a relational database from archetypes and templates and can achieve performance similar to a conventionally-designed database, several challenges and issues were encountered.
Fig. 9 Node + Path database schema.

ARM deployment

ARM employs a model-driven approach that allows data persistence to adapt to changes in data requirements, using archetypes to represent general domain concepts and templates tailored with ARM constraints. The mapping rules are implemented in a persistence service that generates the database automatically, avoiding manual database updates. Currently, changes described by archetype semantic relationships are easy to implement on the relational database, but one kind of change is not explicitly covered by new archetype versions: the data type of a data item. Ideally, data types should be as stable as the data items themselves and remain unchanged; in practice, however, data type changes cannot always be avoided during archetype development, particularly for archetypes that are initially developed from local data requirements and later extended to a global scope, which often results in incompatible versions of archetypes.
Changing a column data type can wreak havoc on a relational database; thus, two mapping rules are designed to adapt to such changes automatically. The first rule maps each version of an archetype to its own table; old versions gradually become outdated and obsolete and can then be safely moved to a backup database, although multiple versions of an archetype simultaneously in service result in a large number of tables prior to removal. The second rule maps all versions of an archetype to a new table and imports all data from the old table into the new one according to the conversion algorithm provided with the archetype [30]; the old table is then removed. However, the data conversion process is costly in time and computation if the table contains a great number of records. Although it is safe to redeploy changed archetypes and templates to update a deployed database, the principles of archetype design recommend that archetypes and templates be maintained by a committee of domain experts and deployed once they are stable. Archetypes should be reused as much as possible to represent the domain concepts. Templates are used to align the data persistence with different data requirements, and to avoid shifting the heterogeneity of data requirements onto the archetypes.
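A minimal sketch of the two rules, assuming a hypothetical archetype table OLabTest whose result column changes type between versions (all names and types here are illustrative, not the paper's generated schema):

    -- Rule 1: keep one table per archetype version; the old table is
    -- eventually moved to a backup database.
    --   OLabTest_v1 (id, patient_id, result_value VARCHAR(50))
    --   OLabTest_v2 (id, patient_id, result_value DECIMAL(10,2))

    -- Rule 2: create one new table for all versions, migrate the old data
    -- through the type conversion, then drop the old tables.
    SELECT id, patient_id,
           CAST(result_value AS DECIMAL(10,2)) AS result_value
    INTO   OLabTest_merged
    FROM   OLabTest_v1;

    INSERT INTO OLabTest_merged (id, patient_id, result_value)
    SELECT id, patient_id, result_value
    FROM   OLabTest_v2;

    DROP TABLE OLabTest_v1;
    DROP TABLE OLabTest_v2;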
De-normalization
In archetype modelling, the most important archetype resource is the CKM, in which archetypes are maintained by healthcare experts and published in a central repository. Archetypes in CKM are highly abstract and normalized so that each archetype represents a complete domain concept, and they are revised by various experts according to various kinds of data requirements. In conventionally-designed databases, a combination of well-organized tables, tolerable redundancies from de-normalization, and fine-tuned indices allows all queries to be implemented with as few SQL clauses as possible. Several de-normalizations are therefore introduced in ARM to achieve better performance.
First, de-normalization of archetype granularity is achieved by embedding archetypes through archetype slot mapping. Since the granularities of archetypes and data requirements are not always identical, archetypes composed through single-occurrence archetype slots represent one concept and can be embedded together according to the data requirements. In this manner, query steps can be reduced and table joins can be avoided. However, the embedded archetypes are then in a "division" state: one archetype can be slotted and embedded into many different archetypes or simultaneously used alone. The division caused by this de-normalization adds complexity to queries that use the embedded archetypes. Archetypes used only as components of other concepts, such as openEHR-EHR-CLUSTER.person_name.v1, are seldom used to query data alone. For archetypes that are both mapped standalone and embedded into other archetypes, one must decide whether to query only the standalone mapped data tables or all data tables containing the archetype, according to the semantics of the archetype.
Second, de-normalization by index redundancy is achieved through the propagation of query data items. Proper indices can greatly improve the data query performance of relational databases, but it is inconvenient to add indices within archetypes due to their high normalization. For example, to retrieve the images of a specific exam request, the exam request must first be queried with the request order id, then the exam items contained in this exam request must be queried, and finally the images of each exam item must be queried. By introducing index propagation, indices can propagate through archetype slots in order to reduce the number of query steps. However, not all propagated indices are necessary to the target archetypes, and they consume space and computation time to be kept updated. Unnecessary propagated indices must therefore be manually disabled in the ARM constraints, according to the data requirements.
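The image-retrieval example above can be sketched as follows (a hypothetical propagated column and index; the names are illustrative):

    -- Propagate the request order id down the slot chain into the image
    -- table and index it, so images are fetched in one step instead of
    -- request -> exam item -> image.
    ALTER TABLE OImagingExamImageDetails
        ADD request_order_id VARCHAR(64) NULL;   -- propagated query data item

    CREATE NONCLUSTERED INDEX NCI_ImageDetails_request_order
        ON OImagingExamImageDetails (request_order_id);

    SELECT image_uri
    FROM   OImagingExamImageDetails
    WHERE  request_order_id = 'REQ-2014-000123'; -- hypothetical order id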
Meta-data level and data-level mapping
Archetypes maintained by domain experts as common knowledge in the centralized approach will gradually accumulate value. Archetypes representing domain concepts at the meta-data level greatly facilitate the use of clinical data and improve data-retrieval performance compared with concepts defined at the data level. As more archetypes are committed to encompass more domain concepts, it will become easier for clinicians to manipulate clinical data. For example, clinicians can use the red cell count in openEHR-EHR-OBSERVATION.lab_test-full_blood_count.v1 directly as a query condition, and data-retrieving performance improves because the result data of each laboratory test are stored separately. The ARM approach reduces the manual updating of data persistence when domain concepts change, and encourages meta-data level modelling instead of data-level modelling. However, the meta-data level model lacks the universal flexibility of the data-level model: to adapt to evolving requirements, the meta-data level model must first define archetypes and then regenerate the data persistence, while the generic structure of the data-level model does not change. In ARM, the meta-data level model and the data-level model are combined to utilize the advantages of both approaches: the archetype openEHR-EHR-OBSERVATION.lab_test-general.v1 is defined to improve flexibility, and specific archetypes are introduced to facilitate data queries.
Limitations of ARM
Although the ARM approach can provide performance similar to a conventional database, it may not meet the requirements of situations in which databases must be highly tuned. For example, there are no one-size-fits-all rules for indices; they must be adjusted according to the queries, or even to the internal data distribution of the database. ARM strikes a balance between automation and performance when generating data persistence from archetypes.
If the local data requirements can be properly satisfied, the published archetypes are of great value to ARM. When the hierarchy and structure of the archetypes are similar to the data requirements, few reorganizations are necessary before applying the ARM approach. If the archetypes and data requirements exhibit structural differences, many reconfigurations are necessary to align the archetypes with the data requirements, possibly including extensions and modifications of the archetypes. Moreover, the published archetypes are limited in scope compared with the enormous number of clinical concepts, so new archetypes must be developed when local data requirements are not satisfied. The process of archetype development is very restrictive and requires extensive professional knowledge to produce high-quality archetypes, which can demand much more effort than the conventional database design approach.
Effects on archetypes and templates
In general, the ARM approach conforms to the design principles of archetypes and templates. For example, each archetype should represent a single concept; this matches best practice in database normalization, and the composition of archetypes into large templates is introduced via slot embedment. Furthermore, ARM places more emphasis on subtle details in order to achieve better design of archetypes and templates. For instance, ARM requires related data items within archetypes to be organized as clusters in order to express their relationships explicitly. If two data items both have multiple occurrences and a one-to-one relationship, they must be altered to single occurrence and put into a multiple-occurrence cluster; this cluster can then be moved into a standalone cluster archetype and reused elsewhere. In template design, ARM requires indices to be added to the identification data item and the query data items. Although it can be difficult to assign the correct roles to the correct data items, these roles help the designer better understand the function of each data item when the templates are used as data entry documents, graphical user interface models, or data-retrieving queries. ARM refines the design principles of archetypes and templates by considering their practical use in clinical practice.
Conclusions
This paper presents an ARM persistence solution for archetype-based EHR systems. ARM uses archetypes to generate relational databases and achieves performance similar to conventionally-designed databases in data-retrieving queries. ARM takes great advantage of the CKM public archetype repository, facilitating data manipulation with well-defined archetypes for clinicians and achieving better performance in patient-searching queries. System components like ARM can facilitate the adoption of the openEHR architecture in EHR systems. The authors will continue to complete the mapping rules according to the semantics of archetypes, improve the ARM constraints and the performance of the generated databases, design and implement data access services, and thoroughly test the ARM approach in real clinical environments.
The difference in enteral nutrition (EN) versus total parenteral nutrition (TPN) in acute pancreatitis by etiology and disease severity
Background: Supplemental nutrition improves long-term outcomes/mortality in acute pancreatitis, with enteral nutrition (EN) superior to total parenteral nutrition (TPN). Differences in EN/TPN based upon etiology or disease severity have never been established. Methods: We performed a randomized retrospective case control on subjects admitted to Cooper University Hospital from 06/2007 to 01/2010 with acute pancreatitis who received supplemental nutrition (n = 161). These subjects were examined for caloric and protein demands. Subjects were matched for demographics, weight, albumin, and prealbumin, eliminating confounders. Demands among disease etiology/severity subgroups and their statistical significance were determined, as was the incidence of EN vs. TPN. Results: Significant differences were found in total caloric demands between the gallstone (n = 50) and alcohol (n = 36) groups (p = 0.04). Differences in protein demand were not established between these two groups (p = 0.24). Differences in caloric demand were found between bedside index for severity in acute pancreatitis (BISAP) scores of 1, 2, and 3 versus 5. Protein demands differed between a BISAP score of 0 and all others. 24% of the sample received EN. Conclusion: There are significant differences in total caloric demands for subjects with acute pancreatitis by disease severity and in gallstone versus alcohol-induced pancreatitis. These differences are not variations in the sample populations. Finally, EN is under-utilized despite knowledge of its value.
INTRODUCTION
The first reported review of acute pancreatitis was a 53-subject case series in 1889 by Reginald Heber Fitz at Massachusetts General Hospital [1][2][3][4]. Since the time of Fitz, we have defined several etiologies and formulated numerous methods for staging the disease severity of acute pancreatitis [5,6]. Although there are thousands of studies on acute pancreatitis, the treatment of etiology-based subpopulations has remained somewhat less understood and less uniform [7]. One homogenous treatment is that of supplemental nutrition in subjects with acute pancreatitis.
The value of EN versus TPN in patients with acute pancreatitis, namely lower mortality and a decreased incidence of MOF, operative intervention, systemic illness, and local septic complications, appears to be substantiated. However, none of the 779 studies performed have truly examined the difference in these benefits based upon the etiology or disease severity of acute pancreatitis. Additionally, EN is still underutilized in the setting of acute pancreatitis as a means of supplemental nutrition, despite its defined benefits [21,22].
In this study we shall demonstrate a statistically significant difference in the total caloric and total protein requirements of supplemental nutrition in acute pancreatitis patients. This difference will be based upon disease etiologies and severity, defined by BISAP score. Moreover, this study shall set out to demonstrate that despite the conclusive data favoring EN over TPN, a large percentage of the study population still received TPN.
Study Design
This retrospective case series study was approved by the IRB at Cooper University Hospital, and the protocol was consistent with all ethical guidelines set forth by the Declaration of Helsinki. Eligible subject data were screened by admission diagnosis of acute pancreatitis to Cooper University Hospital from June 2007 to January 2010.
Subjects were included by the following criteria: the initiation of EN or TPN during the course of their hospital admission, and the performance of vital signs, mental status examination, subject weight within 24 hours of admission, albumin and prealbumin levels, blood urea nitrogen (BUN) testing, white blood cell count (WBC), and radiographic imaging to examine for pleural effusions. Subjects were excluded if they were under the age of 18 years, did not receive alternative nutrition, or did not meet the necessary components for determining the BISAP score or the systemic inflammatory response syndrome (SIRS) criteria (Appendices 1 and 2, respectively) [5,6]. This yielded a sample size of 161 (n = 161) for the etiology-based group, with one subject excluded from the severity-based group because there were insufficient data to compute the BISAP score (Appendix 1.0).
Subgroup Analysis
Initial subgroup analysis separated subjects into etiology-based categories. Secondary subgroup analysis divided subjects based upon disease severity. To determine disease severity, the Bedside Index for Severity in Acute Pancreatitis (BISAP) score was utilized (Appendix 1) [5]. The age of enrolled subjects was entered in a data sheet to meet the BISAP criterion of age greater than or equal to 60. To establish the Systemic Inflammatory Response Syndrome (SIRS) criterion, subject temperature, heart rate, respiratory rate, white blood cell count, and percent immature forms on admission were recorded (Appendix 2.0) [6]. Subjects that met 2 or more of these criteria were considered to meet the criteria for SIRS. Blood urea nitrogen (BUN) was recorded and, if greater than or equal to 25, was considered positive. Finally, the presence of impaired mental status on admission and pleural effusions were recorded as the final 2 criteria for the BISAP score. Subject BISAP scores were scored on a scale of zero to five, with five being the most severe form of pancreatitis.
Calculating Nutritional Requirements
Of the 161 subjects enrolled in this study, the data were analyzed for the etiology of pancreatitis, the date of initiation of alternative nutrition, and the type of alternative nutrition initiated. After alternative nutrition was initiated, subjects' mean total caloric demands in kilocalories (kCal) and mean total protein demands in grams (gm) were calculated. The caloric demands were computed first, based upon subject age, height, weight, and sex, utilizing the Harris-Benedict basal energy expenditure (BEE) [15]. This formula in males is:
BEE = 66.47 + (13.75 × weight in kg) + (5.003 × height in cm) − (6.755 × age in years)
To calculate the Total Energy Expenditure (TEE), the BEE was then multiplied by the stress/activity factor (Appendix 3.0). Initial protein demands were calculated by evaluating the subject's stress/activity level, with factors estimating 10% - 15% of TEE as protein demand (Appendix 4.0).
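As a worked illustration with invented patient values (not taken from the study): for a 40-year-old male weighing 80 kg and standing 175 cm tall, BEE = 66.47 + (13.75 × 80) + (5.003 × 175) − (6.755 × 40) ≈ 1772 kCal/day. Applying an assumed stress/activity factor of 1.3 (the actual factors are listed in Appendix 3.0) gives TEE ≈ 2303 kCal/day, and estimating protein at 12.5% of TEE yields about 288 kCal, or roughly 72 gm of protein per day at 4 kCal/gm.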
After the initial baseline was calculated, daily weight measurements, serum albumin every 18 to 20 days, and prealbumin every 2 - 3 days were performed to assess improving nutritional status. Disease states such as end-stage renal disease, cardiovascular disease, diabetes, and chronic obstructive pulmonary disease were accounted for, as these states are known to alter nitrogen elimination.
Statistical Analysis
Sample size was calculated based upon the population prevalence of patients with acute pancreatitis, utilizing a Z score of 1.96 and a confidence interval of 95%.
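The standard single-proportion formula presumably underlying this calculation is n = Z^2 × p(1 − p) / e^2. As a worked example with assumed values (p and e are not reported by the study): with Z = 1.96, a conservative prevalence p = 0.5, and a margin of error e = 0.05, n = (1.96^2 × 0.25) / 0.05^2 ≈ 385.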
The subjects were separated based upon disease etiology and severity, defined as BISAP score (Appendix 1). These group analyses were examined, as above, for the date of alternative nutrition initiation, total caloric demands, and total protein demands. The mean, standard deviation, and standard error of the mean for these data were calculated, and p values were obtained utilizing standard t-test analysis. The appropriate CI was calculated based upon Z-score analysis, utilizing 1.96 when p values were under 0.05 but 2.25 when p values were under 0.01. The incidence of TPN and EN was determined by the number of occurrences of TPN/EN divided by the total sample population (n = 161). Sample population data were plotted on linear xy scatter plots. Total caloric and protein demands were graphed on double y-axis linear charts.
Study Population
The sample population (n = 161) was matched by sex, age, and race (Tables 1 and 2). The average age, weight, albumin, and prealbumin levels of the gallstone (n = 50) and alcohol (n = 36) induced groups showed no statistically significant differences (p = 0.53, 0.59, 0.81, and 0.42, respectively). Similar trends were seen among all etiology-based subgroups. Correspondingly, the severity-based subgroups showed no statistically significant difference when comparing average age, weight, prealbumin, and albumin. Linear xy data plots showed clustering of these data in both the etiology- and severity-based subpopulations, indicating low variability among these population components.
Nutritional Demands of Etiology-Based Subpopulation
The most prevalent populations observed were the gallstone (n = 50) and alcohol-induced (n = 36) subgroups. A significant difference in total caloric demands was detected for subjects with gallstone-induced pancreatitis when compared with alcohol-induced pancreatitis (1916.02 mean kCal vs. 2071.00 mean kCal, p < 0.04). Among the other study subpopulations in which statistically significant differences were found, the sample sizes were small. The protein demands were also compared among the etiology subgroups, showing no statistically significant difference between the gallstone and alcohol-induced subpopulations (97.84 gm versus 106.08 gm, p value = 0.24). Again, significant differences were shown among the other etiology-based subgroups, but the sample sizes were small.
Nutritional Demands of Severity/BISAP Score Based Subpopulation
The total sample size (n = 160) was also subdivided by disease severity using the BISAP score. One subject was excluded, as not all of the criteria needed to establish a BISAP score were met. When the data were analyzed with respect to disease severity via the BISAP score, the results showed significant differences between the mean caloric demands of BISAP score 1 (2061.80 kCal), BISAP score 2 (2026.96 kCal), and BISAP score 3 (2053.64 kCal) when compared to the total caloric demands of a BISAP score of 5 (1676.00 kCal) (p < 0.04, p < 0.05, and p < 0.05).
Protein demands of the subpopulation showed that a BISAP score of 0 (85.41 gm) was significantly different when compared to the protein demands of BISAP scores of 1 (102.78 gm, p < 0.01), 2 (109.30 gm, p < 0.003), and all higher BISAP scores.
Enteral Nutrition Versus Total Parenteral Nutrition
In the etiology-specific sample population, 37 of the 161 subjects (n = 161, 22.98%) received EN (Table 3). Similarly, in the disease severity subgroup, only 37 of 160 (n = 160, 23.13%) received EN (Table 4). The only subgroup in which a larger number of subjects received EN versus TPN was that of infectious etiology (7 out of 12, 58.33%). Table 4 shows the incidence of TPN compared with EN by severity of disease, defined as BISAP score.
DISCUSSION
In this study, a significant proportion of the total sample was composed of both gallstone (n = 50) and alcohol-induced pancreatitis (n = 36). This composition is consistent with patient populations found in most hospitals in the United States. Analysis between the gallstone and alcohol-induced subgroups showed a statistically significant difference in total caloric demands (p < 0.04; Figure 1). Based upon the calculations used for TEE, we then set out to determine whether this difference is based upon disease etiology or whether the components of this calculation were confounders to our data.
Included in the TEE calculation are subject weight, age, and sex to calculate BEE, as well as stress/activity factors [15]. When matching our sample population for these parameters, the study showed no significant difference in subject age, weight, prealbumin, or albumin levels, along with similar sex and race distributions among all sample populations. Therefore, differences in the total caloric demands of the alcohol and gallstone subgroups were likely due to inherent distinctions in catabolic stress and not variability within the sample population. Additionally, the differing catabolic states of these etiology-specific subgroups may indicate variation in their pathophysiology, though this is not confirmed by our data.
When examining the protein demands of the gallstone and alcohol subpopulations, no significant difference was found (p = 0.24; Figure 1). In this case, calculating protein demand required the measurement of prealbumin and albumin over time. The measurements of prealbumin and albumin were not found to have statistically significant differences (p = 0.81 and 0.42 for prealbumin and albumin, respectively), which is the likely explanation for the findings among the mean protein demands of the etiology-based subgroups. Other differences were seen between the protein and total caloric demands of the etiology-based subgroups; however, the size of the subpopulations may limit extrapolation of these data into clinical practice.
Another practical application of our study was to determine whether a difference exists in total caloric and protein nutrition among varying degrees of disease severity. In this study, we defined disease severity via the BISAP score [5,6]. As expected, BISAP scores of 0 had a lower total caloric demand when compared with BISAP scores of 1 - 3. Analysis of the total caloric demands showed statistically significant differences between BISAP scores of 1 - 3 (p < 0.04, p < 0.05, and p < 0.05, respectively) when compared with the most severe BISAP score of 5. An interesting trend in the BISAP subgroups is that scores of 1 - 3 showed higher total caloric demands when compared to BISAP scores of 4 - 5 (Figure 2). One possible hypothesis is that with progressive pancreatic tissue necrosis, the unit-based catabolic process may be high but with less viable tissue remaining.
Within the BISAP subgroup analysis were trends in protein demands. BISAP scores of 0 showed significant differences in mean protein demands when compared with all other BISAP scores. This may indicate that protein demands increase from less severe acute pancreatitis and then plateau as severity increases (Figure 2).
In addition to finding differences in nutrition by disease etiology and severity, this study also illustrated trends in the method of supplemental nutrition delivery. Early animal models using intravenous nutrition have shown an increase in pancreatic secretion via stimulation with amino acids such as L-tryptophan and L-phenylalanine, as well as with high-fat-content infusions [8][9][10]. As a result, TPN can cause an increase in pancreatic secretion that further exacerbates the autodigestive processes in acute pancreatitis [11][12][13]. Conversely, a decrease in pancreatic secretions through feedings distal to the ligament of Treitz (LOT) has also been established [23,24]. Therefore, early EN through feedings distal to the LOT may lessen disease progression or complications.
The first notable advantage of EN is lower mortality. In a review of 8 significant studies, 7.9% of subjects receiving EN and 15.8% of subjects receiving TPN died. Among subjects with severe pancreatitis, mortality was 3.1% with EN and 23.6% with TPN. Further studies examined the impact of EN and TPN on the incidence of multiple organ failure (MOF), operative intervention, systemic infections, and septic complications [14][15][16][17][18][19][20]. With this in mind, it is questionable why only a small proportion of subjects (22.98% of the etiology-based subgroup and 23.13% of the BISAP subgroup) received EN (Tables 3 and 4).
Recent international studies have examined the frequency of EN, with an estimated incidence of 20% - 50% [21,22]. These large international studies demonstrated that at other institutions it is common to use nil per os (NPO) regimens, consistent with our study results. In these sample groups, the initiation of EN was limited by: 1) a lack of specific skills for tube placement, or 2) a general opinion that placing enteral feeding tubes is complicated [25][26][27]. To counteract these beliefs, additional studies have suggested techniques for positioning feeding tubes beyond the ligament of Treitz. Techniques utilized in these studies occurred at the bedside and included fluoroscopic advancement with serial radiographic images and the use of IV metoclopramide to ease tube placement.
To further simplify the initiation of enteral feedings, studies examined nasogastric tube (NGT) placement in comparison with jejunal feedings and TPN in acute pancreatitis [28][29][30]. The culmination of these studies demonstrated no difference in long-term outcomes, disease-related mortality, or morbidity when comparing NGT feeding with TPN or jejunal feeding. Furthermore, EN was shown by Eckerwall, et al. to improve glycemic control when compared with TPN. Regardless, it appears that further education must be implemented about the benefits of EN versus TPN in subjects with acute pancreatitis.
Potential limitations of this study include the limited sample size of certain subgroup analyses. Some of the more rare causes of acute pancreatitis may have failed to show statistical differences because they lacked sufficient study power. Increasing future sample size may enable future analysis of nutritional requirements in less common etiologies of acute pancreatitis. Additionally, the analyses were primarily retrospective and have limitations regarding long-term outcomes. To counteract this potential limitation, our group will prospectively use calorimetric measurements or urine nitrogen measurements to further analyze metabolic demands.
CONCLUSIONS
This study demonstrated significant differences in mean total caloric demands between the gallstone and alcohol-induced subgroups. Protein demands of the gallstone and alcohol-induced subgroups were not statistically significantly different.
Analysis of total caloric demands showed a statistical difference between BISAP scores of 1, 2, and 3 and a BISAP score of 5. Protein demands were significantly different when comparing BISAP scores of 0 with all other BISAP scores. Finally, our study demonstrated poor use of EN in patients with acute pancreatitis receiving alternative nutritional support.
Figure 1. Mean caloric and protein demands of sample population by disease etiology.
Figure 2. Mean caloric and protein demand of sample population by disease severity/BISAP score.
Table 1. Study population by disease etiology.
Table 2. Study population by disease severity.
Table 3. Total parenteral nutrition versus enteral nutrition in pancreatitis by disease etiology.
Table 4. Total parenteral nutrition versus enteral nutrition in pancreatitis by disease severity.
Establishment of an immortalized mouse dermal papilla cell strain with optimized culture strategy
Dermal papilla (DP) plays important roles in hair follicle regeneration. Long-term culture of mouse DP cells can provide enough cells for the research and application of DP cells. We optimized the culture strategy for DP cells in three dimensions: stepwise dissection, collagen I coating, and an optimized culture medium. Based on the optimized culture strategy, we immortalized primary DP cells with the SV40 large T antigen and established several immortalized DP cell strains. By comparing molecular expression and morphologic characteristics with primary DP cells, we found that one cell strain, named iDP6, was similar to primary DP cells. Further characterization illustrated that iDP6 expresses FGF7 and α-SMA and has alkaline phosphatase activity. During the characterization of the immortalized DP cell strains, we also found that cells in DP are heterogeneous. We successfully optimized the culture strategy for DP cells and established an immortalized DP cell strain suitable for the research and application of DP cells.
INTRODUCTION
Hair follicles have the characteristic of periodical growth, which provides a nice model for the research of tissue regeneration. Dermal papilla (DP) cells have regular contact with hair follicle stem cells and may play important roles in the regeneration of the hair follicle (Su et al., 2017; Woo et al., 2017). The signals from DP may regulate the regeneration of hair follicles and melanocytes (Guo et al., 2016; Li et al., 2013). Dissociated human DP cells induce hair follicle neogenesis in grafted dermal-epidermal composites (Thangapazham et al., 2014). The limitation for DP research lies in the difficulty of culturing DP cells (Morgan, 2014). So far, the intact human dermal papilla transcriptional signature can be partially restored by growth of papilla cells in 3D spheroid cultures (Topouzi et al., 2017). When the culture environment is changed to a 2D environment, very rapid and profound molecular signature changes are observed (Higgins et al., 2013; Lin et al., 2016). The isolation of DP by surgical microdissection has been established in mouse vibrissae follicles and in human hair follicles (Gledhill, Gardner & Jahoda, 2013), but the isolated DP cells cannot be cultured long-term. The isolation of primary DP cells is time-consuming, allows only limited population doublings, and is subject to inter-individual and intra-individual variation. It is therefore necessary to establish stable DP cell lines to investigate hair biology. Immortalized human DP cell lines have been established and have hair growth promoting effects (Shin et al., 2011; Won et al., 2010). In rodent animal models, immortalized rat DP cells have already been obtained (Kang et al., 2015). However, an effective immortalized mouse DP cell line has yet to be constructed. The goals of this project are to optimize the isolation and culture conditions of DP from mouse skin and to establish an immortalized DP cell line for future research.
MATERIALS AND METHODS
Isolation and culture of DP cells

C57BL/6 mice were obtained from and housed in the laboratory animal center of the Army Medical University, Chongqing, China. All the animal-related procedures were conducted in strict accordance with the approved institutional animal care and maintenance protocols. All experimental protocols were approved by the Laboratory Animal Welfare and Ethics Committee of the Army Medical University. Permission number for producing animals: SCXK-PLA-20120011. Permission number for using animals: SYXK-PLA-20120031.
A 9-day-old C57BL/6 mouse was sacrificed according to the standard protocol. The vibrissa pads were cut off bilaterally with iris scissors in a 100-mm plate. Vibrissa pads were rinsed with PBS, and then hair follicles were dissected together with their connective tissue sheath using 27G syringe needles under a dissecting microscope. The dissected hair follicles were rinsed with PBS and incubated with 0.25% dispase for 20 min at room temperature.
Dissected hair follicles were transferred into a new 100-mm plate and thoroughly washed with PBS. A horizontal cut directly above the dermal papilla was made. After that, the dermal papilla was dissected out of the dermal sheath using 27G syringe needles under a dissecting microscope. The dissected DP tissues were then transferred into a 24-well plate coated with 10 µg/cm2 collagen I. DP medium was added after a 30-min incubation at 37 °C. DP cells appeared about 3 days later. Cells reached confluence after 2 weeks and were passaged onto collagen I-coated plates. The DP medium consisted of α-MEM (Gibco, Waltham, MA, USA), 10% FBS (Gibco, USA), 1 × sodium pyruvate (Gibco, USA), 1 × non-essential amino acids (Gibco, USA), 1 × penicillin-streptomycin, and 10 ng/ml bFGF (PeproTech, Rocky Hill, NJ, USA). During the optimization process, the classical DP medium was used as a control. The control medium consisted of DMEM (Gibco, USA), 10% FBS (Gibco, USA), and 1 × penicillin-streptomycin. Another control medium was the classical DP medium with the addition of 10 ng/ml bFGF (PeproTech, USA).
Establishment of immortalized DP cell line
Retrovirus carrying the SV40 large T antigen flanked by FRT sites was prepared as previously reported (Yang et al., 2012). Primary DP cells were plated in a 60-mm dish at 50% confluency in the morning. After attachment, polybrene (final concentration 10 µg/mL) and retrovirus (3.0 × 10^7 TU) were added together. On the second day, the supernatant was aspirated from the dish and fresh DP medium was added. At the same time, hygromycin was added at a final concentration of 200 µg/mL. The culture medium was changed every two days until all the cells in the control group had died. Antibiotic-selected DP cells were diluted with DP medium to 1-2 cells per 100 µL, and 100 µL of diluted cells was added into every well of 96-well plates. Wells with only one cell were labeled and monitored every 2 days. Cells in the labeled wells were passaged when the cell number of a clone reached 20 or more.
RT-PCR
Total RNA of immortalized DP cells was extracted with the Eastep Super total RNA extraction kit (Promega, Beijing, China) according to the manufacturer's protocol. Complementary DNA was synthesized from RNA using the ReverTra Ace cDNA synthesis kit (Toyobo, Osaka, Japan) according to the manufacturer's protocol. The expression of several genes was determined by PCR (Bio-Rad, Hercules, CA, USA) with the synthesized cDNA as template. The primers used are shown in Table 1. PCR mastermix (Novoprotein, Shanghai, China) was used for amplification. The annealing temperatures (Tm) and product sizes for the primers are also shown in Table 1.
Immunocytochemistry staining
Cover slides were placed in a 24-well plate, and cells were plated on the cover slides. Twenty-four hours later, cover slides were rinsed with PBS and fixed with acetone. Then, the cover slides were rinsed with PBS and incubated with 5% goat serum in PBS at room temperature for 1 h. After that, slides were incubated with a rabbit anti-FGF7 antibody (1:100; Boster, Wuhan, China) or a rabbit anti-α-SMA antibody (1:200; Bioss, Beijing, China) at 4 °C overnight and subsequently with appropriate secondary antibodies (1:500; ZSGB-bio, Beijing, China). The slides were counterstained with DAPI (1:10,000; Beyotime, Shanghai, China) for 10 min. At last, the cover slides were moved to microscope slides, mounted with antifade mounting medium (Beyotime, Shanghai, China), and observed under a fluorescent microscope. The immunostaining experiments were repeated three times.
Alkaline phosphatase staining
Cover slides were placed in a 24-well plate, and cells were plated on the cover slides. Twenty-four hours later, cover slides were rinsed with PBS and fixed with in situ fixation solution (Beyotime, Shanghai, China) for 10 min. Then the cover slides were rinsed with PBS five times. Freshly made NBT/BCIP staining buffer (Beyotime, Shanghai, China) or BM purple (Roche, Indianapolis, IN, USA) was added into the wells. The plate was covered with aluminium foil in the dark. Color change was monitored every 15 min to avoid non-specific staining. After the color change appeared, the staining solution was aspirated and the cells were washed twice with 1 × PBS. At last, the cover slides were dehydrated, cleared, moved to microscope slides, mounted with Permount (ZSGB-bio, Beijing, China), and observed under a microscope. The AP staining experiments were performed twice.
Detection of immortalization
Primary DP cells and iDP6 cells were cultured. The iDP6 cells were treated with AdGFP (an adenovirus expressing GFP), AdFlip (an adenovirus expressing Flp recombinase, which acts on the FRT sites and thus removes the expression of SV40), or PBS. Forty-eight hours later, cells were collected and total proteins were extracted with RIPA lysis buffer (Beyotime, Shanghai, China). Then, total proteins were loaded onto a 1% SDS-PAGE gel (Beyotime, China) and transferred to a PVDF membrane (Bio-Rad, Hercules, CA, USA). The PVDF membrane was incubated with anti-SV40 (1:1,000; Santa Cruz Biotechnology, Dallas, TX, USA) and anti-GAPDH (1:500; ZSGB-bio, Beijing, China) antibodies. HRP-labelled secondary antibodies were used, and the results were observed with a ChemiDoc Touch Imaging System (Bio-Rad, Hercules, CA, USA). The experiment on reversing immortalization was performed twice.
DP cells can be cultured long-term with the optimized strategy
We optimized the culture strategy for DP cells in three dimensions: plate coating, dissecting method, and culture medium (Fig. 1). The optimized dissecting method worked well for obtaining primary DP cells. DP cells grew better on plates coated with collagen I than on uncoated plates. The morphology of DP cells did not differ significantly between the classical DP culture medium (DMEM with 10% FBS) and the classical DP culture medium with the addition of bFGF (data not shown). Compared with the classical DP culture medium, primary DP cells grew better in the optimized culture medium (Figs. 2A-2D). The morphology of passaged DP cells much more closely resembled that of primary DP cells in the optimized culture medium. The cultured DP cells retained the characteristic of agglutinative growth in the optimized culture medium, but not in the control medium (Figs. 2E-2H).
Figure 1 Optimized strategy for the isolation and culture of DP cells.
At first, the whole skin of the vibrissa area was cut; then the DP tissue was separated from the skin together with the vibrissa pad, and the DP tissue was collected after dispase digestion. After that, the collected DP tissue was cultured with our optimized culture medium in a collagen I-coated plate.
DP cells are heterogeneous
Primary DP cells were immortalized with the SV40 system. DP cells before antibiotic selection were designated 0#. After antibiotic selection, DP cell strains were selected by limiting dilution. Not every single cell eventually grew into a clone. Cell strains were named in the order in which they grew into clones from a single cell. In total, 19 cell strains survived, named iDP1 to iDP19 (1#-19#). The morphologic characteristics of the selected cell strains differed from each other (Fig. 3). Some cells still looked like fibroblasts, whereas others changed into epithelial-like cells (Fig. 3G). iDP6 retained the characteristic of agglutinative growth, while the others lost it. Notably, iDP10 grew clonally, which implied that this cell line was more primitive. For these cell strains, the expression patterns of DP cell markers were also determined by RT-PCR, including FGF7, BMP6, Sox2, Tbx18, Sostdc, α-SMA, and noggin (Fig. 4). All these data indicate that the cell strains were quite different from each other. Since each cell line grew from one single DP cell, the DP cells from one DP tissue are heterogeneous.

iDP6 keeps the molecular characteristics of primary DP cells

Taking morphology and mRNA expression characteristics together, iDP6 is the strain most similar to the primary DP cells. To determine whether iDP6 can be used in DP research, the activity of alkaline phosphatase was determined by AP staining, and the expression of FGF7 and α-SMA was determined by immunocytochemistry as well. At the protein level, just as in situ, some iDP6 cells still had high AP activity (Figs. 5A-5F). FGF7 was expressed in the cytoplasm of all iDP6 cells (Figs. 5G-5L). α-SMA was expressed in both the cytoplasm and the nucleus of all iDP6 cells (Figs. 5M-5R). Although the AP activity in iDP6 was lower than in primary DP cells, the expression patterns of FGF7 and α-SMA were similar to those of primary DP cells.
The immortalization of DP cells is reversible
First, the expression of SV40 was determined to make sure that iDP6 cells were immortalized. Western blot showed that iDP6 cells expressed SV40, while primary DP cells did not (Fig. 6A). Then, AdFlip was used to remove the expression of SV40 in iDP6 cells.
AdGFP and PBS were used as controls. The results showed that, compared with the control groups, the expression of SV40 decreased 48 h after treatment with AdFlip (Fig. 6B). These results demonstrated that iDP6 was successfully immortalized and that the immortalization process is reversible.
DISCUSSION
Primary cell culture needs to simulate the in vivo environment of the cells. In anagen, DP cells reside in the center of the hair bulb, surrounded by a single layer of dermal cells. Usually, DP cells periodically interact with epithelial cells outside of this single layer of dermal cells. In telogen, the DP tissue is somewhat distant from hair follicle stem cells (HFSCs). At anagen onset, it begins to move close to HFSCs and keeps interacting with HFSCs during early anagen. In late anagen, it begins to move away from HFSCs, and in catagen it stays away from HFSCs. Thus, the environment for DP cells in vivo varies with the hair cycle (Bassino et al., 2015). In addition, exogenous connective tissue may also impact the function of DP cells (Zhang et al., 2014b). To exclude contamination from other mesenchymal cells, epithelial cells, and adipose tissue, the use of microdissection techniques is preferred (Zhang et al., 2014a). To culture DP cells in vitro, all these conditions should be taken into consideration. As the main component of connective tissue, collagen is mostly secreted by fibroblasts. Collagen is widely used in tissue engineering and cell culture.
We coated the plate with collagen and found that it supported the growth of DP cells. The DP is relatively independent in the anagen hair follicle; however, it is too small to be isolated quickly, so we used a stepwise method to isolate the DP, and DP cells grew out from the isolated tissue. The most important condition for cell culture is the culture medium. Since classical DMEM with 10% FBS cannot support long-term culture of mouse DP cells, we sought a culture medium suited to these specialized fibroblasts and found that an α-MEM-based mesenchymal stem cell culture medium worked well. Additionally, bFGF is a critical component of human embryonic stem cell culture medium; in conjunction with BMP4, bFGF promotes differentiation of stem cells to mesodermal lineages (Yuan et al., 2013). The DP originates from the mesoderm, so we added bFGF to the medium. However, since the classical medium with and without the addition of bFGF did not differ significantly in supporting the culture of primary DP cells, the main effective additions in the optimized medium may be sodium pyruvate and non-essential amino acids. Based on these data, the optimized strategy works well for isolating and long-term culturing of DP cells. To our knowledge, we are the first to use this strategy to culture DP cells.
Several molecules are reported to be expressed in DP cells, including Sox2, Tbx18, Sostdc, α-SMA and noggin (Weber & Chuong, 2013). To characterize the immortalized DP cells, all these markers were tested. Recently, we found that two secreted proteins, FGF7 and BMP6, were also expressed in DP cells in vivo. FGF signaling was reported to regulate the size of the dermal papilla (Yue et al., 2012), and BMP7 was reported to attenuate fibroblast-like differentiation of DP cells (Bin et al., 2013). Thus, the expression of FGF7 and BMP6 was also tested. Both the marker expression and the morphology indicate that the immortalized DP cell strains are heterogeneous and that iDP6 is a good cell strain to represent primary DP cells. It has been reported that human DP cells have stem cell-like phenotypes (Kiratipaiboon, Tengamnuay & Chanvorachote, 2016), that neural crest stem cell-like cells can be isolated from rat vibrissa DP (Li et al., 2014), and that dermal stem cells also lie in the mouse dermal sheath (Rahmani et al., 2014). So it is reasonable that DP cells are heterogeneous, but exactly how many kinds of cells are in the DP remains to be discovered (Yang et al., 2017). Single-cell assay technologies may help.
CONCLUSIONS
From the results of the present study, it can be concluded that we optimized the dissection and culture of mouse DP cells in three dimensions: stepwise dissection, collagen I-coated plates, and an α-MEM-based culture medium. Based on these optimized strategies, we successfully immortalized the cultured primary DP cells by the addition of the SV40 large T antigen. We selected several cell strains, characterized them, and found the iDP6 cell strain to be similar to primary DP cells. In addition, the SV40 large T antigen in iDP6 can be removed by the addition of AdFlip. In summary, we established an immortalized DP cell strain that can be used in future research.
Adoption of Artificial Intelligence and Cutting-Edge Technologies for Production System Sustainability: A Moderator-Mediation Analysis
Cutting-edge technologies like big data analytics (BDA), artificial intelligence (AI), quantum computing, blockchain, and digital twins have a profound impact on the sustainability of the production system. In addition, it is argued that turbulence in technology could negatively impact the adoption of these technologies and adversely affect the sustainability of the firm's production system. The present study demonstrates that technological turbulence, as a moderator, could impact the relationships between the sustainability of the production system and its predictors. The study further analyses the mediating role of operational sustainability, which could impact firm performance. A theoretical model has been developed that is underpinned by dynamic capability view (DCV) theory and firm absorptive capacity theory. The model was verified by PLS-SEM with 412 responses from various manufacturing firms in India. There exists a positive and significant influence of AI and other cutting-edge technologies on keeping the production system sustainable.
Introduction
The emergence of several modern technologies has attracted the attention of different industries throughout the world (Ivanov et al., 2020; Queiroz et al., 2020a). Consequently, research areas covering manufacturing science, industrial engineering, operations, and so on have been affected by the influence of these cutting-edge technologies. A ground-breaking technology like artificial intelligence (AI) is helping to remodel the supply-chain networks, operational management, and production systems of firms (Queiroz et al., 2020b; Ivanov et al., 2020; Basile et al., 2021; Thrassou et al., 2021). AI can be integrated with other existing technologies in a firm to understand its business pattern (Sahu et al., 2020; Rodríguez-espíndola et al., 2020). Operation as well as production management systems have undergone a dramatic paradigm shift owing to different applications of AI such as ML (machine learning), DL (deep learning), and NLP (natural language processing). With the help of these technologies, it has been possible to continue the production-related internal operations of firms remotely, and firms can be fully operated using robots. Business value created with the help of AI is projected to reach $3.9 trillion by 2022, up from $1.2 trillion in 2018 (Richards et al., 2019; Dhamija & Bag, 2020). Apart from the impacts of AI technology, other groundbreaking technologies like IoT, blockchain, BDA, and quantum computing have affected the production systems and operation management resilience of firms (Belhadi et al., 2021; Gupta et al., 2022; Kamble et al., 2018). With the help of IoT technology and drones, it has become easy to optimize inventory-checking operations (Dolgui et al., 2020; Rimba et al., 2020). These technologies have helped improve the relationships between transport workers, suppliers, and customers. Chatbots can help follow up the orders received by a firm automatically (Vrontis et al., 2021; Chaudhuri et al., 2021; Sakka et al., 2021; Duan & Da, 2021). Even in an apocalyptic, unforeseen situation, these cutting-edge technologies are deemed helpful for overcoming challenges, sustaining operational efficiency and the flow of production, and keeping the demand-supply situation unhindered (Geunes & Su, 2020; Kim et al., 2020; Baabdullah et al., 2021). Since little is known regarding how AI integrated with other cutting-edge technologies could improve operational activities as well as production management systems, the present study attempts to investigate how AI integrated with new-edge technologies could improve firm performance, mediated through contextual factors like production and operational system sustainability, with the moderating influence of technology turbulence. Also, little is known about addressing unpredictable environmental events that strain the resilience of operational management and production systems (Dwivedi et al., 2019; Wamba et al., 2019b; Ivanov et al., 2020; Kar et al., 2021). This study strives to sort out these growing issues by identifying the determinants impacting production system and operational sustainability, using the DCV (dynamic capability view) theory of Teece et al. (1997) as well as the absorptive capacity theory (Cohen & Levinthal, 1990). Against this backdrop, the present study attempts to answer the research questions (RQs) below.
RQ1: How does the adoption of artificial intelligence and cutting-edge technologies help production system sustainability?

RQ2: Does technology turbulence moderate the relationship between the adoption of cutting-edge technologies and production system sustainability?
The organization of the remaining parts of the paper is as follows. Section 2 presents the background study with the theoretical underpinning, followed by hypotheses formulation and development of the theoretical model in Section 3. Next, Section 4 presents the design of the research, followed by data analysis with results in Section 5. Thereafter, Section 6 presents the study synthesis and discussion, which includes the implications of this study and the conclusion, along with limitations and future scope.
Applications of Cutting-Edge Technologies
AI is considered the key technology for achieving persuasive operational transformation in the context of contemporary firm setups, since AI technology is emerging in different forms (Aloini et al., 2018). AI reflects the ability of machines to behave, think, and perform tasks similar to humans (Schmidt & Hazır, 2019; Gunasekaran et al., 2018). Although the concept of AI was initiated as early as 1956, its applications have gained momentum in recent times (Dolgui et al., 2018, 2019; Chatterjee et al., 2021a). Further, expert and agent systems, genetic algorithms, and BDA are deemed to be in proximity to AI (Gupta et al., 2019; Wamba et al., 2019a; Rana et al., 2021; Sequeiros et al., 2021). With the help of AI technology, the operational management of a firm can be improved (Wang et al., 2018). Firms are investing considerable amounts in improving their information technology (IT) enabled applications to develop intra-firm and inter-firm operational efficiency for eventual improvement of their performance (Chakravarty et al., 2013; Gupta et al., 2021). Studies have documented that the application of BDA in the daily operational activities of firms has provided fruitful results, especially for large firms such as Uber, Amazon, and Walmart (Kamble & Gunasekaran, 2020; Schildt, 2017; Vahn, 2014). Moreover, quantum computing technology can produce outputs that classical computers are not able to produce (Gupta et al., 2022), and it is hoped that quantum computing will accelerate the tasks performed by machine learning (Biamonte et al., 2017; Chatterjee et al., 2021; Khorana et al., 2021; Preskill, 2018). Again, for tracking indoor and outdoor assets, IoT technology is deemed helpful (Choi et al., 2012; Mikalef et al., 2021; Wang et al., 2010). IoT technology supports information sharing in a firm, which can influence the production and operational systems of the firm (Yan et al., 2016). Another cutting-edge technology, blockchain, brings benefits to the production management system and supports the operational sustainability of a firm (George et al., 2019). This technology is deemed helpful for driving value to businesses since it can effectively solve problems related to maintaining the consistency of records, authenticating user identity, maintaining auditable information trails, and so on (Centobelli et al., 2021; Chatterjee et al., 2021b). These efficiencies help impact the firms' production efficiency and operational sustainability (Croom et al., 2018). Also, to control and reduce negative social and environmental impacts, manufacturing firms place much emphasis on operational sustainability issues (Haapala et al., 2013; Mohanty & Prakash, 2017; Wamba et al., 2019a). Operational performance comprising cost, quality, delivery, and flexibility is impacted by sustainability performance, provided the firms can overcome the impediments caused by technological turbulence (Wiengarten & Longoni, 2015; Santos et al., 2021). The extant literature suggests that several cutting-edge technologies like social media platforms, big data, AI, blockchain, cloud computing, and IoT could impact firm performance even under the influence of technology turbulence (Centobelli et al., 2021; George et al., 2019; Wamba et al., 2019a). But how all these technologies, in an integrated manner, could impact firm performance under the moderating influence of technology turbulence remained underexplored.
Contextualization of Theory
For the identification of the different factors impacting production system sustainability, the present research has used the DCV theory of Teece et al. (1997) and the absorptive capacity theory of Cohen and Levinthal (1990). To fulfill its objectives, a firm needs to possess certain specific capabilities, which are considered valued attributes of the firm (Schreyögg & Kliesch-Eberl, 2007). The market environment is changing globally, and to address such changing business situations, a firm should not depend only on its static (existing) abilities but on its dynamic capabilities (DC) as well. DC is explained as the firm's "ability to integrate, build and reconfigure internal and external resources-competencies to address and possibly shape rapidly changing business environments" (Teece, 2012, p. 1395). Dynamic capability comprises multiple capabilities: seizing, sensing, and transforming or reconfiguring capabilities (Teece, 2014). All these dynamic capabilities help a firm trap suitable business opportunities to address dynamic market environments. Thus, by applying sensing, seizing, and transforming abilities, a firm can ensure that production system sustainability is maintained. In this perspective, it is argued that a firm must have the capacity to utilize the abilities of cutting-edge technologies like AI, IoT, blockchain, quantum computing, big data analytics, and so on, which assist firms in sensing, seizing, and transforming the opportunities for addressing the dynamic market environment impacting production system sustainability (Kamble et al., 2021). To use these technologies, firms need to recognize the opportunities that are essential to fulfill their goals and objectives. Then the firms need to assimilate and understand the knowledge obtained from such opportunities, and this accumulated knowledge needs to be applied by properly reconfiguring it. Hence, firms must have dynamic capabilities as well as the abilities to recognize, assimilate, and use the available opportunities. This concept corroborates absorptive capacity theory. These abilities are construed to overcome the technological turbulence that might impede the processes and procedures by which firms utilize these modern technologies to enhance existing firm capabilities and improve their performance.
Hypotheses Formulation and Development of a Theoretical Model
The background studies, together with the theories, help identify the factors that can eventually influence firm performance. These are explained here, along with an explanation of the moderating effects of the moderator technology turbulence. Attempts have also been made to develop hypotheses that support the development of a conceptual model.
AI and Cutting-Edge Technologies
AI includes machines that have the capability to work like human beings (Mishra et al., 2019). Different studies have explained the usefulness of AI (Nilsson, 2014). AI is interpreted as machines that possess intellectual capabilities like humans (McGettigan, 2016; Kamble et al., 2020). Many firms are using AI technology. Airbus is known for using AI to examine its production problems and analyze data to arrive at solutions (Ransbotham et al., 2017). KPMG (Australia) is automating its auditing services, and Bridgewater Associates is engaged in automating its business operations activities with the help of AI (Tredinnick, 2017). Firms specialized in finance, telecommunications, and marketing are using AI technology to become more competitive (Oana et al., 2017). The application of AI technology can be regarded as a dynamic capability of a firm that is perceived to impact the production system sustainability of the firm, corroborating the DCV (Kamble et al., 2020). Accordingly, the following hypothesis is articulated.
H1a: Adoption of AI-based applications (AAI) positively impacts the production system sustainability (PSS) of a firm.
Big data analytics (BDA) technology and related applications have occupied the frontline of operations and production management and information systems (IS) management (Wamba et al., 2019a; Kamble & Gunasekaran, 2020). BDA is explained as "a holistic process that invites the collection, analysis, use, and interpretation of data for various functional divisions with a view to gaining actionable insights, creating business value, and establishing competitive advantage" (Akter et al., 2016, p. 178; Kamble et al., 2018). BDA is used in business activities and for data analysis (Wamba & Akter, 2019). BDA can explore ways of extracting valuable information from several sources of data, which helps a firm gain competitiveness (Akter et al., 2016; Shams & Solima, 2019). BDA possesses four dimensions: velocity, volume, accessibility, and variety (Morabito, 2015). Studies have demonstrated that BDA capability has a significant association with firm performance and production system sustainability (Aydiner et al., 2019; Kamble & Gunasekaran, 2020). Accordingly, the following hypothesis is developed.
H1b: Adoption of big data analytics (BDA) positively impacts the production system sustainability (PSS) of a firm.
The adoption of quantum computing (AQC) takes the help of quantum computing technology, which can produce outputs that classical computers cannot provide effectively (Al-Rabadi, 2009). This modern cutting-edge technology possesses algorithms that can accelerate tasks done by machine learning (Biamonte et al., 2017; Preskill, 2018). If particular information is required by a firm from a huge volume of collected data and cannot be found using ordinary technology, the firm's quantum computing capability may help search for that information quickly and accurately (Oxford Analytica, 2018). This dynamic ability of the firm is perceived to impact the sustainability of the production system even in a dynamic market environment, in line with the DCV (Kamble et al., 2020). As such, the hypothesis below is provided.
H1c: Adoption of quantum computing (AQC) has a positive impact on the production system sustainability (PSS) of a firm.
Adoption of IoT technology is perceived to be necessary for both indoor and outdoor asset tracking (Choi et al., 2012; Wang et al., 2010). Studies demonstrate that applications of IoT technology help a firm optimize its floor operations, improve production sustainability along with product logistic operations, and effectively recognize and assimilate external congenial opportunities, which is in consonance with absorptive capacity (Kamble et al., 2020; Zhong et al., 2013). IoT technology comprises the electronic product code (EPC) as well as an EPC network, which can provide a scalable information system for different applications like information sharing to impact production system sustainability (Thiesse et al., 2009; Yan et al., 2016). Such prolific capability of IoT technology is perceived to impact the sustainability of the production system of a firm (Kamble et al., 2020). Accordingly, the following hypothesis is proposed.
H1d: Adoption of IoT technology (AIT) positively impacts the production system sustainability (PSS) of a firm.
Adoption of another cutting-edge technology, blockchain, has a positive contribution towards firm performance (Kamble et al., 2021). Blockchain is a digital ledger that records past transactions, distributed across several systems called nodes and operated by various users (George et al., 2019). This allows different participants to introduce records supported by immutable and validated cryptographic protection (Oh & Shong, 2017). With the help of advanced cryptography, blockchain functions as a distributed open-service database (Kirkland & Tapscott, 2016). Applications using blockchain technology are considered very difficult to hack and, as such, blockchain is regarded as a trusted platform (Orcutt, 2019). Hence, the use of blockchain technology, considered a dynamic ability to address any dynamic market environment, is perceived to impact the production system sustainability of a firm, which complements the DCV (Kamble et al., 2021). It is thus hypothesized as below.
H1e: Adoption of blockchain technology (ABT) positively impacts the production system sustainability (PSS) of a firm.
Production System Sustainability (PSS)
A sustainable production system is concerned with the creation of goods and services using processes and procedures that are non-polluting and do not waste natural resources. The processes need to be safe, economically viable, and healthful for employees, consumers, and the community (Lozada-Contreras et al., 2021). Production systems should be sustainable so that resources and energy are used efficiently and sustainable products are produced. Focus is given to the recycling of products and the usage of renewable energy for production in firms (Havenvid et al., 2016). A sustainable production system covers four areas: social sustainability, human sustainability, economic sustainability, and environmental sustainability (Jassem et al., 2021). Firms must possess the ability to address the future needs of consumers after meeting the needs of the present (Geng et al., 2021; Kamble et al., 2020). If firms fulfill these sustainability-related conditions, it is perceived that this will help develop sustainable operations, which would eventually ameliorate the firm's performance. Accordingly, hypotheses are developed below.

H2a: Production system sustainability (PSS) positively impacts the operational sustainability (OPS) of a firm.

H2b: Production system sustainability (PSS) positively impacts firm performance (FIP).
Operational Sustainability (OPS) and Firm Performance (FIP)
Operations that can meet present requirements while keeping the option to meet future necessities are known as operationally sustainable (Longoni et al., 2014). Operational sustainability can also be interpreted as the maintenance of existing practices without endangering future resources (Kleindorfer et al., 2005). To achieve operational sustainability, a firm needs to articulate sustainable action plans for production, take congenial actions to avoid wastage of energy, and practice sustainable operational systems, which are perceived to impact the market share, revenue, cash flow, profitability, and value-added productivity of the firm (Gimenez et al., 2012). Accordingly, the following hypothesis is formulated.
H3: Operational sustainability (OPS) positively influences the performance of a firm (FIP).
Moderating Effects of Technology Turbulence (TT)
When the relationship between two variables is not fixed, a third variable may strengthen the relationship, weaken it, or, in some cases, even reverse its direction. This third variable is interpreted as a 'moderating variable'. Technology turbulence (TT) is defined as "the rate of change of product and process technologies used to transform inputs into outputs" (Ngamkroeckjoti & Speece, 2008, p. 413). Again, it has been opined that technology turbulence "is caused by changes in, and interaction between, the various environmental factors especially because of advances in technology and the confluence of computer, telecommunication, and media industries" (Mason, 2007, p. 11). Studies have demonstrated that TT can influence the relationship between market orientation and performance (Appiah-Adu, 1997). TT emerges when there is a quick change in technology; in such a case, the adoption of new technology becomes problematic due to the rapid change and is impeded by the users (Ngamkroeckjoti & Speece, 2008). Accordingly, the following hypotheses are formulated.

H4a: The moderator TT (technology turbulence) influences the relationship AAI → PSS.

H4b: The moderator TT (technology turbulence) influences the relationship BDA → PSS.

H4c: The moderator TT (technology turbulence) influences the relationship AQC → PSS.

H4d: The moderator TT (technology turbulence) influences the relationship AIT → PSS.

H4e: The moderator TT (technology turbulence) influences the relationship ABT → PSS.

These discussions help to develop a theoretical model, which is shown in Fig. 1.
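As a simple statistical illustration of what a moderating variable does (outside the PLS framework used later in this study), a moderator can be modeled as an interaction term in a regression. The sketch below uses simulated data; all variable names and effect sizes are hypothetical, chosen only to mirror the study's constructs.

```python
# Minimal sketch of testing a moderator as an interaction term in OLS.
# Variable names (aai, tt, pss) mirror this study's constructs, but the
# data and effect sizes are invented for illustration.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 412
aai = rng.normal(size=n)            # predictor, e.g., adoption of AI (AAI)
tt = rng.normal(size=n)             # moderator, e.g., technology turbulence (TT)
# Outcome with a true interaction: TT strengthens the AAI -> PSS slope.
pss = 0.3 * aai + 0.1 * tt + 0.15 * aai * tt + rng.normal(scale=0.5, size=n)

X = sm.add_constant(np.column_stack([aai, tt, aai * tt]))
fit = sm.OLS(pss, X).fit()
print(fit.summary())                # a significant x3 coefficient indicates moderation
```

A significant coefficient on the product term is the regression analogue of the multi-group differences tested later via PLS-MGA.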
Design of Research
Hypotheses have been tested and validated with the help of the PLS-SEM approach. This approach is simpler and can synthesize studies that are exploratory in nature (Peng & Lai, 2012; Vinzi et al., 2010). The approach accommodates data that are not normally distributed, which is not permitted when analyzing covariance-based (CB) SEM (Rigdon et al., 2017; Kock & Hadaya, 2018). The process involves a survey in which the responses of participants are quantified on a recognized scale. Here, a 5-point Likert scale is used because it is simple to use and provides respondents the opportunity to take a neutral stand through the option 'neither disagree nor agree'.
Development of the Questionnaire
Extant literature and theory helped prepare the questionnaire, whose items were articulated in statement form. The questions were pre-tested with the help of a comparatively small number of participants. The pre-test results helped improve the format and readability so that participants would have no problem responding. After the pre-test, a pilot test was performed to ascertain the understandability of the questions and to remove their complexities. The pilot test also helped estimate the reliability of the items, and the results led to dropping some items that did not adequately explain their corresponding constructs. Finally, the opinion of experts possessing adequate knowledge and expertise in the domain of the present study was taken to enhance the comprehensiveness of the items and to fine-tune them. In this way, twenty-seven questions were prepared. The summary of the questionnaire is shown in Appendix Table 7.
Data Collection
This study aims to investigate the impact of cutting-edge technologies on the betterment of a firm's performance. Hence, data needed to be obtained from respondents who have at least a preliminary concept of the domain of the present study, so the purposive sampling technique (Apostolopoulos & Liargovas, 2016) was utilized. In this method, researchers principally depend on their own judgment to target respondents. A purposive and convenient sampling technique was used in the study (Garg, 2019). In this context, the co-authors based in India attended conferences and seminars on the subject matter of this study. A list of potential respondents was prepared using inputs from the conference participants and their professional networks; in this way, details of 903 prospective respondents were developed. All the participants received response sheets, each containing twenty-seven questions, along with a guideline highlighting how to fill them in. Each respondent was assured that their identity would not be disclosed. Two months (April-May 2021) were given for responding. Within that time, 426 replies were received, a response rate of 47.27%. All activities pertaining to data collection and the required follow-ups with respondents were performed by the co-authors based in India. For the non-response bias test, the recommendations of Armstrong and Overton (1977) were followed: an independent t-test and a chi-square test were conducted with the inputs of the first and last 100 respondents. No mentionable deviation between the two groups was noted, confirming that non-response bias does not pose a major concern in this study. After scrutiny, 14 responses were found to be vague and were not considered. The analysis was done with the inputs of the remaining 412 respondents. The details of the demographic information are given in Table 1.
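A minimal sketch of the early-versus-late respondent comparison described above, in the spirit of Armstrong and Overton (1977), follows. The responses are simulated and the wave split is illustrative only.

```python
# Sketch of a non-response bias check: compare early vs. late respondents
# on a 5-point Likert item with a t-test and on the category distribution
# with a chi-square test. Data are simulated, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
early = rng.integers(1, 6, size=100)   # first 100 respondents
late = rng.integers(1, 6, size=100)    # last 100 respondents

t_stat, p_val = stats.ttest_ind(early, late, equal_var=False)
print(f"t = {t_stat:.3f}, p = {p_val:.3f}")     # p > 0.05 suggests no detectable bias

# Chi-square test on the response-category distributions of the two waves.
categories = np.arange(1, 6)
table = np.array([[np.sum(early == c) for c in categories],
                  [np.sum(late == c) for c in categories]])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)
print(f"chi2 = {chi2:.3f}, p = {p_chi:.3f}")
```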
Data Analysis with Results

Measurement and Test of Discriminant Validity
Convergent validity of the items was estimated by computing the loading factor (LF) for each item. To assess the validity, reliability, and internal consistency of each construct, the AVE (average variance extracted), CR (composite reliability), and α (Cronbach's alpha) were computed. The values are within the allowable ranges; Table 2 presents the results. The square roots of the AVE values were then assessed: all are greater than the respective bifactor correlation coefficients, satisfying the criterion of Fornell and Larcker (Fornell & Larcker, 1981). This confirms discriminant validity. Table 3 reflects the results.
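The quantities named above follow standard formulas; the following is a minimal sketch computing AVE, CR, and the Fornell-Larcker check from hypothetical standardized loadings (not the study's data).

```python
# Sketch of the reliability/validity computations named above, from
# standardized indicator loadings of one construct; loadings are invented.
import numpy as np

loadings = np.array([0.78, 0.82, 0.75])          # hypothetical standardized loadings
errors = 1.0 - loadings**2                       # indicator error variances

ave = np.mean(loadings**2)                       # average variance extracted (> 0.5)
cr = loadings.sum()**2 / (loadings.sum()**2 + errors.sum())   # composite reliability (> 0.7)
print(f"AVE = {ave:.3f}, CR = {cr:.3f}")

# Fornell-Larcker: sqrt(AVE) of each construct must exceed its correlations
# with every other construct. Illustrative two-construct check:
ave_by_construct = np.array([ave, 0.61])
corr = 0.55                                      # hypothetical inter-construct correlation
print(np.sqrt(ave_by_construct) > corr)          # both True -> discriminant validity holds
```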
Moderator Analysis (Multi-Group Analysis)
The present study considered the effects of technology turbulence (TT) as a moderator impacting the linkages H1a to H1e. For this, MGA (multi-group analysis) was conducted under a bootstrapping procedure with 5000 resamples. The p-values of the differences between the effects for Strong TT and Weak TT on these five linkages are all less than 0.05, confirming that TT has significant moderating impacts (Hair et al., 2016) on the five linkages. Table 4 presents the results.
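As a rough illustration of the bootstrap group-difference logic (not the exact PLS-MGA procedure of Hair et al., 2016), the sketch below compares OLS slopes across two simulated groups; all names and effect sizes are hypothetical.

```python
# Sketch of a bootstrap multi-group comparison of a path coefficient
# (strong-TT vs. weak-TT subsamples). The "path coefficient" here is a
# simple OLS slope; data are simulated.
import numpy as np

rng = np.random.default_rng(2)

def slope(x, y):
    return np.polyfit(x, y, 1)[0]

def bootstrap_slopes(x, y, b=5000):
    n = len(x)
    idx = rng.integers(0, n, size=(b, n))
    return np.array([slope(x[i], y[i]) for i in idx])

# Two groups with genuinely different slopes (hypothetical effect sizes).
x1 = rng.normal(size=200); y1 = 0.45 * x1 + rng.normal(scale=0.5, size=200)
x2 = rng.normal(size=200); y2 = 0.25 * x2 + rng.normal(scale=0.5, size=200)

diff = bootstrap_slopes(x1, y1) - bootstrap_slopes(x2, y2)
p = 2 * min((diff <= 0).mean(), (diff >= 0).mean())   # two-sided bootstrap p-value
print(f"slope difference p = {p:.4f}")                # p < 0.05 -> significant moderation
```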
Common Method Variance (CMV)
Since the data have been obtained from a survey, there is a chance of CMV. To mitigate this, some procedural measures were taken. During data collection, all prospective respondents were assured that their anonymity and confidentiality would be preserved, and care was taken at the time of preparing the questionnaire. As a statistical check, Harman's single factor test (SFT) was conducted. The result highlighted that the first factor accounted for 20.62% of the variance, which is less than the recommended threshold of 50% (Podsakoff et al., 2003). Since Harman's SFT is not considered a robust test for CMV, a marker correlation test was also conducted (Ketokivi & Schroeder, 2004; Lindell & Whitney, 2001). The results indicated that the differences between the CMV and marker-adjusted CMV estimates were very small (≤ 0.06) (Mishra et al., 2018a, b). Hence, it is inferred that CMV did not distort the data.
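As an illustration of the single-factor check described above, the following sketch approximates Harman's test with an unrotated principal component, a common proxy for the unrotated factor solution; the item matrix is simulated, not the study's data.

```python
# Sketch of Harman's single-factor test: if one factor explains most of the
# variance across all items, common method bias is a concern. The study
# reported 20.62% for the first factor.
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
items = rng.normal(size=(412, 27))     # 412 respondents x 27 items (hypothetical)

first_factor_share = PCA(n_components=1).fit(items).explained_variance_ratio_[0]
print(f"first factor explains {100 * first_factor_share:.2f}% of variance")
# Values well below 50% suggest CMV is not a dominant concern.
```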
Hypotheses Testing
To test the hypotheses by SEM, a bootstrapping procedure considering 5000 resamples was adopted, and a separation (omission) distance of 7 was considered. Cross-validated redundancy was determined, for which Q² was estimated; its value of 0.032 (positive) confirms the predictive relevance of the model (Mishra et al., 2018a, b). To verify that the model is in order, the standardized root mean square residual (SRMR) was estimated as a standard index. The SRMR values emerged as 0.032 and 0.061 for PLSc and PLS, respectively. Both values are below 0.08 (Hu & Bentler, 1999); as such, it is inferred that the model fits. This approach helped estimate the β-values and other parameters. The details are presented in Table 5. The model, now validated, is provided in Fig. 2.
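For reference, SRMR can be computed directly from observed and model-implied correlation matrices; a minimal sketch with made-up matrices (not the study's) follows.

```python
# Sketch of the SRMR index used above: the root mean square of the
# differences between observed and model-implied correlations
# (off-diagonal, lower triangle). Matrices here are illustrative.
import numpy as np

r_obs = np.array([[1.00, 0.42, 0.37],
                  [0.42, 1.00, 0.46],
                  [0.37, 0.46, 1.00]])
r_model = np.array([[1.00, 0.40, 0.35],
                    [0.40, 1.00, 0.44],
                    [0.35, 0.44, 1.00]])

tril = np.tril_indices_from(r_obs, k=-1)
srmr = np.sqrt(np.mean((r_obs[tril] - r_model[tril])**2))
print(f"SRMR = {srmr:.3f}")   # values <= 0.08 are conventionally acceptable
```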
Mediation Analysis
Mediation effects were analyzed for OPS within the PSS → OPS → FIP link using the approach recommended by Preacher and Hayes (2008). The bootstrapping procedure was applied to the indirect effect with a confidence interval (CI) of 95%. The indirect effect of PSS on FIP mediated through OPS is computed by multiplying the β-values of the PSS → OPS and OPS → FIP relationships: 0.42 × 0.46 = 0.19 (***p < 0.001). Also, the direct effects PSS → OPS and OPS → FIP are significant at ***p < 0.001. Hence, the results show that OPS can be construed as a significant partial mediator between PSS and FIP. Table 6 presents the results.
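The indirect-effect logic above can be reproduced with a small bootstrap in the spirit of Preacher and Hayes (2008); the data below are simulated to roughly echo the reported path values, so the numbers are illustrative only.

```python
# Sketch of the Preacher-Hayes bootstrap for the indirect effect
# PSS -> OPS -> FIP: the indirect effect is the product of the two path
# slopes, and its CI is taken from resampled estimates.
import numpy as np

rng = np.random.default_rng(4)
n = 412
pss = rng.normal(size=n)
ops = 0.42 * pss + rng.normal(scale=0.7, size=n)
fip = 0.37 * pss + 0.46 * ops + rng.normal(scale=0.7, size=n)

def indirect(idx):
    a = np.polyfit(pss[idx], ops[idx], 1)[0]                  # PSS -> OPS
    X = np.column_stack([np.ones(len(idx)), pss[idx], ops[idx]])
    b = np.linalg.lstsq(X, fip[idx], rcond=None)[0][2]        # OPS -> FIP, controlling PSS
    return a * b

boot = np.array([indirect(rng.integers(0, n, n)) for _ in range(5000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"indirect effect 95% CI: [{lo:.3f}, {hi:.3f}]")        # CI excluding 0 -> mediation
```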
Results
In this study, 13 hypotheses were formulated. Of these, 5 hypotheses concern the effects of the moderator TT on 5 linkages. The study highlights that the impacts of AAI, BDA, AQC, AIT, and ABT on PSS (H1a to H1e) are significant and positive, with β-values of 0.29, 0.31, 0.26, 0.28, and 0.22, respectively; the corresponding significance levels are **p < 0.01, **p < 0.01, ***p < 0.001, *p < 0.05, and **p < 0.01. The impacts of PSS on OPS (H2a) and of PSS on FIP (H2b) are both significant and positive, with path coefficients of 0.42 and 0.37, respectively, both at ***p < 0.001. FIP is impacted by OPS (H3) significantly and positively, with a β-value of 0.46 at ***p < 0.001. The moderator TT has significant, positive impacts on the five linkages, with β-values of 0.14 (*p < 0.05), 0.16 (*p < 0.05), 0.19 (**p < 0.01), 0.24 (**p < 0.01), and 0.34 (**p < 0.01). In terms of R² values (the coefficient of determination), the exogenous variables AAI, BDA, AQC, AIT, and ABT explain 46% of PSS (R² = 0.46); PSS explains 52% of OPS (R² = 0.52); and PSS and OPS together explain 79% of FIP (R² = 0.79). This is the predictive power of the proposed theoretical model.
Study Synthesis and Discussion
This study has attempted to demonstrate how the emergence of different cutting-edge technologies could influence the business pattern and, eventually, the business growth of firms. The present study has dealt with how AI, IoT, blockchain, big data analytics, and quantum computing could impact production system sustainability, which could eventually influence the performance of the firm mediated through operational sustainability. The present study has shown that the adoption of these cutting-edge technologies could impact production system sustainability (H1a to H1e), which is supplemented by another study (Ivanov et al., 2020) that investigated how applications of Industry 4.0 could impact the operational management of firms. The present study has investigated how production system sustainability could impact operational sustainability (H2a) as well as firm performance (H2b). This idea is supported by other literature (Wamba et al., 2020), which investigated how BDA and other technologies could impact the performance of firms under the influence of the moderator ED. The present research has highlighted that operational sustainability could impact the performance of the firm (H3); this idea corroborates another study (Aydiner et al., 2019). The moderator TT has significant effects on all the linkages covered by H1a to H1e, as confirmed through multi-group analysis. The impacts of TT on the five relationships (H1a to H1e) are now discussed through the graphical representation presented by the five graphs collectively marked as Fig. 3.
Here, through Fig. 3, the impacts of Strong TT and Weak TT on the relationships H1a to H1e are shown. In all five graphs, the continuous and dotted lines represent the impacts of Strong and Weak TT on these five relations, respectively. It is observed that, with an increase of AAI (for H1a), BDA (for H1b), AQC (for H1c), AIT (for H1d), and ABT (for H1e), the rate of increase of PSS is greater under the impacts of Strong TT than under Weak TT, since the dotted lines have smaller gradients than the continuous lines. It is pertinent to mention that the gradient of a straight line is the trigonometric tangent of the angle that the line makes with the positive direction of the horizontal axis.
Contributions to the Theory
There are multiple theoretical contributions of this study. No prior studies have investigated how the advantages of cutting-edge technologies like AI, BDA, IoT, AQC, and ABT could impact the production system sustainability of a firm, and thereby firm performance, under the moderating influence of technology turbulence (TT). This research has investigated all these issues and has been able to provide a suitable theoretical model possessing respectable predictive power. The research has utilized two theories: absorptive capacity theory and the DCV. The dynamic capability view theory conceptualizes that a firm must possess DCs, namely seizing, sensing, and transforming abilities (Teece, 2014), to trap external and internal opportunities, mobilize them, and eventually use them to address dynamic market environments. The present research has extended the concept of the DCV by arguing that the capabilities of these cutting-edge technologies are dynamic capabilities that could impact firm performance, eventually helping firms address any high-velocity market situation. The present research has also extended the concept of absorptive capacity theory by arguing that, to utilize cutting-edge technologies, firms need to recognize their importance, learn how to use them to harness the best results through the process of assimilation, and eventually apply those technologies in a congenial situation so that such use helps the firms exhibit their best performance. The present study is principally concerned with the adoption of AI and cutting-edge technologies for production system sustainability. Thus, the study could have depended on the theoretical lens of a standard adoption theory for interpretation; instead, better-suited contextual factors were chosen, and as a result, the proposed theoretical model achieved high explanative power. A study by Rodríguez-espíndola et al. (2020) investigated how applications of AI, blockchain, and 3D printing could improve the humanitarian supply chain by ensuring better efficiency. This concept has been expanded in the present research to project how different disruptive technologies could influence the production patterns and operations of firms to help improve their performance, enriching the extant literature. Besides, consideration of the impacts of the moderator technology turbulence (TT) on the relationships between production system sustainability (PSS) and its five predictors has enriched the conceptual model.
Implications for Practice
This research study has several important practical implications. The study demonstrates that big data analytics is helpful for improving the production system. In such a situation, managers need to adopt a data-driven view when approaching production system improvement, and they need to make employees aware of the advantages of adopting a data-driven culture and the utility of big data analytics capability. The present study has also demonstrated that the adoption of AI technology improves the production system sustainability of a firm. In this context, managers first need to motivate employees by making them realize the prolific opportunities that AI technology can provide. Employees should be made to realize that the use of AI in daily routine work will reduce the human load, as work pressure is absorbed by the automated power of AI. This will enhance the motivation of employees toward cutting-edge technologies like blockchain, IoT, quantum computing, and so on. To accomplish this, managers need to arrange periodic workshops to keep employees apprised of the utility and success of these cutting-edge technologies when they are properly adopted and effectively used in the firms. For this, managers are also required to arrange appropriate training for employees; motivated, well-trained employees help a firm extract the best potential from these cutting-edge technologies. It is a fact that in this era of rapid technological advancement, employees may find it inconvenient to use the technologies to their full potential; proper training and awareness for the employees are expected to mitigate the menace of technology turbulence. Before attempting to adopt these cutting-edge technologies, the top management of the firms should improve the dynamic capabilities of the firms. Before investing in such diverse types of disruptive technologies, the top executives of the firms need to carefully evaluate the expertise of their employees in sensing dynamic changes in the market environment, which may help them shape opportunities and mitigate risks. They should assess whether the firms' infrastructure can seize such opportunities and whether the existing staff can reconfigure the trapped opportunities to gain a competitive advantage. Future researchers may consider the effects of applications of these technologies in firms to examine how firm performance could be impacted. Besides, the present study did not analyze firm cultural issues as a moderator; it is suggested that future researchers explore cultural issues to investigate how such consideration could influence the performance of firms.

Fig. 1 The theoretical model (using DCV theory and absorptive capacity theory)

Table 1 Information about the participants

Table 2 Properties for measurement

Table 6 Mediation analysis and results

Table 7 Summary of the questionnaire (partial). Items are rated on a 5-point scale from Strongly Disagree (SD) = [i] to Strongly Agree (SA) = [v]:

AAI2 (Ransbotham et al., 2017): AI technology is helpful to automate the business operation activities
AAI3 (Tredinnick, 2017; Oana et al., 2017): AI-enabled machines possess intellectual capabilities
BDA1 (Wamba et al., 2019a; Aydiner et al., 2019): Adoption of big data analytics helps in developing sustainability in the production system
BDA2 (Wamba et al., 2015; Akter & Wamba, 2016): Big data can help in extracting value-added information from different data sources
BDA3 (Morabito, 2015; Shams & Solima, 2019): Adoption of big data applications has a significant association with firm performance
AQC1 (Al-Rabadi, 2009): Quantum computing can help in sustaining the production system of a firm
AQC2 (Biamonte et al., 2017; Preskill, 2018): Quantum computing can produce outputs which classical computers cannot provide
AQC3 (Oxford Analytica, 2018): Applications of quantum computing can help improve firm performance
AIT1 (Wang et al., 2010; Choi et al., 2012): Adoption of IoT technology is helpful for both indoor and outdoor asset tracking
Lower Bound on the Capacity of the Continuous-Space SSFM Model of Optical Fiber

The capacity of a discrete-time model of optical fiber described by the split-step Fourier method (SSFM) as a function of the signal-to-noise ratio $\text{SNR}$ and the number of segments in distance $K$ is considered. It is shown that if $K\geq \text{SNR}^{2/3}$ and $\text{SNR} \rightarrow \infty$, the capacity of the resulting continuous-space lossless model is lower bounded by $\frac{1}{2}\log_2(1+\text{SNR}) - \frac{1}{2}+ o(1)$, where $o(1)$ tends to zero with $\text{SNR}$. As $K\rightarrow \infty$, the inter-symbol interference (ISI) averages out to zero due to the law of large numbers and the SSFM model tends to a diagonal phase noise model. It follows that, in contrast to the discrete-space model where there is only one signal degree-of-freedom (DoF) at high powers, the number of DoFs in the continuous-space model is at least half of the input dimension $n$. Intensity-modulation and direct detection achieves this rate. The pre-log in the lower bound when $K= \sqrt[\delta]{\text{SNR}}$ is generally characterized in terms of $\delta$. It is shown that if the nonlinearity parameter $\gamma\rightarrow \infty$, the capacity of the continuous-space model is $\frac{1}{2}\log_2(1+\text{SNR})+ o(1)$. The SSFM model when the dispersion matrix does not depend on $K$ is considered. It is shown that the capacity of this model when $K= \sqrt[\delta]{\text{SNR}}$, $\delta>3$, and $\text{SNR} \rightarrow \infty$ is $\frac{1}{2n}\log_2(1+\text{SNR})+ O(1)$. Thus, there is only one DoF in this model. Finally, it is found that the maximum achievable information rates (AIRs) of the SSFM model with back-propagation equalization obtained using numerical simulation follow a double-ascent curve.
The capacity of a discrete-time model of optical fiber described by the split-step Fourier method (SSFM) as a function of the signal-to-noise ratio $\text{SNR}$ and the number of segments in distance $K$ is considered. It is shown that if $K\geq \text{SNR}^{2/3}$ and $\text{SNR} \rightarrow \infty$, the capacity of the resulting continuous-space lossless model is lower bounded by $\frac{1}{2}\log_2(1+\text{SNR}) - \frac{1}{2}+ o(1)$, where $o(1)$ tends to zero with $\text{SNR}$. As $K\rightarrow \infty$, the inter-symbol interference (ISI) averages out to zero due to the law of large numbers and the SSFM model tends to a diagonal phase noise model. It follows that, in contrast to the discrete-space model where there is only one signal degree-of-freedom (DoF) at high powers, the number of DoFs in the continuous-space model is at least half of the input dimension $n$. Intensity-modulation and direct detection achieves this rate. The pre-log in the lower bound when $K= \sqrt[\delta]{\text{SNR}}$ is generally characterized in terms of $\delta$. It is shown that if the nonlinearity parameter $\gamma\rightarrow \infty$, the capacity of the continuous-space model is $\frac{1}{2}\log_2(1+\text{SNR})+ o(1)$. The SSFM model when the dispersion matrix does not depend on $K$ is considered. It is shown that the capacity of this model when $K= \sqrt[\delta]{\text{SNR}}$, $\delta>3$, and $\text{SNR} \rightarrow \infty$ is $\frac{1}{2n}\log_2(1+\text{SNR})+ O(1)$. Thus, there is only one DoF in this model. Finally, it is found that the maximum achievable information rates (AIRs) of the SSFM model with back-propagation equalization obtained using numerical simulation follows a double-ascent curve.
Lower Bound on the Capacity of the Continuous-Space SSFM Model of Optical Fiber Milad Sefidgaran and Mansoor Yousefi
Abstract-The capacity of a discrete-time model of optical fiber described by the split-step Fourier method (SSFM) as a function of the signal-to-noise ratio SNR and the number of segments in distance $K$ is considered. It is shown that if $K \geq \text{SNR}^{2/3}$ and $\text{SNR} \to \infty$, the capacity of the resulting continuous-space lossless model is lower bounded by $\frac{1}{2}\log_2(1+\text{SNR}) - \frac{1}{2} + o(1)$, where $o(1)$ tends to zero with SNR. As $K \to \infty$, the inter-symbol interference (ISI) averages out to zero due to the law of large numbers and the SSFM model tends to a diagonal phase noise model. It follows that, in contrast to the discrete-space model where there is only one signal degree-of-freedom (DoF) at high powers, the number of DoFs in the continuous-space model is at least half of the input dimension $n$. Intensity-modulation and direct detection achieves this rate. The pre-log in the lower bound when $K = \sqrt[\delta]{\text{SNR}}$ is generally characterized in terms of $\delta$. It is shown that if the nonlinearity parameter $\gamma \to \infty$, the capacity of the continuous-space model is $\frac{1}{2}\log_2(1+\text{SNR}) + o(1)$. The SSFM model when the dispersion matrix does not depend on $K$ is considered. It is shown that the capacity of this model when $K = \sqrt[\delta]{\text{SNR}}$, $\delta > 3$, and $\text{SNR} \to \infty$ is $\frac{1}{2n}\log_2(1+\text{SNR}) + O(1)$. Thus, there is only one DoF in this model. Finally, it is found that the maximum achievable information rates (AIRs) of the SSFM model with back-propagation equalization obtained using numerical simulation follow a double-ascent curve. The AIR characteristically increases with SNR, reaching a peak at a certain optimal power, and then decreases as SNR is further increased. The peak is attributed to a balance between noise and stochastic ISI. However, if the power is further increased, the AIR will increase again, approaching the lower bound $\frac{1}{2}\log_2(1+\text{SNR}) - \frac{1}{2} + o(1)$. The second ascent is because the ISI averages out to zero with $K \to \infty$ sufficiently fast.
Index Terms-Optical fiber, channel capacity, split-step Fourier method.
I. INTRODUCTION
Optical fiber is the medium of choice for high-speed data transmission. Although general expressions for the capacity of discrete-time point-to-point channels are derived in [1], [2], evaluating these expressions for models of optical fiber remains difficult.
Optical fiber is modeled by the stochastic nonlinear Schrödinger (NLS) equation. There are two effects in the channel that impact the capacity. First, nonlinearity transforms additive noise to phase noise during the propagation. As the amplitude of the input signal tends to infinity, the phase of the output signal tends to a uniform random variable in the zero-dispersion channel [3, Sec. IV]. Second, dispersion converts phase noise to amplitude noise, introducing a multiplicative noise. The successive application of the phase and multiplicative noise makes signal-noise interaction intractable.
The achievable information rates (AIRs) of wavelength-division multiplexing (WDM) vanish at high powers due to treating interference (arising from the application of linear multiplexing to the nonlinear channel) as noise [4]-[7]. On the other hand, it is shown that the capacity $C(\text{SNR}, K)$ of the discrete-time models of optical fiber as a function of the signal-to-noise ratio (SNR) and the number of segments in distance $K$ satisfies [8], [9]
$$
C(\text{SNR}, K) \le \log_2(1+\text{SNR}). \qquad (1)
$$
The problem of finding the capacity has been investigated for the non-dispersive case in [3], [10]-[12]. It is shown that the asymptotic capacity of this channel is $\frac{1}{2}\log_2(\mathcal{P}) + o(1)$ [3], where $\mathcal{P}$ is the average input signal power.
The stochastic NLS equation can be discretized using the split-step Fourier method (SSFM). The capacity of the discrete-time discrete-space SSFM model of optical fiber with fixed step size in distance as a function of SNR is studied in [13]. It is shown that this model tends to a linear fading channel as $\text{SNR} \to \infty$, described by a random matrix $\mathsf{M}_K$. The asymptotic capacity of this model is [13, Thm. 1]
$$
C(\text{SNR}, K) = \begin{cases} \frac{1}{2n}\log_2(\text{SNR}) + O(1), & \text{constant loss}, \\ \frac{1}{n}\log_2\log_2(\text{SNR}) + O(1), & \text{non-constant loss}, \end{cases} \qquad (2)
$$
where $n$ is the dimension of the input vector, and the loss coefficient is considered as a function of frequency. As a result, there is only one signal degree-of-freedom (DoF) at high powers (signal energy) due to signal-noise interaction. However, the model in [13] may not describe realistic fiber, where the distance is continuous. The capacity of the discrete-time discrete-space SSFM model as a function of SNR and the number of segments in distance $K$ is studied in [14]. The analysis in [14] seems to suggest that if $K$ tends to infinity sufficiently fast, as $K = \sqrt[4]{\text{SNR}}$ with $\text{SNR} \to \infty$, the capacity is lower bounded by $\frac{1}{8}\log_2(1+\text{SNR}) + c$, where $c < \infty$. In this paper, we consider the SSFM model of optical fiber as a function of $K$ and SNR. The contributions of the paper are as follows.
a) First, we show that when $K \ge \text{SNR}^{2/3}$ and $\text{SNR} \to \infty$, the off-diagonal terms in the random matrix $\mathsf{M}_K$ in [13], representing the stochastic inter-symbol interference (ISI), tend to zero due to the law of large numbers, and $\mathsf{M}_K$ tends to a diagonal matrix with phase noise. As a consequence, the capacity of the lossless continuous-space SSFM model is lower bounded as
$$
C(\text{SNR}, K) \ge \frac{1}{2}\log_2(1+\text{SNR}) - \frac{1}{2} + o(1),
$$
where the term $o(1)$ tends to zero as $\text{SNR} \to \infty$. This suggests that, unlike the discrete-space SSFM model, where asymptotically there is only one DoF and the capacity is essentially finite (for large $n$), in the continuous-space model the number of DoFs is at least half of the input dimension.
In particular, the capacity grows with the input power with a pre-log of at least $1/2$. The pre-log in the lower bound when $K = \sqrt[\delta]{\text{SNR}}$ is generally characterized in terms of $\delta$.

b) Second, we consider the SSFM model when the nonlinearity parameter $\gamma \to \infty$. It is shown that this channel is a fading channel for any $K$ and SNR. As a result, when $K \to \infty$, the channel simplifies to $n$ independent phase noise channels in the lossless case, with the capacity $C(\text{SNR}) = \frac{1}{2}\log_2(1+\text{SNR}) - \frac{1}{2} + o(1)$.
c) Third, we consider the lossless SSFM model in which the dispersion matrix does not depend on $K$. It is shown that when $K = \sqrt[\delta]{\text{SNR}}$, $\delta > 3$, and $\text{SNR} \to \infty$, the capacity is $\frac{1}{2n}\log_2(1+\text{SNR}) + O(1)$. In this case, there is one DoF asymptotically, as in (2) [13].
d) Finally, we simulate the AIR of the SSFM model with back-propagation equalization. As previously observed, the AIR characteristically increases with SNR, reaching a peak at a certain optimal power, and then decreases as SNR is further increased (typically to near zero in WDM). The peak is attributed to a balance between noise and ISI. However, if the power is increased further, the AIR will increase again, approaching the $\frac{1}{2}\log(1+\text{SNR}) - \frac{1}{2} + o(1)$ lower bound. The second ascent is because the ISI vanishes as $K \to \infty$ sufficiently fast.
The paper is organized as follows. The notation is introduced in Section II. The discrete- and continuous-space SSFM models are presented in Section III. The main capacity lower bound is presented in Section IV, which is proved and extended in Sections V and VI. The results are verified by numerical simulations in Section VII, and the paper is concluded in Section VIII. Appendix A provides background on a few mathematical concepts.
II. NOTATION
Real and complex numbers are denoted by $\mathbb{R}$ and $\mathbb{C}$, respectively, with the imaginary unit $j = \sqrt{-1}$. The real and imaginary parts of a complex number $x$ are denoted by $\Re(x)$ and $\Im(x)$, respectively. The magnitude and phase of $x \in \mathbb{C}$ are denoted by $|x|$ and $\angle x$. The complex conjugate of $x \in \mathbb{C}$ is $x^*$. Important scalars are shown with the calligraphic font, e.g., $\mathcal{P}$ for power, $\mathcal{C}$ for the capacity, and $\mathcal{L}$ for the length of the optical fiber.
Bold letters are used to denote vectors, e.g., $\mathbf{x}$. The $p$-norm of a vector $\mathbf{x} \in \mathbb{C}^n$ is $\|\mathbf{x}\|_p = \bigl(\sum_{\ell=1}^{n} |x_\ell|^p\bigr)^{1/p}$. The Euclidean norm with $p = 2$ is $\|\mathbf{x}\| \triangleq \|\mathbf{x}\|_2$. The entries of a sequence of vectors $\mathbf{x}_i \in \mathbb{C}^n$, $i = 1, 2, \ldots$, are indexed with the convention
$$
\mathbf{x}_i = (x_{i,1}, x_{i,2}, \ldots, x_{i,n}). \qquad (5)
$$
The $n$-sphere is denoted by $\mathbb{S}^n$. A vector $\mathbf{x}$ in the spherical coordinate system is represented by its norm $\|\mathbf{x}\|$ and its direction $\hat{\mathbf{x}} = \mathbf{x}/\|\mathbf{x}\|$. The spherical coordinate system is introduced in Appendix A.
Random variables and their realizations are represented by upper- and lower-case letters, respectively. The probability density function (PDF) of a random variable $X$ is denoted by $P_X(x)$. The expected value of a random variable $X$ is denoted by $\mathrm{E}[X]$. The uniform distribution on the interval $[a, b)$ is denoted by $\mathcal{U}(a, b)$. The PDF of a zero-mean circularly-symmetric complex Gaussian random vector with covariance matrix $\mathrm{K}$ is denoted by $\mathcal{N}_{\mathbb{C}}(0, \mathrm{K})$. Equality of random variables $X$ and $Y$ in distribution is written as $X \stackrel{d}{=} Y$. A random variable in $\mathbb{C}^n$ is said to be absolutely continuous if its PDF is bounded and has at least one finite moment [15, Def. 3]. Such a random variable has an absolutely continuous density with respect to the Lebesgue measure, and its PDF does not include a Dirac delta function.
Let $\mathcal{X} \subseteq \mathbb{R}^n$ and $g: \mathcal{X} \mapsto \mathbb{R}$. A sequence of probability distributions $(\mu_{\mathcal{P}})_{\mathcal{P} \in \mathbb{R}^+}$ on $\mathcal{X}$ with the average cost constraint $\mathrm{E}\, g(X) \le \mathcal{P}$ is said to escape to infinity with $\mathcal{P}$ if [16, Def. 2.4]
$$
\lim_{\mathcal{P}\to\infty} \mu_{\mathcal{P}}\bigl(x \in \mathcal{X} : g(x) \le \mathcal{P}_0\bigr) = 0,
$$
for any $\mathcal{P}_0 > 0$.
We say a sequence of channels with conditional distributions $(P_a(y|x))_{a \in \mathbb{R}}$, $x, y \in \mathcal{X}$, tends to a channel $Q(y|x)$ as $a \to \infty$ if $\lim_{a\to\infty} P_a(y|x) = Q(y|x)$ point-wise for all $x$ and $y$. We say a channel $P(y|x)$ tends to a channel $Q(y|x)$ as the distribution of $X$ escapes to infinity with an average cost $\mathcal{P}$, if the output of $P$ tends to the output of $Q$ in probability as $\mathcal{P} \to \infty$ for any sequence of input distributions that escapes to infinity. When the cost function is $g(x) = \|x\|_2^2$, roughly speaking this implies that $P(y|x) \to Q(y|x)$ point-wise, for all $y$ and all $x$ with $\|x\|_2 > c$ for all $c > 0$ (except possibly on an input set with zero measure).
A sequence of $n$ numbers $x_1, \ldots, x_n$ is shown as $(x_\ell)_{\ell=1}^n$ or $x^n$. The set of integers $\{1, 2, \ldots, n\}$ is denoted by $[n]$. A sequence of independent and identically distributed (i.i.d.) random variables drawn from the PDF $P_X(x)$ is presented as $X_\ell \overset{\text{i.i.d.}}{\sim} P_X(x)$, $\ell = 1, 2, 3, \ldots$. Deterministic matrices are denoted by upper-case letters in mathrm font, e.g., $\mathrm{D}$, and random matrices are shown by upper-case letters in mathsf font, e.g., $\mathsf{M}$. The identity matrix of size $n$ is $\mathrm{I}_n$.
For a sequence of matrices $(\mathrm{A}_i)_{i=1}^m$, the product is defined with the convention $\prod_{i=1}^{m} \mathrm{A}_i \triangleq \mathrm{A}_m \mathrm{A}_{m-1} \cdots \mathrm{A}_1$. A diagonal matrix $\mathrm{R}$ with diagonal entries $R_i$ is denoted by $\mathrm{R} = \mathrm{diag}(R_1, \ldots, R_n)$. The following diagonal matrix is used throughout the paper:
$$
\mathrm{R}(\boldsymbol{\theta}) = \mathrm{diag}\bigl(e^{j\theta_1}, \ldots, e^{j\theta_n}\bigr),
$$
where $\boldsymbol{\theta} = (\theta_1, \ldots, \theta_n)$. The group of complex-valued $n \times n$ unitary matrices is denoted by $\mathbb{U}_n$. Some properties of $\mathbb{U}_n$ are reviewed in Appendix A.
Suppose that $\nu$ is an equivalence relation on the set $I_n = \{1, 2, \ldots, n\}$, partitioning it into non-empty equivalence classes $I_1, I_2, \ldots, I_{m(\nu)}$. The notation $\mathbb{U}_n(\nu)$ is used to denote the group of block diagonal unitary matrices $\mathrm{A}$ in which, if the integers $r$ and $s$ do not belong to one class, then $\mathrm{A}_{r,s} = 0$.
Given two functions $f(x): \mathbb{R} \to \mathbb{C}$ and $g(x): \mathbb{R} \to \mathbb{C}$, we say $f(x) = O(g(x))$ if there exist a finite $c > 0$ and $x_0 > 0$ such that $|f(x)| \le c\,|g(x)|$ for all $x \ge x_0$. In addition, $f(x) = o(g(x))$ if for any $c > 0$ there exists a finite $x_0 > 0$ such that $|f(x)| \le c\,|g(x)|$ for all $x \ge x_0$.
For a sequence of scalar random variables $(X_k)_{k=1}^{\infty}$ and constants $(a_k)_{k=1}^{\infty}$, where $X_k, a_k \in \mathbb{C}$, we say $X_k = O_p(a_k)$ if for any $\epsilon > 0$ there exist a finite $c_\epsilon$ and finite $k_0$ such that for any $k \ge k_0$, $P\big(|X_k|/|a_k| > c_\epsilon\big) \le \epsilon$. The index $p$ in $O_p(\cdot)$ indicates "in probability". Similarly, for a sequence of random matrices $(\mathsf{X}_k)_{k=1}^{\infty}$ and constants $(a_k)_{k=1}^{\infty}$, where $\mathsf{X}_k \in \mathbb{C}^{n,m}$ and $a_k \in \mathbb{C}$, we say $\mathsf{X}_k = O_p(a_k)$ if every entry of $\mathsf{X}_k$ is $O_p(a_k)$. Finally, for deterministic matrices, $O(\cdot)$ and $o(\cdot)$ are defined similarly.
For a sequence of random matrices $(\mathsf{A}_K)_{K=1}^{\infty}$, we say $\mathsf{A}_K \to B$ in probability with convergence rate $\upsilon > 0$ if $\mathsf{A}_K - B = O_p(K^{-\upsilon})$. We say $f(x): \mathbb{R} \to \mathbb{R}$ is asymptotically lower bounded by $g(x): \mathbb{R} \to \mathbb{R}$, and write $f(x) \gtrsim g(x)$, if $\lim_{x \to \infty} \big(f(x) - g(x)\big) \ge 0$.
Finally, for a discrete-time channel $X \mapsto Y$, where $X, Y \in \mathbb{C}^m$, with the average power constraint $\mathrm{E}\|X\|^2 \le m\mathcal{P}$ and capacity $C(\mathcal{P})$, we say there are $r > 0$ complex signal DoFs in the channel if the capacity pre-log is $r/m$, i.e., $\lim_{\mathcal{P} \to \infty} C(\mathcal{P})/\log(\mathcal{P}) = r/m$.
III. SPLIT-STEP FOURIER MODEL
In this section, we consider a modified version of SSFM introduced in [13]. Here, the nonlinearity and noise steps are combined into one step, so that the influence of the additive amplified spontaneous emission (ASE) noise can be seen as phase noise. In what follows, SSFM refers to the modified SSFM.
A. Continuous-time model
Denote the complex envelope of the optical signal at distance $z$ and time $t$ by $Q(t, z)$. The propagation of the signal in single-mode optical fiber with distributed amplification is governed by the stochastic nonlinear Schrödinger (NLS) equation [13, Eq. 2]
$$\frac{\partial Q}{\partial z} = \mathcal{L}_L(Q) + \mathcal{L}_N(Q) + N(t, z). \tag{9}$$
Here, $\mathcal{L}_L$ is the linear operator
$$\mathcal{L}_L(Q) = -\frac{j\beta_2}{2}\,\frac{\partial^2 Q}{\partial t^2} - \frac{1}{2}\,\alpha_r(t) * Q, \tag{10}$$
where $\beta_2$ is the second-order chromatic dispersion coefficient, $\alpha_r(t)$ is the residual attenuation coefficient (remaining after imperfect amplification), $*$ is the convolution resulting from the dependence of the attenuation coefficient on frequency, and $j = \sqrt{-1}$. The operator
$$\mathcal{L}_N(Q) = j\gamma |Q|^2 Q \tag{11}$$
represents the Kerr nonlinearity, where $\gamma$ is the nonlinearity parameter. Finally, $N(t, z)$ is a zero-mean circularly-symmetric complex Gaussian noise process with covariance
$$\mathrm{E}\big[N(t, z)\bar{N}(t', z')\big] = \tilde{\sigma}^2\, \delta_B(t - t')\,\delta(z - z'),$$
where $\delta_B(x) = B\,\mathrm{sinc}(Bx)$, $\mathrm{sinc}(x) = \sin(\pi x)/(\pi x)$, $B$ is the noise bandwidth, $\delta(\cdot)$ is the Dirac delta function, and $\tilde{\sigma}$ is the power spectral density of the ASE noise. Denote $\sigma \triangleq \tilde{\sigma}\sqrt{B}$. The capacity results obtained in this paper are expected to hold for more general linear and nonlinear operators $\mathcal{L}_L$ and $\mathcal{L}_N$ that take into account other forms of dispersion and nonlinearity. However, we restrict the analysis to (10) and (11).
B. Discrete-time SSFM model
Discretize a fiber of length $L$ into $K$ segments of length $\varepsilon = L/K$ in distance, and $Q(t, \cdot)$ to a vector of length $n$ with step size $\Delta_t$ in time. Let $V_i \in \mathbb{C}^n$ be the input of spatial segment $i$, where $V_1 = X$ is the channel input and $V_{K+1} = Y_K$ is the channel output.
In segment $i$ of the modified SSFM, the following steps are performed [13].
a) Modified nonlinear step: In this step (9) is solved analytically with $\mathcal{L}_L = 0$. Let $V_i$ and $U_i$ be the input and output in this step, and $M \to \infty$ a large integer. The channel $V_i \mapsto U_i$ is memoryless with the input-output relation [13, Eq. 5]
$$U_{i,\ell} = \big(V_{i,\ell} + W_{i,\ell}(M)\big)\, e^{j\Phi_{i,\ell}},$$
where $U_{i,\ell}$ and $V_{i,\ell}$ are entries of $U_i$ and $V_i$ defined based on (5), and $(W_{i,\ell}(m))_{i,\ell}$ is a sequence of discrete-time Wiener processes in $m$ with auto-correlation function
$$\mathrm{E}\big[W_{i,\ell}(m)\,\bar{W}_{i',\ell'}(m')\big] = \sigma^2 \mu\, \min(m, m')\, \delta_{ii'}\,\delta_{\ell\ell'},$$
for all $i, i' \in [K]$, $\ell, \ell' \in [n]$ and $m, m' \in [M]$, where $\mu = \varepsilon/M$ and $\delta_{ij}$ is the Kronecker delta function. The nonlinear phase is
$$\Phi_{i,\ell} = \gamma \mu \sum_{m=1}^{M} \big|V_{i,\ell} + W_{i,\ell}(m)\big|^2. \tag{15}$$
Denote $\tilde{Z}_i \triangleq W_i(M)$.
b) Linear step: In this step (9) is solved analytically with $\mathcal{L}_N = 0$ and $N(t, z) = 0$. If $U_i$ is the input and $V_{i+1}$ is the output of the linear step, the map $U_i \mapsto V_{i+1}$ is
$$V_{i+1} = F^{-1} D_K F\, U_i, \qquad D_K = \mathrm{diag}\big(e^{a_\ell + j b_\ell}\big)_{\ell=1}^{n}, \tag{16}$$
where $F$ is the discrete Fourier transform (DFT) matrix. Further, $a_\ell = -L\alpha_\ell/(2K)$, where $(\alpha_\ell)$ is a discretization of the loss coefficient $\alpha_r(f)$ in frequency $f$, and $b_\ell$ is the corresponding discretization of the dispersion phase, given in (17).
Remark 1. The input dimension $n$ is fixed, and should not be confused with the block or codeword length that tends to $\infty$.
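For concreteness, the Python sketch below simulates one realization of the modified SSFM recursion just described. It is a minimal illustration, not the authors' simulation code: the per-segment input-output relation and the form of the nonlinear phase follow the reconstruction given above, the lossless case $a_\ell = 0$ is assumed, and all parameter values are placeholders.

```python
import numpy as np

def ssfm_modified(x, K, L, gamma, d_total, sigma2, M=64, rng=None):
    """One realization of the modified SSFM channel (lossless sketch).

    x        : complex input vector of length n (the channel input V_1)
    K        : number of spatial segments; eps = L/K is the segment length
    d_total  : length-n array of total dispersion phases d_l (so b_l = d_l/K)
    sigma2   : ASE noise power spectral density sigma^2
    M        : sub-steps of the discrete Wiener process per segment
    """
    rng = np.random.default_rng() if rng is None else rng
    n = x.size
    eps = L / K
    mu = eps / M
    D = np.exp(1j * d_total / K)        # diagonal of D_K with a_l = 0 (lossless)
    v = x.astype(complex)
    for _ in range(K):
        # modified nonlinear step: the noise enters inside the Kerr phase
        w = np.zeros(n, dtype=complex)
        phi = np.zeros(n)
        for _ in range(M):
            w += np.sqrt(sigma2 * mu / 2) * (
                rng.standard_normal(n) + 1j * rng.standard_normal(n))
            phi += gamma * mu * np.abs(v + w) ** 2
        u = (v + w) * np.exp(1j * phi)
        # linear step: all-pass dispersion filter applied in the DFT domain
        v = np.fft.ifft(D * np.fft.fft(u))
    return v
```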
C. Transition from the continuous- to discrete-time model
The NLS equation (9) defines a continuous-time channel from $Q(t, 0)$ at the input of the fiber to $Q(t, L)$ at the output of the fiber. Let $B(z)$ denote the signal bandwidth at distance $z$.
To discretize the channel, we need to sample the input signal at $B(0)$ and the output signal at $B(L)$. Due to the Kerr nonlinearity, $B(L)$ is generally signal-dependent and may not equal $B(0)$. The relation between $B(L)$ and $B(0)$ is an important open question.
As a consequence, the continuous-time model (9) cannot be discretized in a one-to-one manner as in linear channels by sampling the input and output signals at the input bandwidth. In this paper, we do not include channel filters or a bandwidth constraint in the model, and let $B \triangleq B(0) \to \infty$. The derivative operator in time can then be approximated using the discrete Fourier transform with an error that tends to zero as the step size in time $\Delta_t = 1/B \to 0$. If $\mathrm{SNR} = K^\delta$ and $\delta < 1$, as $K \to \infty$ operator splitting in distance yields a discretization of (9) with a vanishing error as $B \to \infty$ and $K \to \infty$. Finally, we consider a potentially sub-optimal discretization where the output signal is sampled at $B$. This corresponds to a receiver that ignores some of the samples that potentially carry information, and results in a discrete-time model in which the input and output vectors have the same dimension. Lower bounds obtained for this potentially sub-optimal receiver hold for better receivers as well.
D. Limitations of the discrete-time model
The discrete-time model considered in this paper has a number of limitations. First, it considers signals with infinite bandwidth and does not account for a bandwidth constraint introduced by inline filters or the receiver. Second, a receiver that samples the output signal at the input bandwidth ignores potentially useful samples. Third, the spectral efficiency (in bits/s/Hz) of the continuous-time model may not equal the capacity (in bits/s) of the discrete-time model. This is because the spectral broadening factor may increase with launch power due to nonlinearity [3, Sec. VIII], [17].
In this paper, we do not derive a rigorous one-to-one discretization of the continuous-time model (9). We consider the discrete-time SSFM model, whose capacity may be different from that of (9).
IV. A CAPACITY LOWER BOUND
The capacity of the SSFM model, as a function of the signal-to-noise ratio $\mathrm{SNR} = \mathcal{P}/(\sigma^2 L)$, where $\mathcal{P}$ is the average signal power and $\sigma^2 L$ is the total noise power, and the number of spatial segments $K$, is
$$C(\mathrm{SNR}, K) = \sup_{P_X}\, \frac{1}{n}\, I(X; Y_K),$$
where $X \in \mathbb{C}^n$ and $Y_K \in \mathbb{C}^n$ are the channel input and output, and $I(X; Y_K)$ is the mutual information measured in bits/2D. The capacity of the SSFM model when $K \to \infty$ independently of SNR is
$$C(\mathrm{SNR}) = \lim_{K \to \infty} C(\mathrm{SNR}, K).$$
In this case, the asymptotic capacity corresponds to the limits $\lim_{\mathrm{SNR} \to \infty} \lim_{K \to \infty}$, in that order. We also study the capacity when $K$ and SNR go to infinity as $K = \delta\sqrt{\mathrm{SNR}}$, $\mathrm{SNR} \to \infty$, in which case the capacity is $C(\mathrm{SNR}, \delta\sqrt{\mathrm{SNR}})$.
Remark 2. Since the noise power is fixed by the channel, we express the capacity as a function of SNR instead of launch power. However, this should not imply that the capacity of the nonlinear channel, which is a two-dimensional function of the signal and noise powers, is a one-dimensional function of SNR.
The main result of this paper is Theorem 1, stating that for a sufficiently large number of segments $K$, the rate $\frac{1}{2}\log_2(\mathrm{SNR}) - \frac{1}{2}$ is achievable at high SNRs in the continuous-space lossless model.
Theorem 1. The capacity of the SSFM channel when $K \to \infty$ independently of SNR is lower bounded as
$$C(\mathrm{SNR}) \ge \frac{1}{2}\log_2\big(1 + a\,\mathrm{SNR}\big) + o(1), \tag{20}$$
where the term $o(1)$ tends to zero as $\mathrm{SNR} \to \infty$. For the lossless fiber, $a = 1/2$.
The capacity of the SSFM channel when K " δ ?
In practice the launch power is finite and $K$ can be chosen arbitrarily large. Theorem 1 indicates that the rates given by (20) are achievable. In fiber-optic simulations based on SSFM with a sufficiently large number of segments, the channel capacity (bits/s) lies between the lower bound (20) and the upper bound (1).
Theorem 1 indicates that the number of signal DoFs is at least half of the input dimension in the continuous-space model. In contrast, there is only one signal DoF in the discrete-space model: the lower bound (3) can be compared with the asymptotic capacity of the discrete-space SSFM model (2), where there is only one DoF and the pre-log is $1/(2n)$ in the lossless model.
V. PROOF OF THEOREM 1 AND RELATED CAPACITY THEOREMS
In this section, we outline the proof of Theorem 1. In addition, we present a number of capacity theorems for models related to the SSFM. Most of the proofs appear in Section VI.
It is shown in [13, Sec. V] that the SSFM channel, when $K$ is fixed and the input distribution escapes to infinity with the average input power, tends to a linear fading channel. To study the capacity of the SSFM channel when $K \to \infty$, we first study this fading model in Section V-A, obtained by replacing the nonlinearity with uniform i.i.d. phase noise in the segments of SSFM. We show that if $K \to \infty$, this fading channel tends to a diagonal phase noise channel in probability with capacity pre-log $1/2$.
Later, in Section V-B, we show that the limit of the SSFM channel, when the input distribution escapes to infinity with SNR and there is a sufficiently large number of segments, is a continuous-space fading channel. Combining these two results, we obtain the pre-log $1/2$ in Theorem 1. The cases where the pre-log is less than $1/2$ are obtained similarly, with further analysis.
Note that, since we consider a continuous-space SSFM model, the proof in [13] showing that the SSFM channel tends to a fading channel as $\mathrm{SNR} \to \infty$ cannot be used here, because $D_K$ does not depend on $K$ in the discrete-space model in [13].
A. Capacity of the continuous-space fading model
We begin by considering the following fading channel, introduced in [13]:
$$Y_K = \mathsf{M}_K X + Z_K, \tag{23}$$
where the random matrix $\mathsf{M}_K$, representing a multiplicative noise, is independent of $X$ and given by [13, Eq. 11]
$$\mathsf{M}_K = \prod_{i=1}^{K} \big(F^{-1} D_K F\big)\, R(\theta_i), \tag{24}$$
where $R(\cdot)$ is defined in (6) and $\theta_i \overset{\text{i.i.d.}}{\sim} \mathcal{U}^n(0, 2\pi)$. The additive noise $Z_K$ in (25) is obtained by passing the per-segment noises $\tilde{Z}_i \sim \mathcal{N}_{\mathbb{C}}(0, \sigma^2 \varepsilon I_n)$ through the remaining segments. In the lossless case, (25) simplifies and $Z_K \sim \mathcal{N}_{\mathbb{C}}(0, \sigma^2 L I_n)$. In this paper, a fading channel is any model of the form (23), with multiplicative and additive noise, where $(\mathsf{M}_K, Z_K)$ has an arbitrary distribution and is independent of $X$.
Definition 1. For a dispersion matrix of the form (16), we define the "total dispersion values" $d_\ell = K b_\ell$, $\ell \in [n]$. We say dispersion is finite if $b_\ell$ is given by (17), for which $d_\ell < \infty$ independently of $K$, and infinite if $b_\ell$ does not depend on $K$, for which $d_\ell \to \infty$ as $K \to \infty$, $b_\ell \ne 0$. Similarly, we define the "total loss values" $\zeta_\ell = K a_\ell$, $\ell \in [n]$.
Denote the average values of the total loss and dispersion in frequency by $\zeta \triangleq \frac{1}{n}\sum_{\ell=1}^{n} \zeta_\ell$ and $d \triangleq \frac{1}{n}\sum_{\ell=1}^{n} d_\ell$. Alternatively, $\zeta = -\bar{\alpha}L/2$, where $\bar{\alpha} \triangleq (1/n)\sum_{\ell=1}^{n} \alpha_\ell$ is the average fiber loss. If $n \to \infty$, $\bar{\alpha}$ is the average of $\alpha_r(f)$ over frequency. For realistic fiber, $b_\ell$ is given by (17), and dispersion is finite due to the factor $1/K$ in (17). In this case, the effect of dispersion locally in a small segment is infinitesimal, and $\lim_{K \to \infty} D_K = I_n$. However, we also consider models where $b_\ell$ is independent of $K$; here the dispersion matrix in each small segment is fixed, $D_K \triangleq D$, and the total dispersion value is infinite for $K \to \infty$. The models with infinite dispersion or nonlinearity are not realistic, but help to understand the capacity as $\beta_2$ or $\gamma$ tend to infinity. They distinguish models with fixed and variable $D_K$.
Below, we lower bound the capacity of the fading channel (23) with finite and infinite dispersion.
1) Finite Dispersion:
A key result of this paper is Lemma 1, stating that as $K \to \infty$, the random matrix $\mathsf{M}_K$ in (23) tends to a diagonal matrix with independent phase noise components.
For a random vector $\theta$, define $L(\theta) \triangleq R(-\theta)\, C_1\, R(\theta)$, where $C_1 \triangleq F^{-1}\,\mathrm{diag}\big((\zeta_\ell + j d_\ell)_{\ell=1}^{n}\big)\,F$ and $\bar{L} \triangleq \mathrm{E}_\theta[L(\theta)]$.
Lemma 1. The random matrix $\mathsf{M}_K$ has the expansion
$$\mathsf{M}_K = e^{\zeta + jd}\, S_K + O_p\big(K^{-1/2}\big),$$
where $S_K \triangleq \prod_{i=1}^{K} R(\theta_i) \overset{d}{=} R(\theta)$, and $\theta$ and $\theta_i$ are drawn i.i.d. from $\mathcal{U}^n(0, 2\pi)$. In particular, $\mathsf{M}_K \to e^{\zeta + jd} R(\theta)$ in probability as $K \to \infty$.
Proof. The proof is given in Section VI-A. The first equality is shown by algebra. The second equation is obtained by applying a concentration inequality for the weak law of large numbers, or the central limit theorem.
Substituting the limit of $\mathsf{M}_K$ in (24) into (25), we also obtain $Z_K \to Z \sim \mathcal{N}_{\mathbb{C}}(0, \eta\sigma^2 L I_n)$, where $\eta = (e^{2\zeta} - 1)/(2\zeta)$. The following lemma follows.
Lemma 2. As $K \to \infty$, the fading channel (23) with finite dispersion tends to a sequence of independent phase noise channels
$$Y_\ell = e^{\zeta + jd}\, e^{j\theta_\ell} X_\ell + Z_\ell, \qquad \ell \in [n],$$
where $\theta_\ell \overset{\text{i.i.d.}}{\sim} \mathcal{U}(0, 2\pi)$ and $Z_\ell \overset{\text{i.i.d.}}{\sim} \mathcal{N}_{\mathbb{C}}(0, \eta\sigma^2 L)$, with $\eta$ as above.
The first capacity result in this paper is the following theorem, showing that the pre-log of the capacity of the continuous-space fading channel with finite dispersion when $K \to \infty$ independently of SNR is $1/2$, i.e., there are $n$ real signal DoFs when the input dimension is $2n$. Let $\bar{C}(\mathrm{SNR}, K)$ be the capacity of (23) and $\bar{C}(\mathrm{SNR}) = \lim_{K \to \infty} \bar{C}(\mathrm{SNR}, K)$.
Theorem 2. The capacity of the fading channel (23) with finite dispersion satisfies
$$\bar{C}(\mathrm{SNR}) \ge \frac{1}{2}\log_2\big(1 + a\,\mathrm{SNR}\big) + o(1), \tag{34}$$
where $\mathrm{SNR} = \mathcal{P}/(\sigma^2 L)$ and $a$ is given in Theorem 1.
Proof. The result is obtained by combining Lemmas 1 and 2 and the capacity result (27).
Note that (34) holds for any SNR, if $K \to \infty$ independently of SNR. Next, we consider the capacity of the finite-dispersion fading channel when $K = \delta\sqrt{\mathrm{SNR}}$, $\delta > 0$, $\mathrm{SNR} \to \infty$. We require this model in Section V-A2 when we study the SSFM channel. Substituting (30) into (23) and (26) gives
$$Y_K = e^{\zeta} R(\theta) X + \Delta X + Z.$$
Here, $\Delta$ is a dense matrix, generally not diagonal, capturing ISI or intra-channel interactions. If $\delta < 1$, then $\Delta X \overset{p}{\to} (0, \dots, 0)$ and $Y_K \overset{p}{\to} e^{\zeta} R(\theta) X + Z$. In this case, the capacity is provided by Theorem 2.
If $\delta > 1$, $\Delta X$ grows with $K$ and constitutes the dominant stochastic impairment. The following theorem establishes a lower bound on the capacity in this case.
Theorem 3. The capacity of the fading channel (23) when $K = \delta\sqrt{\mathrm{SNR}}$, $\delta > 1$, admits a lower bound whose $O(1)$ term is bounded as $\mathrm{SNR} \to \infty$.
Proof. See Section VI-B.
2) Infinite Dispersion: We consider the SSFM model when $D_K \triangleq D$ is independent of $K$. Lemma 3 below shows that, in the lossless infinite-dispersion case, as $K \to \infty$ the random matrix $\mathsf{M}_K$ in (23) tends to a random unitary matrix.
Lemma 3. Let $\nu$ be an equivalence relation on the set $\{1, 2, \dots, n\}$, and $\mathcal{U}_n(\nu)$ the smallest subgroup of block-diagonal matrices in $\mathcal{U}_n$ that contains $D$. Then, the distribution of $\mathsf{M}_K$ tends to the Haar measure on $\mathcal{U}_n(\nu)$ as $K \to \infty$.
Proof. See Section VI-C.
In the following, assume that $D$ is not a block-diagonal matrix (of more than one block), i.e., $\mathcal{U}_n(\nu) = \mathcal{U}_n$.
Decompose the mutual information for the fading channel (23) as
$$I(X; Y) = I\big(X; \|Y\|\big) + I\big(X; \hat{Y} \mid \|Y\|\big). \tag{37}$$
Lemma 3 and Theorem 10 in Appendix A imply that the second term approaches zero as $K \to \infty$. The conditional PDF of the signal norm $P\big(\|y\| \mid \|x\|\big)$ is the non-central chi distribution given in (38), where $I_m(\cdot)$ is the modified Bessel function of the first kind of order $m$. Equations (37) and (38) yield the following theorem.
Theorem 4. Suppose that $D \triangleq D_K$ is independent of $K$ and is not a block-diagonal matrix. The continuous-space fading channel (23) has one signal DoF, with capacity
$$\bar{C}(\mathrm{SNR}) = \frac{1}{2n}\log_2\big(1 + \mathrm{SNR}\big) + O(1),$$
where $P\big(\|y\| \mid \|x\|\big)$ is given by (38), $\mathrm{SNR} = \mathcal{P}/(\sigma^2 L)$, and the $O(1)$ term is bounded as $\mathrm{SNR} \to \infty$.
Proof. The first line follows from Lemma 3, stating that if $K \to \infty$, then $I\big(X; \hat{Y} \mid \|Y\|\big) \to 0$. The second line follows from the lower bound on the capacity of the non-central chi-square channel derived in [19, Theorem 1], and an upper bound obtained by noting that $h(\|Y\|) \le \frac{1}{2}\log_2(1 + \mathrm{SNR}) + O(1)$ due to the principle of maximum entropy, and $h\big(\|Y\| \mid \|X\|\big) = O(1)$.
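The norm statistic in the proof admits a direct numerical form. Assuming, as is standard for $Y = \mathsf{M}X + Z$ with unitary $\mathsf{M}$ and $Z \sim \mathcal{N}_{\mathbb{C}}(0, \sigma^2 L\, I_n)$, that $2\|Y\|^2/(\sigma^2 L)$ is non-central chi-square with $2n$ degrees of freedom, the conditional PDF of $\|Y\|$ given $\|X\|$ (our reading of (38)) can be evaluated as in the Python sketch below.

```python
import numpy as np
from scipy import stats

def norm_channel_pdf(y_norm, x_norm, n, sigma2L):
    """PDF of ||Y|| given ||X|| for Y = M x + Z, M unitary, Z ~ CN(0, sigma2L*I_n).

    Since M is unitary, ||Y|| depends on x only through ||x||, and
    2*||Y||^2 / sigma2L is noncentral chi-square with 2n degrees of freedom
    and noncentrality 2*||x||^2 / sigma2L; the last factor below is the
    change of variables from ||Y||^2 to ||Y||.
    """
    q = 2.0 * y_norm ** 2 / sigma2L
    lam = 2.0 * x_norm ** 2 / sigma2L
    return stats.ncx2.pdf(q, df=2 * n, nc=lam) * 4.0 * y_norm / sigma2L
```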
Remark 3. In general, if $\mathcal{U}_n(\nu)$ is the smallest subgroup of block-diagonal matrices containing $D$, then the capacity of the continuous-space fading channel with infinite dispersion has $m(\nu)$ real signal DoFs, where $m(\nu)$ is the number of blocks of $D$.
B. Capacity of the continuous-space SSFM model
In this section, the capacity of the continuous-space SSFM model is investigated in the high-power regime, as well as with infinite nonlinearity and infinite dispersion.
1) High-power regime: Let $K = \delta\sqrt{\mathrm{SNR}}$, $\delta > 0$, and assume that the input $X$ escapes to infinity with SNR. We show that the limit of the discrete-space SSFM channel as $\mathrm{SNR} \to \infty$ is a diagonal model with phase noise. The convergence rate to this diagonal model depends on $\delta$. We derive a lower bound on the convergence rate, from which a lower bound on the capacity is established. Denote by $\upsilon(\delta)$ the convergence-rate exponent defined in (41).
Lemma 4. If the input distribution is absolutely continuous and escapes to infinity with SNR, the SSFM channel with $K = \delta\sqrt{\mathrm{SNR}}$, $\delta > 0$, $\gamma \ne 0$, tends to the following channel in distribution as $\mathrm{SNR} \to \infty$:
$$Y_K = e^{\zeta + jd} R(\theta) X + O_p\big(K^{-\upsilon(\delta) + \epsilon_1}\big) X + Z,$$
where $\theta \sim \mathcal{U}^n(0, 2\pi)$ and $Z \sim \mathcal{N}_{\mathbb{C}}(0, \eta\sigma^2 L I_n)$, where $\eta$ is defined in Lemma 2.
Further, $\epsilon_1 > 0$ can be made arbitrarily small with SNR.
Proof. See Section VI-D.
Theorem 1 will be proved using Lemma 4 in Section VI-E. The term $O(1)$ in Theorem 1 for $\delta \ge 2$ is given in the proof of Theorem 3.
Remark 4. For $0 \le \delta \le 1.5$, the dominant term in $O_p(K^{-\upsilon(\delta)})$ is the signal-noise mixing, and for $1.5 \le \delta \le 3$, it is the intra-channel interactions (between different indices). For $3 < \delta$, both effects are significant.
2) Infinite nonlinearity: It is commonly believed that nonlinearity is a distortion that reduces the capacity. However, in this section we show that when $\gamma \to \infty$, the continuous-space channel has at least $n$ real signal DoFs.
Theorem 5. The capacity of the continuous-space SSFM model as $\gamma \to \infty$ satisfies
$$C(\mathrm{SNR}) = \frac{1}{2}\log_2\big(1 + a\,\mathrm{SNR}\big) + o(1),$$
where $a$ is given in Theorem 1 and the $o(1)$ term tends to zero as $\mathrm{SNR} \to \infty$.
Proof. It is easy to verify that as $\gamma/K \to \infty$, for any $i \in [K]$ and $\ell \in [n]$, $\Phi_{i,\ell} \to \infty$, and consequently $\mathrm{mod}(\Phi_{i,\ell}, 2\pi) \to \mathcal{U}(0, 2\pi)$, independent of $\Phi_{i',\ell'}$ in any other segment $i'$ or coordinate $\ell'$. Hence, the SSFM channel tends to a finite-dispersion fading channel, and Theorem 2 yields the result.
3) Infinite dispersion: The asymptotic capacity of the discrete-space SSFM channel where $D_K \triangleq D$ is independent of $K < \infty$ is given in (2). In this section, we show that this result holds under the same assumption $D_K \triangleq D$ for the continuous-space lossless SSFM channel as well. Note that in this case, as $K \to \infty$ the dispersion is infinite.
Theorem 6. Consider the SSFM model when $D_K \triangleq D$ is independent of $K$ and is not a block-diagonal matrix of more than one block. If $K = \delta\sqrt{\mathrm{SNR}}$, $\delta > 3$, $\gamma \ne 0$, then
$$C\big(\mathrm{SNR}, \delta\sqrt{\mathrm{SNR}}\big) = \frac{1}{2n}\log_2\big(1 + \mathrm{SNR}\big) + O(1),$$
where the $O(1)$ term is bounded as $\mathrm{SNR} \to \infty$.
Proof. Similar to the analysis in [13], it can be shown that for input distributions that escape to infinity with $\mathcal{P} = (\sigma^2 L) K^\delta$, $\delta > 3$, $V_{i,\ell} = \Omega_p(K^\delta)$ for all $i \in [K]$ and $\ell \in [n]$. Furthermore, in the proof of Lemma 4 it is shown that in this case $\mathrm{mod}(\Phi_{i,\ell}, 2\pi) \to \mathcal{U}(0, 2\pi)$. Hence, the infinite-dispersion SSFM channel tends to the infinite-dispersion fading channel. These two steps can alternatively be proved using induction on the output of segment $i \in [K]$ and using Lemma 3 for sufficiently large $K$. The result then follows from Theorem 4.
A. Proof of Lemma 1
For matrices $A$ and $B$, define the commutator $[A, B] = ABA^{-1}B^{-1}$. It can be verified with algebraic manipulations that $\mathsf{M}_K$ can be written in terms of such commutators. Let $\bar{C}_m \triangleq (-1)^m C_m$, and expand $D_K$ and $D_K^{-1}$ in $1/K$ using Taylor's theorem. A simple calculation, using (49) and $C_1 + \bar{C}_1 = 0$, yields the expansion (47); combining (47) with the above relation results in the claimed expansion, where step (a) is obtained from the joint distribution of the accumulated phase terms. Finally, since $e^{\bar{L}} = e^{(\zeta + jd)I_n} = e^{\zeta + jd} I_n$, the lemma follows.
B. Proof of Theorem 3
In the following, we restrict the input to the class of absolutely continuous random vectors, for which $h(\mathsf{M}_K X)$ is a continuous function of $K$ with respect to the total variation distance [15, Theorem 1]. First, we prove that:
i. If $X$ is i.i.d., then
$$\frac{1}{n} I(X; Y_K) \ge \frac{1}{2\delta}\log_2(\mathcal{P}) + h(|X_1|) - \mathrm{E}\big[\log_2\big(\|X\|^4\big)\big] + o(1),$$
where the $o(1)$ term vanishes as $\mathcal{P} \to \infty$.
Next, using this general lower bound we obtain the following.
ii. By choosing $X \sim \mathcal{N}_{\mathbb{C}}(0, \mathcal{P} I_n)$, we obtain the bound (56).
Consider $L(\theta_m) - \bar{L}$. Considering (29), $\Delta_{rr}$ is deterministic and thus zero. If $r \ne \ell$, the corresponding entry is a sum of terms of the form $T_r \sim \mathcal{N}_{\mathbb{C}}(0, 1)$, by the central limit theorem. This yields the output entropy
$$h(Y_K) = h\big(e^{\zeta} R(\theta) X\big) + o(1) = n\log_2\big(2\pi e^{2\zeta}\big) + n\,h(|X_1|) + n\,\mathrm{E}\big[\log_2(|X_1|)\big] + o(1),$$
where the $o(1)$ term vanishes as $K \to \infty$ and $\mathcal{P}/K \to \infty$. Next, we bound the conditional entropy part. The output $Y_{K,\ell}$ is equal in probability to $e^{\zeta + j\theta_\ell} X_\ell + (\Delta X)_\ell + Z_\ell + o_p(1)$, where $\theta_\ell \sim \mathcal{U}(0, 2\pi)$, $\eta$ is given in (33), and the $o_p(1)$ term vanishes as $K \to \infty$. Note that for fixed $\ell$, $(T_{\ell,r})_r$ are independent. Hence, given $X = x$, the interference term is distributed as $T \sim \mathcal{N}_{\mathbb{C}}(0, \sigma_T^2)$, where step (a) is derived using the Cauchy-Schwarz inequality and step (b) follows from the structure of $C_1$. Thus, conditioned on $X = x$, the conditional entropy can be bounded; this relation, together with (59) and (60), yields the first part of the theorem.
Since the left-hand side does not depend on $p_X(x)$, we obtain (56). The last equality above follows from the continuity of $I(X; Y_K)$ for the SSFM channel as a function of $K$ at $K \to \infty$, shown in Lemma 5.
C. Proof of Lemma 3
The proof is based on Theorem 11 in Appendix A. The reader is referred to Appendix A for the notation used in this section.
Let $T \triangleq D R(\theta)$. Clearly, $T$ is a unitary matrix. Denote the probability measure of $T$ by $\mu$. We show that the following two conditions of Theorem 11 hold.
Condition 1. Denote the smallest closed subgroup of $\mathcal{U}_n$ that contains the support of $\mu$, i.e., $S(\mu)$, by $H$. Moreover, denote the smallest subgroup of block-diagonal matrices that contains $D$ by $\mathcal{U}_n(\nu)$.
The first condition to verify is $H = \mathcal{U}_n(\nu)$. This condition is needed because if $H \subsetneq \mathcal{U}_n(\nu)$, the product of instances of $T$ remains in $H$. Hence, the probability measure of the product of $K$ i.i.d. instances of $T$ would not be the Haar measure on $\mathcal{U}_n(\nu)$.
By letting $\theta = (0, \dots, 0)$, we have $D \in H$, and thus $D^{-1} \in H$, and $D^{-1} D R(\theta) = R(\theta) \in H$. Hence, $H$ contains all diagonal unitary matrices, including the matrix $D$. Since the smallest block-diagonal subgroup that contains $D$ is $\mathcal{U}_n(\nu)$, Theorem 7 in Appendix A implies $H = \mathcal{U}_n(\nu)$.
Condition 2. The next condition to verify is that $\mu$ is normally aperiodic. This means that $S(\mu)$ is not contained in a (left or right) coset of a proper closed normal subgroup of $\mathcal{U}_n(\nu)$. To see why this condition is needed, suppose by contradiction that there exist a proper closed normal subgroup $H$ of $\mathcal{U}_n(\nu)$ and $V \in \mathcal{U}_n(\nu)$ such that $S(\mu) \subseteq VH$ or $S(\mu) \subseteq HV$, or equivalently $V^{-1}S(\mu) \subseteq H$ or $S(\mu)V^{-1} \subseteq H$. Suppose that $V^{-1} = DR(\theta_r)\cdots DR(\theta_1)$.
Consider all matrices $M = DR(\theta_j)V^{-1}$. The second condition states that the smallest closed normal subgroup that contains these matrices is $\mathcal{U}_n(\nu)$. In other words, starting from any $r$ initial steps, all possible unitary matrices in $\mathcal{U}_n(\nu)$ can be reached.
To verify the second condition, we consider the cases $V^{-1}S(\mu) \subseteq H$ and $S(\mu)V^{-1} \subseteq H$ separately.
Left coset: In this case, $V^{-1}D$ and the subgroup of diagonal matrices belong to $H$. Suppose that there exists $W \in \mathcal{U}_n(\nu)$ such that $W \notin H$ and $W = Q\Gamma Q^{-1}$, where $\Gamma$ is a diagonal matrix. However, since $H$ is a normal subgroup of $\mathcal{U}_n(\nu)$ and $\Gamma \in H$, then $W = Q\Gamma Q^{-1} \in H$, which is a contradiction.
Right coset: Since $DR(\theta_1)V^{-1} \in H$ and $DR(\theta_2)V^{-1} \in H$, we have $DR(\theta_1 - \theta_2)D^{-1} \in H$. Hence, for any $\theta$, $DR(\theta)D^{-1} \in H$. Similar to the previous case, suppose by contradiction that there exists $W \in \mathcal{U}_n(\nu)$ such that $W \notin H$ and $W = Q\Gamma Q^{-1}$, where $\Gamma$ is a diagonal matrix. However, since $H$ is a normal subgroup of $\mathcal{U}_n(\nu)$, $QD^{-1} \in \mathcal{U}_n(\nu)$, and $D\Gamma D^{-1} \in H$, thus $W = (QD^{-1})\,D\Gamma D^{-1}\,(DQ^{-1}) \in H$, which is a contradiction.
D. Proof of Lemma 4
The proof of Lemma 4 is similar to the proof of Lemma 2, where $\mathsf{M}_K$ is expanded in $1/K$.
Note that if $D_K$ does not depend on $K$, then when $\mathcal{P} \to \infty$ the phase tends to a uniform random variable in every segment for every input [13]. However, if the dispersion values scale as $1/K$ and $\mathcal{P} = (\sigma^2 L)K^\delta$, the phase tends to zero in one segment if $\delta < 1$. But if we add a sufficiently large number $K^{1-\delta+\epsilon_0}$ of segments, so that the variance of the phase tends to infinity, the output phase tends to a uniform variable for every input. In what follows, we make these statements precise.
We prove the lemma formally by induction on the segment index $i$. The output $V_{i+1}$ of segment $i \in [K]$, as a function of the channel input $X$, can be written in terms of partial products $\mathsf{M}_{i,K}$ and accumulated noise terms $Z_i$, where the nonlinear phase $\Phi_{i,\ell}$ is given in (15), and $\mathsf{M}_{0,K} \triangleq I_n$, $Z_0 \triangleq 0$. Note that $\mathsf{M}_{K,K} = \mathsf{M}_K$. First, we expand $\mathsf{M}_{i,K}$ similarly to the analysis in the proof of Lemma 1. For $t \in [i]$, denote the partial phase product $S_t$ and the remainder $R_K(t)$, where $C_1$ is defined in (48), and expand $\mathsf{M}_{i,K}$ accordingly; note that $R_K(1) = S_K$. Fix $\epsilon_1 > 0$ sufficiently small. We shall prove that for each $i$:
Claim 1. For $\ell, \ell' \in [n]$, $(\Delta_i)_{\ell,\ell'} = O_p\big(K^{-\hat{\upsilon}(\delta)}\big)$, where $\hat{\upsilon}(\delta) \ge \upsilon(\delta) - \epsilon_1$ and $\upsilon(\delta)$ is defined in (41).
Claim 2. For $\ell \in [n]$ and $t \in [i]$, the sum of the corresponding consecutive terms is $O_p(K^g)$.
Claim 3. If $i$ satisfies (77), then $\hat{\upsilon}(\delta)$ in (75) is bounded as in (78), where
$$g \triangleq \max_{l, l' \in [n]} \Big(-\log_K\big(e^{j(|X_l|^2 - |X_{l'}|^2)/K} - 1\big)\Big). \tag{79}$$
Note that $g = o_p(1)$, and thus vanishes for absolutely continuous inputs, when $\delta > 1$. Lemma 4 follows from (74) at $i = K$ and Claim 1. Claims 2 and 3 are needed in the proof of Claim 1.
Note that Claims 1-3 yield the limiting channel of Lemma 4, with $Z \in \mathbb{C}^n$ and $Z \sim \mathcal{N}_{\mathbb{C}}(0, \eta\sigma^2 L I_n)$. For $i = 1$, Claims 1-3 hold by direct computation. Assume that Claims 1-3 hold for $i \in [r-1]$. We need to show that they hold for $i = r$ as well. Using the induction assumption (75) together with (74) and (69), the nonlinear phase $\Phi_{i+1,\ell}$, where $i \in [r-1]$, can be expanded term by term, where step (a) follows by substituting (69) and (74) into the previous line. The variables $Z'_{i+1,\ell}$ and $Z''_{i+1,\ell}$ denote Gaussian noises with variances that do not depend on $K$, and $E_{t,\ell}$ is defined in (82).
The leading term captures the intra-channel interactions, while the remaining terms, of order $2\gamma\varepsilon\sqrt{\varepsilon}$, represent the signal-noise interactions.
To show Claims 1 and 3, note that $(\Delta_r)_{\ell,\ell}$, $\ell \in [n]$, is equal to zero with probability one. It remains to show that the off-diagonal elements are $O_p\big(K^{-\hat{\upsilon}(\delta)}\big)$. We show this for element $(1, 2)$; the proof is similar for the other elements. The rest of the proof is presented separately for different ranges of $\delta$, and for $\zeta = 0$, since the loss factors multiply the per-segment terms for $i \in [K]$. It can be shown that, asymptotically as $K \to \infty$, the effect of loss on the convergence rate of $\Delta_r$ vanishes.
Case $0 < \delta < 1$: In this case, first for $r \le K^{1-\delta/3-\epsilon_1}$ we prove Claims 2 and 3, and consequently Claim 1 follows. Then, using this result, we prove Claim 1 for $r > K^{1-\delta/3-\epsilon_1}$ as well.
Assume that $r \le K^{1-\delta/3-\epsilon_1}$. We argue first that Claim 2 holds for $t \in [r]$, from which Claims 1 and 3 are then concluded. Let
$$Q_\ell \triangleq \gamma L K^{-\delta} |X_\ell|^2, \qquad \ell \in [n]. \tag{90}$$
Using the induction assumption (75), and since $r - t + 1 \le K^{1-\delta/3-\epsilon_1}$, the partial sums are bounded as in (91), where step (a) holds due to the induction assumption (78). This proves Claim 2. Note that, as can be seen in the above calculations, the contribution of the signal-noise interactions is bounded. Furthermore, since for $0 \le \delta \le 1$ we have $|(\Delta_r)_{1,2}| = |(C_1)_{1,2}|\, O_p(K^{-\delta})$, this completes the proof of Claim 3 and consequently Claim 1.
For $r > K^{1-\delta/3-\epsilon_1}$, it can similarly be verified that the sum over each block of $t_2 - t_1 = K^{1-\delta/3-\epsilon_1}$ consecutive terms is bounded, which bounds $|(\Delta_r)_{1,2}|$. This bound can be improved using the following approach. When $t$ consecutive terms form a geometric series, their sum telescopes. On the other hand, if $t_2 - t_1 = K^{1-\delta/3+\epsilon_1}$, then, similar to the analysis in (91), the second term on the right-hand side of the corresponding relation is non-deterministic given the input and is of bounded order. Hence, the summation of terms at distance more than $K^{1-\delta/3+\epsilon_1}$ can be treated as a summation of independent random variables. Using the central limit theorem yields $|(\Delta_r)_{1,2}| = |(C_1)_{1,2}|\, O_p\big(K^{-\hat{\upsilon}(\delta)}\big)$. This completes the proof of Claim 1.
Case $1 \le \delta < 1.5$: Similar to the previous case, first assume that $r \le K^{1-\delta/3-\epsilon_1}$. For $i \in [r-1]$, by the induction assumption (78), $\hat{\upsilon}(\delta) \ge 1 - g$. Since $1 - \delta/3 < 2 - \delta$, it can be verified, as in the previous case, that the restrictive term is the signal-noise interaction term and (76) (Claim 2) holds; this yields (103).
For $\delta > 1$, the term $1/\big(e^{jK^{\delta-1}Q} - 1\big)$ is an oscillating function of order $O_p(K^g)$, which shows that $\hat{\upsilon}(\delta) = 1 - g$ for $r$ steps as well, $r \le K^{1-\delta/3-\epsilon_1}$. This proves Claim 3, and consequently Claim 1, for $r \le K^{1-\delta/3-\epsilon_1}$. For $r > K^{1-\delta/3-\epsilon_1}$, similar to the previous case it can be verified that the sum of each $K^{1-\delta/3-\epsilon_1}$ consecutive terms is $O_p(K^g)$. Thus,
$$|(\Delta_r)_{1,2}| = |(C_1)_{1,2}|\, O_p\big(K^{\delta/3 - 1 + g + \epsilon_1}\big). \tag{104}$$
Furthermore, using the central limit theorem, this bound can be improved. Finally, note that for absolutely continuous inputs, $g = o_p(1)$. This completes the proof of Claim 1.
Case $1.5 \le \delta < 2$: Since $2 - \delta < 1 - \delta/3$, in this regime the intra-channel term is the restrictive one. In this case, it can be verified that $r \le K^{2-\delta-\epsilon_1}$ consecutive terms form a geometric series and their sum is $O_p(K^g)$ (Claims 2 and 3). Hence, to show Claim 1, $|(\Delta_r)_{1,2}|$ can be bounded as
$$|(\Delta_r)_{1,2}| = |(C_1)_{1,2}|\, O_p\big(K^{\delta - 2 + g + \epsilon_1}\big).$$
Note that, similar to the previous case, $g = o_p(1)$ for absolutely continuous inputs.
Cases $2 \le \delta < 3$ and $3 \le \delta$: For these cases, we first argue that when (75) holds, then as $K \to \infty$, $\mathrm{mod}(\Phi_{r,\ell}, 2\pi) \to \mathcal{U}(0, 2\pi)$, independent of $(\Phi_{i,\ell})_{i=1}^{r-1}$. In other words, in this regime as $K \to \infty$ the SSFM channel tends to the finite-dispersion fading channel, except for the first segment and when $2 < \delta < 3$, which is negligible. This completes the proof of Claim 1. Note that Claims 2 and 3 are valid only for $\delta < 2$. For $2 \le \delta < 3$, the second phase operator contains, for i.i.d. inputs, a term that induces a stochastic impairment which grows when $\delta > 2$ and $K \to \infty$. Hence, as $K \to \infty$, each step reduces to uniform phase noise. Similarly, after $r$ steps there is a stochastic impairment of order $\sqrt{i/K}\,\|X\|^4$. Thus, the channel (except for the first segment) is equivalent to the fading channel. For $3 \le \delta$, the SSFM channel tends to the fading channel for any input distribution that escapes to infinity, as $K \to \infty$: using the induction assumption, for $1 \le i \le r$, the second (signal-noise mixing) term of $\Phi_{i,1}$, of order $2\gamma\varepsilon\sqrt{\varepsilon}$, dominates.
E. Proof of Theorem 1
First we show that (21) holds. Let $Y'_K \triangleq R(\theta')Y_K$, where $\theta'_\ell \overset{\text{i.i.d.}}{\sim} \mathcal{U}(0, 2\pi)$. By the data processing inequality, $I(X; Y'_K) \le I(X; Y_K)$. In the following, we establish a lower bound for the channel $X \mapsto Y'_K$, where $\theta_\ell \sim \mathcal{U}(0, 2\pi)$, independent of $X$.
If $\upsilon(\delta) < \frac{1}{2}\delta$, then for a sufficiently small value of $\epsilon_1$, the term $O\big(K^{-\upsilon(\delta) + \epsilon_1}\big)X$ vanishes as $K \to \infty$ and the channel tends to $n$ independent phase noise channels. The lower bound (21) on $C(\mathrm{SNR}, \delta\sqrt{\mathrm{SNR}})$ for $0 \le \delta \le 3/2$ can then be established similarly to Theorem 2.
If $\upsilon(\delta) > \frac{1}{2}\delta$, an approach similar to that in the proof of Theorem 3 can be applied. The output entropy $h(Y'_K)$ can be bounded as in (59). Bounding the conditional part also follows as in the proof of Theorem 3, with the difference that the defined variable $T$ is no longer Gaussian but a random variable with bounded variance $\sigma_T^2$. Applying the maximum entropy theorem and letting $\epsilon_1 \to 0$, the conditional entropy can be similarly bounded. This relation, together with (59) and (60), and letting $X \sim \mathcal{N}_{\mathbb{C}}(0, \mathcal{P} I_n)$, shows that (21) holds also for $\upsilon(\delta) \ge \frac{1}{2}\delta$. Now, to show (20), first note that, similar to the proof of Lemma 4 for the case $\delta < 1$, for any $\epsilon_1$, when $\mathrm{SNR}/K \to 0$ and $\mathrm{SNR} \to \infty$, we have
$$\mathsf{M}_K = e^{\zeta + jd}\, S_K + O_p\big(\mathrm{SNR}^{-5/6 + \epsilon_1}\big).$$
Next, by considering $X \mapsto Y'_K$, as in (111), and choosing the input as $\mathcal{N}_{\mathbb{C}}(0, \mathcal{P} I_n)$, we obtain (20). This completes the proof.
VII. CAPACITY SIMULATION
The capacity results in Section V are supported by numerical simulation, presented in this section.
We compute the maximum AIR by simulation and compare it with the upper and lower bounds (1) and (3). Furthermore, we investigate the properties of the random matrix $\mathsf{M}_K$; in particular, we demonstrate that $\mathsf{M}_K$ tends to a diagonal matrix if $K$ is sufficiently large.
A. Achievable information rate
We consider an SSFM channel corresponding to a discretization of a fiber with parameters given in Tab. I, $L = 2000$ km, and $B = 20$ GHz and $B = 5$ GHz, resulting in $\sigma^2 = 1.2 \times 10^{-13}$ J/m and $\sigma^2 = 3 \times 10^{-14}$ J/m, respectively. We assume that fiber loss is perfectly compensated by distributed amplification, and choose time parameters $\Delta_t = 1/B$ and $n = 4096$. Each element of the input vector is chosen i.i.d. from a uniformly-spaced multi-ring constellation with $m_A$ rings and 8 points in phase.
The AIR is computed with equalization. Given the output $Y$, back-propagation is applied to obtain $\hat{Y}$. The per-sample conditional PDF $p_{\hat{Y}_1|X_1}(\hat{y}_1|x_1)$ is numerically computed by averaging over all samples. The maximum of $I(X_1; \hat{Y}_1)$ over the input PDF provides a lower bound on the capacity:
$$C \ge \frac{1}{n} I(X; \hat{Y}) \ge I(X_1; \hat{Y}_1),$$
where the last inequality holds for i.i.d. input. Fig. 2(a) shows the maximum AIR as a function of the launch power and SNR for $B = 20$ GHz. It can be seen that the AIR is close to the upper bound (1) in the low-SNR regime $0 \le \mathrm{SNR} \le 15$ dB, and then, following a drop, increases again, approaching the lower bound (3) as the SNR is increased.
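A minimal version of this per-sample AIR estimator can be written as follows. This Python sketch is our own illustration, assuming uniform inputs over a finite constellation and a histogram estimate of $p(\hat{y}|x)$; it is biased for a finite number of bins and samples.

```python
import numpy as np

def air_per_sample(x, y, constellation, bins=80):
    """Memoryless AIR estimate I(X_1; Yhat_1) from paired samples (x, y).

    x : transmitted symbols, drawn uniformly from `constellation`
        (every constellation point is assumed to appear in x)
    y : matched received samples (after back-propagation)
    The conditional PDF p(y|s) is estimated by a 2-D histogram per symbol s,
    and the mutual information is averaged over the samples.
    """
    m = np.max(np.abs(y)) * 1.05
    edges = np.linspace(-m, m, bins + 1)
    def hist2d(v):
        h, _, _ = np.histogram2d(v.real, v.imag, bins=[edges, edges], density=True)
        return h + 1e-12                                    # avoid log(0)
    cond = {complex(s): hist2d(y[x == s]) for s in constellation}
    p_y = sum(cond.values()) / len(constellation)           # uniform input
    ix = np.clip(np.digitize(y.real, edges) - 1, 0, bins - 1)
    iy = np.clip(np.digitize(y.imag, edges) - 1, 0, bins - 1)
    info = [np.log2(cond[complex(s)][i, j] / p_y[i, j])
            for s, i, j in zip(x, ix, iy)]
    return float(np.mean(info))                             # bits per 2-D symbol
```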
The AIR tends to infinity along the lower bound, which appears to be tight in our simulations. Fig. 2(b) shows the convergence of the AIR to the lower bound at high powers for $B = 5$ GHz. Note that dispersion is stronger for larger bandwidth. As a consequence, the stochastic ISI and the drop in the AIR are lower in Fig. 2(b) compared to those in Fig. 2(a).
[Figure: transmitted (TX) and received (RX) constellations at SNR = 50 dB and 75 dB, corresponding to Fig. 2(a), showing a number of symbols in the constellation at the transmitter and receiver.]
In the regime $\mathrm{SNR} < 38$ dB, the received symbols are localized around the transmitted symbols, and the AIR is between the upper and lower bounds (1) and (3). In the medium-SNR regime $45\ \mathrm{dB} < \mathrm{SNR} < 55\ \mathrm{dB}$, the received symbols are almost independent of the transmitted symbol, resulting in an almost zero AIR. Finally, in the high-SNR regime $\mathrm{SNR} > 75$ dB, the phase of the received symbols conditioned on the transmitted symbol is uniform; however, the amplitude is now localized, limited by the additive ASE noise. The AIR in this regime is $(1/2)\log_2(1 + \mathrm{SNR}) - 1/2 + o(1)$.
The analysis in Section V shows that the SSFM model tends to a diagonal one for sufficiently large K without equalization. Both deterministic and stochastic ISI tend to zero with K. Simulation of the AIR without equalization shows a pattern similar to that in Fig. 2, although the value of the AIR is smaller due to deterministic ISI.
It follows that the AIR follows a double-ascent curve. As previously known, the AIR has an inverted bell shape, corresponding to the range $\mathrm{SNR} \le 43$ dB in Fig. 2(a). The existence of an optimal power in this range is attributed to a balance between the ASE noise and the stochastic ISI. However, if the SNR is further increased, the ISI eventually averages out to zero, as proved in Lemma 4. This gives rise to the second ascent in the AIR, where the AIR approaches the rate of an interference-free phase noise channel.
Note that equalization using back-propagation improves the AIR at low-to-medium SNRs by canceling the deterministic component of the inter-symbol interference (ISI). At high SNRs there is no benefit in applying equalization, as the model is already ISI-free. However, if equalization is applied, the model remains diagonal, since the phase at the input of the equalizer is uniform conditioned on the channel input.
B. Conditional PDF in the Fading Channel
In this and the next section, we verify the properties of the conditional PDF in the fading and SSFM channels. As the input is multi-dimensional, we compute the distribution of specific entries of the channel matrix $\mathsf{M}_K$.
In the first experiment, we simulate the random matrix $\mathsf{M}_K$ in (24) for the finite-dispersion case with zero loss, $n = 32$, and values of $b_\ell$ in (17) with $T = 50$, $\beta_2 = -2$, and $L = 1/4$. Fig. 4a shows that the empirical PDF of $W \triangleq |(\mathsf{M}_K)_{1,1}|$ converges to the Dirac delta function $\delta(W - 1)$, which is explained by Lemma 1. For these choices of parameters, the $b_\ell$ are small and the PDF of $|(\mathsf{M}_K)_{1,1}|$ tends to $\delta(W - 1)$ as $K$ increases.
In the second experiment, the random matrix $\mathsf{M}_K$ in (24) is simulated for the infinite-dispersion case with zero loss and $n = 32$. The values of $b_\ell$ are chosen to be numbers in the interval $(0, \pi/3]$ such that the matrix $D$ becomes a non-block-diagonal matrix. Fig. 4b shows that the empirical PDF of $W \triangleq |(\mathsf{M}_K)_{1,1}|$ converges to the PDF of $\hat{W} = |\mathsf{M}_{1,1}|$, where $\mathsf{M}$ is distributed according to the Haar measure over the group of unitary matrices. This supports the result of Lemma 3. Note that, by Theorem 9 in Appendix A, $P_{\hat{W}}(\hat{w}) = 2(n-1)\hat{w}(1 - \hat{w}^2)^{n-2}$.
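The closed form $P_{\hat{W}}(\hat{w}) = 2(n-1)\hat{w}(1-\hat{w}^2)^{n-2}$ can be checked directly against samples from the Haar measure. The Python sketch below (an illustration with assumed parameter values) samples Haar unitary matrices via the standard QR construction and compares the empirical histogram of $|\mathsf{M}_{1,1}|$ with the formula.

```python
import numpy as np

def haar_unitary(n, rng):
    """Sample from the Haar measure on U(n) via QR of a complex Gaussian matrix."""
    z = (rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))) / np.sqrt(2)
    q, r = np.linalg.qr(z)
    d = np.diag(r)
    return q * (d / np.abs(d))      # fix the phase ambiguity of the QR factors

rng = np.random.default_rng(0)
n, trials = 32, 20000
w = np.array([abs(haar_unitary(n, rng)[0, 0]) for _ in range(trials)])
hist, edges = np.histogram(w, bins=50, range=(0.0, 0.5), density=True)
centers = 0.5 * (edges[:-1] + edges[1:])
pdf = 2 * (n - 1) * centers * (1 - centers ** 2) ** (n - 2)
print(np.max(np.abs(hist - pdf)))   # small for a large number of trials
```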
In the third experiment, the first simulation is repeated with the same parameters except $L = 25$, which results in larger absolute values of $b_\ell$. In this case, as $K$ is increased, the empirical PDF of $W$ first gets close, very quickly as shown in Fig. 5a, to the PDF of $\hat{W}$, which by Lemma 3 corresponds to the PDF of $|(\mathsf{M}_K)_{1,1}|$ in the infinite-dispersion fading channel when $K \to \infty$. Hence, it seems that when the $|b_\ell|$ are not small, the PDF of $|(\mathsf{M}_K)_{1,1}|$ in the fading channel with finite dispersion is similar to the infinite-dispersion case.
As $K$ is further increased, the distribution of $\mathsf{M}_K$ moves away from the PDF of $\hat{W}$; as a consequence, as can be observed in Fig. 5b, the PDF of $W$ tends to $\delta(W - 1)$ rather than (127).
C. Conditional PDF in the SSFM Channel
We verify that the SSFM channel is nearly diagonal when $K$ is sufficiently large. In the first experiment, we simulate a lossless channel with 1000 realizations of the noise and a large input $X_\ell \overset{\text{i.i.d.}}{\sim} 10^8\,(\mathcal{U}(0, 1) + 0.7)$. We compute the empirical PDF of $|Y_1|/|X_1|$ to show that it converges to the Dirac delta function; the same holds for any $i \in [n]$. The number of spatial segments $K$ is chosen as follows. Considering the proof of Lemma 1 in Section VI, the SSFM model is diagonal when $C_1/\sqrt{K}$ is small, where the matrix $C_1$ is defined in (48). Hence, $\frac{1}{\sqrt{K}}\max_\ell |d_\ell|$ should be small, e.g., less than 0.1. For the normalized NLS equation in [20], $\max_\ell |d_\ell| = (n\pi/T)^2$.
In the second experiment, we investigate the rate of convergence of the SSFM channel to the diagonal phase noise model. We consider the normalized SSFM with $n = 32$, $K = 10^4$, $\sigma^2 = 5 \times 10^{-5}$, and 1000 realizations of the noise and input. From Lemma 4 and (87), the rate of convergence of $\mathsf{M}_K$ to $e^{\zeta + jd} S_K$ is characterized by the exponent $\hat{\upsilon}(\delta)$. In Fig. 8, $\hat{\upsilon}(\delta)$ is simulated. The results are compatible with Lemma 4, stating that $\hat{\upsilon}(\delta) > \upsilon(\delta) - \epsilon_1$ for any $\epsilon_1 > 0$, where $\upsilon(\delta)$ is defined in (41). Moreover, it can be seen in the proof of Lemma 4 that, for small values of $\sigma^2 L$, the signal-noise mixing may not be dominant except for very large $K = (\gamma L^{3/2} \sigma)^{-2/\delta}$.
When $\gamma L^{3/2} \sigma K^{\delta/2}$ is small, $\hat{\upsilon}(\delta)$ can be lower bounded by a value compatible with the simulation result in Fig. 8. The oscillation for $1 \le \delta \le 2$ is also explained by the vanishing oscillating term $1/\big(e^{jK^{\delta-1}Q} - 1\big)$ in (103) in the proof of Lemma 4. In the last experiment, the effect of loss is examined. The previous experiment is repeated with fixed $\delta = 0.6$ and $\zeta = -1.35$. For $K = 100$, $K = 1000$, and $K = 10000$, the convergence rates $\hat{\upsilon}(\delta)$ are 0.463, 0.512, and 0.534, respectively. This is explained by (88), implying that for $0 < \delta < 1$ and small noise power, $\hat{\upsilon}(\delta) = \delta + 2a\zeta/\ln K \to \delta$, where $a$ is a value less than 1. In our experiment, $a \approx 0.23$.
VIII. CONCLUSION
The capacity of the discrete-time SSFM model of optical fiber is considered as a function of the average input signal power (SNR), when the number of spatial segments in SSFM is sufficiently large, $K = \delta\sqrt{\mathrm{SNR}}$, $\delta > 0$. First, we obtained the capacity lower bound (21) and characterized the pre-log as a function of $\delta$. In particular, we showed that $C(\mathrm{SNR}) \ge \frac{1}{2}\log_2(1 + \mathrm{SNR}) - \frac{1}{2} + o(1)$, where $o(1)$ vanishes as $\mathrm{SNR} \to \infty$. As a result, the number of signal DoFs is at least half of the input dimension.
Second, it is shown that the capacity of the continuous-space SSFM channel when $\gamma \to \infty$ is $\frac{1}{2}\log_2(1 + a\,\mathrm{SNR}) + o(1)$. Hence, the number of signal DoFs is exactly half of the input dimension.
Third, we considered the SSFM model, termed infinite-dispersion, where the dispersion matrix in each segment does not depend on $K$. It is shown that if $K = \delta\sqrt{\mathrm{SNR}}$, $\delta > 3$, then $C(\mathrm{SNR}, \delta\sqrt{\mathrm{SNR}}) = \frac{1}{2n}\log_2(1 + \mathrm{SNR}) + O(1)$, where the $O(1)$ term is bounded as $\mathrm{SNR} \to \infty$. Here, there is exactly one signal DoF.
Finally, AIRs of the SSFM model with back-propagation equalization are obtained numerically. The results show that, while the AIR drops significantly in the medium-SNR regime due to considerable stochastic ISI, it asymptotically converges to $\frac{1}{2}\log_2(1 + \mathrm{SNR}) - \frac{1}{2} + o(1)$, explained by the fact that the ISI vanishes at high SNRs.
A. Spherical Coordinate System
The $n$-dimensional spherical coordinate system is described by a radius $r$ and $n - 1$ angles $\theta_\ell$, $\ell \in [n-1]$, where $\theta_\ell \in [0, \pi]$ for $\ell \in [n-2]$ and $\theta_{n-1} \in [0, 2\pi)$. A vector $x \in \mathbb{R}^n$ can be written in spherical coordinates as
$$x_1 = r\cos(\theta_1), \qquad x_\ell = r\cos(\theta_\ell)\prod_{r=1}^{\ell-1}\sin(\theta_r), \quad \ell = 2, \dots, n-1, \qquad x_n = r\prod_{r=1}^{n-1}\sin(\theta_r). \tag{120}$$
Alternatively, we denote $x$ by its norm $\|x\|$ and direction $\hat{x} = x/\|x\|$ on the surface of the $(n-1)$-sphere $\mathcal{S}^{n-1} = \{x \in \mathbb{R}^n : \|x\| = 1\}$.
Complex vectors in $\mathbb{C}^n$ can be similarly represented.
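A direct implementation of the map (120) is given below; it is a small Python illustration of the coordinate change, with arbitrary test values.

```python
import numpy as np

def from_spherical(r, theta):
    """Cartesian vector from radius r and angles theta, as in (120).

    theta has n-1 entries; theta[:-1] in [0, pi], theta[-1] in [0, 2*pi).
    """
    n = theta.size + 1
    x = np.empty(n)
    sin_prod = 1.0
    for l in range(n - 1):
        x[l] = r * sin_prod * np.cos(theta[l])
        sin_prod *= np.sin(theta[l])
    x[n - 1] = r * sin_prod     # last coordinate: r times the product of sines
    return x

x = from_spherical(2.0, np.array([0.3, 1.1, 4.0]))
print(np.linalg.norm(x))        # 2.0 up to rounding, since ||x|| = r
```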
B. Groups
The reader is referred to [21]-[23] for background on group theory. For a group $G$, the notation $H \le G$ is used to say that $H$ is a subgroup of $G$, and $gH$ and $Hg$ are respectively the left coset and right coset of $H$ with respect to $g \in G$.
A probability measure on $G$ is a non-negative, real-valued, countably additive, regular Borel measure $\mu$ on $G$ such that $\mu(G) = 1$. The support of $\mu$, denoted by $S(\mu)$, is the smallest closed subset of $G$ of full $\mu$-measure.
A probability measure µ on G is said to be (normally) aperiodic if its support is not contained in a (left or right) coset of a proper closed (normal) subgroup of G.
A matrix $U \in \mathbb{C}^{n \times n}$ is unitary if $U U^H = U^H U = I_n$, where $U^H$ denotes the conjugate transpose of $U$. The set of unitary matrices in $\mathbb{C}^{n \times n}$ with matrix multiplication forms a group $\mathcal{U}_n$, which is a compact Lie group.
The following theorem is a re-statement of [24, Thm. 1], bringing parts of its proof to the theorem statement.
Theorem 7. Suppose that a subgroup $H \le \mathcal{U}_n$ contains the subgroup of diagonal matrices. Let $\nu$ be a binary relation on $I_n = \{1, 2, \dots, n\}$ defined as follows: for all $r, s \in [n]$, $r \overset{\nu}{=} s$ if and only if there exist a matrix $A \in H$ and $t \in [n]$ such that $A_{r,t} \ne 0$ and $A_{s,t} \ne 0$.
Then, we have: (i) $\nu$ is an equivalence relation on $I_n$; (ii) $\mathcal{U}_n(\nu) \le H$.
C. Haar Measure
This subsection is borrowed mainly from [25]. The Haar measure can be seen as an extension of the notion of a uniform random variable on an interval. The extension is based on the shift-invariance property of the uniform random variable: if $X \sim \mathcal{U}(0, a)$, then for any $b \in \mathbb{R}$, $\mathrm{mod}(X + b, a) \sim \mathcal{U}(0, a)$.
Consider defining a uniform distribution on the circle $\mathcal{S}^1$ in $\mathbb{R}^2$. Considering the circle as a geometric object, a "uniform random point on the circle" should be a complex random variable whose distribution is rotation invariant; that is, if $A \subseteq \mathcal{S}^1$, then the probability of the random point lying in $A$ should be the same as the probability that it lies in $e^{j\theta}A = \{e^{j\theta}a : a \in A\}$.
The uniform distribution $\mu$ on a group $G$ is called the Haar measure on $G$, defined based on the "translation-invariance" property as follows. For a group $(G, \cdot)$, an element $g \in G$, and a Borel subset $S \subseteq G$, the left translation of $S$ by $g$ is defined as
$$gS = \{g \cdot s : s \in S\}. \tag{123}$$
A measure $\mu$ on the Borel subsets of $G$ is called left translation-invariant if for all Borel subsets $S \subseteq G$ and all $g \in G$, $\mu(gS) = \mu(S)$.
Right translation and right translation-invariant are defined similarly.
There exists a unique Haar probability measure on any compact group. The following theorem is proved in [25, Lemma 2.1] for $G = \mathcal{U}_n$.
Theorem 8. There exists a unique (left or right) translation-invariant probability measure on $\mathcal{U}_n$, called the Haar measure.
Let $\mathsf{M} \in \mathbb{C}^{n \times n}$ be a random unitary matrix, $W_\ell \triangleq |\mathsf{M}_{\ell,1}|$, and $W = (W_1, \dots, W_n)$.
Again, since the phase is uniform,
$$P_{W_1 W_2}(w_1, w_2) = 2^2 (n-1)(n-2)\, w_1 w_2 \big(1 - (w_1^2 + w_2^2)\big)^{n-3}.$$
The result can be similarly established for $W_1^n$ by induction.
[Fig. 8: Rate of convergence of the SSFM channel to the diagonal model for $n = 32$, $K = 10^4$, and $\mathcal{P} = K^\delta$.]
The conditional PDFs of $W_\ell$ for $\ell = 2, \dots, n$ follow similarly.
Theorem 10. Suppose that $\mathsf{M} \in \mathbb{C}^{n \times n}$ is distributed according to the Haar measure on $\mathcal{U}_n$ and $x \in \mathbb{C}^n$ with $\|x\| = 1$. Then, $P_{\mathsf{M}X|X}(\mathsf{M}x|x)$ is independent of $x$.
Proof. Fix an orthonormal basis $V_1, \dots, V_n$ of $\mathbb{C}^n$ such that $V_1 = x$. Denote the matrix with columns $V_i$ by $\Lambda$. Assume that $\mathsf{M}$ is a unitary matrix distributed according to the Haar measure. Define the map $\Phi : \mathbb{C}^{n \times n} \mapsto \mathbb{C}^{n \times n}$ as $\Phi(\mathsf{M}) = \Lambda^H \mathsf{M} \Lambda$ and let $\mathsf{M}' = \Phi(\mathsf{M})$. Thus,
$$P(\mathsf{M}x \mid x) = P\big(\mathsf{M}'x' \,\big|\, x' = (1, 0, \dots, 0)^T\big).$$
Since $\Phi$ is invertible, $\mathsf{M}'$ is also distributed according to the Haar measure, and the right-hand side of the above is derived in Theorem 9. This proves that $P(\mathsf{M}x|x)$ does not depend on $x$.
D. Random Walk on Groups
This section is mainly from [22]. A random walk on a group $(G, \cdot)$ is
$$S_K = X_K \cdot X_{K-1} \cdots X_1, \qquad K = 1, 2, \dots,$$
where $X_i \overset{\text{i.i.d.}}{\sim} \mu(G)$. If $X$ and $Y$ are random variables on $G$ with PDFs $\mu$ and $\nu$ respectively, then the PDF of $X \cdot Y$ is $\mu * \nu$, where $*$ denotes convolution. Hence, the PDF of $S_K$ is the $K$-th convolution power of $\mu$, denoted by $\mu^{*K}$.
The following theorem is proved by Kawada and Itô for compact metric groups [27]. A more general version is proved by Stromberg in [23,Thm. 3.3.5] for Hausdorff groups, where the aperiodic condition is replaced with the normally aperiodic condition.
Theorem 11 (Kawada-Itô and Stromberg). Let $G$ be a compact Hausdorff group and $H$ the smallest closed subgroup of $G$ which contains $S(\mu)$. Then, $\lim_{K \to \infty} \mu^{*K}$ exists if and only if $\mu$ is a normally aperiodic probability measure on the subgroup $H$. Moreover, if this limit exists, then it is the Haar measure on $H$.
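Theorem 11 can be visualized on the simplest compact group, the circle $\mathcal{U}_1$. In the Python sketch below (our own illustration; the step distribution is an arbitrary aperiodic choice), the law of the random walk $S_K$ approaches the uniform (Haar) measure on the circle as $K$ grows.

```python
import numpy as np

# Random walk on the circle group U(1): S_K = X_K * ... * X_1 with
# X_i = exp(j*Theta_i), Theta_i i.i.d. uniform on [0, 1) -- an aperiodic
# step distribution whose support generates the whole circle.
rng = np.random.default_rng(1)
trials = 50000
for K in (1, 10, 100, 1000):
    theta = rng.uniform(0.0, 1.0, size=(trials, K)).sum(axis=1) % (2 * np.pi)
    hist, _ = np.histogram(theta, bins=32, range=(0, 2 * np.pi), density=True)
    # deviation of the phase histogram from the Haar (uniform) density 1/(2*pi)
    print(K, np.max(np.abs(hist - 1 / (2 * np.pi))))
```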
APPENDIX B CONTINUITY OF MUTUAL INFORMATION
Lemma 5. The mutual information of the fading channel (23) and the SSFM channel defined in Section III-B with K segments is a continuous function of K at K Ñ 8.
Proof. Since $K$ is an integer, the mutual information $I(X; Y_K)$ is not a continuous function of $K$ when $K$ is finite. A small change in SNR can change $K = \lfloor\delta\sqrt{\mathrm{SNR}}\rfloor$ by one, and the model by one segment. However, the mutual information is a continuous function at $K \to \infty$.
The proof is similar to the proof of the continuity of the output and conditional entropies in the zero-dispersion channel [28, App. I], using the fact that the noise PDF, and thus the output PDF induced by noise, vanishes exponentially. We sketch the steps for the SSFM channel.
The conditional PDF of one segment of SSFM is upper bounded by an exponentially decaying function of the output norm; iterating over segments, the conditional PDF of $K$ segments is upper bounded as in (131). Alternatively, the exponential upper bound (131) on the PDF can be obtained from the PDF of the norm, which is known at the output (38). We have
$$\lim_{K \to \infty} h(Y_K \mid X = x) = -\lim_{K \to \infty} \int p(y_K|x)\log p(y_K|x)\, \mathrm{d}y_K \overset{(a)}{=} -\int \lim_{K \to \infty} p(y_K|x)\log p(y_K|x)\, \mathrm{d}y_K = h\Big(\lim_{K \to \infty} Y_K \,\Big|\, X = x\Big),$$
uniformly over the input. Step (a) is obtained by applying the dominated convergence theorem using (131).
Mesenchymal stem cells: a promising way in therapies of graft-versus-host disease
It is well acknowledged that allogeneic hematopoietic stem cell transplantation (allo-HSCT) is an effective treatment for numerous malignant blood diseases, and it has also been applied to autoimmune diseases for more than a decade. However, graft-versus-host disease (GVHD) occurs after allo-HSCT as a common and serious complication, severely affecting the efficacy of transplantation. Mesenchymal stem cells (MSCs), derived from a wealth of sources, can be easily isolated and expanded and have low immunogenicity. MSCs also have paracrine and immunoregulatory functions, giving them broad application prospects in therapy and tissue engineering. This review focuses on the immunoregulatory function of MSCs, the factors that affect the immunosuppressive activity of MSCs, the clinical application of MSCs in GVHD, and research on MSC-derived extracellular vesicles (EVs). The latest research progress on MSCs in related fields is reviewed as well. The relevant literature from the PubMed database is reviewed in this article.
Background
Allogeneic hematopoietic stem cell transplantation (allo-HSCT), the most effective treatment for a variety of malignant blood diseases, has also been applied to improve the therapeutic effect in autoimmune diseases in recent years [1]. Although obvious progress has been made in donor sourcing, conditioning regimens, HLA typing, and the prevention and treatment of graft-versus-host disease (GVHD), GVHD remains the most important complication after allo-HSCT, severely affecting the survival of transplant patients [2,3].
According to its distinct etiology, pathology, and response to treatment, GVHD is clinically divided into acute and chronic forms. Acute GVHD (aGVHD) is characterized by a T helper 1 (Th1) immune response, while chronic GVHD is mainly related to T helper 2 (Th2) immunity and shows the characteristics of autoimmune diseases [4]. aGVHD currently proceeds pathologically in 4 steps: (1) tissue damage caused by pretreatment, high-dose chemotherapy, or radiation therapy; (2) activation of host antigen-presenting cells (APCs) and innate immune cells; (3) antigen presentation by APCs, which promotes the activation and proliferation of donor-derived T lymphocytes and generates and releases large amounts of inflammatory factors, forming an inflammatory storm; (4) recruitment of effector cells by inflammatory factors and induction of their proliferation, leading to damage of the target organs skin, liver, and intestine [5]. The severity of aGVHD is classified into 4 grades: grade I (mild), II (moderate), III (severe), and IV (very severe). The clinical presentations of rash, digestive disorders, and liver disease can be referred to in the diagnosis of patients [6,7]. For the prevention of GVHD, the phosphatase inhibitors cyclosporine A (CsA) and tacrolimus play an immunosuppressive role by blocking the secretion of interleukin 2 (IL-2) and the expansion of T cells. Rapamycin is extensively used to expand regulatory T cells (Tregs) and to induce T cells to become induced Tregs (iTregs). These drugs can be utilized alone or in combination with glucocorticoids. Other preventive methods include antithymocyte immunoglobulins, removal of T cells in vivo, and humanized anti-CD52 monoclonal antibodies to control GVHD and graft rejection [8].
At present, the overall response rate of standard corticosteroid therapy is 50%, and the complete response rate of various immunosuppressive agents is about 30% [9]. Although aGVHD can be partially controlled by glucocorticoids and immunosuppressive agents, severe steroid resistance, secondary infections, and a weakened graft-versus-leukemia (GVL) effect still develop, ultimately leading to treatment intolerance or tumor recurrence. Therefore, innovative biological treatments for aGVHD are of great interest.
As one of the most common adult stem cells, mesenchymal stem cells (MSCs) are non-hematopoietic stem cells originally isolated from bone marrow [10]. They form the bone marrow hematopoietic microenvironment and significantly promote the proliferation and differentiation of hematopoietic stem cells [11]. Possessing a morphology similar to fibroblasts, MSCs grow adherent to plastic culture flasks, self-renew, and differentiate into osteoblasts, adipocytes, and chondrocytes in vitro; they express CD29, CD44, CD54, CD73, CD90, CD105, and CD166, yet do not express hematopoietic stem cell markers such as CD11b, CD14, CD19, CD34, and CD45 [12]. MSCs maintain unique immunological properties, combining immunosuppressive effects with low immunogenicity. Additionally, their low expression of HLA-I molecules and lack of expression of HLA-II molecules and of CD40, CD80, CD86, and other costimulatory factors make MSCs all the more valuable in clinical application [13]. Numerous studies prove that MSCs play an indispensable role in maintaining the regulation of peripheral immune tolerance, transplant tolerance, autoimmunity, tumor escape, and fetal-maternal tolerance [14]. Researchers have proposed the concept of suicide genes in order to eradicate tumor cells without damaging normal cells; hence, a promising carrier is required to deliver the therapeutic gene to the specific cancer site. By virtue of unique features, namely low immunogenicity and good affinity with tumor tissue, MSCs are a potential candidate for successful delivery [15][16][17]. In addition to tumor therapy, in recent years MSCs have been clinically applied to multiple diseases such as acute kidney injury, myocardial infarction, autoimmune diseases, and so on [18,19]. Much research in the last two decades has revealed that, owing to the immunological features of MSCs, co-transplantation with hematopoietic stem cells can reduce the incidence of GVHD, improve graft survival, and accelerate the reconstruction of the hematopoietic and immune systems. Accordingly, MSCs have been used to prevent immune rejection after organ transplantation [20].
Immunoregulatory function of MSCs
The immunoregulatory function of MSCs comprises a series of complex mechanisms and is mainly achieved through cell-cell contact and the release of immunoregulatory factors. Many unknowns and controversies remain in the current research.
MSCs interact with the continuous turnover and replacement of cells in body systems [21]. In terms of T cells, MSCs inhibit the proliferation and activation of T cells and down-regulate the secretion of inflammatory factors (such as IL-2, TNF-α, and IFN-γ). By the same means, MSCs are also involved in reducing the Th1/Th2 ratio as well as the quantity of Th17 cells. Meanwhile, abundant data indicate that conventional T cells may transform into regulatory T cells (including CD4+CD25+FoxP3+ Tregs, CD8+CD28− Tregs, and IL-10+ Tr1 cells) under the influence of MSCs [22,23]. Regarding CD4+CD25+FoxP3+ Tregs, the crucial factor underlying the dramatic modification of the mRNA of relevant genes may be regulation by MSCs, and the Foxp3 complex has some as yet unknown connection with these genes [24]. A considerable amount of literature reports that, despite the major role stable Foxp3 expression plays in the phenotypic and functional stability of Tregs, inflammatory Tregs may reduce Foxp3 expression and convert into effector T cells under certain inflammatory conditions. Through cell-to-cell contact, MSCs can promote the expression of Runt-related transcription factor 1 (RUNX1), RUNX3, and CBFβ complexes in Treg-specific demethylation regions to enhance Foxp3 stability; post-transcriptional regulation of the Foxp3 complex can induce the transformation of conventional T cells into Tregs and amplify the immunosuppressive function of Tregs [24]. The number and function of CD8+CD28− Tregs may be enhanced by the stimulation of IL-10 and FasL and by the decreased apoptosis rate resulting from the function of MSCs [25,26]. In addition, MSCs produce HO-1, which induces and promotes the proliferation of IL-10+ Tr1 cells [27]. As a member of the IL-12 family, interleukin-35 (IL-35) contributes to maintaining immune tolerance by inducing the apoptosis of T cells and the proliferation of Tregs. Guo noted that the quantity of Tregs significantly increased after co-culture with MSCs overexpressing IL-35, whereas the percentage of CD4+ T cells was lower than before [28]. When overexpressing IL-35, MSCs can also specifically migrate to damaged liver tissue and prevent liver cell apoptosis by reducing the FasL expression of monocytes. Moreover, the IFN-γ secreted by liver monocytes is reduced through the regulation of JAK1-STAT1/STAT4 [29]. The inhibitory effect of MSCs on cytotoxic T lymphocytes (CTLs) is exerted mainly by inhibiting CTL proliferation. This inhibitory effect can be observed in both autologous and allogeneic effector cells [30,31]. Furthermore, several studies found that MSCs suppress CTL-mediated lysis if added at the beginning of the mixed lymphocyte culture (MLC); however, if added during the cytotoxic phase, the inhibition of lysis is eliminated. Additionally, some researchers have suggested that the inhibitory effect of MSCs originates from soluble factors [32].
Further, through direct cell contact and by transforming the phenotype of natural killer (NK) cells, MSCs have also proven highly effective in inhibiting the proliferation, cytotoxicity, and cytokine secretion of NK cells; indoleamine 2,3-dioxygenase (IDO) and prostaglandin E2 (PGE2) might be the crucial mediators of this function [33]. For B cells, MSCs can arrest the cell cycle in the G0/G1 phase, inhibiting B-cell proliferation. According to transwell experiments, MSCs produce soluble factors that suppress B cells. The differentiation of B cells is also inhibited by MSCs, as shown by the impaired production of IgM, IgG, and IgA; moreover, the chemotactic function of B cells can be affected by MSCs [34]. Recent studies indicate that MSCs can enlarge the proportion of regulatory B cells (Bregs), such as CD5+ B cells, CD19+CD24high CD38high B cells, and other IL-10-secreting Bregs [35,36]. Jiang also reported that human MSCs can inhibit the transformation of monocytes into dendritic cells (DCs), the most efficient of the antigen-presenting cells (APCs) [37]. Owing to the impact of MSCs on immune cells, especially CD4(+)CD25(+) regulatory T cells and DCs, Mirzaei et al. illustrated that MSCs had remarkable therapeutic effects in patients with multiple sclerosis and amyotrophic lateral sclerosis; thus, the influence of MSCs is significant [18]. In the meantime, MSCs can inhibit the function of M1 macrophages and induce the transformation of M1 macrophages into M2 macrophages. Through co-culture of MSCs and group 3 innate lymphoid cells (ILC3s) with IL-2, ample data show that ILC3 proliferation and the production of IL-22 are up-regulated. In turn, ILC3s induce the expression of ICAM-1 and VCAM-1 on MSCs as well. Consequently, MSCs suppress alloreactive T-cell proliferation and induce the up-regulation of IL-22 via cellular contact and the secretion of MSC-derived cytokines [38] (Fig. 1). Furthermore, MSCs induce the transformation of macrophages (MØs) into a unique anti-inflammatory immunophenotype (MSC-educated MØs [MEMs]). MEMs promote the secretion of IL-6, which beneficially protects against graft-versus-host disease [39].
The paracrine activity of MSCs also plays a pivotal role in their immunoregulatory function. Studies have shown that MSCs achieve direct immune regulation after contact with effector T cells by releasing NO or engaging the Fas/FasL pathway, thereby inducing apoptosis [32]. MSCs can directly secrete anti-inflammatory mediators such as transforming growth factor β (TGF-β), interleukin 6 (IL-6), interleukin 10 (IL-10), indoleamine 2,3-dioxygenase (IDO), vascular endothelial growth factor (VEGF), intercellular adhesion molecule (ICAM) and prostaglandin E2 (PGE2), and express inhibitory co-stimulatory molecules such as programmed death ligand-1 (PD-L1), to exert immunoregulation [40-42]. Further, it has been reported that Th1 and Th17 cells undergo repolarization owing to the increased expression of PD-L1 on MSCs, expanding the proportion of Th2 and Treg cells [43,44]. Previous studies have emphasized that the immunosuppressive function of Tregs is considerably enhanced by co-culture of MSCs and Tregs; the mechanism may originate from up-regulated IL-10 secretion leading to an increase in PD-1/B7-H1 signalling [45]. Interestingly, some researchers reported that substantial amounts of IDO are produced after recipient phagocytes engulf apoptotic MSCs, which plays a crucial role in immunosuppression [46].
The migration of MSCs to the site of injury or inflammation also plays a vital role as a necessary part of their therapeutic effect. It is widely acknowledged that MSCs express multiple chemokine receptors and growth factor receptors, such as CXCR1 [chemokine (C-X-C motif) receptor 1], CXCR2, CXCR4, CCR1 [chemokine (C-C motif) receptor 1], CCR2 and PDGFR-α (platelet-derived growth factor receptor-alpha) [47]. When tissues or organs are damaged, a large release of chemokines drives MSCs to migrate to the damaged tissue and stimulate tissue repair. Ringe et al. reported that MSCs expressed the chemokine receptors CXCR1, CXCR2 and CCR2, and that the migration of MSCs was correlated with stimulation by C-X-C motif chemokine ligand 8 (CXCL8) [48]. Further, impaired tissues can also secrete abundant chemokines, which drive the migration of MSCs [49].
Influence of soluble factors
MSCs from diverse species exert their influence on immune regulation differently. For human MSCs, IDO is indispensable for immunosuppression, degrading tryptophan and forming secondary metabolites in the microenvironment [50]. The expression of the IDO gene in MSCs is linked to the IFN-γ-Janus kinase (JAK)-signal transducer and activator of transcription 1 (STAT1) pathway. Infusing MSCs that over-express the IDO gene raises the complete remission (CR) rate in GVHD patients. In addition to IDO, IFN-γ also contributes to these effects [51]: low concentrations of T cell-derived IFN-γ activate rat BM-MSCs, which then suppress T cell proliferation, whereas high concentrations of IFN-γ do not produce this effect [52]. Moreover, transforming growth factor-β (TGF-β) and prostaglandin E2 (PGE2) are further correlated with the function of MSCs. Multiple results demonstrated that the secretion of PGE2 is mediated by the COX2/PGE2 pathway and stimulates the up-regulation of MSC immunosuppression, and that PGE2 secretion is associated with the increase of PGES via TLR3 [53].
Normally, low levels of intercellular cell adhesion molecule (ICAM) are present on the surface of MSCs. After pretreatment of MSCs with appropriate concentrations of proinflammatory cytokines such as IFN-γ, the production of adhesion-related molecules such as galectin-1 and vascular cell adhesion molecule-1 (VCAM-1) is up-regulated, resulting in contact-dependent effects. Specifically, the higher the concentration of ICAM, the greater its immunosuppressive effect, eventually boosting the suppressive effect of MSCs on T lymphocytes [54,55]. Meanwhile, proinflammatory cytokines also induce MSCs to secrete chemokine ligand 9, CXC chemokine ligand 10 (CXCL-10) and CC chemokine ligand 2 (CCL2), among others, all of which are correlated with recruiting effector T cells. Once MSCs and effector T cells are in contact, the generated NO or Fas/FasL ligands activate the apoptosis of effector T cells [50,56]. Mirzaei et al. reported that CXCL10 significantly down-regulates angiogenesis and the frequency of regulatory T cells in the lungs, and up-regulates the apoptosis of tumor cells and the trafficking of activated T cells to the lungs; this suggests the prospect of applying MSCs to treat patients with melanoma lung metastases [57]. Also, Yu suggested that inhibiting microRNA let-7a, which targets the 3′ UTR of Fas and FasL mRNA, could up-regulate the level of Fas/FasL, consequently enhancing the immunosuppressive efficiency of MSCs [58].
Influence of oxygen concentration
The immunosuppressive effect of MSCs can be affected by the concentration of oxygen as well. A considerable amount of research has demonstrated that culturing MSCs under hypoxic conditions can extend their survival time, decrease oxidative stress and protect against DNA damage and chromosomal aberrations [59]. Moreover, under hypoxic conditions, MSCs tend to remain stem-like, express higher levels of typical surface markers and maintain their multiple differentiation potential; the proliferation of MSCs and the secretion of indoleamine 2,3-dioxygenase (IDO) are also promoted [52]. Further, mice that received MSCs cultured under anoxic or low-oxygen conditions showed alleviated GVHD symptoms and prolonged survival time [60]. The hypoxia-inducible factor (HIF) pathway may trigger this enhancement of MSCs in hypoxia, among which HIF-1α and HIF-2α are key molecules with protective effects on MSCs. Chang et al. demonstrated that HIF-2α maintained MSC viability and promoted cell proliferation through the regulation of CyclinD1 (CCND1) and c-Myc (MYC) by the MAPK/ERK signaling pathway [61]. Similarly, Bingke et al. reported that HIF-1α was associated with increased cell viability and the suppression of MSC apoptosis under hypoxic conditions [62]. Liu et al. suggested that the differentiation and migratory ability of MSCs might be enhanced in low-oxygen conditions through the Akt and NF-κB pathways [63]. In addition, studies showed that hypoxia-pretreated rat-derived BM-MSCs and human gingiva-derived MSCs up-regulated the expression of anti-inflammatory cytokines, with inhibited secretion of tumor necrosis factor (TNF) and promoted secretion of anti-inflammatory cytokines such as IL-10 [64,65]. Consequently, the control of oxygen concentration plays a paramount role in the clinical application of MSCs.
Influence of distinct Toll-like receptor (TLR) ligands
Furthermore, numerous results illustrated that, in endotoxemia models induced by lipopolysaccharide (LPS) pretreatment, the inflammation in various tissues such as lung and liver could not be relieved after the infusion of BMSCs [66]. The potential mechanism lies in the association with Toll-like receptor (TLR) agonists, as shown in Fig. 2. Sangiorgi et al. reported that LPS stimulation of TLR4 increased the gene expression of interleukin (IL)-1β and IL-6 and promoted a direct immunosuppressive action of MSCs on T cells. By comparison, the CpG oligodeoxynucleotide (CpG ODN) DSP30 stimulated TLR9, up-regulating the expression of transforming growth factor (TGF)-β1 and down-regulating tumor necrosis factor (TNF)-α expression; subsequently, the proliferation and immunosuppressive function of MSCs were promoted [67]. All this being said, it is well accepted that the damage to MSC function caused by LPS can be avoided through the application of CpG ODN DSP30. Further, when it comes to TLRs, it is necessary to mention the impact of pathogen-associated molecular patterns (PAMPs) on the function of MSCs.
PAMPs induce the expression of cytokines and soluble factors through distinct signaling pathways and trigger an immune response [68]. PAMPs can be recognized by different TLRs on MSCs and send diverse signals depending on the pairing, for instance TLR3/poly(I:C) and TLR4/lipopolysaccharide (LPS); the potential of MSCs can then be altered [69-74].

Fig. 2 The mechanism of combination therapy. The mechanism of combination therapy is intricate; the potential mechanism lies in the association with Toll-like receptor (TLR) agonists. LPS stimulates TLR4 and increases the gene expression of interleukin (IL)-1β and IL-6, promoting a direct immunosuppressive action of MSCs on T cells. By comparison, CpG oligodeoxynucleotides (CpG ODN) DSP30 stimulates TLR9, up-regulating transforming growth factor (TGF)-β1 expression and down-regulating tumor necrosis factor (TNF)-α expression
Influence of the injection dose of MSCs
Generally, the therapeutic dose of MSCs is 1-2 × 10⁶ cells/kg, with a maximum single dose of 1.2 × 10⁷ cells/kg [50]. Rat BM-MSCs exhibited a significant dose-dependent effect in vitro compared with rat AD-MSCs, although the latter showed stronger immunosuppressive properties [75]. Interestingly, some reports suggested that combining MSCs with short-term mycophenolate mofetil (MMF) can markedly extend survival time; however, no statistically significant difference in survival was observed between different MSC doses [76]. Although there is no strong evidence supporting an impact of MSC dose on the immunosuppressive effect, the dose effect has attracted considerable research interest.
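As a quick orientation, the sketch below converts the per-kilogram doses quoted above into absolute cell numbers; the body weight and the chosen dose are illustrative assumptions, not values from the cited trials.

```python
# Hypothetical illustration: converting the reported per-kg MSC doses into
# absolute cell numbers. Dose range and ceiling come from the text above;
# the example body weight is an assumption.

TYPICAL_DOSE_RANGE = (1e6, 2e6)   # cells per kg, typical therapeutic window
MAX_SINGLE_DOSE = 1.2e7           # cells per kg, reported single-dose ceiling

def msc_dose(body_weight_kg: float, dose_per_kg: float) -> float:
    """Return the absolute number of MSCs for one infusion."""
    if dose_per_kg > MAX_SINGLE_DOSE:
        raise ValueError("dose exceeds the reported single-dose ceiling")
    return body_weight_kg * dose_per_kg

# Example: a 70 kg adult at the upper end of the typical range
print(f"{msc_dose(70, TYPICAL_DOSE_RANGE[1]):.1e} cells")  # 1.4e+08 cells
```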
Influence of immunosuppressant
In clinical practice, immunosuppressive agents are routinely used to prevent and alleviate GVHD. However, different types or doses of immunosuppressants may lead to completely different responses. Hajkova et al. indicated that the combination of MSCs and immunosuppressive agents not only promoted cell proliferation and Treg function, but also modulated the balance of distinct T-lymphocyte subsets [42]. Inoue et al. demonstrated the immunosuppressive effects of MSCs in vitro. Nevertheless, in a Lewis-to-ACI rat heart transplantation model, low-dose cyclosporine (CsA) was administered continuously on days 5-9 or days 0-3 after surgery, combined with injection of donor rat BM-MSCs through the portal vein system or the tail vein, respectively. Rather than prolonging graft survival, the therapy reversed the protective effect of CsA on the graft and shortened graft survival time. This is possibly due to disruption of the proinflammatory cytokine environment by CsA, leading to an increased anti-donor response, which in turn prevents MSC activation [77]. In contrast, Hajkova et al. suggested that the combination of MSCs and CsA contributed to an alteration of macrophage phenotype (from M1 to M2), which also elevated the secretion of IL-10 and in turn heightened the effect of MSC-mediated therapy [78]. Besides, several additional studies combined MSCs with mycophenolate mofetil (MMF), rapamycin and FK506, respectively, and observed remarkable MSC effects [79,80]. Hence, MSCs combined with an appropriate immunosuppressant can be far more effective, which will also affect the prognosis of patients.
Influence of temperature
McClain-Caldwell et al. demonstrated that heat shock factor 1 (HSF1) readily translocates into the MSC nucleus through the cyclooxygenase-2/prostaglandin E2 (COX2/PGE2) pathway, which potentially regulates the immunosuppressive function of MSCs at high temperatures [81]. Hyperthermia increases the efficacy of MSC-driven immunosuppression, yet the detailed mechanisms require further exploration. The regulation of temperature could be a promising research direction.
Clinical application of MSCs in GVHD
The first application of MSCs in GVHD was reported in 2004 and achieved a striking clinical response [82]. The clinical application of MSCs has been a research hotspot in GVHD treatment worldwide ever since.
Application in acute graft-versus-host disease (aGVHD)
A study in the United States reported MSC treatment of 75 children with grade B-D refractory aGVHD, in which the response rate reached 61.3% at 28 days after MSC infusion, significantly improving overall survival at 100 days after infusion [83]. In a meta-analysis of MSC treatment for refractory acute GVHD, the authors found that patients with skin-only involvement, including grade I-II aGVHD, showed better clinical efficacy, with complete remission (CR) achieved after all treatment courses; furthermore, children responded better than adults, whereas the treatment of severe intestinal and liver aGVHD was not ideal [84]. In Turkey, 33 pediatric patients with steroid-refractory acute graft-versus-host disease were selected for MSC treatment at a dose of 1.18 × 10⁶ MSCs/kg. Good complete response (CR) and 2-year overall survival (OS) rates were obtained after treatment. However, transplant-related mortality (TRM) in patients with PR/NR was 46.6% at 100 days after the first treatment, and some patients had adverse sequelae after completing all courses. Accordingly, although the therapeutic effect of MSCs has been affirmed, their safety for pediatric patients needs further research [85]. Researchers have also produced a biologic agent (JR031) from MSCs of healthy volunteers. According to phase I/II and subsequent phase II/III clinical studies, if enrolled steroid-refractory aGVHD patients are given intravenous injections of 2 × 10⁶ MSCs/kg once every 2 weeks for four consecutive weeks, the treatment can successfully alleviate clinical symptoms and prolong survival with no observable adverse reactions [86]. Also, Dotoli et al. reported that, following MSC treatment for steroid-refractory aGVHD, overall survival was significantly extended and only 4.3% of enrolled patients experienced side effects such as nausea/vomiting and blurred vision.
These findings support the effectiveness and safety of MSC treatment [87]. Galleu et al. reported that only aGVHD patients with cytotoxicity against MSCs achieved a better clinical response, while others showed no response to MSC treatment; to obtain satisfactory remission, patients can be stratified by their capacity to kill MSCs, or apoptotic MSCs can be infused directly [46]. In a single-center case series, three patients who underwent allogeneic hematopoietic cell transplantation and later developed steroid-refractory GVHD were treated with MSC infusions. Two patients achieved complete remission and one patient partial remission of skin and/or gastrointestinal aGVHD, which also confirmed that the application of MSCs in treating severe steroid-refractory aGVHD is feasible in clinical practice [88].
In a recent case report, a 15-year-old boy diagnosed with aGVHD was infused with 2 × 10⁶ hMSCs/kg eight times over 4 weeks, and MSC administration was continued once a week for the following 4 weeks. Laboratory data improved dramatically and gastrointestinal symptoms were eased [89].
Besides, considerable data indicate that aGVHD can be alleviated by up-regulation of CXCL5 together with an anti-CCL24 antibody [90]; the mechanism is illustrated in Fig. 3. In sharp contrast, patients infused with MSCs did not show remission compared with the control group in a large multicenter phase III clinical trial of refractory GVHD in the United States [91]. Furthermore, treatment with human MSCs (hMSCs) by subconjunctival injection is effective in reducing corneal inflammation and squamous metaplasia in ocular GVHD (oGVHD), which makes local treatment with hMSCs a promising strategy for oGVHD [92]. However, some studies held the view that, rather than inducing immune tolerance in aGVHD, MSC treatment breaks the vicious circle of the GVH reaction, given the poor long-term survival [93]. Given these inconsistencies among clinical trials, further research on MSCs is needed.
Application in chronic graft-versus-host disease (cGVHD)
Chronic graft-versus-host disease is an intractable complication after allo-HSCT. The incidence of cGVHD is approximately 28-60% in patients who survive more than 100 days after allo-HSCT. At present, glucocorticoids and calcineurin inhibitors remain the initial standard treatment for cGVHD, which is not satisfactory and carries significant side effects. Some researchers demonstrated that MSCs applied in cGVHD can significantly alleviate symptoms in distinct tissues such as the liver, skin and oral mucosa by increasing the population of CD5+ regulatory B cells (Bregs) and up-regulating IL-10 [35]. Jurado et al. investigated 14 cGVHD patients (7 moderate, 7 severe) who received infusions of adipose tissue-derived MSCs (AT-MSCs) together with first-line treatment combining cyclosporine and prednisone. Ten patients completed the 56-week trial and were able to stop steroids; 8 achieved complete remission and 2 partial remission. The clinical efficacy after the application of AT-MSCs was significantly superior to that of a historical control group treated only with cyclosporine or tacrolimus and prednisone [94]. Zhang et al. enrolled a steroid-refractory GVHD patient who developed nephrotic syndrome (NS) 10 months after allo-HSCT in 2017; through MSC therapy, the patient achieved complete remission (CR) owing to the down-regulation of B cell numbers and the up-regulation of regulatory B cells (Bregs) and Tregs [95]. The clinical characteristics of human chronic graft-versus-host disease (cGVHD), such as skin hyperkeratosis and pulmonary fibrosis, are similar to those of the murine sclerodermatous GVHD model. Lim et al. indicated that MSCs were correlated with the remission of cutaneous sclerodermatous GVHD, potentially by down-regulating the migration of immune cells and suppressing the secretion of chemokines [96]. Research on MSCs in cGVHD started relatively late and remains limited, requiring further investigation.

Fig. 3 The mechanism of the combination of CXCL5 and anti-CCL24. The potential mechanism lies in the synergy between CXCL5 and the anti-CCL24 antibody (2FC). In vivo, 2FC decreases not only transplanted Th1 and Th17 cells but also cytotoxic T lymphocytes and natural killer cells, and increases immunosuppressive neutrophils without affecting human hematopoietic stem cell reconstitution. Moreover, it attenuates the secretion of IFN-γ, IL-6, IL-17A, IL-8, macrophage inflammatory protein-1β and monocyte chemoattractant protein-1. CXCL5, chemokine (C-X-C motif) ligand 5; anti-CCL24, antibody against chemokine (C-C motif) ligand 24; Th, T helper cells; CTL, cytotoxic T lymphocyte; NK cells, natural killer cells; MIP-1β, macrophage inflammatory protein-1β; MCP-1, monocyte chemoattractant protein-1
Prediction of the application of MSCs
Recently, a number of studies have found that the therapeutic effect of MSCs can be predicted to a certain extent. Several datasets indicate that baseline lymphocyte populations, especially T and NK cells, are informative for predicting which patients will respond better. Further, patients with low pre-therapy levels of IL-6 and IL-22, two Th17-related cytokines, are likely to achieve complete or partial remission, whereas patients with high bilirubin levels before MSC treatment tend to respond worse [97]. In addition, clinicians should pay special attention to cell dose, patient age and the type of organ involvement [98].
Research progress of MSCs in other related fields
Due to the unique properties of mesenchymal stem cells, many innovative treatments have also focused on them.
3D printing technology, as an emerging discipline, is receiving increasing attention from the medical community. Ma et al. printed a 3D microscale hexagonal architecture using a hydrogel embedded with adipose-derived MSCs [99]. This could be an outstanding direction for further study.
Limitations of MSCs clinical application
Whether the immunosuppressive effects of MSCs are associated with increased tumor recurrence and infection has always been an unavoidable question in the clinical use of MSCs. The conclusions of existing clinical trials are still inconsistent [100].
Dotoli et al. reported that, of the 3 patients with steroid-resistant grade III-IV aGVHD whose disease completely resolved after MSC treatment, 1 died of tumor recurrence [87]. In the clinical trial by Jurado, in which MSCs were used as first-line treatment of cGVHD, no tumor recurrence or infection was fatal, but 2 patients had severe viral infections and 1 had a bacterial infection [94]. In a retrospective study, Blennow et al. found that the administration of MSCs was a risk factor for invasive fungal infections [101]. However, some studies also suggest that mortality rates from lung infection and tumor recurrence after MSC treatment are similar to those in cGVHD patients who have not received MSCs [35]. The correlation between the clinical application of MSCs and tumor recurrence and infection requires verification in more high-quality, large-sample clinical trials, because the available literature and patient sample sizes are too small.
Research on MSC-derived extracellular vesicles (EVs)
In recent years, the characteristics of extracellular vesicles (EVs) derived from MSCs have aroused great interest among researchers. Scholars believe that the immunosuppressive function of MSCs is related to paracrine effects mediated by EVs [102-104]. MSC-EVs consist of MSC-derived exosomes and microvesicles: exosomes, with a diameter of about 40-100 nm, are formed when the multivesicular body fuses with the cell membrane and releases its contents into the extracellular environment, whereas the larger microvesicles, with a diameter of about 50-1000 nm, form by direct budding and detachment of the cell membrane [105]. As Zhang et al. put forward, MSC exosomes induce CD4+ T cells through an APC-related pathway, elevating the population of CD4+CD25+ T cells or CD4+CD25+Foxp3+ Tregs; following Treg up-regulation, the immunosuppressive effects of MSC exosomes are heightened [106]. According to recent research, tissue repair capabilities similar to those of MSCs suggest that EVs could be a promising non-cellular approach to tackle GVHD instead of MSC infusion [107]. Fujii et al. reported that MSC-EVs could prolong the survival of aGVHD mice and reduce the pathological impairment of target organs, accompanied by a decrease in CD4+ and CD8+ lymphocyte levels. The proportion of CD62L−CD44+/CD62L+CD44− T lymphocytes was down-regulated at the same time, implying that MSC-EVs exert a therapeutic effect by inhibiting the differentiation of T lymphocytes from a naive to a functional state [108]. Kordelas et al. reported a refractory GVHD patient who received MSC-EV therapy: clinical symptoms including diarrhea were significantly alleviated and steroid consumption decreased, as did GVHD symptoms in the skin and oral mucosa. Both in vitro and in vivo experiments revealed an MSC-EV-mediated reduction of IL-1β, TNF-α and IFN-γ released from peripheral blood mononuclear cells, as well as of TNF-α and IFN-γ released from NK cells [109]. Further investigations in osteoarthritis patients suggested that EVs transfer mRNAs, lipids, siRNA, proteins, miRNAs and ribosomal RNAs to adjacent or remote cells, apparently as primary mediators of intercellular communication, making EVs a promising instrument in numerous therapies [110]. Having no self-renewal capability, MSC-EVs are small in size, can be obtained on a large scale from immortalized cell lines, and avoid the side effects associated with MSCs. Hence, high expectations are held for MSC-EVs as a novel non-cellular approach to controlling GVHD.
Conclusion
In recent years, MSCs employed in the prevention and treatment of GVHD after allo-HSCT have generated a wealth of basic and clinical research. MSCs can achieve immunosuppressive function through cell-to-cell contact and the release of immunomodulatory factors. According to existing research, it is well established that soluble factors, oxygen concentration, distinct Toll-like receptor (TLR) ligands, the injection dose of MSCs, the choice of immunosuppressant and temperature control all contribute to the therapeutic effect of MSCs, yet further research is required to elucidate the specific mechanisms. Most studies revealed that MSC therapy benefits acute and chronic GVHD, although this remains to be verified given the lack of large-scale randomized controlled trials. MSC-EVs, as a non-cellular therapy, can avoid some MSC-related side effects; in light of this, researchers call for additional basic and clinical trials on their specific efficacy and mechanism in the prevention and treatment of GVHD. In general, MSCs are very promising in the prevention and treatment of GVHD and deserve further attention and research.
Neuropilin 1 Involvement in Choroidal and Retinal Neovascularisation
Purpose Inhibiting VEGF is the gold standard treatment for neovascular age-related macular degeneration (AMD). It is also effective in preventing retinal oedema and neovascularisation (NV) in diabetic retinopathy (DR) and retinal vein occlusions (RVO). Neuropilin 1 (Nrp1) is a co-receptor for VEGF and many other growth factors, and therefore a possible alternative drug target in intraocular neovascular disease. Here we assessed choroidal and retinal NV in an inducible, endothelial-specific knockout model for Nrp1. Methods Crossing Nrp1 floxed mice with Pdgfb-CreERT2 mice produced tamoxifen-inducible, endothelial-specific Nrp1 knockout mice (Nrp1ΔEC) and Cre-negative control littermates. Cre-recombinase activity was confirmed in the Ai3(RCL-EYFP) reporter strain. Animals were subjected to laser-induced CNV (532 nm) and spectral domain-optical coherence tomography (SD-OCT) was performed immediately after laser and at day 7. Fluorescein angiography (FA) evaluated leakage, and post-mortem lectin staining in flat-mounted RPE/choroid complexes was also used to measure CNV. Furthermore, retinal neovascularisation in the oxygen-induced retinopathy (OIR) model was assessed by immunohistochemistry in retinal flatmounts. Results In vivo FA, OCT and post-mortem lectin staining showed a statistically significant reduction in leakage (p<0.05), CNV volume (p<0.05) and CNV area (p<0.05) in the Nrp1ΔEC mice compared to their Cre-negative littermates. The OIR model also showed reduced retinal NV in the mutant animals compared to wild types (p<0.001). Conclusion We have demonstrated reduced choroidal and retinal NV in animals that lack endothelial Nrp1, confirming a role of Nrp1 in those processes. Therefore, Nrp1 may be a promising drug target for neovascular diseases in the eye.
Introduction

A final common complication of multiple retinal diseases, such as age-related macular degeneration (AMD), diabetic retinopathy (DR) and retinal vein occlusions (RVO), is the growth of abnormal neovascular blood vessels (neovascularisation) with increased permeability that produce fluid leakage in the macula [1,2], which can lead to vision loss. Neovascularisation may originate from choroidal blood vessels (CNV) invading Bruch's membrane and the subretinal space, or from the retinal vasculature. In both cases the abnormal angiogenesis is often driven by excessive production of vascular endothelial growth factor (VEGF), and inhibiting VEGF is the current gold standard in the treatment of neovascular AMD [3]. Moreover, intravitreal administration of VEGF blockers (ranibizumab, aflibercept and off-label bevacizumab) is also effective in preventing retinal oedema and neovascularisation in DR and RVO [4,5]. However, there are limitations. Although VEGF blockade can halt pathological angiogenesis, reduce vessel leakage and cause regression of existing vessels [6], it does not address the causes that drive the disease and requires monthly/bimonthly intravitreal injections. Furthermore, these agents are only beneficial in subsets of patients [7]. There is also the concern that VEGF is a neuronal survival factor and that sustained blocking of VEGF may have undesirable side effects [1,2]. It is therefore important to explore alternative molecular targets for the development of therapies blocking retinal angiogenesis.
The general importance of NRP1 in angiogenesis has previously been demonstrated in numerous studies. For instance, Nrp1 null mutations in mice cause serious vascular abnormalities [18,19] and deficient endothelial tip cell function in sprouting angiogenesis [20,21] during embryogenesis. Several studies on cultured endothelial cells showed an involvement of Nrp1 in VEGF-VEGFR2 signalling [22][23][24]. However, mice harbouring a non-VEGF binding mutant form of Nrp1 develop normally [25], suggesting that the essential function of Nrp1 in sprouting angiogenesis depends on ligands other than VEGF. In fact, two recent studies showed that Nrp1 critically regulates TGFB/BMP signalling in central nervous system (CNS) vascular development [26,27].
Since sprouting angiogenesis not only contributes to vessel growth during embryogenesis but also to neovascularisation in adult eye pathologies, we assessed the role of Nrp1 in choroidal and retinal neovascularisation in endothelium-specific, inducible Nrp1 knockout (Nrp1ΔEC) mice. CNV in mice was triggered by laser lesions, which is a widely accepted and reproducible model that mimics many features of CNV occurring in the wet form of AMD.
Although the acute laser injury does not mimic the chronic disease condition in humans, the model is useful for the investigation of the cellular and molecular mechanisms that drive CNV. We also used the oxygen induced retinopathy (OIR) model [28,29] to study effects on neovascularisation in the retinal vasculature.
Animals, Materials and Methods

Animals
All animals were handled in accordance with the United Kingdom (UK) Animals (Scientific Procedures) Act 1986 and all experiments were covered by a project license approved by the UK Home Office (PPL7157) and the University College London (UCL) Institute of Ophthalmology (IOO) Ethics Sub-Committee. Due to the lethality of Nrp1 knockout animals, it was necessary to use an inducible Cre-lox approach. We therefore crossed a tamoxifen-inducible, endothelial cell-specific Cre strain (Pdgfb-CreER mice) [30] with Nrp1 floxed mice [18], creating endothelial-specific Nrp1 knockout mice (Nrp1ΔEC). Cre-negative littermates were used as controls. Cre recombinase was induced in adult animals (6-8 weeks old) by injecting 200 μl of tamoxifen (15 mg/ml in soybean oil; Sigma-Aldrich) i.p. daily for 4 days before laser application. To assess Cre recombinase activity, the reporter strain Ai3(RCL-EYFP) was used (conditionally expressing EYFP from a CAG promoter in the ROSA locus).
Laser CNV model
Mice were anesthetized with an i.p. mixture of ketamine (Ketaset, Lyon, France; 75 mg/kg) and xylazine (Domitor 2%, 10 mg/kg; Bayer Animal Health, Leverkusen, Germany). Pupils were dilated with a mixture of tropicamide 0.5% (Bausch and Lomb) and phenylephrine 1% (Bausch and Lomb), and a diode laser (wavelength 532 nm, Micron III, Phoenix Research, USA) with a power of 120 mW, an exposure time of 0.1 seconds and a spot size of 50 μm was used to induce 3-4 CNV lesions in the retina. Laser lesions were applied approximately two optic disc diameters from the optic nerve, while avoiding major blood vessels. Vaporisation bubble formation confirmed the rupture of Bruch's membrane [31], and those spots that did not result in the formation of a bubble were excluded. Laser photocoagulation sites that developed CNV were analysed 1 week after laser application.
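For orientation, the per-lesion energy and fluence implied by these settings can be checked with a few lines of arithmetic; the only assumption in the sketch below is that the 50 μm spot size refers to the beam diameter.

```python
import math

# Back-of-the-envelope check of the photocoagulation parameters quoted above.
# Assumption: the 50 um "spot size" is the beam diameter.
power_w = 120e-3        # 120 mW
exposure_s = 0.1        # 0.1 s
diameter_cm = 50e-4     # 50 um expressed in cm

energy_j = power_w * exposure_s                 # 12 mJ delivered per lesion
area_cm2 = math.pi * (diameter_cm / 2) ** 2     # illuminated area
fluence_j_cm2 = energy_j / area_cm2             # ~611 J/cm^2

print(f"energy per spot: {energy_j * 1e3:.1f} mJ, "
      f"fluence: {fluence_j_cm2:.0f} J/cm^2")
```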
In vivo imaging
OCT horizontal scans (400x400x1024 voxels), centred on the optic nerve head, were obtained from both eyes of the mice using an R2200 UHR SD-OCT scanner (Bioptigen Inc., Morrisville, NC, USA) on day 0 (0.5-1 hour after laser application) to confirm rupture of Bruch's membrane and on day 7 to measure the volume of CNV lesions. Images were captured by In Vivo Vue software (Bioptigen Inc.) and the obtained scans were converted to avi-files in order to measure the volume with Fiji Software V1.48q (a distribution of ImageJ). Measurements were made by two independent investigators.
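A minimal sketch of this volume estimate is given below: the outlined CNV area in each b-scan is summed and weighted by the spacing between consecutive b-scans. The area values and spacing are placeholders, not measured data.

```python
import numpy as np

def cnv_volume(areas_um2: np.ndarray, bscan_spacing_um: float) -> float:
    """Approximate lesion volume (um^3) by summing per-b-scan cross-sectional
    areas and multiplying by the spacing between consecutive b-scans."""
    return float(np.sum(areas_um2) * bscan_spacing_um)

# Illustrative per-b-scan CNV areas (um^2) traced across one lesion:
areas = np.array([0.0, 1.2e4, 3.5e4, 4.1e4, 2.8e4, 0.9e4, 0.0])
print(cnv_volume(areas, bscan_spacing_um=3.5))  # volume in um^3
```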
A scanning laser ophthalmoscope (Spectralis™ HRA, Heidelberg Engineering, Heidelberg, Germany) with a 55° field-of-view lens was used to assess EYFP expression in Cre reporter mice. One week after laser application, animals were subjected to fluorescein angiographic evaluation. Fluorescein angiograms were obtained using a retinal imaging microscope (Micron III, Phoenix Research, USA). Briefly, after pupil dilation, anaesthetised animals were injected i.p. with 100 μl of 10% sodium fluorescein, and images were obtained after 1-2 min (early leakage) and 6-7 min (late leakage). The area of leakage was measured by two trained independent investigators using Fiji software V1.48q (a distribution of ImageJ).
Post-mortem histology
After sacrificing the animals, eyes were enucleated and immediately fixed in 4% paraformaldehyde for 10-15 minutes. Retinas and scleras (with choroid and RPE attached) were dissected separately and flattened by making radial cuts. In order to assess recombination efficiency in the reporter strain, YFP fluorescence was imaged by confocal microscope (LSM710, Carl Zeiss, Meditec, Dublin, CA) directly in wholemounts of RPE-choroid complexes and in cryosections. Whole mounts and cryosections were also subjected to immunohistochemistry to visualise vessels and inflammatory cells using biotinylated isolectin (1:
Statistical analysis
Only laser lesions that developed CNV were included in the statistical analysis. Exclusion criteria were the absence of a visible vaporisation bubble at the time of laser application, no evidence of BM rupture (as observed by OCT immediately after laser application) and the presence of haemorrhage (larger than the laser spot). A non-parametric Kruskal-Wallis test was performed to assess differences between control and Nrp1ΔEC animals, using SPSS statistics software (SPSS Inc., Chicago, IL). N-numbers refer to the number of mice throughout the manuscript.
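The comparison described above can be reproduced with standard tools; the sketch below uses scipy.stats.kruskal on made-up lesion volumes purely to illustrate the test, not to restate the study's data.

```python
from scipy.stats import kruskal

# Illustrative (invented) CNV lesion volumes, arbitrary units, per group:
control = [4.2, 3.8, 5.1, 4.7, 4.4, 3.9]
nrp1_ko = [2.1, 2.8, 3.0, 2.4, 2.6, 1.9]

# Non-parametric Kruskal-Wallis H-test between the two groups
stat, p = kruskal(control, nrp1_ko)
print(f"H = {stat:.2f}, p = {p:.3f}")
```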
Confirmation of Cre recombinase activity
In order to confirm that the Pdgfb-CreER strain can drive Cre recombination in the adult choroidal vasculature, the Ai3(RCL-EYFP) reporter strain was used. One week after laser injury, in vivo laser scanning ophthalmoscopy revealed pronounced fluorescence in most vessels and in laser lesions in animals with the Cre allele but not in Cre negative controls (Fig 1A and 1B). Enhanced yellow fluorescent protein (EYFP) fluorescence was confirmed by confocal microscopy on post-mortem choroid whole mount preparations (Fig 1C and 1D) and on cryosections through laser lesions (Fig 1E and 1F). This demonstrated that the Pdgfb-CreER strain effectively causes recombination of floxed alleles in the adult choroidal vasculature. No differences in vessel growth or health were observed between CreER positive or negative animals (neither during development nor in the adult) in the absence of tamoxifen induction, suggesting the CreER transgene did not have any off-target effects.
The Pdgfb-CreER strain was crossed with a Nrp1-floxed strain to produce offspring with or without the CreER transgene in a homozygous Nrp1-floxed background. Upon tamoxifen treatment, Nrp1 protein was depleted in choroidal endothelial cells (arrows in Fig 1) in CreER-positive (Nrp1ΔEC) but not in CreER-negative (Wt) animals (Fig 1G-1J). CD11b-positive microglial cells (arrowheads in Fig 1) maintained Nrp1 expression, irrespective of genotype (Fig 1K-1N), confirming the endothelial specificity of the Nrp1 depletion.
CNV in vivo volume is reduced in Nrp1ΔEC animals
To assess the effects of vascular Nrp1 depletion on neovascularisation in the retina, we utilised the laser-induced CNV model. OCT was used to measure CNV lesions in vivo. Firstly, rupture of Bruch's membrane was confirmed immediately after the laser lesions were applied (Fig 2A and 2B). Then, 7 days later the laser lesions were re-examined using volume scans (Fig 2C and 2D). The volumes of CNVs within the lesions were measured by assessing the area of the CNVs (red stippled outline in Fig 2E and 2F) in each b-scan (Fig 2G). This showed a statistically significant (p = 0.018) reduction in CNV volume in transgenic animals compared to wild type mice (Fig 2H).
Reduced leakage and CNV areas in Nrp1ΔEC animals
The effects of endothelial Nrp1 deletion on CNV growth were also evaluated by fluorescein angiography (FA, Fig 3A and 3B) and post-mortem histology (Fig 3D and 3E). Measuring the hyperfluorescent areas in FA images showed a reduction in the Nrp1ΔEC animals (p = 0.026, Fig 3C). Similarly, CNV areas measured in post-mortem histology (using IB4 lectin staining) also revealed a statistically significant (p = 0.026) reduction in lesion size in Nrp1ΔEC mice versus Cre-negative control mice (Fig 3F), confirming the OCT volume measurements by two independent assessment methods.
Reduced neovascularisation in the OIR model in Nrp1ΔEC mice
In order to gain further insight into the role of Nrp1 in vascular pathologies in the retina, we also assessed the developing retinal vasculature. First, CreER efficiency was demonstrated in the retinal vasculature of Nrp1ΔEC mice at the perinatal stage by immunohistochemistry against Nrp1 on retinal wholemounts from postnatal day (P)5 pups that had been injected with 4-hydroxytamoxifen (OHT) at P3 (Fig 4A-4D). This clearly confirmed endothelial cell-specific Nrp1 depletion, whereas Nrp1 in microglial cells ahead of the developing plexus (arrowheads in Fig 4) was not affected. Furthermore, the growing edge of the developing retinal vasculature displayed fewer and thicker sprouting tips in Nrp1ΔEC mice (Fig 4A-4C), as previously described [26].
We also assessed neovascular growth in the OIR model [28,29]. In this model, perinatal mice are exposed to high oxygen levels, which depletes capillaries in the developing retinal vasculature, resulting in abnormal neovascular tuft formation upon return to room air. OHT was injected at P11 and neovascularisation was quantified at P17; it was significantly reduced in Nrp1ΔEC animals compared to wild-type littermates (Fig 4E-4I). Interestingly, the remaining avascular area was larger in the mutant animals (Fig 4E-4H and 4J), suggesting that normal, regenerative vascular growth was also affected by Nrp1 deletion.
Discussion
In this study we have demonstrated that Nrp1-mediated signalling is involved in neovascularisation during CNV development as well as during OIR, and could be a drug target in ocular neovascular diseases. CNV volume and fluorescein leakage were both reduced in the laser-induced CNV model in the absence of endothelial Nrp1, but the precise molecular mechanism of this effect is not yet understood. On the one hand, VEGF is a well-established ligand for Nrp1 and is known to drive angiogenesis and vascular permeability during intraocular vascular disease [33,34]. On the other hand, Nrp1 is involved in aspects of TGFB superfamily signalling during sprouting angiogenesis [26,27]. More specifically, Nrp1 has been shown to be essential in endothelial tip cells in developing brain blood vessels [20]. This may be based on its role in suppressing the stalk-cell phenotype in endothelial tip cells via the modulation of Alk1- and Alk5-mediated TGFB superfamily signalling [26]. In a pathological context, we have previously demonstrated that inhibition of TGFB can reduce CNV lesion size in the laser-induced CNV model [35-37]. This confirms that blocking not only VEGF but also other targets can be effective in preventing CNV. Therefore, targeting Nrp1 as a treatment strategy in ocular neovascular diseases might overcome anti-VEGF resistance.
In this report we used three different readout methods for the laser-induced CNV model: in vivo OCT, FA and post-mortem histology. Although each of the three methods assesses slightly different aspects of CNV, all of them showed a clear effect caused by Nrp1 depletion. Post-mortem histological analysis has the advantage that vascular elements and invading inflammatory cells can be identified separately, and volume measurements of the CNV lesions can be obtained with high precision using confocal microscopy [38]. However, dissection and fixation artefacts might introduce noise into those volume measurements. In contrast, evaluating CNV lesions with OCT measures the in vivo volume of CNV lesions and might be less affected by histological processing artefacts. Many authors have assessed CNVs in vivo based on FA images, but this approach does not take into account the height of the lesion and is probably, of the three methods discussed here, the least reliable for evaluating CNV area. On the other hand, FA offers complementary data regarding vascular leakage, which the other two methods cannot assess. In summary, an optimal assessment of CNV lesions is ideally based on all three methods in conjunction, with OCT providing exact in vivo CNV volume measurements, histology giving high-resolution structural insight and FA allowing a functional readout of vessel permeability.
Multi-fractal modeling of curcumin release mechanism from polymeric nanomicelles
Abstract The physicochemical properties of "smart" or stimuli-sensitive amphiphilic copolymers can be modeled as a function of their environment. In particular, pH-sensitive copolymers have practical applications in the biomedical field as drug delivery systems. Interactions between the structural units of any polymer-drug system imply mutual constraints at various scale resolutions, and nonlinearity is accepted as one of the most fundamental properties. The release kinetics, as a function of pH, of a model active principle, i.e., Curcumin, from nanomicelles obtained from amphiphilic pH-sensitive poly(2-vinylpyridine)-b-poly(ethylene oxide) (P2VP-b-PEO) tailor-made diblock copolymers was first studied by using the Ritger-Peppas equation. The value of the exponential coefficient, n, is around 0.5, generally suggesting a diffusion process, slightly disturbed in some cases. Moreover, the evaluation of the polymer-drug system's nonstationary dynamics was carried out through harmonic mapping from the usual space to the hyperbolic one. The kinetic model we developed, based on fractal theory, fits very well with the experimental data obtained for the release of Curcumin from the amphiphilic copolymer micelles in which it was encapsulated. This model is a variant of the classical kinetic models based on the formal kinetics of the process.
Introduction
Amphiphilic copolymers of suitable composition can self-assemble in an aqueous or organic medium in the form of objects, typically micelles, with sizes in the nanometric range. These micelles consist of a hydrophobic core and a hydrophilic corona (Riess, 2003; Atanase et al., 2014). Moreover, in the case of drug-loaded micelles, the corona stabilizes the micelle and improves its biocompatibility, while ensuring the protection of the loaded active principle. Due to the stealth provided by the corona, the micelles are not destroyed, being invisible to the immune system, and thus the circulation time in the blood is prolonged (Iurciuc-Tincu et al., 2020).
A new class of materials is attracting a lot of interest, the so-called "smart" or stimuli-sensitive polymers whose physicochemical properties can be adapted in response to a change in their environment. This change, which may be of a physical or chemical nature, is most often reversible; it results in a rapid modification of the polymer microstructure (shape, surface characteristics, solubility) by the action of a stimulus (Adibfar et al., 2020).
In particular, pH- and thermo-sensitive copolymers, for which the external conditions are relatively easy to modify, can be used to create stimuli-sensitive micellar systems (Atanase & Riess, 2013; Winninger et al., 2019). The variation of pH affects ionic interactions, hydrogen bonds and hydrophobic interactions and therefore modifies the hydrophilic/hydrophobic balance, which affects the solubility of the polymer in an aqueous medium (the polymer chains contract or extend). In a polymer containing ionizable groups, a change in the environmental pH causes a change in the degree of ionization around a specific pH value, the pKa of those groups.
These pH-sensitive systems are attracting a wide interest, especially in the biomedical field due to the different pH values encountered in blood, organs, cells. Thus, the variation of pH can be used to generate disaggregation of these nanocarriers into unimers or even to modify their morphology and subsequently to control the release of the loaded active principle (Rao et al., 2018;Zhuo et al., 2020;Ofridam et al., 2021). However, if the glass temperature of the hydrophobic sequence is too high, the core of the systems is glassy, thus freezing the structure and leading to the so-called "frozen-in" micelles, which can be assimilated to solid particles.
The intense research of the last four decades has allowed the development of extremely diverse polymer-drug systems, classified according to several criteria. One of them takes into account the release mechanism and kinetics of the active principle contained. We can thus distinguish three important categories, each of them being divided into other subcategories: (i) diffusion-controlled systems; (ii) erosion-controlled systems; (iii) osmosis-controlled systems. To these three main types, polymer systems are added to which the release of the active ingredient is controlled by ion exchange, or polymer-drug conjugates to which the release of the active ingredient is determined by the kinetics of the hydrolysis of chemical bonds between it and the macromolecular support (Shaik et al., 2012;Vilos & Velasquez, 2012).
Interactions taking place between the structural units of any polymer-drug system imply mutual constraints at different scale resolutions, nonlinearity being one of the most fundamental properties of any complex system dynamics. The universality of the dynamics laws for any polymer-drug system becomes natural. Some of the usual theoretical models describing these systems dynamics are based on the hypothesis that the variables characterizing the polymer-drug system dynamics are differentiable, which can be otherwise unjustified. From such a perspective, validations of these models must be seen as sequential and applicable on restricted domains, for which integrability and differentiability are respected.
Formal kinetic models have been developed for each of these types of systems. Widely used in this context is the Ritger-Peppas kinetic model (Ritger & Peppas, 1987), based essentially on Fick's law:

M_t/M_0 = k·t^n

where M_t is the amount of drug released at time t of the process, M_0 is the total amount of drug encapsulated in the system, k is the rate constant of the release process and n is an exponential factor with a value between 0 and 1. Depending on the value of n, predictions about the release mechanism are possible.
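For illustration, the sketch below fits this power law to synthetic release data with scipy; the data points are invented and, as is customary for this model, only the early portion of the release curve (roughly M_t/M_0 ≤ 0.6) should be fitted.

```python
import numpy as np
from scipy.optimize import curve_fit

# Illustrative (invented) release data: time in hours, cumulative fraction
# released M_t/M_0, restricted to the early portion of the curve.
t = np.array([0.5, 1, 2, 4, 8, 12, 24])
frac = np.array([0.08, 0.12, 0.18, 0.26, 0.37, 0.45, 0.62])

def ritger_peppas(t, k, n):
    """Ritger-Peppas power law: M_t/M_0 = k * t**n."""
    return k * t**n

(k, n), _ = curve_fit(ritger_peppas, t, frac, p0=(0.1, 0.5))
print(f"k = {k:.3f}, n = {n:.2f}")  # n close to 0.5 suggests Fickian diffusion
```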
Since nonlinearity implies that, in the description of polymer-drug system dynamics, non-differentiable behaviors are predominant, it is necessary to explicitly introduce the scale resolution in the equations defining the variables governing these dynamics. This leads to the fact that any variables used in the description of any complex system have a dual dependence, both on the space-time coordinates and on the scale resolution. In this new perspective, instead of using variables defined by non-differentiable functions, approximations of these polymer-drug system functions will be used at various scale resolutions. Therefore, all variables used to define the afore-mentioned dynamics will work as a limit of families of functions, being non-differentiable at a null scale resolution and differentiable for non-null scale resolution. Thus, suitable geometrical structures and a class of models for which the motion laws are integrated with scale laws must be developed. These geometrical structures are constructed on the notion of multifractality, the equivalent theoretical models being based on the Scale Relativity Theory (SRT), either with fractal dimension D_F = 2 or in an arbitrary and constant fractal dimension (the Multifractal Theory of Motion). In the case of such (non-differentiable) models, the dynamics of the polymer-drug system's structural units can be described by continuous but non-differentiable movement curves (multifractal motion curves). The obtained curves exhibit self-similarity as their main property in any of their points, which translates into holographic-type behaviors (every part reflects the global system). Such a complex approach suggests that only a holographic implementation can provide a complete description of the polymer-drug system dynamics (Mazilu et al., 2021; Agop & Merches, 2019).
In the framework of SRT (Nottale, 2011) and The Multifractal Theory of Motion (Mercheș & Agop, 2016;Agop & Paun, 2017), if we assimilate any polymer-drug system with a fractal-type mathematical object, various non-linear behaviors through a fractal hydrodynamic-type description as well as through a fractal Schrödinger-type description can be established (Mercheș & Agop, 2016;Agop & Paun, 2017). Thus, the fractal hydrodynamic-type description implies holographic implementations for dynamics through velocity fields at non-differentiable scale resolution, via fractal soliton, fractal soliton-kink and fractal minimal vortex. In this context, various operational procedures can become functional. Several of these procedures are of particular notice: the fractal cubics with fractal SL(2R) group invariance through in-phase coherence of the structural units dynamics of any polymer-drug system; the fractal SL(2R) groups through dynamics synchronization along the polymer-drug systems structural units; the fractal Riemann manifolds induced by fractal cubics and embedded with a Poincaré metric through apolar transport of cubics (parallel transport of direction, in a Levi Civita sense); the harmonic mapping from the usual space to the hyperbolic one. These procedures become operational so that one can obtain several possible scenarios toward chaos (fractal periodic doubling scenario), but without fully transitioning into chaos (non-manifest chaos).
Our research team has already studied the micellization and the preparation of a curcumin-loaded micellar system based on "frozen-in" poly(2-vinylpyridine)-b-poly(ethylene oxide) (P2VP-b-PEO) pH-sensitive diblock copolymers (Iurciuc-Tincu et al., 2020). From these studies, it appeared that the micellar disaggregation occurs at a pH value of 4.5, which is the pKa of the P2VP sequence. Also, at a pH value smaller than the pKa, the curcumin release is more rapid due to the disintegration of the micelles than at pH 7.4. However, the literature is poor in investigating the drug release kinetics of such structures.
In this work, we analyzed, from a multifractal perspective, the nonlinear dynamics of complex systems, generalizing the results from (Ailincai et al., 2021;Iftime et al., 2020). In such context, by exploring a hidden symmetry in the form of synchronization groups of polymer-drug system entities, we were led to the generation of a Riemann manifold with hyperbolic type metric via parallel direction of transport. Then, the polymer-drug systems nonstationary dynamics were highlighted through harmonic mapping from the usual space to the hyperbolic one.
Synthesis procedure of copolymer samples
Poly(2-vinylpyridine)-b-poly(ethylene oxide), P2VP-b-PEO, diblock copolymers were synthesized by living anionic polymerization in THF in the presence of phenylisopropyl potassium as initiator (Atanase & Riess, 2013). For decreasing the reactivity of the initiator and stopping the transfer reactions, a unit of 1,1-diphenylethylene is recommended. First the 2-vinylpyridine is polymerized at −75 °C for 2.5 h. The ethylene oxide is added and the temperature is increased to 20 °C. The copolymer was recovered by precipitation in heptane, followed by drying in vacuum.
The molecular characteristics of the studied copolymers are presented in Table 1.
Preparation and characterization of drug-loaded micelles
The drug-loaded micelles were prepared by dialysis method using a common solvent. In a typical experiment, 250 mg of each type of P2VP-b-PEO block copolymer sample and 25 mg of Curcumin were dissolved in 50 ml of dimethylsulfoxide solution (DMSO) at room temperature, under stirring and in dark. After complete dissolution of the powder, the solution was dialyzed, in the dark, for 24 h against ultrapure water using dialysis cellulose membranes (molecular weight: 12 kDa, manufacturer, Sigma). After this period, the micellar solution from the dialysis membrane was collected, frozen and then lyophilized to obtain a dry powder which was stored at −4 °C prior to use.
To determine the loading efficiency of Curcumin, a calibration curve was plotted in DMSO using different Curcumin concentrations, and their absorbance was recorded on a Nanodrop spectrophotometer at a wavelength of 435 nm. The calibration curve equation was y = 0.0178x, R² = 0.9992, as illustrated in Figure 1.
The amount of Curcumin in 10 mg of micelles was extracted into DMSO using the following protocol: a known quantity of drug-loaded micelles was dispersed in 5 ml DMSO and then added to dialysis membranes. The samples thus prepared were added to a known volume of DMSO (20 ml) and kept under stirring in a water bath in Erlenmeyer flasks at 37 °C in the dark. The absorbance was read after 24 h, when its value remained constant, Curcumin being completely extracted into DMSO from the micelles. Based on the calibration curve, both drug loading (DLE) and drug encapsulation (DEE) efficiencies were determined using the following equations:

DLE (%) = (amount of drug in micelles / amount of added polymer) × 100 (1)

DEE (%) = (amount of drug in micelles / amount of added drug) × 100 (2)
Evaluation of drug release kinetics
The kinetics of Curcumin release from micelles was studied in three different pH environments: in phosphate buffer solution (PBS) 0.1 M at pH = 7.4 (specific to blood and colon fluids), in PBS 0.1 M at pH = 6.8, which mimics the intestinal fluid, and in a solution at pH = 2 (prepared from 10 mM NaCl and 0.1 N HCl), which is specific to the pH of the gastric environment.
In a typical experiment, 10 mg of the drug-loaded micelles were dispersed in 5 ml of buffer solution of a given pH and then added to a dialysis membrane. The suspension thus prepared was placed in an Erlenmeyer flask and immersed in 20 ml PBS at the given pH value, under stirring, at 37 °C in the dark. At given times, samples were taken in order to determine spectrophotometrically the absorbance at the Curcumin-specific wavelength of 425 nm. As Curcumin is a hydrophobic substance and only very slightly soluble in aqueous media, 1% (w/w) Tween 20 was added to the release medium.
To avoid the degrading action of light on Curcumin, all experiments were conducted in the dark (dialysis, kinetic study of the release process). The vials in which the experiments were performed were covered with an aluminum foil throughout.
The main assumption of this theory is that any polymer-drug system, as a complex system, can be assimilated with a fractal/multifractal mathematical object (Mandelbrot, 1982; Jackson, 1993; Cristescu, 2008). The functionality of such a hypothesis implies, based on the Scale Relativity Theory, employing continuous and non-differentiable curves (fractal/multifractal curves) in the description of polymer-drug dynamics. Then, two scenarios for describing polymer-drug dynamics become compatible:

i. release dynamics in the Schrödinger multifractal scenario (multifractal Schrödinger equation) (Peptu et al., 2021):

λ²(dt)^(4/f(α)−2) ∂^l∂_l Ψ + iλ(dt)^(2/f(α)−1) ∂_t Ψ = 0 (3)

ii. release dynamics in the Madelung multifractal scenario (multifractal hydrodynamic equations) (Peptu et al., 2021):

∂_t V^i + V^l ∂_l V^i = −∂^i Q (4)

∂_t ρ + ∂_l(ρV^l) = 0 (5)

where Q denotes the multifractal specific potential:

Q = −2λ²(dt)^(4/f(α)−2) (∂^l∂_l √ρ)/√ρ (6)

Equation (4) corresponds to the multifractal momentum conservation law, while equation (5) corresponds to the multifractal states density conservation law.
In relations (3) -(6), (10) and the quantities from (3) -(10) have the following meanings: • x l is the multifractal spatial coordinate; • t is the non-multifractal time with the role of an affine parameter of the motion curves; • dt is the scale resolution; • Ψ is the state function of amplitude ρ and phase s; is the singularity spectrum of order α ; • α is the singularity index and it is a function of fractal dimension D f ; • λ is a coefficient associated to the multifractal/ non-multifractal scale transition In fact, the two scenarios describing the dynamics of drug release are not mutually excluding, but on the contrary, they are complementary.
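The complementarity of the two scenarios can be made explicit through the Madelung-type substitution standard in this formalism; a minimal sketch in LaTeX, where the exponent conventions follow the reconstructed equations (3)-(6) above:

```latex
% Madelung-type substitution connecting the two multifractal scenarios
% (a sketch; exponent conventions follow Eqs. (3)-(6) as given above).
\begin{align}
\Psi = \sqrt{\rho}\, e^{i s}, \qquad
V_D^{l} = 2\lambda\, (dt)^{[2/f(\alpha)]-1}\, \partial^{l} s .
\end{align}
% Inserting \Psi into the multifractal Schrodinger equation (3) and
% separating the imaginary part yields the states-density conservation
% law (5); the real part, after taking one gradient, yields the momentum
% conservation law (4) with the specific potential Q of Eq. (6).
```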
Drug release mechanisms through synchronization modes
Since the multifractal specific potential Q can be put into relation with the multifractal tensor in the form

τ̂^{il} = λ²(dt)^([4/f(α)]−2) ρ ∂^i∂^l ln ρ,   with   ρ ∂^i Q = −∂_l τ̂^{il}   (11)

it is natural to admit that tensor (11) becomes fundamental in drug release processes. Then, its characteristic equation is given by the cubic

a₀X³ + 3a₁X² + 3a₂X + a₃ = 0   (13)

whose roots X_l induce the simply transitive group in the variables h, h̄ and k, with actions

h → (ah + b)/(ch + d),   h̄ → (ah̄ + b)/(ch̄ + d),   k → k (ch̄ + d)/(ch + d)   (16)

The structure of this group is of SL(2,R) type, with the infinitesimal generators

B₁ = ∂_h + ∂_h̄,   B₂ = h∂_h + h̄∂_h̄,   B₃ = h²∂_h + h̄²∂_h̄ + (h − h̄)k∂_k

This group admits a set of invariant differential 1-forms, out of which a metric (23) of hyperbolic type can be built. The variable θ can then be defined as the "angle of parallelism" of the hyperbolic planes (the connection). In such a conjecture, if the cubic (13) is assumed to have distinct roots, the condition (24) is satisfied if and only if the differential 1-form Ω₁ is null.

Therefore, for the metric (23) with restriction (24), the relation becomes

ds² = −4 (dh dh̄)/(h − h̄)²   (25)

i.e., the metric of the hyperbolic plane. The parallel transport of the hyperbolic plane actually represents the apolar transport of the cubics. Therefore, the group (16) can be assimilated with a "synchronization" group between the different structural units (entities) of the polymer-drug system. In this process, the amplitude of each entity of the polymer-drug system participates, in the sense that the amplitudes are correlated. Moreover, the phases of the entities of the polymer-drug system are also correlated. The usual synchronization, manifested through the phase shift of the polymer-drug system entities, is, in this case, only a very particular case.
In the following, non-stationary dynamics in complex systems are generated through harmonic mappings. Indeed, let it be assumed that the complex system dynamics are described by the variables Y^j, for which a multifractal metric was found, in an ambient space endowed with its own multifractal metric. In this situation, the field equations of the complex system dynamics are derived from a variational principle connected to a multifractal Lagrangian of harmonic-map type. In the current case, the Lagrangian (26) is given by the metric (25) with the constraint (24), the multifractal field variables being h and h̄ or, equivalently, the real and imaginary parts of h. The variational principle then yields the field equations and, for a choice that introduces a temporal dependency into the complex system dynamics, their solution (31) becomes explicitly time-dependent. In Figures 2-4, various nonlinear drug delivery modes at different scale resolutions are presented in dimensionless coordinates: (i) at global scale resolution (Figure 2a and b); (ii) at differentiable scale resolution (Figure 3a and b); (iii) at non-differentiable scale resolution (Figure 4a and b). Let it be noted that, whatever the scale resolution, the drug release dynamics prove to be reducible to self-structuring patterns. The structures are present in pairs of two large patterns that communicate in an intermittent way. In the 0-20 range for Ω and t, the resulting structures communicate with each other via a channel created along the symmetry axis at t ≈ 10. This channel is also seen at different (Ω; t) coordinates, which is interpreted as an intermittency in the structure bonding.
We present in Figures 5-7, by plotting h in dimensionless parameters, certain temporal self-similar properties of the polymer-drug dynamics. It can be observed that the multifractal structures are contained within similar multifractal structures at much higher scales. Moreover, since the structures' communication channel decreases exponentially in the (Ω; t) plane, dissipation processes (Jackson, 1993; Cristescu, 2008) occurring during drug release are present. The model manages to express the dissipation of the drug through the reduction of the channel amplitude along the Ω axis as the time variable increases.
From these figures, one can also notice channel-type patterns arising through self-structuring of the polymer-drug system entities.
Modeling of the drug release mechanism
The results obtained for the drug loading and encapsulation efficiencies are given in Table 2. As can be observed from this table, the DLE and DEE are related to the molecular characteristics of the copolymers. Sample A, with the smallest molecular weight, has the highest DLE and DEE values. Conversely, sample C, with the highest molecular weight, has the lowest efficiencies.
In Figures 8 and 9, the drug release kinetics as a function of pH for all three samples are illustrated.
From Figure 8, it appears that the Curcumin release kinetics is strongly influenced by the pH, as expected. At pH 2, the P2VP sequence is protonated and therefore a demicellization process occurs, leading to the destruction of the micelles and thus to the almost complete release of the loaded drug. At the other pH values, the micelles are "frozen-in" and the Curcumin release is controlled over time.
Information on the mechanism of transport and release of the active substance from the micelles was obtained using Figure 9. The resulting data are synthesized in Table 3. The strongly acidic environment destabilizes the micelles, regardless of the molecular and compositional characteristics of the amphiphilic copolymer used. The effectiveness of release of the active ingredient is maximal after around 5 hours of experiment. An environment close to the neutral pH value reduces the intensity of the release process, which becomes lowest in the case of micelles formed by the copolymer with the highest molecular weight. A possible interpretation must be related, on one hand, to the larger size of the micelle core, which reduces the diffusion intensity of the active principle, and, on the other, to the length of the copolymer PEO sequences that stabilize the micelle: longer sequences will also slow the release of Curcumin. The behavior at the physiological pH value is interesting: regardless of the copolymer characteristics, the efficiency of the release process is superior to that in the weakly acidic environment (pH = 6.8), due to a better solubilization of Curcumin. The value of the exponential coefficient, n, is around 0.5, generally suggesting a diffusion process, slightly disturbed in some cases.
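The exponent n comes from fitting the short-time release data to the Ritger-Peppas power law, Mt/M∞ = k·t^n; a minimal sketch using a log-log linear regression, with hypothetical data points and the conventional restriction to the first ~60% of release:

```python
# Minimal sketch: estimate the Ritger-Peppas exponent n from release data
# via linear regression of log(Mt/Minf) on log(t). Data are hypothetical;
# the fit is conventionally restricted to the first ~60% of release.
import math

data = [(1, 0.15), (2, 0.22), (4, 0.31), (8, 0.44), (12, 0.55)]  # (h, Mt/Minf)
pts = [(math.log(t), math.log(f)) for t, f in data if f <= 0.60]

n_pts = len(pts)
sx = sum(x for x, _ in pts)
sy = sum(y for _, y in pts)
sxx = sum(x * x for x, _ in pts)
sxy = sum(x * y for x, y in pts)

n = (n_pts * sxy - sx * sy) / (n_pts * sxx - sx * sx)   # slope = exponent n
log_k = (sy - n * sx) / n_pts
print(f"n = {n:.2f}, k = {math.exp(log_k):.3f} h^-n")   # here n comes out ~0.5
```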
The synergy of the two models describing the release dynamics through the scale relativity theory
As we have previously shown, the two fractal/multifractal scenarios of the dynamics of polymer-drug systems are not mutually exclusive but, on the contrary, complementary. Thus, if the hydrodynamic scenario allows the "decoding" of some drug release mechanisms, then the Schrödinger scenario can be employed for the "plotting" of the drug release curves. For a better understanding of our argumentation, let us recall the hypotheses from (Peptu et al., 2021): i. the dynamics of any complex system, independent of the two scale resolutions (differentiable and non-differentiable), are one-dimensional dynamics; ii. the synchronization of the dynamics of any complex system at the two scale resolutions is achieved by "compensating" the velocity fields V_D^l and V_F^l; this restriction signifies that the states density is a multifractal Gaussian whose average multifractal value decreases exponentially to zero and whose multifractal variance tends asymptotically toward D/K.
In Figure 10, we have fitted the experimental drug release data with our multifractal model. Based on the premise of our model, the release dynamics manifest here at two temporal scale resolutions. The first one corresponds to about 30% of the overall release mass and can be attributed to release dynamics on which there are few or no external restrictions. Translating this into the multifractal framework where the model was developed, it means that the geodesics defining the movement of the drug particles are not significantly "broken" and should be defined by a lower fractalization degree. This is seen in Table 4, where the data on fractalization degrees are synthesized. We observe that the fractalization degree at lower scale resolutions is a factor of 3-4 lower than the one computed at larger scale resolutions. This means that, when "zooming in", we can find dynamics which are completely different from the ones dominating the large-scale behavior. This is understandable, as the simulations presented in Figure 4 show localized dynamics for fixed fractalization values. The large-scale-resolution dynamics lead to derived values of the fractalization degree that are considerably higher, which is understandable as these values represent a cumulative characterization of the system. The fractalization values of the system are inversely proportional to the maximum efficiency computed; this is understandable, as lower-fractalization media are proper environments for fast dynamics.
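Readers wishing to reproduce a two-scale decomposition of a release curve can start from a generic two-regime fit; this is only an illustrative stand-in, not the multifractal model itself, and all data and initial guesses below are hypothetical:

```python
# Illustrative stand-in (NOT the multifractal model): decompose a release
# curve into fast and slow first-order regimes,
# f(t) = A1*(1 - exp(-t/tau1)) + A2*(1 - exp(-t/tau2)),
# fitted with scipy. All data points and guesses are hypothetical.
import numpy as np
from scipy.optimize import curve_fit

def two_scale(t, a1, tau1, a2, tau2):
    return a1 * (1 - np.exp(-t / tau1)) + a2 * (1 - np.exp(-t / tau2))

t = np.array([0.5, 1, 2, 4, 8, 16, 24, 48])                      # hours
f = np.array([0.12, 0.20, 0.28, 0.35, 0.45, 0.58, 0.66, 0.78])   # fraction

p0 = (0.3, 1.0, 0.6, 20.0)  # initial guess: ~30% fast regime, ~70% slow
(a1, tau1, a2, tau2), _ = curve_fit(two_scale, t, f, p0=p0, maxfev=10000)
print(f"fast regime: {a1:.2f} of mass, tau = {tau1:.1f} h")
print(f"slow regime: {a2:.2f} of mass, tau = {tau2:.1f} h")
```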
Conclusions
The main results of this paper are the following: i. The release kinetics, as a function of pH, of a model active principle, i.e., Curcumin, from nanomicelles obtained from amphiphilic pH-sensitive poly(2-vinylpyridine)-b-poly(ethylene oxide) (P2VP-b-PEO) tailor-made diblock copolymers was studied by using the Ritger-Peppas equation. The value of the exponential coefficient, n, is around 0.5, generally suggesting a diffusion process, slightly disturbed in some cases; ii. By exploring a hidden symmetry in the form of synchronization groups of polymer-drug system entities, we were led to the generation of a Riemann manifold with a hyperbolic-type metric via the parallel transport of directions. Then, the non-stationary dynamics of the polymer-drug systems were highlighted through a harmonic mapping from the usual space to the hyperbolic one. In such a context, various drug release modes, in the form of patterns and channels, become operational; iii. The kinetic model developed based on fractal theory fits very well with the experimental data on the release of Curcumin from the amphiphilic copolymer micelles in which it was encapsulated, being a variant of the classical kinetic models based on the formal kinetics of the process. It also provides information on the mechanism of release of the biologically active principle from micellar nanocarriers, suggesting the structuring of the micelle according to the pH value, with the appearance of preferential channels through which the diffusion of the active principle occurs.
Funding
The author(s) reported there is no funding associated with the work featured in this article.
Sustainability, Security and Safety in the Feed-to-Fish Chain: Focus on Toxic Contamination
The paper discusses the issue of feed ingredients in aquaculture as a telling example of the implementation of a sustainable food safety strategy, aimed at protecting the health of the next generation, under the One Health paradigm. Finfish and fishery products are a main nutrition security component as a valuable source of animal protein, particularly in developing countries. In addition, they are a critical source of essential oligo-nutrients, such as polyunsaturated fatty acids (PUFAs) and iodine. Production and consumption of fish have greatly increased in the last decade, mostly due to the growth of aquaculture. While the demand for aquaculture products continues to increase, there is the need to address consumers' concerns related to nutritional quality and safety. In fact, both wild and farmed finfish can represent a significant source of exposure to contaminants for the consumer: noticeably, caught and farmed fish have a comparable content of nutrients and contaminants. Aquaculture feeds made of fish meal and fish oil are the main vehicle for the transfer of environmental pollutants to farmed fish. The main fish contaminants (e.g., methylmercury, PCBs, PBDEs) can bioaccumulate and affect development in humans. Feed ingredients as well as fish species have a different liability to contamination depending, e.g., on the lipophilicity of the specific chemicals. Up-to-date risk-benefit assessments show that a high intake of fish may lead to an undesirable intake of pollutants which is not sufficiently balanced by the concurrent intake of protective nutrients, such as PUFAs. The use of vegetable-based feed ingredients in aquaculture has been explored from the standpoints of economic sustainability and fish productivity to a greater extent than from those of food safety and nutritional value. Available data show that vegetable oils can significantly modulate the lipid profile in fish flesh, depending on the oil and the fish species. The use of vegetable ingredients can drastically reduce the accumulation of the main contaminants in fish; likewise, the presence of other "unconventional" contaminants (e.g., PAHs) and the nutritional value of fish flesh could deserve more attention in the assessment of novel aquaculture feeds.
The Health Viewpoint: Key Tools for Sustainability, Security and Safety
The concept of sustainability has seen limited application in the field of food safety so far. The concept of sustainable food safety [1] deals with the components of today's food safety that can impact the health and wellbeing of our progeny's adulthood, including its ability to produce a healthy next generation. Indeed, the new concept of food safety increasingly features the guarantee and promotion of the health and wellbeing of such vulnerable groups as the unborn and the child. Food safety itself is a framework integrating the assessment and management of many factors, from the welfare of the living organisms used for food production and the quality of their living environment through to the management of production and distribution processes, and food processing and consumption at the household level [1]. Any action aimed at improving the intake of nutrients essential to healthy prenatal and neonatal development and/or at reducing the impact of dietary developmental toxicants in foods is relevant to sustainable food safety. Accordingly, this strategic public health perspective pivots on the dietary exposure of women of childbearing age, as well as pregnant and breastfeeding women, especially in the case of high consumers of certain foods. Moreover, it is nowadays recognized that all aspects of sustainability must integrate the One Health (and One Prevention) scenario. Therefore, the health of the future generation is strictly linked to today's environmental quality, today's health and welfare of food-producing animals, today's farming practices, as well as today's food security, i.e., the sufficient access to safe and nutritious food [2]. In particular, the One Health perspective points to toxic contamination of foods of animal origin as a novel aspect of zoonosis [3]. In this frame, and according to the European strategy for food safety "from farm to fork" [4], the quality and safety of animal feedstuffs is a major crossroad, encompassing the environment from which feed ingredients are derived, farming practices, animal health and food safety [5; 6; 7; 8]. For instance, the One Health paradigm is developed in the opinions of the European Food Safety Authority (EFSA) that recommended, based on an integrated assessment, the reduction of the maximum legal limits of some nutrients in animal feeds. Such limits were evaluated to be far above the physiological requirements for animal welfare and productivity and, at the same time, a cause of excessive deposition in edible tissues or products, as in the case of iodine [9] and vitamin A [10]. In other instances, unnecessarily high maximum authorized levels of trace nutrients could pose a risk to the health of farm workers exposed to dusts [11], or to ecosystems exposed to large outputs of animal excreta [12]. The scientific implementation of One Health actions may actually allow an integrated protection of animal welfare, humans and the environment, whereas Sustainable Food Safety actions may actually protect the chances of health of the next generation.
In the ensuing sections we will discuss the issue of feed ingredients in aquaculture as a telling example of implementation of a sustainable food safety strategy under the One Health paradigm.
Food Fish and Aquaculture
The term "food fish" includes finfishes (mostly teleosts, but also selacians), crustaceans, molluscs, amphibians, freshwater turtles and other aquatic animals (such as sea cucumbers, sea urchins, sea squirts and edible jellyfish) intended for use as human food. An impressive number of aquatic species are cultured worldwide in a variety of farming systems: 530 animal species are registered in FAO statistics, including finfishes (354 species, with 5 hybrids), molluscs (102), crustaceans (59), other aquatic invertebrates (9) and amphibians and reptiles (6) [13]. Albeit it might be justified when taking into account a large sector of waterrelated economic activities, the term "food fish" is a very sweeping one from the scientific standpoint: it includes organisms with completely different biological and ecological characteristics, which are bred or collected in very different ways and, last but not least, provide greatly diversified food commodities.
Global "food fish" production has grown steadily in the last five decades [13]: supply increased at an average annual rate of 3.2 %, thus outpacing world population growth at 1.6%. World per capita apparent fish consumption increased almost two-fold from the 1960s (approximate average 10 kg) to 2012 (approximate average 19 kg) [13]. A combination of factors, such as population growth, rising incomes and urbanization contribute to this substantial development, supported also by the marked expansion of fish production and by an increased efficiency of distribution channels. China, with a "fish food" consumption of about 35.1 kg in 2010, has been responsible for most of the growth in fish availability, owing to the great development in farmed fish production. In 2010, per capita fish consumption was estimated at 23.3 kg in developed countries, where an important and growing share of fish consumed consists of imports; on the other hand, in developing countries fish consumption is mainly based on locally and seasonally available products, with supply driving the fish chain [13].
Fish and fishery products are also main nutrition security components as a valuable source of animal protein: a portion of 150 g of fish provides about 50-60% of the daily protein requirements for an adult. In 2010, fish accounted for 16.7% of the global population's intake of animal protein and 6.5% of all protein consumed; in particular, fish provided more than 2.9 billion people with almost 20% of their average per capita intake of animal protein [13]. The role of fish proteins in the diet is usually higher in developing countries, especially in some densely populated countries where total protein intake levels may be low. In fact, despite their relatively lower levels of fish consumption, developing countries have a higher share compared with developed countries. In 2010, fish accounted for about 19.6% of animal protein intake in developing countries; conversely, in developed countries the share of fish in animal protein intake has weakened from 13.9% in 1989 to 11.8% in 2010, in the face of an increased consumption of other animal proteins [13]. In addition, fish, and finfish in particular, is critical for nutrition security as a source of essential oligo-nutrients (Table 1). In Europe it is the critical dietary source of n-3 long-chain polyunsaturated fatty acids (n-3 LCPUFAs) [14; 15]. The mean n-3 LCPUFA content can be remarkably different among common edible fish species, as it varies from 200 mg/100 g (cod and whiting) to 2500 mg/100 g (herring and tuna). Among farmed fish, Atlantic salmon provides n-3 LCPUFAs in high amounts (1800 mg/100 g), whereas the most consumed freshwater fish, i.e., carp and trout, have a content of around 300 and 600 mg/100 g, respectively [15] (Table 1).

The increase of fish consumption is driven by the development of aquaculture. Farming of fish is an ancient art, the earliest known examples dating back to China by 2500 BC. Today, thanks to advances in farming and processing technologies, 47% of all fish for human consumption comes from aquaculture [21]. Indeed, the growth of capture production has almost stopped since the mid-1980s, while between 1970 and 2008 the aquaculture sector maintained worldwide an average annual growth rate of 8.3% [21]; in 2000-2012 production expanded at an average annual rate of 6.2%, from 32.4 million to 66.6 million tonnes. By maintaining such a growth rate, aquaculture could bridge the growing gap between fishery supply and global demand for "fish food" [22; 23]. FAO estimates that, overall, fisheries and aquaculture assure the livelihoods of 10-12% of the world's population. However, aquaculture development is imbalanced and its distribution is uneven, with Asia accounting for about 88% of world production by volume. Fifteen main producer countries accounted for 92.7% of all farmed food fish production in 2012; the large majority are Asian countries (China, India, Vietnam, Indonesia, Bangladesh, Thailand, Myanmar, Philippines, Japan, Republic of Korea), two are Latin American countries (Brazil, Chile), and the remaining ones are Egypt, Norway and the U.S.A. [13].
World aquaculture production is divided into inland aquaculture and mariculture. Except for some operations in saline water, inland aquaculture generally uses freshwater, while mariculture includes production operations in the sea and in intertidal zones. Out of the 66.6 million tonnes of farmed food fish produced in 2012, farmed crustaceans account for 9.7% (6.4 million tonnes); mollusc production is more than double that (15.2 million tonnes), while its value is only half that of crustaceans, because a large part consists of by-products of freshwater pearl culture in Asia; other species provide only 0.9 million tonnes and are farmed mainly for regional markets in a few countries in Eastern Asia. Finfish make up two-thirds (44.2 million tonnes) of the 2012 "food fish" production, including species grown in inland aquaculture (38.6 million tonnes, 87.3%) and mariculture (5.6 million tonnes, 12.7%). Finfish grown in mariculture include a large proportion of carnivorous species, such as salmonids and other sea species, which are higher in unit value than most freshwater-farmed finfish [13].
European aquaculture accounts for about 2% of world production [24; 25] and provides 27% of the total production of aquatic organisms in Europe, compared to 47% at the global level [24]. In the European Union, aquaculture has grown at a rate of only about 0.5% in the period between 2001 and 2008, compared to 7.6% in non-EU countries [24]. The main European producer is Norway (656,000 tonnes in 2005), followed by France, Spain, Italy and the UK: these countries represent 75% of European production [26]. In France, Spain and Italy the predominant component is represented by shellfish (mussels, clams and oysters); conversely, 90% of the production in Norway, the leading actor in Europe, is represented by salmon.
Italy has a variety of environmental conditions, spanning from the Alps to the Southern Mediterranean; accordingly, Italian aquaculture mirrors almost all species farmed in Europe, distributed across about a thousand production sites. Among finfish, the freshwater species represent about 68% of the production, with rainbow trout (Oncorhynchus mykiss) alone accounting for 55.3% in 2009; the remainder is given by mariculture of sea bass (Dicentrarchus labrax, 13.2%) and sea bream (Sparus aurata, 12.9%), as well as by a number of local "niche" productions, e.g., eel [27]. Interestingly, the sector recorded a positive growth trend, parallel to the expansion of national fish consumption, from 15 kg in the 1980s to the current 22 kg per capita per year, which is still below the European average [21]. Nevertheless, the national fishery and aquaculture production is insufficient: Italy imports two-thirds of its fish consumption [27].
While the demand for aquaculture products continues to increase, there is a growing recognition of the need to address consumers' concerns related to the quality and safety of products. Issues such as food safety, traceability, certification and eco-labels are becoming increasingly important and are considered priority policy issues [13].
Finfish, Aquaculture and Food Safety
As already mentioned, fish is a source of essential nutrients such as iodine, vitamin D and PUFAs, which play a critical role in many biological processes, such as the growth and development of the nervous system and the maintenance of immune response, the cardiovascular system and thyroid function. In the meanwhile, both wild and farmed finfish can represent for the consumer a significant source of exposure to contaminants: persistent halogenated compounds, such as polychlorinated biphenyls (PCBs), dioxins, brominated flame retardants and perfluorinated chemicals (perfluorooctane sulfonic acid, PFOS, and perfluorooctanoic acid, PFOA), and organic compounds of chemical elements, such as methylmercury (MeHg) and organotins. The type and level of contaminants in finfish may vary depending on the capacity of the chemicals to persist, bioaccumulate and biomagnify along the food chain, the fish species, its lipid content and dietary habits, as well as on the place of catch [14]. In most cases, contaminants are slowly and inefficiently metabolized by fish; in a few cases, however, fish metabolism does play an important role, as in the telling case of arsenic (As). Whereas inorganic As is an important carcinogenic contaminant of water and cereals, in fish As is metabolized to organic forms, the main ones being of minimal toxicity (arsenobetaine, arsenocholine), while others (arsenosugars, arsenolipids) are less well known but plausibly much less toxic than inorganic As. Thus, fish may accumulate total As, but less than 10% of it is inorganic As [28]: accordingly, total As in fish may flag environmental pollution but is of low significance for consumer safety.
Several contaminant groups (PCBs, dioxins and brominated flame retardants) specifically accumulate in the fat fraction of the tissues due to their high lipophilicity. Accordingly, populations that consume greater quantities of fish show higher levels of persistent lipophilic contaminants in serum, breast milk and adipose tissue [29]. Biomagnification along the trophic chain implies that species occupying higher positions in the food pyramid (e.g., salmonids and tuna among teleosts) are exposed to the contaminant concentration present in the environment as well as in their prey. The risk to the consumer is related to long-term exposure: these contaminants are mainly developmental toxicants, thus the most vulnerable groups are represented by women of childbearing age, who may build up a body burden, as well as pregnant and breastfeeding women, and children [30]. Although generally considered less susceptible than the embryo and foetus, children are considered an especially vulnerable group of direct consumers, due to their not yet fully mature organism as well as to a higher exposure than adults because of the increased intake of food per kg of body weight: a specific susceptibility to certain toxicants, like carcinogens or endocrine disrupters (EDs), is pointed out up to the pre- and peri-pubertal phase [31; 32]. The risk assessment and management of the main finfish pollutants should therefore pay special attention to the protection of the developing organism: indeed, fish pollutants are good examples of the sustainable food safety concept [1].
Since wild fish are exposed to bioaccumulating contaminants through the ecosystem, it has been held that farmed fish would show a lower level of contamination. However, an analysis of available data made in 2005 by EFSA showed no significant differences between the concentrations of both nutrients and contaminants in wild and farmed fish. In particular, the degree of contamination of farmed fish is equal to, and in some cases greater than, that of the wild species, due to the use of feed made from animal ingredients (oil and fishmeal) that are highly liable to contamination and may give rise to bioaccumulation [14]. In feedstuffs for farmed fish, PUFAs have to be supplemented, generally through the use of fish oil. Fish meal is another key ingredient, as it is a source of high-quality protein with adequate proportions of essential amino acids, as well as of essential minerals and also of PUFAs present in lipid residues [14]. The typical diet of an omnivorous fish (e.g., sea bream, sea bass, etc.) generally contains 10% fish meal and 2% fish oil, while that for carnivorous fish (e.g., salmonids) contains 50% fish meal and 25% fish oil [14].
Currently, small pelagic species (anchovies, herring, mackerel, sardines, etc.) are the main species used for the production of the fishmeal and fish oil used in aquaculture. Therefore, the usual aquaculture feeds do mimic the biomagnification process occurring in aquatic ecosystems, and the quality and composition of the feed is a pivotal factor for the presence of contaminants in fish, as in all foods of animal origin, and hence for the protection of human health [7; 33]. The contamination of wild fish can be controlled with monitoring programs and, in the long term, with global measures to reduce the release of pollutants into the environment. In farmed fish, contamination levels in animal feed may be managed in the course of the production process or, as a more effective option, can be prevented through the widespread use of new feed ingredients [14] with a proven lower liability to bioaccumulation of toxic chemicals.
Sustainable Food Safety: the Toxicology of Main Chemical Contaminants of Finfish
Methylmercury. Mercury (Hg) is released from natural and anthropogenic sources into the environment: in water bodies the inorganic Hg is methylated by sulphate-reducing bacteria, becoming MeHg, the most toxic organic form, which bioaccumulates in marine organisms and biomagnifies through the food chain [34; 35]. Therefore, fish is by far the main dietary vehicle of MeHg [36]. Concerns about MeHg mainly rely on its developmental neurotoxicity [37; 38]. Experimental and epidemiological studies support that exposure levels occurring in high-intake areas (e.g., New Zealand, Faroe Islands) are related to deficits in language, attention and memory in the offspring [39; 40; 41; 42; 43]; noticeably, a study conducted in the Seychelles indicated that, when intake of MeHg occurs through fish high in PUFAs, the developmental neurotoxicity is significantly mitigated [44]. In 2012, EFSA established a tolerable weekly intake (TWI) of 1.3 µg/kg b.w., based on neurodevelopmental effects after prenatal dietary exposure and estimating the maternal intake and body burden through the concentration of Hg in maternal hair [38]. The consumption of fish contaminated with MeHg may enhance the oxidative stress-related vascular damage in adults, thus increasing the risk of neurological ischemia and cardiovascular disorders [45; 46; 47; 48]; this aspect was considered by EFSA as potentially important, but the evidence was not sufficiently robust for deriving a health-based guidance value.
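The TWI can be translated into an allowable weekly fish amount for a given contamination level; a minimal sketch, where the body weight, portion size and the example MeHg concentrations are illustrative assumptions (only the TWI of 1.3 µg/kg b.w. comes from the text):

```python
# Minimal sketch: weekly fish amount compatible with the MeHg TWI of
# 1.3 ug/kg b.w. (EFSA 2012). Body weight, portion size and the example
# concentrations are illustrative assumptions.

TWI_UG_PER_KG_BW = 1.3
body_weight_kg = 60.0
portion_g = 150.0

# Hypothetical MeHg concentrations (mg/kg fresh weight, i.e. ug/g)
mehg_mg_per_kg = {"farmed trout": 0.05, "canned tuna": 0.30, "swordfish": 1.0}

weekly_budget_ug = TWI_UG_PER_KG_BW * body_weight_kg  # 78 ug MeHg/week
for species, c in mehg_mg_per_kg.items():
    max_g = weekly_budget_ug / c          # ug per week / (ug per g)
    print(f"{species}: {max_g:7.0f} g/week (~{max_g / portion_g:.1f} portions)")
```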
Bioaccumulation in fish occurs via binding to tissue proteins. Generally, about 80-100% of total Hg in fish muscle is MeHg, albeit with variations related to age and species [49]. The concentration in fish flesh is not changed by cooking; actually, due to moisture loss, Hg concentrations are often slightly higher in cooked fish. The amount of Hg is related to the age of the fish and the position of the fish species within the food chain; predatory fish and older fish have higher concentrations [49]; specific ecosystem characteristics also contribute to the variability in Hg concentration [50].
Mercury concentrations may exceed 1 mg/kg in shark, swordfish, marlin and tuna. In farmed rainbow trout the ratio of Hg concentrations in feed and in fish is about 1:1; in trout aged 10-14 months, muscle Hg concentrations were not related to fish weight [51].
Methylmercury has a rather unique place among contaminants, because there are no major dietary sources other than fish for any age class. In particular, tuna, swordfish, cod, whiting and pike were major exposure contributors in adults, including women of childbearing age; for children, hake was an additional major contributor: most of the above species are not major PUFA sources. Noticeably, the dietary exposure estimates in high and frequent consumers of finfish are about two-fold higher in comparison to the total population [38]. According to the EFSA estimates, the mean dietary exposure in the European Union does not exceed the TWI, with the possible exception of toddlers and other children in some surveys. However, the medians of the 95th percentile dietary exposures across surveys are close to or above the TWI for all age groups; high and frequent fish consumers, which might include pregnant women, may exceed the TWI by up to approximately six-fold. Unborn children constitute the most vulnerable group for developmental effects of MeHg exposure. Biomonitoring data from blood and hair indicate that MeHg exposure is generally below the TWI in Europe, but higher levels are also observed [38].
The most important source of Hg in feed is fishmeal, where it is mainly present as MeHg. A European survey showed that the average concentration of total Hg in complete fish feeds is 0.06 mg/kg and that approximately 8% of samples exceeded the maximum allowable level of 0.1 mg/kg [52]. Limited data indicate that the proportion of MeHg to total Hg in aquaculture feeds is consistently over 80% [53; 54; 55].
Dioxins are poorly water-soluble, but they may be adsorbed onto mineral and organic particles, undergo airborne or waterborne transport far away from the emission sources and enter into the food webs [56]. The bioaccumulation in fish species depends on both biomagnification and the fat content of the organism. Except for some highly polluted areas, in the general human population about 95% of the exposure to dioxins occurs through the diet, in particular through fatty foods of animal origin [60].
Dioxins are the first group of chemical contaminants that have been assessed and monitored as a mixture, because a) they occur in the same foods, and b) they share the same mechanism of toxicity, the activation of the intracytoplasmic aryl hydrocarbon receptor (AhR) [61]. The toxic equivalency factor (TEF) of each congener is based on the congener's AhR-binding affinity compared to that of 2,3,7,8-TCDD, taken as the reference unit [62]; the total concentration of dioxins in a matrix is measured as the toxic equivalent (TEQ), obtained by summing the products of the TEF and the concentration of each individual congener. Noticeably, another group of compounds, different from dioxins, is considered to add up to the TEQ and is monitored accordingly: the 12 coplanar, "dioxin-like" (DL) PCBs, which are structurally similar to 2,3,7,8-TCDD and activate the AhR [63].
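The TEF-TEQ scheme amounts to a weighted sum over congeners; a minimal sketch, where the concentrations are hypothetical and the TEF values follow the WHO scheme (they should be checked against the current official list before any real use):

```python
# Minimal sketch of the TEQ calculation: TEQ = sum(TEF_i * C_i).
# Concentrations are hypothetical; TEFs follow the WHO scheme and should
# be verified against the current official list before any real use.

tef = {"2,3,7,8-TCDD": 1.0, "1,2,3,7,8-PeCDD": 1.0, "PCB-126": 0.1}
conc_pg_per_g = {"2,3,7,8-TCDD": 0.2, "1,2,3,7,8-PeCDD": 0.4, "PCB-126": 5.0}

teq = sum(tef[c] * conc_pg_per_g[c] for c in tef)  # pg TEQ/g fresh weight
print(f"TEQ = {teq:.2f} pg TEQ/g")                 # 0.2 + 0.4 + 0.5 = 1.10
```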
Studies in animals and humans show that critical effects of dioxins include alterations of immune, reproductive and neurobehavioral development, as well as porphyrin accumulation in the liver [64; 65; 66; 67] and a potent tumor-promoting action [68]. The Scientific Committee on Food of the European Commission set a cumulative TWI of 14 pg TEQ/kg b.w. (SCF, 2001).
The highest average levels of dioxins in food are found in fish liver oil and its derivatives and in the muscle of eel, one of the edible fish with the highest fat content. Other finfish show levels well below those of eel, with relatively higher values in salmon and Baltic herring; for a comparative glance, eel flesh had a median of 6.7 ng TEQ/kg total weight, whereas the median in salmon was 8.0 ng TEQ/kg on a lipid basis, thus resulting in much lower values on a total weight basis. Leaner species showed much lower levels, e.g., 1.20 ng TEQ/kg on a lipid basis in farmed trout flesh [56]. Contrary to MeHg, dioxins are present in many other foods, such as liver and milk, that show levels comparable to or slightly lower than salmon. The contribution of fish to the average daily exposure is between 11 and 63%, depending on the eating habits of the different countries. The average intake of dioxins with the diet in the European Union is between 8.4 and 21 pg TEQ/kg b.w./week; thus, a substantial proportion of the European population would have an intake above the TWI [69].
Fish oil is the most important source of contamination of farmed fish feed with dioxins and dioxin-like PCBs, followed by fish meals [14]. In Europe, overall 8% of fish feed samples exceeded maximum tolerated levels and a further 4% exceeded action levels set as contamination alerts [56]: the complete feed will comply with the regulatory limit (2.25 ng/kg) if the individual components also comply with their respective limits (fish meal, 1.25 ng/kg; fish oil, 6 ng/kg) [14]. Previous data indicated that feed ingredients of fish origin produced in Europe contained higher levels of PCDDs/Fs and DL-PCBs than those of South Pacific origin, and that the contribution of such ingredients to the total body burden of farmed fish was markedly higher for carnivorous species (where it could reach 98%) than for omnivorous species [62]. The mean transfer rate of PCDDs/Fs from commercial fish feed into the flesh of rainbow trout increased with the duration of exposure and ranged from 11.1% at 6 months to 30.7% at 19 months; there was a direct correlation between the concentration in the lipid fraction of the feed and that in fish flesh [70]. The feed-tissue transfer rate for DL-PCBs was higher than that of PCDDs/Fs in Atlantic salmon [71; 72] and rainbow trout [73]. Interestingly, dioxins might accumulate also in fish eggs [70], a favoured delicacy for several consumer groups.
Non-dioxin-like polychlorinated biphenyls. Polychlorinated biphenyls (PCBs) are a widespread class of persistent and bioaccumulating chemicals that were widely used for many industrial applications; they include 209 congeners defined according to the number of chlorine atoms and their position. The manufacture and use of PCBs has been prohibited in almost all industrial countries since the late 1980s; however, the combination of widespread use and high environmental persistence make them an excellent example of legacy contaminants, which are still an issue decades after the ban [56]. In addition, PCBs still are released into the environment-feed-food chains from ill-managed hazardous waste sites [74].
PCBs do bioaccumulate because of their high lipophilicity. Besides the small group of DL-PCBs, considered in risk assessment together with dioxins, the bulk of PCBs are the non-dioxin-like congeners (NDL-PCBs). Even though they might be considered individually less toxic than the DL congeners, NDL-PCBs are much more numerous, more abundant and include the most persistent congeners.
Experimental and epidemiological data suggest that the critical effects of PCBs in adults and children include liver damage, reduced thyroid function, reproductive dysfunctions in both sexes and tumour promotion (especially in the liver), even though the potency appears much lower than that of dioxins [14; 75; 76]. Intrauterine development seems particularly vulnerable, mainly concerning thyroid function and neurological development [77; 78; 79]; children born to mothers who were habitual consumers of PCB-contaminated fish from Lake Michigan (USA) had a smaller head circumference [80]. Notwithstanding the persisting importance of PCBs in food safety, the available data are unfortunately considered inadequate to establish a TWI, even a provisional one.
NDL-PCBs may have different toxic modes of action and effects. Several authors proposed a toxicologically based classification of PCBs into three groups by introducing a distinction within the group of NDL-PCBs on the basis of structure-activity considerations: besides the DL-PCBs (group II), the "estrogenic" congeners (e.g., PCB 52, 101) make up group I, whereas the "highly persistent, cytochrome-P450-inducing" congeners (e.g., PCB 153, 180) make up group III [81; 82]; another approach identifies three clusters characterized by different patterns of mechanisms (androgen receptor antagonism, transthyretin binding and interference with gap junctions) [83]. The further development of NDL-PCB grouping could lead to the definition of TEF-TEQ approaches for clusters of the main congeners present in feeds and foods. In its turn, this could be highly relevant to assess the toxicological significance of a given PCB mixture as, indeed, PCB exposure occurs almost exclusively as a mixture of congeners.
Data on the occurrence of NDL-PCBs in food and feed are usually reported as the sum of three to seven congeners (PCB 138, 153, 180 and others), referred to as "indicator PCBs" and selected both because of their relatively easy analytical quantification and their high presence in food matrices. In the EFSA survey of NDL-PCBs in foods and feeds, the highest mean levels of NDL-PCBs in food (whole weight basis) were observed in fish and fish products, with 223 µg/kg in muscle meat of eel, 148 µg/kg in fish liver and 23 µg/kg in muscle meat of fish other than eel. In many studies, fish was the single food commodity providing the highest contribution to exposure (35.9-65.4%) [56].
The pattern of fish contamination parallels that of dioxins, levels being related to the lipid content of tissues, biomagnification and area-specific pollution. Some studies have shown that marine fish are generally less contaminated with PCBs than freshwater fish, further pointing out area-specific pollution problems, e.g., from disposal sites [89]. In feed, the highest mean contamination levels were reported in fish oil (59 µg/kg); among complete feeds, fish feed ranked among the most contaminated, with a mean of 10 µg/kg [56].
Polybrominated diphenyl ethers. Brominated flame retardants (BFRs) are anthropogenic chemicals added to many consumer products in order to improve their fire resistance; the PBDEs are the group of greatest relevance for the contamination of ecosystems and food chains [90]. PBDEs include 209 possible congeners, whose chemical stability is related to the number of bromine substituents, congeners with four to eight bromine substituents showing the highest stability. PBDEs are going to be drastically restricted in the industrialized world but, due to their persistence and lipophilicity, they will remain a legacy issue for food safety, much like PCBs. Based on their presence and persistence in the environment and in food, EFSA has identified eight congeners of primary interest; however, adequate toxicological data for risk assessment exist only for four congeners (BDE-47, -99, -153 and -209) [90].
The available data indicate adverse effects on the liver, the thyroid and the development of the reproductive and nervous systems, as well as increased oxidative stress [91; 92; 93; 94; 95; 96]; impaired thyroid regulation and neurobehavioral development are the critical effects used by EFSA for risk assessment [90]. Whereas the available data are not robust enough to define a TWI, the dose-response curves for the critical effects were used in order to estimate, for each of the four congeners, an exposure causing a minimal increase in response for the critical effects, according to the Benchmark Dose approach (BMD10) [90].
Overall, fish and other seafood are the commodities with the highest PBDE levels, in both wild-caught and farmed species. As for PCBs and dioxins, there is a direct relationship between the levels of PBDEs and the fat content of different fish species [97]. Especially for BDE-47 and -100, the contamination levels in fish with a fat content higher than 8% are almost double those in fish with a fat content between 2 and 8%, and more than 10 times higher than those in fish species with a fat content below 2% [90]. As expected, among the main species used for human food, herring and eel show the highest levels [98; 99]. The catch area may also play a role, as shown by the high levels in brown trout fillets from Lake Mjøsa, a highly contaminated spot in Norway [100]. In highly contaminated fish species or populations, the levels of the sum of indicator PBDEs may be over 300-400 µg/kg fresh weight [98; 100].
High-level fish consumers are likely to have an elevated dietary intake, as are consumers of food supplements such as fish oil or fish liver oil capsules. The estimated daily intakes of the different congeners for average European consumers range (approximate values) from 0.1-1 ng/kg b.w. to 0.7-4.6 ng/kg b.w. as minimum lower-bound and maximum upper-bound values, respectively. However, small children are estimated to have intakes about 3-6 times higher than adults [90]. In the absence of a TWI, EFSA assessed a margin of exposure between the BMD10 and conservative estimates of the dietary intakes of the four congeners: a potential health concern was identified only in reference to the current dietary exposure to BDE-99, especially in small children [90]. At the moment, there is no basis for an approach to PBDEs as a whole comparable to the TEQ adopted for dioxin-like compounds. However, PBDEs occur in the same foods and apparently have similar mechanisms of toxicity, thus a TEQ approach might be envisaged.
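The margin-of-exposure logic can be illustrated as a simple ratio; a minimal sketch, where the reference point is a hypothetical placeholder (the official BMD10 values should be taken from the EFSA opinion) and the intake is the upper-bound adult figure quoted above:

```python
# Minimal sketch of a margin of exposure (MOE) check for a PBDE congener:
# MOE = reference point (e.g., a BMD10) / estimated dietary intake.
# The reference point below is a hypothetical placeholder, not EFSA's figure.

bmd10_ng_per_kg_bw_day = 12_000.0    # hypothetical reference point
intake_ng_per_kg_bw_day = 4.6        # max upper-bound adult intake (see text)

moe = bmd10_ng_per_kg_bw_day / intake_ng_per_kg_bw_day
print(f"MOE = {moe:.0f}")            # a larger MOE means lower concern
```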
Only limited information exists on fish feed contamination with PBDEs [101]. Dietary accumulation of PBDEs has been investigated in feeding trials with different species (Atlantic salmon, trout, carp, etc.): a wide range of congener-dependent accumulation was reported, ranging from less than 0.02 to 5.2% for BDE-209 to more than 90% for BDE-47 [14]. These data, however limited, lend support to the monitoring and reduction of PBDEs in aquaculture feeds and feed materials, also through the definition and enforcement of (yet unavailable) maximum tolerated levels.
There are other BFRs present in fish, for instance the HBCDDs (hexabromocyclododecanes). However, these compounds, albeit affecting similar endpoints as PBDEs, have a relatively low toxicological potency [102; 103; 104]. HBCDDs are lipophilic, and fish is the food commodity most affected by contamination; however, the levels measured in fish do not indicate that HBCDDs accumulate to a great extent. Therefore, no concerns for consumer safety have been identified [90].
Perfluoroalkylated substances. Perfluoroalkylated substances (PFASs) are fluorinated compounds with high thermal, chemical and biological inertness. Perfluoroalkylated substances are both hydrophobic and lipophobic, and therefore they do not accumulate in fatty tissues as other persistent halogenated compounds do [105]. PFASs have been used for decades in a range of industrial and chemical applications [106]. The wide use of certain PFASs led to their global distribution in the environment and biota: PFOS and PFOA are the best known PFASs, as well as those most investigated for their toxicological properties [107].
Both PFOS and PFOA elicit hepatotoxicity, endocrine disruption, and reproductive and developmental toxicity in laboratory animals [108; 109; 110; 111; 112; 113]. Biomonitoring studies on the adult Italian population show that internal exposure to PFOS and PFOA is widespread, albeit levels were highly variable and partly dependent also on the characteristics of the living environment [114]. In children from the Faroe Islands, a community with high fish consumption, exposure to PFASs was associated with a reduced humoral response to immunisations [115]. Taking into account effects on the liver, prenatal development and the metabolism of thyroid hormones and cholesterol, EFSA has established a TDI of 150 ng/kg b.w. per day for PFOS; PFOA has a similar toxicological pattern but is less potent, with a TDI of 1500 ng/kg b.w. per day [105].
Diet is the main exposure route to PFASs in humans: the presence of PFASs in the environment is a major factor driving their entry into the feed-food chains [116; 117]. The PFOS concentrations in foods are almost invariably higher than those of PFOA, which appears to have a lower accumulation potential. While consumption of game and offal can be important for limited "niches" of consumers, fish and fishery products are the major determinants of the PFOS dietary intake of the general population, contributing 50 to 80% of the total intake, while for PFOA the contribution of fish is 7.6 to 27%, depending on dietary habits [107]. Positive correlations were reported between PFAS body burden and self-reported fish consumption [118]. Concentrations of PFASs were generally higher in fish caught from fresh water compared to marine water [119; 120], pointing out that the fish living environment is an important factor for these pollutants.
Different from previous estimates [105], the most recent European-wide survey showed no concerns for PFOS dietary intake; the most conservative estimates were always below 10% and 20% of the TDI in adults and toddlers, respectively. As for PFOA, the most conservative figure reached a bare 2.1% of the TDI for toddlers [107]. The relatively high levels of PFOS and PFOA found in some biomonitoring studies [114] are in apparent contrast with the dietary intake estimates; such high levels might be due to aggregate exposures (diet plus environment) occurring in specific scenarios and/or to the building-up of a PFAS body burden, whose kinetics has yet to be clarified. Noticeably, the highest PFOS concentrations are recorded in fish liver [105; 107], hinting that some attention could be devoted to fish liver oil.
The information on PFASs in aquaculture feeds is very limited: PFOS is by far the PFAS most frequently found [121]. Considering that PFASs are not lipophilic, fish meal may be the main source. No significant PFAS biomagnification occurred upon a 28-day dietary exposure in rainbow trout, even for PFOS; however, it was noteworthy that skin (a potential edible tissue) was among the main deposition sites for PFASs [122].
Organotins. Organotin compounds (OTCs) such as tributyltin (TBT) and triphenyltin (TPT) have been widely used, and are still used to a lesser extent, as biocides and pesticides [123]. The main factor involved in OTC contamination of food chains, and especially of seafood, is their potential for environmental persistence and bioaccumulation.
Experimental studies identified several effects, with an enhanced susceptibility of the developing organism, such as neurotoxicity, endocrine and reproductive toxicity, tumour promotion and, especially, immunotoxicity, the critical effect in mammals [124]. Limited data showed that OTCs have essentially similar toxicity and toxicokinetics; thus, EFSA established a group TDI of 0.25 µg/kg b.w./day. After the EFSA opinion, experimental in vitro and in vivo studies on the main OTC, TBT, have indicated an obesogenic action with increased adipocyte proliferation and differentiation [125; 126]. Although the obesogenic effect has still to be defined in the context of OTC risk assessment, these data support the public health relevance of reducing exposure. Seafood is by far the main source of OTCs as contaminants of food chains. The OTC levels in seafood other than finfish (crustaceans and molluscs) are in general higher than those in finfish, possibly because of the greater contact with sediments, which are a critical environmental compartment for OTC pollution. For instance, the calculated mean concentration value for TBT in seafood other than fish is 60 µg/kg fresh weight, whereas in finfish the corresponding estimate is 17 µg/kg fresh weight. However, since finfish consumption is on average much higher than that of other seafood, finfish is the major contributor to OTC dietary intake, representing 80-85% and 66-73% when occurrence medians and means are utilized, respectively. A conservative estimate calculated that the intakes for high consumers were up to 0.17 µg/kg b.w./day, i.e., up to approximately 70% of the group TDI. The TDI may be exceeded by the frequent consumption of seafood caught or farmed in highly contaminated areas, such as the vicinity of harbors and heavily used shipping routes [124].
Organotin compounds are not usually monitored in feeds; however, the detection of these compounds, and mainly of TBT, in fish feeds suggests that the carry-over of OTCs to farmed fish might be an overlooked issue [121].
This review indicates that some contaminants of fish feeds are recognized issues and regularly monitored (MeHg, dioxins, PCBs); PBDEs are an emerging problem, especially for fish oils; more data on PFASs and OTCs in aquaculture feeds are needed to assess the exposure of farmed fish to these environmental pollutants. Feed ingredients have a different liability to contamination depending, e.g., on the lipophilicity of the specific pollutants. From the standpoint of human risk assessment, the main fish pollutants share remarkable features of concern, such as the ability to make up a body burden that can be transferred to the next generation and the enhanced susceptibility of the developing organism [1].
One Health: Issues for Risk-to-Benefit Analysis
In 2010, FAO and WHO convened a Joint Expert Consultation on the Risks and Benefits of Fish Consumption, which considered a restricted panel of major nutrients (PUFAs) and chemical contaminants (MeHg and dioxins) in a range of fish species. Considering the benefits of docosahexaenoic acid (DHA) versus the risks of MeHg, the consultation concluded that, among women of childbearing age, pregnant women and nursing mothers, fish consumption lowers the risk of suboptimal neurodevelopment in their offspring compared with not eating fish, in most circumstances. Among infants, young children and adolescents, the evidence was insufficient to derive a quantitative framework of health risks and benefits [127]. In 2014, EFSA dealt with the actual fish intake that can be recommended for nutritional purposes without unduly exposing consumers to contaminants. EFSA focused on PUFAs and MeHg, since fish is the only significant dietary source of both, and devoted specific attention to pregnancy [15].
According to EFSA, the weekly consumption of 3-4 portions of fish in pregnancy may have beneficial effects on the development of the nervous system and is definitely recommended compared to the avoidance of fish consumption for fear of MeHg or other contaminants. Considering the mean MeHg levels detected in fish in Europe, the intake associated with up to 4 portions/week would not elicit any significant risk. However, there is no evidence that an intake higher than 4 portions/week would bring any additional benefits. A subsequent EFSA statement on the benefits of fish consumption compared to the risks of MeHg pointed out that the risk-benefit balance strongly depends on the scenarios of seafood consumption, which are highly variable among European countries in terms of both the total amount and the main species of fish consumed. When the main species consumed have a high MeHg content, only a small number of servings (<1-2) can be eaten before reaching the TWI, which may be attained before the desired intake value for PUFAs, especially for vulnerable population groups (toddlers, children and women of childbearing age). Better controls on aquaculture feeds may achieve a substantial reduction of MeHg in farmed fish; however, the fish species that contain higher levels of Hg (tuna, swordfish, cod, etc.) are not farmed. Therefore, besides reducing Hg emissions, EFSA recommends issuing recommendations at national/regional levels to increase the intake of fish species with lower MeHg content [128].
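The trade-off described here can be framed as a per-species consumption window: the weekly amount needed to reach an n-3 LCPUFA target versus the amount allowed before reaching the MeHg TWI; a minimal sketch, where the PUFA target, the swordfish PUFA value and all MeHg concentrations are illustrative assumptions (the herring and trout PUFA figures echo those quoted earlier in the text):

```python
# Minimal sketch: per-species weekly consumption "window" bounded below by
# an n-3 LCPUFA target and above by the MeHg TWI. Target, MeHg values and
# the swordfish PUFA figure are illustrative; body weight = 60 kg.

PUFA_TARGET_MG_WEEK = 1750.0   # e.g., ~250 mg/day of n-3 LCPUFAs
TWI_BUDGET_UG_WEEK = 1.3 * 60  # MeHg TWI * body weight

# (n-3 LCPUFA mg per 100 g, MeHg mg/kg) per species -- illustrative values
species = {"herring": (2500, 0.05), "trout": (600, 0.05), "swordfish": (800, 1.0)}

for name, (pufa_per_100g, mehg) in species.items():
    g_for_pufa = 100.0 * PUFA_TARGET_MG_WEEK / pufa_per_100g  # lower bound
    g_for_twi = TWI_BUDGET_UG_WEEK / mehg                     # upper bound
    ok = "feasible" if g_for_pufa <= g_for_twi else "conflict"
    print(f"{name}: need {g_for_pufa:.0f} g, allowed {g_for_twi:.0f} g -> {ok}")
```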
It is now recognized that the balance of risks and benefits is not essentially different between wild and farmed fish. Noticeably, farmed fish has a lower mean content of PUFAs per gram of weight compared to caught fish of the same species. On the other hand, farmed fish is fattier overall, partly because it moves less: it therefore has a higher proportion of the tissue that is more liable to accumulate fat-soluble pollutants as well as to contain PUFAs. Consequently, eaten portions of wild and farmed fish have overall comparable levels of nutrients and contaminants [14].
Producers, food safety operators, and consumers should be aware that finfish are quite diverse and that the physiology and ecology of edible fish species predict their liability to bioaccumulate certain contaminants. As discussed above, predatory fishes (e.g., tuna, sharks, swordfish) are more liable to persistent contaminants such as MeHg, because biomagnification is related to the place in the food web, whereas fatty fishes (e.g., herring, mackerel, eel) are more liable to fat-soluble halogenated pollutants (PCBs, dioxins, BFRs). Salmon, as a large and fatty fish, is indeed liable to bioaccumulate both kinds of pollutants.
Since not all fish species are exposed in the same way and to the same extent, people's dietary habits influence their exposure. For instance, in New Zealand the exposure to MeHg is higher in the Maoris, traditionally high fish consumers, and in people with lower socio-economic status, who frequently consume fish'n'chips made with large, cheap fishes like sharks [129]. Such simple information can support feasible actions for the prevention and control of contaminants, and/or risk communication.
Nutrients and contaminants are concurrently present in fish tissues. Scientific evidence shows that in many cases they can interact, rather than exert their beneficial or adverse actions independently; for instance, several endocrine disrupters interfere with iodine uptake and utilization by the thyroid, thus possibly increasing the physiological requirements for iodine [130]. This may have some relevance for risk-benefit analysis: experimental and epidemiological studies suggest that concurrent exposure to PUFAs may mitigate the developmental neurotoxicity of MeHg; MeHg intake through PUFA-rich fish might thus be of somewhat lower concern [44; 131]. However, when juvenile mice are exposed through fish-based diets to levels of persistent lipophilic pollutants devoid of apparent toxicity, subtle but clearly adverse effects are observed in the brain, liver, thymus and thyroid [132; 133]. These studies suggest that intake via the fish food matrix may not afford detectable protection against the main pollutants in the vulnerable direct consumer, the child. In addition, different lipophilic pollutants have the same toxicological targets in juvenile rodents, although their chemical structures are diverse and their potencies very different (from very high, as for TCDD, to very low, as for HBCD): therefore, an additive effect cannot be ruled out. The scientific evidence consistently indicates that the overall pollution of fish should be further reduced to improve the balance between health benefits and health risks. In this respect, caught fish could be better controlled, but farmed fish should be produced while minimizing the chance of pollution. The issue of feeds becomes prominent.
The Perspective of Novel Vegetable Derived Aquaculture Feeds
In general, fish can take up and bioaccumulate contaminants from feeds without overt toxicity, except in the case of very high exposures, which are unlikely to occur on the farm; therefore, monitoring of zootechnical parameters, such as growth or reproduction, would not provide meaningful alerts of ongoing contamination in most cases. As there are no established biomarkers of effective dose in farmed fish, the analytical monitoring of contaminants in feed and fish samples remains the only current control tool in routine aquaculture production. Whereas controls are obviously necessary to check the enforcement of good practices and prevention, sole reliance on controls is not cost-effective; even high-quality analytical monitoring programmes are just a "defensive weapon" and do not indicate any way forward.
Aquaculture feeds are traditionally based on fish meal and fish oils; however, the feed-grade fisheries that supply fish meal and oil have reached their sustainability limits. Therefore, if aquaculture production is to expand to meet global demand for fish, alternative materials must be investigated and introduced; aquaculture development is becoming increasingly constrained by the limited supplies of the industrial fish that provide the fish meal and fish oil on which aquaculture feeds are so heavily dependent. However, replacement of significant amounts of the conventional feed ingredients by feed ingredients of vegetable origin could be achieved without loss of growth performance or effects on fish health [134; 135; 136; 137]. Growth, health and reproduction of fish are primarily dependent upon an adequate supply of nutrients, in terms of both quantity and quality, irrespective of the culture system in which they are grown. Dietary protein and lipid requirements, and carbohydrate utilization, have been relatively well investigated for several fish species, while data on the requirements for specific nutrients such as individual amino acids, fatty acids and minerals are only available for the few most commonly farmed carnivorous and omnivorous species. Lipids are primarily included in formulated diets to maximize protein sparing. The degree of unsaturation does not appreciably affect digestibility or utilization of fats and oils as energy sources for coldwater or warmwater fish [138]. Carnivores like trout have natural diets rich in triglycerides and can easily adapt to high-fat feeds; lipid levels as high as 35% have been reported in some salmonid feeds [139]. The maximum lipid levels for other freshwater fish appear to be lower: in general, 10-20% of lipids provide optimal growth rates without producing an excessively fatty carcass [140]. Carbohydrates are the least expensive form of dietary energy and are frequently used for protein sparing in formulated diets; the ability to utilize carbohydrates varies among fish species as well as with the complexity or chemical structure of the carbohydrate source [141; 142]. The ability of carnivorous species to hydrolyse or digest complex carbohydrates is limited due to the weak amylolytic activity in their digestive tract; thus, for species such as the trout, starch digestion decreases as the proportion of dietary starch increases. For salmonids, carbohydrate digestibility also diminishes with increasing molecular weight [138]. Therefore, any alternative feed ingredient of vegetable origin for salmonids, or other carnivorous farmed species, should preferably have a low content of complex carbohydrates. Conversely, farmed warmwater omnivorous or herbivorous fish species (e.g., common carp, channel catfish, eel) are more tolerant of high dietary carbohydrate levels. In common carp, carbohydrate levels up to about 25% of the diet are an energy source as effective as lipids [143; 144]. Finfish need the same essential amino acids as most other vertebrates. The requirements for individual amino acids were found to be consistent between coldwater fish (rainbow trout) and warmwater fish (channel catfish) when expressed in absolute terms rather than as a percentage of the protein content [145]. Fish, like all animals, require essential fatty acids (the PUFAs) for basic cellular functions (e.g., maintenance of cell membranes), but cannot synthesize them; a simple sketch of how such species constraints can be screened against a candidate formulation is given below.
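As a minimal illustration, the sketch below checks an invented formulation against the lipid and carbohydrate tolerances discussed above. The lipid limits echo the text; the trout carbohydrate cap is an assumed placeholder:

```python
# Hypothetical screening of a feed formulation against species tolerances.
# Lipid limits reflect the figures in the text; the trout carbohydrate cap
# and the example formulations are assumptions for illustration only.

SPECIES_LIMITS = {
    # species: (max % lipid, max % carbohydrate) of the diet
    "rainbow_trout": (35.0, 15.0),  # carnivore: tolerates fat, not starch (15% assumed)
    "common_carp":   (20.0, 25.0),  # omnivore: carbs up to ~25% are effective energy
}

def check_feed(species: str, lipid_pct: float, carb_pct: float) -> bool:
    """True if the formulation stays within the species' assumed tolerances."""
    max_lipid, max_carb = SPECIES_LIMITS[species]
    return lipid_pct <= max_lipid and carb_pct <= max_carb

print(check_feed("rainbow_trout", lipid_pct=30.0, carb_pct=10.0))  # True
print(check_feed("common_carp",   lipid_pct=12.0, carb_pct=30.0))  # False: too much starch
```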
Vegetable oils are rich in linoleic-series fatty acids (n-6) but contain little or no linolenic-series fatty acids (n-3), which, however, are present in marine oils; highly unsaturated fatty acids, or HUFAs (20:5n-3, 22:5n-3, 22:6n-3), are essentially limited to seafood [138; 146]. Indeed, marine fish species (e.g., bream, sea bass, yellowtail, turbot, flounder) also require HUFAs, while freshwater or anadromous species require a greater amount of n-3 and n-6 fatty acids in the form of α-linolenic acid and linoleic acid; in general, the requirements for n-3 or n-6 PUFAs correspond to about 1-2% of the diet by dry weight. Differently from marine fish species, freshwater fish are provided with enzymes to desaturate and elongate C18 PUFAs to the longer-chain C20 and C22 PUFAs, which are the functionally essential fatty acids in vertebrates. Therefore, the specific PUFA requirements must be considered when including novel feed ingredients of vegetable origin in a complete feed tailored for a given species. Determination of dietary mineral requirements is made complex by the fish's ability to absorb essential elements (e.g., iodine) from the surrounding water in addition to the diet. Therefore, the dietary requirement of a fish species for a particular element depends to a large extent upon the concentration of that element in the water medium [138; 147]. Since even subclinical deficiencies of trace elements (e.g., copper, selenium, zinc) may impair the fish's ability to cope with stress or diseases, the development of feed ingredients of vegetable origin should consider factors beyond requirements for individual elements, such as those reducing bioavailability by binding elements within the feed matrix (e.g., phytochelatins) [148; 149], by impairing absorption (e.g., phytates) [150; 151], or through unbalanced intake of elements (e.g., excess zinc impairing copper uptake and utilization) [152]. Although the disorders related to vitamin deficiency in fish are well investigated, quantitative dietary vitamin requirements are probably the least studied area in fish nutrition. While natural food is usually rich in vitamins, this may not be the case with formulated, energy-intensive feed. Vitamin deficiency may appear, therefore, mainly in intensive culture systems [147]. For instance, it might be hypothesized that a formulated complete feed using only or mainly ingredients of vegetable origin would be low in vitamin D, thus prompting an increased need for vitamin D supplementation; however, much more robust scientific evidence is needed to assess whether and how the use of feed ingredients of vegetable origin might increase the risk of nutritional problems in intensively farmed fish.
Beyond supporting zootechnical performance, fish nutrition strategies play critical roles in fish health, especially concerning immunocompetence and disease resistance within intensive and, to a lesser extent, semi-intensive farming systems. The role of nutrition is further emphasized by the fact that fish depend more heavily on nonspecific defence mechanisms than mammals [153; 154].
Plant oils stand out as the most likely candidates to partly substitute fish oils in fish feeds. Their total global production is around 100 times higher than that of fish oils, and a number of studies have shown that they can replace significant parts of the fish oil in diets for salmonids without compromising growth, feed efficiency or reproduction [155]. However, the replacement of fish oil may not be so straightforward, due to its unique content of long-chain PUFAs, especially EPA and DHA. Mixtures of vegetable oils have been prepared to simulate the total levels of saturated, monounsaturated and polyunsaturated fatty acids, especially omega-3, found in fish oil; such mixtures are able to replace fish oil for most of the growth period of several farmed fish [156]. Salmonids such as Atlantic salmon and rainbow trout currently account for over 66% of the total fish oil used in aquaculture. However, salmonids show a 'freshwater' fish pattern in their lipid metabolism, converting ALA to EPA and DHA. In addition, they are able to store fat at high concentrations in their fillets and have an extremely efficient protein-sparing capability, i.e. a highly efficient lipid utilization capability. These characteristics are peculiar to salmonids and, consequently, from a growth and performance viewpoint, fish oil replacement in these species could be easily and effectively implemented [157].
Fish meal is more difficult to replace, because it has the correct balance of all the amino acids required by fish along with other attractive features, including excellent palatability. In general, good-quality fishmeals used in aquaculture have protein levels higher than 66% of dry matter. The few vegetable ingredients comparable to fish meal are corn or wheat gluten and concentrated soy protein products: these have a similar amount of protein but with amino acid profiles different from fishmeal. Therefore, none of these ingredients individually is able to replace it completely, and it is necessary to use a mixture of ingredients to obtain the optimum amino acid profile, as sketched below [158]. In sea bream and sea bass, mixtures of plant proteins can successfully replace a large part (up to above 90%) of fish meal [137; 159; 160; 161].
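As a toy illustration of this mixing problem, the sketch below searches for the blend of three plant proteins whose amino acid profile comes closest to a fish meal target. All profiles are invented placeholder numbers; a real formulation would use measured profiles and many more amino acids:

```python
# Hypothetical blend search: which mix of plant proteins best matches a fish
# meal amino acid target? All numbers (g amino acid / 100 g protein) are
# made-up placeholders, not measured values; the method is the point.

from itertools import product

TARGET = {"lysine": 7.5, "methionine": 2.9, "threonine": 4.2}  # fish meal, assumed

INGREDIENTS = {  # assumed profiles
    "soy_concentrate": {"lysine": 6.2, "methionine": 1.3, "threonine": 3.8},
    "wheat_gluten":    {"lysine": 1.8, "methionine": 1.7, "threonine": 2.6},
    "corn_gluten":     {"lysine": 1.7, "methionine": 2.4, "threonine": 3.4},
}

def blend_error(weights) -> float:
    """Squared deviation of the blended profile from the target."""
    names = list(INGREDIENTS)
    return sum(
        (sum(w * INGREDIENTS[n][aa] for w, n in zip(weights, names)) - goal) ** 2
        for aa, goal in TARGET.items()
    )

# Coarse grid search over blend fractions summing to 1 (5% steps).
best = min(
    ((a / 20, b / 20, 1 - a / 20 - b / 20)
     for a, b in product(range(21), repeat=2) if a + b <= 20),
    key=blend_error,
)
print("best blend (soy, wheat, corn):", best, "error:", round(blend_error(best), 2))
```

Whatever the input numbers, the residual error at the optimum quantifies how far even the best blend remains from the fish meal profile, which is why supplementation or additional ingredients are typically needed.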
Serious concerns over pollutants in fish meal and fish oil make aquaculture feeds a food safety issue that requires considerable resources for monitoring and control. The growing attention towards the long-term risks associated with human exposure to contaminants, especially during development [1], has prompted attention towards aquaculture feed ingredients of vegetable origin that could be less liable to bioaccumulation of pollutants [7; 162]. Novel aquaculture feeds were considered mostly from the standpoint of animal nutrition, zootechnical performance and economic advantages; however, a One Health standpoint requires assessing their impact on the safety and nutritional quality of fish food.
The European project AquaMax [163] has represented the most comprehensive research effort toward a strategy to replace fish oil and fish meal in aquaculture feeds with vegetable ingredients, also considering nutritional and safety issues. AquaMax examined a wide range of vegetable ingredients for Atlantic salmon (Salmo salar), rainbow trout (Oncorhynchus mykiss), sea bream (Sparus aurata), common carp (Cyprinus carpio) and Indian major carps. In the diets investigated by the project, fish meal and fish oil were still present, but represented a limited portion of complete feeds, in the range of 5-23% and 5-8.4%, respectively: noticeably, the feed for common carp had no fish oil, while the feed for Indian carps was composed of vegetable ingredients only. Main vegetable ingredients in the different formulations included soy, wheat gluten and corn gluten as protein components, and rapeseed oil and linseed oil as lipid components. Rapeseed oil is a potential candidate for fish oil substitution because it has moderate levels of 18:2 (n-6) and 18:3 (n-3), and is rich in 18:1 (n-9). In addition, the 18:3 (n-3)/18:2 (n-6) ratio of 1:2 in rapeseed oil is regarded as beneficial to human health and not detrimental to fish health, provided EPA and DHA are also present from dietary fishmeal [164]. Linseed oil is also a potential candidate for fish oil replacement because it is rich in α-linolenic acid [18:3 (n-3)], the substrate for synthesis of (n-3) HUFAs, and also contains significant levels of 18:2 (n-6), thus having an 18:3 (n-3)/18:2 (n-6) ratio of 3-4:1. The development of any replacement should also take into account the maintenance of the PUFA content of fish, as one main nutritional benefit. In fact, the complete substitution of fish oil with either rapeseed oil or palm oil in feeds for Atlantic salmon affected muscle fatty acid composition; the concentrations of 16:0, 18:1 (n-9), 18:2 (n-6), total saturated fatty acids and total monoenoic fatty acids increased linearly with increasing dietary palm oil. The concentration of eicosapentaenoic acid (EPA) [20:5 (n-3)] was reduced significantly with increasing levels of dietary palm oil, but the concentration of DHA [22:6 (n-3)] was significantly reduced only in fish fed 100% palm oil [165; 166]. When Atlantic salmon was raised on diets with blends of linseed, rapeseed and fish oils, where vegetable oil represented >66% of the added dietary oil, considerable reductions of flesh concentrations of both 20:5 (n-3) and 22:6 (n-3) occurred; however, returning fish previously fed 100% rapeseed or linseed oil to a marine fish oil diet for a finishing period before harvest allowed flesh (n-3) HUFA concentrations to be restored to 80% of those in salmon fed fish oil throughout the seawater phase, although 18:2 (n-6) remained significantly higher [167]. When soybean, rapeseed or linseed oil, or a mixture of them, replaced up to 60% of fish oil in diets for sea bream and sea bass, the levels of dietary saturated fatty acids in the liver were comparable to those in fish fed the fish oil diet; in muscle, however, levels were reduced according to those in the diet. Linoleic and linolenic acids accumulated in the liver in proportion to their levels in the diet, suggesting a lower oxidation of these fatty acids in comparison to other C18 fatty acids.
The essential fatty acids EPA (20:5n-3), DHA (22:6n-3) and arachidonic acid (20:4n-6) were reduced in the liver at a similar rate, whereas DHA was preferentially retained in the muscle in comparison with the other fatty acids, denoting a higher oxidation, particularly of EPA, in the muscle [168]. No detrimental effect on growth and feed conversion ratio was observed in the rainbow trout as a result of fish oil substitution with canola and flaxseed oil. However, from the point of view of human nutrition, the reduction of EPA and DHA levels in fish fed the vegetable oil diets could constitute a drawback of plant oil replacement. The α-linolenic acid concentration in the muscle of rainbow trout fingerlings was lower than that in the vegetable oil diets; probably, a high degree of metabolism of this fatty acid contributed to the effect in fingerlings, through β-oxidation and/or desaturation and elongation [169]. Nevertheless, the available data indicate that the magnitude of PUFA reduction in fish flesh is small, albeit variable, and that the impact on fish nutritional value would be limited. The available studies show that a significant portion (60-75%) of dietary fish oil can be replaced by vegetable oils, preferably by a mixture of them, with just a limited impact on the PUFA content of fish muscle. An improper lipid profile in the diet may affect metabolism and endocrine regulation in fish, which in turn can affect the amount of fatty acids in tissues and their oxidation. Several studies indicate that soybean-based ingredients may promote a healthy lipid metabolism [170]. Notwithstanding some changes in lipid profiles, the novel feeds preserve a large portion of the nutritional value of fish, as far as PUFAs are concerned. A 150 g fillet from salmon fed 80% plant protein and 70% plant oil for 12 months contains 1.4 g of EPA + DHA; by comparison, the same fillet from a farmed salmon fed a conventional feed provides 1.9 g of DHA plus EPA [14]. Thus, the salmon fed the novel feeds contained 74% of the DHA+EPA of that fed the conventional, fish-based diet. PUFAs have a significant role in protecting health: an average intake of 250 mg/day of EPA plus DHA for healthy adults (as provided by 1-2 servings of oily fish per week) has a protective effect against cardiovascular risk, while an additional mean intake of 100-200 mg/day (overall 3-4 servings of oily fish per week) supports the formation of the placenta and the development of the brain and retina in the fetus [14; 21; 171]. The intake during pregnancy, breastfeeding and early infancy also positively influences growth and cognitive function in childhood [172; 173]. However, further research is needed on the possible modulation, if any, of the content of other relevant nutrients, e.g., iodine.
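The EPA+DHA figures above reduce to trivial arithmetic; the fillet contents and intake targets in this sketch are the ones quoted in the text:

```python
# Worked example of the EPA+DHA figures quoted above (values from the text).

FILLET_NOVEL = 1.4         # g EPA+DHA per 150 g fillet, 80% plant protein / 70% plant oil feed
FILLET_CONVENTIONAL = 1.9  # g EPA+DHA per 150 g fillet, conventional fish-based feed

print(f"Novel-feed fillet retains {FILLET_NOVEL / FILLET_CONVENTIONAL:.0%} "
      f"of the conventional EPA+DHA content")            # -> 74%

# Weekly EPA+DHA target for healthy adults: 250 mg/day
weekly_target_g = 0.250 * 7                               # 1.75 g/week
print(f"One novel-feed fillet covers {FILLET_NOVEL / weekly_target_g:.0%} "
      f"of the weekly 250 mg/day target")                 # -> 80%
```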
A most interesting aspect is the impact of novel feeds on the bioaccumulation of contaminants. The replacement of fish oil and fish meal with vegetable ingredients has led to significant reductions in fillets of farmed Atlantic salmon: concentrations of lipophilic pollutants (dioxins, PBDEs, HBCDs, PCBs and the "legacy" organochlorine pesticides such as DDT metabolites) fell by 51-82%, and those of the toxic elements Hg and As by 80-96%. As already noted, As in fish is usually mainly present as organic compounds of minimal or low toxicity [51]. No speciation was performed to assess whether this pattern of As deposition was maintained when using the novel feeds; nevertheless, the data clearly showed that feed ingredients of vegetable origin were associated with a much lower overall deposition rate of the most relevant contaminants. For instance, compared to fish fed the conventional fish oil diet, feeding salmon the novel vegetable oil diet for 55 weeks achieved a 4-fold reduction in dioxin-like compounds (PCDD + PCDF + DL-PCB levels), namely 0.5 vs. 2.0 ng TEQ/kg, and a 6-fold reduction in PBDEs, namely 0.5 ng/kg vs. 3 ng/kg; accordingly, the margin to the current maximum EU level for dioxin-like compounds (8 ng TEQ/kg) increased from 4-fold to 16-fold [174; 175]. On the other hand, some shortcomings of vegetable feeds deserve attention. The levels of polycyclic aromatic hydrocarbons (PAHs) increased significantly compared to conventional feed [176]. Polycyclic aromatic hydrocarbons are moderately persistent, and their relevance in foods is mostly due to their role as cooking by-products [177]; PAHs are human carcinogens, and therefore this finding should not be underestimated. Polycyclic aromatic hydrocarbons are normally not found in Atlantic salmon, but vegetable derivatives can accumulate PAHs from different sources, such as atmospheric deposition of contaminated dust and particulate matter on the plants and the processing for oil production [178]. Also, the use of soybean oil may modify the nutritional value of fish by modifying the lipid profile: the long-term intake of farmed Atlantic salmon fed soybean oil increased, in mice, the levels of linoleic acid in fat, insulin resistance and accumulation of fat in the liver [179].
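A quick arithmetic check of the dioxin figures and regulatory margins quoted above:

```python
# Dioxin-like compound levels in salmon after 55 weeks (values from the text,
# ng TEQ/kg) and the resulting margins to the EU maximum level.

CONVENTIONAL = 2.0   # fish oil diet
NOVEL = 0.5          # vegetable oil diet
EU_MAX = 8.0         # current EU maximum level for dioxin-like compounds

print(f"Reduction: {CONVENTIONAL / NOVEL:.0f}-fold")                         # 4-fold
print(f"Margin to EU max, conventional: {EU_MAX / CONVENTIONAL:.0f}-fold")   # 4-fold
print(f"Margin to EU max, novel feed:   {EU_MAX / NOVEL:.0f}-fold")          # 16-fold
```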
In conclusion, with only a modest reduction of PUFAs, vegetable feed ingredients may substantially improve the risk-to-benefit balance. However, the different vegetable ingredients should be further investigated for their liability to PAH contamination and for the potential impact of the fatty acid balance on nutritional value. Preventing PAH contamination and controlling the lipid profile might be considered among the quality criteria of novel aquaculture feeds.
[12] EFSA (2014). Scientific Opinion on the potential reduction of the currently authorized maximum zinc content in complete feed. The EFSA Journal, 12(5):3668.
[13] FAO (2012). The state of world fisheries and aquaculture.
[15] EFSA (2014). Scientific Opinion on health benefits of seafood (fish and shellfish) consumption in relation to health risks associated with exposure to methylmercury. The EFSA Journal, 12(7):3761.
On covering systems of integers
A covering system of the integers is a finite collection of modular residue classes $\{a_m \bmod{m}\}_{m \in S}$ whose union is all integers. Given a finite set $S$ of moduli, it is often difficult to tell whether there is a choice of residues modulo elements of $S$ covering the integers. Hough has shown that if the smallest modulus in $S$ is at least $10^{16}$, then there is none. However, the question of whether there is a covering of the integers with all odd moduli remains open. We consider multiplicative restrictions on the set of moduli to generalize Hough's negative solution to the minimum modulus problem. In particular, we find that every covering system of the integers has a modulus divisible by a prime number less than or equal to $19$. Hough and Nielsen have shown that every covering system has a modulus divisible by either $2$ or $3$.
Introduction
Covering systems were first introduced by Erdős in 1950 [5]. Romanoff had shown in [16] that the numbers that can be written as $2^k + p$ for some integer $k$ and prime $p$ have positive density, and asked whether the same is true for numbers without this property. Erdős described covering systems as a way to guarantee that for all $n$ in a specially constructed arithmetic progression, every number $n - 2^k$ has a small prime factor, settling the conjecture.
Since their introduction, covering systems have proved useful in number theory problems similar to those posed by Romanoff, as well as generalizations like the existence of digitally delicate numbers. See, for example, [4]. In addition to their number theory applications, they present interesting structural questions, and it is not always easy to tell whether a covering system with given properties exists. This paper will consider two well-known conjectures about covering systems: the Minimum Modulus Problem and the Odd Covering Problem. In particular, Hough's negative answer to the Minimum Modulus Problem [10] is adapted to prove a weaker form of the Odd Covering Problem: every covering system has a modulus divisible by a prime p ≤ 19. In [11], Hough and Nielsen significantly improve the methods of Hough's original paper, proving the stronger statement that every covering system contains a modulus divisible by either 2 or 3.
Covering Systems
A residue system $C$ is a collection of modular residue classes $\{a_m \bmod m\}_{m \in S}$, where $S \subset \mathbb{N}_{>1}$ is a finite collection of moduli; in this paper we do not consider repeated moduli. We call $C$ a covering system (or say that $C$ covers the integers, or is covering) if $\mathbb{Z} = \bigcup_{m \in S} (a_m \bmod m)$. The simplest covering system without repeated moduli is the system $C = \{0 \bmod 2,\ 0 \bmod 3,\ 1 \bmod 4,\ 1 \bmod 6,\ 11 \bmod 12\}$.
Each of the moduli in C divides 12, so we can check that C covers the integers by verifying that each residue class modulo 12 is a subset of one of the residue classes in C.
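The covering property of this small example can also be verified mechanically; a minimal Python sketch (such a check also runs unchanged in Sagemath, which this paper uses for its numerical work):

```python
# Verify the example covering system from the text by checking every
# residue class modulo Q = lcm of the moduli (here Q = 12).

C = [(0, 2), (0, 3), (1, 4), (1, 6), (11, 12)]  # pairs (a_m, m)
Q = 12

uncovered = [n for n in range(Q) if not any(n % m == a for (a, m) in C)]
print("covering system" if not uncovered
      else f"uncovered classes mod {Q}: {uncovered}")
# -> "covering system": every residue class mod 12 lies in some (a_m mod m)
```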
If $C$ does not cover the integers, there is a nonempty subset $R \subset \mathbb{Z}$ of uncovered integers; namely, $R$ will always be a union of residue classes modulo $Q$, where $Q = \operatorname{lcm}\{m : m \in S\}$.
Prior Work
Like many areas of research, the study of covering systems of integers has been largely guided by conjectures of Erdős.

Conjecture 1 (Minimum Modulus Problem). The minimum modulus of a covering system with distinct moduli can be arbitrarily large.

Swift [18], Churchhouse [3], Krukenberg [12], Choi [2], Morikawa [13], Gibson [8], and Nielsen [14] have steadily pushed up the largest known minimum modulus of a covering system since the problem was posed in 1950. The current best, due to Owens [15], is $m = 42$, and closely resembles Nielsen's covering with $m = 40$. Along the way, Krukenberg and Gibson developed notation that has proven invaluable in constructing complex covering systems, allowing Nielsen and Owens to describe covering systems with well over $10^{50}$ distinct moduli.
Intuitively, it would seem that when small moduli are disallowed in a residue system, their effect can be recovered using larger moduli, suggesting Erdős' conjecture was correct. However, the required number of moduli in a covering system grows quickly with the minimum modulus. Additionally, structural theorems due to Filaseta et al. in [7] imply that some natural approaches to finding a positive answer to the Minimum Modulus Problem are insufficient. Indeed, Hough proved in [10] that there is a maximum minimum modulus of coverings, $M \le 10^{16}$, settling Conjecture 1 in the negative. The proof is considered in more detail below.
Conjecture 2 (Odd Covering Problem). No covering system consists of only residue classes of odd moduli m > 1.
In contrast to the Minimum Modulus Problem, research on this conjecture has been characterized from early on by necessary conditions for the existence of an odd covering system. Berger et al. [1], Simpson and Zeilberger [17], and Guo and Sun [9] have described conditions for odd covers implying that odd squarefree covers must have many distinct prime factors dividing the moduli. The current best such result, due to Guo and Sun, implies there is no odd squarefree cover with fewer than 22 primes dividing its moduli.
Again, [7] was critical to our current understanding of this problem. Nielsen also argues convincingly that an odd covering system should not exist, by systematically trying to construct one and by posing some conservative hypotheticals in [14].
In [11], Nielsen and Hough adapted and optimized the techniques of [10] as well as a technique directly from [7] to prove that every covering system has a modulus that is either even or divisible by 3. Their result is stronger than the one proven in this paper (every covering system has a modulus divisible by a prime p ≤ 19), which more closely follows the techniques used in Hough's original paper.
A common feature of these conjectures is a focus on constraints on the set of moduli. Thus, we can treat covering as a property of an underlying superset M ⊃ S. We can ask of any set M ⊂ N >1 whether there is any finite subcollection of moduli S ⊂ M to which a choice of residues C can be made to cover the integers. If there is, we say M has a covering.
Additionally, to account for structural restrictions on $M$, it is often useful to consider a multiplicative base $P \subset \mathbb{N}_{>1}$, a collection of pairwise coprime natural numbers. $P$ is said to factorize an integer $m$ if $m$ can be written as a product of powers of elements of $P$, and to factorize a set $M$ if it factorizes all $m \in M$. It need not be the case that $P \subset M$, and in many cases $P$ is not unique with respect to $M$. Note, however, that since the elements of $P$ are pairwise coprime, the factorization of $m$ is unique with respect to $P$. For a given base element $q \in P$, if there is an integer $v$ such that $q^{v+1} \nmid m$ for all $m \in M$, we call the smallest such integer $v_q = v_q(M)$. Otherwise we say $v_q = \infty$.
If there is a covering system C whose moduli are factorized by P, we say P factorizes a covering. Now we are ready for a statement of this paper's main result.
Theorem 3. Let P = {p prime : p > q 0 } be a multiplicative base. If q 0 ≥ 19, then P does not factorize a covering.
Proof of Theorem 3
In [10], Hough solved Conjecture 1 in the negative with a proof fully incorporating the probabilistic method and relying explicitly on a theorem of probability, the Lovász Local Lemma. Hough's theorem is presented, then we trace the proof to justify a more general statement which may be stronger in the case M is factorized by a set other than the primes. Finally, this statement is applied to prove Theorem 3.
Theorem 4 (Hough). There are constants $P_0 \in \mathbb{N}$ and $\delta \in (0, 1)$ such that if $M \subset \mathbb{N}_{>1}$ satisfies the density hypothesis stated in [10], then $M$ does not have a covering.
Then, under these hypotheses, $M'$ does not have a covering; this calculation is carried out in Sagemath. We now follow Hough's proof to show how it can be applied more carefully, proving Theorem 3.
Overview
This proof proceeds in steps; at each step we incorporate moduli factorized by progressively larger base elements from P. We find sufficient conditions at each step to guarantee that no residue system drawing from the collection of moduli available at the next step forms a cover.
Let $C_i$ be a residue system corresponding to the $i$th step, and suppose $C_i$ has some favorable qualities and does not cover the integers. The favorable qualities are specified explicitly in (3) and (4). We consider $R^*_i$, a "good" subset of the uncovered set $R_i$ (nested so that $R^*_i \subset R^*_{i-1}$); after sieving out by the moduli from the $(i+1)$th step, the density of the uncovered portion in $R_i$ satisfies a uniform lower bound according to a natural weight on the elements of $R_i$. That is to say, at each step $\mu_{i+1}(\mathbb{Z}/Q_{i+1}\mathbb{Z}) \ge \pi_{\mathrm{good}}\, \mu_i(\mathbb{Z}/Q_i\mathbb{Z})$, where $\mu_i$ is a measure with support contained in $R_i$ and $\pi_{\mathrm{good}}$ is a parameter. A testable condition can then ensure that $C_{i+1}$ has those same favorable qualities. The Lovász Local Lemma is essential in proving that the good portion of the uncovered set is not covered in the transition from $i$ to $i+1$, and that our favorable qualities are preserved for the next step.
Preliminaries
Before getting much deeper, we will need more notation. Let $M \subset \mathbb{N}_{>1}$ be factorized by a base $P$. The steps guiding the proof are delineated by the decomposition of $P$ as a disjoint union $P = \bigcup_{j \ge 0} P_j$, where $M_i$ is the maximal subset of $M$ factorized by $\bigcup_{j=0}^{i} P_j$. It will also be useful to consider the collections of new factors at each step: $N_i$ is the maximal subset of $M$ factorized by $P_i$ alone.
Some variables depend on specific choices of residue systems. Typically residue classes $(a_m \bmod m) \in C$ will be treated in the worst case or else inherited by induction. We have intermediate LCMs $Q_i = \operatorname{lcm}\{m : m \in M_i\}$. We now discuss two closely related notions of what it means for a residue class to be "good", both defined relative to a positive real parameter $\lambda$. For explicit definitions, see (3) and (4).
An uncovered residue class $(r \bmod Q_i) \in R_i$, considered as a fiber in $\mathbb{Z}/Q_{i+1}\mathbb{Z}$, is $\lambda$-good if for each base element $q \in P_{i+1}$, the portion sieved out by moduli in $M_{i+1} \setminus M_i$ divisible by $q$ is small. In proving a set $M$ does not have a cover, we hope always to be able to find a large subset of $R_i$ to be $\lambda$-good. A sufficiently large subset $T \subset R_i$ whose fibers are all $\lambda$-good is then designated $R^*_i = T$. Showing that $M$ does not have a covering will depend on guaranteeing at each step that such a set exists. In the base case $R^*_0 = R_0$. A residue class $(r \bmod Q_i) \in R_i$ is $\lambda$-well-distributed if it meets two conditions. First, its fiber in $\mathbb{Z}/Q_{i+1}\mathbb{Z}$ is not entirely covered after the introduction of new factors, i.e. $r \cap R_{i+1}$ is not empty. Second, the fiber meets the uniformity property that for each new factor $n \in N_{i+1}$ the concentration of uncovered residues $(b \bmod n) \cap R_{i+1}$ intersecting with $r$ is not much bigger than average.
It turns out that a $\lambda$-good fiber is also $\lambda$-well-distributed. Additionally, having enough well-distributed fibers controls the growth of a related statistic measuring bias, which in turn will help ensure that many of the fibers $(r' \bmod Q_{i+1}) \in R^*_i \cap R_{i+1}$ are themselves $\lambda$-good. When we say a large subset of $R_i$, we are referring specifically to a measure on $\mathbb{Z}/Q_i\mathbb{Z}$, which we define inductively as one in a sequence of measures. Let $\pi_{\mathrm{good}}(i)$ be the proportion of good fibers in $R_i$; the measure $\mu_{i+1}$ redistributes the mass of $\mu_i$ over the fibers of $R_{i+1}$ lying over the good fibers of $R_i$, and $\mu_{i+1}(r) = 0$ otherwise. Note that there is a related parameter $\pi_{\mathrm{good}} \le \pi_{\mathrm{good}}(i)$ which will serve as a lower bound. As shown in Lemma 2 in [10], $\mu_{i+1}(\mathbb{Z}/Q_{i+1}\mathbb{Z}) = \pi_{\mathrm{good}}(i)\, \mu_i(\mathbb{Z}/Q_i\mathbb{Z})$ for all $i \ge 1$, and of course $\mu_0(\mathbb{Z}/Q_0\mathbb{Z}) = 1$. The primary advantage of the methods in this paper over those in [10] comes from a modified definition of the "bias statistic", which can more accurately account for the effect of considering a restricted base $P$.
Consider the function $l'_k(m)$, the number of $k$-tuples of natural numbers factorized by $P$ with LCM $m$. This contrasts with $l_k$ found in [10], which counts all $k$-tuples with LCM $m$; we have $l'_k(m) \le l_k(m)$ for all $m \in \mathbb{N}$. Powers of base elements $q \in P$ take the value $l'_k(q^v) = (v+1)^k - v^k$, and if $m$ and $n$ are coprime integers factorized by $P$ we have $l'_k(mn) = l'_k(m)\, l'_k(n)$. Thus $l'_k$ is multiplicative with respect to $P$. If $m \in \mathbb{N}$ is not factorized by $P$, then let $l'_k(m) = 0$. $l'_k$ is multiplicative if and only if $P$ consists entirely of powers of primes. Let $\beta'_k(i)$ be the bias statistic obtained from the $\beta_k$ of [10] by replacing $l_k$ with $l'_k$; the only difference between $\beta'_k$ and the $\beta_k$ found in [10] is the use of $l'_k$ rather than $l_k$. This smaller bias turns out to be useful in lowering the final value of $q_0$ in Theorem 3.
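The prime-power value of $l'_k$ stated above can be checked by brute force; a short Python sketch counting exponent tuples directly (each entry of a tuple with LCM $q^v$ must be a power of $q$ with exponent in $\{0, \dots, v\}$, and at least one exponent must equal $v$):

```python
# Brute-force check of l'_k(q^v) = (v+1)^k - v^k: count k-tuples of exponents
# in {0,...,v} whose maximum is exactly v (i.e., tuples of powers of q with
# lcm exactly q^v).

from itertools import product

def lk_prime_power(k: int, v: int) -> int:
    return sum(1 for exps in product(range(v + 1), repeat=k) if max(exps) == v)

for k in range(1, 5):
    for v in range(0, 5):
        assert lk_prime_power(k, v) == (v + 1) ** k - v ** k
print("l'_k(q^v) = (v+1)^k - v^k verified for small k, v")
```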
Inductive Criterion
Now we are ready for an inductive statement generalizing Theorem 2 of [10]. We will then fill in a few remaining specifics and examine a consequence of this generality.
Theorem 5 (Inductive Criterion). Let $M \subset \mathbb{N}_{>1}$ and let $i \ge 0$. Suppose $P$ factorizes $M$ and that the parameters $\pi_{\mathrm{good}}$, $\lambda$, and $\{P_j\}_{j=0}^{i+1}$ have been set. Suppose for all $j \le i$ that $C_j$ is a system of residues, and that for all $j < i$ the uncovered set $R^*_j \subset R^*_{j-1} \cap R_j$ has the measure lower bound described above. If condition (1) holds for some $k \in \mathbb{N}$, then such a set $R^*_i$ exists. Further, regardless of choices of residues for $C_{i+1}$, for all good residues $r \in R^*_i$, the fiber $(r \bmod Q_i) \cap R_{i+1}$ is nonempty, and the bias $\beta'_k(i+1)$ remains controlled for all $k \in \mathbb{N}$.
Base Case
In the base case, we record the initial bias, $\beta'_k(0)$. Let $\delta$ be an upper bound for the density of $\mathbb{Z} \setminus R_0$; the initial bias is then bounded in terms of $\delta$.
Inductive Lemma
If $C_{i+1}$ is a residue system and $(r \bmod Q_i) \in R_i$ is a residue class not covered by $C_i$, then for a given $n \in N_{i+1}$, the set $\mathbf{a}_{n,r} \subset (r \bmod Q_i)$ is the portion of the fiber over $r$ sieved out by $C_{i+1}$. Explicitly,
$$\mathbf{a}_{n,r} = (r \bmod Q_i) \cap \bigcup_{\substack{m \mid Q_i \\ mn \in M_{i+1}}} (a_{mn} \bmod mn).$$
The boldface notation follows [11] and is intended to invoke a collection of residues modulo $n$. The fiber $(r \bmod Q_i) \subset \mathbb{Z}/nQ_i\mathbb{Z}$ can be shifted by $-r$ and then projected onto $\mathbb{Z}/n\mathbb{Z}$; in this setting $\mathbf{a}_{n,r}$ resembles a collection of residues modulo $n$, rather than modulo $nQ_i$. Now we have the vocabulary to see why $\beta'_k(i)$ is useful as a uniform bound. Suppose $nQ_i$ is factorized by $P$. Then, following Lemma 4 of [10], $\beta'_k(i)$ controls the $k$th moment of the random variable $\mu_i(r)\,|\mathbf{a}_{n,r} \bmod nQ_i|$, where $n$ is treated as fixed and $r$ varies. Note that this bound is uniform with respect to $n \in N_{i+1}$ and with respect to the choice of residues in $C_{i+1}$. Let us explicitly define good and well-distributed fibers so that we can apply the bias bound and find good residue classes lying in them. First let us define $\omega'$, dependent on $P$ and related to the additive function $\omega$. For an integer $m$ factorized by $P$, $\omega'(m)$ is the number of distinct base elements $q \in P$ dividing $m$. This function is not defined if $m$ is not factorized by $P$.
A residue class $(r \bmod Q_i) \in R_i$ is $\lambda$-good if for each $q \in P_{i+1}$, the sieved-out subsets of its fiber in $\mathbb{Z}/Q_{i+1}\mathbb{Z}$ are under control; precisely, for all $q \in P_{i+1}$, the weighted sum $\sum_{n \in N_{i+1},\, q \mid n} |\mathbf{a}_{n,r} \bmod nQ_i|\, e^{\lambda \omega'(n)}/n$ stays below the threshold fixed in (3). We can also say $r$ is $\lambda$-well-distributed if $(r \bmod Q_i) \cap R_{i+1}$ is not empty and if the residues of new factors are bounded (i.e. evenly distributed) as they intersect with $r \cap R_{i+1}$, in the precise sense of (4). That $\lambda$-goodness implies $\lambda$-well-distributedness is a consequence of the Lovász Local Lemma; see Proposition 1 in [10]. (For background on the Lovász Local Lemma and its application to number theory see [6].) Then, as illustrated by the inequality proved in Lemma 5 of [10], the bias statistics $\beta'_k$ control the proportion of good fibers at the next step, for all $k \in \mathbb{N}$ and all $q \in P_{i+1}$. The requirement that $\pi_{\mathrm{good}}(i) \ge \pi_{\mathrm{good}}$ then yields an inequality equivalent to condition (1). By Proposition 3 of [10], the existence of $R^*_i$ implies that for all $k \in \mathbb{N}$ the growth of $\beta'_k(i)$ is controlled by $\pi_{\mathrm{good}}(i)$.
Calculation
In this section we select parameters to prove Theorem 3. Unless otherwise specified, all numerical estimates are calculated in Sagemath. Let $\pi_{\mathrm{good}} = 1/2$, let $e^\lambda = 2$, and let the blocks $P_i$ be delimited by the thresholds $e^{6+i}$ for all $i \ge 0$. We will focus on the third moment $k = 3$. Thus, for a given $i$, condition (1) is satisfied when the corresponding product of factors $1 + \frac{e^\lambda}{p-1}$ over $p \in P_{i+1}$ is suitably bounded. For the case $i > 8$, see [10] for proof. We directly verify the first few cases. For $\beta'_3(0)$ and $\beta'_3(1)$, see the exact statements verified in Sagemath as examples. Table 1 lists upper bounds for $\beta'_3(i)$ and lower bounds for $B_3(i)$, for $i \le 8$. Note that, since $\beta'_3(i) < B_3(i)$ for each $i$ by Theorem 5, the estimate for $\beta'_3$ holds at each step. It is then verified in [10] that the required inequality holds for $i > 8$, and thus $B_3$ grows faster than $\beta'_3$ as well. Therefore, for all $i \in \mathbb{N}$, condition (1) is met and the set of primes greater than 19 does not factorize a covering.
Linux vs. Windows: A Comparison of Two Widely Used Platforms
ABSTRACT: Current studies of operating systems usually compare Linux and Windows, both of which are widely used PC operating systems (OS). Windows is an eye-catching operating system, but it is not as safe as Linux. With growing worries about OS security, Linux has become well known among OS users for its security and efficiency. This paper deals with two of the most common operating systems (Linux and Windows), discusses the significance of the operating system in any device, and conducts a comparative study of Linux and Windows. We compared various characteristics of Windows and Linux that have been examined in previous research and conducted a survey for this purpose. The results of the survey relating to Windows and Linux are analyzed. The findings indicate that Linux is preferred where security is concerned, whereas Windows is preferred where user-friendliness is concerned.
Introduction
The operating system in a device can be thought of as a link between the needs of end users and the capabilities of the PC hardware. The OS is system software that not only mediates between the user and the PC hardware but also provides capabilities such as managing PC memory and files and protecting other system software. Among the many operating systems that provide these capabilities, Windows and Linux are two that continuously compete for control of the PC market. Each OS has shown huge growth within the consumer market. Microsoft launched its first version of Windows in 1985; Linux came online a few years later.
Windows was released in November 1985 as a graphical operating system shell for MS-DOS (Microsoft Disk Operating System). Windows is an operating system that prioritizes the PC's growing demands and the user's graphical interface. Linux is a well-known open-source operating system that runs on the Linux kernel; introduced in September 1991, Linux distributions are typically assembled from the kernel together with software such as dpkg and GNOME. Windows is the most well-known operating system on the market and benefits users in a variety of ways: because of its simple user interface, Windows is straightforward to comprehend. However, these days, users are increasingly switching from Windows to Linux. In recent years, Linux has gained ground in the IT sector, since Windows is not as secure as Linux and does not provide the same hardware flexibility. Since 1993, both Windows and Linux have attempted to gain control of the operating system market, and both have their own sets of benefits and drawbacks. Our research aims to determine the main differences between Windows and Linux and how these differences might affect the expected use by end users. There are several key areas in which we analyze Linux and Windows: cost, security, configurability, and user-friendliness, which we discuss in detail; the others are discussed briefly and are asked about and analyzed in our survey. We also examine specific instances where each of these two OS is the best fit for specific tasks. Our target audience is any PC user who wants to run an excellent system and get the best performance out of it. The rest of this paper is organized as follows: in Section II, a literature review and a short description of the relevant technologies to date is given. The methodology used for our examination is presented in Section III. The benchmark results are introduced in Section IV. Finally, a few closing comments and future work are given in Section V.
Classification of Past and Present Operating Systems:
This section has compiled various famous past and current Windows and Linux Operating System facilities beginning with Windows followed by Linux.
1) Windows: a) Windows 3.x:
Li R (2012), Yang N (2012) and Ma S (2012) state in their research that the Microsoft Windows 3.0 and 3.1 versions come with a number of features, such as VDDs ("Virtual Device Drivers"), which help share arbitrary devices among many DOS applications. Applications on these versions may run in either standard or protected mode, allowing them to access several megabytes of memory and participate in the virtual memory of the system. The address space remains constant during execution, and the partitioned memory provides layers of protection. The user interface in Windows 3.0 was also improved. We can see that Windows 3 is GUI-based and more beneficial for customers, for example by providing multitasking capabilities.
d) Windows XP:
Windows XP came with a completely new UI, including a redesigned Start menu and Windows Explorer, smooth media playback, and networking features. XP also arrived with various settings that offered better security than the software used with previous versions of Windows, as well as online help. Windows XP was exceptionally successful: even after the launch of its successor versions, people found it simple to use.
e) Windows Vista:
Vista introduced brand-new capabilities, from a new shell and UI to substantial technical changes, with a strong focus on security features. It came in different editions, and a more rigid license agreement accompanied Windows Vista to make the system more protective.
f) Windows 7:
Windows 7 was released with a progressive approach to the Windows series, offering characteristics such as application security and hardware security that are far superior to Windows Vista. Multi-touch support, a newly created shell, an updated taskbar, and HomeGroup, a home network administration tool, were all included in Windows 7.
g) Windows 8 / 8.1: Windows 8 was released with a completely new approach to the UI, making changes to the start screen, which includes large live tiles that are more useful for touch interaction and display continuously updated information, along with a new set of applications designed with touch-based devices in mind. Cloud-focused features and other online services like Microsoft OneDrive are included in Windows 8 and 8.1. It was also adapted as Windows RT for use on ARM-based devices.
h) Windows 10: Windows 10 introduces a new Start menu and the option to run Windows Store apps in windows, regardless of the overall screen size. Windows 10 comes with a plethora of useful features, such as multiple virtual desktops that allow us to move some of our windows to a separate workspace and set them aside. In the same way that Siri and Google Assistant work, Windows 10 includes Cortana as a voice assistant, along with a tablet mode supporting a variety of touch-oriented features out of the box. Windows 10 is faster than previous versions of Windows. The Start button, removed in Windows 8 and reintroduced in 8.1, returned, and Windows 10 serves as a common platform for mobile and desktop. It takes better advantage of multicore processing, SSDs, touch screens and other input methods.
In summary, Windows 10 (2015) brought multiple desktops, the Cortana voice assistant, and a central notification center for app notifications and quick actions.
2) Linux: a) Ubuntu:
Ubuntu is usually the distro of choice for brand-new users. It tends to focus on functionality and simplicity for the user who wants the system to "simply work". Releases come every 6 months and are also available on a live CD. Hardware support is typically quite good, except for some wireless devices.
b) Linux Mint:
Based upon Ubuntu. It concentrates on ease of use and availability of proprietary software such as codecs and the Flash plug-in. It uses the Cinnamon desktop, which looks like a Windows user interface.
c) Debian:
Debian is a completely free, non-commercial distribution of Linux. It holds to the original idea of 'Open Source' software. Debian focuses on stable releases that work without issues on all platforms and, therefore, will certainly not be the first to integrate the latest bells and whistles.
f) openSUSE:
It is a distro for newcomers who also wish to use Linux in a professional environment.
g) ArchLinux:
Arch Linux is a very customizable, non-commercial distribution for i686 and x86_64 computer systems. Only the packages that are actually needed are installed, minimizing the disk space used by unnecessary packages. This distro is not good for novices, since all configuration is done by editing configuration files, and it takes more setup time than most distros.
h) Kali Linux:
It is a Debian-derived Linux distribution that was originally designed for digital forensics and penetration testing. It is maintained and funded by Offensive Security.
Methodology
Our methodology is to compare both OS on the basis of the four key areas stated below.

Cost:

Windows must be purchased under a commercial license; Linux, on the other hand, is totally free. It is licensed under the GNU General Public License, which allows for the free circulation of the Linux source code. Anybody can change the code to suit their particular requirements as long as the code is never sold at a price. It is likewise essential to note that many organizations provide subscription-based support for Linux at a nominal charge. Red Hat, one of the many organizations that provide Linux support, offers Red Hat Enterprise Linux with a basic support subscription for $349, which includes web support, a two-business-day response time, and unlimited incidents. The hidden expense of Linux lies in its support and maintenance.
Microsoft likewise emphasizes that Windows Server reduces the Total Cost of Ownership (TCO). However, once a Linux server is properly installed and tailored to your requirements, it is significantly more cost-effective to maintain over the long haul. Microsoft continues to claim that closed source provides a faster and more viable response to security vulnerabilities or defects. In practice, however, major changes and updates are only released once a month after extensive programming and testing, and it is common for bugs and security problems to go unpatched for an extended period. There are numerous flaws in Microsoft's design that render it defenceless in the face of security threats. Many people mistakenly believe that Windows is the primary source of security vulnerabilities and malware simply because it controls the largest share of the overall market. With Windows' multiple hidden security flaws, it is easy to see why it is frequently compromised as technology advances.
Security
The security model for Linux traces its roots directly back to UNIX, which was the first multitasking, portable PC operating system. UNIX, from the start, separated administrator privileges from those of the typical user, something that Windows didn't implement until it became clear that people would actually be using the operating system with more than one user. The UNIX operating system also used the first encryption techniques employed on PCs and developed a framework that allowed computers to communicate with each other. Since the first computer networks connected these large machines together, it was important to guarantee security across the network and ensure that data packets got to their intended destinations. The Linux operating system has inherited all of its security measures and design from UNIX and has in many cases even added to them. The UNIX operating system splits control between typical users and one superuser, known as root. All users, by default when they log in to the system, start as ordinary users and can afterwards become the superuser if they know the correct password. This keeps a novice user from accidentally making a system-wide change that could bring the system to a grinding halt. It also protects an ordinary user from making any damaging changes to the system that could endanger its use by the other users on the system. In the UNIX framework, each file and process belongs to a specific user and a specific group. Each file has explicit permissions for the owner, group, and others that cover read, write and execute access. The root user can execute any file with execute permission and can read, write, and modify any file on the filesystem. This model guarantees that only the right people have access to files and commands.
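A minimal sketch of inspecting this owner/group/other model with Python's standard library (on a Unix-like system; the example path is illustrative):

```python
# Decode a Unix file mode into the familiar owner/group/other permission
# string described above, using only the standard library.

import os
import stat

def describe(path: str) -> str:
    """Render a file's permission bits as an rwxrwxrwx-style string."""
    mode = os.stat(path).st_mode
    return stat.filemode(mode)  # e.g. '-rw-r--r--': owner rw, group r, other r

print(describe("/etc/passwd"))  # typically '-rw-r--r--': only root may write
```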
Because major system alterations can only be accomplished as the superuser, it is extremely unlikely that someone without sufficient privileges could destroy a system. While an attacker can still exploit a small security flaw, with many people contributing to the code, such a flaw can be fixed in a matter of hours. By contrast, repairing a security hole in Windows may take much longer.
Configurability:
Windows systems are constrained by the need for a graphical interface to properly maintain and configure the OS. Rather than being able to easily add new security elements and mechanisms, Windows' monolithic design makes it hard to add a new security module to the existing system without a significant system overhaul. The security features that accompany the release of a specific version of Windows are the only features that will be available to the system administrator. In terms of user authentication, Windows forces users and clients to prove their identity to the system; while Linux-based authentication accommodates authentication from Windows-based clients, Windows will only authenticate Windows-based clients.
Linux is designed to be tailored to the client's specific requirements. Because Linux is an open-source program, anyone can download it, modify it, and then recompile it to meet their own needs. Likewise, since Linux isn't restricted by dependence on a graphical interface, users can typically customize programs to do precisely what they want them to do, and if they need more control, they can even dig into shell scripting to automate and further tailor specific tasks. Because of Linux's modular design, it doesn't always need to rely on specific proprietary software to accomplish tasks. Linux machines can be adapted to meet the particular requirements of every single client. While it may take some investment of time to configure and tailor Linux to your needs, the practically unlimited number of ways you can customize Linux greatly outweighs the time taken.
User-Friendliness:
No other operating system comes close to Windows in terms of ease of use. What more could you ask for than a simple "point and click" environment with a great GUI? While Windows isn't as safe as Linux right out of the box, it is far easier to set up and install. It is possible to set up, install, and configure Windows in a few hours. The majority of Windows' utility may be discovered through simple point-and-click experimentation, and the Windows help system does an excellent job of answering even the most trivial questions that a novice administrator could have. In Windows, every adjustable option is directly at your fingertips. The fact that Windows is really simple to use also implies that most people with regular PCs will install Windows, which means they will be less secure, less up to date, and provide fewer services than Linux. On the other hand, Windows can be the best option if you require an operating system that you won't have to hire a PC professional to manage or buy a book to learn.
On the other hand, Linux may appear a little more intimidating to the typical PC user and occasionally even to the PC administrator. While many Linux distributions now include a graphical user interface (GUI), such as GNOME or KDE, others do not and rely only on text-based commands. This puts the user in a position where he must learn how to navigate and configure a Linux machine entirely using text-based commands. Linux includes a built-in manual known as the man pages that allows users to see all of the features that each program or command offers. This manual covers a wide range of commands, including shell commands and even library functions for programming languages such as C. It is also likely that most Linux newcomers will require extensive documentation and experience before they can properly find their way around a Linux machine. This knowledge can be found in online Linux user communities, websites, and publications. To someone who is well informed about Linux, the system might be considered extremely simple to understand in many respects. It has also been widely reported that Linux is generally safer, better maintained, and better supported than Windows, because understanding the Linux operating system requires someone with above-average PC skills. Table 2 shows a brief comparison and description of Windows 10 and Linux Ubuntu in these key areas based on the above factors; in Linux, everything is community-based and freely or cheaply available.
Results and Discussion
One way to assess the results of this study is to consider the brief comparison between the two operating systems shown in Table 2.
Another way is to look at the number of attributes on which each system was rated better than the other, as displayed in Table 3 and Table 4. From this viewpoint, Linux is the winner, leading in five out of eight categories. Respondents felt, overall, that Linux is "somewhat better" than Windows in terms of security, reliability, flexibility, scalability, and total cost of ownership, while Windows is "somewhat better" in terms of ease of initial installation and ease of ongoing administration, and "much better" in terms of availability of skilled support staff. As noted earlier, however, there are differences in degree between how system administrators and IT managers rate each operating system.
These differences give the detailed responses of system administrators with experience on both operating systems, versus IT managers with responsibility for both operating systems, versus all survey respondents. The analysis weighs the overall merits of each operating system and gives guidelines for choosing between Linux and Windows based on these eight attributes.
In our survey, 54.5% of users were Windows-based, with the remainder on Linux or macOS. In terms of ease of installation, the largest share of respondents (45.5%) preferred Windows.
Fig. 4. Ease of Installation Graph
The largest share of respondents (27.3%) either had no opinion or felt that Windows and Linux are about the same in terms of ease of administration. In terms of skilled staff availability, most respondents (63.6%) preferred Windows. A majority rated Linux as much better in terms of scalability (54.5%), reliability (54.5%), security (90.9%), and total cost of ownership (63.6%), and the largest share (45.5%) rated Linux as much better in terms of flexibility. Most respondents in our survey (54.5%) were engineers or programmers. No single operating system is the right choice for every organization and every application; many organizations find that the best approach is to run multiple operating systems. Linux and Windows are only two choices among many, but the comparison here is aimed at organizations deciding between the two.
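A minimal sketch tallying the category winners from the survey figures quoted above (the percentages are taken directly from the text; treating the 27.3% no-opinion/about-the-same response for ease of administration as a tie is our reading):

```python
# Tally which OS "won" each of the eight surveyed categories.
# Percentages are the leading responses reported in the survey text.
results = {
    "ease of installation":    ("Windows", 45.5),
    "ease of administration":  ("tie",     27.3),  # no opinion / about the same
    "skilled staff":           ("Windows", 63.6),
    "scalability":             ("Linux",   54.5),
    "reliability":             ("Linux",   54.5),
    "security":                ("Linux",   90.9),
    "total cost of ownership": ("Linux",   63.6),
    "flexibility":             ("Linux",   45.5),
}

wins = {}
for category, (winner, pct) in results.items():
    wins.setdefault(winner, []).append(category)

for os_name, categories in wins.items():
    print(f"{os_name}: {len(categories)} categories -> {', '.join(categories)}")
# Linux leads in 5 of 8 categories, matching the conclusion in the text.
```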
In assessing Windows versus Linux as operating systems, our survey offers insight into the relative merits of each. IT administrators and managers can use this insight to make informed decisions about the operating system that best meets their organization's specific requirements and needs.
Conclusion
Linux and Windows will continue to compete in the operating system market. After comparing the aspects of both operating systems that are generally critical to the operation of a respectable system, Linux should be your choice if you are looking for a secure, cost-effective, stable system that allows the most configurability. Windows leads in user-friendliness and is ideal for a system that must be simple to manage and will not be performing critical tasks. In general, Linux provides more functionality and a more secure environment, both of which are critical to a successful system.
CD11c+ microglia promote white matter repair after ischemic stroke
Ischemic stroke leads to white matter damage and neurological deficits. However, the characteristics of white matter injury and repair after stroke are unclear. Additionally, the precise molecular communications between microglia and white matter repair during the stroke rehabilitation phase remain elusive. In this study, MRI DTI scans and immunofluorescence staining were performed to trace white matter and microglia in the mouse transient middle cerebral artery occlusion (tMCAO) stroke model. We found that white matter damage was most severe on Day 7 after ischemic stroke and then recovered gradually from Day 7 to Day 30. Parallel to white matter recovery, we observed that microglia centered around the damaged myelin sheath and swallowed myelin debris in the ischemic areas. Microglia of the ischemic hemisphere were then sorted by flow cytometry for RNA sequencing and subpopulation analysis. We found that CD11c+ microglia increased from Day 7 to Day 30, demonstrating high phagocytic capability and expression of myelin-supportive and lipid metabolism associated genes. The CD11c+ microglia population was partly depleted by stereotactic injection of rAAV2/6M-taCasp3 (rAAV2/6M-CMV-DIO-taCasp3-TEVp) into CD11c-cre mice. Selective depletion of CD11c+ microglia disrupted white matter repair, oligodendrocyte maturation, and functional recovery after stroke, as assessed by the Rotarod, Adhesive Removal, and Morris Water Maze tests. These findings suggest that spontaneous white matter repair occurs after ischemic stroke and that CD11c+ microglia play critical roles in this white matter restorative process.
INTRODUCTION
Ischemic stroke elicits robust white matter damage, leading to sensorimotor impairments, vascular dementia, and emotional disorders [1][2][3]. White matter is composed of axonal fibers wrapped and protected by the myelin sheath [4]. Clinically, most stroke patients spontaneously enter a recovery period several days after the acute stroke event, which manifests as neurological function improvement [5], although many stroke patients do not return to normal and remain functionally disabled. The potential for spontaneous improvement of function is associated with intrinsic mechanisms of axonal plasticity and regeneration [2,6]. However, little is known about white matter regeneration after ischemic stroke.
Currently, it is widely accepted that the presence of oligodendrocyte precursor cells (OPCs) and their migratory capacity are not the only limiting factors for white matter regeneration: lacking a supportive cellular environment, OPCs recruited to the injured sites cannot efficiently differentiate into mature oligodendrocytes [7,8]. Microglia are resident inspectors and scavengers of the central nervous system. Upon brain ischemia, microglia are triggered persistently and demonstrate tremendous heterogeneity that not only potentiates brain injury but also facilitates brain repair [9,10]. Accordingly, the abrupt depletion of microglial cells inhibits ischemic remyelination [11] and dysregulates neuronal network activity [12]. Whether there is a subpopulation of microglia that promotes long-term repair during stroke rehabilitation is unclear; investigations of microglia subpopulations with restorative features could therefore guide potential therapeutic strategies in stroke rehabilitation.
Studies on brain development and multiple sclerosis pointed out that microglia might exhibit white matter regenerative properties through swallowing myelin debris [7,13,14] and secreting neurotrophic factors [15,16]. For instance, the study shows that microglial triggering receptor expressed on myeloid cells-2 (TREM2) activation promotes myelin debris clearance and remyelination in multiple sclerosis [14]. However, currently, communications between microglia and white matter regeneration after ischemic stroke are still lacking.
To fill in the aforementioned gaps, we performed bulk RNA sequencing of microglia sorted from various time points in the white matter regeneration stage after stroke to comprehensively exhibit microglial repairing molecular profiles. Our findings suggest that the CD11c + microglia might be the potential microglial subpopulation promoting white matter recovery.
RESULTS
White matter injury recovered gradually from Day 7 to Day 30 after tMCAO

To evaluate the dynamic course of white matter injury after stroke in vivo, we performed MRI with a DTI sequence on the same mouse before and at different time points after tMCAO. Fractional anisotropy (FA) is a reliable parameter to quantify the white matter integrity of the external capsule (EC), a WM-enriched area [6]. We found that the FA values were significantly reduced in the EC on Day 7 post-tMCAO (0.65 ± 0.04), indicating broken integrity of the white matter structure. The FA values of the EC on the 14th (0.81 ± 0.02) and 30th (0.89 ± 0.01) day after tMCAO gradually increased but remained lower than those before tMCAO (1.02 ± 0.02) (Fig. 1A). In the cortex and striatum, however, FA is not positively associated with white matter integrity [17]; we therefore performed myelin basic protein (MBP) and non-phosphorylated neurofilament H (SMI32) staining to assess white matter lesions, quantified as the ratio of SMI32/MBP values, on the 1st, 3rd, 7th, 14th, and 30th day after tMCAO. Loss of white matter integrity and axonal demyelination usually present with decreased MBP and increased SMI32 staining. The ratio of SMI32/MBP in the peri-infarct areas (Fig. 1C) of the cortex and striatum increased from Day 1 post-tMCAO and reached its highest level on Day 7 (1.63 ± 0.09 in cortex and 1.26 ± 0.07 in striatum), then gradually decreased until Day 30 (1.15 ± 0.05 in cortex and 0.66 ± 0.06 in striatum), suggesting that WMI was most severe on Day 7 after tMCAO (Fig. 1D, E). Thus, white matter repair appears to proceed continuously from Day 7 to Day 30 after tMCAO.

Fig. 1 White matter injury recovered gradually from Day 7 after tMCAO. A Representative DTI axial views of the same mouse brain pre-tMCAO and on Days 7, 14, and 30 post-tMCAO. Green arrowheads point to the EC. B Quantification of FA value expressed as the ratio of ipsilateral values to the contralateral values. n = 6. *p < 0.05 and ***p < 0.001 versus pre-tMCAO group, ##p < 0.01 and ###p < 0.001 versus tMCAO 7 Day group. C Schematic diagram of the observation position on the cortex and the striatum of brain slices after tMCAO or sham operation. D The ratios of SMI32 to MBP staining intensity in the ipsilateral cortex and striatum chronologically. n = 6. ***p < 0.001 compared to sham group in the cortex, #p < 0.05, ##p < 0.01, and ###p < 0.001 compared to sham group in the striatum. E Immunofluorescence staining of SMI32 (red) and MBP (green) in the cortex and the striatum area of sham or 1st, 3rd, 7th, 14th, 30th day after tMCAO. Scale bars, 20 μm. For all error bars (B, D), mean values ± SEM.
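For reference, the two injury metrics used above can be written explicitly; this is simply a restatement of the definitions in the text, with FA normalized to the contralateral hemisphere:

$$
\mathrm{FA_{ratio}} = \frac{\mathrm{FA_{ipsilateral}}}{\mathrm{FA_{contralateral}}},
\qquad
\mathrm{WMI\ index} = \frac{I_{\mathrm{SMI32}}}{I_{\mathrm{MBP}}}
$$

where $I$ denotes staining intensity; a lower FA ratio and a higher SMI32/MBP ratio both indicate more severe white matter injury.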
Microglia wrapped injured white matter bundles and swallowed myelin debris

As the brain's resident immune cells, microglia can be activated and recruited to the lesion site within hours after ischemic stroke; however, their roles in WMI after stroke are still obscure. Through dual staining of MBP and TMEM119 on the 1st, 3rd, 7th, 14th, and 30th day after tMCAO, we observed that, in the peri-infarct areas of the cortex and striatum, microglia were rapidly activated from Day 1, largely accumulated and acquired an amoeboid morphology on Day 7, and gradually returned to a resting state by Day 30 after tMCAO (Fig. 2A). Interestingly, we observed that the proliferated microglia centered around the damaged myelin bundles in the striatum and surrounded the injured myelin sheath in the cortex (Fig. 2A). Quantitative analysis showed that the most obvious spatiotemporal microglia-myelin crosstalk happened on Day 7 and Day 14 post-tMCAO in the striatum (Fig. 2B, C). Additionally, to exclude the possibility that the increased contact between microglia and myelin bundles was merely secondary to microglia proliferation, we counted the percentage of myelin-contacting microglia, which also suggested that this delicate communication peaked from Day 7 (45.83 ± 1.26%) to Day 14 (43.52 ± 1.69%) post-tMCAO (Fig. 2D). Using dMBP and Iba-1 to mark degraded myelin debris and microglia separately, we found with confocal microscopy and three-dimensional (3D) reconstruction that microglia engulfed myelin debris on Day 7 after tMCAO (Fig. 2E, F); the reconstruction showed green myelin debris wrapped inside the red semitransparent microglia (Fig. 2F). Given this, we speculated that microglia communicate with damaged myelin after stroke.

Fig. 2 Microglia wrapped injured white matter bundles and swallowed myelin debris. A Representative images of MBP (green) and TMEM119 (red) immunostaining in the cortex and the striatum area of sham or 1st, 3rd, 7th, 14th, 30th day after tMCAO mouse brains. Scale bars, 20 μm. B Representative image of microglia centered around the damaged myelin bundles in the striatum area. Scale bar, 20 μm. C The number per field and D the percentage of microglia in contact with myelin bundles in the striatum of sham or 1st, 3rd, 7th, 14th, 30th day after tMCAO. n = 5. **p < 0.01 and ***p < 0.001 compared to sham group; values are shown as mean ± SEM. E Representative images of dMBP (green), Iba1 (red), and CD68 (orange) immunostaining on the 7th day after tMCAO. Scale bar, 100 μm. F Three-dimensional reconstruction of microglia engulfing myelin debris. dMBP, green; Iba1, red; CD68, gray. Scale bar, 10 μm.
Microglia in the repairing phase of stroke exhibited molecular signatures associated with the CD11c microglia population

We observed myelin repair and obvious spatiotemporal microglia-myelin crosstalk from Day 7 to Day 14 post-tMCAO; we therefore supposed that Day 7 to Day 14 post-tMCAO could be the critical phase for microglia-mediated myelin repair. We then performed transcriptional sequencing of microglia sorted by FACS from the sham group and from infarcted brain tissue on the 7th, 14th, and 30th day post-ischemia (Fig. 3A). From the total of 20,605 genes, we performed a time-sequence (sham, tMCAO 7, 14, and 30 Day) profile analysis using the Short Time-series Expression Miner (STEM) to search for molecular dynamic alterations in line with white matter repair. Detailed STEM analysis of the microarray data presented five clusters, including two significant clusters (p < 0.05), according to their expression patterns at different time points after tMCAO (Fig. 3B). One cluster consisting of 1221 genes was continually upregulated from Day 7 to Day 14 after tMCAO. Another cluster consisting of 1493 genes was kept at a higher level than the sham group from Day 7 to Day 14 after tMCAO. Subsequently, we performed Gene Ontology (GO) (Supplemental Fig. 1A) and Kyoto Encyclopedia of Genes and Genomes (KEGG) (Supplemental Fig. 1B) analyses of the 2714 genes of interest. Around 14% of genes were annotated to the "immune system process," "inflammatory response," and "immune response" processes (GO:0002376, GO:0006954, GO:000695), while around 4.2% of genes belonged to the "Phagosome" process. We clustered phagocytosis and lipid metabolism associated genes in Supplemental Fig. 1C, D. A subcluster of CD11c (Itgax)-related genes showed a profound rise and aroused our interest.
Generally, CD11c+ microglia increased during stroke rehabilitation

In the light of the RNA sequencing, we then characterized CD11c+ microglia using flow cytometry. We defined mouse microglia as CD45^int CD11b^+, distinct from peripheral infiltrated myeloid cells, which were CD45^high CD11b^+. The gating strategy and typical images of the CD11c percentage in the CD45^int CD11b^+ and CD45^high CD11b^+ populations are shown in Fig. 4A. In intact mice, brain CD11c+ cells were rare in both populations. Upon ischemic challenge, CD11c+ peripheral cells increased dramatically, peaking 3 days after ischemia and decreasing gradually thereafter. Interestingly, distinct from CD11c+ peripheral cells, CD11c+ microglia increased gradually and became the dominant CD11c+ population from Day 7 to Day 30 after ischemia (Fig. 4B, C). We assessed the phagocytic capacity of CD11c+ microglia by measuring CD68 intensity in CD11c+ and CD11c− microglia. At all stroke time points, CD68 intensity in CD11c− microglia was only half of that in CD11c+ microglia (Fig. 4D).
Depleting CD11c+ microglia exacerbated behavioral deficits of stroke mice

Having shown that CD11c+ microglia were the endogenous source of myelin-supportive genes, we then explored the functional consequence of removing CD11c+ microglia after stroke in vivo. CD11c-cre mice, which express Cre recombinase under the control of the CD11c promoter, were bilaterally stereotactically injected with either rAAV2/6M-DIO-taCasp3 (rAAV-taCasp3) or rAAV2/6M-DIO-EGFP (rAAV-control) (Fig. 6A). Behavior tests, AAV injection, and pathological detection schemes are summarized in Fig. 6B. As shown in Fig. 6C, EGFP+ cells were co-labeled with CD11c+ Iba1+ microglia in the infarcted area on the 7th day post-tMCAO, suggesting that CD11c+ microglia were efficiently infected by rAAV2/6M after stroke. The injection of rAAV-taCasp3 resulted in a significant reduction of CD11c+ microglia (17.1 ± 3.03%) in the ischemic brain 21 days after stroke compared with the rAAV-control group (51.15 ± 2.87%) by flow cytometry (Fig. 6D). We further compared CD11c-positive cells of the rAAV-taCasp3 group to the rAAV-control group in the infarcted area by CD11c immunostaining and observed a significant reduction of CD11c intensity in the rAAV-taCasp3 group (Supplemental Fig. 3A, B).

Fig. 6 Depleting CD11c+ microglia in the late phase of stroke impaired functional recovery. A A schematic illustration of the Cre-DIO system. B Experimental design for CD11c+ depleted mice and their controls. C Immunostaining of CD11c-cre mice with rAAV-DIO-EGFP injection 3 weeks before tMCAO and subjected to tMCAO 7 days. Scale bars, 100 μm. D Representative flow cytometric analysis for mice with rAAV-taCasp3 or rAAV-control injection 3 weeks before tMCAO and subjected to tMCAO 21 days; percentage of CD11c+ microglia and relative CD11c fluorescence intensity of microglia in the sham, rAAV-control, and rAAV-taCasp3 groups, respectively (n = 4 per group). E Stereotactic injections of rAAV did not affect the motor and sensory function of intact mice (n = 36). Depleting CD11c+ microglia deteriorated long-term motor and sensory deficits after tMCAO as assessed by the rotarod (F) and adhesive removal (G) tests. n = 17 for rAAV-control group and n = 11 for rAAV-taCasp3 group. #p < 0.05, ##p < 0.01, ###p < 0.001 compared between rAAV-control and rAAV-taCasp3 groups on each day. H Representative images showing swim paths in the Morris Water Maze. Cognitive functions including the training test for learning (I), probe test for memory (J), and speed (K) were evaluated in the Morris Water Maze. n = 17 for rAAV-control group and n = 11 for rAAV-taCasp3 group. *p < 0.05, **p < 0.01, ***p < 0.001; ns, not significant. Values are mean ± SEM.
Depleting CD11c+ microglia did not impair the behavioral performance of intact mice (Fig. 6E). Sensorimotor functions measured by the Rotarod and Adhesive Removal tests showed that persistent CD11c+ microglia deficiency exacerbated long-term neurological deficits, with decreased time spent on the rotating bar (Fig. 6F) and increased time to remove adhesive tapes from the impaired paws until 21 days after tMCAO (Fig. 6G). Spatial cognitive function, as revealed by the Morris Water Maze test, showed that mice with rAAV-taCasp3 injection spent more time finding the hidden platform than mice with rAAV-control injection during the learning phase (Fig. 6H, I). In the probe test, mice with rAAV-taCasp3 injection swam fewer times across the goal area and spent less time in the goal quadrant after the platform was removed, indicating impaired memory retention (Fig. 6J). However, there was no difference in swimming speed among the treatment groups (Fig. 6K). These results underscored the importance of endogenous CD11c+ microglia to functional recovery after stroke.
Depleting CD11c+ microglia impaired white matter recovery in the late phase of stroke

The infarct area (detected by NeuN staining) of the rAAV-control and rAAV-taCasp3 groups on the 21st day after tMCAO showed no significant difference (Supplemental Fig. 4A, B). The survival rate of rAAV-control mice was similar to that of rAAV-taCasp3 mice (Supplemental Fig. 4C). We then performed an MRI DTI scan of mice three weeks after tMCAO. The DTI images showed that CD11c-depleted mice presented heavier WMI in the EC area, with a lower FA value (Fig. 7A). Dual staining of MBP and SMI32 (Fig. 7B) also confirmed that the SMI32/MBP ratio was markedly higher in the rAAV-taCasp3 mice, both in the cortex (Fig. 7C) and in the striatum, 21 days after tMCAO (Fig. 7D). The myelin sheath detected by LFB staining was also in worse condition in the external capsule of the rAAV-taCasp3 group (Supplemental Fig. 5). By injecting 5-ethynyl-2'-deoxyuridine (EdU) to label newly generated cells during the myelin-repair phase from Day 7 to 14 after tMCAO, we compared the regeneration of oligodendrocytes in rAAV-taCasp3 and rAAV-control mice, using APC and Pdgfrα to label mature oligodendrocytes and pre-oligodendrocytes, respectively. The immunostaining images showed that the number of EdU+ cells was similar between the rAAV-taCasp3 and rAAV-control groups (Fig. 7E). However, there was a significant reduction in the number and percentage of EdU+ APC+ mature oligodendrocytes in the striatum of rAAV-taCasp3 mice (Fig. 7E), while the number and percentage of EdU+ Pdgfrα+ cells in rAAV-taCasp3 mice were slightly higher than in the rAAV-control group (Fig. 7F, G). These data supported the hypothesis that CD11c+ microglia promote oligodendrocyte maturation and facilitate white matter repair.
DISCUSSION
In this study, we found that ischemic white matter injury recovered from Day 7 to Day 30 after focal ischemic stroke. Parallel to white matter recovery, a unique CD11c+ microglia population increased and delivered signals necessary for white matter repair. Selective depletion of CD11c+ microglia using CD11c-cre mice and the rAAV2/6M-DIO-taCasp3 virus diminished oligodendrocyte maturation and white matter repair after stroke. We conclude that CD11c+ microglia play critical roles in this white matter restorative process.
After acute ischemic stroke, spontaneous repair of white matter damage occurs in and around the ischemic lesion [1,18], which contributes to neurological functional recovery [18,19]. DTI studies in rat tMCAO models demonstrated that FA values decreased continuously in the ipsilateral hemisphere from 24 h to 2 weeks post-stroke and then gradually recovered, from the ipsilateral corpus callosum to the external capsule, until 6 weeks post-stroke [6]. This FA change in the corpus callosum and EC was in line with myelination and fiber density [6,19]. Consistently, we longitudinally observed an FA decline during the first week that then turned to an increase in the ipsilateral EC from Day 7 to Day 30 in the mouse focal stroke model. In the cerebral cortex, however, FA values are partially impacted by astrocyte gliosis [17]; we therefore applied MBP and SMI32 staining to trace white matter injury in the cerebral cortex and striatum. In addition to the EC, white matter of the striatum and cortex in the peri-infarct area also spontaneously repaired from Day 7 after ischemic stroke. In line with our results, Cui et al. also reported, by pathological staining, that remyelination starts from Day 7 after tMCAO [20].
In the aging brain and in multiple sclerosis, myelin pieces released from aging or damaged myelin sheaths can subsequently be cleared by microglia [14,21]. However, few studies have addressed myelin clearance after ischemic stroke. In this study, we observed microglia centered around damaged myelin bundles after stroke, swallowing myelin debris. The spatiotemporal interaction between microglia and damaged white matter after ischemic stroke aroused our great interest in the correlation between microglia and white matter. By transcriptional analysis of sorted microglia, our study demonstrated that CD11c+ microglia continuously expand during stroke rehabilitation until Day 30, in keeping with ischemic white matter recovery. In the steady state of the adult mouse brain, only a small population of CD11c+ cells remains. In the case of brain damage or during neonatal development, CD11c+ cells may increase and take part in immunoregulation, regeneration, and remyelination [16,22,23]. CD11c+ cells increase in the brain parenchyma of ischemic stroke models, constituting a complex population derived from proliferated resident microglia and infiltrated dendritic cells (DCs) [24]. Our study demonstrated that infiltrated CD11c+ cells gradually decreased with time after stroke onset. However, the functional roles of CD11c+ microglia and their induction mechanisms during stroke rehabilitation are poorly understood. A previous study demonstrated that injecting diphtheria toxin into CD11c-iDTR mice failed to knock down CD11c+ cells and instead induced a dramatic increase of CD11c+ cells [16]. In our study, we applied an rAAV6 capsid variant with the triple Y731F/Y705F/T492V mutation, which facilitates microglia infection in vivo and in vitro [25]. This construct and the Cre-DIO system based on it have recently been verified in microglia by several animal studies [26,27]. Besides, peripheral infiltrated cells are scarce in the intact mouse brain; we injected the rAAV2/6M virus to infect microglia before stroke, which reduced possible rAAV2/6M infection of peripheral cells. Therefore, rAAV2/6M-DIO-taCasp3 mainly affected resident microglia rather than peripheral infiltrated cells after stroke. Our data demonstrated that bilateral injection of the rAAV2/6M-DIO-taCasp3 virus into CD11c-cre mice cut down more than half of CD11c+ microglia 21 days after tMCAO, which allowed us to verify the functional features of CD11c+ microglia during the late phase of stroke.
Our study demonstrated that CD11c+ microglia could be the dominant CD11c+ cells during the late phase of stroke, and that their depletion worsened behavioral outcomes in stroke mice and retarded white matter regeneration. We therefore investigated the possible mechanisms by which CD11c+ microglia promote remyelination after ischemic stroke. CD11c+ microglia were also found to rise continually in the remote degenerative thalamus at 28 days after primary tMCAO, although their functions were not addressed [28]. Disease-associated microglia (DAM) in neurodegenerative disease are CD11c positive and participate in lipid metabolism and phagocytic pathways; DAM are considered to have the potential to restrict neurodegeneration [23,29,30]. Oligodendrocyte-supportive genes such as Spp1 and Igf1 are abundant in CD11c+ microglia [31]. CD11c+ microglia express the majority of IGF-1, which is necessary for myelin development. Studies in the developing mouse brain demonstrated that partial depletion of IGF-1 in CD11c+ microglia led to a reduction in brain weight, decreased PLP, MBP, MAG, and MOG expression, and a higher frequency of less myelinated fibers in the corpus callosum [16,32]. In the mouse early postnatal brain, a unique Spp1+ Igf1+ microglia cluster residing specifically in the axon tracts of the pre-myelinated brain also expressed higher levels of the lysosomal markers Lamp1 and CD68 [33]. Genes expressed in development are often re-expressed upon brain injury. Consistently, in our study, the CD11c+ microglia population expressed significantly higher levels of the phagocytosis-associated genes CD68 and Axl than the CD11c− microglia population, indicating that the CD11c+ population may have higher capabilities for debris clearance. Spp1 has been shown to have pro-myelinative effects after ischemic white matter injury [31,34]. Colony stimulating factor 1 (Csf1) is a key regulator of myeloid lineage cells, and its deletion severely impairs white matter regeneration [35]. Phagocytosis of excessive lipid-rich myelin debris may cause cholesterol overload. To maintain cholesterol homeostasis, surplus cholesterol flows out through ABC transporters, especially ATP-binding cassette transporter A1 (ABCA1) and ATP-binding cassette transporter G1 (ABCG1). Cholesterol is insoluble in water, and its transport requires apolipoproteins, including apolipoprotein E (APOE) and apolipoprotein C1 (APOC1) [36,37]. Myelin clearance by phagocytes might lead to a compensatory increase in the expression of lipid transporters through cholesterol derivatives, which could be beneficial for cholesterol recycling and remyelination [38,39]. Taken as a whole, we predict that CD11c+ microglia possess phagocytic capabilities and express higher levels of myelin-supportive and lipid metabolism associated genes, which accelerates white matter repair after ischemic stroke.
To summarize, we showed that the microglial gene expression pattern in the late phase of stroke displayed a dramatic repairing signature. The CD11c+ microglia population expressed a characteristic myelinogenic gene profile, equipping it to play a fundamental role in white matter repair in the stroke rehabilitation stage.
Experimental animals
Male C57BL/6J mice and H11-Itgax-iCre mice were purchased from the Animal Model Center of Nanjing Medical University and GemPharmatech Co., Ltd. (Nanjing, Jiangsu, China), respectively. The mice were bred and housed in an air-conditioned, temperature-controlled, and humidity-controlled room under a 12-h light/dark cycle. For the animal studies, no sample-size estimation was performed to ensure adequate power to detect a prespecified effect size. All animal experiments were conducted under the guidelines of the Animal Use and Care Committee of Nanjing University.
Transient middle cerebral artery occlusion model (tMCAO) and treatment
The tMCAO model was established to mimic unilateral focal cerebral ischemic stroke as described previously [40]. After being anesthetized, a midline neck incision was made under a dissecting microscope, and then the right common carotid artery and external carotid artery (ECA) were isolated. A suture (Doccol Corporation, MA, USA) was introduced into a wedge-shaped incision on the ECA and further inserted to obstruct the middle cerebral artery. After one hour of occlusion, the filament was withdrawn for reperfusion. Cerebral blood flow (CBF) was evaluated by a Doppler laser. Mice would be excluded from the study if their relative CBF failed to reduce to 20% of baseline or showed no neurologic deficits after anesthesia recovery. Sham-treated mice were subjected to the same procedure without tMCAO. During the surgery, the body temperature of mice was maintained at 37 ± 0.5°C with a heating pad.
Magnetic resonance imaging (MRI)
Mice received MRI scans before and after tMCAO surgery on a 9.4 T Bruker MR system (BioSpec 94/20 USR, Bruker) with a 440 mT/m gradient set, an 86-mm volume transmit RF coil, and a single-channel surface head coil. After anesthetization with 2.5-3% isoflurane inhalation, mice were restrained on a mouse holder with tooth and ear bars for data acquisition and physiological parameter monitoring. Diffusion-weighted images (DWI) were acquired with a spin-echo echo-planar imaging (SE-EPI) sequence with the following parameters: two b-values (b = 0 and 1000 s/mm^2) along 30 non-collinear directions, δ = 4.1 ms, Δ = 10.3 ms; TR: 1500 ms, TE: 23.27 ms, FOV: 20 mm × 20 mm, matrix: 128 × 128, and 22 adjacent slices of 0.7 mm slice thickness. Imaging data were converted into NIfTI format with MRIcron. Diffusion data were post-processed using the FSL (v5.0.9) pipeline, including corrections for eddy currents and movement artifacts (eddy_correct), rotation of gradient directions according to the eddy current corrections (fdt_rotate_bvecs), brain mask extraction based on b0 images (bet), and FA map calculation by fitting a diffusion tensor model at each voxel (dtifit). EC areas were drawn using ITK-SNAP to extract the FA values.
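A minimal sketch of the post-processing chain described above, driving the named FSL tools via subprocess; the file names are hypothetical placeholders, and exact command-line arguments may vary between FSL versions:

```python
"""Sketch of the DTI post-processing pipeline described in the text.

Assumes FSL (eddy_correct, fdt_rotate_bvecs, bet, dtifit) is on PATH;
all file names below are hypothetical placeholders.
"""
import subprocess

def run(cmd: list[str]) -> None:
    print(">>", " ".join(cmd))
    subprocess.run(cmd, check=True)

# 1. Correct for eddy currents and head movement.
run(["eddy_correct", "dwi.nii.gz", "dwi_ecc.nii.gz", "0"])

# 2. Rotate gradient directions (bvecs) to match the eddy-current corrections.
run(["fdt_rotate_bvecs", "bvecs", "bvecs_rotated", "dwi_ecc.ecclog"])

# 3. Extract a brain mask from the b0 image.
run(["bet", "b0.nii.gz", "b0_brain.nii.gz", "-m"])

# 4. Fit a diffusion tensor at each voxel; FA maps are written as dti_FA.nii.gz.
run(["dtifit", "-k", "dwi_ecc.nii.gz", "-o", "dti",
     "-m", "b0_brain_mask.nii.gz", "-r", "bvecs_rotated", "-b", "bvals"])
```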
RNA-seq library construction, RNA sequencing, and bioinformatic analysis
The transcriptome sequencing and analysis were conducted by OE Biotech Co., Ltd. (Shanghai, China). Briefly, total RNA from FACS-sorted microglia was extracted using the mirVana miRNA Isolation Kit (Ambion, USA) following the manufacturer's protocol. Samples with an RNA Integrity Number (RIN) ≥ 7 were subjected to subsequent analysis. The libraries were constructed using the TruSeq Stranded mRNA LT Sample Prep Kit (Illumina, San Diego, CA, USA) according to the manufacturer's instructions. These libraries were sequenced on the Illumina sequencing platform (Illumina HiSeq X Ten), generating 150 bp paired-end reads. Raw data (raw reads) were processed using Trimmomatic; reads containing poly-N and low-quality reads were removed to obtain clean reads. The clean reads were mapped to the reference genome using HISAT2. The FPKM value of each gene was calculated using Cufflinks, and the read counts of each gene were obtained with htseq-count. DEGs were identified using the DESeq (2012) R package functions estimateSizeFactors and nbinomTest; p value < 0.05 and fold change > 2 were set as the thresholds for significantly differential expression. Hierarchical cluster analysis of DEGs was performed to explore gene expression patterns. The Short Time-series Expression Miner (STEM) software was provided by OE Biotech Co., Ltd., with p value < 0.05 considered significant. GO enrichment and KEGG pathway enrichment analyses were performed in R based on the hypergeometric distribution.

Fig. 7 Depleting CD11c+ microglia in the late phase of stroke exacerbated WMI. A Representative DTI axial views of mice with rAAV-control and rAAV-taCasp3 injection; quantification of FA value as the ratio of ipsilateral to contralateral values. n = 9 per group. B Immunofluorescence staining of SMI32 (red) and MBP (green) in the cortex and the striatum area of mice with rAAV-control and rAAV-taCasp3 injections. Scale bars, 100 μm. The ratios of SMI32 to MBP staining intensity in the injured cortex (C) and striatum (D). n = 5 per group. E Immunofluorescence staining of EdU (green) and APC (red) in the striatum area of mice with rAAV-control and rAAV-taCasp3 injections; the number of EdU single-positive cells and EdU/APC double-positive cells. n = 5 per group. Scale bars, 100 μm. F Immunofluorescence staining of EdU (green) and Pdgfrα (red) in the striatum area of mice with rAAV-control and rAAV-taCasp3 injections; EdU/Pdgfrα double-positive cells per field. n = 5 per group. Scale bars, 100 μm. G The percentage of EdU+ APC+ and EdU+ Pdgfrα+ cells among EdU-positive cells. n = 8 per group. *p < 0.05, **p < 0.01, ***p < 0.001 compared to rAAV-control group; ns, not significant. Values are mean ± SEM.
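As an illustration of the differential-expression thresholds described above (p < 0.05 and fold change > 2), here is a hypothetical pandas sketch; the table values and column names are illustrative assumptions, not the authors' actual pipeline:

```python
import pandas as pd

# Hypothetical DE results table: one row per gene with fold change and p value.
de = pd.DataFrame({
    "gene":        ["Itgax", "Spp1", "Igf1", "Abca1", "GeneX"],
    "fold_change": [4.2,      3.1,    2.5,    2.2,     1.3],
    "p_value":     [0.001,    0.004,  0.010,  0.030,   0.200],
})

# Apply the thresholds from the text: p < 0.05 and fold change > 2
# (for down-regulated genes one would symmetrically use fold change < 0.5).
degs = de[(de["p_value"] < 0.05) & (de["fold_change"] > 2)]
print(degs["gene"].tolist())   # ['Itgax', 'Spp1', 'Igf1', 'Abca1']
```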
Total RNA extraction and real-time quantitative PCR analysis
Total RNA was extracted from FACS-sorted microglia using TRIzol reagent (Invitrogen) according to the manufacturer's protocol, then reverse-transcribed into cDNA with a PrimeScript RT Reagent Kit (Takara, Japan). Real-time quantitative PCR (qPCR) of the cDNA was performed on a StepOnePlus PCR system (Applied Biosystems, Foster City, CA, USA) with an SYBR Green Kit (Applied Biosystems). Relative expression levels were compared using the ΔΔCt method, normalized to the endogenous control Gapdh. The primer sequences were as follows: Abca1
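For reference, the ΔΔCt calculation mentioned above follows the standard (Livak) form; this is general background, not a detail specific to this paper:

$$
\Delta C_t = C_t^{\text{target}} - C_t^{Gapdh},
\qquad
\Delta\Delta C_t = \Delta C_t^{\text{treated}} - \Delta C_t^{\text{control}},
\qquad
\text{relative expression} = 2^{-\Delta\Delta C_t}
$$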
Stereotactic injections of recombinant adeno-associated virus (rAAV)
Stereotactic injections of rAAV were performed following previously published protocols [26,27]. No randomization method was used for grouping. Briefly, male H11-Itgax-iCre mice were fixed in a stereotactic frame after being anesthetized with pentobarbital (20 mg/kg). Sterile ointment was applied to each eye to prevent corneal drying. After exposing the skull surface with a midline scalp incision, the target sites were defined using stereotactic coordinates. The injection doses and coordinates relative to bregma were as follows: lateral ventricle on the left side: anterior/posterior (AP) −0.22 mm, medial/lateral (ML) 1.00 mm, dorsal/ventral (DV) −2.50 mm, 2 μl; primary somatosensory barrel cortex (S1BF) on the right side: AP 0.02 mm, ML 3.00 mm, DV −2.00 mm. The Cre-dependent virus, either rAAV-DIO-taCasp3 (rAAV-CMV-DIO-taCasp3-TEVp-WPRE-hGH polyA, AAV2/6M, 5E+12 vg/ml) or rAAV-DIO-EGFP (rAAV-CMV-DIO-EGFP-WPRE-hGH, AAV2/6M, 5E+12 vg/ml), from BrainVTA (Wuhan, China), was bilaterally delivered into the mice. The glass microelectrode was left in place for 10 min at the end of each injection to avoid backflow. All mice were given behavioral tests before and 3 weeks after rAAV microinjection; mice with no differences in behavioral tests were included in subsequent experiments.
Behavioral tests
Behavioral tests were performed by an individual blinded to the experimental groups. The Rotarod test was performed to assess balance and sensorimotor coordination. Briefly, mice were made to run on a five-lane rotarod device (IITC Life Science), with the rod placed horizontally and accelerating from 4 rpm to 40 rpm over 5 min. Before the operation, the mice were trained for 3 days at 20 rpm, 30 rpm, and 40 rpm, in the morning and evening, with 15-min intervals. On the test day, the mice were placed on the rod in turn, and the average latency to fall over three rounds was recorded. The Adhesive Removal test was performed to assess tactile responses and sensorimotor asymmetries: two 2 × 3 mm adhesive tapes were applied to the forepaws, and tactile responses were measured by recording the time to remove the tape, with a maximum observation period of 120 s. Cognitive function was analyzed using the Morris Water Maze test following a previous protocol [41]. Briefly, during the learning test (Day 1 to Day 5), mice were placed into the pool from four locations, the time spent finding the submerged platform was recorded, and the mice were trained to stay on the platform for 30 s. Mice that did not find the platform were placed on it for 30 s, and 60 s was recorded as their latency. For the probe test on the 6th day, the platform was removed, and the mice were placed into the pool from two of the four locations on the diagonal and allowed to swim freely for 60 s. All data were recorded and analyzed using the ANY-maze system (Stoelting, USA).
EdU injections and staining
To label proliferating cells, animals were intraperitoneally injected with 5-ethynyl-2'-deoxyuridine (EdU, 5 mg/kg, Invitrogen) once a day from Day 7 after tMCAO for 7 consecutive days. EdU staining followed the manufacturer's instructions for the Click-iT EdU Imaging Kits (Invitrogen).
Statistical analysis
Statistical analyses were performed with SPSS 18.0 software (IBM Corp, Armonk, NY, USA). All data are presented as the mean ± standard error of the mean (SEM). Differences between two groups were analyzed using an unpaired Student's t test when the data were normally distributed. Differences among multiple groups were analyzed by one-way analysis of variance (ANOVA), for data with homogeneity of variance, followed by the Bonferroni post hoc test. Statistics involving time trends in different groups, such as behavior tests and flow cytometry, were analyzed with two-way ANOVA followed by the Bonferroni post hoc test. The variance was similar between groups being statistically compared. A statistically significant difference was established at p < 0.05.
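A minimal sketch of the comparisons described above, using standard SciPy/statsmodels routines on synthetic data; this is illustrative only, not the authors' code (they used SPSS):

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multitest import multipletests

rng = np.random.default_rng(0)
# Synthetic data for three hypothetical groups, n = 8 each.
sham, ctrl, casp3 = (rng.normal(mu, 1.0, size=8) for mu in (1.0, 1.2, 2.0))

# Two groups, normally distributed data: unpaired Student's t test.
t, p_two = stats.ttest_ind(ctrl, casp3)

# Multiple groups: one-way ANOVA, then Bonferroni-corrected pairwise t tests.
f, p_anova = stats.f_oneway(sham, ctrl, casp3)
pairwise = [stats.ttest_ind(a, b).pvalue
            for a, b in [(sham, ctrl), (sham, casp3), (ctrl, casp3)]]
reject, p_bonf, _, _ = multipletests(pairwise, alpha=0.05, method="bonferroni")

print(f"t test p = {p_two:.3f}; ANOVA p = {p_anova:.3f}")
print("Bonferroni-adjusted pairwise p values:", np.round(p_bonf, 3))
```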
DATA AVAILABILITY
The datasets used during the current study are available from the corresponding author on reasonable request. The microarray data of RNA-seq were deposited in the NCBI's Sequence Read Archive (SRA) with the BioProject accession number: PRJNA809756.
Make it personal to beat vaccine hesitancy
A randomized controlled trial found that vaccine hesitancy for COVID-19 is most effectively addressed with information on personal benefits of vaccination.
Microbial predators, ranging from phages to protists, prey on other microorganisms and are important regulators of food webs and nutrient cycles. Bacterial predators are less well understood than other microbial predators, in particular with regard to their ecological functions. In a new study, Hungate et al. measured the growth rates of predatory bacteria in different environments and showed that they respond to nutrient addition, which is indicative of their role in controlling the base of the food web.
There are facultative predatory bacteria, such as members of the genera Lysobacter and Cytophaga, that can hunt in 'packs', similar to wolves, but that can also assume a saprotrophic lifestyle. By contrast, obligate predatory bacteria, such as members of the genera Vampirovibrio and Bdellovibrio, rely on predation to survive, for example, by 'sucking' cytoplasm out of prey cells for the former or by 'eating' them from the inside for the latter. To assess the ecological roles of these bacterial predators, Hungate et al. studied 14 different soil sites, ranging from the Arctic to the tropics, and one temperate stream with quantitative stable isotope probing (qSIP).
The authors used 13C-labelled organic matter and/or 18O-labelled water to measure the incorporation of carbon and oxygen into DNA, respectively. 16S rRNA sequencing can then be used to identify the taxa. Overall, only 7% of the bacterial taxa detected were known predators and, of those, only 8% were obligate predators. However, despite accounting for only a minor part of the overall bacterial diversity, predatory bacteria across sites and experiments incorporated isotopic tracers at rates almost a quarter higher than non-predatory bacteria. The obligate predatory order Vampirovibrionales had the highest rates of isotope tracer incorporation, followed by the facultative predatory genus Lysobacter.
Overall, the growth rate of facultative predators was only slightly higher than that of non-predators, whereas obligate predators showed substantially higher growth rates and carbon uptake than non-predators. Furthermore, their growth was strongly stimulated by the addition of carbon substrates or combined carbon and nitrogen substrates, whereas non-predators and facultative predators showed only a small increase in growth when those nutrients were more abundant. Together, these results indicate that bacterial predators are important for top-down control of food webs and that added resources disproportionately flow to the predator trophic level after assimilation by heterotrophic bacteria, which then serve as prey.
In summary, the study shows that predatory bacteria are metabolically more active than other members of environmental microbiomes and that they have important roles in regulating nutrient fluxes in microbial food webs.
Make it personal to beat vaccine hesitancy
Vaccine hesitancy is multifactorial and difficult to address. In this randomized controlled trial, Freeman et al. recruited over 18,000 adults in the UK and assessed their willingness to be vaccinated against COVID-19. Around 10% of participants were strongly hesitant, questioning the safety and benefit of vaccination. Participants were randomized to 10 different information types, ranging from messaging highlighting the public benefit to messaging addressing concerns about the speed of vaccine development. The most effective message for reducing vaccine hesitancy was one explaining the personal benefits of vaccination, including prevention of serious illness and long-term health problems, although the effect was relatively small. Of note, the overall willingness of people to get vaccinated had increased substantially between a previous similar study, conducted in October 2020, and the current study, conducted in February 2021.
Tracking down polyclonal tuberculosis
Tuberculosis can be difficult to treat owing to drug resistance, persistence of bacteria and/or reinfection. All of these challenges can be compounded if a patient is infected with several different strains of Mycobacterium tuberculosis. To find out how common polyclonal infection is, Moreno-Molino et al. sequenced samples from 18 patients who had tuberculosis lesions surgically removed; 39% of these patients showed signs of polyclonal infection. By contrast, sequencing of sputum samples from 218 patients (all from Georgia, which has a high incidence of tuberculosis, as were the surgical patients) detected a polyclonal infection in only 5%, which suggests that sputum samples can be insufficient for the diagnosis of polyclonal infection and that reinfection might be more common than previously thought. Importantly, a large proportion of the surgical patients with polyclonal infection harboured strains with differing drug resistance, which will complicate drug treatment.
POCT for drug-resistant gonorrhoea
Sexually transmitted diseases are on the rise and drug resistance is increasing, in particular for Neisseria gonorrhoeae. Trick et al. present a point-of-care test (POCT) that can rapidly and cost-effectively detect N. gonorrhoeae and assess its sensitivity to the antibiotic ciprofloxacin. They used a magnetofluidic PCR platform with magnetic beads and cartridges, which can be linked to a smartphone application for result reporting. Within 15 minutes, the test amplifies opa (encoding a cell surface protein) and wild-type gyrA (encoding the ciprofloxacin target). Importantly, when used in sexual health clinics in Baltimore, USA, and Kampala, Uganda, on samples from over 200 patients, the test showed a specificity and sensitivity of over 97%.
How a virus enters without breaking
Viral glycoproteins are the flexible keys to the cell. By changing shape, they open the cell so that the virus can enter. Libersou et al. show how one glycoprotein helps the vesicular stomatitis virus (VSV) gain access to mammalian cells.
Ras proteins are restless, continually flitting from the cell membrane to the Golgi apparatus and back again. Misaki et al. reveal that the proteins enter recycling endosomes during the journey to the plasma membrane.
One mystery is why Ras proteins, which spur cell growth, differentiation, and survival, move so often. The cluttered cell interior has also made it difficult to discern how the proteins travel. Proteins heading for the Golgi might zip through the cytosol or hitchhike in endosomes. Some evidence suggests that they pass through recycling endosomes, whereas other studies indicate they shun the endocytic pathway altogether.
Misaki et al. used COS-1 cells, in which recycling endosomes are easier to observe because they gather in the so-called Golgi ring near the organelle, separate from early and late endosomes. The researchers found that Ras proteins do spend time in recycling endosomes, but only on the outbound leg from the Golgi to the cell membrane. The addition of two palmitoyl groups directs Ras to recycling endosomes, the team discovered.
The researchers think that an unidentified vesicle ferries the proteins from the Golgi to the recycling endosomes. Whether recycling endosomes deliver Ras proteins to the cell membrane or hand off their cargo to other carriers is unclear. Receptors such as the epidermal growth factor receptor also slip into recycling endosomes and might activate Ras proteins there. Misaki, R., et al. 2010. J. Cell Biol. doi:10.1083.

CLIP catches enzymes in the act

Proprotein convertases (PCs) are big shots in the body because they snip and turn on numerous hormones, receptors, adhesion molecules, and other crucial proteins. Mesnard and Constam describe a new technique to track the activity of some of these ubiquitous but hard-to-study enzymes.
The targets of the nine PCs range from insulin to the blood pressure regulator renin to several proteins implicated in Alzheimer's disease. Cancer cells and pathogens such as HIV often co-opt the enzymes for nefarious ends. For example, PCs turn on matrix metalloproteinases that clear away extracellular matrix and allow cancer cells to spread. But the enzymes' widespread distribution and overlapping functions have made it difficult to tease out what jobs individual enzymes perform.
To simplify the task, Mesnard and Constam devised a method to determine when and where PCs are working. They fused yellow and blue fluorescent proteins to create a biosensor they call CLIP. When PCs are absent, the two colors remain together, but active PCs cut CLIP and separate the colors. Researchers can thus track PC activity inside a cell and at its surface, or even in whole tissues. Mesnard and Constam used the approach to find out when two PCs, Pace4 and Furin, switch on in early mouse embryos; the enzymes were on the job before the blastocyst implanted, earlier than expected. The researchers say that CLIP could improve drug design, allowing scientists to pin down where certain PCs are functioning in diseases and monitor the effectiveness of inhibitors dispatched to those sites. Mesnard, D., and D.B. Constam. 2010. J. Cell Biol. doi:10.1083/jcb.201005026.

Yellow and blue have separated in this mouse blastocyst, indicating that two PCs are active.

How a virus enters without breaking

Glycoprotein contortions reshape the viral and cellular membranes, allowing them to fuse. VSV carries a surface glycoprotein called G. Previous work indicated that G has at least three configurations: a pre-fusion state, an intermediate form that interacts with the target cell membrane, and a post-fusion conformation. Using electron microscopy and tomography, Libersou et al. tracked G to determine how its alterations spur fusion of VSV particles.
Instead of going in tip first, the virus, which is shaped like a bullet, backs in with its flat base. Low pH triggers the viruses to fuse and trips G molecules into the post-fusion arrangement. However, fusion requires more than gymnastics by G: the researchers found that if they reduced the pH just enough so that the glycoproteins distorted into the post-fusion shape, the viral particles remained locked out.
G undergoes another transformation: glycoproteins not located on the viral base interconnect to form helical arrays. The arrays can also reshape membranes, the researchers found. Libersou et al. conclude that fusion requires two rearrangements of G. First, glycoproteins on the viral base remodel and establish a connection with the cell membrane that initiates fusion. Then, G molecules on the sides of the virus connect into helices that can deform the viral membrane to fully achieve fusion.
Ras proteins (green) huddle inside the ring-shaped Golgi apparatus (red).
VSV particles in the process of fusing with liposomes.
Reaching the Frail Elderly for the Diagnosis and Management of Atrial Fibrillation—REAFEL
BACKGROUND: Frail elderly patients risk suffering strokes if they do not receive timely anticoagulation to prevent stroke associated with atrial fibrillation (AF). Evaluation at the cardiology outpatient clinic can be cumbersome, as it often requires repeated visits. AIM: To develop and implement CardioShare, a shared-care model in which primary care leads patient management, using a compact Holter monitor device with asynchronous remote support from cardiologists. METHODS: CardioShare was developed in a feasibility phase and tested in a pragmatic cluster randomization trial (with primary care clinics as clusters), and its implementation potential was evaluated with an escalation test. Mixed methods were used to evaluate the impact of this complex intervention, comprising quantitative observations, semi-structured interviews, and workshops. RESULTS: Between February 2020 and December 2021, 314 patients (30% frail) were included, of whom 75% had AF diagnosed or excluded within 13 days; 80% in both groups avoided referral to cardiologists. Patients felt safe, and primary care clinicians were satisfied. In an escalation test, 58 primary care doctors evaluated 93 patients over three months, with remote support from four hospitals in the Capital Region of Denmark. CONCLUSIONS: CardioShare was successfully implemented for AF evaluation in primary care.
Introduction
The cumulative prevalence of AF in Denmark is 3.0% [1], increasing with age [2,3] and causing important morbidity [4]. Especially in the frail elderly, the risk of unplanned hospitalization and adverse outcomes is increased [5][6][7][8]. They have a particularly high risk of stroke, which can be prevented with timely anticoagulant medication [9][10][11]. Nonetheless, frail elderly patients with AF are less likely to be treated with anticoagulants and managed with rhythm control than other patients [12].
The detection of AF can be challenging, as approximately 30% of patients have no symptoms [13][14][15]. The overall objective of this project was to develop and implement a collaboration model for general practitioners (GPs) and hospital cardiologists that allows for the evaluation of frail elderly patients to timely diagnose atrial fibrillation (AF) with less burden on the patient.
In the conventional diagnostic workup of patients suspected to have AF paroxysms, the GP refers the patient to the hospital cardiologists for evaluation, including heart rhythm monitoring. Patients are usually scheduled for several visits to the outpatient clinic before the diagnosis is confirmed or rejected and treatment eventually initiated: (1) an initial consultation with a cardiologist, (2) receiving a Holter monitor and starting monitoring, (3) delivering the Holter back, and (4) a consultation with a cardiologist to be informed of the results and the next steps in the diagnostic process and treatment. This is cumbersome, especially for patients who are physically or mentally frail and who often need help with transportation. These patients tend not to be referred, or they terminate the diagnostic workup prematurely, still carrying the risk of subsequent stroke due to possibly undiagnosed AF. The REAching the Frail ELderly study (REAFEL) was designed to simplify the workup process for the diagnosis and management of suspected AF, making it possible to conduct Holter monitoring in a primary care setting and thus aiming to avoid the need for referral to the outpatient clinic. In REAFEL, GPs could use a simple continuous cardiac rhythm-monitoring device (Holter) and received remote support from cardiologists to interpret the results and to guide decisions on the need for referral for further cardiologist evaluation and on the choice of adequate anticoagulation therapy. In this assessment, the dialogue between the cardiologist and the GP can be crucial in making the correct decision, as the GP often knows the patient's risk of bleeding and treatment preferences. Additionally, in REAFEL, patients could have video consultations with the cardiologist in cooperation with their GP.
We hypothesized that GPs could initiate Holter monitoring and reach a conclusion adequately and safely, thus minimizing the need for referral to the cardiology outpatient clinic. We also wanted to explore whether patients and GPs would be confident in and satisfied with this workup.
The Danish company Cortrium ApS (Høje-Taastrup, Denmark) is one of the partners that received the grant from the Danish Innovation Foundation (grant 6153-00009B) and provided C3+ sensors, which are compact three-channel (Holter) sensors that can be connected to conventional electrodes and are easy to manage without special skills.
Materials and Methods
The entire REAFEL project followed the framework of a complex intervention [16], comprising a pilot study, a pragmatic cluster randomization study, and an escalation phase.
We developed a workup for the diagnosis and management of AF in a pilot study together with one GP clinic (six GPs), which received support from the department of cardiology of Bispebjerg and Frederiksberg Hospital between 2019 and 2020, and called it the CardioShare model [17].
As a reference for assessing the workup time from usual referral until report conclusion, we collected data on all referrals for Holter monitoring at the hospital's outpatient department during a randomly chosen month (January 2019) and compared it with the workup time achieved with the CardioShare model in the pilot study.
We explored how Holter monitoring would be used by GPs with and without remote support from a cardiologist in a pragmatic cluster randomization study, where all GPs received a compact three-channel Holter monitor (C3+, provided by Cortrium ApS, Høje-Taastrup, Denmark). GP clinics were then randomized as clusters, including all doctors in the clinic, to receive remote support from a cardiologist (CardioShare arm) or to decide at their own discretion which patients to monitor and refer further to the cardiology outpatient clinic (non-CardioShare arm).
The trial is registered at www.clinicaltrials.gov (NCT04162548) and meets the criteria for a Pragmatic Explanatory Continuum Indicator Summary (PRECIS) as follows [18,19]:
(1) Patients evaluated in the project are the same as those evaluated in conventional practice.
(2) Patient inclusion happened in connection with an ordinary consultation.
(3) The healthcare staff treating the patients in the project (GPs and cardiologists) were the same as those treating patients routinely.
(4) The resources used in the project (GPs, cardiologists, and nurses at the outpatient department, who evaluated the Holter monitor recordings) were the same as those used in the conventional process at the outpatient department.
(5) The proposed CardioShare model was intended to be as flexible as the conventional outpatient diagnostic evaluation.
(6) Follow-up of patients was the same as for conventional evaluation, except that study patients were asked to participate in study-specific interviews.
(7) The primary outcome was clinically relevant for the usual patient evaluation process. In this project, the primary outcome was the number of (frail) patients who completed the workup from the decision to monitor the heart rhythm to the initiation of adequate treatment based on a conclusive diagnosis.
(8) All the data collected in the project were analyzed as part of the primary outcome.
A Priori Calculation of the Population of Interest
We calculated whether Holter monitoring could be performed in a feasible number of frail patients to prevent strokes through timely initiation of anticoagulation therapy.
In 2015, stroke was the cause of 3.4% of all somatic hospital admissions for patients older than 85 years [20]. When AF was identified, the total recurrence of stroke in this group was 14% [21]. With a total number of admissions of 37,998 in 2014, we calculated that 1,808 recurrences of stroke were preventable with timely anticoagulation therapy. Since anticoagulation medicine can prevent one third of strokes related to AF in the elderly [7], in 2014 there were about 600 preventable cases of stroke in Denmark. We conducted a similar calculation for the Capital Region, where there are 2,300 admissions per year for stroke, a number that was stable over three consecutive years until 2016. In total, 1,300 of these patients were admitted to BHF, which corresponds to 350 patients suffering preventable strokes. With a 20% diagnostic identification of AF and an efficacy of anticoagulation of 3 out of 10 patients in the frail elderly group (CHA2DS2-VASc score of 4 or more), we calculated that if 10 frail patients underwent Holter monitoring weekly, we could monitor 350 patients within one year, which was a feasible process.
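A compact restatement of this feasibility arithmetic, using only the figures quoted above, is sketched below; the weekly throughput of 10 patients is the planning assumption stated in the text.

```python
# Feasibility arithmetic restated from the figures above (a sketch).
national_recurrences = 1808        # preventable stroke recurrences, 2014
prevented_fraction = 1 / 3         # anticoagulation prevents ~1/3 of AF-related strokes
print(round(national_recurrences * prevented_fraction))  # ~600 preventable strokes/year

target_patients = 350              # Capital Region monitoring target
weekly_throughput = 10             # frail patients monitored per week (planned)
weeks_needed = target_patients / weekly_throughput       # 35 weeks
print(weeks_needed <= 52)          # True -> achievable within one year
```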
The CardioShare Model
From February 2020 to December 2021, nine GP clinics were gradually included as clusters and randomized 1:1 into a CardioShare/intervention group and a non-CardioShare/control group. The inclusion of the clinics was based on outreach meetings and networking. Each clinic was randomized as a cluster, regardless of the number of GPs working in the clinic, and all patients were managed by the clinic according to the allocated randomization. All Holter reports were evaluated and approved by the hospital's cardiologists, who could review recordings on demand.
To receive remote support from cardiologists, GPs in the intervention (CardioShare) group used the national platform for cross-sector communication, MedCom (www.medcom.dk/standarder, accessed on 13 September 2023). The cardiologists confirmed the indication for monitoring or asked for further information before the GP initiated Holter monitoring. The length of monitoring varied from one to seven days, according to the cardiologist's advice. Subsequently, all recordings were uploaded to a cloud-based analysis platform provided by Cortrium. The cardiologist received a notification from Cortrium when recordings were analyzed and a report was available, then forwarded the report to the GP along with advice on how to manage the patient according to the findings. The GP informed the patients if further evaluation or treatment at the cardiology department was recommended. With patient consent, the GP would then send a message to the cardiologist, who could schedule an appointment with the patient without an additional referral. The GP could also ask the cardiologist to communicate directly with the patient by means of a video or phone consultation. Figure 1 provides an overview of the process.
The procedure for monitoring was the same for GPs in the control (non-CardioShare) group, but they initiated monitoring as they deemed necessary and solely decided patient management, including whether to refer the patient to a cardiologist for further evaluation. In this group, the monitoring reports from the cloud-based analysis platform were sent to the GPs without advice on patient management, whereas in the intervention group GPs were guided remotely by a cardiologist, i.e., using the CardioShare model. Observation parameters recorded were: (1) CardioShare: number of days from the GP's first message to a cardiologist until results and recommendations were sent to the GP by a cardiologist; (2) non-CardioShare: number of days from initiation of recording until results and recommendations were sent to the GP by a cardiologist. To standardize our statistics, all delayed cases as well as those with more than one Holter recording were not considered. Our results were compared to the cohort of patients (n = 117) who attended the outpatient department at Bispebjerg-Frederiksberg Hospital throughout January 2019 (pilot project phase) for initiation of Holter monitoring, as a reference for usual care, measuring the number of days from the date of referral to results and recommendations given to the patient and/or sent to the patient's GP (whichever was earliest).
Inclusion criteria: Patients with suspected AF and patients with known AF where the GP needed information for heart rate control or to assess AF burden. We encouraged GPs to include patients they considered frail elderly, but inclusion was liberal in order to explore other patient groups considered relevant from a primary care perspective. To define "frailty", we used a modified version of the simple criteria described for the age group at the greatest risk of stroke from suspected AF (Table 1) [22][23][24]. We classified indications for Holter monitoring according to Table 2.
Table 1. Frailty criteria.
Frail elderly are aged ≥ 65 years and match at least one of the following:
(1) Need help with transportation to the hospital's outpatient department.
(2) Need help with personal hygiene.
(3) Have reduced ability to walk (estimated to take > 5 s to walk 5 m).
(4) Have unintentionally lost weight within the past year.
(5) Have cognitive difficulties (dementia, memory deficits, aphasia, etc.).
(6) Have social problems due to abuse, ethnic background, language, etc.

Exclusion criteria: Patients younger than 18 years and patients who did not provide written informed consent were excluded from the project. Likewise, patients with suspicion of severe arrhythmia who needed to be referred to a cardiologist and who could attend the outpatient department were excluded.
Patient Reported Experiences
To explore participating patients' experiences of the diagnostic process, all patients who completed a Holter monitor recording were contacted by phone and asked to answer a short survey. When patients did not complete the survey, we recorded the reason as: no contact data available in the patient's electronic journal, the patient did not answer the phone after at least three attempts, the patient did not want to participate, or the patient did not understand the survey's questions due to language issues or impaired cognitive function.
To further explore the data collected through the survey, we randomly selected 20% of the patients among those who completed it and invited them to participate in a semi-structured interview concerning their experiences during the diagnostic process. To minimize a biased selection of participants for these interviews, we compiled a list of all cases sorted by inclusion date and invited every fifth patient on the list for an interview. If the invitation was not successful, the next patient on the list was invited. The semi-structured interviews were conducted according to a guide that included a list of the topics we wanted to explore during the interview (Figure A1). Throughout every conversation, the interviewers encouraged the respondents to speak freely and only asked questions to elaborate within the topics of interest. The conversations were recorded and evaluated continuously until sufficient data were collected, i.e., data saturation, when interviews of two consecutive patients did not provide any new aspects within the fields of topics. Interview recordings were subsequently transcribed verbatim. Two researchers independently coded all interviews before analyzing the patients' narrations and organized the large amount of text into a concise summary of key results, as described by Erlingsson and Brysiewicz [25].
Participating Health Care Professionals' Experiences
In addition to patient interviews, we also invited GPs and nurses to participate in interviews to explore their experiences with the CardioShare model. We conducted 14 interviews with a total of eight GPs and six nurses from both clusters. All conversations were recorded and analyzed using the same method as described for the patient interviews.
CardioShare versus Non-CardioShare
From February 2020 to December 2021, a total of 314 patient cases were evaluated; 122 were randomized to CardioShare and 111 to non-CardioShare. The remaining 81 cases were enrolled during the same period but by GPs at our feasibility phase clinic, resulting in a non-randomized mixture of CardioShare and non-CardioShare patients. Predominantly female patients participated in the study, across all ages (Figure 2), with a median age of 62.5 years. In total, 85 patients (27%) were classified as "frail", 61 female and 24 male. In twelve cases, the Holter recording was repeated, and one patient underwent a third recording. User failures occurred in eight cases, mostly due to a lack of routine among the staff in the start-up phase of the project. In four cases, the cardiologists recommended a follow-up recording based on the patient's symptoms. The patient who underwent a third recording did so due to new symptoms six months after the initial recordings.
Delays in the workup process were recorded in 64 cases: 6 caused by the patient, 33 by the GP, and 25 by the cardiologist. Among a total of 323 recordings during the cluster-randomized main study, 29 patients (9.0%) were diagnosed with AF. The majority, 226 recordings (70.0%), showed normal sinus rhythm, whilst 58 (18.0%) showed other abnormalities, two were inconclusive, and eight recordings failed. In 257 cases (79.6%), no further involvement of the cardiologist was needed, and the GP took over further treatment if necessary/as advised. In total, 46 recordings (14.2%) resulted in the patient's referral to a cardiologist for further diagnostics. In 16 cases (5.0%), cardiologists advised performing another Holter recording, 8 of which were due to failed recordings (Figure 3).
Of the 58 recordings that showed abnormalities other than AF, 33 (56.9%) resulted in referral to a cardiologist for further diagnostics, whilst 21 cases (36.2%) could be handled by the GP without further involvement of other specialists. In three cases, the cardiologists advised conducting another recording, and in one case, the protocol was violated (non-CardioShare cluster) as the patient's GP was advised to change medication.
During the pilot phase (sample data collected throughout January 2019), a total of 117 patients were referred to the hospital's outpatient department to undergo Holter recording. The duration of the diagnostic process was measured from the date of referral to the date when the cardiologist gave the results to either the patient's GP or the patient themselves. The mean duration of the diagnostic process for AF was 63 days, ranging from 10 to 224 days, and 75% of all patients were diagnosed within 78 days (Figure 4, bottom). In comparison, in the REAFEL study, the duration of the workup for diagnosing AF was shorter for all patients (ranging from 1 to 25 days), as 75% of all patients had AF diagnosed/ruled out within 13 days. Patients who were included during the main study by our pilot phase clinic are shown separately (Figure 4, top).
Escalation Test
After the cluster-randomized study ended (December 2021), a workshop was held to which more GPs and cardiologists from other hospitals in the Capital Region of Denmark were invited to discuss the experiences and how CardioShare could be implemented for diagnosis and management of AF. There was a consensus that GPs could initiate Holter monitoring and interpret the results, provided that the cardiologists continued to respond within a short time to requests from GPs with doubts about the indication for Holter monitoring, the results of the report, or the subsequent evaluation and/or treatment of the patients. Four cardiology sites with hospital cardiologists and two privately practicing cardiologists used the CardioShare model to support 58 GPs distributed across 13 clinics in the escalation test from February to April 2022. In total, 93 patients were evaluated by their GPs using C3+ sensors and supported through the CardioShare model. The Cardiology Council approved this procedure as an additional standard-of-care method in the Capital Region of Denmark.
Survey
In total, 160 patients were asked to participate in the survey, of whom 102 (63.8%) accepted. The patients were asked how safe they felt from the time they saw their GP due to their symptoms until the outcome of the Holter recording was explained to them. In total, 90.2% of the patients felt either safe (25.5%) or very safe (64.7%), while 3.9% of the patients felt unsafe or very unsafe (Figure 5). A total of 81 patients (79.4%) were either satisfied (27.5%) or very satisfied (52.0%) with the organization of the diagnostic process. A few did not answer this question (4.9%), and 7.8% of the patients were unsatisfied (4.9%) or very unsatisfied (2.9%) (Figure 6). We were aware that patients tend to answer less critically when responding to a survey's closed questions regarding overall satisfaction with the workup process. Therefore, we addressed this specifically as an open question during the interviews.
The C3+ sensor was evaluated as user-friendly by 84.3% of the patients, whilst 4.9% had bad or very bad experiences with it. We found no association between this rating and the patients' level of satisfaction with the overall process. Hence, we can assume that the reasons for patients not being satisfied with the diagnostic evaluation process may be related to the doctor-patient relationship or to organizational matters.
When asked how worried the patients were about the symptoms for which they requested help from their GP, in total, 38 patients (37.3%) were either worried or very worried at that time. This number was halved at the end of the diagnostic process (15.7% were worried or very worried), while 25 patients (24.5%) stated the same level of concern before and after the diagnostic evaluation. In-depth analysis showed that 8 of these 25 patients answered "not worried at all" in both cases, whilst another five patients (4.9%) answered "worried", and three (2.9%) answered "very worried" to both questions. Five patients (4.9%) indicated a higher level of concern after the diagnostic process was completed.
Semi-Structured Interviews with Patients
Patients' narratives were analyzed as shown in Table A1, and after eleven interviews, we could identify three main themes (i.e., hypotheses to be confirmed or disproved): (1) patients experience a high level of professionalism and quality despite not being seen by a cardiologist; (2) the C3-Holter device is user friendly and easy to handle; and (3) being diagnosed by their own GP makes patients feel safe and secure. The main results within these three main themes are as follows.
Main Theme 1: Patients Experience a High Level of Professionalism and Quality despite Not Being Seen by a Cardiologist
For most of the patients, it does not make any difference if they are only seen by their own GP and not a cardiology specialist. They are confident knowing that their GP and the hospital's cardiologists work together: "I assume there is no big difference between the approaches. It's just the consultation that's different. I wasn't nervous at all about quality." "I was told that cardiologists saw my recording. So, it actually was the specialists who did analyze it." "I think it is just fine that cardiologists give advice [on further treatment] to my GP".
One of the patients would have preferred to be seen by a cardiologist right away. Some patients mentioned that it is important for them to be referred by their GP to a specialist right away if their symptoms could be caused by a severe condition.
"It depends on the severeness.If I'm having heart pain or feeling that my heart skips beats, I'd prefer to be referred to cardiologists right away.I'm sure my doctor would do so".
Main Theme 2: The C3-Holter Device Is User Friendly and Easy to Handle
The device being very small, most patients forgot about it while wearing it. They had no problem wearing their clothes, and they did not change their everyday activities. The vast majority of them did not take the device off at all. Those who did, e.g., to take a shower, had no problems reattaching it. "I did the usual things-went for a walk every day, played some golf." "It was only three days, so I let it be and just washed around it." "They explained how to replace the patches, and I am comfortable to do so".
"After a shower, I took it back on without any problem".
The most common problem reported by the patients was discomfort caused by itchy skin reactions to the standard electrodes.
"I didn't pay any attention to the device before my skin became itchy"."When you've been wearing it for three days, your skin has become very itchy, and you look forward to get rid of the device".Some patients appreciated the possibility to set a marker on the recording whenever they experienced any symptoms."I quite liked the possibility to push the button when I was experiencing symptoms"."When I felt any kind of symptoms, there was this button I just could push to set a marker on the recording".One factor that makes the patients feel safe and secure is the continuity they experience with their own GP.
"My doctor knows me and my medical history.A disease course can be broad, and specialists only look at things from their own discipline's perspective".Some patients perceive it to be time saving and easier than attending the outpatient department at the hospital."There's always a lot of waiting time.I prefer to see my GP"."I think it was nice, that the recording could be started right away"."When I need to go to the hospital I do need transportation help.I must be ready 90 min before, and then there's waiting time and everything.I think it's quite difficult".
Except for the few patients who reported technical or procedural problems during the diagnostic process, the patients in general were very comfortable with only seeing their GP and not being referred to the outpatient department at the hospital, but it is important to inform the patients also if there are technical problems. "I hadn't got any feedback from my GP. That was strange." "I called them after two weeks, because I hadn't heard anything yet. I was told that the recording was empty."
Semi-Structured Interviews with GPs and Nurses
Through analysis of the healthcare professionals' narratives, three main themes (i.e., hypotheses to be confirmed or disproved) could be identified: (1) the C3+ sensor is user friendly and easy to use in a GP setting; (2) cooperation between GP and cardiologist is helpful and appreciated; and (3) patients benefit from Holter monitoring at their GP's. In this section, we present the main results within these themes.
The C3+ Sensor Is User Friendly and Easy to Use in a GP's Setting
Overall, the GPs were positive about using Holter monitoring as a diagnostic tool. "I think it was quite straightforward. I just had to get used to how to send the correspondence [to the cardiologist] and what to write, but it was actually quite quick to get it sorted out." "There are so many sentences, with all sorts of things, and there are actually only four things you have to do, so you could actually [...] make a much simpler guide."
Cooperation between GP and Cardiologist Is Helpful and Appreciated
GPs who used CardioShare had to get used to communicating with the cardiologists instead of issuing a referral, but they were very satisfied with the outcome of this communication.
"It has been, in other words, completely impeccable in every way"."When you have to interpret the results, it's extremely nice to have a cardiologist backing you up".Furthermore, the GPs valued the possibility of conducting Holter monitoring at their clinic without referral to a cardiologist."Being able to do a Holter monitoring is like a function we have wanted, it has been difficult to access, [. ..], and suddenly it has moved very close"."I think it belongs excellently here, at this level where you have professional cooperation along the way".
Patients Benefit from Holter Monitoring at Their GP's
GPs were satisfied with being able to perform Holter monitoring on frail patients who otherwise would not have been monitored, but also on non-frail patients.
"We have a huge population, we have two nursing homes for which we are primary doctors"."We've actually used it primarily for the unconclusive [patient group], where we thought they're not quite candidates for a hospital cardiology evaluation process"."Health can be measured in many ways; whether they don't get blood clots, or whether they are reassured in their fear of illness".
Discussion
The main result of this study is that the CardioShare model was applicable for the diagnosis and management of suspected AF in primary care clinics. The process was shorter than in usual care and required fewer referrals to the cardiology outpatient clinic, thus being less of a burden for the patients and reducing the capacity needed in the hospital's outpatient department. Another study also showed that teleconsultation may increase access to cardiology evaluation in underserved populations, while reducing in-person referrals [26].
The aim of the study was to facilitate the management of frail elderly patients under suspicion of AF, and it is remarkable that two of the first 20 patients included by one of the GPs needed a pacemaker. These patients were considered frail and had refused to be referred to the cardiology outpatient department for Holter monitoring, but accepted having it performed by their GP, and subsequently having a pacemaker implanted. Nevertheless, only 30% of all included patients fulfilled our frailty criteria (Table 1). This is a weakness inherent in pragmatic implementation research. At the same time, the liberal inclusion that was accepted in the study reflects the true need of GPs to evaluate other patients than the frail elderly. In 80% of all cases, GPs were able to complete the evaluation for suspected arrhythmia, which shows a great potential for avoiding referral of low-risk patients to the outpatient department, thus requiring fewer resources from the cardiology specialists. Nevertheless, no formal resource analysis was performed, which is a major limitation of this study.
Likewise, although patients reported a high level of satisfaction in the survey and following in-depth interviews, the study does not include standard quality-of-life questionnaires, which poses a limitation for comparisons with similar studies.
Although most patients were very satisfied with the diagnostic process being led by their GP, it cannot be concluded that the patients would have been less satisfied by being referred to the hospital's outpatient department, since this study did not include any analysis of patient-reported experiences in a usual care group. Despite this limitation, it is remarkable that patients described the relationship with their GP as crucial to whether they would prefer being referred to the hospital's outpatient department. When implementing the CardioShare model, it is important to ensure that the patients receive feedback on the results. Some patients did not receive any feedback and were disappointed about it.
In general, the patients found the Holter device easy to use, e.g., not having any problems reattaching it to their chest after taking it off. Some patients experienced itching and chest discomfort while wearing it. Since the device uses the same regular electrodes as standard Holter devices, the general advice is to choose the best-tolerated electrodes. A folder with FAQs and a troubleshooting guide for patients could be useful, and adequate training of all involved healthcare professionals is crucial, as most problems with utilization of the Holter device seemed to be start-up difficulties that could be avoided.
From the GPs' perspective, Holter monitoring with an easy-to-use compact device and support from hospital-based cardiologists was appreciated. They were very satisfied with the CardioShare cross-sectoral cooperation model, as they felt that their patients experienced an easy and safe approach throughout the workup process. They reported a great need for this kind of collaboration, as they find it cumbersome to gain access to Holter monitoring at the hospital's outpatient department. This is similar to findings in observational studies showing that teleconsultation support enhances primary care physicians' confidence, capacity, and satisfaction rates [27,28].
Conclusions
The CardioShare model is feasible, with Holter monitoring performed at the GP's office or at the patient's home. It makes it possible to reach frail patients suspected of AF, and the fact that 70% of all included patients were non-frail shows the true need of GPs to evaluate a greater variety of patients for AF. The model can be implemented for the diagnosis and management of suspected AF in primary care and has the potential to reduce the number of referrals to cardiologists and thus save resources.
Figure 2. Distribution of age among all participants by gender; n = 314.
Figure 4. Duration of diagnostic process from decision to results (days). Top: cluster randomization study (pilot clinic that continued to include patients during the same period of time; non-CardioShare clinics; and CardioShare clinics; overall n = 314 patients). Bottom: comparison cohort of patients managed as usual in the hospital's outpatient department; n = 117 patients. Please be aware of the scale difference between the top and bottom parts of the figure.
Table 2. Criteria for diagnostic evaluation.
Compound Fault Feature Extraction of Rolling Bearing Acoustic Signals Based on AVMD-IMVO-MCKD
The compound fault acoustic signal of a rolling bearing is characterized by a varying noise mixture, a low signal-to-noise ratio (SNR), and nonlinearity, which makes it difficult to separate and extract the fault features of compound fault signals exactly. A fault feature extraction approach, named AVMD-IMVO-MCKD, is proposed; it combines adaptive variational mode decomposition (AVMD) with maximum correlated kurtosis deconvolution (MCKD) whose parameters are selected by an improved multiverse optimization (IMVO) algorithm. To select the parameters of VMD and MCKD adaptively, an adaptive optimization method for VMD is proposed, and an improved multiverse optimization (IMVO) algorithm is proposed to determine the parameters of MCKD. Firstly, the acoustic signal of bearing compound faults is decomposed by AVMD into several modal components, and the optimal modal component is selected as the reconstruction signal based on the minimum information entropy of the modal components. Secondly, IMVO is used to select the parameters of MCKD, and MCKD processing is then performed on the reconstructed signal. Finally, the compound fault features of the bearing are extracted from the envelope spectrum. Both the simulation analysis and the analysis of experimental acoustic signal data show that the proposed approach can efficiently extract the fault features of bearing compound faults from acoustic signals.
Introduction
The rolling bearing is an important part of a rotating machine, and its failure will cause the failure of other parts of the rotating equipment [1]. Investigation results show that rolling bearing failures account for 30% of all machine failures [2]. In actual industrial sites, the working conditions of bearings are very harsh, involving noise interference, high temperatures, and strong corrosion. In practical engineering applications, rolling bearings often present two or more kinds of faults simultaneously [3]. As a consequence, the compound fault diagnosis of the rolling bearing is more difficult [4][5][6][7].
In recent years, the work of detecting and diagnosing bearing faults based on vibration signal methods has achieved rich results. At the same time, the diagnosis of bearing fault defects by acoustic methods has also received more attention [8,9]. A method based on the combination of acoustic emission and vibration signal analysis [10,11] overcomes the shortcoming that vibration analysis cannot detect the early faults of bearings. Yoon et al. [12] proposed acoustic emission technology to realize bearing fault diagnosis at low speeds. The acoustic signal is acquired by a non-contact measurement method, which not only makes it easy to collect but also is not limited by the working environment conditions of bearings [13]. Many scholars have made great achievements in the detection of rolling bearing failures by acoustic methods. In the field of deep learning, Liu et al. [14] used the short-time Fourier transform (STFT) to preprocess the acoustic signal, and the fault features of the rolling bearing were extracted through stacked sparse autoencoders. Zhang et al. [15] successfully applied deep graph convolutional networks to bearing acoustic fault diagnosis. Kim et al. [16] studied bearing fault diagnosis combining acoustic emission technology with a convolutional neural network. There have been many achievements in the deep learning research area of acoustic signal analysis methods for rolling bearings [17,18]. In machine acoustics, research based on machine learning has also made progress [19][20][21]. Verma et al. [22] designed a fault condition monitoring system for reciprocating air compressors and realized fault diagnosis by analyzing the collected acoustic signals. In addition, acoustic signal analysis has also achieved many research results in the field of blind source separation (BSS) [23,24]. Qin et al. [25] introduced an acoustic signal compound fault detection method based on the combination of improved empirical wavelet transform (IEWT) and singular value decomposition (SVD). All in all, the fault diagnosis of bearings based on acoustic signals has received more and more attention from scholars.
During the research on compound fault diagnosis of bearings, adaptive decomposition methods have become more popular [26]. Jena et al. [27] analyzed the vibration and acoustic signals of gear faults and bearing faults in order to verify the effectiveness of the filter system. Amarnath and Krishna [28] applied the empirical mode decomposition (EMD) algorithm to the fault detection of the sound signals of bearings and helical gears. Aiming at the problems of the EMD endpoint effect and modal mixing, many scholars have proposed improved EMD methods, for example the ensemble empirical mode decomposition (EEMD) method [29], the complete ensemble empirical mode decomposition with adaptive noise (CEEMDAN) method [30], and the improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN) method [31]. Yang et al. [32] used an EEMD algorithm to preprocess acoustic signals. Lei et al. [33] applied the CEEMDAN algorithm to study bearing faults. In order to address the modal mixing defect, Zheng et al. [34] presented a novel partly EEMD technique. In addition, researchers have developed a new adaptive decomposition algorithm, variational mode decomposition (VMD) [35], which has a rigorous mathematical derivation. Unlike the approaches mentioned above, the penalty factor and mode number of VMD must be predetermined. For the parameter determination of algorithms such as VMD and maximum correlated kurtosis deconvolution (MCKD), an optimization algorithm can be considered first [36], but the computational cost of optimization algorithms is high; a second option is to study other criteria for determining the parameters. For example, Li et al. [37] proposed information entropy to determine the parameters of VMD. Yin et al. [38] introduced a new method based on relative entropy, also known as Kullback-Leibler (KL) divergence, to optimize VMD, named KL-VMD. Research [39,40] shows that VMD cannot completely extract fault feature information under strong noise interference. Compared with the feature extraction of single faults, compound fault diagnosis is more difficult, especially for the sound signal analysis of compound faults. Therefore, a new method combining AVMD with MCKD is proposed, which takes full advantage of VMD and MCKD to diagnose the compound faults of rolling bearings. Firstly, we propose the adaptive optimization of VMD based on relative entropy and information entropy and screen out the penalty factor and mode number according to the minimum sum of the two entropies; the optimal VMD component containing abundant fault information is then screened out by the minimum value of information entropy. Secondly, the order of shift and the filter length of MCKD are adaptively determined by the improved multiverse optimization (IMVO) algorithm, and MCKD processing is performed on the selected intrinsic mode functions (IMFs). Finally, the envelope spectrum of the optimal modal component processed by MCKD is obtained, and the fault frequency is extracted from the envelope spectrum. The main contributions of this work are as follows: (1) A new method based on AVMD-IMVO-MCKD is proposed to extract the features of compound faults accurately. (2) Adaptive variational mode decomposition is proposed. Compared with traditional signal decomposition methods, the proposed method not only overcomes the shortcoming of selecting parameters based on empirical knowledge but also alleviates the mode aliasing problem; compared with existing optimization algorithms, the computational complexity is reduced. (3) The IMVO algorithm is proposed to optimize the MCKD parameters. The computational efficiency of the IMVO algorithm is improved, and the important parameters of MCKD are determined adaptively. (4) The proposed method has been successfully applied to the field of compound fault acoustic signals, which has certain reference value for further research on acoustic signal diagnosis methods.
The remainder of the paper is structured as follows. The theory of the proposed method is introduced in Section 2: Sections 2.1 and 2.2 briefly introduce VMD theory and detail the VMD parameter optimization based on relative entropy and information entropy; Section 2.3 details the theory of the improved multiverse optimization (IMVO) algorithm; Sections 2.4 and 2.5 present a brief introduction to MCKD theory and the detailed steps by which the IMVO algorithm adaptively determines the MCKD parameters. In Section 3, the feasibility of the proposed approach is proven by simulation data analysis, and the analysis results of the proposed method and the comparison methods are shown. Section 4 describes the analysis of the measured data, the application of the proposed algorithm, and the analysis results of the comparison methods. Section 5 concludes the paper.
VMD Theory
Decomposing an input signal into several IMFs is the core function of VMD. VMD requires two key parameters, the penalty factor and the number of modes, to be set before analyzing the input signal. Once the parameters are determined, the input signal is decomposed by VMD into multiple IMFs. VMD addresses the EMD algorithm's lack of a rigorous mathematical foundation and its modal mixing problem. The theory of VMD is not introduced in detail here because of space limitations; the detailed mathematical derivation can be found in the literature [35].
Determination of the Penalty Factor and Modal Number of VMD
For the crucial parameter-presetting problem of VMD, a new parameter optimization method based on the minimum sum of information entropy and relative entropy is proposed in order to select the parameters of VMD adaptively. The minimum sum of information entropy and relative entropy is used as the indicator for terminating the iteration:

$$E = H(Q) + D_{KL}(Q/P) \tag{1}$$

where $H(Q) = -\sum_{x} Q(x)\ln Q(x)$ represents the information entropy of the modal component series Q, and $D_{KL}(Q/P)$ is the relative entropy between the original signal P and the series of IMFs Q, defined as

$$D_{KL}(Q/P) = \sum_{x} Q(x)\ln\frac{Q(x)}{P(x)} \tag{2}$$

where P(x) is the distribution of the original input signal, and Q(x) is that of the IMFs obtained after decomposing the original input signal. The parameters of VMD are determined according to the minimum value of the sum of information entropy and relative entropy.

The parameter optimization process of VMD is presented in Figure 1. Firstly, the search range of the mode number K is set to [2, 16] with a search step size of 1; the number of modes is initialized to K = 2, and the penalty factor is set to α = 2000. VMD is used to decompose the original signal, and the sum of information entropy and relative entropy is calculated from Equation (1). If this value is the minimum within the search range, the optimal mode number is output; if not, K = K + 1 and the iteration continues. Next, the penalty factor α is optimized using the same minimum-entropy criterion of Equation (1): the search range of α is set to [100, 2000] with a search step size of 50. If the entropy sum is the minimum within the search range, the best penalty factor is output; if not, α = α + 50 and the iteration continues. Finally, the optimal number of modes K and penalty factor α are output.
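A minimal sketch of this two-stage search is given below. Assumptions beyond the text: VMD itself is taken from the third-party vmdpy package (assumed signature VMD(f, alpha, tau, K, DC, init, tol)), and the distributions P(x) and Q(x) are estimated with normalized histograms over shared bin edges so that H(Q) and D_KL(Q/P) can be evaluated discretely.

```python
import numpy as np
from vmdpy import VMD  # third-party VMD implementation (assumption)

def entropy_sum(signal, modes, bins=64):
    """H(Q) + D_KL(Q/P) summed over all IMFs, per Eqs. (1)-(2)."""
    edges = np.histogram_bin_edges(signal, bins=bins)  # shared bins for P and Q
    def prob(x):
        h, _ = np.histogram(x, bins=edges)
        h = h.astype(float) + 1e-12                    # avoid log(0)
        return h / h.sum()
    p = prob(signal)
    total = 0.0
    for imf in modes:
        q = prob(imf)
        total += -np.sum(q * np.log(q))                # information entropy H(Q)
        total += np.sum(q * np.log(q / p))             # relative entropy D_KL(Q/P)
    return total

def adaptive_vmd(signal, k_range=range(2, 17), alphas=range(100, 2001, 50)):
    """Two-stage grid search for (K, alpha), as described in the text."""
    def cost(k, a):
        u, _, _ = VMD(signal, a, 0.0, k, 0, 1, 1e-7)
        return entropy_sum(signal, u)
    best_k = min(k_range, key=lambda k: cost(k, 2000))   # stage 1: alpha fixed at 2000
    best_a = min(alphas, key=lambda a: cost(best_k, a))  # stage 2: K fixed at best_k
    return best_k, best_a
```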
Improved Multiverse Optimization (IMVO)
In 2015, Mirjalili et al. [41] proposed the multiverse optimization (MVO) algorithm, which is inspired by the black holes, white holes, and wormholes of multiverse theory. Every universe has an inflation rate: a universe with a high inflation rate tends to produce white holes, and conversely black holes appear. In order to determine the best location in the search space, the algorithm transports objects through wormholes from white holes in source universes to black holes in destination universes, in line with these cosmic laws. The search process of the MVO algorithm is divided into two phases: exploration and exploitation. White holes and black holes act in the exploration stage, while wormholes act in the exploitation phase. The theoretical derivation of the algorithm is as follows.
Suppose the following search space, represented as the universe matrix:

$$X = \begin{bmatrix} x_1^1 & x_1^2 & \cdots & x_1^d \\ x_2^1 & x_2^2 & \cdots & x_2^d \\ \vdots & \vdots & \ddots & \vdots \\ x_n^1 & x_n^2 & \cdots & x_n^d \end{bmatrix} \tag{3}$$

where d denotes the number of variables and n the number of universes. The search process of the algorithm is carried out under the roulette wheel mechanism, in which each iteration needs to select a white hole:

$$x_i^j = \begin{cases} x_k^j, & r_1 < NI(X_i) \\ x_i^j, & r_1 \ge NI(X_i) \end{cases} \tag{4}$$

where $x_i^j$ is the jth variable of the ith universe, $NI(X_i)$ represents the standardized inflation rate of the ith universe, $r_1$ is a random variable in [0, 1], and $x_k^j$ is the jth variable of the kth universe determined according to the roulette wheel mechanism.
In the iteration process, the wormhole existence probability (WEP) and the travelling distance rate (TDR) are crucial variables of the algorithm. For the purpose of improving the efficiency of the optimization process, the linear growth of WEP is changed to logarithmic growth in Ref. [42], and the TDR is made to grow continually during the iteration, a change mainly intended to improve the accuracy of the local search ability. The modified adaptive formulas, following Ref. [42], are Equations (5) and (6), where WEP_min = 0.2, WEP_max = 1, l denotes the current iteration, H indicates the maximum number of iterations, and Q = 5000.

The positions of the universes are then searched, and the optimal position is found according to Equation (7):

$$x_i^j = \begin{cases} X_j + \mathrm{TDR} \times \left((ub_j - lb_j) \times r_4 + lb_j\right), & r_3 < 0.5,\; r_2 < \mathrm{WEP} \\ X_j - \mathrm{TDR} \times \left((ub_j - lb_j) \times r_4 + lb_j\right), & r_3 \ge 0.5,\; r_2 < \mathrm{WEP} \\ x_i^j, & r_2 \ge \mathrm{WEP} \end{cases} \tag{7}$$

where $X_j$ represents the jth parameter of the current optimal universe, $ub_j$ and $lb_j$ represent the upper and lower bounds of the jth parameter, and $r_2$, $r_3$, and $r_4$ are random variables with values between 0 and 1.
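As a concrete reference, the sketch below implements the core MVO mechanics described above: roulette-wheel white-hole selection (Eq. (4)) and the wormhole position update (Eq. (7)). Since the modified logarithmic WEP/TDR schedules of Ref. [42] are not reproduced in this text, the sketch falls back to the standard linear WEP and power-law TDR of the original MVO; treat those two lines as placeholders rather than the IMVO forms.

```python
import numpy as np

def mvo_step(X, fitness, l, H, lb, ub, p=6.0):
    """One MVO iteration over the universe matrix X (n x d); lower fitness is better."""
    n, d = X.shape
    f = np.array([fitness(x) for x in X])
    best = X[np.argmin(f)].copy()                    # current optimal universe
    # Normalized inflation rates NI in [0, 1]: better fitness -> higher rate.
    ni = (f.max() - f) / (f.max() - f.min() + 1e-12)
    probs = (ni + 1e-12) / (ni + 1e-12).sum()        # roulette-wheel weights
    # Placeholder schedules (standard MVO, not the modified IMVO forms):
    wep = 0.2 + l * (1.0 - 0.2) / H                  # WEP_min = 0.2, WEP_max = 1
    tdr = 1.0 - (l / H) ** (1.0 / p)
    for i in range(n):
        for j in range(d):
            if np.random.rand() < ni[i]:             # white/black hole exchange, Eq. (4)
                k = np.random.choice(n, p=probs)
                X[i, j] = X[k, j]
            r2, r3, r4 = np.random.rand(3)
            if r2 < wep:                             # wormhole update, Eq. (7)
                step = tdr * ((ub[j] - lb[j]) * r4 + lb[j])
                X[i, j] = best[j] + step if r3 < 0.5 else best[j] - step
    return np.clip(X, lb, ub), best, f.min()
```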
MCKD Theory
MCKD maximizes the correlated kurtosis of the original input signal by selecting an optimal FIR filter, such that the output signal recovers the impulsive characteristics of the original signal. The filter length and the order of shift, the two important parameters of MCKD, are selected adaptively with the improved multiverse optimization (IMVO) algorithm. The detailed theoretical derivation of the MCKD algorithm was introduced in Ref. [43].
IMVO-Optimized MCKD Parameters
Two crucial parameters of MCKD are optimized and selected by the IMVO algorithm. The steps of IMVO to optimize the MCKD parameters are as follows (a fitness sketch follows this list):
(1) Parameter setting: the maximum number of iterations of IMVO is H = 50 and the number of universes is n = 30; the optimization range of the MCKD filter length L is set to [100, 300], and the order of shift M to [1, 7]. According to the formula T = f_s / f_m, the inner-ring deconvolution period T_i and the outer-ring deconvolution period T_o are calculated. The universe positions are initialized randomly within the parameter ranges.
(2) The inflation rates of the universes are obtained and ranked, and a white hole is selected under the roulette wheel mechanism.
(3) The WEP and TDR are updated according to Equations (5) and (6), and boundary checking is performed.
(4) The inflation rate of the current universe is calculated. If it is better than the current best expansion rate, the best is updated; otherwise, the current best is kept.
(5) The universe positions are updated, and the optimal individual is found according to Formula (7).
(6) The termination condition is checked: if it is satisfied, the iteration is terminated and the result is output; if not, the iteration continues by returning to step (2).
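The paper does not spell out the fitness (inflation rate) used when IMVO scores an (L, M) candidate, so the sketch below makes a common assumption: each candidate is evaluated by running MCKD with those parameters and taking the negative correlated kurtosis of the filtered output, the quantity MCKD maximizes, as the cost. The mckd() argument is a hypothetical stand-in for any MCKD filter implementation; correlated_kurtosis() follows the standard definition $CK_M(T) = \sum_n \big(\prod_{m=0}^{M} y_{n-mT}\big)^2 / \big(\sum_n y_n^2\big)^{M+1}$.

```python
import numpy as np

def correlated_kurtosis(y, T, M):
    """M-shift correlated kurtosis of y at integer period T (in samples)."""
    y = np.asarray(y, dtype=float)
    prod = y.copy()                           # m = 0 term
    for m in range(1, M + 1):
        shifted = np.zeros_like(y)
        shifted[m * T:] = y[:-m * T]          # y delayed by m*T samples
        prod *= shifted
    return np.sum(prod ** 2) / (np.sum(y ** 2) ** (M + 1))

def universe_cost(signal, params, T, mckd):
    """Fitness for one IMVO universe; lower is better (assumed choice)."""
    L = int(round(params[0]))                 # filter length, searched in [100, 300]
    M = int(round(params[1]))                 # order of shift, searched in [1, 7]
    T = int(round(T))                         # deconvolution period T = fs/fm, Eq. (13)
    filtered = mckd(signal, L=L, M=M, T=T)    # hypothetical MCKD filter routine
    return -correlated_kurtosis(filtered, T, M)
```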
To sum up, the specific steps of the proposed method are presented in Figure 2.
Simulation Analysis
The feasibility of the improved algorithm is demonstrated by the simulation of mixed signals. The mathematical expression of the bearing inner-ring fault follows Ref. [44]; in this model, the natural frequency is 3000 Hz, the sampling frequency is 8192 Hz, and the system attenuation coefficient is 500.
The mathematical model of the bearing outer-ring fault is given by Formula (12), following Ref. [45], where g denotes the damping coefficient and t0 is the single-cycle sampling time.
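For readers who want to reproduce a comparable test signal, the sketch below builds a generic version of such a mixture: each fault is modeled as an exponentially damped resonance (natural frequency fn = 3000 Hz, attenuation C = 500, as stated above) retriggered at its fault period, and the two trains are mixed with Gaussian white noise at −8 dB SNR. Since the exact waveform expressions of Refs. [44,45] and the fault frequencies of Table 1 are not reproduced here, the impulse-train form and the example fault frequencies below are illustrative assumptions; only fs, fn, C, and the SNR are taken from the text.

```python
import numpy as np

fs, fn, C = 8192, 3000.0, 500.0       # sampling freq, natural freq, attenuation (from text)
f_inner, f_outer = 95.0, 65.0         # assumed fault frequencies (Table 1 not shown)
t = np.arange(fs) / fs                # 1 s of signal

def fault_train(f_fault):
    """Exponentially damped resonance retriggered at each fault period."""
    x = np.zeros_like(t)
    for t0 in np.arange(0.0, t[-1], 1.0 / f_fault):
        mask = t >= t0
        tau = t[mask] - t0
        x[mask] += np.exp(-C * tau) * np.sin(2 * np.pi * fn * tau)
    return x

clean = fault_train(f_inner) + fault_train(f_outer)
snr_db = -8.0                         # target SNR of the mixture
noise_power = np.mean(clean ** 2) / (10 ** (snr_db / 10))
mixed = clean + np.random.randn(len(t)) * np.sqrt(noise_power)
```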
The inner-ring fault simulation signal, the outer-ring fault simulation signal, and Gaussian white noise with a signal-to-noise ratio (SNR) of −8 dB are mixed by computer to obtain the mixed signal. Table 1 presents the detailed parameters of the mathematical models of the bearing defects. Figure 3a shows the time-domain waveforms of the inner-ring fault simulation signal, the outer-ring fault simulation signal, and the mixed signal; Figure 3b shows the corresponding envelope spectra. In Figure 3a, the bearing fault impact components of the mixed signal are clearly submerged by noise, and no valuable fault information can be obtained. Likewise, no valuable fault feature information can be obtained from the envelope spectrum of the mixed signal in Figure 3b, indicating that it is severely affected by noise.

Adaptive VMD (AVMD) decomposition is then performed on the mixed signal; Figure 4 displays the decomposition results. With the method introduced in Section 2.2, the adaptively obtained parameter combination of VMD is [K, α] = [3, 100]. The information entropy of each IMF component, computed from the information entropy formula, is listed in Table 2. The IMF component with the minimum information entropy is picked as the optimal component; because IMF3 has the smallest information entropy, IMF3 is chosen.

Figure 5 shows the envelope spectrum of IMF3. The fault features of the inner ring and the outer ring are mixed with each other, meaning that the compound fault features cannot yet be separated. Therefore, MCKD is applied to IMF3 to separate the fault features; the deconvolution period is obtained from Equation (13). Figure 6 shows the time-domain waveform of the optimal IMF3 after MCKD processing, and Figure 7 its envelope spectrum. In Figure 6, the impact components of the signal are obvious, showing that MCKD has reduced the noise and recovered the impact components. In Figure 7, the inner- and outer-ring fault features of the mixed signal are fully separated and extracted: the inner-ring fault frequency and its harmonics are fully obtained, and the outer-ring fault frequency is clearly presented.

To illustrate the superiority of the proposed algorithm, KL-VMD-MVO-MCKD, EEMD-MVO-MCKD, and ICEEMDAN-MVO-MCKD are selected as comparison methods. Figures 8-10 show the envelope spectra of the mixed signal processed by KL-VMD-MVO-MCKD, EEMD-MVO-MCKD, and ICEEMDAN-MVO-MCKD, respectively. In Figure 8, although the inner-ring fault frequency can be extracted, it is still mixed with the outer-ring fault frequency f_o; the outer-ring fault frequencies f_o and 3f_o can be obtained, but their amplitudes are not obvious. In Figures 9 and 10, the inner- and outer-ring fault frequencies are separated and extracted, but the harmonics are not obtained.
The comparison method KL-VMD-MVO-MCKD performs worse than the method proposed in this paper, and its computational efficiency is low. In Figures 9 and 10, the interference components around the fault frequency spectral lines are obvious; these spectral lines show that the noise reduction ability of the two comparison algorithms is inferior to that of the proposed algorithm. From the spectral lines in Figure 7, it can be concluded that the proposed method not only separates and extracts the inner- and outer-ring fault features but also recovers the harmonics, and the spectral lines of the inner- and outer-ring fault frequencies are very clear. In short, the analysis results of the three comparison methods are not as good as those of the proposed method. The proposed method therefore exhibits a good noise reduction effect, high accuracy of fault feature separation and extraction, and high computational efficiency. The analysis of the simulated mixed signal confirms the feasibility of the proposed approach, which is applicable to compound fault feature extraction from rolling bearing acoustic signals in a noisy environment.
Experimental Analysis
The rolling bearing in the experiment has both an inner-ring fault and an outer-ring fault. Measured compound-fault data were used to test the proposed method. The data were acquired in the laboratory: two Beijing Prestige sensors (microphones) picked up the sound signals of the bearing compound faults while the test bench built with QPZZ-II rotating equipment was running. The layout of the test bench and microphones is presented in Figure 11: microphone 1 was placed flush with the centerline of the side of the faulty bearing housing, and microphone 2 was aligned with the main shaft of the bearing; the two microphones were mounted perpendicular to each other, 0.5 m from the edge of the faulty bearing seat and 0.5 m above it. The NI Signal Express acquisition module and an NI-9234 four-channel acquisition card were used to acquire the fault signal. The sampling frequency was 8192 Hz, and the sampling length of the measured data was 8192. The bearing used in the experiment was an NU205, whose technical parameters are presented in Table 3; the inner-ring and outer-ring defects were produced by wire cutting. From the bearing parameters in Table 3, the fault frequency of the inner ring was calculated as 95.38 Hz and that of the outer ring as 64.61 Hz. The rotation speed in the experiment was set to 800 r/min, that is, the rotational frequency f_r was 13.33 Hz.

The fault types of the rolling bearing are shown in Figure 12: Figure 12a is the inner-ring defect, and Figure 12b is the outer-ring defect. Figure 13 presents the time-domain waveforms and envelope spectra of the original acoustic data picked up by the two microphones. The noise in Figure 13a is obvious, and very little useful information can be obtained from Figure 13b; one can only judge preliminarily that the inner- and outer-ring fault frequencies of the bearing are mixed with each other. Because the envelope spectrum of the acoustic signal collected by microphone 2 displays no useful information, the second signal was selected for AVMD decomposition; the decomposition results are presented in Figure 14. The parameter combination [K, α] = [3, 100] of VMD is adaptively determined by the improved algorithm proposed in Section 2.2. The information entropy of each IMF component is listed in Table 4, which shows that IMF3 of the measured acoustic signal has the smallest information entropy, so IMF3 is taken as the optimal component. The envelope spectrum of IMF3, the optimal component selected by information entropy, is presented in Figure 15: the inner- and outer-ring fault features are mixed with each other, meaning that the compound fault features of the bearing cannot yet be separated. Therefore, the optimal component was analyzed by MCKD to separate the different fault information. After calculation, the inner-ring deconvolution period is T_i = 85.89 and the outer-ring deconvolution period is T_o = 126.79; these values were confirmed through repeated experimental analysis. Figures 16 and 17 show, respectively, the time-domain waveform and envelope spectrum of the optimal component IMF3 after MCKD processing.
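As a quick sanity check, the deconvolution periods quoted above follow directly from the sampling frequency and the calculated fault frequencies, since the MCKD deconvolution period is the fault period expressed in samples, T = f_s / f_fault:

```python
# Reproducing the deconvolution periods from the sampling frequency and the
# calculated fault frequencies (all values taken from the text above).
fs = 8192          # sampling frequency, Hz
f_inner = 95.38    # inner-ring fault frequency, Hz
f_outer = 64.61    # outer-ring fault frequency, Hz

T_i = fs / f_inner
T_o = fs / f_outer
print(round(T_i, 2), round(T_o, 2))   # 85.89 126.79
```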
The shock component of the signal in Figure 16 is obvious, which demonstrates the strong performance of the MCKD algorithm, and the mixed inner- and outer-ring fault features in Figure 17 are sufficiently separated and extracted: the inner- and outer-ring fault frequencies are clearly separated, and the harmonics are obtained. The KL-VMD-MVO-MCKD, EEMD-MVO-MCKD, and ICEEMDAN-MVO-MCKD methods are again selected as comparison experiments to demonstrate the superiority of the proposed approach. Figures 18-20 show the envelope spectra of the acoustic signal processed by KL-VMD-MVO-MCKD, EEMD-MVO-MCKD, and ICEEMDAN-MVO-MCKD, respectively. In Figure 18, although part of the outer-ring and inner-ring fault frequencies can be obtained from the envelope spectrum, the amplitudes of the spectral lines representing the fault frequencies are not obvious. In Figure 19, the inner-ring fault information cannot easily be read from the spectral lines, and only a few outer-ring fault frequencies, such as f_o and 2f_o, can be identified. In Figure 20, the inner-ring fault information cannot be obtained, and only some of the outer-ring fault frequencies are clearly separated and extracted.

The comparative analysis shows that KL-VMD-MVO-MCKD has low accuracy of fault feature separation and extraction as well as low computational efficiency, while EEMD-MVO-MCKD and ICEEMDAN-MVO-MCKD are insufficient at separating and extracting compound fault features, and their noise reduction is not good enough. In contrast, the proposed approach separates and extracts the inner- and outer-ring fault features effectively. The proposed algorithm thus resolves the shortcomings of traditional decomposition methods, whose IMF components contain residual noise and capture only local fault features; it improves the accuracy of compound fault feature separation and extraction, reduces computational complexity, and improves computational efficiency. The AVMD-IMVO-MCKD method for detecting bearing compound faults from acoustic signals is therefore feasible and efficient, and the comparison experiments further verify its applicability and efficiency.
Conclusions
In this paper, a new feature extraction method based on AVMD-IMVO-MCKD was proposed for the compound-fault acoustic signals of rolling bearings. The main innovations are twofold. First, we proposed to determine the parameters of VMD adaptively using minimum information entropy as the criterion. This overcomes the shortcoming of selecting parameters manually based on experience, and its computational efficiency is very high compared with optimization-based parameter search: it takes only about 3 min on average to determine the mode number and the penalty factor. Compared with traditional signal decomposition methods, AVMD also alleviates mode aliasing and insufficient decomposition ability. Second, an improved multiverse optimization (IMVO) algorithm was proposed to obtain the critical parameters of MCKD adaptively; IMVO improves computational efficiency and reduces computational complexity. The comparative analysis illustrates the superiority of the proposed method and shows that the combination of VMD and MCKD is essential. Compared with existing methods, the proposed approach fully separates and extracts compound fault features from acoustic signals. However, the deconvolution period of MCKD is currently obtained from the formula in the paper; determining the deconvolution period with an optimization algorithm could be considered in future work.
Semantic knowledge networks in education
The article is devoted to modeling semantic knowledge networks. The knowledge network is the basic concept of knowledge management, a new discipline that implements the principles of sustainable development in education. The method of constructing a semantic knowledge network allows us to analyze the connections between the educational disciplines "Economic Cybernetics", "Algorithms and Programming", and "Calculus". The paper compares the topological characteristics of the concept graphs of these disciplines. We develop an algorithm to implement a subject-area model in the form of a semantic knowledge network. For each discipline, 125 concepts are analyzed that provide for optimal mastering of the discipline, and the connections between them are established.
Epidemics, the destruction of the natural environment and climate change, the depletion of material and energy resources, the population explosion and lack of food, as well as the civilization crisis as a whole, are complex interdisciplinary problems of mankind. The need to resolve them leads to the emergence of areas of science characterized by the convergence of methods and interdisciplinary approaches. Supra-sectoral technologies (information, cognitive, nano-, bio-, and social technologies) are currently being actively developed; they contribute to the emergence of new branches of science and serve as a new methodological basis for the study of nature [7][8][9]. Such interdisciplinary scientific fields lead to new directions in science such as risk management, sustainable development, and new approaches to nature management. The quality of students' professional training, in the modern sense, is determined by their willingness and ability to use the acquired professional competencies to solve not only professional tasks but also multidisciplinary problems that may contribute to sustainable development at the level of the country, the region, and the world as a whole. This implies updating the content and methods of professional training at a modern university, taking into account the requirements of interdisciplinary integration and the implementation of sustainable development ideas [10][11][12][13][14]. Interdisciplinary integration in higher education institutions has to be an important component of introducing sustainable development ideas into the training of modern specialists, since the problems of sustainable development are themselves multidisciplinary. Such integration addresses a significant contradiction of education, namely the contradiction between vast knowledge and limited human capacity. The optimal combination of computer science and other academic disciplines within the same topic will provide conditions for a significant increase in the quality of the educational process.
The authors of [15] concluded that students have a large unused potential to understand the nature of science more deeply and to acquire knowledge important for their future lives and work.
Recently, there has been much discussion of the transition to a knowledge-based society. Knowledge management systems are being developed, and knowledge management specialists work in large corporations. Unfortunately, higher education is not considered in these discussions [16,17]. This is unacceptable, because knowledge is created, systematized, and accumulated within universities and then passed on to the next generation.
The learning process is the management of the process of students' knowledge accumulation and systematization. Only a few researchers focus their attention on this fact [18][19][20]. An automated learning environment built on semantic knowledge networks is capable, to a large extent, of solving a wide range of knowledge management tasks in a university. A feature of the modern stage in the development of educational systems is the need to expand the use of formal methods for presenting knowledge and organizing the learning process. These trends are based on the achievements of cybernetics, synergetics, and the theory of artificial intelligence. Many objects of cognitive science research should be described as networks. Over the past two decades, many studies have focused on network science as an extensive scientific field for studying complex systems (for example, [21][22][23][24]). Complex systems contain several components that interact with each other, producing complex behaviour. One such complex system is the human brain and the cognitive processes taking place in it; these processes provide memory and language (for example, [25][26][27][28][29][30][31][32]). Network science is based on mathematical graph theory and contains powerful quantitative methods for researching systems such as networks (for example, [33]).
At this stage in the development of the education system, the priority is to find ways to improve the learning process, its content, and its structure. A fundamental and holistic education can result only from a learning process of a new quality. In this case, the content of the various disciplines should reflect the logic and structure of the knowledge ties between disciplines. In the absence of intersubject connections, knowledge will be fragmentary and unsystematic. Cognitive networks are not only a tool for cognition but can also serve as a basis for assessing students' knowledge.
Analysis of previous studies
In different historical periods, many variants of semantic knowledge networks have been created that take into account the specifics of intellectual activity. In the "pre-computer era", the prototype of semantic knowledge networks was used to formalize logical reasoning. At the beginning of the twentieth century, graphs were first used in psychology to represent hierarchies of concepts and the inheritance of properties, and to model human memory and intellectual activity. In the early 1960s, the first machine implementations of semantic networks were made. In one of the first practically significant systems [34], 100 primitive types of concepts were introduced to solve the automatic translation problem, and a dictionary of 15,000 concepts was defined.
At present, semantic knowledge networks are widely used for many different problems, in particular in building knowledge bases, in machine translation, and in natural language text processing. Due to the wide range of uses of such graphs, there is a need for their refinement: an increase in the number of nodes and in the connectivity between them.
Current studies are devoted to the use of semantic networks in education. For example, in [35] the interdisciplinarity of applied mathematics is quantitatively analyzed using statistical and network methods on the PNAS 1999-2013 corpus. Article [36] discusses the potential of the Semantic Web for teacher education.
The paper [37] presents a theoretical method for integrating the use of semantic knowledge networks into the classroom. That paper also introduces insights from cognitive linguistics on how the brain best learns vocabulary. The method springs from the fields of psychology and neuroscience, as well as from educators who are building new teaching styles. Its purpose is to inspire other educators to incorporate cognitive-linguistic insights into their classes and to further the discourse on integrating this field into the teaching of English as a second or foreign language.
The authors of [38] formulate recipe recommendations using ingredient networks. The researchers showed how information about cooking can be used to glean insights about regional preferences and the modifiability of individual ingredients, and how it can be used to construct two kinds of networks, one of ingredient complements and the other of ingredient substitutes. These networks encode which ingredients go well together and which can be substituted to obtain superior results, and they permit one to predict, given a pair of related recipes, which one will be rated more highly by users.
With the traditional method of constructing a semantic knowledge network, the network is formed manually, which requires significant labour. Such networks contain a small number of nodes; nevertheless, they have an important advantage: their nodes and connections are checked manually and are correct. An alternative approach is the automatic construction of a semantic network from an external source generated by Internet users [39]. A striking example of such a source is the Wiktionary [40].
Thus, all of these works are devoted to the integration of semantic knowledge networks into teaching. The increasing information volume of the educational material of the disciplines dictates the need to use cognitive modelling to solve complex problems of training and teaching.
Theoretical framework
There are various ways of representing knowledge, in particular the following visual methods for describing knowledge in a subject field: semantic networks, graphs of conceptual dependencies, scripts, frames, conceptual graphs, and ontologies. Let us fix the definitions that are important for this work: "semantic knowledge network", "semantic network", "network model", "cognitive map", "cognitive network", and "cognitive scheme". The connection diagram of these concepts is shown in Figure 1. Cognitive maps are a concept from cognitive psychology and were first introduced by Tolman. A cognitive map is an active, information-seeking structure.
In our work, the concepts of "semantic knowledge network" and "semantic network" are treated as equivalent, owing to their proximity.
In cognitive science, the network is one of the most common types of information models. Typically, a network consists of two components: nodes as network elements and edges reflecting the interaction between the elements. Using these simple components, one can describe a wide range of objects of different nature and complexity. Network models are based on the concept of a network. In such models, all relations are explicitly highlighted; these relations constitute the framework of knowledge of the subject area whose model must be created. This class of models includes semantic networks, functional networks, and frames (frame representation).
Although the terminology and structure differ, there are similarities inherent in almost all semantic networks:
- different nodes of one concept belong to different values; otherwise, it is marked that they relate to one concept;
- edges of semantic networks create relationships between concept nodes (marks above arcs indicate the type of relationship);
- relations between concepts can be linguistic cases, such as "agent", "object", "recipient", and "instrument" (others denote temporal, spatial, and logical relations);
- the concepts are organized by level in accordance with the degree of generalization.
An associative approach to knowledge representation defines an object's meaning in terms of its connections (associations) with other objects. Thus, when a person perceives an object and reasons about it, the perceived object is mapped onto a certain concept (Fig. 2). This concept is part of general knowledge of the world, so it is connected by various associations with other concepts. Associations define the properties and behaviour of the perceived object. Graphs are best suited for explicitly expressing associations between different concepts; thus, knowledge of the world is expressed in the form of a semantic network. A semantic knowledge network is a labelled graph in which nodes correspond to certain facts or general concepts, and edges denote relationships or associations between different facts or concepts (Fig. 3).
In each academic discipline (in every science), the number of concepts reflecting the knowledge of that discipline is finite. There is a set of words that need to be conveyed to the audience, and the number of these words is not infinite, because the time for their transfer is limited. Textbooks establish linear links between concepts. A normalized description of knowledge networks can be formulated as follows. The body of knowledge of the studied discipline is a system (S). The elementary component of S is a word that reflects a certain concept. With the help of words, all the concepts that make up the system S are recorded. Links between the concepts are established using the grammatical rules of a particular language. For each concept in S, there is a primary sentence that contains its definition. The totality of such definitions forms an invariant kernel of S, which ensures the unambiguity of the perception of knowledge within a particular academic discipline. The invariant core of the discipline uses words from other areas of knowledge to define its concepts. All concepts in S are divided into main and auxiliary ones. The basic concepts are the specific concepts of this particular discipline, which are the subject of its definition and study. The supporting concepts are borrowed from other areas of knowledge; they are not studied in this discipline but are used to define the content of the basic concepts. The set of basic concepts of a particular discipline, together with the internal relationships between them, forms a hierarchically ordered network of knowledge whose nodes are the identifiers of the basic concepts.
Thus, the knowledge system can be represented in the form of a hierarchical directed graph -a semantic knowledge network.
The semantic knowledge network building algorithm involves several steps (a sketch of the leveling procedure in steps (3)-(6) is given below):
(1) Write down all the basic terms of the subject area and formulate their definitions (composing the thesaurus of the subject area).
(2) Select from the list the terms that appear in the definitions of the other terms listed in step (1).
(3) At the lowest level (I), arrange the terms whose definitions use no terms from the list.
(4) At the next level (II), arrange the terms whose definitions use terms of level I.
(5) At level III, arrange the terms whose definitions use terms of levels I and II, and so on.
(6) At the last level, arrange the terms that are not used in the definitions of any other terms.
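A minimal sketch of how steps (3)-(6) might be mechanized, assuming the thesaurus has already been reduced to a mapping from each term to the set of listed terms appearing in its definition (the toy terms below are hypothetical):

```python
# Layer terms by definitional dependency: level I holds terms whose
# definitions use no listed terms; each next level holds terms whose
# definitions use only terms from lower levels.
def build_levels(uses):
    levels, placed = [], set()
    remaining = set(uses)
    while remaining:
        # Terms whose definitional dependencies are all already placed.
        level = {t for t in remaining if uses[t] <= placed}
        if not level:
            raise ValueError("circular definitions detected")
        levels.append(sorted(level))
        placed |= level
        remaining -= level
    return levels

# Toy thesaurus (hypothetical terms):
uses = {
    "set": set(),
    "function": {"set"},
    "sequence": {"function", "set"},
    "limit": {"sequence"},
}
print(build_levels(uses))
# [['set'], ['function'], ['sequence'], ['limit']]
```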
Visualization of data in a structural network model is only the first step; the strength of the method lies in the ability to extract important knowledge about the system through a statistical analysis of the network topology. Topology appears to bear an evolutionary and functional imprint [42]. A detailed analysis of the available metrics can be found, for example, in [43]. We consider just a few metrics often used in cognitive model research.
Let us consider the network structure in detail. A network consists of nodes and the links between them, the edges. Nodes are more or less stable entities that do not change over time.
Edges represent relationships, interactions, transactions, or any other temporary connections that occur between nodes over a certain period of time: friendships, proximity, transactions, exchanges, and any other temporary connections between stable objects that occur with a certain frequency.
Edges are important to network analysis because they represent the connectivity basis that we will use to gain insights into the network's complexity. In a graph database, the relationships between the data are just as important as the data itself.
The giant component is an important notion in network analysis: it is an interconnected constellation that includes most of the nodes in a network.
Clusters are constellations of nodes that are more densely connected to each other than to the rest of the nodes in the network. Clusters represent different subnetworks within a network and can be used to identify the various subcategories present within it.
In modern network theory, the number of a node's connections is called its degree (in graph theory, nodes and links are called vertices and edges, respectively). A node's degree indicates how many connections it has to the other nodes in the network. The higher a node's degree, the more "connected" it is, which indicates its relative influence in the network.
The concept of degree is a local characteristic of a graph. A nonlocal, integral network structure is defined by two concepts: a path and a loop (cycle). A path is a sequence of adjacent nodes and the links between them in which no node repeats. A loop, or cycle, is a path whose start and end nodes coincide. Networks without loops are trees, in which the number of nodes (N, the network size) and the number of links (L) are related as L = N − 1 [23].
Identifying the nodes with the highest degree (also called "hubs") is an important part of network analysis, as it helps identify the most crucial parts of the network. This knowledge can later be used both to improve the network's connectivity (by linking the hubs together) and to disrupt it (by removing those nodes).
Betweenness centrality is another important measure of a node's influence within the whole network. While degree simply counts the connections a node has, betweenness centrality shows how often the node appears on the shortest path between any two randomly chosen nodes in the network. Betweenness centrality is thus a much better measure of influence, because it takes the whole network into account, not only the local neighbourhood the node belongs to.
A node may have a high degree but low betweenness centrality. This indicates that it is well connected within its own cluster but not so well connected to the nodes belonging to the other clusters within the network. Such nodes may have high local influence, but not global influence over the whole network.
Alternatively, other nodes may have a low degree but high betweenness centrality. Such nodes may have fewer connections, but the connections they do have link different groups and clusters together, making them influential across the whole network.
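A small illustration of this distinction using the networkx library: the hypothetical bridge node "b" below connects two triangles, so it has a low degree but by far the highest betweenness centrality.

```python
# "b" has only degree 2, while a1 and c1 have degree 3, yet every shortest
# path between the two triangles passes through "b".
import networkx as nx

G = nx.Graph()
G.add_edges_from([("a1", "a2"), ("a2", "a3"), ("a1", "a3"),   # triangle A
                  ("c1", "c2"), ("c2", "c3"), ("c1", "c3"),   # triangle C
                  ("a1", "b"), ("b", "c1")])                  # bridge node

print(dict(G.degree()))                # b: 2, a1: 3, c1: 3, others: 2
print(nx.betweenness_centrality(G))    # b has the highest value
```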
In network visualization, we often scale the node sizes by degree or betweenness centrality to indicate the most influential nodes.
Network topology is an important element of network analysis. If we analyze networks on a structural basis, we will discover many differences among them. Topological analysis, based on graph theory, is a tool for studying complex networks.
When performing network analysis and visualization, it is important to classify the topology of the network [44]. This can be done through quantitative analysis of the degree distribution among the nodes and/or through qualitative analysis using various visual graph layouts.
Degree distribution can be a good indicator of a network's topology. If most of the nodes in the network have exactly the same degree, the network is more of a regular one (this may also indicate a tree-like hierarchical system within the network). If most of the nodes have about the same average number of connections, with some having more and some fewer (a normal, bell-curve degree distribution), we are dealing with a random network. Finally, if there is a small but significant number of nodes with a high degree and the degree distribution has a long tail of gradual decline (a scale-free distribution), this is a small-world network, with a significant number of well-connected hubs surrounded by less connected satellites that form clusters. Those clusters are connected to one another via the hubs and via the nodes that belong to several communities at once.
Graph layout is a qualitative means of identifying the topology of a network. A very useful layout is Force Atlas, where the most connected nodes with the highest degree are pushed apart from each other, while the nodes connected to them but with lower degree are grouped around those hubs. After several iterations, this layout produces a very readable representation of a network, which can be used to better understand its structural properties and to identify the most influential groups, the differences between them, and the structural gaps within the network.
Network motifs are the different types of constellations that emerge within network graphs. They can provide a lot of useful information about the structural nature of networks.
For example, some networks may consist of dyads, or pairs of nodes (which indicates that the level of overall connectivity is quite low). Other networks can have a high proportion of triads, which usually indicates the presence of feedback loops and makes the resulting network formations much more stable. More complex formations include groups of four nodes connected in sequence or to each other, forming interconnected clusters that can encode levels of complexity beyond simple triad feedback constellations.
It is important to take notice of the network motifs that emerge within a network, because they provide a very good indication of the level of complexity and thus of the capacity of the network.
Modularity is a quantitative measure that indicates the presence of distinct communities within a network. If the network's modularity is high, it has a pronounced community structure, which in turn means there is space for plurality and diversity inside. If the modularity is too high, however, it might also indicate that the network consists of many disconnected communities that are not globally connected, making it much less efficient than an interconnected one.
Modularity works through an iterative algorithm, which identifies the nodes that are more densely connected to each other than to the rest of the nodes in the network and then calculates the modularity measure for the network at large. The higher this measure, the more distinct the communities of densely connected nodes. If the modularity measure is 0.4 or above, the community structure in the network is quite pronounced; if it is lower, there are no big differences between the clusters, and most of the nodes are roughly equally densely connected to each other across the whole network.
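A minimal sketch of this computation using networkx's built-in greedy community detection; the classic karate-club network serves purely as an example input.

```python
# Detect communities greedily and compute the resulting modularity score.
import networkx as nx
from networkx.algorithms import community

G = nx.karate_club_graph()                        # classic test network
parts = community.greedy_modularity_communities(G)
Q = community.modularity(G, parts)
print(len(parts), round(Q, 3))                    # typically 3 communities, Q ≈ 0.38
```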
So far, we've looked at the different measures of connectivity that exist within networks and that help us identify the most influential nodes, clusters, and deduce some basic functional properties of the networks we study.
However, one of the most important aspects of network graphs is that they also let you see the gaps, the empty spaces between the islands. Those gaps are usually referred to as "structural gaps", and it has been shown that bridging them can spur innovation, create the most interesting collaborations, and give rise to new, unexpected ideas.
In other words, "structural gaps" is where creativity and potential are hidden within the network.Therefore, when visualizing a network, it is important to identify those structural gaps and to devise different actions that could help bridge different nodes and clusters across those empty spaces within the graph in order to spur creativity and innovation.
Results and analysis
As an example of modeling semantic knowledge networks, we analyze the relationships between the concepts of academic disciplines. As is well known, mastering a discipline is closely connected with the assimilation and comprehension of the course's concept thesaurus. To assimilate further concepts within a discipline, it is necessary to understand those already learned, often within previously studied disciplines. Therefore, studying the dependencies between concepts and modeling them with cognitive networks is a relevant task [44].
Fig. 4 shows a fragment of the construction of a semantic knowledge network. To implement the subject-area model in the form of a semantic knowledge network, we propose the following algorithm:
(1) Classify all concepts of the subject area into macro-concepts (classes of concepts), meta-concepts (generalized concepts), and micro-concepts (elementary concepts).
(2) Identify the common properties and characteristics inherent in each level of concepts.
(3) Highlight the distinguishing features of each level of concepts.
(4) Establish links between concepts belonging to the same level.
(5) Identify the inter-level ties.
We analyzed 125 concepts necessary for mastering the "Economic Cybernetics" discipline and the relationships between them (a link means that one concept must be understood in order to master another). We conducted similar studies for 125 concepts of "Algorithms and Programming" and 125 concepts of "Calculus".
The constructed graphs (Fig. 5-7) can be used to identify the most important concepts, those with the highest vertex degree, as well as the concepts that lie on the path to studying other important course concepts. The obtained graphs were visualized using the Gephi software product [45].
Gephi is free, open-source, leading visualization and exploration software for all kinds of networks; it runs on Windows, Mac OS X, and Linux. It is highly interactive, and the user can easily edit node and edge shapes and colors to reveal hidden patterns. The aim of Gephi is to assist the user in pattern discovery and hypothesis making through efficient dynamic filtering and iterative visualization routines.
Gephi allows one to calculate the topological characteristics of a graph, such as:
- nodes and edges (what networks are made of);
- clusters (groups of nodes that are connected);
- degree (the number of connections that a node has);
- betweenness centrality (how influential a node is);
- modularity (community structure).
Gephi comes with a very fast rendering engine and sophisticated data structures for object handling, making it one of the most suitable tools for large-scale network visualization. It offers highly appealing visualizations and, on a typical computer, can easily render networks of up to 300,000 nodes and 1,000,000 edges. Compared to other tools, it has a very efficient multithreading scheme, so users can perform multiple analyses simultaneously without suffering from panel "freezing" issues. In large-scale network analysis, fast layout is a bottleneck, as the most sophisticated layout algorithms become CPU- and memory-greedy and require long running times. While Gephi offers a great variety of layout algorithms, the OpenOrd [46] and Yifan-Hu [47] force-directed algorithms are most recommended for large-scale network visualization. OpenOrd, for example, can scale to over a million nodes in less than half an hour, while Yifan-Hu is an ideal option to apply after the OpenOrd layout. Notably, the Yifan-Hu layout can give views aesthetically comparable to those produced by the widely used but conservative and time-consuming algorithm of Fruchterman and Reingold [48]. Other layouts offered by Gephi are the circular, contraction, dual circle, random, MDS, Geo, isometric, GraphViz, and Force Atlas layouts. While most of them run in an affordable time, the combination of OpenOrd and Yifan-Hu seems to give the most appealing visualizations. Decent visualization is also obtained with the OpenOrd layout if the user stops the process when about 50-60% of the progress has been completed. Of course, efficient parameterization of any chosen layout algorithm will affect both the running time and the visual result.
In Fig. 5-7, the size of a concept node of the semantic knowledge network characterizes the degree of importance and fundamentality of the corresponding term of the academic discipline.
For the obtained graphs, the topological characteristics were calculated and analyzed. The results of the study are shown in Table 1.
Table 1. Comparison of the topological characteristics of the graphs of relationships between the concepts of the disciplines "Economic Cybernetics" (E), "Algorithms and Programming" (P), and "Calculus" (M).

Let us analyze the values of the measures found (Table 1). The link density measure is the density of edges, calculated as the ratio of the number of edges of a graph to the maximum possible number of edges for the given number of vertices. Thus, the values of 0.17 for the "Economic Cybernetics" graph and 0.2 for the "Calculus" graph mean that about 17.3% and 19.5% of the maximum possible number of edges are present, respectively. The density of the concept graph of "Algorithms and Programming" is lower, 11%, which can be explained by a smaller average number of connections between concepts in the graph.
The maximum vertex degree, 121, was observed in the concept graph of "Algorithms and Programming"; the maximum vertex degree in the "Economic Cybernetics" graph is 111. The minimum vertex degrees in the "Economic Cybernetics" and "Algorithms and Programming" graphs are 3 and 1, respectively, which are almost the same. For "Calculus", the least connected node has a higher degree, 7, and the most connected node has degree 113, which is less than in "Algorithms and Programming" but more than in "Economic Cybernetics".
This also confirms a greater connection between the concepts of "Economic Cybernetics" and "Algorithms and Programming" than between the concepts of "Calculus".
The mean node degree for the "Economic Cybernetics" graph is 21.45, for the "Algorithms and Programming" graph 13.66, and for the "Calculus" graph 24.18, confirming the presence of more connections in the last graph.
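As a consistency check, the link-density figures quoted earlier follow directly from these mean degrees, since for an undirected graph the density equals the mean degree divided by N − 1:

```python
# density = L / (N(N-1)/2) = mean_degree / (N - 1), with N = 125 concepts
# and the mean degrees reported in Table 1.
N = 125
for name, mean_degree in [("Economic Cybernetics", 21.45),
                          ("Algorithms and Programming", 13.66),
                          ("Calculus", 24.18)]:
    density = mean_degree / (N - 1)
    print(name, round(density, 3))   # 0.173, 0.110, 0.195
```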
The global clustering coefficient of a graph is the ratio of three times the number of triangles (cyclically connected triples of vertices) to the number of connected triples of vertices. For the "Economic Cybernetics" graph, the clustering coefficient is 0.4; for the "Algorithms and Programming" graph, 0.33; and for "Calculus", 0.59. This means that the concepts of the "Calculus" course more often lie on the path to mastering other important concepts.
As for the diameters of the graphs: the diameter of the "Economic Cybernetics" concept graph is 5, that of the "Algorithms and Programming" graph is 9, and that of the "Calculus" graph is 3. The same relationships are observed for the average shortest path lengths, which may indicate the existence of longer paths in the connections between the concepts of the "Algorithms and Programming" discipline.
The modularity index is less than 0.4, which means that the community structure in all three networks is not sufficiently pronounced.
In education, there is always the problem of the contradiction between the increasing amount of scientific information and the limited time allotted for its assimilation. Teaching academic disciplines in higher education requires constant work on educational information in order to move from extensive to intensive teaching methods. One way to intensify the educational process is the optimal "packaging" of educational information.
The solution to this problem is the construction of a semantic network. An important condition for the successful mastering of educational material is the teacher's ability to highlight the key questions of the program. The nodal questions of the program are the basis for studying the whole topic, and their significance can be determined using a graph or an adjacency matrix.
For example, let a topic contain six questions whose logical connections are presented in the form of an adjacency matrix (Table 2):

      P1  P2  P3  P4  P5  P6    B
  P1   0   1   1   0   0   1   3/6
  P2   0   0   1   1   1   1   4/6
  P3   0   0   0   1   1   0   2/6
  P4   0   0   0   0   1   0   1/6
  P5   0   0   0   0   0   0    0
  P6   0   0   0   1   0   0   1/6

The significance of a question can be characterized by the weight coefficient determined by the formula B = S_i / k, where S_i is the number of references to the i-th question when studying the others contained in this topic, and k is the total number of questions in the section. The larger the coefficient, the greater the significance of the question. In the same way, it is possible to determine the importance of a discipline (section) in the study of all disciplines of the curriculum. A similar technique can be used in forming the content of academic subjects on the basis of discipline standards, in developing curricula and tests, and in selecting and organizing educational information for training.
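A minimal numpy sketch that reproduces the B column of this table, taking S_i as the i-th row sum of the adjacency matrix:

```python
# Weight coefficients B = S_i / k computed from the adjacency matrix above.
import numpy as np

A = np.array([[0, 1, 1, 0, 0, 1],
              [0, 0, 1, 1, 1, 1],
              [0, 0, 0, 1, 1, 0],
              [0, 0, 0, 0, 1, 0],
              [0, 0, 0, 0, 0, 0],
              [0, 0, 0, 1, 0, 0]])
k = A.shape[0]          # total number of questions, k = 6
B = A.sum(axis=1) / k   # row sums divided by k
print(B)                # [3/6, 4/6, 2/6, 1/6, 0, 1/6]
```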
Conclusions
Algorithms for the formation of a semantic knowledge network have been developed. The knowledge network is the basic concept of knowledge management; in effect, we introduce a new discipline that implements the principles of sustainable development in education. The method of constructing a semantic knowledge network of terms allows one to form an adjacency matrix that reflects the correlation of terms from a terminological dictionary. This matrix makes it possible to evaluate the quality of the terminology in a particular discipline, as well as to quantify the semantic connectivity of the whole tutorial. According to the obtained results, we can conclude that the concept system of "Economic Cybernetics" is connected and complex; this means that when studying any concept, it is necessary to revisit the meanings of those already studied. The concept system of "Algorithms and Programming" contains fewer dependencies and less connectivity than the other graphs, although the experience of studying these disciplines indicates that "Algorithms and Programming" is also not easy to learn. In further work, the problem of planning the learning process based on semantic knowledge networks will be studied; namely, the distribution of lectures, practical classes, and laboratory exercises will be determined to achieve the learning objectives successfully. We also plan to calculate the spectral characteristics of the graphs for the studied disciplines, as was done in [50,51].
Fig. 2. The relationship of the concept, subject and word denoting this subject [41].

Fig. 3. The relationship of various concepts in the human mind [41].

Fig. 5. The semantic knowledge network of the course concepts of "Economic Cybernetics".

Fig. 6. The semantic knowledge network of the course concepts of "Algorithms and Programming".

Fig. 7. The semantic knowledge network of the course concepts of "Calculus".
Construction of 4 x 4 symmetric stochastic matrices with given spectra
The symmetric stochastic inverse eigenvalue problem (SSIEP) asks which lists of real numbers occur as the spectra of symmetric stochastic matrices. When the cardinality of a list is 4, Kaddoura and Mourad provided a sufficient condition for the SSIEP by a mapping and convexity technique. They also conjectured that the sufficient condition is the necessary condition. This study presents the same sufficient condition for the SSIEP, but we do it in terms of the list elements. In this way, we provide a different but more straightforward construction of symmetric stochastic matrices for the SSIEP compared with that of Kaddoura and Mourad.
Introduction
The nonnegative inverse eigenvalue problem asks which lists of complex numbers occur as the spectra of nonnegative matrices. This is a long-standing problem in matrix theory (see, for example, the survey [1]). The nonnegative matrices may be specified as, for example, symmetric, stochastic, doubly stochastic, symmetric stochastic, or unspecified forms.
A real square matrix with nonnegative entries is said to be (generalized) symmetric doubly stochastic, or simply (generalized) symmetric stochastic, if it is symmetric and all of its row or column sums are equal to a nonnegative constant α. We will simply call it a symmetric stochastic matrix. Throughout this study, the constant α may be any nonnegative number, including the usual case α = 1; in this way, we can trace the behavior of α, which is otherwise concealed when α = 1. The nonnegative inverse eigenvalue problem for a given list where the matrix is required to be symmetric stochastic is called the symmetric stochastic inverse eigenvalue problem, or SSIEP.
Let Λ be a list of real numbers and n the cardinality of Λ. When n is 1 or 2, the SSIEP is easy. The SSIEP has been solved only for the case n = 3, by Perfect and Mirsky [2] in 1965. For the case n ≥ 5, the SSIEP is wide open (see, for example, [3][4][5]).
For the case n = 4, a sufficient condition for the SSIEP is given in [2], and Mourad and coauthors provided a sufficient condition covering a more comprehensive range for SSIEPs by a mapping and convexity technique, conjecturing in [6][7][8] that the sufficient condition is also necessary.
This study presents a different but more straightforward construction of symmetric stochastic matrices for the SSIEP when n = 4, compared with that of Mourad in [6]. A particular orthogonal matrix (1) is used, and the symmetric stochastic matrices are expressed simply in terms of the list elements of Λ (see the matrices A₁, A₂, and A₃ below); we arrive at the same conjecture as in [8].
Symmetric stochastic matrices
Let a matrix U be of the form (1). This matrix U becomes orthogonal if and only if the numbers a and b satisfy the appropriate relation, as can be seen from the form of the product of the transpose of U with U. Let Λ = {λ₁, λ₂, λ₃, λ₄} be a list of real numbers in nonincreasing order and [Λ] the diagonal matrix with diagonal entries Λ. The product U[Λ]U^t becomes a symmetric matrix with each row and column summing to λ₁. The following theorem presents a sufficient condition for the SSIEP, which Kaddoura and Mourad proved in [8]; still, we give a different proof in which the particular orthogonal matrix (1) is used, so that different and simple types of symmetric stochastic matrices are obtained. The symmetric stochastic matrices are expressed in terms of the list Λ.
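A hedged numerical illustration of this row-sum property: the Hadamard-type orthogonal matrix below, whose first row is the constant vector (1/2, 1/2, 1/2, 1/2), is one concrete choice and is not necessarily the matrix (1) of the paper; the sample spectrum is likewise arbitrary.

```python
# If the first row of an orthogonal U is constant, then A = U^t diag(Λ) U is
# symmetric, has spectrum Λ, and each row and column sums to λ1.
import numpy as np

U = 0.5 * np.array([[1,  1,  1,  1],
                    [1,  1, -1, -1],
                    [1, -1,  1, -1],
                    [1, -1, -1,  1]], dtype=float)
assert np.allclose(U @ U.T, np.eye(4))          # U is orthogonal

lam = np.array([1.0, 0.4, -0.1, -0.3])          # a sample spectrum, λ1 first
A = U.T @ np.diag(lam) @ U

print(np.allclose(A, A.T))                                        # symmetric
print(np.allclose(np.sort(np.linalg.eigvalsh(A)), np.sort(lam)))  # spectrum Λ
print(A.sum(axis=1))                            # every row sums to λ1 = 1.0
print((A >= 0).all())                           # nonnegativity depends on Λ
```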
Theorem 2.1. Let Λ = {λ₁, λ₂, λ₃, λ₄} be a list of real numbers satisfying conditions (3)-(5). Then there is a symmetric stochastic matrix whose spectrum is the list Λ.
Proof. Denote by R the region of lists satisfying conditions (3)-(5). Let R₁ be the subregion of R whose lists satisfy the further conditions (6), R₂ the subregion of R whose lists satisfy the further conditions (7), and R₃ the subregion of R whose lists satisfy the further conditions (8)-(10). First, we check that the union of the subregions R₁, R₂, and R₃ is the region R. Let λ = (λ₁, λ₂, λ₃, λ₄) be an arbitrary list in R. We will show that λ lies in one of the subregions.
If λ satisfies the defining inequality of R₂, then by definition of R₂, λ ∈ R₂. Now consider the remaining region. If the further inequality defining R₁ holds, the region can be rewritten accordingly, and hence λ ∈ R₁; in the remaining case, the region can be rewritten so that λ ∈ R₃. This completes the proof that the union of the subregions R₁, R₂, and R₃ is the region R.

Let us now consider the lists in the subregion R₁. In this subregion, set the matrix U in (1) with the appropriate value of a, so that U becomes orthogonal. Let a list Λ be in R₁ and let A₁ = U[Λ]U^t, where [Λ] is the diagonal matrix with diagonal entries Λ. The matrix A₁ is then symmetric with each row and column summing to λ₁, and its spectrum is Λ. If we show that each entry of A₁ is nonnegative, then A₁ is a symmetric stochastic matrix with the spectrum Λ, and the proof is completed for the lists in the subregion R₁.
We denote the (i, j)-th entry of A₁ by (A₁)_ij. By Condition (4) and the fourth expression of Condition (6), the displayed entries are nonnegative. All other entries of A₁ have one of the aforementioned forms, and so they are nonnegative as well.
Next, consider the lists in the subregion R₂. In this subregion, set the matrix U in (1) with the appropriate values of a and b. Let a list Λ be in R₂ and let A₂ be the product U[Λ]U^t in (2). Then the matrix A₂ is symmetric with each row and column summing to λ₁, and its spectrum is Λ. If we show that each entry of A₂ is nonnegative, then A₂ is a symmetric stochastic matrix with the spectrum Λ, and the proof is completed for the lists in the subregion R₂.
Finally, consider the lists in the subregion R₃. In this subregion, set the matrix U in (1) with a = 0 and the corresponding value of b, so that U becomes orthogonal. Let a list Λ be in R₃ and let A₃ be the product U[Λ]U^t in (2). Then the matrix A₃ is symmetric with each row and column summing to λ₁, and its spectrum is Λ. If we show that each entry of A₃ is nonnegative, then A₃ is a symmetric stochastic matrix with the spectrum Λ, and the proof is completed for the lists in the subregion R₃.
Consider the entry (A₃)_11. If the list Λ in R₃ satisfies Condition (8), then by its fourth expression we obtain (A₃)_11 ≥ 0. If the list Λ in R₃ satisfies Condition (9), then by the second expression of Condition (9) a direct computation gives (A₃)_11 ≥ 0. If the list Λ in R₃ satisfies Condition (10), we immediately obtain (A₃)_11 ≥ 0. It is easy to check, using Conditions (3)-(5), that all other entries of A₃ are nonnegative.
We have shown that for every list Λ in the region R there is a symmetric stochastic matrix, depending on the conditions satisfied by Λ, whose spectrum is Λ. Therefore, the proof of the statement is complete. □

We draw graphs to picture the regions R, R₁, R₂, and R₃. For fixed negative values of λ₄ and λ₃, we present a graph that is a cross-section of the regions on the (λ₂, λ₁)-plane (Figure 1).
The subregion R₁ is the region bounded by the indicated lines and circle, and the subregion R₃ is the region bounded by the indicated lines, including λ₁ = λ₂. The region R is the union of the subregions R₁, R₂, and R₃, which is the colored region in Figure 1.
For fixed values λ₁ > λ₂ > 0, we present a graph that is a cross-section of the regions on the (λ₄, λ₃)-plane (Figure 2). The region R is the union of the subregions R₁, R₂, and R₃, which is the colored region in Figure 2. For the other sign patterns of λ₃ and λ₄, similar but simpler graphs of the regions R, R₁, R₂, and R₃ can be drawn, compared with Figures 1 and 2.
Our empirical study shows that the converse statement of Theorem 2.1 is also true; that is, for a list of real numbers Λ = {λ₁, λ₂, λ₃, λ₄}, conditions (3)-(5) are necessary and sufficient for the existence of a symmetric stochastic matrix with spectrum Λ. We leave this as a conjecture, which is also presented in [8].
Conjecture 2.2. The converse statement of Theorem 2.1 is true.
Symmetric and symmetric stochastic matrices
The case of symmetric nonnegative matrices is well known from Fiedler (1974) [9] and Soules (1983) [10].
Theorem 3.1. Let Λ = {λ₁, λ₂, λ₃, λ₄} be a list of real numbers satisfying conditions (3) and (4). Then there is a symmetric nonnegative matrix whose spectrum is the list Λ.
We compare the sufficient condition for a symmetric stochastic matrix in Theorem 2.1 with that for a symmetric nonnegative matrix in Theorem 3.1. If a list Λ satisfies Conditions (3) and (4) but does not satisfy Condition (5) of Theorem 2.1, then by Theorem 3.1 there is a symmetric nonnegative matrix whose spectrum is the list Λ. We provide such a symmetric nonnegative matrix, similar to the matrices that appeared in the proof of Theorem 2.1.
Theorem 3.2. Let Λ = {λ₁, λ₂, λ₃, λ₄} be a list of real numbers satisfying conditions (3) and (4) but not condition (5). Then there is a symmetric nonnegative matrix whose spectrum is the list Λ.
Proof. Let a matrix V be of the displayed form; the matrix V is orthogonal. Let A be the product V[Λ]V^t. Since V is orthogonal, the symmetric matrix A has the eigenvalues Λ. If we show that all entries of A are nonnegative, then A is a symmetric nonnegative matrix whose spectrum is the list Λ.
By Conditions (3) and (4), one checks easily that the entries of A are nonnegative, which completes the proof of the theorem. □ Combining the proofs of Theorems 2.1 and 3.2 yields another proof of Theorem 3.1.
Figure 1: Cross-section of the region R for fixed negative values λ₄ and λ₃. The subregion R₂ is the upper part of the two regions bounded by the indicated lines.

Figure 2: Cross-section of the region R for fixed positive values λ₁ and λ₂.
Welcome to the Bubble: Experiences of Liminality and Communitas among Summer Camp Counsellors
Summer camps provide a special time and space for youth growth and transformation. This growth is possible, in part, due to the physical and social isolation that contribute to the liminality of traditional residential camps. Camps act as a sort of ‘bubble’ in which alternative realities, norms and identities emerge. For many campers and camp counsellors, the community and personal relationships that develop at camp produce feelings of acceptance and belonging. Positive camp experiences do not occur by happenstance and as such, youthful camp counsellors often feel immense pressure to deliver on the promises that camps offer. This article explores the challenges faced by counsellors as they seek to create and maintain this liminal space. This paper discusses camp counsellors’ own reflections on their personal struggles with social isolation and the need to be accepted, effects of gossip in the close-knit community of camp, a lack of private time or space, and the emotional demands of caring for campers. The article concludes by suggesting how we might reconsider camp counsellor experiences and offers strategies to support counsellors as they navigate and negotiate camp experiences for both themselves and their campers.
Yet we know little of the counsellor experience within residential camp settings.They, too, exist within liminal space where social norms are altered.They are expected to help create the magic that camps promise while they themselves are faced with difficulties, both practical and personal.This study seeks to better understand the challenges faced by camp counsellors so that we can better support their efforts.
Review of Literature

Residential Camps and Liminality
Residential camps offer a physical, social, and even emotional break from campers' own communities, homes, and relationships. Natural features of camps (e.g., long gravel driveways, access via boats, surrounding forests) draw boundaries around camp time and space (Preston-Whyte, 2004; Weber, 1995). In this way, camp experiences exist within a liminal space.

Liminality exists when everyday routines, rituals, and spaces are altered or abandoned in meaningful ways (Turner, 1994). Liminality, and the social cohesion generated within it, provide opportunities for individuals to experience ideal, albeit utopian, community living as well as their individual value in belonging to this community (Olaveson, 2001). These experiences can be profound and encourage individuals to readjust and/or recreate their values (Olaveson, 2001). Turner saw these experiences as akin to spiritual rebirth (Olaveson, 2001).

The social and physical seclusion of camps create a space that provides the anti-structure necessary to form what Turner calls a status of "betwixt and between" (1994, p. 8). Camp spaces form a sort of "bubble" away from campers' and staff members' everyday lives wherein reality is suspended. Within this new reality, new ways of thinking and acting can emerge. Roles and statuses can change. Perceptions of equality develop as individuals are reduced to a human common denominator within liminal spaces (Turner, 1982). That is, the roles and status of our ordinary lives (job position, academic performance, family hierarchy) hold less relevance to who we are in a liminal space. Substantial research exists on the benefits of camper experiences (Chenery, 1993; Dorian, 1994; Ewert & Yoshino, 2011; Gustafsson, Szczepanski, Nelson, & Gustafsson, 2012; Henderson & Bialeschki, 1999; Henderson et al., 2006; Henderson et al., 2006-2007; Holman & McAvoy, 2003; Williams, 2013). The special qualities of camp relationships, and consequently camp community, have often been associated with positive personal, emotional and social development (Bialeschki, Henderson, & Dahowski, 1998; Glover, Chapeskie, Mock, Mannell, & Feldberg, 2013; Henderson et al., 2006).

The literature is less complete for camp counsellor experiences and, specifically, the challenges they face in living and working within camp liminality. Much like campers, camp counsellors may also be disoriented by the differences of camp. While at home they may be described as sons and daughters, as students, or as residents of familiar and comfortable neighborhoods. In the altered context provided by camp, these young counsellors are often considered mentors, teachers, safety personnel, and even emotional support workers (Ross, 2009). In camp, they must, by necessity, create new and important identities. Allen (2004) suggested that individuals, such as camp counsellors, are "embodied, embedded in a social and cultural milieu" and "constituted by power relations" (p. 235). Camp counsellors must contend with and navigate multiple, messy and even contradictory messages about what and who they are and are meant to be within camp contexts. This additional dimension to working in liminal space can be demanding, distressing, daunting, and disorienting.
The study described in this paper examined the embodied experiences of camp liminality from the perspective of 38 in-depth interviews with a variety of residential summer camp counsellors.
Their camp experiences are constructed by many discourses, some of them competing, and all can be profound. The purpose was to better understand the complexity and challenges of living and working within camps.
Campers and Communitas
Individuals in liminal communities tend to engage with one another in direct, sympathetic, and non-judgmental ways (Turner, 1982). The usual assumptions made in relation to a person's role, status, reputation, class/caste, sex, or other structural niche tend to fall away and, rather, individuals are addressed in terms of their value to the "here-and-now" (Turner, 1982, p. 48).

These conditions help create what Turner and others (Andrews, 1999; Côté-Arsenault, Brody, & Dombeck, 2009; Sharpe, 2005) call "communitas" (Turner, 1969, p. xvi). Sharpe argues that "communitas emerges when people step out of their structural roles and obligations, and into a sphere that is decidedly 'anti-structural' …and the rules of everyday life can be altered, inverted and made topsy-turvy" (2005, p. 256). Experiences of communitas are characterized by "feelings of equality, linkage, belonging, and group devotion to a transcendent goal" (Arnould & Price, 1993, p. 34). When communitas exists, participants place high value on personal honesty, openness, and lack of pretentions or pretentiousness (Turner, 1982).

Camp community living represents, in many powerful ways, Turner's description of communitas (1982). For example, feelings of belonging and inclusion are among the highest priority for camp counsellors to deliver and manage for those campers in their care (Ross, 2009). Liminal spaces are defined by the collective investment of all participants in singular or focused goals (Sharpe, 2005). Within camps these goals appear to be ideals of fun, belonging, and character development. This would suggest that, within camps, individuals are judged on the degree to which they contribute to this sense of communitas. While usual markers of identity (sex, race, status) do not disappear (and likely play roles in camp communities), camp communitas provides a unique time and space for altered and idyllic social norms. Not surprisingly, the camp community is seen as one of the most outstanding and positive aspects of working at summer camp (Meier & Mitchell, 1993; Ross, 2009).
The Camp Counsellor

The role of the counsellor seems central to the ongoing delivery of the camp promise (i.e., fun, belonging, positive character development). The management of ongoing interpersonal dynamics is largely assigned to young summer camp counsellors. Yet very few studies exist that consider the experiences of camp counsellors as they fulfill their many roles within camp settings. Bialeschki et al. (1998) conducted one of the few studies on camp counsellors. They sought to explore the perceived benefits associated with summer camp staff experiences. An analysis of the positive outcomes suggested that camp staff benefited by making friends, learning about diversity, developing teamwork skills, and experiencing personal growth (Bialeschki et al., 1998).

However, the study also identified concerns over fundamental working conditions in camp settings.
For example, those counsellors interviewed felt that they deserved higher wages and more privacy in light of the level of responsibility and intensity of effort that was required of them (Bialeschki et al., 1998). They reported that part of the concern also related to being acknowledged for the hard work done. Overall, the staff felt undervalued. The findings of this study suggested that industry practitioners and researchers alike have much to gain from examining the experiences of camp staff and how best to support them. However, the issues may run deeper than wages and privacy. The complexity and intensity of residential camps can challenge and disrupt. According to Olaveson, "a very intense social life always does a sort of violence to the individual's body and mind and disrupts their normal functioning. This is why it can last for only a limited time" (2001, p. 100). The stripping away of familiar roles and statuses necessary for anti-structural reality, in which communitas may emerge, can make camp employees vulnerable.

While vulnerable, camp counsellors are immersed in the production and support of camp liminality for days and weeks on end. During that time, they are responsible for the everyday care and safety of campers in their charge (American Camping Association, 1993; Meier & Mitchell, 1993). Camp managers call on counsellors to act as a camper's friend, parent, therapist, and teacher (Ross, 2009). Camp counsellors, not much older than campers (16-25 years old) themselves, are expected to facilitate personalized opportunities for campers to gain skills and improve self-concepts through positive and meaningful camp experiences.

Within this context, this article seeks to explore the formation of liminality and the emergence of communitas for camp counsellors. How do counsellors experience their many roles within the liminal space offered by residential camps? Insights were gathered from camp counsellors' own reflections of the processes and outcomes that characterize the residential camp experience.

These reflections were used to identify key issues within the counselling process and the challenges camp counsellors faced in living and working in camp liminality.
Research Design
This qualitative study draws on research materials from 38 in-depth interviews with camp counsellors. These were woven together to create a narrative that reflects at least part of the counsellor story. Several sampling techniques (convenience, purposive, and snowball) (Neuman, 2006) were used to generate a pool of interview participants. Interview participants were selected with particular attention to diversity among demographic characteristics, amount of time worked at camps, and time since their last employment experience.

Interview participants had worked a minimum of one summer (approximately two months on a seasonal contract) upwards to "many years" in full-time camp employment. The greatest number of interview participants had three years of camp counsellor experience (16%), followed by one and five years (13% each). Interview participants ranged from current employees (18%, usually full-time) or having just finished work two months prior (47%) to 10+ years since they worked at camp (8%). Interview participants were 17 to 59 years old, with the majority 19 to 24 years old (40%) followed by 25-29 years old (21%). The majority of interview participants were female (63%). Interview participants were well educated and held undergraduate degrees (38%) or postgraduate degrees (29%).

I conducted interviews across southwestern Ontario, Canada using a semi-structured interview guide over a 1-month period. The interviews were digitally recorded. Interviews were transcribed and manually coded at an initial stage and then again using NVivo 10 software to manage the large volume of text. I analysed the research materials for initial themes (camp rituals or the development of community) as well as discursive practices (how liminality was articulated through particular statements). Discursive analysis was employed as a complementary approach (Symon & Cassell, 2012). In particular, I concentrated on the emotional language and metaphors used to articulate the experiences of living and working within the camp context.

This study was conducted by pairing a modified grounded theory method and reflexive methodology. Grounded theory methods offer "systematic, yet flexible guidelines for collecting and analysing qualitative data" that "construct theories 'grounded' in the data themselves" (Charmaz, 2006, p. 2). That is, I looked for commonalities to produce themes among the participants' narratives as well as unusual stories. However, my grounded theory was modified, as I did not seek to generate theories or assume I held some superior authority on the matter.

Instead I drew on a reflexive methodology to call into question my own interpretation, and that of others, of the meaning of camp counsellors' experiences. Reflexive methodology suggests an approach that recognizes a multiplicity of influences in the meaning making of the research process (Alvesson & Skoldberg, 2000). Additionally, reflexive methodology draws attention to the contribution of the researcher's dialogue with the research subject, literature, research participants, herself and the reader in "the process of research and in the (final) textual product" (Alvesson & Skoldberg, 2000, p. 249).
While much of this process is not evident in the discussion that follows, the questioning of the construction of knowledge was implicit to the production of this study. I chose to present the following discussion in a more traditional format with critical and creative inclusions (i.e., personal narratives). A reflexive methodology has not only been beneficial but necessary throughout the research process as I attempted to weave many texts together into a coherent multivoiced research narrative while staying critical and accountable to the various perspectives that informed my study of camp counsellor experiences of liminality and communitas. Charmaz argues that grounded theory methods can "complement other approaches to qualitative data analysis" and that is why they "continue to appeal to qualitative researchers with varied theoretical and substantive interests" (2006, p. 9). Ultimately, "when combined with insight and industry, grounded theory methods offer sharp tools for generating, mining and making sense of data" (Charmaz, 2006, p. 15). I begin the results with a personal narrative. Through this narrative I seek to situate the reader within the liminal space offered by a typical residential camp.
Welcome to the Bubble
And now the driveway, the very long driveway, climbs over the landscape and down into the main campsite. It's a steep hill down and up and down again. The pond is on the right, then the director's house is on the left, which is followed by the field (where we played sponge wars), and then the little pool, and, oh! The freezing rush of early morning polar bear dips comes over me just thinking about that pool. On the right, as you come into the main area, are the camp office and dining hall; homemade macaroni and cheese (a mealtime staple) is still my favorite. I can stand on the spot there, in the parking lot, and turn and "camp" is all around me; the surrounding green of grass, lakes, and trees, the cold pool water on my skin, the smells from the kitchen, and the sounds of happiness in the air (personal narrative).
I use this narrative to introduce what many interview participants referred to as "the bubble." To many participants, the term was obvious, common, and normal parlance of camp folk. When camp people speak about the bubble, they are referring to camp being a physically and socially exclusive place apart from their real lives, or rather, the lives they lead outside of camp. Camp counsellors even talked about their non-camp lives as "the real world," illustrating how camp is somehow perceived as having a degree of fantasy or fiction. For example, Andy commented, "You literally live in a little bubble. I mean you really don't hear anything that's going on in the outside world." The following sections will discuss the formation, benefits, and challenges of camp communitas for camp counsellors. The camp bubble is such a unique experience that Terri said, "those who are inside it [camp] can't explain it, and those who are outside it can't understand it." For those of you unfamiliar with camp or those intimate with it, I'd like to welcome you (back) to the bubble!
Camp Liminality and Communitas
And that's sort of where the driveway ends and the rest of the camp takes over. That funny little shack that's inscribed with my cousin's name and the woods . . . what was it called again? Something romantic, like Beech hollow or Oak valley. It's where the kitchen staff used to eat lunch. Then staff quarters for those who didn't have a cabin with kids. The toilet and shower block. Then "the horseshoe," with little cabins ringing the edges, nestled neatly into the forest. The craft shack and a set of swings sits in the middle. In my childhood, I believe, there was other play equipment in the middle too (personal narrative).

The bubble occupies physical space that is signified by the natural geography of campsites and their surroundings. Elements such as gates, signs, gravel driveways, and tree-lined avenues demarcate the borderlines of camp liminal space (Preston-Whyte, 2004; Weber, 1995). The trees, for example, separate camp and reinforce the geographic isolation of camp from the outside world. Yet it is the meanings that are attributed to these natural features that help to create an ephemeral and liminal sphere around camp space and time. Romantic notions of pure nature and rural idyll pervaded accounts of camp experiences. For example, Tess recounted a love of "lying out on the dock in the middle of the night watching the stars with friends." Henry described the joy of playing in mud puddles and the spontaneity of weather-dependent activities.

Rebecca and Lisa both reflected on moments in the shade of great trees. Childlike awe and appreciation existed in the interviewees' statements about the intrinsic beauty of the natural environments of their camps. Each one drew on special memories of the natural environment such as trees, rivers, rocks, lakes, and woods. The wonder-making of nature in camp experiences adds to the "dreamtime" of its liminality (Preston-Whyte, 2004). The physical elements and discourses of nature work to reinforce participants' sense of camp being in a unique liminal space while bolstering their desire to stay and participate.

The allocation of physical structures, or camp space, also adds to the construction of liminal camp experiences. As Hall (1959) states, "space speaks" (p. 147). The physical layout of buildings is crucial in developing a context within which altered expectations emerge (Edwards, 1998). The horseshoe of cabins described in the preceding personal narrative is an example of how buildings were organized in ways to reinforce inclusion and belonging to the camp community. Camps' close living quarters also contributed to the establishment of relationships.
Counsellors shared living quarters with eight to 10 campers and up to two other camp counsellors. While this situation can be challenging, camp accommodation helped to form strong bonds. April said: "We just know each other inside and out; you know them so well. And it's on a different level than if you went to school with them. You're living in contact with this person 24/7 and even though it may only be for 4 weeks out of the year, you know that person. You really, really know that person."
The resulting investment of camp participants in the communal aspect of camp life gives rise to intense passions and emotions by "bring[ing] all those who share them into a more intimate and more dynamic relationship" (Olaveson, 2001, p. 94). Andy echoes this notion about camp community: "You live, breathe, eat together, you do everything together," so campers and counsellors "form bonds and you have good memories." By being together at camp, the relationships typical of communitas were formed.
Camp participants were encouraged by the geographical and social isolation of camp experiences to step outside their structural roles and obligations (Sharpe, 2005) and invest in the camp community. Several interviewees suggested that personal communication devices, which have become mainstays of youth experiences, were often banned in the hopes of creating more cohesion among camp counsellors. Unplugging from one's multifarious and fast-paced life was desired as a means of investing fully in camp relationships. The liminality of camp space and time, including the limitations of contact with home, contributed to the possibility of experiencing communitas. David reinforced that the isolation from the news of the world (personal and global) intensified the camp experience, because there was little broader information to hinge one's perspectives of reality on: "Like Canada could be invaded and we would have no idea . . . unless somebody called." Camp participants build relationships with one another in the interest of experiencing community and in the absence of other, more familiar, relations. Many interviewees stated that camp was seen as an opportunity to make deep social connections and was about "that sort of opportunity to connect with other people who are like you" (Terri). Trudy viewed camp as primarily an opportunity to "make deep connections with people that really matter" and who "understand you and accept you." The limitations and regulation of contact with home contributed to the power and benefits of experiencing camp communitas.
Belonging and Acceptance -The "Feel" of Communitas
The whole site was filled with sound. Birds and insects? Yes, but the air was perforated with children laughing and yelling and having a great time. You could be noisy at camp. In fact, being noisy made you king! If you could sing loudly or shout with enthusiasm then you belonged to the musical and social fabric of this world. And songs . . . we sang all the time. We sang to wake and eat and wait and walk and play and sleep. The constellations of sounds that engulfed us at camp were heady and giddy at times, soothing and reassuring at others. Music set the rhythm and pace of the day (personal narrative).
Belonging was one of the most pervasive responses about the benefits of camp communitas.
The strong emotional connections with others, such as "that really strong connect of friends at camp" (April), were unanimously described as positive. Rebecca saw the objective of camp as "building community, even for a short period of time, with new people" and "definitely about making new friends." Belonging and acceptance were highly prioritized social objectives and an important aspect for all camp counsellors, regardless of camp hierarchy and staffing structures.
Many staff described that relationships at camp were accelerated, positive, and intensified by simply being at camp. In fact, Olaveson described the relationships of communitas as ". . . an exceptionally powerful stimulant. Once the individuals are gathered together, a sort of electricity is generated from their closeness and quickly launches them to an extraordinary height of exaltation. Every emotion expressed resonates without interference in consciousnesses that are wide open to external impressions, each one echoing the others" (2001, p. 99). The harmonious and unfettered nature of relationships within this context was an expected and accepted outcome of living in a utopian-like community, as described by several counsellors.
Singing, such as that described in the narrative above, helped to reinforce belonging. For example, Lisa said she always enjoyed the singing at morning circle, because it would bring "the whole camp together." Even the friendship bracelets made and steadfastly worn at camp were powerful reminders of one's social connection and belonging to camp. The belonging of camp communitas can be an emotionally powerful experience.
There is also rich potential inherent in liminal periods for an emergent sense of freedom. Such freedom can be intoxicating and can contribute to spiritual rebirth, transformation, and recuperation (Preston-Whyte, 2004). April's account most typified the genre of life-changing narratives that were commonly told about summer camp experiences: "This is a wonderful story. I'm not going to do it justice. You would have to see the transformation yourself. We had a boy that . . . had a whole range of learning disabilities . . . And he came as the most unstable; he was a mess. He was really a mess. And then we worked hard with him for a month . . . he just completely transformed. It's like it [camp] had gotten rid of his problems . . . I think camp saved him."

Most transformation narratives told by interview participants were about their campers. Camp counsellors were positioned and felt responsible for the positive transformation of campers in their care. Camp counsellors were expected to influence the transformation of campers. Eric illustrated how he viewed his role in the positive self-transformation of his campers: ". . . seeing a real positive change . . . it just makes you feel that much better about helping these kids because you're a positive value to them. And I really felt that was the biggest highlight: just helping these kids realize they can do so much in the world."

Eric identified the benefits of his role as being of "positive value to them." Camp counsellors appeared to take up the responsibility to improve or transform campers, as April explained: "you're trying to change these kids' lives." Stories that recount the improvement, if not transformation, of camp selves are pervasive in camp texts (e.g., popular culture, research, camp marketing). Camp discourses about the transformative self suggest that campers and camp counsellors have a will to change in profound ways due to camp experiences.
Challenges Faced by Camp Counsellors
Communitas in camp environments happened out of necessity to a certain degree. Given the relative isolation from familiar supportive relationships, staff mentioned that making new friends was essential and needed to be done quickly. Developing friendships came easier for some staff than for others. Matthew, for example, said, "I'm shy, so camp wasn't that fun for me. I wasn't outgoing enough to be a favorite." There was immense pressure to fit in, make friends, and belong in new camp environments for both camp counsellors and campers. For example, Steph revealed: "I don't like the first day. You know, when everyone gets to know you; it's so crucial. I guess I'm just one of those people who is shy . . . I always feel for the kids the first night. You know, they show up and they just want to be accepted into the group, they want to be treated well, and they've had this experience built up for them by all the people who've already done it . . . 'You'll love it, these are going to be the people you'll make friends with for the rest of your life,' and just the pressure of that."

Within such a closed community, staff could never truly escape issues and challenges. Both David and Beth talked about how, at camp, it was easy to make too much of something small: "I think that a huge part is the fact that the camp community is usually in a residential setting, such a closed community, that things that wouldn't be issues in other professions get magnified because it's such a tiny environment . . . I think in most professions you go home, have dinner, go to a movie, talk to your friends and come back the next day and have a fresh perspective but at camp, you stew about it and you get worried about it and it builds" (Beth). With little time or options to be away from one another, summer camps became a social pressure cooker for the staff that remained on site all summer.
The isolation of camp liminality created a socio-relational vacuum where campers looked to camp counsellors to fill the absences of familiar relationships. Camp counsellors were asked to play the roles of parent, brother/sister, friend, teacher, and mentor to the campers in their care.

Ross's camp counsellor textbook includes chapters titled "The Counsellor as . . . the Leader, the Member of the Staff Team, the Teacher, the Disciplinarian, and the Risk and Crisis Manager" (2009, p. vii-viii). Camp counsellors in this study said they were not only expected to play these complex roles but must perform them with maturity, judgement, awareness, and responsibility.

Sophie also added "being really positive" and Tom added "hard work" as a "personality thing" to the list of leadership qualities. Grant recalled a sense of awe for camp counsellors: "I think when you're a camper, you sort of have a vision of them being almost like a god." Being a camp counsellor, according to these accounts, was a lofty task and required near godliness.

The 24/7 nature of camp was mentioned by many of the interviewees. This condition required that camp counsellors were always "on," thus creating pressures for them to selflessly and tirelessly care for campers in a continuous positive and fun manner. The emotional demands on camp counsellors, such as caring, nurturing, being a parent and friend to campers, were largely invisible and taken for granted. Yet there was no denying that the emotional demands placed on camp counsellors were real, as evidenced by Zoey's powerful comments: "That was probably when I hit a wall, there's too much emotion and I'm too invested . . . I think if you are going to do a good job, you are going to care. That doesn't necessarily mean you need to bawl your eyes out for an hour when you are sad, but I think that is valid . . . I think that's probably the only time that there's been a huge wall of me being so exhausted emotionally and physically."

Andy and Terri both commented on the fatigue and exhaustion of camp counsellors. Terri described the challenge to stay "happy" when she felt worn out as her summer employment progressed: ". . . like stretching every last ounce of anything you have, so you become short, and that's not who you want to be, but it's all you have. You feel so bad for it, but you also need to realize that it's natural, and you shouldn't be expected to be happy-go-lucky, like that's not fair either, I don't think."

Terri felt guilty for not being happy when she was tired, and Elissa suggested that the "happy bubble" that people call camp is, in fact, "a fragile bubble, and it can burst so easily." While I acknowledge the power of liminality in the camp experience, I also wish to acknowledge the toll these experiences can exact from camp counsellors. The very separateness of camp experiences was both a great benefit and a great burden. These data suggested the importance of exploring camp counsellors' experiences of living in and delivering camp communitas.
Conclusions and Implications
The intensity of summer camp's social community raises questions about camp counsellors' opportunities for personal time and self-care within camp contexts. The emotional demands placed on camp counsellors have implications for the mental wellbeing of staff. Camp owners and managers should consider significant steps to support and care for camp counsellors. By doing so, these counsellors can, in turn, offer better care for campers. Failing to care for counsellors may jeopardize not only the wellbeing of the camp workforce but also staff capacity to deliver camp.
(Re)Framing Camp Counsellors
I invite readers to (re)frame their understanding of camp counsellors. I wish to start by disrupting assumptions about the unitary nature of experience and subjectivity (Weedon, 2004) and recognize that individuals are complex. Camp counsellors' perceptions, experiences, and responses can be dynamic, messy, and even contradictory (Wearing, 1998). Additionally, camp counsellors work within a complex network of relationships (Marshall, 1997). Camp counsellors must navigate relations that include campers, parents, peers, and managers as well as practices, policies, historical influences, and governing systems that are often invisible. Camp counsellor experiences are shaped through interactions with multiple sources of diverse, often invisible but influential, relationships of power and camp discourses.

Camp counsellors must find support in people they have only just met and who are also embedded in the same complex social reality as they are. Moreover, expectations that everyone will belong and experience personal growth as a result of camp experiences put immense pressure on camp counsellors to deliver, with little consideration of how they will cope. This study suggests that camp researchers, industry leaders, and camp managers should seek greater understanding of the dynamics surrounding counsellors and camp liminality. Specifically, they would benefit from better understanding the emotional demands the delivery of communitas places on camp counsellors. Most importantly, this study raises awareness of the pressures created as counsellors attempt to create camp communitas.
Implications for Practitioners
The liminality and communitas of camp can provide significant benefits to both campers and camp counsellors. Camps offer a wide variety of measures that help build and maintain that communitas. Staff may benefit from efforts that help them feel like they belong in the camp culture. Practices that help nurture this feeling include the availability of camp merchandise, clothing or uniforms; camp photos; staff social events; rituals like mealtime announcements or campfire; staff contracts and manuals; mentorships; staff encouragement activities (i.e., warm fuzzies); and daily staff meetings. As discussed in this article, physical aspects such as isolated geographies (e.g., rural, forested or water-only access), reduced contact with friends and family outside of camp (such as limited internet access), and building placement ('U' or circle shaped) can contribute to a sense that camp space is special. While these measures are helpful, they may be dwarfed by the roles played by counsellors. Counsellors put a human face on camp experiences. Camp managers are very much aware of this and seek to hire people who exhibit particular personality traits (e.g., outgoing, high energy, friendly, altruistic).
More than that, camp managers may seek to develop the skills and talents of those on staff.
One camp director in this study spoke of the extensive leadership development programs and alumni networks that helped to maintain the camp's culture from season to season and year to year. While some of these practices appear to be small and/or unintentional, it is often the everyday practices that influence behavior, particularly within liminal spaces where there is a lack of other influences to distract participants. When all staff have a clear objective for the delivery of camper experiences (fun, belonging, and personal growth), then the aspects (great or small) of a camper's experiences will likely align.
Camp counsellors face many of the same challenges as campers in becoming oriented in the new social reality that camp liminality creates (new friends, new identities, new roles and statuses). However, they face additional challenges. They are under pressure to perform many complex roles of care (parent, sibling, friend, teacher, mentor, risk manager). They are expected to fulfill these roles through displays, or even embodiment, of happiness, positivity, genuine care, and fun, with few opportunities for rest or renewal away from campers (and even fewer away from peers). Consequently, the effects of the emotional demands placed on camp counsellors can be significant.
Practitioners have a significant responsibility to ensure youth employment experiences are healthy and appropriate. They must engage reflexively with the practices and discourses that shape camp counsellors' experiences of liminality and communitas. For example, camp counsellors feel pressure to change the lives of campers despite their own youth and inexperience. This expectation suggests that camp counsellors require a good deal of training, guidance, and support in their employment roles. Everyday employment practices of youth and young adults need to encourage reasonable expectations for development, boundaries, private time and space, as well as adequate emotional rest. Some practical solutions to these challenges, revealed through the broader study from which this paper is drawn, include inverted staffing structures where the most youthful staff perform planning and logistic roles while the most experienced staff perform contact roles (i.e., camp counsellors). Another option is to schedule a revolving staff roster where each employee is offered a contract shorter than the season (e.g., six out of an eight-week summer), ensuring that every staff member gains sufficient rest and private time away from the site. While no single or universal solution exists, there is a need for engagement with more research, dialogue and self-reflective practices to work toward ethically addressing and supporting youth employment and the delivery of extraordinary community experiences.
Future research would benefit from exploring the emotional demands that the delivery of communitas places on camp counsellors. The findings from a study of this nature could lead to better employment practices that support the growth and performance of youthful camp staff.
A need exists for further research into the employment practices that best support the complex and challenging work of camp counsellors, together with a better understanding of camp counsellor experiences.
The Impact of Preprocessing on Deep Representations for Iris Recognition on Unconstrained Environments
The use of the iris as a biometric trait is widespread because of its high level of distinction and uniqueness. Nowadays, one of the major research challenges is the recognition of iris images obtained in the visible spectrum under unconstrained environments. In this scenario, the acquired irises are affected by capture distance, rotation, blur, motion blur, low contrast and specular reflection, creating noise that disturbs iris recognition systems. Besides delineating the iris region, preprocessing techniques such as normalization and segmentation of noisy iris images are usually employed to minimize these problems, but these techniques inevitably run into some errors. In this context, we propose the use of deep representations, more specifically, architectures based on VGG and ResNet-50 networks, for dealing with the images using (and not using) iris segmentation and normalization. We use transfer learning from the face domain and also propose a specific data augmentation technique for iris images. Our results show that the approach using non-normalized and only circle-delimited iris images reaches a new state of the art in the official protocol of the NICE.II competition, a subset of the UBIRIS database, one of the most challenging databases for unconstrained environments, reporting an average Equal Error Rate (EER) of 13.98%, which represents an absolute reduction of about 5%.
I. INTRODUCTION
Biometrics has many applications such as verification, identification, duplicity verification, which makes it an important research area. A biometric system basically consists of extracting and matching distinctive features from a person. These patterns are stored as a new sample which is subsequently used in the process of comparing and determining the identity of each person within a population. Considering that biometric systems require robustness combined with high accuracy, the methods applied to identify individuals are in constant development.
Biometric methods that identify people based on their physical or behavioral features are interesting due to the fact that a person cannot lose or forget its physical characteristics, as can occur with other means of identification such as passwords or identity cards [1]. The use of eye traits becomes interesting because it provides a framework for noninvasive screening technology. Another important factor is that biomedical literature suggests that irises are as distinct as other biometric sources such as fingerprints or patterns of retinal blood vessels [2]. Research using iris images obtained in near-infrared (NIR) showed very promising results and reported low rates of recognition error [1], [3]. Currently, one of the greatest challenges in iris recognition is the use of images obtained in visible spectrum under uncontrolled environments [4], [5]. The main difficulty in iris recognition using these images is that they may have problems such as noise, rotation, blur, motion blur, low contrast, specular reflection, among others. Generally, techniques such as normalization [6] and segmentation [4] are applied to correct or reduce these problems.
With the recent development of deep learning, some applications of this methodology in periocular [7], [8] and iris recognition [9]- [13] have been performed with interesting results being reported. The main problem with deep network architectures when trained in small databases is the lack of enough data for generalization usually producing over-fitted models. One solution to this problem is the use of transfer learning [14]. Another action is the use of data augmentation techniques [15], [16].
In this context, the goal of this work is to evaluate the impact of image preprocessing for deep iris representations acquired on uncontrolled environments.
We evaluated the following preprocesses: iris delineation, iris segmentation for noise removal, and iris normalization. The delineation process defines the outer boundary of the iris and the inner one (i.e., the pupil). The segmentation process removes the regions where the iris is occluded. Finally, the normalization process consists of transforming the circular region of the iris from the Cartesian space into the polar one, resulting in a rectangular region.

To improve generalization and avoid overfitting, two Convolutional Neural Network (CNN) models trained for face recognition were fine-tuned and then used as iris representations (or features). From the comparison of the obtained results, we can observe that the approaches using only delineated but non-normalized and non-segmented iris images as input for the networks generated new state-of-the-art results for the official protocol of the NICE.II competition. We also evaluate the impact of non-delineated iris images, i.e., the squared bounding box. We chose the NICE.II competition, which is a subset of the UBIRIS.v2 database, to evaluate our proposal because it is currently one of the most challenging databases for iris recognition in uncontrolled environments, presenting problems such as noise and different image resolutions.
Since iris recognition may refer to both identification and verification problems, it is important to point out that this paper addresses the verification problem, i.e., to verify if two images are from the same subject. Moreover, the experiments are performed on the NICE.II competition protocol which is a verification problem.
The remainder of this work is organized as follows. In Section II, we described the methodologies that achieved the best results in NICE.II so far and works that employed deep learning for iris recognition. Section III describes the protocol and database of NICE.II, and the metrics (i.e., Equal Error Rate (EER) and decidability) used in our experiments for comparison. Section IV presents and describes how the experiments were performed. The results are presented and discussed in Section V, while the conclusions are given in Section VI.
II. RELATED WORK
In this section, we briefly survey the best performing works in the NICE.II competition and then describe deep learning approaches for iris recognition.
A. The NICE.II competition
Among eight participants ranked in the NICE.II competition, six of them proposed methodologies considering only iris modality [17]- [22]. The other two fused iris and periocular region modalities [23], [24]. All these approaches were summarized in the paper "The results of the NICE.II Iris biometrics competition" [25] which appeared in 2012.
Taking advantage of periocular and iris information, the winner [23] of the NICE.II reported a decidability value of 2.57. Their methodology consists of image preprocessing, feature extraction from the iris and periocular region, matching of iris and periocular region data, and fusion of the matching results. For iris feature extraction, ordinal measures and color histogram were used, while texton histogram and semantic information were employed for the periocular region. The matching scores were obtained using SOBoost learning [26] for ordinal measures, diffusion distance for color histogram, chi-square distance for texton histogram, and exclusive-or logical operator for semantic label. The approaches were combined through the sum rule at the matching level.
Using only features extracted from the iris, the best result in NICE.II achieved a decidability value of 1.82 and EER of 19%, ranking second in the competition [17]. In that work, Wang et al. first performed the segmentation and normalization of the iris images using the methodology proposed by Daugman [6]. Then, according to the normalization, irises were partitioned into different numbers of patches. With the partitions, features were extracted using Gabor filters. Finally, the AdaBoost algorithm was applied to select the best features and compute the similarity. To the best of our knowledge, this is the state of the art on the NICE.II database when only the iris is used as the biometric trait.
Another work using only iris features for recognition with images obtained in visible wavelength and in unconstrained environments is presented in [27]. Their methodology consists of four steps: segmentation, normalization, feature extraction and matching with weighted score-level fusion. Features such as wavelet transform, keypoint-based, generic texture descriptor and color information were extracted and combined. Results were presented in two subsets of UBIRIS.v2 and MobBIO databases, reporting an EER of 22.04% and 10.32%, respectively. Even though this result is worse than the best one obtained in the NICE.II, which is a subset of UBIRIS.v2 database, a direct comparison is not fair because the data used might be different.
B. Deep learning in iris recognition
Deep learning is one of the most recent and promising machine learning techniques [28]. Thus, it is natural that there are still a few works that use and apply this technique in iris images. We describe some of these works below.
The first work applying deep learning to iris recognition was the DeepIris framework, proposed by Liu et al. [9] in 2016. Their method was applied to the recognition of heterogeneous irises, where the images were obtained with different types of sensors, i.e., the cross-sensor scenario. The major challenge in this field is the existence of a great intra-class variation, caused by sensor-specific noises. Thus, handcrafted features generally do not perform well in this type of database. The study proposed a framework based on deep learning for verification of heterogeneous irises, which establishes the similarity between a pair of iris images using CNNs by learning a bank of pairwise filters. This methodology differs from the handcrafted feature extraction, since it allows direct learning of a non-linear mapping function between pairs of iris images and their identity with a pairwise filter bank (PFB) from different sources. Thereby, the learned PFBs can be used for new and also different data/subjects, since the learned function is used to establish the similarity between a pair of images. The experiments were performed in two databases: Q-FIRE and CASIA cross sensor. Promising results were shown to be better than the baseline methodology, with EER of 0.15% in Q-FIRE database and EER of 0.31% in CASIA cross-sensor database.
Gangwar & Joshi [10] also developed a deep learning application for iris recognition in images obtained from different sensors, called DeepIrisNet. In their study, a CNN was used to extract features and representations of iris images. Two databases were used in the experiments: ND-iris-0405 and ND-CrossSensor-Iris-2013. In addition, two CNN architectures were presented, namely DeepIrisNet-A and DeepIrisNet-B. The former is based on standard convolutional layers, containing 8 convolutional layers, 8 normalization layers, and 2 dropout layers. DeepIrisNet-B uses inception layers [29] and its structure consists of 5 layers of convolution, 7 of normalization, 2 of inception and 2 of dropout. The results presented five comparisons: effect of segmentation, image rotation analysis, input size analysis, training size analysis and network size analysis. The proposed methodology demonstrated better robustness compared to the baseline.
The approach proposed by Al-Waisy et al. [11] consists of a multi-biometric iris identification system, using both left and right irises from a person. Experiments were performed in databases of NIR images obtained in controlled environments. The process has five steps: iris detection, iris normalization, feature extraction, matching with deep learning and, lastly, the fusion of matching scores of each iris. During the training phase, the authors applied different CNN configurations and architectures, and chose the best one based on validation set results. A 100% rank-1 recognition rate was obtained in SDUMLA-HMT, CASIA-Iris-V3, and IITD databases. However, it is important to note that this methodology only works in a closed-world problem, since the matching score is based on the probability that an image belongs to a sample of a class known in the training phase.
Also using NIR databases obtained in controlled environments, Nguyen et al. [12] demonstrated that generic descriptors using deep learning are able to represent iris features. The authors compared five CNN architectures trained on the ImageNet database [30]. The CNNs were used, without fine-tuning, for the feature extraction of normalized iris images. Afterward, a simple multi-class Support Vector Machine (SVM) was applied to perform the classification (identification). Promising results were presented on the LG2200 (ND-CrossSensor-Iris-2013) and CASIA-Iris-Thousand databases, where all the architectures report better recognition accuracy than the baseline feature descriptor [31].
One can also find applications of deep learning to the periocular and sclera regions, using images captured in uncontrolled environments [7], [8], [38].
In all works found in the literature that apply deep learning for iris recognition, the input image used is the normalized one, where a previous iris location and segmentation is also required before the normalization process. In this work, we evaluate the use of different input images for learning deep representations for iris recognition (verification). The input images were created using three preprocesses: iris delineation, iris segmentation for noise removal and iris normalization.
III. PROTOCOL AND DATABASE
In this section, we describe the experimental protocol and database proposed in the NICE.II competition, which was used to evaluate the proposed methodology in this paper and in some of the related works.
The first iris recognition competition created specifically with images obtained in the visible spectrum under uncontrolled environments is the Noisy Iris Challenge Evaluation (NICE). The NICE competition is separated into two phases. The first one, called NICE.I (2008), was carried out with the objective of evaluating techniques for noise detection and segmentation of iris images. In the second competition, NICE.II (2010), the encoding and matching strategies were evaluated. The NICE images were selected as a subset of a larger database, UBIRIS.v2 [39], which in turn comprises 11,102 images and 261 individuals.
The main goal of NICE.I [4] was to answer the following question: "Is it possible to automatically segment a small target as the iris in unconstrained data (obtained in a noncooperative environment)?". The competition was attended by 97 research laboratories from 22 countries, which received a database of 500 images to be used as training data in the construction of their methodologies. These 500 iris images were made available by the organizer committee, along with segmentation masks. The masks were used as ground truth to assess the performance of iris segmentation methodologies. For the evaluation of the submitted algorithms, a new database containing 500 iris images was used to measure the pixel-to-pixel agreement between the segmentation masks created by each participant and the masks manually created by the committee. The performance of each submitted methodology was evaluated with the error rate, which gives the average proportion of incorrectly classified pixels.

In order to guarantee impartiality and evaluate only the results of the feature extraction and matching, all the participants of the second phase of NICE (NICE.II) used the same segmented iris images, which were obtained with the technique proposed by Tan et al. [40], winner of NICE.I [4]. The objective of NICE.II was to evaluate how different sources of image noise obtained in an uncontrolled environment may interfere in iris recognition. The training database consisted of 1,000 images along with the corresponding segmented iris masks. The task was to build an executable that received as input a pair of iris images and their respective masks, generating as output a file with the corresponding iris dissimilarity scores (d). The d metric had to follow some conditions set by the competition. For the evaluation of the methodologies proposed by the participants, another unknown database containing 1,000 images and masks was employed; some samples randomly selected from these images are shown in Fig. 2. The all-against-all comparisons produce two sets, D_I = {d_i1, . . ., d_ik} and D_E = {d_e1, . . ., d_ek}, of dissimilarity scores, respectively, for the cases where id(I_i) = id(I_j) and id(I_i) ≠ id(I_j). The evaluation of the algorithms was performed using the decidability score d' [41].

The decidability index d' measures how well separated the two types of distributions are, so the recognition error corresponds to their overlap area. It is computed as

d' = |µ_E - µ_I| / sqrt((σ_I² + σ_E²) / 2),

where the means of the two distributions are given by µ_I and µ_E, and σ_I and σ_E represent the corresponding standard deviations. Since the goal of the competition is iris recognition in noisy images obtained in an uncontrolled environment, the methodology presented in this paper focuses only on feature extraction and matching using iris information. As shown in previous studies [23], [24], [42], [43], promising results were achieved using the fusion of iris and periocular modalities, and since this fusion depends on the quality of each modality, it is interesting to study each one of them independently, i.e., using only iris or periocular information.
Considering that the decidability metric measures how discriminating the extracted features are, we have used it to compare our method with the state of the art. We also report the EER, which is based on the False Acceptance Rate (FAR) and the False Rejection Rate (FRR). The EER is determined by the intersection point of FAR and FRR curves. All reported results were obtained in the official test set of the NICE.II competition.
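For concreteness, a small sketch of how the two reported metrics can be computed from the intra-class (genuine) and inter-class (impostor) dissimilarity scores follows; this is an illustration, not the official NICE.II evaluation tool, and the variable names are ours.

```python
import numpy as np

# Sketch (not the official NICE.II scorer) of the two reported metrics,
# computed from intra-class (genuine) and inter-class (impostor) scores.
def decidability(intra, inter):
    """Decidability d' between the two score distributions."""
    mu_i, mu_e = np.mean(intra), np.mean(inter)
    s_i, s_e = np.std(intra), np.std(inter)
    return abs(mu_e - mu_i) / np.sqrt((s_i ** 2 + s_e ** 2) / 2.0)

def eer(intra, inter):
    """Equal Error Rate: the point where the FAR and FRR curves cross."""
    thresholds = np.sort(np.concatenate([intra, inter]))
    far = np.array([(inter <= t).mean() for t in thresholds])  # impostors accepted
    frr = np.array([(intra > t).mean() for t in thresholds])   # genuines rejected
    i = np.argmin(np.abs(far - frr))
    return (far[i] + frr[i]) / 2.0
```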
IV. PROPOSED APPROACH
The proposed approach consists of three main stages: image preprocessing, feature extraction, and matching, as shown in Fig. 4. In the first stage, segmentation and normalization techniques are applied, and data augmentation is performed on the preprocessed images to increase the number of training samples. Feature extraction is performed with two CNN models, which were fine-tuned using the original images and the images generated through data augmentation. Finally, the matching is performed using the cosine distance. These processes are described in detail throughout this section.
A. Image preprocessing
In order to analyze the impact of the preprocessing, six different input images were generated from the original iris image, as shown in Fig. 3. In the first image scheme, irises are normalized with the standard rubber sheet model [6] using an 8:1 aspect ratio (512 × 64 pixels). In the second image scheme, the iris images are also normalized; however, instead of the standard 8:1 ratio they were rearranged in a 4:2 ratio (256 × 128 pixels), so that less interpolation is employed in the resizing process. In the third and last scheme, no normalization is performed, applying only the delineated iris images as input to the models. Non-normalized images have different sizes, according to the image capture distance. The impact of the segmentation technique for noise removal was also evaluated in all representations, which yields the six input variants. Note that all the iris images used as inputs for the feature representation models are resized using bi-cubic interpolation to 224 × 224 pixels. The normalization through the rubber sheet model [6] aims to obtain invariance with respect to size and pupil dilation. In the NICE.II database, the main problem is the difference in iris size due to distances in the image capture. It is important to note that in non-normalized images, we use an arc delimitation preprocessing (i.e., two circles, an outer and an inner) based on the iris mask.
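As a sketch of the rubber-sheet normalization step, the function below unwraps the annular iris region into a rectangle by sampling along radial lines between the pupil and limbus circles. The circle parameters and the helper name are illustrative, not taken from the authors' code.

```python
import numpy as np
import cv2  # used here only for the remapping step

def rubber_sheet(img, xp, yp, rp, xi, yi, ri, out_w=512, out_h=64):
    """Toy rubber-sheet unwrapping. The circle parameters are assumed
    given: pupil centre/radius (xp, yp, rp) and iris (limbus) centre/radius
    (xi, yi, ri). out_w x out_h = 512 x 64 matches the 8:1 scheme;
    256 x 128 would match the 4:2 scheme."""
    thetas = np.linspace(0.0, 2.0 * np.pi, out_w, endpoint=False)
    radii = np.linspace(0.0, 1.0, out_h)
    # Boundary points on the pupil and limbus circles for each angle.
    px = xp + rp * np.cos(thetas); py = yp + rp * np.sin(thetas)
    ix = xi + ri * np.cos(thetas); iy = yi + ri * np.sin(thetas)
    # Linear interpolation between the two boundaries (the "rubber sheet").
    map_x = (1 - radii)[:, None] * px[None, :] + radii[:, None] * ix[None, :]
    map_y = (1 - radii)[:, None] * py[None, :] + radii[:, None] * iy[None, :]
    return cv2.remap(img, map_x.astype(np.float32),
                     map_y.astype(np.float32), cv2.INTER_CUBIC)
```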
The segmentation process for noise removal was performed using the masks provided along with the database. These masks were obtained with the methodology proposed by Tan et al. [40], winner of NICE.I [4].
Considering these problems, the proposed approach analyzes the impact of non-normalization and non-segmentation of iris images on the extraction of deep features.
B. Data Augmentation
Since the training subset has only 1,000 images belonging to 171 classes, it is important to apply data augmentation techniques to increase the number of training samples. With more images, the fine-tuning process can achieve better generalization of the models. In this sense, we rotate the original images at specific angles.
Performing the validation of all these data augmentation settings on all input images, we determined (based on accuracy and loss) that the best range was −60° to 60°, with six rotation angles per image. These parameters were applied to perform the data augmentation on the training set, totaling 7,000 images. Some samples generated by data augmentation can be seen in Fig. 5.
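Under our reading of that setting (six evenly spaced non-zero angles between −60° and 60°, i.e., 20° steps, which is an assumption), the augmentation can be sketched as:

```python
import numpy as np
from scipy.ndimage import rotate

def augment_rotations(img, lo=-60.0, hi=60.0, n=6):
    """Return n rotated copies of `img` at evenly spaced angles in
    [lo, hi] degrees, excluding 0 (the original image is kept as-is).
    With n=6, 1,000 training images become 7,000 in total."""
    angles = [a for a in np.linspace(lo, hi, n + 1) if a != 0.0]
    return [rotate(img, a, reshape=False, mode='nearest') for a in angles]
```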
C. Convolutional Neural Network Model
For feature extraction from the iris images, fine-tuning of two CNN models trained for face recognition was applied. The first model, called VGG, proposed in [44] and used in [7] for periocular recognition, has an architecture composed of convolution, activation (ReLU), pooling and fully connected layers. The second model (i.e., ResNet-50), proposed in [45] and trained for face recognition [46], has the same operations as VGG, with the difference of being deeper and considering residual information between the layers.
For both models, we use the same architecture modifications and parameters described in [7]. In the training phase, the last layer (used for prediction) was removed and two new layers were added. The new last layer, used for classification, is composed of 171 neurons, each corresponding to a class in the NICE.II training set, and uses a softmax loss function. The layer before it is a fully-connected layer with 256 neurons, used to reduce the feature dimensionality.
The training set was divided into two new subsets, with 80% of the data for training and 20% for validation. Two learning rates were used over 30 epochs: 0.001 for the first 10 epochs and 0.0005 for the remaining 20. Other parameters include momentum = 0.9 and batch size = 48. The number of epochs used for training was chosen based on the experiments carried out on the validation set (highest accuracy and lowest loss). For training (fine-tuning) the CNN models, the Stochastic Gradient Descent (SGD) optimizer was used. Similarly to [7], [47], we do not freeze the weights of the pre-trained layers during fine-tuning. After training, the last layer of each model was removed and the features were extracted from the new 256-neuron layer.
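As a rough illustration, a minimal PyTorch sketch of this setup is given below. It is not the authors' code: torchvision's ImageNet ResNet-50 stands in for the face-recognition weights of [45], [46], and the data loader is a placeholder for the augmented NICE.II training set.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from torchvision import models

# Stand-in backbone: ImageNet ResNet-50 (the paper fine-tunes
# face-recognition weights, which would be loaded here instead).
model = models.resnet50(weights='IMAGENET1K_V1')
model.fc = nn.Sequential(
    nn.Linear(model.fc.in_features, 256),  # new 256-d feature layer
    nn.Linear(256, 171),                   # one neuron per NICE.II class
)

# Placeholder loader; in practice, the 7,000 augmented training images.
train_loader = DataLoader(
    TensorDataset(torch.randn(96, 3, 224, 224), torch.randint(0, 171, (96,))),
    batch_size=48)

criterion = nn.CrossEntropyLoss()  # softmax loss over the 171 classes
optimizer = torch.optim.SGD(model.parameters(), lr=0.001, momentum=0.9)

for epoch in range(30):
    if epoch == 10:                # learning-rate schedule from the paper
        for g in optimizer.param_groups:
            g['lr'] = 0.0005
    for images, labels in train_loader:   # no layers are frozen
        optimizer.zero_grad()
        criterion(model(images), labels).backward()
        optimizer.step()

# At test time the 171-way layer is dropped and the 256-d activations
# serve as the feature vector for matching.
```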
D. Matching
The matching was performed using the verification protocol proposed in the NICE.II competition. For this, an all-against-all approach was applied to the NICE.II test set, generating 4,634 intra-class pairs and 494,866 inter-class pairs.
The cosine metric, which measures the cosine of the angle between two vectors, was applied to compute the dissimilarity between feature vectors. This metric is used in information retrieval [48] due to its invariance to scalar transformation. The cosine distance is given by

d(A, B) = 1 − (A · B) / (||A|| ||B||),

where A and B stand for the feature vectors. We also evaluated other distances, such as Euclidean, Mahalanobis, Jaccard and Manhattan; however, as the cosine distance performed best, only its results are reported.
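A minimal sketch of this matching step (the array names and placeholder features are ours):

```python
import numpy as np

def cosine_distance(a, b):
    """d(A, B) = 1 - cos(A, B); invariant to scaling of either vector."""
    return 1.0 - np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b))

# All-against-all over a feature matrix F (one 256-d row per test image):
F = np.random.rand(5, 256)  # placeholder features
scores = [cosine_distance(F[i], F[j])
          for i in range(len(F)) for j in range(i + 1, len(F))]
```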
V. RESULTS
In this section, we present the results of the experiments validating our proposal on the test set of the NICE.II competition database. Initially, the impact of the proposed data augmentation techniques is shown using non-segmented iris images. Then, we analyze the impact of both iris segmentation and iris delineation. Finally, we compare the best results obtained by our approaches with the state of the art. In all subsections, the impact of normalization is also analyzed. Note that in all experiments, the mean and standard deviation values from 30 runs are reported. For analyzing the different results, we perform statistical paired t-tests at a significance level of α = 0.05.
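With the per-run metric values of two configurations in hand, the paired test can be computed as in this sketch (the numbers below are illustrative placeholders, not results from the paper):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Placeholder decidability values for 30 matched runs of two
# configurations; real values would come from the experiments.
runs_a = rng.normal(2.5, 0.05, 30)   # e.g., non-normalized input
runs_b = rng.normal(2.4, 0.06, 30)   # e.g., 8:1 normalized input

t_stat, p_value = stats.ttest_rel(runs_a, runs_b)  # paired t-test
print('significant at alpha=0.05:', p_value < 0.05)
```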
A. Data Augmentation
In the first analysis, we evaluate the impact of the data augmentation. For ease of analysis, all iris images employed in this initial experiment may contain noise in the iris region, i.e., no segmentation preprocessing is applied. As shown in Table I, in all cases where data augmentation is used, the decidability and EER values improved with statistical difference. Note that the models trained with data augmentation reported smaller standard deviations. In general, it is also observed that non-normalization yielded better results than the 8 : 1 and 4 : 2 normalization schemes for both trained models, i.e., VGG and ResNet-50.
It is worth noting that the largest differences occurred for the non-normalized inputs, with the greatest impact on the ResNet-50 model, where the mean EER dropped by 7.53 percentage points and the decidability improved by 0.6361 when data augmentation was applied.
B. Segmentation
In the second analysis, the impact of segmentation for noise removal is evaluated. To this end, two models are trained (fine-tuned): one using segmented and one using non-segmented images, both with data augmentation.
As can be seen in Table I, for the VGG model, segmentation improved the results. On the other hand, for the ResNet-50 model, the non-segmented images presented better results. For both models, statistical difference is achieved in two of the comparisons, while in the remaining one (highlighted in light cyan in Table I) there is no statistical difference.
Regarding the better results achieved by the ResNet-50 models when using non-segmented images, we hypothesize that this might be related to the fact that the ResNet-50 architecture uses residual information and is deeper than VGG. Thus, some layers of ResNet-50 might be responsible for extracting discriminant patterns present in regions that were occluded in the segmented images but not in the non-segmented ones. Moreover, in the segmented images, black regions (zero values) were employed to represent noise regions, and no special treatment was given to those regions.
It is noteworthy that segmentation is a complex process and might impact results positively or negatively. However, as the best results here were achieved by the ResNet-50 models using non-segmented images, we argue that, given a suitable representation model, the segmentation preprocessing can be disregarded.
Once again, non-normalization provided better results in all scenarios, with an even larger margin here than in the data augmentation analysis.
C. Delineation
Here we evaluated the impact on recognition of using the usual delineated iris image versus a non-delineated one, i.e., applying only the squared iris bounding box as input to the deep feature extractor. In both situations, non-normalized and non-segmented images are used. A delineated iris image and its corresponding bounding box (or non-delineated image) are shown in Fig. 6.
The comparison of the results of this analysis is shown in Table II. Although the results reported for delineated iris images are better, there is no statistical difference. From this result, we conclude that the iris bounding box can be used as input for deep representation without the iris delineation (a.k.a. localization) preprocessing.
Considering that the bounding box does not contain only iris texture, a comparison with other iris recognition methods may not be fair, since discriminant patterns may have been extracted from regions outside the iris. Therefore, our methodology was compared with the state of the art using delineated iris images.
D. The state of the art
Finally, the results attained with our models using non-normalized, non-segmented, and delineated iris images are compared with state-of-the-art approaches in Table III.
These experiments showed that representations learned by deep models perform the iris verification task on the NICE.II competition better when the preprocessing steps of normalization and segmentation (for noise removal) are removed, outperforming the state-of-the-art method, which uses preprocessed images.
VI. CONCLUSION
In this paper we evaluated the impact of iris image preprocessing on iris recognition in unconstrained environments using deep representations. Different combinations of (non-)normalized and (non-)segmented images as input to the system were evaluated.
Using these iris images, a fine-tuning process of two CNN architectures pre-trained for face recognition was performed. These models were applied to extract deep representations.
The matching, under a verification protocol, was performed with the cosine metric. A significant improvement in the results of both models was achieved using the proposed data augmentation approach. For both models, non-normalized irises achieved better results. In addition, we verified that the use of non-delineated iris images is slightly worse than that attained when using delineated images, although no significant difference was reached. However, for a fair comparison with a state-of-the-art method, we used only delineated images because they represent the pure irises.
The experiments showed that the models learned on the ResNet-50 architecture using non-segmented and non-normalized images reported the best results, achieving a new state of the art on the NICE.II official protocol, one of the most challenging databases for uncontrolled environments.
As future work, we intend to evaluate this approach on larger databases, as well as the performance of other network architectures for feature extraction and transfer learning from domains other than face recognition. Dealing with noisy iris regions and analyzing their impact on iris recognition is also planned as future work.
Human mobility and factors associated with malaria importation in Lusaka district, Zambia: a descriptive cross sectional study
Malaria is a major public health problem in Zambia, with an estimated 4 million confirmed cases and 2389 deaths reported in 2015. Efforts to reduce the incidence of malaria are often undermined by a number of factors, such as human mobility, which may lead to the introduction of imported infections. The aim of this study was to establish the burden of malaria attributed to human mobility in Lusaka district and identify factors associated with malaria importation among residents of Lusaka district. A cross sectional study was conducted in five randomly selected health facilities in Lusaka district from November 2015 to February 2016. Data was collected from 260 patients who presented with malaria and whose status was confirmed by rapid diagnostic test or microscopy. Each confirmed malaria case was interviewed using a structured questionnaire to establish their demographic characteristics, travel history and preventive measures. Travel history was used as a proxy to classify cases as either imported or local. Residency was also used as a secondary proxy for importation, to compare characteristics of residents vs non-residents in relation to malaria importation. Logistic regression was used to determine factors associated with malaria importation among residents of Lusaka district. Out of 260 cases, 94.2% were classified as imported cases based on participants’ travel history. There were 131 (50.4%) males and 129 (49.6%) females. Age distribution ranged from 0 to 68 years, with a median age of 15 years (IQR 8–27). Imported cases came from all the ten provinces of Zambia, with the Copperbelt Province being the highest contributor (41%). Of all imported cases, use of prophylaxis was found to be highly protective [AOR = 0.22 (95% CI 0.06–0.82); p-value = 0.024]. Other factors that significantly influenced malaria transmission and importation by residents include duration of stay in a highly endemic region [AOR = 1.25 (95% CI 1.09–1.44); p-value = 0.001] and frequency of travel [AOR = 3.71 (95% CI 1.26–10.84); p-value = 0.017]. Human mobility has influenced malaria transmission in Lusaka district by importing infections, which leads to onward transmission and poses a challenge to malaria elimination and control. However, taking prophylaxis is highly protective and should be strongly recommended.
Background
Malaria continues to be a major public health problem globally and one of the leading causes of death from infectious disease worldwide [1]. Global statistics as of 2015 indicated a decline in malaria incidence, with 214 million cases and 438,000 deaths, of which 88% and 90%, respectively, occurred in the World Health Organization (WHO) Africa region; 97 countries had ongoing malaria transmission [2,3].
Zambia is a land-locked country of approximately 13.6 million people, 61% of whom live in rural areas and 39% in urban areas [4,5]. Malaria is endemic throughout the country, though it is more prevalent in rural areas. Plasmodium falciparum is responsible for most of the malaria cases, including its severe form. In 2015, an estimated 4 million confirmed malaria cases and 2389 reported deaths were due to malaria in Zambia alone [6]. Hence, malaria has continued to be a disease of major public health significance in Zambia despite recent successes in scaling up interventions and documented reductions in the malaria burden among children [6]. Eliminating infection is, therefore, central to the goal of malaria elimination, not only in Zambia but globally. Efforts to attain this goal have been undermined by a number of factors, such as human mobility.
Human population movements play a significant role in malaria transmission [7]. The WHO, the United States Centers for Disease Control and Prevention (CDC), and most countries define imported malaria as any malaria infection whose origin can be traced to a malaria endemic area outside the country in which the infection was identified, while internal importation is the introduction of parasites from one area to another within a country [3]. To establish the source of infection, knowledge of the individual's recent travel history is required [8]. The increase in mobility in the last few decades has led to greater concern about the relationship between mobility and malaria transmission. Importation of malaria parasites into low transmission zones from high transmission zones is a major setback in reducing the malaria burden in areas aiming for elimination [9].
There are three malaria transmission zones in Zambia: Zone I, which includes Lusaka province; Zone II, which includes Southern, Central, Copperbelt, Western and North-Western provinces; and Zone III, which includes Luapula, Northern, Muchinga and Eastern provinces [10]. Different trends in the three zones have emerged based on surveys of malaria parasite prevalence in children from 2008 to 2010. Zone I, with very low transmission, is characterized by a parasite prevalence of less than 1% in children under 5 years old; Zone II, with low to moderate stable transmission, by a parasite prevalence of 2-14% in children under 5 years old; and Zone III, with moderate to high transmission, by a parasite prevalence of more than 15% in children under 5 years old [10]. A seasonal pattern of higher transmission is associated with the rains between November and April. Northern, Luapula and Eastern provinces have the highest annual incidence of malaria, while the lowest is found in Lusaka Province, specifically around Lusaka district.
The burden of malaria is largely attributed to human mobility, especially in areas aiming for elimination [8]. Identifying the sources of imported infections due to human travel outside the district, together with areas of high receptivity within a district aiming for elimination, could greatly improve malaria control programmes, as this would help target interventions appropriately [11]. This study aimed at establishing the burden of malaria attributed to human mobility in Lusaka district and identifying sources of, and factors associated with, malaria importation among residents of Lusaka district.
Setting
Lusaka province is one of the ten provinces of Zambia, with a population of 2,191,225 and a density of 100 persons per square kilometre as of the 2010 census of housing and population. Its capital is Lusaka city, which is also the national capital. The study was conducted in five randomly selected health facilities within Lusaka district. These were Chelstone and Kalingalinga health centres, with catchment populations of 123,501 and 90,878, respectively, and Chawama, Chilenje and Kanyama 1st level hospitals, with catchment populations of 144,462, 116,510 and 191,056, respectively. These were randomly selected from all health facilities run by the government; no private facilities were selected. Health systems in Zambia are classified into three major categories, first level, second level and tertiary level, and the selection list included all the health facility levels. A number was given to each facility and placed in a bag, from which one was picked at a time, giving each facility an equal chance of being selected. The Zambia Sample Vital Registration with Verbal Autopsy report of 2017 shows that of all deaths attributed to malaria that occurred in 2015/2016, 86.6% occurred at government facilities where the deceased received treatment and only 7.4% occurred at private facilities. This indicates that a much larger proportion of the population seeks health care at government facilities compared to private facilities.
Design, participant selection and data collection
The primary outcome was malaria importation by residents, defined as the proportion of Lusaka residents who tested positive for malaria and had a travel history to a highly endemic district within the 3 months prior to the study. Travel was defined as having at least an overnight stay in an endemic area where one could be exposed to the risk of being bitten by mosquitoes and consequently getting infected with malaria. Using a cross sectional study design, malaria cases in Lusaka district confirmed either by light microscopy (using 10% Giemsa-stained blood smears examined by two independent examiners) or by rapid diagnostic test (RDT; SD Bioline Malaria Ag P.f, Standard Diagnostics, Inc., Republic of Korea), used according to the manufacturer's instructions, were utilized.
Primary data was collected from five randomly selected health facilities in Lusaka district. The study population included all confirmed cases of malaria during the data collection period and excluded all clinical cases. The data collection period was limited to November through February and could not be prolonged to the end of the transmission season due to a limited budget. To establish the burden of malaria due to imported cases, the proportion of travellers was determined. This was based on establishing the patients' travel history: positive cases with a travel history within the 3 months prior to the study were considered imported cases. The study investigated internal importation, though a few cases were found to have originated from other countries. A structured questionnaire was administered to all the participants so as to obtain their demographic characteristics, personal protection, travel history and any other information relevant to the study. The interviews were conducted immediately after the patient was diagnosed with malaria, at the facility. The obtained information was used to determine the burden of imported cases in Lusaka district, identify the possible sources of infection, and determine factors associated with malaria importation.
Statistical analysis
Contribution to malaria transmission was determined by assessing factors that influence malaria importation. Travel history was used as a proxy to determine importation and was categorised as either local or imported. Due to the very low number of local cases, it was impossible to analyse further using this variable; hence residency was used as a secondary proxy to determine importation, as importation was effected both by residents with a travel history to highly endemic areas and by non-residents from highly endemic areas. Categorical variables were described by frequency distributions. To avoid loss of power and bias, continuous predictor variables such as duration of stay in weeks were not categorized [12]. To test any differences, we used the non-parametric Wilcoxon rank sum (Mann-Whitney) test. The Chi square test was used to determine associations between categorical independent variables and outcome variables. Logistic regression was used to explore the association between malaria importation and human mobility. Stepwise logistic regression was used to select the best predictors in a multiple logistic regression model. The associations of the individual covariates were expressed as odds ratios with their associated p-values. Adjusted odds ratios (AOR) and their 95% CIs (p < 0.05 level of significance) were reported. All analyses were performed using STATA software, version 12.0 SE (Stata Corporation, College Station, TX, USA).
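The paper's analyses were run in STATA; purely as an illustration of the modelling step, a minimal sketch in Python with statsmodels is shown below. The data frame, variable names and values are hypothetical placeholders, not the study data:

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
# Hypothetical data frame of confirmed cases; names mirror the paper's
# model but the values are randomly generated for shape only.
df = pd.DataFrame({
    'imported':    rng.binomial(1, 0.9, 260),   # outcome (travel-history proxy)
    'prophylaxis': rng.binomial(1, 0.1, 260),   # took anti-malarials before travel
    'stay_weeks':  rng.poisson(2, 260),         # kept continuous, not categorized
    'travel_gt1':  rng.binomial(1, 0.3, 260),   # travelled more than once
})
X = sm.add_constant(df[['prophylaxis', 'stay_weeks', 'travel_gt1']])
fit = sm.Logit(df['imported'], X).fit(disp=0)
print(np.exp(fit.params))          # adjusted odds ratios (AOR)
print(np.exp(fit.conf_int()))      # 95% confidence intervals
```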
Ethical considerations
The University of Zambia Biomedical Research Ethics Committee (UNZABREC) approved the protocol (REF. 003-08-15) and permission was sought from the Ministry of Community Development Mother and Child Health (MCDMCH). Participation in this study was voluntary and the study ensured minimal risk as we enrolled patients who were voluntarily seeking treatment for malaria. For example, the collection of finger prick blood from the patients was a requirement necessary for correct diagnosis and treatment and so it was already done by the healthcare providers. Only individuals who went through the procedure and tested positive were enrolled. Individual consent was obtained from all participants by signing a consent form to acknowledge their participation. Full information about the study was given to the participants. Confidentiality was maintained in line with the local ethical guidelines.
Results
A total of 260 malaria positive patients were investigated for possible importation of malaria into Lusaka district. The findings of this study showed that 94.2% (95% CI 91.4-97.1%) of the cases investigated were attributed to human mobility and thus classified as imported cases, while only 5.8% (15/260) were local cases. This classification was based on participants' travel history. However, travel history as a primary proxy could not be used further for lack of a significant number of local cases. Residency was employed as a secondary proxy for malaria importation, and the characteristics of Lusaka residents were thus compared to those of non-residents with a travel history. Out of a total of 158 Lusaka residents, 15 had no travel history and were excluded from the second stage of analysis, where the factors associated with malaria importation were identified. As such, only those with a travel history, including non-residents, who by definition had a travel history, were analysed at this stage. The median age of the malaria cases was 15 years (IQR 8-27), with the youngest being less than 1 year old and the oldest 68 years old. Males accounted for 50.4% (131/260) of all cases. Lusaka residents accounted for 61% (158/260), of whom 143 had a travel history and were thus classified as imported cases (Table 1).
The overall study population showed a near balanced representation of males and females. Results showed that the 5-14 years age group was the most affected, representing 31.2% (81/260) of all cases (Fig. 1). Imported infections were found to come from all over the country, with the Copperbelt Province as the highest contributor (41%) (Fig. 2).
To investigate the association between malaria importation and the variables of interest, the Chi square test was used; the results established associations with use of prophylaxis, age, duration of stay and occupation, which were found to be statistically significant (see Table 2).
Univariate and multivariate analyses were performed using logistic regression. The adjusted predictors of malaria importation were determined by an investigator-led stepwise logistic regression (Table 3). The best predictors for the final model of malaria importation among Lusaka residents were selected. After adjusting for other variables such as sex, age and education level, the frequency of travel to other districts was found to be significantly associated with malaria importation: residents who travelled outside Lusaka district more than once were approximately four times more likely to import malaria [AOR = 3.71 (95% CI 1.26-10.84); p-value = 0.017]. For every week of increase in the duration of stay in an area visited, Lusaka residents were 25% more likely to import infections [AOR = 1.25 (95% CI 1.09-1.44); p-value = 0.001] (see Table 3).
Discussion
This study established a high proportion of imported malaria compared to local cases in Lusaka district. Despite the scaling-up of interventions to reduce the incidence of malaria in the district, these imported cases could lead to onward transmission and consequently an upsurge of local cases. Even though this study investigated internal importation, similar studies done in China investigating importation across borders also showed a higher prevalence of imported malaria attributed to human mobility [13,14]. This shows that human mobility challenges the efforts to attain elimination. The literature has shown that malaria importation is indeed a major factor in malaria transmission, not only in low transmission settings but also in high transmission settings [9]. Children aged 5-14 years were found to be the most affected group and were more susceptible to infection due to their weaker immunity compared to adults, considering the fact that this study investigated only positive cases. Even though children may have been accompanied by adults, the adults may not have been affected in a similar manner due to their stronger immunity. This finding is also consistent with a study by Bradley et al., which established that children aged 2-14 years who had travelled to highly endemic areas were at greater risk of infection and were more likely to import malaria [9]. However, in another study, by Li et al., adults aged 21-50 were found to be the risk group. This was attributed to travel due to occupation, which was found to be a factor of importation, as the majority of these adults travelled to endemic regions for work [14].
It was established that taking prophylaxis among residents of a low transmission zone like Lusaka district was highly protective. This was evident in that residents who took anti-malarial drugs prior to their travel were less likely to import infections. However, some patients were found to be suffering a second episode at the time of the study. This suggests that they were either not effectively treated, and were likely to transmit malaria, or that they suffered a new infection after being re-exposed, as they had visited a highly endemic area. Infected individuals, including asymptomatic patients who were not treated prior to their travel, were also at risk of importing malaria to low transmission areas, hence the need to take prophylaxis. A study by Julio et al. showed that lack of adherence to prophylaxis was a risk factor for malaria infection among members of the Guatemalan contingent deployed to the Democratic Republic of Congo, thus leading to importation [15]. Another study, by Muehlberger et al., found that travelling without, or with ineffective, chemoprophylaxis is a major factor for malaria importation [16]. Duration of stay was found to be a factor of malaria importation in Lusaka district. This was statistically significant, showing that for every additional week, Lusaka residents visiting highly endemic regions were more likely to import infections. This is because the overall influence on local malaria transmission by residents and visitors depends on the number of infections brought in relative to the duration of infection, while the contribution to local transmission depends on the local receptivity of the place where infections are imported and the duration of stay [17]. The study findings further established that frequency of travel was a factor for malaria importation into Lusaka district. The results show that residents who travelled more than once to highly endemic districts within the 3 months prior to their diagnosis were approximately four times more likely to import malaria than those who travelled only once.
Origin of infection was one of the factors investigated and, from the descriptive analysis, it was found that imported cases came from all over the country, with most of these cases coming from the Copperbelt province. However, we cannot conclude from this result that the Copperbelt province has the highest malaria prevalence in the country; it simply shows that most of these movements occurred between the Copperbelt and Lusaka, making Copperbelt province a major source and Lusaka district a sink for malaria infections. Identifying these sources and sinks would help inform and target programmes to improve prevention and control measures, which may lead to malaria elimination.
Indoor residual spraying (IRS) is one of the interventions put in place by the national malaria control programme and the Ministry of Health to combat the spread of malaria. It targets 85% coverage of households in low to high transmission zones [5]. Results of the Zambia Demographic Health Survey (ZDHS) done in 2011-2013 show that only 12% of households in Lusaka were sprayed. It was established from our study that only about 8% of the cases had their homes sprayed in the last year. According to the WHO World Malaria Report of 2015, such interventions, in Zambia particularly, are funded by the government, the Global Fund and USAID. This shows that IRS is somewhat dependent on donor funding, which could explain the low coverage. Of all the patients investigated, only a quarter used insecticide-treated nets (ITNs), despite strong campaigns on the use of ITNs in the fight against malaria. This shows poor uptake of interventions to combat malaria among the locals, even when the government plays its role in making such services available. However, the use of ITNs and IRS was not statistically significantly associated with malaria importation in this study, although both remain very relevant to the efforts to control malaria transmission. The relevance of these findings is that low uptake of interventions and preventive measures tends to undermine efforts to control malaria with the hope of eventually attaining elimination.
The limitations of this study were that the proportion of imported malaria cases could have been underestimated due to asymptomatic infections. Pathogens can be introduced into an area in four different ways; this study only looked at importation through infected visitors and through residents visiting endemic regions. Infections which could have been introduced by infected foreign vectors could not be identified, and this could have led to an overestimation of local cases. Travel history of the patients was used as a proxy to classify cases as imported or local; however, those who were considered imported cases could still have been infected locally despite having a travel history. The study lacked information regarding the time of travel in relation to the onset of illness. Unfortunately, this study could not carry out further tests to show whether one acquired the infection locally or not. The inclusion of a comparator would remedy most of these limitations, but the study lacked information on non-malaria cases for a comparator.
Conclusion
Imported malaria has become a major public health challenge in regions aiming for elimination. The high proportion of imported malaria cases in Lusaka district established in this study suggests that local transmission can consequently go up, thus posing a threat to the attainment of malaria elimination. Elimination can only be feasible by implementing control measures based on detecting imported cases, and identifying and addressing factors associated with malaria importation, so as to control onward transmission. The relevance of this study is that the identified sources of infection and the associated factors that influence malaria importation could be targeted and included in vector control interventions and disease prevention programmes.
Spectroscopic Studies of Copper, Silver and Gold-Metallothioneins
Metallothionein is a ubiquitous protein with a wide range of proposed physiological roles, including the transport, storage and detoxification of essential and nonessential trace metals. The amino acid sequence of isoform 2a of rabbit liver metallothionein, the isoform used in our spectroscopic studies, includes 20 cysteinyl groups out of 62 amino acids. Metallothioneins in general represent an impressive chelating agent for a wide range of metals. Structural studies carried out by a number of research groups (using 1H and 113Cd NMR, X-ray crystallography, more recently EXAFS, as well as optical spectroscopy) have established that there are three structural motifs for metal binding to mammalian metallothioneins. These three structures are defined by metal to protein stoichiometric ratios, which we believe specifically determine the coordination geometry adopted by the metal in the metal binding site at that metal to protein molar ratio. Tetrahedral geometry is associated with the thiolate coordination of the metals in the M7-MT species, for M = Zn(II), Cd(II), and possibly also Hg(II); trigonal coordination is proposed in the M11-12-MT species, for M = Ag(I), Cu(I), and possibly also Hg(II); and digonal coordination is proposed for the metal in the M17-18-MT species, for M = Hg(II) and Ag(I). The M7-MT species has been completely characterized for M = Cd(II) and Zn(II). 113Cd NMR spectroscopic and X-ray crystallographic data show that mammalian Cd7-MT and Zn7-MT have a two domain structure, with metal-thiolate clusters of the form M4(Scys)11 (the α domain) and M3(Scys)9 (the β domain). A similar two domain structure involving Cu6(Scys)11 (α) and Cu6(Scys)9 (β) copper-thiolate clusters has been proposed for the Cu12-MT species. Copper-, silver- and gold-containing metallothioneins luminesce in the 500-600 nm region from excited triplet, metal-based states that are populated by absorption into the 260-300 nm region of the metal-thiolate charge transfer states. The luminescence spectrum provides a very sensitive probe of the metal-thiolate cluster structures that form when Ag(I), Au(I), and Cu(I) are added to metallothionein. CD spectroscopy has been used in our laboratory to probe the formation of species that exhibit well-defined three-dimensional structures. Saturation of the optical signals during titrations of MT with Cu(I) or Ag(I) clearly shows the formation of unique metal-thiolate structures at specific metal:protein ratios. However, we have proposed that these M = 7, 12 and 18 structures form within a continuum of stoichiometries. Compounds prepared at these specific molar ratios have been examined by X-ray Absorption Spectroscopy (XAS), and bond lengths have been determined for the metal-thiolate clusters through the EXAFS technique. The stoichiometric ratio data from the optical experiments and the bond lengths from the XAS experiments are used to propose structures for the metal-thiolate binding site with reference to known inorganic metal-thiolate compounds.
Introduction
Studies of the rich biochemistry and chemistry of metallothionein began following the initial discovery and characterization of a cadmium and zinc binding protein from horse kidney by Margoshes and Vallee in 1957 [1]. This protein was called metallothionein by Kagi and Vallee in 1960 and the first spectral properties were reported in 1961 [2,3]. Since that time, there has been a rapid and extensive study of all aspects of the biochemical and chemical properties of metallothioneins isolated from a wide variety of species. A series of monographs and proceedings have appeared at timely intervals that provide a great many details and trace the development of our knowledge about the metallothioneins [4]. Metallothioneins are grouped into three classes: class I comprises proteins from mammals, fish and crustaceans, each of which has a primary peptide sequence with 60 to 62 amino acid residues. The class I mammalian proteins are generally characterized by the formation of two metal binding domains, named α and β, with zinc, cadmium and copper [4]. The class II peptides are isolated from yeasts, fungi, and plants, while class III metallothioneins are characterized as γ-glutamyl isopeptides, or phytochelatins. The structure of the metal binding site in mammalian metallothioneins was determined initially by Otvos and Armitage using 113Cd NMR techniques on Cd7-MT isolated from rabbit livers [5]. Later, both 1H NMR [6] and X-ray crystallographic [7] techniques established the connectivities between the cysteinyl thiolates, Scys, and the seven cadmium and zinc atoms. Figures 1 and 2 show the sequence and metal-Scys connectivities for Cd7-MT 2a, based on data of Kagi [8] and on the structural studies of refs. [5,7,9]. With the peptide chain wrapped round the outside of the binding sites, we see that the Scys-M-Scys bonds act to crosslink the peptide chain, forming two three-dimensional cores for the metal binding sites in the metallated protein. The metals bound to mammalian metallothionein can be exchanged by addition of a metal with a greater binding constant. The observed order of binding strengths for mammalian metallothioneins is: Zn < Cd < Cu < Hg. In Figure 3, we show the CD spectral patterns recorded as Cd7-MT is formed by direct addition of Cd(II) to aqueous solutions of Zn7-MT. A different spectral pattern is observed if Cd(II) is added to the metal-free protein, apoMT. The CD spectral data in Figure 3 show that the Cd(II) displaces the Zn(II) in both domains initially, before saturating at a stoichiometry of 7. Cd(II) adds to apoMT in a domain-specific fashion, first into the α domain up to a stoichiometric ratio of 4, then into the β domain up to Cd:MT = 7 [10]. These two sets of CD spectral data introduce the two reaction pathways that dominate metal binding in mammalian metallothionein: an incoming metal can bind either to one domain preferentially, which is a domain-specific pathway, or to both domains on a statistical basis, which is a distributed pathway. Work by Winge et al. [11] has shown that equilibrated metallothionein in which two metals are bound exhibits domain specificity. For example, in Cd,Zn-MT the Cd(II) will be located primarily in the α domain, while Zn(II) will be located primarily in the β domain; in contrast, in Cu,Zn-MT the Cu(I) will be located in the β domain while the Zn(II) will be located in the α domain. Spectroscopic studies from our laboratory [12] suggest that initial binding is kinetically controlled and the incoming metals bind first in a distributed manner; after time, and at temperatures above about 15 °C, the metals redistribute to form the thermodynamically stable, domain-specific product.
The M7-MT structure has also been proposed for Co(II)7-MT, based on absorption, magnetic circular dichroism (MCD), and electron paramagnetic resonance (EPR) spectroscopy [13]. A second structural motif is found for Ag(I) and Cu(I), in which a metal:MT stoichiometry of 12 has been determined [12b,12c,14]. Typically, for the Cu(I)-containing mammalian metallothioneins it has been proposed, from the results of a number of spectroscopic techniques [12c,15], that the Cu(I) is trigonally coordinated by the Scys, binding as Cu6(Scys)9 (β domain) and Cu6(Scys)11 (α domain).
Finally, a third motif has been proposed based on the results of spectroscopic studies of Hg(II) and Ag(I) binding to rabbit liver metallothionein. Hg18-MT 2 is characterized by a unique CD spectrum [16], a coordination number of 2(0.8), and an average Hg-S bond length of 242(3) pm [17]. Similar CD spectral properties have been found for Ag-MT, and Ag17-MT 1 has been formed following addition of Ag(I) to rabbit liver metallothionein [18]. The insulation of the two metal-thiolate cluster structures from the solvent results in emission intensity being measurable for Cu-MT at room temperature [12b,14b,19] and for Ag-MT and Au-MT at cryogenic temperatures [19]. This wide applicability, and the dependence on the metal to protein molar ratio, mean that emission spectroscopy is a powerful technique with which to study the metal binding chemistry of the Ag(I)-, Au(I)- and Cu(I)-containing proteins both in vitro and in vivo. The present paper concerns the spectroscopic characterization of metal binding to rabbit liver metallothionein. Using optical techniques, we can determine with high precision the metal to protein stoichiometric ratios that result in the formation of distinct, three-dimensionally organized structures within the metal binding site.
Materials and Methods
For most of the experiments described here, Zn7-MT was isolated from rabbit livers following in vivo induction procedures using aqueous zinc salts. In some instances the data were recorded from rat liver protein. The preparative methods were described in the original papers that are cited with the spectral data. Protein was purified using gel filtration and gel electrophoresis techniques [21]. Aqueous protein solutions were prepared by dissolving the protein in argon-saturated distilled water. Protein concentrations were estimated from measurements of the -SH group and zinc concentrations as described previously [22]. These estimations were based on the assumption that there are 20 -SH groups and 7 Zn atoms in each protein molecule. The concentration of -SH groups was determined by spectrophotometric measurement of the colored thionitrobenzoate anion (ε412 = 13,600 M-1 cm-1) produced by reaction with DTNB (5,5'-dithiobis(2-nitrobenzoic acid)) in the presence of 6 M guanidine hydrochloride [23]. Zinc concentrations were determined by flame atomic absorption spectroscopy (AAS) using a Varian AA875 atomic absorption spectrophotometer. Very complete isolation and purification details for a range of different metallothioneins have been published as part of the 'Methods in Enzymology' series [4d]. All the metal titration experiments of Zn-MT, for example the titrations of Zn-MT with Cu(I), were carried out in the same manner, with aliquots of the metal ion solution added to a single protein solution. Circular dichroism spectra were recorded on a Jasco J-500C spectropolarimeter controlled by an IBM S9001 computer using the program CDSCAN [25]. Emission spectra were measured on a Photon Technology Inc. LS-100 spectrometer. Optical glass filters were placed over the excitation (Corning 7-54 or Schott BG-24) and emission slits (Corning CS or Schott GG-420) for observation of emission in the 500-700 nm region with excitation at 300 nm.
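As a small worked example of how the -SH concentration follows from the measured absorbance via the Beer-Lambert law (the numerical values below are illustrative, not measured data):

```python
def sh_concentration(a412, eps=13600.0, path_cm=1.0):
    """Thiolate concentration (M) from the absorbance of the released
    thionitrobenzoate anion: Beer-Lambert law, c = A / (eps * l)."""
    return a412 / (eps * path_cm)

a = 0.68                    # illustrative absorbance at 412 nm
c_sh = sh_concentration(a)  # 5.0e-5 M -SH
c_mt = c_sh / 20.0          # 20 -SH groups per metallothionein molecule
print(c_sh, c_mt)           # -> 5e-05 M -SH, 2.5e-06 M protein
```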
Results and Discussion
In this paper we describe CD and emission spectral data that depend entirely on the formation of the metal-thiolate clustered binding site of the 62-residue metallothionein isolated from rabbit liver. The spectral intensity is observed to depend directly, but often in a nonlinear manner, on the metal to protein molar ratios. The CD experiment provides information about the winding of the peptide chain around the binding site that allows each cysteinyl sulfur to connect to one of the metals. Because the peptide chain wraps round the clustered metal-thiolate structure, the possibility exists that the whole structure will be chiral. Unusually for a protein, there are no amino acids that absorb light at wavelengths greater than 230 nm: apoMT is totally colorless above 230 nm in the absence of metals. Although metals like Cd(II), Zn(II), Ag(I), Au(I), and Cu(I) exhibit no optical transitions within the normal UV-visible wavelength range, the presence of the thiolate ligand results in ligand to metal charge transfer (LMCT). These LMCT transitions lie in the region between 230 and 400 nm. When any transition takes place within a chiral environment, the chirality is imparted on that transition. In particular, there will be intensity enhancement for LMCT transitions that terminate on degenerate states, because under these conditions exciton coupling can occur, giving rise to a derivative-like band centered approximately on the absorption band maximum. When metals bind directly to the metal-free apoMT, or when an added metal displaces an existing metal, for example when Cu(I) is added to Zn-MT, the new metal-thiolate cluster structures that form may introduce new chirality or adopt the same chirality as for the initially bound metal. What is important is how the parameters that control the observed chiral spectrum change as the new metal is bound to the two domains (in mammalian metallothionein). First, both the absorption and CD spectra will exhibit band maxima at the same energy (as the transitions are the same in both techniques), but the CD intensity mechanism is not as sensitive to transitions that do not originate within the chiral structure, so the CD spectrum is more sensitive to metal-based transitions for metals located within the chiral binding site. Second, because the selection rules for the CD bands involve the magnetic dipole operator as well as the electric dipole operator, the relative intensities of the CD bands may be quite different when compared with the associated absorption spectrum. Third, the sign and magnitude of the individual CD bands will change as the three-dimensional structure of the metal binding site changes. Therefore, in experiments in which a tighter-binding metal is added to Zn-MT (for example Cd(II) or Cu(I)), the changes in the CD spectral intensity at every point during the titration directly indicate (i) whether the binding site structure is maintained as with the Zn(II) (the CD envelope will remain the same but shifted to a different wavelength), (ii) whether a new structure forms under the influence of the incoming metal (a new CD envelope will be observed, also at a different energy), or (iii) whether the binding site opens up and the three-dimensional structure is lost (the CD signal will be essentially quenched as a function of the molar ratio of the incoming metal, or on the addition of a competitive ligand).
Figure 4. CD spectral profile plotted as a function of the Hg:MT molar ratio as Hg(II) was added to a single solution of rabbit liver Zn-MT 2 at pH 7 and room temperature. The profile shows the development of the Hg7-MT and Hg11-MT species. Reproduced with permission from Lu and Stillman [27].
We can now summarize the key features of the CD experiment as used in probing the formation of ordered metal-thiolate structures in the binding site: the CD spectrum depends on the chirality of the metal binding site as a whole, the spectrum to the red of 230 nm is entirely dependent on the bound metals, and there is no contribution from metal ions in solution (which might, however, contribute to the measured absorbance). Figure 3 illustrates the very great sensitivity of the CD spectrum to changes that take place in the metal binding site. In a titration of Zn7-MT with Cd(II), the Cd(II) first binds to both domains, leading to mixed Cd,Zn-thiolate clusters [10,12c]. The dramatic spectral changes observed at the Cd:MT = 5 point indicate that complete Cd4(Scys)11 clusters begin to form in the α domain at this stage.
Warming the solution with only 4 Cd(II) added will generate the domain-specific product, as Cd(II) in the β domain migrates to the α domain [12a].
From the CD spectroscopic data we obtain precise values for the metal to protein stoichiometric ratio. These data indicate that for Ag(I), Cu(I), and Hg(II), a number of different complexes form during the titration. Saturation points in the spectral intensities indicate that a preferred structure has formed. However, with metals like Ag(I) and Cu(I), changes in coordination number can result in a new coordination chemistry being observed as the metal to protein molar ratio increases. The clearest indication that a metal bound in the binding site of metallothionein might exhibit more than one coordination number successively in a titration is seen in the titrations of Zn-MT and apoMT with Hg(II). In Figures 4 and 5, the CD intensity profile is plotted as a function of the Hg:MT molar ratio, first when Hg(II) is added to Zn-MT 2 at pH 7. Quite complicated speciation is observed in the set of CD spectra when the intensity profile is plotted as a function of the Hg:MT molar ratio. First, the bands due to Zn-S near 240 nm are replaced by bands due to Hg-S at 260 and 300 nm. However, the structure that forms at Hg:MT = 7 is not as well defined as when Hg(II) is added to the apoMT. With Zn-MT, a new species forms as the Hg:MT ratio increases past 7, up to a maximum at 11. The steep collapse in CD intensity at 11 indicates that the domains must unwind at this point under the stimulus of the 12th mercury atom; the CD spectral intensity is lost because the three-dimensional structure breaks up. However, the Hg(II) still binds to the 20 Scys up to Hg:MT = 20. Figure 5 shows that a third structure can form, but only at pH < 7. Now, following saturation in the CD signal at exactly 7, a new species forms at Hg:MT = 18. XANES [26] and EXAFS [17] studies indicate that this is a unique structure; possibly the Hg18-thiolate cluster adopts a single domain rather than the two domains anticipated for Hg7-MT and Hg11-MT [16b,27].
Copper Binding to Zn7-MT
Metallothionein binds copper only in the +1 oxidation state [29]. Cu(II) reacts with metallothionein to form Cu(I), and probably denatures the protein through oxidation of the cysteinyl thiolates. In the present study, aliquots of the Cu(I) salt [Cu(CH3CN)4]ClO4 were added to aqueous Zn7-MT 2, to replace the tetrahedrally-bound Zn(II), at temperatures between 3 and 52 °C. These experiments examine the differences in the binding of Cu(I) to metallothionein under kinetic versus thermodynamic conditions. The CD spectra recorded during Cu(I) addition to Zn7-MT 2 at these temperatures are shown in Figures 6 and 7, respectively. There are significant differences in the development of the respective spectral envelopes at the different temperatures, indicating a structural change in Cu(I) binding under the two conditions. At the low temperature, that is, under kinetic control, the CD signals saturate with the formation of two successive Cu-MT species (Figure 6). At the higher temperature (52 °C, Figure 7), a mixed Cu,Zn-MT species exhibits a unique spectrum, in addition to those of the two Cu-MT species. Raising the temperature from 3 to 52 °C has the effect of resolving the unique structure of the Cu,Zn-MT species before the Cu(I)-thiolate structures of the Cu-MT species form [12c]. The use of CD spectroscopy as a probe during the titration of Zn-MT with Cu(I) not only signals the formation of a series of well-defined Cu(I)-thiolate structures during Cu(I) addition, but also indicates how the polypeptide changes as Cu(I) replaces the Zn(II). These data show that at low temperatures the polypeptide is constrained in its ability to reorient itself to accommodate trigonally coordinated Cu(I) in place of the tetrahedral Zn(II), thus forming the kinetic product. At higher temperatures, the polypeptide is clearly able to reorient itself at lower Cu(I):MT ratios, allowing the thermodynamic Cu,Zn-MT product to be resolved as a distinct structure.
Emission from the copper-containing protein isolated from Neurospora crassa was first reported by Beltramini and Lerch [28]. Since then, emission spectra from a wide range of copper, silver, gold and platinum metallothioneins have been described [18][19][20]29]. Emission spectra from copper metallothioneins from a variety of sources have been studied in great detail. The spectral data shown in Figure 8 illustrate the strong dependence of the emission intensity on the Cu(I) to MT stoichiometry, with the intensities in Figure 8 plotted as a function of the Cu:MT molar ratio.

The coordination chemistry of inorganic silver-thiolate complexes is remarkable and unique in that digonal, trigonal and even tetrahedral geometries are observed [29][30][31][32], as are molecular [AgSR]x cycles [30]. The reason that silver-thiolate compounds have a tendency to form aggregates is that there are strong secondary Ag...S interactions which oppose the formation of discrete molecular cycles, providing a driving force for aggregation and polymerization. However, the steric bulk of the substituent R prevents close approach of the silver units and precludes the concomitant Ag...S bridging interactions needed to form oligomeric aggregates. The Ag-S bond length in inorganic silver-thiolate clusters is very dependent on the coordination geometry of the silver(I) metal. Generally, the more sulfurs connected to the silver(I), the longer the Ag-S bond length. However, the correlation between bond length and coordination number is not straightforward, as a result of the sophistication of the coordination environment, such as the secondary Ag...S interactions, the constraints imposed by ring closure, and the effects of the substituents. For digonal [30], trigonal [30], and tetrahedral [31] geometries, respectively, individual bond lengths vary sufficiently that they may overlap those of different geometries (Figure 15).

Figure 9. Emission spectral intensity profile recorded as Ag(I) is added to single solutions of rabbit liver apoMT; the spectra were recorded at 77 K. The emission intensity profile is plotted as a function of the Ag:MT molar ratio. The emission intensity rises to a maximum at the Ag:MT = 12 point, then falls slightly, accompanied by a red shift in the maximum to the Ag:MT = 18 point. Reproduced with permission from Zelazowski et al. [18a].
Like Cu-MT, mammalian silver metallothioneins emit light, but only at 77 K. Figure 9 shows the emission intensity profile plotted as a function of the Ag(I):MT molar ratio. Spectra were recorded from different solutions formed by adding Ag(I) to apo-MT at room temperature; the emission spectra were then recorded from the frozen, glassy solutions. Three distinct spectra are observed, at Ag(I):MT ratios of 6, 12, and ca. 18 (see also Figures 10 and 11). Unlike the set of spectra recorded for Cu-MT (Figure 8), the band intensity is not quenched for Ag:MT > 12; rather, the band maximum red shifts. This spectral change coincides with the development of new bands in the CD spectrum measured at room temperature and above (Figures 10 and 11). The sign and magnitude of the bands in the CD spectrum to the red of 220 nm in metal metallothioneins are dependent on the chirality of the metal binding site as a whole [29]. Therefore, the appearance and disappearance of intensity in individual bands is related to the formation and collapse of specific metal-thiolate species, respectively.

Figure 11. CD spectra recorded as Ag(I) is added to a single solution of rabbit liver apo-MT at 50 °C. The CD intensity profile is plotted as a function of the Ag:MT molar ratio and shows that there is a change in spectral characteristics at the Ag:MT = 12 and 17 points. Reproduced with permission from Gui et al. [18c].

Figure 10 shows the three-dimensional CD spectra recorded for a single solution of Zn7-MT 1 at room temperature and pH 2.7. The gradual blue shift of the main CD peak when more than 12 Ag(I) are added to the protein, from 310 nm for 12 Ag(I) to 300 nm with 17 Ag(I), identifies the major difference between Ag12-MT and Ag17-MT 1. The formation of the Ag12-MT 1 species, however, is only very poorly resolved by CD spectra recorded at room temperature, even though the CD spectrum distinctly indicates the formation of Ag17-MT 1. In contrast to the CD spectra shown in Figure 10, the Ag12-MT 1 species, with its well-defined characteristic CD bands at 315 nm (+, strong) and 265 nm (-, very strong), is unambiguously identified from CD spectra recorded at the higher temperature (50 °C) in Figure 11. As at room temperature, addition of Ag(I) past the Ag:MT = 12 point results in formation of Ag17-MT 1, with CD band maxima at 370 nm (-, weak) and 300 nm (+, strong). As the pH of the Zn7-MT 1 solutions used in these titrations is quite low, the protein can be regarded as apo-MT, because all 7 zinc atoms are released from metallothionein at low pH. We found that the formation of Ag12-MT and Ag17-MT is better resolved in the CD spectra when Ag(I) is added to apo-MT at low pH values, rather than when Ag(I) is added to Zn7-MT 1 at neutral pH [18b]. This may be due to the fact that the presence of seven zinc atoms in metallothionein interferes with the complete formation of Ag12-MT 1, as a result of the competition between the original tetrahedrally-coordinated Zn(II) and the incoming Ag(I). The observation that the formation of Ag12-MT 1 is more clearly shown in CD spectra recorded at high temperature than at room temperature may imply that extra energy is required to overcome a relatively high activation energy to form the well-resolved Ag12-MT 1 species. The similarity of the Ag-S bond lengths in Ag12-MT 1 and in Ag17-MT 1 (Figure 15) [33] strongly suggests that the Ag(I) adopts a high fraction of digonal geometry in both species. The local silver-thiolate binding site can be probed by Extended X-ray Absorption Fine Structure (EXAFS) spectroscopy.
By measuring Ag K-edge EXAFS, the average Ag-S bond length can be obtained accurately and the nearest neighbors around Ag(I) can also be estimated. Figure 12 shows the raw data in k-space (inset) and the Fourier transform of kχ(k). The Fourier transform spectrum clearly indicates that two separate coordination shells are present. Nonlinear least-squares curve fitting of the data [33] shows that the nearest-neighbor shell in Ag17-MT 1 contains two sulfurs at 2.43 ± 0.02 Å with σ² = 8.8×10⁻³ Ų, and the second shell contains one silver atom at 2.91 ± 0.05 Å with σ² = 13.7×10⁻³ Ų. The bond length of the Ag-S shell falls in the range of values characteristic of digonal coordination [30], and it is consistent with the coordination number of 2 obtained from the EXAFS analysis. We recently also measured the Ag K-edge EXAFS of Ag12-MT 1, and a preliminary analysis of the EXAFS data (not shown) shows that, surprisingly, Ag(I) in Ag12-MT 1 has a local environment similar to that of Ag(I) in Ag17-MT 1. The Ag-S and Ag...Ag bond lengths for Ag12-MT 1 are 2.45 Å and 2.91 Å, respectively. At first glance, these results seem contrary to the results revealed by the CD spectra. As mentioned above, CD spectroscopy provides information about the protein's three-dimensional structure as a whole, while EXAFS spectroscopy probes the local structure of the metal binding site of the protein; hence, these two techniques give complementary structural information on metallothionein. As displayed by the CD spectra, the wrapping structure in Ag12-MT 1 is different from that of Ag17-MT 1. However, given the complexity of silver-thiolate coordination illustrated by the inorganic silver-thiolate compounds, it is not surprising that the average Ag-S bond length in the local binding site of Ag12-MT 1 from EXAFS measurements does not change when compared to that of Ag17-MT 1. Another possibility is that Ag(I) in Ag12-MT 1 may have a T-shaped geometry [34], in which one of the three Ag-S bonds is so long (about 3 Å) that it is hardly detected by EXAFS spectroscopy. Finally, Figure 13 shows the emission spectra recorded for glassy solutions of Au-MT and gold thiomalate at 77 K [35]. As with Cu-MT and Ag-MT, the band maximum exhibits a 300 nm Stokes shift: excitation at 300 nm results in emission at 600 nm.
Summary of structural properties
The Cu-S bond length range in copper(I)-thiolate inorganic compounds with digonal and trigonal coordination geometries is shown in Figure 14. It is obvious that the average Cu-S bond length for digonal geometry is about 0.1 Å shorter than that found in trigonal geometry. Furthermore, the variation of Cu-S bond lengths in trigonal coordination is much larger than that found for the digonal geometries. This may be ascribed to the fact that complexes based on trigonal Cu(I)-thiolate coordination suffer greater distortions due to intramolecular repulsions between the ligands. [Table: proposed coordination geometries of metallothionein binding sites, including Cu6-MT [40], Cu(I) in the yeast protein [41], (TmSAu)x-MT (Au-S = 230 pm) [32], and Ag(I) in the yeast protein.] Figure 15. Ag-S bond lengths in silver-thiolate compounds and silver metallothioneins [33].
Modeling the Cu(I)-thiolate structures of Cu12-MT on synthetic complexes may prove helpful in understanding the binding and functional properties of both copper and silver metallothioneins. Dance has noted several recurring structural motifs in synthetic Cu(I)-thiolate complexes [36], including (a) (μ-SR)(CuSR) rings and (b) (μ-SR)Cu rings. These structures display trigonal coordination of the Cu(I) atoms, the latter motif often including terminal ligation of the Cu(I) by other types of ligands, such as phosphines. The Table lists proposed coordination geometries for a number of metallothioneins; trigonal coordination predominates for the Cu(I) and Ag(I) proteins.
An energy-minimized model of the Cu(I)-thiolate clusters in rabbit liver Cu12-MT 2a, described previously by Presta et al. [12c], is shown in Figure 16. The Cu6S11 cluster (Figure 16a), representing the α domain cluster, is composed of two Cu3S3 rings connected by a bridging S.
There are also four terminal cysteine thiolate ligands that bind the remaining Cu(I) atoms, so that all copper atoms are coordinated to three thiolates. The average Cu-bridging S bond length in this structure is 220(1) pm, while the average bond length between the Cu(I) atoms and terminal sulfurs is 251(1) pm. This structure therefore contains the (μ-SR)3Cu3 ring motif common to several synthetic Cu(I)-thiolate complexes.
The Cu6S9 cluster (Figure 16b), representing the β domain cluster, is a cage displaying two six-membered rings with alternating Cu and S atoms, forming the front and back "faces" of the cage. These rings or faces are interconnected by bridging sulfur atoms between the opposite Cu atoms, such that three Cu-SCys rings make up the sides of the cage and all Cu(I) atoms have trigonal geometry. The average Cu-S bond length in this cluster is 219(2) pm. This structure, which is a distorted prism of copper(I) ions, has only bridging thiolate ligation, in contrast to the proposed α domain cluster, which has four terminal thiolates, and it displays both the (μ-SR)3Cu3 ring and the (μ-SR)(CuSR) ring motifs that are common among synthetic Cu(I)-thiolate complexes.
This model of the protein may be used to account for several interesting properties of copper metallothioneins. First, although the metallothionein peptide chain is rather short, it is quite efficient at embedding the Cu(I)-thiolate clusters, in large part preventing solvent access to these cores. For this reason, Cu12-MT luminesces brightly at about 600 nm (λex = 300 nm) even in solution [12b] (Figure 8); synthetic Cu(I) complexes normally luminesce only in the frozen glassy state [37]. Within this general encapsulation of the Cu(I)-thiolate cores by the peptide chain, there do exist small openings in each domain where some sulfur and copper atoms are visible, and presumably accessible. The fact that the Cu6S11 α domain cluster has four terminal thiolates that can coordinate another metal suggests that these exposed sulfur atoms in the α domain may be the most susceptible to binding extra metal ions in forming the Cu-MT and (Cu6Cd)(Cu6)n-MT species [12c]. The exposed Cu(I) atoms may also represent sites of interaction with small ligands like glutathione (Presta and Stillman, unpublished results), which would be able to penetrate the openings and thus could be involved in metal-exchange reactions [50].
Research on Intelligent Adaptive Control Method for Waste Heat Power Generation Stability Problem
The effective recovery and utilization of waste heat is an important part of the energy industry chain, but the waste heat gas source fluctuates strongly and its frequency changes randomly, which easily degrades the quality of power generation. In order to obtain more continuous and stable electricity from waste heat power generation, this paper proposes an intelligent adaptive control method. The method decouples the variables to address the strong coupling of the waste heat power generation system, which on the one hand improves the anti-disturbance performance of the system and on the other hand reduces its steady-state tracking error. Experiments show that when the gas source fluctuates, the system fluctuation rate stays within 7%; when the set value changes, the adjustment time is within 7 s and the overshoot is small, so the performance is better than that of the traditional PID control method. The intelligent adaptive control method proposed in this paper is of great significance for the future development of intelligent grid-connected power generation.
I. INTRODUCTION
With the rapid development of society, human demand for energy is increasing year by year. Using waste heat resources to generate electricity is an important way to alleviate the current world energy crisis: the waste heat drives the power equipment, which in turn drives the generator to produce electricity. However, because the waste heat power generation system has many interference factors, strong coupling, and a large time lag, maintaining the stability of the power generation control system is particularly important.
A. LITERATURE REVIEW
In research on the control of waste heat power generation, early work often adopted the PID control strategy. Zhou Xiaoran and others designed a waste heat recovery control system based on the Rankine cycle. The control object of this system is temperature, and the control method used is PID control. The control strategy is simple and can basically meet the system requirements; however, it is difficult for the system to control accurately and quickly, and it needs to be optimized [1]. To compensate for the shortcomings of the PID control system, a closed-loop PID algorithm was used in an air compressor waste heat recovery system to maximize heat collection by limiting the oil temperature of the oil-water heat exchanger. However, because the system is affected by multiple disturbance factors, the closed-loop PID control algorithm does not have high control precision for the target variable, and the adaptability and stability of the system are not ideal [2]. To improve on the closed-loop PID algorithm, Kuang Lei et al. adopted a fuzzy PID control algorithm to strengthen the controllability of the waste heat application side in the waste heat recovery system of an air compressor. Although this algorithm can achieve maximum heat collection, it cannot achieve constant temperature control on the waste heat application side, and it is difficult to solve the problem of system coupling [3]. Yupeng Wang et al. designed and implemented a neural PID controller to control the outlet temperature of the evaporator in an ORC system for waste heat recovery. Compared with the traditional PID control strategy, this method can better predict the dynamic response and has strong robustness to parameter changes and external disturbances [4]. Zhang Wen et al. studied the dynamic performance of an internal combustion engine-organic Rankine cycle combined system in an internal combustion engine waste heat recovery system. A control method combining closed-loop proportional-integral and feedforward control was adopted, and the response time and overshoot of PI control were estimated and compared with feedforward control alone.
The results show that the closed-loop system can enhance the stability of the input and output of the control system [5].
The results based on the World Harmonized Transient Cycle (WHTC) show that the designed closed-loop PI control has a shorter response time and better tracking ability in dynamic processes. Pang Kuo Cheng et al. used a simulation platform to compare the VFD control strategy and the pump curve control strategy for the organic Rankine cycle. The results show that the two control strategies have different effective intervals, but neither can guarantee the stability of the recovery system [6]. Zhao Mingru proposed a map-based feedback closed-loop control algorithm for the waste heat recovery system of an internal combustion engine. To make the control more convenient, this method reduces the order of the initialized organic Rankine cycle model and constructs an algorithm-fusion controller for the nonlinear model; the control effect of the algorithm is greatly improved [7]. Zhang et al. proposed a multivariable control scheme for an ORC system for waste heat recovery by combining a linear quadratic regulator with a PI controller in a simulated environment. This solution is equivalent to a linear quadratic integral controller with feedforward, and simulation experiments show that this method responds well in load tracking and disturbance rejection [8]. A model predictive controller was implemented on a small ORC system: two variants of MPC (linear and nonlinear) were implemented on a real-time embedded platform, and the performance of the controllers was compared with that of a traditional PID controller [9]. The linear model predictive controller (LMPC) has better control stability for the controlled variables and is superior to the PID controller in response time, overshoot, oscillation and settling time. The nonlinear model predictive controller (NMPC) strategy can achieve smoother, safer and more efficient operation, reaching similar or better tracking performance with a lower control workload [10].
B. RESEARCH GAP AND MOTIVATION
In summary, there is much room for improvement in the control of waste heat power generation, especially in the stability of the control system. Existing control methods tend to produce fluctuations in airflow within the power generation unit when executing control commands, making it difficult to return the system to the preset rated state quickly and with small overshoot when the waste heat gas source fluctuates. At the same time, fluctuations in the air supply cause fluctuations in the output power, which, if not controlled, can overload the waste heat recovery unit during energy conversion; the connected electrical equipment will also be affected. Therefore, in order to reduce the energy loss in the waste heat recovery process and enhance the stability of the waste heat recovery system in operation, it is critical to study the stability of the control system while the structure of the waste heat recovery device is being updated.
C. CONTRIBUTION
The intelligent adaptive control method proposed in this paper combines coupling analysis, control and identification of the controlled objects of a roots power machine based system [11], targeting the nonlinear, multivariable, multimodel and strongly coupled characteristics of waste heat power generation control systems, in order to realize more accurate control of systems with unknown or time-varying parameters. Compared with existing control methods, the intelligent adaptive control method avoids the contradiction between rapidity and overshoot, the oversimplified signal processing, and the tendency to oscillate found in the traditional PID control method, while retaining the advantages of closed-loop PI control. It achieves a shorter response time and more sensitive tracking performance under system fluctuations. The experimental results show that if this intelligent adaptive control theory is applied to the existing control system of waste heat power generation, the stability of the waste heat recovery system and the response speed of the control system can be improved under multiparameter interaction and gas source fluctuation. The control method in this paper is of great significance for the continuous and stable production of waste heat power generation, the improvement of power generation and automation levels, and the intelligent control of the production process.
D. PAPER ORGANIZATION
The main contents of this paper are as follows: The second part is the research on the parameter coupling mechanism of the waste heat power generation control system. Aiming at the complex relationship between parameters such as temperature, pressure, flow rate and the rotational speed of the roots power machine in the waste heat power generation system, the analysis and modeling are carried out, and the coupling relationship model and control process model of the variables of the waste heat power generation control system are initially obtained. This provides the basis for the design of the control method. The third part is the decoupling analysis of the system coupling variables and the design of the intelligent adaptive control method. Specifically, the ''intelligent adaptive control method'' referred to in this paper refers to the closed-loop adaptive decoupling control strategy under the nonlinear multivariate multimodel. Finally, the parameter stability calculation is carried out to prove that the stability requirements are met. The fourth part uses simulation technology to compare the anti-interference ability and tracking ability of the new control strategy of the roots waste heat power generation system with the traditional PID control strategy. The characteristics of the control effect of the two control strategies are obtained, and the availability of the new control strategy in the waste heat power generation system is determined. The fifth part builds the experimental platform and designs the human-computer interaction system for the test. The test process includes the fluctuation interference test and the set value change test, and the collected data are compared with the data obtained by PID control under the same conditions to verify the significant advantages of the new control strategy in waste heat power generation system.
II. VARIABLE ANALYSIS AND CONTROL PROCESS OF WASTE HEAT POWER GENERATION CONTROL SYSTEM
The waste heat power generation system referred to in this article is an energy conversion and recovery system with a roots power machine as the source power machine. The main components include the base, control system operation cabinet, roots power machine, electric regulating valve, shut-off valve, power generator, speed sensor, flowmeter, thermometer, filter, pressure gauge and the connecting devices between the components. The waste heat enters the chamber inside the roots power machine through the intake pipe, and the blades of the roots power machine rotor rotate and do work under the expansion and pressure of the waste heat gas. The output shaft of the rotor is connected to the rotating shaft of the generator through a coupling and other components, and the generator is driven to produce electricity through the connected shaft. The overall structure of the roots-type waste heat power generation device is shown in Figure 1. The waste heat power generation system involves many changing quantities in the working process: when any variable of the system changes, the other variables are affected. To keep the rotational speed of the roots generator stable, that is, to ensure the stability of the power generation system, it is necessary to study and analyze these input and output variables and their coupling relationship model.
A. INPUT AND OUTPUT VARIABLES OF THE SYSTEM
As research deepens, the relationships contained in the controlled system become increasingly complex, and there is often more than one pair of variables to be controlled. Because of the mutual coupling relationships, multiple-input multiple-output (MIMO) systems are increasingly common, and control methods suitable for single-input single-output systems cannot be applied to them directly.
The output rotational speed of the roots power machine is directly related to the output of the electrical energy of the system, so the key to the quality of waste heat power generation lies in the stable control of the output shaft rotational speed of the roots power machine. This output is affected by the flow of the working fluid, the temperature and pressure of the working fluid at the inlet, the temperature and pressure of the outlet, and the slagging, ash, and scaling conditions on the inner cavity of the power machine.
The roots-type waste heat power generation control system is a five-input single-output system. The input consists of five variables: flow, inlet temperature, inlet pressure, outlet temperature, and outlet pressure. The control system needs to collect these five input variables and analyze and calculate them according to the deviation between the output variable and the expected value, then obtain the electrical signal for adjusting the electric control valve, and finally transform it into the adjustment of the rotational speed of the roots power machine. The controlled variable of the roots-type waste heat power generation control system is the rotational speed of the roots power machine, that is, the output variable of the system.
B. VARIABLE COUPLING ANALYSIS
The coupling relationship of variables is inseparable from the working process. This section combines the normal working state of the roots-type waste heat power generation control system to analyze the relationship between the input variables and output variables. The coupling relationship between variables lays the foundation for the following model analysis [12].
In the normal working process of the roots power machine, the internal changes are very complex, and there are coupling relationships, to a greater or lesser degree, between the variables. When the opening of the electric regulating valve changes, the flow through the roots power machine changes, and the change in flow directly affects the rotational speed of the roots power machine. When the pressure difference between the air inlet and the air outlet changes, it directly affects the flow rate, which in turn affects the rotational speed of the roots power machine. When the temperature difference between the air inlet and the air outlet changes, it directly affects the pressure difference between the air inlet and the air outlet and then indirectly affects the flow rate and the rotational speed of the roots power machine. A schematic diagram of the variable coupling relationships in the roots-type waste heat power generation control system is shown in Figure 2.
C. MULTIVARIATE AND MULTIMODEL COUPLING RELATIONSHIP ANALYSIS
To better explore the working characteristics of the roots-type waste heat power generation control system, it is necessary to model and analyze the variables and coupling relationships involved in the working process of the system. Therefore, we first consider the working process in the ideal state based on the control variable method. The relationship between the power and speed of the generator can be expressed as

P = 2πnM/60 (1)

where P is the rated power of the generator, n is the rated speed, and M is the rated torque. According to Formula (1), the relationship between the power of the generator and the rotational speed is linear in the ideal state; that is, the relationship between the rotational speed of the roots power machine and the power of the generator is also linear.
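As a quick numerical check of Formula (1), the short MATLAB fragment below evaluates the ideal power over the 300-500 r/min speed range used later in the experiments; the torque value is an illustrative assumption, not a value from this paper.

```matlab
% Worked example of Formula (1): P = 2*pi*n*M/60.
M = 100;                 % assumed rated torque, N*m (illustrative only)
n = [300 400 500];       % rotational speeds, r/min
P = 2*pi*n*M/60;         % generator power, W
disp(P/1e3)              % -> approx. 3.14, 4.19, 5.24 kW, linear in n
```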
1) TEMPERATURE AND PRESSURE MODEL
When the flow rate of waste heat is constant, the working gas can be regarded as an ideal gas [13], which satisfies Equation (2):

pV = n0RT (2)

where p is the pressure of the waste heat, V is the gas volume of the waste heat, n0 is the amount of waste heat gas, R is the ideal gas constant, and T is the temperature of the waste heat.
The flow rate of the waste heat working fluid can be represented by the pipe cross-sectional area S and the flow velocity v of the working fluid, and the volume V can be represented by the flow rate Q and the time t, i.e., Q = Sv and V = Qt. When the waste heat flows through the roots power machine and drives it to do work, the state of the waste heat working fluid at the air inlet and outlet can be related through a constant C1 associated with the energy consumption of the roots power machine. Under actual working conditions, the pressure of the waste heat working medium will deviate from the calculated results, so in practical applications this model should be used as an approximation to the actual working conditions.
2) TEMPERATURE AND FLOW MODEL
When the pressure of the waste heat working fluid is stable, the model can be simplified to a steady flow state, and the working gas at this time is again regarded as an ideal gas and treated with Formula (2). Then, after the roots power machine does work, the state of the waste heat working medium at the air inlet and air outlet can be related through a constant C2 associated with the energy consumption of the roots power machine. In practical applications, this model, like the temperature and pressure model, is an approximation to the actual working conditions; the deviation that occurs needs to be compensated through the controller, the other models, and the overall working model of the system.
3) PRESSURE AND FLOW MODEL
When the temperature of the waste heat is constant, the relationship between gas pressure and flow rate can be expressed by Bernoulli's equation [14]:

p + ρv²/2 + ρgh = E (7)

where ρ is the gas density; g is the acceleration of gravity, in m/s²; h is the height of the fluid, in m; v is the flow velocity of the fluid, in m/s; and E is a constant related to energy.
When the Bernoulli equation is applied, the Mach number (Ma) of the fluid needs to satisfy Ma < 0.3. The Mach number during the flow of the waste heat working medium in the device is Ma = v/a, where a is the speed of sound in m/s. The working fluid velocity in the pipeline is 55.97 m/s, which meets the Mach number requirement. Analysis of the Bernoulli equation shows that the gravitational potential energy of the gas can be neglected; Equation (7) then reduces to p + ρv²/2 = E. When the waste heat gas flows through the roots power machine to do work, the state change of the working fluid at the inlet and outlet can be related through a constant C3, with the relationship between flow rate and flow velocity given by Equation (10). C3 is a constant related to the energy consumption of the roots power machine, and can be specifically expressed as the energy consumed per unit volume of waste heat gas in the process of the roots power machine doing work.
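As a sketch of this incompressibility check (the speed of sound is an assumed round value for warm air; the paper does not state the one it used):

```matlab
% Check of the Ma < 0.3 condition for applying Bernoulli's equation.
v  = 55.97;              % working-fluid velocity in the pipeline, m/s
a  = 343;                % assumed speed of sound, m/s
Ma = v/a;                % Mach number, Ma = v/a
fprintf('Ma = %.3f, condition Ma < 0.3 satisfied: %d\n', Ma, Ma < 0.3);
```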
D. ANALYSIS OF THE OVERALL CONTROL PROCESS MODEL OF THE SYSTEM
After analysis, the control process structure of the roots-type waste heat power generation control system is shown in Figure 3.
The rotational speed setting value is set through the human-computer interaction system; the controller adjusts the opening of the electric regulating valve according to the set value, controls the flow rate of the waste heat working medium in the pipeline, and thereby regulates the rotational speed of the roots power machine.
When the rotational speed of the roots power machine changes, the controller will send a regulating signal to the electric regulating valve to adjust the valve opening. According to the working characteristics of the electric control valve, the internal signal amplification belongs to a proportional link, the adjustment process belongs to an inertial link, and the mechanical transmission link belongs to a pure lag link. Therefore, the control process transfer function model of the electric regulating valve can be obtained, and the model block diagram is shown in Figure 4.
Here, K 0 is the amplification factor of the electric control valve, T 0 is the time constant of the electric control valve, D 1 (s) is the external temperature difference feedback signal, and D 2 (s) is the external pressure difference feedback signal.
According to the control model, the transfer function of the electric regulating valve can be expressed as the product of a proportional link, an inertial link, and a pure lag link, i.e., G(s) = K0·e^(−τs)/(T0·s + 1). According to the analysis of the working characteristics of the roots power machine, its transfer function can be expressed by Formula (13). Without considering sensor signal error, the speed sensor accurately expresses the speed it detects, so its transfer function can be expressed as the ratio Q(s)/P(s), where Q(s) is the sensor signal and P(s) is the output speed function; the resulting transfer function is a time-independent constant.
According to the above analysis of each part of the control module, the transfer function of each control module is obtained. After connecting them in series, the overall control process transfer function of the roots-type waste heat utilization system can be obtained. Its overall control process transfer function is shown in Figure 5.
Therefore, after connecting the above parts in series, the transfer function of the rotational speed adjustment process of the roots-type waste heat utilization system can be expressed as

G(s) = K·e^(−τs) / ((T0·s + 1)(T1·s + 1))

where s represents the Laplace operator; T0 and T1 are the time constants of the two inertial links; τ is the process lag time constant; and K is the process steady-state gain coefficient.
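A minimal MATLAB sketch of this second-order-plus-dead-time model is given below; the numerical parameter values are illustrative assumptions only (the paper does not report them), and the Control System Toolbox is assumed.

```matlab
% Sketch of the overall speed-regulation transfer function:
%   G(s) = K*exp(-tau*s) / ((T0*s + 1)*(T1*s + 1))
K   = 1.2;    % assumed steady-state gain
T0  = 0.8;    % assumed valve time constant, s
T1  = 2.5;    % assumed roots power machine time constant, s
tau = 0.5;    % assumed process lag, s

s = tf('s');
G = K*exp(-tau*s) / ((T0*s + 1)*(T1*s + 1));
step(G), grid on   % open-loop speed response to a unit valve-opening step
```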
III. RESEARCH ON INTELLIGENT ADAPTIVE CONTROL METHOD
Building on the coupling analysis of the multivariable, multimodel waste heat power generation control system in the second part, this part conducts a closed-loop decoupling analysis of the system according to its nonlinear characteristics and designs a new intelligent adaptive control method [15]. The new intelligent adaptive controller designed in this paper can handle the strong coupling between parameters and uses the multimodel system obtained from the previous decomposition to design the waste heat power generation controller, so that the design better matches the characteristics of the waste heat power generation control system; to our knowledge this is the first application of this control method to the stability of a waste heat power generation system. The design of this control method depends strongly on the overall process of the waste heat power generation device and the working characteristics of the roots power machine. The thermodynamic properties of this system are obviously nonlinear, so the original control system should be expressed as a nonlinear system. Let r and y be the input and output of the system, respectively. The system equation of the nonlinear system can then be expressed in matrix form as Equation (17), where X(t) is the data vector of the nonlinear system composed of the input sequence and the output sequence, v[·] is a higher-order nonlinear vector function of the data vector, and a is the time delay of the nonlinear system.
A. ONE-STEP-AHEAD OPTIMAL DECOUPLING CONTROL LAW
To carry out the closed-loop decoupling design, according to the transfer function model of the control system, the coupling effect of each input channel on the other output channels is regarded as an interference source, and the coupling effect is compensated by the feedforward method. Feedforward compensation can cancel a coupling component as soon as it is generated; compared with general feedback control, the compensation is timelier and is not affected by the time delay of the control system. The input parameter matrix function of the nonlinear system can be written as the sum of two matrices: N(z⁻¹), a diagonal matrix representing the relationship between input and output parameters on the main channels of the control system, and a second matrix representing the coupling relationships between the different channels of the control system. For this nonlinear control system, a one-step-ahead optimal performance index with a weighted form is introduced, in which L(z⁻¹) and W(z⁻¹) are weighted polynomial matrices of the one-step-ahead optimal performance index, and S(z⁻¹) is also a weighted polynomial matrix whose main diagonal elements are zero. The optimal control law is the controller that minimizes the performance index of Equation (20). The output and input of an auxiliary system are then defined as in Equations (21) and (22). Here, the partial derivative of the higher-order nonlinear vector function with respect to the input variable vector is a constant matrix, and the matrix C is a constant matrix. In addition, K0 is the constant matrix term of K(z⁻¹). E(z⁻¹) can be determined from a Diophantine equation [16]; E0 is the constant matrix term of E(z⁻¹), and calculation shows that F0 = P(0). In this formulation, G(z⁻¹) is a diagonal matrix polynomial whose order is one lower than that of the input parameter matrix polynomial.
To solve for the optimal control law that minimizes the original performance index (20), the problem can be transformed equivalently into minimizing the performance index of the auxiliary system defined by Equations (21) and (22). Solving this problem yields the one-step-ahead optimal weighted closed-loop decoupling control law that minimizes the system performance index. For the input variables of the system, the vector r(t) must exist, so the weighted polynomial matrices in the performance index equation must satisfy the corresponding existence condition. Substituting the optimal decoupling control law into the nonlinear system then gives the closed-loop equations, from which further relations can be obtained by calculation. At the same time, for the nonlinear system to be closed-loop stable, the weighted polynomial matrices L(z⁻¹), S(z⁻¹), and W(z⁻¹) must also meet stability conditions. When the higher-order nonlinear vector function in the nonlinear system is small, it can be regarded as a bounded perturbation, and the control law can then be obtained by the linear method.
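The exact control law above cannot be reproduced here because its equations were lost in extraction, but the underlying idea, treating cross-channel coupling as a measurable disturbance and cancelling it by feedforward, can be sketched on an assumed two-input two-output plant:

```matlab
% Feedforward decoupling sketch on an assumed 2x2 plant (illustrative
% first-order channel models; not identified from the real system).
s   = tf('s');
G11 = 2.0/(3*s + 1);   G12 = 0.5/(4*s + 1);
G21 = 0.3/(5*s + 1);   G22 = 1.0/(2*s + 1);

D12 = -G12/G11;        % cancels the effect of input 2 on output 1
D21 = -G21/G22;        % cancels the effect of input 1 on output 2

G  = [G11 G12; G21 G22];
D  = [1 D12; D21 1];
Gd = minreal(G*D);     % decoupled plant: off-diagonal entries vanish
step(Gd), grid on
```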
B. IDENTIFICATION EQUATION
In the intelligent adaptive control algorithm, the control system needs to have an identification process. The identification process refers to determining a model equivalent to the system under test from the known models through the input data and output data collected by the system.
The parameter matrix of the nonlinear system can be identified by a recursive method. For nonlinear systems, using Equation (25), the identification equation of the intelligent adaptive control can be derived [17], [18], [19] and simplified, after which Equations (27) and (32) can be written in a common compact form.
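The recursive identification equations themselves did not survive extraction; as a stand-in, the following self-contained MATLAB sketch shows a generic recursive least-squares update of the kind commonly used for this purpose, run on synthetic data from an assumed second-order model (all numerical values are assumptions).

```matlab
% Generic recursive least-squares (RLS) identification sketch.
% Model: y(t) = -a1*y(t-1) - a2*y(t-2) + b1*u(t-1) + noise,
% i.e. y(t) = phi(t)'*theta with theta = [a1; a2; b1].
N  = 200;
u  = randn(N,1);  y = zeros(N,1);
a1 = -0.7;  a2 = 0.1;  b1 = 0.5;        % assumed "true" parameters
for t = 3:N
    y(t) = -a1*y(t-1) - a2*y(t-2) + b1*u(t-1) + 0.01*randn;
end

theta  = zeros(3,1);                    % parameter estimate
P      = 1e4*eye(3);                    % large initial covariance
lambda = 0.98;                          % forgetting factor
for t = 3:N
    phi   = [-y(t-1); -y(t-2); u(t-1)]; % regressor (the data vector X(t))
    e     = y(t) - phi'*theta;          % prediction error
    Kg    = P*phi/(lambda + phi'*P*phi);
    theta = theta + Kg*e;               % online parameter correction
    P     = (P - Kg*phi'*P)/lambda;
end
disp(theta')                            % should approach [a1 a2 b1]
```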
C. INTELLIGENT ADAPTIVE CONTROL METHOD BASED ON NONLINEAR MULTIVARIATE AND MULTIMODEL
The intelligent self-adaptive control method is a control method for nonlinear multivariable, multimodel systems. In the roots-type waste heat power generation system, the control model can be switched intelligently according to the working conditions, and the output variables of the control system can track changes in the pre-specified bounded input variable settings while the effects of coupling between different parameters are reduced [20], [21], [22], [23].
For the identification equation (34), a linear prediction model can be defined, and the linear-model estimate of the parameter matrix at time t can be corrected online by a recursive identification method. Similarly, a neural network nonlinear prediction model can be defined for Equation (34), and the parameter matrix based on the nonlinear model can be estimated at time t in a way analogous to the linear case. In the nonlinear identification, NN[·] represents the neural network structure; X(t) is the vector composed of the inputs and outputs of the nonlinear system and is the input vector of the neural network; and ô(t) is the estimate of the ideal weight matrix at time t, with ô(t) bounded. According to Equation (37) and the deterministic equivalence principle, the adaptive closed-loop decoupled controller r1(t) based on the linear model estimate can be constructed; according to Equation (36) and the deterministic equivalence principle, the neural network adaptive closed-loop decoupling controller r2(t) based on the nonlinear model estimate can be constructed. The weighted polynomial matrices are selected according to the analysis above, and ê[X(t)] is calculated accordingly. The switching criterion function Jk(t) is defined such that if ||pk(t)|| > 2A then c1(t) = 1, and otherwise c1(t) = 0; n is a positive integer, and b ≥ 0 is a constant that can be determined in advance. The value of k is 1 or 2: k = 1 denotes the linear model and k = 2 the nonlinear model. The adaptive behavior is embodied in comparing, at each moment, the values of Jk(t) for k = 1 and k = 2, taking the minimum, and applying the corresponding adaptive decoupling controller r1(t) or r2(t) to the controlled system.
The system structure of the intelligent adaptive control method based on nonlinear multivariable and multimodel is shown in Figure 6.
The workflow of the intelligent adaptive control system can be specifically summarized as shown in Figure 7.
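The switching rule itself can be sketched in a few lines. The criterion below is a simplified stand-in (an accumulated squared identification error) for the paper's Jk(t), and all signals are placeholders; it only illustrates the pick-the-smaller-criterion logic.

```matlab
% Simplified multimodel switching sketch: at each step, compare the
% accumulated squared prediction errors of the linear and NN models
% and apply the controller built on the currently better model.
N = 100;
y        = randn(N,1);              % measured output (placeholder)
yhat_lin = y + 0.05*randn(N,1);     % linear-model prediction (placeholder)
yhat_nn  = y + 0.10*randn(N,1);     % NN-model prediction (placeholder)
r1 = randn(N,1);  r2 = randn(N,1);  % candidate control signals (placeholder)
r  = zeros(N,1);  J1 = 0;  J2 = 0;
for t = 1:N
    J1 = J1 + (y(t) - yhat_lin(t))^2;
    J2 = J2 + (y(t) - yhat_nn(t))^2;
    if J1 <= J2
        r(t) = r1(t);   % k = 1: linear-model adaptive decoupling controller
    else
        r(t) = r2(t);   % k = 2: NN-model adaptive decoupling controller
    end
end
```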
D. STABILITY ANALYSIS OF THE INTELLIGENT ADAPTIVE CONTROL SYSTEM
When the weighted polynomial matrices satisfy the conditions of Equations (28) and (31) and the upper bound of the nonlinear term is known, suitable bounding constants can be set. From the calculation process of the prediction model identification method, if ||p(t)|| > 2A then c(t) = 1, and otherwise c(t) = 0. Combining Formula (43) with Formula (46), then Formula (47) with Formula (50), and applying the deterministic equivalence principle, we obtain that at any time t the identification error of the intelligent adaptive control system is p1(t + a) or p2(t + a), and ||X(t − a)|| is bounded.
According to the switching criterion function, the second term of Jk(t) is bounded, and then, according to Equation (46), J1(t) is bounded. For J2(t), if J2(t) is bounded, then according to the switching criterion the identification error p(t) of the system satisfies Equation (47).
If J2(t) is unbounded, then since J1(t) is bounded, there is a time t0 such that J1(t) ≤ J2(t) for all t ≥ t0. According to the switching criterion, when t ≥ t0 + 1, the identification error p(t) of the system also satisfies the equation.
In summary, regardless of whether J2(t) is bounded, the system identification error satisfies Equation (47); therefore X(t − a) is bounded, and the system is closed-loop stable.
IV. SIMULATION ANALYSIS
To test the performance of the intelligent adaptive control method on the roots-type waste heat power generation system, the previously studied PID control method and the control method proposed in this paper are compared in simulation, examining the control effect of the two methods under continuous small-amplitude external fluctuations and under changes in the set value. The system dynamic model was constructed in a MATLAB/Simulink environment [24].
In the PID regulation of the roots power machine, the PID parameters Kp, Ti, and Td need to be tuned to achieve the desired control effect, and the other parameters are taken according to actual production operation and relevant reference materials. In MATLAB, the root-locus functions rlocus and rlocfind are used to determine Kp, Ti, and Td according to the Ziegler-Nichols critical proportion method; the Ziegler-Nichols PID tuning table is shown in Table 1, where Kk is the critical gain and Tk is the critical oscillation period of the system.
At the critical stable speed, the parameters of the PID controller are adjusted as follows: K p = 0.98; T i = 150.00; T d = 39.00.
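For reference, a hedged sketch of forming the comparison PID controller from these tuned values; the classic Ziegler-Nichols ultimate-gain rules (Kp = 0.6·Kk, Ti = 0.5·Tk, Td = 0.125·Tk) would give the starting point, and the pid object below assumes MATLAB's parallel form with Ki = Kp/Ti and Kd = Kp·Td:

```matlab
% Comparison PID controller built from the tuned values in the text.
Kp = 0.98;  Ti = 150.00;  Td = 39.00;
C  = pid(Kp, Kp/Ti, Kp*Td);   % parallel form: Ki = Kp/Ti, Kd = Kp*Td
% Closed loop against an assumed plant model G (see the earlier sketch):
% T = feedback(C*G, 1);  step(T), grid on
```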
The simulation scheme for small-amplitude fluctuation interference is to add a step signal or a ramp signal with a low peak value at a certain time point. The simulation scheme for set-value changes is to add a step signal of a certain value at the set time point. The control effect of the control system is then analyzed from the simulation plots.
In this paper, the control method of this paper is compared with the PID control method on MATLAB/Simulink software simulation platform. The overall structure of the simulation comparison experiment is shown in Figure 8. The values of the devices in the figure will vary as the simulation experiment requires.
The upper part of the figure is an intelligent adaptive control method based on nonlinear multivariable and multimodel with five inputs and one output, and the lower part is a traditional PID control method with a single input and a single output [25]. By changing the magnitude and time of the step signal, the roots-type waste heat power generation system can be simulated under the actual working conditions when the fluctuation or set value changes. The main indicators of the simulation comparison are based on the overshoot and adjustment time, corresponding to the fluctuation range and adjustment time in the performance indicators.
A. ANTI-INTERFERENCE ABILITY TEST UNDER SMALL FLUCTUATION INTERFERENCE
The unstable behavior of waste heat gas sources in actual situations can be simulated through low-amplitude step and ramp signals. The rotational speed of the simulation test is set to 400 r/min, and two small-amplitude interference signals in opposite directions are added through the controls in the simulator to simulate the increase and decrease in gas volume of the gas source under the actual working conditions of the roots-type waste heat utilization system.
After simulation, the obtained results are shown in Figure 9. The black curve r(t) in the figure represents the fluctuation applied to the system, the red curve y(t) represents the rotational speed response curve of intelligent adaptive control, and the blue curve u(t) represents the traditional PID control rotational speed response curve. At 15 s and 35 s, two fluctuating signals with increasing gas volume and decreasing gas volume were added. The magnitude of the signal is not fixed, indicating random small-amplitude fluctuations. The simulation results show that both control methods can stabilize the rotational speed to the set value, but the time for the two to stabilize is different. The PID control method stabilizes the system within 4 s on average, and the control method has an obvious overshoot phenomenon, while the intelligent adaptive control method stabilizes the system in an average of 1.5 s, and the control has almost no overshoot.
In contrast, the adjustment time of intelligent adaptive control is less than half that of PID control, and there is almost no fluctuation. Therefore, the intelligent adaptive control method adjusts faster and is more stable in the face of small fluctuations.
B. TEST OF THE ABILITY TO TRACK THE CHANGE OF THE SET VALUE
The change in the rotational speed setting value is realized by the step signal. The initial rotational speed of the system is set to 400 r/min, and then a step signal is added at 10 s, and the rotational speed is set to 300 r/min. When it reaches 30 s, a step signal is added again, and the rotational speed is set to 500 r/min. The tracking ability of the two control methods is tested when the set value changes.
After simulation, the result obtained is shown in Figure 10. The black curve r(t) in the figure represents the set value of the system rotational speed, the red curve y(t) represents the rotational speed response curve under intelligent adaptive control, and the blue curve u(t) represents the rotational speed response curve under traditional PID control. The rotational speed was set to 300 r/min at 10 s, the intelligent adaptive control method remained stable within 2 s, and the overshoot did not exceed 1%. The stabilization time of the PID control method is 5 s, and the overshoot is 6%. When the time reaches 30 s, the rotational speed is set to 500 r/min, the intelligent adaptive control method remains stable at 2.5 s, and the overshoot also does not exceed 1%. The stabilization time of the PID control method is 6 s, and the overshoot is 5%.
The simulation results show that the adjustment time of the intelligent adaptive control method is shorter, and its rotational speed adjustment effect is obviously better than that of the PID control method. In addition, comparing the overshoot of the two control methods shows that the overshoot of the PID control method is 5 to 6 times that of the intelligent adaptive control method. Therefore, the tracking ability of the intelligent adaptive control method has obvious advantages over the PID control method.
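The overshoot and adjustment-time figures quoted here can be extracted from a simulated response with a few lines of post-processing; the sketch below uses a placeholder response for a 300 to 500 r/min step and a 2% settling band, both assumptions.

```matlab
% Sketch: extracting overshoot and settling time from a speed response.
t   = linspace(0, 40, 4001)';      % time, s
y   = 500 - 200*exp(-t/1.0);       % placeholder response, 300 -> 500 r/min
y0  = y(1);  yss = y(end);         % initial and steady-state speeds
ovs = max(0, (max(y) - yss)/(yss - y0)*100);       % overshoot, %
band = 0.02*abs(yss - y0);                          % 2% settling band
idx  = find(abs(y - yss) > band, 1, 'last');        % last excursion
ts   = t(min(idx + 1, numel(t)));                   % settling time, s
fprintf('overshoot = %.1f %%, settling time = %.2f s\n', ovs, ts);
```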
V. ACTUAL TESTING AND DATA ANALYSIS
A. EXPERIMENTAL SCHEME DESIGN
For the experiment to verify whether the controller can keep the system at a stable rotational speed under small fluctuations in the roots-type waste heat power generation control system, the rotational speed parameter index during the experiment is set to 300 r/min. The small fluctuations during the experiment are random and are simulated by changing the intake air, that is, by manually fine-tuning the opening of the gas tank valve during the experiment. After the experiment, the rotational speed curve is drawn, and whether the deviation of the rotational speed meets the fluctuation-range parameter index of −7% to +7% is analyzed.
For the experiment to verify whether the controller can respond in time and make accurate adjustments when the set value of the roots-type waste heat power generation control system changes, according to the performance index of the rotational speed, the adjustment parameter of the rotational speed in the experiment is set as 300 r/min and 400 r/min. To improve the accuracy of the experiment, it is necessary to verify not only the effect when the rotational speed is increased but also the effect when the rotational speed is decreased. Therefore, in the experiment, the initial rotational speed was set to 300 r/min, and then the rotational speed was adjusted to 400 r/min. After the equipment was stabilized, the rotational speed was set to 300 r/min. After the experiment, we analyze the data, compare whether the adjustment time meets the rotational speed adjustment time performance index of 7 s, and analyze the overshoot to verify whether the test process is stable.
B. EXPERIMENTAL PLATFORM DESIGN
In this paper, artificial equipment is used in place of the waste heat generated in actual enterprises: under laboratory conditions, compressed air with similar source properties is selected as the gas source to replace the waste heat produced in the actual production process. Directly monitoring the power of a generator is inconvenient, so under laboratory conditions an eddy current brake is used in place of the generator; the eddy current brake provides the load for the roots-type waste heat power generation control system, which also makes the monitored data more accurate.
The overall structure of the experimental platform of the roots-type waste heat utilization system is shown in Figure 11.
1) INTAKE AND OUTTAKE SECTIONS
The data monitoring points of temperature, pressure and flow are at the inlet and outlet, so flow meters, thermometers and pressure gauges are installed on the inlet and outlet pipelines, respectively. After design, the instrument installation of the intake pipeline and the exhaust pipeline is shown in Figure 12.
The thermometer and the pressure gauge are threaded into the intake and outtake pipelines. The flow sensor is a Hongqi Instrument brand vortex flowmeter, installed with flanged wafer connections on the intake and outlet pipelines. The valves on the pipeline are a ZDLP-16C-type electric regulating valve and a J41H-16C-type manual stop valve.
2) MAIN PART
The working part of the experimental device is shown in Figure 13(a), and the detection instruments are shown in Figure 13(b). The air inlet and outlet of the roots power machine are connected to the intake and exhaust pipelines, respectively. The output shaft of the roots power machine is connected to the speed sensor through a coupling, and the other end of the speed sensor is connected to the eddy current brake through a coupling. The speed range is 300-500 r/min, and the power range is below 10 kW. Accordingly, the speed sensor is a CGQY torque speed sensor with a TR-4D torque speed power measuring instrument produced by HT-SAAE, and the brake is a WZ-20 eddy current brake with a WLK controller produced by HT-SAAE. Through the setting of the controller, the eddy current brake generates a load equivalent to a generator of the same power, simulating the roots-type waste heat power generation control system driving the generator under actual working conditions.
3) CONTROL SYSTEM PART
The controller of the experimental platform control system is an embedded control system based on STM32 series microprocessors. Between the controller, the sensor and the controlled device, the conventional RS485, RS232 and standard current signal communication methods are used for data acquisition and output control. The display of data and the operation of the control system are realized through the MCGS touch screen. The operation and control part of the experimental platform is shown in Figure 14.
4) HUMAN-COMPUTER INTERACTION SYSTEM
The data processing of the roots-type waste heat power generation control system is realized by the microcontroller, the data are saved on the touch screen, and the data can be downloaded to a U disk. The human-computer interaction system is realized through the touch screen. The specific human-computer interaction interface is shown in Figure 15.
The overall connection relationship of the control system is shown in Figure 16.
C. EXPERIMENTAL RESULTS AND DATA ANALYSIS
1) SMALL FLUCTUATION INTERFERENCE TEST
To verify the working state of the roots-type waste heat power generation control system under actual working conditions and the performance of the intelligent adaptive control method, according to the experimental scheme of this paper, the rotational speed of the roots power machine is set at 300 r/min. Then, the air supply is manually adjusted to apply small fluctuation disturbances to the system. After the experiment, the system is updated, and the PID control method is used to carry out the same experiment as the experimental control.
During the experiment, several sets of data were sampled, and the rotational speed data were used to draw the rotational speed curve under PID control and the rotational speed curve under intelligent adaptive control.
Several sets of tests were carried out under PID control, and the rotational speed curve drawn by some of the test data is shown in Figure 17.
The blue curve in the figure represents the rotational speed curve under PID control, the red curve represents the deviation range of the rotational speed fluctuation, and the black curve represents the set value of the rotational speed. From Figure 17, it can be roughly seen that the rotational speed of the roots power machine repeatedly approaches and at times exceeds the allowed deviation range. After statistical analysis, the data information is shown in Table 2.
From the statistical analysis in Table 2, it can be seen that the average value of the rotational speed is very close to 300.00 r/min, indicating that the overall rotational speed fluctuates around the set value. However, the maximum deviation and the minimum deviation of the rotational speed exceeded the fluctuation range of the parameter index −7.00%∼ +7.00%, which does not meet the requirements. In addition, the variances of the rotational speed of these 10 groups of data are all above 200.00, which also shows that the fluctuation of the rotational speed under PID control is very obvious, and the stability of the system is insufficient.
Several sets of tests were carried out under intelligent adaptive control, and the rotational speed curve drawn by some test data is shown in Figure 18.
The blue curve in the figure represents the rotational speed curve under intelligent adaptive control, the red curve represents the deviation range of the rotational speed fluctuation, and the black curve represents the set value of the rotational speed. From Figure 18, it can be roughly seen that the rotational speed of the roots power machine is always within the deviation range of rotational speed fluctuations and does not exceed the data points of the parameter indicators. Moreover, it can be clearly seen that under intelligent adaptive control, the size of the deviation is obviously smaller than the requirements in the performance index.
After statistical analysis, the data information is shown in Table 3.
From the statistical analysis in Table 3, it can be seen that the average value of the rotational speed under intelligent adaptive control is approximately 300.00 r/min, indicating that the overall rotational speed also fluctuates up and down from the set value. The maximum deviation and minimum deviation of the rotational speed are within 5.00 r/min, which is obviously smaller than the fluctuation range of the parameter index −7.00%∼ +7.00%. Moreover, the variance of these data is small, which also shows that under intelligent adaptive control, the rotational speed fluctuation is weak, and the system stability is strong.
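The per-run statistics reported in Tables 2 and 3 (mean, extreme deviations, variance, and the ±7% check) amount to a few lines of post-processing; a sketch on placeholder data:

```matlab
% Sketch of the statistical analysis applied to each sampled speed run.
n_set = 300;                           % set value, r/min
n_rpm = n_set + 4*randn(500,1);        % placeholder sampled speeds
n_avg   = mean(n_rpm);
dev_max = max(n_rpm) - n_set;          % maximum deviation, r/min
dev_min = min(n_rpm) - n_set;          % minimum deviation, r/min
n_var   = var(n_rpm);
ok      = all(abs(n_rpm - n_set) <= 0.07*n_set);   % within +/-7% band?
fprintf('mean %.2f, dev [%+.2f, %+.2f], var %.2f, within band: %d\n', ...
        n_avg, dev_min, dev_max, n_var, ok);
```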
Through the statistics of Figure 17 and Figure 18, the control effects and characteristics of the two control methods under the disturbance of small amplitude fluctuations are analyzed.
In order to make the comparison between the two control methods more obvious, the rotational speed curves of the two control methods are shown in a graph, as shown in Figure 19.
The blue curve shows the rotational speed curve under PID control, the orange curve shows the rotational speed curve under intelligent adaptive control method, and the black curve shows the rotational speed setting value. It is obvious that the rotational speed under intelligent adaptive control is closer to the rotational speed setting value of 300.00 r/min, and the error is smaller. This indicates that the intelligent adaptive control method is more effective than the PID control method when the rotational speed is disturbed by small fluctuations.
The rotational speed adjustment comparison in Table 4 can be obtained by comparing the rotational speed data of the two control methods.
By comparing the data in the table, it can be seen that the difference between the average rotational speed of the two control methods and the set rotational speed is not large, indicating that the rotational speed of the two control methods changes around the set value and meets the most basic control requirements.
The average maximum deviation of the PID control method is 5.4 times that of the intelligent adaptive control method, and its maximum deviation, about 8.11% of the average rotational speed, exceeds the 7.00% limit and does not meet the parameter index requirements. In contrast, the average maximum deviation under the intelligent adaptive control method is within 2%, which meets the parameter index requirements, and the stability is improved by about 6% compared with the PID control method. In addition, the average variance of the rotational speed under the PID control method is 30 times that under the intelligent adaptive control method, indicating that the intelligent adaptive control method has more effective anti-interference capability than the PID control method in the face of gas source fluctuations.
2) SET VALUE CHANGE TEST
The tracking performance test is achieved by changing the set value. First, the equipment is stabilized at 300.00 r/min, and then the rotational speed is adjusted to 400.00 r/min through the human-machine interface. After the rotational speed is stable for a period of time, the rotational speed is adjusted to 300.00 r/min. The rotational speed data of the two control methods are tested when the set value changes. The rotational speed curve under PID control is shown in Figure 20. The black curve in the figure represents the set value, and the blue curve represents the rotational speed. The figure shows that when the set value changes, the rotational speed of the roots power machine fluctuates greatly, the overshoot is close to 50 r/min, and the adjustment time is also relatively long.
After statistical analysis, the data information is shown in Table 5.
It can be seen from the statistical data that the peak overshoot values under PID control all exceed 30.00 r/min, which clearly exceeds the fluctuation range required by the control index. From the change of the set value to the stabilization of the rotational speed, the adjustment time of the PID control method is approximately 8 s, reaching 9 s in individual cases, which is about 1 s longer than the control index allows.
The rotational speed curve under intelligent adaptive control is shown in Figure 21. The black curve in the figure represents the set value, and the blue curve represents the rotational speed. It can be seen from the figure that when the set value changes, the rotational speed of the Roots power machine smoothly follows the change, with no large overshoot after the set value is exceeded. From the change of the set value to the stabilization of the rotational speed, the time taken is significantly shorter than under the PID control method.
After statistical analysis, the data information is shown in Table 6.
It can be seen from the statistical data that the peak value of the overshoot does not exceed 10.00 r/min under intelligent adaptive control, which meets the fluctuation range required by the control index. From the change of the set value to the stabilization of the rotational speed, the adjustment time of the intelligent adaptive control method is approximately 5 s, which meets the adjustment time index requirement of 7 s. It is about 3 s shorter than the PID control method, and the advantage is more obvious.
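The overshoot and adjustment time quoted in Tables 5 and 6 can be extracted from a recorded step response roughly as sketched below; the 2% settling band and the 0.1 s sampling period are illustrative assumptions, not the exact criteria used in these experiments.

```python
import numpy as np

def step_response_metrics(speeds, new_set, dt=0.1, band=0.02):
    """Estimate overshoot (r/min) and adjustment time (s) for an upward step.

    speeds: speed samples recorded from the moment the set value changes.
    new_set: new set value in r/min.
    dt: sampling period in seconds (assumed).
    band: relative tolerance for declaring the speed settled (assumed 2%).
    """
    speeds = np.asarray(speeds, dtype=float)
    overshoot = max(0.0, speeds.max() - new_set)
    settled = np.abs(speeds - new_set) <= band * new_set
    adjust_time = len(speeds) * dt
    for i in range(len(speeds)):
        # Adjustment time: first instant after which the speed stays in band.
        if settled[i:].all():
            adjust_time = i * dt
            break
    return overshoot, adjust_time

# Hypothetical response to a 300 -> 400 r/min set-value step.
trace = [300, 340, 385, 412, 405, 398, 401, 400, 399, 400]
print(step_response_metrics(trace, new_set=400.0))
```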
The control effects of the two control methods when the set values are changed are analyzed by the statistics in Figure 20 and Figure 21, respectively. To make the comparison more obvious, the results of the two control methods are shown in one figure, as shown in Figure 22.
The blue curve shows the rotational speed under the PID control method, the orange curve shows the rotational speed under the intelligent adaptive control method, and the black curve shows the set value of the rotational speed. It can be seen that the rotational speed overshoot is smaller and the adjustment time is shorter under the intelligent adaptive control method. This indicates that the intelligent adaptive control method is more effective than the PID control method when the rotational speed setting value changes.
By comparing the rotational speed data of the two control methods, the rotational speed adjustment comparison table shown in Table 7 can be obtained.
By comparison, the adjustment time of the PID control method in the experiment is significantly longer than 7 s, exceeding the time required by the performance index. Because of the poorer stability of the PID control method, its overshoot significantly exceeds the 7% rotational speed fluctuation range. In contrast, the intelligent adaptive control method regulates significantly faster, with an improvement of about 41.70%, and its overshoot stays within the 7% rotational speed fluctuation range. This verifies that the intelligent adaptive control method has better tracking performance when the set value of the Roots-type waste heat power generation control system changes.
VI. CONCLUSION
To address the problem that fluctuations in the waste-heat gas source and unfixed frequency changes make the stability of the system difficult to maintain during operation, a new control method for the waste heat power generation system is designed that greatly improves system stability and achieves a better control effect.
In this paper, the control variable method is used to analyze the relationships between different parameters, and the coupling relationship model and control process model of the waste heat power generation control system variables are initially obtained, which provides the basis for the design of the control method. Then, according to the strong coupling characteristics of the waste heat power generation system, an intelligent adaptive control method based on nonlinear multivariable and multi-model techniques is designed so that the control system can meet the control index requirements of the Roots-type waste heat power generation system. The new control strategy designed in this paper is simulated and compared with the traditional PID control strategy to verify its outstanding advantages. Finally, an experimental platform was built for testing. The experiments test the control effects of the intelligent adaptive control method and the PID control method under small fluctuation disturbances and under changes of the rotational speed set value, respectively. The experimental results show that the intelligent adaptive control method satisfies the performance indicators and that its control effect is better than that of the PID control method, which verifies the effectiveness and availability of the control method in the Roots-type waste heat utilization system. It is confirmed that applying this control method to the waste heat power generation control system can improve the stability of the system, promoting the development of waste heat control systems.

KUN ZHANG received the bachelor's degree in measurement and control technology and instrumentation from the Hebei University of Technology, in 2021, where she is currently pursuing the master's degree in electronic information. She is mainly engaged in the research of waste heat recovery technology.
YAMENG ZHANG received the bachelor's degree in measurement and control technology and instrumentation from the Hebei University of Technology, in 2019, where he is currently pursuing the master's degree in instrument science and technology.
His current research interests include low-quality waste heat recovery and roots waste heat power generation devices.
Mr. Zhang won the Hebei Science and Technology Invention Award in 2019.
YANCHUN XIAO received the master's degree. He holds the title of Associate Researcher. His research interests include mechatronics technology and its applications.
Color Changes in Ag Nanoparticle Aggregates Placed in Various Environments: Their Application to Air Monitoring
Fresh Ag nanoparticles (NPs) dispersed on a transparent SiO2 substrate exhibit an intense optical extinction band in the visible range originating in localized surface plasmon resonance (LSPR). The intensity of the LSPR band weakened when the Ag NPs were stored in ambient air for two weeks. The rate of the weakening and the LSPR wavelength shift, corresponding to visual chromatic changes, strongly depended on the environment in which the Ag NPs were set. The origin of a chromatic change was discussed along with both compositional and morphological changes. In one case, bluish coloring followed by prompt discoloring was observed for Ag NPs placed near the ventilation fan in our laboratory, resulting from the adsorption of large amounts of S and Cl on the Ag NP surfaces as well as particle coarsening. Such color changes indicate the presence of significant amounts of S and Cl in the environment. In another case, a remarkable blue-shift of the LSPR band was observed for the Ag NPs stored in a desiccator made of stainless steel, originating in the formation of CN and/or HCN compounds and surface roughening. Their color changed from maroon to reddish, suggesting that such molecules were present inside the desiccator.
Introduction
Environmental pollution has become an important problem all over the world. For instance, nitrogen oxides (NOx), which play a major role in the formation of ozone and acid rain, are among the most dangerous air pollutants. Continued exposure to NO2 causes an increased incidence of acute respiratory infection [1,2]. In addition, sulphur compounds such as sulphur oxides (SOx) and hydrogen sulphide (H2S) are also well-known air pollutants. Even short-term exposure to SO2 is linked with respiratory effects including breathing difficulty and asthma symptoms [1,2]. H2S, produced by many industrial processes and the decomposition of oil, is a very poisonous, corrosive, flammable and explosive gas [3].
Focusing on industrial materials, e.g., electronics and semiconductor materials, copper and silver are extensively used because of their high electrical conductivity, ductility and malleability. They are, however, inevitably corroded by reacting with H2S in ambient air to produce sulfides such as Cu2S and Ag2S [4][5][6]. The atmospheric corrosion caused by pollutant gases is becoming a significant factor in the reliability of electrical equipment. As electronics continue to decrease in size, it is important to be aware of the pollutant gases around electrical equipment to prevent corrosion risk.
In recent years, particulate matter (PM) has also attracted considerable attention due to its adverse effects on human health and materials corrosion. It is known that PM can be carried deep into the lungs, worsening lung diseases [1]. Moreover, adsorption of PM containing sulphur compounds on copper and its alloys locally corrodes the metals.

Materials and Methods

Ag NPs were sputter-deposited onto four kinds of substrates: transparent SiO2, Si, highly oriented pyrolytic graphite (HOPG) and polyethylene naphthalate film (PEN; Teijin, Tokyo, Japan). These samples are referred to as Ag NPs/SiO2, Ag NPs/Si, Ag NPs/HOPG and Ag NPs/PEN, hereafter. A desktop rotary-pumped sputter coater, SC MKII, developed by Sanyu Electron Co. Ltd., Tokyo, Japan, was used with a silver sputtering target (purity: 99.8%). The pressure of the Ar gas was ~1 Pa, and the current was typically 7 mA at a voltage of 1.2 kV during the sputter deposition, with a 30 s duration. The samples became gradually darker immediately after the deposition. Therefore, the samples were aged in vacuum (<7.0 × 10^−5 Pa) for 7 days prior to air exposure, in order to stabilize the Ag NPs, e.g., their morphology and crystallinity. The samples were stored at different places in our laboratory, where the illumination light was almost always turned off: on a table (in ambient air), near a ventilation fan, inside a clean desiccator (class 100 of the cleanroom classification) made of acrylic resin, and inside a conventional desiccator (referred to as the "metal desiccator") made of stainless steel. All the Ag NP samples of Ag NPs/SiO2, Ag NPs/Si, Ag NPs/HOPG and Ag NPs/PEN were subjected to the same conditions.
The Ag NPs/SiO2 samples were used for visual confirmation of chromatic changes as well as for ultraviolet-visible (UV-Vis) measurements. The color of the as-prepared Ag NPs/SiO2 sample (aged in vacuum for 7 days) was maroon, as shown in Figure 1a. UV-Vis extinction spectra were taken in standard transmission geometry in the wavelength range of 200-800 nm.
Figure 1. (a) A photograph of the as-prepared Ag/SiO2 sample; (b) an atomic force microscopy (AFM) image of the as-prepared Ag/Si sample that was aged in vacuum for 7 days.
The Ag NPs/HOPG samples were used to analyze chemical compositions and bo at Ag NP surfaces using Rutherford backscattering spectrometry (RBS) and X-ray ph electron spectroscopy (XPS). RBS was conducted with 2 MeV-4 He ions produced fro single-ended Van de Graaff (VdG) accelerator of Hiroshima University. Backscattered ions were detected with a surface barrier detector placed at an angle of 165 with res to an incident beam. RBS analysis also can provide the areal density of Ag atoms in samples. A typical areal density was estimated to be 1.6 × 10 16 Ag atoms/cm 2 . XPS u Mg Kα radiation (hν = 1253.6 eV) was performed with a JEOL 9010 X-ray spectrom (JEOL Ltd., Akishima, Japan). The Ag NPs/Si samples were fabricated to observe morphology of Ag NP aggregates with atomic force microscopy (AFM). In the AFM observation, the dynamic force mode, corresponding to the tapping mode, of the Nanonavi Station manufactured by SII Nanotechnology Inc., Chiba, Japan was employed with a Si cantilever, SI-DF20, with a tip radius of curvature less than 10 nm. The lateral sizes of individual Ag NPs in the as-prepared Ag NPs/Si sample (aged in vacuum for 7 days) were 20-30 nm as shown in Figure 1b. The root mean square (RMS) surface roughness of the as-prepared Ag NPs/Si was estimated to be 2.1 nm.
The Ag NPs/HOPG samples were used to analyze chemical compositions and bonds at the Ag NP surfaces using Rutherford backscattering spectrometry (RBS) and X-ray photoelectron spectroscopy (XPS). RBS was conducted with 2 MeV 4He ions produced from a single-ended Van de Graaff (VdG) accelerator of Hiroshima University. Backscattered 4He ions were detected with a surface barrier detector placed at an angle of 165° with respect to the incident beam. RBS analysis can also provide the areal density of Ag atoms in the samples; a typical areal density was estimated to be 1.6 × 10^16 Ag atoms/cm2. XPS using Mg Kα radiation (hν = 1253.6 eV) was performed with a JEOL 9010 X-ray spectrometer (JEOL Ltd., Akishima, Japan). The Ag NPs/PEN samples were prepared to detect heavy elements adsorbed on the Ag NP surfaces using particle-induced X-ray emission (PIXE). Beams of 2 MeV protons produced from the VdG accelerator and a Si(Li) detector with an appropriate filter made of polyethylene terephthalate were used for the PIXE analysis.

Results

Figure 2 shows the photographs of Ag NPs/SiO2 samples stored in various environments. All the Ag NPs/SiO2 samples were aged in vacuum for 7 days prior to air exposure. The color of the as-prepared Ag NPs/SiO2 sample was maroon, as shown in Figure 1a and the insets in each photograph in Figure 2. The color faded gradually for the sample stored in ambient air; the chromatic change from maroon to burgundy was clearly recognizable after 14 days. For the Ag NPs/SiO2 stored in the clean desiccator, the color was still light violet even after 56 days. For the Ag NPs/SiO2 stored in the metal desiccator, the color became terracotta after 14 days and almost transparent after 56 days. The color of the Ag NPs/SiO2 stored near the ventilation fan faded quickly, even within 14 days; this sample looked almost transparent after storage for 56 days.

The chromatic changes can be examined in a quantitative way using optical extinction spectroscopy. Figure 3 shows the optical extinction spectra in the wavelength range 200-800 nm obtained from the four kinds of Ag NPs/SiO2 samples presented in Figure 2. All the Ag NPs/SiO2 samples were aged in vacuum for 7 days prior to air exposure. After the aging in vacuum, the LSPR wavelength and band intensity of the Ag NPs/SiO2 stored in ambient air were 505 ± 6 nm and 0.38 ± 0.02, respectively. The intensity of the LSPR band located at 400-700 nm gradually weakened with ambient air exposure.
For the Ag NPs/SiO2 stored in the clean desiccator, the rate of LSPR intensity decrease was lower than that for the sample stored in ambient air; the intensity of the LSPR band was maintained at 0.16 even after 56 days. For the Ag NPs/SiO2 stored in the metal desiccator, the LSPR band shifted drastically toward shorter wavelengths. The most remarkable decrease in the LSPR band intensity was observed for the sample stored near the ventilation fan. The reproducibility of the chromatic changes and optical extinction spectra of each Ag NPs/SiO2 sample was confirmed.

Figure 4b-e show the RBS spectra of Ag NPs/HOPG after air exposure for 14 days and 44 days. For the Ag NPs/HOPG stored in ambient air, in the clean desiccator and in the metal desiccator, N, O and S were detected. In addition to these elements, a small amount of Cl was detected on the Ag NPs/HOPG stored in ambient air for 44 days. In the RBS spectra of the Ag NPs/HOPG stored near the ventilation fan, peaks corresponding to contaminant elements such as N, O, S and Cl were found. In addition to the peaks assignable to such contaminants, there are some peaks, for example at energies of 1.64 MeV (in Figure 4c) and 1.85 MeV (in all the spectra), that cannot be identified by RBS because of its poor mass resolution for heavy elements. Therefore, we tried to detect trace amounts of heavy element impurities adsorbed on the Ag NPs by PIXE analysis using proton beams. Figure 5 shows the PIXE spectra of the Ag/PEN samples stored in various environments.
Very weak peaks at 3.61 and 3.84 keV, overlapped with background signals, correspond to Sb Lα1 and Lβ1 X-rays, respectively; these appear even for a bare PEN, indicating that the Sb comes from the foil. Two intense peaks at 22.2 and 24.9 keV correspond to Ag Kα1 and Ag Kβ1 X-rays, respectively. Unfortunately, impurity elements heavier than Ag, except for Sb, were not detected by the proton-PIXE. In Figure 5c, several peaks, e.g., Cr Kα1 (5.41 keV), Fe Kα1 (6.40 keV) and Ni Kα1 (7.47 keV) X-rays, originated in extrinsic impurities adsorbed on the Ag NPs. These impurities coincide with the constituent elements of the metal desiccator. Furthermore, in Figure 5c,d, the Br Kα1 X-ray was observed at an energy of 11.92 keV, indicative of the adsorption of Br on the Ag NPs.
Figure 6a displays the narrow-scan XPS C 1s spectra of Ag NPs/HOPG after air exposure for 14 days. The C 1s binding energy (BE) for the Ag NPs/HOPG stored in the metal desiccator was higher than that for the other samples, as can be seen in Figure 6a. For the sample stored in the metal desiccator, an intense N 1s peak was also clearly observed, as shown in Figure 6b. The higher C 1s BE and the intense N 1s peak indicate that CN and/or HCN compounds form on the Ag NP surfaces [27]. Figure 6c shows the narrow-scan Ag 3d spectra of Ag NPs/HOPG. As demonstrated by a previous XPS study [28], an Ag 3d5/2 BE of 368.6 eV is assigned to Ag2S. Since the Ag 3d binding energies for metallic Ag and Ag compounds such as AgCl, Ag2S, AgO and Ag2O are all 367.9 ± 0.4 eV [29][30][31], it is difficult to assign each peak. In XPS analysis, the nature of Ag can instead be clearly seen in the 4d valence band spectra, as shown in Figure 6d. The valence band spectrum of the Ag NPs/HOPG stored in the metal desiccator became very sharp compared to that of the as-prepared Ag NPs/HOPG. The band widths, originating from spin-orbit splitting, for the as-prepared sample and the Ag NPs/HOPG stored in the metal desiccator were 3.3 eV and 1.9 eV, respectively. In contrast, the spectral changes for the samples placed in ambient air, inside the clean desiccator and near the ventilation fan were rather small, but the band widths for these three samples were slightly narrower than that for the as-prepared sample. The observed band narrowing is evidence that Ag reacts with the outermost layer of contaminants to partly form compounds such as AgCl, Ag2S and AgCN. The band widths of the Ag NPs/HOPG stored in ambient air, in the clean desiccator and near the ventilation fan were 3.1 eV, 3.2 eV and 2.8 eV, respectively, after 14 days. Further band narrowing was recognized in the valence band spectra of the Ag NP samples stored for 56 days (not shown here).

Figure 7 shows the AFM images of the Ag NPs/Si samples stored in various environments for 14 days. The surface morphology of the Ag NP samples after air exposure differed in roughness from that of the as-prepared Ag NPs. Particle coarsening as a result of coalescence was observed in all the samples. The RMS surface roughness of the Ag NPs/Si stored in ambient air, in the clean desiccator, in the metal desiccator and near the ventilation fan was 3.6 nm, 3.5 nm, 6.2 nm and 4.4 nm, respectively. The most roughened surface was produced on the sample stored inside the metal desiccator, and for the sample stored in the clean desiccator the change in roughness was found to be smallest.
In addition, worm-like aggregates of ~150 nm in length and ~50 nm in width were observed at the surface of the sample stored near the ventilation fan.
Discussion
The present work has shown that the adsorbed impurities and the color depend on the environment where the Ag NPs/SiO2 were stored for a couple of weeks. This suggests that environmental substances have an effect on the change in color of Ag NPs/SiO2. Conversely, the chromatic change of Ag NPs/SiO2 is indicative of environmental substances, enabling one to check the quality or cleanliness of the environment. In this section, the chromatic changes of Ag NPs/SiO2 are discussed for each sample in terms of compositional and morphological changes, and then the quality of the environment in which the Ag NPs/SiO2 were stored is also dealt with. For this purpose, chromatic changes measured by LSPR characteristics, including resonant wavelength and intensity, are represented in Figure 8. The LSPR characteristics of the four samples changed largely within 14 days. In particular, the normalized LSPR intensity for the sample placed near the ventilation fan decreased to 0.4 at an elapsed time of 14 days, as shown in Figure 8a. For this sample, the most drastic positive shift (+55 nm) was observed at that time, as shown in Figure 8b. The compositional changes in the Ag NPs within 14 days, obtained from Table 1, are also shown in Figure 9. The largest increases in the impurity elements O (0.11 to 0.27), S (0.04 to 0.14) and Cl (0 to 0.06) are found in the atomic ratios for the sample placed near the ventilation fan. Next, the chromatic changes shown in Figure 8 will be discussed along with the compositional (Figure 9) and morphological (Figure 7) features of each Ag NPs/SiO2 sample.

Firstly, the chromatic changes in the sample placed near the ventilation fan are discussed in comparison with those in the samples placed in ambient air and in the clean desiccator. The normalized LSPR band intensity of this sample decreased drastically compared to the other samples, as shown in Figure 8a. In addition, the LSPR band shifted quickly toward a longer wavelength, as can be seen in Figure 8b. In this sample, large worm-like aggregates due to particle coarsening were observed, and relatively large amounts of S and Cl atoms were detected. In a previous study [32], Cl adsorption on clean Ag surfaces led to drastic morphological changes due to the formation of silver chloride (AgCl) islands. A similar situation would occur on the Ag NPs stored near the ventilation fan; in our case, the formation of AgCl caused the particle coarsening. The large amounts of the extrinsic impurities S and Cl as well as the particle coarsening resulted in drastic changes in the LSPR intensity and wavelength. Thus, in an atmospheric environment containing S and Cl, the color of the Ag NPs turned bluish and then faded quickly, as shown in Figure 2. Conversely, bluish coloring followed by quick discoloring suggests significant amounts of S and Cl in the environment. Of these elements, Cl would be mainly carried by dust such as particulate matter (PM), because Cl was not detected in the samples in the clean and metal desiccators. In fact, the ammonium salt NH4Cl is a common component of PM, and Qu et al. [33] showed that deposition of NH4Cl leads to the atmospheric corrosion of zinc. The atmospheric environment of this sample is essentially the same as ambient air. It is probable that the one-way flow of air through the ventilation fan stimulates the adsorption of such extrinsic impurity elements on the Ag NPs, resulting in the fastest discoloring.
Thus, relatively quick monitoring of environmental quality can be achieved by forcing a one-way flow of air with a ventilation fan.
Secondly, the chromatic changes in the Ag NPs stored in the metal desiccator are discussed. For this sample, a remarkable blue-shift of the LSPR band was observed following a slight red-shift. The color of this sample became reddish after 14 days, differing from that of the other samples. The atomic ratio N/Ag for this sample increased to 0.66 at 14 days, remarkably higher than that for the other samples. As indicated by XPS and RBS, Ag-CN and/or Ag-HCN compounds formed on the Ag NPs. It is probable that the origin of the chromatic change is such compound formation, followed by the surface roughening revealed by AFM. More importantly, the reddish coloring indicates the presence of CN and/or HCN inside the metal desiccator. Considering that cyanide is used in stainless steel manufacturing [34], there is a possibility that it is released from the components of the metal desiccator.
Finally, the mechanism of the change in optical properties of Ag NPs/SiO2 is discussed briefly to understand the behavior shown in Figure 8. As described above, metallic Ag NPs exhibit strong LSPR in the visible region. The formation of compound layers, e.g., Ag2S, AgCl and AgCN, on Ag NP surfaces exposed to air reduces the volume of the metallic Ag NPs. The reduction in volume of the metallic Ag NPs enlarges the gap between adjacent NPs. The volume reduction and the gap widening result in a weakening of the LSPR intensity (referred to as "effect 1", hereafter) and a blue-shift of the LSPR wavelength (effect 2) [35]. For a single metallic NP covered with a thin layer with a high refractive index, the LSPR position shifts toward longer wavelengths (effect 3) together with an enhancement in LSPR intensity (effect 4), depending on the layer's refractive index as well as its thickness [36]. In fact, McMahon et al. [23] calculated the Mie scattering efficiency of a single Ag NP with various Ag2S thicknesses and obtained results concerning effect 1 and effect 3, i.e., reduced intensity and red-shift of the LSPR with increasing Ag2S thickness. The calculated changes are consistent with the LSPR changes observed for the sample placed near the ventilation fan. The LSPR changes observed for the sample stored in the metal desiccator were unique in the LSPR wavelength, i.e., a blue-shift, which can be explained by effect 1 and effect 2. In this sample, the lesser influence of effect 3 would be due to the lower refractive index of AgCN (1.7 [37]) compared with those of AgCl (2.0 [38]) and Ag2S (2.2 [39]). The LSPR wavelength shifts as a function of refractive index were experimentally obtained by Sugawa et al. [40] for Ag NPs. It can, therefore, be considered that the refractive index of the overlayer formed on the Ag NP surfaces determines the direction of the LSPR wavelength change, red-shift or blue-shift.
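To illustrate effect 3 in a self-contained way, the sketch below evaluates the quasi-static dipole extinction of a small Ag sphere embedded in media of increasing refractive index; the Drude parameters are rough textbook values and the model ignores particle interactions and finite shell thickness, so it is a qualitative illustration of the red-shift trend, not a reproduction of the measured spectra.

```python
import numpy as np

# Quasi-static dipole response of a small metal sphere:
# alpha ∝ (eps - eps_m) / (eps + 2*eps_m), extinction ∝ E * Im(alpha).
EPS_INF, WP, GAMMA = 5.0, 9.0, 0.02  # crude Drude parameters for Ag, in eV

def drude_eps(e_ev):
    """Drude dielectric function of Ag (assumed parameters)."""
    return EPS_INF - WP**2 / (e_ev**2 + 1j * GAMMA * e_ev)

def lspr_peak_wavelength(n_medium):
    energies = np.linspace(1.5, 4.5, 2000)        # photon energies, eV
    eps_m = n_medium**2
    alpha = (drude_eps(energies) - eps_m) / (drude_eps(energies) + 2 * eps_m)
    extinction = energies * np.imag(alpha)        # particle radius factored out
    return 1239.84 / energies[np.argmax(extinction)]  # eV -> nm

for n in (1.0, 1.3, 1.7, 2.2):  # from vacuum up to an Ag2S-like index
    print(f"n = {n:.1f} -> LSPR peak near {lspr_peak_wavelength(n):.0f} nm")
```

Running this shows the dipole resonance moving steadily to longer wavelengths as the surrounding index rises, which is the trend invoked above to separate the red-shifting (AgCl, Ag2S) and weakly shifting (AgCN) overlayers.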
Apart from the chromatic change, it should be noted that Br was detected on the Ag NPs inside the metal desiccator, indicating that Ag NPs possess the ability to adsorb very toxic Br vapor. Ag NPs can, therefore, be used as adsorbents to detect trace amounts of toxic Br as well as cyanide because of their high reactivity.
Conclusions
The origin of the chromatic changes was examined from the viewpoints of compositional and morphological changes in Ag NPs/SiO2 stored in various environments, with the aim of developing a small and cheap device for monitoring the cleanliness of the atmosphere. It was demonstrated that bluish coloring followed by quick discoloring of Ag NPs/SiO2 was induced by the adsorption of significant amounts of S and Cl as well as particle coarsening due to the formation of AgCl, as observed for the sample stored near the ventilation fan. Of these elements, Cl was probably carried by particulate matter containing Cl compounds, e.g., NH4Cl, considering that Cl was only detected in the unenclosed samples. A remarkable blue-shift of the LSPR band, i.e., reddish coloring, was observed for the sample stored in the metal desiccator, resulting from the formation of Ag-CN and/or Ag-HCN compounds as well as surface roughening. The chromatic change to a reddish color indicates the presence of CN and/or HCN molecules in the environment. Furthermore, the discoloration rate, defined as the time required to become transparent, would be sensitive to the concentration of the material to be adsorbed, meaning that the discoloration rate is a measure of the cleanliness of the air.
As expected from the present work, Ag NPs/SiO2 will be a promising material for checking the quality of the environment. Long-term monitoring is more important than quick sensing for substances that accumulate in the human body and in electronic materials, such as gaseous pollutants and PM. Our findings provide a pathway to develop a device that can easily check the cleanliness of the air by monitoring chromatic changes in Ag NPs/SiO2 with the naked eye. In addition, a fiber-optic sensing system based on optical absorption changes, similar to the chemical gas sensor proposed by Chen et al. [41], can be developed using Ag NPs/SiO2 if quick sensing rather than monitoring is required.
Hybrid Optimization Method for Correcting Synchronization Errors in Tapping Center Machines
A hybrid method is proposed for optimizing rigid tapping parameters and reducing synchronization errors in Computer Numerical Control (CNC) machines. The proposed method integrates uniform design (UD), regression analysis, the Taguchi method, and a fractional-order particle swarm optimizer (FPSO) to optimize rigid tapping parameters. Rigid tapping parameters were laid out in a 28-level uniform layout for the experiments in this study. Since the UD method provided a layout with uniform dispersion in the experimental space, the uniform layout provided representative experimental points. Next, the 28-level uniform layout results and regression analysis were used to obtain the significant parameters and a regression function. To obtain the parameter values from the regression function, FPSO was selected because its diversity and algorithmic effectiveness are enhanced compared with PSO. The experimental results indicated that the proposed method could obtain suitable parameter values, and the best parameter combination found by FPSO outperformed the non-systematic method in comparisons. Next, the best parameter combination was used to optimize actual CNC machining tools during the factory commissioning process. From the commissioning perspective, the proposed method rapidly and accurately reduced the synchronization error from 23 pulses to 18 pulses and the processing time from 20.8 s to 20 s. In conclusion, the proposed method reduced the time needed to tune factory parameters for CNC machining tools, increased machining precision, and decreased synchronization errors.
Introduction
Manufacturing internally threaded mechanical parts, particularly parts with numerous threads, requires high machining precision. Fabrication of threads using high-precision molds is usually the final step in the manufacturing process. Therefore, manufacturers require a reliable method of maintaining high quality in machining internal threads, especially in mass-produced mechanical components with internal threads. Tapping accuracy largely depends on whether the feed movement of the tapping axis is well synchronized with the rotation of the spindle during the tapping cycle. The index used to observe the coordination of the tapping axis feed with the spindle rotation is the synchronization error. Therefore, to obtain a high-precision internal thread by rigid tapping, the goal is to minimize the synchronization error.
For servo parameter commissioning, Lee et al. [1] proposed an iterative measurement and contour performance simulation. Servo parameters were adjusted to reduce contour error, and the Kreuz-Gitter-Meβsystem (KGM) method was used to verify the contour accuracy. Yeh et al. [2] proposed "learning automata," an automatic parameter adjustment method for improving parameter convergence and improving efficiency in adjusting control parameters. To optimize the synchronous motion between the spindle and the tapping axis, Yeh et al. [3] proposed several linear and nonlinear control design techniques for improving control performance and synchronization, including (1) cross-coupling control, (2) nonlinear friction compensation, and (3) interference observation. Servo gain was optimized by the command control method. The authors achieved synchronization accuracy within 10 μm at a spindle speed of 6000 rpm by applying these techniques. Lu et al. [4] used a fuzzy adaptive proportional-integral-derivative (PID) controller to control Computer Numerical Control (CNC) machine tools used in rigid tapping. The proposed control method can be applied without establishing a model of the controlled object, and the online-adjustable PID parameters are automatically tuned by the fuzzy inference method. In another study, Ishizaki et al. [5] proposed a cross-coupled controller for accurately synchronizing the motion of a servo system driven by dual motors. The proposed cross-coupled controller compensates for differential positioning errors between dual servo drives by modifying reference positions and speed commands. Biris et al. [6] performed experiments to compare the effectiveness of eliminating positioning and contour errors. The authors also proposed an adjustment method based on a mathematical model and evaluated its performance in reducing positioning and contour errors in CNC machine tools. Chen et al. [7] proposed an iterative learning control (ILC) algorithm for reducing synchronization error in rigid tapping.
In recent years, additive manufacturing (AM) technology has been proposed as suitable for manufacturing critical components. Additive manufacturing is completely different from traditional subtractive processing: it builds 3D shapes by depositing and stacking polymer, metal, or ceramic materials, a method that can quickly and flexibly produce small batches of diverse products, significantly reducing the time from the design stage to mass production and considerably improving the utilization rate of materials [8][9][10][11][12]. At present, additive manufacturing can be used to produce energy, aerospace, or biomedical parts, providing high-strength and lightweight products. The machine tool manufacturer MAZAK's hybrid multi-tasking machine tools can perform laser additive manufacturing and 5-axis machining processes on the same machine, demonstrating the possibilities and characteristics of future advanced techniques [13].
The learning control modifies the z-axis and spindle commands to optimize the synchronicity between the z-axis output responses and the spindle. Their experiments revealed that 10 runs of the ILC algorithm reduced the synchronization error from 0.26 mm to 2.6 × 10^−13; that is, ILC learning control can substantially reduce synchronization error. Chen et al. [14] proposed intelligent computer-aided process planning (i-CAPP); when the complexity of the machining process increases, i-CAPP's integrated intelligence functions work together with the domain expert's procedures. Ma et al. [15] modeled the coupling between the radial and axial vibrations and the dynamic cutting forces along the tapping path; the radial and axial chatter stability is predicted separately in the frequency domain to assess the stability of the tapping process.
In the past, when the machining process was executed in the factory, the teeth or the tapping tools occasionally broke at different tapping speeds, and it was usually necessary to wait for the equipment manufacturer to inspect the equipment to obtain the best parameters. The current study developed a novel method of adjusting rigid tapping electrical control parameters and explored the effects of the adjusted parameters on the thread quality obtained by tapping internally threaded parts. The proposed method combines uniform design (UD) [16][17][18][19], regression analysis [20][21][22][23], a fractional-order particle swarm optimizer (FPSO) [24][25][26][27][28], and the Taguchi method [29][30][31][32][33][34][35][36] to analyze the effects of rigid tapping parameters on thread quality and to determine the key parameters and the optimal parameter values. Application of the proposed method in an actual factory process verified its effectiveness for improving thread tapping quality. With the method provided in this article, the processing industry can experimentally obtain the minimum-error parameter combination at different tapping speeds, avoiding tool breakage and workpiece damage when rigid tapping is performed at the end of the machining process. This paper is organized as follows. The problem considered in this study is briefly described in Section 2. Section 3 briefly discusses relevant methods, including UD, regression analysis, FPSO, and the Taguchi method. Section 4 presents and discusses the experimental and simulation results. Finally, Section 5 concludes the study.
Problem Description
Machining internal threads is an essential process in mechanical parts manufacturing in various industries. For example, numerous internally threaded holes are required to manufacture printed circuit boards (PCBs) in the computer, communications, and consumer-electronics (3C) industry and to manufacture mobile phone cases in the mold industry. Holes are typically threaded in a late stage of the manufacturing process. Poorly threaded holes can severely diminish the manufactured part's quality and performance; therefore, manufacturers require a fast and reliable method for tapping internally threaded holes.
Tapping can be classified as floating or rigid. Each method uses a different tool clamping device, so each method has different synchronization requirements for the spindle and the tapping axis. Floating tapping was developed to solve the problem of synchronization between the spindle and the tapping axis. In floating tapping, the tapping tool is clamped in an elastic chuck. The advantage of floating tapping is that tapping can be achieved without complex control theory. The disadvantage, however, is that an excessively high tapping speed causes severe vibration resulting in imprecise tapping and/or disordered or broken teeth. In rigid tapping, a tap collet is typically used to clamp the tapping tool, which provides more precise teeth and enables a more efficient tapping speed than floating tapping. Rigid tapping is also much faster than floating tapping. However, synchronization between the spindle and the tapping axis must be exact. Even a slight difference can cause a breakage of the tool. Therefore, accurate synchronization of the spindle and tapping axis is essential. Most research in rigid tapping control has focused on minimizing the synchronization error between the tapping axis movement and the spindle to obtain internal threads with high quality and accuracy.
To obtain the required screw pitch specifications in a rigid tapping process, the tapping axis and the spindle must be adjusted to meet the required combination of spindle speed and feed rate of the tapping axis. The relationship can be formulated as in Equation (1):

F = P × S (1)

where P is the thread pitch (unit: mm), F is the feed rate of the tapping axis (unit: mm/min), and S is the spindle speed (RPM). In a rigid tapping procedure, the spindle speed and the tapping axis feed rate must maintain a fixed proportional relationship according to the required pitch specifications. Because the pitch P depends on the ratio of the tapping axis feed rate to the spindle speed, a mismatch between the two causes an excessive synchronization error and an imprecise pitch P. Therefore, the controller has a command-type compensation architecture: it simultaneously commands the spindle and the tapping axis and compensates for positioning errors.
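As a worked example, tapping the M6 × 1 thread used later in this study (P = 1 mm) at a spindle speed of S = 3000 RPM requires a tapping axis feed rate of F = 1 × 3000 = 3000 mm/min, matching the cutting conditions used in the experiments below; any momentary departure from this ratio during the cycle appears directly as synchronization error.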
Additionally, the FANUC 31iMA controller used in this experiment adopts a control method in which the tapping axis follows the spindle, and it identifies the synchronization errors between the spindle and the tapping axis. Figure 1 shows that the synchronization error is an essential indicator in rigid tapping because it affects the shape and precision of the internal threads.
Uniform Design and Regression Analysis
The UD method proposed by Fang and Wang [17,18] was used to design and plan experiments in which the test points were evenly distributed within the test range. A uniform layout is expressed as Un(n^s), where U is the uniform layout, n is the level number, and s is the factor number. Table 1 shows the distribution of a U6(6^6) uniform layout. In the UD method, data collected in experiments were used for model building, and data analysis was performed using regression analysis. Regression analysis is a statistical method for analyzing relationships between input variables and output variables. Its main purposes are understanding the relationships among independent and dependent variables and then building a mathematical model for predicting dependent variables. Depending on the complexity, a regression analysis can be classified as simple regression or multiple regression. Simple regression uses a single independent variable to predict a dependent variable, and multiple regression explores how a dependent variable is related to multiple independent variables.
The unary linear regression equation expresses the relationship between a dependent variable and an independent variable:

y = β_0 + β_1 x + ε

where β_0 is a constant, β_1 is a regression coefficient, and ε is an error. A multiple linear regression equation expresses the relationship of a dependent variable to multiple independent variables:

y = β_0 + β_1 x_1 + β_2 x_2 + β_3 x_3 + ... + β_m x_m + ε

where β_0 is a constant, β_1, β_2, β_3, ..., β_m are regression coefficients, and ε is an error. Regression analysis is often used for the interpretation of experimental data and for prediction. For interpretation, the regression equation is calculated from the obtained experimental data; the equation is then used to determine the contribution of each independent variable to the dependent variable. A common application of regression analysis is determining the significance of an independent variable to the dependent variable. Since the regression equation is linear, regression analysis can also be used for prediction, i.e., to estimate how a change in an independent variable will change the dependent variable. That is, when a regression model obtained from regression analysis is used to predict a future change in a dependent variable, it can also be used to determine how to compensate for or respond to the change.
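As a minimal sketch of the fitting step, the least-squares code below estimates the coefficients β of a multiple linear regression from uniform-layout runs; the three factor columns and the error values are toy numbers for illustration, not the data of Table 7.

```python
import numpy as np

# Toy design matrix: each row is one uniform-layout run, each column a factor
# (illustrative stand-ins for, e.g., x2, x3, x4); y is the measured
# synchronization error for that run (hypothetical values).
X = np.array([[1200.0, 400.0, 50.0],
              [1400.0, 300.0, 70.0],
              [1000.0, 500.0, 60.0],
              [1600.0, 450.0, 40.0],
              [1100.0, 350.0, 80.0]])
y = np.array([23.0, 19.5, 25.1, 18.2, 24.0])

# Prepend a column of ones so beta[0] plays the role of the constant term.
A = np.column_stack([np.ones(len(X)), X])
beta, residuals, rank, _ = np.linalg.lstsq(A, y, rcond=None)

print("estimated coefficients:", beta)
print("fitted errors:", A @ beta)
```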
Fractional-Order Particle Swarm Optimization (FPSO)
The FPSO was first proposed by Solteiro Pires et al. [27] and was derived from PSO by introducing the Grünwald-Letnikov fractional-order derivative to enhance its diversity and algorithmic effectiveness. The PSO algorithm [37][38][39][40][41] was inspired by the group behavior of foraging birds and seeks the best solution in the solution space by maximizing or minimizing a fitness value, analogous to the process in which foraging birds maximize the amount of food they consume. The positions and velocities evolve as their optimal parameters are updated after each iteration. The calculation formulas for PSO are as follows:

V_i(k + 1) = w V_i(k) + c_1 r_1 (P_i^l(k) - p_i(k)) + c_2 r_2 (P_g(k) - p_i(k))
p_i(k + 1) = p_i(k) + V_i(k + 1)

where i = 1, 2, ..., m (m is the number of particles); k is the number of iterations; w is the inertial weight; V is the velocity; c_1 and c_2 are the individual and group learning parameters, respectively; p_i(k) is the position vector; P_i^l(k) is the best local solution in the current iteration; P_g(k) is the best global solution in the current iteration; and r_1 and r_2 are random numbers from 0 to 1.
Recently, many studies have enhanced the performance and effectiveness of PSO, including adjusting strategies [42,43], particle grouping [44][45][46], inertia weights [47][48][49], and other optimization techniques [50][51][52][53]. However, PSO has notable drawbacks, such as a lack of robustness and a tendency to fall into local optima. Additionally, a fractional-order derivative describes the real world better than an integer-order one. Therefore, Solteiro Pires et al. [27] incorporated the fractional-order derivative into PSO to improve its performance and effectiveness, calling the result FPSO. Later, Gao et al. [28] introduced a nonlinear time-varying inertia weight into FODPSO (fractional-order Darwinian PSO) as a further improvement. In this paper, the inertial weight was applied to FPSO, and the velocity update, truncating the Grünwald-Letnikov derivative after four terms, is as follows:

V_i(k + 1) = λ V_i(k) + (1/2) λ(1 - λ) V_i(k - 1) + (1/6) λ(1 - λ)(2 - λ) V_i(k - 2) + (1/24) λ(1 - λ)(2 - λ)(3 - λ) V_i(k - 3) + c_1 r_1 (P_i^l(k) - p_i(k)) + c_2 r_2 (P_g(k) - p_i(k))

where λ is the fractional order of the derivative.
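A minimal sketch of an FPSO iteration follows, truncating the Grünwald-Letnikov derivative to a four-term velocity memory as in Solteiro Pires et al. [27]; the bounds, coefficients, and test function are illustrative assumptions, not the settings tuned for the tapping controller.

```python
import numpy as np

rng = np.random.default_rng(0)

def fpso_minimize(f, dim, n_particles=20, iters=100, lam=0.6,
                  c1=1.5, c2=1.5, lo=-5.0, hi=5.0):
    """Fractional-order PSO sketch: the velocity update keeps a four-term
    memory derived from the truncated Grunwald-Letnikov derivative."""
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((4, n_particles, dim))      # v[0]=V(k), ..., v[3]=V(k-3)
    pbest = x.copy()
    pbest_f = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_f.argmin()].copy()
    # Binomial-like weights of the fractional derivative of order lam.
    w = [lam,
         lam * (1 - lam) / 2,
         lam * (1 - lam) * (2 - lam) / 6,
         lam * (1 - lam) * (2 - lam) * (3 - lam) / 24]
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        v_new = (sum(wi * vi for wi, vi in zip(w, v))
                 + c1 * r1 * (pbest - x) + c2 * r2 * (g - x))
        v = np.roll(v, 1, axis=0)            # shift the velocity history
        v[0] = v_new
        x = np.clip(x + v_new, lo, hi)
        fx = np.apply_along_axis(f, 1, x)
        improved = fx < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = fx[improved]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(pbest_f.min())

# Example: minimize the sphere function in five dimensions.
best_x, best_f = fpso_minimize(lambda z: float(np.sum(z**2)), dim=5)
print(best_x, best_f)
```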
Taguchi Method
The purpose of the Taguchi method [29][30][31][32][33][34][35][36] is to achieve a design process in which product quality is stable and insensitive to various noises in the production process. In the product design process, functional relationships among quality, cost, and benefit are analyzed to maximize product quality and minimize production costs. A large number of design variables can be studied in a small number of experiments. The best combination of design variables can be expressed by the orthogonal table and the signal-to-noise ratio (SNR). The basic concept of the Taguchi method is to maximize the performance index by using the orthogonal table to perform experiments; using the orthogonal table reduces experiment time and increases convergence speed. In the Taguchi method, orthogonal arrays and the SNR are tools used to optimize the design of engineering parameters in the experimental plan, to reduce variation in important quality characteristics, and to achieve the goal of total cost reduction. Table 2 shows an example of an L9(3^4) orthogonal array that accommodates a maximum of 4 factors with 3 levels per factor.

Table 2. The L9(3^4) orthogonal array.
The SNR was calculated for the quality characteristic, and the goal in this study was the smaller-the-better characteristic. The quality characteristic Y is a non-negative value with a smaller-the-better characteristic, and Y can be equal to zero. The SNR of the smaller-the-better characteristic is defined as

SNR = -10 log10[(1/n) Σ_{i=1}^{n} y_i^2]

where n is the number of measurements and y_i is the i-th measured value of the quality characteristic.
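A one-function sketch of the smaller-the-better SNR computation follows; the replicate values are hypothetical, not measurements from this study.

```python
import numpy as np

def snr_smaller_the_better(y):
    """Taguchi smaller-the-better SNR: -10 * log10(mean of y squared)."""
    y = np.asarray(y, dtype=float)
    return -10.0 * np.log10(np.mean(y**2))

# Hypothetical synchronization-error replicates (pulses) for one run.
print(snr_smaller_the_better([23, 21, 22, 24, 23]))
```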
Experimental Planning and Methods
Drilling and tapping processes are used in the manufacturing processes for diverse products ranging from cylinder engines in the automobile industry to mobile phone molds in the consumer electronics industry. Since the manufacture of these products often requires numerous drilling and tapping procedures, determining the parameter combination that provides the best tapping accuracy is needed to achieve high product quality in the postprocess. This study's objective was to find the parameter combination that minimizes rigid tapping synchronization error without increasing manufacturing time. Figure 2 shows that the experiments were performed in an NDV series machine tool (Yeong-Chin machinery industries Co. Ltd., Taichung, Taiwan) with a FANUC 31iMA controller. The material used for the through-hole processing experiments was polymethyl methacrylate (PMMA) due to its high plasticity, high hardness, and low brittleness. The PMMA was considered an excellent experimental material because it enabled observation of broken or irregular cutters' effects. Therefore, even a significant synchronization error would be unlikely to damage the tapping tool in the tapping axis. The phenomena of stripped thread or thread damage can still be observed in PMMA. An M6 × 1 tapping tool was used in the experiments. The cutting conditions were as follows. Spindle speed, tapping axis feed rate, and tapping depth were set to 3000 RPM, 3000 mm/min, and 50 mm, respectively. The experiment was performed five times at these settings. Figure 3 shows the processing path. The experimental input was the factor that affected synchronization error, and the output was synchronization error. Input factors were factors that affected synchronization error, including feedforward speed coefficient (x1, unit: 0.01%), rigid tapping speed loop proportional gain (x2, unit: As), motor excitation delay time (x3, unit: microsecond), rigid tapping speed loop integral gain (x4, unit: microsecond)), feed position coefficient (x5, unit: 0.01%) and tapping axis position gain (x6, unit: 0.01s-1) [54][55][56][57]. These parameters were set on the FANUC controller system. Figure 4 shows the experimental procedure. The uniform distribution characteristic of UD made each experimental combination meaningful, which substantially reduced the required number of experiments. Table 3 shows the U *28 (288) uniform layout selected for this study's experiments. First, input factors were entered in the U*28 uniform layout. Synchronization error was the quality characteristic, and the response value was obtained by the-smaller-the-better characteristic. Table 4 is the table for selecting column numbers according to the experiment number. Table 4 shows that columns 1-7 were used to construct the U *28 (286) uniform layout, and Table 5 shows the resulting layout. Table 6 shows the 28 levels for each factor. A CCD electronic image microscope was used to observe the results (thread pitch and workpiece shape) of the tapping experiments performed using the factor combinations in Tables 5 and 6. Table 7 displays the experimental results. The experimental input was the factor that affected synchronization error, and the output was synchronization error. 
According to Equation (9), the key factors in rigid tapping synchronization error are x₂ (rigid tapping speed loop proportional gain), x₃ (motor excitation delay time), x₄ (rigid tapping speed loop integral gain), x₅ (feed position coefficient), and x₆ (tapping-axis position gain). Next, FPSO was used to find the global best combination of values for factors x₂, x₃, x₄, x₅, and x₆ and the corresponding global best output value. The rigid tapping command for the FANUC 31iMA controller used in the experiments was as follows: M29 S__; G84 X__ Y__ Z__ R__ F__ K__; where M29 is the rigid tapping command, S is the spindle speed, G84 is right-hand thread tapping with M3 spindle rotation, X and Y are the hole positions on the X- and Y-axes, Z is the tapping depth from the R-plane to the Z-depth, R is the R-plane position, F is the tapping-axis feed rate, and K is the number of cycles.
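A minimal sketch of the particle swarm search over the five key factors follows. It uses a plain global-best PSO (the fuzzy adaptation that distinguishes FPSO is omitted), a placeholder objective standing in for the fitted regression model of Equation (9), whose coefficients are not reproduced here, and factor bounds read loosely from the level table; all of these are illustrative assumptions.

```python
import random

random.seed(0)

# Bounds for (x2, x3, x4, x5, x6), read loosely from the level table;
# illustrative assumptions, not the paper's exact ranges.
BOUNDS = [(40, 55), (350, 365), (245, 260), (6900, 9900), (9600, 9900)]

def objective(x):
    """Placeholder for the fitted regression model of Equation (9).

    The paper's actual coefficients are not reproduced here; this smooth
    stand-in just gives the optimizer something to minimize.
    """
    return sum(((v - lo) / (hi - lo) - 0.7) ** 2 for v, (lo, hi) in zip(x, BOUNDS))

def pso(n_particles=20, n_iters=100, w=0.7, c1=1.5, c2=1.5):
    dim = len(BOUNDS)
    pos = [[random.uniform(lo, hi) for lo, hi in BOUNDS] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                lo, hi = BOUNDS[d]
                pos[i][d] = min(max(pos[i][d] + vel[i][d], lo), hi)
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

best_x, best_val = pso()
print("best factors:", [round(v, 1) for v in best_x], "objective:", round(best_val, 4))
```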
[Table 4 (column selection by number of factors) and the first rows of the factor-level layout (Tables 5 and 6) are omitted here; the levels ran over ranges such as 40-55 for x₂, 350-365 for x₃, 245-260 for x₄, 6900-9900 for x₅, and 9600-9900 for x₆.]

Figure 6 displays the rigid tapping errors obtained under the optimal parameter combination: synchronization error was 23 pulses before adjustment and 18 pulses after adjustment. For verification, the simulation results and the actual processing results were compared before and after optimization. The optimal combination was then used in an actual thread cutting process. After the process had been performed five times, the bolt was cut in half with a face milling cutter, and the workpiece was placed under a CCD electronic image microscope to observe its shape and measure the pitch and appearance of the thread.
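For reference, the rigid tapping program used in such a verification run can be assembled by filling the M29/G84 template described above with the paper's cutting conditions; in the sketch below the hole position and the R-plane height are illustrative assumptions.

```python
def rigid_tap_block(x, y, depth_mm, r_plane_mm, spindle_rpm, feed_mm_min, cycles=1):
    """Format a FANUC rigid tapping fragment (M29 + G84) as a string.

    x, y        : hole position on the X- and Y-axes
    depth_mm    : tapping depth below the R-plane (positive number)
    r_plane_mm  : R-plane height (an assumed value below)
    """
    return (f"M29 S{spindle_rpm};\n"
            f"G84 X{x:.1f} Y{y:.1f} Z-{depth_mm:.1f} R{r_plane_mm:.1f} "
            f"F{feed_mm_min} K{cycles};")

# Paper's cutting conditions: 3000 RPM, 3000 mm/min feed, 50 mm depth.
# Hole position (10, 10) and R-plane 5 mm are illustrative assumptions.
print(rigid_tap_block(10.0, 10.0, 50.0, 5.0, 3000, 3000))
```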
The outer diameter (D1) and inner diameter (D2) of the thread were measured, and the difference between D1 and D2 was obtained. Table 14 shows that a large difference between D1 and D2 indicated well-shaped screw teeth with minimal potential for tooth collapse; conversely, a small difference indicated poorly shaped screw teeth and a high potential for tooth collapse. Screw thread pitch was also measured to determine whether disordered teeth had caused an excessive error over the same period. Table 14 shows that the proposed method obtained superior tapping results compared with the results obtained without optimizing the rigid tapping electrical control parameters.

Table 14. Difference between outer diameter (D1) and inner diameter (D2) before and after commissioning.
Conclusions
The proposed hybrid method for optimizing synchronization in rigid tapping was verified in an actual thread cutting process. Errors in synchronization of the spindle and the tapping axis were captured, and parameter values were adjusted for optimal synchronization. Rigid tapping is easily performed when the CNC machine tool used for spiral tapping is operated at the expected speed and feed rate. However, high synchronization of motion is required to achieve a high-quality tap and avoid damage to the tapping tool and the workpiece. Therefore, the UD method proposed in this study integrated regression analysis, FPSO, and the Taguchi method to optimize rigid tapping parameters and obtained excellent tapping results.
For parameters x₂, x₃, x₄, x₅, and x₆, the optimal combination of values was 55, 355, 260, 7900, and 9900, respectively. This combination reduced synchronization error from 23 pulses under the original parameters to 18 pulses, a 22% improvement in accuracy. The optimized combination also reduced processing time from 20,880 ms to 20,033 ms.
The method proposed in this study improves machining accuracy by determining the critical factors in rigid tapping synchronization error and the optimal values for those factors. Additionally, the proposed method reduces the potential for damage to the workpiece during tapping and reduces the time required for parameter commissioning. The four main findings of the experiments were as follows:
1. The proportional gain and integral gain of the spindle can be adjusted to increase its rigidity during rigid tapping.
2. Adjusting the position gain of the tapping axis and the feedforward coefficient can substantially decrease synchronization errors. Figures 7 and 8 show how position gain affects synchronization error and torque command. When position gain is large, synchronization error is small, but motor current increases. Notably, the current value during air-cut (dry-run) commissioning should not exceed 70-80%, and a reserve margin should be set to avoid overheating or overloading the motor during actual cutting. Figures 9 and 10 compare the effects of different feedforward coefficients on torque command.
3. Adjusting the time required for motor excitation and stabilization can reduce errors at the start of rigid tapping.
4. Reducing the z-axis following error improves rigid tapping synchronization. When a synchronization error occurs, reducing the z-axis following error and commissioning the proportional gain, integral gain, and position gain can improve rigid tapping precision.
Figure 6. Synchronization error waveforms in rigid tapping before (a) and after (b) commissioning.

Informed Consent Statement: Not applicable.
Data Availability Statement: The authors collected the data themselves using the proposed method for this article.
Whitening of brown adipose tissue inhibits osteogenic differentiation via secretion of S100A8/A9
Summary

The mechanism by which brown adipose tissue (BAT) regulates bone metabolism is unclear. Here, we reveal that BAT secretes S100A8/A9, a previously unidentified BAT adipokine (batokine), to impair bone formation. Brown adipocyte-specific knockout of Rheb (Rheb BAD KO), the upstream activator of mTOR, causes BAT malfunction that inhibits osteogenesis. Rheb depletion induces NF-κB-dependent S100A8/A9 secretion from brown adipocytes, but not from macrophages. In wild-type mice, age-related Rheb downregulation in BAT is associated with enhanced S100A8/A9 secretion. Either batokines from Rheb BAD KO mice or recombinant S100A8/A9 inhibit osteoblast differentiation of mesenchymal stem cells in vitro by targeting toll-like receptor 4 on their surface. Conversely, S100A8/A9 neutralization not only rescues the osteogenesis repressed in Rheb BAD KO mice but also alleviates age-related osteoporosis in wild-type mice. Collectively, our data reveal an unexpected BAT-bone crosstalk driven by Rheb-S100A8/A9, uncovering S100A8/A9 as a promising target for the treatment, and potentially prevention, of osteoporosis.
…adipocytes also showed signs of reduced "browning," as evident from weakened expression of PRDM16 and enhanced expression of Nrip1, a marker of BAT de-differentiation and whitening (Figure S4F) [32]. BAT in C57BL/6J mice shows significant "whitening" due to aging [33]. To further characterize the functional role of Rheb in BAT homeostasis, Rheb protein expression in the BAT of wild-type C57BL/6J mice was analyzed. Concomitant with significant "whitening" (Figures S5A and S5B) and reduced expression of key functional proteins of BAT (Figure S5C), interscapular BAT in aged mice (20 months old) showed a marked reduction in Rheb protein expression compared with their young counterparts (3 months old) (Figure S5C). This suggests that loss of Rheb is associated with age-dependent BAT "whitening" and dysfunction.
Rheb ablation in BAT resulted in reduced trabecular bone formation, but increased bone marrow fat in mice
Cancellous bone of the distal femur/proximal tibia of C57BL/6 mice exhibits significant age-related bone loss, regardless of sex. Thus, this strain serves as a useful model for the study of age-related osteoporosis (AOP) [34,35]. To focus on the estrogen-independent role of BAT in AOP, we selected male Rheb AD KO and Rheb BAD KO mice to evaluate the causal relationship of BAT dysfunction with AOP. Remarkably, Rheb AD KO and Rheb BAD KO mice showed markedly reduced trabecular bone mass (BV/TV), bone mineral density (BMD), trabecular thickness (Tb.Th), and trabecular number (Tb.N), but increased trabecular separation (Tb.Sp), compared with their control littermates at 1 and 4 months of age, as visualized by three-dimensional microCT (Figures 2A and 2B) and HE analysis (Figures 2C and 2D) of distal femora. However, neither Rheb AD KO mice nor Rheb BAD KO mice showed prominent alterations in cortical bone parameters in comparison with their control littermates (Figures S6A and S6B), suggesting that their BAT dysfunction mainly affects trabecular bone. Conversely, the number of adipocytes in the bone marrow, which appear as oval vacuoles on HE staining, was higher in the distal tibiae of Rheb AD KO and Rheb BAD KO mice than in their control littermates (Figures 2E and 2F). Furthermore, the expression of perilipin, a marker of mature adipocytes, was enhanced in the bone marrow of both types of Rheb knockout mice (Figure 2G).
In Rheb BAD KO mice, expression of Ucp1-Cre was not observed in cells located at the trabeculae or lacunae, nor at the periosteum (Figure S7A). Additionally, Rheb protein expression was similar in these cells in both control and Rheb AD KO mice (Figure S7B). These results suggest that the osteopenic phenotype observed in Rheb AD KO and Rheb BAD KO mice cannot be attributed to nonspecific knockout of Rheb in preosteoblasts, osteoblasts (OBs), osteocytes, or osteoclasts (OCs).
Rheb ablation in BAT led to reduced osteoblast differentiation in vivo and ex vivo

Rheb AD KO and Rheb BAD KO mice displayed a marked decrease in osteoblast (OB) differentiation (Figures 3A and 3B). Consistently, these mice also demonstrated significantly lowered expression of both Osx (Figure 3C) and Ocn (Figure 3D). Rheb BAD KO mice also showed a markedly reduced mineral apposition rate (MAR) in comparison with their control littermates, as quantified by calcein labeling (Figure 3E). However, the number of TRAP-positive OCs was also lower in Rheb AD KO and Rheb BAD KO mice (Figure 3F). Consistently, both types of KO mice displayed a marked decrease both in serum Procollagen I N-Terminal Propeptide (PINP), a circulating marker of new bone formation, and in Cross-Linked C-Telopeptide of Type I Collagen (CTX-I), a serum marker of bone resorption (Figure 3G) [36]. The reason remains unknown, but we speculate that it may reflect a decoupled state of bone formation and resorption after BAT dysfunction. Moreover, BMSCs isolated from the femoral marrow cavity of these two types of mice displayed a greatly reduced capacity for OB differentiation (Figure 3H), but significantly enhanced adipocyte differentiation (Figure 3I), without noticeable change in the expression of Rheb itself (Figure 3J). These results collectively suggest that the impaired osteogenesis of these mice may be driven by a reduction in OB lineage commitment of BMSCs.
The transcriptome of BMSCs isolated from Rheb BAD KO mice was subsequently analyzed by whole-genome mRNA array. Rheb mRNA expression in BMSCs of Rheb BAD KO mice was similar to that of the control mice (Figure S8A), suggesting that there was no nonspecific Rheb knockout in these cells. Consistent with their decreased OB differentiation capacity, GO analysis of the BMSC transcriptome of Rheb BAD KO mice demonstrated downregulation of genes enriched in ossification, OB differentiation, and bone mineralization, including SP7 (Osx), Prrx2, Ibsp, Fgfr2, and Fgfr3 (Figures S8B and S8C).
Brown but not white adipocytes from Rheb-ablated mice suppress osteoblast differentiation from BMSCs
To investigate whether adipokines from the BAT of Rheb AD KO mice directly suppress OB differentiation from BMSCs, conditioned medium (CM) collected either from interscapular BADs or from epididymal WADs of Rheb AD KO mice cultured ex vivo was transferred to cultures of primary BMSCs (from wild-type C57BL/6J mice). OB or AD differentiation was then analyzed by ALP staining or oil red O staining, respectively. Interestingly, while CM from BADs of Rheb AD KO mice markedly suppressed OB but promoted AD differentiation (Figures 4A and 4B), CM from WADs had little effect on either OB or AD differentiation (Figures 4C and 4D). This inhibitory effect of BAT adipokines on OB differentiation was verified, since CM collected from interscapular BADs of Rheb BAD KO mice had similar effects (Figures 4E-4H). In contrast, heat-inactivated CM (heated at 95 °C for 10 min to denature any proteins) had little effect on OB or AD differentiation of BMSCs (Figures S9A-S9D). These results suggest that adipokines, but not fatty acids, which have been reported to suppress OB differentiation in vitro [37], play a major role in this BAD-OB crosstalk. Collectively, these results suggest that adipokines released from malfunctioning BADs, but not WADs, suppress OB but enhance AD differentiation.
Loss of Rheb in BAT induces transcription and secretion of S100A8/A9
To further identify the individual adipokines secreted by malfunctioning BAT to suppress OB differentiation, a whole-genome mRNA array assay of interscapular BAT was performed. Among the significantly upregulated genes (Rheb AD KO mice vs. control), inflammation-related genes, including those involved in leukocyte chemotaxis, myeloid leukocyte migration, and neutrophil chemotaxis, were highly enriched (Figure 5A). Interestingly, BAT from the Rheb BAD KO mice showed upregulated genes enriched in highly similar biological processes (Figure 5B), suggesting that the impaired osteogenesis of both the Rheb AD KO and the Rheb BAD KO mice may be caused by similar BAT-derived inflammatory factors.
The heterodimeric protein S100A8 (MRP8)/A9 (MRP14) is a key inflammation alarmin, which acts as both a diagnostic marker and a critical player in various kinds of inflammatory responses [38]. Remarkably, S100A8 (with a 36.5-fold upregulation in the Rheb AD KO mice) and S100A9 (with an 8.91-fold upregulation in the Rheb AD KO mice) were at the convergence point of these processes (Figure 5C). Marked upregulation of S100A8 and S100A9 was also observed in the BAT of Rheb BAD KO mice, as shown by a volcano plot (Figure 5D). Both RT-qPCR (Figure 5E) and WB (Figure 5F) confirmed that S100A8 and S100A9 levels were elevated in BAT tissues from both types of knockout mice. Importantly, S100A8 and S100A9 protein levels were also elevated in the serum of these two types of mice (Figure 5G). Since S100A9 usually forms a heterodimer with S100A8 and functions to stabilize S100A8 and increase its protein level [38], we focused on the function of the S100A8/A9 heterodimer, rather than the independent functions of S100A8 or S100A9.
Remarkably, aged C57BL/6J mice (20 months old), a native mouse model of both age-related BAT "whitening" and bone loss, also showed increased S100A8 protein levels both in interscapular BAT (Figure 5H) and in serum (Figure 5I), in comparison with their younger counterparts (3 months old). Further, CM from BADs of these aged mice showed much higher S100A8 levels than that from their bone marrow (BM), a major source of myeloid cells including neutrophils and macrophages (Figure 5J). These data collectively suggest that BAT-derived S100A8/A9 may be associated with age-related bone loss in mice. Transcription factor binding-motif analysis of the upregulated genes in the BAT of Rheb BAD KO mice using the i-cisTarget web tool [39] revealed that the p65 NF-κB (RelA) consensus motif was the most highly enriched (Figure S10A). Additionally, p65 has been documented to bind the promoters of S100A8 and S100A9 to activate their transcription [40]. Thus, we hypothesized that p65 transactivates S100A8/A9 in BADs in response to Rheb depletion. As expected, the level of p65 NF-κB phosphorylation (S276, a hallmark of p65 transcriptional activity) was much higher in BADs of Rheb AD KO and Rheb BAD KO mice (Figures S10B and S10C) than in control mice. Consistently, Rheb-depleted primary BADs displayed a marked elevation in p65 phosphorylation (Figures S10D and S10E), as shown by western blotting. Brown adipocytes can be induced from C2C12 cells (hereafter C2C12 BADs) in vitro by supplementation with a retinoid X receptor agonist, bexarotene [41,42]. Conversely, simultaneous treatment of these C2C12 BADs with the NF-κB inhibitor QNZ or SN50 markedly blunted the effect of rapamycin on S100A8 expression and secretion (Figures S10F and S10G). Collectively, these data suggest that the S100A8 induction stimulated by Rheb depletion is dependent on p65 NF-κB.
Brown adipocytic S100A8/A9 mediates the effect of BAT Rheb loss on osteogenesis
Since it has been reported that S100A8/A9 is derived mainly from myeloid cells, including neutrophils and macrophages [43-45], we next investigated whether BADs can also secrete S100A8/A9. Brown adipocytes in both Rheb BAD KO and Rheb AD KO mice showed markedly enhanced S100A8 and S100A9 expression (Figure 6A). Consistent with this notion, both lysates and culture medium of primary BADs isolated from Rheb AD KO mice (Figure 6B) and Rheb BAD KO mice (Figure 6C) displayed significantly higher S100A8 and S100A9 protein levels than those from their control littermates. To compare the contributions of BADs and myeloid cells to the release of S100A8/A9, BAT from the control or Rheb BAD KO mice was fractionated into mature BADs and stromal-vascular fractions (SVF) and analyzed for the expression of S100A8/A9. In control mice, the baseline expression levels of S100A8 and S100A9 in the SVF fraction were slightly higher than those in mature BADs. In sharp contrast, mature BADs showed much higher S100A8 and S100A9 expression levels than SVF in Rheb BAD KO mice (Figure S11A). Furthermore, in comparison with the control mice, Rheb BAD KO mice showed similar numbers of total macrophages, M1 macrophages, and M2 macrophages (Figures S11B and S11C). Collectively, these results suggest that BADs are the main source of S100A8/A9 secretion in Rheb BAD KO mice.
Additionally, transfection of Rheb shRNA into primary BADs resulted in a marked increase in S100A8 and S100A9 expression and secretion, as revealed by RT-qPCR and WB analysis of the cell lysates (Figure 6D), as well as by WB analysis of the culture medium (Figure 6E). Further, treatment with rapamycin, a specific mTORC1 inhibitor, also increased S100A8 expression and secretion dose-dependently in primary BADs derived from wild-type C57BL/6J mice (Figures 6F and 6G) and in C2C12 BADs (Figures S11D-S11F). In contrast, bone marrow-derived macrophages of Rheb FL/FL mice infected with adenovirus encoding Cre showed no significant alteration in the mRNA level of S100A8 (Figure S11G). Collectively, these results confirmed that Rheb depletion induces S100A8 expression and secretion in BADs in vitro, probably via inactivation of mTORC1.
To test whether S100A8/A9 is a factor suppressing bone formation downstream of Rheb loss in BADs, we explored several loss-of-function strategies to assess the effect of blocking S100A8/A9 on bone formation. Intramedullary injection of S100A8/A9 neutralizing antibody markedly reversed the reductions in trabecular bone mass, BMD, and trabecular number in 10-month-old Rheb BAD KO mice (Figures 6H and 6I). The reduction in OB differentiation of Rheb BAD KO mice was also rescued (Figures S12A-S12C). OCs were also reduced by this treatment (Figure S12D), consistent with a previous finding that S100A8 stimulates OC formation [46]. Likewise, tail-vein injection of S100A8/A9 neutralizing antibody had similar rescue effects on the reductions in bone mass and OB differentiation of Rheb BAD KO mice (Figures S12E and S12F). Consistently, intramedullary treatment with S100A8/A9 neutralizing antibody for 2 months produced similar effects in 5-month-old Rheb BAD KO mice, which are at the early stage of age-related bone loss (Figures S12G-S12I). Further, S100A8/A9 neutralization in 5-month-old and 10-month-old C57BL/6J mice for 2 months led to significant increases in trabecular bone mass, trabecular number, and BMD (Figures S12J and S12K).
Sympathetic nerves release noradrenaline to promote BAT thermogenesis while repressing bone formation [47]. Recently, it was shown that sympathetic nerve innervation of BAT is controlled by paracrine secretion of S100B by BADs [48]. Depletion of the Rheb gene in BADs did not alter the serum level of noradrenaline (Figure S12L), suggesting that the Rheb-S100A8 axis may regulate bone formation independently of the sympathetic system.
In vitro, supplementation with S100A8/A9 neutralizing antibody blunted the suppressive or promoting effects of BAD CM from Rheb BAD KO mice on OB or AD differentiation from BMSCs, respectively (Figures S13A and S13B). However, CM from S100A8-depleted primary BADs did not significantly enhance OB differentiation from BMSCs (Figures S13C and S13D), suggesting that the effects of BAD-derived S100A8 on osteogenesis are negligible under physiological conditions. Consistently, recombinant S100A8/A9 dose-dependently suppressed OB differentiation of primary BMSCs from wild-type C57BL/6J mice, C3H10T1/2 MSCs, and C2C12 stromal cells (Figures S13E-S13G), without compromising cell viability (Figures S13H-S13J).
S100A8/A9 inhibits OB differentiation of BMSCs through targeting toll-like receptor 4 (TLR4)
Previously reported cell-membrane receptors for S100A8/A9 in BMSCs, TLR4 and the receptor for advanced glycation end-products (RAGE) [49,50], were analyzed by RT-qPCR. Interestingly, BMSCs showed a much higher expression level of TLR4 than of RAGE (Figure 7A). We next tested whether TLR4 in MSCs is activated upon S100A8/A9 treatment. Indeed, both C3H10T1/2 and C2C12 MSCs demonstrated fast Myd88 translocation to the plasma membrane, a well-known hallmark of TLR4 activation [51,52], as well as a prominent elevation in its protein level in response to S100A8/A9 stimulation, as observed by fluorescence confocal microscopy (Figures 7B and 7C). Conversely, supplementation with the TLR4 inhibitor TAK242 dramatically reversed the suppressive effect of S100A8/A9 on OB differentiation of C3H10T1/2 MSCs (Figure 7D), further suggesting that S100A8/A9 may repress BMSC differentiation by targeting TLR4.

Figure 6. Brown adipocytic S100A8/A9 mediates the effect of BAT Rheb loss on osteogenesis. (A) Interscapular BAT from either control or Rheb AD KO, or from control or Rheb BAD KO mice, was analyzed for S100A8 or S100A9 expression by immunohistochemistry. Conditioned medium (CM) was collected from both interscapular BADs and epididymal WADs of the control and Rheb AD KO mice (B), or from interscapular BADs of the control and Rheb BAD KO mice (C). The cells were then lysed, and both the whole-cell lysates and the CM were analyzed for S100A8 and S100A9 expression by WB. Primary brown adipocyte progenitors (BPADs) from wild-type C57BL/6J mice were transfected with lentivirus carrying the control or Rheb shRNA and induced into mature BADs, followed by analysis of whole-cell or secreted S100A8 levels via RT-qPCR (D) or WB (E), respectively. BPADs were induced into BADs and treated with various concentrations of rapamycin for 12 h, then analyzed for S100A8 expression via RT-qPCR (F) and WB (G). The efficiency of rapamycin treatment was assessed by monitoring phosphorylated S6 (S235/236) (G). S100A8/A9 neutralizing antibody was injected into the tibial medullary cavity of Rheb BAD KO mice (10 months old) every 3 days for 2 months, and the effects on bone formation were analyzed by 3D microCT (H) or HE staining (I). Bone histomorphometric parameters of the tibiae are shown in the lower panel of (H). Data are shown as box-and-whisker plots (with median and interquartile ranges) from max to min, with all data points shown. Two-tailed unpaired t-tests were used for two-group comparisons; for comparisons between multiple groups, one-way analysis of variance with multiple comparisons was used, followed by the Bonferroni post-hoc test for significance.
DISCUSSION
The endocrine role of BAT in bone development, homeostasis, and involution, and the underlying mechanism, remain largely unclear. In the current study, we provided in vivo evidence that batokines released from whitened BAT suppress osteogenesis remotely. Loss of Rheb in BAT results in the release of batokines including S100A8/A9, which in turn activate TLR4-Myd88 signaling in BMSCs, preventing their OB differentiation while promoting AD differentiation (Graphical Abstract). Our data reveal a previously uncharacterized pathological role of dysfunctional BAT in the remote inhibition of osteogenesis, through the action of batokines including S100A8/A9.
The activity of Rheb-mTOR signaling is modulated both by the GTPase activity of Rheb and by its expression level. Both reduction of TSC GAP activity and loss-of-function mutation of TSC promote Rheb activity. On the contrary, low energy or DNA damage suppresses Rheb activity via activation of TSC [44,53]. Dysregulation of Rheb-mTOR signaling has been implicated in energy metabolism disorders. High-fat diet-induced obesity in mice is associated with hyperactivation of Rheb-mTOR signaling [54]. Paradoxically, mice with ectopic Rheb expression in pancreatic β cells showed improved glucose tolerance and resistance to hyperglycemia induced by obesity [55]. While Rheb activates mTOR signaling to suppress lipolysis, it inhibits white-beige transformation of fat and thermogenesis in an mTORC1-independent but PDE4D5-cAMP-dependent manner [56], further complicating the regulatory role of Rheb-mTOR in metabolism. In this regard, understanding the tissue- or cell-type-specific role(s) of Rheb-mTOR in these pathological conditions is of fundamental importance for targeted inhibition or activation of the pathway without side effects.
Combining the results from Rheb ablation in BADs only (Ucp1-Cre) or in both BADs and WADs (Fabp4-Cre), we demonstrated that Rheb-mTOR inactivation in BAT leads to BAT malfunction, an outburst of inflammatory batokines including S100A8/A9, and a consequent reduction in osteogenesis. Although Fabp4-Cre has also been shown to be expressed in macrophages besides adipocytes [31], based on our previous results showing that mTOR inactivation in macrophages (using Lyz-Cre) enhances osteoclastogenesis without altering osteogenesis [57], it is unlikely that nonspecific Rheb ablation in macrophages accounts for the osteopenic phenotype observed in Rheb AD KO mice. Ucp1 has been reported to be a specific marker of BADs [6]. Consistently, Ucp1-Cre expression was observed only in BADs among the tissues/organs examined in this study. Importantly, the SVF fraction of BAT from Rheb BAD KO mice showed no reduction in Rheb expression, suggesting that the osteopenic phenotype observed in Rheb BAD KO mice was specifically caused by Rheb depletion in BADs. However, given the observed Rheb depletion in beige depots, albeit to a lesser extent than in classical BAT, we cannot exclude a role of beige adipocytes in this BAT-bone crosstalk.
Consistent with the notion that activation of the browning program in white adipocytes may support OB differentiation during bone development [58], our results suggest that whitening of brown adipocytes driven by Rheb loss suppresses trabecular bone formation, reduces peak bone mass, and accelerates bone involution. Resembling Rheb BAD KO and Rheb AD KO mice, aged wild-type mice displayed a significant increase of S100A8/A9 both in BAT and in serum, concomitant with Rheb depletion and BAT whitening. This reflects the functional relevance of the Rheb-S100A8/A9 axis in AOP, although the trabecular bone loss caused by deregulation of this axis does not faithfully phenocopy the latter. Additionally, these results reveal an evolutionary role of Rheb-mTOR in BAT-bone metabolism. It is likely that dysfunction of this signaling axis during aging drives the switch of BAT-bone coupling from an energetically efficient phenotype to an inefficient one, hence losing evolutionary advantage. However, it is unclear whether imbalance of Rheb-mTOR signaling in BAT results in bone abnormalities in other metabolic disorders with BAT whitening, such as obesity and type 2 diabetes mellitus [59,60]. Although S100A8/A9 is known to be secreted mainly by myeloid cells such as neutrophils and macrophages [61,62], multiple lines of evidence suggest that S100A8/A9 derived from BADs, but not myeloid cells, may play a major role in the suppression of osteogenesis, at least in the initiating stage. (1) Rheb BAD KO and Rheb AD KO mice displayed markedly enhanced S100A8/A9 expression and secretion in primary BADs in comparison with their control littermates. (2) Rheb BAD KO mice exhibited markedly higher S100A8/A9 levels in the mature BAD fraction than in the SVF fraction of BAT, which mainly includes myeloid cells, lymphocytes, preadipocytes, and endothelial cells [63,64]. (3) Both primary BADs and induced C2C12 BADs with Rheb-mTOR inactivation showed markedly increased S100A8/A9 secretion in vitro. (4) Macrophages with Rheb knockout in vitro did not display significant alterations in S100A8/A9 expression. (5) There were far fewer infiltrating macrophages than BADs in the BAT (less than 5% of CD45+ leukocytes, and even fewer macrophages), and this does not change during aging [64]; additionally, no significant differences were observed in the numbers of total, M1, or M2 macrophages between the control and Rheb knockout mice. (6) We further revealed that whitened BAT of aged mice secretes much more S100A8 than bone marrow per se. Taken together, these data suggest that BADs may serve as one of the primary sources of S100A8/A9 secretion upon BAT dysfunction induced by Rheb-mTOR inactivation. Nevertheless, we cannot exclude that S100A8/A9 secretion from BADs may trigger feedback secretion of S100A8/A9 from other cells, tissues, or organs, and this feedback secretion may also contribute to maintaining the suppression of osteogenesis. Interestingly, a very recent study revealed that bone marrow-derived pro-inflammatory S100A8+ immune cells invade the BAT of male rats and mice and impair its sympathetic innervation and thermogenesis during aging [65]. Taken together with our findings, these data suggest a possible dual intercommunication role of S100A8 in the bone-BAT axis.
S100A8/A9 has been shown to enhance osteoclastic bone resorption [46] and aggravate osteoarthritis [66,67], while its role in OB differentiation was previously unknown. However, our results did not show a prominent increase in OC numbers in either Rheb AD KO or Rheb BAD KO mice, suggesting that the bone loss in these mice cannot be attributed to an effect of S100A8/A9 on OCs. In fact, S100A8/A9 strongly suppresses OB differentiation from MSCs by targeting TLR4, the most abundant receptor for S100A8/A9 in MSCs. Supporting our data, TLR4 knockout accelerated bone healing after a skull lesion [68], and MyD88 deficiency accelerated bone regeneration [69] and conferred resistance to PAMP-induced bone loss [70]. Conversely, TLR4 activation in differentiating mouse primary OBs [71], MC3T3-E1 osteoprogenitors [72], or MSCs [73]
Cell culture
BMSCs were isolated from the femora and tibiae of 4-6-week-old mice. Primary bone marrow-derived macrophages (BMDMs) were isolated from the femora and tibiae of 1-month-old Rheb FL/FL mice. Brown preadipocytes (BPADs) were isolated from the interscapular BAT of neonatal mice. C2C12 myoblasts and C3H/10T1/2 mesenchymal stem cells were purchased from the American Type Culture Collection (ATCC). The cells were cultured in a humidified incubator at 37 °C with 5% CO2. Cell lines were not authenticated internally. All cells were routinely tested and confirmed to be free of mycoplasma (YK-DP-20, Ubigene Biosciences, Guangzhou, China).
METHOD DETAILS

Primary culture of BMSCs
Tibiae and femora were harvested from 4-6-week-old mice. Briefly, the tissues around the bone were removed, and sterile PBS was syringed into the medullary cavity to flush out the bone marrow. The cell suspensions were collected and centrifuged at 800 × g for 5 min, and the pellets were resuspended and cultured in complete medium consisting of α-MEM (C12571500BT, Gibco, Grand Island, NY, USA), 10% FBS (10099-141, Gibco), 1% penicillin, and 1% streptomycin (15140-122, Gibco). BMSCs were cultured for 2 days, then washed to remove non-adherent cells and supplied with fresh α-MEM complete medium, with medium renewal every 2 days.
Isolation of primary brown and white adipocytes and collection of conditioned medium (CM)
Brown adipocytes and white adipocytes were isolated from the interscapular BAT and white adipose tissue of 1-month-old mice, respectively, following a tissue extraction and 0.1% collagenase I digestion (17100017, Gibco) procedure. After digestion, the mixture was filtered through a 250-µm mesh into a 50 mL conical polypropylene tube and allowed to stand for 2-3 min for the adipocytes to float to the top. The subnatant containing the collagenase solution was removed using a long needle and syringe, then 10 mL of adipocyte wash buffer (Krebs-Ringer bicarbonate HEPES buffer containing 10 mM sodium bicarbonate, 30 mM HEPES, and 500 nM adenosine, pH 7.4, supplemented with 3% (w/v) fatty acid-free bovine albumin fraction V) was added to the adipocytes, which were allowed to stand for 2-3 min. The subnatant was removed once again and 10 mL of adipocyte wash buffer was added. The wash procedure was repeated three times, then the adipose cell suspension was centrifuged at 800 rpm for 30 s. After removing and discarding the subnatant, the upper floating layer of adipocytes was collected and cultured in medium containing 10% heat-inactivated FBS in one well of a 12-well plate. After 24 h, conditioned medium (CM) was collected from the BADs and WADs.
Treatment of mouse BMSCs with conditioned medium (CM)

BMSCs from wild-type C57BL/6J mice were plated into 24-well plates containing α-MEM with 10% FBS (10099-141, Gibco), 1% penicillin, and 1% streptomycin. These BMSCs were treated with 10% CM (either heated at 95 °C for 10 min or not) from BADs or WADs of the control and Rheb AD KO mice, or from BADs of the control and Rheb BAD KO mice, 36 h before reaching complete confluence. After 36 h, the BMSCs were switched to osteogenic/adipogenic induction medium and cultured for 7 days, then either fixed and stained with ALP/Oil Red O or lysed for total protein extraction and western blotting.
Isolation of mature brown adipocyte fraction (MAF) and stromal-vascular fraction (SVF)

MAF and SVF were isolated from the interscapular BAT of 1-month-old mice, following a tissue extraction and 0.1% collagenase I digestion (17100017, Gibco) procedure as documented elsewhere [9,75]. After centrifugation for 5 min at 600 × g, mature BADs in the upper floating layer were collected for total RNA extraction. After further removal of brown preadipocytes, the resulting stromal-vascular fractions were also harvested for total RNA extraction.
C2C12 cell culture and induction
For BAD differentiation, C2C12 cells were cultured in medium containing 10% FBS, 10 µM bexarotene, 0.5 mM isobutylmethylxanthine, 125 µM indomethacin, 5 µM dexamethasone, 850 nM insulin, and 1 nM T3 for 48 h. Cells were then grown in medium containing 10% FBS, 850 nM insulin, and 1 nM T3 for another 3 days. To stimulate thermogenic gene expression, cells were incubated with 10 µM forskolin (fsk) for 4 h [42]. BAD differentiation of these cells was then evaluated by oil red O staining and by analysis of the BAD markers Ucp-1 and PGC-1α via western blotting. For osteoblastic differentiation, C2C12 cells were treated with medium containing 10% FBS, 0.2 mM ascorbic acid, 10 nmol/L dexamethasone, 1 mmol/L β-glycerol phosphate, 5 µg/mL insulin, 0.5 mM isobutylmethylxanthine, and 10 nM all-trans retinoic acid for 7 days for ALP staining or 14 days for ARS staining.
Cell viability assay
The WST-8 assay was performed using the Cell Counting Kit-8 (CCK-8, CK04-500T, Dojindo, Japan) colorimetric assay to assess cell proliferation following the manufacturer's protocol. C2C12 cells, C3H/10T1/2 cells, and BMSCs were seeded into 96-well plates at a density of 5 × 10⁴ cells per well. The CCK-8 reagent was added to these cells and incubated at 37 °C for 0.5 h after S100A8/A9 treatment. The absorbance (optical density) at 450 nm was measured to analyze cell viability.
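As a sketch of how the measured absorbance is typically converted into a viability figure (a common convention rather than a step stated in the text), the background-corrected OD of treated wells is normalized to the untreated control; the OD values below are hypothetical.

```python
def viability_percent(od_treated, od_blank, od_control):
    """Percent viability from CCK-8 absorbance readings at 450 nm.

    Normalizes the background-corrected OD of treated wells to the
    untreated control; a common convention, with hypothetical values.
    """
    return 100.0 * (od_treated - od_blank) / (od_control - od_blank)

print(round(viability_percent(0.92, 0.10, 1.00), 1))  # -> 91.1 (%)
```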
mRNA array
Total RNA was extracted from primary BMSCs or BAT of the control or Rheb AD KO mice using TRIzol reagent (Invitrogen, Carlsbad, CA, USA) according to the manufacturer's instructions. For mRNA array assays, samples were submitted to Shanghai Biotechnology Corporation for hybridization on an 8 × 60K Agilent SurePrint G3 Mouse Gene Expression Microarray (V2) (Agilent Technologies, Santa Clara, CA, USA).
Each microarray chip was hybridized to a single sample labeled with Cy3. Background subtraction and normalization were performed.Finally, mRNAs with expression levels differing by at least 3-fold between the control and Rheb AD KO mice were selected (p < 0.05, Student's t-test).
RNA sequencing
Total RNA was extracted from the control or Rheb BAD KO mice using TRIzol reagent (Invitrogen) following the manufacturer's protocol. Total RNA quantity and purity were analyzed using a Bioanalyzer 2100 and an RNA 6000 Nano LabChip kit (Agilent), with RNA integrity values of >7.0. Poly(A) RNA was purified from total RNA (5 µg) using poly-T oligo-attached magnetic beads in two rounds of purification. Following purification, the mRNA was fragmented into small pieces using divalent cations at elevated temperature. The cleaved RNA fragments were then reverse-transcribed to create the final cDNA library in accordance with the protocol for TruSeq RNA Sample Preparation v.2 (cat. no. RS-122-2001, RS-122-2002) (Illumina Inc., San Diego, CA, USA); the average insert size for the paired-end libraries was 300 bp (±50 bp). Paired-end sequencing was carried out on an Illumina HiSeq 2500 following the manufacturer's recommended protocol. Before assembly, low-quality reads were removed using Trimmomatic software (www.usadellab.org) and 6 Gb of clean reads were obtained. Sequencing reads were aligned to the reference genome (mm10) using the HISAT2 package. The mapped reads of each sample were assembled and counted using the featureCounts package. After a matrix of read counts was generated, differential gene expression was analyzed using the R package edgeR, and statistical significance was assessed by exact binomial test. Differentially expressed genes were selected with log2(fold change) values of ≥1 or ≤−1 and statistical significance of p < 0.05.
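The final selection step can be illustrated with a short sketch: given an edgeR-style results table (column names here are assumptions), genes are kept when |log2(fold change)| ≥ 1 and p < 0.05, as stated above; the example values are hypothetical.

```python
import pandas as pd

# Hypothetical edgeR-style results table: one row per gene with a
# log2 fold change and a p value (column names are assumptions).
results = pd.DataFrame({
    "gene":   ["S100a8", "S100a9", "Ucp1", "Prdm16"],
    "log2FC": [3.2, 2.9, -1.8, -0.4],
    "pvalue": [1e-6, 4e-5, 3e-4, 0.21],
})

# Selection rule from the text: |log2(fold change)| >= 1 and p < 0.05.
deg = results[(results["log2FC"].abs() >= 1) & (results["pvalue"] < 0.05)]
print(deg)  # S100a8, S100a9, and Ucp1 pass; Prdm16 does not
```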
Cre-expressing adenovirus infection
Bone marrow-derived macrophages (BMDMs) of Rheb FL/FL mice were infected with adenovirus encoding Cre recombinase (Ad-Cre, GCD0299190, Genechem) according to the manufacturer's protocol. BMDMs of Rheb FL/FL mice infected with the control virus (vector not encoding Cre) were used as a control (Ad-NC).
Real-time RT-PCR analysis
Mice were euthanized, and total RNA from BAT was rapidly extracted using RNAiso Plus (9109, TaKaRa, Beijing, China) and reverse-transcribed into cDNA using the PrimeScript RT reagent kit (RR047A, TaKaRa). Real-time quantitative PCR (RR420A, TaKaRa) analysis was performed with specific gene primers (Table S1) using an ABI 9700 Thermal Cycler (Applied Biosystems, Foster City, CA, USA). Quantification of RNA expression levels was performed using the 2^−ΔΔCT method, with β-actin as the internal reference control.
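A minimal sketch of the 2^−ΔΔCT calculation follows; the Ct values are hypothetical, and the reference gene corresponds to β-actin in the protocol above.

```python
def relative_expression(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
    """Relative mRNA level by the 2^-ddCt method.

    *_s: Ct values in the sample of interest; *_c: in the control sample.
    The reference gene corresponds to beta-actin in the protocol.
    """
    dd_ct = (ct_target_s - ct_ref_s) - (ct_target_c - ct_ref_c)
    return 2.0 ** (-dd_ct)

# Hypothetical Ct values: target amplifies 3 cycles earlier in the sample.
print(relative_expression(22.0, 16.0, 25.0, 16.0))  # -> 8.0-fold up
```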
Western blotting, silver staining, and immunohistochemical assay
For total protein extraction, cells were lysed in RIPA buffer containing 50 mM Tris-HCl pH 8, 150 mM NaCl, 1% Triton X-100, 0.1% sodium deoxycholate, 0.1% SDS, and 1× protease inhibitor cocktail (Roche, Basel, Switzerland). Lysates were boiled in 2× SDS sample buffer. Proteins were separated by SDS-PAGE and transferred to nitrocellulose membranes (Bio-Rad, Hercules, CA, USA) for immunoblotting with relevant antibodies, or silver-stained using the Fast Silver Stain kit (P00175, Beyotime) according to the manufacturer's instructions. For bone histology and histomorphometric analysis, femora and tibiae were decalcified for 20-30 days in decalcification solution (1.45% EDTA, 1.25% NaOH, 1.5% glycerol, pH 7.3) at 4 °C. Decalcified bones were processed and embedded in paraffin, and 5-µm sagittal sections were prepared for histological analyses. For morphological analysis, sections were stained with a modified hematoxylin and eosin (H&E) stain, and osteoclasts were stained using an acid phosphatase, leukocyte (TRAP) kit (Sigma, #387a) and quantified using the Oc.S/BS (osteoclast surface to bone surface) parameter with the OSTEOMEASURE system. Immunohistochemical staining was performed using the primary antibodies listed in the key resources table, which were added and incubated at 4 °C overnight. Images were acquired using an Axio Scope A1 microscope (Carl Zeiss Microscopy GmbH, Jena, Germany) and processed using the Osteomeasure System (OsteoMetrics, Decatur, GA, USA). Osteoblast surface to bone surface (Ob.S/BS), osteoclast surface to bone surface (Oc.S/BS), OSX-positive cell number over bone perimeter (N.OSX/B.Pm), and OCN-positive cell number over bone perimeter (N.OCN/B.Pm) in 5 randomly selected visual fields per specimen were measured and quantified with the OSTEOMEASURE system.
Secreted protein analysis by western blotting

500 µL of culture medium was added to an Amicon Ultra-0.5 device (UFC50030, Merck-Millipore, Darmstadt, Germany) and centrifuged for 30 min at 12,000 × g according to the user manual. An equal volume of 2× Laemmli buffer (125 mM Tris-HCl, pH 6.8, 4% sodium dodecyl sulfate, 20% glycerol, 100 mM dithiothreitol, 0.02% bromophenol blue) was mixed with the ultrafiltrate and boiled at 100 °C for 10 min.
Immunofluorescence
Femora, tibiae, and BAT dissected from the mice were fixed in 4% paraformaldehyde in phosphate-buffered saline (PBS) at 4 °C for 24 h. Brains dissected from mice that had been transcardially perfused with saline followed by 4% paraformaldehyde (PFA) were post-fixed in 4% PFA at 4 °C for 24 h. Femora and tibiae were decalcified in 15% EDTA (pH 7.4) at 4 °C for 14 days. The tissues were embedded in paraffin, and 5-µm sagittal sections of femora, tibiae, and BAT, or coronal sections of brain, were prepared for histological analyses. The sections were incubated with primary antibodies against perilipin 1 (PLIN1, #9349, 1:150, CST), P-P65-S276 (AP0123, 1:100, ABclonal, Woburn, MA, USA), Rheb (15924-1-AP, 1:100, Proteintech), or Myd88 (A0980, 1:50, ABclonal) and labeled with secondary antibodies for 1 h in the dark. After labeling, cells were incubated with DAPI for 5 min. Images were acquired with an Olympus 200 M microscope (Olympus, Tokyo, Japan). The numbers of positively stained cells in the whole medullary space or bone trabecula per femur or tibia, in three sequential sections per mouse in each group, were counted using Image Pro Plus software.
Micro-computed tomography (µCT) analyses
Distal femora and proximal tibiae were fixed overnight in 4% paraformaldehyde and then analyzed by high-resolution µCT (Scanco Medical µCT100, Brüttisellen, Zurich, Switzerland) with a voltage of 55 kVp, a current of 200 µA, a resolution of 5 µm per pixel, and an exposure of 230 ms. The trabecular and cortical bone, proximal to the distal growth plate for the femora and distal to the proximal growth plate for the tibiae, was analyzed starting at the inferior border of the growth plate and extending over a further longitudinal distance of 200 slices (1 mm). The three-dimensional structure and morphometry were reconstructed with three-dimensional model visualization software (µCT Ray V4.2), and the data were analyzed with data analysis software (µCT Evaluation Program V6.6) for trabecular bone volume fraction (BV/TV), trabecular thickness (Tb.Th), trabecular number (Tb.N), trabecular separation (Tb.Sp), cortical thickness (Ct.Th), and cortical BV/TV.
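Of the reported parameters, BV/TV is simply the fraction of bone voxels within the trabecular volume of interest; the sketch below computes it on a synthetic segmented stack (the data are random placeholders, not µCT output).

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical segmented stack: 1 = bone voxel, 0 = marrow/background,
# restricted to the 200-slice trabecular volume of interest described above.
stack = (rng.random((200, 128, 128)) > 0.8).astype(np.uint8)

bv_tv = stack.sum() / stack.size   # bone volume / total volume (BV/TV)
print(f"BV/TV = {bv_tv:.3f}")      # ~0.200 for this synthetic stack
```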
Frozen sections and ALP staining
Mice were euthanized and the femora were removed and fixed with ALP fixative (8% paraformaldehyde, L-lysine (Sigma-Aldrich), and sodium periodate (Merck, Palo Alto, CA, USA)). After 24 h, femora and tibiae were decalcified for 20-30 days in decalcification solution at 4 °C. The tissues were then transferred into 30% sucrose for 24 h to dehydrate, and 5-µm sagittal sections were prepared using a Leica CM1850 frozen microtome (Leica, Wetzlar, Germany). For ALP staining, the tissues were incubated at 65 °C for 1 h to remove the OCT embedding agent, then stained using an ALP staining kit (Beyotime Institute of Biotechnology) and quantified using the Ob.S/BS (osteoblast surface to bone surface) parameter with the OSTEOMEASURE system.
Hard tissue sections
Mice were euthanized and the femora were removed and fixed with 4% paraformaldehyde for 24 h. 3-month-old mice were injected with calcein (C0875, Sigma-Aldrich) (15 mg/kg, i.p.) 10 and 3 days before euthanasia. Tissues were then gradually dehydrated in 70-100% alcohol and finally placed in xylene for 2 days. After embedding in a plastic embedding agent, the tissues were cut with a hard tissue microtome (EXAKT Advanced Technologies GmbH, Norderstedt, Germany). For calcium salt detection, tissues were stained with a Von Kossa staining kit (ab150687, Abcam, UK) according to the manufacturer's protocol. Images were acquired using a Zeiss microscope and quantified using Image Pro Plus. For calcein detection, tissues were screened directly using a confocal laser scanning microscope (Olympus FV1000, Tokyo, Japan) and quantified with the OSTEOMEASURE system.
Enzyme-linked immunosorbent assay (ELISA)
Blood samples were collected from 8-week-old mice into 1.5 mL centrifuge tubes at room temperature and allowed to stand for 1 h. Serum was collected from the upper layer after clotting and centrifuged at 3000 rpm for 10 min at room temperature. The protein levels of S100A8, S100A9, PINP, CTX-I, leptin, and noradrenaline in the serum were measured using a mouse S100A8 ELISA kit (E-EL-M1343c, Elabscience, Wuhan, China), a mouse S100A9 ELISA kit (E-EL-M3049, Elabscience), a mouse PINP ELISA kit (E-EL-M0233c, Elabscience), a mouse CTX-I ELISA
Figure 1. Deletion of Rheb drives BAT whitening. (A) Gross examination (upper panel) and weight statistics (lower panel) of the BAT of the control and Rheb FL/FL; Ucp1-Cre (Rheb BAD KO) mice. (B) H&E or (C) immunohistochemical (IHC) staining of BAT sections from the control and Rheb BAD KO mice for the indicated markers. (D) WB analysis of BAT tissue from these mice for the indicated proteins. (E) RT-qPCR assay for Nrip expression in the BAT of the control and Rheb BAD KO mice. Data are shown as box-and-whisker plots (with median and interquartile ranges) from max to min, with all data points shown. Analyses were performed with two-tailed unpaired t-tests.
Figure 2. Rheb ablation in BAT resulted in reduced bone mass, but increased bone marrow fat in mice. Three-dimensional microCT reconstruction (left panels) and histomorphometric analysis (right panels) of distal femora and proximal tibiae from the control and Rheb AD KO mice (A), or the control and Rheb BAD KO mice (B). Hematoxylin-eosin (HE) analysis of distal femora and proximal tibiae from 1-month-old (C) and 4-month-old (D) control and Rheb AD KO mice (upper panels), or control and Rheb BAD KO mice (lower panels). Marrow fat in the medullary cavity of distal tibiae from 1-month-old (E) and 4-month-old (F) control and Rheb AD KO mice (upper panels), or control and Rheb BAD KO mice (lower panels), was examined by HE staining; representative images are shown on the left and quantitation of marrow adipocyte numbers on the right. (G) Marrow fat in the medullary cavity of distal tibiae from the control and Rheb AD KO mice, or the control and Rheb BAD KO mice, was examined by IF staining of perilipin. Data are shown as box-and-whisker plots (with median and interquartile ranges) from max to min, with all data points shown. Analyses in (A)-(D) and (G) were performed with two-tailed unpaired t-tests.
Figure 3. Rheb ablation in BAT led to reduced osteoblast differentiation in vivo and ex vivo. Distal femur sections of the control and Rheb AD KO mice (upper panels), or the control and Rheb BAD KO mice (lower panels), were analyzed for osteogenesis by ALP staining (A), Von Kossa staining (B), IHC staining for the osteogenic markers OSX (C) or OCN (D), and calcein labeling (E), or for osteoclast differentiation by TRAP staining (F). Ob.S/BS, osteoblast surface to bone surface; MAR, mineral apposition rate; Oc.S/BS, osteoclast surface to bone surface; N.OSX/B.Pm, OSX-positive cell number over bone perimeter; N.OCN/B.Pm, OCN-positive cell number over bone perimeter. (G) Serum Procollagen I N-Terminal Propeptide (PINP) and Cross-Linked C-Telopeptide of Type I Collagen (CTX-I) were analyzed by ELISA. Primary BMSCs were isolated from the femoral medullary cavity of the control and Rheb AD KO mice, or the control and Rheb BAD KO mice, cultured in vitro, and induced for osteoblastic (H) or adipocytic (I) differentiation. The results were analyzed by ALP or Oil red O staining (left panels), respectively, as well as by WB assay for the indicated osteogenic or adipogenic markers (right panels). (J) The expression of Rheb in primary undifferentiated BMSCs was analyzed by WB assay. Data are shown as box-and-whisker plots (with median and interquartile ranges) from max to min, with all data points shown. Analyses were performed with two-tailed unpaired t-tests.
Figure 4. Brown but not white adipocytes from Rheb-ablated mice suppress osteoblast differentiation from BMSCs. Primary BMSCs were isolated from wild-type C57BL/6J mice, cultured in vitro, and treated with conditioned medium (CM) collected either from interscapular BADs (BAD CM-T-BMSC) (A, B) or from epididymal WADs (WAD CM-T-BMSC) (C, D) of control or Rheb AD KO mice. These cells were induced to differentiate into osteoblasts (OB) or adipocytes (AD), followed by inspection with ALP (A, C) or Oil red O (B, D) staining, respectively. Primary wild-type BMSCs were also treated with CM collected from interscapular BADs of Rheb BAD KO mice and induced for OB or AD differentiation. OB differentiation was analyzed by ALP staining (E) or western blotting for the indicated OB markers (F), while AD differentiation was observed by Oil red O staining (G) or western blotting for the indicated AD markers (H). Data are shown as box-and-whisker plots (with median and interquartile ranges) from max to min, with all data points shown. Analyses were performed as two-tailed unpaired t-tests.
Figure 5. Loss of Rheb in BAT induces transcription and secretion of S100A8/A9. Interscapular BAT from either control or Rheb AD KO (A), or from control or Rheb BAD KO (B) mice, was used for transcriptome analysis by RNA-sequencing. The significantly upregulated genes were further subjected to gene ontology (GO) analysis. Detailed biological processes of the significantly upregulated genes in the Rheb AD KO mice are shown as a CNET plot (C). (D) Significantly up- or downregulated genes (at least two-fold, p < 0.01) in the BAT of control or Rheb BAD KO mice are shown as a volcano plot, with S100A8 and S100A9 labeled. S100A8 and S100A9 expression in the BAT of control or Rheb AD KO (or control or Rheb BAD KO) mice was analyzed by RT-qPCR (E) or western blotting (F). (G) Serum S100A8 and S100A9 levels of these mice were determined by ELISA or western blotting. Interscapular BAT (H) or serum (I) of 3-month-old and 20-month-old C57BL/6J mice was analyzed for S100A8 expression by western blotting. (J) Conditioned medium (CM) collected from interscapular BAT or bone marrow cells from these mice was analyzed for S100A8 by western blotting. Silver staining was used to assess equal loading. Data are shown as box-and-whisker plots (with median and interquartile ranges) from max to min, with all data points shown. Analyses were performed as two-tailed unpaired t-tests.
Figure 7. S100A8/A9 inhibits OB differentiation of BMSCs through targeting toll-like receptor 4 (TLR4). (A) Expression of the S100A8/A9 receptors TLR4 and RAGE in primary BMSCs was analyzed by RT-qPCR. C2C12 stromal cells (B) and C3H10T1/2 MSCs (C) were treated with recombinant S100A8/A9 and analyzed for Myd88 expression and localization by confocal microscopy. Original magnification, ×200 or ×600; scale bars as indicated. (D) C3H10T1/2 cells were treated with S100A8/A9 and the TLR4 inhibitor TAK242 as indicated, followed by assay for OB differentiation by ALP staining (upper panel) and WB (lower panel). Data are shown as box-and-whisker plots (with median and interquartile ranges) from max to min, with all data points shown. Two-tailed unpaired t-tests were used for two-group comparisons; for comparisons between multiple groups, one-way analysis of variance with multiple comparisons was used, followed by the Bonferroni post-hoc test for significance.
All animal studies were approved by the Ethical Committee for Animal Research of Southern Medical University and conducted according to guidelines from the Ministry of Science and Technology of China. Rheb FL/FL mice were a gift from the West China Center of Medical Sciences, Sichuan University. Fabp4-Cre mice and Ucp1-Cre mice were both purchased from Beijing Biocytogen. Mice lacking Rheb in the adipocyte lineage were generated by mating Rheb FL/FL mice with Fabp4-Cre mice, and mice lacking Rheb in the brown adipocyte lineage were generated by mating Rheb FL/FL mice with Ucp1-Cre mice. The resulting Fabp4-Cre; Rheb FL/FL and Ucp1-Cre; Rheb FL/FL mice were hemizygous for Fabp4-Cre or Ucp1-Cre and homozygous for the floxed Rheb allele. Co-housed Rheb FL/FL littermates were used as controls for all experiments. ROSA26 mT/mG mice (Stock No. 007676) were purchased from the Jackson Laboratories (Bar Harbor, ME, USA). To generate Ucp1-Cre; ROSA26 mTmG/+ animals, heterozygous Ucp1-Cre males were bred to homozygous mTmG (ROSA26 mTmG/mTmG) female mice. Genotyping was performed using genomic DNA isolated from tail biopsies, and the primers used are shown in Table S1. All animals were backcrossed for 10 generations onto the C57BL/6J background. Only male mice were used. C57BL/6J mice (newborns, 4-5 weeks old, 3 months old, 5 months old, 10 months old or 20 months old) were purchased from the Laboratory Animal Center of Southern Medical University (Guangzhou, China). Mice were housed in plastic cages at controlled temperatures of 22 ± 1°C, on a 12-h light/12-h dark cycle, with lights on from 06:00-18:00. Standard rodent chow and water were provided ad libitum throughout the study period. Before microCT or histological analysis, the mice were euthanized by cervical dislocation.
Managing cyber risk in supply chains: A review and research agenda
Purpose
In spite of growing research interest in cyber security, inter-firm based cyber risk studies are rare. Therefore, this study aims to investigate cyber risk management in supply chain contexts.
Design/methodology/approach
Adapting a systematic literature review process, papers from interdisciplinary areas published between 1990 and 2017 were selected. Different typologies, developed for conducting descriptive and thematic analysis, were established using data mining techniques to conduct a comprehensive, replicable and transparent review.
Findings
The review identifies multiple future research directions for cyber security/resilience in supply chains. A conceptual model is developed, which indicates a strong link between information technology, organisational and supply chain security systems. The human/behavioural elements within cyber security risk are found to be critical; however, behavioural risks have attracted less attention because of a perceived bias towards technical (data, application and network) risks. There is a need for raising risk awareness, standardised policies, collaborative strategies and empirical models for creating supply chain cyber-resilience.
Research limitations/implications
Different types of cyber risks and their points of penetration, propagation levels, consequences and mitigation measures are identified. The conceptual model developed in this study drives an agenda for future research on supply chain cyber security/resilience.
Practical implications
A multi-perspective, systematic study provides a holistic guide for practitioners in understanding cyber-physical systems. The cyber risk challenges and the mitigation strategies identified support supply chain managers in making informed decisions.
Originality/value
To the best of the authors’ knowledge, this is the first systematic literature review on managing cyber risks in supply chains. The review defines supply chain cyber risk and develops a conceptual model for supply chain cyber security systems and an agenda for future studies.
Descriptions such as 'IT security event', 'cybercrime' or 'cyber-event' all substantially refer to the concept of risk in the cyber context; yet, for example, in their seminal paper, Faisal et al. (2007) refer to information risks as characterised by the presence of worms, viruses and Trojans.
A traditional or physical supply chain (SC) is dominated by the movement of products, finance and information (Peck, 2006), whereas a cyber supply chain is a network of IT infrastructure and technologies used to connect, build and share data in virtual networks (Smith et al., 2007), enabling new forms of risk unconnected to physical products or even to a distinct physical location (e.g. WannaCry ransomware). Supply chains are the backbone of evolving technological ecosystems; Industry 4.0 concepts such as the Internet of Things, Additive Manufacturing, Virtual Reality, Artificial Intelligence and Blockchain reflect, expand, alter and innovate the relationships between supply chain partners.
However, developments in cyber security responses lag these advances in the digitalisation of supply chains. It has been argued that supply chains have unintentionally expanded their vulnerability by imprudently collaborating with many diverse partners (Boone, 2017). Smith et al. (2007) take the view that increasingly accessible IT systems have removed traditional, often bureaucratic, layers which used to function as protective barriers for organisations. In line with the growing capability of shared IT systems, modern cyber threats have also advanced dramatically, with increased consequences (Sokolov et al., 2014). A recent example of the developing capability of cyber threats was observed in the food industry, where complacency led to the belief that IT-related risks would only affect office-based work (Khursheed et al., 2016). However, more elaborate malware goes beyond the boundaries of offices and can infect automated production systems and the wider supply chain network. Cyber supply chains do not necessarily make business simpler and safer; they add complexity and can become more challenging to manage (Kunnathur, 2015). Intriguingly, a difference between cyber and conventional risk has been identified as the anonymity of cyber risk, as it can remain undetectable until it impacts businesses (Renaud et al., 2018).
Organisations are increasingly becoming aware of cyber risks and their consequences and have increased cyber security response budgets (KPMG, 2017).
Everyday media reports on cyber threats highlight the criticality of these risks for practice, yet the topic has attracted minimal academic attention in spite of its significant implications for global supply chains (Davis, 2015; Eling and Wirfs, 2019). According to global risk surveys conducted by various consultancy and insurance firms (e.g. Gartner, AXA, Society of Actuaries, Deloitte) in 2018, cyber security and data breaches emerged as the top enterprise risk. Extant literature has failed to address the implications of cyber threats at the level of supply chains (Smith et al., 2007; Urciuoli et al., 2013; Xue et al., 2013); it is therefore crucial to identify, assess and mitigate cyber risks to reduce supply chain vulnerability. To the best of the research team's knowledge, this study is the first to contribute a supply chain perspective on cyber risk/security/resilience in the form of a systematic literature review (SLR). Following on from the above discussion, the study will address the following research question: How can organisations manage cyber risks in supply chains? Through addressing this question, this study will identify, classify, assess and mitigate cyber risks in supply chains.
The remainder of this paper is structured as follows. Section 2 explains the adopted research design and the use of a data mining approach for developing multiple typologies.
Sections 3 and 4 discuss the findings from the descriptive and thematic analysis. Lastly, section 5 discusses key findings, the conceptual model and critical directions for further research along with implications for research and practice.
Research Design
A systematic literature review (SLR) is the universally preferred approach for executing an objective and extensive investigation of literature relevant to a specific research topic. The SLR follows a structured procedure that is scientific, replicable and transparent (Tranfield et al., 2003). Traditional literature reviews can be criticised for bias, as they steer the reader toward a specific direction based on the researchers' perception (Wilding and Wagner, 2012). In contrast, to avoid claims of bias, this study presents a 'concept-centric' approach (Webster and Watson, 2002) for conducting an SLR by adapting key elements from Tranfield et al. (2003), Rousseau et al. (2008) and Denyer and Tranfield (2009). The specific SLR process adopted here is divided into three stages, with each stage containing the set of activities shown in Figure 1.
Identification of data sources
This exploratory stage of identifying data sources maps a wide range of literature and helps in building an understanding of critical concepts and developing 'search strings' (Ehrich et al., 2002; Arksey and O'Malley, 2005). The initial step is to identify key search terms derived from the research question. Since the study examines how an organisation can manage cyber risks in supply chains, i.e. the risk associated with combining supply chains and information technology, the keywords were judiciously selected to cover two connected fields, namely supply chain risk management (SCRM) and information technology (IT). Boolean search was used since the search domain comprised many interfaces. Different search string combinations were identified based on an initial understanding of the existing literature on cyber risk in supply chains. Appendix I provides an exhaustive list of the keywords selected by the research team. Following a mind mapping session, the most important search string combinations were finalised. Keywords such as 'cyber', 'data', 'information' and 'technology' were combined with risk, disruption, security, attack, along with other related words frequently used in the SCRM/risk management literature. Figure 2 shows the search string combinations used for the identification of data sources. To obtain a wide range of literature, two electronic databases, Scopus and ProQuest, were searched using the search strings identified. Although broader selection criteria are recommended for an SLR, it is critical to define the boundaries and scope of the research. Including articles published in peer-reviewed journals positively influences the quality of the study (Burgess et al., 2006); hence, books, conference papers, editorials, HTML links as well as both 'grey literature' and 'white literature' were excluded (Ghadge et al., 2019). Furthermore, only academic articles published in the last twenty years were considered in order to capture more recent developments in the area.
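To make the search strategy concrete, the following minimal Python sketch shows how such Boolean search string combinations can be generated programmatically. The keyword lists are a small illustrative subset (the paper's full list appears in its Appendix I), and the exact query syntax accepted by Scopus or ProQuest may differ.

```python
from itertools import product

# Illustrative subsets of the keyword groups described above; the full
# list is given in the paper's Appendix I.
tech_terms = ["cyber", "data", "information", "technology"]
risk_terms = ["risk", "disruption", "security", "attack"]

# Pair every technology-related keyword with every risk-related keyword,
# OR the pairs together, and AND the result with the supply chain domain.
pairs = [f'"{t} {r}"' for t, r in product(tech_terms, risk_terms)]
query = "(" + " OR ".join(pairs) + ') AND "supply chain"'

print(query)
# ("cyber risk" OR "cyber disruption" OR ... OR "technology attack") AND "supply chain"
```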
Data screening and synthesis
Another essential stage of framing the SLR is to assess the quality of the papers identified.
While there is no consensus across academic fields on one quality appraisal method for SLRs, in management studies researchers frequently rely on journal quality rankings to determine article inclusion (Tranfield et al., 2003). The decision was taken that, due to the comparative sparsity of extant literature in this area, articles would not be screened against a particular journal ranking. The initial search run on ProQuest produced 2,856 hits in the literature, while 6,637 potential papers were found via Scopus. Making use of these databases' built-in functions, the inclusion and exclusion criteria (explained earlier) were applied to the articles, leaving a total of 3,890 peer-reviewed papers: 2,149 from ProQuest and 1,741 from Scopus. After the removal of duplicates, a total of 1,434 papers meeting the selection criteria were taken into consideration.
The next necessary step was to identify papers closely related to cyber security/risk in supply chains. This was done by manually screening the titles and abstracts; two groups (from the research team) independently selected papers and compiled them together to identify common papers. Following this iterative step, a further 1,373 papers were excluded.
Full-text reading of the 61 remaining papers led to the further exclusion of 22 papers. Finally, following this rigorous screening process to achieve a high-quality output, 39 papers were considered relevant. In addition, bibliography screening of the selected papers identified a further three related articles, giving a total of 41 articles to inform the analysis; this final set was agreed with an external third-party expert.
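As an illustration of the duplicate-removal step, the sketch below shows one common way of collapsing records retrieved from two databases, keying on DOI where available and on a normalised title otherwise. This is a hypothetical reconstruction for clarity, not the exact procedure reported by the authors.

```python
import re

def normalise_title(title: str) -> str:
    # Lower-case and strip punctuation so near-identical records from
    # different databases collapse to the same key.
    return re.sub(r"[^a-z0-9 ]", "", title.lower()).strip()

def deduplicate(records):
    # records: dicts with a "title" and, where available, a "doi".
    seen, unique = set(), []
    for rec in records:
        key = rec.get("doi") or normalise_title(rec["title"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

hits = [
    {"title": "Managing Cyber Risk in Supply Chains"},
    {"title": "Managing cyber risk in supply chains."},  # same paper, other database
]
print(len(deduplicate(hits)))  # 1
```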
Data analysis and dissemination
The data analysis stage aims to break the vast amounts of accumulated data into smaller, coherent parts and examine the extent to which they relate to each other (Denyer and Tranfield, 2009). QDA Miner©, a qualitative data analysis software package developed by Provalis Research, was used as a text mining platform. Text mining was applied to cross-validate the search strings manually derived from the data identification process and to provide further support for the data analysis. Text mining identified the most important words or phrases by frequency (Figure 3); the manually selected key strings strongly match those identified through the text mining. This cross-validation of the choice of search strings helps to limit research team bias and validate the reliability of the SLR process.
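A minimal sketch of this frequency-based cross-validation, using plain Python rather than QDA Miner, is shown below; the stopword list and sample abstracts are illustrative assumptions.

```python
from collections import Counter
import re

STOPWORDS = {"the", "of", "and", "in", "to", "a", "for", "is", "on", "with", "are"}

def top_terms(abstracts, n=20):
    # Tokenise each abstract, drop stopwords and very short tokens, then count.
    tokens = []
    for text in abstracts:
        tokens += [w for w in re.findall(r"[a-z]+", text.lower())
                   if w not in STOPWORDS and len(w) > 2]
    return Counter(tokens).most_common(n)

sample = ["Cyber risk propagates through supply chain networks.",
          "Supply chain cyber security requires collaboration."]
print(top_terms(sample, n=5))
# e.g. [('cyber', 2), ('supply', 2), ('chain', 2), ('risk', 1), ...]
```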
Table II. Definitions of key concepts in the cyber supply chain literature

Cyber supply chain:
- "E-supply chains involve organisations using online information, to perform, rather than just support, some value-adding activities in the supply chain more efficiently and effectively." (Barlow and Li, 2007, p. 289)
- "[Cyber supply chain is] the entire set of key actors and their organisational and process-level interactions that plan, build, manage, maintain, and defend the IT system infrastructure." (Boyson et al., 2010, p. 200)
- "IT system supply chain is a globally distributed and dynamic collection of people, process, and technology." (Simpson, 2010, p. 3)
- "A cyber supply chain is a supply chain enhanced by cyber-based technologies to establish an effective value chain." (Kim and Im, 2014, p. 387)

Supply chain cyber risk:
- "The probability of loss arising because of incorrect, incomplete, or illegal access to information." (Faisal et al., 2007, p. 679)
- "[…] degradation or disruption to a supply chain's infrastructure or structural resources resulting from the successful exploitation of IT vulnerabilities by threats within an organisation, within the supply chain network, or in the external environment." (Smith et al., 2007)
- "IT security incidents occur when a threat directed against an organisational asset causes a compromise in one (or more) of three areas: confidentiality, integrity or availability (CIA)." (Deane et al., 2009, p. 5)
- "Operational risks to information and technology assets that have consequences affecting the confidentiality, availability or integrity of information systems." (Cebula and Young, 2010)
- "Cybercrime can be defined as any crime that is facilitated or committed using a computer, network, or hardware device; in particular, the computer or the device may be the agent, facilitator, or target of the crime that takes place in virtual or non-virtual places." (Urciuoli et al., 2013, p. 51)
- "A cyber-event is any disturbance to this interdependent network that leads to loss of functionality, connectivity, performance, or capacity." (Boyes, 2015, p. 29)

Supply chain cyber risk management:
- "CSCRM (cyber supply chain risk management) can be defined as the organisational strategy and programmatic activities to assess and mitigate risks across the end-to-end processes (including design, development, production, integration, and deployment) that constitute the supply chains for IT networks, hardware, and software systems." (Boyson, 2014, p. 342)
- "[…] the application of policies, procedures, and controls (technical, formal, informal and management) to protect supply chain information assets (product, facilities, equipment, information, and personnel) from theft, loss, damage, interceptions or unauthorized access, use, disclosure, interruptions or disruption, modification or fabrication." (Sindhuja and Kunnathur, 2015, p. 483)

Connectivity-based clustering, or hierarchical clustering, is an algorithm based on the core idea of grouping objects that are more related to nearby objects (than to objects farther away) to build a hierarchical network (Tan et al., 2017). Cluster analysis was conducted to identify groups of entities based on their similarities and differences in the subject area.
An exploded view of the identified clusters is provided as an example in Figure 4. It can be observed that sub-areas having a close affinity to each other come together (circled in Figure 4 for clarity) following a hierarchical clustering approach. After studying all the clusters for patterns and the dendrograms for taxonomic relationships (an example is shown in Figure 5), different themes were identified for the data analysis. Furthermore, sub-categories for themes emerged during the iterative process of data screening and synthesis, and these were utilised for developing a 'theme-based' typology. A comprehensive list of the meta themes and associated sub-categories identified is shown in Figure 6. The two-fold reporting approach recommended by Tranfield et al. (2003) is adopted in this paper. Descriptive analysis will report an overview of the field of study.
Furthermore, a thematic analysis will report the findings in detail and help in drawing conclusions and future research avenues. Table I presents an overview of the SLR content in terms of the research methodology and different types of research design adopted for data collection and analysis.
Definitions
In the evolving definitions of what constitutes a cyber supply chain (Table II), we see a broadening of scope over time, from the earliest definition linking online activities undertaken by firms to later definitions spanning the actors, processes and technologies of the entire chain (Barlow and Li, 2007; Sindhuja and Kunnathur, 2015). What is notable is the consistent use of terms relating to value creation; Kim and Im (2014) see the cyber supply chain as 'an effective value chain'. In terms of supply chain risk, the same broadening of scope is seen over time, but early work is heavily focused on technology and exogenous threats; later definitions include awareness of endogenous threats such as 'interruptions or disruption, modification or fabrication' (ibid.). In Table II, we see cohesion on definitions of SCRM as being the application of various tools and a guiding process for endogenous and exogenous risks. Therefore, the study takes forward from these definitions that supply chain cyber security systems are an integrated alignment of processes involving the infrastructure network, IT system and organisation.
Research distribution
The work by Warren and Hutchinson (2000) can be seen as a milestone for the field and a key paper for this study; they report a survey that found approximately 60% of IT managers had no awareness of, or policy on, cyber security. Ironically, attacks in 2005 and 2006 on Homeland Security, the department tasked with keeping the USA secure, seem to have piqued academic interest in the latter half of this period. Looking at the trend in publications between 1997 and 2017, the first article that relates to cyber supply chains was only published in 2000; since then, academic research on cyber security has grown, particularly in the IT and computer engineering fields.
Geographic distribution
Approximately half of the selected papers originate from researchers based in either the USA or the UK (Figure 7); government institutions from both countries have raised the profile of cyber security through different initiatives aimed at promoting its importance among both practitioners and academics (see Luiijf et al., 2013). Keegan (2014) and Rongping and Yonggang (2014) claim that inducements and support from governmental bodies will be crucial for the progression of research in this field. Surprisingly, while countries like the USA or the UK developed their first national cyber security strategies long before 2010, European countries such as Germany, France or the Czech Republic did not present theirs until 2011. India has emerged as one of the leading low-cost destinations for outsourcing IT operations (Bahl et al., 2011); Luiijf et al. (2013) support the strength and economic ambition of India with regard to ICT systems and argue that Indian firms see cyber security as an opportunity for further economic growth.
Methodological distribution
The research methodologies can be separated into qualitative, quantitative and mixed approaches. Most of the research methods in this field are qualitative, whereas only a limited number of quantitative research designs have been identified. These findings support the initial claims made about the progression stage of the literature on the topic and are consistent with Creswell (2014), who posits that the prevalence of qualitative works in an academic field is an indicator of the immaturity of the field and a lack of consensus on key concepts. Maturity and relatively stable constructs are associated with more quantitative research designs (ibid.); by implication, research on the topic of cyber security in SCs is still at a nascent stage. In part, this unequal split reflects the multidisciplinary nature of the research topic. Research in IT-related fields is usually dominated by quantitative approaches, while qualitative modes are more prominent in the area of SCM (Ho et al., 2015). Qualitative and quantitative methodologies are not substitutes for each other, as they approach different aspects of the same reality (McCracken, 1988), but are simultaneously necessary to understand complexities in the research thoroughly. Only 12% of the sample for this SLR is purely quantitative; Charitoudi and Blyth (2014) propose that the lack of accessible quantitative cyber data critically limits researchers' ability to model supply chain cyber risks.
Thematic analysis
The thematic analysis combines the careful reading of the selected papers, as part of the data screening and synthesis stage, with the categories confirmed through the text mining approach.
Type of cyber risks
Extant literature has a variety of theoretical frameworks for the classification of different supply chain risks (e.g., Jüttner et al., 2003; Manuj and Mentzer, 2008; Ho et al., 2015). In an attempt to make sense of these new and unexplored risks, Gordon and Ford (2006) offered an early classification of cybercrime; drawing on the reviewed literature, this study classifies cyber risks into five categories: physical threats, breakdown, indirect attacks, direct attacks and insider threats.
Physical threats
The physical dimension includes tangibles such as switches, servers, routers and other ICT devices. According to Boyes (2015), the presence of physical and environmental risks seems to be ignored by many risk managers when talking about cyber risks. In this study, a few articles (e.g. Faisal et al., 2007; Smith et al., 2007; Charitoudi and Blyth, 2014; Tran et al., 2016; Urciuoli and Hintsa, 2017) acknowledge natural disasters as a critical driver for cyber risks; for example, a flood or a tornado may disrupt the functioning of servers, which then interferes with the seamless flow of the cyber supply chain network.
Meanwhile, Smith et al. (2007) and Urciuoli and Hintsa (2017) go one step further and add the deliberate damage or theft of physical infrastructure components to this physical risk category. Faisal et al. (2007) also consider terrorist attacks to be a part of the physical aspect of cyber risks. Risks that affect the functioning and security of a supply chain's physical assets are, paradoxically, cyber risks.
Breakdown
The perhaps humdrum risk of systems or resources breaking down through causes such as outdated firewalls and overdue security updates has only attracted attention in two articles (Boyes, 2015; Tran et al., 2016). While the least exotic cyber risk (e.g., website failure due to a peak in data traffic) cannot be ignored, such failures are easier to predict than natural disasters or intentional attacks; however, their potential consequences can be equally severe.
Indirect and direct attacks
The cyber risk of deliberate assaults falls into two categories: direct attacks and indirect attacks. The first category comprises acts such as hacking attacks (Deane et al., 2009; Khursheed et al., 2016; Sharma and Routroy, 2016; Boone, 2017), denial-of-service (Faisal et al., 2007; Deane et al., 2010) or password sniffing (Warren and Hutchinson, 2000) for financial gain. Several authors, for example Faisal et al. (2007) and Tran et al. (2016), include the risks of industrial espionage or compromises to intellectual property under direct attack.
In indirect attacks, the attackers lay out 'bait' which enables them to access the target system. Commonly discussed methods in the literature include viruses, worms and Trojans (Warren and Hutchinson, 2000; Faisal et al., 2007; Smith et al., 2007; Jones and Horowitz, 2012), counterfeit products, soft- and hardware (Urciuoli et al., 2013; Linton et al., 2014; Williams, 2014; Boyes, 2015), malicious code (Smith et al., 2007; Deane et al., 2010; Kunnathur, 2015) and spoofing attacks (Warren and Hutchinson, 2000; Smith et al., 2007). If employees accept the bait by, for example, visiting a website or downloading software, the attacker gains access to the system. Cyber-attacks that originate via phishing, i.e. gaining access to sensitive information by disguising the threat as a trustworthy entity, are on the rise (Verizon, 2018), and heightened cyber awareness is necessary to tackle such disguised attacks.
Insider threat
According to Kunnathur (2015), employees often represent the most significant risk to a company's cyber security. Internally, employees were found to be careless with password confidentiality (Stephens and Valverde, 2013), including writing passwords down for easy recall (Venter, 2014). Furthermore, absent-mindedly disclosing sensitive information while discussing with colleagues or others is identified as a risk that companies need to be aware of (Kunnathur, 2015). In connection with these acts of thoughtlessness, the literature also reports incidents in which employees consciously misuse or even sabotage a company's information. For example, opportunistic misuse of confidential data (Deane et al., 2009) or a premeditated personal vendetta against an employer (Sharma and Routroy, 2016). As the employee cyber threat is internal, whether deliberate or accidental, this is termed an insider threat.
Reporting on deliberately executed, maliciously motivated cyber-attacks (Urciuoli, 2010) should not be allowed to crowd out cyber supply risks resulting from merely careless employees (Urciuoli et al., 2013;Urciuoli et al., 2017). In both the negligent and premeditated mode, the human factor can pose the biggest and most unpredictable threat to a company's cyber security. Employees could act as insiders and support criminals in perpetuating their actions, or they could perpetrate a crime on their own, as they may have easy access to facilities or cargo (Urciuoli, 2010).
Points of penetration
To allocate security resources, organisations need to know the weak points of the supply chain network where these risks are most likely to penetrate (Smith et al., 2007); referred to as 'points of penetration' (PoP). Urciuoli et al. (2013) reported that 50% of malicious cyber-attacks target smaller organisations due to the lack of adequate protection measures installed in their information systems. SMEs might have a lower security capability, but their attack surface and visibility are also dramatically smaller (Caldwell, 2015). Data synthesis identifies three key 'failure points' where cyber risks emerge; PoPs are classified into technical, human and physical dimensions.

Technical PoPs

Smith et al. (2007) define the weakest link of a SC quite broadly by claiming all IT-related assets are prone to cyber risks, including systems, software, personnel and equipment. ICT systems and related resources may improve performance while also increasing technology risk (Xue et al., 2013). In particular, legacy (inherited) or outdated and poorly maintained systems attract wilful attacks. Outsourcing servers to save up-front capital costs reduces overall direct costs (Boyson, 2014), but the loss of control over security may increase long-term indirect costs dramatically.
Human PoPs
Most companies, as claimed by Sindhuja (2014), complacently assume that cyber security is only about technical security. In reality, technical cyber security solutions will have been grounded in security analysis; the same is often not the case with human involvement, the individuals who theoretically should be the first layer of protection. Boone (2017) argues that companies are only as secure as the most susceptible stakeholder in their supply networks. Urciuoli and Hintsa (2017) suggest that human resources could either willingly choose to harm their own company, pose a threat by accident, or be forced to collaborate with criminals by means of viruses, blackmailing, etc. Kim and Im (2014) found that internal human errors are likely to have severe consequences, but are also more challenging to identify than external events. Kunnathur (2015) builds on the importance of human PoPs, arguing that potential cyber aggressors are well aware of this vulnerability. Consequently, they suggest (ibid.) that future cyber risks, and especially intended attacks, are expected to exploit human PoPs rather than, as hitherto, focusing on the technical domain. This vulnerability is then intensified when SC employees interact with each other across organisational boundaries. Ill-secured inter-organisational supply chain connections between companies are a PoP for cyber risks, which may work as facilitators for the propagation of these risks.
Physical PoPs
Charitoudi and Blyth (2014) state that physical objects such as buildings, machines and other surroundings can also represent a PoP for cyber risks. In a recent study on cyber security in the food industry, Khursheed et al. (2016) report incidents in which obsolete firewalls and inadequate control mechanisms allowed attackers to gain remote access to production lines. In addition, physical infrastructures are always vulnerable to tangible risks such as natural disasters or physical attacks that impact cyber systems. However, as such disasters are naturally rare and unavoidable (Smith et al., 2007), companies like to perceive them as less of a concern for cyber safety (Sharma and Routroy, 2016).
Propagation zones
The consequences of cyber risks can be short to long term. While damage to servers will have noticeable effects immediately following their occurrence, others, for example, information leakage, can take years to recognise (Boone, 2017) or will never be disclosed.
Data theft is central to cybercrime (Urciuoli and Hintsa, 2017), which, to date, seems to have exempted communities from direct cyber-attack. The risk propagation model proposed here suggests supply chain risks are not static and propagate out from the centre of risk occurrence to other related areas with a 'cascading or ripple effect' (Ghadge et al., 2013; Dolgui et al., 2018). Therefore, it is likely that cyber risks will typically follow similar risk propagation patterns, as shown in Figure 9, which distinguishes primary consequences for the focal company, secondary consequences for the supply chain network and tertiary consequences for society.
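A minimal sketch of this cascading idea is given below: a risk enters at one point of penetration and spreads outward through supply chain links, with impact attenuating per tier. The network topology and attenuation factor are illustrative assumptions, not parameters drawn from the reviewed studies.

```python
from collections import deque

# Hypothetical adjacency list: focal firm -> partners -> wider society.
network = {
    "focal_firm": ["supplier_A", "logistics_provider"],
    "supplier_A": ["raw_material_vendor"],
    "logistics_provider": ["customer"],
    "raw_material_vendor": [],
    "customer": ["end_consumers"],
    "end_consumers": [],
}

def propagate(network, origin, impact=1.0, attenuation=0.5):
    """Breadth-first spread of impact outward from the point of penetration."""
    impacts, queue = {origin: impact}, deque([origin])
    while queue:
        node = queue.popleft()
        for neighbour in network[node]:
            if neighbour not in impacts:  # first (closest) hit dominates
                impacts[neighbour] = impacts[node] * attenuation
                queue.append(neighbour)
    return impacts

print(propagate(network, "focal_firm"))
# {'focal_firm': 1.0, 'supplier_A': 0.5, ..., 'end_consumers': 0.125}
```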
Primary propagation
As indicated by the PoP discussion, regardless of where a risk finds its way into a system, there is always a disruption to the company's operations. Risk propagation compromises the operation's continuity (Warren and Hutchinson, 2000; Boyson, 2014), productivity (Manzouri et al., 2013) and quality (Jones and Horowitz, 2012). Reports of cyber-attacks in Germany (Boyes, 2015) and Iran (Jones and Horowitz, 2012) describe damage to blast furnaces and centrifuges, respectively, threatening not just individual operations but the entire factory/output. A lone report on the consequences for employees (Manzouri et al., 2013) claims aggressor breaches of security systems discourage employees, particularly their willingness to continue working under such circumstances (echoing Reade's (2009) non-cyber finding in terror act environments). Except for the above, there appears to be limited discussion on primary consequences from cyber-attacks, and there is a lack of studies focusing on the consequences of such attacks, whether successful or not, for employees and organisational sustainability. This theme has exposed a strong tendency towards a binary approach based on the success or failure of an attack/cyber risk episode; thus, more studies are needed on the impacts and on how processes and people respond to cyber-attacks.
Secondary propagation
Supply chain relationships facilitate information sharing, including detrimental information like cyber breaches. Several authors claim that reputational damage resulting from a cyber-attack discourages further collaboration with existing and prospective SC partners (Urciuoli et al., 2013; Charitoudi and Blyth, 2014). Post-supply-chain cyber-attack, authors highlight the potential unavailability of information, services or products for further use (e.g., Warren and Hutchinson, 2000; Charitoudi and Blyth, 2014). Interconnected systems and machinery will be affected, leading to unsatisfied customer requirements and loss of sales and profit. Losses will include near-time opportunity costs, but also potential longer-term reputational damage. Breaches of confidential information (such as supplier databases, contracts and payment details) could have major implications for the supply chain network. In spite of increased security in data storage platforms, data breaches are a regular occurrence; thus, there is a need for robust cyber security measures to protect cyber-physical systems.
Tertiary propagation
A study in the automotive industry found that hostile malware can corrupt the braking system of a car in a way that could not be detected by the manufacturer (Jones and Horowitz, 2012). Thus, individuals in the wider society face the initial brunt of this supply chain cyber-attack. According to Urciuoli and Hintsa (2013), the consequences of SC cyber-attacks for a community or society could be more serious, if criminals attack supply chains relevant to public health, e.g., food or pharmaceutical chains.
There is also a dynamic behaviour to cyber-attack consequences; as defences improve, the attacks move elsewhere. In two articles, Urciuoli and Hintsa (2013; 2017) explain that criminals can, for now, steal valuable cyber data - such as loading lists and transportation schedules - to plan and execute traditional non-cyber [theft] crimes with relative impunity. It is evident that cyber risks directly impact organisations' profit margins, market capitalisation and brand image (Mukhopadhyay et al., 2013), along with indirectly impacting wider businesses and society.
Inter-organizational collaboration
In traditional supply chains, two parties might share some information and, very occasionally, the same IT platform. The risk is amplified when cyber supply chains and order management systems link multiple supply parties together or share data on outsourced (e.g. cloud) platforms. A lack of accepted standards and guidelines is hindering the development of robust cyber defences (Boyson, 2014; Davis, 2015). Authors argue that supply chain partners must be more transparent with each other on security and should combine security resources and know-how to deal with increasingly sophisticated cyber risks (Rongping and Yonggang, 2014). The propagation of cyber consequences means companies cannot afford to focus only on their own security systems and must also be aware of their partners' security conditions (Deane et al., 2010). Supply chain collaboration based on open, honest and trust-based relationships is needed to deal effectively with supply chain cyber-related risks (Tran et al., 2016). Smith et al. (2007) recommend that SC integration, by aligning systems and processes, will yield better returns through standardised ways of working, shared security objectives and better general communication (see conceptual model, Figure 10). Bandyopadhyay et al. (2010) argue that higher levels of integration and collaboration reduce free-riding behaviour when considering investment in cyber security.
Employee knowledge
One of the stand-out findings from this SLR is the important role played by employees as the front-line of cyber security in SCs. Although the most visible layer of security to outsiders, it is challenging to hire cyber-security-trained and skilled resources given the complex, emergent and technological demands of SC security (Xue et al., 2013;Venter, 2014;Khursheed et al., 2016). So far, cyber threats have outpaced training and study initiatives. Ideally, such staff members are proactive employees in contact with cyber applications who need to know not only how to operate the systems, but also how to react in cases of attack. Khursheed et al. (2016) describe the ideal situation in which highly skilled employees are not only cyber risk reactive, but also have the skill-set to pre-empt cyber PoP risks.
Continuous commitment
The ecosystems in which cyber SCs operate are constantly evolving (Kim and Im, 2014), compounded by the different geopolitical situations, regulatory frameworks and corporate and national cultures that merge in one supply chain. Cyber risk management is not only about protecting data, but also about maintaining the privacy, trust and safety of the stakeholders involved in the business network. Hackers and other potential invaders, on the other hand, have no such encumbrances and, with the advantage of agility, can invest in staying ahead of the curve, thriving on awareness of cyber trends and new technologies (Boyes, 2015) in order to create novel and ever more sophisticated and unpredictable cyber-crimes.
These two issues of timeframe and level of focus build upon a theme found in the cyber supply chain literature: the disconnection between standard business practices and the requirement for a continuous commitment to cyber security. According to Linkov et al. (2013), many of the risks that have struck companies only manifest after months or even years; however, these manifestations exceed the attention (and job) span of most managers, who are driven by short- and medium-dated performance objectives (Urciuoli and Hintsa, 2017). Boone (2017) goes beyond timing and performance to argue that it is not merely a commitment to cyber security issues which is missing, but also responsibility and ownership. The introduction and maintenance of appropriate cyber security systems cannot be a one-person show; they require the contribution and commitment, over time, of many departments and much expertise.
Governmental involvement
Traditionally, governments have focused their interest on the security of military and national intelligence agencies (Keegan, 2014); however, they now have to include the security of supply chains that are significant contributors to their economies. More than 50 countries have issued national cyber security strategies with defined objectives (Rongping and Yonggang, 2014). The European Union regularly updates its EU Cybersecurity Strategy. The growing complexity of cyber SCs makes it impossible for individual companies acting alone to promote and coordinate holistic security efforts. Hence, Keegan (2014) claims governments have to sponsor and guide cyber security projects and create forums which allow for more accessible communication and planning of strategies to manage cyber risks.
Measures for mitigation
This section identifies measures to mitigate cyber risks from the extant literature. Risk mitigation typically depends on the type of cyber-attack, the sophistication of the attack and the resilience of the organisation (Amin et al., 2017). While some of the proposed countermeasures may look familiar from traditional SCRM studies (e.g., supplier audits and information sharing), others focus on cyberspace more explicitly and are, therefore, new to the literature. Building on the scope of cyber risks identified here, the study rejects a conventional proactive and reactive risk mitigation classification and instead proposes a time-phase classification of cyber-attack mitigation measures.
In their efforts to model a system-aware cyber security architecture, Jones and Horowitz (2012) differentiate between three phases of a cyber-attack, namely pre-, trans- and post-attack. This time-phase structure is adopted in this study to apply a wider analytical lens to the stages of, and countermeasures for, a cyber-attack. Table III classifies cyber risk mitigation measures into the pre-, trans- and post-attack stages. Pre-attack countermeasures can be divided between those aimed at the technical level and those which are either directed at or carried out by human factors. Firstly, technical countermeasures include aspects such as firewalls and passwords (access control) or the diversification of soft- and hardware, and are frequently discussed in the literature as they form the most fundamental layer of protection. They specify the level of system accessibility (Kunnathur, 2015) and are designed to make aggression less attractive to attackers (Al Kattan et al., 2009). However, many authors argue that such technical countermeasures only provide a partial solution and, therefore, need to be complemented by actions that are directed at the backbone of every supply chain, i.e., the personnel (e.g., Smith et al., 2007; Boyson, 2014; Boyes, 2015).
The implementation of automated IT operations has allowed companies to employ fewer staff (Urciuoli et al., 2013). In addition, some argue that the few remaining IT staff are then over-challenged and have little time for security awareness (Sindhuja, 2014; Venter, 2014; Kunnathur, 2015), holistic understanding of systems (Faisal et al., 2007; Urciuoli and Hintsa, 2017) and commitment (Tran et al., 2016; Boone, 2017).
To nurture the capabilities of their employees and prepare them for the new challenges of cyber chains, risk awareness initiatives and training are among the most cited countermeasures in the literature (Table III).
Table III also lists trans-attack phase measures, such as data consistency checks (Jones and Horowitz, 2012) and a dedicated task force (Davis, 2015), as well as post-attack phase measures.

A number of published security standards address cyber security issues (Bartol, 2014). The adherence to these standards can serve as a base for a standard set of terminology and understanding of key security concepts (Davis, 2015), but also as a guideline to desired security objectives (Kunnathur, 2015). Nevertheless, from a SC perspective, the implementation of these standards has often been criticised for various reasons. Kunnathur (2015) argues that current standards are designed for independent companies; although there is a strong need for standardised inter-organisational practices, this is lacking, as evidenced by the variety of accrediting bodies/organisations (ibid.). Keegan (2014) and Davis (2015) argue that, due to the number of entities in most supply chains, successful implementation of inter-organisational standards is only replicable at the level of direct supply (Tier 1 suppliers), but cannot extend further up the supply chain network.
Hence, a focal company spending resources on accreditation against these standards cannot ensure that the entire SC will follow its example. Venter (2014) is particularly critical of the standards, stating that some of the proposed methods are not feasible or are simply bad practice. Another criticism is the common misconception that ISO standards do not have an expiration date (Al-Najjar and Jawad, 2011). This makes companies believe that once they have acquired accreditation, they will always meet the required standards. Consequently, companies which have acquired a certificate often assume they do not have to improve their processes continuously, thus risking complacency.
Another countermeasure which is frequently examined in the literature but still requires thorough evaluation is information sharing. As stated in Table IV, many authors consider information sharing a promising way to cope with cyber risks, because it allows for intra- and inter-organisational communication and processing of risk-relevant data. The enforcement of the General Data Protection Regulation (GDPR) in May 2018 is likely to standardise information sharing to protect against breaches of individual and business rights and freedoms (National Cyber Security Centre, UK, 2018). Paradoxically, many scholars claim that information sharing is one of the most severe threats to cyberspaces. This is due to the level of support required to handle large volumes of highly sensitive information, without which human errors increase (Smith et al., 2007; Deane et al., 2009; Kim and Im, 2014).
Nevertheless, as Tran et al. (2016) found in a series of interviews, many companies do not perceive potential 'information leakage' as a security risk. It is critical that employees frequently change their passwords and do not share passwords with others to avoid information leakage.
Most of the risks discussed in the literature can be attributed to the pre-attack phase; few articles address countermeasures for the subsequent phases (trans-attack and post-attack). To address this imbalance, more work is needed on both the proactive mitigation of cyber risks and reactive mitigation strategies. 'Cyber-insurance' is one prominent mitigating measure for the post-attack stage. Cyber insurance dates from projections for Y2K-related crashes but has burgeoned due to the increase in virtual events and their impact on businesses (Camillo, 2017). The growth of Industry 4.0 is likely to be regulated by similar insurance policies. It may be impossible to design the perfect cyber security system that can deter all risks; therefore, it is expedient to have a diverse set of countermeasures at hand, covering different risk attack scenarios and contingencies.
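To illustrate how the time-phase classification might be operationalised in practice, the sketch below encodes a handful of the countermeasures named in this section against their phases; the selection is a small illustrative subset, not a complete rendering of Table III.

```python
from enum import Enum

class Phase(Enum):
    PRE = "pre-attack"
    TRANS = "trans-attack"
    POST = "post-attack"

# Example assignments drawn from the discussion above (illustrative subset).
COUNTERMEASURES = {
    "firewalls and access control": Phase.PRE,
    "risk awareness training": Phase.PRE,
    "data consistency checks": Phase.TRANS,
    "dedicated task force": Phase.TRANS,
    "cyber-insurance": Phase.POST,
}

def measures_for(phase):
    return [m for m, p in COUNTERMEASURES.items() if p is phase]

print(measures_for(Phase.POST))  # ['cyber-insurance']
```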
Conclusion
At its core, supply chain management is a discipline of connectedness, integrating the activities and processes of diverse organisations into effectively functioning networks. But with supply chain integration come dependencies, some purely commercial, but many arising from integrating IT systems to exchange data/information, giving rise to supply chain cyber risk. This study defines supply chain cyber risk as accidental or deliberate IT events that threaten the integrity of a supply chain's infrastructure, leading to cascading disruptions. Similar to conventional supply chain risks, cyber risk has impacts in terms of financial losses, delays and loss of customer service in the short term, and in terms of market value and brand reputation in the long term.
A SLR on the nascent area of cyber risks in supply chains was conducted applying a rigorous, transparent and replicable methodology. The study addressed the research question: How can organisations manage cyber risks in supply chains? Text mining was followed by connectivity-based clustering to identify and verify the core themes (Figure 6) that guide and inform the analysis. Five meta themes were selected: cyber risk types, cyber risk propagation, cyber risk points of penetration, cyber security challenges and mitigation measures.
Under cyber risk types, the study classifies cyber risks into five categories: physical threats, breakdown, indirect attacks, direct attacks and insider threats. Cyber risk propagation zones were identified (primary, secondary and tertiary), drawing on previous work which suggests supply chain risks are not static and follow the 'risk propagation' phenomenon (Ghadge et al., 2013; Garvey et al., 2015). The third meta theme identifies three key failure points where cyber risks are likeliest to emerge; the study classifies these 'points of penetration' (PoPs) into technical, human and physical dimensions. Four critical challenges for an organisation trying to manage supply chain cyber risks are recognised: inter-organisational collaboration, employee knowledge, continuous commitment and the need for government-level involvement. The fifth and final meta theme is measures for mitigation. Although carry-over measures from traditional risk mitigation work are identified in the literature, the study rejects a conventional proactive and reactive risk mitigation classification and instead adopts a time-phase classification; see Table III for the classification of cyber risk mitigation measures into pre-, trans- and post-attack stages.
While indirect and direct attacks (i.e., viruses, hacker attacks, spoofing attacks) are undoubtedly the most commonly discussed types of attack, the study found that the increasing integration and complexity of cyber SCs facilitates the occurrence of unintentional cyber risk events, such as the underperformance of a critical cyber system or an unintended human error. With the latter, the employee could potentially be anywhere in the interconnected SC, adding to unpredictability and compounding consequences. For capturing these consequences, this study used a risk propagation approach and depicted how cyber risks occurring at one point of penetration spread to other linked entities, driven by SC inter-connectivity.
Conceptual model
This study finds that companies need to implement the identified control measures holistically at the SC level to create an extensive supply chain cyber security system that builds upon elements from both IT and organisational security systems. To address this need, and building on the finding that cyber supply chain risks can emerge from different sources, the study proposes a 'supply chain cyber security system' as a unifying conceptual model (Figure 10). These sources are identified as associated with either the IT (e.g., a direct or indirect attack), organisational (e.g., insider threat) or supply chain (e.g., physical threat) systems. Thus, all three diverse elements, namely the IT system, organisation process and supply chain security system (which includes the process and infrastructure network), must be aligned to manage cyber risk in supply chains. Each of these three can then be linked to specific PoP weak points at the technical, human and physical levels. Thus, IT security systems can counter cyber threats by buying hardware, the latest technology and secure software platforms. The organisational security system mitigates cyber-attacks by securing physical assets, adhering to set guidelines and raising awareness among employees. Information sharing, collaborative risk management and adaptability are found to be key strategies for supply chain security. This interlinked relationship between the different (sub)systems (shown as overlapping circles in Figure 10) and the distinct mitigation strategies (shown in the triangles) is critical for managing cyber risk in supply chains.
Coordination of these security systems, joint information sharing and applying appropriate mitigation strategies can effectively manage cyber risk in supply chains. This integrated model, shown in Figure 10, is termed a Supply Chain Cyber Security System.

Figure 10. A conceptual model for the Supply Chain Cyber Security System

The conceptual model shows that IT, organisation and supply chain security systems are interlinked, and closer collaboration is essential for the successful implementation of cyber risk mitigation strategies (Stephens and Valverde, 2013; Hamlen and Thuraisingham, 2013; Urciuoli et al., 2013; Bartol, 2014). These interdisciplinary security systems should be coordinated to standardise and implement agreed cyber security strategies for supply chains and wider networks. Alignment of responsibilities and managing conflicting policies/regulations in each system is a challenging problem to handle. There is, however, the age-old adage that a chain is only as strong as its weakest link; hence our model's focus on the integration of the IT system, organisation and supply chain (including process and network infrastructure) security systems.
A research agenda for managing cyber risk in supply chains
A literature review is expected to identify critical knowledge gaps along with the development of new models, propositions or theories (Webster and Watson, 2002). The main avenues for future research that emerged from this review are now presented. Recent research has suggested several dimensions that have a substantial influence on a SC's vulnerability to cyber risk. These include different network configurations (Bandyopadhyay et al., 2010; Zhang et al., 2012), firm sizes (Tran et al., 2016), corporate cultures (Xue et al., 2013), industry sectors (Sharma and Routroy, 2016; Tran et al., 2016) and business principles (Durowoju et al., 2012; Charitoudi and Blyth, 2014). This research found that most studies take a generic perspective; therefore, this study pinpoints the need for contextualised studies that address such dimensions in depth to relate specific cyber risks to specific dimensions. Similarly, an array of mitigation measures against cyber risks has been identified; however, there is little evidence of specific mitigation measures being empirically tested. So, to make mitigation decisions useful, and for clarity on when and where responses work best, strategies are identified and separated into three phases, namely pre-, trans- and post-attack. Adopting this approach reveals that there is a lack of research on developing tailored measures for cyber security threats. In addition to highly context-specific studies, large-scale data-driven research is necessary, which can then be utilised to test hypotheses and models (Barlow and Li, 2007; Kunnathur, 2015).
Empirical research on building robust cyber security models utilising modern big data analytics tools and techniques is also required to inform and fuel the next generation of research in this field.
It is evident from this SLR that human/behavioural factors play a vital role in cyber security, and yet they have been neglected in favour of studying more technical factors such as data, applications and networks. In cyberspace, employees are a major point of penetration (PoP), yet technologically empowered employees manage developments such as IoT, blockchain and decentralised distribution (omnichannel retailing) with little awareness of or training in data security. The incriminating role of human interactions has widely been ignored (Kunnathur, 2015). A variety of supply chain stakeholders can sabotage, either deliberately or unwittingly, even the most sophisticated security systems. However, this study also detects a related lack of research on the impact of cyber risk on employees (and, by extension, their employing organisation). This is very much an under-explored area (Manzouri et al., 2013), which will become of increasing interest to employees, employers and society.
Implications for research and practice
To identify relevant literature of an appropriate quality and quantity, the SLR had to extend beyond articles in the operations, logistics and supply chain area. Following a replicable and reiterative screening and synthesis process, the scope of this study was still limited to 41 independently verified interdisciplinary papers published between 1990 and 2017.
Complementary cluster analysis following a data-mining approach provided support for transparency and rigour in conducting what is believed to be the first SLR on cyber risk in supply chains.
The paper provides the following implications for research and practice. The negative consequences of cyber security disruptions could impact not only individual firms or SCs, but entire globally-connected communities. The limited set of papers available for this study suggests that little academic attention has addressed this field compared to other topics/technologies interfacing with supply chain management such as the Internet of Things (IoT), Blockchain, digitalisation, autonomous transportation and virtual reality.
Interestingly, all these disruptive technologies are vulnerable to cyber risks due to the rapid transformation of supply chains following the Industry 4.0 revolution. Supply chain integration and digitalisation go hand in hand. Recently Gartner (2018) predicted that there would be 14.2 billion devices connected worldwide by 2019. Consequently, it is vital to raise awareness of cyber security risks in supply chains and help both practitioners and academics manage future disruptive cyber risks.
There is an increased misuse of cyber-physical systems for counterfeits, forgeries, data theft, trafficking, attacks on transportation infrastructure, ransomware attacks and crypto-jacking. Such cyber activities significantly impact multiple stakeholders with clear implications for a broader ecosystem. How will businesses, governments and society react to profound and frequent cyber-attacks? This is perhaps the most fundamental cyber risk-related line of questioning, as the answers will dictate the speed and level of investment in cyber security worldwide.
Metformin Alters mRNA Expression of FOXP3, RORC, and TBX21 and Modulates Gut Microbiota in COVID-19 Patients with Type 2 Diabetes
COVID-19 remains a significant global concern, particularly for individuals with type 2 diabetes who face an elevated risk of hospitalization and mortality. Metformin, a primary treatment for type 2 diabetes, demonstrates promising pleiotropic properties that may substantially mitigate disease severity and expedite recovery. Our study of the gut microbiota and the mRNA expression of pro-inflammatory and anti-inflammatory T-lymphocyte subpopulations showed that metformin increases bacterial diversity while modulating gene expression related to T-lymphocytes. This study found that people who did not take metformin had a downregulated expression of FOXP3 by 6.62-fold, upregulated expression of RORC by 29.0-fold, and upregulated TBX21 by 1.78-fold, compared to the control group. On the other hand, patients taking metformin showed a 1.96-fold upregulation in FOXP3 expression compared to the control group, along with a 1.84-fold downregulation in RORC expression and an 11.4-fold downregulation in TBX21 expression. Additionally, we found correlations between gut microbiota measures (the F/B ratio and alpha-diversity indices) and pro-inflammatory biomarkers. This novel observation of metformin's impact on T-cells and gut microbiota opens new horizons for further exploration through clinical trials to validate and confirm our data. The potential of metformin to modulate immune responses and enhance gut microbiota diversity suggests a promising avenue for therapeutic interventions in individuals with type 2 diabetes facing an increased risk of severe outcomes from COVID-19.
Introduction
Over four years have elapsed since the emergence of the new Severe Acute Respiratory Syndrome Coronavirus 2 (SARS-CoV-2) virus, impacting over 700 million people globally [1,2]. Despite the World Health Organization officially ending the pandemic status in 2023, the emergence of new variants such as Pirola and Eris continues to pose a serious threat [3]. Numerous studies have identified type 2 diabetes (T2D) as a significant risk factor for severe Coronavirus disease 2019 (COVID-19), often resulting in hospitalization or death [4]. Elevated levels of hyperglycemia and glycemic fluctuations may adversely affect COVID-19 outcomes, with some studies suggesting that insulin treatment could contribute to increased mortality in COVID-19 patients with T2D [5,6]. Additionally, T2D may impair nasal immunity, increasing the risk of hyposmia in individuals with mild COVID-19 pneumonia [7].
Metformin, a widely used oral antidiabetic medication, exhibits pleiotropic effects beyond glycemic control [8,9]. One potential mechanism for its beneficial effects is its anti-inflammatory properties [10][11][12][13]. Metformin may influence the immunometabolism of lymphocytes, impacting the differentiation and balance between pro-inflammatory T helper 1 (Th1) and T helper 17 (Th17) cells, as well as anti-inflammatory regulatory T cells (Treg) [14]. This influence has the potential to reduce inflammation and the risk of cytokine storms in COVID-19 patients with T2D [15][16][17][18]. The mechanism is achieved through the activation of AMP-activated protein kinase (AMPK), the master sensor and regulator of cellular energy metabolism in mammals, which inhibits T cell differentiation by suppressing the mammalian target of rapamycin (mTOR) and Glucose transporter 1 (GLUT1) [14,16,17,19]. This inhibition leads to a decrease in glucose uptake and glycolysis, limiting the energy supply available for T cell activation and proliferation.
In addition to AMPK, there are three key genes involved in the transcriptional regulation of Treg, Th1, and Th17 differentiation. These genes include FOXP3, TBX21, and RORC. FOXP3 (FOXP3 gene) is a transcription factor essential for the development and function of Tregs, which play a crucial role in maintaining immune homeostasis and suppressing excessive immune responses [20]. T-bet (TBX21 gene) is a transcription factor that promotes the differentiation of Th1 cells, which are involved in cell-mediated immune responses and the clearance of intracellular pathogens [21]. RORγt (RORC gene) is a transcription factor that drives the differentiation of Th17 cells, which are important for the defense against extracellular pathogens and the regulation of autoimmune responses [22]. Together, these three genes play critical roles in the regulation of immune responses and maintaining the balance between immune activation and tolerance.
The balance between Th1, Th2, Th17, and Treg cells is crucial for maintaining immune homeostasis and preventing excessive inflammation. The dysregulation of these cells has been implicated in various autoimmune diseases and chronic inflammatory conditions. In the context of COVID-19, an imbalance in the activation of these immune cell subsets can contribute to the severity of the disease. For example, an exaggerated Th1 response may lead to excessive inflammation and tissue damage, while an impaired Th17 response could compromise the clearance of extracellular pathogens such as the SARS-CoV-2 virus [23]. Additionally, a dysregulated Th2 response may result in a reduced ability to produce antibodies and mount an effective immune response against the virus. On the other hand, regulatory Tregs are essential for maintaining immune tolerance and preventing excessive immune activation [24]. Therefore, understanding and modulating the balance of Th1, Th2, and Th17 cells could be a key factor in developing therapeutic strategies for COVID-19, with the aim of reducing inflammation and promoting a more balanced immune response.
The composition of the gut microbiota undergoes alterations in individuals with both T2D and COVID-19. The gut microbiota, including bacteria and their metabolites, plays a role in influencing inflammation in the lungs through the bidirectional communication pathway known as the gut-lung axis [25]. An essential consideration is therefore understanding how the most common oral hypoglycemic agent affects gut microbiota and inflammatory markers in patients with both COVID-19 and T2D. Metformin, the most common oral hypoglycemic agent, has been found to have beneficial effects on gut microbiota and inflammatory markers in these patients. Studies have shown that metformin can promote the growth of beneficial bacteria, such as Akkermansia muciniphila, and reduce the levels of pro-inflammatory cytokines in the gut [26]. This dual effect may contribute to improved outcomes in COVID-19 patients with T2D, as it could potentially reduce lung and systemic inflammation.
This study aims to investigate how the commonly used oral hypoglycemic agent metformin influences gut microbiota and inflammatory markers in individuals with both COVID-19 and T2D.
Sample Collection
Blood samples were collected from two groups of patients: first, COVID-19 patients with T2D and metformin treatment (metformin-treated group); and second, COVID-19 patients with T2D without metformin treatment (non-metformin-treated group). The samples were collected at the Transcarpathian Regional Infectious Hospital. All participants provided informed consent by signing a statement. This study adhered to the principles of the Declaration of Helsinki and received approval from the Ethics Committee of I. Horbachevsky Ternopil National Medical University (protocol code 74, dated 1 September 2023). The blood was collected in EDTA tubes and stored at −80 °C until further use (Figure 1).
Figure 1. Study flow diagram with information on the methods used. We used two types of biological material from patients with COVID-19: stool samples and blood. The microbiota was studied using a culture-based method (to calculate alpha-diversity indices) and molecular genetic testing (to calculate the Firmicutes/Bacteroidetes (F/B) ratio). The determination of the relative normalized expression of the studied genes was carried out by the method of polymerase chain reaction with reverse transcription.
All participants tested positive for SARS-CoV-2, and patients with type 2 diabetes (T2D) were diagnosed based on the criteria set by the American Diabetes Association. Inclusion criteria for all groups were as follows: age between 25 and 75 years, no history of other chronic diseases, and no use of antibiotics or probiotics in the past 3 months. Exclusion criteria included pregnancy, lactation, a history of inflammatory bowel disease, and other gastrointestinal disorders. Participants were requested to provide a single stool and blood specimen concurrently. Patients taking metformin were administered a dose of 1000-1500 mg per day for at least 3 months before admission.
Clinical Data
The medical records of patients were reviewed to obtain clinical data such as NLR (Neutrophil-to-Lymphocyte ratio), CRP (C-Reactive Protein), Procalcitonin (PCT), and monocytes.
RNA Extraction and cDNA Synthesis
Total RNA was extracted from the collected blood samples using a standard protocol with NucleoZOL (740404.200, Düren, Germany). The extracted RNA was dissolved in RNase-free water to obtain a concentration of 2 µg/µL. cDNA synthesis was performed using a RevertAid First Strand cDNA Synthesis Kit (K1621, Vilnius, Lithuania) according to the manufacturer's instructions.
Real-Time PCR Amplification
A Bio-Rad CFX96 Real-Time PCR Detection System (185-5096, Bio-Rad, USA) was used to measure the expression levels of three genes: FOXP3, RORC, and TBX21. Maxima SYBR Green/ROX qPCR Master Mix (2X) (K0221, Thermo Scientific) and gene-specific primers were used for the amplification. The reaction mix had 20 µL of nuclease-free water, 0.5 µL of each gene-specific primer, 2 µL of cDNA template, and 10 µL of 2X Maxima SYBR Green/ROX qPCR Master Mix. The PCR cycling conditions involved initial denaturation at 95 °C for 10 min, followed by 45 cycles of denaturation at 95 °C for 15 s, primer annealing at 60 °C for 40 s, and elongation at 72 °C for 40 s. The specificity of the amplified products was confirmed by melting curve analysis.
The Glyceraldehyde 3-phosphate dehydrogenase (GAPDH) gene was used as the reference gene to normalize the expression levels of the target genes. The expression levels of the target genes (FOXP3, RORγt (RORC), T-bet (TBX21)) were quantified relative to the expression of the housekeeping gene using the comparative Ct (2^−ΔΔCt) method. The Ct values were converted to relative expression values using a formula that compares the target gene's Ct value to the housekeeping gene's Ct value. The relative expression values were then converted to Log2 values using the formula Log2(relative expression). To calculate the Th1/Treg and Th17/Treg ratios, we divided the relative normalized mRNA expression of the gene RORγt (or T-bet) by the expression of FOXP3.
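As a minimal sketch of this comparative Ct workflow (the Ct values below are hypothetical, not data from this study), the following computes relative normalized expression, its Log2 transform, and a Th17/Treg-style ratio:

```python
import math

def rel_expression(ct_target, ct_gapdh, ct_target_ctrl, ct_gapdh_ctrl):
    """Relative normalized expression by the comparative Ct (2^-ddCt) method:
    dCt = Ct(target) - Ct(GAPDH); ddCt = dCt(sample) - dCt(control)."""
    dd_ct = (ct_target - ct_gapdh) - (ct_target_ctrl - ct_gapdh_ctrl)
    return 2 ** (-dd_ct)

# Hypothetical Ct values for one patient vs. the control group.
foxp3 = rel_expression(26.1, 18.0, 27.5, 18.4)   # FOXP3 (Treg)
rorc = rel_expression(25.0, 18.0, 24.6, 18.4)    # RORC (Th17)

log2_foxp3 = math.log2(foxp3)   # Log2(relative expression)
th17_treg = rorc / foxp3        # RORC expression divided by FOXP3 expression
print(f"FOXP3: {foxp3:.2f}, Log2: {log2_foxp3:.2f}, Th17/Treg: {th17_treg:.2f}")
```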
Firmicutes/Bacteroidetes Ratio
Participants were asked to provide a single stool sample, gathered in a sterile container and promptly frozen at −80 °C for subsequent procedures. Using DNAzol according to the manufacturer's guidelines, DNA was extracted from a 100 mg portion of the frozen specimens [2]. The obtained DNA was dissolved in 200 µL of elution buffer, achieving the final DNA concentration for each sample. The 16S rRNA gene was amplified through PCR using the CFX96 Touch Real-Time PCR Detection System (185-5096, Bio-Rad, USA). The thermal cycling parameters were initiated with a 5 min denaturation at 95 °C, followed by 30 cycles involving 15 s at 95 °C for denaturation, 15 s at 61.5 °C for annealing, and 30 s at 72 °C for extension. A final extension step was performed at 72 °C for 5 min. Each PCR reaction included 0.05 units/µL of Taq polymerase (9012-90-2, Sigma Aldrich), 0.2 mM of each dNTP, 0.4 µM of each primer, 1x buffer, around 10 ng of DNA, and water to reach a final volume of 25 µL. All samples were subjected to triplicate amplification using the specified primer pairs. The thermal cycler recorded the threshold cycles (Cts) for general and specific primers.
In order to assess the makeup of gut microbiota at the phylum level, we utilized quantitative real-time PCR (qRT-PCR) with universal primers designed for the bacterial 16S rRNA gene, as well as specific primers that target Firmicutes and Bacteroidetes. The threshold cycles (Cts) obtained from qRT-PCR were used to calculate the proportion of taxon-specific 16S rRNA gene copies in each sample using the formula X = (EffUniversal)^(Ct Universal) / (EffSpecific)^(Ct Specific) × 100 [27]. The efficiency of the universal primers (EffUniversal) was quantified such that a value of 2 indicates 100% efficiency and a value of 1 indicates 0% efficiency; the efficiency of the taxon-specific primers was denoted EffSpecific. The PCR amplification efficiency was assessed by performing a series of dilutions, and the fluorescence emitted by the SYBR Green dye was utilized to detect the amplified products. If the taxon-specific copy number in a sample was not detectable, it was assigned a value of 0. X represents the proportion of taxon-specific 16S rRNA gene copies in a given sample.
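The taxon-share formula above can be written as a one-line function; the primer efficiencies and Ct values below are invented purely for illustration:

```python
def taxon_share(eff_universal, ct_universal, eff_specific, ct_specific):
    """Percentage of taxon-specific 16S rRNA gene copies in a sample:
    X = Eff_univ**Ct_univ / Eff_spec**Ct_spec * 100 (efficiency 2 = 100%)."""
    return eff_universal ** ct_universal / eff_specific ** ct_specific * 100

# Hypothetical efficiencies and threshold cycles for one stool sample.
firmicutes = taxon_share(1.95, 18.2, 1.93, 22.5)
bacteroidetes = taxon_share(1.95, 18.2, 1.90, 23.1)
print(f"Firmicutes {firmicutes:.1f}%, Bacteroidetes {bacteroidetes:.1f}%, "
      f"F/B = {firmicutes / bacteroidetes:.2f}")
```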
Assessment of the gut microbial composition at the level of major phyla was carried out using quantitative real-time PCR (qRT-PCR) with universal primers targeting the bacterial 16S rRNA gene and specific primers for Firmicutes and Bacteroidetes (Table 1). For the study of gut microbiota, a 1.0 g feces sample was obtained, and 9 mL of isotonic (0.9%) sodium chloride solution was put into a test tube. Through thorough mixing, a homogeneous mass was achieved, establishing a 10^−1 dilution. Successive dilutions from 10^−2 to 10^−11 were prepared similarly. With clean micropipettes, 10 µL was taken from each dilution and put on nutrient media to separate certain microorganisms. Different types of commercial nutrient media made it easier to separate enterobacteria, yeast (Candida spp.), Clostridium spp., Lactobacillus spp., Bifidobacterium spp., and Bacteroides spp. Microorganism identification followed the guidelines outlined in the Clinical Microbiology Procedures Handbook, Volume 1-3, 4th Edition [28]. Decimal logarithms (lg CFU/g) were adopted for ease of presentation and subsequent mathematical and statistical processing of colony growth indicators.
To assess the alpha-diversity of the gut microbiota, the Shannon H′ and inverse Simpson 1/D indices were employed. The Shannon H′ index was computed using the formula H′ = −∑ pi ln(pi), where pi denotes the proportion of individuals associated with each genus in the gut microbiota. The Simpson index was determined as D = ∑ pi², with pi again representing the proportion of individuals attributed to each genus, and its reciprocal 1/D was used as the diversity measure. For diversity calculations, Abundance Curve Calculator was utilized.
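A short sketch of both alpha-diversity indices, assuming genus-level abundance counts as input (the numbers are hypothetical):

```python
import math

def alpha_diversity(counts):
    """Return (Shannon H', inverse Simpson 1/D) for a list of genus counts.
    H' = -sum(pi * ln(pi)); D = sum(pi^2), reported as its reciprocal 1/D."""
    total = sum(counts)
    p = [c / total for c in counts if c > 0]
    shannon = -sum(pi * math.log(pi) for pi in p)
    simpson_d = sum(pi ** 2 for pi in p)
    return shannon, 1.0 / simpson_d

# Hypothetical genus-level abundances for one sample.
h, inv_d = alpha_diversity([120, 80, 40, 25, 10, 5])
print(f"Shannon H' = {h:.2f}, inverse Simpson 1/D = {inv_d:.2f}")
```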
Statistical Analysis
The data were analyzed using the statistical software R (version 4.3.1) and its associated packages, including tidyverse (version 1.3.0) and ggplot2 (version 3.4.4) for data manipulation and visualization. Additionally, the analysis was performed using GraphPad Prism (version 9) for graphical representation of the results of the Th17/Treg and Th1/Treg ratios.
Descriptive statistics including mean and standard deviation (SD) were calculated for all variables. The normality of the data distribution was tested using the Shapiro-Wilk test. As the data were not normally distributed, non-parametric statistical tests were used for further analysis. To investigate the correlations between clinical data (NLR, CRP, PCT, monocytes), alpha-diversity indices, and the Th17/Treg ratio, we conducted a correlation analysis using Spearman's rank correlation coefficient, depending on the normality of the data distribution. All statistical tests were two-sided, and a p-value less than 0.05 was considered statistically significant. Additionally, we conducted a power analysis using a Sample Size Calculator to ensure adequate statistical power in our study [2].
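The analysis itself was run in R and GraphPad Prism; as a rough Python analogue of the same two steps, the sketch below applies a Shapiro-Wilk normality check followed by a two-sided Spearman rank correlation to synthetic data:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Synthetic paired measurements for 15 patients (not study data).
th17_treg = rng.lognormal(mean=0.0, sigma=1.0, size=15)   # Th17/Treg ratio
nlr = 4 + 2 * th17_treg + rng.normal(0, 1, size=15)       # NLR

# Normality check; a small p-value motivates non-parametric tests.
print("Shapiro-Wilk p =", round(stats.shapiro(th17_treg).pvalue, 4))

# Spearman's rank correlation (two-sided by default).
rho, p = stats.spearmanr(th17_treg, nlr)
print(f"Spearman rho = {rho:.2f}, p = {p:.4f}")
```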
Results
The average age of COVID-19 patients with T2D without metformin treatment (53.3% women and 46.7% men) was 54.88 ± 19.19 years, while those treated with metformin (60% men and 40% women) had an average age of 58.43 ± 6.27 years. However, the difference between these groups lacked statistical significance (p = 0.861).
Relative Expression of FOXP3, RORC, and TBX21 in Metformin-Treated COVID-19 Patients
In this study, we determined the mRNA expression of three key genes of T-helper subpopulations, i.e., FOXP3 (Treg), RORγt (Th17), and T-bet (Th1), in patients with COVID-19 and T2D who did or did not take metformin (Figure 2). To calculate the relative normalized expression using the PCR method, a control group of COVID-19 patients without T2D was used.
This study found that people who did not take metformin had a downregulated expression of FOXP3 by 6.62-fold, upregulated expression of RORγt by 29.0-fold, and upregulated T-bet by 1.78-fold, compared to the control group. In contrast, patients who took metformin showed a 1.96-fold upregulation of FOXP3, while RORγt and T-bet showed 1.84-fold and 11.4-fold downregulations, respectively, compared to the control group.
Correlation between Th1/Treg, Th17/Treg mRNA Ratios, and Gut Microbiota Composition and Hematological Parameters
We utilized the 2^−ΔΔCq (Livak) method for normalization to calculate the ratios of relative expression levels among the three genes [29]. We calculated the ratio between the levels of relative normalized expression of the three genes and determined the peculiar ratios between them. Differences in the ratios between the groups that received or did not receive metformin were established. Patients taking metformin had lower levels of Th1/Treg (0.302 ± 0.44) compared to those who did not (1.42 ± 1.99) and lower Th17/Treg mRNA ratios (0.57 ± 1.17 vs. 149 ± 260) (Figure 3).
As for the gut microbiota, patients who did not take metformin had a significantly lower alpha-diversity measured by the Shannon and Simpson indices, but the F/B ratio did not differ significantly (1.5 ± 0.42 in the non-metformin-treated group vs. 1.09 ± 0.36 in the metformin-treated group).

Inflammatory markers in patients taking metformin, such as NLR (4.12 ± 0.88 vs. 14.53 ± 2.19), procalcitonin (0.32 ± 0.24 vs. 2.79 ± 0.87), and CRP (16.58 ± 2.28 vs. 27.38 ± 8.23), were significantly lower compared to patients not taking metformin.
Discussion
In this study, we aimed to investigate the potential relationships between inflammatory biomarkers, gut microbiota, and Th1/Treg and Th17/Treg ratios in individuals taking metformin. Our findings revealed a positive correlation between the Th17/Treg ratio and the F/B ratio, suggesting that an imbalance in the gut microbiota composition may contribute to the dysregulation of T cell subsets in these individuals. Furthermore, we observed positive correlations between the Th17/Treg ratio and procalcitonin, as well as the NLR, indicating that increased inflammation may be associated with an imbalance in T cell subsets.
Although we did not observe changes in the F/B ratio between patients taking and not taking metformin, several studies have reported that obese and diabetic patients exhibit a higher F/B ratio compared to healthy subjects [30,31]. Patients taking metformin showed higher indices of alpha diversity, aligning with findings from previous studies [32].
Metformin shifts the balance of pro-inflammatory and anti-inflammatory T-lymphocytes by altering lymphocyte energy metabolism through AMP-activated protein kinase (AMPK) [33,34]. Our earlier investigations revealed that individuals taking metformin exhibited elevated expression levels of the PRKAA1 gene and reduced expression levels of SLC2A1 and MTOR [35]. These molecular changes were correlated with decreased inflammatory markers. The current discovery serves as a substantial complement to our prior studies, providing valuable insights and enhancing our understanding of the favorable effects of metformin on the clinical outcomes of COVID-19. These findings suggest that metformin may have a direct impact on the immune response to viral infections, including COVID-19. By upregulating the expression of PRKAA1 and downregulating SLC2A1 and MTOR, metformin could promote the activation of anti-inflammatory T-lymphocytes and inhibit the production of inflammatory markers. This could potentially explain why individuals taking metformin have shown improved clinical outcomes when infected with COVID-19, as the drug may help mitigate the excessive immune response and cytokine storm often associated with severe cases of the disease.
In a trial by Ventura-López et al., metformin treatment exhibited significant benefits in comparison to the placebo group. Participants receiving metformin showed a notable decrease in the requirement for supplemental oxygen, a more marked reduction in the percentage of viral load, and a faster achievement of an undetectable viral load. Nevertheless, there were no noteworthy distinctions in the duration of hospitalization between the metformin-treated individuals and those in the placebo group [36]. In the TOGETHER Trial, the use of metformin did not yield a significant decrease in hospitalizations due to COVID-19. However, when considering the per-protocol sample, which accounted for 83% of the participants, there was a reduced likelihood of emergency department visits and hospitalizations of COVID-19 patients, resulting in absolute risk reductions of 1.4% and 3.1%, respectively [37].
Conclusions
In conclusion, this study suggests that metformin may exert beneficial effects in COVID-19 patients with type 2 diabetes by influencing key immune response genes and modulating the gut microbiota. The upregulation of FOXP3 and downregulation of RORγt and T-bet point towards a potential anti-inflammatory impact, aligning with the lower inflammatory markers observed in metformin-treated individuals. However, this study's limitations, including a small sample size and the reliance on culture-based methods for microbiota analysis, call for cautious interpretation. Future research employing advanced sequencing techniques and larger cohorts is essential for a more comprehensive understanding. These findings hint at metformin's potential role in immune modulation during COVID-19, but further well-controlled studies are needed to validate and refine these observations for clinical applications.
Limitations

1. Sample size and homogeneity. The sample size in this study may limit the generalizability of the results. A larger and more diverse cohort would strengthen the statistical power. The study participants were recruited from a single medical center, which may affect the representativeness of the findings. A multi-center approach involving different geographic locations and demographic groups could enhance the external validity of the results.
2. Confounders and co-morbidities. The presence of confounding factors and co-morbidities may influence the observed associations. While efforts were made to exclude participants with chronic diseases and other gastrointestinal disorders, the influence of uncontrolled confounders cannot be entirely ruled out. This study did not systematically control for the influence of obesity and dietary habits on the gut microbiota and inflammatory markers. Both obesity and diet are known to be crucial factors influencing microbiome diversity and immune responses.
3. Gut microbiota analysis. This study employed a culture-based method for calculating alpha-diversity indices, providing insights into the relative abundance of specific bacterial taxa. However, it is crucial to acknowledge that culture-based methods have limitations in capturing the entire spectrum of microbial diversity present in the gut.
4. Cross-sectional design. The cross-sectional design of this study limits the establishment of causal relationships. Longitudinal studies would provide a more dynamic understanding of the relationship between metformin use, gut microbiota, inflammatory markers, and gene expression levels over time.
Figure 2. Volcano plot of the expression of investigated genes. The volcano plot vividly captures the transcriptional dynamics of key genes in fifteen COVID-19 patients who did not use metformin and fifteen COVID-19 patients who did use metformin. Five patients who did not take metformin had a significantly reduced expression of the FOXP3 gene, and two more samples each had a significantly increased expression of RORC and TBX21 (A). In contrast, increased expression of FOXP3 and decreased expression of RORC and TBX21 were observed in patients taking metformin (B).
Table 1. Primer nucleotide sequences used for qRT-PCR assay.
Interactive effects of nutrients and salinity on zooplankton in subtropical plateau lakes with contrasting water depth
Both eutrophication and salinization are growing global environmental problems in freshwater ecosystems, threatening water quality and various aquatic organisms. However, little is known about the interactive effects of these stressors and the role of lake depth in these interactions. We used field surveys to compare zooplankton assemblages over four seasons in eight Yunnan Plateau lakes with different trophic states, salinization levels, and water depths. The results showed that: 1) the species number (S), density (DZoop), and biomass (BZoop) of zooplankton exhibited strong seasonal dynamics, being overall higher in the warm seasons. 2) Data collected over four seasons and summer data both revealed highly significant positive relationships of S, DZoop, and BZoop with total nitrogen (TN), total phosphorus (TP), and phytoplankton chlorophyll a (Chl a). 3) S, DZoop, and BZoop displayed a unimodal relationship with salinity, peaking at 400-1000 μS/cm (conductivity, used to reflect salinity). 4) The two large-sized taxa (cladocerans and copepods) generally increased at low-moderate levels of TN, TP, Chl a, and Cond and were constant or decreased at high levels. The average body mass (biomass/density) of crustaceans decreased with increasing TN, TP, Chl a, and conductivity. Our findings indicate that zooplankton may be more vulnerable in deep lakes than in shallow lakes when exposed to conductivity stress, even under mesotrophic conditions, and the overall decrease in size in zooplankton assemblages under the combined stress of eutrophication and salinization may result in a lowered grazing effect on phytoplankton.
Introduction
With increasing human activities, lake eutrophication has become a global environmental problem (Wurtsbaugh et al., 2019), causing deteriorated water quality and loss of biodiversity (Wang et al., 2014; Lürling et al., 2017). Moreover, eutrophication may show unexpected effects when interacting with other environmental changes such as global warming (Moss et al., 2011; Nazari-Sharabian et al., 2018). For example, the mass deaths of African elephants in Botswana in 2020 were attributed to a combination of eutrophication and elevated levels of cyanobacterial toxins caused by frequent hot-dry climates. In some areas, eutrophication may interact with lake salinization, an issue receiving increasing attention particularly in arid and semi-arid zones due to decreased precipitation, increased evaporation, and an increasing demand for food and hence irrigation by a growing population (Williams, 1999; Yılmaz et al., 2021; Cunillera-Montcusí et al., 2022).
Salinization of lakes may inhibit zooplankton and thus promote the development of phytoplankton released from grazing (O'Neil et al., 2012; Jeppesen et al., 2015; Hintz and Relyea, 2019; He et al., 2020; Jeppesen et al., 2020a). Moreover, greatly decreased diversity, abundance, and hatching rates of zooplankton have been observed at high salinities (Schallenberg et al., 2003; Jeppesen et al., 2015), and a test of five cladocerans (Alona rectangula, Daphnia pulex, Moina macrocopa, Ceriodaphnia dubia, and Simocephalus vetulus) exposed to high salinity showed strongly reduced survival and reproduction (Sarma et al., 2006). A study comparing lakes across wide salinity gradients in north-west China showed a significant loss of ecosystem functioning with increasing salinity, indicated by drastic reductions in biodiversity, food chain length, and average trophic position in the food chain (Vidal et al., 2021). A study of 45 Tibetan lakes in China, moreover, showed that pronounced shifts in the structure and functions of lake ecosystems may be expected when certain critical salinity levels are passed (Lin et al., 2017).
When facing combined stress from salinization and eutrophication, larger changes in lake ecosystems may be expected. For example, an outdoor mesocosm experiment showed that eutrophication and salinization jointly promoted large blooms of phytoplankton and periphyton while causing declines in the abundance of many invertebrate and macrophyte species (Lind et al., 2018). Another example is a study of bacterial communities in selected lakes and rivers in the semi-arid Inner Mongolia Plateau, which showed combined stressor effects of salinity and nutrients on the aquatic bacterial community, leading to a major decline in species diversity (Tang et al., 2021). A study of 20 lakes with different trophic status and salinity in southern Siberia revealed that an increase in the nutrient load led directly to an increase in zooplankton biomass, while an increase in salinity indirectly led to a decrease in zooplankton biomass (Zadereev et al., 2022). Other studies, however, indicate that zooplankton communities developed in eutrophic systems are less sensitive to salinization due to cross-tolerance effects (Ersoy et al., 2022). Water depth may play a vital role for the interactive effects of eutrophication and salinization in lakes. A study using a global lake data set (573 lakes) demonstrated that the relative roles of nitrogen and phosphorus in affecting eutrophication were determined by water depth, where shallow lakes were mainly eutrophic and dominated by nitrogen limitation, while deep lakes were dominated by phosphorus limitation (Qin et al., 2020). Stratified deep lakes tend to have better water quality and lower biomass of both phytoplankton and zooplankton (Zadereev et al., 2022). Generally, shallow lakes are more susceptible to eutrophication (Qin et al., 2006). Shallow lakes, because of their large surface: volume ratio, are also more vulnerable to drought and imbalanced ratios between evaporation and precipitation (Coops et al., 2003;Jeppesen et al., 2009). However, little is known about the potential role of water depth in influencing the responses of zooplankton to increasing nutrient and salinity levels.
The Yunnan Plateau is an area rich in lakes that experience both eutrophication and salinization. Due to excessive loading of nutrients, many Yunnan Plateau lakes have shifted to a eutrophic state (Liu et al., 2012). Simultaneously, droughts have occurred frequently in the area in recent years; e.g., the average drought frequency increased by 29% from 2009 to 2018, and in 2019 the area suffering from extreme drought was far larger than normal (Ding and Gao, 2020; Wang and Yu, 2021). The Yunnan Plateau is characterized by lakes with a wide range of depths and thus exhibits a mosaic of lakes with different trophic states, salinities, and depths.
We studied zooplankton in eight lakes at the Yunnan Plateau with contrasting depths, nutrient levels, and salinities and had the following hypotheses: 1) the zooplankton biomass will increase with increasing nutrient levels, but less so at high salinities, not least at low nutrient concentrations, due to expected cross-tolerance effects in the eutrophic lakes; 2) the zooplankton response to nutrients and salinity will differ between deep and shallow lakes, being stronger for nutrients and weaker for salinity in the shallow lakes.
Study area
The eight study lakes were Lake Luguhu, Lake Chenghai, Lake Yangzonghai, Lake Erhai, Lake Xingyunhu, Lake Dianchi, Lake Yilonghu, and Lake Qiluhu, with widely varying trophic states ranging from oligotrophic to eutrophic, salinity levels ranging from freshwater to brackish, and an average water depth ranging from 2.2 m to 38.6 m (Figure 1). The area is dominated by a subtropical humid semi-arid monsoon climate (Zheng et al., 2010). Lake Xingyunhu, Lake Dianchi, Lake Yilonghu, and Lake Qiluhu have differed in trophic state in recent years (Supplementary Table S1). The salinity of Lake Chenghai gradually increased from about 950 to 1200 mg/L during 2006-2016 (Supplementary Figure S1).
Water sample collection and analyses
Depending on the morphological characteristics of the lakes, 7-15 sites were selected randomly and sampled in October and December 2020 and April and July 2021. The sampling included physical variables such as water temperature (WT, °C), dissolved oxygen (DO, mg/L), pH, and conductivity (Cond, µS/cm), measured in situ by a multiparameter water quality sonde (EXO2, Xylem, United States). Turbidity (Turb) was measured by a HACH 2100 Q turbidimeter. Water depth (ZM) and Secchi depth (ZSD) were measured by a depth sounder and a Secchi disc, respectively. Water samples were collected at 0.5 m, 3 m and 5 m depth at each sampling site with a 5 L water collector and then mixed. A 1 L subsample was used for water quality determination in the lab. Chlorophyll a (Chl a), total nitrogen (TN), total phosphorus (TP), soluble reactive phosphate (SRP), and nitrate-nitrogen (NO3-N) were analyzed according to the standard methods described by the Editorial Board of Water and Wastewater Monitoring and Analysis Methods of the Ministry of Environmental Protection of the People's Republic of China (State Environmental Protection Administration, 2002). TN was determined by an alkaline potassium persulfate digestion-UV spectrophotometric method. TP was determined by an ammonium molybdate-ultraviolet spectrophotometric method after being digested with K2S2O8 solution. Chl a was extracted using 90% acetone (at 4 °C for 24 h) after filtration through Whatman GF/C filters (0.45 μm, GE Healthcare United Kingdom Limited, Buckinghamshire, United Kingdom), and absorbance was then read at 665 nm and 750 nm, both before and after acidification with 10% HCl by spectrophotometry. For each variable and season, we averaged all sample points within each lake and then applied the average of the four seasons as the composite value for the whole lake. Because SRP in most of the studied lakes was below the detection limit, and because TN and nitrate-nitrogen were highly correlated (Supplementary Figure S11), we used TN and TP for subsequent analysis.
Zooplankton sample collection and analysis
For zooplankton analysis, a 20 L subsample (a mix of the samples taken at 0.5, 3, and 5 m depth) was concentrated to 50 mL by filtering it through a 64 μm nylon net to collect zooplankton. The samples were fixed with a formaldehyde solution (3%-5% final conc.) and identified microscopically at a magnification of either ×100 or ×200 (Zhang and Huang, 1991). Zooplankton biomass was estimated by multiplying the averaged density with the average dry mass for specific species taken from the literature, i.e., "Specifications for freshwater plankton surveys (SC/T 9402-2010)" (The Ministry of Agriculture of the People's Republic of China, 2011), and the identification of zooplankton species was mainly conducted according to "New Technology of Micro-biological Monitoring", "Freshwater Rotifera of China", and "Zoology of China-Freshwater Copepoda" (Wang, 1961; Editorial Committee of Zoology of China and Chinese Academy of Sciences, 1979;Shen, 1990).
Data processing and analyses
A comprehensive trophic level index (TLI) of the studied lakes was calculated based on Chl a, TN, TP, and ZSD (Jin and Tu, 1990; Jing et al., 2008). The TLI value for the whole lake is the average of each sampling site during the four seasons. The composite index was computed as TLI(Σ) = Σ Wj × TLI(j), with weights Wj = rij² / Σ rij², where TLI(j) is the comprehensive trophic level index of the jth parameter, Wj is the relative weight of the trophic level index of the jth parameter, and rij is the correlation coefficient between the jth parameter and the reference parameter Chl a.
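A minimal sketch of this weighted composite, assuming the form given above; the per-parameter TLI(j) values and rij coefficients below are hypothetical:

```python
def composite_tli(tli_values, r_with_chla):
    """TLI(sum) = sum_j Wj * TLI(j), with Wj = r_j**2 / sum(r**2),
    where r_j is each parameter's correlation with Chl a (Jin and Tu, 1990)."""
    r2 = [r ** 2 for r in r_with_chla]
    weights = [x / sum(r2) for x in r2]
    return sum(w * t for w, t in zip(weights, tli_values))

# Hypothetical single-site TLI(j) for Chl a, TN, TP, and Z_SD (r = 1 for Chl a).
tli = composite_tli([55.0, 60.2, 58.7, 52.1], [1.0, 0.82, 0.84, -0.83])
print(f"TLI = {tli:.1f}")   # by convention, TLI > 50 indicates a eutrophic state
```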
Using an average water depth of 10 m as a threshold, the eight lakes were roughly divided into deep (with clear thermal stratification) and shallow (without clear thermal stratification) ones to analyze the potential role of water depth for the response of zooplankton to eutrophication and salinization. Due to the strong spatial heterogeneity of water depth at the different sampling sites in each lake, all the site-specific data from all four seasons were used in the correlation analysis to reveal the potential role of water depth. Moreover, analyses based on average data for the four seasons were performed. To ease comparison with other research studies, the Cl − concentration (mg/L) was converted to conductivity (μS/cm) by the function of Moffet et al. (Afonina and Tashlykova, 2020), and salinity (g/L) was converted to conductivity by dividing by 0.774 (Boros and Vörös, 2010).
Functional Plotting Software (Origin, version 9.1; OriginLab Corp., United States), the Statistical Program for Social Sciences (SPSS, version 24.0; International Business Machines Corp., United States) and The R Programming Language (R, version 4.2.0; University of Auckland, NZ) were used to analyze data and draw graphs. The data were Log10-transformed (zooplankton species number and density, Log10(X + 1); zooplankton biomass, Log10(X + 0.001)), and SPSS was used to verify whether the data were in line with a normal distribution. The "ggpubr", "ggplot2", "ggsci", and "mgcv" packages in R were used for generalized additive model (GAM) analysis. To develop the GAM model, we first used zooplankton species number and abundance as response variables, and physicochemical indicators of the water environment such as TN, TP, Chl a, conductivity, water depth, transparency and turbidity as independent variables, all variables being log-transformed. Then, we excluded independent variables that were highly correlated with each other. In addition, we used F-statistics and stepwise regression to select significant predictor variables. The final GAM model included four variables: TN, TP, Chl a, and conductivity. To explore the response of zooplankton to nutrients and salinity at different water depths, the lakes were first divided into two groups: deep and shallow. In addition, multiple stepwise regression analysis was performed using SPSS to determine the significance levels of the independent variables, here also including lake depth. In order to determine the presence of interactions between nutrients, conductivity and water depths, we developed a regression model including the independent variables and the product terms of the independent variables, i.e., an "A*B" model (using SPSS to convert the variables and build a linear regression model). A second-order polynomial analysis was done using the converted variables (TN*Cond and TP*Cond), with the species number and biomass of zooplankton as response variables, to investigate the interaction between nutrients and conductivity at different water depths.
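The GAM fitting reported here used R's mgcv; purely as an illustrative Python analogue (synthetic data, hypothetical coefficients), the sketch below fits one smooth term per log-transformed predictor with the pygam package:

```python
import numpy as np
from pygam import LinearGAM, s

rng = np.random.default_rng(1)
n = 200
# Synthetic log10-transformed predictors: TN, TP, Chl a, conductivity.
X = np.column_stack([
    rng.normal(0.0, 0.4, n),    # log10 TN
    rng.normal(-1.3, 0.4, n),   # log10 TP
    rng.normal(0.8, 0.5, n),    # log10 Chl a
    rng.normal(2.8, 0.3, n),    # log10 Cond
])
# Synthetic response log10(S + 1): rising with TN, unimodal in Cond.
y = 0.8 + 0.3 * X[:, 0] - 0.5 * (X[:, 3] - 2.8) ** 2 + rng.normal(0, 0.1, n)

# One smooth per predictor, analogous to mgcv's s() terms in R.
gam = LinearGAM(s(0) + s(1) + s(2) + s(3)).fit(X, y)
gam.summary()
```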
Main limnological characteristics
The eight lakes formed a clear gradient in nutrient status and salinity measured as conductivity (Cond) (Table 1). Based on the calculated Trophic Level Index (TLI) values, the lakes were categorized as eutrophic (Lakes Xingyunhu, XYH; Dianchi, DC; Yilonghu, YLH; Qiluhu, QLH), mesotrophic (Lakes Chenghai, CH; Yangzonghai, YZH; Erhai, EH), and oligotrophic (Lake Luguhu, LGH). According to the measured Cond, the lakes could be classified into three groups, with the highest levels in CH and QLH, moderate levels in YZH, EH, XYH, DC, and YLH, and the lowest levels in LGH.
Zooplankton assemblages
The single-factor ANOVA indicated that the species number, density, and biomass of zooplankton differed significantly among the eight lakes (Supplementary Table S2). We found large differences in zooplankton species number (S) among the lakes (Figure 2). In general, the S of cladocerans and copepods was lower than that of rotifers and protozoans, except in CH, where S for cladocerans and rotifers was higher than for the other two taxa (Figure 2A). S was generally higher in the warm than in the cold seasons, except in QLH (Figure 2B).
The density and biomass of zooplankton (DZoop, BZoop) showed an increasing trend with increasing nutrient status assessed by TLI (Figures 2C,D) and were mostly higher in the warm than in the cold seasons. Rotifers and protozoans were generally dominant (>60%) in density, except for the overwhelming dominance of copepods in CH in summer (>70%) (Figure 3). In terms of biomass, rotifers and copepods dominated in all the studied lakes, while cladocerans also accounted for a large proportion in CH, EH, XYH, and DC (Figure 3).
Relationships between zooplankton and the environment
Spearman rank correlation analyses showed highly significant positive relationships between the zooplankton variables (S, DZoop, and BZoop) and TN, TP, Chl a, WT, and TLI, and negative relationships with ZM and ZSD (Supplementary Figure S2).
(Table 1 note: the mean value of the four quarters at each sample site is used as the mean value for the lake; refer to Figure 1 for the abbreviated lake names.)

Generally, S displayed increasing trends with TN, TP, and Chl a (Figures 4A-C). There was a unimodal relationship between S and conductivity (Cond), with S peaking at intermediate Cond and decreasing at higher Cond (Figure 4D). In comparison, S was much higher in the shallow and hypertrophic Lake QLH at a similar Cond. The S of cladocerans and copepods generally increased at low-moderate levels of TN, TP, Chl a, and Cond and was constant or decreased at high levels (Figures 4E-L). In contrast, the S of rotifers and protozoans showed a relatively pronounced increasing slope at high levels of TN and Chl a and a drop at high Cond (Figures 4M-T). The analyses for summer (Supplementary Figure S4) and four-season mean data in each lake (Supplementary Figures S5-S7) and annual lake averages for the four seasons (Supplementary Figures S8, S9) revealed similar results as those of the annual full data set. In the analysis of species number and abundance in relation to TN and TP, the R² values for the relationship to TN were higher than those to TP. The protozoans and rotifers showed the same pattern, while for cladocerans and copepods the relationships were more closely related to TP than to TN.
The ratio of zooplankton biomass to density was also analyzed to elucidate changes in body mass (Leptodora kindti and Simocephalus vetulus were excluded from the 17 sampling sites in Lake Dianchi and from one sampling site in Lake Luguhu in this analysis, as the results would otherwise be biased by their larger size) (Figure 6). Generally, the biomass/density ratio of total zooplankton gradually decreased with increasing TN, TP, Chl a, and Cond (Figures 6A-D). The ratio for cladocerans was generally relatively constant along the environmental gradients studied (Figures 6E-H). Copepods showed a decreasing trend, but in the deep lakes there was a slight increase along the TN and Chl a gradients, and this pattern was consistent for crustaceans (cladocerans and copepods) (Figures 6I-P). We further analyzed the ratio of zooplankton biomass to phytoplankton biomass (because L. kindti is a top predator that never eats phytoplankton, it was subtracted from the 17 sampling sites in Lake Dianchi in this analysis), which generally showed a weak change with TN, TP, and Cond (Supplementary Figure S10). The ratio averaged 0.18 in the deep lakes and 0.10 in the shallow lakes.
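As a sketch of how these ratios can be derived from site-level records (invented numbers, simple tabular layout assumed), the following mirrors the exclusion of the two large-bodied taxa described above:

```python
import pandas as pd

# Hypothetical per-site, per-taxon records (not the survey data).
df = pd.DataFrame({
    "lake": ["DC", "DC", "LGH", "CH"],
    "taxon": ["Leptodora kindti", "Copepoda", "Cladocera", "Copepoda"],
    "density": [2.0, 180.0, 35.0, 220.0],    # ind./L
    "biomass": [0.40, 0.90, 0.21, 1.10],     # mg/L dry mass
    "phyto_biom": [5.2, 5.2, 0.8, 2.4],      # phytoplankton biomass, mg/L
})

# Exclude the large-bodied outlier taxa before the body-mass calculation.
crust = df[~df["taxon"].isin(["Leptodora kindti", "Simocephalus vetulus"])]

site = crust.groupby("lake").agg(density=("density", "sum"),
                                 biomass=("biomass", "sum"),
                                 phyto=("phyto_biom", "first"))
site["body_mass"] = site["biomass"] / site["density"]   # mean individual mass
site["zoop_phyto"] = site["biomass"] / site["phyto"]    # grazing-capacity proxy
print(site)
```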
Multiple stepwise regression analyses showed that TN, Cond, TP, and water depth (ZM) were the key driving environmental factors for zooplankton species number (which declined with depth), but Cond showed no significant effects on total density and biomass (Table 2). TN, TP, Cond, TN*Cond, TP*Cond, TN*Cond*ZM, and TP*Cond*ZM in the regression model (Table 3) were all statistically significant, indicating that the interactive effects of Cond with TN and TP, and the water depth, played an important role. Second-order polynomial analysis showed that both species number and biomass of zooplankton first increased and then decreased with TN*Cond and TP*Cond in the deep lakes, with an overall negative correlation, while a significant positive correlation was found for the shallow lakes (Figure 7).
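A hedged sketch of such a model, using statsmodels on synthetic data (all variable names and values are illustrative), with the nutrient x conductivity (x depth) product terms and the second-order polynomial in the converted variable:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n = 120
d = pd.DataFrame({
    "S": rng.poisson(20, n).astype(float),   # zooplankton species number
    "TN": rng.lognormal(0.0, 0.5, n),
    "TP": rng.lognormal(-2.5, 0.5, n),
    "Cond": rng.lognormal(6.3, 0.4, n),
    "Zm": rng.uniform(2, 40, n),             # mean water depth (m)
})

# Linear model with TN*Cond, TP*Cond and triple product terms with depth.
m = smf.ols("S ~ TN + TP + Cond + TN:Cond + TP:Cond + TN:Cond:Zm + TP:Cond:Zm",
            data=d).fit()
print(m.summary().tables[1])

# Second-order polynomial in the converted variable TN*Cond, by depth group.
d["TNxCond"] = d["TN"] * d["Cond"]
d["deep"] = d["Zm"] > 10
poly = smf.ols("S ~ deep * (TNxCond + I(TNxCond ** 2))", data=d).fit()
print(poly.params.round(4))
```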
Changes in zooplankton species diversity and abundance
Our study, based on the mean of four seasons' data from eight plateau lakes, revealed that both species number and biomass of zooplankton generally increased with increasing nutrient status (TP range 0.005-0.27 mg/L, TN range 0.025-6.09 mg/L), though leveling off at high TP. This concurs with the results of numerous other studies revealing that zooplankton abundance is generally lowest in oligotrophic lakes (May and O'Hare, 2005; Lampert and Sommer, 2007; García-Chicote et al., 2018). Some studies have reported a lower or a decreasing trend in species richness at high nutrient status. Investigations of lakes in Denmark, the United States, and Italy found decreased zooplankton species richness above 0.05 mg TP/L, while the abundance of zooplankton increased above 0.09 mg TP/L (Havens et al., 2009; Jeppesen et al., 2011). A study of Danish lakes showed that, within the TP range 0.02-0.99 mg/L, a decreasing trend in species number levelled out at 0.1 mg TP/L while that of biomass remained high until 0.4 mg TP/L (Jeppesen et al., 2000). Studies in the United States (Lake Apopka, Florida) and Europe (Lago Trasimeno, Umbria, Italy) showed an increasing trend in zooplankton biomass in the TN range of approximately 0.75-5.7 mg/L (Havens et al., 2009). Another study, in Lake Erhai on the Yunnan Plateau, showed an increase in crustacean abundance in the TN range of 0.3-0.8 mg/L (Yin et al., 2023). The widespread occurrence of SRP below the detection limit in these lakes might indicate P-limitation of productivity. We further found water depth (ZM) to be an important influencing factor. ZM was significantly negatively correlated with the species number and abundance of zooplankton. Others have also highlighted the role of lake depth. For example, an analysis of data from 1151 lakes (America and Europe) showed that deep lakes usually have larger volume and a greater environmental capacity to better buffer, dilute and settle incoming nitrogen and phosphorus nutrients than shallow lakes (Zhou et al., 2022). Another example is a study of 30 lakes in the Yangtze River basin showing that the TP thresholds for regime shifts between a clear-water state dominated by submerged macrophytes and a turbid-water state dominated by phytoplankton varied little at moderate depths, but decreased significantly when the depth exceeded 3-4 m and increased sharply when the depth was below 1-2 m (Wang et al., 2014). After dividing the eight lakes into deep and shallow, we found that the species number and abundance of zooplankton in deep lakes were more significantly correlated with TN, TP and Chl a than in shallow lakes. Along the TN*Cond and TP*Cond gradients, the species number and biomass of zooplankton showed a unimodal relationship in deep lakes, while they increased monotonically along these gradients in the shallow lakes.

Figure 4. Generalized additive model of the relationships between TN, TP, Chl a, and Cond with the species number of total zooplankton (A-D), Cladocera (E-H), Copepoda (I-L), Rotifera (M-P), and Protozoa (Q-T) in the studied lakes. The studied lakes were divided into two groups according to mean depth (Z), i.e., DC, XYH, YLH, and QLH as shallow lakes with Z < 10 m, and LGH, CH, YZH, and EH as deep lakes with Z > 10 m. See Supplementary Table S3 for model parameters.
Along the conductivity gradient, both the species number and biomass of zooplankton showed increasing trends up to a conductivity level as high as 1085 μS/cm in the shallow lakes but were lower at a similar conductivity level (1096 μS/cm) in the deep lakes. Similar unimodal changes (as in the deep lakes) in the species richness of zooplankton have been reported in some other studies, but at different salinity levels, e.g., >1.4 g/L (1808 μS/cm) in endorheic soda lakes in southeastern Transbaikalia, Russia (Afonina and Tashlykova, 2020), >500 μS/cm in the Carpathian Basin (Horváth et al., 2014), 0-1 g/L (0-1290 μS/cm) in Lake Waihola, New Zealand (Schallenberg et al., 2003), and 1 g/L (1290 μS/cm) in a mesocosm experiment (Nielsen et al., 2008). Increases in the species number and biomass of zooplankton with increasing nutrient levels have also been reported in Danish lakes with conductivities in the range of 0.5-1.2 g/L (646-1550 μS/cm) (Jensen et al., 2010) and in the range of 1745-5790 μS/cm for 24 lakes in Xinjiang, China (Gutierrez et al., 2018). In our study, we demonstrated interactions of TN and TP, respectively, with Cond, and that these interactions differed significantly between the deep and the shallow (eutrophic) lakes. This indicates a buffering effect of eutrophication, i.e., communities developed under eutrophic conditions may be less sensitive to salinization due to cross-tolerance (Ersoy et al., 2022). The shallow lakes in our study and those from Denmark and Xinjiang, China, were mostly eutrophic, while the deep lakes in our study and those with decreased zooplankton at high conductivity were all in an oligotrophic or mesotrophic state. In addition to the factors already discussed, variation in ion composition may also play a role, because salinization effects depend on the composition and concentration of ions, both in terms of background salinity and the 'chemical cocktails' of ions created by anthropogenic activities (Schuler and Relyea, 2018; Kaushal et al., 2019; Zadereev et al., 2022). A discrepancy was also found between the effects of conductivity on zooplankton species number (significantly negative) and abundance (weak), as suggested by the results of the multiple regression analyses. This may reflect a shift in assemblage composition: protozoans and rotifers are more tolerant to salinity (Sarma et al., 2006; Hintz et al., 2017) and both dominate and maintain the total density and biomass at high conductivity, as discussed in the following section.
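The salinity-conductivity pairs quoted above (0.5 g/L ≈ 646 μS/cm, 1 g/L ≈ 1290 μS/cm, 1.4 g/L ≈ 1808 μS/cm) imply a near-constant conversion factor of roughly 1290 μS/cm per g/L. A minimal sketch making that implied conversion explicit; the factor is inferred from the quoted pairs, not stated by the authors:

```python
# Convert salinity (g/L) to specific conductivity (uS/cm) using the
# factor implied by the paired values quoted in the text; the constant
# below is inferred from those pairs, not stated by the authors.
US_CM_PER_G_PER_L = 1290.0

def salinity_to_conductivity(salinity_g_per_l: float) -> float:
    return salinity_g_per_l * US_CM_PER_G_PER_L

for s in (0.5, 1.0, 1.2, 1.4):
    print(f"{s:.1f} g/L -> {salinity_to_conductivity(s):.0f} uS/cm")
# Output: 645, 1290, 1548, 1806 uS/cm, close to the quoted
# 646, 1290, 1550, and 1808 uS/cm.
```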
Zooplankton decline in body mass along with eutrophication and salinization
We found that the average body mass (biomass/density) of crustaceans (cladocerans and copepods) decreased with increasing TN, TP, and Chl a due to changes in species composition. This is consistent with previous studies (e.g., Uye, 1994; Ostojić, 2000; Qin et al., 2013). A survey of 74 lakes in northeastern Poland, with TP levels ranging from 0.02 to 0.94 mg/L and Chl a from 1.1 to 182 μg/L, revealed a decrease in the average body mass of crustaceans with increasing TP (Ejsmont-Karabin and Karabin, 2013). Another example is a study of 71 shallow lakes in Denmark with a mean summer TP range of 0.02-1 mg/L, revealing that the average body mass of cladocerans decreased with increasing TP (Jeppesen et al., 2000). A study of 631 Danish lakes also showed a weak trend of decreasing body mass of crustaceans in the TP range 0.015-0.3 mg/L (Jeppesen et al., 2020b).
We found that the biomass ratio of zooplankton:phytoplankton decreased weakly with increasing TN and TP but was overall low; the ratio averaged 0.18 in the deep lakes and 0.10 in the shallow lakes. Although weak, this relationship was similar to those found elsewhere. For example, a study of 1656 shallow lakes from Florida and Denmark showed a decreasing trend of the zooplankton:phytoplankton ratio within the TP range 0.015-0.3 mg/L (Jeppesen et al., 2020b). A study of 71 shallow lakes in Denmark showed that the zooplankton:phytoplankton ratio was approximately 0.45 at 0-0.05 mg TP/L, 0.25 at 0.05-0.1 mg TP/L, 0.15 at 0.1-0.2 mg TP/L, and 0.1 at 0.2-0.4 mg TP/L (Jeppesen et al., 2000). A study of data from 466 lakes in Greenland, Denmark, New Zealand, and Norway showed that the zooplankton:phytoplankton ratio decreased within the TP range 0.003-0.8 mg/L (from a mean value of 0.35 in the most oligotrophic lakes to less than 0.1-0.2 in the most eutrophic lakes) and that fish predation pressure on large-bodied cladocerans was overall unimodally related to TP (Jeppesen et al., 2003).
FIGURE 7. Second-order polynomial analysis of the relationships of TN*Cond and TP*Cond with zooplankton species number and biomass in the studied lakes.
Analysis of data collected from lakes and reservoirs across the United States in the 2012 National Lake Assessment also showed that, in eutrophic lakes, increasing eutrophication caused a decrease in the ratio between zooplankton and phytoplankton (Yuan and Pollard, 2018). The generally low biomass of cladocerans, low average body mass of crustaceans, and low zooplankton:phytoplankton biomass ratio suggest strong top-down control of zooplankton by fish (Hrbáček et al., 1961; Brooks and Dodson, 1965; Li et al., 2022), which is typical for (sub)tropical lakes (Jeppesen et al., 2020b). Unfortunately, fish data were not available for the eight lakes in this study due to the fishing ban policy. However, according to the published literature, fish in Yunnan Plateau lakes are mainly planktivores, herbivores, and omnivores (Yuan et al., 2010), many of which prey on zooplankton. The eutrophic study lakes (e.g., YLH, QLH, XYH, and DC) are dominated by planktivorous fish, including silver carp (Hypophthalmichthys molitrix) and bighead carp (Aristichthys nobilis) (Guan and Huang, 2014; Wang, 2017; Xue et al., 2017). The lakes with the lowest nutrient levels are dominated by omnivores, planktivores, and herbivores. For example, the main fish species in EH are silver carp (H. molitrix), bighead carp (A. nobilis), grass carp (Ctenopharyngodon idella), blunt snout bream (Megalobrama amblycephala), salangid icefish (Neosalanx taihuensis), and other fishes such as stone moroko (Fei, 2012). The main fish in LGH are omnivores such as goldfish (Carassius auratus) and carp (Cyprinus carpio) (Huang et al., 2020). The fish in YZH are mainly goldfish, silver carp, bighead carp, tilapia, banded catfish (Pelteobagrus fulvidraco), and small fish such as stone moroko. The fish in CH are mainly salangid icefish (Dong and Wang, 2013). Thus, all the lakes were dominated by fish capable of feeding on zooplankton.
Along with increasing eutrophication, the species richness and abundance of crustaceans (the sum of cladocerans and copepods) showed a weaker upward trend than the abundance of rotifers and protozoans, cladocerans decreased at high TN, and all groups decreased at the higher salinity levels, not least in the deep lakes. The average body mass (biomass/density) of crustaceans decreased with increasing TN, TP, Chl a, and conductivity. Such changes have been widely reported in mesocosm experiments (Thompson and Shurin, 2012; Lind et al., 2018; Moffett et al., 2020; Coldsnow and Relyea, 2021; Greco et al., 2022; Hébert et al., 2022; McClymont et al., 2022) and field investigations (Brucet et al., 2009; He et al., 2020; Zadereev et al., 2022) at moderate to low conductivities. However, at higher conductivities the food web transforms from a three-trophic one to a two-trophic one without fish, and large cladocerans such as Daphnia may become dominant until they reach a critical threshold at high conductivities (Lin et al., 2017).
Conclusion
Our study showed interactive effects of nutrients and conductivity (i.e., salinity) on zooplankton and that water depth may play an important role in this interaction. Eutrophication apparently mitigated the effect of salinity stress on zooplankton abundance to some extent. Both eutrophication and salinization led to a decrease in zooplankton body size. Along the same TN*Cond and TP*Cond gradients, the species number and biomass of zooplankton showed a decreasing trend in the deep lakes, while the opposite was true in the shallow lakes, suggesting that the salt tolerance of zooplankton in the shallow lakes may be higher than in the deep lakes; further studies are needed to confirm this. In addition, fish apparently exerted strong control over large-bodied zooplankton in the studied Yunnan Plateau lakes, not least the eutrophic ones.
Data availability statement
The original contributions presented in the study are included in the article/Supplementary Materials, further inquiries can be directed to the corresponding author.
Author contributions
Conceptualization, LY, YT, HW, and PX; methodology, LY, YW, LZ, PW, YHL, YYL, and XZ; formal analysis, LY and YW; writing-original draft preparation, LY; writing-review and editing, HW, XJ, PX, and EJ; supervision, HW and EJ. All authors contributed to the article and approved the submitted version.
Funding
This research was supported by the Yunnan Provincial Department of Science and Technology (202103AC100001; 202001BB050078) and the Strategic Priority Research Program of the Chinese Academy of Sciences (XDB31000000). EJ was also supported by the TÜBITAK program BIDEB2232 (Project 118C250).
Evaluation of the adhesion strength of linear filler fibers of composite polymer reinforcement
The article discusses a method for evaluating the adhesion strength of linear filler fibers by tangential stresses obtained during shear tests of end cylindrical axisymmetric beads on a composite polymer reinforcement rod. The geometric parameters and material of the tested samples, the used equipment, the measuring instruments and the results of the tests are presented.
1. Introduction
The creation of modern, competitive structures and products largely depends on the properties of the materials used in their production. The structural strength and mechanical characteristics affect the reliability and durability of structures made of polymer composite reinforcement (PCR). Strength is one of the most important mechanical parameters. The normative regulation of PCR strength is currently governed by the requirements of the State Standard [1], in which the ultimate strength in axial tension [1,2] is determined as for reinforcing steel, and the ultimate strength in compression [1,3] as for plastics. However, the normative test methods for PCR do not allow assessing the adhesion strength of the linear filler fibers in order to control the established mechanical properties and strength characteristics, because the main feature distinguishing composite materials from plastics and steels is the presence of a component that connects substances of dissimilar chemical structure: matrix and filler. PCR consists of cured rods made of glass, basalt, carbon, or aramid fibers (linear fillers) impregnated with a thermosetting or thermoplastic polymer binder [4]. The large number of processes (physical, chemical, etc.) acting on the rod during production creates certain difficulties in the theoretical analysis of product parameters. It is therefore extremely important to have effective means and methods for assessing the strength of PCR linear filler fibers, since these products have a pronounced discrete structure. The development of such methods and tools is therefore relevant.
2. Test method for PCR samples and assessment of the adhesion strength of linear filler fibers
The essence of the method consists in loading PCR samples on a 2055-P-0.5 tensile testing machine (figure 3) with an increasing load while recording load-displacement values; the load is applied to an axisymmetric section of the bearing rod, namely a bead formed with grooves of specified width and diameter along the axis of the reinforcement. The maximum load is used to estimate the adhesion strength of the linear filler fibers across the sample diameter [5-8].
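The article does not write out the stress formula, but for shearing an end cylindrical bead off a rod the usual assumption is that the shear area is the lateral cylindrical surface at the rod-bead interface, giving the tangential stress

$$\tau = \frac{P_{\max}}{\pi \, d \, l},$$

where $P_{\max}$ is the maximum recorded load, $d$ is the rod diameter at the bead root, and $l$ is the bead width along the rod axis. This is a standard estimate for such shear and pull-off tests, not a formula quoted from the article.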
To implement this method, specimens (figure 1) were prepared from glass composite reinforcement (GCR); their geometric dimensions are shown in figure 2. Figure 7 shows a PCR sample with a bead formed on the rod before testing, and figure 8 shows a sample with the bead sheared off and the fibers of the linear filler clearly visible.
Figure 7. Sample before testing. Figure 8. Sample after testing.
Conclusions
The considered method of shear testing of beads formed on samples of composite polymer reinforcement makes it possible to assess the strength state of the bar through its depth for various nominal diameters of the reinforcement. The strength of the transverse links of the longitudinally oriented fibers is an indirect indicator of the breaking strength of the reinforcing bar and is informative enough to judge the quality of the bar and its tensile strength. Not only the absolute values of the adhesion forces of the linear filler fibers are assessed, but also their deviation from the specified (reference or certification) strength parameters of the reinforcement.
Epstein-Barr virus-specific methylation of human genes in gastric cancer cells
Background Epstein-Barr Virus (EBV) is found in 10% of all gastric adenocarcinomas but its role in tumor development and maintenance remains unclear. The objective of this study was to examine EBV-mediated dysregulation of cellular factors implicated in gastric carcinogenesis. Methods Gene expression patterns were examined in EBV-negative and EBV-positive AGS gastric epithelial cells using a low density microarray, reverse transcription PCR, histochemical stains, and methylation-specific DNA sequencing. Expression of PTGS2 (COX2) was measured in AGS cells and in primary gastric adenocarcinoma tissues. Results In array studies, nearly half of the 96 human genes tested, representing 15 different cancer-related signal transduction pathways, were dysregulated after EBV infection. Reverse transcription PCR confirmed significant impact on factors having diverse functions such as cell cycle regulation (IGFBP3, CDKN2A, CCND1, HSP70, ID2, ID4), DNA repair (BRCA1, TFF1), cell adhesion (ICAM1), inflammation (COX2), and angiogenesis (HIF1A). Demethylation using 5-aza-2'-deoxycytidine reversed the EBV-mediated dysregulation for all 11 genes listed here. For some promoter sequences, CpG island methylation and demethylation occurred in an EBV-specific pattern as shown by bisulfite DNA sequencing. Immunohistochemistry was less sensitive than western blot for detecting downregulation of COX2 upon EBV infection. Virus-related dysregulation of COX2 levels in vitro was not recapitulated in vivo among naturally infected gastric cancer tissues. Conclusions EBV alters human gene expression in ways that could contribute to the unique pathobiology of virus-associated cancer. Furthermore, the frequency and reversibility of methylation-related transcriptional alterations suggest that demethylating agents have therapeutic potential for managing EBV-related carcinoma.
Background
Gastric cancer is the fourth most common type of cancer and the second leading cause of cancer death worldwide [1]. A variety of genetic alterations as well as infectious and other environmental agents appear to be factors in gastric carcinogenesis. Epstein-Barr virus (EBV), a double-stranded DNA gammaherpesvirus, is found within the malignant cells in 10% of gastric adenocarcinomas, and infection seems to precede malignant transformation [2]. Basic and clinical observations suggest that EBV-associated gastric cancers have a different pathobiology from EBV-negative gastric cancers [3][4][5][6][7][8].
Rational design of virus-directed therapy requires a better understanding of the pathogenic role of EBV in gastric carcinogenesis.
In cell line models, DNMT1 overexpression is mediated by EBV LMP1 and LMP2 [21,[28][29][30][31]. EBV seems to employ epigenetic mechanisms to control the host transcriptome and also to control expression of its own virally encoded genes [11,12,14,15,19,21,24,29,32,33]. Upon initial infection of a cell, the unmethylated viral genome can undergo viral replication with new virion production, while a subset of infected cells acquire a highly methylated viral genome that squelches expression of foreign proteins and mediates long-term viral persistence by way of latent infection [23,34]. Infected tumors tend to have highly methylated EBV DNA, and methylation-related silencing of viral genes helps explain how infected tumors evade immune destruction.
While methylation of gene promoters is typically associated with transcriptional downregulation via selective binding of repressor proteins, the first protein ever shown to bind and activate a methylated promoter was EBV BZLF1, the key factor controlling the switch from latent to replicative forms of viral infection [35]. It appears that the virus has cleverly evolved a means of overcoming promoter methylation to its advantage [34,35]. Antiviral strategies are being explored for their antineoplastic potential. Interestingly, the most commonly used antiviral agents, acyclovir and ganciclovir, are effective at shutting down viral replication but they do not eliminate expression of latent and early lytic viral genes such as LMP1, LMP2 and BZLF1.
The clinical implications of EBV-related methylation of the gastric cancer genome are immense. First, emerging evidence shows the potential for improved diagnosis of gastric cancer by testing gastric washes for cancer-specific methylation patterns, perhaps in concert with tests for EBV to identify the virus-infected subset of cancers [36][37][38][39][40]. Differing patterns of promoter methylation in virus-positive compared to virus-negative cells [11,21,24] emphasize the need to characterize methylation patterns in a manner that maximizes assay sensitivity for cancer detection. Both infection and altered DNA methylation appear to be early events in carcinogenesis [2,41], potentially facilitating detection of precancerous lesions in stomach juice.
A second clinical implication is the potential for improved treatment of gastric cancer using drugs that reverse the effect of promoter hypermethylation [42,43]. In particular, demethylating agents that inhibit DNA methyltransferase and reverse tumor suppressor gene silencing or oncogene activation are potential antineoplastic strategies [43]. Consideration must be given to possible differences in the effect of demethylating agents in virus-positive versus virus-negative tumors [43][44][45].
We and others have shown that naturally infected gastric cancers have lower CDKN2A (p16) expression [14,15]. In a clinical trial of fluorouracil (5FU) for gastric cancer, CDKN2A promoter methylation status was an independent predictor of survival [46]. The rationale for using demethylating agents like 5-aza-2'-deoxycytidine in clinical trials rests on scientific evidence that demethylating therapy modifies the tumorigenic properties of cancer cells.
Several investigators have successfully infected epithelial cell lines with EBV in vitro [47,48]. In the current study, EBV-positive and EBV-negative AGS gastric cancer cells were examined for differences in gene expression patterns using low-density microarray analysis and reverse transcription polymerase chain reaction (rtPCR). AGS is a cell line that was originally grown from gastric adenocarcinoma tissue and now is widely used as a model of gastric cancer. The role of DNA methylation in mediating selected effects was examined by bisulfite DNA sequencing and by testing the ability of a demethylating agent to reverse the effect of EBV on gene silencing. Results revealed extensive gene dysregulation upon EBV infection in AGS cells with evidence that promoter methylation is responsible, at least in part. Reversal of virus-associated transcriptional effects suggests that demethylating agents should be explored for their potential to control growth of EBV-related malignancies.
Gastric Cancer Cell Lines
The AGS gastric cancer cell line (ATCC CRL-1739) was cultured in Dulbecco's Modified Eagle's Medium containing 10% fetal bovine serum (heat-inactivated for 20 minutes at 65°C) and 1% penicillin-streptomycin (10,000 units penicillin, 10,000 μg/ml streptomycin, Gibco, Carlsbad, CA). The cells were infected with a recombinant EBV strain (a gift from Dr. Henri J. Delecluse) [33,49,50] that was engineered to express green fluorescent protein (GFP) and hygromycin B resistance by cloning these genes into the prototypic B95.8 strain of EBV where the second copy of oriLyt normally resides. Before infection, AGS cells were transfected with 1 μg of an expression vector encoding CD21 (the EBV receptor) and a puromycin resistance gene by using Fugene 6 (Roche, Indianapolis, IN) as previously described [51]. At 48 hours post-transfection, cells containing CD21 were positively selected using 0.5 μg/mL of puromycin-HCl (Roche). Viral stocks of recombinant EBV were generated in 293 cells, a human embryonic kidney epithelial cell line, by inducing lytic replication using 20 ng/mL phorbol 12-tetradecanoate 13-acetate and 3 mM butyrate. Supernatants were harvested 3 days after induction, filtered (0.8 μm), and frozen at -80°C until use. Puromycin-resistant AGS cells were plated at 50% of full density in 60-mm tissue culture dishes and coincubated with 1 mL of stock virions. Four days later, the EBV-infected AGS cells (now called AGS-B95-HygB) were positively selected with 100 μg/mL hygromycin B (Roche).
DNA fingerprinting confirmed that AGS cells used in this study matched the genotype of AGS cells in the American Type Culture Collection. Fingerprinting was done using the PowerPlex 1.2 STR kit (Promega) followed by electrophoresis on an ABI 310 capillary gel electrophoresis instrument (Applied Biosystems).
Gene Expression by Histochemical Stains
Paraffin blocks were prepared from AGS and AGS-B95-HygB cell lines, and paraffin blocks of primary gastric adenocarcinoma were retrieved from clinical archives. The residual clinical specimens represent all available EBV positive gastric cancers (n = 9) and a random selection of EBV-negative gastric cancers (n = 9). Histochemical stains were applied to paraffin sections to confirm infection and to evaluate gene expression. To prepare blocks, cultured cells were first rinsed in Dulbecco's phosphate buffered saline (Gibco, Invitrogen), harvested with 0.25% trypsin (Gibco, Invitrogen), enmeshed in a clot using Dade Ci-Trol Coagulation Control (Citrated)-Level 1 (Dade Behring, Marburg, Germany) and Thrombin 200 (Pacific Hemostasis, Middletown, VA), fixed in 10% buffered formalin, embedded in paraffin, and sectioned onto coated slides.
EBER in situ hybridization was performed using fluorescein-labeled EBER probe and oligo(d)T control probe on a Benchmark in situ hybridization system (Ventana Medical Systems, Tucson, AZ). Immunohistochemical stains for EBV LMP1 and LMP2 proteins were performed as previously described [52] using citrate antigen retrieval and the CS1-4 cocktail of mouse monoclonal antibodies against LMP1 (1:100, Dako, Carpinteria, CA) and the E411 rat monoclonal antibody against LMP2 (1 mg/ml, Asencion, Munich, Germany). Paraffin sections of EBV-related Hodgkin lymphoma and post-transplant lymphoproliferative disorder served as positive controls.
Detection of the EBV genome
A battery of quantitative real-time PCR (Q-PCR) assays targeting six disparate regions of the EBV genome was used to demonstrate that viral infection of AGS cells was successful. Amplified products were detected using the ABI Prism 7500 Real-Time PCR instrument with Sequence Detection System software (Applied Biosystems) as previously described using primers targeting the BamH1W, EBNA1, LMP1, LMP2, and BZLF1 regions of the EBV genome [52] or EBER1 DNA [53].
To check for amplicon contamination, every run contained at least two "no template" controls in which nuclease-free H2O was substituted for template.
Low-Density cDNA Microarray Analysis
RNA was isolated from AGS and AGS-B95-HygB cells using the RNeasy RNA Mini Kit (Qiagen) after first using the QiaShredder™ spin column (Qiagen) to lyse the cells. After confirming RNA integrity using an Agilent Bioanalyzer, expression microarray analysis was performed by SABiosciences Corporation (Frederick, MD) using their GEArray Q Series Human Signal Transduction in Cancer Gene Array. This low-density microarray consists of 96 probes that test activation of 15 signal transduction pathways involved in oncogenesis. (Target transcripts are listed in Results.) Biotin-labeled cDNA prepared from 10 μg of each RNA sample using the AmpoLabeling-LPR method (SABiosciences) was hybridized to the array, and chemiluminescent detection was performed using a CCD camera. Integrated raw intensity values for each spot were generated by GEArray Analysis Suite software (SABiosciences), and further analysis and normalization were performed using Microsoft Excel. The lowest spot intensity value on each array was considered to be background and was subtracted from each raw intensity value for each probe, and then spot intensities were normalized to that of the housekeeping gene, glyceraldehyde-3-phosphate dehydrogenase (GAPDH). A pairwise comparison of gene expression levels was made between the EBV-positive cells (AGS-B95-HygB) and parental EBV-negative AGS cells. If the ratio was ≥2 or ≤0.5, the gene was considered to be upregulated or downregulated in infected cells.
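As a concrete illustration of the normalization and two-fold rule just described, consider the following minimal sketch; the spot intensities, spot names, and resulting calls are hypothetical, and the actual analysis was performed in GEArray Analysis Suite and Excel:

```python
# Sketch of the background subtraction, GAPDH normalization, and
# two-fold cutoff described above. All spot intensities, spot names,
# and the resulting calls are hypothetical.

def normalize(raw):
    background = min(raw.values())            # lowest spot = background
    corrected = {g: v - background for g, v in raw.items()}
    gapdh = corrected["GAPDH"]
    return {g: v / gapdh for g, v in corrected.items()}

uninfected = normalize({"GAPDH": 900.0, "IGFBP3": 60.0,
                        "COX2": 800.0, "blank": 5.0})
infected = normalize({"GAPDH": 950.0, "IGFBP3": 700.0,
                      "COX2": 70.0, "blank": 6.0})

for gene in ("IGFBP3", "COX2"):
    ratio = infected[gene] / uninfected[gene]
    call = "up" if ratio >= 2 else ("down" if ratio <= 0.5 else "unchanged")
    print(gene, round(ratio, 2), call)
```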
Minor Groove Binding Probe Semiquantitative rtPCR
To evaluate expression of five selected genes that were not on the microarray described above, semi-quantitative rtPCR assays were performed (Assays-on-Demand, Applied Biosystems) using minor groove binding probes targeting helicase-like transcription factor (HLTF), trefoil factor-1 (TFF1), basic-leucine zipper ATF-like transcription factor (BATF), inhibitor of DNA binding protein-4 (ID4), and nucleostemin (NU). These five were selected because they are reportedly dysregulated in a substantial proportion of gastric adenocarcinomas or EBV-related cancers [32,54-59]. GAPDH served as an endogenous control for relative quantification purposes. RNA was converted to cDNA using the High Capacity cDNA Archive Kit (Applied Biosystems), and the cDNA was diluted 1:10 with nuclease-free water. Each 50 μL PCR reaction contained: 1X TaqMan® Universal Master Mix, 1X Target Gene Expression assay or GAPDH Endogenous Control mix, and 10 μL cDNA. To check for amplicon contamination, every expression assay on every plate contained at least two "no template" controls in which nuclease-free water was substituted for template. Thermocycling conditions and data analysis were as described above for the SYBR Green rtPCR.
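The paper does not reproduce the relative quantification formula; a conventional comparative-Ct (2^-ΔΔCt, Livak) sketch with GAPDH as the endogenous control, using hypothetical Ct values chosen to illustrate a roughly 21-fold decrease, would be:

```python
# Conventional comparative-Ct (Livak, 2^-ddCt) relative quantification
# with GAPDH as the endogenous control; all Ct values are hypothetical.
def relative_expression(ct_target, ct_gapdh, ct_target_ref, ct_gapdh_ref):
    d_ct_sample = ct_target - ct_gapdh             # dCt, infected sample
    d_ct_reference = ct_target_ref - ct_gapdh_ref  # dCt, uninfected cells
    dd_ct = d_ct_sample - d_ct_reference
    return 2.0 ** (-dd_ct)

# Hypothetical TFF1 Ct values chosen to illustrate a ~21-fold decrease:
fold = relative_expression(ct_target=30.0, ct_gapdh=18.0,
                           ct_target_ref=25.6, ct_gapdh_ref=18.0)
print(f"TFF1: {fold:.3f}-fold vs. uninfected cells")  # ~0.047, ~21-fold down
```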
Demethylation Treatment and Sequencing of Bisulfite-modified DNA
To study the effect of demethylation, the AGS and AGS-B95-HygB gastric cancer cell lines were grown to 75% confluency, and then 1 μM of fresh 5-aza-2'-deoxycytidine (5aza; Sigma) was added daily for three consecutive days. On the fourth day, RNA and DNA were harvested from treated and untreated cultures. RNA was evaluated for gene expression levels, and DNA was examined for methylation after sodium bisulfite treatment (EZ DNA Methylation Kit, Zymo Research, Orange, CA) to convert unmethylated cytosines to uracil while keeping methylated cytosines unchanged [60]. The positive control was CpGenome Universal Methylated DNA (Chemicon) subjected to the same bisulfite conversion, and control primers targeting the C8orf4 cellular gene promoter confirmed successful bisulfite conversion of each DNA sample [61]. CpG islands were identified using CpG Plot for each of five human genes, CDKN2A, ID4, ICAM1, COX2 (RefSeq# NM_000963), and TFF1 (RefSeq# NM_003225.2), for which promoter sequences (3000 basepairs upstream of the transcription start site through exon 1) were downloaded from GenBank. The following parameters for CpG island identification were used: minimum length of 200 bp, minimum average percentage of C+G of 50%, and minimum average ratio of observed to expected CpG of 0.6 [62]. To identify CpG-dense regions for COX2 and TFF1, the minimum length parameter was reduced to 50 bp [61]. Primer sequences are shown in Table 1. Each 50 μl PCR reaction contained: 1X PCR Buffer, 2 mM MgCl2, 1 unit Platinum Taq DNA Polymerase (Invitrogen, Carlsbad, CA), 0.2 mM dNTPs (ABI), and 30 pmol of each primer. Thermocycling conditions were: 94°C for 2 minutes; 40 cycles of 94°C for 30 seconds, 55°C for 30 seconds, and 72°C for 1 minute; and 72°C for 10 minutes. To monitor amplicon contamination, every run contained a "no template" control in which nuclease-free water was substituted for template. Sequencing was performed on amplicons of bisulfite-treated templates to identify the methylated and unmethylated CpGs with or without 5aza treatment. First, each PCR product was cloned into the pGEM-T vector using the pGEM-T Easy Vector System II (Promega) and transformed into JM109 high-efficiency competent cells. White colonies containing inserts were selected and cultured overnight, and plasmid DNA was extracted using the QiaPrep Spin Miniprep Kit (Qiagen). Sequencing was done on an ABI 3100 Genetic Analyzer using the ABI PRISM™ BigDye™ Version 1.1 Terminator Cycle Ready Reaction Kit with AmpliTaq DNA Polymerase and an M13R3 primer. Results were downloaded into Sequencher software (Gene Codes, Ann Arbor, MI) to obtain the reverse complement of each sequence, and both forward and reverse sequences were aligned and analyzed to distinguish unmethylated cytosines from methylated cytosines.
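The CpG island criteria listed above (length ≥ 200 bp, G+C ≥ 50%, observed/expected CpG ≥ 0.6) can be expressed compactly; the following is a minimal sketch of the criteria check for a single candidate window, not the CpG Plot implementation itself:

```python
# Check a candidate window against the CpG island criteria stated above:
# length >= 200 bp, average G+C >= 50%, and observed/expected CpG >= 0.6,
# where expected CpG = count(C) * count(G) / window length.
def is_cpg_island(seq, min_len=200, min_gc=0.5, min_obs_exp=0.6):
    seq = seq.upper()
    n = len(seq)
    if n < min_len:
        return False
    c, g = seq.count("C"), seq.count("G")
    if c == 0 or g == 0:
        return False
    gc_fraction = (c + g) / n
    observed_cpg = seq.count("CG")
    expected_cpg = c * g / n
    return gc_fraction >= min_gc and observed_cpg / expected_cpg >= min_obs_exp

# Toy usage with a hypothetical CpG-rich 200-bp window:
print(is_cpg_island("GC" * 50 + "CG" * 50))  # True
```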
Western Blot on the AGS cell line
Because of the potential for COX2 inhibitor therapy to overcome COX2 effects, the RNA-based results for COX2 were chosen for follow-up study at the protein level. Confluent AGS cells with or without EBV were harvested with 0.25% trypsin, washed twice in phosphate-buffered saline, and pelleted by centrifugation. Cells were resuspended in 500 μL NP-40 cell lysis buffer (50 mM Tris-HCl, 150 mM NaCl, 1% NP-40, pH 8.0), incubated on ice for 30 minutes, and spun at 12,000 rpm for 15 minutes at 4°C. Aliquots of lysate (at 50, 100, and 150 μg protein per well) were resolved using SDS-PAGE on a tris-glycine 4-20% gradient gel (Invitrogen) and transferred onto a nitrocellulose membrane. COX2 was detected with a 1:5,000 dilution of the monoclonal antibody followed by a 1:10,000 dilution of secondary antibody conjugated with alkaline phosphatase (Amersham Biosciences), and visualized with a Typhoon PhosphorImager (Molecular Dynamics). Band density, measured semi-quantitatively and normalized to beta actin (ACTB), was compared between infected and uninfected AGS cells.
EBV Infection of AGS Gastric Cancer Cells
Successful EBV infection of AGS gastric cancer cells was confirmed using six Q-PCR assays targeting disparate segments of the EBV genome. EBER in situ hybridization showed no EBER expression in the parental "EBV-negative" AGS cells, whereas greater than 90% of AGS-B95-HygB cells were EBER-positive and had activated-appearing nuclear morphology (Figure 1). The proliferation rate was increased, as shown by cultured AGS-B95-HygB cells reaching confluence three days before the parental AGS cells. The infection persisted for at least 4 months as shown by GFP and EBER histochemical stains. EBV latent (LMP1 and LMP2A) and lytic (BZLF1 and BMRF1) proteins were not expressed in uninfected AGS cells, whereas ~10% of infected cells expressed LMP1, half expressed LMP2A, and ~35% expressed BMRF1 and BZLF1 proteins, implying active viral replication (Figure 1).
SYBR Green rtPCR was used to check the microarray findings for 26 of the dysregulated genes and for 12 additional genes in which no significant change was observed on the microarray. Greater than two-fold alteration in mRNA level in infected versus uninfected cells was confirmed for 16 of the 26 genes (Table 2). The genes most strongly affected were IGFBP3, which was upregulated by 42-fold, and COX2, BMP4, and ICAM1, which were downregulated by 35-, 32-, and 22-fold, respectively. The remaining ten microarray-based alterations were not confirmed by rtPCR, suggesting that EBV-related dysregulation of these factors was lower than two-fold. In one instance, there was a major discrepancy: expression levels of BCL2L1 by microarray analysis showed a two-fold decrease in infected cells, while rtPCR repeatedly showed a five-fold increase in BCL2L1 mRNA levels in infected cells. Less dramatic discrepancies were found when an additional 12 genes were analyzed by rtPCR, with significant (>2-fold) changes found in the expression of 5/12 genes for which no alteration was observed on the microarray (Table 2). When rtPCR was used to evaluate the expression of each of 5 selected genes (HLTF, BATF, ID4, TFF1, and NU) that were not on the microarray, all except HLTF were downregulated by more than five-fold in the EBV-positive AGS cells (Table 2). BATF and TFF1 were the most significantly affected (38-fold and 21-fold decreases, respectively).
Demethylation Alters Cellular Gene Expression in AGS and AGS-B95-HygB cells
If gene promoter methylation contributed to EBV-related alterations in gene expression, then demethylation could reverse the effect and perhaps even restore baseline gene expression. To explore this possibility, AGS and AGS-B95-HygB cells were cultured in the presence or absence of 5aza. On the fourth day, RNA was isolated from treated and untreated cells, and rtPCR was performed to measure mRNA levels of 11 genes dysregulated by EBV: ID2, IGFBP3, CDKN2A, ID4, CCND1, BRCA1, HIF1A, ICAM1, COX2, HSP70, and TFF1 [21,23,27,29-37]. Treatment with 5aza had varying effects depending on the gene and infection status. Five of the 11 genes (ID2, IGFBP3, CDKN2A, ID4, and CCND1) responded to 5aza treatment with higher mRNA levels, consistent with promoter methylation in the parental AGS cells. In contrast, decreased expression of four factors (BRCA1, HIF1A, ICAM1, and COX2) was observed in response to 5aza treatment, possibly because transcriptional repressors of these genes were demethylated. For the remaining factors (HSP70 and TFF1), 5aza had no significant effect in AGS cells, suggesting that these genes are not regulated by methylation in the absence of EBV.
In AGS-B95-HygB cells, reduced mRNA levels were observed for 10 genes upon EBV infection, but treatment with 5aza was not sufficient to fully restore expression of any of the genes except for ID2 and CCND1 where complete reversal of the downregulation was seen (Figure 3). For the remaining 8 genes there was partial restoration of expression upon demethylation.
IGFBP3 upregulation in uninfected cells treated with 5aza suggests that an inhibitor of IGFBP3 is normally methylated in AGS cells, and that 5aza demethylates the inhibitor to induce IGFBP3. EBV infection also strongly induces IGFBP3, but subsequent 5aza treatment does not further induce gene expression rather it partly reverses the effect of EBV on IGFBP3 induction. Overall, these results suggest that methylation is a mechanism by which EBV influences cellular gene expression, and that demethylation can partially or completely restore baseline expression levels.
Bisulfite DNA Sequencing Reveals EBV-specific Methylation
Comparisons of methylation patterns were made between 5aza treated and untreated AGS and AGS-B95-HygB cells to explore whether EBV-specific methylation was a plausible cause for dysregulation of five selected genes and to show how patterns of methylation were affected by 5aza. The targeted promoter region for each gene contains multiple CpG dinucleotides: The CDKN2A region has 34 sites, ID4 has 53 sites, COX2 has 25 sites, ICAM1 has 35 sites, and TFF1 has 14 CpG sites.
Bisulfite sequencing showed CDKN2A (p16) was methylated at 30/34 CpG sites (88%) in both AGS and AGS-B95-HygB cells. After 5aza treatment, only 22-31% of sites remained methylated in AGS cells, whereas 70% of sites remained methylated in AGS-B95-HygB cells (Figure 4A), in concert with restored p16 expression from 24-fold to 6-fold below baseline (Figure 3). The findings imply a dramatic effect of EBV infection on p16 levels, and a more dramatic 5aza-related increase of p16 in infected cells compared to uninfected ones. The dramatic change in expression despite a minimal change in promoter methylation (88 to 70%) after 5aza treatment of infected cells suggests the likelihood of mechanisms other than promoter demethylation.
EBV infection did not increase methylation of ID4 since its promoter was already largely methylated before infection occurred. Bisulfite sequence analysis of ID4 showed 96-100% methylation in both AGS and AGS-B95-HygB cell lines (Figure 4B). Upon treatment with 5aza, methylation of 92-96% of CpG sites was seen in both infected and uninfected cells, and this seemingly small change in methylation status was accompanied by induction of ID4 mRNA. It is feasible that there is a non-methylation mechanism of ID4 regulation by EBV.
Interestingly, bisulfite sequencing analysis of COX2 showed that individual gene promoters were either completely unmethylated or largely methylated in AGS and in infected AGS cells, with somewhat different CpG hypermethylation patterns in infected compared to uninfected clones (Figure 4C), accompanied by a marked loss of COX2 expression (Figure 3). In the presence of 5aza, there was dramatic demethylation of the COX2 promoter in AGS cells, with only 0 to 6% methylation. In contrast, the COX2 promoter was either completely methylated or only half methylated in various clones of AGS-B95-HygB cells treated with 5aza, suggesting hemimethylation of alleles in a given cell or else cell-to-cell heterogeneity with respect to COX2 methylation patterns. On average, COX2 mRNA levels were similar in infected versus uninfected cells after treatment with 5aza (Figure 3). Further work is needed to determine, in any given cell, how methylation status relates to COX2 levels and viral status (e.g., latent or lytic infection). In any case, EBV substantially influences COX2 expression and also impacts 5aza effects on its promoter. ICAM1 had distinct methylation patterns in infected versus uninfected AGS cells, suggesting that EBV influences expression of this gene via promoter methylation (Figure 4D). Upon EBV infection, methylation of CpG sites increased slightly from 89 to 94%, particularly at sites #33 and #34, and this subtle change was associated with a pronounced decrease in ICAM1 expression. Treatment of both cell lines with 5aza yielded demethylation of 6-20% of the CpG sites to achieve similar mRNA levels, which in the infected AGS cells represented a restoration of levels from 24-fold to 9-fold below baseline.
Figure 3. EBV-related alteration of gene expression is often reversed by treatment with a demethylating agent. Bar graphs show substantial dysregulation of 11 genes upon EBV infection, as tested by rtPCR, with zero representing baseline expression levels in parental AGS cells. Treatment of uninfected cells with 5-aza-2'-deoxycytidine (5aza) reveals induction of ID2, ID4, CCND1, CDKN2A (p16) and IGFBP3, suggesting pre-existing promoter methylation. Treatment of infected cells reveals that demethylation can overcome the effect of EBV infection on virus-induced gene dysregulation, at least in part. The most marked drug effect was seen in ID2, CCND1 and COX2, where demethylation virtually completely reversed the impact of viral infection.
The most dramatic virus-driven changes in promoter methylation were seen in the TFF1 gene. The TFF1 promoter was 50-64% methylated in AGS cells but 100% methylated in the infected counterpart. In particular, CpG sites #10-14 were methylated upon infection and tended to remain methylated even after attempting to demethylate them with 5aza (Figure 4E). In EBV-positive AGS cells, 5aza demethylated 14-36% of sites and substantially induced TFF1 expression from 30-fold to 12-fold below baseline, whereas in EBV-negative cells demethylation of 43-93% of sites had no effect on TFF1 expression. Overall, these findings provide evidence that EBV alters cellular gene expression by direct or indirect effects on promoter methylation, and demethylation can partially overcome viral effects on gene expression.
Protein-based analysis confirms virus-associated downregulation of COX2 in AGS cells but not in primary gastric adenocarcinoma tissues
COX2 was chosen for analysis at the protein level because of the translational potential for COX2 inhibitor therapy to overcome its effects. Lysates of EBV-negative and EBV-positive AGS cells were compared by western blot analysis. After normalization, COX2 expression in EBV-positive cells was 1.5 to 2.1-fold (average 1.9) lower than in EBV-negative cells, confirming the RNA-based finding that COX2 levels are downregulated by EBV in this gastric epithelial cell line. In histologic sections of gastric adenocarcinoma, COX2 protein was localized to the cytoplasm and surface membrane of malignant cells, without significant staining of the nucleus. COX2 was also expressed in a fraction of tumor-infiltrating lymphoid cells. In EBER-negative cancers (n = 9), malignant cells had a mean proportion score of 3.3 and an intensity score of 1.8, while EBV-positive cancers had a mean proportion score of 3.5 and an intensity of 2.4. Total scores were not significantly different in relation to EBV status (p = 0.07). Likewise, immunohistochemistry of AGS cells did not exhibit a visible difference in COX2 expression in relation to EBV status (Figure 5), suggesting that immunohistochemistry is less sensitive to changes in protein level than western blot.
Discussion
Our research suggests that DNA methylation is a mechanism by which EBV alters cellular gene expression, and that virus-related methylation can be reversed, at least in part, by a demethylating agent. Since gross chromosomal abnormalities and microsatellite instability are less frequently detected in EBV-associated gastric cancer than in EBV-negative cancer, it is feasible that DNA hypermethylation is a primary driver of virus-related oncogenesis that can be capitalized upon in designing therapeutic interventions [19,21].
It is difficult to analyze the natural effect of EBV infection on gastric cells because laboratory models only partially reflect the effects of EBV infection occurring in vivo. For example, LMP1 expression is unusual in naturally infected tumors whereas LMP1 was expressed in 10% of our infected AGS cells [2,[63][64][65][66][67][68][69][70][71]. Low level lytic replication is sometimes reported in naturally infected gastric cancers while it is regularly reported in infected gastric cancer cell lines including our AGS-B95-HygB line [2,[65][66][67][68][69]72]. Other limitations of this study are that only one gastric cell line was examined and the demethylating agent was applied for only three days. Nevertheless, our pilot work showing effects of EBV infection on cellular gene expression and promoter methylation in AGS cells encourages further investigation of the prospect of using demethylating agents to overcome virus-associated effects in vivo.
The observed viral replication in infected AGS cells might be aided by EBV-mediated downregulation of CCND1 and ID2 to achieve S-phase-like cellular conditions that are favorable for viral replication [73]. One of the EBV lytic proteins, BMRF1, was previously shown to upregulate gastrin [74], and high levels of gastrin are implicated in gastric tumor development [75]. Lytic infection is reportedly induced by 5aza [45], suggesting possible pathways of 5aza effect beyond demethylating promoters. Khan et al showed TFF1 expression is directly induced by gastrin [76], so 5aza-mediated reversal of TFF1 loss might be explained in part by indirect effects on viral replication and gastrin induction in addition to the observed demethylation of the TFF1 promoter. TFF1, which encodes trefoil factor 1, is suggested to be a tumor suppressor gene that is lost or methylated in many gastric cancers [77]. Interestingly, prior studies suggest that EBV-related hypermethylation is stimulated by pre-existing promoter methylation [19,21,29], and our data on TFF1 as well as ICAM1 promoters would support this characteristic.
Prior studies showed loss of ID4 expression in about 30% of gastric cancer as well as a means to upregulate ID4 using 5aza in gastric cell lines [32]. Our work shed light on the impact of EBV infection by demonstrating strong downregulation of ID2 and ID4 upon infection. ID2 and ID4 are members of a family of "inhibitor of differentiation" transcriptional regulators. In breast cancer, methylation-related silencing of ID4 is a poor prognostic indicator, and demethylation is proposed as a means of overcoming ID4 repression [78]. In our own demethylation studies, we showed that ID2 and ID4 silencing could be reversed by 5aza in AGS cells.
COX2 encodes an enzyme critical for prostaglandin production that mediates inflammation in the gastrointestinal tract. COX2 may contribute to carcinogenesis by promoting apoptosis resistance, angiogenesis, invasiveness, and by affecting host immunity [79][80][81][82][83][84][85][86][87][88]. COX2 gene silencing by hypermethylation in gastric carcinoma cells was previously reported [18,89,90]. Our data show that EBV infection causes a 50-fold decrease in COX2 mRNA levels in infected compared to uninfected AGS cells. At the protein level, western blot confirmed downregulation of COX2 in EBV-infected AGS cells although the magnitude of change was less dramatic at about two-fold. Nearly complete restoration of baseline mRNA expression upon demethylation suggests a potential pharmacologic means of reversing the viral effect. The clinical implications of such a finding are intriguing given the reported beneficial effects of COX2 inhibition on incidence, recurrence and outcome of gastrointestinal malignancy [88,91]. Follow-up studies are warranted to explore if COX2 testing of gastric cancer tissue identifies patients most likely to benefit from COX2 inhibition. RNA-based assays may be more informative than immunohistochemistry given our evidence that immunostains were uniquely incapable of quantifying changes in COX2 levels in response to EBV infection of AGS cells. Another group found that COX2 immunostain results were prognostic even though they were not EBV-associated in a series of patients treated for gastric cancer [92].
Heritable polymorphisms in COX2 and in IGFBP3 have been reported to affect the risk of developing gastric cancer [91,93,94]. IGFBP3 is thought to reduce gastric cancer metastasis by sequestering IGFs to prevent them from triggering receptor tyrosine kinases [95]. Methylation of the IGFBP3 promoter is found in 67% of gastric cancers, and its silencing is predicted to increase tumor aggressiveness [96]. Our data on upregulation of IGFBP3 in response to 5aza treatment confirms a previous study done on uninfected AGS cells [43]. Interestingly, Lee et al reported that high IGFBP3 expression is a positive predictive marker for response to the antineoplastic drugs paclitaxel and etoposide, suggesting that EBV-infected cancers overexpressing IGFBP3 might be particularly susceptible to these chemotherapeutic agents [97]. If confirmed, these insights could impact tumor classification schemes and assist in managing patients with gastric cancer.
Conclusions
EBV infection had a profound effect on the expression of IGFBP3. Unlike IGFBP3 transcripts, which were upregulated, most of the tested genes were downregulated by EBV infection. Methylation may be a common mechanism underlying the diverse effects of viral infection. To the extent that viral infection is associated with methylation in vivo, demethylating agents could provide a unified therapeutic approach to overcoming viral effects.
Demethylating agents are already used clinically for managing certain neoplasms [98], and their efficacy in EBV-related gastric cancer should be considered. Our pilot data shows frequent and substantial restoration of multiple transcripts in 5aza-treated, infected cells. Pilot sequencing data identify selective effects of virus infection and 5aza treatment on promoter methylation, providing an impetus for further work to characterize viral-mediated effects and to understand how treatments might be devised to overcome viral effects in tumor cells.
Null genotypes of Glutathione S-transferase M1 and T1 and risk of oral cancer: A meta-analysis
Background: Glutathione S-transferase M1 (GSTM1) and Glutathione S-transferase T1 (GSTT1) null genotypes have been considered risk factors for many cancers. Numerous studies have been conducted to evaluate the association of the null genotypes of GSTM1 and GSTT1 with increased susceptibility to oral cancer, and these have produced inconsistent and inconclusive results. In the present study, the possible association of oral cancer (OC) with the GSTM1 and GSTT1 null genotypes was explored by a meta-analysis. Materials and Methods: A meta-analysis was conducted on published original studies retrieved from the literature using a bibliographic search of two electronic databases: MEDLINE (National Library of Medicine, USA) and EMBASE. The pooled odds ratio and the presence of publication bias in those studies were evaluated. Results: A total of 49 studies concerning oral cancer (OC) were identified for the GSTM1 null genotype. Similarly, 36 studies were identified for the GSTT1 null genotype. The pooled OR was 1.551 (95% confidence interval [CI]: 1.355-1.774) for the GSTM1 null genotype, while for the GSTT1 null genotype, the pooled OR was 1.377 (95% CI: 1.155-1.642). No evidence of publication bias was detected among the included studies. Conclusion: The results suggest that the Glutathione S-transferase M1 and Glutathione S-transferase T1 null genotypes significantly increase the risk of developing oral cancer.
individual to individual. Therefore, variations in host genetic factors that contribute to carcinogenic pathways may play a role in susceptibility to oral cancer. The implication is that a genetic deficit in the enzymes that metabolise tobacco carcinogens is likely to influence a person's susceptibility to oral cancer. [6] Furthermore, animal studies relating tobacco-smoke carcinogens to OC have attributed the mechanism of oral carcinogenesis to the ability of metabolites formed from tobacco smoking to bind DNA, forming DNA adducts. [7] Failure to repair such DNA adducts prior to DNA replication may induce mutations in oncogenes or tumor suppressor genes that can lead to malignant transformation of the cell, thereby initiating carcinogenesis. [8] These chemical carcinogens are eliminated by the action of xenobiotic-metabolizing enzymes, which provide molecular and cellular mechanisms for detoxifying and excreting harmful substances and are divided into phase I and phase II enzymes. Phase I xenobiotic enzymes are involved in oxidation, reduction and hydroxylation, rendering hydrophobic substances more hydrophilic and thus more easily excretable; [9] examples of these enzymes include cytochrome P450s (CYPs), cyclooxygenases and aldo-keto reductases. Phase II xenobiotic enzymes introduce polar moieties and thereby assist in excretion. [9] Examples of phase II enzymes include uridine diphosphate glucuronosyltransferases, N,O-acetyltransferases, sulfotransferases and glutathione S-transferases (GSTs).
Glutathione S-transferases (GSTs), phase II enzymes, are known to protect cells from oxidative stress. Polymorphisms of GSTM1 and GSTT1 have been extensively explored in various malignancies. It is hypothesized that deleted variants of the GSTM1 and GSTT1 genes result in loss of functional activity, which has often been reported as a factor influencing individual susceptibility to cancer, supporting the concept of gene-environment interactions in oral cancer risk. [10-12] This concept has been addressed by focused studies, but the results have been inconsistent and obscure. [10-13] Further, a previous meta-analysis found the GSTM1 null genotype to be associated with OC risk only in Asian populations, [14] while another recent study reported an association of GSTM1 null polymorphisms with increased OC risk. [15] In addition, a recent meta-analysis on the association of the GSTT1 null genotype [15] with OC suggests an increased risk of developing OC; however, it was based on limited literature owing to its study selection criteria. Therefore, whether the null genotypes of GSTM1 or GSTT1 are risk factors for OC remains unclear. Hence, in the present study, an evidence-based quantitative meta-analysis was conducted to address this controversy.
Selection of studies
Two investigators independently searched two electronic databases, MEDLINE (National Library of Medicine, USA) and EMBASE, for studies pertaining to the deficiency of the enzymes GSTM1 and GSTT1 and the risk of oral cancer, covering all papers published up to October 2021. The search was conducted using combinations of the following search terms: GSTM1, GSTT1, oral cancers, mouth neoplasm, glutathione, null genotype. Additional articles were also manually retrieved via the references cited in these publications and review articles.
The following criteria were used for the selection of articles for the meta-analysis: (1) only studies that explicitly describe the association of oral squamous cell carcinoma with the GSTM1/GSTT1 null genotypes; (2) the sources of cases and controls, as well as the histopathological diagnosis of oral squamous cell carcinoma, should be mentioned; (3) individuals should have been genotyped solely through the use of the polymerase chain reaction technique; (4) the sample size, odds ratios (ORs), and 95% CIs, as well as any other information that can be used to deduce the results, should have been stated. Accordingly, the exclusion criteria were: (1) the design and definition of the experiments were clearly different from those of the selected papers; (2) the sample size, source of cases and controls, or other essential information was not presented; and (3) reviews and duplicated literature.
Extraction of data
Data from the selected articles were extracted and entered into MedCalc, version 20.018. The extraction was performed by two investigators independently. For conflicting evaluations, an agreement was reached following a discussion. For each study, the author, year of publication, country where the study was carried out, number, race and gender of patients and controls, control source (hospital based or population based) and matching of cases and controls were rigorously tabulated.
Statistical analysis
The OR of GSTM1 and GSTT1 polymorphisms in OC for each study was recalculated, and the corresponding 95% CIs were recorded. The null hypothesis (H0) was that all studies estimate the same effect size, against the alternative hypothesis (HA) that the studies estimate different effect sizes. The Q statistic was used to test whether study heterogeneity exists, and the I² statistic was used to quantify the percentage of variation across studies attributable to heterogeneity rather than chance. [16,17] If P ≤ 0.05, providing evidence against the null hypothesis, H0 was rejected, HA was accepted, and ORs were pooled according to the random-effects model by the DerSimonian and Laird method; [18] otherwise, the fixed-effect model by the inverse-variance method was used. [9] To identify publication bias, the Begg rank correlation and Egger regression tests were used. [19]
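For reference, the quantities named above take the following standard forms, with $\hat{\theta}_i = \ln \mathrm{OR}_i$, within-study variances $v_i$, and inverse-variance weights $w_i = 1/v_i$ (a sketch of the conventional formulas, which the paper applies but does not reproduce):

$$Q = \sum_{i=1}^{k} w_i \left(\hat{\theta}_i - \hat{\theta}_F\right)^2, \qquad I^2 = \max\!\left(0,\; \frac{Q - (k-1)}{Q}\right) \times 100\%,$$

where $\hat{\theta}_F = \sum_i w_i \hat{\theta}_i / \sum_i w_i$ is the fixed-effect (inverse-variance) pooled estimate. The DerSimonian-Laird between-study variance is

$$\hat{\tau}^2 = \max\!\left(0,\; \frac{Q - (k-1)}{\sum_i w_i - \sum_i w_i^2 / \sum_i w_i}\right),$$

and the random-effects pooled estimate uses adjusted weights $w_i^{*} = 1/(v_i + \hat{\tau}^2)$:

$$\hat{\theta}_R = \frac{\sum_i w_i^{*} \hat{\theta}_i}{\sum_i w_i^{*}}, \qquad \mathrm{OR}_{\mathrm{pooled}} = \exp\!\left(\hat{\theta}_R\right).$$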
RESULTS
A total of 51 studies associating the GSTM1 null genotype with oral squamous cell carcinoma were identified. After careful review, two irrelevant studies were excluded based on the inclusion and exclusion criteria. For computation purposes, the data from the study of Park et al. [20] were divided into two datasets, as the study was performed in two different populations (African Americans and Caucasians) with a large sample size, with both datasets published in a single paper. A database was established from the information extracted from each publication, as indicated in Table 1. A total of 36 studies were listed for the GSTT1 null genotype after excluding 1 study based on the inclusion and exclusion criteria, as indicated in Table 2.
Of the 49 studies included in the meta-analysis of GSTM1, 33 came from Asian countries, 6 from European countries, and 10 from American countries. The source of controls in the GSTM1 studies was predominantly hospital based, followed by population based and a combination of hospital and population based, constituting 27, 17, and 5 studies, respectively. The controls were matched to cases on sex, age, geographical location, ethnicity, and race in 22, 20, 7, 9, and 4 studies, respectively. Socioeconomic status of cases was matched with controls in 4 studies, and in 1 study, hospital distribution was matched with controls. In 2 studies, controls were matched with cases on habits. For the GSTT1 null genotype, there were 26 studies from Asian countries, 3 from European countries, and 7 from American countries. The source of controls was hospital based in 20 studies, population based in 13, and mixed hospital and population based in 3. The controls were matched to cases on sex, age, geographical location, ethnicity, and race in 15, 19, 7, 8, and 2 studies, respectively. Socioeconomic status of cases was matched with controls in 3 studies, and in 1 study, hospital distribution was matched with controls. In 4 studies, controls were matched with cases on habits. In 14 studies of GSTM1 and 10 studies of GSTT1, matching of cases and controls was not mentioned.
Population frequencies
A total of 7049 cases and 10,308 controls from the 49 included case-control studies for the GSTM1 null genotype were analyzed, of which 3677 cases and 4200 controls showed the null genotype, constituting 52.1% of the cases and 40.7% of the controls. For GSTT1, 5169 cases and 7307 controls from the 36 included case-control studies were analyzed, of which 1759 cases and 1849 controls showed the null genotype, accounting for 34.02% of the cases and 25.3% of the controls.
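As a quick plausibility check on these aggregate frequencies, the crude odds ratios can be recomputed directly from the pooled counts. The short sketch below is illustrative only: pooling raw counts across studies ignores between-study structure, which is why the formal meta-analysis in the next section pools per-study ORs instead.

```python
# Crude ORs from the aggregate counts reported above (illustrative only:
# pooling raw counts ignores between-study structure and confounding).
def crude_or(case_null, case_total, ctrl_null, ctrl_total):
    a, b = case_null, case_total - case_null          # cases: null / present
    c, d = ctrl_null, ctrl_total - ctrl_null          # controls: null / present
    return (a * d) / (b * c)

print(f"GSTM1 crude OR = {crude_or(3677, 7049, 4200, 10308):.2f}")  # ~1.59
print(f"GSTT1 crude OR = {crude_or(1759, 5169, 1849, 7307):.2f}")   # ~1.52
```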
Meta-analysis
Heterogeneity testing across all 50 GSTM1 null genotype datasets (the 49 included studies, with Park et al. [20] counted as two) gave a chi-square-based Q-value of 267.6496 with 50 degrees of freedom (df), I² of 81.32%, and P = 0.0001, indicating heterogeneity across the studies. Therefore, a random-effects model was used, and the overall OR for the GSTM1 null genotype was 1.551 (95% CI 1.355-1.774), as indicated in Figure 1. On the basis of these findings, it is likely that GSTM1 null status significantly increases susceptibility to OC.
Likewise, the heterogeneity test for all 36 studies of the GSTT1 null genotype yielded a chi-square-based Q-value of 183.8086 with 36 degrees of freedom (df), I² of 80.41%, and P = 0.0001, indicating heterogeneity across the studies. Hence, a random-effects model was also used for the GSTT1 null genotype, which showed an overall OR of 1.377 (95% CI 1.155-1.642), as indicated in Figure 2. The data imply that the GSTT1 null genotype is significantly associated with OC risk.
For the detection of publication bias, both Egger's test and Begg's test were applied; neither showed evidence of publication bias, with P = 0.2454 and 0.2844 for the GSTM1 and GSTT1 null genotypes, respectively. Hence, none of the studies were excluded.
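For readers who want to reproduce the asymmetry testing, the following is a minimal sketch of Egger's regression test, assuming per-study log ORs and variances are available; the arrays below are hypothetical. Begg's rank correlation can be run analogously with scipy.stats.kendalltau.

```python
# Egger's regression asymmetry test: regress the standardized effect on
# precision and test whether the intercept differs from zero.
import numpy as np
from scipy import stats

log_ors = np.array([0.45, 0.30, 0.60, 0.20, 0.50])    # hypothetical
variances = np.array([0.04, 0.09, 0.06, 0.12, 0.05])  # hypothetical

se = np.sqrt(variances)
res = stats.linregress(1.0 / se, log_ors / se)        # precision vs. standardized effect
t = res.intercept / res.intercept_stderr
p = 2 * stats.t.sf(abs(t), df=len(log_ors) - 2)
print(f"Egger intercept = {res.intercept:.3f}, P = {p:.3f}")  # P > 0.05: no asymmetry
```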
DISCUSSION
Epidemiological studies have indicated a combined role of genetic and environmental factors in individual susceptibility to cancer. [25] Over the years, it has been postulated that reduced detoxification by phase II enzymes is directly proportional to susceptibility to cancers, [26] a view supported by toxicology studies that associate increased sister chromatid exchange (SCE) with the GSTM1 null genotype and an elevated baseline SCE frequency with the GSTT1 null genotype in tobacco smokers. [27,28] Further, studies have correlated upregulation of GSTs with tumor stage and differentiation. [28,29] Hence, it is logical to believe that a deficient status of these enzymes can contribute toward tumorigenesis and survival, depending on whether carcinogenesis is in the initiation or propagation stage, respectively.
In addition, the null genotype of GSTT1 has been suggested to increase the risk of numerous cancers, such as lung cancer, [62] colorectal cancer, [63] gastric cancer, [64] leukemia, [32] head-and-neck cancer [30] and OC. [65,66] In contrast, a few studies did not find an association of the GSTT1 null genotype with a greater risk of OC. [67][68][69][70] A very recent meta-analysis by Lopes et al. [15] suggested that GSTT1 null polymorphisms may increase the risk of developing OC. However, the meta-analysis of Lopes et al. [15] omitted studies such as those of Deakin et al., [45] Hung et al., [46] Jourenkova-Mironova et al., [50] Katoh et al., [51] Sreelekha et al., [54] Kietthubthew et al., [55] Buch et al., [57] Gronau et al., [58] Sikdar et al., [12] Xie et al., [13] Drumond et al., [66] Majumder et al. [60] and Liu et al. [61] Of these non-included studies, three are adequately powered; two of them, Buch et al. [57] and Majumder et al., [60] negate the association of the GSTT1 null genotype with OC, and one, Sikdar et al., [12] relates GSTT1 polymorphism to OC only among heavy chewers. Thus, these studies may influence meta-analysis results. These ambiguous views created the necessity of a meta-analysis to estimate the risk associated with GSTM1 and GSTT1 null status for susceptibility to OC. The present meta-analysis of 50 published studies (with the data of Park et al. [20] divided into two studies) suggests that the null genotypes of GSTM1 and GSTT1 are significantly (P = 0.0001) associated with a modestly increased risk of OC. This may be attributed, first, to the expression of GST enzymes in the squamous mucosa of the head and neck [71][72][73] and, second, to the activation of benzo[a]pyrene into benzo[a]pyrene-7,8-dihydrodiol-9,10-epoxide (7,8-diol-9,10-epoxide) in tobacco-associated OC patients. [74,75] This 7,8-diol-9,10-epoxide is an identifiable substrate for the GSTM1 enzyme. GSTM1 null genotype individuals with adverse tobacco habits accumulate more DNA adducts through their inefficiency at excreting activated carcinogens such as 7,8-diol-9,10-epoxide. [74,75] If these DNA adducts accumulate at the loci of oncogenes or tumor suppressor genes, they can lead to somatic mutation and disruption of the cell cycle, which may result in carcinogenesis. [76] Further, the ORs of null GSTM1 and null GSTT1 varied with geographic location, and the prevalence of these genotypes in controls varied widely among and within regions. The GSTT1 null genotype meta-analysis of 36 studies showed that GSTT1 deficiency was associated with OC. This may be attributed to the multifactorial role of GSTT1 in both activation and detoxification processes, its expression in red blood cells resulting in a more generalized detoxification process, [9] and its expression in the squamous mucosa of the head and neck. [71][72][73] In the present meta-analysis, the GSTT1 null genotype was a significant risk factor for OC, in line with meta-analyses of esophageal cancer, [37] prostate cancer, [38] breast cancer [77] and OC. [15] Within ethnic groups, categorization appears to be subjective and varied; there is no unanimity on defining an "ethnic group". Consequently, ethnic groups, however defined, tend to evolve around social and political attitudes and developments. [78] Hence, basing ethnic identification on an objective and rigid classification is unreasonable.
In practice, most of the studies in the meta-analysis relied on geographical location as a proxy for ethnicity and failed to define criteria for ethnicity-based recruitment of OC subjects and controls. In the present meta-analysis, pooled ORs for different ethnic groups were not computed for two reasons: first, stratification of studies based on ethnicity would have been vague and spurious, and second, the number of studies in each stratum defining a particular ethnic group other than Asians would be very small or zero.
In the meta-analysis, evidence of heterogeneity was observed across studies. The reasons for this might be methodological diversity, especially the arbitrary recruitment of cancer subjects and the use of hospital-based controls. [9] Nevertheless, sensitivity analysis showed that the studies contributing to heterogeneity did not significantly alter the estimated overall OR. Although only published studies were included in the meta-analysis, the combination of Egger's and Begg's tests did not indicate publication bias, suggesting that the results of the present study are stable and credible.
CONCLUSION
In conclusion, the findings of this meta-analysis suggest a significant role of the GSTM1 and GSTT1 null genotypes in increasing the risk for OC. These findings must be viewed with caution, however, as the estimated risk is affected by substantial heterogeneity across studies and by the lack of exhaustive study designs. Further studies with larger sample sizes, exploring GSTM1 and GSTT1 null genotypes among various demographic subgroups with optimal study designs, are needed to draw precise conclusions on the risk of developing OC in individuals with GSTM1 or GSTT1 null genotypes and tobacco consumption habits.
Application of Balsa Composite in Curved Structures and Its Business Establishment: A Feasibility Study
Balsa wood (Ochroma pyramidale) is widely available in the Indonesian market, especially in the creative industry. Balsa wood is preferred for its excellent strength-to-weight ratio, together with its workability and the simplicity of its color and texture. Balsa composite has increased the economic value of balsa wood and widened its use from merely lightweight, non-structural objects to efficient super-structures. Developing balsa composite into curved structures adds another level of design sophistication and widens the variety of its uses. Application of this material in a shell structure allows the structure even greater strength-to-weight excellence, since the shell structure itself already has an excellent depth-to-span ratio. The added value of this application in Indonesia is expected to contribute significantly to the national agroforestry and construction sectors. However, introducing the material into the construction industry requires further examination, especially regarding the stability of supplies (forestry management), manufacturing readiness, and human resources. This paper presents a theoretical and statistical feasibility study of a manufacture-scale business establishment in Indonesia.
Wood in the present market
Indonesian forests are known for abundant natural resources (rainwater, sunlight, and nutrients), a wide variety of wood species, and, unfortunately, controversial exploitation as well.
High-quality wood species such as Ulin or ironwood (Schleichera oleosa) and Tembesu (Fagraea fragrans), despite their ideal strength and durability for vernacular construction in Kalimantan and Sumatera [1][2][3][4][5], are no longer used in contemporary construction, since both are currently protected by the government (SK Menteri Pertanian No. [6]). These woods are relatively lower in quality than Tembesu and Ulin, but faster to grow.
Balsa in Indonesia
Balsa (Ochroma pyramidale) was not originally used by Indonesian ancestors in their buildings. Balsa was supposedly introduced from Latin America [7], where Ecuador is nowadays one of the major balsa suppliers. However, due to its weight-to-strength excellence, balsa has been used in Indonesia, as in other countries, as a raw material for boats, planes, insulation, and crafts [7][8][9]. Balsa can be harvested within a cycle of 5 years [7], or 6-8 years in Ecuador [9], faster than many other popular woods in Indonesia. The popularity of balsa in Indonesia has not been overlooked by investors in the agriculture and forestry sector. As a result, balsa can easily be found in the Indonesian market at a relatively low price.
Balsa's strength-to-weight excellence, complemented by its light color, workability, simple texture, and smooth surface, has made it sought after as a main material in many industries. The development of balsa into sandwich composites and hollow-core composites has added strength to the wood, so that it can be used as a structural material [8,10].
The natural conditions of Indonesia and investor interest in balsa plantations suggest that Indonesia has a high prospect of becoming one of the major raw-balsa suppliers. However, the national economy will benefit more if the material is developed and sold with added value. This paper aims to describe the potential of balsa composite in curved surfaces, and the feasibility of establishing a secondary-sector balsa-composite business in Indonesia.
Curved surface
A psychological experiment conducted by Bar & Neta [11] showed that there is a preference for curved objects in human perception. The participants were fourteen subjects between 18 and 40 years old. They were given stimuli of 140 pairs of real objects and told to quickly choose which object they liked better from the two given pictures. The result was that curved objects were liked more than objects with straight lines or sharp angles [11]. This finding is also in line with the idea of Biophilic Design, introduced by Cramer & Browning [12], an understanding of the deep-seated human need to connect with nature and of how to incorporate it in design. The idea of biophilia explains why a view of nature can enhance creativity and why a crackling fire captivates humans. Curves are common in nature, whereas perfect straight lines are seldom found. Hence, in order to design as close to the natural condition of the world as possible, curved objects and curved surfaces are worth experimental effort, scientific research, and development.
Curve, laminate, composite, and shell structure
There are several ways to construct a curved surface. According to Iwamoto [13], the methods are sectioning, tessellation (or tiling), folding, contouring (or carving), and forming (or molding). Sectioning, tessellation, contouring, and forming are rather straightforward. Folding, on the other hand, can be considered an indirect way to construct a curved surface. The folding method, as in origami, is about straight lines and angles. However, folding can also form a curved structure in the big picture, or create structural support for a curved finishing surface.
The idea of the curved surface is key in shell structure development. The shell structure itself was inspired by the eggshell, a curved surface that is relatively thin compared to its span. In civil and architectural research, there are many ways to develop a shell structure: (1) a single continuous plate [14], (2) a folded plate [15], (3) a lamella structure [14], (4) a wrapped surface [14], (5) a grid shell structure [14,16], and (6) a ribbed or framed structure [14]. All of these methods also create a curved surface. Both shell structures and curved surfaces work along three directional axes. In a shell structure, the surface should distribute loads and forces evenly in three dimensions so that the entire surface area endures the same forces.
From the construction point of view, the methods of creating a curved surface can be divided based on the material used: (1) bricks, (2) frame (steel frame, timber frame). The works reported in [19] are two exemplary efforts in strengthening balsa prior to its use in any structural project.
O'Neill [10] implemented balsa as the hollow core of a fiber-reinforced sandwich composite to create a short-span lightweight bridge substitute, replacing short-span steel bridges for the US Army. The scaled experiment using a carbon-resin-epoxy system showed the possibility that the full-scale bridge prototype could support the 60,000 lb load of a fully loaded tank with under 6 inches of deflection. This means that balsa, regardless of its weakness in raw form, proved able to provide a higher strength-to-weight ratio as a composite.
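To illustrate the kind of deflection check implied above, here is a back-of-the-envelope sketch for a simply supported span under a midspan point load. All numbers except the tank load are hypothetical placeholders, not values from O'Neill's experiment, and a real sandwich bridge would also need the shear deformation of the core accounted for.

```python
# Illustrative midspan deflection of a simply supported beam under a central
# point load (delta = P*L^3 / (48*E*I)); inputs are hypothetical placeholders.
def midspan_deflection(P, L, EI):
    """P in newtons, L in metres, EI (flexural rigidity) in N*m^2."""
    return P * L**3 / (48 * EI)

P = 60_000 * 4.448   # 60,000 lbf (fully loaded tank) converted to newtons
L = 6.0              # hypothetical span, m
EI = 2.0e8           # hypothetical flexural rigidity of the sandwich section, N*m^2
delta = midspan_deflection(P, L, EI)
print(f"deflection = {delta * 1000:.1f} mm ({delta / 0.0254:.2f} in)")
```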
However, the aforementioned hollow-core balsa experiment was applied to a flat plane. Construction of hollow-core balsa in a curved surface is yet to be tested.
Current and projected Balsa supply in Indonesia
Balsa is categorized as kayu lain-lain (other wood) in the data of the Indonesian Central Bureau of Statistics.
Prototyping requirement of possible product(s)
A multi-disciplinary expert team is required to develop the intended balsa composite prototype, especially with expertise in design, structure, materials, and chemistry. An environmental study of the designed manufacturing facility is also required to avoid unwanted environmental impact. The prototype should be explored in scaled specimens and tested in a full-scale model, then submitted to secure intellectual property rights prior to its introduction to the market.
Strong [21] in his book
Manufacture requirements
At manufacturing scale, the required machinery and tools vary, depending on the intended end products. Given the projected intention of developing curved balsa composites, the industry will need a large drying chamber for the wood (or oven), dry-room storage, wood processing machines (slicer, trimmer, planer, cutter, and sander), a steam room, laminating tools, chemicals, ventilating and cleaning devices, finishing tools, testing tools, a waste processing system, packaging, and a delivery system. Digital fabrication machines such as CNC might be needed, depending on the scope of work and intended end products.
All the machinery and tools required for manufacturing are available in the market, so development of custom or entirely new machines might not be needed. However, finding the right machines and tools depends on an excellent design of the manufacturing system, which should be the task of the prototyping team.
Business model alternatives
Considering that creating balsa composites is classified as a secondary-sector industry, the most straightforward business model is a manufacture or factory model. However, there are also other alternatives, such as a 'studio' model and a 'colab-factory' model ('colab' being a shortened term for collaboration or co-laboratory). The conventional factory model would deliver standard panels at a massive scale and would require a distribution or retail system to sell the end product. This model would also require a massive marketing effort to introduce the product to the market, and no small amount of start-up capital investment. On the other hand, the 'studio' model would allow the company to create customized and varied products from standardized composite module(s) in a limited number of ready-stock products. The studio scale could be smaller than the factory, and the end products would be more exclusive.
The third alternative, the 'colab-factory', would require customer participation in the process of creating the products. The 'colab-factory''s main activities are creating the module or standard material and providing the means for customers to create their own end products.
Important factor of the establishment
The supply of raw materials and tools is one of the most important factors in the business model, regardless of which model is chosen from the alternatives. An efficient establishment will require access to supplies, with lower material procurement costs. The factory model establishment, however, will differ from the 'colab-factory' model in terms of customer accessibility. In the 'colab-factory' model, the customer or 'colab' has to access the facility, so a location with high accessibility is crucial for the establishment. Digital collaboration might be added to the 'colab-factory' model in order to expand the scale of the business through digitalization. With this addition, the 'colab-factory' establishment would not have to be accessed physically by the customer.
Future possibility of Indonesian native fast-grown plants
There are several fast-grown plants used in Indonesian vernacular architecture. Sengon (Albizia falcata), which is used in Sundanese houses [22], has a 4-10 year harvesting cycle.
Conclusion
In conclusion, the development of balsa into curved surfaces and its manufacturing establishment in Indonesia is feasible from a theoretical and statistical point of view. This statement is derived from three points: (1) the national balsa supply is projected to increase in 2020,
Mother-to-child transmission of HIV and its prevention: awareness and knowledge in Uganda and Tanzania
Awareness and knowledge about HIV mother-to-child transmission (MTCT) and preventive measures in different population groups and health personnel were analysed in future intervention areas in western Uganda and south-western Tanzania. In Uganda, a total of 751 persons (440 clients of antenatal and outpatient clinics, 43 health workers, 239 villagers, 29 traditional birth attendants) and in Tanzania, 574 persons (410 clients, 49 health workers, 93 villagers, 18 traditional birth attendants) were interviewed. When given options, knowledge on transmission during pregnancy and delivery in women was 93% and 67% in Uganda and Tanzania respectively, and 86% and 78% for transmission during breastfeeding. In Uganda 59% of male interviewees did not believe that HIV is transmitted during breastfeeding. Expressed acceptance of HIV testing was above 90% in men and women in both countries, but only 10% of the clients in Uganda and 14% in Tanzania had been tested for HIV infection. Health workers' knowledge regarding MTCT was acceptable, while traditional birth attendants' knowledge on both MTCT and preventive measures was extremely poor. Recommendations on infant feeding were not compatible with WHO recommendations for HIV-infected women. If prevention of MTCT (PMTCT) interventions are to be accepted by the population and promoted by health personnel, thorough orientation and training are mandatory.
INTRODUCTION
In Africa, especially in the countries of eastern and southern Africa most severely affected by the HIV/AIDS epidemic, the transmission of HIV from mother to child (MTCT) during pregnancy, delivery and the period of breastfeeding is by far the most common route of HIV infection in children (Newell, 1998). The estimated risk of infection is 5-10% during pregnancy, 10-20% during labour and 10-20% during breastfeeding (De Cock, Fowler, Mercier, de Vicenzi, Saba, Hoff, et al., 2000).
Substantial reduction of MTCT of HIV can be achieved by combining antiretroviral treatment, elective caesarean section and avoidance of breastfeeding (Newell, 2003). WHO guidelines recommend avoidance of all breastfeeding, if acceptable, feasible, affordable, sustainable and safe; or, alternatively, exclusive breastfeeding during the first months of life, based on the general health benefits to children of unknown status (WHO/UNAIDS/UNICEF, 1998). However, elective caesarean section and non-breastfeeding are often not safe or feasible in rural areas of most high-prevalence countries.
Many organisations have taken initiatives to reduce perinatal HIV transmission from mother to child (PMTCT programmes) in developing countries (Newell, 2001; UNICEF, 2001). The programmes embark on improving obstetric procedures, offer perinatal antiretroviral prophylaxis or antiretroviral treatment, and counsel on infant feeding options according to WHO guidelines. Most programmes are based on the use of intrapartum and neonatal single-dose nevirapine as antiretroviral prophylaxis (Guay, Musoke, Fleming, Begenda, Allen, Nakabiito et al., 1999). The uptake of PMTCT services in Africa has so far been rather low (Temmerman, Quaghebeur, Mwanyumba & Mandaliya, 2003). Furthermore, the cost-effectiveness of such programmes depends on the coverage rates. Many efforts are necessary before PMTCT services are known and accepted in the social environment (family and community) and supported by the formal health institutional environment.
With support from the German Agency for Technical Cooperation (GTZ), PMTCT programmes were introduced in peripheral areas of western Uganda and in the Mbeya region, south-western Tanzania in 2001. In Uganda, pilot PMTCT interventions had first been introduced in 1998. At the time of the study, PMTCT services were offered in seven health units, of which three were in the capital and four in rural areas (WHO, 2002).
In Tanzania, pilot PMTCT services were started in the four referral hospitals of the country in 2001. HIV prevalence in antenatal clients was about 8% in the Ugandan intervention area and 15% in the Tanzanian intervention area. Since antenatal services are the entry point for PMTCT interventions it is important to note that most women in the areas (80% in Uganda and 90% in Tanzania) attend antenatal services at least once during a pregnancy, with 20% of deliveries in the Ugandan and 47% of deliveries in the Tanzanian intervention area being institutional. Most women give birth at home, many with the help of a traditional birth attendant.
The purpose of this study was to analyse the status of awareness and knowledge about HIV MTCT and preventive measures of transmission in different population groups and health staff of the future intervention areas, as part of the preparation for the implementation of PMTCT programmes. We considered this assessment necessary because it had become obvious that the realisation of a PMTCT intervention requires awareness and acceptance by the health personnel, by the target population and by the communities. Furthermore, the assessment helped to design adequate sensitisation measures for the general population, as well as training units for the health staff and community-based resource persons such as traditional birth attendants.
Study area and study participants
The study was conducted from November 2001 to February 2002 in the districts of Kabarole, Kamwenge and Kyenjojo in western Uganda. The districts covered a population of about 1 million. In Tanzania, the assessment was carried out from January to March 2002 in the districts of Mbeya urban, Mbeya rural and Mbozi of Mbeya Region in the south-west of the country. The total population of the Mbeya Region was approximately 2 million. In Uganda, women attending the antenatal clinics and 10 men each attending the outpatient clinics were interviewed at the four prospective PMTCT intervention sites: Fort Portal District Hospital, Virika Mission Hospital (Kabarole District), Rukunyu Health Centre (Kamwenge District) and Kyenjojo Health Centre (Kyenjojo District). In addition, 159 women and 80 men as well as 29 traditional birth attendants were interviewed in eight rural villages (Bujabara, Kicuna, Kibimba, Kabale, Masaka, Kyabyakwaga, Kisansa, Kigunda) of the districts. These villages were randomly selected from a list of all villages of each district. In the centre of the selected villages, a bottle was turned and consecutive houses visited following the direction of the bottle opening. Furthermore, 43 health workers were interviewed, preferentially staff of antenatal clinics (ANCs) and mother-child health (MCH) clinics and of maternity wards in the hospitals, while in the health centres of the sites and villages any health worker was chosen. Participants at Fort Portal and Virika Hospital were categorised as urban, participants at Rukunyu and Kyenjojo Health Centres as semi-urban and participants in the villages as rural interviewees.
In Tanzania, a total of 574 interviews were held. Of these, 410 clients, 212 women attending the antenatal clinics and 198 males attending the outpatient clinics, were interviewed at the four prospective PMTCT intervention sites: Mbeya Referral Hospital (Mbeya Urban District), Vwawa District Hospital (Mbozi District), Ruanda Health Centre and Igawilo Health Centre (Mbeya Urban District). In addition, 93 villagers (42 women and 51 males) and 18 traditional birth attendants were interviewed in four randomly selected rural villages (Hanseketwa, Maganjo, Ndolezi and Idiga Songwe) of Mbeya rural and Mbozi Districts. Also, 49 health workers were interviewed at the four sites as well as in the villages. Random selection of two villages each of Mbeya rural and Mbozi districts and selection of interviewees were done as described for Uganda. Participants at Mbeya Referral Hospital and Ruanda Health Centre, located in the centre of Mbeya, were categorised as urban, participants at Igawilo Health Centre, situated at a distance of 20 km from the centre of Mbeya, and at Vwawa Hospital were categorised as semi-urban, whereas participants in the villages were categorised as rural interviewees.
The study proposals were reviewed and approved by the Ugandan district and Tanzanian regional health authorities of the respective Ministries of Health.
Questionnaire
A standardised questionnaire was developed for female clients of antenatal services, for male clients of outpatient clinics and community members, for health personnel and for traditional birth attendants (TBAs). Open-ended and closed questions on HIV transmission and preventive measures were used in order to assess the 'active' knowledge of the interviewees, which is of special relevance, for example, for health workers engaged in HIV counselling. Local authorities were informed about the purpose of the study and authorisation was obtained. The questionnaires were pre-tested over 3 days and adjustments made according to local expressions. Interviews were conducted in the local language Rutooro in Uganda and in the national language Kiswahili in Tanzania.
Data analysis
Data were checked and entered using the EPI-Info programme, version 6. Data analysis was performed using the Statistical Package for the Social Sciences (SPSS), version 11.0. The chi-square test was used to compare groups. All tests were two-tailed.
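As an illustration of the group comparison named above, a chi-square test of independence on a 2×2 contingency table can be run as follows; the counts below are hypothetical, not the study data.

```python
# Chi-square test of independence on a 2x2 table (hypothetical counts):
# rows = country, columns = aware of MTCT vs. not aware.
from scipy.stats import chi2_contingency

table = [[295, 145],   # Uganda (hypothetical)
         [381, 29]]    # Tanzania (hypothetical)
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, df = {dof}, P = {p:.4f}")
```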
RESULTS
Since the results of the interviews differ between the two countries, they are presented separately for Uganda and Tanzania (Table 1). The majority of male and female clients were married (82% and 85%, respectively). In the age group 14-25 years, 46% were primigravidae, while in the age groups 26-35 years and above 35 years, 60% and 92%, respectively, had been pregnant more than four times. Women interviewed in the villages were significantly older (p < 0.001), had a lower educational level (p < 0.001) compared with those interviewed in the urban and semi-urban health facilities, and had had significantly more pregnancies compared with women living in urban and semi-urban areas (p < 0.001). An equal number of women were interviewed in the urban, semi-urban and rural areas.
Professional background of health workers and TBAs
All 43 health workers approached participated in the study: 55% of the health workers were interviewed in urban health institutions and 45% in institutions situated in semi-urban areas. Health workers were clinical officers (6), registered or enrolled nurses (10), assistant nurses (7), midwives (16) and others (4). The median work experience was 9 years. Fifteen (35%) of the health workers had attended training courses for HIV counselling, and 10 of these were actually working as counsellors. Twenty-nine TBAs were interviewed. The majority (76%) had acquired their knowledge about antenatal and delivery care from their mother or other relatives; half of them had received additional training through programmes of the Ministry of Health; 86% reported assisting 1-5 deliveries per month, while the remaining 14% attended 5-10 deliveries per month. Two-thirds of the TBAs (66%) reported that they had their first contact with the women during pregnancy, and only one-third attended their client only for delivery. The regular use of gloves for self-protection was reported by 59% of the TBAs. Only one TBA explained that she used cord clamps on a regular basis. While only 21% of the interviewees reported having regular contact with a health unit and reporting the deliveries, referral of pregnant women to a health institution when expecting complications during pregnancy or delivery was common (83%).
Results of the interviews
Knowledge about HIV transmission of health unit clients and villagers
Nearly all women were aware that sexual intercourse is a route of HIV transmission. Only 6 women (all multipara) mentioned MTCT of HIV. When probed by closed questions, two-thirds of the interviewees expressed the opinion that MTCT was possible; however, a high proportion of women disagreed or gave 'don't know' responses. In the rural area, more women believed that MTCT was possible than in the other areas (72% v. 65%); however, the difference was not statistically significant. More than 40% of the interviewees rejected the possibility that HIV could be transmitted by breastfeeding, or admitted that they did not know.
When asked to enumerate means of preventing HIV infection, women named the use of condoms (58%), faithfulness (49%) and abstinence (32%). The majority of female interviewees had heard about the possibility of being tested for HIV (93%); however, only half of the women could name a place where it was possible to be tested. When asked if they would agree to be tested now, the majority of women answered positively (91%). Yet only 10% of the interviewees claimed that they had been tested. Fear and lack of treatment were the most often mentioned reasons for not being HIV tested.
As with the interviewed women, the routes of HIV transmission most frequently mentioned spontaneously by men were sexual intercourse (94%) and blood/sharp instruments (32%). Only 13% of the men mentioned MTCT (Table 2). The awareness of HIV transmission from mother to child was significantly higher in men compared with women (13% v. 1%; p < 0.001). Also, men living in villages named this route of transmission significantly more often than interviewees from other areas (19% v. 3%, p < 0.05), whereas the frequency of naming other routes of transmission did not differ with regard to residence. When presented with different ways of transmission, most men affirmed that transmission through sexual intercourse, blood/sharp instruments and MTCT were possible; however, more than half of the men regarded MTCT by breastfeeding as unlikely (Table 2).
When asked if they would agree to HIV testing of their wives, the majority of men answered positively. The most frequent reason given for refusal was the fact that treatment was not available; moreover, only two-thirds of men would agree to replacement feeding. Reasons given for the disapproving attitude were 'harmful for child', 'expensive', and 'lots of people want to make business'. Most men would agree to be tested (92%); however, only 10% of the interviewed men had ever been tested for HIV infection.
Knowledge of health workers and TBAs about MTCT of HIV
All health workers were aware of the transmission of HIV from mother to child. Transmission during delivery was spontaneously mentioned by 91% of the health workers, but transmission through breastfeeding by only 40%. Given different options, health workers recognised that HIV may be transmitted during delivery and breastfeeding; however, 33% of the interviewees did not believe or did not know that transmission could also occur during pregnancy.
Two-thirds of the TBAs (66%) were aware that HIV could be transmitted from mother to child. Asked to name different modes of MTCT, delivery was mentioned most frequently (68%). For the closed questions on transmission modes, half of the TBAs answered that they did not believe in transmission by breastfeeding or responded with 'don't know' (Table 3).
Knowledge of health workers and TBAs about preventive measures
Prevention of MTCT was thought to be possible by 79% of the health workers in Uganda. However, when asked how to prevent transmission, less than one-third of the interviewees could spontaneously name any measure. Even when given several options, one-third of the health workers were not aware of the prevention of MTCT by drugs. Prevention of HIV in general was thought to be possible by 38% of the TBAs. Only two TBAs spontaneously mentioned drugs, and one TBA mentioned avoidance of breastfeeding as a measure to reduce transmission. When probed, 50% of the TBAs considered avoidance of breastfeeding a method to prevent transmission from mother to child. The TBAs recommended breastfeeding a child for at least 12 months. Furthermore, TBAs thought that solid nutrients should be introduced at the age of 6 months and liquid nutrients at the age of 3 months.
Respondents
Demographic characteristics of the interviewees
Demographic characteristics of the interviewees are reflected in Table 1. Female interviewees were on average 10 years younger than male interviewees. Women and men participating in the study were mainly farmers with primary school level education; however, significantly fewer men were farmers (p < 0.001). One-third of the women in the age group 14-25 years were primigravidae, and in the age groups 26-35 years and above 35 years, 13% and 16%, respectively, had had more than 4 pregnancies. The age, educational level and number of pregnancies of women did not differ significantly with regard to residence.
Professional background of health workers and TBAs
All 49 health workers approached agreed to participate in the study: 41% of the health workers were interviewed in urban institutions, 41% in semi-urban institutions and 18% in dispensaries in villages. Health workers were clinical officers (9), registered and enrolled nurses (29), assistant nurses (3), midwives (5) and others (3). The median work experience was 17 years. Twelve Tanzanian health workers (25%) were trained as HIV counsellors. Half of the 18 TBAs interviewed had acquired their knowledge about antenatal and delivery care from their mother or other relatives, and 45% had received further training through a GTZ-supported project. The majority of the TBAs (89%) reported assisting 1-5 deliveries per month, while the remaining attended 5-10 deliveries per month. Most of the TBAs stated that they attended their client only for delivery, and one-third reported that they had their first contact with the women during pregnancy. Only 39% of the interviewed TBAs stated that they had regular contact with a health unit and reported the deliveries. However, the majority (84%) reported that they usually referred pregnant women to a health institution when expecting a complicated pregnancy or delivery.
Results of the interviews
Knowledge about HIV transmission of health unit clients and villagers
The routes of HIV transmission most frequently mentioned spontaneously by female interviewees were sexual intercourse, blood and sharp instruments. Awareness of MTCT of HIV was low: only 1% of the interviewed women mentioned this route spontaneously. However, when offered different transmission modes, most women were aware of MTCT and of transmission through breastfeeding (Table 4). When asked to enumerate means to prevent infection, the use of condoms was most frequently mentioned (58%), followed by faithfulness (49%) and abstinence (32%). Nearly all women had heard about the possibility of being tested for HIV (93%) and could name a place where testing for HIV was offered (88%). The attitude towards HIV testing was mainly positive (92%), but only 14% of the interviewees had so far been tested. Fear and lack of treatment were the most often mentioned reasons for not being tested.
Similarly, the routes of HIV transmission most frequently mentioned spontaneously by men were sexual intercourse, sharp instruments and blood; none of the respondents spontaneously mentioned MTCT. Only when given different options did the majority of interviewees agree to the possibility of MTCT. However, uncertainty about transmission through breastfeeding was considerable (24% answered 'no' or 'don't know'). Most men would agree to HIV testing of their wives. The most frequent reason given for refusal was the lack of treatment. HIV testing would be acceptable to 92% of men, although only 15% of the interviewed men had ever been tested for HIV infection.
Knowledge of health workers and TBAs about MTCT of HIV
The majority of the health workers were aware of the transmission of HIV from mother to child (Table 5). As modalities, HIV transmission associated with delivery, instruments, bruises and tears, mixing of blood, rupture of membranes and procedures such as cutting the cord and episiotomy were mentioned. As many as 94% of the TBAs were aware that HIV could be transmitted from mother to child, the majority stating that transmission occurred during delivery. Only 6% of the TBAs spontaneously mentioned breastfeeding as a possible route. When probed, still only 56% believed that transmission through breastfeeding was possible.
Knowledge of health workers and TBAs about preventive measures
Prevention of MTCT was thought to be possible by 84% of the health workers in Tanzania. Asked how to prevent transmission, the most frequently spontaneously mentioned measure was the avoidance of breastfeeding. Probed with closed questions, avoidance of breastfeeding (74%) and administration of drugs (55%) were most often mentioned. Of the TBAs, only 20% agreed that any prevention was possible. Even when probed, less than a quarter of the Tanzanian TBAs believed that caesarean section, drugs or avoidance of breastfeeding could reduce MTCT. Furthermore, TBAs recommended breastfeeding a child for at least 18 months, introducing solid nutrients at the age of 6 months and liquid nutrients at the age of 3 months.
DISCUSSION
Interventions such as PMTCT programmes can only be successfully implemented if communities understand the underlying problem and know about the existence and benefits of the services. The quality of advice-giving depends on the health workers' and counsellors' knowledge and training. Their perspectives, attitudes, beliefs and self-practice will considerably influence women's choices, and thus form the crucial link between policy and practice (de Paoli, Manongi & Klepp, 2002). We therefore thought that it would be necessary to assess the levels of awareness and knowledge of MTCT and PMTCT in the beneficiaries and in the caregivers before starting a PMTCT intervention programme.
In Tanzanian women, knowledge about MTCT (93%) and about transmission through breastfeeding (86%) was higher than in Ugandan women (67% and 59%, respectively). Since no sensitisation activities had started, the reason for these differences in knowledge remains unknown. Knowledge in western Uganda was, however, still higher than knowledge reported from the Rakai District, Uganda, where among pregnant women visited at home, 40% affirmed that transmission during pregnancy may occur, 58% during delivery, and only 19% through breastfeeding (Kigozi, 2002).
In accordance with a report by Fylkesnes, Haworth, Rosenvärd and Kwapa (1999), expressed acceptance of HIV testing was found to be high (92%) in our study area. Voluntary counselling and testing (VCT) services had been introduced within the context of the HIV/AIDS control programmes in the two study areas in the late 80s. Therefore, availability and accessibility of counselling and testing could not have been major constraints. However, only 10% of the interviewed women and 14% of the interviewed men in both countries had undergone HIV testing at the time of interview. The misleading potential of measured initial intention of acceptance and willingness to be HIV-tested has been observed before (Fylkesnes et al., 1999). Clearly, in test-dependent prevention programmes, as in our case, knowledge of HIV status is a prerequisite for making use of PMTCT services. Therefore, serious efforts to increase the acceptance of HIV testing need to be undertaken. With the exception of transmission through breastfeeding (42% in Uganda, 76% in Tanzania), overall knowledge of HIV transmission, and of MTCT in particular, was not worse in men than in women. Surprisingly, in both countries male acceptance of HIV testing of the wife, of the use of an antiretroviral drug and, in Tanzania, of non-breastfeeding if it could prevent HIV infection of the child was far above 90%. In Uganda, men were reluctant to accept non-breastfeeding (67%), the major reasons being concerns about the costs of alternative food and the health of the child. When pregnant women in the Rakai District, Uganda, were asked to estimate their husbands' attitude, 46% thought that their husband would allow them not to breastfeed if it were a way of preventing MTCT (Kigozi, 2002).
Consequences of disclosure of HIV status have often been described and may include isolation, expulsion from the family and violence (Gaillard, Melis, Mwanyumba, Claeys, Muigai, Mandaliya et al., 2002). In a Kenyan study, after discussing advantages and risks, only a third of 290 HIV-infected women included in a PMTCT study informed their partners. Ten per cent subsequently experienced violence or disruption of the relationship (Gaillard et al., 2002). Likewise, women who are not breastfeeding fear being suspected of being HIV-infected, and face stigma, isolation and the risk of being thrown out of the community (WHO, 2002). With regard to the high male acceptance rate in our study, we wondered whether answers were given to please the interviewer or to prevent a suggestion of ignorance.
Knowledge of health workers about HIV transmission during pregnancy (67%) and through breastfeeding (84%) was low in the Ugandan setting. Furthermore, a clear need for specific training measures in both study areas became obvious with regard to preventive measures of MTCT. In this context it is important to differentiate between knowledge by 'recall', reflecting active knowledge, and 'recognition' of correct answers given to closed-ended options, reflecting passive knowledge. Only if a health worker actively recalls facts and information will he/she be in a position to transfer the messages and information to the client.
TBAs' knowledge about MTCT and preventive measures was very poor. Most women in peripheral areas live too far from a health unit in which deliveries can be assisted, or cannot afford to give birth in an institution and therefore often rely on the assistance of TBAs. Furthermore, since 66% of the TBAs in Uganda and 33% of the Tanzanian TBAs had their first contact with the pregnant women during the antenatal phase they could play an important role as sources of information about PMTCT services for the women and their families, as well as refer the women to health units where PMTCT services are available.
In both countries, TBAs recommended breastfeeding women to introduce solid food at 6 months and liquids earlier, at 3 months in Tanzania and at 4 months in Uganda. These recommendations are not in accordance with the WHO infant feeding recommendations for HIV-infected mothers. In a recent study we found that at 4 months exclusive breastfeeding was practised by only 10% in the Ugandan and 19% in the Tanzanian intervention area (Poggensee, Schulze, Moneta, Mbezi, Baryomunsi & Harms, 2004). Women introduced liquids on average at 4 months in Uganda and 3 months in Tanzania, as found to be recommended by the interviewed TBAs. Common beliefs that 'water is necessary for quenching thirst' or that 'breastfeeding only will cause hard stools and water will prevent constipation' make it difficult to realise the international feeding recommendations (de Paoli et al., 2002).
Our results show that overall knowledge and awareness of MTCT and preventive measures was present to a certain degree in the population of the study area. Knowledge of health workers showed considerable gaps, and was extremely poor in TBAs. TBAs and possibly other community-based resource persons have an important role to play in communicating messages and as a link between the social environment of the women and the health institution. Therefore, not only health workers but also TBAs need to be trained regarding MTCT, PMTCT, infant feeding options and issues regarding their own protection.
ACKNOWLEDGEMENTS
We thank colleagues of the GTZ-supported Basic Health Project in western Uganda and of the Comprehensive AIDS Control Programme in Mbeya Region for their assistance. Special thanks go to Carol Mbabazi in Uganda and to Oliver Kyando and Modesta Kiona in Tanzania for help with logistics and with interviewing in Rutooro and Kiswahili. The study was financially supported by the German Ministry for Economic Cooperation and Development, Germany, through the project PN 01.2029.5 (Prevention of mother-to-child transmission of HIV).
Reduced immune responses to hepatitis B primary vaccination in obese individuals with nonalcoholic fatty liver disease (NAFLD)
Obesity and cirrhosis are associated with poor hepatitis B virus (HBV) vaccine responses, but vaccine efficacy has not been assessed in nonalcoholic fatty liver disease (NAFLD). Sixty-eight HBV-naïve adults with NAFLD were enrolled through the Canadian HBV network and completed a three-dose HBV or HBV/HAV vaccine series (Engerix-B® or Twinrix®, GlaxoSmithKline). Anti-HBs titers were measured at 1–3 months post third dose. In 31/68 subjects enrolled at the coordinating site, T-cell proliferation and peripheral follicular T-helper cells (pTFH) were assessed using PBMC. Immune response was also studied in NAFLD mice. NAFLD patients were stratified as low-risk obesity, BMI < 35 (N = 40), vs. medium/high-risk obesity, BMI > 35 (N = 28). Anti-HBs titers were lower in the medium/high-risk obesity class (385 IU/L ± 79) than in the low-risk obesity class (642 IU/L ± 68.2), p = 0.02. High-risk obesity cases (N = 14) showed a lower vaccine-specific CD3+ CD4+ T-cell response compared to low-risk obesity patients (N = 17), p = 0.02. Low vaccine responders showed dysfunctional pTFH. NAFLD mice showed lower anti-HBs levels and T-cell responses vs. controls. In conclusion, we report here that obese individuals with NAFLD exhibit decreased HBV vaccine-specific immune responses.
INTRODUCTION
The implementation of universal childhood hepatitis B virus (HBV) vaccination in >200 countries has led to a significant decline in the global incidence of chronic hepatitis B and hepatocellular carcinoma (HCC), and the vaccine has been hailed as the first known to prevent cancer. 1 All available HBV vaccines contain the major HBV surface antigen (HBsAg) and induce antibody and T cell responses in >85% of immunocompetent adults. [2][3][4] The Centers for Disease Control and Prevention recommends the HBV vaccine for all individuals with chronic liver disease (i.e., persons with hepatitis C virus (HCV) infection, diabetes, autoimmune hepatitis, alcohol-related liver disease, and nonalcoholic fatty liver disease (NAFLD)). 5
Over the last 2 decades, the obesity epidemic has contributed to the development of the metabolic syndrome and NAFLD. 6-8 Obesity leads to a chronic inflammatory state and negatively affects immunity, as evidenced by higher rates of vaccine failure, impaired immunity against foreign pathogens, and infection complications. 9 In general, there is limited knowledge regarding the impact of obesity and NAFLD on immune function per se. Studies in animal models suggest altered lymphocyte responsiveness to mitogens, dysregulated cytokine expression, and decreased macrophage, natural killer cell, and dendritic cell responses to pathogens. 10,11 In general, obesity can disrupt overall immune system integrity and lead to alterations in leukocyte development, migration, and diversity. Although HBV vaccination is strongly recommended in all individuals with chronic liver disease, there are limited data on HBV vaccine responses in adults with obesity and NAFLD. Moreover, most studies focused only on B-cell immunity (i.e., antibody to HBV surface antigen (anti-HBs) responses) to the vaccine. Some studies have found that obesity was associated with a poor response to HBV vaccine in preadolescents and in adults with body mass index (BMI) >30 kg/m2 compared to non-obese individuals. 12,13 Reduced anti-HBs seroconversion rates were also found in adults with diabetes and renal disease compared to healthy adults. 14,15 Although one study found a high prevalence of HBV vaccine unresponsiveness in ~70% of pediatric patients with NAFLD, 16 another reported no difference in vaccine response rates in children with NAFLD compared to healthy controls. 17
We conducted a prospective interventional study to assess the efficacy of HBV vaccination in HBV-naïve adults with NAFLD in Canada, a low HBV endemicity country that did not implement universal childhood immunization until the mid 1990s. 18 In this study, we assessed anti-HBs titers, ex vivo B- and T-cell responses, and the role of a special subset of CD4+ T cells (i.e., peripheral T follicular helper cells, pTFHs) involved in the generation and maintenance of memory B and plasma cells following completion of the three-dose HBV vaccine series (i.e., Engerix-B® or Twinrix®). Moreover, we provide supportive parallel HBV immunization data from a unique high-fat diet (HFD)-induced mouse model of NAFLD.
Summary of baseline demographic and clinical data
In total, 386 patients were screened, of whom 356 were excluded, usually due to prior HBV exposure or vaccination (N = 193) or age (N = 127). Three withdrew from the study or were lost to follow-up. Overall, 68 HBV-naïve NAFLD adults completed the standard HBV primary vaccination and were stratified as BMI < 35 (low-risk obesity) or BMI > 35 (medium/high-risk obesity) based on the Health Canada nomogram 19 (Figure 1). Baseline demographic and clinical characteristics (Table 2) were also assessed.
Anti-HBs titers in adults with NAFLD after completion of the three-dose HBV vaccine series negatively correlated with BMI
In total, 85% (58/68) of study participants developed protective anti-HBs levels (anti-HBs > 10 IU/L) (Table 3). HBV vaccine non-response (anti-HBs < 10 IU/L) was found in 10/68 NAFLD patients. The rates of vaccine non-response were similar in low- vs. high-risk obesity class patients with NAFLD (i.e., 15% (6/40) of patients with BMI < 35 and 14.2% (4/28) of patients with BMI > 35 were nonresponders) (Table 4). Poor response to the vaccine was not related to age, gender, smoking status, metabolic syndrome (diabetes, dyslipidemia, and hypertension), or fibrosis stage (including patients who underwent liver biopsy, Supplementary Table 1). All the vaccine nonresponders had an increased waist circumference, indicating abdominal obesity and a risk of obesity-related health problems based on Health Canada recommendations (i.e., ≥102 cm for males and ≥82 cm for females) 20 (Table 4). Interestingly, a negative correlation (r = −0.3, p = 0.03) was observed between anti-HBs and BMI (Fig. 2A). In addition, humoral vaccine response was not associated with age or sex (Fig. 2B, C).
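As an illustration of the correlation analysis reported above, the sketch below computes a Pearson correlation between BMI and anti-HBs titer. The arrays are hypothetical stand-ins for the per-patient data, and the use of Pearson's r is an assumption; the text does not state whether a Pearson or Spearman coefficient was used.

```python
# Correlation between BMI and post-vaccination anti-HBs titer
# (hypothetical values; Pearson's r is an assumption, not stated in the text).
import numpy as np
from scipy import stats

bmi = np.array([28, 30, 32, 34, 36, 38, 41, 45])               # hypothetical
anti_hbs = np.array([820, 700, 610, 520, 400, 350, 150, 90])   # hypothetical IU/L
r, p = stats.pearsonr(bmi, anti_hbs)
print(f"r = {r:.2f}, P = {p:.4f}")
```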
NAFLD patients with BMI > 35 show weaker humoral immune responses to hepatitis B vaccine compared to those with BMI < 35
Overall, NAFLD patients with BMI > 35 showed significantly lower mean anti-HBs levels compared to low-risk obesity class patients with BMI < 35. The mean anti-HBs levels in patients with BMI > 35 were 385 IU/L ± 79 compared to 642 IU/L ± 68.2 in patients with BMI < 35 (p = 0.02; Fig. 2D).
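For this two-group titer comparison, the statistical analysis section names the Mann-Whitney U test; a hedged SciPy sketch with simulated stand-in titer arrays follows:

```python
# Hedged sketch: Mann-Whitney U comparison of anti-HBs titers between
# low- and high-risk obesity groups. Values are simulated stand-ins.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(7)
titers_bmi_lt35 = rng.normal(642, 68.2, size=40).clip(min=0)  # hypothetical, BMI < 35
titers_bmi_gt35 = rng.normal(385, 79.0, size=28).clip(min=0)  # hypothetical, BMI > 35

stat, p = mannwhitneyu(titers_bmi_lt35, titers_bmi_gt35, alternative="two-sided")
print(f"U = {stat:.0f}, p = {p:.4f}")
```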
Assessment of total memory B cells
Previous studies have shown that individuals who are nonresponders to the HBV vaccine (anti-HBs < 10 IU/L) show a strong anamnestic response to a booster dose, suggesting an effect on memory B cells. Analyses of PBMC were performed in subjects (N = 31/68) enrolled at the coordinating site. The proportion of total memory B cells (CD45+ CD19+ CD27+) in PBMC from NAFLD patients was comparable between patients with BMI > 35 and BMI < 35 (p > 0.05; Supplementary Fig. 1).
Assessment of memory T cells and HBsAg-specific proliferation of CD3+ CD4+ TH cells
Long-term T cell-mediated protection depends on the generation of a pool of memory cells to protect against future pathogen challenge. The two types of memory T cells central to a vaccine response are central memory T cells (TCM), which have a reactive function (similar to memory B cells), and effector memory T cells (TEM), which are protective (similar to antibodies) 21 . The gating strategy for memory T cells is shown in Fig. 3A. The proportions of circulating TEM and TCM cells in NAFLD adults with BMI < 35 and BMI > 35 at baseline and after vaccination showed no significant difference (Fig. 3B, C). CD4+ T cells showed HBsAg-specific proliferation in 12/17 (70.5%) patients in the BMI < 35 group compared to 5/14 (35.7%) in the BMI > 35 group (Fig. 3D, E). In addition, high-risk obesity NAFLD patients (mean stimulation index (SI) 3.1 ± 0.62) showed reduced proliferation compared to low-risk cases (mean SI 5.1 ± 0.68), p = 0.02. Further, gating of CD3+ CD4+ CFSE-low cells revealed comparable frequencies of TEM cells between both groups (Fig. 3F). In 2/17 cases with BMI < 35 and 1/13 with BMI > 35 we found antigen-specific proliferation of CD8+ cells (Supplementary Fig. 2). This was unexpected given that CD8+ T-cell responses are difficult to induce with protein antigens; these responses are likely due to cross-presentation of antigen to CD8+ T cells.
Correlation between HBsAg-specific T-cell response and humoral response
A direct correlation has been reported previously between serological and T-cell responses 22 . We found a positive correlation between HBsAg-specific T-cell responses and anti-HBs levels.

Serum CXCL13 and IL-21 and peripheral T follicular helper cells (pTFHs) in NAFLD patients at baseline and post vaccination
Initially, we compared the pTFH cell response in NAFLD patients with BMI > 35 and BMI < 35 at baseline and post vaccination and found no difference between the groups (Supplementary Fig. 3). We then grouped patients on the basis of antibody levels, given that pTFH cells, a subset of CD4+ T cells, are involved in the generation and maintenance of memory B and plasma cells 23 . We evaluated two important effectors, serum CXCL13 and IL-21 levels, and the proportion of pTFHs in non/low responders (anti-HBs < 100 IU/L) and normal-high responders (anti-HBs > 100 IU/L). No differences were observed in the baseline and post-vaccination cytokine levels between the groups (Fig. 4A, B). However, we found a negative correlation between baseline serum CXCL13 levels and anti-HBs levels (Fig. 4C). Interestingly, a post-vaccination increase in pTFHs was noted only in the normal-high responders and not in the non/low responders (Fig. 4D).
Antigen-specific release of CXCL13 and IL-21 by sorted pTFHs
Coculture of pTFHs with B cells in the presence of HBsAg showed a significant antigen-specific increase in both CXCL13 and IL-21 in the normal-high responders but not in the non/low responders, suggesting dysfunctional pTFHs in this group (Fig. 4E, F).
Anti-HBs antibody levels and T-cell responses in a mouse model of NAFLD
The design of the mouse experiments, in which mice were fed a HFD to induce NAFLD and given the HBV vaccine, is depicted in Fig. 5. The liver histology of mice on HFD showed hallmarks of fatty liver disease (<66% steatosis in zones 2 and 3), confirming HFD-induced NAFLD (Fig. 6A). HFD mice both in the V→N group (vaccinated before NAFLD was induced) and in the N→V group (vaccine administered after NAFLD was induced by HFD) exhibited lower anti-HBs antibody levels than controls (p < 0.05) (Fig. 6B). The N→V group mice also showed a lower CD4+ T-cell response than normal mice, and a similar trend was observed in V→N study mice compared to controls (Fig. 6C).
DISCUSSION
This novel interventional study was designed to evaluate the efficacy of hepatitis B vaccine using a standard dosing schedule in HBV-naïve/previously unvaccinated adults with a clinical diagnosis of NAFLD. Overall, this study determined that upon completion of the full vaccination regimen, using the approved recombinant hepatitis B vaccine, seroprotection was induced in ~85% of participants, a lower rate than typically reported in healthy adults 24,25 . Importantly, this reduced antibody response was more pronounced in high-risk obesity class NAFLD patients with BMI > 35 compared to individuals with a lower BMI < 35 at only ~1 month post primary vaccination. Similar to others, we noted a negative correlation between response to vaccination and BMI 26 . However, it is also noteworthy that 3 of 10 nonresponders in the study were cases with a BMI of 25-30 (i.e., overweight but nonobese individuals). We found comparable rates of non-immunity (~15%) in both low- and high-risk obesity groups, which addresses the potential issue of sufficient needle length for immunization of obese individuals 27 . Older age at the time of vaccination (>60 years) has been associated with inadequate antibody response 28 . The median age in our study group was ~50 years and no correlation between age and humoral response was observed. Further, known factors for vaccine nonresponse such as liver fibrosis stage (i.e., cirrhosis), metabolic syndrome (i.e., diabetes, hypertension, and dyslipidemia), gender, or smoking status were not associated with antibody response in the current study 29 .
[Table 4 caption: clinical characteristics of HBV vaccine nonresponders (anti-HBs titers < 10 IU/L). Fig. 2 caption: B anti-HBs titers were comparable between males and females (p = 0.7, Mann-Whitney U test); C no association was noted between age and anti-HBs levels; D anti-HBs levels were significantly lower (p = 0.02) in high-risk vs. low-risk obesity NAFLD patients (Mann-Whitney U test); data are presented as mean ± standard deviation.]
Antibody response to hepatitis B vaccine involves both B and T cells. We analyzed T-cell responses in 31 NAFLD patients and found reduced HBsAg-specific CD4+ T-cell responses in patients with BMI > 35 compared to patients with BMI < 35. We noted a comparable frequency of TEM among the proliferated cells in the low- and high-risk obesity groups; however, we did not investigate cytokine production by these cells. Prior studies found intact T-cell responses, in the absence of antibodies, in healthy individuals immunized with the recombinant HBV vaccine and the traditional plasma-derived vaccine 30 . In obesity, cell death and adipocyte enlargement occur due to a constant pro-inflammatory state. Additionally, there is a reduced proportion of CD4+ T cells, which are activated by nonconventional means involving leptin, resulting in suboptimal CD4+ T-cell differentiation and proliferation in response to antigens 31,32 . Moreover, studies in NAFLD mouse models suggest decreased antigen processing and presentation by splenic dendritic cells compared to controls, in association with an impaired T-cell response 33 .
Dysfunctional pTFH cells have been reported in healthy subjects who were nonresponders (anti-HBs < 10 IU/L) to the HBV vaccine, and also in influenza vaccine nonresponders 34,35 . It is noteworthy that in NAFLD, low responders in addition to nonresponders show dysfunctional pTFHs. Studies have shown that prevaccination levels of CXCL11 and CXCL12 in healthy individuals, and of CXCL13 in HIV+ subjects, may predict hepatitis B vaccine response 34,36 . Thus, the results of our current study suggest that baseline CXCL13 could potentially be a biomarker for vaccine response in NAFLD, and an area for future investigation.
There is evidence regarding the role of human leukocyte antigen (HLA) alleles in response to HBV vaccination, with HLA-DRB1 and HLA-DQB1 being associated with strong antibody responses 37 . We did not perform HLA typing in our study; however, we conducted parallel preclinical studies in a NAFLD mouse model to account for vaccine responses within a model of identical genetic background. We noted blunted anti-HBs and T-cell responses in mice fed a HFD compared to normal-weight mice on a normal diet. In the first experiment, mice were fed a HFD for 9 weeks and then immunized; this group recapitulates the cohort of NAFLD patients receiving the vaccine. In the second experiment, both animal groups were immunized at baseline and then fed either a normal diet or a HFD, which potentially represents the human scenario of HBV vaccination in non-obese children who may subsequently develop adult-onset obesity and NAFLD. In agreement with our results, Miyake et al. 33 found impaired HBsAg-specific humoral and T-cell responses in NAFLD mice that received a similar diet. It would be interesting to develop a mouse model that recapitulates nonalcoholic steatohepatitis (NASH) and study the mechanism(s) of HBV vaccination response in NAFLD vs. NASH.
Our study did not include an age-matched healthy arm; however, a pooled analysis of previous studies in ~2000 healthy immunocompetent adults predicted >80% seroconversion rates in adults up to 60 years of age 28 . Additionally, we did not stratify patients into lean NAFLD (BMI < 25), low-risk (BMI 25-29.9), medium-risk (BMI 30-34.9), and high-risk obesity cases (BMI > 35), or perform comparisons based on waist circumference. However, the current study of 68 adults is one of the largest reported to date regarding HBV vaccine responses in obese adult patients with a confirmed diagnosis of NAFLD.
As per clinical guidelines and vaccine monographs, approaches for inducing a vaccine response in non- and low responders include repeating the vaccination series, a single booster dose, or immunization with higher doses. A two-dose novel vaccine, Heplisav-B®, induced high seroprotection rates in diabetics >70 years 38 . Although this vaccine is approved by the U.S. Food and Drug Administration (FDA), it is not yet approved by Health Canada. Whether NAFLD patients benefit from this vaccine is an interesting area of future investigation. It is known that 5-10% of healthy adults are nonresponders to the HBV vaccine, and 15-50% of vaccine recipients are anti-HBs negative within 5-10 years after vaccination 39 . The results of the current study are remarkable in that 15% (10/68) of NAFLD patients were nonresponders to the primary vaccination as early as 1 month post vaccination. Gara et al. 40 determined that anti-HBs titers decline more slowly among at-risk adults compared to childhood vaccine recipients. Although a recent study showed vaccine-specific T-cell responses lasting ~20-30 years after adult immunization 41 , to date no studies have explored long-term humoral and adaptive immune responses to the HBV vaccine in NAFLD adults. We plan to conduct a follow-up study to assess long-term responses (i.e., antibody waning, persistence of T-cell immunity, and anamnestic response) to the vaccine in individuals enrolled in this study.
In summary, this study showed significantly reduced HBV vaccine-specific antibody and T-cell responses in NAFLD patients with a BMI > 35 compared to low-risk obesity class NAFLD patients with BMI < 35. The hepatitis B vaccine is one of the most important human vaccines recommended worldwide. Obesity and NAFLD impair normal immune functioning and may further perpetuate chronic disease development and complications of the metabolic syndrome. This clinical study, together with complementary animal model data, can inform vaccine strategies and the development of improved immunization protocols, and highlights the impaired host immune response in high-risk obesity class individuals with NAFLD.
METHODS
Study participants and vaccines
Sixty-eight HBV-naive NAFLD patients (18-60 years) were recruited from specialized NAFLD clinics at five Canadian centers from August 2016 to 2019. Exclusion criteria were age <18 or >60 years, pregnancy, human immunodeficiency virus positivity, HCV positivity, decompensated cirrhosis, and serological evidence of HBV exposure or immunization (HBsAg, anti-HBs, and total HBV core antibody (anti-HBc)).
All patients provided signed informed consent under an approved ethics protocol (Conjoint Health Research Ethics Board REB16-0274) according to the principles of Good Clinical Practice and the Declaration of Helsinki. The study was registered on ClinicalTrials.gov (NCT02985450). The in-clinic review included identifying risk factors for NAFLD (obesity/hyperlipidemia/metabolic syndrome/type 2 diabetes) in the absence of significant alcohol consumption, and/or an ultrasound showing confirmed steatosis. The diagnosis of NASH vs. NAFLD is based on histopathological criteria (i.e., steatosis, lobular inflammation, and hepatocellular ballooning) 8 , which requires a liver biopsy 42,43 . However, few study patients underwent liver biopsy, due to its invasive nature, and biopsy was often not clinically indicated if patients had normal transient elastography. All adults received the standard three-dose regimen (0-6 months) of Engerix-B® (20 µg of HBsAg and 500 µg aluminum hydroxide; GlaxoSmithKline, GSK) or a combined HBV/HAV vaccine (Twinrix®, GSK) as per Canadian immunization guidelines 44 . The vaccines were purchased by patients or health practitioners and administered as an intramuscular injection into the deltoid muscle of the nondominant arm by a clinic nurse.
Assessment of anti-HBs antibodies after vaccination
Anti-HBs titers were assessed in patients 1-3 months after complete vaccination (i.e., three doses of Engerix-B® or Twinrix®) by a chemiluminescent microparticle immunoassay (detection range 0-1000 IU/L; Abbott Architect, Mississauga, ON, Canada). HBV seroprotection was defined as anti-HBs ≥ 10 IU/L after complete vaccination 45 .
[Fig. 6 caption: B NAFLD mice showed significantly lower anti-HBs levels than controls (p < 0.05, ANOVA); C in vitro proliferation of CD3+ CD4+ splenocytes at ~day 5 in response to 5 µg HBsAg; N→V mice showed significantly lower proliferation than controls (p < 0.05, ANOVA), with the same trend in the V→N group; data are represented as mean ± standard deviation.]
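The response categories used throughout the paper follow fixed titer cut-offs (seroprotection at ≥10 IU/L; non/low vs. normal-high responders split at 100 IU/L). A small illustrative helper in Python follows; it is a sketch of the stated thresholds, not the authors' code:

```python
# Illustrative mapping of anti-HBs titers (IU/L) to the categories defined
# in the text: nonresponder < 10, low responder 10-100, normal-high > 100.
def classify_anti_hbs(titer_iu_per_l: float) -> str:
    if titer_iu_per_l < 10:
        return "nonresponder"            # below the seroprotection threshold
    if titer_iu_per_l <= 100:
        return "low responder"           # seroprotected, but low titer
    return "normal-high responder"

for titer in (8, 50, 640):               # hypothetical example titers
    print(titer, "->", classify_anti_hbs(titer))
```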
Isolation of peripheral blood mononuclear cells (PBMC)
PBMC were separated by density gradient centrifugation from ~20 mL of heparinized blood, and ~10⁷ cells/vial were cryopreserved. PBMC were isolated only at the coordinating center, at baseline (i.e., before vaccination) and post vaccination, for assessment of cellular responses.
Assessment of circulating memory B and T cells
The proportion of memory B and T cells in PBMC at baseline and post vaccination was assessed using flow cytometry. Approximately 5 × 10⁵ PBMC were stained for memory B-cell markers using anti-CD19, anti-CD45, and anti-CD27. Memory T cells were identified using anti-CD3, anti-CD4, anti-CD8, anti-CD56, anti-CD45RA, and anti-CCR7 by multi-parameter flow cytometry (BD FACSCanto II, Toronto, ON, Canada). Catalog and lot numbers of the antibodies used can be found in Supplementary Table 2. Prior to staining, cells were stained with the fixable viability dye FVS 510. Antibodies were used at the dilutions recommended by the vendor. Appropriate isotype and compensation controls were used, and a minimum of 100,000 events was acquired. Data were analyzed using FACSDiva and FlowJo v11 (Treestar Inc., San Carlos, CA).
HBsAg-specific proliferation assays
Fresh (or, in 4 patients, cryopreserved) PBMC (~10⁶) were labeled with 1 µM carboxyfluorescein diacetate succinimidyl ester (CFSE; BD Horizon, San Diego, CA) according to the vendor's protocol. Labeled PBMC were stimulated with 5 µg HBsAg (adw) (ARP, Waltham, MA) in RPMI 1640 with 10% FBS and 2 mmol/L glutamine. Anti-CD3 (1 µg/mL) and anti-CD28 (5 µg/mL) (BD Biosciences, San Jose, CA) stimulated cells served as a positive control. Unstimulated DMSO-treated cells were used as negative controls. Cells were cultured in triplicate and plates were incubated at 37 °C with 5% CO₂ for ~8 days. Cell proliferation was assessed on day 8. The stimulation index (SI) was calculated as the percentage of CFSE-low cells among stimulated cells divided by the percentage of CFSE-low cells in the unstimulated control 46 . SI > 3 was considered positive for HBsAg-specific proliferation. Cells were stained using the memory T-cell panel and analyzed by flow cytometry.
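The SI definition above is a simple ratio; a minimal sketch with hypothetical flow-cytometry percentages (not study data):

```python
# Stimulation index per the definition above:
# SI = %CFSE-low among stimulated cells / %CFSE-low in the unstimulated control.
def stimulation_index(pct_cfse_low_stim: float, pct_cfse_low_unstim: float) -> float:
    return pct_cfse_low_stim / pct_cfse_low_unstim

si = stimulation_index(12.4, 2.9)  # hypothetical readouts
print(f"SI = {si:.1f}; HBsAg-specific proliferation positive: {si > 3}")
```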
HBV vaccination in a mouse model of NAFLD
Mice were purchased from The Jackson Laboratory (Bar Harbor, ME) and housed at the University of Calgary in a pathogen-free facility. All experiments were approved by the University of Calgary animal care committee in accordance with the Canadian Council on Animal Care (AC16-0040). C57BL/6 male mice (6-8 weeks old, N = 40; 10/group) were fed either normal chow or a HFD (40% fat, 40% sucrose; Dyets Inc., Bethlehem, PA) 47 and administered 2 doses of Engerix-B® (0.05 µg/g body weight) 2 weeks apart, intramuscularly, either after the NAFLD phenotype was induced (N→V) or at baseline prior to HFD (V→N). The induction of the NAFLD phenotype in mice was confirmed by liver histology. Briefly, liver lobes were harvested from NAFLD and control mice, fixed in formalin, and embedded in paraffin. Sections were cut at 4 µm thickness and stained with hematoxylin and eosin. Slides were blinded and evaluated by a clinical pathologist to assess liver steatosis, inflammation, and hepatocyte ballooning.
Assessment of anti-HBs and T-cell responses in a mouse model of NAFLD
Blood was collected before vaccination and every 2 weeks post booster via tail vein bleeding, and by cardiac puncture at the final time point. Whole blood was collected in heparinized tubes and spun at 1000 × g for 10 min to obtain plasma. Anti-HBs levels in control and HFD mice were determined using a qualitative ELISA (Aviva Systems Biology, San Diego, CA). Spleens were harvested at the final time point for lymphocyte proliferation assessment. A single-cell suspension of splenocytes was prepared and CFSE-labeled as described above, and ~10⁶ cells were stimulated with 5 µg of HBsAg. Concanavalin A (Con A, 4 µg/mL)-stimulated and DMSO-treated cells served as positive and negative controls, respectively. On day 4-5, splenocytes were assessed for proliferation of CD3+ CD4+ T cells using flow cytometry (Supplementary Table 2).
Statistical analysis
Data were analyzed using GraphPad Prism 7 (La Jolla, CA). Demographic, clinical, and laboratory parameters were compared using measures of central tendency; the Mann-Whitney U test or ANOVA was used to compare anti-HBs titers; the Mann-Whitney U test and the Kruskal-Wallis test with post hoc Dunn's test for multiple comparisons were used to compare proliferation and immunophenotyping data. The nonparametric Spearman's rank correlation test was used for correlation analysis; p values < 0.05 were considered statistically significant.

Reporting summary
Further information on research design is available in the Nature Research Reporting Summary linked to this article.
Sustainable commercial aviation: What determines air travellers’ willingness to pay more for sustainable aviation fuel?
While low carbon jet fuels (LCJF) offer a viable alternative to conventional jet fuels in terms of reducing aviation emissions, the higher fuel costs may be passed on to customers in the form of increased ticket prices. However, there has been little research into the public's willingness to pay (WTP) for LCJF use. Our study addresses this gap by exploring citizens' perceptions of, attitudes toward, and willingness to pay a premium ticket price for LCJF. We conducted an online survey among UK citizens (N=1008) who flew at least once a year. We used ordered logistic regression to identify the factors that influence WTP for LCJF. The findings confirmed the existence of three factors that explain air travellers' WTP: social trust, perceived risks, and attitude. Although the overall perception of the benefits of LCJF outweighs the associated risks, the level of awareness of LCJF use is rather low. Despite a favourable attitude toward LCJF use, the majority of respondents were unwilling to pay more for carbon-neutral air travel. Our research contributes to and expands the literature on the current debates about acceptance of and WTP for LCJF and energy transitions. Additionally, the findings of our study encourage public and corporate managers to leverage the identified key factors to inform and structure campaigns to increase the acceptability of LCJF use.
Introduction
The aviation industry plays a critical role in bringing global connectivity and social and economic prosperity (IATA, 2019). More specifically, the total economic impact of the global aviation industry reached USD 2.7 trillion in 2016, corresponding to about 3.6% of the world's gross domestic product (GDP) (ATAG, 2018). Airlines carried around 4.3 billion commercial passengers and 58 million tonnes of freight around the world in 2018 (IATA, 2019). Despite the COVID-19 pandemic having reduced air travel by about 60% in 2020 compared to 2019, the industry is on a recovery path (e.g., an overall reduction of 15% in seats offered by airlines in 2022) (ICAO, 2022). Besides economic and social benefits, the aviation sector also faces significant challenges, the foremost of which is carbon dioxide (CO2) emissions from fossil jet fuel. Though commercial aviation accounts for only 2-3% of emissions (ICAO, 2016), given the industry's growth outlook, its carbon emissions are estimated to reach as high as 22-30% by mid-century (Staples et al., 2018). This situation warrants a combined effort from all stakeholders to reduce the carbon intensity of the aviation sector (DfT, 2022; Singh et al., 2019).
To urgently tackle the aviation industry's GHG emissions, we need to simultaneously explore a portfolio of options such as operations management, demand reductions, technological improvements, and alternative low-carbon fuels (Singh et al., 2019).
Operationally, the European Union's programme 'Single European Sky ATM Research' (SESAR) 2 and the Next Generation Air Transportation System (NextGen) 3 in the US aim to modernise air traffic management. The success of these efforts would ensure that airlines travel the shortest possible routes, resulting in more efficient use of the skies and emissions savings.
Likewise, reducing air travel demand (Kroesen, 2013) and carbon taxation (Ryley et al., 2010) are also among the alternatives considered. Despite electrification being deemed a feasible substitute, there seems to be agreement among stakeholders that electric aircraft may not be commercially available until well beyond 2050. Hence, for the near to mid term, liquid fuels will continue to play a key role in aviation. In this context, switching from traditional carbon-intensive fossil jet fuel (i.e., Jet A or Jet A-1) to a less carbon-intensive jet fuel seems to be a viable choice for mitigating emissions in the aviation sector (Singh et al., 2019). LCJF can be used without changes to the current aircraft fleets, saving airlines the significant financial cost of fleet replacement. Furthermore, LCJF do not cause any disruptions to global aviation operations, are safe, and are environmentally benign. 4 In addition, commercial airlines have the added advantage of using LCJF as a valuable promotional tool for attracting customers (Daggett et al., 2008).
While technological advancements, improved operational efficiency, and the introduction of policy frameworks are important in achieving aviation net-zero carbon targets, the role that the public plays cannot be overlooked. In particular, LCJF can be considerably more expensive than fossil jet fuels; hence, it becomes sensible and inevitable for airlines to pass at least part of the costs on to customers in the form of higher ticket prices. However, public familiarity with and willingness to pay (WTP) for the use of LCJF in aviation and its environmental benefits remain unexplored. Our paper contributes to the general social acceptance body of knowledge by investigating the public's WTP for LCJF use in aviation based on the Theory of Planned Behaviour (TPB) framework. In our work, we leverage both the psychometric constructs of public perception, social trust, and attitude, and demographic characteristics.
It is crucial to understand public awareness of new technologies, as this is linked to the wider social acceptance of innovations (Chin et al., 2014). Failing to increase general awareness can lead to unwillingness to pay; this, in turn, can result in the market failure of the proposed innovation (Wegener and Kelly, 2008). Note that LCJF offer a unique case for exploring public response, as the possible benefits (e.g., emission savings, fossil jet-fuel independence) and concerns (e.g., competition with the food supply, health risks, ecosystem impacts) are directly related to society. The possible benefits and concerns have been widely explored for other forms of transportation, mainly the road transport sector (van de Velde et al., 2011). Although public knowledge, perceptions, and responses do not guarantee adoption, their absence is likely to result in innovation and technology failure. In some cases, this may even include resistance: environmental, such as contamination risk to underground water resources; and/or social, in the form of 'not in my backyard' thinking. Such opposition has been observed for technologies such as hydraulic fracturing (Tan et al., 2020), wind-based electricity generation (Enevoldsen and Sovacool, 2016), nuclear power (NEA, 2002), hydrogen as fuel (Eames et al., 2006), and carbon capture and storage (Kraeusel and Möst, 2012), to name a few. Several authors have concluded that people are willing to pay more to adopt eco-friendly practices in the aviation industry. However, these studies focus more on airport infrastructure development than on the use of LCJF (e.g., Walters et al., 2018; Winter et al., 2021). There is a lack of studies investigating what determines individuals' WTP for an additional airfare fee for using LCJF.
The key contributions of our paper are fourfold. First, our study explores the public's perceived benefits and perceived risks of LCJF use in an industry facing significant environmental pressures. Second, we examine the issue of trust and confidence in the abilities and approach of the various institutions (e.g., industry and/or government) dealing with LCJF. Third, we investigate the public's mindset toward the use of LCJF in commercial aviation, contributing to the scientific discourse on public WTP for travelling using LCJF. Last, we determine the influential factors driving air travellers' WTP for LCJF. Although our findings are based on a large set of unique UK-based data, our results can be taken as reflective of countries with similar socio-economic characteristics.
The rest of this paper is organised as follows. Section 2 provides the background to our current work by discussing prior studies on public acceptance and WTP for sustainable liquid (bio) fuel development, in general, and LCJF in particular. Section 3 presents our research design, followed by survey findings and relevant discussions in section 4, while section 5 concludes this paper.
Background to research
It is becoming increasingly important to reduce aviation emissions to combat climate change (van Dyk and Saddler, 2021). To accelerate LCJF production and deployment, one needs to work simultaneously on removing technical, regulatory, and socio-economic barriers (Goding et al., 2018; IEA, 2021). Focusing on the downstream chain of LCJF use, the socio-economic barriers relating to the public's familiarity and willingness to pay (WTP) remain unexplored.
The application of social science methods to understand human behaviour in sustainable energy use and related climate change can be extremely useful. Wüstenhagen et al. (2007), for example, emphasised the need to develop strong public support for new energy solutions, as neglecting the social context of a sustainable energy technology can slow its wider market development and implementation (Kelly et al., 2019; Pasqualetti, 2011). Several authors suggested that focusing on a sustainable past might help (Rowe et al., 2019), and argued for focusing on intergenerational gaps to develop sustainable strategies (Essiz and Mandrik, 2022).
As compared to other sustainable energy options (e.g., wind and solar), less attention has been paid to studying the public's acceptance of low carbon fuels, both in road transportation and in the aviation sector (Chin et al., 2014; Kumar and Sinha, 2022).
The social context of low carbon transportation fuels has mainly been studied through stakeholders' knowledge, perceptions, attitude, social trust, and WTP (Mamadzhanov et al., 2019; Rice et al., 2020). These components are considered vital and pertinent in gauging the general public's acceptance or resistance (Rahman et al., 2017; Yaghoubi et al., 2019). To be more specific, knowledge is being aware of something through observation, usage, or education (Halder et al., 2010). Perceptions, on the other hand, relate to extracting information from one's experience 5 . Perceptions are crucial for understanding the resultant behaviour, while attitude is the evaluation of that behaviour (Radics et al., 2015; Van Dael et al., 2017).
5 For details, see 'Principles of perceptual learning and development' (Gibson, 1969).
Likewise, social trust is the public's confidence in the various key players related to a technology or innovation (Amin et al., 2017). These key players could be experts or institutions that the public relies on for information or technology, and may include the government as regulator, policymakers, industry, and scientists. Social trust is recognised as playing a critical role in shaping people's attitude toward technology acceptance (Adnan et al., 2018). Finally, consumers' WTP is their inclination to buy sustainable fuel when it is available (Sivashankar et al., 2016), and reflects consumers' potential interest in bearing the cost burden of low carbon fuel (Mamadzhanov et al., 2019).
Several authors have focused on studying public knowledge and perception of, and attitude toward, sustainable low-carbon fuels in the road transportation sector (e.g., Sivashankar et al., 2018). The general findings were that there is a lack of awareness of low-carbon fuels; the key concerns include the alternative fuels' performance, availability, and higher cost compared to conventional fossil fuels, and the threat to food security posed by the increased use of alternative fuels. Another stream of literature has examined feedstock sustainability for fuel production by exploring the public's attitudes toward the environment, input energy use, and the role of government and policymakers in ensuring the enactment of laws and regulations (Smith et al., 2018; Yaghoubi et al., 2019). However, these studies have revealed a mix of positive and negative attitudes depending on local conditions. Hence, the findings cannot be appropriately generalised to other regions because of different socio-economic and geo-political conditions. The aviation sector is considered one of the most difficult sectors to decarbonise. Technical solutions may take longer than anticipated to mature while, on the other hand, the attitude-behaviour conflict (Filimonau et al., 2018) prevents a voluntary reduction in air-travel demand. The 'attitude-behaviour' conflict implies that while the public recognises that modifying their air travel behaviour may significantly improve GHG emission mitigation efforts, in reality they rarely do so. Therefore, switching to less carbon-intensive fuel has become a critical avenue to explore.
Interestingly, only a few studies make LCJF their focal point of inquiry. For example, the studies by Filimonau and Högström (2017) and Filimonau et al. (2018) are two pioneering studies that delved into the social acceptance of LCJF use. Both studies used the constructs of knowledge, perceptions, and attitude toward sustainable liquid (bio)fuels in aviation. Filimonau and Högström (2017) found an imperfect public understanding of sustainable liquid (bio)fuels, but that LCJF are considered a safe alternative for aviation. Likewise, Filimonau et al. (2018) echoed that there is limited understanding of sustainable aviation fuel technology, alongside the public's safety concerns about its use. They also found distrust among the public in national-level institutions and, as a result, called for awareness campaigns to address this issue. However, these two studies are limited in terms of the research tools used (i.e., descriptive statistics and correlation analysis) and the relatively small socio-demographic profiles of the surveyed populations (the former was based on 132 UK respondents, while the latter focused on 326 Polish participants). Further, these two studies did not examine customers' WTP for LCJF use in commercial aviation, a critical construct for aviation decarbonisation.
Along with the acceptance of LCJF, efforts have been made to investigate the level of support for various policy options for curbing aviation carbon emissions. In this context, Kantenbacher et al. (2018) surveyed the British population on their opinions of 14 different policy levers. The study found that the public supports policies that do not put any financial burden on the individual; rather, the aviation industry should bear the burden of any carbon mitigation developments. They also found that efforts should be made to develop alternatives to air travel. However, public opinion of LCJF was not part of the enquiry.
A survey of air passengers conducted by Hooper et al. (2008) found that passengers do not see themselves as responsible for the climate impacts of their flights. Instead, their belief was that environmental impact should be dealt with by the respective governments or the airlines themselves. It can be inferred from their study that the general public needs to be made more aware of the environmental impact of aviation in order to reduce the awareness-attitude gap.
To the best of the authors' knowledge, only two studies, namely Rains et al. (2017) and Rice et al. (2020), explicitly assessed customers' WTP for LCJF use, while a third study by Goding et al. (2018) focused narrowly on the WTP of business travellers. More specifically, Rains et al. (2017) found that customers are willing to pay a 13% price premium, while Rice et al.'s (2020) findings indicate that the public is ready to pay an additional ticket fee (under 15% of the price) in proportion to the level of emission reduction. Likewise, Goding et al. (2018) estimated a price premium of 11.9% on the base ticket price. Though the valuation of WTP is useful from the airlines' perspective, these studies have omitted the wider societal psychology surrounding WTP. Nevertheless, these studies form the theoretical basis of our study. Hence, the present study focuses on determining the general public's perception of the benefits and risks associated with LCJF use, their social trust in institutions dealing with LCJF, and their attitude as manifested in their willingness to pay more for eco-friendly LCJF.
In summary, we infer from the literature review that there is a need to further explore the public's perception of and attitude toward LCJF use in aviation. This is because a lack of public knowledge of aviation-related emissions can hamper any efforts aimed at mitigating them and decarbonising aviation. Moreover, social trust in key players has not been adequately explored. Likewise, it is vital to explore how the public perceives the benefits and risks of LCJF use, especially when these are indirect and related to society rather than direct and related to the individual (van de Velde et al., 2011). LCJF will remain expensive for the foreseeable future; hence, it is inevitable that airline companies will raise their ticket prices. It is important, therefore, to investigate WTP premium prices for greener flying while, more importantly, trying to understand and establish what influences WTP.
Research design
The primary data for our study were collected via a quantitative survey to ensure a good representation of public views on WTP for LCJF from the larger population. Figure 1 summarises the research design, while a detailed description is provided below.
Survey instrument
Several frameworks and theories, including but not limited to the technology acceptance model (TAM), the theory of interpersonal behaviour (TIB), and innovation diffusion theory (IDT) 6 , can be applied to investigate LCJF adoption and WTP. In this study, we based our approach on the theoretical underpinning of Ajzen's (1985) Theory of Planned Behaviour to seek novel insight into what supports WTP for LCJF. Recall that the TPB postulates intention as the primary construct that influences behaviour, determined by the subjective norm, perceived behavioural control, and attitude toward the behaviour (Ajzen, 1991). Our motivation for using the TPB is that it provides a broad view of social-psychological constructs for understanding citizens' behaviour in its relative context (Kollmuss and Agyeman, 2002; Mattison and Norris, 2007); in our case, WTP for LCJF. This theoretical foundation has also been used in more general explorations of bioenergy 7 in the literature, such as in the study by Halder et al. (2017).
To develop our survey instrument, we used perception as a construct to understand what the public thinks about LCJF use. As perception guides action (Radics et al., 2015), and to obtain a richer picture, we further divided perception into perceived benefits and perceived risks (Filimonau et al., 2018). Confidence in institutions (e.g., academic, government, and industry) plays a major role in shaping public attitude (Amin et al., 2017). Therefore, to reflect trust in institutions, we included the construct of social trust in our instrument. The attitude construct (Filimonau and Högström, 2017; Kantenbacher et al., 2018) aims to explore the public's views regarding the usefulness of LCJF. Lastly, the WTP construct (Rains et al., 2017) was included to determine the public's behavioural intention toward LCJF use. The constructs and the corresponding items used to measure them are presented in Table 1.
First, we divided the questions around public perceptions of LCJF into benefits and risks components. Perceived benefits are concerned with the extent to which 1) investments in LCJF will benefit both the economy and society, 2) LCJF use can greatly help in safeguarding the environment, 3) using LCJF will reduce the country's dependence on foreign oil, 4) LCJF can reduce conventional jet fuel dependence, and 5) the benefits of using LCJF exceed those of other GHG emissions reduction measures in aviation. Perceived risks, on the other hand, measure the extent to which 1) LCJF pose a safety concern 8 ; 2) higher LCJF production would lead to increased competition for agricultural land; 3) LCJF production would harm the ecosystem; 4) LCJF take more energy to make than they are worth; and 5) there is not enough LCJF to meet demand. Second, social trust in whether major stakeholders (e.g., academia, government, and business) are doing a good job of promoting LCJF was assessed by the following three items: 1) the extent to which the scientific community is doing a good job for society by developing LCJF; 2) the extent to which LCJF producers are helping society; and 3) the extent to which the government or policymakers are doing a good job in regulating LCJF.
Furthermore, attitude toward LCJF was measured by the extent to which: 1) it is considered a good idea to use LCJF for flights; 2) the idea of using LCJF for flights is liked or disliked; 3) there is a preference for flying with airlines using LCJF; 4) one would encourage others to fly on flights using LCJF; and 5) one is eager to know more about LCJF. Finally, WTP for the use of LCJF on flights was measured by the extent to which one is willing to 1) pay a higher ticket price for flights using LCJF; 2) pay a higher ticket price for flights using LCJF even if a cheaper flight using regular jet fuel is available; and 3) choose a flight that uses LCJF regardless of the flight ticket price.
Note that the above items were measured on a five-point Likert-type scale, ranging from strongly disagree to strongly agree. This scale is used and recommended in similar exploratory studies such as Filimonau et al. (2018). The survey items were derived from the literature and the authors' own experience in the field. Next, the content validity of the survey questionnaire was assessed by a panel of three experts in academia and two managers from the energy and aviation industries. In response to the panel's feedback, minor adjustments were made to the survey questions.
In addition, we included several standard demographic variables (e.g., age, education, gender) to examine the variance of attitude toward LCJF across different groups. 9 The location variable consisted of the four countries comprising the UK. Finally, respondents were asked to describe their flying behaviour and carbon off-setting mechanisms. For the former, respondents were asked to record the number of flights typically taken in a year, while the latter offered five choices to be ranked from the most to the least appropriate.
8 Safety concerns are associated with fuel ignition risk during (de)fuelling and storage, and fuel vapour inhalation by humans, among other technical concerns relating to fuel characteristics (ICAO, 2017; van Dyk and Saddler, 2021).
9 To be more specific, we divided educational achievement into primary education or less, secondary education, post-secondary education below degree level, bachelor's or equivalent, and master's/higher degree or equivalent. Employment status comprised five general categories: student, full-time employed, part-time employed, unemployed, and retired.
Survey distribution
To obtain data for the survey, a well-established third-party market research organisation, Qualtrics, was commissioned. The panel survey was administered online using the Qualtrics® platform during a three-week period in January 2020. The third-party organisation ensured a sample size representative of the UK population aged 16 and over, satisfying a minimum 95% confidence level. Following Bushman et al. (2012), Len-Ríos et al. (2016), Moreno et al. (2021), and, more recently, Sewell et al. (2022), a nonprobability sampling approach was adopted. Further, Heen et al. (2014) showed that online recruitment approaches using tools such as Qualtrics can achieve demographic attributes that are typically within a 10% range of their corresponding population values.
Data analysis and reliability
Where appropriate, items designed to measure perceived benefits, perceived risks, social trust, attitude, and WTP were reverse-coded so that a response value of 1 (strongly disagree) now represents a response of 5 (strongly agree). Only one item, from the attitude construct, was reverse-coded, to reflect an affirmative attitude toward LCJF use. Note that single-respondent surveys, like ours, may face common method bias, which can adversely affect the reliability and validity of measures and parameter estimation (Jordan and Troth, 2020). To address this concern, we applied Harman's single-factor test to determine whether most of the variance can be explained by a single factor. Following Podsakoff et al. (2003), we performed exploratory factor analysis and found that no single factor explains a substantial portion of the total variance among measures. The total variance extracted by a single factor was 28.31%. 10 Therefore, we argue that common method bias is not present in our study.
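A minimal sketch of the reverse-coding step and Harman's single-factor check, assuming Python with pandas and the factor_analyzer package (the authors' actual software for this step is not stated); the data below are simulated:

```python
# Reverse-code a 5-point Likert item, then run Harman's single-factor test:
# fit a one-factor EFA and inspect the variance that single factor explains.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer  # pip install factor_analyzer

rng = np.random.default_rng(42)
items = pd.DataFrame(rng.integers(1, 6, size=(200, 8)),
                     columns=[f"item{i}" for i in range(1, 9)])  # simulated responses
items["item1"] = 6 - items["item1"]  # reverse-code: 1<->5, 2<->4, 3 unchanged

fa = FactorAnalyzer(n_factors=1, rotation=None)
fa.fit(items)
_, proportional_var, _ = fa.get_factor_variance()
print(f"Variance explained by a single factor: {proportional_var[0]:.1%}")
# A value well below 50% argues against pervasive common method bias.
```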
The internal consistency of all five constructs was measured by Cronbach's alpha. Note that a Cronbach's alpha of '0' denotes no internal reliability, while a value of '1' denotes perfect internal reliability. A threshold of 0.7 was used as the benchmark for construct reliability (Emma et al., 2019). Table 1 shows that all five constructs in this study had strong internal consistency, with alpha values above the mentioned threshold. In addition to construct reliability, we also investigated how each item factored into a construct using principal component analysis. In our study, the perceived benefits, perceived risks, and attitude constructs comprised five items each, while social trust and WTP had three items each. To confirm the factor analysis, we used the Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy and Bartlett's test of sphericity. In keeping with the aim of our work, we used ordered logistic regression to predict the factors that influence WTP for LCJF.
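The reliability and adequacy checks named above can be illustrated compactly. A hedged sketch, again assuming Python with the factor_analyzer package and simulated five-item construct data:

```python
# Cronbach's alpha plus KMO and Bartlett's sphericity checks for one construct.
import numpy as np
import pandas as pd
from factor_analyzer.factor_analyzer import (
    calculate_bartlett_sphericity, calculate_kmo)

def cronbach_alpha(items: pd.DataFrame) -> float:
    # alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

rng = np.random.default_rng(1)
latent = rng.normal(size=(300, 1))            # shared latent score per respondent
construct = pd.DataFrame(
    np.clip(np.rint(3 + latent + rng.normal(0, 0.7, (300, 5))), 1, 5),
    columns=[f"q{i}" for i in range(1, 6)])   # five correlated Likert items

print(f"alpha = {cronbach_alpha(construct):.2f} (benchmark >= 0.7)")
chi2, p = calculate_bartlett_sphericity(construct)
_, kmo_total = calculate_kmo(construct)
print(f"Bartlett chi2 = {chi2:.1f}, p = {p:.3g}; KMO = {kmo_total:.2f}")
```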
4.1. Respondent characteristics
Our survey invitation was extended to 2,000 respondents. We received 1,081 responses, of which 73 were dropped due to incomplete or missing data. This left us with a final sample of 1,008 responses and a final response rate of ~50%. Similar studies had response rates ranging from ~9% (Chavez et al., 2021) to ~40% (Smith et al., 2018).
This final sample comprised 54% female and 46% male respondents (Table 2). Participants' ages varied from 18 to 92, with an average age of 46 years. Most of the respondents were from England (87%), followed by Scotland (7.5%), Wales (4.2%), and Northern Ireland (1.3%). This distribution is consistent with the relative populations of England, Scotland, Wales, and Northern Ireland. Only ~17% of participants were classified as 'frequent flyers', having flown more than three times over the previous year. Overall, the proportion of participants who did not fly over the past year was large at 37%, while the percentage of those who flew once or twice added up to ~63%. Hence, given the high proportion of non-frequent flyers, and the fact that perception of and attitude toward LCJF can be expressed even if a respondent does not typically fly, a decision was made to include them in the analysis. Finally, a substantial proportion of participants (~40%) were educated to university level.
Perceived benefits
In terms of perceived benefits, most of the participants perceived LCJF as environmentally, economically, and socially beneficial (Table 3). These perceived benefits of producing and using low carbon fuels conform to those reported in other studies, particularly in aviation (Gegg et al., 2014; Smith et al., 2017) and in the road transport sector in general (Longstaff et al., 2015). These benefits include emissions reduction potential, fuel diversification, and supply security, in addition to enhanced regional/rural development.
Our survey showed that around 58% (or 3 in every 5) of the participants agreed that investments in LCJF production facilities would bring prosperity to the local economy and to society in general. A slightly higher proportion of participants (~59%) expressed that using LCJF could be instrumental in protecting the environment. A nationalistic pattern was observed, with a high proportion of the participants (56%) viewing LCJF as a means to reduce foreign oil dependence, compared to only 1.2% who expressed the opposite. Likewise, participants showed confidence in LCJF as a means to reduce conventional jet fuel dependence, with 57.8% of the participants being optimistic about the notion, while 5.2% were sceptical. An interesting behaviour was observed in that more than half of the participants recorded 'neither agree nor disagree' for the statement that the benefits of using LCJF exceed those of other GHG emissions reduction measures in aviation. Though the participants did recognise the LCJF contribution to emission mitigation efforts, there seemed to be a significant gap in their understanding and awareness of the LCJF characteristics that would achieve emissions reduction. This further strengthens the notion that LCJF use in aviation needs promotion.
Perceived risks
Several risks relating to the availability, production, and safety of LCJF were investigated in this study. While the participants understood the benefits of LCJF use (section 4.2), they were at the same time apprehensive about the associated risks (Table 4). The biggest of such concerns was the lack of LCJF availability, with ~55% of participants believing that the supply of LCJF is insufficient to meet airlines' demands. The availability of LCJF has previously been highlighted as a major risk to LCJF's ability to replace conventional fossil jet fuel, in terms of the amount of fuel available (Gegg and Wells, 2017; Reed, 2016) and refinery and transportation infrastructure (Smith et al., 2017). The second biggest perceived risk relates to the food versus fuel dilemma, with 47% of the participants believing that LCJF production would compete for cropland. However, it is to be noted that an almost similar proportion of respondents (41.8%) were 'neutral' on the subject. This result indicated that the food versus fuel controversy was not as fierce as previously reported by Herrmann et al. (2018), Montefrio and Sonnenfeld (2011), and Tenenbaum (2008). Within the sample, 54.1% and 43.7% were undecided on whether LCJF take more energy to make than they are worth and on whether LCJF pose a safety concern, respectively. This suggested the participants' lack of information on the LCJF production process (i.e., how non-fossil feedstock is converted into jet fuel) and technical characteristics. Therefore, it points to the need to disseminate LCJF-related information more widely to the public.
Contrary to our study, Filimonau et al. (2018) investigated the LCJF safety issue from a non-technical perspective, which seems unnecessary as all LCJF must be internationally certified before they can be used. Lastly, a large proportion of the participants (46.4%) declared not being sure whether LCJF production would harm the ecosystem. This finding contrasts with assessments that LCJF production is environmentally benign (ATAG, 2011). Overall, the risk perception analysis revealed an interesting conundrum in this survey: though the use of LCJF is believed to protect the environment (by limiting carbon emissions), its production is considered to harm the environment (in terms of soil and water pollution).
However, this complex situation can be dealt with by providing comprehensive information on LCJF such that the pros and cons of this innovation are adequately understood by the public.
Social trust
The social trust construct investigated confidence in relevant institutions (i.e., policymakers, the scientific community, and LCJF producers). These key players have the legitimacy and power to shape how the public views LCJF. Our survey showed varying levels of trust across the key players. The participants demonstrated the highest level of trust in LCJF producers, followed by the scientific community. The lowest level of trust was shown in policymakers' efforts related to LCJF development: only around 20% of participants recognised policymakers' efforts. Table 5 summarises our findings. Specific to aviation, Filimonau et al. (2018) observed the same low level of trust in government and its affiliates dealing with LCJF as in our study. A similar pattern of trust is presented in a study by Longstaff et al. (2015), where the majority of the respondents felt left out of government deliberation on low carbon fuels. Furthermore, our survey results revealed an alarming situation: for all three key players in LCJF, nearly half of the participants recorded 'neither agree nor disagree' regarding the trust level. This finding points to the importance of public engagement by these key players in policymaking, production, and the technological development process in a wider LCJF acceptability context.
Attitude
To gauge participants' attitude toward LCJF, five items were used in our survey, the results of which are presented in Table 6. Our survey results showed an overall positive attitude toward LCJF use. For all the items, approximately one third, and in some cases more than half, of the participants fell in the 'agree' and 'strongly agree' categories. A significant proportion of participants not only showed a keen interest in learning more about LCJF (64.4%), but also believed it to be a good idea to use it for flights (~53%). When considering taking flights using LCJF, ambivalence was noted in participants' responses: the difference between the undecided (45.9%) and the combined 'agree' categories (47%) seemed trivial. However, slightly higher agreement (46.3%) over being undecided (43.3%) was recorded for encouraging others to take flights using LCJF. As before, this behaviour can be attributed to a lack of knowledge about the use of LCJF.
Willingness to pay for LCJF use
In our study, we defined a scale for WTP (or for paying more) based on three items. The first two items were designed to check involuntary and voluntary WTP for LCJF, while the third item disregarded the ticket price (see Table 7). For involuntary WTP of a higher ticket price for flights using LCJF, we found 28.4% of the participants in agreement with the statement, while the largest group (37.7%) categorically rejected the notion. For voluntary WTP of a higher ticket price despite the availability of cheaper flights, we found that the participants were not willing to pay a higher price: only 25.6% agreed, while the largest group of respondents (38.9%) expressed an interest in cheaper flights using regular jet fuel. Similarly, a low percentage of participants agreed to take flights with LCJF regardless of the flight ticket price (23.5%); by contrast, 34.6% disagreed. We can conclude, therefore, that the flight ticket price plays a major role. On the WTP scale, we found an average of 37% of the participants were undecided across all three items.
Our findings contradict previous research on air travellers' WTP for carbon-neutral aviation, such as the studies by Choi et al. (2018) and Seetaram et al. (2018). However, the main difference in our case is that this literature does not specifically include LCJF as an option for carbon neutrality, instead presenting carbon taxes or air passenger duty, renewable energy projects, afforestation projects, and environmental education as options (Bösehans et al., 2020).
Predictors of willingness to pay for LCJF
Along with the exploratory endeavour, one of the aims of this study was to determine which factors drive WTP for LCJF. First, we examined the zero-order correlations among the primary measures of social trust, perceived benefits, perceived risks, attitude, and WTP. As presented in Table 8, we found that WTP was positively and significantly correlated with social trust (0.370, p < 0.01), perceived benefits (0.327, p < 0.01), perceived risks (0.173, p < 0.01), and attitude (0.365, p < 0.01). However, the correlation between WTP and perceived risks was small, albeit significant. Attitude was positively and more substantially correlated with perceived benefits (0.356, p < 0.01). As one would expect, attitude and perceived risks presented a small but significant negative correlation (-0.176, p < 0.01). The inverse relation implied that participants with reservations toward LCJF were less inclined toward the idea of LCJF use.
Furthermore, we included the measures of perceived benefits, perceived risks, social trust, and attitude, as well as demographics, to predict WTP for LCJF. We carried out the prediction analysis using ordered logistic regression, with three dependent variables, namely WTP1, WTP2, and WTP3, corresponding to the items composing WTP in Table 8. Note that the independent variables were regressed against item-level WTP for better explainability. To explore the demographic effect on WTP, the variables of age, gender, education, location, occupation, donation, and flying frequency were added. This setting made it practical to analyse the three WTP variables in a single hierarchical model rather than testing them separately. In addition, we performed various tests to ensure our data satisfy the underlying assumptions for ordinal logistic regression (e.g., linearity, no outliers, independence, no multicollinearity).
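To make the modelling step concrete, the sketch below shows how a proportional-odds (ordered logit) model of this kind can be fitted in Python with statsmodels; the file name, column names, and Likert coding are illustrative assumptions, not the authors' actual setup or software.

```python
# Minimal proportional-odds (ordered logit) sketch; all names are hypothetical.
import numpy as np
import pandas as pd
from statsmodels.miscmodels.ordinal_model import OrderedModel

df = pd.read_csv("survey.csv")  # assumed survey data, one row per respondent

# WTP1 is a 5-point Likert item (1 = strongly disagree .. 5 = strongly agree).
df["wtp1"] = pd.Categorical(df["wtp1"], categories=[1, 2, 3, 4, 5], ordered=True)

# Model 1: psychometric constructs only.
x1 = df[["social_trust", "perceived_benefits", "perceived_risks", "attitude"]]
m1 = OrderedModel(df["wtp1"], x1, distr="logit").fit(method="bfgs", disp=False)

# Model 2: add demographic controls.
x2 = df[["social_trust", "perceived_benefits", "perceived_risks", "attitude",
         "age", "education"]]
m2 = OrderedModel(df["wtp1"], x2, distr="logit").fit(method="bfgs", disp=False)
print(m2.summary())

# exp(coef) - 1 is the percentage change in the odds of being in a higher
# WTP category for a one-unit increase in the predictor (cf. Table 9).
print((np.exp(m2.params[x2.columns]) - 1) * 100)
```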
The ordered logistic regression revealed that social trust, perceived risks, and attitude were significant predictors of WTP1 ("willing to pay a higher ticket price for my flights using LCJF") in both Models 1 and 2 (see Table 9). On the other hand, perceived benefits was not a statistically significant predictor. With the addition of the demographic variables, McFadden's pseudo-R-squared increased from 0.089 (Model 1) to 0.097 (Model 2); however, only age and education level were significant demographic predictors.
Looking at the ordered log-odds coefficients (Table 9), the results indicated that increasing the social trust score by one point would increase the odds of a respondent being in a higher category of WTP1 by 24%, with the other variables in the model held constant; that is, as social trust increases, WTP1 increases. Participants who were more concerned about LCJF were 13% less likely to be in the higher 'agree' or 'strongly agree' categories of paying for LCJF. Similarly, with a unit increase in attitude and education, the odds of a respondent being in a higher category of WTP1 would increase by 21.5% and 1.1%, respectively, with other variables held constant. Finally, the ordered log-odds showed that older participants were 1.1% less likely than younger participants to be in a higher category of WTP1, suggesting that older air travellers were less likely to pay a higher flight ticket price for LCJF use.
The second dependent variable in our study, WTP2, explored WTP for a higher ticket price using LCJF even if a cheaper flight using regular jet fuel was available. The ordered logistic regression revealed that social trust, perceived risks, and attitude were significant predictors of WTP2 (see columns 4 and 5 in Table 9). The third and final dependent variable evaluated in our study, WTP3, investigated the willingness to choose a flight that uses LCJF regardless of the flight ticket price (see columns 6 and 7 in Table 9). The ordered logistic regression found that social trust, perceived risks, attitude, and age played an important role in explaining WTP3.
In general, the results suggested that participants with a positive attitude toward LCJF, trust in the efforts of key players, and greater perceived benefits agreed to pay more for LCJF, whereas those with concerns about LCJF use did not. To further ensure the robustness of our findings, we examined whether excluding those aged over 79 would make a difference; the results from this exercise did not qualitatively change our main findings.
Conclusions
Despite the potential for LCJF to decarbonise the aviation industry, its uptake remains in its infancy. There is a lack of in-depth investigation of public opinion on the use of LCJF in aviation, particularly WTP for higher ticket prices. To fill these gaps, we constructed a comprehensive survey and gathered a large sample of UK public views on LCJF.
We achieved the primary objective of our work by determining the significant factors that would predict WTP for LCJF. Findings from this study pointed out that WTP can be explained by five key factors.
Three of these five factors were social trust, attitude, and perceived risks. The predictor of social trust is particularly interesting because citizens currently seem to be aware of the efforts being made by the government, industry, and research institutions; plausibly, the more these institutions promote and endorse their commitment to LCJF, the greater the public's trust and willingness to pay. As with social trust, a higher attitude rating is associated with higher WTP. Our findings are in line with prior research establishing that a positive attitude manifests itself in intended behaviour (Van Dael et al., 2017). Unlike social trust and attitude, as participants' rating of perceived risks increases, their WTP decreases. This negative relationship makes sense, as individuals with apprehensions about LCJF would rate the risk high and be less willing to pay for tickets on flights using LCJF. Finally, education level and age were significantly related to WTP. With a higher level of education, it is possible that either social trust increases or perceived risks decrease; in either case, higher education level leads to increased WTP. While age was also a significant predictor, its contribution to WTP is small; the negative relationship suggests that younger people may be more willing to pay for LCJF. It is worth highlighting that, unlike in the study by Rice et al. (2020), gender was not a significant predictor of WTP for LCJF in our research. This is an area that warrants further investigation.
Our study provides valuable insights to decision-makers, in both government and industry, for framing strategies in the short run and policies in the long run that would strengthen the public's perception and attitude, in turn increasing WTP for LCJF use in aviation. For LCJF to produce the benefits that policymakers and airlines seek, the public must actively engage with LCJF production technology. Our research shows that one way to achieve this is to minimise risk perception, which results in a more positive attitude toward LCJF use and higher WTP. This can be achieved by promoting LCJF, its production technology, and its environmental impact. While doing so, emphasis should be placed on the emissions-reduction potential offered by LCJF and its implications for fuel sustainability.
However, we also recognise that any attempt to promote WTP for LCJF will inevitably face structural barriers unlikely to be resolved by outreach and marketing campaigns. For instance, as our findings indicate, older respondents are not willing to pay a premium ticket price. To develop trust, key players must devise mechanisms to promote their domain's contributions among the public. Likewise, our study found that the public does perceive benefits of LCJF within the environmental, social, and economic domains, but also associates risk with LCJF availability and the environmental impact of its production. This highlights a potential need for airlines to devise future interventions in the form of relevant marketing campaigns, particularly to gain support from the younger population. Likewise, to ensure the availability of LCJF, policymakers can investigate various mechanisms that would increase LCJF production, ensuring that a sufficient volume of fuel is available, for example, by incentivising production or setting up mandates.
In terms of future studies, we aim to assess whether a neutral disposition toward the risk-perception and social-trust constructs is due to a lack of awareness leading to indecisiveness or to the ubiquitous attitudes of participants. Second, we plan to evaluate the links between the psychometric constructs considered in this study in order to draw correlational inferences between them. Furthermore, recognising the influence of the media in framing public attitudes (Delshad and Raymond, 2013), our future research will look into how the media portray LCJF use in aviation. Another line of inquiry is to widen the scope of key players in our study and assess the general public's opinion about the role played by international consortiums, such as the Sustainable Aviation Alternative Fuels Users Group and ICAO, in LCJF development (Gegg et al., 2014).
Declaration of interests
☒ The authors declare that they have no known competing financial interests or personal relationships that could have appeared to influence the work reported in this paper.
Safety of Continuing Trastuzumab for Mild Cardiotoxicity: A Cardiovascular Magnetic Resonance Imaging Study
The safety of continuing human epidermal growth factor receptor 2 (HER2)–targeted therapy in women with mild cardiotoxicity remains unclear. We performed a retrospective matched cohort study of 14 patients with human epidermal growth factor receptor 2–positive breast cancer receiving sequential anthracycline and trastuzumab therapy, nested within the Evaluation of Myocardial Changes During Breast Adenocarcinoma Therapy to Detect Cardiotoxicity Earlier With MRI (EMBRACE-MRI) trial. Among patients who developed cardiotoxicity and were treated with heart failure therapy, we compared those who had trastuzumab therapy interrupted to a matched cohort who continued trastuzumab therapy. By a median of 2.5 years of follow-up, no significant differences were present between the groups in the proportion with magnetic resonance imaging–measured left ventricular ejection fraction < 40%, magnetic resonance imaging–measured left ventricular volumes, left ventricular ejection fraction, edema, fibrotic markers, cardiopulmonary fitness, or quality of life.
In patients with early-stage human epidermal growth factor receptor 2-positive (HER2+) breast cancer, use of trastuzumab improves outcomes,1 but it is associated with a risk of cancer therapy-related cardiac dysfunction (CTRCD). In those receiving sequential anthracycline and trastuzumab therapy, the incidence of CTRCD has been reported to be as high as 27%,2,3 with clinical heart failure (HF) in up to 10% of patients.4 When CTRCD is identified (ie, by a fall in left ventricular ejection fraction [LVEF] of ≥10% compared to baseline, to a value below the lower limit of normal), the Canadian Trastuzumab Working Group recommends transient interruption of trastuzumab therapy.5,6 However, trastuzumab therapy interruption is associated with a higher risk of cancer recurrence and death.7,8 Therefore, interest is growing in continuing trastuzumab therapy with the concurrent initiation of HF medications, especially in patients with mild CTRCD. Both the Cardiac Safety Study in Patients With HER2+ Breast Cancer (SAFE-HEaRT; n = 30 patients) and the Safety of Continuing Chemotherapy in Overt Left Ventricular Dysfunction Using Antibodies to HER-2 (SCHOLAR) trial (n = 20 patients) demonstrated that continuing trastuzumab therapy with initiation of HF medications for mild CTRCD (ie, LVEF > 40%) may be safe, as only ~10% of patients developed further LVEF worsening or HF.9,10 Although these studies are encouraging, they both used LVEF measured by echocardiography, did not have a control group, and did not report long-term outcomes or other prognostic measures.
To address these knowledge gaps, we performed an exploratory retrospective matched cohort study nested within the completed Evaluation of Myocardial Changes During Breast Adenocarcinoma Therapy to Detect Cardiotoxicity Earlier With MRI (EMBRACE-MRI) prospective cohort study at our institution. In women with HER2+ breast cancer treated with sequential anthracyclines and trastuzumab who developed mild CTRCD and were treated with HF therapy, we sought to compare the safety of continuing vs stopping trastuzumab therapy using cardiac magnetic resonance (CMR) imaging measures of left ventricular volumes and function, myocardial edema and fibrosis, cardiopulmonary fitness, quality of life (QOL), and long-term CMR and clinical follow-up.
Methods
We performed a retrospective matched cohort study of adult women with stage I-III HER2+ breast cancer receiving sequential therapy with anthracyclines and trastuzumab (with or without radiotherapy) nested within the EMBRACE-MRI trial (NCT02306538; 136 patients). Patients were followed with CMR pre-cancer therapy (baseline), every 3 months during therapy (except at 9 months), immediately post-trastuzumab completion, and 2 years later (a total of 6 CMR studies). All patients had cardiorespiratory fitness (CRF) assessment within 6 weeks post-trastuzumab completion, using a supine cycle ergometer. QOL was assessed post-trastuzumab completion using the Minnesota Living With Heart Failure Questionnaire (MLHFQ) and EuroQol 5-Dimension 3-Level (EQ-5D-3L). Mild CTRCD was defined as a decline in LVEF of ≥10%, to a value < 55% but a nadir value > 40%, by CMR. Among patients who developed mild CTRCD and were treated with HF therapy (beta-blockers and/or angiotensin-converting enzyme [ACE] inhibitors or angiotensin receptor blockers [ARBs]), we identified those who had trastuzumab therapy interruption (interrupted group) and matched them to patients who continued trastuzumab therapy (continued group) by age (±10 years), baseline LVEF (±5%), and LVEF at CTRCD (±5%). Although the Canadian Trastuzumab Working Group recommends interruption of trastuzumab therapy when CTRCD is present, the final decision to continue or interrupt therapy was made by both the oncologist and the patient.
The primary outcome was the proportion of patients with CMR-measured LVEF < 40% post completion of trastuzumab therapy. Other outcomes included LVEF, left ventricular volumes, and CMR measures of edema (T2 maps) and fibrosis (T1 maps and/or extracellular volume fraction), post-trastuzumab therapy and at 2-year follow-up; and QOL measures and peak oxygen uptake (VO2peak, a measure of CRF) immediately post-trastuzumab completion.
Continuous data are presented as median and 25th-75th percentile (Q1-Q3), and binary data are presented as absolute numbers and proportions. Comparisons between the groups were made using the Fisher exact test and the Mann-Whitney rank-sum test. To evaluate differences in time profiles of continuous measures, we applied generalized estimating equations to assess changes over the follow-up period and evaluated between-group differences using χ² tests. A P-value < 0.05 denoted statistical significance.
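As a rough illustration, the analysis described above could be set up as follows in Python with statsmodels; the long-format data file and column names (patient_id, timepoint, group, lvef) are hypothetical placeholders rather than the study's actual dataset or code.

```python
# Sketch of a generalized estimating equation for repeated CMR measures.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

long = pd.read_csv("cmr_long.csv")  # assumed: one row per patient per CMR visit

model = smf.gee(
    "lvef ~ C(timepoint) * C(group)",      # group: interrupted vs continued
    groups="patient_id",                   # repeated measures within patient
    data=long,
    cov_struct=sm.cov_struct.Exchangeable(),
    family=sm.families.Gaussian(),
)
result = model.fit()
print(result.summary())
# The time-by-group interaction terms capture whether the LVEF trajectories
# differ between groups over follow-up.
```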
Results
In the EMBRACE-MRI trial, per our current study definition, 19 patients met the inclusion criteria. Among these 19 patients, 7 continued trastuzumab therapy and were thus matched to 7 patients in whom trastuzumab therapy was interrupted, resulting in a total of 14 patients in this current study. Comparison of the interrupted and continued groups showed no substantial covariate imbalance in baseline characteristics (Table 1), time of onset of CTRCD from baseline (238 days [159-252] vs 242 days [200-250], P = 0.67), or LVEF at time of CTRCD (49% [48-50] vs 51% [49-52], P = 0.63). Maximal doses of beta-blockers and ACE inhibitors and/or ARBs used to treat CTRCD in each group are summarized in Table 2.
All patients were followed similarly in our cardio-oncology program; 13 of 14 (93%) patients completed 17 cycles of trastuzumab, and 1 patient in the interrupted group received only 14 cycles. Trastuzumab therapy was interrupted for 1 cycle in 2 patients, 2 cycles in 4 patients, and 3 cycles in 1 patient. By the 2-year follow-up, 2 deaths had occurred in the interrupted group (both due to cancer progression), and none had occurred in the continued group. Post-trastuzumab therapy and at 2-year follow-up, none of the patients developed the primary outcome of CMR LVEF < 40% or clinical HF.
Temporal measures of secondary outcomes, including LVEF, left ventricular volumes, T2 values, and native T1 and extracellular volume fraction values pre-cancer therapy, at the time of CTRCD, at the post-trastuzumab therapy timepoint, and at 2-year follow-up, are illustrated in Figure 1 for both groups. The trajectories of these measures were not statistically significantly different between the groups in a comparison of the period from the time of CTRCD to the final follow-up.
No statistically significant differences were present between the groups in the post-trastuzumab therapy EQ-5D-3L Index score, in left ventricular volumes, LVEF, or markers of edema and fibrosis over follow-up, or in QOL measures and VO2peak post-trastuzumab completion. This exploratory study provides hypothesis-generating data indicating that in patients with mild CTRCD from sequential anthracyclines and trastuzumab therapy for breast cancer, who are started on HF therapy, continuation of trastuzumab therapy may be a consideration.
Along with prior studies,9,10 our study provides further impetus to support conducting larger clinical trials to help answer this clinically important question in the field of cardio-oncology.
Findings similar to ours have been demonstrated in 2 prior studies. The SCHOLAR study by Leong et al.10 included 20 patients with HER2+ breast cancer, with mildly reduced LVEF of 40%-54% or a drop in ejection fraction of ≥15% from baseline to an LVEF ≥54% measured by echocardiography, who were treated with an ACE inhibitor or ARB and/or beta-blockers while continuing trastuzumab therapy. This study had no control group. Eighteen patients (90%) completed their planned trastuzumab therapy, whereas 2 (10%) developed severe CTRCD (LVEF drop to 28% and 26%, with New York Heart Association class III-IV HF symptoms) requiring permanent cessation of trastuzumab therapy. However, these 2 patients improved their LVEF (56% and 47%, respectively) and HF symptoms (New York Heart Association class I) at subsequent follow-up. The SAFE-HEaRT study9 included 30 patients with HER2+ breast cancer with LVEF 40%-49% measured by echocardiography and no HF symptoms, treated with ACE inhibitors or an ARB and/or beta-blockers while continuing trastuzumab therapy. This study also had no control group. Twenty-seven of the patients (90%) completed trastuzumab therapy, 2 developed symptomatic HF, and 1 developed an asymptomatic LVEF decline to 32%.
Our study adds to the above studies in the following manner. First, we provide a control group, in which treatment was continued, as a comparison. Both the above-described studies used echocardiography-measured LVEF for the diagnosis of CTRCD and for subsequent follow-up to define safety. Echocardiography measures have significant intraobserver, interobserver, and temporal variability, especially in patients receiving cancer therapy,11 potentially resulting in misclassification. We used the reference-standard technique for LVEF measurement to demonstrate more confidently that no statistically significant differences were present between the groups. Furthermore, given that patients may develop HF 1-2 years after completion of cancer therapy, we provide 2-year follow-up to ensure that the lack of difference in left ventricular volume and ejection fraction measures is sustained. We provide novel measures that were not available in the prior studies, including CMR-measured markers of myocardial edema and fibrosis, further suggesting that no differences were present in potential myocardial substrate for future HF.12,13 We also provide novel data on CRF, which has been shown to be an important prognostic marker of poor long-term outcomes in patients with cancer.14,15 The lack of significant difference in VO2peak post-trastuzumab therapy completion adds further confidence regarding the safety of the approach of continuing trastuzumab therapy for mild CTRCD. Finally, we examined patient-centric measures and found no significant differences in QOL measures between groups.
Despite the strengths outlined above, our study has limitations. We included 14 patients, which increases the likelihood of type II error in our statistical analysis. Furthermore, although nested within a prospective cohort study, the specific question addressed in this study was retrospective in nature and hence subject to unmeasured confounders. Therefore, we consider our findings to be hypothesis-generating, and additive to prior studies, but not practice-changing. The small sample size reflects the fact that our study was nested within the EMBRACE-MRI study, in which only 19 patients received HF therapy for mild CTRCD. However, this approach provided us with the ability to use traditional and novel CMR measures to assess the safety of continuing trastuzumab therapy and helped us reduce potential bias, as all 14 patients received identical care. Although all patients in the EMBRACE-MRI trial who developed CTRCD and received HF therapy would be candidates for transient trastuzumab therapy interruption,5,6 the decision to continue or interrupt trastuzumab therapy was based on clinician and patient preference and not on randomization. Several reasons account for the difference between the recommendations and clinical practice, including patient reluctance to stop trastuzumab therapy due to concern about poor outcomes,7,8 and newer data suggesting that unless the LVEF at the time of CTRCD is < 40%, the prognosis is excellent, resulting in physician reluctance to stop trastuzumab therapy for mild asymptomatic CTRCD.16 Given that the decision to stop vs continue trastuzumab therapy may be a confounder, our findings should be interpreted within the context of this limitation.
As we await data from randomized controlled studies, such as Safety of Continuing HER-2 Directed Therapy in Overt Left Ventricular Dysfunction (SCHOLAR-2; NCT04680442), comparing the safety of trastuzumab continuation vs interruption in patients with mild CTRCD using echocardiography measures, our study adds to the existing literature regarding the potential safety of continuing trastuzumab therapy on the basis of additional prognostic measures. Overall, the data to date from the 2 prior studies9,10 and the current study are encouraging, but these findings require robust validation before they should impact clinical practice.
Figure 1. Individual patient trajectories and group-specific averages with corresponding 95% confidence intervals for the interrupted and continued groups at each study time point. Data are presented for cardiac magnetic resonance imaging-measured left ventricular ejection fraction (LVEF) and volumes and tissue characterization measures (T2/T1 values and extracellular volume fraction). P values are for comparison of between-group differences of changes over the 4 follow-up time periods.
Table 2. Maximal doses of angiotensin-converting enzyme (ACE) inhibitors and/or angiotensin receptor blockers (ARBs) and beta-blockers used for treatment in the groups that interrupted vs continued trastuzumab therapy.
Chemokines as Prognostic Factor in Colorectal Cancer Patients: A Systematic Review and Meta-Analysis
Chemokines orchestrate many aspects of tumorigenic processes, such as angiogenesis, apoptosis, and metastatic spread, and chemokine receptors are expressed on tumor cells as well as on inflammatory cells (e.g., tumor-infiltrating T cells, TILs) in the tumor microenvironment. Expressional changes of chemokines and their receptors in solid cancers are common and well known, and they particularly affect colorectal cancer patient outcomes. Therefore, the aim of this systematic review and meta-analysis was to evaluate chemokines as prognostic biomarkers in colorectal cancer patients. A systematic literature search was conducted in PubMed, CENTRAL, and Web of Science. Information on the expression of 25 chemokines in colorectal cancer tissue and survival data of the patients were investigated. The hazard ratio of overall survival and disease-free survival with chemokine expression was examined. The risk of bias was analyzed using Quality in Prognosis Studies. Random-effects meta-analysis was performed to determine the impact on overall and disease-free survival, using pooled hazard ratios (HR) and their 95% confidence intervals (CI). Twenty-five chemokines were included, and the search revealed 5556 publications, of which thirty-one were included in this systematic review and meta-analysis. Overexpression of the chemokine receptor CXCR4 was associated with both significantly reduced overall survival (HR = 2.70, 95% CI: 1.57 to 4.66, p = 0.0003) and significantly reduced disease-free survival (HR = 2.68, 95% CI: 1.41 to 5.08, p = 0.0026). All other chemokines showed either heterogeneous results or too few available studies. The overall risk of bias for CXCR4 was rated low. At the current level of evidence, this study demonstrates that CXCR4 overexpression in patients with colorectal cancer is associated with significantly diminished overall as well as disease-free survival. Summed up, this systematic review and meta-analysis reveals CXCR4 as a promising prognostic biomarker. Nevertheless, more evidence is needed to evaluate CXCR4 and its antagonists as new therapeutic targets.
Introduction
Colorectal cancer (CRC) is the third most common malignancy in the world and one of the deadliest malignant tumors. Worldwide, 935,000 deaths were recorded in 2020 [1]. Appallingly, by 2030, about 2.2 million new cases of CRC and 1.1 million deaths owing to CRC are predicted [2].
Expressional changes in chemokines and their receptors are known to provoke and modulate solid cancers such as pancreatic cancer, breast cancer, melanoma, and colorectal cancer [3][4][5][6][7]. Approximately 50 different chemokines are known, divided into four groups. The classification depends on the four conserved cysteine residues [8]. The primary amino acid sequence and the structural arrangement of the cysteine residues define the chemokine subfamily [9]. At the N-terminus of the chemokine ligand, the conserved cysteine residues are connected to each other via disulfide bridges [9]. Balanced chemokine expression is responsible for proper immune cell function [10]. Chemokines and their receptors are expressed by stromal, immune, epithelial, endothelial, and mesenchymal cells, facilitating intercellular communication [11]. Table 1 provides an overview of the chemokine receptors, with a focus on CXCR and corresponding ligands.
Table 1. Overview of the chemokine receptors with a focus on CXCR and corresponding ligands (adapted from [12]).
In tumor cells, chemokines can trigger an imbalance between pro- and anti-apoptotic proteins, allowing tumor cell survival, tumor growth, and inhibition of apoptosis [13,14]. Chemokines are integrated into several cellular interactions, such as angiogenesis and hematopoiesis, and into the intrinsic communication of the human immune system [15][16][17]. Famous examples include CXCL9 and CXCL10 as central regulators of T cell densities and composition in the tumor microenvironment [18][19][20]. However, even these chemokines have come under scrutiny in other tumor diseases, with evidence also pointing to a possible detrimental role in tumor progression [18][19][20]. Another relevant example is the chemokine receptor CXCR4, which is expressed in various cells, including endothelial and hematopoietic cells, as well as embryonic and adult stem cells [21]. CXCR4 is the predominant receptor for the chemokine ligand CXCL12; overexpression of CXCR4 and CXCL12 is found in several solid cancers, such as CRC, gastric cancer, pancreatic cancer, and lung cancer [22][23][24][25][26]. Nevertheless, differentiating between various tumor types on the basis of a chemokine panel is not possible. However, chemokines represent tumorigenic markers whose dysregulation is known to influence survival [23,24,26]. CXCR4 is a seven-transmembrane G-protein-coupled receptor that interacts with chemokine ligand 12 (CXCL12). CXCL12, also called SDF-1 (stromal cell-derived factor-1), is expressed on stromal cells, including endothelial cells and fibroblasts [8,27]. Research has shown that CXCL12 and CXCR4 affect angiogenesis and tumor growth, as well as invasion and metastasis [8,28]. Khare et al. reported that both the CXCL12-CXCR4 axis and the CXCL12-CXCR7 axis play an important role in CRC metastasis [29]. In addition to tumor metastasis, CXCR4 is also involved in the modulation of carcinogenic stem cells [30]. Regulation of apoptosis also occurs via the CXCL12-CXCR4 axis with the participation of members of the Bcl-2 family [31]. The transmission of epithelial and mesenchymal colon cancer cells and their metastases is promoted by the CXCL12-CXCR4 interaction, which directly impacts the Wnt/β-catenin signaling pathway and thereby contributes to the formation and progression of colorectal cancer [9,32]. In general, colorectal cancers exhibit a variety of potential underlying mutations and immune markers, e.g., BRAF, KRAS, APC, PD1, and microsatellite stability (MSS), resulting in impaired survival [33][34][35]. Chemokine receptor type 7 (CXCR7) is expressed in many cancer cells and controls angiogenesis, cell growth, and immunity [29]. In addition, CXCR7 has a stimulating effect on tumor development [36]. CXCR4 and CXCR7 can be expressed either individually or together; when expressed simultaneously, they can form homo- or heterodimers [29]. Heterodimers have been demonstrated in about 65% of colorectal cancers [29]. Interactions occur through intracellular signaling effectors when CXCR4 and CXCR7 bind to the chemokine ligand CXCL12 [29,37]. Clinical trial efforts are underway targeting either CXCL12 or the receptor CXCR4 in order to improve antitumoral activation of the immune system [20,38]. Given the large number of different chemokines and chemokine receptors addressed in a large number of diverse studies, an investigation of the reliability and validity of chemokines and chemokine receptors as potential
prognostic factors in colorectal cancer patients is needed. Consequently, the aim of our systematic review and meta-analysis was to investigate the prognostic role of chemokine expression in the tumoral tissue of patients with CRC in order to explore possible modular trends within the broad heterogeneity of chemokines. The systematic review and meta-analysis were performed for colorectal cancer only, justified by the large number of studies on chemokine expression across various solid cancer types; each cancer entity warrants a systematic review and meta-analysis of its own.
Study Selection
The initial search revealed a total of 7994 articles. Of these, 2438 duplicates were removed. The titles and abstracts of the remaining 5556 articles were screened for inclusion; records with no survival data, animal studies, or non-relevant records were excluded. Fifty-four papers passed title and abstract screening. Four papers were excluded because the full text could not be accessed. A total of 50 articles were assessed for eligibility and met the inclusion criteria; another 19 studies provided too little data on survival and were also excluded. Finally, the remaining 31 papers were included in the quantitative analysis, in which the survival of 6079 patients with chemokine expression in tumor tissue was analyzed. A PRISMA flow chart is presented in Figure 1. All included and evaluated studies are shown in Table 2.
Qualitative Analysis
Study participation: 3 of the 31 studies (9.7%) were at moderate risk of bias, and the majority, 28 of the 31 studies (90.3%), were at low risk.
Study attrition: 8 of the 31 studies (25.8%) were at moderate risk of bias, and the majority, 23 of the 31 studies (74.2%), were at low risk.
Prognostic factor measurement: all 31 studies (100%) received a low risk of bias score.
Outcome measurement: 3 of the 31 studies (9.7%) were at moderate risk of bias; the majority, 28 of the 31 studies (90.3%), were at low risk.
Study confounding: 9 of the 31 studies (29%) were at moderate risk of bias; 22 of the 31 studies (71%) were at low risk.
Statistical analysis and reporting: 8 of the 31 studies (25.8%) were at moderate risk of bias; the majority, 23 of the 31 studies (74.2%), were at low risk.
An overview of the risk of bias evaluated with the QUIPS tool is given in Table 2.
Study Characteristics
Additionally, detailed study information, such as the number of patients, AJCC stage, and year of publication, is presented in Table 3 and in Supplementary Tables S1A-D.
Quantitative Analysis of Chemokine Expression and Survival
A meta-analysis was carried out only if three or more studies were available. Quantitative analysis revealed that, of the 25 assessed chemokines, significant results could be verified only for the chemokine receptor CXCR4.
Four studies of the chemokine CXCL12 were included for overall survival and disease-free survival. One of the studies showed that increased expression led to better survival, contradicting the other three studies; therefore, no significant result could be shown. Three studies on chemokine CXCL14 expression in relation to overall survival were included, which did not produce any significant results. Three studies on the chemokine CXCL1 were included; however, they were not pooled due to considerable heterogeneity, as recommended by the Cochrane Handbook. For the chemokine CXCL8, four studies on overall survival were included, which also showed broad heterogeneity. Data for chemokines with three or more included publications (CXCL1, CXCL8, CXCL12, CXCL14) are shown in Supplementary Figure S1A-E.
For all other chemokines examined in this study, there were too few studies to obtain meaningful results.
For CXCR4, the systematic review and meta-analysis showed significant results, with an influence on both overall survival and disease-free survival. Five studies were included regarding CXCR4 expression and overall survival. All individual study results, as well as the pooled effect, showed that tumoral overexpression of CXCR4 was associated with significantly shortened overall survival (HR = 2.70, 95% CI [1.57; 4.66], p = 0.0003) (Figure 2), with a moderate level of heterogeneity (I² = 62%, τ² = 0.230). Five studies were also included regarding CXCR4 expression and disease-free survival. High CXCR4 expression in colorectal cancer specimens was accompanied by significantly diminished disease-free survival (HR = 2.68, 95% CI [1.41; 5.08], p = 0.0026) (Figure 3), again with a moderate level of heterogeneity (I² = 69%, τ² = 0.354).
Overall Risk of Bias of CXCR4
Of the five included studies, three were rated as having a low risk of bias [22,46,50,51,56] and two a moderate risk of bias [36,48]. Therefore, the overall risk of bias was assessed as mainly low. Summed up, this systematic review and meta-analysis reveals CXCR4 as a meaningful prognostic biomarker.
Discussion
In summary, this systematic review and meta-analysis elucidates that tumoral overexpression of CXCR4, assessed in 1381 patients, represents a promising prognostic factor, resulting in significantly shortened overall survival as well as disease-free survival in colorectal cancer patients.
In the present systematic review and meta-analysis, tumoral overexpression of the chemokine receptor CXCR4 serves as a significant biomarker of poor prognosis in colorectal cancer patients. This could offer the opportunity to develop future targeted therapies with the potential to improve life expectancy.
Studies of CXCR4 included a total of 536 CRC patients with a focus on overall survival and a total of 845 CRC patients with a focus on disease-free survival. Patients with tumoral overexpression of CXCR4 showed both significantly diminished overall survival and significantly diminished disease-free survival. Zengin et al. pointed out that CXCR4 and CXCL12 overexpression in 260 colorectal cancer patients was also associated with a significantly worse prognosis regarding overall and disease-free survival [22]. Kim et al. examined six CRC cell lines and one hundred twenty-five CRC patients, partly matched with samples of synchronous liver metastasis [48]. Screening of samples and cell lines identified CXCR4 as the prominent chemokine receptor; overexpression of CXCR4 in CRC tissue was confirmed, with a significant negative impact on survival [48]. Kim et al. report a larger hazard ratio than the other studies, which may be attributed to the smaller number of patients [48]. Furthermore, Ottaiano et al. illustrated that overexpression of the chemokine receptor CXCR4 was unfavorable for CRC patients. In their publication, seventy-two patients with stage II-III CRC were selected and operated on with curative intent [51]. Ottaiano et al. also examined the expression of CXCR4 in colon cancer cells using immunocytochemistry, with positive results in all cell lines examined. Patients with high CXCR4 expression showed significantly worse disease-free survival compared with patients with low or no expression [51].
Moreover, Xu et al. demonstrated a correlation between clinicopathological characteristics, prognosis, and treatment of colorectal cancer. They analyzed CXCR4 mRNA expression in colorectal carcinoma tissue from 48 patients using qRT-PCR and compared the expression level with the corresponding non-tumorous tissue [46]. Their data confirmed that overexpression of CXCR4 is associated with worse overall survival [46].
In this systematic review, the chemokines CXCR7 and CXCL12 were also examined in more detail because of the known interactions in the CXCR4-CXCL12-CXCR7 chemokine axis. The CXCL12-CXCR4/CXCR7 axis plays an important role in the treatment strategy of patients with CRC [29]. D'Alterio et al. highlight that concomitant negative or low expression of CXCR7 together with high expression of CXCR4 is significantly associated with poor disease-free survival [36]. Further, high expression of CXCR4 together with negative or low expression of CXCL12 resulted in significantly worse relapse-free survival in the sixty-eight patients studied; the interaction of different chemokines may thus foster a negative impact on survival [36]. Both D'Alterio et al. and Kim et al. confirmed in their studies that high expression of CXCR4 indicates a poor prognosis [36,48]. Zhang et al. confirm in their study that worse overall survival is associated with overexpression of CXCR4 in 125 samples from patients with stage II and III colon cancer [56]. Wang et al. also verified a worse prognosis with high CXCR4 expression in 388 patients suffering from colorectal cancer [50]. Regarding the quantitative results, the overall risk of bias was assessed as low, as can be seen in Tables 2 and 3; therefore, the results appear reliable.
Consequently, in our systematic review and meta-analysis, the random-effects model revealed significantly diminished overall survival with CXCR4 overexpression, pointing out CXCR4 as a prognostic factor. Some limitations have to be mentioned. The included studies did not all use the same detection methods for chemokine expression, e.g., qRT-PCR, immunohistochemistry, or western blot. Further sources of clinical heterogeneity are follow-up periods of different lengths, some studies focusing on particular tumor stages, and the inclusion of studies regardless of whether patients had received neoadjuvant radio-/chemotherapy. Future studies on the influence of chemokines on survival should take these complex situations into account. Because only a small number of studies were included in the meta-analysis, the statistical heterogeneity may have been imprecisely estimated, leading to overly narrow confidence intervals of the pooled effects. A major strength of this systematic review and meta-analysis is the large number of 5556 screened studies focusing on tumoral chemokine expression in colorectal cancer patients. Moreover, this systematic review and meta-analysis covers all chemokines known to be expressed in CRC tumor tissue, in order to help identify promising new target options for tailored patient therapy approaches.
Immunotherapies are promising therapeutic approaches that can interfere with tumor growth using CXCR4 antibodies. Several promising phase 1 clinical trials targeting the receptor CXCR4 have been completed, such as NCT02695966 in pancreatic cancer and NCT02179970 in colorectal and pancreatic cancer. In addition, there are other studies in phase 2, such as the COMBAT/KEYNOTE-202 study (NCT02826486) in metastatic pancreatic adenocarcinoma, NCT02907099 in metastatic pancreatic cancer, and NCT03168139 in colorectal and pancreatic cancer, which are expected to show promising results in small cohorts [29]. The analysis of the data from the phase IIa study (NCT02826486) suggests that a combination of PD-1 blockade and a CXCR4 antagonist can improve survival in pancreatic ductal adenocarcinoma [68]. In addition, more clinical studies are being conducted focusing on antibodies against CCR2, CCR4, CCR5, and CXCR4 and nanobodies against CCL2, CCL5, CXCL11, and CXCL12 [6,69,70]. CXCR4 antagonists are considered to have an important function in chemotherapy due to the interference between tumor and stromal cells [71]. One study showed that the CXCR4 antagonist plerixafor mobilizes leukemia cells, thereby sensitizing them to cytotoxic therapy and improving hematopoietic cell transplantation (HCT) [72].
Kotb et al. confirm that the chemokine receptor CXCR4 represents a starting point for optimizing the therapeutic outcome of trastuzumab in breast cancer patients [73].
Nevertheless, further studies and phase II/III trials are needed to evaluate CXCR4 antagonists as potential new targeted therapies in colorectal cancer patients. Taken together, the chemokine receptor CXCR4 represents a promising future therapeutic target.
Materials and Methods
For this systematic review and meta-analysis, a literature search was performed according to the PRISMA guidelines ("Preferred Reporting Items for Systematic Reviews and Meta-Analyses") [74]. The study was registered in 2020 at PROSPERO, an international prospective register of systematic reviews (CRD42020157312).
Study Selection
Studies including patients with colorectal cancer, colon cancer, or rectal cancer who underwent surgery with or without perioperative radiochemotherapy, or chemotherapy alone, were included. All publications that examined chemokine expression in tumor tissue in relation to survival were included; in these studies, tumor tissue was compared with healthy colorectal mucosa. Publications examining and analyzing chemokine expression in blood were excluded. Animal studies, editorials, letters, meeting abstracts, and comments were also excluded, as were all publications for which the full text was not available. Studies focused on colorectal metastases were excluded, too. There were restrictions regarding language: publications written in English, German, and French were included; other languages were excluded due to lack of language skills. There were no restrictions regarding the publication year. The screening process and study selection from title and abstract were performed independently by two reviewers. Any disagreement concerning inclusion or exclusion was decided by consensus.
Data Extraction
The extraction of the data was performed independently by two reviewers. The following data were extracted: title, first author, publication year, research location, journal, language, trial registration, conflicts of interest, funding, number of patients, duration of patient recruitment in months, duration of follow-up, TNM stage, and grade. For each chemokine, the hazard ratio (HR) with 95% confidence interval (CI) limits for overall survival (OS) and disease-free survival (DFS) from univariable analysis was extracted.
If data from a Gene Expression Omnibus (GEO) database were used, it was checked whether the data came from patients or animal tissue; only human data were extracted, and animal data were excluded. Care was also taken to ensure that the different datasets from the GEO database were used only once in the analysis in order to avoid the risk of bias.
Risk of Bias-Critical Appraisal
The Quality in Prognosis Studies (QUIPS) tool was used to evaluate the risk of bias and the quality of all the studies [75]. Two investigators (J.F.-H. and F.K.) assessed the methodological quality of the included studies. Study participation, attrition, prognostic factor measurement, outcome measurement, study confounding, and statistical analysis and reporting are the six domains of the QUIPS tool. Each study was rated in these six domains as high risk, moderate risk, or low risk. The most important domain is study confounding, which was therefore selected as the overall risk of bias. A low risk of bias was defined as no serious alteration of results, a moderate risk as a slightly serious alteration, and a high risk as a serious alteration of the results.
Data Handling and Statistical Analysis
The main outcome of this systematic review and meta-analysis was overall survival; data on cancer-specific survival (CSS) were equated to overall survival (OS). An additional outcome was disease-free survival; progression-free survival, regression-free survival, relapse-free survival, and recurrence-free survival were counted as disease-free survival (DFS). If studies split their observations into a training and a validation set, the effect measures based on the validation set were used. A meta-analysis was carried out only if three or more studies were available.
The hazard ratio (HR) and the 95% confidence interval (CI) were used as effect measures for the survival endpoints. If a study did not explicitly report the HR and the 95% CI, they were estimated from other reported quantities using the formulas of Tierney et al. [76]. A p-value of less than 0.05 was considered significant.
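When a study reports the HR with its 95% CI, one of the Tierney et al. conversions reduces to two lines of arithmetic: take the log of the HR, and divide the width of the log-scale CI by 2 × 1.96. A minimal sketch in Python, using the pooled CXCR4 overall-survival estimate from this review purely as a worked example:

```python
import math

def loghr_from_ci(hr, lower, upper, z=1.96):
    """Recover (log HR, SE of log HR) from a reported HR and 95% CI."""
    log_hr = math.log(hr)
    se = (math.log(upper) - math.log(lower)) / (2 * z)
    return log_hr, se

log_hr, se = loghr_from_ci(2.70, 1.57, 4.66)
print(round(log_hr, 3), round(se, 3))  # ~0.993, ~0.278
```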
Statistical analyses were conducted using R (version 4.3.1) and the package meta [77]. Random-effects models were applied to account for the expected heterogeneity between the included studies. The between-study variance τ² and the I² statistic were used to assess heterogeneity.
The results of the I² statistic were interpreted as follows: I² between 0% and 40% might not be important; 30% to 60% may represent moderate heterogeneity; 50% to 90% may represent substantial heterogeneity; and 75% and above indicates considerable heterogeneity [78].
In addition, to express the amount of heterogeneity, 95% prediction intervals were computed based on the t-distribution [79]. The prediction interval predicts the effect in a new study that is similar to the studies included in the meta-analysis.
The effect measures of the individual studies, as well as pooled results, were visualized graphically with forest plots. The meta-analyses were performed using effect sizes for chemokine overexpression estimated by univariable survival analyses in the individual studies.
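To retrace the pooling itself, the sketch below re-expresses the DerSimonian-Laird random-effects computation (pooled HR, τ², I², and the t-based 95% prediction interval) that the R package meta performs; the per-study log HRs and standard errors are illustrative placeholders, not the data extracted for this review.

```python
import numpy as np
from scipy import stats

# Hypothetical per-study inputs (log HR and SE), e.g. from loghr_from_ci().
y = np.array([0.99, 0.65, 1.40, 0.80, 1.10])   # log hazard ratios
se = np.array([0.30, 0.25, 0.45, 0.35, 0.40])  # standard errors

w = 1 / se**2                                   # fixed-effect weights
y_fe = np.sum(w * y) / np.sum(w)
q = np.sum(w * (y - y_fe) ** 2)                 # Cochran's Q
k = len(y)
c = np.sum(w) - np.sum(w**2) / np.sum(w)
tau2 = max(0.0, (q - (k - 1)) / c)              # DerSimonian-Laird tau^2
i2 = max(0.0, (q - (k - 1)) / q) * 100          # I^2 statistic in %

w_re = 1 / (se**2 + tau2)                       # random-effects weights
mu = np.sum(w_re * y) / np.sum(w_re)            # pooled log HR
se_mu = np.sqrt(1 / np.sum(w_re))
ci = np.exp(mu + np.array([-1.96, 1.96]) * se_mu)

# 95% prediction interval based on the t-distribution with k-2 df
t = stats.t.ppf(0.975, k - 2)
pi = np.exp(mu + np.array([-t, t]) * np.sqrt(tau2 + se_mu**2))

print(f"pooled HR {np.exp(mu):.2f}, 95% CI {ci.round(2)}, "
      f"I2 {i2:.0f}%, tau2 {tau2:.3f}, 95% PI {pi.round(2)}")
```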
Conclusions
In summary, this systematic review and meta-analysis showed a significant negative influence of the chemokine receptor CXCR4 on overall as well as disease-free survival in primary colorectal cancer patients. Nevertheless, further studies are needed to confirm CXCR4 and its antagonists and to identify other chemokines and their receptors as prognostic factors and potential therapeutic targets.
Figure 1. PRISMA flow chart showing the selection process.
Table 2. Overview of all included studies in this systematic review and meta-analysis, evaluated with the Quality in Prognosis Studies (QUIPS) tool. OS = overall survival, DFS = disease-free survival.
Chinese Medicines for Preventing and Treating Radiation-Induced Pulmonary Injury: Still a Long Way to Go
Thoracic radiotherapy is a mainstay of the treatment for lung, esophageal, and breast cancers. Radiation-induced pulmonary injury (RIPI) is a common side effect of thoracic radiotherapy, which may limit the radiotherapy dose and compromise the treatment results. However, the current strategies for RIPI are not satisfactory and may induce other side effects. Chinese medicines (CMs) have been used for more than a thousand years to treat a wide range of diseases, including lung disorders. In this review, we screened the literature from 2007 to 2017 in different online databases, including China National Knowledge Infrastructure (CNKI), Chongqing VIP, Wanfang, and PubMed; summarized the effectiveness of CMs in preventing and treating RIPI; explored the most frequently used drugs; and aimed to provide insights into potential CMs for RIPI. Altogether, CMs attenuated the risk of RIPI, with an occurrence rate of 11.37% vs. 27.78% (P < 0.001) compared with the control groups. We also found that CMs (alone and combined with Western medical treatment) for treating RIPI exerted a higher efficacy rate than the control groups (78.33% vs. 28.09%, P < 0.001). In the screened literature, 38 CMs were used for the prevention and treatment of RIPI. The top five most frequently used CMs were Astragali Radix (frequency 8.47%), Ophiopogonis Radix (6.78%), Glycyrrhizae Radix et Rhizoma (5.08%), Paeoniae Radix Rubra (5.08%), and Prunellae Spica (5.08%). However, further high-quality investigations into CM sources, pharmacological effects and underlying mechanisms, toxicological aspects, and ethical issues are warranted. Taken together, CMs might have a potential role in RIPI prevention and treatment, but there is still a long way to go.
INTRODUCTION
Thoracic radiotherapy is a mainstay of treatment for patients who have cancer, e.g., lung cancer, esophageal cancer, and breast cancer. Radiation-induced pulmonary injury (RIPI) is a common side effect, comprising two phases, acute radiation pneumonitis and chronic radiation-induced pulmonary fibrosis (Cao et al., 2017; Huang et al., 2017). The symptoms include a dry cough, shortness of breath, chest pain, fever, and even severe respiratory failure and death (Huang et al., 2017). RIPI is highly likely to occur with a high dose and dose rate of radiation. Capillary endothelial and type I cells (epithelial lining cells) appear to be most susceptible to RIPI (Abid et al., 2001). RIPI may lead to a reduction in and damage to pulmonary function, involving a cascade of inflammatory events, including oxidative damage (Demirel et al., 2016), sphingomyelin hydrolysis, and apoptotic DNA degradation (Abid et al., 2001). Patients with RIPI typically suffer from dyspnea with decreased vital capacity (VC), forced vital capacity (FVC), forced expiratory volume in 1 s (FEV1), alveolar volume (VA), transfer factor for carbon monoxide (TL,CO), and residual volume (RV) (Enache et al., 2013). At the cellular and molecular level, radiation may damage the alveolar epithelial cells as well as vascular endothelial cells and trigger the secretion of a large number of pro-inflammatory and pro-fibrotic cytokines, including transforming growth factor (TGF)-β1, interleukin-13 (IL-13), endothelin-1 (ET-1), platelet-derived growth factor (PDGF), cyclooxygenase (COX), and prostaglandin E2 (PGE2). Furthermore, type 1 helper T cells (Th1 cells) may affect RIPI by secreting IL-4 and IL-13, while type 2 helper T cells (Th2 cells) induce collagen synthesis for tissue remodeling and fibrosis (Huang et al., 2017).
To date, the treatment strategies for RIPI are mainly based on glucocorticoids, angiotensin-converting enzyme inhibitors (ACEIs), and pentoxifylline (Deng et al., 2017). However, these regimens are often unsatisfactory and may induce additional side effects, whereas CMs have long been used against lung disorders. Thus, a comprehensive review of the effects of CMs on RIPI may help discover the most effective CMs. In this review, studies of CMs in RIPI were retrieved from online databases covering 2007 to 2017. The literature was screened using the inclusion and exclusion criteria and analyzed using statistical software.
Data Retrieval and Collection
The keywords "radiation-induced pulmonary/lung injury, "radiation-induced pneumonitis/pulmonary fibrosis, and "Chinese medicine/ traditional Chinese medicine/ Chinese herbal medicine" were used to retrieve studies of CMs in RIPI from 2007 to 2017 from the online databases of China National Knowledge Infrastructure (CNKI), Chongqing VIP, Wangfang, and PubMed. The duplicates were discarded. The overall efficacy, changes of pulmonary function, underlying mechanisms, CM species and families, traditional use, and usage frequency in these studies were summarized and analyzed. The study was approved by the Ethical Commitee of Hubei University of Medicine.
Criteria of Inclusion
The inclusion criteria were as follows:
• The contents of the literature involved the clinical effects of CMs on RIPI, including radiation-induced pneumonitis and/or pulmonary fibrosis.
• The references included pure compounds, fractions, and formulae of CMs. The CMs were composed of fungi, plants, animals and their parts, and minerals.
• The design of the studies was randomized and controlled.
• The studies described some parameter of lung function.
Criteria of Exclusion
The data from the following literature were excluded:
• The literature was associated with neither CMs nor RIPI.
• The pure compounds were not naturally derived from CMs but were chemical derivatives.
• The species of CMs were not given or could not be determined.
• The fractions and/or formulae of CMs were described without the extraction methodology.
• The components of the formulae were not given.
• The manufacturer and/or manufacturing data were not stated if marketed CM preparations were used.
• The dosage of the drugs was not given.
• Only the overall efficacy of CMs on RIPI was described, without any assessment of lung function or biochemical indices.
• The studies were not randomized and controlled.
• The literature did not claim any ethical approval, or the studies lacked a declaration of patients' agreement or signed informed consent.
• The study was a review or a meta-analysis.
Statistical Analysis
The data were selected based on the inclusion and exclusion criteria defined above for clinical settings, and they were summarized and analyzed using SPSS 19.0 (SPSS Inc., Chicago, IL, USA) and GraphPad Prism 5.0 (GraphPad Software, Inc., La Jolla, CA, USA). Data are presented as the mean ± standard deviation (SD). One-way ANOVA was performed for multiple-group comparisons, and a two-sided Student's t test was used to compare differences between two groups. P < 0.05 was considered statistically significant.
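As an illustrative aside, the key two-proportion comparison reported in this review (RIPI occurrence of 11.37% with CM prevention vs. 27.78% in controls) can be re-checked with a chi-square test in Python. This is a minimal sketch, not the authors' SPSS workflow: the group sizes are those reported in this review, while the event counts (39 and 113) are back-calculated from the percentages and therefore approximate.

```python
# Re-check of the reported two-proportion comparison (illustrative only).
# Group sizes (343 CM, 407 control) come from the review; the event counts
# are back-calculated from the reported rates (39/343 ~ 11.37%,
# 113/407 ~ 27.8%) and are approximations, not the authors' raw data.
import numpy as np
from scipy.stats import chi2_contingency

cm_events, cm_n = 39, 343
ctrl_events, ctrl_n = 113, 407

table = np.array([
    [cm_events, cm_n - cm_events],        # CM group: events vs. non-events
    [ctrl_events, ctrl_n - ctrl_events],  # control group
])
chi2, p, dof, _ = chi2_contingency(table)
print(f"CM rate:   {cm_events / cm_n:.2%}")      # ~11.37%
print(f"Ctrl rate: {ctrl_events / ctrl_n:.2%}")  # ~27.76%
print(f"chi2 = {chi2:.1f}, p = {p:.1e}")         # p far below 0.001
```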
General Information on Literature Retrieval
A total of 642 papers were retrieved. After removing 390 duplicates and irrelevant records, 53 reviews and meta-analyses, and 191 articles excluded by the selection criteria, eight studies that met the selection criteria were summarized and analyzed (Figure 1). Ethics committees approved all clinical trials, and all patients agreed to participate in the studies and signed informed consent. The CMs examined in the eight studies (five for prevention, one for treatment, and two for both) included eight formulae (Table 1). Furthermore, some CMs were used with a high frequency (Table 2). In these papers, 883 patients were randomly assigned to the control groups (Ctrl, 407 patients), the CM-preventing groups (CMs, 343 patients) (Figure 2), and the CM-treating groups (CMs, 180 patients) (Figure 3).
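The screening funnel above can be tallied explicitly. The counts below are exactly those reported in the review; only the category labels are ours.

```python
# Bookkeeping of the literature-screening funnel (Figure 1).
retrieved = 642
removed = {
    "duplicates and irrelevant records": 390,
    "reviews and meta-analyses": 53,
    "excluded by the selection criteria": 191,
}
included = retrieved - sum(removed.values())
for reason, n in removed.items():
    print(f"removed {n:3d}: {reason}")
print(f"included: {included} studies")  # -> 8, matching the text
```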
Overall Efficacy of CMs for Treating RIPI
In the three clinical studies with efficacy data, the treatment groups received CMs alone or combined with Western medicine, and the control groups received Western medicine. The results showed that CMs alone and CMs combined with Western medical treatment were more effective than the controls (78.33% vs. 28.09%, P < 0.001, Figure 3). Furthermore, the change in pulmonary function was investigated in two studies. In total, 120 patients (74 males and 46 females) were treated with CMs or CMs combined with Western medical treatment. Compared with the control groups (Ctrl, 120 patients, including 77 males and 43 females), CMs alone or combined with Western medical treatment produced greater improvements in VC, FVC, and FEV1, indicating better recovery of impaired lung function (Figure 3).
CMs: Commonly Used Families, Traditional Action Classification, and Frequency of Usage
As CMs for RIPI were used according to their traditional properties, it is necessary to explore the relationship between pharmacological effects and traditional function, as well as CM species. We first summarized the CMs according to their traditional action classification. Interestingly, only 5 out of 17 current categories of CMs were used. In order of frequency from high to low, these were: tonics (TON, 34.21%); heat-clearing medicines (HCM, 31.58%); expectorants, antitussives, and antiasthmatics (EAA, 15.79%); blood invigorating and stasis resolving medicines (BISRM, 15.79%); and astringent medicines (AST, 2.63%) (Figure 4). The top five most frequently used CMs were Astragali Radix (with a frequency of 8.47%), Ophiopogonis Radix (6.78%), Glycyrrhizae Radix et Rhizoma (5.08%), Paeoniae Radix Rubra (5.08%), and Prunellae Spica (5.08%) (Table 2).
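On how these percentages appear to be derived: each frequency seems to equal a CM's number of occurrences across the eight formulae divided by the total number of CM occurrences. A denominator of 59 reproduces the reported values exactly, but both the per-CM counts and that total are our back-calculation, not figures stated in the text.

```python
# Back-calculated sketch of the usage-frequency computation (hypothetical
# counts; only the resulting percentages are stated in the review).
counts = {
    "Astragali Radix": 5,
    "Ophiopogonis Radix": 4,
    "Glycyrrhizae Radix et Rhizoma": 3,
    "Paeoniae Radix Rubra": 3,
    "Prunellae Spica": 3,
}
total_occurrences = 59  # inferred: 5/59 = 8.47%, 4/59 = 6.78%, 3/59 = 5.08%
for cm, n in counts.items():
    print(f"{cm}: {n}/{total_occurrences} = {n / total_occurrences:.2%}")
```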
DISCUSSION
Thoracic radiotherapy is the mainstay of treatment for several cancers. However, RIPI is the dose-limiting factor precluding a curative therapy in some cases (Huang et al., 2017). Although several treatment regimens may alleviate RIPI, the therapeutic effects are often unsatisfactory, and these medicines also induce new side effects (Giridhar et al., 2015). Thus, it is an urgent issue to seek novel drugs for RIPI without serious side effects. CMs have a very long history of curing patients with lung diseases, among which some effective medicines may be found for the prevention and treatment of RIPI (Deng et al., 2017). In this review, we retrieved the most recent publications of the last 10 years, from 2007 to 2017, from different online databases to provide comprehensive insight into the efficacy of CMs on RIPI. We found that eight studies met our criteria. Some CMs were frequently used, and in total, eight formulae were described in the reviewed literature. The indices and parameters of pulmonary function were summarized and analyzed to investigate the therapeutic effects of CMs on RIPI quantitatively. The total RIPI occurrence in the CM-preventing groups was significantly lower than that of the control groups (11.37% vs. 27.78%, P < 0.001), while the treatment efficacy of CMs was significantly higher than that in the control groups (78.33% vs. 28.09%, P < 0.001). Our results may help identify the most effective CMs for treating RIPI, provided that high-quality studies are conducted.
The underlying mechanisms of CMs on RIPI seem to be complicated. In addition to the well-known inflammatory and fibrotic factors, other factors, such as oxidative response factors and immune cytokines, are also involved. In this review, CMs appeared to act by enhancing anti-oxidation, anti-inflammation, and anti-fibrosis properties and by improving immune function. As to the anti-oxidation property, Shen Fu Injection reduced ROS (Bi and Sun, 2015). For anti-inflammation, CMs alleviated RIPI by reducing inflammatory factors such as IL-1, IL-4, IL-6, IL-10, high mobility group box 1 (HMGB1), prostaglandin E2 (PGE2), and tumor necrosis factor-α (TNF-α) (Tables 1 and 2). Regarding immune function, Qing Fei Yang Yin Huo Xue Fang activated NK cells. For pulmonary fibrosis, Mai Men Dong Decoction-2 reduced epithelial-mesenchymal transition (EMT) via the TGF-β pathway (Liu et al., 2016); the effect of Scutellariae Radix on EMT may be associated with its active component, baicalin, which inhibited extracellular signal-regulated kinase (ERK)/glycogen synthase kinase 3β (GSK3β) signaling (Lu et al., 2017).
To unveil the relationship between the pharmacological effects and the traditional applications of CMs, the frequency of the species is summarized in Table 2, and the CM categories according to traditional application are analyzed in Figure 4. All botanical species were checked using the Chinese Pharmacopeia (Version 2015) and the online database of the Plant List (Table 2). Among the 38 CMs in this review, 36 botanic families, one animal part, and one mineral were used. Interestingly, only five categories of CMs were used for RIPI. In fact, there are up to 17 categories of CMs in use to treat various diseases according to their traditional applications, including exterior-releasing medicines; heat-clearing medicines; medicines that drain downwards; medicines that expel wind and damp; damp-resolving medicines; damp-draining medicines; medicines that warm the interior; Qi-regulating medicines; digestant medicines; medicines that stop bleeding; blood invigorating and stasis resolving medicines; expectorants, antitussives, and antiasthmatics; medicines that quiet the spirit; liver-pacifying and wind-extinguishing medicines; resuscitative medicines; tonics; and astringent medicines (Chen, 2017). The five categories reviewed here might be closely associated with their pharmacological actions on RIPI. The first class was tonics (e.g., Astragali Radix, Ophiopogonis Radix, and Glycyrrhizae Radix et Rhizoma). Tonics can enhance immune function, which indicates that they may attenuate the symptoms by enhancing resistance to RIPI. The second class was heat-clearing medicines (e.g., Paeoniae Radix Rubra and Prunellae Spica), whose main pharmacological effects include anti-inflammatory, antimicrobial, immunomodulatory, and anti-pyretic actions. The third class comprised the expectorants, antitussives, and antiasthmatics (e.g., Eriobotryae Folium, Armeniacae Semen Amarum, and Asteris Radix et Rhizoma), which can dispel phlegm, inhibit coughing, and exert anti-inflammatory effects, thereby targeting pneumonitis and pulmonary fibrosis. As for the blood-invigorating and stasis-resolving medicines (e.g., Salviae Miltiorrhizae Radix et Rhizoma), they may treat RIPI through anti-oxidative and anti-inflammatory actions (Peng et al., 2019; Yang et al., 2019). The fifth and last class was astringent medicines (Schisandrae Chinensis Fructus), which can suppress cough, exert anti-fibrotic effects, and regulate immunity, which may underlie their potential use for RIPI (Chen, 2017; Zhai et al., 2014). Interestingly, the most frequently used species were tonics, Astragali Radix (with a frequency of 8.47%) and Ophiopogonis Radix (with a frequency of 6.78%), suggesting that enhancing immune function may be even more critical than anti-inflammation in RIPI (Table 2).
However, further high-quality investigations are warranted. Research quality with regard to repeatability and standardization is vital for CMs. The diversity of CM species is a limitation of this review, which may affect the quality of the studies. Although we checked all the CMs against the Chinese Pharmacopeia (Version 2015) and the Plant List (www.theplantlist.org) in this study (Table 2), it was still hard to identify the exact species of the CMs used, because for most CMs there is more than one accepted botanical source, and different source species may have different components and, hence, different effects. In fact, 11 CMs in this review had more than one source species (28.95% of the total CMs). Furthermore, because similar Chinese names can denote very different sources, CMs whose species were not given or could not be determined were excluded from this review. For example, a paper describing only Sha Shen (沙参) was not considered, because there are two different sources of Sha Shen: one is named Bei Sha Shen (Glehniae Radix in English, 北沙参 or North Sha Shen), which is the root of Glehnia littoralis Fr. Schmidt ex Miq. (Family: Umbelliferae) (Committee of Chinese Pharmacopeia, 2015a), while the other is named Nan Sha Shen (Adenophorae Radix in English, 南沙参 or South Sha Shen), which is the root of Adenophora tetraphylla (Thunb.) Fisch. or Adenophora stricta Miq. (Family: Campanulaceae) (Committee of Chinese Pharmacopeia, 2015b). These share similar Chinese names but are different species, which may result in different pharmacological/toxicological effects. For the same reason of standardization, the process of CM preparation and the components of formulae must be verified; otherwise, the active components and pharmacological effects would differ and cause confusion. Therefore, factors that would decrease the quality of research were excluded.
In addition, although the prevention and treatment efficacy of CMs on RIPI was promising, few details of ethical protocols were described in the reviewed studies. This is a very important aspect of patient safety in clinical trials. With regard to publications, 191 out of 199 were excluded because of the low quality of the studies, including 20 articles with neither ethical approval nor a declaration of patients' agreement or signed informed consent. Only three clinical studies investigated the VC, FVC, and FEV1 in the treatment cases, the most important indices of pulmonary function.
Another critical issue of CMs is their toxicity and side effects. Herbal medicines have attracted more attention since aristolochic acid-induced nephropathy was reported (Cosyns, 2003). Although no toxicity was reported in the studies reviewed here, some CMs have been reported to induce irritation (Zhu et al., 2012; Zhong et al., 2006) and to be toxic to the endocrine system (Arbo et al., 2009), reproductive system (Tian et al., 2009), liver (Bilgi et al., 2010; Zhang et al., 2009), digestive system, and nervous system (Xu et al., 2018; Chaouali et al., 2013), among others. The potentially toxic CMs are listed in Table 2, in which herb-induced liver injury (HILI) can be assessed using the CIOMS/RUCAM scale (Fu et al., 2018). Furthermore, according to the Chinese Pharmacopeia, which grades toxicity as "very toxic," "toxic," or "slightly toxic," Armeniacae Semen Amarum is identified as "slightly toxic" (Table 2), and even nominally safe CMs may become harmful if used irrationally. Unfortunately, the maximal tolerance and LD50 are not standardized for many CMs as they are in Western medicine. In this review, the papers mentioned the safety of CMs but provided no specific methods, designs, or indices for an LD50 investigation. Nevertheless, we would like to sensitize medical staff to the potential toxicity of CMs.
CONCLUSION
In conclusion, although CMs might play a promising role in RIPI prevention and treatment, there is still a long way to go for CMs in RIPI. Further high-quality investigations into species, pharmacological effects and underlying mechanisms, and ethical and toxicological aspects are warranted.
AUTHOR CONTRIBUTIONS
XBW, XHW, and FC designed the study. YD, YCL, HL, and MLiu retrieved the literature. YD and YCL screened and double-checked the literature; YD and XBW analyzed the data and wrote the manuscript. MLi and YL polished the manuscript. All authors read and approved the final version.
Glial Factors Regulating White Matter Development and Pathologies of the Cerebellum
The cerebellum is a brain region that undergoes extremely dynamic growth during perinatal and postnatal development, which is regulated by proper interaction between glial cells and neurons through a complex concert of growth factors, chemokines, cytokines, neurotransmitters, and transcription factors. The relevance of cerebellar functions not only for motor performance but also for cognition, emotion, memory, and attention is increasingly being recognized and acknowledged. Since perturbed circuitry of cerebro-cerebellar trajectories can play a role in many central nervous system pathologies and thereby contribute to neurological symptoms in distinct neurodevelopmental and neurodegenerative diseases, the aim of this mini-review is to highlight the pathways of glia-glia interplay involved. Future treatment strategies may hence be targeted to molecular pathways that also play a role in development and disease of the cerebellum.
Introduction
The involvement of the cerebellum in higher processes of cognition and emotion [1,2] and its relevance as a locus for a range of disorders and diseases make this simple yet elusive structure an important model in a number of fields. Cellular and anatomical dysfunction of the cerebellum has been associated with psychological disorders, such as autism, attention deficit, hyperactivity, or schizophrenia [3-8]. In recent years, our understanding of some of the more familiar aspects of cerebellar growth, such as its territorial allocation and the origin of its various cell types, has undergone major recalibration. Furthermore, owing to its conserved circuitry across species, insights from comparative studies have contributed an increasingly rich picture of how this system develops. During fetal and postnatal development, the cerebellum undergoes dramatic morphological and structural changes, manifested by increased mass and a 30-fold increase in surface area during the last trimester of pregnancy [9]. The regulation of its complex and dynamic development is driven by glia-glia and glia-neuron interactions, which produce a wide variety of factors and molecules for interactive signal transmission [10]. Proliferation and migration of neural progenitor cells in the external granular layer (EGL), as well as the proliferation of immature glial cells, are characteristic of late fetal and early postnatal development of the cerebellum. All of these processes are largely influenced or directed by the activity of Purkinje cells [11-13] together with Bergmann glia [14-16], by glia-glia [17-19] as well as glia-neuron interactions [20-22], mainly through signaling via growth factors, chemokines and cytokines, transmitters, and transcription factors. This review seeks to highlight shared mechanisms of glial cell regulation that are relevant for development and disease of the cerebellar white matter and that may serve to design future strategies for protection.
Astrocytes
Astrocytes have a central role as supporting cells for neurons and oligodendroglia during brain development. Moreover, they represent a highly reactive cell population in numerous central nervous system (CNS) pathologies. Because of their importance in repair and recovery in neurological diseases, it has been suggested to use stem cell- and progenitor cell-derived astroglia for cell-based therapy, e.g., in patients suffering from stroke, Alzheimer's disease, spinal cord disease, and others [23]. The structural and functional integrity of myelinated axons is critical for their reliable and efficient transmission of information, and white matter injury has been associated with the development of many demyelinating diseases. Despite a variety of scientific advances aimed at promoting re-myelination, their benefit has proven to be marginal at best. Research suggests that the failure of the re-myelination process may be the result of an unfavorable microenvironment. Astrocytes are the most abundant and diverse type of glial cell in the CNS and regulate cells of the oligodendrocyte lineage in diverse ways. As such, much attention has recently been drawn to astrocyte function in white matter myelin repair. White matter astrocytes differ from those in gray matter with regard to development, morphology, location, protein expression, and other supportive functions. During demyelination and remyelination, the functions of astrocytes are dynamic in that they can change in response to distinct stimuli or reactive pathways, resulting in vastly different biologic effects. Their effects on oligodendrocytes and other cell types in the oligodendrocyte lineage include serving as an energy supplier, a participant in immunological and inflammatory functions, a source of trophic factors and iron, and a sustainer of homeostasis. As such, the ability to manipulate astrocyte function represents a novel therapeutic approach to repair the damaged myelin that occurs in a variety of white matter-related disorders [23]. The properties of astroglia that are useful for neuroprotection are largely attributed to anti-oxidative properties, stabilization of glutamate homeostasis, and growth factor synthesis. In the cerebellum, astroglial cells are classified into main groups based on morphology: fibrous astrocytes located in the white matter, stellate multipolar (protoplasmic) astrocytes located in the granular cell layer, and Bergmann glia (BG), specialized astrocytes derived from radial glia that are located between the Purkinje cell layer and the molecular layer (Fig. 1). Developmental roles of astrocytes, particularly involving interactions with neurons, have been the subject of a recent review [24].
Oligodendrocytes
Oligodendrocyte development is strongly dependent on proper interaction with other types of glial cells, i.e., astroglia and microglia [17]. The establishment of the glial network represents an important step in healthy brain development [25]. Specifically, glia-derived growth factors regulate the survival, proliferation, and maturation of glial cells and strongly influence the maturation and development of oligodendrocytes as well as myelination [26-28]. Cell culture experiments show that oligodendroglial cultures in astrocyte-conditioned medium survive and proliferate considerably longer than in microglia-conditioned medium [29]. In contrast, microglia-conditioned medium was reported to promote oligodendroglial differentiation and myelination, owing to the different patterns of cytokines and growth factors in the individual media [29]. The specific composition and the timing of certain cytokine and chemokine signals appear essential for inducing either proliferation, to expand the cellular pool during growth, or maturation and network establishment (Fig. 2).
Microglia
Microglia are the cells of the immune system in the CNS and make up about 10% of the total glial cells within the nervous tissue [30]. In the cerebellum, they are distributed over the white matter and the cortical layers during development.
In the human embryo, colonization of the forebrain by microglia occurs at around 5 gestational weeks, while in rats this event takes place at embryonic day 11 [31]. Ramification, a process of microglial maturation, occurs in the human mesencephalon between 11 and 22 gestational weeks, whereas in the cerebellum the immature ameboid shape remains the predominant microglial phenotype [32]. Cerebellar microglia-Purkinje neuron interactions demonstrate properties distinct from those of cortical microglia [33]. Recent insights underline roles of microglia in neurite growth, synaptic pruning, spinogenesis, and neuronal apoptosis during brain development [34-36]. Following experimental demyelination in rodents, oligodendrocyte precursor cells (OPCs) proliferate and differentiate into myelin-producing oligodendrocytes, which effect robust remyelination. In contrast, remyelination in multiple sclerosis, the major human demyelinating disease, is generally limited and transient. Rodent OPCs have been well characterized in vitro, and their responses to growth factors have been documented. Several growth factors known to affect rodent OPCs were tested and found to have similar effects on human cells: PDGF, neurotrophin 3 (NT3), and glial growth factor 2 (GGF2) promoted proliferation, while insulin-like growth factor-1 (IGF-1) exerted a maturational effect [28]. Microglia can induce apoptosis of Purkinje neurons in vitro [37]. In the cerebellum, microglial functionality is needed for the elimination of excess climbing fibers and for proper GABA transmission by Purkinje cells [38]. Microglia have important functions in the maturation and development of oligodendrocytes: they secrete IGF-1 and thus support the proliferation and maturation of OPCs [17,28]. In addition, increased IGF-1 stimulation protects immature oligodendroglia against damage triggered by inflammatory processes [29]. Pro-inflammatory, activated microglia, however, interfere with the development of oligodendrocytes. Immature oligodendrocytes and OPCs are vulnerable to microglia-induced inflammatory processes, and their survival is reduced by activated microglia. In contrast, activated microglia enhance the survival of mature OLs and reduce their apoptosis [39].
In the immature brain, exposure to IL-1β can cause acute white matter injury [26] and lead to persistent hypomyelination [40]. Microglial contribution to white matter damage via pro-inflammatory responses has also been described in models of inflammatory neonatal brain injury and in multiple sclerosis models [41,42]. IL-1β has also been demonstrated to interfere with GABA and glutamate transmission in Purkinje cells [43]. Like neurons, glial cells are vulnerable to non-physiological glutamate concentrations. All three types of glial cells express different glutamate receptors and transporters. Oligodendrocytes are very sensitive to excessive activity of the glutamate signaling pathway. Microglia are stimulated by elevated glutamate concentrations, leading to the synthesis of inflammatory cytokines. Astrocytes are responsible for glutamate uptake in synaptic and non-synaptic areas and represent the most important regulators of glutamate homeostasis [44]. In addition, they produce 90% of the brain-derived lactate [45], which is an important source of energy for oligodendrocytes during myelination [46].
Cerebellar Pathologies as a Result of Disrupted Glial-Neuronal Interaction
The interaction between glial cells and neurons is essential for brain development. It is reflected in the secretion and degradation of neurotransmitters, stimulation by growth factors, and cell-cell contact, all of which influence the proliferation, maturation, migration, and survival of glial cells and neurons [11,47-52]. In both the developing and the adult brain, the function of glial cells has been described to influence and regulate neuronal activity [53].
Astrocytes act as important partners in the development of neurons. In addition to maintaining homeostasis through the uptake and breakdown of neurotransmitters [44] and supplying nutrients to neurons [54], they are crucially involved in the formation and maturation of synapses [55]. They also support the outgrowth of axons and dendrites as well as the migration of immature neurons [56]. This mutual interaction can be controlled by neurons through the release of growth factors and neurotransmitters [57].
During development, Bergmann glia, Purkinje cells (PC), and granule cells contribute to the formation of the cerebellar cortex. An average of eight Bergmann glia are in close contact with a PC, thus promoting differentiation, synaptic training, and the transmission of neurotransmitters [58]. The maturation of Bergmann glia is in turn influenced by PCs: the expression of SHH by Purkinje cells stimulates the maturation and differentiation of Bergmann glia [59]. In addition, SHH influences the secretion of gliotransmitters by astrocytes [22] and thus indirectly influences the stimulation of other cell populations by astrocytes.
In agreement with this, ablation of astrocytes and Bergmann glia leads to malalignment of Purkinje cells and, moreover, to diminished outgrowth of dendrites and increased apoptosis of granule cells. BDNF secreted by astrocytes, for example, diffuses poorly over long distances, so local secretion is crucial [60]. Studies have shown that astroglia express both BDNF and the BDNF receptor [19]. BDNF production by Bergmann glia is directly involved in the migration of immature GCs from the EGL into the internal granular layer (IGL) [61].
In addition to astrocytes, microglia also express BDNF during brain development [21]. Microglia of the cerebellum can modulate synaptic circuitry and synaptic activity between GCs and Purkinje cells through the secretion of BDNF [62]. Microglia may exert neuroprotective properties for cerebellar neurons; however, activated microglia can also be toxic to immature and mature neurons [63].
It has been proposed that synergy between GABAergic synapses and astrocytic processes is limited to Bergmann glia in the cerebellum [64]. Indeed, microglia express the GABAB receptor. GABA has a modulating effect on microglia and can attenuate or block their activation and the concurrent release of pro-inflammatory cytokines and phagocytic actions [65].
A fundamental and almost symbiotic co-existence of two distinct cell types of the brain can be seen in the intimate interaction between oligodendrocytes and neuronal axons. The formation of a myelin sheath around nerve fibers by oligodendrocytes is critical for efficient and low-energy stimulus transmission [66]. Electrical transmission itself represents a key signal for oligodendroglia to initiate and enhance the wrapping of axons with myelin [67,68]. Beyond myelin synthesis, oligodendroglia (OLs) exert further influences on neuronal axons: it is assumed that OLs provide neurons with additional nutrients via their axons, and inhibition of nutrient transport by oligodendrocytes leads to the degradation of axons and neurons [69].
During development, the interaction of neurons with oligodendrocytes and their precursors plays an important role. Only through contact with an axon is the final maturation of OLs initiated [66]. Oligodendrocytes express receptors for various neurotransmitters, such as the AMPA receptor [70], the NMDA receptor [71], and the GABAA and GABAB receptors [51,72]. Blockade of the release of synaptic vesicles and neurotransmitters leads to impaired myelination [67]. In particular, stimulation with GABA is important for the development of OLs [50,73]. The proliferation, maturation, and migration of immature oligodendrocytes are regulated by GABA; in the first postnatal weeks, stimulation with GABA may be crucial for the development of OLs [74].
Decreased myelination has a major impact on the function and maturation of neurons. In a model of oligodendrocyte ablation with absent myelination, the interaction in the cerebellum between Purkinje cells and the immature progenitors of the granule cells in the EGL is disrupted. The reduction in myelination is also associated with altered maturation and morphology of PC dendrites [75]. Hence, impairment of one factor relevant to neuron-glia crosstalk may in fact lead to dysregulation of multiple signaling pathways between neurons and glial cells, disrupting development of the cerebellum in multiple ways.
Cerebellar Glial Cell Alterations in Diseases
Many diseases involve glial changes in the cerebellum, such as ataxia, leukoencephalopathy, autism and attention-deficit/hyperactivity disorder (ADHD), multiple sclerosis, and hypothyroidism, all of which characteristically involve severe glial dysfunction (Table 1).
Glial Inflammation Disorders
When glia are activated, inflammation is amplified by the secretion or expression of inflammatory cytokines, chemokines, or inducible nitric oxide synthase (iNOS) [76]. The molecules released after glial activation can promote inflammation or exert anti-inflammatory properties. Astrocyte-specific changes analyzed by transcriptomics include decreased cholesterol biosynthesis and increased immune-pathway gene expression [77]. Astrocyte endfeet contain aquaporin 4 (AQP4), which contributes to regulating the junctional exchange of ions with blood vessels [78]. Among proinflammatory molecules, AQP4 has an important role in controlling brain edema, as it is one of the most abundant water channels controlling water influx into the brain parenchyma [79]. Among anti-inflammatory molecules, TGFβ, responsible for controlling neuroinflammation, is one of the cytokines upregulated after glial activation [80], as are some neurotrophic factors that are released by astrocytes and microglia after inflammation and are responsible for neuronal protection [81].
Multiple Sclerosis (MS)
AQP4 is one of the most important proinflammatory molecules expressed in the cerebellum; although its expression level is extremely low in the first postnatal week, it dramatically increases in the second week [82]. In progressive MS, cerebellar lesions frequently present as demyelination in white and gray matter regions [83-85]. Reactive astrocytes are a common feature of MS demyelinating lesions, with observed damage to astrocyte endfeet [86]. In an experimental autoimmune encephalomyelitis (EAE) model relevant to MS, the AQP4 increase in the cerebellum was associated with blood-brain barrier (BBB) disruption through decreased tight junction proteins such as occludin [87].
In the acute phase of the EAE model, there is glutamate-mediated synaptic excitability and neurotoxicity due to the astrocytic release of the proinflammatory cytokine interleukin-1β (IL-1β) [88,89]. Systemic exposure to this cytokine has been linked to hypomyelination and microglial activation in a perinatal inflammation model [90]. Hence, glial IL-1β may play a central role in microglial activation and glutamate excitotoxicity in inflammatory diseases of the cerebellum, too.
In a MOG-induced EAE model, demyelination is associated with increased release of IFNβ by microglia, and an increased density of IFNβ+ microglia is found around white matter lesions [91]. As a therapeutic agent, IFNβ represents a widely used treatment regimen for patients with relapsing-remitting MS (RRMS) [92] and shows efficacy by reducing disease progression and the frequency of exacerbations. In animal experiments, induction of endogenous IFNβ by polyinosinic:polycytidylic acid [poly(I:C)] treatment diminished the severity of EAE, whereas genetic deletion of IFNβ or its receptor enhanced the clinical score, with more extensive CNS inflammation and demyelination [93]. Treatment with IFNβ also reduced axonal damage in a cerebellar slice culture assay with LPS stimulation [94].
Ataxia
One of the main conditions involving astroglial inflammation of the cerebellum is ataxia, or lack of coordination. Ataxia is associated with many neurological conditions, such as stroke, brain tumor, multiple sclerosis, traumatic brain injury, toxicity, infection, or congenital cerebellar defects [95]. In particular, the spinocerebellar ataxias (SCAs) are a group of hereditary ataxias characterized by degenerative changes in the cerebellum. Mutations in many different genes are known to cause the different types of SCA [96]. Among the spinocerebellar ataxias, type 1 (SCA1) is the best-known autosomal dominant neurodegenerative disease, caused by abnormal expansion of CAG repeats in the coding region of the Ataxin 1 gene [97]. Cvetanovic et al. [98] described astrocytic and microglial activation as an underlying cause of SCA1, which is characterized by the loss of Purkinje neurons in the cerebellum. In that study, it was proposed that Bergmann glial reactivity, signaling through NF-κB, may be responsible for Purkinje cell pathology during SCA1, because of their location and intimate interaction [99]. Furthermore, Ferro et al. [97] found that inhibition of NF-κB in microglia of SCA1 mice decreased microglial density and TNFα expression.
In spinocerebellar ataxia type 3 (SCA3), in which abnormal CAG repeats are localized in the coding region of the gene encoding ataxin-3, there is upregulation of matrix metalloproteinase 2 (MMP-2), interleukin-1, and the chemokine stromal cell-derived factor 1α (SDF-1α) due to astroglial and microglial inflammation [100], causing abnormalities in Purkinje cells. Recently, it has been suggested that antisense oligonucleotides (ASOs) may serve as a potential therapeutic approach for SCA3 [101].
Autism Spectrum Disorder
A psychiatric condition that seems to be related to glial cell inflammation in the cerebellum is autism spectrum disorder (ASD), which begins during early childhood development and is influenced by genetic and environmental factors. The cerebellum has been described as a brain region of particular relevance for ASD and for some of the characteristic symptoms of the disorder. It has been suggested that cerebro-cerebellar connectivity is aberrant in ASD patients [102,103]. Available research suggests that chronic neuroinflammation may represent a substantial pathogenic influence in the disease. Altered expression of pro-inflammatory cytokines and chemokines, such as IL-1, IL-6, macrophage migration inhibitory factor (MIF), and platelet-derived growth factor (PDGF), has been demonstrated in the peripheral blood or brain tissues of ASD patients [104]. The relevance of systemic inflammation for ASD symptoms is also revealed by the successful treatment of children diagnosed with ASD using autologous stem cell infusions, which resulted not only in an impressive reduction of symptoms [105] but also in a reduction of serum cytokine levels [106]. Dysregulated inflammatory activity in glial cells of the CNS, and specifically in the cerebellum, may therefore represent a therapeutic target in ASD.
Neuron-Glia Interaction Disorders
The role of neuron-glia interaction in neurodegenerative disorders remains incompletely understood. The cerebellum, owing to its simple anatomical organization and well-characterized circuitry, can be a useful model for approaching disorders of neuron-glia interactions [107].
Attention-deficit/hyperactivity disorder (ADHD) is a behavioral and developmental neurological disorder characterized by motor hyperactivity and loss of impulse control, combined with attention deficits and hampered academic performance [108]. A link to cerebellar pathologies has been revealed in clinical studies showing decreased cerebellar volume in ADHD patients [109]. In G protein-coupled receptor kinase-interacting protein 1 (GIT1) knockout mice, a genetically modified ADHD model, a decrease in GABA levels in astrocytes of the cerebellum enhances the excitatory/inhibitory input ratio, leading to motor hyperactivity. However, the mechanism of this GABA reduction is still unknown [110,111].
Spinocerebellar ataxia type 7 (SCA7) is an autosomal dominant inherited neurodegenerative disorder with a polyglutamine (polyQ)-expanded protein in nuclear inclusions and CAG trinucleotide repeats in the coding region of Ataxin-7 [112]. Indeed, it has been shown that polyQ-expanded ataxin-7 interferes with the function of GLAST, a glia-specific glutamate transporter that is highly expressed in Bergmann glia, causing Purkinje cell excitotoxicity [106].
Oxidative Stress in Cerebellar Glial Cells
Cerebellar damage in very immature infants can range from subtle (generalized delay of tissue development and maturation in response to oxidative stress and/or systemic perinatal inflammation) to severe (bleeding after rupture of immature vessels, leading to focal lesions and parenchymal cysts as sequelae). In a newborn rodent model, the great vulnerability of the immature cerebellum to oxidative stress has been characterized by maturational delay in oligodendroglial lineage cells, hypomyelination, and inflammatory changes in microglia [113].
Ischemia
After brain ischemia, as a response to inflammation, reactive oxygen species (ROS) are generated in the neonatal and adult brain. Among the many ROS producers, the most important appear to be NADPH oxidase (NOX) as the main superoxide producer [114]; xanthine oxidase (XO), which contributes to brain edema; and intracellular enzymes such as cyclooxygenases (COXs), lipoxygenases (LOXs), and cytochrome P450, which are involved in arachidonic acid metabolism, a major superoxide source during ischemic stroke [76]. Moreover, the mitochondrial electron transport chain is another important ROS source in the neonatal and adult brain. During reperfusion after ischemia, a massive increase of intracellular Ca2+ influx may be induced, and Ca2+ accumulation in the mitochondria can provoke free radical production, impairment of mitochondrial membrane permeability, and inhibition of ATP production [115]. Particularly in the cerebellum, during oxygen-glucose deprivation (OGD), anoxic depolarization of Purkinje cells in cerebellar slices evokes glutamate release and AMPA receptor activation. Indeed, this glutamate release has been proposed to be regulated by glial pH changes [116]. Moreover, after OGD, Bergmann glial cells show increased intracellular Ca2+ influx and membrane depolarization due to increased extracellular K+ concentration, with an outflow of anions through DIDS-sensitive channels [117].
Postnatal Hyperoxia
In utero, arterial oxygen tension is maintained at low levels, but premature birth provokes an increase in arterial oxygen tension upon exposure to the ex utero environment [118]. Scheuer et al. found in 2015 that increased levels of nitrotyrosine in cerebellar lysates correlated with a cerebellar volume deficit, increased apoptosis in oligodendroglial precursor cells (OPCs), and a significant in vivo reduction of astroglial PDGF-A, BDNF, and FGF2 that may contribute to oligodendroglial maldevelopment. After hyperoxia, ultrastructural analysis by electron microscopy indicated thinning of the myelin sheath around axons. In those experiments, markedly reduced PDGF-A expression was found in the cerebellum. The reduction of PDGF-A expression by high oxygen levels was confirmed in purified astrocyte cultures in vitro, suggesting impairment of astroglia-oligodendroglia crosstalk as a cause of cerebellar injury [118]. However, astroglial morphology and GFAP expression were not affected by hyperoxia. Consistent with delayed maturation of microglia in the cerebellum, most Iba1+ microglia in the cerebellar white matter showed ameboid morphology in postnatal rat cerebella under both control and hyperoxia conditions. There were otherwise no obvious hyperoxia-induced changes in microglial morphology or antigen presentation in the cerebella of hyperoxia animals.
Certain compounds can also induce oxidative stress in the cerebellum, such as phytanic acid (3,7,11,15-tetramethylhexadecanoic acid; Phyt). Phyt is a chlorophyll-derived fatty acid obtained from dairy products, such as milk and cheese, and from red meat. This fatty acid accumulates in several peroxisomal disorders. In the cerebellum in particular, it can induce histopathological abnormalities, including Purkinje cell alterations with cellular loss and delayed dendrite development, as well as astrogliosis due to disruption of redox homeostasis. Indeed, in a mouse model of intracerebellar Phyt administration, reactive nitrogen species were increased [119], indicating potential risks to cerebellar integrity.
Postnatal Hypoxia
In a perinatal brain injury model, chronic hypoxia during the first weeks of postnatal development leads to hypomyelination of the subcortical white matter [120]. Oligodendroglial damage has also been described in the cerebellum; altered development of the cerebellar white matter after chronic hypoxia is caused, at least partially, by the loss of GABAA receptor-mediated synaptic input to cerebellar OPCs, which enhances OPC proliferation and reduces oligodendroglial maturation and myelin synthesis [73].
Targets for Potential Therapy
Brain diseases often involve inadequate homeostasis in neuronal and glial cells. In astroglia, pathogenic changes can be found in diverse processes, e.g., glutamate uptake, neurotrophins, growth factors, transcription factors, antioxidative capacity, and transmitters, as mentioned above. Consequently, these factors and pathways offer treatment opportunities via prevention of toxicity or via activation of mechanisms of protection and repair.
Inflammation Regulation
NF-κB is a key transcription factor implicated in neuroinflammation and may mediate events in cerebellar astrogliosis. Indeed, during inflammation, NF-κB is activated and IKK is phosphorylated [121]. This activation produces neurotoxic and inflammatory molecules that contribute to different diseases. With regard to SCA1, one of the main diseases related to astroglial inflammation in the cerebellum, it has been suggested that NF-κB signaling is stage-dependent and that its activity occurs only in the late stages of SCA1 [121]. Moreover, Kim and co-workers [99] performed selective inhibition of NF-κB in astroglial cells, which in early stages in fact increased motor deficits, Purkinje cell pathology, and microglial density. With inhibition in late stages, however, SCA1 motor deficits were ameliorated, accompanied by better rotarod performance and decreased microglial density. Interestingly, GFAP expression was decreased by inhibition of NF-κB in early stages but increased in late stages, indicating that the astroglial NF-κB pathway is beneficial during the early, pre-symptomatic stage of the disease, whereas its inhibition during the late stage also has beneficial outcomes in SCA1 [121].
Minocycline
Neuroprotective properties of this antibiotic have been demonstrated in different brain injury models, including hypoxia-ischemia [122-124], perinatal inflammation/infection [125], and hyperoxia [113]. The mechanisms by which minocycline exerts its benefits have largely been ascribed to inhibition of microglia. In the immature brain, inhibition of microglia may in fact perturb neuronal development and survival [126]. Toxic effects have been reported to vary with species; in mice, for example, minocycline enhances brain injury caused by hypoxia-ischemia [127]. Extensive safety tests are therefore required. In an oxidative stress challenge, protection by minocycline coincided with attenuation of oxidative stress and apoptotic cell death [113], supporting previous results on the anti-oxidant and anti-apoptotic effects of this drug [128].
Oxidative Stress Modulation
Glutamate neurotoxicity is directly associated with ROS production and consequently with oxidative stress [129,130]. Amburana cearensis, a species of the family Fabaceae, has been observed to have antioxidant properties in the cerebellum, increasing the levels of the enzymes glutathione reductase and glutathione peroxidase. These control the intracellular signaling cascade of glutamate excitotoxicity, which stimulates calcium influx and mitochondrial dysfunction, thereby minimizing glial and neuronal cell death. In cerebellar astrocyte-derived cell cultures, antioxidant compounds from Amburana cearensis increase glutamine synthetase activity, which reduces glutamate neurotoxicity in astrocytes. Another compound important for redox balance in the cerebellum is docosahexaenoic acid (DHA), the most abundant n-3 fatty acid in the brain, derived from fish. DHA is essential for normal brain function, and astrocytes are responsible for DHA synthesis [131,132]. Indeed, it has recently been suggested that supplementation with DHA can be an effective treatment for spinocerebellar ataxia 38 (SCA38), a syndrome caused by mutation of the ELOVL5 gene, which encodes an elongase responsible for the synthesis of very-long-chain fatty acids in the cerebellum [133].
Growth Factors
The protection of cerebellar white matter development by minocycline was associated with improved PDGF-A expression in vivo and in astrocyte cultures in vitro, underlining a role for astroglial PDGF-A in both injury and protection in the cerebellum. Moreover, intranasal administration of PDGF-A after exposure to an oxygen challenge resulted in enhanced proliferation of oligodendroglial lineage cells in the cerebellar white matter [134], strengthening the view of growth factor synthesis as a target for protective treatment after postnatal insult.
In the chronic hypoxia model of white matter damage in the immature brain, overexpression of the human epidermal growth factor (EGF) receptor in oligodendroglial lineage cells after injury attenuates oligodendroglial cell death, increases the generation of new oligodendroglia from progenitors, and initiates recovery [135]. Moreover, intranasal administration of heparin-binding EGF during recovery after exposure to hypoxia enhanced the OPC pool and oligodendroglial maturation and also diminished ultrastructural pathologies and behavioral deficits. Hence, targeting the EGF receptor in oligodendrocyte progenitor cells during a certain time window is potentially beneficial for the treatment of preterm infants with white matter damage. Although these investigations were performed in the cerebrum/forebrain, a similar therapeutic effect of EGF administration on oligodendroglial maturation during postnatal development can be assumed to occur in the cerebellum, too.
GABA Modulation
Balancing excitatory and inhibitory synaptic transmission is necessary for proper brain function. One of the main inhibitory neurotransmitters is γ-aminobutyric acid (GABA), which is involved in neural tissue development. It has been suggested that treating mice with a GABAA receptor antagonist mimics the effects of hypoxia, whereas blockade of GABA uptake reduces NG2 progenitor cell numbers and increases the formation of mature oligodendrocytes [73,136]. Recently, Woo et al. [137] suggested that manipulating the levels of astrocytic tonic GABA in the cerebellum, and in particular in Bergmann glial cells, modulates neuronal excitability and synaptic transmission in the cerebellum. Moreover, pharmacological inhibition of Bestrophin 1 (Best1), a channel that mediates GABA release from Bergmann glial cells, or of the mitochondrial enzyme monoamine oxidase B (MAOB), which is responsible for GABA synthesis in astrocytes, increases neuronal excitability in cerebellar granule cells, synaptic transmission, and motor performance on the rotarod test. Conversely, increased astrocytic GABA release resulted in reduced motor activity, indicating that astrocytes are a key component modulating GABA function and, consequently, motor activity [137].
Conclusions
The cerebellum is a brain region involved in many complex brain functions, such as coordination, cognition, memory, and emotion. In several neurodevelopmental and neurodegenerative diseases, damage to the cerebellum contributes to the overall neurological symptoms. Given the fundamental role of glial cell types and glia-glia interactions in development, disease, and repair in the cerebellum, it is reasonable to target specific properties and functions of these cells for therapeutic purposes. For future investigations, growth factors such as PDGF-A and EGF, homeostasis of transmitters such as GABA and glutamate, and various anti-oxidants and inflammatory modulators together represent a promising list of candidates for cerebellar protection.
Acknowledgements Open Access funding provided by Projekt DEAL. Funding was provided by Deutsche Forschungsgemeinschaft (Grant Nos. SCHM3007/3-2 and SCHE2078/2-1), Institute of Education, University and Research of the Basque Government (Grant No. POS_2017_1_0095) and Förderverein für Frühgeborene Kinder an der Charité e.V.
Coronary Microvascular Dysfunction and Hypertension: A Bond More Important than We Think
Coronary microvascular dysfunction (CMD) is a clinical entity linked with various risk factors that significantly affect cardiac morbidity and mortality. Hypertension, one of the most important of these, causes both functional and structural alterations in the microvasculature, promoting the occurrence and progression of microvascular angina. Endothelial dysfunction and capillary rarefaction play the most significant roles in the development of CMD among patients with hypertension. CMD is also related to several hypertension-induced morphological and functional changes in the myocardium in the subclinical and early clinical stages, including left ventricular hypertrophy, interstitial myocardial fibrosis, and diastolic dysfunction. This indicates that CMD, especially if associated with hypertension, is a subclinical marker of end-organ damage and heart failure, particularly that with preserved ejection fraction. This is why it is important to search for microvascular angina in every patient with hypertension and chest pain not associated with obstructive coronary artery disease. Several highly sensitive and specific non-invasive and invasive diagnostic modalities have been developed to evaluate the presence and severity of CMD and also to investigate and guide the treatment of additional complications that can affect further prognosis. This comprehensive review provides insight into the main pathophysiological mechanisms of CMD in hypertensive patients, offering an integrated diagnostic approach as well as an overview of currently available therapeutic modalities.
Introduction
Hypertension represents a massive global health issue and is one of the most important cardiovascular risk factors. Hypertension-mediated organ damage (HMOD) is common in patients with severe or long-standing hypertension and is also prevalent in less severe hypertension, even in asymptomatic individuals with elevated blood pressure [1]. It is important to note that at any given blood pressure category above the normal or optimal, the presence of HMOD is associated with a 2- to 3-fold increase in the cardiovascular risk [2]. Up to 40% of newly diagnosed hypertensive patients already have HMOD, predominantly functional and structural alterations of heart, kidneys, eyes, brain, and peripheral arteries [3].
Hypertension is a well-established risk factor for the development of coronary microvascular dysfunction (CMD) [4]. The constant high pressure within the larger arteries can lead to damage and remodeling of the smallest arteries and arterioles in the microcirculation, capillaries, and venules, affecting their ability to regulate blood flow [5]. This leads to structural and functional remodeling of the coronary microcirculation, in which endothelial dysfunction is one of the most important pathogenetic mechanisms [6]. The endothelium plays a crucial role in regulating blood vessel tone and controlling blood flow. In hypertensive individuals, endothelial dysfunction significantly contributes to the development of CMD, which progressively leads to increased resistance in the coronary microcirculation and limited blood flow, causing a reduced oxygen supply to the myocardium [7]. This is why the finding of myocardial ischemia as a result of CMD is relatively common in patients with hypertension, especially in patients with hypertensive heart disease (HHD).
Many additional risk factors also contribute to the development of CMD in hypertensive patients, including metabolic syndrome, diabetes mellitus, hyperlipidemia, smoking, and others [8][9][10]. As hypertensive heart disease progresses, left ventricular hypertrophy becomes more pronounced, consequently leading to more severe impairment of the coronary microcirculation. These changes, accompanied by myocardial fibrosis, lead to an increased risk of heart failure with both preserved (HFpEF) and reduced ejection fraction (HFrEF) [11,12]. This is why coronary microvascular dysfunction significantly affects the morbidity and mortality of patients, demanding more purposeful diagnostic and therapeutic algorithms. The purpose of this narrative review is to describe the relationship between coronary microvascular dysfunction and systemic hypertension, as well as its pathogenetic mechanisms, characteristics, and potential role in the development of adverse cardiovascular events, especially heart failure with preserved ejection fraction.
Pathogenetic Mechanisms of Coronary Microvascular Dysfunction
The coronary microcirculation consists of pre-arterioles, arterioles, and capillaries. The main aim of the coronary microvasculature is to match blood supply to myocardial oxygen consumption. Any increase in oxygen consumption leads to increased oxygen demands and, consequently, to an increase in myocardial blood flow (MBF). The main role in the control of myocardial blood flow is played by the pre-arterioles and arterioles, which control arterial diameter and tone. In coronary microvascular dysfunction, various mechanisms involved in this process are disrupted by several factors [Figure 1].
The mechanisms involved in CMD can be structural, functional, or a combination of both [13]. The main pathogenetic mechanisms of coronary microvascular dysfunction in patients with hypertension are still insufficiently researched. Until now, it has been postulated that the pathogenetic basis for the development of CMD involves a variety of mechanisms, including microvascular spasm, endothelial dysfunction, sympathetic overactivity, the influence of female hormones, certain psychological disorders, and others [14,15]. These mechanisms are more likely to cause CMD in susceptible patients with hypertension, hyperlipidemia, obesity, or diabetes mellitus [16]. In patients with hypertension, the development of left ventricular hypertrophy and the subsequent development of myocardial fibrosis and diastolic dysfunction are important mechanisms of CMD due to several functional and anatomical changes in the microcirculation [17]. Maladaptive mechanisms in hypertension, perivascular fibrosis, and the thickening and rarefaction of small vessel walls are responsible for increased microvascular resistance and inappropriate blood flow distribution [18]. Several functional mechanisms are also described as causes of CMD in patients with hypertension, with reduced nitric oxide availability as the most important one [19,20]. It has been shown that chronic renin-angiotensin system (RAS) over-activity, nicotinamide adenine dinucleotide phosphate oxidase, cyclooxygenase, xanthine oxidase, and uncoupled endothelial nitric oxide synthase (NOS), as sources of reactive oxygen species, are the main causes of NO deficiency [21]. Adrenergic activation and prolonged vasoconstriction can also lead to microvascular remodeling and rarefaction, causing ischemia and clinically manifested angina [22,23]. It is also important to note that certain studies registered these microvascular changes even in patients without elevated blood pressure, suggesting that microvascular dysfunction and remodeling can precede the onset and development of hypertension [24,25]. However, this cause-effect relationship needs further investigation.
Microvascular Angina and Endothelial Dysfunction
Endothelial dysfunction is bi-directionally related to systemic hypertension. It has been shown that the endothelium controls vascular smooth muscle tone in response to various agents, as well as participating in the pathogenesis of hypertension by producing different mediators with systemic effects [26]. In patients with hypertension, endothelial dysfunction is mainly characterized by impaired nitric oxide synthesis and availability, as well as prostacyclin (PGI2) and endothelium-derived hyperpolarizing factor (EDHF) deficiency [27]. On the other hand, as a response to reactive oxygen species, increased production of endothelium-derived vasoconstrictors (mainly endothelin-1 and angiotensin-converting enzyme) has been observed [28]. This is subsequently associated with the development of vascular inflammation, vascular remodeling, and atherosclerosis. As a result, vasoconstrictive, pro-inflammatory, and pro-thrombotic mediators cause increased vasoconstrictive microvascular reactivity [29]. This process leads to both functional and structural changes in the microvasculature and the development of microvascular dysfunction. It is important to emphasize that CMD in patients with hypertension is not solely a result of hypertension but a multifactorial disease with a significant impact on cardiovascular morbidity and mortality.
Sex-Related Differences in Patients with Coronary Microvascular Dysfunction and Hypertension
Coronary microvascular dysfunction is more prevalent in women than in men [30]. Early works estimating sex-related differences in the coronary microcirculation revealed lower coronary flow reserve (CFR) values in women, predominantly due to differences in resting coronary flow [31]. This is also related to different mechanisms involved in autonomic regulation and the response to oxidative stress, adenosine, endothelin-1, and angiotensin II [32]. It is also notable that women have a smaller vessel size than men, which can contribute to lower CFR values [33]. Studies on cardiac magnetic resonance revealed specific differences, where women, in comparison with men, had fewer or no associations between the development of CMD and traditional risk factors, including hyperlipidemia, diabetes, smoking, and obesity [34]. This may mainly be an effect of ovarian hormone deficiency, as microvascular angina and estrogen deficiency in hypertensive women have demonstrated an association [35]. In subgroups of both premenopausal and postmenopausal women with hypertension, ovarian dysfunction and consequent estrogen deficiency played a role in the pathogenesis of CMD [36]. Certain psychological factors can also play an important role in the development of coronary artery disease, as well as CMD [37]. It has been demonstrated that psychological stress induces endothelial dysfunction and vasomotor disorders more often in young women than in men [38].
Metabolic Syndrome
Metabolic syndrome includes a cluster of conditions such as central obesity, dyslipidemia, high blood pressure, and impaired fasting glucose, all related to an increased cardiovascular risk [39]. Several studies have demonstrated a correlation of different variables with the presence of microvascular dysfunction in these patients, including age, sex, pulse pressure, fasting glucose, hemoglobin A1c (HbA1c), total cholesterol, low-density lipoprotein (LDL) cholesterol, estimated glomerular filtration rate (eGFR), and albuminuria [40]. Among patients with hypertension, those with metabolic syndrome have been shown to have a more severe form of CMD than those without metabolic syndrome. Sucato et al. demonstrated that these patients had worse coronary perfusion than patients with diabetes mellitus [41].
Obesity
Obesity has been linked to chronic metabolic disorders, resulting in poor clinical outcomes. Increased oxidative stress, sympathetic nervous system over-activity, and low-grade systemic inflammation are the main mechanisms of coronary microvascular dysfunction in obese individuals [42]. The metabolic activity of adipose tissue, as well as different cytokines and adipokines, is responsible for reduced NO-mediated dilatation, altered endothelial and smooth muscle-dependent vasoregulation mechanisms, and altered vasomotor control. In patients with hypertension, additional volume overload and cardiomyocyte hypertrophy contribute to vascular remodeling. The thickness of epicardial fat tissue, which reflects visceral adiposity rather than general obesity, is also predictive of an impaired coronary vasodilator capacity [43]. A study by Bajaj et al. demonstrated that obese patients with CMD have a 2.5-fold higher risk of developing adverse clinical events [44]. It has been shown that patients with central obesity have lower values of myocardial blood flow than patients with excess weight and no central obesity. This is important to note, as cardiovascular risk estimation based on the waist-to-height ratio and the presence of central obesity is becoming more prevalent than estimation based on BMI, particularly in the context of the "obesity paradox" and patients with heart failure [45]. Considering the variety of metabolic disorders in the obese population, weight loss and intensified risk-factor control in patients with CMD play an important role in improving angina symptoms, as presented in a study by Bove et al. [46].
Diabetes Mellitus
The key mechanisms of CMD in patients with diabetes are impaired coronary arteriolar vasomotion, including impaired endothelium-mediated vasodilation, hypoxia-induced vasodilation, and myogenic response [47]. It has been shown that hyperglycemia and insulin resistance play central roles in the development of CMD by leading to oxidative stress, inflammatory activation, and endothelial dysfunction [48]. In the later stages of diabetes, structural changes occur. Thickening of the capillary basement membrane and of the arteriolar wall results in luminal narrowing, perivascular fibrosis with focal constriction, and capillary rarefaction. These mechanisms lead to increased coronary microvascular resistance and reduced coronary flow reserve and can cause myocardial ischemia [49]. CMD is common in patients with diabetes and can be present with or without significant epicardial coronary artery disease. Certain studies have shown that more than 70% of patients with type 2 diabetes mellitus have CMD, which can seriously affect future cardiovascular events and prognosis, especially in those with acute myocardial infarction and heart failure [50].
Hypercholesterolemia
Numerous studies have shown that hypercholesterolemia leads to an inflammatory response within the microvasculature, decreased availability of nitric oxide, and increased production of reactive oxygen species (ROS) [51,52]. Endothelial dysfunction and capillary rarefaction are the two most important mechanisms, leading to severe microvascular impairment in different organs and provoking glomerulopathy-induced kidney dysfunction and hypertension, a reduction in coronary flow reserve leading to coronary microvascular dysfunction, and hepatic dysfunction, as in non-alcoholic fatty liver disease [53]. It has been shown that the role of specific vasoactive substances is related to both hypercholesterolemia and hypertension, as well as the development of CMD, predominantly endothelium-dependent microvascular dysfunction. This is representative of the pathway of thromboxane A2, which has an important role in platelet aggregation, vasoconstriction, and proliferation [54]. Certain studies demonstrated that patients with uncontrolled hypertension and hypercholesterolemia had increased thromboxane A2 production, which resulted in excessive vasoconstriction, arteriolar remodeling, and capillary rarefaction [55].
Obstructive Sleep Apnea
Obstructive sleep apnea (OSA) is a condition linked to increased cardiovascular morbidity and mortality [56]. Repetitive episodes of hypoxemia lead to the excessive production of reactive oxygen species, the development of low-grade inflammation, and endothelial dysfunction. It has been shown that patients with moderate to severe obstructive sleep apnea have lower values of CFR [57]. However, the exact influence of OSA on the development and progression of CMD is difficult to isolate, as these patients usually have several other risk factors related to CMD, including hypertension, diabetes mellitus, obesity, and hyperlipidemia.
Smoking
Cigarette smoke is known as the exogenous factor with the most detrimental effects on the endothelium, especially the coronary endothelial system [58]. Various toxic components can cause severe endothelial damage, reduce hyperemic coronary blood flow velocity, and provoke the development of microvascular dysfunction. Regarding the presence of CMD, Gullu et al. demonstrated that smokers without obstructive epicardial coronary disease had significantly lower values of coronary flow velocity reserve (CFVR) than the control group [59]. On the other hand, even in patients with epicardial coronary artery disease, smoking was associated with impaired invasively derived indices of coronary microvascular dysfunction, which can additionally contribute to a worse prognosis [60].
Diagnostics for Coronary Microvascular Dysfunction in Patients with Hypertension
In recent years, several important diagnostic algorithms have been presented regarding CMD that aim to integrate both non-invasive and invasive modalities [61]. The diagnostic algorithm in patients with suspected CMD starts with the exclusion of significant epicardial coronary artery disease. Although CMD can be present in patients with obstructive CAD, the presence of CMD in the absence of obstructive CAD is extremely important to diagnose, especially in patients with additional risk factors for the development of adverse cardiovascular events, primarily heart failure [62]. In patients with microvascular angina, non-invasive diagnostic imaging modalities, primarily echocardiography and cardiac magnetic resonance (CMR), are important for the evaluation of alternative causes of chest pain, including structural and inflammatory conditions [63]. Patients with a negative coronary angiogram, a positive stress test for myocardial ischemia, and additional risk factors for the development of CMD (especially those with hypertensive heart disease) should be considered for non-invasive and invasive investigation of CMD. Conventional echocardiographic stress tests have limited utility in the diagnosis of CMD, as significant inter-observer variability is present in cases with a low to moderate ischemia burden resulting in hypokinesia [64]. The use of echocardiography in detecting coronary microvascular dysfunction mainly relies on myocardial contrast echocardiography and the estimation of myocardial blood flow (MBF) or coronary flow velocity reserve using pulsed-wave Doppler sampling of the proximal left anterior descending coronary artery. Nowadays, CFVR has high diagnostic accuracy and good correlation with intracoronary Doppler wire-based techniques, especially in patients with HFpEF, as demonstrated in the PROMIS-HFpEF trial [65]. Numerous studies have investigated the prognostic significance of CFVR in patients with hypertension, demonstrating an impairment in microvascular vasodilatation capacity even in the early stages of the disease [66,67]. The study by Volz et al. showed that CFVR was significantly lower in patients with resistant hypertension than in individuals with non-resistant hypertension, indicating a more severe impairment of coronary microvascular function that could account for the increased risk of adverse outcomes [66]. The main disadvantages of MBF assessment and CFVR are the presence of artifacts and high inter-observer variability, especially in obese patients and patients with lung disease. However, these methods can be helpful as inexpensive tools in the initial assessment of patients with suspected CMD. In addition to its significant role in the diagnosis of obstructive coronary artery disease, strain assessment is becoming equally important in patients with CMD [68]. Aside from CFVR, novel stress echocardiography protocols incorporate the estimation of global longitudinal strain at rest and at peak stress to increase the sensitivity and specificity of this assessment [69]. A study by Jovanovic et al. demonstrated that resting, peak, and ∆LVGLS were all significantly impaired in female patients with coronary microvascular dysfunction and slow coronary flow [70].
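As a back-of-the-envelope illustration of the Doppler-derived ratio discussed above, the sketch below computes CFVR from resting and hyperemic flow velocities. The function name, the sample velocities, and the 2.0 cut-off applied here are illustrative assumptions rather than values from a specific guideline.

```python
# Illustrative sketch only: CFVR as the ratio of hyperemic to resting
# coronary flow velocity. Sample values and the 2.0 cut-off are assumptions.

def coronary_flow_velocity_reserve(hyperemic_velocity_cm_s: float,
                                   resting_velocity_cm_s: float) -> float:
    """CFVR = peak hyperemic diastolic velocity / resting diastolic velocity,
    typically sampled in the proximal LAD with pulsed-wave Doppler."""
    if resting_velocity_cm_s <= 0:
        raise ValueError("resting velocity must be positive")
    return hyperemic_velocity_cm_s / resting_velocity_cm_s

cfvr = coronary_flow_velocity_reserve(hyperemic_velocity_cm_s=45.0,
                                      resting_velocity_cm_s=25.0)
status = "impaired" if cfvr < 2.0 else "preserved"
print(f"CFVR = {cfvr:.2f} ({status})")  # CFVR = 1.80 (impaired)
```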
Computerized Tomography (CT)
The role of CT coronary angiography is primarily to exclude the existence of significant epicardial coronary artery disease. Recent technical and software advancements provide the possibility to follow the first pass of contrast through the myocardium at frequent intervals and estimate the absolute myocardial flow. Two types of CT myocardial perfusion protocols can be performed, static and dynamic. Static CT myocardial perfusion requires a lower amount of radiation and prospective ECG gating. However, only qualitative and semiquantitative evaluation is possible with this technique. Dynamic CT perfusion allows the estimation of myocardial perfusion in different layers of the myocardium and a complete quantitative myocardial blood flow evaluation, providing evidence of reduced subendocardial perfusion in patients with CMD [71]. Novel techniques combining CTA-derived FFR and the estimation of myocardial perfusion can provide an accurate anatomical and functional assessment of both the myocardium and the coronary circulation within one examination, which can be significant, especially in patients with hypertensive heart disease [72]. Studies that investigated myocardial perfusion and the coronary-volume-to-left-ventricular-mass ratio showed promising results in diagnosing patients with CMD [73]. However, the results in patients with hypertension are controversial. The study by van Rosendal and colleagues demonstrated that patients with hypertension and increased left ventricular (LV) mass did not have reduced coronary vascular volume that could be associated with the presence of an abnormal perfusion reserve [74]. This may also be a result of a predominantly functional impairment of the coronary microcirculation, as well as a lack of estimation of the coronary vasodilator reserve.
Single-Photon Emission Computed Tomography (SPECT)
With recent advancements in high-sensitivity cardiac cameras and radiotracers, dynamic SPECT has found its place in the quantification of myocardial blood flow and the assessment of CMD. Nowadays, iodinated rotenone compounds and solid-state, high-sensitivity cadmium-zinc-telluride detectors can detect the first-pass blood perfusion of a tracer and its extraction into the myocardium. This allows the quantification of myocardial blood flow and myocardial perfusion reserve with better accuracy and fewer artifacts [75]. This protocol results in better spatial resolution and higher sensitivity, resulting in a shorter acquisition time and lower radiation exposure. Zhang et al. demonstrated that quantitative SPECT analysis of myocardial blood flow provides prognostic value in patients with ischemia and no obstructive coronary artery disease (INOCA) [76]. However, although the diagnostic and prognostic significance of SPECT remains inferior to that of PET and CMR, it can provide clinically useful measurements in the absence of the previously mentioned modalities.
Positron Emission Tomography (PET)
The main advantages of PET in the estimation of CMD are global and regional measurements of perfusion, quantitative MBF, and function, both under stress and at rest. By estimating myocardial perfusion during rest and stress, it can accurately estimate the myocardial perfusion reserve (MPR), a value that has an excellent correlation with invasive modalities and also with adverse outcomes [77]. As it can assess both the epicardial and microvascular coronary distribution, PET can improve risk stratification for patients being investigated for ischemia. Studies of patients with hypertension revealed that the "endogen" type of CMD, predominantly related to alterations in resting myocardial blood flow, is more prevalent in these patients [78]. High radiation exposure and cost are the main disadvantages of this method. In comparison to cardiac magnetic resonance, PET lacks the possibility to additionally provide sophisticated myocardial tissue characterization.
Cardiac Magnetic Resonance (CMR)
Cardiac magnetic resonance has an important place in cardiac diagnostics, considering that it is a non-invasive method with which, with high specificity and sensitivity, the existence of both significant epicardial obstructive coronary disease and coronary microvascular dysfunction can be confirmed or excluded. The diagnosis of coronary microvascular dysfunction via CMR can be established by analyzing myocardial perfusion during a stress test in comparison with myocardial perfusion at rest, which in effect evaluates the vasodilatory flow reserve [79]. During the stress perfusion test, various vasodilator agents can be used, including adenosine, regadenoson, or dipyridamole. Stress CMR accurately assesses myocardial ischemia, myocardial viability, and cardiac function, all in one examination. The CMR methods used to evaluate the existence of coronary microvascular dysfunction can be qualitative and quantitative [Figure 2]. A qualitative method of assessment includes visual evaluation of perfusion during stress, whereby a characteristic diffuse subendocardial perfusion defect is observed. The drawback of the qualitative evaluation of the stress perfusion study is its extremely low sensitivity of only 41% and its inability to clearly differentiate between patients who have a pronounced degree of coronary microvascular dysfunction and patients who have multi-vessel CAD, which can also cause a diffuse subendocardial perfusion defect [80]. If coronary angiography was not performed before the stress perfusion test, late gadolinium enhancement (LGE) sequences can be helpful in differentiating coronary microvascular dysfunction from obstructive coronary disease, as zones of the LGE phenomenon are not registered in patients with microvascular dysfunction. Novel CMR diagnostic modalities, myocardial tissue mapping, and extracellular volume fraction (ECV) are important in estimating the presence and degree of interstitial fibrosis, which can be significant in risk stratification, especially in patients with hypertension who have left ventricular hypertrophy, diastolic dysfunction, and consequently an increased risk of HFpEF [81].
Semiquantitative and, especially, quantitative methods of evaluating stress perfusion are used for the definitive assessment. Quantitative methods of assessing coronary microvascular dysfunction can, in addition to establishing a diagnosis, evaluate the severity of the disease, as well as monitor the effect of different therapeutic modalities. New sophisticated and fully automated CMR methods for the analysis of myocardial perfusion enable high diagnostic accuracy, strong prognostic significance, and complete independence from the level of staff training [82]. The basic parameter for the analysis is the value of blood flow through the myocardium (myocardial blood flow, MBF), which is analyzed both at rest (rest perfusion) and under stress (stress perfusion). Patients with a global stress MBF below 2.25 mL/g/min without visual defects in perfusion are likely to have coronary microvascular dysfunction [83]. The ratio of myocardial blood flow under stress to that at rest represents the myocardial perfusion reserve (MPR), whose indexed value (MPRI) is the most sensitive parameter in the diagnosis of coronary microvascular dysfunction [84]. The accuracy of this method can be significantly increased by analyzing the myocardial perfusion reserve in the subendocardial layer (MPRendo), bearing in mind that the subendocardial layer of the myocardium is the most sensitive to ischemia [85]. The values of these parameters can be fully evaluated and quantified using pixelated perfusion maps at the level of individual segments according to the 16-segment model of the left ventricle. This kind of analysis makes it possible to establish a diagnosis with high sensitivity and specificity and also to differentiate the existence of obstructive coronary disease from coronary microvascular dysfunction. Clinically relevant values of the above-mentioned parameters for the diagnosis of coronary microvascular dysfunction can be registered even in the absence of qualitative changes in perfusion. In studies that used a fully quantitative assessment of stress perfusion to diagnose CMD, an excellent correlation was shown with the values of invasively measured coronary flow parameters (dominantly with the value of the coronary flow reserve, CFR) and with the value of the index of microvascular resistance (IMR) [86,87]. In terms of clinical outcomes, stress MBF and MPR/MPRI have been shown to be associated with serious adverse cardiovascular events and mortality [88].
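The arithmetic behind these parameters is simple, as the minimal sketch below illustrates. The rest and stress MBF values are invented; only the 2.25 mL/g/min global stress-MBF threshold and the definition of MPR as the stress-to-rest ratio are taken from the text.

```python
# Minimal sketch of the quantitative CMR perfusion parameters described above.
# Example MBF values are invented; the 2.25 mL/g/min stress-MBF threshold is
# the one quoted in the text.
from dataclasses import dataclass

@dataclass
class PerfusionStudy:
    rest_mbf: float    # myocardial blood flow at rest, mL/g/min
    stress_mbf: float  # myocardial blood flow under vasodilator stress, mL/g/min

    @property
    def mpr(self) -> float:
        """Myocardial perfusion reserve: stress MBF / rest MBF."""
        return self.stress_mbf / self.rest_mbf

    def suggests_cmd(self, stress_threshold: float = 2.25) -> bool:
        """Global stress MBF below the threshold, without a visual perfusion
        defect, is reported to point toward CMD."""
        return self.stress_mbf < stress_threshold

study = PerfusionStudy(rest_mbf=1.1, stress_mbf=2.0)
print(f"MPR = {study.mpr:.2f}, CMD suspected: {study.suggests_cmd()}")
# MPR = 1.82, CMD suspected: True
```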
Non-contrast-based CMR techniques for perfusion estimation are the future of CMD diagnostics, as they are more sensitive and have even higher diagnostic accuracy than today's widely available techniques. They are based on the principle of estimating myocardial tissue oxygenation with specific protocols or comparing the changes in myocardial native T1 time between the rest and stress perfusion study [89]. These techniques can overcome different limitations of conventional techniques, including imaging artifacts, long scan times, inter-observer variability, problems with the absolute quantitation of myocardial blood flow, and restricted use in patients with chronic kidney disease.
Advantages and disadvantages of non-invasive modalities in the estimation of CMD are presented in Table 1.
Invasive Diagnostics
The invasive modalities in the diagnostics of CMD are mainly based on the estimation of coronary blood flow. Coronary blood flow can be estimated by Doppler (measuring coronary flow velocity) or by thermodilution (measuring the transit time of a cold bolus), each with a different sensor-tipped intracoronary guidewire [90]. With regard to endothelial function, coronary blood flow can be estimated in response to adenosine (non-endothelium-dependent function) or in response to acetylcholine, which also evaluates the presence of vasospastic angina (endothelium-dependent function). CFR values (the ratio of the maximal or hyperemic flow to the resting flow) of less than 2.0-2.5 (thermodilution) or 2.5 (Doppler) in the absence of epicardial obstructive coronary artery disease indicate the presence of coronary microvascular dysfunction [91]. The ratio of distal coronary pressure to flow can be used to calculate coronary microvascular resistance. In the thermodilution-based method, the index of microvascular resistance (IMR) with a cut-off value of >25 is significant for confirming the presence of CMD, while in the Doppler-based technique, the resulting index is called hyperemic microvascular resistance (hMR), with a cut-off value of 2.5 mmHg/cm/s [92,93]. Regarding endothelium-dependent microvascular dysfunction, the diagnosis can be made if there is an increase of less than 50% in coronary blood flow, accompanied by ischemic ECG changes and angina symptoms, in the absence of epicardial vasoconstriction. It is important to keep in mind that patients with CMD may have both endothelium-dependent and -independent types of microvascular dysfunction. Studies evaluating the invasive indices of CMD in patients with HFpEF revealed abnormalities in coronary flow and resistance [94]. The study by Dryer et al. revealed that HFpEF patients had lower CFR and higher IMR values than the control group. These patients were also older and had higher values of NT-proBNP and higher left ventricular end-diastolic pressure, while 93% of them had hypertension as one of the comorbidities [95].
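A hedged sketch of how these thermodilution-derived indices are computed is shown below. The formulas (CFR as resting over hyperemic mean transit time, IMR as distal pressure times hyperemic transit time) follow the standard definitions, while the input values are invented for illustration.

```python
# Sketch of the invasive thermodilution indices named above; input values
# are invented, and thresholds are the ones quoted in the text.

def thermodilution_cfr(tmn_rest_s: float, tmn_hyperemia_s: float) -> float:
    """Coronary flow reserve: resting / hyperemic mean bolus transit time."""
    return tmn_rest_s / tmn_hyperemia_s

def index_of_microvascular_resistance(pd_hyperemia_mmhg: float,
                                      tmn_hyperemia_s: float) -> float:
    """IMR: distal coronary pressure times hyperemic mean transit time."""
    return pd_hyperemia_mmhg * tmn_hyperemia_s

cfr = thermodilution_cfr(tmn_rest_s=0.80, tmn_hyperemia_s=0.40)
imr = index_of_microvascular_resistance(pd_hyperemia_mmhg=70.0,
                                        tmn_hyperemia_s=0.40)
print(f"CFR = {cfr:.1f} (supports CMD if < 2.0-2.5)")   # CFR = 2.0
print(f"IMR = {imr:.0f} (supports CMD if > 25)")        # IMR = 28
```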
The diagnostic algorithm for the estimation of CMD among patients with chest pain and hypertension, involving both non-invasive and invasive modalities, is presented in Figure 3.

Considering the variety of imaging modalities in the diagnostics of CMD, it is notable that in patients with hypertension, the indices of arterial stiffness are independently related to microvascular dysfunction [96][97][98]. A recent study by Aursulesei Onofrei et al. demonstrated a predictive value of the subendocardial viability ratio (SEVR), also known as the Buckberg index, in hypertensive patients with CMD. This parameter of arterial stiffness, which represents an index of myocardial oxygen supply and demand, is significant in the assessment of long-term cardiovascular risk and is independently associated with age, abdominal circumference, and Framingham risk score [99].
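For orientation, the sketch below computes a simplified SEVR from a central pressure waveform. SEVR is conventionally the diastolic pressure-time integral (supply) divided by the systolic pressure-time integral (demand); the waveform, the split point, and the omission of left ventricular diastolic pressure are all simplifying assumptions of this illustration, not details taken from the cited study.

```python
# Simplified SEVR (Buckberg index) sketch: DPTI / SPTI from one pressure
# waveform split at end-systole. Real implementations subtract LV diastolic
# pressure from the diastolic integral; this illustration omits that step.
import numpy as np

def _trapezoid(y: np.ndarray, x: np.ndarray) -> float:
    # Plain trapezoidal rule, avoiding version-specific numpy integration names.
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x)))

def sevr(pressure_mmhg: np.ndarray, time_s: np.ndarray,
         end_systole_idx: int) -> float:
    """Ratio of the diastolic to the systolic pressure-time integral."""
    spti = _trapezoid(pressure_mmhg[:end_systole_idx + 1],
                      time_s[:end_systole_idx + 1])
    dpti = _trapezoid(pressure_mmhg[end_systole_idx:], time_s[end_systole_idx:])
    return dpti / spti

# Invented one-beat waveform sampled at 10 ms: systole ~0.3 s, diastole ~0.5 s.
t = np.arange(0.0, 0.80, 0.01)
p = np.where(t < 0.30,
             80 + 120 * np.sin(np.pi * t / 0.30),
             80 * np.exp(-(t - 0.30)))
print(f"SEVR = {sevr(p, t, end_systole_idx=30):.2f}")
```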
Coronary Microvascular Dysfunction, Hypertension, and HFpEF
Recent studies that investigated the pathophysiology of HFpEF and the role of CMD revealed that, across various studies, 40-86% of patients with HFpEF have coronary microvascular dysfunction, proven by both non-invasive and invasive diagnostic modalities [100,101]. It is still uncertain whether CMD is a cause or a consequence of HFpEF. Since myocardial interstitial and focal fibrosis is one of the main mechanisms in HFpEF responsible for increased myocardial stiffness, it is believed that CMD and its consequences are at the core of HFpEF pathophysiology, mostly due to chronic microvascular inflammation [102]. The emerging role of inflammation in the development of HFpEF has been the subject of numerous studies in recent years. In patients with hypertension, inflammation is driven mainly by oxidative stress, inducing hypertension-related vascular aging through various mediators [103]. This process has been shown to be one of the main mechanisms in the development and progression of HFpEF. Kanagala et al. demonstrated that CMD is an independent predictor of all-cause mortality and heart failure hospitalizations in patients with HFpEF [104]. It is important to note that a variety of other parameters were found to correlate with CMD and HFpEF, including age, heart rate, diastolic blood pressure, hemoglobin, urea, creatinine, eGFR, BNP, the use of loop diuretics, and increased LV filling pressures. Hypertension is one of the most important factors for the development of endothelial dysfunction and the promotion of pro-hypertrophic and pro-fibrotic signaling, thus directly increasing the risk for the development of CMD, diffuse and focal fibrosis, and HFpEF [105]. It has been shown that a significant number of patients with HFpEF exhibit hypertension as a comorbidity (up to 90%) [106]. The presence of CMD and hypertension, or more precisely, hypertensive heart disease, has prognostic significance in patients with HFpEF. Extracellular volume fraction, a marker of interstitial fibrosis assessed by cardiac magnetic resonance, is one of the most important parameters to discriminate between HHD and HFpEF. The amount of interstitial fibrosis that clinically correlates with significant LV stiffness, the development of HFpEF, and the transition from HHD to HFpEF corresponds to an ECV value of 31.2%. This value can discriminate between HFpEF and HHD with 100% sensitivity and 75% specificity [107]. One more parameter derived from non-invasive diagnostic modalities that can differentiate between HHD and HFpEF is the global longitudinal strain (GLS). In hypertensive heart disease and in HFpEF, fibrosis involves the myocardial mid-wall, where circumferentially shortening fibers are located, which is why global circumferential strain (GCS) is affected before longitudinal shortening. It has been found that GLS is significantly more depressed in patients with HFpEF than in patients with HHD, marking it as a more powerful prognostic marker in HFpEF [108]. One of the possible explanations could be the more pronounced focal, and especially interstitial, fibrosis in HFpEF patients as a consequence of advanced stages of CMD and LV hypertrophy. However, the exact relationship between all these clinical entities is yet to be determined. The cause-and-effect relationship between hypertension, numerous risk factors, and CMD in the development of HFpEF is presented in Figure 4.
Coronary Microvascular Dysfunction, Hypertension, and Atrial Fibrillation
As previously mentioned, myocardial fibrosis is one of the main consequences of both hypertensive heart disease and coronary microvascular dysfunction and is also an important pathophysiological mechanism of HFpEF. Cardiac magnetic resonance studies demonstrated the presence of myocardial fibrosis not only in the LV myocardium but also in the left atrium, subsequently increasing the risk of atrial fibrillation occurrence [109]. It is notable that, aside from being the most prevalent sustained arrhythmia in clinical practice, atrial fibrillation is particularly common in patients with HFpEF [110]. Although there is a lack of evidence on the exact relationship between CMD and AF, it is proposed that impaired myocardial perfusion in patients with CMD causes atrial remodeling and electrical instability, thus facilitating the occurrence of AF. Recent studies evaluating the presence and impact of AF in patients with HFpEF revealed that AF is present in 79% of patients with HFpEF [111]. Among patients with AF and HFpEF, more than 90% have impaired invasively derived values of CFR, indicating the presence of CMD. It is important to underline that in these patients, hypertension was significantly more prevalent, contributing to the development of CMD, AF, and HFpEF. Based on the above, it is important to search for CMD in patients with hypertension and atrial fibrillation, as these patients have an increased risk of developing HFpEF.
Management of Coronary Microvascular Dysfunction in Patients with Hypertension
Bearing in mind the variety of pathophysiological mechanisms and the different clinical phenotypes, the management of coronary microvascular dysfunction is a challenging task. It mainly consists of a combination of pharmacological treatment and lifestyle modification, although in the last few years several interventional techniques have appeared as potential therapeutic solutions. Lifestyle interventions, including smoking cessation, weight loss, regular exercise, and improved nutrition, have demonstrated positive effects on microvascular function [112,113]. It has been shown that the optimization of underlying diabetes mellitus and hyperlipidemia, as well as the treatment of hypertension as one of the most important risk factors, is beneficial in patients with CMD [114]. Early and continuous control of hypertension in patients with CMD is significant, as it can slow down the occurrence and progression of several subclinical and clinical entities such as left ventricular hypertrophy, interstitial myocardial fibrosis, and diastolic dysfunction. This can reduce the ischemic burden, improve symptoms, and reduce the risk of adverse events, especially HFpEF. ACE inhibitors, angiotensin receptor blockers (ARBs), calcium channel blockers, and beta blockers with vasodilatory properties have substantial effects on improving microvascular perfusion [115][116][117]. Regarding the effects of ACE inhibitors, it has been shown that certain agents can also slow down and even reverse reactive interstitial fibrosis, which is important in patients with hypertension [118]. Ongoing trials of the interventional treatment of hypertension (renal denervation) tend to suggest positive effects of this procedure on hypertension-related microvascular dysfunction, although the results of previous studies were controversial [117]. Considering the already proven positive effects of renal denervation on cardiac morphology and function, the additional effects on the improvement of microvascular function can be helpful in preventing both HFpEF and HFrEF [119]. Interventional procedures for the treatment of microvascular angina have been under development in recent years with promising results. The implantation of a coronary sinus reducer, which leads to a significant reduction in vascular resistance in the subendocardium, showed positive effects on angina symptom relief in patients with CMD [120]. Future studies should demonstrate the overall clinical benefit of this procedure in everyday practice.
Prognosis
Recent studies that investigated the prognostic significance of invasively derived indices of CMD revealed that depressed CFR was associated with an increased risk of cardiovascular death and heart failure admission, while elevated IMR alone still has a limited prognostic value [121]. It is still unclear why IMR has uncertain prognostic significance in patients with preserved CFR. However, one possible explanation is that an impaired IMR value can be an earlier indicator of CMD in the subclinical phase of the disease, with dominant functional alterations of the microcirculation. On the other hand, depressed CFR is more significant in the clinical phase of the disease, reflecting both functional and structural alterations, and is more closely associated with clinical outcomes in these patients. Non-invasive estimation of myocardial perfusion seems to have additional prognostic significance. The greatest number of studies refer to CMR and PET as the two most important non-invasive modalities. In PET studies, there was a positive correlation with clinical outcomes in the group of patients with both epicardial and microvascular coronary artery disease, as well as with CMD alone [122]. A reduction of the myocardial flow reserve was associated with the incidence of major adverse cardiovascular events (MACE) in both of these groups. The study by Murthy et al. demonstrated a 3-year cardiac mortality rate of 8% in patients with impaired MFR, among whom over 80% had hypertension as a comorbidity [123].
Quantitative CMR methods of estimating myocardial perfusion demonstrated a significant correlation with major adverse cardiovascular events. An MPRI (myocardial perfusion reserve index) value below the optimal predictive threshold of 1.47 was related to a three-fold increased risk of MACE over a 5-year follow-up. It is important to underline that hypertension, alongside the MPRI value, was also a significant predictor of poor prognosis in these patients, indicating an important mutual relationship between microvascular angina and hypertension [124].
Future Perspectives
A more integrated algorithm for CMD diagnostics, especially in symptomatic patients and patients at increased risk of HFpEF, is mandatory. This is important not only to control symptoms but also to minimize the possibility of future adverse cardiovascular events. Investigating the relationship between different clinical entities, especially CMD, myocardial fibrosis, hypertensive heart disease, and HFpEF, will be helpful in the proper identification of patients at risk and will also guide the further development of different therapeutic modalities.
Conclusions
Coronary microvascular dysfunction is a clinical entity linked with various risk factors that significantly affect cardiac morbidity and mortality. Hypertension, one of the most important, causes both functional and structural alterations in the microvasculature, promoting the occurrence and progression of microvascular dysfunction. CMD is also related to several hypertension-induced morphological and functional changes in the myocardium in the subclinical and early clinical stages. This indicates that CMD, especially if associated with hypertension, is a subclinical marker of end-organ damage and heart failure, particularly that with preserved ejection fraction. This comprehensive review provides an integrated diagnostic approach for patients with hypertension and suspected CMD, as well as an overview of current therapeutic modalities, in order to reduce the burden of this emerging condition.
Figure 1. Coronary circulation and the role of different pathogenetic mechanisms involved in CMD.
Figure 2. A combination of qualitative, semiquantitative, and quantitative methods for the evaluation of a CMR stress perfusion study in a patient with coronary microvascular dysfunction. (a) LGE PSIR sequence, short axis view, showing the absence of the LGE phenomenon; (b) qualitative analysis of stress perfusion; a global subendocardial perfusion defect is observed (marked by blue arrows); (c) perfusion map during the stress perfusion study, short axis section, medial level; a global subendocardial perfusion defect is observed (marked by white arrows); (d) semiquantitative analysis (flow/time curve), short axis section, medial level; the perfusion curves indicate a global perfusion defect in the subendocardial layers of the myocardium (green and blue curves) in comparison to the subepicardial layers (red and orange curves) (marked by white arrows); (e) quantitative analysis of stress perfusion; diffusely reduced normalized values of myocardial perfusion reserve (MPRI) are observed.
Figure 3. Diagnostic algorithm for the estimation of coronary microvascular dysfunction among hypertensive patients with chest pain (negative and positive symbols correspond to a negative or positive test in the diagnostic algorithm).
Figure 4. Pathophysiological mechanisms of heart failure with preserved ejection fraction (HFpEF) in relation to coronary microvascular dysfunction and hypertension.
Table 1. Characteristics of non-invasive imaging modalities in the evaluation of CMD.
Racial and Ethnic Disparities in Clinical Trial Enrollment Among Women With Gynecologic Cancer
Key Points

Question: Are there racial and ethnic disparities in clinical trial enrollment among women with gynecologic cancer?

Findings: In this cohort study of 562 592 women with endometrial, ovarian, or cervical cancer, the odds of clinical trial enrollment were lower among Asian, Black, and Hispanic women compared with White women. Comparisons with the US population demonstrated overrepresentation among White women for all cancer sites, underrepresentation among Asian and Hispanic women for all cancer sites, and varied patterns for Black women depending on cancer site.

Meaning: These findings suggest that efforts to engage women with gynecologic cancer who are from minoritized racial and ethnic groups are needed to increase their representation in clinical trials.
Introduction
Health care disparities exist within all scopes of medicine and occur along various dimensions, including race and ethnicity, socioeconomic status, geography, and language. Racial and ethnic inequities in gynecologic oncology treatment and outcomes are well-established and deeply entrenched in the social determinants of health, 1 prompting calls to address these gaps in care. 2 Clinical trials, defined as research in which humans are prospectively assigned to 1 or more interventions for the evaluation of health-related effects, 3 are essential for ensuring validity, generalizability, and equity of care, as well as for advancing medical knowledge. Recent reports 4,5 suggest that between 6% and 8% of the US adult population with cancer participates in clinical trials, with lower representation of patients from minoritized racial and ethnic groups. 8,9 In a recent review, Barry and colleagues 10 outlined the extent of racial disparities in clinical trial enrollment of patients with gynecologic cancer. Most of the reviewed studies compared observed enrollment in clinical trials identified through ClinicalTrials.gov with expected enrollment derived from population-based, age-adjusted incidence rates. Collectively, women from minoritized racial and ethnic groups were underrepresented in these trials, whereas White women were more likely to be overrepresented across gynecologic cancer types. This work is an important starting point for describing racial and ethnic disparities in clinical trial enrollment of patients with gynecologic cancer; however, studies with an internal comparison group with adjustment for potential confounders are needed to fully understand the complex picture of clinical trial enrollment. Moreover, these studies highlight the need for data sets that include large numbers of women from underrepresented groups. As such, we examined associations of race and ethnicity with clinical trial enrollment among women with gynecologic cancer using the National Cancer Database (NCDB). In addition, we present participation-to-prevalence ratios (PPRs) according to period of diagnosis to evaluate trends in the representation status of underrepresented groups in gynecologic cancer trials.
Data Source
The 2020 Participant User File was obtained from the hospital-based NCDB, 11 a cancer registry capturing 70% of cancers diagnosed in the US. Data include sociodemographic characteristics, tumor characteristics, treatment facility attributes, treatment, and survival outcomes abstracted from patient medical records by Certified Tumor Registrars. 12 Data submitted to the NCDB undergo rigorous quality checks according to American College of Surgeons standards. This study was exempt from review by the Ohio State University institutional review board, and the need for informed consent was waived because the data were anonymous and publicly available, in accordance with 45 CFR §46. We followed the Strengthening the Reporting of Observational Studies in Epidemiology (STROBE) reporting guidelines for cohort studies. 13
Clinical Trial Enrollment
We categorized women as enrolled in a clinical trial when response observations were enrolled in an institutional (code 2) or double-blind clinical trial (code 3). Categories of no trial (code 0), other (code 1), other-unproven (code 6), or refused trial (code 7) were categorized as no clinical trial enrollment. 14
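A minimal recoding sketch of this dichotomization is shown below; the code values mirror those quoted above, but the function, column name, and dataframe are hypothetical and not part of the NCDB specification.

```python
# Hypothetical recoding of the trial-participation field into a binary
# enrollment indicator, mirroring the code values quoted in the text.
from typing import Optional

import pandas as pd

ENROLLED_CODES = {2, 3}            # institutional or double-blind trial
NOT_ENROLLED_CODES = {0, 1, 6, 7}  # no trial, other, other-unproven, refused

def recode_enrollment(code: int) -> Optional[int]:
    """Return 1 if enrolled, 0 if not enrolled, None for unexpected codes."""
    if code in ENROLLED_CODES:
        return 1
    if code in NOT_ENROLLED_CODES:
        return 0
    return None

df = pd.DataFrame({"trial_code": [0, 2, 3, 7, 1]})
df["enrolled"] = df["trial_code"].map(recode_enrollment)
print(df)
```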
Covariates
Race and ethnicity were available as self-reported variables coded by the NCDB. We cross-classified race (American Indian/Alaska Native, Asian, Black, Native Hawaiian/Pacific Islander, White, and other) and ethnicity (Hispanic vs non-Hispanic) to produce the following categories: non-Hispanic Asian (hereafter referred to as Asian), non-Hispanic American Indian/Alaska Native (hereafter referred to as American Indian/Alaska Native), non-Hispanic Black (hereafter referred to as Black), Hispanic ethnicity of any race, non-Hispanic Native Hawaiian/Pacific Islander (hereafter referred to as Native Hawaiian/Pacific Islander), non-Hispanic White (hereafter referred to as White), and non-Hispanic other (hereafter referred to as other). The NCDB does not specify what groups are included in "other race." Detailed information on the categories of race that compose the 6 overarching groups is provided in the eAppendix in Supplement 1.
Statistical Analysis
In the NCDB sample, we used multivariable logistic regression to estimate adjusted odds ratios (ORs) and 95% CIs for associations of race and ethnicity with clinical trial enrollment. Factors included as covariates comprised patient, facility, tumor, and treatment characteristics that have been identified as factors related to clinical trial enrollment among patients with cancer 4 and were available in the NCDB.
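To make the modeling step concrete, here is a minimal sketch of an adjusted logistic regression with a White reference group, using statsmodels on synthetic data. The variable names, enrollment probabilities, and the reduced covariate set are all invented for illustration and do not reproduce the NCDB analysis.

```python
# Sketch of an adjusted logistic regression for trial enrollment; the data
# are synthetic and the covariate set is deliberately reduced.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 5000
df = pd.DataFrame({
    "race_ethnicity": rng.choice(["White", "Black", "Asian", "Hispanic"], size=n),
    "age": rng.normal(62, 10, size=n),
})
# Assumed enrollment probabilities, chosen only so the example has signal.
base_prob = {"White": 0.06, "Black": 0.04, "Asian": 0.03, "Hispanic": 0.03}
df["enrolled"] = rng.binomial(1, df["race_ethnicity"].map(base_prob))

model = smf.logit(
    "enrolled ~ C(race_ethnicity, Treatment('White')) + age", data=df
).fit(disp=0)
print(np.exp(model.params))  # adjusted odds ratios vs the White reference
```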
To evaluate the racial and ethnic composition of patients with gynecologic cancer enrolled in clinical trials (in the NCDB) relative to the racial distribution in the overall cancer-specific population, we calculated the PPR according to period of diagnosis (2004-2011 vs 2012-2019). 15 The PPR was calculated by dividing the race-specific percentage of clinical trial participants in the study sample (eg, percentage of NCDB patients with endometrial cancer enrolled in clinical trials who are Black) by the percentage of racial and ethnic groups in the US patient population (eg, percentage of US patients with endometrial cancer who are Black) according to cancer site. We used the Surveillance, Epidemiology, and End Results (SEER)*Stat program to derive population-based race and ethnicity frequencies for each cancer site. We omitted calculations for American Indian/Alaska Native, Native Hawaiian/Pacific Islander, and other women owing to low numbers. Stratification of the PPR by diagnosis period (2004-2011 vs 2012-2019) was done to qualitatively assess clinical trial enrollment over time. We evaluated only 2 time periods to reduce the potential for small numbers. PPRs less than 0.8 can be interpreted as underrepresentation in clinical trials, PPRs of 0.8 to 1.2 as adequate representation, and PPRs greater than 1.2 as overrepresentation.
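A minimal sketch of this ratio and its interpretation is shown below; the percentages are invented, and the 0.8/1.2 interpretation bounds follow the convention described above.

```python
# Minimal PPR sketch; input percentages are invented for illustration.

def participation_to_prevalence_ratio(pct_trial_enrollees: float,
                                      pct_us_patient_population: float) -> float:
    """Race-specific % of trial enrollees / race-specific % of the US
    cancer-specific patient population."""
    return pct_trial_enrollees / pct_us_patient_population

def interpret_ppr(value: float) -> str:
    if value < 0.8:
        return "underrepresented"
    if value <= 1.2:
        return "adequately represented"
    return "overrepresented"

# e.g. a group forming 6% of trial enrollees but 10% of the patient population
ppr = participation_to_prevalence_ratio(6.0, 10.0)
print(f"PPR = {ppr:.2f} -> {interpret_ppr(ppr)}")  # PPR = 0.60 -> underrepresented
```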
We also observed that older age at diagnosis (OR per 5-year increment, 0.89; 95% CI, 0.85-0.94) and having 2 or more comorbidities (OR, 0.56; 95% CI, 0.34-0.95) were associated with lower odds of clinical trial enrollment. Area-level characteristics were also related to clinical trial enrollment.
Discussion
In this retrospective cohort study of women with gynecologic cancer, we used 2 complementary approaches to evaluate racial and ethnic disparities in clinical trial enrollment. First, we examined clinical trial enrollment odds comparing participation among minoritized women with that of White women, with covariate adjustment. These analyses demonstrated lower clinical trial enrollment odds among Asian, Black, and Hispanic women compared with White women, but no difference in enrollment among American Indian/Alaska Native, Native Hawaiian/Pacific Islander, or other race women. In addition, social determinants of health, including area-level income and education, geographic region, and metropolitan status, along with certain facility characteristics, were associated with clinical trial enrollment. In the second analytic approach, analyses comparing the race-specific prevalence of clinical trial enrollment in the NCDB sample with the race-specific cancer prevalence in the US population with gynecologic cancer revealed interesting patterns. First, regardless of diagnosis period, Asian and Hispanic women with an endometrial, ovarian, or cervical cancer were underrepresented in clinical trials compared with the proportion expected on the basis of US cancer incidence. White women were either adequately represented or overrepresented for all 3 cancer sites, whereas patterns diverged for Black women: among those with endometrial or cervical cancer, adequate representation or overrepresentation was noted, but among those with ovarian cancer, underrepresentation was evident. Together, these analyses provide novel information on the landscape of racial and ethnic disparities in gynecologic cancer treatment.
Prior studies7,8 examining clinical trial representation among patients with gynecologic cancer have compared observed case counts of racial and ethnic groups from published trials (including trials registered through ClinicalTrials.gov, Gynecologic Oncology Group-based trials,9 or National Cancer Institute-sponsored gynecologic cancer treatment trials16) to the expected racial and ethnic count obtained from population-based age-adjusted incidence rates. In support of this body of work, we identified adequate representation or overrepresentation in clinical trials among White women with endometrial, ovarian, or cervical cancers along with underrepresentation of Black patients with ovarian cancers. Our findings that Black women with endometrial or cervical cancers were adequately or overrepresented in clinical trials are in line with the findings of 2 prior studies.8,16 For example, Mattei and colleagues8 reported that Black women with either a uterine or cervical cancer were proportionately enrolled in precision medicine trials, whereas Mishkin and colleagues16 similarly noted no enrollment disparities for Black women with uterine or cervical cancer in National Cancer Institute-sponsored treatment trials. However, an evaluation of racial representation in Gynecologic Oncology Group-sponsored clinical trials revealed that enrollment of Black women was 9.8-fold lower than expected for endometrial cancer trials and 4.5-fold lower for cervical cancer trials.9 Overall, although our PPR findings indicate that Black women are being enrolled in endometrial and cervical cancer clinical trials at levels proportionate to their distribution in the population, this practice of striving for proportional enrollment is unlikely to culminate in the sample sizes needed to make well-powered conclusions about treatment efficacy within minoritized groups.17 Indeed, recent calls for equitable clinical trial inclusion suggest the need to recruit equal numbers of racial and ethnic groups, such that minoritized groups are overenrolled with respect to their size in the general population. A shift in this direction would allow ideally powered analyses of treatment effects within racial and ethnic groups.18 Disparities between Black and White populations in gynecologic oncology have been frequently investigated; however, reports focused on other racial and ethnic groups are less common. Our PPR and logistic regression analyses showing underrepresentation and lower clinical trial enrollment odds of Asian and Hispanic women agree with data published by Mattei and colleagues,8 where women in these groups were less commonly enrolled to precision oncology trials for ovarian and uterine cancer, with Hispanic women also less likely to be enrolled in cervical cancer clinical trials.
Furthermore, in a review of National Cancer Institute-sponsored gynecologic oncology trials, Hispanic, but not Asian, women were less likely to be enrolled in ovarian, uterine, or cervical cancer clinical trials.16 Because of the low numbers of Native Hawaiian/Pacific Islander, American Indian/Alaska Native, and other race patients, we were unable to provide meaningful estimates of clinical trial enrollment odds or PPRs for these groups.
Apart from race and ethnicity, other factors associated with clinical trial enrollment included the presence of comorbidities, which was related to lower odds of clinical trial enrollment, in line with prior work.19 Although clinical trials traditionally exclude patients with medical comorbidities under the auspice of patient safety, in 2017 and 2021 the American Society for Clinical Oncology recommended broadening clinical trial eligibility to maximize generalizability.20,21 In addition, older age; living in zip codes with higher income; living in zip codes with lower educational attainment; living in urban, small, or medium-sized counties; and treatment in the South, Midwest, and Pacific (compared with the Northeast) were associated with lower clinical trial enrollment. Treatment at an academic or research program or an integrated network cancer program was associated with higher odds of clinical trial enrollment. Most of these associations were expected on the basis of prior literature22,23; however, our finding that women living in areas with higher area-level income were less likely to participate in clinical trials was surprising. It is likely that area-level income also captures unmeasured neighborhood effects underlying this unexpected association. Future studies that also include individual-level income measures will be useful in contextualizing this association.
PPRs according to race and ethnicity and diagnosis period, stratified by cancer site, are shown in the Figure. Among patients with endometrial cancer, White and Black women were adequately represented or overrepresented (PPRs ≥1.1) in clinical trials in both time periods, with a slight decline in representation among Black women between the 2 time periods (2004-2011, PPR = 1.4; 2012-2019, PPR = 1.1). Asian and Hispanic women were inadequately represented during both time periods (PPRs ≤0.5). Among patients with ovarian cancer, White women were overrepresented during both time periods, whereas Asian, Black, and Hispanic women were underrepresented during both periods (PPRs ≤0.6). For cervical cancer, Black and White women were either overrepresented or adequately represented during both time periods, whereas Asian and Hispanic women were underrepresented. Further details of the PPRs are shown in eTable 2 in Supplement 1.

Limitations and Strengths

Our analyses are limited by the available data within the NCDB, because we lack information on important patient and oncologic characteristics. Certain data that can affect clinical trial enrollment, including trial phase, sponsor or funding source, availability of clinical trials, and trial treatments, are not captured in the NCDB.
Figure. Participation-to-Prevalence Ratios for Women With Gynecologic Cancer by Diagnosis Period and Cancer Site. In both time periods, Asian and Hispanic women were underrepresented in clinical trials for all 3 cancer sites. Black women with an endometrial or cervical cancer diagnosis were either adequately represented or overrepresented in both time periods, but Black women with ovarian cancer were underrepresented. White women were adequately represented or overrepresented in clinical trials for all 3 cancer sites. Numbers were too low to generate meaningful estimates for American Indian/Alaska Native, Native Hawaiian/Pacific Islander, or other race women.

Additional methodological details are presented in the eAppendix in Supplement 1. Statistical analyses were performed using SEER*Stat software version 8.4.1.1 (National Cancer Institute) and SAS statistical software version 9.4 (SAS Institute). All P values were 2 sided, with statistical significance set at P < .05. Analyses were performed from February 2 to June 14, 2023.
Treatment at an academic or research program (OR, 6.26; 95% CI, 2.33-16.84) or an integrated network cancer program (OR, 2.93; 95% CI, 1.07-8.05) was associated with higher clinical trial enrollment odds compared with treatment at community cancer programs. Over the study period, we observed higher clinical trial enrollment, with women who received a diagnosis between 2016 and 2019 being approximately 10 times more likely to be enrolled compared with those who received a diagnosis between 2004 and 2006 (OR, 10.18; 95% CI, 6.32-16.39). Patients with ovarian (OR, 3.70; 95% CI, 2.69-5.08) or cervical (OR, 4.30; 95% CI, 2.76-6.70) cancer had higher enrollment odds than those with endometrial cancer.
Table. Associations of Epidemiologic, Facility, Tumor, and Treatment Characteristics With Clinical Trial Enrollment Among Women With Gynecologic Cancer (continued)
a ORs and 95% CIs were adjusted for all variables in the table.
b In the National Cancer Database, other race is not specified.
eTable 1. International Classification of Diseases for Oncology-3 Codes Including Gynecologic Cancer Histology Types
eAppendix. Supplemental Methods
eTable 2. Participation-to-Prevalence Ratios for Women With a Gynecologic Cancer Diagnosis According to Cancer Site and Stratified by Year of Diagnosis
eReferences
|
2023-12-08T06:17:09.058Z
|
2023-12-01T00:00:00.000
|
{
"year": 2023,
"sha1": "abb8f795ac944029014c8278fc9480288e3f6142",
"oa_license": "CCBY",
"oa_url": null,
"oa_status": null,
"pdf_src": "ScienceParsePlus",
"pdf_hash": "b05e857d799658471d6d755390ff96c9863db601",
"s2fieldsofstudy": [
"Medicine",
"Sociology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
252611486
|
pes2o/s2orc
|
v3-fos-license
|
Suspected Pediatric-Onset Common Variable Immune Deficiency (CVID) in a Seven-Year-Old Female With Pulmonary Manifestations
Common variable immune deficiency (CVID) is the most common of all primary immunodeficiencies, a group of rare diseases characterized by hypogammaglobulinemia. This is caused by defective functioning of B-cells and T-cells, resulting in recurrent infections. Its etiology is unknown but is most commonly attributed to epigenetic factors and epistatic interactions. Moreover, it has a bimodal age distribution and can become evident anywhere from infancy to after the fourth decade of life. Herein, a seven-year-old female, the first product of a consanguineous marriage with no family history of immunodeficiency disorders, presented predominantly with sinopulmonary involvement, which manifested as severe pneumonia, atelectasis, patchy alveolar infiltrates, and lung nodules. She also had a history of diarrhea and otitis media. Despite having a history of recurrent infections since three years of age, she was diagnosed late due to a lack of awareness and knowledge about the presentation of CVID and its different manifestations among the medical community in Pakistan. The diagnosis of CVID is based on the clinical and immunological manifestations of the patient with respect to the European Society of Immune Deficiencies (ESID) diagnostic criteria. Genetics can help detect mutations leading to CVID and establish a genetic diagnosis for CVID-like disorders. However, genetic panel testing is not used as a diagnostic tool in Pakistan due to the unavailability of resources. Instead, the clinical presentation, abnormal lymphocytic counts, and immunoglobulin levels may help diagnose CVID. Early diagnosis will help in the timely utilization of the most effective treatment and management options available, which include intravenous immunoglobulin (IVIG) and hematopoietic stem cell therapy. Ig replacement therapy has shown a beneficial role in halting the cycle of recurrent infections and improving the prognosis of CVID; however, it is an expensive therapy. The role of hematopoietic stem cell therapy in treating CVID has also been documented, but it is not yet common or practical.
Introduction
Common variable immune deficiency (CVID) is a heterogeneous disorder presenting various clinical manifestations such as autoimmune cytopenias, lymphoproliferation, and granulomas [1]. It may involve various genetic abnormalities in 10-15% of cases [2], although no single pathogenic mechanism has been established; the most commonly implicated genes include TNFRSF13B, which encodes the transmembrane activator, calcium modulator, and cyclophilin ligand interactor (TACI), as well as the genes for the inducible T cell costimulator (ICOS), the B-cell activating factor receptor (BAFF-R), and CD19 [2]. Moreover, CVID is characterized by abnormal immunoglobulin production, mainly of immunoglobulin G (IgG) and occasionally IgA and IgM, due to defective B-cell differentiation, leading to recurrent infections [3,4].
It has a prevalence of 0.001 to 3.374 per 100,000, making it a rare disease yet the most common primary immunodeficiency in humans [5]. While it can occur at any age, cases are usually classified into a pediatric age group (<18 years of age) and an adult age group (>18 years of age) [6].
CVID can have infectious and non-infectious manifestations (autoimmune diseases, malignancy/inflammatory diseases such as benign lymphocytic infiltration or granulomatous diseases, and sinopulmonary, gastrointestinal, and central nervous system involvement) [7].
Diagnosis of CVID is challenging in developing countries due to the lack of sensitivity and specificity of the available diagnostic criteria and the lack of physician knowledge. Although CVID is a clinical diagnosis, specific testing is necessary to rule out monogenic CVID-like disorders [3].

Treatment may differ between patients due to the involvement of various organ systems, depending upon the clinical presentation and manifestation of symptoms [3]. The most effective management modalities include immunoglobulin replacement therapy and immunosuppressive therapy. Autologous hematopoietic stem cell transplantation (HSCT) has also been documented to show significant results as a treatment modality, but it is not yet in common use [8].
Case Presentation
A seven-year-old Pakistani female, a resident of Karachi, Pakistan, presented to the National Institute of Child Health (NICH) with complaints of fever and cough for the last two weeks. According to her mother, the patient developed a high-grade, undocumented, intermittent fever not relieved by antipyretics. She also had a productive cough with mucoid sputum, for which she took oral amoxicillin-clavulanic acid; however, no improvement was noticed. A day before arriving at the hospital, she had an episode of hemoptysis, with fresh blood mixed with sputum. They visited a local doctor who referred them to NICH. Her past medical history was significant for on-and-off flu-like symptoms, two episodes of ear discharge in the last two years, and two to three episodes of large-volume watery diarrhea each year since three years of age, for which she had been treated with oral medications. There is also a history of hospital admission due to pneumonia at three years of age in a private hospital, where she required oxygen support and IV antibiotics. At five years of age, she again had similar complaints and was offered hospital admission, but her parents were reluctant; hence she was treated with a daily course of IV antibiotics at a local clinic.
According to her mother, she has always been thin and lean and has not gained adequate weight in the last few years as her other siblings did. She had no known allergies and was developmentally appropriate for her age. Her birth history was unremarkable, and she had been vaccinated to date according to Pakistan's vaccination schedule. She is the first twin product of a consanguineous marriage, and her other siblings are both healthy and alive. There is no history of any severe illness in the family.
Her physical examination revealed a pulse of 92 beats/min, blood pressure of 100/70 mmHg, temperature of 99°F, and a respiratory rate of 40 breaths/min. She had grade 2 clubbing on general physical examination, and her anthropometric measurements revealed that her height (110 cm) and weight (17 kg) were below the third centile. On respiratory examination, she had asymmetrical chest movements with decreased chest expansion on the left side and dull percussion notes over the left lower chest from the fourth to seventh intercostal space. She also had bronchial breath sounds, crackles, and increased vocal resonance in the left mid and lower zones. The rest of the systemic examination was unremarkable.
Her laboratory data revealed an elevated WBC count of 22×10⁹/L (normal range: 4.5-13.5×10⁹/L) with neutrophilia of 64% and an absolute neutrophil count of 14,080/µL. Her blood and sputum cultures were negative; however, C-reactive protein (CRP) and erythrocyte sedimentation rate (ESR) were elevated at 6.8 mg/L (normal range: 3-5 mg/L) and 95 mm/hr (normal range: <15 mm/hr), respectively. Her serum blood urea nitrogen, creatinine, electrolytes, and liver function tests were within normal limits.
Chest X-ray showed patchy alveolar infiltrates scattered in both lung fields, more on the left side of the chest. An air bronchogram and a silhouette sign were present (Figure 1). Chest CT scan (Figure 2) showed large areas of consolidation/atelectasis in the left lower lobe and small areas of consolidation in the right middle lobe, left lingular segment, and both lower lobes. A few nodular areas were seen in the right upper, middle, and lower lobes and the left lingular segment (Figure 3). Findings were suggestive of pulmonary infection. Testing for the ΔF508 mutation was sent but the mutation was not detected, and pancreatic fecal elastase was 283 µg/ml (normal range: >200 µg/ml). Human immunodeficiency virus serology was negative. Serum immunoglobulin assay showed decreased IgA and IgG levels of <0.15 g/L (normal range: 0.33-2.02 g/L) and 1.29 g/L (normal range: 6.33-12.8 g/L), respectively; however, the IgM level was within the normal range (0.48-2.07 g/L). This raised the suspicion of an immunodeficiency disorder, and a lymphocyte subset analysis was ordered. The analysis revealed low absolute counts of CD3+ and CD8+ T-lymphocytes, CD19+ total B-lymphocytes, and CD56+ natural killer cells. CD4+ T-helper lymphocytes were within normal limits, and the CD4+/CD8+ ratio was increased.
After discussing the case with an infectious disease specialist, we treated our patient with a tazobactam/piperacillin combination. Her parents were counseled regarding the need for genetic testing and the possibility of a bone marrow transplant. After discussion with her parents, a genetic panel was ordered. It showed variants of uncertain significance (VUS), namely heterozygous mutations in genes linked with CVID, including CTC1, PRKDC, PRKCD, PLCG2, and DNAJC21 (Figure 4). These genes have been found in patients with CVID, but their definitive role in CVID is not established. We strongly suspected common variable immunodeficiency based on the history of recurrent sinopulmonary infections, hypogammaglobulinemia, the lymphocyte subset analysis, and the genetic panel test. An allergist-immunologist was taken on board, and intravenous immunoglobulin was given. After a discussion with a geneticist, genetic samples from her parents were also ordered. These panels revealed paternal heterozygosity in the genes DNAH5, DNAJC21, IL12RB2, PRKCD, and PRKDC. Moreover, maternal heterozygosity was seen in ADGRE2, CTC1, and DNAJC21. Follow-up visits were advised to ensure a monthly maintenance dose of intravenous immunoglobulin, but the patient was lost to follow-up.
Discussion
While CVID is the most common primary immunodeficiency in Pakistan, it is still not diagnosed and reported in a timely manner due to its overlapping symptoms with other clinical conditions; because of this, doctors often fail even to consider it as a differential. CVID is characterized by hypogammaglobulinemia due to inadequate functioning of B-cells and T-cells, which subsequently causes recurrent infections, especially sinopulmonary infections [3]. The etiology of most CVID cases is unknown; however, 5%-25% of cases are found in consanguineous populations in particular [3].
While CVID can occur in any age group, it is usually classified as pediatric- or adult-onset [6]. The clinical manifestations of infectious diseases in CVID remain uniform irrespective of age and most commonly include pneumonia, sinusitis, otitis media, bronchitis, and shingles [6]. Our patient displayed similar symptoms and had a history of sinopulmonary involvement since birth, otitis media, diarrhea, and pneumonia over the last few years. She was given symptomatic treatment as the symptoms manifested individually over the years. She was hospitalized once due to severe pneumonia at the age of three. Though she presented with recurrent infections throughout, the doctors did not even suspect CVID due to a lack of awareness among physicians and the rare nature of the disease in the Pakistani population.
It has been reported that the leading cause of mortality in CVID patients is pulmonary complications [1]. In pediatric-onset CVID, these complications present as interstitial pneumonia, bronchiectasis, nodules, and interstitial lung disease [6]. In the pediatric age group, bronchiectasis secondary to CVID is most commonly associated with death in males, whereas lung nodules and pneumonia secondary to CVID are primarily associated with deaths in females [6]. Similarly, our female patient presented with a few nodular areas in the lung and severe pneumonia. In addition, she showed patchy alveolar infiltrates in both lung fields on the chest X-ray, while on the chest CT scan, atelectasis was seen in the left lower lobe with small areas of consolidation in the right middle and lower lobes. All of this suggested pulmonary infection.
Abnormalities of lymphocytic subsets in CVID patients have been reported, such as low switched memory B-cells, expansion of CD21low B-cells, and low CD4+ T cells, B cells, natural killer cells, and CD8+ T cells [1]. Similarly, our patient showed low absolute counts of CD19+ total B-lymphocytes, CD3+ and CD8+ T-lymphocytes, and CD56+ natural killer cells. However, her CD4+ cells were within the normal range. Unfortunately, due to the patient's affordability issues, her complete peripheral lymphocytic subset analysis was not done. Moreover, according to European Society of Immune Deficiencies (ESID) criteria, a definitive diagnosis of CVID requires hypogammaglobulinemia with low IgA and IgG levels (and low IgM in a few patients), along with low switched memory B-cells, absent isohemagglutinins, or poor vaccine responses, and a clinical presentation of CVID [3]. Similarly, our patient presented with low IgA and IgG levels, but contrary to the usual presentation, her IgM level was within the normal range. Therefore, the abnormal lymphocytic subset, hypogammaglobulinemia, and clinical presentation suggest CVID in our patient.
Later, the genetic panel test was ordered, showing heterozygous variants of uncertain significance in the genes ADGRE2, CTC1, DNAH5, DNAJC21, IL12RB2, ORAI1, PLCG2, PRKCD, and PRKDC. The parents' genetic panels clearly suggested a polygenic origin of our patient's CVID, pointing to the involvement of a more complex inheritance pattern than a monogenic one [9]. These genetic defects may vary according to the clinical manifestation of the patient [9]. Furthermore, since the parents are in a consanguineous marriage, the likelihood of gene mutations in their offspring increases, as seen in the patient [10]. CVID is often misdiagnosed due to the absence of proper diagnostic criteria and poor knowledge among doctors in Pakistan; in other parts of the world, the ESID diagnostic criteria are used to diagnose CVID. Moreover, the clinical picture of CVID is similar to that of other immune disorders, making the diagnosis even more difficult [3]. Therefore, whenever patients present with recurrent infections and abnormal lymphocytic subsets, doctors should always consider CVID as a differential. In addition, a genetic panel is not a diagnostic tool for CVID, but it can point toward a monogenic CVID-like disorder. Unfortunately, no genetic panel testing is available in Pakistan, and lymphocytic subset analysis is quite expensive; rural areas are even deprived of basic diagnostic tests such as lymphocytic subset analysis, due to which the disease often goes unrecognized there. Thus, recurrent infections and immunological features in any pediatric patient should be one of the main indications to evaluate for CVID.
Our case is similar to cases reported in the past where the diagnosis of CVID was established on clinical grounds. An example is a case report from Pakistan published in 2011 where a young Asian male with recurrent pneumonia was diagnosed with common variable immunodeficiency in adulthood on clinical grounds and basic investigations [11]. Another case of an 11-year-old female was reported with a history of recurrent infections in childhood. Based on clinical grounds, she was managed as a case of common variable immunodeficiency [12].
The management of CVID mainly depends upon the symptoms and the systemic involvement in the patient [3]. The primary treatment options include high-dose intravenous immunoglobulin (IVIg), steroids for CVID cytopenias, and anti-D antibodies [3]. For the vast majority of CVID patients, particularly those with infectious manifestations, immunoglobulin replacement is the mainstay, while hematopoietic stem cell transplantation can be used to treat severe autoimmune manifestations of CVID [8]. Our patient was treated with intravenous immunoglobulin and was advised genetic testing and a bone marrow transplant. Early diagnosis of CVID is essential in treating and preventing the progression of CVID. Early treatment with IVIg has a beneficial role in infectious complications, but its contribution to immune dysregulation, such as autoimmune manifestations, is ambiguous. Doctors should also consider prophylactic treatment when patients present with severe recurrent infections, an abnormal lymphocytic subset, and hypogammaglobulinemia.
Conclusions
CVID is a rare primary immunodeficiency that involves multiple organ systems, most importantly the sinopulmonary system. It presents with recurrent infections, low switched memory B-cells, expanded CD21low B-cells, and hypogammaglobulinemia. In addition, it may be associated with epigenetic factors. Therefore, in countries like Pakistan, where genetic testing is unavailable, the clinical presentation of symptoms and tests evaluating lymphocytic counts and immunoglobulin levels can assist in diagnosing CVID among many differentials. This will lead to the utilization of the most effective treatments, such as hematopoietic stem cell therapy, to increase the life expectancy of patients. Moreover, there is a further need for research on the complex inheritance patterns and the role of environmental factors in CVID.
Additional Information Disclosures
Human subjects: Consent was obtained or waived by all participants in this study. Conflicts of interest: In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
|
2022-09-30T15:14:34.018Z
|
2022-09-01T00:00:00.000
|
{
"year": 2022,
"sha1": "4c8962bd1dc1c52f266b3bb0f2508c12cdc2e3c8",
"oa_license": "CCBY",
"oa_url": "https://www.cureus.com/articles/107460-suspected-pediatric-onset-common-variable-immune-deficiency-cvid-in-a-seven-year-old-female-with-pulmonary-manifestations.pdf",
"oa_status": "GOLD",
"pdf_src": "ScienceParsePlus",
"pdf_hash": "5fd812f619f5b1172b0bc5437aef9e609f344b11",
"s2fieldsofstudy": [
"Medicine",
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
229486891
|
pes2o/s2orc
|
v3-fos-license
|
Green Chemistry Synthesis of Modified Silver Nanoparticles
The important aspect of this unconventional approach is that an eco-friendly, commercially accessible, and straightforward method was used to prepare silver nanoparticles using AgNO3 and curcumin solution as the reducing agent. Transmission electron microscopy (TEM), X-ray diffraction (XRD), and Fourier transform infrared spectroscopy (FTIR) were used to characterise these silver nanoparticles (AgNPs). Two types of bacterial isolates were used to assess the antibacterial activity of the silver nanoparticles prepared with curcumin solution: Gram-negative (Escherichia coli) and Gram-positive (Staphylococcus aureus). The results show that silver nanoparticles synthesized with curcumin solution have effective antibacterial activity.
1. INTRODUCTION
Novel metal nanoparticles have grabbed the attention of researchers in the chemical, medical, engineering, and biological fields due to their unique characteristic properties [1-7]. Interestingly, nanoparticles may be classified by the way in which they were manufactured, favouring environmentally friendly procedures. The best recent approaches to quantitative biosynthesis of nanoparticles use vitamins, plant extracts, biodegradable polymers, sugars, and microorganisms as reducing and capping reagents [8-9]. Polyphenols, which can be found in tea, red grapes, and turmeric, play a key role in some of these approaches [10-11]. In recent decades, it has been understood that curcumin is the main polyphenol in turmeric, and it has been used as a reducing and stabilizing reagent in the synthesis of Ag and Au nanoparticles [12-14]. Furthermore, it is believed that metal nanoparticles have potential bioactivity for novel anti-viral and anti-inflammatory therapies.
In this work, silver nanoparticles (AgNPs) were synthesised due to their unique antimicrobial properties, which open the door to their use in cosmetic products and clothing. Furthermore, silver possesses a higher electrical conductivity than Au and Cu, which makes it a significant candidate for conductive ink manufacturing. Another impact is the associated strong surface plasmon resonance (SPR), which can be used to measure the adsorption of material at the Ag nanoparticle (AgNP) surface.
2. Methodology
In this work, to obtain the curcumin silver nanoparticles (AgNPs), we chose commercially available turmeric powder. The first step was preparing the curcumin solution [1] by adding 0.5 g of plant material (curcumin powder) to 100 ml of water; the mixture was then heated for 30 minutes at 85 °C. The mixture was filtered using filter paper (Whatman, no. 1) to obtain a smooth aqueous solution of curcumin.
The next stage was to obtain the silver nanoparticles. 10 ml of AgNO3 (0.1 M) and curcumin solution (3 ml) were mixed in a 50 ml conical flask, then heated for 1 hour at 60 °C [2]. The formation of silver nanoparticles (Ag-NPs) was indicated by a change of color from yellow to brownish (Figure 1). Finally, to collect the Ag nanoparticles, the solution was centrifuged at 4000 rpm for five minutes, dried, and then fully characterised by FT-IR, XRD, and TEM.
Characterization
The important aspect of this unconventional approach is that it is an eco-friendly, commercially accessible, and straightforward method. Transmission electron microscopy (TEM), X-ray diffraction (XRD), and Fourier transform infrared (FTIR) analysis were used to characterise the silver nanoparticles (AgNPs). Figure 2 exhibits a prominent peak at 420 cm⁻¹ due to the silver nanoparticles and a peak at 3448 cm⁻¹ from the hydroxyl group, while the peak at 1380 cm⁻¹ is due to the NO2 group from silver nitrate [3].
Figure (2). FTIR spectrum of the silver nanoparticles
The XRD patterns of the synthesized nanomaterials were recorded over the 2θ range of 10-80°. The X-ray diffraction patterns of the Ag-NPs prepared using AgNO3 with curcumin as the reducing agent are shown in Figure 3. The reflections, which correspond to Ag metal with face-centered cubic (fcc) symmetry, were indexed at 2θ values of 38.78°, 45.35°, 67.03°, and 78.92° (JCPDS 04-0783) [4]. The average size of the silver nanoparticles can be determined using the Debye-Scherrer relation [5]:
D = Kλ / (β cos θ)
where D is the grain size, K is the shape factor (0.9-1), λ is the X-ray wavelength (1.5418 Å), θ is the Bragg angle, and β is the width of the XRD peak (full width at half maximum). From this equation, the average grain size of the silver nanoparticles is 31.17 nm.
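As a worked illustration of this calculation, the Python sketch below evaluates the Debye-Scherrer relation for a single peak. The 2θ value matches one of the reflections reported above, but the peak width is an assumed illustrative FWHM (in degrees, converted to radians), not a measurement from this study.

```python
import math

def scherrer_size(two_theta_deg: float, beta_deg: float,
                  k: float = 0.9, wavelength_angstrom: float = 1.5418) -> float:
    """Crystallite size D = K*lambda / (beta * cos(theta)), returned in nm."""
    theta = math.radians(two_theta_deg / 2.0)   # Bragg angle theta in radians
    beta = math.radians(beta_deg)               # peak width (FWHM) in radians
    d_angstrom = k * wavelength_angstrom / (beta * math.cos(theta))
    return d_angstrom / 10.0                    # 1 nm = 10 angstrom

# Illustrative peak: 2theta = 38.78 deg with an assumed FWHM of 0.28 deg,
# which gives a size of roughly 30 nm, close to the reported average.
print(f"{scherrer_size(38.78, 0.28):.1f} nm")
```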
CONCLUSION
The main purpose of our research was to describe silver nanoparticles prepared using AgNO3 and curcumin solution as the reducing agent. The important aspect of this unconventional approach is that it is an eco-friendly, commercially accessible, and straightforward method. TEM, XRD, and FTIR were used to characterise these silver nanoparticles (AgNPs). Two types of bacterial isolates were used to assess the antibacterial activity of the silver nanoparticles prepared with curcumin solution and silver nitrate: Gram-negative (Escherichia coli) and Gram-positive (Staphylococcus aureus). The results show that silver nanoparticles synthesized with curcumin solution have effective antibacterial activity.
|
2020-11-19T09:13:58.631Z
|
2020-11-01T00:00:00.000
|
{
"year": 2020,
"sha1": "085cfeac184e825d94ec83be009d84730199afae",
"oa_license": null,
"oa_url": "https://doi.org/10.1088/1742-6596/1664/1/012080",
"oa_status": "GOLD",
"pdf_src": "IOP",
"pdf_hash": "e65aa7d7fd31f7fea8c8cedba82db0347f03e31b",
"s2fieldsofstudy": [
"Materials Science"
],
"extfieldsofstudy": [
"Chemistry",
"Physics"
]
}
|
249614022
|
pes2o/s2orc
|
v3-fos-license
|
Insights on Microsatellite Characteristics, Evolution, and Function From the Social Amoeba Dictyostelium discoideum
Microsatellites are repetitive sequences commonly found in the genomes of higher organisms. These repetitive sequences are prone to expansion or contraction, and when microsatellite expansion occurs in the regulatory or coding regions of genes this can result in a number of diseases including many neurodegenerative diseases. Unlike in humans and other organisms, the social amoeba Dictyostelium discoideum contains an unusually high number of microsatellites. Intriguingly, many of these microsatellites fall within the coding region of genes, resulting in nearly 10,000 homopolymeric repeat proteins within the Dictyostelium proteome. Surprisingly, among the most common of these repeats are polyglutamine repeats, a type of repeat that causes a class of nine neurodegenerative diseases in humans. In this minireview, we summarize what is currently known about homopolymeric repeats and microsatellites in Dictyostelium discoideum and discuss the potential utility of Dictyostelium for identifying novel mechanisms that utilize and regulate regions of repetitive DNA.
INTRODUCTION
Microsatellites are a universal feature of most organismal genomes, though the prevalence and characteristics of these vary widely between species. These genetic features, sometimes referred to as simple sequence repeats (SSRs), are short tandem repeats composed of 1-6 bp sequences (Ellegren, 2004). SSRs tend to be highly polymorphic and are primarily located within non-coding portions of the genome (Ellegren, 2004). Despite their ubiquity, expansion of microsatellites is known to cause several different diseases. These disease-causing expansions occur in both coding and non-coding regions of the genome, reflecting the wide array of mechanisms by which these microsatellites disrupt normal cellular functions (Ranum and Day, 2002; Orr and Zoghbi, 2007; Brouwer et al., 2009).
The social amoeba Dictyostelium discoideum raises new questions about the function and impact of microsatellites. These questions arise because the Dictyostelium genome contains a massive number of these features, with 11% of its genome composed of SSRs, about a 50-fold enrichment over most other organisms (Eichinger et al., 2005). Interestingly, unlike other organisms, which encode mostly dinucleotide repeats, Dictyostelium encodes mostly trinucleotide repeats (Eichinger et al., 2005). The number of tandem repeats of trinucleotides (and hexa-, nona-, etc.) is also extremely high within coding regions, resulting in the production of nearly 10,000 proteins that encode SSRs (Eichinger et al., 2005). Surprisingly, unlike in humans, microsatellite expansion within exons does not appear to be detrimental to Dictyostelium (Santarriaga et al., 2015). This raises several questions. How does Dictyostelium maintain genome stability? What are the functional aspects of SSRs? How is protein quality control maintained? Here, we will summarize the current knowledge of SSRs in Dictyostelium and describe the potential for utilizing this unique organism to explore questions in microsatellite biology.
MICROSATELLITE MUTATION IN DICTYOSTELIUM
The expansion and contraction of microsatellites is known to be influenced by both the composition and length of the repetitive sequence, as well as the DNA repair landscape of the cell (Schlötterer and Tautz, 1992;Strand et al., 1993;Sia et al., 1997;Lai and Sun, 2003;Shinde et al., 2003;Tian et al., 2011;Hamilton et al., 2017). Current models of microsatellite mutation attribute changes in microsatellite length primarily to slippage mutations, a phenomenon in which a newly synthesized DNA strand briefly dissociates during DNA replication but is misaligned after reannealing due to the repetitiveness of the template, resulting in some number of repeats remaining unannealed (Schlötterer and Tautz, 1992;Strand et al., 1993;Sia et al., 1997). This can result in either expansion or contraction of the microsatellite depending on which strand contains the unannealed portion of DNA (Schlötterer and Tautz, 1992;Strand et al., 1993;Sia et al., 1997). It is known that the frequency of slippage mutations occurring is dependent on the length of the repeat unit, the number of repeat units present, and the nucleotide composition of the microsatellite (Sia et al., 1997;Lai and Sun, 2003;Shinde et al., 2003). Also important in slippage mutation is the presence or absence of functional DNA repair, particularly in the mismatch repair pathway, though some have hypothesized that errors in double strand break repair by homologous recombination may also result in changes in microsatellite length (Sia et al., 1997;Richard and Pâques, 2000).
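To illustrate this model qualitatively, the Python sketch below treats slippage as a chance event each generation, with a per-generation slippage probability that grows with the current repeat count; the rates and the symmetric expansion/contraction assumption are illustrative simplifications, not parameters estimated from Dictyostelium data.

```python
import random

def simulate_microsatellite(n_repeats: int, generations: int,
                            base_rate: float = 1e-4, seed: int = 1) -> int:
    """Toy slippage model: each generation, a slippage event occurs with
    probability proportional to the current repeat count; expansion and
    contraction by one repeat unit are taken to be equally likely."""
    rng = random.Random(seed)
    for _ in range(generations):
        if rng.random() < base_rate * n_repeats:
            n_repeats += rng.choice((-1, +1))
            n_repeats = max(n_repeats, 1)  # a locus cannot shrink below 1 unit
    return n_repeats

# Longer repeats drift more: compare a short and a long starting tract.
print(simulate_microsatellite(5, 100_000), simulate_microsatellite(50, 100_000))
```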
As mentioned previously, the genome of Dictyostelium is highly repetitive, with over 11% of its genome composed of SSRs (Eichinger et al., 2005). The genome is over 75% A + T rich, a value comparable to some other protozoa such as Plasmodium falciparum but far exceeding most other eukaryotes (Eichinger et al., 2005). Some have proposed that this bias is the reason for the notable prevalence of microsatellites in coding regions, because it is easier for point mutations to result in a codon identical to neighboring codons, thus increasing the likelihood that a region will become prone to slippage mutations (Tian et al., 2011; Scala et al., 2012). Consistent with this, a high rate of 3n indels was found to occur in Dictyostelium in regions without simple sequence repeats, presumably arising via slipped-strand mispairings (Kucukyildirim et al., 2020). In addition, it was also observed that nearly one-third of indel events occurred in SSRs, primarily in homopolymeric A:T runs (Kucukyildirim et al., 2020). Together these observations provide one potential explanation for the high number of trinucleotide repeats in Dictyostelium, with small repeats potentially being preferentially expanded, resulting in an abundance of SSRs.
Surprisingly, despite having such unusually abundant microsatellites, early studies estimated that Dictyostelium microsatellites tend to accumulate mutations less rapidly than most other eukaryotes (McConnell et al., 2007;Saxer et al., 2012). By these estimates, the low mutation rate would suggest that rapid mutation is not the source of these extensive microsatellites in Dictyostelium, though another possible explanation for the low mutation rates is that expansion and contraction of microsatellites is balanced, thus masking the effects of mutations over several generations Kucukyildirim et al., 2020). In contrast to the early studies, a later study by Kucukyildirim et al. (2020) estimated an indel mutation rate higher than most organisms and attributed this to the high A + T content of the genome. However, in Plasmodium falciparum, a protist with even higher A + T content and lower percentage of the genome composed by SSRs, the indel mutation rate is estimated to be many fold higher than that of Dictyostelium (Hamilton et al., 2017;Kucukyildirim et al., 2020). It is evident from these conflicting findings that more research is needed to uncover the mutational dynamics of SSRs in Dictyostelium.
It is possible that Dictyostelium has evolved highly efficient DNA repair pathways that prevent additional mutations (McConnell et al., 2007; Saxer et al., 2012). Being a soil-dwelling microbe means that Dictyostelium cells come into contact with numerous mutagenic compounds that would select for rigorous DNA repair mechanisms (Deering, 1994). Dictyostelium also requires efficient DNA repair mechanisms because its cells are professional phagocytes, a lifestyle that exposes them to constant challenges from the bacteria they consume (Deering, 1968; Hsu et al., 2006; Zhang et al., 2009; Pontel et al., 2016). Importantly, Dictyostelium shows evidence of conservation of multiple eukaryotic DNA repair pathways, including some that were once thought to be limited to vertebrate animals (Table 1; Hsu et al., 2006; Pears and Lakin, 2014). Though much of the research on DNA repair in Dictyostelium has been focused on the processes of homologous recombination and non-homologous end joining (Katz and Ratner, 1988; Hsu et al., 2006, 2011), Dictyostelium also contains several orthologs of genes known to be associated with mismatch repair. However, these have not been extensively studied in Dictyostelium. Given what we know of the relevance of these pathways in microsatellite mutation in other organisms, it is important to consider that there may be insights to be had from studying these processes in an organism such as Dictyostelium that demonstrates remarkably lower microsatellite mutation rates than would be expected of a highly repetitive genome.
DO SIMPLE SEQUENCE REPEATS SERVE A FUNCTION IN DICTYOSTELIUM?
In recent years, more and more research has been conducted to study the functional aspects of homopolymeric amino acid sequences, low-complexity domains, and prion-like domains within proteins (Alberti, 2017;Alberti et al., 2019;Franzmann and Alberti, 2019b;Lau et al., 2020;Guo et al., 2021). While some studies have found evidence for beneficial impacts of having these repetitive domains, research has not yet been conducted to assess the function of any of these features in Dictyostelium. Instead, the research has been focused on looking for evidence of selection acting on these domains through genomic level analysis of SSR distribution and mutational patterns (Eichinger et al., 2005;Saxer et al., 2012;Scala et al., 2012;Kucukyildirim et al., 2020). If SSRs serve a function in Dictyostelium, we would expect to see evidence of selection acting upon them. However, the analyses that have been performed and the conclusions they have drawn have left this question unanswered. There are many arguments for and against the presence of selection acting on SSRs.
One characteristic that favors the idea that selection is in effect is that SSRs within coding regions are often read in frames that disproportionately favor one amino acid. For example, proteins are more likely to contain homopolymeric runs of asparagine or glutamine than of the amino acids that would be produced in the other two reading frames. Furthermore, mutations within these SSRs are often synonymous, indicating that a particular amino acid is favored over alternatives (Eichinger et al., 2005). Polyasparagine and polyglutamine tracts are overrepresented in regulatory factors such as kinases, transcription factors, and RNA binding proteins, indicating that these repetitive regions may play some sort of regulatory role within the cell (Eichinger et al., 2005). Dictyostelium also has a low mutation rate compared to organisms with similar genome composition, indicating that there may be selection acting to counter the effects of genetic drift in this organism (Kucukyildirim et al., 2020).
In contrast, there is high variation and genetic diversity among amino acid repeats in coding sequences, which is unexpected for protein sequences under purifying selection. Additionally, SSRs in coding regions are equally as variable as SSRs in non-coding regions, indicating that there is not stronger selection occurring, as would be expected for a functional protein sequence (Scala et al., 2012). The four amino acids most commonly found in homopolymeric tracts (asparagine, glutamine, threonine, and serine) are all polar and hydrophilic, indicating that they may be more likely to reside on the outer parts of a protein than in the hydrophobic core (Eichinger et al., 2005; Scala et al., 2012). Low mutation rates may have evolved as a mechanism to protect cells from deleterious expansions or contractions within the genome rather than as a mechanism to preserve function in coding SSRs (Kucukyildirim et al., 2020). It is clear that additional study is required to draw a more definite conclusion on whether selection is acting upon SSRs in Dictyostelium. Additionally, it would be helpful to conduct directly targeted studies that remove the SSRs from some of the proteins in which they are found and assess whether there are effects on fitness.
WHAT CAN WE LEARN FROM DICTYOSTELIUM MICROSATELLITES?
There are several human diseases associated with microsatellite expansion (Ranum and Day, 2002;Orr and Zoghbi, 2007;Brouwer et al., 2009). However, despite the many orthologs of human disease-associated genes and the seeming lack of harmful effects from its highly repetitive genome, relatively little research has been done in Dictyostelium on diseases caused by microsatellite expansion (Myre et al., 2011;Wang et al., 2011;Myre, 2012;Olmos et al., 2020;Haver and Scaglione, 2021). One microsatellite-associated disease that has been modeled in Dictyostelium is Huntington's Disease. In this disease, expansion of a CAG repeat encodes a homopolymeric polyglutamine tract in the huntingtin protein (HTT) that exceeds beyond a pathogenic threshold and is prone to aggregation (Orr and Zoghbi, 2007). An ortholog of HTT exists in Dictyostelium, and deletion of this protein results in several abnormal phenotypes including deficiencies in chemotaxis, flaws in cytokinesis, and improper cell patterning during multicellular development (Myre et al., 2011;Wang et al., 2011;Bhadoriya et al., 2019). Dictyostelium HTT lacks the polyglutamine tract present in exon 1 of human HTT, instead containing a polyglutamine tract further downstream (Myre et al., 2011). Because of this Dictyostelium may serve as an interesting organism to use in studying the effects of the presence and absence of polyglutamine tracts in the HTT protein. Dictyostelium may also serve as an ideal model to assess the impacts of polyglutamine tract length on protein function, a topic of interest in recent studies (Iennaco et al., 2022).
Furthermore, if there are unknown factors in Dictyostelium that mitigate the deleterious effects of expanded microsatellites, we could gain novel insights on how to alleviate the impact of these in human cells. The proteome of Dictyostelium is rich in proteins with prion-like domains, including homopolymeric polyglutamine and polyasparagine tracts as well as low complexity domains consisting of alternating amino acid residues (Eichinger et al., 2005;. However, Dictyostelium has been shown to be resistant to polyglutamine aggregation Santarriaga et al., 2015). Similarly, Dictyostelium has not been found to suffer deleterious effects from its many polyasparagine-rich or low complexity domains, though these are common features in prion proteins (Liebman and Chernoff, 2012;Franzmann and Alberti, 2019a). This begs the question of how Dictyostelium cells are able to tolerate these usually unstable proteins while other organisms would face protein aggregation and cytotoxicity. Has Dictyostelium evolved novel protein quality control mechanisms to maintain these proteins in a soluble, folded state? While this is largely an unanswered question, some evidence exists for novel mechanisms that suppress polyglutamine aggregation (Santarriaga et al., 2018). Potentially there are other mechanisms also involved in mitigating deleterious effects of these genes such as alternative splicing or gene silencing. Further research is certainly needed to clarify the mechanisms of maintaining protein homeostasis within this organism.
Furthermore, due to its repeat-rich genome, Dictyostelium is an interesting, tractable, and easy-to-use organism in which to investigate cellular phenomena associated with expanded microsatellites (Figure 1; Bozzaro, 2013; Pears and Lakin, 2014; Haver and Scaglione, 2021; Pears and Gross, 2021). Here we can begin to address many questions of relevance to human health. For instance, is there evidence of Repeat Associated Non-ATG (RAN) translation occurring in Dictyostelium? RAN translation is a phenomenon in which transcripts containing certain SSRs can initiate translation without the presence of an AUG start codon (Zu et al., 2011; Cleary and Ranum, 2014). These transcripts can be translated in multiple frames, leading to the production of proteins which vary in length and composition. This process has been implicated in a number of microsatellite-expansion diseases (Zu et al., 2011; Cleary and Ranum, 2014). RAN translation has not yet been studied in Dictyostelium, though given its highly repetitive genome and its experimental tractability, this organism would be an interesting candidate for studying this phenomenon in vivo and may provide unique insight into physiological functions of RAN translation. Additionally, because Dictyostelium is resistant to the deleterious effects of microsatellite expansion (Santarriaga et al., 2015), it provides a unique platform for studying the cellular dynamics of SSRs without cytotoxicity.

FIGURE 1 | Areas to study SSRs in Dictyostelium. Dictyostelium has several unique properties that make it a compelling model in which to study the biology of SSRs. These properties include, but are not limited to, its highly repetitive genome, its genetic tractability, its high degree of genetic conservation with humans, and its apparent resistance to the toxic effects of microsatellite expansion. Thus, Dictyostelium presents a unique opportunity to gain insights into the processes that regulate microsatellites and the biological consequences of having these repetitive sequences in the genome.
Another set of processes that would be advantageous to study in Dictyostelium are the various DNA repair pathways responsible for maintaining the integrity of the genome. As mentioned previously, Dictyostelium contains several orthologs to human DNA repair genes (Table 1), including some that are absent in Saccharomyces cerevisiae and other model organisms (Hsu et al., 2006;Pears and Lakin, 2014;. Defects in DNA repair, particularly in the mismatch repair pathway, have been implicated in microsatellite mutations in several classes of disease. These include but are not limited to neurodegenerative diseases, in which microsatellites can become expanded and encode aggregation-prone pathogenic proteins, and various cancers, in which microsatellite instability can contribute to hypermutability within malignant growths (Loeb, 1994;Boyer et al., 1995;Karran, 1996;Thomas et al., 1996;Dietmaier et al., 1997;Shah et al., 2010;Jeppesen et al., 2011;Yamamoto and Imai, 2015;Schmidt and Pearson, 2016;Cortes-Ciriano et al., 2017;Baretti and Le, 2018;Maiuri et al., 2019). In cancer, defects in mismatch repair are especially important predictors of efficacy for certain chemotherapeutics and may require special therapies to address (Martin et al., 2010;Li and Martin, 2016). The Dictyostelium genome contains orthologs of several human genes known to be involved in mismatch repair, as well as other DNA repair pathways. However, little to no research has been done on mismatch repair in this organism. Dictyostelium would be a good model for studying these highly conserved processes in a simple and genetically tractable model. In doing so, we could gain vital insights on the genetic and biochemical factors that play a role in eukaryotic mismatch repair, allowing us a better understanding of the mechanisms driving human diseases such as hypermutability in cancer cells and microsatellite expansion in neurodegenerative disorders. There is even potential for discovery of novel DNA repair mechanisms that have evolved in Dictyostelium or have remained undiscovered in higher eukaryotes.
CONCLUSION
The social amoeba Dictyostelium discoideum is unique among eukaryotic model organisms in that it features a highly repetitive genome without being known to demonstrate the deleterious impacts of expanded SSRs. However, several important aspects of microsatellite biology, including instability, behavior, and function, have not been widely studied in this organism. Understanding biological processes in organisms with unique biological attributes can provide novel insight into how nature has dealt with issues that cause disease in humans. Therefore, utilizing the unique benefits of model organisms such as Dictyostelium is important for expanding our knowledge of the processes driving cellular function.
AUTHOR CONTRIBUTIONS
FW wrote the initial draft of the manuscript. KS and FW revised and edited the manuscript. Both authors reviewed and approved the submitted manuscript.
FUNDING
This work was supported by the National Institutes of Health grants NS112191 and GM119544 to KS and the Diverse Scientists in Ataxia Predoctoral Fellowship from the National Ataxia Foundation to FW.
|
2022-06-13T13:29:34.471Z
|
2022-06-13T00:00:00.000
|
{
"year": 2022,
"sha1": "dda27f2702c9a73a4840d04a1aba3a2453f23b6d",
"oa_license": null,
"oa_url": null,
"oa_status": null,
"pdf_src": "Frontier",
"pdf_hash": "dda27f2702c9a73a4840d04a1aba3a2453f23b6d",
"s2fieldsofstudy": [
"Biology"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
2643053
|
pes2o/s2orc
|
v3-fos-license
|
Genome-Wide Signatures of Transcription Factor Activity: Connecting Transcription Factors, Disease, and Small Molecules
Identifying transcription factors (TF) involved in producing a genome-wide transcriptional profile is an essential step in building a mechanistic model that can explain observed gene expression data. We developed a statistical framework for constructing genome-wide signatures of TF activity, and for using such signatures in the analysis of gene expression data produced by complex transcriptional regulatory programs. Our framework integrates ChIP-seq data and appropriately matched gene expression profiles to identify True REGulatory (TREG) TF-gene interactions. It provides genome-wide quantification of the likelihood of regulatory TF-gene interaction that can be used either to identify regulated genes or as a genome-wide signature of TF activity. To effectively use ChIP-seq data, we introduce a novel statistical model that integrates information from all binding "peaks" within a 2 Mb window around a gene's transcription start site (TSS), and provides gene-level binding scores and probabilities of regulatory interaction. In the second step, we integrate these binding scores and regulatory probabilities with gene expression data to assess the likelihood of True REGulatory (TREG) TF-gene interactions. We demonstrate the advantages of the TREG framework in identifying genes regulated by two TFs with widely different distributions of functional binding events (ERα and E2f1). We also show that TREG signatures of TF activity vastly improve our ability to detect the involvement of ERα in producing complex disease-related transcriptional profiles. Through a large study of disease-related transcriptional signatures and transcriptional signatures of drug activity, we demonstrate that the increase in statistical power associated with the use of TREG signatures makes the crucial difference in identifying key targets for treatment, and drugs to use for treatment. All methods are implemented in an open-source R package, treg. The package also contains all data used in the analysis, including 494 TREG binding profiles based on ENCODE ChIP-seq data. The treg package can be downloaded at http://GenomicsPortals.org.
Introduction
The specificity of transcriptional initiation in the genomes of eukaryotes is maintained through regulatory programs entailing complex interactions among transcription factors (TF), epigenetic modifications of regulatory DNA regions and associated histones, chromatin-remodeling proteins, and the basal transcriptional machinery [1]. High-throughput sequencing of immuno-precipitated DNA fragments (ChIP-seq) provides a means to assess genome-wide expression regulatory events, such as TF-DNA interactions [2]. Sophisticated statistical methodologies have been developed for identifying TF binding events in terms of ''peaks'' in the distributions of ChIP-seq data [3][4][5][6][7][8]. The evidence provided by ChIP-seq binding data that a gene's expression is regulated by a TF is a function of the number of peaks, their intensity, and their proximity to the transcription start site (TSS) [9]. Furthermore, binding of a transcription factor in a gene's promoter alone does not always result in transcriptional regulation. In the case of the highly studied pleiotropic regulator ERa, transcriptional regulation depends on the presence of specific co-factors as well as on the type of activating ligand [10,11]. Therefore, the identification of true regulatory TF-gene relationships requires per-gene summaries/scores measuring the totality of the evidence in ChIP-seq data, integrated with measurements of gene expression levels.
Current approaches to summarizing binding peaks in order to correlate TF binding with transcriptional changes range from simple summaries in the proximal gene promoter (e.g., the maximum peak height within a narrow region around the promoter) [12][13][14] to weighted sums of peak heights where weights are inversely proportional to the distance of the peak to the gene's TSS [9,15]. Currently used distance-based weights depend on TF-specific tuning constants established through ad-hoc examination of the distribution of the peaks [9,12,13].
Dysregulation of transcriptional programs is intimately related to the progression of cancer [16,17] and other human diseases [18,19]. Modulating the behavior of specific TFs is a popular strategy for developing new disease treatments [20][21][22][23]. Genome-wide transcriptional profiles associated with a disease phenotype provide indirect evidence of TF involvement in the etiology of the disease. The most common strategy for implicating TF involvement is computational analysis of the genomic regulatory regions of differentially expressed genes [24][25][26][27]. However, such strategies are not effective when the search needs to include distant enhancers and when concurrent activity of multiple regulatory programs leads to "messy" transcriptional signatures. ERα-driven proliferation is one such case, where the involvement of the ERα regulatory program has been difficult to identify in the resulting transcriptional profiles using DNA binding motif analysis [27].
We have developed a comprehensive statistical framework for assessing True REGulatory (TREG) TF-gene interactions by integrated analysis of ChIP-seq and gene expression data. In the first step, we introduce a novel two-stage mixture generative statistical model for summarizing "peaks" within a 2 Mb window centered around a gene's TSS. Fitting this two-stage model yields scores and associated probabilities of regulation based on ChIP-seq data alone (i.e., the TREG binding profile). We show that our approach produces effective summaries both for a TF with binding sites clustered in close proximity to the TSS (E2f1) and for a TF known to exhibit regulation through binding to distant enhancers (ERα).
In the second step we integrate the TREG binding profile with a differential gene expression profile to create an integrated TREG signature of TF regulatory activity. We use TREG signatures to detect faint signals of ERα regulation in "messy" transcriptional signatures, and demonstrate how such analysis can yield better drug candidates than simply correlating transcriptional signatures of the disease and of drug activity [28][29][30].
Results
An overview of the TREG framework is shown in Fig. 1. We start with "peaks" extracted from ChIP-seq binding data and a differential gene expression profile that eventually yield the integrated TREG signature of TF activity (Fig. 1A). The foundation of the TREG framework consists of two statistical mixture modules. The first mixture model describes the distribution of functional and non-functional "peaks" in ChIP-seq TF-gene binding data (Fig. 1B). Based on this model, we derive the TF-specific distance weights and construct gene-level binding scores (TREG binding scores) measuring the likelihood that a gene is regulated by the given TF. The second mixture model describes the distribution of TREG binding scores for regulated and non-regulated genes (Fig. 1C). This second model provides us with gene-level probabilities that genes are regulated by a specific TF based on the ChIP-seq data alone. The TREG binding scores and associated gene-level probabilities for all genes make up the TREG binding profile. The TREG binding profile and differential gene expression profiles are integrated using the Generalized Random Set (GRS) methodology [31] to produce an integrated genome-wide TREG signature of TF activity (Fig. 1D). The TREG signature of ERα is used to demonstrate involvement of its regulatory activity in complex transcriptional profiles and to mine Connectivity Map data for inhibitors of its activity.
The first mixture module: Deriving gene-specific TREG scores (Fig. 1B)

We assume that observed peaks consist of two populations: functional peaks, which are more likely to occur closer to the TSS and whose distances to the TSS are distributed as an exponential random variable; and non-functional peaks, which occur randomly throughout the 2 million base pair genomic region centered around the TSS and whose distances to the TSS are distributed as a uniform random variable. The distances to the TSS of all peaks are then distributed as a mixture of the exponential and the uniform distribution (Fig. 1, Eq. 1), where π is the proportion of functional peaks among all observed peaks. We define the TREG binding score for gene g as the logarithm of the weighted average of peak intensities, using the probability of each peak belonging to the population of "functional peaks" as the weight (Fig. 1, Eq. 3).
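To make the score concrete, here is a minimal numerical sketch of this computation in R. It assumes the mixture parameters have already been fitted, treats peak-to-TSS distances as absolute values (so the uniform density runs over (0, 10^6)), and all function names, peak heights, and parameter values are illustrative assumptions, not the treg package API.

```r
# Sketch of Eq. 3: given fitted mixture parameters pi_hat and lambda_hat,
# compute one gene's TREG binding score from its peak intensities and
# peak-to-TSS distances. All names and values are illustrative.
treg_score <- function(height, dist_tss, pi_hat, lambda_hat) {
  a <- abs(dist_tss)
  f_e <- dexp(a, rate = lambda_hat)   # functional (exponential) component
  f_u <- 1 / 1e6                      # non-functional (uniform) component
  w <- pi_hat * f_e / (pi_hat * f_e + (1 - pi_hat) * f_u)  # Eq. 2 weights
  log(sum(w * height) / sum(w))       # Eq. 3: log of weighted mean intensity
}

# Three peaks for a hypothetical gene: one near the TSS, two far away
treg_score(height = c(40, 12, 7),
           dist_tss = c(-800, 15000, 420000),
           pi_hat = 0.3, lambda_hat = 1 / 20000)
```

The distant peak receives a near-zero weight, so the score is dominated by peaks that are plausibly functional, without any hard distance cut-off.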
TREG binding scores provide an effective gene-level measure of TF regulation
We assessed the effectiveness of the TREG binding score by comparison to a simple scoring method based on the maximum peak intensity (MPI) within a window of specific size around the TSS. The two types of scores were evaluated by comparing the enrichment of genes with high evidence of TF binding among genes differentially expressed in appropriately matched experiments. For the gene expression data, we identified genes differentially expressed (two-tailed FDR < 0.01) 24 h after treating the MCF-7 cell line with estradiol (E2), with and without pre-treating the cell line with cycloheximide (CHX) [27]. CHX is an inhibitor of protein biosynthesis in eukaryotic organisms. Treatment with E2 after pre-treatment with CHX (E2+CHX) resulted in differential expression of genes presumed to be directly regulated by ERα, whereas after E2 treatment without CHX, the majority of differentially expressed genes were secondary target genes, functionally enriched for cell-cycle genes and reflective of the rapid proliferation resulting from the E2 treatment [27]. For the TF binding data, we used ChIP-seq analysis of the key proliferation regulator E2f1 in growing mouse embryonic stem (ES) cells [32], and ERα binding 1 h after treating MCF-7 cells with estradiol [10]. ChIP-seq data at 1 h after treatment with E2 is correlated with gene expression changes 24 h after treatment because of the expected time delay between ERα binding to a gene promoter and the observable change in the gene's expression level.
Among differentially expressed genes, enrichment of genes with high TREG binding scores was statistically significant for both E2f1 and ERα in both experiments (Table 1). Fig. 2 shows the relative levels of enrichment for the maximum peak intensity (MPI) score over a range of window sizes around TSSs, in comparison to the TREG binding score. Simple MPI scores never attain the level of statistical significance of enrichment attained by TREG binding scores. Furthermore, the performance of the simple score is heavily dependent on the specific size of the window used, and, expectedly, the optimal windows are TF-specific. The optimal window sizes for E2f1 and ERα are around 1 kb and 50 kb respectively, with the maximum statistical significance of enrichment attained by the simple score reaching 42% and 80% of the TREG binding score significance, respectively. Similar results were obtained using unweighted-sum and linear-weighted-sum summaries of TF binding peak intensities (supplementary results in Text S1 and Fig. S1). This indicates that TREG binding scores not only provide the best correlation with expression changes, but also obviate the need to know the right window size to use in deriving the summary measure of TF binding. The calculation of TREG binding scores does not include any free parameters that need to be specified in an ad-hoc fashion, such as the length of the genomic region around the TSS for simple scores, or the ad-hoc weighting parameters used in similar scores before [9,15].
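For reference, the MPI baseline from this comparison can be sketched in a few lines; the window size is exactly the free tuning parameter that the TREG score dispenses with. Names and values here are illustrative assumptions:

```r
# MPI baseline: maximum peak intensity within a fixed window around the TSS.
mpi_score <- function(height, dist_tss, window = 50e3) {
  inside <- abs(dist_tss) <= window
  if (!any(inside)) return(0)   # no peak in the window
  max(height[inside])
}

mpi_score(height = c(40, 12, 7), dist_tss = c(-800, 15000, 420000))  # 40
```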
The second mixture module: Gene-level ChIP-seq binding probability mixture model

Having constructed gene-specific TREG binding scores, our goal was to estimate gene-level probabilities of "functional interaction" between a TF and a gene based on these scores. The histogram of the TREG binding scores (Fig. 1C) clearly shows two populations of scores: one population with the majority of TREG binding scores close to zero, representing genes with a low likelihood of functional TF-gene interaction, and the other population with TREG binding scores distributed in a bell-shaped form around a mean slightly higher than 2, representing functional interactions. Therefore, we assume that TREG binding scores come from two populations: scores significantly greater than zero, representing functional TF-gene interactions, which are distributed as a Normal random variable; and scores close to zero, representing non-functional interactions, which are distributed as an exponential random variable. Assuming that the proportion of TREG binding scores corresponding to functional interactions is γ, the distribution of all TREG binding scores is a mixture of Normal and exponential probability distribution functions (Fig. 1, Eq. 4). The probability that a TREG binding score for gene g (S_g) is functional is defined as the probability of S_g belonging to the Normal component (Fig. 1, Eq. 5). The set of TREG binding scores and associated probabilities of functional TF-gene interaction for all genes in the genome, (S_g, p_g), g = 1,…,G, is the TREG binding profile.

Figure 1. An overview of the TREG framework and statistical models for constructing TREG signatures. A) "Peaks" extracted from ChIP-seq binding data and a differential gene expression profile eventually yield the integrated TREG signature of TF activity. B) The exponential-uniform mixture module describing the distribution of functional and non-functional "peaks" in ChIP-seq TF-gene binding data. C) The exponential-Normal mixture module describing the distribution of TREG binding scores for regulated and non-regulated genes. D) The Generalized Random Set (GRS) methodology for integrating ChIP-seq and differential gene expression data. doi:10.1371/journal.pcbi.1003198.g001
Integrating the TREG binding profile and differential gene expression to identify regulated genes

Identifying genes that both have a high probability of "functional" TF binding and are differentially expressed is complicated by the need to set arbitrary thresholds for statistical significance. We have previously developed a method, based on the Generalized Random Set (GRS) analysis, that obviates the need for such thresholds when assessing the concordance of two differential gene expression profiles [31]. Here we apply the GRS framework to assess the concordance between the TREG binding profile and the differential gene expression profile (Fig. 1, Eq. 6) (details in Text S1), and to identify genes with statistically significant concordance. The results of this analysis (Table 2) generally followed the results based on designating differentially expressed genes (Table 1), with the levels of statistical significance being orders of magnitude higher in the GRS concordance analysis. We demonstrate that GRS produces the expected distribution of p-values under the null hypothesis by systematically examining empirical cumulative distribution functions (ECDFs) of p-values after randomly permuting gene labels in the TREG binding profile before GRS analysis (supplementary results in Text S1, Fig. S2). We also compared the results of the GRS analysis with a thresholding approach based on the TREG binding probability, in which a gene was placed in the "regulated" group if the corresponding TREG probability (p_g) was greater than 0.95. Results were similar to the GRS analysis (supplemental results, Text S1). However, we also show that in situations where the binding signal is relatively "faint", GRS is likely to outperform the thresholding approach (Text S1, Fig. S3). Since these are the situations in which the method of concordance analysis makes a difference, GRS is still likely the better default choice for performing the concordance analysis.
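The exact GRS statistic is defined in ref. [31] and Text S1. Purely as intuition for the approach, the hedged sketch below shows a generic permutation-based concordance test of the same flavor: score the agreement between a binding profile and a differential-expression profile, then compare it to a null distribution built by permuting gene labels, as in the Fig. S2 check. The statistic and data are invented stand-ins, not GRS itself.

```r
# Schematic stand-in for a concordance test (NOT the actual GRS statistic).
concordance_p <- function(p_bind, de_evidence, n_perm = 1000) {
  obs <- sum(p_bind * de_evidence)   # toy concordance statistic
  null <- replicate(n_perm, sum(sample(p_bind) * de_evidence))
  (1 + sum(null >= obs)) / (1 + n_perm)   # permutation p-value
}

set.seed(1)
p_bind <- runif(5000)               # stand-in TREG binding probabilities
de_evidence <- -log10(runif(5000))  # stand-in per-gene evidence of DE
concordance_p(p_bind, de_evidence)  # ~0.5 here: the inputs are unrelated
```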
Finally, we integrate TREG binding profiles with differential gene expression profiles at the gene level, through the contribution of an individual gene to the overall concordance in the GRS concordance statistic, e_g (Fig. 1, Eq. 7). The statistical significance of the gene-level GRS statistics is assessed by associated resampling-based p-values (see Methods), which define gene-specific TREG concordance scores (t_g, Fig. 1, Eq. 8). The vector of such scores for all genes represents the TREG signature of TF activity (Fig. 1, Eq. 9).
The power of TREG binding profiles and TREG signatures in identifying TF targets
We examined the ability of TREG binding profiles and TREG signatures to identify genes regulated by ERα and E2f1. Fig. 3A contrasts the statistical significance of the enrichment of computationally predicted ERα targets from the MSigDB database [33] based on the E2+CHX differential gene expression profile (Diff Exp), ERα TREG binding scores (TREG bind), and the integrated TREG signature (TREG sig). In this setting, MSigDB targets provide a "noisy" gold standard, since a perfect gold standard does not exist. While all three data types provided statistically significant enrichment, the integrated TREG signature showed the highest statistical significance of enrichment. The overall relationship between the TREG binding scores, the statistical significance of differential gene expression (−log10(p-value), E2+CHX), and the statistical significance of the TREG concordance scores (ERα TREG score (s_g)) is shown in Fig. 3B. "Statistically significant" (p-value < 0.001) TREG concordance scores (red dots in Fig. 3B) required both a high TREG binding score and a high statistical significance of differential expression. Similar analysis of the E2f1 TREG signature showed a similar pattern (Fig. 3C and D), although the overall statistical significance of enrichment was much higher for all three data types. These results show that integrated TREG signatures are more informative of regulatory TF-gene relationships than expression or TF binding data alone. TREG binding scores, gene-specific concordance statistics, and TREG concordance scores for all genes are given in Table S1.

Figure 2. Relative statistical significance of the association between ChIP-seq and differential gene expression data for different window sizes. The ratio of −log10(p-value of enrichment) of differentially expressed genes (FDR < 0.1) among genes with high MPI scores to −log10(p-value of enrichment) of differentially expressed genes among genes with high TREG binding scores. The ratios relating E2f1 ChIP-seq data to the E2 differential gene expression profile are represented by the blue line; the ratios relating ERα ChIP-seq data to the E2+CHX differential gene expression profile are represented by the red line. Ratios smaller than 1 indicate higher significance of enrichment when using TREG scores, as opposed to the maximum peak height within the given window. doi:10.1371/journal.pcbi.1003198.g002

Table 1. Statistical significance of LRpath enrichment of genes with high TREG binding scores for E2f1 and ERα among differentially expressed genes (two-tailed FDR < 0.01) for the E2 and E2+CHX differential gene expression profiles.
Functional analysis of ERα and E2F1 TREG signatures
We further examined the ERα and E2F1 TREG signatures to determine the molecular pathways and biological processes regulated by these two TFs and to evaluate the benefits of such integrated signatures. We assessed the enrichment of genes with high TREG concordance scores in lists of genes related to the prototypical functions of ERα and E2F1. For the ERα signature, the list consisted of genes associated with the Gene Ontology term "cellular response to estrogen stimulus", and for E2F1, with the term "regulation of mitotic cell cycle". In both cases, integrated TREG signatures showed significantly higher statistical significance of enrichment than either TREG binding scores or differential gene expression (Fig. 4). Unsupervised enrichment analysis of the two signatures revealed that the biological processes specifically associated with the ERα signature were related to the development of the mammary gland (Fig. 5A). Moreover, significant associations between ERα-regulated genes and some key developmental processes could not have been established using either TF binding or gene expression alone. Likewise, processes related to the mitotic cell cycle were most highly associated with the E2f1 signature (Fig. 5B). Results of the enrichment analysis for all GO terms are provided in Table S2.
TREG methodology applied to ENCODE TF binding data
To assess the reproducibility and specificity of our results, we constructed TREG binding profiles for all 494 TF ChIP-seq datasets in the Genome Browser ENCODE tables [14,34]. The two gene expression profiles in our analysis (E2+CHX and E2) were then systematically compared with the 494 ENCODE TREG binding profiles. The top 10 most concordant profiles are shown in Fig. 6. The results show that the ENCODE ERα binding profiles correlate equally well with the E2+CHX profile as our original TREG profile did (Fig. 6A). Furthermore, all five ENCODE ERα binding profiles correlated better with the E2+CHX profile than any other ENCODE profile. Similarly, the ENCODE binding profiles most concordant with the E2 profile (Fig. 6B) included E2F4, E2F1, and MYC, which are all known to be important cell cycle regulators. The statistical significance of the concordance was again similar to the levels we observed with the E2f1 binding profile in mouse embryonic stem cells. These results indicate the reproducibility of TREG results across different ChIP-seq datasets and their ability to identify key transcriptional regulators for a given profile. Results of the concordance analysis for all ENCODE TREG profiles are in Table S3.

Figure 3. Pinpointing regulated genes by integrating binding and differential gene expression data. A) Statistical significance of enrichment of computationally predicted ERα targets from the MSigDB database using the E2+CHX differential gene expression profile (Diff Exp), ERα TREG binding scores (TREG bind), and the TREG signature integrating expression and ChIP-seq data (TREG sig) (the red line indicates a p-value of 0.05). B) Scatter plot of TREG binding scores against the statistical significance of differential gene expression. The red points indicate genes with statistically significant TREG concordance scores (t_g > −log10(0.01)); the red points were overlaid on the black points, so all significant points are visible. C) Statistical significance of enrichment of computationally predicted E2F1 targets from the MSigDB database using the E2 differential gene expression profile (Diff Exp), E2f1 TREG binding scores (TREG bind), and the TREG signature integrating expression and ChIP-seq data (TREG sig) (the red line indicates a p-value of 0.05). D) Scatter plot of TREG binding scores against the statistical significance of differential gene expression, as in B. doi:10.1371/journal.pcbi.1003198.g003
Finding evidence of ERα activity in complex transcriptional profiles
The ultimate goal of the TREG framework is to facilitate the identification and characterization of signatures of TFs regulating disease-related differential gene expression profiles (DRGEP). Here we demonstrate the power of TREG signatures and TREG binding scores in elucidating faint signals of ERα activity in two complex DRGEPs: the response of the MCF-7 cell line 24 hours after treatment with E2 [27], and the differences between ER− and ER+ breast tumors [35]. In both of these DRGEPs, the signal of direct ERα regulation is "drowned out" by the strong secondary proliferation-related transcriptional signature, and the standard enrichment analysis of computationally predicted ERα targets in MSigDB fails to find evidence of ERα regulation (Fig. 7). However, the GRS concordance analyses with both TREG binding scores and TREG signatures are highly statistically significant, and the TREG signature, which integrates binding and transcriptional evidence, again shows the highest statistical significance of concordance (Fig. 7). Additional discussion of these results is provided in the supplementary results (Text S1).
ERα activity in perturbation signatures and disease-related gene expression profiles
We used the ERα TREG signature to mine a collection of differential gene expression profiles in GEO datasets (GDS signatures), and differential gene expression profiles of small-molecule drug perturbations (CMAP signatures) [29], for evidence of ERα regulatory activity. Fig. 8 shows the differential gene expression levels of the top 10 GEO profiles and the top 10 drug perturbations, based on the statistical significance of the concordance between the ERα TREG signature and each differential gene expression profile. In both situations the top transcriptional profiles are obviously related to ERα activity, demonstrating the precision of the TREG signature in this setting. Additional results related more specifically to disease-associated GEO profiles are given in the supplementary results (Text S1).

Figure 4. Function of regulated genes. Enrichment of ERα and E2F1 targets among genes associated with two prototypical functional categories related to ERα (response to estrogen stimulus) and E2F1 (regulation of mitotic cell cycle) function. A) Statistical significance of enrichment of computationally predicted genes associated with "response to estrogen stimulus" using the E2+CHX differential gene expression profile (Diff Exp), ERα TREG binding scores (TREG bind), and the TREG signature integrating expression and ChIP-seq data (TREG sig) (the red line indicates a p-value of 0.05). B) Statistical significance of enrichment of computationally predicted genes associated with "regulation of mitotic cell cycle" using the E2 differential gene expression profile (Diff Exp), E2f1 TREG binding scores (TREG bind), and the TREG signature integrating expression and ChIP-seq data (TREG sig) (the red line indicates a p-value of 0.05). doi:10.1371/journal.pcbi.1003198.g004
Using TREG signatures to connect small molecules, transcription factors, and disease. Using the DRGEPs of up-regulated genes in ER+ and ER− breast tumors in comparison to normal mammary epithelium, we mined the Connectivity Map dataset [29] for putative drugs that could inhibit these signatures. The DRGEPs for ER+ and ER− tumors were created by differential expression analysis between ER+ tumors and normal breast tissue (ER+ DRGEP) or ER− tumors and normal breast tissue (ER− DRGEP) using the public-domain microarray dataset (GSE2740) [36]. We contrasted two distinct strategies. The first approach is the classical CMAP approach of searching for concordance between genes up-regulated in DRGEPs and down-regulated in the drug signature [28][29][30]. The second approach relied on first elucidating the role of ERα in producing the DRGEPs of ER+ and ER− breast cancers, and then searching for drugs that can inhibit the ERα signature.
As expected, concordance analysis between the ER+ and ER− breast cancer DRGEPs and the TREG signatures of ERα and E2F1 activity demonstrated the involvement of E2F1 regulation in both DRGEPs (p-value = 7.0×10⁻¹⁴ for ER+ and p-value = 1.3×10⁻⁷² for ER−). This indicates increased proliferation in both types of breast cancer in comparison to normal tissue. Expectedly, the involvement of ERα regulation was evident only in the ER+ DRGEP (p-value = 0.0007), but not in the ER− DRGEP (p-value = 0.14), indicating that the increased proliferation is driven by ERα activity only in the ER+ breast cancers. Tamoxifen, raloxifene, and fulvestrant were among the top five candidate drugs implicated by their ability to inhibit ERα activity through the concordance analysis between the ERα signature and CMAP data (Table 3). Tamoxifen and raloxifene are modulators, and fulvestrant is an antagonist, of ERα. All three are used in treating ER+ cancers [44]. However, the direct concordance analysis between their transcriptional signatures and the ER+ DRGEP would not implicate them as potential treatments. This is most likely due to the subtle ERα signature being overwhelmed by other, stronger signals such as the proliferation signature of secondary ERα targets [27].
It is critical to note that alternative approaches to elucidating the role of ERα in producing the ER+ breast cancer DRGEP would not have been successful. The standard enrichment analysis against computationally predicted ERα targets again fails to provide any evidence of ERα involvement (p-value = 0.7). Furthermore, even the concordance analysis with the TREG binding profile fails to provide a statistically significant association in this case (p-value = 0.1). These results demonstrate the sensitivity of TREG signatures in pinpointing important regulatory mechanisms that can then be exploited in identifying the best drug candidates. In the case at hand, this strategy provided an obvious advantage over the direct strategy of correlating DRGEPs with drug transcriptional signatures [28][29][30] to search for drugs that inhibit the global DRGEPs. The improvement in precision resulting from the use of integrated TREG signatures, over alternative enrichment strategies that use computationally predicted targets or ChIP-seq data alone, can make the critical difference between the failure and the success of such an analysis.
Discussion
The problem of identifying functional TF targets that regulate gene expression in a specific biological context requires joint consideration of both TF DNA-binding data and the target gene's expression changes. We described a statistical framework for quantifying the evidence of TF-gene interaction from ChIP-seq data and integrating it with appropriately matched gene expression data to construct genome-wide signatures of TF activity.
Two main findings of our study are that 1) TREG binding scores derived from ChIP-seq data alone are more informative than simple alternatives that can be used to summarize ChIP-seq data; and 2) TREG signatures that integrate the binding and gene expression data are more sensitive in detecting evidence of TF regulatory activity than available alternatives. We show that this advantage of TREG signatures can make the difference between being able and not being able to infer TF regulatory activity in complex transcriptional profiles. This increased sensitivity also proved critical in establishing connections between disease and drug signatures that would not be possible using currently available strategies.
Identifying the role of specific TFs in producing disease-related transcriptional profiles is of vital importance for understanding the molecular mechanisms underlying a disease phenotype. Although it is possible to obtain direct measurements of TF activity in disease samples [45], such ChIP-seq profiling is technically challenging, and systematic profiling of many different TFs is not feasible. Therefore, the ability to infer the role of a TF from transcriptional profiles remains challenging. The most common strategy for implicating TF involvement is computational analysis of the genomic regulatory regions of differentially expressed genes [24][25][26][27], or searching for enrichment of known targets among differentially expressed genes [46]. Here we present an alternative strategy relying on direct concordance analysis between TREG signatures of TF activity and disease-related transcriptional profiles. When searching for evidence of regulation by a TF with functional binding sites in distant enhancers, such as ERα, within "messy" transcriptional signatures resulting from the activity of multiple regulatory programs, our approach dramatically improves the precision of the analysis.
Our results indicate that TREG signatures derived from in-vitro experiments (ERα; MCF-7 cells), and even from a different organism (E2f1; mouse), provide effective means for analyzing transcriptional profiles derived from human tissue samples. This suggests that TF binding profiles coming from any biological system in which the TF shows signs of activity might be sufficiently informative to construct TREG signatures. In this context, the recently released ENCODE project data [14,34] may be turned into a powerful tool for detecting TF activity. As a step in this direction, we have created 494 TREG binding profiles using the ENCODE ChIP-seq data and made them available from the support web site (http://GenomicsPortals.org). Complementary gene expression data generated by directly perturbing specific TFs, such as shRNA knock-down and overexpression experiments, can be used to construct TREG signatures. For example, the transcriptional signatures of such systematic perturbations being generated by the NIH LINCS project (http://LincsProject.org) could provide complementary transcriptional profiles for the ENCODE ChIP-seq data.
Our methods are complementary to the methods used to analyze the recently released ENCODE project data [14,34]. For some experimental conditions, the ENCODE project provides additional data types that can be used in assessing the functionality of TF binding peaks, such as the distribution of specific epigenetic histone modifications. For a discussion of how this additional information could be incorporated within the TREG methodology, see the supplemental discussion (Text S1).
Up-regulated expression of proliferation genes is a hallmark of neoplastic transformation and progression in a whole array of different human cancers [47]. While the core transcriptional signature of proliferation is recognizable in a wide range of biological systems and diseases, the events and pathways that drive the transcriptional program of proliferation vary widely. Increased expression of proliferation-associated genes has been associated with poor outcomes in breast cancer patients [48][49][50][51][52][53][54]. However, the driver mechanisms in many aggressive cancer types are poorly understood. Inhibiting known driver pathways, such as ERα signaling in breast cancer, often leads to treatment-resistant tumors due to the activation of alternative, poorly understood driver pathways [55,56]. Using the signatures of such "driver events/pathways", we can identify candidate drugs capable of inhibiting them. In our analysis of ERα activity in ER+ breast cancers we showed that such an approach can highlight connections between disease and drug candidates that would be missed by simply correlating disease and drug transcriptional signatures [28][29][30].
Methods
Mixture model for summarizing ChIP-seq data and deriving gene-specific TREG scores (Fig. 1B)

We assume that observed peaks consist of two populations: functional peaks, which are more likely to occur closer to the TSS and whose distances to the TSS are distributed as an exponential random variable with parameter λ; and non-functional peaks, which occur randomly throughout the 2 million base pair genomic region centered around the TSS and whose distances to the TSS are distributed as a uniform random variable. The distances to the TSS of all peaks are then distributed as a mixture of the exponential and the uniform distribution (Fig. 1, Eq. 1),

f(a) = π f_E(a | λ) + (1 − π) f_U(a),

where π is the proportion of functional peaks among all observed peaks, a is the distance of a peak to the gene's TSS, f_E(· | λ) is the probability density function (pdf) of the exponential random variable (rv) with parameter λ, and f_U(·) is the pdf of a uniform rv on the interval (−10⁶, 10⁶). We use the standard Expectation-Maximization (EM) algorithm [57] to estimate the parameters of this mixture model (π, λ) for each TF. Given the estimates (π̂, λ̂), we calculate the posterior probability for peak i with distance a_i from a TSS to belong to the population of "functional peaks" (Fig. 1, Eq. 2). Suppose now that for a gene g, n_g is the number of peaks within the 2 Mb window around its TSS (1 Mb upstream to 1 Mb downstream), h_gk is the peak intensity (i.e., the maximum number of overlapping reads over all positions within the peak), and a_gk is the distance to the TSS of the k-th such peak (k = 1,…,n_g). We define the TREG binding score for gene g as the logarithm of the weighted average of peak intensities, using the probability of the peak belonging to the population of "functional peaks" (W_gk) as the weight (Fig. 1, Eq. 3):

S_g = log( Σ_k W_gk h_gk / Σ_k W_gk ).
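The following compact EM sketch illustrates how (π, λ) can be estimated from absolute peak-to-TSS distances under the assumptions above. The variable names, starting values, and stopping rule are our illustrative choices, not the treg package's; see Text S1 for the authors' derivation.

```r
# EM sketch for the exponential-uniform mixture of Eq. 1 (illustrative).
em_exp_unif <- function(dist_tss, n_iter = 100) {
  a <- abs(dist_tss)
  f_u <- 1 / 1e6                      # uniform density over (0, 1e6)
  pi_hat <- 0.5
  lambda_hat <- 1 / mean(a)
  for (i in seq_len(n_iter)) {
    f_e <- dexp(a, rate = lambda_hat)
    w <- pi_hat * f_e / (pi_hat * f_e + (1 - pi_hat) * f_u)  # E-step (Eq. 2)
    pi_hat <- mean(w)                 # M-step: mixing proportion
    lambda_hat <- sum(w) / sum(w * a) # M-step: weighted exponential MLE
  }
  list(pi = pi_hat, lambda = lambda_hat, w = w)
}

# Recovering parameters from simulated peaks: 30% functional, 70% background
set.seed(7)
a_sim <- c(rexp(300, rate = 1 / 5000), runif(700, 0, 1e6))
fit <- em_exp_unif(a_sim)
round(fit$pi, 2)   # close to 0.3
```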
Gene-level ChIP-seq binding probability mixture model

We assume that TREG binding scores come from two populations: scores significantly greater than zero, representing functional TF-gene interactions, which are distributed as a Normal random variable; and scores close to zero, representing non-functional interactions, which are distributed as an exponential random variable (histogram in Fig. 1C). Assuming that the proportion of TREG binding scores corresponding to functional interactions is γ, the distribution of all TREG binding scores is a mixture of Normal and exponential probability distribution functions (Fig. 1, Eq. 4),

f(S) = γ f_N(S | μ, σ²) + (1 − γ) f_E(S | ψ),

where S is the TREG binding score, f_E(· | ψ) is the pdf of the exponential random variable with parameter ψ, and f_N(· | μ, σ²) is the pdf of a Normal random variable with mean μ and variance σ². We again use the standard EM algorithm to estimate the parameters of this mixture model (γ, ψ, μ, σ²) for each TF. Given the estimates (γ̂, ψ̂, μ̂, σ̂), the probability of a TREG binding score for gene g (S_g) being functional is defined as the probability of S_g belonging to the Normal component (Fig. 1, Eq. 5). The set of TREG binding scores and associated probabilities of functional TF-gene interaction for all genes in the genome, (S_g, p_g), g = 1,…,G, is the TREG binding profile. Additional discussion of the motivations for the choice of specific distributions is provided in the supplemental methods (Text S1).
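Given fitted parameters, Eq. 5 reduces to a posterior-weight computation, sketched below with invented parameter values; taking ψ as the mean of the exponential component is our assumption about its parameterization.

```r
# Sketch of Eq. 5: posterior probability that a score is functional.
p_functional <- function(s, gamma, psi, mu, sigma) {
  f_n <- dnorm(s, mean = mu, sd = sigma)  # functional (Normal) component
  f_e <- dexp(s, rate = 1 / psi)          # non-functional, concentrated near 0
  gamma * f_n / (gamma * f_n + (1 - gamma) * f_e)
}

p_functional(s = c(0.1, 1.0, 2.3),
             gamma = 0.2, psi = 0.3, mu = 2.2, sigma = 0.5)
# scores near zero get probabilities near 0; scores near mu approach 1
```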
EM algorithm
Details of the EM algorithm are provided in supplemental methods (Text S1).
LRpath enrichment analysis
The enrichment of genes with high TREG and MPI scores among differentially expressed genes (Table 1, Fig. 2) was assessed using the logistic-regression-based LRpath methodology [58]. LRpath does not require thresholding on binding scores but uses such scores as a continuous variable that explains the membership of a gene in the "differentially expressed" category. Similarly, LRpath was used to analyze the enrichment of differentially expressed genes among genes associated with GO terms in Figs. 5 and 6.
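The core of this logistic-regression idea can be sketched with base-R glm on simulated data; this illustrates the principle only and is not the LRpath implementation.

```r
# Model DE-set membership as a function of the continuous binding score,
# so no threshold on the score is needed. Data are simulated stand-ins.
set.seed(2)
score <- rexp(2000)                           # stand-in TREG binding scores
de <- rbinom(2000, 1, plogis(-2 + score))     # DE status depends on score

fit <- glm(de ~ score, family = binomial)
summary(fit)$coefficients["score", "Pr(>|z|)"] # enrichment p-value
```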
Integrating the TREG binding profile and differential gene expression to identify regulated genes

When performing the concordance analysis between TREG binding profiles and the two differential gene expression profiles of interest (E2+CHX and E2) (Table 2), and when constructing the TREG signatures in Figs. 4, 5, and 6, we used two-tailed p-values, not distinguishing between induction and repression activity. When comparing TREG signatures with other DRGEPs (Table 3, Figs. 7 and 8), we accounted for the directionality of gene expression changes by using single-tailed p-values for an increase in gene expression. This is necessary to account for the directionality of the concordance between the TREG signature and the DRGEPs. The ERα TREG signature for this analysis was constructed by the GRS concordance analysis (Fig. 1D) between the ERα TREG binding profile and the single-tailed p-values for statistically significant up-regulation of gene expression after E2+CHX treatment of the MCF-7 cell line. The genes used for plotting the heatmaps in Fig. 8 were then selected based on the gene-specific p-values of concordance (p-value(e_g), Fig. 1D) being < 0.001 (Table S5). The concordance between this ERα TREG signature and the GEO/CMAP transcriptional signatures was again assessed using the GRS analysis.
Datasets used in the analyses
The description, location and processing of the ChIP-seq and gene expression datasets are provided in supplemental methods (Text S1).
Computational methods
All computational methods are implemented in the R package treg, which can be downloaded from our web site (http://GenomicsPortals.org). The package also contains processed ChIP-seq data for ERα [10], E2f1 and 15 other transcription factors [32], as well as TREG signatures for ERα and E2f1, and transcriptional signatures derived from GEO GDS datasets and CMAP drug signatures. We have previously described the derivation of the CMAP signatures [31]. All functional enrichment analyses were performed using the LRpath methodology [58] as implemented in the R package CLEAN [59].

Supporting Information

Figure S1. Relative statistical significance of the association between ChIP-seq and differential gene expression data for different window sizes and for different summaries of peak intensities. The ratio of −log10(p-value of enrichment) of differentially expressed genes (FDR < 0.1) among genes with high simple scores (MPI, UWS, LWS) to −log10(p-value of enrichment) of differentially expressed genes among genes with high TREG binding scores. Red dots correspond to MPI scores, blue dots to UWS scores, and the horizontal blue line corresponds to the significance attained by the LWS score. A) Ratios for E2f1 ChIP-seq data and the E2 differential gene expression profile. B) Ratios for ERα ChIP-seq data and the E2+CHX differential gene expression profile. Ratios smaller than 1 indicate higher significance of enrichment when using TREG scores. (TIF)

Figure S2. Empirical cumulative distribution functions of p-values for GRS concordance analyses between differential gene expression profiles (E2+CHX and E2) and all 494 ENCODE TREG binding profiles. For each case, 1000 GRS analyses were performed by first randomly permuting gene labels in one of the profiles. All empirical cumulative distribution functions (ECDFs) of the resulting p-values lie at or below the 45-degree line for p-values < 0.5, indicating strict control of Type I error rates. For 11 ENCODE profiles the GRS was especially conservative (blue lines); examination of these 11 TREG profiles revealed an unusually small number of peaks, indicating that in such situations GRS is particularly conservative. A) ECDFs of p-values for the four GRS analyses described in Table 2. B) E2+CHX differential gene expression profile vs. ENCODE TREG binding profiles. C) E2 differential gene expression profile vs. ENCODE TREG binding profiles. (TIF)

Figure S3. GRS vs. simple thresholding to assess the correlation between TREG binding scores and differential gene expression profiles. To compare the ability of GRS and simple thresholding to detect concordance between TREG binding profiles and differential gene expression signatures, we systematically removed the genes with the strongest TREG binding scores from the E2f1 binding profile and the gene expression profiles, and calculated p-values of the GRS and thresholding analyses on the reduced datasets. The x-axis represents the number of remaining genes in the "regulated" group. Red dots represent the statistical significance of the GRS analysis and blue dots the statistical significance of the thresholding analysis. These results indicate that the GRS analysis will likely have higher sensitivity when the "concordance signal" between binding and expression data is low, that is, when few genes (<1,000) have TREG binding probability > 0.95, while enrichment analysis of "regulated" genes will provide higher statistical significance when the signal is strong (>1,000 genes with TREG probability > 0.95), as was the case with E2f1. These results indicate that it is rational to use GRS as the default method, since when the signal is strong the outcome will not change depending on which method is used, and when the signal is weak GRS has a higher chance of detecting it. (TIF)

Figure S4. Proportion of ENCODE TREG profiles enriched for genes associated with the Cell Cycle GO term at a specific statistical significance cut-off (x-axis). For TREG profiles (TREG), the analysis was performed using logistic regression modeling of the probability of membership in the Cell Cycle gene list based on the TREG scores, as implemented in the LRpath methodology. For the binding peaks data (Peak), we first established the list of genes with a significant peak within a (−10 kb, +10 kb) window around the gene's TSS, and then used Fisher's exact test to calculate the statistical significance of the overlap with the Cell Cycle gene list. While this approach seems somewhat inefficient, it still recapitulates the conclusion of the TREG analysis that a large proportion of ENCODE binding profiles are enriched for Cell Cycle genes. (TIF)

Figure S5. Number of peaks in ENCODE profiles with unusually conservative GRS analysis (blue lines in Fig. S1). (TIF)
Table S1. ERα and E2F1 TREG binding scores, gene-specific concordance statistics, and TREG concordance scores for all genes. (XLSX)

Table S4. Results of the concordance analysis between the TREG ERα up-regulation signature and disease-associated differential gene expression profiles. (XLSX)

Table S5. The genes with gene-specific concordance in the TREG ERα up-regulation signature (p-value(e_g) < 0.001), used for plotting the heatmaps in Fig. 8. (XLSX)
Text S1. Supplemental results, discussion, and methods, including the statistical properties of the GRS methodology and a detailed discussion of the EM algorithm used to estimate the parameters of the mixture models. (DOCX)
Acute pulmonary thromboembolism caused by factor V Leiden mutation in South Korea
Abstract Rationale: Although Factor V Leiden (FVL) mutation is a major cause of inherited thrombophilia in Western populations, the mutation is extremely rare in Asia. Patient concerns: Here we report the case of a 28-year-old Korean woman admitted to our hospital with extensive pulmonary embolism. Diagnosis: She was found to be heterozygous for the FVL mutation upon evaluation, and screening of asymptomatic family members also revealed a heterozygous FVL mutation in her mother. Interventions: Enoxaparin 1 mg/kg was initiated, followed by rivaroxaban 15 mg every 12 hours. Outcomes: The patient showed improvement in both subjective dyspnea and right ventricular dysfunction and was successfully discharged after five hospital days. Lessons: FVL mutation screening may be considered in Asian patients with thrombophilia of uncertain etiology in the future.
Introduction
Venous thromboembolism (VTE), which comprises pulmonary embolism (PE) and deep vein thrombosis (DVT), is an important public health concern with high morbidity and mortality requiring hospitalization. [1] It is therefore essential to undertake a comprehensive evaluation of the cause of VTE in patients with a first occurrence. Major risk factors for VTE include surgery, active cancer, immobility, trauma or fracture, pregnancy, and estrogen therapy. [2] Hereditary deficiencies of antithrombin (AT), protein C (PC), or protein S (PS), as well as Factor V Leiden (FVL) or prothrombin G20210A mutations, are well-established risk factors for thrombophilia of genetic origin. [3,4] FVL thrombophilia is the most prevalent genetic mechanism for inherited hypercoagulable states in the general Caucasian population, with 3% to 7% of the population harboring the mutation. [5] However, FVL thrombophilia has previously not been reported in East Asians. [6] We herein report the first case of inherited heterozygous FVL mutation in South Korea.
Case report
A 28-year-old Korean woman presented to the emergency department after a witnessed syncopal episode in July 2017. She had epigastric discomfort and had experienced dyspnea on exertion when climbing stairs 2 days before admission. On the day of admission, she had transiently lost consciousness while complaining of dizziness. Her previous medical history was unremarkable. History of smoking tobacco, alcohol, or drug abuse was denied. Upon further inquiry, she admitted to having taken oral contraceptive pills for 5 days before going on a trip.
On admission, she was alert and oriented but lethargic, with an initial blood pressure of 78/38 mm Hg, a pulse rate of 116/minute, and an oxygen saturation of 76% while breathing ambient air. Cardiac examination showed regular tachycardia with an accentuated S2 sound; wheezing and crackles were present in the lower lung fields. Abdominal findings were unremarkable. There was no leg edema. Her electrocardiogram revealed sinus tachycardia, normal axis, and normal intervals. Arterial blood gas analysis results were as follows: pH 7.46, pCO2 31.2 mm Hg, pO2 39.4 mm Hg, and bicarbonate 21.9 mmol/L. D-dimer was elevated to 10.1 µg/ml (reference range < 0.5 µg/ml). The complete blood count, electrolytes, glucose, prothrombin time, activated partial thromboplastin time, and renal- and liver-function tests were within normal range. A contrast-enhanced computed tomography (CT) scan was performed. There was near-total occlusion of both main pulmonary arteries and the upper, middle, and lower lobar pulmonary arteries, consistent with acute pulmonary thromboembolism, and deep vein thrombosis was seen at the left popliteal vein (Fig. 1). Echocardiography showed a dilated right ventricle with dysfunction, a D-shaped left ventricle, and inferior vena cava dilatation without plethora.
The patient was transferred to the intensive care unit (ICU) for close monitoring. She was hemodynamically stabilized after aggressive fluid resuscitation, without need for thrombolysis or embolectomy, and supplemental oxygen was discontinued after several days. Anticoagulation treatment with low-molecular-weight heparin was initiated, and she was successfully discharged on hospital day 5 after switching to a direct oral anticoagulant (DOAC), rivaroxaban 15 mg every 12 hours.
The thrombophilia study for the patient showed the following results: PC 103 IU/dl (reference range 70-130 IU/dl), PS 75 IU/dl (reference range 70-130 IU/dl), and AT III 95% (reference range 80-120%), all within normal range. Lupus anticoagulant, anticardiolipin antibodies, and prothrombin G20210A gene mutation were negative. Homocysteine (6.59 µmol/L; reference range 4-15 µmol/L) and factor VIII (164%; reference range 52-192%) levels were within normal range. Multiplex PCR was carried out using the SNaPshot system to screen for FVL. Screening showed a heterozygous mutation (1691G > A), confirming the diagnosis of massive VTE due to FVL mutation. The patient's family was counselled, and further investigations were done for the patient's father, mother, and brother. The patient's mother was found to have the FVL mutation, but the other family members were normal (Fig. 2).
Rivaroxaban was tapered to 20 mg once daily after an initial 21-day course of higher-dose therapy. Follow-up chest CT at 6 months of anticoagulation therapy showed no evidence of residual pulmonary thromboembolism. After 12 months of anticoagulation therapy, rivaroxaban was discontinued. The patient is under close surveillance and has not had a subsequent thromboembolic event since discontinuation of rivaroxaban.
Discussion
FVL mutation is a major inheritable risk factor for thrombophilia in Western populations, accounting for 20% to 25% of patients with VTE. [7,8] However, the incidence of FVL has not previously been reported in Korea, Japan, or China. [9,10] Because it had never been reported in Korea before, the Korean Society of Thrombosis and Hemostasis does not recommend routine screening for FVL in patients suspected of inherited thrombophilia. To our knowledge, this is the first report of FVL thrombophilia in South Korea. The risk of first-time VTE is 3- to 7-fold higher in heterozygous carriers of the FVL mutation and 50- to 100-fold higher in homozygous carriers compared with individuals without the mutation. [8,11,12] The clinical expression of FVL is influenced by coexisting risk factors. FVL interacts with external risk factors such as pregnancy, estrogen therapy, selective estrogen receptor modulators (SERMs), oral contraceptives, surgery, and travel. [13] In this case, the concomitant use of oral contraceptives was considered a provoking factor for VTE, as it is known that the risk of VTE can increase 30- to 60-fold in heterozygous FVL carriers taking oral contraceptives. [14] Although genetic tests for coexisting thrombophilic disorders involving PC, PS, and AT were not conducted, their levels were within the normal range, and the prothrombin G20210A gene mutation was negative.
Testing for thrombophilia in asymptomatic family members of patients with hereditary thrombophilia is not generally recommended. [15] However, testing may be beneficial in female family members of patients with hereditary thrombophilia if the results will influence choices regarding estrogen use or prophylaxis in the context of pregnancy. [15] In this case, the mother of our patient turned out to be an asymptomatic heterozygous carrier, and she may now avoid estrogen therapy or SERMs, which could increase her relative risk of thrombosis. Therefore, decisions regarding genetic screening tests should be made on an individual basis.
Anticoagulation is the mainstay of treatment of VTE. Historically, many patients with inherited thrombophilia received anticoagulant therapy with either a vitamin K antagonist (VKA) or a heparin product. [16] The 2016 American College of Chest Physicians (ACCP) guidelines suggest the use of DOACs over VKA for the treatment of VTE in patients without cancer. [17] However, the role of DOACs in the treatment of inherited thrombophilia remains unknown. [18] Wypasek et al reported 2 cases of VTE associated with PS deficiency that showed resistance to the anticoagulant effects of rivaroxaban. [19] On the other hand, Cook et al showed that a patient with ovarian vein thrombosis associated with FVL mutation was successfully treated with rivaroxaban. Furthermore, a sub-group analysis by Schulman et al demonstrated that dabigatran was non-inferior to warfarin in patients with inherited thrombophilia with respect to recurrent VTE or VTE-related death. [20] In our case, there was no residual VTE on repeat CT imaging 6 months after starting rivaroxaban treatment. Thus, DOACs may be suitable alternatives to VKA for patients with thrombophilia. Genetic analysis of the proband's maternal ancestry provided limited information but showed homogeneous ethnicity without involvement of Caucasian genes. Additional cases are required to elucidate whether FVL mutation could be regarded as one of the rare but plausible causes of genetic thrombophilia even in East Asian populations.

The authors declare they received no funding to support the elaboration of this manuscript. All patient clinical data and information provided in the manuscript are stored electronically and can be consulted if deemed necessary. This case report was conducted in accordance with ethical standards, according to the Declaration of Helsinki and national and international guidelines, and was approved by the authors' institutional review board. Written informed consent was obtained from the patient for the publication of the case details. A copy of the written consent is available for review by the Editor-in-Chief of this journal.
Lipoid Proteinosis Masquerading as Seborrheic Dermatitis
We report a case of lipoid proteinosis (LP) masquerading as seborrheic dermatitis. A 35-year-old female presented to our outpatient department with complaints of itching and crust-like formation on the eyelids for five years. She had been treated as a case of seborrheic dermatitis elsewhere and got intermittent relief of itching with medications. On removal of the crust, beaded lesions were found along the upper and lower eyelids involving the lash line and caruncle. The verrucous lesions were pathognomonic of LP: moniliform blepharosis. Systemic examination revealed hoarseness of voice, and hyperkeratosis was seen on the dorsum of both of her hands. She was advised lid hygiene, artificial tears, and antihistaminics for the itching. Skin emollients were also advised by dermatologists to decrease the chances of abrasion and bleeding from minor trauma. The danger signs as well as the neurological and psychiatric implications of the disease were explained to her in detail. Although ophthalmologists rarely encounter this disease, LP is a well-known entity to dermatologists and otorhinolaryngologists, and thus it may sometimes go undiagnosed. An ophthalmologist should be well aware of the life-threatening complications associated with LP, and patients should be sensitized regarding the chronic nature, supportive measures, and danger signs of the disease.
Introduction
Lipoid proteinosis (LP), also known as Urbach-Wiethe syndrome or hyalinosis cutis et mucosae, was first reported by the dermatologist and otolaryngologist duo Urbach and Wiethe in 1929 [1]. LP is a rare entity with autosomal recessive inheritance and multi-system involvement, causing dermatological, otorhinolaryngological, ocular, and neurological manifestations. The presenting symptom of LP is usually hoarseness of voice due to deposits in the vocal cords [2]. Respiratory insufficiency and upper respiratory tract infections are sequelae of involvement of the upper aerodigestive tract. The ophthalmological manifestation pathognomonic of LP is characteristic beaded papules along the upper and lower eyelids over the lash line, also known as moniliform blepharosis [3]. Skin deposition causes the skin to become thickened and yellowish. Friction areas such as the hands, elbows, knees, buttocks, and armpits bear the greatest brunt, with marked hyperkeratosis in response to minor trauma [4]. Although ophthalmologists rarely encounter this disease, LP is a well-known entity to dermatologists and otorhinolaryngologists, and thus it may sometimes go undiagnosed. LP often follows a stable, chronic course, and treatment is seldom indicated. Although the disease is not life-threatening, it deteriorates patients' quality of life, so ophthalmologists should be aware of its varied presentation and the multidisciplinary approach to this rare entity [5].
We describe a case of LP masquerading as seborrheic dermatitis and involving bilateral eyelids in a 35-year-old female who presented with the chief complaints of itching and crust-like formation on both eyelids for five years.
Case Presentation
A 35-year-old female presented to our outpatient department with complaints of itching and crust-like formation on the eyelids for five years. She had been treated as a case of seborrheic dermatitis elsewhere and got intermittent relief of itching with medications. On ophthalmic examination, BCVA (best-corrected visual acuity) was 20/20 in both eyes. Detailed examination revealed dandruff-like material along the upper and lower eyelids. On removing the crust, beaded lesions were found along the upper and lower eyelids involving the lash line and caruncle. The verrucous lesions were pathognomonic of LP: moniliform blepharosis (Figures 1, 2). The lesions were continuous and involved the entire lid margin and caruncle. Intraocular pressure (IOP) and fundus examination were within normal limits. Systemic examination revealed hoarseness of voice, and hyperkeratosis was seen on the dorsum of both of her hands (Figure 3).
FIGURE 3: Hyperkeratosis on the dorsum of both of her hands
For the hoarseness of voice, she was sent for an otorhinolaryngologist's opinion, and laryngoscopy revealed thickening and irregularities of the vocal cords' mucosa. She was advised lid hygiene, artificial tears, and antihistaminics for the itching. She was further counseled regarding the chronic nature of the disease, its varied ocular manifestations, and its multisystem involvement. Dermatologists also advised skin emollients to decrease the chances of abrasion and bleeding from minor trauma. The danger signs as well as the neurological and psychiatric implications of the disease were explained to her.
Discussion
LP is a rare entity, with fewer than 500 cases reported in the scientific literature. LP has been reported more frequently from certain areas, including Turkey, Iran, and South Africa. The pathogenesis of this disorder is a mutation of the extracellular-matrix protein-1 (ECM1) gene located on chromosome 1q21 [6]. Faulty ECM1 protein causes reduced binding between ECM1 and other proteins, leading to an unstable extracellular matrix and in turn stimulating neighboring cells to overproduce proteins and other materials. Overproduction of protein leads to deposition in tissues, which is characteristic of LP. Histologically, there are intercellular deposits of periodic acid-Schiff (PAS)-positive hyaline material in the skin, mucous membranes, and internal organs.
Hoarseness of voice is usually the presenting symptom; it may persist throughout life and over time can cause difficulty with, or loss of, speech. The tongue may be thick and shortened due to deposits. Thickening of the frenulum of the tongue leads to difficulty in extending the tongue. A decrease in taste buds leads to smoothing of the tongue in some cases.
Neurologic manifestations are due to the presence of deposits in the temporal lobe. The most common presentation is recurrent seizures (epilepsy) followed by headaches, agitated behaviors, paranoia, hallucinations, and memory loss.
The lesions of moniliform blepharosis, a pathognomonic feature of LP, can cause irritation or itching of the eyes, but vision is not hampered in most cases [3]. An ophthalmologist should know how to differentiate moniliform blepharosis from other eyelid deposition disorders, as their management and underlying systemic conditions are quite varied [7,8] (Table 1).
TABLE 1: Different deposition disorders associated with eyelid, common area of involvement, and treatment
Other ocular manifestations of LP involve the cornea, conjunctiva, sclera, trabecular meshwork, iris/pupil, lens and zonular fibers, retina, and nasolacrimal duct. Infiltration of the glands of Zeis and Moll and the meibomian glands by deposits can subsequently cause madarosis, trichiasis, and sometimes distichiasis. Focal degeneration of the macula and drusen formation in Bruch's membrane have been observed in about 30%-50% of patients. Glaucoma (from deposition of glycoproteins in the trabecular meshwork or hyalinization of the scleral trabeculae), lens dislocation or subluxation, corneal ulceration caused by trichiasis, keratoconus [9], unilateral or bilateral uveitis [10], dry eyes, nasolacrimal duct obstruction, and transient blindness are uncommon ocular manifestations of LP.
Due to the chronic, progressive, and indolent nature of the disease, no effective treatment is known. In most cases, management is planned according to the site of involvement and the presentation. The goal of treatment is symptomatic improvement in the clinical condition. As in our case, itching was relieved with artificial tears, lid hygiene, and antihistaminic eye drops. Surgical excision of the papules and CO2 laser therapy are required mostly for cosmetic purposes.
Conclusions
LP requires a multidisciplinary approach, and consolidated opinions from ophthalmologists, dentists, dermatologists, otorhinolaryngologists, and neurologists are pivotal to establishing a firm diagnosis. An ophthalmologist should be well aware of the life-threatening complications associated with LP, and patients should be sensitized regarding the chronic nature, supportive measures, and danger signs of the disease.
Conflicts of interest:
In compliance with the ICMJE uniform disclosure form, all authors declare the following: Payment/services info: All authors have declared that no financial support was received from any organization for the submitted work. Financial relationships: All authors have declared that they have no financial relationships at present or within the previous three years with any organizations that might have an interest in the submitted work. Other relationships: All authors have declared that there are no other relationships or activities that could appear to have influenced the submitted work.
Roughness Tolerance Studies for the Undulator Beam Pipe Chamber of LCLS-II
We investigate the effect of wall roughness on the wakefield-induced energy variation in the undulator beam pipe of LCLS-II. We find that a wall roughness equivalent to an rms surface slope of 30 mr increases the total induced energy variation within the bunch (due to the resistive wall wake) by a modest 20%.
INTRODUCTION
When a bunch passes through the undulator of LCLS-II, the wakefields of the vacuum chamber will result in an added energy variation along the bunch, one that can negatively impact the FEL performance. The wakefield of the vacuum chamber is primarily due to the resistance of the walls and the roughness of the surface. To minimize the impact of the wakes, one would like a wall surface smooth enough so that the roughness component of the wake is a small fraction of the total wake. In LCLS-I, with an undulator vacuum chamber of the same material (aluminum) and roughly the same aperture as proposed for LCLS-II, the wall roughness tolerance, specified as an rms slope of the surface of (y')_rms = 10-15 mr, was difficult to achieve [1]. The goal of this study is to understand the consequences to LCLS-II of loosening the roughness specification, say by a factor of 2 to 30 mr.
The vacuum chamber within the undulator of LCLS-II will be primarily extruded aluminum with a racetrack cross-section, as shown in Fig. 1 (in addition, there are short breaks at the quads that will have a different shape and have a larger aperture). The full aperture is 5 mm by 12 mm, vertical by horizontal. From an impedance point of view, with the beam on axis, the effect is essentially the same as for the case of flat geometry, i.e. for a chamber consisting of two parallel plates with a vertical separation of 5 mm.
In this note we begin with the round approximation, i.e. we consider an aluminum pipe of radius a = 2.5 mm. We calculate the total wake effect of resistive wall plus a model of roughness. The roughness model we use consists of small, shallow, sinusoidal corrugations [2]. We choose this model because measurements of samples of polished aluminum, similar to that to be used in the undulator chamber, find that the typical measured roughness is shallow [3]. Note that this model does not include a so-called "synchronous mode" wake [4,5]. Such a mode appears in the case of small, deep corrugations, which are not expected for the LCLS-II undulator.
The calculation of the short range wake of a resistive pipe has been done before [6], as has the case of a pipe with small, shallow corrugations [7]; in this report we properly combine the two effects. Besides the analytical calculation, we present a simple way of estimating the relative contribution of the resistive wall and roughness components on the induced energy variation in the LCLS-II bunch. We next verify our analytical calculations with the 2D time-domain wakefield program ECHO [8]. Finally, we perform the corresponding analytical wake calculations for a vacuum chamber of flat geometry representing the LCLS-II vacuum chamber for different amounts of roughness.
Selected beam and machine properties in the undulator region of LCLS-II that are used in our calculations are given in Table I.
ROUND VACUUM CHAMBER
Consider first a round chamber of radius a, with wall resistance and small-amplitude, shallow sinusoidal corrugations that represent the wall roughness. While in some cases the beam impedance can be calculated as the sum of the impedance due to wall resistance and that due to wall roughness, in the general case such a summation of impedances is not correct. A more general approach is based on the concept of surface impedance [9], defined as the ratio of the longitudinal electric field to the azimuthal magnetic field at the wall. Denoting by ζ_rw(k) the resistive wall surface impedance and by ζ_ro(k) the surface impedance due to roughness, we can write the beam impedance Z(k) in terms of their sum (Eq. 1), with wave number k = ω/c, where ω is the frequency and c is the speed of light, and with Z_0 = 377 Ω. The resistive wall surface impedance ζ_rw(k) is given in [10] in terms of σ_c, the dc conductivity, and τ_c, the relaxation time, of the metallic walls.
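For orientation, a standard form of these surface-impedance relations for a round pipe is sketched below; this is written in common conventions and is not necessarily the exact form of Eq. 1 in the original (signs in particular depend on the Fourier convention), with the ac (Drude) conductivity denoted by σ̃_c:

\zeta(k) \equiv \frac{E_z}{Z_0 H_\phi}\Big|_{r=a}, \qquad
Z(k) = \frac{Z_0}{2\pi a}\left[\frac{1}{\zeta_{\mathrm{rw}}(k)+\zeta_{\mathrm{ro}}(k)} - \frac{ika}{2}\right]^{-1},

\zeta_{\mathrm{rw}}(k) = (1-i)\sqrt{\frac{k}{2 Z_0 \tilde{\sigma}_c}}, \qquad
\tilde{\sigma}_c = \frac{\sigma_c}{1 - i k c \tau_c}.

In the low-frequency limit |ζ|ka ≪ 2 the bracket is dominated by its first term, so Z(k) ≈ Z_0 [ζ_rw(k) + ζ_ro(k)]/(2πa) and the two contributions simply add, consistent with the remark about Eq. 1 below; at higher frequencies the denominator couples the two terms and simple addition fails.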
The roughness surface impedance term ζ_ro(k) is given in [7]; here the wall profile radius r is assumed to vary sinusoidally with longitudinal position z: r = h cos κz. For the model to be valid we require the oscillations to be small and shallow, i.e. κa ≫ 1 and hκ ≪ 1. Note that Eq. 1 implies that at low frequencies the two contributions to the impedance simply add; however, as was pointed out above, in general this is not true. Once the impedance is known, the wake is obtained by the inverse Fourier transform of the impedance (Eq. 5), with s the distance the test particle is behind the driving particle. Note that in Ref. [7] further practical considerations for performing such a calculation as a contour integral are discussed.
For the LCLS-II undulator vacuum chamber the dominant effect is expected to be the resistive wall wake, with the roughness corrugations contributing to a lesser degree. The strength of the resistive wall wake for a short bunch depends on the characteristic distance s_0 (Eq. 6). For the roughness model, the long range wake (Eq. 7) falls off as s^(-3/2), with the overall minus sign in the expression indicating that the test particle gains energy from the leading particle (we define the sign of the wake so that a positive wake corresponds to energy gain). This is the same s dependence as for the long range resistive wall wake, and in the second expression on the right we write the wake in terms of an equivalent roughness conductivity (σ_c)_ro. Inserting this conductivity into Eq. 6, one obtains an effective roughness distance (s_0)_ro. Choosing λ_ro = 2π/κ = 300 µm and (y')_rms = hκ/√2 = 30 mr, we find that (σ_c)_ro = 2.9 × 10^8 Ω^-1 m^-1 and (s_0)_ro = 4.9 µm. We see that the characteristic distance for this level of wall roughness is about half that of the wall resistance.
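As a quick numerical cross-check of the quoted values, the sketch below evaluates the standard resistive-wall characteristic distance s_0 = (2a²/(Z_0 σ))^(1/3) (presumably the content of Eq. 6). The aluminum dc conductivity used here is an assumption, chosen to be consistent with the quoted s_0 = 9.8 µm rather than taken from Table I.

# Hedged check of the characteristic distances quoted in the text.
# Assumes the standard resistive-wall scale s0 = (2 a^2 / (Z0 sigma))^(1/3);
# sigma_al is an assumed value, consistent with the quoted s0 = 9.8 um.

Z0 = 376.73        # free-space impedance [ohm]
a = 2.5e-3         # pipe radius [m]
sigma_al = 3.5e7   # assumed dc conductivity of aluminum [1/(ohm m)]
sigma_ro = 2.9e8   # equivalent roughness conductivity from the text [1/(ohm m)]

def s0(sigma):
    """Characteristic distance of the resistive-wall wake [m]."""
    return (2 * a**2 / (Z0 * sigma)) ** (1.0 / 3.0)

print("s0 (resistive wall) = %.1f um" % (s0(sigma_al) * 1e6))  # ~9.8 um
print("(s0)_ro (roughness) = %.1f um" % (s0(sigma_ro) * 1e6))  # ~4.9 um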
We numerically performed the integral of Eq. 5, considering the effects of the resistivity of aluminum, and the wall roughness with (y')_rms = 30 mr and oscillation wavelength λ_ro = 300 µm. In Fig. 2 we present Re Z(f) (top; f is frequency) and the point charge wake W_δ(s) (bottom) for the case of a pipe that has wall resistance (blue), roughness (red), and both resistance and roughness (yellow). We see that the total effect is dominated by the resistive wall wake, and that it is not simply given by the sum of the two individual wakes.
We further note that W_δ(0+) = Z_0 c/πa² = 5.8 MV/(nC m). The first zero-crossings of the wakes are near s_0 = 9.8 µm, (s_0)_ro = 4.9 µm, and (s_0)_tot = 12 µm, respectively, where the combined effect has been approximated by (s_0)_tot = [s_0^(3/2) + (s_0)_ro^(3/2)]^(2/3). In the undulator region of LCLS-II the longitudinal bunch distribution is roughly uniform, with peak current I = 1 kA; the nominal bunch charge is Q = 100 pC, with a maximum of Q = 300 pC possible. The bunch wake W_λ(s) is given by the convolution of the point charge wake with λ(s), the longitudinal bunch distribution, and a negative value for W_λ(s) indicates energy loss. For a uniform bunch distribution with peak current I, the relative wake-induced energy variation at the end of the undulator follows from scaling the bunch wake by L, the length of the undulator pipe, and dividing by E, the beam energy. In Fig. 3 we plot the relative induced voltage in a uniform bunch for the three cases of Fig. 2. We see that for both the 100 pC bunch (total length ℓ = 2√3 σ_z = 30 µm) and the 300 pC bunch (ℓ = 90 µm) the total energy variation induced within the bunch is ∆δ_w = 0.36% for resistance plus roughness, vs. 0.30% for resistance without roughness; the roughness adds a 20% effect. Since the wake drops nearly linearly to zero near the effective s_0, we can estimate these numbers with the formula ∆δ_w ≈ e I L W_δ(0+) s̄_0/(2cE), where s̄_0 = (s_0)_tot in the former case, or s̄_0 = s_0 in the latter one; this gives ∆δ_w = 0.37% and 0.31% for, respectively, the case of roughness plus resistance and the case of resistance alone, in good agreement with the more accurate calculations.
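The back-of-envelope estimate can be checked numerically with the quoted values; in this sketch, L = 130 m and E = 4 GeV are assumed to apply here as they do in the flat-geometry section below.

# Hedged check of the estimate ddelta_w ~ I L W0 s0bar / (2 c (E/e)),
# using values quoted in the text; L and E are assumed from the flat case.

c = 2.998e8    # speed of light [m/s]
I = 1.0e3      # peak current [A]
L = 130.0      # length of undulator pipe [m] (assumed, from flat case)
E_eV = 4.0e9   # beam energy [eV] (assumed, from flat case)
W0 = 5.8e15    # W_delta(0+) = Z0*c/(pi*a^2) [V/(C m)]

def ddelta_w(s0bar):
    """Relative induced energy variation for a uniform bunch."""
    return I * L * W0 * s0bar / (2 * c * E_eV)

print("resistance + roughness: %.3f%%" % (100 * ddelta_w(12e-6)))   # ~0.37%
print("resistance alone:       %.3f%%" % (100 * ddelta_w(9.8e-6)))  # ~0.31%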
FLAT VACUUM CHAMBER
Henke and Napoly give the impedance of a resistive wall in flat geometry in Ref. [11]. With a slight modification we include the effects of both the wall resistance and the roughness. We have repeated the previous calculations for flat geometry, for cases of aluminum with ac conductivity and roughness with (y')_rms of (1) 0 mr, (2) 15 mr, (3) 30 mr, and (4) 45 mr; the resulting wakes are shown in Fig. 5 (bottom). We see that, compared to the round case, W(0+) is reduced by the factor π²/16 and the first zero crossing of the wake is increased slightly. Thus Eqs. 8, 9, and 12 (with the last one multiplied by π²/16) can still be used to estimate the relative impact of the roughness and the wall resistance.
In Fig. 6 we plot the relative induced energy variation for a uniform bunch distribution. Here the peak current I = 1 kA; the bunch head is located at s = 0, with the entire bunch extent reaching to 90 µm (for the Q = 300 pC case), and to 30 µm (for the nominal Q = 100 pC case). The length of pipe is assumed to be L = 130 m, and the beam energy E = 4 GeV. The total induced relative energy variation for a resistive pipe with no roughness (for both the 100 pC and 300 pC cases) is ∆δ_w = 0.25%. Adding roughness increases this value by 5%, 19%, and 38% when (y')_rms = 15, 30, 45 mr, respectively.
The roughness effect depends on (y')_rms and also on λ_ro, though the latter dependence is expected to be much weaker. Repeating the calculation for wall resistance plus roughness with (y')_rms = 30 mr, but taking λ_ro = 900 µm, we find that the roughness increases ∆δ_w by 27.5% compared to the effect of wall resistance alone. This confirms that the dependence of ∆δ_w on λ_ro is weak. The LCLS-II bunch distribution in the undulator is not exactly uniform with peak current I = 1 kA (see Fig. 7, the yellow curves). The numerically obtained 100 pC current distribution has slight horns at the head and tail of the bunch, with a slight current droop in the middle; the 300 pC distribution can be described as uniform in front with a long trailing tail. Repeating the induced energy spread calculations with these distributions, both with wall resistance alone and with resistance plus roughness with (y')_rms = 30 mr (λ_ro = 300 µm), we obtain δ_w(s) as given by the red and blue curves in Fig. 7.
It is interesting to note that, because the 300 pC bunch shape begins as a uniform distribution, δ_w(s) quickly drops and rises back to near zero, similar to the behavior in Fig. 6. For the 100 pC case, however, because of the horns and droop, δ_w(s), after reaching its minimum, remains flattened for most of the rest of the bunch. The results are not far from the 19% increase estimated above for the uniform distribution. Finally, for completeness, we calculate the wakefield-induced power loss in the undulator beam pipe: P = ⟨W_λ⟩Q²f_rep/L, where ⟨·⟩ indicates averaging over the bunch. We find that P = 2.1 (1.0) W/m for Q = 100 (300) pC, using the maximum planned repetition rate, f_rep = 300 (100) kHz.
CONCLUSIONS
We have investigated the wake effect of the wall resistance and roughness of the undulator beam pipe on the LCLS-II beam. In particular we wanted to see if it is acceptable to loosen the roughness tolerance from an equivalent rms slope at the surface of (y')_rms = 15 mr to 30 mr. According to the calculations presented here, such a loosening will result in the roughness contribution to the induced voltage increasing from 5% to 20%. The absolute scale of the total wake effect is a relative induced energy variation of ∼0.3% (assuming a pipe length of 130 m and a beam energy of 4 GeV).
In this note, we have presented an analytical calculation of the wake in a round or flat chamber with wall resistance and shallow, sinusoidal corrugations.
We have additionally shown that our analytical calculations of the short range wake in such a chamber are in good agreement with results of the time domain, finite difference program ECHO. Finally, we have presented a simple model for estimating the extra effect of wall roughness on the wake of the beam in the LCLS-II undulator chamber.
Draft Whole-Genome Sequences of Three Isolates of a Novel Strain of a Campylobacter sp. Isolated from New Zealand Birds and Water
Campylobacter spp. are frequently found associated with the avian intestinal tract. Most are commensals, but some can cause human campylobacteriosis.
Campylobacter species are frequent commensals of the avian gastrointestinal tract, and C. jejuni isolated from birds is associated with zoonotic campylobacteriosis in humans (1, 2). Additionally, wild birds are a rich source of novel Campylobacter species (3, 4). Swabs of deposited feces from starlings and mallard ducks in Palmerston North, Manawatu, New Zealand, as well as a 0.45-µm mixed cellulose ester filter (Millipore, Germany) through which 100 ml of water from the Pareora River in Canterbury, New Zealand, had been passed, were enriched in Bolton broth (LabM, Hampshire, UK) in a microaerobic atmosphere at 42°C and subcultured onto modified charcoal-cefoperazone-deoxycholate agar (mCCDA) (Fort Richard Laboratories, Auckland, New Zealand). Three strains of nonstandard colonial morphology (B423b from a mallard duck, B1491 from a starling, and W677a from the Pareora River) had crude DNA extracted by boiling, and 16S rRNA PCR was performed using the methodology of Linton and coauthors (5). The products were Sanger sequenced at the Massey Genome Service (Massey University, Palmerston North, New Zealand). Genomic DNA was extracted from a single colony using the QIAamp DNA minikit (Qiagen, Germany) for W677a and the Wizard genomic DNA kit (Promega, WI) for B423b and B1491. DNA was sequenced at New Zealand Genomics Ltd. (Massey University) using a MiSeq instrument (Illumina, Inc.) with paired read lengths of 250 base pairs after library preparation using the Nextera XT library kit (Illumina). Sequence data were trimmed, assembled, and annotated using the "reads to report" Nullarbor pipeline (https://github.com/tseemann/nullarbor), which uses Trimmomatic with default settings (6) and SPAdes v.3.9.0 in careful mode (7). Annotation statistics were extracted from the NCBI Prokaryotic Genome Annotation Pipeline (PGAP) after the genomes were uploaded to the NCBI server and are described in Table 1.
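For illustration, a minimal sketch of the assembly step described above is given below. The Nullarbor pipeline wraps these tools internally, so invoking SPAdes directly like this is an approximation of the workflow, not the authors' exact commands; the FASTQ and output names are placeholders.

# Hedged sketch: running SPAdes in careful mode on trimmed paired-end reads,
# as the Nullarbor pipeline does internally. File names are placeholders.
import subprocess

subprocess.run(
    [
        "spades.py",
        "--careful",                   # careful mode, as stated in the text
        "-1", "R1_trimmed.fastq.gz",   # forward reads (placeholder name)
        "-2", "R2_trimmed.fastq.gz",   # reverse reads (placeholder name)
        "-o", "assembly_out",          # output directory (placeholder)
    ],
    check=True,
)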
Similarity to known Campylobacter spp. was measured by BLAST analysis of the 16S rRNA gene, and genomic average nucleotide identity (ANI) was calculated using the Kostas laboratory's ANI calculator (http://enve-omics.ce.gatech.edu/ani/). The three presented isolates showed 99% ANI and 100% 16S pairwise identity compared to each other, suggesting a monophyletic lineage. Both BLAST and ANI methods identified the most closely related species as C. jejuni, with 98.15% 16S pairwise identity to C. jejuni NCTC13268 and ~79% ANI to C. jejuni (GenBank accession number GCA_000011865), suggesting that the isolates are likely members of a previously undescribed taxonomic group.
The isolates had a genome with an estimated mean size of 1.62 Mb ± 0.04 Mb standard deviation and 1,642 (±26) predicted coding sequences. The average GC content was 27.46% (±0.02%). This is lower than that observed in other Campylobacter spp., which typically falls between 30 and 46% (8), the lowest previously described being 27.9% for C. hepaticus (9). Single copies of the 5S, 16S, and 23S rRNA genes were identified in each genome. ABRicate (https://github.com/tseemann/abricate) was used to query the Virulence Factors Database (10) using default settings. Neither the cdtABC operon responsible for cytolethal distending toxin production nor the pVir plasmid associated with described pathogenicity in other Campylobacter species was found, suggesting that these isolates potentially belong to an avirulent commensal taxonomic group.
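The GC statistic above is a simple per-genome computation; a minimal sketch of how it can be reproduced from an assembly FASTA follows ("assembly.fasta" is a placeholder name, not a file from this study).

# Hedged sketch: compute percent GC of an assembled genome from a FASTA file.

def gc_percent(fasta_path):
    """Return the GC content of all sequences in a FASTA file, in percent."""
    gc = total = 0
    with open(fasta_path) as handle:
        for line in handle:
            if line.startswith(">"):  # skip FASTA header lines
                continue
            seq = line.strip().upper()
            gc += seq.count("G") + seq.count("C")
            total += len(seq)
    return 100.0 * gc / total

print("GC content: %.2f%%" % gc_percent("assembly.fasta"))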
Data availability. The genome assemblies have been deposited in GenBank, and the accession numbers are detailed in Table 1.
ACKNOWLEDGMENTS
We thank Vathsala Mohan, Sarah Moore, and Anthony Pita for performing the sampling.
We declare no conflicts of interest.
Jurisdictional Relationships: Democracy and the Administrative State Through the Lens of Caring Society v Canada
This paper draws on theorists of jurisdiction to make visible how democratic self-government and the rule of law take effect in the modern administrative state. Through the lens of Caring Society v Canada, a Canadian human rights case about the funding of social services for Indigenous children, I explore an idea of jurisdictional justice that is inherently connected to the quality of democratic engagement and focuses attention on political, legal and human relationships. I argue that in the face of challenges posed by populist politics and the colonial administrative state, this relational understanding of jurisdiction provides resources for building a sense of legality that can sustain shared public meaning and promote practices of political accountability.
Introduction
This paper is inspired by a remarkable series of events that have unfolded in relation to a human rights complaint, brought by Indigenous claimants in 2007, that the Canadian federal government discriminates against Indigenous children by the way it funds child welfare services on reserves. In 2016, the Canadian Human Rights Tribunal found the complaint to be substantiated, and in a very interesting and nuanced decision, ordered the federal government to increase funding and to reform the system (FNCFCS et al v Canada (AG) 2016 CHRT 2). That decision has been followed by numerous appeals, compliance orders, and other court and tribunal proceedings, as well as Parliamentary debates, appeals to international human rights and an active public discourse.
The story of the Caring Society litigation can be told in many ways. It is about the care of children, and the contested role of the state in ensuring their health and safety (Bezanson, 2018). It is about the self-determination of Indigenous communities and the capacity of human rights law to protect against assimilation (Metallic, 2018). It is about the rule of law, and the capacity of legal frameworks to constrain exercises of power and to structure democratic engagement. It is also, I suggest, a story about how the administrative state can function to insulate state decision-making from accountability and to undermine our collective capacity for meaningful public debate. Dramatizing this phenomenon in a manner both shocking and banal, in the fall of 2021 Prime Minister Trudeau asserted in Parliament and affirmed during a televised election debate that his government was 'not fighting Indigenous children in court' at the same time that the Department of Justice was preparing to file yet another appeal (Canada, 2021; CTV, 2021).
This erosion of meaningful public discourse and state accountability is central to contemporary challenges to democracy and constitutionalism, including challenges posed by populist forms of politics. In this paper, I explore the critical resources for responding to these challenges that can be gained by attending to the administrative state as a whole, and its relationship to democratic public discourse. Lack of public trust in institutions and the insulation of political decisions from accountability are important contributors to the public alienation that fuels populist discourses (Webber, 2023: 9-10). When the practices of state institutions are corrosive to trust, accountability and meaningful political discourse, they undermine the possibilities for genuine democratic engagement and create a breach easily filled by the anti-elitist and commonsensical claims of populist politics. At the same time, critical responses to populism need to take seriously the embodied experiences of citizens and the complexity of the relationships at hand. The challenge is to promote the possibilities of democracy in a manner that is grounded in the realities of everyday life (such as the care of children), and not to cede the ground of democratic egalitarianism to the populist rhetorics of anti-elitism that can sometimes be ironic and "self-liquidating" (Webber, 2023: 8). Instead, how can we build or restore political communities in which the abstract words of collective aspiration, whatever they may be ('democracy', 'consent', 'freedom', 'love'), are meaningfully connected to actual practices and experiences?
I suggest that one way to answer this question from the perspective of legal theory is to focus on jurisdiction. Jurisdiction is a useful framework for understanding challenges of meaning and democratic accountability because it enables scrutiny of the modern administrative state as a whole and the way legislative, judicial and executive aspects of the state are interrelated. Much discussion of democratic accountability focuses on tensions between electoral democracy on one hand, and a judicially supervised rule of law on the other. But as the story of the Caring Society litigation shows, it can be executive actions and administrative regimes that give citizens direct experiences of the state as unresponsive or unaccountable, and where public obligations are recast as ostensibly 'private' contractual relations (Rundle, 2020a, 2020b). A focus on jurisdiction brings into view the role of the executive and administrative elements of states, and their relationship to legal and electoral accountability. Attention to service delivery and the idioms of administrative accountability enables consideration of how they too play a role in the alienation that underpins populism (Rundle, 2020a, 2020b).
Attention to the meaning and workings of jurisdiction is also important because it helps address the complexity of 'the people' of contemporary states (Webber, 2023: 5-7). In contrast to the substantive wholeness and directness of the people's voice postulated by populist discourses, jurisdiction calls attention to the specific legal mechanisms that identify, serve, or account for individuals, institutions and communities. This diversity and specificity of the 'people' is of particular significance in the context of pluralistic societies, and points to the need to foster democratic discourses that move beyond majoritarianism. In the case of the child welfare services at issue in the human rights litigation explored here, jurisdiction provides a way to see how technical legal mechanisms of administration are connected to broad questions of democratic accountability and the political constitution and survival of specific peoples.
Jurisdiction addresses the questions of who plays a lawmaking role, where, how and for whom (Pasternak, 2021). Jurisdiction is practiced and manifested through many forms, including judicial decisions and statutes passed by legislative bodies. Dorsett and McVeigh (2012) identify two essential starting points for thinking about jurisdiction. First, the word 'jurisdiction' is derived from the Latin ius dicere: to speak the law. Jurisdiction is the voice or speech of law. Second, drawing on theorists of the common law, jurisdiction connotes authority. They write: '[j]urisdiction is the practice of pronouncing the law. It declares the existence of law and the authority to speak in the name of the law' (Dorsett and McVeigh, 2012: 4). Douzinas (2006) articulates these same two threads when he says that '[i]n jurisdiction, legal speech both constitutes and states the law' (Douzinas, 2006: 23).
Thus, jurisdiction is a technical and legalistic mechanism, oriented to the professional craft of legal practice, such as the rules organizing which courts may hear which kinds of claims (Dorsett and McVeigh, 2012: 4). At the same time, jurisdiction is also a legal concept that explains how law comes into being and by whom. Political self-governance and the application of a legal system are brought into relationship through jurisdiction.
In this article, I draw on theorists of jurisdiction to explore the potential of jurisdiction and 'jurisdictional thinking' (Dorsett and McVeigh, 2012) as a resource for promoting self-governance on the contested terrain of constitutional democracy and in the context of challenges posed by populism. I engage with human rights litigation about the funding of child welfare services to argue that the concept of jurisdiction provides us with crucial tools for understanding and responding to challenges of public meaning and democratic accountability in the context of the modern administrative state. I argue that jurisdiction is a concept and a practice that is important for democracy and for the rule of law. Most importantly, I think that attending to jurisdiction as a lens for understanding the challenges of populism and constitutionalism allows us to ask different kinds of questions and make different kinds of interventions with respect to meaningful public accountability and democratic lawmaking. My aim is to contribute to theoretical understandings of law that are capable of attending to, rather than covering over, the complexity of democratic governance and the rule of law.
Jurisdictional Disputes, Human Rights and Colonialism: An Example from Canada
My exploration of these issues takes place in the context of a Canadian human rights case relating to the rights of Indigenous children and the responsibilities of a federal settler colonial state. In this case, the non-profit organization First Nations Child and Family Caring Society brought a complaint against the Canadian federal government under the Canadian Human Rights Act (see generally Blackstock, 2016). In their complaint, the Caring Society alleged that the federal government discriminated against Indigenous children by the way it funds child welfare services on reserves. Pointing to inadequate services, inequities between on-reserve and off-reserve resources, and the crisis of Indigenous over-representation among children in state care, the Caring Society argued that federal policy perpetuated, rather than ameliorated, the harmful legacies of the residential school system, thereby adversely affecting First Nations children on the basis of race or ethnic origin, and breaching the anti-discrimination norms in the Canadian Human Rights Act. The Canadian Human Rights Tribunal (CHRT), the statutory body assigned jurisdiction to adjudicate complaints under the Act, found the complaint to be substantiated and made orders with respect to compensation and also the reform of the system (FNCFCS: paras 458, 481).
In order to understand the legal history of the Caring Society case, a bit of constitutional background is needed. The Canadian constitutional structure is federal in nature, and lawmaking authority is divided according to the terms of the constitution (see generally Webber, 2021). The federal division of powers set out in the Constitution Act, 1867 assigns provincial legislatures the authority to make laws regarding child welfare programs, and provides the federal Parliament with the authority to make laws with respect to 'Indians and Lands Reserved for Indians' (Constitution Act, 1867, s. 91(24)). As a matter of constitutional doctrine, the question presented is whether the provision of child welfare services for Indigenous children who live on First Nations land reserves falls into the class of 'child welfare', or into the class relating to 'Indians'. The contemporary doctrinal answer is: probably both (NIL/TU,O Child and Family Services Society, 2010 SCC 45). However, as in other areas of social welfare law, the division of legislative powers in the Constitution Act, 1867 creates powers but not necessarily obligations. And for most of Canada's history, neither the federal nor provincial legislatures legislated to provide child welfare on reserve (Grammond, 2018).
Instead, there is a well-established history of delivering services on a model involving inter-governmental agreement and the provision of funding governed by federal administrative policies rather than the creation of legislated norms. This model, sometimes referred to as 'the provincial model' (Grammond, 2018), has the following basic structure: the federal government enters into an agreement with a provincial government. Under this agreement, the federal government will provide funding to a First Nations child welfare agency, and the provincial government will delegate authority to that agency to provide services on reserve. It is a condition of the federal funding that the First Nations child welfare agency provide services to the community on reserve according to the norms and standards set out in provincial law. This is the kind of arrangement that was the subject of the human rights complaint in the Caring Society case.
The CHRT held that the policies and funding mechanisms contained in these agreements have had the effect of under-resourcing First Nations child welfare agencies as compared with their provincial counterparts, of failing to provide culturally appropriate or adequate services, and of facilitating the ongoing practice of taking Indigenous children into state care in disproportionate numbers (FNCFCS: paras 404, 458). For example, federal policy determines the funding for the operating expenses of agencies, such as the salaries of social workers or the provision of support services, on the basis of a formula connected to the population of the reserve. The amount of those operating expenses is widely understood to be insufficient to provide adequate service (FNCFCS: paras 389-90). In contrast, the funding to pay for the costs of caring for a child who is taken into state care is provided 'at cost', meaning that the full amount required to provide the care is reimbursed by the state. The funding policy thereby creates a structural incentive, at once technocratic and heartbreaking, to take children into state care, not as a last resort but in order to obtain services for them, as those services are not available to their families in the community (FNCFCS: paras 384, 458).
These kinds of child welfare arrangements apply only on reserve, and the Tribunal found that, therefore, the race or ethnic origin of the children was a factor in their treatment (FNCFCS: paras 395-6, 459). The use of the category 'race or ethnic origin' to frame the injustices of colonialism has been problematized by critical scholars (Lawrence, 2018), but not by the Tribunal in this case. Instead, the Tribunal simply ruled that the policies had the effect of denying services to children on reserve, or differentiating adversely against them, in a substantively discriminatory way. The Tribunal held: 'The evidence in this case not only indicates various adverse effects on First Nations children and families by the application of AANDC's FNCFS Program [the federal program], corresponding funding formulas and other related provincial/territorial agreements, but also that these adverse effects perpetuate historical disadvantages suffered by Aboriginal peoples, mainly as a result of the Residential Schools System' (FNCFCS: para 404).
The challenges of jurisdiction in this case are many. Indeed, the injustices at the heart of the case, the failure to provide needed social services and the separation of Indigenous children from their families and communities, arose from decades of jurisdictional disputes between federal and provincial governments, fought out in elections and in litigation, at the expense of Indigenous children. Indigenous communities have resisted both the interference and the neglect of the Canadian state in a variety of ways, including through the assertion and practice of jurisdiction over families and children in their communities, both within and without the Canadian state framework (Walkem, 2021). As explored by Metallic, Grammond and others, the plurality of relevant jurisdictions is a central part of the picture (Grammond, 2018; Metallic, 2018; Mosher and Hewitt, 2018; Sheppard, 2018). Drawing on this attention to plurality, the specific jurisdictional lens I hope to bring to this case focuses on the quality of the relationships that are created among jurisdictions and jurisdictional actors. When brought to bear on the Caring Society case, jurisdictional thinking makes visible how the challenges of democracy and the rule of law are relational in nature and connected to the capacity of law to foster shared public meaning and accountability.
The range of institutional and jurisdictional relationships at play in the Caring Society case is remarkably complex and provokes essential questions of law and democracy. Consider the following examples. In January 2016, the Canadian Human Rights Tribunal ordered the federal government to reform its funding model and the way it provides the services in question, but the government has largely failed to comply with those orders (Levesque, 2021). There have been numerous subsequent non-compliance and other orders by the Tribunal (2016 CHRT 10; 2016 CHRT 16; 2017 CHRT 7; 2017 CHRT 14; 2017 CHRT 35; 2018 CHRT 4; 2019 CHRT 1; 2019 CHRT 7; 2019 CHRT 39). While there are many legal relationships at hand (the Canadian Human Rights Tribunal is an administrative tribunal created by statute and thus formally part of the executive), one way to characterize what is happening is to see it as an ongoing instance of the executive branch ignoring the orders of a quasi-judicial body, in contravention of the rule of law.
In November 2016, Member of Parliament Charlie Angus (an opposition member from the left-of-centre New Democratic Party) introduced a motion in the House of Commons to compel the government to comply with the Tribunal ruling, which was passed unanimously on November 1 (Canada, 2016). The consequences of this motion are more political than legal, because the source of the government's legal obligation to comply with the Tribunal's order is simply the rule of law. Further, the motion itself, as with all parliamentary resolutions, is not legally enforceable. However, the symbolic impact of the motion was considerable, particularly given that it received assent from all Parliamentarians, including the government ministers whose Ministries were simultaneously resisting the implementation of those same orders before the CHRT (Canada, 2016). To my knowledge, this action, a Parliamentary motion that explicitly calls on the executive to comply with the most basic tenets of constitutional government, is unprecedented in Canada.
Finally, in August 2017, the United Nations Committee on the Elimination of Racial Discrimination called on Canada not only to improve its supports for Indigenous children and families (which it had been calling on Canada to do for decades), but also to comply with the order of Canada's own human rights tribunal and thereby respect the rule of law (United Nations Committee on the Elimination of Racial Discrimination, 2017).
The questions of jurisdiction raised by the Caring Society case are importantly connected to democracy, public accountability and self-government. First and foremost, throughout the long process of raising and litigating a human rights complaint, the Caring Society has situated the case within a larger social movement calling for the active involvement of children in the legal and political processes that affect them. Through the Caring Society's campaigns, children, hundreds or perhaps thousands of them, attended hearings, demonstrated on Parliament Hill, wrote letters to their local representatives, and more. At some point, there were so many children attending the hearings that they had to come in groups, and the Tribunal had to move the hearings to a larger venue to accommodate the broad public interest in attending (King et al., 2016: 35). This provides a compelling example of how litigation can support, rather than alienate or reduce, democratic engagement, including by children, with justice issues.
The Caring Society case also starkly presents the specific challenges that arise for both democracy and the rule of law when public meaning and accountability are undermined. Caring Society is a case about the capacity of jurisdiction to mediate the relationship between the care and safety of children and bureaucratic cost-reduction. It is a case about jurisdiction as the way sovereignty is manifested, and how governments can fulfill and evade their obligations. The federal government claims to endorse the findings of the CHRT, while refusing or neglecting to follow them. The federal government's words become meaningless, vulnerable to mere strategic deployment, which not only impacts the wellbeing of children needing services, but also precipitates a crisis in public meaning and undermines the capacity of the state to function in the service of democratic self-government. In this way, the Caring Society case is also about the need for attention to jurisdiction, attention to the speech of law and the technical mechanisms for its operation, as methods for enacting a more robust practice of public meaning and accountability.
Jurisdictional Justice and Neglect
Questions about jurisdiction involve democracy because jurisdiction enables the internal structuring of self-government: '[j]urisdiction is the legal mechanism used to manage divisions of internal power within states' (Pasternak, 2021: 178). In her essay on federalism and jurisdictional justice, Lessard (2012) explores this inherent connection between jurisdiction and democracy and argues that when judges are determining issues about the division of powers, they ought to explicitly attend to the democratic credentials of the legislating bodies claiming jurisdiction. In other words, where legislation is created through deep and collaborative engagement with the community members concerned, this fact should bolster the jurisdictional claims of the enacting legislature. Lessard argues that this norm carries particular weight when the communities whose lives or interests are at stake are marginalized from mainstream political processes and institutions such as electoral politics, and when the interests at stake are very important ones (Lessard, 2012: 94). She writes: 'My argument is that a judicial determination of where jurisdictional authority resides in a democratic polity, a decision that is at base one about the structure of self-government, should take account of these elements of functional self-government or "democracy on the ground"' (Lessard, 2012: 107-8).
Lessard also argues, and I think she is right, that different issues arise for constitutional law in jurisdictional disputes as compared with rights disputes, and that there is something about the 'jurisdictional frame' that connects us to questions of democracy in an important way. Lessard acknowledges that constitutional law doctrine about jurisdiction in Canada can be abstract and formal, far removed from the kinds of oppositional politics that she is interested in. However, she challenges us to understand that this story is not inevitable, and certainly not required by constitutional law. She argues that 'jurisdiction' inherently invites questions about self-governance and its structure and therefore '… attention to critical oppositional politics and its recognition as a fundamental and necessary component of democratic engagement is invited rather than foreclosed by our constitutional texts and principles' (Lessard, 2012: 108).
In this way, Lessard's theory harkens back to the more general claim that jurisdiction is the legal form that enables or structures the practice of democratic politics and that jurisdictional justice is both abstract and formal, and tied into the political practices and daily lives of communities. I argue that Lessard's concept of jurisdictional justice can be used to understand more deeply the democratic implications of what is happening in the exercise, non-exercise, and relationships between jurisdictions.
Taking up jurisdictional justice with respect to the Caring Society case, Lessard might have us ask: what are the democratic credentials of the jurisdictional practices at hand? In the Caring Society case, it is not straightforwardly a matter of assessing the democratic credentials of legislation, because there is none. I think that we can build on Lessard's idea of jurisdictional justice to assess, in this case, the non-exercise of jurisdictional authority by the Canadian federal Parliament in the case of child welfare services on reserve.
A jurisdictional lens focuses attention on the federal government's lack of a legislative framework for any action it may take with respect to child welfare on reserve. Metallic (2018) calls this 'jurisdictional neglect', a situation in which 'both the federal and provincial governments play a role in this system, but neither accepts accountability for ensuring First Nations are receiving adequate services, resulting in a diluted responsibility on the part of both governments' (Metallic, 2018: 8).
Courts conceive of this kind of jurisdictional gap in different ways. In a situation where both federal and provincial governments denied lawmaking authority over Métis and non-status Indians, the Supreme Court of Canada characterized the legal context of those communities as a 'jurisdictional wasteland', with 'significant and obvious disadvantaging consequences' (Daniels v. Canada (Indian Affairs and Northern Development), 2016 SCC 12, para 14; see also Sheppard, 2018). The Court has also invoked the prospect of 'legislative vacuums' in which forest management in some areas could go unregulated by either federal or provincial legislatures (Tsilhqot'in Nation v. British Columbia, 2014 SCC 44, para 147). In the context of judicial interpretation of the jurisdictions established by the Australian colonial constitution, we also see the idea of the 'No Man's Land' 'over which the authorities of neither Colony could legitimately assert authority' (South Australia v State of Victoria (1914) 18 CLR 115 at 139, discussed in Dorsett, 2006: 140).
From the perspective of democratic self-governance, it is crucial to see that jurisdictional neglect does not mean that the federal government has less power in this area. Indeed, the absence of legislation passed by Parliament means that the government actually has quite a lot of power: power to impose conditions, to shape program administration at a highly detailed level through reporting requirements, and to change or maintain rules without consulting anyone, with very little transparency or oversight (FNCFCS: paras 74-75; Metallic, 2018: 14, 23). In Metallic's words, '[j]urisdictional neglect results not only in a failure to properly address important First Nations policy issues…but, curiously, can also result in excessive or inappropriate control exercised over First Nations by INAC [the federal Ministry responsible]. This is due to an absence of the normal checks and balances that come with properly regulated government services established through legislation' (Metallic, 2018: 16). In many contexts, the assertion of jurisdiction over something or someone is an act of power, or an act that facilitates, expands or consolidates that power. But the converse does not hold: the non-exercise of jurisdiction does not always work to constrain power (Cochran, 2019). Where power operates, empty space is not the same as freedom or self-determination. Here, what we see is the failure to exercise lawmaking jurisdiction as a mechanism that enhances state power without democratic oversight.
Jurisdictional neglect also has implications for the rule of law. Dorsett and McVeigh (2012) show that when law declines to take jurisdiction, the result can be a situation in which something or someone becomes 'poorly bound to law'. This can especially be the case when the law that declines is a dominant one, because the capacity of other jurisdictions to assert themselves is uncertain or undermined. The example explored by Dorsett and McVeigh is the relationship between common law and ecclesiastical law with respect to death, burial and dead bodies. Historically, ecclesiastical law dealt with conscience and the meaning of burial, with the common law declining to take jurisdiction by articulating that there is no property in a dead body (Dorsett and McVeigh, 2012: 70). Dorsett and McVeigh argue that with the effective dominance of the common law and the relative demise of ecclesiastical jurisdiction, there is no longer any meaningful exercise of jurisdiction over the dead body. 'The consequence of the failure to bind the body to the common law, the way in which precedent transmits law across time, and the effective demise of non-common law jurisdictions…has been to leave the dead body poorly bound to law' (Dorsett and McVeigh, 2012: 70). This 'continues to cause difficulties for thinking about how one engages lawful relations in areas such as patenting cell-lines or other medical technologies derived from the human body' (Dorsett and McVeigh, 2012: 70). Moreover, Dorsett and McVeigh argue that in this context, jurisdiction (through the technology of precedent and a 'thin' account of what is bound by precedent) results in a 'loss of ability to take responsibility for legal judgment' (Dorsett and McVeigh, 2012: 70). The neglect of jurisdiction means that it is difficult to see what values are at stake and how public responsibilities will be practically taken up.
Drawing this analysis into conversation with the Caring Society case, I think that jurisdictional neglect has meant that child welfare services on reserve are also 'poorly bound' to law, meaning that the quality of connection between those services and the norms of Canadian law is thin and does not facilitate responsibility or accountability for judgment. This leaves a space where the rule of law fails to take effect adequately. In her writings on legal pluralism, colonialism and gendered violence, Napoleon (2019) uses the language of 'spaces of lawlessness'. She argues that 'the geopolitical spaces where Indigenous law has been undermined causes gaps in Indigenous legal worlds which when combined with failures of Canadian law, creates spaces of lawlessness where violence happens. It is usually the most vulnerable who suffer the consequences, mainly Indigenous women and girls' (Napoleon, 2019: 8). Napoleon shows how spaces of lawlessness are created when one jurisdiction is undermined, and the other fails to maintain or uphold lawful relations.
When jurisdictional neglect takes place in the colonial context, it can be linked both to the failure of (state) jurisdiction, and to the undermining of (Indigenous) jurisdiction. The federal government's policies and practices relating to children and family have always been a central component of colonial control, directly or indirectly undermining Indigenous law by impeding the transmission of language, proscribing the operation of legal institutions such as art and ceremonies, and disempowering and dispossessing women (Truth and Reconciliation Commission of Canada, 2015). All of these have been facilitated by the exercise of federal jurisdiction over 'Indians', primarily through the Indian Act and the Indian Residential School system. From the late nineteenth century until the late twentieth century, generations of Indigenous children were removed from their communities and required to attend institutions run by Christian religious orders, where they experienced isolation and abuse (Truth and Reconciliation Commission of Canada, 2015). The Canadian state interferes with and actively undermines the operation of Indigenous law with respect to the welfare of children. At the same time, the federal programs and policies that exist in this area are profoundly inadequate and fail to live up to the state's own normative commitments, such as equality and the principle of democratic control over the executive (Friedland, 2009). Thus, the rule of law has only a weak purchase on the exercise of power by the state in relation to Indigenous children; the provision of services to those children is poorly bound to law, with responsibility for judgment ineffective and obscure.
This discussion makes clear that the remedy for jurisdictional neglect is not necessarily the active assertion of substantive norms by the dominant jurisdiction (Metallic, 2018: 22). Indigenous communities are not calling for the federal Parliament to resolve the puzzle of legal accountability by affirming Parliament's authority to order the lives of Indigenous children. To the contrary, many Indigenous communities are working to assert their own jurisdiction over these matters. When federal child welfare policy is poorly 'bound' to law, this means that jurisdiction is poorly connecting the general and the particular; the broad norms of law have little purchase on the technical craft of legal practice. For Canada, this means that constitutional commitments, such as equality rights, Aboriginal rights, Parliamentary sovereignty and the rule of law, cannot be properly manifested in this context. The damage caused to Indigenous communities is manifest. But this jurisdictional disconnect also undermines the integrity of the Canadian state legal order itself, impacting jurisdictional justice for everyone within its sphere.
There is a recent and important exception to Parliament's jurisdictional neglect in relation to child welfare services on reserve. An Act Respecting First Nations, Inuit and Métis Children, Youth and Families was enacted in 2019, and is the first example of federal lawmaking pursuant to this jurisdiction. The purpose of the Act is, in large measure, to recognise Indigenous jurisdiction over child welfare. The Act does this by providing that Indigenous laws will, under specific circumstances, be enforceable as federal law, and thereby made paramount over any conflicting provincial laws (ss. 20-22). This Act is a jurisdictional disruption to the 'provincial model' at issue in the Caring Society litigation. However, it also carries with it significant pitfalls relating to funding, accountability and the relationships between Indigenous and Canadian jurisdictions that have continuity with the problems of the provincial model. The implications of the Act for jurisdictional justice are uncertain (Metallic et al., 2019).
Jurisdictional Relationships of Democracy
Exploration of the idea of jurisdictional neglect shows that there are important questions about the democratic character of the exercise or non-exercise of law-making jurisdictions, including those visible in the Caring Society litigation. Attention to the colonial context of the litigation brings to light the plurality of jurisdictions at work in the context of child welfare services. These include the federal Parliament, provincial legislatures and the law-making authorities of various Indigenous societies. It is clear that there are important questions about the democratic character of the exercise and non-exercise of all of these jurisdictions. A further dimension of this discussion opens up when we add consideration of the relationships between jurisdictions, with reference to criteria about democracy and self-government.
Moving towards relationships and relationality in analysis of legal and political theorising calls for us to attend to the quality of relationships as a core rather than peripheral question (Sheppard, 2010;Nedelsky, 2011).As Eisen (2017) summarizes, 'relational theorists post that relationships are constitutive of persons and institutions a position that in turn gives rise to a normative demand that problems be reconceived and addressed in ways that honour this core truth' (Eisen, 2017: 46).Focusing attention on relationality allows for constructive shifts in understanding both democracy and the rule of law.For example, in her engagement with the work of Lon Fuller, Rundle emphasises the inherently relational character of the preconditions that Fuller saw as constitutive of legality (Rundle, 2013(Rundle, , 2019(Rundle, , 2020a)).Rundle makes visible how Fuller's 'internal morality of law' makes demands on the quality of the relationships between legal officials and legal subjects, and describes the specific forms of agency, responsibility and authority that are only possible when certain kinds of relationships exist (Rundle, 2019).These relational conditions are what makes it possible for a person to have legal subjectivity, and for an authority to govern through law.
Just as the quality of relationships creates the conditions for the rule of law, it is the quality of relationships that create the conditions for legitimacy and the ability to govern.Jurisdictional thinking thus presents a further opportunity to think about relationships in terms of collectivities and institutions, but without permitting collectivities to be taken as dominant or supreme (Nedelsky, 2011).Here, I extend this orientation to scrutiny of the relationships between jurisdictions and the legal and democratic quality of those relationships.
In thinking about how to attend to the relationships between jurisdictions, I draw on Roughan's writings on relative authority.Roughan (2013) draws on theories of legitimacy and case studies of legal plurality as an empirical reality, and provides a compelling account of authority that is inherently relational in nature.Roughan argues that the power and legitimacy of legal authorities depend on the quality of their relationships to other authorities (both state and non-state).She writes: 'When authority is relative, relationships between relative authorities become a condition of their legitimacy' (Roughan, 2013: 8) Relative authorities can cooperate, coordinate or tolerate each other, or they can conflict or undermine each other.As Roughan writes, '[a] claim to relative authority is simply a claim to have legitimate authority through appropriate relationships with other authorities' and 'any claim that such an authority makes entails a commitment to the pursuit of those appropriate relationships' (Roughan, 2013: 158).
In the context of the Caring Society litigation and the federal funding of child welfare services on reserve, the concept of relative authority helps in clarifying what is at stake in the quality of the inter-institutional relationships at play. For example, by delivering child welfare services on reserves through funding policies and agreements rather than legislation, the federal government structures its relationships with First Nations not as public relations of democratic or inter-authority accountability, but as private matters of contract and administration (Rundle, 2020a). For example, in response to the assertion that the funding programs functioned contrary to the terms of the Canadian Human Rights Act, the federal government argued that providing 'funding' does not constitute a 'service' that is available to any 'public', thus putting the complaint outside the scope of conduct prohibited by the Act. From this perspective, the relationship between the federal government and the First Nation is governed primarily by agreement, with this agreement understood in a very privatized manner, such that accountability mechanisms would only exist between institutional entities, not reaching out to citizens or political communities. The federal government was explicit about this in its legal argument, writing in its factum that: 'The funding at issue is provided on a government to government or government to agency basis and follows a process of discussion and implementation. Individual First Nation children and their families are not invited or expected to participate in the creation of these funding arrangements. They are not parties to the resulting contract and would normally be excluded by the doctrine of privity from seeking legal redress for alleged breaches' (FNCFCS et al. v Canada (AG), Respondent's Closing Arguments: para 134). Here, the jurisdictional structures surrounding federal child welfare programs have the effect of characterizing the relationships and lines of accountability as private. The contractual relationships are 'government to government', or 'government to agency', in which the federal government agrees to provide funds on the condition that they be spent according to its specifications. The provincial government agrees to legally delegate the authority to provide services in accordance with provincial child welfare statutes. And the First Nation or its service agency agrees to fulfill the conditions of the funding grant, including by delivering its services in accordance with the norms set out in the provincial statute.
On this framework, there are no public relationships involved, and existing sites of democratic accountability have little purchase on the program. The people of Canada exercise democratic self-government through their representatives in Parliament, but here Parliament has abdicated its role and there is no legislative framework for children's services against which the actions of executive officials can be judged and the officials held to account. There is no accessible public record, or even a known obligation to keep particular kinds of records or data at all. This interferes with the capacity of citizens to participate in the creation of the norms that govern society, and to hold their own government to account. Citizens of individual provinces find that the legislative norms duly enacted by their provincial legislatures, such as the principle that taking a child into state care must be a measure of last resort (see FNCFCS: paras 345, 388), are being actively undermined through their executives' participation in a funding agreement. And the children, families and First Nations whose interests and rights are directly at stake are understood not as citizens or political agents at all, but only as consumers of services who are 'not parties' to the agreements and whose political agency is irrelevant (Pasternak, 2014: 152).
In its decision, the Canadian Human Rights Tribunal rejected the idea that there was no 'public' relationship between the federal government and First Nations children (thus bringing federal conduct within the purview of s. 5 of the Act, which prohibits discrimination where such public relationships exist). Instead, the Tribunal emphasized that 'a public is defined in relational as opposed to quantitative terms', and relied on a robustly historicized account of the federal government's relationship to Indigenous peoples, and the actual effects of that relationship, to help interpret the 'publicness' of the relationship at hand (FNCFCS: para 31). The Tribunal found that '[t]he fact that [the federal government] does not directly deliver First Nations child and family services on reserve, but funds the delivery of those services through FNCFS Agencies or the provincial/territorial governments, does not exempt it from its public mandate and responsibilities to First Nations people' (FNCFCS: para 78). Thus, the Tribunal insisted that the relationship between First Nations children and the federal government was not private and contractual in character, but rather carried the elements of publicity required to hold the state to account in the public interest (FNCFCS: para 76).
Further, the Tribunal held that the relationship at hand was not only public in a general way, but that 'First Nations and, in particular, First Nations on reserve, are a distinct public' (FNCFCS: paras 61, 84). Drawing on the Canadian constitutional principle of the 'honour of the Crown' and the doctrinal characterization of the relationship between the Crown and Aboriginal peoples as 'fiduciary' in nature, the Tribunal held that 'the existence of the fiduciary relationship between the Crown and Aboriginal peoples is a general guiding principle for the analysis of any government action concerning Aboriginal peoples. In the current "services" analysis under the CHRA it informs and reinforces the public nature of the relationship between AANDC and First Nations on reserves and in the Yukon…' (FNCFCS: para 110). The Tribunal further found that the federal government's decision to fulfill its responsibilities to provide services through an approach over which First Nations have very little control reinforces rather than diminishes the strength of the federal obligation to act in the interest of First Nations when exercising its role as a public authority (FNCFCS: paras 83, 86).
Reflecting on these findings, with Roughan we might ask how the relationship between the federal, provincial and First Nations governments can be structured to best promote the legitimate authority of all three? With Lessard, we might look to the political agency of the most marginalised and directly affected people as a guide to assessing that legitimacy.
Jurisdictional thinking allows us to see the processes and procedures through which public relationships are created (Rundle, 2019), and in the Caring Society case, those public relationships are found wanting from the perspective of democratic self-government. And from the perspective of a relational approach to law, the question is not just whether the federal exercise or non-exercise of jurisdiction is democratically adequate; it is whether it fosters relationships of democratic community.
In this regard, the Caring Society decision has provided a doctrinal path, using Canadian human rights and constitutional law, for thinking through the democratic quality of the jurisdictional relationships at hand. In her analysis of the significant implications of the Caring Society decision, Metallic notes that '[t]he Caring Society decision, while not using the language of "self-government" or prescribing the particular form it should take, signifies that First Nations must exercise meaningful control over the content and delivery of child welfare services in their communities as a matter of human rights law' (Metallic, 2018: 6). Refocusing this analysis through a different lens, I suggest that the Caring Society decision also signifies that such self-government would be part of a more democratic and more lawful set of institutional relationships between the relative authorities. Jurisdiction provides not only a lens for critique but also techniques for working towards those more just relationships.
Jurisdiction, Care and Accountability
In this discussion so far, I have focused on the relationships between jurisdictional authorities or institutions. I argue that jurisdictional thinking allows us to focus attention on the way concrete and technical practices of law structure public relations of governance and authority. However, one of the most notable, and indeed deeply disconcerting, aspects of the Caring Society case is the juxtaposition of a technocratic and obfuscating set of relationships, not only with the public norms of equality and democracy, but also with the care, intimacy, and embodied experiences of caring for children. As Bezanson writes, the case is 'fundamentally about how people put together and sustain the necessities of life' and the challenges of this undertaking in the context of neoliberal social policy (Bezanson, 2018: 167).
The Caring Society case is troubling because it discloses the experiences of children who are lonely, afraid and disconnected from their families, experiences of parents whose children are taken away, experiences of social workers making heartbreaking choices between untenable options, trying to bend an unmoving system around the lives of vulnerable people. It is also troubling because it discloses a bureaucratic system that has intervened in the lives of Indigenous communities in an incredibly harmful way, but which very successfully dilutes, avoids, defers and privatizes any kind of accountability even in cases where that harm can be made visible. One illustration of this happens in a documentary film by Obomsawin (2016) about the Caring Society litigation, in which viewers watch the testimony of federal civil servants tasked with implementing the programs in question. The film contains an extraordinary moment in which it is disclosed that the individuals responsible for implementing one program, developed to dispense funds in the case of jurisdictional disputes, had received an award for the quality of their work, even though the outcome was that no funds were dispensed under that program at all. The story of the Caring Society case is partly a story about the real risks our society faces with respect to the loss of judgement, the loss of public space and human relationships.
In my view, this observation about the deeply jarring contradictions of the Caring Society case is an important methodological resource. Specifically, it points to the importance of developing theories of law that can account for the full complexity of human lives and communities. While critical analysis of contemporary politics, including populism, serves to help deconstruct the simplistic binaries and thin ideologies that can disconnect politics from our lives (Webber, 2023: 9), the Caring Society case demonstrates that it is crucial to attend to the embodied, affective aspects of our lives and relationships. In its decision, the Tribunal attempts to grapple with some of the tensions in the case somewhat directly: the first sentence of the decision reads, 'This decision concerns children' (FNCFCS: para 1). Taking up a relational approach to jurisdiction requires that readers not allow this first sentence (the choice of inaugural words, the underlining, the direct articulation of what is at stake) to slide across their field of vision without impact. Jurisdiction points our attention to the techniques of writing, and relationships of care open us methodologically to the implications of this sentence.
The Caring Society case is a compelling example of the interconnectedness of rights, care and judgement. For example, in its decision, the Tribunal takes up the administration of child welfare services as a crucial forum for enabling political community. The Tribunal finds that '[t]he transmission of indigenous languages and cultures is a generic Aboriginal right possessed by all First Nations children and the families' protected by the constitution, and therefore that the federal government may have not only a legal but a constitutional obligation to ensure it acts in a way consistent with those rights (FNCFCS: para 106). As described above, Metallic (2018) argues that in drawing this connection, the Tribunal articulates a legal basis in human rights law for First Nations self-government over child welfare services, because equality demands non-assimilation (Metallic, 2018: 6). We must enable relationships of care in order to transmit language and culture, and thus have a political forum for democratic deliberation. At the same time, democratic self-government is required in order for those relationships of care to exist and to be protected from assimilation.
The Caring Society case also highlights the relationships between caregiving work and the adequacy of political discourse for sustaining rights and democracy. Taking up Nedelsky's (2012) positioning of caregiving work as a crucial source of judgment in relation to public policy, I view some of the state failures in this example as connected to the devaluation of care, and the inaccessibility of caregiving practice to the individuals and systems charged with supporting 'child welfare'. In this sense, the embodied and affective aspects of relationality (including, for example, relationships between me and my children, between me and my research subject matter, and how I understand and observe relationships between other things I see, like jurisdictions) are an important methodological resource, including for the reason that they are connected to the capacity to observe and take seriously the affective dissonance generated by the case. This can provide access to a particular type of critical lens, oriented to the capacity of law to sustain (rather than undermine) public meaning. When we hear the federal government argue that the Tribunal has no jurisdiction because funding is not a 'service', or that it has no 'public' obligations to First Nations children, when emotional and physical harm to vulnerable people is viewed through a deeply alienated and technocratic lens, it is our own embodied and affective relationships that provide us with the resources to experience confusion, incredulity or outrage. Those experiences help us see the profundity of the gap between the words invoked and the reality against which democracy and the rule of law must be held to account. Without attention to the importance of these relational experiences, or in a context where daily survival requires that we disregard them, we may instead respond with disconnection, despair and cynicism. This is a context in which words are at risk of becoming meaningless (care through harm) and law of becoming power (government through lawlessness).
The Caring Society case reflects the high stakes of institutional cultivation of meaninglessness for both democracy and the rule of law. Political theorists have long documented how anti-democratic political movements, including those with a populist character, rely on this kind of disorientation and loss of shared, public meaning to facilitate their hold on power in a society. With reference to the totalitarianism of the mid-twentieth century, Arendt (2001) calls this the loss of the 'common world' and argues that this loss facilitates the loss of judgement that enabled the collapse of politics in those societies, extending even to genocide. In the context of the present discussion of jurisdiction, the connection I want to make is that this kind of alienation facilitates our ability, as a society, to disconnect, to feel powerless and thus unaccountable, and I think that jurisdictional thinking is one important lens to bring to bear on this problem. When we witness the suffering of a child explained by the failure to correctly fill out a form, we are outraged. But there is no form available to connect that rage to law or to transform it into constructive engagement in the public world. And so, for those whose private worlds are untouched, we may also grow immune to the intellectual and affective dissonance produced by this situation. This is, in part, a matter of jurisdictional practice.
Conclusion: Jurisdiction and Forms for Taking Responsibility
Jurisdiction identifies the perpetual gap between legal norms and the practical, concrete forms of their implementation and administration. The Caring Society case demonstrates how this gap can be a space of lawlessness, where power and violence operate with impunity. In its jurisdictional neglect, the federal government undermines democratic accountability, permits the ongoing inequality of Indigenous communities and casts the institutional relationships at hand in private, non-democratic terms. Norms of democracy and the rule of law begin to lose their grounding in legal relations, and the possibilities of common public meaning are undermined.
At the same time, by calling attention to this gap between norms and experience, jurisdictional thinking also provides opportunities for critique and transformative action. (In taking up their adjudicative jurisdiction in the future, will Canadian Human Rights Tribunal members imagine their hearing rooms filled with children?) Questions and challenges of jurisdiction present the possibility of seeing and scrutinizing the techniques that connect law to practical experiences, rather than forgetting, assuming or erasing them (see Douzinas, 2006: 27; see also Matthews, 2017). For example, in their discussion of writing as a technology of jurisdiction, Dorsett and McVeigh (2012) stress the importance of understanding writing, even in its most technical administrative forms, as more than administrative: it is also about the creation and authorisation of legal relations (seen, for example, in the inauguration of a legal process through a writ or indictment) (Dorsett and McVeigh, 2012: 60-62). They write: 'Much of the administration of modern law takes place through bureaucratic modes of writing; an obvious example is the form. Modern administration, the idea of thinking of law as a general system and of writing as simply being administrative, makes it harder to think of writing as a jurisdictional technology. The point we make, however, is that unless we hold onto writing as a jurisdictional practice we lose our sense of the authority and authorisation of law, and of how we inaugurate lawful relations. Writing is the preferred method of the common law (and of most modern law) of getting law going. If we lose this sense of writing as a technology of jurisdiction, then modern law becomes what Arendt calls government by no one, with the consequence that we no longer take responsibility for the form and force of our law.'
In this way, jurisdictional thinking can promote accountability by focusing attention on the ways in which technical administrative processes (forms, funding formulas, administrative guidelines, programming) have consequences relating to substantive legal norms such as the rule of law. Jurisdictional thinking facilitates an account of democratic accountability that can encompass the administrative state as a whole in its field of vision, and thereby brings critical scrutiny to the relationships created and sustained through its practices. Jurisdiction and jurisdictional practices like writing are about connecting norms and experience, and this recognition is part of how we make it possible to take responsibility.
Reflecting on political responsibility, Young (2011) writes: 'Because we dwell on the stage of history, and not simply in our houses, we cannot avoid the imperative to have a relationship with actions and events performed by institutions of our society, often in our name, and with our passive or active support. The imperative of political responsibility consists in watching these institutions, monitoring their effects to make sure that they are not grossly harmful, and maintaining organized public spaces where such watching and monitoring can occur and citizens can speak publicly and support one another in their efforts to prevent suffering' (Young, 2011: 88).
Extending this argument, jurisdiction is how we use the law to structure opportunities to take political responsibility. This can be a source of optimism about structural change, because of the way jurisdiction makes visible the (indeterminate and therefore contestable) relationship between norms and experience. Attending to jurisdiction allows us to resist the caricatures of law and democracy that can be produced by populist politics and by constitutional law, and in this way, the story of the Caring Society litigation provides rich resources for responding to the challenges of contemporary democracy. By scrutinizing the democratic credentials of jurisdictional practice and the quality of the relationships among jurisdictions, we can attend to the embodied relationships of caregiving and the institutional relationships of self-governance; we can 'seek out technical forms through which it is possible to take responsibility' (Dorsett and McVeigh, 2012: 135). We also seek technical forms through which it is possible to promote and sustain equality, and through which it is possible to promote and sustain care between self-governing people and their institutions.
Exploring Derivatives by Means of GeoGebra
This paper aims to explain how GeoGebra can be used in a differential calculus course to explore derivative concepts by providing dynamic visualizations of the concept. Design research methodology was used by designing an instructional design (a hypothetical learning trajectory) in the first phase and conducting the teaching experiment in the second phase. The data collected during the experiment consist of video recordings of the classroom activities, observations, interviews, and students' written work. In the third phase of the design research, the data were analyzed retrospectively by comparing the actual learning process with the hypothetical learning trajectory. The results show that the dynamic feature of GeoGebra offers the possibility of zooming in on a graph, which corresponds to taking the infinitesimal step by which a secant line transforms into a tangent line. This builds a foundation for an intuitive understanding of the definition of the derivative.
INTRODUCTION
Over the last few decades, the use of innovative technology in calculus teaching and learning has been of interest in mathematics education (Gravemeijer & Doorman, 1999; Hohenwarter et al., 2008; Özmantar et al., 2009; Diković, 2009; Haciomeroglu & Andreasen, 2013). By means of technology, the teaching and learning of calculus have shifted toward more active and dynamic forms in which students can explore various mathematical concepts through multiple representations, which is often difficult without technology. Previous studies (Zimmerman, 1991; Ahuja et al., 1998; Gravemeijer & Doorman, 1999; Lee, 2006; Weigand, 2014) suggested the importance of multiple representations, and of the connections among them, in calculus teaching and learning for a more meaningful understanding of the concepts. Having students construct and visualize mathematical objects from different perspectives can help them synthesize analytic and visual thinking, a process that will enhance their conceptual understanding (Haciomeroglu et al., 2009).
In this research, the dynamic mathematics software GeoGebra was used to explore derivative concepts. GeoGebra is open-source software for mathematics teaching and learning that offers geometry, algebra, and calculus features in a fully connected and easy-to-use software environment. GeoGebra's interface consists of an algebra view and a geometry (or graphics) view (see Figure 1), and it offers the possibility of creating mathematical objects dynamically. Mathematical objects can be constructed by entering values into the input bar or by using the geometry tools from the toolbar, and their algebraic (or numeric) and graphic representations will be displayed in the algebra and geometry windows, respectively. By using GeoGebra, students are able to construct, manipulate, and give arguments about mathematical objects (Hohenwarter et al., 2008).
Figure 1. GeoGebra's Interface
GeoGebra provides tools that can be used to develop and explore calculus ideas. Tall (2009) stated that calculus is fundamentally a dynamic conception and that the use of technology can help students build up dynamic embodied concepts in calculus. For example, tracing the changing slope of a curve's tangent gives a physical embodiment of the tangent that can be estimated numerically or investigated symbolically to give a precise description (Tall, 2009). In conjunction with this, Leibniz explained how to find a tangent line (as cited in Kleiner, 2001; Struik, 1969): to find a tangent means to draw a line that connects two points of the curve at an infinitely small distance, or the continued side of a polygon with an infinite number of angles, which for us takes the place of the curve. This infinitely small distance can always be expressed by a known differential like ds (see Figure 2). Students' difficulties in understanding the concepts of calculus are the result of a poor understanding of the geometrical and algebraic aspects of those concepts (Haciomeroglu et al., 2010; Zimmerman, 1991). Using dynamic software such as GeoGebra, which brings together the algebraic and geometric aspects of calculus, is believed to develop students' conceptions of the derivative by bringing visual thinking to bear on the concept. However, it is argued that although technology plays a role in integrating visualization with the numerical and symbolic aspects of calculus, the integration must be built into the structure of the course and into the design of particular topics and problems (Zimmerman, 1991). In conjunction with that, Amiel & Reeves (2008) suggested that educational technology researchers should be concerned with examining the technological process as it unfolds in schools and universities and its relationship to the larger society.
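Leibniz's description of two points at an infinitely small distance can be made concrete with a few lines of code. The sketch below is a minimal numerical illustration; the curve f(x) = x^3 and the point x0 = 1 are illustrative assumptions, not taken from the paper's materials.

```python
# As h shrinks, the secant slope through (x0, f(x0)) and (x0 + h, f(x0 + h))
# settles on a single value: the slope of the tangent at x0.

def f(x):
    return x ** 3  # illustrative curve; its tangent slope at x0 is 3 * x0**2

x0 = 1.0
for h in [1.0, 0.1, 0.01, 0.001, 0.0001]:
    secant_slope = (f(x0 + h) - f(x0)) / h  # rise over run for the two points
    print(f"h = {h:>8}: secant slope = {secant_slope:.6f}")

# The printed slopes approach 3.0, mirroring the secant line turning into
# the tangent line as the two points merge.
```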
Despite the numerous studies on the teaching and learning of calculus, it is noteworthy that research in this area has not capitalized on advances in design research to further link theories of learning with theories of instructional design (Rasmussen et al., 2014). Design-based research provides a cycle that promotes the reflective and long-term foundation upon which such research can be undertaken (Amiel & Reeves, 2008). Moreover, design research provides a productive perspective for theory development, and the results of design research, which encompass the design of activities, materials, and systems, are the most useful results for improving the education system, the ultimate goal of educational research (Edelson, 2002). Therefore, aiming at the development of learning trajectories for students' learning of the calculus derivative concept by means of GeoGebra, the present research attempts to answer the question: how can GeoGebra be used in a differential calculus course to explore derivative concepts by providing dynamic visualizations of the concept?
RESEARCH METHOD
In order to answer the research question, the design research methodology was used by conducting three phases: the design and preparation phase (thought experiment), the teaching experiment phase (instruction experiment), and the retrospective analysis phase (Gravemeijer & Cobb, 2006; Cobb et al., 2003). Each of these forms a cyclic process, both on its own and within the design research as a whole. Therefore, the design experiment consists of cyclic processes of thought experiments and instruction experiments (Freudenthal, 1991).
In the first phase of the design research, an instructional design, or hypothetical learning trajectory (HLT), was developed by taking into account the literature and previous studies in the field. The contributions of other lecturers of the calculus course were also valuable in developing the HLT, including the means designed to ensure the internal validity of this research. The hypothetical learning trajectory is made up of three components: the learning goal that defines the direction, the learning activities, and the hypothetical learning process, a prediction of how the students' thinking and understanding will evolve in the context of the learning activities (Simon, 1995). The HLT not only functions as a guideline for carrying out the teaching experiment and the retrospective analysis, but also provides justifications, hypotheses, and expectations for the choices made. It must be reported in such a manner that it can be retraced, or virtually replicated, by other researchers (Gravemeijer & Cobb, 2006). A criterion for virtual replicability is 'trackability' (Cobb et al., 2001).
The second phase is the teaching experiment, which was conducted in the Department of Mathematics, State University of Jakarta (Universitas Negeri Jakarta), during the Differential Calculus course in the 2016/2017 academic year, in a class of 44 students enrolled in the course. The experiment consists of two lessons, focusing on developing students' understanding of the definition of the derivative and on constructing derivative graphs. In the first lesson, the students worked in front of a computer with the GeoGebra software installed, while in the second lesson the students only observed how the lecturer demonstrated GeoGebra, due to some limitations. The data collected during the teaching experiment were video recordings of the classroom activities, observations, interviews, students' written work, and field notes.
The data were analyzed retrospectively by comparing the actual learning process with the hypothetical learning trajectory. Every meeting was videotaped, and some critical moments during the lessons were transcribed or summarized. Observations took place at the classroom, group, and individual levels, while interviews were conducted occasionally during the lessons or purposively afterwards. Students' written work was chosen, examined, and analyzed together with the other sources of data to improve triangulation.
RESULTS AND DISCUSSION
The following are descriptions of the instructional design, or hypothetical learning trajectory (HLT), along with the results obtained from the teaching experiment. The retrospective analysis is explained together with each result. The analysis focuses on how the use of GeoGebra can contribute to the development of students' conception of the derivative.
Approaching Tangent Line from Secant Line
The objective of this lesson is that students will be able to trace the changing slope of a tangent line to a curve by observing the infinitely small distance between two points that intersect the curve. By doing so, students were expected to develop an intuitive understanding of the limit process used in determining the slope of a curve's tangent. The concept of limit is said to be one of the essential concepts in calculus because it underpins the concepts of derivatives and integrals. It can be seen from the diagram (see Figure 3) that the concepts of the tangent to a curve and the rate of change underlie the concept of limits, and that limits are essential to the development of the concepts of derivatives and integrals in calculus. Figure 3. The concepts that underlie calculus (source: Lee, 2006). However, it seems that students understand limits and derivatives as separate notions. Students tend to think of derivatives as a procedure for obtaining another function in which the degree of the derivative is one less than the degree of the initial function. This can be seen in the transcript (see Figure 4) of a conversation that took place at the beginning of the lesson:
Lecturer: "Ok, so the derivative of x^3 is 3x^2." (then poses a question) "Anyone knows how to get the result?"
Student #2: "There is a formula for it."
Lecturer: "Ok, so you have learnt the formula in senior high school. Can you explain how you get the formula?" (There was no exact answer; some students said quietly to one another that it was just like that.)
Therefore, the following activities were chosen in order to develop students' conception of derivatives, addressing not merely procedural understanding but also conceptual understanding by providing a dynamic visualization of the concept. GeoGebra was chosen as the means because its dynamic feature can demonstrate the infinitesimal elements in geometrical figures when a secant line transforms into a tangent line.
In this activity, the lecturer demonstrated how a tangent line is obtained from a secant line. Figure 5 shows the GeoGebra window with the curve f(x) in its graphical view and the function formula in its algebraic view. We can also see from Figure 5 that there is a secant line intersecting the curve at points C and D, with its slope written as "m". By using GeoGebra, we can construct point D so that it is movable along the curve of f(x) by moving point B along the x-axis. If point B slides to the left along the x-axis, then point D gets closer to point C along the curve, and the same applies in the opposite direction. Furthermore, as point D moves along the curve, the number m, which represents the slope of the secant line, changes automatically. Students already have initial knowledge about the slope of a line: they need at least two points on the line to be able to find the slope by using the formula m = (y2 - y1)/(x2 - x1). Therefore, it was assumed that they could find the slope of the secant line that passes through points C and D (the number m representing the slope of the line was hidden during the teaching experiment). The following questions were expected to guide students' thinking in perceiving the changing slope of the line: "What happens to the slope of the line when point D is moving closer to C?" (asked repeatedly while moving point D) and "What is the slope of the line?" (asked repeatedly while moving point D).
These questions lead to the next idea, the infinitesimal distance between C and D, because when point D is very close to point C, students can observe in the algebra view that the coordinates of C and D differ only slightly. This could be the reason why some students answered that when point D meets C, the slope of the line is zero divided by zero, or infinite, although some of them answered zero or one as the slope of the line. This was very surprising because the students failed to relate their previous knowledge, that a horizontal line has slope zero and a vertical line has infinite slope, to the fact that the line observed was neither horizontal nor vertical. This might happen because they depend solely on the formula m = (y2 - y1)/(x2 - x1). An anticipation was made in the hypothetical learning trajectory for the case when students cannot perceive the infinitesimal distance between points C and D. The lecturer gave the students a hint by writing the coordinates of point C as (x, f(x)) and the coordinates of point D as (x + h, f(x + h)), where h is very small. The discussion continued when one of the students came to the front of the classroom and wrote the expression [f(x + h) - f(x)]/h, and the lecturer asked what happens to the line when h is approaching zero. The students answered that the secant line was transforming into a tangent line of the curve at point C. So the question was repeated: "What is the slope of the line now?" Since none of the students came up with a solution, the lecturer explained that the slope can be determined by taking the limit of the expression as h approaches zero, that is, lim_{h→0} [f(x + h) - f(x)]/h. Therefore, the definition of a tangent line below was introduced to students.
Definition of Tangent Line
The tangent line to the curve y = f(x) at the point P(c, f(c)) is the line through P with slope m = lim_{h→0} [f(c + h) - f(c)]/h.
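The "zooming in" behaviour that GeoGebra makes visible can also be checked numerically against this definition. In the minimal sketch below, the curve f(x) = x^3 and the point of tangency c = 1 are illustrative assumptions; the script measures how far the curve strays from its tangent line on ever-smaller windows.

```python
import numpy as np

f = lambda x: x ** 3            # illustrative curve (an assumption, not from the paper)
fprime = lambda x: 3 * x ** 2   # its known derivative, used only to draw the tangent
c = 1.0                         # point of tangency (c, f(c))

for half_width in [1.0, 0.1, 0.01]:
    xs = np.linspace(c - half_width, c + half_width, 201)
    tangent = f(c) + fprime(c) * (xs - c)  # the line from the definition above
    gap = np.max(np.abs(f(xs) - tangent))  # worst curve-to-line gap on this window
    print(f"window half-width {half_width}: max gap = {gap:.2e}")

# The gap shrinks roughly like the square of the window width, which is why
# zooming in makes the curve and its tangent line visually indistinguishable.
```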
Tangent Line and Rate of Change
After some discussion, the lesson continued with hands-on work in GeoGebra. Students worked in pairs at a computer and solved the problems below. Problem 1 was chosen because it allows students to use two different methods for determining the slope of the tangent: first by estimation, and second by the definition of the tangent. The first method requires a visualization of the problem, while the second method does not.
The second problem was chosen because instantaneous velocity is found in the same way as the slope of the tangent line: finding the instantaneous velocity at time t = 3 is the same as finding the slope of the tangent line to the curve of s(t) at t = 3. The function was chosen because students might find difficulties in solving the problem by using the definition, or even the rules of differentiation, so the solution of the problem relies on visualizing it. Thus, this problem was intended not only to show the connection between the tangent line and the rate of change by means of GeoGebra, but also to build students' flexibility in solving problems. GeoGebra is easy-to-use software, so students only need to input the formula of a function in the input bar and press the enter button. Nevertheless, some assistance with using GeoGebra to solve the problems was provided in the classroom (see Figure 6).
Figure 6. Tangent line and rate of change problems (source: Varberg et al., 2006):
1. Consider y = x^3 - 1.
a. Sketch its graph as carefully as you can.
b. Draw the tangent line at (2, 7).
c. Estimate the slope of this tangent line.
d. Calculate the slope of the secant line through (2, 7) and (2.01, 2.01^3 - 1).
e. Find, by the limit process, the slope of the tangent line at (2, 7).
2. If a point moves along a line so that its distance s (in feet) from 0 is given by s = √(t + 2) at time t seconds, find its instantaneous velocity at t = 3.
While the students were engaging with the problems, it was observed that they experienced difficulties with the instruction "estimate" in problem 1(c). GeoGebra provides a grid in the graphics view, so students can show the grid to be able to estimate the coordinates of another point on the line. Moreover, it is also possible to move the graphics view and to zoom in and out to see the details of the graph in the coordinate system. This function helps students in solving Problem 2. Instead of using the grid, students move the graphics view until they can see the intersection between the curve's tangent and the axis. By choosing the "intersect" tool in the toolbar, the points of intersection can be depicted in the graphics view, with the coordinates of each point displayed in the algebra view. In this way, they can find the slope of the tangent, or the rate of change. Although GeoGebra provides the "slope" tool in the toolbar to show the slope of the tangent immediately, the students experienced how to approach the problem visually (see Figure 7).
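Parts (d) and (e) of Problem 1 can also be checked away from the graphics view. The following is a small sketch using Python with SymPy (a tooling choice of this illustration, not of the original lesson); it uses only the numbers stated in the problem.

```python
import sympy as sp

f = lambda t: t ** 3 - 1  # the function from Problem 1
h = sp.symbols('h')

# (d) slope of the secant line through (2, 7) and (2.01, 2.01**3 - 1)
secant_slope = (f(2.01) - 7) / (2.01 - 2)
print(round(secant_slope, 4))  # 12.0601

# (e) slope of the tangent line at (2, 7) by the limit process
tangent_slope = sp.limit((f(2 + h) - 7) / h, h, 0)
print(tangent_slope)  # 12, consistent with the estimate read off the graph
```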
Definition of the Derivative
The learning goal of this lesson is that students will be able to determine the derivative of a function by using the definition. Since the preceding activities built a foundation for the derivative concept, there was no crucial struggle in finding the derivative by using the definition. This was shown by the results of the midterm test, in which the majority of the students were able to solve the conceptual problem shown in Figure 8. Of the 44 students enrolled in the course, about 52.1% were able to solve the problem without any mistakes, whereas 16% failed to show an appropriate solution. The remaining 31.9% of the students attempted to solve the problem by the definition, although some minor mistakes in algebraic operations and arithmetical procedures occurred in their solutions. Therefore, we can say that the majority of the students were able to determine the derivative by using the definition. Figure 9 shows a typical mistake appearing in students' solutions, where students did not use the distributive property when expanding the expression f(x + h). These findings suggest that the dynamic feature of GeoGebra plays a significant role in the development of students' conception of the derivative. By observing how the secant line approaches the tangent line, students experienced the limit process visually. This intuitive understanding built a foundation for the definition of the derivative. Students can explain, by using the definition of the derivative, how the derivative function f'(x) = nx^(n-1) is derived from the function f(x) = x^n, where n is a positive integer. This fact was in contrast with their previous knowledge about the derivative.
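The distributive step that trips students up in Figure 9 can be made explicit with a computer algebra system. The sketch below assumes the concrete case f(x) = x^3 (the paper treats a general positive integer n); expanding f(x + h) first is exactly the move the erring students skipped.

```python
import sympy as sp

x, h = sp.symbols('x h')

expanded = sp.expand((x + h) ** 3)  # distribute f(x + h) for f(x) = x**3
print(expanded)                     # h**3 + 3*h**2*x + 3*h*x**2 + x**3

quotient = (expanded - x ** 3) / h  # the difference quotient from the definition
print(sp.limit(quotient, h, 0))     # 3*x**2, i.e. n*x**(n - 1) with n = 3
```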
Constructing the Graph of the Derivative
According to Zimmerman (1991), one of the objectives in calculus is to understand differentiation graphically. This lesson was intended to build the connection between the graph of a function and the graph of its first derivative. Students were therefore expected to be able to sketch the graph of the first derivative function based on the graph of the function.
To demonstrate how GeoGebra can be used to see the relationship between the graph of a function and the graph of its first derivative, the graph of f(x) and a tangent line at an arbitrary point were drawn. Since the slope of the tangent line is equal to the y-coordinate of the first derivative function, plotting the point (x, f'(x)) along the domain of the function will create the graph of the first derivative. This can be shown dynamically in GeoGebra by tracing every point (x, f'(x)) in the domain while sliding the slider that represents the slope of the curve's tangent (see Figure 10). At this point, by observing how GeoGebra represents the changing slope dynamically, students were expected to be able to perceive the following (a computational version of this construction is sketched below):
1) The value of the slope of a curve's tangent at x = c equals f'(c).
2) The graph of the first derivative function can be constructed by plotting the points (x, f'(x)) along the domain of the function.
3) As the tangent line moves along the curve, the value of the slope changes. On intervals where the tangent line slopes upwards it gives a positive slope, so the point (x, f'(x)) will always be above the x-axis. On intervals where the tangent line slopes downwards it gives a negative slope, so the point (x, f'(x)) will always be below the x-axis; a horizontal tangent gives a slope of zero.
4) Consequently, students will be able to see the connection between this representation and the extreme values of a function: the function reaches an extreme value when its first derivative equals zero.
During the lesson, students had difficulties in translating the slope of a tangent into the graph of the derivative. Once again, they were confronted with the word "estimate" in the problem above (see Figure 11). The problem depends solely on visual thinking, where students have to estimate the values of f'(-1), f'(1), f'(4), and f'(6); in other words, students have to find the slope of the tangent to the curve at x = -1, x = 1, x = 4, and x = 6. It took quite a lot of time to work through this problem before they could finally feel the sense of estimating.
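As noted above, the Figure 10 construction can also be scripted outside GeoGebra. The sketch below is a minimal Python rendering of the same idea; the sample function f(x) = x^3 - 3x is an illustrative assumption, chosen because it has one local maximum and one local minimum.

```python
import numpy as np
import matplotlib.pyplot as plt

f = lambda x: x ** 3 - 3 * x     # illustrative function with two extreme values
xs = np.linspace(-2.5, 2.5, 400)
slopes = np.gradient(f(xs), xs)  # numerical tangent-slope estimate at each x

fig, ax = plt.subplots()
ax.plot(xs, f(xs), label="f(x)")
ax.plot(xs, slopes, label="estimated f'(x)")  # the traced points (x, f'(x))
ax.axhline(0, linewidth=0.5)     # the x-axis, for reading off the sign of f'
ax.legend()
plt.show()

# Where f slopes upward the plotted slope curve sits above the x-axis, where f
# slopes downward it sits below, and it crosses zero at the extreme values
# (here near x = -1 and x = 1), matching observations 3) and 4) above.
```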
Therefore, the results show that the first activity, "approaching the tangent line from a secant line", in the hypothetical learning trajectory built the foundation of students' conceptual understanding of the definition of the derivative. This result is in line with Tall (2009): tracing the changing slope of a curve's tangent gives a physical embodiment of the tangent that can be estimated numerically or investigated symbolically to give a precise description. Moreover, by providing the dynamic visual representation of the tangent line emerging from a secant line, the students could observe the infinitely small distance (ds) explained by Leibniz in the Leibniz triangle. Thus, the condition h → 0 in the limit process of lim_{h→0} [f(x + h) - f(x)]/h became meaningful for the students.
Hohenwarter et al. (2008) explained that by using GeoGebra, students are able to construct, manipulate, and give arguments about mathematical objects. In the next activity, the students were provoked with a problem that cannot easily be solved by the rules of differentiation (Figure 6, no. 2). The results showed that some students found difficulties in finding the derivative of the function by applying the rules of differentiation. By providing this type of function, students were pushed to see that by visualizing the function using GeoGebra, they can easily construct the graph of the function and the tangent line at x = 3, and then find another point on the tangent line to compute its slope. This activity was expected to build students' sense of estimating for the next activity in the learning trajectory.
According to Zimmerman (1991), students should be able to sketch the graph of the derivative, given the graph of a function. This ability connects many ideas related to the derivative, such as critical points, inflection points, and other characteristics of curves. Thus, the last activity, "constructing the graph of the derivative", was designed. However, the results show that the connections among these ideas of the derivative were not obvious to the students. This suggests that students should be given more experience of working with GeoGebra and of observing how the graph of the derivative is constructed dynamically. Although the dynamic characteristics of GeoGebra have the potential to enhance students' thinking in exploring the concept of the derivative, the students need more experience and involvement in working with GeoGebra.
CONCLUSION
This study explores how GeoGebra can be used to present a dynamic approach to introducing the derivative. First, GeoGebra is used to visualize dynamically the infinitesimal element in the concept of the derivative, by designing the first activity: approaching the tangent line to a curve from a secant line. In this way, the limit process in which the secant line approaches the tangent line begins to make sense, and students show their understanding of the definition of the derivative. This activity builds a strong foundation, so that the students can determine the derivative of a function algebraically by using the definition. Moreover, GeoGebra not only gives the geometric representations of a situation but also provides possibilities for approaching problems from different perspectives. By providing the means GeoGebra and the
A University Landscape for the Digital World
As the digital transformation clearly highlights the role of universities and institutes of higher education in shaping a higher education system that is more open and provides education to everyone who can benefit from it, this study seeks to analyze, in more detail, which developments are having an impact on higher education, and develops future scenarios for education in 2030. The UK study Solving future skills challenges implies that the linear model of education-employment-career will no longer be sufficient in the future, requiring new combinations of skills, experience, and collaboration from educators and employers. This UK study serves as a starting point for the AHEAD trend analysis of a higher education landscape in 2030. Five premises, ranging from "No naive innovation view" to "Realistic approach" and "Diversity in higher education", provide the basis for a search for concepts for the higher education of the future.
In the future, universities and institutes of higher education will play an even more central role in managing and shaping the digital transformation.
Higher education fulfills several objectives for society. In the areas of research and teaching, it primarily creates an educational space to prepare for the future. It prepares students for their further personal and professional development, which will be subject to considerable change. It also provides a space for reflexive thinking about what it means to be a citizen of the globalized, digitized world, ultimately offering students opportunities to further develop their character and attitudes.
In addition, the higher education system will need to be more open in the future, providing access to quality education to everyone who can benefit from it. This study addresses the relationship between higher education and initial and continuing vocational education and training, which are still strongly separated in most education systems worldwide (and particularly in Germany).
The potential of digitization for universities lies not only in the function it can add through e-learning but also in its integrative force, which can improve higher education as a whole, as the 2018 position paper "Bologna Digital" makes clear (Gibb, Hofer, & Klofsten, 2018; Orr, van der Hijden, Rampelt, Röwert, & Suter, 2018a, 2018b). The present study incorporates this idea. Digitization will lead to changes in the higher education landscape; indications of such changes are presented here. This study does not assume that the university landscape as a whole will be a victim of destructive innovation ("disruption"). The innovations developed in the Silicon Valley environment (keyword: MOOCs), despite the high expectations associated with them, have not yet revolutionized higher education. Instead, universities have adopted these innovations and integrated them into existing degree programs (Jansen & Konings, 2017; Reich & Ruipérez-Valiente, 2019). However, digital developments can also help universities redefine and better fulfill their role. The emergence of innovative new models and organizations will enrich the higher education landscape. Future progress is not just a matter of retrofitting longstanding higher education approaches (Kelly & Hess, 2013), but also of extending them to foster sustainable changes.
What is meant by "Digitization" as a Process?
According to the Oxford English Dictionary, digitization is the conversion of text, images, or sound into a digital form that can be processed by a computer. This material process in itself has little influence. To achieve a significant impact, digitization must be integrated into a far-reaching process, and a corresponding ecosystem, that uses digital materials for digital transformation (in short: digitalization) (Brennen & Kreiss, 2016). The Internet and digital networks are means to connect different types of information, to generate new data flows, and to structure communication channels for improved interaction between people and processes. The new information nodes and networks enable a new form of process organization (Castells, 2010; Cerwal, 2017). The application of new digital technologies is therefore not only a question of what technology can do, but also of how it interacts with other established practices and with individual and organizational routines. The particular challenge of the twenty-first century is to ensure that all sectors benefit from the increasing digital transformation of society.
The aim of this study is to analyze in more detail the developments that are having a major impact on the environment of higher education, and to develop scenarios for higher education in 2030 on this basis. The present study thus meets a central demand of the position paper by Baumgartner (2018) on the future role of higher education: "We need more creative scenarios with which we can think about the future of social developments and their possible consequences for our institutions (such as universities)." The organization that represents universities in the UK has recently carried out a study of higher education requirements in a digital, networked world (Universities UK, 2018). The conclusion of this study provides a suitable starting point for the AHEAD study: "The linear model of education-employment-career will no longer be sufficient. The pace of change is accelerating, necessitating more flexible partnerships, quicker responses, different modes of delivery and new combinations of skills and experience. Educators and employers need to collaborate more closely, and develop new and innovative partnerships and flexible learning approaches." (ibid.) Thus, concepts are sought for the higher education of the future, concepts which must build on the current structure of higher education while growing ever stronger. Such concepts could have an evolutionary and transformative effect on today's higher education system. This search is based on the following five premises:
• No naive innovation view: It can be assumed that some parts of the (institutionalized) system will resemble the current one, while innovations will emerge both within this system and through new organizations.
• Transfer and renewal through digitization: Digitization is expected to have an impact on many areas of higher education provision and beyond. In addition, new forms of higher education will become increasingly sustainable and scalable.
• Realistic approach: The scenarios should, where possible, have points of contact with current systems of higher education, allowing their potential, including the tensions inherent in the models, to be demonstrated on the basis of exemplary developments. The year 2030 has been chosen as a future endpoint to ensure that innovations remain linked to the current situation and that the perspective does not become too speculative.
• The perspective of the learner: The learner's path through the educational system is the focus of this investigation. The educational provision offered by universities depends on learner requirements.
• Diversity in higher education: In contrast to other future-oriented studies, this paper does not assume that there will be one model of higher education in the future. Instead, we assume that the higher education landscape will continue to diversify and that alternative learning and higher education paths will develop in response to various challenges and ultimately coexist. For this reason, the study refers to "higher education" in general, and not simply to institutes of higher education.
Health Seeking Behavior among Rural Left-Behind Children: Evidence from Shaanxi and Gansu Provinces in China
More than 60 million children in rural China are “left-behind”—both parents live and work far from their rural homes and leave their children behind. This paper explores differences in how left-behind and non-left-behind children seek health remediation in China’s vast but understudied rural areas. This study examines this question in the context of a program to provide vision health care to myopic rural students. The data come from a randomized controlled trial of 13,100 students in Gansu and Shaanxi provinces in China. The results show that without a subsidy, uptake of health care services is low, even if individuals are provided with evidence of a potential problem (an eyeglasses prescription). Uptake rises two to three times when this information is paired with a subsidy voucher redeemable for a free pair of prescription eyeglasses. In fact, left-behind children who receive an eyeglasses voucher are not only more likely to redeem it, but also more likely to use the eyeglasses both in the short term and long term. In other words, in terms of uptake of care and compliance with treatment, the voucher program benefitted left-behind students more than non-left-behind students. The results provide a scientific understanding of differential impacts for guiding effective implementation of health policy to all groups in need in developing countries.
Introduction
More than 60 million children in rural China are "left-behind"-both parents live and work far from their rural homes and leave their children behind [1]. The left-behind phenomenon is driven in large part by China's rapid development and urbanization, and in particular by the migration of large numbers of rural residents from their rural homes to urban areas in search of better job opportunities [2]. It is common for migrant parents to leave their children behind with a caregiver-typically the paternal grandparents-in their home communities [1,3,4]. This has created a new, large, and potentially vulnerable subpopulation of left-behind children in rural areas.
There is great concern about whether left-behind status disproportionately hurts children's health and well-being. Previous research suggests that health outcomes are worse for left-behind children than for children whose parents are at home [5][6][7][8][9]. Additionally, left-behind children may be at greater risk of depression, anxiety, and loneliness as a result of separation from their parents [5,8,[10][11][12][13][14]. Compared with children living with both parents, left-behind children are also reported to have poorer physical development (e.g., higher rates of stunting and wasting), lower nutrition and higher rates of anemia [6,7].
However, the reasons why left-behind children may have poorer health than their non-left-behind peers are not well studied. One possible mechanism for worse health outcomes is that elderly surrogate caregivers may not be as proactive as parents in seeking health remediation for children. This may be especially true for health conditions that are not acute or have few obvious symptoms, but for which treatment can yield important gains in wellbeing and/or productivity [3,15]. For such conditions, preemptive action on the part of caregivers may factor highly in uptake of remediation.
Little evidence exists to substantiate the difference in healthcare-seeking behaviors between left-behind children and children whose parents are at home (hereafter, left-behind and non-left-behind families). Particularly, little is known about how left-behind families respond to the healthcare needs of children in comparison to caregivers of non-left-behind children. Some research suggests that grandparents may be less likely to help children in need because they grew up decades earlier and may be less inclined to understand health risks and resources [16]. Systematic differences in health seeking behavior between working-age and elderly caregivers also may occur on account of variations in the availability of time, liquidity constraints, access to and availability of information, or frequency of visits to areas where healthcare services are present [17][18][19][20][21]. In rural China, distances to the nearest county seat, which is often where reliable service is based, can involve hours of travel, potentially affecting the decision to seek care and the actual uptake of care for older caregivers [22,23]. To the extent that families of left-behind children behave differently in seeking healthcare remediation, targeted policies may be needed to reach this important subgroup.
A series of studies suggest that approximately 10 to 15% of school-aged children in the developing world have common vision problems [24][25][26][27]. In most cases children's vision problems can be easily corrected by timely and proper fitting of quality eyeglasses [28]. Unfortunately, studies in a variety of developing countries document that 35 to 85% of individuals with refractive errors do not have eyeglasses [29][30][31].
In rural areas of China the prevalence of vision problems among children is among the highest in the world [27,32,33]. One recent study in the same area as our study shows that about 25% of students in grades 4 and 5 have myopia [31]. However, recent investigations in rural China demonstrate that fewer than one third of children needing glasses actually have glasses and even fewer wear them [34]. According to Yi et al. [31], more than 85% of children in rural China with myopia do not wear glasses.
For myopic children, wearing eyeglasses has substantial benefits. Beyond the improvement in quality of life due to improved vision, providing children with glasses has been shown to significantly improve academic performance [33,35]. The effect of glasses on the schooling outcomes of students is estimated to be at least as large as that of well-known education interventions deemed highly successful [36].
Several factors contribute to the high rates of uncorrected vision problems found in these studies. Research suggests that the lack of awareness about the importance of wearing glasses and misinformation contribute to low usage rates in China [34]. For example, there are commonly held (but mistaken) views in many countries, including China, that wearing eyeglasses will harm one's vision [31,34].
Due to such misinformation and limited access to screenings and exams, eyeglasses wear remains limited in China [37]. Specifically, rural students and their families may not know how to go about getting their first pair of glasses. As a consequence, it is possible that a well-run government program that subsidized vision care services for youth might be needed.
The overall objective of this study is to understand variations in how left-behind and non-left-behind families seek health remediation.
To meet this objective, the study examines this question in the context of an in-the-field randomized controlled experiment designed to determine the effect of a program providing subsidized eyeglasses to myopic children. The main results of the study are reported in Ma et al. (2014), which examines program impacts on eyeglasses uptake and usage, as well as schooling outcomes for children [33,38]. The intervention, which subsidizes vision care, is a useful setting in which to study differential health seeking behaviors between left-behind and non-left-behind families because preemptive behavior on the part of caregivers, rather than a response to obvious symptoms in the child, may be a primary driver of timely remediation.
In carrying out a sub-group analysis of differential responses between left-behind children and non-left behind children, the current study focuses on three primary factors. First, the analysis determines to what extent families, irrespective of left-behind status, seek vision care when they are made aware that their child may suffer from a vision problem. Second, the paper explores whether a voucher subsidy designed to boost uptake of care differentially affects left-behind and non-left-behind family subgroups. In approaching this second factor, the analysis compares both the uptake of care and the usage of eyeglasses over the short and longer term. Finally, the study analyzes how distance to the care provider (a proxy for the cost of seeking health care) may affect uptake of vision care service across left-behind and non-left-behind subgroups, both with and without a voucher subsidy. In these ways, the study aims to understand how health interventions can mediate differences in health seeking behavior as they relate to left-behind and non-left-behind families.
The remainder of the paper is organized as follows: Section 2 describes the experiment and data collection. Section 3 presents the results. Section 4 discusses the policy implications of the experimental results, and Section 5 concludes.
Study Area and Sampling Technique
The experiment from which the data in this paper are taken took place in two adjoining provinces of western China: Shaanxi and Gansu. These two provinces have a total population of 65 million and can reasonably represent poor rural areas in northwest China. In 2012, Shaanxi's gross domestic product per capita of USD 6108 was ranked 14th among China's 31 provincial administrative regions, and was similar to that for the country as a whole (USD 6091) in the same year. Gansu's gross domestic product per capita, USD 3100, made Gansu the second-poorest province in the country according to the 2012 China National Statistics Yearbook.
In each of the provinces, one prefecture (each containing a group of seven to ten counties) was included in the study. The prefectures are fairly representative of the provinces. The prefecture in Gansu has a population of 3.3 million making up 10% of the provincial population. The GDP per capita of USD 2680 is very close to the provincial level. The prefecture in Shaanxi has a population of 3.4 million, about 10% of the provincial population. While GDP per capita is USD 13,100, higher than the provincial average, in no small part this is due to mining and other extractive industries; income per capita is close to the provincial average.
To choose the sample, we obtained a list of all rural primary schools in each prefecture. To minimize the possibility of inter-school contamination, we first randomly selected 167 townships and then randomly selected one school per township for inclusion in the experiment. Within schools, our data collection efforts (discussed below) focused on grade 4 and grade 5 students. From each grade, one class was randomly selected for surveys and visual acuity examinations. More than 95 percent of poor vision is due to myopia; for simplicity, we use myopia to refer to vision problems more generally.
Baseline Survey
A baseline survey was conducted in September 2012. The baseline survey collected detailed information on schools, students and households. The school survey collected information on school infrastructure and characteristics (including distance to county seat). A student survey was given to all students in selected grade 4 and grade 5 classes. The student survey collected information on basic background characteristics, including age, gender, whether boarding at school, whether they owned eyeglasses before, and their knowledge of vision health. Students were also asked about whether their parents worked away from home for more than six months per year. Students indicating this to be the case for both parents were defined as left-behind children for the purpose of our analysis. Students that reported one or both parents at home for more than 6 months a year were considered non-left-behind children. Household surveys were also given to all students, which they took home and filled out with their caregivers. The head teacher of each classroom collected the completed household forms and forwarded them to the survey team. The household survey collected information on households that children would likely have difficulty answering (e.g., parents' education levels and the value of family assets).
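To make this coding rule concrete, here is a minimal sketch of how left-behind status could be derived from the two survey questions described above; the data frame and column names are hypothetical stand-ins, not the study's actual variable names.

```python
import pandas as pd

# Hypothetical survey columns; the actual instrument may name these differently.
students = pd.DataFrame({
    "father_away_6mo": [1, 1, 0, 0],  # father works away >6 months/year
    "mother_away_6mo": [1, 0, 1, 0],  # mother works away >6 months/year
})

# Left-behind: BOTH parents away for more than six months per year;
# otherwise the student is treated as non-left-behind.
students["left_behind"] = (
    (students["father_away_6mo"] == 1) & (students["mother_away_6mo"] == 1)
).astype(int)
print(students)
```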
Vision Examination
At the same time as the school survey, a two-step vision examination was administered to all students in the randomly selected classes in all sample schools. First, a team of two trained staff members administered visual acuity screenings using Early Treatment Diabetic Retinopathy Study (ETDRS) eye charts, which are accepted as the worldwide standard for accurate visual acuity measurement [39]. Students who failed the visual acuity screening test (cutoff was defined by visual acuity of either eye less than or equal to 0.5, or 20/40) were enrolled in a second vision test that was carried out at each school one or two days after the first test.
The second vision test was conducted by a team of one optometrist, one nurse and one staff assistant and involved cycloplegic automated refraction with subjective refinement to determine prescriptions for students needing glasses. These are standard procedures when conducting a vision exam for children to prescribe eyeglasses. Cycloplegia refers to the use of eye drops to briefly paralyze the muscles in the eye that are used to achieve focus. The procedure is commonly used during vision exams for children to prevent them from reflexively focusing their eyes and rendering the exam inaccurate. It was after the exam that prescriptions and (in the case of the Voucher group) vouchers and instructions were given to the students to take home to their families.
In order to calculate and compare different visual acuity levels we require a linear scale with constant increments [40,41]. In the field of ophthalmology/optometry, LogMAR is one of the most commonly used continuous scales. This scale uses a logarithmic transformation: LogMAR = log10(MAR). Here the variable MAR is short for Minimum Angle of Resolution, which is defined as the inverse of visual acuity: MAR = 1/VA. LogMAR offers a relatively intuitive interpretation of visual acuity measurement. It has a constant increment of 0.1 across its scale; each increment indicates approximately one line of visual acuity loss on the ETDRS chart. The higher the LogMAR value, the worse one's vision is.
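As an illustration of this transformation, the short sketch below converts decimal visual acuity to LogMAR; note that the screening cutoff of VA ≤ 0.5 (20/40) corresponds to LogMAR ≥ 0.3.

```python
import math

def logmar(visual_acuity: float) -> float:
    """Convert decimal visual acuity (VA) to LogMAR.

    MAR (Minimum Angle of Resolution) is the inverse of VA, and
    LogMAR = log10(MAR); higher values indicate worse vision.
    """
    mar = 1.0 / visual_acuity   # MAR = 1 / VA
    return math.log10(mar)      # LogMAR = log10(MAR)

print(round(logmar(1.0), 2))  # 0.0: normal vision
print(round(logmar(0.5), 2))  # 0.3: the 20/40 screening cutoff
```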
Eyeglasses Uptake and Usage
Our analysis focuses on two key variables: eyeglasses uptake rate and usage rate. A short-term follow up survey was done in early November 2012 (one month after vouchers were distributed). A long-term follow up survey was conducted in May 2013 (seven months after vouchers were distributed).
Uptake in our study is defined by ownership. Specifically, we define uptake as a binary variable taking the value of one if a student owns a pair of eyeglasses at baseline or acquired one during the program (regardless of the source). Students that had been diagnosed with myopia in baseline vision examination were given a short survey that included questions about whether they owned eyeglasses and how they acquired them. If a student indicated he or she did have glasses but was not wearing them we confirmed by asking to see them. If the eyeglasses were at home, we followed up with phone calls to the caregivers.
Usage is a binary variable measured by students' survey responses on whether they wear eyeglasses regularly when they study and outside of class. To reduce reporting bias, a team of two enumerators made unannounced visits to each of the 167 schools in advance of the long-term follow-up survey. During the unannounced visits, enumerators were given a list of the students diagnosed with myopia at baseline and recorded individual-level information on their usage of glasses. We double-checked student responses against our unannounced-visit data; the significant correlation between the two suggests that the student-reported data are reliable.
Experimental Design
Following the baseline survey and vision tests (described above), schools were randomly assigned to one of two groups: Voucher or Prescription. Details of the nature of the intervention in each group are described below. To improve power, we stratified the randomization by county and by the number of children in the school found to need eyeglasses. In total, this yielded 45 strata. Our analysis takes this randomization procedure into account [42]. The trial was approved by Stanford University and registered (No. ISRCTN03252665, registration site: http://isrctn.org).
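The sketch below shows one way such a stratified assignment could be implemented; the county codes, the three-way size bins, and the seed are illustrative assumptions (the study itself formed 45 strata from county and the count of children needing eyeglasses).

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=2012)  # seed is illustrative

# Hypothetical school-level frame: county and count of myopic students.
schools = pd.DataFrame({
    "school_id": range(167),
    "county": rng.integers(0, 15, size=167),
    "n_myopic": rng.integers(2, 30, size=167),
})

# Stratify by county and a coarse bin of the number of children needing
# glasses, then randomize schools to the two arms within each stratum.
schools["size_bin"] = pd.qcut(schools["n_myopic"], q=3, labels=False, duplicates="drop")

def assign_within_stratum(group):
    arms = np.array(["Voucher", "Prescription"])
    ranks = rng.permutation(len(group))   # random order within stratum
    return pd.Series(arms[ranks % 2], index=group.index)

schools["arm"] = (
    schools.groupby(["county", "size_bin"], group_keys=False)
           .apply(assign_within_stratum)
)
print(schools["arm"].value_counts())
```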
The two groups were designed as follows: Voucher Group: In the Voucher group, each student diagnosed with myopia was given a voucher as well as a letter to their parents informing them of their child's prescription. Details on the diagnosis procedures are provided in the Vision Examination subsection. The vouchers were non-transferable. Information identifying the student, including each student's name, school, county, and prescription, was printed on each voucher, and students were required to present their identification in person to redeem the voucher. The voucher was redeemable for one pair of free glasses at an optical store in the county seat. Program eyeglasses were pre-stocked in the retail store of one previously chosen optometrist per county. The distance between each student's school and the county seat varied substantially within our sample, ranging from 1 km to 105 km with a mean distance of 33 km. While the eyeglasses were free, the cost of the trip in terms of time and any costs associated with transport were borne by the student's family.
Prescription Group: Myopic students in the Prescription group were given a letter for their parents informing them of their child's myopia status and prescription. No other action was taken. If they opted to purchase glasses, they would also have to travel to the county seat. Unlike the families in the Voucher group, the families in the Prescription group would have to select an optical shop and purchase the eyeglasses themselves. Figure 1 shows the research design of this study.
Tests for Balance and Attrition Bias
Of the 13,100 students in 167 sample schools who were given vision examinations at baseline, 2024 (16%) were found to require eyeglasses. Only these students are included in the analytical sample. There were 988 students in 83 sample schools that were randomly assigned to the Voucher group and 1036 students in 84 sample schools that were randomly assigned to the Prescription group. Table 1 shows the balance check of baseline characteristics of the students in our sample across experimental groups. The first column shows the mean and standard deviation in the Prescription group. Column 2 shows the mean and standard deviation in the Voucher group. We then tested the difference between students in the Prescription and Voucher groups, adjusting for clustering at the school level. The results in column 3 suggest a high level of overall balance across the two experimental groups. There is no significant difference between the two groups in student age, gender, boarding status, grade, ownership of eyeglasses, belief that eyeglasses will harm vision, visual acuity, parental education, family member eyeglasses ownership, family assets, distance from school to county seat, or parental migration status.
Irrespective of left-behind status, only 14% of students who needed glasses had them at the time of the baseline (row 5). Fifty percent of students in the sample lived with both parents at home (row 13). About 12% of students had parents who had both migrated elsewhere for work and were considered left-behind children (row 14). The other 38 percent of the children in our sample lived with one parent, either their father or mother.
Attrition at the short- and long-term visits was limited (Figure 1). Only 35 (1.7%) of 2024 students could not be followed up in the short term and 74 (3.6%) could not be followed up in the long term. Due to attrition, the sample of students for whom we have a measure of eyeglasses uptake and usage in the short term and long term is smaller than the full sample of students at the baseline. We therefore tested whether the estimates of the impacts of providing the voucher on eyeglasses uptake and usage are subject to attrition bias. To do so, we first constructed indicators for attrition at the short term or long term (1 = attrition). We then regressed different baseline covariates on a treatment indicator, the attrition indicator (one for the short term and long term, respectively), and the interaction between the two. We also adjusted for clustering at the school level. The results are shown in Table 2. Overall, we found no statistically significant differences in the attrition patterns between treatment groups and control groups on a variety of baseline covariates as of the short term and long term, with one exception: the short-term survey results showed that, compared with attritors in the Prescription group, attritors in the Voucher group were less likely to believe that eyeglasses can harm vision (Table 2, panel A, row 3, column 6). This difference is significant at the 5% level. Notes for Table 2: All estimates adjust for clustering at the school level. Robust standard errors are reported in parentheses; n = 2024; *** p < 0.01, ** p < 0.05, * p < 0.1.
Statistical Approach
Unadjusted and adjusted ordinary least squares (OLS) regression analyses are used to estimate how eyeglasses uptake and usage changed for children in the Voucher group relative to children in the Prescription group. We estimate parameters in the following models for both the short and long term. The basic specification of the unadjusted model is as follows:

y_ijt = α + β·Voucher_j + ε_ijt (1)

where y_ijt is a binary indicator for the eyeglasses uptake or eyeglasses usage of student i in school j in wave t (short term or long term). Voucher_j is a dummy variable indicating schools in the Voucher group, taking on a value of 1 if the school that the student attended was assigned to the Voucher group and 0 if it was in the Prescription group. ε_ijt is a random error term.
To improve the efficiency of the estimated coefficient of interest, we also adjust for additional covariates (X_ij). We call Equation (2) below our adjusted model:

y_ijt = α + β·Voucher_j + γ'X_ij + ε_ijt (2)

where X_ij represents a vector of student, family, and school characteristics. The student characteristics include the student's age in years, whether the student is male, whether the student boards at school, whether the student is in 4th grade, whether the student owned eyeglasses at baseline, whether the student thinks that eyeglasses will harm vision, and the severity of myopia measured by LogMAR. The family characteristics include dummy variables for whether parents have high school or above education and whether a family member wears eyeglasses, and a household asset index calculated from a list of 13 items weighted by the first principal component. Finally, X_ij also includes the distance from the school to the county seat and whether the student is a left-behind child.
To analyze the paper's main question of interest, whether the Voucher intervention affected left-behind children differently, we estimate parameters in the following heterogeneous effects model:

y_ijt = α + β_V·Voucher_j + β_L·Leftbehind_i + β_VL·(Voucher_j × Leftbehind_i) + γ'X_ij + ε_ijt (3)

where Leftbehind_i is a dummy variable indicating that a student is a left-behind child. The coefficient β_V compares eyeglasses uptake or usage in the Voucher group to that in the Prescription group; β_L captures the effect of being a left-behind child on eyeglasses uptake or usage. The coefficient on the interaction term, β_VL, gives the additional effect (positive or negative) of the voucher on eyeglasses uptake or usage for left-behind children relative to the voucher effect for non-left-behind children.
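A minimal sketch of estimating Equation (3) with school-clustered standard errors follows; the synthetic data, effect sizes, and variable names are illustrative assumptions, and the covariate vector X_ij is omitted for brevity.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 2024  # size of the analytical sample of myopic students

# Synthetic stand-ins for the study variables (names are ours).
df = pd.DataFrame({
    "school_id": rng.integers(0, 167, size=n),
    "voucher": rng.integers(0, 2, size=n),
    "left_behind": rng.binomial(1, 0.12, size=n),
})
# Simulated uptake with a positive voucher effect and an extra
# interaction effect for left-behind children.
df["uptake"] = rng.binomial(
    1, 0.25 + 0.45 * df["voucher"] + 0.08 * df["voucher"] * df["left_behind"]
)

# Equation (3): uptake on voucher, left-behind status, and their
# interaction, with Huber-White SEs clustered at the school level.
fit = smf.ols("uptake ~ voucher * left_behind", data=df).fit(
    cov_type="cluster", cov_kwds={"groups": df["school_id"]}
)
print(fit.params["voucher:left_behind"])  # estimate of beta_VL
```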
In all regression models, we adjust standard errors for clustering at the school level using the cluster-corrected Huber-White estimator. To understand the relationship between distance and the utilization of healthcare, a nonparametric Locally Weighted Scatterplot Smoothing (Lowess) plot is used to illustrate eyeglasses uptake rates against the distance from school to the county seat (where students redeem the voucher for eyeglasses). One advantage of the Lowess approach is that it provides a quick summary of the relationship between two variables [43,44].
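A Lowess curve of uptake against distance can be computed as in the sketch below; the simulated uptake-by-distance data are purely illustrative.

```python
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(1)
# Illustrative data: distance (km) from school to county seat and a
# 0/1 uptake indicator that declines gently with distance.
distance = rng.uniform(1, 105, size=300)
uptake = rng.binomial(1, np.clip(0.9 - 0.004 * distance, 0, 1))

# Lowess returns the smoothed uptake rate as a function of distance,
# i.e., a nonparametric "demand curve" for redeeming the voucher.
smoothed = lowess(uptake, distance, frac=0.5)  # columns: distance, rate
print(smoothed[:5])
```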
Results
This section reports the four main findings of the analysis. The first part reports the levels of eyeglasses usage at the time of baseline (before any interventions occurred) among both left-behind and non-left-behind students. The second part estimates the average impact of providing a subsidy voucher on student eyeglasses uptake and usage. The third part examines the heterogeneous effects of the voucher on left-behind children versus their non-left-behind peers. The last part reports the extent to which distance to the county seat (where service is available) affects eyeglasses uptake in both the Voucher and Prescription groups.

Table 3 shows differences between left-behind children and non-left-behind children with poor vision at the time of baseline. Across most of the control variables the two types of students are the same. Exceptions include a slightly higher level (6.9 percentage points) of education among the mothers of left-behind children (Table 3, column 3, row 9, significant at the 1% level) and a higher level of household assets among non-left-behind children (column 3, row 11, significant at the 1% level). Uptake and usage of eyeglasses at the time of baseline is the same between the two subgroups and quite low, at about 13% (column 3, row 5). Nearly 40% of students believed that wearing eyeglasses would negatively affect one's vision (row 6). Generally, parental education levels were low: 15% of students' fathers and less than 10% of the mothers had attended high school (row 9). Notes for Table 3: Data source: baseline survey. All tests account for clustering at the school level. *** p < 0.01, ** p < 0.05, * p < 0.1.
Average Impacts of Providing Voucher on Student Eyeglasses Uptake and Usage
The results for average impact on eyeglasses uptake and usage (irrespective of left-behind status) are shown in Table 4. Columns 1 to 4 show estimates for eyeglasses uptake; columns 5 to 8 show the results for eyeglasses usage. Odd-numbered columns show the results from the unadjusted model (Equation (1)) and even-numbered columns show the results from the adjusted model (Equation (2)). Estimates from both models are included in Table 4 for the short term (one month after vouchers were distributed) and the long term (seven months after vouchers were distributed).
The results show that the eyeglasses uptake rate in the Voucher group is significantly higher than in the Prescription group in both the short term and long term. In the unadjusted model, at the time of the short-term check, the average eyeglasses uptake rate among children that received only a prescription was about 25% (Table 4, column 1, last row). The uptake rate among children in the Voucher group was 85%, more than three times higher than in the Prescription group. At the time of the long-term check, the average eyeglasses uptake rate in the Prescription group was about 43% (column 3, last row), compared to an 87% uptake rate in the Voucher group. The adjusted model results, which control for student and family characteristics, suggest a similar story: the Voucher treatment yields positive impacts over both the short and long term. Specifically, our adjusted results show that the eyeglasses uptake rate in the Voucher group was 61 percentage points higher at the time of the short-term survey (row 1, column 2) and 45 percentage points higher at the long-term survey (column 4) than in the Prescription group. These results are all significant at the 1% level.
Moreover, the results reveal that the eyeglasses usage in Voucher group is significantly higher in both short term and long term. The unadjusted model results show that in the short term, the percentage of students who use eyeglasses was 65% in the Voucher group-more than three times higher than the 21 percent usage in the Prescription group (Table 4, column 5, last row). The long-term results show that the percentage of students who use eyeglasses was 35% in the Prescription group (column 7, last row), compared with 59 percent in the Voucher group. In the adjusted model, our results also show that the Voucher increased eyeglasses usage by 45 percentage points in the short term (column 6, row 1) and 26 percentage points in the long term (column 8, row 1). These results are also significant at the 1% level. (8) show coefficients on treatment group indicators estimated by ordinary least squares (OLS). Columns (1) to (4) report estimates impact of providing voucher on eyeglasses uptake. Columns (4) to (8) report estimates impact of providing voucher on eyeglasses usage. Columns (1) (2) (5) and (6) report the short-term follow up in one month after initial voucher distribution. Columns (3) (4) (7) and (8) report estimates for the long-term follow up in seven months after initial voucher or prescription distribution. Sample sizes are less than the full sample due to observations missing at least one regressor. Standard errors clustered at school level are reported in parentheses. All regressions control for randomization strata indicators. *** Significant at the 1% level.
Heterogeneous Effects on Left-Behind Children
This section examines the main question of interest in the paper. The results indicate that providing the subsidy Voucher on average has a consistent positive impact on eyeglasses uptake and usage in both short term and long term, but there is substantial differential impact between left-behind and non-left-behind children.
As shown in Table 5, in the Prescription group uptake and usage rates among left-behind children were much lower than among their non-left-behind peers. Specifically, the results show that in the short term, left-behind children were 8 percentage points less likely to purchase eyeglasses than non-left-behind children (Table 5, columns 1 and 2, row 2). This is significant at the 10% level. In the long term, left-behind children were also 10 percentage points less likely to purchase eyeglasses (columns 3 and 4, row 2). This is significant at the 5% level. There was no significant difference in eyeglasses usage between left-behind children and non-left-behind children in the short term (columns 5 and 6, row 2). However, in the long term, left-behind children were 14 percentage points less likely to use the eyeglasses than their peers (columns 7 and 8, row 2). These results are significant at the 5% level. Notes for Table 5: Columns (1) to (8) show coefficients on treatment group indicators estimated by OLS. Columns (1) to (4) report the estimated impact of providing the voucher on eyeglasses uptake; columns (5) to (8) report the estimated impact on eyeglasses usage. Columns (1), (2), (5), and (6) report the short-term follow-up one month after initial voucher distribution; columns (3), (4), (7), and (8) report estimates for the long-term follow-up seven months after initial voucher or prescription distribution. Sample sizes are less than the full sample due to observations missing at least one regressor. Standard errors clustered at the school level are reported in parentheses. All regressions control for randomization strata indicators. *** Significant at the 1% level. ** Significant at the 5% level. * Significant at the 10% level.
Although left-behind students in the Prescription group exhibit lower rates of both uptake and usage than their non-left-behind peers, in the Voucher group the trend was almost reversed. The results show that in the Voucher group, eyeglasses uptake rates among left-behind children were higher than among their non-left-behind peers in both the short term and long term. Specifically, the coefficients for the interaction term between the Voucher group and whether the student is a left-behind child are all positive and statistically significant (Table 5, columns 1 through 4, row 3). The eyeglasses uptake rate among left-behind children in the intervention group was an additional 6.8 to 8.4 percentage points higher by the short-term survey (significant at the 1 percent level; columns 1 and 2, row 3). More importantly, the effect was sustained over time. At the time of the long-term survey, the eyeglasses uptake rate among left-behind children in the Voucher group continued to be an additional 11.8 to 13.1 percentage points higher than that of non-left-behind children (significant at the 1% level; columns 3 and 4, row 3).
In terms of eyeglasses usage, the heterogeneous effect results show that left-behind children in the Voucher intervention group had a higher (but insignificant) usage rate in the short term, and a higher (and significant) usage rate in the long term (Table 5, row 3, columns 5 to 8). The eyeglasses usage rate among left-behind children in the Voucher group increased by an additional 7.2 to 7.6 percentage points by our short-term survey, although this is statistically insignificant (row 3, columns 5 and 6). By the long term, eyeglasses usage among left-behind children in the Voucher intervention group was 13.7 to 14.6 percentage points higher than among non-left-behind children (significant at the 1% level; row 3, columns 7 and 8).
Differential impacts on usage across left-behind and non-left-behind families are also instructive. Signals highlighting the importance of the subsidized health service may be affecting elderly caregivers more than working age parents at home. If elderly caregivers respond to a possible signal by increasing their perceived utility of treating poor vision for their dependents, they may supervise glasses wear with more rigor than parents whose value of the treatment could be crowded out by other concerns.
The Relationship between Distance and the Utilization among Rural Families
The hypothesis that time is less valuable among elderly caregivers is borne out by an analysis of distance from service providers and uptake of care. In the discussion above we found that the Voucher intervention not only had a significant average impact on eyeglasses uptake and usage, but also had an additional impact on left-behind children. This section examines how distance from the county seat (that is, the cost of using the voucher) affects the impact across the different subgroups. To do so, an exploratory analysis is conducted to compare the demand curves of the left-behind children with those of the non-left-behind children.
The Lowess plots for the Voucher group show that the left-behind children's eyeglasses uptake rate is higher than that of the non-left-behind children, meaning the left-behind children were more likely to redeem their vouchers in both the short term and long term (Figure 2a,b). Although the plots give only a quick summary of the generally negative relationship between eyeglasses uptake rates and distance to the county seat, as distance increases from left to right across the graph, the eyeglasses uptake rate for the left-behind children falls more gradually than that of the non-left-behind children.
However, the Lowess plots for the Prescription group in both the short term and long term show the opposite pattern: the left-behind children's eyeglasses uptake rate was much lower than that of the non-left-behind children (Figure 3a,b). Furthermore, the uptake rate for left-behind children in the Prescription group drops sharply as the distance grows. This means the caregivers of left-behind children are much less likely to purchase the eyeglasses, especially when they live far away from the county seat. But when a subsidy is involved, the trend reverses, revealing a higher willingness on the part of left-behind families to travel farther distances to redeem their voucher (Figure 4).
Discussion
The low rate of eyeglasses usage at baseline accords with similar rates found in other research in rural China [27,31,33]. These low rates, together with the common misapprehensions about the negative effect of eyeglasses on vision, suggest that misinformation may be serving to limit usage of glasses in this context.
The finding that the eyeglasses uptake rate in the Voucher group is significantly higher than in the Prescription group in both the short term and long term echoes findings from other contexts showing that subsidies and incentives are important drivers of the uptake of care. Our findings suggest the subsidy may offset the cost of uptake while also potentially signaling a higher value for the service in question. The resulting shift in individuals' cost-benefit calculations leads them to newly favor uptake of services [45][46][47][48].
Eyeglasses usage in the Voucher group is also significantly higher in both the short term and long term, which may be due to the voucher's capacity to raise individuals' perceived utility of using eyeglasses. The impact of the voucher highlights the important role subsidies may play not only in the uptake of health services but also in compliance with treatment. This finding contrasts with typical "sunk cost effect" considerations regarding subsidized distribution, namely that subsidies may reduce the psychological effects associated with paying for a product, so that goods received for free are less valued and hence less used [47]. Our finding is similar to recent experiments conducted in other developing countries that show no evidence for the psychological sunk cost effect [46,47,49].
Differential impacts on left-behind children provide additional insights into caregiver decision-making. The findings in the Prescription group may indicate that older caregivers (grandparents) do not seek care on the basis of information alone, even if that information is accurate and indicates a problem may exist. Our research thus complements the findings of others suggesting that awareness of a possible health risk alone is insufficient to convince vulnerable groups such as left-behind children and their families to seek out care [45,50]. For example, a summary of evidence from a range of randomized controlled trials on health care services in developing countries finds that the take-up of health care services is highly sensitive to price. Liquidity constraints, lack of information, nonmonetary costs, and limited attention also contribute to the limited uptake [45].
The fact that the voucher intervention boosted uptake even more for left-behind children than non-left-behind children is also compelling. This finding suggests that the demand curve for elderly caretakers may differ from that of working-age parents. Time, for instance, may be less valuable for elderly caregivers (having retired, they have more of it), so the cost of taking up care is lower for them.
One reason for this is that elderly caregivers with lower incomes may be more sensitive to price than to distance when seeking health care [51][52][53]. When the health care service is fully subsidized with a voucher, distance tends to matter less for the left-behind families; that is, they are willing to travel farther in order to obtain a pair of free eyeglasses. As shown in the literature, the demand for health care may be affected by the opportunity cost of time [54]. Caregivers in left-behind families may have more free time than parents who are still working and may be more willing to go to the county seat. In this way, left-behind families may be responding more to price than to time, in contrast to their non-left-behind counterparts.
Further research is needed to better understand the reasons behind the low uptake rate in the Prescription group. While subsidies appear to be effective at raising uptake, a smaller subsidy might have produced the same outcome in one or both of the groups. A fruitful line of inquiry would therefore be to more fully explore the demand curve for health services among both left-behind and non-left-behind families in China and beyond. This could help determine what factors crowd out the inclination among different types of caregivers to seek certain health solutions for their children in the first place, and in turn open the door to more cost-effective policies targeting vulnerable subgroups.
Conclusions
This study provides new evidence to aid in the design of healthcare policies so that they reach all vulnerable groups in developing countries. It draws on the results of a randomized controlled program providing vision care to rural students in China to better understand how and why left-behind families respond differently to health care interventions.
The findings confirm the work of others on the importance of subsidies in expanding the reach of care, while also bringing to light the role that differential impacts can play in the design of cost-effective health care policy. In the absence of a subsidy, uptake of services in our sample was very low, even if individuals were provided with evidence of a potential problem (an eyeglasses prescription). However, uptake rose dramatically when this information was paired with a subsidy voucher for care (a free pair of eyeglasses). The result complements existing findings that suggest awareness of a possible health risk alone is insufficient to convince vulnerable groups to seek out care [45,50]. What is more, the findings confirm the broad value of subsidies when seeking to raise uptake of underutilized health services.
The analysis of the differential impacts of the subsidy on left-behind children contributes another layer of understanding to this consensus. Left-behind children who received an eyeglasses voucher were not only more likely to redeem it, but also more likely to use the eyeglasses in both the short term and long term. In other words, in terms of uptake of care and compliance with treatment, the voucher program benefitted left-behind students more than non-left-behind students. This finding is similar to recent programs conducted in other developing countries that show no evidence for the psychological sunk cost effect of subsidized distribution [46,47,49]. This pattern may occur because left-behind families are more sensitive to price than to distance when seeking health care.
More broadly, the findings on the differential impacts of the voucher programs highlight the existence of systematically different patterns of response among large subpopulations like left-behind children, suggesting that groups may require tailored solutions to reach health policy goals. Further understanding the potential for tailored solutions of this kind may have wide ranging implications for health policy in primary care, infectious disease control, and chronic disease management.
Identification and functional analysis of glycemic trait loci in the China Health and Nutrition Survey
To identify genetic contributions to type 2 diabetes (T2D) and related glycemic traits (fasting glucose, fasting insulin, and HbA1c), we conducted genome-wide association analyses (GWAS) in up to 7,178 Chinese subjects from nine provinces in the China Health and Nutrition Survey (CHNS). We examined patterns of population structure within CHNS and found that allele frequencies differed across provinces, consistent with genetic drift and population substructure. We further validated 32 previously described T2D- and glycemic trait-loci, including G6PC2 and SIX3-SIX2 associated with fasting glucose. At G6PC2, we replicated a known fasting glucose-associated variant (rs34177044) and identified a second signal (rs2232326), a low-frequency (4%), probably damaging missense variant (S324P). A variant within the lead fasting glucose-associated signal at SIX3-SIX2 co-localized with pancreatic islet expression quantitative trait loci (eQTL) for SIX3, SIX2, and three noncoding transcripts. To identify variants functionally responsible for the fasting glucose association at SIX3-SIX2, we tested five candidate variants for allelic differences in regulatory function. The rs12712928-C allele, associated with higher fasting glucose and lower transcript expression level, showed lower transcriptional activity in reporter assays and increased binding to GABP compared to the rs12712928-G, suggesting that rs12712928-C contributes to elevated fasting glucose levels by disrupting an islet enhancer, resulting in reduced gene expression. Taken together, these analyses identified multiple loci associated with glycemic traits across China, and suggest a regulatory mechanism at the SIX3-SIX2 fasting glucose GWAS locus.
Introduction

Type 2 diabetes (T2D) is a chronic disease affecting over 422 million people worldwide [1] with over 30% of cases occurring in East Asian populations [2]. Large-scale genome-wide association studies (GWAS) have identified >100 loci associated with T2D and >80 loci associated with fasting glucose, fasting insulin, and glycated hemoglobin (HbA1c), many of which have also been implicated in T2D susceptibility [3][4][5][6]. While the largest GWAS of glycemic traits and T2D to date have been performed in populations of predominantly European ancestry [3,[6][7][8][9], other studies have identified glycemic trait and T2D associations in East Asian individuals [5,10,11]. As glycemic trait profiles, allele frequencies, and environmental contributions differ between populations, continued investigation of genetic factors can discover additional loci influencing inter-individual variation in fasting glucose, fasting insulin, and HbA1c levels and T2D.

A new resource for genetic analyses, the China Health and Nutrition Survey (CHNS) is an ongoing, household-based, longitudinal survey aimed at examining economic, sociological, demographic, and health questions in a diverse Chinese population [12]. Using a multistage random-cluster design and stratified probability sampling to select counties and cities, data were collected from 228 communities across nine provinces (Guangxi, Guizhou, Heilongjiang, Henan, Hubei, Hunan, Jiangsu, Liaoning, and Shandong) that constituted 44% of China's population as of the 2009 census. In addition to nearly 30 years of longitudinal survey data collected during 9 survey rounds from 1989-2011, quantitative biomarker measurements and DNA are available on 8,403 subjects in the CHNS.
Individual GWAS loci can harbor multiple association signals. More than one association signal has been reported at G6PC2 and PCSK1 for fasting glucose and at KCNQ1, ANKRD55, CDKN2A/B, DGKB, HNF4A, and CCND2 for T2D [5]. Imputation reference panels generated from large sample sizes can facilitate identification of additional signals. For non-European populations, the 1000 Genomes Phase 3 reference panel is currently the most comprehensive, containing information for more than 88 million variants in >2,500 individuals from 26 diverse populations [13]. Identification of additional association signals at trait-associated loci could explain additional heritability and provide further insights into the biology between the locus and the trait or disease.
GWAS have been an efficient method for studying genetic factors influencing biological mechanisms underlying glycemic traits and T2D, but for many of the identified loci, the underlying gene(s), direction of effect, and disease mechanism are largely unknown [14]. For variants located in non-coding regions of the genome, bioinformatic datasets can be used to annotate and predict regulatory variants, target genes, and direction of effect [15][16][17][18], and these variants can be tested for allelic differences in regulatory activity with in vitro laboratory assays [19,20]. For example, among previous functional studies of variants associated with fasting glucose at the G6PC2-ABCB11 locus, two variants in the promoter were shown to affect G6PC2 expression levels by altering FOXA2 binding, and two variants located in the third intron of G6PC2 were shown to affect G6PC2 splicing [21-23]. However, the majority of the glycemic trait-associated variants have not been examined.
To further clarify the genetic contributions to normal variation in glycemic traits in a multi-provincial Chinese population, we performed a GWAS of fasting glucose, insulin, and HbA1c levels and T2D in subjects from the CHNS, using genetic data imputed to 1000 Genomes Phase 3 in up to 7,178 subjects [12]. We examined the population substructure within the CHNS and evaluated candidate functional regulatory variants at one locus using annotation and in vitro laboratory assays.
CHNS population structure
To evaluate population substructure among 8,403 CHNS subjects with genotype data available, we constructed principal components (PCs) using a subset of variants (MAF > 0.05; pairwise LD r² < 0.02 in a sliding window of 50 variants). Compared to HapMap 3 populations, the CHNS participants clustered closely with the Han Chinese in Beijing (CHB), the Han Chinese in Denver (CHD), and the Japanese in Tokyo (JPT) populations, with greater diversity than any of these populations (S1A Fig). A comparison to only the East Asian samples showed more clearly that the distribution of the CHNS extends beyond that of the CHB, CHD, and JPT samples (S1B Fig). Within the CHNS, the subjects showed two axes of diversity (Fig 1, S2 Fig). PC1, which explained 4.2% of the variance, appeared to cluster by province, while PC2, explaining 0.6% of the variance, showed diversity among subjects within the Guangxi and Guizhou provinces in southern China. PC2 could be partially explained by differences in self-reported ethnicity, particularly among subjects from the Guizhou province, as PC2 appeared to characterize the Miao and Buyi ethnic groups (S3 Fig). To account for population substructure in subsequent association analyses with glycemic traits, we included PC1 as a covariate and performed analyses using an efficient mixed model approach that accounts for sample structure between individuals [24].
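For illustration only, a principal component computation of this kind can be sketched in a few lines of Python; the standardization scheme, function name, and use of scikit-learn are assumptions of the sketch, not the actual CHNS pipeline.

import numpy as np
from sklearn.decomposition import PCA

def genotype_pcs(dosages, n_pcs=10):
    """Compute PCs from an LD-pruned genotype matrix.

    dosages: (n_samples, n_variants) array of 0/1/2 allele counts.
    Each column is centered and scaled by its expected standard
    deviation under Hardy-Weinberg equilibrium before PCA.
    """
    freqs = dosages.mean(axis=0) / 2.0          # per-variant allele frequency
    sd = np.sqrt(2.0 * freqs * (1.0 - freqs))   # expected SD under HWE
    z = (dosages - 2.0 * freqs) / np.where(sd > 0, sd, 1.0)
    pca = PCA(n_components=n_pcs)
    pcs = pca.fit_transform(z)                  # one row of PC scores per subject
    return pcs, pca.explained_variance_ratio_   # e.g., PC1 explained ~4.2% here

The leading components would then be tested against each trait and carried as covariates, as described above.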
GWAS results
We performed genome-wide association analyses of fasting glucose and fasting insulin levels across up to 8,045,193 genotyped and imputed variants (MAF > 0.01) in 5,786 non-diabetic individuals in the CHNS who provided fasting blood samples (S1 Table, S4 Fig). We also performed a genome-wide association analysis of HbA1c in 7,178 non-diabetic individuals who provided fasting or non-fasting samples. In addition, 5,731 unrelated subjects were used to assess variant association with T2D status, including 748 cases and 4,983 controls. For each trait, we also searched for additional signals by conditioning on the lead variants (reciprocal conditional analyses). Overall, a majority of CHNS subjects were female (54%) with a normal BMI (mean = 23.2 kg/m²), and subjects with T2D were older (cases: 59.7 years; controls: 51.2 years) with a higher BMI, higher fasting glucose levels, and higher fasting insulin levels (S1 Table).
[Fig 2 caption: The diamonds indicate the lead variants, which exhibited the strongest evidence of association at the locus among 1000 Genomes Project Phase 3-imputed variants. Variants are colored based on LD with the lead variants, rs34177044 (red) and rs2232326 (blue), within 8,403 CHNS subjects.]

At these two loci, we used stepwise conditional analyses to identify additional association signals at a locus-wide threshold of P < 1 × 10⁻⁵. Conditional analysis including rs34177044 at the G6PC2 locus revealed a second signal (rs2232326, MAF = 0.04, P_unconditioned = 1.8 × 10⁻⁹, P_conditioned = 2.0 × 10⁻⁶; Fig 2). When conditioning only on rs2232326, rs34177044 was attenuated but remained significantly associated with fasting glucose (P_conditioned = 7.0 × 10⁻⁹); the attenuation suggests the two signals are distinct yet not fully independent. Haplotype analyses (S3 Table) and regression models containing an interaction term with both variants (P = 0.69) do not suggest a haplotype effect between the two signals, providing further evidence that the two signals are separate. While conditional analyses could be influenced by the moderate imputation quality of rs34177044 (r² = 0.70) in CHNS, genotypes from the 1000 Genomes Project show that the minor allele of rs2232326 is only inherited with the major allele of rs34177044 (East Asian LD r² = 0.04, D' = 1.0). No additional association signals were identified at the SIX3-SIX2 locus (S5 Fig).
The lead variant in the second signal at G6PC2 (rs2232326) is a missense variant (S324P). Amino acid 324 is located in a helix spanning the cell membrane [28], and the substitution of a proline for a serine in the middle of a helix may add kinks to the protein [29]. In addition, both SIFT and PolyPhen [30] predict this variant to be "probably damaging", suggesting that it may affect function of the G6PC2 protein. Based on data from 1000 Genomes Phase 3, rs2232326 is rare in all ancestry populations (MAF: African, 0.2%; Admixed American, 0.3%; European, 0.3%; South Asian, 0.3%) except East Asians (MAF 5%), and it has few (Admixed American: rs34102076; East Asian: rs139014876) to no (African, European, and South Asian) proxy variants (LD r² > 0.80). This variant contributed to a significant G6PC2 gene-based association with glucose in Europeans [31], and other protein-coding variants within G6PC2 have been individually associated with fasting glucose levels (e.g. rs492594, rs138726309, rs2232323) [21]. In the CHNS, rs492594 was nominally associated with fasting glucose levels (P = 0.002); other previously described coding variants were either monomorphic or did not pass imputation quality control thresholds in CHNS (S4 Table).
We examined whether the strength of fasting glucose associations at SIX3-SIX2 and G6PC2 varied by province (Table 2). At rs895636 (SIX3-SIX2), the minor allele frequencies (MAF) differed by as much as 0.12 between provinces. Most of the provinces in which individuals had a relatively lower minor allele frequency (0.35-0.38) showed a stronger association between the variant and fasting glucose levels than similarly sized samples of individuals with higher MAF. Although allele frequencies between provinces were not statistically different, the observed allele frequency differences are consistent with genetic drift and the observed population substructure (Fig 1). Genome-wide analyses of HbA1c (Supplementary Table) [10, 36, 37] did not reveal any genome-wide significant loci. The most significant variant was located within an intron of FN3KRP (rs9895455, P = 3.5 × 10⁻⁷; Supplementary Materials, S7 Fig). rs9895455 is in high LD (East Asian, r² = 0.99) with a variant previously reported to be associated with HbA1c in East Asians (rs1046875) [10]. Three additional variants in high LD with rs9895455 have previously demonstrated moderate associations with HbA1c in East Asian populations.

[Fig 3 caption fragment: ...East Asian populations (1000G Phase 3); one variant, rs12712928, exhibits high LD (r² > 0.80) with rs895636 in East Asians and rs12712929 in Europeans (arrow). Red font indicates variants above r² of 0.80. Vertical bars indicate the genomic regions examined for allelic differences in regulatory function.]
Annotating association signals with eQTL data
To aid in the identification of candidate genes at the strongest association signals, we examined whether any of the variants associated with glycemic traits are also associated with expression of nearby transcripts in pancreatic islets, blood, subcutaneous adipose, or other tissues from GTEx (S9 Table) [44][45][46]. These expression quantitative trait locus (eQTL) datasets were generated predominantly from European ancestry donors. GWAS and eQTL signals more clearly coincide when the GWAS variant and the variant most strongly associated with expression level of the corresponding transcript exhibit high pairwise LD (r² > 0.80). To allow for differences in LD patterns across ancestries in the GWAS and eQTL datasets, we considered GWAS and eQTL signals to be possibly coincident at a less stringent threshold for pairwise LD values (r² > 0.60, East Asian 1000G Phase 3). The association signal for fasting glucose in East Asians at the SIX3-SIX2 locus contained fifteen variants meeting this criterion (lead GWAS variant rs895636 and fourteen variants with East Asian LD r² ≥ 0.60), and the association signal for islet SIX3 expression in Europeans contained fourteen variants (lead variant and thirteen variants with European LD r² ≥ 0.60) (Fig 3B, S9 Table). One variant, rs12712928, exhibited high LD (r² > 0.80) with both the lead GWAS and eQTL variants (Fig 3C). rs12712928-C showed strong association with higher fasting glucose (P = 3.4 × 10⁻⁸), similar to the lead fasting glucose GWAS variant (rs895636, P = 2.3 × 10⁻⁸), and strong association with lower SIX3 expression level in pancreatic islets (P = 4.7 × 10⁻⁸), similar to the lead SIX3 eQTL variant (rs12712929, P = 1.7 × 10⁻⁸). In addition to SIX3, rs12712928-C was strongly associated with lower expression level of SIX2 (P = 1.4 × 10⁻⁴), SIX3-AS1 (P = 4.8 × 10⁻⁶), and two other long non-coding transcripts (S9 Table) [47]. Assuming the fasting glucose GWAS and islet eQTL signals are shared across ancestries, the strongest candidate variant that may be responsible for both associations is rs12712928.
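The pairwise LD criterion used above to judge whether GWAS and eQTL signals coincide can be sketched as follows; approximating r² as the squared Pearson correlation of unphased genotype dosages is a common shortcut and an assumption of this sketch.

import numpy as np

def ld_r2(dosage_a, dosage_b):
    """Squared Pearson correlation of genotype dosages for two variants,
    a standard approximation to LD r^2 when phased haplotypes are
    unavailable."""
    r = np.corrcoef(dosage_a, dosage_b)[0, 1]
    return r * r

def coincidence(lead_gwas, lead_eqtl, strict=0.80, relaxed=0.60):
    """Mirror the thresholds in the text: clearly coincident above 0.80,
    possibly coincident above 0.60 (to allow for cross-ancestry LD)."""
    r2 = ld_r2(lead_gwas, lead_eqtl)
    return "clear" if r2 > strict else ("possible" if r2 > relaxed else "no")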
Regulatory function of prioritized candidate regulatory variants
To establish a set of candidate functional variants at the SIX3-SIX2 locus, we used regulatory chromatin marks (open chromatin and histone states) to predict which variants may affect the transcription of nearby genes. Of 19 candidate variants at the SIX3-SIX2 locus (including the lead GWAS variant rs895636, the lead islet eQTL variant rs12712929, variants in EAS LD r² > 0.60 with rs895636, and variants in EUR LD r² > 0.60 with rs12712929), only five variants (rs10192373, rs10168523, rs12712928, rs12712929, and rs748947) overlap pancreatic islet active enhancer and pancreatic islet open chromatin (DNase or FAIRE) marks, as well as predicted transcription factor binding motifs (S9 Fig). All five of these variants have EAS LD r² = 0.66-0.87 with the lead GWAS variant rs895636. These data suggest that these five variants are the strongest candidates to affect transcription of the gene(s) at this locus.
To evaluate allelic differences in enhancer activity of the five candidate functional variants, we conducted transcriptional reporter assays in MIN6 mouse insulinoma cells. We tested 4-6 independent constructs corresponding to each allele or haplotype for a 312-bp DNA region located 18 kb downstream of SIX3 and 37 kb from the 3' end of SIX2, spanning rs10192373, rs10168523, rs12712928, and rs12712929 (tested as a haplotype), and for a 365-bp region located 20 kb from the 3' end of SIX3, spanning rs748947 (S9 Fig). While the rs748947 construct showed no enhancer or allele-specific activity (S10 Fig), the haplotype construct showed haplotype differences in enhancer activity in both orientations (Fig 4A). The 4-variant construct containing the fasting glucose-increasing alleles (rs10192373-A, rs10168523-G, rs12712928-C, and rs12712929-T) showed significantly decreased enhancer activity of ≥1.4-fold in magnitude (forward, P = 0.0008; reverse, P = 0.0001) compared to the haplotype containing the non-risk alleles. To determine whether rs12712928 could account for the allele-specific effects, we used site-directed mutagenesis to create two additional haplotype constructs. Haplotype constructs containing rs12712928-C exhibited a 1.5-fold decrease in enhancer activity compared to the haplotype constructs containing rs12712928-G (Fig 4B). Taken together, these data show that rs12712928 exhibits allelic differences in transcriptional enhancer activity and suggest it functions within a cis-regulatory element at the SIX3-SIX2 fasting glucose-associated locus.
We next asked whether alleles of rs12712928 or the other three variants differentially affect DNA binding to nuclear proteins. A DNA-protein complex specific to the rs12712928-C allele was observed using electrophoretic mobility shift assays (EMSA) with MIN6 nuclear lysate (S10 Table, S11-S12 Figs). Competition with excess unlabeled C-allele probe more efficiently competed away allele-specific bands than excess unlabeled G-allele probe, providing further support for allele-specificity of the protein-DNA complexes (Fig 4C). Based on these results, we hypothesized that rs12712928-C is located in a binding site for a transcriptional regulatory complex that may be disrupted by the rs12712928-G allele. The sequence containing rs12712928-C is predicted to include a consensus core-binding motif for several transcription factors, and a ChIP-seq peak for CTCF also overlaps this region [48][49][50]. To identify transcription factor(s) binding to rs12712928, we used a DNA-affinity capture assay. A protein band showing allele-specific binding to the C allele was identified as the alpha subunit of GABP using MALDI TOF/TOF mass spectrometry. In EMSA supershift assays using antibodies to GABP-α, we observed a supershift of the allele-specific band (Fig 4C), suggesting that GABP may act as a repressor to reduce enhancer activity at this locus (Fig 4D).
We used a similar approach to identify potentially functional variants at the G6PC2 locus. The first signal comprises two intergenic variants (the GWAS index variant and one variant in LD r² > 0.80). The GWAS index variant, rs34177044, is ~3.2 kb upstream from the transcription start site of G6PC2 and does not overlap any predicted open chromatin marks. rs1402837 (LD r² = 0.97) is located 646 bp upstream of the G6PC2 transcription start site and 187 bp and 208 bp upstream, respectively, of other promoter variants previously shown to exhibit allelic differences in transcriptional activity, rs573225 and rs2232316 (S13 Fig) [51]. rs1402837 also overlaps open chromatin marks, suggesting rs1402837 may play a regulatory role in fasting glucose levels in the context of other G6PC2 promoter variants.
Discussion
In this study of genetic associations with T2D and related glycemic traits in Chinese individuals from 9 provinces in the CHNS, we observed associations with fasting glucose at SIX3-SIX2 and G6PC2, including a coding variant representing an additional signal at G6PC2. We also showed that the SIX3-SIX2 fasting glucose locus colocalizes with eQTL associations for SIX3, SIX2, SIX3-AS1, RP11-89K21.1, and AC012354.6 in pancreatic islets, and we showed evidence that rs12712928 functions as a regulatory variant at the SIX3-SIX2 fasting glucose locus. Genetic associations in CHNS also supported (P < 0.05) previously reported associations at 6, 2, 9, and 16 loci with fasting glucose, fasting insulin, HbA1c, and T2D, respectively. The moderate sample size of CHNS prevented us from identifying additional associations.

[Fig 4 caption fragment: (C) An arrow points to an allele-specific protein complex binding to the C allele; a supershift was observed with the addition of antibodies to the alpha subunit of GABP (denoted by *). (D) Model of rs12712928 as a functional regulatory variant at the SIX3-SIX2 locus. Alleles, including rs12712928-C, are associated with higher fasting plasma glucose levels and lower expression of SIX3 and other transcripts in human pancreatic islets. Arrows indicate the transcription start sites (TSS) of the SIX3 and SIX2 genes. An oval represents GABP bound differentially to rs12712928-C, which exhibited lower transcriptional reporter activity compared to rs12712928-G.]
Hundreds of genes contribute to the heritability of complex traits [52]. As more GWAS and genome-wide meta-analyses are conducted across genetically diverse populations, identification of additional association signals and loci will help to explain the levels of heritability. The CHNS adds to the growing number of population-based cohorts available for the study of metabolic traits. With its multi-provincial study design, the CHNS includes subjects of differing ethnicities, from both urban and rural areas across China. Additionally, linkage of the genotype data with biomarkers and decades of longitudinal phenotype data (e.g. nutrition, health outcomes, environment) will allow environmental and societal contributions to trait or disease outcomes to be evaluated.
Alterations in regulatory elements or the coding sequence of G6PC2 can impact levels of fasting plasma glucose. G6PC2 encodes an enzyme belonging to the glucose-6-phosphatase catalytic subunit family responsible for the terminal step in the gluconeogenic and glycogenolytic pathways that lead to the release of glucose into the bloodstream [53]. Several previous studies have identified more than one fasting glucose association signal at this locus in populations of European and African ancestries, two of which include nonsynonymous coding variants [25,26,31,33,54-57]. We identified two distinct signals at G6PC2 associated with fasting plasma glucose levels. Variants within the primary CHNS association signal have been associated with fasting glucose in East Asian populations previously [11]. The lead variant in the second signal at G6PC2 (rs2232326) is a missense variant (S324P). We were unable to assess evidence of association with other coding variants in G6PC2, as those variants were either monomorphic in CHNS or did not pass quality control thresholds.
To date, the association between variants near SIX3 and glycemic traits remains specific to East Asian populations. rs895636 was reported as the lead variant associated with fasting plasma glucose in a GWAS of >17,000 Korean and Japanese subjects (P = 9.9 × 10⁻¹³) and in a separate GWAS meta-analysis of up to 46,085 East Asians (P = 2.5 × 10⁻¹³) [27]. However, in Europeans, nominal to no association has been observed between rs895636 and fasting glucose (P = 0.002, n > 96,000) [33], HbA1c (P = 0.05, n > 46,000) [36], fasting insulin (P = 0.73, n > 96,000) [33], and T2D (P = 0.41, n > 120,000) [3]. Allele frequency is a possible explanation for these ancestry differences. In East Asians, the MAF of rs895636 is 0.42, while the MAF in Europeans is only 0.16. Larger sample sizes of European-ancestry individuals may be needed to identify the association between variants at the SIX3-SIX2 locus and glycemic traits. Other genetic and environmental factors absent from other populations may also contribute to the fasting glucose association at SIX3-SIX2 in East Asian populations.
We provide compelling evidence that rs12712928 is a regulatory variant at the SIX3-SIX2 fasting glucose locus. The rs895636-T allele is associated with increased fasting glucose levels and decreased SIX3, SIX2, and other transcript expression in pancreatic islets [47]. rs12712928 is in high LD (East Asian and European r² > 0.87) with both rs895636 and the lead pancreatic islet eQTL variant, rs12712929, and was the strongest candidate for regulatory function based on its location in a putative islet enhancer element. Compared to rs12712928-G, the rs12712928-C allele demonstrated decreased transcriptional activity, as well as allele-specific binding to the alpha subunit of GABP, which suggests that at least GABP, and possibly other transcription factors, bind to the C allele and repress expression of SIX3 and SIX2. rs12712928 may be responsible for the GWAS signal, or, given that some GWAS signals are affected by multiple functional variants [58,59], other variants at this locus may also contribute to variation in fasting glucose.
SIX3 is a strong candidate for a target gene at the SIX3-SIX2 fasting glucose locus. Highly expressed in pancreatic islets [44], SIX3 encodes sine oculis homeobox-like protein 3, a transcription factor that localizes to the nucleus of adult beta cells to regulate insulin production and secretion. Decreased expression of SIX3 results in misregulation (i.e., decreased levels) of insulin [60]; insulin promotes the uptake of glucose into fat, liver, and skeletal muscle cells, thus lowering blood glucose levels [61]. Consistent with the effects of SIX3 in mice, the risk allele rs12712928-C is associated with both decreased expression of SIX3 and increased levels of fasting glucose. In CHNS, rs12712928-C was also moderately associated with decreased fasting insulin levels (P = 7.6 × 10⁻⁵) and an increased risk for T2D (P = 0.03). However, in the islet eQTL data, rs12712928 was also associated with expression levels of SIX2, SIX3-AS1, RP11-89K21.1, and AC012354.6. SIX2 is also believed to play a role in the regulation of islet beta cell functions such as insulin output [60]; however, less is known about its biologic function compared to SIX3. Additionally, the roles of SIX3-AS1, RP11-89K21.1, and AC012354.6 are not well characterized. One or more of these transcripts could be a target gene underlying the association signal and contributing to the biological effect on fasting glucose.
In conclusion, this study confirmed many previously identified loci associated with T2D and related glycemic traits and validated a recently described G6PC2 missense variant associated with fasting glucose. We report a functional variant at the SIX3-SIX2 locus, rs12712928, and provide evidence of a potential mechanism by which this variant reduces expression of at least SIX3, leading to elevated levels of fasting glucose. Our use of a denser reference panel of >8 million variants in a diverse Chinese population allowed us to conduct higher-resolution genetic analyses than reported previously. Further functional analyses of the variants identified in this study are the next step to confirm which variants and genes are affected. Replication of the moderately significant associations would be useful to better understand the genetic architecture of glycemic traits.
The China Health and Nutrition Survey
The China Health and Nutrition Survey (CHNS) is a nationwide, longitudinal survey aimed at examining economic, sociological, demographic, and health questions in a Chinese population. Details of subject selection and study design have been described elsewhere [12]. Briefly, a stratified probability sample with a multistage, random cluster design was used to select counties and cities within 9 diverse provinces (Guangxi, Guizhou, Heilongjiang, Henan, Hubei, Hunan, Jiangsu, Liaoning, and Shandong), stratified by income and urbanicity using State Statistical Office definitions. A total of 4,560 households from 228 communities were then randomly selected from within each stratum. Health data were collected during nine rounds of surveys from 1989 to 2011 (1989, 1991, 1993, 1997, 2000, 2004, 2006, 2009, and 2011). The 2009 survey was the first to collect fasting blood samples. The CHNS was approved by the Institutional Review Boards at the University of North Carolina at Chapel Hill (#07-1963, #05-2369), the Chinese National Human Genome Center at Shanghai (#2017-01), and the Institute of Nutrition and Food Safety at the China Centers for Disease Control (#201524-1). All participants provided written informed consent.
The present analysis was limited to subjects who participated in the 2009 survey round and for whom blood biomarker traits were available (n = 9,551). For the glucose and insulin analyses, subjects were only included if their blood sample was obtained after an overnight fast (n = 6,779). Subjects were excluded from a particular analysis if their biomarker trait value exceeded 4 standard deviations beyond the group mean or if they had type 1 diabetes. Fasting blood samples were not required for the HbA1c or T2D analyses. Additionally, one member of each first-degree relative pair was randomly removed for analyses of T2D, as the current software for analyzing associations with binary traits does not control for the high number of related individuals within CHNS.
Glycemic trait measurements
Following an overnight fast, a blood sample (12 mL) was collected by venipuncture. Glucose and insulin were measured in the central laboratory of the China-Japan Friendship Hospital. Detailed descriptions of the laboratory procedures for measuring glucose (GOD-PAP method; Randox Laboratories Ltd, UK) and insulin (radioimmunology in a gamma counter, XH-6020 analyzer; North Institute of Bio-Tech, China) levels have been published previously [62]. HbA1c was measured in the central laboratory of the China-Japan Friendship Hospital [Guizhou and Hunan: HLC method, HLC-723 G7 analyzer, Tosoh, Japan (boronic acid affinity HPLC)] or in the field [Guangxi and Henan: HPLC method, Primus Ultra 2 (PDQA1c), Primus, USA; Heilongjiang, Hubei, Jiangsu, and Liaoning: HPLC method, Bio-Rad D-10, Bio-Rad, USA; Shandong: HLC method, HLC-723 G7, Tosoh, Japan (ion-exchange HPLC)]. Different methods and machines were calibrated with the same quality control products made by Bio-Rad, USA. Participants were classified as having T2D if they were at least 18 years old and met at least one of the following criteria: 1) HbA1c ≥ 6.5%, 2) fasting blood glucose ≥ 126 mg/dL, 3) received the diagnosis from a physician after the age of 20 (self-report), or 4) reported taking diabetes medication (self-report) (S1B Table).
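The case definition above amounts to a simple rule; the sketch below encodes it with hypothetical field names (the actual CHNS variable names are not shown in the text).

def classify_t2d(age, hba1c=None, fasting_glucose_mgdl=None,
                 diagnosed_after_20=False, takes_diabetes_meds=False):
    """Apply the T2D case definition described above.

    Returns True if the subject is at least 18 years old and meets any
    one criterion. Missing biomarkers are passed as None. Field names
    are illustrative, not the actual CHNS variable names.
    """
    if age < 18:
        return False
    if hba1c is not None and hba1c >= 6.5:
        return True
    if fasting_glucose_mgdl is not None and fasting_glucose_mgdl >= 126:
        return True
    return diagnosed_after_20 or takes_diabetes_meds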
Genotyping and quality control
DNA samples were extracted and genotyped at the Chinese National Human Genome Center, Shanghai, China. Genotyping was performed with the Illumina HumanCoreExome chip using the standard protocol recommended by the manufacturer. Genotyping was attempted on the 10,131 unique samples, 316 duplicates, and 1 set of triplicates (total n = 10,449). 1,513 samples were unable to be genotyped due to inadequate DNA concentrations (<10 ng/uL or OD 260/ 280 outside the 1.5-2.0 range), and an additional 69 samples were excluded for poor quality. Using KING, we identified 7 pairs of samples that were unintentionally duplicated; one sample from each pair was excluded. We used PLINK v.1.9 to compare genotype heterozygosity on the X chromosome to self-reported gender and excluded 129 mismatched samples and six samples with apparent XXY or XXXY genotypes. The CHNS data contained 4 sets of identical twins; 1 subject was randomly excluded from each twin pair. Finally, based on the principal component analysis described below, we excluded two samples that were outliers from Hap-Map samples of Han Chinese in Beijing, China (CHB), Chinese in Metropolitan Denver, Colorado (CHD), and Japanese in Tokyo, Japan (JPT). After exclusions, 8,403 samples were successfully genotyped and passed all genotyping quality control.
We applied variant quality control checks in PLINK v.1.9 on the 8,403 remaining CHNS samples that were successfully genotyped. Of the initial 538,448 variants, we discarded 4,306 variants due to call rate <95% and/or deviation from Hardy-Weinberg equilibrium (P < 10⁻⁶). Of the remaining 534,143 variants, 193,236 (36.2%) were monomorphic and 340,906 (63.8%) were polymorphic.
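As a sketch of the Hardy-Weinberg filter, the 1-df chi-square test below approximates this check from genotype counts; PLINK's default is an exact test, so this is an illustrative approximation rather than the exact procedure used.

from scipy.stats import chi2

def hwe_pvalue(n_aa, n_ab, n_bb):
    """Chi-square (1 df) test for Hardy-Weinberg equilibrium from
    genotype counts; variants with P < 1e-6 were discarded above."""
    n = n_aa + n_ab + n_bb
    p = (2 * n_aa + n_ab) / (2 * n)                # frequency of allele A
    expected = [n * p * p, 2 * n * p * (1 - p), n * (1 - p) ** 2]
    observed = [n_aa, n_ab, n_bb]
    stat = sum((o - e) ** 2 / e for o, e in zip(observed, expected) if e > 0)
    return chi2.sf(stat, df=1)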
Because of the household-based study design of CHNS, we expected the CHNS to include many first- and second-degree relatives. Using KING [63], we calculated kinship coefficients for all pairwise relationships and identified 3,681 first-degree relative pairs and 1,567 second-degree relative pairs.
Genotype imputation
We performed genotype imputation of the autosomal chromosomes of 8,403 samples with the 1000 Genomes Project Phase 3 v5 reference panel [13,64] using the Michigan Imputation Server [65]. We used Eagle2 [66] for pre-phasing, followed by imputation with Minimac3 software. We also imputed the X chromosome using the 1000 Genomes Project Phase 3 v5 reference panel. We imputed male (n = 3,927) and female (n = 4,476) samples separately, using MaCH for pre-phasing and Minimac2 for imputation. Imputation yielded data for 47,095,001 variants, and the 534,143 directly genotyped variants were also assigned imputed genotypes. We removed variants with an imputation r² < 0.30 (35,615,501 variants) or a MAF < 0.01 (37,891,969 variants) as additional quality control procedures. In total, we tested 8,045,193 variants for association with fasting glucose, insulin, and HbA1c levels and T2D.
Accounting for population substructure
We constructed principal components (PCs) to capture population substructure among the CHNS subjects. We identified a set of 55,601 independent variants with MAF > 0.05 and pairwise LD r² < 0.02 in a sliding window of 50 variants and used these variants to construct PCs in 8,403 CHNS subjects (Fig 1; S1-S3 Figs). The set of 55,601 variants was trimmed to match a list of 47,032 variants that were also available in HapMap Phase III samples. Individuals from CHNS and HapMap III were plotted based on the first two eigenvectors produced by the PC analysis (S1 Fig). We tested for association between each of the first 10 PCs and each of the phenotypic traits to identify PCs associated at P < 0.05; the first PC was included as a covariate in the regression models.
Genome-wide association analysis
Fasting glucose, fasting insulin, HbA1c, and T2D were adjusted for age, age², BMI, gender, and PC1. Residuals were then inverse normal-transformed to satisfy model assumptions of normality. Efficient mixed model association (EMMAX) analyses accounting for population structure and relatedness were performed using EPACTS v.3.1.0 [24]. Because EMMAX was designed for analysis of linear traits, GWA analyses for T2D were performed using the Firth bias-corrected logistic regression likelihood ratio test implemented in EPACTS [67]. For all analyses, genotype was modeled as an additive effect, with the genotype dosage values used as the primary predictor of interest. Due to the correlation of the glycemic traits, we used a genome-wide significance threshold of P < 5 × 10⁻⁸ to define a single result as genome-wide significant, as used in previous association studies of this scale and trait correlation [68]. A conservative experiment-wide Bonferroni-corrected P-value for the four traits could be considered as P < 1.25 × 10⁻⁸. We created regional association plots using LocusZoom [69] with LD estimates generated from the CHNS subjects. All variant positions correspond to build hg19.
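A simplified sketch of the trait preparation and single-variant test is shown below; note that the actual analysis used EMMAX, a mixed model that also absorbs relatedness, so the ordinary least-squares fit here illustrates only the fixed-effect part, and the rank offset is one common convention.

import statsmodels.api as sm
from scipy.stats import rankdata, norm

def inverse_normal(values):
    """Rank-based inverse normal transformation, applied to the
    covariate-adjusted residuals before association testing."""
    ranks = rankdata(values)
    return norm.ppf((ranks - 0.5) / len(values))

def additive_test(trait_residuals, dosage):
    """Additive single-variant model: regress transformed residuals on
    genotype dosage; returns the effect estimate and P-value."""
    y = inverse_normal(trait_residuals)
    X = sm.add_constant(dosage)
    fit = sm.OLS(y, X).fit()
    return fit.params[1], fit.pvalues[1]   # beta and P for the variant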
Conditional analyses
At loci that exhibited evidence of genome-wide significant association (P < 5 × 10⁻⁸), we identified additional association signals using conditional analysis. We added the most strongly associated variant into the regression model as a covariate and tested all remaining regional variants (±1 Mb from the initial lead GWA variant at each locus) for association. Since we were focusing on a much narrower region of variants during the conditional analyses, we set a less stringent locus-significance threshold of P < 1 × 10⁻⁵ based on ~5,000 variants in a 2 Mb region. We performed sequential conditional analyses until the strongest variant no longer met the P-value threshold.
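For illustration, one round of the conditional scan can be sketched as follows, again with a plain linear model standing in for the mixed model actually used; array shapes and names are assumptions of the sketch.

import numpy as np
import statsmodels.api as sm

def conditional_scan(y, region_dosages, lead_dosage, threshold=1e-5):
    """Test each remaining regional variant (lead variant excluded from
    the columns) with the lead variant's dosage as a covariate; report
    variants below the locus-wide threshold.

    region_dosages: (n_samples, n_variants) dosage matrix.
    """
    hits = []
    for j in range(region_dosages.shape[1]):
        X = sm.add_constant(np.column_stack([lead_dosage,
                                             region_dosages[:, j]]))
        p = sm.OLS(y, X).fit().pvalues[2]   # P for the tested variant
        if p < threshold:
            hits.append((j, p))
    return hits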
Associations with other metabolic traits and outcomes
We used summary data available in the Type 2 Diabetes Knowledge Portal [70] to explore associations between the newly identified loci and other metabolic traits and outcomes. Association summary statistics available (last accessed June 23, 2017) included coronary artery disease from CARDIoGRAM [71]; kidney-related traits from CKDGen [72]; T2D from DIAGRAM, GoT2D, BioMe AMP, CAMP, and SIGMA [3,43,73-75]; BMI and waist-hip ratio from GIANT [76,77]; and glycemic traits from MAGIC [26,36,78,79]. Additionally, we used data available from the ICP-GWAS (systolic and diastolic blood pressure) [80] and the AGEN adiponectin GWAS [81]. To identify variants in high LD (r² > 0.80) with the lead variants, we used LDlink with all East Asian sample populations from the 1000 Genomes Project as the reference [82].
Regulatory element annotation
We used ENCODE [15], ChromHMM [83], and Human Epigenome Atlas [84] data available through the UCSC Genome Browser to determine which of the candidate variants in each association signal overlapped open-chromatin peaks, ChromHMM [83] chromatin states, and chromatin-immunoprecipitation sequencing (ChIP-seq) peaks of histone modifications H3K4me1, H3K4me3, and H3K27ac, and transcription factors in pancreatic islets and the pancreas.
Transcriptional reporter assays
To measure variant allelic differences in enhancer activity at the SIX3-SIX2 locus, we designed oligonucleotide primers (S10 Table) with KpnI and XhoI restriction sites and amplified a 312-bp DNA region (GRCh37/hg19, chr2: 45,191,902-45,192,213) spanning rs10192373, rs10168523, rs12712928, and rs12712929 (tested as a haplotype). Separately, we amplified a 365-bp region (GRCh37/hg19, chr2: 45,192,357-45,192,721) spanning rs748947. The 312-bp haplotype construct was altered to create a missing haplotype for rs12712928 using the QuikChange site-directed mutagenesis kit (Stratagene). As previously described [19], we ligated amplified DNA from individuals homozygous for each allele into the multiple cloning site of the luciferase reporter vector pGL4.23 (Promega) in both orientations with respect to the genome. Isolated clones were sequenced to verify genotype and fidelity. 2 × 10⁵ MIN6 cells were seeded per well and grown to 90% confluence in 24-well plates. We co-transfected five independent luciferase constructs and the Renilla control reporter vector (phRL-TK, Promega) using Lipofectamine 2000 (Life Technologies). At 48 hours post-transfection, the cells were lysed with Passive Lysis Buffer (Promega). Luciferase activity was measured using the Dual-Luciferase Reporter Assay System (Promega) per manufacturer instructions and as previously described [19].
Electrophoretic mobility shift assay (EMSA)
Nuclear protein was extracted from MIN6 cells using the NE-PER nuclear extraction kit (Thermo Scientific). 17-bp oligonucleotide probes were designed centered on each variant: rs10192373, rs10168523, rs12712928, and rs12712929 (S10 Table). The annealed double-stranded oligonucleotide biotin-labeled and unlabeled probes for both alleles were generated as previously described [19]. To conduct EMSAs, we used the LightShift Chemiluminescent EMSA Kit (ThermoFisher Scientific) and followed the manufacturer's recommendations. Briefly, a 20 μl binding reaction consisting of 6 μg nuclear extract, 1X binding buffer, 50 ng/μL poly(dI-dC), and 200 fmol of labeled probe was incubated at room temperature for 25 minutes. For competition reactions, a 25-fold excess of unlabeled probe for either allele was incubated for 15 minutes prior to the addition of 200 fmol labeled probe, and the reaction was incubated for an additional 25 minutes. For supershift assays, 6 μg of polyclonal GABP-α antibody (sc28312X; Santa Cruz Biotechnology) was added to the binding reactions and incubated for 25 minutes prior to the addition of 200 fmol labeled probe; the reaction was then incubated for an additional 25 minutes. Protein-probe complexes were resolved by non-denaturing PAGE on 6% DNA retardation gels (Thermo Scientific), transferred to Biodyne B nylon membranes (PALL Life Sciences), cross-linked on a UV-light cross-linker (Stratagene), and detected by chemiluminescence. EMSAs were carried out on a second independent day and yielded comparable results.
Identification of proteins binding rs12712928
To identify factors in the protein complex binding rs12712928, we conducted a DNA affinity capture assay as previously described [19]. Briefly, the 450 μL binding reactions consisted of 300 μg of pre-cleared, dialyzed MIN6 nuclear extract, 1X binding buffer, 50 ng/μL poly(dI-dC), and 40 pmol of biotin-labeled probe for either rs12712928 allele (the same probes as used for EMSA) or a scrambled control. Binding reactions were incubated at room temperature for 30 minutes on a rotator, and then 100 μL of streptavidin-magnet Dynabeads were added to the reaction and incubated for an additional 20 minutes. Beads were washed, and bound DNA-protein complexes were eluted in 1X reducing sample buffer. Proteins were separated on a NuPAGE denaturing gel, and allelic differences in protein bands were visualized with Coomassie G-250 staining. The UNC Michael Hooker Proteomics Center used a Sciex 5800 MALDI-TOF/TOF mass spectrometer to identify the proteins in the excised protein bands.
Viscoat Assisted Inverted Internal Limiting Membrane Flap Technique for Large Macular Holes Associated with High Myopia
Purpose. To investigate the surgical outcomes of the Viscoat® assisted inverted internal limiting membrane (ILM) flap technique for large macular holes (MHs) associated with high myopia. Design. Prospective, interventional case series. Methods. Fifteen eyes of 15 patients with high myopia underwent vitrectomy and the Viscoat assisted inverted ILM flap technique to treat MHs without retinal detachment (RD). Patients were followed up for over 6 months. The main outcome measures were MH closure evaluated by optical coherence tomography (OCT) and best-corrected visual acuities (BCVAs). Results. MH closure was observed in all eyes (100%) following the initial surgery. Type 1 closure was observed in 13 eyes (86.7%); type 2 closure was observed in the remaining 2 eyes (13.3%). Compared to the preoperative baseline, the mean BCVA (logarithm of the minimum angle of resolution) improved significantly at 3 months and 6 months after surgery (P = 0.025 and 0.019, resp.). The final BCVA improved in 10 eyes (66.7%), remained unchanged in 3 eyes (20.0%), and worsened in 2 eyes (13.3%). Conclusion. Vitrectomy combined with the Viscoat assisted inverted ILM flap technique is an effective treatment for large MHs in highly myopic eyes. It may increase the success rate of the initial surgery and enhance the anatomical and functional outcomes.
Introduction
High myopia (spherical equivalent of refractive error of at least −6 diopters (D) and axial length of at least 26.5 mm) has high prevalence in China and accounts for 4.1% of the population [1,2]. Highly myopic eyes are susceptible to different retinopathies, among which macular holes (MHs) with or without associated retinal detachment (RD) are one of the most vision-threatening complications [2]. The mechanism of MH is different in highly myopic eyes versus nonmyopic ones. Various factors such as severe axial elongation of the globe, the presence of posterior staphyloma, and chorioretinal atrophy make these cases challenging [3].
Vitrectomy has been the gold standard of MH treatment [4], and adjunctive procedures have been developed to improve the MH closure rate [3,5,6]. Recently, the inverted internal limiting membrane (ILM) flap technique was reported by Michalewska et al. [7] to achieve a higher closure rate for MH; in this technique, the ILM around the MH is left attached and used to cover the MH to facilitate hole closure and regeneration of the retinal structure. This technique was also applied to myopic MHs and proved to be beneficial for both the anatomical and functional outcomes [8,9]. However, there are some limitations of the original technique. First, spontaneous retroversion of the ILM flap occurred frequently, in up to 14%-20% of cases, during the fluid-air exchange, leading to an initial failure [7,9]. Repeated surgeries are needed to reposition the flap to cover the MH, which increases the patient's pain and burden. Second, a rolled segment of the peeled ILM, rather than a single-layered ILM, is used to fill the MH, which is less regular and physiological compared to the normal foveal contour and may induce excessive gliosis. Third, the indocyanine green (ICG)-stained ILM is placed into the MH; ICG has macular toxicity and may induce retinal pigment epithelium (RPE) damage [10].
To overcome the drawbacks of the original inverted ILM flap technique as indicated above and to increase the success rate of the initial surgery for large MHs associated with high myopia, we developed a Viscoat assisted inverted ILM flap technique and evaluated its surgical outcomes in the present study.
Methods
This study was a prospective and interventional case series. Patients with large MHs associated with high myopia were enrolled at the Eye Hospital of Wenzhou Medical University from February 2014 to January 2015. The inclusion criteria were as follows: high myopia > −6 D; axial length > 26.5 mm; clinical presentation of MH without RD; minimum diameter of MH ≥ 400 μm; intraocular pressure < 21 mmHg. Patients with a history of RD or proliferative vitreoretinopathy, any kind of retinal surgery, diabetic retinopathy, vitreous hemorrhage, retinal vascular occlusion, uveitis, trauma, optic atrophy, ocular tumors, glaucoma, corneal opacity, or incomplete chart records were excluded.
Each patient was informed about the risks and benefits of the study and their written informed consent was obtained. The study was approved by the Institutional Review Boards of Wenzhou Medical University. All investigations adhered to the tenets of the Declaration of Helsinki.
Surgical Technique.
All eyes included in this study were treated with standard 23-gauge 3-port pars plana vitrectomy (PPV) by one experienced surgeon (Zongming Song) under local anesthesia. For this procedure, a posterior vitreous detachment was first created, followed by the removal of the residual thin premacular posterior cortex. The peripheral vitreous was also excised. Triamcinolone acetonide was used intraoperatively to facilitate visualization of the vitreous and posterior hyaloid in all eyes. Epiretinal membrane was removed with forceps if present.
The inverted ILM flap technique was modified based on the method reported previously [7], with the major improvements described below. Viscoat (Alcon Laboratories, Fort Worth, TX, USA) was used during the procedure. It is a low-molecular-weight dispersive viscoelastic material composed of 3% sodium hyaluronate and 4% chondroitin sulfate, which has been commonly used in phacoemulsification to protect the corneal endothelial cells from ultrasound damage. The initial use of Viscoat was prior to staining the ILM with ICG. Viscoat was injected into the MH and its mirror-symmetrical area superior to the MH (Figure 1(a)). 0.125% ICG solution was then carefully applied around the MH within the arcade (Figure 1(b)). Excessive ICG as well as Viscoat was then removed by suction. Thereafter, the ILM was peeled off in a circular fashion for approximately 2.5 disc diameters around the MH (Figure 1(c)). During the half-circumferential peeling, the inferior ILM was completely peeled off, while the superior ILM was not removed completely from the retina but left attached to the MH margin, forming an ILM flap of about 1 disc diameter. Viscoat was then injected and smeared in an arch around the MH in the lower half of the macula as an adhesive (Figure 1(d)). Instead of packing the MH with a folded ILM as reported by other authors [7,11], the ILM flap was flipped by inverting it with intraocular forceps to cover the whole MH and gently massaged to flatten it (Figure 1(e)). Supplementary Viscoat was applied on top of the inverted ILM flap as ballast (Figure 1(f)). Air-fluid exchange was performed with gas tamponade by 15% perfluoropropane (C3F8) at the end of surgery. (See Supplementary video file S1 in Supplementary Material available online at http://dx.doi.org/10.1155/2016/8283062.) The patients were subsequently kept in a face-down position for 3 days and were then allowed to position themselves in any manner other than the supine position until the gas was absorbed.
Ophthalmic Examinations.
All patients underwent a complete ophthalmic examination including refraction and best-corrected visual acuity (BCVA) measurement, slit-lamp examination, indirect ophthalmoscopy, and spectral domain optical coherence tomography (OCT) (SPECTRALIS HRA OCT; Heidelberg Engineering, Heidelberg, Germany) before surgery and at 1 week and 1, 3, and 6 months after surgery. B-scan ultrasonography (B-Scan Cinescan, Quantel Medical, France) and IOLMaster (Carl Zeiss Meditec AG, Jena, Germany) measurements were also performed before surgery. The BCVA was recorded in decimal acuity and converted to the logarithm of the minimal angle of resolution (logMAR) for statistical analysis. If the BCVA was counting fingers or hand movements, it was assigned the equivalent Snellen acuity of 20/2000 or 20/20000, respectively, based on a previous publication [12].
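The acuity conversion described above is a one-line transformation; the helper below sketches it, including the Snellen substitutions for counting fingers and hand movements (function name and interface are illustrative).

import math

def to_logmar(decimal_acuity, counting_fingers=False, hand_movements=False):
    """Convert decimal BCVA to logMAR. Per the conventions above,
    counting fingers and hand movements are assigned Snellen 20/2000
    and 20/20000 before conversion."""
    if counting_fingers:
        decimal_acuity = 20 / 2000      # logMAR 2.0
    elif hand_movements:
        decimal_acuity = 20 / 20000     # logMAR 3.0
    return -math.log10(decimal_acuity)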
Statistics. Statistical analysis was performed with SPSS for Windows version 17.0 (SPSS, Inc., Chicago, IL). The preoperative and postoperative BCVAs (logMAR values) were analyzed using the paired t-test. A P value of < 0.05 was considered statistically significant.
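A minimal sketch of the paired comparison, assuming placeholder logMAR values rather than the study data:

from scipy.stats import ttest_rel

# Paired t-test of preoperative vs. postoperative logMAR BCVA, as in
# the analysis above. The values below are placeholders, not study data.
pre_logmar = [1.3, 1.0, 0.9, 1.5, 1.2]
post_logmar = [1.0, 0.8, 0.9, 1.1, 1.0]
stat, p = ttest_rel(pre_logmar, post_logmar)
print(f"paired t = {stat:.2f}, P = {p:.3f}")  # significant if P < 0.05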
Results
The baseline clinical characteristics of all patients in this study are shown in Table 1. Six males and 9 females with a mean age of 60.1 ± 8.8 years were enrolled. The mean axial length was 29.83 ± 1.96 mm. All eyes had staphyloma; among them, 10 eyes had posterior pole staphyloma (type 1) and 5 eyes had macular staphyloma (type 2), as per the classification of Curtin [13]. The mean minimal diameter of the MH was 597.60 ± 115.84 μm. Six eyes had retinoschisis around the MH as shown on OCT examination. Eleven eyes were phakic, 1 eye was aphakic, and 3 eyes were pseudophakic before surgery. Phacoemulsification with intraocular lens implantation was performed in 8 eyes concurrent with PPV and in 2 eyes during the follow-up period because of the development of cataracts. The mean follow-up period was 8.7 ± 2.0 months (range: 6.0-11.2 months).
There were no significant intraoperative complications in any case. No retroversion of the inverted ILM flap during fluid-air exchange was observed in any surgery. The main surgical results are listed in Table 2. MH closure was achieved in all 15 eyes (100%) after the initial PPV with the Viscoat assisted inverted ILM flap technique. According to the MH closure type classification based on OCT observation, type 1 closure was achieved in 13 eyes (86.7%) and type 2 closure in the remaining 2 eyes (13.3%). The mean preoperative BCVA was 1.28 ± 0.70. The mean BCVA at 1 week and 1, 3, and 6 months after surgery was 1.31 ± 0.96, 1.19 ± 0.84, 1.10 ± 0.75, and 1.07 ± 0.88, respectively. In general, compared to the preoperative baseline, there was a significant improvement in BCVA at 3 months (P = 0.025) and 6 months (P = 0.019) after surgery, but not at 1 week and 1 month after surgery. The postoperative BCVA improved in 10 eyes (66.7%), remained unchanged in 3 eyes (20.0%), and worsened in 2 eyes (13.3%) at the final follow-up examination.
Discussion
Michalewska et al. [7] first introduced the inverted ILM flap technique for idiopathic MHs and proved its efficacy in improving the MH closure rate and postoperative BCVA. They hypothesized that the inverted ILM could serve as a scaffold for the proliferation of glial cells that fill the MH, producing an environment for the photoreceptors to assume new positions in direct proximity to the fovea [7,14]. This technique was then applied to more complicated cases such as chronic refractory MHs and myopic MHs, with or without RD [8,11,15-18]. Most of the reported outcomes were promising in both anatomical and functional aspects. However, there are still some challenges in this technique.
The most challenging part of this technique is maintaining the inverted ILM flap in position during the subsequent manipulation of fluid-air exchange. The free end of the ILM flap tends to turn over and move away from the hole opening, leading to a failure of the initial surgery. Michalewska et al. [7] reported spontaneous retroversion of the ILM flap during the fluid-air exchange in up to 14% of their cases. Kuriyama et al. [9] used the inverted ILM flap technique for MHs associated with high myopia and failed in 20% of cases due to complete retroversion of the flap. In addition, it takes extra time for the residual fluid to evaporate completely at the end of the fluid-air exchange. More recently, Lai et al. [11] proposed the idea of using an autologous blood clot to stabilize and seal the ILM flap within the MH before air-fluid exchange and claimed a success rate of 96% in their study. However, there may be potential risks of an increased rate of infection or proliferative vitreoretinopathy due to the extra manipulations and the introduction of blood from outside the eye. In our technique, a small amount of Viscoat was applied around the MH and on top of the inverted ILM flap, which has the dual effect of adhesive and ballast to stabilize the flap during the fluid-air exchange. Viscoat can be left in place without causing any toxic effect to the retina [20]. No dislodged inverted ILM flap was observed during or after the initial surgery in our cases. Another difference from the original inverted ILM flap technique is that a large single-layered inverted ILM flap was used to cover the MH in our study. In the original technique, a rolled segment of the peeled ILM over the MH is massaged from all sides until the ILM is inverted and then packed into the MH [7]. In effect, the MH is filled with a roll of multilayered ILM rather than a single layer of inverted ILM flap, which has been confirmed by postoperative optical coherence tomography [7,15]. More recently, some authors have intentionally repositioned or inserted a sufficient amount of inverted ILM tissue into the MHs to enhance the anatomical closure [11,17]. A pack of multilayered ILM may be beneficial for anatomical recovery by sealing and bridging the hole. However, the appropriate amount of folded ILM is difficult to determine, and too much ILM inserted in the center of the macula may induce excessive fibrosis and hinder further functional recovery [19]. In addition, maneuvering the ILM within the MH is technically difficult and may damage the foveal tissue. In our method, we half-circumferentially peeled the superior ILM to form an ILM flap of about 1 disc diameter and covered the MH from its superior margin, taking gravity into account. Therefore, the MHs were covered with single-layered inverted ILM flaps that were much larger than those in the original technique. We believe that the single-layered ILM over the MH could provide a more regular and physiological structure for glial proliferation and aid in MH closure without inducing too much fibrosis in the fovea.
An important concern about the inverted ILM flap technique is the potential macular toxicity of the staining dye. Due to the limited availability of Brilliant Blue G (BBG), ICG remains the most popular dye used by Chinese retina specialists to facilitate ILM peeling. The toxicity of ICG and its damage to the retina after vitreoretinal surgery have been reported [21]. In the case of subretinal application, the RPE can be affected. Imai and Azumi [10] observed an expansion of RPE atrophy at 1 week after PPV with the inverted ILM flap technique. They assumed that staining the ILM with ICG and placing the ICG-stained tissue into the MH may induce chorioretinal toxicity, provoking the RPE damage. To minimize the potential toxicity of ICG, the most effective way is to insulate the exposed RPE from ICG staining. In our technique, Viscoat was injected into the MH and onto the proposed covering portion of the inverted ILM (the mirror-symmetrical area superior to the MH) before the injection of ICG solution. No progression of preexisting chorioretinal atrophy or any sign of new RPE damage was observed in any case during the follow-up period.
The anatomical closure rate of myopic MHs after the initial surgery was 100% in our case series, which is significantly higher than those reported previously [8,9,11,17]. In highly myopic eyes, the presence of staphyloma results in a shortening of the retina in comparison to the posterior eye wall, which makes the surgical repair of MHs more challenging. As in classic macular surgery, ILM peeling alone may release the tangential traction, but it does not compensate for retinal shortening in high myopia. An effective approach to overcome the discrepancy between the retina and sclera may be covering the MH with a large single-layered ILM flap, as shown in our technique. There are multiple factors influencing the success rate of MH surgery, including the characteristics of each specific case, the surgical technique and equipment adopted, as well as the surgeon's skill and learning curve. We believe that our modification of the inverted ILM flap technique contributed to the high success rate in the current study.
With regard to the visual outcome, the postoperative BCVA varied among individuals despite the high anatomical success rate achieved in our study. Qu et al. [22] suggested that visual outcomes are not certain to improve in highly myopic MHs even if the MHs are anatomically closed. The MH closure type may be correlated with the final visual outcome [14,23]. 86.7% of our cases had type 1 closure with a close-to-normal foveal configuration in a U-shape or V-shape after surgery, and these in general had significantly improved vision. The two cases presenting as type 2 closure, with bare RPE in the center, had unsatisfactory visual outcomes; based on OCT observation at the end of follow-up, the photoreceptor layer and external limiting membrane were absent. Other factors accounting for the limited vision improvement may include the large size of the MH, ultralong axial length, preexisting severe chorioretinal atrophy, presence of retinoschisis around the MH, and complicated cataract. Further study is warranted to determine the risk factors that hinder postoperative visual recovery in myopic MHs.
This study has some limitations, which include the small sample size, lack of a control group, and a relatively short follow-up period. In addition, this technique was only applied to the myopic MHs with attached retina in the current study. Further randomized controlled clinical studies involving a larger number of patients are needed to determine the efficacy of this technique in the management of myopic MHs with or without retinal detachment.
In conclusion, vitrectomy combined with Viscoat assisted inverted ILM flap technique is an effective treatment for large MHs without RD in highly myopic eyes. The proper use of Viscoat can effectively prevent retroversion of the ILM flap during the fluid-air exchange and minimize the toxic effect of ICG staining to the RPE. This technique may increase the success rate of the initial surgery and enhance the anatomical and functional outcomes. We highly recommend this simple, cost-effective technique for large MHs associated with high myopia.
Disclosure
The author Ding Chen had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis.
Theoretical Insight Into Diamond Doping and Its Possible Effect on Diamond Tool Wear During Cutting of Steel
Natural diamond tools experience wear during cutting of steel. As reported in our previous work, Ga doping of diamond has an effect on suppressing graphitization of diamond which is a major route of wear. We investigate interstitial and substitutional dopants of different valence and different ionic radii (Ga, B, and He) to achieve a deeper understanding of inhibiting graphitization. In this study, ab initio calculations are used to explore the effects of three dopants that might affect the diamond wear. We consider mechanical effects via possible solution strengthening and electronic effects via dopant-induced modifications of the electronic structure. We find that the bulk modulus difference between pristine and doped diamond is clearly related to strain energies. Furthermore, boron doping makes the resulting graphite with stable sp2 hybridization more perfect than diamond, but Ga-doped diamond needs 2.49 eV to form the two graphene-like layers than only one layer, which would result in the suppressed graphitization and reduced chemical wear of the diamond tool.
INTRODUCTION
In the 21st century, miniaturization has gained rising importance owing to the increasing demand for higher precision and further downsizing of various devices. Ultra-precision processing technology has allowed for higher quality and reliability of products with complex shapes and micro-features. Natural single crystal diamond, said to be the hardest natural material on Earth, is considered an excellent precision cutting tool material in high-accuracy microscopic processing due to its excellent thermal conductivity, low thermal expansion, low coefficient of friction, and high wear resistance. It can be machined to form nanometric-scale cutting edges and is widely used in ultra-precision machining of non-ferrous metals, optical components, molds, and other parts [Gao and Huang (2017), Wang et al. (2012), Zong et al. (2007), Uddin et al. (2007)].
However, single crystal diamond tools easily suffer excessive wear on their cutting edges when machining ferrous metals [Shimada et al. (2004); de Oliveira et al. (2007)]. Under the catalytic effect of ferrous metals and high interface temperatures, graphitization occurs and diamond in a metastable state transforms into stable layered graphite [Paul et al. (1996), Narulkar et al. (2009)]. Multiple experimental approaches have been developed to reduce this chemical wear through tool modification techniques that alter the diamond tool properties and suppress the wear initiation process. We reported in a previous study that gallium doping reduced diamond tool wear when cutting steels [Lee et al. (2019)]. Boron-doped diamond, a semiconducting material, has attracted much attention in physics and electrochemistry [Zhao and Larsson (2014), The Anh et al. (2021), Garcia-Segura et al. (2015)]. However, the addition of boron, which is in the same group as gallium but has a smaller ionic radius, makes the resulting graphite more perfect [Gu et al. (2016), Bagramov (2021)]. Therefore, it is necessary to gain a further understanding of the effects of different dopants on diamond properties and the wear process. The doping effect can differ not only due to different dopant atoms but also due to different doping sites. For example, the same dopant atom can cause p-type doping in a substitutional position and n-type doping in an interstitial position. The smallest inert element, He, which has been used to modify the structure and strength of diamond, was also studied as an interstitial dopant [Chen et al. (2020)]. In this study, we investigate, through ab initio calculations using density functional theory, the effects of doping that might affect diamond wear. We consider mechanical effects via possible solution strengthening and thermodynamic effects via dopant-induced modifications at the diamond surface. We compute the effect of interstitial and substitutional dopants of different valence and different ionic radii (Ga, B, and He) to help disambiguate mechanical and thermodynamic effects.
AB INITIO CALCULATIONS
To verify the implications of B-, Ga-, and He-doping in a diamond cutting tool, periodic ab initio simulations were performed on pristine and doped diamond materials. The mechanical and thermodynamic effects were assessed with interstitial (I) and substitutional (S) dopants in a cubic diamond cell with 64 C atoms, as shown in Figure 1.
B-, Ga-, and He-doped diamond materials were investigated by density functional theory (DFT) as implemented in the Vienna ab initio simulation package (VASP) [Kresse and Hafner (1993)] with the projector-augmented plane-wave method (PAW) [Blöchl (1994), Kresse and Joubert (1999)]. The Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional [Perdew et al. (1996), Perdew et al. (1997)] was used, and a kinetic energy cutoff of 520 eV was selected for the plane-wave basis set. Energy and force convergence criteria for electronic and structural relaxations (ion positions and cell vectors) were set at 1 × 10−6 eV and 0.01 eV/Å, respectively. Based on our past theoretical work [Lee et al. (2019)], the theoretical bulk lattice constant of diamond (3.573 Å) was used, and the Brillouin zone was sampled using a 3 × 3 × 3 Monkhorst-Pack k-point mesh in conventional standard cubic diamond cells (64 C atoms, 7.14 Å × 7.14 Å × 7.14 Å). We computed the defect formation energies (E_f), strain energies (E_s), bulk moduli (BM), and electronic structures.
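As a rough illustration of this setup, the sketch below uses ASE to build the 64-atom conventional cell and to pass the stated parameters to a VASP run. It assumes a working ASE + VASP installation; the keyword names follow VASP's INCAR tags and are our choice, not taken from the paper.

```python
from ase.build import bulk
from ase.calculators.vasp import Vasp

# Conventional cubic diamond cell (8 atoms) at the theoretical lattice
# constant used in the paper, repeated 2x2x2 to give 64 C atoms.
atoms = bulk("C", "diamond", a=3.573, cubic=True).repeat((2, 2, 2))
assert len(atoms) == 64

# PBE/PAW settings matching the text: 520 eV cutoff, 1e-6 eV electronic
# convergence, 0.01 eV/A force criterion, 3x3x3 Monkhorst-Pack mesh.
calc = Vasp(
    xc="PBE",
    encut=520,
    ediff=1e-6,
    ediffg=-0.01,   # negative value -> force-based ionic convergence
    ibrion=2,       # conjugate-gradient ionic relaxation (an assumption)
    isif=3,         # relax both ion positions and cell vectors
    nsw=100,
    kpts=(3, 3, 3),
)
atoms.calc = calc
energy = atoms.get_potential_energy()  # triggers the VASP run
```

A dopant study would then substitute one C atom (or insert one interstitial atom) in `atoms` before relaxing.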
For both I and S configurations, the energy required to dope the material is represented by E_f. In the case of substitutional doping, E_f = E(C63M) − E(C64) + μ_C − μ_M, where one host C atom is exchanged for a dopant atom M; in the case of interstitial doping, E_f = E(C64M) − E(C64) − μ_M. Here, μ_C and μ_M are the chemical potentials of carbon and of the dopant, respectively. The energy required to deform the diamond cell induced by the dopants can be estimated by the strain energy E_s = E(MC64 − M) − E(C64), where E(MC64 − M) is the energy of the distorted diamond material excluding the dopant atom, and E(C64) is the energy of a fully relaxed pristine cell. In order to exclude the effect of a single vacancy defect on the strain energy, the dopant is replaced by a C atom with the same fractional coordinates in the case of S-doping, while the structure remains distorted.
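A minimal sketch of this bookkeeping, assuming the standard reference-state conventions (the paper does not spell out which chemical potentials it uses); all inputs would come from separate total-energy runs.

```python
def formation_energy_substitutional(E_C63M, E_C64, mu_C, mu_M):
    # One C atom is returned to its reservoir and one dopant M is added.
    return E_C63M - E_C64 + mu_C - mu_M

def formation_energy_interstitial(E_C64M, E_C64, mu_M):
    # The dopant is inserted without removing any C atom.
    return E_C64M - E_C64 - mu_M

def strain_energy(E_distorted_no_dopant, E_C64_relaxed):
    # Distorted cell with the dopant stripped out (for S-doping, a C atom is
    # put back at the dopant's fractional coordinates), minus the fully
    # relaxed pristine cell.
    return E_distorted_no_dopant - E_C64_relaxed
```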
To assess the effects of dopants on the diamond mechanical properties, we also computed the bulk moduli (BM). The value was obtained via a linear fit P(V) = −a(V − V_0) through five data points, where P and V are the pressure and volume of a fully relaxed structure. The data points were computed by an isotropic compression and expansion of the simulation cell by −0.5, −0.1, +0.1, and +0.5% of the equilibrium cell volume, as shown in Figure 2. The bulk modulus was computed as BM = −V(dP/dV), where V and the derivative are taken at the optimized geometry (V_0 and −a of the linear equation, as marked in Figure 2).
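The fit reduces to a few lines of NumPy; the (V, P) points below are hypothetical and only illustrate the procedure.

```python
import numpy as np

def bulk_modulus(volumes, pressures):
    """Estimate BM = -V * dP/dV from a linear fit of P(V) near equilibrium."""
    slope, intercept = np.polyfit(volumes, pressures, 1)  # P ~ slope*V + b
    v0 = -intercept / slope  # volume where the fitted pressure crosses zero
    return -v0 * slope       # BM evaluated at V0 (positive, since slope < 0)

# Hypothetical (A^3, GPa) points from -0.5%...+0.5% volume scalings:
V = np.array([362.5, 363.9, 364.3, 364.6, 366.0])
P = np.array([2.2, 0.5, 0.0, -0.4, -2.1])
print(f"BM ~ {bulk_modulus(V, P):.0f} GPa")
```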
We also studied the effect of doping on the surface energy γ = (E_slab^hkl − E_bulk^hkl)/A_slab, where E_slab^hkl is the total energy of the slab structure, E_bulk^hkl is the energy of the equally oriented bulk, and A_slab is the surface area.
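As a one-function sketch of this definition: whether the energy difference should be divided by one exposed face or by two (a slab exposes two surfaces) is a convention the text leaves implicit, so it is exposed as a parameter here.

```python
def surface_energy(E_slab, E_bulk_oriented, area, n_faces=1):
    # gamma = (E_slab - E_bulk) / (n_faces * A_slab); n_faces=2 would count
    # both exposed faces of the slab -- an assumption, not from the paper.
    return (E_slab - E_bulk_oriented) / (n_faces * area)
```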
The effect of doping on the surface energy can be related to stability against exfoliation (graphitization) (Lee et al., 2019).
COMPUTATIONAL RESULTS
The defect formation energies of all dopants in interstitial sites are strongly positive, which means that such doping is thermodynamically unfavored. For both Ga and B, substitutional dopants are strongly energetically preferred to interstitials. The defect formation energy for a C vacancy in bulk diamond is 5.32 eV. For He, the defect formation energies of both substitutional and interstitial doping are strongly positive, 10.14 and 6.59 eV, as shown in Figure 3. There is an obvious difference in the strain energies for He: 16.06 and 0.48 eV for substitutional and interstitial doping, respectively. The inert element is not chemically reactive and does not bond with other atoms, which results in significant displacements of the surrounding C atoms upon substitution; insertion is energetically difficult but preserves the diamond crystal structure very well. For B doping, by contrast, the substitutional end state, with a formation energy of 1.16 eV, is clearly much preferred to an interstitial position (10.69 eV).

The effects of dopants on the bulk modulus are summarized in Table 1. The bulk modulus of pristine diamond is computed to be 434 GPa, in good agreement with previously reported values [Brazhkin and Solozhenko (2019)]. The small size of B leads to less softening (422.94 GPa for substitution and 410.33 GPa for insertion) than Ga doping of diamond (415.74 GPa for substitution and 398.30 GPa for insertion) at the same doping position. For the He and B dopants, the computed bulk moduli are close across interstitial and substitutional positions. As shown in Figure 4, the bulk modulus difference between pristine and doped diamond is clearly related to the strain energies; it can be deduced that larger strain energies lead to mechanical softening. Although a reduction in the bulk modulus is observed under the influence of a dopant and may promote wear, the mechanical properties remain acceptable for machining purposes. Based on the above results, we discuss only the favored configuration for each dopant in the following calculations: interstitial He and substitutional B and Ga.
We computed the surface energy of pristine diamond for both the (110) and (111) facets, as summarized in Table 2, together with the effects of the dopants. The surface energy decrease (%) between the pristine and doped diamond systems with 64 C atoms is calculated as ((E1 − E2)/E1) × 100%. Here, the decrease in surface energy is within 10% relative to pure diamond but varies among the different dopants. The surface energy of the (110) face decreases by 4.39-9.44%. The surface energy of the (111) face decreases by 3.33% for Ga doping, 9.10% for He doping, and 6.67% for B doping. This trend shows that the higher the dopant concentration, the greater the decrease. The positive impact of the dopants comes with the lowering of the surface energy of the diamond: dopants reduce the surface energy and increase the surface stability of the diamond tool, which can reduce interactions at the surface.

The density of states (DOS) of diamond with the three different dopants is provided in Figure 5. The total DOS presents separated conduction bands (CB) and valence bands (VB). The partial DOS (PDOS) of B, Ga, and He is also presented as green lines. Compared to the DOS of pure diamond, the DOS is strongly influenced by the dopants. Substitutional doping (B and Ga) in diamond moves the Fermi level downward with a reduced band gap, while interstitial doping (He) moves the Fermi level upward and introduces new energy states in the band gap. These dopant-induced modifications would affect the band alignment at the diamond-iron interface and thereby possible electrochemical reactions that might facilitate or inhibit diamond wear.
The ability of diamond to cut ferrous materials is strongly limited due to its extreme affinity to iron; diffusion of Fe into the diamond layer has been reported at elevated temperatures (from 600°C) [Zenkin et al. (2018)]. In DSC (differential scanning calorimetry) curves, inflection points were observed at about 890°C, corresponding to the transition temperature of the diamond graphitization reaction [Lee et al. (2019)]. The {111} plane of diamond graphitizes preferentially, and graphitization occurs when the rings of the {111} plane are flattened [Liang et al. (2012)]. To assess the effects of the three dopants on the graphitization process, a diamond (111) surface (5.05 Å × 5.05 Å, 64 C atoms) was constructed with a 15 Å vacuum layer, as shown in Figure 6.
In the diffusion process, the Fe atom is first adsorbed on the diamond surface, as shown in (A) of Figure 6. The energy required to remove the Fe atom from the diamond (111) surface can be estimated by the binding energy E_bind = E_slab + E_cell/n − E_slab+Fe, where E_slab+Fe is the total energy of the adsorption configuration, E_slab is the energy of the base fragment (the diamond surface), and E_cell is the energy of the Fe unit cell in the face-centered cubic (fcc) arrangement with 2 Fe atoms (n = 2). The lower the binding energy, the more weakly the Fe atom is adsorbed on the surface, and this is assumed to reduce the cutting tool wear.
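A sketch of this bookkeeping under the stated n = 2 fcc Fe reference; the energies passed in are hypothetical, chosen only so that the result lands in the reported 6.78-8.79 eV range.

```python
def binding_energy(E_slab_Fe, E_slab, E_Fe_cell, n_Fe=2):
    # E_bind = E_slab + E_cell/n - E_(slab+Fe); a larger positive value
    # means the Fe adatom is bound more strongly to the diamond surface.
    return E_slab + E_Fe_cell / n_Fe - E_slab_Fe

print(binding_energy(E_slab_Fe=-520.0, E_slab=-505.0, E_Fe_cell=-16.4))
# -> 6.8 (eV), illustrative only
```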
According to Figure 7, there is no significant difference in the binding energy between He doping and pristine materials (6.81 and 6.78 eV). Ga doping gives a higher binding energy (7.04 eV), and B doping shows the biggest value (8.79 eV). The results indicate that the Fe atom should be much easier to bond to the diamond (111) surface after B doping. It also suggests that B doping might enhance cutting tool wear more than Ga and He doping.
As shown in Figure 8, the adsorption configurations show that all systems retain the diamond structure very well. Most C-C bonds between different layers change by less than 0.1 Å. The structure is distorted only near the doping site, owing to the large size of the Ga atom. As Fe atoms diffuse into diamond, the graphitization process appears, with the surface diamond (111) bilayer morphing into a graphene-like layer. Based on Bader charge analysis, the electrons localized on the Fe atoms are listed at the bottom. Compared to the pristine case (6.91 e), Ga doping reduces the charge transfer from the Fe atom to the surrounding C atoms (7.16 e retained on Fe). B and He doping resulted in a graphene-like structure around the doping site.

Figure 9 shows that there is a strong correlation between the degree of graphitization and the position of the Fe sites. As Fe atoms diffuse into the diamond lattice, the deeper the Fe position, the more graphene-like layers are formed. The layers are sp2 hybridized in the plane and form weak π bonds to the Fe atoms and the diamond substrate. It can be clearly seen that Ga doping makes a C-C bond in the substrate shorter (1.539 Å) than that in pristine diamond (1.685 Å).

Figure 10 reveals the thermodynamic effects of the different dopants on graphitization as Fe atoms diffuse into the diamond lattice. After Fe atoms diffuse through the first diamond (111) bilayer, all dopants make the sp3-hybridized carbon atoms thermodynamically more susceptible to graphitization: in the pristine system only 2.42 eV is released as one graphene-like layer is formed, compared with 3.08, 3.26, and even 9.06 eV for B, He, and Ga doping, respectively. Furthermore, a different and interesting phenomenon occurred when Fe atoms diffused through the second C-C bilayer in the next step. The interstitial He atom is located in the same intermediate layer; He doping made the reactants release more energy (5.06 eV), so the exothermic reaction would be much more favorable. Compared to the pristine system, B atoms form the same fundamental sp2-hybridized bonds, which are more stable than sp3-hybridized atoms; therefore, B doping demonstrated a similar behavior and may slightly accelerate the reaction. The larger ionic radius of Ga results in a stronger interaction with the neighboring C atoms, and Ga doping of diamond exhibited a completely opposite trend: it requires 2.49 eV to form another graphene-like layer. This indicates that the graphitization process is inhibited to some extent by Ga dopants, and the chemical wear of the diamond tool is thereby reduced.
CONCLUSION
In this study, ab initio calculations are adopted to investigate the mechanical and thermodynamic effects of doping that might affect the diamond wear. We consider interstitial and substitutional dopants of different valence and different ionic radii (Ga, B, and He) to help identify the working principle. To summarize, some conclusions can be drawn.
1) The bulk modulus difference between pristine and doped diamond is clearly related to the strain energies; larger strain energies led to mechanical softening.
2) All three dopants reduce the surface energy and increase the surface stability of the diamond tool, which can reduce interactions at the surface.
3) All three dopants make the first bilayer of the diamond (111) surface thermodynamically more susceptible to graphitization. This is especially pronounced for Ga-doped diamond, with 9.06 eV released as the top graphene-like layer forms. The sp2-hybridized B atoms demonstrated behavior similar to the pristine system, and the interstitial He atom, located in the same intermediate layer as the diffused Fe atoms, made the reaction much more favorable. However, forming a second graphene-like layer in Ga-doped diamond requires an additional 2.49 eV, which suppresses further graphitization and reduces the chemical wear of the diamond tool.
DATA AVAILABILITY STATEMENT
The original contributions presented in the study are included in the article/supplementary material; further inquiries can be directed to the corresponding authors.
AUTHOR CONTRIBUTIONS
LH carried out the formal analysis, investigation, data curation, writing of the original draft, and review and editing. SM was responsible for conceptualization, formal analysis, and revision. ZZ conducted the formal analysis, investigation, data curation, funding acquisition, and review and editing.
Lack of Association of SP110 Gene Polymorphisms with Pulmonary Tuberculosis in Golestan Province, Iran
Background and Objectives: Mycobacterium tuberculosis is the causative agent of pulmonary tuberculosis, a major public health problem that results in 1.5 million deaths annually. A number of epidemiological studies have suggested that host genetic factors could play a major role in susceptibility to tuberculosis infection. SP110 is an interferon-induced nuclear body protein with vital roles in apoptosis, cell cycling and immunity. The SP110 gene has been suggested to be a suitable candidate for limiting TB infections. Thus, we investigated the possible association between SP110 gene polymorphisms and susceptibility to tuberculosis in the Golestan Province, Iran. Methods: We investigated the frequency of the rs1135791 polymorphism of the SP110 gene among 100 pulmonary tuberculosis patients and 100 healthy individuals who were referred to the health centers in the Golestan Province (Iran) between 2014 and 2015. The frequency of genotypes was evaluated using amplification refractory mutation system-polymerase chain reaction (ARMS-PCR). Results: The frequency distribution of TT, TC and CC genotypes among the patients was 65%, 31% and 4%, respectively. In the control group, the frequency distribution of TT, TC and CC genotypes was 57%, 36% and 7%, respectively. There was no significant difference in the frequency of rs1135791 between the patients with pulmonary tuberculosis and the healthy controls (P=0.42). Conclusion: Based on the results, the SP110 rs1135791 variant is not a genetic risk factor for development of pulmonary tuberculosis in Golestan Province, Iran.
INTRODUCTION
Tuberculosis (TB) remains a major global health problem and is the second leading cause of death from infectious diseases (1). Two-thirds of affected individuals have latent TB infection, but the remaining one-third can spread active TB infection, which ultimately results in 1.5 million deaths annually (2). Family-based studies have provided abundant evidence on the role of genetics in TB susceptibility and have encouraged further host genetic studies to reveal the mechanisms of TB susceptibility and pathogenesis (3). Various TB studies on animal models have identified a number of suitable candidate genes for genetic association studies in humans (4). Recently, the intracellular pathogen resistance-1 (IPR1) gene, located within the supersusceptibility to tuberculosis 1 (sst1) locus, has been shown to contribute fundamentally to innate immunity against Mycobacterium tuberculosis infection in a murine model (5). SP110 (51 037 bp on 2q37.1; MIM 604457) is the human homologue of IPR1, and its variants have been implicated in a number of diseases including TB (7). Results of studies on the SP110 gene in individuals with TB have been contradictory. Two single nucleotide polymorphisms (SNPs) of this gene (rs3948464 and rs2114592) were found to be associated with TB in a West African population (8), whereas rs1427294 and rs1135791 have been reported to be significantly associated with TB in India (9) and China (10), respectively. However, similar studies in other populations have not found such an association (7). In addition, a meta-analysis of five SNPs could not find a consistent association with TB (11). Based on the heterogeneity of findings and the scarcity of studies on Asian populations, this study aimed to evaluate the relationship between variation in the SP110 gene and TB in the Golestan Province, Iran.
MATERIAL AND METHODS
The samples were selected from individuals who were referred to the health centers in the Golestan Province (Iran) between 2014 and 2015. This case-control study was performed on 100 pulmonary TB patients (57 males, mean age: 20 years) and 100 healthy controls (50 males, mean age: 25 years). Diagnosis was made based on the standard criteria (radiological findings, positive acid-fast bacilli sputum smear and culture, and response to anti-TB drugs). All study procedures were approved by the Institutional Review Board of Golestan University of Medical Science. Written informed consent was obtained from all participants. Genomic DNA was isolated from the whole blood samples using commercial DNA isolation kits (Pajouhan Co., Iran). DNA concentration was adjusted to 100 ng/μl by adding deionized water. DNA purity was evaluated by Picodrop. The DNA samples were stored at -20 °C until analysis. For genotyping of rs1135791, we used amplification refractory mutation system-polymerase chain reaction (ARMS-PCR) with a forward primer and two reverse primers specific for the wild-type and mutant variants. Two PCR reactions were performed for each sample, one with the normal ARMS amplification primer and one with the mutant ARMS primer, and the presence or absence of a PCR product indicated the presence or absence of the target allele. The ARMS-PCR primers corresponding to the wild-type sequence were as follows: forward consensus primer (5'GGGTGAATTCACAGAGGATGGGG3'); wild-type reverse primer (5'TTCAGCAGCTCTCCTAGGGTCA3'). For amplification of the mutant allele, the wild-type reverse primer was replaced with a mutant reverse primer (5'TTCAGCAGCTCTCCTAGGGTCG3'). All primers were purchased from SinaClon BioSciences, Iran. The PCR solution (25 μl) contained 100 ng of genomic DNA, 10× PCR buffer, 10 pmol of each primer, 10 nmol of dNTP, 1.5 mmol Mg2+ and 1 U Taq polymerase. The cycling conditions were as follows: initial denaturation at 95 °C for 5 minutes; 30 cycles of denaturation at 95 °C for 30 seconds, annealing at 62 °C for 30 seconds, and extension at 72 °C for 20 seconds; and final extension at 72 °C for 5 minutes. Afterwards, 10 μl of the PCR product was loaded onto a 2.5% agarose gel containing 0.1 μg/ml of ethidium bromide (Gibco BRL) for electrophoresis and visualization of the amplified fragments. Statistical analysis was performed using MedCalc software (Version 12.1.4.0). P-values less than 0.05 were considered statistically significant.
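The allele-calling logic of this two-reaction ARMS assay reduces to a small decision table, made explicit in the sketch below; the function and variable names are illustrative, not from the paper.

```python
def call_genotype(band_T: bool, band_C: bool) -> str:
    """Map presence/absence of the 195 bp ARMS-PCR products to a genotype.

    band_T: product seen in the reaction with the wild-type (T) reverse
    primer; band_C: product seen with the mutant (C) reverse primer.
    """
    if band_T and band_C:
        return "TC"
    if band_T:
        return "TT"
    if band_C:
        return "CC"
    return "failed"  # no product in either reaction -> repeat the assay

assert call_genotype(True, False) == "TT"
assert call_genotype(True, True) == "TC"
assert call_genotype(False, True) == "CC"
```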
RESULTS
The patients and healthy controls were genotyped on the basis of the three possible genotypes and the presence or absence of the 195 bp PCR products. In the gel electrophoresis, samples with the TT genotype had a single band for primer R1 (the reverse oligonucleotide primer for amplification of allele T). Samples with the heterozygous genotype (TC) had two bands, for primers R1 and R2. Samples with the CC genotype had a single band for primer R2 (the reverse oligonucleotide primer for amplification of allele C). We were able to detect both alleles (T and C) among the patients and control participants. The odds ratio (OR) was calculated to assess the association between the alleles and TB infection. There was no significant difference in the frequency distribution of the alleles between the patients and the controls (Table 1). Among the 100 patients, the frequency distribution of TT, TC and CC genotypes was 65%, 31% and 4%, respectively. In the control group, the frequency distribution of TT, TC and CC genotypes was 57%, 36% and 7%, respectively.
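From the reported genotype counts, the allele-level odds ratio and a chi-square test can be reproduced as in the sketch below. This uses SciPy rather than the MedCalc software used in the study, and it tests allele rather than genotype frequencies, so the p-value need not match the reported genotype-level P = 0.42.

```python
from scipy.stats import chi2_contingency

# Allele counts derived from the reported genotypes
# (patients: 65 TT / 31 TC / 4 CC; controls: 57 TT / 36 TC / 7 CC).
patients = {"T": 2 * 65 + 31, "C": 2 * 4 + 31}   # T = 161, C = 39
controls = {"T": 2 * 57 + 36, "C": 2 * 7 + 36}   # T = 150, C = 50

table = [[patients["T"], patients["C"]],
         [controls["T"], controls["C"]]]

odds_ratio = (patients["T"] * controls["C"]) / (patients["C"] * controls["T"])
chi2, p, dof, _ = chi2_contingency(table)
print(f"OR = {odds_ratio:.2f}, chi2 = {chi2:.2f}, p = {p:.3f}")
```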
DISCUSSION
TB caused by M. tuberculosis is a common and deadly infectious disease that primarily affects the lungs but can also spread to other organs, such as the kidneys and the brain. TB is more prevalent in families with a previous history of the disease, and various studies have investigated the role of host genetics in TB susceptibility. Animal studies have shown that the absence of a particular gene can lead to development of the severe form of TB. Several genetic factors have been shown to be involved in the pathogenesis of TB. Hence, studying the genetic basis of TB in different populations can provide valuable information that could be useful for treatment and prevention of the disease. One of the candidate genes for susceptibility to TB is SP110, which is located on human chromosome 2q37.1. The gene is a member of the Sp100/Sp140 family and functions as a transcriptional activator and a nuclear hormone receptor co-activator (6). The main cytokine involved in response to viral infections is IFN (12), which itself induces SP110, indicating the role of SP110 in the IFN response (13). According to previous reports, controlling SP110 could be effective in controlling intracellular infections such as TB. A study on a West African population was the first to report an association of SP110 SNPs with pulmonary TB (8). However, results of our study and some previous studies in Indonesia, South and West Africa, Russia and India could not demonstrate a relationship between SP110 SNPs and susceptibility to TB (14-17). However, a study on the Chinese population found an association between rs11556887 and rs1135791 and TB susceptibility (10). As mentioned earlier, we could not find a direct relationship between SP110 SNPs and TB among Iranians living in the Golestan Province. Additional studies on the genetic basis of TB in different populations could contribute to the improvement of current prevention and treatment methods.
CONCLUSION
Based on the results, the SP110 rs1135791 variant is not a genetic risk factor for development of pulmonary TB in the Golestan Province, Iran.
Long intergenic non-coding RNA, regulator of reprogramming (LINC-ROR) over-expression predicts poor prognosis in renal cell carcinoma
Introduction Long intergenic non-coding RNA, regulator of reprogramming (LINC-ROR) is a newly identified cytoplasmic long non-coding RNA (lncRNA) implicated in cell longevity and apoptosis. We aimed in the current work for the first time to investigate the association of the expression profiles of LINC-ROR and three stem-related transcriptional factors with clinicopathological data and their impact on renal cell carcinoma (RCC) progression in a sample of RCC patients. Material and methods Expression levels of LINC-ROR and stemness-related factors: SOX2, NANOG, and POU5F1 were detected in 60 formalin-fixed, paraffin-embedded tissues, and their paired adjacent non-cancer tissues (n = 60) by using real-time qRT-PCR analysis. Additionally, the expression profiles were compared with the available clinicopathological features. Results The genes studied were markedly up-regulated in RCC (medians and interquartile ranges were 30.3 (1.84–235.5), 10.2 (1.84–53.9), 5.39 (0.94–23.5), and 12.5 (1.61–43.2) for LINC-ROR, SOX2, NANOG, and POU5F1, respectively) relative to paired non-cancer tissue. High expression levels were associated with poor prognosis in terms of tumour undifferentiation (for LINC-ROR, SOX2, and NANOG), lymph node infiltration (for SOX2), postoperative recurrence (for LINC-ROR and SOX2), and shorter overall survival (OS) and progression-free survival (for all genes studied). The best curve for OS prediction was constructed with LINC-ROR data (area under the receiver operating characteristic curve (AUC) = 0.804 at a cut-off value of 72.7, sensitivity 78.9%, and specificity 80.5%). Conclusions Collectively, aberrant LINC-ROR and pluripotent gene expression may be recognised as prognostic markers for RCC. Future functional studies are highly recommended to validate the study findings.
Introduction
Renal cell carcinoma (RCC), which arises from renal tubular epithelial cells, accounts for 2-3% of all cancers worldwide [1]. There has been a global rise in RCC incidence in past decades, and up to 35% of RCC patients present with metastasis [2]. Therefore, discovering biomarkers for early detection and better prognostic stratification of RCC has emerged as a noteworthy target for future investigations.
According to the World Health Organisation (WHO), there are three major histopathological RCC types: clear cell RCC (ccRCC), papillary cell RCC (pcRCC), and chromophobe RCC (chRCC) [3]. Each RCC subtype is known to display unique pathological characteristics, distinct genetic alterations, and different disease outcomes [1]. The interplay of the genetic mechanisms that affect RCC pathogenesis is still poorly understood. The use of integrated molecular analyses enables a deeper understanding of the RCC molecular landscape, resulting in the development of more effective targeted cancer therapies as an extension of existing modalities [4,5].
Cancer has been found to histologically mimic embryonic tissue [6]. Aggressive cancer cells exhibit phenotypic traits remarkably similar to those of pluripotent stem cells; both have high proliferation rates and a substantial ability to self-renew by bypassing senescence [7]. Accumulating evidence has demonstrated the convergence of tumourigenic signalling and developmental pathways [8].
Since the introduction of induced pluripotent stem cell (iPSC) generation mediated by transcription factor (TF) reactivation, studies have been directed towards identifying the differential expression profiles altered during pathological nuclear reprogramming in cancer [7]. SOX2, NANOG, and POU5F1 (also known as OCT3/4) are three pluripotency-associated TFs that are well known to be up-regulated in various cancer types through modulation of apoptotic signalling pathways [9]. Notably, modifying the expression of pluripotency-associated genes in animal models has generated epithelial-derived tumours in multiple tissues with altered spatial and temporal gene expression [10].
Recent evidence has suggested complex networking between TFs, chromatin organisation, and non-coding RNAs that regulate cell fate and differentiation. Several long intergenic non-coding RNAs (lincRNAs) have also been shown to be associated with pluripotency [11]. LincRNA expression has been shown to be highly correlated with the expression of stem cell-related TFs [12]. Binding sites for pluripotency-related genes have been identified within the promoters of these long non-coding RNAs (lncRNAs) [13]. In vitro silencing of lincRNAs has been shown to induce cells to exit the pluripotent state and altered gene expression patterns [14]. In 2010, Loewer et al. [11] identified lincRNA, regulator of reprogramming (LINC-ROR or ROR) in human fibroblasts and CD34-positive blood cells during iPSC generation. Furthermore, in vitro and in vivo studies have determined the critical role of ROR in modulating cell proliferation, apoptosis, and chemosensitivity in cancer [15][16][17].
To the best of our knowledge, no clinical studies have been conducted exploring the role of ROR in RCC patients to validate the aforementioned in vitro and in vivo results. In the current study, we aimed to analyse the transcriptomic signature profiling of LINC-ROR and three of the main pluripotency-related genes (SOX2, NANOG, and POU5F1) in RCC patients to assess their clinical utility as prognostic biomarkers.
Case selection and pathological assessment
In total, 120 formalin-fixed, paraffin-embedded (FFPE) specimens were analysed, including 60 RCC tissues and their 60 paired adjacent non-cancer tissues. Figure 1 shows the flow chart of case selection in the current study. All specimens were obtained after radical or partial nephrectomy, and cases with needle biopsies were excluded. All retrieved cases were archived in the Department of Pathology, Suez Canal University between 2011 and 2016. Patient data were obtained from medical records. Missing data were directly retrieved from the patients. Patient follow-ups were performed until December 2016 and ranged from seven to 32 months. Postoperative recurrence was calculated, and survival times were estimated from the date of nephrectomy until patient death or the endpoint of follow-up. The study was conducted following the ethical and legal standards adopted by the Declaration of Helsinki. Approval was obtained from the Medical Research Ethics Committee of the Faculty of Medicine, Suez Canal University. Consent was directly obtained from patients as a routine measure for sample archiving.
Histopathological examination
Histological examination, nuclear grading, and tumour staging of RCC samples were based on haematoxylin and eosin staining. Blind assessments were made without prior knowledge, according to the internationally standardised protocols we mentioned in our previous work [18]. Most of the RCC cases were the clear cell type and contained very little delicate vascular stroma. The papillary and chromophobe types also had only a little amount of stroma, and thus it was unlikely that it affected accurate determination of the expression of the genes assessed in this study.
Gene expression profiling
Total RNA was purified from the FFPE sections using a Qiagen RNeasy FFPE Kit (Cat # 74404, Qiagen, Hilden, Germany) following the manufacturer's protocol. A short incubation of each sample with proteinase K at a higher temperature, as recommended, partially reverses formalin crosslinking of the released nucleic acids, improving RNA yield and quality as well as RNA performance in downstream assays. All samples were subjected to treatment with RNase-free DNase I for 2 h at 37°C to eliminate all genomic DNA, including very small DNA fragments that are often present in FFPE samples after prolonged formalin fixation and/or long storage times. RNA concentration and purity were assessed at an absorbance ratio of 260/280 nm with a Nanodrop ND-1000 spectrophotometer (NanoDrop Technologies, Wilmington, DE), followed by an agarose gel electrophoresis check of RNA integrity. The concentration of the extracted RNA ranged from 20 to 65 ng/μl. Reverse transcription was performed with a high-capacity cDNA reverse transcription kit (Part No. 4374966, Applied Biosystems, Thermo Fisher Scientific, Waltham, MA, USA) using a Mastercycler Gradient Thermocycler (Eppendorf, Hamburg, Germany). Detection of LINC-ROR, SOX2, NANOG, and POU5F1 gene expression and that of the endogenous control GAPDH was performed using real-time quantitative reverse transcriptase polymerase chain reaction (qRT-PCR). TaqMan assays were used for the pluripotent genes (Applied Biosystems, Thermo Fisher Scientific Inc., assay ID Hs01053049_s1 for SOX2, Hs02387400_g1 for NANOG, Hs03005111_g for POU5F1, and Hs02786624_g1 for GAPDH) as previously described in detail [19]. Quantification of LINC-ROR was achieved using SYBR Green with primers for exon 4 (F: GCCTGAGAGTTGGCATGAAT and R: AAAACCTCACTCCCATGTGC) [20]. The Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines were followed for all PCR reactions. Gene expression in randomly selected study samples (10%) was re-evaluated in separate runs to test the reproducibility of the qPCR analysis, and the results showed very close cycle of quantification (Cq) values with low standard deviations.
Statistical analysis
Data analyses were conducted using SPSS version 22 and GraphPad Prism version 7. Multivariate analysis was conducted using PC-ORD version 5.0. The relative expression levels were calculated using the delta-delta threshold cycle (2^-ΔΔCq) method. The Wilcoxon matched-pair signed-rank test was carried out to compare the expression levels between cancer samples and their corresponding adjacent non-cancer tissues. Receiver operating characteristic (ROC) curves and the areas under the ROC curves (AUCs) were determined to identify the clinical utility of the expressed genes. Ordination analysis and two-way agglomerative hierarchical clustering were performed for data exploration. Spearman's correlation analysis, chi-square (χ2), Fisher's exact, Mann-Whitney U (MW), and Kruskal-Wallis (KW) tests were applied when appropriate. A two-tailed p-value was considered significant at values < 0.05. Binary logistic and linear regression methods (Enter and Stepwise models) were applied to determine predictors for recurrence and overall survival (OS). The influence of covariates, such as age, gender, grade, tumour type, size, and gene profile, on OS was analysed. Kaplan-Meier analysis and the Cox proportional hazard model were performed to assess survival rates.
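A minimal sketch of the delta-delta Cq calculation; the Cq values below are hypothetical, since the paper reports only the resulting fold changes (e.g., a median of 30.3 for LINC-ROR).

```python
def relative_expression(cq_target_tumor, cq_ref_tumor,
                        cq_target_normal, cq_ref_normal):
    """Fold change of a target gene vs. the GAPDH reference (2^-ddCq)."""
    d_cq_tumor = cq_target_tumor - cq_ref_tumor
    d_cq_normal = cq_target_normal - cq_ref_normal
    return 2.0 ** -(d_cq_tumor - d_cq_normal)

# Target ~4.9 cycles "closer" relative to GAPDH in tumor than in normal:
print(relative_expression(24.1, 18.0, 29.0, 18.0))  # ~30-fold up-regulation
```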
Characteristics of the study patients
The baseline features of RCC patients are shown in Table I. Overall, 21 female and 39 male RCC patients were included in the study. Their ages ranged from 48 to 79 years (mean ± standard deviation of 58.6 ±6.31). Notably, in patients with solitary tumours (76.7%), the left kidney was more frequently affected (63.3%). Despite the lack of lymph node metastases in nearly two-thirds of the specimens (65%) and the small tumour sizes (76.7%), most samples showed moderate/poor differentiation (85%). Recurrence occurred in approximately one-fourth of patients (26.7%) after a mean duration of 4 ±1.7 months. Postoperative disease-free survival ranged from 2 to 32 months. The OS rate was 68.3% in the first year and 11.7% in the second year, with a mean ± SD of 11.4 ±4.9 months.
Gene expression analyses
Gene expression profiling demonstrated overexpression of the LINC-ROR, SOX2, NANOG, and POU5F1 genes in tumour specimens compared with those in their paired non-cancer tissues (all p < 0.001). Medians and interquartile ranges of the relative expression levels in RCC were 30.3 (1.84-235.5) for LINC-ROR, 10.2 (1.84-53.9) for SOX2, 5.39 (0.94-23.5) for NANOG, and 12.5 (1.61-43.2) for POU5F1.
Prognostic value of gene expression analyses
Univariate analysis revealed marked associations between LINC-ROR, SOX2, and NANOG expression levels and undifferentiated tumours (p = 0.010, 0.046, and 0.042, respectively). SOX2 up-regulation was linked to lymph node infiltration (p = 0.047). Moreover, patients with higher LINC-ROR and SOX2 levels experienced post-operative recurrence (p = 0.003 and 0.035). The expression of all genes presented significant associations with shorter PFS and low OS (all p < 0.05) (Table III). Similar results were found in correlation analyses between gene expression levels and the clinicopathological findings (Table IV).
The prognostic performance of the four genes in predicting OS in RCC patients was assessed using ROC analyses (Figure 3). In single-gene analyses, LINC-ROR data elicited the best curve (AUC = 0.804 at a cut-off value of 72.7; sensitivity 78.9%, specificity 80.5%) followed by NANOG (AUC = 0.748 at a cut-off value 11.2; sensitivity 68.4%, specificity 75.6%) and SOX2 (AUC = 0.714 at a cut-off value 25.5; sensitivity 73.3%, specificity 75.6%), whereas POU5F1 had an AUC of 0.632 at a cut-off value of 15.2, a sensitivity of 52.6%, and a specificity of 56.1%. Despite the good performance of the combined gene analyses, with AUC values ranging from 0.675 to 0.755, the AUC of LINC-ROR remained the best (Table V).
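The ROC/cut-off procedure can be sketched with scikit-learn as below, on simulated expression values; only the group sizes and the quoted performance figures come from the paper, and the simulated AUC will not exactly reproduce the reported 0.804.

```python
import numpy as np
from sklearn.metrics import roc_curve, auc

# Simulated LINC-ROR levels: 19 deceased (label 1) vs. 41 surviving (0),
# matching the sensitivity/specificity denominators implied in the text.
rng = np.random.default_rng(0)
y = np.r_[np.ones(19), np.zeros(41)]
x = np.r_[rng.normal(95, 30, 19), rng.normal(55, 25, 41)]

fpr, tpr, thresholds = roc_curve(y, x)
print("AUC =", round(auc(fpr, tpr), 3))

# Optimal cut-off by Youden's J = sensitivity + specificity - 1
j = tpr - fpr
i = int(np.argmax(j))
print("cut-off =", round(thresholds[i], 1),
      "sens =", round(tpr[i], 2), "spec =", round(1 - fpr[i], 2))
```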
Logistic regression analyses revealed that both lymph node infiltration and LINC-ROR expression acted as recurrence predictors in RCC patients (p = 0.007 and 0.047, respectively; Table VI). Furthermore, linear regression analyses demonstrated that LINC-ROR was an independent predictor for OS (R = 0.443, R 2 = 0.192, p < 0.001).
Survival analysis
Analyses of different factors influencing the survival time of RCC patients are demonstrated in Table VII. Cox regression showed that LINC-ROR, POU5F1, and recurrence were risk factors that predicted OS (hazard ratio (HR) = 4.21). Kaplan-Meier curves comparing low and high LINC-ROR, SOX2, NANOG, and POU5F1 gene expression revealed significant differences in OS (p = 0.018, 0.008, 0.028, and 0.001, respectively), which suggested that high expression was associated with poor survival (Figure 4). Similar findings were found in samples stratified by the histopathological type.
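Equivalent Kaplan-Meier and Cox analyses can be sketched with the lifelines package on a toy data frame; all values and column names below are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter

df = pd.DataFrame({
    "os_months": [7, 12, 15, 9, 32, 11, 18, 6, 24, 10],
    "died":      [1,  1,  0, 1,  0,  0,  0, 1,  0,  1],
    "high_ror":  [1,  0,  0, 1,  0,  1,  0, 1,  1,  1],
})

# Kaplan-Meier curves stratified by LINC-ROR expression group
kmf = KaplanMeierFitter()
for grp, sub in df.groupby("high_ror"):
    kmf.fit(sub["os_months"], sub["died"], label=f"high_ror={grp}")
    print(grp, kmf.median_survival_time_)

# Cox proportional hazards: exp(coef) is the hazard ratio for high LINC-ROR
cph = CoxPHFitter()
cph.fit(df, duration_col="os_months", event_col="died")
print(cph.summary[["exp(coef)", "p"]])
```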
Multivariate analysis
Bray-Curtis analysis was applied to combine the effects of both clinicopathological characteristics and the transcriptomic signatures of the four tested genes. Samples were scattered along multiple axes; axis 1 represented 30.1% of the variance, whereas axes 2 and 3 accounted for 14.6% and 11% of clustering, respectively. The study population was clustered into three distinct groups according to survival time ( Figure 5). Notably, patients with poor survival (i.e. less than 1 year) were most affected by high LINC-ROR levels. The same category of patients was associated with poor differentiation and recurrence.
Discussion
A complex network of TFs, non-coding RNAs, and signalling transducers drives cancer. Unravelling the interactions between these molecular cancer players would pave the road towards a better understanding of cancer development and progression.
Currently, marked up-regulation of LINC-ROR, SOX2, NANOG, and POU5F1 genes was found in cancer samples compared with the adjacent non-cancer tissues. Additionally, LINC-ROR expression levels were positively correlated with SOX2 and NANOG levels. These results were consistent with previous studies [14,15]. Pluripotent TFs, which are often epigenetically maintained in a silent state, have been shown to be reactivated in several cancers, and their knockdown or suppression with microRNA or small interfering RNA transfection has been shown to hinder tumour progression, promote apoptosis, and improve chemosensitivity [21]. Interestingly, these TFs are now well known to operate in conjunction with non-coding RNAs. A few human lncRNAs may also be under the direct control of the core pluripotency TFs [22]. A recently identified cytoplasmic lncRNA, LINC-ROR, was reported to play a crucial role in the pluripotency maintenance [23]. The promoter of the LINC-ROR gene was found to contain binding sites for SOX2, NANOG, and POU5F1 [24]. Upon binding of TFs, LINC-ROR transcription is activated, whereas silencing of these proteins suppressed LINC-ROR expression through a regulatory feedback loop [25]. Similarly to our findings, LINC-ROR over-expression has also been observed in several cancers, including colorectal carcinoma [15] and breast cancer [25].
Another main finding of our study was the association of the four genes with poor prognosis. The Kaplan-Meier curves and ROC analyses showed that the LINC-ROR, SOX2, NANOG, and POU5F1 expression profiles were associated with shorter survival times. RCC patients with high LINC-ROR and SOX2 levels had a higher recurrence rate after an average of 4 months post-nephrectomy.
Notably, regression analyses confirmed that LINC-ROR was an independent predictor for recurrence and poor OS. In addition, LINC-ROR, SOX2, and NANOG expression showed a significant association with poor differentiation, whereas SOX2 was a marker of poor prognosis with respect to LN metastasis. In accordance with previous studies, melanoma and HCC cells expressing high POU5F1 and NANOG levels exhibited a more aggressive malignant phenotype [26]. NANOG expression has also been considered a prognostic biomarker for triple-negative breast cancer (TNBC) [27].
Recently, increasing attention has been focused on the contribution of LINC-ROR to cancer progression. LINC-ROR has been suggested to be a driving factor in tumourigenesis [24], as depicted in Figure 6. Increased LINC-ROR expression levels have been associated with differentiation, apoptosis, invasion, EMT, and metastasis in cancer [24,28]. Up-regulation was recognised as a poor prognostic factor in colon cancer [29]. However, LINC-ROR silencing reduced the malignant phenotype of cancer cells [24]. One of the peculiar roles of lncRNAs is acting as competing endogenous RNAs (ceRNAs) by binding and sequestering microRNAs (miRNAs) and thus preventing miRNAs from silencing their target genes [30], a process that has been shown to be relevant to tumourigenesis and differentiation [12].

Figure 5 (caption): Bray-Curtis ordination of the study population. Distance method: relative Euclidean; endpoint selection method: variance-regression; axis projection geometry: Euclidean; residual distances: Euclidean; score calculation by weighted averaging. Axis 1 represents 30.1% of the variance, whereas axes 2 and 3 account for 14.6% and 11% of clustering, respectively. The study population clustered into three distinct groups according to survival time; notably, most patients with poor survival (less than 1 year) manifested high levels of LINC-ROR, and the same category of patients was associated with poor differentiation and recurrence.

Figure 6 (caption): The role of LINC-ROR in tumourigenesis and cancer progression.
LINC-ROR has been demonstrated to serve as an miRNA sponge in embryonic stem cell self-renewal [31]. LINC-ROR effectively maintained the levels of the SOX2 TF by binding to miR-145 in human amniotic epithelial stem cells, altering the landscape of pluripotency and differentiation [32]. The tumour-promoting function of LINC-ROR can also be attributed to its influence on transcription frameworks involved in various cancer-related signalling pathways [24]. LINC-ROR can play an oncogenic role by modulating the c-Myc pathway [20]. LINC-ROR has also been shown to elicit re-expression of foetal genes such as atrial and brain natriuretic peptides [28]. Additionally, upon DNA damage and expression of the tumour suppressor Tp53, LINC-ROR levels increased and suppressed Tp53 mRNA through a translation repression mechanism [16]. At the DNA level, LINC-ROR has been reported to block the recruitment of a histone-modifying enzyme (G9A methyltransferase), thereby triggering chromatin modification and promoting tumour growth and metastasis [32].
In conclusion, deregulation of the LINC-ROR/pluripotent gene axis in renal tumours highlights their use as potential prognostic biomarkers. Targeting this RNA directly rather than targeting its regulated miRNAs, tumour suppressor genes, or oncogenes might provide a shortcut for future cancer therapies. However, due to the limited clinical studies on LINC-ROR, further studies in other types of tumours are warranted.
Simultaneous Treatment of COVID-19 With Serine Protease Inhibitor Camostat and/or Cathepsin L Inhibitor?
Based on the newest research results, the new coronavirus uses the protein ACE-2 as a receptor for docking to type II pneumocytes [2]. A broad-based team led by infection biologists of the German Primate Center, with participation of the Charité Berlin, the University of Veterinary Medicine Hannover, the BG Trauma Center Murnau, LMU Munich, the Robert Koch Institute, and the German Center for Infection Research, investigated how SARS-CoV-2 penetrates lung cells and how this process can be blocked [3]. They published their results in the journal Cell [3]. They also identified TMPRSS2 as the cellular protein responsible for entry into the cell [3]. The type II transmembrane protease TMPRSS2 activates the spike (S) protein of SARS-CoV-2 on the cell surface after receptor binding, enabling virus entry into cells. Without TMPRSS2, SARS-CoV-2 achieves cell entry via an endosomal pathway, in which CTSL can play an important role by activating the fusogenicity of the S protein.
SARS-CoV-2 needs for cell entry ACE-2, TMPRSS2 and CTSL
Entry depends substantially on processing of the viral S protein by the serine protease TMPRSS2 and by CTSL. TMPRSS2 activates the S protein for both cell-cell and virus-cell fusion. The S protein is fusogenically activated by CTSL, thereby allowing fusion of the viral and endosomal membranes. Subsequent infection is sensitive to inhibitors of endosomal acidification such as ammonium chloride, suggesting that SARS-CoV-2 requires a low-pH milieu for infection. On the other hand, the S protein can mediate cell-cell fusion at neutral pH, indicating that S protein-mediated fusion does not have an absolute requirement for an acidic environment. Given these discordant findings, we hypothesized that cellular factors sensitive to ammonium chloride, such as pH-dependent endosomal proteins, may play a role in mediating SARS-CoV-2 viral entry. The requirements for proteases in the activation of viral infectivity and the effect of protease inhibitors on COVID-19 infection are examined. Our results are consistent with a model in which SARS-CoV-2 employs a unique three-step method for membrane fusion, involving receptor binding and induced conformational changes in the S glycoprotein, followed by CTSL proteolysis and activation of membrane fusion within endosomes.
TMPRSS2
TMPRSS2 encodes a protein that belongs to the serine protease family. The encoded protein contains a type II transmembrane domain, a receptor class A domain, a scavenger receptor cysteine-rich domain, and a protease domain. Serine proteases are known to be involved in many physiological and pathological processes. This gene was demonstrated to be up-regulated by androgenic hormones in prostate cancer cells and down-regulated in androgen-independent prostate cancer tissue. The protease domain of this protein is thought to be cleaved and secreted into the cell medium after autocleavage. Alternatively spliced transcript variants encoding different isoforms have been found for this gene.
Camostat
Camostat (mesylate) is supplied as a crystalline solid. A stock solution can be prepared by dissolving camostat (mesylate) in the solvent of choice, which should be purged with an inert gas. Camostat (mesylate) is soluble in organic solvents such as dimethyl sulfoxide (DMSO) and dimethylformamide; its solubility in these solvents is about 25 mg/mL. Camostat is a protease inhibitor [3,4]. It inhibits trypsin (Ki = 1 nM) and various inflammatory proteases, including plasmin, kallikrein and thrombin. Camostat inhibits the entry of vesicular stomatitis virus (VSV) particles pseudotyped with the SARS-CoV and SARS-CoV-2 surface glycoproteins into Vero cells, Calu-3 cells and primary human lung epithelial cells when administered at a concentration of 10 µM [4]. It reduces the number of genomic equivalents of SARS-CoV-2, a marker of infection, in Calu-3 cells. Camostat inhibits the function of the sodium channel in human respiratory epithelial cells (IC50 = 50 nM) and improves mucociliary clearance in sheep [4]. Administration of camostat (1 mg/kg) inhibits the production of tumor necrosis factor-α (TNF-α) and monocyte chemoattractant protein-1 by monocytes and the proliferation of pancreatic stellate cells in a rat model of pancreatic fibrosis [2]. In Japan, camostat is well known as "Foipan" and is approved for patients with chronic pancreatitis, in whom it showed an attenuating effect on pancreatic fibrosis [5]. For SARS-CoV-2, the first clinical trials were initiated at the University of Aarhus, Denmark. Camostat has the potential to block the entry of the virus into lung cells, known as type 2 pneumocytes. To date, no clinical study results are available. What we know is that camostat could have promising potential in COVID-19.
SARS-CoV-1 used CTSL for entry, and infection could be blocked by CTSL inhibitors in a mouse model [6]. SARS-CoV-2 could be blocked in a similar way; therefore, clinical trials with CTSL inhibitors should be performed in COVID-19 as soon as possible.
Conclusions
To date, many different treatment options are used "off-label" in severe cases or are under clinical trial. Table 1 shows the different therapeutic approaches to treat COVID-19. In this work, we focus on the serine protease inhibitor camostat, which partially blocks infection by SARS-CoV-2 [2,3,7-10]. Simultaneous treatment of cells with camostat and a cathepsin inhibitor efficiently prevented both cell entry and multi-step growth of SARS-CoV-2 in human Calu-3 airway epithelial cells [11]. This efficient inhibition can be attributed to the double blockade of entry from the cell surface and through the endosomal pathway. These observations suggest camostat as a candidate antiviral drug to prevent or suppress TMPRSS2-dependent infection by SARS-CoV-2 [8,9]. Further investigations are necessary. A new therapy option in COVID-19 patients is based on the combination of camostat (a serine protease TMPRSS2 inhibitor) and a cathepsin inhibitor [5,6]. First clinical off-label trials should be performed as soon as possible in COVID-19 patients. In a recent study, an orally bioavailable broad-spectrum antiviral, the ribonucleoside analog β-D-N4-hydroxycytidine (NHC, EIDD-1931), showed positive results in inhibiting SARS-CoV-2 in human airway epithelial cell cultures and multiple coronaviruses in mice [10]. Further trials are warranted.

Table 1. Different therapeutic approaches to treat COVID-19

Class | Treatment option | Status
Anti-viral drugs | >85% of patients received antiviral drugs, including oseltamivir (75 mg every 12 h orally), ganciclovir (0.25 g every 12 h intravenously) and lopinavir/ritonavir tablets (400/100 mg twice daily); lopinavir/ritonavir syrup in children; remdesivir is currently in trials in more than 10 medical facilities in Wuhan and is known to prevent MERS-CoV | In trials
Ribonucleoside analog | NHC (EIDD-1931) [10] | In vitro
Anti-malarial drugs | Chloroquine and hydroxychloroquine have been effective in inhibiting the exacerbation of pneumonia due to their anti-viral and anti-inflammatory activities | —
Integration of Tobacco Treatment Services into Cancer Care at Stanford
As part of a National Cancer Institute Moonshot P30 Supplement, the Stanford Cancer Center piloted and integrated tobacco treatment into cancer care. This quality improvement (QI) project reports on the process from initial pilot to adoption within 14 clinics. The Head and Neck Oncology Clinic was engaged first in January 2019 as a pilot site given staff receptivity, elevated smoking prevalence, and a high tobacco screening rate (95%) yet low levels of tobacco cessation treatment referrals (<10%) and patient engagement (<1% of smokers treated). To improve referrals and engagement, system changes included an automated “opt-out” referral process and provision of tobacco cessation treatment as a covered benefit with flexible delivery options that included phone and telemedicine. Screening rates increased to 99%, referrals to 100%, 74% of patients were reached by counselors, and 33% of those reached engaged in treatment. Patient-reported abstinence from all tobacco products at 6-month follow-up is 20%. In July 2019, two additional oncology clinics were added. In December 2019, less than one year from initiating the QI pilot, with demonstrated feasibility, acceptability, and efficacy, the tobacco treatment services were integrated into 14 clinics at Stanford Cancer Center.
Introduction
Approximately 30% of cancer deaths in the U.S. are attributed to smoking [1]. Continued smoking after a cancer diagnosis can complicate cancer treatments by interacting with medications, increasing the risk of treatment side-effects, and decreasing wound healing time [2][3][4][5]. Quitting smoking as early as possible after a cancer diagnosis not only eliminates these complications but also has been shown to increase survival and well-being while decreasing the risks of disease recurrence and second primary tumors [6,7]. For these reasons, tobacco cessation services are recommended as a standard component of cancer treatment [8].
Patients diagnosed with cancer who smoke often want to quit but face challenges. Research indicates that about 50% of patients who smoked before a cancer diagnosis continued to smoke during treatment [9]. Among patients who continued to smoke after undergoing treatment for head and neck cancer, most reported being motivated to quit and making past attempts to quit, but few entered a formal tobacco treatment program or received pharmacotherapy to deal with nicotine withdrawal or dependence, and most returned to smoking within 1 month posttreatment [10].
The National Comprehensive Cancer Network's (NCCN) guidelines for treating tobacco dependence in cancer care recommend repeated interventions (even as brief as 3 min) to encourage engagement in tobacco treatment, including counseling and the use of cessation medications [8,11]. Combination medication and behavioral therapy is considered more effective compared to behavioral therapy or medication alone. Food and Drug Administration (FDA)-approved medications are nicotine replacement therapy (NRT; patch, lozenge, gum, nasal spray, and inhaler), bupropion, and varenicline. Referral resources for quitting smoking are available through state quitlines and online, for example, the National Cancer Institute's (NCI) smokefree.gov website [12].
Historically, clinical systems have used an "opt-in" approach to tobacco treatment, whereby patients have to request or accept a referral from their oncology providers. As an alternative, in an "opt-out" model, all patients who use tobacco are automatically referred for evidence-based tobacco cessation treatment, with the option to opt-out from engaging in care. The opt-out model has increased patient reach and engagement [13], and quit rates [14]. "Opt-out" models for tobacco treatment have proven feasible in inpatient and cancer care settings [15,16].
Despite the promotion of NCCN guidelines for effective tobacco cessation strategies, systemwide adoption of comprehensive tobacco treatment in cancer settings has been low. In a survey of NCI-designated comprehensive cancer centers, 40% reported not having any tobacco treatment available for patients [17]. System-level barriers to care include lack of organizational prioritization, staff training, clarity in roles and responsibilities, and resources for treating tobacco [17,18]. Research has examined oncologists' practice patterns regarding tobacco assessment, cessation support, perceptions of tobacco use, and barriers to providing cessation support for patients with cancer. In a survey of 1101 members of the American Society of Clinical Oncology, 80% reported they always ask about tobacco use at the initial visit, 58% always advised those who use tobacco to quit, and 38% reported assessing tobacco at follow-up. Reported barriers to routinely providing cessation support were inadequate training (38%) and perceiving patients as resistant to tobacco treatment (74%) [19]. Asking about tobacco use and advising cessation are necessary first steps for treating tobacco addiction; however, ongoing support and combination treatment often are necessary to prevent relapse [20]. Insurance coverage for tobacco treatment in the U.S. also remains inconsistent and often non-comprehensive, creating access challenges for patients. While the Affordable Care Act and other federal laws require most health insurance plans to cover some tobacco treatment, policymakers may not enforce these requirements [21].
Despite the known harms of tobacco use and the recommended need for tobacco cessation treatment in the oncology setting, the clinical trial evidence is weak with regard to outcomes. A recent meta-analysis of 21 smoking cessation studies among cancer survivors reported a nonsignificant treatment effect [22]. Subgroup analyses of specific behavior change techniques (e.g., general encouragement and enhancing self-efficacy), found to be effective in other patient populations (e.g., patients with chronic obstructive pulmonary disease [COPD] and patients undergoing surgery), also were nonsignificant. Novel approaches appear warranted and particularly with consideration of the unique aspects of cancer care systems and cancer survivorship. More broadly, but with relevance here, there have been calls for "practice-based production" of evidence that maximizes external validity, conducted in clinical settings with real-world patient populations [23].
This quality improvement (QI) project aimed at integrating accessible tobacco treatment, consistent with NCCN guidelines, into cancer care at a major cancer center. Novel aspects of the treatment approach included an opt-out model to maximize outreach, telemedicine-delivered treatment and a virtual same-day-delivery pharmacy to maximize reach, and a training program with individual counseling to maximize engagement. The planning effort by the Tobacco Treatment Service team started in August 2018 and has been supported by a Moonshot P30 Supplement Award (P30CA124435-11S1) as part of the National Cancer Institute's Cancer Center Cessation Initiative (C3I). By adhering to the "Plan, Do, Study, Act" (PDSA) methodology [24,25], the team implements weekly improvements to advance the quality and effectiveness of services provided to patients and family members. Reported here are the process, methods, and results from engaging providers and patients in three pilot clinics at the Stanford Cancer Center Palo Alto location (Head and Neck Oncology, Thoracic Oncology, and GI Surgery Oncology) and the subsequent efforts for center-wide implementation. Project activities were reviewed by Stanford Medicine's Institutional Review Board and determined to fall within the domain of a quality improvement evaluation (Protocol #48420).
Materials and Methods
The Stanford Cancer Center (SCC), an NCI-designated Comprehensive Cancer Center, is located in Northern California with sites in Palo Alto, Redwood City and San Jose, California. The catchment area encompasses nine counties: Alameda, Merced, Monterey, San Benito, San Mateo, Santa Clara, Santa Cruz, San Joaquin, and Stanislaus Counties. Serving a vast area, SCC's annual number of newly registered cancer patients is over 6500. State surveillance data in 2018 for adults (18-80 years) in the SCC's catchment area indicate a smoking prevalence of 6% for women and 15% for men (11% overall) [26].
The Stanford Tobacco Treatment Service was developed with the goal of connecting patients receiving cancer care and their family members who use tobacco to evidence-based, high-quality tobacco cessation treatment. While combustible cigarettes are the main tobacco product of use, patients using any form of tobacco (e.g., cigars, smokeless tobacco, e-cigarettes) are eligible for treatment.
The Stanford Tobacco Treatment Service consisted initially of 1.3 supported full-time equivalent (FTE) team members, increased over time to 2.3 FTE, including a full-time (initially half-time) certified tobacco treatment specialist, two program managers at a combined 1 FTE (initially 0.5 FTE), and three addiction medicine doctoral-level clinicians together contributing 0.3 FTE. The service also is an 8-hour-per-week practicum site for four psychology students, who provide supervised patient care. Ancillary support to the program is provided by advanced practice providers and management staff in the oncology clinics, the SCC's data/quality/IT improvement systems team, and the virtual pharmacy partner with pharmacists certified to prescribe NRT.
Onboarding and ongoing training activities for team members include in-services (by the team's addiction medicine clinicians and the California Smoker's Helpline clinical director), key literature (e.g., NCCN guidelines, Surgeon General Reports, review articles), and webinars. Additionally, one staff member completed an accredited tobacco treatment specialist program and three team members participated in Memorial Sloan-Kettering's Assessment and Treatment of Tobacco Dependence in Cancer Care multiday workshop.
Pre-Implementation Assessment at a Pilot Clinic
As part of this QI initiative, the Stanford Tobacco Treatment Service partnered first with Stanford's Head and Neck Oncology Clinic as a pilot site to assess barriers and facilitators to cancer patients accessing tobacco treatment. The clinic was chosen for its elevated smoking prevalence, receptivity to the project, and >95% tobacco screening rate yet low rate of treatment referrals (<10%) and low engagement of patients in tobacco cessation treatment. Prior to the pilot, <1% of the clinic's patients who reported smoking were engaging in tobacco cessation treatment. Tobacco quit success rates were not tracked during this timeframe. While the clinicians viewed tobacco dependence as medically relevant, the clinics were not resourced with tobacco treatment specialists and the oncology care providers' time was limited. The clinicians and the clinic leadership expressed interest in a pilot partnership to connect identified tobacco users with flexible and convenient cessation treatment services.
Critically, the integrated model needed to fit within the clinicians' preexisting workflow. As a first step, the Stanford Tobacco Treatment Service team conducted a gemba [27], which refers to the act of shadowing a team, workflow, and/or process in the actual place where the work is done and value is created. In this case, the team shadowed in clinic to learn about the tobacco-screening workflow and barriers to referring patients for treatment. The clinic was observed to follow an opt-in model, reliant on the clinicians to generate tobacco treatment referrals in the electronic health record (EHR). The system's sole tobacco cessation program at the time was an in-person group offered one hour a week at the medical center's psychiatry clinic. The reliance on clinician referrals, the location of the tobacco cessation group external to oncology, and the in-person format and limited frequency limited patient engagement. Insurance coverage also was a barrier to care, and stigma may have hindered engagement given that the tobacco treatment service was offered in a psychiatric setting.
Through attendance at clinic meetings, the in-clinic shadowing, and conversations with clinic leadership, the tobacco treatment QI team developed strong relationships with medical assistants, nurses, providers, and administration to ensure the prioritization and sustainability of tobacco treatment integration. The identified barriers informed intervention development and implementation. Developed was a flexible menu of accessible tobacco treatment services to offer patients and their family members as a covered benefit throughout and beyond their oncology treatment.
Intervention Development and Implementation
An automated opt-out referral process was developed to ensure that all patients treated in the Head and Neck Oncology Clinic were screened for tobacco use and provided opportunities to engage in tobacco treatment. The QI workflow is described here.
First, patients were screened for tobacco use at every clinic visit by the oncology team. Once identified by a medical assistant, a magnet was placed either on the exam room door or on the clinic white board to indicate a positive tobacco screen. These magnets were visible to all staff and providers. Oncology care providers gave patients identified as tobacco users a tobacco-treatment brochure and informed them that the Stanford Tobacco Treatment Service would call them within 1 week to discuss the services available. Patients were given the option to opt-out of being contacted. When a patient opted-out of treatment, the provider emailed the Stanford Tobacco Treatment Service team to not contact that patient.
Next, the Stanford Tobacco Treatment Service generated a weekly report from the EHR with contact information for all patients positively screened for tobacco use in the clinic. A tobacco treatment team member called every patient on the report up to three times, leaving voicemails as needed. Additionally, messages were sent via the EHR patient portal introducing the tobacco treatment specialist and tobacco treatment options available to patients and family members for a free or reduced price.
The menu of treatment services included counseling and/or cessation medications. Counseling was offered in-person at Stanford before or after their clinic visit, individually or via group, or virtually through telemedicine or over the phone. The cessation counseling was offered to patients by predoctoral clinical psychology students, supervised by the team's licensed clinical psychologists. Family members also were eligible to receive counseling support and/or cessation medications. The counseling was offered as a three-session treatment with the option to continue. Translation services were available by phone through Stanford Health Care's Translator Services. Patients who selected telephone-based counseling were referred to the toll-free California Quit Line through an electronic referral portal where a trained counselor reached out to patients over the phone to offer free, ongoing, telephone-based counseling in six languages. Information on NCI's smokefree.gov web, chat, and phone-support services also was provided.
For cessation medication, patients were connected to Alto, a virtual pharmacy in Northern California, that offered same-day delivery of cessation medications at a discounted price. A partner in this QI initiative, Alto's pharmacists furnished NRT following patient consultation and offered a two-week free trial to the patients and their family members. For patients interested in varenicline or bupropion, consultation via telemedicine with the Stanford Tobacco Treatment Service's addiction medicine physician was available. Prescriptions for varenicline or bupropion were written by the addiction medicine physician and could be filled, often with same day delivery, by Alto.
To address insurance eligibility barriers, tobacco treatment was offered at no charge as a covered benefit for Stanford patients receiving cancer care. To streamline care delivery, with fewer hand-offs, services for tobacco treatment were offered through the Stanford Cancer Center. Housing the tobacco treatment services within the Stanford Cancer Center and framing the services as part of cancer care may minimize stigma. Patients were made aware of the new service through brochures and a treatment service website.
Lastly, to communicate program metrics and to further refine service delivery, once per month, a Stanford Tobacco Treatment Service team member attended the clinic's morning huddle and/or monthly staff meeting to present patient engagement data and to discuss any improvement opportunities with the screening process or service infrastructure.
Expansion
Six months after initiating the pilot in the Head and Neck Oncology Clinic, two additional clinics were engaged: Thoracic Oncology and GI Surgery Oncology. The Stanford Tobacco Treatment Service team initiated gembas in each clinic to learn their workflows and to identify the need for modifications to meet the unique needs of each clinic. Based on what was learned in Head and Neck Oncology, clear expectations were set with clinic leadership regarding the tobacco treatment team's scope of work and time to outreach. Partnerships were developed with social work and case management to ensure patients with needs beyond tobacco dependence were supported appropriately.
Results
Prior to the QI initiative, from January 2018 through December 2018, a total of 5735 patients were seen at the three participating pilot clinics; 5580 (97%) patients were screened for tobacco use; 242 (4%) patients had current tobacco use indicated; and 42 (17%) patients had clinician tobacco treatment assistance documented. Clinician assistance to patients' tobacco use included prescription for cessation medication (n = 7), brief provider counseling (n = 2), and/or a referral to the psychiatry clinic's tobacco treatment group (n = 35). Data for the psychiatry clinic's tobacco treatment group were available only for Q1 of 2018 and indicated that the service received 9 referrals from the three oncology clinics of interest; however, none of the referred patients completed an intake or engaged in treatment. Likely barriers were insurance, distance, scheduling, and care being based in psychiatry rather than within oncology. Tobacco quit success rates were not tracked during this timeframe.
The pilot launched in January 2019 in Head and Neck Oncology, followed by Thoracic Oncology and GI Surgery Oncology starting in July 2019. By December 2019, a total of 6396 patients were seen at the three clinics during this pilot phase. Of patients seen, 6330 (99%) patients were screened for current tobacco use and 368 (6%) tobacco users were identified. All identified tobacco users were opted into treatment and received phone outreach from the tobacco treatment service and information through their EHR using a secure messaging system. Patient reach and engagement during the pilot phase are shown in Figure 1. To date, 44 patients have come up for their 6-month follow-up and 9 have reported being tobacco-free (20% of those engaged). Outcome data continue to be collected for patients at 6 months posttreatment initiation.
By 11 months, with building evidence of feasibility, acceptability, and efficacy, the tobacco treatment service with the opt-out approach was expanded to 11 additional SCC clinics, for a total of 14 clinics. From the expansion on November 23, 2019, through February 15, 2020, 13,013 patients were screened for tobacco use and 435 patients (3%) were newly identified as tobacco users. All 435 patients were called by the tobacco treatment service within 1 week of being identified in clinic and were provided information on quitting tobacco use through their EHR. The average number of new smokers identified per week has increased from 3 to about 35 since the expansion of the pilot phase. Thus far, a total of 600 patients (75%) have been reached by the team's tobacco treatment specialist. To date, 181 patients (30% of those reached) have engaged in treatment, with 108 (60%) choosing one treatment modality and 73 (40%) choosing combination treatment. Outcome assessments will continue to evaluate the efficacy of treatment service delivery for supporting long-term abstinence from tobacco use.
Discussion
As part of a national movement supported by NCI, this QI initiative aimed to integrate tobacco cessation treatment into cancer care at Stanford and to improve tobacco treatment service utilization to help patients become tobacco-free. Within 6 months of initiating the QI initiative in a single pilot clinic, two additional clinics were added. Within 11 months, an additional 11 clinics were added for a total of 14 clinics.
In the pre-implementation assessment phase, common barriers to treatment were identified including reliance on clinician referrals, lack of insurance coverage, distance of travel to counseling sites, and fear of stigma around counseling services, which had been centered in psychiatry. Through employing the opt-out approach, offering treatment via telemedicine, and providing treatment as a covered benefit for patients and family members, this QI initiative maximized treatment access and provided individualized treatment planning. The tobacco treatment model offered a menu of treatment services to help patients meet their quit goals. Screening and treatment delivery processes were incorporated into current workflows to reduce time and costs for patients and the medical center without compromising patient-provider interpersonal contact and quality of care. To date, the quit rate is 20% at the 6-month follow-up.
Cancer patients who smoke report higher motivation to quit, but not higher quit rates, relative to the general population [28,29]. Hence, a cancer diagnosis offers a unique and critical opportunity to help patients quit. However, given the general lack of success in the tobacco treatment research literature [22], tobacco treatment in cancer care may require new models of engagement and delivery. This QI initiative experience identified the need to provide flexible and individualized treatment interventions. Patients often feel burdened by the immediate consequences of their cancer diagnoses and treatment, which puts the consequences of long-term tobacco use in their periphery. Utilizing motivational interviewing techniques, the treatment team focused on rapport building and engagement. In addition, the initiative utilized a workflow that provided feedback and consultation with the patient's medical team to enhance coordination of patient-centered care and to close the treatment loop. This QI initiative observed a level of patient tobacco treatment engagement higher than other comprehensive tobacco treatment services within oncology settings [30,31], which tend to be about 20%. With regard to abstinence outcomes, the University of Texas MD Anderson Cancer Center, which provides a comprehensive approach to tobacco treatment, has reported a 9-month abstinence rate of 38% among 2779 individuals treated from 2006 to 2013. Automatic referral systems that identify current tobacco users or recent quitters at clinical or preclinical evaluations have been found to dramatically increase participation in tobacco treatment in cancer care [29,30].
Future Work
The tobacco treatment team anticipates expansion of the automated referral service across the institution's cancer network, which includes 2 additional sites in Northern California. The team is investigating the use of mail and digital solutions to further extend outreach and workflow efficiencies. Anticipated is an increase in patient engagement in treatment of 8 to 10 patients per week (about 400 to 500 patients within the next year), which is feasible with the supervised trainee model and virtual pharmacy. Once the process and infrastructure are developed for the cancer network, the team plans to expand services across all of Stanford Health Care.
This practice-based QI evaluation has notable limitations. Findings may not generalize to other cancer centers. For example, the clinical capacity to call all patients identified as tobacco users may be more challenging in systems with higher smoking prevalence. Relying on self-report, tobacco use may have been underreported due to the stigma of smoking and having cancer or because of smoking being contraindicated for surgery or a cancer treatment medication [32][33][34]. The treatment model is time and resource intensive, as person-to-person connections were prioritized, and required administrative oversight as the service's documentation and referral systems were not well embedded in the institution's operations. Given these considerations, the scalability of the model requires further testing.
Despite these limitations, the process described here is offered as a potential model for QI initiatives to integrate tobacco cessation treatment into cancer care. Quitting smoking supports positive outcomes in the long-term care of cancer survivors [19,35]. The treatment offerings and model have engaged patients in a high touch manner of frequent person-to-person interactions, which opened opportunities to tailor and meet the individual needs of patients.
Conclusions
The initial results of this QI tobacco treatment initiative demonstrate that providing comprehensive tobacco treatment services to patients within an oncology setting can result in higher tobacco treatment engagement. A tobacco treatment intervention implemented as a covered benefit of cancer care was found to be feasible and acceptable to clinicians and patients. Offered as evidence from practice-based QI evaluation, further testing of the model's feasibility and scalability is warranted.
TIKHONOV-BASED WELL-CONDITIONED ASYMPTOTIC WAVEFORM EVALUATION FOR DUAL-PHASE-LAG HEAT CONDUCTION
The Tikhonov-based well-conditioned asymptotic waveform evaluation (TWCAWE) method is presented here to study non-Fourier heat conduction problems with various boundary conditions. In this paper, a novel TWCAWE method is proposed to overcome the ill-conditioning of the asymptotic waveform evaluation (AWE) technique for thermal analysis of time-dependent problems. The TWCAWE method avoids the instability of AWE and accurately approximates the initial high-frequency behavior and the delay, similar to well-established numerical methods such as Runge-Kutta. Furthermore, the TWCAWE method is found to be 1.2 times faster than AWE and 4 times faster than the traditional Runge-Kutta method.
Introduction
In earlier decades, thermal analyses were performed using traditional iterative methods. These iterative methods are undeniably very precise, but they are computationally expensive. Model-based parameter estimation (MBPE) was introduced to reduce the computational complexity [1]. In 1990, the asymptotic waveform evaluation (AWE) model was presented [2]; it is a special case of the MBPE technique, and its authors showed that the AWE method is at least 3.33 times faster than conventional frequency-domain numerical methods. Unfortunately, AWE moment matching is ill-conditioned, because the AWE technique cannot forecast the delay and the initial high frequencies accurately [3]. The Lanczos process [4] was then developed, in which linear systems are transformed into a Pade approximation without forming ill-conditioned moments. For non-linear systems, however, one must either solve the problem using the ill-conditioned AWE technique or convert the non-linear system into a linear one. Linearizing the problem either discards higher-order terms or introduces extra degrees of freedom. To avoid these complications, a technique entitled well-conditioned asymptotic waveform evaluation (WCAWE) was introduced [5]. Using WCAWE, the moment-matching process does not discard higher-order terms and also avoids extra degrees of freedom for non-linear systems. The WCAWE method introduces two correction terms to remove the ill-conditioning of AWE moment matching; the method is well recognized for solving frequency-domain finite element problems, particularly in electromagnetics, where simulations were carried out for different types of antennas. In the WCAWE technique, the [Z] matrix was picked randomly to find the correction terms. In recent years, the WCAWE method has been used to solve the challenging Helmholtz finite element model (FEM) [6].
The objective of the present work is to study Fourier and non-Fourier heat conduction problems for diverse boundary conditions. Various methods can be found to solve the Fourier and non-Fourier heat conduction equations with different initial and boundary conditions [7-12]; they use conventional iterative techniques, which are computationally expensive. In the AWE method, the non-Fourier heat conduction model must be converted into a linear equation [13,14], and higher-order terms are discarded during this transformation. Hence, the technique is not capable of forecasting the true temperature responses. Recently, the Tikhonov-based well-conditioned asymptotic waveform evaluation (TWCAWE) technique was proposed for fast transient thermal analysis of non-Fourier heat conduction. The technique successfully approximates the temperature responses for a single boundary condition, as reported in [15]. However, that method cannot predict the temperature responses in the case of different boundary conditions, because the algorithm might break down in specific situations. Therefore, in the current work, we propose a TWCAWE method to investigate Fourier and non-Fourier heat conduction under different boundary conditions, embedded with the Tikhonov regularization technique to enhance stability [16,17]. In the proposed TWCAWE model, there is no need to convert the non-Fourier heat conduction equation into a linear equation. In the current study, we determine the [Z] matrix mathematically instead of choosing it randomly, which helps to find the correction terms efficiently. The results obtained from the TWCAWE method match the Runge-Kutta (R-K) results precisely and remove all instabilities of AWE. Additionally, the method proposed in the current work is 1.2 times faster than AWE.
Mathematical formulation
The fundamental assumption of the Fourier heat conduction model is an infinite thermal wave propagation speed. The classical Fourier heat conduction law relates the heat flux vector, q, to the temperature gradient, ∇θ:

q = −K_c ∇θ (1)

where K_c is the thermal conductivity. Hence, the traditional parabolic heat equation, eq. (2), follows from eq. (1):

∂θ/∂t = σ ∇²θ (2)

where σ = K_c/(ρc), and σ, ρ, and c are the thermal diffusivity, the mass density, and the specific heat capacity, respectively. The traditional heat conduction law, eq. (1), is unable to explain some distinct cases of heat conduction, e.g., near absolute zero temperature or under extreme thermal gradients. Tzou et al. [3] suggested a non-Fourier heat transfer model that contains two phase delays, denoted by eq. (3). Due to the presence of the phase delays, this model is able to describe these distinct situations and to remove the limitations of the classical heat conduction model:

q + κ_a ∂q/∂t = −K_c [∇θ + κ_b ∂(∇θ)/∂t] (3)

where κ_b is the phase delay of the temperature gradient with respect to the local temperature and κ_a is the phase delay of the heat flux with respect to the local temperature.
The model denoted by eq. (3) accounts for both phase delays, κ_a and κ_b, for fast heat transfer. The model is represented in a normalized 2-D hyperbolic form, eq. (4):

Z_a ∂²θ/∂β² + ∂θ/∂β = ∇²θ + Z_b ∂(∇²θ)/∂β (4)

where β is the normalized time, Z_b the normalized phase delay of the temperature gradient, and Z_a the normalized phase delay of the heat flux. The length and width of the slab are given by l and h, respectively, while σ is the thermal diffusivity.
The FEM is a robust technique, able to handle multi-dimensional and diverse types of problems [18-22]. The FEM meshing was carried out using rectangular elements with nodes j, k, m, and o. The FEM based on Galerkin's weighted residual technique is applied to eq. (4) to obtain a semi-discrete system of the form of eq. (5) [3]:

[M]{∂²θ/∂β²} + [C]{∂θ/∂β} + [K]{θ} = {F} (5)

Elemental matrices can be constructed for any type of model by using eq. (5).
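As an illustration of how the elemental matrices in eq. (5) can be assembled, the following is a minimal sketch of the standard Galerkin building blocks for a 4-node rectangular element under the usual bilinear shape functions. It is a generic construction, not this paper's exact elemental matrices, which additionally carry the phase-delay terms of eq. (4).

```python
import numpy as np

def quad4_matrices(a, b):
    """Consistent mass and stiffness matrices for a 4-node rectangular
    element of size a x b, via 2x2 Gauss quadrature.

    Node order in reference coordinates: (-1,-1), (1,-1), (1,1), (-1,1).
    """
    g = 1.0 / np.sqrt(3.0)                      # 2x2 Gauss points, unit weights
    pts = [(-g, -g), (g, -g), (g, g), (-g, g)]
    xi_i = np.array([-1, 1, 1, -1], dtype=float)
    eta_i = np.array([-1, -1, 1, 1], dtype=float)
    J = np.diag([a / 2.0, b / 2.0])             # rectangle: constant Jacobian
    detJ = a * b / 4.0
    M = np.zeros((4, 4))
    K = np.zeros((4, 4))
    for xi, eta in pts:
        N = 0.25 * (1 + xi * xi_i) * (1 + eta * eta_i)       # shape functions
        dN_ref = np.vstack([0.25 * xi_i * (1 + eta * eta_i), # d/d(xi)
                            0.25 * eta_i * (1 + xi * xi_i)]) # d/d(eta)
        dN = np.linalg.solve(J, dN_ref)          # derivatives in physical coords
        M += np.outer(N, N) * detJ               # integral of N N^T
        K += dN.T @ dN * detJ                    # integral of grad(N).grad(N)^T
    return M, K
```

The global matrices in eq. (5) are then obtained by summing these elemental contributions over the mesh in the usual way.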
The TWCAWE algorithm
Consider a model of a physical phenomenon:

[K](s) x(s) = F(s) (6)

where [K](s) is the complex system matrix, F(s) the complex excitation vector, x(s) the solution vector, and s = j2πf (j is the imaginary unit, f the frequency). Consider an expansion point s_0 (s_0 = j2πf_0); the Taylor series is [5]:

[K](s) = Σ_{n=0}^{b_1} [K]_n (s − s_0)^n,  F(s) = Σ_{n=0}^{c_1} F_n (s − s_0)^n (7)

where b_1 and c_1 are selected large enough that no substantial higher-order terms of [K]_n and/or F_n are truncated. The moment-matching AWE subspaces for eq. (6) are generated according to eq. (7); the moments are:

m_0 = [K]_0^{−1} F_0,  m_n = [K]_0^{−1} [ F_n − Σ_{i=1}^{n} [K]_i m_{n−i} ] (8)

These moments become linearly dependent for non-trivial values of q, because each moment is repeatedly pre-multiplied by [K]_0^{−1}. To avoid this difficulty, the moments are calculated in an alternative way in the TWCAWE method, which is able to produce significant results for any value of q. The TWCAWE moments (algorithm 1) are calculated from the TWCAWE moment subspaces, as shown in tab. 1.

Table 1. Algorithm 1 (TWCAWE moment calculation)
Here, we simplify the notation: e_r is the r-th unit vector, all of whose entries are 0 except the r-th, which is 1.
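For concreteness, the following is a minimal numerical sketch of the plain AWE moment recursion of eq. (8), assuming the Taylor coefficients [K]_n and F_n are available as numpy arrays. It is not the TWCAWE algorithm of tab. 1, but it makes visible the repeated [K]_0 solve that drives the raw moments toward linear dependence.

```python
import numpy as np

def awe_moments(K_coeffs, F_coeffs, q):
    """Standard AWE moments m_0..m_q via eq. (8).

    K_coeffs : list of (N, N) arrays, Taylor coefficients [K]_0, [K]_1, ...
    F_coeffs : list of (N,) arrays, Taylor coefficients F_0, F_1, ...
    """
    K0 = K_coeffs[0]
    n_dof = K0.shape[0]
    moments = [np.linalg.solve(K0, F_coeffs[0])]           # m_0
    for n in range(1, q + 1):
        rhs = F_coeffs[n].copy() if n < len(F_coeffs) else np.zeros(n_dof)
        for i in range(1, n + 1):                          # subtract [K]_i m_{n-i}
            if i < len(K_coeffs):
                rhs -= K_coeffs[i] @ moments[n - i]
        moments.append(np.linalg.solve(K0, rhs))           # repeated [K]_0 solve
    return np.column_stack(moments)                        # columns m_0..m_q
```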
Breakdown of the TWCAWE
In the present work, the TWCAWE method has been used to forecast the temperature response under diverse boundary conditions. In some specific circumstances, the algorithm might break down [6,22,23]. Assume Taylor coefficient matrices [K]_n whose right-hand side follows the identical pattern, with the moments computed according to eq. (8) by the TWCAWE method as represented in algorithm 1. In such cases the recursion can produce a vector ζ whose entries fall below machine precision, and the algorithm breaks down. To avoid the breakdown, the modified TWCAWE moment calculation (algorithm 2) is shown in tab. 2.
Tikhonov regularization scheme
The TWCAWE algorithm implements the Tikhonov regularization technique [16,17] to amend the stiffness matrix marginally in order to reduce the instability problem. Consider an approximation problem Kx ≈ y, for which the residual ||Kx − y||_2 is to be minimized. The problem can be made well-conditioned by adding a penalty term, as indicated in eq. (9):

min_x { ||Kx − y||_2² + h_c ||x||_2² } (9)

where h_c is the regularization parameter, which depends on the order of the equation, and [I] is the identity matrix. The family of regularized inverses is defined by

K⁺ = (Kᵀ K + h_c [I])⁻¹ Kᵀ (10)

In TWCAWE, during the moment calculation, the inverse of the [K_0] matrix in the WCAWE algorithm is replaced by this regularized inverse. Details of h_c and [I] can be found in [16].
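A minimal sketch of this regularized solve, assuming the inverse of eq. (10) is applied wherever the moment recursion would otherwise solve with [K]_0 (the function and parameter names are illustrative):

```python
import numpy as np

def tikhonov_solve(K, y, h_c):
    """Solve K x ~ y with Tikhonov regularization, eqs. (9)-(10).

    Minimizes ||K x - y||^2 + h_c ||x||^2 by solving the normal
    equations (K^T K + h_c I) x = K^T y. For ill-conditioned K this
    trades a small bias for a large gain in stability.
    """
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + h_c * np.eye(n), K.T @ y)

# Usage: replace np.linalg.solve(K0, rhs) in the moment recursion by
#   tikhonov_solve(K0, rhs, h_c)
# with h_c chosen small relative to the spectrum of K0^T K0.
```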
Transient response
The nodal moments [a] can be extracted from the global moment matrix for any node i. The transient response for node i can then be approximated by a Pade approximation and further reduced to partial fractions [3], as shown in eqs. (11)-(13). The poles and residues are found by solving eqs. (14)-(16), a Hankel-type system built from the nodal moments followed by a Vandermonde-type solve.
The final transient response for any node i is:

x_i(t) ≈ x_i^{ss} + Σ_{k=1}^{q} r_k e^{p_k t} (17)

where p_k and r_k are the poles and residues, and x_i^{ss} is the steady-state contribution.
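The pole-residue step can be sketched as follows. This is a generic moment-matching Pade construction, not necessarily the exact eqs. (11)-(17) of this paper: the characteristic polynomial coefficients come from a Hankel system on the moments, the poles are the reciprocals of its roots, and the residues follow from a Vandermonde-type solve.

```python
import numpy as np

def pade_pole_residue(m, q):
    """Generic AWE-style Pade fit: poles/residues from 2q moments m[0..2q-1].

    Uses the classical relations m_n = -sum_j r_j * x_j**(n+1) with x_j = 1/p_j.
    """
    m = np.asarray(m, dtype=float)
    # Hankel system for the characteristic polynomial coefficients a_0..a_{q-1}.
    H = np.array([[m[i + l] for l in range(q)] for i in range(q)])
    a = np.linalg.solve(H, -m[q:2 * q])
    # Roots of x^q + a_{q-1} x^{q-1} + ... + a_0 give the reciprocal poles.
    x = np.roots(np.concatenate(([1.0], a[::-1])))
    p = 1.0 / x
    # Residues from the Vandermonde-type conditions on the first q moments.
    V = np.array([[xj ** (n + 1) for xj in x] for n in range(q)])
    r = np.linalg.solve(V, -m[:q].astype(complex))
    return p, r

def transient(t, p, r, x_ss=0.0):
    """Evaluate x(t) ~ x_ss + sum_k r_k exp(p_k t) on a time grid."""
    t = np.asarray(t, dtype=float)
    return np.real(x_ss + sum(rk * np.exp(pk * t) for pk, rk in zip(p, r)))
```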
Boundary conditions
In the present work, Fourier and non-Fourier heat transfer is investigated by considering 2-D rectangular models. The heat conduction model with dual phase delay is solved here by finite element analysis.
For the heating flux, three distinct temporal profiles are chosen: (1) instantaneous heat impulse, (2) constant imposed heat, and (3) periodic heating, because these boundary conditions are widely used in engineering problems. Figure 1 shows the 2-D rectangular slab meshed into rectangular elements. The diverse boundary conditions shown in fig. 2 are applied at the left edge of the slab.
Observation of Fourier heat conduction
Figures 3 and 4 illustrate the Fourier temperature responses with respect to normalized time and normalized distance, respectively, with instantaneous heat imposed at the left edge of the boundary. The results demonstrate that the thermal influence is sensed instantaneously throughout the system when the surface of a material is heated. This leads to the simultaneous development of heat flux and temperature gradient. The classical Fourier law assumes instantaneous thermal equilibrium between the electrons and phonons. Figures 3 and 4 also show that, when instantaneous heat is applied at the left edge of the boundary, the same thermal effect is felt at the right edge of the slab instantly.
Observation of non-Fourier heat conduction
In the present work, three distinct boundary conditions are investigated for non-Fourier heat conduction. The phase delays are varied to show the relative significance of the wave term and the phonon-electron interaction. To validate the TWCAWE results, we compared them with R-K outcomes. Figures 5-10 illustrate the temperature responses for non-Fourier heat conduction using the three typical temperature profiles. In these figures, the results of three different numerical techniques, R-K, AWE, and TWCAWE, are compared to assess accuracy. In these comparisons, identical experimental settings (total length and width of the slab, total number of nodes, thermal conductivity, and imposed temperature) were used for all methods. In this study, the fourth-order R-K method has been taken as the benchmark, since the results obtained from this technique are accurate [3] but computationally expensive.

Figures 5(a) and (b) display the temperature distribution for sudden heat imposed on the left edge of the slab and compare the TWCAWE, R-K, and AWE methods for several values of Z_b at node 5 and node 59, respectively. At node 5, the initial fluctuation has been reduced, which helps to approximate the temperature behavior of other nodes accurately (e.g., node 59), as shown in fig. 5. The comparison shows that the TWCAWE results match the R-K outcome for Z_b = 0.5 and 0.05, whereas the AWE results show inconsistency. For Z_b = 0.0001, TWCAWE converges to the same steady state as R-K; before reaching the steady state, the R-K results agree with TWCAWE, but AWE fails to predict the correct temperature response due to its instability. It can also be observed from fig. 5 that, for the TWCAWE method, the temperature responses converge after normalized times of 0.25 and 0.31 for Z_b = 0.05 and Z_b = 0.0001, respectively, at node 5. At node 59, the responses converge after normalized times of 0.5 and 0.9 for Z_b = 0.05 and Z_b = 0.0001, respectively, because node 59 is far from the boundary where the temperature is imposed, as shown in fig. 1.

Figures 6(a)-(d) display the non-Fourier temperature distribution with distance along the centre of the slab at times 0.005, 0.05, 0.1, and 0.5, respectively, for instantaneous heat imposed at the boundary with Z_b = 0.5, 0.05, and 0.0001. The immediate heat pulse is imposed on the left edge of the slab, and the heat drifts towards the other edge. Figure 6 also compares the TWCAWE, R-K, and AWE outcomes: the TWCAWE results lie close to the R-K solution, while AWE shows inconsistency.

Figures 7 and 8 present the non-Fourier temperature responses for periodic heat imposed at the left edge of the slab; figs. 7(a) and (b) show the temperature distribution with respect to time for Z_b = 0.5, 0.05, and 0.0001 at node 5 and node 59, respectively. Figures 9 and 10 show the temperature responses with constant heat imposed at the left edge of the rectangular slab. These figures also compare TWCAWE, R-K, and AWE: the TWCAWE results match the R-K outcomes for Z_b = 0.5 and 0.05, while in this case the AWE results diverge.

Table 3 summarizes the simulation time consumed by the methods considered in this work. The results show that the AWE method is 3.33 times, and TWCAWE 4 times, faster than the R-K technique. Additionally, from these results we can determine that the TWCAWE method is 1.2 times faster than the AWE technique.
Conclusion
This paper proposes the TWCAWE technique to illustrate non-Fourier heat conduction under diverse boundary conditions. There is no need to linearize the matrix equation, nor to introduce additional degrees of freedom. Moreover, during the moment calculation, we are able to evaluate the [Z] matrix mathematically for this problem. To perform a fair comparison, identical experimental settings were used for all three methods. The presented numerical comparison demonstrates that the TWCAWE method is precise and well-conditioned, since the technique approximates the delay and the initial high frequencies accurately. The numerical examples also indicate that the results are very sensitive to the way in which the boundary conditions are specified. Additionally, it is found that TWCAWE is 1.2 times faster than AWE and 4 times faster than R-K. Moreover, the results are considerably superior to those of AWE.
Figure 4. Fourier temperature responses along the centre of the slab for different times
Figure 5. Normalized temperature responses along the centre of the slab for Z_b = 0.5, 0.05, and 0.0001 with instantaneous heat imposed; (a) at node 5, (b) at node 59
Figure 6. Normalized temperature distribution along the centre of the slab for Z_b = 0.5, 0.05, and 0.0001 with instantaneous heat imposed; (a) at time 0.005, (b) at time 0.05, (c) at time 0.1, and (d) at time 0.5
Figure 7. Normalized temperature responses along the centre of the slab for Z_b = 0.5, 0.05, and 0.0001 with periodic heat imposed; (a) at node 5, (b) at node 59
Figure 8. Normalized temperature distribution along the centre of the slab for three values of Z_b with periodic heat imposed; (a) at time 0.005, (b) at time 0.05, (c) at time 0.1, and (d) at time 0.5
Figure 10. Normalized temperature distribution along the centre of the slab for three values of Z_b with constant heat imposed; (a) at time 0.005, (b) at time 0.05, (c) at time 0.1, and (d) at time 0.5
Perspectives and challenges in patient stratification in Alzheimer’s disease
Background Patient stratification is the division of a patient population into distinct subgroups based on the presence or absence of particular disease characteristics. As patient stratification can be used to account for the underlying pathology of a disease, it can help physicians to tailor therapeutic interventions to individuals and optimize their care management and treatment regime. Alzheimer’s disease, the most common form of dementia, is a heterogeneous disease and its management benefits from patient stratification in clinical trials, and the development of personalized care and treatment strategies for people living with the disease. Main body In this review, we discuss the importance of the stratification of people living with Alzheimer’s disease, the challenges associated with early diagnosis and patient stratification, and the evolution of patient stratification once disease-modifying therapies become widely available. Conclusion Patient stratification plays an important role in drug development in clinical trials and may play an even larger role in clinical practice. A timely diagnosis and stratification of people living with Alzheimer’s disease is paramount in determining people who are at risk of progressing from mild cognitive impairment to Alzheimer’s dementia. There are key issues associated with stratifying patients which include the heterogeneity and complex neurobiology behind Alzheimer’s disease, our inadequately prepared healthcare systems, and the cultural perceptions of Alzheimer’s disease. Stratifying people living with Alzheimer’s disease may be the key in establishing precision and personalized medicine in the field, optimizing disease prevention and pharmaceutical treatment to slow or stop cognitive decline, while minimizing adverse effects.
Background
Dementia is a growing public health concern which affects over 50 million people globally, a total which is projected to grow to more than 150 million by 2050 [1]. Alzheimer's disease (AD) is the most common type of dementia, predominantly affecting those 65 years and older, and is characterized by a global decline in cognition and an inability to perform daily activities [2]. AD is characterized by the accumulation of amyloid-beta (Aβ) and hyperphosphorylated tau protein that forms senile plaques and neurofibrillary tangles (NFTs) in the brain, respectively [2]. These neurotoxic proteins play a role in neuronal cell death which correlates with the clinical manifestation of AD [2]. The clinical features of AD and other causes of dementia often overlap and include symptoms such as memory decline, apathy, anxiety, and depression, which often occur in earlier stages [2]. Later symptoms include impaired communication and speech, disorientation, and confusion [2].
There are two classes of medications approved by the US Food and Drug Administration (FDA) for the symptomatic treatment of AD: cholinesterase inhibitors (donepezil, galantamine, and rivastigmine), and N-Methyl-D-aspartate receptor antagonists (memantine); both classes may improve symptoms but do not intercept disease pathology [2]. In June 2021, the first disease-modifying therapy (DMT), aducanumab, was approved by the FDA using the accelerated approval pathway for the treatment of AD [3,4]. Aducanumab is the first approved treatment directed at the underlying pathology of AD, namely, the presence of Aβ in the brain [3]. While efficacy studies of aducanumab were conducted in those with prodromal-to-mild (early) AD, there are currently few guidelines publicly available to identify who may benefit most from treatment [4]. Identifying and appropriately selecting patients for DMTs (such as aducanumab) by accurately characterizing them is of great importance, as several agents are in late-phase clinical development, including donanemab, gantenerumab, and lecanemab [5]. Patient stratification may be harnessed as a tool to identify individuals properly to optimize treatment outcomes in terms of benefits and risks from current or future treatment options [6].
Across the field of healthcare, there is a growing need to provide more effective and safer care that is tailored to the individual person with a disease [7]. Yet, many complex diseases, such as AD, are heterogeneous in their clinical presentation, severity, and response to therapies [8-10]. This makes it difficult to determine the appropriate clinical course of action, as people may exhibit multiple phenotypes on distinct disease trajectories [8,9]. Patient stratification is an approach that aims to cluster people with a disease into more homogeneous groups by classifying them according to factors considered highly important to the disease [10], so that interventions can be offered to those who will benefit the most. Proposed patient stratification schemes in AD have focused on those that can account for differing contributions from all relevant risk factors, such as genetic predisposition, molecular indicators, and pathologic disease stage [10]. Official bodies such as the World Health Organization have recognized that the stratification of the health risks of people with chronic diseases could strengthen population health management and enable the provision of better-tailored services [11]. It can also ensure individual needs are met in a timely and efficient manner [12]. The documented benefits of patient stratification in advancing care in other disease areas highlight its potential for AD. Oncology is probably the highest-profile field where patient stratification has made its impact. For example, the stratification of breast cancers into integrative clusters has been associated with distinct clinical courses and response to therapy [13]. Consequently, breast cancer mortality has decreased by 40% over the past three decades [14]. Elsewhere in oncology, immunotherapies targeting immune checkpoints have led to exciting new therapeutic strategies, but there is still a need to understand which patient groups would benefit most from the treatments. Since interactions between tumor and immune cells in the tumor microenvironment influence the effectiveness of immunotherapy, more detailed understanding of the tumor microenvironment should enable better patient stratification to improve clinical outcomes [15]. Patient stratification can revolutionize disease treatment by tailoring therapeutic options to the individual, as seen in the molecular targeting of tumor biomarkers in breast cancer [14]. There are numerous barriers to patient stratification in AD, and a paucity of literature discussing patient stratification in everyday AD clinical practice. In this review, we discuss the importance, challenges, and evolution of patient stratification in AD.
Potential for patient stratification in AD
AD diagnosis: stratifying AD from other dementias
AD is a multifactorial disease, and the first step to stratification of AD is accurate differential diagnosis of AD from other causes of dementia [2]. The symptoms and biomarker abnormalities observed in AD may overlap with other dementias, resulting in misdiagnosis [2,16]. Criteria developed over the past 15 years from both the US National Institute on Aging and the Alzheimer's Association (NIA-AA) and the International Working Group for New Research Criteria for the Diagnosis of AD (IWG) require biomarker evidence of disease pathology for differential diagnosis of AD [16]. The 2018 NIA-AA research criteria introduced a classification scheme diagnosing AD biologically based on the presence of Aβ, pathologic tau, and neurodegeneration/neuronal injury (AT[N] framework) [17]. The 2021 IWG perspective recommends a clinical-biological diagnosis of AD, which is restricted to those who have specific AD clinical phenotypes that are then confirmed by biomarker evidence of both Aβ and tau [16]. A cognitively unimpaired person with biomarker evidence of both amyloid and tau would be diagnosed with "preclinical AD" according to the NIA-AA research criteria, but be diagnosed as "at risk for progression to prodromal AD or AD dementia" under the new IWG guidelines [16,17].
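To make the biomarker-based classification concrete, the following is a toy sketch, illustrative only and not a clinical tool, of how binarized biomarker results map to an AT(N) profile and to the diverging NIA-AA and IWG labels described above; the field names are hypothetical placeholders for whichever assays a clinic uses.

```python
from dataclasses import dataclass

@dataclass
class BiomarkerProfile:
    amyloid_positive: bool      # A: e.g., amyloid PET or CSF Abeta(1-42)
    tau_positive: bool          # T: e.g., CSF pTau or tau PET
    neurodegeneration: bool     # N: e.g., MRI atrophy or FDG-PET hypometabolism
    cognitively_impaired: bool  # clinical status from cognitive assessment

def atn_profile(p: BiomarkerProfile) -> str:
    """Return the AT(N) biomarker profile string, e.g., 'A+T+(N)-'."""
    return (f"A{'+' if p.amyloid_positive else '-'}"
            f"T{'+' if p.tau_positive else '-'}"
            f"(N){'+' if p.neurodegeneration else '-'}")

def classify(p: BiomarkerProfile) -> str:
    """Toy contrast of the NIA-AA biological vs. IWG clinical-biological labels."""
    if p.amyloid_positive and p.tau_positive:
        if p.cognitively_impaired:
            return "AD (both NIA-AA and IWG criteria)"
        # Cognitively unimpaired with A+T+: the two frameworks diverge.
        return ("preclinical AD (NIA-AA) / "
                "at risk for progression to prodromal AD or AD dementia (IWG)")
    return "biomarker profile not consistent with an AD diagnosis"

# Example:
person = BiomarkerProfile(True, True, False, cognitively_impaired=False)
print(atn_profile(person), "->", classify(person))
```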
Identifying people at risk for AD clinical progression
The phenotypic and genotypic variability in AD makes early diagnosis and stratification of patients, at the prodromal or an earlier stage of AD, challenging [18]. This difficulty stems from variable disease trajectories and complex neurobiology [18]. People with AD are generally staged according to the severity of clinical symptoms as cognitively unimpaired, having mild cognitive impairment (MCI) due to AD (also called prodromal AD), or AD dementia [2,17]. AD dementia is further classified as mild, moderate, or severe [17]. There can be a long pre-symptomatic period in AD, with pathology accumulating for up to 20 years prior to the onset of symptoms [2]. Although the IWG recommendations are not widely used in clinical settings, the IWG recommends against AD biomarker assessment in cognitively unimpaired people, a recommendation based, in part, on cross-sectional evidence observing AD brain lesions in cognitively unimpaired people in both neuroimaging and post-mortem examinations [16]. As not everyone with biomarker evidence of AD will progress [16], it is imperative to identify those who are at risk for disease progression. Additionally, people with AD progress along the continuum at widely varying rates, and research is ongoing to identify specific factors that influence the rate of progression in AD [19]. Identifying individuals at an increased risk of progression aims to ensure that those in need of urgent treatment are given access [20]. Those who are at a higher risk of progression may be appropriate candidates for more aggressive pharmacologic therapy, namely DMTs [20]. For people at lower risk of progression, active monitoring, lifestyle interventions, and symptomatic therapies may be more appropriate [2]. Evidence-based AD risk reduction is feasible in clinical practice, tailoring interventions through a mix of neuropsychological, clinical, and laboratory assessments [21].
A breadth of research has been conducted to identify prognostic factors for progression to MCI and dementia, as detailed in various meta-analyses and systematic reviews [22-24]. As expected, one of the first indicators of AD risk is personal memory complaints; those with subjective cognitive decline are at twice the risk of developing dementia compared with people without subjective memory complaints [23]. Anxiety is also associated with an increased risk of progression from normal cognition to MCI or dementia [24]. Once MCI due to AD is detected, the risk of progression to AD dementia is increased at older ages, in women, in those with low-level education, in people with at least one copy of the apolipoprotein E ε4 (APOE ε4) allele, in those with impaired cognition, and/or in those with comorbidities such as depression, diabetes, or hypertension [18,22,25,26]. Progression from MCI due to AD to AD dementia is also linked with biomarkers of AD pathology such as cerebrospinal fluid (CSF) markers of phosphorylated tau (pTau) and total tau (tTau)/Aβ(1-42) ratio, with brain atrophy (hippocampal, medial temporal lobe, entorhinal) and parieto-temporal hypometabolism on [18F]-fluorodeoxyglucose (FDG) positron emission tomography (PET) [22,27]. In addition to the risk of progression, the speed of progression is also impacted by age, gender, and APOE ε4 genotype [28].
A variety of modifiable risk factors are also associated with progression. Modeling indicates that up to 40% of dementia could be prevented or delayed by intervening in risk factors that can be modified during various stages of life [29]. Elimination of 12 potentially modifiable risk factors may help reduce dementia prevalence. In early life (<45 years), a 7% reduction in dementia prevalence was suggested if low education level as a risk factor was eliminated. In midlife (age 45-65 years), the risk factors would be hearing impairment (8% risk), traumatic brain injury (3%), hypertension (2%), alcohol consumption of >21 units per week (1%), and obesity (1%). In later life (age >65 years), the risk factors would be smoking (5% risk), depression (4%), social isolation (4%), physical inactivity (2%), exposure to air pollution (2%), and diabetes (1%) [29]. Modifiable and non-modifiable risk factors of AD modulate many aspects of the disease course, including onset, clinical manifestations, and prognosis [30].
Patient stratification in clinical practice
While literature regarding stratification for treatment with a DMT is theoretical [20,31], there are conclusions which may be drawn based on existing treatment paradigms and gaps in AD clinical practice. In some countries, such as the UK, general practitioners (GPs) play a major role in the early stages of patient identification and stratification (Fig. 1) [32].
Outside of a clinical trial setting, non-specialists, such as GPs, are typically the first to screen for and evaluate patients with MCI or dementia. Patients, family, or caregivers who suspect symptoms of dementia often report these symptoms at primary care clinics [32]. These symptoms include impaired memory, mood and personality changes, and psychological symptoms such as depression and anxiety, all of which could be caused by AD [32]. Subsequently, GPs conduct initial brief assessments (e.g., patient history regarding psychological and behavioral symptoms, physical examination, relevant blood and urine tests, and cognitive tests using a validated instrument) to support the suspicion of cognitive decline and rule out any other potential causes of these symptoms. Patients are either referred to memory clinics for further dementia assessment or treated as non-dementia cases [32]. GPs are essentially stratifying patients into three categories: those with suspected dementia, those with cognitive problems without dementia, and those who are not cognitively impaired but still have complaints. Memory clinics conduct further investigations to diagnose patients, taking into consideration the initial assessments from GPs (Fig. 1). These additional investigations include in-depth cognitive and behavioral assessments, structural neuroimaging (brain computerized tomography [CT] or magnetic resonance imaging [MRI]) scans, PET scans, genotyping, and fluid biomarker analysis (discussed in more depth in the next section) [10]. Brain CT or MRI is an integral part of the diagnostic pathway in patients with suspected AD dementia; however, other methods such as PET scans, APOE genotyping, and the analysis of the core AD CSF biomarkers are not routinely used in clinical settings [33][34][35][36]. Diagnostic tests conducted by AD specialists can be used to stratify patients further, which can help inform the treatment and care pathway. In terms of DMTs, this could mean selecting patients who will most likely benefit from treatment.
The dementia detection rate differs between countries, especially between higher-and lower-income countries, and depends on a multitude of factors [6,37]. These factors are mainly based around how developed the healthcare system is, training of GPs and specialists, and cultural perceptions of dementia [37]. In the UK (where a quarter of the population will be over 65 years by 2050), only two-thirds of people with dementia receive a formal diagnosis, while in Brazil (where the population age 65 and older is projected to triple by 2050) the proportion is even lower, with one-quarter of people with dementia receiving a formal diagnosis [37][38][39]. This highlights the heterogeneity regarding diagnostic processes in different countries. A factor in the staggering disparity between the dementia diagnosis rates in the UK and Brazil is the inadequate availability of memory clinics in Brazil, which are often limited to universities; this can cause long waiting lists for those who live in remote areas, and can drastically increase the time it takes to reach a diagnosis of dementia [37]. Additionally, neuroimaging facilities, particularly MRI, are scarce [37]. Secondary care aside, GPs often have very limited time to conduct a cognitive screening test (less than 10 minutes), which is similar between the UK and Brazil, lack the confidence to diagnose dementia, and are often unaware of guidelines and protocols for evaluating and managing patients with memory complaints [37]. Cognitive decline is often viewed as a normal part of aging among patients and their families in Brazil [37]. Words such as "Alzheimer's" or "dementia" have a stigma associated with them which is worse among high-income families who frequently feel ashamed and consequently hide the condition from others [37].
Patient stratification to enrich clinical trial populations
A challenge for clinical trials is identifying people with AD who are likely to progress rapidly, as they are more likely to indicate whether a new drug is efficacious over the duration of a late-phase clinical trial [40]. Historically, there have been highly variable trajectories of cognitive decline in the placebo groups of randomized clinical trials in AD, highlighting the importance of careful selection of clinical trial inclusion and exclusion criteria [41]. Stratifying patients by those who will likely benefit from a DMT, according to current clinical thinking, will be crucial for understanding the prognosis for defined patient subgroups [21]. In recent years, investigation of DMTs has focused on slowing disease progression and targeting early (i.e., prodromal-to-mild) AD [5]; accordingly, patients are stratified for earlier stages of disease [10]. The Free and Cued Selective Reminding Test (FCSRT) is a neuropsychological test used to evaluate episodic memory and can be used to enrich a clinical trial population [40,42]. The FCSRT has been incorporated into the inclusion criteria of Roche-sponsored pivotal trials such as CREAD, a phase III trial of crenezumab, and GRADUATE, a phase III trial of gantenerumab, to help identify participants with an elevated risk of developing AD dementia [40,43]. Including the FCSRT as part of the eligibility criteria increases the potential of enriching a clinical trial population with individuals with early AD who are likely to progress during the study [40]. This is essential in investigating DMTs targeting the earlier stages of AD, as the ability of a DMT to demonstrate efficacy versus placebo is based partly on the rate of decline, or progression, observed within the placebo group. The trajectory of the placebo group helps to determine the treatment difference at the end of a clinical trial [40]. TRAILBLAZER-ALZ, a phase II trial of donanemab in participants with early symptomatic AD, is another recent example of the use of patient stratification to enrich a clinical trial population. In addition to conventional eligibility criteria, participants were required to have flortaucipir PET scans with evidence of pathologic tau deposition but with quantitative tau levels below an upper threshold; this resulted in a decrease in the variable trajectories of clinical decline [44]. Although flortaucipir PET scans have demonstrated their capability as an enrichment tool, PET scans are considered both invasive and expensive and therefore are not utilized in routine diagnosis [35,45]. An important aspect of stratifying participants for global clinical trials is the considerable regional heterogeneity in terms of patient characteristics and outcomes, and therefore of symptom manifestation and progression [46].
Identifying people at risk for amyloid-related imaging abnormalities
There is a pressing need to utilize safety biomarkers to enable the early detection or avoidance of amyloid-related imaging abnormalities (ARIA) [47]. ARIA is a common side effect of anti-Aβ-targeting monoclonal antibodies. While ARIA is mostly asymptomatic, it can manifest clinically in headaches, confusion, and neuropsychiatric symptoms [47]. Individuals may also experience mildly symptomatic ARIA, which quickly resolves following the discontinuation or dose modification of anti-Aβ therapy [47]. ARIA are divided into two classes. ARIA-edema (ARIA-E) is a vasogenic edema determined by MRI (i.e., sulcal effusion on fluid-attenuated inversion recovery images), which indicates inflammation of the affected vessels [47]. ARIA-hemorrhage (ARIA-H) is also detected by MRI and is characterized by a signal of hemosiderin deposits involving microhemorrhages and superficial siderosis on T2*-weighted gradient echo or susceptibility-weighted imaging, indicating cerebral amyloid angiopathy [47]. As it is difficult to predict the occurrence of ARIA, current FDA guidelines for enrolling participants in clinical trials assessing DMTs recommend MRI evaluations to exclude participants with ≥5 microhemorrhages and with any evidence of superficial siderosis or prior parenchymal hemorrhage [47]. The mechanisms which lead to the development of ARIA are not fully understood; however, it has been well demonstrated that individuals receiving high doses of anti-Aβ therapy and APOE ε4 carriers are at a higher risk of developing ARIA [47]. In EMERGE and ENGAGE, two phase III randomized clinical trials of aducanumab in participants with MCI due to AD or mild AD dementia, the most common adverse event in those receiving a high dose of aducanumab was ARIA-E. This is consistent with prior aducanumab studies and safety data from other anti-Aβ therapies. ARIA-E incidence was also higher in APOE ε4 carriers than in noncarriers, and was found to be dose-dependent when assessed by APOE carrier status [48]. Developing a reliable method to stratify participants based on the risk of developing ARIA may mitigate the occurrence of ARIA, a benefit which may also translate well into clinical practice [47].
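As a minimal sketch of how the MRI-based exclusion rules above could be encoded in a screening workflow, the function below checks baseline MRI findings against the ≥5-microhemorrhage, superficial siderosis, and prior parenchymal hemorrhage criteria. The `MriFindings` structure and its field names are hypothetical, not part of any regulatory specification.

```python
from dataclasses import dataclass

@dataclass
class MriFindings:
    """Hypothetical container for a baseline MRI safety review."""
    microhemorrhage_count: int
    superficial_siderosis: bool
    prior_parenchymal_hemorrhage: bool

def aria_risk_exclusion(mri: MriFindings) -> tuple[bool, list[str]]:
    """Return (excluded, reasons) per the criteria described in the text.

    A participant is excluded if any single criterion is met.
    """
    reasons = []
    if mri.microhemorrhage_count >= 5:
        reasons.append(f"{mri.microhemorrhage_count} microhemorrhages (>= 5)")
    if mri.superficial_siderosis:
        reasons.append("evidence of superficial siderosis")
    if mri.prior_parenchymal_hemorrhage:
        reasons.append("prior parenchymal hemorrhage")
    return (len(reasons) > 0, reasons)

# A candidate with 3 microhemorrhages and no other findings is not excluded.
excluded, why = aria_risk_exclusion(
    MriFindings(microhemorrhage_count=3,
                superficial_siderosis=False,
                prior_parenchymal_hemorrhage=False))
print(excluded, why)  # False []
```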
Future identification of eligibility for a disease-modifying therapy
It is expected that, as DMTs become more widely available, the demand for an early and timely AD diagnosis and treatment will increase [20]. As a growing number of DMTs in development target amyloid or tau pathology [5], individuals without biomarker-confirmed evidence of these pathologies may not be suitable for an anti-Aβ or anti-tau DMT. Additionally, those with biomarker evidence of AD pathology but normal cognition may not progress to AD dementia [16] and may not be candidates for a DMT. Furthermore, people with more advanced AD may not be suitable for the majority of the DMTs in the AD pipeline. These putative DMTs mainly target disease onset or disease progression in the earlier stages of AD. A robust patient stratification scheme will work to exclude people who are ineligible for DMTs based on a combination of cognitive and functional assessments, genetic risk factors, demographic and lifestyle factors, and AD pathology [10].
Methods and diagnostics for AD stratification
Cognitive and functional assessments, genotyping, CSF and PET imaging biomarkers, and MRI are some of the current methods which can be used to stratify patients according to their AD pathology and clinical stage (Fig. 2). These methods of stratifying can provide results that are either "dynamic" or "static" in nature, evolving as AD progresses or remaining the same throughout (Table 1). This should be taken into consideration when using these methods to stratify patients.

Cognitive assessment is traditionally the first step to a diagnosis of AD or other dementias [16]. Assessment scales have been developed that evaluate cognition, function, and behavior for both clinical practice and research [32,49]. Cognitive screening tools are quick, economical, and non-invasive [49]. These evaluation scales range from brief assessments used to screen for dementia to more detailed tools assessing different cognitive domains [49]. No single cognitive assessment tool is recognized as the best across the AD continuum; some tools are optimized for specific stages of dementia. The Mini-Mental State Examination (MMSE) is the most commonly used screening instrument for detection of cognitive impairments associated with AD and only takes 5 to 10 min to administer [49]. However, the MMSE may have limited discrimination between the cognitively unimpaired and those with MCI, as well as between AD and other dementias [49]. The General Practitioner Assessment of Cognition (GPCOG) is a two-part, 4- to 6-min test, freely available online to screen for dementia in primary care, which has been recognized as an effective screening tool and a viable alternative to the MMSE [50]. The Montreal Cognitive Assessment (MoCA) is a 10- to 15-min test that is validated for MCI in a population-based cohort as well as for MCI and dementia in memory clinics [49]. Assessing episodic memory is especially helpful for differential diagnosis of AD dementia, as this domain is one of the first to decline in typical forms of AD but not in other dementias [2,49]. The preclinical Alzheimer's cognitive composite (PACC) measures episodic memory, executive function, and global cognition [51]. PACC is currently used in secondary prevention trials as it can capture decline in amyloid-positive, clinically normal, older adults [51]. The addition of category fluency, a semantic category, to PACC (PACC5) adds unique information about Aβ-related decline [51]. Despite the proven utility of these screening tools, studies have demonstrated that sociodemographic variables, such as age and education level, and health variables, such as medical history, have implications for the performance of cognitive screening tests [52].

As previously discussed, there are a variety of demographic, genetic, and environmental risk factors that are associated with modulation of cognitive decline, which are key components for stratifying for risk of AD and AD progression [2,16,22,29]. Personal history and demographic factors that have been linked to increased AD risk include older age, female sex, low education level, smoking, excessive alcohol consumption, low physical activity, low social contact, and exposure to air pollution [16,29]. Medical history also impacts AD risk, which is increased by frailty, depression, hearing impairment, traumatic brain injury, obesity, diabetes, and hypertension [2,16,22,29].
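Because the PACC described above is, at heart, a composite of standardized component scores, the sketch below illustrates the idea: each component is converted to a z-score against a normative mean and standard deviation, and the composite is their average. The component names and normative values are placeholders, not published norms.

```python
import statistics

def z_score(raw: float, norm_mean: float, norm_sd: float) -> float:
    """Standardize a raw test score against a normative sample."""
    return (raw - norm_mean) / norm_sd

def pacc_like_composite(scores: dict[str, float],
                        norms: dict[str, tuple[float, float]]) -> float:
    """Average the component z-scores (a PACC-style composite).

    `scores` maps component -> raw score; `norms` maps component ->
    (normative mean, normative SD). Both are illustrative inputs.
    """
    zs = [z_score(scores[name], *norms[name]) for name in scores]
    return statistics.mean(zs)

# Placeholder components mirroring the domains named in the text.
norms = {"episodic_memory": (25.0, 5.0),
         "executive_function": (50.0, 10.0),
         "global_cognition": (28.0, 2.0)}
scores = {"episodic_memory": 20.0,
          "executive_function": 45.0,
          "global_cognition": 27.0}
print(round(pacc_like_composite(scores, norms), 2))  # -0.67, i.e., below norm
```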
Genetic mutations in amyloid precursor protein (APP), presenilin (PSEN)1, and PSEN2 lead to accumulation of Aβ, which results in Aβ deposition and aggregation in the brain; these are the driving mutations for familial AD [53]. The APOE ε4 genotype is a major risk factor for the development of AD, and the risk increases further, from 47% in heterozygous APOE ε4 carriers to 91% in homozygous APOE ε4 carriers, regardless of ethnicity [2,18,54]. Additionally, in clinical trials of amyloid-targeted antibodies as DMTs for AD, APOE ε4 has been linked to an increased risk for ARIA [54].

The characteristic pathology of AD, Aβ plaques and tau NFTs, may be detected using PET scans with targeted ligands and CSF examination [17,55-57]. PET scans may be quantified using standardized uptake value ratios to determine whether a person is above or below the amyloid positivity threshold associated with AD [58]. AD pathology can also be detected using validated CSF biomarkers for AD [16,59]. Key CSF biomarkers for AD, including decreased Aβ(1-42), increased tTau, and increased pTau, are measured with commercially available enzyme-linked immunosorbent assay and electrochemiluminescence immunoassay kits [59,60]. PET or CSF confirmation of AD pathology allows for definitive diagnosis of AD according to either the IWG or NIA-AA criteria [16,17], which encourages informed decision-making in AD treatment and care management [21]. Longitudinal PET or CSF measures may also be useful to monitor disease progression. The Roche Elecsys® AD CSF immunoassays are an example of a diagnostic tool that can be used to detect AD pathology. Elecsys® pTau/Aβ(1-42) and tTau/Aβ(1-42) are robust biomarkers for predicting the risk of clinical decline and conversion to dementia in non-demented patients [61].

One exciting development in fluid-based biomarker diagnostics is the NeuroToolKit (Roche Diagnostics International Ltd.), a panel of 12 CSF biomarker assays covering Aβ pathology, neurodegeneration, neurofilament light chain (NfL), and α-synuclein metabolism. Investigation is ongoing to evaluate these biomarkers as a tool for selecting patient populations and as pharmacodynamic biomarkers [62]. Another contribution to improving fluid-based diagnostic approaches in AD has been the commercial development of liquid chromatography tandem mass spectrometry-based assays that simultaneously quantify Aβ isoforms in human plasma and determine the common APOE proteotype [63]. According to the company behind this approach, in a study of 686 participants over the age of 60 with cognitive impairments or dementia, a sensitivity of 92% and a specificity of 76% were achieved when compared with amyloid plaque deposits confirmed with a PET scan [64]. Another reported study examined the diagnostic accuracy of plasma pTau217 for AD [65]. In a cross-sectional study of 1402 participants from three selected AD cohorts, plasma pTau217 discriminated AD from other neurodegenerative diseases with performance that was deemed significantly better than established AD plasma- and MRI-based biomarkers, but not significantly different from key CSF- or PET-based biomarkers [65]. While such research is promising, the authors highlighted that further research would be needed to validate the findings in unselected and more diverse populations [65]. Blood-based biomarkers for AD pathology are also being investigated, in the hope of validating them as a more economical and less invasive option [66].
However, blood-based biomarkers are not yet widely available for AD diagnosis or stratification [66]. The latest advancement in blood-based biomarkers is the QPLEX™ Alz plus assay kit [67]. Preliminary evidence suggests that this assay kit can be used to predict cerebral amyloid deposition using blood-based biomarkers, but independent validation is still required [67]. Should blood-based biomarkers become available, they may alleviate the bottleneck in stratifying patients for treatment with a DMT [66,68].

Biomarkers of neurodegeneration, while not specific to AD, may also be useful for stratification [17]. Brain MRI can be used to exclude other causes of cognitive impairment and to detect atrophy in different regions of the brain, which correlates closely with other symptoms of AD [17,22]. Some CSF biomarkers are indicative of neurodegeneration or other neural damage, including NfL, which is elevated in many neurological conditions, and neurogranin, which may be differentially elevated in those with AD [69,70]. FDG PET often shows reduced cerebral glucose metabolism in people with AD compared with healthy individuals [27]. Although this is not fully specific to AD, the pattern of FDG PET hypometabolism varies in different dementias and often contributes to the differential diagnosis [27]. These biomarkers help characterize downstream effects of AD pathology at the individual patient level.
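To make the quantification steps from the preceding section concrete, the sketch below computes an amyloid PET standardized uptake value ratio (target-region uptake divided by reference-region uptake) and a CSF pTau/Aβ(1-42) ratio, then compares each against a positivity cutoff. The cutoff values are placeholders only; real thresholds are tracer-, assay-, and cohort-specific.

```python
def suvr(target_uptake: float, reference_uptake: float) -> float:
    """Standardized uptake value ratio: target region / reference region."""
    return target_uptake / reference_uptake

def amyloid_pet_positive(target: float, reference: float,
                         cutoff: float = 1.11) -> bool:
    # The 1.11 cutoff is a placeholder; actual thresholds are tracer-specific.
    return suvr(target, reference) >= cutoff

def csf_ratio_positive(ptau: float, abeta42: float,
                       cutoff: float = 0.023) -> bool:
    """Flag AD-consistent CSF chemistry via a pTau/Abeta(1-42) ratio.

    Higher pTau and lower Abeta(1-42) both raise the ratio, so one cutoff
    captures both biomarker shifts named in the text. The value here is
    illustrative, not an assay-specific threshold.
    """
    return (ptau / abeta42) >= cutoff

print(amyloid_pet_positive(target=1.35, reference=1.0))  # True (SUVR 1.35)
print(csf_ratio_positive(ptau=25.0, abeta42=600.0))      # True (25/600 ~ 0.042)
```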
Major challenges to stratification in AD
A well-defined patient stratification scheme will be crucial to identify patients who will likely benefit from a DMT. There are many challenges and counterarguments that pertain to the initial diagnosis of AD due to cultural stigma or issues with GP preparedness, as well as healthcare system readiness and availability of biomarker testing.
Limitations of a biomarker-based diagnosis of Alzheimer's disease
A major challenge in clinical practice is to distinguish AD from other neurodegenerative disorders and the presence of comorbidities. The in vivo presence of biomarkers of AD lesions could warrant AD as a primary diagnosis; however, these lesions can be found in other neurodegenerative disorders such as dementia with Lewy bodies [16]. Similarly, cases with pure AD pathology are only found in 3-30% of neuropathologic series of people with AD dementia at post-mortem examination [16]. Biomarkers for the pathologic changes that underlie other neurodegenerative diseases often found in people with dementia are currently under investigation, and as a result, distinguishing AD from other neurodegenerative diseases mainly relies on clinical phenotype or post-mortem examination [16].
Cultural perceptions and understanding of Alzheimer's disease
A timely AD diagnosis allows for earlier intervention; however, the stigma of a diagnosis in the absence of effective therapeutic options may prevent individuals from seeking help [6,71]. The social stigma of an AD diagnosis is palpable in some societies and cultures, which may further discourage patients from reporting early symptoms to their GPs and delay early intervention or support [37,71]. Particularly in the first few years after a diagnosis of dementia, psychological complications may contribute to an increased risk of suicide [6]. Another major issue is the lack of awareness among patients, their families, and GPs about dementia and its symptoms [6]. Patients have reported dismissing symptoms of dementia as a normal part of aging, and therefore do not disclose them to GPs [71]. Some GPs have also reported that they struggle to differentiate cognitive impairment from normal aging [71]. The normalization of symptoms causes delays in seeking help [6]. Patients and GPs often prioritize physical over mental health, which may cause them to be dismissive of symptoms of dementia [6]. Aside from a diagnosis of dementia directly affecting the patient, families and caregivers around the patient are also negatively affected. They are often reluctant to take on the caregiving burden for a variety of reasons. This may be a result of fear and anxiety, time constraints, and/or changes to the family dynamic after a diagnosis [6].
To mitigate the impact of cultural perceptions and the lack of understanding of AD on diagnosis rates, increased efforts with educational campaigns around dementia, its symptoms, and the importance of an early diagnosis are essential. A better understanding of dementia by patients, families, and caregivers will encourage patients to actively seek a formal diagnosis and develop a disease management plan. Promotion of support services to manage AD may also alleviate the burden on both patients and families [71].
Improving collaboration between primary care and memory clinics
A current challenge is enabling GPs to better identify patients who would benefit from further assessment and treatment at a memory clinic, as some GPs admit having difficulty identifying early symptoms of AD and/or lack confidence in their ability to accurately diagnose AD [6]. Healthcare systems are not prepared for the extensive screening required to assess for MCI. It was estimated that in 2019, across France, Germany, Italy, Spain, Sweden, and the UK, over 80 million patients would undergo cognitive screening for MCI due to AD and that over 10 million would screen positive [31]. These numbers may increase as DMTs become available. The number of specialists available in France, the UK, and Spain is a significant rate-limiting factor which will exacerbate waiting times across the diagnostic pathway [6,31]. A contributing challenge is the poor coordination of the diagnostic process between primary care clinics and memory clinics [6]. In the future, GPs will need support in the difficult task of explaining that DMTs may not be suitable for patients with advanced disease.
Collaborative care models with multidisciplinary teams are a viable solution to improving the collaboration between primary care and memory clinics. Over the past two decades, there has been increasing use of collaborative care approaches to developing new models of care for older patients, leading to improved quality, efficiency, and outcomes of care. These models require the implementation of a robust dementia care strategy, training of a healthcare workforce from different disciplines, partnerships with healthcare delivery systems, and agreements with third-party payers to recognize the cost savings to support these changes in specialty practice [72].
Biomarkers and healthcare system readiness
The potential requirement of widespread diagnostic testing for the rollout of DMTs will be a resource-intensive and economically demanding burden on healthcare systems [20,31,68]. Brain MRI may be used in the diagnostic process to assist in the differential diagnosis of other causes of cognitive impairment, and to detect atrophy in different regions of the brain [17,22]. Both PET imaging and CSF assessments are diagnostic methods that are only accessible in specialized centers. PET imaging may be cost-prohibitive, and CSF testing may be perceived as invasive; owing to these challenges, these methods are not utilized in routine diagnosis [45]. A sudden and sustained increase in demand for biomarker-confirmed diagnosis by AD specialists could strain memory clinic resources [20,31,68].

While healthcare systems may face challenges in offering biomarker-confirmed diagnoses, the potential advances in dealing with AD in clinical care must not be ignored. For example, biomarker profiles could allow prediction of personalized trajectories of future cognitive progression in patients. A recent biomarker-based prognostic model was applied to data from a large cohort of patients [73]. Featuring three CSF and two MRI measures, the model significantly improved personalized prognosis when incorporating biomarker information on top of cognition and demographic data [73]. The authors concluded that biomarkers that are being routinely collected in some clinical settings seem to have an untapped potential to improve prognosis of future cognitive decline in clinical practice, and may also be useful for optimizing clinical trial design [73].

There is also hope that blood-based AD biomarkers may counteract the anticipated diagnostic capacity limitations in some healthcare systems by providing testing at lower cost and higher accessibility [74]. Plasma measurements of pTau217 and NfL, combined with basic demographics, significantly predicted both change in cognition and subsequent AD dementia in 435 cognitively unimpaired elderly individuals [74]. It has been predicted that a combination of pTau and other accessible biomarkers might provide accurate prediction of the risk of developing dementia, and promising results have been achieved using the BioFINDER and Alzheimer's Disease Neuroimaging Initiative patient cohorts [75]. Nevertheless, the application of these biomarker-based approaches to routine healthcare is some way off at present [75]. In particular, further validation of results is required in large, unselected, and ethnically diverse primary care populations with a lower pre-test probability of underlying AD [75]. The validation of a diagnostic method that is capable of being administered in a resource-limited environment, such as a blood-based testing kit, would be beneficial in relieving the stress on specialty centers [68].

Stratifying people with AD by their susceptibility to adverse events of special interest, such as ARIA, will impose an additional burden on memory clinic resources. Subsequent monitoring of ARIA is also required to ensure patient safety [76]. Stratification based on the APOE ε4 genotype, a major risk factor for the development of AD, will be required to determine the likelihood of dementia, disease progression, and possible risk of ARIA [28,54].
The potential for personalized AD treatment and care
Stratification may play a critical role in decreasing the impact of an AD diagnosis on families [77]. Family caregivers often report work-related stress that stems from having to adjust their work schedule around patient needs, with some caregivers having to give up their own jobs entirely [77]. Caring for people with dementia is a heavy financial burden, and many families experience challenges balancing the time needed to care for the patient and time for other activities [77]. The importance of stratification, personalized treatment strategies, and ultimately precision medicine in AD care may increase rapidly with the approval of aducanumab and other potential DMTs in phase III development [5,76]. The goal of precision medicine is to optimize disease prevention and pharmaceutical treatment to slow or stop clinical decline, while minimizing adverse effects [21]. Precision medicine takes into consideration an individual's specific genotype, phenotype, biomarker profile, and/or psychosocial characteristics to determine optimal treatment and care [21]. Determining a patient's risk of developing AD and understanding underlying molecular mechanisms behind their pathology are two important factors in applying precision medicine [21].
Conclusions
Patient stratification is an increasingly important tool in AD treatment and is crucial to determine which patients are at risk of progression to MCI due to AD and to AD dementia. A variety of diagnostic techniques are available to stratify patients based on clinical disease characteristics, biomarkers of disease pathology, genotype, and demographic risk factors. Currently, however, the heterogeneous nature and complex neurobiology of AD make stratification of patients difficult. Aside from the complex molecular pathology associated with AD diagnosis, early detection and stratification come with an array of challenges, including cultural stigma and physician training. Quickly streamlining the healthcare system for AD diagnosis and stratification is imperative, given the recent approval of a first DMT.
MARKETING MIX PERFORMANCE AND CUSTOMER RELATIONSHIP IN IMPROVING TRUST OF INDIHOME CUSTOMER: A CASE FROM WEST JAVA INDONESIA
In recent years, an internet provider company has faced shortfalls in market achievement. Customer numbers have also fluctuated, and customer reports indicate that service should be improved in terms of responsiveness, customer trust, customer relationship management, and related areas. The need for a clear, holistic view of how to grow the customer base therefore deserves particular consideration. This study examines the strategic marketing initiated to strengthen customer relationships and build loyalty to the company's product. Based on a survey of 280 respondents in three different areas, a multistage random sampling technique was used, combining the cluster sampling technique with systematic random sampling. Analyzing the relationships between variables with the Structural Equation Modeling (SEM) approach, the results showed that strategic marketing mix performance combined with customer relationship plays a significant role in improving customer trust. This finding implies that the company's management should improve customer trust through customer relationship efforts and the development of marketing mix performance.
INTRODUCTION
Teledensity is an indicator that shows the number of telephone line units compared to the population. Teledensity of internet services shows the number of internet connections compared to the population. Indonesia's internet teledensity, which includes fixed and mobile access, has only reached 15.36%, far below Singapore at 74.18%. This means that the number of internet connections compared to Indonesia's population is still relatively low compared to Singapore.
Seeing these opportunities, PT Telekomunikasi Indonesia, Tbk (Telkom) presents triple play throughout Indonesia through IndiHome services. This Triple Play service includes landlines, Internet on Fiber or High Speed Internet, and UseeTV Cable (IPTV). In addition to Telkom, there are other telecommunications provider companies in Indonesia, namely Biznet, Firstmedia, IndosatOoredoo GIG, MNCPlay Media, and MyRepublic. These provider companies offer similar types of products, namely internet and cable TV services, but what distinguishes PT Telkom's IndiHome is the presence of a telephone product. IndiHome's superiority also lies in its set-top boxes, which use HD technology to increase the sharpness of image quality and can record television shows for one week, so customers can replay them. In addition, the internet network, which uses optical fiber, produces internet speeds of 10 Mbps to 1,000 Mbps, and the landline network is not offered by competitors.
Telkom claims high-speed internet as its mainstay service, because it is able to transfer data at bandwidths of up to hundreds of Mbps, beyond the quality of coaxial or copper cables. UseeTV Cable also adds value to Telkom's Triple Play services by presenting features such as TV on Demand, Video on Demand, Pause and Rewind, and Video Recorder.
IndiHome services will continue to grow, considering that internal and external factors are very supportive. Internally, the production equipment owned by Telkom has fulfilled the existing capacity. According to Spire (2017: 2), in terms of fixed broadband, IndiHome has the largest market share at 96.2%, followed by Biznet at 0.3%, First Media at 2.1%, MNC Play at 0.3%, and others at 1.2%. Externally, people's purchasing power is improving along with the recovery of the Indonesian economy, and the need for information and edutainment through the internet is increasing.
However, there has been a problem with IndiHome since this product was launched in West Java: from January 2015 to December 2016 it could not reach its sales target. To maintain customer numbers and continue to increase them, a great deal of customer trust in the telecommunications service provider is required. Kotler and Keller (2009) suggest the importance of building customer trust and confidence so that customers will voluntarily be loyal to the company. Customer trust is very important in developing relationships, especially in the service business, which is full of risks. Based on field observations and in-depth interviews with related parties, a picture emerged of the alleged causes of the still-low trust of IndiHome customers in West Java, related to marketing mix performance and customer relationship.
The marketing mix is also an important factor because it comprises the controllable marketing tools used by the company to produce the desired response from various target markets. The problem faced by IndiHome concerns the product: although IndiHome offers a service that competitors do not, namely the telephone, this advantage does not affect customer demand for the services and products provided by the company. IndiHome's price is also more expensive than competitors'. This is likely related to the customer value perceived by customers: under the bundling system, the price paid covers three services, which does not match the needs and expectations of customers and prospective customers, most of whom only require two services, so one service is paid for but not used. For product promotion, IndiHome mainly conducts door-to-door personal selling, as well as person-to-person personal selling that introduces IndiHome products to the public through Telkom employees deployed directly to the field. The promotion is known as the "uproar" (mass marketing movement).
According to Zeithaml and Bitner (2012), the Service Marketing Mix is defined as the set of elements controlled by an organization that can be used to satisfy or communicate with customers; these elements appear as the main decision variables in marketing texts and marketing plans. The marketing mix consists of the things a company can do to influence customer demand for its services and products. Traditionally, a 4P model is used for tangible products, while services use a 7P approach to meet customer needs, namely: product, price, place, promotion, people, physical evidence, and process.
Customer trust is also thought to be related to how customer relationships are managed. The customer relationship is a factor that plays an important role in today's telecommunications business, which is felt to be very complex, so a new approach is needed to manage consumers. Lovelock and Wirtz (2011) suggest that CRM (Customer Relationship Management) is a form of marketing activity that produces a deeper or more meaningful relationship with customers, while according to Kotler and Bowen (2010), CRM acts as a point of contact with customers to maximize customer loyalty.
Meanwhile, information obtained from observations shows that customers are still disappointed with IndiHome's service regarding the follow-up of complaints they report. Customers note that for payment problems Telkom informs them quickly, especially where fines are concerned, but complaint handling is not as fast as payment information. As a result, customers do not feel a good relationship with the company. Customer care and retention are very important so that consumers can be loyal to the company's products and services. With the establishment of good relations with customers, customer value and customer trust in IndiHome products are expected to increase.
Based on this background, this study aims to examine the effect of marketing mix performance and customer relationship on IndiHome customer trust in West Java.
Marketing mix performance
According to Zeithaml and Bitner (2012), the Service Marketing Mix is defined as the set of elements controlled by an organization that can be used to satisfy or communicate with customers; these elements appear as the main decision variables in marketing texts or marketing plans.
Kotler and Keller (2012) state that the marketing communication mix consists of eight main modes of communication, namely advertising, sales promotion, events and experiences, public relations and publicity, direct marketing, interactive marketing, word-of-mouth marketing, and personal selling.
Zeithaml and Bitner (2012) state that the dimensions of the Service Marketing Mix are organizational control, communication, and the marketing plan. Kotler and Keller (2012) state that its dimensions are marketing tools and marketing objectives.
So, based on a study of these definitions and considering the unit of analysis in this study, namely the PT Telkom Indonesia West Java Area, marketing mix performance in this study is measured using the dimensions of product, price, promotion, place, people, physical evidence, and process.
Customer relationship
According to Gronroos (2000) relationship marketing focuses on three main areas, namely decision making, determining the value process, determining the form of the interaction process as the core of relationship marketing and determining the communication process planned to attract, develop and improve customer relationships. Baran, Galka and Strunk (2008) explain that CRM is a marketing strategy that focuses attention on managing customer experience by better understanding their needs and their buying behavior.
Berry (1983), in Ivana Adamson, Kok-Mun Chan, and Donna Handford (2003), argues that the dimension of relationship marketing is repositioning. Oliver (1999) argues that the dimensions of relationship marketing are a preferred product or service and continued repurchase in the future despite marketing efforts. Gronroos (2000) argues that the dimensions of relationship marketing are the value process, the interaction process, and the communication process.
Lovelock and Wirtz (2011) suggest that CRM is a form of marketing activity to produce a deeper or more meaningful relationship with customers, while according to Kotler and Bowen (2010), CRM acts as a point of contact with customers to maximize customer loyalty.
Rahaman, Ferdous, and Rahman (2011) use five dimensions of CRM, namely thankfulness, responsiveness and relationship, appropriateness, caring, and keeping in touch. Peter and Olson (2008) state that marketing mix decisions are related to market selection, so that target selection and marketing mix design must work together.
According to Parvatiyar and Sheth (2002), CRM programs are realized in the form of Continuity Marketing, One-to-One Marketing, and Partnering/Co-Marketing. The Continuity Marketing program contains loyalty and membership card programs in which customers are often given rewards on the basis of loyalty and membership in the relationship with the company. Rewards take forms such as special services, prize points, and discounts, including discounts on other products. One-to-One Marketing is intended to fulfill and satisfy each customer's individual and unique needs. The partnering program, meanwhile, is a partnership between customers and the company to serve the needs of end users.
So, based on this study and adjusting it to the unit of analysis, the customer relationship variable in this study was measured by the dimensions of providing convenience, giving gifts, and customer gathering.

Customer trust

Kotler and Keller (2009) hold a view on the importance of building customer trust and confidence so that customers will voluntarily be loyal to the company. According to Sirdeshmukh et al. (2002), customer trust is the expectation held by consumers that service providers can be relied upon to fulfill their promises. Schurr and Ozanne (1985), in Roland Kantsperger (2010), hold a similar view of the dimensions of customer trust, namely the trustworthiness of words or promises within the organization and the fulfillment of obligations in exchange relations. Similarly, Sirdeshmukh et al. (2002), in Roland Kantsperger (2010: 6), state that customer trust is the expectation held by consumers that service providers can be relied upon to deliver their promises.

Anderson and Weitz (1989), in Roland Kantsperger (2010), offer another view: customer trust means that one party believes its needs will be fulfilled in the future by actions taken by the other party. Crosby et al. (1990), in Roland Kantsperger (2010), similarly hold that customer trust is the belief that the seller will serve the long-term interests of consumers.
Based on the opinion of Mitchell, in Egan (2001), several situations and indicators of trust are as follows:
1. Probity (focus on trust, integrity, and reputation)
2. Equity (related to fair-mindedness and benevolence)
3. Reliability (related to the reliability, accuracy, and consistency of the product or service, in some cases related to the warranty issued by the company).
So, based on the study of the concepts and dimensions of customer trust above, and considering the unit of analysis of this research, the customer trust variable in this study is measured by dimensional constructs that include guarantees of quality, company reputation, and fulfillment of promises.
Hypothesis development
Several studies mention the effect of the marketing mix on customer trust. Long-Yi Lin (2011) found that promotion strategies have a significant positive effect on customer trust. Nina Kurnia Hikmawati, Sucherly, and Surachman Sumawihardja (2015), studying the influence of relationship marketing and the marketing mix on customer trust in telecommunications operators in Indonesia, found that these variables significantly influence customer trust, with the marketing mix having a higher degree of influence than relationship marketing. Raza and Rehman (2012) found that customer trust is influenced by price, service, and brand image. In addition, Ofoegbu and Udom (2013) found that customers tend to leave telecommunications providers if there is no sales promotion, either as an incentive for buyers or as compensation; customers are ready to switch to other telecommunications providers that offer attractive sales promotions.
Morgan and Hunt (2014) describe the relationship between the customer relationship and customer trust. Ivana Adamson et al. (2013) found that, to increase the trust of corporate customers, parallel channels of communication must be developed with customers, showing flexibility in relationships and maximizing the benefits of reciprocity. Nina Kurnia Hikmawati, Sucherly, and Surachman Sumawihardja (2015) found that relationship marketing and the marketing mix had a significant effect on customer trust. Hulten (2007) stated that active customers are more relational, with high levels of trust and commitment, while passive customers are less relational.
Based on the description above, the following hypothesis is formulated: H: Marketing mix performance and customer relationship affect customer trust, both simultaneously and partially.
METHODOLOGY
This is quantitative research, with IndiHome customers in the West Java region as the unit of analysis. The study was conducted in several major cities included in region 3 of the Telkom West Java region, namely Bandung, Tasikmalaya, Cirebon, Sukabumi, and Karawang, with a total sample of 280 respondents. The sampling technique used is multistage random sampling, which combines the cluster sampling technique with systematic random sampling. The relationships between variables were analyzed using the Structural Equation Modeling (SEM) approach.
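A minimal sketch of the sampling design just described: cluster sampling of cities in the first stage, then systematic random sampling of customers within each selected cluster. The city list follows the text; the customer frames and the even allocation of the 280 respondents across clusters are illustrative assumptions.

```python
import random

def systematic_sample(frame: list, n: int, seed: int = 42) -> list:
    """Systematic random sampling: a random start, then a fixed interval k."""
    rng = random.Random(seed)
    k = max(len(frame) // n, 1)   # sampling interval
    start = rng.randrange(k)      # random start within the first interval
    return frame[start::k][:n]

# Stage 1: clusters (cities in region 3 of the Telkom West Java region).
cities = ["Bandung", "Tasikmalaya", "Cirebon", "Sukabumi", "Karawang"]

# Stage 2: systematic sampling within each cluster; the even allocation
# of the 280-respondent total across clusters is assumed for illustration.
per_city = 280 // len(cities)  # 56 respondents per city
sample = {}
for city in cities:
    # Hypothetical sampling frame: IDs of IndiHome subscribers in the city.
    frame = [f"{city}-{i:05d}" for i in range(5000)]
    sample[city] = systematic_sample(frame, per_city)

print({c: len(s) for c, s in sample.items()})  # 56 per city
```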
Goodness of Fit
The following table shows the goodness-of-fit results of the structural equation model (SEM). The overall analysis reveals that the model achieves a good measure of fit (good fit) on most indices and a very good one (close fit) on others. Thus, the goodness of fit of the model is adequate.
Data processing with LISREL produced the standardized factor loading values and the construct reliability coefficients summarized in the following Table 2:
Figure 1 Standardized Loading Factors
Source: Primary data processing, 2019
Measurement Model
The standardized loading factor estimates presented in the figure above show that the observed variables (indicators) have standardized loading factors >0.50. Since the standardized loading factors of the observed variables are greater than the critical value, the variables have good measurement validity.
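A small sketch of the two measurement-model checks reported above: convergent validity via the 0.50 standardized-loading threshold, and construct (composite) reliability computed from the same loadings with the standard formula CR = (Σλ)² / ((Σλ)² + Σ(1 - λ²)). The loading values below are placeholders, not the study's estimates.

```python
def composite_reliability(loadings: list[float]) -> float:
    """Construct reliability from standardized loadings:
    CR = (sum(l))**2 / ((sum(l))**2 + sum(1 - l**2))."""
    s = sum(loadings)
    error = sum(1 - l ** 2 for l in loadings)
    return s ** 2 / (s ** 2 + error)

def valid_indicators(loadings: dict[str, float],
                     threshold: float = 0.50) -> dict[str, bool]:
    """Flag indicators whose standardized loading exceeds the cutoff."""
    return {name: l > threshold for name, l in loadings.items()}

# Placeholder loadings for the 'customer trust' construct's dimensions.
trust_loadings = {"quality_guarantee": 0.78,
                  "company_reputation": 0.71,
                  "promise_fulfillment": 0.66}

print(valid_indicators(trust_loadings))                  # all True
print(round(composite_reliability(list(trust_loadings.values())), 3))  # ~0.761
```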
Hypothesis Testing
The results of the partial tests are explained in Table 4:
Figure 2 Research Findings
The results of the study show that the hypothesis is accepted: marketing mix performance and customer relationship affect customer trust both simultaneously and partially. Customer relationship has a greater influence than marketing mix performance in increasing the trust of IndiHome customers in West Java.
Convenience is the customer relationship aspect with the highest impact on increasing customer trust, followed by customer gathering and reward. Convenience reflects the ease of getting information about every service offered to customers, of activating those services, and of obtaining access to bills from cellular operators, as well as Customer Relations Officers (CROs) who are ready to serve communication needs quickly and easily. With these various facilities, customers can increase their trust in the IndiHome provider.
Customer gathering and rewards also have an impact on strengthening customer relationships and thereby increasing customer trust. Customer gatherings, and the attractiveness of the customer gathering program, can improve customer relations. Rewards, meanwhile, are able to strengthen the customer relationship through the attractive value of gifts given to loyal customers, an attractive frequency of giving, gifts at certain moments, and an equal opportunity for each customer to win prizes.
In terms of marketing mix performance, the process has the highest level of influence in increasing customer trust, followed by product, place, price, people, promotion, and physical evidence. This indicates that customer trust mainly rests on the process of service provided by the company to customers. Processes that are smooth and in line with customer expectations represent good marketing mix performance that increases customer trust. Product and place are the next aspects of the marketing mix that drive customer trust. Products that offer diverse services along with good network quality, internet speed, image quality, and a wide service range also have implications for the formation of customer trust, because they provide satisfaction with the services delivered so that customers increasingly trust the company. Place also supports the creation of customer trust, especially through service center locations that are easily accessible to customers and the ease of transportation to reach them. This makes it easy for customers to submit complaints if a problem occurs.
Meanwhile, physical evidence in the form of room attractiveness and cleanliness of the service center office environment, the completeness of infrastructure facilities, and the convenience of a service center office room turned out to have the lowest level of influence in terms of marketing mix performance in increasing customer trust. This indicates that the physical evidence of a service center office is not the main aspect that encourages increased customer trust, but the process aspect is the main one.
The results of this study thus support the findings of previous studies which mention the influence of the marketing mix on customer trust, as found in the Long-Yi Lin (2011) study.
CONCLUSION AND SUGGESTION
The results of the study confirmed the hypothesis that marketing mix performance and customer relationship affect customer trust both simultaneously and partially. Customer relationship has a greater influence than marketing mix performance in increasing the trust of IndiHome customers in West Java. Convenience is the customer relationship aspect with the highest impact on increasing customer trust, followed by customer gathering and reward. In terms of marketing mix performance, the process has the highest level of influence in increasing customer trust, followed by product, place, price, people, promotion, and physical evidence. This finding implies that IndiHome management in West Java must support efforts to increase customer trust with improved customer relationships and the development of marketing mix performance. Customer relationship improvements should prioritize the development of convenience, followed by customer gathering and reward. Meanwhile, the development of marketing mix performance should prioritize the process, followed by product, place, price, people, promotion, and physical evidence.
Development of a compact all-solid-state lithium secondary battery using single-crystal electrolyte
According to the NEDO roadmap for FY 2013 in Japan, an all-solid-state battery is positioned as a product that fully covers the potential of a next-generation battery, and is set for practical utilization in 2030. The conventional lithium secondary battery is roughly composed of four parts: a positive electrode, a negative electrode, an electrolyte, and a separator that separates the positive and negative electrodes. On the other hand, an all-solid-state lithium secondary battery is composed of three parts: a positive electrode, a negative electrode, and a lithium solid electrolyte (a lithium ion conductor), and the lithium solid electrolyte plays the roles of both an electrolyte and a separator. Figure 1 shows a schematic diagram of a conventional liquid-state lithium secondary battery and an all-solid-state lithium secondary battery. While the materials for positive and negative electrodes in conventional liquid-state lithium secondary batteries can also be used in all-solid-state lithium secondary batteries, a lithium solid electrolyte must be newly developed. An all-solid-state lithium secondary battery that uses this lithium solid electrolyte is expected to have several advantages such as reduced internal resistance and operating voltage, and a bipolar all-solid-state lithium secondary battery that enables increased output is expected to have further advantages such as high voltage, high capacity, long life, simplified packaging, and the possibility of using lithium metal negative electrodes. Currently, research institutions and companies around the world are engaging in the development of all-solid-state lithium secondary batteries. In this paper, a lithium solid electrolyte is defined as a solid-state electrolyte through which lithium ions move.
Problems of all-solid-state lithium secondary batteries and the all-solid-state lithium secondary battery that the Advanced Coating Technology Research Center has in view
Various kinds of R&D are being actively conducted for all-solid-state lithium secondary batteries, from component development for lithium solid electrolytes to battery design of all-solid-state lithium secondary batteries. [5] Table 1 summarizes the characteristics of the main lithium solid electrolytes for which development is currently being conducted. Sulfide solid electrolytes have high lithium ion conductivity and plasticity. While these characteristics are advantageous in battery fabrication, they are disadvantageous from a safety perspective, as harmful hydrogen sulfide gas is produced when many of the component materials react with water. Since lithium secondary batteries are familiar devices used in everyday life, safety must be guaranteed, and the Advanced Coating Technology Research Center (hereinafter, the Center) engages in the development of all-solid-state lithium secondary batteries that use oxide solid electrolytes with high safety, although issues remain. Figure 2, an overview of the current situation of lithium secondary batteries and future prospects, shows the positioning of the all-solid-state lithium secondary battery that the Center sets as its goal. Although it is still difficult to achieve high capacity and high output with oxide all-solid-state lithium secondary batteries because a large surface area is needed, they are thought to excel in safety, life, and environmental resistance compared to sulfide all-solid-state lithium secondary batteries. The goal should therefore be the creation of small all-solid-state lithium secondary batteries that take advantage of these characteristics and can be used in the Internet of Things (IoT), wearable devices, and medical applications.
Problems and solutions for garnet-type lithium solid electrolyte
The Center had been engaging in the development of oxide materials for positive and negative electrodes used in conventional lithium secondary batteries before starting research on all-solid-state lithium secondary batteries. Recently, we have become more involved in R&D for all-solid-state lithium secondary batteries, and we have concentrated our R&D on the garnet type among the oxide lithium solid electrolytes. Research on garnet-type lithium solid electrolytes is being conducted at research institutes and companies around the world. In general, the bulk body is fabricated by a sintering method, but garnet-type lithium solid electrolytes break down at high temperature, and there are disadvantages in that lithium evaporates and the density of the sintered body cannot be increased. The sintered density has been increasing every year through low-temperature sintering, achieved by densification of garnet-type lithium solid electrolytes, and through the spark plasma sintering (SPS) method. However, grain boundaries are always present in the sintered body, and the grain boundary resistance when lithium ions pass between the grains is large, so it is difficult to bring out the original performance of the bulk body. Therefore, it may be possible to bring out the original lithium ion conductivity of the bulk body if one large grain without any grain boundaries is used, that is, by growing a large single-crystal solid electrolyte. In an oxide all-solid-state lithium secondary battery that uses a sintered body of the garnet-type lithium solid electrolyte, there is the problem that lithium metal needles grow like dendrites in the solid electrolyte during precipitation of lithium metal. It has been reported that lithium metal dendrites may cause short-circuits in all-solid-state lithium secondary batteries. [6] To solve this issue, we thought it was necessary to create a single crystal of a lithium solid electrolyte, that is, a bulk body without any grain boundaries.
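As a rough numerical illustration of why grain boundary resistance matters, the sketch below converts impedance-fit resistances into conductivities for a pellet of given thickness and electrode area (σ = t / (R·A)), comparing the bulk value with the total (bulk plus grain boundary) value. All numbers are placeholders chosen only to show the size of the effect; a grain-boundary-free single crystal would correspond to dropping the grain boundary term.

```python
def conductivity(resistance_ohm: float, thickness_cm: float,
                 area_cm2: float) -> float:
    """Ionic conductivity sigma = t / (R * A), in S/cm."""
    return thickness_cm / (resistance_ohm * area_cm2)

# Placeholder pellet geometry and impedance-fit resistances.
t, a = 0.10, 0.80    # thickness 1 mm (0.10 cm), electrode area 0.8 cm^2
r_bulk = 250.0       # bulk (intragrain) resistance, ohm
r_gb = 1000.0        # grain boundary resistance, ohm

sigma_bulk = conductivity(r_bulk, t, a)
sigma_total = conductivity(r_bulk + r_gb, t, a)

print(f"bulk:  {sigma_bulk:.1e} S/cm")   # 5.0e-04 S/cm
print(f"total: {sigma_total:.1e} S/cm")  # 1.0e-04 S/cm, five times lower
```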
To overcome the issues of garnet-type lithium solid electrolytes, the Center started development of large garnet-type lithium solid electrolytes without grain boundaries by utilizing single-crystal growth technology that it has developed over the years. If the disadvantages of garnet-type lithium solid electrolytes could be overcome by obtaining a single crystal, it was expected to be a breakthrough technology as a component for oxide all-solid-state lithium secondary batteries. There had been no successful growth of large single crystals of lithium solid electrolytes, including garnet-type lithium solid electrolytes. Therefore, the growth of single crystals of lithium solid electrolytes was a meaningful research topic as an academic elemental technology for solid-state ionics, which investigates the movement of lithium ions in solids, as well as for the field of all-solid-state lithium secondary batteries. Also, an ideal interface could be created by using single crystals of lithium solid electrolytes, and it was thought possible to clarify the interface structure that could not be shown in a sintered body.
Problems and solutions for formation of solid-solid interface
In fabricating an oxide all-solid-state lithium secondary battery, we worked on the issue of how to bond single crystals of garnet-type lithium solid electrolytes to the electrode layer. For the fabrication of an electrode for all-solid-state lithium secondary batteries, an integral sintering method, fabrication of thin-film batteries by a sol-gel method, and pulsed laser deposition (PLD) methods have been reported. Table 2 summarizes the methods and characteristics of the major fabrication methods for oxide all-solid-state lithium secondary batteries. In all methods, it is necessary to form a strong boundary surface between the lithium solid electrolyte and the electrode layer. For the garnet-type lithium solid electrolyte, integral sintering at high temperature is difficult because of the formation of different phases, which are reaction products at the solid interfaces between electrodes and solid electrolytes caused by their mutual diffusion at high temperature. Therefore, we focused on the development of an electrode layer fabrication technology by the aerosol deposition (AD) method, which uses room-temperature impact consolidation (RTIC) and was researched and developed over the years as a thick-film ceramics coating technology at the Center. We thought the problem could be solved by applying a room-temperature bonding technology using the AD method, which is a film-forming process at room temperature. While various use cases are considered for all-solid-state lithium secondary batteries, we are aiming for the fabrication of a bulk-type all-solid-state lithium secondary battery that has high battery capacity. Therefore, to fabricate a bulk-type all-solid-state lithium secondary battery, we thought an all-solid-state lithium secondary battery could be fabricated at room temperature by combining single crystals of garnet-type lithium solid electrolytes with the AD method, which enables strong film formation at room temperature and can form, between the single crystal and the electrode layer, strong solid-solid interfaces through which lithium can move.

Table 2. Fabrication methods of major oxide all-solid-state lithium secondary batteries and their characteristics
Research goal for this paper
In the process of conducting R&D for oxide all-solid-state lithium secondary batteries at the Center, we identified two issues that are important in realizing all-solid-state lithium secondary batteries: short-circuits inside batteries and interface formation between solids. In particular, short-circuits inside batteries have been widely taken up as a problem, and a solution has not been found by other methods. In this paper, we describe the current situation of R&D for all-solid-state lithium secondary batteries and lithium solid electrolytes, the electrode formation technology using the AD method, which is a room-temperature film-forming technology, and the achievement of single crystals of garnet-type lithium solid electrolytes. The research currently being conducted at the Center is described to solve the aforementioned two issues of oxide all-solid-state lithium secondary batteries.
All-solid-state lithium secondary battery and lithium solid electrolyte
As mentioned in the previous chapter, an all-solid-state lithium secondary battery is composed of three main parts: a positive electrode, a negative electrode, and a lithium solid electrolyte. For all-solid-state lithium secondary batteries, oxide lithium solid electrolytes, in which the anions are oxygen, and sulfide lithium solid electrolytes, in which the anions are sulfur, are widely researched. A sulfide lithium solid electrolyte has the advantage that interface formation with electrodes is easy owing to its excellent plasticity, and solid electrolytes with high lithium ion conductivity on the order of 10⁻² S/cm, surpassing existing organic electrolyte solutions, have been reported. [7][8] On the other hand, there is the danger of harmful hydrogen sulfide gas being generated. An oxide lithium solid electrolyte is superior in terms of safety, but has issues in solid-solid interface formation and lithium ion conductivity. In the current development of all-solid-state lithium secondary batteries, research on sulfide all-solid-state lithium secondary batteries using sulfide lithium solid electrolytes is taking the lead and approaching realization, while more time will be needed to realize oxide-based batteries. Nevertheless, the Center is engaging in R&D of oxide all-solid-state lithium secondary batteries in view of the superior safety of oxide lithium solid electrolytes. Oxide solid electrolytes include poly-anion-type lithium solid electrolytes represented by NASICON-type structure materials, [9] perovskite-type lithium solid electrolytes, [10] and garnet-type lithium solid electrolytes, [4][11]-[19] for which the Center reported a relatively high lithium ion conductivity (10⁻⁴ S/cm order) among oxide lithium solid electrolytes. We focused on garnet-type lithium solid electrolytes because their wide potential window allows the use of metallic lithium and high-potential positive electrode active materials, which cannot be used with organic electrolyte solutions, and we have been conducting R&D for oxide all-solid-state lithium secondary batteries since 2009.
Garnet-type lithium solid electrolyte
A garnet-type lithium solid electrolyte possesses, as the name implies, a crystal structure very similar to that of garnet used as a precious stone, or of yttrium aluminum garnet (YAG) used as an optical crystal. The original garnet is expressed by the general formula C3A2B3O12, in which the C site is dodecahedrally coordinated by oxygen, the A site octahedrally coordinated, and the B site tetrahedrally coordinated. In garnet-type lithium solid electrolytes, on the other hand, lithium is also present in interstitial sites, octahedrally coordinated by oxygen, that are vacant in the ordinary garnet structure. Figure 3 shows the crystal structure of a garnet-type lithium solid electrolyte.
For example, in a garnet-type lithium solid electrolyte with the composition Li7La3Zr2O12, the C site is occupied by lanthanum, the A site by zirconium, and the B site and the interstitial sites by lithium.
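To make the site chemistry concrete, the following minimal Python sketch (not from the paper; the oxidation states are textbook values assumed here) checks formal charge neutrality for Li7La3Zr2O12 and for the niobium-substituted variants discussed next, illustrating why the lithium content must change with the substituent element.

```python
# Minimal sketch: formal charge balance of garnet-type electrolyte
# compositions, using textbook oxidation states (an assumption here).
OXIDATION_STATES = {"Li": +1, "La": +3, "Zr": +4, "Nb": +5, "O": -2}

def total_charge(composition):
    """Sum of stoichiometry x oxidation state over all elements."""
    return sum(n * OXIDATION_STATES[el] for el, n in composition.items())

def llzo_nb(x):
    """Li(7-x)La3Zr(2-x)Nb(x)O12 for a Nb substitution level x."""
    return {"Li": 7 - x, "La": 3, "Zr": 2 - x, "Nb": x, "O": 12}

for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f}: net formal charge = {total_charge(llzo_nb(x)):+.2f}")
# Each Nb(5+) replacing Zr(4+) removes one Li(+), so every composition
# balances to zero -- the reason lithium content varies with substitution.
```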
For garnet-type lithium solid electrolytes, it is known that various elements can be substituted: calcium, strontium, and barium at the C site; niobium, tantalum, tin, and hafnium at the A site; and aluminum and gallium at the B site. [11]-[15] The amount of lithium changes according to the substituent element, and the lithium ion conductivity changes with the resulting changes in the arrangement and occupancy of lithium. For garnet-type lithium solid electrolytes, there are several reports of lithium ion conductivity on the order of 10⁻⁴ S/cm, [8]-[18] which is excellent among oxide lithium solid electrolytes. On the other hand, it is a difficult material to sinter: achieving high density is difficult, and the intrinsic bulk lithium ion conductivity cannot be fully utilized in components because of grain boundary resistance. Recently, highly dense components have been fabricated by electric current sintering and hot pressing, and lithium ion conductivities on the order of 10⁻³ S/cm have been reported. [20]-[22] Although the density of components is increasing, there are also new reports of internal short-circuiting. [3][6][12][13][15] The problem of internal short-circuits is that short-circuits occur between the positive and negative electrodes due to precipitation of lithium metal within the all-solid-state lithium secondary battery. Of course, in an all-solid-state lithium secondary battery there is no combustion as in a conventional lithium secondary battery, but the function of the battery is lost through short-circuiting. Internal short-circuits in an all-solid-state lithium secondary battery are caused, as shown schematically in Fig. 4, by the growth of lithium metal along the grain boundaries of the lithium solid electrolyte. It has been reported that internal short-circuits occur even at low current, even after densification of the components by various methods, and the problem had not been solved. We thought this problem could be solved by using a lithium solid electrolyte without grain boundaries, that is, a single crystal, and started the development of large single crystals of garnet-type lithium solid electrolytes.
Single crystal growth of garnet-type lithium solid electrolyte using FZ method
Single crystals of garnet-type lithium solid electrolytes had until now been synthesized by high-temperature sintering or a flux method, and both methods yielded single crystals of at most about 1 mm. [16]-[18] We judged that a lithium solid electrolyte usable for an all-solid-state lithium secondary battery could not be grown by high-temperature sintering or flux methods, and we considered growing single crystals using the floating zone (FZ) method. Figure 5 shows the melting furnace for the FZ method, and Fig. 6 shows an outline of the method. The FZ method is named after the fact that the melt zone floats in space: the melt zone is supported by surface tension between raw material rods at top and bottom, and single crystals grow as the melt zone is moved. Since this method does not use containers such as crucibles, there is no inclusion of impurities from crucible materials, and the growth of single crystals is possible even with highly volatile materials, because the melt zone is localized and the growth conditions can be managed. Surveying past cases of single crystal growth, LiCoO2, a positive electrode active material of lithium secondary batteries, has reportedly been grown by the FZ method. [23][24] We judged that a garnet-type lithium solid electrolyte could be grown by devising a growth method based on the crucible-free FZ method, to counter lithium evaporation at high temperature and the high reactivity of the garnet-type lithium solid electrolyte itself. As a merit of using the FZ method, if single crystals of garnet-type lithium solid electrolytes were developed successfully, technological transfer and joint development with private companies could be expected. There are venture companies that sell various single crystals grown by the FZ method, and, as the FZ method is a so-called melting method, if single crystals can be grown, there are possibilities of improving and adapting large single crystal growth methods for industrial production by companies. There is thus a path to joint development with companies that already own facilities for growing and manufacturing single crystals. In fact, there is an example of pulling single crystals by the Czochralski (CZ) method using an iridium crucible in our published patent. [25] In practice, we faced hardship when we started the actual investigation of single crystal growth of garnet-type lithium solid electrolytes. We did not know whether single crystals of garnet-type lithium solid electrolytes could be grown by a melting method at all. In addition, in single crystal growth by a melting method one usually works from phase diagrams, but the composition of garnet-type lithium solid electrolytes is complicated and no phase diagram was available. Therefore, in our R&D, various growth parameters were varied, the solidified material was analyzed after each growth run, and single crystal growth was pursued by trial and error. Four years after starting the investigation, we found the conditions for single crystal growth.
Some characteristics of the growth conditions are as follows: an excess (about 1.2 times) of lithium carbonate, the lithium source, is added when preparing the raw material; gas derived from evaporating lithium is removed by passing about 7 L/min of dried air during growth; air bubbles are removed by rotating the supplied polycrystalline feed rod at about 40 rpm; and the growth rate is kept at about 10 mm/h. Figure 6 shows the growth conditions used in this study. The growth rate is particularly interesting: in single crystal growth by a general FZ method the growth rate is kept at about 1-2 mm/h, but for garnet-type lithium solid electrolytes single crystals could not be obtained at this commonly used rate, and were obtained only at a rate about 5-10 times faster. Using this method, we grew single crystals of garnet-type lithium solid electrolytes with various chemical compositions by a melting method for the first time in the world, and after organizing the patent application and the know-how, we published an academic paper. [26][27] In this paper, we describe the evaluation results for Li7-xLa3Zr2-xNbxO12, in which part of the zirconium in a garnet-type lithium solid electrolyte was substituted by niobium, as described in Reference [26]. Please refer to that reference for the experimental method and details of the results. Figure 7 shows a single crystal rod of the garnet-type lithium solid electrolyte Li6.5La3Zr1.5Nb0.5O12 grown in this study, and a single crystal plate that was cut from it and surface polished. As shown in Fig. 7, we were able to grow a large single crystal with a length of 8 cm and a diameter of 8 mm for the first time in the world.
Crystallographic and electrochemical evaluation of garnet-type lithium solid electrolyte
The first evaluation concerned the dependence of lithium ion conductivity on the amount of Nb substitution. Since higher lithium ion conductivity is advantageous for battery operation, this is the most important property. Examining the correlation between the amount of Nb substitution and lithium ion conductivity using an AC impedance method, the lithium ion conductivity was maximal at an Nb substitution of 0.5, with a value, as shown in Fig. 8, of 1.39 × 10⁻³ S/cm at 298 K. This value is higher than the values conventionally reported for sintered-body samples, which we attribute to the absence of grain boundary effects. As a result of X-ray diffraction and neutron diffraction measurements, as shown in Fig. 9, it was confirmed that the arrangement of lithium differed from the crystal structure of conventionally reported garnet-type lithium solid electrolytes. In the conventionally reported garnet-type crystal structure, lithium was dominant at the 24d site, while in our crystal structure analysis, lithium occupied the 96h site, in which the 24d site is split into four. As a result, the distance between lithium sites became shorter compared to the conventional garnet-type crystal structure, and this is thought to have led to the increased lithium ion conductivity.
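As a side note on the arithmetic behind such values: the bulk resistance R obtained from an AC impedance spectrum is converted to ionic conductivity via σ = t/(R·A), where t is the sample thickness and A its electrode area. The following minimal Python sketch (not from the paper; the resistance and plate dimensions are hypothetical round numbers) illustrates the conversion:

```python
import math

def ionic_conductivity(resistance_ohm, thickness_cm, area_cm2):
    """sigma = t / (R * A) in S/cm, from an AC-impedance bulk resistance."""
    return thickness_cm / (resistance_ohm * area_cm2)

# Hypothetical plate cut from the 8 mm diameter crystal rod, 1 mm thick:
thickness = 0.1                      # cm
area = math.pi * 0.4 ** 2            # cm^2 (8 mm diameter disc)
resistance = 143.0                   # ohm, hypothetical fitted bulk resistance

sigma = ionic_conductivity(resistance, thickness, area)
print(f"sigma = {sigma:.2e} S/cm")   # ~1.4e-03 S/cm, the order reported here
```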
To see whether this problem was actually solved by achieving single crystals of garnet-type lithium solid electrolytes, we conducted internal short-circuit tests based on dendrite growth of lithium metal. For the short-circuit test, we used symmetric cells with lithium metal attached, and checked the behavior by applying a constant current while repeatedly dissolving and depositing lithium metal. Figure 10 shows the results of the internal short-circuit tests. From these results, it was confirmed that the cell operated without short-circuiting at a current density of 0.5 mA/cm². The lithium ion conductivity calculated from the short-circuit test results was 1.0 × 10⁻³ S/cm, not greatly different from the 1.39 × 10⁻³ S/cm obtained by the AC impedance method. From the above results, we believe the short-circuiting problem that had plagued garnet-type lithium solid electrolytes was solved by using single crystals.
Electrode formation by aerosol deposition method
Another issue was the formation of solid-solid interfaces between electrodes and single crystals of garnet-type lithium solid electrolytes. We attempted to solve this issue by using the aerosol deposition (AD) method to create the electrodes. The AD method is a technology for forming a film by mixing fine particles with gas and spraying the mixture from a nozzle, as an aerosol jet under reduced pressure, onto a substrate.
Using the "room-temperature impact consolidation (RTIC)" (in which high-density solidif ication occurs at room temperature without heating by applying high pressure or mechanical impact to fine particle materials such as ceramics with particle diameter of around 1 μm) that was discovered by Akedo, one of the authors of this paper, dense and highly adhesive ceramic films were formed on substrates of various materials such as metal, glass, and plastic, at room temperature. This is a film forming process for which AIST has the know-how. [28] This film forming process is a technology that is already utilized by companies and has high versatility. Please refer to Reference [28] published in Synthesiology for detailed explanation and past efforts on the AD method.
We chose the AD method for electrode formation because it offered the technological merit of being already used in industry. The representative characteristics of the AD method are as follows: it is a room-temperature film forming process that does not require heating; unlike coated films, it does not require binders; adhesion between substrate and film is strong; and composite films can be created by using several types of fine particles together. For forming electrodes on single crystals of garnet-type lithium solid electrolytes, we thought the AD method appropriate because it is a room-temperature film forming process and because the adhesion between substrate and film is strong. The first reason matters because a garnet-type lithium solid electrolyte is a material with relatively high reactivity: if a film forming process requiring heating were used, the garnet-type lithium solid electrolyte substrate and the electrode might react, a different phase might form at the interface, and the device would not function as an all-solid-state lithium secondary battery. The second reason is that, because lithium ions move across the interface between the garnet-type lithium solid electrolyte and the electrode, strong interface adhesion is required. In the actual film formation, LiNi0.8Co0.15Al0.05O2 (NCA), a positive electrode active material of current lithium secondary batteries, was used with dry air, and the film was formed by spraying the aerosol jet onto a single crystal of a garnet-type lithium solid electrolyte placed in a vacuum chamber. Figure 12(a) shows an NCA electrode film formed by the AD method on a single crystal of a garnet-type lithium solid electrolyte used as the substrate.
Evaluation of all-solid-state lithium secondary battery
Using single crystals of garnet-type lithium solid electrolytes and the AD method as explained above, the Center created an original prototype of an oxide all-solid-state lithium secondary battery. As the negative electrode active material, metallic lithium was crimped on, to exploit the merit of single crystals of garnet-type lithium solid electrolytes. Figure 12(b) shows a schematic diagram of the all-solid-state lithium secondary battery that was developed. For the evaluation tests, five charge/discharge cycles were run over the voltage range 3.0-4.2 V at 0.5 μA and 60 °C, followed by five more cycles at 25 °C. Figure 13 shows the results of the charge/discharge test at 60 °C, and Fig. 14 shows the results at 25 °C. As shown in Fig. 14, although the capacity does not reach the theoretical capacity of NCA, it was confirmed that reversible charge/discharge was accomplished in a room-temperature environment. That is, it was confirmed that the solid-solid interface between the garnet-type lithium solid electrolyte and the positive electrode film formed by the AD method was sufficiently strong.
Future prospects
Several issues remain for oxide all-solid-state lithium secondary batteries, which are prospective candidates for next-generation secondary batteries. In this research, it was found that these issues can be addressed by combining single crystals of garnet-type lithium solid electrolytes with the AD method.
Single crystals of garnet-type lithium solid electrolytes, which we succeeded in growing for the first time in the world, have drawn the interest of many research institutions and companies, both academically and industrially. Academically, there had been hardly any such research because large bulk single crystals of lithium solid electrolytes did not exist. Currently, we are engaging in research on lithium ion transport within single crystals together with solid-state ionics researchers, as well as conducting joint research with various institutions on the measurement of basic properties. Moreover, we are conducting research on the interface structure between solid electrolytes and electrodes to enable fabrication of an ideal electrode interface using single crystals. Industrially, we are conducting joint development with companies on single crystal growth, aiming at high quality, mass production, and large size for single crystals of garnet-type lithium solid electrolytes.
The Center is currently working to clarify the mechanism by which single crystals of garnet-type lithium solid electrolytes grow in a melting method, to search for new garnet-type lithium solid electrolytes by element substitution to increase lithium ion conductivity, to fabricate composite electrode films consisting of an electrode active material and a lithium solid electrolyte, and to achieve thick electrode films. While many issues remain to be overcome, we have succeeded in growing single crystals of high-quality solid electrolytes, and we aim for practical realization of an all-solid-state lithium secondary battery by around 2030.

Author profile: Was widely involved in the development of materials and devices, as well as research on magneto-optical recording and optical sensors, during his university period, and worked on product development at a venture company that manufactured barcode readers. After joining the Mechanical Engineering Laboratory, he conceived the idea for the current research (the AD method) around 1994. He served as Project Leader of the NEDO Nanotechnology Program for five years from 2002. In this paper, he overviewed and supervised the electrode film formation by the AD method.
In general, the FZ method is used for crystal growth where a crucible cannot be used, and therefore I feel there is a large gap in using it for large single crystal growth. If you have any clues on large single crystal growth, I think you should explain them to the degree you are allowed to disclose, to make your claims more convincing.
Answer (Kunimitsu Kataoka)
Regarding the achievement of large crystals, we presented evidence of large single crystal growth of garnet-type solid electrolytes by the Czochralski (CZ) method using an iridium crucible in published patent No. WO2016017769A1, for which we had submitted a patent application. Based on this fact, I added the following text.
"In fact, there is an example of pulling single crystals by the Czockralski (CZ) method using an iridium crucible in our published patent."
Comment (Haruhiko Obara)
You write, "There is a difference in lithium arrangement between the crystal structure and the garnet-type lithium solid electrolyte that have been reported." I think you should add some discussion about why you obtained such a crystal structure and how it affects the physical properties (such as conductivity).
Answer (Kunimitsu Kataoka)
It is still unknown why the lithium arrangement changed. One possibility is that we were able to determine the true crystal structure thanks to neutron diffraction on a single crystal, which provides abundant diffraction data. Although this may be a matter of interpretation, the crystal structure obtained in this study has shorter distances between lithium sites compared to the conventional garnet-type crystal structure, and we think the ion conductivity increased as a result.
Therefore, I added the following text: "In a conventionally reported garnet-type crystal structure, lithium was dominant at the 24d site, while in our crystal structure analysis result, lithium dominated the 96h site where the 24d site was split into four. As a result, the distance between lithium became shorter compared to the garnet-type crystal structure, and this is thought to have increased lithium ion conductivity."
Comment (Haruhiko Obara)
You write, "The problem of internal short-circuiting that was the issue of a garnet-type lithium solid electrolyte was solved by using a single crystal." I think it is apparent that dendrites do not form because there is no grain boundary. Or, is there a basic mechanism that prevents dendrite growth when a single crystal is used?
Answer (Kunimitsu Kataoka)
While it may seem obvious, this is not so clear and is a theme still discussed today. There is a report that dendrite growth occurs regardless of the presence or absence of grain boundaries. However, in our research, we were able to prevent dendrites in the single crystal solid electrolyte. One factor is thought to be, as shown in Fig. 4, that the lithium metal deposits homogeneously because the surface of a single crystal is flat. In fact, we confirmed that in single crystal solid electrolytes with rough surfaces, dendrites grew and caused cracks.
Comment (Haruhiko Obara)
You mention that the difficult issue of integral sintering could be solved by a room-temperature bonding technology using the AD method, but I cannot judge objectively whether problems such as interface resistance and prevention of interdiffusion were actually solved. Do you have any evidence of the problems being solved?
Comment (Haruhiko Obara, AIST)
This paper describes highly original research on electrolytes using oxide single crystals and on electrode formation using the AD method for the realization of all-solid-state lithium secondary batteries, and I think it is valuable as a paper for Synthesiology.
Comment (Masahiko Makino, AIST)
I think the social demand for lithium secondary batteries will continue to increase in the future. It is expected that the technology developed in this paper will become the core as its usage expands to "IoT, wearables, and medical use," as the authors aspire. This paper provides a detailed explanation of the hardships met in growing single crystals of garnet-type lithium solid electrolytes. I think it is appropriate as a paper for Synthesiology, as it contains valuable information.
Comment (Haruhiko Obara)
There is a possibility that readers may become excessively concerned about the danger of sulfide all-solid-state lithium secondary batteries, on which many companies and research institutes are working, since you point out the safety issues of sulfide solid electrolytes, your competing technology. I think you should reconsider the wording of the comparison with oxide batteries.
Answer (Kunimitsu Kataoka)
I am not criticizing sulfide all-solid-state batteries. The generation of hydrogen sulfide gas when sulfide solid electrolytes react with water has been discussed for a long time at academic conferences and in papers, and is common knowledge among researchers and developers of all-solid-state lithium secondary batteries. At the same time, large surface area and excellent interface formation can be achieved with sulfide materials, and many institutes and companies are moving to realize sulfide all-solid-state lithium secondary batteries first. Since the generation of hydrogen sulfide gas by reaction with water cannot be avoided due to the nature of the material, institutes and companies are currently carrying out R&D by devising packaging. On the other hand, it has also become common knowledge among researchers and developers that there is no such safety issue for oxide all-solid-state lithium secondary batteries.
Answer (Kunimitsu Kataoka)
Since the AD method is a room-temperature film forming technology, it does not require a heating process. In the integral sintering method, heating is necessary to ensure sintering; if heating is required, thermal diffusion occurs mutually at the interface of the different solids, and there is a high possibility of different phases forming. The fact that the AD method forms films at room temperature is therefore, above all else, evidence that it is a solution.

4 Expansion of use to "IoT, wearables, and medical use"
Comment (Masahiko Makino)
For the "link between research goal and society," you tend to concentrate on technical descriptions. How about addressing "high safety," "long lifespan," "environmental resistance," or "Usage: IoT, wearables, medical use" that are listed in " Figure 2.
Overview of the current situation of lithium secondary battery and its future prospect" in this paper? I think there is particularly great expectations from society for secondary batteries of medical use.
Answer (Kunimitsu Kataoka)
I added the following text. "Although it is still difficult to achieve high capacity and high output with oxide all-solid-state lithium secondary batteries because a large surface area is needed, they are thought to excel in safety, lifespan, and environmental resistance compared to sulfide all-solid-state lithium secondary batteries. The goal should therefore be the creation of small all-solid-state lithium secondary batteries that take advantage of these characteristics and can be used in the Internet of Things (IoT), wearable devices, and medical applications."
Characteristics and Bioactivities of Carrageenan/Chitosan Microparticles Loading α-Mangostin
This study attempted to develop carrageenan/chitosan-based microparticles loading α-mangostin extracted from Vietnamese mangosteen skin. The carrageenan/chitosan/α-mangostin microparticles were prepared by an ionic gelation method: chitosan and carrageenan were mixed with α-mangostin, and the mixtures were subsequently cross-linked with sodium tripolyphosphate as the cross-linking agent. The content of α-mangostin in the microparticles was varied to evaluate its effect on the physical and morphological properties, particle size, and bioactivities of the carrageenan/chitosan/α-mangostin microparticles. The results showed that carrageenan and chitosan interacted with each other and with α-mangostin. The polymer matrix improved the release of α-mangostin into ethanol/pH buffer solutions. The carrageenan/chitosan/α-mangostin microparticles showed antibacterial (against Gram (+) strains) and antioxidant activities. The results suggest that combining chitosan and carrageenan in the microparticles can enhance the controlled release of α-mangostin into solutions as well as preserve its bioactivities and reduce its Vero cell toxicity.
Introduction
α-Mangostin (MGS) is a xanthone derivative extracted from the pericarps of mangosteen, a tropical fruit mainly found in Southeast Asian countries such as Vietnam, Thailand and Malaysia. This xanthone derivative has a variety of bioactivities, such as antibacterial [1,2], anti-inflammatory [3][4][5], antioxidative [6,7], anticancer [8][9][10], antifungal [11] and anti-allergic activities [12]. Therefore, its application in the pharmaceutical industry is promising and worth investing in. In practice, mangosteen skins are usually discarded into the environment without treatment, which wastes the many valuable organic compounds they contain and contributes to environmental pollution. The extraction of organic compounds from mangosteen skins is therefore a good way to reuse them, since these compounds have strong bioactivities and potential applications in many fields such as pharmacy and food. α-Mangostin has low solubility in water, only 2.03 × 10⁻⁴ mg·L⁻¹ at room temperature. Many approaches have been studied to overcome this disadvantage, including co-solvation, structural modification, complex formation [13][14][15] and micro/nanoparticle drug delivery systems [16][17][18]. Among these, we focus on micro/nanoparticles for increasing the solubility of MGS, because they can improve both the solubility and the therapeutic index of drug compounds [19,20].
One widely used class of drug delivery nanoparticle systems is polymeric carriers. The polymers, of natural or synthetic origin, offer high biocompatibility and biodegradability [20,21]. In our study, chitosan and carrageenan were chosen for loading MGS to improve its solubility and bioavailability. Chitosan is a biopolymer that can be obtained by processing seafood waste (crab, lobster, shrimp or krill shells) [21,22]. It is biodegradable, non-toxic and has good biocompatibility [23][24][25], and it can be used for the safe delivery of drugs and bioactive compounds. Additionally, chitosan can adsorb to the mucus membrane along the gastrointestinal tract thanks to its mucoadhesive property, and hence is often applied to carry colon-targeted drugs [26][27][28][29]. Carrageenans, which can be extracted from various types of red algae such as Agardhiella, Eucheuma, Furcellaria, Gigartina and Hypnea, are linear sulfated polysaccharides [30][31][32]. These polysaccharides also show myriad bioactive properties such as antioxidant [33,34], anticoagulant [35,36], antiviral [37][38][39], antibacterial [40] and antitumor activities [41,42]. Pacheco-Quito et al. presented a schematic overview of carrageenan applications in various pharmaceutical formulations, including tablets, pellets, films, suppositories, inhalable systems, microparticles and nanoparticles [43].
Materials
The main materials and chemicals used in the study were α-mangostin powder (MGS) (extracted from mangosteen skin, purity 90%, Vietnam) and carrageenan powder (CAS number 9000-07-1, predominantly κ-carrageenan),
Preparation of Carrageenan/Chitosan/α-Mangostin Microparticles
The procedure for preparing the carrageenan/chitosan/α-mangostin microparticles was as follows. Preparation of carrageenan solution: 50 mg of carrageenan was added to 100 mL of distilled water. The mixture was stirred on a magnetic stirrer at 80 °C for 15 min to completely dissolve the carrageenan into a transparent solution. Next, the carrageenan solution was stirred and cooled to 50 °C before slowly adding the KCl solution (5 mg KCl in 5 mL distilled water). The solution was stirred for a further 15 min to obtain a transparent carrageenan solution (solution A).
Preparation of chitosan solution: 100 mg of CS was added to 100 mL of 1% acetic acid solution. The mixture was stirred on a magnetic stirrer for 30 min to obtain the chitosan solution (solution B).
Preparation of MGS solution: MGS was weighed and added to 20 mL of ethanol to get a transparent yellow MGS solution (solution C).
Preparation of STPP solution: 20 mg of STPP was dissolved in 2 mL of distilled water (solution D).
Solution A was cooled to 40 °C before slowly adding solution B under ultrasonic stirring at 10,000 rpm to obtain solution AB. Next, solution C was dropped at a rate of 3 mL/min into solution AB under ultrasonic stirring. Then, solution D was added slowly to solution AB to cross-link the polymers in solution. The solution was maintained under ultrasonic stirring for 5 min to obtain a homogeneous mixture. Finally, the solution was kept in a salt-ice-water bath for 2 h before centrifuging at 6000 rpm to collect the solid fraction. The solid fraction was freeze-dried, finely ground and stored in PE tubes at room temperature until use. The component ratios and designations of the carrageenan/chitosan/α-mangostin samples are presented in Table 1.
Characterization
The morphology of the CCG microparticles was evaluated using a field emission scanning electron microscope (FESEM) (Hitachi S-4800, Japan). The size distribution of the CCG microparticles was assessed by the dynamic light scattering (DLS) method (SZ100, Horiba, Japan). The thermal behavior of the CCG microparticles was determined by differential scanning calorimetry (DSC) (DSC204F1, Netzsch, Germany).
Setting Up Calibration Equation of MGS in Different pH Buffer Solutions
When taken orally, MGS and CCG microparticles pass through the digestive system, which presents different pH environments. Therefore, MGS release was investigated in different pH buffer solutions (pH 1.2, 4.5, 6.8 and 7.4) simulating body fluids.
To determine the amount of MGS released from the CCG microparticles, calibration equations of MGS in the pH solutions are needed. Because MGS has poor solubility in water and buffer solutions, ethanol was added to the pH buffer solutions (50/50 v/v) to evaluate the release of MGS more accurately [48].
The calibration equation of MGS in each pH solution was constructed by serial dilution of a standard stock solution. 10 mg of MGS was added to 200 mL of buffer/ethanol solution and the mixture was stirred continuously for 8 h until the MGS dissolved completely. This solution was then withdrawn and diluted to a series of concentrations before recording ultraviolet-visible (UV-Vis) spectra (S80 Libra, Biochrom, UK). Excel software was used to build the calibration equation of MGS from the measured optical densities and to calculate the regression coefficient (R²).
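As an illustration of this step, the sketch below fits a Beer-Lambert calibration line by least squares and computes R²; the concentrations and absorbances are hypothetical placeholders, not the paper's data.

```python
import numpy as np

# Hypothetical dilution series (ug/mL) and absorbances at 244 nm;
# the real values come from the UV-Vis measurements described above.
conc = np.array([2.0, 4.0, 6.0, 8.0, 10.0])
absorbance = np.array([0.151, 0.298, 0.452, 0.601, 0.748])

slope, intercept = np.polyfit(conc, absorbance, 1)
pred = slope * conc + intercept
ss_res = np.sum((absorbance - pred) ** 2)
ss_tot = np.sum((absorbance - absorbance.mean()) ** 2)
r2 = 1 - ss_res / ss_tot

print(f"A = {slope:.4f} * C + {intercept:.4f},  R^2 = {r2:.4f}")
# The equation is accepted for quantification only when R^2 >= 0.99, as
# required in the text; concentration is then C = (A - intercept) / slope.
```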
Drug Release Analysis
10 mg of CCG microparticles was added to 200 mL of buffer solution and the mixture was stirred continuously for 360 min at 37 °C. Every 20 min during the first hour, and every hour thereafter, exactly 5 mL of the solution was withdrawn and replaced with 5 mL of fresh buffer solution to maintain the volume. The UV-Vis spectrum of each withdrawn aliquot was recorded on the S80 Libra spectrophotometer. The amount of MGS released from the CCG microparticles was calculated from the calibration equation and the measured optical density. The experiment was performed in triplicate and the mean value was calculated.
The percentage of MGS released was calculated using the formula: Release (%) = (Ct / C0) × 100, where C0 and Ct are the initially loaded MGS and the MGS released at time t, respectively.
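Since each 5 mL aliquot removes drug that must still be counted toward Ct, the cumulative release calculation needs a replacement correction. The sketch below is one common way to do this; the concentrations are hypothetical and the function name is ours, not from the paper.

```python
def cumulative_release(conc_mg_per_ml, v_total=200.0, v_sample=5.0, loaded_mg=10.0):
    """Percent MGS released at each sampling time, corrected for the
    drug removed with every 5 mL aliquot replaced by fresh buffer."""
    released = []
    removed = 0.0  # mg of MGS already taken out in previous aliquots
    for c in conc_mg_per_ml:
        in_vessel = c * v_total          # mg currently dissolved in the vessel
        released.append(100.0 * (in_vessel + removed) / loaded_mg)
        removed += c * v_sample          # mg carried away with this aliquot
    return released

# Hypothetical measured concentrations (mg/mL) at successive sampling times:
print(cumulative_release([0.010, 0.018, 0.025, 0.033, 0.040]))
```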
Antibacterial Activity Testing Method
This method tests antibacterial activity by evaluating the antimicrobial strength of test samples through the turbidity of the culture medium; activity is expressed as IC50 and MIC values. The testing procedure was as follows: the original sample was diluted in two steps, first in 100% DMSO and then in distilled water, into a series of 4-10 concentrations. The highest test concentration was 256 µg/mL for the extract and 128 µg/mL for the purified compound. The test microorganisms were kept at −80 °C. Before the experiment, they were activated in culture medium so that the bacterial concentration reached 5 × 10⁵ CFU/mL and the fungal concentration 1 × 10³ CFU/mL. 10 µL of sample solution at each concentration was added to a 96-well plate, then 190 µL of the active microorganism suspension was added, and the plate was incubated at 37 °C for 16-24 h.
The MIC value was determined as the lowest sample concentration that completely inhibited the growth of the microorganism. The IC50 was estimated by linear interpolation between the two concentrations bracketing 50% inhibition: IC50 = HighConc − [(HighInh% − 50) / (HighInh% − LowInh%)] × (HighConc − LowConc), where HighConc/LowConc are the sample concentrations at the high/low concentration and HighInh%/LowInh% are the inhibition percentages at the high/low concentration.
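A minimal sketch of this interpolation (the numeric inputs are hypothetical, not measured values from this study):

```python
def ic50_two_point(high_conc, low_conc, high_inh, low_inh):
    """Linear interpolation of IC50 between the two concentrations that
    bracket 50 % inhibition, following the formula above."""
    return high_conc - (high_inh - 50.0) / (high_inh - low_inh) * (high_conc - low_conc)

# Hypothetical dilution results: 72 % inhibition at 64 ug/mL, 38 % at 32 ug/mL.
print(f"IC50 ~ {ic50_two_point(64, 32, 72, 38):.1f} ug/mL")  # -> ~43.3 ug/mL
```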
Anti-Oxidant Activity Testing Method
Analysis of the ability to scavenge free radicals generated by 1,1-diphenyl-2-picrylhydrazyl (DPPH) is an accepted method for the rapid determination of the antioxidant activity of samples. The sample was dissolved in dimethyl sulfoxide (100% DMSO) and DPPH was diluted in 96% ethanol. The absorbance of DPPH at λ = 515 nm (Infinite F50, Tecan, Switzerland) was determined after adding DPPH to the test sample solution in a 96-well microplate and incubating at 37 °C for 30 min. Test results are expressed as the mean of at least three replicates ± standard deviation (p ≤ 0.05). Flavonoid or ascorbic acid was used as the positive control.
The mean scavenging capacity (SC, %) at each sample concentration was calculated in an Excel data processing program by the following formula: SC (%) = [(ODcontrol − ODsample) / ODcontrol] × 100.
Cell Toxicity Testing Method
The MTT method was applied to evaluate the cell toxicity of MGS and the CCG microparticles. Vero cells (kidney, African green monkey) were provided by the American Type Culture Collection, USA. The cells were cultured at 37 °C and 5% CO2 in DMEM (Dulbecco's Modified Eagle Medium) supplemented with 2 mM L-glutamine, penicillin + streptomycin sulfate, and 5-10% fetal bovine serum. The cell suspension was then seeded onto a 96-well microplate (1.5 × 10⁵ cells/well) and incubated with the test samples over a concentration range of 100 µg/mL to 6.25 µg/mL for the extract, with each concentration repeated three times. Ellipticine or paclitaxel (Taxol) in DMSO was used as the positive (+) control. The formazan crystals were dissolved in DMSO and the optical density (OD) was measured at λ = 540/720 nm on an Infinite F50 instrument (Tecan, Männedorf, Switzerland). The ability to inhibit cell proliferation at a given sample concentration, in % relative to the control, was calculated as: Inhibition (%) = [(ODcontrol − ODsample) / ODcontrol] × 100. Samples exhibiting activity (% inhibition ≥ 50%) were assigned an IC50 value (µg/mL or µM), the concentration of the sample at which cell survival is inhibited by 50%, determined using TableCurve software (AISN Software, Jandel Scientific, San Rafael, CA).
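The dual-wavelength readout (540 nm signal minus 720 nm background) and the inhibition formula above translate into a few lines of code; the OD values below are hypothetical:

```python
def percent_inhibition(od540_sample, od720_sample, od540_control, od720_control):
    """Growth inhibition (%) relative to the untreated control, using
    background-corrected optical densities (OD540 - OD720)."""
    od_sample = od540_sample - od720_sample
    od_control = od540_control - od720_control
    return 100.0 * (od_control - od_sample) / od_control

# Hypothetical plate readings (sample and control wells):
print(f"{percent_inhibition(0.65, 0.03, 0.88, 0.03):.1f} % inhibition")
```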
Morphology of the Carrageenan/Chitosan/ α-Mangostin (CCG) Microparticles
FESEM images of MGS and of CCG microparticles prepared with different MGS contents are shown in Fig. 1. As observed in Fig. 1, MGS showed a looser surface structure than the CCG microparticles: it consisted of thin sheets stacked together to form blocks (Fig. 1a, b). The CCG microparticles had a dense structure, in which chitosan and carrageenan were mixed and bonded well through a polyelectrolyte complex (PEC) between the OSO3− groups of carrageenan and the protonated amine groups (NH3+) of chitosan, as well as through ionic cross-linking by tripolyphosphate anion bridges with the NH3+ cations of chitosan and of the PEC [47]. This cross-linking of polymers in the CCG microparticles through tripolyphosphate anion bridges could also be observed in the FESEM images. When loaded with MGS, the CCG microparticles tended to form smaller particles with fewer voids on the surface (Fig. 1f, h, j, l) compared with the CCG0 sample (Fig. 1d). The MGS may have filled the voids between chitosan and carrageenan and become entrapped inside [50]. This indicates that MGS interacted effectively with chitosan and carrageenan, as in our proposed scheme (Fig. 2).
Particle Size Distribution of CCG Microparticles
The CCG microparticles were dispersed in distilled water to record their particle size distributions, which are shown in Fig. 3; the size ranges and average particle sizes are listed in Table 2. It is clear that the CCG0 microparticles had a larger Z-average particle size than the CCG5, CCG10, CCG15 and CCG20 samples, suggesting that the MGS-loaded CCG microparticles dispersed in water better than the CCG0 sample. From Fig. 3 and Table 2, the CCG microparticles ranged in size from 43 to 1106 nm, with peak sizes varying with MGS content. The Z-average particle size of the CCG microparticles is larger than their peak sizes, indicating the coexistence of small and large particle populations. The PDI > 0.4 indicates that the CCG microparticles have a broad, polydisperse size distribution. As the MGS content in the CCG microparticles increases, the Z-average size tends to increase, possibly due to the hydrophobic nature of MGS.
DSC Analysis of CCG Microparticles
DSC curves of MGS and the CCG microparticles are displayed in Fig. 4. The melting temperature of MGS was found at 172.8 °C, with a melting enthalpy of 90.33 J/g (Table 3) [51]. For the carrageenan/chitosan microparticles without MGS, one broad peak appeared at 90.8 °C with an enthalpy of 396.7 J/g, which can be attributed to the melting of carrageenan and the glass transition of chitosan [52,53]. The appearance of only one peak in the range of 40 °C to 150 °C indicates that carrageenan was well miscible with chitosan through the bonds presented in Fig. 2. On loading MGS, the position of this peak shifted slightly and the enthalpy decreased, corresponding to reduced crystallinity of the CCG microparticles (Table 3). This may be due to the dispersion and interaction of MGS with the polymers, which limits the molecular mobility of the carrageenan and chitosan chains, producing a more amorphous state in the microparticles [48]. Further evidence for the interaction of MGS with the polymers is that the melting peak of MGS cannot be assigned in the DSC curves of the CCG10 and CCG20 microparticles. As the MGS content increased, the compatibility of MGS with the polymers decreased, which appeared as a very small peak at around 175 °C in the DSC curve of the CCG20 sample.
From the DSC results, it can be suggested that MGS interacted with chitosan and carrageenan, leading to the decrease in melting enthalpy of the CCG microparticles.
Calibration Equation of MGS in Different Ethanol/Buffer Solutions
The MGS content in solution was determined by the UV-Vis method. As observed from the UV-Vis spectra of MGS in the different solutions (ethanol and ethanol/buffer solutions, 50/50 v/v) over the wavelength range 200-400 nm, an absorption peak at 244 nm is found in all spectra (Fig. 5). Therefore, the wavelength of 244 nm was chosen to determine the MGS content in these solutions.
The calibration equations of MGS in the ethanol/buffer solutions and the corresponding linear regression coefficients (R²) are shown in Fig. 6. These equations have high linear regression coefficients (≥ 0.99) and can therefore be used to calculate the amount of MGS released from the CCG microparticles in the different ethanol/buffer solutions.
Release Amount of MGS from CCG Microparticles
The amounts of MGS released from free MGS and from the CCG microparticles in the different ethanol/buffer solutions are displayed in Fig. 7. The release of MGS depends on the pH of the buffer solution, the polymer matrix, the testing time and the MGS content of the CCG microparticles.
In the different ethanol/buffer solutions, the amount of MGS released from free MGS and from the CCG microparticles varied in the order ethanol/pH 1.2 buffer > ethanol/pH 4.5 buffer > ethanol/pH 6.8 buffer > ethanol/pH 7.4 buffer. The better release of MGS in acidic environments may be because MGS is a weak acid (pKa1 = 3.68, primary carbonyl). Moreover, the sulfate groups of carrageenan can react with protons (H+) in acidic environments, allowing MGS to be released more easily. In addition, the degradation of the electrostatic interactions among the components of the CCG microparticles (Fig. 2) in the presence of H+ ions could increase MGS release from the microparticles [50]. In ethanol/pH 1.2 and ethanol/pH 4.5 buffer solutions, MGS was released almost completely from the CCG5 sample after 360 min of testing. In ethanol/pH 6.8 and ethanol/pH 7.4 buffer solutions, the highest MGS release, from the CCG10 sample after 360 min, was 87.63% and 74.42%, respectively.
Fig. 2 Illustration of the ionically cross-linked chitosan-tripolyphosphate and chitosan-carrageenan polyelectrolyte complex in the chitosan-carrageenan microparticles
From Fig. 7, it can be seen that the carrageenan/chitosan matrix had a strong effect on the release of MGS [46,51,54]. The difference in the amount of MGS released from free MGS and from the CCG microparticles suggests that MGS was loaded into the carrageenan/chitosan microparticles and that MGS and the polymer matrix interacted, as mentioned above.
MGS was distributed both on the surface and inside the microparticles; therefore, an initial burst effect was observed over the first 120 min of testing, due to the release of MGS from the surface of the CCG microparticles, followed by a slower release, probably of the MGS bound to the polymer matrix [51,54].
The release of MGS from the CCG microparticles was also affected by their MGS content. The amount of MGS released from the CCG20 sample in all tested solutions was much lower than that from the other samples, which can be attributed to the lower compatibility of MGS with the polymer matrix, as noted in the DSC analysis subsection. In acidic environments, the MGS release from the CCG5 sample was higher than that from the CCG10 and CCG15 samples, while in alkaline environments the release from the CCG10 sample was higher. This difference may be explained by the dissimilar interactions among drug, polymers and solutions.
MGS Release Kinetic
Kinetic models expressing the release mechanism of a drug from a given matrix, such as the zero-order (ZO), first-order (FO), Higuchi (HG), Hixson-Crowell (HC) and Korsmeyer-Peppas (KMP) models, are typically used [46,51,55]. To study the release mechanism of MGS from the CCG microparticles in two stages, a fast release during the first 120 min of testing and a slow release thereafter, the release patterns were fitted to these five models and compared by their R² values (Table 4).
In the first stage (0-120 min of testing), the in vitro release of MGS from free MGS followed the KMP model in all tested solutions, and it also followed the KMP model when formulated into chitosan/carrageenan microparticles in the ethanol/pH 1.2, ethanol/pH 6.8 and ethanol/pH 7.4 buffer solutions. In the ethanol/pH 4.5 buffer solution, the release of MGS from CCG5, CCG10, CCG15 and CCG20 followed the KMP, FO, ZO and KMP models, respectively. In general, the release of MGS in the fast release stage was complex [55], combining various mechanisms such as swelling of the polymers, dissolution of the polymers, diffusion of MGS and dissolution of MGS in the solvents.
In the second stage, most of the MGS release processes from free MGS and from the CCG microparticles fitted well to the ZO or FO models, depending on the pH of the buffer solution and the MGS content of the CCG microparticles. This indicates that the release of MGS was either concentration-independent (ZO) or mainly controlled by the drug concentration (FO) [51]. The diffusion exponents obtained from the KMP model, which reflect the MGS release mechanism, differ between free MGS and the CCG microparticles, suggesting that the release of MGS may or may not follow Fick's law of diffusion depending on the test conditions.
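To make the model comparison concrete, the sketch below fits candidate models by nonlinear least squares and ranks them by R², the criterion used above. The time points follow the sampling schedule described earlier, but the release fractions are hypothetical, and the Hixson-Crowell model is omitted for brevity.

```python
import numpy as np
from scipy.optimize import curve_fit

# Candidate release models: q = cumulative fraction released at time t (min),
# each paired with a starting guess for its parameters.
models = {
    "zero-order":       (lambda t, k: k * t,                 [0.002]),
    "first-order":      (lambda t, k: 1.0 - np.exp(-k * t),  [0.005]),
    "Higuchi":          (lambda t, k: k * np.sqrt(t),        [0.04]),
    "Korsmeyer-Peppas": (lambda t, k, n: k * t ** n,         [0.05, 0.5]),
}

# Hypothetical release profile (not the data behind Table 4):
t = np.array([20, 40, 60, 120, 180, 240, 300, 360], dtype=float)
q = np.array([0.18, 0.27, 0.34, 0.49, 0.58, 0.66, 0.72, 0.76])

for name, (f, p0) in models.items():
    popt, _ = curve_fit(f, t, q, p0=p0, maxfev=10000)
    r2 = 1.0 - np.sum((q - f(t, *popt)) ** 2) / np.sum((q - q.mean()) ** 2)
    print(f"{name:17s} R^2 = {r2:.4f}  params = {np.round(popt, 4)}")
# The model with the highest R^2 is taken as dominant; for Korsmeyer-Peppas,
# the fitted exponent n distinguishes Fickian from anomalous transport
# (the exact thresholds depend on particle geometry).
```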
Antibacterial Activity of CCG Microparticles
The antibacterial testing results in Table 5 indicate that MGS and the CCG microparticles can inhibit Gram (+) strains (Staphylococcus aureus, Bacillus subtilis, Lactobacillus fermentum) but cannot inhibit Gram (−) strains (Salmonella enterica, Escherichia coli, Pseudomonas aeruginosa) or yeast (Candida albicans) at the tested concentrations. MGS itself has strong antibacterial activity [2]. The IC50 and MIC values of the CCG microparticles were higher than those of MGS, indicating lower antibacterial activity of the CCG microparticles, which may be due to their low MGS content (5-20 wt.%) and the low antimicrobial activity of the polymer matrix [56,57]. MGS and the CCG microparticles inhibited Staphylococcus aureus better than Bacillus subtilis and Lactobacillus fermentum. As the MGS content of the samples increased, the antibacterial activity of the CCG microparticles became stronger.
Anti-Oxidant Activity of CCG Microparticles
The antioxidant activity of MGS and the CCG microparticles is listed in Table 6. MGS is known to be a good antioxidant [6,7], and it showed better antioxidant activity than the CCG microparticles. The decreased antioxidant activity of the CCG microparticles compared with MGS may be due to their low MGS content, which was only 5-20 wt.%. As the MGS content of the CCG microparticles increased, their antioxidant activity increased slightly. This result indicates that the CCG microparticles are not suitable for application as an antioxidant substance because of their low scavenging capacity.
Vero Cell Toxicity of CCG Microparticles
The Vero cell inhibition rates and IC50 values of MGS and the CCG microparticles are presented in Table 7. MGS was toxic to Vero cells at a concentration of 25 µg/mL (99.98% of cells were inhibited). When loaded into the carrageenan/chitosan blend, the Vero cell toxicity of MGS was reduced. As the MGS content increased, the cell inhibition rate of the CCG samples increased. For example, at a test sample concentration of 100 µg/mL, the cell inhibition rates of the CCG5, CCG10, CCG15 and CCG20 samples were 17.45%, 9.83%, 29.82% ...
Conclusions
In this study, carrageenan/chitosan microparticles loading α-mangostin (extracted from the skin of Vietnamese mangosteen) were successfully prepared by an ionic gelation method. Chitosan and carrageenan mixed well together owing to the formation of a chitosan-carrageenan polyelectrolyte complex. The α-mangostin was embedded in the carrageenan/chitosan microparticles thanks to the ionic cross-linking and interaction of the components in the carrageenan/chitosan/α-mangostin microparticles. The microparticles loading α-mangostin had a more compact structure and smaller particle size than the plain carrageenan/chitosan microparticles. The carrageenan/chitosan matrix improved the release of α-mangostin in ethanol/buffer solutions. The release of α-mangostin from the microparticles in the different ethanol/buffer solutions is a complex process that depends on the pH of the buffer solution, the testing time and the α-mangostin content of the microparticles. The carrageenan/chitosan microparticles loading α-mangostin can inhibit Gram (+) strains and show adequate antioxidant activity. The microparticles are non-toxic to Vero cells, with IC50 values higher than 100 μg/mL. These results are promising for the application of carrageenan/chitosan/α-mangostin microparticles in the food and beverage fields.

Funding: No funding was received.
Study protocol: functioning curves and trajectories for children and adolescents with cerebral palsy in Brazil – PartiCipa Brazil
Background: Gross motor development curves for children with Cerebral Palsy (CP), grouped by Gross Motor Function Classification System (GMFCS) levels, help health care professionals and parents to understand children's motor function prognosis. Although these curves are widely used in Brazil to guide clinical decision-making, they were developed with Canadian children with CP. Little is known about how these patterns evolve in children and adolescents with CP in low-income countries like Brazil. PartiCipa Brazil aims: (i) to identify and draw a profile of functioning and disability of Brazilian children and adolescents with CP by classifying them, for descriptive purposes, with all five valid and reliable functional classification systems (gross motor function, manual ability, communication function, and visual and eating and drinking abilities); (ii) to create longitudinal trajectories capturing the mobility capacity of Brazilian children and adolescents with CP for each level of the GMFCS; (iii) to document longitudinal trajectories in the performance of activities and participation of Brazilian children and adolescents with CP across two functional classification systems: the GMFCS and the MACS (Manual Ability Classification System); (iv) to document longitudinal trajectories of neuromusculoskeletal and movement-related functions and exercise tolerance functions of Brazilian children and adolescents with CP for each level of the GMFCS; and (v) to explore interrelationships among all ICF framework components and the five functional classification systems in Brazilian children and adolescents with CP.
Methods: We propose a multi-center, longitudinal, prospective cohort study with 750 Brazilian children and adolescents with CP from across the country. Participants will be classified according to the five functional classification systems. Contextual factors, activity and participation, and body functions will be evaluated longitudinally and prospectively for four years. Nonlinear mixed-effects models for each of the five GMFCS and MACS levels will be created using test scores over time to create prognosis curves. To explore the interrelationships among ICF components, multiple linear regression will be performed.
Discussion: The findings from this study will describe the level and nature of activities and levels of participation of children and youth with CP in Brazil. This will support evidence-based public policies to improve care for this population from childhood to adulthood, based on their prognosis.
Keywords: Cerebral palsy, International Classification of Functioning, Disability and Health - ICF, Participation, Activity, Gross motor function

Background

Cerebral palsy (CP) refers to a group of developmental disorders of movement and posture due to a nonprogressive impairment of the immature brain [1] that can affect health across all domains of functioning described by the International Classification of Functioning, Disability and Health (ICF) [2]. Evidence from developed countries shows that one in three children with CP does not walk, one in four does not speak, one in four has epilepsy, and one in 25 has hearing impairment [3].
Using ICF concepts and language, children with CP have primary impairments in body structures and functions, such as muscle weakness and spasticity. Despite the non-progressive nature of the underlying brain damage, these impairments of the neuromusculoskeletal system, and the compensations due to altered postural patterns, may continue to progress [3,4]. The combination of these dysfunctions with contextual factors usually results in activity limitations and participation restrictions that are secondary to the neurological impairments of this population [1,2]. Regular assessments of functioning make it possible to chart progress and understand the evolution of the condition and the need to modify contextual factors, including therapeutic approaches, to achieve specific goals [5]. CP has traditionally been described in terms of clinical type, stratified into spastic unilateral (hemiplegia) or bilateral (diplegia and quadriplegia), dyskinetic or ataxic [3,6,7]. However, these descriptions do not capture what the child does from a functional point of view [8].
To address this reality, functional classifications have been developed, including the Gross Motor Function Classification System (GMFCS), Manual Ability Classification System (MACS), Communication Function Classification System (CFCS), Eating and Drinking Ability Classification System (EDACS) and Visual Function Classification System (VFCS) [8,9]. It is important to highlight that functional classifications facilitate the exchange of consistent information among members of the interdisciplinary team and between the team and the family or the child/adolescent. In addition, the classifications standardize the population with CP for research purposes [8].
In 2002, Rosenbaum, Palisano and colleagues created the gross motor development curves for children with CP, based on 5-year longitudinal assessments of 657 Canadian children from across Ontario, reported according to the five levels of the GMFCS [10,11]. These motor capacity curves help parents and healthcare professionals to understand patterns of motor development of children with CP, according to their functional level and age, as well as to predict their potential for motor acquisition and functional independence [11]. To improve clinical applicability, centile reference curves based on the 66-item Gross Motor Function Measure (GMFM-66) were constructed by Hanna et al. (2008) [12]. These curves are widely used to guide clinical decision-making in Brazil, but all these tools were constructed based on the development of children with CP, aged 1 to 13 years, served by 19 publicly-funded children's rehabilitation services in Ontario, Canada [11,13]. Subsequently, Hanna et al. (2009) followed a sample of the study participants into adolescence and young adulthood [14]. Longitudinal trajectories and reference centiles were also developed in Canada and the United States for several other outcomes, such as range of motion (Spinal Alignment and Range of Motion Measure, SAROMM), endurance (Early Activity Scale for Endurance, EASE), and strength (Functional Strength Assessment, FSA) [12] in young children with CP.
In the Netherlands, motor growth curves similar to those in Canada were created, despite differences in country, health service system and time period [15]. Trajectories for mobility, self-care [16] and participation [17] for Dutch individuals with CP across GMFCS levels were also developed. These studies have highlighted expected age intervals at which motor and functional performance levels are achieved. Nevertheless, Van Gorp et al. (2018) observed that the development of motor performance in individuals with CP continues after gross motor capacity limits have been reached in childhood [18]. All the aforementioned studies addressed children and adolescents with CP from high-income countries.
Regarding the prevalence of functional levels, research has shown that the percentage of children with CP classified as 'moderate to severe' has decreased in Australia in the past decades [19]. In contrast, children with CP in low- and middle-income countries (LMIC) were reported to have more severe physical limitations and even higher rates of comorbidities compared to developed countries [20]. No previous studies have described the evolution of activity curves and participation trajectories of children and adolescents with CP in LMICs such as Brazil, a country with diverse socioeconomic and cultural conditions that faces many challenges, such as access to public services and evidence-based rehabilitation treatments. The impact of these conditions on the development of children with disabilities is largely unknown.
A Brazilian population study is therefore needed to create activity curves and participation trajectories for children and adolescents with CP in Brazil, and to understand the relation of functional classification levels with body functions, activities and participation, across the life span. These curves would allow professionals to answer the following research questions: (1) Controlling for GMFCS levels, do Brazilian environmental factors influence the development of functioning of children and adolescents with CP? (2) What are the relationships among body functions, activities (capacity and performance) and participation across the life span in Brazilian children and adolescents with CP across functional classification levels? The specific research aims are the five objectives (i)-(v) stated in the Background.
Methods
Design, participants and ethical approval

PARTICIPA BRAZIL will be a multicenter, longitudinal, prospective cohort study, in which Brazilian children and adolescents with CP (1 to 14 years of age) will be invited to participate, primarily through the Public Health System (Sistema Único de Saúde, SUS) and philanthropic services. Nine partners from seven Brazilian public universities have already agreed to participate in this study.
These centers have hospitals or partnerships with public centers that collectively assist more than 500 children and adolescents with CP, mainly with physical therapy programs provided by trained professionals who are experienced in the assessment and management of children with disabilities. The cities are located in strategic regions of Brazil: 4 universities/centers in the Southeast, 1 in the South, 1 in the Central-West region of Brazil and 1 in the Northeast. The project will also be nationally advertised. Additional Brazilian public hospitals and public or philanthropic services will also be invited to participate. The assessments will start only after the agreement of the parents, who will be asked to sign the Informed Consent Form. For children and adolescents, an assent form will be signed if the participant has the ability to do so. Ethical approval for this multicenter study was obtained before the start of the project at the Federal University of Juiz de Fora (CAAE: 28540620.6.1001.5133).
Inclusion criteria
Children and adolescents diagnosed with CP, born after 2007, enrolled in rehabilitation services of Brazilian public university hospitals and partner services. Participants with clinical neuromotor characteristics and/or a history consistent with CP, such as spasticity or mobility impairments, will be included in the study if, in the judgment of the health professionals providing their therapies, these children 'look like' they have CP, even if no formal diagnosis has been given, as is often the case in Brazil.
Exclusion criteria
Children and adolescents with other recognized neuromotor dysfunctions, such as myelomeningocele, Down syndrome, or muscular dystrophies will be excluded from the study.
Control criteria
Children and adolescents with CP who have received botulinum toxin, selective dorsal rhizotomy, musculoskeletal or bone surgery, baclofen pump, or other technical interventions during the study period, will be included and followed, considering the time of these procedures as variables for separate analysis. All adaptive equipment and medications used by participants will be documented during the study follow-up. GMFM assessments will be done using the standard procedures, namely without the use of adaptive equipment. Note that although the use of these procedures and equipment will be documented, this study is not intended to investigate the specific effects of any of these interventions.
Sample size
Sample size calculations were performed using data from Scrutton and Rosenbaum [21]. Based on the GMFM-88 (Gross Motor Function Measure, 88-item version) and estimated score limits for a 10-year-old in each GMFCS stratum (98-100, 90-95, 60-80, 12-50 and < 10%, respectively), a sample of 150 children per GMFCS stratum would provide a power of 0.85 [11]. Based on the study by Palisano et al., a sample size of 700 children will be necessary for the estimation of percentiles by age and GMFCS level, based on calculations for the adequacy of the width of the 95% confidence interval (CI) for the 5th, 50th, and 95th percentiles [22,23]. This sample size estimate is in accordance with other studies that investigated functional trajectories in children and adolescents with CP across the world [11,15,23-25].
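The CI-width rationale can be illustrated with a quick bootstrap; the sketch below uses a hypothetical normal score distribution and n = 150 (both assumptions of ours, not the original power calculation) to show how one might check the width of the 95% CI for the 5th, 50th and 95th percentiles.

```python
# Rough illustration of the CI-width rationale (hypothetical inputs, not the
# study's actual power analysis): bootstrap the 95% CI of three percentiles
# of a simulated GMFM-66 score distribution with n = 150 children.
import numpy as np

rng = np.random.default_rng(42)
n, n_boot = 150, 2000
sample = rng.normal(loc=70.0, scale=10.0, size=n)  # assumed score distribution

for q in (5, 50, 95):
    boots = [np.percentile(rng.choice(sample, size=n, replace=True), q)
             for _ in range(n_boot)]
    lo, hi = np.percentile(boots, [2.5, 97.5])
    print(f"P{q}: estimate = {np.percentile(sample, q):5.1f}, "
          f"95% CI width = {hi - lo:4.1f} points")
```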
Instruments and procedures
All participating centers will perform data collection following the same procedures (Fig. 1). Table 1 shows the instruments that will be used to evaluate each component of the ICF, according to age and GMFCS level. Children < 6 years will be evaluated every 6 months, and children and adolescents ≥ 6 years of age will be evaluated annually. We expect to have at least one evaluation per year during the 4-year follow-up, totaling a minimum of four evaluations per participant. For the construction of the curves, it is essential to have at least three longitudinal evaluations per child [26]. The examiners, mainly physical and occupational therapists, will receive pre-study training on both the theory and the practical application of all instruments and classifications. Examiners should have agreement above 80% (intra-class correlation coefficient, ICC ≥ 0.80) against criterion tests, assessed during training and yearly throughout the study, to check their reliability. Some measures will be performed with the child or adolescent and a few with the caregiver (as shown in Fig. 1). The number of measures applied will depend on the functional classification of the participant (the more functional participants will receive more assessments, but they will also complete the tests faster). There will be different assessors for the child/adolescent and for the parents. The time spent in each assessment will depend on the motor ability of the child, but we estimate a mean evaluation time of 90 minutes. Participants will take breaks if needed. As the evaluation will occur once a year, or twice yearly in children under 6 years old, it may be necessary to split the tests into two visits (a maximum of 1 week apart) to avoid overburdening the participant.
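As a rough sketch of the examiner-reliability check described above (ICC ≥ 0.80 against criterion tests), the following uses the pingouin library with hypothetical long-format data; the column names and scores are ours, and this is not the study's analysis script.

```python
# Hedged sketch of the examiner-reliability check: compute an ICC between a
# criterion rater and a trainee and compare it to the 0.80 threshold.
import pandas as pd
import pingouin as pg

scores = pd.DataFrame({                      # hypothetical ratings
    "child":  [1, 1, 2, 2, 3, 3, 4, 4],
    "rater":  ["criterion", "trainee"] * 4,
    "gmfm66": [55.1, 54.3, 62.0, 63.2, 48.7, 49.0, 70.4, 69.8],
})

icc_table = pg.intraclass_corr(data=scores, targets="child",
                               raters="rater", ratings="gmfm66")
icc2 = icc_table.set_index("Type").loc["ICC2", "ICC"]  # two-way random effects
print(f"ICC(2,1) = {icc2:.2f} -> {'pass' if icc2 >= 0.80 else 'retrain'}")
```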
For descriptive purposes, the children will be described according to clinical type, such as spastic unilateral or bilateral, dyskinetic or ataxic, and according to their functional classification. The main caregiver will complete a questionnaire about the contextual factors of the participant, including personal factors (e.g., health status, age, gender, educational level, life habits, history of other impairments) and environmental factors (e.g., orthotic devices, wheelchairs, transfer devices, and access to health services), as listed in Table 1. The family's economic level will be assessed by means of the Brazilian Economic Classification Criteria (BECC) [27].
The participants will also have their weight and height measured using standardized instruments. Weight will be measured in kilograms on a digital scale calibrated to zero, with the child undressed, or by taking the difference between the weight of the parent with and without the child on their lap. Height will be measured in centimeters with a stadiometer, in the supine or standing position, in those children who do not have significant musculoskeletal deformities (e.g., scoliosis, kyphosis or flexion deformities of the lower limbs). In children who present deformities, height will be estimated from the length between the knee and the heel (anterior surface of the leg to the sole of the foot), measured with a stadiometer, applying the Stevenson (1995) formula: height = (2.69 × knee length) + 24.2 [28].
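Since the Stevenson estimate is a single linear formula, it translates directly into a one-line helper; the function name below is our own.

```python
# Segmental height estimate of Stevenson (1995) for children whose
# deformities prevent direct measurement: height = 2.69 * knee height + 24.2.
def estimated_height_cm(knee_height_cm: float) -> float:
    return 2.69 * knee_height_cm + 24.2

# Example: a knee height of 35 cm gives an estimated stature of about 118 cm.
print(f"{estimated_height_cm(35.0):.1f} cm")
```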
Activity and participation measures

Functional classification systems
Participants' mobility will be classified using the valid and reliable GMFCS [10]. The GMFCS uses a five-level ordinal scale to describe the level of independence in postural control and mobility of children and adolescents with CP [10,29], stratified by age bands: < 2 years, 2 to < 4 years, 4 to < 6 years, 6 to < 12 years, and 12 to 18 years of age. Level I describes the most functional children, who walk independently and go up and down stairs without assistance, whereas level V represents children with the most limited independent mobility [10,29]. The MACS, the CFCS, the EDACS and the VFCS (analogues of the GMFCS with good validity and reliability) will also be used to document the functioning of the children and adolescents across five levels, in the same way as the GMFCS [8,9,14,30,31]. All the functional classifications have five levels, where level I represents the most independent children or adolescents, and level V represents children or adolescents who require the most assistance in the respective functional domain. All of these measures are standardized, reliable, valid and complementary to one another [8,9,14,31]. The classification levels of each of these instruments in childhood are summarized in Table 2.
The mobility performance of children aged 4-18 years at home, at school and in the community will be classified using the Functional Mobility Scale (FMS) [32,33]. The FMS rates walking ability at three specific distances corresponding to three environments: 5 m (home), 50 m (school) and 500 m (community). In contrast to the GMFCS ratings, in the FMS children at level 1 use wheelchairs while children at level 6 are independent on all surfaces [32,33]. The participants will be classified by trained therapists in the first assessment and reclassified in subsequent assessments for all classification systems.
To assess children's gross motor capacity, we will use the following tools: the Gross Motor Function Measure (GMFM-66) [34], the Gross Motor Function Measure Challenge Module [35], and the 10-m fast walk test [36]. The GMFM-66 is a quantitative clinical tool that assesses gross motor activity with the purpose of measuring changes in children with CP over time [34]. The items are grouped into five dimensions: A: lying and rolling; B: sitting; C: crawling and kneeling; D: standing; E: walking, running and jumping. Items are scored on a four-point ordinal scale (specifically defined for each of the four scores for every item): 0 = does not perform; 1 = starts the activity; 2 = partially completes the activity; 3 = completes the activity as described in the GMFM-66 manual. In this study, we will compute the Rasch analysis-based GMFM-66 scores, providing an interval scale, using the new GMAE-3 application (Gross Motor Ability Estimator, 3rd version) [34].
The Challenge Module, composed of 28 items, measures more complex gross motor activities. It was created for children and adolescents with CP in GMFCS levels I and II (for GMFCS II, a minimum GMFM-66 score arbitrarily set at 70 to reflect the higher end of the level II ability spectrum) [11], 5 to 18 years of age, who are able to follow instructions for a motor skill test. The test includes 17 locomotor items and 7 object control items. The mean score of three trials is calculated for each item and the total of these means is reported. Scores range from 0 to 112 [37].
Walking capacity will be evaluated with the 10-m fast walk test (10mFWT) for children from 4 to 18 years of age [36,38]. The 10mFWT has the potential to provide valuable clinical information regarding gait abilities and outcomes in ambulatory children (GMFCS I, II and III) able to walk 10 m with or without a walking aid [36,38]. It is safe, easy and inexpensive to administer, and allows us to calculate the walking speed over the minimum distance required for functional ambulation. To assess children's performance in activities and participation, the following tools will be used: the Pediatric Evaluation of Disability Inventory Computer Adaptive Test (PEDI-CAT) [39], the Young Children's Participation & Environment Measure (YC-PEM), and the Participation & Environment Measure for Children and Youth (PEM-CY) [40,41]. The PEDI-CAT was developed to measure performance in daily activities, mobility, cognitive-social function, and responsibility in children and adolescents up to 21 years of age [39]. The application requires a computer with the instrument software installed and can be self-administered (i.e., completed by the child's parents) or conducted through a parent interview with a professional [39,42]. In the domains of daily activities, mobility, and cognitive-social function, the four-point scores are based on different levels of difficulty. The responsibility domain classifies items on a five-point scale, describing the sharing of responsibility between caregiver and child or adolescent in managing complex, multi-step life tasks. The overall score is transformed into a normative score (based on age) and a continuous score that will be used in the analyses. The PEDI-CAT has been translated and culturally adapted to Brazil [43].
The YC-PEM and the PEM-CY are parent-completed measures that look at the participation of children and youth, aged 0-5 years and 5-17 years respectively, in the home, daycare/preschool (YC-PEM) or school (PEM-CY) and community [40,41]. Both instruments capture parent/caregiver perspectives of the child's frequency of attending activities, level of involvement (i.e., engagement in the activities) and satisfaction with valued activities, and of the supports, barriers, resources and helpfulness of the environment in those 3 settings. Both instruments (YC-PEM and PEM-CY) have been translated and culturally adapted to Brazil [44,45]. In this study, we will analyze: 1) frequency of attendance (rated using an eight-point scale with response options varying from daily to never); 2) level of involvement (five-point scale with responses from minimally to very involved); and 3) change desired (yes or no). Activities in a setting are summed to provide a frequency score per setting. Environment scores (percentages) will be used in the description of contextual factors and their relationships with other ICF components.
Body functions measures
Neuromusculoskeletal and movement-related functions will be evaluated with the Functional Strength Assessment (FSA) [46]. The FSA provides an estimate of strength for major muscle groups including the neck and trunk flexors and extensors, hip extensors, knee extensors and shoulder flexors bilaterally [46]. Items are scored on a 5-point ordinal scale from 1 (only a flicker of contraction, or just initiates movement against gravity) to 5 (full available range against gravity and strong resistance) [46]. Exercise tolerance will be measured by the Early Activity Scale for Endurance (EASE) [47], the Six Minute Walk Test (6MWT) [48,49] and the Shuttle Run Test (SRT) [50,51]. The EASE is a parent-completed questionnaire of the child's perceived endurance for activity in young children with cerebral palsy, up to 5 years of age, covering frequency, intensity, duration, and type of physical activity [47]. Items are scored on a 5-point ordinal scale from 1 = Never to 5 = Always, with higher scores indicating greater exercise tolerance [47]. The 6MWT is a submaximal test that assesses the tolerance for walking a prolonged distance, with or without a walking aid, in children and adolescents from 4 to 18 years of age in GMFCS levels I, II and III. The greater the distance covered in six minutes, the better the exercise tolerance [48,49]. In the SRT, participants will walk or run between 2 markers delineating a 10 m course at a set incremental speed determined by a signal (every minute) [50,51]. The starting speeds are 5 and 2 km/h for participants classified at GMFCS I and II, respectively, and the speeds are increased by 0.25 km/h every minute [50,51]. The last completed level (accurate to a half shuttle) will be recorded and used for analysis. This test has been shown to be reliable, valid, and sensitive to change in children with CP [50,51].
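The SRT speed schedule follows directly from the protocol figures above (starting speeds of 5 and 2 km/h, increments of 0.25 km/h per minute); a minimal sketch, with a function name of our choosing:

```python
# Speed schedule of the Shuttle Run Test as described above: the starting
# speed depends on GMFCS level and rises by 0.25 km/h each minute.
def srt_speed_kmh(gmfcs_level: str, minute: int) -> float:
    start = {"I": 5.0, "II": 2.0}[gmfcs_level]
    return start + 0.25 * (minute - 1)  # minute 1 runs at the starting speed

for m in (1, 5, 10):
    print(f"GMFCS II, minute {m}: {srt_speed_kmh('II', m):.2f} km/h")
```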
Statistical analysis
Initially, the data will be explored descriptively, and the assumptions of normality will be tested using the Kolmogorov-Smirnov test. Moreover, Q-Q plots will be used to verify which distribution best fits the data.
To identify and draw a profile of functioning and disability, categorical variables will be presented through frequencies (and percentages) and numerical variables through means and standard deviations, using all five functional classification systems.
Longitudinal trajectories will be created describing the average change in gross motor function, activity and participation, between different ages, using nonlinear mixed-effects models fit for each of the five GMFCS and MACS levels. For the mobility capacity trajectory curves, the GMFCS will be used; for activities and participation trajectory curves, we will use the GMFCS and MACS; and for neuromusculoskeletal, movement-related and exercise tolerance functions, the GMFCS will be used.
Random effects (e.g., age) will be fitted for each parameter to estimate the variability in the true change parameters among children.
Each model will consider two parameters: rate and limit of development (average maximal performance level for a subgroup). To enhance interpretation, the rate parameters will be used to calculate the average age by which individuals will reach 90% of the limit (age-90). The 95% CI of the limit and age-90 will be calculated and used to detect differences between GMFCS and MACS levels.
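The protocol does not state the functional form of the curves; a common choice in the gross-motor-curve literature is a monotone "stable limit" model y = limit × (1 − e^(−rate·age)), under which age-90 has the closed form ln(10)/rate. The sketch below fits such a curve to synthetic data with a plain fixed-effects fit for brevity; the actual analysis calls for nonlinear mixed-effects models with random effects per child.

```python
# Simplified (fixed-effects) illustration of the rate/limit parameterisation
# and of the age-90 computation; data and parameter values are synthetic.
import numpy as np
from scipy.optimize import curve_fit

def stable_limit(age, limit, rate):
    return limit * (1.0 - np.exp(-rate * age))

rng = np.random.default_rng(0)
ages = rng.uniform(1, 14, size=200)
scores = stable_limit(ages, 65.0, 0.45) + rng.normal(0, 3, size=200)

(limit, rate), _ = curve_fit(stable_limit, ages, scores, p0=[60.0, 0.5])
age90 = np.log(10) / rate        # solve 1 - exp(-rate * a) = 0.9 for a
print(f"limit = {limit:.1f} points, age-90 = {age90:.1f} years")
```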
We will adopt a multiple regression analysis to explore the interrelationship between each of the functional classifications (GMFCS, MACS, CFCS, EDACS and VFCS), as a response variable, and the predictor variables (personal and environmental factors, capacity, performance and functional outcomes). Stepwise procedures, from linear to cubic fits, will be used to generate equations predicting personal and environmental factors, capacity, performance, and functional outcomes. To avoid collinearity, Spearman's test will be applied to correlate all the predictor variables. The correlation matrix will be analysed, and variables that exhibit a high correlation will be considered collinear.
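The collinearity screen translates directly into a few lines of pandas; the predictor names and the 0.8 cut-off below are assumptions of ours (the protocol does not fix a threshold).

```python
# Sketch of the Spearman-based collinearity screen among candidate predictors.
import pandas as pd

predictors = pd.DataFrame({               # hypothetical predictor columns
    "gmfm66":   [45, 60, 72, 30, 88, 55],
    "fsa":      [20, 28, 33, 15, 40, 26],
    "pedi_mob": [35, 52, 64, 25, 80, 48],
})

rho = predictors.corr(method="spearman")
threshold = 0.8                           # assumed cut-off
for i, a in enumerate(rho.columns):
    for b in rho.columns[i + 1:]:
        if abs(rho.loc[a, b]) > threshold:
            print(f"collinear: {a} ~ {b} (rho = {rho.loc[a, b]:.2f})")
```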
All data collected in the different centers will be entered into a password-secured, identified Excel spreadsheet. Statistical analysis will be performed using the Statistical Package for the Social Sciences (SPSS©, version 25).
Application of study results
It is expected that, through this study, Brazilian therapists will be able to apply longitudinal trajectories validated for Brazilian children, serving as a guide for clinical decision-making. In addition, the findings from this study are expected to help us describe and understand the activities and participation, neuromusculoskeletal and movement-related functions and exercise tolerance functions of children and youth with CP in Brazil, across the spectrum of functional levels and the different geographical regions of Brazil. This will make it possible to propose evidence-based public policies to improve services to this population at different stages of life, from childhood to adulthood, according to their motor prognosis and phase of motor evolution.
Being able to report levels of activities and participation will support the arguments for higher and more appropriate investments in treatment and assistive technologies during important phases of these children's lives. This should help to promote their best capacity and quality of life, improving their participation in society and that of their families. Finally, it is expected that this study will inform us about the relationships among the different domains of the ICF and its contextual factors in Brazilian children and adolescents with CP. These findings will allow therapists to better understand important factors that influence their clinical decisions, and potentially expand the range of services and advice they have to offer.
Potential risks and challenges
We may experience some difficulties in the follow-up of the children and youth across the years. To mitigate this problem, we will explain to the caregivers the importance of the study for their child and for understanding the care needed by children with CP in Brazil. We will also build in a number of tracking strategies for the children and families, including sending the children birthday cards, sending families annual study newsletters, and asking each family at the start of the study for a contact (e.g., grandparents) who could help us find families that move house during the study. Another strategy to keep families in the study is to provide a report after each evaluation, with broad treatment guidelines and ideas for adaptive equipment and technologies that might be useful.
Dissemination of results
We plan to participate in conferences, to present the project and the results in plain language to all family participants (caregivers) and children and youth, and to CP organizations and services. We will disseminate the results of the study in papers in high impact peer-reviewed journals. All knowledge translation activities will be done in both Brazilian Portuguese and English. This study will permit the development of strategies of knowledge translation to Brazilian citizens, to illustrate that children and youth with CP have different prognoses according to their functional level, and that they can participate and be integrated in daily life activities and leisure during their childhood regardless of functional level.
Future research
The results of this study may help professionals to advocate for future research regarding the access of Brazilian children and adolescents with CP to appropriate equipment and orthoses; to investigate the effects of interventions focused on enriching activities and participation; and to inform public policies towards better access to health services, considering the variability of contextual factors across the country. We also believe that, building on this study, research investigating the knowledge and implementation of the 'F-words for Childhood Development' in low-income countries like Brazil can make a substantial difference in the profile of these families [55,56]. Our PartiCipa Brazil Team advocates for such studies.
|
2020-08-20T14:16:30.587Z
|
2020-08-20T00:00:00.000
|
{
"year": 2020,
"sha1": "60e7a319e0ced8f698b3284674f84bbb845b1fe4",
"oa_license": "CCBY",
"oa_url": "https://bmcpediatr.biomedcentral.com/track/pdf/10.1186/s12887-020-02279-3",
"oa_status": "GOLD",
"pdf_src": "PubMedCentral",
"pdf_hash": "60e7a319e0ced8f698b3284674f84bbb845b1fe4",
"s2fieldsofstudy": [
"Psychology",
"Medicine"
],
"extfieldsofstudy": [
"Medicine"
]
}
|
216422332
|
pes2o/s2orc
|
v3-fos-license
|
Multidimensional Clustering of EU Regions: A Contribution to Orient Public Policies in Reducing Regional Disparities
This paper applies multidimensional clustering to EU-28 regions with regard to their specialisation strategies and socioeconomic characteristics. It builds on an original dataset. Several academic studies discuss the relevant issues to be addressed by innovation and regional development policies, but so far no systematic analysis has linked the different aspects of EU regions' research and innovation strategies (RIS3) and their socio-economic characteristics. This paper intends to fill this gap, with the aim of providing clues for more effective regional and innovation policies. In the dataset analysed in this paper, the socioeconomic and demographic classification associates each region with one categorical variable (with 19 categories), while the clustering of RIS3 priorities was performed separately on "descriptions" (21 Boolean categories) and "codes" (11 Boolean categories) of regions' RIS3. The cluster analysis, implemented on the results of the correspondence analysis on the three sets of categories, returns 9 groups of regions that are similar in terms of priorities and socioeconomic characteristics. Each group has different characteristics that revolve mainly around the concepts of selectivity (a group's ability to represent a category) and homogeneity (similarity within the group with respect to one category) with respect to the different classifications on which the analysis is based. The policy implications identified in this paper are discussed as a contribution to the current debate on post-2020 European Cohesion Policy, which aims at orienting public policies toward the reduction of regional disparities and at enhancing complementarities and synergies within macro-regions.
Introduction
The current debate on post-2020 European Cohesion Policy confirms the need for public policies targeting the reduction of regional disparities and the enhancement of complementarities and synergies within macro-regions. Such interventions, supported by the European Structural & Investment Funds, are key instruments for the implementation of EU policies and programmes, aimed at fostering the cohesion and competitiveness across larger EU spaces, encompassing neighbouring member and non-member States (European Commission 2016). 1 To this end, regions are encouraged to share their best practices, to learn from each other and to exploit the opportunities for joint actions, through dedicated tools created by the European Commission. A specific dimension of such leverages is the set of strategic priorities that regions have outlined in their smart specialisation on research and innovation. The concept stems from academic work on the key drivers for bottom-up policies aiming at structural changes that are needed to improve job opportunities and welfare of territories (Foray et al. 2009; Barca 2009; Foray 2018). In the programming period 2014-2020, the European Commission has adopted the Research and Innovation Smart Specialisation Strategy (RIS3) as an ex-ante conditionality for access of regions to European Regional Development Funds (ERDFs). Such policies are built on specific guidelines and on a very detailed process of implementation (European Commission 2012, 2017; Foray et al. 2012; McCann and Ortega-Argilés 2015). They identify "strategic areas for intervention, based both on the analysis of the strengths and potential of the regional economies and on a process of entrepreneurial discovery with wide stakeholder involvement. It embraces a broad view of innovation that goes beyond research-oriented and technology-based activities, and requires a sound intervention strategy supported by effective monitoring mechanisms" (European Commission 2017, p. 11).
Although over 65 billion EUR of ERDFs have been allocated to such policies, their impact has not been scrutinised yet and no effective monitoring tool has been implemented. 2 In addition, no systematic information on the list of projects implemented under the various regions' RIS3 priorities is available. 3 For regions aiming at learning from other regions' practices on RIS3, information on regional strategies and goals is shared through online platforms, such as the S3 platform run by EC-JRC. Other loci of interaction among regions are those supported by the EU Interreg programmes, 4 the Interact Initiatives, 5 and the macro-regions strategies. 6 National programmes, too, provide fora for cross-region, cross-country comparison of structural features and policy measures in diverse domains. 7 Several academic studies provide analytical frameworks to support public decision making on subjects such as income disparities (Iammarino et al. 2018) or quality of institutions (Charron et al. 2014). However, no systematic analysis has jointly linked the different aspects of EU regions' specialisation strategies and their socio-economic characteristics. This paper aims to fill this gap by applying a multidimensional clustering of EU-28 regions in order to provide clues for more effective regional policies. The clustering proposed in the paper builds on an original dataset, where the EU-28 regions are classified according to their socioeconomic features (Pagliacci et al. 2019) and to the strategic features of their research and innovation smart specialisation strategy (RIS3) (Pavone et al. 2019). In the first classification, each region is associated with one categorical variable (with 19 modalities) based on a multidimensional analysis (PCA and CA) of a large dataset, and it provides a perspective focused on regional heterogeneity across EU regions. In the second classification, two clusterings of "descriptions" and "codes" of RIS3 priorities were considered (made of 21 and 11 Boolean categories, respectively). This comparative perspective is made possible by a non-supervised Boolean textual classification of priorities using information on RIS3 from the Eye@RIS3 platform (European Commission Joint Research Centre, JRC).
The paper is structured as follows. Section 2 describes the methods used to obtain a multidimensional classification and the dataset built on the classification of socioeconomic features of EU-28 regions and classification of priorities pointed out in their smart specialisation strategies. Section 3 returns the main results. Section 4 builds on the results of the analysis and discusses their implications for policy and possible future strands of this research.
Data and Methods
The data analysed in this paper result from the merging of two main datasets. 8 First, we use the classification of regions according to their socioeconomic features from Pagliacci et al. (2019). There, a socio-economic categorical variable is defined by classifying the 208 territorial entities of the EU-28 into 19 categories. Secondly, with regard to smart specialisation strategies, we use the classification defined by Pavone et al. (2019), in which the RIS3 priorities of 216 EU-28 territorial entities are summarised in two multi-class categorical variables: Description (21 categories) and Codes (11 categories). These two categorisations derive from an automatic classification of the priorities specified by each region in terms of the free text of descriptions and of codes belonging to three domains: scientific, economic, and policy objectives. 9 In the dataset, each record refers to a priority defined by the region with a free-text description and with a series of codes in the three domains. Each region could specify one or more priorities. The automatic analysis of the two corpora (descriptions and codes) has allowed the classification of priorities into 21 topics for descriptions and 11 groups for codes. The results of the three classifications can be cross-referenced by using the online tool created ad hoc for such cross-tabulation. Developed within the AlpGov project to map R&I in the Alpine regions, the tool is implemented to query the classifications of all the EU regions. Through an effective visualisation of maps and data, 10 it allows policy makers, researchers and the public to query specific combinations of interest, focusing on the most detailed identification of groups of regions along the three categorisations: of economic characteristics, and of RIS3 priorities' descriptions and codes.
Merging the two datasets, in this paper we study the multidimensional classification of 191 territorial entities according to the three above mentioned categorical variables.
The state of the art in clustering is covered by a huge literature (Jain 2010), developed in a variety of scientific fields with different languages and focusing on the most diverse problems: clustering heterogeneous data, the definition of parameters and initialisations (such as the number of iterations in K-means, e.g., MacQueen 1967) and of thresholds in hierarchical clustering (Jain and Dubes 1988), as well as the problem of defining the optimal number of groups. Research is increasingly focusing on combining multiple clusterings of the same dataset to produce a single, better clustering (Boulis and Ostendorf 2004).
Without going into the merits of what might be the best method of classification, we put forward a grouping of regions according to their similarity in terms of their socio-economic characteristics and their RIS3 priorities. This enables comparing policy strategies in the EU by implementing a factor analysis and a cluster analysis, applied to the matrix Regions × Categorical variables. Given that our case study comprises only one univocal categorical variable (19 socio-economic and demographic categories of regions) and two multi-class categorical variables (Codes and Descriptions of regions' RIS3 priorities, with 11 and 21 categories respectively), we directly apply a Correspondence Analysis (Benzecri 1992; Greenacre 2007) to the Boolean matrix Regions × Categories (191 × 51), in which the row totals depend on the number of categories in which each region has been classified. Usually, a matrix Units × Categorical variables (univocal classification) is studied through a multiple correspondence analysis that transforms the matrix Units × Variables (m × s) into a Boolean matrix Units × Categories (m × n). This latter matrix is considered as a particular frequency table whose row totals equal the number of categorical variables considered in the analysis, while the column totals equal the frequency of each category in the m units considered (Bolasco 1999). Then a correspondence analysis is applied, after transforming the Boolean data into row and column profiles, looking for their reproduction in factorial subspaces according to the criterion of the best orthogonal projections. In the present analysis, given a multiple categorisation in two out of three dimensions, we adopt a Correspondence Analysis on the Boolean matrix. The factors highlight the configuration of the profiles in a graphical context. The interpretation of each factor through the analysis of the nodes' polarisation sheds light on the association structure among regions' profiles. 11 Then a hierarchical agglomerative clustering based on Ward's aggregation method, with Euclidean distance, is applied to the results of the Correspondence Analysis on the dataset of regions.
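As an illustration of this pipeline (not the paper's code or data; the Boolean matrix below is a random placeholder of the stated 191 × 51 size), the sketch runs a correspondence analysis via the SVD of the standardised residuals and then Ward clustering on the leading row coordinates.

```python
# Sketch: correspondence analysis on a Boolean regions-by-categories matrix,
# followed by Ward hierarchical clustering on the first four row factors.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)
X = (rng.random((191, 51)) < 0.15).astype(float)    # placeholder data
X[X.sum(axis=1) == 0, 0] = 1.0                      # guard against empty rows

P = X / X.sum()                                     # correspondence matrix
r, c = P.sum(axis=1), P.sum(axis=0)                 # row / column masses
S = (P - np.outer(r, c)) / np.sqrt(np.outer(r, c))  # standardised residuals
U, sv, _ = np.linalg.svd(S, full_matrices=False)
row_coords = (U * sv) / np.sqrt(r)[:, None]         # principal row coordinates

Z = linkage(row_coords[:, :4], method="ward")       # first four factors only
labels = fcluster(Z, t=9, criterion="maxclust")     # cut into nine clusters
print(np.bincount(labels)[1:])                      # cluster sizes
```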
Results
The correspondence analysis is applied to the Boolean matrix Regions × Categories. In this matrix, each region is classified according to a socio-economic class and to the set of categories of codes and categories of descriptions. The results of this analysis are presented in Figs. 1 and 2, with regard to the distribution on the f1f2 plane of the 51 categories and of the 191 regions, respectively. "Appendix 1" lists the coordinates of the categories on the first four factors: these values make it possible to interpret the polarisations in each factor. Building on this information, by analysing Fig. 1, we observe that the first factor polarises information on the specialisation of the regional economy, from services (left) to manufacturing (right), while the second factor polarises information on income, from low income (bottom) to high income (top). Figure 2 shows the distribution of the regions relative to the differences highlighted in Fig. 1: therefore, from left to right, regions are increasingly oriented towards manufacturing. In the clustering process applied to such results, each factor represents only a part of the overall set of information, and different results can be obtained according to the number of factors considered. The selection of the most appropriate number of factors can be derived by observing the boxplot of the coordinates of regions on each factor. 12 Figure 3 presents the regions' coordinates on the ten factors; they show different projections of the cloud of points and highlight outliers.
In particular, the 5th factor singles out only the difference between one region (in this example, the Brussels region, BE01) and all the others. The same holds true for the 10th factor (in this case, the Luxembourg region, LU00). When five factors are considered, a single cluster results with only this outlier and, by increasing the number of factors under analysis, other outliers emerge as single clusters. Therefore, in order to avoid the influence of these outlier regions within the clustering process, without excluding them from the analysis, we carry out a cluster analysis considering, for the aggregation criteria, only the coordinates related to the first four factors. By analysing the resulting dendrogram 13 (Fig. 4), nine groups of regions were selected. According to the Calinski and Harabasz index, the optimal number of clusters is five, but in order to single out significant aggregations of regions in terms of dimensions that are relevant for our analysis, we adopted a greater number of clusters. The choice of 5 clusters, although optimal from a statistical point of view, leads to an excessively broad aggregation that is not relevant for the economic analysis. For example, with the 5-cluster classification we obtain a first cluster that represents 46% of the information and groups 45% of the regions: with regard to its characteristic features, this cluster has the same RIS3 priorities (Manufacturing, Agro-food and Sustainable Energy) associated with very heterogeneous socio-economic conditions. Therefore, the choice of a greater number of clusters aims at obtaining groups with more homogeneous socio-economic characteristics for the various priorities. We have adopted a classification into nine clusters, which will be detailed below and summarised in the table embedded in Fig. 7. Figures 5 and 6 show the distribution of regions and groups on the f1f2 plane and the f3f4 plane, respectively.
For each of the nine clusters, Table 1 lists the characteristic categories, which are defined as those with a test-value greater than 2.1 14 (they are ranked in decreasing order of their test-value, column 3). The weight of those categories, i.e. the number of times the category occurs in the dataset, is shown in absolute and relative terms, in columns 4 and 5 respectively. The ratio of each category in the cluster to all categories in the cluster (column 6) highlights the extent to which the category is characteristic.
We observe that not all the codes are characteristic categories associated with the nine clusters: by selecting categories according to their test-value, we focus only on those whose value is significantly above the average occurrence among the regions in the cluster.
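The paper does not spell out the test-value formula; a common definition in this (French data-analysis) tradition is the standardised hypergeometric deviation of a category's count in a cluster from its expected count, which the sketch below implements with hypothetical counts.

```python
# Hedged sketch of the test-value used to flag characteristic categories
# (threshold 2.1, as in the text); the exact formula is our assumption.
import math

def test_value(x, n_cluster, n_category, n_total):
    """Standardised deviation of the observed category count in a cluster."""
    expected = n_cluster * n_category / n_total
    p = n_category / n_total
    var = n_cluster * p * (1 - p) * (n_total - n_cluster) / (n_total - 1)
    return (x - expected) / math.sqrt(var)

# Example: 24 of a category's 31 occurrences fall inside a 31-region cluster
# out of 191 regions (hypothetical counts).
t = test_value(x=24, n_cluster=31, n_category=31, n_total=191)
print(f"test-value = {t:.1f} -> {'characteristic' if t > 2.1 else 'not'}")
```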
In general, with regard to the three sets of categories under analysis, Table 1 shows that, in seven out of nine cases, the clusters are characterised by a mix of socio-economic categories and classes of priorities. In the case of cluster #3, only socio-economic aspects appear as characteristic categories (it being the most barycentric cluster), while in cluster #7 there is only one priority as a characteristic category: this happens because none of the other categories of the regions grouped in this cluster are, on average, significantly higher than the average of their occurrence in the whole dataset. The nine clusters are now described with regard to the selectivity/homogeneity of their characteristic categories. These two elements are of fundamental importance for understanding and interpreting each group. Selectivity represents the group's ability to represent a category: it indicates the percentage of the category's occurrences in the cluster compared to the entire dataset. Homogeneity, on the other hand, represents the similarity within the group with respect to one category: it indicates the percentage of regions with the same category in the cluster.
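These two definitions translate directly into code; the sketch below (toy data and names of our choosing) computes both quantities from a Boolean regions-by-categories matrix and a vector of cluster labels.

```python
# Selectivity: share of a category's occurrences that fall inside a cluster.
# Homogeneity: share of the cluster's regions that carry the category.
import numpy as np

def selectivity(X, labels, cluster, category):
    in_cluster = labels == cluster
    return X[in_cluster, category].sum() / max(X[:, category].sum(), 1)

def homogeneity(X, labels, cluster, category):
    return X[labels == cluster, category].mean()

# Tiny example: 6 regions, one category, two clusters of 3 regions each.
X = np.array([[1], [1], [0], [0], [0], [1]], dtype=float)
labels = np.array([1, 1, 1, 2, 2, 2])
print(selectivity(X, labels, 1, 0), homogeneity(X, labels, 1, 0))
```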
Cluster #1, encompassing 31 regions, is characterised by the socio-economic class High-income; low-population density; tourism (with 85.71% of occurrences in the cluster, associated with 38.71% of regions) and the description priority Sustainable Energy (77.42% of regions). The first characteristic category represents an element of selectivity of the category in the cluster, while the second one represents an element of homogeneity within the group.
Cluster #2 comprises 31 regions and is characterised by two distinct socio-economic classes (both characterised by very low income), and descriptions of priorities associated with Manufacturing (74.2% of regions), Agrofood (77.4% of regions) and Fashion (present at 55.6% in the cluster). The socio-economic classes represent the selectivity features, while Manufacturing and Agrofood represent the homogeneity character of this group.
Cluster #3 encompasses 25 regions, and the only distinctive elements of this group are its socioeconomic conditions: Medium-income; employment & population imbalances; manufacturing: textile, basic metal, transport; very poorly educated (present at 50% in the cluster and referring to 24% of regions) and Urban regions; high-income; poorer employment conditions; touristic (present at 55.6% in the cluster and referring to 20% of regions): both characters show critical socioeconomic conditions.

Cluster #4 (with 14 regions) is characterised by regions with a low and very low income (respectively 83.3% and 61.5% of occurrences in the cluster, referring respectively to 35.7% and 57.1% of regions). The priorities' descriptions refer to Tourism (100% of regions), Creative industry (92.9% of regions) and Agrofood (85.79% of regions). Also in this case, the socio-economic conditions represent the selectivity features, while the priorities' descriptions are the homogeneity character within the group.
Cluster #5 (with 14 regions) is characterised by the socio-economic class High-income; sparsely populated; public sector; highly educated (85.7% of regions) and priorities' descriptions referring to Social innovation & education (78.6% of regions), Growth & Welfare (64.3% of regions) and Bio economy (71.4% of regions). In this case all the characteristic categories represent the homogeneity character linking the regions in this cluster.
Cluster #6 (with just 5 regions) differs from cluster #5 because of its socio-economic features, characterised by Very-high income; large urban regions; high-employment; highly educated (with 60% of occurrences in the cluster, associated with three regions).
Cluster #7 encompasses 18 regions with just one characteristic category, i.e. the marine and maritime priority (55.6% of the regions); the other categories associated with regions in the cluster are not significantly higher than the average of the whole dataset.
Cluster #8 comprises 28 regions and is characterised by the socio-economic class High-income; high-employment; low-manufacturing; services & public sector (with 70.83% of occurrences in the cluster, referring to 60.7% of regions) and by the priority descriptions Optics (with 100% of occurrences in the cluster, referring to 17.9% of regions), Transport & Logistics (60.7% of regions) and Energy Production (46.4% of regions). Optics represents a selectivity element, while the most homogeneous elements are the socioeconomic class and the Transport & Logistics description.
Cluster #9 is composed of 25 regions and is characterised by two different socio-economic classes: Very-high income; manufacturing; population imbalances (with 85.71% of occurrences in the cluster, referring to 48% of regions) and Low-income; high-density; high unemployment; agriculture; food & drinks; very poorly educated (62.5% of occurrences in the cluster, referring to 20% of regions). What unites regions with such different socioeconomic conditions is the set of characteristic categories of description: Healthy Food (present at 76.5% in the cluster and referring to 52% of regions); ICT & Tourism (present at 51.8% in the cluster and referring to 56% of regions); Life Science (68% of regions); Aeronautics, Aerospace & Automotive industry (36% of regions). Cluster #9 has as selectivity elements both socio-economic classes and the Healthy Food priority, while there are no very high values of homogeneity (Life Science, referring to 68% of regions, is the highest value). Figure 7 maps the nine clusters, with the table in the right panel summarising the homogeneity and selectivity elements characterising the nine clusters under analysis. It is clear from the map that the different clusters do not just capture geographical proximity, but rather similarity in status (socio-economic and demographic elements) and areas of specialisation.
Discussion and Conclusions
In this paper, we aim at interpreting the overall framework of interconnected structural socioeconomic and demographic features and policy programmes on smart specialisation strategy in the EU. By identifying clusters of EU regions, we provide policy makers with a more systematic and informed tool they can use to learn from other regions, when they focus on the projects implemented within the various priorities.
Clustering of multidimensional categorisations is a multifaceted issue that must be addressed with the awareness that the various methods of clustering are also affected by the data under analysis, such as: the overall number of observations, the number and type of variables (categorical, non-categorical and mixed variables, multiple vs. single categorisations), the distribution of observations along the various dimensions under analysis, and missing data. In the analysis presented in this paper, we merge two datasets on EU regions. They summarise information on two interrelated sets of issues: respectively, the structural features of regions and the RIS3 priorities defined by their policy programmes. Each dataset is built by using clustering techniques applied to different types of variables: numerical, for the data on the 19 socioeconomic and demographic features considered by Pagliacci et al. (2019), and texts, for the RIS3 priorities categorised in the automatic text analysis elaborated by Pavone et al. (2019). In each step of clustering, transparent (i.e., accountable) decisions have been taken: from the general one of defining the number of clusters, to the selection of the principal components, the identification of the socioeconomic categories, and the number of factors to be used in clustering the groups of co-occurrences in the multidimensional space of priorities' descriptions and priorities' codes. While the process of progressive reduction of multiple categories produces some loss of information, it makes it possible to single out common or singular features that would otherwise not be observable, and to use them for policy analysis. The value added by the multidimensional analysis of both socioeconomic dimensions and smart specialisation priorities lies precisely there.
The results provided by the cluster analysis applied to the correspondence analysis offer a complementary perspective on the comparative analysis of EU regions. In the grouping of regions obtained, it is possible to highlight the elements of homogeneity and the elements of selectivity within each of the nine groups: the former are the characteristics common to most of the regions of a group, while the latter are those occurring mainly within a group.
Policy implications emerging from the analysis presented in this paper may be considered at different levels. In particular, macro-regions that aim at designing more focused strategies may leverage complementarities and synergies across the regions each of them encompasses: these clearly emerge from the homogeneous features and selectivity characters of the priorities identified in the cluster analysis.
|
2020-04-09T09:26:58.207Z
|
2020-04-06T00:00:00.000
|
{
"year": 2020,
"sha1": "a555c33e82991cac5cf5a4d086da01f188b9307c",
"oa_license": "CCBY",
"oa_url": "https://iris.unimore.it/bitstream/11380/1200076/4/Pavone%20et%20al.%20SSIR%202020_preprint%20version.pdf",
"oa_status": "GREEN",
"pdf_src": "SpringerNature",
"pdf_hash": "1d7236ff6fd30c90d7aea7a1511d2056cd06932c",
"s2fieldsofstudy": [
"Economics",
"Political Science",
"Geography"
],
"extfieldsofstudy": [
"Geography"
]
}
|
244773514
|
pes2o/s2orc
|
v3-fos-license
|
Luzin's problem on Fourier convergence and homeomorphisms
We show that for every continuous function there exists an absolutely continuous homeomorphism of the circle such that the Fourier series of the composition converges uniformly. This resolves a problem set by N. N. Luzin.
Introduction
Is it possible to improve the convergence properties of the Fourier series by an appropriate change of variable (homeomorphism of the circle)? The history of the problem goes back to a result of Julius Pál [13] (a Hungarian mathematician, who worked for many years in Copenhagen), improved by Harald Bohr [3].
Theorem 1 (Pál-Bohr). Given a real function f ∈ C(T) one can find a homeomorphism φ : T → T such that the Fourier series of the superposition f • φ converges uniformly on T.
For the convenience of the reader we include a proof of the Pál-Bohr theorem in §2, essentially following the original idea, a surprising use of the Riemann conformal mapping theorem.
The Pál-Bohr theorem inspired a number of problems which have been studied actively. In particular, real proofs providing a visible construction of φ were given by Saakyan [14] and by Kahane and Katznelson [8]. Kahane and Katznelson also proved the complex version of the theorem, and further showed that for any compact family of functions f there exists a single homeomorphism φ carrying the entire family into the space of functions with uniformly converging Fourier series. On the other hand, it is possible to construct a real function f ∈ C(T) such that no homeomorphism φ can bring it to the Wiener algebra (answering a problem posed by N. Luzin), see [11].
In all known proofs of the Pál-Bohr theorem the homeomorphism φ in general is singular. This prompted Luzin to ask: is it possible, for an arbitrary f ∈ C(T), to find an absolutely continuous homeomorphism φ such that f • φ has a uniformly converging Fourier series?* This problem remained open so far. In this paper we resolve this conjecture. (GK is supported by the Israel Science Foundation and by the Jesselson Foundation.)
Theorem 2. Given a real function f ∈ C(T) one can find an absolutely continuous homeomorphism φ : T → T such that the Fourier series of the superposition f • φ converges uniformly on T.
Further, for any p < ∞ it is possible to have in addition that φ ′ ∈ L p .
We remark that Kahane and Katznelson showed that this cannot be done if we require in addition that φ ∈ C^1 [8, example 5]. Thus an interesting gap remains between our results and those of [8]: is it possible to do the same with a Lipschitz homeomorphism?
We should also mention that certain kinds of improvements in the behaviour of the Fourier transform are impossible to achieve using absolutely continuous homeomorphisms. For example, there exists an f ∈ C(T) such that the Fourier coefficients of f • φ do not belong to ℓ^p for any 1 ≤ p < 2 and any absolutely continuous homeomorphism φ [12]. This result also implies that it would be difficult to answer Luzin's conjecture using a version of the original construction of Pál and Bohr (e.g. replacing conformal maps with quasi-conformal ones), as that construction gives an H^{1/2} function (see below, §2), and any g ∈ H^{1/2} has Fourier coefficients in ℓ^p for all p > 1.
The paper is organised as follows. In §2 we prove the Pál-Bohr theorem. In §3 we sketch an easier result: that for any continuous function f there exists a Hölder homeomorphism ϕ such that f • ϕ has a uniformly bounded Fourier series. This sketch covers about two thirds of the ideas needed for the full result and we believe it will be beneficial to the reader. The remainder of the paper contains the proof of theorem 2.
The Pál-Bohr theorem
Recall that we wish to show that for every real-valued continuous function f on T there exists a homeomorphism ϕ such that f • ϕ has a uniformly converging Fourier series. Clearly we may assume f > 0, by adding a constant if necessary, so let us do so. Let Ω be the domain delimited by f in polar coordinates, namely Ω = {re^{iθ} : r < f(θ)}. Let u be a conformal map of the disk D onto Ω taking 0 to 0. By Caratheodory's theorem [6, §I.3], u extends to a one-to-one map of ∂D onto ∂Ω and hence g(t) := |u(e^{it})| is a change of variables of f (in other words, g = f • ϕ).

* Our attempts to discover the history of the problem were only partially successful. It is mentioned in Bary's book [2, chapter 4, §12], attributed to Luzin, who passed away in 1950, before Bary's book was published (as far as we know, Luzin did not conjecture one direction or the other for the expected answer). Another piece of information is from the MathSciNet entry for [11], which states that Luzin's other problem is from the twenties. It is reasonable to assume that both problems were asked together, but we have no evidence of that. We thank Carruth McGehee for help with our historical research.
We claim that g has a uniformly converging Fourier expansion. To see this write u(z) = ∑_{k≥1} c_k z^k and compute ∫_D |u′(z)|² dz = π ∑_{k≥1} k |c_k|².
Since ∫_D |u′(z)|² dz is the area of Ω and is finite, we get that g ∈ H^{1/2}, the Sobolev space of functions whose 1/2-derivative is in L². The theorem is finished by the following three observations. First, that H^{1/2} ∩ C is a Banach algebra (lemma 2.1 below). Hence |g| is also in H^{1/2} (lemma 2.3). A continuous H^{1/2} function has a uniformly converging Fourier expansion (lemma 2.4).
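For completeness, here is a worked version of the orthogonality computation behind this identity; the expansion is ours (a standard calculation), not quoted from the original papers.

```latex
% With u(z) = \sum_{k\ge 1} c_k z^k (recall u(0)=0), term-by-term:
\begin{align*}
\operatorname{Area}(\Omega)
  &= \int_{\mathbb{D}} |u'(z)|^2 \, dA(z)
   = \int_0^1 \int_0^{2\pi}
     \Bigl|\sum_{k\ge 1} k\, c_k\, r^{k-1} e^{i(k-1)\theta}\Bigr|^2
     r \, d\theta \, dr \\
  &= 2\pi \sum_{k\ge 1} k^2 |c_k|^2 \int_0^1 r^{2k-1} \, dr
   = \pi \sum_{k\ge 1} k \, |c_k|^2 .
\end{align*}
% Finiteness of the area therefore gives \sum k|c_k|^2 < \infty,
% i.e. the boundary function u(e^{it}) lies in H^{1/2}.
```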
Proof. Since f g ∞ ≤ f ∞ g ∞ , we need to show that f g H 1/2 ≤ C f A · g A for some absolute constant C, where f A := max{ f ∞ , f H 1/2 } and f 2 H 1/2 = ∞ k=−∞ |k|| f (k)| 2 . For k ≥ 1 let P k be the following smoothed spectral projections, and let P 0 f = |j|≤1 f (j)e ijt so that f = ∞ k=0 P k f. We get For each of the terms we can write where the last inequality follows because k l=0 P l is a de la Vallée Poussin kernel, and in particular is the difference of two Fejér kernel, hence has norm at most 3 on L ∞ . An identical calculation holds for the terms P l g P k f and hence showing the Banach algebra property, as needed.
Lemma 2.2. If f ∈ H 1/2 ∩ C and if z is a function analytic in a neighbourhood of {f (x) : x ∈ T} then z • f ∈ H 1/2 ∩ C.
Proof. We follow Wiener's proof of the same fact for Wiener's algebra. We first note that being in the algebra A = H 1/2 ∩C is a local property, namely if some f : T → C satisfies that for every x there is an ǫ > 0 and a function a x ∈ A such that a x | (x−ǫ,x+ǫ) = f | (x−ǫ,x+ǫ) then it follows that f ∈ A. Indeed, we take a finite cover of T by intervals (x 1 − ǫ 1 , x 1 + ǫ 1 ), . . . , (x n − ǫ n , x n + ǫ n ) and take some corresponding partition of unity p 1 , . . . , p n with p i ∈ A then f = f p 1 + . . . + f p n = a x 1 p 1 + . . . + a xn p n ∈ A (recall that A is a Banach algebra, by lemma 2.1). This shows locality, and it is therefore enough to show that for every x there exists some ǫ > 0 and a ∈ A such that a| (x−ǫ,x+ǫ) = z • f | x . By translation invariance we may assume that x = 0 and by invariance to addition of constants we may assume that f (0) = 0.
(trapezoidal functions satisfy all these properties, for example). We fix some δ > 0 and write f = f 1 + f 2 where f 1 is a trigonometric polynomial with |f 1 (0)| < δ and f 2 A < δ (for example we can take f 1 to be a Cesàro partial sum of f ). We then have Hence for n sufficiently large (depending on δ, of course), we have f ρ n A ≤ Cδ. Using the Taylor expansion of z near zero we can write and since (f ρ n ) k A ≤ C k f ρ n k A ≤ (Cδ) k we can make the sum converge in A by taking δ sufficiently small. This shows that locally z • f ∈ A and proves the lemma.
Proof. By lemma 2.1 A = H 1/2 ∩ C is a Banach algebra and since we already know g ∈ A we get that |g| 2 = Re(g) 2 + Im(g) 2 ∈ A. Since |g| 2 > 0 and since √ · is analytic in the neighbourhood of any interval [ǫ, 1/ǫ], the previous theorem shows that |g| = |g| 2 ∈ A.
Lemma 2.4. If f ∈ H 1/2 ∩ C then f has a uniformly converging Fourier expansion Proof. By a standard approximation argument, it is enough to show that for some universal constant C, where S n (f ) is the n th Fourier partial sum. Indeed, once (1) is established we may write f = f 1 + f 2 with f 1 a trigonometric polynomial and f 2 A < ǫ and get Since ǫ > 0 was arbitrary, the lemma is proved. So we need only demonstrate (1).
To prove (1) we note that F n , the n th Cesàro partial sum of f satisfies F n ∞ ≤ f ∞ ≤ f A so it is enough to prove the claim for S n − F n . We write Dubins-Freedman-like homeomorphism, but this time we do not investigate its almost sure behaviour (as in [10]) but its average behaviour. We then start reducing the randomness step by step, always keeping the same averaged behaviour, until reaching a degenerate random variable, and that is the required homeomorphism.
In this section we are going to sketch a proof of an easier result: that for every continuous function f there is a Hölder homeomorphism ϕ such that f • ϕ has a uniformly bounded Fourier expansion. This result is weaker than our main result in two aspects: the homeomorphism is only Hölder rather than absolutely continuous (this is the more important difference); and the Fourier series is uniformly bounded rather than uniformly converging. After the sketch we will briefly describe what additional ideas are needed to get the main result (whose proof, of course, occupies the rest of the paper).
Lemma 3.1. Let γ > 0 and M be two parameters. Assume (v i,j : i ∈ {1, . . . , n}, j ∈ N) satisfy that for every j there is an l(j) ∈ {1, . . . , n} and a b(j) ∈ N such that and assume further that |{j : l(j) = l, b(j) = b}| ≤ M b γ for all l and b. Then there exists a choice of signs ǫ 1 , . . . , ǫ n ∈ {±1} such that where C is a constant that depends only on γ and M .
This lemma and its proof are inspired by a result of Kashin [9], who proved a discrete version of Menshov's correction theorem. Roughly the proof goes as follows. We divide the values of i into blocks, and choose the signs randomly in each block, ensuring good local behaviour. We then group the blocks into higher level blocks, and choose signs randomly inside each second-level block, multiplying with the previously chosen signs, and so on. This is similar to the method of renormalisation in mathematical physics.
There is a connection between lemma 3.1 and Komlós' conjecture. Recall that Komlós' conjecture asks whether under the condition j |v i,j | 2 ≤ 1 for all i one may choose ǫ i such that i ǫ i v i,j ≤ C for all j. We could have used Komlós' conjecture for lemma 3.1 if we needed the lemma only for γ < 1, if we did not have the factor b(j) 1/50 , and if it were proved. Unfortunately, we need lemma 3.1 for γ = 91 (see page 49) and Komlós' conjecture is not proved.
This lemma is the only part of the proof of the Hölder result which exists in the main result verbatim. The reader is invited to see the details in the proof of lemma 4.1 (lemma 4.1 has two additional parameters, when compared to lemma 3.1, which are needed just because the proof is by induction).
We have opted to state this technical-sounding lemma first because, in a way, it was our starting point. Inspired by the use of Spencer's theorem (a partial result on Komlós' conjecture, see [16]) to solve a version of the flat polynomials conjecture [1], we asked ourselves whether these techniques might be useful for Luzin's conjecture. Even though that path did lead us to a proof, eventually our proof does not use Spencer's theorem or any other result from that area.
Continuing, throughout we will identify the circle T with the interval [0, 1). We will consider homeomorphisms which fix 0, so they are simply increasing functions from [0, 1] into itself.
The process then continues. Assume that we have already chosen φ(k2 −(n−1) ) for all k = 0, . . . , 2 n−1 . We then, for each odd This allows to define φ on all dyadic rationals, after which it can be continued to the entire interval [0, 1] by continuity. For example, if I ≡ [0, 1] then the resulting φ I is exactly the Dubins-Freedman random homeomorphism [4]. Our focus in this paper, though, is on the case where I(d) ⊂ J for some J strictly contained in [0, 1], in which case continuity, and even the Hölder property, are not only trivial but also hold deterministically, and not just with probability 1, as they do for the Dubins-Freedman homeomorphism.
Our first two lemmas are for constant I. It will be convenient, therefore, to make the following definition. For an q ∈ (0, 1 2 ) we let I q ≡ [ 1 2 − q, 1 2 + q] and ψ = ψ q = φ Iq . Lemma 3.2. Let f be continuous and let x, y ∈ [0, 1 2 ]. Then This lemma is not very surprising, as it merely says that if x and y are close (and not too close to 0 or 1) then the densities of ψ(x) and ψ(y) are quite close. Unfortunately, the densities of ψ(x) do not have nice closed formulas (the definition of ψ can certainly be traced to gives closed formulas for x = k2 −n for small n, but the formulas become unmanageable quickly). The proof, very roughly, is as follows. We first note that E(ψ(x)) = x (this holds for any φ I just under the condition that all I(d) are symmetric around 1 2 ). From this we get that E(ψ(2x) − ψ(2y)) = 2x − 2y, that is the variables ψ(2x) and ψ(2y) are close in the 1-Wasserstein distance. We now note that ψ(x) − ψ(y) has the same distribution as U · (ψ(2x) − ψ(2y)) where U is an independent random variable uniform on [ 1 2 − q, 1 2 + q]. This smoothens the estimate on the probabilities and makes it into an estimate on densities (this is similar to the way one uses convolution to translate an estimate in a cruder norm to an estimate in a finer norm -the convolution here is multiplicative but the idea is the same). We omit all further details of the proof. The square root in lemma 3.2 is just an artefact of our proof, the real behaviour is probably |x − y|/(x + y). This argument is not used as is in the proof of the main result, but formula (38) is quite similar (but has a different proof from the one sketched above).
be two intervals with d(y 1 , y 2 ) < ǫ for all y i ∈ J i for some ǫ > 0 (in other words, the intervals are both short and close to one another). Let f and g be two continuous functions. Then The proof is similar to the proof of lemma 3.2 and we omit it. The corresponding lemma in the proof of the main result is lemma 6.5, page 34.
The next lemma is the main lemma in the proof of the result. It implements one step of a 'reduction of randomness' strategy, that is of starting from φ I for I ≡ [ 1 2 − q, 1 2 + q] and reducing the randomness step by step (always keeping around an estimate similar to the one given by lemma 3.2) until reaching a single homeomorphism with good properties. Unfortunately, the formulation is a mouthful. Lemma 3.4. Let f be a continuous function satisfying ||f || ∞ ≤ 1, let q ∈ (0, 1 2 ) and let n, j ∈ N. Let I be a function from the dyadic rationals into subintervals of [ 1 2 − q, 1 2 + q], with the following properties: (i) For all m < n and all k ∈ {1, . . . , 2 m − 1} the interval I(k2 −m ) is degenerate, i.e. a single point.
Then there exists a function J from the dyadic rationals into subintervals of [ 1 2 − q, 1 2 + q] satisfying properties (i) and (iii) and such that for each k ∈ {1, . . . , 2 n−1 } odd, J(k2 −n ) is either the left or right half of I(k2 −n ) (in particular, property (ii) is satisfied for J with j + 1 instead of j).
Further, for every u ∈ N, every r ∈ {2 u−1 , . . . , 2 u − 1} and every ξ ∈ 2 −u−2 Z ∩ [0, 1) we have the estimate Before starting the proof, let us discuss quickly the integration interval E ξ,n . In fact, this has been added only for convenience. Lemma 3.4 could have been proved with (2) replaced with This would certainly simplify the formulation of the lemma, but, unfortunately, would complicate significantly its proof. This is due to various technical issues stemming from the vicinity of the peak of the Dirichlet kernel (the complications arise in the proofs of lemmas 3.5 and 3.6 below, and since these are not spelled out in detail, the reader would have to trust us on this point. But indeed, the complications incurred are significant). Hence we opted for the statement above which 'delays' the handling of the peak of D r until n ≈ r.
Sketch of proof of lemma 3.4. To choose J is equivalent to choosing 2 n−1 signs, as our only freedom is in choosing, for k ∈ {1, . . . , 2 n −1} odd, whether we take J(k2 −n ) to be the left or right half of I(k2 −n ). Denote therefore, for any ǫ = (ǫ 1 , ǫ 3 , . . . , ǫ 2 n −1 ), ǫ k ∈ {±1}, the corresponding J by J ǫ (ǫ k = 1 means that we take the right half of I(k2 −n ) and ǫ k = −1 means we take the left half). With this notation, our goal becomes finding some ǫ such that J ǫ satisfies (2).
Fix one k ∈ {1, . . . , 2 n−1 }, odd. Denote where ǫ ±1 is an ǫ with ǫ k = ±1 and the rest chosen arbitrarily (clearly, for x ∈ [(k − 1)2 −n , (k + 1)2 −n ] only the choice of ǫ k matters). Denote We Our strategy would be to find estimates for ∆ k D r and feed them into lemma 3.1. Here are two such estimates (the corresponding lemmas in the main proof are lemmas 7.3 and 7.4) Proof sketch. Condition on φ J ǫ (k2 −n ) and consider independently the intervals [(k−1)2 −n , k2 −n ] and [k2 −n , (k+1)2 −n ]. Let V be one of these intervals. We have that φ J ǫ is deterministic on (k ± 1)2 −n by assumption and on k2 −n by the conditioning, so it is deterministic on both ends of V . This means that φ J restricted to V is simply an appropriately linearly mapped version of the φ I from lemma 3.2. Applying that lemma gives that F ± satisfies an appropriate 'Hölder away from the boundaries' estimate. Integrating by parts gives a good estimate for V F ± e(l(x− ξ)), still conditioned on φ J ǫ (k2 −n ) (this is the standard argument that shows that a Hölder function has powerlaw decaying Fourier coefficients, the fact that it is only Hölder away from the boundaries requires no change in the argument, and the 2 n factors come from the scaling). Integrating the conditioning gives lemma 3.5.
Lemma 3.6. For every r < s and every ξ, Lemma 3.6 follows from lemma 3.3 using the same scaling argument that was used to conclude lemma 3.5 from lemma 3.2 and we omit the details. (The factor 2 −cj in lemma 3.6 comes from the factor ǫ c in lemma 3.3. Unlike in lemma 3.5 no cancellation in the exponential sum is used here, this lemma uses only that the maximum of e(l(x − ξ)) on [(k − 1)2 −n , (k + 1)2 −n ] can be estimated by min{|ξ − k2 −n | −1 , s − r}. The cancellation would come in by combining this lemma with lemma 3.5, later).
Proof of lemma 3.4, continued. For every u ∈ N define U := 2 max{u,n}+2 . For every k ∈ {1, . . . , 2 n − 1} odd and every ξ ∈ 1 Next, for every u ∈ N, for every 0 ≤ s < u and every t ∈ [2 u−1 , 2 u ) divisible by 2 s+1 , every k ∈ {1, . . . , 2 n−1 } odd and every ξ ∈ 1 We rearrange the vectors w 0 , w + and w − into one long list v k,j (we consider them to be vectors in k, so for example we could take v k,1 = w 0 k,0,1 if we like to start with w 0 , or we could take v k,1 = w + k,0,0,0,2 if we like to start with w + ). We now apply lemma 3.1 with n lemma 3.1 = 2 n . Lemma 3.1 requires us to find, for every v k,j some l(j) and some b(j) such that and to count how many j exist for each choice of l and b. We take l(j) = ⌊2 n ξ⌋ for the corresponding ξ, and this gives an estimate of the form (3) from lemmas 3.5 and 3.6. Checking which b is appropriate for each vector is straightforward and we save the reader all the index checking (note that occasionally we have to combine the estimates from the two lemmas using the fact that for any two numbers x and y, min{x, y} ≤ √ xy). Eventually we get that {j : b(j) = b, l(j) = l} ≤ Cb 9 ∀b, l for some absolute constant. The conclusion of lemma 3.1 is then that there exists a choice of ǫ k such that Finally, to estimate ∆D r for an arbitrary r write r in its binary expansion r = 2 u−1 + 2 s i t i := 2 u−1 + 2 s 1 + · · · + 2 s i for some decreasing sequence s i . We get Hence for any ξ ∈ 1 Plugging in the estimate (4) proves the lemma. The factor b 1/50 in (4) plays an important role here, as it makes the estimates for w 0,± decay exponentially as |u − n| increases and as s decreases (the factor 2 −cj from lemma 3.6 multiplies all the terms equally and translates to the 2 −cj factor in lemma 3.4 directly). The fact that the integral is only on E ξ,n and not on the whole of [0, 1] comes from the fact that lemmas 3.5 and 3.6 do not work for ξ too close to k2 −n (lemma 3.5 explicitly and lemma 3.6 gives a useless estimate), so we have to drop the corresponding w 0 and w ± and remove the corresponding element from ǫ k ∆ k .
Theorem 3.
For every continuous function f and every λ < 1 there is a λ-Hölder homeomorphism ψ such that S r (f • ψ) converges boundedly.
Proof sketch. Fix q sufficiently small such that any RH-descriptor I with gives rise to a homeomorphism φ I which is λ-Hölder surely (not just almost surely). Now construct a sequence of RH-descriptors as follows. We start with I 0 ≡ [ 1 2 − q, 1 2 + q]. We invoke lemma 3.4 with I lemma 3.4 = I 0 , n lemma 3.4 = 1 and j lemma 3.4 = 1. We denote the output of the lemma (i.e. J lemma 3.4 ) by I 0,1 . This halves the length of the interval at 1 2 so now we have |I 0,1 ( 1 2 )| = q. We then apply lemma 3.4 again, again with n lemma 3.4 = 1 but this time with j lemma 3.4 = 2 and I lemma 3.4 = I 0,1 , and denote I 0,2 := J lemma 3.4 (which will have |I 0,2 ( 1 2 )| = 1 2 q). We continue like this, shrinking the interval at 1 2 more and more and finally define I 1 (d) = lim j→∞ I 0,j (d).
We get that This makes I 1 suitable as entry to lemma 3.4 with n lemma 3.4 = 2 and j lemma 3.4 = 1. Again we apply lemma 3.4 infinitely many times and get in the limit an RH descriptor I 2 with I 2 ( 1 4 ), I 2 ( 1 2 ) and I 2 ( 3 4 ) all degenerate. We continue this process infinitely many times and get an which is completely degenerate, i.e. corresponds to a single homeomorphism φ. This φ will be λ-Hölder because it is in the support of φ I 0 .
To estimate we simply sum the differences coming from lemma 3.4. Since the error there is C2 −c(j+|n−u|) , the sum over j simply gives C2 −c|n−u| . A crude estimate for the difference between E n,ξ and E n+1,ξ gives that this can also be bounded by C2 −c|n−u| . We get, for every u ∈ N, every r ∈ {2 u−1 , . . . , 2 u − 1} and which is bounded by a constant independent of u. Applying Bernstein's inequality shows that the estimate holds for all ξ ∈ [0, 1]. The theorem is proved.
What is needed to go from theorem 3 to our main theorem? Clearly we can no longer allow our random homeomorphism the flexibility to change by a constant proportion (our q) at every step, as that could never be absolutely continuous. We have to allow more flexibility where our function f fluctuates a lot, and less flexibility where f is more flat. In other words, we need to make q into a function of the relevant dyadic rational d, such that 'the local q' depends on the behaviour of f in the appropriate area. The 'flatness' of f is naturally encoded using its Haar decomposition and indeed, the necessary condition for the homeomorphism to be absolutely continuous is a bound for certain sums of squares of Haar coefficients. Interestingly, these sums turn out to be dyadic BMO functions, and can thus be estimated by the (dyadic) John-Nirenberg inequality, see lemma 5.5.
Had we applied this 'local q' strategy naively, we would have ruined the nice local structure of φ, i.e. the fact that every dyadic interval can be handled with little knowledge of the surrounding, a fact that was extremely useful in the proofs of lemmas 3.5 and 3.6. Why would that ruin this local structure? Because the local q near d needs to depend on f in the vicinity of φ(d), which is a global property. We solve this problem by replacing φ with φ −1 , its inverse as a homeomorphism. In other words, instead of first choosing φ( 1 2 ) (as we did in the proof sketch above) we first choose which x will have φ(x) = 1 2 , and we choose it uniformly in an interval [ 1 2 − q, 1 2 + q] where q depends on the behaviour of f near 1 2 . We then choose x 2 such that φ(x 2 ) = 1 4 , and we choose it uniformly in an interval symmetric around x/2, whose width depends on the behaviour of f near 1 4 , and so on. This change, however, breaks lemma 3.1 which requires the partition into intervals 'in the x axis' to be uniform, but our replacement of φ with φ −1 made the partition in the y axis uniform, while the partition in the x axis becomes random. To fix this problem we do not break all intervals at every step (i.e. whenever we move from I n to I n+1 in the proof of the theorem). Instead we break into halves only intervals larger than a certain threshold, leaving the smaller intervals fixed. The details of this can be seen in the beginning of §7, page 36.
Finally, the transition from 'uniformly bounded Fourier series' to 'uniformly converging Fourier series' is not particularly problematic. The modulus of continuity of f has to be taken into account in lemma 7.2 (the analog of lemma 3.4 in the proof of the main result). The proof of the theorem from lemma 7.2 then proceeds by showing that E ξ,n E(f • φ In )D r is a good approximation of E(f (φ In (ξ))) E ξn D r . As n → ∞ the first integral converges to The details of all this fill the next 4 sections.
A hierarchical probabilistic construction
Our first lemma is proved using a hierarchical random construction, in the spirit of Kashin (and also related to the Komlós conjecture, as discussed in §3). Its formulation is a bit strange because we had to add two parameters (α and β) to make the induction work. We note that lemma 3.1 follows from it by simply setting α = 1 and β = 1 50 (and increasing M to 2, if necessary).
Proof. The constant A will be chosen later. We argue by induction on n.
The case n = 1 is clear (if A > 1/ log 2), for all values of α, β, γ and M . Assume therefore the claim has been proved for all n ′ < n and for all values of α ′ , β ′ , γ ′ and M ′ . Thus we are given α, β, γ, M , v, l and b. Let α ′ := 1 2 (α + 0.99) and β ′ := 1 2 (β + 1/25). Let K ≥ 2 be some integer parameter to be chosen later. Divide {0, . . . , n − 1} into blocks of size K (the last block may be smaller) and let n ′ := ⌈n/K⌉ be the number of blocks. We will first chose signs δ 1 , . . . , δ n ∈ {±1} such that the sum in each block will be controlled. We denote these sums by w, namely, for every s ∈ {0, . . . , n ′ − 1}, j ∈ N we define w s,j := If the last block is shorter than K, truncate the sum defining w. We wish to choose the δ such that for every s ∈ {0, . . . , n ′ − 1} and every j such that |sK − l(j) mod n| ≥ 2K we have where λ is some absolute constant to be defined shortly. It will be convenient, when |sK − l(j) mod n| < 2K, to denote w s,j = 0, so let us do so. The choice of δ will be random, uniform, i.i.d. Since the estimates for each block are independent of the other blocks, let us fix the number of the block s. We divide the j into two sets, S and T according to whether b(j) ≥ (|sK − l(j) mod n| + 1) α ′ β ′ /β (this is S ) or not (T ). In the first case the second term in (5) is the smaller, so we need to estimate the probability that |w s,j | ≤ σK 1/2 b(j) −β/β ′ . We use |v sK+r,j | ≤ 1/b(j) and Bernstein's inequality for sums of random variables tell us that Now, for every b there are no more than Cb β/β ′ α ′ ≤ Cb 2 relevant values of l (recall that we fixed the value of s, and note that we used the inequalities α ′ > 0.99 and β < β ′ ). Further, for each couple (b, l) we have no more than M b γ values of j, so all-in-all we have no more than CM b γ+2 values of j for b. A union bound gives We claim that if λ is sufficiently large then this sum is small. This is a simple calculation but let us do it in detail nonetheless. We write For any ε > 0 we have b ε ≥ eε log b and thus the second multiplicand is no more than exp(−cλ(γ + 1)50e log b) = b −cλ(γ+1)50e . Pick λ so large such that (this can be done independently of γ or M , recall that M ≥ 2) and get This finishes the estimate for S .
where the second inequality uses our assumption that |sK − l(j) mod n| ≥ 2K. Again using Bernstein's inequality we have For every value q of |sK − l(j) mod n| there are at most 2 possible values of l(j) which give it. The restriction b < (|sK − l mod n| + 1) α ′ β ′ /β gives that for every q there are no more than q α ′ β ′ /β ≤ q 2 values of b, and for each couple (b, l) there are no more than M b γ ≤ M (q α ′ β ′ /β ) γ ≤ M q 2γ possibilities for j. All-in-all the value q corresponds to at most CM q 2γ+2 values of j. Thus we can bound and a calculation similar to the one done above for S shows that the value of σ ensures that this sum is also smaller than 1 4 , for λ sufficiently large. Fix λ to satisfy this requirement (uniformly in α, α ′ , M and γ) and the previous one. Since the sum of the probabilities of all dissenting events is less than 1, we see that some choice of δ for which (5) will be satisfied exists.
Before continuing we make one modification of (5). Recall that we defined We are now in a position to apply the lemma inductively. We need numbers and the other requirements (recall that α ′ = 1 2 (α + 0.99) and β ′ = 1 With these definitions (7) follows immediately from (6), so the only thing we need to check is, for an arbitrary given b ′ and l ′ , for how many values of or in [0, 2) in the case of b ′ = 1. The number of such b can be estimated by a simple derivative bound giving (8) also holds in the case that b ′ = 1 (in which case we need to verify how many b satisfy 2b β/β ′ K −α ′ ∈ [0, 2), but only the constant is affected). Adding the facts that each value of l ′ corresponds to at most K different values of l; and that each couple We are ready to apply our induction assumption! We apply it with α ′ , β ′ , γ ′ := 2γ + 1, n ′ and the vectors w s,j /2σK 1/2−α ′ . We get that there exists some signs Defining The first term is from the fact that we zeroed out w when |j − sK| < 2K. When moving back to v we need to bound these v and we bound them using a naive absolute value bound (we use here that To make the calculation manageable, we will now bound the second term in (9) by a sequence of quite rough bounds. For M ′ we write . Inserting everything into (9) gives .
Together with (9) we get (and also such that K ≥ 2). To bound the first term in (10), note that for b < θ 15 we have Inserting both estimates into (10) gives Choosing A = 2C 5 completes the induction, and proves the lemma.
A family of homeomorphisms
A dyadic rational is a number d of the form k2 −n for some integers k and n. If k is odd we define rnk(d) = n and let also rnk(0) = −∞. Let θ be a map from the dyadic rationals in (0, 1) into [ 1 4 , 3 4 ]. Let n be an integer. Our goal is to define for every θ and n a homeomorphism ψ θ,n : It turns out that it is a little more natural to first define ψ −1 θ,n which is a 'Dubins-Freedman style' homeomorphism, so we do that, and then define ψ as its inverse. But first we need some notation.
We define In words, T d is a triangle function based on a certain dyadic interval I (whose centre is d). We will only apply the functional ν d to increasing functions, and then ν d ψ is the variation of ψ over I.
We may now define ψ −1 θ,n . The definition is by induction on n, with the induction base being Assume ψ θ,n−1 has been defined for all θ. We now define for n ≥ 1, where the sum is over all d ∈ (0, 1) of rank n (totally 2 rnk(d)−1 terms). An equivalent definition is to write, for every k ∈ {0, . . . , 2 n }, and complete linearly between these points. In particular we note that ψ −1 θ,n (d) stabilises for all dyadic d, as n → ∞, in fact when n > rnk(d).
To understand why we call ψ −1 a Dubins-Freedman style homeomorphism just note that if we were to take θ uniform in [0, 1] (which we do not allow here) then the result is exactly the Dubins-Freedman homeomorphism of level n.
Finally, let us remark that ψ −1 has a recursive formula (which could also have been used to define ψ −1 ). To state it define With these definitions we get Similarly we have for ψ itself Both formulas (which are clearly equivalent) are a special case of lemma 5.1 below (used for I = [0, 1 2 ] and I = [ 1 2 , 1]). We say that I is a dyadic interval if there exist some n ∈ Z + and k ∈ {1, . . . , 2 n } such that I = [(k − 1)2 −n , k2 −n ]. We call n the rank of I. For every interval I ⊆ [0, 1] we define L I to be the affine increasing map taking [0, 1] onto I. The following lemma (whose proof is merely playing around with the definitions) formalises the local structure of ψ and ψ −1 .
Lemma 5.1. Let θ and n be as above. Let I be a dyadic interval . Then for every θ, n, I and for every x ∈ I, A short version of (16) is For ψ itself this can be written as Proof. We first note that ψ −1 θ,n (x) is linear on every dyadic interval of order n, as it is a sum of piecewise linear functions each of which has its jumps of the derivative on dyadic rationals of rank ≤ n. This shows the case that m = n as the left hand side of (16) in linear on I, the right hand side of (16) is linear on I (as ψ −1 θ,0 (x) = x always) and they are equal at the boundaries of I.
We will prove the claim by induction on n, with the base case being the case of n = m just proved. We first note that for x ∈ I the only non-zero terms in (11) are those for which d ∈ I and hence for every (we used here that rnk(L I (d)) = rnk(d) + m for all d ∈ (0, 1), and that the boundaries of the interval do not have rank n so we may omit them). Recall that ψ −1 θ,n = ψ −1 θ,n−1 on each dyadic rational of rank < n and in particular on the boundaries of I. Hence α and β do not depend on whether we define them with n or n − 1. This allows to apply the induction assumption and get Inserting all these into (18) gives and by (11) this is exactly for some ǫ > 0 and all dyadic d then ψ θ,n converges uniformly as n → ∞ and the limit is a Hölder homeomorphism of [0, 1]. As ǫ → 1 2 , the Hölder exponent tends to 1. Proof. Examining (12) we see that ψ −1 is strictly increasing, hence ψ is well-defined. Further, a simple induction shows that For dyadic intervals of order m < n we may get the same estimate since ψ θ,n (k2 −m ) = ψ θ,m (k2 −m ). For m > n the linearity of ψ −1 θ,n on dyadic intervals of order n gives The lower bound is similar and we can conclude that for an appropriate δ depending only on ǫ, which converges to 1 when ǫ → 1 2 . Since the series ψ −1 θ,n (d) stabilises for all dyadic rational d, we get that the limit (denote it by ψ −1 ∞ ) also satisfies (19). For a general x < y ∈ [0, 1] we may find a dyadic interval contained in [x, y] of size at least 1 4 (y − x). In the other direction we may find two dyadic intervals I 1 , I 2 such that [x, y] ⊆ I 1 ∪ I 2 and such that |I 1 ∪ I 2 | ≤ 4(y − x).
for all n, finite or infinite. Reversing gives a similar estimate for ψ, namely Since ψ −1 θ,n stabilises on all dyadic rationals, we get a dense set of numbers where ψ θ,n stabilises, and on these numbers the limit satisfies (21). This of course shows that ψ θ,n (x) converges uniformly, and that the limit also satisfies (21), for all x, y ∈ [0, 1].
We note for future use that (15) extends to n = ∞, namely where A is again θ( 1 2 ). Let us also remark that during the proof of lemma 5.2 we defined ψ −1 θ,∞ = lim n→∞ ψ −1 θ,n , but this notation is not ambiguous since this limit is also the inverse of ψ θ,∞ , since both ψ n → ψ ∞ and ψ −1 n → ψ −1 ∞ are uniform (the fact that ψ −1 n → ψ −1 ∞ uniformly is implicit in the proof of lemma 5.2).
We now move to conditions on θ which will guarantee that ψ θ,∞ is absolutely continuous. This clearly requires θ to be usually close to 1 2 , so let us introduce a function q from the dyadic intervals into [0, ∞) and ask that |θ(d) − 1 2 | ≤ q(d) for all dyadic d (it will actually be convenient later to add a constant and require |θ − 1 2 | ≤ cq, but let us ignore this for a bit). Thus we need to find some condition on q that will ensure that ψ θ,∞ is absolutely continuous whenever |θ − 1 2 | ≤ q. To formulate the condition, we need two auxiliary functions. The first, Q n : whenever x is in some dyadic interval I, |I| = 2 −n , and d is the middle of I. With Q n defined we denote We can now state our condition.
Lemma 5.3. There exists a constant ν 0 such that for any q for which and for any θ such that |θ − 1 2 | ≤ ν 0 q, the homeomorphisms ψ θ,∞ and ψ −1 θ,∞ are absolutely continuous. Further, for any p < ∞ there exists a constant ν 1 (p) such that |θ − 1 . Notice that (24) does not prohibit z from being ∞ on a set of zero measure.
Proof. Assume |θ − 1 2 | ≤ νq for some ν. Examine a finite n. Since ψ −1 θ,n is piecewise linear we may consider the derivative. The relation between z and ψ is given by the following inequality. We claim that for every n and every x ∈ [0, 1], where at dyadic x the left derivative is bounded by the left limits of the terms, and ditto on the right (and these derivatives, of course, do not need to be equal).
Recall (14) and the definitions of θ ± and A just before it. We wish to differentiate (14) and for this we note that A is just a number, and further satisfies Hence, for x ∈ [0, 1 2 ] non-dyadic, = exp CνQ q,0 (x) + Cν where ( * ) is our induction hypothesis. For x dyadic the calculation above holds identically for both left and right derivatives (for x = 1 2 only the left derivative), when Q is replaced by appropriate left or right limits. The lower bound for (ψ −1 ) ′ is identical. The case of x ∈ [ 1 2 , 1] is identical, with q − and θ − replaced by q + and θ + respectively; and with (27) replaced by Q q,n = Q q + ,n−1 (2x − 1). This finishes the proof of (25).
Denote for brevity ψ n := ψ θ,n and write (25) as Our assumption (24) on z shows that, if ν is chosen sufficiently small, then exp(Cνz(x)) is integrable. Hence ψ −1 n are uniformly absolutely continuous, and hence their limit, ψ −1 ∞ , is absolutely continuous. We next claim that (ψ −1 n ) ′ (x) converges for almost all x. There are two ways to see this: the first to repeat the argument that showed (25) to show that (ψ −1 n ) ′ /(ψ −1 m ) ′ is bounded by a partial sum of Q-s. The second, to note that (ψ −1 n ) ′ is a positive martingale with respect to the dyadic filtration F n and apply the martingale convergence theorem [5, §4.2 (2.11)]. By the dominated convergence theorem we get and in particular (ψ −1 ∞ ) ′ ≤ exp(Cνz(x)) for almost every x ∈ [0, 1]. This shows the 'further' clause of the lemma for ψ −1 , as for ν sufficiently small (24) gives || exp(Cνz(x))|| p < C(p). The results for ψ are then concluded from lemma 5.4 below.
Lemma 5.4. If φ is an absolutely continuous homeomorphism of [0, 1] with 1 φ ′ ∈ L p for some p > 0, then φ −1 is also absolutely continuous, with Proof. Assume without loss of generality that φ is increasing. We first show that φ −1 is absolutely continuous. Let therefore A ⊂ [0, 1] be a countable union of intervals. Denote B = φ −1 A, so it is also a countable union of intervals. Then for every ε > 0 where the last inequality follows from Markov's inequality. Choosing ε = ( so φ −1 is absolutely continuous. To show the derivative equality write φ ′p where the first equality is a change of variables.
Tailoring the homeomorphism family to the function. Recall from the discussion on page 12 that we need to tailor the function q, which describes a family of homeomorphisms in this section, and would be used to construct a measure on it in the next section, to our function f from the statement of theorem 2. Here we do that, but first we need some notation.
Definition. Recall the Haar functions ½ [0,1] , ½ [0,1/2] − ½ [1/2,1] etc. It will be convenient to index them using their support, so for a dyadic interval J, let h J be the function which is |J| −1/2 on the left half and −|J| −1/2 on the right half (the function ½ [0,1] will not be associated to any interval, this will not be a problem).
For a function f ∈ L 2 we define q = q f , a function on the dyadic rationals, as follows. Let I be some dyadic interval and let d be its middle. Then we define q(d) : Recall the definition of z q in (23). Applying it to the q above we get (here and below this notation means that the sum is over both I and J. Note that x does not need to belong to J, only to I).
Proof. Let us first calculate the L 1 norm of z = z q f . By Fubini where the last inequality follows since the h J are orthonormal and ||f || 2 ≤ ||f || ∞ ≤ 1.
The same calculation shows that z f is a dyadic BMO function. Indeed, let U be some dyadic interval, let L U be the affine increasing map taking [0, 1] onto U . Define g U (x) = f (L U (x)). Then write, for x ∈ U , where the equality marked by ( * ) comes from mapping J to L −1 U (J), I to L −1 U (I), and noting that the linear change of variable in the integration together with the fact that ||h L −1 U (J) || ∞ = |U |||h J || ∞ give together a factor of U ; and where the inequality marked by ( †) comes from applying (30) to g U (which is, of course, also bounded by 1).
Further, for any p < ∞ there is an Proof. This follows immediately from lemmas 5.3 and 5.5.
Before moving forward let us rearrange the formula for q = q f ( 1 2 ) in a way that will be useful in §6. We first write (we remark that also q ≤ √ 2/( √ 2 − 1)×the same sum). We next recall that the partial sums of the Haar expansion have a simple form. Let X k be the sum of the first 2 k terms in the Haar expansion of f . Then for any dyadic I with |I| = 2 −k and any x ∈ I The sum J f, h J h J is not the partial Haar expansion of f but rather that of g := f − f , since the very first Haar function does not correspond to any interval J. Hence Applying Parseval we get from (31) This is the form of q we will use below.
Remarks. The decomposition above for q gives a decomposition for the corresponding z (recall its definition in (23)) which has a probabilistic intuition behind it. Indeed, write where z k is the function given by applying the procedure to get z from q (which is linear) for one term. Then each z k is the increasing process of a martingale, for example z 1 is the increasing process of the standard Haar martingale X n defined above in (32). Recall that for any martingale X n , its increasing process is defined as A n = n−1 i=1 E((X i+1 − X i ) 2 | X 1 , . . . , X i ), see e.g. [5, §4.4].
Let us sketch a probabilistic proof of lemma 5.5. We claim that if X n is a bounded martingale, then its increasing process A n satisfies P(A ∞ > λ) ≤ 2e −cλ . To see this we apply the Skorokhod embedding theorem to embed the martingale into Brownian motion, and then A ∞ becomes the time. The result then follows from the fact that the probability of Brownian motion to stay for time t inside the interval [−1, 1] decays exponentially in t. Thus each z k has an exponentially decaying tail, and therefore so does their weighted sum z. We skip any further details.
Our last remark would be useful mostly to readers who have already examined the proof of lemma 6.4 below. Indeed, the specific form of q we use is mostly dictated by its use in that lemma. Its use there leaves two places for flexibility. Lemma 6.4 could have worked with the sequence 2 −k/2 replaced with any sequence decaying faster that 1/k 2 ; and the term 2 k g 2 , which is just the L 2 norm of a partial Haar expansion of g, could have been replaced with others norms, e.g. with the L 1 norm. But lemma 5.5 does not hold for the L 1 norm, only for L p norms with p ≥ 2. Thus our choice of L 2 norm comes from the need to have both lemma 5.5 and 6.4 hold, each constraining q f from one side.
The rest of the proof holds for any admissible η, so let us fix one such η (possibly depending on p, if we are proving the 'further' clause of theorem 2) and remove it from the notation. Further, we allow arbitrary constants like c and C to depend on it. For a function τ from the dyadic rationals into [−1, 1] we denote ψ f,τ,n := ψ 1 2 +ηq f τ,n ψ f,τ,∞ := ψ 1 2 +ηq f τ,∞ . For convenience, let us note at this point the locality formulas for ψ f,τ,n . Define f ± and τ ± as in (13) and recall that L I is the affine increasing map taking [0, 1] onto I. Then where, as usual, A = θ( 1 2 ) = 1 2 + ηq f ( 1 2 )τ ( 1 2 ). The formula is a direct consequence of (22), (17) and the facts that (q f ) − = q f − and q f •L I = q f •L I , both of which are easy to verify.
Most importantly, from now on we take τ random, i.i.d., uniformly distributed in [−1, 1]. All E and P signs in this section will refer to this measure on τ . The case of n = ∞ follows by taking a limit, which is allowed since (ψ −1 n ) ′ → (ψ −1 ∞ ) ′ for almost every x (28), for every τ . By Fubini, for almost every x we have (ψ −1 n ) ′ → (ψ −1 ∞ ) ′ almost surely (in τ ). Exchanging the limit and the expectation is allowed since ψ −1 n ′ (x) is bounded by a bound depending only on f , by (25), and finite for almost every x, by lemma 5.5, so the bounded convergence theorem applies. Lemma 6.2. Let X 1 and X 2 be two random variables on [−1, 1] with densities ρ 1 and ρ 2 respectively. Let Then there exists a variable Q ('a coupling of X 1 and X 2 ') taking values in [−1, 1] 2 such that (i) Q i has the same distribution as X i .
(in fact, P(Q 1 = Q 2 ) = p, but we will have no use for this fact).
Proof. If p = 0 then we can take Q to be two independent copies of X 1 and X 2 so assume this is not the case. Let Z be a random variable with density 1 p min{ρ 1 , ρ 2 }. If p = 1 then let Y i be a random variable with density 1 1−p (ρ i − min{ρ 1 , ρ 2 }). Let Z, Y 1 and Y 2 be all independent. Now throw an independent coin which succeeds with probability p. If the coin succeeds, let Q = (Z, Z), if not, let Q = (Y 1 , Y 2 ). Clause 2 is now obvious, and clause 1 is not difficult either, because the density of Q i is p times the density of Z plus 1 − p times the density of Y i i.e. (ii) If we denote Proof. Define auxiliary variables z i by Let ρ i be the density of z i . A simple calculation shows that Now couple z 1 and z 2 using lemma 6.2 (call the coupling Q) and the coupling succeeds (i.e. Q 1 = Q 2 ) with probability at least p. Now define and the properties of T follow from the way we constructed it. Proof. We divide into two different cases according to whether q := q f ( 1 2 ) > n −9/10 or not (q f from (29)). Denote for brevity ψ = ψ f,τ,∞ .
We start with the case of q > n −9/10 . In this case we claim that for every To see (38) couple ψ(x 1 ) and ψ(x 2 ) as follows. We define τ 1 and τ 2 two random (dependent) variables, each of which has the same distribution as τ i.e. a uniform function from the dyadic rationals into [−1, 1]. The definition of the τ i is as follows. For all dyadic p = 1 2 we take τ 1 (p) = τ 2 (p) (and of course independent for different p and uniformly distributed). For p = 1 2 we use the variable given by lemma 6.3, namely, we use lemma 6.3 with λ lemma 6.3 = ηq and then let τ i ( 1 2 ) = T i . We now note that, by (36) Note that τ ± do not depend on i (here is where we used that τ 1 (p) = τ 2 (p) for all p = 1 2 ) and of course neither do f ± . Hence lemma 6.3 gives where the y i are from lemma 6.3 (the first inequality is in fact an equality, but we will not need this fact). Denote the event on the left hand side by G. Then where the equality marked by ( * ) follows because E(½ G ·the same) = 0, since under G we have ψ f,τ 1 ,∞ (x 1 ) = ψ f,τ 2 ,∞ (x 2 ). This shows (38).
Summing over all k gives and because of our assumption that q > n −9/10 (which we have not used so far!) we get a bound of Cn −1/10 log n. This finishes the case q > n −9/10 . For the case of q ≤ n −9/10 our first order of things is to rearrange the question slightly. Recall that we wish to estimate E(f • ψ) · e(nx). We first let g = f − f and write Recall next the estimate (33) for q = q f ( 1 2 ). Using it with the condition q ≤ n −9/10 gives ∀k.
Let k be the smallest integer such that 2 k > n 7/5 . Then For each i write We start with the estimate of II i . We wish to condition on ψ −1 (i2 −k ) and ψ −1 ((i + 1)2 −k ) so denote the σ-field of these two variables by B i . Then by Fubini. We have reached a simple, but crucial point in the argument. We note that ψ −1 conditioned on B i is a different version of ψ −1 . Precisely, let L be the linear increasing map taking We now use lemma 6.1 which states that E((ψ −1 h,σ,∞ ) ′ (y)) = 1 for almost all y. This shows that Let E be the set of indices i such that [i2 −k , (i + 1)2 −k ] ⊆ [ψ(r), ψ(s)]. Sum the right hand side over all i ∈ E and use Cauchy-Schwarz to get and the second term is bounded by (40) by Cn −1/10 . As for the first term we may write by (44)-(45) ≤ ECn −1/10 ||(ψ −1 f,τ,∞ ) ′ || 2 ≤ Cn −1/10 . (we need to condition on B i in the first inequality because E is itself random). In the last inequality we used that η is admissible, recall the definition of admissibility in the beginning of §6. This terminates the estimate of the second, more interesting part of (41).
To estimate I i we note that from 2 k ≥ n 7/5 and (35) we have ψ −1 (y) − ψ −1 (i2 −k ) < Cn −28/25 for every y ∈ [i2 −k , (i + 1)2 −k ] and hence |e(nψ −1 (y)) − e(nψ −1 (i2 −k ))| ≤ Cn −3/25 . we get With the estimate on II i above we get This is almost what we need, but what we need precisely is the integral from ψ(r) to ψ(s). The difference between it and i∈E (i+1)2 −k i2 −k is two short intervals, [ψ(r), i 1 2 −k ] and [i 2 2 −k , ψ(s)] for some i 1 and i 2 . The integral on these is much simpler to estimate as they are short. For example, we may estimate and similarly for the other interval. We get Recalling (39) from the beginning of the proof of the small q case we get s r E(g(ψ(x))) · e(nx) dx ≤ Cn −1/10 and as explained just before (39), this gives the same result for f . The lemma is thus proved. Then Proof. We keep the notation q = q f ( 1 2 ). Denote α := min{ 1 2 + ηqx : x ∈ J 1 ∪ J 2 } β := max{ 1 2 + ηqx : x ∈ J 1 ∪ J 2 } so that β − α ≤ Cε and then ) and note that |F 0 i | ≤ Cε as it can be written as an integral over an interval of length at most Cε. Hence we need to estimate |F ± 1 − F ± 2 |. The proofs of the + and − cases are identical, so for brevity, we will do the calculations for F − .
Let us first dispense with an easy case. Assume that n ≥ ε −1/2 . In this case, fix some y ∈ J 1 ∪ J 2 and condition on τ ( 1 2 ) = y. Denote We change variables and get We use lemma 6.4 with f lemma 6.4 = f − , n lemma 6.4 = nA and [r, s] lemma 6.4 = I(y) and get |F − (y)| ≤ A · C(nA) −1/11 ≤ Cε 1/22 . Integrating over y in either J i gives that both F i satisfy |F i | ≤ Cε 1/22 and we are done.
Assume therefore that n < ε −1/2 . Let y ∈ J 1 ∪ J 2 and consider ψ f,τ,∞ conditioned on τ ( 1 2 ) = y. We keep the notations A, F − (y) and I(y) above, and write ψ = ψ f − ,τ − ,∞ for brevity. We apply (46) to two different y i and subtract, getting and again II and III are integrals over intervals of length at most Cε, so may be ignored. As for I, we write which we plug into the integral in I to get I ≤ Cnε. We get by our assumption that n < ε −1/2 . Integrating over y 1 and y 2 gives Since we covered both small and large n, we get is identical, and the lemma is proved.
Reducing randomness
We have spent many pages on construction of a random homeomorphism ψ such that the expectation of f • ψ has some good properties. In this last section we are going to remove the randomness step by step, always controlling the expectation, until we are finally left with a single ψ such that f • ψ has the same good properties.
Let I be a map giving for each dyadic rational d = k2 −n ∈ (0, 1) an interval I(d) ⊆ [−1, 1] (possibly degenerate). For each I we define φ I = φ I,f,η to be our homeomorphism ψ f,τ,∞ with τ (d) taken uniformly in I(d) for all d. We call such I an RH-restrictor (RH standing for Random Homeomorphism). Sometime we will denote τ I for τ which has this distribution.
From now on, when we say 'condition ψ on ψ −1 ( 1 2 ) = y' or 'condition ψ on ψ(y) = 1 2 ' we will actually mean ψ f,τ,∞ with τ ( 1 2 ) = (y − 1 2 )/ηq f ( 1 2 ) and the other τ (d) uniform in [−1, 1] (this is, of course, a version of the usual conditional probability, but it is defined for all y and not just for almost all y, which is convenient). Another version of the same thinking is the following Lemma 7.1. Let U = (U i ) be a finite partition of [0, 1] into dyadic intervals (i.e. U = [0, 1] and different U i ∈ U are disjoint except possibly their endpoints). Let I be an RH-restrictor which is degenerate for any d in the boundary of any U ∈ U . Then φ −1 I (U ) is deterministic for any U ∈ U (i.e. it is dependent on f but not random).
Proof. We go by induction on the number of intervals in the partition. Assume the claim has been proved for all partitions with less than n intervals, and let U be with |U | = n. Let U ∈ U be an interval with smallest size. Let V be U 's father i.e. U ⊂ V and |V | = 2|U |. Then V \ U must also be in U . Replacing U and V \ U with V gives a new partition U ′ with |U ′ | < n hence the induction assumption gives that φ I (W ) is deterministic for any Figure 1. An RH restrictor: on the left the dyadic partition, on the bottom the δ-uniform partition.
W ∈ U ′ and in particular for all W ∈ U other than U and V \ U . Next, let d 0 , d 1 and d 2 be the beginning, centre and end of V , respectively. Then φ −1 I (d 0 ) and φ −1 I (d 2 ) are deterministic by the inductive assumption and I(d 1 ) is degenerate, say {x} for some x ∈ [−1, 1] (the last claim is because d 1 is a point in the boundary of U ). Hence, by lemma 5.1, which is deterministic, proving the claim.
Definition. For a continuous function f with ||f || ∞ ≤ 1 and a δ > 0, we say that an RH-restrictor I is of type See figure 1. Of course, not for every RH-restrictor I one may find some f and δ such that I is of type (f, δ) -such I are quite special and if we do not wish to specify f and δ we will simply say that 'I has a type'. Let us remark on the appearance of φ −1 I (U i ) in properties (ii) and (iii) (and indirectly in (iv) too, as it applies only when (iii) does not). Of course, φ I is random. But property (i) implies, using lemma 7.1, that in fact, φ −1 I (U i ) is deterministic, and all randomness is inside the U i . Hence properties (ii) and (iii) are also a function of f and I, and are not random.
We call the U and the m the corresponding partition and the corresponding value (to I), respectively.
The following lemma is the main lemma of this paper. It implements the 'reduction of randomness' strategy that we outlined in the beginning of this section. The reduction in randomness is implemented by moving from an RH-restrictor for a given m, to an RH-restrictor with m + 1. Recall the standard definition of the modulus of continuity of a function, where here and below dist is considered cyclically in [0, 1] e.g. dist(0.9, 0) = 0.1. Further, for every ξ ∈ [0, 1] denote by B ξ the union of the φ −1 I (U i ) that contains ξ and its two immediate neighbours. We understand 'neighbours' cyclically, e.g. if ξ ∈ φ −1 I (U 1 ) then B ξ is the union of φ −1 I (U 1 ), φ −1 I (U 2 ) and the last φ −1 I (U i ). Then there exists an RH-restrictor J of type (f, δ) with the same corresponding partition U , with the corresponding value being m + 1 and with the following properties (i) J(d) ⊆ I(d) for all d, (ii) For every u ∈ N, every r ∈ {2 u−1 , . . . , 2 u − 1} and every ξ ∈ 2 −u−2 Z ∩ [0, 1), where the set over which integration is performed E ξ = E ξ (U , I, f, δ) is defined by Here and below the constants C and c are allowed to depend on η. D r is the usual Dirichlet kernel.
Proof. For any ξ ∈ [0, 1] denote h(ξ) := E(f (φ I (ξ))). Denote V i = φ −1 I (U i ) for brevity. Examine one U i in our partition, and recall that V i is not random. The requirements from J leave relatively little freedom for J inside U i . If . Thus choosing J is equivalent to choosing a sequence of signs ε i ∈ {±1}, one for each i for which |V i | > 1 2 δ (say that ε i = 1 means that we take the right half of I(d) and ε i = −1 the left, and denote for brevity Y = {i : |V i | > 1 2 δ}). Denote the J that corresponds to a vector ε ∈ {±1} Y by J ε .
Fix one i ∈ Y , one ξ ∈ 2 −u−2 Z ∩ [0, 1) and let ε + and ε − be two vectors with ε + i = 1 and ε − i = −1 and define (this is a good definition since the values of ε different from ε i have no effect on φ| V i . Hence they effect neither F ± nor ∆). In other words, if we take J(d) to be the left half of This relation is important because it 'linearises' the problem of choosing a homeomorphism. Our strategy will be to find estimates for ∆ i D r (· − ξ) for various i, r and ξ, and then apply lemma 4.1. These estimates will occupy the next three lemmas. The reason for subtracting h(ξ) in the definition of F ± (which, of course, has no affect on ∆) is to get the ω(δ c ) c factor in the statement of lemma 7.2. Precisely, we claim that J I To see (49), first note that if x ∈ V i then |x − ξ| ≤ dist(ξ, V i ) + δ. Let x ′ be a point in the boundary of some φ −1 I (U ), U ∈ U , closest to x among such points, and let ξ ′ be the point closest to ξ, again from among the points in the boundary of φ −1 I (U ), U ∈ U . Then the δ-uniformity of φ −1 I (U ) says that |x − x ′ | ≤ 1 2 δ and |ξ − ξ ′ | ≤ 1 2 δ. We now use the deterministic Hölder condition (35) and get where the third inequality follows because x ′ and ξ ′ are both on points where Taking expectations shows (49). It is also occasionally useful to integrate only the right term, which gives This will be used below.
In light of (49) let us define so that ||F ± || ∞ ≤ H. Of course H depends on ξ, i and other parameters, but we suppress this in the notation. Both h(ξ) and H will not play a very important role until the very end of the proof, at lemma 7.6. Unlike lemmas 7.4 and 7.5 below, which are only used as sublemmas of lemma 7.2, the next lemma actually has one application after lemma 7.2 is finished. For that application, i is not necessarily in Y . Hence keep in mind, when reading this lemma, that i is arbitrary. Lemma 7.3. With the definitions above, for every r < s and every ξ we have More importantly, Recall that dist is considered cyclically.
Proof. The first clause of the lemma is simple. Indeed, on the one hand , (s − r) δH (recall that |V i | ≤ δ and that ||F ± || ∞ ≤ H), establishing the lemma in this case. For the second clause we may assume that |r| and |s| are both bigger than 1/δ, as in the other case the first clause gives a better estimate. We will also assume that ξ ∈ V i as otherwise (53) is vacuous. Finally, we may assume that h(ξ) = 0, as subtracting a constant from f changes neither ψ f,τ,∞ nor φ I , and the only loss in generality is that now we may only assume ||f || ∞ ≤ 1 (rather than ||f || ∞ ≤ 1 2 , which is one of the implicit assumptions of the lemma).
We will work in each half of U i separately. Denote therefore by U ± i these two halves, and denote V ± i := φ −1 J ε (U ± i ) (no relation to the ± in F ± , which plays no role here). For concreteness let us work on U − i , the case of U + i is literally identical. By the locality of ψ (37) φ J ε restricted to V − i is a linearly mapped copy of ψ, precisely where the last equality is a (linear) change of variables. Let ξ * be one of ξ, ξ − 1 or ξ + 1, whichever is closest to V − i , and note that dist(ξ, the whole of R here -originally we defined it only on V − i , but it is just an affine function and we extend it to an affine function on R) and r = r|V − i | and with these notations Integrating over φ J ε (d) gives Adding these estimates for r and s gives the result.
Lemma 7.4. With the definitions above lemma 7.3, for every r < s and every ξ we have Proof. Essentially, the lemma follows from lemma 6.5 using the same integration by parts that was used to derive the previous lemma from lemma 6.4. But let us do it in detail nonetheless. We start by showing which is the more interesting case. Assume ξ ∈ V i as otherwise the claim is vacuous. Recall that As in the previous lemma we may assume that h(ξ) = 0 and ||f || ∞ ≤ 1. We use the locality of ψ (37) to get Denote f := f •L U i . Note that τ ± := τ J ε ± •L U i has the following distribution: where d is as usual the centre of U i . Denote Define ξ as in the previous lemma with respect to V i i.e. ξ = L −1 V i (ξ * ) etc., and r = r|V i | (also as in the previous lemma) and get Integrate by parts and get ))e( ry) dy 1 1 − e(|V i |(1 − ξ)) .
This gives us the longest formula in this paper, We now note that each integral over y above is exactly of the form given by lemma 6.5, with ε = 2 −m . Hence they are bounded by C2 −m/22 . Bounding |V i | ≤ δ and |1 − e(θ)| ≥ cθ gives Summing the r and s terms gives This is the main estimate of the lemma. We still need to show the simpler estimate Cδ(s − r)2 −m/22 . We keep the notations f , τ ± and ξ. For every l ∈ {r, . . . , s − 1} we denote l := l|V i | and then the same locality argument that gave (55) gives We apply lemma 6.5 directly (without integration by parts) and get We sum over l from r to s − 1. This gives the second bound, and proves the lemma.
Aggregating the last two lemmas we get Lemma 7.5. With the definitions above lemma 7.3, for every r < s and every ξ we have Proof. Denote D = s−1 l=r e(l(x − ξ). Assume first that both |r| and |s| are larger than 1 /δ. We use lemma 7.3 for both F + and F − and summing the result gives and similarly for s. Applying this to whichever of r and s has a smaller absolute value gives as needed. The case that either |r| ≤ 1 /δ or |s| ≤ 1 /δ is similar but simpler. Lemma 7.3 gives the estimate | ∆D| ≤ δ(s − r)H, lemma 7.4 gives the estimate | ∆D| ≤ Cδ(s − r)2 −m/22 , and we combine them as above. The lemma is thus proved.
The last step in proving lemma 7.2 is to choose the ε i . Recall that for each i ∈ Y = {i : |V i | > 1 2 δ} we need to chose an ε i ∈ {±1}. It will be convenient to add dummy variables, so we will choose ε i for every i, and ignore those outside Y . Denote by N the number of U i in our partition, and ∆ i ≡ 0 for every i ∈ Y . Denote The ε i will be chosen using lemma 3.1. Here are the details. Lemma 7.6. There exists an ε = (ε 1 , ε 2 , . . . , ε N ) such that for every u ∈ N, every r ∈ {2 u−1 , . . . , 2 u − 1} and for every ξ ∈ 2 −u−2 Z ∩ [0, 1), Proof. We need to prepare a sequence of vectors to 'feed' into lemma 3.1, which will give us our signs. The estimates of these vectors will come from lemma 7.5.
We now define our vectors. For every u ∈ ℕ let U be the smallest power of 2 with U ≥ max{2^{u+2}, N}. For every i ∈ {1, . . . , N} and every ξ ∈ (1/U)ℤ ∩ [0, 1) we define w^0_{i,ξ,u}, except if 2^u > 1/δ and V_i ⊂ B_ξ (B_ξ from the statement of lemma 7.2), in which case we define w^0_{i,ξ,u} = 0. Next, for every u ∈ ℕ, for every 0 ≤ s < u and every t ∈ [2^{u−1}, 2^u) divisible by 2^{s+1}, every i ∈ {1, . . . , N} and every ξ as above we define w^±_{i,ξ,u,s,t}; again, except if 2^u > 1/δ and V_i ⊂ B_ξ, in which case we define w^±_{i,ξ,u,s,t} = 0. This terminates the list of vectors (we think about them as vectors in i) that we wish to feed into lemma 3.1, after some scaling.
We note the following convenient fact. For ξ ∈ [0, 1] define l(ξ) by ξ ∈ V_{l(ξ)}. Then (we used here that dist is defined cyclically and that |V_i| > δ/4 for all i).
this time without the restriction V_i ⊂ B_ξ. Another step that will simplify the analysis below relates to the term H from lemma 7.5. Recall that it was defined in (52) by ≤ min{1, Cω_f(δ^{4/5})(|l(ξ) − i mod N| + 1)}.
We start with w^0_{i,ξ,u} for 2^u ≤ 1/δ. By lemma 7.5 and (59) where (*) comes from (56) and from the observation that if V_i ⊂ B_ξ then the minimum is achieved at 2^{u+1} (perhaps up to a multiplicative constant). Define b(u, ξ) = b(u) = max{1, ⌊1/(δ2^{u+1})⌋} and get that We used here b̃ instead of b because of the factor Cλ^{l(ξ)−i}. Since it appears in all cases equally, it will be more convenient to count how many vectors satisfy an inequality of the form (60), for any fixed values of l and b, and then remove the factor Cλ^{l(ξ)−i} in the end. To count the number of ξ with l(ξ) = l for some given l, we note that ξ ∈ (1/U)ℤ, and since U ≤ max{2^{u+2}, 2N} ≤ max{4/δ, 2N} ≤ 8/δ and since |V_i| ≤ δ for all i, we see that each possible value of l(ξ) repeats at most 9 times. Thus |{(ξ, u) : 2^u ≤ 1/δ, l(ξ) = l, b(u) = b}| ≤ 18 for all l, b, because we have at most 9 different ξ which give the same l and at most 2 different values of u which give the same b (the largest allowed by the condition 2^u ≤ 1/δ). For 2^u > 1/δ, lemma 7.5, (56) and (59) give The multiplicity in b is at most a constant in this case, but there is multiplicity ⌊δU⌋ + 1 ≤ Cδ2^u in l, because ξ is taken in (1/U)ℤ. Hence we get at most Cδ2^u ≤ Cb̃(u)^{44} vectors satisfying (60). For w^± and 2^u ≤ 1/δ, lemma 7.5, (56) and (59) give Hence we define b(ξ, u, s, t) = b(s) = ⌊1/(δ2^s)⌋ and get |w^±_{i,ξ,u,s,t}| ≤ min The same argument as above gives that each l(ξ) repeats no more than 9 times, but here we need to count over u and t as well (with s fixed). The number of possibilities for t given u and s is 2^{u−s−2}, and u is bounded below by s + 1 and above by the requirement 2^u ≤ 1/δ. Totally we get |{(ξ, u, s, t) : l(ξ) = l, b(s) = b}| ≤ 9 for all l and b. Finally, the most complicated case is that of the vectors w^± for 2^u > 1/δ. Lemma 7.5, (56) and (59) give We define b(ξ, u, s, t) = ⌊max{1/(δ2^s), (δ2^u)^{1/44}}⌋, so (61) holds. We still need to add up the number of vectors that correspond to each bound in (62). The number of ξ which give every particular l is bounded by ⌊δU⌋ + 1 ≤ Cδ2^u. As for the multiplicity in b, each value b can be achieved either through the term 1/(δ2^s) or through the term (δ2^u)^{1/44}, and we need to check, in each case, how many possibilities for u, s, t and ξ we have.
In the first case, we get δ2^u < (b̃ + 1)^{44}. This has two implications. First, the number of u which satisfy 1 < δ2^u ≤ (b̃ + 1)^{44} can be bounded by C log b̃. Second, by the above, each l(ξ) repeats no more than C(b̃ + 1)^{44} times for each t and s. Further, the restriction δ2^s ∈ (1/(b̃+1), 1/b̃] gives that there is at most one possibility for s. The number of possibilities for t given u and s is always 2^{u−s−2}, and this can be bounded by So how many (s, t, u, ξ) are there that satisfy |w^±_{i,ξ,u,s,t}| ≤ min{1/(|i − l mod N| + 1), b} for a given l and b? As just explained, there are C log b̃ possibilities for u, Cb̃^{44} possibilities for ξ and Cb̃^{45} possibilities for t and s, so in total we get no more than Cb̃^{89} log b̃ ≤ Cb̃^{90} possibilities.
In the second case (…], so the number of possibilities for u is bounded, and they all satisfy δ2^u < (b̃ + 1)^{44}. As for s, we have that 2^s … Together with the bound on the number of times each l(ξ) repeats we get a total of Cb̃^{89} possibilities. This concludes our bounds. We rearrange all our vectors w^0_{i,ξ,u} and w^±_{i,ξ,u,s,t} into one list v_{i,j} and define l(j) = l(u, ξ) or l(ξ, u, s, t), as the case may be, and similarly for b. The estimates (60), (61) become and |{j : l(j) = l, b(j) = b}| ≤ Cb̃^{90}. It is now time to handle the factors C_1λ^{l(j)−i}. We have To finish the proof of lemma 7.6, assume we are given some u ∈ ℕ and some r ∈ {2^{u−1}, . . . , 2^u − 1}. Write r in its binary expansion r = 2^{u−1} + 2^{s_1} + · · · + 2^{s_k}, with partial sums t_k := 2^{u−1} + 2^{s_1} + · · · + 2^{s_k}, for some decreasing sequence s_k. We get Hence for any ξ ∈ (1/U){0, . . . , U − 1} we have w^±_{i,ξ,u,s_k,t_{k−1}} (our assumption on V_i ⊂ B_ξ whenever 2^u > 1/δ is satisfied on E_ξ; recall its definition (47)). With (64) we get |∫_{E_ξ} ∆(x)D_r(x − ξ)dx| ≤ C2^{−m/44} ω_f(δ^{4/5})^{1/500} min{(δ2^u)^{1/100}, |log_2(δ2^u)|(δ2^u)^{−1/4400}}.
(The log(δ2^u) in the second estimate comes from the case 2^u > 1/δ in (64), from the counting on s: as s decreases from u towards zero there are ≈ log(δ2^u) values of s for which min{(δ2^u)^{−1/44}, δ2^s} is constantly (δ2^u)^{−1/44}, and only then does it start to decrease exponentially.) Lemma 7.6 is thus proved, and due to (48), so is lemma 7.2.
Proof of theorem 2. Recall that the theorem states that for every continuous function f there is an absolutely continuous homeomorphism ψ such that S_r(f ∘ ψ) converges uniformly. Assume without loss of generality that ‖f‖_∞ ≤ 1/2. We apply lemma 7.2 in a doubly infinite process. We start with the map I(d) = [−1, 1] for all d, which is of type (f, 1) with respect to the trivial partition {[0, 1]} and to m = −1. We apply lemma 7.2 and get a map J_0 which is of type (f, 1) with respect to {[0, 1]} and to m = 0 such that for every u, r and ξ as in lemma 7.2 We apply lemma 7.2 to J_0 and get a J_1, apply it again to get J_2 and so on, and for each m we have for every u, r and ξ an interval (a union of three intervals of the type φ^{−1}_{I_{k+1}}(U), U ∈ 𝒰_{k+1}); and if 2^u > 1/δ_{k+1} then it is a union of between 0 and 3 intervals of the type φ^{−1}_{I_{k+1}}(U), U ∈ 𝒰_{k+1}, depending on details of the splitting (these intervals might be neighbours or might not be; this makes no difference to us). In all cases we apply lemma 7.3 for the RH-restrictor I_{k+1}, for both F^+ and F^−, and for all relevant values of i. We get ½(F^+(x) + F^−(x)) = E(f(φ_{I_{k+1}}(x))) − E(f(φ_{I_{k+1}}(ξ))).
Dried Blood Spot Tests for the Diagnosis and Therapeutic Monitoring of HIV and Viral Hepatitis B and C
Blood collected and dried on a paper card – dried blood spot (DBS) – has attracted growing interest as a sampling method that can be performed outside care facilities by capillary puncture and transported simply and safely by mail. The benefits of this method for blood collection and transport have recently led the World Health Organization to recommend DBS for HIV and hepatitis B and C diagnosis. The clinical utility of DBS sampling to improve diagnosis and care of HIV and hepatitis B and C infection in hard-to-reach populations, key populations, and people living in low-income settings was highlighted. The literature on the usefulness of DBS specimens in the therapeutic cascade of care – screening, confirmation, quantification of nucleic acids, and resistance genotyping – was reviewed. DBS samples are suitable for testing antibodies, antigens, or nucleic acids using most laboratory methods. Good sensitivity and specificity have been reported for infant HIV diagnosis and for diagnosis of hepatitis B and C. The performance of HIV RNA testing on DBS to identify virological failure on antiretroviral therapy is also high but not optimal, because the dilution of dried blood in the elution buffer reduces the analytical sensitivity and because of contamination by intracellular HIV DNA. Standardized protocols are needed for inter-laboratory comparisons, and manufacturers should pursue regulatory approval for in vitro diagnostics using DBS specimens. Despite these limitations, DBS sampling is a clinically relevant tool to improve access to infectious disease diagnosis worldwide.
INTRODUCTION
The use of dried blood collected on blotting paper – "Dried Blood Spot" (DBS) – was developed gradually after the Second World War. The beginning of diagnosis on DBS is associated with Robert Guthrie, who implemented large-scale neonatal screening for phenylketonuria in the 1960s. In the field of infectious disease diagnosis, the first references to DBS relate to syphilis, with studies published as early as the 1950s (Blaurock et al., 1950). The DBS sample was also used for serological surveillance or diagnosis of trypanosomiasis, hepatic amebiasis, congenital rubella, or hepatitis B (Ashkar and Ochilo, 1972; Ambroise-Thomas and Meyer, 1975; Farzadegan et al., 1978; Sander and Niehaus, 1980). After these modest and historic beginnings, progressively increasing interest in DBS was observed from the early 2000s, mainly driven by the needs of therapeutic monitoring of HIV infection.
The blood can be deposited directly on the filter paper when capillary blood is collected with a retractable incision device, or using a pipette when peripheral venous blood is collected. The procedure for DBS collection has been detailed elsewhere (Ostler et al., 2014). Capillary sampling on DBS is performed at the heel in infants and at the digital pulp in children and adults. Self-sampling is possible after minimal training. For digital samples, the fingers innervated by the ulnar nerve (half of the 3rd finger and the 4th or 5th finger) are generally preferred because they are less sensitive than the fingers innervated by the median nerve. The size of the skin penetration depends on the size and type of lancet and determines the amount of blood that can be collected. Standard DBS cards use pre-printed circles 12 mm in diameter, each receiving between 50 and 70 µl of blood. Massaging the finger before puncture and warming the hands in warm water can facilitate sampling. After puncture, it is important to exert a strong intermittent pressure to maintain the bleeding and complete the blotting paper card.
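As a back-of-the-envelope illustration of how these spot dimensions translate into sample volume, the sketch below estimates the blood volume recovered by a punch of a given diameter, assuming the blood spreads uniformly over the printed circle (a simplification; actual recovery depends on the paper and hematocrit). The 6 mm punch diameter is a common laboratory choice, not a value from this article.

```python
def punch_volume_ul(spot_volume_ul: float, spot_diameter_mm: float = 12.0,
                    punch_diameter_mm: float = 6.0) -> float:
    """Estimate blood volume in one punch, assuming uniform spreading."""
    area_ratio = (punch_diameter_mm / spot_diameter_mm) ** 2
    return spot_volume_ul * area_ratio

# A 12 mm circle holding 50-70 µl of blood:
for total in (50, 70):
    print(f"{total} µl spot -> ~{punch_volume_ul(total):.1f} µl per 6 mm punch")
# 50 µl spot -> ~12.5 µl per 6 mm punch
# 70 µl spot -> ~17.5 µl per 6 mm punch
```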
INDICATIONS OF DBS IN INFECTIOUS DISEASES
Dried blood spot sampling offers an alternative to the reference samples – plasma and serum – in situations where there are no facilities or expertise to take venous whole blood specimens, or where transport of body fluids is difficult. Hence, DBS specimens collected outside of healthcare facilities can be viewed as an alternative to rapid diagnostic tests (RDT). Compared to RDT, DBS sampling has advantages and limits summarized in Table 1. One of the main limits compared with RDT is that a diagnostic approach using DBS requires a post-test visit after sampling, which is associated with a risk of loss to follow-up. On the other hand, DBS testing in a centralized laboratory makes it possible to test a large number of persons and to confirm infections with Western blot and molecular tests. Hence, in countries with high economic resources, DBS can be used to promote access to screening in key populations that make little use of care facilities. Screening programs using DBS have been running for several years in Great Britain, targeting in particular intravenous drug users. Up to 20% of new HCV diagnoses were made using DBS in 2013 in Scottish addiction treatment centers (McLeod et al., 2014). A program of this type has been implemented by the Montpellier teaching hospital since 2013 in all addiction treatment centers located in the Languedoc Roussillon Region (Accueillir, 2014). Sex workers (Shokoohi et al., 2016), homeless people (Foroughi et al., 2017), migrants, some men who have sex with men (MSM) (Bogowicz et al., 2016), and populations living in hard-to-reach areas such as French Guiana (Schaub, 2017) are also key populations with difficult access to laboratory infectious disease testing for which the use of DBS should be considered.
In resource-limited settings, the high rates of infectious disease mortality and morbidity are, to a large extent, due to a lack of diagnostic means (Global Health Workforce Alliance and World Health Organization [WHO], 2013). Insufficient access to nearby laboratory facilities is a major concern, and the lack of adequate human and financial resources – health professionals and biologists – is also noteworthy. It is estimated that three-quarters of Africa's population have access to minimal health care structures, and more than 90% in Asia. By contrast, less than one-third of Africans would have access to advanced health facilities, and just over 50% in Asia (RAND Corporation, 2007, Estimating the global health impact of improved diagnostic tools for the developing world; Abou Tayoun et al., 2014).
Point-of-care (POC) tests – usable outside healthcare facilities – and technologies requiring minimal infrastructure (near-POC tests) constitute major progress but will require significant investments to expand the limited range of analyzable parameters. The need for machine maintenance, reagent supply, and quality control assurance must also be considered in a medium- and long-term approach for decentralized laboratories. Good acceptability of capillary blood sampling is another advantage of DBS compared with venepuncture (Pell et al., 2014; Almond et al., 2016). In this context, DBS makes decentralized sampling possible while the test is carried out in a well-equipped centralized clinical laboratory, without waiting for the uncertain availability of new technologies and without the necessity of maintaining a cold chain for transportation (Figure 1A). Transportation and storage in field conditions can have a significant effect, as shown for HIV and HCV nucleic acid amplification (Monleau et al., 2010; Tuaillon et al., 2010; Manak et al., 2018) and HCV antibody testing (Marques et al., 2012). The filter papers used can also affect the performance of in vitro diagnostic tests. Among the available DBS collection cards, Whatman 903, Munktell TNF, and Ahlstrom Grade 226 have been recommended, but other cards have also demonstrated good performance (Waters et al., 2007; Rottinghaus et al., 2013; Smit et al., 2014; World Health Organization [WHO], 2014; Taieb et al., 2016). DBS specimens should be considered from a public health perspective, for which the clinical performance of the in vitro laboratory assays is crucial. The clinical performance of a test can be analyzed as a trade-off between the intrinsic performance of the assay and its accessibility in the field: the best clinical performance is obtained in the population in which the highest proportion of infected persons is tested and detected positive. High clinical performance may be achieved using DBS-based strategies (Figure 1B). Implementation of DBS for HIV viral load is considered one of the most medically effective immediate measures to reduce AIDS-related mortality in Africa (Phillips et al., 2015).
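This trade-off can be made concrete with a toy calculation: the proportion of all infected persons actually identified is the product of testing coverage and assay sensitivity. The numbers below are hypothetical and only illustrate why a slightly less sensitive assay that reaches more people can detect more infections overall.

```python
def infections_detected(population: int, prevalence: float,
                        coverage: float, sensitivity: float) -> float:
    """Expected number of infected persons identified by a testing strategy."""
    return population * prevalence * coverage * sensitivity

POP, PREV = 100_000, 0.02  # hypothetical population and prevalence

# Lab venepuncture: very sensitive assay, but few people reached.
lab = infections_detected(POP, PREV, coverage=0.30, sensitivity=0.99)
# DBS outreach: slightly lower sensitivity, much wider reach.
dbs = infections_detected(POP, PREV, coverage=0.70, sensitivity=0.95)

print(f"lab-based: {lab:.0f} infections detected")  # -> 594
print(f"DBS-based: {dbs:.0f} infections detected")  # -> 1330
```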
In addition to individual diagnosis, the simplification of sampling, transport, and storage makes DBS a particularly suitable tool for population studies. In France, the mandatory reporting system for HIV is associated with virological surveillance by the HIV National Reference Center using dried serum spots (Lot et al., 2004). The HIV National Reference Center identifies HIV types, groups, and subtypes, and estimates incidence using a recent-infection test. This surveillance system provides robust and comprehensive data on the HIV epidemic in France.
Other countries, such as Germany, have also integrated DBS into their HIV surveillance system thanks to the ease of DBS use (Hofmann et al., 2017). In southern countries and areas with difficult access, DBS allows large-scale surveys to plan and monitor health programs. DBS specimens collected during Demographic and Health Surveys (DHS) are useful for estimating the prevalence of diseases, allowing reliable countrywide and regional distribution of HIV estimates (Bellan et al., 2013), but also of hepatitis B, C, and delta, as recently reported (Meda et al., 2018; Njouom et al., 2018; Tuaillon et al., 2018).
DBS IN THE THERAPEUTIC CASCADE OF CARE
Reaching and testing persons at risk of HBV, HCV, and HIV infection is a major challenge in the global effort to eliminate these infections as public health threats by 2030 (World Health Organization [WHO], 2016). Diagnosis of viral hepatitis and HIV follows a sequential strategy initiated by serological screening based on the detection of antibodies or antigens, followed by a confirmation step and therapeutic monitoring (Figure 1C). DBS analyses can be integrated into each step of the diagnostic cascade.
DBS for Screening and Serologic Confirmation of HIV and Viral Hepatitis
Dried blood spot can be used in the initial serological screening step as an alternative to conventional assays based on venipuncture and to RDT. DBS can be decentralized like RDT but requires a return visit for post-test counseling. This second visit carries a risk of loss to follow-up (Bottero et al., 2015), and the lower rate of retention in care is one of the limitations of DBS testing compared to RDT. Nevertheless, questioning health care workers about their perception of DBS testing highlights some advantages of a diagnostic strategy requiring a second visit: (i) unlike for RDT, the person who performed the DBS does not bear the responsibility of performing the assay and of immediately notifying the diagnosis to the screened subject; (ii) in the event of a positive test, the medical team can anticipate and better organize the post-test counseling; (iii) in a risk-reduction approach, the time between screening and the announcement of the result may be perceived as beneficial to raising awareness.
Dried blood spot samples can be used with standard HIV and viral hepatitis immunoassays. Automated immunoassays performed on DBS can be more efficient than RDT based on immuno-concentration, immunochromatography, or agglutination methods. We reported that acute HIV infection was detected earlier on DBS using a fourth-generation HIV test (combining detection of total antibody and p24 Ag) compared to RDT (Kania et al., 2015). Suboptimal analytical sensitivity has also been reported for HBsAg RDT (Scheiblauer et al., 2010); this analytical sensitivity is generally insufficient to meet the minimum requirements of European regulatory authorities and WHO (lower limit of detection (LOD) < 4 IU/mL) (Chevaliez et al., 2014; World Health Organization [WHO], 2019). HBsAg laboratory tests have a sensitivity < 0.1 IU/mL (Tuaillon et al., 2012), suggesting that even with a dilution factor of 10 or 20 times related to the elution step, DBS testing may have a better sensitivity than HBsAg RDT. Systematic reviews have reported 98% sensitivity and 100% specificity for HBsAg screening on DBS, compared to 88.9% sensitivity and 98.4% specificity using HBsAg RDT (Lange et al., 2017a). By contrast, the overall performances of RDT and DBS tests dedicated to anti-HCV detection appear very close, with 98% sensitivity and 99% specificity using DBS, and 98% sensitivity and 100% specificity using RDT (Lange et al., 2017a; Tang et al., 2017).
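The dilution argument above can be made explicit: if the elution step dilutes the dried blood 10- to 20-fold, the effective detection limit on DBS is roughly the serum assay's LOD multiplied by that factor. The sketch below runs this arithmetic; the 4 IU/mL threshold is the regulatory requirement quoted above, and the simple multiplicative model is an approximation.

```python
def effective_dbs_lod(serum_lod_iu_ml: float, dilution_factor: float) -> float:
    """Approximate DBS detection limit, assuming LOD scales with dilution."""
    return serum_lod_iu_ml * dilution_factor

SERUM_LOD = 0.1        # IU/mL, HBsAg laboratory assay (Tuaillon et al., 2012)
RDT_REQUIREMENT = 4.0  # IU/mL, minimum regulatory requirement

for factor in (10, 20):
    lod = effective_dbs_lod(SERUM_LOD, factor)
    verdict = "below" if lod < RDT_REQUIREMENT else "above"
    print(f"dilution x{factor}: DBS LOD ~{lod} IU/mL, {verdict} the 4 IU/mL bar")
# dilution x10: DBS LOD ~1.0 IU/mL, below the 4 IU/mL bar
# dilution x20: DBS LOD ~2.0 IU/mL, below the 4 IU/mL bar
```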
The confirmation of a positive screening test is a second and often crucial step in the cascade of care. DBS can be used for confirmation of anti-HIV and anti-HCV detection by Western blot. Band patterns are similar to those observed in blood, with comparable performances in terms of sensitivity and specificity (Tuaillon et al., 2010; Kania et al., 2013; Manak et al., 2018). However, these assays are rarely available in low-income countries and are generally expensive. Confirmation by genomic detection on DBS is also an effective means of confirming infections with HCV, HBV, and HIV (Tuaillon et al., 2010; Nhan et al., 2014). Nucleic acid tests on DBS are able to confirm HIV and HCV replication, since the RNA level is generally higher than the LOD in treatment-naïve subjects (Smit et al., 2014; Nguyen et al., 2018), and to detect chronic active HBV infections (HBV DNA > 2000 IU/ml). Access to a more efficient diagnostic tool in case of high clinical suspicion despite a negative RDT result could be considered another use of DBS (Kania et al., 2015).
DBS for HIV and Viral Hepatitis Molecular Testing to Detect Viral Replication, Therapeutic Response, and Virologic Therapy Failure
One of the most obvious indications of DBS concerns HIV, HBV, and HCV molecular tests. Testing HIV, HBV, and HCV nucleic acids is necessary for therapeutic initiation or monitoring of the therapeutic response. However, access to nucleic acid tests remains very limited in low-income countries. The performance of most commercial HIV-1 RNA assays has been evaluated on DBS: Abbott RealTime, Roche Cobas, Siemens Versant, bioMérieux Nuclisens, Biocentric Generic, and Cepheid GeneXpert. The Abbott test has been the most largely used assay and can be considered the reference for HIV viral load on DBS. Authors conclude quite consensually on the satisfactory performance of viral load on DBS (Smit et al., 2014). Nevertheless, the variability of the results must be underlined: for a given test, sensitivities ranged from less than 80% to almost 100% at the threshold of 1000 HIV RNA copies/mL recommended by WHO for virological failure, and specificities from less than 60% to nearly 95% (Smit et al., 2014; World Health Organization [WHO], 2014; Taieb et al., 2016). The viral reservoir of HIV DNA induces a quantification bias when whole blood is used: the HIV DNA contained in infected cells is released into the DBS eluate. We have shown that the risk of over-quantification becomes greater over 10^6 HIV DNA copies per million peripheral blood mononuclear cells (Zida et al., 2016). The use of a specific extraction of RNA, or of DNase, significantly improves the specificity (Rollins et al., 2002). At a threshold of 5000 HIV RNA copies/mL, the techniques on DBS show better performance, but this high threshold implies a deterioration of the negative predictive value for a test of therapeutic failure (World Health Organization [WHO], 2014).
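A rough model of this bias: the measured signal on a whole-blood spot is the plasma RNA plus the proviral DNA carried by the cells in the spot, so a patient suppressed in plasma can appear to fail therapy. The numbers below are hypothetical, and the model ignores extraction efficiencies; it only illustrates the direction and rough size of the effect.

```python
def apparent_copies_per_ml(plasma_rna_cp_ml: float,
                           dna_cp_per_million_pbmc: float,
                           pbmc_per_ml: float = 2e6) -> float:
    """Whole-blood signal = plasma RNA + cell-associated DNA (toy model)."""
    dna_contribution = dna_cp_per_million_pbmc * pbmc_per_ml / 1e6
    return plasma_rna_cp_ml + dna_contribution

# Hypothetical patient: suppressed in plasma (200 cp/mL) but with a
# large proviral reservoir (1500 DNA copies per million PBMC).
apparent = apparent_copies_per_ml(200, 1500)
print(f"apparent load on DBS: ~{apparent:.0f} cp/mL")  # ~3200 cp/mL
# > 1000 cp/mL WHO threshold -> misclassified as virological failure
```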
The detection of nucleic acids on DBS also has a strong clinical indication in the diagnosis of infections transmitted from mother to child. Regarding HIV, the presence of maternal antibodies in infants makes serological tests ineffective until at least 1 year of age. Molecular diagnosis of HIV infection on DBS has been shown to be effective (Rollins et al., 2002) and is routinely used in clinical practice for this indication. DBS sampling is also useful for specialized tests such as HIV resistance genotyping and antiretroviral drug dosages (D'Avolio et al., 2010; Monleau et al., 2014). It should be noted that although the clinical relevance of HIV resistance genotyping on DBS is indisputable, it nevertheless faces the difficulty of molecular genotyping on DBS at the viral load threshold defining therapeutic failure (Monleau et al., 2014).
For viral hepatitis, there are fewer data on molecular tests, but they show better clinical performance than those published for HIV. For HBV, sensitivities and specificities are estimated at 95% on average (D'Avolio et al., 2010), and the threshold for inactive hepatitis is easily reached on DBS (< 2000 IU/mL, approximately equivalent to < 10,000 DNA copies/mL). For HCV, mean sensitivities and specificities are also high, estimated at > 98% (Lange et al., 2017b), which can be explained by the rarity of low HCV viremia during the natural history of this infection. Confirmation of HCV viremia by testing HCV core antigen (HCVcAg) on DBS has also recently been reported in intravenous drug users living in Africa and Asia (Mohamed et al., 2017; Nguyen et al., 2018). This method appears highly specific and may benefit from substantial stability under prolonged storage conditions, but with a lower analytical sensitivity compared to DBS HCV RNA testing (Soulier et al., 2016; Nguyen et al., 2018).
SEROLOGICAL AND MOLECULAR TESTING USING DBS IN THE FIELD
Heterogeneity in the pre-analytical procedures used for DBS specimens between laboratories influences the results and makes comparisons difficult. The size of the spot, the nature of the elution buffer, the elution volume – and therefore the dilution factor – and the extraction technique all impact the performance of the tests. WHO guidelines for the implementation of HIV viral load and viral hepatitis tests stress the need for manufacturers to provide application notes for the use of DBS and, at best, to pursue regulatory approval for in vitro diagnostics using DBS specimens (World Health Organization [WHO], 2014, 2017). The bioMérieux Nuclisens and Abbott RealTime HIV-1 viral load kits have obtained regulatory approval for use on DBS. On this matrix, the detection threshold (95% detection) is 802 copies/ml for the bioMérieux test (CE-IVD) and 839 copies/ml for the Abbott test (CE-IVD and WHO prequalification). However, a study has shown that the detection rate does not reach 95% for viral loads between 1000 and 5000 RNA copies/mL (Viljoen et al., 2010). For viral hepatitis B and C, no viral load test performed from DBS is currently IVD approved. The same scarcity must be noted for viral serology techniques, for which there is neither regulatory approval for use on DBS nor an official application note from manufacturers. Despite these important limitations, DBS collection is recommended by WHO for the diagnosis of HIV and viral hepatitis B and C in field settings, to improve access to screening and management of populations with poor access to serum tests (Soulier et al., 2016; Taieb et al., 2018).
CONCLUSION
Capillary blood collected on blotting paper is an alternative sampling method with many advantages compared to serum or plasma specimens. The lower analytical sensitivity of assays performed on DBS compared to serum/plasma is one of the limits of DBS, since biomarkers can be present at very low concentrations during infection. However, data suggest that the analytical sensitivity of DBS is generally higher than that of RDT. Another limitation is the lack of commitment by manufacturers to validate serological and molecular tests on DBS specimens. The pre-analytical steps of laboratory analyses performed on DBS remain manual, including the following steps: punching out a spot from each blood-soaked circle, transferring it to the elution tube or plate, placing the tube on a laboratory shaker and letting the punched DBS gently elute for a minimum of 2 h, and transferring the tubes to a microcentrifuge to eliminate debris from the supernatants. Diagnostic tests on DBS are consequently difficult to integrate into the laboratory workflow. Hence, tests on DBS require rigorous validation in clinical laboratories to guarantee the quality of the results. Despite these limitations, DBS is a clinically relevant tool for decentralized sampling. DBS can contribute more broadly to improving access to in vitro diagnosis in order to reach the treatment targets to help end the AIDS epidemic and eliminate viral hepatitis as a public health threat by 2030.
AUTHOR CONTRIBUTIONS
ET, DK, AP, KB, and PV contributed to the conception and wrote the manuscript. FT, EO, RS, J-CP, and AM participated in the writing and analysis. All authors read and approved the final manuscript.
Micro-computed tomography analysis of calcium hydroxide delivery efficacy in C-shaped canal system of mandibular second molars
Background Calcium hydroxide [Ca(OH)2] is widely accepted as a biocompatible interappointment intracanal medicament. This study aimed to analyze the efficacy of Ca(OH)2 placement into the C-shaped canal system of mandibular second molars using the syringe method with and without a lentulo spiral, utilizing micro-computed tomography (micro-CT). Methods Twenty-four extracted mandibular second molars were instrumented and classified into C-shaped floors (n = 12) and non-C-shaped floors (n = 12). In both groups Ca(OH)2 was placed using the syringe system; all teeth were then scanned and cleaned, and Ca(OH)2 was placed again, this time with the syringe system followed by a lentulo spiral, and the teeth were rescanned. The specimens were scanned using micro-CT to analyze the volume, volume percentage, uncontacted surface area, and uncontacted surface area percentage of Ca(OH)2 with the two delivery methods in the entire canal and at the apical 4 mm of the canal. The Mann-Whitney test and Wilcoxon signed-rank test were used to determine the statistical differences among the groups. Results Syringe administration used in conjunction with the lentulo spiral presented a lower uncontacted surface area, a lower percentage of uncontacted surface area, a larger volume, and a higher percentage of volume than the syringe without the lentulo spiral (P < 0.05). There was no significant difference between the C-shaped floor group and the non-C-shaped floor group (P > 0.05) in the Ca(OH)2 uncontacted surface area, volume, and percentages at the different regions of the canals and among the different delivery technique groups. Conclusions The combination of lentulo spiral and syringe technique can increase the volume and contacted surface area of Ca(OH)2 in the C-shaped canal system of mandibular second molars.
Background
Elimination of microorganisms and their by-products from an infected root canal system is crucial for successful endodontic treatment. However, it has been reported that some microorganisms may remain lodged in the dentinal tubules even after careful chemo-mechanical preparation [1,2]. Thus, intracanal medication between sessions can be essential for disinfecting the root canal system, minimizing the risk of reinfection, and favoring periapical tissue repair [3,4].
Calcium hydroxide is highly recommended and widely accepted as a biocompatible interappointment intracanal medicament [5,6]. Rajasekharan et al. reported that changes in pH, short-term calcium ion release, and maximum release rate were dependent on the exposed surface area, while maximum calcium ion release was dependent on the volume [7]. Therefore, to maximize the penetration and disinfection properties, Ca(OH)2 should ideally be placed deep and in close contact with the canal surface along the working length.
The morphology and complexity of the canal system can also influence the effectiveness of intracanal medicament placement [9]. Mandibular second molars with C-shaped canal configurations are especially common in East Asia [16-18]. The main anatomic feature of the C-shaped canal system is the presence of a fin or web connecting the individual root canals, which can harbor many microorganisms even after shaping [19]. Thus, application of an intracanal medicament is strongly recommended in these cases.
Traditional methods to verify the quality of Ca(OH)2 placement into complex canals include radiographic images, the clearing process, and splitting tooth samples [12,15,20]. However, these techniques may be subjective, lack quantifiability, and have low resolution. Furthermore, studies using traditional methods reported controversial results for different delivery techniques, including the K-type ultrasonic file, Gutta-Condensor, Pastinject, lentulo spiral, injection systems, and others [13,21]. Evaluation was done through scoring by examiners [15,22], calculation of the density of the intracanal medicament and observation of porosities [8,10,13,14,23], and weight calculation [21]. No previous studies have investigated the percentage of the canal surface area contacted and uncontacted by Ca(OH)2, which may more clearly show the contact condition between the intracanal medicament and the canal surface.
Micro-computed tomography (micro-CT) has been used in endodontic studies to analyze changes in canal volume, surface area, and uninstrumented surface area after canal preparation [24-26]. To our knowledge, there have been no studies using micro-CT to evaluate the placement efficacy of Ca(OH)2. In addition, it is unclear whether the lentulo spiral has a significant auxiliary effect on increasing the contact area during placement of intracanal medicament with syringes.
Thus, the purpose of this micro-CT study was to analyze the effectiveness of intracanal placement of Ca(OH)2 in C-shaped canals of mandibular second molars using the syringe method with and without a lentulo spiral.
Methods
The manuscript of this laboratory study has been written according to the Preferred Reporting Items for Laboratory studies in Endodontology (PRILE) 2021 guidelines (Fig. 1) [27].
Specimen preparation
80 mandibular second molars from native Chinese patients, with fused roots, radicular grooves, and no macroscopically visible external defects, extracted because of severe periodontitis, were collected and ultrasonically cleaned. The age and sex of the patients were unknown. The study was approved by the university clinical research ethics board (WCHS-IRB-ST-2017-224). The teeth were scanned by micro-CT (µCT-50; Scanco Medical, Brüttisellen, Switzerland) at an isotropic voxel size of 30 μm at 90 kVp and 200 mA. All data were exported in DICOM format. The teeth were then three-dimensionally reconstructed using VGStudio MAX 2.0 (Volume Graphics, Heidelberg, Germany) software, and the pulp chamber floor (PCF) was investigated. The teeth were divided into two groups according to the classification based on the shape of the PCF by Min et al. [28]: C-shaped PCF and orifice: a peninsula-like floor with a continuous or discontinuous C-shaped orifice, encompassing Types I, II, and III of Min's classification; non-C-shaped PCF and orifice: without a peninsula-like floor and with separated mesial and distal canal orifices, similar to Type IV of Min's classification.
The sample size was calculated based on the data of our pilot study and the calculation method of a previous study [29]. The analysis was performed using the two-independent-means option from the t-test family in G*Power 3.1 software (Heinrich Heine-Universität), with α = 0.05 and 95% power as inputs. Ten was considered the minimum sample size needed to observe a significant difference between groups. Considering the low incidence of non-C-shaped PCF canals in fused roots, a total of 24 teeth (12 per group) in the C-shaped PCF and non-C-shaped PCF groups were selected for the study.
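For readers without G*Power, the normal-approximation version of the same two-independent-means calculation is sketched below (G*Power itself uses the noncentral t distribution, so its result can differ by a sample or two; the effect size d = 1.8 is a hypothetical stand-in for the pilot data, which the paper does not report).

```python
from math import ceil
from scipy.stats import norm

def n_per_group(effect_size_d: float, alpha: float = 0.05,
                power: float = 0.95) -> int:
    """Per-group n for a two-sided two-sample comparison of means
    (normal approximation to the t-test)."""
    z_a = norm.ppf(1 - alpha / 2)
    z_b = norm.ppf(power)
    return ceil(2 * ((z_a + z_b) / effect_size_d) ** 2)

# With a large pilot effect size (hypothetical d = 1.8), n lands near
# the 10-per-group minimum reported in the paper:
print(n_per_group(1.8))  # -> 9
```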
Standard access cavities were prepared in all selected teeth. A size 10 K-type file was introduced into each canal until the tip of the instrument was visible at the apical foramen under a dental operating microscope (DOM) (Pico; Carl Zeiss, Jena, Germany). The working length (WL) was determined by subtracting 0.5 mm from the total canal length. Root canal preparation was performed using a ProGlider™ Rotary Glide Path File sized 16/0.02 up to WaveOne Gold 25/0.07 (Dentsply Maillefer, Ballaigues, Switzerland) mounted on a reciprocating X-Smart Plus motor (Dentsply Maillefer). During mechanical preparation, irrigation was carried out copiously between files with 2 mL of 1% NaOCl. After completion of instrumentation, each canal was flushed ultrasonically for 30 s using an Irrisafe tip (Acteon, Merignac, France) inserted to 1 mm short of the WL and activated at 20% power, without cutting action. Each canal was subsequently irrigated with 5 mL of 1% NaOCl for 1 min and then dried with paper points.
Intracanal medicament placement
All roots were embedded in 3M impression material putty (Express XT, 3M ESPE, St. Paul, MN, USA) to seal the root structure up to the cementoenamel junction (CEJ). UltraCal XS Ca(OH)2 paste (Ultradent Products, South Jordan, UT, USA) was applied using the following techniques: • Group 1: Syringe system with NaviTip needle. The 29-G NaviTip needle (Ultradent Products) was attached to the UltraCal syringe and introduced into the canal 2 mm short of the predetermined canal length. The syringe was then gently pressed and withdrawn from the canal until paste reflow was observed and extrusion was evident at the orifices under the DOM. Subsequently, a cotton pellet was applied to the canal orifice with Caviton (GC Corporation, Tokyo, Japan) as the temporary filling material. The teeth were stored at 37 °C and 100% humidity for two weeks. Two weeks after the placement of Ca(OH)2 paste, the impression material was removed to ensure that the apical foramen was unobstructed, as opposed to being in a sealed state as observed clinically, and a second micro-CT scan was carried out for further 3D measurement and analysis. The Ca(OH)2 paste in all of the teeth was then removed using Irrisafe or ET-20 ultrasonic tips powered by a piezoelectric unit (Acteon, Merignac, France). The remaining paste was further removed using air-scaler-attached EDDY sonic tips (VDW, Munich, Germany), NaOCl, and a gentle up-and-down motion to ensure free movement of the tips. A final flush was then performed in an ultrasonic bath (Fisher Scientific Company, Ottawa, Canada).
During pilot experiments, these procedures led to nearly complete removal of Ca(OH)2, which was also confirmed by micro-CT inspection; the teeth were then reused in group 2. • Group 2: Syringe system with NaviTip needle followed by lentulo spiral. Compared with group 1, after injection of Ca(OH)2 paste, a size 25 lentulo spiral (Dentsply Maillefer) attached to a slow-speed handpiece was inserted passively to 1 mm short of the working length in a clockwise direction at 300 rpm for 10 s. The lentulo spiral was removed, and the UltraCal syringe needle was reinserted into the canal. Ca(OH)2 paste was reinjected until paste reflow was observed and extrusion was evident at the orifices under the DOM. A cotton pellet was then applied to the canal orifice with Caviton as the temporary filling material.
The teeth were stored at 37 °C and 100% humidity for two weeks, and a third micro-CT scan was taken to compare the placement efficacy of the different techniques.
Micro-CT image measurement and analysis
The measurement methodology is based on a technique previously published by our research team [30]. The DICOM files obtained from the scanned batches were subjected to 3D co-registration using the Elastix rigid image registration module integrated into the 3D Slicer software (v4.1.1) (Harvard SPL, Boston, MA, USA). Subsequently, the registered data were imported into VGStudio MAX 2.0 for further processing. A semi-automatic threshold-based segmentation technique was employed to create a 3D model of the canal and Ca(OH)2, allowing for precise measurements (Fig. 2AI, II, III, 2BI, II, III). The region of interest (ROI) for the Ca(OH)2 paste was defined from the canal orifices to 0.5 mm short of the apex (covering the entire canal) and from the canal orifices to the apical 4 mm of the canal. The volume, uncontacted surface area, and their respective proportions relative to the entire canal and the apical 4 mm region were measured. The VGStudio MAX software was employed for the precise measurement of intracanal Ca(OH)2 volume for the various delivery methods. The uncontacted surface area of Ca(OH)2 was determined through the following steps. Initially, the ROI, comprising both the canal walls and the surface of the intracanal Ca(OH)2, was digitally reconstructed and exported to 3D STL format using VGStudio MAX software. Subsequently, the surface area of the canal wall that came into contact with Ca(OH)2 was computed. This process involved the calculation of the canal surface area and a Boolean intersection using Geomagic Studio 2012 software (Raindrop Geomagic, Morrisville, NC, USA), with each voxel representing 30 μm. Where Ca(OH)2 was absent, the Boolean intersection of the canal surface and the contacted surface area of Ca(OH)2 equaled zero.
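The idea of the contacted/uncontacted measurement can be illustrated at voxel level: canal-wall surface voxels are those at the boundary of the canal mask, and a surface voxel counts as "contacted" if a medicament voxel lies in its neighborhood. This is a simplified stand-in for the mesh-based Boolean intersection performed in Geomagic, using hypothetical arrays.

```python
import numpy as np
from scipy import ndimage

def contact_stats(canal: np.ndarray, paste: np.ndarray):
    """Percentage of canal-wall surface voxels contacted by paste.
    canal, paste: boolean 3D voxel masks on the same grid."""
    # Surface voxels of the canal = canal voxels lost under erosion.
    surface = canal & ~ndimage.binary_erosion(canal)
    # A surface voxel is 'contacted' if paste occupies it or a neighbor.
    near_paste = ndimage.binary_dilation(paste)
    contacted = surface & near_paste
    total = surface.sum()
    return contacted.sum() / total * 100, (total - contacted.sum()) / total * 100

# Toy example: a cylindrical 'canal' only half filled with 'paste'.
z, y, x = np.mgrid[0:40, 0:20, 0:20]
canal = (y - 10) ** 2 + (x - 10) ** 2 < 36
paste = canal & (z < 20)
contacted_pct, uncontacted_pct = contact_stats(canal, paste)
print(f"contacted {contacted_pct:.1f}%, uncontacted {uncontacted_pct:.1f}%")
```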
Statistical analysis
The Mann-Whitney test was used to determine the differences between the C-shaped and non-C-shaped groups in the volume and uncontacted surface area of Ca(OH)2 in the entire canal and at the apical 4 mm, and the Wilcoxon signed-rank test was used to determine statistical differences between the two delivery techniques. All statistical tests were performed using SPSS software (SPSS 20.0 for Windows, SPSS, Chicago, IL, USA). A value of P < 0.05 was considered statistically significant.
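The same pairing logic can be reproduced outside SPSS; the sketch below shows why the two tests differ, using hypothetical uncontacted-area values. The Mann-Whitney U test compares the two independent anatomy groups, while the Wilcoxon signed-rank test compares the two techniques within the same (reused) teeth.

```python
import numpy as np
from scipy.stats import mannwhitneyu, wilcoxon

rng = np.random.default_rng(0)
# Hypothetical uncontacted surface areas (mm^2), 12 teeth per group.
c_shaped_syringe = rng.normal(30, 5, 12)
non_c_syringe = rng.normal(29, 5, 12)
# Same teeth re-medicated with syringe + lentulo spiral (paired values).
c_shaped_lentulo = c_shaped_syringe - rng.normal(8, 2, 12)

# Independent groups (different teeth): Mann-Whitney U.
u, p_between = mannwhitneyu(c_shaped_syringe, non_c_syringe)
# Paired measurements (same teeth, two techniques): Wilcoxon signed-rank.
w, p_within = wilcoxon(c_shaped_syringe, c_shaped_lentulo)

print(f"C-shaped vs non-C-shaped: P = {p_between:.3f}")
print(f"syringe vs syringe+lentulo (paired): P = {p_within:.3f}")
```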
Results
The delivery technique using the syringe followed by the lentulo spiral showed significantly higher efficacy of Ca(OH)2 delivery, in terms of volume, uncontacted surface area, and their percentages, in both the C-shaped and non-C-shaped PCF groups compared with delivery by syringe alone (Fig. 2). The medians (interquartile ranges) of the uncontacted surface area of Ca(OH)2 after placement with the two techniques in both the C-shaped and non-C-shaped PCF groups are shown in Fig. 3. There were statistically significant differences between the two delivery techniques in both the C-shaped and non-C-shaped PCF groups (P < 0.001). Specifically, the syringe followed by the lentulo spiral presented a lower uncontacted surface area and a lower percentage of uncontacted surface area than the syringe without the lentulo spiral, regardless of whether the region of interest encompassed the entire canal (Fig. 3A-B) or the apical 4 mm of the canal (Fig. 3C-D) (P < 0.05). The medians (interquartile ranges) of the volume of Ca(OH)2 in the entire canal and the apical 4 mm placed with the two techniques in both the C-shaped and non-C-shaped PCF groups are shown in Fig. 4. Statistically significant differences were found between the two placement techniques in both the C-shaped and non-C-shaped groups, both in the entire canal (Fig. 4A-B) and in the apical 4 mm (Fig. 4C-D) (P < 0.05). Specifically, the syringe followed by the lentulo spiral showed a larger volume of Ca(OH)2 in the entire canal and at the apical 4 mm.
Between the C-shaped and non-C-shaped PCF groups, there were no statistically significant differences in the mean volume, uncontacted surface area, or their percentages for the two delivery techniques, either in the entire canal or at the apical 4 mm (P > 0.05).
Discussion
Ca(OH)2 is the most commonly used intracanal medicament for eliminating residual microorganisms after chemo-mechanical preparation [3,31]. To obtain the maximum therapeutic benefit of Ca(OH)2, the paste must be in direct contact with the root canal walls to allow penetration of OH⁻ and Ca²⁺ ions [22]. Therefore, in the assessment of Ca(OH)2 placement, the uncontacted surface area and volume of Ca(OH)2 are essential parameters.
Previous methods used to assess the efficacy of different Ca(OH)2 delivery techniques include scoring by an examiner [15,22], calculation of the density of Ca(OH)2 and of porosities [8,10,13,14,23], and weight calculation [21]. However, scoring by different examiners can yield inaccurate results, as it is largely subjective. Simcock et al. reported a strong correlation between the radiographic appearance and the weight after Ca(OH)2 delivery into the canal [21]. Calculation of Ca(OH)2 density cannot show the area where the medicament is in effect, as it does not show where Ca(OH)2 is directly in contact with the canal surface. Weight calculation before and after delivery of Ca(OH)2 also has the disadvantage that it cannot separately measure the amount of Ca(OH)2 in each portion of the canal, for instance the apical portion. By using the uncontacted surface area as a parameter, the quality of the application of the medicament on the root canal surface can be represented, especially in the apical portion.
For two decades, micro-CT has been widely used in endodontic research. Imaging software allows researchers to calculate and analyze the canal surface area before and after instrumentation. No previous study has used micro-CT for assessing the uncontacted or contacted surface area and volume of Ca(OH)2 medicament in root canals. In the present study, the surface area of Ca(OH)2 was regarded as the pre-preparation area, and the canal surface area was regarded as the post-preparation area. We applied micro-CT scanning to reconstruct a 3D model, which enabled us to calculate and analyze the contacted surface area of Ca(OH)2 in a more standardized and precise manner.
Ca(OH)2 paste requires effective delivery into the canal to achieve full antibacterial activity. Several techniques are used for Ca(OH)2 placement, among which the syringe and lentulo spiral are commonly advocated clinically [23]. Staehle et al. stated that the syringe system provided superior filling quality with Ca(OH)2 in straight or slightly curved root canals; the addition of the lentulo spiral was reportedly more effective in filling curved canals [12]. Similar results were also reported in other studies [8,31,32].
UltraCal XS Ca(OH)2 paste is a commonly used commercial paste with a syringe system. The NaviTip™ tip is relatively small and can be inserted close to the apex of root canals. Thus, two delivery techniques were utilized in the present study, namely the NaviTip syringe and the NaviTip syringe used in conjunction with the lentulo spiral. The results of our study indicated that using the lentulo spiral can help to reduce the uncontacted surface area of Ca(OH)2 in root canals. This is in agreement with Torre et al.'s study, which compared Ultradent tips, Ultradent tips used in conjunction with the lentulo spiral, and the lentulo spiral used alone [14]. Their results showed that Ultradent tips used in conjunction with the lentulo spiral and the lentulo spiral alone were significantly more effective than Ultradent tips alone in the apical 3 mm and 1 mm. Similarly, the syringe-lentulo spiral group in the study by Tan et al. showed a greater mean radiodensity than the syringe-#25 finger spreader group at all levels [23]. As expected, in our study, differences in delivery techniques resulted in variations in the uncontacted surface area of Ca(OH)2 in C-shaped canal systems with both C-shaped and non-C-shaped PCF. The group in which the NaviTip syringe was used in conjunction with the lentulo spiral showed a significantly lower uncontacted surface area. This could be due to the rotation of the lentulo spiral, which may help to remove small entrapped air bubbles within the canal and spread the Ca(OH)2 into the irregular isthmuses and fins of C-shaped canals, thus allowing better distribution and contact of Ca(OH)2 with the canal surfaces.
Apart from delivery techniques, the placement quality of Ca(OH)2 also depends on the canal morphology. C-shaped canal systems, which usually consist of 2 or 3 "main" canals interconnected by fins and isthmuses in the form of a C-configuration in the cross-section of the root, have been reported to retain large amounts of debris and uninstrumented areas after preparation [19,33-35]. Studies have shown that the use of rotary instruments does not improve the situation, and no preparation was evident at the interconnecting fins or isthmuses, which especially highlights the importance of intracanal medication [36]. Furthermore, the continuing aging of the East Asian population further complicates this issue, as not only endodontists but also general dentists may face more and more root canal therapy (RCT) cases involving C-shaped canals owing to extensive caries or abrasion.
Most of the previous studies that focused on intracanal medication delivery techniques were related to canals other than C-shaped canals [21, 23, 37]; no comparable data have been reported for C-shaped canals. Therefore, in the present study, mandibular second molars with a C-shaped canal system were used to assess the effect of complex canal anatomy on the placement quality of Ca(OH)2. Based on our pilot study, most of the Ca(OH)2 was removed from the canals by the cleaning protocol mentioned above, and the canal space showed no statistical difference from the primary volume; therefore, in the current study, the samples were reused in the lentulo spiral groups to minimize the differences caused by the complex C-shaped canal morphology. There may be potential differences in the pre-medicament baseline between the two techniques; nevertheless, any discrepancies arising from the minute remaining paste were considerably smaller than those arising from using an entirely different tooth for comparison. In addition, as the C-shaped canal serves as a well-known 'complex model', the research methods reported for C-shaped canal systems in this study can also be easily adapted to studies of other complex human hard tissue anatomy, or to benchmark requests from the industrial community regarding related clinical procedures, such as the ultrasonic activation/irrigation benchmark test described in this study [38].
The findings of our study indicated that, under the investigated conditions, the uncontacted surface area was not influenced by the shape of the PCF in either the group filled with the NaviTip syringe system or the group filled with the NaviTip syringe used in conjunction with the lentulo spiral. This finding broadly supports the work of Rahde et al., who reported that the curvature did not influence the filling quality of Ca(OH)2 in extracted human teeth [20]. However, conflicting results were reported by Sharifi et al., who found that the curvature affected the density of Ca(OH)2 [9]. An explanation for this disparity could be the differences in experimental specimens: Sharifi et al. used simulated canals in resin blocks to assess the effect of the canal curvature, while Rahde used extracted human first molars and single-rooted teeth [9,20].
Contrary to expectations, this study did not find a significant difference in the uncontacted surface area of Ca(OH)2 between the C-shaped and non-C-shaped PCF groups. This result may be attributed to the complexity of the root canal systems, as numerous ramifications and isthmuses can exist in C-shaped canal systems with both C-shaped and non-C-shaped PCF. Development of novel delivery techniques is needed to improve the application of intracanal medicaments in C-shaped canals.
This study showed that an appropriate delivery technique is essential for the clinical application of Ca(OH)2 for canal disinfection, especially in complex canal anatomies. The limitations of this study are the small sample size and the use of only one kind of Ca(OH)2 paste; another limitation is that only the C-shaped canal system was studied, so caution is needed when extrapolating the results to other canals.
Conclusions
Although complete contact between Ca(OH)2 and the canal surface is not yet achievable, the combined technique of lentulo spiral and syringe can significantly increase the surface of the root canal wall contacted by Ca(OH)2 in the C-shaped canal system of mandibular second molars.
Fig. 2
Fig. 2 Micro-CT three-dimensional reconstructions of Ca(OH)2 placement in C-shaped or non-C-shaped PCF mandibular second molars. (A-B) I. Red parts in the models represent the initial post-preparation canal surface area in non-C-shaped PCF and C-shaped PCF, respectively. II-III. Grey parts represent Ca(OH)2 placed into the canal using the different delivery techniques. IV-V. Green and red parts represent the surface area of the Ca(OH)2 that was and was not in contact with the initial canal surface, respectively. II, IV. Ca(OH)2 placed with syringes alone. III, V. Ca(OH)2 placed with syringes and lentulo spirals. (C-D) Cross-sectional views of the coronal, middle, and apical thirds of the roots.
Fig. 4
Fig. 4 Ca(OH)2 volume (mm³) and percentage of volume (%) with and without using the lentulo spiral. (A-B) Volume (mm³) and percentage of volume (%) of Ca(OH)2 in the entire canal with C-shaped or non-C-shaped PCF. (C-D) Volume (mm³) and percentage of volume (%) of Ca(OH)2 in the apical region with C-shaped or non-C-shaped PCF. *P < 0.05, **P < 0.01, ***P < 0.001
Immunization against Pseudomonas aeruginosa using Alg-PLGA nano-vaccine
Objective(s): Pseudomonas aeruginosa is a bacterium that causes pulmonary infection among chronically hospitalized patients. Alginate is a common surface antigen of P. aeruginosa with a constant structure, which makes it an appropriate target for vaccines. In this study, P. aeruginosa alginate was conjugated to PLGA nanoparticles, and its immunogenicity was characterized as a vaccine. Materials and Methods: Alginate was isolated from a mucoid strain of P. aeruginosa and conjugated to PLGA with N-(3-dimethylaminopropyl)-N′-ethylcarbodiimide hydrochloride (EDAC) and N-hydroxysuccinimide (NHS). Chemical characterization of the prepared nano-vaccine was performed using FTIR spectroscopy, a Zetasizer, and atomic force microscopy (AFM). The immunogenicity of this nano-vaccine was evaluated through intramuscular injection into BALB/c mice. Four groups of mice were injected, and two weeks after the last administration step, an opsonophagocytosis assay, IgG detection, challenge, and cytokine determination via ELISA were carried out. Results: Alginate-PLGA conjugation was corroborated by FTIR, Zetasizer, and AFM. The ELISA results showed that alginate succeeded in inducing humoral immunity, and immunogenicity was enhanced with alginate-PLGA. The bacterial titer in the spleens of immunized mice after challenge with the PAO1 strain was remarkably diminished in comparison with the alginate-alone and control groups. Conclusion: The bacterial burden in the spleen significantly decreased after the challenge (P<0.05). The opsonic activity was significantly increased in the alginate-PLGA group (P<0.05).
Introduction
Mucoid Pseudomonas aeruginosa is among the most important causes of pulmonary infection in cystic fibrosis (CF) patients, and infection in these patients correlates with alginate-producing P. aeruginosa. Bronchial obstruction due to viscous mucus secretion and poor inhibition of colonization by mucoid P. aeruginosa result in a high incidence of pulmonary infection (1-4).
P. aeruginosa strains often have mucoid colony morphologies due to the overproduction of alginate. Alginate, a linear copolymer of partially O-acetylated β-1,4-linked D-mannuronic acid and L-guluronic acid, is the main component of the P. aeruginosa biofilm matrix. Alginate, also called mucoid exopolysaccharide (MEP), is produced by mucoid strains of P. aeruginosa; it significantly increases the resistance of these organisms to antibiotic treatment and host defenses and decreases the chemotaxis of polymorphonuclear cells. It prevents complement activation and suppresses phagocytosis by neutrophils and macrophages, especially in CF patients (5-11).
Alginate is not solely expressed in mucoid strains. Indeed, its synthesis is also increased in non-mucoid strains of P. aeruginosa exposed to a hypoxic milieu. This phenomenon is observed in the lungs of CF patients within the mucus plugs in the airway (12).
As alginate shows very slight structural variation, it is considered an appropriate target for developing new vaccines. Different alginate vaccines for P. aeruginosa exist. To enhance its immunogenicity, alginate has been conjugated to carrier proteins (for instance, binding thiolated MEP to keyhole limpet hemocyanin (KLH), covalent coupling to P. aeruginosa toxin A, or the alginate-tetanus toxoid (TT) conjugate, etc.) (13-18, 20, 22, 26, 38, 46).
In the present study, we built and characterized candidate vaccines based on P. aeruginosa alginate using PLGA nanoparticles. The efficacy of the Alg-nanoparticles in stimulating immune responses was then assessed in vivo and in vitro.
Bacterial strains
In this research, alginate was produced using two standard strains of P. aeruginosa, i.e., PAO1 and the mucoid strain 8821M (kindly provided by Dr Sobhan Faezi, Medical Biotechnology Research Center, Paramedicine Faculty, Langarud, Iran). Note that strain 8821M was used only for the extraction of alginate, while the PAO1 strain was used for the other tests, such as the challenge, opsonophagocytosis, antibody titer, and cytokine examinations.
Extraction and purification of alginate
Alginate was purified according to the protocol proposed by Hatano et al. with some modifications. The PAO1 strain of P. aeruginosa (ATCC 15442) was inoculated into a synthetic medium (pH 7.5) containing 10.1 ml/l glycerol, 0.5 g/l glucose, 0.37 g/l L-glutamine, 0.6 g/l Na2HPO4, 0.12 g/l K2HPO4, and 0.13 g/l MgSO4·7H2O (all materials were purchased from Merck, Germany) at 37 °C for three days. The culture was inactivated by adding 4.5 ml of 90% Savlon and incubating at 60 °C for 15 min, followed by repeated centrifugation (18,000 × g, 4 °C, 30 min) to pellet the bacterial cells. Thereafter, the resulting supernatant containing alginate was collected, filtered, and incubated at 4 °C for at least 8 hr.
After adding ice-cold ethanol (Merck, Germany) at three times the volume of the supernatant and incubating at 4 °C overnight, the precipitated alginate was collected by centrifugation (3,500 × g, 4 °C, 15 min), dissolved in Tris buffer (pH 8.0) containing 5% SDS (Merck, Germany), 10 mM CaCl2 (Merck, Germany), and proteinase K (10 μg/ml, Bioneer, South Korea), and incubated at 56 °C for 2 hr. To remove any remaining DNA and RNA contamination, DNase I and RNase A (at 100 mg/ml, Bioneer, South Korea) were used.
A sample : phenol : chloroform mixture (2:1:1) was prepared and incubated at 60 °C for 45 min. After centrifugation (40,000 × g, 22 °C, 20 min), the supernatant was collected and mixed with an equal volume of chloroform.
After 8 min of incubation, the tube was centrifuged (40,000 × g, 22 °C, 40 min), and the sample was then dialyzed against dH2O for three days and finally lyophilized. For alginate isolation, the sample was applied to an XK 16 column (2.6 × 100 cm) packed with Sephacryl S-400 gel filtration medium (GE Healthcare, Life Sciences, Swaziland). The eluted fractions were evaluated for uronic acid content at 595 nm (23)(24)(25).
Conjugation of alginate to PLGA
Conjugation of alginate to PLGA was performed according to the protocol proposed by Safari Zanjani and Azimi (61), with minor modifications, replacing the antigen with alginate at the end of the protocol. Chemical characterization of the conjugated alginate-PLGA was then carried out (19, 21).
Immunization of mice
Mice (BALB/c, weighing 20 to 25 g, 6-8 weeks old) in four groups (Alg-PLGA, alginate, PLGA, and PBS as the control group) were immunized intramuscularly. Each group included six mice (three for bacterial enumeration in the spleen and three for sera isolation, opsonic killing activity, antibody titers, and cytokine responses). The mice in each group were immunized with 10 μg (in accordance with the standard) of their corresponding antigen.
These mice were immunized three times at two-week intervals. Prior to the first immunization and two weeks after each immunization, blood samples were collected and sera were isolated by centrifugation (41, 42, 51, 54).
Enzyme-linked immunosorbent assay
Antibody titers against the nano-vaccine were detected by ELISA as described by Safari Zanjani and Azimi (61, 62).
Challenge test
The Pseudomonas aeruginosa inoculum was grown in BHI broth from a fresh overnight culture of P. aeruginosa PAO1 on BHI agar, under agitation (180 rpm) at 37 °C for 3-4 hr. The cells (OD 620 nm = 0.18) were centrifuged and re-suspended in sterile BHI broth.
Serial plating dilutions were used to determine the number of bacteria. Fourteen days after the last immunization, all mice (the Alg-PLGA, Alg alone, PLGA, and control groups) were challenged with the PAO1 strain of P. aeruginosa (1.5 × 10^8 CFUs) by intraperitoneal injection.
At 72 hr after the challenge, the mice were killed, and their spleens were harvested and homogenized in 10 ml of PBS (pH 7.4). Finally, serial dilutions of the homogenates were plated onto nutrient agar in triplicate, and CFUs were counted after two days of incubation at 37 °C (32, 47-50).
Opsonophagocytosis test
The opsonophagocytosis test was performed according to the method described by Safari Zanjani and Azimi (61, 62).
Cytokine test
Mice were injected intramuscularly three times at fourteen-day intervals. Fourteen hours after the last injection, blood samples were taken, centrifuged (10,000 × g, 15 min), and frozen before the cytokine test. The cytokines TNF-α, IL-4, IL-17A, and IFN-γ were quantified by enzyme-linked immunosorbent assay. Cytokine tests were performed using different kits (all from Mabtech Ebioscience R&D, USA) according to the manufacturers' instructions.
Statistical review
For the statistical analysis, GraphPad software version 6.0 for Windows (San Diego, CA, USA) was used. Data were compared using Tukey's test (ANOVA). Kaplan-Meier survival curves and the log-rank test were used to compare the different groups. All data are expressed as mean±SD, and P-values less than 0.05 were considered significant.
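As an illustration of this analysis, the following is a minimal Python sketch of a one-way ANOVA with Tukey's post hoc test across four groups, the comparison the paper ran in GraphPad Prism; the group labels match the study's groups, but the CFU values are simulated for the sketch, not the study's data.

```python
# Illustrative Python equivalent of the paper's statistics (the authors used
# GraphPad Prism 6). The spleen CFU values below are simulated, not the
# study's measurements.
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

rng = np.random.default_rng(0)
# Hypothetical log10 spleen CFU counts for the four immunization groups
groups = {
    "Alg-PLGA": rng.normal(3.0, 0.3, 6),
    "Alg":      rng.normal(4.2, 0.3, 6),
    "PLGA":     rng.normal(5.5, 0.3, 6),
    "PBS":      rng.normal(5.6, 0.3, 6),
}

# One-way ANOVA across the four groups
f_stat, p_val = stats.f_oneway(*groups.values())
print(f"ANOVA: F={f_stat:.2f}, p={p_val:.4g}")

# Tukey's HSD post hoc test for all pairwise comparisons (alpha = 0.05)
values = np.concatenate(list(groups.values()))
labels = np.repeat(list(groups.keys()), [len(v) for v in groups.values()])
print(pairwise_tukeyhsd(values, labels, alpha=0.05))
```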
Nanoparticle characterization and analysis
Based on the results of size exclusion chromatography on Sephacryl S-200 HR, the alginate-PLGA fractions (fraction numbers 21 and 22), which contained high levels of nano-conjugates, were collected (Figure 1). These fractions were selected, and the average hydrodynamic size was measured with a Zetasizer instrument. The size and surface charge of the nanoparticles before binding to alginate were 109 nm and -4.51 mV, respectively; after binding, the alginate-nanoparticles measured 465.5 nm and -5.21 mV, respectively. These data indicated that binding was successfully achieved.
An FTIR test was performed on the PLGA fractions. According to the FTIR data, wavenumbers from 1550 to 1810 cm-1 were indicative of carbonyl groups (at 1692.43 cm-1 and 1779.84 cm-1 for the nanoparticles and alginate, respectively). After conjugation of alginate to the PLGA nanoparticles, peaks of the alginate-PLGA samples were observed at 1173.47 cm-1 and 1099.23 cm-1 for the nanoparticles and alginate, respectively. These shifts in wavenumber were indicative of the formation of ester bonds in the alginate-nanoparticles, verifying the success of the conjugation protocol (Figure 2).
The three-dimensional surface topography of alginate, PLGA, and alginate-PLGA nanoparticles was examined using atomic force microscopy (AFM). According to the results, the size of the nanoparticles before alginate conjugation was in the range of 12 nm to 24 nm (Figures 3A and B). After conjugation, the size increased to 160-198 nm in the alginate-PLGA sample (Figures 3C and D).
The shapes of the surface binding grooves on the nanoparticles were also measured by AFM. After conjugation, the grooves in alginate-PLGA were rounded, whereas before conjugation these sites were sharp in PLGA. These data indicated a successful connection between alginate and PLGA.
Antibody responses to immunization
Anti-alginate IgG was significantly raised in the mice that received alginate-PLGA, compared to all other groups of mice (Figure 4). These results indicated that alginate and alginate-PLGA are both suitable immunogens, though alginate-PLGA was more successful in stimulating humoral immunity. No significant difference was found between the PLGA-alone group and the control (PBS) group at the 0.05 level.
Challenge test
To examine whether the specific antibodies raised against the conjugated alginate-nanoparticles could inhibit the spread of P. aeruginosa to internal organs, we assessed the spread of infection by determining bacterial loads in the spleen. At 72 hr after infection with the PAO1 strain of P. aeruginosa, the P. aeruginosa burden was measured in the spleens of the immunized mice.
Significantly decreased bacterial titers were observed in the spleens of the immunized mice infected with the PAO1 strain, compared to the PBS (control) and PLGA groups (Figure 5). Furthermore, we found that the antibodies raised against the alginate-nanoparticles were significantly more effective in decreasing the bacterial load than those of the alginate group (P<0.05). We also observed that the mean difference between the control group and the other groups was significant (P<0.05).
Opsonic killing activity
Opsonophagocytosis experiments were conducted to measure the functional activity of the specific antibodies, which can mediate P. aeruginosa uptake by phagocytes; this function correlates with the clearance of infection. Antisera from the intramuscularly immunized mice were collected, and opsonic killing activity was determined in vitro.
The antisera of immunized mice that had received alginate-PLGA were associated with significant killing levels of about 91% at a 1:4 dilution (Figure 6), suggesting that the candidate vaccine had induced a potent P. aeruginosa PAO1-specific antibody response.
In the mice of the control group, only 2.3% phagocytic activity was observed. A significant difference was observed between the results of the alginate-nanoparticles and PLGA groups (P<0.05). We also observed a significant mean difference between the PBS group and the other groups (P<0.05).
Figure 6. The opsonic killing activity of specific antisera against Pseudomonas aeruginosa strain PAO1. Fourteen days after the third injection, sera from each experimental group were collected and pooled. A remarkable opsonic killing activity was observed when the conjugated nanoparticle antiserum was applied to the bacterial strain. Bars represent means of duplicate determinations, and error bars indicate SD. Results were considered significant at P<0.05; * indicates a significant difference (P<0.05) between the alginate-nanoparticles and the other experimental groups, and the differences between the PBS group and the other groups were also significant (P<0.05).
Cytokine responses to immunization
The cytokine profiles in the blood samples of mice in each immunized group were determined fourteen hours after the last boost immunization. Cytokine levels in the isolated sera of each group were determined through ELISA. IFN-γ mediates the TH1-cell response, while IL-4 enhances the TH2 response. According to the cytokine results, with alginate immunization only IL-4 was significantly elevated (P<0.05) compared to the control groups (PLGA and PBS), and alginate had no influence on the TNF-α, IL-17A, and IFN-γ cytokine responses. However, as shown in Figure 7, alginate-PLGA significantly increased the levels of TNF-α, IL-4, IL-17A, and IFN-γ (P<0.05) in the immunized groups, compared to the control (PLGA and PBS) and alginate groups.
Figure 4. Evaluating the antibody titer to alginate in the immunized mice. Alg-specific IgG values are the means for three mice per experimental group. Bars represent mean±SD from three mice per group. ND denotes a non-detectable difference; * denotes significance (P<0.05) between Alg-nanoparticles and the other groups. There was no significant difference between nanoparticles alone and the control group (PBS).
Figure 5. Challenge test. 72 hr after infection with the PAO1 strain of Pseudomonas aeruginosa, the spleens of mice were removed and homogenized. Serial dilutions of the homogenates were plated for P. aeruginosa enumeration. Bars represent means of duplicate determinations, and error bars indicate SD. * indicates significance (P<0.05) between Alg-nanoparticles and the other groups; the differences between the control group (PBS) and the other groups were also significant.
These results indicate that conjugating alginate to PLGA improved the cytokine response and engaged additional response pathways in the immunized mice.
Discussion
P. aeruginosa is a bacterium that causes life-threatening infections in patients with cystic fibrosis (CF). These infections are partly due to the ability of alginate to create biofilms, which display tolerance and resistance to antimicrobial agents. Alginate plays a virulence role in the adherence and colonization of this bacterium in the respiratory epithelium, without direct toxic activity in cells (34)(35)(36)(37)(38)(39). Therefore, the control and prevention of P. aeruginosa infection is a major concern, and alginate, as a surface antigen, is a good vaccine candidate. Several studies have shown that the protective epitope of alginate in PAO1 strains can efficiently stimulate the immune response and support vaccine development for patients with CF infection (35, 36, 39, 40, 44).
Immunity against alginate can be effective in eradicating P. aeruginosa from patients with CF. Introducing effective, safe, and inexpensive carrier systems is one of the key issues in developing vaccine candidates (45)(46)(47)(48)(49).
It is known that the efficacy of a vaccine candidate strongly depends on selecting an appropriate carrier. PLGA forms biodegradable polymeric matrices. As a result, vaccines composed of PLGA represent a new class of vaccine that is more efficient, safer for the organs, and more economical than conventional vaccines (52, 53, 56, 57-60).
Therefore, in this research, we constructed and characterized novel nanoparticle-based vaccine candidates containing alginate from P. aeruginosa, and we analyzed the candidate Alg-PLGA vaccine against this bacterium. A challenge test was carried out in the immunized mice with the PAO1 strain of P. aeruginosa, and the functional activity of the conjugated PLGA was measured based on the in vitro opsonophagocytosis test, antibody titers, and cytokine responses.
Cytokines have various functions in hosts, depending on the bacterial antigen type and site.
The physicochemical studies showed that conjugation was successfully achieved. Furthermore, these candidate vaccines showed advantages such as appropriate antigen delivery, non-toxicity, and induction of strong immune responses using low antigen levels.
Other benefits of these vaccines include increased drug absorption and penetration, presentation of alginate to B lymphocytes for the stimulation of specific antibody responses, and phagocytosis mediated by cytotoxic T lymphocytes.
The immunization study showed that the Alg-PLGA group was more immunogenic than the Alg-alone group. Monitoring of the antibody and cytokine responses indicated that immunogenicity increased considerably for the alginate-nanoparticle conjugate compared to pure alginate in the mouse model. With the candidate vaccine, pathogenic P. aeruginosa stimulates TH2 cells.
The TH2 cells then produced IL-4, IFN-γ, and IL-17A cytokine responses to clear P. aeruginosa strain PAO1, which mediated the recruitment and infiltration of polymorphonuclear leukocytes such as neutrophils. In addition, the production of TNF-α, a mediator of septic shock, and of IFN-γ leads to activated macrophages and, subsequently, opsonophagocytosis of P. aeruginosa.
A main finding of this research is that the candidate vaccine (Alg-PLGA) stimulated the TLR and NodX motifs through an as yet unknown mechanism. NF-κB transcription is then activated, and cytokine genes such as TNF-α and IFN-γ are expressed through synergistic mechanisms.
IFN-γ stimulates innate immunity through macrophage receptors. These data suggest activation of macrophages, leading to the removal and clearance of P. aeruginosa from the host blood. The alginate-PLGA sample raised a broad immune response, with a high antibody titer and activation of cell-mediated immune responses through different pathways, and elevated the opsonic killing activity compared to the alginate-alone vaccine. In this study, we also observed a significant increase in antibody titers and cytokine responses.
Conclusion
To conclude, the conjugation of P. aeruginosa alginate to the nanoparticles, with ethyl-3-(3-dimethylaminopropyl) carbodiimide as the spacer molecule, increased the functional activity of the nano-vaccine by decreasing bacterial propagation and increasing the killing of opsonized bacteria. This research provides a basis for the subsequent development of a candidate vaccine for possible use in humans to defend them against infection with this bacterium.
The effectiveness of a recreational sports program to enhance tolerance among students of Al-Qasim Green University in Babylon, Iraq
The current research aimed to identify the effectiveness of a recreational sports program to enhance tolerance among students of Al-Qasim Green University. The researcher used the experimental approach with a two-group design (experimental and control). The study sample consisted of (82) students: (41) students in the experimental group and (41) students in the control group.
Introduction
The phenomenon of recreation has become one of the social systems that prevail in almost all societies. Interest in it began as one of the manifestations of the civilized behavior of the individual, and that interest has grown as its types and fields have multiplied to fill leisure time. Recreation has certainly made great strides during the last two centuries, and its various aspects and means have developed significantly, especially with regard to methods and curricula of education and training. Nowadays, experts and researchers in sports, recreation, and related fields provide the latest recreational methods and approaches, based on a number of sciences and on field research that places the practicing individual at the center, which is why developed countries witness remarkable development in the field of recreation and attract high levels of participation; indeed, we can now get to know the civilization of societies by identifying the tools and means they use in recreation. Sports activity is one of the most widespread forms of physical recreation among young people, especially in educational institutions and schools. What helps here is that sports activity is a form of active, positive rest that constitutes an important use of leisure time; in addition, it improves the health and physical level of the individual, builds good stamina, gives joy and pleasure, relieves fatigue and resentment, and makes the individual capable of work and production. If recreational sports activity constitutes an essential axis of the individual's life, it is all the more appropriate for it to be an important field in education, as all the educational processes and methods used in raising this group are based mainly on play, activity, and movement in order to prepare the individual to take his place.
The individual is thus prepared to take his place in the social world as a useful member within the limits of his personal capabilities, and is given the opportunity to build his physical, mental, and social capabilities and to meet the material and moral demands of his environment. Our modern world is full of cultural, social, psychological, and moral changes, and these changes find expression in young people; by virtue of their age, young people represent the orientation towards the future, so it is especially important for them to sense these changes, live with them, and actively participate in their implementation. Perhaps the psychological, social, and intellectual challenges that our current society faces are what make it necessary to develop behaviors such as tolerance among young people. During their academic life, university students face many pressures, problems, and events in the learning environment, whether arising from students dealing with each other or from the general school climate; this requires the student to have a set of characteristics that enable him to perform his role efficiently and effectively, help him regulate his emotions well, and confront negative ones. In addition, our society is currently experiencing a series of changes that have been reflected, in one way or another, in the behavior of individuals and their attitudes towards each other: increasing differences in visions, intellectual disagreement, a lack of acceptance of the other, and an unwillingness to tolerate, despite the importance of this variable both for the individual's personality and for his relationships with others. The results of studies on tolerance indicate its association with many variables. Maltby et al. (2001) [21] concluded that there is a positive and significant relationship between psychological well-being and tolerance of oneself, tolerance of the other, and tolerance of attitudes; the most tolerant individuals are less anxious, depressed, and angry, and tolerance is one of the important variables in maintaining mental health, with individuals more willing to tolerate also being more loyal. McCullough and Witvliet (2002) indicated that tolerance contributes 25-49% of the total variance in predicting psychological well-being, and Orathinkal and Vansteenwegen (2006) [22] confirmed that tolerance can lead to an increase in positive emotions and to improved, healthy, positive relationships. Through their work in teaching university students, the researchers noticed behaviors from some students characterized by intolerance and non-acceptance of the other.
Research problem
The problem of the current study is to answer the following questions:
1. What is the effectiveness of the recreational program in promoting tolerance among students of Al-Qasim Green University in Babil Governorate, Iraq?
2. What is the continuity of the effectiveness of the recreational program in enhancing tolerance among students of Al-Qasim Green University in Babil Governorate, Iraq?
Research objectives
• Building a recreational sports program to enhance the values of tolerance among a sample of Al-Qasim Green University students.
• Evaluating the effectiveness of the recreational sports program in promoting tolerance among a sample of Al-Qasim Green University students.
• Examining the continuity of the effectiveness of the recreational sports program in promoting tolerance among a sample of Al-Qasim Green University students.
Research importance
The importance of the current research is evident in the following:
• The results of the current research add to the body of Arab knowledge and research, which is scarce in this field.
• Promoting the values of tolerance among university students.
Research hypotheses
• There are statistically significant differences between the mean scores of the students of the experimental group in the pre- and post-test of the tolerance scale, in favor of the post-test.
• There are statistically significant differences between the mean scores of students of the experimental and control groups in the post-test of the tolerance scale, in favor of the experimental group.
• There are no statistically significant differences between the mean scores of the experimental group students in the post- and follow-up tests of the tolerance scale.
Terms used in the research
Tolerance
The researchers define tolerance as having cognitive, emotional, and behavioral components, evident through the student's self-acceptance and leniency with himself, tolerance and acceptance of others, respect for their differences from him, forgiveness of those who offend him, and avoidance of anger and violence in the different life situations he goes through. It is determined procedurally by the total score obtained by the student on the tolerance scale prepared for the current research.
Sports recreational program:
Tahani Abdel-Salam (2001, 233) [3] defines it as a set of recreational and sports activities organized under the supervision of a recreation leader to achieve the goal of recreational education, which is to change members' leisure-time behavior into optimal behavior by developing information and skills and by fostering positive attitudes towards the use of free time.
The basic research sample was chosen intentionally from students of the College of … at Al-Qasim Green University, from the research community, after applying the tolerance scale. It consisted of (82) students whose ages ranged between (18-22) years, with a mean chronological age of (19.34) years and a standard deviation of (0.59).
The equivalence of the data of the research sample
The researchers verified the equivalence between the students of the control and experimental groups in some variables that might affect the experimental variable, namely the growth measures (age, weight, height) and the tolerance scale scores of Al-Qasim Green University students. From Table (1), it is clear that there are no statistically significant differences between the mean scores of the students of the experimental and control groups in the measured variables; therefore, the control and experimental groups are equivalent in the pre-test.
Devices and tools used
The researchers used the tools and means that help achieve the goal of the research, identifying the tools, devices, and measures appropriate to the subject of the study as follows. Tests applied to the students of the basic sample:
1. Height: using a stadiometer (centimeters).
2. Mass: using a medical scale (kilograms).
3. Age: via college records (date of birth).
4. Tolerance scale (prepared by the researchers).
Sources of preparing the scale
The researchers prepared the tolerance scale used in the current research in light of the following sources:
… [20].
3. Tolerance Scale (Zainab Mahmoud Shuqair, 2010) [7].
4. Tolerance Scale (Bushra Ismail, 2011) [2].
5. A review of previous studies that used scales to measure tolerance.
Scale description
It is a self-report measure that assesses a person's general tendency to forgive rather than an attitude towards a particular event, person, or situation. The scale has a total score and consists of three sub-dimensions (tolerance with others, self-tolerance, and tolerance with different attitudes). The scale consists of (21) statements distributed equally over its sub-dimensions, each answered on a three-choice scale ranging from "completely" (3) to "not at all" (1).
Subscale dimensions
The tolerance scale consists of three sub-dimensions:
1. Tolerance towards others.
2. Self-tolerance.
3. Tolerance of different situations.
Scientific coefficients of the scale
Scale validity
Factor validity: factor analysis seeks to identify underlying variables (factors) that explain the pattern of associations among many variables; it is used to reduce and summarize data by identifying a few factors that explain the variance observed in a much larger number of variables.
To calculate the factor validity of the tolerance scale, the researchers used exploratory factor analysis with the principal components method, rotating the axes with the varimax method. The researchers also used Bartlett's test of sphericity to ensure that the correlation matrix is not equal to the identity matrix (Field, 2009, 648). The result of Bartlett's test was statistically significant at the (0.01) level, indicating that the correlation matrix is free of complete correlation coefficients, i.e., it is not equal to the identity matrix and there are correlations between some variables in the matrix, which provides a statistically sound basis for using factor analysis. To determine the factor to which each phrase belongs, the researchers used the following criteria:
• The phrase is classified under the factor on which it achieved the highest loading.
• The loading of the phrase on the factor should be at least (0.30).
• The content of the phrase should correspond to the contents of the phrases belonging to the same factor (Fouad Abu Hatab and Amal Sadeq, 1991, 640-641).
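As an illustration of the validity pipeline just described (Bartlett's test of sphericity, principal-components extraction, varimax rotation, and the 0.30 loading criterion), here is a minimal Python sketch using the factor_analyzer package; the 21-item responses are simulated, not the study's data.

```python
# Illustrative sketch of the factor-validity analysis described above, using
# the factor_analyzer package. The 21-item responses are simulated, not the
# study's actual data.
import numpy as np
import pandas as pd
from factor_analyzer import FactorAnalyzer
from factor_analyzer.factor_analyzer import calculate_bartlett_sphericity

rng = np.random.default_rng(1)
# Simulated responses of 330 students to 21 three-point Likert items
items = pd.DataFrame(rng.integers(1, 4, size=(330, 21)),
                     columns=[f"item{i+1}" for i in range(21)])

# Bartlett's test of sphericity: is the correlation matrix an identity matrix?
chi2, p = calculate_bartlett_sphericity(items)
print(f"Bartlett chi2={chi2:.1f}, p={p:.4g}")

# Extract three factors (principal method) with varimax rotation, as in the paper
fa = FactorAnalyzer(n_factors=3, rotation="varimax", method="principal")
fa.fit(items)
loadings = pd.DataFrame(fa.loadings_, index=items.columns)

# Assign each item to the factor with its highest loading, keeping only
# loadings of at least 0.30 (the paper's criterion)
best = loadings.abs().idxmax(axis=1)
kept = loadings.abs().max(axis=1) >= 0.30
print(pd.DataFrame({"factor": best, "kept": kept}))
```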
It is clear from Table (2) that
The first factor: (7) phrases loaded on it, its latent root (eigenvalue) was (4.28), and it explained (16.68%) of the variance in the performance of the survey sample on the scale; its phrases indicate acceptance of oneself and ease in discarding negative emotions towards oneself. Accordingly, this factor can be called "self-tolerance". The second factor: (7) phrases loaded on it, its latent root was (3.98), and it explained (15.54%) of the variance in the performance of the survey sample on the scale; its phrases indicate always looking at the good sides of the offender and easily discarding negative emotions and negative thinking towards him. Therefore, this factor can be called "tolerance towards others". The third factor: (7) phrases loaded on it, and its latent root was (4.09); it explained (15.95%) of the variance in the performance of the survey sample on the scale, and its phrases indicate a personal willingness to tolerate that appears through situations. Accordingly, this factor can be called "tolerance of attitudes". The total explained variance of the scale was (48.17%). Acceptable, statistically significant loadings should not be less than (0.30); accordingly, it is clear from the previous table that the items of the tolerance scale showed loadings above (0.30) on the three factors, and they are therefore statistically significant loadings (Saud bin Dahian and Ezzat Abdel Hamid, 2002, 206). By calculating the validity of the tolerance scale using the factor validity method, it is clear that the scale has an acceptable validity coefficient; this indicates that it can be used in the current research and that the results produced by the research can be trusted.
Scale reliability
Cronbach's alpha reliability coefficient
The researchers calculated the reliability of the tolerance scale using Cronbach's alpha method; the following table shows the values of the reliability coefficients, by the Cronbach's alpha method, for each statement, together with the reliability coefficient of the tolerance scale as a whole. If the alpha coefficient computed with a statement deleted is less than the value of Cronbach's alpha for the scale as a whole, this means that the item is important and that its removal from the scale would negatively affect the reliability coefficient (Field, 2009). It is clear from Table (3) that the statements of the tolerance scale have reliability coefficients lower than the reliability coefficient of the scale as a whole, which is (0.827). From the foregoing, by calculating the reliability of the tolerance scale using the Cronbach's alpha method, it is clear that the scale has a high degree of stability, which indicates that it can be used in the current research and that the results of the research can be relied upon.
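For concreteness, the following is a minimal sketch of the Cronbach's alpha computation described above, including the alpha-if-item-deleted values the table reports; the item scores are simulated, not the study's data.

```python
# Minimal sketch of the Cronbach's alpha computation described above,
# including alpha with each statement deleted. `items` is assumed to be an
# (n_respondents x n_items) array of Likert scores; values are simulated.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars / total_var)

rng = np.random.default_rng(2)
base = rng.normal(size=(330, 1))  # a shared trait, so items correlate
items = np.clip(np.round(2 + base + rng.normal(scale=0.8, size=(330, 21))), 1, 3)

print(f"alpha (whole scale) = {cronbach_alpha(items):.3f}")
# Alpha with each statement deleted: values below the overall alpha mark items
# whose removal would hurt reliability, as the paper notes
for j in range(items.shape[1]):
    reduced = np.delete(items, j, axis=1)
    print(f"item {j+1:2d} deleted: alpha = {cronbach_alpha(reduced):.3f}")
```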
Internal consistency of the scale
The researchers calculated the internal consistency of the tolerance scale by computing:
• Correlation coefficients between the score of each statement of the scale and the score of the dimension to which it belongs.
• Correlation coefficients between the score of each statement of the scale and the total score of the scale.
• Correlation coefficients between the dimensions of the scale and the total score of the scale.
Table (4) shows the correlation coefficients between the score of each statement, the score of the dimension to which it belongs, and the total score of the tolerance scale.
It is noted from Table (4) that
• The correlation coefficients between the score of each statement of the tolerance scale and the score of the dimension to which it belongs are statistically significant at the (0.01) level; this means that the scale statements are consistent with the dimension to which they belong.
• The correlation coefficients between the score of each statement of the tolerance scale and the total score of the scale are statistically significant at the (0.01) level; this means that the scale statements are consistent with its total score.
(Tabular R value at (328) degrees of freedom: (0.113) at the (0.05) significance level and (0.148) at the (0.01) level.)
By calculating the internal consistency of the tolerance scale, it becomes clear that the scale has internal consistency; this indicates that it can be used in the current research and that the results of the research can be trusted.
Scale scoring method
The scale is answered using a three-point Likert format (does not apply at all, applies to some extent, applies completely). Positively worded items are scored (1, 2, 3), while negatively worded items are reverse-scored (3, 2, 1).
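A tiny sketch of this scoring rule follows; note that the paper does not list which items are negatively worded, so the reversed item numbers below are hypothetical.

```python
# Tiny sketch of the Likert scoring rule above. The indices of negatively
# worded items are hypothetical; the paper does not list which items are
# negative.
from typing import List

NEGATIVE_ITEMS = {4, 9, 17}  # hypothetical 1-based item numbers

def score_item(item_no: int, response: int) -> int:
    """Map a response in {1, 2, 3} to its score, reversing negative items."""
    assert response in (1, 2, 3)
    return 4 - response if item_no in NEGATIVE_ITEMS else response

def total_score(responses: List[int]) -> int:
    """Total tolerance score over the 21 items (range 21..63)."""
    return sum(score_item(i + 1, r) for i, r in enumerate(responses))

print(total_score([3] * 21))  # a respondent answering "completely" throughout
```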
The proposed recreational sports program (prepared by the researcher)
The target group of the program: the current program was applied to a sample of (41) students from the College of … at Al-Qasim Green University, Iraq (the experimental group), who suffered from a low level of tolerance. The goal of the program: to train the students of the research sample to practice tolerant behavior with themselves and those around them, and in the different life situations they will face in the future, and to achieve psychological adjustment in general.
Program design steps
• Scientific references and previous studies in the field of recreation and recreational activities, as well as in the fields of psychology and mental health, were reviewed.
• The researchers conducted personal interviews with a number of experts in recreation, psychology, and mental health to determine the content of the program, the total time, the number of units, the total time per unit, the number of practice sessions per week, and the time for implementing the main part of each unit.
• The researchers identified, via an expert opinion poll form, the most important recreational games and activities aimed at enhancing tolerance among the university students of the research sample.
• The researchers determined the total period required to implement the proposed recreational program.
• The researchers determined the number of units, the total time per unit, the number of applications per week, and the number of practice sessions per unit.
• The implementation time of each part (the introductory part, the main part, and the closing part) of the unit was determined.
Techniques used in the program:
Dialogue, discussion, brainstorming, self-monitoring, puzzle games, self-education, role playing, modeling, and cooperative play.
Points to be taken into account in the implementation of the program:
• Establishing an atmosphere of familiarity between the researcher and the research sample.
Tools used in the implementation of the program
The tools used were determined according to the type of activity, and they are: wands, ropes, pens, chairs, flags, whistles, balls, string, seats, pieces of cork, cardboard, sticky notes, boards, and a musical instrument.
Suggested program content
The content of the program was developed as recreational units aimed at enhancing tolerance among students of Al-Qasim Green University in Iraq, and the content of each unit was divided as follows:
Introductory Section
This section aims to prepare the students physically and psychologically to accept teamwork and to introduce a spirit of fun, pleasure, and active participation into the program units. It contains a group of recreational games. The duration of this section is (15) minutes.
The main Section
The aim of this section is to enhance tolerance among the students of the research sample. It contains a group of selected recreational games, and its duration is (45) minutes.
The concluding Section
The researchers ensured that the main section, with its recreational games, was followed by a gradual calming period characterized by pleasure, encouragement, and relaxation to return the body to its natural state. The duration of this section was (10) minutes.
Program timeframe
The proposed program included (30) units, each lasting (70) minutes, at a rate of (3) units per week over (10) weeks. The content of the program, the necessary tools, and the available facilities, as well as the most important activities the students wished to practice, were determined in addition to a survey of (9) experts specialized in recreation, motor education, and psychology; the proposed recreational sports program was then as follows:
Program survey
The researchers conducted an exploratory study with the aim of:
• Determining the suitability of the program content for the research sample.
• Preparing the tools and devices for the program.
• Determining the appropriateness of the time period specified for the program units.
• Determining the appropriate organizational method for applying the program.
• Discovering problems and difficulties during the implementation of the program.
• Ensuring the availability of security and safety factors during application.
Main study
After informing the students of the research sample about the aim of the research and obtaining their consent to participate in the experiment, and after verifying the validity of the tolerance scale for the research sample and the integrity and suitability of the recreational program content for the purpose of the research, the researchers proceeded as follows.
Pre-test:
The pre-test of the research sample was carried out, and the equivalence between the control group and the experimental group in the research variables was confirmed.
Program application:
The recreational program was applied to (41) students.
Post-test:
After completing the application of the program, a post-test was conducted.
Statistical processors:
The data were processed statistically using the SPSS program through:
• Mean.
…
Presentation and Discussion of the results First hypothesis results
The first hypothesis states: "There are statistically significant differences between the mean scores of the students of the experimental group in the pre- and post-test of the tolerance scale, in favor of the post-test". To test this hypothesis, the researchers applied the tolerance scale to the experimental group before and after applying the program. To analyze the results of the university students of the research sample on the scale, the researchers calculated the significance of the differences between the mean scores of the students in the pre- and post-applications of the tolerance scale, determining the direction of these differences using the t-test for two related means; Table (8) shows the findings. The researchers also calculated the effect size of the recreational program used in the current research, as an independent variable, on tolerance as a dependent variable, using the eta-squared equation; the results are shown in the last column of Table (8).
It is clear from Table (8) that
• The students (the research sample) obtained a higher mean score in the post-test on the tolerance scale than in the pre-application, with a statistically significant difference at the 0.01 level in favor of the post-application.
• To verify the effect size of the recreational program used in the current research, as an independent variable, on the tolerance scale as a dependent variable, the eta-squared value was calculated; the eta-squared value for the total score of the tolerance scale was (0.992). This proportion of the variance can therefore be explained by the recreational program, while the rest of the variance is explained by other variables.
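As an illustration of this analysis, here is a minimal sketch of a paired t-test on pre/post scores together with an eta-squared effect size computed from the t statistic (eta² = t²/(t² + df)); the scores are simulated, not the study's data.

```python
# Minimal sketch of the first-hypothesis analysis: a paired t-test on pre/post
# tolerance scores plus an eta-squared effect size computed from the t
# statistic (eta^2 = t^2 / (t^2 + df)). Scores are simulated, not the study's.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n = 41  # experimental-group size in the study
pre = rng.normal(40, 5, n)          # hypothetical pre-test tolerance scores
post = pre + rng.normal(12, 2, n)   # hypothetical gain after the program

t_stat, p_val = stats.ttest_rel(post, pre)
df = n - 1
eta_squared = t_stat**2 / (t_stat**2 + df)
print(f"t({df}) = {t_stat:.2f}, p = {p_val:.4g}, eta^2 = {eta_squared:.3f}")
```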
Second hypothesis results
The second hypothesis states: "There are statistically significant differences between the mean scores of the students of the experimental and control groups in the post-test of the tolerance scale, in favor of the experimental group".
To verify the validity of this hypothesis, the researchers used the t-test to calculate the significance of the differences between the mean post-test scores of the experimental and control group students on the tolerance scale; the results were as follows.
Third hypothesis results
The third hypothesis states: "There are no statistically significant differences between the mean scores of the experimental group students in the post- and follow-up tests of the tolerance scale". To verify this hypothesis, the researchers applied the post- and follow-up tests of the tolerance scale to the experimental group after applying the program and one month after completing its application (the follow-up test). To analyze the results of the study sample on the tolerance scale, the researchers calculated the significance of the differences between the mean scores of the students in the post- and follow-up applications of the tolerance scale, determining the direction of these differences using the t-test for two related means; Table (10) shows the findings. These results confirm the effectiveness of the program used in developing tolerance, and that the members of the experimental group who participated in the program benefited from the educational contents, activities, and experiences contained in it, which included lectures and targeted discussions on the subject of the research, in addition to the use of the cognitive restructuring technique, which helps students modify the style and content of their beliefs and ideas in a positive direction and thus deal with disparate situations in a positive, effective, and tolerant way. From the foregoing, it is clear that there are statistically significant differences in favor of the post-test in the scores of the tolerance scale. This is because the outdoors is considered nature's playground and an educational school full of different methods of education, and life in the outdoors is sufficient to form a resilient person. One of the most important purposes of camps is the acquisition of values, learning order, becoming accustomed to commitment and obedience, acquiring new habits and lifestyles, and drawing closer to ideals (Abdul Latif Khalifa, 1992). The practice of recreational activities also affects the personality of the individual, helping him to release tension, achieve psychological balance, satisfy psychological and social needs, express himself, and discharge pent-up emotions (Parker, 1976). It also helps the individual strive towards perfection and act according to what those around him expect. This confirms the importance of recreational activities in guiding students towards acquiring cooperative behavior, because the most important things the students gained from the activities implemented in the current program were interdependence and cooperation, as the program created an educational experience that developed cooperative relations and a sense of community (Farida Awi, 2001).
Muhammad Al-Hamahmy and Walid Ahmed Abdel-Razek (2017, 212-213) and Tahani Abdel-Salam (2001, 294) point to the importance of recreational programs in educational institutions such as schools and universities, since educational institutions have an educational role through which students can practice recreational activities in their spare time and develop them, as well as modify their behavior through the practice of these activities, which satisfies their inclinations and needs and channels their energies into the service and development of society. Effat Abdel-Salam (2000, 56) notes that sports recreation, with its various activities, is considered one of the best means of occupying leisure time; recreation has thus entered into the social systems, and interest in it began as one of the manifestations of urbanization because of its contributions to achieving happiness for the individual, enhancing values, and reducing psychological stress.
A Model Study on Collaborative Learning and Exploration of RBAC Roles
Role-based access control (RBAC) can effectively guarantee the security of user system data. With its good flexibility and security, RBAC occupies a mainstream position in the field of access control. However, the complexity and time cost of the role-establishment process seriously hinder the development and application of the RBAC model. The introduction of an assisted interactive question-answering algorithm based on attribute exploration (a semiautomatic, heuristic way to build an RBAC system) greatly reduces the complexity of building a role system. However, this algorithm has some defects: it cannot support multi-person collaborative work, and in practice it is difficult to find experts qualified to answer the questions. Aiming at these problems, this paper proposes a model for collaborative learning and exploration of RBAC roles under the framework of attribute exploration. In this model, after interactive question-and-answer sessions with experts in the different permission systems using attribute exploration, the obtained results are merged and processed to obtain the correct role system. This model not only avoids the time-consuming process of role requirement analysis but also provides a feasible scheme for collaborative role discovery across multi-department permissions.
Introduction
With the development of information systems, information sharing among people has become more and more convenient and fast. However, the "explosive" growth of information systems, while bringing convenient and quick access to information, also brings information-security problems. It is not only the sharing of information between people that needs to be protected but also the information between industrial systems. For example, when computing matrices in the research field of Kalman filtering, multiple computation contents need to be encrypted [1, 2].
To prevent the intrusion of illegal users or leakage caused by the careless operation of legal users, many solutions have been proposed [3, 4]. For example, Lihua proposes a new privacy-protection scheme, which plays a good role in protecting privacy [5]. Access control allows users to access system resources only according to their permission settings and not beyond their permissions. To ensure flexibility and security, role-based access control (RBAC) [6] has been widely studied and applied due to its good applicability and occupies a mainstream position among access control models [7]. The RBAC model introduces roles between users and permissions; it connects users and permissions through roles, grants and revokes users' access permissions by assigning and canceling roles, and realizes the logical separation of users and access permissions [8]. Its flexibility in permission management and its high correlation with an enterprise's organizational structure greatly facilitate permission management [9].
However, the increasing complexity of information systems leads to increasing complexity in constructing RBAC model systems [10]. In the design and use of a traditional RBAC system, the relationships between "users and roles" and between "roles and permissions" depend on the acquisition of system requirement information and the personal experience of administrators. With the increasing complexity and diversification of information systems, the number of users, resources, and permissions in access control is growing, and the business processes and domain knowledge of information systems are becoming complex. As a result, designing and managing an RBAC system that meets the functional and security needs of users solely by relying on human effort is challenging [11]. The development and prosperity of machine learning has given us more ways and methods to solve problems, and machine learning is applied in various fields [12]. Many scholars have also applied machine learning to information security: Sun proposes an ESS-based algorithm for balancing QoS and privacy risk, which reaches a stable state of maintaining long-term service through multiple iterations [13], and Yin uses a recurrent neural network for intrusion detection [14]. In addition, machine learning is applied in various other fields, such as hyperspectral image processing and classification [15, 16]. With the prosperity and development of information systems, information security combined with many research fields has been widely discussed and studied [17, 18].
Among them, Zhang Lei [19] proposed an assisted interactive question-answering algorithm based on attribute exploration and used the attribute exploration algorithm to interact with experts to obtain the required roles and the partial-order relationships between roles in an RBAC system. The attribute exploration algorithm [20] can obtain the roles and the partial-order relationships between them because it is an important tool in formal concept analysis [21]. Formal concept analysis is considered a favorable tool for data analysis and knowledge description and has been widely used in data analysis [22], knowledge discovery [23], rule extraction [24], concept cognitive learning [25], and other fields. Its key data structure, the concept lattice [26], can well represent the partial-order structure among data. Each node of the concept lattice is composed of an intent and an extent, which correspond naturally to roles and permissions in role engineering. A role system mined with concept lattice theory can not only reflect the hierarchical relationships between the roles but also ensure the correctness of the mined roles [27].
Although the assisted interactive question-answering algorithm based on attribute exploration can accomplish the role design of RBAC with heuristic assistance, the traditional attribute exploration algorithm relies on complete knowledge of the system's permissions. In practice, it is difficult to find people who know all permissions well, especially when the permissions span multiple departments. For example, it is difficult to find a single expert who knows all the permission information well when constructing roles for an administration system combined with a faculty system. This defect severely limits the development and application of the RBAC model.
In this paper, it is found that the Duquenne-Guigues basis and the set of roles obtained by the assisted interactive question-answering algorithm based on attribute exploration are closely related not only to the whole system but also to each local subsystem. Therefore, we can find an interactive domain expert for each subsystem and, after the interaction for each system is completed, merge the roles and Duquenne-Guigues bases of the individual systems to obtain the Duquenne-Guigues basis and role set of the entire system. Accordingly, this paper proposes a model for collaborative learning and exploration of RBAC roles (RCLE). Under the framework of the interactive question-and-answer procedure of attribute exploration, a method is designed to support role discovery for the same group of users under different permission systems. This model not only avoids the time-consuming role requirement analysis and questionnaire surveys of conventional role construction but also overcomes the defects of the assisted interactive question-answering algorithm when constructing a role system across departments.
Basic Definition
The relevant definitions used in this article are as follows [23, 25, 26]:
Definition 1. An access security context K = (U, M, I) is composed of two sets U and M together with a binary relation I ⊆ U × M. The elements of U are called users (objects), and the elements of M are called permissions (attributes). (u, m) ∈ I, also written uIm, means that user u has permission m; (u, m) ∉ I means that user u does not have permission m.
Definition 2. For A ⊆ U and B ⊆ M, define the derivation operators f(A) = {m ∈ M | (u, m) ∈ I for all u ∈ A} and g(B) = {u ∈ U | (u, m) ∈ I for all m ∈ B}. If A and B satisfy f(A) = B and g(B) = A, then the pair (A, B) is called a concept; A is the extent of the concept (A, B), and B is its intent.
The computation of Definition 2 is carried out throughout the text; Definition 2 shows how to compute concepts in a given access security context. Since more than one formal context will be involved below, for ease of distinction, f1(A) and g1(B) denote the computation of f(A) and g(B) on the formal context K1.
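To make the derivation operators concrete, the following is a minimal Python sketch of f, g, and the concept check of Definition 2; the toy context (three users, three permissions) is illustrative and not taken from the paper.

```python
# Minimal sketch of the derivation operators f and g of Definition 2 and the
# concept check f(A) = B, g(B) = A. The toy context below (users u1..u3,
# permissions m1..m3) is illustrative only.
from typing import FrozenSet, Set, Tuple

U = {"u1", "u2", "u3"}
M = {"m1", "m2", "m3"}
I: Set[Tuple[str, str]] = {("u1", "m1"), ("u1", "m2"),
                           ("u2", "m1"), ("u2", "m3"),
                           ("u3", "m1")}

def f(A: Set[str]) -> FrozenSet[str]:
    """Permissions shared by every user in A."""
    return frozenset(m for m in M if all((u, m) in I for u in A))

def g(B: Set[str]) -> FrozenSet[str]:
    """Users holding every permission in B."""
    return frozenset(u for u in U if all((u, m) in I for m in B))

def is_concept(A: Set[str], B: Set[str]) -> bool:
    return f(A) == frozenset(B) and g(B) == frozenset(A)

print(f({"u1", "u2"}))                         # common permissions: {m1}
print(is_concept({"u1", "u2", "u3"}, {"m1"}))  # True: a (role) concept
```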
The concepts of an access security context K = (U, M, I) have the following basic properties (∀A, A1, A2 ⊆ U, ∀B, B1, B2 ⊆ M): A1 ⊆ A2 implies f(A2) ⊆ f(A1); B1 ⊆ B2 implies g(B2) ⊆ g(B1); A ⊆ g(f(A)) and B ⊆ f(g(B)); f(A) = f(g(f(A))) and g(B) = g(f(g(B))).
Definition 3. A permission set B ⊆ M is an intent if B = f(g(B)).
Definition 4. A permission set P ⊆ M is a pseudointent if (1) P ≠ f(g(P)), i.e., P is not an intent, and (2) f(g(Q)) ⊆ P for every pseudointent Q ⊊ P; any such P is a pseudointent.
Definition 4 provides the conditions for a set to be a pseudointent. To prove whether an attribute set is a pseudointent, we only need to verify whether it meets the two conditions of Definition 4.
Definition 5. Let K = (U, M, I) be an access security context, and let A, B ⊆ M. The implication A → B holds in K if every user who has all the permissions in A also has all the permissions in B, i.e., g(A) ⊆ g(B). According to the value-dependence theory of the concept lattice, the Duquenne-Guigues basis can produce all value dependences that hold in an access security context, namely the implication relations among attributes.
Definition 6. The Duquenne-Guigues basis of K is J(K) = {P → f(g(P)) | P is a pseudointent of K}. It can be seen from Definition 6 that the Duquenne-Guigues basis of an access security context can be obtained as long as all pseudointents are found. The relatedness judgment between an attribute set and an implication set in Definition 7 can be used in the calculation of pseudointents.
Definition 8. Let K = (U, M, I) be an access security context with M = {m1, m2, …, mn}, where the permissions (attributes) in M satisfy the basic linear order m1 < m2 < ⋯ < mn. For any A, B ⊆ M and i ≤ n, define A <i B if mi ∈ B \ A and A ∩ {m1, …, mi−1} = B ∩ {m1, …, mi−1}; then A < B if A <i B for some i. Definition 8 describes the lexicographical order relation <, which is a linear order on 2^M. All attribute sets can be generated one by one according to the lexicographical order and tested one by one to see whether each set is a pseudointent or an intent.
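As a sketch of how the lectic order drives this enumeration, the following implements the Next Closure step of Ganter's algorithm, which returns the lectically next closed set; the closure operator is passed in as a function (for example, B ↦ f(g(B)) from the earlier sketch), and the code is an illustration rather than the paper's own pseudocode.

```python
# Minimal sketch of Ganter's Next Closure step, which enumerates closures in
# the lectic order of Definition 8. `closure` is any closure operator on
# subsets of the permission list `perms` (for example, B -> f(g(B)) from the
# earlier sketch). Toy illustration only.
from typing import Callable, FrozenSet, List, Optional

def next_closure(A: FrozenSet[str], perms: List[str],
                 closure: Callable[[FrozenSet[str]], FrozenSet[str]]
                 ) -> Optional[FrozenSet[str]]:
    """Return the lectically smallest closed set greater than A, or None."""
    for i in range(len(perms) - 1, -1, -1):      # try m_n, m_(n-1), ..., m_1
        m = perms[i]
        if m in A:
            A = A - {m}
        else:
            B = closure(A | {m})
            # accept B if it adds no permission smaller than m (lectic test)
            if all(x in A for x in B if perms.index(x) < i):
                return B
    return None

def all_intents(perms, closure):
    """Usage sketch: enumerate all closed sets of a context in lectic order."""
    A = closure(frozenset())
    while A is not None:
        yield A
        A = next_closure(A, perms, closure)
```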
A Model for Collaborative Learning and Exploration of RBAC Roles
The attribute exploration algorithm interacts with domain experts by asking questions, traverses the attribute sets in lexicographical order, and tests whether each set is a pseudointent or an intent. The pseudointents are used to produce the implications, so as to construct the Duquenne-Guigues basis of the access security context and obtain the relevant context knowledge. The lexicographical order < is a linear order on the power set of all permissions (attributes), which guarantees the completeness of the attribute exploration algorithm. In other words, the set of roles obtained by the traditional role discovery algorithm based on attribute exploration is complete. However, due to the lack of a cooperation mechanism, traditional role discovery algorithms based on attribute exploration cannot build a role system across departments.
The key to the above problem is how to discover the set of roles and the implication relationships between permissions (the Duquenne-Guigues basis) under multiple permission systems. In this paper, after attribute exploration has been carried out within the different departments, the roles and Duquenne-Guigues bases obtained under the different permission systems are further analyzed and combined, so as to obtain the role construction of the cross-department permission system.
Basic Theorems.
To facilitate the elaboration, we first make the following definition.
Definition 9. Given an access security context K1 = (U1, M1, I1) and a second access security context K2 = (U2, M2, I2) over the same permission set, their combination K = (U1 ∪ U2, M, I1 ∪ I2) is the merged access security context.

Definition 10. A model for collaborative learning and exploration of RBAC roles is RCLE = (K1, K2, K), consisting of the local contexts and their merged context.

Based on the above definitions, we have the following findings, which can be used as the theoretical basis of the RCLE model.
(2) Lastly, we prove that T is not a pseudointent. If T satisfied condition (2) of the definition of a pseudointent, then T would be related to some implication in the Duquenne-Guigues basis, contradicting the assumption. Therefore, T does not satisfy condition (2) of the definition of a pseudointent, so T is not a pseudointent in K.
Theorem 11 shows that a set of permissions (attributes) is neither an intent nor a pseudointent if it is not related to any implication in the Duquenne-Guigues basis. Because attribute exploration only considers permission sets that are intents or pseudointents, permission sets satisfying Theorem 11 can be ignored and need not be computed. Proof. Because D is related to J(K), by Definition 7 we know that D is an intent or a pseudointent. Since D ∈ C(K1), the users in K1 that co-own the permission set D are g1(D).
According to Definition 10, these users also belong to the merged context K, so the computation carries over to the merged system.
RCLE Model Framework
Based on the above definitions and theorems, this section designs a model of RBAC role collaborative learning and exploration (RCLE) by following the framework of the traditional attribute exploration algorithm with expert questioning. The algorithm uses the traditional attribute exploration framework to discover the roles of the different permission systems, and then automatically revises the set of roles and the Duquenne-Guigues basis according to the obtained knowledge and the proposed theorems. In this way, we obtain the required roles and the implication relationships between permissions for the system after the fusion of multiple systems. The model architecture is shown in Figure 1.
Using the attribute exploration algorithm, the role discovery algorithm interacts with the system security managers of different departments to obtain the required set of roles (intents) and the set of implications between permissions (the Duquenne-Guigues basis) in each department. The specific process of the attribute exploration role discovery algorithm is as follows. At the beginning of the algorithm, the access security context is empty, the Duquenne-Guigues basis is empty, and the intent set is empty. Then, the attribute sets to be tested are continuously generated in lexicographic order, and the expert is asked whether the implication with that attribute set as premise is true. If not, a counterexample is added to the access security context and the computation is repeated. If true, the attribute set is judged to be an intent or a pseudointent. If it is a pseudointent, an implication with the pseudointent as premise is added to the Duquenne-Guigues basis. If it is not a pseudointent, then according to the value dependence of the concept lattice and the correlation theory of attribute sets it must be an intent, and the attribute set is added to the intent set.
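The loop just described can be sketched as follows; this is a simplified illustration, not the paper's exact algorithm. The expert is modeled as a callable that either accepts an implication or returns a counterexample row, and here it is simulated against a hidden "true" context, as in the experiments of Section 6. A production version would use the Next Closure step and would additionally require counterexamples to respect the implications accepted so far.

```python
def explore(M_ordered, ask_expert):
    """Simplified interactive attribute exploration: enumerate permission
    sets in lexicographic order, ask about B -> f(g(B)), add counterexamples
    to the context, and classify accepted sets as intents or pseudointents."""
    rows = []                       # growing context: one permission set per user
    n = len(M_ordered)

    def closure(B):
        owners = [r for r in rows if B <= r]
        return frozenset.intersection(*owners) if owners else frozenset(M_ordered)

    intents, pseudos, basis = [], [], []
    code = 0
    while code < 2 ** n:
        B = frozenset(M_ordered[i] for i in range(n) if code & (1 << (n - 1 - i)))
        C = closure(B)
        if C == B:
            intents.append(B)               # nothing to ask: B is already closed
        else:
            cex = ask_expert(B, C)          # "does every user with B also hold C?"
            if cex is not None:
                rows.append(frozenset(cex)) # counterexample: enlarge and recalculate
                continue
            if all(closure(P) <= B for P in pseudos if P < B):
                pseudos.append(B)
                basis.append((B, C))        # implication B -> C joins the basis
        code += 1
    return intents, basis

# Simulated expert answering from a hidden "true" context, as in Section 6:
TRUE_ROWS = [frozenset("ab"), frozenset("bc"), frozenset("b")]
def ask_expert(B, C):
    for r in TRUE_ROWS:
        if B <= r and not C <= r:
            return r                        # this user violates B -> C
    return None

roles, dg = explore(list("abc"), ask_expert)
```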
The sets of roles and the Duquenne-Guigues bases obtained in this way are substituted into the RCLE model, which calculates the set of roles and the Duquenne-Guigues basis required by the system after the multi-department fusion. At the initial stage of the algorithm, the access security context is the union of the multiple access security contexts, the Duquenne-Guigues basis is empty, and the set of roles is empty. Line 1 of the algorithm determines whether the algorithm has reached its end state. Lines 4-8 handle the case where the permission set belongs to the role set of department 1; the permissions jointly owned by users in department 1 and department 2 are then calculated. Lines 9-13 handle the case where the permission set belongs to the role set of department 2; the permissions jointly owned in department 1 by the users of department 2 who hold the permission set B are then calculated. Lines 14-23 process the permission set B itself. Lines 24-28 cover the case where B exists neither in the role sets of department 1 and department 2 nor in their Duquenne-Guigues bases; here the findNextB subroutine calculates the next permission set B′ according to the order relation defined above.
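One way to realize the merge computed in these lines is to use the fact that, in the merged context, the users owning a permission set B are exactly g1(B) ∪ g2(B). The sketch below rests on the assumption, not spelled out in the excerpt, that both departments share the same permission set M; it derives the merged closure purely from the two local derivations, so B is a role of the merged system exactly when merged_closure(B) equals B.

```python
def local_closure(B, rows):
    """f_i(g_i(B)) on one department's context; None when g_i(B) is empty."""
    owners = [r for r in rows if B <= r]
    return frozenset.intersection(*owners) if owners else None

def merged_closure(B, dept1, dept2, M):
    """Closure of B in K = (U1 union U2, M, I1 union I2):
    f(g(B)) = f1(g1(B)) & f2(g2(B)), with f(empty) = M by convention."""
    c1, c2 = local_closure(B, dept1), local_closure(B, dept2)
    if c1 is None and c2 is None:
        return frozenset(M)        # no user in either department owns B
    if c1 is None:
        return c2
    if c2 is None:
        return c1
    return c1 & c2

dept1 = [frozenset("ab"), frozenset("b")]      # department 1 rows (illustrative)
dept2 = [frozenset("bc")]                      # department 2 rows (illustrative)
print(merged_closure(frozenset("b"), dept1, dept2, set("abc")))  # frozenset({'b'})
```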
Example of the RCLE Algorithm Process
This section illustrates the running process of the RCLE model with an example. Consider the access security context K1 = (U1, M1, I1) with U1 = {1, 2, 3, 4}, where the users represent the dean of faculty, the dean of ...
As this worked example of the algorithm shows, the RCLE model uses the traditional attribute exploration role discovery algorithm to interact with the system managers of multiple departments, thereby obtaining the set of roles and the implication relations between permissions for the combination of the departments.
Experiment and Analysis
6.1. Experimental Design. To verify the performance of the proposed model, the random functions of the Java Math library are used to generate two sets of access security contexts as test data. The experimental design covers two aspects. The first is to observe the change in the number of implication relations (the Duquenne-Guigues basis) as the experimental conditions are varied. The second is to observe the change in the number of roles (intents) as the experimental conditions are varied.
In the experiment, the algorithm traverses the access security context to answer the questions in place of the experts. The algorithm takes the randomly generated access security context as the objective access security context and traverses the entire context when judging whether an implication relation is true. If all users in the access security context satisfy the implication, the implication is considered to be true; otherwise, the implication is considered invalid and a user is taken from the access security context and provided to the algorithm as a counterexample. The first group of experiments uses access security contexts with the same number of users (objects), varying the number of permissions (attributes) from 0 to 30 in steps of 5. The purpose of the test is to fix the number of users, change the number of permissions, and observe the change in the number of implications. The test results are shown in Figure 2.
The second group uses access security contexts with the same number of permissions (attributes), varying the number of users (objects) from 0 to 300 in steps of 50. The purpose is to fix the number of permissions, change the number of users, and observe the change in the number of implications. The test results are shown in Figure 3.

The third group of experiments uses access security contexts with the same number of users (objects), varying the number of permissions (attributes) from 0 to 30 in steps of 5. The purpose is to fix the number of users, change the number of permissions, and observe the change in the number of roles. The test results are shown in Figure 4.

The fourth group uses access security contexts with the same number of permissions (attributes), varying the number of users (objects) from 0 to 300 in steps of 50. The purpose is to fix the number of permissions, change the number of users, and observe the change in the number of roles. The test results are shown in Figure 5.
6.2. Experimental Analysis. The four groups of experiments show that, whether the number of objects is fixed while the number of attributes changes or the number of attributes is fixed while the number of objects changes, the numbers of implication relations and of roles (intents) both increase as the access security context grows in scale.
The RCLE model proposed in this paper not only avoids the time- and labor-consuming role-requirement analysis and questionnaire surveys in the process of role construction but also remedies the defect of the traditional interactive question-and-answer algorithm based on attribute exploration, which does not support cross-department role construction.
Conclusion
Because the traditional semiautomatic heuristic method for constructing an RBAC system cannot build a role system across departments with different permission systems, this paper proposes a model of RBAC role collaborative learning and exploration. Based on the local access security contexts, three theorems are summarized from the local point of view and proved rigorously. Finally, the RCLE model is given according to these theorems. The model uses the traditional attribute exploration role discovery method to construct the role systems of the different permission systems and then, according to the theorems proposed in this paper, calculates the role system of the merged departments. Because the RCLE model greatly shortens the time-consuming steps of role formulation and supports cross-department role construction, we will further develop tools for easier operation, so that the model can find wider application and development.
Data Availability
The data used to support the findings of this study are included within the article.
Conflicts of Interest
The authors declare that there is no conflict of interest regarding the publication of this paper.
Electric power quality control in electro-technical complexes of oil processing plant
The paper deals with research into electric power quality in the load buses of the electrical networks of an oil processing plant ("Production Association "Kirishinefteorgsintez", LLC). Measurements were carried out under operating conditions at the distribution substations of the enterprise and at the division boundary of inventory responsibility with the power-supply system, at the points of power transmission from the power-supply system to the industrial consumer. Assessment of the allowed contribution to the deterioration of electric power quality showed that if the rated capacity of drives based on frequency converters exceeds 12% of the total installed capacity of the electricity receivers of the oil processing plant, the maximum allowed voltage waveform distortion coefficient values at the points of power transmission will be exceeded. To normalize the electromagnetic environment and lower the non-sinusoidality of voltage, a system of measures is proposed that keeps electric power quality indicators within the limits stipulated by regulatory documents, thereby reducing the risk of penalties from the power-supply authority and of large-scale undersupply of products under conditions of a continuous technological cycle.
Introduction
Electric power quality is closely related to the security of supply, since the normal mode of power supply to consumers is deemed to be one wherein the consumers are provided with uninterruptible power of standardized quality, in amounts agreed in advance with the power supply authority.
In the present-day Russian energy sector, improving the energy performance of power complexes at industrial enterprises is one of the promising directions of development, and electric power quality is one of its main components.
Priority directions for the development of oil processing plants involve measures to improve oil processing and to save energy during petroleum product production.
The total capacity of electricity receivers used during the technological process at modern oil processing plants exceeds 100 MW. The most commonly used in the power supply system of oil processing plants are power transformers with rated capacity of 1000 kVA and voltage of 6 (10)/0.4 kV (more than 60% of the total number of power transformers).
At such enterprises, the installed capacity of electric drives with frequency converters, included in the electro-technical complexes, reaches 10% of the installed capacity of the oil processing plant. The rectifier that feeds the inverter of the frequency converter is designed according to a three-phase bridge circuit. There is a tendency towards increasing the number and installed capacity of such drives. Owing to the conditions of the technological process, the converter load on the power substations of the oil processing plant reaches, per bus section, more than 40% of the capacity of the feeding 6(10)/0.4 kV transformer.
In connection therewith, and because the energy component in the prime cost of oil processing is 15% and tends to increase continuously, studies to determine the electric power quality of the electro-technical complex of the "Kirishinefteorgsintez" oil processing plant have been carried out.
Results of studies
Measurements of electric power quality indicators were carried out under operating conditions at the distribution substations of the oil processing plant at the division boundary with the power-supply system. Elevated voltage waveform distortion coefficient values were revealed in the mains, with a higher-harmonic spectrum of order 6k ± 1 (5, 7, 11, 13, 17, ...), showing that their sources are the 6-pulse frequency converters of controlled drives.
Based on equipment repair data and insulation measurement protocols, the negative influence of higher current harmonics on electricity receivers has been proved: they cause additional losses in electric machines, transformers and networks, insulation depreciation, deterioration of the operating conditions of shunt capacitive compensation devices (SCCD) and of relay protection and automation tools, and decreased reliability of electricity receiver operation at the oil processing plant [1,2].
Under the conditions of oil processing plants, capacitor banks contribute to creating conditions close to current resonance at the frequency of one of the harmonics, which results in dangerous current overloading [3,4,5,6].
To determine the influence of capacitor banks on the level of conductive electromagnetic interference in the electric mains of the enterprise, studies of the operating modes of the electrical equipment connected to the bus sections of the studied power substations were conducted while varying the SCCD operating mode parameters. When changing the operating mode of the SCCD within the range 0-100% at a pitch of 25%, an increase in the voltage non-sinusoidality ratio by 7 times and more was revealed. This phenomenon, together with the 19% 11th-harmonic component present in the current consumed by the SCCD, confirms the presence of a resonance of higher harmonic currents in the electric mains. In this case, the currents at the terminals of the capacitors exceed their rated value by 1.54 times.
Therefore, although the capacitor unit allows the power factor (cos φ) to be increased from 0.81 to 0.98, connecting it to the bus section on the 0.4 kV side is inexpedient without taking measures to improve power quality in the load buses of the electric mains.
To check the possibility of higher harmonics penetrating to the 6(10) kV level, the levels of higher harmonics on the 6(10) kV buses of the power supply centers of the oil processing plant were measured. It was revealed that higher current/voltage harmonics propagating in the 0.4 kV networks are attenuated several-fold on penetration to the 6(10) kV side, owing to the purely magnetic coupling of the power transformer windings; this favors placing the SCCD on the 6(10) kV side.
Based upon the current regulatory documentation and existing techniques for measuring, controlling and analyzing electric power quality, an assessment of the allowed contribution to the deterioration of the electric power quality by oil processing plant electricity receivers at the division boundary with the power-supply system was made [7,8].
The assessment of the allowed contribution to the deterioration of electric power quality was carried out by the level of non-sinusoidality of voltage.
The measurement of the nth harmonic voltage component coefficient $K_{U(n)i}$ was carried out for line-to-line voltages [9,10,11].

The value of the nth harmonic voltage component coefficient $K_{U(n)}$, in percent, is obtained by averaging N observations $K_{U(n)i}$ within the time interval $T_{vs}$ equal to 3 s:

$K_{U(n)} = \sqrt{\dfrac{1}{N}\sum_{i=1}^{N} K_{U(n)i}^{2}}$

The consumer's requirements concerning the allowed level of harmonic emission in the power-supply system at the point of common coupling are determined by an expression that depends on $S_{nli}$, the installed capacity of the non-linear load, kVA. The allowed contribution of the ith consumer, according to the rules for connecting the consumer to the general-purpose network under the terms of the impact on electric power quality, is determined by a corresponding expression.
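To make the measurement chain concrete, the following Python sketch estimates $K_{U(n)i}$ from a sampled voltage waveform via FFT and then applies the root-mean-square averaging above; the sampling parameters and the synthetic test waveform are illustrative assumptions, not the plant's actual measurement settings.

```python
import numpy as np

F0 = 50.0        # fundamental frequency, Hz
FS = 10_000.0    # sampling rate, Hz
CYCLES = 10      # one observation window: 10 fundamental cycles (0.2 s)

def ku_n(u, n):
    """K_U(n)i in percent: n-th harmonic amplitude over fundamental amplitude."""
    spectrum = np.abs(np.fft.rfft(u))
    bin_per_harm = CYCLES            # the window holds CYCLES fundamental periods
    return 100.0 * spectrum[n * bin_per_harm] / spectrum[bin_per_harm]

def ku_n_avg(windows, n):
    """RMS average of N observations: K_U(n) = sqrt(mean(K_U(n)i ** 2))."""
    return np.sqrt(np.mean([ku_n(w, n) ** 2 for w in windows]))

# Synthetic line voltage with 5 % fifth and 3 % seventh harmonics:
t = np.arange(int(FS * CYCLES / F0)) / FS
u = (np.sin(2 * np.pi * F0 * t)
     + 0.05 * np.sin(2 * np.pi * 5 * F0 * t)
     + 0.03 * np.sin(2 * np.pi * 7 * F0 * t))
print(ku_n_avg([u] * 15, 5))   # ~5.0 (N = 15 windows fill the 3 s interval)
```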
The allowed contribution of the ith consumer, according to the rules for connecting the consumer to the general-purpose network under the terms of the impact on electric power quality, is determined by the expression: The normally allowed and maximum allowed voltage waveform distortion coefficient values at the division boundary with the power-supply system at the points of power transmission from the power supply authority to the industrial consumer in electrical networks with different rated voltages Unom, stipulated by the regulatory documentation, applied in the Russian Federation, are given in Table 1 [12,13,14]. Table 1. The normally allowed and maximum allowed voltage waveform distortion coefficient values at the division boundary with the power-supply system at the points of power transmission from the power supply authority to the industrial consumer.
Normally allowed value at Unom, kV Maximum allowed value at Unom, kV Where KU(n)nd is the normally allowed value of the nth harmonic voltage component coefficient, according to Table 1.
As a result of processing the data of the industrial experiments, it was established that when the power of the transverse capacitive compensation installations was changed from 0 to 120 kvar, the non-sinusoidality coefficient increased to about 3% in voltage and 17.5% in current. The large 7th-harmonic (350 Hz) component present in the current consumed by the installation indicates a resonance of higher harmonic currents in the network. The non-sinusoidal current level with the transverse capacitive compensation unit turned on is presented in Figure 1. The research results show that the widespread introduction of frequency-controlled drives, which are the main source of voltage waveform distortion, increases the contribution of the oil processing plant's electricity receivers to the deterioration of electric power quality at the division boundary with the power-supply system.
Thus, if the installed capacity of drives based on frequency converters exceeds 12% of the total installed capacity of the electricity receivers of the oil processing plant, the maximum allowed voltage waveform distortion coefficient values at the division boundary with the power-supply system will be exceeded.
Conclusion
Consequently, given the widespread introduction of controlled drives in petroleum product production, the most cost-efficient measures that can be proposed for normalizing the electromagnetic environment and lowering voltage non-sinusoidality at an oil processing plant are network-scheme measures, the use of filtering devices, and the use of special equipment characterized by a low level of higher-harmonic emission into the mains.
The results of the studies and the measures developed on their basis are being examined by the energy services of "Production Association "Kirishinefteorgsintez", LLC with a view to implementation.
Comparative Study About Different Doses of Remimazolam in Short Laparoscopic Surgery: A Randomized Controlled Double-Blind Trial
Objective To study the efficacy and safety of different doses of remimazolam used for induction and maintenance in short laparoscopic surgery. Methods A randomized controlled trial was conducted between May 2021 and May 2022 on patients who underwent laparoscopic surgery lasting 30 minutes to an hour. Based on the drug used and the infusion rate, included patients were allocated to the Low-group of remimazolam (a constant infusion rate of 6.0 mg/kg/h for induction and 1 mg/kg/h for maintenance), the Median-group (9.0 mg/kg/h for induction, 2 mg/kg/h for maintenance), the High-group (12.0 mg/kg/h for induction, 3.0 mg/kg/h for maintenance), or the Propofol group. Postoperative extubation time was the primary outcome. Results A total of 192 patients were included in the study, with 47, 48, 48, and 49 patients in the Low-, Median-, High-, and Propofol groups, respectively. There was a significant difference in postoperative extubation time, with the High-group having the longest duration of 15.21±2.34 minutes compared to the Median-group (13.17±1.71 minutes, p<0.001), the Low-group (12.72±1.31 minutes, p<0.001), and the Propofol group (12.24±1.23 minutes, p<0.001). No significant difference was found between the Low-group and the Propofol group, while the Median-group still showed a longer postoperative extubation time than the Propofol group (p=0.008). Conclusion Compared to propofol, total intravenous induction and maintenance with high and median dosages of remimazolam may prolong postoperative extubation time. Remimazolam can be safely used for induction and maintenance at various doses without increasing the likelihood of adverse events.
Introduction
Remimazolam, a new ultrashort-acting benzodiazepine, acts through gamma-aminobutyric acid-A (GABA-A) receptors at binding sites in the amygdala and the reticular activating system. 1 By altering the conformation of chloride channels, it inhibits the action of these channels in the central nervous system and causes hyperpolarization. 2 Midazolam, a traditional benzodiazepine used for sedation and as an anticonvulsant since 1982, has some drawbacks that limit its use for the maintenance of general anesthesia, including drug accumulation and prolonged sedation. 3 Remimazolam, on the other hand, is a midazolam derivative with an ester moiety added, available in two forms, besylate and tosylate. 4 Non-specific tissue esterases metabolize it, with its main metabolite CNS7054 possessing reduced binding capacity for the GABA-A receptor. 4 Reports indicate that remimazolam has the advantages of rapid onset of action, short recovery time, and stable hemodynamics. 4 Furthermore, flumazenil can quickly and specifically reverse its sedative effects, making it increasingly used in clinical anesthesia. 5 Remimazolam was first approved for procedural sedation during gastrointestinal endoscopy. 4 It was only recently approved, in November 2021, for inducing and maintaining general anesthesia in China. Despite the package insert allowing a 3-fold difference between the maximum and minimum doses of remimazolam for induction and maintenance, the efficacy and safety of the different dose regimens have not been directly compared, which motivated the present study.
Anesthesia Protocol
To calculate the induction and maintenance doses, the patient's height and weight were obtained. On the day of the operation, standard monitoring was carried out as per ASA recommendations, with the depth of sedation and anesthesia monitored using the bispectral index (BIS) monitor from Aspect Medical Systems, US. Total intravenous induction was carried out in a sequential order of remimazolam (Jiangsu Hengrui Pharmaceutical Co., Ltd., China) or propofol (Sichuan Guorui Pharmaceutical Co., Ltd., China), sufentanil (0.5 ug/kg, Yichang Renfu Pharmaceutical Co., Ltd., China), and cisatracurium (0.15 mg/kg, Shanghai Pharmaceutical Dongying Pharmaceutical Co., Ltd., China), followed by endotracheal intubation. The time from the beginning of intravenous induction to the disappearance of the eyelash reflex was recorded as the time of loss of consciousness (LoC). An infusion system (B. Braun, Germany) was used for target-controlled infusion. Remimazolam was administered to the Low-group at a constant pump rate of 6.0 mg/kg/h, changed to 1 mg/kg/h for maintenance until the end of the operation, while the Median- and High-groups had induction rates of 9.0 mg/kg/h and 12.0 mg/kg/h, and maintenance rates of 2 mg/kg/h and 3.0 mg/kg/h, respectively. The Propofol group received an induction dose of 2.0 mg/kg and a maintenance rate of 6.0 mg/kg/h. During the operation, total intravenous anesthesia was maintained with remifentanil added at a rate of 8-10 ug/kg/h (Yichang Renfu Pharmaceutical Co., Ltd., China). Intraoperative hypotension was treated with norepinephrine to maintain the mean arterial pressure at or above 65 mmHg and to ensure that the systolic blood pressure did not fluctuate more than 20% from the baseline value. Atropine was given as needed to prevent bradycardia. Core temperature was monitored and maintained at 36.3-36.8°C.
Additionally, the Quality of Recovery-40 questionnaire (QoR-40) 6 was completed face-to-face as a baseline value (QoR-40baseline) 1 day before the operation. The questionnaire was completed under the guidance of a researcher who had received training on the QoR-40 questionnaire survey.
Anesthesia Recovery Protocol and Follow-Up
The patients were transferred from the operating room to the Post-Anesthesia Care Unit (PACU). The anesthesiologists in the PACU monitored the awakening of the patients using the Richmond agitation-sedation scale (RASS). 7 Eight minutes after entering the PACU, the patient's name was called every 2 minutes, and the passive eye-opening time was recorded. Once consciousness was regained, with a RASS score between −3 and 0 and a train-of-four ratio (TOFR) greater than 90%, the tracheal tube was removed and the extubation time was noted. After extubation, 4-5 L/min medium-flow mask oxygen therapy was administered and breathing and the state of consciousness were monitored continuously. If the RASS score remained at −3 or dropped to −4 within 5 minutes after extubation, 2 mg flumazenil (Jiangsu Nhua Pharmaceutical Co., Ltd., China) and 0.25 µg/kg nalmefene (Chengdu Tiantaishan Pharmaceutical Co., Ltd., China) were used for intervention. The assessment was repeated every 10 minutes after the intervention, and the same doses of flumazenil and nalmefene were repeated once if necessary. The inability to maintain pulse oxygen saturation (SpO2) above 90% after extubation with medium-flow mask oxygen therapy was classified as a post-extubation respiratory depression event. Similarly, a RASS score greater than +2 points was considered emergence agitation, while a score of −4 or −5 points after 30 minutes in the PACU suggested delayed recovery. In addition, postoperative nausea and vomiting (PONV) were monitored, and tropisetron (5 mg, Qilu Pharmaceutical Co., Ltd., China) was used to treat the condition. Once the Aldrete score was ≥ 9, patients were transferred to the general ward, and the PACU stay duration was recorded. Following transfer, a nasal cannula was used to deliver continuous low-flow oxygen therapy, and SpO2 was monitored continuously until 24 hours post-operation; any SpO2 value below 90% during this period was considered a respiratory depression event. All patients received an ultrasound-guided transversus abdominis plane block and intravenous parecoxib sodium (40 mg, Pharmacia & Upjohn Company LLC) as analgesia in the PACU. Hydromorphone was used as a supplement if pain relief was inadequate.
On the day following the surgery, the patients underwent follow-up, during which they completed the QoR-40 questionnaire (QoR-40postoperative). The incidence of respiratory depression and of nausea and vomiting was carefully documented, and the modified Brice questionnaire 8 was used to evaluate the occurrence of intraoperative awareness.
Outcomes
The primary outcome was the time to extubation after surgery. The secondary outcomes comprised various parameters: time of LoC, duration of PACU stay, passive eye-opening time, BIS values at different time points (T0: 5 minutes after induction, T1: at the beginning of the operation, T2: 30 minutes after the operation, T3: at the end of the operation, T4: 10 minutes after the operation), type of operation and operation time, volume of intraoperative infusion, dosages of remimazolam, propofol and remifentanil, usage rates of norepinephrine, flumazenil and nalmefene, and incidence of PONV. Besides, the study observed the economic cost of the sedative drugs for the patients and assessed postoperative recovery quality using the QoR-40 questionnaire. The QoR-40 questionnaire includes five dimensions of recovery, namely physical comfort (12 items), emotional state (9 items), physical independence (5 items), psychological support (7 items), and pain (7 items). Each item was rated on a five-point Likert scale: none of the time, some of the time, usually, most of the time, and all of the time. The total score obtained through the QoR-40 assessment ranges from 40 (indicating poor quality of recovery) to 200 (indicating supreme quality of recovery). 6 Besides, the study recorded the basic information of the patients, including gender, age, height, weight, and BMI.
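As a small illustration of the QoR-40 scoring just described, the Python sketch below sums the five dimensions; the item counts follow the text, the responses are made up, and any reverse-coding of negatively worded items is assumed to have been applied before scoring.

```python
# QoR-40 dimensions and item counts as described in the text (12+9+5+7+7 = 40).
DIMENSIONS = {
    "physical_comfort": 12,
    "emotional_state": 9,
    "physical_independence": 5,
    "psychological_support": 7,
    "pain": 7,
}

def qor40_total(responses):
    """responses: dict mapping dimension -> list of item scores (1-5)."""
    total = 0
    for dim, n_items in DIMENSIONS.items():
        scores = responses[dim]
        assert len(scores) == n_items and all(1 <= s <= 5 for s in scores)
        total += sum(scores)
    return total   # 40 <= total <= 200

patient = {dim: [4] * n for dim, n in DIMENSIONS.items()}  # hypothetical answers
print(qor40_total(patient))  # 160
```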
Sample Size Calculation and Statistical Analysis
The PASS software (version 15.0) was used to calculate the sample size for the comparison of multiple sample means. In the preliminary trial, the average postoperative extubation times of the groups were 11.31±1.21 minutes (Low-group), 13.15±1.17 minutes (Median-group), 16.79±1.52 minutes (High-group), and 14.97±1.19 minutes (Propofol group), with a lowest detectable difference of 1.82. Mean and standard deviation comparisons were performed using the Tukey method, with α=0.05 and β=0.2. The groups were designed to have a 1:1 ratio, and 42 participants per group were required to achieve a power of 0.8. Considering a dropout rate of 20%, it was planned to recruit 50 patients to each group.
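For orientation, the sketch below shows the general shape of such a calculation as a one-way ANOVA power analysis in Python with statsmodels, with Cohen's f computed from the pilot means and SDs. This deliberately simplifies PASS's Tukey-based multiple-comparison procedure and its minimum-detectable-difference criterion, so it will not reproduce the reported 42 per group; it only illustrates the approach.

```python
import numpy as np
from statsmodels.stats.power import FTestAnovaPower

# Pilot extubation times (minutes) and SDs from the preliminary trial:
means = np.array([11.31, 13.15, 16.79, 14.97])
sds = np.array([1.21, 1.17, 1.52, 1.19])

sd_between = np.sqrt(np.mean((means - means.mean()) ** 2))
sd_within = np.sqrt(np.mean(sds ** 2))        # pooled within-group SD
cohens_f = sd_between / sd_within             # standardized effect size

n_total = FTestAnovaPower().solve_power(
    effect_size=cohens_f, alpha=0.05, power=0.8, k_groups=4)
n_per_group = int(np.ceil(n_total / 4))
n_recruit = int(np.ceil(n_per_group / 0.8))   # inflate for 20% anticipated dropout
print(cohens_f, n_per_group, n_recruit)
```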
The statistical evaluation was executed utilizing SPSS software (version 27.0, Chicago, Illinois, USA). Mean ± standard deviation (M±SD) was adopted for measurement data, followed by multi-sample mean-variance analysis. For count data, n (%) was employed, along with either the chi-square test or Fisher's exact test. The LSD method was used to perform repeated-measures analysis of variance (ANOVA) and multiple comparisons within and between groups. P-values less than 0.05 were deemed statistically significant.
Results
From May 2021 to May 2022, a total of 211 people met the initial inclusion and exclusion criteria and were divided into groups according to the digital random number method.Among them, 14 people were excluded because the operation lasted more than one hour, 3 people were excluded due to conversion to laparotomy, and 2 people were excluded because the operation time was less than half an hour.In the end, a total of 192 people were included in the study, with 47 people in the Low-group, 48 people in the Median-group, 48 people in the High-group, and 49 people in the Propofol group (Figure 1).
Primary Outcome
The postoperative extubation time in the High-group was 15.21±2.34 minutes, which was significantly longer than in the Median-group (13.17±1.71 minutes, p<0.001), the Low-group (12.72±1.31 minutes, p<0.001), and the Propofol group (12.24±1.23 minutes, p<0.001). The Low-group exhibited no significant difference in postoperative extubation time compared to the Propofol group, whereas the Median-group still showed a longer postoperative extubation time than the Propofol group, with a p-value of 0.008 (Table 1). The postoperative extubation time ranged from 11 to 20 minutes in the High-group, from 10 to 16 minutes in the Median-group, from 10 to 15 minutes in the Low-group, and from 10 to 16 minutes in the Propofol group.
Secondary Outcomes
The duration of LoC decreased gradually with increasing doses of remimazolam, and there were statistically significant differences among the groups. Among the remimazolam groups, the High-group had the shortest LoC time, at 60.00±35.51 seconds, followed by the Median-group (82.08±33.28 seconds, p<0.001) and the Low-group (100.23±24.79 seconds, p<0.001). The Propofol group had the shortest LoC time of all, at 34.47±12.56 seconds (p<0.001), as per Table 1.
Patients in the Propofol group had a lower average actual body weight (66.96±10.87 kg) than the Median-group (72.46±11.70 kg, p=0.023), along with a lower BMI (23.01±2.87 vs 24.24±2.95 in the Median-group, p=0.045; 22.84±2.93 in the High-group, p=0.023). The Propofol group also had a higher intraoperative norepinephrine usage rate and intraoperative hypotensive event rate (53.06% vs 30.61%) compared to the three groups that received remimazolam during the operation (p<0.05). Furthermore, there were significant differences in the consumption of remimazolam and propofol as well as their corresponding costs among the four groups (p<0.001), as shown in Table 1. There were no statistically significant differences in the other outcomes, p>0.05 (Table 1).
Compared with the preoperative baseline, the total QoR-40 scores in the four groups were all statistically significantly reduced after the operation (within-group comparisons, p<0.05), but the differences between groups were not statistically significant. Correspondingly, four sub-items of the QoR-40 (emotional state, physical comfort, psychological support, and physical independence) also showed statistically significant before-after differences (p<0.05), and similarly there were no statistical differences between the four groups in these sub-items. Regarding the pain sub-item of the QoR-40, no statistically significant differences were observed within or between the four groups, p>0.05 (Table 2).
Discussion
Remimazolam is an ultra-short-acting benzodiazepine that is rapidly metabolized to CNS7054, mainly by non-specific esterases [chiefly carboxylesterase 1A (CES 1A)] in the human liver, with a half-life of about 6.8 minutes. 9 Postoperative extubation time refers to the time after surgery at which the patient is off the ventilator and the endotracheal tube is successfully removed, which reflects patient recovery and postoperative safety. Early identification of abnormalities during recovery can lead to further evaluation and treatment, benefiting high-risk patients. 10 A shorter postoperative extubation time usually means a faster recovery for the patient and a lower risk of postoperative complications. It is widely used as an index to evaluate the quality and effect of surgery and to evaluate the effectiveness and safety of new anesthetic drugs and techniques. 11 A standard post-awakening extubation strategy was used in this study, and we found a statistical difference in postoperative extubation time among the Low-group (using a constant infusion rate of 6.0 mg/kg/h for induction and 1 mg/kg/h for maintenance), the Median-group (9.0 mg/kg/h for induction, 2 mg/kg/h for maintenance), the High-group (12.0 mg/kg/h for induction, 3.0 mg/kg/h for maintenance), and the Propofol group. The postoperative extubation times of the High-group (15.21±2.34 minutes) and the Median-group (13.17±1.71 minutes) were longer than those of the Low-group (12.72±1.31 minutes) and the Propofol group (12.24±1.23 minutes), which is similar to the median extubation time [21 minutes, 95% confidence interval (CI) 15-28 minutes] reported previously. 12 Although statistical significance was achieved, the clinical value of the time difference remains to be explored. Similar to the average loss of consciousness (LoC) times reported in previous studies (81.7 seconds vs 97.2 seconds in two groups), 13 there were also differences among the three groups in the time of LoC after induction and in the amount of remimazolam used, and the duration of LoC decreased gradually with increasing doses of remimazolam. Among the remimazolam groups, the High-group had the shortest LoC time, at 60.00±35.51 seconds, followed by the Median-group (82.08±33.28 seconds) and the Low-group (100.23±24.79 seconds). The Propofol group had the shortest LoC time of all, at 34.47±12.56 seconds. Additionally, intraoperative norepinephrine usage and the incidence of hypotension were lower in the remimazolam groups than in the Propofol group, indicating better hemodynamic stability. Nevertheless, the cost associated with the administration of remimazolam was greater than in the Propofol group, and this cost increased notably as the dosage of remimazolam increased.
The possibility of resedation and secondary respiratory depression after using remimazolam is a major concern for clinicians, as they can adversely affect clinical quality and safety. 9 Administration of high doses of remimazolam followed by antagonization with flumazenil has resulted in reports of resedation with respiratory depression. 14,15 However, in this study the High-group, which received the upper limit of the recommended dosage of remimazolam for anesthesia management, did not show more cases of postoperative respiratory depression, increased use of flumazenil or nalmefene in the PACU, or re-sedation events, indicating that remimazolam may be safe for the respiratory tract. This may be due to its quick metabolism without accumulation and its lack of effect on respiratory muscle tension. The results may also be attributed to the emphasis on protecting patients' body temperature during the operation.
Personalized sedative drug use is recommended in medical routine, and the titration method is commonly used. 16 However, as remimazolam is a new drug in clinical use, this study did not adopt a sequential design 17 or the Dixon up-and-down method 17 to explore the dose distribution of remimazolam. Instead, three dosage regimens were designed directly on the basis of the recommended labeled induction and maintenance doses, combined with standard intervention measures to reduce bias and confounding. This methodology was based on existing reports of remimazolam's safety and the limited data on its clinical application. 1 Since remimazolam is a new generation of benzodiazepine, delayed awakening and delirium during awakening in the recovery room should be considered. 18 The incidence of emergence agitation did not differ significantly among the four groups, and no delayed emergence occurred. This suggests that remimazolam may reduce emergence agitation compared to midazolam, which needs further study. Future studies should focus on postoperative delirium in high-risk groups, such as elderly patients or patients sedated with remimazolam for an extended period. The focus of this study was short laparoscopic surgery in ASA grade I-II patients; thus, extrapolation of the research findings may be limited.
According to the results of this study, the intraoperative BIS values in the remimazolam groups were higher than in the Propofol group. However, there were no significant differences in BIS values among the three different dosages of remimazolam. These findings indicate that using the BIS value to monitor the depth of anesthesia during maintenance with remimazolam may not be entirely reliable. This conclusion is consistent with prior research 13,17 and highlights the need for further calibration and exploration of the correlation between remimazolam and BIS values, and of the underlying mechanisms.
The pilot experiment revealed that patients who received remimazolam for induction and maintenance were more fatigued after waking up in the PACU than those who received propofol. To evaluate the quality of recovery, the study employed the QoR-40 questionnaire. However, no discrepancies in the quality of recovery were found between the groups that received remimazolam and the group that received propofol. The limited sample size may have lowered the power of the between-group comparison of the QoR-40 questionnaire. Additionally, the questionnaire was completed 1 day postoperatively instead of immediately postoperatively, which may have contributed to the absence of differences between groups. Future studies should consider shortening the interval for the postoperative evaluation using the QoR-40 questionnaire.
This study has the following limitations. Firstly, the induction effect of remimazolam was not compared with that of midazolam. Secondly, other possible adverse reactions caused by remimazolam were not given enough consideration, such as deterioration of liver function and changes in the blood system. Lastly, the short follow-up period did not allow for the observation of potential long-term adverse events.
Conclusions
Compared to propofol, total intravenous induction and maintenance with high and median dosages of remimazolam prolonged postoperative extubation time. Remimazolam can be safely used for induction and maintenance at various doses without increasing the likelihood of adverse events. However, remimazolam has a relatively high cost of use compared to propofol.
Figure 1
Figure 1 The CONSORT Flow Diagram.
Figure 2
Figure 2 Bispectral index (BIS) at different times. Notes: Data are shown as a violin plot with the median value and quartiles. # Compared with the Low-, Median-, and High-groups of remimazolam at the same time, p<0.05. T0: 5 minutes after induction; T1: at the beginning of the operation; T2: 30 minutes after the operation; T3: at the end of the operation; T4: 10 minutes after the operation.
Table 1
Results of Outcomes
Table 2
Results of Preoperative and Postoperative Analysis of the Quality of Recovery-40 (QoR-40) Questionnaire
PTH, FGF-23, Klotho and Vitamin D as regulators of calcium and phosphorus: Genetics, epigenetics and beyond
The actions of several bone-mineral ion regulators, namely PTH, FGF23, Klotho and 1,25(OH)2 vitamin D (1,25(OH)2D), control calcium and phosphate metabolism, and each of these molecules has additional biological effects related to cell signaling, metabolism and ultimately survival. Therefore, these factors are tightly regulated at various levels – genetic, epigenetic, protein secretion and cleavage. We review the main determinants of mineral homeostasis including well-established genetic and post-translational regulators and bring attention to the epigenetic mechanisms that affect the function of PTH, FGF23/Klotho and 1,25(OH)2D. Clinically relevant epigenetic mechanisms include methylation of cytosine at CpG-rich islands, histone deacetylation and micro-RNA interference. For example, sporadic pseudohypoparathyroidism type 1B (PHP1B), a disease characterized by resistance to PTH actions due to blunted intracellular cAMP signaling at the PTH/PTHrP receptor, is associated with abnormal methylation at the GNAS locus, thereby leading to reduced expression of the stimulatory G protein α-subunit (Gsα). Post-translational regulation is critical for the function of FGF-23 and such modifications include glycosylation and phosphorylation, which regulate the cleavage of FGF-23 and hence the proportion of available FGF-23 that is biologically active. While there is extensive data on how 1,25(OH)2D and the vitamin D receptor (VDR) regulate other genes, much more needs to be learned about their regulation. Reduced VDR expression or VDR mutations are the cause of rickets and are thought to contribute to different disorders. Epigenetic changes, such as increased methylation of the VDR resulting in decreased expression are associated with several cancers and infections. Genetic and epigenetic determinants play crucial roles in the function of mineral factors and their disorders lead to different diseases related to bone and beyond.
Introduction
Complex interplay of parathyroid hormone (PTH), fibroblast growth factor 23 (FGF23), Klotho and 1,25(OH)2 vitamin D (1,25(OH)2D) regulates calcium and phosphate metabolism. However, each of these molecules has additional biological effects beyond bone mineral regulation, related to cell signaling, metabolism and ultimately survival (1). Therefore, these factors seem to be tightly regulated at different levels: genetic, epigenetic, protein secretion and cleavage. In this review, we discuss the main genetic and epigenetic regulatory pathways of PTH, FGF-23, Klotho and 1,25(OH)2D that are clinically relevant for the activity of these hormones. Many of these actions intersect in the proximal renal tubule, where phosphate reabsorption is decreased in response to PTH or FGF-23, and where 1-alpha hydroxylase is synthesized for the generation of 1,25(OH)2D (2). We describe epigenetic mechanisms such as DNA methylation, mRNA stabilization and histone modification that are involved in mineral regulation. Histone modifications include acetylation, methylation, ubiquitylation and phosphorylation, all of which might affect accessibility to DNA. For example, histone deacetylases (HDAC) promote DNA condensation, suppressing DNA transcription (3).
We also briefly discuss endochondral bone formation, as it relates to the actions of the parathyroid hormone receptor (PTH1R), a critical G-protein coupled receptor that mediates the function of PTH and the parathyroid hormone related peptide (PTHrP). The physiologic mechanisms downstream of mineral hormones that govern calcium and phosphorus homeostasis will only be briefly discussed.
The parathyroid hormone synthesis and secretion
The parathyroid hormone (PTH) is critical to maintain normal levels of serum calcium. In case of parathyroidectomy, and limited external calcium supply, death ensues within hours from severe hypocalcemia and hyperphosphatemia, unless treated with PTH (4). Therefore, it's not surprising that PTH secretion appears to be a highly regulated process. The PTH gene in humans consists of three exons that span about 4 kb on chromosome 11p15, with exon 2 encoding the majority of the prepropeptide sequence and exon 3 encoding the amino acids of the mature peptide (5). Transcription of the PTH gene is stimulated mainly by hypocalcemia (6,7), but also by hyperphosphatemia (8), uremia (9-11), and is suppressed by 1,25(OH) 2 D (6). While patients with advanced renal disease often have elevated levels of PTH, the relevance of a possible stimulatory effect of uremia on PTH synthesis remains undefined, as patients with renal disease have other abnormalities that might contribute to PTH elevation. When serum calcium increases, it binds to the extracellular domain of Ca 2+ -sensing receptor (CaSR) activating intracellular pathways that result in a decrease in PTH secretion, whereas a decrease in Ca 2+ releases this suppression, to promote tonic PTH secretion (12). The PTH response to calcium is highly dependent on the CaSR, as demonstrated by diseases associated with gain or loss of function mutations in CaSR, which cause hypocalcemic or hypercalcemic disorders, respectively (12,13). Interestingly, it has been recently shown that phosphate might have an inhibitory effect on the CaSR independent of calcium, thereby providing a mechanism for PTH secretion in response to hyperphosphatemia (14). After transcription of the PTH gene, mRNA is partly degraded by cytosolic proteins that bind to the 3'-untranslated region (UTR) (15). Among such cytosolic proteins the AU-binding factor 1 (AUF1) and N-ras are PTH mRNA-stabilizing proteins, while the K-homology splicing regulatory protein (KSRP) is a destabilizing protein (15). Conditions such as hypocalcemia or uremia can increase the amount of PTH mRNA by modifying the activity of mRNA binding proteins (16, 17).
The PTH protein is synthesized as a 115-amino-acid preprohormone. The 25-amino-acid signal pre-sequence is cleaved in the endoplasmic reticulum (ER), and subsequently residues −6 through −1 of the prohormone (PTH(−6–84)) are cleaved in the Golgi apparatus by proprotein convertases, furin being the most efficient among them (18). The mature, active hormone circulates as an 84-amino-acid protein (PTH(1-84)) (19). A few mutations have been described in patients that are associated with abnormal processing of prepro-PTH (5) or that affect the mature PTH protein, resulting in hypoparathyroidism (20). For example, an amino acid substitution from cysteine to arginine in the preprohormone disrupts the hydrophobic core of the signal sequence, and such a mutation leads to inefficient protein processing in the ER (21). Other mutations affecting residue 1 (near the site of cleavage of pro-PTH) or residue 56 of the mature hormone have been found in patients with idiopathic hypoparathyroidism (IPH) (5,22). Interestingly, in patients with the mutations at residue 1 or residue 56, the different available PTH immunoassays show very variable levels of serum PTH, ranging from below-normal levels (consistent with hypoparathyroidism) to elevated levels (as in pseudohypoparathyroidism), depending on the affinity and target site of the antibody used to detect the circulating PTH (5,20) (Table 1). Therefore, these diseases illustrate the importance of normal processing of the PTH preprohormone and the potential limitations of current assays to accurately quantify the biologically active portion (PTH(1-84)) of circulating PTH polypeptides.
After intracellular processing of the prohormone, the mature polypeptide (84 amino acids) is packaged in secretory granules in the cytosol. Intracellularly, PTH(1-84) is further cleaved by proteases, releasing a biologically inactive C-terminal portion into the circulation, which serves as an additional regulatory step (23). Additional post-translational modifications of the circulating PTH peptide have been demonstrated, including phosphorylation and oxidation, which might affect biologic activity or detection by current PTH assays (24,25).
Normal methylation at the GNAS locus is required for PTH1R signaling in the kidney
The parathyroid hormone receptor (PTH1R) is a member of the class B, or secretin, family of G-protein coupled receptors (GPCR) (26) that mediates the actions of PTH as well as the parathyroid hormone related peptide (PTHrP) (19). While the main actions of PTH are related to maintaining normal calcium levels, the principal biologic function of PTHrP is to regulate endochondral bone formation (27). The main intracellular signaling pathway activated by either peptide when bound to PTH1R, namely cAMP production, requires activation of the stimulatory subunit of the G-protein (Gsα) (Figure 1).
Patients with pseudohypoparathyroidism (PHP) have hypocalcemia and hyperphosphatemia, similar to those of hypoparathyroidism, but have high levels of biologically active PTH (28). Most identified cases of pseudohypoparathyroidism type 1A (PHP1A) are due to loss-of-function mutations in Gsα (28,29). Gsα is encoded by the GNAS locus, a complex locus on chromosome 20q that generates 3 additional transcripts (A/B, the extra-large form of Gsα (XLαs), and neuroendocrine secretory protein 55 (NESP55)). Loss-of-function mutations in GNAS, when inherited from a female, result in PHP with skeletal abnormalities consistent with Fuller Albright's description of PHP, and hence this constellation of clinical findings is currently known as Albright's hereditary osteodystrophy (AHO) or PHP1A (30). Interestingly, if the GNAS mutation is instead derived from the male, there is no resistance to PTH actions in the kidney, but only the skeletal abnormalities, which is known as pseudopseudohypoparathyroidism (PPHP). Most cases of pseudohypoparathyroidism type 1B (PHP1B) are sporadic, whereas autosomal dominant PHP1B is less common and often associated with maternally derived mutations in STX16. Both forms of PHP1B are associated with abnormal methylation (both abnormal gain and loss of methylation) at differentially methylated regions (DMR) in GNAS exons. Such methylation changes, by unknown mechanisms, suppress Gsα expression from the maternal side. Given that the paternal allele of Gsα is normally progressively suppressed with age in the proximal renal tubule, suppression of the maternally derived Gsα results in deficient Gsα expression in the proximal renal tubule, and thus an impaired response to PTH in patients with PHP1B (31,32).
The role of PTHrP, PTH1R and histone deacetylases in bone development
PTHrP acting on PTH1R is necessary to maintain chondrocyte proliferation during bone development (33) (Figure 1). Mice with knockout of PTH1R or PTHrP die in utero or shortly after birth and share a common phenotype characterized by accelerated mineralization of bones formed by endochondral replacement (27). In humans, homozygous mutations in the PTH1R gene resulting in severe loss of function are lethal, as seen in Blomstrand chondrodysplasia (BLC) (34). Heterozygous mutations in PTH1R are compatible with life and associated in humans with primary failure of tooth eruption (35,36).
Similar to PTH, PTHrP also predominantly signals via cAMP (26). The downstream effects of increased cAMP are protein kinase A (PKA) activation and inhibition of salt-inducible kinases (SIK) (37). The inhibitory effect on SIK upon PTH1R activation favors dephosphorylation of the class II histone deacetylases HDAC4 and HDAC5 (38). HDAC4 and HDAC5 are class II histone deacetylases that, unlike class I histone deacetylases, have only modest deacetylase (gene-suppressing) function. Instead, class II histone deacetylases have N-terminal extensions that bind 14-3-3 proteins in the phosphorylated state (39). Upon dephosphorylation of HDAC4/HDAC5 due to SIK inhibition, the 14-3-3 proteins are released, and the free N-terminal extensions of HDAC4/HDAC5 bind and inactivate transcription factors such as myocyte enhancer factor 2 (Mef2), which exerts a control on chondrocyte hypertrophy (39). HDAC4 knockout mice have accelerated chondrocyte hypertrophy and die prematurely with a phenotype similar to PTHrP or PTH1R knockout animals, consistent with PTHrP, PTH1R and HDAC4 sharing a common signaling pathway (38).

Figure 1. Epigenetics of PTH and PTHrP signaling. The parathyroid hormone receptor (PTH1R) mediates the actions of two ligands, PTH and PTHrP, with independent roles in calcium homeostasis and bone development, respectively. The main PTH1R signaling pathway in response to PTH or PTHrP requires activation of the α subunit of the heterotrimeric stimulatory G-protein (Gsα). Gsα is encoded by GNAS, a complex locus that normally undergoes methylation on the maternal side (for normal expression of Gsα). In the proximal tubule, Gsα is almost exclusively derived from the maternal side during adult life. cAMP generation results in phosphaturic action (via downregulation of the phosphate transporters Npt2a and Npt2c) and vitamin D activation. In the developing bone, downstream of Gsα, cAMP production activates PKA and inhibits SIK, thereby exerting a control on MEF2c, which results in chondrocyte proliferation. PTH, parathyroid hormone; PTHrP, parathyroid hormone related peptide; PKA, protein kinase A; SIK, salt inducible kinase; HDAC, histone deacetylase; MEF2, myocyte enhancer factor 2.
FGF-23 regulates serum phosphate and increases during renal injury
FGF-23 is primarily a bone- and bone marrow-derived hormone of 251 amino acids that is critical to maintain phosphate homeostasis. FGF-23 decreases phosphate reabsorption and 1,25(OH)2D synthesis in renal proximal tubules (40). FGF-23 was first identified in families with autosomal dominant hypophosphatemic rickets (ADHR) (41), which is associated with missense mutations in FGF-23 at positions 176 or 179 (R176Q/W and R179Q/W) that render the peptide cleavage-resistant; this increases the intact portion of FGF-23 (iFGF-23), the biologically active peptide, and results in hypophosphatemia (41). Using site-specific antibodies that bind either the N-terminal or the C-terminal portion of FGF-23, it was found that FGF-23 is cleaved intracellularly mainly by the pro-protein convertase furin at the 176RXXR179 consensus site, but additional proteases, including tissue-type plasminogen activator (tPA) and urokinase-type plasminogen activator (uPA), have now also been shown to cleave (inactivate) FGF-23 under experimental conditions (42-44).
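As a minimal illustration of why the ADHR mutations resist cleavage, the sketch below scans a peptide window for the minimal proprotein-convertase consensus R-X-X-R. The sequences are toy windows containing only the documented 176-RHTR-179/S180 motif, not the real FGF-23 sequence.

```python
import re

# Minimal furin/proprotein-convertase consensus: R-X-X-R, cleavage after the last R.
FURIN_MOTIF = re.compile(r"R..R")

def furin_sites(peptide):
    """Return the positions immediately after the final R of each R-X-X-R match."""
    return [m.end() for m in FURIN_MOTIF.finditer(peptide)]

wild_type = "XXXXXRHTRS"    # toy window: ...R176-H-T-R179 | S180...
adhr_mutant = "XXXXXQHTRS"  # R176Q removes the first arginine of the consensus

print(furin_sites(wild_type))    # [9]: cleavable between R179 and S180
print(furin_sites(adhr_mutant))  # []: cleavage-resistant, so iFGF-23 accumulates
```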
Phosphate consumption dose-dependently increases FGF-23 mRNA abundance in bone as well as circulating iFGF-23 (2, 45). The increase in iFGF-23 in response to phosphate intake is at least in part dependent on the bone Na+-Pi co-transporter PiT2/Slc20a2 and the FGF receptor FGFR1c. For example, global knockout of PiT2 in mice is associated with inappropriately normal levels of FGF-23 when the mice are fed a low-phosphate diet (46). Phosphate also has ligand-independent effects on FGFR1c via receptor phosphorylation, which activates the ERK pathway and the transcriptional activators EGR1 and ETV5, resulting in increased expression of polypeptide N-acetylgalactosaminyltransferase 3 (GALNT3); GALNT3 catalyzes the protective O-glycosylation of FGF-23, preventing cleavage by furin (47). Thus, acting on two different surface receptors, phosphate both enhances transcription and prevents cleavage of FGF-23 (48). Additional regulation of cleavage is provided by phosphorylation of serine 180 by Fam20C, which prevents O-glycosylation and hence favors cleavage (48), presumably when serum phosphate is low.
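The competing modifications described here amount to a small decision tree for the fate of secreted FGF-23. The following Python sketch summarizes that logic qualitatively; it is a mnemonic device under the stated assumptions, not a quantitative model.

```python
from dataclasses import dataclass

@dataclass
class FGF23State:
    intact_rxxr_site: bool = True   # 176-RHTR-179 convertase site present (no ADHR mutation)
    o_glycosylated: bool = False    # protective GALNT3-mediated O-glycosylation
    phospho_ser180: bool = False    # Fam20C phosphorylation; blocks O-glycosylation

def cleavage_fate(s: FGF23State) -> str:
    if not s.intact_rxxr_site:
        return "cleavage-resistant (e.g., ADHR mutant): iFGF-23 accumulates"
    if s.phospho_ser180:
        # Fam20C phosphorylation prevents GALNT3 glycosylation, favoring cleavage
        return "cleaved by furin: C-terminal fragments predominate"
    if s.o_glycosylated:
        return "protected from furin: intact, biologically active iFGF-23"
    return "cleavable: fate depends on convertase access"

# High phosphate induces GALNT3, so the glycosylated (protected) state dominates.
print(cleavage_fate(FGF23State(o_glycosylated=True)))
```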
In the kidney, FGF-23 binds to the FGF receptor (FGFR) and its co-receptor klotho (49, 50). The intracellular actions of FGF-23 decrease the membrane availability of the phosphate transporters in the brush border membrane (BBM) of the proximal renal tubule, namely the type II sodium-phosphate co-transporters Npt2a and Npt2c (51). FGF-23 also downregulates the activity of 1α-hydroxylase and upregulates 24-hydroxylase, thereby decreasing the levels of 1,25(OH)2D (2, 52) (Figure 2). The decrease in active vitamin D limits the absorption of phosphate and calcium in the intestine (53).
Both acute and chronic kidney injury increase circulating FGF-23 levels as a mechanism to prevent hyperphosphatemia (54). However, high FGF-23 levels correlate with morbidity and mortality in patients with renal disease (54-57). The mechanisms by which kidney injury regulates FGF-23 remain largely unknown. We performed a comprehensive metabolomic analysis in individuals undergoing renal arterial and venous blood sampling to identify kidney-derived metabolites that correlate with circulating FGF-23. This led to the identification of glycerol-3-phosphate (G3P), a kidney-derived metabolite that increases during acute kidney injury (AKI) in human subjects and parallels the increase in serum FGF-23 (58). When injected into animals, G3P is converted to lysophosphatidic acid in bone and bone marrow, where it is required for VDR-induced FGF-23 transcription at the -395 to -311 promoter region (58). To identify additional transcriptional enhancers of FGF-23 that control the response to phosphorus levels or that are active in patients with chronic kidney disease (CKD), Onal et al. used chromatin immunoprecipitation with antibodies against CTCF, H3K9ac, H4K5ac, H3K4me1, H3K4me2, and H3K27ac, followed by DNA sequencing (ChIP-seq). This technique led to the identification of a region 16 kb upstream of FGF-23 as a putative enhancer (59). Consistent with its regulatory role, global knockout of this region prevents the early increase in FGF-23 transcription in mice with CKD. Three additional epigenetically marked regions were tested for their contribution to FGF-23 secretion in response to phosphate or 1,25(OH)2D. Among these, deletion of an enhancer region in close proximity to FGF-23 almost completely blunted the transcriptional increase of FGF-23 in bone in response to a high-phosphate diet or 1,25(OH)2D injection (60).
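Conceptually, nominating such a candidate enhancer reduces to asking which ChIP-seq peaks fall within a window upstream of the gene. The sketch below shows that interval logic in Python; all coordinates are hypothetical placeholders, and a real analysis would run a peak caller and interval tools (e.g., MACS2 and bedtools) genome-wide.

```python
from dataclasses import dataclass

@dataclass
class Peak:
    chrom: str
    start: int
    end: int
    mark: str  # e.g., "H3K27ac"

# Hypothetical TSS coordinate for illustration only (not the real Fgf23 locus).
FGF23_TSS = ("chr6", 127_000_000)

def candidate_enhancers(peaks, tss=FGF23_TSS, window=20_000):
    """Return peaks whose midpoint lies within `window` bp upstream of the TSS."""
    chrom, pos = tss
    hits = []
    for p in peaks:
        if p.chrom != chrom:
            continue
        mid = (p.start + p.end) // 2
        if 0 < pos - mid <= window:
            hits.append((p, pos - mid))
    return sorted(hits, key=lambda h: h[1])

peaks = [
    Peak("chr6", 126_983_500, 126_984_500, "H3K27ac"),  # ~16 kb upstream: candidate
    Peak("chr6", 126_950_000, 126_951_000, "H3K4me1"),  # ~49 kb upstream: outside window
]
for p, dist in candidate_enhancers(peaks):
    print(f"{p.mark} peak ~{dist / 1000:.0f} kb upstream of the TSS")
```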
Additional control of FGF-23 secretion, beyond phosphate and renal disease
In vivo, other positive regulators of FGF-23 include inflammatory cytokines such as IL-1β and TNF-α (61), lipopolysaccharide (LPS) (62), mineral mediators such as parathyroid hormone (PTH) (63) and 1,25(OH)2D, and hematopoietic stimuli such as hypoxia or iron deficiency (64). The effect of inflammation or iron deficiency is mediated by an increase in the abundance or stabilization of hypoxia-inducible factor 1α (HIF1α) (65, 66). Several investigations have demonstrated the role of HIF1α in FGF-23 stimulation. In osteoblast cell lines, HIF1α increases FGF-23 promoter activity, resulting in increased FGF-23 transcription and secretion, which is completely reversed by HIF1α inhibitors (66). In contrast to phosphate-mediated stimulation of FGF-23, HIF1α also promotes the cleavage of iFGF-23, such that the increase in the biologically active portion of FGF-23 is attenuated (62). Interestingly, in patients with ADHR, urinary phosphate wasting worsens during periods of iron deficiency: iron deficiency stimulates HIF1α, and because cleavage is prevented in the mutant form of FGF-23, the amount of iFGF-23 increases (67).
In contrast to iron deficiency, which typically elevates only C-terminal FGF-23, some intravenous iron preparations are commonly associated with hypophosphatemia via an increase in iFGF-23. For example, in a pooled analysis of clinical trials, 41% of patients treated with ferric carboxymaltose developed hypophosphatemia, predominantly within 2 weeks of treatment (68, 69).
Supporting the notion that multiple pathways beyond phosphate sensing or HIF1α are involved in FGF-23 upregulation, conditional deletion of HIF1α in osteoblasts (HIF1α/Osteocalcin (OCN)-Cre) does not reduce FGF-23 levels in Hyp mice (64), an animal model of X-linked hypophosphatemia (XLH). XLH in humans is caused by inactivating mutations of the phosphate-regulating endopeptidase homolog, X-linked (PHEX) and is associated with high levels of FGF-23. The physiologic function of PHEX, and how its loss results in increased FGF-23, is not well understood. PHEX has been shown to cleave proteins involved in bone remodeling and mineral balance, such as osteopontin and parathyroid hormone-related peptide (PTHrP) (70), but has not been consistently found to have an important role in FGF-23 cleavage (71, 72).
Klotho is modified at the epigenetic level in renal disease
Klotho, the FGF-23 co-receptor and a secreted protein, was serendipitously discovered in mice with features of premature aging. There are three isoforms of this protein (alpha, beta and gamma) (73). While klotho is mainly expressed in the distal tubule, the FGF-23 actions on phosphate transporters and vitamin D metabolism occur in the proximal tubule. Ablation of klotho in the proximal tubule of mice results in hyperphosphatemia only upon challenge with a high-phosphate diet, consistent with a biologically relevant role of klotho in the proximal tubule (74).
Lack of klotho in humans or mice leads to severe hyperphosphatemia and increased 1,25(OH)2D and calcium levels, similar to FGF-23 deficiency. During CKD progression, αklotho levels decline in parallel with increases in FGF-23 (75). Klotho expression in CKD appears to be regulated through epigenetic mechanisms (76) (Figure 2). For example, mice exposed to uremic toxins have increased levels of DNA methyltransferase 1 (DNMT1), which leads to DNA hypermethylation of the klotho gene and consequently a lower abundance of klotho protein (77, 78).
In mice with folic acid-induced AKI, the activity of histone deacetylases, including HDAC1, was shown to be elevated and correlated with a decrease in klotho expression; histone deacetylation decreases the accessibility of DNA to transcription factors, thereby suppressing expression (79).
Similar to the original descriptions of klotho deficiency leading to premature senescence and hyperphosphatemia, patients with CKD suffer from a shorter life span, vascular calcifications, and bone disorders (80). The contribution of klotho deficiency to these manifestations has been supported by clinical observations and animal models. For example, pharmacologic targeting of deacetylation and methylation increased klotho levels and reduced renal fibrosis in mice with unilateral ureteral obstruction (81).
Vitamin D regulation
Vitamin D is generated in the skin in response to solar ultraviolet (UVB) light in the form of cholecalciferol (D3), and is obtained from the diet via intestinal absorption in the forms of ergocalciferol (D2) from plants and D3 from animal sources (82). Vitamin D is then hydroxylated in the liver to 25-hydroxyvitamin D and subsequently in the renal proximal tubules to the active form, 1,25(OH)2D or calcitriol (83). 1,25(OH)2D increases the absorption of calcium and phosphate in the intestine and calcium reabsorption in the renal distal tubules, and plays a crucial role in the growth and development of bones and teeth. The majority of vitamin D (90-100%) comes from UVB photosynthesis (84), which is modulated by skin color and latitude. The high melanin content of darker skin blocks UVB, producing less vitamin D, whereas the lower melanin content of lighter skin allows more UVB penetration, producing more vitamin D (85). Skin pigmentation thus serves as an evolutionary mechanism to keep vitamin D levels in the body optimal. 1,25(OH)2D deficiency in early childhood can lead to rickets, while in adults it leads to osteopenia and osteoporosis.
At the cellular level, 1,25(OH)2D binds to the vitamin D receptor (VDR) and regulates VDR expression (86). VDR is a nuclear receptor and transcriptional regulator that controls the expression of more than 900 genes. In addition, VDR has calcitriol-independent effects, as shown in the model of alopecia, which cannot be rescued by calcitriol (87). VDR, in complex with the retinoid X receptor (RXR), acts as a ubiquitous transcription factor (83). It activates or represses numerous target genes by binding to vitamin D response elements (VDREs) in their promoters (88, 89). By this mechanism, VDR regulates the expression of genes involved in essential biological processes, including calcium and phosphate metabolism, the cell cycle, organ development and immunity (83). Accordingly, 1,25(OH)2D deficiency has been implicated in cancer development, immunity and infectious diseases (83).
Loss-of-function mutations in VDR cause hereditary vitamin D-resistant rickets (HVDRR) (90). HVDRR is characterized by hypocalcemia, secondary hyperparathyroidism, and severe early-onset rickets. Affected children may also exhibit alopecia. Patients with HVDRR are resistant to 1,25(OH)2D treatment and require high-dose calcium supplementation.
Epigenetic regulation of VDR in acquired diseases
VDR gene expression is driven by four promoters giving rise to 12-14 alternatively spliced transcripts in a tissue-specific manner (91). VDR promoter activity is regulated at the epigenetic level by methylation, acetylation, phosphorylation and sumoylation.
Methylation
DNA methylation of promoter regions usually leads to reduced gene expression. Changes in VDR methylation have been found in many diseases, including cancer, infectious diseases, immune-mediated diseases such as multiple sclerosis, and kidney stones.
Cancer
Methylation of VDR has been shown in different cancer types. Patients with adrenocortical carcinoma were found to have higher methylation of cytosines within CpG islands of the VDR promoter in adrenal glands, leading to reduction of VDR protein and loss of its protective role against malignant growth (92). Similarly, pediatric adrenocortical tumors with high VDR promoter methylation had lower VDR mRNA levels, which correlated with advanced disease and reduced survival (93). Conversely, hypomethylation of the VDR promoter in adrenocortical adenoma tissue correlated with greater differentiation and aldosterone production by those tumors (94). VDR promoter methylation was also shown to be important in acute myeloid leukemia cells, where the DNA methyltransferase inhibitor 5-aza induced VDR expression (95). Combining VDR agonists with hypomethylating agents decreased tumor burden in acute myeloid leukemia mouse models (95).
Considerable data on VDR methylation exist for other cancers as well, although most are observational. For example, epigenetic profiling of primary melanoma identified VDR hypermethylation as important for melanoma progression, associated with worse survival (96) in a VDR-dependent manner (97). Methylation-specific PCR of colorectal cancer tissue versus surrounding healthy tissue showed that hypermethylation of VDR inversely correlates with VDR expression and is associated with tumor stage (98). A lower methylation status of VDR in colorectal cancer tissue correlated with longer overall survival (98). In patients with hepatocellular carcinoma, the percentage of VDR gene promoter methylation was significantly higher than in controls (99).
Infectious diseases
VDR is known to be implicated in several infectious diseases, such as tuberculosis, HIV, COVID-19 and EBV infection. Methylation of VDR has been shown to play a role in some of these conditions, such as tuberculosis and HIV. In children with active tuberculosis, VDR DNA methylation was increased, which was associated with reduced VDR expression and could contribute to increased susceptibility to tuberculosis (100). On the other hand, the VDR promoter was hypomethylated in children with EV71-associated severe hand, foot and mouth disease as compared to healthy controls (101).
HIV-induced hypermethylation of VDR in T cells led to reduced VDR levels, which could mediate T-cell apoptosis (102). HIV-infected podocytes were shown to have increased expression of DNA methyltransferase and, accordingly, increased CpG methylation at the VDR promoter, repressing VDR expression (103).
Immune diseases
There are some examples of the role of VDR methylation in immune-mediated diseases. For example, the cumulative methylation level of all CpG sites in the VDR promoter was significantly reduced in patients with rheumatoid arthritis compared with controls (104). The VDR promoter at exon 1c showed increased DNA methylation levels in T cells from patients with multiple sclerosis compared to controls, together with a 6.5-fold increase in VDR mRNA levels (105).
Miscellaneous
Promoter hypermethylation of two target regions in VDR has been shown to be increased in patients with recurrent kidney stone formation compared with controls (106).
Acetylation
VDR has an important role in the inflammatory response of beta cells in type 2 diabetes. Acetylation of lysine 91 (K91ac) in VDR serves as a docking site for the BAF complex, an ATP-dependent chromatin-remodeling complex important in diabetes (109). Mutation of K91 to alanine (K91A) or arginine (K91R) in the Vdr gene significantly reduced the interaction with the BAF complex, as well as the total acetylation level of VDR. Binding of the BAF complex attenuated VDR activity, while inhibition of the VDR-BAF interaction improved beta-cell survival and function, improving glucose levels in the db/db diabetic mouse model (109).
The effect of histone modification on the response to VDR
1,25(OH)2D3 modulates histone marks of active chromatin at promoter and enhancer regions. Epigenome profiling of human monocytes revealed 550 histone marks of active promoter regions (H3K4me3) and 2473 histone marks of active enhancer regions (H3K27ac) responsive to 1,25(OH)2D3 (110). Further, colocalization of VDR with the transcription start sites of the identified regions highlighted 260 and 287 regions with H3K4me3 and H3K27ac modifications, respectively, located on 59 promoters or enhancers of VDR-responsive genes. In this way, histone modifications epigenetically modulate the effects of VDR (110).
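The colocalization step described here is essentially a set intersection between regions carrying a histone mark and regions bound by VDR. Below is a simplified, bin-based Python version; the coordinates are invented for illustration, and real pipelines intersect exact intervals rather than fixed bins.

```python
BIN = 1_000  # 1 kb genomic bins

def to_bins(regions, bin_size=BIN):
    """Map (chrom, start, end) regions onto a set of (chrom, bin_index) tuples."""
    bins = set()
    for chrom, start, end in regions:
        bins.update((chrom, b) for b in range(start // bin_size, end // bin_size + 1))
    return bins

# Toy peak sets with invented coordinates (not from the cited study)
h3k4me3 = [("chr12", 48_235_000, 48_236_500)]                          # active promoter mark
h3k27ac = [("chr12", 48_235_500, 48_237_000), ("chr3", 1_000, 2_000)]  # active enhancer mark
vdr_sites = [("chr12", 48_236_000, 48_236_200)]                        # VDR binding sites

vdr_bins = to_bins(vdr_sites)
print("H3K4me3 bins overlapping VDR:", len(to_bins(h3k4me3) & vdr_bins))
print("H3K27ac bins overlapping VDR:", len(to_bins(h3k27ac) & vdr_bins))
```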
Histone acetylation usually leads to activation of transcription (111) and is regulated by the interplay between histone acetyltransferases and deacetylases. 1,25(OH)2D3 directly engages VDR coactivators with acetyltransferase activity, increasing marks such as H3K27ac at the promoters of several VDR target genes (112). The class I histone deacetylase inhibitor MS-275 reduced colitis activity in a mouse model of ulcerative colitis; this effect was blocked in Vdr-/- mice, suggesting that HDAC inhibition by MS-275 alleviated colitis by activating VDR (113). Histone methylation can lead to either gene activation or repression, depending on which histone residue is methylated, and is regulated by methyltransferases and demethylases (114). For example, 1,25(OH)2D3 induced the expression of the histone demethylase KDM6B, which demethylates H3K27me3, a histone mark that correlates with gene repression. Overall, histone modification by chromatin regulators shapes VDR-governed gene expression.
Phosphorylation
There are several phosphorylation sites in the VDR protein that are targeted by different kinases. PKC-β was shown to phosphorylate serine 51 of VDR (115). A phosphorylation-resistant serine-to-glycine mutation at this site led to decreased VDR-mediated transcription in response to calcitriol; therefore, phosphorylation of serine 51 by PKC-β could play a role in conditions requiring VDR transcriptional activation. PKA phosphorylates serine 182, which decreases heterodimerization with RXR and thereby transactivation by calcitriol (116). However, an opposite result was found in rats, where PKA was shown to upregulate Vdr transcription in response to PTH (117). Casein kinase II phosphorylates serine 208 (118), and replacement of serine with glycine at this position led to decreased VDR transcriptional activity in response to calcitriol. Consistent with this, the phosphatase inhibitor okadaic acid was shown to increase the VDR response to calcitriol (119). ATM (ataxia-telangiectasia mutated) kinase, a DNA-damage response kinase, was shown to phosphorylate serines 208 and 222 of VDR; mutation of these sites impairs the effect of ATM on VDR transactivation activity (120). Calcitriol induces ATM in a positive feedback loop, which might suggest a positive role of VDR in carcinogenesis.
Sumoylation
Sumoylation is the attachment of small ubiquitin-like modifier (SUMO) proteins to lysine residues of transcription factors, which modifies their activity. It was shown that protein inhibitor of activated STAT 4 (PIAS4) sumoylates VDR with SUMO2 and inhibits its transactivation (121). The same group subsequently identified the sentrin/SUMO-specific proteases 1 and 2 (SENP1 and SENP2) as able to reverse SUMO2 binding to VDR (122), and identified lysine 91 as a likely VDR sumoylation site. It is not certain whether VDR can be glycosylated: there is one in vitro study showing O-GlcNAcylation of VDR in THP-1 cells and in human macrophages, without correlation to downstream signaling or physiologic conditions (123). In conclusion, VDR is finely regulated at multiple post-translational levels at baseline and in disease, which could potentially provide targets for new treatments.
Concluding remarks
Mineral metabolism hormones are tightly regulated at multiple levels: transcription, post-translational modification, secretion and receptor interaction. While genetic regulation is reasonably well understood, epigenetic regulation has not been investigated as thoroughly. Many of the epigenetic studies are based on correlations, opening the field to mechanistic studies and possible pharmacologic or genetic interventions. Such work could yield novel therapeutics in mineral metabolism and beyond, for example in modulating the cell cycle and energy metabolism.
Author contributions
IP and PS conceived the framework and main text of this review article. IP and PS wrote the draft and reviewed the manuscript. All authors contributed to the article and approved the submitted version.
Funding
This article was funded by NIH Grant #1K08DK124568-01.
Conflict of interest
The authors declare that the research was conducted in the absence of any commercial or financial relationships that could be construed as a potential conflict of interest.
Publisher's note
All claims expressed in this article are solely those of the authors and do not necessarily represent those of their affiliated organizations, or those of the publisher, the editors and the reviewers. Any product that may be evaluated in this article, or claim that may be made by its manufacturer, is not guaranteed or endorsed by the publisher.
Islamic Religious Literacy Practice And Ideology: A Case Study On Two Religious Student Organizations of Public University In Lombok Island
Religious literacy is related to certain ideological motives that direct literacy practice. This study presents the relationship between religious literacy practice and ideology at a public university, by means of a case study in the city of Mataram, Lombok island, West Nusa Tenggara. The study is based on two Islamic student organizations (groups A and B) at a public university in that city. It finds that group A cultivates mainly a culture of listening in its religious literacy, so its literacy is passive, while group B practices an active literacy: it cultivates not only a culture of listening but also a culture of interpreting, as indicated by its academic writing programs on the Qur'an. These literacies can be traced back to the ideological base and the patron that each group has adopted.
Introduction
An opinion piece in one of the Indonesian national newspapers has questioned the importance of developing religious literacy in order to promote a more tolerant, contextual and peaceful understanding of religion [1]. Such a view is well intentioned, but it risks oversimplifying the problem, as if religious literacy alone would solve all religion-related problems and make religious life peaceful. A more critical perspective holds that any practice of literacy, including religious literacy, is tied to certain ideological motives that govern that practice [2]. Therefore, what is read, what is written, and how reading and writing are conceived by an individual or a group are related to the ideological motives that the individual or group has internalized.
That relationship is revealed by Brandt [3], who shows that reading the Bible obediently was the core activity of Sunday school in the 19th century, and that a dichotomy between reading and writing has persisted ever since. The dichotomy arose because reading and writing construct different paradigms. Reading is considered an activity of internalizing religious values through the reading of God's revelation, while writing is considered an activity of reconstructing the mind; as a result, writing is regarded as a profane activity in comparison to reading. Furthermore, reading is considered the more sacred activity because it is identified with obedience, while writing tends to be performed in contexts outside the church, such as trade, office work and politics. This "secularization" of writing relative to reading marks the literacy activity that churches sponsored and that was eventually mass-reproduced through school learning. In brief, reading is taken as a sign of obedience while writing is taken as a sign of criticality.
Like literacy practice in general, which relates to educational practice [4]-[6], religious literacy practice relates to religious educational practice, and religious education is never free of ideology. This relationship is found in a study by Saparudin, who analysed the relationship between ideology and educational practice in several Islamic educational institutions [7], [8]. Long before that, Ahmad Shalaby, as quoted by Saparudin, suspected the presence of an ideological-theological motivation behind the establishment of the first Islamic school, the Nizamiyah [7]. According to Shalaby, the school was established to socialize Sunni ideology as an effort to prevent the influence and dissemination of Shia ideology. A similar argument has been proposed by Safi [9], who shows the relationship between the ideology of the ruler and the production of Islamic knowledge in the Seljuk dynasty.
A more recent study of literacy practice in another religion was conducted by Rackley, who investigated religious literacy practice in two different Christian communities [10]. Although he uses the term "culture" to describe the different religious literacy practices of the two communities among young Americans, the intention of each practice is to establish a worldview or faith that distinguishes the two communities. The first group, the Methodists, practises a literacy shaped by a "culture of interpretation" of the Bible, because its members actively construct the meaning of their religious literacy through discussion and interpretation. The second group, the Latter-day Saints, practises religious literacy through a "culture of listening" to the Bible, because its members passively and literally accept the meaning of their literacy from what is written in the Bible; their religious literacy practice consists of reading and memorizing, activities considered more related to faithfulness and obedience. Based on these results, it might be argued that the first group's practice of religious literacy is more critical-substantial while the second group's is more formalist-doctrinaire. It might thus be concluded that literacy practice always involves the ideology or faith internalized by the individuals or groups who practice the religion.
However, studies exploring the relationship between religious literacy and ideology, especially in the Indonesian context, have not been conducted intensively, even though such studies are important for identifying patterns of religious literacy practice along with their motives. The present study pursues this gap and displays the results of an investigation into the relationship between religious literacy practice and ideology in a public university.
Investigating religious literacy practice in a public university is important and interesting. Universities are places where discourses of religion and faith are discussed and influence the religious discourse of the surrounding community. Public universities are places where people of different faiths, religions, social backgrounds, cultures and political views meet and learn from each other. Although these people occupy a local area within their universities or regions, they are connected with national and global problems, including those related to religion and nation [11].
The religious literacy practice discussed in this study is the one performed within communities of Muslim university students, or Muslim student organizations; in other words, it is the practice of Islamic literacy. Islamic literacy practice has been selected because it is more prominent than the literacy practice of other religions in this setting. It can also serve as a model of how religious literacy practice may relate to ideology for any community that performs religious literacy.
The activities of Islamic preaching and education in public universities are often seen as the activities of a community known as the Islamic Campus Preaching Institution (LDK, Lembaga Dakwah Kampus) or other institutions of a similar kind. Studies of such institutions usually concern the activities they perform, the books and literature they read, their affiliation to transnational Islamic ideologies or schools of thought, and their potential radicalism. These studies have not sought to appreciate the presence of other groups that also operate within the activities of Islamic preaching, or, specific to this study, within the activities of Islamic literacy, nor the similarities and differences among the existing groups. Moreover, they have rarely targeted other Indonesian regions, since most focus on Java, home to several well-known Indonesian universities [12]-[17]. The situation in other regions, such as the Province of West Nusa Tenggara, has not been clearly mapped. This study therefore offers another means of viewing Islamic preaching activities, through the perspective of literacy practice, in two different communities within the same university.
Theoretical Frameworks
Literacy practice refers to a set of literacy events that occur repetitively or follow a certain pattern. The study of ideology in literacy practice is generally divided into two contrasting models of literacy: the "autonomy model" and the "ideology model." The autonomy model is frequently used in reference to the "literacy thesis" proposed by Jack Goody, as quoted by Dewayani and Retnaningdyah [2], namely that literacy is an independent variable that influences the cognitive and social capacities of an individual. The autonomy model views reading and writing as a neutral and context-free process whose main objective is to achieve a state of "literacy" within a community, which encourages teaching literacy as a set of skills for arranging and discerning texts. The autonomy model is frequently invoked in education and psychology, which tend to view literacy as a cognitive process, and it teaches certain mechanistic strategies so that students will understand texts and write their opinions articulately [2].
This model, which tends to neglect the subjectivity of the individual, was later criticized because it is considered to obscure the ideological and social factors that shape literacy practice. An anthropological study by Brian Street of literacy practice in Iran in the 1970s found three kinds of literacy practice: the literacy of Qur'anic schools, the secular literacy of formal schools, and the commercial literacy of fruit traders. Street concluded that the process of reading and writing (education in the broad sense) occurs in a context of power relations that places human beings in different positions within society. This model is called the ideology model, and it has become more prominent as this view has developed within society [18].
Based on the above, the present study relies on the ideology model as the theoretical framework for viewing literacy practice in two religious communities. Under this model, literacy is understood not merely as a set of mechanistic reading and writing skills but as a social practice that accommodates the values, cultural experiences and ideological interests believed to influence the interaction between an individual and a text. The ideology model does not neglect the important role of literacy in the cognitive and social transformation of an individual; rather, by viewing literacy more contextually, it can disclose aspects such as power relations that encourage or inhibit that transformation. The present study therefore investigates the relationship between religious literacy practice and the ideology that may influence it. Ideology here is understood in accordance with Street's definition: Street does not explain the meaning of ideology in his literacy model explicitly, but he implicitly states that the ideology model originates from the conceptions of knowledge, identity and presence of the people who practice literacy [18].
Street's point of view is in line with Eagleton's view of ideology. According to Eagleton, ideology is the construction of meanings and values juxtaposed with certain interests that are relevant to social power [19]. Ideology, as defined by Charlene Tan [20], refers to the beliefs that become the framework of self-definition and of one's relationship with the world, directing the life of individuals and groups. These beliefs include values, habits, norms and other elements that form ideology. Ideology might therefore be defined as the values believed to be relevant as social power [7].
Religion and ideology are frequently associated with each other, and this association gives rise to religious ideology. Ideology itself is not a religion, and vice versa; however, ideology often draws on religious ideas, and political ideas (social power) frequently look for their origins in religious ideas [21]. The relationship between ideology and religion starts from a theological belief that develops into a communal association and then becomes institutionalized in ideological socio-political movements. Religious ideology can thus be viewed in terms of several elements. First, there is a system of understanding based on religious teachings, whether in the form of schools of thought or of the concepts of certain figures; at this stage, religiosity has not yet evolved into ideology. Second, the beliefs and understandings of a certain religion are turned into a foundation for the justification or legitimation of action, both internally and externally; at this stage there is a shift from mere religious understanding toward an ideology that enables internalization, identity formation and claims of truth. Third, there is a desire to strengthen existence; at this stage that desire becomes dominant, grows, and manifests in multiple strategies and media. This framework has been used in studies by several scholars, such as [7], [9], [22], [23].
The relationship between religious ideology and Islamic educational processes and institutions has been studied by several scholars. Ahmad Shalaby, as mentioned by Saparudin, suspected the presence of ideological motivation within the educational process of Madrasah Nizamiyah. In modern times, several studies have examined the association between religious ideology and education (educational processes and institutions), such as those by [20], [24]-[27].
The elaboration of the association between ideology and educational institutions leads to an understanding of the relationship between ideology and literacy practice, as explained above. Literacy practice cannot be set apart from education, because the implementation of education is itself, at root, a practice of literacy. Therefore, speaking about Islamic education cannot be separated from discussing the practice of Islamic literacy. The practice of Islamic literacy, as part of religious literacy practice, has certain characteristics, but this fact does not diminish its association with the processes of Islamic education pursued by various religious groups or communities.
The characteristics of Islamic literacy as a practice of religious literacy are as follows. First, the practice of Islamic literacy is centred on the text (where the definition of text might be expanded); the text might be a sacred text such as the Qur'an or a religious text resulting from religious thought and contemplation. Second, the texts that are the focus of Islamic literacy practice may be used from one generation to another. Third, the sacred religious texts become part of religious rituals. Fourth, the religious texts, both sacred and profane, become part of individual and collective identity [28]. The practice of Islamic literacy therefore means literacy practice that relates to the primary texts (the Qur'an and Hadith) and the secondary texts of Islamic teachings that become its main references; these texts become part of religious rituals and part of individual and collective identity. In this last aspect, the association between ideology and education becomes apparent.
Research Method
Given the four characteristics of Islamic literacy practice and its association with ideology, a case study design was selected, because the study was conducted at a single site. The location of the study is the biggest state university in the Province of West Nusa Tenggara. It is not a religion-based university but a public one, located in the centre of the city in the Province of West Nusa Tenggara.
By selecting a public university, the study is expected to describe how public universities in the central part of Indonesia hold the practice of religious literacy. The study focuses on university-level student groups whose main activities lie in the domain of Islam, involving two internal Islamic organizations. These two groups were selected because they are structurally under university (rectorate) supervision and are supervised by a lecturer or a group of lecturers assigned to coach them.
As a guideline for data gathering, the study refers to the four criteria of religious literacy proposed by Rosowsky [28]. The study uses multiple data sources: documentation, interviews and observation. In addition, it employs focus group discussion (FGD), performed at the beginning of the study in order to obtain preliminary and overall information from the informants or participants under investigation. Interviews were then conducted, along with documentation and observation. The interviews were semi-structured, though on certain occasions unstructured interviews were conducted as well so that informants would be more comfortable in sharing their information.
The interviews were conducted to obtain data on the religious views of the informants along with the reasons behind their selection of certain religious texts. They were conducted with the caretakers of the two organizations as well as with their coaches and student members. The interviews were supported by documentation, namely of the religious texts used by the two religious groups; these texts became the textual elements of the study. The study intends to identify the ideology or "system of faith" hidden within the texts; in other words, its main objective is to identify the hidden meanings and values that might not be explicit on a first reading. The interviews and the text (document) study were useful for eliciting information on how texts can become part of collective identity and group ideology. The text study was followed up by observation of the activities of activists from both groups, intended to identify the patterns of their activities and to see how texts are put to use in the sequence of their actions.
There are two religious organizations. They differ in the focus of their activities and rituals, but both are intra-university organizations at this public university: Lembaga Dakwah Kampus (LDK, the Campus Islamic Preaching Institution) and Pusat Studi Alquran (PSQ, the Centre for Qur'an Study). LDK appeared first, in 1987, followed by PSQ in 2006.
The emergence of LDK in 1987 was related to the rise of the tarbiyah movement on Indonesian public campuses in the 1980s. As investigated by Rof'ah [29], the general characteristics of the LDK organization on this campus are similar to those of LDK organizations at other universities throughout Indonesia. The activities it holds include weekly halaqah, mentoring sessions for new students, and recruitment or regeneration, as well as celebrations of religious festival days, gathering humanitarian aid for Rohingya and Palestine, and gathering aid for disaster victims. Interviews with LDK activists show a synergy of activities between the LDK organization at this university and those on numerous campuses in Mataram and even throughout the Province of West Nusa Tenggara. This synergy is made possible by the Friendship Forum (FSLDK, Forum Silaturahim Lembaga Dakwah Kampus) in the Province of West Nusa Tenggara, and is encouraged by the small size of the LDK community in the province: the number of LDK organizations is so small that all the LDK organizations in Mataram would like to merge into a bigger community.
As at other universities throughout Indonesia, the role of the LDK organization in learning activities within the campus is also apparent. LDK activists assist in the mentoring sessions for all Muslim students; this activity, known as AAI (Asistensi Agama Islam, the Assistance of Islam), is conducted mainly for new students. The mentoring unit within the structure of the LDK organization on this campus is semi-autonomous, and is thus named Badan Semi Otonom Mentoring Agama Islam (the Semi-Autonomous Body for Islamic Religious Mentoring). The mentors in this unit are assigned to test and guide new students' ability to read the Qur'an, because Qur'an reading is part of the enrolment test; this activity is usually conducted at the beginning of the year or semester. In other words, all Muslim students who enrol at the university should be able to read the Qur'an, and those who cannot yet do so are guided by LDK mentors. In addition, tahsin activities are provided for new students until they are considered to have a good command of Qur'an reading.
Judging from the guidebooks of the mentoring sessions, the "Tarbiyah" concept in general constitutes the ideological style of the LDK organization. This Tarbiyah style can be traced through the ustadz who lecture in the mentoring activities; they are known as Tarbiyah ustadz, after the slogan "You should join Tarbiyah or you will be in trouble." As recorded on the LDK organization's website, one of the ustadz is famous for that motto. The Tarbiyah style of Islamic preaching also appears in the study of Tahqif, an acronym of Tarbiyah and Tsaqifiyah, an Islamic study disseminated to the preaching mentors. Interestingly, there also seems to be an effort to change the impression given by the LDK mentoring activities, as seen in the renaming of Mentoring Agama Islam (MAI, Mentoring Sessions of Islam) to Bina Pribadi Islam (Mentoring of Islamic Personality). According to a lecturer who supervises the LDK organization, the change was pursued in order to eliminate the "negative" impression of LDK activists.
In addition, still according to the lecturer, terms associated with exclusiveness such as ikhwan, akhwat, akhi and ukhti have been replaced by "brother" or "sister".
In addition to LDK, religious activities are also performed by PSQ. The establishment of the PSQ organization was initiated by lecturers of Religious Education. According to a bulletin issued by PSQ, the organization serves to develop students' love of, interest in, engagement with and study of the knowledge of the Qur'an, based on sound scholarly principles. In its latest caretaking period, PSQ adopted the slogan "Being the accelerator of the Qur'an's popularization among the university community." PSQ's activities revolve around training programs in Qur'an reading, calligraphy and Qur'anic study, including tilawatil Alquran, tartil Alquran, tahfid Alquran, calligraphy and papers on the Qur'an. These training programs are open to all students at the university, but in practice most participants are PSQ members. In addition to the Qur'an-related activities, PSQ also holds readings of shalawat, both the Al-Barzanji shalawat and other shalawat read in the community, usually performed in the evening before the scheduled tilawah and tartil exercises. The tradition of shalawat reading is not found on the list of LDK activities: while tilawah and tahfid can be found in both organizations, shalawat reading is found only in PSQ.
Another activity that LDK does not hold is the writing of papers on the Qur'an (KTI-Q, Karya Tulis Ilmiah Alquran). This activity is part of the effort to internalize Qur'anic values through studies that might benefit the community. LDK does have a department named the Division of Media, whose products are bulletins or wall magazines displayed in the mosque; however, these products are of a more popular kind and are not based on the specific study that PSQ activists perform.
Several differences between the two organizations should be noted. First, in terms of relationships among members, PSQ seems more open (inclusive), because male and female caretakers and members are allowed to meet in the same room, whereas LDK does not hold such meetings. Second, in terms of organizational structure, LDK maintains a clear separation between the male coordinator (ikhwan) and the female coordinator (akhwat), while PSQ tends not to separate them. Third, in terms of shalawat reading, LDK reads shalawat without musical instruments while PSQ does so with them. Fourth, in terms of ustadz, those who lecture at LDK tend to be affiliated with certain Islamic political parties or with the Salafi movement, while those who lecture at PSQ tend to be affiliated with the Islamic boarding schools within the networks of either Nahdlatul Wathan or Nahdlatul Ulama.
These four characteristics distinguish the two organizations by ideological affiliation. As frequently mentioned in previous studies, the Tarbiyah ideology is the most prominent among LDK activists on the campuses of several universities throughout Indonesia [30]-[32]. The Tarbiyah movement, or Tarbiyah Pilgrims, has been visible since the 1980s; its appearance was inspired by the Islamic thought of Hasan al-Banna (1906-1949), the founder of the Ikhwanul Muslimin movement in Egypt [31]. Since the movement entered Indonesian public universities, it has developed very rapidly.
Ikhwanul Muslimin emerged as a phenomenal movement in the Middle East. From Egypt, the movement spread to Syria, Sudan, Jordan, Kuwait and other Gulf countries, thus forming the main pan-Islamic Arab network. The group's recruitment relies on a cell system, which enables rapid growth.
Halaqah and daurah are held in houses, mosques, campuses, and both open and closed venues. These activities are known as usrah (group/family); each group consists of 10 to 20 members under the leadership of a murabbi (instructor/mentor). It is this activity that inspires the mentoring activities of LDK organizations at several universities [33].
Ikhwanul Muslimin develops Islamic preaching, or dakwah, as the framework for its struggle. Hasan al-Banna, through his slogan "Islam is constitution," asserted that Islam is a complete and all-encompassing system in which every single thing is clearly explained in the Qur'an, whose moral principles are believed to be universal. The movement developed even more rapidly with the appearance of Sayyid Qutb, who later became its great ideologist. He wrote a book entitled Ma'alim fi al-Tariq, which became a classic reference for Islamists throughout the world. The book asserts the presence of a Jahiliyah Age, or Age of Darkness, resulting from the abandonment of Sharia. For Qutb, Muslims could succeed in turning the direction of history and taking power back from the West by following the model of the salaf al-saleh. Qutb described these people as the outstanding Qur'anic generation who stayed consistent on the path of dakwah by standing on their belief in tauhid [33]. In sum, the ideology of Tarbiyah emphasizes the doctrine of being kaafah (all-encompassing), literally internalizing Islamic teachings throughout all aspects of life, including the affairs of the state (Ali, 2012, p. 73). One excess of this ideology has been the appearance of factions that tend to be more radical than Ikhwanul Muslimin itself.
In general, the Ikhwanul Muslimin movement in Indonesia follows two paths. The first is the path of Tarbiyah/Pilgrims, which concentrates on preparing cadres at the grassroots level who will form the most solid mass base. In preparing these cadres, Tarbiyah/Pilgrims relies on halaqah. The Islamic study materials developed in halaqah include, for example, guidelines for the development of Islamic cadres and the management of tarbiyah activities; these materials are compiled in a guidebook named Manhaj Tarbiyah Islamiyah. The second path is that of Siyasah/Party, which serves as the political actualization of the Islamic cadres: the ideas developed on the Tarbiyah path are translated into the field of party politics, and the political party is considered an extension of the dakwah strategy. Beyond these two main paths, others may be employed. The path of tarbiyah amaliah, for example, relates to business activities. Other Tarbiyah paths have developed among professional workers through the formation of small groups, usually found in both public and private offices, whose activities generally revolve around the religious affairs held by the caretakers of the office mosques. There is also a tarbiyah movement in the military domain, known as tarbiyah askariyah or the paramilitary path, the military wing within the Pilgrims environment; its simplest form is the liqa (meeting), in which cadres join physical and martial arts exercises [34].
Referring to these paths of activity, the activities of the LDK organization can usually be placed within the Tarbiyah/Pilgrims path. Since the movement was brought in from another country, the Tarbiyah movement may be described as transnational and exclusive. Based on the findings of this study, the term exclusive applies because the Tarbiyah movement displays religious attitudes and social relationships that tend to be closed. In an interview, an ex-activist of the LDK organization told how strict the rules she had to abide by were.
At that time, I did not have a motorcycle. Consequently, every time we had an activity I had to take a free ride with friends who had one. It so happened that all of my friends with motorcycles were male. So I went along with them, but my seniors there prohibited me from taking a free ride with a male student who was not my muhrim (a close male relative or spouse). It was better for me to leave the LDK Organization.
Although I have resigned from the LDK Organization, my seniors there still contact me and ask me to return (12/3/2018).
Indeed, not all students who join LDK activities are aware of the map and ideology of the movement; however, the interviews with senior members of the LDK organization make it apparent that these senior members adopt a certain ideology. In addition, the ideological style adopted is apparent in a number of the texts they refer to when holding various activities or studies.
Unlike the LDK organization, which tends toward a Tarbiyah/Ikhwan-style ideology, the PSQ organization is more open. Based on the observations and interviews conducted, PSQ activists can be said to have a more "traditional" character, although they are more open (inclusive), especially in terms of relationships among members and of the religious studies they propose.
The term "inclusive-traditionalism" attached to the PSQ Organization here means that its activists display greater obedience to a teacher, tuan guru or kyai, based on the doctrine sami'na wa ata'na (we listen and we obey), while at the same time remaining open in their social relationships and even to ideas beyond those they themselves hold.
This traditionalistic attitude is also reflected in the numerous rituals that the members of the organization perform, just as the activists of Nahdlatul Ulama (NU) and Nahdlatul Wathan (NW) do, such as tahlil and salawatan. Indeed, most PSQ activists are affiliated with these two organizations. Nevertheless, the PSQ Organization is open to any university student who wants to join its activities, regardless of background; some of its female activists even wear the face veil (niqab).
The PSQ Organization's inclination toward the two organizations is apparent from the activities it holds. Its activities, such as study sessions and workshops, are frequently held in Islamic boarding schools that are, to a certain degree, affiliated with those organizations. According to several scholars, such as van Bruinessen, NU is a traditionalist Islamic group whose paradigm is nonetheless as progressive as that of the modernist Islamic groups [35]. NU's traditionalism is apparent in rituals such as tahlil, selametan, and the communal reading of salawat, which modernist Islamic groups consider heresy. These rituals are also practiced by PSQ activists.
A similar situation occurs in the NW Organization, which is likewise considered traditionalist. In addition to practicing tahlil, selametan, and salawat reading, NW forms its identity through the regular reading of the wirid and hizib of Nahdlatul Wathan [7]. Furthermore, both NW and NU cultivate obedience toward the teacher, known as guru or kyai. Despite these similarities, NW differs from NU in its stance on the Syafii school of thought: NW declares in its Articles of Association/Bylaws that it follows the Syafii school alone [36], while NU declares that it follows all four schools of thought in Fikih.
NU and NW also share a similar political orientation in their ideology. NU takes a moderate orientation, the middle way, in politics; in other words, NU can be considered accommodative in the political domain [37]. The Tarbiyah movement, by contrast, defends Islam as a state ideology. This stance places the Tarbiyah movement vis-à-vis the state: all aspects of life not based on Islamic teachings are regarded as illegitimate.
The explanation above shows that both university-student organizations appear to operate routinely in the religious domain, yet in practice they hold different ideologies and points of view. These different ideologies and viewpoints in turn lend a distinctive style to each organization's literacy practices and choice of literature.
The Literacy Practices
The two different ideologies set the literacy practices of the two campus religious groups (LDK and PSQ) apart; despite the differences, the groups still share several similarities, which will be discussed first. Both organizations have figures who determine the organization's course, implying that both are built on a kind of patron-client relationship. The one difference lies in which figures serve as each organization's role models. In other words, the practice of literacy in both groups still centres on a particular figure, with a far stronger "listening" culture than "discussing" culture and very little "interpreting" culture. Moreover, the "listening" culture is shaped by the style or orientation of the role model in each organization.
Given the patron-client relationship in both organizations, it is apparent that their literacy practice is one sponsored, to borrow Brandt's term, by certain figures, with implications for the selection of literature (reading and writing) in both organizations. Since the sponsors of the literacy practice are figures bearing a particular religious ideology, the literacy practice of each organization is conducted according to the "sponsor's order" implied by those figures. This relationship is illustrated in the following figure. As explained previously, there are differences between the LDK Organization and the PSQ Organization. Both organizations internalize a "listening" culture, but the PSQ Organization also develops a literacy practice that places greater emphasis on an "interpreting" culture. Although the products of this literacy practice fall short of the expected quality, there are at least efforts to develop the "interpreting" culture. The "interpreting" culture in literacy practice is a characteristic of active or productive literacy. Manifested in the form of papers, it marks the stage of literacy creation the students have reached in solving the problems they encounter through their study of the Koran. This culture is evident from the titles of the scientific papers themselves, which may be regarded as combining "passive literacy" and "active literacy." Passive literacy emphasizes the skills of absorbing information and knowledge, while active literacy emphasizes the skills of creating them. This conception is borrowed from the concept of media and information literacy.
Media and information literacy refers to the competencies that reflect an individual's capacity to access, evaluate, use, and create media in numerous forms critically and ethically [38]. If accessing, evaluating, and using belong to passive literacy, then creating belongs to active literacy. It can therefore be concluded that scientific papers are the actualization of active literacy among the university students who are members of the PSQ Organization; because these papers belong to active literacy, the "listening" culture shifts slightly toward the "interpreting" culture in the practice of Islamic literacy. Members of both organizations do read religious texts, but these texts are received and understood passively, without any effort to construct the meaning the texts carry. Therefore, to strengthen the conception of the listening culture proposed by Rackley, the present study uses the term "passive literacy" for this practice of religious literacy, since in reality both organizations do not only "listen" to but also "read" the religious texts, albeit with a passive attitude.
The culture of interpretation, in Rackley's conception, is practiced by the PSQ Organization alongside the listening culture. This culture of interpretation is less apparent in the religious literacy practice of the LDK community. As a result, the interpreting culture is still practiced together with the listening culture. More interestingly, the interpreting culture is practiced by the PSQ Organization, which is ideologically the more traditionally oriented of the two (NU and NW).
What differentiates the PSQ Organization from the LDK Organization is, above all, the patron or sponsor behind each organization's literacy practice. The patron of literacy is the role model and central figure of a community's literacy practice. The literacy practice of each organization therefore depends heavily on the patron of literacy who serves as its role model. The listening culture and the interpreting culture are merely extensions of the role the patron of literacy plays, and that role is in turn shaped by the ideological orientation of the figure who serves as the role model.
The strong role of the patron can be associated with the tradition of Islamic education that has developed both in the Middle East and in Indonesia. It is the patron's role that links ideology to the practice of literacy. As noted above, the ustadz invited to Islamic literacy activities display an obvious polarization: ustadz from the Tarbiyah movement frequently appear in the slogans of LDK Organization activists, while ustadz from the Islamic boarding school tradition are frequently engaged in the activities held by the PSQ Organization. This is interesting given that the patron is not always part of an intervention by the university that fosters the two organizations. As this study found, the coaches of the LDK Organization are affiliated with the NW Organization, yet in practice many of the ustadz involved in LDK activities come from the Tarbiyah movement. By comparison, the coaches of the PSQ Organization are affiliated with the traditionalist movement, and the ustadz engaged in its activities likewise come from the traditionalist movement.
Conclusion
Using an ideological and contextual perspective, the present study proposes that the practice of Islamic literacy among religious communities is related to ideology, and that this ideology is in turn related to the patron of each organization. The pattern of literacy practice in both organizations under study has characteristics that point to the religious ideological base adopted by the patron and the activists of each organization.
The present study found that the LDK Organization gives more weight to the listening culture in its literacy practice; as a result, the LDK Organization displays passive literacy.
By contrast, the study found that the PSQ Organization attends to the interpreting culture in addition to the listening culture; evidence of this can be traced to the scientific papers on the Koran composed by PSQ members, and it may therefore be concluded that the PSQ Organization displays active literacy. In sum, this difference can be traced back to the ideological base each organization has adopted: the LDK Organization adopts an exclusive-transnational ideology, while the PSQ Organization adopts an inclusive-traditional ideology.